Industrial Robots: Design, Applications and Technology 1536177792, 9781536177794

"Industrial Robots: Design, Applications and Technology is an essential reference source that explores the fundamentals …"


English Pages 461 [463] Year 2020



Table of contents :
Contents
Preface
Historical Development Of Robots
Robots: Today And Tomorrow
Chapter 1
The Application of Robots in the Industry
Abstract
Introduction
The Classification of Robotic Systems
The Application of Industrial Robots in the World
The Application of Industrial Robots in the Top Five Countries in the World
The Application of Industrial Robots in China
The Application of Industrial Robots in Japan
The Application of Industrial Robots in the Republic of Korea
The Application of Industrial Robots in the USA
The Application of Industrial Robots in Germany
Conclusion
References
Chapter 2
Industrial Robot Systems and Classifications
Abstract
Introduction
Mechanical Arm
End Effector
Teaching Pendant
Controller
Actuators
Sensors
Industrial Robot Specification and Terms
Work Space
Number of Axes
Payload
Coordinate Systems
Tool Center Point (TCP)
Accuracy
Repeatability
Velocity
Mechanical Structures of Robots
Cartesian Structure
Cylindrical Structure
Spherical Structure
Articulated Structure
Selective Compliance Articulated Robot Arm – SCARA
Parallel Robots (Delta)
Conclusion
References
Chapter 3
Sensors in Robotics
Abstract
Introduction
Division of Sensors
Sensors of Internal State
Sensors of Position and Movement
Incremental Measuring Encoders
Sensors of Speed
Tachogenerator
Sensors of Deflection
Piezoelectric Sensor
INS (Inertial Navigation System)
Geomagnetic Sensors
Sensors of External State
Tactile Sensors
Magneto-Resistive Touch Sensor
Sensors of Force and Moment
Application of Strain Gauges with Sensors of Moment/Force
Six-Component Hand Force Sensors
Proximity Sensors
Inductive Proximity Sensors
Ultrasonic Proximity Sensors
Optoelectronic Sensors
Sensors of Vision
Application of Vision Sensors in Robotics
3D-Vision Sensors
Conclusion
References
Chapter 4
Robotic Vision
Abstract
Introduction
History of Robotic Vision
Basic Elements of Machine Vision System
Lenses
Deriving the Lens Equation
Image Construction
Lighting
Diffuse Front Lighting
Directional Front Lighting
Polarized Lighting
Coaxial Lighting
Structural Lighting
Backlighting
Cameras
CCD (Charge Coupled Devices) Sensors
CMOS (Complementary Metal Oxide Semiconductor) Sensors
Frame Grabbers
Conclusion
References
Chapter 5
3D Robot Vision in Industrial Applications
Abstract
Introduction
Sensors for 3D Vision
Pinhole Camera
Pinhole Camera Calibration
Depth Perception Camera
Time-of-Flight Camera
Projected-Light Camera
2D and 3D LiDAR Sensors
Stereo-Vision Camera Systems
Applications of Depth Sensing in Robotics
Volume Measurement
Path Planning for Industrial Robots
Fusion of Depth Cameras and Other Sensors
Conclusion
References
Chapter 6
Robot Actuators
Abstract
Introduction
Pneumatic Actuators
Hydraulic Actuators
Linear Cylinders
Rotary Engine
Manifolds
Servo Valves
Servo-Regulated Hydraulic Systems
Electric Actuators
DC Engines
AC Engines
Stepper Engine
Solenoids
Harmonic Actuator
Conclusion
References
Chapter 7
Kinematics and Dynamics of Robots
Abstract
Introduction to Kinematics of Robots
Matrix of Transformations
Point Coordinates Transformation from One Coordinate System to Another
Special Cases of Transformation
Translation Matrix
Matrices of Rotation and Extended Matrices Of Rotation
Free Vector Transformation From One To Another System
Inner and Outer Coordinates
Inner and Outer Coordinates Relation
Direct Kinematic Problem Solution
Inverse Kinematic Problem Solution
Analytic Methods Procedures
Introduction to Dynamics of Robots
Kinematics Prerequisites for Newton-Euler Method
Newton – Euler Method
Outer Iteration
Inner Iteration
References
Chapter 8
Collaborative Robots: Overview and Future Trends
Abstract
Introduction
Collaborative Robots
The Cobot Big Challenges
Flexibility and Adaptability
Dexterity and Task Complexity
Sensitivity and Practical Experience
Types of Collaborations with Humans
Interaction Implementations Modes with Cobots
Safety Guidelines for Cobots
Safety vs Performance
Design Considerations for Future Cobots
Weight Reduction
Sensitive Joints Design
Dual Encoder Design
Joint Motor Current Monitoring
Force/Torque Feedback Sensors
Mechanical PFL
Sensoric System
Sensitive Skin
Capacitive Skin
Vision System
Safety Cameras
Multi-Purpose Cameras
Programming Modes
Hand Guiding and Teaching
The Security Concern
The Cybersecurity Solution for Collaborative Robots
Artificial Intelligence in Cobots
Industrial Applications
Use Case 1 – Electronic Panels Assembly
Description
Challenges
Adopted Solution
Outcome
Use Case 2 – Domestic Appliances Assembly
Description
Challenges
Adopted Solution
Outcome
Use Case 3 – Food Products Packaging
Description
Challenges
Adopted Solution
Outcome
Acknowledgments
Conclusion
References
Chapter 9
Artificial Intelligence Drives Advances in Human-Robot Collaboration
Abstract
Introduction
Advanced Forms of Human Robot Collaboration Applications
Assembly Applications
Picking and Packaging
Lifting and Transportation of Heavy Parts Using Industrial Exoskeleton
Other Applications
Digital Twin for Human Robot Collaboration
Artificial Intelligence in Human-Robot Interaction
AI in Decision Making and Control for Human Robot Collaboration
Fuzzy Logic Decision Making and Control
AI Algorithms for Classification in Human-Robot Collaboration
Optimization Techniques in Human Robot Collaboration
Genetic Algorithm (GA)
Particle Swarm Optimization (PSO)
Multi-Objective Optimization
Conclusion
References
Chapter 10
The Study on Key Technologies of Collaborative Robot, Sensorless Applications and Extensions
Abstract
Introduction
Related Work
Study of Friction Model
Study of Collision Detection
Study of Kinesthetic Teaching
Study of Compliant Behavior and Force Control
Robotic Dynamic Model and Parameter Identification
Robotic Dynamic Model
Output Torque Estimation
Current-Based Torque Estimation
Double Encoders-Based Torque Estimation
Friction Model for Collaborative Robot
Friction Model for Current-Based Torque Estimation
Velocity Dependence of Friction Model
Temperature Dependence of Friction Model
Load Dependence of Friction Model
Comprehensive Friction Model
Double Encoder-Based Estimation
Basic Applications of Dynamic Control
Collision Detection
Study of Collision Process
Principle of Collision Detection
Validation Experiments
Kinesthetic Teaching
Principle of Kinesthetic Teaching
Improvement Strategies
Under-Compensation of Friction
Smooth Transition of Coulomb Friction
Fast Deceleration
Motion Limits
Fusion with Friction Model for Double-Encoder Based Method
Validation Experiment
Cartesian Teaching
Basic Principle of Cartesian Teaching
Orientation Deviation Represented by Euler angles
Orientation Deviation Represented by Unit Quaternion
Experiments
Advanced Dynamic Control
Dynamic Model in Cartesian Space
Properties of Constant Force Tracking
Impedance Model in Cartesian Space
Position-Based Hybrid Control
Prediction of Shape Profile
Prediction of Normal Direction
Validation Experiments
Collaborative Application of Industrial Robots
Conclusion
References
Chapter 11
The Application of Robots in the Metal Industry
Abstract
Introduction
The Application of Industrial Robots in the Metalworking Industry
The Application of Industrial Robots in the Production Processes of the Metal Industry
The Application of Industrial Robots in Material Transport
The Application of Industrial Robots in Machine Handling
The Application of Robots in Welding Processes
The Application of Robots in Cutting Processes
The Application of Robots in Metal Sheet Cutting Processes
The Application of Robots in the Deformation Processes
The Application of Robots in the Foundries
The Application of Robots in the Painting Processes
The Application of Robots in Palletizing and Packaging Processes
The Application of Robots in the Assembly Processes
The Application of Robots in the Control Processes
The Application of Robots in the Product Storage
Conclusion
References
Chapter 12
The Implementation of Robots in Wood Industry
Abstract
Introduction
Robot Usage In Logs Storage Yard
Robots in Preparation of Logs For Processing
Robots in Primary Log Sawing Phase
Robots in Secondary Sawing Phase
Robots in External In-Between Processing Phases
Robots in Veneer Processing
Robots in Plywood and Curved Plywood Processing
Robots in Wood-Based Boards Production
Robots in Furniture Production
Robots in Storage Phases
Conclusion
References
Chapter 13
Human Grasping as an Archetype of Grasping in Robotics: New Contributions in Premises, Experimentation and Mathematical Modeling
Abstract
Introduction
Historical References
Structural and Functional Characteristics of the Human Hand
Human Hand Biomechanism Actuation
Human Hand Protection and Sensitivity
Case Study
Minimum Mathematical Conditions of Static Grasping with the Human Hand
Mechanical Contact Modeling by Torsors
Preliminary Notions
A Vector’s Torsor
Matrix Expression of the Torsor
Torsors’ Vector Space Dimension
Static Torsor
Mechanical Contacts Types
Mechanical Frictionless Contact between Two Non-Deformable Entities
Mechanical Contact between Two Non-Deformable Solids, with Friction
Mechanical Contact between Two Deformable Solids with Friction
Equilibrium of a Solid Body
Minimum Mathematical Conditions for Static Grasping
Micromanipulation with Human Hand - Premises and Mathematical Modeling
Micromanipulation
Minimum Mathematical Conditions for the Stability of Micromanipulation
Conclusion
References
Chapter 14
Application of Industrial Robots for Robotic Machining
Abstract
Introduction
Robotic Machining
Kinematic Performance of Robots
Static Performance of Robots
Dynamic Performance of Robots
Modeling and Optimization of Robotic Machining
Conclusion
References
About the Editors
Index


ROBOTICS RESEARCH AND TECHNOLOGY

INDUSTRIAL ROBOTS DESIGN, APPLICATIONS AND TECHNOLOGY

No part of this digital document may be reproduced, stored in a retrieval system or transmitted in any form or by any means. The publisher has taken reasonable care in the preparation of this digital document, but makes no expressed or implied warranty of any kind and assumes no responsibility for any errors or omissions. No liability is assumed for incidental or consequential damages in connection with or arising out of information contained herein. This digital document is sold with the clear understanding that the publisher is not engaged in rendering legal, medical or any other professional services.

ROBOTICS RESEARCH AND TECHNOLOGY Additional books and e-books in this series can be found on Nova’s website under the Series tab.

ROBOTICS RESEARCH AND TECHNOLOGY

INDUSTRIAL ROBOTS DESIGN, APPLICATIONS AND TECHNOLOGY

ISAK KARABEGOVIĆ AND

LEJLA BANJANOVIĆ-MEHMEDOVIĆ EDITORS

Copyright © 2020 by Nova Science Publishers, Inc. All rights reserved. No part of this book may be reproduced, stored in a retrieval system or transmitted in any form or by any means: electronic, electrostatic, magnetic, tape, mechanical photocopying, recording or otherwise without the written permission of the Publisher. We have partnered with Copyright Clearance Center to make it easy for you to obtain permissions to reuse content from this publication. Simply navigate to this publication’s page on Nova’s website and locate the “Get Permission” button below the title description. This button is linked directly to the title’s permission page on copyright.com. Alternatively, you can visit copyright.com and search by title, ISBN, or ISSN. For further questions about using the service on copyright.com, please contact: Copyright Clearance Center Phone: +1-(978) 750-8400 Fax: +1-(978) 750-4470 E-mail: [email protected].

NOTICE TO THE READER The Publisher has taken reasonable care in the preparation of this book, but makes no expressed or implied warranty of any kind and assumes no responsibility for any errors or omissions. No liability is assumed for incidental or consequential damages in connection with or arising out of information contained in this book. The Publisher shall not be liable for any special, consequential, or exemplary damages resulting, in whole or in part, from the readers’ use of, or reliance upon, this material. Any parts of this book based on government reports are so indicated and copyright is claimed for those parts to the extent applicable to compilations of such works. Independent verification should be sought for any data, advice or recommendations contained in this book. In addition, no responsibility is assumed by the Publisher for any injury and/or damage to persons or property arising from any methods, products, instructions, ideas or otherwise contained in this publication. This publication is designed to provide accurate and authoritative information with regard to the subject matter covered herein. It is sold with the clear understanding that the Publisher is not engaged in rendering legal or any other professional services. If legal or any other expert assistance is required, the services of a competent person should be sought. FROM A DECLARATION OF PARTICIPANTS JOINTLY ADOPTED BY A COMMITTEE OF THE AMERICAN BAR ASSOCIATION AND A COMMITTEE OF PUBLISHERS. Additional color graphics may be available in the e-book version of this book.

Library of Congress Cataloging-in-Publication Data Names: Karabegović, Isak, editor. Title: Industrial robots : design, applications and technology. Other titles: Industrial robots (Nova Science Publishers) Description: New York : Nova Science Publishers, Inc., [2020] | Series: Robotics research and technology | Includes bibliographical references and index. | Identifiers: LCCN 2020015298 (print) | LCCN 2020015299 (ebook) | ISBN 9781536177794 (hardcover) | ISBN 9781536177800 (adobe pdf) Subjects: LCSH: Robots, Industrial. Classification: LCC TS191.8 .I495 2020 (print) | LCC TS191.8 (ebook) | DDC 629.8/92--dc23 LC record available at https://lccn.loc.gov/2020015298 LC ebook record available at https://lccn.loc.gov/2020015299

Published by Nova Science Publishers, Inc. † New York

This book covers many issues in the design and application of industrial robots, from classification to new areas such as human-robot collaborative work and applications in the Industry 4.0 environment. The presentations and examples offer enjoyable reading for novice as well as advanced readers interested in industrial robotics. Academician Asif Šabanovič, Academy of Sciences and Arts of Bosnia and Herzegovina, Bosnia and Herzegovina. The editors of this book have combined a collection of chapters by an international group of renowned experts. The goal of this book is to provide material that informs students, managers, engineers, and researchers, and encourages the exploration and enhancement of robot applications in industrial manufacturing processes. By describing their own research in the chapters, the chapter authors contribute both to the development of robotic technology and to the application of the latest research. Full Professor Avdo Voloder, PhD, Department of Mechanics and Robotics, Faculty of Engineering, University of Sarajevo, Sarajevo, Bosnia and Herzegovina

CONTENTS

Preface
Chapter 1. The Application of Robots in the Industry (Isak Karabegović)
Chapter 2. Industrial Robot Systems and Classifications (Ermin Husak)
Chapter 3. Sensors in Robotics (Mehmed Mahmić)
Chapter 4. Robotic Vision (Samir Vojić)
Chapter 5. 3D Robot Vision in Industrial Applications (Dinko Osmanković)
Chapter 6. Robot Actuators (Safet Isić and Ermin Husak)
Chapter 7. Kinematics and Dynamics of Robots (Emir Nezirić)
Chapter 8. Collaborative Robots: Overview and Future Trends (Mattia Zamboni and Anna Valente)
Chapter 9. Artificial Intelligence Drives Advances in Human-Robot Collaboration (Lejla Banjanović-Mehmedović)
Chapter 10. The Study on Key Technologies of Collaborative Robot, Sensorless Applications and Extensions (Jianjun Yuan, Yingjie Qian, Liming Gao, Zhaojiang Yu, Hanyue Lei, Sheng Bao and Liang Du)
Chapter 11. The Application of Robots in the Metal Industry (Edina Karabegović)
Chapter 12. The Implementation of Robots in Wood Industry (Salah Eldien Omer)
Chapter 13. Human Grasping as an Archetype of Grasping in Robotics: New Contributions in Premises, Experimentation and Mathematical Modeling (Ionel Staretu)
Chapter 14. Application of Industrial Robots for Robotic Machining (Janez Gotlih, Timi Karner, Karl Gotlih and Miran Brezočnik)
About the Editors
Index

PREFACE

HISTORICAL DEVELOPMENT OF ROBOTS

The century we live in will, without a doubt, be the century of robots. Throughout history, robots have intrigued the imagination of science fiction writers, philosophers, artists and engineers. Today, a few centuries after the first notion of a robot, the whole idea has developed to the extent that we can safely state that robots are going to play a key role in the 21st century. Around 300 B.C., Aristotle wrote: "If every instrument could accomplish its own work, obeying or anticipating the will of others, chief workmen would not need servants." The two golden assistants of Hephaestus, the Greek god of fire (Vulcan in Roman mythology), could also be considered forerunners of robots. The first significant development of robots began in the 13th century, when the Arab writer Al-Jazari wrote The Book of Knowledge of Ingenious Mechanical Devices, which included the first illustration of a robot. During this period, in the 14th and 15th centuries, the earliest clock automata were developed, such as the automated rooster erected on top of Strasbourg Cathedral, France, constructed in the year 1350, which flapped its wings and crowed at noon every day, telling the citizens the accurate time. The 18th century brought several "serious" machines. In 1774, Pierre and Henri-Louis Jaquet-Droz designed a boy automaton that could draw and write a short message; they also developed a woman automaton that could play the piano. Around the same time, the well-known mechanical duck was created, which, like the aforementioned rooster, flapped its wings, "ate" and "drank". In the nineteenth century, machines were developed for the rapidly evolving industry; textile machines, in particular, were forerunners of modern robots. One of those first machines was a loom that controlled the entire manufacturing process.
The novel Frankenstein by Mary Shelley, which describes a monster created and designed by human hands, appeared in the same period. In 1834, Charles Babbage worked on the
Analytical Engine, which would have been the first computer designed to perform complex mathematical calculations. Unfortunately, the engine was never fully built. At the end of the nineteenth century, Thomas Edison, who designed a talking doll, was also engaged with "robots". In 1898, Nikola Tesla created the first remote control: a boat that could be remotely operated. It was the first time that low-power energy was transmitted wirelessly. The term robot was first used in 1921 in the play R.U.R. - Rossum's Universal Robots by the playwright Karel Čapek. It derives from the Czech word "robota", meaning drudgery or forced labor. In Čapek's play a robot was a laborer, and the term was soon adopted to denote similar types of machines. The term robot is the only word of Slavic origin taken into the English language without translation. Seven years later, in Great Britain, Captain William Richards and Alan Reffell built the first humanoid robot; however, it could only move across the platform it stood on. In the early 1940s, Westinghouse Electric Corporation created two robots that used electric motors for their entire body motion: the robot Elektro was able to dance, count to ten and smoke, and its robot dog Sparko could walk, stand on its hind legs and growl. These robots were shown at the World's Fair in New York in 1940. In the 1940s Isaac Asimov used the term robotics for the first time and expounded, in the short story "Runaround", the Laws of Robotics that all robots must obey. These laws state that:

1. A robot may not injure a human or, through inaction, allow a human to come to harm.
2. A robot must obey orders given to it by human beings, except where such orders would conflict with the first law.
3. A robot must protect its own existence as long as such protection does not conflict with the first and second laws.

The 1940s were very important in many other ways as well. The first ENIAC and Whirlwind computers were invented in that decade, which led to important developments in robots.
became profitable only twenty years later. The first advertisement of a robot appeared in 1959, when the Planet Corporation advertised its work. In the early sixties, Unimation was bought by Condec Corp., after which development of the Unimate robot system began. In 1960 AMF (American Machine and Foundry) advertised the robot Versatran, built by Harry Johnson and Veljko Milenković. In 1962, the first robot was installed on a production line of the General Motors Company. A year later, the first computer-controlled robot arm was made at the Rancho Los Amigos Hospital in California. The year 1964 was also significant for education in robotics: Artificial Intelligence (AI) research laboratories were opened at MIT, the Stanford Research Institute, Stanford University, and the University of Edinburgh. Japan, today among the leading countries in robot production, imported its first robot from the United States, and in 1968 the Kawasaki Heavy Industries Company started production under license from Unimation. At the end of the sixties, Hughes Aircraft began producing robots; their Mobot, controlled by radio and camera systems, was designed for areas and tasks beyond human capabilities. The aforementioned Stanford Research Institute, which in the meantime became SRI International, developed Shakey in 1970 - a computer-controlled robot able to avoid walls and other obstacles in its way. In 1974, a year after the first commercial robot went on sale, Professor Victor Scheinman, the developer of the Stanford Arm, founded Vicarm Inc., a company which manufactured a robot arm for industrial use. In 1976, robot arms were used on the Viking 1 and 2 spacecraft; in 1977 the company was bought by Unimation. General Motors, the largest user of robots, founded a joint company with Fanuc Inc. to produce robots; more than half of their robots went back to General Motors.
In the last two decades of the 20th century, the production of industrial robots improved significantly, and at the end of the 1980s Japan became the leading country, with forty commercial companies producing robots.

ROBOTS: TODAY AND TOMORROW

At present, robots have a wide application - probably wider than it seems. They have been used almost from the beginning of space exploration (on the Viking 1 and 2 spacecraft, as we have seen) and, of course, to this day. For instance, the Deep Space 1 spacecraft relies on autonomous navigation, periodically taking images of stars and asteroids to estimate its current location. NASA uses robots to explore Mars. Similar vehicles have been developed after the success of the Pathfinder mission; these are able to travel 100 meters per Martian day while carrying instruments used to explore the Red Planet. There are, of course, robots on the International Space Station. AERCam, a free-flying robot camera which looks like a
soccer ball, assists astronauts in tasks involving repairing and maintaining the Station. Apart from operating inside, this camera can also go outside the International Space Station. Robots are widely used by armies as aircraft without a human pilot. Some of those aircraft are operated from the ground, but there are also those that are fully independent and capable of autonomous flight. In addition, robot journalism has been developed, which should reduce the risks that journalists face in dangerous conflict zones; this model is currently being tested in Afghanistan. A great number of robots are used in potentially dangerous situations. It can be seen on TV how robots handle bombs or go through minefields. One of these is the Mini Andros, which has two "arms" and can climb and descend stairs. It is equipped with three video cameras, and is thus useful for exploring unknown areas, such as large houses where dangerous people may be present. Special versions of this robot are equipped with a radiation detector. The authors of the Robug robot were inspired by crabs and spiders; this robot can climb walls and, like the Mini Andros, is controlled by a human operator. There are alternative models equipped with cameras that are able to operate in a radioactive environment. One of the most popular robots is Sony's Aibo, the robot dog - a small home robot designed for entertainment. One of the latest models is equipped with a digital camera and demonstrates emotions: it expresses six emotions (happiness, sadness, anger, surprise, fear and dissatisfaction) and five instincts. Aibo has the ability to recognize speech and can be taught to recognize its own name. It can be told to take photos with its digital camera, and the dog will do it. Aibo also recognizes seventy-five voice commands and its owner's name.
Sophia, a social humanoid robot developed by the Hong Kong-based company Hanson Robotics, made her first public appearance at the South by Southwest Festival (SXSW) in mid-March 2016 in Austin, Texas, United States. Sophia uses artificial intelligence, visual data processing and facial recognition. Sophia also imitates human gestures and facial expressions and is able to answer certain questions and hold simple conversations on predefined topics. In 2015, ABB introduced YuMi, the world's first truly collaborative robot. ABB's growing family of YuMi robots is part of collaborative automation solutions that help people and robots safely work closer together than ever before. YuMi is designed for a new era of automation - for example, in small parts assembly, where people and robots work side by side on the same tasks. Safety is built into the functionality of the robot itself, and YuMi removes the barriers to collaboration by making fencing and cages a thing of the past. Today, robots are also used at home. The company Electrolux has recently introduced a robotic vacuum cleaner which has changed household chores: this robot can avoid obstacles, as it has an acoustic radar system similar to that of a bat. Electrolux states that this robot will effectively clean 95% of a room. Lego
Mindstorms robots are extremely popular; with a Mindstorms set one can design and program a real robot which will carry out the tasks for which it is designed. The RCX microcomputer is the heart of this system, and it is programmed from a personal computer. In addition to the RCX, the set contains many Lego parts, an infrared receiver, software, two motors and many other components. The set is designed mainly for children, but also for students and robot enthusiasts. The Lego Mindstorms project itself was launched fifteen years ago by Lego and the Massachusetts Institute of Technology. As Lego states, a person who knows how to use a personal computer can build a first Lego robot in an hour.

The Robot Operating System (ROS) is a flexible framework for writing robot software. It is a collection of tools, libraries and conventions that aim to simplify the task of creating complex and robust robot behavior across a wide variety of robotic platforms. ROS is the standard for open-source robotics in dynamic environments, for tasks such as navigating automated guided vehicles (AGVs), gripping objects and preventing collisions.

Robots are also used in sports. It is old news that RoboCup competitions are organized regularly. The RoboCup project's goal is to develop, by 2050, a fully autonomous humanoid robot soccer team able to win against the then-current human world champion team.

Nowadays, robots and artificial intelligence coexist, and thus it is hard to imagine a present-day robot without some kind of artificial intelligence. As with artificial intelligence, the question with robots and androids, as well as with a fusion of all three life forms, is: what if they get out of control? According to Hans Moravec, one of the leading robot/AI experts, robots will have become as smart as a human by 2040 - and, surely, much smarter than many of us. Unlike pessimistic and paranoid predictions, Moravec is not worried.
He believes that robots and artificial intelligence will actually extend human life and improve its quality in general. It is difficult for laypeople to judge which scientist is right. The truth is that some possibilities and theories are worrying, but we already understood that from reading some of the great science fiction stories. As it seems, evolution has brought man nearly to the point where he can build a being as intelligent as himself. The whole notion seems far advanced and impossible to control; what we might try to do is take advantage of it. The only real threat, as we have seen in closing the discussion of artificial intelligence, comes from humans themselves, who might misuse their time destroying nature, fighting destructive wars and spreading hatred. On the other hand, some science fiction stories have shown that the coexistence of artificial intelligence, robots, androids and humans is possible, provided that humans progress together with these creations. We live in a century that has already brought a lot of scientific excitement, and those who do not perceive this outcome positively are rare. We live in a time that will undoubtedly be remembered for many things in the distant future, and it would be a shame not to be aware of it now. It is enough just to look around and realize that what we once only read about or watched on TV is
already around us. Pessimistic predictions will be taken into consideration but, as always, with the hope that things will develop in the best possible way for humans, androids, robots and artificial intelligence.

The growing use of robots has social consequences. By introducing robots into factories, a large number of unskilled and semiskilled workers who have done simple, dangerous and dull tasks lose their jobs and have no choice but to retrain. With the rapid computerization of all forms of business and the huge expansion of the Internet, a large gap is expected between those who are technologically advanced and those who have lost their connection with modern times. Most people are not even aware of the extent to which robots are present in their lives: their cars and computers were almost certainly partially assembled by robots. The price of robots, as already noted, is constantly declining, and robots are coming into wider use. It is only a matter of time before robots become available to the population of today's high school students, just as happened with computers and cell phones.

The book Industrial Robots: Design, Applications and Technology may be useful to students of technical schools and to engineers in various industries who develop programs to increase competitiveness by introducing new, highly sophisticated technology. The book's illustrations show the application of industrial robots in industries such as the metal processing, automotive, construction, wood processing, textile and process industries. In all these industries there are economic motives for installing industrial robots, especially in areas that are dangerous and monotonous for workers. By introducing robots into workplaces, higher product quality is achieved, increasing the competitiveness of the company on an increasingly demanding world market.
The era of mechatronic engineering, a synthesis of mechanical engineering, electrical engineering, control and informatics, supported by the constant capacity increase of computers (according to Moore's law, the capacity of each new generation of computers doubles), is unstoppably coming. The robot is a prime example of a mechatronic system without which the modern flexible manufacturing system cannot be imagined. The fact that robots can perform increasingly complex operations from day to day, and that the robot cost decreases in comparison with the labor cost, will certainly be an additional motivation for managers to opt for the application of robotics in factories, no matter to which industry those factories belong. In the future, the application of robots in small and medium sized enterprises will constantly grow, since those enterprises are expected to vastly improve their productivity by applying robots and intelligent systems in workplaces. For that reason, this book can offer guidance and inspiration to all those who keep up, or intend to stay current, with the latest technological advances in the world. The book's content is designed to combine the fundamentals of kinematics, dynamics and robot control with pragmatic solutions and applications, since modern robotics dealing with rigid body motion involves not only the classical geometrical
underpinnings and representations, but also their expression using modern exponentials and their connection to algebras. This book consists of fourteen chapters on Industrial Robots: Design, Applications and Technology.

Chapter 1 - The Application of Robots in the Industry analyzes the application of industrial robots worldwide in the last ten years, as well as estimates of the application in the following three years. In addition, an analysis of the application of robots in the top five countries in the world was conducted, in order to determine which industry exhibits the highest application of industrial robots, as well as which processes and tasks in production processes. Finally, the chapter describes the vision of the development of robotic technology.

Chapter 2 - Industrial Robot Systems and Classifications describes the basic industrial robot system as well as the basic elements that constitute this system. The basic terms used in industrial robotics are defined, such as workspace, robot accuracy, repeatability, TCP (tool center point), robot degrees of freedom, etc. This chapter also provides a classification of industrial robots based on the mechanical structure of the robot.

Chapter 3 - Sensors in Robotics presents sensors that are applied in robotics. The basic division of sensors is into sensors of internal and external state. The level of the sensor system used with a certain robot is in direct connection with the class of tasks which that robot can carry out. Programming a robot is easier when the robot is equipped with a sensor system of a higher level. In the future, significant progress of sensor systems is expected, e.g., a sense of touch with the possibility of determining surface softness, a sense of hearing, etc.

Chapter 4 - Robotic Vision presents an overview of robotic vision with particular emphasis on the basic elements of vision systems.
Vision systems are utilized to develop a higher-level interpretation of the robotic workspace using images acquired by cameras mounted on a robot. Such information can be useful for tasks such as object recognition, object location, object inspection, obstacle avoidance and navigation.

Chapter 5 - 3D Robot Vision in Industrial Application presents the principles of 3D environment sensing in modern robotics and the mathematical model of epipolar geometry, the principle underlying stereo vision. Moreover, other types of sensors that can perceive a 3D environment are presented. These include depth sensing cameras with their different operating principles (time-of-flight and projected-light cameras) and rotating setups that include 3D LiDAR sensors. Finally, the chapter addresses some interesting applications of 3D vision in a wide range of industrial applications, including object measurement, safe path planning in human-robot collaboration environments and 3D thermal mapping for the purpose of improving the energy efficiency of buildings.


Chapter 6 - Robot Actuators classifies the various actuators available, based on the type of energy source used. There are several types of actuators depending on the input energy source: hydraulic, pneumatic, AC, DC and stepper actuators. The working principle, general design, and practical design and usage of each type of actuator are introduced.

Chapter 7 - Kinematics and Dynamics of Robots describes the kinematics and dynamics of the robot; the equations of motion are derived because, in order to control the motion of the robot, it is necessary to know the forces acting on each part of the robot at any moment in time. The movement of the robot is explained by the equations of motion.

Chapter 8 - Collaborative Robots: Overview and Future Trends describes collaborative robots that aim at improving production efficiency in the industry by combining the skills and flexibility of human operators with the precision, speed and reliability of industrial robots, taking the best of both worlds. This paradigm shift requires the robot to be able to work with humans in the same space, on the same piece, and at the same time, just as another operator would. This chapter provides a wide range of information on human-robot collaboration, from types of collaboration to factors involved in effective collaboration, and proposes a comprehensive set of sensor technologies to tackle tomorrow's challenging tasks.

Chapter 9 - Artificial Intelligence Driving Advances in Human Robot Collaboration presents the essential role of artificial intelligence (AI) in collaborative robot systems capable of learning and working hand in hand with humans. The chapter provides a comprehensive survey of artificial intelligence techniques and discusses their applications in human-robot collaboration.
Fuzzy and game theory-based decision making, machine and deep learning for body posture, hand motion and voice recognition, prediction of human reaching motion, optimal control for path planning of exoskeleton robots, and task allocation optimization are advanced approaches for human-robot collaboration. Recent advances in artificial intelligence, digital twins, cyber-physical systems and robotics have opened new possibilities for technological progress in manufacturing and led to efficient and flexible factories.

Chapter 10 - The Study on Key Technologies of Collaborative Robot, Sensorless Applications and Extensions discusses sensorless dynamic control and related applications of collaborative robots. The idea of sensorless dynamic control is that neither additional force/torque sensors nor joint torque sensors are involved. Two methods for torque estimation are proposed: the current-based estimation method and the more novel double-encoder-based method. The former is based on the linear relationship between the motor's current and output torque, while the latter uses the speed reducer in a robot joint as a torque indicator by considering angular deformation.

Chapter 11 - The Application of Robots in the Metal Industry provides an overview of the application of industrial robots in the metal industry, with examples of how the automation
of production processes can be conducted with the introduction of the next-generation industrial robots, collaborative robots. Finally, a vision is provided of the application of industrial robots in the metal industry worldwide.

Chapter 12 - The Implementation of Robots in Wood Industry provides a cross-section of the application of industrial robots in the wood industry. The new generation of wood processing technologies allows the installation of robots in almost all phases of production. Robots will be installed where the safety of the process, or an increase in productivity by changing the basic technological process, is needed. Today, the robot industry is launching new robots to satisfy the needs of the wood processing industry.

Chapter 13 - Human Grasping as an Archetype of Grasping in Robotics: Premises, Experimentation and Mathematical Modeling - New Contributions presents, in a more complete and unified way compared to similar studies, the structural, constructive and functional characteristics of the human hand, so as to be useful for the constructive and functional optimization of anthropomorphic fingered grippers for robots. Substantial human hand configurations useful for grasping and micromanipulation are then highlighted. Micromanipulation of a rod-type object is demonstrated experimentally, and the minimum mathematical conditions for safe micromanipulation are stated.

Chapter 14 - Application of Industrial Robots for Robotic Machining describes the use of the robot for the machining process itself, which further increases the adaptability and efficiency of the entire process and reduces the cost of the product.
The most common application fields for robotic machining are found where machine tools are too expensive, as in the case of manufacturing small batches of large workpieces, or where the accuracy of robotic machining can be controlled, as in operations where machining forces are low and, more generally, when machining softer materials. To optimize robotic machining, different approaches are used, most of which deal with the robot's structural deficiencies originating from its static, kinematic and dynamic properties.

This book will be useful for researchers working in the field of the application of innovative technologies and the implementation of industrial robots in production processes, as well as in other branches of industry and the human environment. This book is ideal for postgraduate students, mechanical engineers, electrical engineers and civil engineers in industry and academia, management companies, investment and profit making sectors, people engaged in politics and job creation, companies working in production sciences, new technologies, etc.

Sarajevo, Bosnia and Herzegovina
December 2019

Isak Karabegović
Lejla Banjanović-Mehmedović

In: Industrial Robots: Design, Applications and Technology ISBN: 978-1-53617-779-4 Editors: Isak Karabegović et al. © 2020 Nova Science Publishers, Inc.

Chapter 1

THE APPLICATION OF ROBOTS IN THE INDUSTRY

Isak Karabegović*
Academy of Sciences and Arts of Bosnia and Herzegovina
Bosnia and Herzegovina

ABSTRACT

There have been some major technological changes in all industries, including the reshaping of production processes, emerging business models, and changes in consumption, delivery, and transportation processes. These changes have occurred due to the implementation of new technological innovations that are developed with the support of robotics & automation, IoT, 3D printing, smart sensors, RFID, etc., which are the foundations of Industry 4.0. Global competition in the market requires the implementation of Industry 4.0 in all industry branches. Its implementation is conditioned by the use of robots in the production processes. Product diversity has increased and product life cycles have decreased, which requires flexible automation of production processes that cannot be accomplished without the use of robots. The chapter analyzes the application of industrial robots worldwide in the last ten years, as well as estimates of the application in the following three years. In addition, an analysis of the application of robots in the top five countries in the world was conducted, in order to determine which industry exhibits the highest application of industrial robots, as well as which processes and tasks in the production processes. Finally, the chapter describes the vision of the development of robotic technology.

Keywords: robot, automation, application, production process, industry

* Corresponding Author’s E-mail: [email protected].

INTRODUCTION

Intelligent machines and systems with different levels of complexity are becoming increasingly present in different processes. Intelligent machines and systems, such as robots and technology cells, form the pillar of the CIM (Computer Integrated Manufacturing) system that forms the foundation of every concept of the factory of the future (Karabegović 2018, 1-13; Karabegović 2017, 110-117; Karabegović 2016, 92-97; Karabegović 2012, 368-378; Sulavik 2014, 3-15). Industrial robots are automated systems that use the computer as an intelligent control part. The definition of a robot as a reprogrammable and multifunctional mechanical structure is given by the International Organization for Standardization (ISO): “A robot is a machine that consists of mechanisms with multiple degrees of freedom of movement, and capable of manipulating a tool, workpiece, or other means.” The commercial application of computer-controlled industrial robots, i.e., computerized industrial robots, began in the 1970s. Process and machine automation is used primarily in production processes and machine control, but also in other important manufacturing activities such as serving a work station, positioning a workpiece, etc. Industrial robots can be applied in the following processes (Doleček 2008, 1-34):

- Service placement,
- Keeping materials in working position at various stages of production and operational transport,
- Technological operations (typical examples of this category are welding, painting, grinding, soldering, gluing, cleaning, polishing, etc.),
- Automatic assembly, and
- Pre-process, process and post-process control.

Industrial robots are ideal for jobs that are considered difficult and unsuitable for humans. They are used for repetitive jobs that are considered monotonous and very tedious. Industrial robots are also used in processes that require high quality and high productivity. Modern industrial production successfully uses robotic systems in most of its branches. The mobility of individual parts of the robot, the ability to follow different paths, and the ability to reach any point of the manipulation space with a given orientation lead to the conclusion that the possibilities of using robots in production are virtually limitless. The problem that limits the application of the robot in individual operations is the issue of cost-effectiveness. It is not cost-effective for a high-volume, high-speed, and high-power robotic structure to perform tasks for which it does not fully utilize its capabilities. For this reason, various industrial robots are designed for specific types of work tasks.


THE CLASSIFICATION OF ROBOTIC SYSTEMS

The reason for the increasing motivation for robot use lies in the following basic directions:

- Increased productivity,
- Reduced costs,
- Overcoming human skill deficiencies (precision),
- Greater flexibility in some areas of production,
- Improved production quality,
- Removing human beings from monotonous and repetitive tasks or tasks in a dangerous environment.

Generally, robotic systems can be divided into:

- Manipulation robotic systems,
- Mobile robotic systems,
- Information-controlled robotic systems.

Mobile robotic systems (Doleček 2008, 1-34; Karabegović 2015, 185-194; Litzenberger 2018, 1-28; World Robotics 2017, 26-492; World Robotics 2016, 11-18) are platforms whose motion is controlled by an automatic system. They have a programmed movement path and are also programmed for automatic target determination. They are intended for the automatic delivery of parts and tools to machines and from machine tools to storage. A manipulation mechanism can be fitted to such movable systems. Information-controlled robotic systems are used to collect, process, transmit, and use information to form various control signals. In production, these are automatic controls and management systems for human-free production processes, where industrial robots are used (Karabegović 2014, 7-16; Makowieckaja 2015, 22-27; Naheme 2017, 18-38; Beaupre 2015, 20-36; Robotics 2020 2013, 25-43; Anderson 2014). For example, in underwater conditions, these are devices equipped with measuring and control devices and a camera for determining the properties of the bottom of the water, for identifying objects, automatic display of information, etc. Manipulation robotic systems are most widely used in the industry and can be divided into:

- Robots,
- Manipulators,
- Robotic technological complexes.


Robots are mainly used in industrial production, whereas manipulators are in most cases used in the event of radiation, air poisoning, explosion hazard, high and low temperatures. Robotic technological complexes are applied in complex work conditions. The manner of managing robotic manipulation systems can be:

- Automatic,
- Remote,
- Manual.

The automatic manipulation devices (robots, manipulators, robotic technological complexes) are divided into four groups, according to the already mentioned generation classification of robots: compact program manipulators, program manipulators, adaptive robots, and intelligent robots. Remote controlled robots and manipulators are divided into six types:

- Command-line manipulators,
- Copy manipulators,
- Semi-automatic manipulators,
- Robots with supervised control,
- Robots with combined control,
- Dialog-controlled (interactive) robots.

THE APPLICATION OF INDUSTRIAL ROBOTS IN THE WORLD

The application of industrial robots on the annual and total level in all branches of industry is shown in Figure 1. The statistics given in the tables and diagrams are taken from the International Federation of Robotics (IFR) (Karabegović 2018, 1-13; Karabegović 2017, 110-117; Karabegović 2016, 92-97; Litzenberger 2018, 1-28; World Robotics 2017, 26-492; World Robotics 2016, 11-18; World Robotics 2015, 13-26; World Robotics 2011, 7-16; World Robotics 2010, 7-12), the United Nations Economic Commission for Europe (UNECE) and the Organisation for Economic Co-operation and Development (OECD). This presentation should show the effects that robots have on production and employment structure, and provide an overview of overall profitability.


Figure 1. The trend of application of industrial robots in the world on annual and total level for the period 2008-2018, and estimates of application for the period 2019-2021.

The application of industrial robots in the world on the annual level (Figure 1.a) is increasing linearly each year, so that in 2018 around 421,000 robot units were applied. In the last ten years, the application has increased 3.6 times. The lowest application in the past ten years was recorded in 2009 with 60,000 robot units, the reason for which was the world economic crisis. The growth of the application of industrial robots will continue over the next three years, and it is estimated that in 2021 approximately 630,000 robot units will be used per year. The total application of industrial robots in the world is shown in Figure 1.b, which indicates that the application of robots is increasing every year. Based on the diagram we can conclude that the increasing trend follows a slight
exponential function, so that about 1.035 million industrial robots were installed in 2008, and about 1.472 million robots in 2014. In 2018 the number of applied industrial robots was about 2.408 million robots in all industry branches. It is estimated that this growing trend will continue in the next three years, so that in 2021 the use of about 3.788 million industrial robots is expected. We can conclude that in the coming years the trend of use of industrial robots will increase, both on the annual and total level. We are currently in the fourth industrial revolution, “Industry 4.0,” in which all production processes need to become smart, i.e., smart factories, which would not be possible without industrial and service robots. The trend of application of industrial robots in Asia/Australia, Europe and America is shown in Figure 2. The application of robots in Africa was not examined due to its low application rates compared to other continents.
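The totals above imply a roughly exponential growth in the installed stock. As an illustrative cross-check of the figures quoted in this chapter (not part of the IFR's own methodology), the compound annual growth rate implied by the 2018 total and the 2021 estimate can be computed as follows:

```python
# Illustrative check of the growth figures quoted above (2.408 million
# installed robots in 2018, estimated 3.788 million in 2021).

def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between two stock totals."""
    return (end / start) ** (1 / years) - 1

rate = cagr(2.408e6, 3.788e6, years=3)
print(f"implied annual growth: {rate:.1%}")  # roughly 16% per year
```

A constant rate of this order, compounded over three years, is exactly what produces the "slight exponential" shape visible in Figure 1.b.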

Figure 2. The trend of application of industrial robots in Asia/Australia, Europe and America on annual level for the period 2008-2018 and estimate for the period 2019-2021.

The analysis of the trend of industrial robot application on the aforementioned continents indicates that Asia/Australia ranks first in the application of industrial robots. The use of robots in Asia/Australia has increased rapidly since 2012, unlike the use on the continents of Europe and America. The application of industrial robots in Europe and America shows that the trend of application is gradually increasing every year, with somewhat higher application in Europe than in America. It is estimated that the trend of use of industrial robots will keep growing over the next three years. Far ahead of the other continents is Asia/Australia, where it is estimated that in 2021 the application will increase to about 462,000 robot units. Europe expects the use of about 93,000 robot units, and America about 63,500 robot units in the same year. In order to understand the application of industrial robots, an analysis of robot application was conducted in developed and developing countries: Japan, USA, Germany, Republic of Korea
and China. The trend of application is shown in Figure 3. The trend of application of industrial robots in the above-mentioned countries is growing. Since China has the highest application in the world, with the trend growing exponentially, we have created a separate diagram of the application of robots, as shown in Figure 3.b. In all analyzed countries (Japan, Republic of Korea, USA and Germany), the trend of application has been increasing since 2010. We can rank them according to the application, from highest to lowest: Japan, Republic of Korea, USA and Germany. It is expected that in all four countries the growth trend will continue until 2021. It is also estimated that in 2021 about 64,000 robot units will be applied in Japan, about 46,000 robot units each in the Republic of Korea and the USA, and about 26,000 robot units in Germany. Of all the mentioned countries, China demonstrates the highest use of industrial robots, with a constantly increasing trend, as shown in Figure 3.b. It is estimated that around 290,000 robot units will be installed in this country in 2021, which is about 108,000 robot units more than in Japan, the Republic of Korea, the USA and Germany combined. The reason for such high use of industrial robots in China is the implementation of the government strategy called “Made in China 2025,” which aims to make China the most technologically advanced country in the world by 2025. If we analyze the use of industrial robots in fifteen countries of the world in 2017, we can obtain an insight into the annual level of application of industrial robots by country, as shown in Figure 4.
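The comparison between China and the other four countries can be verified with simple arithmetic over the 2021 estimates quoted in this chapter (the rounding is the chapter's own):

```python
# 2021 annual-installation estimates quoted in this chapter (robot units).
estimates = {
    "China": 290_000,
    "Japan": 64_000,
    "Republic of Korea": 46_000,
    "USA": 46_000,
    "Germany": 26_000,
}

# Sum of the four countries other than China, and China's lead over them.
others = sum(v for k, v in estimates.items() if k != "China")
print(others)                        # 182000
print(estimates["China"] - others)   # 108000, matching the text
```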


Figure 3. The trend of application of industrial robots in Japan, Republic of Korea, Germany and China on annual level for the period 2008-2018 and estimate for the period 2019-2021.

Figure 4. The trend of industrial robot application in fifteen top countries of the world in 2017. (World Robotics 2017, 26-429).


As we have seen in Figure 4, the highest use of industrial robots (more than 20,000 robot units) in 2017 was in the following five countries: China, Japan, Republic of Korea, USA and Germany. In the following ten countries, the use of robots is between 3,000 and 11,000 robot units: Taiwan, Vietnam, Italy, Mexico, France, Singapore, Spain, Canada, India and Thailand. Our focus is on the countries which have a highly developed automotive industry and the highest use of industrial robots: China, Japan, Republic of Korea, USA and Germany. Subsequently, we will analyze the application of industrial robots in these countries. It is also very important to know which industries in the world use industrial robots. The trend of robot use by industry is shown in Figure 5.

Figure 5. The trend of application of industrial robots in automotive, electrical, metal, plastics and food industry on the annual basis for the period 2008-2018 and estimates of the application for the period 2019-2021.

The first place in the application of industrial robots is held by the automotive industry (Figure 5). As we can see, the trend of application has been increasing, from about 30,000 robot units in 2009 to about 140,000 robot units in 2018. This trend is expected, because the automotive industry has been the leading adopter of industrial robots since the very beginning. Competitiveness in the market requires the automation of the production process in the automotive industry, which cannot be realized without industrial robots. It is estimated that this growing trend will continue, and the predictions are that in 2021 about 231,000 robot units will be used in the automotive industry. In second place in the application of industrial robots is the electric/electronic industry, which has shown a growing trend of application. In the last two years, the trend of application in the electric/electronic industry has approached the trend of application in the automotive industry. In addition, it is estimated that the trend of robot application in the electric/electronic industry will continue to grow, to about 222,000 robot units in 2021. In third place are the metal and machinery industries, whose
application trend is much lower than in the automotive and electric industries. However, it is still increasing, and it is expected that in 2021 around 82,000 robot units will be applied. Following these three industries are the plastics and chemical products industry and the food preparation and production industry, which also exhibit a growing trend of application. The density of robot use in the world can be observed based on the trend of application of industrial robots per 10,000 persons employed in the production industry, as shown in Figure 6.

Figure 6. The trend of application of industrial robots (robot density) per 10,000 persons employed in the industrial production processes in the world in 2017. (World Robotics 2017, 66-72).

The world average in the application of industrial robots (robot density) per 10,000 persons employed in industrial production processes in 2017 is 85 robot units, as shown in Figure 6. We have calculated the average of robot use for the twenty countries whose average is higher than the world average. It can be concluded that, despite having the highest use of robots in the world, China ranks 20th in robot density with 97 robot units, which is slightly more than the world average. The first place is held by the Republic of Korea with a density of 710 robot units, followed by Singapore with 658 robot units, then Germany, Japan, Sweden, Denmark, etc. The highest robot density per 10,000 employees in production processes is recorded in countries with developed automotive and electric/electronic industries. The average of robot applications per 10,000 employees in production processes varies across continents. In Europe, the average application is 106 robot units per 10,000 employees in industrial production processes, while in America this average is 91. The lowest average is recorded in Asia/Australia, with only 75 robot units per 10,000 employees in production processes (World Robotics 2017, 66-72).


Figure 7. The trend of the application of industrial robots (robot density) per 10,000 employees in the automotive industry for sixteen top countries in 2017. (World Robotics 2017, 66-72).

We can conclude that the largest number of production processes in Europe is automated by using industrial robots. Furthermore, in many European countries the automotive industry is highly developed, employs the largest number of workers, and applies the highest number of industrial robots. In order to understand the robot density per 10,000 employees in the industry, we have shown the trend for the sixteen top countries in the world with a highly developed automotive industry (Figure 7). Of the sixteen countries in the world that show the highest robot density per 10,000 persons employed in the automotive industry, the highest number of countries are from Europe (nine countries), followed by five countries in Asia and two countries in the Americas. The first place is held by the Republic of Korea with 2,145 robot units per 10,000 persons employed in the automotive industry, followed by the USA with 1,261 robot units, and Japan with 1,240 robot units. If we examine Europe as a continent, we see that these nine countries (France, Germany, Spain, Slovenia, Austria, Italy, Slovakia, UK and Portugal) have a combined robot density of 7,804 robot units per 10,000 persons employed in the automotive industry, which makes Europe the continent with the highest number of installed industrial robots per 10,000 persons employed in the automotive industry. It is noted that all countries in Europe that have a developed automotive industry install industrial robots in automotive production processes.
In other words, they are automating production processes, and the main reason is that automotive companies want to remain competitive in the global market. Considering that the world is currently in the era of Industry 4.0, and that autonomous vehicles are being introduced into transport, which can only be achieved by using robots, we come to the conclusion that robot density will grow in the future, i.e., the trend of robot use will increase in all production processes, especially in the automotive industry.


THE APPLICATION OF INDUSTRIAL ROBOTS IN TOP FIVE COUNTRIES IN THE WORLD

The Application of Industrial Robots in China

As we have previously noted, the first place in the application of industrial robots in the world is held by China, as shown in Figure 3.b. An analysis of the trend of application of industrial robots in China was conducted and is presented in the following figures (Litzenberger 2018, 1-28; World Robotics 2017, 26-492; World Robotics 2016, 11-18; World Robotics 2015, 13-26; World Robotics 2011, 7-16; World Robotics 2010, 7-12). The trend of application of industrial robots on the annual level in China has been steadily increasing every year. The application was rather low in 2008, but had increased to 165,000 robot units by 2018. Likewise, the total application of industrial robots also increased, reaching 620,000 industrial robot units in 2018.

Figure 8. The annual and total application of industrial robots in China for the period 2008-2018.


Figure 9. The trend of application of industrial robots in different industries in China for the period 2011-2018.

Based on Figure 9, we conclude that the trend of robot application is increasing in all industries. We have to emphasize the application in the electric/electronic industry, which is the highest and in 2018 reached 39,100 robot units. In second place is the growing automotive industry, which applied 35,300 robot units in 2018. Third place is held by the metal industry, with a constantly increasing trend, which amounted to 19,300 robot units in 2018. Following these three is the plastics and chemical products industry, where the trend of application is steadily increasing. The highest number of robot applications in China is recorded in handling operations/machine tending. The trend of application is increasing rapidly every year, and in 2018 it reached about 54,100 robot units. A somewhat slower trend of robot application can be seen in the welding and soldering processes. However, it is still increasing, and in 2018 the application reached approximately 27,300 robot units. Third place is held by assembling and disassembling operations, with an increasing trend and 19,300 robot units implemented in 2018. The application in the dispensing and processing fields shows a somewhat growing trend, but is still negligible compared to the previously mentioned applications.

Isak Karabegović

Figure 10. The trend of industrial robot application in handling operations/machine tending, welding and soldering, dispensing, processing, assembling and disassembling in China for the period 2011-2018.

The Application of Industrial Robots in Japan

Japan ranks second in the number of industrial robot applications in the world, as shown in Figure 3.b. An analysis was conducted of the application of industrial robots in Japan on the annual and total basis, as well as across different industries, as presented in the following figures (Karabegović 2015, 185-194; Karabegović 2014, 7-16; Litzenberger 2018, 1-28; World Robotics 2017, 26-492; World Robotics 2016, 11-18; World Robotics 2015, 13-26; World Robotics 2011, 7-16; World Robotics 2010, 7-12). Based on Figure 11, we see that the annual application of industrial robots in Japan has shown a small increase over the past ten years, rising to 52,200 robot units in 2018. In terms of the total use of industrial robots in Japan since 2008, we notice a slight decline, which in 2018 reached 294,000 industrial robot units. Based on Figure 12, we can conclude that the trend of robot application is increasing in all industries in Japan. The application in the automotive industry is the highest, reaching 16,400 robot units in 2018.


Figure 11. The trend of application of industrial robots on annual and total level in Japan for the period 2008-2018.

Figure 12. The trend of application of industrial robots in different industries in Japan for the period 2011-2018.

The continuously growing electric/electronic industry is in second place, and in 2018 it reached 11,900 robot units. Third place is held by the plastic and chemical products industry, in which the trend is constantly increasing; the application of robots in 2018 was about 6,230 units. The metal industry has the lowest application of robots, but it is still showing a slow increase.


Figure 13. The trend of industrial robot application in handling operations/machine tending, welding and soldering, dispensing, processing, assembling and disassembling in Japan for the period 2011-2018.

The highest number of robot applications in Japan is seen in handling operations/machine tending. The trend of application is rapidly increasing every year, and in 2018 it reached about 15,150 robot units. A somewhat slower trend of robot application can be seen in the welding and soldering processes; in 2018, approximately 9,850 robot units were applied. Assembling and disassembling operations are in third place, with an increasing trend and 8,250 robot units implemented in 2018. The application in the dispensing and processing fields shows a somewhat growing trend, but is still negligible compared to the previously mentioned applications.

The Application of Industrial Robots in the Republic of Korea

As Figure 3.b indicates, the Republic of Korea ranks third in the world by the number of industrial robots applied. An analysis was conducted of the application of industrial robots in the Republic of Korea on the annual and total level, as well as across different industries, as presented in the following figures (Karabegović 2015, 185-194; Karabegović 2014, 7-16; Litzenberger 2018, 1-28; World Robotics 2017, 26-492; World Robotics 2016, 11-18; World Robotics 2015, 13-26; World Robotics 2011, 7-16; World Robotics 2010, 7-12). Based on Figure 14, we can see that the trend of application of industrial robots on the annual level in the Republic of Korea has increased slightly in the last ten years, amounting to 48,300 robot units in 2018. The total application of industrial robots in the Republic of Korea has grown constantly since 2008, reaching 320,000 industrial robot units in 2018. Based on Figure 15, we conclude that the trend of robot application in the Republic of Korea is increasing in all industries. The application in the electric/electronic industry is the highest, reaching 38,600 robot units in 2018. The growing automotive industry ranks second, with a total of 10,800 robot units applied in 2018. The metal industry is in third place, with a slightly increasing trend that amounted to 2,100 robot units in 2018. Fourth place by robot application is held by the plastic and chemical products industry, whose trend of application is slowly increasing. The highest number of robot applications in the Republic of Korea is recorded in handling operations/machine tending, which is increasing rapidly every year and reached about 26,800 robot units in 2018. A somewhat slower trend of robot application can be seen in the welding and soldering processes; in 2018, approximately 8,100 robot units were applied. Assembling and disassembling operations are in third place, with an increasing trend and 6,200 robot units implemented in 2018. The application in the dispensing and processing fields shows a somewhat growing trend, but is still negligible compared to the previously mentioned applications.

Figure 14. The trend of application of industrial robots on annual and total level in the Republic of Korea for the period 2008-2018.


Figure 15. The trend of application of industrial robots in different industries in the Republic of Korea for the period 2011-2018.

Figure 16. The trend of industrial robot application in handling operations/machine tending, welding and soldering, dispensing, processing, assembling and disassembling in the Republic of Korea for the period 2011-2018.


The Application of Industrial Robots in the USA

Based on Figure 3.b, we can conclude that the USA ranks fourth in the number of applications of industrial robots in the world. An analysis was made of the application of industrial robots in the USA on the annual and total level, as well as across different industries, as presented in the following figures (Karabegović 2015, 185-194; Karabegović 2014, 7-16; Litzenberger 2018, 1-28; World Robotics 2017, 26-492; World Robotics 2016, 11-18; World Robotics 2015, 13-26; World Robotics 2011, 7-16; World Robotics 2010, 7-12). Figure 17 depicts the annual trend of application of industrial robots in the USA. As can be seen, the trend has been increasing over the last ten years and reached 35,100 robot units in 2018. The total application of industrial robots in the USA for the period 2008-2018 shows a continuously growing trend, reaching 316,000 industrial robot units in 2018. Based on Figure 18, we can conclude that the trend of robot application is increasing in all industries. The automotive industry has the highest application, reaching 19,680 robot units in 2018. In second place is the continuously growing electric/electronic industry, which reached 7,800 robot units in 2018.

Figure 17. The trend of application of industrial robots on annual and total level in the USA for the period 2008-2018.


Figure 18. The trend of application of industrial robots in different industries in the USA for the period 2011-2018.

The metal industry is in third place, with a constantly increasing trend that reached 2,810 robot units in 2018. It is followed by the plastic and chemical products industry, which showed a slightly increasing trend in 2018. The highest number of robot applications in the USA is recorded in handling operations/machine tending, an increasing trend that reached about 16,920 robot units in 2018. A somewhat slower trend of robot application can be seen in the welding and soldering processes; in 2018, about 8,100 robot units were applied. Assembling and disassembling operations are in third place, with an increasing trend and 2,185 robot units implemented in 2018. The application in dispensing operations ranks fourth, with a slight increase to about 1,010 robot units in 2018. The application in the processing field shows a somewhat growing trend, but is still negligible compared to the previously mentioned applications.


Figure 19. The trend of industrial robot application in handling operations/machine tending, welding and soldering, dispensing, processing, assembling and disassembling in the USA for the period 2011-2018.

The Application of Industrial Robots in Germany

Based on Figure 3.b, we can conclude that Germany ranks fifth in the number of applications of industrial robots in the world. An analysis of the application of industrial robots on the annual and total level in Germany was conducted, as well as across different industries, as presented in the following figures (Karabegović 2015, 185-194; Karabegović 2014, 7-16; Litzenberger 2018, 1-28; World Robotics 2017, 26-492; World Robotics 2016, 11-18; World Robotics 2015, 13-26; World Robotics 2011, 7-16; World Robotics 2010, 7-12). Figure 20 indicates that over the past ten years there has been a slight increase in the annual application of industrial robots in Germany, which reached 22,800 robot units in 2018. The total use of industrial robots in Germany for the period 2008-2018 shows a continuously growing trend, amounting to 198,000 robot units in 2018. Based on Figure 21, we can conclude that the trend of robot application is increasing in all industries. The application in the automotive industry is by far the largest in comparison to other industries, reaching 11,240 robot units in 2018. The metal industry is in second place, with 3,620 robot units in 2018.


Figure 20. The trend of application of industrial robots on annual and total level in Germany for the period 2008-2018.

Figure 21. The trend of application of industrial robots in different industries in Germany for the period 2011-2018.


Figure 22. The trend of industrial robot application in handling operations/machine tending, welding and soldering, dispensing, processing, assembling and disassembling in Germany for the period 2011-2018.

The electric/electronic industry is in third place, with a constantly increasing trend that reached 2,620 robot units in 2018. The last place is held by the plastic and chemical products industry, which showed a slightly increasing trend in 2018. The highest number of robot applications in Germany is recorded in handling operations/machine tending, an increasing trend that reached about 16,920 robot units in 2018. A somewhat slower trend of robot application can be seen in the welding and soldering processes; in 2018, about 3,140 robot units were applied. Assembling and disassembling operations are in third place, with an increasing trend and 2,120 robot units implemented in 2018. The application in dispensing operations ranks fourth, with a slight increase to about 958 robot units in 2018. The application in processing shows a somewhat growing trend, but is still negligible compared to the previously mentioned applications.

CONCLUSION

Based on the conducted analysis, we can determine that the use of industrial robots in the world has been increasing every year over the last ten years, on both the annual and total level. It is estimated that the trend will continue to increase in the coming period until 2021. The highest application is recorded on the continent of Asia/Australia, followed by Europe and America. The application of industrial robots (robot density) per 10,000 persons employed in industrial production processes in the world in 2017 was 87 robot units. The highest robot density per 10,000 employed persons is in the automotive industry, as shown in Figure 7; the Republic of Korea is in first place with 2,145 robot units. We have identified the top five countries in the application of industrial robots: the first place is held by China, followed by Japan, the Republic of Korea, the USA and Germany. The analysis of the application of industrial robots in these five countries leads to the conclusion that the highest application in all of them is recorded in two industries, automotive and electric/electronic, and has been increasing over the last ten years. These two industries are followed by the metal and the plastic and chemical products industries. The use of industrial robots in handling operations/machine tending and in welding and soldering is the highest in all five countries. Third place in the application of industrial robots is held by assembling and disassembling processes in all five countries, whereas processing and dispensing are last. As we are currently in the era of the fourth industrial revolution, "Industry 4.0," whose implementation would be impossible without industrial and service robots (for logistics), there will be a rapid increase in the use of industrial robots. The rapid development of new technologies is supporting robotic technology, so that second-generation industrial robots are being implemented, which do not need to be separated by fences because they collaborate with workers. Product diversity is increasing and product life cycles are decreasing, which requires flexible automation of production processes that cannot be accomplished without the use of robots.
Likewise, improving product quality requires the use of sophisticated high-tech robotic systems. Companies are constantly confronted with global competition that repeatedly demands the modernization and complete automation of production processes, which can only be achieved through the use of robots. Industrial robots improve the quality of work by taking on dangerous, tedious and dirty jobs that humans are unable to perform or that are not safe for human health. The development of robotic technology also provides an opportunity for the automation of tasks that could not be automated with first-generation robots. The application of robots in the following period will gradually increase worldwide and in all industrial branches. The implementation of "Industry 4.0" through the collaboration of robots, IoT and machine learning/AI will guide the development of robotic technology and its applications in the coming years.
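The robot-density metric used throughout this analysis is a simple ratio of installed robots to manufacturing employment. A minimal sketch with hypothetical counts (the function name and the example figures are ours, not official IFR statistics):

```python
def robot_density(installed_robots: int, employees: int) -> float:
    """Industrial robots per 10,000 persons employed in manufacturing,
    the density metric reported in the World Robotics publications."""
    return installed_robots * 10_000 / employees

# Hypothetical country: 174,000 robots, 20 million manufacturing employees.
print(robot_density(174_000, 20_000_000))  # -> 87.0 robots per 10,000 employees
```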

REFERENCES

Anderson, J., Smith, A. 2014. AI, Robotics, and the Future of Jobs, Pew Research Center, Internet & Technology. http://www.pewinternet.org/2014/08/06/future-of-jobs/.


Beaupre, M. 2015. Collaborative Robot Technology and Applications, International Collaborative Robots Workshop, Columbia; 1-41.
Doleček, Vlatko and Karabegović, Isak. 2008. Robots in the Industry, Technical Faculty of Bihać, Bihać, Bosnia and Herzegovina; 1-34.
Karabegović, Edina, Karabegović, Isak, and Hadžalić, Edit. 2012. Industrial Robots Application Trend in World Metal Industry, Journal Engineering Economics, Vol. 23, No. 4, 368-378.
Karabegović, Isak, Karabegović, Edina, Mahmić, Mehmed and Husak, Ermin. 2015. The application of service robots for logistics in manufacturing processes, Advances in Production Engineering & Management, 10(4); 185-194.
Karabegović, Isak and Husak, Ermin. 2014. Significance of industrial robots in development of automobile industry in Europe and the World, Journal Mobility and Vehicle, 40(1); 7-16.
Karabegović, Isak and Husak, Ermin. 2018. The Fourth Industrial Revolution and the Role of Industrial Robots with a Focus on China, Journal of Engineering and Architecture, 6(1); 1-13.
Karabegović, Isak. 2016. Role of Industrial Robots in the Development of Automotive Industry in China, International Journal of Engineering Works, Vol. 3, Iss. 12, 92-97.
Karabegović, Isak. 2017. The Role of Industrial and Service Robots in Fourth Industrial Revolution with Focus on China, Journal of Engineering and Architecture, 5(2); 110-117.
Litzenberger, Gudrun. 2018. World Robotics 2018, Industrial and Service Robots, Paper presented at the IFR Press Conference, Tokyo, Japan, October 18.
Makowieckaja, Olga. 2015. Industrial Robots on the Market of Means of Production Automation, Biuletyn Instytutu Spawalnictwa, Gliwice, Poland, 2; 22-27.
Naheme, S. 2017. Implementation of Collaborative Robot Applications, A Report from the Industrial Working Group, Kune; 18-38.
Robotics 2020 Strategic Research Agenda for Robotics in Europe, 2013. Produced by euRobotics aisbl, Draft 0v42 11/10/2013; 25-43.
https://ec.europa.eu/research/industrial_technologies/pdf/robotics-ppp-roadmap_en.pdf.
Sulavik, C., Portnoy, M., Waller, T. 2014. How a new generation of robots is transforming manufacturing, Manufacturing Institute USA, Gaithersburg, USA; 3-15.
The UK Landscape for Robotics and Autonomous Systems, 2015. Robotics and Autonomous Systems Special Interest Group, Barttelot Road, Horsham, UK.
World Robotics 2010 - Industrial Robots, 2010. The International Federation of Robotics, Statistical Department, Frankfurt am Main, Germany; 7-12.


World Robotics 2011 - Industrial Robots, 2011. The International Federation of Robotics, Statistical Department, Frankfurt am Main, Germany; 7-16.
World Robotics 2015 - Industrial Robots, 2015. The International Federation of Robotics, Statistical Department, Frankfurt am Main, Germany; 13-26.
World Robotics 2016 - Industrial Robots, 2016. The International Federation of Robotics, Statistical Department, Frankfurt am Main, Germany; 11-18.
World Robotics 2017 - Industrial Robots, 2017. The International Federation of Robotics, Statistical Department, Frankfurt am Main, Germany; 26-492.

In: Industrial Robots: Design, Applications and Technology ISBN: 978-1-53617-779-4 Editors: Isak Karabegović et al. © 2020 Nova Science Publishers, Inc.

Chapter 2

INDUSTRIAL ROBOT SYSTEMS AND CLASSIFICATIONS

Ermin Husak*, PhD
Department of Mechanical Engineering, Technical Faculty, University of Bihać, Bihać, Bosnia and Herzegovina

ABSTRACT

This chapter describes the basic industrial robot system as well as the basic elements that constitute this system. The kinematic structure of the mechanical arm is described, along with the manner in which the kinematic chain is formed. The basic terms used in industrial robotics are defined, such as work space, robot accuracy, repeatability, TCP (tool center point), robot degrees of freedom, etc. This chapter also provides a classification of industrial robots based on the mechanical structure of the robot. Each of the basic structures, such as Cartesian, Cylindrical, Spherical and Articulated, has been specially elaborated, with special reference to their drives and applications. Two specific groups of industrial robots, SCARA and delta robots, have been further elaborated.

Keywords: industrial robot, structure, degree of freedom, number of axes, work space

INTRODUCTION

The basic elements of industrial robots are the mechanical arm, sometimes called a manipulator with a wrist, the end effector, sensors, actuators and the controller. All these elements represent the basic configuration of the industrial robot and can be seen in three components: a teaching pendant, a control cabinet, and an industrial robot manipulator, shown in Figure 1 (Doleček and Karabegović 2002, 13-25; Doleček and Karabegović 2008, 10-28; Rehg 2003, 25-42).

* Corresponding Author's E-mail: [email protected].

Figure 1. Basic components of an industrial robot system.

MECHANICAL ARM

The basis for understanding the work of the mechanical arm of an industrial robot lies in understanding the operation of mechanisms. It is known from the theory of mechanisms that a mechanism consists of interconnected rigid bodies whose purpose is to perform the desired motion and transmit the necessary forces. The rigid bodies that make up the mechanism are called links. Links may have different geometrical forms. A connection of two links of a mechanism that allows relative motion between them is called a kinematic pair. A kinematic pair may have a minimum of 1 and a maximum of 5 degrees of freedom. From the point of view of rigidity of the mechanical structure, it is most preferable to use kinematic pairs with one degree of freedom of motion. These kinematic pairs are called joints. There are basically two types of joints: revolute (rotary) and prismatic (linear). Figure 2 shows the geometric shape of the revolute and prismatic joint. The revolute joint (R) is like a hinge, where relative rotation between the two bodies or links is enabled. The prismatic joint (P), sometimes called a translational joint (T), gives the possibility of relative translation between the two bodies (Jazar 2007, 3; Velagić 2008, 4-5). In industrial robots, a joint of the robotic arm is associated with the term axis. An axis corresponds to a degree of freedom of motion. Robots are often classified based on the number of degrees of freedom of motion or the number of axes. Each joint that actively participates in the motion of the robot is powered by an actuator. By connecting the links and joints to a fixed or mobile base, the structure of an industrial robot is formed (Figure 3).

Figure 2. Revolute and prismatic joints.

Figure 3. Structuring an industrial robot.

There are five types of joints that almost all industrial robots have:

• Linear joint,
• Orthogonal joint,
• Rotational joint,
• Twisting joint,
• Revolving joint.

All these joints are shown in Figure 4 (Groover 2008, 231-232; Velagić 2008, 4-5). The manipulator or mechanical arm is the actual robot arm. It consists of a number of moving links and axes that are connected into a kinematic chain. The mechanical arm is driven by electric, pneumatic or hydraulic actuators, with each active joint moved individually by one of the actuators. The actuator transmission is adjusted depending on whether it is a rotary or a linear joint. Depending on the type of joints and links that form the structure of the robot, the basic structures of the robot can be Cartesian, Cylindrical, Spherical and Articulated. Figure 5 shows a mechanical arm with 6 rotary joints, which is a manipulator with six degrees of freedom of motion. Such an industrial robot arm can be divided into two parts. The first three joints, compared to the human arm, represent the arm sweep, shoulder swivel and elbow extension, and they serve to position the manipulator in the work space. The other three joints, together with the rest of the structure, represent the wrist and allow the orientation of the end effector; these motions are pitch, yaw and roll (Groover 2008, 235; Weber 2002).
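Since every revolute or prismatic joint contributes exactly one degree of freedom, the DOF of a serial arm can be read directly off its joint sequence. A minimal sketch (the letter notation and helper are ours, added for illustration):

```python
def degrees_of_freedom(chain: str) -> int:
    """chain lists the joints from base to wrist: 'R' for revolute,
    'P' for prismatic. Each 1-DOF joint adds one degree of freedom,
    so a serial arm's DOF equals its joint count."""
    assert set(chain) <= {"R", "P"}, "only 1-DOF revolute/prismatic joints"
    return len(chain)

# The 6-rotary-joint arm of Figure 5: three position axes plus three wrist axes.
chain = "RRRRRR"
print(degrees_of_freedom(chain))  # -> 6
print(chain[:3], "position axes;", chain[3:], "wrist (orientation) axes")
```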

Figure 4. Types of joints used in industrial robots.


Figure 5. Industrial robot with 6 DOF.

Figure 6. Three degree of freedom wrist.

The wrist may have one, two or three degrees of freedom of motion, depending on the application. Figure 6 shows a wrist with three degrees of freedom of motion, called a spherical wrist. The last wrist element carries a tool plate, to which the end effector is attached (Jazar 2007, 5).


End Effector

In order for an industrial robot to perform its operational function, it is necessary to place an appropriate device on the last link of the robot, the tool plate. This device is called the end effector. The simplest form of end effector is the gripper, whose actions are to open and close. It is the end effector that performs the work. More complex forms of end effectors would be anthropomorphic hands. The wrist and end effector together represent the robotic hand. Figure 7 shows two types of grippers, one with two fingers and the other with three. Figure 8 shows a set of different end effectors that can be mounted on the same robot wrist (Doleček and Karabegović 2002, 23; Jazar 2007, 6; Whitney 2004).

Figure 7. Grippers with two and three fingers.

Figure 8. Set of end effectors.

Teaching Pendant

One of the elements that every robot system has is a programming station. The programming of industrial robots can be done online or offline. The teach pendant is used to program robots online. For offline programming, simulation methods of the robotic process or text-based programming methods are used. The teach pendant shown in Figure 9 is a hand-held device with a flat touch screen. With this device, the robot is taught which movement sequences to perform, and commands are entered in the appropriate robot language. Figure 9 shows the KUKA smartPAD for online robot programming with the teaching method (Weber 2002; Newton 2002).

Figure 9. Teach pendant.

Controller

In order for the industrial robot to perform the required motions, it is necessary to control the servomotors by means of a controller. A controller is a special type of computer that has the same components as a classic computer, with special designs for real-time systems. All components involved in the control of the robotic system are housed in the control cabinet. Figure 10 shows a control cabinet of a KUKA robot (Ogata 2007; Robert 2008, 111).

Figure 10. Control cabinet.


Actuators

In order for an industrial robot to move each joint during its operation, it needs different energy sources for the different functions of industrial robots. Motion of the arm, elbow, wrist and end effector is produced by various actuators. Actuators used in industrial robots are divided into three types:

• Electric,
• Hydraulic,
• Pneumatic.

Most grippers use compressed air for actuation, so a source of compressed air is almost always required for an industrial robot to operate; it is also required for certain applications performed by industrial robots. Pneumatic drives for the linear motion of robot joints are common in Cartesian-structure robots and in small robot designs. Hydraulic drives are most commonly encountered in large robots, providing them with high speed and power. Most industrial robots today are driven by electric motors, which are divided into three types:

• DC (direct current) motors,
• AC (alternating current) motors,
• Stepper motors.

In some cases, such as working conditions with high temperatures, electric motors simply cannot be used, and most such industrial robots are hydraulically driven (Robert 2008, 17-I; Bolton 2008, 150).

Sensors

The elements in an industrial robot system that are in charge of gathering information, both for the control of servomotors and for environmental information acquisition, are called sensors. For collecting information on internal states, the most important group are sensors of position, velocity, acceleration and force. Proximity sensors and robotic vision systems are the most important for gathering environmental information (Karabegović et al., 2019; Robert 2008, 17-I; Bolton 2008, 22).


Figure 11. Specification for industrial robot.

INDUSTRIAL ROBOT SPECIFICATION AND TERMS

With each development of new technologies, new terms emerge that are characteristic of that technology, and knowing them enables a good understanding and application of the technology. In order for the buyer and user of industrial robots to select a robot with the exact characteristics required, they must understand the terms used in industrial robotics as they appear in the robot supplier's specification. Figure 11 shows the specification for a FANUC industrial robot.

Work Space

The work space or work envelope is the space in which the end effector can move without restrictions other than those that depend on the structure of the robot itself, i.e., the constraints that exist in the joints. Figure 12 shows the FANUC Robot M-20iA/20M/35M work space. From the specification it is possible to see the maximal reach of the robot, which defines its work space.
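As a rough illustration, a candidate point can be screened against the work envelope by comparing its distance from the base with the robot's maximum reach. This is a sketch only: the reach value below is hypothetical (take it from the supplier's datasheet), and a real envelope is further limited by joint constraints.

```python
import math

def within_reach(point, max_reach_mm, base=(0.0, 0.0, 0.0)):
    """Coarse work-space test: True if the point lies inside the sphere
    swept by the maximum reach. This over-approximates the real work
    envelope, since joint limits and the robot body are ignored."""
    return math.dist(point, base) <= max_reach_mm

# Hypothetical maximum reach of 1800 mm:
print(within_reach((1000.0, 800.0, 500.0), 1800.0))  # -> True
print(within_reach((2500.0, 0.0, 0.0), 1800.0))      # -> False
```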


Figure 12. Industrial robot work space.

Number of Axes

Robot axes are counted starting from the base of the robot with number 1. Each subsequent axis is denoted by the next number, moving towards the end effector. The first three axes are called position axes, and the following ones, used for orientation, are called orientation axes. The opening and closing of the end effector is not considered an independent axis, because it is used neither for positioning nor for orientation of the robot.

Payload

Payload, or maximum load capacity, is the weight that a robot can manipulate anywhere in its work space at the specified speed, acceleration, repeatability, etc. This feature makes it easy for users to choose a robot simply by knowing the loads they plan to apply.

Coordinate Systems

An industrial robot system may have several coordinate systems. All programmed points are identified with respect to these coordinate systems. The basic coordinate system is positioned at the base of the robot. Other coordinate systems can be placed in the robot flange, on a pallet, or in the center of the tool.

Tool Center Point (TCP)

The tool center point is the origin of the tool coordinate system. The TCP is located relative to the flange center point by an offset. Properly defining the TCP is essential for accurate and repeatable robot operation. Figure 13 shows the position and orientation of the coordinate system in the flange and the coordinate system at the tool center point.

Figure 13. Origin of tool center point.
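The flange-to-TCP offset can be made concrete with a small transform. This is a simplified sketch with hypothetical numbers: it rotates the tool offset only about the Z axis (yaw), whereas real controllers compose full 6-DOF poses.

```python
import math

def tcp_position(flange_pos, flange_yaw_deg, tool_offset):
    """TCP position: the tool offset, defined in the flange frame, is
    rotated by the flange orientation (yaw about Z only, for a minimal
    illustration) and added to the flange position."""
    c = math.cos(math.radians(flange_yaw_deg))
    s = math.sin(math.radians(flange_yaw_deg))
    ox, oy, oz = tool_offset
    fx, fy, fz = flange_pos
    return (fx + c * ox - s * oy, fy + s * ox + c * oy, fz + oz)

# Hypothetical 100 mm tool along the flange X axis, flange yawed 90 degrees:
x, y, z = tcp_position((300.0, 40.0, 100.0), 90.0, (100.0, 0.0, 0.0))
print(round(x, 3), round(y, 3), round(z, 3))  # -> 300.0 140.0 100.0
```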

Accuracy

The accuracy of the robot is a measure of the deviation of the position the robot achieves, such as the TCP, relative to the desired position that the robot should occupy. For example, the robot is instructed to move the TCP by X = 300 mm, Y = 40 mm and Z = 100 mm with an exact orientation of the end effector. The accuracy of the robot is the extent to which the achieved TCP position deviates from the desired, previously specified value. The degree of inaccuracy is obtained by summing the mechanical tolerances of each element of the mechanical arm mounted one on another. Some assembly applications require that objects be located with a precision of 0.05 mm, while other applications generally require an accuracy of 0.5 to 1 mm.


Figure 14. Accuracy and repeatability.

Repeatability

Repeatability is a measure of the robot's ability to bring the TCP back to a previously learned position in the work space. Accuracy and repeatability are two different concepts of robot precision: the first describes the robot's ability to reach a given point, and the second its ability to come back to the same point (Figure 14).
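The distinction can be shown numerically: accuracy compares the mean attained position with the commanded one, while repeatability measures the scatter of the attained positions around their own mean. A simplified sketch with hypothetical measurements (the ISO 9283 definitions are more detailed, e.g. they include orientation):

```python
import math

def mean_point(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

def dist(a, b):
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def accuracy(commanded, attained):
    """Distance from the commanded point to the mean attained point."""
    return dist(commanded, mean_point(attained))

def repeatability(attained):
    """Largest spread of the attained points around their own mean
    (a simplified stand-in for the ISO 9283 repeatability value)."""
    m = mean_point(attained)
    return max(dist(p, m) for p in attained)

# Hypothetical repeated approaches to the commanded TCP (300, 40, 100) mm:
runs = [(300.4, 40.1, 100.0), (300.6, 39.9, 100.0), (300.5, 40.0, 100.0)]
print(round(accuracy((300.0, 40.0, 100.0), runs), 3))  # -> 0.5 (offset bias)
print(round(repeatability(runs), 3))                   # -> 0.141 (tight cluster)
```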

Velocity

The rate at which an industrial robot can move each axis and the TCP according to the program is a measure of its velocity. The average velocity between two programmed points is lower than the peak velocity the robotic arm achieves while moving between them, because the arm must accelerate and decelerate as it passes through the points.
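Why the average stays below the programmed peak can be seen with a trapezoidal motion profile (accelerate, cruise, decelerate). A sketch with hypothetical axis values:

```python
def average_velocity(distance, v_peak, accel):
    """Average velocity of a point-to-point move with a trapezoidal
    velocity profile. The acceleration and deceleration ramps make
    the average lower than v_peak."""
    t_ramp = v_peak / accel            # duration of each ramp
    d_ramps = v_peak ** 2 / accel      # distance covered by both ramps together
    assert distance >= d_ramps, "move too short to reach v_peak"
    t_total = 2 * t_ramp + (distance - d_ramps) / v_peak
    return distance / t_total

# Hypothetical axis: 2000 mm/s peak, 4000 mm/s^2, over a 2000 mm move:
print(round(average_velocity(2000.0, 2000.0, 4000.0), 1))  # -> 1333.3 mm/s
```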

MECHANICAL STRUCTURES OF ROBOTS

An industrial robot must be able to position the robot arm in space to perform its function. The basic mechanical structures of a robotic arm or manipulator are:

• Cartesian structure or TTT,
• Cylindrical structure or RTT,
• Spherical structure or RRT, and
• Articulated structure or RRR.
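The letter codes map directly to the structure names, which can be sketched as a lookup (our own helper, added for illustration; the letters alone are coarse — a SCARA, for instance, also reads RRT but has parallel joint axes):

```python
# Basic structure names keyed by the first three (position) joints,
# counted from the base: T = translational (prismatic), R = revolute.
STRUCTURES = {
    "TTT": "Cartesian",
    "RTT": "Cylindrical",
    "RRT": "Spherical",
    "RRR": "Articulated",
}

def basic_structure(chain: str) -> str:
    """Classify a manipulator by its first three joint letters."""
    return STRUCTURES.get(chain[:3].upper(), "special (e.g. SCARA, delta)")

print(basic_structure("RRRRRR"))  # 6-axis arm -> Articulated
print(basic_structure("TTT"))     # -> Cartesian
```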


Cartesian Structure

A robot structure consisting of three translational joints constitutes the Cartesian structure; the three joints allow the robot end effector to move in the X, Y, Z coordinates, as shown in Figure 15. The work space shown in Figure 15 is a rectangular prism. The Cartesian structure of the robot can be found in two versions: gantry and traverse (linear). Electric motors and linear pneumatic actuators are most commonly used to drive these robots. The structure of these robots is rigid, allowing for great accuracy (Karabegović et al., 2012).

Figure 15. Cartesian structure of industrial robot with work space.

Figure 16. Linear and Gantry industrial robots.


The control system of such robots is not complex. Robots with a Cartesian structure are most commonly used in material handling, assembly and coordinate measuring machines (CMM).

Cylindrical Structure

When the first translational joint of a Cartesian-structure robot is replaced with a revolute joint, a robot of cylindrical structure is obtained. The workspace of this robot has the form of a hollow cylinder, as shown in Figure 17.

Figure 17. Cylindrical structure of industrial robot with work space.

This structure enables the entire robot to rotate at the base around the vertical axis, while the two translational joints move the last arm member, with the end effector, vertically and horizontally. A good feature of these robots is the large horizontal stroke of the end effector. Robots with this structure are very rigid and handle heavy loads well. The actuators can be pneumatic, hydraulic or electric. This structure allows efficient servicing of machines arranged around a circle of small radius.
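The cylindrical joint coordinates (base rotation, horizontal stroke, vertical stroke) map to Cartesian TCP coordinates by the standard cylindrical-to-Cartesian conversion; a minimal sketch with invented joint values:

```python
import math

def cylindrical_fk(theta_deg, r, z):
    """Map cylindrical joint values (rotation about the base axis,
    horizontal stroke r, vertical stroke z) to Cartesian TCP coordinates."""
    theta = math.radians(theta_deg)
    return (r * math.cos(theta), r * math.sin(theta), z)

# Base rotated 90 degrees, arm extended 400 mm, lifted 250 mm (invented values):
x, y, z = cylindrical_fk(90.0, r=400.0, z=250.0)
```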

Spherical Structure

The spherical structure is characterized by two revolute joints followed by a third, translational joint. To move the robot end effector in the X, Y, Z coordinate system, the joints must move in a coordinated manner. The workspace of this robot is bounded by two concentric hemispheres as shown in Figure 18, although the actual workspace is significantly smaller than the theoretical one due to various motion restrictions in the joints.


The actuators of such robots are mostly hydraulic and electric, while the end effector or gripper alone may be pneumatically actuated. End effector orientation is achieved by three rotary joints. The advantages noted for the cylindrical structure largely apply to the spherical structure as well.

Figure 18. Spherical structure of industrial robot with work space.

Articulated Structure

Figure 19. Articulated structure of industrial robot.


The articulated structure is the one most commonly used in industrial robots today. It is often referred to as the anthropomorphic structure of the robot, as shown in Figure 19.

Figure 20. Workspace of articulated robots.

The workspace of these robots is not strictly regular in shape, as can be seen in Figure 20. The basic structure consists of three revolute positioning joints and three orientation joints, so these robots have six degrees of freedom of motion. Examples of industrial robots with an articulated structure are shown in Figure 21. Robots of this type are used for welding, painting, assembly, etc., and belong to the group of vertically articulated robots. A special group of articulated robots consists of horizontally articulated robots, which come in two structural designs: the Selective Compliance Articulated Robot Arm (SCARA) and the Horizontally Base Jointed Robot Arm (Karabegović and Husak 2014, 10-12; Karabegović and Husak 2010, 38-39).


Figure 21. Examples of articulated structure.

Selective Compliance Articulated Robot Arm – SCARA

The structure of this robot in most cases consists of two rotary joints and one translational joint. The axes of all three joints are placed vertically, i.e., two horizontal segments


of the arm are connected to the vertical one. The workspace is characteristic and is shown in Figure 22. The load-bearing pillar is of high rigidity, which allows the transfer of relatively large masses. This robot structure has found wide application in the assembly of electronic circuits.
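The planar position of a SCARA TCP follows from the two horizontal links and the vertical translational joint; a minimal sketch (link lengths and joint values are invented):

```python
import math

def scara_fk(theta1, theta2, d3, l1=0.25, l2=0.15):
    """Position of a SCARA TCP: two revolute joints about vertical axes
    (angles in radians) plus a vertical translational joint of stroke d3.
    Link lengths l1, l2 are in metres (invented defaults)."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    z = -d3  # the translational joint moves the tool downward
    return x, y, z

# First link along x, elbow bent 90 degrees, tool lowered 50 mm:
x, y, z = scara_fk(0.0, math.pi / 2, 0.05)
```

Because all revolute axes are vertical, the structure is stiff vertically but compliant laterally, which is exactly the "selective compliance" the name refers to.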

Figure 22. Types of SCARA robots and their workspaces.

Figure 23. The Horizontally Base Jointed Arm.

The Horizontally Base Jointed Arm has a structure similar to SCARA, with the difference that the second joint is translational, with its axis directed vertically. This structure has characteristics similar to the cylindrical one, except that it takes up less space (Robotics Online 2019).


Parallel Robots (Delta)

Figure 24. Examples of parallel robots.

This robot has three or more arms attached to a common platform that carries the end effector. These robots have six degrees of freedom of motion and can be fully positioned and oriented. The manipulation space is limited by this construction. Parallel robots have found application in picking and packaging, as well as in the assembly of electronic parts, and they can be very fast.

CONCLUSION

The objective of this chapter was to familiarize readers with basic information and terms related to industrial robots, in order to make the rest of this book easier to understand. The basis of an industrial robot is a mechanical arm or manipulator. A mechanical arm is essentially a kinematic chain, so a clear understanding of it requires some background in the theory of mechanisms. To complete the mechanical structure of an industrial robot, a wrist and an end effector must be added to the mechanical arm; the wrist and end effector make up the robot's hand, and the mechanical arm and hand together make up the mechanical structure of the industrial robot. Driving an industrial robot, i.e., bringing the end effector to the desired position, is achieved by synchronizing the movement of each joint. The first three joints are used for positioning and the remaining ones for orientation of the end effector. One of three types of actuators is used to drive the joints: pneumatic, hydraulic or electric. Sensors and controls complete the industrial robot system. Depending on


the requirements of the application, a robot with a specific structure is selected. The four basic structures of industrial robots are Cartesian, cylindrical, spherical and articulated, each with its advantages and disadvantages. A particular issue not specifically addressed in this chapter is the safety of robotic systems. Robots enclosed by fences to protect workers can be classified as first-generation robots. Today, collaborative robots have been developed that can work together with humans without injuring them, i.e., they are not separated by any fence.

REFERENCES

Balič, J. 1999. Contribution to integrated manufacturing. DAAAM International Vienna.
Balič, J. 2004. Intelligent manufacturing systems (in Slovenian). University of Maribor, Faculty of Mechanical Engineering, Slovenia.
Bolton, W. 2008. Mechatronics – Electronic Control Systems in Mechanical and Electrical Engineering. Pearson Education.
Cetinkunt, S. 2007. Mechatronics. John Wiley & Sons, USA.
Clarence, W. De S. 2005. Mechatronics – An Integrated Approach. CRC Press LLC, Florida.
Doleček, V. and Karabegović, I. 2008. Robots in the industry (in Bosnian). Society for Robotics of Bosnia and Herzegovina and Technical faculty Bihać, Bosnia and Herzegovina.
Doleček, V. and Karabegović, I. 2002. Robotics (in Bosnian). Technical faculty Bihać, Bosnia and Herzegovina.
Groover, M. P. 2008. Automation, Production Systems and Computer-Integrated Manufacturing. Pearson Education.
Jazar, R. N. 2007. Theory of Applied Robotics. Springer, New York.
Karabegović, I. and Husak, E. 2010. "Robot integration in production process modeling and simulation." Proc. of 1st International Scientific Conference of Engineering, MAT 2010, pp. 37-40, Mostar, Bosnia and Herzegovina.
Karabegović, I. and Husak, E. 2014. "Significance of industrial robots in development of automobile industry in Europe and the world." International Journal for Vehicle Mechanics, Engines and Transportation Systems, Vol. 40, No. 1, 8-16.
Karabegović, I., Doleček, V. and Husak, E. 2015. "The Role of Industrial and Service Robots in Manufacturing Processes." International Journal of Robotics and Automation Technology, Vol. 2, No. 1, 26-31.
Karabegović, I., Karabegović, E. and Husak, E. 2012. "Application Analysis of Industrial Robots Depending on Mechanical Robot Structure." Proc. of 16th International Conference Mechanika 2012, Kaunas, Lithuania.


Karabegović, I., Karabegović, E. and Husak, E. 2012. "Trend of Industrial Robot Share in Different Branches of Industry in America." International Journal of Engineering Research and Application, IJERA, Vol. 2, No. 2, 479-485.
Karabegović, I., Karabegović, E., Mahmic, M. and Husak, E. 2019. "The Role of Smart Sensors in Production Processes and the Implementation of Industry 4.0." Journal of Engineering Sciences, Vol. 6, No. 2.
Kusiak, A. 2000. Computational Intelligence in Design and Manufacturing. John Wiley & Sons.
Miljković, Z. 2003. Artificial neural network systems in production technologies (in Serbian). Faculty of Mechanical Engineering, Beograd, Serbia.
Newton, C. B. 2002. Robotics, Mechatronics, and Artificial Intelligence. Butterworth-Heinemann.
Ogata, K. 2007. Modern Control Engineering. Pearson Education.
Rehg, J. A. 2003. Introduction to Robotics in CIM Systems. Pearson Education, New Jersey.
Robert, H. B. 2006. Mechatronics – An Introduction. CRC Press, Taylor & Francis.
Robert, H. B. 2008. Mechatronic Systems, Sensors, Actuators: Fundamentals and Modeling. CRC Press, Taylor & Francis.
Robotics Online. 2019. https://www.robotics.org/product-catalog-detail.cfm/ReisRobotics-USA-Inc/RH-Series-Robots-Horizontal-articulated-arm-robots-for-tightwork-envelopes/productid/1859.
Velagić, J. 2008. Analysis and control of robotic manipulators (in Bosnian). University Book, Mostar, Bosnia and Herzegovina.
Weber, W. 2002. Industrieroboter (in German). Carl Hanser Verlag, München, Germany.
Whitney, D. E. 2004. Mechanical Assemblies. Massachusetts Institute of Technology / Oxford University Press.

In: Industrial Robots: Design, Applications and Technology ISBN: 978-1-53617-779-4 Editors: Isak Karabegović et al. © 2020 Nova Science Publishers, Inc.

Chapter 3

SENSORS IN ROBOTICS

Mehmed Mahmić*
Technical faculty, Department of Mechanical Engineering, University of Bihać, Bihać, Bosnia and Herzegovina

ABSTRACT

This chapter presents sensors that are applied in robotics. For a robot to perform certain tasks without interruption, it must obtain information directly connected to task performance. This information is obtained by measuring certain physical quantities with sensors. The basic division of sensors is into sensors of internal state and sensors of external state. The level of the sensor system used with a given robot directly determines the class of tasks that the robot can carry out, and programming a robot is easier when it is equipped with a higher-level sensor system. In the future, significant progress of sensor systems is expected, e.g., a sense of touch with the ability to determine surface softness, a sense of hearing, etc.

Keywords: sensor, measurement, robot, sensor system, robot’s environment, robotics

INTRODUCTION

Robots, as flexible systems, carry out certain classes of tasks and find ever-growing application in modern industry. To perform these tasks, certain information, acquired by measuring, is necessary.

* Corresponding Author's E-mail: [email protected].


In modern science and technology, measurement plays a leading role, because it provides quantitative information about the object of research. Information obtained by measuring most often has the form of an electrical signal (Popović 1996). Measurement of various non-electrical quantities is carried out with certain types of transducers called sensors. More broadly, the term sensor covers all units in the chain from the reception and conversion of the measuring signal to its processing, i.e., from the sensing element to a microprocessor, Figure 1 (Doleček and Karabegović 2002; Bremer and Pfeifer 1995; Širić 2005; Bašić 2008). A converter is a device that converts a physical or chemical quantity into an electrical signal, which can be a voltage, a current, a frequency, a series of impulses or a phase shift. The signal that comes out of the converter is most often of weak intensity, so it needs to be amplified. The sensor is, therefore, a device that converts a measured non-electrical quantity into an electrical signal; it is of small dimensions, has certain technical characteristics and has the ability to process the signal. In robotics, sensors are applied for measuring angular and translational movements, distance, force, acceleration, etc. In mobile robots, a special class of sensor systems is formed by vision systems (Doleček and Karabegović 2002; Rogić 2001; Širić 2005; Bašić 2008). Robots are expected to be independent in their work, and their development goes in this direction.
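The converter-amplifier chain of Figure 1 can be sketched as a toy pipeline ending in an A/D conversion, as a microprocessor would see it (all numbers — sensitivity, gain, ADC resolution — are invented for illustration):

```python
def measurement_chain(physical_value, sensitivity_v_per_unit, gain,
                      adc_bits=12, v_ref=5.0):
    """Toy measuring chain: converter -> amplifier -> A/D conversion.
    Returns the digital code the microprocessor would read."""
    v_sensor = physical_value * sensitivity_v_per_unit   # weak converter output
    v_amplified = min(v_sensor * gain, v_ref)            # amplifier, clipped at v_ref
    code = int(v_amplified / v_ref * (2 ** adc_bits - 1))
    return code

# e.g., a force of 12 N through a 1 mV/N converter and a gain-100 amplifier:
code = measurement_chain(12.0, sensitivity_v_per_unit=0.001, gain=100.0)
```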

Figure 1. Measuring system as a sensor.

For a high degree of independence in work, a significant advance in sensor technique is necessary: a sense of touch with the possibility of determining surface roughness and softness, three-dimensional vision, a sense of hearing with voice recognition, etc. Some of the requirements set for sensor application in robotics are (Bremer and Pfeifer 1995; Baginski 1998):

- to indicate the angular joint position, an encoder is usually put on every robot axis,
- the possibility of transforming joint coordinates into the Cartesian coordinate system,
- the application of telerobotics in places where a human cannot be present, etc.

The development of sensors is carried out in three directions:

- miniaturisation and a higher level of integration of components,
- realisation of multiple actions, where one sensor determines several different physical quantities at the same time,
- expansion of functional sensor possibilities.


DIVISION OF SENSORS

The sensor classification used in robotics is defined in many ways, according to appropriate criteria. A classification includes certain sensor characteristics or principles, e.g.:

- the principle on which the sensor's operation is based,
- the physical or chemical quantity that changes some sensor parameter,
- the energy type: solar, electric, mechanical, and
- the object–sensor relation by which the signal is emitted: contact or contactless.

The division of sensors into contact and contactless is one of the earliest divisions, but it is still in use. According to the complexity of the sensor information, sensors are classified in three groups:

- proximity sensors,
- measuring sensors,
- image sensors: tactile, thermal and optical.

Proximity sensors are widely applied in the control of reference robot positions or manipulative processes. These sensors have a two-level output signal, meaning the output depends only on whether the measured value is bigger or smaller than a default threshold (Bašić 2008; Pavlin 1995; Popović 1996). Measuring sensors in a stationary position produce an electrical signal proportional to the measured physical quantity. They are most often made as resistive, capacitive, electromagnetic, piezoelectric or optoelectronic. Image sensors are much more complex than measuring sensors, even though they are less accurate. These sensors give information in the form of an image, which refers to the structure, form and topology of certain objects. The image is acquired as a projection of a three-dimensional scene onto a one-dimensional array, or a two-dimensional matrix of information (n x n) from measuring sensors. Electronic processing of the one-dimensional array or two-dimensional matrix gives a serial signal, which represents the given image. Through its sensors, the robot gets information about its position and its environment. Information about its position the robot gets by measuring the displacements of its joints; these measurements determine the internal coordinates, and sensors that give information about internal coordinates are called internal sensors or sensors of internal state. Sensors of


internal state were built even into robots of the first generation, where they are used for measuring kinematic quantities. Any servo-system requires measurement of position and speed, so these sensors are built into all robots even today. A modern robot can have a control scheme at several levels; the information for the lowest level, i.e., the servo-system level (Bašić 2008), is obtained by internal sensors. Sensors that give information about the relation between the robot and its environment are called external sensors or sensors of external state. These sensors give information for the higher levels of robot control, at which the robot reacts intelligently within previously defined limitations. Sensors of external state have a more complex construction and control than sensors of internal state. The regulation of servo-systems, motors and cylinders is carried out with measurements of movements, angles, speeds and forces. In parallel with robot development, sensor systems for recognition of external states are developing, i.e., systems that collect information about the state of the robot's environment.

SENSORS OF INTERNAL STATE

To programme and control the work of the robot, it is necessary to know information about position, movement and force. Sensors of internal state give information about the mechanical system, in relation to coordinate systems attached to the robot. Kinematic modelling and control also demand measuring the force on the object caught by the robot's gripper or effector. The most relevant parameters of the internal robot state are the shifts and angles of the mechanical robot system, based on which the robot control unit sends appropriate signals back to the drive motors or cylinders (Doleček and Karabegović 2002; Širić 2005; Karabegović et al. 2003; Popović 1996). Sensors of internal state are classified in four groups: sensors of position, sensors of speed, sensors of deflection and the inertial navigation system.

Sensors of Position and Movement

Incremental Measuring Encoders

Incremental measuring encoders are most often applied for measuring the angle of shaft rotation. These encoders work on the principle of the photoelectric effect. They are also called impulse encoders of rotation number,


incremental encoders of rotation number, incremental encoders of angular step, or incremental encoders of impulses. Figure 2 shows one incremental encoder with its integrated elements (Doleček and Karabegović 2002). A series of electric impulses is produced by rotating the graduated disc, which directly provides the measure of the rotation angle. On the glass disc, lines are set radially, forming masked and unmasked segments. The disc is set between the light source and the reading panel. For measuring rotation through angles greater than 360°, a third track is applied, on which a reference signal is generated for every full rotation. Segments on the glass disc are illuminated with parallel rays through the diaphragm, directed from one light-emitting diode (Bremer and Pfeifer 1995; Karabegović et al. 2003).
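Counting the impulses from such a disc gives the rotation angle directly; a minimal sketch (the pulses-per-revolution value is an invented assumption, and the sign of the count stands in for the direction information a real quadrature encoder provides):

```python
def encoder_angle(pulse_count, pulses_per_rev=1024):
    """Rotation angle in degrees from an incremental pulse count;
    a negative count means rotation in the opposite direction."""
    return 360.0 * pulse_count / pulses_per_rev

angle = encoder_angle(256)   # a quarter turn: 90 degrees
```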

Figure 2. Schematic representation of incremental measurer of rotation and appearance (Heidenhan 2016).

Sensors of Speed

The speed and acceleration of the joints of the mechanical robot system must be known when studying kinematic characteristics and programming robots, since the speed of the effector depends on the type of work the robot carries out. Rotational joints are most often used in robotics, so sensors for measuring angular speed are treated here (Doleček and Karabegović 2002; Rogić 2001; Širić 2005; Popović 1996). The speed of translational movement is not measured directly; instead, the translational movement is converted into rotational movement, so the same sensors are used.


Tachogenerator

The output signal of a tachogenerator is proportional to the input angular speed of a shaft, i.e., the angular speed of the rotor. A tachogenerator is a direct-current machine with permanent magnets, Figure 3. In this case, the DC motor has a reverse function and works as a generator (Bašić 2008). The tachogenerator consists of a wound rotor, which rotates inside a magnetic field induced by magnets set on the stator. The angular speed of a robot joint is measured by coupling the shaft of the tachogenerator, through a reducer, to the joint.
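The proportionality between output voltage and shaft speed, combined with the reducer ratio, yields the joint speed; a minimal sketch with invented constants:

```python
def angular_speed(v_out, k_e=0.05, gear_ratio=100.0):
    """Joint angular speed (rad/s) from a tachogenerator output voltage.
    k_e: generator constant in V per rad/s (invented value); the reducer
    divides the measured shaft speed down to the joint speed."""
    omega_shaft = v_out / k_e
    return omega_shaft / gear_ratio

omega_joint = angular_speed(2.5)   # 2.5 V -> 50 rad/s at the shaft
```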

Figure 3. Unidirectional tachogenerator.

Sensors of Deflection

This type of sensor is applied with autonomous mobile robots. Sensors of deflection have high accuracy.

Piezoelectric Sensor

These sensors have a multilayer structure, Figure 4. The sensor base is made of silicon, and on the base are set electrodes of diameter 1.5 to 6 mm. Individual electrodes are fixed with a resin, and over the electrodes is a thin layer (10 to 100 μm) of barium-titanate piezoceramics. These sensors work on the principle of piezoelectricity. With these sensors, it is possible to detect forces of 0.01 N, with linearity of 0.2 - 0.3% and frequency range 0.2 - 0.3%. The output information is an image in binary form; a special programme then determines the position and orientation of the object. Programmes for image processing additionally improve the accuracy of these sensors, e.g., for a sensor of 16 x 16 elements with spatial resolution 8 mm and electrode diameter 6 mm, the center is detected with an error of ± 0.25 mm (Doleček and Karabegović 2002).


Figure 4. Piezoelectric sensor.

INS (Inertial Navigation System)

The position of mobile robots is determined with a direction sensor. Direction sensors can be gyroscopes, geomagnetic sensors, etc.

Geomagnetic Sensors

A magnetic compass is the best-known sensor of this kind. Geomagnetic sensors applied on mobile robots can be divided into the following categories:

- mechanical magnetic compasses,
- fluxgate compasses,
- compasses with Hall's effect¹,
- magneto-resistive compasses and
- magneto-elastic compasses.

Figure 5 shows some of the magnetic compasses. A sensor with the Hall's effect can be used as a shaft encoder, or to build a digital compass, i.e., a magnetic field detector.

¹ Hall's effect: the behaviour of electrons in a magnetic field.
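A digital compass of this kind computes the heading from the horizontal components of the measured magnetic field; a minimal sketch (ignoring tilt compensation and magnetic declination, with invented field values):

```python
import math

def compass_heading(bx, by):
    """Heading in degrees, measured clockwise from magnetic north,
    from the two horizontal components of the magnetic field."""
    heading = math.degrees(math.atan2(by, bx))
    return heading % 360.0

h = compass_heading(0.0, 1.0)   # field along +y: 90 degrees
```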


SENSORS OF EXTERNAL STATE

These sensors give information about the environment of the robot and about the contact between the robot and objects in its environment.

Figure 5. Magnetic compasses (left: vector digital compass) (Krstulović 2003; Bejdić 2006).

Sensors of external state are most often classified as contact and contactless. These sensors are used for determining distance, i.e., range, and can be used for object detection.

Tactile Sensors

The robot needs adequate information about changes in the workspace and about the intensity of interaction between the robot and its environment. Such information is obtained through sensors that measure physical quantities in the robot's workspace. These sensors give information not only about the object's form, but also about the intensity of the force with which the robot acts on the object (Doleček and Karabegović 2002; Binner 1999; Popović 1996). A tactile sensor can be built into the fingertip of the robot's gripper, Figure 6. In that case, the tactile sensor is a miniature six-component force/moment sensor. When manipulating an object, information from several contact points is needed. For such information, several sensors arranged in a matrix must be installed, or a touch sensor that gives information over a certain surface.


Figure 6. Measuring of contact force with sensor on the finger (OptoForce 2014).

Magneto-Resistive Touch Sensor

Magneto-resistive touch sensors work on the principle of the change of electric resistance of a magneto-resistor, depending on the magnetic field. The surface of this sensor is made of aluminium oxide, and the sensor consists of several layers. On the sensor surface, a magneto-resistive material is applied in matrix form, Figure 7. A deformable rubber layer is set between the magnetic resistors and the insulators. Grooves are made in the rubber layer to allow lateral expansion when the rubber is pressed. On the rubber layer is set a layer of non-conductive material (insulator), and thin-layer copper conductors in the form of a tape are applied on the insulator (Bremer and Pfeifer 1995; Bejdić 2006).

Sensors of Force and Moment

Depending on the place of installation, i.e., on the concrete application, these sensors can, according to the main classification, be either internal or external sensors. The basic places on the robot where moment and force sensors are set are shown in Figure 8 (Popović 1996). In robotics, it is often necessary to have information about all three force components F (Fx, Fy, Fz) and all three moment components M (Mx, My, Mz), while with classic sensors it is enough to measure only one force component.


Figure 7. Magneto-resistive touch sensor.

Figure 8. Installation places of force/moment sensors.

There is a large number of different sensors used in robotics for measuring forces and moments. Which type of sensor will be used depends on the place of its installation. Force is measured according to two fundamental principles: the first is based on a change of material characteristics, e.g., inductance, electric resistance, etc.; the second on measuring elastic deformation (Bašić 2008). Under the influence of the force, the elastic elements of the measuring sensor are deformed; the resulting displacements yield the information about the force. The basic installation places of moment and force sensors, Figure 8, are the hand joint, elbow joint, shoulder joint and hand. When a robot is applied to tasks such as varnishing, polishing or cleaning, the robot control unit controls the movements of the gripper/effector according to the given positions, until contact between the robot's gripper/effector and the working object occurs (Doleček and Karabegović 2002; Krstulović 2003; Bašić 2008).


Force sensors must therefore be installed to give information about the intensity of the contact between the robot's gripper and the working object. The robot control unit processes the information about the contact force between the gripper and the working object and, according to the defined algorithm, carries out corrections.

Application of Strain Gauges with Sensors of Moment/Force

Strain gauges operate on the principle of the change of electrical resistance that appears when the length of a conductor changes. Applications of strain gauges are very broad, and they are installed in many types of sensors, either on the elastic element of the sensor or directly on the loaded construction.

Six-Component Hand Force Sensors

These sensors are very compact. Figure 9 shows a six-component hand force sensor whose elastic element has the form of a hollow cylinder. This cylinder is made of 4 longitudinal and 4 transverse elastic beams. The strain gauges are set on the elastic beams, which are sensitive to longitudinal and transverse deformation of the cylinder (Doleček and Karabegović 2002; Bremer and Pfeifer 1995; Rogić 2001). Two strain gauges are set on each elastic beam. Such a sensor can measure three force and three moment components. Apart from this sensor, there are other construction solutions for force sensors.

Figure 9. Cylindrical six-component hand force sensor.


Proximity Sensors

These sensors detect the presence of an object in their proximity. Proximity sensors send a signal when an object appears in the set zone in which they operate. The border distance, i.e., the zone in which the sensor operates, varies with the application, from a few millimetres to a metre or more.

Inductive Proximity Sensors

These sensors work on the principle of the change of coil inductance, which depends on the change of magnetic resistance (Doleček and Karabegović 2002; Bašić 2008). As an object approaches the sensor the inductance grows, and as it moves away the inductance decreases. The inductive sensor scheme is shown in Figure 10.

Figure 10. Inductive proximity sensor: a) sensor structure, b) appearance.

When the sensor is connected to the power supply, oscillations at the resonant frequency occur. These oscillations create a magnetic field that spreads along the sensor axis (Doleček and Karabegović 2002; Dieter W. Wloka 1992). The output sensor signal changes with the distance of the working object or item.

Ultrasonic Proximity Sensors

The basic elements of an ultrasonic proximity sensor are an ultrasonic transceiver, a device for forming the output signal, and an amplifier. The ultrasonic proximity sensor with its basic elements is shown in Figure 11. Ultrasonic proximity sensors are widely applied; one application example is shown in Figure 12.
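An ultrasonic sensor infers the distance from the round-trip time of the emitted pulse; a minimal sketch (assuming the speed of sound in air at roughly room temperature):

```python
def ultrasonic_distance(echo_time_s, c=343.0):
    """Distance in metres from the round-trip time of an ultrasonic
    pulse; c is the speed of sound in air (m/s, ~20 degrees C).
    The pulse travels to the object and back, hence the division by 2."""
    return c * echo_time_s / 2.0

d = ultrasonic_distance(0.01)   # 10 ms round trip
```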


Figure 11. Ultrasonic sensor: a) sensor structure, b) appearance (Pavlin 1995).

Figure 12. Detection of input/output.

Optoelectronic Sensors

These sensors consist of two basic parts: a transmitter and a receiver. Transmitters mostly use light-emitting diodes and laser diodes, while receivers mostly use phototransistors, photodiodes and photoresistors. The combination of a light-emitting diode and a phototransistor is most often used (Baginski 1998; Rogić 2001; Krstulović 2003). An optical transmitter-receiver pair works in a coordinated way in a certain region of the optical spectrum. The optical signal is in the region of visible light, shortwave infrared light or medium-wave infrared light (Dieter W. Wloka 1992). Optoelectronic sensors recognize an object at a certain distance, which depends on the intensity of the emitted light and the object's position in relation to the transmitter and receiver, Figure 13.


The sensor housings are mostly made of plastic. In installation places that are not easily accessible, the light can be brought in with optical fibres and mirrors.

Sensors of Vision

Sensors of robotic vision are a very important component of a robot's structure. Through the vision sensor, the robot gets most of the information about its environment, which makes it possible to increase the complexity of the tasks the robot performs. A robot equipped with a vision sensor is easier to programme, because when performing the set tasks the robot makes certain decisions itself, within previously defined limitations (Doleček and Karabegović 2002; Bremer and Pfeifer 1995).

Figure 13. Optical sensor in object recognition.

The basic levels of vision sensors are:

- image processing, which enables image acquisition,
- classification, which performs character recognition on a scene, and
- scene analysis, in which visual systems based on an existing database have the necessary knowledge of an object class.

A vision sensor is an optical sensor that transforms light radiation into an image. The basic sources of information in today's vision sensors are cameras. The


CMU2camera and the way of its connection is shown in picture 14, which is used in mobile robots. This camera has dimensions: 2.25” width x 1.75 height x 2 depth”. The CMU camera can in real time perform a few different types of image processing. This camera uses a serial port and can be directly connected with other processors. The CMU camera, when changing 17 images in a second can (Bejdić 2006):    

determine position and size of light or coloured object, measure RGB3 or YUV4 statistic of image zone, automatically record and monitor the first object it sees, physically follow an object by using directly connected servo device,

Figure 14. CMU camera module (Bejdić 2006).


- send a complete image through a serial connector,
- send a bitmap file that shows the form of the followed object.
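The colour-statistics and object-tracking operations listed above can be illustrated with a short sketch. This is an assumption-laden NumPy illustration, not the CMU camera's actual firmware; all names are hypothetical:

```python
# Assumption-laden NumPy illustration of the operations listed above
# (zone colour statistics, coloured-object position and size); this is
# not the CMU camera's actual firmware, and all names are hypothetical.
import numpy as np

def zone_rgb_mean(image, top, left, height, width):
    """Mean R, G, B values over a rectangular image zone."""
    zone = image[top:top + height, left:left + width]
    return zone.reshape(-1, 3).mean(axis=0)

def track_colour(image, lower, upper):
    """Centroid (row, col) and pixel count of the pixels whose R, G
    and B values all lie between `lower` and `upper`."""
    mask = np.all((image >= lower) & (image <= upper), axis=-1)
    rows, cols = np.nonzero(mask)
    if rows.size == 0:
        return None, 0
    return (rows.mean(), cols.mean()), rows.size

# Toy frame with a small "red" blob.
img = np.zeros((4, 4, 3), dtype=np.uint8)
img[1:3, 1:3] = [200, 10, 10]
centre, size = track_colour(img, np.array([150, 0, 0]), np.array([255, 50, 50]))
# centre is the blob centroid (1.5, 1.5); size is its area in pixels (4).
```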

As one of the vision sensors for three-dimensional object scanning, laser distance sensors are used in addition to cameras (Figure 15). The scanning principle with a laser beam consists of determining the distance between the laser and the object. Among today's laser sensors, auto-synchronization systems containing pyramidal mirrors are the most widely applied: one side of the pyramidal mirror serves for laser detection, the other for its projection (Bejdić 2006). By combining a laser and a camera, a higher-quality spatial image is obtained; this sensor combination gives the so-called "technical eye".

2 Carnegie Mellon University - the place where this system was developed.
3 RGB (Red, Green and Blue) - an additive colour model.
4 YUV - a colour system used in analogue systems, a vector record in three-dimensional space.


Figure 15. Laser 3D scanner.

Application of Vision Sensors in Robotics

The application of artificial vision began in systems for sign recognition of biological materials and for the recognition of different objects of military purpose (Doleček and Karabegović 2002; Rogić 2001; Popović 1996). The further development of artificial vision was accelerated by the development of computers and robotics. These sensors are also applied in product quality control, where the form and dimensions of products are controlled; products that deviate from the tolerances, or are damaged, are removed from the line. Vision sensors can use either a fixed camera or a camera mounted on the robot, Figure 16. A highly important application of these sensors is with mobile robots, whose movement depends on guidance: a robot can be guided by sensors or by a previously set programme. Robot development has highly contributed to the development of sensors of all kinds. Sensors of robot vision represent the highest level of sensor systems. With certain software support, they open different application possibilities in other fields of industry. Apart from application in industry, these sensors can be applied in areas unreachable to man, because they enable control from a distance.


Figure 16. Sorting of objects with a robot (Dechow 2016).

3D-Vision Sensors

Three-dimensional vision implies knowing the distance between the sensor and each scene point. Different methods exist for achieving three-dimensional vision. Triangulation is one such method; it measures distance based on three points that form the vertices of a triangle. Triangulation can be active or passive, depending on the movement of the camera. Figure 17 shows triangulation with structured light in automatic welding. According to the programme, the robot control unit obtains the data necessary for the work of the robot's servo motors and for the consumption of the material needed for welding. The manipulator carries the camera and the laser.

Figure 17. Automatic welding with robot (Doleček and Karabegović 2002).
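The triangulation principle described above can be sketched numerically: the light source and the camera sit at the two ends of a known baseline, each measuring the angle towards the same object point, and the law of sines gives that point's distance. A minimal sketch (names and numbers are assumptions):

```python
# Minimal sketch (names and numbers are assumptions): planar
# triangulation over a known baseline. The transmitter and the camera
# measure the angles alpha and beta (from the baseline) towards the
# same object point; the law of sines then gives the point's
# perpendicular distance from the baseline.
import math

def triangulated_distance(baseline, alpha_deg, beta_deg):
    """Perpendicular distance of the object point from the baseline."""
    a = math.radians(alpha_deg)
    b = math.radians(beta_deg)
    return baseline * math.sin(a) * math.sin(b) / math.sin(a + b)

# Baseline of 100 mm, both angles 45 degrees: the point is 50 mm away.
d = triangulated_distance(100.0, 45.0, 45.0)   # about 50.0 mm
```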


Welding with a continuous wire feed is most often used for continuous welds, especially in the automobile industry. Robot application for performing certain tasks is impossible without sensor systems. Some examples of robot application are shown in the following Figures; in certain operations the sensor system is an irreplaceable part of the process (Popović 1996).

Figure 18. Robot cleaning the plane.

Figure 19. Application of robots in medicine.

Figure 19 shows examples of the application of robots in medicine. Apart from the applications shown, robots are also used in other fields, e.g., in space research, in underwater research, in different types of cleaning, in films, etc.


CONCLUSION

Robots are increasingly applied in different branches of industry for tasks of increasing complexity. A robot's ability to perform different classes of tasks depends on the feedback on task performance, i.e., on the level of the sensor system the robot is equipped with. The information a robot obtains through its sensors contains data about the object of interest, the positions of the robot's joints, the moment and force on the robot's gripper, information about the robot's environment, etc. Sensors deliver information in the form of an electrical signal, produced by measuring different physical quantities. Depending on the level of control, the robot control unit analyses the information obtained by the sensor system and, when needed, modifies the robot's work while performing a certain task. Sensor application, depending on the sensor system level, simplifies the programming of the robot and increases flexibility, efficiency and reliability. In order to perform certain tasks, the robot control unit obtains information (position, movement, force, moment, ...) on which it acts correctively, according to the set algorithm. The measurement of the stated quantities is carried out in relation to the robot's coordinate systems by sensors of internal state. When a certain contact force between the gripper (effector) and a working object is needed, or when information from the robot's environment is required, the kinematic modelling of the robot's movement is also carried out according to the information obtained by sensors of external state. The sensor system gives information through which the robot adapts to its environment, which represents an application of artificial intelligence. Accordingly, the robot is given the opportunity to learn independently and to adapt to its environment, whereby safety increases to a higher level. In the future, robots are expected to be highly independent in their work, which is closely related to sensor system development.

REFERENCES

Bašić, H. 2008. Measurements in Mechanical Engineering, Faculty of Mechanical Engineering, Sarajevo.
Baginski, B. 1998. Motion Planning for Manipulators with Many Degrees of Freedom - The BB-Method, Technische Universität, München.
Bejdić, M. 2006. Sensor Application in Robotics and FMS, Postgraduate studies, Bihać.
Bremer, H., Pfeiffer, F. 1995. Experiments with Flexible Manipulators, Technische Universität, München.
Binner, H. F. 1999. Prozeßorientierte Arbeitsvorbereitung, Carl Hanser Verlag, München, Wien. [Process-oriented work preparation]


Dechow, David. 2016. Quality Magazine, Practical VGR: How to successfully implement the latest technologies for vision guided robotics (https://www.qualitymag.com/articles/93293-practical-vgr).
Wloka, Dieter W. 1992. Robotersysteme 1 - Technische Grundlagen. Springer-Verlag, Berlin, Heidelberg. [Robot systems 1 - Technical basics]
Doleček, V., Karabegović, I., Rošić, H. 2005. Contribution to the analysis of application of sensors in the process of commissioning the finished product, 5th International Scientific Conference on Production Engineering Development and Modernization of Production RIM 2005, 14-17 September 2005, Bihać, Bosnia and Herzegovina, ISBN 9958-9262-0-2, pp. 311-316.
Doleček, V. and Karabegović, I. 2002. Robotics (in Bosnian), Technical Faculty, Bihać, Bosnia and Herzegovina.
Heidenhain. 2016. Measuring Rotational Motion with Precision and Reliability (https://www.heidenhain.us/resources-and-news/measuring-rotational-motion-precision-reliability/).
Krstulović, A. 2003. Introduction in Industrial Robotics, Croatian Association of Technical Culture, Zagreb.
Karabegović, I., Doleček, V., Rošić, H. 2003. The use of sensors for conducting gripper robot industry, 4th International Scientific Conference on Production Engineering RIM 2003, 25-27 September 2003, Bihać, Bosnia and Herzegovina, ISBN 9958-624-16-8, pp. 321-326.
Pavlin, G. 1995. Automatic Position Analysis of Spatial Kinematic Chains, Graz.
Popović, M. 1996. Sensors in Robotics, College of Electrical Engineering, Beograd.
Rogić, M. 2001. Industrial Robots (in Serbian), Faculty of Mechanical Engineering, Banja Luka.
Schraft, R. D. and Schmierer, G. 1998. Serviceroboter, Handbuch für Industrie und Wissenschaft. Springer-Verlag, Berlin. [Service robots, handbook for industry and science.]
3D force sensors in robotic fingers - OptoForce. 2014. (https://www.youtube.com/watch?v=PnW6lH9Wu6I&list=UUgGUP9lx-wTykcj966jpuCw&index=52).
www.kuka-robots/sensors/industries.com.
www.azom.com.
www.mitsubichi.com.

In: Industrial Robots: Design, Applications and Technology ISBN: 978-1-53617-779-4 Editors: Isak Karabegović et al. © 2020 Nova Science Publishers, Inc.

Chapter 4

ROBOTIC VISION Samir Vojić*, PhD Technical Faculty, University of Bihac, Bosnia and Herzegovina

ABSTRACT This chapter presents an overview of a robotic vision with particular emphasis on the basic elements of the vision systems. Vision is an important sensory modality that can be used for deriving information critical to the proper, efficient, flexible, and safe operation of an intelligent robot. Vision systems are utilized for developing a higher level interpretation of the nature of a robotic workspace using images acquired by cameras mounted on a robot. Such information can be useful for tasks such as object recognition, object location, object inspection, obstacle avoidance and navigation.

Keywords: robot, robotic vision, image processing, sensors, technology, measuring, quality

INTRODUCTION

In order to have successful development of new products, it is necessary that production systems have more flexibility in managing production processes. This can be achieved if the software and hardware architecture of a particular flexible production system are integrated beforehand. An important role in all this is played by artificial intelligence, that is, the application of artificial vision in the spatial guidance of industrial

* Corresponding Author’s E-mail: [email protected].


robots, thereby realizing their autonomy and allowing them to become intelligent industrial robots (Batchelor and Whelan, 1997, 19-64). The development and implementation of integrated control systems presuppose the use of new technologies that enable the intelligent processing of data from a network of sensors deployed in such environments, and two-way communication between the robot and the environment. For a robot to be able to operate successfully in a complex environment, advanced hybrid system management methods are a prerequisite. By using certain sensors in the form of cameras, an autonomous industrial robot can achieve adaptive behavior, which implies that the industrial robot can adapt flexibly to changes in the workspace and perform intelligent tasks such as recognizing and identifying objects and manipulating them. Intelligent industrial robots that use an object recognition camera are superior to conventionally controlled robots because the positions and orientations of the objects being recognized can be arbitrary in the corresponding robot workspace. Robotic vision is a technology that allows the robot to move based on visually acquired information about the product and the environment. From the very beginning of the Industrial Revolution, human vision has played an indispensable role in the production process. Tracking the movement of individual parts on the production line, locating the appropriate ones and positioning them properly for processing, and examining the quality of the finished product are jobs that have required human eyes.

HISTORY OF ROBOTIC VISION

The concept of machine vision dates back to 1930, when the company Electronic Sorting Machines in New Jersey offered a food sorting machine based on the use of specific filters and photomultipliers as detectors (Zuech, 2000). In 1940, returnable bottles were still being used in the USA; RCA's Camden operation designed and built a system for inspecting bottles, which must be clean before refilling. This technology was analog. In 1960, computer vision research, supported by the military, began at the MIT and Stanford University AI laboratories (Zuech, 2000, 7-21). In the early 1960s, IT&T produced an image-dissector reflector control system at General Motors. At the same time, Procter & Gamble was experimenting with the concept of inspecting Pampers (Zuech, 2000, 7-21). In 1964, Jerome Lemelson was awarded the patent for a general concept: obtaining electromagnetic radiation by using a scanner, digitizing the signal, and using a computer to compare the results against stored ones. In 1965, Colorado Video implemented a video image digitization unit, which basically digitizes one pixel per line for each scan. In 1970, with the material support of


the US Army, GE demonstrated a 32x32 pixel CID camera, and Bell Labs developed a CCD (charge-coupled device) camera. In 1969, NASA, together with EMR Photoelectric and Schlumberger, developed an optical data digitizer. In the early 1970s, several companies produced commercial TV-based control systems (Zuech, 2000, 7-21). Also in the early 1970s, research into the field of artificial vision was initiated at many universities, including the universities of Missouri, Columbia, Maryland and Michigan. In 1971, Solid Photography (today Robot Vision Systems Inc.) commercialized a 3D technique for obtaining data from individuals that would be the basis for creating a likeness of the same person. In 1973, GM (General Motors) simulated car assembly using vision-driven robotics. In 1974, GE (General Electric) introduced a 100x100 pixel CID camera and later a 244x128 camera version. In 1975, EMR introduced a TV-based off-line measurement system. In 1976, GM published for the first time its work on an automatic IC chip control system. In 1977, Quantex introduced the first real-time image processor in a single casing, GE introduced its first commercial vision control system, and SRI introduced a vision module - a laboratory system with a camera, analog preprocessor and DEC computer, designed for prototype applications (Zuech, 2000, 7-21). In 1977, Leon Chassen was granted a patent for applying structured light principles to scanning, and commercialized the technique. By 1978, ORS had established a collaboration with the Hajime firm from Japan that led to the commercialization of the technology.
By the late 1970s, companies such as Imanco in the UK and Bausch & Lomb in the USA introduced TV-based computer workstations for metallographic analysis as well as microscopic biomedical analysis (Zuech, 2000, 7-21). By the early 1980s, Texas Instruments had a group developing vision systems for their manufacturing needs, including a model recognition system for alignment and an off-line TV-based dimensional measurement system. In 1980, Machine Intelligence Corporation (MIC) was formed and commercialized SRI machine vision technology.


In 1981, MIC introduced its VS-110 system, which was designed to perform high-speed control of precisely indexed parts by comparing the images of the parts with the reference image stored in memory (Zuech, 2000). The first industrial application of binary pattern matching focused on high-precision parts in the electronics industry. In 1981, Perceptron was formed on GM principles; also in 1981, Machine Vision International was formed to commercialize a parallel cytocomputer manufactured at the Environmental Research Institute of Michigan (ERIM). In 1984, Allen-Bradley founded the French company Robotronics and became a major supplier of robotic vision. Since 1984, various organizations and professional societies have been established within the societies of manufacturing engineers. Also, an association for machine vision was established within the Industrial Robotics Association (Zuech, 2000, 7-21). This association defined the term machine vision that has since been embraced to describe this technology. Already during this period, it could be said that the machine vision industry was well on its way to becoming a serious industry. Advances in microprocessor and camera technology and declining prices of basic components all favored the development of the machine vision industry. In 1985, Videk, a part of Kodak, delivered its first unit with a dedicated hardware board performing edge segmentation. In 1987, Hitachi introduced the IP series, in which the first dedicated VLSI chip was used for image processing. In 1988, Cognex introduced its VC-1, the first VLSI chip intended for image analysis in machine vision. In 1988, Videk introduced a 1024x1024 Megaplus camera. In 1988, LSI Logic introduced the RGB-HSI converter CMOS chip, which was first implemented in color-based machine vision.
In 1991, Dickerson Vision Technologies was among the first to offer a “smart camera” with a built-in microprocessor, which contributed to the advancement of the general use of machine vision (Zuech, 2000, 7-21).

BASIC ELEMENTS OF MACHINE VISION SYSTEM

Today, more and more manufacturers are using machine vision technology to improve productivity and reduce costs. Machine vision integrates optical components with computer-controlled systems for increased productivity with existing automated manufacturing equipment. A key factor in continuing this trend is the constant decline in the price of computer hardware and the dramatic increase in the power of computers.


Many vision systems run on ordinary Pentium processors, without special hardware (Florczyk, 2005, 47-55). At the heart of this development of machine vision are manufacturers' increasing demands for better control and higher product quality. Machine vision systems come in many forms. Some systems use an analog camera and digitize the image with a frame grabber (Figure 1). A growing number of systems use a digital camera, as well as other peripherals, sending data directly to PC memory. In some applications, a smart camera provides a complete vision system in one casing. Despite their differences, all of these systems depend on input optics that provide a high-quality image for the sensor.

Figure 1. Basic elements of a machine vision system.

The image is the primary source of information for the machine vision system. The quality of the analysis depends on the quality of the image, and the quality of the image is determined by the appropriate choice of optics. Software cannot correct poor image quality. Nonetheless, optics is the most neglected aspect of machine vision. Lighting and lenses must work together to gather relevant information about the object.


The lighting must illuminate each characteristic, provide good contrast, and minimize confusing effects. Lenses must capture completely all the features of the entire object and the extent of the work surface. For alignment and measurement applications, the lenses must image the object in a fixed geometry, so that the position in the image is accurately calibrated to the position of the object in space (Snyder and Hairong, 2004, 38-62). Machine vision is continuing its expansion into new applications. Camera size and price continue to decrease. High-resolution digital cameras are in common use. Smart cameras make complete vision systems available at a lower cost, especially in recent years (Kurfess, 2000, 22).

Lenses

In many cases, machine vision input optics can be built using off-the-shelf components. Camera, lens and lighting manufacturers offer a variety of standard products, many of which are specifically designed for use in machine vision. In each case, the selection will depend on the following characteristics (Snyder and Hairong, 2004, 38-62):

- the field of view required to reach the required resolution cannot be achieved using standard lenses,
- the available space is too small to accommodate standard lenses,
- increasing the centering precision requires a zoom change,
- the required width of the area cannot be achieved with stationary lenses,
- the area of interest is not easily accessible with standard lenses.

All machine vision system information is collected through the lens. Proper lens selection can reduce image processing requirements and improve system performance. The technical and practical information needed to select machine vision lenses is presented here. In order to understand the theoretical principles, it is necessary first to give some basic definitions and lens parameters. Lenses, by construction, form images over a limited area; one has to make sure that the lens covers an area equal to or larger than the camera format.

Field of View - the area of the object that is imaged by the lens onto the image sensor. It must be sufficient to cover all measured content, with additional tolerances for alignment errors.

Magnification - the required magnification is (Snyder and Hairong, 2004, 38-62):

magnification = Wcamera / Wfield of view (1)

where Wcamera is the width of the camera sensor and Wfield of view is the width of the field of view.

Working Distance - the distance from the front of the lens to the subject. In principle, lenses that allow longer working distances tend to be larger and much more expensive than lenses that allow shorter working distances.

To understand machine vision lenses, we will start with a thin-lens model. If a piece of glass or other translucent material is shaped in a certain way, it will converge a parallel beam of incoming rays to a point, or diverge them from some point. We call such a piece of glass (or other transparent material) a lens. Generally, a lens is a carefully crafted piece of translucent material that deflects rays of light in such a way that it creates an image. The lens can be seen as a series of small prisms, each of which bends the light to create its own image (Figure 2). When these prisms work together, they create a sufficiently bright image in the focus of the lens.

Figure 2. Series of small prisms.

There are different types of lenses. Lenses differ in shape and in the material of which they are made. We will consider lenses that are symmetrical with respect to a horizontal axis called the optical axis. We will categorize lenses as convergent and divergent. As the names suggest, a convergent lens converges rays of light traveling along the optical axis, and a divergent lens diverges them (Hornberg, 2006, 36-66).


Figure 3. Convergent (positive) and divergent (negative) lens.

Deriving the Lens Equation

The lens equation expresses quantitative relationships between the object distance (a), the image distance (b), and the focal length (f) (Hornberg, 2006, 36-66).

Figure 4. Characteristics of the lens.

The lens equation is:

1/f = 1/a + 1/b (2)

The linear lens magnification equation expresses the relationship between the image distance and the object distance, and between the image height and the object height. The linear magnification is (Snyder and Hairong, 2004, 38-62):

m = -b/a (3)

In the case of convergent lenses, the focal length f is positive and in the case of divergent lenses f is negative.
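As a quick numerical check of the lens equation (2) and the magnification (3), the following sketch (not from the book; function names are illustrative) computes the image distance and magnification for a convergent lens:

```python
# A quick numerical check of the thin-lens relations above (the lens
# equation and the linear magnification). Not from the book; function
# names are illustrative.

def image_distance(f, a):
    """Solve 1/f = 1/a + 1/b for the image distance b."""
    return 1.0 / (1.0 / f - 1.0 / a)

def linear_magnification(a, b):
    """m = -b/a; a negative m means an inverted image."""
    return -b / a

# Convergent lens, f = 50 mm, object at a = 200 mm:
b = image_distance(50.0, 200.0)       # about 66.7 mm (real image)
m = linear_magnification(200.0, b)    # about -0.33 (inverted, reduced)
```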


Image Construction

The easiest way to construct the image of an object of height y is to use the so-called characteristic rays of light (Hornberg, 2006, 36-66). Knowing the behavior of certain rays, we can obtain information about the position, size, orientation and type of image of objects placed at a specific location in front of the lens. It is enough to know two rays and find their intersection.

Figure 5. Image construction – convergent lens.

As shown in Figure 6, for this case the following holds (Hornberg, 2006, 36-66):

1. A ray arriving at the lens parallel to the optical axis is refracted by the lens as if it came from the focal point F'.
2. A ray passing through the center of the lens is not refracted but passes through the lens without changing direction.
3. A ray that would pass through the object-side focal point F is refracted parallel to the optical axis.

The extensions of the refracted rays of light intersect at one point, and here we obtain, from a real object of height y, an image that is virtual, scaled down and upright. Lenses of various manufacturers are present on the market, and it is necessary to choose those whose quality and characteristics meet the defined needs.


Figure 6. Image construction - divergent lens.

Lighting

An important point when designing machine vision applications is choosing the type of lighting. Choosing the right light source scheme increases the accuracy and reliability of the system and reduces the response time. Poor lighting cannot be compensated by better processing algorithms. The importance of lighting is also borne out by the fact that many applications which proved good in laboratory conditions have failed in the work environment solely because of inadequate lighting. Lighting systems must be adapted to actual operating conditions, where variations in the hue of the product surface and the influence of the changing ambient component of the light lead to distortion in the input image. Unlike humans, machine vision systems have no built-in experience to draw conclusions from. Simple problems, like determining top and bottom, finding holes in an object, or locating the object itself, become very complex when transposed into the world of artificial vision systems (Dickmanns, 2007, 440-500). Therefore, it is very important to supply the system with the best possible input image. The best possible image for a machine vision system is one that emphasizes the area of interest, has high contrast and emphasized textures; in other words, the image that allows the system to perform the set task to the best of its ability. Lighting techniques in machine vision include (Hornberg, 2006, 73-200):

a) diffuse front lighting,
b) directional front lighting,
c) polarized lighting,


d) coaxial lighting,
e) structural lighting and
f) backlighting.

Diffuse Front Lighting

Diffuse front lighting, as the name implies, consists of a light source positioned on the same side as the camera (Hornberg, 2006, 73-200). This method of illumination creates a wide area of uniform illumination, eliminating shadows and reducing the effect of mirror reflection.

Figure 7. Diffuse front lighting.

Directional Front Lighting

This mode of illumination generally occurs in two variants: bright field and dark field. A bright field is an area where light is reflected within the field of view of the camera. Conversely, a dark field is an area where light is reflected from the reflecting surface outside the field of view of the camera (Hornberg, 2006, 73-200). Dark-field lighting is often used to reveal grooves or imperfections on an otherwise smooth surface.


Figure 8. Directional frontal lighting.

Polarized Lighting

By controlling the illumination conditions, mirror reflections can be removed from the object at any angle of view (Hornberg, 2006, 73-200). This effect is achieved by installing two polarizers: one in front of the light source, so that the object is illuminated by polarized light, and the other in front of the camera (also called the analyzer), so that only the corresponding rays reach the camera. Complete elimination of the mirror component is achieved only if the camera and light source are set at the right angle.

Figure 9. Polarized lighting.


Coaxial Lighting

The idea behind coaxial illumination is to introduce light along the optical axis of the camera. By far the best way to accomplish this is to use a beam splitter. More advanced coaxial lighting systems include a reflection chamber and allow only indirect illumination of the object during inspection (Hornberg, 2006, 73-200). This type of lighting is ideal for illuminating textured surfaces.

Figure 10. Coaxial lighting.

Structural Lighting

Structural lighting is used when it is necessary to obtain the surface characteristics of the object and the structure of the object itself. The light source is placed above the observed object at a certain angle with respect to the camera. A beam of light falling on the object forms a line or a certain grid pattern. Grid patterns are used to determine the 3D characteristics of an object (Hornberg, 2006, 73-200). The camera captures the projection of the light beam on the object, and from the distortion of the lines or the grid we obtain a 3D model.
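The structured-light idea above can be reduced to a one-line relation: with the source tilted at an angle theta to the camera axis, a surface step of height z shifts the projected line sideways by d = z * tan(theta). A minimal sketch (all names and numbers are assumptions):

```python
# Minimal sketch (all names and numbers are assumptions): with the
# light source tilted at angle theta to the camera axis, a surface
# step of height z shifts the projected line sideways by
# d = z * tan(theta), so the height is recovered as z = d / tan(theta).
import math

def height_from_shift(d_mm, theta_deg):
    """Surface height implied by an observed line shift d (in mm)."""
    return d_mm / math.tan(math.radians(theta_deg))

# A line shifted by 2 mm with the source at 45 degrees: a 2 mm step.
z = height_from_shift(2.0, 45.0)   # about 2.0 mm
```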


Figure 11. Structural lighting.

Backlighting

Backlighting gives the highest contrast between the background and the subject. It is used when the extreme edges of the object are to be obtained accurately. Setting up such a system is very simple: the object is placed between the light source and the camera (Hornberg, 2006, 73-200). The result of such a shot is the silhouette of the object on a light (uniformly white) background. The choice of illumination type depends on the application for which the particular machine vision system is used. The following illustration provides some examples and comparisons of lighting.

Figure 12. Backlighting.
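Because a backlit object appears as a dark silhouette on a uniformly bright background, extracting its extreme edges reduces to thresholding plus a bounding box. A minimal sketch (assumptions, not from the text):

```python
# Sketch (assumption, not from the text): with backlighting the object
# is a dark silhouette on a bright background, so its extreme edges
# reduce to a threshold plus a bounding box.
import numpy as np

def silhouette_bbox(image, threshold=128):
    """Bounding box (top, bottom, left, right) of the dark silhouette."""
    rows, cols = np.nonzero(image < threshold)   # object pixels are dark
    return (int(rows.min()), int(rows.max()),
            int(cols.min()), int(cols.max()))

# Toy backlit frame: a bright field with a dark object inside it.
frame = np.full((5, 5), 255)
frame[1:4, 2:4] = 0
silhouette_bbox(frame)   # -> (1, 3, 2, 3)
```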

Figure 13. Comparison of lighting: a) diffuse lighting, b) coaxial lighting.

It can be seen that the image obtained when using diffuse illumination is unclear compared to that obtained using coaxial illumination.

Cameras

Cameras are also a very important element of machine vision. The most commonly used cameras in the machine vision industry are compact and lightweight, with built-in video memory and an image processor. The purpose of the camera in machine vision systems is to obtain an accurate view of the object being inspected. In principle, cameras are divided into two categories: area cameras and line cameras (Stanley, 1999, 13-40). As the names imply, area cameras have image sensors capable of scanning a surface (multiple lines), while line cameras have sensors that can scan only one line before the image data is read out. They consist of a high-resolution CCD or CMOS sensor with a fast signal processor for image processing.

CCD (Charge Coupled Devices) Sensors

A CCD sensor is a semiconductor memory element in which electrical charge moves across the surface. CCDs consist of many tiny photosensitive elements on a semiconductor base (Tönshoff and Inasaki, 2001, 47-70). Figure 14 shows the basic form of the most commonly used type of CCD array, the so-called interline CCD. The CCD is composed of precisely positioned light-sensitive semiconductor elements arranged in rows and columns. Each row represents one line of the resulting image. When light falls on the sensing elements, the photons are converted to electrons; the charge accumulated in each element is proportional to the intensity of the light and the exposure time. This is known as the integration phase (Tönshoff and Inasaki, 2001, 47-70). After a predetermined time, the accumulated charge is transferred to the vertical shift register.


Figure 14. CCD sensors.

In video-compliant cameras, transfer to the vertical registers is achieved in two stages: first the charge from the odd-numbered rows is transferred, and only then that from the even-numbered rows. The vertical register contents are then shifted into the horizontal register and clocked out to the CCD output. Therefore, all odd rows are read out first (odd fields), followed by the even rows (even fields). The speed at which charge is transferred from the horizontal registers is determined by the number of elements (pixels) per row and the video standard to which the camera belongs. A problem inseparable from the interline CCD lies in the fact that the vertical registers passing over the array represent insensitive zones that act as blind spots. One way to overcome this problem created by the vertical registers is to abolish them and use a different charge-transfer mechanism. The frame-transfer CCD does just that. This type of CCD has a separate memory zone into which the charge is transferred from each cell. This process must be performed fairly quickly to avoid blurring, as the transfer takes place during the exposure time.
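The odd-field/even-field readout described above can be mimicked in a few lines. A toy sketch (an illustration only, not a model of real CCD timing):

```python
# Toy sketch of the odd-field / even-field readout described above;
# an illustration only, not a model of real CCD timing.
import numpy as np

def read_interlaced(frame):
    """Split a frame into its odd and even fields (1-based row count):
    rows 1, 3, 5, ... are read out first, then rows 2, 4, 6, ..."""
    odd_field = frame[0::2]    # 1st, 3rd, 5th, ... rows of the array
    even_field = frame[1::2]   # 2nd, 4th, 6th, ... rows
    return odd_field, even_field

frame = np.arange(16).reshape(4, 4)   # a toy 4x4 "sensor"
odd, even = read_interlaced(frame)
# odd holds array rows 0 and 2, even holds array rows 1 and 3.
```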

CMOS (Complementary Metal Oxide Semiconductor) Sensors

Unlike a CCD sensor, a CMOS sensor has a built-in amplifier for each pixel. This allows all pixels to be read out in parallel, which enables faster readout and thus more recorded images per second. Since an amplifier is associated with each pixel, a CMOS sensor is more expensive to produce. CMOS sensors are also called smart sensors because almost all of the supporting electronics are contained within the sensor itself. They offer a better signal-to-noise ratio, lower power consumption, higher speed, etc.

Robotic Vision


Figure 15 shows an actual CMOS sensor: the active pixel zone (green) and the zone occupied by the coupled circuitry (yellow), which replaces the exposure zone of a CCD sensor (Zuech, 2000, 83-133).

Figure 15. CMOS sensors.

The active amplifier and sampling capacitor give the CMOS sensor an advantage in speed, capacity, and response characteristics in dark (black-pixel) regions of the image. CMOS sensors can also exhibit higher levels of fixed-pattern noise than CCD sensors, but this form of interference is easily remedied by software filters.

Frame Grabbers

Frame grabbers are devices for digitizing images and capturing them on a computer. Depending on the performance, the image can be displayed in a window (frame) or full screen. Depending on the application, manufacturers produce variants of frame grabbers with different capabilities. There are designs for internal mounting directly on the computer bus, as well as external units that connect to the computer via a serial or parallel port (Florczyk, 2005, 47-55).


Figure 16. Block diagram of a frame grabber.

Depending on the application, frame grabbers digitize in color (by RGB components) or monochromatically in shades of gray. Common color depth values are 8, 15, 16, 24 and 32 bits/pixel. Some common standardized frame grabber resolutions are 768x576, 768x876, 640x480, 800x600, 1280x1024, 1600x1280 and 4096x4096. In addition, the digitization speed of the input images is important for the continuity of the collected images, as is the speed of transfer of these images to the computer bus for processing. The minimum frame rate from the input should be 30 frames/s, while the transfer rate to the bus after digitization varies from 20 MB/s up to 800 MB/s. The transfer speed to the PC bus depends on the bus itself, the resolution of the digitized image, and the circuits on the frame grabber card. DSP processors are most commonly used for digitization and fast processing of the digitized images. The video cameras most commonly connected to frame grabber inputs use NTSC, PAL, or SECAM color encoding.
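The required bus bandwidth follows directly from resolution, color depth, and frame rate. A minimal sketch of this arithmetic (the camera settings below are illustrative assumptions, not values from the chapter):

```python
def required_bandwidth_mb_s(width, height, bits_per_pixel, fps):
    """Uncompressed transfer rate a frame grabber must sustain, in MB/s."""
    bytes_per_frame = width * height * bits_per_pixel / 8
    return bytes_per_frame * fps / 1e6

# A 640x480 camera at 24 bits/pixel and the minimum 30 frames/s:
rate = required_bandwidth_mb_s(640, 480, 24, 30)
print(f"{rate:.3f} MB/s")  # 27.648 MB/s, well within the 20-800 MB/s range above
```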

CONCLUSION

Robotic vision is a technology that allows a robot to move based on visually measured properties of objects and the environment. It is used in applications such as control, identification and sorting, and is an important component in the implementation of industrial robots in various fields. For the successful development of new products, production systems need more flexibility in managing production processes. This can be achieved if the software and hardware architecture of a given flexible production system is integrated beforehand. An important role in all this is played by artificial intelligence, that is, the application of artificial vision in the spatial management of industrial robots, thereby realizing their autonomy and allowing them to become intelligent industrial robots. There are around one million industrial robots in the world today working in manufacturing facilities and performing various tasks. When using robots, the question is how to design their control so that the uncertainty in the position and orientation of objects in the robot's workspace is resolved. One way to solve this problem is to apply autonomous adaptive control to industrial robots using visual feedback and hierarchical intelligent control. The realization of intelligent control of an autonomous industrial robot that uses robotic vision as a means of object recognition also involves the application of artificial intelligence. The task of robotic vision is to achieve visual perception for production systems with the aim of increasing their functional capabilities. Thanks to robotic or artificial vision, technical products increasingly have features comparable to human capabilities. Artificial vision aims to connect computers, electronics, and robotics in order to at least partially achieve a person's visual ability. The robot's vision detects the position and orientation of a 3D workpiece and computes data that are returned to the robotic system. By using appropriate cameras, the industrial robot achieves adaptive behavior. This chapter gave a brief historical overview of machine or robot vision, a description of the basic elements that make up a robotic vision system, and methods of implementation. Robotic vision will play an increasing role in the future, in robotics as well as in everyday life.

REFERENCES

Apolloni, Bruno, Ashish Ghosh, Ferda Alpaslan, Lakhmi C. Jain, Srikanta Patnaik (2005). Machine Learning and Robot Perception. Springer, Berlin Heidelberg New York.
Batchelor, B. G., Whelan, P. F. (1997). Intelligent Vision Systems for Industry. Springer-Verlag.
Berthold, Horn. Robot Vision. MIT Electrical Engineering and Computer Science.
Dickmanns, Ernst D. (2007). Dynamic Vision for Perception and Control of Motion. Springer-Verlag, London.
Dolecek, V., D. Hodzic, S. Vojic, I. Karabegovic (2007). Vision sensors and their application at the industrial robots. 1st International Congress of Serbian Society of Mechanics.
Dolecek, V., Karabegovic, I. et al. (2002). Robotics (in Bosnian). Technical Faculty Bihac.
Florczyk, Stefan (2005). Robot Vision. WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Hong, Won (1995). Robotic Catching and Manipulation using Active Vision. MIT.
Hornberg, Alexander (2006). Handbook of Machine Vision. Wiley-VCH, Weinheim.
Jacak, Witold (1999). Intelligent Robotic Systems. Kluwer Academic / Plenum Publishers, New York.
Jähne, Bernd (1999). Handbook of Computer Vision and Applications. Academic Press, London.
Karabegovic, I., S. Vojic (2007). Industrial Robots Guided by Intelligent System in Complex Environment. 11th International Research/Expert Conference "Trends in the Development of Machinery and Associated Technology" TMT 2007, Hammamet, Tunisia.
Karabegovic, I., S. Vojic, D. Hodzic, V. Dolecek (2007). Artificial intelligence and its use in industrial robots control in space. 1st International Congress of Serbian Society of Mechanics.
Karabegovic, I., Vojic, S., Dolecek, V. (2006). 3D Vision in industrial robot working process. EPE-PEMC, 12th International Power Electronics and Motion Control Conference, Portoroz, Slovenia.
Kurfess, Thomas R. (2000). Robotics and Automation Handbook. CRC Press, London.
Mehrandezh, Mehran (1999). Navigation-Guidance-based Robot Trajectory Planning for Interception of Moving Objects. PhD Thesis, University of Toronto.
Murphy, Robin R. (2000). Introduction to AI Robotics. The MIT Press, Cambridge.
Pires, J. Norberto (2005). Industrial Robots Programming (Building Applications for the Factories of the Future). Springer.
Safaric, R. (2004). Intelligent Control Techniques in Mechatronics (in Slovenian). Maribor.
Scott, C. (2001). Vision Guided Robotics is Revolutionizing Automotive Manufacturing Competitiveness. Braintech.
Snyder, W., Hairong, Q. (2004). Machine Vision. Cambridge University Press.
Spong, Mark W., Hutchinson, Seth, Vidyasagar, M. (2005). Robot Modeling and Control. Wiley.
Stanley, Kevin (1999). A Hybrid Motion Vision Guided Robotic System with Image Based Grasp Planning. MASc Thesis, University of Toronto.
Taylor, Geoffrey, Lindsay Kleeman (2006). Visual Perception and Robotic Manipulation. Springer, Berlin Heidelberg New York.
Tönshoff, H. K., I. Inasaki (2001). Sensors in Manufacturing. Wiley-VCH Verlag GmbH.
Vojic, S., I. Karabegovic, V. Dolecek (2007). Appliance of Robot Vision in Industrial Robot Welding Process. 6th International Scientific Conference on Production Engineering "Development and Modernization of Production" RIM 2007, Plitvička jezera.
Zuech, Nello (2000). Understanding and Applying Machine Vision. Marcel Dekker AG, Basel.

In: Industrial Robots: Design, Applications and Technology ISBN: 978-1-53617-779-4 Editors: Isak Karabegović et al. © 2020 Nova Science Publishers, Inc.

Chapter 5

3D ROBOT VISION IN INDUSTRIAL APPLICATIONS

Dinko Osmanković
Department of Automatic Control and Electronics, Faculty of Electrical Engineering, University of Sarajevo, Sarajevo, Bosnia and Herzegovina

ABSTRACT

In this chapter, we present the principles of 3D environment sensing in modern robotics. This type of sensing is found in nature in many animals, including humans, and is known as stereopsis. Stereo-vision systems are based on this principle, so we present the mathematical model of such camera systems. First, the pinhole camera is modeled to mimic how the eye perceives its environment. Calibration is an essential part of the vision process; without it, it would not be possible to reliably represent the environment. After that, the mathematical model of epipolar geometry, the principle underlying stereo-vision, is presented. Moreover, other types of sensors that can perceive the 3D environment are presented. These include depth sensing cameras, with their different operating principles (time-of-flight and projected-light cameras), and rotating setups that include 3D LiDAR sensors. Finally, the chapter addresses some interesting applications of 3D vision in a wide range of industrial settings, including object measurement, safe path planning in human-robot collaboration environments, and 3D thermal mapping for the purpose of improving the energy efficiency of buildings.

Keywords: 3D robot vision, depth sensing, sensor fusion, stereo-vision, 3D LiDAR, epipolar geometry



Corresponding Author’s E-mail: [email protected].


INTRODUCTION

Humans use different senses to perceive their environment; one of the most important is sight. From an evolutionary perspective, the eye is a masterpiece of evolution as a process. In Figure 1, several stages of the evolution of the eye are illustrated (Ayala 2007, 4).

Figure 1. Evolutionary stages of the development of the eye.

First, pigment spots appeared (far left in Figure 1). These are found in today's limpets (marine mollusks) and consist only of a few pigmented cells, slightly modified from typical epithelial (skin) cells. Slit-shell mollusks (second from the left) have a slightly more advanced organ, consisting of pigmented cells shaped as a cup. The octopus eye (far right) is quite complex, with components similar to those of the human eye, such as the cornea, iris, refractive lens, and retina (Ayala 2007, 4). A major advancement in sight came with the evolution of stereopsis. By definition, stereopsis is the ability to compute depth information from views acquired simultaneously from different frames of reference (Nityananda and Read 2017, 2-3). As a mechanism, stereopsis can be found in many amphibians, birds, and mammals, including primates and humans. The principle of stereopsis is illustrated in Figure 2.


Figure 2. Stereopsis in mobile eyes.

In this illustration, there are two observers, A and B. In both A and B, the apple is imaged at the fovea (the part of the eye where vision is clearest), while the orange is to the left of the fovea in both eyes, by an angle α in the left eye and β in the right. The retinal disparity is therefore the same in both cases: the absolute disparity of the apple is 0, and the absolute disparity of the orange is α−β, which is also the relative disparity between the two objects. However, the different positions of the eyes (less converged in A, strongly converged in B) mean that the locations of the objects in space are very different in the two cases. In both cases, the fact that the orange is closer can be deduced from the relative disparity, but deducing the absolute distance to either object requires knowledge of the vergence angle (V1, V2) (Nityananda and Read 2017, 2-3). The same principle applies to human vision as well. Based on the study presented in (Pryor 1969), the distance between the frames of reference of the two eyes, called the pupillary distance, is between 54 and 74 mm for adults, and between 43 and 58 mm for children. As with most things in robotics, many good ideas are borrowed from nature. This includes stereo-vision cameras, which clearly resemble the stereopsis mechanism. In the next sections, we will explore how depth is measured in 3D robot vision using different types of sensors.


SENSORS FOR 3D VISION

In this section, several types of sensors used in 3D vision applications are described. First, a standard camera model, the pinhole camera model, is given. After that, sensors that can perceive depth are described, including stereo-vision systems, heterogeneous stereo-vision combining a visible-spectrum camera with a depth camera, and 3D laser scanners (LiDARs). The section ends with a description of the sensor fusion of 3D laser scanners and other types of visual sensors.

Pinhole Camera

An ordinary camera resembles the mechanism of the human eye. In Figure 3, the cross section of the eye is illustrated (Kolb 2007, 3).

Figure 3. Cross-section of the human eye.

This shows three different layers:

1) The external layer, formed by the sclera and cornea.


2) The intermediate layer, divided into two parts: anterior (iris and ciliary body) and posterior (choroid).
3) The internal layer, or the sensory part of the eye: the retina.

The light rays are focused through the transparent cornea and lens onto the retina. The central point for image focus (the visual axis) in the human retina is the fovea. Here a maximally focused image initiates the resolution of the finest detail and its direct transmission to the brain for the higher operations needed for perception. The optic axis is slightly closer to the nasal area and projects closer to the optic nerve head (Kolb 2007, 1-3). The optic nerve acts as a transmitter, sending the visual information from the retina to the brain. This served as a basis for the development of the camera obscura, also known as the pinhole camera. This process has been analyzed since ancient times, but is mostly known from the works of Ibn al-Haytham (or Alhazen, as he is mostly known in the Western world), an Arab physicist. The pinhole imaging model is illustrated in Figure 4 (Prince 2012, 298).

Figure 4. The pinhole camera model (Prince 2012, 298).

As seen in this figure, rays from an object in the world pass through the pinhole at the front of the camera and form an image on the back plane (the image plane). Note that the image is projected upside-down. This can be overcome by considering a virtual image in front of the camera; since this is not physically realizable, it is usually handled by introducing lenses into the camera apparatus (Prince 2012, 298-299). Let us develop the mathematical model of the pinhole camera. In Figure 5, the pinhole is referred to as the optical center and positioned at the origin of the (u, v, w) coordinate system. The image plane, where the virtual image is formed, is displaced along the optical w axis by the focal length. The point where the optical axis crosses the image plane is called the principal point (Prince 2012, 299). The focal length is usually normalized to 1, which yields the normalized camera model. The geometry of this camera is illustrated in Figure 6.


Figure 5. Detailed description of pinhole camera model (Prince 2012, 299).

Figure 6. Normalized camera model (Prince 2012, 300).

Here we see a 2D slice of the pinhole camera geometry. Everything is now given in (x, y) coordinates, which makes the derivation of the equations much easier. By similar triangles, the x-position of the image of the world point w = [u, v, w]^T is u/w. We can extend this to projecting 3D world coordinates w = [u, v, w]^T to image coordinates x = [x, y]^T using the relations (Prince 2012, 300-301):

    x = u / w
    y = v / w                                 (1)


where x, y, u, v and w all have the same units of measure (usually millimeters). However, this model is not very realistic. To develop a more realistic model, we need to address the following:

1) The focal length is usually not normalized to 1;
2) The origin of the image is in the top-left corner and not in the center;
3) Sometimes a skew can occur;
4) The camera is not always conveniently centered at the origin of the world coordinate system with its optical axis exactly aligned with the world w axis.

To address the first issue, we introduce scaling factors into equation (1) as follows:

    x = φx u / w
    y = φy v / w.                             (2)

Note that the scaling is different for the x and y axes; these scaling factors represent the focal lengths along the x and y axes. The second issue is remedied with offset factors introduced into equation (2) as follows:

    x = φx u / w + δx
    y = φy v / w + δy.                        (3)

Usually, the factors δx and δy are halves of the image size (e.g., 320 and 240 for a 640 × 480 image). The skew factor γ is introduced to moderate the projected position x as a function of the height v in the world. This results in:

    x = (φx u + γ v) / w + δx
    y = φy v / w + δy.                        (4)

Lastly, the position of the camera needs to be transformed to properly align its reference frame. To achieve this, the world point w is expressed in the coordinate system of the camera before it is passed through the projection model, using the rigid body transformation [Ω | τ] as follows:

    [u']   [ω11 ω12 ω13] [u]   [τx]
    [v'] = [ω21 ω22 ω23] [v] + [τy].          (5)
    [w']   [ω31 ω32 ω33] [w]   [τz]


By combining equations (4) and (5), we obtain the full pinhole camera model (Prince 2012, 302):

    x = (φx(ω11 u + ω12 v + ω13 w + τx) + γ(ω21 u + ω22 v + ω23 w + τy)) / (ω31 u + ω32 v + ω33 w + τz) + δx
    y = φy(ω21 u + ω22 v + ω23 w + τy) / (ω31 u + ω32 v + ω33 w + τz) + δy.           (6)

There are two sets of parameters in this model, divided by their nature:

1) Intrinsic parameters: φx, φy, γ, δx, δy;
2) Extrinsic parameters: Ω, τ.

With lenses, radial distortion usually occurs, and sometimes tangential distortion as well. The effects of radial distortion on the image are illustrated in Figure 7.

Figure 7. Types of radial distortion and its effect on the image.

This is usually modeled as a polynomial function of the distance r from the center of the image. The final image position (x', y') is calculated as (Prince 2012, 303):

    x' = x(1 + β1 r² + β2 r⁴)
    y' = y(1 + β1 r² + β2 r⁴),                (7)

where β1 and β2 are control parameters of the distortion.
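Equations (5) and (6) compose into a single projection function. A minimal sketch in Python (the rotation, translation, and intrinsic values below are illustrative assumptions, not calibration results):

```python
def project(point, phi_x, phi_y, gamma, dx, dy, omega, tau):
    """Full pinhole model of eq. (6): world point -> pixel coordinates."""
    u, v, w = point
    # Rigid body transform, eq. (5): express the point in the camera frame.
    uc = omega[0][0]*u + omega[0][1]*v + omega[0][2]*w + tau[0]
    vc = omega[1][0]*u + omega[1][1]*v + omega[1][2]*w + tau[1]
    wc = omega[2][0]*u + omega[2][1]*v + omega[2][2]*w + tau[2]
    # Perspective projection with intrinsics, eq. (4).
    x = (phi_x*uc + gamma*vc) / wc + dx
    y = phi_y*vc / wc + dy
    return x, y

# Identity rotation, camera 2 m behind the point, 640x480 image:
I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
x, y = project((0.1, 0.2, 0.0), 500, 500, 0, 320, 240, I3, (0, 0, 2.0))
print(x, y)  # 345.0 290.0
```

A point on the optical axis always maps to the principal point (δx, δy), which is a quick sanity check for any implementation of eq. (6).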


Pinhole Camera Calibration

In order to obtain a reliable image, the camera needs to be calibrated. This procedure determines the previously described intrinsic and extrinsic parameters of the camera. In (Fetić, Jurić, and Osmanković 2012, 1752-1757) the authors describe this procedure in detail for a Canon VC-C50i CCD camera. Calibration is an optimization procedure in which the parameters are determined so that the reprojection error is minimized. Typically, a chessboard pattern is used for camera calibration, since it has straight lines that lie on a plane. To achieve better results, usually several dozen chessboard images are taken. This is illustrated in Figure 8 (Fetić, Jurić, and Osmanković 2012, 1753).

Figure 8. Images used in calibration process (20 chessboard images with different position in world coordinates).

The aforementioned intrinsic and extrinsic parameters are obtained by minimizing the reprojection errors using the Levenberg-Marquardt algorithm. The results are presented in Figure 9 (Fetić, Jurić, and Osmanković 2012, 1754). All 20 images used in the calibration procedure are labeled by their ordinal position in the array of images, given in row-major ordering. The calibration procedure significantly reduces the reprojection error after three stages; several stages are needed because some of the chessboard images are not of satisfactory quality, due to the relatively low camera resolution and motion blur (Fetić, Jurić, and Osmanković 2012). The average reprojection error after the first stage is [0.61727, 0.74773] (in pixels), while after three stages it is reduced to [0.27080, 0.34822]. For the whole data set, the reprojection errors are plotted in Figure 10 (Fetić, Jurić, and Osmanković 2012, 1754).

Figure 9. Result of camera calibration; left figure shows calibration patterns from a camera reference frame; right figure shows calibration patterns from a world reference frame.

Figure 10. Reprojection error for the dataset of 20 calibration images.
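The reprojection error is simply the pixel distance between where the calibrated model projects a known world point and where that point was actually detected in the image. A minimal sketch of one plausible per-axis form of this error (the point lists are made-up illustrative values, not data from the cited paper):

```python
def mean_reprojection_error(projected, detected):
    """Mean per-axis absolute reprojection error [ex, ey], in pixels."""
    n = len(projected)
    ex = sum(abs(p[0] - d[0]) for p, d in zip(projected, detected)) / n
    ey = sum(abs(p[1] - d[1]) for p, d in zip(projected, detected)) / n
    return ex, ey

projected = [(100.0, 50.0), (200.0, 80.0), (300.0, 120.0)]
detected = [(100.4, 49.8), (199.5, 80.6), (300.1, 119.7)]
print(mean_reprojection_error(projected, detected))  # roughly (0.33, 0.37)
```

An optimizer such as Levenberg-Marquardt adjusts the camera parameters until an error measure of this kind, accumulated over all calibration images, is minimized.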

This procedure yields optimal intrinsic and extrinsic parameters for the given camera. They can be used to transform the image the camera acquires and produce an undistorted image in which straight lines are properly represented. This is shown in Figure 11 (Fetić, Jurić, and Osmanković 2012, 1755). It is not obvious at first, but looking closely at the left image shows that the lines of the chessboard pattern are slightly curved. This is corrected in the right image.


Figure 11. Images before and after the calibration procedure; left image is with distortion; right image is undistorted image obtained using the intrinsic and extrinsic parameters.

Camera calibration is fundamental to robot vision. Properly estimating the environment and its geometric characteristics is essential for the robot to move and perform complex tasks. However, a single camera cannot perceive depth, i.e., distances from its frame of reference. This problem is addressed in the next section.

Depth Perception Camera

Sensing the depth of the environment has only recently found its way into modern robotics applications, as sensor systems have become more affordable and integrated. This has enabled them to reach a wide variety of applications, from computer games (Yahav, Iddan, and Mandelboum 2007, 1-2) to autonomous driving (Bostelman, Hong, and Madhavan 2005, 237-249) to model digitization (Barbero and Ureta 2011, 188-206). There are several different approaches to designing depth perception cameras. In short, they all use some physical property of light and can be classified into:

1) Time-of-flight (ToF) cameras:
   a) Pulsed-light ToF cameras;
   b) Modulated-light ToF cameras;
2) Projected-light cameras;
3) Rotating setups (3D LiDARs).

Time-of-Flight Camera

Time-of-flight cameras have recently found their market in robotics, mostly due to the introduction of the Microsoft Kinect™ sensors. Although the original Kinect™ uses the pattern-projection principle, Kinect™ V2 uses a ToF camera (Wasenmüller and Stricker 2016, 3454). The working principle of ToF cameras revolves around measuring the time required for an emitted light pulse, reflected off the object, to return to the camera system. This is illustrated in Figure 12.


Figure 12. ToF camera principle.

Pulsed light (red) is emitted from the ToF camera and then reflected off the object; the reflected wave is shown in blue in the figure. The electronic circuits observe the pulse waves as given in Figure 13 (Büttgen et al. 2005, 21-32).

Figure 13. Pulsed-light ToF camera waves.

Light is emitted in pulses and the reflected energy is measured for each ToF camera pixel. This is done by concurrently using two out-of-phase sampling windows C1 and C2. The electronic circuits consist of two capacitors charged with Q1 and Q2. The distance is then computed using the following formula:

    d = (1/2) c T · Q2 / (Q1 + Q2),           (8)

where c is the speed of light in a vacuum and T is the measurement/integration time. In contrast to the pulsed-light principle, the continuous-wave principle takes four samples per measurement. These waves are phase-stepped by 90°. The electronic circuits in this case observe the waves as illustrated in Figure 14. Here, the camera measures the phase difference between the emitted and the reflected wave. This is calculated by:

    φ = atan((Q3 − Q4) / (Q1 − Q2)),          (9)

while the distance is computed by:

    d = c φ / (4 π f).                        (10)

The last equation allows us to extend the range of the depth measurement by modulating the frequency f.

Figure 14. Continuous wave ToF camera waves.
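Both measurement principles, eq. (8) and eqs. (9)-(10), reduce to a few arithmetic operations per pixel. A minimal sketch (the charge values, pulse width, and modulation frequency are illustrative assumptions):

```python
import math

C = 299_792_458.0  # speed of light in a vacuum, m/s

def pulsed_tof_distance(q1, q2, t_pulse):
    """Pulsed-light principle, eq. (8)."""
    return 0.5 * C * t_pulse * q2 / (q1 + q2)

def cw_tof_distance(q1, q2, q3, q4, f_mod):
    """Continuous-wave principle, eqs. (9) and (10)."""
    phi = math.atan2(q3 - q4, q1 - q2)  # atan2 also handles Q1 == Q2
    return C * phi / (4 * math.pi * f_mod)

# A 50 ns pulse with the charge split evenly gives half the pulse range:
d1 = pulsed_tof_distance(1.0, 1.0, 50e-9)  # ~3.75 m
# A 90-degree phase shift at 20 MHz modulation:
d2 = cw_tof_distance(1.0, 1.0, 1.0, 0.0, 20e6)  # ~1.87 m
print(d1, d2)
```

The second computation makes the range-extension remark above concrete: halving f_mod doubles the distance that corresponds to the same phase shift.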


Typical ToF cameras include the Kinect™ V2, SwissRanger cameras and PMD cameras. They are very compact and high-performance, yet very simple to operate. However, they are prone to external interference and to ambient lighting conditions. These problems are addressed in (Schmidt and Jähne 2009, 1-15).

Projected-Light Camera

Projected-light cameras are based on the principle of triangulation. This means that we require two cameras and a specific projector (Fofi, Sliwa, and Voisin 2004, 90-98). This is illustrated in Figure 15.

Figure 15. Principle of projected-light sensor.

Figure 16. 1D version of projected-light principle.


In order to calculate the distance, simple geometric calculations are required. Figure 16 gives a good illustration of the procedure for the 1D projection case (Siegwart, Nourbakhsh, and Scaramuzza 2011, 114). It is now easy to calculate the x and z coordinates of the object as follows:

    x = b·u / (f·cot α − u)
    z = b·f / (f·cot α − u),                  (11)

where b is the distance (baseline) between the camera frame and the beam projector frame, while f is the focal length of the camera.
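Equation (11) can be evaluated directly once the spot projected by the beam is located in the image. A minimal sketch (the baseline, focal length, projection angle, and image coordinate are illustrative assumptions):

```python
import math

def triangulate_1d(b, f, alpha, u):
    """Projected-light triangulation, eq. (11): returns (x, z) of the object."""
    denom = f / math.tan(alpha) - u  # f * cot(alpha) - u
    x = b * u / denom
    z = b * f / denom
    return x, z

# 10 cm baseline, f = 500 (pixel units), beam at 60 degrees, spot at u = 100:
x, z = triangulate_1d(0.10, 500.0, math.radians(60), 100.0)
print(round(x, 4), round(z, 4))  # roughly 0.053 and 0.265 (meters)
```

Note that both coordinates share the denominator f·cot α − u, so z/x = f/u always holds, which is a convenient consistency check.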

2D and 3D LiDAR Sensors

The easiest way to perceive the 3D environment is with a depth sensor mounted on a rotating platform. Rotating 3D depth measurement systems are basically pulsed-light rotating depth sensors; the principle of operation is therefore the same as with pulsed-light ToF cameras. These setups are not as compact as ToF cameras, but they are very accurate and can scan the full 3D environment thanks to two additional degrees of freedom (the azimuth/yaw and elevation/pitch angles of rotation). These sensors are known as LiDARs and are extensively used in autonomous driving (Levinson et al. 2011, 163-168) and the digitization of cultural heritage sites (Borrmann et al. 2015, 1-8).

Figure 17. Principle of operation of 2D LiDAR sensor.

The principle of operation of a laser scanner is simple. The laser beam that measures the distance is projected into the plane at different angles. These angles usually cover a field-of-view (FOV) of 180°, 240° or 360°. This is done by a rotating mirror that deflects the laser beam to angles within the FOV. The FOV range is discretized down to 0.5° or even 0.1°. This rotating-mirror principle is illustrated in Figure 17 (Bitsch, n.d., 9). By using multiple senders or receivers, or a combination of both, sensors can be produced with the capacity to scan multiple planes simultaneously or at offset angles. This means that 3D LiDAR sensors, in addition to the horizontal 2D plane (the 0° plane in a horizontally positioned sensor), can scan further planes tilted up or down (Bitsch, n.d., 14). This is illustrated in Figure 18.

Figure 18. Principle of operation of 3D LiDAR sensor.
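Each range reading, together with the mirror (azimuth) and tilt (elevation) angles, converts to a Cartesian point by the usual spherical-coordinate relations. A minimal sketch (the angles and range are illustrative assumptions):

```python
import math

def lidar_point_to_cartesian(r, azimuth, elevation):
    """Convert a LiDAR range reading with yaw/pitch angles (radians) to (x, y, z)."""
    x = r * math.cos(elevation) * math.cos(azimuth)
    y = r * math.cos(elevation) * math.sin(azimuth)
    z = r * math.sin(elevation)
    return x, y, z

# A 10 m return at 90-degree azimuth in the 0-degree (horizontal) scan plane:
p = lidar_point_to_cartesian(10.0, math.radians(90), 0.0)
print(p)  # lies on the y axis: (~0, 10, 0)
```

A full 3D scan is just this conversion applied to every (range, azimuth, elevation) triple the sensor reports, producing a point cloud.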

Stereo-Vision Camera Systems

As explained in the introduction of this chapter, the basic principle behind depth sensing in humans is called stereopsis. Now, let us consider the same principle, but with two pinhole camera models instead of human eyes. If two cameras, left and right, observe the same object, their optical axes intersect at it. Moreover, connecting the origins of the two camera frames of reference with the observed point yields a plane. We call this plane the epipolar plane, or epiplane. The intersections of the line connecting the two camera origins with their respective image planes result in two points that we call epipoles. The intersections of the epiplane with the image planes result in two lines that we call epipolar lines, or epilines.


Figure 19. Two-view geometry for stereo-vision system.

Here, the point x in the left image plane is mapped via a plane π onto a point x' in the right image plane. These two points are the respective projections of the same world point. We say that there is a 2D homography map Hπ between points x and x'. The epipolar line l' obviously passes through x' and the epipole e' and can be written as:

    l' = e' × x' = [e']× x'                   (12)

Furthermore, since x' is mapped from x by the homography Hπ, i.e., x' = Hπ x, we have:

    l' = [e']× x' = [e']× Hπ x = F x          (13)

The matrix F = [e']× Hπ is called the fundamental matrix. It is a rank-2 matrix representing the point-line transformation of the stereo-vision setup. This matrix has a very important property that constrains the two-view geometry:

    x'^T F x = 0                              (14)

for all corresponding pairs (x, x'). Another important property gives the epipolar line in the left image:

    l = F^T x'                                (15)


If M and M' are the projection matrices of the left and the right camera, and we take the general coordinate system to be the coordinate system of the left camera, then we have:

    M = K [I | 0]
    M' = K' [R | T]                           (16)

Let us assume that K = K' = I for the moment. The point x' of the right camera is located at R^T x' − R^T T in the reference frame of the left camera. The vectors R^T x' − R^T T and R^T T both lie in the epiplane π, which means that the vector:

    R^T T × (R^T x' − R^T T) = R^T (T × x')   (17)

is normal to the epipolar plane. Moreover, the ray through x is normal to this vector as well, meaning that:

    (R^T (T × x'))^T x = x'^T [T]× R x = x'^T E x = 0          (18)

where E is the essential matrix. This matrix is a specialization of the fundamental matrix for the case of normalized image coordinates, i.e., where the condition K = K' = I is satisfied. An example of epipolar geometry computation is illustrated in Figure 20, where the epipolar lines are drawn in both images.

Figure 20. Epipolar lines for the left and right camera images of the same object (mug).
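The construction E = [T]× R and the constraint of eq. (18) are straightforward to verify numerically. A minimal sketch (the rotation, baseline, and point correspondences are illustrative assumptions):

```python
def cross_matrix(t):
    """Skew-symmetric matrix [t]x such that [t]x v equals t x v."""
    tx, ty, tz = t
    return [[0, -tz, ty],
            [tz, 0, -tx],
            [-ty, tx, 0]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def epipolar_residual(xl, xr, e_mat):
    """Evaluate x'^T E x for homogeneous normalized points xl, xr (eq. 18)."""
    ex = [sum(e_mat[i][j] * xl[j] for j in range(3)) for i in range(3)]
    return sum(xr[i] * ex[i] for i in range(3))

# Identity rotation, pure horizontal baseline T = (1, 0, 0):
R = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
E = matmul(cross_matrix((1.0, 0.0, 0.0)), R)
# A world point (0, 0, 5) projects to (0, 0) in the left camera and to
# (-0.2, 0) in the right camera, which sits one unit to the right:
print(epipolar_residual([0.0, 0.0, 1.0], [-0.2, 0.0, 1.0], E))  # 0.0
```

A residual of zero confirms the correspondence is geometrically consistent; a mismatched pair yields a nonzero residual, which is exactly how outlier correspondences are rejected in practice.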


The described procedure gives the calibration of a stereo-vision system. There are also ways to estimate the essential or fundamental matrices directly from point correspondences. These methods are known as the 8-point algorithm (Longuet-Higgins 1981, 133-135) and the normalized 8-point algorithm (Hartley 1997, 580-593). Knowing the fundamental matrix is essential to sensing depth from two images. If the stereo-vision system is calibrated, the difference between corresponding points in the left and the right image creates disparity. This is depicted in Figure 21 (Yu 2017, 10).

Figure 21. Geometry of disparity and depth.

From similar triangles, we get:

    z = B f / (x − x').                       (19)

In Figure 22, an example of depth image computation is given (Scharstein and Szeliski 2003, 1-8).

Figure 22. Far-left is the left camera image; right camera image is in the center; computed depth image is on the right (darker regions are closer to the camera).
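Equation (19) turns a per-pixel disparity directly into metric depth. A minimal sketch (the baseline, focal length, and disparities are illustrative assumptions):

```python
def depth_from_disparity(baseline_m, focal_px, disparity_px):
    """Stereo depth from eq. (19): z = B * f / (x - x')."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a point in front of the rig")
    return baseline_m * focal_px / disparity_px

# 12 cm baseline, 700 px focal length:
for d in (70.0, 35.0, 7.0):
    print(f"disparity {d:5.1f} px -> depth {depth_from_disparity(0.12, 700.0, d):.2f} m")
```

The run illustrates why near objects (large disparity) appear darker in the disparity image of Figure 22 and why depth resolution degrades with distance: depth is inversely proportional to disparity.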


APPLICATIONS OF DEPTH SENSING IN ROBOTICS

We have seen in the previous section how a robot can perceive the geometric characteristics of its environment. This section presents some interesting applications of depth measurement in robotics. The focus is on three different applications of depth sensing: first, different object volume measurement approaches; second, the application of depth measurement to path planning for industrial robot arms in human-robot collaboration environments; and third, the fusion of depth sensing with other types of sensing (e.g., visible light and thermal) in the field of 3D environment mapping and modeling.

Volume Measurement

Knowing the volume of an object is essential in many applications, from the automotive industry (Blagojević et al. 2016, 1541-1546) to medicine (Chen and Wang 2015, 221-236) to agriculture (Kongsro 2014, 32-35).

Figure 23. Workflow of the developed system for computing volumes of box-like objects.

In (Ferreira et al. 2014, 24-29), the authors present an interesting approach to measuring the volume of box-like objects using a Kinect™ depth camera, assuming the camera is properly calibrated. The authors use a box as the test object for the volume measurement (Figure 24). The algorithm estimates the planes of the box and maps them to 2D binary images, as presented in Figure 25.


Figure 24. RGB image of the object for the volume measurement.

Figure 25. Planes represented as binary images, with the detected corners depicted and the vertex common to the three orthogonal planes pointed out with an arrow.

After applying morphological image operators to reduce noise in the image, mainly the dilation and closing operators, the algorithm employs the Harris corner detector on each of the three images. This gives a set of points that lie along the edges of each box side. However, one point is common to all three images: the vertex point of the box. The algorithm then proceeds with measuring the width, length and depth of the box, giving the volume of the object as an output, with a reported accuracy of around 15% (Ferreira et al. 2014, 28). This procedure is limited to measuring the volume of certain types of objects. In (Borrmann et al. 2014, 425-440), the authors present the Marching Cubes algorithm with the level-set approach to reconstruct a 3D model of an indoor environment. By incorporating 3D point cloud segmentation using methods presented in (Nguyen and Le 2013, 225-230) and employing 3D model reconstruction, it is possible to obtain a 3D mesh of an object. Finally, the volume of a mesh can be computed using the following (Zhang and Chen 2001, 935-938):


V'_i = (1/6)(−x_i3 y_i2 z_i1 + x_i2 y_i3 z_i1 + x_i3 y_i1 z_i2 − x_i1 y_i3 z_i2 − x_i2 y_i1 z_i3 + x_i1 y_i2 z_i3),

V'_total = Σ_i V'_i,  (20)

where (x_i1, y_i1, z_i1), (x_i2, y_i2, z_i2) and (x_i3, y_i3, z_i3) are the coordinates of the vertices of the i-th triangle in the triangular mesh. Note that V'_total is a signed volume (the sign of each term is determined by checking whether the origin is on the same side as the normal with respect to the triangle), and the actual volume of the 3D mesh is the absolute value of V'_total. This method is essential in process engineering, where production quality control takes the measurements of produced objects into account. This way, time unnecessarily spent on manually measuring the object's volume can be reduced.
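Equation (20) translates directly into code. The sketch below, assuming a closed and consistently oriented triangular mesh, sums signed tetrahedron volumes with respect to the origin and is verified on a hand-built unit cube (12 triangles):

```python
def mesh_volume(vertices, triangles):
    """Eq. (20): sum the signed tetrahedron volumes (w.r.t. the origin) over all
    triangles of a closed, consistently oriented mesh; return the absolute value."""
    total = 0.0
    for i1, i2, i3 in triangles:
        (x1, y1, z1) = vertices[i1]
        (x2, y2, z2) = vertices[i2]
        (x3, y3, z3) = vertices[i3]
        total += (-x3*y2*z1 + x2*y3*z1 + x3*y1*z2
                  - x1*y3*z2 - x2*y1*z3 + x1*y2*z3) / 6.0
    return abs(total)

# Unit cube [0,1]^3 triangulated into 12 outward-oriented triangles
verts = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0),
         (0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1)]
tris = [(0, 2, 1), (0, 3, 2),   # bottom
        (4, 5, 6), (4, 6, 7),   # top
        (0, 1, 5), (0, 5, 4),   # front
        (1, 2, 6), (1, 6, 5),   # right
        (2, 3, 7), (2, 7, 6),   # back
        (3, 0, 4), (3, 4, 7)]   # left
volume = mesh_volume(verts, tris)   # -> 1.0
```

Each term is det(v1, v2, v3)/6, so the loop is just the divergence theorem applied triangle by triangle; flipping the orientation of all faces only flips the sign, which the absolute value removes.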

Path Planning for Industrial Robots

Path planning is one of the essential tasks in modern robotics. The main objective is to move a robot from one configuration to another without interacting with the environment in an unsafe way, i.e., without collision. A typical situation involving an industrial robot in collaboration with humans is depicted in Figure 26.

Figure 26. Collaborative environment between an industrial robot and a human operator.

Depth cameras in this case sense the environment, creating a 3D point cloud representation of all objects involved in the human-robot collaborative task. The robot can then be provided with information about its distance to the human operator and to other elements of the environment. In path planning, we usually refer to these as obstacles. In (Lacevic, Osmankovic, and Ademovic 2016, 70-76), the authors present a novel structure for path planning in the robot's configuration space (C-space). This structure is called a bur, and it takes into account the distance between the robot and the obstacles in the environment for a given robot configuration. The algorithm that the authors propose is based on the RRT-Connect algorithm (Kuffner and LaValle 2000, 995-1001). In follow-up research, the authors present an optimal (Osmanković and Lačević 2016, 2085-2090) and an adaptive variant of this algorithm (Lacevic, Osmankovic, and Ademovic 2017, 1-6). The strength of the proposed algorithm is visible in Figure 27. The figure depicts how the proposed algorithm (RBT) outperforms the original RRT algorithm in terms of C-space exploration, yielding two to three times higher performance in certain scenarios.

Figure 27. RRT path planning versus RBT path planning.
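The bur-based RBT planner itself is beyond a short example, but the RRT baseline it is compared against can be sketched in a few lines. This is a minimal 2D RRT with a single circular obstacle; the workspace bounds, step size and goal bias below are illustrative choices, not values from the cited papers:

```python
import math
import random

OBSTACLES = [(5.0, 5.0, 1.5)]   # one circular obstacle: (cx, cy, radius)

def collision_free(p, q, obstacles=OBSTACLES, steps=10):
    # Check the segment p-q against circular obstacles by dense sampling
    for k in range(steps + 1):
        u = k / steps
        x, y = p[0] + u * (q[0] - p[0]), p[1] + u * (q[1] - p[1])
        if any((x - cx) ** 2 + (y - cy) ** 2 <= r * r for cx, cy, r in obstacles):
            return False
    return True

def rrt(start, goal, step=0.5, iters=3000, goal_bias=0.1, seed=1):
    random.seed(seed)
    nodes, parent = [start], {start: None}
    for _ in range(iters):
        sample = goal if random.random() < goal_bias else \
            (random.uniform(0, 10), random.uniform(0, 10))
        near = min(nodes, key=lambda n: (n[0] - sample[0]) ** 2 + (n[1] - sample[1]) ** 2)
        d = math.hypot(sample[0] - near[0], sample[1] - near[1])
        if d == 0:
            continue
        u = min(1.0, step / d)
        new = (near[0] + u * (sample[0] - near[0]), near[1] + u * (sample[1] - near[1]))
        if new in parent or not collision_free(near, new):
            continue
        nodes.append(new)
        parent[new] = near
        # Try to connect to the goal once within one step of it
        if math.hypot(new[0] - goal[0], new[1] - goal[1]) <= step and collision_free(new, goal):
            parent[goal] = new
            path = [goal]
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return path[::-1]
    return None

path = rrt((1.0, 1.0), (9.0, 9.0))
```

The bur structure replaces the single fixed-length extension step with a set of longer, provably collision-free extensions derived from the distance to obstacles, which is where the exploration speed-up in Figure 27 comes from.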

Fusion of Depth Cameras and Other Sensors

Sensor fusion deals with integrating different types of information obtained from different types of sensors. It is essential in robotics applications ranging from robot navigation (Kam, Zhu, and Kalata 1997, 108-119) to simultaneous localization and mapping (Fang, Ma, and Dai 2005, 1837-1841) to computer vision (Fay et al. 2000, 1-8). Fusion of depth sensing with other visual sensors has been studied since the emergence of affordable depth sensors. In (Vidas, Moghadam, and Bosse 2013, 2311-2318), the authors present a method for 3D thermal model reconstruction that employs a Kinect™ depth camera and a thermal imaging camera. This method performs 3D thermal mapping of the environment, incorporating color, thermal-infrared and depth information simultaneously. The authors target the system towards the application of continuous and non-destructive monitoring of building interiors for energy efficiency assessment. The resulting 3D thermal model is presented in Figure 28.

Figure 28. Demonstration of fusion scheme for texturing a 3D model with thermal data.
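The texturing idea behind Figure 28 can be sketched as a simple pinhole projection: each 3D point is projected into the thermal camera and assigned the temperature of the pixel it lands on. The intrinsics, pose and test scene below are hypothetical, not the calibration from the cited work:

```python
import numpy as np

def texture_with_thermal(points_xyz, thermal_img, K, R, t):
    """Assign each 3D point the thermal reading of the pixel it projects onto.
    Points behind the camera or outside the image get NaN."""
    cam = (R @ points_xyz.T).T + t              # world -> thermal camera frame
    temps = np.full(len(points_xyz), np.nan)
    in_front = cam[:, 2] > 0
    uvw = (K @ cam[in_front].T).T               # pinhole projection
    uv = np.rint(uvw[:, :2] / uvw[:, 2:]).astype(int)
    h, w = thermal_img.shape
    ok = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    temps[np.flatnonzero(in_front)[ok]] = thermal_img[uv[ok, 1], uv[ok, 0]]
    return temps

K = np.array([[100.0, 0, 50], [0, 100.0, 50], [0, 0, 1]])
thermal_img = np.arange(100 * 100, dtype=float).reshape(100, 100)
pts = np.array([[0.0, 0.0, 1.0],     # projects to pixel (50, 50)
                [0.2, 0.0, 1.0],     # projects to pixel (70, 50)
                [0.0, 0.0, -1.0]])   # behind the camera -> NaN
temps = texture_with_thermal(pts, thermal_img, K, np.eye(3), np.zeros(3))
```

A full pipeline additionally handles occlusion (a point hidden from the thermal camera must not inherit the temperature of the surface in front of it), which this sketch deliberately omits.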

Figure 29. Laser scan with reflectance (left), thermal (middle) and color (right) information.


Figure 30. Top: Reconstructed 3D model of an office with furniture, lamps, computers, etc. Bottom: Colored point cloud of the same office.

This method is limited by the range of the Kinect™ sensor. The typical range of depth sensing is up to a few meters, which limits the application of this sensor setup to small-scale environments. To overcome this problem, the authors in (Borrmann et al. 2012, 31-38) and (Borrmann et al. 2014, 425-440) propose a system based on a mobile platform with a rotating 3D laser scanner and thermal and RGB cameras. This is depicted in Figure 29. The authors also propose a method for full 3D thermal model reconstruction. The output of this method is similar to that in (Vidas, Moghadam, and Bosse 2013, 2311-2318), and an example is given in Figure 30. The proposed method can deal with large-scale point clouds that are orders of magnitude larger in volume. This renders the method suitable for 3D thermal mapping of large residential buildings, factories, etc. A method for the analysis of such 3D models and the detection of possible heat sources is presented in (Osmanković and Velagić 2013, 1-6). If done from the outside of buildings, as demonstrated in (Borrmann, Elseberg, and Nüchter 2013, 173-182), it can be used to detect possible heat leaks and other losses of heat energy to improve the energy efficiency of buildings.

CONCLUSION

This chapter provides an insight into depth sensing and its application to robotics in Industry 4.0. In the introduction, the basis of depth sensing is given. A short description of human eyesight is presented, along with the fundamental characteristic of depth sensing in humans and animals with similarly designed eyesight. This characteristic is called stereopsis, and it is fundamental to depth sensing and the 3D geometric representation of environments. After introducing natural occurrences of depth sensing, the chapter proceeds with its mathematical description and with how engineering mimics this principle. The pinhole camera model is a mathematical model of eyesight, but for proper sensing of the environment the camera needs to be calibrated. Calibration is discussed in the chapter as well, since it is essential for stereo-vision, which is based on the stereopsis principle. Stereo-vision is discussed in detail in this chapter, but it is not the only way of measuring distances in 3D. Several more methods, and devices based on these methods, are presented, i.e., depth cameras and 3D LiDARs, along with their principles of operation. The final part of this chapter deals with applications of 3D vision and sensors. The first example deals with measuring object volume using depth-sensing cameras. This is essential in quality control on industrial production lines, where this process is usually done manually. The second example gives an insight into how depth sensing can create a safe collaborative environment for humans and robots. And the final example explains how sensor fusion of depth sensors with additional information can be used in the reconstruction of 3D environments; the examples primarily cover the fusion of depth sensors with thermal cameras.
These systems are used to generate reliable 3D thermal representations of indoor and outdoor environments that can be used to assess the thermal efficiency of residential buildings and factories. This is an increasingly important problem that needs to be addressed in the future.

REFERENCES

Ayala, Francisco J. 2007. “Darwin’s Greatest Discovery: Design without Designer.” Proceedings of the National Academy of Sciences 104 (suppl 1): 8567–8573. doi:10.1073/pnas.0701072104.


Barbero, Basilio Ramos, and Elena Santos Ureta. 2011. “Comparative Study of Different Digitization Techniques and Their Accuracy.” Computer-Aided Design 43 (2): 188–206.
Bitsch, Clemens. n.d. “SICK AG WHITEPAPER.”
Blagojević, Milan, Dragan Rakić, Marko Topalović, and Miroslav Živković. 2016. “Optical Coordinate Measurements of Parts and Assemblies in Automotive Industry.” Technical Gazette 23 (5): 1541–1546.
Borrmann, Dorit, Andreas Nüchter, Marija Djakulović, Ivan Maurović, Ivan Petrović, Dinko Osmanković, and Jasmin Velagić. 2012. “The Project Thermalmapper – Thermal 3D Mapping of Indoor Environments for Saving Energy.” IFAC Proceedings Volumes 45 (22): 31–38.
Borrmann, Dorit, Andreas Nüchter, Marija Ðakulović, Ivan Maurović, Ivan Petrović, Dinko Osmanković, and Jasmin Velagić. 2014. “A Mobile Robot Based System for Fully Automated Thermal 3D Mapping.” Advanced Engineering Informatics 28 (4): 425–440.
Borrmann, Dorit, Jan Elseberg, and Andreas Nüchter. 2013. “Thermal 3D Mapping of Building Façades.” In Intelligent Autonomous Systems 12, 173–182. Springer.
Borrmann, Dorit, Robin Heß, HamidReza Houshiar, Daniel Eck, Klaus Schilling, and Andreas Nüchter. 2015. “Robotic Mapping of Cultural Heritage Sites.” International Archives of the Photogrammetry, Remote Sensing & Spatial Information Sciences. doi:10.5194/isprsarchives-XL-5-W4-9-2015.
Bostelman, Roger, Tsai Hong, and Raj Madhavan. 2005. “Obstacle Detection Using a Time-of-Flight Range Camera for Automated Guided Vehicle Safety and Navigation.” Integrated Computer-Aided Engineering 12 (3): 237–249. doi:10.3233/ICA-2005-12303.
Büttgen, Bernhard, Thierry Oggier, Michael Lehmann, Rolf Kaufmann, and Felix Lustenberger. 2005. “CCD/CMOS Lock-in Pixel for Range Imaging: Challenges, Limitations and State-of-the-Art.” 1st Range Imaging Research Day, 21–32.
Chen, Xiaona, and Jianping Wang. 2015. “Breast Volume Measurement by Mesh Projection Method Based on 3D Point Cloud Data.” International Journal of Clothing Science and Technology 27 (2): 221–236. doi:10.1108/IJCST-11-2013-0124.
Fang, Fang, Xudong Ma, and Xianzhong Dai. 2005. “A Multi-Sensor Fusion SLAM Approach for Mobile Robots.” In IEEE International Conference on Mechatronics and Automation, 2005, 4:1837–1841. IEEE.
Fay, David A, Allen M Waxman, Mario Aguilar, David B Ireland, JP Racamato, WD Ross, William W Streilein, and MI Braun. 2000. “Fusion of Multi-Sensor Imagery for Night Vision: Color Visualization, Target Learning and Search.” In Proceedings of the Third International Conference on Information Fusion, 1:TUD3–3. IEEE.


Ferreira, Beatriz Quintino, Miguel Griné, Duarte Gameiro, João Paulo Costeira, and Beatriz Sousa Santos. 2014. “VOLUMNECT: Measuring Volumes with Kinect.” In Three-Dimensional Image Processing, Measurement (3DIPM), and Applications 2014, 9013:901304. International Society for Optics and Photonics.
Fetić, Azra, Davor Jurić, and Dinko Osmanković. 2012. “The Procedure of a Camera Calibration Using Camera Calibration Toolbox for MATLAB.” In 2012 Proceedings of the 35th International Convention MIPRO, 1752–1757. IEEE.
Fofi, David, Tadeusz Sliwa, and Yvon Voisin. 2004. “A Comparative Survey on Invisible Structured Light.” In Machine Vision Applications in Industrial Inspection XII, 5303:90–98. International Society for Optics and Photonics. doi:10.1117/12.525369.
Hartley, Richard I. 1997. “In Defense of the Eight-Point Algorithm.” IEEE Transactions on Pattern Analysis and Machine Intelligence 19 (6): 580–593. doi:10.1109/34.601246.
Kam, Moshe, Xiaoxun Zhu, and Paul Kalata. 1997. “Sensor Fusion for Mobile Robot Navigation.” Proceedings of the IEEE 85 (1): 108–119. doi:10.1109/JPROC.1997.554212.
Kolb, H. 2005. “Gross Anatomy of the Eye.” In Webvision: The Organization of the Retina and Visual System, edited by H. Kolb, E. Fernandez, and R. Nelson. Salt Lake City (UT): University of Utah Health Sciences Center. https://www.ncbi.nlm.nih.gov/books/NBK11534/.
Kongsro, Jørgen. 2014. “Estimation of Pig Weight Using a Microsoft Kinect Prototype Imaging System.” Computers and Electronics in Agriculture 109: 32–35. doi:10.1016/j.compag.2014.08.008.
Kuffner, James J, and Steven M LaValle. 2000. “RRT-Connect: An Efficient Approach to Single-Query Path Planning.” In Proceedings 2000 ICRA. Millennium Conference. IEEE International Conference on Robotics and Automation. Symposia Proceedings (Cat. No. 00CH37065), 2:995–1001. IEEE.
Lacevic, Bakir, Dinko Osmankovic, and Adnan Ademovic. 2016. “Burs of Free C-Space: A Novel Structure for Path Planning.” In 2016 IEEE International Conference on Robotics and Automation (ICRA), 70–76. IEEE.
———. 2017. “Path Planning Using Adaptive Burs of Free Configuration Space.” In 2017 XXVI International Conference on Information, Communication and Automation Technologies (ICAT), 1–6. IEEE.
Levinson, Jesse, Jake Askeland, Jan Becker, Jennifer Dolson, David Held, Soeren Kammel, J Zico Kolter, et al. 2011. “Towards Fully Autonomous Driving: Systems and Algorithms.” In 2011 IEEE Intelligent Vehicles Symposium (IV), 163–168. IEEE.
Longuet-Higgins, H Christopher. 1981. “A Computer Algorithm for Reconstructing a Scene from Two Projections.” Nature 293 (5828): 133–135. doi:10.1038/293133a0.


Nguyen, Anh, and Bac Le. 2013. “3D Point Cloud Segmentation: A Survey.” In 2013 6th IEEE Conference on Robotics, Automation and Mechatronics (RAM), 225–230. IEEE.
Nityananda, Vivek, and Jenny CA Read. 2017. “Stereopsis in Animals: Evolution, Function and Mechanisms.” Journal of Experimental Biology 220 (14): 2502–2512. doi:10.1242/jeb.143883.
Osmanković, Dinko, and Bakir Lačević. 2016. “Rapidly Exploring Bur Trees for Optimal Motion Planning.” In 2016 IEEE International Conference on Systems, Man, and Cybernetics (SMC), 2085–2090. IEEE.
Osmanković, Dinko, and Jasmin Velagić. 2013. “Detecting Heat Sources from 3D Thermal Model of Indoor Environment.” In 2013 XXIV International Conference on Information, Communication and Automation Technologies (ICAT), 1–6. IEEE.
Prince, Simon JD. 2012. Computer Vision: Models, Learning, and Inference. Cambridge University Press.
Pryor, Helen B. 1969. “Objective Measurement of Interpupillary Distance.” Pediatrics 44 (6): 973–977.
Scharstein, Daniel, and Richard Szeliski. 2003. “High-Accuracy Stereo Depth Maps Using Structured Light.” In 2003 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 1:I–I. IEEE.
Schmidt, Mirko, and Bernd Jähne. 2009. “A Physical Model of Time-of-Flight 3D Imaging Systems, Including Suppression of Ambient Light.” In Workshop on Dynamic 3D Imaging, 1–15. Springer.
Siegwart, Roland, Illah Reza Nourbakhsh, and Davide Scaramuzza. 2011. Introduction to Autonomous Mobile Robots. MIT Press.
Vidas, Stephen, Peyman Moghadam, and Michael Bosse. 2013. “3D Thermal Mapping of Building Interiors Using an RGB-D and Thermal Camera.” In 2013 IEEE International Conference on Robotics and Automation, 2311–2318. IEEE.
Wasenmüller, Oliver, and Didier Stricker. 2016. “Comparison of Kinect v1 and v2 Depth Images in Terms of Accuracy and Precision.” In Asian Conference on Computer Vision, 34–45. Springer.
Yahav, Giora, Gabi J Iddan, and David Mandelboum. 2007. “3D Imaging Camera for Gaming Application.” In 2007 Digest of Technical Papers International Conference on Consumer Electronics, 1–2. IEEE.
Yu, Xinyuan. 2017. “Efficient Stereo Camera Based Large Scale Semantic Mapping.” Master’s Thesis, ETH Zurich, Department of Computer Science.
Zhang, Cha, and Tsuhan Chen. 2001. “Efficient Feature Extraction for 2D/3D Objects in Mesh Representation.” In Proceedings 2001 International Conference on Image Processing (Cat. No. 01CH37205), 3:935–938. IEEE.

In: Industrial Robots: Design, Applications and Technology ISBN: 978-1-53617-779-4 Editors: Isak Karabegović et al. © 2020 Nova Science Publishers, Inc.

Chapter 6

ROBOT ACTUATORS

Safet Isić* and Ermin Husak

Faculty of Mechanical Engineering, “Džemal Bijedić” University of Mostar, Mostar, Bosnia and Herzegovina
Technical Faculty, University of Bihać, Bihać, Bosnia and Herzegovina

ABSTRACT

An actuator is a device which converts an input energy source into mechanical work. There are several types of actuators depending on the input energy source: hydraulic, pneumatic, AC, DC and stepper actuators. This chapter classifies the available actuators based on the type of energy source used. The working principle, general design, practical designs and usage of each of these actuator types are introduced.

Keywords: pneumatic actuators, hydraulic actuators, electrical actuators

INTRODUCTION

Motors that produce movement in the joints of a robot are called drives or actuators. There are three main types of actuators in use (Coiffet, 2006, 83-107):

- pneumatic,
- hydraulic and
- electric.

* Corresponding Author’s E-mail: [email protected].


The pneumatic actuator is used in simple manipulators that serve to operate a machine, i.e., to perform simple repetitive operations without considering the trajectory between the start and end points. The robot joints are driven by cylinders powered by compressed air. This actuator allows the device to work quickly and reliably, but it is hard to control intermediate positions of the movement. The first Unimate robot had a hydraulic actuator, and at that stage of robotics development this form of actuator was dominant. A hydraulic actuator enables more accurate control of the joint structure thanks to servo valves (manifolds), which establish proportionality between the flow of oil and the control signal. Today electric actuators are most commonly used due to their advantages over other actuators. Hydraulic actuators are still in use when it comes to carrying capacities exceeding 100 kg. Electric actuators are used for the load capacity range of 1-100 kg, while pneumatic actuators are used for smaller loads of 0.2-15 kg.

Table 1. Actuator systems comparison

Type of actuator | Capacity (kg) | Positioning accuracy (mm) | Velocity (m/s)
Pneumatic | 0.2 – 15 | 0.1 – 1.0 | 0.3 – 1.0
Hydraulic | 40 – 500 | 0.1 – 2.0 | 0.75 – 5.0
Electric | 1 – 100 | 0.02 – 1 | 0.5 – 10

A comparison of the actuator systems with respect to positioning accuracy, load capacity and speed is given in Table 1 (Coiffet, 2006, 83-107). Robot actuators are tasked with realizing the specified path and with positioning and orienting the end device. The most important requirements that actuators must satisfy are (Sandin, 2003, 1-68):

- low motor weight and volume,
- high torque,
- large angular rotation range,
- high positioning accuracy.
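The capacity column of Table 1 can be read as a rough, table-driven pre-selection of actuator technology for a given payload. This is only an illustrative sketch of that selection logic, not a complete sizing procedure:

```python
def candidate_actuators(payload_kg):
    """Actuator technologies whose Table 1 capacity range covers the payload."""
    table = [("pneumatic", 0.2, 15), ("electric", 1, 100), ("hydraulic", 40, 500)]
    return [name for name, lo, hi in table if lo <= payload_kg <= hi]

candidate_actuators(10)    # -> ['pneumatic', 'electric']
candidate_actuators(200)   # -> ['hydraulic']
```

Where the ranges overlap (e.g., a 10 kg payload), the remaining columns of Table 1 (positioning accuracy and velocity) decide between the candidates.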

A schematic representation of an actuator system is given in Figure 1.


Figure 1. Schematic representation of actuator system.

PNEUMATIC ACTUATORS

Pneumatic actuators use the potential energy contained in a fluid (air) to drive the joint. Pneumatics are suitable for lower powers: since air is compressible and high pressures carry a danger of explosion, large cylinder diameters would be required to achieve higher forces. This type of actuator has its advantages and disadvantages.

The advantages of a pneumatic actuator are (Coiffet, 2006, 83-107):

1. relatively cheap,
2. simple construction,
3. fast reaction time,
4. can make straight and turning movements,
5. resistant to overload, flammability, high temperatures, radiation and electromagnetic interference.

The disadvantages of the pneumatic drive are (Coiffet, 2006, 83-107):

1. noise at work,
2. uncontrolled movement velocity,
3. point-to-point actuation,
4. poor positioning,
5. the need for drying, cleaning and lubrication,
6. lower ability to carry heavy loads,
7. shorter working life.

Mechanical stops are used to stop the robot and to define its motion. The pneumatic system consists of the following components:

1. piston and
2. cylinder.


Figure 2. Pneumatic cylinder.

Figure 2 shows a linear piston: a piston that moves from one limit of its stroke to the other depending on the direction of the propulsion air (Coiffet, 2006, 83-107). Adjusting the air flow allows the piston shaft to be braked at the limit. There are also rotary pistons, but these are essentially linear pistons whose shaft carries a toothed rack that drives a gear (known as a rack-and-pinion system). There are several types of manifolds; valve manifolds are most commonly used (Figure 5). In valve manifolds, the valve is driven by a membrane exposed to the control signal and returned to its original position by a spring. Different pneumatic manifold designs are shown in Figure 6.
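The rack-and-pinion conversion mentioned above relates linear stroke to rotation angle by θ = x/r. A small sketch, with hypothetical dimensions:

```python
import math

def pinion_rotation_deg(stroke_m, pinion_radius_m):
    """Rotation angle produced by a rack travelling stroke_m along a pinion."""
    return math.degrees(stroke_m / pinion_radius_m)

# Hypothetical dimensions: 50 mm stroke on a pinion of 20 mm radius
angle = pinion_rotation_deg(0.050, 0.020)   # 2.5 rad, about 143.2 degrees
```

This also shows why a rotary pneumatic joint built this way has a limited rotation range: the achievable angle is bounded by the cylinder stroke.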

Figure 3. Cross section of a pneumatic actuator.

Figure 4. Pneumatic actuator designs.


Figure 5. Pneumatic valve.

Figure 6. Pneumatic manifolds.

Pneumatic actuators are widely used where high forces are not required, due to their simple construction, easy maintenance and low cost. Pneumatics are also used in robot grippers; some examples are given in the following figure.

Figure 7. Pneumatic grippers.


HYDRAULIC ACTUATORS

As mentioned in the introduction, these actuators operate on the basis of compressed oil. Hydraulic actuation can produce high pressures in the cylinder and thus high forces in the joints of the robot. This drive has a limited maximum piston speed in the cylinder, so the robot speed is also limited. As with pneumatic systems, hydraulically controlled systems consist of an energy source (pump), hydraulic lines, manifolds, and the actuator itself (i.e., a linear or rotary piston in robots) (Sandler, 1991, 64-103). The main difference is that the manifold can be proportionally controlled, which allows the conventional use of hydraulic servo systems, while for pneumatic systems the manifold operates on an “on-off” principle.

Linear Cylinders

There are three types of linear cylinders (Coiffet, 2006, 83-107):

- single acting piston,
- double acting piston and
- differential piston.

Figure 8. Single acting hydraulic cylinder.

For a single acting cylinder (Figure 8), the force developed is one-way. A return device (e.g., a spring) restores the piston shaft to its initial position. The double acting piston has two chambers in which the pressures p1 and p2 can be alternately applied (Figure 9) (Coiffet, 2006, 83-107). Note that the presence of a piston shaft in only one chamber means that the piston is asymmetrical with respect to the pressure required to obtain the same movement to the right and to the left.

Figure 9. Double acting hydraulic cylinder.

A differential piston is used for long piston strokes (Figure 10). The cross-section s of the piston shaft is equal to half the area of the piston body S, i.e., S = 2s. The force developed by the cylinder is:

F = s(2p − pa)  (1)

Figure 10. Hydraulic differential cylinder.
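Equation (1) can be checked with a small numeric sketch; the piston area and pressures below are hypothetical values for illustration:

```python
def differential_cylinder_force(s_m2, p_pa, pa_pa):
    """Eq. (1): F = s(2p - pa), with the shaft cross-section s and S = 2s."""
    return s_m2 * (2 * p_pa - pa_pa)

# Hypothetical values: s = 5 cm^2, supply pressure p = 10 MPa, pa = 0.1 MPa
force_N = differential_cylinder_force(5e-4, 10e6, 0.1e6)   # about 9950 N
```

The example illustrates why hydraulic drives dominate at high payloads: a modest 5 cm² shaft section at 10 MPa already develops almost 10 kN.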


Rotary Engine

As is the case with pneumatic pistons, it is possible to mechanically transform the linear movement of a hydraulic piston into rotational motion. There is, however, a piston type that is made to rotate (Sandin, 2003, 1-68): the rotary piston (Figure 11).

Figure 11. Hydraulic rotary engine.

Figure 12. Cross section of hydraulic engine.

The simplest version consists of a cylinder with a fixed stop. Inside it, a movable vane forms a unit with the output shaft. The stroke of the piston is limited by the dimensions of the stop to about 330 degrees. Besides the rotary engine (pump), there are also axial piston, radial piston and gear hydraulic engines (pumps). Since hydraulic pumps are not often used as a drive for robots, we will not describe them in more detail.


Manifolds

Manifolds are actually valves that interconnect the lines between the power source and the operating units (Coiffet, 2006, 83-107). The type of manifold is defined by the number of connection points and the number of working positions.

Figure 13. Hydraulic manifold.

A typical example of a manifold is shown in Figure 13. The role of this manifold is to meter the flow q entering the chamber with a known pressure (pa + pb)/2. As shown in Figure 13, pistons of length d are matched to an opening of length d′, and three different situations can occur: d = d′ (manifold without leakage or return), d > d′ (manifold with return) and d < d′ (manifold with leakage).

Figure 14. Hydraulic manifolds.

Servo Valves

A servo valve is a device that drives a manifold piston, adjusting the flow rate in proportion to the electrical control signal. This is essential for hydraulic servo control. A hydraulic servo valve is a complex device characterized by many parameters (Sandler, 1991, 64-103). The most important parameters are:


1. flow gain, the ratio between the flow through the valve and the control current without any load (short-circuited operating openings);
2. pressure gain, the ratio between the pressure difference and the control current when the openings are closed;
3. flow-pressure curves (characteristics), the relationship between flow and pressure difference when the current is constant.

Figure 15. Hydraulic servo valve.

Servo-Regulated Hydraulic Systems

Consider a system consisting of a servo valve regulated by a current I, which drives the manifold to a displacement U, and thus produces a motion x of a linear piston. Considering the geometric complexities and imperfections, and the dynamic properties of the fluid, neither component has a simple, linear model (Sandin, 2003, 1-68). In practice, approximate models are used, for example, expressing the relationship between the current I used to regulate the servo valve and the manifold displacement U as a second-order transfer function (Coiffet, 2006, 83-107):

T1(p) = U(p)/I(p) = K1 / (1 + 2ξ1·p/ω1 + p²/ω1²),  (2)

where:

ξ1 - damping (braking) coefficient, the value of which is approximately 1,
ω1 - natural angular frequency.

In the same way, the relationship between the motion of the piston x and the displacement of the manifold U can be represented by the transfer function:

T2(p) = X(p)/U(p) = K2 / [p·(1 + 2ξ2·p/ω2 + p²/ω2²)],  (3)

where ω2 ≪ ω1. The overall transfer function is:

T(p) = X(p)/I(p) = T1(p)·T2(p).  (4)
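Equations (2)-(4) can be evaluated numerically as frequency responses by substituting p = jω. The gains, damping coefficients and natural frequencies below are hypothetical placeholders, chosen only so that ω2 ≪ ω1 holds as stated:

```python
def T1(p, K1=1.0, xi1=1.0, w1=100.0):
    # Eq. (2): servo valve, second-order response
    return K1 / (1 + 2 * xi1 * p / w1 + p ** 2 / w1 ** 2)

def T2(p, K2=1.0, xi2=1.0, w2=10.0):
    # Eq. (3): manifold-to-piston, second-order response with a free integrator
    return K2 / (p * (1 + 2 * xi2 * p / w2 + p ** 2 / w2 ** 2))

def T(p):
    # Eq. (4): overall open-loop transfer function
    return T1(p) * T2(p)

# Frequency response at w = 1 rad/s (p = jw), well below both w1 and w2:
gain = abs(T(1j * 1.0))
```

The free integrator in T2 (the factor p in the denominator) is what makes the piston position, rather than its velocity, follow the valve command, and it dominates the low-frequency response.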

Figure 16. Servo controlled hydraulic system.

This is a standard system that functions in open loop; an overview of this system is given in Figure 16. Servo control can also be achieved mechanically by connecting the piston to the manifold, so that the access openings to the piston chambers depend on the fed-back load position (Figure 17).

Figure 17. Mechanical servo control system.

The addition of an electric motor actuator creates an electro-hydraulic system with positional servo control (Figure 18).


Figure 18. Electro-hydraulic system with power steering.

Figure 19. Different hydraulic actuators designs.

ELECTRIC ACTUATORS

The electric robot actuator was first introduced in 1974 by the Swedish company ASEA. Since then, electric motors have been increasingly used as robot actuators. The advantages of electric motors are the universal availability of electricity and the ease of connection; their control is simple, accurate and reliable. There is no problem here, as with hydraulics, with oil leaks and pollution, and the noise produced by these motors is low (Sandin, 2003, 1-68). The disadvantage of these motors is the poor power-to-weight ratio: electric motors require magnetic materials that increase the weight of the motor. One of the problems is overheating of the motor due to high current density and magnetic flux. Electric motors are dangerous for use in explosive environments. Any type of motor can be used, but today the following types are used in robots:

- DC motors,
- AC motors and
- stepper motors.


DC Motors

In recent years, DC motors have played a leading role in automation and regulation because they offer convenient capabilities for fast and fine speed control (Sandler, 1991, 64-103). The DC motor has three main parts:

1. stator,
2. rotor,
3. collector (commutator).

The stator is designed as a hollow cast-iron cylinder. On the inside of the stator yoke there are magnetic poles with excitation coils, and on the stator sides there are end shields with shaft bearings. The magnetic flux lines exit the north pole N, pass over the rotor, enter the south pole S and return to the north pole N. The stator is made from a solid piece because it is exposed to a direct-current magnetic field, so there are no eddy current losses and no hysteresis losses.

Figure 20. DC motor construction.

The rotor is made of laminated dynamo sheets and fixed to the shaft. In its grooves lies the armature winding, whose ends are connected to the collector blades. The collector is located next to the rotor on the motor shaft and consists of copper blades that are insulated from each other and from the shaft; the brushes slide along them. The brushes are made of a softer material than the collector, such as hard carbon, graphite carbon or metallic carbon. They must lie on the collector with a specified pressure over the whole contact surface and must not exceed 2-3 blades in width.


Figure 21. Magnetic flux.

The principle of operation of a DC motor can be explained by the rotation of an armature loop whose ends are connected to the commutator (Sandler, 1991, 64-103). The stator has two poles, which create a magnetic flux (Figure 21).

Figure 22. The forces and moment on the coil.

When a current i flows through the turns of the armature coil located in the stator magnetic field, the so-called Lorentz force is created. These forces, on the diametrically opposite sides of the rotor, form the torque that rotates the rotor (Figure 22).
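The force on each active conductor is F = B·i·L, and the pair of forces acting at the rotor radius forms the torque. A small sketch with hypothetical motor dimensions:

```python
def coil_torque(B_tesla, i_amp, active_len_m, radius_m, n_turns=1):
    """Torque at the position of maximum torque: F = B*i*L on each of the two
    active sides of the armature loop, acting at the rotor radius."""
    force = B_tesla * i_amp * active_len_m        # Lorentz force per conductor
    return 2 * n_turns * force * radius_m

# Hypothetical motor: B = 0.5 T, i = 2 A, active length 10 cm, radius 3 cm
tau = coil_torque(0.5, 2.0, 0.10, 0.03)   # 0.006 N*m
```

Real windings have many turns, so n_turns multiplies the single-loop torque; the commutator described below keeps the sign of this torque constant as the loop rotates.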

Figure 23. DC motor designs.

As the armature loop passes through the neutral zone, its conductors enter a magnetic field of the opposite sign. At the same time, however, the commutator blades under the brushes change, causing the direction of the current through the armature loop to change, so that the sign of the torque does not change (Sandler, 1991, 64-103). Thanks to the commutator, the motor rotor rotates continuously in one direction.

Robot Actuators


When permanent magnets are used as inductors, lightness, high induction and stability of the induction under temperature variations should ideally be combined. There is still no ideal magnetic material, but three types of permanent magnets are currently in use (Sandin, 2003, 1-68):

1. alnico magnets (an alloy containing different amounts of iron, aluminum, cobalt, nickel and copper; induction is about 1 tesla),
2. ferrite magnets (compounds of iron powder and barium or strontium oxide; induction is about 0.5 tesla),
3. rare-earth and cobalt magnets (e.g., samarium-cobalt), which give the best results but are expensive.

The DC engines used in robotics usually have permanent magnets in the inductor (Sandin, 2003, 1-68). There are three main types:

1. standard engines, in which the armature coil is wound on a magnetic material (Figure 20),
2. bell-shaped engines, in which the armature conductors are attached to an insulated cylinder, and
3. disk engines, in which the armature conductors are attached or wound on an insulated disk.

In a disk engine, current flows through radially positioned conductors in a thin disk. The stator magnetic field is axially oriented and is formed by permanent magnets on either side of the rotor disk. The low inertia of the rotor makes disk engines useful especially where high acceleration and good positioning are necessary. The characteristic of bell engines is high torque in the low-speed range, which makes them suitable for direct drive. To avoid the sparking caused by brushes, brushless DC engines (BLDC) have recently been used (Sandler, 1991, 64-103). In these engines, the roles of the stator and rotor are exchanged: permanent magnets are mounted on the rotor, and the coils through which the current flows are on the stator. Since there is no commutator, the switching of the current is provided by a special electronic circuit called the electronic commutator.
Compared with conventional permanent-magnet engines, brushless engines have higher reliability, higher efficiency, lower weight and dimensions at the same power, minimal maintenance, excellent thermal protection and a wider speed-control range.


AC Engines

There are two types of AC engines (Sandin, 2003, 1-68):

- asynchronous and
- synchronous.

These engines have good drive characteristics, but they have been used less in robotics because their speed is harder to control. The development of electronics has recently enabled new AC servo engines with good speed control. Asynchronous engines are predominantly used as motors, and very rarely as generators. The asynchronous engine is so named because the speed of the rotating magnetic flux and the speed of the rotor are not the same, as is the case with synchronous engines (Sandin, 2003, 1-68). The asynchronous engine is manufactured in series production as a single-phase or three-phase machine; it is very easy to manufacture and maintain, at a relatively low production cost. The operation of the induction engine is based on a rotating magnetic flux. The design of the asynchronous engine is, as far as the stator is concerned, exactly the same as that of the synchronous machine, while the difference is in the rotor. The stator is made in the form of a hollow cylinder of dynamo sheets, with grooves along the inside in which a three-phase coil is placed. The engine housing serves as support and protection for the sheets and coils, and is made of cast iron, silicon steel, etc. On the sides there are bearing shields in the form of covers, where the bearings for the shaft carrying the rotor are located.

Figure 24. Asynchronous engine.

The rotor is assembled similarly to the stator and consists of a shaft and a rotor package. The rotor package is made in the form of a cylinder of dynamo sheets, with grooves in the longitudinal direction on the outside of the cylinder to accommodate the rotor coil (Sandler, 1991, 64-103). If the rotor coil is made of rods of copper, brass, bronze or aluminum, which are short-circuited on both sides by rings and resemble a cage, it is a cage induction engine. If the rotor coil is made like the stator coil, that is, as windings connected to three slip rings through which brushes connect the rotor to external resistors, it is a slip-ring induction engine. By connecting the stator (primary) coil to a three-phase alternating network, a three-phase alternating current flows through the three-phase coil and creates a rotating magnetic field that rotates at speed ns and closes through the stator and the rotor (secondary) coil. The field lines crossing the stator and rotor conductors create an induced EMF (electromotive force) E1 that maintains a balance with the connected grid voltage, while the induced EMF E2 in the rotor coil drives a current I2. This current generates a magnetic field around the conductors which, together with the rotating magnetic field, produces a resultant field, and this creates mechanical forces that produce a torque on the shaft. The direction of rotation of the rotating magnetic field and the direction of rotation of the rotor are the same. To change the direction of rotation of the rotor, the direction of rotation of the rotating magnetic flux must be changed by swapping two phases. The rotor speed n is always less than the synchronous speed ns at which the rotating magnetic field rotates, and it depends on the load on the engine. The rotor can never reach synchronous speed: if it did, there would be no difference in speed between the rotating magnetic flux and the rotor, no crossing of the rotor coil by the field lines, no induced EMF in the rotor coil, and therefore no mechanical force on the conductors and no torque. The rotor always turns asynchronously, which gives this engine its name.
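The slip relation described in the paragraph above can be written down directly; the grid frequency, number of pole pairs and rotor speed below are illustrative assumptions:

```python
# Synchronous speed ns = 60*f/p [rpm] for frequency f [Hz] and p pole pairs,
# and slip s = (ns - n)/ns for a rotor turning at n [rpm].

def synchronous_speed(f_hz, pole_pairs):
    return 60.0 * f_hz / pole_pairs   # rpm

def slip(ns, n):
    return (ns - n) / ns

if __name__ == "__main__":
    ns = synchronous_speed(50.0, 2)   # 50 Hz grid, 2 pole pairs -> 1500 rpm
    s = slip(ns, 1440.0)              # rotor at 1440 rpm
    print(f"ns = {ns:.0f} rpm, slip = {s:.3f}")  # ns = 1500 rpm, slip = 0.040
```

A slip of zero would mean the rotor runs at synchronous speed, which, as explained above, would leave no induced EMF and no torque.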

Figure 25. Slip-ring asynchronous engine.

The three-phase coil is connected in a star or delta (triangle) connection on the terminal box, as in Figure 26.


Figure 26. Coil connection.

Since synchronous engines are used more as generators than as engines, we will not specifically describe them.

Figure 27. AC engine.

Stepper Engine

A stepper engine is a special type of engine that is easily controlled by a computer. It is very similar to a synchronous engine (Sandin, 2003, 1-68). Its rotor can be:

- a permanent magnet,
- made of soft iron, or
- wound, with windings and brushes.

According to the number of phases it can be two-phase, three-phase, four-phase or five-phase. The number of steps per full circle depends on the construction and ranges from 10 to 500. Stepper engines are made for powers from several W to several kW. They have a major advantage over other types in computer-controlled processes: their high positioning accuracy makes them irreplaceable in applications where accuracy is of great importance. A full turn of a stepper engine consists of a large number of steps; how many steps there are in one full rotation of the rotor depends on the construction of the engine. Important properties of stepper engines are (Sandin, 2003, 1-68):

- positioning accuracy without feedback, using the required number of control pulses,
- high torque at low angular speeds, even at single steps,
- high holding torque in the excited state.

The principle of operation of a stepper engine will be explained using the example of the four-phase engine shown in Figure 28.

Figure 28. Cross-section of a four-phase stepper engine with control scheme.

The engine rotor is made as a permanent magnet of cylindrical shape, while the stator has four poles around which the coils are located. Electromagnetic torque is generated by the interaction of the stator and rotor magnetic fields. By closing switch P1, the S pole of the rotor is positioned below the phase 1 coil (Figure 28). If P1 is switched off and P2 is switched on, the rotor rotates so that its S pole is below the phase 2 coil; the rotor has thus made an angular turn of π/2. In the same way, the successive switching of P1, P2, P3 and P4 produces a displacement of 2π. Reversing the order of switching changes the direction of rotation of the engine. This type of stepping, where only one switch is active at a time, is called full-step operation.


Figure 29. Starting a four-phase permanent-magnet engine in phases 1 - 2 - 3 - 4.

Figure 30. Stepper engines.

The engine can also be controlled in half-steps. In this case, P2 is switched on (together with P1) after phase 1 has been energized and the S pole of the rotor has settled below the active phase. The rotor then makes an angular displacement of π/4. To achieve a full circle, the switches are activated in the following order: P1 - P1P2 - P2 - P2P3 - P3 - P3P4 - P4 - P4P1 - P1. The advantage of half-step control is twice the stepping resolution, that is, the precision of rotor positioning.
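The two switching orders above can be sketched as simple sequences; the code only models the switching order and the step angles, not the engine electronics:

```python
# Full-step and half-step switching sequences for the four-phase stepper
# engine described above. Full-step advances the rotor by pi/2 per step,
# half-step by pi/4.

import math

FULL_STEP = ["P1", "P2", "P3", "P4"]
HALF_STEP = ["P1", "P1P2", "P2", "P2P3", "P3", "P3P4", "P4", "P4P1"]

def rotor_angle(step_index, sequence):
    """Rotor angle in radians after step_index steps of the given sequence."""
    step_angle = 2.0 * math.pi / len(sequence)
    return (step_index * step_angle) % (2.0 * math.pi)

if __name__ == "__main__":
    print(rotor_angle(4, FULL_STEP))  # 0.0 -> one full sequence = one turn
    print(rotor_angle(1, HALF_STEP))  # pi/4, twice the full-step resolution
```

Reversing either sequence reverses the direction of rotation, exactly as described in the text.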


SOLENOIDS

A solenoid is an electromechanical device that converts electrical energy into linear or rotational mechanical motion (Sandler, 1991, 64-103). It consists of a coil that conducts the current and generates a magnetic field, an iron or steel sheath or housing that closes the magnetic circuit, and a piston (plunger) or armature that produces the motion.

Figure 31. Principle of operation of solenoids.

Solenoids are designed so that their conductors produce maximum magnetic flux intensity with minimal electrical input.

Figure 32. Solenoids.

The mechanical movements performed by a solenoid depend on the construction of the piston in a linear solenoid and of the armature in a rotary solenoid.

HARMONIC ACTUATOR

A harmonic actuator (harmonic drive) is a type of gear that is always directly connected to the electric engine in the construction of a robot. Since electric engines have a high angular velocity while the mechanical joint rotations are slow relative to them, the angular velocity must be reduced considerably. If conventional gears were used, which have a limited upper gear ratio, several gear stages would be needed for efficient reduction (Doleček, 2002, 175-196). Therefore, special gearboxes have been developed for these purposes, the most important being the harmonic drive (Kafrissen and Stephons 1984), patented by the American firm Harmonic-Drive-Division. Due to its low weight, large gear ratio and reduced backlash, the harmonic actuator is displacing other types of gears in robots. The harmonic actuator consists of three parts (Figure 33): 1 - wave generator, 2 - elastic ring, 3 - housing.

Figure 33. Elements of harmonic actuator.

A wave generator is an elliptical roller with a ball bearing on its circumference; the input shaft of the drive engine is connected to it. The circular housing has internal teeth, zk in number, and is attached to the skeleton of the robot. Between the wave generator and the housing is placed an elastic ring in the form of a pot made of thin elastic sheet, to which the output shaft is attached. The outer circumference of the elastic ring carries zf teeth, usually two fewer than the number of teeth of the housing. The transmission ratio of the harmonic drive is (Doleček, 2002, 175-196):

$$i = \frac{z_f}{z_k - z_f} \tag{5}$$


Figure 34. The principle of operation of the harmonic actuator.

Figure 34 shows the clockwise rotation of the wave generator in four positions. If the shaft rotates a full circle while pushing the teeth of the ring into the gaps between the teeth of the housing, the ring moves two teeth backwards. Figure 35 shows a complete harmonic actuator assembly with wave generator, elastic ring and housing.
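As a numeric check of equation (5): a housing with only two more teeth than the elastic ring gives a large reduction in a single stage. The tooth counts below are illustrative assumptions:

```python
# Transmission ratio of the harmonic drive from equation (5):
#   i = z_f / (z_k - z_f)
# for a housing with z_k internal teeth and an elastic ring with z_f teeth.

def harmonic_ratio(z_k, z_f):
    return z_f / (z_k - z_f)

if __name__ == "__main__":
    print(harmonic_ratio(202, 200))  # 100.0 -> a 100:1 reduction in one stage
```

A tooth difference of two thus produces a gear ratio that would require several stages of conventional gearing.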

Figure 35. Technical versions of the harmonic actuator.


CONCLUSION

The manipulators used by early machines usually had pneumatic actuators: the manipulator joints were driven by cylinders operating on compressed air. Such a drive enabled reliable and very fast operation of the device. Despite these qualities, pneumatic drives are very rare in modern robots. The problem is that such a drive is difficult to regulate, that is, it is difficult to achieve movement according to a given law. One of the first, and still current, actuator systems in robotics is the hydraulic drive. Its benefits are considerable. First of all, it is possible to achieve high pressures in the cylinders and therefore very high forces in the joints of the robot. Thanks to this ability to achieve high forces, robots with hydraulic drives do not need gearboxes, which greatly simplifies their construction. The hydraulic drive is equally suitable for translational and rotary movements. These actuators are most common in robots used to manipulate objects of higher mass. Notwithstanding these advantages, there is a recent tendency to switch to electrically powered robots. Previously, electric drives were used for light and medium-sized robots, while today heavy robots are increasingly being designed as electric. Their widespread use is due to the fact that their regulation is relatively simple. The hydraulic drive has a limited maximum piston speed in the cylinder, which limits the speed of the robot. In the future, actuators will be sought that resemble human muscles. Research is being done in several places to develop an artificial muscle; there have been attempts to realize it on the basis of compressed air, but the right solution for the construction of an artificial muscle has not been found so far. This robot drive remains an idea that will be realized only in the near or distant future. The development of such an actuator will further increase the universality of robot movement.

REFERENCES

Coiffet, P., and M. Chirouze, 2006: An Introduction to Robot Technology, Hermes Publishing, France.
Sandin, P.E., 2003: Robot Mechanisms and Mechanical Devices Illustrated, McGraw-Hill.
Sandler, B.Z., 1991: Robotics – Designing the Mechanisms for Automated Machinery, Academic Press, London.
Denavit, J., and R.S. Hartenberg, 1955: Kinematic Notation for Lower-Pair Mechanisms Based on Matrices, ASME Journal of Applied Mechanics.


Doleček, V., and I. Karabegović, 2002: Robotics, Technical Faculty of Bihać, Bihać, Bosnia and Herzegovina, 175-196.
Wloka, D.W., 1992: Robotersysteme 1 – Technische Grundlagen, Springer-Verlag, Berlin, Heidelberg. [Robot systems 1 – Technical foundations]
Husty, M., A. Karger, H. Sachs, and W. Steinhilper, 2000: Kinematik und Robotik, Springer-Verlag, Berlin, Heidelberg. [Kinematics and robotics]
Kovačić, Z., V. Laci, and S. Bogdan, 1969: Osnove robotike, FER, Zagreb. [Fundamentals of robotics]
Kreuzer, E.J., and J.B. Lugtenburg, 1994: Industrieroboter-Technik: Berechnung und anwendungsorientierte Auslegung, Springer-Verlag, Berlin, Heidelberg. [Industrial robot technology: calculation and application-oriented design]
Paul, R.P., 1981: "Robot Manipulators: Mathematics, Programming and Control", Conference on Robots and Automation, Philadelphia.
Ranky, P.G., and C.Y. Ho, 1985: Robot Modeling, Springer-Verlag, Berlin, Heidelberg, New York, Tokyo.
Šurina, T., and D. Crneković, 1990: Industrijski roboti, Školska knjiga, Zagreb. [Industrial robots]
Binner, H.F., 1999: Prozessorientierte Arbeitsvorbereitung, Carl Hanser Verlag, Munich, Vienna. [Process-oriented work preparation]
Vukobratović, M., D. Stokić, N. Kirćanski, M. Kirćanski, D. Hristić, B. Karan, D. Vujić, and M. Đurović, 1986: Uvod u robotiku, Institut "Mihajlo Pupin", Beograd. [Introduction to robotics]
Wloka, D.W., 1992: Robotersysteme, Technische Universität des Saarlandes im Stadtwald. [Robot systems]
Acarnley, P.P., 1987: Stepping Motors: A Guide to Modern Theory and Practice, Peter Peregrinus Ltd.
McKerrow, P.J., 1998: Introduction to Robotics, Addison-Wesley Publishing Company.
Yeaple, F., 1984: Fluid Power Design Handbook, Marcel Dekker, New York.
Craig, J.J., 2005: Introduction to Robotics: Mechanics and Control, Prentice Hall.
Spong, M.W., S. Hutchinson, and M. Vidyasagar, 2006: Robot Modeling and Control, Wiley.

In: Industrial Robots: Design, Applications and Technology ISBN: 978-1-53617-779-4 Editors: Isak Karabegović et al. © 2020 Nova Science Publishers, Inc.

Chapter 7

KINEMATICS AND DYNAMICS OF ROBOTS

Emir Nezirić*
Faculty of Mechanical Engineering, "Džemal Bijedić" University of Mostar, Mostar, Bosnia and Herzegovina

ABSTRACT

Modern industrial facilities are impossible to imagine without robots as part of the production line. Precise, powerful and reliable robots are necessary for good-quality products. Another important point is that robots, and whole production lines, have to be flexible and adaptable to new products and production processes. That can be achieved only if it is known how the motion of robots can easily be changed through the equations of motion. Knowledge of the equations of movement and of the forces and moments on each part of the robot is required for controlling its motion. The motion of robots is explained through equations of motion in this chapter.

Keywords: industrial robots, kinematics, dynamics, transformation of coordinates

INTRODUCTION TO KINEMATICS OF ROBOTS

Industrial robots can be modelled as chains of rigid bodies linked by joints, where the beginning of the chain is fixed while the end of the chain, carrying the end effector, is movable (Coiffet and Chirouze 1983, 25-29; Craig 2005, 62-77; Kovačić et al. 2002, 3-9). To achieve its purpose, the end effector must be positioned and oriented in three-dimensional space. The main goal of robot kinematics is to link the characteristics of the joint movements and combine them into the position and orientation of the end effector. The position of the end effector is defined by the coordinates of a reference point on it, and its orientation by the angles of its axes relative to the coordinate axes. Position and orientation formulated in this way are the parameters which define the location of the end effector in the environment in which the robot is acting. The details of the connection between the joint parameters and the position and orientation of the end effector are discussed in what follows.

* Corresponding Author's E-mail: [email protected].

MATRIX OF TRANSFORMATIONS

Point Coordinates Transformation from One Coordinate System to Another

If it is required to observe the mutual positioning of the robot links, as well as the position of the robot itself relative to the ground, it is necessary to analyse the connection between the movable coordinate systems fixed to the robot links (Coiffet and Chirouze 1983, 57-65; Craig 2005, 62-92; Gačo 2002, 36-37).

Figure 1. Point coordinates transformation.

Two coordinate systems will be observed, where the origin of coordinate system (B) is displaced by the vector ${}^{A}\vec{p}_{B}$ with regard to the origin of coordinate system (A), and the axes of system (B) are rotated by some angle relative to system (A).

The radius vector of point M in system (A) can be written as the sum of the radius vector of the origin of system (B) and the radius vector of point M in system (B):

$$\{{}^{A}\vec{p}_{M}\} = \{{}^{A}\vec{p}_{B}\} + \{{}^{B}\vec{p}_{M}\} \tag{1}$$

If equation (1) is written in terms of projections onto the axes of the corresponding coordinate systems and then multiplied by the unit vectors $\vec{i}_A$, $\vec{j}_A$, $\vec{k}_A$ respectively, it is possible to obtain the equation system written in matrix form:

$$\begin{bmatrix} {}^{A}x_M \\ {}^{A}y_M \\ {}^{A}z_M \end{bmatrix} = \begin{bmatrix} (\vec{i}_A,\vec{i}_B) & (\vec{i}_A,\vec{j}_B) & (\vec{i}_A,\vec{k}_B) \\ (\vec{j}_A,\vec{i}_B) & (\vec{j}_A,\vec{j}_B) & (\vec{j}_A,\vec{k}_B) \\ (\vec{k}_A,\vec{i}_B) & (\vec{k}_A,\vec{j}_B) & (\vec{k}_A,\vec{k}_B) \end{bmatrix} \begin{bmatrix} {}^{B}x_M \\ {}^{B}y_M \\ {}^{B}z_M \end{bmatrix} + \begin{bmatrix} {}^{A}x_B \\ {}^{A}y_B \\ {}^{A}z_B \end{bmatrix} \tag{2}$$

where:

- the radius vectors of point M in systems (A) and (B) are

$$\{{}^{A}p_M\} = \begin{bmatrix} {}^{A}x_M \\ {}^{A}y_M \\ {}^{A}z_M \end{bmatrix}, \qquad \{{}^{B}p_M\} = \begin{bmatrix} {}^{B}x_M \\ {}^{B}y_M \\ {}^{B}z_M \end{bmatrix} \tag{3}$$

- the rotation matrix (3×3) of system (B) in system (A) is

$$[{}^{A}_{B}R] = \begin{bmatrix} (\vec{i}_A,\vec{i}_B) & (\vec{i}_A,\vec{j}_B) & (\vec{i}_A,\vec{k}_B) \\ (\vec{j}_A,\vec{i}_B) & (\vec{j}_A,\vec{j}_B) & (\vec{j}_A,\vec{k}_B) \\ (\vec{k}_A,\vec{i}_B) & (\vec{k}_A,\vec{j}_B) & (\vec{k}_A,\vec{k}_B) \end{bmatrix} \tag{4}$$

- the radius vector of the origin of system (B) in system (A) is

$$\{{}^{A}p_B\} = \begin{bmatrix} {}^{A}x_B \\ {}^{A}y_B \\ {}^{A}z_B \end{bmatrix} \tag{5}$$

Equation (2) can be written as

$$\{{}^{A}p_M\} = [{}^{A}_{B}R]\{{}^{B}p_M\} + \{{}^{A}p_B\} \tag{6}$$

The rotation matrix has the property

$$[{}^{B}_{A}R] = [{}^{A}_{B}R]^{T} = [{}^{A}_{B}R]^{-1} \tag{7}$$

From the rotation matrix it can be seen that its members are the cosines of the angles between the axes of systems (A) and (B). Equation (2) can also be written with a 4×4 extended (homogeneous) matrix as follows:

$$\begin{bmatrix} {}^{A}x_M \\ {}^{A}y_M \\ {}^{A}z_M \\ 1 \end{bmatrix} = \begin{bmatrix} (\vec{i}_A,\vec{i}_B) & (\vec{i}_A,\vec{j}_B) & (\vec{i}_A,\vec{k}_B) & {}^{A}x_B \\ (\vec{j}_A,\vec{i}_B) & (\vec{j}_A,\vec{j}_B) & (\vec{j}_A,\vec{k}_B) & {}^{A}y_B \\ (\vec{k}_A,\vec{i}_B) & (\vec{k}_A,\vec{j}_B) & (\vec{k}_A,\vec{k}_B) & {}^{A}z_B \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} {}^{B}x_M \\ {}^{B}y_M \\ {}^{B}z_M \\ 1 \end{bmatrix} \tag{8}$$

or, in shorter form,

$$\{{}^{A}p_M\} = [{}^{A}_{B}D]\{{}^{B}p_M\} \tag{9}$$

where

$\{{}^{A}p_M\}$ - extended radius vector of point M in system (A),
$\{{}^{B}p_M\}$ - extended radius vector of point M in system (B),
$[{}^{A}_{B}D]$ - main transformation matrix (4×4) of system (B) in system (A).

If it is required to transform coordinates from system (s) to system (r), the transformation matrix can be written as a product of the intermediate transformations:

$$[{}^{r}_{s}D] = \prod_{i=r}^{s-1} [{}^{i}_{i+1}D] \tag{10}$$
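Equations (6), (9) and (10) can be sketched in plain Python: a 4×4 matrix combines rotation and translation, a point transforms by one matrix product, and transformations between distant systems chain by multiplying the intermediate matrices. The rotation and translation used below are assumed example values:

```python
# Homogeneous transformation: D packs a 3x3 rotation R and translation p,
# points transform as {Ap} = [D]{Bp} (equations (6) and (9)); chaining
# several D matrices corresponds to equation (10).

import math

def make_D(R, p):
    """Build the 4x4 transformation from a 3x3 rotation and a translation."""
    return [R[0] + [p[0]], R[1] + [p[1]], R[2] + [p[2]], [0, 0, 0, 1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transform(D, point):
    x, y, z = point
    col = matmul(D, [[x], [y], [z], [1]])
    return [col[0][0], col[1][0], col[2][0]]

if __name__ == "__main__":
    c, s = math.cos(math.pi / 2), math.sin(math.pi / 2)
    Rz = [[c, -s, 0], [s, c, 0], [0, 0, 1]]   # rotation about the z axis
    D = make_D(Rz, [1.0, 0.0, 0.0])           # plus a shift along x
    print(transform(D, [1.0, 0.0, 0.0]))      # rotates to (0,1,0), then shifts to ~(1,1,0)
```

Composing `matmul(D1, D2)` gives the single matrix that carries coordinates across both frames at once, which is exactly the product in equation (10).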


Special Cases of Transformation

Translation Matrix

If coordinate system (B) is translationally displaced in system (A), the transformation matrix $[{}^{A}_{B}D]$ becomes the translation matrix $[{}^{A}_{B}T]$:

$$[{}^{A}_{B}T] = \begin{bmatrix} 1 & 0 & 0 & {}^{A}x_B \\ 0 & 1 & 0 & {}^{A}y_B \\ 0 & 0 & 1 & {}^{A}z_B \\ 0 & 0 & 0 & 1 \end{bmatrix} \tag{11}$$

Matrices of Rotation and Extended Matrices of Rotation

If system (B) is rotated in system (A) and both systems have the same origin, the transformation matrix $[{}^{A}_{B}D]$ becomes the extended rotation matrix of system (B) in system (A). If the rotation of system (B) about its x-axis, common with system (A), is observed, the rotation matrix and the extended rotation matrix can be written as

Figure 2. Coordinate system translation.


Figure 3. Rotation of the coordinate system about the x axis.

$$[{}^{A}_{B}R_x] = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta \\ 0 & \sin\theta & \cos\theta \end{bmatrix}, \qquad [{}^{A}_{B}\bar{R}_x] = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta & 0 \\ 0 & \sin\theta & \cos\theta & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \tag{12}$$

If the rotation is about the y or z axis, a similar procedure can be used to form the rotation matrices for the governing angles. Translational transformations are linear, while rotational transformations are non-linear, since there are sine and cosine functions in their transformation matrices.
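The rotation matrix of equation (12) and property (7) can be checked numerically; the rotation angle below is an arbitrary assumption:

```python
# Rotation about the x axis, equation (12), and a check of property (7):
# the inverse of a rotation matrix equals its transpose.

import math

def Rx(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def transpose(M):
    return [list(row) for row in zip(*M)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

if __name__ == "__main__":
    R = Rx(0.7)
    I = matmul(R, transpose(R))  # should be the 3x3 identity matrix
    print(all(abs(I[i][j] - (1 if i == j else 0)) < 1e-12
              for i in range(3) for j in range(3)))  # True
```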

Free Vector Transformation from One System to Another

If we observe the vector $\vec{w} = \vec{MN}$, this vector can be written in systems (A) and (B) as


Figure 4. Transformation of a free vector from one system to another.

$$\{{}^{A}w\} = \{{}^{A}p_N\} - \{{}^{A}p_M\}, \qquad \{{}^{B}w\} = \{{}^{B}p_N\} - \{{}^{B}p_M\} \tag{13}$$

The extended radius vectors of M and N in the observed coordinate systems are

$$\begin{Bmatrix} \{{}^{A}p_M\} \\ 1 \end{Bmatrix} = [{}^{A}_{B}D] \begin{Bmatrix} \{{}^{B}p_M\} \\ 1 \end{Bmatrix}, \qquad \begin{Bmatrix} \{{}^{A}p_N\} \\ 1 \end{Bmatrix} = [{}^{A}_{B}D] \begin{Bmatrix} \{{}^{B}p_N\} \\ 1 \end{Bmatrix} \tag{14}$$

If we subtract these two vectors, we obtain

$$\begin{Bmatrix} \{{}^{A}w\} \\ 0 \end{Bmatrix} = \begin{bmatrix} [{}^{A}_{B}R] & \{{}^{A}p_B\} \\ 0 & 1 \end{bmatrix} \begin{Bmatrix} \{{}^{B}w\} \\ 0 \end{Bmatrix} \tag{15}$$

The free vector is therefore transformed from system (B) to system (A) by multiplying the vector written in system (B) by the rotation matrix of system (B) in system (A):

$$\{{}^{A}w\} = [{}^{A}_{B}R]\{{}^{B}w\} \tag{16}$$


INNER AND OUTER COORDINATES

The coordinates of robotic systems can be defined through the relative positions of the links (inner coordinates) or through the position of the end effector at the end of the kinematic chain in a fixed coordinate system (outer coordinates - global coordinate system). Inner coordinates are usually marked as $q_i$ ($i = 1, \ldots, n$), where n is the number of degrees of freedom of the system. Combined, these coordinates give the vector of inner coordinates

$$\{q\} = \begin{bmatrix} q_1 & q_2 & \ldots & q_n \end{bmatrix}^T \tag{17}$$

Inner coordinates can be rotational or translational. A rotational coordinate is marked as an angle (φ in Figure 5) and a translational coordinate as a distance (h in Figure 5). The orientation of the end effector is defined by its rotation matrix in the global coordinate system:

$$[{}^{0}_{p}R] = \begin{bmatrix} (\vec{i}_0,\vec{i}_p) & (\vec{i}_0,\vec{j}_p) & (\vec{i}_0,\vec{k}_p) \\ (\vec{j}_0,\vec{i}_p) & (\vec{j}_0,\vec{j}_p) & (\vec{j}_0,\vec{k}_p) \\ (\vec{k}_0,\vec{i}_p) & (\vec{k}_0,\vec{j}_p) & (\vec{k}_0,\vec{k}_p) \end{bmatrix} \tag{18}$$

If the end effector has its right-handed coordinate system with unit vectors n, o, a (normal, orientation and action direction), its rotation matrix can be written as

Figure 5. Rotational and translational joint.


Figure 6. Example of end effector with its local coordinate system.

$$[{}^{0}_{p}R] = \begin{bmatrix} n_x & o_x & a_x \\ n_y & o_y & a_y \\ n_z & o_z & a_z \end{bmatrix} \tag{19}$$

The position of the effector in the outer coordinate system is mostly defined in a rectangular coordinate system. Since only three members of matrix (19) are mutually independent, only three coordinates are required to define the orientation of the end effector in the global coordinate system. It follows that the vector of outer coordinates contains the position of the end effector in the global coordinate system ($p_x$, $p_y$, $p_z$) and three independent values from the rotation matrix. These three independent values can be defined as Euler angles. The Euler angles between the global coordinate system and the end-effector local coordinate system are the angles by which the local system axes must be rotated about the global system axes to reach the required orientation. The Euler angles are named ψ - yaw angle, θ - pitch angle, and φ - roll angle.

So the most common way to define the vector of outer coordinates is

$$\{r\} = \begin{bmatrix} p_x & p_y & p_z & \varphi & \theta & \psi \end{bmatrix}^T \tag{20}$$

INNER AND OUTER COORDINATES RELATION

The kinematic model of a robot requires that the relation between the vectors of inner and outer coordinates is known. The direct kinematic problem of robots is to find {r} from the known vector {q}. The inverse kinematic problem of robots is to find {q} from the known vector {r}. If the dimensions of the vectors are the same, the robot is non-redundant and has a finite number of solutions of the inverse kinematic problem. If the dimensions of the vectors are not the same, the inverse problem has an infinite number of solutions, so the fittest solution is chosen. The following sections consider only non-redundant robots, since most robots are non-redundant.

Direct Kinematic Problem Solution

The solution of the direct kinematic problem can be found by an analytic procedure for every robot (Seibel et al. 2018; Merlet 2000). The procedure can be described as follows:

- Draw the symbolic scheme of the robot with the most important dimensions; this is easiest in the zero position.
- Attach coordinate systems to the robot joints. The number of joint coordinate systems has to be at least equal to the number of degrees of freedom.
- Determine the transformation matrices between the coordinate systems.
- Calculate the final transformation matrix, position and orientation.
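The steps above can be sketched for a hypothetical planar arm with two rotational joints (the link lengths L1 and L2 are assumptions for illustration): chaining the joint rotations and link translations gives the end-effector position, which is the solution of the direct kinematic problem.

```python
# Direct kinematics of a hypothetical planar arm with two rotational
# joints (inner coordinates q1, q2) and link lengths L1, L2: chaining
# the joint rotations and link translations gives the end-effector
# position (outer coordinates).

import math

def forward_kinematics(q1, q2, L1=1.0, L2=1.0):
    """End-effector position (x, y) from the inner coordinates q1, q2."""
    x = L1 * math.cos(q1) + L2 * math.cos(q1 + q2)
    y = L1 * math.sin(q1) + L2 * math.sin(q1 + q2)
    return x, y

if __name__ == "__main__":
    print(forward_kinematics(0.0, 0.0))   # (2.0, 0.0): stretched along x
    x, y = forward_kinematics(0.0, math.pi / 2)
    print(round(x, 3), round(y, 3))       # 1.0 1.0: elbow bent 90 degrees
```

The zero position recommended in the first step corresponds to q1 = q2 = 0, where the geometry is easiest to verify by inspection.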

Inverse Kinematic Problem Solution

The inverse kinematic problem can be solved by analytical and numerical methods. Both methods have their pros and cons. Analytical methods make it possible to obtain exact equations of transformation between inner and outer coordinates (Vladimirov and Koceski 2019; Amici and Cappellini 2016; Hildebrand 2013, 101-106; Kofinas et al. 2015). The procedures are complicated, and there is no single solution procedure for an arbitrary robot setup. On the other side, analytical methods give exact solutions and there is no divergence in the procedure. The numerical methods (Newton method, Chebyshev method, gradient method) give approximate values of the solutions for the inner coordinates. The precision of the solutions depends on multiple conditions (method, iteration step, initial conditions). If some of the conditions are not appropriate, the solution may diverge.

Analytic Methods Procedures

There are several procedures for solving the inverse kinematic problem. Here the procedure with the Jacobian matrix will be described. Let us observe the functions written as

$$\begin{aligned} y_1 &= y_1(x_1, x_2, \ldots, x_n) \\ y_2 &= y_2(x_1, x_2, \ldots, x_n) \\ &\;\;\vdots \\ y_n &= y_n(x_1, x_2, \ldots, x_n) \end{aligned} \tag{21}$$

Differentiating the previous equations and writing them in matrix form gives

$$\begin{Bmatrix} dy_1 \\ dy_2 \\ \vdots \\ dy_n \end{Bmatrix} = \begin{bmatrix} \dfrac{\partial y_1}{\partial x_1} & \dfrac{\partial y_1}{\partial x_2} & \cdots & \dfrac{\partial y_1}{\partial x_n} \\ \dfrac{\partial y_2}{\partial x_1} & \dfrac{\partial y_2}{\partial x_2} & \cdots & \dfrac{\partial y_2}{\partial x_n} \\ \vdots & \vdots & & \vdots \\ \dfrac{\partial y_n}{\partial x_1} & \dfrac{\partial y_n}{\partial x_2} & \cdots & \dfrac{\partial y_n}{\partial x_n} \end{bmatrix} \begin{Bmatrix} dx_1 \\ dx_2 \\ \vdots \\ dx_n \end{Bmatrix} \tag{22}$$

or, written in another way,

$$\{dy\} = \left[\frac{\partial y}{\partial x}\right]\{dx\} \tag{23}$$

where $\left[\frac{\partial y}{\partial x}\right] = [J_{ac}]$ is the Jacobian matrix. If it is used in the relation between the vectors of inner and outer coordinates, it can be written as

$$\{dr\} = [J_{ac}]\{dq\} \tag{24}$$

The Jacobian matrix is also the link between the vectors of inner and outer velocities:

$$\{\dot{r}\} = [J_{ac}]\{\dot{q}\} \tag{25}$$

The vector of outer accelerations can be written as

$$\{\ddot{r}\} = [\dot{J}_{ac}]\{\dot{q}\} + [J_{ac}]\{\ddot{q}\} \tag{26}$$

The vectors of inner velocities and accelerations, related to the outer velocities and accelerations, can be written as

$$\{\dot{q}\} = [J_{ac}]^{-1}\{\dot{r}\} \tag{27}$$

$$\{\ddot{q}\} = [J_{ac}]^{-1}\left(\{\ddot{r}\} - [\dot{J}_{ac}]\{\dot{q}\}\right) \tag{28}$$

In conclusion, the Jacobian matrix is the connection between the inner and outer coordinates, velocities and accelerations, and vice versa.
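The relation {dq} = [Jac]⁻¹{dr} from equations (24) and (27) can be used as a Newton-type iteration for the inverse kinematic problem. The two-link planar arm below (unit link lengths) is a hypothetical example, not a robot from the text:

```python
# Numerical inverse kinematics in the spirit of equations (24) and (27):
# iterate dq = Jac^-1 * dr for a two-link planar arm with unit link
# lengths, inverting the 2x2 Jacobian in closed form.

import math

def fk(q1, q2):
    return (math.cos(q1) + math.cos(q1 + q2),
            math.sin(q1) + math.sin(q1 + q2))

def jacobian(q1, q2):
    s1, c1 = math.sin(q1), math.cos(q1)
    s12, c12 = math.sin(q1 + q2), math.cos(q1 + q2)
    return [[-s1 - s12, -s12], [c1 + c12, c12]]

def solve_ik(target, q=(0.3, 0.3), iters=50):
    q1, q2 = q
    for _ in range(iters):
        x, y = fk(q1, q2)
        dx, dy = target[0] - x, target[1] - y
        J = jacobian(q1, q2)
        det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
        if abs(det) < 1e-12:
            break                     # singular configuration
        # dq = Jac^-1 * dr, with the 2x2 inverse written out explicitly
        q1 += ( J[1][1] * dx - J[0][1] * dy) / det
        q2 += (-J[1][0] * dx + J[0][0] * dy) / det
    return q1, q2

if __name__ == "__main__":
    q1, q2 = solve_ik((1.2, 0.8))
    x, y = fk(q1, q2)
    print(round(x, 6), round(y, 6))   # converges back to the target 1.2 0.8
```

Each iteration solves equation (24) for {dq}; convergence is fast away from singular configurations, where det[Jac] approaches zero (a fully stretched or fully folded arm), illustrating the divergence risk mentioned above.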

INTRODUCTION TO DYNAMICS OF ROBOTS

At the beginning of robotic development, only the kinematics of robots was observed. As the requirements for precision and energy efficiency increased, the importance of robot dynamics grew rapidly. The direct dynamics problem considers determining the drive loads (forces and moments) on the actuators which cause a prescribed motion of the system. Forces and moments are easy to determine from the known inner coordinates of the joints, which is the assignment of the inverse kinematic problem. The solution of the direct dynamics problem is the solution for the manipulation of robots (Voloder 2002; Khalil 2010). The inverse dynamics problem requires as a first step an analysis of what motion would be caused by the actuators. As a result, the vector of inner coordinates is obtained. Then it is required to solve the direct kinematics problem. The inverse dynamics problem requires numerical methods for its solution, while the direct dynamics problem can be solved by analytical methods. There are some common methods used to form the equations of motion, based on Newton-Euler laws, Lagrange's equations or Appell's equations. The first method is discussed as follows.

Kinematics Prerequisites for the Newton-Euler Method

Let us observe a fixed coordinate system (0) and a movable coordinate system (A) in which a point Q is moving. The radius vector of point Q in system (0) can be written as

Figure 7. Point velocity analysis.

$$\{{}^{0}p_Q\} = \{{}^{0}p_A\} + \{{}^{0}p_{AQ}\} = \{{}^{0}p_A\} + [{}^{0}_{A}R]\{{}^{A}p_Q\} \tag{29}$$

The absolute velocity of point Q in the fixed coordinate system is

$$\{{}^{0}v_Q\} = \{{}^{0}v_A\} + \{{}^{0}\omega_A\} \times \{{}^{0}p_{AQ}\} + \{{}^{0}_{A}v_Q\} \tag{30}$$

or

$$\{{}^{0}v_Q\} = \{{}^{0}v_A\} + \{{}^{0}\omega_A\} \times [{}^{0}_{A}R]\{{}^{A}p_Q\} + [{}^{0}_{A}R]\{{}^{A}v_Q\} \tag{31}$$

The velocity vector can also be obtained by differentiating equation (29):

$$\{{}^{0}v_Q\} = \frac{d}{dt}\{{}^{0}p_A\} + \frac{d}{dt}\left([{}^{0}_{A}R]\{{}^{A}p_Q\}\right) = \{{}^{0}v_A\} + \frac{d}{dt}\left([{}^{0}_{A}R]\{{}^{A}p_Q\}\right) \tag{32}$$

where

$$\frac{d}{dt}\left([{}^{0}_{A}R]\{{}^{A}p_Q\}\right) = \{{}^{0}\omega_A\} \times [{}^{0}_{A}R]\{{}^{A}p_Q\} + [{}^{0}_{A}R]\{{}^{A}v_Q\} \tag{33}$$

Equation (33) can be used whenever it is required to differentiate the product of the rotation matrix and any relative vector expressed in the movable coordinate system. To obtain the acceleration vector of point Q in the fixed coordinate system, equation (31) must be differentiated:

$$\{{}^{0}a_Q\} = \frac{d}{dt}\{{}^{0}v_A\} + \frac{d}{dt}\{{}^{0}\omega_A\} \times [{}^{0}_{A}R]\{{}^{A}p_Q\} + \{{}^{0}\omega_A\} \times \frac{d}{dt}\left([{}^{0}_{A}R]\{{}^{A}p_Q\}\right) + \frac{d}{dt}\left([{}^{0}_{A}R]\{{}^{A}v_Q\}\right) \tag{34}$$

By using the similar expression to (33) it is possible to transform the equation (34) to 0

aQ 0 a A 0  A  A0 R A A pQ  0  A  0  A  A0 R A A pQ   

 



 

A A  20  A   A0 R  A vQ  A0 R  A aQ  

(35)

If the system (A) rotates with the absolute angular velocity ${}^{0}\omega_A$ and the system (B) rotates with the relative angular velocity ${}^{A}\omega_B$ about system (A), then the absolute angular velocity of system (B) can be written as

$$ {}^{0}\omega_B = {}^{0}\omega_A + {}^{0}({}^{A}\omega_B) \tag{36} $$

or

$$ {}^{0}\omega_B = {}^{0}\omega_A + {}^{0}_{A}R \, {}^{A}\omega_B \tag{37} $$

By differentiating equation (37), the absolute angular acceleration of system (B) is obtained.

Figure 8. Angular velocity of system (B).

$$ {}^{0}\varepsilon_B = \frac{d}{dt}\left({}^{0}\omega_B\right) = \frac{d}{dt}\left({}^{0}\omega_A\right) + \frac{d}{dt}\left({}^{0}_{A}R \, {}^{A}\omega_B\right) \tag{38} $$

By substituting $\frac{d}{dt}{}^{0}\omega_A = {}^{0}\varepsilon_A$ and using a procedure similar to the one shown in (33), this can also be written as

$$ {}^{0}\varepsilon_B = {}^{0}\varepsilon_A + {}^{0}\omega_A \times {}^{0}_{A}R \, {}^{A}\omega_B + {}^{0}_{A}R \, {}^{A}\varepsilon_B \tag{39} $$

Equations (31), (35), (37) and (39) can be used to determine the kinematic characteristics of a robotic joint as functions of its own parameters and the parameters of the connected joints.
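The composition rules (37) and (39) can also be verified numerically. In the sketch below, system (A) spins about the fixed z-axis and system (B) spins about the x-axis of (A); the rates are illustrative assumptions. The angular velocity from (37) is compared with the one extracted from the skew-symmetric matrix $\dot{R}R^{T}$, and the angular acceleration from (39) with a finite difference of the angular velocity:

```python
import numpy as np

def rot_z(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def rot_x(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

a, b = 0.8, 1.3          # constant spin rates (illustrative values)
wB = lambda t: np.array([0, 0, a]) + rot_z(a*t) @ np.array([b, 0, 0])  # eq (37)

t, h = 0.9, 1e-6
wA = np.array([0.0, 0.0, a])
# eq (39): eps_A = 0 and the relative rate b is constant, so only the cross term remains
eps_B = np.cross(wA, rot_z(a*t) @ np.array([b, 0, 0]))

# check 1: omega from the skew-symmetric matrix dR/dt @ R^T
R = lambda t: rot_z(a*t) @ rot_x(b*t)
W = (R(t+h) - R(t-h)) / (2*h) @ R(t).T
w_num = np.array([W[2, 1], W[0, 2], W[1, 0]])

# check 2: epsilon as the time derivative of omega
e_num = (wB(t+h) - wB(t-h)) / (2*h)

print(np.allclose(wB(t), w_num, atol=1e-6), np.allclose(eps_B, e_num, atol=1e-6))
```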

Newton-Euler Method

The procedure of the Newton-Euler method can be divided into two steps: the outer and the inner iteration. During the outer iteration, the kinematic parameters (angular velocities, angular accelerations, accelerations of the link origins, accelerations of the centers of inertia) and the dynamic parameters (vectors of forces on the links, moments of forces about the centers of inertia) are determined from the base member of the kinematic chain up to the operative member. As a result, in the inner iteration it is possible to obtain the forces and moments in the joints, as well as the forces and moments of the actuators of the robot links.

Outer Iteration

If the i-th and (i+1)-th links of the robot are mutually connected by the rotational joint numbered (i+1), then according to equations (37) and (39) the angular velocity of link (i+1) can be written as

$$ {}^{0}\omega_{i+1} = {}^{0}\omega_i + {}^{0}_{i}R \, {}^{i}\omega_{i+1} \tag{40} $$

and the angular acceleration as

$$ {}^{0}\varepsilon_{i+1} = {}^{0}\varepsilon_i + {}^{0}\omega_i \times {}^{0}_{i}R \, {}^{i}\omega_{i+1} + {}^{0}_{i}R \, {}^{i}\varepsilon_{i+1} \tag{41} $$

The angular velocity of that link in its own coordinate system can be written as

$$ {}^{i+1}\omega_{i+1} = {}^{i+1}_{0}R \, {}^{0}\omega_{i+1} = {}^{i+1}_{0}R \, {}^{0}\omega_i + {}^{i+1}_{0}R \, {}^{0}_{i}R \, {}^{i}\omega_{i+1} = {}^{i+1}_{i}R \, {}^{i}\omega_i + {}^{i+1}({}^{i}\omega_{i+1}) \tag{42} $$

where

$$ {}^{i+1}({}^{i}\omega_{i+1}) = \dot{\theta}_{i+1} \, {}^{i+1}\hat{z}_{i+1} \tag{43} $$

Here $\dot{\theta}_{i+1}$ is the projection of the angular velocity of the (i+1)-th link on its rotation axis, and ${}^{i+1}\hat{z}_{i+1}$ is the unit vector of the link axis. This gives

$$ {}^{i+1}\omega_{i+1} = {}^{i+1}_{i}R \, {}^{i}\omega_i + \dot{\theta}_{i+1} \, {}^{i+1}\hat{z}_{i+1} \tag{44} $$

When the (i+1)-th joint of the robot is translational, the relative angular velocity with respect to the previous link is equal to zero (${}^{i}\omega_{i+1} = 0$), which gives

$$ {}^{i+1}\omega_{i+1} = {}^{i+1}_{i}R \, {}^{i}\omega_i \tag{45} $$

The angular acceleration of that link in its own coordinate system can be written as

$$ {}^{i+1}\varepsilon_{i+1} = {}^{i+1}_{0}R \, {}^{0}\varepsilon_{i+1} = {}^{i+1}_{i}R \, {}^{i}\varepsilon_i + {}^{i+1}_{i}R \, {}^{i}\omega_i \times {}^{i+1}({}^{i}\omega_{i+1}) + {}^{i+1}({}^{i}\varepsilon_{i+1}) \tag{46} $$

Since

$$ {}^{i+1}({}^{i}\varepsilon_{i+1}) = \ddot{\theta}_{i+1} \, {}^{i+1}\hat{z}_{i+1} \tag{47} $$

where $\ddot{\theta}_{i+1}$ is the projection of the angular acceleration vector of the (i+1)-th link on the link rotation axis, it follows that

$$ {}^{i+1}\varepsilon_{i+1} = {}^{i+1}_{i}R \, {}^{i}\varepsilon_i + {}^{i+1}_{i}R \, {}^{i}\omega_i \times \dot{\theta}_{i+1} \, {}^{i+1}\hat{z}_{i+1} + \ddot{\theta}_{i+1} \, {}^{i+1}\hat{z}_{i+1} \tag{48} $$

For a translational (i+1)-th joint the relative angular acceleration is zero (${}^{i}\varepsilon_{i+1} = 0$), so

$$ {}^{i+1}\varepsilon_{i+1} = {}^{i+1}_{i}R \, {}^{i}\varepsilon_i \tag{49} $$

If the i-th and (i+1)-th links of the robot are connected by the rotational joint (i+1), then the acceleration of the origin of link (i+1) in the fixed coordinate system is

$$ {}^{0}a_{i+1} = {}^{0}a_i + {}^{0}\varepsilon_i \times {}^{0}_{i}R \, {}^{i}p_{i+1} + {}^{0}\omega_i \times \left({}^{0}\omega_i \times {}^{0}_{i}R \, {}^{i}p_{i+1}\right) \tag{50} $$

where ${}^{0}a_i$ is the acceleration of the origin of the i-th link in the fixed coordinate system, and ${}^{i}p_{i+1}$ is the relative radius vector of the origin of link (i+1) in the i-th link coordinate system. The acceleration of the origin expressed in the (i+1)-th link coordinate system is obtained by multiplying equation (50) by the rotation matrix:

$$ {}^{i+1}a_{i+1} = {}^{i+1}_{i}R \left[ {}^{i}a_i + {}^{i}\varepsilon_i \times {}^{i}p_{i+1} + {}^{i}\omega_i \times \left({}^{i}\omega_i \times {}^{i}p_{i+1}\right) \right] \tag{51} $$

If the (i+1)-th robot joint is translational, then the acceleration of the origin of link (i+1) can be written as

$$ {}^{0}a_{i+1} = {}^{0}a_i + {}^{0}\varepsilon_i \times {}^{0}_{i}R \, {}^{i}p_{i+1} + {}^{0}\omega_i \times \left({}^{0}\omega_i \times {}^{0}_{i}R \, {}^{i}p_{i+1}\right) + 2\,{}^{0}\omega_i \times {}^{0}_{i}R \, {}^{i}v_{i+1} + {}^{0}_{i}R \, {}^{i}a_{i+1} \tag{52} $$

where the relative velocity and acceleration of the origin of the (i+1)-th link in the i-th coordinate system are

$$ {}^{i}v_{i+1} = {}^{i}_{i+1}R \, \dot{h}_{i+1} \, {}^{i+1}\hat{z}_{i+1} \tag{53} $$

$$ {}^{i}a_{i+1} = {}^{i}_{i+1}R \, \ddot{h}_{i+1} \, {}^{i+1}\hat{z}_{i+1} \tag{54} $$

The acceleration of the origin of the translational link (i+1) in its own coordinate system can be written in the transformed form

Figure 9. Acceleration of the translational link of robot.

$$ {}^{i+1}a_{i+1} = {}^{i+1}_{i}R \left[ {}^{i}a_i + {}^{i}\varepsilon_i \times {}^{i}p_{i+1} + {}^{i}\omega_i \times \left({}^{i}\omega_i \times {}^{i}p_{i+1}\right) \right] + 2\,\dot{h}_{i+1} \left({}^{i+1}\omega_{i+1} \times {}^{i+1}\hat{z}_{i+1}\right) + \ddot{h}_{i+1} \, {}^{i+1}\hat{z}_{i+1} \tag{55} $$

If the center-of-inertia vector of the (i+1)-th link in its own coordinate system is denoted ${}^{i+1}p_{c,i+1}$, the acceleration of the center of inertia in the fixed coordinate system can be written as

$$ {}^{0}a_{c,i+1} = {}^{0}a_{i+1} + {}^{0}\varepsilon_{i+1} \times {}^{0}_{i+1}R \, {}^{i+1}p_{c,i+1} + {}^{0}\omega_{i+1} \times \left({}^{0}\omega_{i+1} \times {}^{0}_{i+1}R \, {}^{i+1}p_{c,i+1}\right) \tag{56} $$

In the (i+1)-th system it is

$$ {}^{i+1}a_{c,i+1} = {}^{i+1}a_{i+1} + {}^{i+1}\varepsilon_{i+1} \times {}^{i+1}p_{c,i+1} + {}^{i+1}\omega_{i+1} \times \left({}^{i+1}\omega_{i+1} \times {}^{i+1}p_{c,i+1}\right) \tag{57} $$

The principal vector of forces acting on link (i+1) is obtained by multiplying the acceleration vector of its center of inertia by the mass of the link:

$$ {}^{i+1}F_{i+1} = m_{i+1} \, {}^{i+1}a_{c,i+1} \tag{58} $$

The principal moment acting on link (i+1) about its center of inertia, according to Euler's dynamic equations, can be written as

$$ {}^{i+1}M_{i+1} = {}^{i+1}J_c \, {}^{i+1}\varepsilon_{i+1} + {}^{i+1}\omega_{i+1} \times \left({}^{i+1}J_c \, {}^{i+1}\omega_{i+1}\right) \tag{59} $$

where the matrix of inertia of link (i+1) is

$$ {}^{i+1}J_c = \begin{bmatrix} {}^{i+1}J_{cx} & -{}^{i+1}J_{cxy} & -{}^{i+1}J_{cxz} \\ -{}^{i+1}J_{cxy} & {}^{i+1}J_{cy} & -{}^{i+1}J_{cyz} \\ -{}^{i+1}J_{cxz} & -{}^{i+1}J_{cyz} & {}^{i+1}J_{cz} \end{bmatrix} \tag{60} $$

where i 1

i 1

J cx ,

J cxy ,

i1

J cy ,

i 1

i 1

J cz - axial moments of inertia of (i+1) link for central axes

J cxz ,

i 1

J cyz

- centrifugal moments of inertia of (i+1) link for pairs of

central axes Equations (44), (48), (51), (57), (58) and (59) are equations for the outer dynamic iteration for the rotational links of robots, and (45), (49), (55), (57), (58) and (59) are equations for the outer dynamic iteration for the translational links of robots. These

Emir Nezirić

166

equations could be used to obtain the characteristics of robots step by step from the inner to the outer links of the robotic chain.
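The angular part of the outer iteration, equations (44) and (48), can be sketched for a planar two-link arm in which every joint rotates about the z-axis. Link offsets and twists are ignored and all numeric values are illustrative, so this is a minimal sketch rather than a general implementation:

```python
import numpy as np

def rot_z(q):
    c, s = np.cos(q), np.sin(q)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# planar 2R arm, both joint axes parallel to z; hypothetical joint state
q   = [0.3, -0.5]     # joint angles (theta)
qd  = [1.0, 2.0]      # joint rates (theta-dot)
qdd = [0.5, -1.0]     # joint accelerations (theta-double-dot)
z = np.array([0.0, 0.0, 1.0])

w = np.zeros(3)       # base link: omega_0 = 0
e = np.zeros(3)       # base link: epsilon_0 = 0
for i in range(2):
    R = rot_z(q[i]).T                 # (i+1)_i R for this simplified geometry
    w_new = R @ w + qd[i]*z           # eq (44)
    e_new = R @ e + np.cross(R @ w, qd[i]*z) + qdd[i]*z   # eq (48)
    w, e = w_new, e_new

# for planar z-rotations the rates simply add up
print(w[2], e[2])
```

Because all axes are parallel, the z-components of the recursion reduce to sums of the joint rates, which gives an easy correctness check (`w[2] == qd[0]+qd[1]`, `e[2] == qdd[0]+qdd[1]`).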

Inner Iteration

In the inner iteration, the motion of the robotic system is observed as being caused by forces. The principal vector of forces can be written as

$$ {}^{i}F_i = {}^{i}f_i + {}^{i}F_i^{*} - {}^{i}_{i+1}R \, {}^{i+1}f_{i+1} \tag{61} $$

Figure 10. Loads on robotic link.

where

${}^{i}f_i$ is the principal vector of forces exerted on link (i) by link (i-1),
${}^{i+1}f_{i+1}$ is the principal vector of forces exerted on link (i+1) by link (i),
${}^{i}F_i^{*}$ is the principal vector of the other real forces acting on link (i), i.e., those not produced by the interaction of link (i) with the neighboring links.

Finally, the principal vector of forces exerted on link (i) by link (i-1) is

$$ {}^{i}f_i = {}^{i}F_i - {}^{i}F_i^{*} + {}^{i}_{i+1}R \, {}^{i+1}f_{i+1} \tag{62} $$

The principal moment of all forces exerted on link (i) about its center of inertia is

$$ {}^{i}M_i = {}^{i}n_i + {}^{i}M_i^{*} - {}^{i}_{i+1}R \, {}^{i+1}n_{i+1} - {}^{i}p_{ci} \times {}^{i}f_i - \left({}^{i}p_{i+1} - {}^{i}p_{ci}\right) \times {}^{i}_{i+1}R \, {}^{i+1}f_{i+1} \tag{63} $$

where

${}^{i}n_i$ is the principal moment of forces exerted on link (i) by link (i-1) about the origin of coordinate system (i),
${}^{i+1}n_{i+1}$ is the principal moment of forces exerted on link (i+1) by link (i) about the origin of coordinate system (i+1),
${}^{i}M_i^{*}$ is the principal moment of the other real forces acting on link (i) (those not produced by the interaction of link (i) with the neighboring links) about the center of inertia of link (i).

Inserting (62) into (63), it is obtained that

$$ {}^{i}n_i = {}^{i}M_i - {}^{i}M_i^{*} + {}^{i}_{i+1}R \, {}^{i+1}n_{i+1} + {}^{i}p_{i+1} \times {}^{i}_{i+1}R \, {}^{i+1}f_{i+1} + {}^{i}p_{ci} \times {}^{i}F_i - {}^{i}p_{ci} \times {}^{i}F_i^{*} \tag{64} $$

The actuator moment in the rotational joint (i) is equal to the projection of the moment by which link (i-1) acts on link (i) onto the rotation axis of joint i. That projection is equal to the scalar product of ${}^{i}n_i$ and the unit vector ${}^{i}\hat{z}_i$:

$$ {}^{i}M_i^{ak} = \left({}^{i}n_i\right)^{T} {}^{i}\hat{z}_i \tag{65} $$

If the joint (i) is translational, the actuator force is

$$ {}^{i}F_i^{ak} = \left({}^{i}f_i\right)^{T} {}^{i}\hat{z}_i \tag{66} $$
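To illustrate one full outer-plus-inner pass, the sketch below applies equations (44), (48), (51), (57), (58), (59), (62), (64) and (65) to a single rotational link (a pendulum) and compares the resulting actuator moment with the familiar closed-form expression. Gravity is included with the usual trick of giving the base the acceleration opposite to gravity; all parameter values are assumed for illustration:

```python
import numpy as np

# single rotational link: mass m, center of inertia at lc along the link x-axis,
# rotation about z, gravity g acting along -y of the base frame (assumed values)
m, lc, Icz, g = 2.0, 0.4, 0.05, 9.81
q, qd, qdd = 0.6, 1.5, -2.0

c, s = np.cos(q), np.sin(q)
Rt = np.array([[c, s, 0], [-s, c, 0], [0, 0, 1]])   # 1_0 R
z = np.array([0.0, 0.0, 1.0])
pc = np.array([lc, 0.0, 0.0])

# outer iteration: base at rest, gravity enters via a_0 = [0, g, 0]
w = qd * z                                    # eq (44)
e = qdd * z                                   # eq (48)
a1 = Rt @ np.array([0.0, g, 0.0])             # eq (51), joint origin at the base
ac = a1 + np.cross(e, pc) + np.cross(w, np.cross(w, pc))   # eq (57)

# inner iteration: no outer link, no other external forces (F* folded into a_0)
F = m * ac                                    # eq (58)
M = Icz * e                                   # eq (59); z is a principal axis here
f = F                                         # eq (62)
n = M + np.cross(pc, f)                       # eq (64)
tau = n @ z                                   # eq (65)

tau_analytic = (Icz + m*lc**2)*qdd + m*g*lc*np.cos(q)
print(abs(tau - tau_analytic) < 1e-9)  # True
```

The recursion reproduces the textbook pendulum torque exactly, which is a useful unit test when implementing the general multi-link version.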

Equations (62), (64) and (65) form the inner dynamic iteration for the rotational links of robots, and equations (62), (64) and (66) form the inner dynamic iteration for the translational links. These equations can be used to obtain the characteristics of the robot step by step, from the outer to the inner links of the robotic chain. Equations (65) and (66) can be used to form the system of equations which connects the loads of the joint actuators with the joint coordinates, velocities and accelerations:

$$ \tau = A(q)\,\ddot{q} + B(q, \dot{q}) + C(q) \tag{67} $$

where

$\tau$ is the vector of actuator forces and moments,
$A(q)$ is the matrix of inertia of the robot,
$\ddot{q}$ is the generalized acceleration vector,
$B(q, \dot{q})$ is the vector of Coriolis and centrifugal effects,
$C(q)$ is the vector of gravity effects.

Equation (67) represents the solution of the direct dynamics problem of robots.
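Once a routine computing the actuator loads in the sense of (67) is available (for example, the Newton-Euler pass described above), the individual terms can be recovered by probing it with selected arguments: C(q) = tau(q, 0, 0), B(q, qd) = tau(q, qd, 0) - C(q), and the columns of A(q) from unit accelerations. The sketch below demonstrates the probing on a synthetic two-degree-of-freedom model with known terms; all names and values are hypothetical stand-ins for a real inverse-dynamics pass:

```python
import numpy as np

# hypothetical 2-DOF model with known A, B, C, used only to verify the probing
A_true = np.array([[2.0, 0.3], [0.3, 1.0]])
def B_true(q, qd): return np.array([qd[0]*qd[1], -qd[0]**2]) * 0.1
def C_true(q): return np.array([9.81*np.cos(q[0]), 3.0*np.sin(q[1])])

def idyn(q, qd, qdd):          # stands in for a full Newton-Euler iteration
    return A_true @ qdd + B_true(q, qd) + C_true(q)

q, qd = np.array([0.4, -0.2]), np.array([1.0, 0.5])
C = idyn(q, np.zeros(2), np.zeros(2))                    # C(q): no motion
B = idyn(q, qd, np.zeros(2)) - C                         # B(q, qd): no acceleration
A = np.column_stack([idyn(q, np.zeros(2), e) - C         # columns of A(q) from
                     for e in np.eye(2)])                # unit accelerations

print(np.allclose(A, A_true), np.allclose(B, B_true(q, qd)), np.allclose(C, C_true(q)))
```

This probing technique is how equation (67) is typically assembled in practice when only a recursive inverse-dynamics routine is implemented.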

REFERENCES

[1] Coiffet, Philippe and Chirouze, Michel. 1983. An Introduction to Robot Technology. Translated by Meg Tombs. London: Kogan Page Ltd.
[2] Craig, John J. 2005. Introduction to Robotics: Mechanics and Control. New Jersey: Pearson Education.
[3] Kovačić, Zdenko, Bogdan, Stjepan and Krajči, Vesna. 2002. Basics of Robotics. Zagreb: Graphis.
[4] Gačo, Dženana. 2002. "Kinematical analysis of robots" in Robotika, edited by Karabegović, Isak and Doleček, Vlatko, 35-126. Bihać: Tehnički fakultet Bihać.
[5] Seibel, Arthur, Schulz, Stefan and Schlattmann, Josef. 2018. "On the Direct Kinematics Problem of Parallel Mechanisms". Journal of Robotics, 2018(3): 1-9. DOI: 10.1155/2018/2412608.
[6] Merlet, Jean-Pierre. 2000. "Direct Kinematics" in Parallel Robots, Solid Mechanics and its Application, vol. 74: 91-147. Dordrecht: Springer.
[7] Vladimirov, Gjorgji and Koceski, Saso. 2019. "Inverse Kinematics Solution of a Robot Arm based on Adaptive Neuro Fuzzy Interface System". International Journal of Computer Applications, 178(39): 10-14. Accessed October 24, 2019. DOI: 10.5120/ijca2019919268.
[8] Amici, Cinzia and Cappellini, Valdet. 2016. "Inverse Kinematics of a Serial Robot". MATEC Web of Conferences, 53(01060): 1-7. DOI: 10.1051/matecconf/20165301060.
[9] Hildenbrand, Dietmar. 2013. Foundations of Geometric Algebra Computing. Berlin: Springer.
[10] Kofinas, Nikolaos, Orfanoudakis, Emmanouil and Lagoudakis, Michail. 2015. "Complete Analytical Forward and Inverse Kinematics for the NAO Humanoid Robot". Journal of Intelligent and Robotic Systems, 77(2): 251-264. DOI: 10.1007/s10846-013-0015-4.
[11] Voloder, Avdo. 2002. "Dynamical analysis of robots" in Robotika, edited by Karabegović, Isak and Doleček, Vlatko, 83-127. Bihać: Tehnički fakultet Bihać.
[12] Khalil, Wisama. 2010. "Dynamic Modeling of Robots Using Newton-Euler Formulation". In Informatics in Control, Automation and Robotics, Lecture Notes in Electrical Engineering, vol. 89, edited by Cetto, J. A., Ferrier, J. L. and Filipe, J., 3-20. Berlin: Springer.
[13] Manseur, Rachid. 2006. Robot Modeling and Kinematics. Boston: Cengage Learning.
[14] Khaleel, Hind. 2019. "Enhanced Solution of Inverse Kinematics for Redundant Robot Manipulator Using PSO". Engineering and Technology Journal, 37(7): 241-247. DOI: 10.30684/etj.37.7A.4.
[15] Jazar, Reza. 2007. Theory of Applied Robotics: Kinematics, Dynamics, and Control. New York: Springer.
[16] Sciavicco, Lorenzo and Siciliano, Bruno. 1996. Modeling and Control of Robot Manipulators. New York: McGraw-Hill.
[17] Voloder, Avdo. 2005. Mechanism Theory. Sarajevo: Mašinski fakultet Sarajevo.
[18] Doleček, Vlatko. 2005. Kinematics. Sarajevo: Mašinski fakultet Sarajevo.
[19] Doleček, Vlatko. 2007. Dynamics. Sarajevo: Mašinski fakultet Sarajevo.
[20] Lenarčić, Jadran and Khatib, Oussama. 2006. Advances in Robot Kinematics: Mechanisms and Motion. Dordrecht: Springer.

In: Industrial Robots: Design, Applications and Technology. ISBN: 978-1-53617-779-4. Editors: Isak Karabegović et al. © 2020 Nova Science Publishers, Inc.

Chapter 8

COLLABORATIVE ROBOTS: OVERVIEW AND FUTURE TRENDS

Mattia Zamboni* and Anna Valente, PhD
DTI – ARM Lab, University of Applied Science of Southern Switzerland (SUPSI), Manno (TI), Switzerland

ABSTRACT

Collaborative robots in the industry target the enhancement of production efficiency by combining the dexterity and flexibility of human operators with the accuracy, speed, and reliability of industrial robots. This shift in paradigm requires robots to have the ability to work with humans in the same space, on the same piece and at the same time, just like another operator would. This not only requires them to be more sensitive and aware of their surroundings but also proactive in predicting collisions, which implies a significant redesign for safe collaboration throughout the entire design process. This chapter provides a wide range of information on human-robot collaboration, from the types of collaboration to the key factors for effective collaboration. It suggests a comprehensive set of sensing technologies to solve tomorrow's challenging interaction tasks. Real-world applications are analyzed to provide design solutions taking into account the maximum efficiency/safety trade-off.

Keywords: human-robot interaction, collaborative robotics, future robots’ trends

* Corresponding Author's E-mail: [email protected].


INTRODUCTION

Since their introduction, industrial robots have been very versatile machines, meant to be equipped with a wide range of toolings and sensors specifically configured and tuned according to the task to be performed (Siciliano and Khatib 2008, 969–975; Krüger, Lien, and Verl 2009, 628–646; Angerer et al. 2010, 1–22; Verl et al. 2019, 799–822). Their introduction aimed mainly at replacing human operators in repetitive, tiring, unsafe or unhealthy production tasks, with the benefit of a high level of positioning accuracy (Akli and Bouzouia 2015, 1–5). For safety reasons, they have typically always been placed within restricted areas that humans cannot unintentionally access.

The continuous search for production efficiency and resource optimization, supported by technological progress, is set to gradually change the current trend with a new emerging generation of more capable robots. This results in the need to share the workspace between robot and human (Stadler et al. 2013, 231–232), as well as a much tighter collaboration between them, to perform tasks both in the industrial and in the domestic domain (Duffy 2003, 177–190). In (Haddadin, Albu-Schäffer, and Hirzinger 2009, 1507–1527) the authors describe the requirements for safe collaborative robots, which include physical human-robot interaction (pHRI). Collision avoidance, which is important for human-robot safety, is listed as one of the relevant factors.

This will significantly transform the way manufacturing is currently structured and the jobs which make it happen. If the scenario of a car factory's assembly floor staffed entirely by machines is nothing new, having the robots interact with human workers brings the concept to a whole different level (Accenture 2018, 2–5). According to (Smith 2019, 8), collaborative robots, which in 2019 accounted for 3 percent of the whole robotics market, are expected to reach 34 percent of it by 2025. One venture capital firm projects a 61 percent compound annual growth rate (CAGR) for cobots (Searby 2016, 1).

Advances in sensor technologies have enabled collaborative robots to sense their environment, internal state, relative position, and more, yet the complexity of human-machine collaboration is a puzzle still unfolding (Blyler 2018, 1–4). Ultimately, the success of the cobots' adoption within factories and shop floors will be dictated by their ability to work efficiently and safely alongside humans, who still hold the unique skills and peculiarities that make them irreplaceable key workers.

The organization of this chapter is as follows: after the introduction, there is a section with general notes about collaborative robots, the types of interaction with humans, a comparison with traditional robots, their main challenges and specific safety notes. Next, there is a section with design considerations for future collaborative robots, in which a comprehensive set of key design factors is analyzed, including security concerns and future trends. Finally, there are acknowledgments and conclusions which include a summary.

COLLABORATIVE ROBOTS

Collaborative robots are defined in (Keeping 2018, 1) as "light, inexpensive industrial robots designed to fill in gaps and work closely with humans to bring automation to tasks that had previously been completed solely with manual labor." They are also referred to more simply as "cobots", short for COllaborative roBOTS (Bélanger-Barrette 2015, 1). Oftentimes the term 'cobot' is used when referring to a force-limited robot. While traditional industrial robots can be used for collaborative tasks, they are typically not force-limited, and they usually need supplementary monitoring devices to operate safely alongside humans. By the same token, an application can be labeled as 'collaborative' (and therefore inherently safe) only if all the components involved in it are collaborative as well. Cobots can be split into natively collaborative robots (typically power/force limited) and robots that are collaborative by "adoption", e.g., an industrial robot with an externally added sensory apparatus.

The key features in which cobots differ the most from traditional industrial robots (Hentout et al. 2019, 764–799) can be summarized as follows:

• Safety and sensitivity: whereas industrial robots blindly perform operations according to commands with limited awareness, cobots perform controlled movements with continuously monitored motor currents and torques, which allow them to stop whenever something is detected in their way. This means that cobots can be deployed directly within a shared production working space.

• Versatility and ease of set-up: whereas traditional industrial robots normally require advanced programming skills, cobots are typically programmed by giving work instructions instead. In other words, training and re-training are used as opposed to programming. As a result, cobots can be deployed and re-deployed quicker and with less effort compared to meticulously and long-planned robot cells.

• Performance and accuracy: besides the obvious safety reasons, the cages in which industrial robots are enclosed have a purpose: they allow the robot to perform at its very best in terms of both speed and acceleration. This makes industrial robots undeniably more performant. In terms of positioning accuracy as well, industrial robots typically perform better thanks to an overall higher structural stiffness and more accurate (and expensive) hardware.


Table 1 presents a more comprehensive comparison of features.

Table 1. Comparison between typical traditional industrial robots and collaborative robots

Feature              | Industrial traditional robots | Industrial collaborative robots
Safety               | Not inherently safe           | Inherently safe
Speed                | Typically fast                | Typically slow
Positioning accuracy | Typically accurate            | Limited accuracy
Teaching             | Not easy to teach             | Easy to teach
Set-up time          | Time consuming                | Quick
Size                 | From small to very big        | Typically small
Payload              | From small to very high       | Typically low
Team work            | No                            | Yes
Versatility          | Limited                       | High
Mobility             | None                          | Easily moved

In many respects collaborative robots undeniably represent a major evolution over industrial robots. But for the ultimate goal of human-less automation they still have several key challenges to address.

The Cobot Big Challenges

The implementation of full automation processes is highly complex because of technical limitations that have not yet been sufficiently addressed. The three major challenges (Huang 2019, 1–2) can be identified as follows.

Flexibility and Adaptability

Currently employed automated production lines are optimized and designed for mass production. This reduces costs, but leads to a lack of flexibility. As a consequence of the speed of evolution, the general future trend imposes shorter product life cycles and smaller volumes. Concurrently, however, highly customized production requires higher flexibility. Up to today this has strictly been a human's territory, and it therefore still represents one of the main challenges.

Dexterity and Task Complexity

There is no denying that, despite the rapid advancement of technology, there are no machines with a dexterity level on par with humans. There are plenty of assembly tasks in which robots excel. However, when it comes to operations like wiring a set of boards, which for a human worker is not considered challenging, for the robot they represent a monumental undertaking, challenging both the vision and the handling system.


Sensitivity and Practical Experience

Several complex assembly operations rely on both the sensitivity and the practical experience of the human worker. If sensitivity comes as standard with humans, practical experience is strictly related to the specific level of experience and the learned skills. In the world of technology, this translates into the use of artificial intelligence and deep learning. Through an iterative process of trial and error, the robot needs to perform an operation many times until it learns how to finely control its actuators. Learning to be gentle, delicate and smooth in its operations is necessary for a cobot. This represents another big challenge.

Types of Collaborations with Humans

Human-industrial robot collaboration can range from a shared workspace with no direct human-robot contact or task synchronization, to a robot that adjusts its motion in real time to the motion of an individual human worker. Table 2 shows the existing levels of human-robot cooperation.

Table 2. Summary of the levels of cooperation between human worker and robot

Cell: No cooperation, since the robot is operated in a traditional cage.
Coexistence: Human and cage-free robot work alongside each other but do not share a workspace.
Synchronized: The design of the workflow means that the human worker and the robot share a workspace, but only one of the interaction partners is actually present in the workspace at any one time.
Cooperation: Both interaction partners may have tasks to perform at the same time in the (shared) workspace, but they do not work simultaneously on the same product or component.
Collaboration: Human worker and robot work simultaneously on the same product or component.

Figure 1. Schematic representation of the human-robot levels of cooperation.


Interaction Implementation Modes with Cobots

Based on the levels of cooperation defined in Figure 1, four different interaction modes are identified and described in Table 3.

Table 3. Comparison between collaboration approaches

Safety Monitored Stop
  Description: achieved by safety mechanisms relying on external sensors for triggering a stop.
  Peculiarities: a traditional industrial robot combined with various safety sensors; once a human inside the work envelope is detected, it stops operating.
  Employment: applications with minimal human interaction with the robot.

Hand Guided
  Description: achieved by manually taking control of the robot using a hand-operated device.
  Peculiarities: a hand-operated device allows the human operator to take control of the robot.
  Employment: can be used both in the teaching phase and to assist production.

Speed and Separation
  Description: achieved by reducing the system's speed once the human operator enters a critical zone.
  Peculiarities: uses vision systems to monitor the cobot's work envelope, with a 3-zone system: safe zone (normal operation), warning zone (reduced speed), stop zone (stop operation).
  Employment: suited best for applications with frequent interaction with human workers.

Power and Force Limiting (PFL)
  Description: achieved mainly with a natively collaborative robot with power and force limiting capabilities.
  Peculiarities: does not require additional safety barriers, vision systems, or external scanners; cobots have no sharp corners or pinch points, no exposed motors, and sensitive collision monitors built in.
  Employment: currently limited to smaller applications.
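The three-zone speed-and-separation logic described in Table 3 can be sketched as a simple mapping from the measured human-robot distance to a speed scaling factor. The zone radii and the linear scaling below are illustrative assumptions for the sketch, not values prescribed by any standard:

```python
# Hypothetical speed-and-separation sketch: function name, zone radii,
# and the linear ramp are illustrative assumptions only.
def speed_factor(distance_m, warning_zone=2.0, stop_zone=0.5):
    """Return a speed scaling factor (0..1) from the human-robot distance."""
    if distance_m <= stop_zone:        # stop zone: halt all motion
        return 0.0
    if distance_m <= warning_zone:     # warning zone: ramp the speed down
        return (distance_m - stop_zone) / (warning_zone - stop_zone)
    return 1.0                         # safe zone: normal operation

for d in (3.0, 1.25, 0.4):
    print(d, speed_factor(d))
```

In a real cell the distance would come from the safety-rated vision system, and the limits would be derived from the risk assessment required by the applicable standards.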


Safety Guidelines for Cobots

In collaborative workplaces, where humans work alongside robots, safety is an essential prerequisite in the design of equipment, machines, and systems. Safety and dependability can be considered keys to the design of robots for environments with human presence. Safety standards represent unified requirements and design guidelines that help and ease the development of new systems. Even if compliance with standards is not needed to demonstrate a system's safety, it helps to reduce the effort in the safety compliance process and in certification with respect to the country-specific legislation on health and safety requirements for machinery. Table 4 lists the relevant existing standards related to collaborative robotics.

Table 4. Safety standard documents related to collaborative robotics

ISO 10218-1:2011: Specification of requirements and guidelines for the inherently safe design of industrial robots, with a description of the basic hazards associated with robots and the requirements to eliminate or reduce the risks associated with these hazards. Will soon be replaced by ISO/CD 10218-1.

ISO/TS 15066:2016: Specification of the safety requirements for collaborative industrial robot systems and the work environment; supplements the requirements and guidance on collaborative industrial robot operation given in ISO 10218-1 and ISO 10218-2.

R15.606-2016: Specification of safety requirements specific to collaborative robots and robot systems, supplemental to the guidance in ANSI/RIA R15.06. This document is a U.S. National Adoption of ISO/TS 15066.

R15.806-2018: Test methods and metrics for measuring the pressures and forces associated with quasi-static and transient contact events of collaborative applications, with guidance on determining the conditions of testing methods, test measurements and measurement devices.

The safety standard document R15.606-2016 additionally contains the pain threshold limit values required to validate whether a collaborative robot is operating within the acceptable range. The pain threshold limit values differ for the different parts of the human body that the cobot may contact. Collaborative robots typically have sensors and safety mechanisms built in. The variety of applications and the flexibility of cobots make it necessary to run tests to ensure that the limits are set appropriately for the expected contact type. Technical Report R15.806-2018 instead addresses the test methods and metrics for measuring the pressure and force associated with quasi-static and transient contact events of collaborative applications. It outlines the best testing methods for power and force in power-and-force-limited (PFL) cobot systems (MMH_Staff 2019).


As a general rule, collaborative robots should be designed to avoid the possibility of hurting the human. This requires eliminating all potential pinch and crush points on the robot, as well as increasing the surface area of contact points. In planning the application, the workspace layout should be organized to limit clamping points and to allow recoil after transient collisions. In addition, to allow for a quicker stop, the robot's velocity should be reduced whenever the robot approaches a fixed surface.

Safety vs Performance

Interactions between humans and robots represent an unstoppable trend which is being addressed in the name of both performance and safety; without these prerequisites, human-robot collaboration cannot be justified (X. Zhang, Zhu, and Lin 2016). Safety is the sine qua non condition whenever a human is involved, just as stated in the first of Asimov's Laws of Robotics. Only with that out of the way can the performance aspect be addressed (Bicchi et al. 2008, 1–6).

If the performance of a single human worker depends only on his skills and ability to perform an operation as quickly as possible, in a team of two, coordination and organization play a role just as important. In addition, a team of two human workers is expected to increase its productivity over time thanks to its ability to improve efficiency and coordination; this will not necessarily be the case in a mixed human-robot team. For this reason, in terms of efficiency, it is very important to meticulously plan and tune the collaborative operation to make sure that, as much as possible, there are no downtimes for the robot but above all none for the human operator. A collaborative implementation can be considered successful not only if the resulting product is assembled correctly, but also if it is done within a minimum reasonable time. The transition from a manual operation to a collaborative one often requires a product redesign which simplifies the task and/or makes the operation more efficient.

DESIGN CONSIDERATIONS FOR FUTURE COBOTS

The ultimate goal in designing future cobots should be to conceive the ideal partner for a human operator who needs assistance in performing a specific task: a perfect balance between performance in its operations, safety in its actions and movements, and awareness of its partner's state in order to behave accordingly. The human is a beautifully designed "machine", with a high degree of dexterity, a high strength-to-weight ratio, a high level of adaptation capabilities and quick learning skills. As a result, human capabilities should be taken as a source of inspiration.


To summarize the relevant factors, a comparison has been compiled in Table 5.

Table 5. Summary of advantages of human vs machine

Human operator's advantages:
• High dexterity
• High mobility and versatility
• High body strength/weight ratio
• High sensitivity for delicate tasks
• Ability to learn tasks rapidly and proactively make judgements
• Problem solving skills
• Naturally safe to work with
• Ability to teach tasks
• Ability to sense and apply tolerance compensation
• Flexible availability
• Ability to improve productivity over time

Artificial operator's advantages:
• High accuracy
• High speed
• High endurance
• Accurate sensing system
• Consistent production quality
• Ability to perform monotonous and unreasonable tasks
• Ability to perform heavy and hazardous tasks
• Immunity to diseases
• Potentially high computational power
• Collision prediction capability

Based on Table 5, the ambitious mission to design and engineer the perfect coworker should not be limited to replicating human skills but should rather try to combine the best of both worlds. For this purpose, a wide literature review allowed for identifying the key factors in designing intrinsically safe cobots (Hentout et al. 2019, 9–10), as summarized in Table 6.

Table 6. Key factors in the design of intrinsically safe cobots identified in the literature

• Weight reduction: Reducing the weight of the robot's moving parts is one of the main factors in designing intrinsically safe cobots. (Hirzinger et al. 2001, 3356–3363)
• Force/torque detection / sensitive skin adoption: Using torque/force sensors or proximity-sensitive skins to detect collisions. (Yu, Hai-Jun, and Hurd 2016; She et al. 2018, 1–14)
• Sensoric system increase: The interaction risk can be reduced by increasing the robot's sensorial apparatus. (Bicchi, Peshkin, and Colgate 2008, 1335–1348)
• Cushion layer adoption: Increasing the energy-absorbing properties of the robot's protective layers, adding soft and compliant coverings and placing airbags around the robot. (Weitschat et al. 2017, 2279–2284; Bicchi, Peshkin, and Colgate 2008, 1335–1348)
• Velocity reduction: Setting a limit to the velocity and maximum energy within the system to match human capabilities. (De Santis 2008)
• Strength reduction: Limiting both the strength and the contact force of the robot. (Bo et al. 2016, 1340–1345; Navarro et al. 2016, 3043–3048)

In the next paragraphs, we elaborate and build upon the most relevant factors mentioned in Table 6.

180

Mattia Zamboni and Anna Valente

Weight Reduction

While in industrial robots mass plays a relevant role in relation to performance and positioning accuracy, in the development of cobots it becomes a safety issue: the lighter the robot is, the less harmful it can be in case of collision. It is therefore of the utmost importance to address the weight factor in a structured way. In order to build a robot with a load-to-weight ratio as high as possible, the design requires an extremely lightweight structure (Hirzinger et al. 2001, 3356–63). The structural material needs to have a strength-to-weight ratio as high as possible; the best options available are aluminum, carbon fiber, and hybrid materials. Concurrently, the usage of extremely lightweight structures requires a close look at the highly loaded mechanical parts. It is necessary to maximize structural stiffness and strength while trying to reduce weight, which can be done using Finite Element Methods (FEM) (Hirzinger et al. 2001, 3356–63). New manufacturing technologies, like additive manufacturing, able to produce lighter parts designed with honeycomb types of structures, need to be considered (Plocher and Panesar 2019, 1–20). In addition, wherever possible, the design should try to keep heavy parts as close to the robot's base as possible in order to reduce inertia. Finally, the adoption of motors and gears with an optimal output-torque-to-weight ratio is recommended. A lighter joint structure implies more unwanted flex. To compensate for this elasticity, torque sensing can be implemented in the joints. Torque sensing indeed allows sophisticated control methods such as vibration damping and stiffness control.

Sensitive Joints Design

The sensitivity required of a collaborative robot calls for implementing the technologies and solutions needed to achieve awareness of the state of its components. A list of available solutions that help the robot monitor the state of its joints is provided below.

Dual Encoder Design

By placing encoders on both the motor and the load side (Figure 2), it is possible to detect the displacement caused by the joint elasticity (Mikhel et al. 2018, 246–252). This value can be exploited in several different ways: it can support identification, control, and sensing of the external torque. During the identification process, valuable data about the compliance characteristics of the joint elasticity are obtained. These data can be used by the robot controller to eliminate potential compliance errors and system vibrations.


Double encoders supplemented by a stiffness model can also work as force/torque sensors, allowing the development of robot behavior strategies based on collisions with a dynamically changing environment (Mikhel et al. 2018, 246–252). Several robot manufacturers adopt the double-encoder design in their manipulators' joints and feed the values into the control loop feedback in order to compensate for the joint compliances (Tsai et al. 2008, 1–10; Izumi and Matsuo 2012, 1–13). Lastly, the dual encoder configuration can be used for safe positioning signaling, using the double feedback as redundancy.
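As a minimal sketch of how the dual-encoder deflection can be turned into an external torque estimate, assuming a linear joint stiffness identified beforehand (all names and numeric values here are illustrative, not taken from any specific controller):

```python
def external_torque(theta_motor, theta_load, gear_ratio, k_joint):
    """Estimate the external torque acting on a flexible joint from the
    two encoder readings.

    theta_motor: motor-side encoder angle [rad], before the gearbox
    theta_load:  load-side encoder angle [rad], after the gearbox
    gear_ratio:  gearbox reduction ratio
    k_joint:     identified joint stiffness [Nm/rad] (linear model assumed)
    """
    # Elastic twist of the joint: difference between where the motor
    # "thinks" the link is and where the load-side encoder measures it.
    deflection = theta_motor / gear_ratio - theta_load
    return k_joint * deflection

# Illustrative numbers: a 100:1 gearbox and a 2000 Nm/rad joint.
# A twist of 0.001 rad corresponds to roughly 2 Nm of external torque.
tau_ext = external_torque(theta_motor=50.1, theta_load=0.5,
                          gear_ratio=100.0, k_joint=2000.0)
```

In a control loop, the same deflection signal could feed compliance compensation, while an unexpectedly large `tau_ext` could trigger a collision reaction.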

Figure 2. Block diagram of the dual encoder configuration.

Joint Motor Current Monitoring

With this technique, by measuring the motor current, it is possible to estimate the robot's joint torque without using an external force/torque sensor. There are two main methods to measure the current flow: either by reading it directly within the motor driver (if available) or by using Hall effect sensors. By developing both a dynamic and a friction model of the robot's joints (Chen, Luo, and He 2018, 1–10), real-time detection can be implemented by measuring both the motor current and the position of the robot's joint encoder. The measured residual error is then compared to a threshold in order to detect a collision. To illustrate the Hall effect principle, consider a current-carrying conductor in two scenarios: one with no magnetic field and one with a perpendicular magnetic field (Figures 3a and 3b). In the first scenario, the resulting voltage is zero; in the second, a voltage appears which is perpendicular both to the magnetic field and to the direction of the current flow.
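A hedged sketch of this residual-based logic follows; the motor constant, gear ratio, efficiency, and threshold are placeholder values, not parameters of any specific robot:

```python
def torque_from_current(current_a, torque_constant, gear_ratio, efficiency=0.9):
    """Estimate joint output torque from the measured motor current.
    torque_constant [Nm/A], gear_ratio and efficiency would come from
    the motor/gearbox datasheet (values here are assumptions)."""
    return current_a * torque_constant * gear_ratio * efficiency

def collision_detected(tau_estimated, tau_model, threshold):
    """Residual-based detection: flag a collision when the estimated
    torque deviates from the dynamic/friction-model prediction by more
    than a tuned threshold."""
    residual = tau_estimated - tau_model
    return abs(residual) > threshold

# Example: 2 A through a 0.1 Nm/A motor behind a 100:1 gearbox -> ~18 Nm.
tau = torque_from_current(2.0, torque_constant=0.1, gear_ratio=100.0)
```

The threshold trades sensitivity against false positives: too tight and model errors trigger spurious stops, too loose and light contacts go unnoticed.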


Figure 3. Block diagram for Hall effect principle and collision reaction strategy: (a) no magnetic field, (b) magnetic field, and (c) collision reaction strategy.

Thanks to this technique, collision detection can be implemented so that as soon as the robot detects a collision through the torque estimation, it is able to react accordingly in order to minimize the impact (Figure 3c). The company Universal Robots (UR) uses a similar strategy within its robots: by looking at the current and the encoders' positions, it derives the force/torque (Anandan 2013, 1). As a result, the robot knows the amount of force required to pick up a load and move it; if it recognizes an increase in the torque or force required for a movement, caused by a collision, it safely stops to prevent harm. The UR control system is redundant, so that any dangerous failure forces the robot to fail into a safe condition. While this strategy achieves remarkable results and keeps hardware costs down, it does not provide the same kind of sensitivity as dedicated torque sensors.

Force/Torque Feedback Sensors

Torque sensors are typically implemented using strain gauges applied to the supporting part of the rotating axle of the robot's joints. This method (Chen, Luo, and He 2018, 1–10) captures the torque of each joint and the values of each encoder while, in parallel, a dynamic model calculates the driving moment of the moving robot. A collision can be detected by comparing the calculated torque with the captured torque. Within this method, the first and second derivatives of the position need to be calculated. It should be noted that this process may introduce noise which, if not properly handled, can affect the accuracy of the system.
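Because finite-difference derivatives of encoder positions amplify quantization noise, some form of filtering is usually applied before the torque comparison. A minimal sketch, where the first-order low-pass filter and its `alpha` value are generic illustrative choices, not the method of the cited work:

```python
def finite_difference(pos, prev_pos, dt):
    """Raw numerical derivative of the encoder position (noisy)."""
    return (pos - prev_pos) / dt

def low_pass(prev_filtered, raw, alpha=0.1):
    """First-order IIR low-pass filter: a smaller alpha means stronger
    smoothing (and more lag). alpha is a tuning assumption."""
    return prev_filtered + alpha * (raw - prev_filtered)

# Velocity estimate for one control cycle of dt = 1 ms.
raw_vel = finite_difference(pos=0.1001, prev_pos=0.1000, dt=0.001)
vel = low_pass(prev_filtered=0.09, raw=raw_vel, alpha=0.1)
```

The second derivative (acceleration) would typically be computed from the already-filtered velocity, since differentiating twice compounds the noise.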


Torque sensors can, for example, enable the automation of delicate assembly tasks for force-controlled joining operations and process monitoring. While providing an additional, valuable level of awareness to the joint/robot, torque sensors require additional space; they add weight, cost, and a relevant level of complexity, primarily in processing the data from the sensor. The KUKA LBR iiwa is one of the main examples of a robot adopting torque sensors at each of its 7 joints.

Figure 4. Deformation of elastic structure by external torque: (a) side view, and (b) front view.

Figure 5. Measurement of external force: (a) installation of strain gauge in spoke, and (b) full-bridge circuit.


Mechanical PFL

In mechanical power and force limiting (PFL), power and force are limited mechanically by employing either variable-stiffness actuators or non-stiff elastic actuators. One of the main examples of mechanical PFL is Rethink Robotics's Baxter, which mounts the Series Elastic Actuators (SEA) developed at MIT (Anandan 2013, 1). An SEA consists of a motor, a gearbox, and a spring. The mechanism senses and limits force by measuring the twist of the spring to control the force output; the measurement of the spring's twist effectively provides a force sensor (Cestari 2016, 30–31). With this technology, the series elastic actuators make the robot inherently safe by making it compliant as opposed to stiff; it represents the difference between being hit by a spring and being hit by something rigid. The advantage of the SEA solution is that springs conveniently convert a position into a force F thanks to Hooke's Law: F = k ∙ Δx. Electric motors, in contrast, despite representing the predominant actuation technology and being very good at position control, are not very good at force control.
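A minimal numeric sketch of the SEA idea, with Hooke's law turning spring twist into torque and, conversely, turning force control into motor-side position control (the stiffness value and function names are illustrative assumptions):

```python
SPRING_K = 100.0  # spring stiffness [Nm/rad], an illustrative value

def sea_torque(theta_motor_out, theta_load):
    """The spring twist between gearbox output and load IS the force
    sensor: tau = k * delta_x (Hooke's law)."""
    return SPRING_K * (theta_motor_out - theta_load)

def motor_setpoint_for(tau_desired, theta_load):
    """Force control reduces to position control: command the motor-side
    angle that produces the desired spring twist."""
    return theta_load + tau_desired / SPRING_K

# Commanding 10 Nm against a load currently at 0.5 rad:
setpoint = motor_setpoint_for(10.0, 0.5)  # 0.6 rad
tau = sea_torque(setpoint, 0.5)           # back to ~10 Nm
```

This is why SEAs pair well with electric motors: the motor only needs accurate position control, which it is good at, and the spring does the force conversion.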

Sensoric System

The sensitivity concept cannot stop at the joints but needs to be extended to a more holistic view. A list of available technical solutions is presented below, each of them providing an additional degree of perception.

Figure 6. Schematic Diagram of Series Elastic Actuators.


Figure 7. Series Elastic Actuator. Source: Cestari 2016.

Sensitive Skin

Since designing the best artificial companion for the human worker seems to mean building a robot in his image and likeness, adding sensing capabilities embedded in its body/skin looks like a reasonable step. Depending on the specific needs, several different technologies can be used to achieve the required sensitivity, including mechanical, ultrasonic, and thermal, with the most common being capacitive sensing.

Capacitive Skin

Capacitive sensing is a flexible technology gradually becoming popular, since constructing a sensor can be as simple as adding a conductive area to a printed circuit board (Figure 9). Low-cost integrated interface chips are readily available (Phan et al. 2011, 2992–97). The sensitive area can then be covered with a protective layer such as silicone rubber foam. Its low cost and low power consumption make it an ideal choice for a wide range of both consumer and industrial applications.

Figure 8. Capacitive sensing principle.


Figure 9. A simple implementation of a capacitive sensor with 8 sensitive areas. Source: Phan et al. 2011.

Capacitive sensors can be used for collision detection. It is important to note that although it is possible to calibrate such a capacitive sensing system for rough distance measurements, it is primarily designed to detect changes in capacitance, which implies that it is mainly suitable for detecting changes in distance, e.g., when a human body part is moving closer to or farther away from the sensor. As a result, this kind of sensing can reduce impact forces by detecting and characterizing collision events and providing information that can be used for force-reduction behaviors. Various parameters that affect collision severity (interface friction, interface stiffness, end-tip velocity, and joint stiffness, irrespective of controller bandwidth) are used to provide information about the contact force at the site of impact. Building large arrays can present serious challenges associated with how to concurrently monitor them all. Therefore, in order to design an efficient method, the optimal sensor density must be taken into consideration: the sensor granularity should be higher wherever fine manipulation is required (e.g., fingertips) and much lower where little or no contact is planned. Despite being predominantly used for research purposes and not very popular at the industrial level, some examples of cobots provided with tactile sensors can be found on the market, such as Comau's Aura or the APAS by Bosch.
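To make the change-detection point concrete, here is a toy sketch of flagging approach events from a stream of capacitance readings; the units and the threshold are arbitrary and would need per-sensor calibration:

```python
def approach_events(readings, delta_threshold):
    """Return the indices at which the capacitance RISES by more than
    delta_threshold between consecutive samples, i.e., something
    conductive (like a hand) is likely moving closer to the pad.
    Absolute values are deliberately ignored: capacitive skins are
    better at sensing changes than absolute distance."""
    events = []
    for i in range(1, len(readings)):
        if readings[i] - readings[i - 1] > delta_threshold:
            events.append(i)
    return events

# A hand approaching between samples 1 and 2 of this fictitious trace:
trace = [1.00, 1.01, 1.50, 1.51]
hits = approach_events(trace, delta_threshold=0.2)  # [2]
```

A real skin would run one such detector per pad, which is where the monitoring and sensor-density challenges mentioned above come from.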

Vision System

Cobots can be equipped with two different kinds of vision systems: one dedicated to safety and one for generic recognition. These two kinds of systems serve very different purposes. Safety cameras are typically active full time for the purpose of detecting hazards. The second kind can be used for inspection: recognizing objects or people, reading barcodes, and performing measurements.

Safety Cameras

Several different types of safety systems help guarantee safety in human-robot collaborative manufacturing by preventing collisions or limiting impacts within an acceptable threshold in case of collision (Halme et al. 2018, 111–116).
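The 'speed and separation' interaction mode referred to in this chapter can be illustrated with a strongly simplified protective-distance check. This is a sketch in the spirit of such systems, not the full normative ISO/TS 15066 formula, and all numeric values are placeholders:

```python
def min_protective_distance(v_human, v_robot, t_reaction, t_stop, margin=0.1):
    """Distance [m] that human and robot can jointly cover before the
    robot is fully stopped: human speed over reaction + stopping time,
    robot speed over reaction time, plus a fixed safety margin."""
    return v_human * (t_reaction + t_stop) + v_robot * t_reaction + margin

def may_keep_moving(separation, v_human, v_robot, t_reaction, t_stop):
    """True if the measured human-robot separation still exceeds the
    protective distance; otherwise the robot should slow down or stop."""
    return separation > min_protective_distance(v_human, v_robot,
                                                t_reaction, t_stop)

# Walking human (1.6 m/s), robot at 0.5 m/s, 0.1 s reaction, 0.3 s stop:
d_min = min_protective_distance(1.6, 0.5, 0.1, 0.3)  # ~0.79 m
```

The safety camera supplies the `separation` measurement; everything else is a property of the robot and its controller, which is why occlusions in the camera's view are so critical.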


Safety cameras are typically installed in fixed positions rather than on the robot arm itself in order to provide a fixed angle of view, which results in less complex subject recognition. Stable lighting conditions also provide better recognition. Occlusions in the robot's working area, mostly in the presence of multiple people or large objects, may lead to scenarios where a human is not detected at a crucial moment. The risk of occlusions can be decreased by adopting multiple sensors and performing sensor fusion. Safety camera equipment can be used in combination with the traditional 'safety monitored stop' or 'speed and separation' interaction modes explained above. However, for maximum efficiency, dynamic collision avoidance algorithms can be implemented with the goal of integrating real-time trajectory planning, as documented for example in (Shiyu Zhang et al. 2019, 103664). This can reduce robot slowdowns or downtimes every time something happens to be in the robot's way.

Multi-Purpose Cameras

This kind of camera can be used for a wide variety of tasks, from reading barcodes to inspection, recognition and tracking of people or objects, performing measurements, and more. The image processing involved in image recognition requires significant processing power. To overcome this problem, there are either cameras with on-board processing capabilities or cameras relying on external processing. The first category embeds Field Programmable Gate Array (FPGA) chips which, after being programmed, can autonomously detect and recognize subjects of low to medium complexity. Their counterparts, without this capability, rely on external processing, which has the benefit of much higher potential processing power, but with slower response times due to the additional image-transfer time (Wang, Zeng, and Geng 2019, 01015). To build on top of the existing state of the art in vision recognition systems, future cobots will need to better address cognitive human-robot interactions.
In industrial environments, a more efficient HRI needs to address topics like human intention recognition in order to implement good communication between human and robot partners. Thus, the robot should be able to handle several behaviors, social components, gestures, and faces (Maurtua et al. 2017, 1–10) to facilitate safe and fluid collaborations (Coupeté, Moutarde, and Manitsaris 2016, 1–7).

Programming Modes

Simple and fast robot programming is not only an attractive feature for new users, but also provides experienced users with reduced programming time while coping with high-mix production. The reduced time translates into reduced costs, which also helps justify the use of robotic labor in new applications. The growing demand for cobots is also witnessed by the rapid increase in sales (Smith 2019, 8). In collaborative robotics, new features like hand guiding and teaching by demonstration have been introduced (Solvang and Sziebig 2012, 459–464).

Hand Guiding and Teaching

Hand guiding, allowing the operator to rapidly and intuitively interact with and program a robot, is one of the main peculiarities of collaborative robots. Unlike a teach pendant, hand guiding allows unskilled users to both interact with and program robots in a more intuitive way. Current collaborative robots typically include hand guiding functionality, with limitations in terms of the accuracy required for assembly operations. For accurate end-effector positioning, the teach pendant is still the most suitable tool, despite its lower intuitiveness and the longer programming time required (Safeea, Bearee, and Neto 2018, 1–11). During the teaching process, many applications require precise end-effector positioning. In (Safeea, Bearee, and Neto 2018, 1–11) the authors propose a method for precision hand guiding at the end-effector level, while (Shaolin Zhang et al. 2019, 5204–13) present a sensorless hand-guiding scheme for industrial robots.

The Security Concern

While an industrial robot operating inside its cage can, in case of malfunction, cause material damage, collaborative robot applications may produce more dramatic damage, thus representing an alarming safety concern. Cybersecurity in collaborative robots has recently emerged as a dangerously neglected part of the deployment of collaborative robots (RIA 2017). As reported in (Proposyscom Tech 2019), researchers from Alias Robotics performed an in-depth analysis of MARA, a modular cobot by Acutronic. The study highlighted several weaknesses and 27 exploitable vulnerabilities in the cobot. As a result, Acutronic Robotics promptly reacted to implement security fixes and turned their MARA into probably one of the most secure cobots on the market. This episode highlighted an emerging problem and the likelihood of many other cobots sharing the same security issues. The non-negligible takeaway is the new close bond between safety and security, or better, cybersecurity. The concept of cybersecurity is indeed no longer limited to protecting a company's sensitive data from being stolen: hacking into safety parameters and settings in a factory with wide-scale cobot adoption can lead to even more devastating consequences.


The Robotic Industries Association (RIA), in (RIA 2017), pointed out how vulnerabilities in cybersecurity can severely undermine the entire purpose of collaborative robots. Issuing a requirement to cobot suppliers to design only harmless robots would, even in the worst possible case, be counterproductive, since it would result in a huge roadblock on the path of cobot employment.

The Cybersecurity Solution for Collaborative Robots

Much of the responsibility lies first with the robot's manufacturer, but the robot system integrator also plays an important role, since it has a major responsibility in ensuring safe operating conditions. As an example, in a worst-case scenario, the system integrator should limit both the force and the power of collaborative robot clamps. Momentum and tool orientation can also be limited to further improve safety. While it is up to the robot manufacturer to provide the foundation for a secure robot, the system integrator can still improve safety in several other important ways. Further guidelines on this subject are provided in (Matthews 2018).

Artificial Intelligence in Cobots

In (Manz 2018, 27–29) the contribution of artificial intelligence (AI) to the cobot's evolution is described as follows: "cobots are collaborators. AI will make them partners." Cobots have already shown their benefits in industrial automation, but so far only the surface of what can be achieved has been scratched. For the next leap, the magic of AI will be required. Cobots could initially perform only a single task. Today they can perform multiple sets of complex tasks for specific workstations on a production line. One of the major innovations compared to industrial robots is their ability to learn by being taught. However, the next disruptive innovation sits just around the corner and implies the ability to learn through experience, courtesy of artificial intelligence. AI is typically treated as a single entity, but it actually includes a large family of problem-solving subsets: reasoning, planning, learning, verbalization, perception, localization, manipulation, and more. Each of them is required depending on the specific application. The discipline itself draws from other disciplines: psychology, physics, mathematics, general science, linguistics, economics, neurobiology, and more. Fortunately, cobots do not need to address all of these, or at least not yet (Manz 2018, 27–29). In order to achieve this new way of learning, the main AI-derived skill required is machine learning: the ability to gradually improve skills through experience. Machine learning algorithms use computational methods that, from data and without relying on a predetermined equation as a model, enable cobots to make predictions and their own decisions (Rajan and Saffiotti 2017, 1–9).

The next fundamental requirement is perception, which implies that the cobot, by pulling information from its sensors, needs to create a view of the surrounding world, resulting in a sense of awareness. This is crucial for cobots because, unlike their caged counterparts, they function in close contact with humans; without this ability, safety would be irretrievably compromised. Verbalization, or natural language processing, represents one additional ultimate goal of AI robotics, so that cobots will be able to converse with and learn from human coworkers. The final goal of this extremely complex discipline is to allow cobots to replicate the human ability to smoothly integrate inputs with motion responses, even while experiencing changes in the environment. Like most other AI subsets or those related to it, neural networks use other resources within the cobot to achieve their goals. Although these skills have been demonstrated, significantly more work needs to be done in this field, also because several other AI areas rely on it.

Back in 1954, George Devol patented the first industrial robot arm. Fifty-four years later, Universal Robots introduced what is considered the first collaborative robot: the UR5. Considering the progress in computational power and AI, the next revolution in cobotics will most likely arrive much faster this time (Webster and Hristov Ivanov 2019, 127–143). Still, considering that AI is a work in progress, it will likely enter cobots step by step, with incremental improvements adding intelligence to the machines. This will usher in a new generation that will turn them from collaborators into full partners (Manz 2018, 27–29).

INDUSTRIAL APPLICATIONS

In this section, some existing use cases are presented as examples of real-world industrial applications of collaborative robotics. Each of them has its own peculiarities, requirements, and limitations.

Use Case 1 – Electronic Panels Assembly

Description
The first use case presents a production plant specialized in the production of elevator panels: Car Operating Panels (COP), Landing Operating Panels (LOP), and Landing Indicator Panels (LIP). In this case, the focus is placed on the COP assembly line. The main operations required for the assembly are defined as follows: parts sourcing, barcode scanning, component placement, screw tightening, component wiring, and assembly testing.

Challenges
The complexity in this use case is represented mainly by the high-mix/low-volume nature of the production. The elevator's control panels present a high level of customization depending on the building type, number of floors, color requirements, country-specific customs and regulations, specific safety requirements, special features, and more. This implies a very high variety of possible COP configurations, which results in very articulated production requirements. Production starts with a kitting phase in which all the components in the bill of materials are collected. The assembly is then performed step by step and the panel gradually populated. Once assembled, the panel is tested, packaged, and ready for delivery.

Adopted Solution
In the solution adopted by the manufacturer (Figure 10), a collaborative cell has been designed in which the cobot and the human operator both work on the same assembly at the same time. All the required tasks have been accurately studied and assigned to either the robot or the human operator based on their specific skills. The assembly procedure has been timed in such a way that the robot and the human operator do not need to work on the same area of the panel at the same time.

Outcome
There are multiple benefits in the adopted solution. On one side, tasks are assigned based on specific skills, resulting in both the robot and the human operator performing the tasks they do well. On the other, they can work in parallel, since the COP assembly allows for it, effectively reducing the assembly time. A bonus feature allows the operator to take over operations from the robot in case of problems in order to avoid downtime.


Figure 10. Electronic panel assembly.

Use Case 2 – Domestic Appliances Assembly

Description
The second use case presents an example of domestic appliance assembly, focusing on the production of washing machines. The specific step to be addressed is the installation of the counterweight needed to stabilize the washing unit during spinning operations. The counterweight's weight ranges from 12 to 14 kg depending on the product model, and it needs to be tightened with bolts to the washing units, which are slowly moving on a continuous assembly line.

Challenges
This operation can be performed manually; however, even with the support of a zero-gravity device, especially during the pick-up of the counterweight, the task is not ergonomic and becomes tiring over time. The additional challenge is the task of tightening the heavy counterweight's bolts, which needs to be performed on the continuously moving assembly line.

Adopted Solution
The implemented solution is a collaborative cell in which a cobot autonomously picks up counterweights from a stack and hands them to the operator. The placement of the counterweight is performed by hand guiding, centering the piece on the washing unit so it can be tightened.


Outcome
One of the most critical and least ergonomic operations is picking the counterweights, which in this case has been assigned to the robot. The human operator is left with the lighter task of centering and tightening the part, which concurrently carries the responsibility of making sure the operation is performed properly and securely. For this use case, the collaborative operation significantly improves the human operator's job quality.

Figure 11. Domestic appliances assembly solution.

Use Case 3 – Food Products Packaging

Description
The third use case presents a flexible system for packaging material loading and handling on industrial teabag machines by a world leader in the design and manufacture of automated solutions for the packaging industry. The actors in this scene are the packaging machines that need to be loaded and unloaded with packaging spools by autonomous mobile robotic platforms. In all this, human operators need to access the 160-square-meter shop floor to perform maintenance operations.

Challenges
There are several challenges in this use case, starting from the required ability of the robotic platform to autonomously navigate to the packaging spool stock, pick the proper spool, transport it to the machine which requested the spool swap, remove the empty spool, and replace it. All this needs to happen in a safe environment in which human operators are able to move around the shop floor freely and safely to perform their operations.

Figure 12. Food products packaging solution.

Adopted Solution
A double cobot arm has been installed on a collaborative mobile platform. The platform has been equipped with cameras featuring TOF (Time of Flight) and tracking technologies, designed to provide depth perception capabilities and enable it to perceive and understand the environment.

Outcome
The collaborative solution implemented presents several challenges related to making the vehicle reliable and autonomous. However, this is the only way to achieve a truly single and flexible solution adoptable by the majority of companies using this type of machine, no matter the size or the layout of their factories.

ACKNOWLEDGMENTS

This chapter contains information from a research project which has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement nº 818087, Rossini. This document reflects only the authors' view and the Commission is not responsible for any use that may be made of the information it contains.


CONCLUSION

Thanks to advancements in technology, cobots represent a disruptive innovation inevitably set to change the concept of automated production. In this respect, the risk for a significant part of industrial robots of being replaced by cobots is in fact much more real than for human workers. Despite this, they should not be considered a total alternative to traditional industrial robots. While cobots do boast some impressive responsive features, they cannot tackle the dangerous, repetitive, and heavy-duty tasks usually associated with industrial robots, which most likely do not even require collaboration in the first place. In this chapter, we illustrated the roots upon which collaborative robotics is built and the technologies which can help in designing a collaborative robot. To summarize, the main key takeaways can be listed as follows:

• Despite the worker's safety being the highest priority, robots need to be designed in such a way that this very safety does not compromise their productivity.
• Collaborative robot ambitions bring new and unique safety challenges that cannot be addressed by using traditional barriers and exclusion zones.
• The key to future collaborative robots' success will be dictated by success in combining the advantages of machines with the dexterity and problem-solving skills of humans.
• While specifications and safety guidelines for cobots are available, more need to be released in order to allow a more comprehensive and systematic risk assessment process.
• Designing the best coworker for humans implies pushing the current boundaries of collaborative robotics and AI. If up to now the human operator has been responsible for making sure the robot functions properly and according to specs, the new paradigm suggests smarter and more proactive robots, on which humans will eventually seriously rely, and not just for production. This will be the true key enabler of improvement in job quality.

REFERENCES

Accenture. (2018). Foster Innovation with Enterprise Robotics.
Akli, Isma & Brahim Bouzouia. (2015). "Time-Dependant Trajectory Generation for Tele-Operated Mobile Manipulator." 3rd International Conference on Control, Engineering and Information Technology, CEIT 2015, 1–5. https://doi.org/10.1109/CEIT.2015.7233068.
Anandan, Tanya M. (2013). Safety and Control in Collaborative Robotics. https://www.controleng.com/articles/safety-and-control-in-collaborative-robotics/.
Angerer, Andreas, Alwin Hoffmann, Andreas Schierl, Michael Vistein & Wolfgang Reif. (2010). "The Robotics API: An Object-Oriented Framework for Modeling Industrial Robotics Applications." IEEE/RSJ 2010 International Conference on Intelligent Robots and Systems, IROS 2010 - Conference Proceedings, 4036–41. https://doi.org/10.1109/IROS.2010.5649098.
Bélanger-Barrette, Mathieu. (2015). What Is a Cobot? https://blog.robotiq.com/what-is-a-cobot.
Bicchi, Antonio, Michele Bavaro, Gianluca Boccadamo, Davide De Carli, Roberto Filippini, Giorgio Grioli, Marco Piccigallo, et al. (2008). Physical Human-Robot Interaction - Dependability, Safety, and Performance, 782.
Bicchi, Antonio, Michael A. Peshkin & J. Edward Colgate. (2008). "Safety for Physical Human–Robot Interaction." Springer Handbook of Robotics, 1335–48. https://doi.org/10.1007/978-3-540-30301-5_58.
Blyler, John. (2018). The Complexity of Mimicking Humans Is Just the Beginning. Mouser Electronics, Inc.
Bo, Han, Dhanya Menoth Mohan, Muhammad Azhar, Kana Sreekanth & Domenico Campolo. (2016). "Human-Robot Collaboration for Tooling Path Guidance." Proceedings of the IEEE RAS and EMBS International Conference on Biomedical Robotics and Biomechatronics, 2016-July, 1340–45. https://doi.org/10.1109/BIOROB.2016.7523818.
Cestari, Manuel. (2016). "Variable-Stiffness Joints with Embedded Force Sensor for High-Performance Wearable Gait Exoskeletons." Thesis. https://doi.org/10.13140/RG.2.2.35820.77446.
Chen, Saixuan, Minzhou Luo & Feng He. (2018). "A Universal Algorithm for Sensorless Collision Detection of Robot Actuator Faults." Advances in Mechanical Engineering, 10 (1), 1–10. https://doi.org/10.1177/1687814017740710.
Coupeté, Eva, Fabien Moutarde & Sotiris Manitsaris. (2016). "A User-Adaptive Gesture Recognition System Applied to Human-Robot Collaboration in Factories."
Duffy, Brian R. (2003). "Anthropomorphism and the Social Robot." Robotics and Autonomous Systems, 42 (3–4), 177–90. https://doi.org/10.1016/S0921-8890(02)00374-3.
Haddadin, Sami, Alin Albu-Schäffer & Gerd Hirzinger. (2009). "Requirements for Safe Robots: Measurements, Analysis and New Insights." International Journal of Robotics Research, 28 (11–12), 1507–27. https://doi.org/10.1177/0278364909343970.
Halme, Roni Jussi, Minna Lanz, Joni Kämäräinen, Roel Pieters, Jyrki Latokartano & Antti Hietanen. (2018). "Review of Vision-Based Safety Systems for Human-Robot Collaboration." Procedia CIRP, 72, 111–16. https://doi.org/10.1016/j.procir.2018.03.043.
Hentout, Abdelfetah, Mustapha Aouache, Abderraouf Maoudj & Isma Akli. (2019). "Human–Robot Interaction in Industrial Collaborative Robotics: A Literature Review of the Decade 2008–2017." Advanced Robotics, 1864. https://doi.org/10.1080/01691864.2019.1636714.
Hirzinger, G., Albu-Schäffer, A., Hähnle, M., Schaefer, I. & Sporer, N. (2001). "On a New Generation of Torque Controlled Light-Weight Robots." Proceedings - IEEE International Conference on Robotics and Automation, 4, 3356–63. https://doi.org/10.1109/robot.2001.933136.
Huang, Bastiane. (2019). It's Here! How AI Robot Will Revolutionize Manufacturing. https://towardsdatascience.com/its-here-how-ai-robot-will-revolutionize-manufacturing-44ce784438d4.
Izumi, Tetsuro & Tomohiro Matsuo. (2012). Robot System and Robot Control Apparatus, issued 2012. https://patents.google.com/patent/US20120215353.
Keeping, Steven. (2018). What Is a Collaborative Robot? Mouser Electronics, Inc.
Krüger, J., Lien, T. K. & Verl, A. (2009). "Cooperation of Human and Machines in Assembly Lines." CIRP Annals - Manufacturing Technology, 58 (2), 628–46. https://doi.org/10.1016/j.cirp.2009.09.009.
Manz, Barry. (2018). Cobots Are Collaborators. A.I. Will Make Them Partners. Mouser Electronics, Inc.
Matthews, Kayla. (2018). Do's and Don'ts for Protecting Your Cobots' Cybersecurity. https://blog.robotiq.com/protecting-the-cybersecurity-of-your-cobots-dos-and-donts.
Maurtua, Inaki, Aitor Ibarguren, Johan Kildal, Loreto Susperregi & Basilio Sierra. (2017). "Human–Robot Collaboration in Industrial Applications: Safety, Interaction and Trust," 1–10. https://doi.org/10.1177/1729881417716010.
Mikhel, Stanislav, Dmitry Popov, Shamil Mamedov & Alexandr Klimchik. (2018). "Advancement of Robots with Double Encoders for Industrial and Collaborative Applications." Conference of Open Innovation Association, FRUCT, 246–52. https://doi.org/10.23919/FRUCT.2018.8588021.
MMH Staff. (2019).
RIA Standards Committee Releases All-New Methods and Metrics for Collaborative Robot Testing. May 14. 2019. https://www.mmh.com/ article/ria_standards_committee_releases_all_new_methods_and_metrics_for_collab orat. Navarro, Benjamin, Andrea Cherubini, Aicha Fonte, Robin Passama, Gerard Poisson & Philippe Fraisse. (2016). “An ISO10218-Compliant Adaptive Damping Controller for Safe Physical Human-Robot Interaction.” Proceedings - IEEE International Conference on Robotics and Automation, 2016-June, 3043–48. https://doi.org/ 10.1109/ICRA.2016.7487468. Phan, Samson, Zhan Fan Quek, Preyas Shah, Dongjun Shin, Zubair Ahmed, Oussama Khatib & Mark Cutkosky. (2011). “Capacitive Skin Sensors for Robot Impact

198

Mattia Zamboni and Anna Valente

Monitoring.” IEEE International Conference on Intelligent Robots and Systems, 2992–97. https://doi.org/10.1109/IROS.2011.6048844. Plocher, Janos & Ajit Panesar. (2019). Review on Design and Structural Optimisation in Additive Manufacturing : Towards Next-Generation Lightweight Structures 183. https://doi.org/10.1016/j.matdes.2019.108164. Proposyscom Tech. (2019). Collaborative Robot Insecurities Exposed ! MARA, 2019. https://www.prosyscom.tech/robotics/collaborative-robot-insecurities-exposed-mara/. Rajan, Kanna & Alessandro Saffiotti. (2017). “Towards a Science of Integrated AI and Robotics Towards a Science of Integrated AI and Robotics.” Artificial Intelligence, no. February 2018. https://doi.org/10.1016/j.artint.2017.03.003. RIA. (2017). Collaborative Robots and Cybersecurity Concerns, 2017. https://www.robotics.org/blog-article.cfm/Collaborative-Robots-and-CybersecurityConcerns/66. Safeea, Mohammad, Richard Bearee & Pedro Neto. (2018). “End-Effector Precise HandGuiding for Collaborative Robots.” ROBOT 2017: Third Iberian Robotics Conference, 694, (September). https://doi.org/10.1007/978-3-319-70836-2. Santis, Agostino De. (2008). Modelling and Control for Human–Robot Interaction, no. November,95. http://www.fedoa.unina.it/2067/. Searby, Lynda. (2016). “Robots Uncaged.” Packaging News, 2016. https://www2.deloitte.com/us/en/insights/focus/signals-for-strategists/nextgeneration-robots-implications-for-business.html#endnote-sup-12. She, Yu, Hai Jun Su, Deshan Meng, Siyang Song & Junmin Wang. (2018). “Design and Modeling of a Compliant Link for Inherently Safe Corobots.” Journal of Mechanisms and Robotics, 10 (1). https://doi.org/10.1115/1.4038530. Siciliano, Bruno & Oussama Khatib. (2008). Springer Handbook of Robotics. Springer. Smith, Nigel. (2019). “Who Is Winning the Robot Race?” I40 Today, Issue 7, 2019. http://online.fliphtml5.com/kwnhb/tiel/#p=1. Solvang, Bjorn & Gábor Sziebig. (2012). 
“On Industrial Robots and Cognitive InfoCommunication.” 3rd IEEE International Conference on Cognitive Infocommunications, CogInfoCom 2012 - Proceedings, no. 978: 459–64. https://doi.org/10.1109/CogInfoCom.2012.6422025. Stadler, Susanne, Astrid Weiss, Nicole Mirnig & Manfred Tscheligi. (2013). “Anthropomorphism in the Factory - A Paradigm Change?” ACM/IEEE International Conference on Human-Robot Interaction, 231–32. https://doi.org/10.1109/ HRI.2013.6483586. Tsai, Jason, Eric Wong, Jianming Tao, H. Dean McGee & Hadi Akeel. (2008). Secondary Position Feedback Control of a Robot. https://doi.org/US 2010/0311130 Al.

Collaborative Robots

199

Verl, Alexander, Anna Valente, Shreyes Melkote, Christian Brecher & Erdem Ozturk. (2019). CIRP Annals - Manufacturing Technology - Robots in Machining. https://doi.org/10.1016/j.cirp.2019.05.009. Wang, Nana, Yi Zeng & Jie Geng. (2019). “A Brief Review on Safety Strategies of Physical Human-Robot Interaction.” ITM Web of Conferences, 25, 01015. https://doi.org/10.1051/itmconf/20192501015. Webster, Craig & Stanislav Hristov Ivanov. (2019). Robotics, Artificial Intelligence, and the Evolving Nature of Work, no. 2015. Weitschat, Roman, Jorn Vogel, Sophie Lantermann & Hannes Hoppner. (2017). “EndEffector Airbags to Accelerate Human-Robot Collaboration.” Proceedings - IEEE International Conference on Robotics and Automation, 2279–84. https://doi.org/ 10.1109/ICRA.2017.7989262. Yu, She, Su Hai-Jun & Carter J Hurd. (2016). Shape Optimization of 2D Compliant Links for Design of Inherently Safe Robots, 1–9. Zhang, Shaolin, Shuo Wang, Fengshui Jing & Min Tan. (2019). “A Sensorless Hand Guiding Scheme Based on Model Identification and Control for Industrial Robot.” IEEE Transactions on Industrial Informatics 15 (9): 5204–13. https://doi.org/ 10.1109/tii.2019.2900119. Zhang, Shiyu, Andrea Maria, Renzo Villa & Shuling Dai. (2019). “Real-Time Trajectory Planning Based on Joint-Decoupled Optimization in Human-Robot Interaction.” Mechanism and Machine Theory, 144, 103664. https://doi.org/10.1016/ j.mechmachtheory.2019.103664. Zhang, Xiaobin, Yinhao Zhu & Hai Lin. (2016). “Performance Guaranteed Human-Robot Collaboration through Correct-by-Design.” Proceedings of the American Control Conference, 2016-July, 6183–88. https://doi.org/10.1109/ACC.2016.7526641.

In: Industrial Robots: Design, Applications and Technology ISBN: 978-1-53617-779-4 Editors: Isak Karabegović et al. © 2020 Nova Science Publishers, Inc.

Chapter 9

ARTIFICIAL INTELLIGENCE DRIVES ADVANCES IN HUMAN-ROBOT COLLABORATION

Lejla Banjanović-Mehmedović*, PhD
Department of Control Systems, Automation and Robotics, Faculty of Electrical Engineering, University of Tuzla, Tuzla, Bosnia and Herzegovina

ABSTRACT

Today's collaborative robotic systems are based on force and speed limitation. The tendency in Industry 4.0 towards mass customisation of products, shorter product cycles and higher quality demands is introducing a powerful combination of artificial intelligence (AI) and advanced vision technology into collaborative robot systems, making them capable of learning and of working hand in hand with humans. A central component that influences industrial productivity is the movement of the robot, which must ensure human safety in an unstructured environment. Fuzzy and game-theory-based decision making, machine and deep learning for body posture, hand motion and voice recognition, prediction of human reaching motion, optimal control for path planning of exoskeleton robots, and task allocation optimization are advanced approaches to human-robot collaboration. This chapter presents a comprehensive survey of artificial intelligence techniques and discusses their applications in human-robot collaboration toward making manufacturing more flexible. Recent advances in artificial intelligence, digital twins, cyber-physical systems and robotics have opened new possibilities for technological progress in manufacturing, leading to efficient and flexible factories.

* Corresponding Author's E-mail: [email protected].


Keywords: artificial intelligence, collaborative robots, deep learning, fuzzy logic, game theory, human-robot collaboration, machine learning, industrial robotics, optimization algorithms, smart industry

INTRODUCTION

There are many industrial processes that can be performed autonomously in an effective and efficient manner using standardized industrial robots (Faber et al. 2015, 510-517). However, there are tasks, such as assembly, which cannot be fully automated, mostly because of the special sensory and motor skills needed to succeed in them. Collaborative automation offers the option of keeping a human in the cycle. Physical human-robot interaction is an emerging research field, driven by the need for robotics in unstructured environments. Combining the advantages of both sides through direct human-robot cooperation increases the performance of the overall system most efficiently. Human-robot collaboration (HRC) presents a complex biomechatronic architecture from a design perspective, and design decisions are extensively supported by computer-aided simulation models for validation and failure assessments (Maurice et al. 2017, 88-102). While robots handle repetitive and monotonous steps, humans can concentrate on tasks that require special human skills, such as sensory-motor skills and creative problem solving, and can adapt flexibly to new situations and upcoming problems. In turn, heavy or hazardous parts can be handled by the robot, using artificial intelligence to relieve the human. By balancing the flexibility of manual human processes against the efficiency and repeatability of robots, with safety in mind, collaborative robots, or cobots, form the new generation of industrial robots (Malik and Bilberg 2018, 278-285). A key problem in human-robot collaboration (HRC) is safe movement of the robot. The different events that can occur in an unstructured environment, such as human movements, result in a variable working space. For path planning and obstacle detection, a machine-learning-based control strategy can help protect the human in the joint workspace (Dröder et al. 2018, 187-192).
Depending on the intelligence of collaborative robots (cobots), the collaboration between humans and robots can be categorized into three levels: from low-level collision avoidance, through middle-level efficient cooperation with prediction and recognition of human trajectories, intentions and plans, to high-level collaboration mode selection and task assignment (Figure 1).


Advanced Forms of Human-Robot Collaboration Applications

Advanced forms of new-generation industrial robotic applications that use artificial intelligence through human-robot collaboration include assembly, picking and packing, and lifting and transportation, among others.

Figure 1. Collaboration between humans and robots based on the intelligence of collaborative robots.

Assembly Applications

AI is a highly useful tool in robotic assembly applications in complex manufacturing sectors; it can help a robot learn on its own, while in operation, which approaches work best for certain processes. An example of an assembly application is presented in Figure 2.

Picking and Packaging

It is very difficult for a robot to work with products of different sizes and shapes, and also because the work has to be completed around humans rather than inside a work cell. This is where collaborative robots using AI can work accurately, quickly and safely around humans, at lower cost and with higher package throughput. An example of a packaging application is presented in Figure 3.

Lifting and Transportation of Heavy Parts Using Industrial Exoskeletons

The adoption of exoskeletons in industrial applications is nowadays a hot topic, given their capability to assist humans in executing onerous tasks. Exoskeletons for work and industry can be separated into six categories (Web, January 2020):


Figure 2. Collaboration between human and robot in an assembly process.

Figure 3. Human-robot collaboration by monitoring the working space and slowing or stopping the high-speed robot.

• Tool Holding Exoskeletons: These exoskeletons are composed of a mechanical arm which holds a heavy tool on one end and is connected to a lower-body exoskeleton and a counterweight. The exoskeleton is usually passive, but there is at least one prototype with motors in the legs. The tool weight is transmitted directly into the ground.
• Chairless Chairs: Lightweight exoskeletons designed to be worn on top of work pants; they can stiffen and lock in place, which decreases fatigue during prolonged standing or crouching.
• Back Support: Exoskeletons which offer back support help workers maintain the correct posture as they bend down to perform a lift. They can also reduce the load on the back muscles, or even on the spine, while bending down.
• Powered Gloves: Robotic gloves designed to help workers with a weak grasp gain a stronger hold on tools.
• Full Body Powered Suits: Until a few years ago it was believed that large, full-body powered suits would be used for work and industry. Now almost all developers have switched to smaller, specialized exoskeletons, but there are still ongoing projects in this area.
• Additional/Supernumerary Robotics: A pair of wearable, powered exoskeleton arms controlled by the wearer and used to hold tools or materials in place; a challenging and important wearable-robot project.

The different forms of industrial exoskeletons are presented in Figure 4. Industrial exoskeleton control deals with complex, uncertain and dynamic processes, so fuzzy-logic-based control, optimal control, machine learning and optimization techniques are the forms of artificial intelligence used for this kind of human-robot collaboration.

Other Applications

Although welding, assembly, and picking and packing robots are the most common types of human-robot collaboration, some industries use robots to perform other tasks in cooperation with humans. The aerospace, automotive, electronics, food and textile industries use robots to cut, drill and clean a variety of materials in interaction with a human. An example of human-robot collaboration in the auto industry is presented in Figure 5.

Figure 4. Industrial exoskeletons in human-robot collaboration.


Figure 5. Human-robot collaboration in industrial manufacturing.

Human-in-the-loop robotic applications introduce many problems, from ergonomics and human factors to uncertainties and unobservable states. Many solutions have been proposed in the literature to cope with these problems, but an abundant research effort still needs to be made in the field of advanced artificial intelligence.

Digital Twin for Human-Robot Collaboration

Mechanical assembly is the dominant application domain of industrial robots in industrial manufacturing systems. These operations often imply handling difficult product geometries and require higher production flexibility, which is hard to automate. A key problem in high-precision robotic assembly tasks is how to make robots operate reliably in the presence of uncertainties, such as mechanical, control, sensor, kinematic or dynamic uncertainties. By introducing an intelligent search strategy into the simulation framework, it is possible to overcome uncertainties during the robot assembly process. A genetic-based re-planning search strategy, using neurally learned vibration behavior to achieve tolerance compensation of uncertainties in robotic assembly, is presented in the paper (Banjanovic-Mehmedovic 2011). The vibration behavior was created for the complex robot assembly of a cogged tube over multistage planetary speed. To enhance manufacturing productivity and cope with the challenges of higher production flexibility, product mix and reconfiguration, the concept of human-robot collaboration is rising in importance in industrial manufacturing, especially in the assembly industry. A human-robot collaborative system is a dynamic system, which requires innovative approaches for the design and control of a complex environment. The role of simulation-based digital prototyping is very important in research to support


decision making in manufacturing systems (Flores-Garcia et al. 2015, 2124-2135). Digital simulation gives insight into complex production systems, allowing operating policies to be tested prior to their implementation in the real world (Mourtzis et al. 2014, 213-229). As a digital virtual modelling formalism, a reactive hybrid automaton, the Wormhole Model with both learning and re-planning capacities (WOMOLERE), was introduced for robot assembly in (Banjanovic-Mehmedovic et al. 2013, 380-512). The digital twin simulates the behavior of the system by creating virtual models (digital shadows) of physical objects (Malik and Bilberg 2018, 278-285). The physical environment is the actual production system composed of humans, robots and related production equipment, while the virtual environment is composed of a computer simulation. The process starts with a simple animation, which is then extended to extensive experiments with artificial intelligence algorithms. These systems need to be able to continuously extend and adapt to various configurations during their operation in order to support high product variety (Bilberg and Malik 2019, 499-502). In this way, it is possible to perform system-level performance analysis for many different scenarios from the experimental setup. The digital twin can help estimate results in the planning and optimization of robotic systems even without real-time connectivity to the physical system, but it can also be continuously evolved in real time using advanced forms of information and communication technologies. A reference model for the implementation of digital twins in manufacturing has been proposed by (Schleich et al. 2017, 141-144). The paper (Bilberg and Malik 2019, 499-502) discusses an object-oriented, event-driven simulation as the digital twin of a flexible assembly cell coordinated with a robot performing assembly tasks alongside a human. A Kinect sensor is used to monitor human positions and presence inside the workspace.
These data are utilized to periodically optimize robot trajectories to account for the locations which a human often enters. The historic data on human positions lets the simulation self-learn about frequently occurring human-robot collisions and generate robot trajectories free from possible human intervention. An experimental simulation platform (HIRIT) for human-robot interaction with adaptive path planning, by means of a dynamic protective cover for humans, is presented in (Dröder et al. 2018, 187-192). The novelty of this digital twin platform is the usage of machine learning, especially the extensive implementation of ANNs. The digital twin for human-robot collaboration is presented in Figure 6.
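The trajectory-adaptation idea above can be sketched in a few lines: accumulate the sensed human positions in a 2-D occupancy grid and score candidate robot paths by how often a human has been observed in the cells they cross. The grid resolution, the scoring rule and all names below are illustrative assumptions, not the actual implementation of (Bilberg and Malik 2019).

```python
# Hedged sketch: log historic human positions in a 2-D occupancy grid and
# prefer robot paths that avoid frequently occupied cells. Cell size and the
# risk score are invented for illustration.
from collections import Counter

class HumanOccupancyMap:
    def __init__(self, cell_size=0.25):
        self.cell_size = cell_size
        self.counts = Counter()          # (ix, iy) -> observed human visits

    def _cell(self, x, y):
        return (int(x // self.cell_size), int(y // self.cell_size))

    def observe(self, x, y):
        """Log one sensed human position (e.g., from a Kinect frame)."""
        self.counts[self._cell(x, y)] += 1

    def path_risk(self, waypoints):
        """Sum of human-visit counts along a candidate robot path."""
        return sum(self.counts[self._cell(x, y)] for x, y in waypoints)

def safer_path(occupancy, candidates):
    """Pick the candidate path least likely to meet a human."""
    return min(candidates, key=occupancy.path_risk)

grid = HumanOccupancyMap()
for _ in range(50):                      # human repeatedly seen near (1.0, 1.0)
    grid.observe(1.0, 1.0)
direct = [(0.0, 0.0), (0.5, 0.5), (1.0, 1.0), (1.5, 1.5)]
detour = [(0.0, 0.0), (0.5, 0.0), (1.5, 0.0), (1.5, 1.5)]
print(safer_path(grid, [direct, detour]) is detour)   # True: detour avoids the hotspot
```

In a real digital twin the `observe` calls would be fed by the sensor stream, and re-planning would run periodically, as described above.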

ARTIFICIAL INTELLIGENCE IN HUMAN-ROBOT INTERACTION

Artificial intelligence (AI) has large potential to help the industry achieve transformations that strengthen its global competitiveness. AI includes multiple algorithm families, such as reasoning, control, machine learning and optimization, that, individually


or in combination, add intelligence to applications. For human-robot collaboration, artificial intelligence can be applied for different purposes, such as prediction of human reaching motion, decision making and control, different forms of classification (body posture recognition, hand motion recognition and voice recognition), optimization of robot-human task allocation in process manufacturing, etc. An overview of the artificial intelligence techniques that have been used in human-robot collaboration in industrial manufacturing systems is presented in Table 1.

Figure 6. Digital Twin for human-robot collaboration using machine learning.

Table 1. Overview of artificial intelligence techniques for specified tasks in human-robot collaboration (HRC)

Artificial Intelligence Algorithm Types | Task Specification
Fuzzy Logic (FL), Game Theory | Decision Making, Control
K-Nearest Neighbour (KNN), Hidden Markov Models (HMM), Support Vector Machines (SVM), Artificial Neural Network (ANN), Deep Learning Networks (DLN) | Classification Tasks, Learning
Genetic Algorithm (GA), Particle Swarm Optimization (PSO), Ant Colony Optimization (ACO), Multi-Objective Optimization (NSGA-II, NSGA-III, SPEA2, NPGA-II) | Optimal Control, Optimization, Task Allocation

AI in Decision Making and Control for Human-Robot Collaboration

In human-robot collaboration, the human and the robot workers must adapt to each other's decisions and motions. Fuzzy logic and game theory are artificial intelligence techniques for decision making and control in human-robot collaboration.


Fuzzy Logic Decision Making and Control

Intelligent manufacturing systems have human-like reasoning capabilities, implemented by fuzzy logic systems within the complex manufacturing system (Zhang and Lu 2012). While classical logic only allows a variable to take the values 0 or 1, fuzzy logic permits any value in the interval [0, 1] (Pohammad et al. 2009, 535-553). To emulate the human decision-making process, a fuzzy logic scheme is proposed in (Dimeas et al. 2014) that determines the desired damping of the admittance controller using only the joint position sensors of the robot and an external force sensor. In order to tune the FIS for optimal cooperation, a Fuzzy Model Reference Learning Controller (FMRLC) is used to adapt the FIS towards the minimum-jerk trajectory model (Dimeas and Aspragathos 2014, 1011-1016). An optimum choice of the process parameters is essential for efficient robot control in the scope of human-robot collaboration. The optimization approach within the context of control systems is shown in Figure 7.
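As a minimal illustration of fuzzy decision making of this kind, the sketch below maps a measured interaction force to an admittance damping value through triangular membership functions and a Sugeno-style weighted average. All membership breakpoints and damping levels are invented for the example; this is not the controller of (Dimeas et al. 2014).

```python
# Minimal fuzzy-logic sketch: choose an admittance damping level from the
# measured interaction force. Breakpoints and damping values are assumptions.

def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_damping(force_n):
    """Map |interaction force| in newtons to a damping value (N*s/m)."""
    # Fuzzify: degree of membership in each linguistic force set.
    low    = tri(force_n, -10.0, 0.0, 10.0)
    medium = tri(force_n, 5.0, 15.0, 25.0)
    high   = tri(force_n, 20.0, 30.0, 40.0)
    # Rules: low force -> high damping (precise positioning); high force ->
    # low damping (easy hand guiding). Defuzzify with a weighted average.
    weights = [low, medium, high]
    damping_levels = [80.0, 40.0, 10.0]          # assumed rule outputs
    total = sum(weights)
    if total == 0.0:
        return damping_levels[-1]                # saturate for large forces
    return sum(w * d for w, d in zip(weights, damping_levels)) / total

print(fuzzy_damping(2.0))    # 80.0: low force, high damping
print(fuzzy_damping(30.0))   # 10.0: high force, low damping
```

The same pattern extends to more inputs (e.g., velocity) by taking the minimum of the antecedent memberships per rule.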

Figure 7. Optimal tuning of the controller parameters.
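The tuning loop of Figure 7 can be illustrated with a toy example: a grid search over the gains of a PD controller driving a simulated unit point mass to a step reference, with the integrated squared tracking error as the cost. The plant model, gain ranges and cost function are simplifying assumptions.

```python
# Illustrative parameter-tuning loop: grid-search PD gains for a simulated
# 1-D unit point mass tracking a step reference. Plant and ranges are assumed.

def track_cost(kp, kd, target=1.0, dt=0.01, steps=500):
    """Integrate x'' = kp*(target - x) - kd*x' and return summed squared error."""
    x, v, cost = 0.0, 0.0, 0.0
    for _ in range(steps):
        a = kp * (target - x) - kd * v      # PD control force on a unit mass
        v += a * dt                          # semi-implicit Euler step
        x += v * dt
        cost += (target - x) ** 2 * dt
    return cost

def tune_pd(kp_grid, kd_grid):
    """Return the (kp, kd) pair with the lowest tracking cost."""
    return min(((kp, kd) for kp in kp_grid for kd in kd_grid),
               key=lambda gains: track_cost(*gains))

best = tune_pd(kp_grid=[1.0, 5.0, 20.0], kd_grid=[0.5, 2.0, 8.0])
print(best, round(track_cost(*best), 3))
```

In practice the grid search would be replaced by one of the optimization algorithms from Table 1 (GA, PSO, etc.), and the simulated plant by the robot or exoskeleton model.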

To enable safe and efficient human-robot collaboration in shared workspaces, it is important for the robot to predict how a human will move when performing a task. However, human motion in environments with obstacles has been difficult to characterize. In order to predict human motion during human-robot collaborative manipulation in manufacturing, it is important to study how two humans collaborate in a shared workspace. An approach based on Inverse Optimal Control (IOC) makes it possible to find a cost function balancing different features that outperforms hand-tuning of the cost function in terms of task-space and joint-center distance (Mainprice et al. 2015, 885-892). Exoskeleton robots are a rising technology in industrial contexts to assist humans in onerous applications, in order to achieve high-performance human-robot collaboration


as well as transparency, ergonomics, safety, etc. (Mauri et al. 2019). Industrial exoskeletons can be classified as passive or active. Passive solutions have no actuation; instead, they use springs and/or dampers to store energy from the human's motion and release it when required. Passive exoskeletons commonly support the human operator to relieve him/her from repetitive tasks while improving ergonomics. Active exoskeletons, by contrast, are provided with actuation, allowing them to empower the human worker. Exoskeleton control is widely investigated in order to assist humans in different applications, and many control approaches have been developed, integrating different sensors and control techniques. The motivation to adopt fuzzy logic for the outer, human-intention-based control is to deal with complex, uncertain and dynamic processes, which are intrinsically difficult to model mathematically. A hierarchic model-based controller with embedded safety rules is proposed to actively assist the human while executing the task. An inner optimal controller is proposed for trajectory tracking, while an outer safety-based fuzzy logic controller deforms the task trajectory online on the basis of the human's intention of motion. A gain scheduler is designed to calculate the inner optimal control gains on the basis of the performed trajectory. Considering the control design of the device, machine learning techniques are investigated to optimize the outer controller parameters.

Game theory is a form of decision making in which several players must make choices that potentially affect the interests of the other players (Turocy and von Stengel 2001). Game theory applies in many studies of competitive scenarios, so the problems are called games and the participants are called players or agents of the game (Wandile 2012). A player is defined as an individual or group of individuals making a decision (Osborne 2003).
Each player of the game has an associated amount of benefit, called payoff or utility, which it receives at the end of the game. The payoff measures the degree of satisfaction an individual player derives from the conflicting situation (Camerer et al. 2001). For each player of the game, the choices available are called strategies (Shah et al. 2012, 511). The game is a description of strategic interactions, including the action constraints and interests of each player, but without specifying the actions which the players actually take (Osborne 2003). Game theory includes cooperative games and non-cooperative games (Zhihao, 2018, 87-92). Whether a game is cooperative or non-cooperative depends on whether the players can communicate with one another. The non-cooperative game is based on the Nash equilibrium and is concerned with the analysis of strategic choices. The cooperative game is described as 'a game of cooperation agreement' and focuses on how to maximize the interests of the participants in the game and how to distribute the benefits to each participant. The Nash equilibrium, also called the strategic equilibrium, is a list of strategies, one for each player, with the property that no player can unilaterally change his strategy


and get a better payoff. In other words, no player in the game would take a different action as long as every other player remains the same (Leyton-Brown 2008). There are a number of possible strategies and game types (Turocy and von Stengel 2001):

• Dominating strategy. A dominant strategy presents the best choice for a player for every possible choice by the other player.
• Extensive game. An extensive game (or extensive-form game) describes with a tree how a game is played.
• Mixed strategy. A mixed strategy is an active randomization, with given probabilities, that determines the player's decision.
• Evolutionary interpretation. In the evolutionary interpretation, there is a large population of individuals, each of whom can adopt one of the strategies.
• Zero-sum game. A game is said to be zero-sum if, for any outcome, the sum of the payoffs to all players is zero.
• Non-zero-sum game. Non-zero-sum games are non-strictly competitive, as opposed to completely competitive zero-sum games, because such games generally have both competitive and cooperative elements.

Players engaged in a non-zero-sum conflict have some complementary interests and some interests that are completely opposed. The prisoner's dilemma is an example of a non-zero-sum game; it applies to any situation in which two players do not cooperate even though the best joint strategy is for both to cooperate, whereas the worst outcome is to be the cooperating player while the other player defects. The non-zero-sum game is proposed for autonomous vehicle-to-vehicle (V2V) decision making in conflict situations at a roundabout test-bed (Banjanovic-Mehmedovic et al. 2016, 292-298). Game theory has been shown to be suitable for analyzing the performance of multi-agent systems (Jarrasse 2013), in which human-robot interaction is treated as a two-agent game. By enabling the robot to identify the human user's behaviour and exploiting game theory to let the robot react to it optimally, robots can work alongside humans as humans do. In game theory, a variety of interactive behaviors can be described by different combinations of individual objective/cost functions and different optimization criteria. The collaborative task in the paper (Yanan 2016, 1408-1418) is realized through physical human-robot interaction. Game theory is employed to describe the system under study, and policy iteration is adopted to provide a solution of the Nash equilibrium. The human's control objective is estimated based on the measured interaction force, and it is used to adapt the robot's objective such that human-robot coordination can be achieved.
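The prisoner's dilemma described above can be solved by brute force: a strategy profile is a Nash equilibrium exactly when no player can improve their own payoff by deviating unilaterally. The payoff numbers below are the standard textbook values.

```python
# Pure-strategy Nash equilibria of the prisoner's dilemma by exhaustive
# best-response check. Payoffs are utilities (negated years of sentence).

STRATEGIES = ("cooperate", "defect")
# PAYOFF[(row, col)] = (row player's utility, column player's utility)
PAYOFF = {
    ("cooperate", "cooperate"): (-1, -1),
    ("cooperate", "defect"):    (-3,  0),
    ("defect",    "cooperate"): ( 0, -3),
    ("defect",    "defect"):    (-2, -2),
}

def is_nash(row, col):
    """True if neither player gains by unilaterally switching strategy."""
    u_row, u_col = PAYOFF[(row, col)]
    row_ok = all(PAYOFF[(r, col)][0] <= u_row for r in STRATEGIES)
    col_ok = all(PAYOFF[(row, c)][1] <= u_col for c in STRATEGIES)
    return row_ok and col_ok

equilibria = [(r, c) for r in STRATEGIES for c in STRATEGIES if is_nash(r, c)]
print(equilibria)   # [('defect', 'defect')]: mutual defection, the worse outcome
```

The check confirms the dilemma: mutual cooperation gives both players a better payoff, yet mutual defection is the only equilibrium, since each player gains by defecting unilaterally.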


AI Algorithms for Classification in Human-Robot Collaboration

The information which the robot collects with sensors from the human-robot workspace includes human body posture, hand motion and voice commands (Liu et al. 2018, 3-8). Gestures are dynamic movements of the hands within a certain time, and are a very important part of human-robot interaction. There are four essential technical components in a gesture recognition model for human-robot collaboration: sensor technologies, gesture identification, gesture tracking and gesture classification (Liu and Wang 2018, 355-367). K-Means clustering, K-Nearest Neighbours (KNN), Hidden Markov Models (HMM), Support Vector Machines (SVM), multilayer feed-forward neural networks and deep learning networks are often used in gesture classification (Attolico et al. 2014).

The K-Nearest Neighbours (KNN) algorithm is a fundamental classification algorithm that classifies input data according to the closest training examples. Automatic segmentation of human grasping motions using KNN, SVM and ANN classifiers is presented in the paper (Hayne 2015).

A Hidden Markov Model (HMM) is a combination of an unobservable Markov chain and a stochastic process (Liu and Wang 2018, 355-367). Its characteristics are the number of states in the model, the number of distinct observation symbols per state, the state transition probability distribution, the observation symbol probability distribution and the initial state distribution (Attolico et al. 2014). The HMM is a popular gesture classification and voice recognition algorithm (Hinton et al. 2012, 82-97).

A Support Vector Machine (SVM) is a discriminative classifier defined by a separating hyperplane, where the optimal separating hyperplane maximises the margin of the training data and categorizes new examples (Attolico et al. 2014). The SVM kernel transforms low-dimensional training data into a high-dimensional feature space with a non-linear method (Liu and Wang 2018, 355-367).
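The KNN classifier described above fits in a few lines: a new feature vector receives the majority label among its k closest training examples. The 2-D "gesture feature" data here is invented purely for illustration.

```python
# Minimal k-nearest-neighbours classifier: label a query vector by majority
# vote among the k closest training examples. Data are invented for the demo.
from collections import Counter
from math import dist

def knn_classify(train, query, k=3):
    """train: list of (feature_vector, label); returns the majority label."""
    neighbours = sorted(train, key=lambda ex: dist(ex[0], query))[:k]
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]

train = [
    ((0.1, 0.2), "wave"), ((0.2, 0.1), "wave"), ((0.15, 0.25), "wave"),
    ((0.9, 0.8), "stop"), ((0.8, 0.9), "stop"), ((0.85, 0.75), "stop"),
]
print(knn_classify(train, (0.2, 0.2)))   # wave
print(knn_classify(train, (0.8, 0.8)))   # stop
```

Real gesture features would of course be higher-dimensional (e.g., joint angles from a depth sensor), but the voting logic is unchanged.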
The combination of SVM with other classification methods can improve gesture classification performance (Patsadu et al. 2012, 28-32).

Artificial Neural Networks (ANN). An important property of artificial neural networks is their ability to learn complex nonlinear relationships between the inputs and outputs of the network (Yang et al. 2014, 11-20). Learning is a process through which implicit rules are extracted from patterns of experience. The back-propagation (BP) learning algorithm typically trains the network by employing the deviation of the outputs from the corresponding desired values to correct and update the weights of the previous layer. More accurate results can be achieved with methods such as the Levenberg-Marquardt learning algorithm. The neural network models used in the manufacturing area are multilayer feed-forward networks (MLP), Radial Basis Function (RBF) networks (Unbehauen 2009), Adaptive Resonance Theory (ART) networks, Self-Organizing Maps (SOM), Hopfield networks and Boltzmann Machines (BM). Different gestures for the realization of human-robot interfaces can be recognized using neural network classifiers (D'Orazio 2014, 741-746). The 3D gesture recognition

Artificial Intelligence Drives Advances in Human-Robot…

system combines ANN with other classification methods (El-Baz and Tolba 2013, 1477-1484). Path planning for industrial robots that keeps a safe distance to a human can be implemented with machine learning using an artificial neural network (Dröder et al. 2018, 187-192). The robot control strategy combines a nearest-neighbour approach for path planning, clustering analysis and an ANN for obstacle detection. The artificial neural network learns the geometric features that distinguish an object from other objects. A Feed-Forward Network (FFN) achieved high performance in recognizing the learned components from shifted and rotated simulated models (digital twins), but the trained network was limited in recognizing the learned objects in real scenes (point clouds). The point clouds from a real scene (including a human, an industrial robot, etc.) are divided into groups by cluster analysis as a method of machine learning. If an object is recognized by the FFN, its position can be presented to the robot controller by means of the associated clusters.

Deep Learning (DL) is the fast-growing branch of machine learning: the science of training large artificial neural networks. Deep neural networks (DNNs) can have hundreds of millions of parameters, allowing them to model complex functions such as nonlinear dynamics (Pierson and Gashler 2017). Traditional machine learning algorithms are limited in processing data in its raw form; they need domain-specific expertise to extract features, and the extracted representations may still lose some of the data's hidden patterns (Liu et al. 2018, 3-8). Compared with traditional machine learning technologies, deep learning does not require careful engineering and expert-level domain knowledge to design a feature extractor (Figure 8).
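The back-propagation training described earlier can be sketched in a few lines of plain Python. This toy two-input network with one hidden layer uses the deviation of the output from the desired value to update the weights of the previous layer; the XOR task, the seed and all parameter values are illustrative choices, not a gesture dataset:

```python
import math, random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class TinyMLP:
    """2-H-1 feed-forward network trained with plain back-propagation."""
    def __init__(self, hidden=4):
        self.w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(hidden)]
        self.b1 = [0.0] * hidden
        self.w2 = [random.uniform(-1, 1) for _ in range(hidden)]
        self.b2 = 0.0

    def forward(self, x):
        self.h = [sigmoid(sum(w * xi for w, xi in zip(ws, x)) + b)
                  for ws, b in zip(self.w1, self.b1)]
        return sigmoid(sum(w * h for w, h in zip(self.w2, self.h)) + self.b2)

    def train_step(self, x, target, lr=0.5):
        y = self.forward(x)
        # Output-layer error term (sigmoid derivative is y * (1 - y))
        d_out = (y - target) * y * (1 - y)
        # Propagate the deviation back to update the previous layer's weights
        for j, h in enumerate(self.h):
            d_h = d_out * self.w2[j] * h * (1 - h)
            self.w2[j] -= lr * d_out * h
            for i in range(2):
                self.w1[j][i] -= lr * d_h * x[i]
            self.b1[j] -= lr * d_h
        self.b2 -= lr * d_out
        return (y - target) ** 2

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]  # XOR
net = TinyMLP()
first = sum(net.train_step(x, t) for x, t in data)   # loss in first epoch
for _ in range(3000):
    last = sum(net.train_step(x, t) for x, t in data)
print(first, ">", last)  # loss decreases as the weights are updated
```

Production systems would use an optimized library and a second-order method such as Levenberg-Marquardt, but the weight-update logic is the same.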
Deep learning implementations in manufacturing have been used for predictive analytics, different forms of recognition, control and object detection (which can be applied to robot-assisted tasks). For human-robot collaboration applications, deep learning algorithms can increase the flexibility of HRC systems. Using multimodal data for body posture recognition, hand motion recognition and voice recognition, robot control commands are generated and fed to the robot control interface (Liu et al. 2018, 3-8). Deep learning has different major network architectures for different task specifications, and within these major architectures there are many variants in how the layers and artificial neurons are composed. The principal deep learning methods are Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN), Long Short-Term Memory (LSTM) networks and Autoencoders. CNNs and LSTMs are the most useful for understanding human activities in human-robot collaboration in the manufacturing sector: convolutional neural networks are used for image data processing, LSTMs for sequence data modelling and Autoencoders for feature learning.

A Convolutional Neural Network (CNN) is a multi-layer feed-forward artificial neural network in which each neuron receives its input only from a subset of neurons of the previous layer (Zhao et al. 2019). CNNs consist of two main parts: feature detection layers and classification layers.

Figure 8. Comparison of machine learning (shallow learning) and deep learning techniques.

The feature detection layers alternately perform three types of operations on the data: convolution, pooling and rectified linear unit (ReLU) activation. The convolutional layers pass the raw input data through a set of convolutional filters, each of which activates certain local features. The pooling layers that follow extract the most significant, fixed-length features over sliding windows of the data by pooling operations such as max pooling and average pooling. Max pooling selects the maximum value from a region of the feature map as the most significant one; average pooling calculates the average value of the elements in the region. The rectified linear unit (ReLU) enables faster and more effective training by mapping negative values to zero and maintaining positive values. These operations are repeated over tens or hundreds of layers, with each layer learning to identify different features. After multi-layer feature learning, fully-connected layers (the classification layers) convert a two-dimensional feature map into a one-dimensional vector of K dimensions, where K is the number of classes that the network will be able to predict. This vector contains the probabilities that the raw input belongs to each class. The final layer of the CNN architecture uses a softmax function to provide the classification output. A typical CNN structure is shown in Figure 9.
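The three feature-detection operations and the final softmax can be illustrated directly in Python; the 4x4 feature map below is invented, and the pooling is non-overlapping for simplicity (real CNN layers typically use learned filters and strided windows):

```python
import math

def relu(m):
    """Rectified linear unit: map negatives to zero, keep positives."""
    return [[max(0.0, v) for v in row] for row in m]

def pool(m, size=2, op=max):
    """Non-overlapping pooling: op=max for max pooling, or pass an
    averaging function for average pooling."""
    return [[op([m[r + dr][c + dc] for dr in range(size) for dc in range(size)])
             for c in range(0, len(m[0]), size)]
            for r in range(0, len(m), size)]

avg = lambda xs: sum(xs) / len(xs)

def softmax(z):
    """Final classification layer: convert scores to class probabilities."""
    e = [math.exp(v) for v in z]
    s = sum(e)
    return [v / s for v in e]

fmap = [[1, -2, 3, 0],
        [4, 5, -1, 2],
        [-3, 1, 0, 6],
        [2, 2, 4, -5]]

print(pool(relu(fmap), op=max))   # max pooling after ReLU
print(pool(fmap, op=avg))         # average pooling
print(softmax([2.0, 1.0, 0.1]))   # class probabilities summing to 1
```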


The CNN-based deep learning method has proved to be an effective algorithm in 2D image classification and recognition tasks. A CNN can be used for voice recognition by transforming the audio files into 2D images, and for hand motion and human body posture recognition (Liu et al. 2018, 3-8). With a large variety of hand and body postures in the dataset, it is possible to increase the applicability of the human-robot interaction.

Long Short-Term Memory networks (LSTM) are a special kind of RNN, designed to preserve the error that is back-propagated through time and layers. The most important idea of the LSTM is the cell state, which lets information flow along it with only linear interactions. By maintaining a more constant error, LSTMs allow recurrent nets to continue learning over many time steps (over 1000), thereby opening a channel to link causes and effects remotely. LSTMs store information outside the normal flow of the recurrent network in a gated cell. The cell decides what to store and when to allow reads, writes and erasures by opening and closing gates, each with its own set of weights. Compared to the single recurrent structure of an RNN, the gates comprise a forget gate layer, an input gate layer and an output gate layer. The weights are adjusted through gradient-descent learning, which enables each recurrent unit to adaptively capture long-term dependencies at different time scales. The combination of the present input and the past cell state is fed not only to the cell itself, but also to each of its three gates, each with a different set of weights. The LSTM structure is presented in Figure 10. A deep learning-based method for fast and responsive human-robot handovers, which generates robot motion according to observations of human motion, is proposed in (Zhao 2018). This method learns an offline human-robot interaction model through a Recurrent Neural Network with Long Short-Term Memory units (LSTM-RNN).
The robot uses the learned network to respond appropriately to novel online human motions. The method is tested on pre-recorded data and in real-world human-robot experiments.
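A single step of a scalar LSTM cell, with the forget, input and output gates described above, can be sketched as follows; the weights are arbitrary illustrative values, not a trained model:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_cell(x, h_prev, c_prev, w):
    """One step of a scalar LSTM cell. w holds (input, recurrent, bias)
    weights for the forget (f), input (i), output (o) gates and the
    candidate cell value (g)."""
    gate = lambda k, act: act(w[k][0] * x + w[k][1] * h_prev + w[k][2])
    f = gate("f", sigmoid)      # forget gate: what to erase from the cell
    i = gate("i", sigmoid)      # input gate: what to write into the cell
    o = gate("o", sigmoid)      # output gate: what to read out
    g = gate("g", math.tanh)    # candidate cell content
    c = f * c_prev + i * g      # cell state: near-linear information flow
    h = o * math.tanh(c)        # hidden state exposed to the next layer
    return h, c

# Hypothetical shared weights; run a short input sequence through the cell
w = {k: (0.5, 0.5, 0.0) for k in "fiog"}
h, c = 0.0, 0.0
for x in [1.0, -1.0, 0.5]:
    h, c = lstm_cell(x, h, c, w)
print(round(h, 4), round(c, 4))
```

Note how the present input and the past state feed the cell and all three gates, exactly as described in the text; a real LSTM layer repeats this with vector-valued states and learned weight matrices.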

Figure 9. Convolutional Neural Networks (CNN) architecture.


Figure 10. LSTM architecture.

Optimization Techniques in Human-Robot Collaboration

AI techniques are increasingly used to solve optimization problems in engineering. Optimization methodologies can be applied to various practical problems such as robot trajectory planning and collision avoidance, optimal control, task allocation and human state estimation (Gambier and Badreddin 2007). An optimization task is a computing process in which an intelligent search is performed in a large-dimensional space of many decision variables to locate a point that minimizes or maximizes a pre-specified objective function of those variables (Chaudhuri and Deb 2010, 496-511). Numerous methods, such as Simulated Annealing (SA), Tabu Search (TS), greedy approaches and hill-climbing procedures, can be used to optimize such objective functions. Evolutionary algorithms are metaheuristic optimization algorithms that use mechanisms inspired by biological evolution, such as selection, recombination and mutation, to generate and iteratively update an initial set of candidate solutions. Optimization techniques based on nature-inspired heuristics, such as the Genetic Algorithm (GA), Particle Swarm Optimization (PSO), Ant Colony Optimization (ACO), Artificial Bee Colony (ABC) or Firefly algorithms, are able to optimize various kinds of manufacturing systems and overcome the limitations of traditional optimization techniques. Particle Swarm Optimization (PSO) and the Genetic Algorithm (GA) are two widely used evolutionary computation (EC) methods (Kubota and Adji Sulistijono 2006, 2204-2209).


Genetic Algorithm (GA)

The Genetic Algorithm is a computerized search and optimization algorithm based on the mechanics of natural genetics and natural selection. GA is a search technique for global optimization in a search space. It is based on a direct analogy to Darwinian natural selection and mutation in biological reproduction, and belongs to the category of randomized heuristics. Its operators have been conceived as abstractions of natural genetic mechanisms such as crossover and mutation.

Pseudocode of GA:

Generate and evaluate the initial population P(t), t = 0.
Repeat the following steps until the stopping condition is satisfied:
o Select chromosomes from the current population.
o Apply crossover to the parent chromosomes and produce two offspring.
o Apply the mutation operator to the offspring.
o Copy the offspring to population P(t + 1).
o Evaluate P(t + 1).
o Replace the worst chromosome of P(t + 1) with the best chromosome found so far.
o Set t = t + 1.
Return the best chromosome found.
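The pseudocode above can be turned into a minimal runnable sketch. The bit-string "onemax" fitness (count of 1-bits) and all parameter values below are illustrative choices, and tournament selection stands in for the unspecified selection operator:

```python
import random

random.seed(1)

def genetic_algorithm(fitness, n_bits=16, pop_size=20, generations=60,
                      p_mut=0.05):
    """Minimal GA following the pseudocode: tournament selection,
    one-point crossover, bit-flip mutation, elitist replacement."""
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        def select():
            a, b = random.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        children = []
        while len(children) < pop_size:
            p1, p2 = select(), select()
            cut = random.randrange(1, n_bits)            # one-point crossover
            c1, c2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            for c in (c1, c2):                           # bit-flip mutation
                children.append([1 - g if random.random() < p_mut else g
                                 for g in c])
        pop = children[:pop_size]
        best = max(pop + [best], key=fitness)
        # Replace the worst chromosome with the best found so far (elitism)
        pop[pop.index(min(pop, key=fitness))] = best[:]
    return best

onemax = sum   # fitness: number of 1-bits in the chromosome
best = genetic_algorithm(onemax)
print(onemax(best))  # typically reaches the optimum of 16
```

In a task-allocation setting the chromosome would instead encode which agent (human or robot) performs each task, with the line-cost and ergonomic criteria from the text as the fitness function.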

In assembly, a traditional formulation for establishing a proper task allocation is the Assembly Line Balancing Problem (ALBP). Over the years, different methods for ergonomic assessment have been included in ALBP modelling. A genetic algorithm approach to the ALBP in the case of human-robot collaborative work is presented in (Dalle Mura and Dini 2019, 1-4). The aim is the minimization of: 1) the assembly line cost, evaluated according to the number of workers and equipment on the line, including collaborative robots; 2) the number of skilled workers on the line; and 3) the energy load variance among workers, based on their energy expenditure, their physical capabilities and their level of collaboration with robots. The distribution of tasks in human-robot teams (HRTA: Human-Robot Task Allocation) brings significantly greater complexity to a workplace. In workplaces where humans and robots work in teams, the tasks have to be allocated in an intelligent way. Task scheduling between humans and robots is very difficult because the two agents differ dramatically in their capabilities. A genetic algorithm-based optimization method for human-robot task allocation in a given workplace is presented in (Banziger et al. 2018, 1-14). It is able to consider the highly dynamic interaction between the worker and the robot in the same workspace. The optimization principle allows the separation of the two optimization problems, the task distribution and the task sequence optimization. Both problems can be solved independently, thereby increasing the transparency and interpretability of the results.

Particle Swarm Optimization (PSO)

PSO is inspired by the social behavior of a flock of migrating birds trying to reach an unknown destination (Kubota and Adji Sulistijono 2006, 2204-2209). In PSO, each solution, called a particle, is a bird in the flock. Particles fly through the search space with velocities that are dynamically adjusted according to their historical behavior; the particles therefore tend to fly towards better search areas over the course of the search process. The process is initialized with a group of N random particles (solutions). The i-th particle is represented by its position as a point $X_i = (x_{i,1}, x_{i,2}, \dots, x_{i,d}) \in S$ in the d-dimensional search space S. Throughout the process, each particle monitors three values: its current position $X_i$; the best position it has reached in previous cycles, $pbest_i = (pbest_{i,1}, pbest_{i,2}, \dots, pbest_{i,d}) \in S$; and its flying velocity $V_i = (v_{i,1}, v_{i,2}, \dots, v_{i,d})$. Each particle updates its velocity to catch up with the best particle gbest, and then its position, as:

$X_i(t + 1) = X_i(t) + V_i(t + 1)$    (1)

$V_i(t + 1) = V_i(t) + c_1 r_{i,1}(t)(pbest_i(t) - X_i(t)) + c_2 r_{i,2}(t)(gbest(t) - X_i(t))$    (2)

PSO pseudocode:

For each particle
    Initialize the particle
Do
    For each particle
        Calculate the fitness value
        If the fitness value is better than its best fitness value in history (pbest)
            Set the current value as the new pbest
    Choose the particle with the best fitness value of all particles as gbest
    For each particle
        Calculate the particle velocity according to equation (2)
        Update the particle position according to equation (1)
While the maximum number of iterations or the minimum error criterion is not attained

In iteration t, the best position of the whole swarm and of the particle itself are denoted gbest(t) and $pbest_i(t)$, respectively; $c_1$ and $c_2$ are two positive constants (acceleration coefficients), which denote the cognitive and social parameters, respectively; $r_{i,1}$ and $r_{i,2}$ are two random numbers in the range [0, 1]. In order to interact with a human, the robot requires the capability of visual perception, and should perform moving-object extraction for the visual perception used in interaction with humans. Particle swarm optimization and the genetic algorithm have been applied to human head tracking for a partner robot in order to reduce time cost (Kubota and Adji Sulistijono 2006, 2204-2209).
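Equations (1) and (2) can be sketched directly; the sphere objective is a toy stand-in for a real fitness function, and the velocity clamp is an illustrative stabiliser that is common in practice but not part of the basic equations:

```python
import random

random.seed(2)

def pso(f, dim=2, n_particles=15, iters=150, c1=2.0, c2=2.0, vmax=1.0):
    """Minimal PSO: velocity update with cognitive (pbest) and social
    (gbest) terms per equation (2), then position update per (1)."""
    X = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in X]
    gbest = min(pbest, key=f)
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                v = (V[i][d]
                     + c1 * r1 * (pbest[i][d] - X[i][d])   # cognitive term
                     + c2 * r2 * (gbest[d] - X[i][d]))     # social term
                V[i][d] = max(-vmax, min(vmax, v))         # clamp (stabiliser)
                X[i][d] += V[i][d]                         # equation (1)
            if f(X[i]) < f(pbest[i]):
                pbest[i] = X[i][:]
        gbest = min(pbest, key=f)
    return gbest

sphere = lambda x: sum(v * v for v in x)   # toy objective to minimize
best = pso(sphere)
print(sphere(best))  # close to 0
```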

Multi-Objective Optimization

Although evolutionary algorithms have conventionally focused on optimizing single objective functions, most practical problems in engineering are inherently multi-objective (Das and Panigrahi 2009): instead of a unique optimal solution, there is a set of mathematically incomparable optimal solutions, none of which is preferable to the others. Multi-objective optimization, also known as multi-criteria optimization or Pareto optimization, involves a number of objective functions which are to be either minimized or maximized subject to a number of constraints and variable bounds. The advantage of multi-objective optimization algorithms over single-objective optimization is that they can find several optimal solutions in a single run, representing different compromises between the criteria. The solution of a Multi-Objective Optimization (MOO) problem is a family of Pareto-optimal points, where any improvement in one objective results in the degradation of one or more of the other objectives (Rostami et al. 2013). The most important multi-objective evolutionary algorithms are MOGA (Multiple Objective Genetic Algorithm), NSGA-II and NSGA-III (Non-dominated Sorting Genetic Algorithm), SPEA2 (Strength Pareto Evolutionary Algorithm) and NPGA-II (Niched Pareto Genetic Algorithm) (Konak et al. 2006, 992-1007). For example, a number of optimization techniques have been proposed to give a robot hand human-like motion, and the multi-objective evolutionary algorithm (MOEA) is one of them; the primary reason is the ability of an MOEA to find multiple Pareto-optimal solutions in a single run (Zulkifli et al. 2014). Manual disassembly has low efficiency and high labor cost, while robotic disassembly is not flexible enough to handle complex disassembly tasks. Therefore, human-robot collaboration for disassembly (HRCD) is useful for flexibly and efficiently completing the disassembly process in remanufacturing.
A modified discrete Bees algorithm based on Pareto optimality (MDBA-Pareto) is proposed to search for optimal solutions under three objective functions (minimizing disassembly time, disassembly cost and disassembly difficulty) in the disassembly process (Xu 2020). The results show that the proposed method can plan the disassembly process for human-robot collaboration in remanufacturing using a multi-objective optimization approach.
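The notion of mutually incomparable Pareto-optimal solutions can be made concrete with a small sketch; the (time, cost) pairs below are invented stand-ins for candidate disassembly plans, with both objectives minimized:

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective (minimization)
    and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    """Keep the mutually incomparable solutions: those dominated by no
    other point. Any improvement in one objective among these comes at
    the cost of another."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical (time, cost) pairs for candidate disassembly plans
plans = [(3, 9), (5, 4), (4, 8), (6, 3), (7, 7), (5, 9)]
print(pareto_front(plans))  # → [(3, 9), (5, 4), (4, 8), (6, 3)]
```

Algorithms such as NSGA-II combine this non-dominated sorting with the genetic operators shown earlier to evolve the whole front in a single run.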


CONCLUSION

Human-robot collaboration has become a key technology for the factory of the future. In recent years, AI has driven further advances in robotic solutions, introducing flexibility and learning capabilities into previously rigid applications. At the same time, artificial intelligence turns the human-robot workplace into a learning and anticipative system that continuously optimises itself. Intelligent planning and control algorithms are needed to organize the work of hybrid teams of humans and robots. Through human-robot collaboration, the intelligence and decision-making powers of humans are matched by the strength and precision of robots, allowing ever more complex and delicate possibilities. The future of human-robot collaboration is intuitive human-robot interaction, where robots will receive commands directly from human minds. The opportunities for human-robot collaboration are endless, if we know how to use these exciting possibilities in the right way.

REFERENCES

Attolico, C., Reno, V., Guaragnella, C., D'Orazio, T. and Cicirelli, G. (2014). "A Review of Gesture Recognition Approaches for HRI." In Workshop on Real-Time Gesture Recognition for Human Robot Interaction.
Banjanovic-Mehmedovic, L., Karic, S. and Mehmedovic, F. (2011). "Optimal Search Strategy of Robotic Assembly based on Neural Vibration Learning." Journal of Robotics, special issue "Cognitive and Neural Aspects in Robotics with Applications," 2011.
Banjanovic-Mehmedovic, L., Mehmedovic, F., Bosankic, I. and Karic, S. (2013). "Genetic Re-planning Strategy of Wormhole Model using Neural Learned Vibration Behavior in Robotic Assembly." AUTOMATIKA - Journal for Control, Measurement, Electronics, Computing and Communications, Volume 54, No. 4, 380-512.
Banjanovic-Mehmedovic, L., Halilovic, E., Bosankic, I., Kantardzic, M. and Kasapovic, S. (2016). "Autonomous Vehicle-to-Vehicle (V2V) Decision Making in Roundabout using Game Theory." International Journal of Advanced Computer Science and Applications (IJACSA), Vol. 7, No. 8, 292-298.
Banziger, Timo, Kunz, Andreas and Wegener, Konrad. (2018). "Optimizing Human-Robot Task Distribution using a Simulation Tool based on Standardized Work." Journal of Intelligent Manufacturing, 1-14.


Bilberg, Arne and Malik, Ali Ahmad. (2019). "Digital twin driven human-robot collaborative assembly." CIRP Annals - Manufacturing Technology, 68, 499-502.
Camerer, C. F., Ho, T. H. and Chong, J. K. (2001). Behavioral Game Theory: Thinking, Learning, and Teaching. Pasadena: California Institute of Technology.
Chaudhuri, S. and Deb, K. (2010). "An interactive evolutionary multi-objective optimization and decision-making procedure." Applied Soft Computing, Volume 10, Issue 2, 496-511.
Dalle Mura, Michela and Dini, Gino. (2019). "Designing assembly lines with humans and collaborative robots: A genetic approach." CIRP Annals - Manufacturing Technology, Volume 68, Issue 1, 1-4.
Das, S. and Panigrahi, B. K. (2009). "Multi-Objective Evolutionary Algorithms." Encyclopedia of Artificial Intelligence.
Dimeas, F. and Aspragathos, N. (2014). "Fuzzy Learning Variable Admittance Control for Human-Robot Cooperation." In IEEE International Conference on Intelligent Robots and Systems, 1011-1016.
D'Orazio, T., Attolico, G., Cicirelli, G. and Guaragnella, C. (2014). "A Neural Network Approach for Human Gesture Recognition with a Kinect Sensor." In ICPRAM, 741-746.
Dröder, K., Bobka, P., Germann, T., Gabrie, F. and Dietrich, F. (2018). "A Machine Learning-Enhanced Digital Twin Approach for Human-Robot Collaboration." In 7th CIRP Conference on Assembly Technologies and Systems, Volume 76, 187-192.
El-Baz, A. and Tolba, A. (2013). "An efficient algorithm for 3D hand gesture recognition using combined neural classifiers." Neural Computing and Applications, 22, 1477-1484.
Faber, Marco, Bützler, Jennifer and Schlick, Christopher M. (2015). "Human-robot cooperation in future production systems: Analysis of requirements for designing an ergonomic work system." Procedia Manufacturing, 3, 510-517.
Flores-Garcia, E., Wiktorsson, M., Jackson, M. and Bruch, J. (2015). "Simulation in the production system design process of assembly systems." In Proceedings of the 2015 Winter Simulation Conference, 2124-2135.
Gambier, A. and Badreddin, E. (2007). "Multi-objective Optimal Control: An Overview." In 16th IEEE International Conference on Control Applications, Part of the IEEE Multi-conference on Systems and Control.
Hayne, Rafi H. (2015). Exploring Human-Robot Interaction in Collaborative Tasks. Worcester Polytechnic Institute, E-project-043015-122548.
Hinton, Geoffrey, Deng, Li, Yu, Dong, Dahl, George E., Mohamed, Abdel-rahman, Jaitly, Navdeep, Senior, Andrew, Vanhoucke, Vincent, Nguyen, Patrick, Sainath, Tara N. and Kingsbury, Brian. (2012). "Deep Neural Networks for Acoustic Modeling in Speech Recognition: The Shared Views of Four Research Groups." IEEE Signal Processing Magazine, 82-97.
https://exoskeletonreport.com/2016/04/exoskeletons-for-industry-and-work/, accessed January 2020.
Jarrasse, N., Charalambous, T. and Burdet, E. (2013). "A framework to describe, analyze and generate interactive motor behaviors." PLoS ONE, Vol. 7, No. 11.
Konak, A., Coit, D. W. and Smith, A. E. (2006). "Multi-objective optimization using genetic algorithms: A tutorial." Reliability Engineering and System Safety, 91, 992-1007.
Kubota, Naoyuki and Sulistijono, Indra Adji. (2006). "A Comparison of Particle Swarm Optimization and Genetic Algorithm for Human Head Tracking." In Joint 3rd International Conference on Soft Computing and Intelligent Systems and 7th International Symposium on Advanced Intelligent Systems, 2204-2209.
Leyton-Brown, K. and Shoham, Y. (2008). Essentials of Game Theory: A Concise, Multidisciplinary Introduction. Morgan & Claypool Publishers.
Liu, H. and Wang, L. (2018). "Gesture recognition for human-robot collaboration: A review." International Journal of Industrial Ergonomics, 68, 355-367.
Liu, H., Fang, T., Zhou, T., Wang, Y. and Wang, L. (2018). "Deep learning-based Multimodal Control Interface for Human-Robot Collaboration." Procedia CIRP, 72, 3-8.
Mainprice, Jim, Hayne, Rafi and Berenson, Dmitry. (2015). "Predicting Human Reaching Motion in Collaborative Tasks Using Inverse Optimal Control and Iterative Replanning." In Proceedings of the IEEE International Conference on Robotics and Automation, 885-892.
Malik, A. A. and Bilberg, A. (2018). "Digital Twins of Human Robot Collaboration in a Production Setting." Procedia Manufacturing, 17, 278-285.
Mauri, A., Lettori, J., Fusi, G., Fausti, D., Mor, M., Braghin, F., Legnani, G. and Roveda, L. (2019). "Mechanical and Control Design of an Industrial Exoskeleton for Advanced Human Empowering in Heavy Parts Manipulation Tasks." Robotics, 8, 65.
Maurice, Pauline, Padois, Vincent, Measson, Yvan and Bidaud, Philippe. (2017). "Human-oriented design of collaborative robots." International Journal of Industrial Ergonomics, Volume 57, 88-102.
Mourtzis, D., Doukas, M. and Bernidaki, D. (2014). "Simulation in Manufacturing: Review and Challenges." Procedia CIRP, 25, 213-229.
Osborne, M. (2003). An Introduction to Game Theory. Oxford University Press.
Patsadu, O., Nukoolkit, C. and Watanapa, B. (2012). "Human gesture recognition using Kinect camera." In 2012 International Joint Conference on Computer Science and Software Engineering (JCSSE), IEEE, 28-32.
Pierson, Harry and Gashler, Michael S. (2017). "Deep Learning in Robotics: A Review of Recent Research." Advanced Robotics.


Pohammad, B. L., Kaedi, M., Ghasem-Aghaee, N. and Oren, T. I. (2009). "Anger evaluation for fuzzy agents with dynamic personality." Mathematical and Computer Modelling of Dynamical Systems, 15 (6), 535-553.
Rostami, S., Delves, P. and Shenfield, A. (2013). "Evolutionary Multi-Objective Optimisation of an Automotive Active Steering Controller." Manchester Metropolitan University Research Symposium.
Schleich, B., Nabil, A., Luc, M. and Wartzack, S. (2017). "Shaping the digital twin for design and production engineering." CIRP Annals, Volume 66, Issue 1, 141-144.
Shah, A., Jan, S., Khan, I. and Qamar, S. (2012). "An Overview of Game Theory and its Applications in Communication Networks." International Journal of Multidisciplinary Sciences and Engineering, Vol. 3, No. 4, 5-11.
Sudhakara, Pandian R. and Modrák, Vladimír. (2009). "Possibilities, Obstacles and Challenges of Genetic Algorithm in Manufacturing Cell Formation." Advanced Logistic Systems, 63-70.
Turocy, T. and von Stengel, B. (2001). Game Theory. London: Academic Press, London School of Economics.
Unbehauen, H. (2009). "Identification of Nonlinear Systems." In Control Systems, Robotics and Automation, Vol. VI, Encyclopedia of Life Support Systems (EOLSS).
Wandile, S. (2012). A Review of Game Theory. University of Stellenbosch.
Xu, Wenjun, Tang, Quan, Liu, Jiayi, Liu, Zhihao, Zhou, Zude and Pham, Duc Truong. (2020). "Disassembly sequence planning using discrete Bees algorithm for human-robot collaboration in remanufacturing." Robotics and Computer-Integrated Manufacturing, Volume 62, April 2020.
Yanan, L., Peng, T. K., Rui, Y., Liang, C. W. and Wu, Y. (2016). "A framework of human-robot coordination based on game theory and policy iteration." IEEE Transactions on Robotics, 32 (6), 1408-1418.
Yang, X., Behroozi, M. and Olatunbosun, O. A. (2014). "A Neural Network Approach to Predicting Car Tyre Micro-Scale and Macro-Scale Behaviour." Journal of Intelligent Learning Systems and Applications, 6, 11-20.
Zhang, X. and Lu, J. D. (2012). "Fuzzy Control for Heat Recovery Systems of Cement Clinker Cooler." Journal of Theoretical and Applied Information Technology, Vol. 42, No. 2.
Zhao, R., Jan, R., Cheng, Z. and Wang, P. (2019). "Deep learning and its applications to machine health monitoring." Mechanical Systems and Signal Processing.
Zhao, X., Chumkamon, S., Duan, S., Rojas, J. and Pan, J. (2018). "Collaborative Human-Robot Motion Generation using LSTM-RNN." In IEEE-RAS Conference on Humanoid Robots (Humanoids).


Zhihao, L., Liu, Q., Xu, W., Duc, Z. Z. and Pham, T. (2018). "Human-Robot Collaborative Manufacturing using Cooperative Game: Framework and Implementation." Procedia CIRP, Vol. 72, 87-92.
Zulkifli, M., Kitan, M. and Capi, G. (2014). "Humanoid robot arm performance optimization using multi objective evolutionary algorithm." International Journal of Control Automation and Systems, 12 (4).

In: Industrial Robots: Design, Applications and Technology ISBN: 978-1-53617-779-4 Editors: Isak Karabegović et al. © 2020 Nova Science Publishers, Inc.

Chapter 10

THE STUDY ON KEY TECHNOLOGIES OF COLLABORATIVE ROBOT, SENSORLESS APPLICATIONS AND EXTENSIONS

Jianjun Yuan1,*, Yingjie Qian2, Liming Gao3, Zhaojiang Yu1, Hanyue Lei2, Sheng Bao1 and Liang Du4

1 Shanghai Robotics Institute, School of Mechatronic Engineering and Automation, Shanghai University, Shanghai, China
2 Robotic Institute, School of Mechanical Engineering, Shanghai Jiaotong University, Shanghai, China
3 Mechanical Engineering, Penn State University, Pennsylvania, US
4 Department of Robotics, Ritsumeikan University, Shiga, Japan

ABSTRACT This chapter discusses sensorless dynamic control and related applications of collaborative robots. The idea of sensorless dynamic control means that no additional force/torque sensors nor joint torque sensors are involved. Above all, dynamic control requires estimation of motor’s output torque. Two methods for torque estimation are proposed which are current-based estimation method and the more novel one, double-encoder-based method. The former one is based on the linear relationship between motor’s current and output torque while the latter one uses the speed reducer in a robot joint as a torque indicator by considering angle deformation. The most important aspect is joint friction which directly influences the accuracy of external torque estimation and the performance of applications. In our work, a comprehensive friction model, considering joint angular velocity, load effect and temperature has been proposed for current-based torque estimation method and validated * Corresponding Author’s E-mail: [email protected].

226

Jianjun Yuan, Yingjie Qian, Liming Gao et al. through experiments. Moreover, a different friction model for double-encoder-based method is also studied. Two basic applications of dynamic control are explained. The first one is collision detection whose purpose is to detect any collision happened in its early phase with enough sensitivity and stability. The second one is kinesthetic teaching together with its extension, Cartesian teaching. It allows human operators to drag the robot to any position directly rather than with the help of control pendant. Original kinesthetic teaching is performed in joint space while Cartesian teaching in Cartesian space of end-effector. In Cartesian teaching, some degrees of freedom such as orientation may be restricted according to practical requirement. A more advanced application is also studied which is force control with surface tracking. Here, an improved sensorless hybrid controller for constant force control along normal direction is proposed for applications including polishing, milling and deburring. With the help of dynamically consistent generalized inverse matrix, external torque at every joint can be converted to external force/torque at the end-effector. The underlying force control strategy is the integration of impedance control model and explicit force control. The novel improvement is the real-time prediction algorithm of surface’s shape profile and normal direction without any prior knowledge. So, this force controller has great adaptiveness to arbitrary unknown surfaces. The idea of sensorless dynamic control originates from collaborative robots yet can also be transplanted to industrial robots. Since industrial robots have wider applications than collaborative robots, this transplantation has both academic value and practical merits. The distinction between industrial robots and collaborative robots, and corresponding different control strategies are discussed. 
All the applications and studies are validated through practical experiments and simulations, and proven to be functional and effective.

Keywords: dynamic control, friction model, collision detection, kinesthetic teaching, force control, collaborative robot, collaborative application of industrial robot

INTRODUCTION

Nowadays, robots already play an important role in many fields such as manufacturing processes and industrial automation. The most commonly used robots are industrial robots, featuring large payload, high stiffness, and fast motion. Because of these features, such robots must be isolated from human operators for safety reasons. However, many tasks can be accomplished better in collaboration with humans, such as tasks that can only be partially automated because a fully automated solution is uneconomical or too complex, and tasks that must be performed at non-ergonomic workstations. Since robots excel at simple but repetitive tasks, while humans, on the contrary, have unique cognitive skills for understanding and adapting to change, the collaboration of human and robot can achieve better performance and wider application (Albu‐Schäffer et al. 2007, 376). Hence, in these cases, human operators have to share their workspace with robots and interact with them. These demands cannot be fulfilled by present industrial robot technology.

The Study on Key Technologies of Collaborative Robot …

For these purposes, collaborative robots have been proposed. They adopt a user-friendly appearance, lighter self-weight, and slower motion speed in order to lower the psychological barrier of human operators and reduce potential risk (Albu‐Schäffer et al. 2007, 376). Beyond mechanical design, the idea of human-robot collaboration and its safety are also advantages of collaborative robots. These features are guaranteed by robotic control, particularly dynamic control (Wang, S. et al. 2016). Unlike conventional position control, which emphasizes robotic motion control such as positioning accuracy and speed tracking ability, dynamic control deals with the interaction force between the robot and the environment (including the human operator) and can implement new functions, for instance collision detection, kinesthetic teaching and force tracking (Wang, S. et al. 2016, 923; Wang N. et al. 2017, 44; Yuan, J. et al. 2019, 489). There are many methods to achieve dynamic control. Most of them require additional sensors such as force/torque sensors (FT sensors) at the end-effector or base, or joint torque sensors (Ott, C. and Nakamura, Y. 2009, 3244; Albu‐Schäffer et al. 2007, 376). Yet these sensors cause problems including high vulnerability, increased hardware cost and reduced payload capacity. Hence, the study of sensorless dynamic applications has both academic and practical merit (Yuan, J. et al. 2018, 216; Gao, L. et al. 2017, 3027; Yuan, J. et al. 2019, 489). Our work mainly focuses on dynamic studies of a 7 Degrees-of-Freedom (DOF) collaborative robot (Figure 1). One of the keys to sensorless dynamic control is the estimation of the motors' output torque (Wang, S. et al. 2016, 923). A common method is to use the motors' current signal, which is proportional to their output torque. This is called the current-based torque estimation method (Wang, S. et al. 2016, 923; Yuan, J. et al. 2018, 216).
The ratio between the current and the torque is called the torque constant. Although the torque constant of a motor can be found in its datasheet, it is still necessary to identify its actual value through experiments for higher accuracy. The other, more novel method is based on the deformation of the joints under external payload. This method involves a double-encoder structure and is called the double-encoder-based torque estimation method (Han, Z., Yuan, J., and Gao, L. 2018, 1852; Wang N. 2017; Wang S. 2017). As shown in Figure 2, one incremental encoder is mounted on the input shaft of the speed reducer, and one absolute encoder on the output shaft. The double-encoder structure makes full use of the elasticity of the speed reducer, sharing a similar principle with joint torque sensors (Hirzinger, G. et al. 2002, 1710). Compared with the current-based method, the use of joint deformation brings several advantages, including quicker response and more stable static readings.


Figure 1. 7 Degrees-of-Freedom (DOF) collaborative robot.

Figure 2. Mechanical Structure of robot joint.

After obtaining the output torque, the robot's dynamic parameters can be identified. The most challenging problem is the identification of the frictional parameters (Goto, S. et al. 2007, 627). The primary friction model is the combination of Coulomb friction and first-order viscous friction. Yet through experiments, more complex frictional behaviors have been observed, such as the non-linearity of viscous friction, the Stribeck effect, and the influence of temperature and payload on friction (Stribeck, R. 1902, 1342). Present friction models, such as the original Stribeck model and the LuGre model, are either too complicated to apply in real time or incapable of covering all these factors (Stribeck, R. 1902, 1342; Johanastrom, K. and Canudas-De-Wit, C. 2008, 101). In our work, a comprehensive friction model for collaborative robots with harmonic reducers has been proposed and validated through experiments (Gao, L. et al. 2017, 3027; Gao, L., Yuan, J., and Qian, Y., 2019, 699). Building on this preparatory work, some basic applications of dynamic control can be implemented in a sensorless manner.


The first application is collision detection, which is crucial for safety (Lu, S. et al. 2005, 3796). Here, more attention is paid to early detection rather than to post-event strategy. Once the output torque of the joints and their desired dynamic torque have been obtained, the external torque at each joint can be calculated accordingly. The robot must be sensitive enough to hazardous external forces while avoiding misjudgments, which may be caused by sudden acceleration or deceleration and by minor inaccuracies in the friction model. The threshold on external torque is set through various experiments and by referring to ISO 15066 (Wang N. et al. 2017, 44; Wang N. 2017). The second application is kinesthetic teaching, a function that improves the efficiency and flexibility of teaching (Gleeson, B. et al. 2015, 95). Conventional teaching is usually performed with teaching pendants or panels (Figure 3). However, this process is time-consuming and places high demands on human operators (Lee, H. M., and Kim, J. B. 2013, 648). Therefore, a more intuitive method called kinesthetic teaching has been proposed, with which human operators can directly drag any part of the robot to a desired position. Studies show that the torque-based kinesthetic teaching method outperforms the position-based method (You, Y., Zhang, Y. and Li, C. 2014, 10; Yuan, J. et al. 2018, 216). By switching the motors into torque mode, the motor current can be assigned to fully compensate the gravity torque and partially compensate the friction torque. An ideal kinesthetic teaching is a balance between effort-saving and stability, so the proportion of compensated friction torque needs to be chosen carefully. Furthermore, the transition from static to dynamic friction also needs special treatment. Several supplementary strategies, namely limits on joint angles and velocity, are provided for safety (Yuan, J. et al. 2018, 216; Wang S. 2017; Gao, L., Yuan, J., and Qian, Y., 2019, 699).
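As a minimal illustration of this residual-based test, the sketch below computes the external joint torque as the residual of the measured output torque minus the modeled dynamic and friction torques, and flags a collision when any residual exceeds a per-joint threshold. All numbers are hypothetical, not the experimentally tuned values used in the chapter.

```python
import numpy as np

def detect_collision(tau_out, tau_dyn, tau_fric, threshold):
    """Flag a collision when any joint's external-torque residual
    (measured output torque minus modeled dynamic and friction torque)
    exceeds its threshold. All inputs are per-joint vectors in Nm;
    the values used below are purely illustrative."""
    tau_ext = np.asarray(tau_out) - np.asarray(tau_dyn) - np.asarray(tau_fric)
    return bool(np.any(np.abs(tau_ext) > np.asarray(threshold))), tau_ext

# No contact: the residual stays inside the threshold band.
ok, _ = detect_collision([10.0, 5.0], [9.8, 5.1], [0.1, -0.2], [2.0, 2.0])
# Contact on joint 1: a 5 Nm residual exceeds the 2 Nm threshold.
hit, _ = detect_collision([15.0, 5.0], [9.8, 5.1], [0.2, -0.2], [2.0, 2.0])
```

In practice the threshold band must be wide enough to absorb friction-model error during acceleration, which is exactly the sensitivity/stability trade-off described above.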

Figure 3. (Left) Teaching pendant, (Right) Conventional teaching method.

The aforementioned kinesthetic teaching is on joint-control level. Yet in some cases, during the teaching process, the robot is desired to be stiffer in some Cartesian directions or orientations while compliant in the others. For instance, the robot’s end tool might be supposed to have only translational motion while no rotational motion. This can be


achieved by applying the teaching algorithm at the Cartesian level, called Cartesian-space kinesthetic teaching (Wang S. 2017). These basic applications greatly enhance the dynamic performance of the robot. What makes collaborative robots even more useful in manufacturing are advanced dynamic control and force control (Mei, C., Yuan, J., and Guan, R. 2018, 1864; Gao, L. 2019; Yuan, J. et al. 2019, 489). Although force control has been studied theoretically for a long time, wide practical application only began with collaborative robots. Force control means that the force exerted by the robot on the environment is controlled. One typical application is surface polishing, where the force exerted on the machined surface is supposed to be constant, making the process a constant force tracking problem. The external torques of the joints can be converted to a force vector in Cartesian space by using the inverse of the Jacobian matrix. Here, instead of the robot's joint-torque loop (also called the joint-current loop), the position loop was adopted for convenience. Impedance control is one of the most common and effective force control models (Hogan, N. 1984, 304). The proposed force controller integrates impedance control and explicit force control into a hybrid force control method that can eliminate the steady-state error. Furthermore, the shape profile and the normal direction of the surface are predicted in real time from historical position and force information of the robot's end-effector. These predictions are used in the calculation of the reference trajectory and the adjustment of the end-effector's pose. In our work, two versions of force controllers have been proposed for constant force tracking along the gravitational direction and along the normal direction (Gao, L. 2019; Yuan, J. et al. 2019, 489).
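The torque-to-wrench conversion mentioned above can be sketched as follows. Since external joint torques satisfy tau = J^T F, a plain Moore-Penrose pseudoinverse of J^T recovers the Cartesian wrench (the chapter's dynamically consistent generalized inverse is a weighted refinement of this idea). The 6x7 Jacobian below is an arbitrary stand-in, not the robot's actual Jacobian.

```python
import numpy as np

# tau_ext = J^T F  =>  F = pinv(J^T) @ tau_ext.
# J here is a random full-row-rank 6x7 matrix used only for illustration.
rng = np.random.default_rng(42)
J = rng.normal(size=(6, 7))            # stand-in Jacobian of a 7-DOF arm
F_true = np.array([1.0, -2.0, 5.0, 0.1, 0.0, -0.3])   # assumed wrench
tau_ext = J.T @ F_true                 # joint-space image of the wrench
F_est = np.linalg.pinv(J.T) @ tau_ext  # recover the Cartesian wrench
```

Because J^T has full column rank here, the pseudoinverse acts as an exact left inverse and F_est reproduces F_true; near singular configurations the recovery degrades, which is one reason weighted (dynamically consistent) inverses are preferred.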

Figure 4. A typical 6DOF industrial robot.


As analyzed above, the deep study of robotic dynamics enables innovative and useful functions. However, the studies mentioned so far only concern the dynamics of collaborative robots. Although the dynamics of collaborative robots are simpler than those of industrial robots due to their special designs, industrial robots still outperform collaborative robots in pragmatic respects, including higher payload, faster motion speed and better positioning accuracy. Moreover, industrial robots still rank first in market share. Hence, it is both meaningful and pressing to transplant the related dynamic algorithms from collaborative robots to industrial robots. This leads to a brand-new research field, called the collaborative application of industrial robots, which combines the advantages of both kinds of robots while mitigating their weaknesses (Han, Z., 2019). Similar to the dynamic study of collaborative robots, the collaborative application of industrial robots also starts from preparatory work including torque estimation, friction modeling and parameter identification, though some distinctions remain. In our work, a typical 6DOF industrial robot (Figure 4) was used to validate this idea and has realized the basic dynamic applications. This successful transplantation proves that, without loss of generality, the proposed study on dynamic control and its applications can be widely replicated on all articulated robots, including not only collaborative robots but also industrial robots. This chapter is organized as follows. Section 2 reviews previous studies related to dynamic control methods and applications. Section 3 first introduces the dynamic model for articulated robots and states the key problems of sensorless dynamic control; then, two different methods for output torque estimation are elaborated, followed by the inertial parameter identification method.
Section 4 studies the behavior of friction in depth, considering multiple factors such as velocity, temperature and load effect. Section 5 then discusses two basic applications of dynamic control: collision detection, which is crucial for human-robot interaction, and kinesthetic teaching together with Cartesian teaching, which allows human operators to control the robot intuitively. Section 6 focuses on more advanced dynamic control, including constant force tracking with surface tracking and adaptive unified control methods. In Section 7, the idea of collaborative applications is transferred to industrial robots. Finally, Section 8 draws the conclusion.

RELATED WORK

Study of Friction Model

Precise joint friction estimation is crucial to the success of sensorless dynamic control (Goto, S. et al. 2007, 627). You et al. adopted a simple friction model with constant Coulomb friction and first-order viscous friction in their study because the entire friction resistance was small (You, Y., Zhang, Y. and Li, C. 2014, 10). Lee and


Song used a cubic polynomial to describe the joint viscous friction (Lee, S. D. and Song, J. B. 2016, 11). However, friction is a complicated reaction force between two surfaces in contact. It features high non-linearity and is influenced by multiple factors including contact geometry, materials, relative velocity, lubricants, etc. (Olsson, H. et al. 1998, 176). More comprehensive models such as the Stribeck model, the LuGre model and the generalized Maxwell-slip model describe the non-linear relationship between friction and relative velocity and take dynamic behavior into account (Stribeck, R. 1902, 1342; Johanastrom, K. and Canudas-De-Wit, C. 2008, 101; Al-Bender, F., Lampaert, V., and Swevers, J. 2005, 1883). A detailed review of different friction models is provided by Pennestrì et al. (Pennestrì, E. et al. 2016, 1785). Yet these friction models only address the relationship between friction and velocity, while in practice the effects of other factors are also apparent. Simoni et al.'s study on robot joints proved a considerable dependence of friction on temperature (Simoni, L. et al. 2015, 3524). In their method, the joint temperature was deduced from thermodynamics rather than measured with temperature sensors; this requires careful calibration before operation and is time-consuming (Simoni, L. et al. 2015, 3524). Bittencourt and Gunnarsson proposed a load-dependent friction model for heavy-load industrial robots (Bittencourt, A. C. and Gunnarsson, S. 2012, 134). Hence, a universal and efficient friction model for collaborative robots needs further study.

Study of Collision Detection

The most common way to detect collisions is to use additional FT sensors at the wrist or end-effector (Lu, S. et al. 2005, 3796). However, this method can only detect collisions that happen at the sensor's location, not at an arbitrary part of the robot, which is not safe enough for human-robot interaction. This problem can be solved by mounting FT sensors at the base or by using joint torque sensors (Ott, C., & Nakamura, Y. 2009, 3244; Hirzinger, G. et al. 2002, 1710). Lumelsky and Cheung proposed a detection method based on tactile skin (Lumelsky, V. J. and Cheung, E. 1993, 194). Yet these methods share common drawbacks such as increased hardware cost and high vulnerability. Hence, sensorless collision detection has more academic and practical merit.

Study of Kinesthetic Teaching

In general, kinesthetic teaching can be achieved in either a position-based or a torque-based manner. The position-based method obtains the external force/torque exerted by the operator and then, by applying the impedance model,


converts the force/torque into position commands. Goto et al.'s work showed that the external force/torque can be estimated in a sensorless manner in position-based kinesthetic teaching (Goto, S. et al. 2003). Yet the position-based method has limited bandwidth and is overly sensitive to external forces, which is undesirable in kinesthetic teaching; torque-based kinesthetic teaching is therefore more suitable. In You et al.'s study, kinesthetic teaching was achieved by independently compensating the gravity and friction resistance in the motor current loop (You, Y., Zhang, Y. and Li, C. 2014, 10). This method is more compact and achieves a larger control bandwidth, yet it did not take friction into consideration, since the authors argued that the friction resistance was small. In common cases, on the contrary, the teaching performance and stability depend greatly on the accuracy of the friction model, and the friction compensation strategy needs deeper study. Kinesthetic teaching allows the operator to freely drag the robot to any reachable position and orientation. Yet in some situations, the orientation, or even some translational directions, of the end-effector should remain unchanged. This is called Cartesian-space kinesthetic teaching, a newly emerged research area.

Study of Compliant Behavior and Force Control

Compliant behavior means that the robot deviates from its original trajectory along the direction of an external force, which is a simple case of force control. Classic position control strategies cannot deal with this situation, as they do not consider external interactions. One fundamental strategy to solve this problem is the impedance model, initially presented by Hogan (Hogan, N. 1985, 107). The impedance model regulates the coupled system of manipulator and environment as a mass-damper-spring system; details are discussed in the next part. In robot control, impedance control can be implemented either on the torque loop (also called the current loop) or on the position loop. The former is named torque-based impedance control, or simply impedance control, while the latter is position-based impedance control, or admittance control (Jung, S. 2012, 373). Albu-Schäffer et al. applied torque-based impedance control to the LWR robot to achieve compliant behavior in Cartesian space (Albu-Schäffer, A., Ott, C., Frese, U., and Hirzinger, G. 2003, 3704). Building on this, passivity control was integrated into the control law to improve the stability of Cartesian impedance control (Albu-Schäffer, A., Ott, C., and Hirzinger, G. 2007, 23). Heinrichs et al. presented a thorough study of position-based impedance control for a hydraulic manipulator (Heinrichs, B., Sepehri, N., and Thornton-Trump, A. B. 1997, 46). Compliant behavior is the foundation of advanced force control, in which the contact force itself must be regulated, called force tracking. Yet impedance control alone cannot accomplish such tasks. Roveda et al. combined an admittance control loop over an impedance


control loop to achieve an optimal force-tracking controller (Roveda, L. et al. 2015, 130). Schindlbeck and Haddadin applied task-energy tanks to a passivity-based impedance control framework to realize torque-based force tracking (Schindlbeck, C. and Haddadin, S. 2015, 440). From a different perspective, explicit force control, also called direct force control, directly uses force feedback to control the contact. Hence, the integration of explicit force control and the impedance model can achieve better performance.
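The mass-damper-spring idea behind position-based impedance (admittance) control can be sketched in one Cartesian direction: the desired dynamics M·x'' + D·x' + K·x = F_ext are integrated to a position offset that a position-controlled robot can track. The gains and time step below are illustrative placeholders, not values from the works cited above.

```python
# Minimal discrete admittance (position-based impedance) step for one
# Cartesian direction. M, D, K, dt are assumed, illustrative values.
def admittance_step(x, dx, F_ext, M=2.0, D=40.0, K=200.0, dt=0.002):
    ddx = (F_ext - D*dx - K*x) / M     # mass-damper-spring model
    dx = dx + ddx*dt                   # semi-implicit Euler integration
    x = x + dx*dt
    return x, dx

# Under a constant 10 N push the offset settles at the static value F/K.
x = dx = 0.0
for _ in range(5000):                  # 10 s at a 500 Hz control rate
    x, dx = admittance_step(x, dx, 10.0)
```

The steady-state offset F/K illustrates why pure impedance/admittance control cannot track a force reference exactly, motivating the hybrid schemes with explicit force feedback discussed above.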

ROBOTIC DYNAMIC MODEL AND PARAMETER IDENTIFICATION

This section mainly concerns the dynamic model of collaborative robots. In the first part, the full dynamic model is discussed and compared with the dynamic model for flexible joints. Then, two different approaches for output torque estimation are analyzed: estimation from the motors' current signal and estimation from the double-encoder structure. Finally, identification methods for the robot's inertial parameters and friction model are thoroughly studied.

Robotic Dynamic Model

The classical dynamic model of an articulated robot arm can be written as (Siciliano, B. and Khatib, O. 2016):

\tau_d(q, \dot{q}, \ddot{q}) = M(q)\ddot{q} + C(q, \dot{q})\dot{q} + G(q)    (1)

\tau_d denotes the theoretical dynamic joint torque. q represents the vector of joint angles, called the link-side angle. M(q) is the inertia matrix, C(q, \dot{q}) is the Coriolis and centrifugal matrix, and G(q) is the gravity vector. Accordingly, M(q)\ddot{q} is called the inertial torque and C(q, \dot{q})\dot{q} the coupled torque. Taking the external torque and the friction torque into consideration, the full version of the dynamic model is given by:

\tau_s = \tau_d + \tau_f + \tau_{ext}    (2)

Herein, \tau_s represents the total output torque of the motor after the speed reducer, \tau_f the friction torque, and \tau_{ext} the external joint torque.


This dynamic model is based on the premise of rigid joints. Building on the original model, Spong studied the impact of joint elasticity and damping and proposed an improved dynamic model for flexible joints (Spong, M. W. 1987, 310).

M(q)\ddot{q} + C(q, \dot{q})\dot{q} + G(q) = \tau + DK^{-1}\dot{\tau} + \tau_{ext}    (3)

B\ddot{\theta} + \tau + DK^{-1}\dot{\tau} = \tau_m, \quad \tau = K(\theta - q)    (4)

\theta is the vector of motor angles divided by the gear ratio, called the motor-side angle. \tau denotes the actual output torque of the joints. K and D represent the joint stiffness and damping matrices, which are diagonal and positive definite. In general, joint elasticity is mainly caused by joint torque sensors, if any, as well as by the speed reducers. Albu-Schäffer proposed an identification method for K and D based on oscillation tests, in which these two parameters are treated as constants (Albu-Schäffer, A. and Hirzinger, G. 2001, 2852). This model has been applied in the LWR robot designed by DLR (Albu-Schäffer, A. et al. 2003, 3704; Schindlbeck, C. and Haddadin, S. 2015, 440). In the LWR, high-accuracy position sensors (magneto-resistive encoders) are equipped only at the motor side, while the position sensors at the link side are low-accuracy potentiometers used for safety purposes (Albu-Schäffer, A. et al. 2007, 376). This means that only \theta can be measured accurately, so the LWR adopts a strategy in which q is iteratively estimated from K and the motor-side angle (Schindlbeck, C., and Haddadin, S. 2015, 440). Furthermore, the LWR is equipped with joint torque sensors to obtain state feedback, which in return aggravates the elasticity of the joints. The flexible-joint dynamic model, however, is neither necessary nor suitable in our case. In contrast to the LWR, the collaborative robot used here is equipped with high-accuracy encoders at both the motor and link sides, so there is no need to estimate the link-side joint angle. Moreover, since the robot carries no joint torque sensors, the elasticity of the joints is considerably small and negligible. Finally, the usage of joint torque sensors would contradict the idea of sensorless dynamic control. There are two ways to calculate \tau_d. The first is to compute all terms (M(q),

C(q, \dot{q}) and G(q)) explicitly by using the Lagrange dynamic equation (Gu, Y. L., and Loh, N. K. 1985, 1497). Yet this computation is time-consuming and can only be done numerically, so it is applicable to simulations but not to real-time control. The other is the Newton-Euler method with recursive computation, which is, on the contrary, more common in real-time control (Khalil, W. 2011, 3). The detailed iteration process is as follows.

for i = 0, …, 6 do

\omega_{i+1} = R_i^{i+1}\omega_i + \dot{q}_{i+1}Z_n    (5-1)

\dot{\omega}_{i+1} = R_i^{i+1}\dot{\omega}_i + (R_i^{i+1}\omega_i) \times \dot{q}_{i+1}Z_n + \ddot{q}_{i+1}Z_n    (5-2)

\dot{v}_{i+1} = R_i^{i+1}[\dot{\omega}_i \times P_i^{i+1} + \omega_i \times (\omega_i \times P_i^{i+1}) + \dot{v}_i]    (5-3)

\dot{v}_{C,i+1} = \dot{\omega}_{i+1} \times P_{C,i+1} + \omega_{i+1} \times (\omega_{i+1} \times P_{C,i+1}) + \dot{v}_{i+1}    (5-4)

F_{c,i+1} = m_{i+1}\dot{v}_{C,i+1}    (5-5)

N_{c,i+1} = I_{i+1}\dot{\omega}_{i+1} + \omega_{i+1} \times (I_{i+1}\omega_{i+1})    (5-6)

for i = 7, …, 1 do

f_i = R_{i+1}^i f_{i+1} + F_{c,i}    (5-7)

n_i = N_{c,i} + R_{i+1}^i n_{i+1} + P_{C,i} \times F_{c,i} + P_i^{i+1} \times (R_{i+1}^i f_{i+1})    (5-8)

\tau_{d,i} = n_i^T Z_n    (5-9)

Tii 1 represents the transformation matrix of i joint coordinate system relative to the

i  1joint coordinate system, and R ii 1 is its rotation matrix. R ii 1 is the inverse matrix of R ii1 , in this case is also its transpose.  is the joint velocity,  is the joint acceleration.

Z n represents the unit vector in the positive direction of the Z axis. mi represents the mass of the i link, Pc ,i represents the center of mass relative to the coordinates of the coordinate system, and I i represents the inertia tensor of the link with respect to the principal axis. g represents the gravitational acceleration vector. The adopted dynamic model contains three sorts of parameters which are τ s the output torque of joints, mi , Pci and I i called inertial parameters and

τ f the joint friction.

Next parts will discuss the estimation and identification of these parameters.
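The recursion of Eqs. (5-1)–(5-9) can be sketched for the didactic case of a planar serial arm whose revolute joints all rotate about Z (this is a simplified stand-in, not the 7-DOF robot's implementation). Gravity is handled with the usual trick of giving the base an upward acceleration g; the two-link dimensions in the usage example are hypothetical.

```python
import numpy as np

def rz(t):
    """Rotation about Z by angle t."""
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def rne(q, dq, ddq, m, P, Pc, Icm, g=9.81):
    """Recursive Newton-Euler inverse dynamics, Eqs. (5-1)-(5-9), for a
    planar arm in the x-y plane with gravity along -y.
    P[i]  : origin of frame i+1 expressed in frame i
    Pc[i] : center of mass of link i+1 expressed in frame i+1
    Icm[i]: inertia tensor of link i+1 about its center of mass."""
    n = len(q)
    z = np.array([0.0, 0.0, 1.0])
    w = np.zeros(3); dw = np.zeros(3)
    dv = np.array([0.0, g, 0.0])       # gravity trick: accelerate the base
    F, N = [], []
    for i in range(n):                 # outward pass: (5-1)..(5-6)
        R = rz(q[i]).T                 # maps frame-i vectors into frame i+1
        w_new = R @ w + dq[i]*z
        dw_new = R @ dw + np.cross(R @ w, dq[i]*z) + ddq[i]*z
        dv = R @ (np.cross(dw, P[i]) + np.cross(w, np.cross(w, P[i])) + dv)
        w, dw = w_new, dw_new
        dvc = np.cross(dw, Pc[i]) + np.cross(w, np.cross(w, Pc[i])) + dv
        F.append(m[i]*dvc)
        N.append(Icm[i] @ dw + np.cross(w, Icm[i] @ w))
    f = np.zeros(3); nt = np.zeros(3); tau = np.zeros(n)
    for i in range(n-1, -1, -1):       # inward pass: (5-7)..(5-9)
        R = rz(q[i+1]) if i+1 < n else np.eye(3)
        Pn = P[i+1] if i+1 < n else np.zeros(3)
        f_child = R @ f
        nt = N[i] + R @ nt + np.cross(Pc[i], F[i]) + np.cross(Pn, f_child)
        f = f_child + F[i]
        tau[i] = nt @ z
    return tau

# Static two-link check: with zero velocity and acceleration the recursion
# must reproduce the closed-form gravity torques.
l1, lc1, lc2, m1, m2 = 0.5, 0.25, 0.2, 2.0, 1.0   # assumed link data
tau = rne(q=[0.0, 0.0], dq=[0.0, 0.0], ddq=[0.0, 0.0],
          m=[m1, m2],
          P=[np.zeros(3), np.array([l1, 0.0, 0.0])],
          Pc=[np.array([lc1, 0.0, 0.0]), np.array([lc2, 0.0, 0.0])],
          Icm=[np.zeros((3, 3)), np.zeros((3, 3))])
```

With both links horizontal, the recursion returns tau = [(m1·lc1 + m2·(l1+lc2))·g, m2·lc2·g], matching the textbook gravity-torque expressions, which is a convenient sanity check before trusting the full dynamic terms.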


Output Torque Estimation

The estimation of joint output torque is one of the keys to sensorless dynamic control. One common method uses the motors' current signal, given that the output torque of a motor is proportional to its current. Although this proportional ratio, called the torque constant, can be found in the motor's datasheet, it is still necessary to calibrate its value through experiments for higher accuracy (Wang, S. et al. 2016, 923). Another, more novel method is based on the deformation of the joints. As mentioned above, the principle of a joint torque sensor is to magnify the angle difference between the link side and the motor side by using the elasticity of the sensor (Hirzinger, G. et al. 2002, 1710). From a different perspective, then, instead of adding a sensor one can take full advantage of the joint's intrinsic elasticity, which comes from the speed reducer. This method brings several advantages, including quicker response and more stable static readings (Han, Z., Yuan, J., and Gao, L. 2018, 1852). Since collaborative robots generally adopt a modular design, the following discussion and experiments are performed on a single joint (Figure 5).

Figure 5. Experiments on a single joint.

Current-Based Torque Estimation

Within a certain load range, the output torque of a motor satisfies a linear relationship with its current, which can be written as:

\tau_s = n(i\hat{K}_t - \hat{J}_m\ddot{\hat{\theta}})    (6)

n is the gear ratio and i is the motor's current. \hat{K}_t stands for the torque constant and \hat{J}_m for the equivalent rotational inertia of the motor. \hat{\theta} is the original motor angle. All these parameters are at the motor side and in scalar form. Yet it is more intuitive to convert them to the link side:

\tau_s = iK_t - J_m\ddot{q}    (7)

Here, K_t and J_m have meanings similar to their motor-side counterparts, but at the link side. Although both K_t and J_m can be calculated from the motor's datasheet, K_t has the greater influence on torque estimation and should therefore be calibrated through experiments (Wang, S. et al. 2016, 923). The experiment setting is shown in Figure 6. A payload bar with known mass distribution is mounted on a joint. The joint rotates reciprocally at a slow, constant speed (1°/s), starting from the horizontal axis.

Figure 6. Experiment setting for torque estimation.

Since the joint rotates at constant speed, the dynamic model of this single joint can be simplified as:

\tau_s = \tau_d(q, 0, 0) + \tau_f = G(q) + \tau_f    (8)

Here, all parameters are in scalar form. Obviously, the friction torque is always opposite to the direction of the rotation speed. Assume also that the friction torque keeps the same absolute value at the given speed, which leads to:

\tau_f(\dot{q}) = -\tau_f(-\dot{q})    (9)

By separating the entire rotation process into a clockwise part and a counterclockwise part, the measured torque can be written as:

\tau_s^+(q) = G(q) + \tau_f(\dot{q})  (ccw)
\tau_s^-(q) = G(q) - \tau_f(\dot{q})  (cw)    (10)

In order to remove the influence of friction, one can add these two equations to obtain:

\tau_s^+(q) + \tau_s^-(q) = \hat{\tau}_s(q) = 2G(q)    (11)

Since a single joint has only one DOF, the gravity torque is given by:

G(q) = g_{max}\sin(q)    (12)

g_{max} represents the maximum gravity torque, reached when the load bar is horizontal. The calibration of the torque constant thus turns into a sinusoidal-function fitting problem:

\hat{\tau}_s(q) = A\sin(q) + B    (13)

A and B are coefficients. By changing the mounting condition of the payload bar and adding more payload blocks, multiple experiments can be performed. Here, the third joint of the robot is taken as an example. The current data (in permillage) collected in a single experiment, together with the fitting result, is shown in Figure 7; the final calibration results are listed in Table 1.

Figure 7. Fitting result for current-based torque estimation.


Figure 7 also confirms the linear relationship between the output torque of a motor and its current. The torque constants of the other joints can be calibrated in the same way. The final results for all joints are listed in Table 1.

Table 1. Parameters for current-based torque estimation

Joint        1     2     3     4     5     6     7
Kt (Nm/‰)    5/17  5/17  1/8   1/8   1/18  1/18  1/18
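The fitting procedure of Eqs. (10)–(13) reduces to linear least squares on the regressors [sin q, 1]. The sketch below replays it on synthetic data; g_max, the "true" torque constant, the friction level and the noise are all invented for illustration, not measured values.

```python
import numpy as np

# Synthetic calibration run: a payload bar with known maximum gravity
# torque g_max, a joint whose true torque constant we pretend not to know,
# and a constant friction offset that cancels when the two sweep
# directions are summed (Eq. (11)). All numbers are illustrative.
g_max, Kt_true, fric = 12.0, 0.125, 40.0      # Nm, Nm/permille, permille
rng = np.random.default_rng(0)
q = np.deg2rad(np.linspace(5.0, 175.0, 171))  # sweep angles, rad

def current(direction):
    """Simulated motor current (permille) for one sweep direction."""
    return (g_max*np.sin(q))/Kt_true + direction*fric + rng.normal(0, 1, q.size)

s = current(+1) + current(-1)                 # Eq. (11): friction cancels
A, B = np.linalg.lstsq(np.column_stack([np.sin(q), np.ones_like(q)]),
                       s, rcond=None)[0]      # Eq. (13): fit A*sin(q) + B
Kt_est = 2.0*g_max/A                          # since A = 2*g_max/Kt
```

Summing the two sweep directions before fitting is what makes the result insensitive to the (unknown) friction level, exactly as Eq. (11) intends.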

Double-Encoder-Based Torque Estimation

As demonstrated before, each modular joint of the collaborative robot used in this case is equipped with an incremental encoder at the motor side and an absolute encoder at the link side, both of high accuracy. A conceptual structure is shown in Figure 8(a). Other collaborative robots may share a similar structure with the encoder types in a different order.

Figure 8. Conceptual structure of robot joint with double encoders.

The principle of torque estimation from the double encoders is illustrated in Figure 8(b) (Han, Z., Yuan, J., and Gao, L. 2018, 1852). Since the speed reducer has a certain elasticity, it twists when an output torque is applied. The angle difference between the link side and the motor side can thus serve as a torque sensor and indicate the joint's output torque (Lynch, K. M. and Park, F. C. 2017). This idea has also been applied in the dynamic model for flexible joints. The relationship between the joint's output torque and the angle difference is defined as follows:

\Delta\theta = \theta - q, \quad \tau = K_s\Delta\theta    (14)

K_s represents the diagonal stiffness matrix of the harmonic reducer, which should be identified through experiments. The experiment platform is the same as for the previous method. The initial values of the absolute encoder and the incremental encoder (divided by the gear ratio) are recorded once the driver is powered on. The identification methodology is similar to the current-based method. Here, it is again assumed that the friction torque keeps the same absolute value at the given speed, although the value might differ from before. Multiple experiments are performed by changing the payload. The same joint as before is taken as an example. The angle-difference data collected in a single experiment is shown and compared with the corresponding current data in Figure 9(a, b). The identification result for one joint is shown in Figure 9(c). The stiffness of the other joints can be identified in the same way. The final results are listed in Table 2.

Figure 9. Fitting result for double-encoder-based torque estimation.

Table 2. Parameters for double-encoder-based torque estimation

Joint        1       2       3       4       5       6       7
Ks (Nm/°)    0.4186  0.4106  0.1382  0.1340  0.0680  0.0658  0.0693
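The stiffness identification amounts to a linear fit of known torque against measured twist, per Eq. (14). A sketch on synthetic data follows; the "true" stiffness is chosen to be of the same order as a Table 2 entry, and the noise level is an assumption.

```python
import numpy as np

# Synthetic identification of the reducer stiffness Ks (Eq. (14)): known
# gravity torques are regressed against the measured motor/link angle
# difference. Ks_true and the encoder noise level are illustrative.
Ks_true = 0.134                                  # Nm/deg, order of joint 4
rng = np.random.default_rng(1)
tau = np.linspace(-10.0, 10.0, 41)               # applied torques, Nm
twist = tau/Ks_true + rng.normal(0, 0.05, tau.size)  # measured twist, deg
Ks_est = float(np.linalg.lstsq(twist[:, None], tau, rcond=None)[0][0])
```

A no-intercept fit is appropriate here because the twist is referenced to the power-on zero reading, so zero torque corresponds to zero twist by construction.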


Compared with the current-based output torque estimation method, which depends on the back electromotive force of the motor, the double-encoder-based method has a much faster response because it relies directly on the deformation of the mechanical structure. For the same reason, the measured angle difference is much more stable than the current signal when the joint is stationary. Yet when the joint is moving, the double-encoder-based method yields noisier results than the current-based one; this can be alleviated with stronger signal filters. Another disadvantage is the non-linear behavior of the speed reducer's stiffness over a wide payload range (Han, Z., Yuan, J., and Gao, L. 2018, 1852), which needs further study.

FRICTION MODEL FOR COLLABORATIVE ROBOT

Identification of frictional parameters is the most challenging problem in sensorless dynamic control (Goto, S. et al. 2007, 627). The friction model used in the previous section contains only a static part, also called Coulomb friction, and a first-order viscous part. However, further studies show that this simple model is neither accurate nor complete (Gao, L. et al. 2017, 3027). In fact, joint friction torque is influenced by factors including rotational speed, payload and temperature (Gao, L., Yuan, J., and Qian, Y. 2019). Furthermore, the friction model targeted at the current-based output torque estimation method differs from that of the double-encoder-based method (Han, Z., Yuan, J., and Gao, L. 2018, 1852). Both are discussed thoroughly in this section. Since the robot has a modular design, the following discussion and experiments are performed on a single joint, taking Joint 4 as an example.

Friction Model for Current-Based Torque Estimation

Velocity Dependence of Friction Model

In order to study the velocity-dependent part of the friction model, it is important to separate the velocity variable from the others. The joint temperature is held at 25°C in a thermostatic container. As in the calibration process, the joint is set to rotate back and forth at different constant speeds, but here no external payload is attached. Hence, the measured output torque of the motor can be taken as the friction torque. For convenience, the friction torque in this case is scaled in permillage. Figure 19 depicts the relationship between joint velocity and friction torque. It is worth noting that the curve is clearly symmetric about the origin, which validates the assumption used in the previous section.


Figure 19. Relationship between joint velocity and friction torque.

In Figure 19, a constant Coulomb friction can be observed, and the Stribeck effect is not apparent. Moreover, the relationship between velocity and friction is nonlinear. Hence, the classical Coulomb-viscous-Stribeck friction model (Åström, K. J. and Canudas-de-Wit, C. 2008, 101) is not appropriate for the friction behavior in this case. Waiboer et al. proposed that the viscous friction is mainly caused by the lubricant shear stress, which is a function of the shear rate (Waiboer, R., Aarts, R., and Jonker, B.). In general, the lubricant grease used in the joints of collaborative robots is a non-Newtonian fluid. According to (Bair, S., and Winer, W. O. 1979, 258), the relationship between shear stress and shear rate of a non-Newtonian fluid is given by:

τ = c_L·(1 − e^(−γ̇/c_0))

(15)

where τ is the shear stress, γ̇ is the shear rate, and c_L and c_0 are two lubricant constants. A robot joint can be treated as two solid bodies separated by a lubricant layer. Building on equation (15), the relationship between viscous friction and rotation velocity can be written as:

τ_f(q̇) = [f_c + f_v·(1 − e^(−|q̇|/q_v))]·sgn(q̇)

(16)

where f_c is the constant Coulomb friction, and f_v and q_v are two parameters. sgn(q̇) is the sign function of q̇, which accounts for the direction of the friction torque. In this paper, equation (16) is called the velocity friction model.


Here, the Levenberg-Marquardt nonlinear least-squares fitting method (L-M NLSM) is employed to calculate the parameters of the friction model. The fitted parameters are listed in Table 5 and the model is compared with the measured data in Figure 19. The root mean square percentage error (RMSPE) is smaller than 0.023, which verifies the effectiveness of this model.

Table 5. Parameters of velocity-friction model of Joint 4

Parameters   Values
fc           58.47
fv           288.28
qv           90.17
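The fitting step can be sketched with SciPy's Levenberg-Marquardt least-squares routine (`method='lm'` in `curve_fit`); the synthetic data below stand in for the measured velocity/friction samples, which are not reproduced in the chapter:

```python
import numpy as np
from scipy.optimize import curve_fit

def velocity_friction(qdot, fc, fv, qv):
    """Velocity friction model, equation (16):
    tau_f = [fc + fv * (1 - exp(-|qdot|/qv))] * sgn(qdot)."""
    return (fc + fv * (1.0 - np.exp(-np.abs(qdot) / qv))) * np.sign(qdot)

# Synthetic measurements generated from the Table 5 parameters plus noise
rng = np.random.default_rng(0)
qdot = np.linspace(-150.0, 150.0, 121)     # joint velocity (deg/s)
tau_meas = velocity_friction(qdot, 58.47, 288.28, 90.17)
tau_meas = tau_meas + rng.normal(scale=1.0, size=qdot.size)

# Levenberg-Marquardt fit from a rough initial guess
popt, _ = curve_fit(velocity_friction, qdot, tau_meas,
                    p0=[50.0, 250.0, 80.0], method='lm')
fc_hat, fv_hat, qv_hat = popt
```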

Temperature Dependence of Friction Model

Although the type of lubricant may strongly influence the parameters of the friction model, some phenomena, such as temperature dependence, are common. Generally, a rise in temperature decreases the friction nonlinearly. Experiments investigating the temperature effect were carried out by setting the thermal container to different temperatures ranging from 14°C to 40°C and waiting for the robot joint to reach thermal equilibrium. Then, again, the joint rotates back and forth at constant speed without payload. The relationship between velocity and measured friction (scaled in permillage) at different temperatures is shown in Figure 20 (a). The data indicate that the friction torque decreases appreciably as the temperature rises, while the Coulomb friction remains almost constant. This is consistent with the earlier prediction. It can also be seen from Figure 20 (a) that the friction value decreases by about 100 units (‰) from 15°C to 40°C at a speed of 50°/s, which accounts for nearly 10% of the rated output torque. In other words, the joint transmission efficiency is noticeably higher at relatively high working temperatures. From a different perspective, the relationship between friction and temperature at different velocities is shown in Figure 20 (b). The velocity-dependent friction model should therefore be revised to include a temperature factor (Bair, S., and Winer, W. O. 1979, 258):

τ_f(q̇, T) = [f_c + (f_v0 + f_vT·T)·(1 − e^(−|q̇|/(q_v0 + q_vT·T)))]·sgn(q̇)

(17)

where T is the joint temperature (°C) and f_c is the Coulomb friction, which is independent of T. f_v0, f_vT, q_v0 and q_vT are four parameters. In this paper, equation (17) is called the velocity-temperature friction model. L-M NLSM is again used to identify the temperature-dependent parameters; the RMSPE is smaller than 0.028. The identification result is listed in Table 6. The visualization and comparison of the temperature-dependent model are shown in Figure 21.


Figure 20. Relationship between velocity, temperature and friction.

Figure 21. The visualization and comparison of temperature-dependent model.


Table 6. Parameters for velocity-temperature friction model of Joint 4

Parameters   Values
fc           58.47
fv0          322.6
fvT          -1.374
qv0          12.32
qvT          3.125

Load Dependence of Friction Model

Further experiments show that the friction torque is also affected by the external load torque. This can be explained by the underlying Coulomb friction model. Here, similar to the experimental setting in Section 3, the joint rotates with an extra payload that is known in advance, in order to identify the load-dependent friction model. In this case, the output torque of the motor can be considered as the sum of the friction and load torques. The temperature is set to 25°C, since practical experiments show that temperature and load affect the friction model independently.

Figure 22. Relationship between velocity, payload and friction torque.


The collected data are shown in Figure 22. Dot-curves represent the total friction, while star-curves show the friction after deduction of the velocity-dependent part. Figure 22 clearly shows that the load-dependent friction increases with the load torque. Furthermore, a slight Stribeck effect appears. These characteristics can be represented by extending model (16) with the classical Stribeck friction model (Åström, K. J. and Canudas-de-Wit, C. 2008, 101):

τ_f(q̇, τ_t) = [f_c + (f_ct + f_st·e^(−|q̇|/q̇_0))·τ_t]·sgn(q̇) + f_v·(1 − e^(−|q̇|/q_v))·sgn(q̇)

(18)

where (f_ct + f_st·e^(−|q̇|/q̇_0))·τ_t·sgn(q̇) represents the load-dependent friction term and f_st·e^(−|q̇|/q̇_0)·τ_t·sgn(q̇) describes the load-dependent Stribeck effect. f_ct, f_st and q̇_0 are three model parameters, and τ_t is the external load torque. In this paper, model (18) is called the velocity-load model. Using a similar identification method, the results are listed in Table 7. Compared with f_c and f_ct, the value of f_st is small, indicating that the Stribeck effect caused by the load torque is relatively slight. The RMSPE of model (18) is smaller than 0.020. Figure 23 depicts and compares the measured friction data and the model.

Table 7. The identified parameters for velocity-load model

Parameters   Values
fct          0.1140
fst          0.02908
q̇0           2.585

Figure 23. The visualization and comparison of velocity-load model.


Comprehensive Friction Model

After studying the effects of velocity, temperature and payload, the comprehensive friction model is proposed:

τ_f(q̇, T, τ_t) = [f_c + (f_ct + f_st·e^(−|q̇|/q̇_0))·τ_t]·sgn(q̇) + (f_v0 + f_vT·T)·(1 − e^(−|q̇|/(q_v0 + q_vT·T)))·sgn(q̇)

(19)

The model suggests that temperature and velocity mainly influence the viscous friction nonlinearly, whereas the load torque influences the Coulomb friction linearly and causes a slight Stribeck effect. The values of the 8 model parameters for Joint 4 are listed in Table 8.

Table 8. Parameters of comprehensive friction model of Joint 4

Parameters   Values
fc           58.47
fv0          322.6
fvT          -1.374
qv0          12.32
qvT          3.125
fct          0.114
fst          0.0291
q̇0           2.585
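As a sketch (parameter values are those from Table 8; the implementation details are mine, not the authors'), the comprehensive model of equation (19) is straightforward to evaluate:

```python
import math

# Table 8 parameters for Joint 4 (friction scaled in permillage of rated torque)
FC, FV0, FVT = 58.47, 322.6, -1.374
QV0, QVT = 12.32, 3.125
FCT, FST, Q0 = 0.114, 0.0291, 2.585

def comprehensive_friction(qdot, T, tau_load):
    """Equation (19): friction as a function of joint velocity qdot (deg/s),
    temperature T (degC) and external load torque tau_load."""
    if qdot == 0.0:
        return 0.0
    s = math.copysign(1.0, qdot)
    v = abs(qdot)
    coulomb_load = FC + (FCT + FST * math.exp(-v / Q0)) * tau_load
    viscous = (FV0 + FVT * T) * (1.0 - math.exp(-v / (QV0 + QVT * T)))
    return (coulomb_load + viscous) * s

# Friction drops as temperature rises, at the same speed and load
cold = comprehensive_friction(50.0, 15.0, 0.0)
warm = comprehensive_friction(50.0, 40.0, 0.0)
```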

Figure 24. RMSPE of different friction models.

Table 9. Performance indicators for the different models

Model                       Fitness   RMSE    RMSPE
Velocity                    0.8875    38.52   0.1812
Velocity-Load               0.9368    26.51   0.1181
Velocity-Temperature-Load   0.9872    5.215   0.02501

As a comparison, the RMSPE values of the different friction models, namely the velocity model, the velocity-load model and the comprehensive model, are depicted in Figure 24 and listed in Table 9. The comprehensive model performs far better than the rest and is crucial to sensorless dynamic control.

Double Encoder-Based Estimation

As mentioned in the previous section, the current-based estimation method estimates the output torque of the motor, while the double-encoder-based estimation method estimates the output torque of the speed reducer. This leads to a difference in the corresponding friction models (Han, Z., Yuan, J., and Gao, L. 2018, 1852). Since the identification process and experimental setup are similar to those of the previous part, they are not duplicated here. The first step is to find the velocity-dependent friction model. The relationship between velocity and friction is shown in Figure 25. The nonlinear phenomenon and Stribeck effect are not apparent, so a linear model is adopted:

τ_f(q̇) = sgn(q̇)·f_v·|q̇| = f_v·q̇

(20)

The second step is to figure out the effect of temperature. The collected data are shown in Figure 26.

Figure 25. The relationship between velocity and friction.


Figure 26. The velocity-friction torque curve under varied temperatures.

In comparison with the previous part, here the change of temperature does not have a great influence on the friction, since the curves coincide with each other. The temperature-dependent factor is therefore negligible. Finally, the impact of the load on the friction model is studied. The collected data are shown in Figure 27.

Figure 27. The velocity-friction torque curve under varied load torques.

Again, the curves coincide with each other, which means that the effect of the external load is not apparent. In conclusion, the friction model for the double-encoder-based torque estimation method can be represented by a simple linear function. The core reason for the difference between the two friction models is that, for the current-based estimation method, the friction model contains the part caused by the speed reducer, whereas the double-encoder-based method does not. Therefore, the former is affected by the characteristics of the lubricant grease and influenced by temperature and payload, while for the latter the influence of the lubricant grease is much slighter, leading to a simpler model.

BASIC APPLICATIONS OF DYNAMIC CONTROL

The previous sections discussed the principles of the dynamic model and parameter identification, which lay a solid foundation for practical applications. Basic dynamic applications include collision detection, kinesthetic teaching and force estimation (Wang, N. et al. 2017, 44; Wang, S. et al. 2016, 923; Yuan, J. et al. 2019, 489). These functions distinguish collaborative robots from conventional industrial robots and have brought great changes to the automation field.

Collision Detection

Since the core idea of collaborative robots is to allow humans and robots to work in the same workspace, collision detection is crucial for safety. Moreover, the idea of sensorless dynamic control enables the robot to detect a collision happening on any part of it (Wang, N. et al. 2017, 44).

Study of Collision Process

First of all, the collision process is studied. Here, the collision between robot and human is discussed. A normal adult with buffer protection tried to collide with the end-effector of the moving robot. For convenience, only the first joint rotates while the others remain stationary. A force sensor is used to record the contact force between robot and human. In practice, during the collision the person steps back subconsciously, so the contact force first increases and then decreases. The force data collected at a velocity of 8°/s are shown in Figure 28. The collision happens at around 1500 ms; the contact force then gradually increases, reaches its maximum value at around 3200 ms and finally decreases to zero. In order to detect a collision at its starting phase and prevent aggravation, the collision time is defined as the duration from the moment the collision starts to the moment the contact force reaches 150 N (Wang, N. et al. 2017, 44; Wang, N. 2017). More experiments are performed to find the relationship between velocity and collision time, as shown in Figure 29.


Figure 28. Demonstration of collision process.

Figure 29. Relationship between collision time and velocity.

The result shows that the collision time converges to 400 ms as the velocity increases, which is far larger than the control period of the system. This proves the feasibility of post-event detection.

Principle of Collision Detection

The principle of collision detection is to set thresholds on the external torques of all joints. By applying the dynamic model and the friction model, the theoretical dynamic torque and friction torque can be computed. According to equation (1), the external joint torque can then be easily computed:

τ_ext = τ_s − τ_d − τ_f

(21)


If the estimated external torque of any joint exceeds its threshold, a collision is considered to have happened (Wang, N. et al. 2017, 44). Moreover, in order to achieve a more stable detection result, an improvement is made: the deactivation of the acceleration signal. Since the acceleration signal is not measured directly but obtained by differentiating the position signal twice, large accelerations or decelerations may cause large fluctuations in the external torque estimate, which may lead to misjudged collisions. So, an additional threshold is set for the acceleration signal (Wang, N. 2017). If the absolute value of the acceleration exceeds this threshold, the torque threshold for collision detection is multiplied by a scaling factor (e.g., 2). The use of a first-order filter also reduces the effect of noise.
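A minimal sketch of this thresholding logic (the torque and acceleration thresholds are illustrative; the chapter only gives the scaling factor 2 as an example):

```python
import numpy as np

TORQUE_THRESH = np.array([8.0, 8.0, 6.0, 4.0, 3.0, 2.0, 2.0])  # Nm, illustrative
ACC_THRESH = 50.0   # deg/s^2, illustrative
ACC_SCALE = 2.0     # relax the torque threshold under large acceleration

def collision_detected(tau_motor, tau_dynamic, tau_friction, qddot):
    """External torque per joint: tau_ext = tau_s - tau_d - tau_f (eq. 21).
    A collision is flagged when any joint's |tau_ext| exceeds its threshold;
    the threshold is scaled up while the acceleration signal is large."""
    tau_ext = np.asarray(tau_motor) - np.asarray(tau_dynamic) - np.asarray(tau_friction)
    thresh = np.where(np.abs(qddot) > ACC_THRESH,
                      ACC_SCALE * TORQUE_THRESH, TORQUE_THRESH)
    return bool(np.any(np.abs(tau_ext) > thresh))
```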

Validation Experiments

Validation tests are then performed. The first is to test whether all collisions can be correctly detected. Here, the robot keeps moving when collisions happen. Different robot motion speeds and collisions at different parts of the robot are tested. Snapshots of the tests are shown in Figure 30. One of the test records is shown in Figure 31; only the first six joints' data are included, since the last joint is inconsequential here. The result proves that the detection method is functional: all collisions are correctly detected.

Figure 30. Snapshots of the tests.


Figure 31. The difference between calculated torque and current torque.

The second test examines the actual post-event detection result. Here, the robot stops when a collision happens, and the maximum contact force is recorded. Multiple adults are involved in this test, pushing and pulling the robot at different parts. During the six-hour test, the correct detection rate is 96%, with the maximum contact force below 120 N. This result satisfies the standard provided by ISO/TS 15066 and proves that the proposed post-event collision detection is functional and can prevent further damage to both robot and human.

Kinesthetic Teaching

Kinesthetic teaching, also called direct teaching, is a new teaching method for robots. The idea is to allow human operators to drag any part of the robot directly to certain positions, rather than using the operation panel. This is another major advantage of collaborative robots over conventional industrial robots (Wang, S. et al. 2016, 923).

Principle of Kinesthetic Teaching

Here, a joint-torque-based kinesthetic teaching method is adopted, also called force-free control. According to the dynamic model mentioned in Section 3, the output torque of the motor consists of four parts: the inertial torque, coupled torque, gravity torque and friction torque. Because the velocity and acceleration of all joints during kinesthetic teaching are, and should be, kept low, the influence of the inertial and coupled torques is negligible. So, the desired output torque of the motor can be written as (Gao, L. et al. 2017, 3027; Gao, L., Yuan, J., and Qian, Y. 2019):

τ_in = Ĝ(q) + τ̂_f

(22)


If the external torque provided by the operator is considered, the dynamic model becomes:

τ ext  (τ f  τˆ f )  M (q)q  V (q,q)q

(23)

This indicates that the external torque is used to change the robot's motion status. Although it is also possible to calculate and compensate M(q)q̈ in advance, doing so would cause potential instability.

Figure 32. The flowchart of kinesthetic teaching.

The overall flowchart of kinesthetic teaching is shown in Figure 32. Since it is the output torque of motor that is controlled, the friction model for current-based torque estimation is adopted (Yuan, J. et al. 2018, 2016).

Improvement Strategies

The basic principle mentioned above is not adequate for stable and functional kinesthetic teaching. Further improvements are made to enhance its performance, focusing on the compensation of friction and on stability.

Under-Compensation of Friction

Theoretically speaking, if the friction torque is fully compensated, the force exerted on the robot will be minimal. Yet in this situation the robot becomes too sensitive to external forces and even potentially unstable due to fluctuations of the friction. On the other hand, if the friction is not compensated at all, the kinesthetic teaching process becomes exhausting. So the friction has to be partially compensated:

(τ_f)comp = p_c·(τ_fc)comp + p_v·(τ_fv)comp

(24)


where (τ_fc)comp is the Coulomb friction for compensation and (τ_fv)comp the viscous friction. The factor p_c can be set pragmatically from 0.5 to 0.8, and p_v from 0.8 to 0.9 (Gao, L., Yuan, J., and Qian, Y. 2019).

Smooth Transition of Coulomb Friction

As mentioned above, the entire friction can be divided into two parts: the Coulomb friction and the viscous friction. The viscous friction is symmetric and continuous with respect to the rotation velocity, while the Coulomb friction is a step function. If a discontinuous Coulomb friction function is adopted, it will cause instability when the robot transits from stationary to moving. So the compensation strategy for the Coulomb friction, especially the smooth transition around zero velocity, should be studied. Here, the sigmoid function, notable for its smoothness, is used for interpolation, as shown in Figure 33:

h(q̇) = (1 − e^(−a·q̇)) / (1 + e^(−a·q̇)),  a > 0

(25)

Figure 33. Sigmoid function.

The coefficient a is selected pragmatically (e.g., from 2 to 6). A larger a makes the transition faster but sharper, which may cause potential instability. A smaller a, on the contrary, makes the transition softer but may make the robot less compliant (Wang, S. et al. 2016, 923). The compensation of the Coulomb friction can then be written as:

(τ_fc)comp = h(q̇)·f_c

(26)
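A sketch combining equations (24)-(26) (the compensation factors are within the ranges the text suggests; everything else is illustrative):

```python
import math

def smooth_sign(qdot, a=4.0):
    """Sigmoid interpolation of the sign function, equation (25).
    (1 - e^(-a*q)) / (1 + e^(-a*q)) equals tanh(a*q/2); the tanh form
    is used for numerical safety at large |qdot|."""
    return math.tanh(0.5 * a * qdot)

def friction_compensation(qdot, fc, fv_visc, p_c=0.7, p_v=0.85):
    """Under-compensated friction, equations (24) and (26):
    a fraction p_c of the sigmoid-smoothed Coulomb term plus a
    fraction p_v of the viscous term fv_visc (already signed)."""
    return p_c * smooth_sign(qdot) * fc + p_v * fv_visc

# Near zero velocity the Coulomb part fades out smoothly instead of stepping
```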


Fast Deceleration

The desired behavior of kinesthetic teaching has two aspects. When the operator is dragging the robot, it should move smoothly and with little effort. Yet when the operator stops dragging, the robot should hold its position immediately. This requirement leads to different friction compensation strategies for the two stages (Wang, S. et al. 2016, 923). The previous improvements focus on the former stage; here, an improvement for the latter is discussed. Obviously, if less friction is compensated, the robot stops faster. So the problem turns into identifying when the operator stops dragging. Since the friction is only partially compensated, one apparent criterion is that the robot starts decelerating continually. Instead of the instantaneous acceleration signal, an average acceleration over a certain period of time is used:

a_i = (v_{i+1} − v_i) / T

(27)

If a_i is negative and smaller than the threshold, the robot is considered to be decelerating. In this situation, the friction compensation is reduced to speed up the deceleration:

(τ_fv)comp = α·τ_fv

(28)

with

α = 1, (a_i ≥ a_h)
α = (3a_h − a_i)/(2a_h), (3a_h ≤ a_i < a_h)
α = 0, (a_i < 3a_h)

where a_h is the (negative) deceleration threshold.

Motion Limits

For any robot, the joint angles should be limited to certain ranges. This is easily achieved through position-based control, yet it is complicated for torque-based control. Here, the angle limit is enforced by using a virtual spring to exert an additional counter torque (Yuan, J. et al. 2018, 2016). Furthermore, when a joint approaches its angle limit, the compensation of friction is cancelled. Hence, when the angle of a joint exceeds its limit, the operator will feel more resistance:

( f )comp   Kd (q  qlim )

(29)


K is also selected manually (e.g., 5). The other motion limit is the speed limit. Out of safety concerns, the velocity of the joints during the teaching process should also be limited. There is no compensation of friction when a joint exceeds the speed limit; under the influence of friction, the joint decelerates to stationary:

( f )comp  0

(30)

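The deceleration-detection rule of equations (27)-(28) can be sketched as follows (the threshold a_h and window length are illustrative choices, not values from the chapter):

```python
def avg_acceleration(v_prev, v_next, T):
    """Average acceleration over one sampling window, equation (27)."""
    return (v_next - v_prev) / T

def decel_scale(a_i, a_h=-20.0):
    """Scaling factor alpha for viscous-friction compensation (eq. 28):
    full compensation while dragging, ramped down to zero once the joint
    decelerates past the threshold a_h (a_h < 0), so the robot stops fast."""
    if a_i >= a_h:
        return 1.0
    if a_i >= 3.0 * a_h:
        # linear ramp: 1 at a_i = a_h, 0 at a_i = 3*a_h
        return (3.0 * a_h - a_i) / (2.0 * a_h)
    return 0.0
```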

Fusion with Friction Model for Double-Encoder-Based Method

As discussed in the previous section, the friction model for the current-based torque estimation method differs from that for the double-encoder-based method, while the former contains the latter. So if the two models are combined, the performance of kinesthetic teaching will be better (Wang, S. 2017). The final friction compensation can be written as:

(τ_f)comp = p_c·(τ_fc,double)comp + p_v·(τ_fv,double)comp + (τ_f,current − τ_f,double)

(31)

Validation Experiment

Here, the proposed kinesthetic teaching method is validated through the experiments shown in Figure 34.

Figure 34. Validation experiment of kinesthetic teaching.

Figure 35. Different compensation factor during validation experiments.


Figure 36. Different compensation factor during fast deceleration experiments.

The results show that the kinesthetic teaching process is stable, smooth and labor-saving, and the transition from stationary to motion is also smooth. Three comparisons are made to analyze the effect of the proposed improvements. The first is a comparison of the under-compensation of friction under different factors, depicted in Figure 35. The result proves that a larger compensation factor has a greater tendency to cause instability. The second comparison concerns fast deceleration: two tests are performed with and without the improvement mentioned above, as depicted in Figure 36. The results show that with the proposed method the robot stops faster. The last is a comparison of the single friction model with the fused friction model, as shown in Figure 37. The results show that the force applied by the operator becomes smaller when the fusion of friction models is adopted, while the stability remains the same. Two more comparisons are made to verify the accuracy of the friction model. Figure 38 shows that the applied force is more stable when the comprehensive friction model described earlier in this chapter is used rather than other versions of the friction model.

Figure 37. The comparison of single friction model to fusion friction model.


Figure 38. Teaching force with and without the comprehensive friction model.

Cartesian Teaching

The aforementioned teaching method allows the human operator to freely drag the robot to any Cartesian position and orientation. Yet in practice, a more common situation is one in which some degrees of freedom should be limited during the teaching process. Here, a new teaching method called Cartesian teaching is proposed to meet this demand.


Basic Principle of Cartesian Teaching

Cartesian teaching means that the end-effector of the robot can be dragged along certain degrees of freedom of the Cartesian space (called compliant DOFs), while the others are constrained (non-compliant DOFs). This involves a simple use of the impedance model, which will be discussed in detail in the next section. Since human operators always try to exert force in a labor-saving manner (along compliant directions), the teaching process can be treated as a low-rigidity robot operation task. So, the torque-based impedance control method is adopted for this case, as it is more suitable for operation tasks that require low stiffness. The impedance model establishes a relationship between the contact force and the position as:

MΔẌ + BΔẊ + KΔX = F − F_d

(32)

X  X d = ΔX

(33)

M, B and K represent the parameters of the target impedance: inertia, damping and stiffness, respectively. X_d and F_d represent the desired pose and the desired driving force of the target, respectively. Here, X_d is the starting pose when the teaching process initiates, and F_d should be zero. X represents the actual pose of the target and F the actual driving force, while ΔX indicates the pose deviation. Here, ΔX is the input and F is the output. The acceleration during the teaching process is relatively small, which means MΔẌ is negligible. The driving force at the end-effector should then be converted to torques at the joint level; the detailed derivation will be explained in the next section. The final calculation of the joint torque is given by:

τ s  J T ( KΔX  BX )  G (q)  τ f The block diagram of Cartesian teaching is shown in Figure 39.

(34)
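A sketch of the joint-torque computation in equation (34), with gravity and friction estimates passed in as precomputed vectors; the per-DOF gain values follow the experiments reported later in this section, and the rest is illustrative:

```python
import numpy as np

def cartesian_teaching_torque(J, dX, Xdot, K_diag, B_diag, g_torque, f_torque):
    """Joint torque command of eq. (34): tau = J^T (K dX + B Xdot) + G + tau_f.
    Compliant DOFs get (near-)zero stiffness/damping so the operator can
    drag along them; non-compliant DOFs get stiff gains and hold position."""
    F = np.diag(K_diag) @ dX + np.diag(B_diag) @ Xdot
    return J.T @ F + g_torque + f_torque

# Illustrative 6-DOF gains: only translation along x is compliant
K = [0.0, 40000.0, 40000.0, 1000.0, 1000.0, 1000.0]
B = [0.0, 2000.0, 2000.0, 50.0, 50.0, 50.0]
```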


Figure 39. Impedance control in Cartesian space.

K and B can be given smaller values for compliant DOFs and larger values for non-compliant ones. The driving force can be divided into two parts: the force corresponding to the translational deviation and the moment corresponding to the orientation deviation. The definition of the translational deviation is clear and apparent, yet the orientation deviation may have multiple representations, such as Euler angles and unit quaternions. In the following parts, the drawback of Euler angles is first explained, followed by the definition of the orientation deviation in unit quaternions.

Orientation Deviation Represented by Euler Angles

The definition of Euler angles and the transformation between Euler angles and the rotation matrix are given by:

R_ZYX(α, β, γ) = R_Z(α)·R_Y(β)·R_X(γ)

(35)

    ⎡ r11 r12 r13 ⎤   ⎡ cα·cβ   cα·sβ·sγ − sα·cγ   cα·sβ·cγ + sα·sγ ⎤
R = ⎢ r21 r22 r23 ⎥ = ⎢ sα·cβ   sα·sβ·sγ + cα·cγ   sα·sβ·cγ − cα·sγ ⎥
    ⎣ r31 r32 r33 ⎦   ⎣ −sβ     cβ·sγ              cβ·cγ            ⎦

(36)

β = atan(−r31 / √(r11² + r21²))

(37)

α = atan(r21 / r11)

(38)

γ = atan(r32 / r33)

(39)

with c is cos , s is etc. Apparently, when c = 0 (  = ±90°),  and

 cannot be

calculated correctly. The occurrence of this case may cause the problem that the control torque tends to be infinite. This is unacceptable and therefore, unit quaternion is adopted instead.


Orientation Deviation Represented by Unit Quaternion

The quaternion, introduced by Hamilton, can be used to express an orientation with a four-parameter representation. It has no singularities and is capable of decoupling the rotation angle from the rotation axis, thereby providing spatial uniformity. The unit quaternion is expressed as follows:

Q = η + ε₁·i + ε₂·j + ε₃·k

(40)

where   R, n  1, 2,3 . i, j , k are orthogonal unit imaginary numbers. The above equation can also be written as Q = [η, ε], where η is the scalar part of the quaternion and ε is the vector part of the quaternion. The norm of the unit quaternion should be 1.

η² + εᵀε = 1

(41)

The relationship between unit quaternion and rotation matrix is as follows:

= 

1 r11  r22  r33  1 2

(42)

1 

r21  r12 4

(43)

2 

r02  r20 4

(44)

3 

r10  r01 4

(45)

The deviation between two unit quaternions is given by:

q  q1 * q0

(46)

where g 0  S 3 denotes the goal orientation, bar denotes the quaternion conjugation defined as q  v  u  v  u and *denotes the quaternion product defined as :


q₁ * q₂ = (v₁ + u₁) * (v₂ + u₂) = (v₁v₂ − u₁ᵀu₂) + (v₁u₂ + v₂u₁ + u₁ × u₂)

(47)

Here, q is also a unit quaternion. The corresponding distance metric is given by:

u   arccos(v) u , u  0 log(q)  log(v  u)    0, 0, 0 T , otherwise  

(48)

log(Δq) is a vector with three elements indicating both the angle difference and the rotation direction of Δq. Hence, the calculation of the joint torque can be written as:

J   X  v τ s   v  ( K  p   B   )  G (q )  τ f    J   X o  T

(49)
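The quaternion machinery of equations (46)-(48) can be sketched directly (a minimal illustrative implementation; `scipy.spatial.transform.Rotation` offers equivalent operations for production code):

```python
import math

def quat_conj(q):
    """Conjugate of q = (v, ux, uy, uz): negate the vector part."""
    v, ux, uy, uz = q
    return (v, -ux, -uy, -uz)

def quat_mul(q1, q2):
    """Quaternion product, equation (47):
    (v1 + u1)(v2 + u2) = (v1 v2 - u1.u2) + (v1 u2 + v2 u1 + u1 x u2)."""
    v1, x1, y1, z1 = q1
    v2, x2, y2, z2 = q2
    return (v1*v2 - x1*x2 - y1*y2 - z1*z2,
            v1*x2 + v2*x1 + y1*z2 - z1*y2,
            v1*y2 + v2*y1 + z1*x2 - x1*z2,
            v1*z2 + v2*z1 + x1*y2 - y1*x2)

def quat_log(q):
    """Distance metric, equation (48): arccos(v) * u / |u|, zero if u = 0."""
    v, ux, uy, uz = q
    n = math.sqrt(ux*ux + uy*uy + uz*uz)
    if n < 1e-12:
        return (0.0, 0.0, 0.0)
    a = math.acos(max(-1.0, min(1.0, v))) / n
    return (a*ux, a*uy, a*uz)

# Orientation deviation between goal q0 and current q1, equation (46)
q0 = (1.0, 0.0, 0.0, 0.0)                        # identity orientation
q1 = (math.cos(0.25), math.sin(0.25), 0.0, 0.0)  # 0.5 rad about x
dq = quat_mul(q1, quat_conj(q0))
err = quat_log(dq)  # half-angle axis-angle error, ~ (0.25, 0, 0)
```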

Experiments

Experiments of Cartesian teaching are shown in Figure 40.

Figure 40. Direct teaching in Cartesian space using impedance control with unit quaternion.

These experiments are performed at 51 randomly chosen initial poses. In each experiment, only one DOF is set to be compliant while the others are not. The stiffness and damping in the non-compliant directions during traction are unchanged: the translational stiffness is 40000 N/m, the translational damping is 2000 N/(m/s), the rotational stiffness is 1000, and the rotational damping is 50. The stiffness and damping of translation and rotation in the compliant direction are both 0. The collected position and orientation deviation data in the different situations are demonstrated in Figure 41.

a. all non-compliant, slow force

b. all non-compliant, applying impact

c. only the X-axis direction is compliant and is towed

d. only the Y-axis direction is compliant and is towed


e. only the Z-axis direction is compliant and is towed

f. only the direction of rotation of the v0 axis is compliant and towed

g. only the direction of rotation of the v1 axis is compliant and towed

h. only the direction of rotation of the v2 axis is compliant and towed

Figure 41. Errors of Direct teaching in Cartesian space for the third-generation manipulator.

The results show that for non-compliant DOFs, the position deviation is in general within 2 cm and the orientation deviation is smaller than 0.03 in quaternion representation. Singular configurations may enlarge the deviation.


ADVANCED DYNAMIC CONTROL

The basic applications of dynamic control mentioned previously can achieve some innovative yet accessory functions. More advanced dynamic control methods, such as force control, have been proposed to make collaborative robots truly useful in manufacturing. One common application of dynamic control is force tracking along surfaces, also called force control. It is the typical requirement for tasks such as polishing, deburring and milling. Here, instead of using the robot's joint-torque loop (also called the joint-current loop), the position loop was adopted for convenience and dependability (Gao, L. 2019). The basic principle of force control is the position-based impedance model. Building on this, the idea of explicit force control is integrated to eliminate the steady-state error. Moreover, in order to increase the adaptiveness of the surface tracking method, the shape profile and normal direction of the surface are predicted in real time rather than pre-acquired. Force control along both the gravitational direction and the normal direction will be discussed in this part.

Dynamic Model in Cartesian Space

Once the external torques of all joints have been obtained, they can be converted to the external force/torque at the end-effector, which is more useful in practical control applications (Gao, L. 2019). The key to this conversion is the force Jacobian matrix. The forward kinematics of an articulated robot can be written as:

X  f (q)

(50)

X stands for the 6-dimensional vector of position and orientation of the end-effector in Cartesian space, and f for the forward kinematic transformation function. The mapping between joint and end-effector velocities is given by the Jacobian matrix J:

Ẋ = J q̇

(51)

Hence, the relationship between external joint torques and external end-effector force and torque is given by:

τ_ext = Jᵀ F_ext

(52)


with F_ext being the external force and torque at the end-effector, τ_ext the vector of external joint torques, and Jᵀ the transpose of the Jacobian matrix, also called the force Jacobian matrix. The external force at the end-effector can therefore be calculated:

Fext  ( J T )1 τ ext

(53) T 1

For 6DOF robot, ( J )

is adequate. Yet for 7dof robot or if J T is not full rank, it

should be the pseudo inverse matrix. Here, dynamically consistent generalized inverse matrix, in this case the calculated. So the Moore-Penrose generalized inverse matrix is selected instead is adopted by using SVD decomposition. The calculation method of Moore-Penrose generalized inverse is based on matrix singular value decomposition (SVD). SVD decomposition of a matrix m can be expressed as:

mab  U aa Dab Vbb 

T

(54)

where U and V are orthogonal matrices consisting of the unit eigenvectors of the matrix products mmᵀ and mᵀm, respectively. D is a diagonal matrix composed of the singular values of m. The eigenvectors can be solved iteratively by the QR method. Since mmᵀ and mᵀm are both positive semi-definite matrices, the QR method converges to the eigenvalues of the matrix to be decomposed. The overall dynamic model can be written as:

τ s  M (q )q  C (q,q )q  G (q )  τ f  J T (q) Fext

(55)
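As a hedged sketch of Eqs. (53)-(54) (function and variable names are illustrative), the SVD-based Moore-Penrose pseudo-inverse of the force Jacobian can be computed with NumPy:

```python
import numpy as np

def force_from_torque(J, tau_ext, tol=1e-10):
    """Estimate the external end-effector wrench from external joint
    torques, Eq. (53): F_ext = pinv(J^T) @ tau_ext.

    The Moore-Penrose generalized inverse of J^T is built from its SVD,
    which also covers redundant (7-DOF) and rank-deficient cases.
    """
    U, s, Vt = np.linalg.svd(J.T, full_matrices=False)
    # Invert only the significant singular values; zero out the rest.
    s_inv = np.array([1.0 / x if x > tol else 0.0 for x in s])
    Jt_pinv = Vt.T @ np.diag(s_inv) @ U.T      # pinv(J^T)
    return Jt_pinv @ tau_ext
```

For a redundant 7-DOF arm the Jacobian is 6×7 and the same code applies unchanged.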

This equation is in joint space. Yet, the model is more intuitive when described in Cartesian coordinates (Yuan, J. et al. 2019, 489). Differentiating the velocity relation between the end-effector and the joints gives:

X  J qJ q

(56)

By multiplying both sides of the dynamic model by J M⁻¹(q), it can be rewritten as:

J(q)q̈ = J(q)M⁻¹(q)(τ_s − τ_f) − J(q)M⁻¹(q)(C(q, q̇)q̇ + G(q)) − J(q)M⁻¹(q)Jᵀ(q)F_ext

(57)

Hence, the dynamic equation of the robot in Cartesian space is given by

M_x(q)Ẍ + C_x(q, q̇)Ẋ + G_x(q) = (Jᵀ(q))†(τ_s − τ_f) − F_ext

(58)

with

M x  (JM 1 (q)J T )1 C x  ( J T )† C (q,q)  M x Jq

Gx  (J T )† G (q) As it should be, (66) and (69) are two descriptions of the same robot system from different perspective, sharing similar forms.

Properties of Constant Force Tracking

The force model analysis of the robot during the tracking process is illustrated in Figure 42.

Figure 42. The diagram of surface constant force tracking.

The resultant force, denoted as F_R, is the combination of the contact force F_n in the normal direction and the friction force F_τ along the tangential direction. It can also be decomposed into f_x, f_y and f_z with respect to the base coordinate frame. For force control along the gravitational direction, only f_z needs to be controlled. For force control along the normal direction, F_n needs to be controlled while the orientation of the end-effector should also be adjusted accordingly.


Impedance Model in Cartesian Space

As mentioned before, generalized impedance control can be classified into a position-based model and a force-based model. Both model the interaction between the robot and the environment as a mass-spring-damper system (Jung, S. 2012, 373). For the position-based method, the deviation of the external force is the input while the output is a position amendment command. Since position-based impedance control is adopted for its better dependability, a reference trajectory is necessary (Yuan, J. et al. 2019, 489). The impedance control model is given by:

U  Fext  Fd  Bx e  Dx e  K x e

(59)

e  xd  xr

(60)

It shares a certain similarity with the dynamic model in Cartesian space. Herein, U is the force deviation between the desired contact force F_d and the actual external force F_ext. e denotes the position amendment command, x_r the reference position, and x_d the new demand position. The orientation of the end-effector is not discussed in this part. B_x, D_x and K_x are the inertia, damping and stiffness matrices of the impedance model, respectively. The block diagram is shown in Figure 43. However, although impedance control can realize compliant behaviors, it is not adequate to keep the contact force constant due to the existence of steady-state error (Yuan, J. et al. 2019, 489). Several improvements are made to solve this problem.

Figure 43. The block diagram of Impedance control.

Position-Based Hybrid Control

A steady state means that ė and ë become zero. Assume that the environment is elastic with stiffness K_e and position x_e. The external force at steady state can thus be written as:


Fext  Fd  K x ( xd  xr )

(61)

Fext  Ke ( xe  xd )

(62)

The steady-state force error, ∆𝑭𝑠𝑠 is given by:

Fss  Fext  Fd  ( K x  K e )1 K x [ K e ( xe  xr )  Fd ]

(63)

The steady-state problem can be eliminated by using a force-deviation integral, inspired by the concept of explicit force control. The integral of force deviation at time T₀, denoted as ω(T₀), is defined as:

ω(T₀) = η ∫₀^{T₀} [F_ext(t) − F_d(t)] dt

(64)

η is the proportional factor. Equation (59) can thus be modified into:

U + η ∫₀^{T₀} U(t) dt = B_x ë + D_x ė + K_x e

(65)

The block diagram of position-based hybrid control is depicted in Figure 44.

Figure 44. Position-based hybrid control scheme in Cartesian Space.

The position amendment e given by (65) is in the continuous time domain. However, in practical programming, a discrete form should be adopted instead, given by:

e(j) = (B_x T⁻² + D_x T⁻¹ + K_x)⁻¹ [(2B_x T⁻² + D_x T⁻¹) e(j−1) − B_x T⁻² e(j−2) + U(j) + ω(j)]

(66)


where

ω(j) = ω(j−1) + ηT [F_ext(j) − F_d(j)]

j is the discrete index and T is the period of the control loop. e(j) stands for the position amendment at time t = jT. e(0) is set to 0 for stability concerns. Hence, the demand position x_d becomes:

x_d(j) = x_r(j) + e(j)

(67)

Since the position-based impedance model is applied, the newly computed x_d should be converted to joint angles q_d by using inverse kinematics. The inner joint position control loop will execute this position amendment. It can be concluded that the force control is formulated in task space while executed by position commands in joint space. Since not all dimensions require force control (e.g., only the force along the gravitational or normal direction), parameters such as the position amendment e and the impedance-model matrices B_x, D_x and K_x can be simplified to scalars.
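The discrete update of Eqs. (66)-(67) can be sketched as follows for a single controlled axis (a scalar sketch; parameter defaults are borrowed from Table 10, and names are illustrative):

```python
def hybrid_force_step(e1, e2, omega_prev, F_ext, F_d,
                      B=1.0, D=20.0, K=1000.0, eta=0.13, T=0.004):
    """One control period of the position-based hybrid controller.

    e1, e2     : previous position amendments e(j-1), e(j-2)
    omega_prev : integral term omega(j-1)
    Returns (e_j, omega_j); the demand position is x_d = x_r + e_j.
    """
    U = F_ext - F_d                                # force deviation
    omega = omega_prev + eta * T * U               # discrete Eq. (67)
    denom = B / T**2 + D / T + K
    e_j = ((2 * B / T**2 + D / T) * e1
           - (B / T**2) * e2 + U + omega) / denom  # Eq. (66)
    return e_j, omega
```

At each period the result is passed through inverse kinematics to the inner joint position loop, as described above.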

Prediction of Shape Profile

The acquisition of the surface's shape profile can also enhance the accuracy of force control. If the reference trajectory is selected to be:

xr  xe  K e1Fd

(68)

the steady-state error will become zero. Note that K_e⁻¹ F_d is actually the deformation of the environment. This deformation is negligible since the environment is in general made of metal. So, (68) can be simplified as

xr = xe

(69)

Conventionally, the acquisition of the surface's shape profile is achieved through manual predefinition, computer vision, a CAD model, or the kinesthetic teaching mentioned above. Nevertheless, these methods acquire the surface's shape in advance and increase either hardware cost or time consumption. Furthermore, due to hostile working conditions, a camera system is not suitable in practice. A CAD model can provide an accurate shape profile yet is futile for an arbitrary surface whose CAD model is not provided. Here, the acquisition of the surface's shape in the z-axis is made through real-time prediction instead, which is universal to arbitrary unknown surfaces. The pre-set reference trajectory, denoted as x_r0, is considered to be horizontal. While the robot is moving, historical information of the end-effector's position is used for prediction, and the end-effector is adjusted accordingly (Yuan, J. et al. 2019, 489). x_r0 can be decomposed into x_r0(j), y_r0(j) and z_r0(j), denoting the x-axis, y-axis and z-axis components at time t = jT. x_r stands for the predicted reference trajectory. Its x-axis and y-axis components remain the same as those of x_r0 while the z-axis component needs adjustment. x_i, y_i and z_i are the positions of the end-effector relative to the world coordinate frame in the past 100 control periods, with i ranging from 1 to 100. x̄, ȳ and z̄ stand for the corresponding average values. z_r(0) is the initial z-axis value of the horizontal pre-set trajectory. Δz_r(j) is the adjustment according to the surface prediction, given by:

Δz_r(j) = a_x [x_r0(j) − x_r0(j−1)] + a_y [y_r0(j) − y_r0(j−1)]

(70)

a_x and a_y are factors obtained from the historical position information by linear regression:

a_x = (L₁₀L₂₂ − L₂₀L₁₂) / (L₁₁L₂₂ − L₁₂²)

(71)

a_y = (L₂₀L₁₁ − L₁₀L₁₂) / (L₁₁L₂₂ − L₁₂²)

(72)

with

L₁₀ = Σᵢ₌₁¹⁰⁰ (xᵢ − x̄)(zᵢ − z̄),  L₁₁ = Σᵢ₌₁¹⁰⁰ (xᵢ − x̄)²
L₂₀ = Σᵢ₌₁¹⁰⁰ (yᵢ − ȳ)(zᵢ − z̄),  L₂₂ = Σᵢ₌₁¹⁰⁰ (yᵢ − ȳ)²
L₁₂ = Σᵢ₌₁¹⁰⁰ (xᵢ − x̄)(yᵢ − ȳ)

(73)

The demand position can be finally written as:


x_d(j) = x_r(j) + e(j) = x_r0(j) + Δx_r(j) + e(j)

(74)

with

Δx_r(j) = [0₁ₓ₂  z_r(j) − z_r(0)]ᵀ

(75)

It’s worth mentioning that the prior knowledge of surface’s shape profile will still undoubtedly improve the control performance. Yet here in order to test the universality and capability of the proposed method, the control strategy without prior information is adopted.

Prediction of Normal Direction

For force control along the normal direction, prediction of the surface's normal direction is necessary. Moreover, the orientation of the robot's end-effector should also be adjusted accordingly, parallel to the position adjustment discussed above. The prediction of the normal direction cannot be obtained from historical position information alone; historical force information is also involved. The direction of friction can be considered opposite to the moving direction of the end-effector, which has already been solved in the last part.

s_f = ẋ_r(j) / ‖ẋ_r(j)‖

(76)

s_f is the moving direction of the end-effector after normalization. So F_τ can be deduced as the component of F_R along s_f:

Fτ  FR ( s f )

(77)

Finally, the normal force 𝑭𝑛 is considered to be the residual component of 𝑭𝑅 after the deduction of 𝑭𝜏 :

Fn  FR  FR s f The normal direction, 𝒔𝑛 , is thus given by:

(78)

s_n = F_n / ‖F_n‖

(79)
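Eqs. (76)-(79) can be sketched directly (illustrative names; move_dir is the unnormalized moving direction of the end-effector):

```python
import numpy as np

def decompose_force(F_R, move_dir):
    """Split the resultant force F_R into the friction component F_tau
    along the moving direction and the normal contact component F_n,
    returning (F_tau, F_n, s_n) as in Eqs. (76)-(79)."""
    s_f = move_dir / np.linalg.norm(move_dir)   # Eq. (76)
    F_tau = (F_R @ s_f) * s_f                   # Eq. (77): projection on s_f
    F_n = F_R - F_tau                           # Eq. (78)
    s_n = F_n / np.linalg.norm(F_n)             # Eq. (79)
    return F_tau, F_n, s_n
```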

The robot can adjust the end-effector’s orientation with a simple proportional controller by comparing its present z-axis of end-effector coordinate with 𝒔𝑛 . Denote the orientation matrix of end-effector as 𝑹𝑖 :

R_i = [R_ix  R_iy  R_iz]

(80)

Here, R_ix, R_iy and R_iz are the three columns of R_i, representing the x-, y- and z-axes of the end-effector's coordinate frame. The divergence between R_iz and s_n can be represented by a rotation around an axis f_r with angle θ_r.

fr  Riz  sn r  cos1 ( Riz sn )

(81)

(82)

Apparently, f_r lies in the x-y plane of the end-effector's coordinate frame. The proportional control law can be written as

R_d(j) = R_i(j) ΔO(j)

(83)

ΔO(j) = ⟨f_r, k_p θ_r⟩

(84)

R_d(j) is the demand orientation at time t = jT and ΔO(j) is the orientation adjustment. k_p is the proportional gain. ⟨f_r, k_p θ_r⟩ represents the rotation matrix around the axis f_r with angle k_p θ_r. The combination of the demand position x_d and the orientation matrix R_d becomes the final demand posture, X_d. Figure 45 demonstrates the final block diagram with prediction of the surface shape profile and normal direction, and orientation adjustment.
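As a hedged sketch of Eqs. (81)-(84), the rotation ⟨f_r, k_p·θ_r⟩ can be realized with the Rodrigues formula; since R_d = R_i·ΔO composes on the right, the axis is expressed in the end-effector frame (names are illustrative):

```python
import numpy as np

def orientation_step(R_i, s_n, k_p=0.01):
    """One proportional orientation adjustment driving the end-effector
    z-axis R_i[:, 2] toward the predicted surface normal s_n."""
    R_iz = R_i[:, 2]
    f_r = np.cross(R_iz, s_n)                          # Eq. (81), world frame
    theta = np.arccos(np.clip(R_iz @ s_n, -1.0, 1.0))  # Eq. (82)
    n = np.linalg.norm(f_r)
    if n < 1e-12:                                      # already aligned
        return R_i
    k = R_i.T @ (f_r / n)          # rotation axis in the end-effector frame
    a = k_p * theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    # Rodrigues formula realizing <f_r, k_p * theta_r>, Eq. (84)
    dO = np.eye(3) + np.sin(a) * K + (1.0 - np.cos(a)) * (K @ K)
    return R_i @ dO                                    # Eq. (83)
```

With the small default gain (0.01, as in Table 10), the orientation converges to the normal over many control periods rather than in one step.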


Figure 45. The final block diagram with prediction of surface shape profile and normal direction.

Validation Experiments

Force tracking along the gravitational direction and along the normal direction are validated respectively. A Maxwell 3-dimensional force sensor mounted on the end-effector provides measurement of the external force for reference only. Favorable control parameters are listed in Table 10 (Yuan, J. et al. 2019, 489).

Table 10. Parameters for the hybrid force controller

Sampling period: 4 ms
Stiffness of impedance model: 1000 N/m
Damping: 20 kg/s
Inertia: 1 kg
Integral coefficient: 0.13
Desired contact force: 50 N
Velocity of the end-effector: 5 mm/s
Proportional gain for orientation adjustment: 0.01

Figure 46 demonstrates the experiment for gravitational-force control. The robot started from the initial location (Figure 46a), then moved to the final location (Figure 46d), passing through the intermediate locations (Figures 46b, c). Singular configurations were avoided during the whole process. Figure 47 shows the actual contact force given by the force sensor in comparison with the estimated contact force. It has to be emphasized that the measurement of the force sensor is not used in the control program; it provides only a reference.


Figure 46. The experiment of surface tracking along gravitational direction.

Figure 47. The result of force control along gravitational direction.



Figure 48. The tracking process over an inclined plane.

The green line stands for the contact force measured by the force sensor. The red line stands for the estimated contact force. The black line stands for the prediction of the surface's shape in the z-axis. The pink one represents the prediction error of the shape.

Figure 49. The tracking process over a curved surface.

According to the figure, the difference between the estimated and the measured external force is within 2 N. The error of the surface prediction is less than 0.5%. The force deviation was limited to within 5 N, i.e., 10% of the desired contact force. Figures 48 and 49 demonstrate the experiment process of normal force control. The robot started at the initial position and orientation, then moved along the surface while adjusting its orientation. After reaching the destination, the robot moved backward to the initial position. Figures 50 and 51 compare the measured and estimated contact force. In the upper part of each figure, the blue line stands for the contact force measured by the sensor while the red line stands for the estimated contact force along the predicted normal


direction. In the lower part, the blue line stands for the difference between these two values while the red line stands for the deviation between the estimated force and the desired force. The measured contact force along the normal direction is replaced by its z-axis component relative to the end-effector coordinate frame, so the difference between the estimated and the measured force became larger while the robot was adjusting its orientation. The difference between the estimated external force and the measured force is, in general, within 6 N, which supports the credibility of sensorless force estimation.

(a)

(b)

Figure 50. (a) Normal-force control result for plane tracking, (b) Normal-force control result for curved surface tracking.

The contact-force deviation was limited to within 2 N for plane tracking and 5 N for curved-surface tracking. The results indicate that this estimation of external force/torque is credible enough to replace force/torque sensors. It is actually quite remarkable for a robot without any force/torque sensors to achieve such a result. The proposed sensorless force control method


is adequate for targeted applications such as polishing and milling (Cen, L., and Melkote, S. N. 2017, 486). Prediction algorithms, including the prediction of surface shape profile and normal direction, are also functional. Furthermore, the proposed force controller is effective and has preferable utility to arbitrary unknown surfaces.

COLLABORATIVE APPLICATION OF INDUSTRIAL ROBOTS

The study of robotic dynamics and the applications of dynamic control started from collaborative robots. Yet currently, it is the industrial robot that has the highest market share because of its high load capacity, high motion speed and high stiffness. So, it is of great practical and academic importance to transfer these dynamic skills to industrial robots, which is called the collaborative application of industrial robots. Generally speaking, the dynamic study of industrial robots is similar to that of collaborative robots. Yet, there are significant distinctions in three aspects. The first one is the difference in mechanical design. Above all, industrial robots usually have 6 DOFs for higher dependability. Furthermore, unlike the modular joints used in collaborative robots, the structures of the joints in a single industrial robot may differ from each other. They may use different types of speed reducers such as planetary gear reducers, RV reducers, harmonic reducers or even belt reducers. This may cause challenges in establishing the friction model. Industrial robots also use links with higher weights. The second aspect is the specification of the motors. Industrial robots use motors of higher output torque and higher power density. Consequently, the noise of the current signal becomes larger than that of collaborative robots. The linear relationship between current and output torque is still valid, yet with a higher torque constant. The calibration process and the resulting torque constants are shown in Figure 51 and Table 12, respectively.

Figure 51. The calibration of motor.

Table 12. K_T of the joints of the robot

Joint Num.  J1  J2  J3  J4  J5  J6
K_T (Nm/‰)  76  76  56  20  22  15


(a)

(b)

Figure 52. The relationship between friction, velocity and load.

Furthermore, since the stiffness of industrial robots is much higher for higher accuracy, the double-encoder based torque estimation is not feasible. The final aspect is the friction model. The external payload still affects the friction, yet no apparent Stribeck effect is observed. The relationship between friction, velocity and load is shown in Figure 52. The friction model for industrial robots is given by:

 f  ( fc  fv (1  e q / q )  fc  Load )  sgn(q) v

(85)
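A sketch of the friction model in Eq. (85); all parameter values below are illustrative placeholders, not identified values:

```python
import numpy as np

def friction_torque(qd, load, f_c=8.0, f_v=3.0, qd_v=0.5, f_cl=0.02):
    """Joint friction torque per Eq. (85): a Coulomb term f_c, a bounded
    velocity-dependent term f_v * (1 - exp(-|qd|/qd_v)), and a
    load-dependent term f_cl * load, all signed by the joint velocity qd."""
    mag = f_c + f_v * (1.0 - np.exp(-abs(qd) / qd_v)) + f_cl * load
    return mag * np.sign(qd)
```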

Here, the effect of temperature is excluded since most industrial robots are not equipped with temperature sensors. Results show that the friction of industrial robots is larger than that of collaborative robots, for both Coulomb friction and viscous friction. However, the residual error of the fit is much larger than the result in Section 3, as evidenced by the fluctuations of the friction curves in Figure 52. It means that the friction behavior of


industrial robots is more complicated and may not be describable by a reasonable mathematical model. These differences lead to minor changes in the practical applications.

Figure 53. Different compensation strategy of static friction torque.

For kinesthetic teaching, Coulomb friction is compensated with a different strategy. Here, a new transition function of time is applied on top of the Sigmoid function of joint velocity: a fast sinusoidal function oscillates within the range of the maximum compensated Coulomb friction, as shown in Figure 53. The compensation of Coulomb friction at the stationary state can be written as

(f_c)_comp = h(t) · f_c,   h(t) = 1 − cos(ωt)

(86)

where ω is the oscillation frequency of the transition function.

When the operator tries to drag the robot from the stationary state, the time-varying compensated friction will point in the desired moving direction at some time point. The robot then starts moving and escapes the stationary state soon enough, before the compensated friction changes its direction. This new design can greatly reduce the driving force provided by the operator at the initial state. The kinesthetic teaching process is demonstrated in Figure 54.



Figure 54. The kinesthetic teaching process.

Figure 55. Collision detection on industrial robot.

For collision detection, industrial robots need an adaptive threshold due to their complicated friction behavior. Unlike collaborative robots, industrial robots are usually applied in repetitive operations, which means they are likely to move along a pre-defined, fixed trajectory. So in practice, the robot can run the fixed trajectory multiple times in advance without any interference. The estimated external force, which is generally not zero due to the large noise and the inaccuracy of the friction model, can be recorded as a time sequence. The maximum value at each time point, multiplied by a scaling factor, then forms a threshold sequence. Hence, the threshold is time-varying and adaptive to a certain extent. The experiment is shown in Figure 55.
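The adaptive-threshold idea can be sketched as follows (illustrative names; `runs` holds the external-torque estimates recorded over several interference-free executions of the same trajectory):

```python
import numpy as np

def build_threshold(runs, scale=1.5):
    """Per-time-step collision threshold: the maximum recorded magnitude
    across the dry runs, enlarged by a scaling factor."""
    return scale * np.max(np.abs(np.asarray(runs)), axis=0)

def collision_detected(tau_est, threshold):
    """Flag the time steps where the live estimate exceeds the threshold."""
    return np.abs(tau_est) > threshold
```

The scaling factor trades sensitivity against false alarms and would be tuned per joint in practice.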


CONCLUSION

This chapter discusses the entire process of sensorless dynamic applications of collaborative robots and their extension to industrial robots. The first issue is output torque estimation, since no additional force/torque sensors or joint torque sensors are used. Here, two output torque estimation methods are proposed. One is the current-based torque estimation method, which uses the proportional relationship between the motor's torque and current. The other is the double-encoder-based method, which is more novel and uses the speed reducer as a torque indicator by considering its stiffness. Both methods achieve desirable performance in validation tests. The study of friction parameters is key to sensorless dynamic control. Our study shows that multiple factors may influence joint friction, including angular velocity, payload and temperature. Complicated friction behaviors such as non-linearity and the Stribeck effect are observed. Moreover, the friction behaviors of the different torque estimation methods are also distinct. A comprehensive friction model with 8 parameters is proposed for the current-based torque estimation method. It takes viscous friction, Coulomb friction, temperature-dependent friction and load-dependent friction all into account. On the other hand, the friction behavior and model for the double-encoder based method are simpler. All these fundamental studies are preparations for practical applications of dynamic control. Basic applications include collision detection and kinesthetic teaching. Collision detection is crucial for human-robot interaction. Validation experiments prove the sensitivity and dependability of the proposed collision detection method. Kinesthetic teaching is another unique function, which allows a human operator to control the robot directly rather than through a teach pendant. Several improvement strategies are proposed to achieve smoother and more stable performance.

Furthermore, the friction models for the two torque estimation methods can be fused together to make the teaching process more labor-saving. The concept of kinesthetic teaching on the joint level is then extended to Cartesian space to perform Cartesian teaching. Since in practice the teaching process usually does not require all DOFs to be compliant, Cartesian teaching is more useful than the original one. Here, the unit quaternion is used to represent the orientation of the robot's end-effector. A more advanced application is also discussed, namely force control applied to arbitrary surface tracking. A hybrid force control method integrating both the impedance control model and explicit force control is adopted. Moreover, in order to be adaptive to arbitrary surfaces without prior knowledge of the shape profile, real-time prediction methods for the surface's shape profile and normal direction are proposed. Two situations are studied. One is force control along the gravitational direction. The other is force control along the normal direction, where the robot should adjust the orientation of its end-effector accordingly.


Both of them are proved to be functional. The control accuracy is adequate for targeted applications such as polishing, milling and deburring. Finally, the transplantation of collaborative applications to industrial robots is studied. The general principles and methodologies are similar, yet several distinctions still exist. This study is meaningful because, up to now, industrial robots have had a larger market share and a wider range of applications. All the discussed sections revolve around the core idea of sensorless dynamic control. Compared with traditional control approaches that involve additional force or torque sensors, it offers lower hardware cost, lower vulnerability and a smaller occupation of the payload capacity. Our studies show that sensorless dynamic control can achieve the same performance in practical applications. In conclusion, sensorless dynamic control is a novel and promising idea, and it has been proved feasible and effective in our studies.

REFERENCES

Al-Bender, F., Lampaert, V., & Swevers, J. (2005). The generalized Maxwell-slip model: a novel model for friction simulation and compensation. IEEE Transactions on Automatic Control, 50(11), 1883-1887.
Albu-Schaffer, A., & Hirzinger, G. (2001, May). Parameter identification and passivity based joint control for a 7 DOF torque controlled light weight robot. In Proceedings 2001 ICRA. IEEE International Conference on Robotics and Automation (Cat. No. 01CH37164) (Vol. 3, pp. 2852-2858). IEEE.
Albu-Schäffer, A., Haddadin, S., Ott, C., Stemmer, A., Wimböck, T., & Hirzinger, G. (2007). The DLR lightweight robot: design and control concepts for robots in human environments. Industrial Robot: An International Journal, 34(5), 376-385.
Albu-Schäffer, A., Ott, C., & Hirzinger, G. (2007). A unified passivity-based control framework for position, torque and impedance control of flexible joint robots. The International Journal of Robotics Research, 26(1), 23-39.
Albu-Schaffer, A., Ott, C., Frese, U., & Hirzinger, G. (2003, September). Cartesian impedance control of redundant robots: Recent results with the DLR-light-weight-arms. In 2003 IEEE International Conference on Robotics and Automation (Cat. No. 03CH37422) (Vol. 3, pp. 3704-3709). IEEE.


Bair, S., & Winer, W. O. (1979). A rheological model for elastohydrodynamic contacts based on primary laboratory data. Journal of Lubrication Technology, 101(3), 258-264.
Bittencourt, A. C., & Gunnarsson, S. (2012). Static friction in a robot joint - modeling and identification of load and temperature effects. Journal of Dynamic Systems, Measurement, and Control, 134(5), 051013.
Cavenago, F., Voli, L., & Massari, M. (2018). Adaptive hybrid system framework for unified impedance and admittance control. Journal of Intelligent & Robotic Systems, 91(3-4), 569-581.
Cen, L., & Melkote, S. N. (2017). Effect of robot dynamics on the machining forces in robotic milling. Procedia Manufacturing, 10, 486-496.
Erickson, D., Weber, M., & Sharf, I. (2003). Contact stiffness and damping estimation for robotic systems. The International Journal of Robotics Research, 22(1), 41-57.
Gao, L. (2018). Force control technology and its application for 7-DOF manipulator. Diss., Shanghai Jiaotong University.
Gao, L., Yuan, J., & Qian, Y. (2019). Torque control based direct teaching for industrial robot considering temperature-load effects on joint friction. Industrial Robot: The International Journal of Robotics Research and Application.
Gao, L., Yuan, J., Han, Z., Wang, S., & Wang, N. (2017, September). A friction model with velocity, temperature and load torque effects for collaborative industrial robot joints. In 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 3027-3032). IEEE.
Gleeson, B., Currie, K., MacLean, K., & Croft, E. (2015). Tap and push: Assessing the value of direct physical control in human-robot collaborative tasks. Journal of Human-Robot Interaction, 4(1), 95-113.
Goto, S., Nakamura, N., & Kyura, N. (2003, September). Forcefree control with independent compensation for inertia friction and gravity of industrial articulated robot arm. In 2003 IEEE International Conference on Robotics and Automation (Cat. No. 03CH37422) (Vol. 3, pp. 4386-4391). IEEE.
Goto, S., Usui, T., Kyura, N., & Nakamura, M. (2007). Forcefree control with independent compensation for industrial articulated robot arms. Control Engineering Practice, 15(6), 627-638.
Gu, Y. L., & Loh, N. K. (1985, December). Dynamic model for industrial robots based on a compact Lagrangian formulation. In 1985 24th IEEE Conference on Decision and Control (pp. 1497-1501). IEEE.
Han, Z. (2019). Collaborative force control for industrial robot manipulator based on internal current sensing information. Diss., Shanghai Jiaotong University.
Han, Z., Yuan, J., & Gao, L. (2018, December). External Force Estimation Method for Robotic Manipulator Based on Double Encoders of Joints. In 2018 IEEE


International Conference on Robotics and Biomimetics (ROBIO) (pp. 1852-1857). IEEE.
Heinrichs, B., Sepehri, N., & Thornton-Trump, A. B. (1997). Position-based impedance control of an industrial hydraulic manipulator. IEEE Control Systems Magazine, 17(1), 46-52.
Hirzinger, G., Sporer, N., Albu-Schaffer, A., Hahnle, M., Krenn, R., Pascucci, A., & Schedl, M. (2002, May). DLR's torque-controlled light weight robot III - are we reaching the technological limits now? In Proceedings 2002 IEEE International Conference on Robotics and Automation (Cat. No. 02CH37292) (Vol. 2, pp. 1710-1716). IEEE.
Hogan, N. (1984, June). Impedance control: An approach to manipulation. In 1984 American Control Conference (pp. 304-313). IEEE.
Hogan, N. (1985). Impedance control: An approach to manipulation: Part I - Theory. Journal of Dynamic Systems, Measurement, and Control, 107(1), 1-7.
Johanastrom, K., & Canudas-De-Wit, C. (2008). Revisiting the LuGre friction model. IEEE Control Systems Magazine, 28(6), 101-114.
Jung, S. (2012, November). A position-based force control approach to a quad-rotor system. In 2012 9th International Conference on Ubiquitous Robots and Ambient Intelligence (URAI) (pp. 373-377). IEEE.
Khalil, W. (2011). Dynamic modeling of robots using Newton-Euler formulation. In Informatics in Control, Automation and Robotics (pp. 3-20). Springer, Berlin, Heidelberg.
Khosla, P. K. (1987). Estimation of robot dynamics parameters: Theory and application. Carnegie-Mellon University, The Robotics Institute.
Lee, H. M., & Kim, J. B. (2013). A survey on robot teaching: Categorization and brief review. In Applied Mechanics and Materials (Vol. 330, pp. 648-656). Trans Tech Publications.
Lee, S. D., & Song, J. B. (2016). Sensorless collision detection based on friction model for a robot manipulator. International Journal of Precision Engineering and Manufacturing, 17(1), 11-17.
Lu, S., Chung, J. H., & Velinsky, S. A. (2005, April). Human-robot collision detection and identification based on wrist and base force/torque sensors. In Proceedings of the 2005 IEEE International Conference on Robotics and Automation (pp. 3796-3801). IEEE.
Lumelsky, V. J., & Cheung, E. (1993). Real-time collision avoidance in teleoperated whole-sensitive robot arm manipulators. IEEE Transactions on Systems, Man, and Cybernetics, 23(1), 194-203.

290

Jianjun Yuan, Yingjie Qian, Liming Gao et al.

Lynch, K. M., & Park, F. C. (2017). Modern Robotics. Cambridge University Press. Olsson, H., Åström, K. J., De Wit, C. C., Gäfvert, M., & Lischinsky, P. (1998). Friction models and friction compensation. Eur. J. Control, 4(3), 176-195. Ott, C., & Nakamura, Y. (2009, October). Base force/torque sensing for position based Cartesian impedance control. In 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems (pp. 3244-3250). IEEE. Ott, C., & Nakamura, Y. (2009, October). Base force/torque sensing for position based Cartesian impedance control. In 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems (pp. 3244-3250). IEEE. Ott, C., Mukherjee, R., & Nakamura, Y. (2010, May). Unified impedance and admittance control. In 2010 IEEE International Conference on Robotics and Automation (pp. 554-561). IEEE. Ott, C., Mukherjee, R., & Nakamura, Y. (2015). A hybrid system framework for unified impedance and admittance control. Journal of Intelligent & Robotic Systems, 78(3-4), 359-375. Pennestrì, E., Rossi, V., Salvini, P., & Valentini, P. P. (2016). Review and comparison of dry friction force models. Nonlinear dynamics, 83(4), 1785-1801. Roveda, L., Iannacci, N., Vicentini, F., Pedrocchi, N., Braghin, F., & Tosatti, L. M. (2015). Optimal impedance force-tracking control design with impact formulation for interaction tasks. IEEE Robotics and Automation Letters, 1(1), 130-136. Schindlbeck, C., & Haddadin, S. (2015, May). Unified passivity-based cartesian force/impedance control for rigid and flexible joint robots via task-energy tanks. In 2015 IEEE international conference on robotics and automation (ICRA) (pp. 440447). IEEE. Schindlbeck, C., & Haddadin, S. (2015, May). Unified passivity-based cartesian force/impedance control for rigid and flexible joint robots via task-energy tanks. In 2015 IEEE international conference on robotics and automation (ICRA) (pp. 440447). IEEE. Siciliano, B., & Khatib, O. (Eds.). (2016). 
Springer handbook of robotics. Springer. Simoni, L., Beschi, M., Legnani, G., & Visioli, A. (2015, September). Friction modeling with temperature effects for industrial robot manipulators. In 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 3524-3529). IEEE. Spong, M. W. (1987). Modeling and control of elastic joint robots. Journal of dynamic systems, measurement, and control, 109(4), 310-318. Stribeck, R. (1902). The key qualities of sliding and roller bearings. Zeitschrift des VereinesSeutscherIngenieure, 46(38), 1342-1348. Waiboer, R., Aarts, R., & Jonker, B. (2005, June). Velocity dependence of joint friction in robotic manipulators with gear transmissions. In Proceedings of the 2005 ECCOMAS Thematic Conference Multibody Dynamics (pp. 1-19).

The Study on Key Technologies of Collaborative Robot …

291

Wang N. 2017. Collision detection for collaborative robotic manipulator based on multiinternal sensor information. Diss., Shanghai Jiaotong University. Wang S. 2017. Direct teaching for collaborative robotic manipulator based on multiinternal sensor information. Diss., Shanghai Jiaotong University. Wang, S., Yuan, J., Fu, X., Wang, N., Zhang, W., & Xu, P. (2016, October). Control and modeling for direct teaching of industrial articulated robotic arms. In 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 923-928). IEEE. You, Y., Zhang, Y., & Li, C. G. (2014). Force-free control for the direct teaching of robots. Journal of mechanical engineering, 50(3), 10-17. Yuan, J. J., Wang, S., Wan, W., Liang, Y., Yang, L., & Liu, Y. (2018). Direct teaching of industrial manipulators using current sensors. Assembly Automation, 38(2), 216-225. Yuan, J., Qian, Y., Gao, L., Yuan, Z., & Wan, W. (2019). Position-based impedance force controller with sensorless force estimation. Assembly Automation.

In: Industrial Robots: Design, Applications and Technology ISBN: 978-1-53617-779-4 Editors: Isak Karabegović et al. © 2020 Nova Science Publishers, Inc.

Chapter 11

THE APPLICATION OF ROBOTS IN THE METAL INDUSTRY Edina Karabegović* Department of Technology, University of Bihać, Bihać, Bosnia and Herzegovina

ABSTRACT

The conditions in which businesses operate today require constant modernization of production processes, regardless of their technological level. Constant market demands imply changes in production processes that aim to execute production automatically. Only under such conditions will companies be able to increase the quality of the final product and reduce production costs. The steps taken in this direction require a reduction in manpower, the most sensitive factor in any process. Views on manpower reduction differ. Some of the studies conducted indicate that, with the introduction of robotic automation, a high number of workers will lose their jobs by 2030. It is a well-known fact that researchers in the field of robotics, in addition to the development and automation of robots, are also working to create new opportunities for workers in manufacturing jobs. The direction of development in the field of production worldwide is aimed at Industry 4.0, which strives for manufacturing based on smart production processes. Such an approach to the development of production processes certainly requires the introduction of new technologies such as robotics and automation, the Internet of Things (IoT), 3D printing, radio frequency identification (RFID), smart sensors, etc. The role of manpower, which has been an essential factor in present-day production, is changing in all industrial branches, including metalworking. Tasks performed by workers in difficult, physically and psychologically tedious, and dangerous routine operations, which have daily damaged their health and safety, are shifting to other, easier forms of intellectual work. Certainly, this requires individuals to be educated in trends that are now focused on the design, monitoring and control of production processes. The justification for these changes lies in the enormous advantage that industrial and service robots provide in jobs with frequent repetition of the same work operations. Furthermore, the development of robotics has followed the direction of changes associated with automated production. First-generation industrial robots did not meet the requirements of automatic production, which is not the case with the next generation: collaborative robots. The chapter presents a vision of the application of industrial robots in the metal industry in the world.

* Corresponding Author's E-mail: [email protected].

Keywords: production process, metal industry, robot, automation, application

INTRODUCTION

The majority of developed countries are already largely following the changes taking place in the development and improvement of production processes towards Industry 4.0. Technological development aims at raising the level of scientific knowledge, production processes and systems, ways of organizing production activities, and so on. Metalworking production is an important segment of industrial production, so meeting the requirements for its higher level is based on the application of automation and intelligent systems. Automated production systems provide greater power, speed and precision than production systems with higher workforce participation. They help reduce production costs while increasing product quality, reducing overall product prices and increasing competitiveness in the market. The application of robots in production processes has a technological and economic justification, as confirmed by the authors' research (Karabegović 2018, 59-71; Karabegović 2018, 001-007; Karabegović 2018, 110-117; Karabegović 2018, 11-16; Karabegović 2017, 29-38; Karabegović 2016, 15-22; Karabegović 2015, 185-194; Doleček 2003, 25-35; Doleček 2002, 1-34; Doleček 2008, 231-268). The benefits are the following:

- Reduction of workforce in the conditions of performing difficult and repetitive jobs,
- Increased safety at work in inadequate working conditions,
- Increased accuracy in production of parts and better product quality,
- Reduced production costs, and
- Increased production flexibility to meet market demands.

These advantages are especially significant in the conditions of higher-capacity production and as such are evident in the metalworking industry. In the examples cited in this chapter, robots have, in addition to the technological justification, also justified the economic aspect of operation (Doleček 2002, 1-34).

THE APPLICATION OF INDUSTRIAL ROBOTS IN THE METALWORKING INDUSTRY

The improvement of production processes aims to increase productivity, reduce costs, and achieve better product quality. The application of robots in all production segments is one way to reach these goals. The application of robots should achieve the following (Doleček 2002, 1-34; Uzunović-Zaimović 1997, 34-58; Nikolić 1979, 24-86; Potkonjak 1988, 14-78; Krstulović 1990, 32-74):

- Higher production output with economic justification, whether in existing production processes or in the introduction of new plants, and
- A higher degree of automation with satisfactory flexibility.

Considering that the role of manpower is still irreplaceable in the realization of production processes, while robots are of great assistance in achieving cost savings and higher productivity, the interaction between them is solved with the application of collaborative robots. A collaborative robot is a type of robot specially designed for direct interaction with humans within a predefined shared workspace. Human intelligence and ability, coupled with the power and precision of the robot, have led to greater flexibility and efficiency in production processes. The metalworking industry covers all production and service processes, from part production to assembly into semi-finished or finished products. The level of process execution in the metalworking industry needs to meet predefined market requirements. Robots in the metalworking industry are used in the following:

- Process operations,
- Transport of materials before and after processing,
- Assembly processes of subsets/sets/finished products, and
- Control processes (inter-operative and final product controls).

Today's technological processes and systems are not yet fully automated. Market demands are often focused on introducing new products or modifying existing ones, which creates new conditions for production and requires parallel development of existing and new technologies and systems. World economic trends of market expansion imply the application of robots in all branches of industry, including metalworking, where the use of robots varies (Karabegović 2018, 1-13; Karabegović 2017, 110-117; Karabegović 2016, 92-97; Litzenberger 2018, 1-28; World Robotics 2017, 26-492; World Robotics 2016, 11-18; World Robotics 2015, 13-26; World Robotics 2011, 7-16; World Robotics 2010, 7-12). The studies conducted so far have resulted in the data given in Figure 1 and Figure 2. An overview of the application of industrial robots in the metal industry for the period 2011-2018 is given in Table 1. The statistical data listed in the tables and diagrams are taken from the International Federation of Robotics (IFR), the United Nations Economic Commission for Europe (UNECE) and the Organization for Economic Cooperation and Development (OECD). According to Table 1, the application of industrial robots is continuously increasing in all segments of the metal industry in the world. The largest representation is recorded in the production of motor vehicle parts, with about 44,340 robot units applied in 2018. Figure 1 shows the annual trend of application of industrial robots across different industries in the world. The application of industrial robots worldwide in all industries was increasing in the period 2011-2018; in 2018, it reached over 400,000 robot units. In the metal industry, the upward trend was lower in 2018, reaching about 130,000 robot units. It is estimated that the application of industrial robots will increase in all industries: about 630,000 robot units are expected to be applied in 2021, of which around 195,000 robot units annually in the metal industry worldwide. The comparison of the application of industrial robots across industries in the world is shown in Figure 2.

Table 1. The application of industrial robots worldwide in the metal industry

Segment (robot units)                 2011     2012     2013     2014     2015     2016     2017     2018
Metal                               14,125   14,082   16,458   21,176   29,445   28,710   29,850   31,280
Motor vehicles, engines and bodies  30,805   33,504   31,787   34,834   35,882   38,611   39,710   41,230
Spare parts for motor vehicles      20,926   22,208   25,329   44,042   42,039   42,345   43,280   44,340
Metal products                       8,262    6,882    7,607   12,056   13,110   13,092   13,480   14,120
TOTAL                               74,118   76,672   81,181  112,108  120,476  122,808  126,320  130,970
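As a quick arithmetic check of the trend described above, the year-over-year growth of the TOTAL row of Table 1 can be computed directly; the sketch below uses only the published totals.

```python
# Sketch: year-over-year growth of robot installations in the metal
# industry worldwide, computed from the TOTAL row of Table 1 (robot units).

totals = {2011: 74118, 2012: 76672, 2013: 81181, 2014: 112108,
          2015: 120476, 2016: 122808, 2017: 126320, 2018: 130970}

years = sorted(totals)
for prev, cur in zip(years, years[1:]):
    change = 100.0 * (totals[cur] - totals[prev]) / totals[prev]
    print(f"{prev}->{cur}: {change:+.1f}%")

# Overall 2011-2018 growth factor of the TOTAL row
print(round(totals[2018] / totals[2011], 2))  # 1.77
```

Every year-over-year change is positive, which is the "continuously increasing" trend stated in the text.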


[Figure 1 plots the number of robot units per year; series: application in all industries, application in the metal industry, and estimated future application.]

Figure 1. The application of industrial robots in the world across various industries for the period 2011-2018 with estimated application for the period 2019-2021 (Litzenberger 2018, 1-28; World Robotics 2017, 26-492; World Robotics 2016, 11-18; World Robotics 2015, 13-26; World Robotics 2011, 7-16; World Robotics 2010, 7-12).

[Figure 2 plots the number of robot units per year; series: application in manufacturing, the metal industry, the electrical/electronic industry, and the plastic and chemical product industry.]

Figure 2. The application of industrial robots in production worldwide in the metal industry, electric/electronic industry, and plastics and chemical product industry for the period 2011-2018 (Litzenberger 2018, 1-28; World Robotics 2017, 26-492; World Robotics 2016, 11-18; World Robotics 2015, 13-26; World Robotics 2011, 7-16; World Robotics 2010, 7-12).

Based on the diagram given in Figure 2, it can be concluded that the application of robots in production is the highest in the metal industry, followed by the electric/electronic industry, and the plastic and chemical product industry. The application of industrial robots in the food, glass and rubber industries has not been analyzed, because the application of robots in these industries is far smaller.

Increased productivity, reduced participation of manpower and the associated costs, cost savings and increased quality, as well as the protection of human health, are among the reasons why robots are used in metalworking processes (Karabegović 2005; Jurković 2003; Doleček 2003; Karabegović 2004).

THE APPLICATION OF INDUSTRIAL ROBOTS IN THE PRODUCTION PROCESSES OF THE METAL INDUSTRY

The basis of any industrial production is the production process, which comprises all actions performed in a production system (company) on input quantities (material, energy and information) with the aim of producing output quantities (products, services and information). This means that the production process in the metal industry is an indivisible whole in which organizational activities are conducted from the warehouse of starting material, through transport, technological processes and assembly, to the warehouse of the finished product.

The Application of Industrial Robots in Material Transport

In order to ensure continuity of the process, it is necessary to provide an inflow of materials, energy and information in all metalworking processes. The term material flow in this case refers to the material before processing, during processing and after the completion of the processing, assembly and inspection phases. The production segment related to material transport needs to be specially emphasized, since material flow time in the metalworking industry mostly depends on the following:

- Time needed for material transport and handling,
- Waiting time that depends on the machine handling systems, and
- Time of planned downtime that is determined when designing the production process.
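The decomposition above can be expressed as a trivial sum; the sketch below uses illustrative timing values (none of the numbers come from the chapter) to show why cutting the transport component pays off directly.

```python
# A minimal sketch of the material flow time decomposition described above.
# All timing values are illustrative assumptions, not figures from the chapter.

def material_flow_time(transport_s: float, waiting_s: float, downtime_s: float) -> float:
    """Total material flow time: transport/handling time + waiting time
    + planned downtime."""
    return transport_s + waiting_s + downtime_s

# Automating internal transport reduces the transport component directly:
manual = material_flow_time(transport_s=120, waiting_s=45, downtime_s=30)   # 195 s
robotic = material_flow_time(transport_s=40, waiting_s=45, downtime_s=30)   # 115 s
print(manual, robotic)  # 195 115
```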

In production companies where complete automation has not been finalized, material transport is still dependent on the human factor. However, the demands for increased speed of work, reduced costs and a uniform flow rate of processing materials have resulted in the automation of the internal transport of materials.


As material transport time is one of the factors that enables rationalization of the total processing time and achieves significant savings, an analysis is required when selecting and introducing appropriate transport systems. In addition to continuous transport in the form of conveyor belts moving at a constant speed, robots are also used in material transport (Nikolić 1979, 24-86; Potkonjak 1988, 14-78). Their task is to put the workpiece material into the machine and then, after final processing, return it to the belt as a semi-finished or finished product. The robot can also support continuous transport by performing various process, assembly or control operations on an object that remains on the belt. Such application of robots in production, in addition to characteristics such as load-bearing capacity and precision, places requirements on object recognition and selection, and on decision-making in situations corresponding to real working conditions. Thanks to the application of new technologies in robotics, research and advancements in this direction are continuously conducted (Karabegović 2019, 3-17). The development and application of sensors, followed by 2D and 3D cameras (Figure 3), enables the recognition, visual control, positioning, orientation and visual tracking of products in the processes of machine handling, palletization, packaging, control, etc. The application of robots in the transport of large parts between processing phases is given in Figure 4. The application of robots ensures continuity and safety in the transport of products of specific shape, dimensions and weight in the packaging process (Figure 5). Unlike humans, the robot is not bothered by continuous, monotonous and repetitive workloads.

Figure 3. Robot supported automatic transport of products (Fanuc 2019).


Figure 4. The interphase transport of large parts in the processing (Fanuc 2019).

Figure 5. Transport of products for packaging (Motion Controls Robotics, 2019).

Figure 6. Finished products transport (Fanuc 2019; Dassault Systemes 2019).


The application of robots in the final assembly phase of the product directly on the conveyor belt saves time and space (Figure 6). The development of artificial intelligence offers broader automation in industrial production. Since the highest intensity of movement in the metalworking industry relates to workpieces, optimizing these movements can lead to an increase in the efficiency of production processes. The savings thus generated are a significant reference for Industry 4.0.

The Application of Industrial Robots in Machine Handling

The application of robots in the metalworking industry in machine handling involves performing various tasks, such as:

- Taking a workpiece off the conveyor belt and placing it in a specific work position of the processing machine,
- Removing the object from the machine and placing it on the conveyor belt or the intended location, and
- Changing the work tool and depositing it in tool storage.

The complexity of taking a workpiece off the conveyor belt depends on the complexity of the robot's control system. If the workpiece is stationary during pickup (the belt stops, or a mechanical barrier stops the workpiece), then a relatively simple robot control system is sufficient. However, retrieving a workpiece from a moving belt requires more complex robot control. In the conditions of flexible production, it is possible to produce a varied range of products. Products vary in material, dimensions, shape, weight, technological manufacturing operation, etc. Very often, the robot is required to identify the workpiece before it is placed in the machine, or when removing it from the machine after processing, as shown in Figure 8. Such operating conditions require technical progress in the tasks the robot performs. The wider field of application of robots requires the use of advanced grippers (two-arm or multi-arm) with quick-change systems. In this way, the loading and unloading times of parts are reduced and significant savings are achieved.
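Retrieving a part from a moving belt means aiming where the part will be when the gripper arrives, not where the camera saw it; a minimal sketch of that prediction (the function name and all values are illustrative assumptions):

```python
# Sketch: predicted pickup point for a workpiece on a moving conveyor.
# The controller compensates for belt motion between detection and pickup.
# All parameters are illustrative assumptions.

def pickup_point(detected_x_m: float, belt_speed_m_s: float, travel_time_s: float) -> float:
    """Position along the belt axis where the gripper should meet the
    workpiece, given the detected position, belt speed, and the time the
    robot needs to reach the belt."""
    return detected_x_m + belt_speed_m_s * travel_time_s

x = pickup_point(detected_x_m=0.50, belt_speed_m_s=0.25, travel_time_s=0.8)
print(round(x, 3))  # 0.7
```

With a stopped belt (speed zero), the formula collapses to the detected position, which is why the stationary case needs only simple control.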


Figure 7. Machine handling with workpieces (Fanuc 2019).

Figure 8. The application of robots in the conditions of flexible production of different product range (Fanuc 2019).

The various activities of robots in machine handling require adequate flexibility of the gripper (Figure 9). For this reason, universal grippers or quick-change systems are used.


Figure 9. Different types of robot grippers in machine handling (Fanuc 2019, Fanucamerica 2019).

Figure 10. The application of portal and articulated robot for machine handling (Fanucamerica 2019).

Robots for machine handling are usually of the articulated or portal type. The articulated robot is installed next to the machine and loads workpieces from the front, whereas portal robots handle the machine from above, resulting in greater productivity and savings in the workspace of the production environment. The application of a portal and an articulated robot in machine handling processes is given in Figure 10. The complexity of the robot's task also increases when it handles multiple material processing machines.


Likewise, the cost-effectiveness of robots in machine handling is very important, so planned activities aim to synchronize the work of robots across multiple tasks or to apply one robot to serve several machines (Figure 11).

Figure 11. Robot handling multiple machines (Fanuc 2019).
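How many machines one robot can serve before it becomes the bottleneck can be estimated with the classic machine-interference rule of thumb; the sketch below is a simplified illustration with assumed cycle and tending times, not a figure from the chapter.

```python
# Sketch: rough upper bound on machines per robot, from the classic
# machine-interference rule n <= (machine cycle + tend time) / tend time.
# All timing values are illustrative assumptions.

def max_machines(machine_cycle_s: float, tend_time_s: float) -> int:
    """Largest number of identical machines one robot can serve so that
    each machine is re-tended before its machining cycle finishes."""
    return int((machine_cycle_s + tend_time_s) // tend_time_s)

print(max_machines(machine_cycle_s=90, tend_time_s=15))  # 7
```

Long machining cycles relative to load/unload time are what make multi-machine tending, common in the automotive industry, economical.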

The most common example of robots handling multiple machines is recorded in the automotive industry. In order to give a more detailed overview of the robot's role in the metalworking industry, examples of robot application in various production operations, measurements and controls are given.

The Application of Robots in Welding Processes

The application of robots in industrial processes has proved particularly justified in welding. Performing welding with robots in the automotive, shipbuilding and other industries is very important because the characteristics of the welded joint are significantly better when robots are used. In addition to improving the quality of welds, the use of robots increases productivity, reduces costs and protects the health of the workers who would otherwise perform the welding process. Figure 12 gives an example of the application of a robot in the welding process. Robots are used in welding processes with a number of different technologies, such as MAG (Metal Active Gas) welding, MIG (Metal Inert Gas) welding, laser welding, spot welding, arc welding, etc. (Figure 13). The robot's structural design is adapted to the welding method used. The welding process is usually performed on conveyor belts. The robot is required to perform welding within the short period during which the workpiece remains on the belt, according to the specified program. Sensors (Doleček 2008, 231-268; Karabegović 2005; Jurković 2003; Doleček 2003) are required to perform movement along a defined path.
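In its simplest form, movement along a defined path is waypoint interpolation at constant speed; the sketch below illustrates this for a straight seam in 2D (all names and values are assumptions, not taken from any particular controller, and real controllers interpolate in all six degrees of freedom).

```python
# Sketch: evenly spaced waypoints along a straight weld seam, traversed at
# constant torch speed. 2D, straight-line case only; illustrative values.
import math

def seam_waypoints(start, end, speed_mm_s, dt_s):
    """Points from start to end (mm), sampled every dt_s seconds at the
    given torch speed."""
    dx, dy = end[0] - start[0], end[1] - start[1]
    length = math.hypot(dx, dy)
    steps = max(1, round(length / (speed_mm_s * dt_s)))
    return [(start[0] + dx * i / steps, start[1] + dy * i / steps)
            for i in range(steps + 1)]

pts = seam_waypoints((0.0, 0.0), (100.0, 0.0), speed_mm_s=10, dt_s=1.0)
print(len(pts), pts[-1])  # 11 (100.0, 0.0)
```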

Figure 12. Application of the robot in the welding process (Fanuc 2019).

The application of sensors in robotic technology and automation, for measuring position, speed, acceleration, force, moment, distance, touch and proximity, is indispensable. The welding process in multi-robot conditions is more complex (Figure 14), which requires the robots to apply more complex control systems. In addition to the use of robots for assembly in the automotive industry, which has the largest share, robots are also used for metalworking operations of cutting, deforming, assembling, etc.

Figure 13. Different methods of welding process (Canadian Metalworking 2019, Fanuc 2019).


Figure 14. The application of multiple robots in the welding process (Welding Productivity 2019).

The Application of Robots in Cutting Processes

Machine handling is a very successful application of robots. This is especially true at higher technological levels of metal cutting, where machining systems meet very demanding market requirements in terms of productivity, thanks to the possibility of performing machining operations at high speeds and of performing various cutting operations at one production site. The development of machining systems enabled a number of different cutting operations on machines called machining centers. Handling these machines, in addition to increased flexibility and repeatability in robot operation, requires compatibility with the environment, which is achieved through a common control system (Doleček 2008, 231-268; Karabegović 2005; Jurković 2003; Doleček 2003). The robot's performance is visible through the precision achieved in operation. For example, it enables the proper positioning of workpieces in the clamping position of the machine; thanks to its greater number of degrees of freedom, it can place workpieces in and remove finished pieces from the workspace of the machine; it can change tools during the machining process and store them in appropriate storage; it can perform simpler machining operations, etc. Figure 15 shows the handling of a machining center with a robot. The turning operation commonly refers to the processing of cylindrical parts on a lathe. The role of the robot is mainly reflected in handling tasks, such as placing a workpiece in the clamping head of the machine (Figure 16), or replacing tools during processing.


Figure 15. The application of robots in the handling of the machining center (Indiamart 2019; Fanuc 2019).

Figure 16. The application of robot in handling the lathe (Fanucamerica 2019).

The robot in the drilling operation (Figure 17) economically performs machining in the following cases:

- When large drilling forces are needed,
- In large series with constant repetition, and
- In dangerous and poorly ventilated spaces.

In the milling operation (Figure 18) robots are far more represented than in the turning operation. Robots in the milling operation have:

- Direct application in the processing operation, and
- Application when changing tools and workpieces on a CNC milling machine.

The application of robots in the milling operation of large and heavy-duty parts is very significant: one robot holds the workpiece while the other performs the milling operations.


Figure 17. The application of robot in the drilling process (Fanuc 2019; Fanucamerica 2019).

Figure 18. The application of robot in the milling process (Fanucamerica 2019; Fanuc 2019; Eppm 2019; Esmo-grup 2019).

The application of a robot in the grinding operation (Figure 19) is possible in two ways:

- For workpieces of larger dimensions or weights, the robot carries a grinding device and moves it to grind the intended surfaces, and
- The robot picks up the workpiece and brings it to a fixed grinder. This method is applicable when the workpiece is not large, or when the workpiece requires machining on several machines (e.g., cutting, drilling and grinding).


Figure 19. Grinding of a workpiece (Fanuc 2019).

Figure 20. The application of robot in the polishing process (Fanuc 2019; Robotics online 2019).

In addition to grinding, the metalworking finishing of workpieces also includes polishing, which is likewise performed using a robot, as shown in Figure 20.

The Application of Robots in Metal Sheet Cutting Processes

The application of robots in metal sheet cutting processes uses different types of energy for the cutting operations:

- Plasma,
- Water jet,
- Laser, and
- Gas flames.


The plasma cutting process applies an extremely high cutting temperature (over 20,000 K) directed at the workpiece material. The cutting robot has an articulated configuration with six degrees of freedom of movement, as shown in Figure 21.

Figure 21. Robot during plasma cutting process (Mac-tech 2019; Alfamatic 2019).

Figure 22. The application of robot in a high-pressure water jet cutting process (Fanuc 2019).

Figure 23. The application of robot in the laser cutting process (Fanuc 2019).

The application of the robot in the metal sheet cutting process with a high-pressure water jet (Figure 22) achieves high cutting quality. With a water jet, metal sheet cutting or surface cleaning is performed under high pressure.


Laser cutting (Figure 23) is one of the most modern metal sheet cutting procedures.

The Application of Robots in the Deformation Processes

Hot metal forging is performed on presses/hammers, which are handled by the robot (Figure 24). Due to the high temperatures of the workpieces, which are extracted from the furnace into the working area of the press for further processing, the robot gripper must be resistant to high temperatures. Robot programming is performed in accordance with the design of the metal forging process.

Figure 24. Handling heated workpieces in a robot forging process.

The shaping of workpieces of large dimensions and weight on presses by plastic forming very often requires great strength and easy manipulation. Replacing a human with a robot in these conditions is quite justified. An example of a robot handling a hydraulic press is given in Figure 25. The application of robots in metal forming by deformation is also illustrated by a pipe bending process, as shown in Figure 26.


Figure 25. The application of robot in press handling (Metal Forming Magazine 2019).

Figure 26. Robotic cell for pipe bending (Fanuc 2019).

The Application of Robots in the Foundries

The application of robots in processes requiring special operating conditions has certain advantages. The use of robots under high-temperature conditions, and the manipulation of heavy finished parts with a large number of cycle repetitions per hour, justify their application in casting processes.

The Application of Robots in the Metal Industry

313

Figure 27. The application of robot in a foundry (Automatie-pma 2019).

The metal casting process is performed by filling the mold of a particular shape with the heated metal in the liquid phase to form a finished piece of the desired shape. When the cooling process is completed, the finished piece is removed from the mold using a robot. Since this process requires repetition during the operation, it is possible that the robots with more degrees of freedom of movement will achieve adequate productivity, as shown in Figure 27.

The Application of Robots in the Painting Processes Due to the specificity and performance conditions, the painting process has a number of reasons for justifying the application of robots. The largest representation of robot painting is in the automotive and household appliance industries, as shown in Figure 28. Some examples of justification are:      

Health: protection materials are often toxic and cancerous, Better effects: the proper regulation of the movement of the spray gun ensures the uniformity of the application of the paint, Increased productivity due to the reduced participation of humans and associated costs, Material savings: even application and simple and quick interruption of material flow, Energy savings: no complex ventilation and heating system is required, Safety: danger of flammable substances for protection.

Painting robots must meet certain criteria, such as:

Edina Karabegović

314   

Must have a greater number of degrees of freedom of movement, Large manipulation area, and The robot structure should be protected from the effects of painting.

Figure 28. The application of robots in the painting processes (Fanuc 2019).

Figure 29. The application of multiple robots in the painting processes (Report Herald 2019).

For economic reasons, it is very common to use multiple robots to paint large-scale structures and complexproducts, as shown in Figure 29.

The Application of Robots in the Metal Industry

315

The Application of Robots in Palletizing and Packaging Processes Palletizing and packaging operations can also be classified as workpiece transport operations, because pallet is considered to be a transfer of pieces in groups placed in specific containers. It is as simple process in which the robot takes the products off the conveyor and puts them on a pallet, as shown in Figure 30. In the interphase production process, the pallets would be transported to the machine on conveyor belts and the robot would remove workpieces for further processing on the appropriate machines, thus performing depalletizing.

Figure 30. Palletizing process (Fanuc 2019).

Figure 31. Packaging process (Fanuc 2019).

The application of robots in packaging operations involves taking the product off the belt and placing it in designated packaging spaces (various crates, boxes, etc.), in which the product would be placed on the market after the inspection.

Edina Karabegović

316

The Application of Robots in the Assembly Processes The application of robots in assembly processes has a very high economic justification. Considering that the assembly takes place both during and after the process, the basic requirement for robots is flexibility, which requires the use of sensors to guide the robot in real time and operating conditions. By planning robot application in the assembly process, some of the components or more complex components are already made in one piece during the manufacturing process. In the assembly process, industrial robots are used for the following:     

Placement of parts in working position, Bonding in the assembly process, Control and measurement of certain dimensions and shapes, Integration of technological systems in transferring workpieces from one technological system to another, Storage – disposal of workpieces on palletsafter the completion of assembly, output conveyors or warehouses.

Figure 32. Robots in a flexible assembly cell (Capital Industries 2019; Fanuc 2019).

The application of a robot in the assembly process is given as an example of a flexible assembly cell (Figure 32). The assembly process is closely related to the bonding process, so robots with greater degrees of freedom, higher precision and more complex sensor systems are used to perform the assembly. A simulation of the entire assembly process is applied in practice before performing the process. Simulation is not only for machining and assembly processes, but also for the process of transportation, testing, and other processes that are contained in the overall technological process of obtaining the finished product.

The Application of Robots in the Metal Industry

317

The Application of Robots in the Control Processes Control processes in the metalworking are performed during the processing (interphase control) and as the final control of the finished piece. The application of the robot in the process of interphase control of workpieces is essential in order to identify defects at the very beginning of the machining process, in order to influence the reduction of production costs Figure 33. The control of the finished piece represents the final stage of the production process in the metalworking industry, which is performed visually and with the use of appropriate measuring devices. Control procedure refers to the control of shapes, dimensional sizes, surface quality, quality of material structure, weight, functional and other properties.

Figure 33. Interphase control of workpieces (Fanuc 2019).

Figure 34. Robot in a storage service (Fanuc 2019).

The Application of Robots in the Product Storage Production automation, especially in conditions of flexible production systems, implies that each products is encoded due to simple recording and storage practices. The coding makes the storage process easier for robot application. Figure 34. shows the robot performing a storage function. In flexible production conditions, storage is performed with sufficient space for robot manipulation.

318

Edina Karabegović

CONCLUSION The objective of each production is to reduce costs while increasing the productivity and quality of the final product. When designing any of the metalworking processes, it is necessary to take care that the process is organized with as little human involvement as possible in the operational tasks. As shown in this chapter, it is imperative that human labor is minimized in order to leave physically and mentally tedious jobs to robots. Humans, as previously unavoidable participants in production, would be assigned to more intellectually important and physically simpler tasks. Whenever possible, and in order to introduce production automation, the introduction of intelligent systems is required. The development of the metalworking industry in the world is moving towards the Industry 4.0. Any deviation or slowdown in development diminishes competitiveness in the market and inevitably leads to downtime. The above examples of robot implementation have shown that there is justification for their application. The fact that the introduction of robot initially requires considerable financial investment should not be the reason for giving up. Prior to every robot implementation an analysis should be conducted to justify the application and to calculate production costs. The introductory section outlines the current state of the robot application, as well as the predictions expected in this segment of industrial production. In addition, it also describes direct examples of the application of robots with the aim of making changes to introduce robots to places where they have not been implemented so far. The decision to introduce robots into all the above-mentioned segments of metalworking industry would represent one of the conditions for achieving greater efficiency in the work of industrial processes. 
The use of robots is no longer limited to industrial robots with the specific requirements of the safe working environment, because the achieved robot-human interaction has created the conditions for their joint workthrough the use of collaborative robots. This does not mean that the introduction ofrobots implies that the process of development and advancement of the metalworking industry would be completed. The conducted research shows that the changes taking place in order to improve production are permanent and each subsequent stage in advancement would result in even better quality or higher productivity, which represents the maximum level in the current time only.

The Application of Robots in the Metal Industry

319

REFERENCES Alfamatic. (2019). Fanuc ARC Matte 120iB Plasma cutting application, Accessed August 28. https://alfamatic.ru/services/robotizatsiya/ plazmennaya-rezka/. Assemblymag. (2019). Robots for Handling Heavy Loads, Accessed December 20. https://www.assemblymag.com/articles/95181-robots-for-handling-heavy-loads. Automatie. (2019). Machinebouw 4.0, Accessed December 22. https:// automatiepma.com/nieuws/machinebouw-4-0-verandert-rangorde. Canadian Metalworking. (2019). Laser Welding Creates Efficiency Sweet Spot, Accessed August 20. https://www.canadianmetalworking.com/ canadianfabricatingandwelding/ article/fabricating/laser-welding-creates-efficiency-sweet-spot. Capital Industries. (2019). Assembly systems, Accessed December 22. https://www. capitalindustries.com/assembly-systems.html. Dassault Systemes. (2019). MACHINES TO LEARN: Artificial intelligence can transform production, and moreover it is dispute filing, Accessed July 10. https://blogs.3ds.com/delmia/machines-that-learn-artificial-intelligence-maytransform-manufacturing-but-adoption-is-slow/. Doleček Vlatko, Karabegović Isak & Mahmić Mehmed. (2003). “Application of Industrial Robots in Flexible Assembly Line”, Paper presented at the International Conference, Development and Modernization of Production Engineering, Bihać, Bosnia and Herzegovina, 327-332. Doleček, Vlatko & Karabegović Isak. (2003). “Application of robots in 21st Century” (in Bosnian), Paper presented at the International Conference, Development and Modernization of Production Engineering, Bihać, Bosnia and Herzegovina, 3-22. Doleček, Vlatko & Karabegović Isak. (2002). Robotic, Bihać: University of Bihać, 1-34. Doleček, Vlatko & Karabegović Isak. (2008). Robots in the industry, University of Bihać, Bihać, 231-268. Eppm. (2019). Fanuc to champion robot integration at EMO 2019, Accessed August 28. https://www.eppm.com/machinery/fanuc-champion-robot-integration-emo-2019/. Esmo-group. (2019). Robotics in Automation, Accessed August 28. 
https://esmogroup.com/automation/en/technologies/robotics/. Fanuc (2019). Polishing and packaging with robots, Accessed August 28. https://www.fanuc.eu/si/sl/izku%C5%A1nje-strank/fastlog. Fanuc. (2019). Aerospace Drilling & Deburring Robot - The New FANUC M-900iB/700 Robot, Accessed August 25. https://www.youtube.com/ watch?v=JG1DprGbNMs. Fanuc. (2019). Arc Welding Applications Accessed August 20. https://www.fanuc.eu/de/ en/industrial-applications/arc-welding-applications. Fanuc. (2019). Assembly - FANUC LR-Mate robots in a component assembly cell for an automotive Tier 1 supplier, Accessed December 22. https://www.youtube.com/ watch?v=k-SM5XcFksY.

320

Edina Karabegović

Fanuc. (2019). Azimatronics Water Jet Cutting Robotic Project, Accessed August 28. https://www.youtube.com/ watch?v=013q9wgmZQ0. Fanuc. (2019). Era Systems, Accessed July 8. https://i.pinimg.com/736x/ 51/a2/8e/51a28e2826e5df342f60f1f5072ac320--fanuc-robotics-robot-factory.jpg. Fanuc. (2019). FANUC Industrial Robots at AUDI, Accessed December 22. https://www.youtube.com/watch?v=rbki4HR41-4. Fanuc. (2019). Fanuc M-10iA Plumbing Handle Polishing Robot - Courtesy of Acme Manufacturing, Accessed August 28. https://www.youtube.com/watch?v=Xy6eCCFTB4. Fanuc. (2019). Fanuc manufacturing automation work cell developed for Johnson Controls, Accessed December 20. https://www.youtube.com/ watch?v=r MBTiBBTV_A. Fanuc. (2019). Fanuc model M2 0 i A / M20, Accessed August 28. https://www.fanuc.eu/ rs/en/customer-cases/high-efficiency-sanding-and-smoothing-of-speaker-cabinetsurfaces. Fanuc. (2019). FANUC part transfer robots Accessed July 8. https://www.fanuc america.com/industrial-solutions/manufacturing-applications/part-transfer-robots. Fanuc. (2019). Fanuc Robomachine products 2019, Accessed August 28. https://www. youtube.com/watch?v=tDcW9P9FliY. Fanuc. (2019). Fanuc Robot CNC Fanuc LR Mate 2, Accessed August 20. https://www.kitmondo.com/listing/fanuc-robot-cnc-fanuc-lr-mate-2-p80208004/. Fanuc. (2019). Fanuc Robot Plasma Welding, Accessed August 20. https://www.youtube.com/watch?v=hzAwjwK6xtk. Fanuc. (2019). Fanuc robot with 35 position pallet stand, Accessed December 22. https://staubinc.com/staub-5-axis-machining/dsc03503/. Fanuc. (2019). Friction Spot Welding to Plan Role in Automative Lightweighting Accessed August 20. https://www.assemblymag.com/ articles/94484-friction-spotwelding-to-play-role-in-automotive-lightweighting. Fanuc. (2019). Industrial-applications/automated-material-handling, Accessed December 22. https://www.fanuc.eu/uk/en/industrial-applications/automatedmaterial-handling. Fanuc. (2019). Integrated production solutions Accessed July 12. 
https://www.fanucamerica.com/industrial-solutions/manufacturingapplications/machine-tending-robots. Fanuc. (2019). Introduction of Robots for the Machine Tending Industry, Accessed August 20. https://www.fanuc.co.jp/en/product/robot/application/machining/ index.html. Fanuc. (2019). Material Handling - FANUC robots on a rail with machine vision, Accessed July 12. https://www.youtube.com/ watch?v=RFE0bH85WVg.

The Application of Robots in the Metal Industry

321

Fanuc. (2019). Material handling automation from a single partner, Accessed December 22. https://www.fanuc.eu/uk/en/industrial-applications/ automated-material-handling. Fanuc. (2019). Mobile robots provide flexibility in material handling Accessed July 8. https://www.automationworld.com/factory/robotics/ article/13317873/mobile-robotsdeliver-material-handling-flexibility. Fanuc. (2019). More productive 3-D Laser cutting, Accessed December 20. https://www.fanuc.eu/ch/it/sistemi-laser. Fanuc. (2019). Process flexibility with wide reach, Accessed December 22. https://www.fanuc.eu/bg/en/robots/robot-filter-page/paint-series/p-250ib-15. Fanuc. (2019). Quick change grips, Accessed July 12. https:// www.fanuc.eu/uk/en/industrial-applications/machine-tending. Fanuc. (2019). Robotic Arc Welding with Servo Robot Seam Tracking Process Control & FANUC ARC Mate 100iD Robot Accessed August 20. https://www.youtube.com/ watch?v=mMygZYAUXx4. Fanuc. (2019). Robotic Waterjet Trimming with FANUC LVC - Dynamic Robotic Solutions (formerly KMT), Accessed August 28. https:// www.youtube.com/ watch?v=9FgNWOB0sPo. Fanuc. (2019). The production will be for the solar panel, Accessed July 8. https://www.inventekengineering.com/services/solar-panel-automation/. Fanuc. (2019). Accessed July 10. https://i.ytimg.com/vi/69RtLBImXiU/ maxresdefault.jpg. Fanuc. (2019). Flexible Part Feeding with Graco’s G-Flex™ 1500 Feeder and FANUC Robots, Accessed July 12. https://www.fanuc.com/ watch?v=TGOvEFWde9M. Fanuc. (2019). Need 3D Vision Guidance, Accessed July 12. https://www.assembly mag.com/articles/91945-do-you-need-3d-vision-guidance Fanucamerica. (2019). Applications machine tending robots, Accessed August 20. https://www.fanucamerica.com/industrial-solutions/manufacturingapplications/machine-tending-robots. Fanucamerica. (2019). Automat tool change, Accessed July 12. https:// www.fanucamerica.com/industrial-solutions/manufacturing-applications/machinetending-robots. Fanucamerica. (2019). 
Automated machining for more versatile milling, drilling and tapping, Accessed August 25. https:// www.fanucamerica.com/products/fanucrobodrill-robomachine. Fanucamerica. (2019). Fanuc Material Removal Robots, Accessed August 28. https://www.fanucamerica.com/industrial-solutions/ manufacturingapplications/material-removal. Fanucamerica. (2019). Maximize productivity with a robotic machine solution Accessed July 15. https://www.fanucamerica.com/industrial-solutions/manufacturingapplications/machine-tending-robots.

322

Edina Karabegović

Garnier Sebastien, Subrin Kevin & Waiyangan Kriangkrai. (2017). “Modelling of robotic drilling” Paper presented at the Conference on Modelling of Machining Operation, Burgundy, France, June 15-16. http://www.plasticsdist.com/equipment/robotic/robotic_inages/fanuc420a_material_Hnd_ md.jpg Indiamart. (2019). Fanuc Robodrill Compact vertical machining center, 10-20 HP Accessed August 20. https://www.indiamart.com/ proddetail/fanuc-robodrillcompact-vertical-machining-center-20936402688.html. Jurković, Milan & Karabegović Isak. (2003). “Advanced Technologies for Countries in Transition”, Paper presented at the International Conference, Development and Modernization of Production, Bihać, Bosnia and Herzegovina, 23-38. Karabegović Isak, Jurković Milan & Doleček Vlatko. (2005). “Application of Industrial Robots in Europe and the World”, Paper presented at the International Conference Research and Development in Mechanical Industry, Vrnjačka Banja, Serbia, September 4-7. Karabegović Isak, Karabegović Edina, Mahmić Mehmed & Husak Ermin. (2015). “The application of service robots for logistics in manufacturing processes”, Advances in Production Engineering & Management, 10(4), 185-194. Karabegović Isak, Mahmić Mehmed & Karabegović Edina. (2004). “Representation of industry robots at the industry branches”, Paper presented at the International Conference Research and Development in Mechanical Industry, Vrnjačka Banja, Serbia, September, 13-16. Karabegović, Isak. (Ed), (2019). New Technologies, Development and Application, Springer International Publishing, Germany., 3-17. DOI: 10.1007/978-3-319-908939. Karabegović, Isak & Husak Ermin. (2016). “China as a Leading Country in the World in Automation of Automotive Industry Manufacturing Processes”, Journal Mobility and Vehicle, 42(3), 15-22. Karabegović, Isak & Husak Ermin. (2018). “Industry 4.0 based on Industrial and Service Robots with Application in China”, Journal Mobility and Vehicle, 44(4), 59-71. Karabegović, Isak. (2017). 
“Comparative analysis of automation of production process with industrial robots in Asia/Australia and Europe”, International Journal of Human Capital in urban management, 2(1), 29-38. Karabegović, Isak. (2018). “Application of Industrial Robots in the Automation of the Welding Process”, Journal Robotics & Automation Engineering, 4(1), 001-007. Karabegović, Isak. (2018). “The Role of Industrial and Service Robots in Fourth Industrial Revolution with Focus on China”, Journal of Engineering and Architecture, 5(2), 110-117.

The Application of Robots in the Metal Industry

323

Karabegović, Isak. (2018). “The Role of Industrial and Service Robots in Fourth Industrial Revolution”, ACTE Technica Corviniensis-Bulletin of Engineering, XI (2), 11-16. Krstulović, Ante. (1990). Application of the Robots in Technological Processes (in Croatian), Zagreb: Faculty of Mechanical Engineering and Naval Architecture, Zagreb, 32-74. Litzenberger, Gudrom. (2018). “World Robotics 2018, Industrial and Service Robots, Paper presented at the IFR Press Conference, Tokyo, Japan, October 18. Mac-tech. (2019). PCR42 Plasma Cutting Robotic Solution, Accessed August 28. https://www.mac-tech.com/product/prodevco-pcr42-robotic-plasma-cutting-solution. Metal Forming Magazine (2019), Teaming Hydraulic Presses with Robots. Accessed December 20. https:// www.metalformingmagazine.com/magazine/article/?/2018/2/1/ Teaming_Hydraulic_Presses_with_Robots. Motion Controls Robotics. (2019). MCRI robotics processing and packaging market Accessed July 10. https:// motioncontrolsrobotics.com/robotic-roll-processingpackaging-market-defined-mcri/. Motion Controls Robotics. (2019). Transportation and transportation of products, Accessed July 8. https://motioncontrolsrobotics.com/ robotic-applications/automatedmaterial-handling/transporting-conveying/. Nikolić, Dragomir et al., (1979). Machining (in Serbian), Beograd: Faculty of Mechanical Engineering, Beograd, 24-86. Potkonjak, Veljko. (1988). Robotics and Automation (in Serbian), Beograd: Faculty of Electrical Engineering, Beograd, 14-78. Report Herald (2019). “Painting Robots Market Revenue 2019: ABB, STAUBLI, FANUC, Yaskawa, Kawasaki, CMA Robotics S.p.A”, Accessed December 22. http://reportherald.com/2019/08/13/painting-robots-market-revenue-2019-abbstaubli-fanuc-yaskawa-kawasaki-cma-robotics-s-p-a/. Robotics online. (2019). Robotic Grinding, De-Burring and Finishing Applications, Accessed August 28. https://roboticsonline.wordpress.com/category/deburringfinishing/. Uzunović-Zaimović, Nermina. (1997). 
Measuring techniques (in Bosnian), Zenica: University of Zenica, 34-58. Welding Productivity. (2019). Robot reliability Accessed August 25. https://weldingproductivity.com/article/robot-reliability/. World Robotics 2010-Industrial Robots. (2010). The International Federation of Robotics, Statistical Department, Frankfurt am Main, Germany, 7-12. World Robotics 2011-Industrial Robots. (2011). The International Federation of Robotics, Statistical Department, Frankfurt am Main, Germany, 7-16. World Robotics 2015-Industrial Robots. (2015). The International Federation of Robotics, Statistical Department, Frankfurt am Main, Germany, 13-26.

324

Edina Karabegović

World Robotics 2016-Industrial Robots. (2016). The International Federation of Robotics, Statistical Department, Frankfurt am Main, Germany, 11-18. World Robotics 2017-Industrial Robots. (2017). International Federation of Robotics, Statistical Department, Frankfurt am Main, Germany, 26-422.

In: Industrial Robots: Design, Applications and Technology ISBN: 978-1-53617-779-4 Editors: Isak Karabegović et al. © 2020 Nova Science Publishers, Inc.

Chapter 12

THE IMPLEMENTATION OF ROBOTS IN WOOD INDUSTRY Salah Eldien Omer, PhD* Department of Wood Processing Technologies, University of Bihać,Technical Faculty, Bihać, Bosnia and Herzegovina

ABSTRACT The implementation of robots in wood industry is a necessity in this time to achieve the production objectives. We will start from the beginning of the process of wood industry production from logs yard to furniture production. Every phase of processing has a vital area where the process could be advanced or upgraded by using sophisticated machines our equipment. The new generation of technologies for processing wood easily allows the instillation of robots mostly in all phases of production. Robots will installed where the safety of the process is needed or the increase of productivity with changing the basic technology process. We tried to find the robots in the market which could use to achieve these targets. Every day the robot industry is launching new ones to satisfy the need of wood processing industry.

Keywords: upgrading processing, sophisticated machines and robots in wood processing.

INTRODUCTION As we know wood is one of the first materials humans used for tools and as a building material as well as for furniture production. Yet, wood has a bright future within * Corresponding Author’s E-mail: [email protected]

326

Salah Eldien Omer

the realm of modern industrial robotics. Experts predict that improvements in vision systems will lead to an increase use of robotics by the wood industry. I see more binpicking of wood products and robots sorting wood planks that go into veneer manufacturing. Both require a lot of labour and cause bottlenecks in processing planks for furniture. I think that bin-picking wood parts are an important area, especially when dealing with layers of products such as wooden planks. Processing industry experts thinks woodworking applications have a promising future within robotics. Wood applications are one of our target markets. As wood mill operators realize that workers are getting harder to find, robotics in the wood industry will experience bigger growth in the near few years. Managers of woodworking plants are having difficulty finding the people they need, especially young people in small towns. I have seen many robots put into the wood industry in the last three years, which is giving good results. The wood industry has a lot of potential for robotics. This process will be evolutionary, but I am starting to see openings for robotics in an industry that was relatively closed. We saw a market for robotics in the wood industry, but it is a difficult market for many integrators. Integrators need to have an intense knowledge of the woodworking process, how it handles and grades timber, as well as how production plants are designed. That time the biggest uphill battle is for integrators to understand the wood products industry and the real need for advanced and sophisticated devices and machines (Omer 2020, 181-188; Omer 2019, 163-169; Omer 2015, 67-76; Groover 2015, 22-68; Mitchell 1991, 58-124). Computer controlled automation is becoming increasingly important for production technology. Even in the wood processing industry, CNC machines are finding wider usage. 
They represent complex and integrated CNC machining centres, which execute the various machining steps on the work piece secured into a work piece clamp. Turning, sawing, milling, drilling, sanding or gluing – all these are performed on the wood using programmed tool motions. Conventional control elements, such as foot and hand levers or hand wheels are eliminated. The control motions are instead performed with the computer keyboard or a mouse click, while the functions and operation are monitored with the monitor. The computer controls the machining centre with the data entered to execute all feed motions and clamping processes. Manual intervention by the operator is no longer required. Using a CNC machining centre, the manufacturing operation can attain significantly higher machining accuracies and machining speeds. The frequency of defects and the risk of malfunctions is reduced. Moreover, CNC machines are highly flexible using text or graphics-based reprogramming interfaces, and can be modified with ease to accommodate the individual needs of the specific workshop. CNC technology generally establishes a close link to the design-engineering department, which can use CAD applications to directly develop and implement programmes for the CNC machine (Omer 2015, 67-76).

The Implementation of Robots in Wood Industry

327

Due to their highly flexible uses, CNC machines were become increasingly indispensable for wood processing operations - for industry and workshops alike. A growing number of companies are upgrading to CNC machines in an effort to confront market demands and cost pressures. CNC machining centers are in use particularly when various wood species need to be machined, frequently also in combination with other materials, such as plastics or non-ferrous metals. Solid wood or wood materials, such as plywood, particle board or MDF panels, can be milled (CNC routers), sawed or sanded automatically and in series production. This permits the use of CNC machines to produce construction elements (doors, windows, etc.), for solid wood processing (furniture, interior design), and for panel machining. The use of CNC wood processing machinery even results in highly satisfactory and cost-effective outcomes for the production of intricate musical instruments. So the next step was integrating Robots in the wood industry processing in most phases of production where it is justifided. (Groover 2015, 64-72). CNC machining centers for wood generally consist of four units: the machine frame, the vacuum fixture with suction elements, the machining bed and the machining unit. All units can employ and, where they can provide maintenance-free operation and a long service-life. It is a place for simple Robots to be intyroduced. The machine frame is a basic unit of the machine that houses the control panels. The machine drive is generally also located here. The other travelling units are installed with the machine frame as the foundation. Using the suction elements on the vacuum fixture, the work pieces are precisely located in accordance with the tool size. The vacuum fixture adjustment is controlled from the computer. This is accomplished in part with laser technology. The vacuum fixture and the suction elements can be controlled independently. 
Work pieces are clamped on the processing or machining bed for subsequent machining. Some machines are equipped with multiple machining stations that can be operated independently from one another. For instance, while the work piece is still being machined, the work piece is adjusted at the queued workstation. The CNC machine functions are centrally located in the machining unit. Various processing steps can be executed that depend on the installed tools.Mostly all the above mentioned technical details are integrated in the ROBOTS as a new concept in industry processing. The increasing use of robotics in the manufacturing sector is now being incorporated into the wood products industry (Jackson M.H., Hellström E., Granlund A., Friedler N., 2011, 36-42). Companies across the world are making substantial investments in robotics and automation technology in order to increase efficiencies and make up for shortages in the current educated workforce. According to old statistics presented during the last years, 85% of current robotics use in the wood products industry is on the handling side, while 10% of mechanisms are used for assembly and 5% for machinery. Now the use of robotics can provide

328

Salah Eldien Omer

companies, in all phases of different production, with solutions to issues that are currently roadblocks to growth, or are causing headaches within their own labour force. Robotics can be used to perform low-level tasks needed in the production process, allowing companies to stop competing on wages with competitors and other industries. (Jackson M.H., Hellström E., Granlund A., Friedler N. 2011, 36-42) In many cases, single robots were taking the place of three to five employees. That scenario provides a cost-effective solution for replacing retiring workers, or accomplishing tasks for which qualified employees are unavailable. That ability to replace human workers will become especially important as the ability to attract skilled labourers becomes more difficult in wood products industry. The introduction of robotics has also given wood processing Industries an edge over its competitors, and a solid outlook for the future of the company. If we want to be ahead of everyone wood processing, we need to think about the future. The initial outlay for the first robot in USA before was $2 million, that robot was able to replace five retiring workers at $50,000 per worker per year, while creating a more adaptive system for production that has a better yield, quality and efficiency (World Robotics - Industrial Robots 2010, 2011, 7-12). Now in most of developed world it is far more than that, Robotics is also allowing companies to provide easier systems for customization of product orders by using new product lines that are completely automated. The production line in cabinet production, can produce eight cabinets per hour, 24 hours a day, seven days a week, with staff needed only for supervision and stocking supplies which an economical factor in that industry which open more doors for robotic integration to maximum levels. Now robots are cheaper for purchasing especially in Europe more than other parts of the world. 
In this material we are going to concentrate in certain cases in wood processing industry. So the question is is it in all production faces are justified as investment? (Landscheidta, S.2017, 233-240). Opportunities for robotic automation in wood product industries: the supplier and system integrators’ perspective. Robots have been proven to deliver a host of benefits in a wide variety of applications. End users introducing robots to their production process have typically seen a significant transformation their productivity and efficiency, with higher levels of output, product quality and flexibility amongst the many improvements reported (Omer 2015, 67-76). Ten reasons which were generally verified by using robots in wood industry processing are: 

• Reduce operating costs: energy savings of about 8% for every 1 °C reduction in heating levels, and about 20% from turning off unnecessary lighting; the costs associated with manual workers can also be reduced or eliminated;

The Implementation of Robots in Wood Industry 

 

     


• Improve product quality and consistency: the machine does not suffer from tiredness, distraction and the like, so a high-quality product finish can be counted on;
• Improve quality of work for employees: workers no longer have to work in dusty, hot or hazardous environments, and they also learn programming skills;
• Increase production output rates: robots can be left running overnight and during weekends with little supervision, and do not take time out for breaks, sickness and the like;
• Increase product manufacturing flexibility: after programming the machine, it is easy to switch from one production phase to another;
• Reduce material waste and increase yield, by precise processing of the input material;
• Comply with safety rules and improve workplace health and safety;
• Reduce labour turnover and the difficulty of recruiting workers;
• Reduce capital costs (inventory, work in progress), reduce the cost of consumables, and reduce wastage by moving products faster through production;
• Save space in high-value manufacturing areas: robots can be placed on shelf systems, on walls or even on ceilings.
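The energy figures in the first reason (about 8% per 1 °C of heating setback and about 20% from switching off unnecessary lighting) can be combined into a simple estimate. The baseline heating and lighting costs below are hypothetical, introduced only for illustration.

```python
# Rough saving estimate from the first reason in the list above.
# Baseline annual costs are hypothetical; the 8%/°C and 20% figures
# are those quoted in the text.
heating_cost = 40_000      # hypothetical annual heating cost, USD
lighting_cost = 10_000     # hypothetical annual lighting cost, USD

temp_reduction_c = 2                                   # lower set-point by 2 °C
heating_saving = heating_cost * 0.08 * temp_reduction_c  # 8% per °C
lighting_saving = lighting_cost * 0.20                   # 20% of lighting

total = heating_saving + lighting_saving
print(f"Estimated annual saving: ${total:,.0f}")  # $8,400
```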

Here we consider the verified reasons listed above for using robots in the wood-processing phases as practised in this part of the world. We start from the log yard, because of the problems we used to have with the traditional sorting of logs delivered by the forestry department: the logs seldom arrived in the order requested, but in a very confused one. Most of the companies around us faced the same problem most of the time. We consider the log sorting and preparation phase crucial for upgrading the value of the input material, and the introduction of robots into the programmed plans for processing the delivered raw material is very useful for the many reasons mentioned above.

ROBOT USAGE IN THE LOG STORAGE YARD The wood-industry process usually starts with the storage and sorting of the logs delivered to the factory log yard. The ordered logs mostly arrive in the quantity and quality specified in the order sent to the forestry department. From ancient times, logs were delivered by whatever transport was available at the time, varying with how advanced the area was; from the middle of the twentieth century onwards, mostly by trucks of different kinds. The degree of mechanization of these transport facilities ranged from simple to very


Salah Eldien Omer

mechanized. The equipment used for loading and unloading the trucks varied as well. From the beginning of the 1990s many manufacturers offered very advanced equipment for this purpose, with or without attachment to the trucks, and soon robots began substituting for such equipment in this area. We select and present some of them here.

Figure 1. Robot for loading logs in the log yard as planned (Google 2019)

Figure 2. Modern equipped robot in log yard for sorting logs. (Google 2019)

Robots for this phase of wood processing are developing quickly, because their use shows big benefits in the precise and rapid sorting of logs, as well as in their transfer to the next processing phases.

ROBOTS IN PREPARATION OF LOGS FOR PROCESSING The second phase, preparing logs for further processing, can be divided into two operations. The large and expensive one is the heating or cooking of the logs for


veneer production, in specially prepared pools. Robots were not used in this phase until the last few years, when robots similar to those for transporting and handling logs in the log yard began to be applied.

Figure 3. Robotic crane grapple loader for logs (Lumbermen 2019)

In the log-preparation phase, companies in this part of the world mostly use many workers, and most of them are not technically educated. We find a lot of sorting mistakes, many injuries and much wasted time. The standard cranes for unloading and manipulating the logs to be sorted are complicated and inefficient. Using robots in each phase and in certain areas gives better results and allows the planned sorting to be achieved. The following selected robots make it possible to organize and achieve the planned program in this phase (Andres, J., Bock, T., Gebhart, F., et al. 1994, 87-93).

Figure 4. Robotic station for logs manipulation


One of the important log-preparation phases is the debarking operation, where a robot can either perform the job itself or assist with transporting the logs into and out of the debarking units.

Figure 5. Robot system for debarking logs

Figure 6. Modern debarking machine functioning with robot support

For big logs a strong debarking machine is needed, so robot services are organized for transportation during the process and for safe handling of the logs afterwards.

ROBOTS IN THE PRIMARY LOG SAWING PHASE The primary log sawing phase is very important, and the exploitation of the received logs is economically decisive for the company. Using modern


and very advanced equipment allows the planned upgrading of the raw material. At the input to primary sawmilling it is important to evaluate the quality of each log before it reaches the saw, and many new machines can be installed to do so. Transporting each log through a machine that evaluates its basic value, quality and dimensions gives the operator at the sawing machine a clear picture of what should be done according to the plan for exploiting and using that log. Selecting robots for these operations is very sensitive, because their prices are high and installing them in an existing process is complicated; for these reasons their purchase is not easy (Dalacker, M. 1997, 67-78).

Figure 7. Log scanning for quality and quantity

Figure 8. Robot for optimizing log exploitation

The second important operation in this phase is optimizing the usage of the input material, to maximize the value of the raw material received in the log yard. After the value of a log is defined, its exploitation should be optimized based on the internal qualities registered previously. The selected type of mobile robot makes this optimization easy to realize.
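The optimization step described above can be sketched as a simple selection: after scanning, each log has a set of feasible sawing patterns with an estimated yield value, and the control software picks the most valuable one. The pattern names and values below are invented for illustration; real optimizers work with full cutting geometries.

```python
# Hypothetical sketch of pattern selection after log scanning.
# Pattern names and per-log values are invented examples.
def best_pattern(patterns: dict) -> tuple:
    """Return the sawing pattern with the highest estimated value."""
    name = max(patterns, key=patterns.get)
    return name, patterns[name]

log_patterns = {
    "centre-cut boards": 310.0,   # estimated value per log, EUR
    "quarter-sawn":      275.0,
    "cant + sideboards": 340.0,
}
name, value = best_pattern(log_patterns)
print(name, value)   # cant + sideboards 340.0
```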


Figure 9. Sorting and stacking sawn timber

In the sawmilling process, upgrading the value of the input material is essential, so it matters greatly that the evaluation of log quality and quantity can be done by programmed units such as robots with integrated, specialized software. The next phase is the quality sorting of the sawn elements, assuming the sawing process was done with high-quality machines.

ROBOTS IN THE SECONDARY SAWING PHASE The secondary sawmilling phase is very important for upgrading the value of the raw material received in the log yard and then in the primary sawmill. In our factories we usually had a coefficient of 1.8 to 2.2 when upgrading the value of logs to produced elements, based on the sales plans, so installing sophisticated machines and equipment in this phase of production is very important. Secondary sawmilling is usually organized around the planned need for sawn elements, or those agreed for sale and further processing. This requires very specific organization, with good sorting of the sawn elements and, before that, good selection of the input sawn wood. For these operations, good equipment for sorting, transporting and quickly checking the output elements is needed. We tried to select the most mobile and economically suitable robots for these targets.
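The upgrading coefficient of 1.8 to 2.2 quoted above amounts to simple value bookkeeping. A minimal sketch, with a hypothetical input value:

```python
# Value upgrading from log input to sawn elements, using the 1.8-2.2
# coefficient range reported in the text. Input value is hypothetical.
def upgraded_value(log_value: float, coefficient: float) -> float:
    """Estimate sawn-element value from log value."""
    if not 1.8 <= coefficient <= 2.2:
        raise ValueError("coefficient outside the range reported in the text")
    return log_value * coefficient

print(upgraded_value(100_000, 2.0))  # 200000.0
```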


Figure 10. Mobile industrial robot for input material transport (Mobile-Industrial-Robots 2019)

Figure 11. Gantry robot system (three axes x, y, z) for transporting.

The gantry robot system can be installed above the output conveyors so that the elements, after their defined operations, are sorted by the quality determined previously. Programming these robots is easy, and they can be connected to the overall program that organizes production in this sector. Total production is usually planned from the quality and quantity required for sale. Packing the sorted elements by quality and quantity can also be done by robots.
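The sorting logic just described can be sketched as a routing table: the gantry robot over the output conveyor sends each sawn element to a bin according to its previously determined quality grade. The grade names and bin layout below are assumptions for illustration only.

```python
# Sketch of gantry sorting by quality grade. Grade names and bin
# numbering are hypothetical; real systems read grades from the
# upstream scanning/evaluation units.
GRADE_TO_BIN = {"A": 0, "B": 1, "C": 2}

def route(elements):
    """Group element ids into bins keyed by quality grade."""
    bins = {b: [] for b in GRADE_TO_BIN.values()}
    for elem_id, grade in elements:
        # unknown grades fall through to the lowest-quality bin
        bins[GRADE_TO_BIN.get(grade, max(GRADE_TO_BIN.values()))].append(elem_id)
    return bins

batch = [(101, "A"), (102, "C"), (103, "A"), (104, "B")]
print(route(batch))   # {0: [101, 103], 1: [104], 2: [102]}
```

The same table-driven structure is what lets the sorting program be "connected with the total program": changing the plan only means changing the grade-to-bin mapping, not the robot motion code.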


Figure 12. Robotic packing systems (Google 2019)

Figure 13. Robotic automated packing system (Google 2019)

The packaging system is very important for elements meant for further operations or for transport to kiln drying and the like.

ROBOTS IN EXTERNAL IN-BETWEEN PROCESSING PHASES The external transportation of semi-finished products in the wood-processing industry takes a lot of time and human resources: for example, from the sawmill to secondary sawmilling, and from there to kiln drying and the final production process.


Figure 14. Mobile industrial robot for wood elements (Mobile-Industrial-Robots 2019)

Factories that process logs from the log yard to final products have a big need for quick, organized transport between the production phases, as well as between the production halls indoors and outdoors. Such factories used to employ large numbers of workers, most of them in workplaces where robots are very efficient and can be programmed to work longer than humans, bringing a big gain in maximizing production efficiency. Mobile robots are the most practical for these production phases, and they can also be redeployed to other areas when they are needed elsewhere.

Figure 15. Robot for handling sawn wood and elements (Google 2019)


Figure 16. Autonomous mobile robot for the sawn timber yard (Google 2019)

Sawn wood is usually placed on pallets according to specification, sorted by dimension or quality, to be transported to further processing or outdoors. Cages that protect the selected elements for further processing are therefore very often used. They are also used inside the halls for transporting packages from unit to unit, and are especially practical when quick transportation is needed.

ROBOTS IN VENEER PROCESSING Veneer processing units are a special kind of production and process. Since the full production line is mostly already planned and organized, technically and technologically, by the producers of the machine line, it is easy to add a few robots at certain places in the peeling and slicing processes.


Figure 17. Rotary veneer production process with robot support (Google 2019)

In this process big logs are peeled into rotary veneer, where centring and feeding the logs needs the support of a robot integrated into the process to make it secure and stable. A mobile robot with a wide range can also be chosen and used at different places in the process to maximize its usage. With different devices, it can be used at the end of the peeled-veneer transport track to collect the packages and transport them further.

Figure 18. Robot for veneer handling and control (Google 2019)


Figure 19. Robot in quality control of veneer sheets (Google 2019)

Since the veneer-peeling process is usually a standardized production line, there are many places on the line where robots can be useful, from veneer handling and control of the rotary sheets to correction of the sheet surfaces as required.

ROBOTS IN PLYWOOD AND CURVED-PLYWOOD PROCESSING The plywood production phase is very important, because different plywood boards are produced for different uses. It is also the phase where veneer, as raw material, is upgraded into useful boards of different qualities for different purposes. Robots were introduced into certain of these production phases long ago. They are most useful for handling the big veneer sheets in the different phases, as well as the plywood sheets during and after production.

Figure 20. Robot at the transporter handling veneer sheets and boards (Kuka 2019)


Figure 21. Robot in important phase of repairing veneer imperfections (Fanuc 2019)

Figure 22. Robot at the veneer patching line

Figure 23. Robot for finished board handling and sorting (Fanuc 2019)

In this production process the quality of peeled veneer is usually standardized according to the known standards, so the process needs well-programmed, sophisticated machines and robots to achieve that. The robot industry recognized the need and, in cooperation with technology producers, launched many robots that can be used for such purposes.


The further phase in the veneer industry is plywood production and curved-plywood production. As we have seen, the preparation and selection of veneer for these processes is important to satisfy the required specifications.

Figure 24. Robotic arm automating pressing on the hot press

Figure 25. Robot feeding the press with elements for pressing (Google 2019)

Figure 26. Robot for loading and unloading big presses (Google 2019)


Both processes use hot-pressing systems, and both technologies are well organized, but in certain places robots are needed to eliminate dangerous injuries and to speed up the process. Selecting mobile robots where needed is therefore a wise choice.

ROBOTS IN WOOD-BASED BOARD PRODUCTION Wood-based board production is mostly a continuous process, with a production line defined by the machine producers. In certain areas of the continuous process it is good to include robot services, for the reasons mentioned at the beginning of this material. The particle-gluing process is important for the quality of glue spraying and the economy of the process, and robots are easy to program and maintain for such an important job.

Figure 27. Robot in particle spraying process

Figure 28. Robot technology for handling and packing boards


Figure 29. Palletizing robot for final transport

Since such processes are continuous and closed, handling the finished boards while they are hot is somewhat slow, but using robots where the temperature of the product is relevant is useful wherever possible.

ROBOTS IN FURNITURE PRODUCTION Furniture production saw the earliest application and integration of robots into the production process, because the need for more efficient production was great. Now they are mostly used for the more complicated operations where precision is needed, with quantity and quality following from that. Furniture production is based either on wood-based boards or on massive solid wood (Gramazio, Kohler, 2008, 7-11), and every process has its specific needs for robot services. Board-based furniture is mostly mass production, whereas solid-wood furniture depends on the organization of the production (Nof, Rajan, 2007, 26-42).

Figure 30. Robots in furniture production based on boards.


Figure 31. Robots at different operations on board elements (ABB 2019)

Figure 32. Robots at assembling process in furniture lines (ABB 2019)

As mentioned, robots entered furniture production early, and we find them included at many places in the furniture production process, together with much modern software for organizing the process.

Figure 33. Robots for sanding, welding and painting different products (Kuka 2019)


This applies especially to well-organized technologies such as board-based furniture production, as well as window, door and especially chair production, in almost all phases of the process. Machine producers, especially of the sophisticated lines that mostly use robot services, have included robots almost everywhere in their process lines and units (Kuka 2019). However, calculating production cost per phase shows that there is a limit to the usage of robots in these technologies, because of their prices on the market.

ROBOTS IN STORAGE PHASES The storage phase in the wood industry differs from process to process, according to the product produced and its intended destination (Willmann J., Knauss M., Bonwetsch T., Apolinarska A. A., Gramazio F. & Kohler M. 2016, 16-23). The packaging of final products also dictates how the product is stored, as well as the warehouse organization for its further handling.

Figure 34. Robot in smart non-standard robotic fabrication system, featuring automated gripping and cutting (Kuka 2019)


Warehouses in the wood-processing industry differ according to the organization of production and the further activities planned. They mostly need a smart warehouse system planned within central marketing software. Dispatching of semi-finished or finished products depends on the potential buyer, the type of transport to the buyer, and the position of the warehouse relative to further transportation facilities.

Figure 35. Robot performing fully automated fabrication tasks within an effective workspace

Packaging of the product also depends on the specifications ordered by the buyer. It is a very big job, and a well-organized system with a good robot covers most of the needed activities. The final workplace in the factory is the warehouse exit to the transportation facilities; for that area we chose a big mobile robot to operate from the stocking places to the dispatching spots. Software followed production by phase and programmed all the needed activities weekly, and the robots carried them out very successfully.

CONCLUSION Based on the issues presented in this chapter we can formulate the following conclusions:

• The wood-processing industry is a multi-disciplinary process that uses a very inhomogeneous material, logs, and therefore needs a certain upgrading process;
• Wood, being inhomogeneous, has a limited durability period that dictates the manner of protective storage, and needs a quick system for further processing;
• From the log yard to the first operating or processing place, a few operations are needed to prepare the logs as quickly as possible so that the following processing phase succeeds; robots can do this job, working day and night once programmed;
• The weight and length of the logs delivered to the log yard demand careful handling to protect the logs and preserve their quality, and mobile robots are the best facilities to use there;
• In the log-preparation phases such as debarking, cooking, and transport to precision machines, robots again gain time and precision;
• In transportation, indoors or outdoors, mobile robots show very high efficiency and eliminate the human factor and everything that goes with it;
• Finally, in the phases of final processing, all the advantages of using robots come to the surface, ensuring the planned operations, precision assembly, and the storage of all kinds of products in this wide-ranging processing industry.

REFERENCES

2019 Rotobec 910 Log Loader Knuckleboom, Accessed December 12. https://www.lumbermenonline.com/for-sale/2019-Rotobec-910-Log-Loader-Knuckleboom?itemid=70729
Andres, J., Bock, T., Gebhart, F., et al. (1994). First results of the development of the masonry robot system ROCCO. In: Proceedings of the 11th ISARC (International Symposium on Automation and Robotics in Construction), Elsevier, Oxford, UK; 87-93.
Automation & robotics in the wood industry, Accessed December 24. https://www.kuka.de
Dalacker, M. (1997). Schriftenreihe Planung, Technologie, Management und Automatisierung im Bauwesen. In: Bock, T. (ed.) Entwurf und Erprobung eines mobilen Roboters zur automatisierten Erstellung von Mauerwerk auf der Baustelle [Design and testing of a mobile robot for the automated erection of masonry on the construction site], vol. 1. Fraunhofer IRB Verlag, Stuttgart, Germany; 68-78.
Flexibility and user-friendliness increase the applications, Accessed December 22. https://www.mobile-industrial-robots.com/en/products/mir1000/
Gramazio, F., Kohler, M. (2008). Digital Materiality in Architecture, Lars Mueller Publishers, Baden, Germany; 7-11.
Groover, M.P. (2015). Automation, Production Systems, and Computer-Integrated Manufacturing. 4th ed. Pearson, Boston, USA; 64-72.
Hydraulic Grapple Loading, Accessed December 12. https://www.google.com/search?rlz=1C1NHXL_hrBA708BA708&biw=1517&bih=694&tbm=isch&sxsrf
Industrial Robots, Accessed December 24. https://new.abb.com/products/robotics/industrial-robots
Jackson, M.H., Hellström, E., Granlund, A., Friedler, N. (2011). Lean Automation: Requirements and Solutions for Efficient Use of Robot Automation in the Swedish Manufacturing Industry. International Journal of Engineering Research & Innovation. 2011(2): 36-42.
Landscheidt, S., Kans, M. (2016). Automation Practices in Wood Product Industries: Lessons Learned, Current Practices and Future Perspectives. The 7th Swedish Production Symposium SPS, Lund University, Lund, Sweden; 1-9.
Landscheidt, S. (2017). Opportunities for robotic automation in wood product industries: the supplier and system integrators' perspective. In: 27th International Conference on Flexible Automation and Intelligent Manufacturing, FAIM 2017, 27-30 June 2017, Modena, Italy; 233-240.
Mitchell, Jr., F.H. (1991). CIM Systems: An Introduction to Computer-Integrated Manufacturing, Prentice-Hall International Inc., New York, USA; 58-124.
Nof, S.Y., Rajan, C.N. (2007). Robotics. In: Handbook of Design, Manufacturing and Automation. Wiley, London, UK; 26-42.
Omer, S.E. (2019). Timber Construction and Robots. In: Karabegović, I. (ed.) New Technologies, Development and Application. NT 2018. Lecture Notes in Networks and Systems, vol 42. Springer, Cham; 163-169. https://doi.org/10.1007/978-3-319-90893-9_20
Omer, S.E. (2020). Benefit of Using Robots in the Production of Three-Layer Parquet. In: Karabegović, I. (ed.) New Technologies, Development and Application II. NT 2019. Lecture Notes in Networks and Systems, vol 76. Springer, Cham; 181-188. https://doi.org/10.1007/978-3-030-18072-0_21
Omer, S.E. (2015). Efficient Effects of Robots and CNC Centers in Wood Industry. In: Karabegović, I. (ed.) New Technologies, Development and Application, Mostar, Bosnia and Herzegovina; 67-76.
Omer, S.E. (2016). Justified Usage of Robots in Wood Industry, Saint Petersburg; 58-64.
Optimized Internal Transportation of Heavy Loads and Pallets with MiR1000 and MiR500; Odense, Denmark. https://www.mobile-industrial-robots.com/
Robot technology for wood processing (2011), Wood, Unlimited brochures; 2-14.
Willmann, J., Knauss, M., Bonwetsch, T., Apolinarska, A.A., Gramazio, F. & Kohler, M. (2016). Robotic timber construction - Expanding additive fabrication to new dimensions. Automation in Construction 61; 16-23.
Wood Production Factory, Accessed December 22. https://www.fanuc.com


World Robotics 2010 - Industrial Robots (2010). The International Federation of Robotics, Statistical Department, Frankfurt am Main, Germany; 7-12. http://www.worldrobotics.org
World Robotics 2016 - Industrial Robots (2016). The International Federation of Robotics, Statistical Department, Frankfurt am Main, Germany; 11-18. http://www.worldrobotics.org

In: Industrial Robots: Design, Applications and Technology ISBN: 978-1-53617-779-4 Editors: Isak Karabegović et al. © 2020 Nova Science Publishers, Inc.

Chapter 13

HUMAN GRASPING AS AN ARCHETYPE OF GRASPING IN ROBOTICS: NEW CONTRIBUTIONS IN PREMISES, EXPERIMENTATION AND MATHEMATICAL MODELING Ionel Staretu* Product Design, Mechatronics and Environment Department, Transilvania University of Brasov, Brasov, Romania

ABSTRACT In this chapter we present, in a more complete and unitary way than similar studies, the structural, constructive and functional characteristics of the human hand, so as to be useful for the constructive and functional optimization of anthropomorphic fingered grippers for robots. Thus, we describe the bone structure of the hand and emphasize the observance of the golden proportion between some parts of the human hand. Significant human hand configurations, useful for grasping and micromanipulation, are then highlighted. We experiment with the grasping of three significant types of objects, in a new version that is the most complete and suggestive so far. The minimum and sufficient mathematical conditions for static grasping are provided. We experiment with the micromanipulation of a rod-type object, and the minimum mathematical conditions for safe micromanipulation are mentioned. We also present, on the basis of our own experiments, the author's adaptations of Cutkosky's taxonomy, which is extended with two new grasping situations, and of the grasping situations highlighted by Lyons.

* Corresponding Author's E-mail: [email protected].


Keywords: human hand, experimentation, taxonomy, anthropomorphic gripper, mathematical model, micromanipulation

INTRODUCTION In nature, as already shown, there is a wide variety of gripping systems, also called gripping biosystems. Their study has contributed substantially to better knowledge of gripping systems in general, and it was, and remains, particularly useful for the design and improvement of gripping systems used in robots. The human hand is obviously the most important gripping biosystem, especially because of its crucial role in the building of human civilization through the performance of specifically human functions, and as a carrier of tools. The basic structure of this gripping biosystem is shown in Figure 1 and consists of: the part of the brain that coordinates the system's performance; muscular fibers driven by biochemical energy, which act through the tendons of the hand; the skin, for the perception and control of contact with the gripped object; the eye, for identifying the location and shape of the gripped object; and the nerves linking these components, through which the information required for the optimal performance of the biosystem is sent.

Figure 1. Human hand bio-gripper structure.

In terms of gripping, out of the components described above, we are particularly interested in the part that performs the gripping and boosts the necessary strength for this purpose, and therefore the hand itself.


The hand, and especially its skeleton will be described in more detail below, because its structure allows an extremely large variety of gripping methods.

HISTORICAL REFERENCES Compared to other gripping biosystems encountered in nature, the human hand is the most advanced gripping biomechanism, and the most studied and researched throughout history, from ancient times until today. Thousands of years ago it already fascinated man, who left different representations of the hand, such as those at Altamira (Figure 2), dated to over 20,000 years BC.

Figure 2. Human hand representation at Altamira, Spain.

There were also representations in the form of ancient Greek sculptures, the most accurate in outlining the human body, and particularly the hands. The hand is represented in different situations of gripping objects, as in the statue of Zeus at Olympia (Figure 3a (Pheidias 432 BC)), made by Pheidias around 432 BC, which shows Zeus holding a globe, supporting a Victory in his left hand and a rod in his right. Pheidias also made the statue of Athena Parthenos (Figure 3b) in the years 450-448 BC; here the representation of the hands in symbolic configurations is remarkable. Another outstanding statue is Myron's Discobolus, achieved in the years 460-450 BC, where the disc-gripping representation can be seen very accurately (Figure 3c). An important role was given to the hand in Christian representations of bishops, whose hands take different symbolic configurations or hold a sacred object, usually a book (Figure 4a). Builders and painters also perform symbolic gestures with their hands (Figure 4b) or hold various tools related to their specific activities.


If until the Renaissance (around 1500) we can speak, as seen above, only of representations of the human hand, during the Renaissance the investigation of the hand's structure and functions began. Attempts in this direction had obviously been made before; we need only mention that medical schools have existed since the first millennium, for example the medical school at the University of Constantinople, attended before the year 500, and, after the year 1000, such schools in several European centers, like the one in Montpellier (where the medical school was established in the 12th century).

Figure 3. Human hand representation in ancient Greek statues.


Figure 4. Representations of the human hand in the iconography of Byzantine origin.


Certainly, in these schools the function and anatomy of the hand were studied to some extent. Following these interests, we must mention Leonardo da Vinci's (1452-1519) works (Figure 5) and the representations of the human skeleton by Andre Vesale (1514-1564) (Figure 6), where it can be noted that the hand biomechanism was already known in detail.

Figure 5. Anatomical parts of the human body by Leonardo da Vinci.

Figure 6. Skeleton by Andre Vesale.


Figure 7. Rembrandt- The Anatomy Lesson of Dr. Nicolaes Tulp, 1632-Mauritshuis, Haga.

Figure 8. Goetz prosthesis (adapted after Püschel 1955, 23).

Further evidence of the above is Rembrandt's painting "The Anatomy Lesson of Dr. Nicolaes Tulp" (Figure 7), dated 1632, where it can be noted that the human arm and hand are studied through dissection. The studies of Renaissance anatomists, particularly those of Leonardo da Vinci, and of others interested in the outstanding performance of the hand (see the painting "Etude de main", 1715, by Nicolas de Largillière, Louvre, France), continued to the present day, with attempts to make copies of the hand, at first only mechanical. In this context, we mention the humanoid robot of Leonardo da Vinci, sketched around 1500; the Goetz prosthesis of 1509 (Figure 8), the first perfected copy of the hand skeleton; and the Vaucanson machine (1738), the Inanimate Flautist (Le fluteur inanime) (Figure 9), which played the flute, with all four fingers extending from the palms of the two hands kinematically active.


More recently, at the end of the 20th century, important synthesis works published after 1975 consider, without exception, the human hand as the most refined and useful gripping biomechanism for the design and optimization of the corresponding parts of gripping systems used in robots and prosthetic structures, and hence for the improvement of gripping mechanisms. Both Kato (Kato 1982, 12) and Lundstrom (Lundstrom 1977, 18) treat the human hand as the reference point in the study of gripping systems. In most studies, the main objective was to establish the hand skeleton structure as precisely as possible and to identify the joint types between its elements.

Figure 9. Vaucanson’s Flautist (Vaucanson 1738).

Following research devoted to identifying the structural and kinematic features of the hand (Lundstrom 1977, 18; Kato 1982, 12; Kovacs 1982, 26; Mason and Salisbury 1985, 32; Dudita et al. 1987, 161-168), several results were obtained, the most important of which are discussed below. Building on such investigations, and compared with earlier attempts to achieve mechanical models similar to the human hand, today's bionic-mechatronic models of the hand are very advanced.


STRUCTURAL AND FUNCTIONAL CHARACTERISTICS OF THE HUMAN HAND

First, it must be underlined that the forelimbs (superior limbs) became gripping "devices, instruments" especially after man shifted from quadrupedal to bipedal movement. Thus, the arms and hands became free, in a position of waiting to take action. From the evolutionary perspective, we can identify three directions:

1. increasing the number of fingers, as is apparent in Figure 10, in which three stages of the transformation of fish fins into pentadactyl limbs are presented;
2. increasing the mobility of the fingers (phalanges);
3. optimizing the proportions between the main areas of the hand (see the structural archetype of Gegenbauer).

Concerning the first direction, we can see that the five-finger structure is somewhat redundant, because most usual gripping operations can be performed with only three fingers; however, complex gripping and micro-handling configurations and positions can be performed only with five fingers. It is also worth noting the numerical correspondences between small numbers and parts of the human body (the number five follows four, and we have four limbs; similar correspondences can be set for the numbers 1, 2 and 3), although the significance of these correlations is still little understood.

Figure 10. Three stages of the transformation of fish fins into the pentadactyl limb.


Figure 11. Gripping situation in monkeys (a) and humans (b).

As far as the second direction of evolution is concerned, our fingers have the greatest gripping possibilities in different positions, as their degree of mobility is higher compared to that of other primate hands. Figure 11 shows, comparatively, some gripping positions of the hand in monkeys (Figure 11a) and in humans (Figure 11b). It may be noted that, due to the different proportions between the sizes of the fingers and the palm, the gripping of some objects is more difficult for monkeys than for humans. Based on the concepts of precision and power gripping introduced by Napier (Napier 1956, 903-911), the systematization of grasping situations using the human hand proposed by Cutkosky (Cutkosky 1989, 273) is now well known in the specialty literature on grasping, to which some other broadly similar approaches were added (Cutkosky and Wright 1986, 1533-1538; Rosheim 1994, 190-193; Feix et al. 2016, 68-70; Pollard 2016, 25-30; Liu et al. 2014, 575-579). Cutkosky's systematization is performed from two main perspectives: precision and power. It is based on the number of fingers used, aiming at two main directions: strong gripping (high force), focusing on gripping safety and stability, and precision gripping, based on dexterity and finesse. This systematization is shown in Figure 12, adapted under the coordination of the author and completed with two situations, namely: grasping between two fingers of the four in the palm's extension (11 in Figure 12) and power grasping, with two opposing fingers, the thumb and the first of the four fingers in the palm's extension (3 in Figure 12), of an object of ovoid shape and of average to maximum dimensions relative to the hand's grasping options, limited by finger size and by the fingers' ability to encompass (with useful contact) a grasped object (see also Bolboe 2013, 109). This systematization shows that in the case of strength-based gripping, which ensures safety and stability first, usually all the fingers are involved in gripping and, except for spherical gripping, the role of the metacarpal bones is insignificant. This is the reason why, in currently available anthropomorphic grippers, especially those for secure gripping, the modeling of these bones is neglected.

Figure 12. Cutkosky taxonomy highlighting significant gripping situations with the human hand (adapted by the author).

However, in the case of precision gripping, which involves the micro-handling of the gripped object, the role of the metacarpal bones is essential (cases 14, 15, 16 in Figure 12 for gripping and, especially, cases 7, 8, 9 in the same figure for micro-handling). As a result, in anthropomorphic grippers of this type, when micro-handling of gripped objects is intended, the modeling of the metacarpal bones must be performed at least partially. However, this is a complicated technical problem for which the best solutions are still being sought.

Figure 13. Lyons’s grasping situations adapted by the author.

In addition to the two situations above, power gripping and precision gripping, we must also mention lateral gripping (Lyons 1985, 590), shown in Figure 13, situation 6, which is important in the case of anthropomorphic grippers of humanoid robots. Regarding the third direction, the phylogenetic study of the human hand was also a concern for many researchers who have tried to identify, as correctly as possible, the main stages of its evolution. After the comparative analysis of the tetrapods' limbs, Gegenbauer imagined a structural archetype of the vertebrates' limbs. In this archetype (Figure 14), the following main parts can be seen: the proximal segment (stylopodium), the medial segment and the distal segment (autopodium). For the latter, a structure composed of three segments is proposed: a proximal segment (basipodium), a medial segment (metapodium) and a distal segment (phalanges digitorum). The distal segment (autopodium) is the structural archetype of the hand itself. Careful research showed that the differences in gripping possibilities are due to the differences between the bones that make up each segment. In mammals, the articulated complex called the basipodium, which in the archetype proposed by Gegenbauer contains ten bones, changed by reducing the number of bones, so that in humans their number decreased to eight.


Figure 14. Gegenbauer’s structural archetype (Dudita et al. 1987, 165).

In Figure 15 we can see the differences between the parts of the distal segment in lower primates (lemurs, Figure 15a), primates (Figure 15b) and gorilla, orangutan and humans (Figure 15c). It is significant that, in addition to the great possibilities of adopting different configurations and of gripping a wide variety of objects whose size is proportional to the size of the hand, we can note the approximate compliance with the golden section, the ratio 1.618 (Figure 16): between the sizes of the metacarpal bones and the first phalanx (13/8 = 1.625), between the sizes of the first phalanx and the second phalanx (8/5 = 1.6), and between the sizes of the last two phalanges (5/3 = 1.66).
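The segment lengths quoted above (13, 8, 5, 3) are consecutive Fibonacci numbers, whose successive quotients approximate the golden section; a quick illustrative check in Python:

```python
# Segment lengths approximated by consecutive Fibonacci numbers, as in the
# text: metacarpal = 13, proximal phalanx = 8, middle phalanx = 5, distal = 3.
segments = [13, 8, 5, 3]
ratios = [a / b for a, b in zip(segments, segments[1:])]  # 13/8, 8/5, 5/3
golden = (1 + 5 ** 0.5) / 2  # the golden section, ~1.618
print(ratios)
print(all(abs(r - golden) < 0.06 for r in ratios))  # True
```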

Figure 15. Comparison between lemurs and primates “hand.”


Figure 16. The golden section to human hand skeleton.

The human hand is made of bones, a system of muscle fibers that actuates this skeleton, and an outer layer (the skin) with a protective and sensorial role. The hand skeleton is a biomechanism, a biomechanical system that transmits conditioned movements and mechanical forces. The skeleton (see Figure 17) is made of 27 bones: 8 carpal bones (corresponding to the proximal segment, the basipodium), 5 metacarpal bones (corresponding to the medial segment, the metapodium) and 14 phalanges (corresponding to the distal segment, the phalanges digitorum). These bones are connected by 36 joints (Figure 17), of which 11 are intercarpal joints, 8 carpal-metacarpal joints, 3 inter-metacarpal joints, 5 metacarpal-phalangeal joints and 9 interphalangeal joints. The 27 elements (bones) and the 36 joints (couplings) form the biomechanism of the human hand. Depending on the components considered, this biomechanism has different degrees of mobility. Thus, if it is considered that there are bimobile joints between the carpal and the metacarpal bones, as well as between the metacarpal bones and the phalanges, a degree of mobility M = 28 is obtained. If we consider only the metacarpal-phalangeal (bimobile) and interphalangeal (monomobile) joints, we obtain a mobility degree M = 20 (Staretu et al. 2001, 54). It should be noted that the three main groups of bones form three segments: the proximal segment corresponding to the group of carpal bones, the medial segment corresponding to the group of metacarpal bones and the distal segment corresponding to the group of phalangeal bones (the fingers), according to the same Figure 17 (Staretu 2005, 56-60; Staretu 2011, 28). These segments are also found in the structure of a finger, which comprises a proximal phalanx, a medial phalanx and a distal phalanx (see Figure 17).
Obviously, the relative arrangement of the 5 fingers (the thumb being opposable to the other 4 fingers) and the proportion between the different segments of the hand or even between the components of a segment are very important.
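The mobility degrees quoted in this chapter all arise from summing the degrees of freedom of whichever joints are included in the model; a minimal sketch (the helper `mobility` is our own, and the result depends entirely on the modeling assumptions):

```python
# Degree of mobility of a hand model: the sum of the degrees of freedom of
# the joints included in the model (monomobile = 1 DOF, bimobile = 2 DOF).
# Joint counts follow the skeleton description in the text; which joints
# to include is a modeling choice.
JOINT_DOF = {"monomobile": 1, "bimobile": 2}

def mobility(joints):
    """joints: iterable of (count, joint_type) pairs."""
    return sum(count * JOINT_DOF[jtype] for count, jtype in joints)

# Fingers only: 5 bimobile metacarpal-phalangeal joints plus 9 monomobile
# interphalangeal joints.
print(mobility([(5, "bimobile"), (9, "monomobile")]))  # 19
```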


Figure 17. Human hand skeleton.

The number and size of the bones (elements) and the type of joints (couplings) underlie the wide variety of gripping positions of the hand. Thus, the shift from the flat palm configuration, in which the carpal bones have a limited relative position (Figure 18a), to the crucible shape (Figure 18b and c) is possible because of the small relative displacements that occur in the intercarpal joints.


Figure 18. Flat and crucible form of the hand.

Figure 19. Relative position of metacarpal bones.



Figure 20. Phalanges characteristic abduction – adduction positions.

Figure 21. Index finger and compound flexion-extension movement.



Figure 22. Thumb abduction – adduction (a, b) and rotation (c, d).

The crucible position of the palm stems mainly from the movements of the four metacarpal bones in relation to the carpal complex (Figure 19). The relative positions characteristic of the phalanges (Figure 20 and Figure 21) are possible thanks to the metacarpal-phalangeal joints, which are bimobile spheroid joints (also called condylar joints), except for the thumb joint. Due to these joints, the main abduction-adduction movements of the fingers are possible (Figure 20a, b), as well as flexion-extension (Figure 20a, c). The compound abduction-adduction and flexion-extension movement of the index finger can be seen in Figure 21, and the adduction-abduction and rotation movements can be seen in Figure 22. The thumb movements are possible due to the bimobile carpal-metacarpal joint, also called a saddle joint because of its hyperbolic-paraboloid (saddle) shape (Figure 22e). The interphalangeal joints are monomobile trochlear joints (see Figure 23c and d). The flexion amplitude in the proximal interphalangeal joints (Figure 23) exceeds 90°, and that in the distal interphalangeal joints is generally lower. Figure 24 shows the angles that characterize the flexion amplitude in an intermediate position and in the final position. In the limit position of active extension, the fingers' phalanges are in extension (Figure 22b). In conclusion, the complex structure and the size of the hand skeleton make possible a wide variety of spatial configurations in general, and of gripping configurations in particular (Figure 11b).


Figure 23. Interphalangeal trochlear joints.


Figure 24. Fingers medial and maximum flexion.

Figure 25. Relative position of the hand bones for different gripping positions.

Figure 26. Positions of human hand micromanipulation.

Other positions of the hand bones for different gripping positions can be seen in Figure 25 (Püschel 1955, 65). In addition, due to this structure, the hand has outstanding possibilities to move the object between the fingers, movement that can be considered micro-handling. In Figure 26 there are several significant situations when micro-handling is possible.


The study of these situations is particularly useful in designing anthropomorphic grippers (similar to the human hand) used in robots and prostheses.

Human Hand Biomechanism Actuation

Highly varied configurations, including possibilities of micro-handling, are possible also due to the very ingenious actuation at the level of the carpal and metacarpal bones and at the level of the phalanges. Thus, movements in the radial-carpal and carpal joints are possible due to a system of actuation muscles, along with a system of ligaments that maintain constant contact between the carpal bones (Figure 27). The actuation of the phalanges is also interesting. For the flexion-extension movements, which are of primary interest in terms of robotics, each finger is actuated by 5-7 muscles that provide an independent movement of each phalanx (see Figure 28, adapted after www 3). These movements are driven by flexor and extensor muscles (superficial and deep) whose tendons are attached very ingeniously to the phalanges (Figure 29a, b, c). Finger extension is possible through a single tendon connected in parallel with the three phalanges (Figure 29a, b, c3). Finger flexion (bending) is possible through different tendons (Figure 29b, c). In Figure 29c, c1, c2, c3, we can see how the phalanges, guided by ligaments (sheaths), are bent. In Figure 29d, we can see the link between the tendons and the corresponding actuation muscles of a finger. With the actuation muscular system briefly presented, it can be noted that the 27 bones and the 36 joints provide a mobility degree M = 35, which explains the multiple gripping and micro-handling positions of the human hand biomechanism (Figure 11b and Figure 26). Of these, the 9 interphalangeal joints, which are monomobile, and the 5 metacarpal-phalangeal joints, which are bimobile, have a mobility degree M = 19 altogether. Compared to these joints, in the others the displacements are small or very small and, although they contribute greatly to the diversity of hand positions, they are usually neglected in the construction of anthropomorphic grippers because their consideration would lead to very complex structures.


Figure 27. Carpal bones’ movements actuation: dorsal view- a, adapted after (www 1), volar view-b, adapted after (www 2).

Figure 28. Metacarpal (a) and phalanges (b) bones' movements actuation (adapted after www 3).


Figure 29. Phalanges actuation (Staretu 2005, 73).

Human Hand Protection and Sensitivity

To complete the presentation of the human hand biosystem, we refer briefly to the structure and function of the skin. The skin is composed of three main layers: epidermis, dermis and hypodermis, each of which, in turn, consists of several components (Figure 30). Among the functions of the skin, we are interested in the protection function and in the sense-organ function. The protection function provides hand protection (of the internal tissues) against mechanical, chemical, thermal, actinic (radiation) and microbial agents. The sense-organ function involves the sensing of tactile, thermal and painful stimuli through specialized receptors. Each type of stimulus corresponds to a certain type of sensitivity at the skin level. Tactile sensitivity is triggered by mechanical stimuli acting through touch and by pushing or pressure. Thermal sensitivity is triggered by thermal stimuli, i.e., the ambient temperature and that of bodies in contact with the skin. Painful sensitivity is triggered by stimuli that may damage the skin tissue physically, chemically or biologically. It should be noted that for the reception of each kind of stimulus there are specialized receptors, namely, in the dermis, corpuscles specialized in receiving tactile stimuli and cold sensations, and, in the hypodermis, corpuscles specialized in perceiving high pressure, low pressure and warmth sensations. In addition, free nerve fibers in the epidermis are designed to receive painful stimuli and, around the hair roots, tactile stimuli triggered by hair movements. These details show the special complexity of the skin, which explains both its exceptional sensitivity and the difficulty, so far only partially overcome, of making artificial skin similar to natural skin for use in mechanical grippers (Mogos 1972, 232). Information provided by the specialized receptors in the skin reaches, via nerve pathways, the corresponding cerebral area (the postcentral gyrus in the parietal lobe), which processes it; it is then perceived as a variation in temperature or as contact with objects in the environment. In view of the above, the special complexity of the human hand structure and function is demonstrated by the area reserved in the cerebral cortex for processing information related to it (see Figure 31). Figure 31a shows the graphic representation of the cortical projection of the body on the cortical sensory area, which indicates that the largest cortical representation, after the lips and the tongue, belongs to the hand. Figure 31b shows the cortical projection of the body on the cortical motor area, indicating that the hand muscles involved in finer movements have much larger cortical projection areas than muscles that perform less fine movements. Related to the structural-functional particularities of the hand, briefly presented, we must mention that, both anatomically and physiologically, the structure and the function of the human hand as a gripping system are well known (Mogos 1972, 342).

Figure 30. Structure of the skin (adapted after www 4).


Figure 31. Cortical projection of the human body: sensory (a) and motor (b) areas.

Case Study

To complete and support the above, a personal case study of gripping and micro-handling is presented. Gripping is considered first for a linear object, like a pencil, with one, two, three, four and five fingers (Figure 32a, b, c, d, e), and then for an approximately spherical, small object, with an average diameter of about 10 mm, also using 1-5 fingers (Figure 33a, b, c). It is noted that in the second case only gripping with 1, 2 and 3 fingers is significant, because using more than three fingers does not increase gripping accuracy or reliability; a sort of redundancy occurs, and contact with an object of this size may even become impossible if more than three fingers are used.


Figure 32. Grasping situations of a rod-type object.

Obviously, if the body size is greater, one-finger gripping is no longer possible. At least two fingers are needed, and then three, four or five fingers, depending on the intended use. For such a case, the possible gripping situations of a mouse-like object are shown in Figure 34.



Figure 33. Situations of grasping a small object.


Figure 34. Grasping situations of an average size object.

As an example of micro-handling, a feature specific to anthropomorphic grippers, the micro-handling of a pencil-type object is shown (Figure 35). It is noted that handling is possible with at least two fingers, and that the number of possible situations increases with the number of fingers used.


Figure 35. Handling with two fingers (a), three fingers (b), and transfer between the first two fingers and the second two fingers (c).

The situations presented in this case study were meant to exemplify the outstanding gripping and micro-handling possibilities of the human hand, as a basis for the conception and realization of anthropomorphic robotic grippers with similar capabilities.


MINIMUM MATHEMATICAL CONDITIONS OF STATIC GRASPING WITH THE HUMAN HAND

Mechanical Contact Modeling by Torsors

Preliminary Notions

A Vector's Torsor

If a force represented by a slip vector \(\bar{F}_i\) is considered, and \(\bar{M}_i\) denotes the moment of this force about a pole O, then the torsor in pole O of the slip vector \(\bar{F}_i\) is the pair formed by the free vector \(\bar{F}_i\) and the bound vector \(\bar{M}_i\). This torsor is written as follows:

\[ \tau_O(\bar{F}_i) = \tau(\bar{F}_i) = \{\bar{F}_i, \bar{M}_i\} = \begin{Bmatrix} \bar{F}_i \\ \bar{M}_i \end{Bmatrix}. \tag{1} \]

For a vector system \(\bar{X}_1, \bar{X}_2, \dots, \bar{X}_q\), the resultant vector in a point A is:

\[ \bar{R}_A = \bar{X}_1 + \bar{X}_2 + \dots + \bar{X}_q. \tag{2} \]

In the same point A, considering the points \(P_i\) as the origins of the vectors \(\bar{X}_i\), a resultant moment is obtained:

\[ \bar{M}_A = \bar{X}_1 \times \overline{P_1A} + \bar{X}_2 \times \overline{P_2A} + \dots + \bar{X}_q \times \overline{P_qA}. \tag{3} \]

In point A, the torsor corresponding to the vectors \(\bar{X}_1, \bar{X}_2, \dots, \bar{X}_q\) will be:

\[ \tau_A(\bar{R}) = \begin{Bmatrix} \bar{R} \\ \bar{M}(A) \end{Bmatrix}. \tag{4} \]

Matrix Expression of the Torsor

Considering a three-dimensional reference system of axes x, y, z with the unit vectors \(\bar{i}, \bar{j}, \bar{k}\) and the origin O, a common vector \(\bar{V}_i\) can be written as:

\[ \bar{V}_i = X_i\bar{i} + Y_i\bar{j} + Z_i\bar{k}. \tag{5} \]


The matrix associated to this vector will be:

\[ V_i = \begin{bmatrix} X_i \\ Y_i \\ Z_i \end{bmatrix}_O, \tag{6} \]

where \(X_i, Y_i, Z_i\) are the projections of the vector on the reference axes. The moment of the vector in point O is:

\[ \bar{M}_i = \begin{vmatrix} \bar{i} & \bar{j} & \bar{k} \\ x_i & y_i & z_i \\ X_i & Y_i & Z_i \end{vmatrix} = M_{ix}\bar{i} + M_{iy}\bar{j} + M_{iz}\bar{k}, \tag{7} \]

where \(M_{ix}, M_{iy}, M_{iz}\) are the scalar components with respect to the \(\{O, \bar{i}, \bar{j}, \bar{k}\}\) reference axes and \(x_i, y_i, z_i\) are the scalar components of the position vector along the same axes. The matrix attached to the moment vector will be:

\[ M_i = \begin{bmatrix} M_{ix} \\ M_{iy} \\ M_{iz} \end{bmatrix}_O = \begin{bmatrix} 0 & -z_i & y_i \\ z_i & 0 & -x_i \\ -y_i & x_i & 0 \end{bmatrix} \begin{bmatrix} X_i \\ Y_i \\ Z_i \end{bmatrix}_O = \begin{bmatrix} y_iZ_i - z_iY_i \\ z_iX_i - x_iZ_i \\ x_iY_i - y_iX_i \end{bmatrix}_O. \tag{8} \]

As a result, the matrix form of the torsor of vector \(\bar{V}_i\) with respect to point O is:

\[ \tau_O(V_i) = \begin{bmatrix} X_i \\ Y_i \\ Z_i \\ M_{ix} \\ M_{iy} \\ M_{iz} \end{bmatrix} = \begin{bmatrix} X_i \\ Y_i \\ Z_i \\ y_iZ_i - z_iY_i \\ z_iX_i - x_iZ_i \\ x_iY_i - y_iX_i \end{bmatrix}. \tag{9} \]
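As an illustrative sketch of the reduction expressed by Equations (8) and (9), the six-component torsor of a force applied at a point can be assembled numerically with NumPy (the helper names `skew` and `torsor` are our own):

```python
import numpy as np

def skew(r):
    """Skew-symmetric matrix of r, so that skew(r) @ v equals np.cross(r, v)
    (the 3x3 matrix appearing in Eq. (8))."""
    x, y, z = r
    return np.array([[0.0,  -z,   y],
                     [  z, 0.0,  -x],
                     [ -y,   x, 0.0]])

def torsor(force, point):
    """Six-component torsor of a force applied at `point`, reduced to the
    origin O: the force components, then its moment components (Eq. (9))."""
    return np.concatenate([force, skew(point) @ force])

# A unit force along z applied at (1, 0, 0): its moment about O is along -y.
print(torsor(np.array([0.0, 0.0, 1.0]), np.array([1.0, 0.0, 0.0])))
```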

Torsors' Vector Space Dimension

If vectors \(\bar{V}_i\), i = 1, ..., n, composing the vector system {V} are given, they determine, with respect to the origin of the three-dimensional reference system, the torsors \(\tau_O(V_i)\). In matrix form, the torsor system corresponding to the vector system {V} will be:


\[ \tau_O\{V\} = \big[\tau_O(V_1)\;\;\tau_O(V_2)\,\cdots\,\tau_O(V_i)\,\cdots\,\tau_O(V_n)\big] = \begin{bmatrix} X_{V_1} & X_{V_2} & \cdots & X_{V_i} & \cdots & X_{V_n} \\ Y_{V_1} & Y_{V_2} & \cdots & Y_{V_i} & \cdots & Y_{V_n} \\ Z_{V_1} & Z_{V_2} & \cdots & Z_{V_i} & \cdots & Z_{V_n} \\ M_{xV_1} & M_{xV_2} & \cdots & M_{xV_i} & \cdots & M_{xV_n} \\ M_{yV_1} & M_{yV_2} & \cdots & M_{yV_i} & \cdots & M_{yV_n} \\ M_{zV_1} & M_{zV_2} & \cdots & M_{zV_i} & \cdots & M_{zV_n} \end{bmatrix}. \tag{10} \]

The dimension of the torsor vector space, representing the number of linearly independent torsors, is the rank of the matrix \(\tau_O\{V\}\):

\[ \operatorname{rank}\big[\tau_O\{V\}\big] = r. \tag{11} \]
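Equation (11) can be evaluated numerically by stacking torsor columns as in Equation (10) and computing the matrix rank; a sketch with hypothetical example data (helper names are our own):

```python
import numpy as np

def skew(r):
    x, y, z = r
    return np.array([[0.0, -z, y], [z, 0.0, -x], [-y, x, 0.0]])

def torsor(force, point):
    return np.concatenate([force, skew(point) @ force])

# Hypothetical example: three unit forces applied at three different points.
# Each column of the 6 x n matrix is one torsor, as in Eq. (10); the rank
# (Eq. (11)) gives the number of linearly independent torsors.
forces = [np.array([1.0, 0.0, 0.0]),
          np.array([0.0, 1.0, 0.0]),
          np.array([0.0, 0.0, 1.0])]
points = [np.array([0.0, 1.0, 0.0]),
          np.array([0.0, 0.0, 1.0]),
          np.array([1.0, 0.0, 0.0])]
T = np.column_stack([torsor(f, p) for f, p in zip(forces, points)])
print(np.linalg.matrix_rank(T))  # 3
```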

Static Torsor

If \(\bar{Q}_i\) is a force vector in the reference system \(\{O, \bar{i}, \bar{j}, \bar{k}\}\), then the torsor in point O of the vector \(\bar{Q}_i\) is called the static torsor; its components are the force \(\bar{Q}_i\) and its moment with respect to O:

\[ \tau_{SO}(Q_i) = \begin{Bmatrix} \bar{Q}_i \\ \bar{M}_{Q_i} \end{Bmatrix}. \tag{12} \]

In matrix form, the torsor \(\tau_{SO}(Q_i)\) will be:

\[ \tau_{SO}(Q_i) = \begin{bmatrix} Q_{ix} \\ Q_{iy} \\ Q_{iz} \\ M_{Q_ix} \\ M_{Q_iy} \\ M_{Q_iz} \end{bmatrix}. \tag{13} \]

In a three-dimensional reference system, for each axis we can consider a torsor corresponding to a force along that axis and a torsor corresponding to a moment about that axis. The force vector and the moment vector are linearly independent vectors.


Types of Mechanical Contacts

Frictionless Mechanical Contact between Two Non-Deformable Entities

Between two non-deformable solids of defined shapes, if friction is ignored and the entities have a single contact point, the action of one entity on the other generates two equal forces with opposite directions, having as support the common normal to the surfaces in contact (see Figure 36). Considering the force as the vector acting on entity 2, it generates a static torsor in the reference system with origin in point P and another static torsor in the reference system attached to entity 1, with origin in point O:

\[ \tau_{SP}(\bar{F}_{12}) = \begin{Bmatrix} \bar{F}_{12} \\ \bar{M}_P(\bar{F}_{12}) \end{Bmatrix} = \begin{Bmatrix} \bar{F}_{12} \\ 0 \end{Bmatrix}, \tag{14} \]

\[ \tau_{SO}(\bar{F}_{12}) = \begin{Bmatrix} \bar{F}_{12} \\ \bar{M}_O(\bar{F}_{12}) \end{Bmatrix} = \begin{Bmatrix} \bar{F}_{12} \\ 0 \end{Bmatrix} + \begin{Bmatrix} 0 \\ \overline{OP} \times \bar{F}_{12} \end{Bmatrix} = \begin{Bmatrix} \bar{F}_{12} \\ \overline{OP} \times \bar{F}_{12} \end{Bmatrix}. \tag{15} \]

In matrix form, the two torsors are:

\[ \tau_{SP}(\bar{F}_{12}) = \begin{bmatrix} F_{12x}^P \\ F_{12y}^P \\ F_{12z}^P \\ 0 \\ 0 \\ 0 \end{bmatrix}, \qquad \big[\tau_{SO}(\bar{F}_{12})\big] = [T_{PO}]\big[\tau_{SP}\big], \tag{16} \]

where \([T_{PO}]\) is the transition matrix from P to O.

Mechanical Contact between Two Non-Deformable Solids, with Friction

In this case, considering the action of entity 2 at a point on entity 1, with friction, two forces are generated: the reaction force on entity 2, having as support the common normal to the surfaces of the two entities at the contact point, and a friction force, perpendicular to the first and placed in the plane tangent to the entities' surfaces at the contact point (see Figure 37). The friction force \(\bar{F}_{12f} = \mu_{12}\bar{F}_{21}\) can be decomposed along two perpendicular axes located in the tangential plane (\(\mu_{12}\) is the friction coefficient between the entities). A static torsor is generated in the reference system with origin in the contact point P:


\[ \tau_{SP}(\bar{F}_{12}) = \begin{Bmatrix} \bar{F}_{12} + \bar{F}_{12f} \\ \bar{M}_P(\bar{F}_{12}) + \bar{M}_P(\bar{F}_{12f}) \end{Bmatrix} = \begin{Bmatrix} \bar{F}_{12} + \bar{F}_{12f} \\ 0 \end{Bmatrix}. \tag{17} \]

A corresponding torsor will be generated in the point O of the reference system attached to entity 1:

\[ \tau_{SO}(\bar{F}_{12}) = \begin{Bmatrix} \bar{R}_{12} \\ \bar{M}_O(\bar{F}_{12}) \end{Bmatrix} = \begin{Bmatrix} \bar{F}_{12} + \bar{F}_{12f} \\ \overline{OP} \times (\bar{F}_{12} + \bar{F}_{12f}) \end{Bmatrix}. \tag{18} \]

Figure 36. Mechanical contact between non-deformable solids without friction.

Figure 37. Mechanical contact between non-deformable solids with friction.

Mechanical Contact between Two Deformable Solids with Friction

For deformable solids, the contact point is replaced by a contact zone. The action of entity 2 on entity 1 (see Figure 38) generates the following forces: a reaction force \(\bar{F}_{12}\) on entity 2, whose support is the common normal to the entities' surfaces at the theoretical contact point; a friction force \(\bar{F}_{12f} = \mu_{12}\bar{F}_{21}\) in the tangential plane common to the surfaces in contact; and a friction moment, which opposes the tendency of rotation around the normal.


The friction force \(\bar{F}_{12f}\) splits into two independent components along two perpendicular axes in the common tangent plane to the surfaces in contact (\(\bar{F}_{12f} = \bar{F}_{X12f} + \bar{F}_{Z12f}\)). The friction moment is the pivoting friction moment, \(M_{P12f}^{y_P} = \frac{2}{3}\mu_{12}\,r\,F_{21}\), where r is the radius of the contact area, considered perfectly flat. The static torsor in point P, the origin of the reference system \(\{P, X_P, Y_P, Z_P\}\), of the forces through which entity 1 acts on entity 2 is:

Figure 38. Mechanical contact between solid deformable entities with friction.

\[ \tau_{SP}(\bar{F}_{12}) = \begin{Bmatrix} \bar{F}_{12} + \bar{F}_{12f} \\ \bar{M}_P(\bar{F}_{12}) + \bar{M}_P(\bar{F}_{12f}) + \bar{M}_{P12f}^{y_P} \end{Bmatrix} = \begin{Bmatrix} \bar{F}_{12} + \bar{F}_{12f} \\ \bar{M}_{P12f}^{y_P} \end{Bmatrix}. \tag{19} \]

A corresponding torsor is generated in point O, the reference system origin attached to entity 1:

\[ \tau_{SO}(\bar{F}_{12}) = \begin{Bmatrix} \bar{R}_{12} \\ \bar{M}_O(\bar{F}_{12}) \end{Bmatrix} = \begin{Bmatrix} \bar{F}_{12} + \bar{F}_{12f} \\ \bar{M}_{P12f}^{y_P} + \overline{OP} \times (\bar{F}_{12} + \bar{F}_{12f}) \end{Bmatrix}. \tag{20} \]

Equilibrium of a Solid Body

A solid is in equilibrium if the system of forces acting on it is in equilibrium, hence if the position of the entity remains unchanged regardless of the forces acting on it. The necessary and sufficient condition for a free or bound solid to be in equilibrium is that, in some point in space, the torsor of the system of forces acting on it is null:


\[ \tau_A = \begin{Bmatrix} \bar{R} \\ \bar{M} \end{Bmatrix} = \begin{Bmatrix} \sum \bar{F}_i \\ \sum (\bar{r}_i \times \bar{F}_i) \end{Bmatrix} = \begin{Bmatrix} 0 \\ 0 \end{Bmatrix}. \tag{21} \]

If we project these expressions along the axes of a Cartesian reference system, the classical equilibrium conditions of the solid are obtained:

\[ \begin{bmatrix} \sum X_i \\ \sum Y_i \\ \sum Z_i \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}, \qquad \begin{bmatrix} \sum M_{ix} \\ \sum M_{iy} \\ \sum M_{iz} \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}. \tag{22} \]
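Conditions (21)-(22) amount to checking that the resultant force and the resultant moment of the system both vanish; a minimal numerical sketch (the helper `is_in_equilibrium` is our own):

```python
import numpy as np

def is_in_equilibrium(forces, points, tol=1e-9):
    """Check Eqs. (21)-(22): the resultant force and the resultant moment
    of the force system, taken about the origin, both vanish."""
    R = sum(forces)                                          # sum of F_i
    M = sum(np.cross(r, F) for r, F in zip(points, forces))  # sum of r_i x F_i
    return bool(np.allclose(R, 0.0, atol=tol) and np.allclose(M, 0.0, atol=tol))

F = np.array([1.0, 0.0, 0.0])
# Equal and opposite forces on the same line of action: equilibrium.
print(is_in_equilibrium([F, -F], [np.array([0.0, 0.0, 0.0]),
                                  np.array([1.0, 0.0, 0.0])]))  # True
# The same pair on parallel lines of action forms a couple: no equilibrium.
print(is_in_equilibrium([F, -F], [np.array([0.0, 0.0, 0.0]),
                                  np.array([0.0, 1.0, 0.0])]))  # False
```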

Considering friction, the following conditions must also be met: \(\{F\} \le \{T\}\) and \(\{M\} \le \{M_f\}\), where {F} is the set of forces actually applied to the solid, {T} is the set of friction forces, {M} is the set of moments of the forces actually applied, and {M_f} is the set of moments of the friction forces. Considering the general case of a deformable solid S1 in mechanical contact with a solid S2, under the active forces \(\{\bar{F}_i, \bar{M}_i\}\), i = 1, ..., n, a contact zone Ω is generated, where elementary reactions acting on the solid S1 are developed. If it is roughly considered that the contact area Ω belongs to a plane π (Figure 39), and that \(\{\bar{R}_A, \bar{M}_A\}\) is the torsor of the elementary reactions \(d\bar{F}\) about the gravity center A of the surface Ω, the elements of the torsor in A split along the normal direction in A to the plane (π), with components \(\bar{N}\) and \(\bar{M}_P\), and along a direction located in the plane (π), with components \(\bar{T}\) and \(\bar{M}_r\), respectively. Therefore, we can write \(\bar{R}_A = \bar{N} + \bar{T}\) and \(\bar{M}_A = \bar{M}_P + \bar{M}_r\), where \(\bar{N}\) is the normal reaction, \(\bar{T}\) is the sliding friction force, \(\bar{M}_P\) is the pivoting friction moment and \(\bar{M}_r\) is the rolling friction moment.

The normal reaction \(\bar{N}\) opposes the movement of entity (S1) along the n-n direction; the sliding friction \(\bar{T}\) opposes the sliding, or the tendency of sliding, of the entity in the plane (π); the pivoting friction moment \(\bar{M}_P\) opposes the rotation, or the tendency of rotation, of the entity around the normal n-n to the plane π; and the rolling friction moment \(\bar{M}_r\) opposes the rotation, or the tendency of rotation, of the entity around an axis (Δ) located in the plane (π).


Figure 39. Forces acting on a deformable solid S1 in contact with another solid S2.

Figure 40. Forces acting on a solid reduced to a material point.

Figure 41. Friction cone A.

Under Coulomb's laws of friction, equilibrium requires that \(T \le \mu N\), \(M_P \le \upsilon N\) and \(M_r \le sN\), where μ is the sliding friction coefficient, υ is the pivoting friction coefficient, and s is the rolling friction coefficient. As long as these conditions are met, the entity is at rest or at the limit of displacement. In the limit case of punctual contact, the solid associated with a punctual contact element is reduced to a material point (MP). In this situation of contact with a plane (see Figure 40), \(T \le \mu N\) is the necessary equilibrium condition for the material point MP. The necessary and sufficient equilibrium condition is α ≤ φ, where \(\bar{T}\) is the friction force, \(\bar{N}\) is the normal reaction, \(\bar{P}\) is the active variable force acting on the material point, \(\bar{F} = \bar{P} + \bar{G}\) is the resultant of the properly applied forces, \(\bar{R} = \bar{N} + \bar{T}\) is the overall reaction, tan φ = μ, and α is the angle between the support of the resultant force and the normal to the plane. If contact over a certain area is considered, the lines through point A that make the friction angle φ with the normal An to the surface (Figure 41) generate a double cone called the friction cone. For equilibrium, the support of the resultant of the forces properly applied at point A must be located within the friction cone, or at most on the cone's lateral surface.
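The condition that the resultant lie within the friction cone (equivalently α ≤ φ, i.e. T ≤ μN) can be checked numerically; a sketch under the assumption of a single point contact, with the helper name `inside_friction_cone` our own:

```python
import numpy as np

def inside_friction_cone(force, normal, mu):
    """Coulomb condition T <= mu * N: the support of the force lies within
    the friction cone (alpha <= phi, with tan(phi) = mu)."""
    n = normal / np.linalg.norm(normal)
    N = float(np.dot(force, n))               # normal component
    T = float(np.linalg.norm(force - N * n))  # tangential component
    return N > 0.0 and T <= mu * N

n = np.array([0.0, 0.0, 1.0])
print(inside_friction_cone(np.array([0.1, 0.0, 1.0]), n, mu=0.5))  # True
print(inside_friction_cone(np.array([1.0, 0.0, 1.0]), n, mu=0.5))  # False
```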

Minimum Mathematical Conditions for Static Grasping

The grasping experiments presented above allow a brief presentation of the mathematical modeling of grasping. Thus, from the case of a randomly shaped object, such as that of Figure 34e1, in which we represented the contact forces \(\bar{F}_i\) applied by the fingers at the contact points Pi on the grasped object (Figure 42a), we shift to the general case of grasping a solid object (Figure 42b). Further, the following explanations are given regarding the simplified mathematical modeling of static grasping (Staretu 2011, 38-47).

(a)

(b)

Figure 42. Non-deformable solids system.

If a system of non-deformable (rigid) solids is considered (see Figure 42b), actuated by external forces and moments {F_i, M_i}, i = 1, ..., n, and internal (coupling) reactions {

Human Grasping as an Archetype of Grasping in Robotics

385

F_ij, M_ij} (i, j ∈ N, i ≠ j), the balance conditions of the system made of external forces and internal and external reactions are:

\sum_{i=1}^{n} F_i + \sum_{i=1}^{n}\sum_{j=1}^{n} F_{ij} = 0, \qquad
\sum_{i=1}^{n} r_i \times F_i + \sum_{i=1}^{n} M_i + \sum_{i=1}^{n}\sum_{j=1}^{n} r_j \times F_{ij} + \sum_{i=1}^{n}\sum_{j=1}^{n} M_{ij} = 0,    (23)

where r_i and r_j are the position vectors of the forces F_i and F_ij, respectively. A number of scalar linear equilibrium equations can be obtained by projecting the equilibrium conditions along the axes of a tridimensional reference system; they are applied to determine the internal or external reactions of the system. The mechanical gripping system, respectively the gripping mechanism, is first treated as a system of rigid entities, one of which is the grasped entity. Based on this simplified case we can proceed to the situation that considers the deformability of the elements in the contact area with the grasped object. First, we consider the simplified case of a solid, elastically deformable gripped entity, actuated only by contact forces F_i (see Figure 43), normal to the entity surface in the points Pi (i = 1, ..., n), through the gripper contact elements. To each point Pi a tridimensional reference system {Pixiyizi} is attached, with unit vectors (versors) ii, ji, ki, such that the zi axis coincides with the normal (Pini) to the entity surface. In addition, every point Pi is considered the gravity center of a contact area of surface Ωi located in a plane Π (see Figure 39 too) tangential to the entity surface in point Pi (the contact force F_i is considered generated by a solid Si which contacts that entity in point Pi).

Figure 43. Contact forces general case on a deformable solid gripping.


Under these assumptions, the following forces are developed in the points Pi: a contact force F_i, which has as its support the normal Pini to the entity surface; a friction force F_if, located in the plane tangent to the entity surface; and a pivoting friction moment, which has as its support the same normal Pini (the rolling friction moment is neglected) (Staretu 2011, 46). The friction opposed to the possible sliding movement in the plane Π splits into two independent components along the axes of the reference system Pixiyizi located in the plane Π, and the pivoting friction moment is opposed to possible pivoting movements around the normal. In the reference systems Pixiyizi, each force, respectively moment, vector corresponds to a torsor in point Pi. Therefore,

t_{Pi}(F_{izi}) = \{ F_{izi},\ 0 \}, \quad
t_{Pi}(F_{ifxi}) = \{ F_{ifxi},\ 0 \}, \quad
t_{Pi}(F_{ifyi}) = \{ F_{ifyi},\ 0 \}, \quad
t_{Pi}(M_{if}) = \{ 0,\ M_{ifzi} \},    (24)

with the following matrix forms:

t_{Pi}(F_{izi}) = [\,0\ 0\ F_{izi}\ 0\ 0\ 0\,]^t = F_{izi}\,[\,0\ 0\ 1\ 0\ 0\ 0\,]^t,
t_{Pi}(F_{ifxi}) = [\,F_{ifxi}\ 0\ 0\ 0\ 0\ 0\,]^t = F_{ifxi}\,[\,1\ 0\ 0\ 0\ 0\ 0\,]^t,
t_{Pi}(F_{ifyi}) = [\,0\ F_{ifyi}\ 0\ 0\ 0\ 0\,]^t = F_{ifyi}\,[\,0\ 1\ 0\ 0\ 0\ 0\,]^t,
t_{Pi}(M_{if}) = [\,0\ 0\ 0\ 0\ 0\ M_{ifzi}\,]^t = M_{ifzi}\,[\,0\ 0\ 0\ 0\ 0\ 1\,]^t.    (25)

In qualitative calculations, the scalars Fi, Mi may be dropped, thereby obtaining elementary torsors. Each torsor in point Pi is expressed in point O, the origin of the OX0Y0Z0 reference system, which is considered fixed. The torsors' expression in the fixed reference system is based on the matrix formula:

t_O(F_i) = T_{PiO}\, t_{Pi}(F_i).    (26)

TPiO is the transition matrix between the two reference systems and it has the form:

[T_{PiO}] = \begin{bmatrix} C^t_{PiO} & 0 \\ -C^t_{PiO}\,\Delta(r_{PiO}) & C^t_{PiO} \end{bmatrix},    (27)


where CPiO is the direction cosine matrix characterizing the relative angular position of the two reference systems, and rPiO are the coordinates of the distance PiO in the fixed reference system with origin in O. Taking into account that:

C_{PiO} = \begin{bmatrix} i_{Pi} i_0 & i_{Pi} j_0 & i_{Pi} k_0 \\ j_{Pi} i_0 & j_{Pi} j_0 & j_{Pi} k_0 \\ k_{Pi} i_0 & k_{Pi} j_0 & k_{Pi} k_0 \end{bmatrix} \quad \text{and} \quad r_{PiO} = \begin{bmatrix} r_{PiOx} \\ r_{PiOy} \\ r_{PiOz} \end{bmatrix},    (28)

and Δ(rPiO) is the antisymmetric matrix corresponding to the position vector rPiO:

\Delta(r_{PiO}) = \begin{bmatrix} 0 & r_{PiOz} & -r_{PiOy} \\ -r_{PiOz} & 0 & r_{PiOx} \\ r_{PiOy} & -r_{PiOx} & 0 \end{bmatrix}.    (29)

The torsor of force Fi in point O has the form:

\begin{bmatrix} F_{ixO} \\ F_{iyO} \\ F_{izO} \\ M_{ixO} \\ M_{iyO} \\ M_{izO} \end{bmatrix} = \begin{bmatrix} C^t_{PiO} & 0 \\ -C^t_{PiO}\,\Delta(r_{PiO}) & C^t_{PiO} \end{bmatrix} \begin{bmatrix} F_{ixi} \\ F_{iyi} \\ F_{izi} \\ M_{ixi} \\ M_{iyi} \\ M_{izi} \end{bmatrix}.    (30)
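The transfer of a contact torsor (wrench) to the fixed frame, eqs. (26)-(30), can be sketched numerically. The following minimal numpy example uses the common sign convention in which the moment of the transported force is skew(r)·R·F; the function names, frames and numbers are illustrative assumptions, not from the chapter.

```python
import numpy as np

def skew(r):
    """Antisymmetric (cross-product) matrix so that skew(r) @ f == np.cross(r, f)."""
    rx, ry, rz = r
    return np.array([[0.0, -rz,  ry],
                     [ rz, 0.0, -rx],
                     [-ry,  rx, 0.0]])

def wrench_to_base(R_PiO, r_PiO, wrench_Pi):
    """Express a 6x1 contact torsor [F; M], given in the local frame Pi,
    in the fixed base frame O (structure of eqs. (26)-(30)).
    R_PiO : 3x3 rotation of frame Pi relative to O (local -> base),
    r_PiO : position of point Pi in the base frame."""
    T = np.zeros((6, 6))
    T[:3, :3] = R_PiO
    T[3:, 3:] = R_PiO
    T[3:, :3] = skew(r_PiO) @ R_PiO   # moment contributed by the transported force
    return T @ wrench_Pi

# A unit normal force along the local z-axis, applied at point (1, 0, 0) of the
# base frame (identity orientation), produces a moment about the base y-axis.
w_O = wrench_to_base(np.eye(3), np.array([1.0, 0.0, 0.0]),
                     np.array([0.0, 0.0, 1.0, 0.0, 0.0, 0.0]))
print(w_O)  # force (0, 0, 1), moment r x F = (0, -1, 0)
```

Note that the block-triangular structure of T mirrors eq. (27): the force part is purely rotated, while the moment part collects both the rotated local moment and the lever-arm term.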

With the torsors expressed in the fixed reference system (Staretu 2011), we can form the matrix denoted by G, called the gripping matrix:

G = [\, t_0(F_1)\ \ t_0(F_2)\ \ \dots\ \ t_0(F_i)\ \ \dots\ \ t_0(F_n)\,].    (31)

Under these assumptions, the necessary gripping condition of the entity considered, relative to the fixed reference system origin, is:

rank G = 6.    (32)

This means that through the action of the contact forces on the gripped entity surface, 6 linearly independent elementary forces are generated, which block the 6 possible independent movements.
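Condition (32) is easy to test numerically. The sketch below builds the gripping matrix of eq. (31) from frictionless point contacts, each column being the elementary torsor of a unit contact force; the cube geometry and the seven contact placements are illustrative assumptions chosen so that the rank condition holds.

```python
import numpy as np

def grasp_matrix(contacts):
    """Build the gripping matrix G of eq. (31): one column t_O(F_i) per unit
    contact force, expressed in the fixed frame O. Each contact is given as
    (p_i, n_i): contact point and inward unit force direction in frame O."""
    cols = []
    for p, n in contacts:
        p, n = np.asarray(p, float), np.asarray(n, float)
        cols.append(np.concatenate([n, np.cross(p, n)]))  # elementary torsor [F; p x F]
    return np.column_stack(cols)

# Frictionless point contacts on a unit cube: seven contacts (C12 + 1 = 7) are
# the minimum suggested by the text for gripping with unilateral forces.
contacts = [
    (( 0.5,  0.3, 0.0), (-1, 0, 0)), ((-0.5, -0.3, 0.0), (1, 0, 0)),
    (( 0.0,  0.5, 0.0), (0, -1, 0)), (( 0.0, -0.5, 0.0), (0, 1, 0)),
    (( 0.0,  0.0, 0.5), (0, 0, -1)),
    (( 0.2,  0.0, -0.5), (0, 0, 1)), ((-0.2, 0.1, -0.5), (0, 0, 1)),
]
G = grasp_matrix(contacts)
print(G.shape, np.linalg.matrix_rank(G))  # (6, 7) 6 -> condition (32) holds
```

Rank 6 is the necessary condition of eq. (32); the sufficiency discussion that follows in the text adds the requirement of at least seven unilateral links and the static equilibrium of the contact-force system.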


To the condition above, for achieving gripping, we must also add the static equilibrium condition of the system of connection (contact) forces. If a system of six independent links (of rank 6) were used, the static equilibrium condition would lead to a homogeneous system of six equations with six unknowns, which admits only the trivial solution; obviously, in such a case, the entity gripping is not possible. Therefore, to achieve the entity gripping, generally, at least seven external links (Σc = 7) of rank C12 = 6 are necessary; indeed, in such a case the static equilibrium condition leads to a homogeneous system of 6 independent equations with 7 unknowns. As a result, six of the link forces can be calculated based on the seventh force, which is indefinite (independent); the entity gripping is achieved through the latter. Synthetically, the necessary and sufficient condition for entity gripping, considering only contact forces, can be expressed as follows: an entity gripping, involving cancellation of C12 ≤ 6 degrees of freedom, must use a system of a minimum of C12 + 1 links of rank C12.

If the friction is neglected, the number of links between the entity and the contact elements equals the number of punctual contacts. If the friction is considered, the entities being non-deformable, the number of links is three times higher (in each punctual contact, one link is materialized through a force normal to the tangent plane and two links correspond to the independent linear friction forces in the tangent plane); therefore, in the latter case the number of contact points required is considerably smaller. If external forces such as gravity and inertia act on the entity, the gripping condition of the entity considered assumes that the resultant torsor of all forces in the point O (the origin of the fixed reference system) is null. Taking into account the resultant R of the external forces (other than contact forces), the gripping condition becomes the equation:

[G*][f] = [R],    (33)

in which [f] is the matrix of amplitudes of the contact forces, and the gripping matrix [G*] is composed of elementary torsors. The stability of the entity gripping requires positive (F_i > 0) contact forces, normal to the entity surface, and the forces tending to overcome the friction-generated forces must respect the conditions:

F^{*2}_{it1} + F^{*2}_{it2} \le \mu_i^2 F_i^2, \qquad M_{ip} \le M_{if} = \upsilon F_i.    (34)
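The per-contact stability conditions of eq. (34) can be sketched as a simple feasibility check. The function name and the numerical values below are illustrative assumptions, not from the chapter.

```python
def grasp_contact_stable(F_n, F_t1, F_t2, M_p, mu, nu):
    """Check the per-contact stability conditions of eq. (34): a positive
    normal force, the sliding friction limit F_t1^2 + F_t2^2 <= mu^2 * F_n^2,
    and the pivoting limit M_p <= nu * F_n."""
    if F_n <= 0.0:
        return False                       # contact must push, never pull
    sliding_ok = F_t1**2 + F_t2**2 <= (mu * F_n)**2
    pivoting_ok = abs(M_p) <= nu * F_n
    return sliding_ok and pivoting_ok

# 10 N normal force, mu = 0.4: a tangential resultant up to 4 N is held.
print(grasp_contact_stable(10.0, 2.0, 3.0, 0.01, mu=0.4, nu=0.005))  # True:  sqrt(13) < 4
print(grasp_contact_stable(10.0, 3.0, 3.0, 0.01, mu=0.4, nu=0.005))  # False: sqrt(18) > 4
```

The first inequality is the friction-cone condition written on the two independent tangential components; the second bounds the pivoting moment by the pivoting friction coefficient υ.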


If we consider a system of solids in which one of them is the grasped entity (see Figure 42b), the static gripping condition requires a mobility degree M ≤ 0 for the attached mechanism.

MICROMANIPULATION WITH HUMAN HAND - PREMISES AND MATHEMATICAL MODELING

Micromanipulation

Micromanipulation represents the possibility to change the position of the gripped piece relative to the gripper only, with or without changing the contact points, due to the gripper's capacity to allow the change of the relative position of some of its elements. The micromanipulation possibility is specific to the human hand (see Figure 35), to anthropomorphic grippers, and to other grippers that operate like them, such as tentacular grippers. Considering the piece gripped by two articulated fingers (see Figure 44a; C1, C2 are the contact points), by changing the relative position of the elements and the angles between them, the piece can be moved to the position in Figure 44b, perhaps up to an extreme position (see Figure 44c; s represents the micromanipulation area of the piece center q) (Kerr and Roth 1986, 6-10).

(a)

(b)

(c)

Figure 44. Micromanipulation with two fingers (adapted by the author).

Minimum Mathematical Conditions for the Stability of Micromanipulation

Starting from micromanipulation with the human hand (see Figure 35b4 and Figure 45a), in general, if [Ω]_0 = [ω_x, ω_y, ω_z, V_{Ox}, V_{Oy}, V_{Oz}]^t is the velocity imposed on a point O attached to the grasped object, and [Ω]_j is the matrix of the corresponding motor movements of each finger (see Figure 45b), the following relation can be written, adapting after (Staretu 2011, 233-235; Kerr and Roth 1986, 6-10; Salisbury and Roth 1983, 36-39; Nakashima et al. 2010, 669-675):

[Ω] = G^t [Ω]_0,    (35)

where [Ω] groups all the absolute finger velocities transmitted through the contacts with the grasped object, and G is the gripping matrix (see equation (31)). Because:

[Ω] = J[q̇],    (36)

where J is the Jacobian of the biomechanism corresponding to the fingers (in this case three fingers) and q̇ is the vector of joint velocities of the fingers, the joint velocities necessary to impose the movement on the micromanipulated object are obtained:

[q̇] = J^{-1} G^t [Ω]_0.    (37)
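The chain of eqs. (35)-(37) can be illustrated with a toy numerical case. The sketch below reduces the problem to a planar 3-DOF finger, so that J and G^t are small square matrices; all numbers are illustrative assumptions, not data from the chapter.

```python
import numpy as np

# Planar toy example of eqs. (35)-(37): one 3-joint finger contacting a piece,
# with the twist reduced to 3 DOF (vx, vy, omega_z). J is the finger Jacobian
# at the current posture; Gt maps the imposed piece twist to the velocity
# transmitted through the contact.

J = np.array([[1.0, 0.5, 0.2],    # contact-point velocity vs. joint rates
              [0.0, 0.8, 0.4],
              [1.0, 1.0, 1.0]])
Gt = np.eye(3)                     # here the contact transmits the full planar twist

omega_0 = np.array([0.01, 0.02, 0.0])      # desired piece twist [vx, vy, wz]
q_dot = np.linalg.solve(J, Gt @ omega_0)   # eq. (37): q_dot = J^-1 Gt omega_0

# Verify: the joint rates reproduce the imposed twist through the contact.
print(np.allclose(J @ q_dot, omega_0))  # True
```

In practice J is posture-dependent and can become singular; eq. (37) is then solved only where the finger biomechanism keeps J invertible, which is one of the practical limits of micromanipulation mentioned below.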

In the case of assembly (montage), this displacement, which changes the piece position relative to the gripper, must be possible in order to reduce the assembly errors Δ (delta) (see Figure 46) between the reference system attached to the mounting part (S1) and the reference system attached to the part on which the assembly is made (S2). The base reference system is attached to the base of the prehensor and the robot arm, so it remains fixed. In micromanipulation, as in active compliance, the force generated during assembly is used to set the micromovements that must be performed by the "fingers," so that the force gradually becomes zero and the assembly becomes possible. Obviously, the "fingers" are equipped with the necessary force transducers. To solve the problem raised by micromanipulation, we use both direct and inverse kinematics, as applied in the study of the guiding chains (positioning and orientation) of robots (Coiffet 1992, 221). The micromanipulation of a rectangular piece with an anthropomorphous four-finger gripper is exemplified in Figure 47 (Jacobson et al. 1984, 26-32; Brock et al. 2005, 311-317) by two instants (a and b) during this procedure.


(a)


(b)

Figure 45. The kinematic scheme of micromanipulation.

Figure 46. Micromanipulation for montage.

(a)

(b)

Figure 47. Micromanipulation examples (adapted after Brock et al. 2005, 314).

A montage modality much closer to the human one is manipulation to reduce the montage error using visual sensors (video cameras), see Figure 48a. Micromanipulation, summarized above, is useful in assembly and in changing the gripped piece position in general, or even to perform some operations when the piece includes a mobile unit, for example, moving a syringe piston (see Figure 48b).

(a)

(b)

Figure 48. Examples of micromanipulation: inserting a pin (a) and operating a syringe (b).

Problems related to micromanipulation are still the object of many studies and remain highly topical.

CONCLUSION

Based on the issues presented in this chapter, we can formulate the following conclusions:

- The human hand is the result of a very long evolution, at the end of which it became an extremely powerful biomechanism for gripping and micro-handling objects whose size is commensurate with its own, objects often used by man;
- The human hand model is important both for the achievement of simple two-jaw grippers (similar to anthropomorphic grippers with two fingers) and for the achievement of 3-, 4-, 5- or even 6-finger grippers;
- The human hand biomechanism allows most of the usual gripping operations to be performed using three fingers, which has usually led to mechanical grippers with this number of fingers, decreasing the complexity and obviously the cost and the price of the gripper;
- The relative redundancy, due to the existence of two more fingers, but also to the metacarpal bones and, to a lesser extent, the carpal bones, makes possible many more gripping situations, especially complex configurations and micro-handling of objects, as well as greater precision and safety;
- Studies and research on the structural and functional particularities of the human hand must continue, in order to identify them very well and thus find appropriate technical solutions for modeling the carpal and metacarpal bones, so that the similarity and performance of mechatronic anthropomorphic grippers differ increasingly less from those of the human hand;
- Micromanipulation is a very important functional feature of the human hand, made possible by the number and configuration of the five fingers. Accomplishing micromanipulation with anthropomorphic grippers with at least three fingers is difficult and still limited, which justifies the continuation of research in order to obtain more suitable solutions.

REFERENCES

Bolboe, M. (2013). Theoretical and experimental research of anthropomorphic grippers with reduced number of fingers for robots (in Romanian). PhD thesis, Transilvania University of Brasov, Romania.
Brock, O., Fagg, A., Grupen, R., Platt, R., Rosenstein, M. & Sweeney, J. (2005). “A framework for learning and control in intelligent humanoid robots.” International Journal of Humanoid Robotics, 10(2), 301-336. Accessed June 28, 2019. doi: 10.1142/S0219843605000491.
Coiffet, P. (1992). La robotique, principes et applications. Paris: Hermes. [Robotics, principles and applications.]
Cutkosky, M. R. & Wright, P. K. (1986). “Modeling manufacturing grips and correlation with the design of robotic hands.” Proc. of the 1986 IEEE International Conference on Robotics and Automation, 1533-1539, San Francisco, CA. Accessed June 16, 2019. doi: 10.1109/ROBOT.1986.1087525.
Cutkosky, M. (1989). “On Grasp Choice, Grasp Models, and the Design of Hand for Manufacturing Tasks.” IEEE Transactions on Robotics and Automation, 5(3), 269-279. Accessed June 12, 2019.
Dudiță, FL., Diaconescu, D. & Gogu, Gr. (1987). Mechanisms course. Vol. 4. Kinematics of articulated mechanisms. Classic mechanisms. Robot-mechanisms (in Romanian). Transilvania University of Brasov, Romania.
Feix, T., Romero, J., Schmiedmayer, H. B., Dollar, A. M. & Kragic, D. (2016). “The GRASP Taxonomy of Human Grasp Types.” IEEE Transactions on Human-Machine Systems, 46(1), 66-77. Accessed June 10, 2019. doi: 10.1109/THMS.2015.2470657.
Jacobson, S. C., Wood, J. E., Knutti, D. F. & Biggers, K. B. (1984). “The UTAH/MIT dextrous hand: Work in progress.” The International Journal of Robotics Research, 3(4), 21-50. Accessed June 20, 2019. doi.org/10.1177/027836498400300402.
Kato, I. (1982). Mechanical Hands Illustrated. New York: Hemisphere.
Kerr, J. & Roth, B. (1986). “Analysis of Multifingered Hands.” The International Journal of Robotics Research, 4(4), 3-17. Accessed June 20, 2019. doi.org/10.1177/027836498600400401.
Kovacs, Fr. & Cojocaru, G. (1982). Manipulators, Robots and their Industrial Applications (in Romanian). Timisoara: Facla.
Liu, J., Feng, F., Nakamura, Y. C. & Pollard, N. S. (2014). “A Taxonomy of Everyday Grasps in Action.” 14th IEEE-RAS International Conference on Humanoid Robots, 573-580. Accessed June 20, 2019. doi: 10.1109/HUMANOIDS.2014.7041420.
Lundstrom, G. (1977). Industrial Robots. Gripper Review. England: International Fluidics Services Limited.
Lyons, D. (1985). “A simple set of grasps for a dextrous hand.” IEEE International Conference on Robotics and Automation, St. Louis, MO, USA, 588-593. Accessed June 22, 2019. doi: 10.1109/ROBOT.1985.1087226.
Mason, M. T. & Salisbury, J. K. Jr. (1985). Robot Hands and the Mechanics of Manipulation. Cambridge: M.I.T. Press.
Mogos, Ghe. & Inaculescu, A. (1972). Compendium of Anatomy and Physiology (in Romanian). Bucuresti: Ed. Stiintifica.
Nakashima, A., Uno, T., Hayakawa, Y., Kondo, T., Sawada, S. & Nanba, N. (2010). “Synthesis of Stable Grasp by Four-Fingered Robot Hand for Pick-and-Place of Assembling Parts.” 5th IFAC Symposium on Mechatronic Systems, Cambridge, MA, USA, Sept 13-15, 43(18), 669-676. Accessed June 22, 2019. doi.org/10.3182/20100913-3-US-2015.00120.
Napier, J. R. (1956). “The prehensile movement of the human hand.” Journal of Bone and Joint Surgery, 38-B(4), 902-913. Accessed June 22, 2019. doi.org/10.1302/0301-620X.38B4.902.
Pheidias. (432 BC). Statue of Zeus - Reconstruction of the statue of Zeus in the Hermitage Museum. Available on https://archaeologynewsnetwork.blogspot.com/2015/04/marble-naturally-illuminated-statue-of.html#xjVYV7x69Q84TyUb.97.
Pollard, N. S. (2016). Evaluation of Dexterous Manipulation “In the Wild.” Carnegie Mellon University, http://clem.dii.unisi.it/~malvezzi/graspquality/wp-content/uploads/2016/05/WSbenchmark-Pollard.
Püschel, F. (1956). Künstliche Hände, künstliche Arme. Berlin W35: Technischer Verlag Herbert Cram. [Artificial hands, artificial arms.]
Rosheim, M. E. (1994). Robot Evolution: The Development of Anthrobotics. New York: A Wiley-Interscience publication.


Salisbury, K. & Roth, B. (1983). “Kinematic and force analysis of articulated mechanical hands.” ASME J. Mech., Transmiss., Autom. Design, 105(1), 35-41. Accessed June 23, 2019. doi.org/10.1115/1.3267342.
Staretu, I., Neagoe, M. & Albu, N. (2001). Mechanical Hands. Anthropomorphic Gripping Mechanisms for Prostheses and Robots (in Romanian). Brasov: Ed. Lux Libris.
Staretu, I. (2005). Elements of Medical Robotics and Prosthesis (in Romanian). Brasov: Ed. Lux Libris.
Staretu, I. (2011). Gripping Systems. Tewksbury, Massachusetts: Derc Publishing House.
Vaucanson, J. de. (1738). Available on http://musicaetecnologia191227.blogspot.com/2013/06/jacques-de-vaucanson-e-i-suoi-automata.html.
www 1: available on https://www.orthopaedicsone.com/display/Main/Carpo-metacarpal+joint.
www 2: available on https://www.wikiwand.com/en/Pisohamate_ligament.
www 3: available on https://frcemsuccess.com/wp-content/uploads/2016/12/ThenarMuscles.png.
www 4: available on https://biocyclopedia.com/index/general_zoology/structural_and_functional_adaptations_of_mammals.php.

In: Industrial Robots: Design, Applications and Technology ISBN: 978-1-53617-779-4 Editors: Isak Karabegović et al. © 2020 Nova Science Publishers, Inc.

Chapter 14

APPLICATION OF INDUSTRIAL ROBOTS FOR ROBOTIC MACHINING Janez Gotlih*, Timi Karner, Karl Gotlih and Miran Brezočnik Faculty of Mechanical Engineering, University of Maribor, Maribor, Slovenia

ABSTRACT Manufacturing has advanced to a level where machining flexibility is more important than ever. Automation is regularly employed to increase efficiency of the manufacturing process, where industrial robots are already an integral part. However, not only for material handling, today, robots are also used for machining itself, which additionally increases the adaptably and efficiency of the overall process and reduces the price of the product. The most common application fields for robotic machining are found where machines tools are too expansive, which is in case of manufacturing small batches of large workpieces or where accuracy of robotic machining can be controlled, which is in case of operations where machining forces are low and more general in case of machining softer materials. To optimize robotic machining different approaches are used, whereby most deal with the robots structural deficiencies originating from its static, kinematic and dynamic properties. Identification and modelling of the robots properties in combination with modern computer aided manufacturing software solutions allows for optimization of the manufacturing process’ in an advanced way.

Keywords: robotics, machining, kinematics, dynamics, modeling, optimization

* Corresponding Author's E-mail: [email protected].


INTRODUCTION In production engineering, flexibility has become very important. Thus, even for machining operations traditionally performed by machine tools, industrial robots are applied. However, to be interesting for the industry, robots have to prove advantageous compared to specialized machine tools. Their flexibility is one of the main reasons for their wide use, as it is possible to perform different operations on a single station. Specialized machine tools are still favored for achieving the final shapes of parts, especially for operations where high dynamic loads occur, as machining with industrial robot often lacks the desired process accuracy. The industries, processes and products where industrial robots are most commonly applied are summarized (Figure 1). Among the processes, robotic milling has been identified as the most widely used area of industrial robots.

Figure 1. Products and processes used in different industries (Iglesias et al. 2015, 911-917).

Iglesias et al. (Iglesias et al. 2015, 911-917) also summarized the processes performed with industrial robots in terms of material hardness and production stage (Figure 2). They found that in the case of drilling, robots can be used for machining the hardest materials, and that in the case of softer materials, they can also be used for milling and general cutting operations.


For most products listed in Figure 1, machining accuracy is important. Kubela et al. (Kubela et al. 2016, 720-725) classified the reasons for inaccuracies into three groups: environmental errors, robot-dependent errors, and process-dependent errors. The robot-dependent errors were further subdivided into geometric, non-geometric and system errors. The authors found that the backlash effect is an important factor in a robot's inaccuracy. In the case of milling wooden parts, they detected inaccuracies in the range of 0.3-0.5 mm.

Figure 2. Product stage and material hardness (Iglesias et al. 2015, 911-917).

Wu et al. (Wu et al. 2015, 151-168) noted that considering only the geometrical parameters for robot positioning is insufficient, as elasto-static deformations, friction and backlash also affect the robot's repeatability. They explored the possibility of geometrical error compensation and added a fifth step to the standard robot calibration procedure to improve positional accuracy. The proposed approach includes the following steps: modeling, experiment design, measurement, recognition and implementation. By the use of the proposed approach, they achieved improved positioning accuracy, with deviations up to 0.17 mm. Positional accuracy of up to 0.11 mm was measured by Józwik et al. (Józwik et al. 2016, 86-96), who studied the influence of the direction of approach to the desired point on the positional accuracy of the robot using a high-speed camera. They found that positional accuracy decreases with the continuous operation of the robot. Cordes and Hintze (Cordes and Hintze 2017, 1075-1083) addressed the errors in robotic milling. They showed that the backward movement error is not the result of the backlash effect but of the hysteresis that occurs when the joints are loaded. Based on the


identification of the joint stiffnesses, they developed a model for indirect tool path compensation and tested it on the case of aluminum milling. To improve the accuracy of robotic aluminum milling, Slavkovic et al. (Slavkovic et al. 2013, 2083-2096) measured the deformation of the tool tip due to external loading using an optical measuring device. They developed a model for off-line path compensation based on the cutting force. By the use of the model, they achieved an accuracy improvement of 0.18 mm in the milling of aluminum workpieces prescribed by the standards for CNC machine calibration. Barnfather et al. (Barnfather et al. 2016, 49-56) pointed out that standardized procedures for the evaluation of robotic machining do not exist. By combining existing standard measurement protocols for machine tools, they developed a methodology for evaluating the quality of robotic machining and proposed appropriate standards that could be used for the static and dynamic evaluation of robotic machining. In another work, Barnfather et al. (Barnfather et al. 2016, 561-577) developed an algorithm for optical product control and tool path correction in robotic milling. The algorithm proved effective in the test environment, but in a realistic environment the authors encountered problems with capturing optical data for tool path correction. Kruger et al. (Kruger et al. 2016, 409-420) proposed a method for tool path correction in robotic machining based on an error model obtained from measurement data from milling and drilling experiments. They evaluated the influence of cutting parameters and cutting forces on the quality of robotic machining by the flatness and surface roughness of the produced parts. To improve the accuracy of robotic machining, the desired machining accuracy was considered during offline tool path generation. By the use of the model, the surface flatness was improved by 66%. Furtado et al. (Furtado et al. 2017, 2487-2502) proposed a series of experiments to obtain the most appropriate posture and process parameters for robotic milling. To reduce the surface waviness of the end product, they suggested measuring the waviness of the workpiece and using inverse values in the control program. Through aluminum milling experiments they identified the robot posture, cutting direction and cutting depth as the most important process parameters, and showed that cutting speeds and feed rates within the experimentally considered space had no statistically significant effect. By compensating for the surface waviness, they achieved a reduction of 22%. The average waviness was measured in accordance with the ISO standard, and a linear regression model was established for the dependence of the waviness on the input parameters. Schneider et al. (Schneider et al. 2016, 3-15) examined the causes of robotic milling errors and ranked them by strength and frequency. They described different approaches for indirect and direct error compensation and evaluated them experimentally in aluminum and steel milling experiments. They found that the implementation of a compensation system and its capacity to compensate errors depend on the manufacturer and on the model of the robot.


As robotic machining accuracy depends on the robot's posture and on external loads, the system's static, kinematic, and dynamic properties have to be studied. Once the static, kinematic and dynamic factors are identified, compensation models can be developed for accuracy improvement. Chen and Dong (Y. Chen and Dong 2012, 1489-1497) divide the field of robotic machining into the development of robot machining systems, robot path planning, vibration analysis (which also includes methods for vibration compensation), dynamic modeling, and stiffness modeling. They found that most research is aimed at improving the accuracy and efficiency of robotic machining. The authors identified the most promising areas as optimizing machining efficiency, designing tool paths based on robot stiffness, optimizing robot structure, and designing robotic lines. To evaluate the kinematic performance of robots, various indicators based on the determinant of the Jacobian matrix were introduced. Many have been summarized by Gotlih et al. (K. Gotlih et al. 2011, 205-211), who illustrated the importance of the velocity anisotropy index by the use of an industrial robot and a four-dimensional workspace. For static parameter identification, the robot's stiffness is evaluated by the use of the global (Cen and Melkote 2017, 50-61; Matsuoka et al. 1999, 83-89; Zaeh and Roesch 2014, 737-744) or the local approach (Bauer et al. 2013, 245-263; J.J. Wang et al. 2009, 3826-3831). The posture-dependent global approach captures nonlinear effects contributing to the robot's structural deformation. To extend the stiffness model to the robot's entire workspace, some authors have measured the robot's stiffness in several postures (Bauer et al. 2013, 245-263; Cen and Melkote 2017, 50-61). The local approach is used to establish a robot's stiffness model from the stiffnesses of its individual components, which requires identification of multiple parameters that can be identified only if the robot is disassembled (Tyapin et al. 2014). Klimchik et al. (Klimchik et al. 2015, 1-22) identified several parameters required to establish a complete robot stiffness model and used different techniques for selecting the most important ones. To utilize the benefits of both the global and the local approach, combined approaches were proposed (Dumas et al. 2012, 649-659; Tyapin et al. 2014). To evaluate the dynamic performance of robots, dynamic indicators based on the Jacobian matrix are used (Asada 1983; Yoshikawa 1985, 1033-1038). Advanced dynamic models also include modal parameters, commonly identified by experimental and analytical methods. For experimental parameter identification, the frequency response function (FRF) obtained by impact hammer tests (Bauer et al. 2013, 245-263) is used, and for analytical parameter identification, FEM simulations are performed (Palmieri et al. 2014, 961-972). To improve the performance of robotic machining, surrogate models based on regression modeling and artificial neural networks (ANN) are used (Friedrich et al. 2017, 124-134; Z. Gao et al. 2010, 180-189; Marie et al. 2013, 13-43; Modungwa et al. 2013, 26-33; Palmieri et al. 2014, 961-972; Slamani et al. 2015, 268-283). For optimization, especially


the genetic algorithm (GA) proved suitable and was successfully applied for robot topology optimization (Z. Gao et al. 2010, 180-189; Mohamed et al. 2015, 240-256), calibration (Wu et al. 2015, 151-168), workpiece placement (Hassan et al. 2017, 113-132; Lim et al. 2017, 87-98) and robot path planning (Vosniakos and Matsas 2010, 517-525).
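The stiffness-based compensation idea that recurs throughout this survey can be sketched numerically. The following is a hedged illustration only: experimentally identified joint stiffnesses K_θ give a posture-dependent Cartesian stiffness K_x = J^{-T} K_θ J^{-1}, and the predicted deflection under the cutting force is mirrored into the commanded tool path. The 3-DOF planar Jacobian, stiffness values and forces below are invented for the example and do not come from any of the cited works.

```python
import numpy as np

J = np.array([[-0.6, -0.3, -0.1],   # planar positional Jacobian at the current posture
              [ 0.8,  0.4,  0.2],
              [ 0.1,  0.6,  0.3]])
K_theta = np.diag([2.0e6, 1.5e6, 0.8e6])   # identified joint stiffnesses [Nm/rad]

J_inv = np.linalg.inv(J)
K_x = J_inv.T @ K_theta @ J_inv            # posture-dependent Cartesian stiffness

F_cut = np.array([120.0, -80.0, 30.0])     # measured cutting force at the TCP [N]
deflection = np.linalg.solve(K_x, F_cut)   # predicted TCP deviation under load [m]

target = np.array([0.500, 0.200, 0.100])   # nominal tool position [m]
commanded = target - deflection            # mirror compensation of the tool path
print(deflection * 1e3, "mm of predicted deflection")
```

Because J changes with the robot posture, K_x and hence the compensation must be recomputed along the tool path, which is why the cited works emphasize posture-dependent stiffness identification.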

ROBOTIC MACHINING Recently, the research of robotic machining is focused on milling, drilling and deburring, while few studies on robotic grinding and polishing were also detected. Due to the wide product range and the promising results, especially robotic milling was often used in research. The first research of robotic machining occurred at the break of millennium, when Matsouka et al. (Matsuoka et al. 1999, 83-89) described the possibility of using a serial robot for milling of extruded aluminum alloys. They used high-speed fine precision pocket milling technology to manufacture the sample parts. Machining accuracy was evaluated by the use of a coordinate measuring machine (CMM) based on measured part diameter, circularity and circle profile. When evaluating accuracy, the actual shape was compared with the tool path, and the cutting forces at different cutting parameters were measured. The sine behavior of the cutting force component in feed direction was highlighted as an important factor for optimizing the accuracy of robotic milling. Most research on robotic milling has occurred in the last decade. Sörnmo et al. (Sörnmo et al. 2012) studied the accuracy of robotic milling. To compensate for the errors caused by the high milling forces and the low stiffness of the robot, they developed a dynamic micromanipulator with a control algorithm that adapts to the process. In aluminum milling experiments, authors shown that they achieve better results using a compensation manipulator. Bauer et al. (Bauer et al. 2013, 245-263) used a three-step method to obtain variable chip thickness, which they used as the basis for calculating the resultant cutting force acting on the TCP of the robot. The cutting force was determined based on cutting depth, cutting thickness, and cutting force coefficients. By adjusting the tool paths to compensate for cutting forces, they were able to improve the accuracy of robotic milling and reduce the errors resulting from high milling forces. 
They found that another important source of error is the accuracy of the robot controller. To improve the accuracy of robotic milling, Tyapin et al. (Tyapin et al. 2014) developed a control program that adjusts the tool path based on fuzzy logic and the robot's stiffness model. The control program switches between error compensation strategies based on the vibration strength, measured with an accelerometer during milling. By the use of the control program, the authors found a 70% reduction in cutting force errors in aluminum milling experiments.

Application of Industrial Robots for Robotic Machining


Denkena and Lepper (Denkena and Lepper 2015, 79-84) presented robotic milling as an affordable solution for the production of large products for the aviation industry. Because of the errors resulting from low robot stiffness and high milling forces, they proposed a model for tool path compensation based on force measurements. They developed a spindle head holder equipped with strain gauges to measure the milling forces. They validated the whole process by milling aluminum and confirmed its usefulness. Kaldestad et al. (Kaldestad et al. 2015) described an approach for improving the accuracy of robotic milling by correcting tool paths in indirect robot programming. The tool path correction took into account the robot's stiffness and the milling forces. In addition, they added a vibration damper to the robot's head. The milling forces were calculated by the use of an analytical method that takes into account the nonlinear relationships between the milling force and the cutting parameters. By the use of the system, they successfully reduced vibrations and improved the accuracy of aluminum milling. Mejri et al. (Mejri et al. 2015, 351-359) described the dynamics of robotic milling and analyzed the stability of its operation. In the model, the axial and tangential components of the cutting force were taken into account. The authors found that stable operation of the robot cannot be ensured by the robot's posture alone, but that the cutting parameters also have to be adjusted. Klimchik et al. (Klimchik et al. 2016, 174-179) compared five robots with respect to their hole milling accuracy. The holes were milled at different robot postures and at the optimal posture according to the robot's stiffness. The circularity of the holes was checked to evaluate the quality of machining. Gotlih et al. (J. Gotlih et al. 2017, 233-244) described a procedure for the optimal placement of an arbitrarily shaped workpiece in the workspace of a milling robot.
Based on dimensional accuracy measurements of milled products, the authors found that there are two areas in the robot's workspace, separated by a contour of suitable milling accuracy. Mousavi et al. (Mousavi et al. 2017, 3053-3065) used the differential equation of motion to describe the dynamics of robotic milling and determined the areas of stable and unstable operation. With milling experiments, they confirmed the correctness of the model and showed that, by considering functional redundancy, operational stability can be maintained without changing the cutting parameters. Diaz et al. (Diaz et al. 2016) described the operation of an automatic robotic deburring system. The system generates initial tool paths based on the CAD model of the product. A laser then detects the workpiece in the robot's workspace and corrects the tool paths online during the deburring process. Kuss et al. (Kuss et al. 2016, 545-550) used an optical control system with a stereo camera to correct the tool trajectory in robotic deburring. By the use of the system, the


Janez Gotlih, Timi Karner, Karl Gotlih et al.

robot's error in tracking the tool path was reduced when deburring non-alloy structural steel. Leali et al. (Leali et al. 2016, 47-55) developed a method for calibrating a robotic cell for the machining of aeronautical parts. The method was tested on the fine machining of cast aluminum gearbox housings with complex shapes and strict dimensional and geometric requirements. The authors demonstrated through experiments the effectiveness of the method in increasing the accuracy of robotic machining. Wang et al. (G. Wang et al. 2017, 411-421) achieved stable conditions for robotic drilling with the help of a pressure foot. With robotic drilling experiments on high-strength steel, they found that resonant vibrations are significantly influenced by the feed rate and the depth of cut. Bu et al. (Bu et al. 2017, 3253-3264) considered the axial stiffness of the robot in various postures for robotic drilling. They introduced a quantitative index for the evaluation of drilling accuracy and experimentally confirmed that drilling accuracy is higher for robot postures with greater axial stiffness. Cen and Melkote (Cen and Melkote 2017, 486-496) described the development of a dynamic model for robotic milling. Comparison of the model with aluminum milling experiments showed the possibility of reducing the maximum force by 50%-75%. The milling forces were measured with a dynamometer in three directions. Lin et al. (Lin et al. 2017, 59-72) found the optimal robot posture for robotic drilling based on a new deformation index. The authors verified the approach with robotic drilling experiments and confirmed the feasibility of the proposed methodology. Lin et al. (Lin et al. 2018, 83-95) investigated the influence of the spindle configuration on the accuracy of robotic machining. Based on the compliance matrix, they derived the complementary stiffness to estimate the error when using different spindle configurations.
They performed an optimization of the spindle configuration for the prescribed trajectory at the optimal robot posture. Results were experimentally validated and the significant influence of the spindle configuration on machining accuracy was confirmed (Figure 3).

Figure 3. Un-optimized and optimized spindle configuration (Lin et al. 2018).


Kinematic Performance of Robots

The first studies on the kinematic performance of robots appeared in the 1980s (Yoshikawa 1985, 3-9). Among the first mathematical descriptions of the kinematic ability of a mechanism, Yoshikawa introduced the manipulability index W. For non-redundant mechanisms, manipulability is defined by Eq. (1).

W = det(J)  (1)

For redundant mechanisms, manipulability is defined by Eq. (2).

W = √(det(J Jᵀ))  (2)

Mathematically, manipulability is the product of the singular values of the Jacobian matrix and represents the volume of the ellipsoid defined by the Jacobian matrix. Another kinematic performance index based on the singular values of the Jacobian matrix is the condition number κ, defined by Eq. (3).

κ = σ_max / σ_min  (3)

In Eq. (3), σ_min and σ_max represent the smallest and largest singular values of the Jacobian matrix; the condition number thus describes the distance from a singular position. The isotropy index was introduced as the inverse of the condition number and is defined by Eq. (4).

1/κ = σ_min / σ_max  (4)
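As an illustration of Eqs. (1)-(4), all three indexes can be computed directly from a robot's Jacobian. The sketch below uses a two-link planar arm whose link lengths and joint angles are illustrative assumptions, not values from any of the cited studies.

```python
import numpy as np

def manipulability(J: np.ndarray) -> float:
    """Yoshikawa's manipulability W = sqrt(det(J J^T)), Eq. (2).

    For a square (non-redundant) Jacobian this reduces to |det(J)|, Eq. (1).
    """
    return float(np.sqrt(np.linalg.det(J @ J.T)))

def condition_number(J: np.ndarray) -> float:
    """Condition number kappa = sigma_max / sigma_min, Eq. (3)."""
    s = np.linalg.svd(J, compute_uv=False)
    return float(s.max() / s.min())

def isotropy(J: np.ndarray) -> float:
    """Isotropy index 1/kappa = sigma_min / sigma_max, Eq. (4)."""
    return 1.0 / condition_number(J)

# Illustrative 2-link planar arm Jacobian (link lengths, joint angles assumed)
l1, l2 = 0.5, 0.4
q1, q2 = 0.3, 1.2
J = np.array([
    [-l1 * np.sin(q1) - l2 * np.sin(q1 + q2), -l2 * np.sin(q1 + q2)],
    [ l1 * np.cos(q1) + l2 * np.cos(q1 + q2),  l2 * np.cos(q1 + q2)],
])
print(manipulability(J), condition_number(J), isotropy(J))
```

Postures with κ near 1 (isotropy near 1) are far from singularities; as the arm approaches a singular posture, σ_min tends to zero, manipulability collapses, and κ grows without bound.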

Since then, new kinematic performance indexes have been introduced for different applications. The manipulability of closed kinematic chains was studied by Park and Kim (Park and Kim 1998, 542-548) by the use of a Riemannian metric, and a dynamically weighted kinematic manipulability was proposed. Siciliano (Siciliano 1999, 437-445) used the manipulability index to derive the velocity and force manipulability, while its generalization to multi-body systems was discussed by Wen and Wilfinger (Wen and Wilfinger 1999, 558-567), who, based on the shape of the velocity and force ellipsoids, divided singularities into two groups: unmanipulable and unstable. Mostly for robot design, new indexes based on the condition number appeared (Collard et al. 2005, 69-84; Sika et al. 2012, 48-63). Collard et al. (Collard et al. 2005, 69-84) used the posture-independent global dexterity index (GDI), which is defined as the mean value of the inverse condition number across the volume of the robot's workspace and thus easily allows designing a mechanism with the highest overall GDI. In some studies, several kinematic performance indexes were used in combination. Huo et al. (Huo et al. 2008, 456-464) solved the problem of the mechanism approaching the joint boundaries and singularities by introducing a new index, defined as a combination of manipulability and condition number. Merlet (Merlet 2006, 199-206) highlighted the problems of linking Jacobian-based kinematic performance indexes with the mechanism's accuracy. As the main reason, he identified the physical unit inconsistency of the Jacobian matrix. To overcome this problem, a characteristic length was introduced in the calculation of the Jacobian, which, however, is difficult to use in robotics due to the lack of a geometric interpretation (Siciliano and Khatib 2008). Subsequently, the Frobenius norm was used in some studies (Z. Gao et al. 2010, 180-189), allowing for a dimensionless treatment of the kinematic performance of mechanisms. Mansouri and Ouali (Mansouri and Ouali 2011, 434-449) presented another solution to the problem with a double transformation of the Jacobian matrix, which is essentially a transition from kinematic to dynamic indexes. They introduced the power manipulability, which is a combination of kinematic and static parameters and contains translational and rotational components. They showed that power manipulability is a homogeneous tensor that is insensitive to changes of physical units. Due to the functional redundancy that occurs in five-axis machining with a six-axis robot, Léger and Angeles (Léger and Angeles 2016, 155-169) expected an increase in machining accuracy by reducing the condition number expressed with a characteristic length.
Using a new gradient method, they obtained the tool trajectory with the most favorable course of the condition number and used the approach for offline programming of the robot.

Figure 4. Optimal workpiece layout for robotic machining (Caro et al. 2013, 2921-2926).


Among robotic machining applications, kinematic indexes were most commonly used to optimize robotic milling and drilling. Vosniakos and Matsas (Vosniakos and Matsas 2010, 517-525) obtained an optimal tool path with increased manipulability and reduced joint torques for robotic milling. Caro et al. (Caro et al. 2013, 2921-2926) searched for the most appropriate placement of the workpiece in the workspace of a milling robot (Figure 4). To provide the highest quality indicator for machining, they obtained redundancy schemes by optimizing the elasto-static and kinematic properties of the robot. For the description of the kinematic properties, they used the condition number expressed by the Frobenius norm. To determine the influence of the cutting parameters on the cutting force in the robotic milling of carbon fiber polymers, Slamani et al. (Slamani et al. 2015, 268-283) used the condition number and designed an experiment that also considered the influence of the robot's posture. Xiong et al. (Xiong et al. 2019, 19-28) optimized the robot's posture to improve the accuracy of robotic milling, taking into account the condition number. With milling experiments at the optimal robot posture and tool trajectory, they found greater accuracy and confirmed the effectiveness of the proposed method. To optimize the layout of the workpiece in the workspace of the robot, Lin et al. (Lin et al. 2017, 59-72) used the condition number expressed with a characteristic length. By the use of kinematic performance maps of the robot, problematic zones for individual joints were eliminated. The optimal workpiece layout was tested in the case of robotic drilling, and the feasibility of the proposed methodology was confirmed. A three-stage optimization for placing a workpiece in the workspace of a drilling robot was also developed by Lin et al. (Lin et al. 2017, 59-72).
Their approach divides the robot's workspace into sub-areas that allow optimal positioning of the workpiece and is based on manipulability, stiffness and a new deformation index. Kinematic performance indexes were also used to optimize less common machining operations. Zargarbashi et al. (Zargarbashi et al. 2012, 694-699) used the condition number to distribute the rotations of the individual joints more evenly during deburring. For the robotic fine machining of cast parts, Sabourin et al. (Sabourin et al. 2012, 381-391) adapted the actual tool path to the desired tool path by the use of an optical 3D measurement system. In order to obtain the most appropriate tool trajectory in the robot's workspace, they proposed a multi-criteria optimization, taking into account the robot's structural properties, the external loads, and a factor based on manipulability. Palpacelli (Palpacelli 2016, 1-8) used the inverse condition number to analyze the kinetostatic properties of a robotic mechanism enhanced with wire tensioners to achieve a higher and evenly distributed force on the tool during robotic friction welding and incremental forming.


Static Performance of Robots

In robotics, external loads are related to the deformation of the robot's structure through a stiffness matrix. As a simplification, the displacement of the TCP is used instead of the structural deformation. The vector of static force and torque components describes the external load, and the translation and rotation vector of the TCP describes the TCP displacement. In practice, the compliance matrix, the inverse of the stiffness matrix, is often used. Salisbury (Salisbury 1980, 95-100) proposed the inclusion of a stiffness model in the control algorithm of a robotic mechanism already in 1980. He derived the required motor torques to compensate for the robot's stiffness and defined the stiffness matrix in internal coordinates as described by Eq. (5).

K_J = Jᵀ K_C J  (5)

In Eq. (5), K_J represents the stiffness matrix in internal (joint) coordinates, K_C represents the stiffness matrix in external (Cartesian) coordinates, and J is the Jacobian matrix. Later, this formulation of the stiffness matrix was called passive or structural stiffness. Chen and Kao (S.-F. Chen and Kao 2000, 835-847) pointed out that the standard formulation of the stiffness matrix is only valid when the robot is unloaded. The load on the robot deforms the geometry, which also changes the Jacobian matrix. To compensate for the deformation, the authors added the term K_G, called the conservative congruence transformation (CCT), to the standard formulation of the stiffness matrix. Eq. (6) defines the stiffness matrix in Cartesian coordinates expressed with the CCT.

K_C = J⁻ᵀ (K_J − K_G) J⁻¹,  K_G = [∂Jᵀ/∂θ_n F]  (6)
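A minimal numerical sketch of Eqs. (5) and (6), assuming a diagonal joint stiffness matrix and, for simplicity, K_G = 0 (the unloaded case); the Jacobian and stiffness values are illustrative, not identified robot parameters.

```python
import numpy as np

def cartesian_stiffness(J, k_joint, K_G=None):
    """Cartesian stiffness K_C = J^{-T} (K_J - K_G) J^{-1}, Eq. (6).

    With K_G = 0 this is the unloaded (passive) model of Eq. (5),
    K_J = J^T K_C J. k_joint holds diagonal joint stiffness coefficients.
    """
    K_J = np.diag(np.asarray(k_joint, dtype=float))
    if K_G is None:
        K_G = np.zeros_like(K_J)
    J_inv = np.linalg.inv(J)
    return J_inv.T @ (K_J - K_G) @ J_inv

# Hypothetical 2-DOF example: joint stiffnesses in Nm/rad, square Jacobian
J = np.array([[0.1, 0.4],
              [0.7, 0.2]])
K_C = cartesian_stiffness(J, k_joint=[5e5, 3e5])

# TCP deflection under an external force F follows from the compliance
# matrix (the inverse of K_C): dx = K_C^{-1} F
F = np.array([100.0, 0.0])
dx = np.linalg.solve(K_C, F)
```

Mapping the same joint stiffnesses through different Jacobians (robot postures) yields different Cartesian stiffness matrices, which is why the cited studies optimize the robot's posture for stiffness.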

K_G is an n × n matrix that describes the change in geometry due to the external force, n is the number of joints, and F is the force acting on the TCP. Due to the physical unit inconsistency of the translational and torsional elements in the Jacobian matrix, two approaches have been proposed to define the stiffness matrix (Siciliano and Khatib 2008): the first uses a characteristic length and expresses the total stiffness of the robot with the greatest joint stiffness, while the second uses the weighted Frobenius norm, which takes the root mean square of the joint stiffnesses as the total robot stiffness. The latter is more appropriate for engineering problems because the rank of the matrix is not important. The stiffness matrix written with the weighted Frobenius norm is defined by Eq. (7).

‖K‖_F ≡ √((1/n) tr(K Kᵀ)) ≡ √((1/n) tr(Kᵀ K))  (7)
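A small sketch of Eq. (7), taking the weighting factor n as the matrix dimension; the matrix K is an arbitrary illustrative example.

```python
import numpy as np

def weighted_frobenius(K: np.ndarray) -> float:
    """Weighted Frobenius norm of Eq. (7): sqrt(tr(K K^T) / n).

    n is the weighting factor, here taken as the matrix dimension;
    dropping it recovers the standard Frobenius norm.
    """
    n = K.shape[0]
    return float(np.sqrt(np.trace(K @ K.T) / n))

K = np.diag([2.0, 4.0, 6.0])
print(weighted_frobenius(K))  # equals sqrt((4 + 16 + 36) / 3)
```

Because tr(K Kᵀ) = tr(Kᵀ K), both forms in Eq. (7) are identical, and the result is a single dimensionless-style scalar summarizing the whole stiffness matrix.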

In Eq. (7), n represents the weighting factor. If the weight is neglected, the standard Frobenius norm is obtained. For the identification of stiffness parameters, analytical and experimental methods are commonly used. Matsuoka et al. (Matsuoka et al. 1999, 83-89) performed stiffness measurements with a micrometer under static TCP load. Analyzing the structural properties of a robot, they found the robot's stiffness to be about 700 times lower than that of a dedicated machine tool. Pashkevich et al. (Pashkevich et al. 2008, 966-982) presented a method for determining the stiffness of a parallel robot, in which, in addition to deformations in the joints, the elasticity of the links was also taken into account. In the simulation model, the joints were replaced with virtual springs with six degrees of freedom. The authors evaluated the stiffnesses of the individual components by FEM analysis and obtained the final model for unloaded robot postures by combining partial solutions. They illustrated the effectiveness of the model on practical cases and compared the accuracy of the results with that of the FEM analysis. The authors pointed out that the developed model allows the calculation of the robot's stiffness in singular points. Dumas et al. (Dumas et al. 2012, 649-659; Dumas et al. 2011, 881-888) described a method for evaluating the joint stiffnesses of serial robots. They derived a stiffness matrix with the CCT, where K_J represented the diagonal stiffness matrix of the joints. The authors evaluated the joint stiffnesses by measuring the TCP deformation at different external loads. To simplify the experimental evaluation of joint stiffnesses, they found the robot posture in which the change in the robot's stiffness due to external loading is smallest. Bauer et al. (Bauer et al. 2013, 245-263) performed an analysis of the structural properties of the robot and obtained the structural stiffness of the robot by measuring TCP deformations in three Cartesian directions with a laser measuring device.
They obtained the rotational stiffnesses of the joints by measuring the rotation of each joint. In the experiments, the TCP was loaded with a force F and the measurements were repeated at nine different postures in the relevant workspace. Marie et al. (Marie et al. 2013, 13-43) used two approaches to describe the elasto-static properties of the robot. They derived the parametric stiffness matrix of the entire robot structure by the use of an analytical FEM approach, where they considered the individual stiffness matrices of the joints and links. They verified the FEM model on experimental data and found that nonlinear effects that were not accounted for in the model caused deviations. In the second approach, the authors developed a fuzzy logic model based on measurement data that also considered the nonlinear effects. Ahmad et al. (Ahmad et al. 2014, 125-141) developed an analytical model for determining the stiffness of parallel, serial and hybrid manipulators. They built the
detailed stiffness model from a simplified model with consideration of all elements of the system. The detailed model takes into account the stiffness of actuators, linear guides, coupled joints and passive connections (Figure 5). The authors demonstrated the application of the method on an example of a parallel haptic robot and compared the experimentally obtained stiffness results with results obtained by FEM calculation. A comparison between the simplified and detailed models showed that, by considering passive connections and actuators, the relative error between experimental and analytical results decreases from 84% to 16% and the average error from 80% to 9%. Schneider et al. (Schneider et al. 2014, 4464-4469; Schneider et al. 2015, 2054-2059) evaluated the stiffness of the entire robot structure based on TCP deformation measurements in a highly redundant measuring cell. The authors equipped the joints with at least three LEDs and measured the spatial deformation due to external loading with three cameras. In this way, they obtained three translational and three rotational stiffnesses for each joint. With a large number of measurements, the authors showed a nonlinear relationship between external load and joint deformation (Figure 6). Tyapin et al. (Tyapin et al. 2014) proposed a combination of local and global methods for determining the joint stiffnesses. The authors presented the structural stiffness of the robot as the result of a combination of several influences. They highlighted the stiffness of the joints, the stiffness of the bearings and gearboxes, and the stiffness conditioned by the state of the control program. To evaluate stiffness, they performed measurements at one robot posture without disassembling the robot and found a difference in the stiffness of the first and last three joints, with the first three joints having greater stiffness.
The results were compared with a simulation in which the joints were modeled as a combination of a spring and a damper. Significant deviations were found between the simulation and the measurements for joints 4 and 5. The difference was attributed to the relatively small stiffness and, consequently, the large deformation of these two joints under load, and to the disregarded influence of the preceding joints on the stiffness of joints 4 and 5. Zaeh and Roesch (Zaeh and Roesch 2014, 737-744) obtained the stiffness model of a robot by loading the TCP with a pneumatic actuator and measuring the deformation in three directions with a laser measuring device. The authors used only one robot posture to establish the stiffness model.


Figure 5. Stiffness modelling methodology (Ahmad et al. 2014, 125-141).



Figure 6. Rotational stiffness around z-axis of six joints (Schneider et al. 2014, 4464-4469).

Mohammed et al. (Mohammed et al. 2016) performed a FEM analysis to determine the stiffness of a robot and found that the lowest stiffness does not always occur in the most extended posture. Kaldestad et al. (Kaldestad et al. 2015) determined normalized joint stiffnesses experimentally using a combined local and global approach. The advantage of the approach is that the stiffnesses of joints 1, 2 and 6 are determined independently of the other joints, and therefore fewer experiments are needed for their identification. The stiffnesses of joints 1, 2 and 6 must be known beforehand to determine the stiffnesses of joints 3, 4 and 5. Denkena and Lepper (Denkena and Lepper 2015, 79-84) developed a model for predicting the deformation of all robot joints in six degrees of freedom. The authors defined the TCP deformation caused by an external force as the inverse of the stiffness value. For effective use of the model, they designed a special tool holder, equipped with strain gauges, to measure the milling forces during robotic milling. Cao et al. (Cao et al. 2018, 426-435) developed a new analytic approach to obtain stiffness results comparable to those of a FEM analysis. Their approach is based on strain energy and structural decomposition of the robot. The strain energy of each component of the robot is a function of the independent position and orientation parameters, the coherence of the drive train coefficients, and the external forces and torques. The authors introduced a new index to evaluate stiffness, intended for robot design and tool trajectory optimization. Stiffness models have also been used several times for the optimization and control of robots. Brogårdh (Brogårdh 2007, 69-79) concluded that, in the future, controllers will increasingly be oriented toward the task that the robot performs and announced that identification of the virtual stiffness of the robot is going to be important.


Ott et al. (Ott et al. 2006) incorporated a stiffness model into the control of a two-handed humanoid robot. In accordance with the virtual joint model method, a three-spring model was developed that links the reference coordinate system to the coordinate systems of the tips of both hands. By accounting for stiffness, they reduced the error between motor and joint rotations. Cen and Melkote (Cen and Melkote 2017, 50-61, 2017, 486-496) added terms to the CCT stiffness matrix to consider the external forces and to reduce vibrations during robotic milling. They performed robot stiffness measurements at various postures and loads, and measured deformations of the TCP with a laser measuring device. Lai et al. (Lai et al. 2018, 2987-2996) suggested increasing the structural stiffness of the robot with a support structure. Analytical derivation and drilling experiments yielded better results, based on which the authors confirmed the reasonableness of including a support structure in robotic machining. To reduce errors in robotic milling, Chen et al. (C. Chen et al. 2019, 29-40) developed a new index for evaluating the stiffness of a robot in a plane perpendicular to the machining direction. The index does not depend on the magnitude of the force, but on the posture of the robot and the direction of the external force. By the use of the proposed index, the authors improved machining precision by adjusting the feed direction in the case of planar milling. Sabourin et al. (Sabourin et al. 2012, 381-391) considered the structural properties of the robot in addition to the kinematic ones for optimizing the trajectory for polishing of aluminum casts. They experimentally obtained a reduced stiffness matrix and found that the stiffness coefficient in the dominant direction is six times greater than in the other directions. They compared measurement results with FEM analysis and found a deviation by a factor of two.
On the assumption that the joints have the most important influence on the robot's stiffness, Wang et al. (J.J. Wang et al. 2009, 3826-3831) neglected other factors and described the stiffness matrix as a diagonal matrix whose elements were only the joint stiffness coefficients. They correlated the joint stiffness coefficients with the compliance vector, which is functionally dependent on the external forces and torques and was evaluated using deformation measurements. By the use of a force compensation model, they reduced the error in robotic milling by 60%. Guo et al. (Guo et al. 2015, 69-76) proposed a new index for the evaluation of robot stiffness. The index is based on the volume of the ellipsoid formed by the eigenvectors of the TCP translational deformation matrix. By optimizing the new index, they found a suitable posture for robotic drilling and confirmed that the robot in an optimized posture with increased stiffness can achieve greater hole machining accuracy. Diaz et al. (Diaz et al. 2017) used the functional redundancy, which they evaluated using process and product parameters, to optimize a robot's stiffness along the prescribed
milling path. They emphasized the ease of tool path planning achieved by integrating a simplified compliance model into modern CAM software. Xiong et al. (Xiong et al. 2019, 19-28) developed an algorithm to optimize the tool trajectory that incorporates a new stiffness index, which accounts for the most unfavorable values of translational and rotational deformation. The authors evaluated the index using deformation measurements of at least three reference points at the robot's TCP. Klimchik contributed several in-depth studies on robot stiffness. In (Klimchik et al. 2014), Klimchik et al. derived the stiffness model of the entire robot structure. They developed a comprehensive model that took into account the flexibility of all elements. Due to the number of parameters of the obtained model, they used physical and algebraic methods to reduce the model. They used the reduced model for elasto-static calibration of an industrial robot and showed a statistical distribution of errors in three directions. An analysis of hole milling accuracy for several different serial robots at different postures was described in (Klimchik et al. 2016, 174-179). To determine the stiffness coefficients of the robots, the authors used the approach described in (Klimchik et al. 2014). The authors separately optimized the kinematic performance and the stiffness of the robot to position a workpiece in the robot's workspace. Stiffness optimization was found to be more appropriate, as it also takes into account the robot's machining accuracy. In (Klimchik et al. 2014, 73-81), Klimchik et al. took into account the elasto-static model of the robot and developed an algorithm for offline correction of the milling trajectory. In the elasto-static model, the gravity compensator and the weights of the joints were considered. The rotational stiffnesses of the joints were obtained in a two-stage process based on analytical calculations and measurements.
By the use of the developed algorithm, the authors improved the accuracy of robotic milling. In (Klimchik et al. 2016), Klimchik et al. described a comparison between the stiffness of serial and quasi-serial robots. For both types of robots, stiffness was derived analytically, whereby the authors assumed equal stiffness of all joints. Based on their study, the authors suggest the use of serial robots, which have the greatest stiffness at the edges of the workspace, for machining of smaller workpieces, while for machining of larger workpieces they suggest the use of quasi-serial robots, which have the greatest stiffness in the middle of the workspace. In (Klimchik et al. 2016, 967-972), Klimchik et al. described a procedure for determining the posture of a quasi-serial robot with a gravity compensator for which the influence of measurement noise on the accuracy of stiffness coefficient determination is lowest. In (Klimchik and Pashkevich 2017, 46-70), Klimchik derived a simplified compliance matrix for typical serial and quasi-serial robots. The compliance matrix allows the calculation of TCP deformation at any external load and allows optimization of robot design with regard to stiffness. The author presented the compliance throughout the workspace for various robotic mechanisms and confirmed that serial robots are suited for machining of small to medium workpieces, while quasi-serial robots are suited for machining of larger workpieces.
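Several of the studies above (e.g., Dumas et al., Wang et al.) identify diagonal joint stiffnesses from TCP deflection measurements under known loads. The sketch below is a generic least-squares formulation of that idea, not any single author's method; the Jacobians, forces and stiffness values are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

def identify_joint_stiffness(J_list, F_list, dx_list):
    """Least-squares identification of diagonal joint stiffnesses k_i.

    Model (diagonal joint-stiffness assumption):
        dx = J diag(c) J^T F,   c_i = 1/k_i  (joint compliances)
    Each measurement contributes the linear rows A c = dx, where
    column i of A is J[:, i] scaled by (J^T F)_i.
    """
    rows, rhs = [], []
    for J, F, dx in zip(J_list, F_list, dx_list):
        rows.append(J * (J.T @ F))  # broadcast scales column i by (J^T F)_i
        rhs.append(dx)
    A = np.vstack(rows)
    b = np.concatenate(rhs)
    c, *_ = np.linalg.lstsq(A, b, rcond=None)
    return 1.0 / c  # stiffnesses from identified compliances

# Synthetic check: generate noiseless measurements from known stiffnesses
k_true = np.array([4e5, 2e5, 1e5])
J_list = [rng.standard_normal((3, 3)) for _ in range(6)]
F_list = [rng.standard_normal(3) * 100.0 for _ in range(6)]
dx_list = [J @ np.diag(1.0 / k_true) @ J.T @ F
           for J, F in zip(J_list, F_list)]
k_est = identify_joint_stiffness(J_list, F_list, dx_list)
```

With noisy real measurements, more postures and loads are stacked into the same least-squares system, which is essentially why the cited studies repeat measurements at many robot postures.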


Dynamic Performance of Robots

To analyze and model the dynamics of robotic machining, the equation of motion, Eq. (8), is used.

M·ẍ + C·ẋ + K·x = F  (8)

M is the mass matrix, C the damping matrix, K the stiffness matrix, 𝑥 the displacement vector and 𝐹 the force vector. Parameters are presented in Figure 7.

Figure 7. Parameters of equation of motion in robotic milling (Cordes et al. 2019, 11-18).
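A single-degree-of-freedom sketch of Eq. (8), integrated with semi-implicit Euler; the M, C and K values are illustrative, not identified robot parameters.

```python
import numpy as np

# Illustrative 1-DOF parameters: mass (kg), damping (Ns/m), stiffness (N/m)
M, C, K = 50.0, 4000.0, 2.0e6

wn = np.sqrt(K / M)                # natural frequency (rad/s)
zeta = C / (2.0 * np.sqrt(K * M))  # damping ratio (underdamped if < 1)

def step_response(F=100.0, dt=1e-5, t_end=0.1):
    """Integrate M x'' + C x' + K x = F with semi-implicit Euler."""
    x, v = 0.0, 0.0
    xs = []
    for _ in range(int(t_end / dt)):
        a = (F - C * v - K * x) / M
        v += a * dt
        x += v * dt
        xs.append(x)
    return np.array(xs)

x = step_response()
# The displacement oscillates at roughly the damped natural frequency and
# decays toward the static deflection F / K.
```

This is the same structure the cited studies use in matrix form: M, C and K become posture-dependent matrices, and ignoring C corresponds to the undamped simplification mentioned above.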

To consider the effect of mass, the inertia matrix is incorporated into dynamic models. To evaluate the dynamic performance of robots, Yoshikawa (Yoshikawa 1985, 1033-1038) proposed the dynamic manipulability and Asada (Asada 1983, 131-142) proposed the generalized inertia ellipsoid. When considering the dynamics of the robot, damping was often ignored to simplify the models (Bu et al. 2017, 3253-3264; Cen and Melkote 2017, 486-496; Cen et al. 2016, 2227-2235; Dumas et al. 2012, 649-659; Dumas et al. 2011, 881-888; Lin et al. 2018, 83-95), or damping systems were incorporated to support the robot in operations where resonant vibration would be a problem (F. Chen et al. 2019, 391-403; Kaldestad et al. 2015; Ozturk et al. 2018, 427-430). Parameter identification to establish dynamic models is based on analytic, experimental or combined approaches. Gautier and Khalil (Gautier and Khalil 1990, 368-
373) described a method for determining the minimum number of inertia parameters required to establish a dynamic model, which was also integrated into robot modeling software. Methods for describing the inertia matrix in internal and external coordinates are summarized in more detail in (Siciliano and Khatib 2008, 1611). In order to establish a dynamic model of robotic milling, Cen and Melkote (Cen and Melkote 2017, 486-496) described the relationship between the mass matrix and the inertia matrix. Because inertia depends on the posture of the robot, the relationship was created through the Jacobian matrix, which, when the posture of the robot changes, takes into account the transformation of the joint coordinate systems with respect to the base coordinate system. Koser (Koser 2004, 169-182) analytically derived the posture-dependent inertia matrix of a robot with respect to the base coordinate system. Swiatek et al. (Swiatek et al. 2010) created a CAD model of the robot by simplifying complex components such as drive motors and gearboxes. To compensate for the error, they adjusted the densities of the simplified components to obtain accurate masses. They found that, due to the simplifications, the model showed an incorrect center of gravity and inertia of the robot. Hoai Nam et al. (Hoai Nam et al. 2018) obtained the inertia matrices of all robot components using CAD. To obtain accurate results, individual components were modeled as hollow bodies with an attributed material type. The interior of the robot was adjusted based on photographs, and the material was determined by magnetism, separating aluminum from steel. Gearboxes and motors were also modeled accurately. Based on the calibrated CAD model, the authors established a simulation model for the prediction of forces in robotic milling and compared the simulation results with experiments.
They concluded that the model can satisfactorily predict milling forces of medium amplitude. Klimchik et al. (Klimchik et al. 2016; Klimchik et al. 2016, 967-972) pointed out that quasi-serial robots have lower robotic-arm inertia and consequently better dynamic properties, which is significantly influenced by the placement of the drive motor for the third axis on the leg of the robot, next to the drive motors for the first and second axes. Du et al. (Du et al. 2016, 39-48) developed a system for automatic dynamic robot calibration with the help of an inertia-measuring device and a position-measuring device. The inertia-measuring device captures accelerometer and magnetic field data, from which the TCP orientation was obtained with the factored quaternion algorithm. The calibration method proved to be effective. Palmieri et al. (Palmieri et al. 2014, 961-972) noted that FEM is a typical approach for elasto-dynamic analysis of robotic structures, as it is based on a discretization of the system that takes into account the elastic and inertial properties of the elements. They conducted a FEM modal analysis for multiple postures of a parallel robot. The extension
of the obtained natural frequencies to the entire workspace of the robot was performed by polynomial regression. The results of the numerical model were compared with experiments performed with an impact hammer test, with the measurement points fitted to different components of the robot. The modal parameters were extracted from the Frequency Response Functions (FRF), which were captured by a three-dimensional accelerometer. By experiments, they identified three modal states in the frequency range below 60 Hz, and six in the same range by the use of FEM. The authors explain the deviation by the neglect of damping and friction in the simulation model. By considering friction in the FEM model, they obtained the correct number of natural frequencies in the considered frequency range, but not a good match of the transfer functions. Using FEM software for dynamic analysis, Swiatek et al. (Swiatek et al. 2010) numerically obtained the natural frequencies and damping factors of the first eight modal states for different robot postures. They created a dynamic model that considers flexible joints and links and validated the model with experimental results. Bauer et al. (Bauer et al. 2013, 245-263) performed a modal analysis of the robot with impact hammer tests and obtained natural frequencies, modal shapes and damping factors for several modal states of the robot. Measurements of the structure's response to the impact force were performed with an accelerometer, and the transfer functions were measured over a hundred points distributed over the robot's structure. To establish a dynamic model, they adjusted the mass, stiffness and damping parameters. By the use of this parameter calibration approach, they obtained a good match of the transfer functions, especially in the region of resonance. Mejri et al. (Mejri et al. 2015, 351-359) analyzed the influence of the robot's posture on its natural frequencies and damping factors during robotic milling.
They extracted modal parameters from experimentally obtained transfer functions by the use of the PolyMAX method. During the experiment, the hammer impact was executed in the radial and tangential directions, in which significant forces appear in robotic milling. The authors found that the dominant natural frequency occurs due to the milling head attached to the robot. They presented the natural frequencies and damping factors in the lower frequency spectrum and confirmed that the robot's posture and the direction of excitation influence the modal states of the robot. To determine the required torque on the drive motors for an active robotic arm controller, Salisbury (Salisbury 1980, 95-100) considered modal damping and the inertia moments of the links. With the inclusion of inertia in the control of the robotic arm, an increase in the stability of the robot's movement was achieved. They measured the resonant states of the robotic arm and identified different resonant frequencies depending on its posture. They pointed out that the natural frequencies of the arm in contact with the
environment depend on the elasticity of the arm and the environment and on the distribution of mass in the hand. Ott et al. (Ott et al. 2006) developed a control algorithm, which was used on a two-handed humanoid robot. To determine the required torques on the drive motors, they considered the damping matrix. The importance of considering the excitation frequency caused by the performed task in the robot design is emphasized in (Siciliano and Khatib 2008). In order to avoid resonance, the authors recommend the development of a dedicated robot with customized stiffness and mass matrices. Matsuoka et al. (Matsuoka et al. 1999, 83-89) found that for high-speed robotic milling, the appearance of resonance is not conditioned by the structure of the robot but rather by the natural frequencies of the milling head. Guo et al. (Guo et al. 2016, 102-110) investigated resonant behavior in robotic drilling and found that the deflection amplitude was greater in the direction of lower stiffness, which, in the authors' opinion, was the main cause of breaking of the cutting tool insert. Based on force measurements, they found that in some cases during resonance the tool loses contact with the workpiece, which did not happen when the robot was only slightly vibrating. Leonesio et al. (Leonesio et al. 2018) investigated the effect of vibrations on surface roughness in robotic milling of aluminum alloys. For the optimization of robotic milling, the authors introduced a dynamic index dependent on natural frequencies and cutting forces. Resonant vibrations in robotic machining occur at much lower excitation frequencies compared to machine tools. Schneider et al. (Schneider et al. 2014, 636-647) developed an active compensation mechanism to dampen vibrations and improve the accuracy of robotic milling. The mechanism's controls were designed to actively suppress resonant vibrations.
With the developed system, they successfully suppressed most of the unwanted resonant vibrations and improved product quality in robotic machining of steel. Wang et al. (G. Wang et al. 2015) used a support leg to dampen vibrations in robotic drilling. To establish a dynamic model, they considered the cutting forces and the support leg forces in the equations of motion. Natural frequencies and damping factors were obtained in the three directions of the Cartesian coordinate system by measuring forces at different cutting parameters. The authors compared the numerical model with experimental results in stable, vibrational and chatter conditions, and found an 11% match for the stable mode and a 21% match for the vibration mode. In case of chatter between the robot's natural frequency and the spindle's rotational speed, a two- to three-fold increase in cutting force was found. Guo et al. (Guo et al. 2016, 102-110) obtained natural frequencies and damping coefficients for the first five modal states of the robot by measuring the tangential and radial
components of the cutting force. They showed that the robot itself is the main cause of vibration in robotic drilling, since the robot's natural frequencies coincide with the frequencies of the drilling process. They proposed two approaches to reduce vibration: first, adjusting the spindle speed so that the excitation frequency does not coincide with the robot's natural frequency, and second, mounting a support leg on the robot. By experiments, the authors showed that with a support leg they can completely eliminate vibrations of the robot and improve the drilling process. Mousavi et al. (Mousavi et al. 2017, 3053-3065, 2018, 181-192) established a dynamic model for robotic machining by the use of proportional damping coefficients. They expressed the damping coefficients as a function of the natural frequencies in any modal state. By the use of the model, they plotted diagrams of the robot's stable operation (Figure 8) and showed that, by adjusting the robot's posture, stable operation during machining can be achieved without changing the trajectory and cutting parameters.
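Proportional (Rayleigh) damping of the kind used by Mousavi et al., C = alpha*M + beta*K, ties each modal damping ratio directly to its natural frequency, zeta_i = alpha/(2*w_i) + beta*w_i/2. A minimal sketch with an assumed two-degree-of-freedom lumped model (all numeric values are illustrative) fits alpha and beta to two target ratios and cross-checks the modal parameters through the state-space eigenvalues:

```python
import numpy as np

# illustrative 2-DOF lumped model (all values assumed)
M = np.diag([12.0, 7.0])
K = np.array([[ 8.0e5, -3.0e5],
              [-3.0e5,  3.0e5]])

# undamped natural frequencies from the generalized eigenproblem K v = w^2 M v
wn1, wn2 = np.sort(np.sqrt(np.linalg.eigvals(np.linalg.solve(M, K)).real))

# Rayleigh damping C = alpha*M + beta*K gives zeta_i = alpha/(2*w_i) + beta*w_i/2;
# solve for alpha, beta so that both modes get a target 2% damping ratio
A = np.array([[1/(2*wn1), wn1/2],
              [1/(2*wn2), wn2/2]])
alpha, beta = np.linalg.solve(A, [0.02, 0.02])
C = alpha*M + beta*K

# cross-check via state-space eigenvalues lambda = -zeta*wn +/- 1j*wd
S = np.block([[np.zeros((2, 2)), np.eye(2)],
              [-np.linalg.solve(M, K), -np.linalg.solve(M, C)]])
lam = np.linalg.eigvals(S)
lam = lam[np.imag(lam) > 0]               # one eigenvalue per mode
zeta = -np.real(lam) / np.abs(lam)
print(np.sort(np.abs(lam)) / (2*np.pi))   # modal frequencies [Hz]
print(zeta)                               # both 0.02 by construction
```

Between the two fitted modes such a model under-damps, and outside them over-damps, which is one reason damping expressed as a function of natural frequency must be handled per modal state.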

Figure 8. Stable and unstable zones for robotic machining (Mousavi et al. 2018, 181-192).

Figure 9. Stability chart with cutting parameters for robotic machining (Cordes et al. 2019, 11-18).


Cordes et al. (Cordes et al. 2019, 11-18) performed an impact hammer test at one robot posture and captured the vibrations with a laser Doppler vibrometer, which can effectively measure low-frequency modes. In doing so, they obtained the direct and cross transfer functions for the robot and the milling head. They identified posture-dependent natural frequencies in the low-frequency range below 26 Hz, and posture-independent natural frequencies caused by the milling head close to 850 Hz and 2325 Hz. In order to identify the dynamic parameters, they derived the equation of motion in the modal space, expressed by eigenvalues and normalized eigenvectors, with the eigenvalues functionally dependent on the natural frequencies and damping factors of each modal state. By iteratively solving with the Levenberg-Marquardt algorithm, they obtained the values of the natural frequencies, damping factors and eigenvectors for the most important modal states. They showed that in aluminum milling at high spindle speeds the influence of the modal states of the milling head prevails, and critical vibrations can be avoided by adjusting the spindle speed (Figure 9). At low spindle speeds, the influence of the modal states of the robot's structure prevails. Yuan et al. (Yuan et al. 2018, 2240-2251) found that in robotic machining the area of stable operation varies with the posture of the robot, which makes it difficult to control stable operation offline. They noted that methods for avoiding resonant vibrations are primarily related to increasing structural stiffness, adjusting process parameters, measuring cutting forces and damping vibrations.
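Levenberg-Marquardt identification of modal parameters, as used by Cordes et al., can be sketched on a synthetic single-mode receptance FRF. The 26 Hz mode, noise level and initial guess below are illustrative assumptions only, not data from the cited work:

```python
import numpy as np
from scipy.optimize import least_squares

def frf_mag(w, wn, zeta, a):
    """Magnitude of a single-mode receptance FRF a/(wn^2 - w^2 + 2j*zeta*wn*w)."""
    return a / np.abs(wn**2 - w**2 + 2j*zeta*wn*w)

# synthetic "measurement" around an assumed 26 Hz mode with 3% damping
rng = np.random.default_rng(0)
w = 2*np.pi*np.linspace(5.0, 60.0, 300)
y = frf_mag(w, 2*np.pi*26, 0.03, 1.0) * (1 + 0.02*rng.standard_normal(w.size))

# Levenberg-Marquardt fit of (wn, zeta, a) from a rough initial guess
fit = least_squares(lambda p: frf_mag(w, *p) - y,
                    x0=[2*np.pi*24, 0.05, 1.0], method='lm')
wn_hat, zeta_hat, a_hat = fit.x
print(f"fn = {wn_hat/(2*np.pi):.2f} Hz, zeta = {abs(zeta_hat):.3f}")
```

In practice the residual would stack several modes and measurement points, but the iterative structure is the same.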

Modeling and Optimization of Robotic Machining

Mathematical modeling of mechanical systems such as robots is based on identifying parameters and establishing models that describe their geometric, static, kinematic and dynamic properties. By the use of such models, the movement of the robot can be controlled effectively, even during the execution of very demanding tasks. Gao et al. (Z. Gao et al. 2010, 180-189) optimized the design of a parallel robotic mechanism by the use of the genetic algorithm (GA). By the use of artificial neural networks, they approximated the analytical solutions for stiffness and manipulability, which they used as the optimization targets. The authors presented the optimization results with a Pareto front of the best solutions, from which the most suitable solution can be selected. Marie et al. (Marie et al. 2013, 13-43) used the GA to fit the parameters of the robot model to measurement results. They verified the structural stiffness model obtained with FEM by experiments and found that the model is unsuitable for practical use, as nonlinear effects were not considered. In a second step, they developed a new model based on the measurement results and fuzzy logic, which
proved to be more appropriate. In incremental forming of aluminum, they achieved an accuracy in the range of +/- 0.15 mm, which is within the tolerance field of the technology. Bocsi et al. (Bocsi et al. 2014, 589-599) showed that, due to the non-uniqueness of solutions, classical regression modeling is not a suitable approach for a robot to learn its inverse kinematics, used for robot path planning. Therefore, they also used support vector machine and forward Gaussian process modeling methods and showed that with these methods the robot could learn its inverse kinematics, but not always. Chen et al. (G. Chen et al. 2014, 1759-1774) identified regression modeling and artificial neural networks as effective tools for the modeling and optimization of robotic parameters. They found that artificial neural networks in particular are able to approximate nonlinear functional dependencies with arbitrary precision, making them highly suitable for use in robotics, where nonlinear influences are very important. Slamani et al. (Slamani et al. 2015, 268-283) designed an experiment to determine the influence of the cutting parameters on the cutting force in robotic milling of carbon fiber polymers, which also considered the influence of the robot's posture. They developed a predictive regression model in which the robot's posture has a significant effect. They showed that the model could predict the cutting forces in robotic milling of carbon fiber polymers with high reliability. Friedrich et al. (Friedrich et al. 2017, 124-134) showed the relationship between the maximum stable cutting depth and the spindle speed. To establish the predictive model, they used continuous learning, which does not require prior training of the networks, as the system learns during operation. Zeng et al. (Zeng et al. 2017, 2745-2755) used a regression model for the absolute calibration of a robot for robotic drilling.
They measured the positional errors of the robot and established a model that relates the positional errors to the joint rotations. With indirect error compensation, the robot's absolute accuracy was improved by 84%. Various algorithms have been proposed to optimize different aspects of robotic machining. Optimization techniques are commonly applied in robot design, calibration, tool path planning, workpiece positioning and multi-robot system design. To determine the maximum allowable load of serial robots, Mohamed et al. (Mohamed et al. 2015, 240-256) compared the performance of the GA and NSGA II for various configurations. The optimization target was the robot's posture with the lowest stiffness, which was evaluated by FEM. Kaldestad et al. (Kaldestad et al. 2015) used a complex evolutionary search algorithm to estimate the radial and tangential forces in robotic milling. To improve robotic grinding, Gao et al. (Z.H. Gao et al. 2011, 346-354) optimized the robot's structure and its position relative to the grinding stone. They used the particle swarm optimization (PSO) algorithm to obtain data for the design of a grinding cell that allows processing of an entire aero-engine blade with one holder and one robot setup.
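The population-based methods above (GA, NSGA II, evolutionary search) share the same loop of selection, recombination and mutation. A minimal real-coded GA is sketched here, maximizing Yoshikawa's manipulability index of a planar two-link arm; the link lengths and all GA hyperparameters are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
l1, l2 = 0.5, 0.4   # illustrative link lengths [m]

def manipulability(q):
    """Yoshikawa's index w = sqrt(det(J J^T)) = |det J| for a planar 2-link
    arm; analytically this equals l1*l2*|sin(q2)|."""
    s1, s12 = np.sin(q[0]), np.sin(q[0] + q[1])
    c1, c12 = np.cos(q[0]), np.cos(q[0] + q[1])
    J = np.array([[-l1*s1 - l2*s12, -l2*s12],
                  [ l1*c1 + l2*c12,  l2*c12]])
    return abs(np.linalg.det(J))

# minimal real-coded GA: elitism, tournament selection, blend crossover,
# Gaussian mutation (population size and rates assumed for illustration)
pop = rng.uniform(-np.pi, np.pi, size=(40, 2))
for _ in range(60):
    fit = np.array([manipulability(q) for q in pop])
    new = [pop[np.argmax(fit)]]                      # keep the elite
    while len(new) < len(pop):
        i, j = rng.integers(len(pop), size=2)        # tournament of two
        p1 = pop[i] if fit[i] > fit[j] else pop[j]
        i, j = rng.integers(len(pop), size=2)
        p2 = pop[i] if fit[i] > fit[j] else pop[j]
        a = rng.random()
        child = a*p1 + (1 - a)*p2 + rng.normal(0, 0.1, 2)
        new.append(np.clip(child, -np.pi, np.pi))
    pop = np.array(new)

best = max(pop, key=manipulability)
print(best, manipulability(best))   # q2 approaches +/- pi/2, w -> l1*l2 = 0.2
```

Real design or posture optimization replaces this scalar fitness with stiffness, load or FEM evaluations, as in the cited works, but the search loop is unchanged.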


Wu et al. (Wu et al. 2015, 151-168) proposed an approach for the geometric calibration of industrial robots by designing experiments and selecting suitable robot postures to minimize measurement noise. The authors used the GA as the optimization method. They summarized optimization methods used by other authors for the calibration of robots, where, in addition to the GA, the least squares method, the gradient search algorithm and heuristic search appear. Diaz et al. (Diaz et al. 2016) used the Levenberg-Marquardt algorithm to calibrate a laser sensor mounted on the flange of the robot. In addition to the sensor, a deburring tool was installed on the flange, and a random tree algorithm was used to find the appropriate orientation of the deburring tool. Messay et al. (Messay et al. 2016, 33-48) used a two-step hybrid optimization method to develop a new computationally efficient kinematic robot calibration algorithm. In the first step, simulated annealing provided a set of starting points for the subsequent search by trust region optimization. The authors compared the approach with the standard calibration method integrated into commercial industrial robots and confirmed the effectiveness of the method in identifying the kinematic parameters of a robot with significant calibration error. Majarena et al. (Majarena et al. 2013, 751-761) used the Levenberg-Marquardt method to identify the kinematic parameters of a parallel robot mechanism, Gao et al. (G. Gao et al. 2017) used a customized GA for kinematic calibration, and Filion et al. (Filion et al. 2018, 77-87) used the DETMAX optimization method. Xiong et al. (Xiong et al. 2017) used an advanced sequential forward floating search algorithm to calibrate the robot and identify its structural parameters. They achieved better results compared to the DETMAX algorithm, which is often used for robot calibration. Vosniakos and Matsas (Vosniakos and Matsas 2010, 517-525) used the GA for tool path optimization of a serial robot.
The authors obtained variables such as joint rotations and joint torques by inverse kinematics and inverse dynamics. Initial postures and singular positions of the robot that occur during tool path generation were penalized. In the first step, they used the GA to optimize manipulability; in the second step, to reduce the joint torques. They considered milling forces calculated numerically from the cutting parameters. Ur-Rehman et al. (Ur-Rehman et al. 2010, 1125-1141) used a multi-criteria GA to optimally position the tool path in the workspace of parallel kinematic machines. They used energy consumption and vibrations, or the maximum torques on the actuators, as target criteria. In order to obtain the most appropriate trajectory for robotic machining and polishing, Sabourin et al. (Sabourin et al. 2012, 381-391) proposed a multi-criteria optimization, which considers the structural properties of the robot, the external loads and a manipulability-based factor.


Diaz et al. (Diaz et al. 2018) used the GA to optimize tool paths in robotic milling. The simulated annealing algorithm was used to place the workpiece. Farzanehkaloorazi et al. (Farzanehkaloorazi et al. 2018, 346-362) used an improved PSO to optimize path planning and path placement in a robotic cell (Figure 10). Two criteria were considered for the optimization: the first required the acquisition of redundant solutions for the prescribed tool path, and the second was used for avoiding singularities when executing the path. Caro et al. (Caro et al. 2013, 2921-2926) used the GA to optimize the placement of the workpiece in the workspace of a machining robot. In the algorithm, they considered constraints such as joint limits, a quality factor, the position and dimensions of the rotary table, and the manipulability factor to avoid singularities. Gotlih et al. (J. Gotlih et al. 2017, 233-244) used the GA to place the largest workpiece of a desired shape in the workspace of a robot. Doan and Lin (Doan and Lin 2017, 233-242) used the modified particle swarm optimization (MPSO) algorithm to optimize tool paths and robot placement during robotic welding. They pointed out that gradient methods are not appropriate for such a complex problem, and that the GA is computationally inefficient due to its more complex mathematical model. MPSO is an upgrade of the classic particle swarm optimization (PSO) algorithm and includes a cross-search module and two updating strategies.
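The PSO variants above all build on the same velocity update with inertia, cognitive and social terms. A minimal sketch follows, with a smooth stand-in placement cost instead of a real reachability/singularity evaluation; the cost function and all constants are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

def placement_cost(p):
    """Stand-in cost for a 2-D workpiece placement; in the cited works this
    would come from reachability, manipulability and singularity checks."""
    return (p[0] - 0.6)**2 + (p[1] + 0.3)**2 + 0.1*np.sin(5*p[0])**2

# classic PSO: inertia w, cognitive pull c1 toward personal bests,
# social pull c2 toward the global best
n, iters = 30, 100
x = rng.uniform(-1, 1, (n, 2))
v = np.zeros((n, 2))
pbest = x.copy()
pcost = np.array([placement_cost(p) for p in x])
gbest = pbest[np.argmin(pcost)]
w, c1, c2 = 0.7, 1.5, 1.5
for _ in range(iters):
    r1, r2 = rng.random((n, 2)), rng.random((n, 2))
    v = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x)
    x = x + v
    cost = np.array([placement_cost(p) for p in x])
    better = cost < pcost
    pbest[better], pcost[better] = x[better], cost[better]
    gbest = pbest[np.argmin(pcost)]

print(gbest, placement_cost(gbest))
```

Modified variants such as MPSO add extra modules (e.g., cross-search and alternative update strategies) around this same core loop.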

Figure 10. Optimal placement of the workpiece in the robotic cell (Farzanehkaloorazi et al. 2018, 346-362).


Hassan et al. (Hassan et al. 2017, 113-132) solved the problem of setting up autonomous cooperative painting robots with the NSGA II. Maximum surface coverage, shortest build time, maximum flexibility and minimum torque on the drive motors were chosen as optimization criteria. From the Pareto front, solutions with 100% surface coverage were first selected, which is a prerequisite for robotic painting, and the candidates were then narrowed down by production time. As the final optimization result, they chose a good compromise between manipulability and joint torques. Nicola et al. (Nicola et al. 2018, 386-391) optimized the placement of tool paths in the workspaces of several robotic cells with the aim of increasing the overall accuracy of the process. To solve the problem, they developed a hybrid algorithm that integrates the whale optimization algorithm and ant colony optimization. Léger and Angeles (Léger and Angeles 2016) compared two optimization methods for obtaining the robot posture with the lowest kinematic condition number. They showed that the quadratic convergence of the Newton-Raphson method does not outweigh the speed of the quasi-Newton method, whose convergence is almost linear. The quasi-Newton method is computationally faster per iteration, as it only approximates the Hessian matrix, while the Newton-Raphson method calculates the Hessian matrix exactly. Lim et al. (Lim et al. 2017, 87-98) compared four GA-based hybrid algorithms for optimizing the placement of robots in a multi-robot manufacturing cell. The cell surface area, manufacturing time and manipulability were chosen as optimization criteria. The solutions were presented with a Pareto front, where the GA + PSO algorithm proved to be the most efficient.
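The trade-off noted by Léger and Angeles (cheap quasi-Newton steps with an approximated Hessian versus expensive Newton steps with the exact Hessian) can be reproduced on any smooth test function. The sketch below uses SciPy's Rosenbrock helpers rather than a kinematic condition number, purely as an illustration:

```python
import numpy as np
from scipy.optimize import minimize, rosen, rosen_der, rosen_hess

x0 = np.array([-1.2, 1.0])

# Newton-type method: uses the exact Hessian, locally quadratic convergence
newton = minimize(rosen, x0, jac=rosen_der, hess=rosen_hess, method='trust-ncg')

# quasi-Newton (BFGS): builds a Hessian approximation from gradients only,
# cheaper per iteration at the cost of slower local convergence
bfgs = minimize(rosen, x0, jac=rosen_der, method='BFGS')

print('exact-Hessian Newton:', newton.nit, 'iterations, f =', newton.fun)
print('quasi-Newton (BFGS): ', bfgs.nit, 'iterations, f =', bfgs.fun)
```

Both reach the minimum at (1, 1); which is faster in wall time depends on how expensive the exact Hessian is to evaluate, which is exactly the point of the comparison.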

CONCLUSION

Structural and kinematic properties of robots are important parameters in robotic machining. Especially when high dynamic loads occur, the dynamic properties should be considered. In the current state of the art, comprehensive dynamic models consider the stiffness of the robot, inertia, damping and natural frequencies, as well as external loads. As robotic machining is a functionally redundant process, kinematic performance measures are used to avoid singularities, stiffness models are used to compensate high loads, and stability diagrams are used to avoid chatter. With modeling and optimization of the system parameters, an improved machining process can be achieved. Machining accuracy, energy consumption, manufacturing time, space usage and other criteria are used as optimization targets. As robots are complex and interconnected systems, not all criteria can be met at the same time. For the best results, it is therefore important to select the most suitable robot for a specific task and to improve its abilities with advanced control algorithms. As a
promising modern approach, haptic sensors and robotic vision combined with deep learning are emerging, but their implementation in industrial robotic machining is still at the concept stage.

REFERENCES

Ahmad, A., Andersson, K., Sellgren, U. and Khan, S. (2014). "A stiffness modeling methodology for simulation-driven design of haptic devices." Engineering with Computers 30 (1): 125-141. doi:10.1007/s00366-012-0296-4.
Asada, H. (1983). "A Geometrical Representation of Manipulator Dynamics and Its Application to Arm Design." Journal of Dynamic Systems, Measurement, and Control 105: 131-142. doi:10.1115/1.3140644.
Barnfather, J. D., Goodfellow, M. J. and Abram, T. (2016). "Development and testing of an error compensation algorithm for photogrammetry assisted robotic machining." Measurement 94: 561-577. doi:10.1016/j.measurement.2016.08.032.
Barnfather, J. D., Goodfellow, M. J. and Abram, T. (2016). "A performance evaluation methodology for robotic machine tools used in large volume manufacturing." Robotics and Computer-Integrated Manufacturing 37: 49-56. doi:10.1016/j.rcim.2015.06.002.
Bauer, J., Friedmann, M. and Hemker, T. (2013). "Analysis of Industrial Robot Structure and Milling Process Interaction for Path Manipulation." Process Machine Interactions.
Bocsi, B., Csato, L. and Peters, J. (2014). "Indirect robot model learning for tracking control." Advanced Robotics 28 (9): 589-599. doi:10.1080/01691864.2014.888371.
Brogårdh, T. (2007). "Present and future robot control development—An industrial perspective." Annual Reviews in Control 31 (1): 69-79. doi:10.1016/j.arcontrol.2007.01.002.
Bu, Y., Liao, W., Tian, W., Zhang, L. and Dawei, L. I. (2017). "Modeling and experimental investigation of Cartesian compliance characterization for drilling robot." The International Journal of Advanced Manufacturing Technology 91 (9): 3253-3264. doi:10.1007/s00170-017-9991-z.
Cao, W.-a., Yang, D. and Ding, H. (2018). "A method for stiffness analysis of overconstrained parallel robotic mechanisms with Scara motion." Robotics and Computer-Integrated Manufacturing 49: 426-435. doi:10.1016/j.rcim.2017.08.014.
Caro, S., Dumas, C., Garnier, S. and Furet, B. (2013). "Workpiece placement optimization for machining operations with a KUKA KR270-2 robot." 2013 IEEE International Conference on Robotics and Automation, Karlsruhe, Germany.


Cen, L. and Melkote, S. N. (2017). "CCT-based mode coupling chatter avoidance in robotic milling." Journal of Manufacturing Processes 29: 50-61. doi:10.1016/j.jmapro.2017.06.010.
Cen, L. and Melkote, S. N. (2017). "Effect of Robot Dynamics on the Machining Forces in Robotic Milling." 45th ASME North American Manufacturing Research Conference (NAMRC 45).
Cen, L., Melkote, S. N., Castle, J. and Appelman, H. (2016). "A Wireless Force-Sensing and Model-Based Approach for Enhancement of Machining Accuracy in Robotic Milling." IEEE/ASME Transactions on Mechatronics 21 (5): 2227-2235. doi:10.1109/tmech.2016.2567319.
Chen, C., Peng, F. Y., Yan, R., Li, Y. T., Wei, D. Q., Fan, Z., Tang, X. W. and Zhu, Z. R. (2019). "Stiffness performance index based posture and feed orientation optimization in robotic milling process." Robotics and Computer-Integrated Manufacturing 55: 29-40. doi:10.1016/j.rcim.2018.07.003.
Chen, F., Zhao, H., Li, D. W., Chen, L., Tan, C. and Ding, H. (2019). "Contact force control and vibration suppression in robotic polishing with a smart end effector." Robotics and Computer-Integrated Manufacturing 57: 391-403. doi:10.1016/j.rcim.2018.12.019.
Chen, G., Li, T., Chu, M., Xuan, J.-Q. and Xu, S.-H. (2014). "Review on kinematics calibration technology of serial robots." International Journal of Precision Engineering and Manufacturing 15 (8): 1759-1774. doi:10.1007/s12541-014-0528-1.
Chen, S.-F. and Kao, I. (2000). "Conservative Congruence Transformation for Joint and Cartesian Stiffness Matrices of Robotic Hands and Fingers." The International Journal of Robotics Research 19 (9): 835-847. doi:10.1177/02783640022067201.
Chen, Y. and Dong, F. (2012). "Robot machining: recent development and future research issues." The International Journal of Advanced Manufacturing Technology 66 (9-12): 1489-1497. doi:10.1007/s00170-012-4433-4.
Collard, J. F., Fisette, P. and Duysinx, P. (2005). "Contribution to the optimization of closed-loop multibody systems: Application to parallel manipulators." Multibody System Dynamics 13 (1): 69-84. doi:10.1007/s11044-005-4080-8.
Cordes, M. and Hintze, W. (2017). "Offline simulation of path deviation due to joint compliance and hysteresis for robot machining." The International Journal of Advanced Manufacturing Technology 90 (1-4): 1075-1083. doi:10.1007/s00170-016-9461-z.
Cordes, M., Hintze, W. and Altintas, Y. (2019). "Chatter stability in robotic milling." Robotics and Computer-Integrated Manufacturing 55: 11-18. doi:10.1016/j.rcim.2018.07.004.
Denkena, B. and Lepper, T. (2015). "Enabling an Industrial Robot for Metal Cutting Operations." Procedia CIRP.


Diaz, J. R. P., Kumar, S., Kuss, A., Schneider, U., Drust, M., Dietz, T. and Verl, A. (2016). "Automatic Programming and Control for Robotic Deburring." Proceedings of 47th International Symposium on Robotics.
Diaz, J. R. P., Mukherjee, P. and Verl, A. (2018). "Automatic Close-optimal Workpiece Positioning for Robotic Manufacturing." Procedia CIRP 72: 277-284. doi:10.1016/j.procir.2018.03.142.
Diaz, J. R. P., Schneider, U., Sridhar, A. and Verl, A. (2017). "Automatic Motion Generation for Robotic Milling Optimizing Stiffness with Sample-Based Planning." Machines 5 (1). doi:10.3390/machines5010003.
Doan, N. C. N. and Lin, W. (2017). "Optimal robot placement with consideration of redundancy problem for wrist-partitioned 6R articulated robots." Robotics and Computer-Integrated Manufacturing 48: 233-242. doi:10.1016/j.rcim.2017.04.007.
Du, G. L., Shao, H. K., Chen, Y. J., Zhang, P. and Liu, X. (2016). "An online method for serial robot self-calibration with CMAC and UKF." Robotics and Computer-Integrated Manufacturing 42: 39-48. doi:10.1016/j.rcim.2016.05.006.
Dumas, C., Caro, S., Cherif, M., Garnier, S. and Furet, B. (2012). "Joint stiffness identification of industrial serial robots." Robotica 30: 649-659. doi:10.1017/S0263574711000932.
Dumas, C., Caro, S., Garnier, S. and Furet, B. (2011). "Joint stiffness identification of six-revolute industrial serial robots." Robotics and Computer-Integrated Manufacturing 27 (4): 881-888. doi:10.1016/j.rcim.2011.02.003.
Farzanehkaloorazi, M., Bonev, I. A. and Birglen, L. (2018). "Simultaneous path placement and trajectory planning optimization for a redundant coordinated robotic workcell." Mechanism and Machine Theory 130: 346-362. doi:10.1016/j.mechmachtheory.2018.08.022.
Filion, A., Joubair, A., Tahan, A. S. and Bonev, I. A. (2018). "Robot calibration using a portable photogrammetry system." Robotics and Computer-Integrated Manufacturing 49: 77-87. doi:10.1016/j.rcim.2017.05.004.
Friedrich, J., Hinze, C., Renner, A., Verl, A. and Lechler, A. (2017). "Estimation of stability lobe diagrams in milling with continuous learning algorithms." Robotics and Computer-Integrated Manufacturing 43: 124-134. doi:10.1016/j.rcim.2015.10.003.
Furtado, L. F. F., Villani, E., Trabasso, L. G. and Suterio, R. (2017). "A method to improve the use of 6-dof robots as machine tools." The International Journal of Advanced Manufacturing Technology 92 (5-8): 2487-2502. doi:10.1007/s00170-017-0336-8.
Gao, G., Sun, G., Na, J., Guo, Y. and Wu, X. (2017). "Structural parameter identification for 6 DOF industrial robots." Mechanical Systems and Signal Processing: 145-155. doi:10.1016/j.ymssp.2017.08.011.
Gao, Z., Zhang, D. and Ge, Y. J. (2010). "Design optimization of a spatial six degree-of-freedom parallel manipulator based on artificial intelligence approaches." Robotics and Computer-Integrated Manufacturing 26 (2): 180-189. doi:10.1016/j.rcim.2009.07.002.
Gao, Z. H., Lan, X. D. and Bian, Y. S. (2011). "Structural Dimension Optimization of Robotic Belt Grinding System for Grinding Workpieces with Complex Shaped Surfaces Based on Dexterity Grinding Space." Chinese Journal of Aeronautics 24 (3): 346-354. doi:10.1016/S1000-9361(11)60041-1.
Gautier, M. and Khalil, W. (1990). "Direct calculation of minimum set of inertial parameters of serial robots." IEEE Transactions on Robotics and Automation 6 (3): 368-373. doi:10.1109/70.56655.
Gotlih, J., Brezocnik, M., Balic, J., Karner, T., Razborsek, B. and Gotlih, K. (2017). "Determination of accuracy contour and optimization of workpiece positioning for robot milling." Advances in Production Engineering & Management 12 (3): 233-244. doi:10.14743/apem2017.3.254.
Gotlih, K., Kovac, D., Vuherer, T., Brezovnik, S., Brezocnik, M. and Zver, A. (2011). "Velocity anisotropy of an industrial robot." Robotics and Computer-Integrated Manufacturing 27 (1): 205-211. doi:10.1016/j.rcim.2010.07.010.
Guo, Y. J., Dong, H. Y. and Ke, Y. L. (2015). "Stiffness-oriented posture optimization in robotic machining applications." Robotics and Computer-Integrated Manufacturing 35: 69-76. doi:10.1016/j.rcim.2015.02.006.
Guo, Y. J., Dong, H. Y., Wang, G. F. and Ke, Y. L. (2016). "Vibration analysis and suppression in robotic boring process." International Journal of Machine Tools and Manufacture 101: 102-110. doi:10.1016/j.ijmachtools.2015.11.011.
Hassan, M., Liu, D. and Paul, G. (2017). "Collaboration of Multiple Autonomous Industrial Robots through Optimal Base Placements." Journal of Intelligent & Robotic Systems 90 (1-2): 113-132. doi:10.1007/s10846-017-0647-x.
Hoai Nam, H., Riviere, E. and Verlinden, O. (2018). "Multibody modelling of a flexible 6-axis robot dedicated to robotic machining." The 5th Joint International Conference on Multibody System Dynamics, Lisboa, Portugal.
Huo, L., Franks, J. and Baron, L. (2008). "The joint-limits and singularity avoidance in robotic welding." Industrial Robot: An International Journal 35 (5): 456-464. doi:10.1108/01439910810893626.
Iglesias, I., Sebastian, M. A. and Ares, J. E. (2015). "Overview of the state of robotic machining: Current situation and future potential." Procedia Engineering 132: 911-917. doi:10.1016/j.proeng.2015.12.577.
Józwik, J., Ostrowski, D., Jarosz, P. and Mika, D. (2016). "Industrial Robot Repeatability Testing with High Speed Camera Phantom V2511." Advances in Science and Technology Research Journal 10 (32): 86-96. doi:10.12913/22998624/65136.
Kaldestad, K. B., Tyapin, I. and Hovland, G. (2015). "Robotic face milling path correction and vibration reduction." 2015 IEEE International Conference on Advanced Intelligent Mechatronics (AIM).

Application of Industrial Robots for Robotic Machining

429

Klimchik, A., Ambiehl, A., Garnier, S., Furet, B. and Pashkevich, A. (2016). "Experimental study of robotic-based machining." IFAC-PapersOnLine. Klimchik, A., Caro, S., Furet, B. and Pashkevich, A. (2014). "Complete Stiffness Model for a Serial Robot." 2014 11th International Conference on Informatics in Control, Automation and Robotics (ICINCO). Klimchik, A., Furet, B., Caro, S. and Pashkevich, A. (2015). "Identification of the manipulator stiffness model parameters in industrial environment." Mechanism and Machine Theory 90: 1-22. doi:10.1016/j.mechmachtheory.2015.03.002. Klimchik, A., Magid, E., Caro, S., Waiyakan, K. and Pashkevich, A. (2016). "Stiffness of serial and quasi-serial manipulators: comparison analysis." 2016 International Conference on Measurement Instrumentation and Electronics (ICMIE 2016). Klimchik, A., Magid, E. and Pashkevich, A. (2016). "Design of experiments for elastostatic calibration of heavy industrial robots with kinematic parallelogram and gravity compensator." IFAC-PapersOnLine. Klimchik, A. and Pashkevich, A. (2017). "Serial vs. quasi-serial manipulators: Comparison analysis of elasto-static behaviors." Mechanism and Machine Theory 107: 46-70. doi:10.1016/j.mechmachtheory.2016.09.019. Klimchik, A., Wu, Y., Caro, S., Furet, B. and Pashkevich, A. (2014). "Accuracy Improvement of Robot-Based Milling Using an Enhanced Manipulator Model." In Advances on Theory and Practice of Robots and Manipulators, edited by Marco Ceccarelli and Victor A. Glazunov, 73-81. Cham: Springer International Publishing. Koser, K. (2004). "A slider crank mechanism based robot arm performance and dynamic analysis." Mechanism and Machine Theory 39 (2): 169-182. doi:10.1016/S0094-114X(03)00112-5. Kruger, J., Zhao, H. Q., de Ascencao, G. R., Jacobi, P., Surdilovic, D., Scholl, S. and Polley, W. (2016). "Concept of an offline correction method based on historical data for milling operations using industrial robots." Production Engineering 10 (4-5): 409-420.
doi:10.1007/s11740-016-0686-3. Kubela, T., Pochyly, A. and Singule, V. (2016). "Assessment of industrial robots accuracy in relation to accuracy improvement in machining processes." Power Electronics and Motion Control Conference (PEMC), 2016 IEEE International. Kuss, A., Drust, M. and Verl, A. (2016). "Detection of workpiece shape deviations for tool path adaptation in robotic deburring systems." 49th CIRP Conference on Manufacturing Systems (CIRP-CMS 2016). Lai, C. Y., Villacis Chavez, D. E. and Ding, S. (2018). "Transformable parallel-serial manipulator for robotic machining." The International Journal of Advanced Manufacturing Technology 97 (5-8): 2987-2996. doi:10.1007/s00170-018-2170-z. Leali, F., Vergnano, A., Pini, F., Pellicciari, M. and Berselli, G. (2016). "A workcell calibration method for enhancing accuracy in robot machining of aerospace parts." International Journal of Advanced Manufacturing Technology 85 (1-4): 47-55. doi:10.1007/s00170-014-6025-y. Léger, J. and Angeles, J. (2016). "Off-line programming of six-axis robots for optimum five-dimensional tasks." Mechanism and Machine Theory 100: 155-169. doi:10.1016/j.mechmachtheory.2016.01.015. Leonesio, M., Villagrossi, E., Beschi, M., Marini, A., Bianchi, G., Pedrocchi, N., Tosatti, L. M., Grechishnikov, V., Ilyukhin, Y. and Isaev, A. (2018). "Vibration Analysis of Robotic Milling Tasks." Procedia CIRP. Lim, Z. Y., Ponnambalam, S. G. and Izui, K. (2017). "Multi-objective hybrid algorithms for layout optimization in multi-robot cellular manufacturing systems." Knowledge-Based Systems 120: 87-98. doi:10.1016/j.knosys.2016.12.026. Lin, Y., Zhao, H. and Ding, H. (2017). "Posture optimization methodology of 6R industrial robots for machining using performance evaluation indexes." Robotics and Computer-Integrated Manufacturing 48: 59-72. doi:10.1016/j.rcim.2017.02.002. Lin, Y., Zhao, H. and Ding, H. (2018). "Spindle configuration analysis and optimization considering the deformation in robotic machining applications." Robotics and Computer-Integrated Manufacturing 54: 83-95. doi:10.1016/j.rcim.2018.05.005. Majarena, A. C., Santolaria, J., Samper, D. and Aguilar, J. J. (2013). "Analysis and evaluation of objective functions in kinematic calibration of parallel mechanisms." The International Journal of Advanced Manufacturing Technology 66 (5-8): 751-761. doi:10.1007/s00170-012-4363-1. Mansouri, I. and Ouali, M. (2011). "The power manipulability – A new homogeneous performance index of robot manipulators." Robotics and Computer-Integrated Manufacturing 27 (2): 434-449. doi:10.1016/j.rcim.2010.09.004. Marie, S., Courteille, E. and Maurine, P. (2013). "Elasto-geometrical modeling and calibration of robot manipulators: Application to machining and forming applications." Mechanism and Machine Theory 69: 13-43. doi:10.1016/j.mechmachtheory.2013.05.003.
Matsuoka, S.-i., Shimizu, K., Yamazaki, N. and Oki, Y. (1999). "High-speed end milling of an articulated robot and its characteristics." Journal of Materials Processing Technology 95 (1): 83-89. doi:10.1016/S0924-0136(99)00315-5. Mejri, S., Gagnol, V., Le, T.-P., Sabourin, L., Ray, P. and Paultre, P. (2015). "Dynamic characterization of machining robot and stability analysis." The International Journal of Advanced Manufacturing Technology 82 (1-4): 351-359. doi:10.1007/s00170-015-7336-3. Merlet, J. P. (2006). "Jacobian, Manipulability, Condition Number, and Accuracy of Parallel Robots." Journal of Mechanical Design 128 (1): 199-206. doi:10.1115/1.2121740. Messay, T., Ordóñez, R. and Marcil, E. (2016). "Computationally efficient and robust kinematic calibration methodologies and their application to industrial robots." Robotics and Computer-Integrated Manufacturing 37: 33-48. doi:10.1016/j.rcim.2015.06.003. Modungwa, D., Tlale, N. and Twala, B. (2013). "Application of ensemble learning approach in function approximation for dimensional synthesis of a 6 DOF parallel manipulator." 2013 6th Robotics and Mechatronics Conference (RobMech), Durban, South Africa. Mohamed, R. P., Xi, F. F. and Lin, Y. (2015). "A combinatorial search method for the quasi-static payload capacity of serial modular reconfigurable robots." Mechanism and Machine Theory 92: 240-256. doi:10.1016/j.mechmachtheory.2015.05.016. Mohammed, A., Schmidt, B. and Wang, L. (2016). "Energy-Efficient Robot Configuration for Assembly." Journal of Manufacturing Science and Engineering 139 (5). doi:10.1115/1.4034935. Mousavi, S., Gagnol, V., Bouzgarrou, B. C. and Ray, P. (2017). "Dynamic modeling and stability prediction in robotic machining." The International Journal of Advanced Manufacturing Technology 88 (9-12): 3053-3065. doi:10.1007/s00170-016-8938-0. Mousavi, S., Gagnol, V., Bouzgarrou, B. C. and Ray, P. (2018). "Stability optimization in robotic milling through the control of functional redundancies." Robotics and Computer-Integrated Manufacturing 50: 181-192. doi:10.1016/j.rcim.2017.09.004. Nicola, G., Pedrocchi, N., Mutti, S., Magnoni, P. and Beschi, M. (2018). "Optimal task positioning in multi-robot cells, using nested meta-heuristic swarm algorithms." Procedia CIRP. Ott, C., Eiberger, O., Friedl, W., Bauml, B., Hillenbrand, U., Borst, C., Albu-Schaffer, A., Brunner, B., Hirschmuller, H., Kielhofer, S., Konietschke, R., Suppa, M., Wimbock, T., Zacharias, F. and Hirzinger, G. (2006). "A humanoid two-arm system for dexterous manipulation." 2006 6th IEEE-RAS International Conference on Humanoid Robots, Vols 1 and 2, January. Ozturk, E., Barrios, A., Sun, C., Rajabi, S. and Munoa, J. (2018). "Robotic assisted milling for increased productivity." CIRP Annals-Manufacturing Technology 67 (1): 427-430.
doi:10.1016/j.cirp.2018.04.031. Palmieri, G., Martarelli, M., Palpacelli, M. C. and Carbonari, L. (2014). "Configuration-dependent modal analysis of a Cartesian parallel kinematics manipulator: numerical modeling and experimental validation." Meccanica 49 (4): 961-972. doi:10.1007/s11012-013-9842-4. Palpacelli, M. (2016). "Static performance improvement of an industrial robot by means of a cable-driven redundantly actuated system." Robotics and Computer-Integrated Manufacturing 38: 1-8. doi:10.1016/j.rcim.2015.09.003. Park, F. C. and Kim, J. W. (1998). "Manipulability of Closed Kinematic Chains." Journal of Mechanical Design 120 (4): 542-548. doi:10.1115/1.2829312.


Pashkevich, A., Chablat, D. and Wenger, P. (2008). "Stiffness analysis of overconstrained parallel manipulators." Mechanism and Machine Theory 44: 966-982. doi:10.1016/j.mechmachtheory.2008.05.017. Sabourin, L., Robin, V., Gogu, G. and Fauconnier, J. M. (2012). "Improving the capability of a redundant robotic cell for cast parts finishing." Industrial Robot: An International Journal 39 (4): 381-391. doi:10.1108/01439911211227962. Salisbury, J. K. (1980). "Active stiffness control of a manipulator in cartesian coordinates." 1980 19th IEEE Conference on Decision and Control including the Symposium on Adaptive Processes, 10-12 December 1980. Schneider, U., Diaz, J. R. P. and Verl, A. (2015). "Automatic pose optimization for robotic processes." 2015 IEEE International Conference on Robotics and Automation (ICRA), 26-30 May 2015. Schneider, U., Drust, M., Ansaloni, M., Lehmann, C., Pellicciari, M., Leali, F., Gunnink, J. W. and Verl, A. (2016). "Improving robotic machining accuracy through experimental error investigation and modular compensation." International Journal of Advanced Manufacturing Technology 85 (1-4): 3-15. doi:10.1007/s00170-014-6021-2. Schneider, U., Momeni-K, M., Ansaloni, M. and Verl, A. (2014). "Stiffness modeling of industrial robots for deformation compensation in machining." 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems, Chicago, IL, USA. Schneider, U., Olofsson, B., Sornmo, O., Drust, M., Robertsson, A., Hagele, M. and Johansson, R. (2014). "Integrated approach to robotic machining with macro/micro-actuation." Robotics and Computer-Integrated Manufacturing 30 (6): 636-647. doi:10.1016/j.rcim.2014.04.001. Siciliano, B. (1999). "The Tricept robot: Inverse kinematics, manipulability analysis and closed-loop direct kinematics algorithm." Robotica 17 (4): 437-445. doi:10.1017/S0263574799001678. Siciliano, B. and Khatib, O. (2008). Springer handbook of robotics. Springer-Verlag Berlin Heidelberg.
Sika, Z., Hamrle, V., Valasek, M. and Benes, P. (2012). "Calibrability as additional design criterion of parallel kinematic machines." Mechanism and Machine Theory 50: 48-63. doi:10.1016/j.mechmachtheory.2011.12.001. Slamani, M., Gauthier, S. and Chatelain, J. F. (2015). "A study of the combined effects of machining parameters on cutting force components during high speed robotic trimming of CFRPs." Measurement 59: 268-283. doi:10.1016/ j.measurement.2014.09.052. Slavkovic, N. R., Milutinovic, D. S. and Glavonjic, M. M. (2013). "A method for off-line compensation of cutting force-induced errors in robotic machining by tool path modification." The International Journal of Advanced Manufacturing Technology 70 (9-12): 2083-2096. doi:10.1007/s00170-013-5421-z.


Sörnmo, O., Olofsson, B., Schneider, U., Robertsson, A. and Johansson, R. (2012). "Increasing the Milling Accuracy for Industrial Robots Using a Piezo-Actuated High-Dynamic Micro Manipulator." Advanced Intelligent Mechatronics (AIM), 2012 IEEE/ASME International, Sweden, Europe. Swiatek, G., Liu, Z. and Hazel, B. (2010). "Dynamic simulation and configuration dependant modal identification of a portable flexible-link and flexible-joint robot." 28th seminar on machinery vibration. Tyapin, I., Hovland, G. and Brogårdh, T. (2014). "Method for estimating combined controller, joint and link stiffnesses of an industrial robot." 2014 IEEE International Symposium on Robotic and Sensors Environments (ROSE) Proceedings. Ur-Rehman, R., Caro, S., Chablat, D. and Wenger, P. (2010). "Multi-objective path placement optimization of parallel kinematics machines based on energy consumption, shaking forces and maximum actuator torques: Application to the Orthoglide." Mechanism and Machine Theory 45 (8): 1125-1141. doi:10.1016/j.mechmachtheory.2010.03.008. Vosniakos, G. C. and Matsas, E. (2010). "Improving feasibility of robotic milling through robot placement optimisation." Robotics and Computer-Integrated Manufacturing 26 (5): 517-525. doi:10.1016/j.rcim.2010.04.001. Wang, G., Dong, H., Guo, Y. and Ke, Y. (2015). "Dynamic cutting force modeling and experimental study of industrial robotic boring." The International Journal of Advanced Manufacturing Technology: 179-190. doi:10.1007/s00170-015-8166-z. Wang, G., Dong, H., Guo, Y. and Ke, Y. (2017). "Chatter mechanism and stability analysis of robotic boring." The International Journal of Advanced Manufacturing Technology 91 (1-4): 411-421. doi:10.1007/s00170-016-9731-9. Wang, J. J., Zhang, H. and Fuhlbrigge, T. (2009). "Improving Machining Accuracy with Robot Deformation Compensation." 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems, St. Louis, MO, USA. Wen, J. T. and Wilfinger, L. S. (1999).
"Kinematic manipulability of general constrained rigid multibody systems." IEEE Transactions on Robotics and Automation 15 (3): 558-567. doi:10.1109/70.768187. Wu, Y., Klimchik, A., Caro, S., Furet, B. and Pashkevich, A. (2015). "Geometric calibration of industrial robots using enhanced partial pose measurements and design of experiments." Robotics and Computer-Integrated Manufacturing 35: 151-168. doi:10.1016/j.rcim.2015.03.007. Xiong, G., Ding, Y. and Zhu, L. M. (2019). "Stiffness-based pose optimization of an industrial robot for five-axis milling." Robotics and Computer-Integrated Manufacturing 55: 19-28. doi:10.1016/j.rcim.2018.07.001. Xiong, G., Ding, Y., Zhu, L. M. and Su, C. Y. (2017). "A product-of-exponential-based robot calibration method with optimal measurement configurations." International Journal of Advanced Robotic Systems 14 (6). doi:10.1177/1729881417743555.


Yoshikawa, T. (1985). "Dynamic manipulability of robot manipulators." Proceedings. 1985 IEEE International Conference on Robotics and Automation, 25-28 March 1985. Yoshikawa, T. (1985). "Manipulability of Robotic Mechanisms." The International Journal of Robotics Research 4 (2): 3-9. doi:10.1177/027836498500400201. Yuan, L., Pan, Z., Ding, D., Sun, S. and Li, W. (2018). "A Review on Chatter in Robotic Machining Process Regarding Both Regenerative and Mode Coupling Mechanism." IEEE/ASME Transactions on Mechatronics 23 (5): 2240-2251. doi:10.1109/tmech.2018.2864652. Zaeh, M. F. and Roesch, O. (2014). "Improvement of the machining accuracy of milling robots." Production Engineering 8 (6): 737-744. doi:10.1007/s11740-014-0558-7. Zargarbashi, S. H. H., Khan, W. and Angeles, J. (2012). "The Jacobian condition number as a dexterity index in 6R machining robots." Robotics and Computer-Integrated Manufacturing 28 (6): 694-699. doi:10.1016/j.rcim.2012.04.004. Zeng, Y., Tian, W., Li, D., He, X. and Liao, W. (2017). "An error-similarity-based robot positional accuracy improvement method for a robotic drilling and riveting system." The International Journal of Advanced Manufacturing Technology 88 (9-12): 2745-2755. doi:10.1007/s00170-016-8975-8.

ABOUT THE EDITORS

Isak Karabegović is an Academician of the Academy of Sciences and Arts of Bosnia and Herzegovina and a Full Professor at the University of Bihać. He is the author or coauthor of more than 26 books, over 100 scientific papers published in international journals, and over 300 papers published in the proceedings of international conferences. He is the editor or coeditor of a significant number of conference proceedings and a member of the editorial boards of 21 international journals. He received his doctoral degree from the Faculty of Mechanical Engineering, University of Sarajevo, in 1989, his Master of Science degree from the Faculty of Mechanical Engineering and Naval Architecture, University of Zagreb, in 1982, and his bachelor's degree in mechanical engineering from the Faculty of Mechanical Engineering, University of Sarajevo, in 1978. He served several terms as Dean of the Technical Faculty and as Rector of the University of Bihać. His research interests include mechanics and robotics. He is the general secretary of the Society for Robotics of Bosnia and Herzegovina.

Lejla Banjanović-Mehmedović was born in Sarajevo, Bosnia and Herzegovina, in 1966. She received her B.S. degree (1989) and M.S. degree (1999) from the Faculty of Electrical Engineering, University of Sarajevo, Bosnia and Herzegovina, and her PhD degree in electrical engineering in 2006 from the University of Zagreb, Croatia. From 1989 to 1994, she worked at Energoinvest, Sarajevo, as a software development researcher for supervisory control of industrial real-time systems. In 1995 she joined the Faculty of Electrical Engineering, University of Tuzla, Bosnia and Herzegovina, where she has been an Associate Professor since 2011. She is the author of one book, several book chapters and more than 45 articles. Her current research interests include intelligent systems control, industrial and mobile robotics, intelligent transport systems, embedded real-time systems and cyber-physical systems. She is a participant in and/or the leader of more than 20 national and international projects. Assoc. Prof. Banjanović-Mehmedović is a senior member of IEEE and a reviewer for several scientific journals. She is a member of the HORIZON 2020 Programme Committee "Nanotechnologies, Advanced Materials, Biotechnology, Advanced Manufacturing and Processing" (NMBP) (2014-2020) and was a member of the FP7 NMP Programme Committee (2011-2013).

INDEX

# 3D laser scanner, 94, 105, 115 3D LiDAR, xv, 91, 101, 105, 106, 116 3D mesh, 111, 112 3D point cloud, 111, 112, 117, 119 3D robot vision, vii, xv, 91, 93 3D thermal mapping, xv, 91, 113, 115, 119 3D thermal model, 113, 115, 119 3D vision, xv, 88, 91, 94, 116, 321

A actuation, 126, 184, 210, 368, 369, 370, 432 actuators, xvi, 27, 29, 34, 39, 40, 41, 45, 121, 122, 123, 125, 126, 132, 144, 158, 162, 167, 168, 175, 184, 410, 422 adjustment, 230, 275, 276, 277, 278, 327 advancement, 72, 92, 174, 318 algorithm, 59, 67, 99, 109, 110, 111, 113, 208, 212, 215, 217, 219, 221, 223, 224, 226, 230, 400, 402, 408, 414, 416, 418, 420, 421, 422, 423, 424, 425, 432 Alhazen, 95 amplitude, 366, 418 anisotropy, 401, 428 annealing, 422, 423 anthropomorphic gripper, xvii, 351, 352, 360, 361, 368, 374, 389, 392, 393 application, v, vii, viii, xi, xiv, xv, xvi, xvii, 1, 2, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19,

20, 21, 22, 23, 25, 31, 35, 44, 45, 46, 47, 49, 50, 57, 59, 60, 64, 66, 67, 68, 69, 72, 82, 85, 86, 87, 101, 110, 114, 115, 116, 119, 168, 173, 178, 189, 203, 206, 226, 229, 230, 231, 235, 242, 249, 269, 282, 286, 288, 289, 293, 294, 295, 296, 297, 298, 299, 301, 302, 303, 304, 305, 306, 307, 308, 309, 310, 311, 312, 313, 314, 315, 316, 317, 318, 319, 320, 321, 322, 323, 349, 397, 410, 425, 426, 430, 431, 433 artificial intelligence, vii, xi, xii, xiii, xvi, 47, 67, 69, 86, 87, 175, 189, 198, 199, 201, 202, 203, 206, 207, 208, 220, 221, 301, 427 automation, xii, xvi, 1, 2, 9, 24, 25, 46, 88, 117, 118, 119, 133, 145, 169, 173, 174, 183, 189, 197, 199, 201, 202, 222, 223, 224, 225, 226, 287, 288, 289, 290, 291, 293, 294, 295, 298, 301, 305, 317, 318, 319, 320, 321, 322, 323, 326, 327, 328, 348, 349, 393, 394,397, 425, 428, 429, 432, 433, 434 autonomous navigation, xi aviation industry, 403 avoidance, xv, 69, 172, 187, 202, 216, 289, 426, 428 awareness, 173, 178, 180, 183, 190

B benefits, 144, 189, 191, 210, 294, 328, 330, 401 binary images, 110, 111 bones, 360, 361, 362, 363, 364, 367, 368, 369, 392, 393 Bosnia, v, xvii, 1, 25, 27, 46, 47, 49, 68, 69, 91, 121, 145, 147, 201, 293, 319, 322, 325, 349 bur, 113, 119

C

calibration, 91, 99, 100, 101, 109, 118, 232, 239, 242, 282, 399, 400, 402, 414, 416, 417, 421, 422, 426, 427, 429, 430, 433 camera calibration, 99, 100, 118 camera obscura, 95 carbon, 407, 421 cell phones, xiv cell surface, 424 cerebral cortex, 371 challenges, 172, 174, 186, 193, 194, 195, 282 chemical, 10, 13, 15, 17, 20, 23, 24, 50, 51, 297, 370 chessboard pattern, 99, 100 China, 7, 8, 9, 10, 12, 13, 14, 24, 25, 225, 322 classification, v, xv, 4, 27, 51, 57, 62, 208, 212, 213, 214, 215 cleaning, 2, 58, 66, 123, 310 collaboration, xii, xv, xvi, 24, 71, 91, 110, 112, 171, 172, 175, 176, 178, 195, 201, 202, 203, 204, 205, 206, 207, 208, 209, 212, 213, 217, 219, 220, 222, 223, 226, 227 collaborative application of industrial robot, 226, 231, 282 collaborative robotics, 171, 177, 188, 190, 195, 196, 197 collaborative robots, vii, xvi, xvii, 25, 46, 171, 172, 173, 174, 177, 178, 188, 189, 198, 202, 203, 217, 221, 222, 225, 226, 227, 230, 231, 232, 234, 237, 240, 243, 251, 254, 269, 282, 283, 286, 294, 295, 318 collision detection, 182, 186, 196, 226, 227, 229, 231, 232, 251, 252, 253, 254, 285, 286, 289 compensation, 179, 206, 233, 255, 256, 257, 258, 259, 260, 284, 287, 288, 290, 399, 400, 401, 402, 403, 413, 418, 421, 425, 432 competitiveness, xiv, 9, 207, 294, 318 complexity, 2, 51, 62, 67, 172, 183, 187, 191, 217, 301, 303, 371, 392 compliance, 177, 180, 362, 390, 404, 408, 413, 414, 425, 426 computer, x, xi, xiii, 2, 33, 70, 71, 72, 85, 86, 101, 113, 138, 202, 207, 274, 326, 327, 348, 397 configuration, 27, 112, 113, 181, 310, 364, 393, 404, 430, 433 configuration space, 113, 118 construction, xiv, 45, 52, 59, 74, 77, 78, 123, 125, 133, 138, 141, 144, 310, 327, 349, 368

cooperation, 175, 176, 202, 205, 209, 210, 221 cost, xiv, xvii, 2, 74, 125, 136, 183, 185, 203, 209, 211, 217, 219, 227, 232, 274, 287, 295, 298, 304, 327, 328, 329, 346, 392 cultural heritage, 105 cutting force, 400, 402, 403, 407, 418, 419, 420, 421, 432, 433 cybersecurity, 188, 189, 197 cycles, 201, 218

D damping, 130, 180, 209, 235, 263, 266, 272, 288, 415, 417, 418, 419, 420, 424 data processing, xii, 213 data set, 100 decomposition, 270, 412 decoupling, 265 deduction, 247, 263, 276 deep learning, xvi, 175, 201, 202, 208, 213, 215, 222, 421, 425 deformation, xvi, 57, 59, 225, 227, 237, 242, 274, 311, 400, 401, 404, 407, 408, 409, 410, 412, 413, 414, 430, 432 degree of freedom, 27, 28, 31, 306 depth, xv, 63, 86, 91, 92, 93, 94, 101, 103, 105, 106, 109, 110, 111, 112, 113, 115, 116, 119, 188, 194, 400, 402, 404, 414, 421 depth camera, 94, 110, 113, 116 depth image, 109, 119 depth perception, 101 depth perception camera, 101 depth sensing, xv, 91, 106, 110, 113, 115, 116 detection, 56, 63, 115, 179, 181, 182, 186, 202, 213, 226, 227, 229, 231, 232, 251, 252, 253, 254, 285, 286, 289, 291 developed countries, 6, 294 developing countries, 6 deviation, 37, 212, 263, 264, 265, 267, 268, 272, 273, 280, 281, 318, 413, 417, 426 displacement, 139, 140, 180, 383, 390, 408, 415 dynamic control, xvi, 225, 226, 227, 228, 230, 231, 235, 237, 242, 249, 251, 269, 282, 286, 287 dynamic factors, 401 dynamic loads, 398, 424 dynamic systems, 289, 290 dynamics, vii, xiv, xvi, 147, 158, 169, 213, 231, 282, 288, 289, 290, 397, 403, 415, 422, 425, 426, 428

E elastic deformation, 58 electrical actuators, 121, 122 electromagnetic, 51, 70, 123 electronic circuits, 44, 102 energy, x, xv, xvi, 34, 51, 91, 102, 114, 116, 121, 123, 126, 141, 158, 179, 210, 217, 234, 290, 298, 309, 328, 352, 412, 422, 424, 433 energy consumption, 422, 424, 433 energy efficiency, xv, 91, 114, 116, 158 energy expenditure, 217 engineering, xiv, 46, 47, 68, 112, 116, 213, 216, 219, 223, 288, 291, 326, 398, 408 environment, v, xii, xv, xvii, 3, 49, 51, 52, 56, 62, 67, 70, 86, 91, 92, 101, 105, 110, 111, 112, 113, 116, 148, 172, 181, 190, 194, 201, 202, 206, 207, 227, 230, 233, 272, 274, 303, 306, 318, 371, 400, 418, 429 environments, xiii, xv, 70, 91, 92, 110, 115, 116, 132, 177, 202, 209, 287, 329 epilines, 106 epiplane, 106 epipolar, xv, 91, 106, 107, 108 epipolar plane, 106, 108 equilibrium, 210, 244, 381, 382, 383, 384, 385, 388 essential matrix, 108 evolution, xiii, 92, 174, 189, 216, 359, 361, 392 evolutionary computation, 216 excitation, 133, 370, 417, 418 exoskeleton, xvi, 201, 204, 205 experimentation, viii, xvii, 351, 352 exploration, v, xi, 113 extrinsic parameters, 99, 100, 101 eye, 63, 92, 93, 94, 95, 118, 352

F fast deceleration, 257, 260 field-of-view, 105 flexibility, xvi, 3, 67, 69, 86, 171, 174, 177, 202, 206, 213, 220, 229, 294, 295, 302, 306, 316, 321, 328, 329, 397, 398, 414, 424 force control, 183, 184, 226, 230, 233, 269, 271, 273, 274, 276, 278, 279, 280, 281, 286, 288, 289, 291, 426


freedom, xv, 2, 27, 28, 29, 31, 42, 45, 105, 154, 156, 226, 263, 306, 310, 313, 314, 316, 388, 409, 412, 427 friction, 181, 186, 225, 226, 228, 229, 231, 232, 233, 234, 236, 238, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 252, 254, 255, 256, 257, 258, 260, 261, 271, 276, 282, 283, 284, 285, 286, 287, 288, 289, 290, 320, 379, 380, 381, 382, 383, 384, 386, 388, 399,407, 417 friction model, 181, 225, 226, 228, 229, 231, 232, 233, 234, 242, 243, 244, 246, 247, 248, 249, 250, 252, 255, 258, 260, 261, 282, 283, 285, 286, 288, 289 fundamental matrix, 107, 108, 109 fusion, xiii, 91, 94, 110, 113, 114, 116, 187, 260, 261 Fusion with Friction Model for Double-Encoder Based Method, 258 future robots’ trends, 171 fuzzy logic, 202, 205, 208, 209, 210, 402, 409, 420

G game theory, xvi, 201, 202, 208, 211, 220, 221, 222, 223 geometric characteristics, 101, 110 geometrical parameters, 399 geometry, 74, 95, 96, 105, 107, 213, 232, 408 gravity, 168, 229, 233, 234, 239, 254, 288, 382, 385, 388, 414, 416, 429

H homography, 107 homography map, 107 human, v, ix, x, xi, xii, xiii, xv, xvi, xvii, 3, 24, 29, 70, 87, 91, 92, 93, 94, 95, 106, 110, 112, 116, 171, 172, 174, 175, 176, 177, 178, 179, 185, 186, 187, 190, 191, 193, 195, 201, 202, 203, 204, 205, 206, 207, 208, 209, 211, 212, 213, 215, 216, 217, 219, 220, 221, 222, 223, 226, 227, 229, 231, 232, 251, 254, 262, 263, 286, 287, 288, 298, 311, 318, 328, 337, 351, 352, 353, 354, 355, 356, 357, 358, 359, 360, 361, 363, 367, 368, 370, 371, 372, 375, 389, 391, 392, 393, 394 human body, 177, 186, 212, 215, 353, 355, 358, 372 human brain, 358


human hand, ix, xvii, 351, 352, 353, 354, 357, 358, 359, 360, 361, 363, 367, 368, 370, 371, 375, 376, 389, 392, 393, 394 human health, 24, 298 human will, 209 human-robot collaboration, vii, xv, xvi, 91, 110, 171, 178, 196, 199, 201, 202, 203, 205, 207, 208, 209, 212, 219, 220, 221, 222, 223, 227 human-robot interaction, 171, 187, 196, 197, 198, 199, 202, 207, 211, 215, 220, 221, 232, 286, 288 hybrid, 70, 207, 220, 226, 230, 273, 278, 286, 288, 290, 409, 422, 424, 430 hydraulic actuators, 121, 126, 132

I identification, 86, 180, 212, 228, 231, 234, 235, 236, 241, 244, 247, 249, 251, 257, 287, 288, 289, 293, 400, 401, 409, 412, 415, 420, 427, 433 illumination, 79, 80, 81, 82, 83 image, 51, 54, 62, 63, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 83, 85, 86, 95, 96, 97, 98, 99, 100, 101, 106, 107, 108, 109, 111, 185, 187, 213 image coordinates, 96, 108 image plane, 95, 106, 107 image processing, 54, 62, 63, 69, 74, 83, 118, 119, 187 images, xi, xv, 63, 69, 72, 74, 84, 85, 86, 99, 100, 106, 108, 109, 110, 111, 215 industrial environments, 187 industrial revolution, 6, 24 industrial robot, v, vii, viii, xi, xiv, xv, xvi, xvii, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 25, 26, 27, 28, 29, 30, 32, 33, 34, 35, 36, 38, 39, 40, 41, 42, 45, 46, 47, 68, 70, 72, 86, 87, 88, 89, 110, 112, 147, 171, 172, 173, 174, 175, 176, 177, 180, 188, 189, 190, 195, 196, 198, 199, 202, 203, 206, 213, 226, 230, 231, 232, 251, 254, 282, 283, 284, 285, 286, 287, 288, 290, 294, 295, 296, 297, 298, 301, 316, 318, 319, 320, 322, 323, 324, 326, 328, 335, 337, 349, 350, 394, 397, 398, 401, 414, 422, 425, 426, 427, 428, 429, 430, 431, 432, 433 industry, v, vii, viii, ix, xi, xiv, xv, xvi, xvii, 1, 3, 4, 6, 9, 10, 11, 13, 14, 15, 17, 19, 20, 21, 23, 24, 25, 46, 47, 49, 64, 66, 67, 68, 72, 83, 87, 110, 116, 117, 171, 193, 201, 202, 203, 205, 206, 207, 208, 222, 294, 295, 296, 297, 298, 301, 304, 305, 317, 318, 319, 320, 322, 325, 326, 327, 328, 329, 332, 336, 341, 342, 346, 347, 348, 349, 398, 403, 435 industry robotics, 202 inertia, 135, 161, 165, 167, 168, 234, 236, 237, 263, 272, 288, 388, 415, 416, 417, 424 integration, 46, 83, 102, 226, 234, 319, 328, 344 integrators, 189, 326, 328, 349 intelligence, xiii, xiv, xvi, 88, 190, 201, 202, 203, 207, 220, 295, 319 intelligent systems, xiv, 294, 318 investment, xvii, 318, 328 iteration, 157, 162, 165, 167, 218, 235

J Japan, xi, 6, 7, 8, 9, 10, 11, 14, 15, 16, 24, 25, 71, 225, 323 joints, 28, 29, 30, 34, 35, 39, 40, 42, 43, 45, 51, 53, 121, 122, 126, 144, 147, 156, 158, 162, 180, 181, 182, 183, 184, 227, 229, 230, 232, 234, 235, 236, 237, 240, 241, 243, 252, 253, 254, 257, 258, 269, 270, 282, 288, 363, 364, 366, 367, 368, 390, 399, 407, 408, 409, 410, 412, 413, 414, 416, 417 justification, 294, 295, 313, 316, 318

K kinematics, vii, xiv, xvi, 147, 158, 159, 168, 169, 269, 274, 390, 393, 397, 421, 422, 426, 431, 432, 433 kinesthetic teaching, 226, 227, 229, 231, 232, 233, 251, 254, 255, 257, 258, 260, 274, 284, 285, 286

L learning, xvi, 178, 189, 201, 207, 212, 213, 214, 215, 220, 222, 223, 393, 425, 427, 431 lenses, 73, 74, 75, 76, 77, 95, 98 Levenberg-Marquardt algorithm, 99, 420, 422 LiDAR, 105 light, xv, 53, 61, 62, 63, 65, 71, 75, 77, 78, 79, 80, 81, 82, 83, 91, 95, 101, 102, 104, 105, 110, 122, 132, 144, 173, 287, 289

M machine learning, 24, 87, 189, 202, 205, 207, 208, 210, 213, 214, 221 machining, viii, xvii, 199, 288, 306, 307, 308, 316, 317, 320, 321, 322, 323, 326, 327, 397, 398, 399, 400, 401, 402, 403, 404, 406, 407, 413, 414, 415, 418, 419, 420, 421, 422, 423, 424, 425, 426, 428, 429, 430, 431, 432, 433, 434 magnetic field, 54, 55, 57, 60, 133, 134, 135, 137, 139, 141, 181, 182, 416 magnetic materials, 132 management, xvii, 3, 70, 87, 322 manifolds, 122, 124, 125, 126, 129 manipulation, 2, 3, 4, 45, 158, 186, 189, 209, 289, 311, 312, 314, 317, 331, 391, 431 manufacturing, v, ix, x, xvi, xvii, 2, 25, 46, 71, 72, 87, 172, 180, 186, 197, 201, 203, 206, 207, 208, 209, 212, 213, 216, 293, 301, 316, 319, 320, 321, 322, 326, 327, 329, 348, 393, 397, 424, 425, 430 mapping, xv, 91, 110, 113, 115, 214, 269 market share, 231, 282, 287 mathematical model, viii, xv, xvii, 91, 95, 116, 284, 351, 352, 384, 389, 423 mathematics, 189 matrix, 51, 56, 57, 84, 107, 108, 109, 149, 150, 151, 153, 154, 155, 157, 158, 160, 163, 165, 168, 226, 230, 234, 236, 241, 265, 269, 270, 272, 277, 377, 378, 379, 386, 387, 388, 389, 390, 401, 404, 405, 406, 408, 409, 413, 414, 415, 416, 418, 424 measuring, 3, 40, 49, 50, 51, 52, 53, 57, 58, 65, 67, 68, 69, 110, 111, 112, 116, 118, 177, 181, 184, 305, 317, 323, 400, 402, 409, 410, 413, 416, 418, 420 metal industry, viii, xvi, 9, 13, 17, 20, 25, 293, 294, 296, 297, 298 micromanipulation, xvii, 351, 352, 367, 389, 390, 391, 392, 393 mobile robots, 50, 54, 55, 63, 64, 343 modeling, 46, 47, 88, 110, 169, 196, 198, 222, 288, 289, 290, 291, 360, 361, 376, 384, 393, 397, 399, 401, 416, 420, 421, 424, 425, 430, 431, 432, 433 modelling, 52, 67, 207, 213, 217, 397, 411, 428 models, xii, 106, 115, 202, 207, 212, 228, 230, 232, 248, 250, 258, 261, 272, 290, 357, 401, 410, 412, 415, 420, 424 momentum, 167, 376, 377, 378, 380, 381, 382, 386 motion control, xvi, 227


motion limits, 257
muscles, 144, 205, 368, 371

N

neural network, 47, 190, 212, 213, 401, 420, 421
neural networks, 190, 212, 213, 401, 420, 421
nonlinear dynamics, 213
nonlinear systems, 223
number of axes, 27

O

obstacles, xi, xii, 113, 209
operations, xiv, xvii, 2, 13, 14, 16, 17, 18, 20, 21, 23, 24, 66, 95, 104, 122, 173, 174, 175, 178, 183, 188, 191, 192, 193, 206, 214, 285, 293, 295, 299, 304, 305, 306, 307, 309, 315, 327, 330, 332, 333, 334, 335, 336, 344, 345, 347, 348, 358, 392, 397, 398, 407, 415, 425, 429
optimization, xvi, xvii, 99, 172, 199, 201, 202, 205, 207, 208, 209, 211, 216, 217, 218, 219, 221, 222, 224, 333, 351, 357, 397, 401, 404, 407, 412, 414, 418, 420, 421, 422, 423, 424, 425, 426, 427, 428, 430, 431, 432, 433
optimization algorithms, 202, 216, 219
optimization method, 217, 422, 424, 430

P

path planning, xv, xvi, 91, 110, 112, 113, 118, 201, 202, 207, 213, 401, 402, 414, 421, 423
phalanges, 358, 361, 362, 363, 366, 368, 369
phalanx, 362, 363
phase difference, 103
pinhole camera, 91, 94, 95, 96, 98, 99, 106, 116
plane, 66, 95, 99, 105, 106, 107, 277, 280, 281, 379, 380, 381, 382, 384, 385, 386, 388, 413
platform, x, 105, 115, 194, 207, 241
pneumatic actuators, 39, 121, 122, 123, 125
policy, 211, 223
principal point, 95
principles, xv, 51, 58, 71, 72, 74, 91, 116, 287
prior knowledge, 226, 276, 286
problem-solving, 189, 195
problem-solving skills, 195
product life cycle, 1, 24, 174
production costs, 293, 294, 317, 318



production process, xv, xvii, 1, 2, 3, 6, 9, 10, 11, 24, 46, 47, 69, 70, 86, 147, 293, 294, 295, 298, 301, 315, 317, 322, 328, 336, 339, 341, 343
production technology, 326
programming, 32, 33, 53, 67, 173, 187, 188, 273, 311, 329, 335, 403, 406, 430
project, xiii, 194, 205, 221, 382
projected-light cameras, xv, 91
protection, x, 135, 136, 202, 251, 298, 313, 347, 370, 372
pulse waves, 102
pulsed-light ToF cameras, 105
pupillary distance, 93

Q

quality, xiii, xiv, 2, 3, 24, 63, 64, 68, 69, 70, 73, 77, 99, 112, 116, 147, 179, 193, 195, 201, 293, 294, 295, 298, 304, 310, 317, 318, 328, 329, 333, 334, 335, 338, 340, 341, 343, 344, 348, 400, 403, 407, 418, 423
quality control, 64, 116, 340

R

radial distortion, 98
radiation, xii, 4, 62, 70, 123, 370
radius, 40, 149, 150, 153, 163, 381
recognition, xii, xv, xvi, 50, 52, 62, 64, 69, 70, 71, 87, 186, 187, 201, 202, 208, 212, 213, 215, 221, 222, 299, 399
redundancy, 181, 372, 403, 406, 407, 413, 427
reference frame, 97, 100, 108
reference system, 376, 377, 378, 379, 380, 381, 386, 387, 388, 390
reprojection errors, 99, 100
requirements, 46, 74, 122, 158, 172, 177, 190, 191, 294, 295, 299, 306, 318, 404
researchers, v, xvii, 188, 293, 361
resistance, 57, 58, 60, 231, 233
resolution, 54, 74, 83, 86, 95, 99, 140
response, 78, 85, 187, 227, 237, 242, 401, 417
rigid body, xiv, 97
robot hand, 215, 219, 311
robotic vision, vii, xv, 34, 62, 69, 70, 72, 87, 425
rods, 137
rotating setups (3D LiDARs), xv, 91, 101, 116
rotation axis, 162, 163, 265

rotational matrix, 264
rotational transformations, 152
RRTConnect algorithm, 113

S

safety, xvii, 46, 171, 172, 173, 176, 177, 178, 180, 186, 187, 188, 189, 190, 191, 195, 196, 201, 202, 210, 226, 227, 229, 235, 251, 258, 293, 294, 299, 325, 329, 359, 360, 393
scaling factor, 97, 253, 285
sensitivity, 173, 175, 179, 180, 182, 184, 185, 226, 286, 370, 371
sensor fusion, 91, 94, 116, 117, 118, 187
sensor system, xv, 49, 50, 52, 64, 66, 67, 316
sensors, vii, xv, xvi, 1, 27, 34, 45, 47, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 67, 68, 69, 70, 83, 84, 85, 87, 88, 91, 93, 94, 101, 105, 106, 113, 115, 116, 172, 176, 177, 179, 181, 182, 183, 186, 187, 190, 197, 209, 210, 212, 225, 227, 232, 235, 281, 283, 286, 287, 289, 291, 293, 299, 316, 391, 425, 433
sight, 92
simulation, 46, 202, 206, 207, 287, 316, 409, 410, 416, 417, 425, 426, 433
skeleton, 142, 353, 355, 356, 357, 363, 364, 366
skew, 97
skew factor, 97
smart industry, 202
social consequences, xiv
software, xiii, 64, 69, 73, 85, 86, 334, 347, 397, 414, 416, 417
solution, 144, 156, 157, 158, 159, 168, 184, 191, 192, 193, 194, 211, 218, 219, 226, 321, 323, 328, 388, 403, 406, 420, 424
sophisticated machines and robots in wood processing, 325
specifications, 195, 213, 338, 347
stability, 135, 226, 229, 233, 255, 260, 274, 359, 360, 388, 403, 417, 424, 426, 427, 430, 431, 433
steel, 136, 141, 400, 404, 416, 418
stereopsis, 91, 92, 93, 106, 116, 119
stereo-vision, xv, 91, 93, 94, 106, 107, 109, 116
stereo-vision camera, 93, 106
storage, 3, 301, 317, 329, 346
stress, 243
structure, xv, 2, 4, 27, 28, 29, 34, 35, 38, 39, 40, 41, 42, 43, 44, 45, 46, 51, 54, 60, 61, 62, 81, 113, 118, 122, 180, 183, 214, 215, 227, 228, 234, 240, 242, 314, 317, 351, 352, 353, 354, 357, 358, 361, 363, 366, 367, 370, 371, 401, 408, 409, 410, 413, 414, 417, 418, 420, 421, 425

T

tangential distortion, 98
target, 3, 114, 171, 263, 326, 421, 422
task allocation, 201, 208, 216, 217
task performance, 49, 67
techniques, xvi, 78, 201, 205, 208, 210, 214, 216, 323, 401, 421
technological advances, xiv
technological change, 1
technological progress, xvi, 172, 201
technology, v, x, xiii, xiv, xv, xvii, 1, 2, 24, 25, 35, 46, 47, 69, 70, 71, 72, 86, 88, 117, 144, 168, 169, 174, 175, 184, 185, 195, 197, 199, 201, 209, 220, 221, 223, 226, 288, 293, 299, 305, 325, 326, 327, 340, 343, 349, 402, 421, 425, 426, 427, 428, 429, 430, 431, 432, 433, 434
temperature, 135, 225, 228, 231, 232, 242, 244, 245, 246, 248, 249, 250, 251, 283, 286, 288, 290, 310, 312, 370, 371
three-dimensional space, 63, 147
time-of-flight (ToF) cameras, 101, 104, 105
trajectory, 122, 187, 209, 210, 216, 230, 233, 272, 274, 275, 285, 403, 404, 406, 407, 412, 413, 414, 419, 422, 427
transformation, 50, 97, 107, 147, 148, 150, 151, 152, 156, 236, 264, 269, 328, 358, 406, 408, 416
transformation matrix, 151, 156, 236
transformation of coordinates, 147
transport, 2, 11, 193, 298, 299, 300, 315, 335, 337, 339, 344, 347, 348
transportation, 1, 203, 316, 323, 329, 331, 332, 333, 336, 338, 339, 347, 348

U

under-compensation of friction, 260


undistorted image, 100, 101
upgrading processing, 325

V

vector, 56, 63, 108, 148, 149, 150, 152, 153, 154, 155, 156, 158, 159, 160, 162, 163, 165, 166, 167, 168, 214, 230, 234, 235, 236, 265, 266, 269, 270, 376, 377, 378, 379, 386, 387, 408, 413, 415, 421
velocity, 34, 38, 123, 130, 141, 159, 160, 161, 162, 164, 178, 179, 186, 218, 225, 229, 231, 232, 236, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 254, 256, 258, 269, 283, 284, 286, 288, 389, 390, 401, 405
vibration, 180, 206, 401, 402, 403, 415, 418, 419, 426, 428, 433
virtual image, 95
visible spectrum, 94
vision, xv, xvii, 1, 50, 62, 63, 64, 65, 68, 69, 70, 71, 72, 73, 74, 75, 78, 82, 83, 86, 87, 91, 93, 94, 101, 107, 109, 113, 116, 174, 176, 186, 201, 274, 294, 320, 321, 326
visual system, 50, 62
visualization, 244, 245, 247
vulnerability, 227, 232, 287

W

welding, 2, 13, 14, 16, 17, 18, 20, 21, 23, 24, 42, 65, 205, 304, 305, 306, 319, 320, 345, 407, 423, 428
windows, 214, 327
wood, xiv, xvii, 325, 326, 327, 328, 329, 330, 334, 336, 337, 338, 344, 346, 347, 348, 349
wood products, 326, 327, 328
wood species, 327
work environment, 78, 177
work space, 27, 30, 35, 36, 39, 40, 41, 56
workers, xiv, 11, 24, 46, 172, 176, 178, 195, 205, 208, 217, 293, 304, 326, 328, 329, 331
worldwide, xv, xvii, 1, 24, 296, 297