Springer Handbooks
Springer Handbooks maintain the highest standards of references in key areas of the physical and applied sciences for practitioners in industry and academia, as well as graduate students. Designed to be useful and readable desk reference books, but also prepared in various electronic formats, these titles allow fast yet comprehensive review and easy retrieval of essential reliable key information. Springer Handbooks cover methods, general principles, functional relationships and fundamental data and review established applications. All Springer Handbooks are edited and prepared with great care by editors committed to harmonizing the content. All chapters are written by international experts in their field. Indexed by SCOPUS. The books of the series are submitted for indexing to Web of Science.
Shimon Y. Nof Editor
Springer Handbook of Automation Second Edition
With 809 Figures and 151 Tables
Editor
Shimon Y. Nof
PRISM Center and School of Industrial Engineering, Purdue University, West Lafayette, IN, USA
ISSN 2522-8692    ISSN 2522-8706 (electronic)
Springer Handbooks
ISBN 978-3-030-96728-4    ISBN 978-3-030-96729-1 (eBook)
https://doi.org/10.1007/978-3-030-96729-1
1st edition: © Springer-Verlag Berlin Heidelberg 2009
© Springer Nature Switzerland AG 2023

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG.
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
This Springer Handbook is dedicated to all of us who collaborate with automation to advance humanity
Foreword: Automation Is for Humans and for Our Environment
Preparing to write the Foreword for this outstanding Springer Handbook of Automation, I have followed Shimon Y. Nof’s statement in his original vision for this Handbook: “The purpose of this Handbook is to understand automation knowledge and expertise for the solution of human society’s significant challenges; automation provided answers in the past, and it will be harnessed to do so in the future.” The significant challenges are becoming ever more complex, and learning how to address them with the help of automation is significant too. The publication of this Handbook, with the excellent information and advice of a group of top international experts, is therefore most timely and relevant.

The core of any automatic system is the idea of feedback, a simple principle governing any regulation process occurring in nature. The process of feedback governs the growth of living organisms and regulates an innumerable quantity of variables on which life is based, such as body temperature, blood pressure, and cell concentration, and on which the interaction of living organisms with the environment is based, such as equilibrium, motion, visual coordination, response to stress and challenge, and so on.

Humans have always copied nature in the design of their inventions: feedback is no exception. The introduction of feedback in the design of man-made automation processes occurred as early as the golden century of Hellenistic civilization, the third century BC. The scholar Ktesibios, who lived in Alexandria circa 280–240 BC and whose work has been handed down to us only by the later Roman architect Vitruvius, is credited with the invention of the first feedback device. He used feedback in the design of a water clock. The idea was to obtain a measure of time from the inspection of the position of a floater in a tank of water filled at constant velocity. To make this simple principle work, Ktesibios’s challenge was to obtain a constant flow of water into the tank. He achieved this by designing a feedback device in which a conic floating valve serves the dual purpose of sensing the level of water in a compartment and of moderating the outflow of water.

The idea of using feedback to moderate the velocity of rotating devices eventually led to the design of the centrifugal governor in the eighteenth century. In 1787, T. Mead patented such a device for the regulation of the rotary motion of a windmill, letting the sail area be decreased or increased as the weights in the centrifugal governor swing outward or, respectively, inward. The same principle was applied two years later by M. Boulton and J. Watt
to control the steam inlet valve of a steam engine. The basic, simple idea of proportional feedback was further refined in the middle of the nineteenth century, with the introduction of integral control to compensate for constant disturbances. W. von Siemens, in the 1880s, designed a governor in which integral action, achieved by means of a wheel-and-cylinder mechanical integrator, was deliberately introduced. The same principle of proportional and integral feedback gave rise, by the turn of the century, to the first devices for the automatic steering of ships and became one of the enabling technologies that made the birth of aviation possible. The development of sensors, essential ingredients in any automatic control system, resulted in the creation of new companies.

The perception that feedback control and, in a wider domain, automation were taking the shape of an autonomous discipline emerged at the time of the Second World War, when the applications to radar and artillery had a dramatic impact, and immediately afterward. By the early 1950s, the principles of this newborn discipline had quickly become a core ingredient of most industrial engineering curricula; professional and academic societies were established, and textbooks and handbooks became available. At the beginning of the 1960s, two new driving forces provoked an enormous leap ahead: the rush to space and the advent of digital computers in the implementation of control systems. The principles of optimal control, pioneered by R. Bellman and L. Pontryagin, became indispensable ingredients for the solution of the problem of soft landing on the moon and in manned space missions. Integrated computer control, introduced in 1959 by Texaco for set-point adjustment and coordination of several local feedback loops in a refinery, quickly became the standard technique for controlling industrial processes.

Those years also saw the birth of the International Federation of Automatic Control (IFAC), a multinational federation of scientific and/or engineering societies, each of which represents, in its own nation, the values and interests of scientists and professionals active in the field of automation and in related scientific disciplines. The purpose of such a federation, established in Heidelberg in 1956, is to facilitate the growth and dissemination of knowledge useful to the development of automation and to its application to engineering and science. Created at a time of acute international tensions, IFAC was a precursor of the spirit of the so-called Helsinki agreements of scientific and technical cooperation between East and West signed in 1973. It represented, in fact, a sincere manifestation of interest, from scientists and professionals of the two confronting spheres of influence into which the world was split at that time, toward true cooperation and common goals. This was the first opportunity after the Second World War for scientists and engineers to share complementary scientific and technological backgrounds, notably the early successes of the space race in the Soviet Union and the advent of electronic computers in the United States. The first President of IFAC was an engineer from the United States, while the first World Congress of the Federation was held in Moscow in 1960. The Federation currently includes 48 national member organizations, runs more than 60 scientific conferences with a three-year periodicity, including a World Congress of Automatic Control, and publishes some of the leading journals in the field.
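As a compact illustration of the proportional and integral feedback principle recalled earlier in this Foreword, the control law underlying such governors can be sketched, in standard textbook notation (the symbols and the specific form are added here for illustration and are not drawn from the Handbook chapters), as

$$ u(t) = K_p\, e(t) + K_i \int_0^t e(\tau)\, \mathrm{d}\tau , $$

where $e(t)$ is the deviation of the regulated variable (for example, the rotational speed) from its set point, $u(t)$ is the corrective action applied by the governor, and $K_p$ and $K_i$ are the proportional and integral gains. The integral term accumulates past error, which is what removes the steady-state offset left by constant disturbances.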
Since the founding of IFAC, three decades of steady progress followed. Automation is now an essential ingredient in manufacturing, in the petrochemical, pharmaceutical, and paper industries, in the mining and metal industries, in the conversion and distribution of energy, and in many services. Feedback control is indispensable and ubiquitous in automobiles, ships, and aircraft. Feedback control is also a key element of numerous scientific instruments as well as of consumer products, such as smartphones. Despite this pervasive role of automation in every aspect of technology, its specific value is not always perceived as such, and automation is often confused with other disciplines of engineering. The advent of robotics, in the late 1970s, is, in some sense, an exception to this, because the impact of robotics on modern manufacturing industry is visible to everyone. However, in this case too there is a tendency to consider robotics and its associated impact on industry as an implementation of ideas and principles of computer engineering rather than principles of automation and feedback control.
In the last decade of the previous century and in the first decade of the present century, automation experienced tremendous growth. Progress in the automobile industry in the last decade has only been possible because of automation. Feedback control loops pervade our cars: steering, braking, attitude stabilization, motion stabilization, combustion, and emissions are all feedback controlled. This is a dramatic change that has revolutionized the way in which cars are conceived and maintained. Industrial robots have reached a stage of full maturity, but new generations of service robots are on their way. Four-legged and even two-legged autonomous walking machines are able to walk over rough terrain; service robots are able to interact autonomously with uncertain environments and adapt their mission to changing tasks, to explore hostile or hazardous environments, and to perform jobs that would otherwise be dangerous for humans. Service robots assist elderly or disabled people and are about to perform routine services at home. Surgical robotics is a reality: minimally invasive micro-robots are able to move within the body and to reach areas not directly accessible by standard techniques. Robots with haptic interfaces, able to return force feedback to a remote human operator, make tele-surgery possible. New frontiers of automation encompass applications in agriculture, in recycling, in hazardous waste disposal, in environmental protection, and in safe and reliable transportation.

In the second decade of this century, artificial intelligence and machine learning have marked the beginning of a new age in automation. Intelligent machines now match or, in some cases, exceed human performance in a range of activities, including ones requiring cognitive capabilities. The development of sophisticated algorithmic systems, combined with the availability of large amounts of data and processing power, has been instrumental in this respect. Automation and artificial intelligence are changing the nature of work. As intelligent machines and software are integrated more deeply into the workplace, workflows and workspaces will continue to evolve to enable humans and machines to work together. In the manufacturing industry, this “fourth” revolution is characterized by widespread cooperation between humans and robots, by the integration of physical systems and virtual reality, and by extensive interconnection of machines via the Internet. Rapid developments in embedded and high-performance computing, wireless communication, and cloud technology are inducing drastic changes in the architecture and operation of industrial automation systems. Model-based design is now accessible to individual users via widely available commercial software packages: a well-known firm advertises that “a robot that sees, acts and learns can be programmed in an afternoon.”

Increased automation will eventually contribute to higher productivity and wealth. A model developed in 2019 by the McKinsey Global Institute predicts that automation could raise productivity growth globally by 0.8–1.4% annually. Their model suggests that half of today’s work activities could be automated by 2055, but this could happen up to 20 years earlier or later depending on various scenarios. In the short term, jobs might be adversely affected, but in the long run this will be offset by the creation of new types of work not foreseen at present.
It is foreseeable that a substantial shift will occur in workforces, similar to the shift away from agriculture that occurred in the developed countries in the twentieth century.

At the dawn of the twentieth century, the deterministic view of classical mechanics and some consequent positivistic philosophical beliefs that dominated the nineteenth century had been shaken by the advent of relativistic physics. Today, after a century dominated by the expansion of technology and, to some extent, by the belief that no technological goal was impossible to achieve, similar upheavals are feared. The clear perception that resources are limited, the uncertainty of the financial markets, and the diverse rates of development among nations all contribute to the awareness that the model of development followed so far in the industrialized world will change. Today’s wisdom and beliefs may not be the same tomorrow. All these expected changes might provide yet another great opportunity for automation. Automation will no longer be seen only as automatic production but as a complex of technologies that guarantee reliability, flexibility, and safety for humans as well as for the environment.
In a world of limited resources, automation can provide the answer to the challenges of sustainable development. Automation has the opportunity to make a greater and even more significant impact on society. In the first half of the twentieth century, the precepts of engineering and management helped solve economic recession and ease social anxiety. Similar opportunities and challenges are occurring today. This leading-edge Springer Handbook of Automation will serve as a highly useful and powerful tool and companion to all modern-day engineers and managers in their respective professions. It comes at an appropriate time and provides a fundamental core of basic principles, knowledge, and experience by means of which engineers and managers will be able to respond quickly to changing automation needs and to find creative solutions to the challenges of today’s and tomorrow’s problems. It has been a privilege for many members of IFAC to participate with Springer Publishers, Dr. Shimon Y. Nof, and the over 250 experts, authors, and reviewers in creating this excellent resource of automation knowledge and ideas. It also provides a full and comprehensive spectrum of current and prospective automation applications, in industry, agriculture, infrastructures, services, health care, enterprise, and commerce. A number of recently developed concepts and powerful emerging techniques are presented here for the first time in an organized manner and clearly illustrated by specialists in those fields. Readers of this new edition of the Springer Handbook of Automation are offered the opportunity to learn proven knowledge, from underlying basic theory to cutting-edge applications in a variety of emerging fields.

Rome, September 2022
Alberto Isidori
Foreword: Automation Is the Science of Integration
In our understanding of the word automation, we used to think of manufacturing processes being run by machines without the need for human control or intervention. From the outset, the purpose of investing in automation has been to increase productivity at minimum cost and to assure uniform quality. Continually assessing and exploiting the potential for automation in the manufacturing industries has, in fact, proved to be a sustainable strategy for responding to competition in the marketplace, thereby securing attractive jobs. Automation equipment and related services constitute a large and rapidly growing market. Supply networks of component manufacturers and system integrators, allied with engineering skills for planning, implementing, and operating advanced production facilities, are regarded, together with data science and Artificial Intelligence, as cornerstones of competitive manufacturing. Therefore, the emphasis of national and international initiatives aimed at strengthening the manufacturing base of economies is on holistic strategies for research and technical development, education, socio-economics, and entrepreneurship. As a result, progress in automation technologies enables industries to provide high product variety, from production lines up to highly customized products.

Meanwhile, our understanding of the world of products has also changed considerably. Physical products have become cyber-physical entities existing in the physical world as well as in a digital world, and they are expected to interact in the so-called Metaverse in the future. They come as a bundle of hardware, software, and physical as well as digital services. Automation has expanded into almost every area of daily life: from smart products for everyday use, networked buildings, automated or autonomous vehicles and logistics systems, and collaborative robots to advanced healthcare and medical systems. It includes the well-known control of physical systems as well as data automation. In simplified terms, automation today can be considered as the combination of processes, devices, and supporting technologies, all enabled by advanced digitization.

As a world-leading organization in the field of applied research, the Fraunhofer-Gesellschaft has been a pioneer in relation to numerous technological innovations and novel system solutions in the broad field of automation. Its institutes have led the way in research, development, and implementation of industrial robots and computer-integrated manufacturing
systems, service robots for professional and domestic applications, autonomous vehicles, advanced systems for office automation and e-Commerce, as well as automated residential and commercial buildings. Moreover, our research and development activities in advanced manufacturing and logistics as well as office and home automation have been accompanied by large-scale experiments and demonstration centers, the goal being to integrate, assess, and showcase innovations in automation in real-world settings and application scenarios.

On the basis of this experience, we can state that, apart from research in key technologies such as sensors, actuators, process control, and user interfaces, automation is first and foremost the science of integration: mastering the process from specification, design, and implementation through to the operation of complex systems that have to meet the highest standards of functionality, safety, cost-effectiveness, and usability. Therefore, scientists and engineers need to be experts in their respective disciplines while at the same time having the necessary knowledge and skills to create and operate large-scale systems.

The Springer Handbook of Automation is an excellent means of both educating students and providing professionals with a comprehensive yet compact reference work for the field of automation. The Handbook covers the broad scope of relevant technologies, methods, and tools and presents their use and integration in a wide selection of application contexts: from agricultural automation to surgical systems, transportation systems, and business process automation. I wish to congratulate the editor, Prof. Shimon Y. Nof, on succeeding in the difficult task of covering the multifaceted field of automation and of organizing the material into a coherent and logically structured whole. The Handbook admirably reflects the connection between theory and practice and represents a highly worthwhile review of the vast accomplishments in the field. My compliments go to the many experts who have shared their insights, experience, and advice in the individual chapters. Certainly, the Handbook will serve as a valuable tool and guide for those seeking to improve the capabilities of automation systems, for the benefit of humankind.

Stuttgart, January 2022
Hans-Jörg Bullinger
Foreword: Automation Technology: The Near Limitless Potential
The Springer Handbook of Automation (2nd Edition) is a truly remarkable resource volume that will be indispensable to engineers, business managers, and governmental leaders around the globe for years to come as they strive to better address the evolving needs of society. This technology, or rather set of technologies, will be an essential tool in that effort. Some of my comments will be specific to robotics, which represents one of the key building blocks for automation systems. Many of the characteristics associated with robots are clearly impacting the design of other automation system building blocks.

My perspective was shaped by nearly five decades of experience while employed at both General Motors R&D and its Manufacturing Engineering organization. I began my career as a researcher in robotics and machine vision. I went on to eventually lead GM’s robotics development team and to be an inaugural director of its Controls, Robotics & Welding organization, which supported plant automation throughout GM worldwide. From there, I had the opportunity to lead GM’s global manufacturing research team. I am proud to have received the Joseph F. Engelberger Robotics Award in 1998 for “contributing to the advancement of the science of robotics in the service of mankind,” in part because I believe that this stated goal is so profoundly important. I was responsible for the first visually guided robot in the automotive industry and several other industry firsts, but even more important contributions came from an impressive set of collaborators that I had the great fortune to have worked with. These included, to name but a few, Joe Engelberger (father of robotics), Vic Scheinman, Brian Carlisle, Bruce Shimano, John Craig, Seiuemon Inaba, Jean-Claude Latombe, Oussama Khatib, Shimon Y. Nof, Berthold Horn, Marvin Minsky, Gary Cowger, Gerry Elson, Dick Beecher, Lothar Rossol, Walt Cwycyshyn, Jim Wells, and Don Vincent. I wish to thank these people for their many, many contributions to the field and for all that I learned from them.

That said, I am writing today more as an observer or witness, since I was there when many of the key breakthroughs were achieved and as seismic changes rocked the underlying automation technology. GM had the vision and was able to pioneer many significant and successful robotics and automation solutions, along with a few less successful efforts. Other industries too have advanced the use of innovative robotics and automation systems that similarly improved competitiveness. GM was the first company ever to use a commercial robot in production. That first one was built by George Devol and Joe Engelberger’s company, Unimation, and was
used to perform a simple material handling function back in 1961. GM later was the first to use a robot to perform spot welding, first to implement a line of spot-welding robots, first to use a robot dedicated to painting, one of the first to do assembly with robots, first to install a COBOT (collaborative robot) designed to work with people rather than being kept isolated from them for safety reasons, first to implement fixturing robots, and so on. GM’s huge buying power also played a significant role in rapidly transforming the robotics industry away from less reliable hydraulic machines to much more reliable, fast, precise electric ones controlled by computers. This brought huge performance gains and much lower costs, which in turn accelerated the global implementation of automation systems across all industries.

These were exciting times, pivotal to the advancement of automation technology, but I think I also need to focus briefly on automation technology from a social point of view, since it too is important and so often misunderstood. The adoption of automation is portrayed in the media as either something good and desirable (occasionally) or something bad to be avoided (unfortunately, more likely), but this was always a false choice. The media tried to paint it as a battle between “blue collar” workers and “steel collar” workers (i.e., robots). Automation fundamentally improves efficiency, among its other many benefits. We live in a world that is globally competitive, and this means those countries that squander opportunities to improve efficiency for political or social reasons can expect an inevitable decline in their standard of living over time. There may be a time lag, but this conclusion is inescapable. Clearly, the introduction of automation can cause displacement of workers, but nothing can protect their jobs if those workers are allowed to become non-competitive in the global market.

I had the opportunity to speak with a VP of the United Auto Workers (UAW) at an MIT tech conference in the 1990s. I was shocked when he shared with me that the UAW has always been in support of automation. He went on to explain that this support was subject only to two conditions. First, he said that workers who are either impacted or displaced by automation must always be treated with dignity and respect. Second, workers should be allowed to share in the benefits that accrue from that automation. This resonated with me then and I never forgot. Of course, it makes great sense. I cannot say that displaced workers were always treated properly, but they clearly deserved that. In general, I can say that workers were allowed to share in the benefits, at least in the automotive industry, in that those workers have long enjoyed much higher wages and better benefits than available to workers in similar manufacturing jobs in other industries. This, of course, was essentially paid for by the ever-higher production efficiencies.

Other automation benefits include dramatically expanded production volume, lowered costs, increased quality and safety, lessened energy demands, and the enablement of innovative products not conceivable without automation. For example, modern microcomputers and electronic chips, which are so ubiquitous in our world, could not be produced without automation. Similarly, smartphones, modern communication technology, and many other consumer products would not be possible.
Automation, in a very real sense, is one of the most successful strategies ever devised to meet increasing demand from the global market and simultaneously address critical social issues facing our planet, including climate change, food production, and a host of other issues. To be sure, there is a balance to be struck in our world between consumer demands and social issues, but the tools of automation enable dramatically better outcomes than would be possible without them. Two of the hallmarks of a professional in our modern society are that they use the right tools and that they consistently produce high-quality results day in and day out. Automation can be thought of as giving people improved tools to do their jobs better, faster, and more economically. We can trace it back to the invention of the wheel and simple hand tools to enhance human performance. This in turn eventually led to power tools to perform even better and to lower the burden of often dirty, difficult, and dangerous jobs. This led to increasing levels of automation and ever-improving performance. There is no end in sight for what can be accomplished, with unfathomable opportunities yet to be discovered.
One interesting trend in mechanical automation has been what I think of as the robotization of automation: using the characteristics inherent in robots to improve both flexibility and adaptability in other automation system elements. That is, we are seeing robot technology permeating many aspects of automation and even consumer products, resulting in more flexibility or, in some cases, programmability. Beyond the mechanical aspects, we also see the penetration of AI techniques to improve the logic or thinking aspects of our automation systems. The first robots were large, rigid, expensive, powerful machines with limited programmability and limited accuracy. Robots themselves have since been adapted to the needs of many new application areas. Material handling and spot-welding robots typically have very large work envelopes along with great strength and speed. Fixturing robots have nearly the opposite characteristics, with small work envelopes, high rigidity, and slow speed. Painting and sealing robots need to be able to execute complex motions very smoothly with a constant tip speed.

Lastly, one of the most important metrics for automation technology has always been availability (or up-time). Automation systems do not accomplish much if you cannot keep them working. My recent technical pursuits have centered around Vehicle Health Management (VHM), primarily for consumer cars and trucks. One of the goals of VHM is to be able to predict problems before they happen so that action can be taken in time to mitigate or avoid the issues entirely. VHM is built on advanced analytics, and many of the same techniques that apply to automobiles can be used to great benefit in manufacturing automation systems to keep them operational and more available. This gives us yet another reason for tremendous optimism as automation technology continues to evolve and advance.

I believe that this new edition of the Springer Handbook of Automation lays out a solid foundation for the automation innovators and leaders of the future by capturing the lessons of the past and delivering a comprehensive set of tools upon which automation will continue to advance for the benefit of our global society.

Detroit, July 2022
Steven W. Holland
Foreword: The Dawn of the Smart Manufacturing Era Enables High-Quality Automation
In recent years, the term cyber-physical system (CPS) has often been used to refer to production systems, as well as to the robots, NC machine tools, and other production machines that automate these systems. “Physical” refers to things one can see and touch in the real world, while “cyber” refers to a virtual space within a computer that models the real world. In the “physical” world, achieving high quality with customer satisfaction is one of the most important factors for a product to become globally competitive. Manufacturing high-quality products requires advanced manufacturing techniques such as high-precision machining and assembly, as well as the large role that skilled experts play. In many industrialized countries, however, the number of skilled experts is declining due to many factors, including lower birthrates, aging populations, and higher education. Smart manufacturing is expected to solve this problem by utilizing robots, IoT (Internet of Things), and AI (Artificial Intelligence), leading to the production of high-quality products without requiring high levels of expertise.

Also, with the introduction of collaborative robots, humans and robots can work together in the same space. In addition to allowing humans and collaborative robots to work side by side, this enables a division of labor between humans and robots, in which most of the work is done by robotic automation and the more advanced tasks are performed by humans. A mobile robot, that is, a collaborative robot mounted on an automatic guided vehicle (AGV), can move freely to where it is needed and continue to work. In addition, the United Nations has been actively promoting the Sustainable Development Goals (SDGs) to help reduce environmental burdens. If smart manufacturing becomes a reality, production can be optimized by using the robots, machine tools, and the cells and production systems that encompass them in the “cyber” space of the CPS. These optimized results can then be implemented in the “physical” world. For example, one can find a robot layout that minimizes power consumption, which is in line with the concept of the SDGs. Also, the breakdown of a production machine can cause major damage to a factory. Collecting and analyzing large amounts of operation data using IoT is making it possible to predict part failures in advance and prevent serious damage. AI applications are also advancing, allowing the automation of advanced tasks such as bin-picking (picking up randomly located objects inside a bin) with a 3D vision sensor and pass/fail judgment in assembly operations. Thus, with the help of robots, IoT, and AI technologies, we are about to open the door to a new era of smart manufacturing.
This automation handbook contains a significant amount of information about cutting-edge technologies, including smart production, manufacturing, and services related to automation. I am confident that it will provide a new vision for beginners, students, teachers, system integrators, robot engineers, and business owners.

Yoshiharu Inaba
Doctor of Engineering
Chairman, FANUC Corporation
Foreword: Automation of Surgical Robots
Between the first and current editions of this book, over 12 years have passed, in which surgical robots have progressed from infancy to childhood. Twelve years ago, surgical robots struggled to gain acceptance in operating rooms in order to prove their clinical benefits. With more applications of surgical robots and a larger number of live operations, robots have been able to demonstrate their advantage in three main areas: higher accuracy, better accessibility that allows for minimal invasiveness, and reduced radiation, all of which translate to better patient outcomes. Given the growing number of surgical robot companies, the question in some disciplines is no longer “do I need a robot?” but rather “which robot should I use?” More than a dozen companies offer regulatory-approved surgical robots as of the end of 2021, and the number of cases performed with robotic assistance is on the order of a million per year, spanning general surgery, orthopedics, cardiovascular, urology, ENT, radiotherapy, and other fields.

In recent years, a new player has emerged: Artificial Intelligence and Machine Learning. Machine learning has benefited all disciplines of medicine because it allows computers to analyze millions of cases and determine the best procedure for the patient, whether diagnostic or therapeutic. This enabling technology will become more and more dominant in this and other fields as better algorithms and larger databases are utilized. Even though machines make mistakes, it turns out that the experience computers can gain from millions of cases far outweighs the experience a single person can gain in a lifetime. Although it is still debatable whether a computer can diagnose better than a human, the odds are shifting more and more in favor of computers, much to our dismay.

The majority of today’s surgical robotic procedures are performed in a remote manipulation mode, in which the robot follows the surgeon’s hand motion. Some robots use a semi-active mode, in which the robot directs the surgeon to the correct trajectory while the surgeon performs the actual surgical procedure (e.g., cutting, drilling). Only two robots are currently approved for active procedures, in which the robot holds the surgical tool and performs the procedure autonomously. This, I believe, will change dramatically by the time the next edition of this book comes out in a few years. Since robots are no longer an unknown in the operating room, more active robots will be designed and receive regulatory approval, and the operating room
staff, particularly the younger ones, will be more accepting of them. When enhanced robot capabilities are combined with the dramatic rise of machine learning, a new era of autonomous surgical robots is on the horizon. The shift of the surgeon from a dexterous operator to a decision-maker would be a major impact of surgical robots on surgery. Because robots will play a key role in the precise execution of the planned surgery, the surgeon will choose the best treatment option, and robots that excel at manipulation will carry out the treatment with precision.

Robots can be made in a variety of sizes, from large machines to miniature ones capable of treating the human body from within. Several companies are working on small robots that are on the verge of science fiction, capable of monitoring and administering therapeutic procedures from within the human body. There are currently no FDA-approved robots capable of performing these tasks, but in the long run, miniature robots that live unharmed in the human body and are capable of detecting and treating dangerous lesions at an early stage, for example, would be by far the best and most efficient surgical robots compared to invasive ones.

Surgery is a cautious profession. In today’s world, having a robot disrupt the long tradition of surgery and shift the paradigm is a revolution. Combined with more autonomy and machine learning, the use of robots and automation in this conservative, life-critical field comes ever closer to the topic of this excellent and very informative Springer Handbook of Automation, whether we like it or not. This new edition indeed adds expanded coverage of medical robotics and automation.

Haifa, Israel
August 2022
Moshe Shoham
Preface
There are at least three good reasons to create this new edition:

1. We want to understand automation better, realizing its exponential growth over the years and in the past decade, and its wider and deeper impact on our life.
2. We want to know and clarify what has changed and evolved in automation over the past decade, and the future trends and challenges as we recognize them now.
3. We want to clarify and understand how automation is so helpful to us, as it has come to our rescue during the recent pandemic period; and at the same time we keep wondering, can we limit it from ever taking over our freedoms?

We love automation…

• When it does what we need and expect from it, like our most loyal partner: wash our laundry and dishes, secure financial transactions, supply electricity where and when it is needed, search for answers, share music and movies, assemble and paint our cars.
• And more personally, image to diagnose our health problems and dental aches, cook our food, and help us navigate.
• Who would not love automation?

We hate automation and may even kick it…

• When it fails us, like a betraying confidant: turn the key or push a button and nothing happens the way we anticipate: a car does not start, a TV does not display, our cellphone or digital assistant is misbehaving, the vending machine delivers the wrong item or refuses to return change.
• When planes are late due to a mechanical problem and business transactions are lost or ignored due to computer glitches.
• Eventually those problems are fixed and we turn back to the previous paragraph, loving automation again.

We are amazed by automation and all those people behind it. Automation thrills us when we learn about its new abilities, better functions, more power, faster response, smaller size, and greater reliability and precision. And we are fascinated by automation’s marvels: in entertainment, communication, and scientific discoveries; how it is applied to explore space and conquer difficult maladies of society, from medical and pharmaceutical automation solutions to energy supply, remote education, and smart transportation (soon to be autonomous?); and we are especially enthralled when we are not really sure how it works, but it works.

It all begins when we, as young children, observe and notice; perhaps we are bewildered that a door automatically opens when we approach it, or when we first ride a train or bus, or when we notice the automatic sprinklers, or lighting, or home appliances: How can it work on its own? Yes, there is so much magic about automation.
This magic of automation is what inspired a large group of us, colleagues and friends from around the world, all connected by automation, to compile, develop, revise, update, and present the new edition of this exciting Springer Handbook of Automation: to explain to our readers what automation is, how it works, how it is designed and built, where it can be and is applied, and where and how it is going to be improved and created even better; what the scientific principles behind it are, and what emerging trends and challenges are being confronted. All of it is concisely yet comprehensively covered in the 74 chapters presented in front of you, the readers.

Flying over the beautiful fall-colored forest surrounding Binghamton, New York, in the 1970s on my way to IBM’s symposium on the future of computing, I was fascinated by the miracle of nature beneath the airplane: such immense diversity of leaves changing colors; such magically smooth, wavy movement of the leaves dancing with the wind, as if they were programmed with automatic control to responsively transmit certain messages needed by some unseen listeners. And the brilliance of the sunrays reflected in these beautiful dancing leaves (there must be some purpose to this automatically programmed beauty, I thought). More than once, while reading the new and freshly updated chapters that follow, I was reminded again of this unforgettable image of multilayered, interconnected, interoperable, collaborative, responsive waves of leaves (and services).

The take-home lesson from that symposium was that mainframe computers had hit, about that time, a barrier: it was stated that faster computing was impossible, since mainframes would not be able to cool off the heat they generated (unless a miracle happened). As we all know, with superb human ingenuity, computing and automation have overcome that barrier and other barriers. Persistent progress, including fast personal, cloud, and edge computers, better software, machine learning, intelligent automation, and wireless communication, resulted in major performance and cost-effectiveness improvements: client-server workstations; wireless access; local and wide area networks; multi-core, AI, and vision processors; web-based internetworking, grids, cyber-physical systems, and more have been automated, and there is so much more yet to come. Thus, more intelligent automatic control and collaborative automation and more responsive human–automation interfaces could be invented and deployed for the benefit of all.

Progress in distributed, networked, and collaborative control theory, computing, communication, and automation has enabled the emergence of e-Work, e-Business, e-Medicine, e-Service, e-Commerce, and many other significant e-Activities based on automation. By this new edition, all of them have become common and have evolved to c-Work, c-Business, c-Medicine, c-Service, c-Commerce, c-Precision Agriculture, and more. And for the next edition, we anticipate all of them will drop the e- (electronic) and c- (cyber collaborative), since all activities with automation will inherently be automated electronically and cyber-collaboratively.

It is not that our ancestors did not recognize the tremendous power and value of delegating effort to tools and machines, and, furthermore, of synergy, teamwork, collaborative interactions, and decision-making.
But only when efficient, reliable, and scalable automation reached a certain level of maturity could it be designed into systems and infrastructures servicing effective supply and delivery networks, social networks, and multi-enterprise practices. In their vision and mission, enterprises expect to simplify their automation utilities and minimize their burden and cost, while increasing the value and usability of all deployed functions and acquirable information. Their goal: timely conversion into relevant knowledge, goods, services, and practices. Streamlined knowledge, services, and products would then be delivered with less effort, just when necessary and only to those clients or decision-makers who truly need them. Whether we are in commerce or in service for society, the real purpose of automation is not merely better computing or better automation, but to increase our competitive agility and service quality! The Springer Handbook of Automation Second Edition achieves this purpose well again.

Throughout the 74 chapters, divided into 10 main sections, with numerous tables, equations, figures, and a vast number of references, and with numerous guidelines, frameworks, protocols, models, algorithms, theories, techniques, and practical principles and procedures, the 149 co-authors revise and present proven and up-to-date knowledge, original analysis, best
practices, and authoritative expertise. Plenty of case studies, creative examples, and unique illustrations, covering topics of automation from the basics and fundamentals to advanced techniques, case examples, and theories, will serve the readers and benefit students and researchers, engineers and managers, inventors, investors, and developers.

Special Thanks

To stimulate and accomplish such a complex endeavor of excellence and bring to you this new edition, a Network of Friends (NoF) is essential. I wish to express my gratitude and thanks to our distinguished Advisory Board members, who are leading international authorities, scholars, experts, and pioneers of automation, and who have guided the revision and development of this Springer Handbook, for sharing with me their wisdom and advice throughout the challenging editorial effort; to the Advisory Board members and co-authors of the original edition of this Handbook, upon whose efforts and expertise we have built and updated this new edition; and to our co-authors and our esteemed reviewers, who are also leading experts, researchers, practitioners, and pioneers of automation.

Sadly, several personal friends and colleagues, Professors Yukio Hasegawa, Ted Williams, Christopher Bissell, Tibor Vamos, and Daniel Powers, who took an active part in helping create this Springer Handbook and its new edition, passed away before they could see it published. They left enormous voids in our community and in my heart, but their legacy will continue to inspire us.

All the new and revised chapters were reviewed thoroughly and anonymously by over 100 reviewers and went through several critical reviews and revision cycles. Each chapter was reviewed by at least five expert reviewers to assure the accuracy, relevance, timeliness, and high quality of the materials presented in this Springer Handbook Second Edition. The reviewers included:
• Vaneet Agrawal, Purdue University
• Praditya Ajidarma, Bandung Institute of Technology, Indonesia
• Juan Apricio, Siemens
• Parasuram Balasubramanian, Theme Work Analytics, India
• Ruth Bars, Budapest University of Technology and Economics, Hungary
• Avital Bechar, Volcani Institute, Israel
• Saif Benjaafar, University of Minnesota
• Sigal Berman, Ben-Gurion University, Israel
• Radhika Bhargava, Twitter
• Jeff Burnstein, Association for Advancing Automation
• Barrett Caldwell, Purdue University
• Brian Carlisle, Precise Automation
• Jose A. Ceroni Diaz, Pontificia Universidad Católica de Valparaiso, Chile
• David Hongyi Chen, PRISM Center, Purdue University
• Xin W. Chen, Southern Illinois University-Edwardsville
• Chris Clifton, Purdue University
• Juan Diego Velasquez de Bedout, Purdue University
• Juan Manuel de Bedout, Raytheon Technologies
• Alexandre Dolgui, IMT Atlantique-Nantes, France
• Puwadol Oak Dusadeerungsikul, Chulalongkorn University, Thailand
• Yael Edan, Ben-Gurion University, Israel
• Florin Gheorghe Filip, Romanian Academy, Romania
• Kenichi Funaki, Hitachi, Japan
• Micha Hofri, Worcester Polytechnic Institute
• Steven Holland, General Motors
• Chin-Yin Huang, Tunghai University, Taiwan
• Alf J. Isaksson, ABB, Sweden
• Kazuyoshi Ishii, Kanazawa Institute of Technology, Japan
• Alberto Isidori, University of Rome, Italy
• Sirkka-Liisa Jämsä-Jounela, Aalto University, Finland
• Wootae Jeong, Railroad Research Institute, Korea
• Sally Johnson, John Carroll University
• Hoo Sang Ko, Southern Illinois University-Edwardsville
• Samuel Labi, Purdue University
• Francoise Lamnabhi-Lagarrigue, University of Paris-Saclay, France
• Steven J. Landry, Pennsylvania State University
• Seokcheon Lee, Purdue University
• Louis Zekun Liu, Purdue University
• Freddy A. Lucay, Pontificia Universidad Católica de Valparaíso, Chile
• Ramses Martinez, Purdue University
• Joachim Meyer, Tel Aviv University, Israel
• Mahdi Moghaddam, PRISM Center, Purdue University
• Mohsen Moghaddam, Northeastern University
• Gerard Morel, University of Lorraine-Nancy, France
• Francisco Munoz, Javeriana University Cali, Colombia
• Gaurav Nanda, Purdue University
• Ricardo Naon, Purdue University
• Win P.V. Nguyen, PRISM Center, Purdue University
• Brandon Pitts, Purdue University
• Rashed Rabata, PRISM Center, Purdue University
• Vingesh Ram, Purdue University
• Rodrigo Reyes Levalle, American Airlines
• Oliver Riedel, Fraunhofer Institute, Germany
• Jasmin Shinohara, University of Pennsylvania
• Keiichi Shirase, Kobe University, Japan
• Nir Shvalb, Ariel University, Israel
• Jose Reinaldo Silva, University of Sao Paolo, Brazil
• Maitreya Sreeram, PRISM Center, Purdue University
• Hak-Kyung Sung, Samsung Electronics, Korea
• Yang Tao, University of Maryland
• Itshak Tkach, University of London, UK
• Mario Ventresca, Purdue University
• François B. Vernadat, University of Lorraine-Metz, France
• Edward Watson, Louisiana State University
• James W. Wells, General Motors
• Wenzhuo Wu, Purdue University
• Sang Won Yoon, Binghamton University
• Zoltan Szabo, MTA SZTAKI, Hungary
I wish to express my gratitude and appreciation also to my resourceful co-authors, colleagues, and partners from IFAC, IFPR, IFIP, IISE, NSF, TRB, RIA/A3, INFORMS, ACM, IEEE-ICRA, ASME, and the PRISM Center at Purdue and PGRN, the PRISM Global Research Network, for all their support and cooperation leading to the successful creation of this Springer Handbook of Automation Second Edition.

Special thanks to my late parents, Dr. Jacob and Yafa Berglas Nowomiast, whose brilliance, deep appreciation of scholarship, and inspiration have kept enriching me; to my wife Nava for her invaluable support and wise advice; to Moriah, Jasmin, Jonathan, Moshe Chaim, Erez, Dalia, Hana, Daniel, Andrew, Chin-Yin, Jose, Moshe, Ed, Ruth, Pornthep, Juan Ernesto, Richard, Wootae, Agostino, Daniela, Arnie, Esther, Hao, Mirek, Pat, David, Yan, Oak, Win, Gad, Micha, Guillermo, Cristian, Carlos, Fay, Marco, Venkat, Masayuki, Hans, Manuel, Sigal, Laszlo, Georgi, Arturo, Yael, Mohsen, Dov, Florin, Herve, Gerard, Gavriel, Lily, Ted, Isaac, Dan, Veronica, Rolf, Steve, Justin, Mark, Arnie, Colin, Namkyu, Wil, Aditya, Ken, Hannah,
Anne, Fang, Jim, Hoosang, Praditya, Rashed, Karthik, Seokcheon, John, Shelly, Jennifer, Barrett, Hyesung, Kazuyoshi, Xin, Yehezkel, Bezalel, Al, Hank, Russ, Doug, Francois, Tom, Frederique, Alexandre, Bala, Coral, Tetsuo, Oren, Asher, and to the late Izzy Vardinon, Yukio, Kazuo, and Tibor, for sharing with me most generously their thoughts, smiles, ideas, inspiration, encouragement, and their automation expertise.

Deep thanks also to Springer’s Tom Ditzinger, who inspired me to create this Handbook; to Springer’s project manager, the talented and creative Judith Hinterberg; and to Juby George, Rajeswari Tamilselvan, and the entire Springer Handbook production team for their valuable help and vision in completing this ambitious endeavor.

The significant achievements of humans with automation – in sustaining and improving our life quality, innovating and solving serious problems, and enriching our knowledge; inspiring people to enjoy automation while being wary of its risks, and provoking us to learn how to invent even better and greater automation solutions; the wonders and magic, opportunities and challenges of emerging and future automation – are all enormous. Indeed, automation is an essential and wonderful augmenting power of our human civilization.

West Lafayette, Indiana
December 2022
Shimon Yeshayahu Nof Nowomiast
Contents
Part I Development and Impacts of Automation . . . 1

1 Automation: What It Means to Us Around the World, Definitions, Its Impact, and Outlook . . . 3
Shimon Y. Nof

2 Historical Perspective of Automation . . . 39
Christopher Bissell, Theodore J. Williams, and Yukio Hasegawa

3 Social, Organizational, and Individual Impacts of Automation . . . 61
Francoise Lamnabhi-Lagarrigue and Tariq Samad

4 Economic Effects of Automation . . . 77
Piercarlo Ravazzi and Agostino Villa

5 Trends in Automation . . . 103
Christopher Ganz and Alf J. Isaksson

Part II Automation Theory and Scientific Foundations . . . 119

6 Linear Control Theory for Automation . . . 121
István Vajk, Jenő Hetthéssy, and Ruth Bars

7 Nonlinear Control Theory for Automation . . . 163
Alberto Isidori

8 Control of Uncertain Systems . . . 189
Vaneet Aggarwal and Mridul Agarwal

9 Artificial Intelligence and Automation . . . 205
Sven Koenig, Shao-Hung Chan, Jiaoyang Li, and Yi Zheng

10 Cybernetics, Machine Learning, and Stochastic Learning Automata . . . 233
B. John Oommen, Anis Yazidi, and Sudip Misra

11 Network Science and Automation . . . 251
Lorenzo Zino, Baruch Barzel, and Alessandro Rizzo

12 What Can Be Automated? What Cannot Be Automated? . . . 275
Richard D. Patton and Peter C. Patton

Part III Automation Design: Theory, Elements, and Methods . . . 285

13 Designs and Specification of Mechatronic Systems . . . 287
Rolf Isermann

14 Sensors, Machine Vision, and Sensor Networks . . . 315
Wootae Jeong

15 Intelligent and Collaborative Robots . . . 335
Kenji Yamaguchi and Kiyonori Inaba

16 Control Architecture for Automation . . . 357
Oliver Riedel, Armin Lechler, and Alexander W. Verl

17 Cyber-Physical Automation . . . 379
Cesar Martinez Spessot

18 Collaborative Control and E-work Automation . . . 405
Mohsen Moghaddam and Shimon Y. Nof

19 Design for Human-Automation and Human-Autonomous Systems . . . 433
John D. Lee and Bobbie D. Seppelt

20 Teleoperation and Level of Automation . . . 457
Luis Basañez, Emmanuel Nuño, and Carlos I. Aldana

21 Nature-Inspired and Evolutionary Techniques for Automation . . . 483
Mitsuo Gen and Lin Lin

22 Automating Prognostics and Prevention of Errors, Conflicts, and Disruptions . . . 509
Xin W. Chen and Shimon Y. Nof

Part IV Automation Design: Theory and Methods for Integration . . . 533

23 Communication Protocols for Automation . . . 535
Carlos E. Pereira, Christian Diedrich, and Peter Neumann

24 Product Automation and Innovation . . . 561
Kenichi Funaki

25 Process Automation . . . 585
Juergen Hahn and B. Wayne Bequette

26 Service Automation . . . 601
Christopher Ganz and Shaun West

27 Infrastructure and Complex Systems Automation . . . 617
Florin Gheorghe Filip and Kauko Leiviskä

28 Computer-Aided Design, Computer-Aided Engineering, and Visualization . . . 641
Jorge D. Camba, Nathan Hartman, and Gary R. Bertoline

29 Safety Warnings for Automation . . . 661
Mark R. Lehto and Gaurav Nanda

Part V Automation Management . . . 681

30 Economic Rationalization of Automation Projects and Quality of Service . . . 683
José A. Ceroni

31 Reliability, Maintainability, Safety, and Sustainability . . . 699
Elsayed A. Elsayed

32 Education and Qualification for Control and Automation . . . 717
Juan Diego Velasquez de Bedout, Bozenna Pasik-Duncan, and Matthew A. Verleger

33 Interoperability and Standards for Automation . . . 729
François B. Vernadat

34 Automation and Ethics . . . 753
Micha Hofri

Part VI Industrial Automation . . . 773

35 Machine Tool Automation . . . 775
Keiichi Shirase and Susumu Fujii

36 Digital Manufacturing Systems . . . 805
Chi Fai Cheung, Ka Man Lee, Pai Zheng, and Wing Bun Lee

37 Flexible and Precision Assembly . . . 829
Brian Carlisle

38 Semiconductor Manufacturing Automation . . . 841
Tae-Eog Lee, Hyun-Jung Kim, and Tae-Sun Yu

39 Nanomanufacturing Automation
Ning Xi, King Wai Chiu Lai, Heping Chen, and Zhiyong Sun

40 Production, Supply, Logistics, and Distribution
Manuel Scavarda Basaldúa and Rodrigo J. Cruz Di Palma
40 Production, Supply, Logistics, and Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . Manuel Scavarda Basaldúa and Rodrigo J. Cruz Di Palma
893
41 Automation and Robotics in Mining and Mineral Processing . . . . . . . . . . . . . . Sirkka-Liisa Jämsä-Jounela and Greg Baiden
909
42 Automation in the Wood, Paper, and Fiber Industry . . . . . . . . . . . . . . . . . . . . . Birgit Vogel-Heuser
923
43 Welding Automation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Anatol Pashkevich
935
44 Automation in Food Manufacturing and Processing . . . . . . . . . . . . . . . . . . . . . . Darwin G. Caldwell
949
45 Smart Manufacturing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Andrew Kusiak
973
Part VII Infrastructure and Service Automation . . . . . . . . . . . . . . . . . . . . . . . . . . . .
987
46 Automation in Data Science, Software, and Information Services . . . . . . . . . . Parasuram Balasubramanian
989
47 Power Grid and Electrical Power System Security . . . . . . . . . . . . . . . . . . . . . . . Veronica R. Bosquezfoti and Andrew L. Liu
1015
48 Construction Automation and Smart Buildings . . . . . . . . . . . . . . . . . . . . . . . . . . Daniel Castro-Lacouture
1035
49 Agriculture Automation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Yael Edan, George Adamides, and Roberto Oberti
1055
50 Connected Vehicles and Driving Automation Systems . . . . . . . . . . . . . . . . . . . . Yuko J. Nakanishi and Pierre M. Auza
1079
51 Aerospace Systems Automation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Steven J. Landry and William Bihlman
1115
xxx
Contents
52
Space Exploration and Astronomy Automation . . . . . . . . . . . . . . . . . . . . . . . . . . Barrett S. Caldwell
1139
53
Cleaning Automation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Norbert Elkmann and José Saenz
1159
54
Library Automation and Knowledge Sharing . . . . . . . . . . . . . . . . . . . . . . . . . . . . Paul J. Bracke, Beth McNeil, and Michael Kaplan
1171
Part VIII Automation in Medical and Healthcare Systems . . . . . . . . . . . . . . . . . . . .
1187
55
Automatic Control in Systems Biology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Narasimhan Balakrishnan and Neda Bagheri
1189
56
Automation in Hospitals and Health Care . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Atsushi Ugajin
1209
57
Medical Automation and Robotics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Alon Wolf, Nir Shvalb, and Moshe Shoham
1235
58
Precision Medicine and Telemedicine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Kuo-Liang Chiang and Chin-Yin Huang
1249
59
Wearables, E-textiles, and Soft Robotics for Personalized Medicine . . . . . . . . Ramses V. Martinez
1265
60
Healthcare and Pharmaceutical Supply Chain Automation . . . . . . . . . . . . . . . Sara Abedi, Soongeol Kwon, and Sang Won Yoon
1289
Part IX Home, Office, and Enterprise Automation . . . . . . . . . . . . . . . . . . . . . . . . . . .
1309
61
Automation in Home Appliances . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Hoo Sang Ko and Sanaz Eshraghi
1311
62
Service Robots and Automation for the Disabled and Nursing Home Care . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Birgit Graf and Jonathan Eckstein
1331
63
Automation in Education, Training, and Learning Systems . . . . . . . . . . . . . . . Kin’ya Tamaki and Kazuyoshi Ishii
1349
64
Blockchain and Financial E-services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Hong Wan, Kejun Li, Yining Huang, and Ling Zhang
1371
65
Enterprise and Business Process Automation . . . . . . . . . . . . . . . . . . . . . . . . . . . . Edward F. Watson III and Andrew H. Schwarz
1385
66
Decision Support and Analytics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Daniel J. Power and Ramesh Sharda
1401
67
E-commerce . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Francisco Munoz, Clyde W. Holsapple, and Sharath Sasidharan
1411
Part X Automation Case Studies and Statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
1431
Case Study: Automation Education and Qualification Apprenticeships . . . . . Geanie Umberger and Dave Harrison
1433
68
Contents
xxxi
69 Case Study: IBM – Automating Visual Inspection . . . . . . . . . . . . . . . . . . . . . . . Rishi Vaish and Michael C. Hollinger 70 Case Study: Infosys – Talent Management Processes Automation with AI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Parasuram Balasubramanian and D. R. Balakrishna 71 Case Study: Intel and Claro 360 – Making Spaces Safe During Pandemic . . Cesar Martinez Spessot and Luis Fuentes 72 Case Study: Siemens – Flexible Robot Grasping with Deep Neural Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Juan Aparicio Ojea and Eugen Solowjow
1439
1451 1459
1467
73 Case Study: 3M – Automation in Paint Repair . . . . . . . . . . . . . . . . . . . . . . . . . . Carl Doeksen and Scott Barnett
1475
74 Automation Statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Puwadol Oak Dusadeerungsikul and Win P. V. Nguyen
1481
Index
About the Editor
Shimon Y. Nof is Professor of Industrial Engineering at Purdue University and Director of the NSF and industry funded PRISM Center for Production, Robotics, and Integration Software for Manufacturing and Management. He earned his BSc and MSc in Industrial Engineering and Management from Technion, Israel Institute of Technology; his Ph.D. in Industrial and Operations Engineering from the University of Michigan. His professional merits include experience in full-time industrial and governmental positions and visiting professorships at MIT and at universities around the world. He was Secretary General and President of the International Foundation of Production Research and Chair within the International Federation of Automatic Control. He pioneered knowledge-based/AI robotic facility design and control models, robot ergonomics, and collaborative control theory with task-administration algorithms and protocols. As an experienced author and editor, his impressive list of publications include the Handbook of Industrial Robotics and the International Encyclopedia of Robotics and Automation. He is also the editor of the Springer Book Series on Automation, Collaboration, and E-Services. Shimon Y. Nof is a Fellow of the Institute of Industrial and Systems Engineering, and the International Foundation of Production Research, recipient of the Joseph Engelberger Award and Medal for Robotics Education, and Member of the Inaugural Book of Great Teachers of Purdue University.
Advisory Board
Ruth Bars Department of Automation and Applied Informatics Budapest University of Technology and Economics Budapest, Hungary Ruth Bars graduated at the Electrical Engineering Faculty of the Technical University of Budapest. She received her candidate of sciences degree in 1992 from the Hungarian Academy of Sciences and her PhD degree based on research in predictive control. She is honorary professor at the Department of Automation and Applied Informatics, Budapest University of Technology and Economics. Her research interests are predictive control, advanced control algorithms, and developing new ways of control education. She is the coauthor of the books Predictive Control in Process Engineering, from the Basics to the Applications (2011) and Control Engineering and Control Engineering: Matlab Exercises (2019). Jeff Burnstein Association for Advancing Automation Ann Arbor, USA Jeff Burnstein is the President of the Association for Advancing Automation (A3), the leading North American trade group representing over 1000 global companies involved in robotics, artificial intelligence, vision, motion control, and related automation technologies. Burnstein joined the association in 1983 and has held a variety of senior positions, culminating in his promotion to President in 2007. He is a frequent commentator in the media, often discusses automation issues with policy makers, and regularly speaks at global conferences on issues such as the impact of automation on jobs and the future of automation beyond the factory floor. Burnstein also serves on the Executive Board of the International Federation of Robotics (IFR).
Xin W. Chen Department of Industrial Engineering Southern Illinois University Edwardsville Edwardsville, USA Xin W. Chen is Professor in the School of Engineering at Southern Illinois University Edwardsville. He received his MS and PhD degrees in Industrial Engineering from Purdue University and BS degree in Mechanical Engineering from Shanghai Jiao Tong University. His research interests cover several related topics in the area of conflict and error prognostics and prevention, production/service optimization, and decision analysis. He is the author of the 2022 book titled Network Science Models for Data Analytics Automation – Theories and Applications in Springer ACES Automation, Collaboration, and E-Services Book Series. Juan M. de Bedout Aerospace Technologies, Raytheon Technologies Waltham, USA Juan M. de Bedout is the Vice President of Aerospace Technology at Raytheon Technologies. In this role, he works closely with the company’s businesses to shape the technology strategy for a broad portfolio of aerospace offerings. Prior to this role, Juan was the Vice President of Advanced Technology at Collins Aerospace, where he was responsible for helping to shape advanced technology planning and investment, drive the vitality of the central engineering teams, streamline engineering supplier planning, and promote continuous improvement throughout Collins’ businesses. Before joining Collins, Juan was the Chief Technology Officer for General Electric’s Grid Solutions business, after serving in the Company for 18 years in roles of increasing responsibility from his start at the Global Research Center in 2000. Juan has Bachelors, Masters, and Ph.D. degrees in Mechanical Engineering from Purdue University, where his graduate work was focused on automation and controls.
Alexandre Dolgui Department Automation, Production and Computer Sciences IMT Atlantique, LS2N-CNRS Nantes, France Dr. Alexandre Dolgui is an IISE Fellow, Distinguished Professor, and the Head of Automation, Production and Computer Sciences Department at the IMT Atlantique, campus in Nantes, France. His research focuses on manufacturing line design, production planning, and supply chain optimization. His main results are based on the exact mathematical programming methods and their intelligent coupling with heuristics and metaheuristics algorithms. He is the author of 5 books and more than 260 refereed journal papers. He is the Editor-in-Chief of the International Journal of Production Research, an Area Editor of Computers & Industrial Engineering and Member of the editorial boards for several other journals, including the International Journal of Production Economics. He is a Fellow of the European Academy for Industrial Management, Member of the Board of the International Foundation for Production Research, former Chair (currently vice-chair) of IFAC TC 5.2 Manufacturing Modelling for Management and Control, Member of IFIP WG 5.7 Advances in Production Management Systems, etc. He has been Scientific Chair for a large number of leading international conferences. Yael Edan Agriculture, Biological – Cognitive Robotics Initiative Ben-Gurion University of the Negev Beer-Sheva, Israel Yael Edan is a Professor in Industrial Engineering and Director of the Agricultural, Biological, Cognitive Robotics Initiative. She holds a B.Sc. in Computer Engineering and an M.Sc. in Agricultural Engineering, both from the Technion-Israel Institute of Technology, and Ph.D. in Engineering from Purdue University. She performs R&D of robotic and sensory systems focusing on human-robot collaboration with major contributions in introduction and application of intelligent automation and robotic systems in agriculture. Florin Gheorghe Filip The Romanian Academy Bucharest, Romania Florin Gheorghe Filip received his MSc and PhD degrees in Automation from the “Politehnica” University of Bucharest in 1970 and 1982, respectively. In 1991, he was elected as a member of the Romanian Academy, where he served as Vice President from 2000 to 2010. He was the chair of IFAC Technical Committee “Large-scale complex systems” (2003–2009). His scientific interests include complex systems and decision support systems. He has authored/coauthored some 350 papers and 13 monographs.
Kenichi Funaki Hitachi Ltd. Tokyo, Japan Kenichi Funaki received his doctoral degree in Industrial Engineering and Management from Waseda University, Tokyo, in 2001. Funaki began his career at Production Engineering Research Laboratory, Hitachi Ltd. in 1993 and has over 20 years of experience in developing production, logistics, and supply chain management systems. He conducted joint research projects with world class institutes, including Gintic Institute of Manufacturing Technology (currently SIMTech) in Singapore, the Institute for Computer Science and Control of Hungarian Academy of Sciences (SZTAKI), and the Supply Chain and Logistics Institute at Georgia Institute of Technology, where he stayed as a Research Executive in 2008 and 2009. In the last decade, he has been engaged in open innovation strategy and practice with various types of collaboration such as joint research with universities and institutes, co-creation program with world-class leading companies, and joint business development with startups with innovative ideas. Steven W. Holland General Motors R&D (retired), VHM Innovations, LLC St. Clair, USA Steve Holland retired from General Motors R&D as a Research Fellow following 49 years of service. He pioneered applications of robotics and vision- and computer-based manufacturing. He served in a variety of GM leadership roles: Robotics Development; Controls, Robotics and Welding plant support; and as Chief Scientist for Manufacturing. He has been a leading proponent of prognostics and vehicle health management for the global automotive industry which continues via his consulting company: VHM Innovations, LLC. Steve is a recipient of the Joseph F. Engelberger Robotics Award and is a Fellow of IEEE, PHM Society, and SAE. He holds technical degrees from Kettering and Stanford Universities.
Chin-Yin Huang Department of Industrial Engineering & Enterprise Information Tunghai University Taichung, Taiwan Chin-Yin Huang is Professor of Industrial Engineering and Enterprise Information at Tunghai University, Taiwan. He has a Ph.D. degree from Purdue University, USA. His research interests include Healthcare Management, Data Analytics, Ontology, and Industry 4.0. Prof. Huang is currently Vice President of International Foundation for Production Research. He also serves as the Chairman in the Asia Pacific Region. He is Board Member of Asia Pacific Industrial Engineering and Management Society. In addition, he is the Director of International Industrial Affairs of IIE in Taiwan. He is former Dean of General Affairs, Tunghai University. Prof. Huang has been serving as a professional consultant and advisor in government, universities, hospitals, and the manufacturing sector. He has published many papers in international journals with topics ranging from production/supply chain integration to data analytics, manufacturing/service automation, and AI applications. Kazuyoshi Ishii Academic Foundation Programs Kanazawa Institute of Technology Kanazawa, Japan Kazuyoshi Ishii received his PhD in Industrial Engineering from Waseda University. Dr. Ishii is a professor at Kanazawa Institute of Technology, Japan, since 1988. He is a Fellow of IFPR, APIEMS, and ISPIM. His research field covers production management, quality management, product/service development management, human resource, and education/learning management. Alberto Isidori Department of Computer, Control, Management Engineering University of Rome “La Sapienza” Rome, Italy Alberto Isidori was a Professor of Automatic Control at the University of Rome Sapienza from 1975 to 2012, where is now Professor Emeritus. His research interests are primarily in analysis and design of nonlinear control systems. He is the author of several books and of more than 120 articles on archival journals and the recipient of various prestigious awards, which include the Quazza Medal of IFAC (in 1996), the Bode Lecture Award from the Control Systems Society of IEEE (in 2001), the Honorary Doctorate from KTH of Sweden (in 2009), the Galileo Galilei Award from the Rotary Clubs of Italy (in 2009), and the Control Systems Award of IEEE in 2012. He is a Fellow of IEEE and of IFAC and the President of IFAC in the triennium 2008–2011. Since 2012, he has been a corresponding member of the Accademia Nazionale dei Lincei.
Steven J. Landry Pennsylvania State University State College, USA Steven J. Landry is Professor and Peter and Angela Dal Pezzo Chair and Department Head in the Harold & Inge Marcus Department of Industrial and Manufacturing Engineering at The Pennsylvania State University. He received his MS in Aeronautics and Astronautics from MIT and his Ph.D. in Industrial and Systems Engineering from Georgia Tech. He has over 2500 heavy jet flight hours as a USAF C-141B pilot. His research interests are in aviation systems engineering and human factors. Oliver Riedel Institute for Control Engineering of Machine Tools and Manufacturing Unit University of Stuttgart Stuttgart, Germany Prof. Oliver Riedel studied Technical Cybernetics at the University of Stuttgart and earned his doctorate at the Faculty of Engineering Design and Production Engineering. For over 25 years, he has been involved with the theory and practical application of virtual validation methods in product development and production. After working for 18 years in the international IT and automotive industry, he was appointed professor at the University of Stuttgart and is managing director of the Institute for Control Engineering of Machine Tools and Manufacturing Units (ISW), holds the chair for Information Technology in Production, and is managing director of the Fraunhofer Institute for Industrial Engineering (IAO). Oliver Riedel specializes in information technology for production and product development, IT-engineering methods, and data analytics. José Reinaldo Silva Mechatronics Department – Escola Polité-cnica Universidade de São Paulo São Paulo, Brasil José Reinaldo Silva is a senior associate professor at Escola Politécnica, Universidade de São Paulo, São Paulo, Brazil. His research deals with Engineering Design methods to automated systems, using formal methods and artificial intelligence. In this area, he has more than 130 articles and book chapters. He is a member of the Design Society, ACM, IFAC special groups, and other international forums of discussion on the design of automation systems.
Hak-Kyung Sung Mechatronics & Manufacturing Technology Center Samsung Electronics Suwon, Korea Hak-Kyung Sung received his Master’s degree in Mechanical Engineering from Yonsei University in Korea and his PhD degree in Control Engineering from Tokyo Institute of Technology, Japan, in 1985 and 1992, respectively. He is currently the Vice President in the Mechatronics and Manufacturing Technology Center, Samsung Electronics. His interests are in production engineering technology, such as robotics, control, and automation. François B. Vernadat Laboratoire de Génie Informatique, de Production et de Maintenance (LGIPM) University of Lorraine Metz, France François B. Vernadat received his PhD in Electrical Engineering and Automatic Control from University of Clermont, France, in 1981. He has been a research officer at the National Research Council of Canada in the 1980s and at the Institut National de Recherche en Informatique et Automatique in France in the 1990s. He joined the University of Lorraine at Metz, France, in 1995 as a full professor and founded the LGIPM research laboratory. His research interests include enterprise modeling, enterprise architectures, enterprise integration and interoperability, information systems design and analysis, and performance management. He has contributed to 7 books and published over 300 papers. He has been a member of IEEE and ACM, has held several vice-chairman positions at IFAC, and has been associate or area editor for many years of Computers in Industry, IJCIM, IJPR, Enterprise Information Systems, and Computers & Industrial Engineering. In parallel from 2001 until 2016, he held IT management positions as head of departments in IT Directorates of European institutions in Luxemburg (first European Commission and then European Court of Auditors).
Birgit Vogel-Heuser TUM School of Engineering and Design Technical University of Munich Garching, Germany Birgit Vogel-Heuser graduated in Electrical Engineering and obtained her Ph.D. in Mechanical Engineering from the RWTH Aachen in 1991. She worked for nearly 10 years in industrial automation for the machine and plant manufacturing industry. She is now a full professor and in charge of Automation and Information Systems in the Department of Mechanical Engineering TUMSchool of Engineering and Design at the Technical University of Munich. Her research work focuses on improving efficiency in automation engineering for hybrid technical processes and intelligent distributed field-level control systems. She is a member of the German Academy of Science and Engineering (acatech), Senior Editor of IEEE T-ASE, and IEEE Distinguished Lecturer.
Advisory Board Members of the Previous Edition
• Hans-Jörg Bullinger, Fraunhofer-Gesellschaft, Munich, Germany
• Rick J. Echevarria, Intel Corp., Santa Clara, USA
• Yael Edan, Ben-Gurion University of the Negev, Beer Sheva, Israel
• Yukio Hasegawa (†), Waseda University, Tokyo, Japan
• Steven W. Holland, General Motors R & D, Warren, USA
• Clyde W. Holsapple, University of Kentucky, Lexington, USA
• Rolf Isermann, Technical University Darmstadt, Darmstadt, Germany
• Kazuyoshi Ishii, Kanazawa Institute of Technology, Hakusan City, Japan
• Alberto Isidori, University of Rome “La Sapienza”, Rome, Italy
• Stephen Kahne, Embry-Riddle University, Prescott, USA
• Aditya P. Mathur, Purdue University, West Lafayette, USA
• Gavriel Salvendy, Tsinghua University, Beijing, China
• George Stephanopoulos, Massachusetts Institute of Technology, Cambridge, USA
• Hak-Kyung Sung, Samsung Electronics, Suwon, Korea
• Kazuo Tanie (†), Tokyo Metropolitan University, Tokyo, Japan
• Tibor Vámos (†), Hungarian Academy of Sciences, Budapest, Hungary
• François B. Vernadat, Université Paul Verlaine Metz, Metz, France
• Birgit Vogel-Heuser, University of Kassel, Kassel, Germany
• Andrew B. Whinston, The University of Texas at Austin, Austin, USA
Contributors
Sara Abedi Department of Systems Science and Industrial Engineering, Binghamton University; SUNY, Binghamton, NY, USA George Adamides Department of Rural Development, Agricultural Research Institute, Aglantzia, Cyprus Mridul Agarwal Purdue University, West Lafayette, IN, USA Vaneet Aggarwal Purdue University, West Lafayette, IN, USA Carlos I. Aldana Department of Computer Science, University of Guadalajara, Guadalajara, Mexico Juan Aparicio Ojea Rapid Robotics, San Francisco, CA, USA Pierre M. Auza Nakanishi Research and Consulting LLC, New York, NY, USA Greg Baiden School of Engineering, Laurentian University, Sudbury, ON, Canada D. R. Balakrishna Infosys Ltd., Bengaluru, India Neda Bagheri Department of Chemical and Biological Engineering, Northwestern University, Evanston, IL, USA Departments of Biology and Chemical Engineering, University of Washington, Seattle, WA, USA Narasimhan Balakrishnan Department of Chemical and Biological Engineering, Northwestern University, Evanston, IL, USA Parasuram Balasubramanian Theme Work Analytics Pvt. Ltd., Bangalore, India Scott Barnett Application Engineering, Robotics and Automation, 3M Abrasive Systems Division 3M Center, St. Paul, MN, USA Ruth Bars Department of Automation and Applied Informatics, Faculty of Electrical Engineering and Informatics, Budapest University of Technology and Economics, Budapest, Hungary Baruch Barzel Department of Mathematics, Bar-Ilan University, Ramat-Gan, Israel The Gonda Interdisciplinary Brain Science Center, Bar-Ilan University, Ramat-Gan, Israel Manuel Scavarda Basaldúa Kimberly-Clark Corporation, Neenah, WI, USA Luis Basañez Institute of Industrial and Control Engineering (IOC), Universitat Politècnica de Catalunya, BarcelonaTech (UPC), Barcelona, Spain B. Wayne Bequette Department of Chemical & Biological Engineering, Rensselaer Polytechnic Institute, Troy, NY, USA
Gary R. Bertoline School of Engineering Technology, Purdue University, West Lafayette, IN, USA William Bihlman Aerolytics, LLC, Lafayette, IN, USA Christopher Bissell Department of Communication and Systems, The Open University, Milton Keynes, UK Veronica R. Bosquezfoti School of Industrial Engineering, Purdue University, West Lafayette, IN, USA
Paul J. Bracke Gonzaga University, Spokane, WA, USA Barrett S. Caldwell Schools of Industrial Engineering and Aeronautics & Astronautics, Purdue University, West Lafayette, IN, USA Darwin G. Caldwell Department of Advanced Robotics, Istituto Italiano Di Tecnologia, Genoa, Italy Jorge D. Camba School of Engineering Technology, Purdue University, West Lafayette, IN, USA Brian Carlisle Brooks Automation, Livermore, CA, USA Daniel Castro-Lacouture Purdue Polytechnic Institute, Purdue University, West Lafayette, IN, USA José A. Ceroni School of Industrial Engineering, Pontifica Universidad Católica de Valparaíso, Valparaiso, Chile Shao-Hung Chan Computer Science Department, University of Southern California, Los Angeles, CA, USA Heping Chen The Ingram School of Engineering, Texas State University, San Marcos, TX, USA Xin W. Chen Department of Industrial Engineering, Southern Illinois University, Edwardsville, IL, USA Chi Fai Cheung Behaviour and Knowledge Engineering Research Centre, Department of Industrial and Systems Engineering, The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong Kuo-Liang Chiang Department of Pediatric Neurology, Kuang-Tien General Hospital, Taichung, Taiwan Rodrigo J. Cruz Di Palma Dollarcity, Antiguo Cuscatlan, El Salvador Juan Diego Velasquez de Bedout Purdue University, West Lafayette, IN, USA Christian Diedrich Institute for Automation Engineering, Otto von Guericke University Magdeburg, Magdeburg, Germany Carl Doeksen Robotics & Automation, 3M, St. Paul, MN, USA Puwadol Oak Dusadeerungsikul Department of Industrial Engineering, Chulalongkorn University, Bangkok, Thailand Jonathan Eckstein Formerly University of Stuttgart, Department Human-Machine Interaction, Stuttgart, Germany Yael Edan Department of Industrial Engineering and Management, Agriculture, Biological, Cognitive Robotics Initiative, Ben-Gurion University of the Negev, Beer-Sheva, Israel Norbert Elkmann Fraunhofer IFF, Magdeburg, Germany
Elsayed A. Elsayed Department of Industrial and Systems Engineering, Rutgers University, Piscataway, NJ, USA Sanaz Eshraghi Department of Industrial Engineering, Southern Illinois University Edwardsville, Edwardsville, IL, USA Florin Gheorghe Filip The Romanian Academy, Bucharest, Romania Luis Fuentes Claro 360, Mexico City, Mexico Susumu Fujii Kobe University, Kobe, Japan Kenichi Funaki Corporate Venturing Office, Hitachi Ltd., Tokyo, Japan Christopher Ganz C. Ganz Innovation Services, Zurich, Switzerland Mitsuo Gen Department of Research, Fuzzy Logic Systems Institute (FLSI), Iizuka, Japan Research Institute of Science and Technology, Tokyo University of Science, Tokyo, Japan Birgit Graf Fraunhofer Institute for Manufacturing Engineering and Automation IPA, Department Robot and Assistive Systems, Stuttgart, Germany Juergen Hahn Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY, USA Department of Chemical & Biological Engineering, Rensselaer Polytechnic Institute, Troy, NY, USA Dave Harrison FASTPORT, Inc., West Fork, AR, USA Nathan Hartman School of Engineering Technology, Purdue University, West Lafayette, IN, USA Yukio Hasegawa System Science Institute, Waseda University, Tokyo, Japan Jen˝o Hetthéssy Department of Automation and Applied Informatics, Faculty of Electrical Engineering and Informatics, Budapest University of Technology and Economics, Budapest, Hungary Micha Hofri Department of Computer Science, Worcester Polytechnic Institute, Worcester, MA, USA Michael C. Hollinger Sustainability Software, IBM Corporation, Austin, TX, USA Clyde W. Holsapple School of Management, Gatton College of Business and Economics, University of Kentucky, Lexington, KY, USA Chin-Yin Huang Department of Industrial Engineering and Enterprise Information, Tunghai University, Taichung, Taiwan Yining Huang Operations Research Graduate Program, North Carolina State University, Raleigh, NC, USA Kiyonori Inaba FANUC CORPORATION, Yamanashi, Japan Alf J. Isaksson ABB AB, Corporate Research, Västerås, Sweden Rolf Isermann Institute of Automatic Control, Research group on Control Systems and Process Automation, Darmstadt University of Technology, Darmstadt, Germany Kazuyoshi Ishii Kanazawa Institute of Technology, Academic Foundation Programs, Kanazawa, Japan Alberto Isidori Department of Computer, Control and Management Engineering, University of Rome “La Sapienza”, Rome, Italy
Sirkka-Liisa Jämsä-Jounela Department of Biotechnology and Chemical Technology, Aalto University, Espoo, Finland Wootae Jeong Korea Railroad Research Institute, Uiwang, South Korea Michael Kaplan Ex Libris Ltd., Newton, MA, USA Hyun-Jung Kim Department of Industrial and Systems Engineering, KAIST, Daejeon, South Korea Hoo Sang Ko Department of Industrial Engineering, Southern Illinois University Edwardsville, Edwardsville, IL, USA Sven Koenig Computer Science Department, University of Southern California, Los Angeles, CA, USA Andrew Kusiak Department of Industrial and Systems Engineering, Seamans Center, The University of Iowa, Iowa City, IA, USA Soongeol Kwon Department of Systems Science and Industrial Engineering, Binghamton University; SUNY, Binghamton, NY, USA King Wai Chiu Lai Department of Biomedical Engineering, City University of Hong Kong, Hong Kong SAR, China Francoise Lamnabhi-Lagarrigue Laboratoire des Signaux et Systèmes CNRS, CentraleSupelec, Paris-Saclay Univ., Paris, France Steven J. Landry Pennsylvania State University, State College, PA, USA Armin Lechler Institute for Control Engineering of Machine Tools and Manufacturing Unit, University of Stuttgart, Stuttgart, Germany John D. Lee Department of Industrial and Systems Engineering, University of WisconsinMadison, Madison, WA, USA Ka Man Lee Behaviour and Knowledge Engineering Research Centre, Department of Industrial and Systems Engineering, The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong Tae-Eog Lee Department of Industrial and Systems Engineering, KAIST, Daejeon, South Korea Wing Bun Lee Behaviour and Knowledge Engineering Research Centre, Department of Industrial and Systems Engineering, The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong Mark R. Lehto School of Industrial Engineering, Purdue University, West Lafayette, IN, USA Kauko Leiviskä Control Engineering Laboratory, University of Oulu, Oulu, Finland Jiaoyang Li Computer Science Department, University of Southern California, Los Angeles, CA, USA Kejun Li Department of Industrial and Systems Engineering, North Carolina State University, Raleigh, NC, USA Lin Lin International School of Information Science and Engineering, Dalian University of Technology, Economy and Technology Development Area, Dalian, China Andrew L. Liu School of Industrial Engineering, Purdue University, West Lafayette, IN, USA
Ramses V. Martinez Purdue University, School of Industrial Engineering and Weldon School of Biomedical Engineering, West Lafayette, IN, USA Cesar Martinez Spessot Intel Corporation, Hillsboro, OR, USA Beth McNeil Purdue University, West Lafayette, IN, USA Sudip Misra Department of Computer Science and Engineering, Indian Institute of Technology Kharagpur, Kharagpur, India Mohsen Moghaddam Department of Mechanical and Industrial Engineering, Northeastern University, Boston, MA, USA Francisco Munoz School of Industrial Engineering, Purdue University, West Lafayette, IN, USA Departamento de Ingeniería Civil e Industrial, Pontificia Universidad Javeriana Cali, Cali, Colombia Yuko J. Nakanishi Nakanishi Research and Consulting LLC, New York, NY, USA Gaurav Nanda School of Engineering Technology, Purdue University, West Lafayette, IN, USA Peter Neumann Institut für Automation und Kommunikation – ifak, Magdeburg, Germany Win P. V. Nguyen Grado Department of Industrial and Systems Engineering, Virginia Tech, Blacksburg, VA, USA Shimon Y. Nof PRISM Center and School of Industrial Engineering, Purdue University, West Lafayette, IN, USA Emmanuel Nuño Department of Computer Science, University of Guadalajara, Guadalajara, Mexico Roberto Oberti Department of Agricultural and Environmental Science, University of Milan, Milan, Italy B. John Oommen School of Computer Science, Carleton University, Ottawa, ON, Canada Department of Information and Communication Technology, University of Agder, Grimstad, Norway Anatol Pashkevich Department of Automation, Production and Computer Science, IMT Atlantique, Nantes, France Bozenna Pasik-Duncan Department of Mathematics, University of Kansas, Lawrence, KS, USA Peter C. Patton School of Engineering, Oklahoma Christian University, Oklahoma City, OK, USA Richard D. Patton Lawson Software, St. Paul, MN, USA Carlos E. Pereira Automation Engineering Department, Federal University of Rio Grande do Sul (UFRGS), Porto Alegre, Brazil Daniel J. Power College of Business Administration, University of Northern Iowa, Cedar Falls, IA, USA Piercarlo Ravazzi Department of Manufacturing Systems and Economics, Politecnico di Torino, Torino, Italy
Oliver Riedel Institute for Control Engineering of Machine Tools and Manufacturing Unit, University of Stuttgart, Stuttgart, Germany Alessandro Rizzo Department of Electronics and Telecommunications, Politecnico di Torino, Torino, Italy Institute for Invention, Innovation, and Entrepreneurship, New York University Tandon School of Engineering, Brooklyn, NY, USA José Saenz Fraunhofer IFF, Magdeburg, Germany Tariq Samad Technological Leadership Institute, University of Minnesota, Minneapolis, MN, USA Sharath Sasidharan Department of Management and Marketing, Marshall University, Huntington, WV, USA Andrew H. Schwarz Louisiana State University, Baton Rouge, LA, USA Bobbie D. Seppelt Autonomous Vehicles, Ford Motor Company, Dearborn, MI, USA Ramesh Sharda Spears School of Business, Oklahoma State University, Stillwater, OK, USA Keiichi Shirase Department of Mechanical Engineering, Kobe University, Kobe, Japan Moshe Shoham Department of Mechanical Engineering, Technion Israel Institute of Technology, Haifa, Israel Nir Shvalb Faculty of Engineering, Ariel University, Ariel, Israel Eugen Solowjow Siemens, Berkeley, CA, USA Zhiyong Sun Department of Industrial Manufacturing Systems Engineering, The University of Hong Kong, Hong Kong SAR, China Institute of Intelligent Machines, Hefei Institute of Physical Science, CAS, Hefei, China Kin’ya Tamaki School of Business, Aoyama Gakuin University, Tokyo, Japan Atsushi Ugajin Hitachi, Ltd. Healthcare Innovation Division, Toranomon Hills Business Tower, Tokyo, Japan Geanie Umberger The Pennsylvania State University, University Park, PA, USA Rishi Vaish Sustainability Software, IBM Corporation, San Francisco, CA, USA István Vajk Department of Automation and Applied Informatics, Faculty of Electrical Engineering and Informatics, Budapest University of Technology and Economics, Budapest, Hungary Alexander W. Verl Institute for Control Engineering of Machine Tools and Manufacturing Unit, University of Stuttgart, Stuttgart, Germany Matthew A. Verleger Embry-Riddle Aeronautical University, Engineering Fundamentals Department, Daytona Beach, FL, USA François B. Vernadat Laboratoire de Génie Informatique, de Production et de Maintenance (LGIPM), University of Lorraine, Metz, France Agostino Villa Department of Manufacturing Systems and Economics, Politecnico di Torino, Torino, Italy Birgit Vogel-Heuser Faculty of Mechanical Engineering, Institute of Automation and Information Systems, Technical University of Munich, Munich, Germany
Hong Wan Department of Industrial and Systems Engineering, North Carolina State University, Raleigh, NC, USA Edward F. Watson III Louisiana State University, Baton Rouge, LA, USA Shaun West Institute of Innovation and Technology Management, Lucerne University of Applied Science and Art, Horw, Switzerland Theodore J. Williams College of Engineering, Purdue University, West Lafayette, IN, USA Alon Wolf Faculty of Mechanical Engineering, Technion Israel Institute of Technology, Haifa, Israel Ning Xi Department of Industrial Manufacturing Systems Engineering, The University of Hong Kong, Hong Kong SAR, China Kenji Yamaguchi FANUC CORPORATION, Yamanashi, Japan Anis Yazidi Computer Science Department, Oslo Metropolitan University, Oslo, Norway Sang Won Yoon Department of Systems Science and Industrial Engineering, Binghamton University; SUNY, Binghamton, NY, USA Tae-Sun Yu Division of Systems Management and Engineering, Pukyong National University, Busan, South Korea Ling Zhang Department of Industrial and Systems Engineering, North Carolina State University, Raleigh, NC, USA Pai Zheng Behaviour and Knowledge Engineering Research Centre, Department of Industrial and Systems Engineering, The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong Yi Zheng Computer Science Department, University of Southern California, Los Angeles, CA, USA Lorenzo Zino Department of Electronics and Telecommunications, Politecnico di Torino, Torino, Italy Engineering and Technology Institute Groningen, University of Groningen, Groningen, The etherlands
Part I Development and Impacts of Automation
1 Automation: What It Means to Us Around the World, Definitions, Its Impact, and Outlook
Shimon Y. Nof
Contents
1.1 The Meaning of Automation
1.1.1 Definitions, Formalism, and Automation Examples
1.1.2 Domains of Automation
1.2 Brief History of Automation
1.2.1 First Generation: Before Automatic Control (BAC)
1.2.2 Second Generation: Automatic Control Before Computer Control (ABC)
1.2.3 Third Generation: Automatic Computer/Cyber Control (ACC)
1.2.4 Perspectives of Automation Generations
1.3 Automation Cases
1.4 Flexibility, Degrees, and Levels of Automation
1.4.1 Degree of Automation
1.4.2 Levels of Automation, Intelligence, and Human Variability
1.5 Worldwide Surveys: What Does Automation Mean to People?
1.5.1 How Do We Define Automation?
1.5.2 When and Where Did We Encounter Automation First in Our Life?
1.5.3 What Do We Think Is the Major Impact/Contribution of Automation to Humankind?
1.5.4 What Do We Think Is the Major Risk of Automation to Humankind?
1.5.5 What Do We Think Is the Best Example of Automation?
1.6 Emerging Trends
1.6.1 Automation Trends of the Twentieth and Twenty-First Centuries
1.6.2 Bioautomation
1.6.3 Collaborative Control Theory and Collaborative Automation
1.6.4 Risks of Automation
1.6.5 Need for Dependability, Survivability, Security, Continuity of Operation, and Creativity
1.6.6 Quantum Computing and Quantum Automation
1.7 Conclusion
References
Abstract
Introducing automation and this handbook, the meaning of the term automation is reviewed through its definition and related definitions, historical evolution, technological progress, benefits and risks, and domains and levels of applications. A survey of 506 respondents, including university students and professionals around the world, adds insights to the current and evolving meaning of automation to people, with regard to: What is your best example and your own definition of automation?; Where did you encounter automation first in your life?; and, what are the top contribution and top risk of automation to individuals and to society? The survey responses include five main aspects of automation definitions; ten main types of first automation encounter; and six types of benefits, six types of risks, and ten types of best automation examples. The most exciting contribution of automation found in the survey (as it was found in the previous, original survey) is to encourage/inspire creative work, inspire innovation and newer solutions. Minor variations were found in different regions of the world. Responses about the first automation encounter are somewhat related to the age of the respondent, e.g., pneumatic versus digital, wireless, and cyber control, and to urban versus farming childhood environment. The chapter concludes with several emerging trends in nature-inspired and bioinspired automation, cyber collaborative control and cyber physical automation, and risks to consider, avoid, and eliminate.
Keywords
Assistive automation · Automatic computer/cyber control · Automatic control · Automation platform · Before automatic control · Before computer control · Collaborative automation · Cyber intelligence · Cyber physical automation · Flexibility · Industrial automation · Industrial revolution
1.1 The Meaning of Automation
What is the meaning of automation? When discussing this term and concept with many colleagues and leading experts in various aspects of automation, control theory, robotics engineering, artificial intelligence, cyber systems, and computer science during the development of this second edition of the Springer Handbook of Automation, many of them have different definitions; they even argue vehemently that in their language, or their region of the world, or their professional domain, automation has a unique meaning and “we are not sure it is the same meaning for other experts.” Yet there has been no doubt, no confusion, and no hesitation that automation is powerful; it has tremendous and amazing impact on civilization, on humanity, and it may carry risks, even dangerous risks to individuals and to society. So, what is automation? This chapter introduces the meaning and definition of automation, at an introductory, overview level. Specific details and more theoretical definitions are further explained and illustrated throughout the following parts and chapters of this handbook. A survey of 506 participants from around the world (compared with 331 participants in the previous, original survey) was conducted about the meaning of automation, and is presented in Sect. 5.
1.1.1 Definitions, Formalism, and Automation Examples
Automation: Operating or acting, or self-regulating, independently, without human intervention, and the science and technology that enable it. The term evolves from automatos, in Greek, meaning acting by itself, or by its own will, or spontaneously. Automation involves machines, tools, devices, installations, systems, and systems-of-systems that are all platforms developed by humans to perform a given set of activities without human involvement during those activities. There are many variations of this definition. For instance, before modern automation (specifically defined in the modern context since about 1950s), mechanization was a common version of automation. When automatic control was added to mechanization as an intelligence feature, the distinction and advantages of
automation became evident. In this chapter, we review these related definitions and their evolvement, and survey how people around the world perceive and understand automation. Paraphrasing from this author’s definition [1] automation can be described as follows: As power fields, such as magnetic fields and gravitation, influence bodies to organize and stabilize, so does the sphere of automation. It envelops us and influences us to organize our life and work systems in a different way, and purposefully, to stabilize life and work while effectively producing the desired outcomes.
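As a purely illustrative aside that is not part of the handbook text, the definition above, a human-made platform that performs a given set of activities without human involvement, can be sketched as a tiny data model. The class and field names below are hypothetical; they simply mirror the four elements (platform, autonomy, process, power source) that are formalized next in Fig. 1.1.

```python
# Minimal, hypothetical sketch (not from the handbook): the general definition
# of automation expressed as data. All names here are made up for illustration.
from dataclasses import dataclass, field
from typing import List


@dataclass
class AutomationPlatform:
    kind: str                                   # e.g., machine, tool, device, installation, system
    process: List[str]                          # the given set of actions/operations/functions
    autonomy: List[str] = field(default_factory=list)  # e.g., automatic control, intelligence
    power_source: str = ""                      # e.g., electricity, steam, water pressure

    def acts_without_human_intervention(self) -> bool:
        # Per the definition above, the platform counts as automation only if it
        # has something to do, a source of power, and some autonomy (control or
        # intelligence) so that no human is needed while the process runs.
        return bool(self.process) and bool(self.autonomy) and bool(self.power_source)


# Usage example in the spirit of the sprinkler and pacemaker examples that follow.
sprinkler = AutomationPlatform(
    kind="tool",
    process=["water the lawn on a schedule"],
    autonomy=["preset timer (process control)"],
    power_source="water pressure",
)
print(sprinkler.acts_without_human_intervention())  # True
```

The only point of this sketch is that every automation example in the tables below, whatever its era or technology, can be described by the same four slots.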
From the general definition of automation, the automation formalism is presented in Fig. 1.1 with four main elements: platform, autonomy, process, and power source. Automation platforms are illustrated in Table 1.1. Autonomy, process, and power source are illustrated by examples of automation: ancient to early examples in Table 1.2, examples from the Industrial Revolution to 1920 in Table 1.3, and modern and emerging examples in Table 1.4. This automation formalism can help us review some early examples that may also fall under the definition of automation (before the term automation was even coined), and differentiate from related terms, such as mechanization, cybernetics, artificial intelligence, robotics, virtualization, digitalization, and cyberization (which are explained throughout this handbook chapters.) Automaton (plural: automata, or automatons): An autonomous machine that contains its own power source and can perform without human intervention a complicated series of decisions and actions, in response to programs and external stimuli. The term automaton is used for a specific autonomous machine, tool, or device. It usually does not include automation platforms such as automation infrastructure, automatic installations, or automation systems such as automation software (even though some use the term software automaton to imply computing procedures). The scholar Al-Jazari from al-Jazira, Mesopotamia, designed pioneering programmable automatons in 1206, as a set of dolls, or humanoid automata. Today, the most typical automatons are what we define as individual robots. Robot: A mechanical device that can be programmed to perform a variety of tasks of manipulation and locomotion under automatic control. By this definition, a robot could also be an automaton. But unlike an automaton, a robot is usually designed for highly variable and flexible, purposeful motions and activities, and for specific operation domains, e.g., surgical robot, service robot, welding robot, toy robot, etc. General Motors implemented the first industrial robot, called UNIMATE, in 1961 for die-casting at an automobile factory in New Jersey. By now, millions of robots are routinely employed and integrated throughout the world. Cobots (collaborative robots) are a
[Fig. 1.1 schematic content: Automation = Platform (machine, tool, device, installation, system, system-of-systems) + Autonomy (organization, process control, automatic control, intelligence, collaboration) + Process (action, operation, function) + Power source]
Fig. 1.1 Automation formalism. Automation comprises four basic elements. See representative illustrations of platforms, autonomy, process, and power source in Tables 1.1, 1.2, 1.3, 1.4, 1.6, and the automation cases below, in Sect. 3

Table 1.1 Automation platform examples (Platform: Example)
Machine: Mars lander
Tool: Sprinkler
Device: Pacemaker
Installation: AS/RC (automated storage/retrieval carousel)
System: ERP (enterprise resource planning)
System-of-systems: Internet
Table 1.2 Automation examples: ancient to early history
1. 2.
3. 4.
5.
6.
Machine/System Irrigation channels Water supply by aqueducts over large distances Sundial clocks Archytas’ flying pigeon (fourth century BC); Chinese mechanical orchestra (third century BC) Heron’s mechanical chirping birds and moving dolls (first century AD) Ancient Greek temple, automatic door opening Windmills
Autonomous action/Function Direct, regulate water flow Direct, regulate water supply
Autonomy: control/intelligence Power source From-to and on-off Gravity gates, predetermined From-to and on-off Gravity gates, predetermined
Display current time
Predetermined timing Predetermined sound and movements with some feedback
Sunlight
Preset states and positions with some feedback Predefined grinding
Flying, playing, chirping, moving
Open and close door
Grinding grains
major recent development of robots that can work safely with other robots and with humans, also in close proximity. Bots are software (nonmechanical) robots that can perform a variety of software-based tasks. Robotics: The science and technology of designing, building, and applying robots, computer-controlled mechanical devices, such as automated tools and machines.
Replacing Manual watering Practically impossible
Process without human intervention Water flow and directions Water flow and directions
Impossible otherwise Real birds; human play
Shadow indicating time Mechanical bird or toy motions and sounds
Heated air, steam, water, gravity
Manual open and close
Door movements
Winds
Animal and human Grinding process power
Heated air and steam (early hydraulics and pneumatics)
Science fiction author and scientist Isaac Asimov coined the term robotics in 1941 to describe the technology of robots and predicted the rise of a significant robot industry, e.g., in his foreword to [2]: Since physics and most of its subdivisions routinely have the “-ics” suffix, I assumed that robotics was the proper scientific term for the systematic study of robots, of their construction, maintenance, and behavior, and that it was used as such.
6
Table 1.3 Automation examples: Industrial Revolution to 1920
(Columns: Machine/System | Autonomous action/function | Autonomy: control/intelligence | Power source | Process without human intervention | Replacing)
1. Windmills (seventeenth century) | Flour milling | Feedback keeping blades always facing the wind | Winds | Milling process | Nonfeedback windmills
2. Automatic pressure valve (Denis Papin, 1680) | Steam pressure in piston or engine | Feedback control of steam pressure | Steam | Pressure regulation | Practically impossible otherwise
3. Automatic grist mill (Oliver Evans, 1784) | Continuous-flow flour production line | Conveyor speed control; milling process control | Water flow; steam | Grains conveyance and milling process | Human labor
4. Flyball governor (James Watt, 1788) | Control of steam engine speed | Automatic feedback of centrifugal force for speed control | Steam | Speed regulation | Human control
5. Steamboats, trains (eighteenth to nineteenth century) | Transportation over very large distances | Basic speed and navigation controls | Steam | Travel, freight hauling, conveyance | Practically impossible otherwise
6. Automatic loom (e.g., Joseph Jacquard, 1801) | Fabric weaving, including intricate patterns | Basic process control programs by interchangeable punched card | Steam | Cloth weaving according to human design of fabric program | Human labor and supervision
7. Telegraph (Samuel Morse, 1837) | Fast delivery of text message over large distances | On-off, direction, and feedback | Electricity | Movement of text over wires | Before telecommunication, practically impossible otherwise
8. Semiautomatic assembly machines (Bodine Co., 1920) | Assembly functions including positioning, drilling, tapping, screw insertion, and pressing | Connect/disconnect; process control | Electricity; compressed air through belts and pulleys | Manufacturing processes and part handling with better accuracy | Human labor
9. Automatic automobile-chassis plant (A.O. Smith Co., 1920) | Chassis production | Parts, components, and products flow control, machining, and assembly process control | Electricity | Complex sequences of assembly operations | Human labor and supervision

Robotics and Automation Robotics is an important subset of automation (Fig. 1.2). For instance, of the 27 automation examples in Tables 1.2, 1.3, and 1.4, two examples are about robots, example 4 in Table 1.2 and example 7 in Table 1.4. Beyond robotics, automation includes:
• Infrastructure, e.g., water supply, irrigation, power supply, and telecommunication
• Nonrobot devices, e.g., timers, locks, valves, and sensors
• Automatic and automated machines, e.g., flour mills, looms, lathes, drills, presses, vehicles, and printers
• Automatic inspection machines, measurement workstations, and testers
• Installations, e.g., elevators, conveyors, railways, satellites, and space stations
• Systems, e.g., computers, office automation, Internet, cell phones, and software packages
Common to both robotics and automation are use of automatic control, and evolution with computing, communication, and cyber progress. As in automation, robotics also relies on four major components, including a platform, autonomy, process, and power source, but in robotics, a robot is often considered a machine, thus the platform is mostly a machine, a tool, or device, or a system of tools and devices. While robotics is, in a major way, about automation of motion and mobility, automation beyond robotics includes major areas based on software, decision-making, planning and optimization, systems collaboration, process automation, office automation, enterprise resource planning automation, and e-services. Nevertheless, there is clearly an overlap between automation and robotics; while to most people a robot means a machine with certain automation intelligence, to many an intelligent elevator, or highly automated machine tool, or even a computer may also imply a robot.
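As a minimal illustration of the four basic elements shared by automation and robotics (platform, autonomy, process, and power source), the sketch below records them as a simple data structure. The class and field names are assumptions introduced here for illustration, not notation from this chapter; the instance values are drawn from Table 1.2.

```python
# Illustrative sketch: the four basic automation elements (Fig. 1.1) as a record.
# The class and field names are assumptions made for this sketch, not the chapter's notation.

from dataclasses import dataclass

@dataclass
class AutomationElements:
    platform: str      # machine, tool, device, installation, system, or system-of-systems
    autonomy: str      # control/intelligence acting without human intervention
    process: str       # the process carried out autonomously
    power_source: str

# Example instance drawn from Table 1.2 (windmills):
windmill = AutomationElements(
    platform="Machine (windmill)",
    autonomy="Predefined grinding control",
    process="Grinding grains without human intervention",
    power_source="Winds",
)
print(windmill)
```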
Table 1.4 Automation examples: modern and emerging
(Columns: Machine/System | Autonomous action/function | Autonomy: control/intelligence | Power source | Process without human intervention | Replacing)
1. Automatic door opener | Opening and closing of doors triggered by sensors | Automatic control | Compressed air or electric motor | Doors of buses, trains, and buildings open and close by themselves | Human effort
2. Elevators, cranes | Lifting, carrying | On-off, feedback, preprogrammed, or interactive | Hydraulic pumps, electric motors | Speed and movements require minimal supervision | Human climbing, carrying
3. Digital computers | Data processing and computing functions | Variety of automatic and interactive control and operating systems; intelligent control; mobility (after (9) below) | Electricity | Calculations at speeds, complexity, and with amounts of data that are humanly impossible | Cognitive and decision-making functions
4. Automatic pilot | Steering aircraft or boat | Same as (3) | Electrical motors | Navigation, operations, e.g., landing | Human pilot
5. Automatic transmission | Switch gears of power transmission | Automatic control | Electricity, hydraulic pumps | Engaging/disengaging rotating gears | Manual transmission control
6. Office automation | Document processing, imaging, storage, printing | Same as (3) | Electricity | Specific office procedures | Some manual work; some is practically impossible
7. Multirobot factories | Robot arms and automatic devices perform variety of manufacturing and production processes | Optimal, adaptive, distributed, robust, self-organizing, collaborative control, and other intelligent control | Hydraulic pumps, pneumatics, and electric motors | Complex operations and procedures, including quality assurance | Human labor and supervision
8. Medical diagnostics (e.g., computerized tomography [CT] and magnetic resonance imaging [MRI]) | Visualization of medical test results in real time | Automatic control, automatic virtual reality | Electricity | Noncontact testing and results presentation | Impossible otherwise
9. Wireless services | Remote prognostics, repair, and exploration | Predictive control; interacting with radio frequency identification (RFID) and sensor networks, IoT, IoS | Electricity | Monitoring remote equipment functions, executing self-repairs | Practically impossible otherwise
10. Internet search engine | Finding requested information | Optimal control, multiagent control | Electricity | Search for specific information over a vast amount of data over worldwide systems | Human search, practically impossible
11. Cell phones, smart phones | Mobile and remote communication, knowledge sharing, collaboration | Same as (3) and (7) and (9) and (10); intelligent user interface and interaction controls; machine learning | Electricity | Remote interaction with computing and communication equipment | Stationary phones, computers, and digital cameras
12. Remote meeting platforms | Telework, tele-learning and teaching, telemedicine | Same as (11) | Electricity | Remote access and collaborative automation | Face-to-face meetings

Cybernetics: The scientific study of control and communication in organisms, organic processes, and mechanical and electronic systems. It evolves from kibernetes, in Greek, meaning pilot, or captain, or governor, and focuses
on applying technology to replicate or imitate biological control systems, often called today bioinspired, nature-inspired, or systems biology.
Automation examples (Fig. 1.2):
1. (a) Just computers; (b) automatic devices but no robots: decision-support systems (a); enterprise planning (a); water and power supply (b); office automation (a+b); aviation administration (a+b); ship automation (a+b); smart building (a+b)
2. Automation including robotics: safety protection automation able to activate fire-fighting robots when needed; spaceship with robot arm
3. Robotics: factory robots; soccer robot team; medical nanorobots
Fig. 1.2 The relation between robotics and automation: The scope of automation includes applications: (1a) with just computers; (1b) with various automation platforms and applications, but without robots; (2) automation including also some robotics; and (3) automation with robots and robotics
Cybernetics, a book by Norbert Wiener, who is attributed with coining this word, appeared in 1948 and influenced artificial intelligence research. Cybernetics overlaps with control theory and systems theory.
Cyber: A prefix, as in cybernetic, cyberization, cyber physical, cyber security, or cyborg. Cyber has also assumed a meaning as a noun, meaning a combination of computers, communication, information systems, artificial intelligence (including machine learning), virtual reality, and the Internet. This meaning has emerged because of the increasing importance of these integrated automation systems to society and daily life. Recently, cyber physical systems (CPS) have been developed as integrated automation in which cyber tools and techniques are intertwined to collaborate and control
physical automation. Realizing the increasing role of cyber in automation, the current automation era of automatic computer control, ACC, is now termed automatic computer/cyber control. Artificial intelligence (AI): The ability of a machine system to perceive anticipated or unanticipated new conditions, decide what actions must be performed under these conditions, and plan the actions accordingly. The main areas of AI study and application are knowledge-based systems, computer sensory systems, language processing systems, and machine learning. AI is an important part of automation, especially to characterize what is sometimes called intelligent automation (Sect. 4.2).
It is important to note that AI is actually human intelligence (including biological intelligence as observed by humans) that has been implemented on machines, mainly through computers and communication. Its significant advantages are that it can function automatically, i.e., without human intervention during its operation/function; it can combine intelligence from many humans (cumulative and collaborative intelligence) and improve its abilities by automatic learning and adaptation (machine learning); and it can be automatically distributed, duplicated, shared, inherited, and, if necessary, restrained and even deleted. With the advent of these abilities, remarkable progress has been achieved. There is also, however, an increasing risk of running out of control (Sect. 6), which must be considered carefully, as with harnessing any other technology.
Collaborative automation (CA): The science and technology of designing, integrating, building, and applying automation systems, devices, and technologies that collaborate effectively. CA accomplishes this effectiveness by AI and cyber techniques. For example:
(1) Automation platforms that support and optimize human-human collaboration, such as online meetings, file sharing, and online learning
(2) Automation installations enabling and optimizing machine-machine collaboration and system-system collaboration, such as cobots interacting as a team with assembly lines, and sensor networks collaborating to enable IoT (Internet of Things) and IoS (Internet of Services)
(3) Automation infrastructure enabling and optimizing human-automation collaboration (human-in-the-loop), such as assistive cobots helping the elderly, and smartphones and wearables assisting humans in their health care routines
The common goal and challenge of collaborative automation is to optimize the interactions between and among humans, between and among robots and machines, and between and among humans, robots, and machines.
Early Automation The creative human desire to develop automation, from ancient times, has been to recreate natural activities, either for enjoyment or for productivity with less human effort and hazard. The following six imperatives have been proven about automation:
1. Automation has always been developed by people.
2. Automation has been developed for the sake of people.
3. The benefits of automation are tremendous.
4. Often automation performs tasks that are impossible or impractical for humans.
5. As with other technologies, care should be taken to prevent abuse of automation, and to eliminate the possibilities of unsafe automation.
6. Automation usually inspires further creativity of the human mind.
The main evolution of automation has followed the development of mechanics and fluidics, civil infrastructure, and machine design, and since the twentieth century, of computers, communication, and cyber. Examples of ancient automation that follow the formal definition (Table 1.2) include flying and chirping birds, sundial clocks, irrigation systems, and windmills. They all include the four basic automation elements, and have a clear autonomous process without human intervention, although they are mostly predetermined or predefined in terms of their control program and organization. But not all these ancient examples replace previously used human effort: Some of them would be impractical or even impossible for humans, e.g., continuously displaying time, or moving large quantities of water by aqueducts over large distances. This observation is important, since, as evident from the definition surveys (Sect. 5):
1. In defining automation, over one-third of those surveyed associate automation with replacing humans (compared to over one-quarter in the previous survey), implying a somber connotation that humans are losing certain advantages. Many resources erroneously define automation as replacement of human workers by technology. But the definition is not about replacing humans, as many automation examples involve activities people cannot practically perform, e.g., complex and fast computing, wireless telecommunication, additive, nano- and microelectronics manufacturing, and satellite-based positioning. The definition is about the autonomy of a system or process from human involvement and intervention during the process (independent of whether humans could or could not perform it themselves). Furthermore, automation is rarely disengaged from people, who must maintain and improve it (or at least replace its batteries).
2. Humans are always involved with automation, to a certain degree, from its innovation and development, to, at certain points, supervising, maintaining, repairing, and issuing necessary commands, e.g., at which floor should this elevator stop for me? Or, what planning horizon should be considered by this decision automation function?
Describing automation, Buckingham [4] quotes Aristotle (384–322 BC): "When looms weave by themselves human's slavery will end." Indeed, the reliance on a process that can proceed autonomously and successfully to completion, without human participation and intervention, is an essential characteristic of automation. But it took over 2000 years since Aristotle's prediction till the automatic loom was developed during the Industrial Revolution.
Industrial Revolution Some scientists (e.g., Truxal [5]) define automation as applying machines or systems to execute tasks that involve
more elaborate decision-making. Certain decisions were already involved in ancient automation, e.g., where to direct the irrigation water. More control sophistication was indeed developed later, beginning during the Industrial Revolution (see examples in Table 1.3). During the Industrial Revolution, as shown in the examples, steam and later electricity became the main power sources of automation systems and machines, and autonomy of process and decision-making increasingly involved feedback control models.
Modern Automation The term automation in its modern meaning was actually attributed in the early 1950s to D.S. Harder, a vice president of the Ford Motor Company, who described it as a philosophy of manufacturing. Toward the 1950s, it became clear that automation could be viewed as substitution by mechanical, hydraulic, pneumatic, electric, and electronic devices for a combination of human efforts and decisions. Critics with humor referred to automation as substitution of human error by mechanical error. Automation can also be viewed as the combination of four fundamental principles: (1) mechanization, computerization, digitalization, and cyberization; (2) process continuity; (3) automatic control; and (4) economic, social, and technological rationalization.
Mechanization, Computerization, Digitalization, and Cyberization Mechanization is defined as the application of machines to perform work. Machines can perform various tasks, at different levels of complexity. When mechanization is designed with cognitive and decision-making functions, such as process control and automatic control, the modern term automation becomes appropriate. Some machines can be rationalized by their benefits of safety and convenience. Some machines, based on their power, compactness, and speed, can accomplish tasks that could never be performed by human labor, no matter how much labor or how effectively the operation could be organized and managed. With increased availability and sophistication of power sources and of automatic control, the level of autonomy of machines and mechanical systems created a distinction between mechanization and the more autonomous form of mechanization, which is automation (Sect. 4). With the advent of digital computers, it became clear that these "digital machines" add computational and cognitive dimensions to automation. The terms computerization and, more recently, digitalization and cyberization have been used to express the different processes increasing the power of relatively more intelligent automation.
Process Continuity Process continuity is already evident in some of the ancient automation examples, and more so in the Industrial Revolution examples (Tables 1.2 and 1.3). For instance, windmills could provide relatively uninterrupted cycles of grain milling. The idea of continuity is to increase productivity, the useful output per labor-hour, and increase resilience, the ability to continue productively and responsively despite disruptions. Early in the twentieth century, with the advent of mass production, it became possible to better organize workflow. Organization of production flow and assembly lines, and automatic or semiautomatic transfer lines, increased productivity beyond mere mechanization. The emerging automobile industry in Europe and the USA in the early 1900s utilized the concept of moving work continuously, automatically or semiautomatically, to specialized machines and workstations. Interesting problems that emerged with flow automation included balancing the work allocation and regulating the flow. More recently, with intelligent automation, optimized operations have increasingly enabled and improved process continuity with effectiveness and longer-term resilience. These continuity abilities of human-automation systems have been vital during the recent pandemics, both for continuity of supplies and for the development and distribution of vaccines and therapies.
Automatic Control A key mechanism of automatic control is feedback, which is the regulation of a process according to its own output, so that the output meets the conditions of a predetermined, set objective. An example is the windmill that can adjust the orientation of its blades by feedback informing it of the changing direction of the current wind. Another example is a heating system that can stop and restart its heating or cooling process according to feedback from its thermostat. Watt’s flyball governor (1769) applied feedback from the position of the rotating balls as a function of their rotating speed to automatically regulate the speed of the steam engine. Charles Babbage’s analytical engine for calculations applied the feedback principle in 1840.
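As a minimal sketch of the feedback principle described above, the following code simulates a simple on-off thermostat loop in which the measured output regulates the process toward a set objective. The plant model and all numeric values are illustrative assumptions, not data from this chapter.

```python
# Minimal sketch of on-off (thermostat-style) feedback control.
# The plant model and all numeric values are illustrative assumptions,
# not data from this chapter.

def simulate_thermostat(setpoint=21.0, hysteresis=0.5, outside=5.0, steps=30):
    temperature = 15.0          # measured output that is fed back to the controller
    heater_on = False
    history = []
    for _ in range(steps):
        # Feedback: compare the measured output with the set objective.
        if temperature < setpoint - hysteresis:
            heater_on = True
        elif temperature > setpoint + hysteresis:
            heater_on = False
        # Very simple plant: heat input when on, heat loss toward outside air.
        temperature += (2.0 if heater_on else 0.0) - 0.05 * (temperature - outside)
        history.append((round(temperature, 2), heater_on))
    return history

if __name__ == "__main__":
    for temp, on in simulate_thermostat()[:12]:
        print(f"temperature={temp:5.2f}  heater={'on' if on else 'off'}")
```

The on-off switching around the setpoint mirrors the thermostat and windmill examples above; Watt's flyball governor applies the same principle continuously rather than in two discrete states.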
Automation Rationalization Rationalization means a logical and systematic analysis, understanding, and evaluation of the objectives and constraints of an automation solution. Automation is rationalized by considering the technological and engineering aspects in the context of economic, social, and managerial considerations, including also: human factors and usability, organizational issues, environmental constraints, conservation of resources and energy, and elimination of waste.
Soon after automation enabled mass production in factories of the early twentieth century and workers feared for the future of their jobs, the US Congress held hearings in which experts explained what automation means to them (Table 1.5). From our vantage point several generations later, it is interesting to read these definitions, while we already know about automation discoveries yet unknown at that time, e.g., mobile computers, robots, smart phones and personal digital assistants, the Internet, drones, and more. A relevant question is: Why automate? Several prominent motivations are the following, as has also been indicated by the worldwide automation survey participants (Sect. 3.5): 1. Feasibility: Humans cannot handle certain operations and processes, either because of their scale, e.g., micro- and nanoparticles are too small, the amount of data is too vast, or the process happens too fast, for instance, missile guidance; microelectronics design, manufacturing, and repair; and Internet search. 2. Productivity: Beyond feasibility, computers, automatic transfer machines, and other equipment can operate at such high speed and capacity that it would be practically impossible without automation; for instance, controlling consecutive, rapid chemical processes in food production; performing medicine tests by manipulating atoms or molecules; optimizing a digital image; and placing
millions of colored dots on a color television or computer screen. 3. Safety: Automation sensors and devices can operate well in environments that are unsafe for humans, for example, under extreme temperatures, nuclear radiation, or in poisonous gas. 4. Quality and economy: Automation can save significant costs on jobs performed without it, including consistency, accuracy, and quality of manufactured products and of services, and saving labor, safety, energy, and maintenance costs. 5. Importance to individuals, to organizations, and to society: Beyond the above motivations, service- and knowledge-based automation reduces the need for middle managers and middle agents, thus reducing or eliminating the agency costs and removing layers of bureaucracy, for instance, Internet-based travel services and financial services, and direct communication between manufacturing managers and line operators, or cell robots. Remote supervision, telepresence, and telecollaboration change the nature, sophistication, skills and training requirements, and responsibility of workers and their managers. As automation gains cyber intelligence and competencies, it takes over some employment skills and opens up new types of work, skills, and service requirements, and new business opportunities.
Table 1.5 Definitions of automation in the 1950s. (After [3])
1. John Diebold, President, John Diebold & Associates, Inc.: It is a means of organizing or controlling production processes to achieve optimum use of all production resources – mechanical, material, and human.
2. Marshall G. Nuance, VP, York Corp.: Automation means optimization of our business and industrial activities.
3. James B. Carey, President, International Union of Electrical Workers: Automation is a new word, and to many people it has become a scare word. Yet it is not essentially different from the process of improving methods of production which has been going on throughout human history. When I speak of automation, I am referring to the use of mechanical and electronic devices, rather than human workers, to regulate and control the operation of machines. In that sense, automation represents something radically different from the mere extension of mechanization.
4. Joseph A. Beirne, President, Communications Workers of America: Automation is a new technology, arising from electronics and electrical engineering. We in the telephone industry have lived with mechanization and its successor automation for many years.
5. Robert C. Tait, Senior VP, General Dynamics Corp.: Automation is simply a phrase coined, I believe, by Del Harder of Ford Motor Co. in describing their recent supermechanization which represents an extension of technological progress beyond what has formerly been known as mechanization.
6. Robert W. Burgess, Director, Census, Department of Commerce: Automation is a new word for a now familiar process of expanding the types of work in which machinery is used to do tasks faster, or better, or in greater quantity.
7. D.J. Davis, VP Manufacturing, Ford Motor Co.: The automatic handling of parts between progressive production processes. It is the result of better planning, improved tooling, and the application of more efficient manufacturing methods, which take full advantage of the progress made by the machine-tool and equipment industries.
8. Don G. Mitchell, President, Sylvania Electric Products, Inc.: Automation is a more recent term for mechanization, which has been going on since the Industrial Revolution began. Automation comes in bits and pieces. First the automation of a simple process, and then gradually a tying together of several processes to get a group of subassemblies complete.
6. Accessibility: Automation enables better accessibility to knowledge, skills, education, and cultural engagement for all people, including disadvantaged, limited, and disabled people. Furthermore, automation opens up new types of services and employment for people with limitations, e.g., by integration of speech and vision recognition interfaces, and by cyber collaborative augmentation and assistive devices of work and play. 7. Additional motivations: Additional motivations are the competitive ability to integrate complex mechanization, advantages of modernization, convenience, improvement in quality of life, and improvement in resilience with automated systems. To be automated, a system (of humans and automation) must follow the motivations listed above. The modern and emerging automation examples in Table 1.4 and the automation cases in Sect. 3 illustrate these motivations, and the mechanization, computerization, digitalization, process continuity, and automatic control features. Certain limits and risks of automation need also be considered. Modern, computer-controlled automation must be programmable and conform to definable procedures, protocols, routines, and boundaries. The limits also follow the boundaries imposed by the four principles of automation: (1) Can it be mechanized, computerized, and digitalized?; (2) Is there continuity in the process?; (3) Can automatic control be designed for it?; and (4) can it be rationalized? Theoretically, all continuous processes can be automatically controlled, but practically such automation must be rationalized first; for instance, jet engines may be continuously advanced on conveyors to assembly cells, but if the demand for these engines is low, there is no justification to automate their flow. Furthermore, all automation must be designed to operate within safe, legal, and ethical boundaries, so it does not pose hazards to humans and to the environment.
1.1.2
Domains of Automation

Some unique meanings of automation are associated with the domain of automation. Several examples of well-known domains are listed here:
• Detroit automation: Automation of transfer lines and assembly lines adopted by the automotive industry [6].
• Flexible automation: Manufacturing and service automation consisting of a group of processing stations and robots operating as an integrated system under computer control, able to process a variety of different tasks simultaneously, under automatic, adaptive control or learning control [6]. Also known as flexible manufacturing system (FMS), flexible assembly system, or robot cell, which are suitable for medium-demand volume and medium variety of flexible tasks. Its purpose is to advance from mass production of products to more customer-oriented and customized supply. For higher flexibility with low-demand volume, stand-alone numerically controlled (NC) machines and robots are preferred. For high-demand volume with low task variability, automatic transfer lines are designed. The opposite of flexible automation is fixed automation, such as process-specific machine tools and transfer lines, with relatively lower task flexibility. For mass customization (mass production with some flexibility to respond to variable customer demands), transfer lines with flexibility can be designed (see more on automation flexibility in Sect. 4).
• Office automation: Computer and communication machinery and software used to improve office procedures by digitally creating, collecting, storing, manipulating, displaying, and transmitting office information needed for accomplishing office tasks and functions [7, 8]. Office automation became popular in the 1970s and 1980s when the desktop computer and the personal computer emerged. Later, it expanded as part of business and enterprise automation.
Other examples of well-known domains of automation have been factory automation (e.g., [9]), health care automation (e.g., [10]), workflow automation (e.g., [11, 12]), and service automation (e.g., [13]). More domain examples are illustrated in Table 1.6. Throughout these different domains, automation has been applied for various organization functions. Five hierarchical layers of automation are shown in the automation pyramid (Fig. 1.3), which is a common depiction of how to organize automation implementation.

1.2
Brief History of Automation
Automation has evolved, as described in Table 1.7, along three automation generations. (In the current third generation, events are organized by their approximate time of widespread application.)
1.2.1
First Generation: Before Automatic Control (BAC)
Early automation is characterized by elements of process autonomy and basic decision-making autonomy, but without feedback, or with minimal feedback. The period is generally from prehistory till the fifteenth century.
Table 1.6 Examples of automation domains (Each example is shown as a pair of a domain and an example illustrating it.)
Accounting – Billing software; Agriculture – Harvester; Banking – ATM (automatic teller machine); Chemical process – Refinery; Communication – Print press; Construction – Truck; Design – CAD (computer-aided design); Education – Television; Energy – Power windmill; Engineering – Simulation; Factory – AMR (autonomous mobile robot); Government – Government Web portals; Health care – Biosensor; Home – Oven; Hospital – Drug delivery; Hospitality – CRM (customer relations management); Leisure – Movie; Library – E-books; Logistics – RFID (radio frequency identification); Management – Financial analysis software; Manufacturing – 3D printer; Maritime – Navigation; Military – Intelligence satellite; Office – Copying machine; Post – Mail sorter; Retail – E-commerce; Safety – Fire alarm; Security – Motion detector; Service – Vending machine; Space – Exploratory satellite; Sports – Treadmill; Transportation – Traffic light
Fig. 1.3 The automation pyramid: organizational layers. From bottom to top: Device level (sensors, actuators, tools, machines, installations, infrastructure systems; power source); Control level (machine and computer controllers); Management/manufacturing execution system (MES) level; Enterprise resource planning (ERP) level; Multienterprise network (MEN) level, with a communication layer between each pair of adjacent levels. Human automation participants – operators, supervisors, agents, suppliers, clients – and automation operations and services span the layers
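A minimal sketch of the organizational layers in Fig. 1.3, assuming a simple ordered-list representation; the helper function and its name are illustrative assumptions added here, not part of the figure.

```python
# Illustrative sketch of the automation pyramid layers (Fig. 1.3), bottom to top.
# The helper function and its name are assumptions added for illustration.

PYRAMID_LAYERS = [
    "Device level (sensors, actuators, tools, machines)",
    "Control level (machine and computer controllers)",
    "Management/manufacturing execution system (MES) level",
    "Enterprise resource planning (ERP) level",
    "Multienterprise network (MEN) level",
]

def escalation_path(from_index, to_index):
    """Ordered layers crossed when information moves between two levels.

    Each hop between adjacent levels passes through a communication layer,
    as depicted between every pair of levels in the pyramid.
    """
    if from_index <= to_index:
        return PYRAMID_LAYERS[from_index:to_index + 1]
    return list(reversed(PYRAMID_LAYERS[to_index:from_index + 1]))

# Example: a device-level sensor alarm escalated to the ERP level.
print(" -> ".join(escalation_path(0, 3)))
```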
Some examples of basic automatic control can be found earlier than the fifteenth century, at least in conceptual design or mathematical definition. Automation examples of the first generation can also be found later, whenever automation solutions without automatic control could be rationalized.
Table 1.7 Brief history of automation events
(Columns: Period | Automation inventions and innovations (examples) | Automation generation)
Prehistory | Sterilization of food and water, cooking, ships and boats, irrigation, wheel and axle, flush toilet, alphabet, and metal processing | First generation: before automatic control (BAC)
Ancient history | Optics, maps, water clock, water wheel, water mill, kite, clockwork, and catapult
First millennium AD | Central heating, compass, woodblock printing, pen, glass and pottery factories, distillation, water purification, wind-powered gristmills, feedback control, automatic control, automatic musical instruments, self-feeding and self-trimming oil lamps, chemotherapy, diversion dam, water turbine, mechanical moving dolls and singing birds, navigational instruments, and sundial
Eleventh–Fifteenth century | Pendulum, camera, flywheel, printing press, rocket, clock automation, flow-control regulator, reciprocating piston engine, humanoid robot, programmable robot, automatic gate, water supply system, calibration, and metal casting
Sixteenth century | Pocket watch, Pascal calculator, machine gun, and corn grinding machine | Second generation: automatic control before computer control (ABC)
Seventeenth century | Automatic calculator, pendulum clock, steam car, and pressure cooker
Eighteenth century | Typewriter, steam piston engine, Industrial Revolution early automation, steamboat, hot-air balloon, and automatic flour mill
Nineteenth century | Automatic loom, electric motor, passenger elevator, escalator, photography, electric telegraph, telephone, incandescent light, radio, X-ray machine, combine harvester, lead–acid battery, fire sprinkler system, player piano, electric street car, electric fan, automobile, motorcycle, dishwasher, ballpoint pen, automatic telephone exchange, sprinkler system, traffic lights, and electric bread toaster
Early twentieth century | Airplane, automatic manufacturing transfer line, conveyor belt-based assembly line, analog computer, air conditioning, television, movie, radar, copying machine, cruise missile, jet engine aircraft, helicopter, washing machine, parachute, and flip-flop circuit
1940s | Digital computer, assembler programming language, transistor, nuclear reactor, microwave oven, atomic clock, barcode, and computer simulation | Third generation: automatic computer/cyber control (ACC)
1950s | Mass-produced digital computer, computer operating system, FORTRAN programming language, automatic sliding door, floppy disk, hard drive, power steering, optical fiber, communication satellite, computerized banking, integrated circuit, artificial satellite, medical ultrasonics, and implantable pacemaker
1960s | Laser, optical disk, microprocessor, industrial robot, automatic teller machine (ATM), computer mouse, computer-aided design, computer-aided manufacturing, random access memory, video game console, barcode scanner, radiofrequency identification tags (RFID), permanent press fabric, and wide area packet switching network
1970s | Food processor, word processor, Ethernet, laser printer, database management, computer-integrated manufacturing, mobile phone, personal computer, space station, digital camera, magnetic resonance imaging, computerized tomography (CT), email, spreadsheet, and cellular phone
1980s | Compact disk, scanning tunneling microscope, artificial heart, deoxyribonucleic acid (DNA) fingerprinting, Internet transmission control protocol/Internet protocol (TCP/IP), camcorder, cybersecurity, and computer-supported collaborative work
1990s | World Wide Web, global positioning system, digital answering machine, smart pills, service robots, Java computer language, Web search, Mars Pathfinder, and Web TV
2000s | Artificial liver, Segway personal transporter, robotic vacuum cleaner, self-cleaning windows, iPod, softness-adjusting shoe, drug delivery by ultrasound, Mars Lander, disk-on-key, and social robots
2010s | Voice assistant, reusable space rockets, iPad, self-driving car, home LED light bulbs, gene editing, solar powerwall battery backup, IBM Watson supercomputer, wearable technology, 4G network launch, Uber, virtual reality headset, consumer drones, movie streaming, cryptocurrencies, image recognition, online social media, precision medicine, and cloud computing and storage
2020s (emerging) | 5G wireless standard, edge computing, autonomous electric vehicles, shared autonomous transportation, AI-based health care and telemedicine, AI-based assistants ("avatars"), global IoT/IoS, quantum computers, quantum cyber, and quantum Internet
Table 1.8 Automation generations
(Columns: Generation | Typical example)
BAC, before automatic control; prehistoric, ancient – Waterwheel
ABC, automatic control before computer control, sixteenth century to 1940 – Automobile
  Hydraulic automation – Hydraulic elevator
  Pneumatic automation – Door open/shut
  Electrical automation – Electric telegraph
ACC, automatic computer/cyber control, 1940 to present:
  Electronic automation – Microprocessor
  Micro automation – Digital camera
  Nano automation – Nanomemory
  Mobile automation – Cell phone
  Remote automation – Global positioning system (GPS)
  Assistive automation – Home automation
  Collaborative automation – Wearables

1.2.2
Second Generation: Automatic Control Before Computer Control (ABC)
Automation with advantages of automatic control, but before the introduction and implementation of digital computers, belongs to this generation. Automatic control emerging during this generation offered better stability and reliability, more complex decision-making, and in general better control and automation quality. The period is between the fifteenth century and the 1940s. It would be generally difficult to rationalize in the future any automation with automatic control and without computers; therefore, future examples of this generation will be rare.
1.2.3
Third Generation: Automatic Computer/Cyber Control (ACC)
The progress of computers and communication, and more recently cyber systems and networks has significantly impacted the sophistication of automatic control and its effectiveness. This generation began in the 1940s and continues today. Further refinement of this generation, based on the significant innovations in automation solutions and applications, can be found in Sect. 4, discussing the levels of automation. See also Table 1.8 for examples discovered or implemented during the three automation generations.
1.2.4
Perspectives of Automation Generations
The three main generations of automation defined above reflect a clear, historical perspective of the long-term evolution of automation by our human civilization. Readers may ask: “Then, what is being meant by the terms Automation 4.0, Automation 5.0? and the related terms Industry 4.0, Industry 5.0?” These terms reflect current, on-going attempts by business professionals and researchers (including this author) to
differentiate short-term, current innovations and distinguish them from other recent, yet older innovations. For Industry 4.0, what is the meaning? Some scientists differentiate four clear industrial generations: 1.0 fire; 2.0 agriculture; 3.0 electricity; 4.0 scientific methods. Others claim: 1.0 steam engines; 2.0 electricity; 3.0 computers; 4.0 wireless Internet; 5.0 AI, cyber. Similarly, for automation scientists, we differentiate: 1.0 Computerized; 2.0 Computer integrated; 3.0 Internetworked and mobile; 4.0 Cloud-based and machine learning; 5.0 Cyber-physical and cybernetic. This multitude of ideas and perspectives, often being revised to include the next innovation, reflects the rapid advancements over a relatively short period. Only future human generations will be able to reflect back on the long-term generational implications.
1.3
Automation Cases
Twelve automation cases are illustrated in this section to demonstrate the meaning and scope of automation in different domains. These 12 cases cover a variety of automation domains. They also demonstrate different levels of intelligence programmed into the automation application, different degrees of automation, and various types of automation flexibility. The meaning of these automation characteristics is explained in the next section.
1.4
Flexibility, Degrees, and Levels of Automation
Fig. 1.4 (a) Steam turbine generator. (b) Governor block diagram (PLC: programmable logic controller; T&T: trip and throttle). A turbine generator designed for on-site power and distributed energy ranging from 0.5 to 100 MW. Turbine generator sets produce power for pulp and paper mills, sugar, hydrocarbon, petrochemical and process industries, palm oil, ethanol, waste-to-energy, other biomass burning facilities, and other installations. (With permission from Dresser-Rand)

Case 1: Steam Turbine Governor (Fig. 1.4)
Source: Courtesy of Dresser-Rand Co., Houston (http://www.dresser-rand.com/)
Process: Operates a steam turbine used to drive a compressor or generator
Platform: Device, integrated as a system with programmable logic controller (PLC)
Autonomy: Semiautomatic and automatic activation/deactivation and control of the turbine speed; critical speed-range avoidance; remote, auxiliary, and cascade speed control; loss of generator and loss of utility detection; hot standby ability; single and dual actuator control; programmable governor parameters via operator's screen and interface; and mobile computer. A manual mode is also available: The operator places the system in run mode which opens the governor valve to the full position, then manually opens the T&T valve to idle speed, to warm up the unit. After warm-up, the operator manually opens the T&T valve to full position, and as the turbine's speed approaches the rated (desirable) speed, the governor takes control with the governor valve. In semiautomatic and automatic modes, once the operator places the system in run mode, the governor takes over control
Collaborative automation: Governors can collaborate with preventive maintenance systems, and with self-repair devices in case of disruptions
Increasingly, solutions of society's problems cannot be satisfied by, and therefore cannot mean the automation of just a single, repeated process. By automating the networking and integration of devices and systems, they are able to perform different and variable tasks. Increasingly, this ability also requires cooperation (sharing of information and resources) and collaboration (sharing in the execution, interactions, and responses) with other devices and systems. Cyber collaborative automation is rapidly emerging with higher levels of learning-based, predictive, and feedforward control, enabling better quality, responsiveness, resilience, and agility by automation. Thus, devices and systems have to be designed with inherent flexibility, which is motivated by the application and clients' requirements. With growing demand for service and product variety and individualization, there is also an increase in the expectations by users and customers for greater reliability, responsiveness, and smooth interoperability. Thus, the meaning of automation also involves the aspects of its flexibility, degree, and levels. To enable design for flexibility, certain standards and measures have been and will continue to be established. Automation flexibility, often overlapping with the level of automation intelligence, depends on two main considerations:
1. The number of different states that can be assumed automatically;
2. The length of time and amount of effort (setup or transition process) necessary to respond and execute a change of state (sometimes called response time).
Fig. 1.5 Bioreactor system configured for microbial or cell culture applications. Optimization studies and screening and testing of strains and cell lines are of high importance in industry and research and development (R&D) institutes. Large numbers of tests are required and they must be performed in as short a time as possible. Tests should be performed so that results can be validated and used for further process development and production. (With permission from Applikon Biotechnology)

Case 2: Bioreactor (Fig. 1.5)
Source: Courtesy of Applikon Biotechnology Co., Schiedam (http://www.pharmaceutical-technology.com/contractors/automation/applikon-technology/)
Process: Microbial or cell culture applications that can be validated, conforming with standards for equipment used in life science and food industries, such as good automated manufacturing practice (GAMP)
Platform: System or installation, including microreactors, single-use reactors, autoclavable glass bioreactors, and stainless steel bioreactors
Autonomy: Bioreactor functions with complete measurement and control strategies and supervisory control and data acquisition (SCADA), including sensors and a cell retention device
Collaborative automation: Bioreactors collaborate with sensor networks to adjust and optimize their processing protocols

Fig. 1.6 Telephone: (a) Landline telephone. (b) Cellphone

Case 3: Telephone (Fig. 1.6)
Source: Photos taken by the author of his past telephones
Process: Automated transmission of information over distance, telecommunication. In recent years, cellphones and smartphones automate also the integration of photography, navigation, and an increasing number of other applications ("apps")
Platform: A hardware-software system, increasingly a mobile computer platform
Autonomy: As a human-operated automation device, the telephone's autonomy is limited to programmable commands, such as alarms, timers, signal, location, search, and updates
Collaborative automation: Communication is an essential component of automated collaboration, and a telephone is a device that enables it. While automatically interacting with the Internet, including IoT and IoS, it enables and automates humans' collaboration, and human-machine collaboration

Fig. 1.7 (a) The historic Waterloo Boy tractor, celebrating its 100th anniversary in 2021, with the author in Waterloo, Iowa. (b) Autonomous tractor, with RTK-GPS, real-time kinematic positioning unit (on top), for precision farming in Israel, Autumn 2021
Case 4: Tractor (Fig. 1.7)
Source: With permission: Fig. 1.7a – John Deere Tractor & Engine Museum at Waterloo, Iowa; photo credit: N.C. Nof. Fig. 1.7b – Prof. H. Eizenberg, Newe Ya'ar Research Center, Volcani Institute, Israel
Process: Tractors are designed to deliver mobility and high power for integrated tools in agriculture, construction, and transportation processes, such as land clearing, soil cultivating, crop treatment and harvesting, livestock care, heavy load moving, and others. A classic example of how automation is first designed to replace workers' effort, and later to enable unforeseen benefits, tractors were originally designed to replace human and animal farm work. Increasingly, integrated with additional automation functions and tools, they have added and enabled processes of construction, transportation, forestry, and precision farming.
Platform: A vehicle platform with a high-power engine, designed to engage tools. Modern tractors integrate automation systems for safety, comfort, and intelligent control.
Autonomy: Self-driving tractors can operate independently, control their steering, navigation, acceleration, and braking, and can activate and deactivate engaged tools according to programs and sensor-based commands.
Collaborative automation: Tractors are engaged in all forms of collaborative automation: farmer-tractor-tools; tractor collaboration with other vehicles, or tool exchangers; tractor-tractor in-field collaboration for tool and information sharing, scheduling, and exchange; and remote knowledge-based monitoring of tractor engines for reliability and engine health maintenance.
Case 5: Digital Photo Processing (Fig. 1.8)
Source: Adobe Systems Incorporated, San Jose, California (http://adobe.com)
Process: Editing, enhancing, adding graphic features, removing stains, improving resolution, cropping and sizing, and other functions to process photo images
Platform: Software system
Autonomy: The software functions are fully automatic once activated by a user. The software can execute them semiautomatically under user control, or action series can be automated too
Collaborative automation: Human-automation collaboration is enabled
The number of different possible states and the cost of changes required are linked with two interrelated measures of flexibility: application flexibility and adaptation flexibility (Fig. 1.16). Both measures are concerned with the possible situations of the system and its environment. Automation solutions may address only switching between undisturbed, standard operations and nominal, variable situations, or can also aspire to respond when operations encounter disruptions
Fig. 1.8 Adobe Photoshop functions for digital image editing and processing: (a) automating functional actions, such as shadow, frame, reflection, and other visual effects; (b) selecting brush and palette for graphic effects; and (c) setting color and saturation values. (With permission from Adobe Systems Inc., 2008)
and transitions, such as errors and conflicts, or significant design changes. Application flexibility measures the number of different work states, scenarios, and conditions a system can handle. It can be defined as the probability that an arbitrary task, out of a given class of such tasks, can be carried out automatically.
Case 6: Robotic Painting (Fig. 1.9)
Source: Courtesy of ABB Co., Zürich (http://www.ABB.com)
Process: Automatic painting under automatic control of car body movement, door opening and closing, paint-pump functions, fast robot motions to optimize finish quality, and minimize paint waste
Platform: Automatic tools, machines, and robots, including sensors, conveyors, spray painting equipment, and integration with planning and programming software systems
Autonomy: Flexibility of motions; collision avoidance; coordination of conveyor moves, robots' motions, and paint pump operations; and programmability of process and line operations
Collaborative automation: Robots collaborate in real time with other robots and material handling equipment to adjust their own operation speed
A relative comparison between the application flexibility of alternative designs is relevant mostly for the same domain of automation solutions. For instance, in Fig. 1.16 it is the domain of machining. Adaptation flexibility is a measure of the time duration and the cost incurred for an automation device or system to transition from one given work state to another. Adaptation flexibility can also be measured only relatively, by comparing one automation device or system with another, and only for one defined change of state at a time. The change of state involves being in one possible state prior to the transition, and one possible state after it. A relative estimate of the two flexibility measures (dimensions) for several implementations of machine tools automation is illustrated in Fig. 1.16. For generality, both measures are calibrated between 0 and 1.
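The two flexibility measures can be sketched as simple ratios, assuming the interpretations given above. The normalization chosen for adaptation flexibility and the machining numbers in the example are illustrative assumptions, since the text only states that both measures are compared relatively and calibrated between 0 and 1.

```python
# Illustrative sketch of the two flexibility measures discussed above.
# The normalization used for adaptation flexibility and the machining numbers
# below are assumptions for this sketch; the chapter only states that both
# measures are compared relatively and calibrated between 0 and 1.

def application_flexibility(task_class, can_automate):
    """Fraction of tasks in the given class that can be carried out automatically."""
    handled = sum(1 for task in task_class if can_automate(task))
    return handled / len(task_class)

def adaptation_flexibility(transition_effort, worst_case_effort):
    """Relative ease of one defined change of state: 1 = instantaneous, 0 = worst case."""
    return max(0.0, 1.0 - transition_effort / worst_case_effort)

# Hypothetical machining example: 80 of 100 part types can be produced automatically,
# and a changeover takes 2 h on this machine vs. 8 h on the least flexible alternative.
parts = list(range(100))
print(application_flexibility(parts, lambda p: p < 80))   # 0.8
print(adaptation_flexibility(2.0, 8.0))                   # 0.75
```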
Fig. 1.9 A robotic painting line: (a) the facility; (b) programmer using interface to plan offline or online, experiment, optimize, and verify control programs for the line; and (c) robotic painting facility design simulator. (With permission from ABB)

1.4.1
Degree of Automation
Another dimension of automation, besides measures of its inherent flexibility, is the degree of automation. Automation can mean fully automatic or semiautomatic devices and systems, as exemplified in case 1 (Sect. 3), with the steam turbine speed governor, and in case 8 (Sect. 3), with a mix of robots and operators in elevator production. When a device or system is not fully automatic, meaning that some, or more frequent, human intervention is required, they are considered automated, or semiautomatic, equivalent terms implying partial automation. A measure of the degree of automation, between fully manual and fully automatic, has been used to guide the design rationalization and compare between alternative solutions. Progression in the degree of automation in machining is illustrated in Fig. 1.16.
Fig. 1.10 Pharmaceutical pad-stick automatic assembly cell
Case 7: Assembly Automation (Fig. 1.10)
Source: Courtesy of Adapt Automation Inc., Santa Ana, California (http://www.adaptautomation.com/Medical.html)
Process: Hopper and two bowl feeders feed specimen sticks through a track to where they are picked and placed, two up, into a double nest on a 12-position indexed dial plate. Specimen pads are fed and placed on the sticks. Pads are fully seated and inspected. Rejected and good parts are separated into their respective chutes
Platform: System of automatic tools, machines, and robots
Autonomy: Automatic control through a solid-state programmable controller which operates the sequence of device operations with a control panel. Control programs include main power, emergency stop, manual, automatic, and individual operations controls
Collaborative automation: Interactive interfaces enable reprogramming
The increase of automated partial functions is evident when comparing the drilling machine with the more flexible machines that can also drill and, in addition, are able to perform other processes such as milling. The degree of automation can be defined as the fraction of automated functions out of the overall functions of an installation or system. It is calculated as the ratio between the number of automated operations and the total number of operations that need to be performed, resulting in a value between 0 and 1. Thus, for a device or system with partial automation, where not all operations or functions are automatic, the degree of automation is less than 1. Practically, there are several methods to determine the degree of automation. The derived value requires a description of the method assumptions and steps. Typically, the degree of automation is associated with characteristics of:
1. Platform (device, system, etc.)
2. Group of platforms
3. Location, site
4. Plant, facility
5. Process and its scope
6. Process measures, e.g., operation cycle
7. Automatic control
8. Power source
9. Economic aspects
10. Environmental effects
In determining the degree of automation of a given application, it must also be specified whether the following functions are to be considered:
1. Setup
2. Organization, reorganization
3. Control and communication
4. Handling (of parts, components, etc.)
5. Maintenance and repair
6. Operation and process planning
7. Construction
8. Administration
Fig. 1.11 Elevator swing return computer-integrated production system: Three levels of automation systems are integrated, including (1) link to computer-aided design (CAD) for individual customer elevator specifications and customized finish; (2) link to direct numerical control (DNC) of work cell machine and manufacturing activities; and (3) link to management information system (MIS) and computer-integrated manufacturing system (CIM) for accounting and shipping management. Operators shown in the layout can be human and/or robot. (Source: [14])

Case 8: Computer-Integrated Elevator Production (Fig. 1.11)
Source: [14]
Process: Fabrication, and production execution and management
Platform: Automatic tools, machines, and robots, integrated with system-of-systems, comprising a production/manufacturing installation, with automated material handling equipment, fabrication and finishing machines and processes, and software and communication systems for production planning and control, robotic manipulators, and cranes. Human operators and supervisors are also included
Autonomy: Automatic control, including knowledge-based control of laser, press, buffing, and sanding machines/cells; automated control of material handling and material flow, and of management information flow
Collaborative automation: Automation systems, including CAD, DNC, MIS, and CIM, interact through cyber collaborative protocols to coordinate and optimize concurrent design, concurrent engineering, and workflow automation
For example, suppose we consider the automation of digital document workflow system (case 10, Sect. 3) which is limited to only the scanning process, thus omitting other workflow functions such as document feeding, joining, virtual inspection, and failure recovery. Then if the scanning is automatic, the degree of automation would be 1. However, if the other functions are also considered and they are not automatic, then the value would be less than 1. Methods to determine the degree of automation divide into two categories: • Relative determination applying a graded scale, containing all the functions of a defined domain process, relative to a defined system and the corresponding degrees of automation. For any given device or system in this domain, the degree of automation is found through comparison with the graded scale. This procedure is similar to other graded scales, e.g., Mohs’ hardness scale and Beaufort wind speed scale. This method is illustrated in Fig. 1.17,
which shows an example of the graded scale of mechanization and automation, following the scale developed by Bright [17].
• Relative determination by a ratio between the autonomous and nonautonomous measures of reference. The most common measure of reference is the number of decisions made during the process under consideration (Table 1.9). Other useful measures of reference for this determination are the comparative ratios of:
  – Rate of service quality
  – Human labor
  – Time measures of effort
  – Cycle time
  – Number of mobility and motion functions
  – Program steps

To illustrate the method in reference to decisions made during the process, consider an example for case 2 (Sect. 3). Suppose in the bioreactor system process there is a total of seven decisions made automatically by the devices and five made by a human laboratory supervisor. Because these decisions are not similar, the degree of automation cannot be calculated simply as the ratio 7/(7 + 5) ≈ 0.58. The decisions must be weighted by their complexity, which is usually assessed by the number of control program commands or steps (Table 1.9). Hence, the degree of automation can be calculated as:
Fig. 1.12 (a) Municipal water treatment system in compliance with the Safe Drinking Water Federal Act. (Courtesy of City of Kewanee, IL; Engineered Fluid, Inc.; and Rockwell Automation Co.). (b) Wastewater treatment and disposal. (Courtesy of Rockwell Automation Co.)
Case 9 Water Treatment (Fig. 1.12)
Source: Rockwell Automation Co., Cleveland (www.rockwellautomation.com)
Process: Water treatment by a reverse osmosis filtering system, and wastewater treatment and disposal. When preparing to filter impurities from the city water, the controllers activate the pumps, which in turn flush wells to clean the water sufficiently before it flows through the filtering equipment, or activate the complete system for removal of grit and sediments and disposal of sludge, to clean the water supply (Fig. 1.12)
Platform: Installation including a water treatment plant with a network of pumping stations, integrated with programmable control, supervisory control and data acquisition (SCADA), remote communication software system, and human supervisory interfaces
Autonomy: Monitoring and tracking the entire water treatment and purification system
Collaborative automation: Human-automation interaction and automated supervisory control protocols enable error prevention, recovery, and dynamic activation of backup resources, to assure the quality of treated water
degree of automation = (sum of decision steps made automatically by devices) / (total sum of decision steps made) = 82/(82 + 178) ≈ 0.32.

Now designers can compare this automation design against relatively more and less elaborate options. Rationalization will have to assess the costs, benefits, risks, and acceptability of the degree of automation for each alternative design. Whenever the degree of automation is determined by a method, the following conditions must be followed:
1. The method is able to reproduce the same procedure consistently.
2. Comparison is only done between objective measures.
3. Degree values should be calibrated between 0 and 1 to simplify calculations and relative comparisons.
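The weighted-ratio calculation above is easy to automate. The following short Python sketch is an illustration only; the function and variable names are ours, not part of any tool described in this chapter, and the decision weights are the program-step counts from the example of Table 1.9.

# Degree of automation as the complexity-weighted share of automatic decisions.
# Weights are the number of control program steps per decision (see Table 1.9).
def degree_of_automation(auto_steps, human_steps):
    """Return the weighted degree of automation in the range [0, 1]."""
    auto_total = sum(auto_steps)            # decision steps made automatically
    total = auto_total + sum(human_steps)   # total decision steps in the process
    return auto_total / total

# Example values from Table 1.9: seven automatic and five human decisions.
automatic = [10, 12, 14, 9, 14, 11, 12]     # sums to 82
human = [21, 3, 82, 45, 27]                 # sums to 178
print(round(degree_of_automation(automatic, human), 2))   # prints 0.32

The same routine can be rerun for alternative designs (more or fewer automatic decisions) to compare their degrees of automation, as discussed above.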
1.4.2 Levels of Automation, Intelligence, and Human Variability
There is obviously an inherent relation between the level of automation flexibility, the degree of automation, and the level of intelligence of a given automation application. While there are no absolute measures of any of them, their meaning is useful and intriguing to inventors, designers, users, and clients of automation. Levels of automation are shown in Table 1.10 based on the intelligent human ability they represent or replicate.
It is interesting to note that the progression in our ability to develop and implement higher levels of automation follows the progress in our understanding of relatively more complex platforms; more elaborate control, communication, cyber, and solutions of computational complexity; process and operation programmability; and our ability to generate renewable, sustainable, and mobile power sources.
1.5 Worldwide Surveys: What Does Automation Mean to People?
With known human variability, we are all concerned about automation, enjoy its benefits, and wonder about its risks. But all individuals do not share the same attitude toward automation equally, and it does not mean the same to everyone. We often hear people say: • I’ll have to ask my granddaughter to program the video recorder. • I hate my cellphone. • I can’t imagine my life without a smartphone. • Blame those dumb computers. • Sorry, I do not use elevators, I’ll climb the six floors and meet you up there! etc. In an effort to explore the meaning of automation to people around the world, a random, nonscientific survey was conducted during 2020–2021 by the author and his team, with the help of the following colleagues who coordinated surveys in the respective countries: Carlos E. Pereira, Jose Reinaldo Silva (Brazil), Jose A. Ceroni (Chile), George Adamides (Cyprus), Ruth Bars (Hungary), Praditya Ajidarma (Indonesia), Sigal Berman, Nir Shvalb (Israel), Funaki Kenichi, Masayuki Matsui, Ryosuke Nakajima, Katsuhiko Takahashi, Kinya Tamaki, and Tetsuo Yamada (Japan), Rashed Rabata (Jordan, UAE, and USA), Jeong Wootae (Korea), Arturo Molina (Mexico), Anthony S.F. Chiu (Philippines), AnaMaria Suduc, Florin G. Filip (Romania), Chin-Yin Huang (Taiwan), Oak Puwadol Dusadeerungsikul (Thailand), Xin W. Chen, and Seokcheon Lee (USA). Since the majority of the survey participants are students, undergraduate, and graduate, and since they migrate globally, the respondents actually originate from all continents. In other words, while it is not a scientific survey, it carries a worldwide meaning. In the current survey, we invited a wider range of respondents from more countries, compared with the original edition’s survey. The new survey population includes 506 respondents from three general regions: Americas; Asia and Pacific; Europe and Middle East; and two respondents’ categories:
Fig. 1.13 Document imaging and color printing workflow: (a) the chaotic workflow (multiple steps and software, with separate preflight, edit, join, color manage, impose, convert, save, and notify steps for mono digital, color digital, and offset output) compared with the streamlined FreeFlow workflow (one streamlined solution); (b) detail shows document scanner workstation with automatic document feeder and image processing. (Courtesy of Xerox Co., Norwalk)
Case 10 Digital Document Workflow (Fig. 1.13)
Source: Xerox Co., Norwalk (http://www.xerox.com)
Process: On-demand customized processing, production, and delivery of print, email, and customized websites
Platform: Network of devices, machines, robots, and software systems integrated within a system-of-systems with media technologies
Autonomy: Integration of automatic workflow of document image capture, processing, enhancing, preparing, producing, and distributing
Collaborative automation: Automatically provide guidance for a human repair person to overcome problems and prevent failures in the operation
1. Undergraduate and graduate students of engineering, science, management, and medical sciences (S = 395)
2. Nonstudents, professionals in automation (P = 111)

Five questions were posed in this survey:
• How do you define automation (do not use a dictionary)?
• When and where did you encounter and recognize automation first in your life (probably as a child)?
• What do you think is the major benefit of automation to humankind (only one)?
• What do you think is the major risk of automation to humankind?
• Give your best example of automation
Fig. 1.14 Automation and control systems in shipbuilding: (a) production management through enterprise resource planning (ERP) systems. Manufacturing automation in shipbuilding examples: (b) overview; (c) automatic panel welding robots system; (d) sensor application in membrane tank fabrication; (e) propeller grinding process by robotic
automation. Automatic ship operation systems examples: (f) overview; (g) alarm and monitoring system; (h) integrated bridge system; (i) power management system; and (j) engine monitoring system. (Source: [15]) (with permission from Hyundai Heavy Industries)
Case 11 Shipbuilding Automation (Fig. 1.14)
Source: [15]; Korea Shipbuilders Association, Seoul (http://www.koshipa.or.kr); Hyundai Heavy Industries, Co., Ltd., Ulsan (http://www.hhi.co.kr)
Process: Shipbuilding manufacturing process control and automation; shipbuilding production, logistics, and service management; and ship operations management
Platform: Devices, tools, machines, and multiple robots; system-of-software systems; and system-of-systems
Autonomy: Automatic control of manufacturing processes and quality assurance; automatic monitoring, planning, and decision support software systems; integrated control and collaborative control for ship operations and bridge control of critical automatic functions of engine, power supply systems, and alarm systems
Collaborative automation: Automation systems collaborate at the enterprise resource planning level to automatically prevent errors and recover from schedule conflicts; at the manufacturing automation level, by applying best matching workflow protocols to select and assign available tools to robots' required tasks; and at the ship operating system level to automatically activate dynamic lines of collaboration among repair teams
The responses are summarized in five tables (Tables 1.11, 1.12, 1.13, 1.14, and 1.15). Note that not all survey participants chose to respond to all five questions. To enable effective analysis and presentation of the survey results, responses to each of the five questions were categorized into groups based on the similarity of their meanings. (Percentages shown in the five survey tables are of the number of S or P respondents, respectively, who responded to a question, out of all S or P respondents in that region.)

1.5.1 How Do We Define Automation?

The top overall and regional response to this question is (Table 1.11, no. 1): partially or fully replace human work (variation: partially or fully replace human work without or with little human participation); 36% total worldwide. This response reflects a meaning that corresponds with the definition in the beginning of this chapter, described as incomplete: Most modern automation enables work that people cannot perform by themselves, as reflected in all other respondents' definitions (Table 1.11, nos. 2–5). In the previous edition survey, it was also the top response worldwide and regionally, at 51% total worldwide. Overall, the five types of definition meanings follow three main themes: automation replaces humans, works without them, or augments their functions (responses no. 1 and 4, total 50%); how automation works (responses no. 2 and 5, total 33%); and automation improves work and systems (response no. 3, total 17%).

As indicated above, the overall most popular response (no. 1; 36%) is based on a partially wrong understanding. Replacing human work may represent a legacy fear of automation, lacking recognition that most automation applications perform tasks humans cannot accomplish. The latter is also a partial, yet positive, meaning of automation as addressed by the responses "Automation helps do things humans cannot do" (included as one of the responses counted in response no. 4). Responses no. 2 and 5 (total 33%) and response no. 3 (total 17%) represent a factual meaning of how automation is implemented, and its impact. They focus on automation enablers, such as information technology, sensors, and computer control, and imply that automation means improvements, assistance, and promotion of human value. Response no. 3 is found in Asia and Pacific (S = 18%, P = 24%) and Europe and Middle East (S = 18%, P = 24%), yet is scarce in Americas (S = 8%, P = 0%), which may reflect cultural meaning more than a definition.

1.5.2 When and Where Did We Encounter Automation First in Our Life?

The top answer to this question, worldwide, is: automated manufacturing machinery or factory (19%), followed closely by transportation examples (18%). In the Americas region, however, the top category for the first automation encounter is computing and communication, perhaps because of the cellphone impact. It is also the second top category in Europe and Middle East (even though this category is only fourth worldwide). This question presents an interesting change from the previous survey. The top answer now, automated manufacturing machinery or factory, was then the top answer only in the Americas, and only the third in the other two regions. In those regions, the top answers were shared between transportation examples and vending machines, which appear now as second and third.
1.5.3 What Do We Think Is the Major Impact/Contribution of Automation to Humankind?
Six groups of impact or benefit types are found in the survey responses overall. The most inspiring impact and benefit of automation found is the answer in group no. 4, encourage/inspire creative work; inspire newer solutions. The most popular response is (both now and in the previous survey): Save time, increase productivity/efficiency; 24/7 operations (31% overall). It is the consistent top response in all three regions.
1.5.4 What Do We Think Is the Major Risk of Automation to Humankind?
The automation risks found in all regions are distributed over a wide range of responses that are classified in six categories. The top automation risks identified in the three regions are: job loss or unemployment, in Asia and Pacific as well as in Europe and Middle East; and safety or war, in Americas, mostly associated with loss of control. These top two automation risks
are followed, as the second overall risk category, by the risk of dependency on automation, in Asia and Pacific, and in Europe and Middle East. Interestingly, that risk is almost not considered in Americas, whereas there the third top category is no risk, which is almost not considered in Europe and Middle East. Another interesting note is that while 36% of worldwide respondents defined automation as fully or partially replacing human work, only 29% identified the major risk of automation to be loss of job or employment.
Fig. 1.15 Integrated power substation control system: (a) overview; (b) substation automation platform chassis. (Courtesy of GE Energy Co., Atlanta) (LAN local area network, DA data acquisition, UR universal relay, MUX multiplexor, HMI human-machine interface)
Case 12 Energy Power Substation Automation (Fig. 1.15)
Source: GE Energy Co., Atlanta (http://www.gepower.com/prod_serv/products/substation_automation/en/downloads/po.pdf)
Process: Automatically monitoring and activating backup power supply in case of breakdown in the power generation and distribution. Each substation automation platform has processing capacity to monitor and control thousands of input–output points and intelligent electronic devices over the network
Platform: Devices integrated with a network of system-of-systems, including substation automation platforms, each communicating with and controlling thousands of power network devices
Autonomy: Power generation, transmission, and distribution automation, including automatic steady voltage control, based on user-defined targets and settings; local/remote control of distributed devices; adjustment of control set points based on control requests or control input values; automatic reclosure of tripped circuit breakers following momentary faults; automatic transfer of load and restoration of power to nonfaulty sections if possible; automatically locating and isolating faults to reduce customers' outage times; and monitoring a network of substations and moving load off overloaded transformers to other stations as required
Collaborative automation: This case illustrates collaborative automation at multiple levels: at a local substation, at a regional energy grid, in interaction with other energy grids, and with cyber-physical infrastructure; at each level, optimizing operations and cyber collaborative interactions among the substation devices and other networked subsystems
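The automatic reclosure behavior mentioned under Autonomy can be pictured with a small state-machine sketch. The Python code below is a generic, simplified recloser policy; the class name, timing value, and lockout threshold are illustrative assumptions and not GE's implementation. After a fault trips the breaker, the controller waits briefly and recloses; if the fault persists after a set number of attempts, the breaker locks out and waits for a repair crew.

# Simplified recloser logic for a single circuit breaker (illustrative only).
class Recloser:
    def __init__(self, max_attempts=3, reclose_delay_s=2.0):
        self.max_attempts = max_attempts      # reclose attempts before lockout
        self.reclose_delay_s = reclose_delay_s
        self.attempts = 0
        self.state = "CLOSED"                 # CLOSED, OPEN, or LOCKED_OUT

    def on_fault(self):
        """Protection relay detected a fault: trip the breaker."""
        if self.state == "CLOSED":
            self.state = "OPEN"

    def on_timer(self, fault_still_present):
        """Called after reclose_delay_s while the breaker is open."""
        if self.state != "OPEN":
            return
        if self.attempts >= self.max_attempts:
            self.state = "LOCKED_OUT"         # persistent fault: wait for crew
        elif fault_still_present:
            self.attempts += 1                # fault not cleared: count and retry later
        else:
            self.state = "CLOSED"             # momentary fault cleared: restore supply
            self.attempts = 0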
1.5.5 What Do We Think Is the Best Example of Automation?
Finally, ten areas of best automation examples are found in the survey. Overall, the majority of the best examples are of automation in transportation, followed by manufacturing line and machinery, even though in Asia and Pacific the order is reversed. The next three areas are human-assistive technologies, data analysis or coding, and smart home technologies, even though the latter area was ignored in Americas. In Asia and Pacific, however, the best examples in human-assistive technologies come third, followed by smart home technologies.
1.6 Emerging Trends
Many of us perceive the meaning of the automatic and automated factories and gadgets of the twentieth and twenty-first centuries as outstanding examples of the human spirit and human ingenuity, no less than art; their disciplined organization and synchronized complex of carefully programmed functions and services mean to us harmonious expression, similar to good music (when they work). Clearly, there is a mixture of emotions toward automation: Some of us are dismayed that humans cannot usually be as accurate, or as fast, or as responsive, attentive, and indefatigable as automation systems and installations. On the other hand, we sometimes hear the word automaton or robot describing a person or an organization that lacks consideration and compassion, let alone passion. Let us recall that automation is made by people and for the people. But can it run away by its own autonomy and become undesirable? Future automation will advance in micro- and nanosystems and systems-of-systems. Bioinspired automation and bioinspired collaborative control theory will significantly improve artificial intelligence, and the quality of robotics and automation, as well as the engineering of their safety and security. In this context, it is interesting to examine the role of automation in the twentieth and twenty-first centuries.
1.6.1 Automation Trends of the Twentieth and Twenty-First Centuries
The US National Academy of Engineering, which includes US and worldwide experts, compiled the list shown in Table 1.16 as the top 20 achievements that have shaped a century and changed the world [19]. The table adds columns indicating the role that automation has played in each achievement, and clearly automation has been relevant in all of them and essential to most of them. The US National Academy of Engineering has also compiled a list of the grand challenges for the twenty-first century. These challenges are listed in Table 1.17 with the anticipated and emerging role of automation in each. Again, automation is relevant to all of them and essential to most of them. Some of the main trends in automation are described next.
1.6.2 Bioautomation
Bioinspired automation, also known as bioautomation or evolutionary automation, is emerging based on the trend of bioinspired computing, control, and AI. They influence traditional automation and artificial intelligence in the methods they offer for evolutionary machine learning, as opposed to what can be described as generative methods (sometimes called creationist methods) used in traditional programming and learning. In traditional methods, intelligence is typically programmed from top down: Automation engineers and programmers create and implement the automation logic, and define the scope, functions, and limits of its intelligence. Bioinspired automation, on the other hand, is also created and implemented by automation engineers and programmers, but follows a bottom-up decentralized and distributed approach. Bioinspired techniques often involve a method of specifying a set of simple rules, followed and iteratively applied by a set of simple and autonomous human-made organisms. After several generations of rule repetition, initially human-made mechanisms of self-learning, self-repair, and self-organization enable self-evolution toward more complex behaviors. Complexity can result in unexpected behaviors, which may be robust and more reliable, can be counterintuitive compared with the original design, but can potentially become undesirable, out-of-control, and unsafe behaviors. This subject has been under intense research and examination in recent years.

Natural evolution and system biology (biology-inspired automation mechanisms for systems engineering) are the driving analogies of this trend: concurrent steps and rules of responsive selection, interdependent recombination, reproduction, mutation, reformation, adaptation, and death and birth can be defined, similar to how complex organisms function and evolve in nature. Similar automation techniques are used in genetic algorithms, artificial neural networks, swarm algorithms, reinforcement machine learning, and other emerging evolutionary automation systems. Mechanisms of self-organization, parallelism, fault tolerance, recovery, backup, and redundancy are being developed and researched for future automation, in areas such as neurofuzzy techniques, biorobotics, digital organisms, artificial cognitive models and architectures, artificial life, bionics, and bioinformatics. See related topics in many following handbook chapters, particularly 8, 9, 10, 21, and 55.
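As a concrete, if toy-sized, illustration of the bottom-up approach described above, the short Python sketch below evolves a population of bit-string "organisms" using only simple, locally applied rules of selection, recombination, and mutation. It is a generic genetic-algorithm example; the fitness function, parameter values, and names are illustrative assumptions, not a system from the works discussed in this chapter.

import random

# Toy genetic algorithm: evolve bit strings toward all ones using only
# simple iterated rules (selection, crossover, mutation).
GENES, POP, GENERATIONS, MUTATION = 20, 30, 40, 0.02

def fitness(ind):
    return sum(ind)                      # illustrative goal: maximize the number of ones

def crossover(a, b):
    cut = random.randint(1, GENES - 1)   # single-point recombination
    return a[:cut] + b[cut:]

def mutate(ind):
    return [1 - g if random.random() < MUTATION else g for g in ind]

population = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
for _ in range(GENERATIONS):
    # Responsive selection: fitter individuals are more likely to reproduce.
    population.sort(key=fitness, reverse=True)
    parents = population[: POP // 2]
    population = [mutate(crossover(random.choice(parents), random.choice(parents)))
                  for _ in range(POP)]

print(max(fitness(ind) for ind in population))   # typically close to GENES

No individual rule "knows" the goal; competent behavior emerges from repeated local rules, which is the bottom-up character contrasted above with top-down programmed intelligence.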
Fig. 1.16 Application flexibility and adaptation flexibility in machining automation. (After [16]) The figure positions, in increasing order of both application flexibility (multiple applications) and adaptation flexibility (within an application): a drilling machine with conventional control, a "universal" milling machine with conventional control, a milling machine with programmable control (NC), a machining center with numerical control (NC), and an NC machining center with adaptive and optimal control.
Fig. 1.17 Automation scale: scale for comparing grades of automation. (After [17]) Steps of the automation scale: 1 manual; 2 manual hand tools; 3 powered hand tools; 4 machine tools, manually controlled; 5 powered tool, fixed cycle, single function; 6 powered tool, programmed control with a sequence of functions; 7 machine system with remote control; 8 machine actuated by introduction of work-piece or material; 9 measures characteristics of the execution; 10 signals pre-selected values of measurement, including error correction; 11 registers execution; 12 changes speed, position, and direction according to the measured signal; 13 segregates or rejects according to measurements; 14 identifies and selects operations; 15 corrects execution after the processing; 16 corrects execution while processing; 17 foresees the necessary working tasks, adjusts the execution; 18 prevents error and self-optimizes current execution.
Table 1.9 Degree of automation: calculation by the ratio of decision types (example)

Automatic decisions
  Decision number:                        1   2   3   4   5   6   7   Sum
  Complexity (number of program steps):  10  12  14   9  14  11  12    82

Human decisions
  Decision number:                        8   9  10  11  12  Sum
  Complexity (number of program steps):  21   3  82  45  27  178

Total sum of decision steps: 260

Table 1.10 Levels of automation. (Updated and expanded after [18])
A0 Hand tool, manual machine. Automated human attribute: none. Examples: knife, scissors, and wheelbarrow
A1 Powered machine tools (non-NC). Automated human attribute: energy, muscles. Examples: electric hand drill, electric food processor, and paint sprayer
A2 Single-cycle automatics and hand-feeding machines. Automated human attribute: dexterity. Examples: pipe threading machine and machine tools (non-NC)
A3 Automatics, repeated cycles. Automated human attribute: diligence. Examples: engine production line, automatic copying lathe, automatic packaging, NC machine, and pick-and-place robot
A4 Self-measuring and adjusting, feedback. Automated human attribute: judgment. Examples: feedback about product: dynamic balancing, weight control; feedback about position: pattern-tracing flame cutter, servo-assisted follower control, self-correcting NC machines, and spray painting robot
A5 Computer control, automatic cognition. Automated human attribute: evaluation. Examples: rate of feed cutting, maintaining pH, error compensation, turbine fuel control, interpolator
A6 Limited self-programming. Automated human attribute: learning. Examples: sophisticated elevator dispatching, telephone call switching systems, artificial neural network models
A7 Relating cause from effects. Automated human attribute: reasoning. Examples: sales prediction, weather forecasting, lamp failure anticipation, actuarial analysis, maintenance prognostics, and computer chess playing
A8 Unmanned mobile machines. Automated human attribute: guided mobility. Examples: autonomous vehicles and drones, nano-flying exploration monitors
A9 Collaborative networks. Automated human attribute: collaboration. Examples: Internet, collaborative supply networks, collaborative sensor networks, and networked learning
A10 Originality. Automated human attribute: creativity. Examples: computer systems to compose music, design fabric patterns, formulate new drugs, play with automation, e.g., virtual reality games; adaptive avatar tutors
A11 Human needs and animal needs support. Automated human attribute: compassion. Examples: bioinspired robotic seals (aquatic mammal) to help emotionally challenged individuals, social robotic pets, assistive and nursing devices
A12 Interactive companions. Automated human attribute: humor. Examples: humorous gadgets, e.g., sneezing tissue dispenser, automatic systems to create/share jokes, and interactive comedian robot
(NC, numerically controlled; non-NC, manually controlled)

Table 1.11 How do you define automation (do not use a dictionary)? [S: Students; P: Professionals] Values are percentages per region; the last three values are worldwide S, P, and total.

Automation definition responses | Americas S/P | Asia and Pacific S/P | Europe and Middle East S/P | Worldwide S/P/Total
1. Partially or fully replaces human work without or with little human participation (a) | 61/70 | 38/6 | 30/30 | 40/22/36
2. Uses machines/computers/robots/intelligence to execute or help execute physical operations, computational commands, or tasks/mechanisms of automatic machines | 30/10 | 16/20 | 26/32 | 20/24/21
3. Improves work/system in terms of labor, time, money, quality, and productivity, and promotes human value | 8/0 | 18/24 | 14/19 | 16/20/17
4. Functions and actions that assist humans/enable humans to perform multiple actions/help do things humans cannot do | 2/20 | 20/18 | 12/3 | 15/12/14
5. Integrated system of sensors, actuators, and controllers/information technology for organizational change | 0/0 | 8/38 | 17/16 | 8/26/12
Total no. of respondents (491) | 61/10 | 264/50 | 69/37 | 394/97
(a) Note: This definition is inaccurate; most automation accomplishes work that humans cannot do, or cannot do effectively
Table 1.12 Where did you encounter and recognize automation first in your life (probably as a child)? [S: Students; P: Professionals] Responses are tallied, as in Table 1.11, by region (Americas; Asia and Pacific; Europe and Middle East) and worldwide, for the following categories of first encounter with automation:
1. Production machinery: automated manufacturing machine, factory/robot/agricultural combine/fruit classification machine/milking machine/automated grinding machine for sharpening knives
2. Transportation: car, truck, motorcycle, and components or garbage truck/train (unmanned)
3. Service: vending machine: snacks, candy, drink, tickets/automatic teller machine (ATM)/automatic check-in at airports/barcode scanner/automatic bottle filling (with soda or wine)/automatic toll collection/self-checkout machine
4. Computing and communication: computer (software), e.g., Microsoft Office, email, programming language/movie, TV/game machine/calculator/clock, watch/tape recorder, player/telephone, answering machine/radio/video cassette recorder (VCR)
5. Building: automatic door (pneumatic; electric)/elevator/escalator/pneumatic door of the school bus
6. Home: washing machine/dishwasher/microwave/air conditioner/home automation/kitchen mixer/oven/toaster/bread machine/electric shaver/food processor/kettle/treadmill
7. Game: toy/amusement park/Lego (with automation)
8. Health care: medical equipment in the birth delivery room/X-ray machine/centrifuge/oxygen device/pulse recorder/thermometer/ultrasound machine
9. Delivery: automatic car wash/conveyor (to deliver food to chickens)/luggage, baggage sorting machine/water delivery
10. Light and power: fuse breaker/light by electricity/sprinkler/switch (power, light)/traffic light
Total no. of respondents (496): Americas 63 S, 10 P; Asia and Pacific 265 S, 53 P; Europe and Middle East 67 S, 38 P; worldwide 395 S, 101 P
Table 1.13 What do you think is the major impact/contribution of automation to humankind (only one)? [S: Students; P: Professionals] Responses are tallied, as in Table 1.11, by region and worldwide, for the following categories of major impact or contribution:
1. Save time, increase productivity/efficiency; 24/7 operations/mass production and service/increase consistency, improve quality/save cost/improve security and safety/deliver products, service to more people/flexibility in manufacturing/save resources
2. Advance everyday life/improve quality of life/convenience/ease of life/work/save labor/prevent people from dangerous activities/save lives/extend life expectancy/agriculture improvement
3. Medicine/medical equipment/medical system/biotechnology/health care/computer/transportation (e.g., train, traffic lights)/communication (devices)/manufacturing (machines)/robot; industrial robot/banking system/construction/loom/weather prediction
4. Encourage/inspire creative work; inspire newer solutions/change/improvement in (global) economy/foundation of industry/growth of industry/globalization and spread of culture and knowledge/Industrial Revolution
5. Detect errors in health care, flights, factories/reduce (human) errors/identify bottlenecks in production
6. Assist/work for people/do things that humans cannot do/replace people; people lose jobs/help aged/handicapped people/people lose abilities to complete tasks
Total no. of respondents (502): Americas 62 S, 10 P; Asia and Pacific 266 S, 53 P; Europe and Middle East 67 S, 44 P; worldwide 395 S, 107 P
Table 1.14 What do you think is the major risk of automation to humankind (only one)? [S: Students; P: Professionals] Values are percentages per region; the last three values are worldwide S, P, and total.

Major risk of automation | Americas S/P | Asia and Pacific S/P | Europe and Middle East S/P | Worldwide S/P/Total
1. Job loss/unemployment | 31/30 | 27/24 | 42/26 | 31/25/29
2. Dependency | 2/0 | 27/24 | 26/29 | 23/13/23
3. Safety/war | 28/50 | 20/20 | 14/14 | 20/21/20
4. Social/economic inequality | 7/10 | 18/20 | 12/11 | 16/16/15
5. Maintenance | 8/0 | 5/6 | 5/17 | 5/9/6
6. No risk | 25/10 | 2/12 | 2/3 | 6/8/6
Total no. of respondents (489) | 61/10 | 266/51 | 66/35 | 393/96
Table 1.15 What do you think is the best example of automation (only one)? [S: Students; P: Professionals] Values are percentages per region; the last three values are worldwide S, P, and total.

Best example of automation | Americas S/P | Asia and Pacific S/P | Europe and Middle East S/P | Worldwide S/P/Total
1. Transportation | 44/40 | 30/12 | 20/31 | 31/29/29
2. Manufacturing line/machinery | 28/20 | 27/25 | 20/14 | 26/20/25
3. Human-assistive technologies | 10/0 | 15/17 | 5/8 | 13/8/12
4. Data analysis/coding | 15/30 | 7/15 | 18/14 | 10/16/11
5. Smart home technologies | 0/0 | 9/25 | 14/17 | 6/19/11
6. Kitchen appliances | 3/0 | 6/2 | 9/0 | 6/1/5
7. Safety/war | 0/10 | 3/0 | 0/8 | 2/4/2
8. Medical equipment | 0/0 | 2/0 | 5/6 | 2/2/2
9. Banking/finance | 0/0 | 1/4 | 5/3 | 1/3/2
10. Agriculture | 0/0 | 1/2 | 5/0 | 2/1/1
Total no. of respondents (502) | 62/10 | 266/53 | 67/44 | 395/107
1.6.3 Collaborative Control Theory and Collaborative Automation
Collaboration of humans and its advantages and challenges are well known from prehistory and throughout history, but have received increased attention with the advent of communication and cyber technologies. Significantly better enabled and potentially streamlined and even optimized through e-collaboration (based on communication via electronic means), and further enhanced by cyber collaborative techniques, it is emerging as one of the most powerful trends in automation, with telecommunication, computer communication, and wireless communication influencing education and research, engineering and business, health care, telemedicine, and service industries, and global society in general. Those developments, in turn, motivate and propel further applications and theoretical investigations into this highly intelligent level of automation (Table 1.10, level A9 and higher).
Table 1.16 Top engineering achievements in the twentieth century [19] and the role of automation (for each achievement, the table marks whether automation has been essential, relevant, supportive, or irrelevant to it):
1. Electrification
2. Automobile
3. Airplane
4. Water supply and distribution
5. Electronics
6. Radio and television
7. Agricultural mechanization
8. Computers
9. Telephone
10. Air conditioning and refrigeration
11. Highways
12. Spacecraft
13. Internet
14. Imaging
15. Household appliances
16. Health technologies
17. Petroleum and petrochemical technologies
18. Laser and fiber optics
19. Nuclear technologies
20. High-performance materials
Table 1.17 Grand engineering challenges for the twenty-first century [20] and the anticipated and emerging role of automation (for each challenge, the table marks whether automation is expected to be essential, relevant, supportive, or irrelevant to it):
1. Make solar energy economical
2. Provide energy from fusion
3. Develop carbon sequestration methods
4. Manage the nitrogen cycle
5. Provide access to clean water
6. Restore and improve urban infrastructure
7. Advance health informatics
8. Engineer better medicines
9. Reverse-engineer the brain
10. Prevent nuclear terror
11. Secure cyberspace
12. Enhance virtual reality
13. Advance personalized learning
14. Engineer the tools of scientific discovery

Interesting examples of the e-collaboration trend include wikis, which since the early 2000s have been increasingly adopted by enterprises as collaborative software, enriching static intranets and the Internet with dynamic, assistive, timely updates. Examples of e-collaborative applications that emerged in the 1990s include project communication for coplanning, sharing the creation and editing of design documents as codesign and co-documentation, and mutual inspiration for collaborative innovation and invention through cobrainstorming. More recently, collaborative insight systems have emerged for group networking in specific domains and projects. During the recent pandemic, cyber-supported video meetings enabled international conferences and remote learning to fill in the gap created by the need for distancing and limited travel. Beyond human-human automation-supported collaboration through better and more powerful communication and cyber technologies, there is a well-known but not yet fully understood trend for collaborative e-work, and more recently, cyber collaborative e-work (also called cc-work). Associated with this field is collaborative control theory (CCT), which has been under development during the past 30 years. Collaborative e-work is motivated by significantly improved performance of humans leveraging their collaborative automatic software and physical agents. The latter, from software automata (e.g., constructive bots as opposed to spam and other destructive bots) to automation devices, multisensors, multiagents, and multirobots, can operate in a parallel, autonomous cyberspace, thus multiplying our productivity and increasing our ability to design sustainable systems, operations, and our life. A related important trend is the emergence of cyber collaborative middleware for collaboration support of cyber physical device networks and of human team networks, organizations, and enterprises. More about this subject area can be found in several chapters of this handbook, particularly in Chs. 9, 15, 17, 18, 21, 22, and 62.
1.6.4 Risks of Automation
As civilization increasingly depends on automation and looks for automation to support solutions of its serious problems, the risks associated with automation must be understood and eliminated. Failures of automation on a very large scale
are most risky. Just a few examples of disasters caused by automation failures are nuclear accidents; power supply disruptions and blackout; Federal Aviation Administration control systems failures causing air transportation delays and shutdowns; wireless communication network failures; and water supply and other supply chain failures. The impacts of severe natural and human-made disasters on automated infrastructure are therefore the target of intense research and development. In addition, automation experts are challenged to apply automation to enable sustainability and better mitigate and eliminate natural and human-made disasters, such as security, safety, and health calamities.
1.6.5 Need for Dependability, Survivability, Security, Continuity of Operation, and Creativity
Emerging efforts are addressing better automation dependability and security by structured backup and recovery of information and communication systems. For instance, with a service orientation that is able to survive, automation can enable gradual and degraded services by sustaining critical continuity of operations until the repair, recovery, and resumption of full services. Automation continues to be designed with the goal of preventing and eliminating any conceivable errors, failures, and conflicts, within economic constraints. In addition, the trend of collaborative flexibility being designed into automation frameworks encourages reconfiguration tools that redirect available, safe resources to support the most critical functions, rather than designing absolutely failure-proof systems. With the trend toward collaborative, networked automation systems, dependability, survivability, security, and continuity of operations are increasingly being enabled by autonomous self-activities, such as:
• Self-awareness and situation awareness
• Self-configuration
• Self-explaining and self-rationalizing
• Self-healing and self-repair
• Self-optimization
• Self-organization
• Self-protection for security
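A minimal sketch of how such self-activities can sustain degraded but continuous service is shown below. It is a generic supervisory loop; the service names, priorities, and restart policy are illustrative assumptions, not part of any framework cited in this chapter.

# Generic self-healing supervisor: keep the most critical services running,
# shedding the least critical ones first when capacity is lost.
services = [
    {"name": "safety interlocks", "priority": 1, "healthy": True},
    {"name": "process control",   "priority": 2, "healthy": True},
    {"name": "reporting",         "priority": 3, "healthy": True},
]

def supervise(capacity):
    """Return which services to run given the surviving capacity (in units)."""
    running = []
    for svc in sorted(services, key=lambda s: s["priority"]):
        if not svc["healthy"]:
            svc["healthy"] = True          # self-repair: attempt a restart
        if capacity > 0:
            running.append(svc["name"])    # most critical functions first
            capacity -= 1
        # lower-priority services are shed: degraded but continuous operation
    return running

print(supervise(capacity=2))   # ['safety interlocks', 'process control']

The design choice illustrated here is the one described in the text: rather than aiming for an absolutely failure-proof system, available safe resources are redirected to the most critical functions until full service is restored.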
Other dimensions of emerging automation risks involve privacy invasion, electronic surveillance, accuracy and integrity concerns, intellectual and physical property protection and security, accessibility issues, confidentiality, etc. Increasingly, people ask about the meaning of automation, how can we benefit from it, trust it, yet find a way to contain its risks and limit its powers. At the extreme of this concern is the automation singularity [21, 22].
Automation singularity follows the evident acceleration of technological developments and discoveries. At some point, people ask, is it possible that superhuman machines can take over the human race? If we build them too autonomous, with collaborative ability to self-improve and self-sustain, would they not eventually be able to exceed human intelligence? In other words, superintelligent machines may autonomously, automatically, produce discoveries that are too complex for humans to comprehend; they may even act in ways that we consider out of control, chaotic, and even aimed at damaging and overpowering people. This emerging trend of thought will no doubt energize future research on how to prevent automation from running on its own without limits. In a way, this is the twenty-first-century version of the ancient and still existing human challenge of never play with fire.

Automation creativity is related to the concern with automation singularity, but in a positive way. People ask: Can automation be designed to be creative and generate "out of the box" solutions to our challenging problems? "Out of the box" here means solutions that humans have not prescribed to the automation, or cannot create by themselves due to various limitations. For example, to tackle the challenge of climate change, humans have designed solutions that are difficult to implement. Can collaborative automation, based on collaborative intelligence, generate creative, simple, or revolutionary solutions that are also practical? If successful, automation creativity can augment our human civilization by helping solve complex problems that require new ideas and perspectives. It would possibly be able to also solve the problems of automation singularity.
1.6.6 Quantum Computing and Quantum Automation
An emerging area of interest to automation scholars and engineers is the area of quantum computing, which has already seen intensive research and progress over the last three decades. Recent surveys (e.g., [23, 24]) indicate promising impacts over the next five to ten years and beyond. Quantum computing algorithms have shown significant superiority and faster computing relative to current computing in areas relevant to robotics and automation, for example: (1) integer and discrete algorithms; (2) solving problems with exponential queries; (3) general search, and search in unstructured data storage; and (4) solving graph theory problems, and more. There are challenges ahead, however, including reliable, stable, and scalable quantum computer hardware, and further progress required in algorithms, software, and operating systems. Nevertheless, the promised impacts on automation in an area we can term as quantum automation merit investigation and study of this field.
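To give a rough sense of the search speedups mentioned above, the toy calculation below compares the order-of-magnitude query counts of classical exhaustive search with the roughly square-root-of-N oracle queries generally attributed to Grover-type quantum search. The numbers are purely illustrative and say nothing about the capabilities of current quantum hardware.

import math

# Order-of-magnitude comparison of unstructured search effort:
# classical search needs about N lookups, Grover-type search about sqrt(N) queries.
for n in (1_000, 1_000_000, 1_000_000_000):
    classical = n
    quantum = int(math.isqrt(n))
    print(f"N = {n:>13,}: classical ~ {classical:,} queries, quantum ~ {quantum:,} queries")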
1.7 Conclusion
What is automation and what does it mean to people around the world? After review of the evolution of automation, its design and its influence on civilization, and of its main contributions and attributes, a survey is used to highlight the meaning of automation according to people around the world. Emerging trends in automation and concerns about automation are also described. They can be summarized as addressing three general questions:
1. How can automation be improved and become more useful and dependable?
2. How can we limit automation from being too risky when it fails?
3. How can we develop better automation that is more autonomous and performs better, yet does not take over our humanity?
These topics are discussed in detail in the chapters of this handbook.

Acknowledgments This chapter is a revised and updated version of the original edition chapter, so first I repeat the original acknowledgment, which appeared in 2009: "I received with appreciation help in preparing and reviewing this chapter from my graduate students at our PRISM Center, particularly Xin W. Chen, Juan Diego Velásquez, Sang Won Yoon, and Hoo Sang Ko, and my colleagues on our PRISM Global Research Network, particularly Professors Chin Yin Huang, José A. Ceroni, and Yael Edan. This chapter was written in part during a visit to the Technion, Israel Institute of Technology in Haifa, where I was inspired by the automation collection at the great Industrial Engineering and Management Library. Patient and knowledgeable advice by librarian Coral Navon at the Technion Elyachar Library is also acknowledged." For the new edition, I repeat my appreciation for the help I now received from the same colleagues listed above, and in addition, to Professor Avital Bechar, distinguished members of our Advisory Board, and several anonymous reviewers. Rashed Rabata, a former PRISM Center graduate researcher, helped me analyze the current automation survey.
References
1. Nof, S.Y.: Design of effective e-work: review of models, tools, and emerging challenges. Prod. Plann. Control. 14(8), 681–703 (2003)
2. Asimov, I.: Foreword. In: Nof, S.Y. (ed.) Handbook of Industrial Robotics, 2nd edn. Wiley, New York (1999)
3. Hearings on automation and technological change, Subcommittee on Economic Stabilization of the Joint Committee on the Economic Report, US Congress, October 14–28 (1955)
4. Buckingham, W.: Automation: Its Impact on Business and People. The New American Library, New York (1961)
5. Truxal, J.G.: Control Engineers' Handbook: Servomechanisms, Regulators, and Automatic Feedback Control Systems. McGraw Hill, New York (1958)
6. Groover, M.P.: Automation, Production Systems, and Computer-Integrated Manufacturing, 5th edn. Pearson (2018)
7. Burton, S., Shelton, N.: Office Procedures for the 21st Century, 8th edn. Pearson Education (2013)
8. Oliverio, M.E., Pasewark, W.R., White, B.R.: The Office: Procedures and Technology, 7th edn. Cengage Learning (2018)
9. Machi, M., Monostori, L., Pinto, R. (eds.): INCOM'18, information control problems in manufacturing. In: Proceedings of 16th IFAC Symposium, Bergamo, Italy (IFAC PapersOnLine 2018)
10. Felder, R., Alwan, M., Zhang, M.: Systems Engineering Approach to Medical Automation. Artech House, London (2008)
11. Cichocki, A., Helal, A.S., Rusinkiewicz, M., Woelk, D.: Workflow and Process Automation: Concepts and Technology. Springer (1998)
12. Taulli, T.: The Robotic Process Automation Handbook: A Guide to Implementing RPA Systems. Apress (2020)
13. Karakostas, B., Zorgios, Y.: Engineering Service Oriented Systems: A Model Driven Approach. IGI Global, Hershey (2008)
14. Lenart, G.M., Nof, S.Y.: Object-oriented integration of design and manufacturing in a laser processing cell. Int. J. Comput. Integr. Manuf. 10(1–4), 29–50 (1997), special issue on design and implementation of CIM systems
15. Min, K.-S.: Automation and control systems technology in Korean shipbuilding industry: the state of the art and the future perspectives. In: Proceedings of 17th World Congress IFAC, Seoul (2008)
16. Nof, S.Y., Wilhelm, W.E., Warnecke, H.J.: Industrial Assembly. Chapman Hall, New York (1997)
17. Bright, J.R.: Automation and Management. Harvard University Press, Boston (1958)
18. Amber, G.H., Amber, P.S.: Anatomy of Automation. Prentice Hall, Englewood Cliffs (1964)
19. Century of Innovation: Twenty Engineering Achievements that Transformed our Lives. NAE: US National Academy of Engineering, Washington, DC (2003)
20. http://www.engineeringchallenges.org/
21. Special Report: The singularity. IEEE Spectr. 45(6) (2008)
22. Priyadarshini, I., Cotton, C.: Intelligence in cyberspace: the road to cyber singularity. J. Exp. Theor. Artif. Intell. 33(4), 683–717 (2021)
23. Gyongyosi, L., Imre, S.: A survey on quantum computing technology. Comput. Sci. Rev. 31, 51–71 (2019)
24. Gill, S.S., Kumar, A., Singh, H., Singh, M., Kaur, K., Usman, M., Buyya, R.: Quantum Computing: A Taxonomy, Systematic Review and Future Directions. arXiv preprint arXiv:2010.15559 (2020)
Further Reading
Alon, U.: An Introduction to Systems Biology: Design Principles of Biological Circuits, 2nd edn. Chapman Hall/CRC (2019)
Axelrod, A.: Complete Guide to Test Automation: Techniques, Practices, and Patterns for Building and Maintaining Effective Software Projects. Apress (2018)
Center for Chemical Process Safety (CCPS): Guidelines for Safe and Reliable Instrumented Protective Systems. Wiley, New York (2007)
Dorf, R.C.: Modern Control Systems, 14th edn. Pearson, Upper Saddle River (2020)
Dorf, R.C., Nof, S.Y. (eds.): International Encyclopedia of Robotics and Automation. Wiley, New York (1988)
Evans, K.: Programming of CNC Machines, 4th edn. Industrial Press, New York (2016)
Friedmann, P.G.: Automation and Control Systems Economics, 2nd edn. ISA, Research Triangle Park (2006)
Noble, D.F.: Forces of Production: A Social History of Industrial Automation. Oxford University Press, Cambridge (1986)
Parasuraman, R., Sheridan, T.B., Wickens, C.D.: A model for types and levels of human interaction with automation. IEEE Trans. Syst. Man Cyber. 30(3), 286–297 (2000)
Rabiee, M.: Programmable Logic Controllers: Hardware and Programming, 4th edn. Goodheart-Wilcox (2017)
Rainer, R.K., Prince, B.: Introduction to Information Systems, 9th edn. Wiley, New York (2021)
Sands, N., Verhapen, I.: A Guide to the Automation Body of Knowledge, 3rd edn. ISA, Research Triangle Park (2018)
Sheridan, T.B.: Humans and Automation: System Design and Research Issues. Wiley, New York (2002)
Wiener, N.: Cybernetics, or the Control and Communication in the Animal and the Machine, 2nd edn. MIT Press, Cambridge (1965)
Shimon Y. Nof is the director of the NSF-industry-supported PRISM Center for Production, Robotics, and Integration Software for Manufacturing and Management, and professor of industrial engineering at Purdue University since 1977. Fellow of IFPR and IISE, he is the former president of IFPR and former chair of IFAC CC5 Committee for Manufacturing and Logistics. Nof is the author or editor of 16 books, and editor of Springer’s ACES book series on automation, collaboration, and e-services. His research, education, and industry contributions are concerned with cyber collaborative automation, robotics, and ework; collaborative control theory (CCT); systems security, integrity, and assurance; and integrated production and service operations with decentralized decision support networks.
2 Historical Perspective of Automation
Christopher Bissell, Theodore J. Williams, and Yukio Hasegawa
Contents
2.1 A History of Automatic Control
2.1.1 Antiquity and the Early Modern Period
2.1.2 Stability Analysis in the Nineteenth Century
2.1.3 Ship, Aircraft, and Industrial Control Before World War II
2.1.4 Electronics, Feedback, and Mathematical Analysis
2.1.5 World War II and Classical Control: Infrastructure
2.1.6 World War II and Classical Control: Theory
2.1.7 The Emergence of Modern Control Theory
2.1.8 The Digital Computer
2.1.9 The Socio-technological Context Since 1945
2.1.10 Conclusion and Emerging Trends
2.1.11 Further Reading
2.2 Advances in Industrial Automation: Historical Perspectives
2.3 Advances in Robotics and Automation: Historical Perspectives
References
Abstract
Editor note: This chapter includes three parts: 2.1 A History of Automatic Control by Christopher Bissell; 2.2 Advances in Industrial Automation: Historical Perspectives by Theodore J. Williams; and 2.3 Advances in Robotics and Automation: Historical Perspectives by
Yukio Hasegawa. The three parts were written by leaders and pioneers of automation, at the time the original edition of this handbook was published in 2009. Our distinguished editorial advisory board members have recommended to keep these chapters as originally published, for their authentic historical value. Additional historical perspectives, specific to chapter topics, are provided throughout this Handbook.

Keywords: Automatic control · Industrial automation · Robotics · History of robotics and automation
2.1 A History of Automatic Control
(by Christopher Bissell) Automatic control, particularly the application of feedback, has been fundamental to the development of automation. Its origins lie in the level control, water clocks, and pneumatics/hydraulics of the ancient world. From the seventeenth century onward, systems were designed for temperature control, the mechanical control of mills, and the regulation of steam engines. During the nineteenth century, it became increasingly clear that feedback systems were prone to instability. A stability criterion was derived independently toward the end of the century by Routh in England and Hurwitz in Switzerland. The nineteenth century, too, saw the development of servomechanisms, first for ship steering and later for stabilization and autopilots. The invention of aircraft added (literally) a new dimension to the problem. Minorsky's theoretical analysis of ship control in the 1920s clarified the nature of three-term control, also being used for process applications by the 1930s. Based on servo and communications engineering developments of the 1930s, and driven by the need for high-performance gun control systems, the coherent body of theory known as classical control emerged during and just after World War II in the USA, UK, and elsewhere, as did cybernetics ideas. Meanwhile, an alternative approach
to dynamic modeling had been developed in the USSR based on the approaches of Poincaré and Lyapunov. Information was gradually disseminated, and state-space or modern control techniques, fueled by Cold War demands for missile control systems, rapidly developed in both East and West. The immediate postwar period was marked by great claims for automation, but also great fears, while the digital computer opened new possibilities for automatic control.
2.1.1 Antiquity and the Early Modern Period
Feedback control can be said to have originated with the float valve regulators of the Hellenic and Arab worlds [1]. They were used by the Greeks and Arabs to control such devices as water clocks, oil lamps, and wine dispensers, as well as the level of water in tanks. The precise construction of such systems is still not entirely clear, since the descriptions in the original Greek or Arabic are often vague, and lack illustrations. The best-known Greek names are Ktsebios and Philon (third century BC) and Heron (first century AD) who were active in the Eastern Mediterranean (Alexandria, Byzantium). The water clock tradition was continued in the Arab world as described in books by writers such as AlJazari (1203) and Ibn al-Sa-ati (1206), greatly influenced by the anonymous Arab author known as Pseudo-Archimedes of the ninth–tenth century AD, who makes specific reference to the Greek work of Heron and Philon. Float regulators in the tradition of Heron were also constructed by the three brothers Banu Musa in Baghdad in the ninth century AD. The float valve level regulator does not appear to have spread to medieval Europe, even though translations existed of some of the classical texts by the above writers. It seems rather to have been reinvented during the industrial revolution, appearing in England, for example, in the eighteenth century. The first independent European feedback system was the temperature regulator of Cornelius Drebbel (1572– 1633). Drebbel spent most of his professional career at the courts of James I and Charles I of England and Rudolf II in Prague. Drebbel himself left no written records, but a number of contemporary descriptions survive of his invention. Essentially an alcohol (or other) thermometer was used to operate a valve controlling a furnace flue, and hence the temperature of an enclosure [2]. The device included screws to alter what we would now call the set point. If level and temperature regulation were two of the major precursors of modern control systems, then a number of devices designed for use with windmills pointed the way toward more sophisticated devices. During the eighteenth century, the mill fantail was developed both to keep the mill sails directed into the wind and to automatically vary the angle of attack, so as to avoid excessive speeds in high winds. Another important device was the lift-tenter. Millstones have a tendency to separate as the speed of rotation increases, thus
impairing the quality of flour. A number of techniques were developed to sense the speed and hence produce a restoring force to press the millstones closer together. Of these, perhaps the most important were Thomas Mead’s devices [3], which used a centrifugal pendulum to sense the speed and – in some applications – also to provide feedback, hence pointing the way to the centrifugal governor (Fig. 2.1). The first steam engines were the reciprocating engines developed for driving water pumps; James Watt’s rotary engines were sold only from the early 1780s. But it took until the end of the decade for the centrifugal governor to be applied to the machine, following a visit by Watt’s collaborator, Matthew Boulton, to the Albion Mill in London where he saw a lift-tenter in action under the control of a centrifugal pendulum (Fig. 2.2). Boulton and Watt did not attempt to patent the device (which, as noted above, had essentially already been patented by Mead) but they did try unsuccessfully to keep it secret. It was first copied in 1793 and spread throughout England over the next ten years [4].
2.1.2 Stability Analysis in the Nineteenth Century
With the spread of the centrifugal governor in the early nineteenth century a number of major problems became apparent. First, because of the absence of integral action, the governor could not remove offset: in the terminology of the time it could not regulate but only moderate. Second, its response to a change in load was slow. And third, (nonlinear) frictional forces in the mechanism could lead to hunting (limit cycling). A number of attempts were made to overcome these problems: for example, the Siemens chronometric governor effectively introduced integral action through differential gearing, as well as mechanical amplification. Other approaches to the design of an isochronous governor (one with no offset) were based on ingenious mechanical constructions, but often encountered problems of stability. Nevertheless, the nineteenth century saw steady progress in the development of practical governors for steam engines and hydraulic turbines, including spring-loaded designs (which could be made much smaller and operate at higher speeds) and relay (indirect-acting) governors [5]. By the end of the century, governors of various sizes and designs were available for effective regulation in a range of applications, and a number of graphical techniques existed for steady-state design. Few engineers, however, were concerned with the analysis of the dynamics of a feedback system.

In parallel with the developments in the engineering sector, a number of eminent British scientists became interested in governors in order to keep a telescope directed at a particular star as the Earth rotated. A formal analysis of the dynamics of such a system by George Biddell Airy, Astronomer Royal, in 1840 [6], clearly demonstrated the propensity of such
Fig. 2.1 Mead’s speed regulator. (After [1])
a feedback system to become unstable. In 1868, James Clerk Maxwell analyzed governor dynamics, prompted by an electrical experiment in which the speed of rotation of a coil had to be held constant. His resulting classic paper On governors [7] was received by the Royal Society on 20 February. Maxwell derived a third-order linear model and the correct
conditions for stability in terms of the coefficients of the characteristic equation. Unable to derive a solution for higher-order models, he expressed the hope that the question would gain the attention of mathematicians. In 1875, the subject for the Cambridge University Adams Prize in mathematics was set as the criterion of dynamical stability.
Fig. 2.2 Boulton and Watt steam engine with centrifugal governor. (After [1])
One of the examiners was Maxwell himself (prizewinner in 1857), and the 1875 prize (awarded in 1877) was won by Edward James Routh. Routh had been interested in dynamical stability for several years, and had already obtained a solution for a fifth-order system. In the published paper [8] we find what is now called the Routh version of the renowned Routh–Hurwitz stability criterion. Related, independent work was being carried out in Continental Europe at about the same time [9]. A summary of the
work of I.A. Vyshnegradskii in St. Petersburg appeared in the French Comptes Rendus de l'Academie des Sciences in 1876, with the full version appearing in Russian and German in 1877, and in French in 1878/1879. Vyshnegradskii (generally transliterated at the time as Wischnegradski) transformed a third-order differential equation model of a steam engine with governor into a standard form: ϕ³ + xϕ² + yϕ + 1 = 0,
where x and y became known as the Vyshnegradskii parameters. He then showed that a point in the x–y plane defined the nature of the system transient response. Figure 2.3 shows the diagram drawn by Vyshnegradskii, to which typical pole constellations for various regions in the plane have been added. In 1893, Aurel Boleslav Stodola at the Federal Polytechnic, Zurich, studied the dynamics of a high-pressure hydraulic turbine, and used Vyshnegradskii's method to assess the stability of a third-order model. A more realistic model, however, was seventh order, and Stodola posed the general problem to a mathematician colleague, Adolf Hurwitz, who
very soon came up with his version of the Routh–Hurwitz criterion [10]. The two versions were shown to be identical by Enrico Bompiani in 1911 [11]. At the beginning of the twentieth century, the first general textbooks on the regulation of prime movers appeared in a number of European languages [12, 13]. One of the most influential was Tolle’s Regelung der Kraftmaschine, which went through three editions between 1905 and 1922 [14]. The later editions included the Hurwitz stability criterion.
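For readers who want to reproduce the cubic case numerically, the short Python sketch below applies the Routh–Hurwitz condition to Vyshnegradskii's normalized characteristic equation ϕ³ + xϕ² + yϕ + 1 = 0; for a cubic with positive coefficients, stability requires x > 0, y > 0, and xy > 1, so the hyperbola xy = 1 is the stability boundary in the x–y plane. The particular (x, y) test values are arbitrary and purely illustrative.

```python
# Routh-Hurwitz check for Vyshnegradskii's normalized cubic
#   phi^3 + x*phi^2 + y*phi + 1 = 0.
# For s^3 + a*s^2 + b*s + c with positive coefficients, stability
# holds iff a*b > c; here a = x, b = y, c = 1, so the condition is x*y > 1.
import numpy as np

def vyshnegradskii_stable(x: float, y: float) -> bool:
    return x > 0 and y > 0 and x * y > 1          # Routh-Hurwitz condition

for x, y in [(3.0, 2.0), (0.5, 1.0)]:             # arbitrary test points
    roots = np.roots([1.0, x, y, 1.0])            # numerical cross-check
    print(x, y, vyshnegradskii_stable(x, y), bool(np.all(roots.real < 0)))
```

The numerical root check and the algebraic condition agree: the first test point lies above the hyperbola and is stable, the second lies below it and is not.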
2.1.3 Ship, Aircraft, and Industrial Control Before World War II
The first ship steering engines incorporating feedback appeared in the middle of the nineteenth century. In 1873, Jean Joseph Léon Farcot published a book on servomotors in which he not only described the various designs developed in the family firm, but also gave an account of the general principles of position control. Another important maritime application of feedback control was in gun turret operation, and hydraulics were also extensively developed for transmission systems. Torpedoes, too, used increasingly sophisticated feedback systems for depth control – including, by the end of the century, gyroscopic action (Fig. 2.4). During the first decades of the twentieth century, gyroscopes were increasingly used for ship stabilization and autopilots. Elmer Sperry pioneered the active stabilizer, the gyrocompass, and the gyroscope autopilot, filing various patents over the period 1907–1914. Sperry’s autopilot was a sophisticated device: an inner loop controlled an electric motor which operated the steering engine, while an outer loop used a gyrocompass to sense the heading. Sperry also
Fig. 2.3 Vyshnegradskii’s stability diagram with modern pole positions. (After [9])
Fig. 2.4 Torpedo servomotor as fitted to Whitehead torpedoes around 1900. (After [15])
designed an anticipator to replicate the way in which an experienced helmsman would meet the helm (to prevent oversteering); the anticipator was, in fact, a type of adaptive control [16]. Sperry and his son Lawrence also designed aircraft autostabilizers over the same period, with the added complexity of three-dimensional control. Bennett describes the system used in an acclaimed demonstration in Paris in 1914 [17]:
For this system the Sperrys used four gyroscopes mounted to form a stabilized reference platform; a train of electrical, mechanical and pneumatic components detected the position of the aircraft relative to the platform and applied correction signals to the aircraft control surfaces. The stabilizer operated for both pitch and roll [ . . . ] The system was normally adjusted to give an approximately deadbeat response to a step disturbance. The incorporation of derivative action [ . . . ] was based on Sperry's intuitive understanding of the behaviour of the system, not on any theoretical foundations. The system was also adaptive [ . . . ] adjusting the gain to match the speed of the aircraft.

Significant technological advances in both ship and aircraft stabilization took place over the next two decades, and by the mid-1930s a number of airlines were using Sperry autopilots for long-distance flights. However, apart from the stability analyses discussed in Sect. 2.1.2 above, which were not widely known at this time, there was little theoretical investigation of such feedback control systems. One of the earliest significant studies was carried out by Nicholas Minorsky, published in 1922 [18]. Minorsky was born in Russia in 1885 (his knowledge of Russian proved to be important to the West much later). During service with the Russian Navy he studied the ship steering problem and, following his emigration to the USA in 1918, he made the first theoretical analysis of automatic ship steering. This study clearly identified the way that control action should be employed: Although Minorsky did not use the terms in the modern sense, he recommended an appropriate combination of proportional, derivative, and integral action. Minorsky's work was not widely disseminated, however. Although he gave a good theoretical basis for closed-loop control, he was writing in an age of heroic invention, when intuition and practical experience were much more important for engineering practice than theoretical analysis.

Important technological developments were also being made in other sectors during the first few decades of the twentieth century, although again there was little theoretical underpinning. The electric power industry brought demands for voltage and frequency regulation; many processes using driven rollers required accurate speed control; and considerable work was carried out in a number of countries on systems for the accurate pointing of guns for naval and antiaircraft gunnery. In the process industries, measuring instruments and pneumatic controllers of increasing sophistication were developed. Mason's Stabilog (Fig. 2.5), patented in 1933, included integral as well as proportional action, and by
Fig. 2.5 The Stabilog, a pneumatic controller providing proportional and integral action [19]
the end of the decade three-term controllers were available that also included preact or derivative control. Theoretical progress was slow, however, until the advances made in electronics and telecommunications in the 1920s and 1930s were translated into the control field during World War II.
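The three-term (proportional, integral, derivative) law that Minorsky identified, and that pneumatic controllers such as the Stabilog and its successors came to implement, can be stated in a few lines of present-day code. The sketch below is a minimal Python illustration only: the gains, the time step, and the first-order lag standing in for the controlled process are assumed values, not parameters of any historical controller.

```python
# Minimal sketch of a three-term (PID) control law acting on an assumed
# first-order plant  tau*dy/dt = -y + u.  All numbers are illustrative.
Kp, Ki, Kd = 2.0, 1.0, 0.5      # proportional, integral, derivative gains (assumed)
dt, tau = 0.01, 1.0             # time step (s) and plant time constant (assumed)
setpoint = 1.0                  # desired value of the controlled variable

y = 0.0                         # plant output
integral, prev_error = 0.0, 0.0

for _ in range(int(10.0 / dt)):
    error = setpoint - y
    integral += error * dt                    # integral action removes offset
    derivative = (error - prev_error) / dt    # derivative action anticipates
    u = Kp * error + Ki * integral + Kd * derivative
    prev_error = error
    y += dt * (-y + u) / tau                  # Euler step of the assumed plant

print(f"output after 10 s: {y:.3f} (setpoint {setpoint})")
```

Run as written, the output settles close to the setpoint; removing the integral term leaves a steady offset, which is exactly the "moderating but not regulating" behavior of the early proportional-only governors.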
2.1.4 Electronics, Feedback, and Mathematical Analysis
The rapid spread of telegraphy and then telephony from the mid-nineteenth century onward prompted a great deal of theoretical investigation into the behavior of electric circuits. Oliver Heaviside published papers on his operational calculus over a number of years from 1888 onward [20], but although his techniques produced valid results for the transient response of electrical networks, he was fiercely criticized by contemporary mathematicians for his lack of rigor, and ultimately he was blackballed by the establishment. It was not until the second decade of the twentieth century that Bromwich, Carson, and others made the link between Heaviside’s operational calculus and Fourier methods, and thus proved the validity of Heaviside’s techniques [21].
The first three decades of the twentieth century saw important analyses of circuit and filter design, particularly in the USA and Germany. Harry Nyquist and Karl Küpfmüller were two of the first to consider the problem of the maximum transmission rate of telegraph signals, as well as the notion of information in telecommunications, and both went on to analyze the general stability problem of a feedback circuit [22]. In 1928, Küpfmüller analyzed the dynamics of an automatic gain control electronic circuit using feedback. He appreciated the dynamics of the feedback system, but his integral equation approach resulted only in approximations and design diagrams, rather than a rigorous stability criterion.

At about the same time in the USA, Harold Black was designing feedback amplifiers for transcontinental telephony (Fig. 2.6). In a famous epiphany on the Hudson River ferry in August 1927 he realized that negative feedback could reduce distortion at the cost of reducing overall gain. Black passed on the problem of the stability of such a feedback loop to his Bell Labs colleague Harry Nyquist, who published his celebrated frequency-domain encirclement criterion in 1932 [24]. Nyquist demonstrated, using results derived by Cauchy, that the key to stability is whether or not the open-loop frequency-response locus in the complex plane encircles (in Nyquist's original convention) the point 1 + i0. One of the great advantages of this approach is that no analytical form of the open-loop frequency response is required: a set of measured data points can be plotted without the need for a mathematical model. Another advantage is that, unlike the Routh–Hurwitz criterion, an assessment of the transient response can be made directly from the Nyquist plot in terms of gain and phase margins (how close the locus approaches the critical point). Black's 1934 paper reporting his contribution to the development of the negative feedback amplifier included what was to become the standard closed-loop analysis in the frequency domain [23].
Fig. 2.6 Black’s feedback amplifier. (After [23])
The third key contributor to the analysis of feedback in electronic systems at Bell Labs was Hendrik Bode, who worked on equalizers from the mid-1930s, and who demonstrated that attenuation and phase shift were related in any realizable circuit [25]. The dream of telephone engineers to build circuits with fast cutoff and low phase shift was indeed only a dream. It was Bode who introduced the notions of gain and phase margins, and redrew the Nyquist plot in its now conventional form with the critical point at −1 + i0. He also introduced the famous straight-line approximations to frequency-response curves of linear systems plotted on log–log axes. Bode presented his methods in a classic text published immediately after the war [26].

If the work of the communications engineers was one major precursor of classical control, then the other was the development of high-performance servos in the 1930s. The need for such servos was generated by the increasing use of analogue simulators, such as network analyzers for the electrical power industry and differential analyzers for a wide range of problems. By the early 1930s, six-integrator differential analyzers were in operation at various locations in the USA and the UK. A major center of innovation was MIT, where Vannevar Bush, Norbert Wiener, and Harold Hazen had all contributed to the design. In 1934, Hazen summarized the developments of the previous years in the theory of servomechanisms [27]. He adopted normalized curves, and parameters such as time constant and damping factor, to characterize servo response, but he did not give any stability analysis: Although he appears to have been aware of Nyquist's work, he (like almost all his contemporaries) does not appear to have appreciated the close relationship between a feedback servomechanism and a feedback amplifier.

The 1930s American work gradually became known elsewhere. There is ample evidence from prewar USSR, Germany, and France that, for example, Nyquist's results were known – if not widely disseminated. In 1940, for example, Leonhard published a book on automatic control in which he introduced the inverse Nyquist plot [28], and in the same year a conference was held in Moscow during which a number of Western results in automatic control were presented and discussed [29]. Also in Russia, a great deal of work was being carried out on nonlinear dynamics, using an approach developed from the methods of Poincaré and Lyapunov at the turn of the century [30]. Such approaches, however, were not widely known outside Russia until after the war.
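The encirclement test and the margins that Nyquist and Bode introduced are easy to reproduce numerically today. The Python sketch below evaluates an assumed open-loop response L(s) = K/(s(s+1)(s+2)) on a frequency grid and reads off approximate gain and phase margins; the transfer function, gain, and frequency range are illustrative assumptions, not examples taken from the historical papers. Note that the same calculation works equally well on measured frequency-response data, which was one of the practical attractions of the approach.

```python
import numpy as np

# Assumed open-loop transfer function; any rational L(s), or a table of
# measured frequency-response points, could be used in its place.
def L(s, K=2.0):
    return K / (s * (s + 1.0) * (s + 2.0))

w = np.logspace(-2, 2, 20000)            # frequency grid (rad/s)
Ljw = L(1j * w)
mag = np.abs(Ljw)
phase = np.unwrap(np.angle(Ljw))         # radians, made continuous in w

# Gain margin: 1/|L| at the frequency where the phase first reaches -180 deg
pc = np.argmax(phase <= -np.pi)
gain_margin_db = -20.0 * np.log10(mag[pc])

# Phase margin: 180 deg + phase at the frequency where |L| first falls to 1
gc = np.argmax(mag <= 1.0)
phase_margin_deg = 180.0 + np.degrees(phase[gc])

print(f"gain margin  ~ {gain_margin_db:.1f} dB")
print(f"phase margin ~ {phase_margin_deg:.1f} deg")
```

Both margins are simply measures of how closely the open-loop locus approaches the critical point, exactly as described above.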
2.1.5 World War II and Classical Control: Infrastructure
Notwithstanding the major strides identified in the previous subsections, it was during World War II that a discipline of feedback control began to emerge, using a range of design
and analysis techniques to implement high-performance systems, especially those for the control of antiaircraft weapons. In particular, World War II saw the coming together of engineers from a range of disciplines – electrical and electronic engineering, mechanical engineering, and mathematics – and the subsequent realization that a common framework could be applied to all the various elements of a complex control system in order to achieve the desired result [19, 31]. The so-called fire control problem was one of the major issues in military research and development at the end of the 1930s. While not a new problem, the increasing importance of aerial warfare meant that the control of antiaircraft weapons took on a new significance. Under manual control, aircraft were detected by radar, range was measured, prediction of the aircraft position at the arrival of the shell was computed, and guns were aimed and fired. A typical system could involve up to 14 operators. Clearly, automation of the process was highly desirable, and achieving this was to require detailed research into such matters as the dynamics of the servomechanisms driving the gun aiming, the design of controllers, and the statistics of tracking aircraft possibly taking evasive action. Government, industry, and academia collaborated closely in the USA, and three research laboratories were of prime importance. The Servomechanisms Laboratory at MIT brought together Brown, Hall, Forrester, and others in projects that developed frequency-domain methods for control loop design for high-performance servos. Particularly close links were maintained with Sperry, a company with a strong track record in guidance systems, as indicated above. Meanwhile, at MIT’s Radiation Laboratory – best known, perhaps, for its work on radar and long-distance navigation – researchers such as James, Nichols, and Phillips worked on the further development of design techniques for auto-track radar for AA gun control. And the third institution of seminal importance for fire control development was Bell Labs, where great names such as Bode, Shannon, and Weaver – in collaboration with Wiener and Bigelow at MIT – attacked a number of outstanding problems, including the theory of smoothing and prediction for gun aiming. By the end of the war, most of the techniques of what came to be called classical control had been elaborated in these laboratories, and a whole series of papers and textbooks appeared in the late 1940s presenting this new discipline to the wider engineering community [32]. Support for control systems development in the USA has been well documented [19, 31]. The National Defense Research Committee (NDRC) was established in 1940 and incorporated into the Office of Scientific Research and Development (OSRD) the following year. Under the directorship of Vannevar Bush, the new bodies tackled antiaircraft measures, and thus the servo problem, as a major priority. Section D of the NDRC, devoted to Detection, Controls,
and Instruments was the most important for the development of feedback control. Following the establishment of the OSRD, the NDRC was reorganized into divisions, and Division 7, Fire Control, under the overall direction of Harold Hazen, covered the following subdivisions: ground-based antiaircraft fire control; airborne fire control systems; servomechanisms and data transmission; optical rangefinders; fire control analysis; and navy fire control with radar.

Turning to the UK, by the outbreak of World War II various military research stations were highly active in such areas as radar and gun laying, and there were also close links between government bodies and industrial companies such as Metropolitan–Vickers, British Thomson–Houston, and others. Nevertheless, it is true to say that overall coordination was not as effective as in the USA. A body that contributed significantly to the dissemination of theoretical developments and other research into feedback control systems in the UK was the so-called Servo Panel. Originally established informally in 1942 as the result of an initiative of Solomon (head of a special radar group at Malvern), it acted rather as a learned society with approximately monthly meetings from May 1942 to August 1945. Toward the end of the war, meetings included contributions from the USA.

Germany developed successful control systems for civil and military applications both before and during the war (torpedo and flight control, for example). The period 1938–1941 was particularly important for the development of missile guidance systems. The test and development center at Peenemünde on the Baltic coast had been set up in early 1936, and work on guidance and control saw the involvement of industry, the government, and universities. However, there does not appear to have been any significant national coordination of R&D in the control field in Germany, and little development of high-performance servos as there was in the USA and the UK. When we turn to the German situation outside the military context, however, we find a rather remarkable awareness of control and even cybernetics. In 1939, the Verein Deutscher Ingenieure, one of the two major German engineers' associations, set up a specialist committee on control engineering. As early as October 1940 the chair of this body, Hermann Schmidt, gave a talk covering control engineering and its relationship with economics, social sciences, and cultural aspects [33]. Rather remarkably, this committee continued to meet during the war years, and issued a report in 1944 concerning primarily control concepts and terminology, but also considering many of the fundamental issues of the emerging discipline.

The Soviet Union saw a great deal of prewar interest in control, mainly for industrial applications in the context of five-year plans for the Soviet command economy. Developments in the USSR have received little attention in English-language accounts of the history of the discipline apart from
a few isolated papers. It is noteworthy that the Kommissiya Telemekhaniki i Avtomatiki (KTA) was founded in 1934, and the Institut Avtomatiki i Telemekhaniki (IAT) in 1939 (both under the auspices of the Soviet Academy of Sciences, which controlled scientific research through its network of institutes). The KTA corresponded with numerous Western manufacturers of control equipment in the mid-1930s and translated a number of articles from Western journals. The early days of the IAT were marred, however, by the Shchipanov affair, a classic Soviet attack on a researcher for pseudoscience, which detracted from technical work for a considerable period of time [34]. The other major Russian center of research related to control theory in the 1930s and 1940s (if not for practical applications) was the University of Gorkii (now Nizhnii Novgorod), where Aleksandr Andronov and colleagues had established a center for the study of nonlinear dynamics during the 1930s [35]. Andronov was in regular contact with Moscow during the 1940s, and presented the emerging control theory there – both the nonlinear research at Gorkii and developments in the UK and USA. Nevertheless, there appears to have been no coordinated wartime work on control engineering in the USSR, and the IAT in Moscow was evacuated when the capital came under threat. However, there does seem to have been an emerging control community in Moscow, Nizhnii Novgorod, and Leningrad, and Russian workers were extremely well informed about the open literature in the West.
2.1.6 World War II and Classical Control: Theory
Design techniques for servomechanisms began to be developed in the USA from the late 1930s onward. In 1940, Gordon S. Brown and colleagues at MIT analyzed the transient response of a closed-loop system in detail, introducing the system operator 1/(1 + open loop) as a function of the Heaviside differential operator p. By the end of 1940 contracts were being drawn up between the NDRC and MIT for a range of servo projects. One of the most significant contributors was Albert Hall, who developed classic frequency-response methods as part of his doctoral thesis, presented in 1943 and published initially as a confidential document [36] and then in the open literature after the war [37]. Hall derived the frequency response of a unity feedback servo as KG(iω)/[1 + KG(iω)], applied the Nyquist criterion, and introduced a new way of plotting system response that he called M-circles (Fig. 2.7), which were later to inspire the Nichols chart. As Bennett describes it [38]:

Hall was trying to design servosystems which were stable, had a high natural frequency, and high damping. [ . . . ] He needed a method of determining, from the transfer locus, the value of K
Fig. 2.7 Hall's M-circles in the KG(iω) plane: loci of constant closed-loop amplitude ratio M, with centers at −M²/(M²−1) on the real axis and radii M/(M²−1). (After [37])
that would give the desired amplitude ratio. As an aid to finding the value of K he superimposed on the polar plot curves of constant magnitude of the amplitude ratio. These curves turned out to be circles . . . By plotting the response locus on transparent paper, or by using an overlay of M-circles printed on transparent paper, the need to draw M-circles was obviated . . .
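The geometry Hall exploited is easy to verify numerically: for M ≠ 1, the points in the KG(iω) plane where the closed-loop amplitude ratio |KG/(1 + KG)| equals M lie on a circle centered at −M²/(M² − 1) with radius M/(M² − 1), as indicated in Fig. 2.7. The short Python check below samples a few points on such circles for arbitrary values of M; the chosen M values and angles are purely illustrative.

```python
# Numerical check of the constant-M circle geometry in the KG(iw) plane.
import numpy as np

for M in (0.75, 1.3, 2.0):                           # arbitrary M values (M != 1)
    centre = -M**2 / (M**2 - 1.0)
    radius = abs(M / (M**2 - 1.0))
    theta = np.linspace(0.0, 2.0 * np.pi, 7)[:-1]    # a few sample angles
    KG = centre + radius * np.exp(1j * theta)        # points on the M-circle
    closed_loop = np.abs(KG / (1.0 + KG))            # closed-loop amplitude ratio
    print(M, bool(np.allclose(closed_loop, M)))      # expect True for each M
```

Every sampled point on each circle returns the same closed-loop magnitude M, which is why an overlay of pre-drawn M-circles let the designer read the closed-loop response directly from the open-loop locus.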
A second MIT group, known as the Radiation Laboratory (or RadLab), was working on auto-track radar systems. Work in this group was described after the war in [39]; one of the major innovations was the introduction of the Nichols chart (Fig. 2.8), similar to Hall’s M-circles, but using the more convenient decibel measure of amplitude ratio that turned the circles into a rather different geometrical form. The third US group consisted of those looking at smoothing and prediction for antiaircraft weapons – most notably Wiener and Bigelow at MIT together with others, including Bode and Shannon, at Bell Labs. This work involved the application of correlation techniques to the statistics of aircraft motion. Although the prototype Wiener predictor was unsuccessful in attempts at practical application in the early 1940s, the general approach proved to be seminal for later developments. Formal techniques in the UK were not so advanced. Arnold Tustin at Metropolitan–Vickers (Metro–Vick) worked on gun control from the late 1930s, but engineers had
Fig. 2.8 Nichols chart: loop gain (dB) plotted against loop phase angle (deg). (After [38])
little appreciation of dynamics. Although they used harmonic response plots, they appear to have been unaware of the Nyquist criterion until well into the 1940s [40]. Other key researchers in the UK included Whiteley, who proposed using the inverse Nyquist diagram as early as 1942, and introduced his standard forms for the design of various categories of servosystem [41]. In Germany, Winfried Oppelt, Hans Sartorius, and Rudolf Oldenbourg were also coming to related conclusions about closed-loop design independently of allied research [42, 43]. The basics of sampled-data control were also developed independently during the war in several countries. The z-transform in all but name was described in a chapter by Hurewicz in [39]. Tustin in the UK developed the bilinear transformation for time-series models, while Oldenbourg and Sartorius also used difference equations to model such systems. From 1944 onward, the design techniques developed during the hostilities were made widely available in an explosion of research papers and textbooks – not only from the USA and the UK, but also from Germany and the USSR. Toward the end of the decade perhaps the final element in the classical control toolbox was added – Evans' root locus technique, which enabled plots of changing pole position as a function of loop gain to be easily sketched [44]. But a radically different approach was already waiting in the wings.

2.1.7 The Emergence of Modern Control Theory
The modern or state-space approach to control was ultimately derived from original work by Poincaré and Lyapunov at the end of the nineteenth century. As noted above, Russians had continued developments along these lines, particularly during the 1920s and 1930s in centers of excellence in Moscow and Gorkii (now Nizhnii Novgorod). Russian work of the 1930s filtered slowly through to the West [45], but it was only in the postwar period, and particularly with the introduction of cover-to-cover translations of the major Soviet journals, that researchers in the USA and elsewhere became familiar with Soviet work. But phase-plane approaches had already been adopted by Western control engineers. One of the first was Leroy MacColl in his early textbook [46].

The Cold War requirements of control engineering centered on the control of ballistic objects for aerospace applications. Detailed and accurate mathematical models, both linear and nonlinear, could be obtained, and the classical techniques of frequency response and root locus – essentially approximations – were increasingly replaced by methods designed to optimize some measure of performance such as minimizing trajectory time or fuel consumption. Higher-order models were expressed as a set of first-order equations in terms of the state variables. The state variables allowed for a more sophisticated representation of dynamic behavior than the classical single-input single-output system modeled by a differential equation, and were suitable for multivariable problems. In general, we have in matrix form ẋ = Ax + Bu, y = Cx, where x is the vector of state variables, u the inputs, and y the outputs.

Automatic control developments in the late 1940s and 1950s were greatly assisted by changes in the engineering professional bodies and a series of international conferences [47]. In the USA, both the American Society of Mechanical Engineers and the American Institute of Electrical Engineers made various changes to their structure to reflect the growing importance of servomechanisms and feedback control. In the UK, similar changes took place in the British professional bodies, most notably the Institution of Electrical Engineers, but also the Institute of Measurement and Control and the mechanical and chemical engineering bodies. The first conferences on the subject appeared in the late 1940s in London and New York, but the first truly international conference was held in Cranfield, UK, in 1951. This was followed by a number of others, the most influential of which was the Heidelberg event of September 1956, organized by the joint control committee of the two major German engineering bodies, the VDE and VDI. The establishment of
the International Federation of Automatic Control followed in 1957, with its first conference in Moscow in 1960 [48]. The Moscow conference was perhaps most remarkable for Kalman's paper On the general theory of control systems, which identified the duality between multivariable feedback control and multivariable feedback filtering and which was seminal for the development of optimal control. The late 1950s and early 1960s saw the publication of a number of other important works on dynamic programming and optimal control, among which those by Bellman [49], Kalman [50–52], and Pontryagin and colleagues [53] can be singled out. A more thorough discussion of control theory is provided in Chs. 9, 10, and 11.
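As a small present-day illustration of the state-space form ẋ = Ax + Bu, y = Cx introduced above, the following Python sketch simulates the step response of an assumed second-order system with a simple Euler integration. The matrices, step size, and input are illustrative choices only, not drawn from any particular application.

```python
# Minimal state-space simulation of x_dot = A x + B u, y = C x.
# The matrices describe an assumed damped second-order system.
import numpy as np

A = np.array([[0.0, 1.0],
              [-2.0, -2.0]])     # state matrix (assumed)
B = np.array([[0.0],
              [1.0]])            # input matrix (assumed)
C = np.array([[1.0, 0.0]])       # output matrix (assumed)

x = np.zeros((2, 1))             # state vector
dt, u = 0.001, 1.0               # Euler step and constant (step) input

for _ in range(int(10.0 / dt)):
    x = x + dt * (A @ x + B * u) # x_dot = A x + B u
y = C @ x                        # y = C x

print(f"output after 10 s ~ {y[0, 0]:.3f}")   # settles near u/2 = 0.5
```

The same template extends directly to systems with several inputs and outputs, which is precisely the multivariable generality that made the state-space formulation attractive for the aerospace problems of the period.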
2.1.8 The Digital Computer
The introduction of digital technologies in the late 1950s brought enormous changes to automatic control. Control engineering had long been associated with computing devices – as noted above, a driving force for the development of servos was their application in analogue computing. But the great change with the introduction of digital computers was that ultimately the approximate methods of frequency-response or root locus design, developed explicitly to avoid computation, could be replaced by techniques in which accurate computation played a vital role.

There is some debate about the first application of digital computers to process control, but certainly the introduction of computer control at the Texaco Port Arthur (Texas) refinery in 1959 and the Monsanto ammonia plant at Luling (Louisiana) the following year are two of the earliest [54]. The earliest systems were supervisory systems, in which individual loops were controlled by conventional electrical, pneumatic, or hydraulic controllers, but monitored and optimized by computer. Specialized process control computers followed in the second half of the 1960s, offering direct digital control (DDC) as well as supervisory control. In DDC the computer itself implements a discrete form of a control algorithm such as three-term control or another procedure. Such systems were expensive, however, and also suffered many problems with programming, and were soon superseded by the much cheaper minicomputers of the early 1970s, most notably the Digital Equipment Corporation PDP series.

But, as in so many other areas, it was the microprocessor that had the greatest effect. Microprocessor-based digital controllers were soon developed that were compact, reliable, included a wide selection of control algorithms, had good communications with supervisory computers, and offered comparatively easy-to-use programming and diagnostic tools via an effective operator interface. Microprocessors could also easily be built into specific pieces of equipment, such as robot arms, to provide dedicated position control, for example.
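To make the idea of direct digital control concrete, the sketch below shows a discrete three-term algorithm of the general kind such a computer might execute once per sample period, written here in the incremental ("velocity") form obtained by differencing the positional PID law. The gains, sample time, and setpoint/measurement values are assumptions for illustration, and the routine is not drawn from any particular DDC product.

```python
# Incremental (velocity-form) three-term algorithm:
#   u_k = u_{k-1} + q0*e_k + q1*e_{k-1} + q2*e_{k-2}
# obtained by differencing u_k = Kp*e_k + Ki*Ts*sum(e) + Kd*(e_k - e_{k-1})/Ts.
Kp, Ki, Kd, Ts = 1.2, 0.4, 0.05, 1.0      # assumed tuning and sample time (s)

q0 = Kp + Ki * Ts + Kd / Ts
q1 = -Kp - 2.0 * Kd / Ts
q2 = Kd / Ts

def ddc_step(setpoint, measurement, state):
    """One control update per sample period; state = (u, e_prev, e_prev2)."""
    u, e1, e2 = state
    e = setpoint - measurement
    u = u + q0 * e + q1 * e1 + q2 * e2
    return u, (u, e, e1)

# Example call with arbitrary values, as it might run once per sample period.
state = (0.0, 0.0, 0.0)
u, state = ddc_step(setpoint=50.0, measurement=47.5, state=state)
print(f"controller output: {u:.3f}")
```

The incremental form was popular in practice because the stored quantity is the change in output, which simplifies bumpless transfer between manual and automatic operation.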
A development often neglected in the history of automatic control is the programmable logic controller (PLC). PLCs were developed to replace individual relays used for sequential (and combinational) logic control in various industrial sectors. Early plugboard devices appeared in the mid-1960s, but the first PLC proper was probably the Modicon, developed for General Motors to replace electromechanical relays in automotive component production. Modern PLCs offer a wide range of control options, including conventional closed-loop control algorithms such as PID as well as the logic functions. In spite of the rise of ruggedized PCs in many industrial applications, PLCs are still widely used owing to their reliability and familiarity (Fig. 2.9). Digital computers also made it possible to implement the more advanced control techniques that were being developed in the 1960s and 1970s [56]. In adaptive control the algorithm is modified according to circumstances. Adaptive control has a long history: so-called gain scheduling, for example, when
Fig. 2.9 The Modicon 084 PLC. (After [55])
the gain of a controller is varied according to some measured parameter, was used well before the digital computer. (The classic example is in flight control, where the altitude affects aircraft dynamics and therefore needs to be taken into account when setting the gain.) Digital adaptive control, however, offers much greater possibilities for:

1. Identification of relevant system parameters
2. Making decisions about the required modifications to the control algorithm
3. Implementing the changes

Optimal and robust techniques too were developed, the most celebrated perhaps being the linear-quadratic-Gaussian (LQG) and H∞ approaches from the 1960s onward. Without digital computers these techniques, which attempt to optimize system rejection of disturbances (according to some measure of behavior) while at the same time being resistant to errors in the model, would simply be mathematical curiosities [57].

A very different approach to control rendered possible by modern computers is to move away from purely mathematical models of system behavior and controller algorithms. In fuzzy control, for example, control action is based on a set of rules expressed in terms of fuzzy variables. For example: IF the speed is "high" AND the distance to final stop is "short" THEN apply brakes "firmly." The fuzzy variables high, short, and firmly can be translated by means of an appropriate computer program into effective control for, in this case, a train. Related techniques include learning control and knowledge-based control. In the former, the control system can learn about its environment using artificial intelligence (AI) techniques and modify its behavior accordingly. In the latter, a range of AI techniques is applied to reasoning about the situation so as to provide appropriate control action.
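A toy version of the fuzzy braking rule quoted above can make the mechanism concrete. In the Python sketch below, the membership functions, the use of min for the fuzzy AND, and the simple weighted-average defuzzification are common textbook choices; all of the numerical thresholds are assumptions for illustration only.

```python
# Toy evaluation of: IF speed is "high" AND distance is "short" THEN brake "firmly".
def ramp_up(x, a, b):
    """Membership rising from 0 at a to 1 at b."""
    return min(1.0, max(0.0, (x - a) / (b - a)))

def ramp_down(x, a, b):
    """Membership falling from 1 at a to 0 at b."""
    return min(1.0, max(0.0, (b - x) / (b - a)))

def brake_command(speed_kmh, distance_m):
    speed_high = ramp_up(speed_kmh, 40.0, 80.0)          # degree of "speed is high"
    distance_short = ramp_down(distance_m, 50.0, 300.0)  # degree of "distance is short"
    firing = min(speed_high, distance_short)             # fuzzy AND of the two conditions
    # One rule only: its firing strength blends "firm" braking (1.0) with coasting (0.0).
    return firing * 1.0 + (1.0 - firing) * 0.0

print(brake_command(speed_kmh=70.0, distance_m=100.0))   # strong braking expected
print(brake_command(speed_kmh=30.0, distance_m=400.0))   # little or no braking
```

A practical fuzzy controller would combine many such rules and richer membership functions, but the principle is the same: linguistic conditions are turned into graded numbers and blended into a crisp control action.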
2.1.9 The Socio-technological Context Since 1945
This short survey of the history of automatic control has concentrated on technological and, to some extent, institutional developments. A full social history of automatic control has yet to be written, although there are detailed studies of certain aspects. Here I shall merely indicate some major trends since World War II. The wartime developments, both in engineering and in areas such as operations research, pointed the way toward the design and management of large-scale, complex, projects. Some of those involved in the wartime research were already thinking on a much larger scale. As early as 1949, in some
rather prescient remarks at an ASME meeting in the fall of that year, Brown and Campbell said [58–60]: We have in mind more a philosophic evaluation of systems which might lead to the improvement of product quality, to better coordination of plant operation, to a clarification of the economics related to new plant design, and to the safe operation of plants in our composite social-industrial community. [ . . . ] The conservation of raw materials used in a process often prompts reconsideration of control. The expenditure of power or energy in product manufacture is another important factor related to control. The protection of health of the population adjacent to large industrial areas against atmospheric poisoning and water-stream pollution is a sufficiently serious problem to keep us constantly alert for advances in the study and technique of automatic control, not only because of the human aspect, but because of the economy aspect.
Many saw the new technologies, and the prospects of automation, as bringing great benefits to society; others were more negative. Wiener, for example, wrote [61]: The modern industrial revolution is [ . . . ] bound to devalue the human brain at least in its simpler and more routine decisions. Of course, just as the skilled carpenter, the skilled mechanic, the skilled dressmaker have in some degree survived the first industrial revolution, so the skilled scientist and the skilled administrator may survive the second. However, taking the second revolution as accomplished, the average human of mediocre attainments or less has nothing to sell that is worth anyone's money to buy.
It is remarkable how many of the wartime engineers involved in control systems development went on to look at social, economic, or biological systems. In addition to Wiener’s work on cybernetics, Arnold Tustin wrote a book on the application to economics of control ideas, and both Winfried Oppelt and Karl Küpfmüller investigated biological systems in the postwar period. One of the more controversial applications of control and automation was the introduction of the computer numerical control (CNC) of machine tools from the late 1950s onward. Arguments about increased productivity were contested by those who feared widespread unemployment. We still debate such issues today, and will continue to do so. Noble, in his critique of automation, particularly CNC, remarks [62]: [ . . . ] when technological development is seen as politics, as it should be, then the very notion of progress becomes ambiguous: What kind of progress? Progress for whom? Progress for what? And the awareness of this ambiguity, this indeterminacy, reduces the powerful hold that technology has had upon our consciousness and imagination [ . . . ] Such awareness awakens us not only to the full range of technological possibilities and political potential but also to a broader and older notion of progress, in which a struggle for human fulfillment and social equality replaces a simple faith in technological deliverance . . . .
2.1.10 Conclusion and Emerging Trends

Technology is part of human activity, and cannot be divorced from politics, economics, and society. There is no doubt that automatic control, at the core of automation, has brought enormous benefits, enabling modern production techniques, power and water supply, environmental control, information and communication technologies, and so on. At the same time automatic control has called into question the way we organize our societies, and how we run modern technological enterprises. Automated processes require much less human intervention, and there have been periods in the recent past when automation has been problematic in those parts of industrialized society that have traditionally relied on a large workforce for carrying out tasks that were subsequently automated. It seems unlikely that these socio-technological questions will be settled as we move toward the next generation of automatic control systems, such as the transformation of work through the use of information and communication technology (ICT) and the application of control ideas to this emerging field [63]. Future developments in automatic control are likely to exploit ever more sophisticated mathematical models for those applications amenable to exact technological modeling, plus a greater emphasis on human–machine systems, and further development of human behavior modeling, including decision support and cognitive engineering systems [64]. As safety aspects of large-scale automated systems become ever more important, large-scale integration and novel ways of communicating between humans and machines are likely to take on even greater significance.
2.1.11 Further Reading

• R. Bellman (Ed.): Selected Papers on Mathematical Trends in Control Engineering (Dover, New York 1964)
• C.C. Bissell: http://ict.open.ac.uk/classics (electronic resource)
• M.S. Fagen (Ed.): A History of Engineering and Science in the Bell System: The Early Years (1875–1925) (Bell Telephone Laboratories, Murray Hill 1975)
• M.S. Fagen (Ed.): A History of Engineering and Science in the Bell System: National Service in War and Peace (1925–1975) (Bell Telephone Laboratories, Murray Hill 1979)
• A.T. Fuller: Stability of Motion, ed. by E.J. Routh, reprinted with additional material (Taylor & Francis, London 1975)
• A.T. Fuller: The early development of control theory, Trans. ASME J. Dyn. Syst. Meas. Control 98, 109–118 (1976)
• A.T. Fuller: Lyapunov centenary issue, Int. J. Control 55, 521–527 (1992)
• L.E. Harris: The Two Netherlanders, Humphrey Bradley and Cornelis Drebbel (Cambridge Univ. Press, Cambridge 1961)
• B. Marsden: Watt's Perfect Engine (Columbia Univ. Press, New York 2002)
• O. Mayr: Authority, Liberty and Automatic Machinery in Early Modern Europe (Johns Hopkins Univ. Press, Baltimore 1986)
• W. Oppelt: A historical review of autopilot development, research and theory in Germany, Trans. ASME J. Dyn. Syst. Meas. Control 98, 213–223 (1976)
• W. Oppelt: On the early growth of conceptual thinking in control theory – the German role up to 1945, IEEE Control Syst. Mag. 4, 16–22 (1984)
• B. Porter: Stability Criteria for Linear Dynamical Systems (Oliver & Boyd, Edinburgh, London 1967)
• P. Remaud: Histoire de l'automatique en France 1850–1950 (Hermes Lavoisier, Paris 2007), in French
• K. Rörentrop: Entwicklung der modernen Regelungstechnik (Oldenbourg, Munich 1971), in German
• Scientific American: Automatic Control (Simon & Schuster, New York 1955)
• J.S. Small: The Analogue Alternative (Routledge, London, New York 2001)
• G.J. Thaler (Ed.): Automatic Control: Classical Linear Theory (Dowden, Stroudsburg 1974)
• (Added for this new edition) C. Bissell, C. Dillon (Eds.): Ways of Thinking, Ways of Seeing (Springer ACES Book Series 2012)

2.2 Advances in Industrial Automation: Historical Perspectives
(by Theodore J. Williams) Automation is a way for humans to extend the capability of their tools and machines. Self-operation by tools and machines requires four functions: performance detection, process correction, adjustment for disturbances, and the ability to carry out the previous three functions without human intervention. The development of these functions evolved through history, and automation is the capability of causing machines to carry out a specific operation on command from an external source. In the chemical manufacturing and petroleum industries prior to 1940, most processing was carried out in a batch environment. The increasing demand for chemical and petroleum products during World War II and thereafter required a different manufacturing setup, leading to continuous processing; efficiencies were achieved by automatic control and automation of process, flow, and transfer. The increasing complexity of the control systems for large plants necessitated the application of computers, which were introduced to the chemical industry in the 1960s. Automation has substituted computer-based control
systems for most, if not all, control systems previously based on human-aided mechanical or pneumatic systems, to the point that chemical and petroleum plant systems are now automatic to a very high degree. In addition, automation has replaced human effort, eliminated significant labor costs, and prevented accidents and injuries that might otherwise have occurred. The Purdue Enterprise Reference Architecture (PERA) for hierarchical control structure, the hierarchy of personnel tasks, and the plant operational management structure, as developed for large industrial plants, as well as frameworks for automation studies, are also illustrated.

Humans have always sought to increase the capability of their tools and their extensions, i.e., machines. A natural extension of this dream was making tools capable of self-operation in order to:

1. Detect when performance was not achieving the initially expected result
2. Initiate a correction in operation to return the process to its expected result in case of deviation from expected performance
3. Adjust ongoing operations to increase the machine's productivity in terms of (a) volume, (b) dimensional accuracy, (c) overall product quality, or (d) ability to respond to a new, previously unknown disturbance
4. Carry out the previously described functions without human intervention

Item 1 was readily achieved through the development of sensors that could continuously or periodically measure the important variables of the process and signal the occurrence of variations in them. Item 2 was made possible next by the invention of controllers that convert knowledge of such variations into the commands required to change operational variables and thereby return to the required operational results. The successful operation of any commercially viable process requires the solution of items 1 and 2. The development of item 3 required an additional level of intelligence beyond items 1 and 2, i.e., the capability to make a comparison between the results achieved and the operating conditions used for a series of tests. Humans can, of course, readily perform this task. Accomplishing this task using a machine, however, requires the computational capability to compare successive sets of data, gather and interpret corrective results, and be able to apply the results obtained. For a few variables with known variations, this can be incorporated into the controller's design. However, for a large number of variables, or when possibly unknown ranges of responses may be present, a computer must be available.

Automation is the capability of causing a machine to carry out a specific operation on command from an external source. The nature of these operations may also be part of the external command received. The device involved may likewise have
the capability to respond to other external environmental conditions or signals when such responses are incorporated within its capabilities. Automation, in the sense used almost universally today in the chemical and petroleum industries, is taken to mean the complete or near-complete operation of chemical plants and petroleum refineries by digital computer systems. This operation entails not only the monitoring and control of the multiple flows of materials involved but also the coordination and optimization of these controls to achieve the optimal production rate and/or the economic return desired by management. These systems are programmed to compensate, as far as the plant equipment itself will allow, for changes in raw material characteristics and availability and in requested product flow rates and qualities.

In the early days of the chemical manufacturing and petroleum industries (prior to 1940), most processing was carried out in a batch environment. The needed ingredients were added together in a kettle and processed until the reaction or other desired action was completed. The desired product(s) were then separated from the by-products and unreacted materials by decanting, distilling, filtering, or other applicable physical means. These latter operations are thus in contrast to the generally chemical processes of product formation. At that early time, the equipment and the accompanying methodologies were highly manpower-dependent, particularly for those requiring coordination of the joint operation of related equipment, especially when succeeding steps involved transferring materials to different sets or types of equipment.

The strong demand for chemical and petroleum products generated by World War II and the following years of prosperity and rapid commercial growth required an entirely different manufacturing equipment setup. This led to the emergence of continuous processes, where subsequent processes were continued in successive connected pieces of equipment, each devoted to a separate step in the process. Thus a progression in distance to the succeeding equipment (rather than in time, in the same equipment) was now necessary. Since any specific piece of equipment or location in the process train was then always used for the same operational stage, the formerly repeated filling, reacting, emptying, and cleaning operations in every piece of equipment were now eliminated. This was obviously much more efficient in terms of equipment usage. This type of operation, now called continuous processing, is in contrast to the earlier batch processing mode. However, the coordination of the now simultaneous operations connected together required much more accurate control of each operation to avoid the transmission of processing errors or upsets to downstream equipment. Fortunately, our basic knowledge of the inherent chemical and physical properties of these processes had also advanced along with the development of the needed equipment and
Fig. 2.10 The Purdue Enterprise Reference Architecture (PERA): hierarchical computer control structure for an industrial plant, running from the process itself (Level 0, all physical, chemical, or spatial transformations) through specialized dedicated digital controllers and direct digital control (Level 1), supervisory control (Level 2), and intra-area coordination (Level 3) up to production scheduling and operational management (Level 4a) and management data presentation (Level 4b), with communications to other areas, supervisory systems, and control systems at each level [77]
now allows us to adopt methodologies for assessing the quality and state of these processes during their operation, i.e., degree of completion, etc. Likewise, fortunately, our basic knowledge of the technology of automatic control and its implementing equipment advanced along with knowledge of the pneumatic and electronic techniques used to implement them. Pneumatic technology for the necessary control equipment was used almost exclusively from the original development of the technique in the 1920s until its replacement by the rapidly developing electronic techniques from the 1930s onward. This advanced type of equipment became almost totally electronic after the development of solid-state
electronic technologies as the next advance. Pneumatic techniques were then used only where severe fire or explosion hazards prevented the use of electronics. The overall complexity of the control systems for large plants made them early candidates for computer control almost as soon as the early digital computers became practical and affordable. The first computers for chemical plant and refinery control were installed in 1960, and they became quite prevalent by 1965. By now, computers are widely used in all large plant operations and in most small ones as well. If automation can be defined as the substitution of computer-based control systems for most, if not all, control
Fig. 2.11 Personnel task hierarchy in a large manufacturing plant, from corporate officer (strategy: define and modify objectives) and division manager (plans: implement objectives) through plant manager, department manager, and foreman down to the operator, with outputs and tasks ranging from operations management and supervisory control to direct control and sensor observation
systems previously based on human-aided mechanical or pneumatic systems, then for chemical and petroleum plant systems we can now truly say that they are fully automated, to a very high degree.

As indicated above, a most desired by-product of the automation of chemical and petroleum refining processes must be the replacement of human effort: first, that of directly handling the frequently dangerous chemical ingredients at the initiation of the process; second, that of personally monitoring and controlling the carrying out and completion of these processes; and finally, that of handling the resulting products. This eliminates the expense of employing personnel to carry out these tasks, and also prevents unnecessary accidents and injuries that might otherwise occur. The staff at chemical plants and petroleum refineries has thus been dramatically reduced in recent years. In
many locations this involves only a watchman role and an emergency maintenance function. This capability has in turn resulted in further improvements in overall plant design to take full advantage of it – a synergy effect. This synergy effect was next felt in the automation of the raw material acceptance practices and the product distribution methodologies of these plants. Many are now connected directly to their raw material sources and their customers by pipelines, thus totally eliminating special raw material and product handling and packaging. Again, computers are widely used in the scheduling, monitoring, and controlling of all operations involved here.

Finally, it has been noted that there is a hierarchical relationship between the unit automatic control systems of an industrial process plant and the duties of the successive levels of management in a large industrial plant
Fig. 2.12 Plant operational management hierarchical structure
from company management down to the final plant control actions [65–76]. It has also been shown that all actions normally taken by intermediary plant staff in this hierarchy can be formulated into a computer-readable form for all operations that do not involve innovation or other problem-solving actions by plant staff. Figures 2.10, 2.11, 2.12, and 2.13 (with Table 2.1) illustrate this hierarchical structure and its components. See more on the history of automation and control in Chs. 3 and 4; see further details on process industry automation in Ch. 31; on complex systems automation in Ch. 36; and on automation architecture for interoperability in Ch. 86.
2.3 Advances in Robotics and Automation: Historical Perspectives
(by Yukio Hasegawa) Historical perspectives are given on the impressive progress of automation. Automation, including robotics, has evolved by becoming useful and affordable. Methods have been developed to analyze and design better automation, and those methods have themselves also been automated. The most important issue in automation is to make every effort to pay attention to all the details.
Fig. 2.13 Abbreviated sketch to represent the structure of the Purdue Enterprise Reference Architecture (concept, definition, design – functional and detailed – construction and installation, and operations phases; functional and implementation views; unit, area, plant, and company or plant general management levels, with communications between areas and supervisory levels; the numbered areas 1–21 are described in Table 2.1)
The bodies of human beings are smaller than those of wild animals; our muscles, bones, and nails are smaller and weaker. Fortunately, however, human beings have larger brains and greater wisdom. Humans first learned how to use tools and then started using machines to perform necessary daily operations; without the help of these tools and machines we can no longer support our daily life normally. Technology is progressing at an extremely high speed. For instance, about half a century ago I bought a camera for my own use; at that time, the price of a conventional German-made camera was very high – as much as 6 months' income. A camera of similar quality now costs the equivalent of only 2 weeks' salary of a young person in Japan. Seiko Corporation started production and sales of the world's first quartz watch in Japan about 40 years ago. At that time, the price of the watch was about 400,000 Yen. People used to tell me that such high-priced watches could only be purchased by a limited group of people with high incomes, such as airline pilots and company owners. Today similar watches are sold in supermarkets for only 1000 Yen.
Table 2.1 Areas of interest for the architecture framework addressing development and implementation aids for automation studies (Fig. 2.13)

Area  Subjects of concern
1     Mission, vision, and values of the company, operational philosophies, mandates, etc.
2     Operational policies related to the information architecture and its implementation
3     Operational strategies and goals related to the manufacturing architecture and its implementation
4     Requirements for the implementation of the information architecture to carry out the operational policies of the company
5     Requirements for physical production of the products or services to be generated by the company
6     Sets of tasks, function modules, and macrofunction modules required to carry out the requirements of the information architecture
7     Sets of production tasks, function modules, and macrofunctions required to carry out the manufacturing or service production mission of the company
8     Connectivity diagrams of the tasks, function modules, and macrofunction modules of the information network, probably in the form of data flow diagrams or related modeling methods
9     Process flow diagrams showing the connectivity of the tasks, function modules, and macrofunctions of the manufacturing processes involved
10    Functional design of the information systems architecture
11    Functional design of the human and organizational architecture
12    Functional design of the manufacturing equipment architecture
13    Detailed design of the equipment and software of the information systems architecture
14    Detailed design of the task assignments, skills development training courses, and organizations of the human and organizational architecture
15    Detailed design of components, processes, and equipment of the manufacturing equipment architecture
16    Construction, checkout, and commissioning of the equipment and software of the information systems architecture
17    Implementation of organizational development, training courses, and online skill practice for the human and organizational architecture
18    Construction, checkout, and commissioning of the equipment and processes of the manufacturing equipment architecture
19    Operation of the information and control system of the information systems architecture, including its continued improvement
20    Continued organizational development and skill and human relations development training of the human and organizational architecture
21    Continued improvement of process and equipment operating conditions to increase quality and productivity, and to reduce costs involved for the manufacturing equipment architecture
Furthermore, we are nowadays moving toward the automation of information handling by using computers; for instance, at many railway stations it is now common to see unmanned ticket consoles. Telephone exchanges have become completely automated, and the cost of using telephone systems is now very low. In recent years, robots have become commonplace aids in many different environments. Robots are machines that carry out motions and information handling automatically. In the 1970s I was asked to start conducting research on robots. One day, the management of a Japanese company that wanted to start selling robots asked me to determine whether such robots could be used in Japan. After analyzing robot motions with a high-speed film analysis system, I reached the conclusion that the robot could be used in Japan as well as in the USA. After that work I developed a new motion analysis method named the robot predetermined time standard (RPTS). The RPTS method can be widely applied to robot operation system design and has contributed to many robot operation system design projects. In the USA, since the beginning of the last century, many pioneers of human operation rationalization have made significant contributions. In 1911, Frederick Taylor proposed the scientific management method, which was later reviewed
by the American Congress. Professor Gilbreth of Purdue University developed a new motion analysis method and contributed to the rationalization of human operations. Mr. Duncan of the WOFAC Corporation proposed a human predetermined time standard (PTS) method, which was applied to human operation rationalization worldwide. In the robotic field, those contributions are only part of the solution, and people have come to understand that mechanical and control engineering are additionally important aspects. Therefore, the analysis of human operations in robotic fields is combined with further analysis, design, and rationalization [78]. However, human operators play a very challenging role in operations; therefore, study of the work involved is more important than the robot itself, and I believe that industrial engineering is going to become increasingly important in the future [79]. Prof. Nof developed RTM, the robot time and motion computational method, which was applied to robot selection and program improvement for both stationary and mobile robots. Such techniques were then incorporated in ROBCAD, a computer-aided design system to automate the design and implementation of robot installations and applications. A number of years ago I had the opportunity to visit the USA to attend an international robot symposium. At that time the principle of "no hands in dies" was a big topic in America
due to a serious problem with guaranteeing the safety of metal stamping operations. People responsible for the safety of metal stamping operations could not decrease the accident rate in spite of their increasing efforts. The government decided that a new policy was needed: either fully automate stamping operations or use additional devices to hold and place workpieces so that operators' hands never enter the dies. The decision was lauded by many stamping robot manufacturers. Many expected that about 50,000 stamping robots would be sold in the American market within a few years; at that time 700,000 stamping presses were in use in the USA. In Japan, the forecast was adjusted to 20,000 units. That figure was not small, and we therefore immediately organized a stamping robot development project team with government financial support. The project team was composed of ten people: three robot engineers, two stamping engineers, a stamping technology consultant, and four students. I also invited an expert who had previously been in charge of stamping robot development projects in Japan. A few years later, sales of stamping robots started and were initially very good (over 400 robots were sold within a few years). However, many of the robots could not actually be used and were instead stored as inactive machines. I asked the person in charge for the reason for this failure and was told that the designers had concentrated too much on robot hardware development and had overlooked analysis of the conditions of the stamping operations. Afterward, our project team analyzed the working conditions of the stamping operations very carefully and classified them into 128 types. Finally, the project team developed an operation analysis method for metal stamping operations. Within a few years, fortunately, by applying the method we were able to decrease the number of metal stamping accidents from 12,000 per year to fewer than 4000 per year. Besides metal stamping operations, we worked on research projects for forging and casting operations to promote labor welfare. Through those research endeavors we reached the conclusion that careful analysis of the operation is the most important issue for obtaining good results in any type of operation [80]. I believe, from my experience, that the most important issue – not only in robot engineering but in all automation – is to make every effort to pay attention to all the details.
References
1. Mayr, O.: The Origins of Feedback Control. MIT, Cambridge (1970)
2. Gibbs, F.W.: The furnaces and thermometers of Cornelius Drebbel. Ann. Sci. 6, 32–43 (1948)
3. Mead, T.: Regulators for wind and other mills, British Patent (Old Series) 1628 (1787)
4. Dickinson, H.W., Jenkins, R.: James Watt and the Steam Engine. Clarendon Press, Oxford (1927)
5. Bennett, S.: A History of Control Engineering 1800–1930. Peregrinus, Stevenage (1979)
6. Airy, G.B.: On the regulator of the clock-work for effecting uniform movement of equatorials. Mem. R. Astron. Soc. 11, 249–267 (1840)
7. Maxwell, J.C.: On governors. Proc. R. Soc. 16, 270–283 (1867)
8. Routh, E.J.: A Treatise on the Stability of a Given State of Motion. Macmillan, London (1877)
9. Bissell, C.C.: Stodola, Hurwitz and the genesis of the stability criterion. Int. J. Control. 50(6), 2313–2332 (1989)
10. Hurwitz, A.: Über die Bedingungen, unter welchen eine Gleichung nur Wurzeln mit negativen reellen Teilen besitzt. Math. Ann. 46, 273–280 (1895), in German
11. Bompiani, E.: Sulle condizioni sotto le quali un'equazione a coefficienti reali ammette solo radici con parte reale negativa. G. Mat. 49, 33–39 (1911), in Italian
12. Bissell, C.C.: The classics revisited – Part I. Meas. Control. 32, 139–144 (1999)
13. Bissell, C.C.: The classics revisited – Part II. Meas. Control. 32, 169–173 (1999)
14. Tolle, M.: Die Regelung der Kraftmaschinen, 3rd edn. Springer, Berlin (1922), in German
15. Mayr, O.: Feedback Mechanisms. Smithsonian Institution Press, Washington, DC (1971)
16. Hughes, T.P.: Elmer Sperry: Inventor and Engineer. Johns Hopkins University Press, Baltimore (1971)
17. Bennett, S.: A History of Control Engineering 1800–1930, p. 137. Peregrinus, Stevenage (1979)
18. Minorsky, N.: Directional stability of automatically steered bodies. Trans. Inst. Naval Archit. 87, 123–159 (1922)
19. Bennett, S.: A History of Control Engineering 1930–1955. Peregrinus, Stevenage (1993)
20. Heaviside, O.: Electrical Papers. Chelsea/New York (1970), reprint of the 2nd edn
21. Bennett, S.: A History of Control Engineering 1800–1930. Peregrinus, Stevenage (1979), Chap. 6
22. Bissell, C.C.: Karl Küpfmüller: a German contributor to the early development of linear systems theory. Int. J. Control. 44, 977–989 (1986)
23. Black, H.S.: Stabilized feedback amplifiers. Bell Syst. Tech. J. 13, 1–18 (1934)
24. Nyquist, H.: Regeneration theory. Bell Syst. Tech. J. 11, 126–147 (1932)
25. Bode, H.W.: Relations between amplitude and phase in feedback amplifier design. Bell Syst. Tech. J. 19, 421–454 (1940)
26. Bode, H.W.: Network Analysis and Feedback Amplifier Design. Van Nostrand, Princeton (1945)
27. Hazen, H.L.: Theory of servomechanisms. J. Frankl. Inst. 218, 283–331 (1934)
28. Leonhard, A.: Die Selbsttätige Regelung in der Elektrotechnik. Springer, Berlin (1940), in German
29. Bissell, C.C.: The first all-union conference on automatic control, Moscow, 1940. IEEE Control. Syst. Mag. 22, 15–21 (2002)
30. Bissell, C.C.: A.A. Andronov and the development of Soviet control engineering. IEEE Control. Syst. Mag. 18, 56–62 (1998)
31. Mindell, D.: Between Human and Machine. Johns Hopkins University Press, Baltimore (2002)
32. Bissell, C.C.: Textbooks and subtexts. IEEE Control. Syst. Mag. 16, 71–78 (1996)
33. Schmidt, H.: Regelungstechnik – die technische Aufgabe und ihre wissenschaftliche, sozialpolitische und kulturpolitische Auswirkung. Z. VDI 4, 81–88 (1941), in German
34. Bissell, C.C.: Control engineering in the former USSR: some ideological aspects of the early years. IEEE Control. Syst. Mag. 19, 111–117 (1999)
35. Dalmedico, A.D.: Early developments of nonlinear science in Soviet Russia: the Andronov school at Gorky. Sci. Context. 1/2, 235–265 (2004)
36. Hall, A.C.: The Analysis and Synthesis of Linear Servomechanisms (Restricted Circulation). The Technology Press, Cambridge (1943)
37. Hall, A.C.: Application of circuit theory to the design of servomechanisms. J. Frankl. Inst. 242, 279–307 (1946)
38. Bennett, S.: A History of Control Engineering 1930–1955, p. 142. Peregrinus, Stevenage (1993)
39. James, H.J., Nichols, N.B., Phillips, R.S.: Theory of Servomechanisms, Radiation Laboratory, vol. 25. McGraw-Hill, New York (1947)
40. Bissell, C.C.: Pioneers of control: an interview with Arnold Tustin. IEE Rev. 38, 223–226 (1992)
41. Whiteley, A.L.: Theory of servo systems with particular reference to stabilization. J. Inst. Electr. Eng. 93, 353–372 (1946)
42. Bissell, C.C.: Six decades in control: an interview with Winfried Oppelt. IEE Rev. 38, 17–21 (1992)
43. Bissell, C.C.: An interview with Hans Sartorius. IEEE Control. Syst. Mag. 27, 110–112 (2007)
44. Evans, W.R.: Control system synthesis by root locus method. Trans. AIEE. 69, 1–4 (1950)
45. Andronov, A.A., Khaikin, S.E.: Theory of Oscillators. Princeton University Press, Princeton (1949), translated and adapted by S. Lefschetz from Russian 1937 publication
46. MacColl, L.A.: Fundamental Theory of Servomechanisms. Van Nostrand, Princeton (1945)
47. Bennett, S.: The emergence of a discipline: automatic control 1940–1960. Automatica. 12, 113–121 (1976)
48. Feigenbaum, E.A.: Soviet cybernetics and computer sciences, 1960. Commun. ACM. 4(12), 566–579 (1961)
49. Bellman, R.: Dynamic Programming. Princeton University Press, Princeton (1957)
50. Kalman, R.E.: Contributions to the theory of optimal control. Bol. Soc. Mat. Mex. 5, 102–119 (1960)
51. Kalman, R.E.: A new approach to linear filtering and prediction problems. Trans. ASME J. Basic Eng. 82, 34–45 (1960)
52. Kalman, R.E., Bucy, R.S.: New results in linear filtering and prediction theory. Trans. ASME J. Basic Eng. 83, 95–108 (1961)
53. Pontryagin, L.S., Boltyansky, V.G., Gamkrelidze, R.V., Mishchenko, E.F.: The Mathematical Theory of Optimal Processes. Wiley, New York (1962)
54. Williams, T.J.: Computer control technology – past, present, and probable future. Trans. Inst. Meas. Control. 5, 7–19 (1983)
55. Davis, C.A.: Industrial Electronics: Design and Application, p. 458. Merrill, Columbus (1973)
56. Williams, T.J., Nof, S.Y.: Control models. In: Salvendy, G. (ed.) Handbook of Industrial Engineering, 2nd edn, pp. 211–238. Wiley, New York (1992)
57. Willems, J.C.: In control, almost from the beginning until the day after tomorrow. Eur. J. Control. 13, 71–81 (2007)
58. Brown, G.S., Campbell, D.P.: Instrument engineering: its growth and promise in process-control problems. Mech. Eng. 72, 124–127 (1950)
59. Brown, G.S., Campbell, D.P.: Instrument engineering: its growth and promise in process-control problems. Mech. Eng. 72, 136 (1950)
60. Brown, G.S., Campbell, D.P.: Instrument engineering: its growth and promise in process-control problems. Mech. Eng. 72, 587–589 (1950), discussion
61. Wiener, N.: Cybernetics: Or Control and Communication in the Animal and the Machine. Wiley, New York (1948)
62. Noble, D.F.: Forces of Production. A Social History of Industrial Automation. Knopf, New York (1984)
63. Nof, S.Y.: Collaborative control theory for e-Work, e-Production and e-Service. Annu. Rev. Control. 31, 281–292 (2007)
64. Johannesen, G.: From control to cognition: historical views on human engineering. Stud. Inf. Control. 16(4), 379–392 (2007)
65. Li, H., Williams, T.J.: Interface design for the Purdue Enterprise Reference Architecture (PERA) and methodology in e-Work. Prod. Plan. Control. 14(8), 704–719 (2003)
66. Rathwell, G.A., Williams, T.J.: Use of Purdue reference architecture and methodology in industry (the Fluor Daniel example). In: Bernus, P., Nemes, L. (eds.) Modeling and Methodologies for Enterprise Integration. Chapman Hall, London (1996)
67. Williams, T.J., Bernus, P., Brosvic, J., Chen, D., Doumeingts, G., Nemes, L., Nevins, J.L., Vallespir, B., Vliestra, J., Zoetekouw, D.: Architectures for integrating manufacturing activities and enterprises. Control. Eng. Pract. 2(6), 939–960 (1994)
68. Williams, T.J.: One view of the future of industrial control. Control. Eng. Pract. 1(3), 423–433 (1993)
69. Williams, T.J.: A reference model for computer integrated manufacturing (CIM). In: Int. Purdue Workshop Industrial Computer Systems. Instrument Society of America, Pittsburgh (1989)
70. Williams, T.J.: The Use of Digital Computers in Process Control, p. 384. Instrument Society of America, Pittsburgh (1984)
71. Williams, T.J.: 20 years of computer control. Can. Control. Instrum. 16(12), 25 (1977)
72. Williams, T.J.: Two decades of change: a review of the 20-year history of computer control. Can. Control. Instrum. 16(9), 35–37 (1977)
73. Williams, T.J.: Trends in the development of process control computer systems. J. Qual. Technol. 8(2), 63–73 (1976)
74. Williams, T.J.: Applied digital control – some comments on history, present status and foreseen trends for the future. In: Adv. Instrum., Proc. 25th Annual ISA Conf, p. 1 (1970)
75. Williams, T.J.: Computers and process control. Ind. Eng. Chem. 62(2), 28–40 (1970)
76. Williams, T.J.: The coming years... The era of computing control. Instrum. Technol. 17(1), 57–63 (1970)
77. Williams, T.J.: The Purdue Enterprise Reference Architecture. Instrument Society of America, Pittsburgh (1992)
78. Hasegawa, Y.: Analysis of Complicated Operations for Robotization, SME Paper No. MS79-287 (1979)
79. Hasegawa, Y.: Evaluation and economic justification. In: Nof, S.Y. (ed.) Handbook of Industrial Robotics, pp. 665–687. Wiley, New York (1985)
80. Hasegawa, Y.: Analysis and classification of industrial robot characteristics. Ind. Robot Int. J. 1(3), 106–111 (1974)
Christopher Bissell graduated from Jesus College, Cambridge, in 1974 and obtained his PhD from the Open University in 1993, where he had been employed since 1980. He wrote extensive distance-teaching material on telecommunications, control engineering, digital media, and other topics. His major research interests were the history of technology and engineering education. Dr. Bissell passed away in 2017.
Theodore J. Williams was professor emeritus of engineering at Purdue University. He received his BS, MS, and PhD in chemical engineering from Pennsylvania State University and an MS in electrical engineering from Ohio State University. Before joining Purdue, he was senior engineering supervisor at Monsanto Chemical, St. Louis. He served as chair of the IFAC/IFIP Task Force, founding chair of IFIP TC-5, and president of AACC, ISA, and AFIPS. Among his honors are the Sir Harold Hartley Silver Medal, the Albert F. Sperry Founder Award, the ISA Gold Medal, and the Lifetime Achievement Award from ISA. Professor Williams passed away in 2013.
Yukio Hasegawa was professor of the System Science Institute at Waseda University, Tokyo, Japan. From 1983 he led construction robotics research as Director of the Waseda Construction Robot Research Project (WASCOR), which influenced automation in construction and in other fields. He received the prestigious first Engelberger Award in 1977 from the American Robot Association for his distinguished pioneering work in robotics and robot ergonomics since the infancy of Japanese robotics. Among his numerous international contributions to robotics and automation, Professor Hasegawa helped, as a visiting professor, to build the Robotics Institute at EPFL (École Polytechnique Fédérale de Lausanne) in Switzerland. He passed away in 2016.
3 Social, Organizational, and Individual Impacts of Automation
Françoise Lamnabhi-Lagarrigue and Tariq Samad
Contents
3.1 Introduction
3.2 Impact of Automation and Control: Value Chain Considerations
3.3 A Historical Case Study of Automation and Its Impact: Commercial Aviation
3.4 Social, Organizational, and Individual Concerns
3.4.1 Unintended Consequences of Automation Technology
3.4.2 Sustainability and the Limits of Growth
3.4.3 Inequities of Impact
3.4.4 Ethical Challenges with Automation
3.5 Emerging Developments in Automation: Societal Implications
3.5.1 Smart Cities
3.5.2 Assistive, Wearable, and Embedded Devices
3.5.3 Autonomous Weapon Systems
3.5.4 Automotive Autonomy
3.5.5 Renewable Energy and Smart Grids
3.6 Some Approaches for Responsible Innovation
3.6.1 Ethics Guidelines and Ethically Aligned Designs
3.6.2 Conceptual Approaches
3.6.3 Ethical Training
3.6.4 Standards, Policies, and Enforcement
3.7 Conclusions and Emerging Trends
References
F. Lamnabhi-Lagarrigue, Laboratoire des Signaux et Systèmes, CNRS, CentraleSupelec, Paris-Saclay Univ., Paris, France; e-mail: [email protected]
T. Samad, Technological Leadership Institute, University of Minnesota, Minneapolis, MN, USA; e-mail: [email protected]
Abstract
Automation and control technologies have been a key enabler of past industrial revolutions and thus of the world we live in today, with substantial footprints in sectors
as different as aviation, automotive, healthcare, industrial processes, the built infrastructure, and power grids. Humanity has benefited in multifarious ways as a consequence, through increased physical, social, and economic well-being. Yet, these same developments have also been at the center of concerns related to sustainability, equity, and ethics. These concerns are being exacerbated with emerging technologies such as artificial intelligence, machine learning, pervasive sensing, internet of things, nanotechnologies, neurotechnologies, biotechnologies, and genetic technologies, which are being incorporated in a new generation of engineered systems: smart cities, autonomous vehicles, advanced weapon systems, assistive devices, and others. To ensure responsible innovation in these areas, new conceptual approaches are needed and more attention must be paid to ethical and related aspects. Further advances in automation and control will be required to sustain humanity and the planetary ecosystem, but the social, organizational, and individual impacts of automation must be kept in mind as these advances are pursued.
Keywords
Automation · Control technologies · Systems and control · Cyber-physical-human systems · Societal challenges · Societal impact · Environmental impact · Responsible innovation · Education · Ethics
3.1 Introduction
Automation is embedded throughout the fabric of our world today. It has had transformational impacts on people, on industrial and governmental organizations, and on society as a whole. The quality of life enjoyed by many of us, the usually safe and reliable operation of complex engineered systems, the products and conveniences we have become used to, longer lifespans, and greater prosperity – advances
in automation have had much to do with these enhancements of the human condition. Automation is a technological specialization that impacts all sectors of society and industry [4, 23, 36]. When well designed and operated, automation can result in substantial benefits. These benefits are broad ranging. Several categories can be identified that, overall, speak to the pervasiveness of the positive impact of automation. First, automation can enable human labor to be safer and more productive; supported by automation, a manufacturing plant (for example) can produce the same output with fewer workers. Repetitive, “turn-the-crank” tasks can be taken up by machines. Unsafe working environments can be addressed with automation, reducing accidents and adverse health effects for workers. For the owners and managers of enterprises, reducing human workers can be financially advantageous too. Labor productivity and safety extends to the domestic environment as well, where automation has made such tasks as cleaning, laundry, food preparation, and home temperature control easier. We can also note that better automation improves safety and health not only of the workers involved in an enterprise or system but of others affected by it as well. For example, better automation to support aircraft pilots provides safety benefits both to them and to the flying public. Automation also enables efficiency: The use of energy and materials can be reduced. Such reduction directly reduces operational costs, so that enterprises can do more with less and accrue financial benefits accordingly. But it would be a mistake to correlate reduced resource use solely with financial savings. Such reduction also has environmental benefits, and these are increasingly critical today. Because of automation, operations can be more sustainable, be less polluting, and result in reduced greenhouse gas emissions. Finally, automation can dramatically improve the quality and quantity of outputs. Yields can be increased, variances decreased, downtimes reduced, and reliability enhanced. Higher production rates have often been a motivation for investing in automation. Automation is also seen as a solution for improving quality, particularly when precision and consistency beyond the capabilities of human workers are required. Despite these benefits, both realized and potential, automation, no less than other areas of technology (but perhaps more, given its definitional connection with human workers and other stakeholders), cannot be seen as neutral in its impacts on society, nor as impervious to influences from society on its development and deployment. The different categories of benefits noted above can be in conflict with each other – reducing emissions, for example, may impose additional costs in terms of equipment and processes. Thus the societal impacts of automation are not universally a good for all, even if on average the net outcome may be positive.
This chapter reviews the broader impacts that automation has had and continues to have, and it also discusses prospects for the future. The rest of this chapter is organized as follows. First, in Sect. 3.2, we discuss the footprint of automation, highlighting the value chains involved that engage (and impact) numerous stakeholders. Although examples are discussed throughout this chapter, in Sect. 3.3, we present a case study in which we review the evolution of technology in one specific automation domain, commercial aviation, and we point out some of the principal human, organizational, and social impacts that have resulted. Some of the concerns arising from automation are discussed in Sect. 3.4. Ethical concerns are discussed in particular. Section 3.5 reviews a number of emerging domains in the development of which automation is playing a prominent role; these domains encompass transportation, health, infrastructure, and other sectors. Some recommendations for how future research and development can be more ethically aware are presented in Sect. 3.6, before concluding remarks.
3.2 Impact of Automation and Control: Value Chain Considerations
The impacts of automation are gained directly by people, organizations, and society. In addition, there are indirect impacts as well. Automation is an enabling technology for developing automation products and services faster, better, and cheaper. In that respect, automation can be seen as an “exponential” technology in terms of its compounding effect. The enablement can extend through a value chain of products and services. An example can help clarify the distinction between direct and indirect impacts. Thermostats for home temperature control are devices that automate the regulation of indoor temperature in living spaces. In the absence of thermostats, occupants of homes would have to rely on manually adjusting heating or cooling devices – furnaces, fans, windows, doors, etc. Thermostats thus save human labor. If properly used, they can also reduce energy use. Because they operate automatically, thermostats also help keep occupants more comfortable than if control had to be done through manual actions. Thermostats are automation devices and benefit their users (and others) directly. But thermostats are also products of automation. They are manufactured in factories, and these factories rely on sensors, robotic machinery, assembly lines, programmable logic controllers, manufacturing execution systems, and a host of other automation components and systems. This automation reduces human labor and will generally also reduce energy and material use, thereby reducing production costs. The automation will also help the manufacturer make thermostats that are closer to specification, minimizing variance and enhancing production output.
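To make the feedback idea behind a thermostat concrete, the following minimal Python sketch simulates on/off control with a hysteresis band; the thermal parameters, setpoint, and switching band are illustrative assumptions, not data about any particular product:

# Minimal on/off (hysteresis) thermostat simulation.
# All numerical values are illustrative assumptions.

def simulate_thermostat(setpoint=21.0, outdoor=5.0, hours=6.0, dt=60.0):
    tau = 4 * 3600.0            # assumed thermal time constant of the room [s]
    heat_rate = 8.0 / 3600.0    # assumed heating rate with the furnace on [deg C/s]
    band = 0.5                  # hysteresis half-width [deg C]
    temp, heater_on = 15.0, False
    trace = []
    for _ in range(int(hours * 3600 / dt)):
        # Feedback law: switch the furnace based on the measured temperature.
        if temp < setpoint - band:
            heater_on = True
        elif temp > setpoint + band:
            heater_on = False
        # First-order room model: heat loss to outdoors plus furnace input.
        temp += (-(temp - outdoor) / tau + (heat_rate if heater_on else 0.0)) * dt
        trace.append((temp, heater_on))
    return trace

if __name__ == "__main__":
    history = simulate_thermostat()
    duty = sum(on for _, on in history) / len(history)
    print("final temperature %.1f C, heater duty cycle %.0f%%" % (history[-1][0], 100 * duty))

The duty cycle printed at the end hints at the energy-saving role of the setpoint and switching band: the controller only runs the furnace as long as the measured temperature demands it.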
The automation components and systems used in factories are themselves produced in other factories, and the chain extends to upstream factories and other systems, all the way to resource extraction/harvesting (e.g., petroleum for the plastic components, energy generation from renewable and fossil sources, and other primary resources that also rely on automation for their exploitation). In fact, the chain is more like a web. Factories, energy generation, resource extraction, and other systems are all networked together, forming the ecosystem that our modern economy and industrial infrastructure represents. This picture, expansive though it may seem at first glance, is still not comprehensive – the full ecosystem is a much broader network. For example, the value chain sketched above is only for the manufacturing of thermostats. But from a life cycle perspective, other functions and processes – other value webs – are involved. Thus, the design of a thermostat requires computer-aided design tools, laboratories, and test equipment, and, at the other end of the life cycle, recycling, incineration, and landfill deposition are part of the big picture as well. Furthermore, the human impacts of today’s industrially driven economy are multifarious too and relate to the value network in multiple ways. Organizations associated in one form or another with automation technologies employ a significant – if hard to quantify – fraction of total global employment, and virtually all people are users of automation technology (for example, two-thirds of the population of the world own mobile telephones, each of which is a product of automation and incorporates automation systems such as control loops [5]). As technology, and in particular automation technology, advances, the connections with human elements become increasingly intricate. These connections are being explored in the emerging field of cyber-physical-human systems (CPHS). One classification of CPHS comprises the following [35]: (1) human-machine symbiosis (e.g., smart prosthetics and exoskeletons); (2) humans as supervisors/operators of complex engineering systems (e.g., aircraft pilots, car drivers, process plant operators, and robotic surgery operators); (3) humans as control agents in multiagent systems (e.g., road automation, traffic management, and electric grid); (4) humans as elements in controlled systems (e.g., home comfort control and home security systems); and (5) humans work in parallel with digital entities (e.g., digital twins) in both physical and virtual spaces, i.e., parallel intelligence. These CPHS are being designed to address major technological applications that contribute to human welfare in a wide range of domains, including transportation, aerospace, health and medicine, robotics, manufacturing, energy, and the environment. This perspective is profoundly different from the conventional understanding where humans are treated as isolated elements who operate or benefit from
the system. Humans are no longer passive consumers or actors. They are empowered decision-makers and drive the evolution of the technology. Adding this human dimension makes the footprint of automation even more extensive, and therefore its impact as well.
3.3 A Historical Case Study of Automation and Its Impact: Commercial Aviation
As an example of how automation technologies have resulted in increasingly sophisticated engineered systems, which have thereby affected society in substantial, even revolutionary ways, in this section we review advances in the automation of commercial aviation. The first flight of a heavier-than-air powered aircraft was in 1903 by the Wright brothers. In their Wright Flyer, automation was nonexistent. The Wright brothers relied on their physiological sensors (their eyes, ears, and vestibular apparatus), processed these signals in their brains, and manipulated the control surfaces by their bodily actions. The first autopilot was demonstrated in 1914 [39] and relied on the Sperry gyroscope. Autopilots made their commercial aviation debut with the Boeing 247 in 1933 [6]. With the autopilot, what we now call "inner-loop" flight control was automated. Sensing, control computations, and actuation at the lowest level did not require direct pilot engagement. Instead, the pilot could indicate the desired state of the aircraft (e.g., altitude, pitch, and roll) and the inner-loop control would act to follow the pilot's desired-state input. The next major step in the evolution, initiated with the Boeing 707 and the McDonnell-Douglas DC-8, was the "handling qualities controller." With this automation innovation, the control system would maintain the aircraft's heading, and a new heading command would result in a state trajectory to effect the new heading, taking into account factors such as passenger comfort, aircraft capabilities, and fuel consumption. Today, commercial aircraft employ flight management systems (FMSs). Through an FMS, a pilot can provide a planned flight route in terms of space-time waypoints, and under normal conditions the airplane will follow the waypoint sequence through its automation [21]. Figure 3.1 illustrates this architecture, whose loops-within-loops structure recapitulates the historical evolution. How has the technological evolution affected societal development and people? The impacts have been manifold. We examine how automation in aviation has changed the roles of cockpit crew and the broader society. Advances in automation (including in flight control as outlined above but also in other areas such as communications) initially led to an increase in the number of personnel in the cockpit. At its peak, in the 1940s and 1950s, five
Fig. 3.1 Today's flight control system architecture: the planned flight route enters the flight management system, which issues heading commands to the "handling qualities" controller; that controller provides the desired aircraft state to the "inner-loop" flight controller, which commands the control surface movements of the aircraft structure, with the aircraft state fed back via sensors. The role of the human pilot has steadily been "upgraded" since the Wright Flyer. ([37]; © Elsevier B.V)
crew members were involved, in one form or another, in the operation of the aircraft: the pilot and copilot, who were responsible for the real-time control of the airplane; the radio operator, responsible for communications with the ground; the flight engineer, who was responsible for engines and fuel management; and the navigator, who would monitor the position of the aircraft and, if needed, replan the flight route. The latter three roles were made obsolete as a result of technological advances (for example, GPS navigation made the flight navigator role unnecessary). Today’s commercial aircraft only employ a pilot and copilot in the cockpit. Thus, as in many other fields, the development of automation resulted in the termination of some jobs and job categories. However, at the same time and also driven by technological advances, the business output of commercial aviation increased substantially, leading to many more jobs overall. From 1958 to 1996, for example, employment in commercial aviation (not just cockpit crew) grew fourfold [15]. As noted below, this quadrupling has gone along with a much larger increase in the business output: “Over time, fewer employees are hired for a given amount of additional business because technological and operational progress allows for more efficient use of both old and new employees.” Similar increases in labor efficiency have occurred across many technology-enabled sectors, not only aviation. The impacts on the general population of the growth of commercial aviation have indeed been dramatic. Over the same period of 1958–1996, passenger miles flown in the USA grew about 13-fold. Flying has become a commonplace experience. Even until 1971, less than half of US adults had traveled by an airliner. In 1997, that figure was 81%. Between 1960 and 1996, the output of the air transport industry (passenger-miles and cargo) increased 16-fold compared to a 3.6X increase for the entire business sector.
There is another critical improvement that the air travel industry has seen over time, with automation as its central enabler: commercial flight is considerably safer than it used to be. In terms of fatalities per revenue-passenger-kilometer, the rate has decreased 81-fold since 1970, to a five-year average of 40 fatalities per trillion revenue-passenger-kilometers. A number of specific automation subsystems and capabilities implemented in commercial aviation have enabled this improvement in safety and reliability. These include Automatic Dependent Surveillance-Broadcast (ADS-B), the Traffic Alert and Collision Avoidance System (TCAS), and Terrain Awareness and Warning Systems (TAWS). See [16] for a summary of recent developments and future challenges in aircraft control and air traffic management. But whereas the substantial overall automation-enabled improvement in aviation safety is unquestionable (see Fig. 3.2), automation can also cause accidents if poorly designed or incorrectly used. Examples include the crash of American Airlines Flight 965 in Cali, Colombia, in 1995 [32], and the crashes of two Boeing 737 MAX aircraft in Asia and Africa in 2018 and 2019 [2, 20].
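To make the loops-within-loops structure of Fig. 3.1 concrete, the Python sketch below nests a waypoint-following outer loop around a heading ("handling qualities") loop and an inner bank-angle loop; the gains, simplified aircraft model, and waypoints are hypothetical illustrations, not any real FMS or autopilot law:

# Illustrative cascaded guidance and control loops (cf. Fig. 3.1).
# Gains, dynamics, and waypoints are hypothetical, for explanation only.
import math

G, V, DT = 9.81, 60.0, 0.1       # gravity [m/s^2], airspeed [m/s], time step [s]

def outer_loop(pos, waypoint):
    """Flight-management level: desired heading toward the next waypoint."""
    return math.atan2(waypoint[1] - pos[1], waypoint[0] - pos[0])

def handling_loop(psi, psi_des):
    """'Handling qualities' level: commanded bank angle from heading error."""
    err = math.atan2(math.sin(psi_des - psi), math.cos(psi_des - psi))
    return max(-0.4, min(0.4, 1.5 * err))      # bank limited to about 23 deg

def inner_loop(phi, phi_cmd):
    """Inner loop: control-surface (roll) command from bank-angle error."""
    return 2.0 * (phi_cmd - phi)

def fly(waypoints, capture=500.0, max_steps=200000):
    x, y, psi, phi = 0.0, 0.0, 0.0, 0.0
    for wp in waypoints:
        for _ in range(max_steps):
            if math.hypot(wp[0] - x, wp[1] - y) < capture:
                break                               # waypoint reached, go to next one
            phi_cmd = handling_loop(psi, outer_loop((x, y), wp))
            phi += inner_loop(phi, phi_cmd) * DT    # simplified roll response
            psi += (G / V) * math.tan(phi) * DT     # coordinated-turn kinematics
            x += V * math.cos(psi) * DT
            y += V * math.sin(psi) * DT
    return x, y

if __name__ == "__main__":
    print("final position: %.0f m, %.0f m" % fly([(4000.0, 0.0), (4000.0, 6000.0)]))

Each loop works only with the reference handed down by the loop above it, which mirrors how the pilot's role has migrated upward from surface movements to route planning.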
3.4 Social, Organizational, and Individual Concerns
Although advances in automation have been a key contributor to much of modern life and quality of life, they have also raised concerns in various respects. If the technology-society nexus was complicated before, with the emerging technologies of today a compounding of potential interplays is imminent. With self-driving cars, facial recognition systems, gene editing technology, quantum encryption devices, robotics systems, exoskeletons, and other new and emerging
Fig. 3.2 Improvement in aviation safety: reduction in fatalities per trillion revenue-passenger-kilometers (five-year averages). (Adapted from: Javier Irastorza Mediavilla [29])
Often, it is the combination of automation and human operators that results in system failure. Examples are numerous, including the Chernobyl [44] and Three Mile Island nuclear plant accidents [11] as well as the aviation accidents noted above. The pervasiveness of unintended consequences in human-machine automation [42] has led to the hypothesis of "risk homeostasis," which states that people have an inherent level of risk sensitivity; if technologies are implemented to reduce risk, people will react in ways that increase the risk. Advances in automotive safety provide some evidence for the hypothesis. Thus, if safety technologies such as antilock brakes are implemented, people will start to brake later or otherwise change their driving habits to negate the improvement [50].
3.4.2 technologies, the concerns go well beyond loss of employment and increasing inequality. The evolution of automation is rapidly resulting in, or exacerbating, major concerns about societal impacts, beyond the benefits noted elsewhere in this chapter. Many of the consequences of emerging technologies are unknown or poorly understood. In light of such concerns, Lin and Allhoff [24] raise the question whether there should be no restrictions, some restrictions, or a total ban on the development of a concept.
3.4.1
Unintended Consequences of Automation Technology
Automation and control technologies are implemented with objectives in mind that relate to improvements in one or more desired parameters, such as costs, outputs, emissions, and productivity. But it is hard to predict with any certainty what the outcome will be of a new technology [7]. Especially when the technology involves human operators or users, surprising feedback effects can occur that may offset, negate, or even reverse the potential positive impacts. This topic of “unintended consequences” is discussed at length in [47], with numerous examples – such as that companies that downsize expecting improved worker productivity to make up for fewer workers can find productivity decreasing. Complex systems can also have hidden bugs that may not manifest themselves until well after the system has been designed, installed, and commissioned. In 1990, the AT&T long-distance network in the USA collapsed, with about 50% of calls failing to go through over a period of about nine hours. AT&T lost more than $60 million in unconnected calls, and business nationwide had unmeasurable financial losses. Ultimately, the cause was traced to an incorrect “break” statement in a C-language program [9].
Sustainability and the Limits of Growth
Technology, in general, and automation, in particular, are growth engines. They allow us to do more with less, to create more products at lower cost, and to extract more resources and consume more energy, even if the consumption is more efficient. In technological and economic circles, growth in itself is often taken to be a desideratum in its own right. But untrammeled growth is unsustainable. Our resources – whether minerals, clean water and air, or energy supplies – are finite. Marketing machinery to stimulate continuously increasing demand, resource extraction systems (enabled by automation) to harvest the raw materials needed to satisfy the demand, and enterprises geared to increasing production (enabled by automation) in alignment with the demand may lead to good things in the near term but depletion of many good things soon thereafter. As Edward Abbey [1] said, “Growth for the sake of growth is the ideology of the cancer cell.” The concern about unfettered growth was voiced by Meadows et al. in their 1972 book [27], The Limits to Growth. The book included simulation results based on computer models for resource consumption that suggested the message of the title. An updated version, three decades later [28], reinforces the message and emphasizes the need for holistic understanding and structural change with an emphasis on feedback and dynamics: “The third way to respond is to work on the underlying causes, to step back and acknowledge that the human socioeconomic system as currently structured is unmanageable, has overshot its limits, and is headed for collapse, and, therefore, seek to change the structure of the system . . . . In systems terms, changing structure means changing the feedback structure, the information links in a system: the content and timeliness of the data that actors in the system have to work with, and the ideas, goals, incentives, costs, and feedbacks that motivate or constrain behavior” (pp. 236–237, italics in the original).
3
66
F. Lamnabhi-Lagarrigue and T. Samad
Automation and control have been central to growth in modern times – growth of economies, population, and prosperity – and they are thus implicated in concerns of unsustainability of growth. But automation and control can be part of the solution as well, and indeed as disciplines that are the repository of expertise in feedback structures and their effects, they will be crucial in creating a more sustainable and equitable future.
3.4.3
In some cases, these impacts and influences of automation can be controlled, by incorporating ethical principles in the design methodologies for complex systems as well as in their operational algorithms. As an example, in the context of Industry 4.0, [48] addresses the engineering of ethical behaviors in autonomous industrial cyber-physical-human systems with ethical controllers “embedded into these autonomous systems, to enable their successful integration in the society and its norms.”
Inequities of Impact 3.4.4
The impacts and influences of automation can be beneficial or adverse, but labeling a specific subfield or product of automation as a force for good or ill is ill-advised. It is the web of influences and the domino effects involved that make absolute determinations of the value of technology complicated. Disagreements may also arise on whether a specific instance of use is for the betterment or not of humanity. A military armament – say a cruise missile, reliant on automation for its guidance and control – may be justified by some on the basis of the putative evil nature of its target and may be vilified by others because of a different perception of the target, and by others yet based on a general distrust of such destructive, killing devices, regardless of the intention of a specific use. The global society, through nongovernmental organizations such as the United Nations, could be said to have come to a consensus on some uses of military technology – antipersonnel mines, cluster bombs, and poison gas come to mind. The consensus, however, is not universal. But the ambivalence on the morality and ethics of technology is not limited to military systems. Factory automation is another example that can be cited. From water mills for the milling of flour to sewing machines for clothing, to assembly lines in automobile manufacturing, and to today’s Industry 4.0, it is easy to point to the advantages that workers and society have gained, but in many cases, at least in the short term, some people and segments of society have suffered as well. Today, automation in manufacturing may be seen as a boon by companies because of its potential for cost reduction – and these companies will claim with some justification that the end result will be cheaper products for people and businesses to buy, thereby benefiting society and not just the corporations and their employees. On the other hand, automation will generally result in job losses, financial hardship for the former workers, and economic downturns in communities, and even if these consequences are felt in specific geographies, among specific skill sets, and over a limited time horizon, they are nonetheless real and significant for those affected.
Ethical Challenges with Automation
Ethical concerns increase with the development of technologies [30]. Moor establishes “that we are living in a period of technology that promises dramatic changes and in which it is not satisfactory to do ethics as usual. Major technological upheavals are coming. Better ethical thinking in terms of being better informed and better ethical action in terms of being more proactive are required.” Khargonekar and Sampath [19] add, “While the fundamental canons of ethics go back thousands of years in various societies and civilizations, and have a strong basis in the pro-social nature of human beings, accelerating socio-technological changes in a globally connected world create novel situations that require us to be much more agile and forward-looking.” They also noticed that “these issues will span across safety and security, transparency, bias, and fairness arising from integration of artificial intelligence and machine learning (AI/ML), human rights issues, and jobs. They will also include potential loss of autonomy and empowerment with increased levels of automation [34], broader problems of equitable access to technology, and socioeconomic considerations.” New ethical challenges with automation have indeed recently arisen and will continue to grow. Some will be brought into light in the next section when societal implications of some emerging developments in automation will be considered. As recalled by Kvalnes [22], “one notable example is that of the programming of self-driving cars. It is likely that these cars can contribute to considerably safer traffic and fewer accidents, as these vehicles will be able to respond much faster and more reliably than fallible human drivers. However, they also raise ethical questions about how to prioritize human lives in situations where either people inside or outside the car will die.” Kvalnes mentions also that the distinction between proscriptive and prescriptive ethics is useful to bring out the full scope of the ethical dimension of automation: “Proscriptive ethics can also be called avoid-harm ethics, which brings attention to the possible pitfalls of behaviors and decisions. In the context of automation, it is an ethics that warns us against mass unemployment, lack of control over decision-making procedures, and the scary scenario where the bots are smarter than humans and begin to communicate
3
Social, Organizational, and Individual Impacts of Automation
with each other in ways incomprehensible to human beings. Prescriptive ethics can also fall under the name of do-good ethics and concerns itself with how behaviors and decisions can improve and advance human conditions. It is important to keep in mind that there is a prescriptive dimension to the ethics of automation, in that bots can improve the services available to human beings through safer traffic; higher quality and precision in medicine; improved control over health, security, and environment issues in workplaces; and so on.” See additional details on automation and ethics in other chapters of this Handbook, particularly in Ch. 34.
3.5
Emerging Developments in Automation: Societal Implications
Automation has currently, and will have more and more in the future, a prominent role in many sociotechnical developments, including their interactions with human, and will greatly support, in many domains, the globally desired energy and ecological transition with huge societal impacts. In this section, some emerging developments in automation, at different stages of maturity, and some of their societal implications are sketched. Although from different disciplines, these examples have common factors: they have the potential to greatly improve the quality of life, there is a tremendous economic pressure to develop them as quickly as possible, the methodologies used are often extensively based on AI/ML, they combine emerging technologies, and they raise important concerns.
3.5.1
67
police officers, and healthcare facilities. If successfully implemented, smart cities ecosystems will improve the quality of life of citizens and will decrease greenhouse gas emissions to mitigate climate change. Key components of the research agenda where automation has a main role to play [23, Sect. 5.17] for making smart cities a reality include the following: (1) sensing and cooperative data collection, (2) security, safety, privacy, and energy management in the collection and processing of data, (3) dynamic resource allocation, (4) data-driven control and optimization, and (5) interdisciplinary research. What is unique in this setting is that the developed devices are not merely used for communication but also for monitoring and actuating physical processes. One major impact is regarding “security” that goes beyond protecting the privacy of citizens and their ability to use their communication devices; it includes safety-critical physical processes whose malfunction due to malicious interference can have catastrophic repercussions. It stands to reason that the deployment and operation of emerging technologies in smart cities needs to be guided by principles requiring developers to clearly inform users of these technologies about their potential risks and negative impact, to track unintended consequences making appropriate adjustments, and to actively protect against security threats and vulnerabilities. Besides the impact on security just evoked, the degree and magnitude of technological integration in these new cities can amplify socioeconomic inequality, displace segments of the workforce, create a new precarious worker class, and pose issues of safety, privacy, and resource sustainability among others.
Smart Cities 3.5.2
The emerging smart city is a sociotechnical ecosystem of people (residents, workers, and visitors), of technology, of information, and of administrators of the city and urban authorities. The concerned technological fields are numerous. They include energy (efficiency, demand response, renewable integration, distributed energy resources, and microgrid optimization and control), transportation (logistics, traffic management, and fuel efficiency), water system management (clean water, water distribution, water treatment, and conservation), security (emergency response and city monitoring), environmental monitoring and control, and e-government. The information flow of this ecosystem is based on a network of sensors and actuators embedded throughout the urban terrain interacting with wireless mobile devices and with an Internet-based backbone with cloud services. The collected data may involve traffic conditions, occupancy of parking spaces, air/water quality information, the structural health of bridges, roads or buildings, as well as the location and status of city resources including transportation vehicles,
Assistive, Wearable, and Embedded Devices
Automation has impacted the development of assistive devices for people with disabilities with the development of powered wheelchairs and prostheses. More recently, automation plays also an important role in wearable robots which are active devices developed to repair, support, correct, and/or assist the capabilities (whether of action or perception) of physically impaired people, such as powered exoskeletons for paralyzed patients, bionic prostheses for limb amputees, or artificial retinal implants for blind persons. Models and controllers for designing these systems are very challenging, see [23, Sect. 5.10]. Such mobility aids promote independence and enhance quality of life. For children, the opportunities afforded by mobility aids are crucial not only to their physical development but also to their social and cognitive development. Control paradigms will become increasingly centered on the individual for whom a particular assistive device is being
developed, which will further enhance independence and quality of life. This will ensure that a broader group of individuals will more keenly embrace the resulting technological interventions, which in turn will improve success rates and longevity, reducing the need for revision surgery. On the negative impact side, the growing number of users (both physically able and impaired) of these devices “worn intimately” in everyday life is raising ethical concerns, for instance the issue of privacy of users as data is collected by these artificial parts of the body and the security of these devices. They also raise broader questions surrounding the inviolability of the human body – both physical and psychological integrity and human dignity in general – especially when implants and/or heavy invasive surgical techniques are required. When these devices come from nanotechnologies, they also raise unprecedented challenges in terms of predictability and categorization, in part because nanosystems are able to get through what were before considered as “natural” boundaries in the human body, and in part because they will be deployed inside discrete devices, sometimes invisibly, in direct interface with the humans or in their close environment. Further, there is a need to consider the limits on developing devices that at their current stage of development can reduce some human abilities to gain or enhance others and that can boost sensorimotor performance of the human body (human enhancement technologies). They also raise concerns about the transformations of our forms of life. Finally, there are questions surrounding the respect for human diversity, nondiscrimination and equality, and the change from medicine to anthropotechnics where bodies are worked on to the point where the distinction between man and machine is no longer clear. Thus, in the face of these new possibilities offered by technology, it is pertinent for developers, users, and society in general to see that ethical considerations must guide innovation approaches.
3.5.3 Autonomous Weapon Systems
The International Committee of the Red Cross (ICRC) defines an autonomous weapon system (AWS) as follows [49]: "Any weapon system with autonomy in its critical functions. That is, a weapon system that can select (i.e., search for or detect, identify, track, select) and attack (i.e., use force against, neutralize, damage or destroy) targets without human intervention." As with any weapon development, AWS designs, in which intensive automation technology is a key element, should be subject to arms control measures which "can help ensure national and human security in the 21st Century, and must be an integral part of our collective security system" [40], in particular through the international humanitarian law (IHL) which governs the choice of weapons. We are witnessing an arms race to develop increasingly autonomous weapons [41]. This new weaponry relies intensively on complex AI/ML algorithms. As a result, the behavior of these weapons is often unpredictable; they are triggered by their environment at a time and place unknown to the user who activates them. These algorithms also tend to be "black boxes" and their development process also lacks transparency; thus, the algorithms, and hence the weapons, can be subject to bias, whether by design or not, and they can be effectively out of control since explanations for their behavior will be unavailable. These unpredictability, unreliability, vulnerability, and uncontrollability characteristics raise questions regarding compliance with IHL; such compliance requires that human control be retained at the different stages of the "life cycle" of an AWS to ensure that technological advancements do not outpace ethical deliberations [8, 12]. " . . . Autonomous machines with the power and discretion to select targets and take lives without human involvement are politically unacceptable, morally repugnant and should be prohibited by international law," said UN Secretary-General António Guterres at the Third Artificial Intelligence for Good Summit, 2019. These concerns are even more amplified with the development of biological weapons, where the convergence of recent developments in biotechnology with AI increases the possibilities for misuse. Automation technology could bring complementary expertise in order to ensure meaningful human control of autonomous weapon systems, following the recommendations [17] "that the behavior of autonomous functions should be predictable to their operators, that those creating these technologies understand the implications of their work, that professional ethical codes are developed to appropriately address the development of autonomous systems and autonomous systems intended to cause harm, that designers not only take stands to ensure meaningful human control, but be proactive about providing quality situational awareness to operators and commanders using those systems."
3.5.4 Automotive Autonomy
According to the World Health Organization [51], 1.35 million people died worldwide in 2016 because of traffic incidents, and about 90% of these deaths were caused by human error. The importance of these statistics is underpinned by the United Nations (UN), which has incorporated ambitious goals to reduce road traffic deaths and injuries within the UN Sustainable Development Goals. These figures could be indeed significantly reduced if the human driver was helped in case of drowsiness, fatigue, or lack of concentration. Autonomous vehicles (AV) are key technologies to realizing these improvements. Maximum safety and minimum energy expenditure with comfort characterizes the definition of ideal human mobility. SAE International [34] has defined six levels of driving automation as follows:
• With human supervision: – Level 0: No automation – the human driver takes all actions necessary for driving. – Level 1: Driver assistance – the vehicle is equipped with driving aids (advanced driving assistance systems, ADAS) that can amplify emergency maneuvers (antilock braking systems and enhanced stability programs), alert the driver to the presence of a danger (blind spot detection and lane departure warnings), or make driving more comfortable (adaptive cruise control, which respects a safe distance from the vehicle in front). – Level 2: Partial automation – the driver benefits from driving assistance that includes steering and control of acceleration and braking; the driver no longer needs to touch the pedals but must keep their hands on the steering wheel at all times and continue to supervise the driving. • With machine supervision: – Level 3: Conditional automation – the driver is authorized to temporarily remove their hands from the steering wheel, which can delegate responsibility for driving to the car in certain situations, for example, in traffic jams on expressways. – Level 4: High automation – the driver is no longer required to supervise the driving of the vehicle, but remains inside the vehicle and can take back control at any moment. – Level 5: Full automation – there is no driver anymore, because the vehicle is capable of driving itself, parking, and going to pick up customers. Advanced driving assistance systems have been traditionally associated with driver/passenger comfort and safety enhancement and are generally divided into four main elements: perception, localization and mapping, path planning, and control (Level 1). On the other hand [23, Sect. 5.2], energy efficiency, which is significant with respect to CO2 emissions and carbon footprint of mobility, has been largely addressed through power train or energy management systems. In an autonomously driving vehicle, the architecture must support simultaneous lateral and longitudinal autonomous control. Hence, the active safety systems (integrated into motion control) must cooperate with the power train control systems. Therefore, a completely new and expanded architecture is emerging where the advanced driving assistance systems fully cooperate with the power train management systems to generate a safe motion control vector for the vehicle. Additionally [14], multiple studies consider adding connected vehicle technologies, such as vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) technologies, where essential information is shared to create an enhanced cooperative mobility environment. This extended and improved cooperative perception allows vehicles to predict the behavior of the key
environmental components (obstacles, roads, ego-vehicles, environment, and driver behavior) efficiently and to anticipate possible hazardous events. Level 2 is the highest level allowed by European legislation today and is available commercially. There is currently a race to reach levels where machine supervision is involved. For instance, "Germany will be the first country in the world to bring autonomous vehicles from research laboratories onto the road - we have now come one decisive step closer to this goal . . . We need speedy implementation for innovations in the transformation process, so that Germany can continue to be the number one international leader in autonomous driving," commented federal minister Andreas Scheuer [3]. As Snell [43] mentions, "there is little doubt that autonomous vehicles, more commonly known as self-driving cars, are set to be transformational. The market is set to reach roughly $42 billion by 2025 with a compound annual growth rate projected at 21% until 2030. Not only will this disruption be immense, giving birth to new types of businesses, services, and business models, but their introduction will trigger similarly immense ethical implications." The short article [43] analyzes very clearly the ethical challenges posed by AVs: "One of the key challenges for autonomous vehicles is centered around how they value human life. Who decides who lives and who dies in split-second decision-making? Who decides the value of one human life over another? And, more importantly, how is that decision calculated? Being on a road is inherently dangerous, and this danger means that trade-offs will occur as self-driving cars encounter life or death situations. It is inevitable. And with trade-offs, you need ethical guidelines."
3.5.5 Renewable Energy and Smart Grids
Electric power networks are among the largest and most complex man-made systems. They connect hundreds of millions of producers and consumers, cover continents, and exhibit very complicated behaviors. The transition toward a low-carbon economy leads to an even more complex system of systems [23, Sect. 5.6]; as a result, there are still many poorly understood phenomena caused by the interaction of such a large number of devices and the large spatial dimensions. Currently, major changes to the grid structure are being implemented, in particular to support the large-scale introduction of renewable energy sources, such as wind farms and solar plants. Two crucial features of these electric power sources must be addressed: most renewable sources are small generating units, dispersed over a wide geographical area, and their primary energy outputs are by nature not controllable and fluctuate over time. Integration of these highly intermittent, randomly variable, and spatially distributed resources calls for new approaches to power system operation and control.
The safety and security of these smart grids are crucial challenges for the years ahead. Experience has shown that such critical infrastructure systems do fail. When this happens, the consequences may be disastrous in terms of societal, health, and economic aspects. For example, if a large geographical area experiences a blackout for an extended period, huge economic and societal costs may result. In November 2006, a local fault in Germany’s power grid cascaded through large areas of Europe, resulting in ten million people left in the dark in Germany, France, Austria, Italy, Belgium, and Spain. Similar cascading blackouts have taken place in the USA. Let us mention the recent one in Texas due to bad weather conditions, see https://en.wikipedia.org/wiki/2021_Texas_power_crisis. Malicious attacks which may occur directly at strategic locations in the network or remotely by compromising communication commands or automation software could also happen, again with huge economic and societal issues. Monitoring and control of these infrastructures is becoming more multiscale, hierarchical, and distributed, which makes them even more vulnerable to propagating failures and targeted attacks. See [23, Sect. 4.4] for some research perspectives.
3.6 Some Approaches for Responsible Innovation
Some of the main questions around societal concerns are the following, see [31] for instance: • How should we assess and control the (short-, medium-, and long-term) impacts – physical, mental, biological, social, and ecological – on humans and the environment arising from the application of emerging technologies, and automation in particular? • How should we design and manage systems with unpredictable reactions and less than fully reliable outcomes? • How should we develop principles and guidelines that attribute specific duties and responsibilities to the stages of research, design, development, and adoption of technologies? • How should we focus on education and public awareness at all levels, including engineers, scientists, and policymakers, toward the development of ethically aware and responsible products and services? We can identify three types of approaches on how to address these questions for responsible innovation in order to help the stakeholders involved in the development of these technologies: guidelines and recommendations, conceptual approaches, and ethical training. Standards and regulatory mechanisms are also discussed.
3.6.1 Ethics Guidelines and Ethically Aligned Designs
There are multiple initiatives and commissions around the world (from academia, universities, learned societies, inside disciplines, and at international bodies and governmental agencies) working on setting up recommendations. Let us mention for instance two of them, addressing, respectively, artificial intelligence and autonomous and intelligent systems, both linked to current and future developments of automation: 1. Ethics Guidelines for Trustworthy AI from the European Commission A public document prepared by the European Commission’s High-Level Expert Group on AI [13] mentions: “The Guidelines put forward a set of 7 key requirements that AI systems should meet in order to be deemed trustworthy. A specific assessment list aims to help verify the application of each of the key requirements: • Human agency and oversight: AI systems should empower human beings, allowing them to make informed decisions and fostering their fundamental rights. At the same time, proper oversight mechanisms need to be ensured, which can be achieved through human-in-the-loop, human-on-the-loop, and human-in-command approaches. • Technical Robustness and safety: AI systems need to be resilient and secure. They need to be safe, ensuring a fall back plan in case something goes wrong, as well as being accurate, reliable and reproducible. That is the only way to ensure that also unintentional harm can be minimized and prevented. • Privacy and data governance: Besides ensuring full respect for privacy and data protection, adequate data governance mechanisms must also be ensured, taking into account the quality and integrity of the data, and ensuring legitimized access to data. • Transparency: The data, system and AI business models should be transparent. Traceability mechanisms can help achieve this. Moreover, AI systems and their decisions should be explained in a manner adapted to the stakeholder concerned. Humans need to be aware that they are interacting with an AI system, and must be informed of the system’s capabilities and limitations. • Diversity, non-discrimination and fairness: Unfair bias must be avoided, as it could have multiple negative implications, from the marginalization of vulnerable groups, to the exacerbation of prejudice and discrimination. Fostering diversity, AI systems should be accessible to all, regardless of any disability, and involve relevant stakeholders throughout their entire life circle.
• Societal and environmental well-being: AI systems should benefit all human beings, including future generations. It must hence be ensured that they are sustainable and environmentally friendly. Moreover, they should take into account the environment, including other living beings, and their social and societal impact should be carefully considered. • Accountability: Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes. Auditability, which enables the assessment of algorithms, data and design processes plays a key role therein, especially in critical applications. Moreover, adequate and accessible redress should be ensured." 2. Autonomous and Intelligent Systems: Ethically Aligned Design As mentioned in [19], the conceptual framework for ethical analysis proposed in the recent report Ethically Aligned Design, published by the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems (A/IS) [17] "is based on three pillars that capture the anthropological, political, and technical aspects of ethics and design: (i) universal human values, (ii) political self-determination and data agency, and (iii) technical dependability. These three pillars form the basis of eight general principles that are considered as imperatives for the ethical design of A/IS: human rights, wellbeing, data agency, effectiveness, transparency, accountability, awareness of misuse, and competence. Various ethical issues under each of these eight topics are examined in depth. Detailed recommendations on how to address these issues are provided with the overall goal of helping A/IS creators reduce to practice relevant principles in the context of their own specific product or service." There is no doubt that the above recommendations cannot be fulfilled by using only AI/ML approaches [7, 33]. Russell mentions [33] " . . . the standard model - the AI system optimizing a fixed objective - must be replaced. It is simply bad engineering. We need to do a substantial amount of work to reshape and rebuild the foundations of AI." Meeting these recommendations indeed requires a rapprochement of areas such as machine learning, control theory, and optimization, and bridging model-based and learning-based control systems. Systems and control methodologies and the derived automation technologies [23] can help to address many technological concerns by bringing more robustness, more precision, more security, and less vulnerability.
3.6.2 Conceptual Approaches
In a related vein, Sampath and Khargonekar [38] introduced the vision, concept, and framework of socially responsible
automation (SRA) to help technologists and business leaders drive the evolution of automation for societal good. The "pyramid of progress," a four-level model developed in [38] (see Fig. 3.3) that captures current industry practices as well as envisioned future approaches to automation, is summarized as follows in [26]: • "Level 0: Cost-focused automation. At the lowest level of the pyramid, technology is used solely to gain economic benefits by reducing human labor. These cost-based programs are not only not socially conscious or human-centric, they often fail to deliver and can even be detrimental to business interests. • Level 1: Performance-driven automation. This approach is more cognizant of the role that humans play in the loop. Processes and systems are reengineered to take advantage of automation while still using human skills and capabilities to fill in technological shortfalls. • Level 2: Worker-centered automation. At this level, the business goal is not just performance optimization, but worker development and enrichment. In these systems, the goal of automation is not to sideline people or replace them with machines, but to encourage new forms of human-machine interaction that augment human capabilities. • Level 3: Socially responsible automation. At the top of the pyramid, automation is deployed to produce more and better jobs for humans, driving economic growth while also promoting societal well-being. Attaining such a lofty goal requires 'explicit, active interventions,' [as Sampath and Khargonekar write]—that is, business leaders must commit to proactively identifying new revenue streams and job-enabling growth as they roll out and refine automation."
[Fig. 3.3 graphic: a four-level pyramid, with each level linked to a business goal and a stakeholder value. From bottom to top: cost-focused automation (business goal: drive cost efficiency; stakeholder value: profit for the stockholder), performance-driven automation (increase productivity, quality, accuracy; superior offering and service for the customer), human-centered automation (enhance worker performance, skills, quality; safety, autonomy, achievement for the employee), and socially responsible automation (create new revenue streams and good jobs; employment, prosperity, opportunity for society).]
Fig. 3.3 The socially responsible automation (SRA) pyramid [38] captures four levels of automation, each with a distinct business goal and set of stakeholder values, leading to socially responsible automation practices at the highest level that can support both business growth and societal good
3.6.3 Ethical Training
Besides recommendations and conceptual approaches, many voices are being raised about the need to focus on education and public awareness at all levels, including engineers and scientists, toward the development of ethically aware and responsible products and services. For instance, in [46]: “ethical standards should be imposed in science and engineering education so that students have an ethical training enabling them to analyze, evaluate and transmit the ethical dimensions of technologies.” In particular, regarding technological universities, Taebi, et al. claim [45]: “The twenty-first century will pose substantial and unprecedented challenges to modern societies. The world population is growing while societies are pursuing higher levels of global well-being. The rise of artificial intelligence (AI) and autonomous systems, increasing energy demands and related problems of climate change are only a few of the many major issues humanity is facing in this century. Universities of technology have an essential role to play in meeting these concerns by generating scientific knowledge, achieving technological breakthroughs, and educating scientists and engineers to think and work for the public good.” It is important to foster understanding and discussion about the process of ethical reasoning about emerging technologies. These are of utmost importance at all levels of education at schools, colleges, and universities. A deep analysis of the psychological effects of technologies and products, their impact on human dignity and integrity, and their effects on the understanding of human values and the human spirit is necessary.
3.6.4 Standards, Policies, and Enforcement
Principles, concepts, guidelines, and training can increase awareness of the ethical and related issues surrounding automation-enabled emerging technologies. If developed through inclusive and trusted processes, such products can also spur ethical practices by individuals and organizations. However, voluntary and independent adoption by entities individually has numerous shortcomings. Even when groups are fully committed to being ethical, different groups may have different ideas about the best path forward. With opportunities for public relations benefits abounding, companies and others are likely to have pressures to present their initiatives in as positive a light as possible, resulting in loss of transparency for the public. And if some entities adopt ethical practices while others focus on profits to the exclusion of societal concerns, at least in the short term the latter may win in the competitive marketplace. For these and other reasons, the technology community, business leaders, and policymakers need to work together to develop and promote standardized and enforceable approaches. Standards help establish uniformity in how different organizations address an issue. In complex systems where components from multiple suppliers may be needed to implement an integrated system, standards can also offer interoperability. Certification authorities may need to be formed to ensure a standard is being correctly implemented by all providers. Especially where the safety and well-being of the public are at stake, policies and regulations can be enacted, often by governmental bodies. If laws are promulgated, compliance is no longer optional. Of course, to ensure compliance,
enforcement will be needed. The products and services of automation technologies are rarely confined to one country or region. Thus standards, policies, and enforcement need to be considered from an international perspective, which renders them more complicated to pursue. Complexity is even reinforced by the multiple disciplines involved due to the convergence of automation technologies with AI/ML, 5G, IoT, nanotechnologies, neurotechnologies, biotechnologies, and genetic technologies. Generally speaking, technologists tend not to consider such aspects in their research and development activities. Yet, for scalable impact and to ensure societal benefits of technological developments are not compromised by ethical lapses, researchers seeking to make a positive impact on the world need to understand how technologies are commercialized, deployed, and adopted: how the risks that come with the power and capabilities of automation systems can be mitigated, and how a technology-for-technology’s-sake attitude can result in adverse impacts and backlash [25].
3.7 Conclusions and Emerging Trends
In this chapter, we have discussed the numerous benefits that developments in automation technology have provided to societies and to humankind. We have also identified several concerns with the increasingly sophisticated engineered systems that are now being developed and in which automation is responsible for autonomous and semiautonomous decision-making. In closing, we want to address a perception that some readers may have after reading to this point: that automation systems are fraught with challenges for the future and their continued evolution will create major crises for society. This is not the message we want to leave the reader with. To the contrary, advances in automation will be essential for enhancing society and for creating a sustainable world for people, organizations, and society (see, for instance, chapters 7 and 8 of [28]). Indeed, automation, leveraging its scientific roots in the systems and control discipline, can help ensure methodological transparency, stability, robustness, safety, security, reliability, and precision in the accelerating complexity of our technological world. Without advances in automation, the world is likely to become a worse place for humanity and the rest of its inhabitants, and to do so in the near future. In this context, we point to the recently released (at the time of this writing) Net Zero by 2050 report from the International Energy Agency [18]. This report urges a large number of specific actions that policymakers and other stakeholders should take in order to keep the global temperature rise to 1.5 °C. Technology and automation may be implicated in the industrial growth that has resulted in the drastic climate change that appears to be imminent, but technology and automation are now being called upon to address this existential
threat. The actions recommended cut across many sectors of the economy. Examples include: enhancements in energy efficiency; more flexibility in energy generation, storage, and consumption; a rapid transition to electrification across multiple sectors, especially transportation; dramatic increase in renewable generation while maintaining the reliability and availability of supply; zero-carbon buildings; carbon capture, utilization, and storage; and global deployment of advanced heat pumps. Each of these represents a call to action for automation and control scientists and engineers; without their engagement the prospects for a livable planet for future generations will fade.
References
1. Abbey, E.: The Journey Home: Some Words in the Defense of the American West. Plume (1991)
2. Aircraft Accident Investigation Bureau: Aircraft Accident Investigation Preliminary Report: Ethiopian Airlines Group, B737-8 (MAX) Registered ET-AVJ, 28 NM South East of Addis Ababa, Bole International Airport, March 10, 2019, Ministry of Transport, Federal Democratic Republic of Ethiopia (n.d.). http://www.ecaa.gov.et/Home/wp-content/uploads/2019/07/Preliminary-Report-B737-800MAX-ET-AVJ.pdf
3. Autovista Group: Germany paves the way for adoption of autonomous vehicles, 24 May 2021. https://autovistagroup.com/news-and-insights/germany-paves-way-adoption-autonomous-vehicles
4. Baillieul, J., Samad, T.: Encyclopedia of Systems and Control. Springer, London (2019)
5. Bernhardsson, B.: Control for mobile phones. In: Samad, T., Annaswamy, A.M. (eds.) The Impact of Control Technology, 2nd edn. IEEE Control Systems Society (2014). http://ieeecss.org/impact-control-technology-2nd-edition
6. Boeing: Model 247/C-73 Transport (n.d.). https://www.boeing.com/history/products/model-247-c-73.page
7. Bordenkircher, B.A.: The unintended consequences of automation and artificial intelligence: are pilots losing their edge? Issues Aviation Law Policy. 19(2) (2020). Abstract: https://trid.trb.org/view/1746925
8. Boulanin, V., Davison, N., Goussac, N., Carlsson, M.P.: Limits on Autonomy in Weapon Systems – Identifying Practical Elements of Human Control, Report from International Committee of the Red Cross (ICRC) and Stockholm International Peace Research Institute (SIPRI). Stockholm International Peace Research Institute (SIPRI) (2020)
9. Burke, D.: All Circuits are Busy Now: The 1990 AT&T Long Distance Network Collapse, CSC440-01. https://users.csc.calpoly.edu/~jdalbey/SWE/Papers/att_collapse
10. Catching technological waves: Innovation with equity, Technology and Innovation Report 2021, United Nations publication issued by the United Nations Conference on Trade and Development
11. Chiles, J.R.: Inviting Disaster: Lessons from the Edge of Technology. HarperCollins (2001)
12. Dahlmann, A., Dickow, M.: Preventive Regulation of Autonomous Weapon Systems, Stiftung Wissenschaft und Politik Research Paper, German Institute for International and Security Affairs, March 2019
13. European Commission: High-Level Expert Group on AI Presented Ethics Guidelines for Trustworthy Artificial Intelligence, 8 April 2019. https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=60419
14. Fayyad, J., Jaradat, M.A., Gruyer, D., Najjaran, H.: Deep learning sensor fusion for autonomous vehicle perception and localization: a review. Sensors. 20, 4220 (2020). https://doi.org/10.3390/s20154220
15. Goodman, W.C.: Transportation by air: job growth moderates from stellar rates. Monthly Labor Review, March, 2000
16. Haissig, C.: Air traffic management modernization: promise and challenges. In: Baillieul, J., Samad, T. (eds.) Encyclopedia of Systems and Control, 2nd edn. Springer (2021)
17. IEEE: Ethically Aligned Design. IEEE Press (2019). https://standards.ieee.org/news/2017/ead_v2.html
18. International Energy Agency: Net Zero by 2050: A Roadmap for the Global Energy Sector, May, 2021. https://www.iea.org/reports/net-zero-by-2050
19. Khargonekar, P.P., Sampath, M.: A framework for ethics in cyber-physical-human systems, IFAC World Congress 2020. IFAC-PapersOnLine. 53(2), 17008–17015 (2020). https://faculty.sites.uci.edu/khargonekar/files/2019/11/Ethics_CPHS.pdf
20. Komite Nasional Keselamatan Transportasi, Jakarta, Indonesia (KNKT-2019): Aircraft Accident Investigation Report: PT. Lion Mentari Airlines, Boeing 737-8 (MAX): PK-LQP, Tanjung Karawang, West Java, Republic of Indonesia, 29 October 2018. http://knkt.dephub.go.id/knkt/ntsc_aviation/baru/2018%20%20035%20-%20PK-LQP%20Final%20Report.pdf
21. Krisch, J.A.: What is the Flight Management System? A Pilot Explains, Popular Mechanics, Mar. 18. https://www.popularmechanics.com/flight/a10234/what-is-the-flight-management-system-a-pilot-explains-16606556
22. Kvalnes, Ø.: Automation and ethics. In: Moral Reasoning at Work. Palgrave Pivot, Cham (2019). https://doi.org/10.1007/978-3-030-15191-1_8
23. Lamnabhi-Lagarrigue, F., Annaswamy, A., Engell, S., Isaksson, A., Khargonekar, P., Murray, R.M., Nijmeijer, H., Samad, T., Tilbury, D., Van den Hof, P.: Systems & control for the future of humanity, research agenda: current and future roles, impact and grand challenges. Annu. Rev. Control. 43, 1–64 (2017)
24. Lin, P., Allhoff, F.: Untangling the debate: the ethics of human enhancement. NanoEthics. 2(3), 251–264 (2019)
25. MacKenzie, D., Wajcman, J.: The Social Shaping of Technology. Open University Press (1999)
26. Mayor, T.: Ethics and Automation: What to do when workers are displaced, Jul 8, 2019. https://mitsloan.mit.edu/ideas-made-to-matter/ethics-and-automation-what-to-do-when-workers-are-displaced
27. Meadows, D., Meadows, D., Randers, J., Behrens, W.W.: The Limits to Growth. Universe Books (1972)
28. Meadows, D., Meadows, D., Randers, J.: Limits to Growth: The 30-Year Update. Chelsea Green Publishing (2004)
29. Mediavilla, J.I.: https://theblogbyjavier.com/2020/01/02/aviation-safety-evolution-2019-update/, CC BY 3.0
30. Moor, J.H.: Why we need better ethics for emerging technologies. Ethics Inf. Technol. 7, 111–119 (2005)
31. Ord, T.: The Precipice: Existential Risk and the Future of Humanity. Hachette (2020)
32. Rodrigo, C.C., Orlando, J.R., Saul, P.G.: Aircraft Accident Report: Controlled Flight Into Terrain, American Airlines Flight 965, Boeing 757-223, N651AA, Near Cali, Colombia, December 20, 1995, Aeronautica Civil of the Republic of Colombia. https://reports.aviation-safety.net/1995/19951220-1_B752_N651AA.pdf
33. Russell, S.: Human Compatible: AI and the Problem of Control. Viking (2019)
34. SAE Mobilus: Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles (J3016 Ground Vehicle Standard). Available online (2019), https://saemobilus.sae.org/content/j3016_201806
35. Samad, T.: Human-in-the-loop control: applications and categorization, 3rd IFAC Workshop on Cyber-Physical-Human Systems. IFAC-PapersOnLine. 53(5), 311–317 (2020)
36. Samad, T., Annaswamy, A. (eds.): The Impact of Control Technology, 2nd edn. IEEE Control Systems Society (2014). Retrieved from http://ieeecss.org/impact-control-technology-2nd-edition
37. Samad, T., Cofer, D.: Autonomy in automation: trends, technologies, tools. Comput. Aided Chem. Eng. 9, 1–13 (2001). https://doi.org/10.1016/S1570-7946(01)80002-X
38. Sampath, M., Khargonekar, P.P.: Socially responsible automation. Bridges. 48(4), 45–72 (2018)
39. Scheck, W.: Lawrence Sperry: Genius on Autopilot, Aviation History (2004). https://www.historynet.com/lawrence-sperryautopilot-inventor-and-aviation-innovator.htm
40. Securing our Common Future, an Agenda for Disarmament, Office for Disarmament Affairs, United Nations Publications, 2018
41. Slijper, F., Slope, S.: The Arms Industry and Increasingly Autonomous Weapons. PAX (Peace Organization, Netherlands) Research Project (2019)
42. Smith, P.J., Baumann, E.: Human-Automation Teaming: Unintended Consequences of Automation on User Performance, 2020 AIAA/IEEE 39th Digital Avionics Systems Conference (DASC), 2020, pp. 1–9. https://ieeexplore.ieee.org/document/9256418
43. Snell, R.: The Rise of Autonomous Vehicles and Why Ethics Matter. https://www.digitalistmag.com/improving-lives/2019/04/02/rise-of-autonomous-vehicles-why-ethics-matter-06197534
44. Stein, G.: Respect the unstable. IEEE Control. Syst. Mag. 23(4), 12–25 (2003). https://doi.org/10.1109/MCS.2003.1213600
45. Taebi, B., van den Hoven, J., Bird, S.J.: The importance of ethics in modern universities of technology. Sci. Eng. Ethics. 25, 1625–1632 (2019). https://doi.org/10.1007/s11948-019-00164-6
46. ten Have, H.A.M.J. (ed.): Nanotechnologies, Ethics and Politics. UNESCO Report (2007)
47. Tenner, E.: Why Things Bite Back: Technology and the Revenge of Unintended Consequences. Vintage (1996)
48. Trentesaux, D., Karnouskos, S.: Engineering Ethical Behaviors in Autonomous Industrial Cyber-Physical Human Systems, Cognition, Technology & Work, March 2021. https://doi.org/10.1007/s10111-020-00657-6
49. Views of the International Committee of the Red Cross (ICRC) on autonomous weapon systems. https://www.icrc.org/en/download/file/21606/ccw-autonomous-weapons-icrc-april-2016.pdf
50. Wilde, G.J.S.: Target Risk 2: A New Psychology of Safety and Health, 2nd edn. PDE Publications (2001)
51. World Health Organization: Global Status Report on Road Safety 2018. (2018). https://apps.who.int/iris/bitstream/handle/10665/277370/WHO-NMH-NVI-18.20-eng.pdf?ua=1
Françoise Lamnabhi-Lagarrigue, IFAC Fellow, is a CNRS Emeritus Distinguished Research Fellow of the Laboratory of Signals and Systems, Paris-Saclay University. She obtained the habilitation doctorate degree in 1985. Her main recent research interests include observer design, performance, and robustness issues in control systems. She has supervised 26 PhD theses. She founded and chaired the EECI International Graduate School on Control. She is the editor-in-chief of Annual Reviews in Control. She is the prizewinner of the 2008 French Academy of Science Michel Monpetit prize and the 2019 Irène Joliot-Curie prize, Woman Scientist of the Year. She is a knight of the Legion of Honor.
Tariq Samad, IEEE Fellow and IFAC Fellow, holds the W.R. Sweatt Chair and is a senior fellow in the Technological Leadership Institute, University of Minnesota, where he leads the Management of Technology program. He previously worked for Honeywell, retiring as corporate fellow. He is a past president of the American Automatic Control Council and IEEE Control Systems Society. He is co-editor-in-chief of the Springer Encyclopedia of Systems and Control. He is the editor of the Wiley/IEEE Press book series on "Technology Management, Innovation, and Leadership."
4 Economic Effects of Automation
Piercarlo Ravazzi and Agostino Villa
Contents
4.1 Introduction ..... 77
4.2 Basic Concepts to Assess the Effects of Automation ..... 78
4.3 Production and Distribution in Economic Theory ..... 79
4.3.1 Preliminary Elements of Production ..... 79
4.3.2 Measurement and Characteristics of Production Factors ..... 79
4.3.3 The Neoclassical Function of Production and the Distribution to Production Factors ..... 80
4.4 Microeconomic Effects of Automation in Enterprises ..... 80
4.4.1 Effects on the Production Function ..... 80
4.4.2 Effects on Worker Incentives and Controls ..... 82
4.4.3 Effects on Cost Structure and Labor Demand in the Short Term ..... 83
4.5 Macroeconomic Effects of Automation in the Short Period ..... 85
4.5.1 Demand for Labor ..... 85
4.5.2 Labor Offer ..... 87
4.5.3 Equilibrium and Disequilibrium in the Labor Market ..... 88
4.6 Macroeconomic Effects of Automation in the Long Period ..... 89
4.6.1 A Brief Historical Excursus ..... 89
4.6.2 First Era of Machines: An Intersectoral Model with Consumption and Capital ..... 89
4.6.3 Second Era of Machines: Automation and Artificial Intelligence (AI) ..... 92
4.7 Final Comments ..... 98
4.7.1 Psychological Benefits of Labor ..... 98
4.7.2 Use of Free Time ..... 99
References ..... 100

Abstract
The growing diffusion of automation in all production sectors is causing a profound change in the organization of work and requires a new approach to assessing the efficiency, effectiveness, and economic convenience of processes. Analysis criteria, based on robust but simple models and concepts, are necessary above all for small and medium-sized enterprises (SMEs). This chapter offers an overview of concepts, based on economic theory but revised in light of industrial practice, to reflect on how to evaluate the impact of the diffusion of automation both at the microeconomic level and at the macroeconomic level. The analysis is useful to understand the current phase (the short period) and the final trends (the long period), toward which the economic system is converging, following a path (the transient) in which the advantages of digital technologies are set against significant risks.

Keywords
Production and cost economics · Efficiency wages · Labor market · Sector interdependencies · Economic growth · Technical progress · Prospects for capitalism
P. Ravazzi · A. Villa () Department of Manufacturing Systems and Economics, Politecnico di Torino, Torino, Italy e-mail: [email protected]; [email protected]
© Springer Nature Switzerland AG 2023 S. Y. Nof (ed.), Springer Handbook of Automation, Springer Handbooks, https://doi.org/10.1007/978-3-030-96729-1_4
4.1 Introduction
In recent decades, automation of processes has spread both in industry and in services. The preconditions that ensured widespread diffusion were firstly the development of electronics, then of computer science, and more recently of digital technologies. Since the late 1970s, major investments in automation have been made, followed by periods of reflection, with critical reviews of the implementations made and their economic returns. Paradigmatic could be the case of FCA (Fiat Chrysler Automobiles), which reached the highest level of automation in its assembly lines at the end of the 1980s, but in the following decade it suffered from a deep crisis, in which investments in automation seemed not to be profitable. In
the following period there was significant growth for which the high level of automation seems to be one of the main conditions of economic efficiency. Implementation of automation is different, as is the perception of its convenience in the electronic and IT sector. This sector is specific, however, because innovation and research are endogenous tools for the development of automation. Other industrial and service sectors have a typical attitude of “strong but cautious” interest. The main reason for this prudence is the difficulty of assessing the economic impact of automation in its production process, a difficulty that arises from the (substantial, non-statistical) uncertainty inherent in the assessment of the future profitability of investments. This chapter aims to deal with this problem using simple economic models, useful for analyzing some of the main effects of automation on production, labor incentives, and costs. First, some concepts on which the evaluation of the effects of automation is based (Sect. 4.2) are provided. Then, the dominant model in economic theory is summarized (Sect. 4.3), in order to understand the subsequent changes induced by automation in enterprises (Sect. 4.4) regarding production, incentives for control of workers and flexibility of costs. In the final part of the chapter, the macroeconomic effects of automation are analyzed, both in the short (Sect. 4.5) and in the long term (Sect. 4.6). Some final comments (Sect. 4.7) summarize the most significant considerations and results of the paper. We do not deal with the topic of automation sharing, because it goes beyond the strictly economic topics covered in the paper. It is a very specific aspect concerning the collaboration between SMEs interconnected into a network, aimed at sharing information, taking advantage of mutual interaction, thus reducing relative costs. On this subject, we refer to the book edited by Villa [1] and EASME/ COSME [2].
4.2 Basic Concepts to Assess the Effects of Automation
Every manager who has profit as a goal wants to compare the cost of a potential application of some automated devices (processing units, handling and moving systems or automated devices to improve the production organization) with the increase in revenues or the reduction of other costs, resulting from such a capital investment. First of all it is necessary to list the potential types of automation; therefore it is necessary to highlight the links between these types and the main variables of the company’s financial statements, which are influenced by the process and labor changes induced by the automation there applied. The analysis of a large number of SME clusters in ten European countries, carried out in the CODESNET (COllaborative Demand & Supply NETworks)
project, funded by the European Commission, shows that the most important types of automation in industry can be classified as follows: (a) Robotization, that is, automation of production operations (b) Flexibility, that is, “flexibility through automation” of set-up and supply (c) Monitoring, that is, “automated control” of measurements and operations These three types of automation can have effects both on the organization of the production process and on the staff: robotization allows to achieve a higher operating speed and requires a reduced amount of hours of direct labor; flexibility is crucial to reduce the response time of the company’s offer to the specific customer demand, leading to an increase in the mix of products and facilitating producer/buyer interaction; monitoring can guarantee product quality for a wide range of final goods through widespread control of work operations. Table 4.1 summarizes only the technical effects of automation, but does not allow to evaluate the economic ones, on which the interest of company managers is focused. To carry out an economic evaluation it is necessary to consider three fundamental elements: 1. An investment in automation is generally burdensome for any company, but is often critical for an SME, since the replacement of simple work with technical capital and specialized work shifts the time horizon of the assessment from short to medium-long term. 2. The success of such a decision depends both on the amount of the investment and on the managerial ability to reorganize the workforce within the company with minimal friction. 3. A significant investment in automation has long-term and wide-ranging effects on the overall employment of the economic system. These effects must be interpreted through coherent models, in order to carry out: • A microeconomic analysis, useful to the manager of the company, in order to understand how investments in automation and the consequent reorganization of the workforce can influence the expected production target, the current and expected costs and profits • A macroeconomic analysis, to understand how the diffusion of automation affects employment and the distribution of incomes in the economic system both in the short and long term, since the macroeconomic effects subsequently fall on individual companies
Table 4.1 Links between automation and process/personnel in the firm

Automation typology induces ...    ... effects on the process ...              ... and effects on personnel
a) Robotizing                      Operation speed                             Work reduction
b) Flexibility                     Response time to demand
c) Monitoring                      Process accuracy and product quality        Higher skills
Then automation calls for ...      Investments                                 ... and search for ... new labor positions

Investments and new labor positions should give rise to an expected target of production, conditioned on investments in high technologies and high-skill workforce utilization.
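For a first-pass microeconomic evaluation of the kind outlined above, a manager typically compares the investment outlay with the discounted stream of expected cost savings. The following minimal sketch computes the net present value and a simple payback period for a hypothetical robotization investment; all figures (outlay, annual labor-cost saving, useful life, discount rate) are illustrative assumptions, not data from the chapter.

```python
# Minimal sketch: NPV and payback of a hypothetical automation investment.
# All figures are illustrative assumptions, not data from the chapter.

def npv(rate: float, cash_flows: list[float]) -> float:
    """Net present value of cash_flows, where cash_flows[0] occurs today."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

investment = 500_000.0      # initial outlay for robotization (year 0)
annual_saving = 120_000.0   # yearly reduction in direct labor cost
horizon = 7                 # assumed useful life of the equipment, in years
discount_rate = 0.08        # opportunity cost of capital

flows = [-investment] + [annual_saving] * horizon
project_npv = npv(discount_rate, flows)

# Simple (undiscounted) payback period.
cumulative, payback = 0.0, None
for year in range(1, horizon + 1):
    cumulative += annual_saving
    if cumulative >= investment:
        payback = year
        break

print(f"NPV over {horizon} years: {project_npv:,.0f}")
print(f"Payback period: {payback} years" if payback else "Not recovered within horizon")
```

In practice the expected savings themselves depend on how the workforce is reorganized, which is why the microeconomic and macroeconomic effects are treated separately in the rest of the chapter.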
4.3 Production and Distribution in Economic Theory
To evaluate the economic effects of automation on the production and costs of a company, it is appropriate to recall some definitions drawn from economic theory, which will be used in the subsequent analysis.
4.3.1 Preliminary Elements of Production
1. A production technique (method or process) is a combination of factors (inputs) acquired from the market and used to obtain a unit of good/service. 2. The factors of production are for simplicity limited to capital K (plants, machinery, etc.), to labor L, and to intermediate goods X (purchased externally to be transformed into the finished product). 3. A production function is represented by a technical relationship Q = Q(K, L, X), which describes the output Q as a function of the inputs entered into the production process, both in the short term, in which capital is given (K = K̄) and ∂Q/∂L > 0, ∂Q/∂X > 0, and in the long run, in which capital is also variable (∂Q/∂K > 0). 4. Technological progress A, of which automation is the most relevant component, is inserted mainly as an exogenous factor [Q = Af (K, L, X)]; its nature is instead endogenous, since it is essentially incorporated in capital, through process innovations, and in labor, through investment in human capital (training), increasing productivity [4]. 5. Technical efficiency requires that the rational entrepreneur, if he decides to make new investments, chooses among the available methods/processes those that allow the same quantity of product to be obtained without wasting the factors (if a method requires K0 capital units and L0 labor units, and another requires L1 > L0 labor
units, with the same capital and product, the first method is preferred). 6. Economic efficiency, on the other hand, requires that if the combinations of factors are different (with the same production, the second method uses K1 < K0 capital units), then the choice depends on the cost he has to bear to activate the process, considering the prices of the production factors. In economics, production techniques are divided into two large families: (a) fixed coefficient technologies, characterized by zero substitutability (strict complementarity) of the productive factors, assuming that a given quantity of product can be considered only by combining the factors in fixed proportions (isoquant at an angle), in the minimum quantities imposed by the efficiency technique; so let’s write Q = f (K, L, X) = K/v = λL L = X/b, where λL = Q/L is the labor constant productivity and v = K/Q, b = X/Q are the fixed coefficients of use of capital and intermediate goods in the short term; all coefficients may vary in the long run due to technical progress. (b) technologies with flexible coefficients, assumed by neoclassical economic theory, which instead admit the possibility of imperfect substitutability of the production factors, thus assuming that the same production can be achieved by means of a convex variable combination.
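As a concrete illustration of family (a), the fixed-coefficient relation Q = K/v = λL L = X/b implies that feasible output is bounded by whichever factor runs out first. The short sketch below evaluates this Leontief rule for assumed coefficients and input endowments; the numbers are purely illustrative.

```python
# Minimal sketch of a fixed-coefficient (Leontief) technology:
# Q = min(K/v, lambda_L * L, X/b). Coefficients and inputs are illustrative assumptions.
v = 2.0          # capital required per unit of output (K/Q)
lambda_L = 5.0   # output per hour of labor (Q/L)
b = 0.5          # intermediate goods required per unit of output (X/Q)

def output(K: float, L: float, X: float) -> float:
    """Feasible output: the scarcest factor binds; the others remain partly idle."""
    return min(K / v, lambda_L * L, X / b)

K, L, X = 400.0, 60.0, 180.0
Q = output(K, L, X)
print(f"Q = {Q}  (capital allows {K/v}, labor allows {lambda_L*L}, intermediates allow {X/b})")
```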
4.3.2 Measurement and Characteristics of Production Factors
The amount of labor L can be measured by the product between the hours h provided by each employee in a work shift, the number N of the workers of a shift, and the number T of the work shifts carried out in the unit of time considered (one day, one month, one year): L = hNT
(4.1)
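As a simple illustration with assumed figures: with h = 7.5 hours actually provided per worker in a shift, N = 40 workers per shift, and T = 2 shifts per day, relation (4.1) gives L = 7.5 × 40 × 2 = 600 worker-hours per day; over a month of 22 working days this amounts to 13,200 worker-hours.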
Capital K, on the other hand, cannot be considered homogeneous, since normally there are interconnected plants
and machinery of different characteristics in the company. However, the problem of its measurement is not crucial if production is connected to capital by a fixed coefficient: the capital stock can be measured in machine hours at steady speed. In the short term, capital is a fixed factor K = K̄, which delimits the production capacity Q̄ of the company and involves a sunk cost (irrecoverable), since any underutilization of the production capacity cannot be eliminated, by selling the machinery on the market, without incurring significant realizable losses. Labor and intermediate goods are variable factors depending on the quantities produced. In the long run, all factors are variable: after a few years, when the useful life of the capital has run out due to physical deterioration or to technological obsolescence, the plants and machinery can be replaced with others that are more economically efficient.
4.3.3 The Neoclassical Function of Production and the Distribution to Production Factors
At this point it is possible to write the neoclassical production function with flexible coefficients, supposing – for simplicity – the constant ratio b = X/Q between intermediate goods and produced quantities, but attributing decreasing returns to capital and labor, so marginal productivity (the first partial derivative) is a decreasing function of the quantity of input:

Q = f(K, L, X) = Q(K, L) = X/b; ∂Q/∂K > 0, ∂²Q/∂K² < 0, ∂Q/∂L > 0, ∂²Q/∂L² < 0 (4.2)
Economic theory (from Clark and Marshall until today) has therefore assumed as the foundation of production the hypothesis that there is no perfect substitution between the factors, attributing to each of them the law of diminishing returns, which Malthus, West, and Ricardo had initially limited to land. We indicate with p the selling price of the product, with w the monetary wage paid to workers, with pK the price of a unit of capital, and with r the percentage cost of capital, defined from an economic perspective, which considers the entire gross operating margin on the value of the capital. It therefore differs from the accounting cost, which includes only the depreciation charge and the cost of debt, attributing to profit, gross of tax charges, the residual (operating income) allocated to equity. We also consider the current value of capital (pK K) and not the evaluation done
by the financial market, because the latter is continuously changing, owing to speculation on enterprise shares under uncertainty about future gains. We therefore define profit as the difference between the added value [pQ − pX X = (p − pX b)Q = p̂Q] and the cost of labor and capital (wL + rpK K). If the factors are characterized by decreasing productivity, companies can achieve the objective of maximum profit by combining them so as to equate the marginal productivity of each factor to its relative cost:

max Π = p̂Q − wL − rpK K  ⇒  p̂(∂Q/∂L) = w, p̂(∂Q/∂K) = rpK  ⇒  ∂Q/∂L = w/p̂, ∂Q/∂K = rpK/p̂ (4.3)
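Conditions (4.3) can be checked numerically for a specific functional form. The sketch below assumes, purely for illustration, a Cobb-Douglas value-added technology Q = A·K^a·L^b with a + b < 1, so that both marginal productivities are decreasing as required by (4.2); the parameter values and factor prices are arbitrary assumptions.

```python
# Minimal sketch: check the profit-maximizing conditions (4.3) on an assumed
# Cobb-Douglas technology Q = A * K**a * L**b with decreasing returns (a + b < 1).
# Parameter values and prices are illustrative assumptions, not taken from the chapter.
A, a, b = 10.0, 0.3, 0.5
p_hat, w, r_pk = 2.0, 4.0, 1.5      # net output price, wage, cost of one unit of capital

def Q(K: float, L: float) -> float:
    return A * K**a * L**b

# The two first-order conditions p_hat*dQ/dK = r_pk and p_hat*dQ/dL = w imply
# K/L = (a/b)*(w/r_pk); substituting back gives L in closed form.
ratio = (a / b) * (w / r_pk)                     # optimal K/L
L_star = (p_hat * b * A * ratio**a / w) ** (1.0 / (1.0 - a - b))
K_star = ratio * L_star

# Numerical check that marginal value products equal factor prices at the optimum.
eps = 1e-6
dQ_dK = (Q(K_star + eps, L_star) - Q(K_star, L_star)) / eps
dQ_dL = (Q(K_star, L_star + eps) - Q(K_star, L_star)) / eps
print(f"p*dQ/dK = {p_hat * dQ_dK:.3f}  (should equal r*pK = {r_pk})")
print(f"p*dQ/dL = {p_hat * dQ_dL:.3f}  (should equal w = {w})")
```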
A fair distribution of income follows, since each factor of production is remunerated according to its contribution to production: labor and capital receive, respectively, a real wage w/p̂ and a real return on capital rpK/p̂ equal to their physical marginal productivities. The econometric results supporting the hypothesis of flexible coefficients have been confirmed only in the presence of a relative constancy of the shares of wages and profits in the value of the product, mistaking a trivial accounting identity for a homogeneous first-degree production function, as shown by [5]. A realistic approach to the problem requires interfacing with the production engineer to write a functional form that incorporates the specific characteristics of a given observed production process, leaving the econometrician only the task of estimating the parameters and checking the goodness of the equation. In the subsequent analysis, a technological-economic approach will be followed, on the basis of which the effect of automation on the production processes and costs of the company will be assessed.
4.4
Microeconomic Effects of Automation in Enterprises
4.4.1
Effects on the Production Function
First of all, consider the case of production with labor only, that is, without the use of capital or, to be more realistic, with the use of extremely elementary production tools. In this case, a characteristic of the human psychophysical structure is that production takes place with decreasing returns: natural fatigue grows during the production process, so the marginal productivity of labor decreases as working hours increase. Let a shift be measured in hours, and let h be the time provided by a standard worker. To express that the efficiency E of a worker decreases over the shift, we can use
a simple exponential function that transforms the homogeneous hours into differentiated units of efficiency:

E = h^α
(4.4)
where 0 < α = (dE/dh)(h/E) < 1 is the constant elasticity of efficiency with respect to the hours provided by a worker and represents a measure of the average psychophysical strength of the worker. The relation (4.4) is characterized by decreasing returns, since dE/dh = αE/h > 0 and d²E/dh² = (α − 1)(dE/dh)/h < 0. Denoting with λE the constant productivity of an efficiency unit, we write the production function:

Q = λE E NT
(4.5)
By substituting (4.4) and (4.1) (putting NT in evidence) into (4.5) and then dividing by L, the hourly productivity λL is obtained:

λL = Q/L = λE h^(α−1) > 0   (4.6)

decreasing with the hours worked by an operator in a shift [dλL/dh = (α − 1) λE h^(α−2) < 0].

Now capital is introduced as an auxiliary tool of labor. Three problems arise in this regard:
1. The measurement of capital, since capital goods are heterogeneous and therefore a single physical quantity cannot be used; a practical method, adopted by companies, is to consider the hours of potential use of the heterogeneous set of available production tools.
2. The evaluation of the effects produced by the association of capital with labor; a first effect contemplated by economic theory [6–9] is the increase in the productivity of labor; expressing the capital-induced empowerment with a coefficient γ > 1 applied to the hourly productivity, the function (4.6) can be rewritten as λL = γ λE h^(α−1); however, an equally important factor, omitted by economic theory, is the effect that capital has on the psychophysical strength of labor, since the tools that aid work enhance by a factor δ the elasticity of efficiency, alleviating both physical and mental fatigue, so that one obtains:

λL = γ λE h^(α+δ−1)   (4.7)

where now E = h^(α+δ) and the condition 0 < α + δ < 1 preserves decreasing returns.
3. The way of inserting capital into the production function, which depends strictly on the characteristics of the machines with respect to humans; while the psychophysical nature of the human is characterized by decreasing returns, the machines or any production tool do not suffer from increasing fatigue during the work shift.

Therefore, we must exclude the hypothesis of decreasing marginal productivity of capital and connect it to the production capacity Q̄ through a constant coefficient of potential productivity of capital λK, which represents the reciprocal of the constant capital/product ratio (λK = 1/v):

Q̄ = λK K = K/v   (4.8)

Since it is the human who works in collaboration with the machines and imposes his biological rhythms on them, the capital must be connected to the actual production Q through a variable degree of use 0 < θK < 1, a consequence of the decreasing performance of labor, which expresses the effective time of use of the machines in relation to the time in which they are potentially available:

Q = θK λK K = θK K/v = θK Q̄   (4.9)

Dividing this relationship by the working hours needed to saturate the production capacity, which correspond to those of potential use of capital (K = L), and using Eq. (4.7), it is possible to show how the degree of use of capital adapts to the decreasing returns on labor:

θK = v γ λE h^(α+δ−1)   (4.10)

leading to a decreasing use of capital [dθK/dh = (α + δ − 1) θK/h < 0]. The condition 0 < α + δ < 1 defines a labor-intensive technique in which labor and capital collaborate, but the former dominates the latter: the decrease in the marginal productivity of labor remains a characteristic of the production process, even if mitigated by the aid of capital. However, the decreasing returns are attributed exclusively to the psychophysical characteristics of the human. Finally, mechanization and above all automation have the property of canceling labor fatigue (α + δ = 1) and therefore reverse the role of the production factors: capital dominates over labor, imposing its mechanical rhythms on the latter. The relationship (4.7) becomes:

λL = γ λE ⇐ α + δ − 1 = 0   (4.11)
That, by substitution in (4.10), implies θK = v λL = (K/Q̄)(Q/L) = 1 ⇐ K = L, at the full utilization of the production capacity. Automation therefore transforms labor returns from decreasing to constant and, therefore, changes
the production function with flexible coefficients (in the technological sense described) into a function with fixed coefficients. We define as capital-intensive all those processes in which α + δ = 1, that is, all the technologies that use machines, although a negligible residual of worker fatigue may remain in the setup phase. This analysis makes it possible to distinguish the production techniques (labor or capital intensive) on the basis of an absolute criterion, avoiding the need for a relative classification with respect to an average of the production per employee or of the capital/labor or capital/product ratio of the productive sectors.
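The absolute criterion just described can be illustrated with a small numerical sketch of Eqs. (4.7), (4.10), and (4.11): with α + δ < 1, hourly productivity and the degree of capital use fall over the shift, while with α + δ = 1 both are constant. All parameter values below are assumed for illustration only.

# Hourly productivity lambda_L = gamma*lambda_E*h**(alpha+delta-1)  (Eq. 4.7)
# Degree of capital use theta_K = v*gamma*lambda_E*h**(alpha+delta-1)  (Eq. 4.10)
# Labor-intensive case: alpha+delta < 1; automated case: alpha+delta = 1 (Eq. 4.11).
gamma, lam_E, v = 1.5, 10.0, 0.05   # assumed empowerment factor, efficiency productivity, K/Q ratio

def hourly_productivity(h, alpha_plus_delta):
    return gamma * lam_E * h ** (alpha_plus_delta - 1.0)

def capital_use(h, alpha_plus_delta):
    return v * gamma * lam_E * h ** (alpha_plus_delta - 1.0)

for label, apd in [("labor-intensive (alpha+delta = 0.8)", 0.8),
                   ("automated       (alpha+delta = 1.0)", 1.0)]:
    lam = [round(hourly_productivity(h, apd), 2) for h in (2, 4, 6, 8)]
    theta = [round(capital_use(h, apd), 2) for h in (2, 4, 6, 8)]
    print(label)
    print("  lambda_L over the shift:", lam)
    print("  theta_K  over the shift:", theta)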
4.4.2

Effects on Worker Incentives and Controls

Another problem related to automation concerns the trade-off between incentives and control of production workers. The economic literature identifies three fundamental reasons that induce the company to pay efficiency wages higher than those of the market [10]: (a) the need to minimize the costs of hiring and training workers, thereby stemming voluntary resignations [11, 12]; (b) the information asymmetry between worker and company (only the former knows his ability and his commitment), for which the company uses the incentive ex ante in order to avoid adverse selection (by rationing the job applications so as to select the best elements) and ex post in order to contain the moral hazard of employees, inducing them to commit to the production process when supervision costs are too high [13–15]; (c) the specificity of some production technologies that require leaving some groups of workers autonomous (the individual commitment is under the control of the team), paying an incentive to stimulate conscious participation in teamwork [16–18]. To analyze the effects of automation, we neglect the last motivation, more suitable for characterizing groups devoted to research, design, and development of new products. On the other hand, we consider the other motivations crucial for labor-intensive systems: productivity is not exclusively attributable to the technology and to the psychophysical characteristics of the workers, but also depends on their commitment. Assume that the productivity λL of each worker is a concave increasing function of the hourly wage w, that is, it grows rapidly in a first interval in which the wage plays an incentive role, but subsequently grows weakly due to the decrease in the marginal utility of the remuneration in relation to the fatigue of extending working hours:

λL = λL(w), λL(0) = 0, dλL/dw > 0, d²λL/dw² < 0   (4.12)

As an intermediate objective of the company we assume the maximization of the hourly margin mh, defined as the difference between the hourly value added p̂ λL and the hourly wage w:

max mh = p̂ λL(w) − w   (4.13)

The first-order condition requires that the company pay the worker an additional monetary unit of wage up to the point where it equals the value of its marginal productivity, that is, when the elasticity of productivity with respect to wages equals the share of labor costs in the value added:

dmh/dw = p̂ (dλL/dw) − 1 = 0 → (dλL/dw)(w/λL) = w/(p̂ λL)   (4.14)
Correspondingly, we obtain the optimal values of the hourly monetary wage w*, the hourly productivity λL*, and the hourly value added p̂ λL*. The duration of the shift is therefore endogenous. In fact, from Eq. (4.7) we obtain the optimal time of each worker h*, explaining why in labor-intensive systems the hours provided by the worker may differ from the contractual hours:

h* = (λL*/(γ λE))^(1/(α+δ−1))   (4.15)
Instead in capital-intensive systems λL = γ λE , whereby workers are forced to adapt to the needs of mechanized or automated production and therefore efficiency wages have no reason to exist. The maximum profit solution simply requires that the wage be set at its minimum contractual level: max mh = pˆ γ λE − w ⇐ min w. In conclusion, in labor-intensive systems, a variable wage can represent an incentive to maximize profit, extracting the optimal effort for the company from workers. However, if the productivity of the worker cannot be measured due to the complexity of the performance, control by a supervisor, to whom an incentive has to be paid, becomes preferable. In capital-intensive systems, the incentive is not justified, since there is no need to extract the effort of workers: both in industry and in services (especially financial ones), automation eliminates the problem of moral hazard at the root, since the machines carry out standardized work and are equipped with devices to report anomalies with respect to the correct execution of the process, thus minimizing worker opportunism. The only reason for paying an efficiency wage is the reduction of absenteeism. In summary, automation – in addition to increasing productivity directly and indirectly (neutralizing worker fatigue) – minimizes labor costs, eliminating direct incentives for workers or indirect incentives for supervisors in charge of process control. Obviously in the face of these benefits, the company must bear the greater burdens and risks of an
automated process, which requires significant investments in physical and human capital, as will be highlighted in the next paragraph.
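As a minimal sketch of the efficiency-wage mechanism of Eqs. (4.12)–(4.14), assume a concave incentive schedule λL(w) = a·√w (an illustrative functional form, not the chapter's); the hourly margin mh = p̂ λL(w) − w is then maximized where p̂ dλL/dw = 1, which a simple grid search reproduces. All numbers are assumptions.

import numpy as np

# Assumed concave incentive schedule (Eq. 4.12): lambda_L(w) = a_inc*sqrt(w),
# so lambda_L(0) = 0, dlambda_L/dw > 0, d2lambda_L/dw2 < 0.
a_inc = 4.0      # assumed incentive-response parameter
p_hat = 2.0      # value added per unit of output (net price)

w_grid = np.linspace(0.01, 100.0, 100000)
m_h = p_hat * a_inc * np.sqrt(w_grid) - w_grid     # hourly margin, Eq. (4.13)
w_star = w_grid[np.argmax(m_h)]

# Analytical optimum from Eq. (4.14): p_hat*dlambda_L/dw = 1
# with dlambda_L/dw = a_inc/(2*sqrt(w))  ->  w* = (p_hat*a_inc/2)**2
w_analytic = (p_hat * a_inc / 2.0) ** 2
print(f"grid search  w* = {w_star:.2f}")
print(f"analytical   w* = {w_analytic:.2f}")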
4.4.3

Effects on Cost Structure and Labor Demand in the Short Term

The transition from a labor-intensive process to a capital-intensive one has transformed the cost structure of the company, amplifying the costs of capital (given in the short term) and reducing those of labor. We define the total costs CT as the sum of the costs of the three factors (intermediate goods, labor, and capital), whose prices (pX, w, r pK) are given for the individual company, to which are added the organizational costs CH (relating to managerial staff and persons in charge of accounting, financial, information, and control activities):

CT = CH + pX X + wL + r pK K   (4.16)

In the short term, the organization of the company and the capital stock are given (CH = C̄H, K = K̄), so one can separate the fixed costs Cf (organization and capital) from the variable costs Cv (intermediate goods and labor, dependent on production Q through the respective input coefficient b and productivity coefficient λ):

Cf = C̄H + r pK K̄ = C̄   (4.17)

Cv = pX X + wL = (pX b + w/λ) Q   (4.18)

CT = Cf + Cv   (4.19)

Relation (4.8) defines the production capacity Q̄ as a technical constraint that the current production Q cannot exceed:

Q = λL = X/b ≤ Q̄ = K̄/v   (4.20)

If the variable cost (4.18) and the total cost (4.19) are divided by Q, after replacing Eqs. (4.17) and (4.18), the average variable cost (c = Cv/Q) and the average total cost (cT = CT/Q) are obtained:

c = pX b + w/λ   (4.21)

cT = C̄/Q + c   (4.22)

In systems of pure labor, in which capital is insignificant, the hypothesis that marginal productivity (λm = dQ/dL) and average productivity (λ = Q/L) are both decreasing holds:

λm(L) > 0, dλm/dL = d²Q/dL² < 0, dλ/dL = d(Q/L)/dL = (λm − λ)/L < 0 ⇐ λ > λm   (4.23)

It follows that the marginal variable cost (cm = dCv/dQ) and the average cost c, both growing as the quantities produced grow, derive from Eqs. (4.18) and (4.21):

cm = pX b + w/λm > 0, dcm/dQ = d²Cv/dQ² = −(w/λm³)(dλm/dL) > 0 ⇐ dλm/dL < 0   (4.24)

dc/dQ = d(Cv/Q)/dQ = (cm − c)/Q > 0 ⇐ cm > c   (4.25)

From Eq. (4.22) it is also noted that the average total cost has a minimum point where the marginal cost equals the average total cost:

dcT/dQ = −C̄/Q² + (cm − c)/Q ⋛ 0 ⇐ cm ⋛ cT
The relations from (4.23) to (4.25) represent the general case contemplated by neoclassical economic theory, which is sustainable only in the presence of production processes of pure labor or with simple aid tools, unable to neutralize the growing fatigue during the work shift. In labor-intensive systems, however, an incentive is required to achieve maximum efficiency, so relations (4.18) and (4.21) can be rewritten simply by replacing the values of the efficiency wage w* and the optimal productivity associated with it (λ = λL*). Variable and total costs [relations (4.18) and (4.19)] then become linear functions of the quantity produced. The average variable cost [relation (4.21)] is constant and the average total cost [relation (4.22)] is a decreasing function of the quantity produced in the interval (0, Q̄): dcT/dQ = −C̄/Q² < 0. On the other hand, in capital-intensive systems, automation has the following effects:
1. A constant productivity λA higher than that obtainable in labor-intensive systems (λ = λA > λL*).
2. A wage wA set by collective or company bargaining between workers and companies, normally lower than the typical efficiency wage of labor-intensive systems (wA < w*),
since no incentive is required to extract the optimal effort; the wages paid to workers are nevertheless generally higher than the minimum wage ŵ paid for simple work, in order to select and train specialized personnel suitable to collaborate with the machines (wA > ŵ); it is therefore difficult to evaluate whether wA is greater than, less than, or equal to w* (in the following, wA = w* = w will be assumed).
3. A positive correlation (linear for simplicity) of labor productivity with the quantity produced, because the company is reluctant to replace qualified personnel in the presence of production declines induced by a fall in market demand:

λA = λA(Q), dλA/dQ > 0, d²λA/dQ² = 0   (4.26)

4. A cost of capital r pKA higher than that of labor-intensive systems, due to the higher purchase price of the machines (pKA > pK), with the same average useful life and cost of financing.

With these premises we can compare the average cost of company 1, which adopts a capital-intensive production system, with that of company 2, which adopts a labor-intensive system, using relation (4.22) and considering an equal input coefficient of intermediate goods:

c1 = CT1/Q = C̄1/Q + pX b + w/λ1(Q) ⋛ c2 = CT2/Q = C̄2/Q + pX b + w/λ2*   (4.27)

As mentioned, in company 1 compared to company 2 the labor cost per unit of product is lower [w/λ1(Q) < w/λ2*] but the cost of capital is higher (pK1 > pK2), assuming equal organization costs (CH1 = CH2). A negative sign of the cost gap D indicates the advantage of automation:

D(Q) = c1 − c2 = r (pK1 − pK2) K̄/Q + w [1/λ1(Q) − 1/λ2*] < 0

The inequality r (pK1 − pK2) K̄/Q < w [1/λ2* − 1/λ1(Q)] is a necessary condition for the capital-intensive production technique to be economically efficient: the higher cost of capital of the automated system must be lower than the higher labor cost of the labor-intensive system. The cost difference, moreover, benefits the automated company as production grows:

dD/dQ = −w (∂λ1/∂Q)/λ1² − r (pK1 − pK2) K̄/Q² < 0

It can therefore be said that a labor-intensive system is more advantageous for low values of Q, while the automated technique dominates for high values of Q: there is a critical level of production which makes the two technologies indifferent. Capital-intensive techniques (high fixed costs and low variable costs) are therefore riskier than labor-intensive techniques (low fixed costs and high variable costs) in the event of a significant drop in the demand for the goods produced by the company: excess labor can be expelled from the production process in a short time, while excess capital, which is a sunk cost, requires a long amortization of its value.

Total costs of firms 1 and 2 are shown in quadrant a of Fig. 4.1; costs are reformulated in average terms in quadrant b. In both cases, up to point A, firm 2 has total and unit costs lower than those of firm 1 and is therefore more efficient in the range 0–QA. Beyond point A, in the interval QA–Q̄, the situation reverses, but this is the normal condition, since the previous decision of both companies was to have an equally high production capacity Q̄ > QA, as shown in quadrant c, which describes their linear production functions with respect to the amount of labor. Firm 1 is more efficient than firm 2 also in terms of the programmed quantity Qe, set on the basis of demand expectations and lower than the production capacity by a part θQ reserved to cope with excess demand:

Qe = (1 − θQ) Q̄   (4.28)

On the basis of Qe each firm calculates an experimental price (or list price) pi (i = 1, 2) to be proposed to the market, sufficient to cover the unit cost of its product (full-cost pricing), which includes the depreciation rate and the normal profit rate (indicated with r) on the capital value pK K:

p1 = cT1(Qe) < p2 = cT2(Qe)   (4.29)

If the market price p of the product is high enough to allow the less efficient company to achieve a normal profit [p = p2 = cT2(Qe)], the more efficient company benefits from the effects of its automated technology, creating a differential return, or surplus, or extra profit S1:

S1 = (p − p1) Qe = [cT2(Qe) − cT1(Qe)] Qe   (4.30)

If the market price allows firm 2 to cover only its variable costs (p = c2 = pX b + w/λ2*), this company is on the verge of leaving the market in the short term (marginal firm): it can survive only until the capital is written off (in the long run), after which the firm closes the activity. Finally, in quadrant d we associate different productivity levels (on the abscissa) with the cumulative quantities of labor (on the ordinate) for the entire interval up to the programmed production, and then extend it to the saturation of the production capacity. This results in two decreasing step relationships between productivity and employment, from which it is evident that the aggregate
Fig. 4.1 Costs relating to firms 1 (capital intensive) and 2 (labor intensive). Quadrant a: total costs CT1, CT2 and fixed costs Cf1, Cf2; quadrant b: average costs and prices (tan α_i = c_i, tan β_i = p_i); quadrant c: linear production functions (tan γ_i = λ_i); quadrant d: productivity and cumulative labor demand
demand for labor rotates with the increase in production from the position λ1 FGHI (continuous line) to the position λ1 MNOP (dashed line).
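The comparison of Fig. 4.1 can be reproduced numerically: for assumed fixed costs, productivities, and capacity, the break-even quantity QA at which the two average-cost curves cross, the programmed quantity Qe of Eq. (4.28), and the surplus S1 of Eq. (4.30) follow directly from Eqs. (4.27)–(4.30). All parameter values are hypothetical.

import numpy as np

# Hypothetical parameters (cf. Eqs. 4.27-4.30); firm 1 is capital intensive, firm 2 labor intensive.
pX, b, w = 1.0, 0.4, 20.0
Cf1, Cf2 = 50000.0, 10000.0          # fixed costs (organization + r*pK*K)
lam2_star = 8.0                      # firm 2: constant optimal productivity
lam1 = lambda Q: 10.0 + 0.0005 * Q   # firm 1: productivity mildly increasing in Q (cf. Eq. 4.26)
Q_bar, theta_Q = 100000.0, 0.1       # capacity and demand buffer share

def avg_cost_1(Q):
    return Cf1 / Q + pX * b + w / lam1(Q)

def avg_cost_2(Q):
    return Cf2 / Q + pX * b + w / lam2_star

Q = np.linspace(1000, Q_bar, 5000)
D = avg_cost_1(Q) - avg_cost_2(Q)                 # cost gap D(Q)
Q_A = Q[np.argmin(np.abs(D))]                     # break-even quantity (point A)

Q_e = (1 - theta_Q) * Q_bar                       # programmed quantity, Eq. (4.28)
p2 = avg_cost_2(Q_e)                              # price set by the marginal firm, Eq. (4.29)
S1 = (p2 - avg_cost_1(Q_e)) * Q_e                 # extra profit of the automated firm, Eq. (4.30)
print(f"break-even quantity Q_A ~ {Q_A:,.0f}")
print(f"surplus of the automated firm at Q_e: S1 = {S1:,.0f}")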
4.5
Macroeconomic Effects of Automation in the Short Period
Even though we have abandoned the microeconomic postulate of decreasing marginal productivity of the production factors, from quadrant d of Fig. 4.1 it is possible to derive a macroeconomic function of short-term labor demand that re-proposes a decreasing relationship, based on the decreasing marginal efficiency of the companies.
4.5.1
Demand for Labor
To extend the model to companies operating in the economic system, at the aggregate level we must eliminate intermediate goods and consider only the added value, placing b = 0 in
the relationship (4.21), in order to move from the concept of gross production Q to that of gross domestic product Y in real terms. We therefore rewrite the variable average cost relationship (4.21), which is also the minimum price pˆ m at which the least efficient firm remains temporarily on the market in the short term: pˆ m = cm = w/λm
(4.31)
where cm and λm indicate the average variable cost and the (constant) productivity of the marginal firm, and not the marginal cost and marginal productivity of neoclassical theory. The equation shows that the price is an increasing linear function of money wages (∂p̂/∂w = 1/λm > 0). Inframarginal firms (with more efficient technologies), on the other hand, are able to partially or completely recover fixed costs or even obtain profits higher than normal due to their higher productivity. Considering a large number of companies under free competition, the production capacity of each is a submultiple of the market extension. Consequently, inframarginal companies should align their planned list price with the higher one of the marginal company: if they did
not carry out this alignment, selling their product at a lower price, they would quickly saturate their production capacity, creating a queue of unsatisfied demand that would spill over to the other companies applying a higher price, up to the maximum price of the marginal company. The production (or supply) price charged by the marginal company therefore represents the equilibrium price (P* = p̂m), dependent on the nominal wage level w. In short-term equilibrium, from Eq. (4.31), the equality between productivity and the real wage ω is recovered:

λm = w/P* = ω
(4.32)
If the aggregate demand (and therefore the actual production Y) is greater than the programmed production Ye of all the enterprises (Y > Ye), the incomes of inframarginal enterprises will increase, because the average fixed cost decreases while the average variable cost is unchanged, whereas the marginal enterprise will continue to remain at the point of exit from the market, since its total fixed costs remain unchanged and its revenues cover only the variable costs, both increased in the same proportion. If the aggregate demand is lower (Y < Ye) and the restriction is uniformly distributed, inframarginal companies will face a reduction in extra profits or will have to renounce coverage of a portion of their fixed costs. The dismissal of labor induced by the fall in production will lighten variable costs, but the average fixed cost will rise for all companies at the same price. In Fig. 4.2 the graph of quadrant d of Fig. 4.1 has been re-proposed with the Cartesian axes inverted, ordering the companies according to their relative efficiency: from the one with the highest productivity down to the one which, in the abstract, has zero productivity. A continuous differentiation of companies is assumed, and three levels of aggregate production are considered: Y0 < Ye < Ȳ. The flatter the curves, the lighter the technological diversity between firms: at the limit, a horizontal straight line excludes differentiation and involves three vertical straight lines for the three production levels indicated. Consider the planned production curve Ye: moving down along the curve, companies make decreasing profits. Point D (ω = ω̄) locates the marginal company, while the companies below it are out of the market, because the price does not cover their average variable cost; if the real wage drops (ω̂ < ω̄), a greater number of less efficient firms remain on the market, which increases the demand for labor (ω̂I > ω̄D). If actual production expands to saturate the production capacity (Y = Ȳ > Ye), the demand for labor increases from ω̄D to ω̄E and from ω̂I to ω̂M; if it falls below the programmed level (Y = Y0 < Ye), the demand for labor drops from ω̄D to ω̄C and from ω̂I to ω̂H.
Fig. 4.2 Balance in the labor market: decreasing demand from businesses and rigid supply (labor demand curves for the production levels Y0, Ye, and Ȳ)
In this model, real wages have the role of selecting the number of companies able to survive on the market. Unlike the neoclassical approach of a single representative company that produces with decreasing returns with respect to each production factor, the assumption of linear technologies (with constant productivity), ordered on the basis of their decreasing efficiency, entails an aggregate demand for labor LD depending not only on the real wage ω, but also on production Y, within the limit of the short-term production capacity (Y ≤ Ȳ):

LD = LD(Y, ω),  ∂LD/∂Y > 0,  ∂LD/∂ω < 0   (4.33)
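Relation (4.33) can be made concrete with a toy aggregation in the spirit of Fig. 4.2: firms are ranked by a constant productivity λi, only firms with λi ≥ ω cover their variable cost at the real wage ω, and a given production Y is spread over the surviving capacity (a simplifying assumption of this sketch). The firm distribution and all numbers are illustrative.

import numpy as np

rng = np.random.default_rng(0)
lam = np.sort(rng.uniform(0.5, 3.0, size=200))[::-1]   # firm productivities, most efficient first
cap = np.full(lam.size, 100.0)                          # each firm's capacity (assumed equal)

def labor_demand(Y, omega):
    # Aggregate short-run labor demand L_D(Y, omega), cf. Eq. (4.33):
    # only firms with lambda_i >= omega stay in the market; output is
    # spread over the surviving capacity (a simplifying assumption).
    active = lam >= omega
    if not active.any():
        return 0.0
    total_cap = cap[active].sum()
    Y = min(Y, total_cap)                  # production cannot exceed capacity
    q = cap[active] * (Y / total_cap)      # output allocated to surviving firms
    return float((q / lam[active]).sum())  # employment: sum of q_i / lambda_i

for omega in (1.0, 1.5):
    for Y in (5000.0, 10000.0):
        print(f"omega={omega:.1f}  Y={Y:>8,.0f}  L_D={labor_demand(Y, omega):>8,.0f}")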
4.5.2

Labor Supply
The dominant economic theory hypothesizes that a free worker can rationally choose how to use his available time, dividing it optimally between work and free time: an increase in real wages would induce him – except in extreme cases of very high wages – to give up free time in order to acquire more income to be used for consumption and savings. From this behavior derives a labor supply LS increasing with real wages: LS = LS(ω) with dLS/dω > 0. The Keynesian criticism of this postulate underlines that the employment contract stipulated between the parties fixes on the one hand the job performance (hours, shifts, rules of conduct, etc.) and on the other the payment of a nominal wage w. These terms are not instantly renegotiable, but are fixed for a sufficiently long time, aimed at guaranteeing the company the opportunity to evaluate the monetary profit expected from the planned production and investment decisions. In this time span, nominal wages remain fixed, but prices can vary.
At the wage established by the contract, the labor supply can be considered infinitely elastic, in the sense that the entire workforce is willing to work at that wage (provided it is able to ensure subsistence), regardless of the disutility of work, a concept more suited to choosing the type of work than to deciding whether to work and for how long, since contracts are imposed by the needs of production. Workers cannot therefore make an optimal choice between work and leisure: at the contracted wage w̄ the labor supply LS is given by the existing quantity of the workforce L̄. The following two conditions can be written, indicating the price level with P:

w = w̄ ⇒ ω = w/P
(4.34)
LS = L
(4.35)
To support the hypothesis of fixed nominal wages during the average duration of contracts, we can write the following relation, which incorporates some suggestions of Keynesian theory [19]:

w = w(Ω, u, P^e) = w(Ω0, u0, P0) = w̄
(4.36)
The nominal wage depends on three exogenous variables, which in order of relevance are:
1. The institutional, political, and social conditions of the historical moment in which the employment contract is signed (time 0), summarized by the exogenous variable Ω = Ω0.
2. The contractual strength of the parties, largely conditioned by the unemployment rate u experienced in a sufficiently long period preceding the negotiation (u = u0).
3. The expectation of the general price level P^e, agreed between the parties at the time of signing the contract and therefore incorporated in it (P^e = P0); the expectation is generally extrapolative, because a rational expectation (deduced from a model of macroeconomic forecasts) is credible only if it reflects the beliefs about perceived inflation compared to that statistically detected, and perceived inflation is strongly conditioned by the inflation experienced in the current period.
Eq. (4.36) establishes that the nominal wage is fixed for the entire duration of the contract, while the real wage in (4.34) decreases as the price level increases (dω/dP = −ω/P < 0), in line with the statements of Keynes [20]. But the equilibrium price P* is given, since it is calculated by the companies on the basis of the programmed quantity. Consequently, the real wage is also fixed, as subsequently acknowledged by Keynes [21]: ω = w̄/P* = ω̄.
4.5.3
Equilibrium and Disequilibrium in the Labor Market
To ensure full employment, the labor demand LD must absorb all the supply available on the market, but there is no guarantee that this will happen, having excluded the neoclassical hypothesis of a free choice between work and leisure, which would entail a labor supply curve increasing with real wages, able to intersect the labor demand curve. Instead, in the short term we have assumed:

LD ≤ L̄
(4.37)
In Fig. 4.2 we have added a vertical line that delimits the short-term labor supply L̄. The following question arises: is an increase in production or a reduction in real wages more effective in achieving a higher growth rate gL of the demand for labor? The answer depends on the value of the elasticities with respect to production and real wages – respectively εY = (∂L/∂Y)(Y/L) > 0 and εω = (∂L/∂ω)(ω/L) < 0 – and on the uniform growth rate of these variables (g = dY/Y = dω/ω):

gL = dL/L = (1/L)[(∂L/∂Y) dY + (∂L/∂ω) dω] = (εY + εω) g > 0 ⇒ εY > |εω|   (4.38)

If firms are not differentiated in terms of technological efficiency (equal productivity), changes in real wages are irrelevant (the labor demand curves are vertical), so only changes in production count. Only if there is a differentiation of technological efficiency (decreasing demand curves) can a decrease in real wages induce greater employment. The stronger the differentiation (the more elastic the labor demand curves), the more likely it is to achieve a point of contact with the potential labor supply curve, thus ensuring full employment. Otherwise, even a minimum real wage, for example ω = ω̂, may be insufficient to achieve full employment of workers, as in Fig. 4.2, where one can distinguish:
(a) Technological or structural unemployment (the MN section), which cannot be absorbed even by fully utilizing the production capacity, this being a technological constraint
(b) Rising cyclical unemployment (HM > HI) as production decreases
Only if the labor demand is constrained by the potential supply (L̄ < LD) is the labor force scarce compared to the productive capacity of the economy; in that case it is possible to achieve full employment of the labor force, but not necessarily that of
capital. If the supply intersects the demand at a point in the IM interval, the full employment of the workforce corresponds to that desired by the companies; if the intersection occurs at a point to the left of point I, the capital is underutilized. In these cases, a strengthening of workers' power should occur, which could lead to an increase in nominal wages at the subsequent contract renewal. This causes firms to pass it on to the equilibrium price P*, in proportion to the growth in variable costs: dP*/dw = 1/λm > 0. The increase in the nominal wage, however, is greater than the increase in prices (the real wage increases), so the contractual action of the workers is not rendered vain:
dω = (∂ω/∂w) dw + (∂ω/∂P*) dP* = (dw/P*)(1 − ω/λm) > 0 ⇐ ω < λm   (4.39)
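A small arithmetic check of Eq. (4.39): assuming a full-cost price P* above the marginal firm's variable cost (so that ω < λm), a nominal wage increase dw raises the price by dw/λm and the real wage, to first order, by (dw/P*)(1 − ω/λm). The numbers below are illustrative.

# Illustrative check of Eq. (4.39)
lam_m = 5.0             # productivity of the marginal firm (assumed)
w, P_star = 10.0, 2.5   # initial nominal wage and equilibrium price, so omega = 4 < lam_m
omega = w / P_star

dw = 0.5
dP = dw / lam_m                                # firms pass the wage rise into the price
d_omega = (dw / P_star) * (1 - omega / lam_m)  # Eq. (4.39), first-order change

omega_new_exact = (w + dw) / (P_star + dP)     # exact new real wage
print(f"d_omega (Eq. 4.39, first order) = {d_omega:.4f}")
print(f"exact change                    = {omega_new_exact - omega:.4f}")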
The conclusions drawn on differentiation are paradoxical: the more an economy is characterized by inefficient small businesses (as in Italy), the more employment benefits from wage reductions; if on the other hand the most efficient technologies are widespread among companies (as in Germany), employment gets little benefit from wage compression and mainly benefits from the growth of production. It is therefore no coincidence that wages are higher in economies with higher average productivity, induced by the adoption of capital-intensive techniques, compared to economies in which technological backwardness compels companies to compete on labor costs. A problem that emerges from this analysis is the possibility that the differentiation of the companies remains in the long run: the marginal ones are forced to exit the market and can be replaced by new incoming companies equipped with more efficient technologies, thus eliminating the differentiation. To maintain it, it is necessary to conceive a dynamic process in which the new technologies of the incoming companies allow a positioning at the top in the ranking of production efficiency, moving the others downwards through a creative destruction [22, 23], that is, a process of gradual expulsion of less efficient firms, so as to always propose a curve of decreasing productivity and labor demand, even if shifted upwards. Even the existing companies that come down in the ranking, after having partially or totally recovered the value of the capital invested, can in turn adopt new technologies that allow them to go back to the highest positions. In this dynamic process it is necessary to assume that in the long term the technical progress incorporated in the capital is such as to allow a continuous replacement of the old enterprises by the new ones and a continuous reinvestment by the existing ones. This dynamic is not a groundless hypothesis in contemporary capitalism, characterized by continuous digital innovations.
4.6
Macroeconomic Effects of Automation in the Long Period
Long-term analysis requires a different model to tackle the problem of the growth of the economic system. For a complete analysis, the reader is referred to Ravazzi and Abrardi [24]. Before proceeding in this direction, it is appropriate to outline some historical events to understand the drastic change that has occurred in production systems compared to the past.
4.6.1
A Brief Historical Excursus
Morris [25] classified humanity’s progress through four characteristics, assigning each a score, which he then added to obtain an indicator of social evolution: the ability to dominate energy for the development of economic activity; the ability to organize, which was mainly expressed in the transformation of the village into a city; the war force, that is, the number of soldiers and the type of weapons; the information technology, aimed at sharing and processing data. From this classification it emerges that the only event that caused a truly significant leap in economic development was the first industrial revolution of the second half of the eighteenth century, whose determining factors were first the division of labor, as it was documented by Smith [26], then the diffusion of the machines in the production processes, highlighted by Ricardo [27], and then by Marx [28]. This innovative phase concerned mechanics, chemistry, and technology and was above all a consequence of the invention of the steam engine and subsequent improvements. Together with the division of labor, machines have determined both the creation of factories for large-scale production and the organization of fast transport means. A second industrial revolution occurred between the end of the nineteenth century and the 40 years following the Second World War, but the basic characters of new production processes still connote the first era of machines [29]. A much more radical revolution has only recently appeared: a second era of machines.
4.6.2
First Era of Machines: An Intersectoral Model with Consumption and Capital
In this phase, capitalism evolved as a result of the cooperation of machines with labor, aimed at obtaining higher productivity and changing the organization of production. The first era of machines will be described by means of a model of sectoral interdependencies [30–32] in physical terms with three sectors: one industry producing consumer goods (sector 1), exclusively demanded by workers, and two industries specific to the production of capital goods (sectors 2 and 3), used as inputs in the production processes.
The Model in Terms of Quantity: Potential Growth Without Technical Progress

The production of the first sector Q1 is destined to satisfy the demand for consumer goods of the workers employed in each of the three sectors (L = L1 + L2 + L3):

Q1 = c1 (L1 + L2) + c3 L3 = c1 L + s3 L3   (4.40)

where c1 is the quantity of consumer goods per unit of labor purchased by the workers of the first two sectors, who perform simple work; c3 is the quantity consumed by third-sector workers, who perform complex work, for which they benefit from greater consumption (s3 = c3 − c1 > 0). As for capital, the machines used in the production of consumer goods differ from those used for the production of capital goods; we assume that the machines made by sector 3 for sector 2 are built with the exclusive application of complex work, in order to design and manufacture computers (hardware and software). The second- and third-sector productions are therefore aimed at satisfying the demand for investment goods of the first two sectors (Ii; i = 1, 2), which corresponds to the increase ΔKi in the capital stock, given the hypothesis of infinite duration:

Qi+1 = Ii = ΔKi   (i = 1, 2)   (4.41)

Considering perfectly competitive markets, the long-term intersectoral mobility of capital (in search of maximum yield) determines the leveling of profit rates and – since in the model profits are all invested – the equality of the growth rate g of capital:

ΔKi = g Ki   (4.42)

Substituting relation (4.42) into (4.41), we obtain:

Qi+1 = g Ki   (4.43)

It is assumed that the workers are all employed in the three sectors, according to fixed utilization coefficients of labor per unit of product (li = Li/Qi = 1/λi):

L = Σi Li = Σi li Qi   (i = 1, 2, 3)   (4.44)

The capital stock installed in each production sector is also considered fully utilized, with fixed capital coefficients per unit of product vi. Then it is possible to write:

Ki = vi Qi   (4.45)

The technology of each sector can also be summarized by a fixed capital/labor coefficient ki, which represents the degree of mechanization or capital intensity of a given production sector. Using relationships (4.44) and (4.45), we can obtain:

ki = Ki/Li = vi/li   (4.46)

To find the solution of the model, we replace relations (4.44) and (4.45), respectively, in (4.40) and (4.43), obtaining a system of three equations that guarantees the balance of the markets:

Sector 1 (consumer goods): Q1 = c1 (l1 Q1 + l2 Q2) + c3 l3 Q3
Sector 2 (machines T1): Q2 = g v1 Q1
Sector 3 (machines T2): Q3 = g v2 Q2   (4.47)

For the first equation, in equilibrium the supply of consumer goods Q1 must be equal to the demand from the workers employed in all three sectors (right-hand side). Based on the second relation, the production Q2 of machines of the first type (T1) must match the demand for fixed-capital investment of the consumer-goods sector. For the third relation, the production Q3 of new capital goods of the second type (T2) must be equal to the demand for fixed-capital investment of the second sector. The system (4.47) is indeterminate, being composed of three linearly independent equations and four unknowns: Q1, Q2, Q3, g. By using the exchange ratios Rij = Qi/Qj and selecting Q2 as the unit of measurement, we rewrite the system in the form:

Sector 1 (consumer goods): R12 = Q1/Q2 = (c1 l2 + c3 l3 R32)/(1 − c1 l1)
Sector 2 (machines T1): R12 = 1/(g v1)
Sector 3 (machines T2): R32 = Q3/Q2 = g v2   (4.48)

By matching the first two equations and replacing the third, we obtain a second-degree equation:

R12 = (c1 l2 + c3 l3 v2 g)/(1 − c1 l1) = 1/(g v1) → c3 l3 v2 g² + c1 l2 g − (1 − c1 l1)/v1 = 0   (4.49)

from which the equilibrium growth rate g* > 0 is obtained, having appropriately chosen the positive root:

g* = −(1/2) c1 l2/(c3 l3 v2) + √{[(1/2) c1 l2/(c3 l3 v2)]² + (1 − c1 l1)/(c3 l3 v1 v2)}   (4.50)

If we replace relations (4.40) and (4.43) in (4.44) and (4.45), we get the system in terms of factors:

Labor: (1 − c1 l1) L = l2 g K1 + (1 + l1 s3) l3 g K2
Capital: K1 = v1 (c1 L + s3 l3 g K2), K2 = v2 g K1   (4.51)
The first equation expresses the intersectoral equilibrium of the labor market. The other two describe the balance of the capital market. From the solution, the relative quantities (K1/K2) and the same growth rate g* are obtained. The economy develops along a trajectory of potential equilibrium with full occupation of resources: a golden age, as defined by Joan Robinson [33]. To compute the absolute quantities of products and factors, the perspective is shifted to the short term, characterized by two types of constraints: (a) a given capital stock of an industrial sector, typically that of the production of consumer goods, which affects all the others (K1 = K̄1); (b) a given aggregate labor supply (L = L̄). In case (a) we include the constraint in the system (4.51), obtaining the same equilibrium growth rate (g*) and the equilibrium quantities of labor (L*) and of the capital of the second sector (K2*). By replacing these variables in (4.43), the productions of the two machine-manufacturing sectors are finally obtained (Q2*, Q3*), and then that of consumer goods (Q1*) from the first equation of the system (4.47). If the existing supply is L̃ > L*, the unemployment rate (0 ≤ u = (L̃ − L*)/L̃ < 1 ⇐ L̃ > L*) is positive, and this is even advantageous for companies, because production capacity is fully occupied and unemployment reduces the bargaining power of workers. In case (b), the existing workers are all employed (L = L̃) and they constitute a constraint for production. From the system (4.51) we obtain the equilibrium growth rate (g*) and the capital stocks of the first and second sectors (K1*, K2*), from which we then obtain the quantities produced by all three sectors. It is possible that the capital stock of one or both sectors is not fully utilized, so that entrepreneurs are no longer inclined to acquire new capital goods based on the equilibrium growth rate: the expected growth rate ge and the effective rate g of capital accumulation would slow down (g = ge < g*). The demand for capital goods would be lower than potential production, triggering a period of recessive instability [34–37] that cannot be avoided without an intervention from the state, as suggested by Keynes [20].
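A compact numerical sketch of the quantity side of the model: for hypothetical coefficients, the equilibrium growth rate g* of Eq. (4.50) is computed and the exchange ratios of Eq. (4.48) are checked for consistency with the sector-1 balance of system (4.47). None of the coefficient values below are taken from the chapter.

import math

# Hypothetical coefficients of the three-sector model
c1, c3 = 0.6, 0.9            # consumption per unit of simple / complex labor
l1, l2, l3 = 0.8, 0.6, 0.5   # labor per unit of product in sectors 1, 2, 3
v1, v2 = 10.0, 6.0           # capital per unit of product in sectors 1, 2

# Equilibrium growth rate, Eq. (4.50)
half = 0.5 * c1 * l2 / (c3 * l3 * v2)
g_star = -half + math.sqrt(half**2 + (1 - c1 * l1) / (c3 * l3 * v1 * v2))

# Exchange ratios, Eq. (4.48)
R32 = g_star * v2
R12 = 1.0 / (g_star * v1)

# Consistency check with the first equation of (4.47), written in ratios
rhs = (c1 * l2 + c3 * l3 * R32) / (1 - c1 * l1)
print(f"g* = {g_star:.4f}")
print(f"R12 from sector 2 balance: {R12:.4f}; from sector 1 balance: {rhs:.4f}")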
The Model in Terms of Prices: Wages and Profits

We now move from the model in physical terms to that expressed in value, assuming – for simplicity – that the nominal wages w (paid for simple work) and z (for complex work) are entirely spent on consumer goods, while profits are entirely invested in capital goods. Furthermore, the rate r of profit on invested capital (or interest rate) must be uniform in all productive sectors, in order to prevent capital from moving from one sector to another in search of greater returns. Denoting with pi the unit prices of the goods produced by the three sectors (i = 1, 2, 3), rp2 and rp3 represent the usage prices or unit costs of capital. For each sector we write the equality between the total cost (of labor and capital) on the left-hand side and the total revenue on the right-hand side:

Sector 1 (consumer goods): w L1 + r p2 K1 = p1 Q1
Sector 2 (machines T1): w L2 + r p3 K2 = p2 Q2
Sector 3 (machines T2): z L3 = p3 Q3   (4.52)
The real wages ω and ζ can be defined by dividing the nominal wages w and z by the price p1 of consumer goods, imposing the following conditions (ω̂ is the subsistence wage):

w = c1 p1 → ω = w/p1 = c1 ≥ ω̂   (4.53)

z = c3 p1 > w → ζ = z/p1 = c3 > ω   (4.54)
We use these two relationships to rewrite the system (4.52), dividing each element by Qi, in order to obtain the equality between the average cost (left-hand side) and the average revenue (the selling price) of each sector:

Sector 1 (consumer goods): ω p1 l1 + r p2 v1 = p1
Sector 2 (machines T1): ω p1 l2 + r p3 v2 = p2
Sector 3 (machines T2): ζ p1 l3 = p3   (4.55)
We rewrite the system in terms of relative prices, assuming the price of the consumer-goods sector as the unit of measure (p1 = 1, p21 = p2/p1, p31 = p3/p1):

Sector 1 (consumer goods): ω l1 + r p21 v1 = 1
Sector 2 (machines T1): ω l2 + r p31 v2 = p21
Sector 3 (machines T2): ζ l3 = p31   (4.56)
The system is now composed of three equations and five variables: ω, ζ, r, p21, p31. We obtain a second-degree equation Z equal to (4.49) of the model in physical units, being ω = c1 and ζ = c3, with the only difference of having the profit rate r as a variable instead of the growth rate g:

Z = ζ l3 v2 r² + ω l2 r − (1 − ω l1)/v1 = 0
(4.57)
It follows that the profit rate and the growth rate of the economic system are the same (r* = g*), because we assumed as exogenous the remuneration of labor determined by the bargaining between the social partners (ω = ω̄, ζ = ζ̄), so the solution of (4.57) is analogous to (4.50):

r* = −(1/2) ω l2/(ζ l3 v2) + √{[(1/2) ω l2/(ζ l3 v2)]² + (1 − ω l1)/(v1 v2 ζ l3)}   (4.58)
If we remove the hypothesis of a given real wage for simple work, the previous relationship expresses the link existing between the rate of profit r and the real wage ω. Using the derivation rule of the implicit function (4.57), we obtain the marginal rate of substitution between profit and real wages:

dr/dω = −(∂Z/∂ω)/(∂Z/∂r) = −(l2 r + l1/v1)/(2 ζ l3 v2 r + ω l2) < 0

The parameter θ2 > 0 highlights the role of endogenous technical progress induced by the rate of capital growth. The condition for obtaining a decrease in the work content per unit of product requires a critical value of the growth rate:

Δli < 0 ⇐ g > θ0/θ2 − (θ1/θ2) li(t−1)

The general solution of Eq. (4.59) is the following:

li(t) = li* + (li(0) − li*)(1 − θ1)^t   (4.60)

where li* is the particular equilibrium solution (li(t) = li(t−1)), in which h0 = θ0/θ1 and h1 = θ2/θ1:

li* = (θ0 − θ2 g)/θ1 = h0 − h1 g ≥ 0 ⇐ g ≤ h0/h1 = θ0/θ2   (4.61)

The limit of (4.60) for t → ∞ is the particular solution (4.61), if 0 < θ1 < 1 ⇒ lim (1 − θ1)^t = 0:

lim [h0 − h1 g + (li(0) − h0 + h1 g)(1 − θ1)^t] = h0 − h1 g

This solution shows that in a stationary state (g = 0), in which no new machines are produced, the work content per
unit of product and productivity are constant: li* = h0. Instead, in a progressive economy (g > 0), the former decreases and productivity increases: dli*/dg = −h1 < 0. The speed of convergence toward the equilibrium solution depends on θ1: if θ1 → 1, the equilibrium level li* is reached quickly, with h0 = θ0 and h1 = θ2. This hypothesis is reasonable in the long run and we therefore include it in the model. Finally, unlike in the first era of machines, in the second era automation and AI continuously increase the parameter θ2, due to the exponential increase in productivity and the symmetrical fall in the work content. Now consider the capital/product coefficient vi. Based on relationship (4.46), it can be expressed as the interaction between capital per unit of labor ki and labor per unit of product li:

vi = ki li = ki/λi
(4.62)
All the coefficients (ki, λi, li) depend on the diffusion of machines in the production processes and are therefore functions of the growth rate g of capital. We can therefore rewrite the previous relationship, using the elasticity of each coefficient (εv, εk, ελ) with respect to g:

vi = ki(g)/λi(g) ⇒ εv = (dvi/dg)(g/vi) = (dki/dg)(g/ki) − (dλi/dg)(g/λi) = εk − ελ ⋛ 0 if εk ⋛ ελ   (4.63)

The acquisition of the machines on the one hand leads to an increase in the capital ratio per unit of labor (mechanization effect: dki/dg > 0) and on the other – due to the technical progress incorporated in capital – it induces an increase in the productivity of labor (dλi/dg > 0), that is, a decrease in the amount of labor per unit of product (substitution effect: dli/dg < 0), which counteracts the first effect. In the following analysis, we assume that the two forces are perfectly opposed, so that the capital/product ratios do not change significantly in the long run [45, 46]. For a verification, we used Mediobanca data, available online, approximating the capital/product ratio with the ratio between the book values of gross tangible fixed assets and net turnover; given the delay with which inflation corrects the values of the stock of fixed assets (only occasionally revalued by specific laws), it is not surprising to observe a fall of the ratio in the period of the great inflation of the seventies and a rise in the following decade; making the data homogeneous, the ratio was stable around 0.3 for medium-sized enterprises and 1 for large ones. So, in terms of elasticities, we assume εk = ελ. At this point we can replace the particular solution (4.61) in relation (4.49), obtaining a new second-degree equation:
R12 = [c1 (θ0 − θ2 g) + c3 l3 g v2]/[1 − c1 (θ0 − θ2 g)] = 1/(g v1) → (c3 l3 v2 − c1 θ2) g² + c1 (θ0 − θ2/v1) g − (1 − c1 θ0)/v1 = 0   (4.64)

The solution provides the equilibrium growth rate with endogenous technical progress g̃*:

g̃* = −c1 (θ0 v1 − θ2)/[2 v1 (c3 l3 v2 − c1 θ2)] + √{[c1 (θ0 v1 − θ2)/(2 v1 (c3 l3 v2 − c1 θ2))]² + (1 − c1 θ0)/[v1 (c3 l3 v2 − c1 θ2)]}   (4.65)
The condition for the positivity of the growth rate requires that the following inequalities be verified: c3 l3 v2 > c1 θ2 and c1 θ0 < 1. In equilibrium, the growth rate g̃* with endogenous technical progress (incorporated into capital) is greater than that without progress: g̃* > g* ⇐ c1 θ2 < 1. Ultimately, technical progress increases the potential growth of the economic system, and in the second era of machines it undergoes a continuous and significant acceleration due to automation, so that even aggregate demand must grow at an ever higher rate to ensure dynamic balance over time. The economic system moves along a path where the simple work of the first two sectors is first gradually and then exponentially replaced by machines and specialized work, converging toward a final state characterized by its disappearance. In the transient period, the increase in productivity induced by this replacement is however gradual and can even be disappointing [47, 48], with a mean delay estimated at 7 years [49, 50]. The adoption of new technologies clashes with human incompetence and inertia toward organizational change. But learning by doing helps to derive from the reorganization of processes the advantage that new technologies promise, allowing productivity to increase and simple work to be expelled and replaced with specialized work. Acemoglu and Autor [51] have documented that in the transient period machines mainly replace routine work, both cognitive (e.g., bookkeeping) and manual (e.g., assembly line), but not non-repetitive work, whether cognitive (e.g., finance) or manual (e.g., hair cutting).
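Under the stated assumptions (θ1 → 1, so that l1* = l2* = θ0 − θ2 g from Eq. (4.61)), the rate g̃* of Eq. (4.65) can be compared numerically with the rate g* of Eq. (4.50) evaluated at the stationary-state work content l1 = l2 = θ0. All parameter values are hypothetical; the comparison is an illustration, not an estimate.

import math

# Hypothetical parameters
c1, c3 = 0.6, 0.9
l3 = 0.5
v1, v2 = 10.0, 6.0
theta0, theta2 = 0.8, 0.5   # l_i* = theta0 - theta2*g for sectors 1 and 2 (Eq. 4.61 with theta1 -> 1)

def g_star_no_progress(l12):
    # Eq. (4.50) with l1 = l2 = l12
    half = 0.5 * c1 * l12 / (c3 * l3 * v2)
    return -half + math.sqrt(half**2 + (1 - c1 * l12) / (c3 * l3 * v1 * v2))

def g_star_with_progress():
    # Eq. (4.65); requires c3*l3*v2 > c1*theta2 and c1*theta0 < 1
    A = c3 * l3 * v2 - c1 * theta2
    half = c1 * (theta0 * v1 - theta2) / (2 * v1 * A)
    return -half + math.sqrt(half**2 + (1 - c1 * theta0) / (v1 * A))

print(f"g*  (fixed labor coefficients l1 = l2 = theta0): {g_star_no_progress(theta0):.4f}")
print(f"g~* (endogenous technical progress):             {g_star_with_progress():.4f}")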
Long-Term Trends: Complete Automation and Diffusion of Digital Goods

Some economists [52] show an underlying optimism, since they see only the supply side of this new industrial revolution, neglecting the risks associated with the massive replacement of labor and the consequent fall in consumption. When the
machines completely replace both simple and specialized work, that is, when the amount of work per unit of product tends to vanish, the system (4.48) expressed in physical units is simplified by setting l1 = l2 → 0:

Sector 1 (consumer goods): R12 = c3 l3 R32
Sector 2 (machines T1): R12 = 1/(g v1)
Sector 3 (machines T2): R32 = g v2   (4.66)
In this final phase, sector 1 produces all consumer goods, including personal services, purchased only by the workers of sector 3. The workers of the other two sectors, without means of subsistence, would be destined to perish, unless the State provides for their maintenance with a subsidy, comparable to the universal basic income proposed to reduce inequalities [53, 54]. Adding the suffixes I and II to associate the growth rates with the respective machine era, the solution of system (4.66) results in an equilibrium growth rate g*II greater than g*I (for simplicity, without technical progress), expressed by (4.50):

g*II = 1/√(c3 l3 v1 v2) > g*I ⇐ g*II l2 + l1/v1 > 0
(4.67)
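The jump in the potential growth rate at complete automation, Eq. (4.67), can be checked against the first-era rate of Eq. (4.50) using the same hypothetical coefficients as in the earlier sketch.

import math

# Hypothetical coefficients, as in the earlier sketch
c1, c3 = 0.6, 0.9
l1, l2, l3 = 0.8, 0.6, 0.5
v1, v2 = 10.0, 6.0

# First era, Eq. (4.50)
half = 0.5 * c1 * l2 / (c3 * l3 * v2)
g_I = -half + math.sqrt(half**2 + (1 - c1 * l1) / (c3 * l3 * v1 * v2))

# Second era with l1 = l2 -> 0, Eq. (4.67)
g_II = 1.0 / math.sqrt(c3 * l3 * v1 * v2)

print(f"g*_I  = {g_I:.4f}")
print(f"g*_II = {g_II:.4f}   (greater, since g*_II*l2 + l1/v1 > 0)")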
Let us now reformulate the model (4.56), expressed in value, by placing l1 = l2 → 0 and ω = 0, since the nonexistence of simple work excludes the payment of real wages to workers:

Sector 1 (consumer goods): r p21 v1 = 1
Sector 2 (machines T1): r p31 v2 = p21
Sector 3 (machines T2): ζ l3 = p31   (4.68)
from which the profit rate is obtained, equal to the capital growth rate (4.67), being c3 = ζ:

Z = ζ l3 v1 v2 rII² − 1 = 0 → rII* = 1/√(ζ l3 v1 v2) > rI* ⇐ rII* l2 + l1/v1 > 0
(4.69)
The consequent greater surplus will be acquired by the owners of the machines and destined for accumulation, thus allowing a higher growth rate, with the same capital/product ratio. The relation (4.69) also shows that in the final phase of complete automation the trade-off between the rate of profit and real wages disappears, since one of the two antagonistic parties in wage bargaining is expelled from the production processes. Only workers in sector 3 remain active, dedicated to the production of digital goods (hardware and software) or char-
acterized by high artistic, scientific, and operational skills (researchers, innovators, entrepreneurs, and managers), who receive a remuneration that is no longer constant (as in the first era), but variable according to their success in the market. In this final phase, therefore, another conflict arises over the distribution of income, between the class of innovators/entrepreneurs and the class of capitalists. The trade-off is represented by a decreasing and convex curve:

(∂Z/∂ζ) dζ + (∂Z/∂rII) drII = 0 → drII/dζ = −rII/(2ζ) < 0,  d²rII/dζ² = −(1/(2ζ))(drII/dζ − rII/ζ) > 0
The bargaining between the parties is however limited to only two subjects: the owners of capital and the managers, who are nevertheless destined to disappear, because machines equipped with artificial intelligence are able to process all the available information and can make better decisions than humans [55, 56], without the need for incentives to align their goals with those of the owners. Innovators, artists, and scientists can obtain higher compensation at the expense of the capitalists only if their innovative products and their capabilities are successful on the market. Once the division between the two new social classes has been resolved, the system (4.68) makes it possible to determine the relative prices. To determine the absolute quantities of the products, we assume that, in the final stage, the complex work of sector 3 is less than the total workforce of the first era (0 < θ3 < 1):

L1 = L2 = 0 ⇒ L3 = θ3 L̄ < L̄
(4.70)
The quantities of capital and production and the amount of profits in the second era of machines will therefore be lower than those of the first era. If the rational objective of the capitalists is not the maximization of the profit rate r, but the pursuit of maximum profit (Πi = r Ki), then the final phase will not be appreciated by the owners of capital. Avoiding the implosion is a priority for the whole population.
The Transient: Inequality and Unemployment

The increase in productivity induced by automation implies greater competitiveness on domestic and international markets. The associated reduction in the quantity of work per unit of product also leads to the loss of workers' bargaining power, with a consequent drop in wages, strengthening competitiveness in a circle that is virtuous for capital owners. This entails: (a) the progressive concentration of wealth in a small part of the population, thus producing the two classes that dominate the second era of machines (the traditional class of capital owners and the emerging class of superstars of the digital economy); (b) an increase in unemployment in the absence of compensatory interventions by the State in support of aggregate demand.
With regard to inequality, the OECD [57] notes that the wages/GDP ratio fell from 1980 to 2010 by 13 points in France and Japan (from 76% to 63% and from 78% to 65%, respectively), by almost 10 points in the USA (from 70% to about 60%), by 8 points in Italy (from 70% to 62%), and by 5 points in the United Kingdom (from 75% to 70%). World Inequality Database data [58] reinforce this picture for the USA: from 1980 to 2016 the income share of the richest 1% of the population doubled (from 10% in 1980 to 20% in 2016), while that of the poorest 50% fell from 20% to 13%. According to the trickle-down theory, an increase in the income and wealth of the richest 1% also benefits the poorest 50%, because innovations reduce the costs of the goods produced [59]. This theory is not confirmed by empirical evidence, because the real income of this part of the population has decreased, reducing the consumption of quality goods and forcing them to choose low-priced substitutes [60]. The myth of social mobility has also been dispelled: those born poor remain poor for many generations [61]. As shown by OECD data [62], in the USA, Great Britain, and Italy it takes on average five generations for the children of a low-income family to reach an average income; in France and Germany mobility is lower (six generations); only the Scandinavian countries are characterized by high upward social mobility (two to three generations). The inequality of wealth is even more striking. The situation in Italy is emblematic: the division into deciles of Italian financial wealth [63] shows that in 2016 the wealthiest 10% of families owned 52% of financial assets, while the poorest 50% of families owned just over 10%, essentially bank and similar deposits.
$$\Delta l_i = l_i(t) - l_i(t-1) = -\theta_2\, g \;\;\rightarrow\;\; \lim_{t \to T} l_i(t) = l_i(t-1) - \theta_2\, g = 0 \tag{4.71}$$

The time needed to converge toward this final phase (indicated by T) depends both on the actual growth rate (g ≤ g*) of the economy and on the effect θ2 that this growth exerts on labor productivity. If the automation incorporated in capital is small (θ2 → 0), the dynamics leading to the euthanasia of work will be slow. If instead it is high (θ2 > 0) and even growing (dθ2/dt > 0), convergence will be rapid. In the transitory period, therefore, the simple work of the first two sectors is gradually expelled, at an increasing pace, from the production of goods and of services and replaced by a reduced number of skilled workers, who are in turn destined to be eliminated subsequently.
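As a purely illustrative reading of relation (4.71), the following Python sketch simulates how the labor content per unit of product shrinks to zero and how the convergence time T depends on θ2 and g; the parameter values are invented for the example and do not come from the chapter.

# Illustrative simulation of relation (4.71): l_i(t) = l_i(t-1) - theta2 * g.
# All parameter values are hypothetical.

def periods_to_zero(l0: float, theta2: float, g: float) -> int:
    """Number of periods T until the labor content per unit of product reaches zero."""
    l, t = l0, 0
    while l > 0:
        l -= theta2 * g   # each period, automation embodied in new capital removes theta2*g of labor content
        t += 1
    return t

l0 = 1.0  # initial labor content per unit of product (normalized, hypothetical)
for theta2 in (0.05, 0.2, 0.5):       # effect of capital growth on labor productivity
    for g in (0.01, 0.03):            # actual growth rate of the economy (g <= g*)
        print(f"theta2={theta2:.2f}, g={g:.2f} -> T = {periods_to_zero(l0, theta2, g)} periods")

A small θ2 or a low growth rate stretches T over many periods, while a high and growing θ2 makes the expulsion of simple work rapid, as argued in the text.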
During this phase, the demand for consumer goods will gradually decrease, creating a progressive excess of supply of goods in sector 1, which will affect the demand for machines in sector 2 and therefore the production of machines in sector 3. Aggregate demand will continually be insufficient to absorb the productive capacity of the economic system (the effective growth rate will be lower than the potential one: g < g*), leading to a progressive drop in production. As mentioned, exogenous support from the State will be needed to avoid recessions: expansive public expenditure, financed with taxes on the capital and income of those who are lucky enough to still have a job. The system (4.52) must therefore be completed by adding a sector 4, which summarizes the budget of the Public Administration (PA):

$$\text{Sector 4 (PA)}:\quad w\,(N - L_{12}) = \tau_w w N + \tau_z z L_3 + \tau_k \left(p_2 K_1 + p_3 K_2\right) \tag{4.72}$$

where N is the supply of simple and specialized work (kept separate from the complex work L3) and L12 is the demand (employees) of the first two sectors; w(N − L12) therefore covers both the public transfers to the unemployed U and the wages of the employees L4 of the PA, so that N − L12 = U + L4, assuming everyone is paid the same salary; τw wN is the levy on the wages received by the whole labor force, taxed at the rate τw; τz zL3 are the taxes on sector 3 revenues, to which a rate τz > τw is applied; and τk (p2 K1 + p3 K2) is the tax levy on the total value of capital, taxed at the rate τk. From relation (4.72) we derive the deficit (DPA) of the public budget as the difference between expenditure and revenue:
$$\mathrm{DPA} = w L_4 + w U - \left[\tau_w w N + \tau_z z L_3 + \tau_k \left(p_2 K_1 + p_3 K_2\right)\right] \tag{4.73}$$

Assuming that the public budget must be kept in balance over the long term, the State can choose a combination of rates that respects the following balance relationship, which simplifies in the final phase (L = 0):

$$\tau_k = \frac{w\left[(1-\tau_w)\,N - L\right] - \tau_z z L_3}{p_2 K_1 + p_3 K_2} \;\;\rightarrow\;\; \frac{w\,(1-\tau_w)\,N - \tau_z z L_3}{p_2 K_1 + p_3 K_2} \tag{4.74}$$

The State will face a linear trade-off between the taxation of capital and the taxation of third-sector workers' incomes in order to be able to pay a salary to the unemployed and to public administration employees. This would allow capitalism to continue to expand to its potential level, avoiding a downward spiral that would be disadvantageous for all.
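To make the balance condition (4.74) concrete, here is a minimal Python sketch in which all numerical values (wages, employment, sector 3 revenues, capital stocks, tax rates) are invented for the example and do not come from the chapter; it computes the capital tax rate τk that balances the PA budget for given τw and τz.

# Hypothetical illustration of the budget-balance relation (4.74); all numbers are invented.

def balancing_capital_tax(w, N, L, tau_w, tau_z, z, L3, p2K1, p3K2):
    """Capital tax rate tau_k that balances the PA budget, as in relation (4.74)."""
    uncovered = w * ((1 - tau_w) * N - L) - tau_z * z * L3  # expenditure not covered by wage and revenue taxes
    return uncovered / (p2K1 + p3K2)

# Transitory phase: some simple work L is still employed in sectors 1 and 2.
print(balancing_capital_tax(w=1.0, N=100, L=40, tau_w=0.2, tau_z=0.35, z=2.0, L3=10, p2K1=300, p3K2=200))

# Final phase (L = 0): the burden shifts entirely to capital and sector 3 incomes.
print(balancing_capital_tax(w=1.0, N=100, L=0, tau_w=0.2, tau_z=0.35, z=2.0, L3=10, p2K1=300, p3K2=200))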
Already in the early twentieth century, forecasts for the future of capitalism contrasted the optimism of Clark [64], who considered the phenomenon of unemployment transitory, with the realism of Keynes [65] and Meade [66], convinced that mechanization would result in permanent technological unemployment, an idea taken up by Leontief and Duchin [67], who however envisaged a nontraumatic period in which governments would be able to manage change. Some scenarios are possible but not desirable: chain bankruptcies of micro and small enterprises that do not automate their production; activities outside the market and the law in order to survive; forms of Luddism; the risk of the collapse of democracy. It is therefore necessary for the State to intervene with economic policies that preserve capitalism, the market, and democratic institutions, maintaining their merits (incentives for efficiency, decentralized allocation through exchange, an environment conducive to development) and removing their defects (the disparity between those who own capital and talent and those who own only their workforce).
Artificial Intelligence
The traumatic effects induced by automation are amplified by the spread of machine learning (ML) and AI (machines capable of performing tasks that normally require human intelligence). In this regard, opinions are considerably heterogeneous. At one extreme, some think that AI will replace human labor in every respect. At the other end, the dominant economic theory considers AI a simple support of human work [68–70], since it requires an increase in specialized work greater than the decrease in simple work [71–73] and creates jobs both in the new-technologies sector [29, 74, 75] and among self-employed workers [76], while however creating a wage gap between low-level and skilled workers [76, 77]. AI would also reduce production costs and sales prices, stimulating demand for goods and creating more jobs [78–80], based on the hypothesis (not empirically verified for the poor) of a high elasticity of demand [81]. Agrawal et al. [54] emphasize instead the role of AI in supporting research, increasing the probability of innovations. In our model, where sector 3 is present to deliver complex work, we have adopted an intermediate scenario of partial replacement of humans, since AI is able to perform only certain tasks better: these also include mid-level monitoring tasks, leading to a migration toward flatter and more decentralized organizational structures [82], corroborating with econometric verification the analysis carried out in Sect. 4.2. Creative and strategic work and social interaction should remain, even if AI could support these activities [83].
ML and AI also have potential implications in terms of competition: the cost structure typical of capital-intensive (labor-saving) technologies, with high fixed costs and significant economies of scale, which in many cases are combined with strong network externalities, could leave only one dominant company or a limited number of companies [84]. Reduced competition can slow down the pace of innovation, but algorithms can make it easier for consumers to compare different products and thereby promote competition [85, 86]. A crucial aspect is the use of data: lower privacy could help buyers evaluate product characteristics and encourage high-quality production. However, Jin [87] notes that artificial intelligence exacerbates some privacy issues, thus favoring sellers at the expense of consumers.
4.7 Final Comments
Industrial automation is characterized by the predominance of capital over human labor, increasing productivity and the rate of growth of production. Automation also plays a positive role in reducing the costs of labor control, making monetary incentives useless, both the direct incentives to production workers and the indirect ones to the supervisors employed to ensure maximum commitment of the personnel. It is automation itself that dictates the pace of production and forces workers to respect it. Despite these positive effects, the increase in capital intensity implies a greater rigidity of the cost structure: a higher cost of capital, in line with the higher value of the investment, and the transformation of some variable costs (those of labor) into fixed costs. This induces a greater variance of profit in relation to production volumes and a greater risk for automated systems. The trade-off between return and automation risk encourages companies to resort to capital-intensive systems in cases where the use of a high production capacity is envisaged. New digital technologies have their most significant effects at the macroeconomic level. In the short term they differentiate companies based on the technologies adopted (more or less efficient), with significant consequences for employment and inflation, creating stagflation effects in certain situations and ensuring full employment of resources only in particular cases. In the long run, new technologies increase the growth rate of the economic system, but also the unemployment rate and the inequalities that economic policy must counter. In the first era of machines, which began with the Industrial Revolution and continued until the end of the twentieth century, machines played a role that was partly a substitute for and partly complementary to human work. Workers expelled from the production process found their place in other production activities, typically services.
In the second era, machines have gradually lost their complementarity with human activity, accentuating their role as replacements, first in industry and then in services; with the spread of increasingly automated technologies, unemployment and inequality become endemic phenomena. It follows that the adoption of public policies to support aggregate demand will play a key role in guaranteeing the growth of the economic system while balancing the costs and benefits of automation [88]; it is therefore precisely at the political level that the correct solution must be found [89]. The new technologies will be able to ensure greater wellbeing, through better diagnostic methods, greater safety in transport and traffic, widespread dissemination of innovative products, liberation from tiring work, greater availability of free time, etc. However, these benefits come with risks that could make us regret the first machine age. Some could use the new technologies in perverse ways, destroying people's privacy and undermining companies interconnected through the digital network [29]. Public resources need to be invested to counter these harmful effects. Other effects of digital innovations are of a psychological nature, concerning labor and leisure.
4.7.1 Psychological Benefits of Labor
Labor is not just a sacrifice, as claimed by neoclassical economists, for whom wages are the reward for giving up free time. Regular labor is also dignity, self-esteem, pride, social commitment, medicine against boredom and depression, discipline, and organization of one’s time. The loss of a job is a misfortune, which predisposes the individual to physical and mental illnesses and the failure of family relationships, because one feels excluded from the active part of society. A human without a regular job or with an irregular job lives in worse mental conditions than a poor but regularly employed individual, because he/she loses the spatial and temporal reference of daily life [90]. Ensuring a job for the unemployed should, therefore, be the main objective of a policy maker who intends to pursue collective well-being. Our model contemplates the euthanasia of simple work and the persistence of the class of artists, innovators and entrepreneurs in the final phase of capitalism. But how many people can aspire to be part of this small circle? If we are convinced that only a portion of the population has the necessary skills to be a creative, a researcher, or an entrepreneur, even if it does not rise to the heights of a superstar, the State should create a series of social activities, useful to the community, carried out by people and not by robots, in exchange for a guaranteed minimum wage. In this case, those who want to commit themselves can provide socially useful work and those who want to laze can enjoy their free time.
At this point another question arises: how will humanity use the abundance of free time?
4.7.2 Use of Free Time
To analyze this problem, we use a rational choice model [91] which considers two uses of free time: (a) an active use (spent in study and research aimed at enriching one’s knowledge), which involves low monetary costs, but imposes a strongly increasing subjective cost, due to the increasingly intense effort, induced by the diminishing returns of learning; the psychological cost Caf of the active time Ta can be expressed as an increasing function at increasing rates, with a constant that measures the monetary cost Ca of the tools required to carry out this activity
$$C_{af} = C_a + f_a(T_a); \qquad df_a/dT_a > 0,\;\; d^2 f_a/dT_a^2 > 0 \tag{4.75}$$
(b) a passive use, characterized by a subjective cost which is also increasing, due to the boredom induced by repetitiveness and to the dissatisfaction of wasting one's available time; the psychological cost Cpf of the passive time Tp can be formalized through a similar function, with a constant monetary cost Cp to acquire entertainment tools:

$$C_{pf} = C_p + f_p(T_p); \qquad df_p/dT_p > 0,\;\; d^2 f_p/dT_p^2 > 0 \tag{4.76}$$

The problem of rational choice can be formulated as the minimization of the cost of the time devoted to the two activities. The overall psychological cost Cf of free time is obtained by adding relations (4.75) and (4.76); its minimization must respect the constraint of the available free time T:

$$\min\; C_f = C_{af} + C_{pf} = C_a + C_p + f_a(T_a) + f_p(T_p) \quad \text{s.t.}\;\; T_a + T_p \le T \tag{4.77}$$
The optimal solution implies the cancellation of the differential and the equality of the marginal costs:

$$dC_f = \frac{\partial f_a}{\partial T_a}\,dT_a + \frac{\partial f_p}{\partial T_p}\,dT_p = 0 \;\rightarrow\; -\frac{\partial f_a/\partial T_a}{\partial f_p/\partial T_p} = \frac{dT_p}{dT_a} = -1 \;\rightarrow\; \frac{\partial f_a}{\partial T_a} = \frac{\partial f_p}{\partial T_p} \tag{4.78}$$
In this way the optimal quantities of active and passive free time, Ta* and Tp*, are obtained.
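The optimality condition (4.78) can be checked numerically. The following Python sketch uses hypothetical convex cost functions and invented parameter values (they are not taken from the chapter) and finds the split of free time that minimizes (4.77); at the optimum the two marginal costs are approximately equal. Flattening fp and lowering Cp, as digital goods do in the discussion that follows, shifts the optimum toward passive time.

# Minimal numerical illustration of the free-time allocation problem (4.75)-(4.78).
# Functional forms and numbers are invented for the example.

def f_a(Ta):          # subjective cost of active time (increasing at an increasing rate)
    return 0.8 * Ta ** 2

def f_p(Tp):          # subjective cost of passive time (boredom, also convex)
    return 0.5 * Tp ** 2

T = 10.0              # total available free time
Ca, Cp = 1.0, 2.0     # constant monetary costs of the tools for each activity

# Grid search over the split Ta + Tp = T (the constraint is binding at the optimum).
cost, Ta_star = min((Ca + Cp + f_a(Ta) + f_p(T - Ta), Ta)
                    for Ta in (i * T / 1000 for i in range(1001)))
Tp_star = T - Ta_star
print(f"Ta* = {Ta_star:.2f}, Tp* = {Tp_star:.2f}, minimum cost = {cost:.2f}")

eps = 1e-4            # the marginal costs are (approximately) equal, as in (4.78)
print((f_a(Ta_star + eps) - f_a(Ta_star)) / eps, (f_p(Tp_star + eps) - f_p(Tp_star)) / eps)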
See in Chapter 30 related details on economic rationalization of automation projects and quality of service.
Fig. 4.4 Optimal distribution of free time in the absence (a) and in the presence (b) of digital goods
The psychological cost of active time is generally more intense than that of passive time, but in the absence of digital goods (games and multimedia content, chats, social sites, etc.) its monetary cost is lower (Ca < Cp), so it may turn out that Ta* ≷ Tp*. In quadrant (a) of Fig. 4.4 we have placed on the ordinates the costs of the use of free time and on the abscissas the hours used for active time (measured from left to right) and passive time (from right to left), whose constraint is given by the vertical line T. The two curves growing at increasing rates, representing the marginal costs, intersect at point E, the optimal distribution of the available free time. We have hypothesized a solution, presumably optimistic, in which Ta* > Tp*. At this point we introduce digital goods, which the second era of machines makes available in large quantities and whose monetary cost has become low and, in some cases, even zero. Digital goods also reduce the marginal costs of passive free time: the variety of services and the possibility of creating interactions of the subject with sounds, images, and other individuals on the Internet, in a practically infinite combination, make it possible to drastically reduce the boredom and the discomfort associated with prolonged use. In quadrant (b) of Fig. 4.4 we have introduced digital goods, shifted the monetary cost Cp downwards, and flattened the curvature of the marginal cost of passive leisure. The effects on the cost of active free time are, on the other hand, much smaller and are neglected for simplicity. The availability of digital goods therefore shifts the balance in favor of an intense use of passive free time, which could lead to compulsive behavior and isolation of individuals. It is necessary to invest in educational programs that stimulate intellectual curiosity, avoiding the perverse use of free time. This pessimistic conclusion contrasts with the idealistic ones proposed by Marx [92] and by Keynes [65] of a humanity dedicated to culture, arts, and sciences, but these great economists could not imagine the explosion of digital products of the second era of the machines.

References
1. Villa, A. (ed.): Managing Cooperation in Supply Network Structures and Small or Medium-Sized Enterprises: Main Criteria and Tools for Managers. Springer, London (2011) 2. EASME/COSME: Annual Report on European SMEs 2015/2016 – SME recovery continues, SME Performance Review 2015/2016 (2016) 3. Woodford, M.: Convergence in macroeconomics. Elements of the new synthesis. Am. Econ. J. Macroecon. 1(1), 267–279 (2009) 4. Romer, P.: Endogenous technological change. J. Polit. Econ. 98(5), 71–102 (1990) 5. Shaikh, A.: Laws of production and Laws of algebra: the humbug production function. Rev. Econ. Stat. 56, 115–120 (1974) 6. Samuelson, P.A.: Foundations of Economic Analysis. Harvard University Press, Cambridge, MA (1947) 7. Shephard, R.W.: Cost and Production Functions. Princeton University Press, Princeton (1953) 8. Shephard, R.W.: Theory of Cost and Production Functions. Princeton University Press, Princeton (1970) 9. Diewert, W.E.: Duality approaches to microeconomic theory. In: Arrow, K.J., Intriligator, M.D. (eds.) Handbook of Mathematical Economics, vol. II. North-Holland, Amsterdam (1982) 10. Weiss, A.: Efficiency Wages. Princeton University Press, Princeton (1990) 11. Stiglitz, J.E.: Wage determination and unemployment in LDC’s: the labor turnover model. Q. J. Econ. 88(2), 194–227 (1974) 12. Salop, S.: A model of the natural rate of unemployment. Am. Econ. Rev. 69(2), 117–112 (1979) 13. Weiss, A.: Job queues and layoffs in labor markets with flexible wages. J. Polit. Econ. 88, 526–538 (1980) 14. Shapiro, C., Stiglitz, J.E.: Equilibrium unemployment as a worker discipline device. Am. Econ. Rev. 74(3), 433–444 (1984) 15. Calvo, G.A.: The inefficiency of unemployment: the supervision perspective. Q. J. Econ. 100, 373–387 (1985) 16. Akerlof, G.A.: Labor contracts as partial gift exchange. Q. J. Econ. 97(4), 543–569 (1982) 17. Akerlof, G.A.: Gift exchange and efficiency-wage theory: four views. Am. Econ. Rev. 74, 79–83 (1984) 18. Miyazaki, H.: Work, norms and involontary unemployment. Q. J. Econ. 99, 297–311 (1984) 19. Hicks, J.R.: The Crisis in Keynesian Economics. Basil Blackwell & Mott, Oxford, UK (1974) 20. Keynes, J.M.: The general theory of employment, interest and money. In: The Collected Writings of John Maynard Keynes, vol. VII, p. 1973. MacMillan, London (1936) 21. Keynes, J.M.: Relative movements of real wages and output. Econ. J. 49, 34–51 (1939) 22. Schumpeter, J.A.: The Theory of Economic Development. An Inquiry into Profits, Capital, Credit, Interest, and the Business Cycle. Harvard University Press, Cambridge, MA (1934) 23. Schumpeter, J.A.: Business Cycles: A Theoretical, Historical and Statistical Analysis of the Capitalist Process, vol. 2. McGraw – Hill Book Company, New York/London (1939) 24. Ravazzi, P., Abrardi, L.: Crescita economica e prospettive del capitalismo. Gli effetti delle tecnologie digitali. Carocci, Roma (2019) 25. Morris, I.: Why the West Rules – For Now: The Patterns of History, and What They Reveal About the Future. Farrar, Straus and Giroux, New York (2010)
100 26. Smith, A.: An Inquiry into the Nature and Causes of the Wealth of Nations, vol. 1994. Modern Library, The Random House Publishing Group, New York (1776) 27. Ricardo, D.: The principles of political economy, and taxation. In: Sraffa, P. (ed.) The Works and Correspondences of David Ricardo, vol. I, pp. 1951–1973. Cambridge University Press (1817) 28. Marx, K.: Das Kapital, Kritik der politischen Oekonomie, vol. I, II (1885), III (1894). In: Marx-Engels Werke, vol. XXIII, pp. 1956– 1968. Institut für Marxismus-Leninismus (1867) 29. Brynjolfsson, E., Mcafee, A.: The Second Machine Age. Work, Progress, and Prosperity in a Time of Brilliant Technologies. W.W. Norton, New York (2014) 30. Leontief, W.W.: The Structure of American Economy 1919–1929. Harvard University Press, Cambridge, MA (1941) 31. Leontief, W.W.: The Structure of American Economy 1919–1939. Oxford University Press, New York (1951) 32. Sraffa, P.: Production of Commodities by Means of Commodities, Prelude to a Critique of Economic Theory. Cambridge University Press, Cambridge (1960) 33. Robinson, J.: The Accumulation of Capital. Macmillan, London (1956) 34. Harrod, R.F.: An essay in dynamic theory. Econ. J. 49, 14–33 (1939) 35. Domar, E.D.: The “burden” of the debt and the national income. Am. Econ. Rev. 34, 798–827 (1944) 36. Domar, E.D.: Capital expansion, rate of growth, and employment. Econometrica. XIV, 137–147 (1946) 37. Domar, E.D.: Expansion and employment. Am. Econ. Rev. 37, 34– 35 (1947) 38. Levy, F., Murnane, R.J.: The New Division of Labor: How Computers Are Creating the Next Job Market. Princeton University Press, Princeton (2004) 39. Forni, A.: Robot: la nuova era. Vivere, lavorare, investire nella società robotica di domani. GEDI Gruppo Editoriale S.p.A, Torino (2016) 40. Frey, C.B., Osborne, M.A.: The Future of Employment: How susceptible are Jobs to Computerisation? Technological Forecasting and Social Change, vol. 114. Elsevier (2017) 41. Bostrom, N.: Superintelligence: Paths, Dangers, Strategies. Oxford University Press, Oxford (2014) 42. Arthur, W.B.: The Nature of Technology: What It Is and How It Evolves. Free Press, New York (2009) 43. Cockburn, L.M., Henderson, R., Stern, S.: The impact of artificial intelligence on innovation. In: NBER Working Paper, No. 24449 (2018) 44. Aghion, P., Bloom, N., Blundell, R.: Competition and innovation: an inverted U-relationship. Q. J. Econ. 120(2), 701–728 (2005) 45. Kaldor, N.: Capital accumulation and economic growth. In: Lutz, F.A., Hague, D.C. (eds.) The Theory of Capital, pp. 177–222. St. Martin Press (1961) 46. Acemoglu, D., Restrepo, P.: Robots and jobs: evidence from US labor markets. In: NBER Working Paper, 2328 (2017) 47. Brynjolfsson, E., Hitt, L.M.: Beyond computation: information technology, organizational transformation and business performance. J. Econ. Perspect. 14(4), 23–48 (2000) 48. Stiroh, K.J.: Information technology and the U.S. productivity revival: what do the industry data say? Am. Econ. Rev. 92(5), 1559– 1576 (2002) 49. Bresnahan, T.F., Brynjolfsson, E., Hitt, L.M.: Information technology, workplace organization, and the demand for skilled labor: firm-level evidence. Q. J. Econ. 117(1), 339–376 (2002) 50. Brynjolfsson, E., Hitt, L.M.: Computing productivity: firm-level evidence. Rev. Econ. Stat. 8(4), 793–808 (2003) 51. Acemoglu, D., Autor, D.: Skills, tasks and technologies: implications for employment and earnings. Handb. Labor Econ. 4, 1043– 1071 (2011)
P. Ravazzi and A. Villa 52. Weitzman, M.L.: Recombinant growth. Q. J. Econ. 113(2), 331– 360 (1998) 53. Korinek, A., Stiglitz, J.E.: Artificial intelligence and its implications for income distribution and unemployment. In: NBER Working Paper, 24174 (2017) 54. Agrawal, A., McHale, J., Oettl, A.: Finding needles in haystacks: artificial intelligence and recombinant growth. In: Agrawal, A., Gans, J., Goldfarb, A. (eds.) The Economics of Artificial Intelligence: An Agenda. University of Chicago Press, Chicago (2018) 55. Kahneman, D., Klein, G.: Conditions for intuitive expertise. A failure to disagree. Am. Psychol. 64(6), 515–526 (2009) 56. Kahneman, D.: Thinking, Fast and Slow. Farrar, Straus and Giroux, New York (2011) 57. OECD: An overview of growing income inequalities. In: OECD Countries: Main Findings. OECD (2011) 58. Alvaredo, F., Chancel, L., Piketty, T., Saez, E., Zucman, G.: World inequality report. In: WID World Conference (2018) 59. Mankiw, N.G.: Defending the one percent. J. Econ. Perspect. 27, 21 (2013) 60. Bernstein, J.: Three questions about consumer spending and the middle class. In: Bureau of Labor Statistics (2010) 61. Solon, G.: A Model of Intergenerational Mobility Variation over Time and Place. Cambridge University Press, Cambridge (2004) 62. OECD: A Broken Social Elevator? How to Promote Social Mobility. OECD (2018) 63. Banca D’italia: Indagine sui bilanci delle famiglie italiane, marzo (2018) 64. Clark, J.B.: Essentials of Economic Theory as Applied to Modern Problem of Industry and Public Policy. Macmillan, London (1915) 65. Keynes, J.M.: Economic possibilities for our grandchildren. In: Essays in Persuasion. Norton, New York (1930)., 1963 66. Meade, J.E.: Efficiency, Equality and the Ownership of Property. George Allen & Unwin Ltd., London (1964) (Routledge reprint) 67. Leontief, W.W., Duchin, F.: The Future Impact of Automation on Workers. Oxford University Press, New York (1986) 68. Zeira, J.: Workers, machines, and economic growth. Q. J. Econ. 113(4), 1091–1117 (1998) 69. Bessen, J.: Automation and jobs: when technology boosts employment. In: Boston Univ. Law and Economics Research, Paper No. 17-09 (2017) 70. Agrawal, A., Gans, J., Goldfarb, A.: Human judgement and AI pricing. In: NBER Working Paper, 24284 (2018b) 71. Acemoglu, D., Restrepo, P.: The race between machine and man: implications of technology for growth, factor shares and employment. Am. Econ. Rev. 108(6), 1488–1542 (2018a) 72. Acemoglu, D., Restrepo, P.: Modeling automation. In: NBER Working Paper, 24321 (2018b) 73. Acemoglu, D., Restrepo, P.: Artificial intelligence, automation and work. In: Agrawal, A., Gans, J., Goldfarb, A. (eds.) The Economics of Artificial Intelligence: An Agenda. University of Chicago Press, Chicago (2018c) 74. Autor, D.: Why are there still so many jobs? The history and future of workplace automation. J. Econ. Perspect. 29(3), 3–30 (2015) 75. Autor, D., Salomons, A.: Is automation labor-displacing? Productivity growth, employment, and the labor share. In: NBER Working Paper, 24871 (2018) 76. Tirole, J.: Economics for the Common Good. Princeton University Press, Princeton (2017) 77. Hemous, D., Olsen, M.: The Rise of the Machines: Automation, Horizontal Innovation and Income Inequality. University of Zurich (2016)., manuscript 78. Kotlikoff, L., Sachs, J.D.: Smart machines and long-term misery. In: NBER Working Paper, No. 18629 (2012) 79. Graetz, G., Michaels, G.: Robots at work. In: CEP Discussion Paper, vol. 133, (2015)
80. Nordhaus, W.: Are we approaching an economic singularity? Information technology and the future of economic growth. In: NBER Working Paper, No. 21547 (2015) 81. Aghion, P., Jones, B., Jones, C.: Artificial intelligence and economic growth. In: Agrawal, A., Gans, J., Goldfarb, A. (eds.) The Economics of Artificial Intelligence: An Agenda. University of Chicago Press, Chicago (2018) 82. Bloom, N., Garicano, L., Sadun, R., Van Reenen, J.: The distinct effects of information technology and communication technology on firm organization. Manag. Sci. 60(12), 2859–2858 (2014) 83. Boden, M.A.: Creativity and artificial intelligence. In: Artificial Intelligence, vol. 103, pp. 347–356 (1998) 84. Varian, H.: Artificial intelligence, economics, and industrial organization. In: Agrawal, A., Gans, J., Goldfarb, A. (eds.) The Economics of Artificial Intelligence: An Agenda. University of Chicago Press, Chicago (2018) 85. Chevalier, J.: Antitrust and artificial intelligence: discussion of varian. In: Agrawal, A., Gans, J., Goldfarb, A. (eds.) The Economics of Artificial Intelligence: An Agenda. University of Chicago Press, Chicago (2018) 86. Milgrom, P.R., Tadelis, S.: How Artificial intelligence and machine learning can impact market design. In: NBER Working Paper, 24282 (2018) 87. Jin, G.Z.: Artificial intelligence and consumer privacy. In: NBER Working Paper, 24253 (2018) 88. Agrawal, A., Gans, J., Goldfarb, A.: Economic policy for artificial intelligence. In: Innovation Policy and the Economy, vol. 19, pp. 139–159. The University of Chicago Press (2019) 89. Goolsbee, A.: Public policy in an AI economy. In: NBER Working Paper, 24653 (2018) 90. Wilson, W.J.: When Work Disappears: The World of the New Urban Poor. Vintage Books, New York (1996) 91. Ravazzi, P.: Effetti delle tecnologie informatiche e di comunicazione su occupazione e tempo libero. In: Lombardini, S. (ed.) Sviluppo tecnologico e disoccupazione: trasformazione della società. Accademia Nazionale dei Lincei, Roma (1998) 92. Marx, K.: Critique of the Gotha programme. In: Marx-Engels Selected Works, vol. 3, pp. 13–30. Progress Publishers, Moscow, 1970
Piercarlo Ravazzi graduated in Economics at the University of Rome, was professor of Economic Growth Theory and Political Economy at the University of Trieste, and was full professor of Analysis of Economic Systems at the Polytechnic of Turin from 1990 to 2017. In 2018–19 he taught Capitalism and New Technologies in the Management Engineering PhD program. He is the author of essays and books on economic growth, production economics, managerial finance, telecommunications economics, and industrial history.
Agostino Villa has been a former faculty professor of Politecnico di Torino since November 1, 2020. During his career, Villa acquired the official titles of Full Professor of Technologies and Production Systems and of Automatic Controls and Systems Theory. He is a member, past-president, and fellow of international scientific institutions, among which IFPR, IFAC, and IFIP, has been responsible for about 15 European or national research projects, and is the author of 10 books and more than 200 papers. In 2017, Villa founded the PMInnova Program, an agreement between Politecnico di Torino and Banca di Asti to support the innovation of small and mid-sized enterprises.
5 Trends in Automation

Christopher Ganz and Alf J. Isaksson
Contents
5.1 Introduction
5.2 Environment
5.2.1 Market Requirements
5.2.2 Applications
5.3 Current Trends
5.3.1 Digitalization
5.3.2 Communication
5.3.3 Collaborative Robots
5.3.4 Industrial AI
5.3.5 Virtual Models and Digital Twin
5.4 Outlook
5.4.1 Autonomous Industrial Systems
5.4.2 Collaborative Systems
5.4.3 New Applications
5.5 Conclusions
References
Abstract
The growing importance of digitalization in business also has its impact on automation. In this section, we will put this into the perspective of significant market drivers, and different views on aspects of automation, across the industrial value chain, or along the automation lifecycle. Key trends in digitalization are having an impact on automation. Making data available for more applications through the industrial Internet of Things (industrial IoT) and cyber physical systems will be covered. Such systems expand the traditional automation stack with more advanced (wired and wireless) communication technologies, and the computing power and storage available in a cloud further extends capacity for additional functionality. An important use of data is artificial intelligence. We will cover AI's impact on the industry, and how it plays into the automation functionality. The context of the data and its analysis is brought together in the concept of the digital twin that is covered as well. The availability of these technologies, from IoT to AI, allows for an evolution of automation toward more autonomous systems. Autonomy is today mostly seen in vehicles; we will put it into the context of the industrial automation system and will show how systems (among others, robots) will become more autonomous, and will get the capability to collaborate. This will open a whole new range of applications, enabled by a bright future of automation.

C. Ganz, C. Ganz Innovation Services, Zurich, Switzerland, e-mail: [email protected]
A. J. Isaksson, ABB AB, Corporate Research, Västerås, Sweden, e-mail: [email protected]

Keywords
Digitalization · Industrial Internet of Things · Cyber-physical systems · Collaborative robots · Industrial AI · Digital twin · Autonomous systems · Collaborative systems
5.1 Introduction
When this chapter was written for Edition 1 of the Springer "Handbook of Automation" more than a decade ago, automation technology was changing incrementally, adding new technologies to established concepts and architectures. Shortly after that, several developments, mostly in the consumer market (e.g., smartphones and cloud storage), started to take off and to have an impact also on industrial applications. Many of the trends described in Edition 1 are still steadily developing and remain important. Topics such as safety and
cyber security are technologies that can be seen as the "license to play," today even more than in the past. But unlike earlier developments, where a technology had an impact on a component of the architecture, the current technology developments are moving the coordinate systems of what is possible in several axes. Cloud computing and IoT, massively increasing computing power not only in the cloud but also in devices, ultra-low-cost smartphone sensors, wired (TSN) and wireless (5G) communication capacity, and advancements in machine learning (ML) open the door to new solutions for the traditional automation use cases. The different technologies build on each other and create a momentum that is beyond the incremental developments of the decades before. In the market, the impact has been recognized by labeling what is going on as the "Fourth Industrial Revolution" [1]. As we write, these technology trends start to show their impact in automation. While many plants are still operating on traditional automation structures, advancements of automation toward autonomy, incorporating artificial intelligence enabled by new levels of computing power, will not only lead to new architectures, but also to new applications in areas where automation levels are low today. In this chapter, we look into some environmental factors in the markets and lay out how some of the current technology trends will impact automation going forward. Comparisons with Edition 1 of the "Handbook of Automation" are made in the respective sections. Many of the trends identified there have materialized, and new trends are on the rise as we write. With all the dynamics in markets and technologies, this is an interesting time to look at trends in automation. At this time, it is much more difficult to predict where these trends will lead the automation industry in another decade. It will be interesting to watch.
5.2 Environment

Automation requirements are driven by the process (physical or business process) that is automated. This automated context itself is subject to global drivers and trends. The promising trends in automation are those that support these overall drivers. We will therefore have a closer look at markets and applications.

5.2.1 Market Requirements

There is no successful technology that does not meet the market requirements. Knowing the market requirements and the resulting trends is therefore one key input for the analysis.

Global Versus Local
Over the last few decades, economy of scale has led to ever larger factories manufacturing for the global market. These factories have followed the lowest labor rates and have concentrated in East Asia. Value chains today span the globe, tightly interrelating economies. These interdependencies come at a cost: disruptions anywhere in the chain quickly lead to downstream supply problems. Inventories along the supply chain help mitigate this risk, again at a cost. And the delay between production and consumption requires long-term planning and limits flexibility in production. Re-shoring production may address some of these issues. Re-shored plants produce for a more regional market and are therefore potentially smaller. Together with the potentially increased labor cost, higher productivity is required to remain competitive. Automation has a major role to play in such a scenario. Demographic trends in developed countries also put a limit on re-shoring: the workforce to handle the manufacturing may not be available or may not be willing to work in a dull, dirty, or dangerous environment. In such environments, automation is one way of scaling production cost effectively. The Economist has analyzed automation levels around the globe [2] and has found a correlation between the automation level and increased employment. A flourishing industry enabled by automation will create jobs the demographically available workforce is interested in, rather than destroy them.

Enterprise Cash Flow Optimization
Most industrial investment decisions are based on net present value (NPV) analysis, which compares the cash flow without the investment to the cash flow with it. An investment in a higher level of automation therefore needs to have a positive impact on cash flow. The cash flow calculation can be broken down into the following contributors:
• Sales/Top line: the ability to sell offerings, as well as the ability to produce what was sold. The volume of produced units in relation to the theoretical maximum capacity of a plant is often referred to as the overall equipment effectiveness (OEE).
• Operational cost (COGS – cost of goods sold – and OpEx – operational expenditure): the running production cost.
• CapEx – capital expenditure: the investments in equipment, buildings, etc.
• Working capital: the change in working capital, i.e., inventory, and changes in accounts receivable/payable.

The benefits of increasing the automation level often focus on increasing OEE only. However, the other factors listed in Fig. 5.1 may generate additional value. Furthermore, the automation solution comes at a cost (e.g., CapEx, or nonmonetary cost such as software maintenance efforts) which needs to be considered in the overall cash flow analysis as well.
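As a numerical illustration of the NPV logic above, the following Python sketch compares a plant's free cash flow with and without an automation investment; the cash-flow figures and the discount rate are hypothetical and invented purely for the example.

# Hypothetical NPV comparison; all figures are invented for illustration.

def npv(cash_flows, rate):
    """Net present value of yearly cash flows, with year 0 = today."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

rate = 0.08                                  # assumed discount rate
without = [0, 100, 100, 100, 100, 100]       # baseline free cash flow per year
with_inv = [-150, 150, 155, 160, 160, 160]   # CapEx in year 0, then higher OEE and lower OpEx

delta = npv(with_inv, rate) - npv(without, rate)
print(f"NPV difference of the automation investment: {delta:.1f}")
# The sign of the difference indicates whether the investment pays off at this discount rate.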
Fig. 5.1 Levers influencing cash flow and NPV
5.2.2 Applications
In order to further structure this section, we are now looking into it from a number of application angles, along different axes. Automation has different requirements and properties depending on key properties of these applications.
Industrial Value Chain
Automation requirements vary greatly between use cases and industry verticals. However, there are some commonalities along the industrial value chain (a simplified version is shown in Fig. 5.2). Raw material extraction and processing for use in industry and power is mostly done in continuous processes that produce large amounts of material. To optimize asset utilization, such plants mostly run 24/7 over a longer period of time. The material produced is very often a commodity. Differentiation in process industries is typically through price; therefore, process effectiveness is important. Industries that produce industrial equipment or consumer goods mostly generate discrete pieces of products. Factories may shut down more easily; some do over nights or weekends. Differentiation in these industries is through customer specification (capital goods) or product attractiveness (consumer goods). Flexibility in production is one driving requirement for automation there. Many plants within the value chain are hybrid, though, where a continuous process is paired with some discrete production steps (e.g., for packaging). The automation trends that we analyze in this chapter mostly touch on the industries depicted in Fig. 5.2. We do mention some additional applications in Sect. 5.4.3, but many service industries (financial, travel, entertainment, etc.) are outside the covered scope.
Fig. 5.2 Value chain across industries
Plants: Continuous Versus Discrete
The operational boundary conditions described in section "Industrial Value Chain" greatly drive the structure and capabilities of the automation system. While process plants are rarely changed after construction, discrete factories are more frequently reconfigured to produce new products. This has an impact on engineering, operations, and maintenance procedures. It also drives the overall automation level. The continuous flow of material and energy calls for a higher automation level for a process plant. This may require a higher engineering effort, which is justified because the automation application is only slightly modified as the plant or recipes change over time. The continuous and often tightly interconnected nature of the process also requires a control system architecture that has full control over the complete process, i.e., one that can monitor and control the plant centrally. This is the primary application of a distributed control system (DCS). The flexibility required in discrete manufacturing plants often does not justify the design and programming of a complex automation system, since the product changes more frequently. Furthermore, discrete manufacturing steps happen in manufacturing lines built from individual manufacturing cells or machines. Such machines are specifically designed for a particular process step, and the machine builder delivers them with all controls installed. The cells and machines are only loosely coupled, with buffers and intermediate storage reducing the interdependency between components. Automation is mostly done at the machine or cell level by use of programmable logic controllers (PLC). A supervisory function may be available in a manufacturing execution system (MES) or SCADA (supervisory control and data acquisition) system.

Service Industries
A whole range of industries does not show up in Fig. 5.2: service industries. Service industries are characterized by providing services to their customers. Examples are transportation, healthcare, and hospitality, but also communities and cities. To deliver their services, they mostly also employ industrial-size plants. But the plant outcome is not a product that is shipped to a customer, but a service that is executed for a customer. Compared to products, services cannot be preproduced, stored, or transported. Services are delivered instantaneously. The transaction between the service provider and the customer is executed simultaneously. In both process and discrete industries, the plant can mostly operate independently from the supply chain (upstream or downstream). Incoming material as well as produced output can be stored and shipped at a different time than the production is executed. Service industries typically reach beyond their own organization; they have to interact with the customer consuming the service.
Still, the service may rely on an industrial plant (e.g., wastewater treatment in a city, HVAC in a hotel, etc.). Such a plant often has to respond to an immediate request from the service customer.
Plant Lifecycle
Optimizing plant operations with advanced automation applications is definitely the area where an owner gets most of the operational benefits. Although the main benefit of an advanced automation system is with the plant owner (or operator), the automation system is very often not sold directly to that organization, but to an engineering, procurement, and construction (EPC) contractor instead. And for these customers, price is one of the top decision criteria. As automation systems are hardly ever sold off the shelf, but are engineered for a specific plant, engineering costs are a major portion of the price of an automation system. Engineering efficiency is therefore a key capability for the automation supplier. Corresponding tools and data management with models and digital twins will be discussed in Sect. 5.3.5. While discrete manufacturing plants are already built in a modular structure, this concept is more and more also coming to process plants. Engineering is done for the module or cell, while the overall automation connects the high-level components and orchestrates them. Furthermore, as we will discuss in Sect. 5.4.1, autonomous systems may lead toward automation systems that do not need to be preconfigured to the extent done today, but that can perceive the situation and react accordingly. The very long lifecycle of industrial plants compared to IT systems requires a long-term strategy to manage the lifecycle of the automation system accordingly. The total cost of ownership includes not only the initial installation cost, but also the cost to maintain the system hardware, operating system, and application code. Adaptation to new technology can hardly be done by replacing the old installation completely. Questions such as compatibility with already installed automation components, upgrade strategies, and integration of old and new components become important to obtain the optimal automation solution for the extended plant.
5.3 Current Trends

5.3.1 Digitalization
Digitalization, or digital transformation, is one of the key topics today that influences many businesses. Its goal is mostly to become more data driven and to run business processes more efficiently by using more data in decision and optimization routines. In automation in general, and in industrial automation in particular, data is an integral part. Automation takes
(measured) data, analyzes it, and turns it into action through actuators (see Fig. 5.12). Digitalization in industrial automation finds its representation in a number of trends that we will discuss in this section.
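As a toy illustration of this data-to-action loop, the following Python sketch closes a minimal monitoring loop; the setpoint, gain, and simulated sensor readings are invented and do not refer to any specific product.

# Minimal sense-analyze-act loop with invented values, purely for illustration.
import random

SETPOINT = 72.0          # hypothetical target value
GAIN = 0.5               # hypothetical proportional gain

def read_sensor():       # stand-in for a real measurement
    return SETPOINT + random.uniform(-5, 5)

def actuate(correction): # stand-in for writing to a real actuator
    print(f"apply correction: {correction:+.2f}")

for _ in range(5):
    measurement = read_sensor()      # take (measured) data
    error = SETPOINT - measurement   # analyze it
    actuate(GAIN * error)            # turn it into action through an actuator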
Industrial Internet of Things (IIoT)
The smartphone revolution gave the idea of the "Internet of Things" a massive boost. The concept originates in the 1980s and describes devices that are directly connected to the Internet. With communication, intelligence, and sensing built into the device, the technology that allowed the cost-effective production of IoT devices became available. Starting with home applications, surveillance cameras, lightbulbs, door locks, and home appliances became Internet accessible. Organizations such as the Industrial Internet Consortium (IIC) [3] or the German "Plattform Industrie 4.0" [4] are providing guidance on how to incorporate the Internet of Things in industrial environments. To arrive at a common understanding of what "Industrial IoT" means and how it shall be organized, participating industries have agreed on a number of references and artifacts that describe it. Of particular interest are the reference architectures that resulted from these activities. The Industrial Internet Consortium reference architecture [5] is structured along four "viewpoints": business, usage, functional, and implementation. They follow the hierarchy of requirements, where business requirements are cascaded down to the implementation. Cross-cutting aspects such as industry-vertical specifics or lifecycle views are considered inherently. Particular emphasis is on cyber security, where the IIC has published an Industrial Internet Security Framework [6].
A similar set of reference documents was published by the Plattform Industrie 4.0. Their Reference Architectural Model for Industry 4.0 (RAMI 4.0) [7] specifies layers similar to the IIC's viewpoints (business, functional, information, communication, integration, and asset), but covers lifecycle aspects (according to IEC 62890) and hierarchical levels (IEC 62264/61512) in a more structured way (Fig. 5.3). This hierarchical level axis of the RAMI 4.0 framework makes reference to automation system architectures. This is an essential aspect in many industrial Internet applications: unlike consumer systems, where a single device/sensor (smartphone, light bulb, or temperature sensor) is directly connected to an Internet platform, industrial installations typically have some levels of automation (DCS and PLC) installed. Sensors are integral parts of devices needed for control, and data is very often collected in a plant historian or MES system. Before connecting a device directly to the Internet, it needs to be checked whether the data is not already available in a historian database and can be fetched from there. In many cases, it is also not desired to allow direct access, since the real-time control requirements take precedence over data collection. Industrial installations also provide more on-site computing capacity than consumer installations. Whether data is exported to the cloud and analyzed there or whether some analysis can be done on-site on a local cloud-like infrastructure ("fog computing") is a design decision that depends on the plant's use cases and requirements [8]. While short response time and low latency requirements are preferably implemented near the physical process, i.e., on-site, high computing requirements, e.g., training of an artificial intelligence (AI) network, require computing capacity that is typically only available in a cloud. If all the data that is analyzed is available on-site, moving it to a cloud may create significant communication overhead, i.e., favor local analytics. If data from many sites need to be analyzed, e.g., to analyze faults across a fleet of installed products, it is easier to bring the data to the cloud than to one site.
Fig. 5.3 Reference Architectural Model for Industry 4.0 (RAMI4.0). (© Plattform Industrie 4.0 and ZVEI)
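The design decision just described can be caricatured as a simple rule of thumb. The following Python sketch, with invented thresholds, only illustrates the trade-off between latency requirements, data volume, and fleet-wide scope; it is not any vendor's actual placement logic.

# Toy placement heuristic for an analytics workload; thresholds are invented.

def placement(required_latency_ms: float, daily_data_gb: float, multi_site: bool) -> str:
    """Suggest where to run an analytics function: 'edge' (on-site) or 'cloud'."""
    if required_latency_ms < 100:   # tight real-time needs stay near the process
        return "edge"
    if multi_site:                  # fleet-wide analysis is easier with pooled cloud data
        return "cloud"
    if daily_data_gb > 50:          # shipping large local data sets to the cloud is costly
        return "edge"
    return "cloud"

print(placement(10, 5, False))      # e.g., low-latency condition monitoring -> edge
print(placement(5000, 2, True))     # e.g., fleet-wide fault analysis -> cloud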
Industrial Internet of Services
In many industrial Internet concepts, the focus is on data collection and storage, and on some of the data analytics. But even if the concepts mention "actionable insights," the action that follows the insight is hardly ever mentioned. But without the action resulting from the insight from data and analytics, there is no gain. To create value, the insights gained digitally need to reach the physical world again. If analysis reveals where bottlenecks are, where energy is lost, or where equipment is not performing as desired, these problems remain. Only when things are changed in the physical world can the situation improve. These changes are very often done by the plant personnel, or by service technicians. To really create value, the Internet of Things has to incorporate people as well as services [9]. In addition to industrial services, the cloud-based infrastructures introduced by the IoT open the possibilities for new business models. Infrastructure, platform, or other software offerings can now be offered "as a service," paid when used. This supports the overall business trend to turn CapEx into OpEx. Products as a service have been introduced as well, for example, by Rolls-Royce aircraft engines in their "power by the hour" concept [10]. This approach was tried by a number of companies, but many have failed.

Cyber-Physical Systems Architectures
A slightly more recent term for communication, computing, and control combined with processes governed by the law
of physics is cyber-physical systems (CPS). The concept of CPS applies to much more than industrial automation and can be found in many domains such as energy systems, transportation, and medical systems. For a recent overview of CPS and its research challenges, see [11]. For process automation, the implementation of CPS has for decades been almost synonymous with that of distributed control systems (DCS). Despite the name, a DCS is surprisingly centralized: all measurement signals are still often brought into a so-called marshalling room where all control nodes are located. The first DCS was introduced in the process industry in the 1970s, and the overall architecture is in many ways similar today. As indicated in the upper right-hand corner of Fig. 5.4, the DCS is linked to other functions such as computerized maintenance management systems (CMMS), manufacturing execution systems (MES), and enterprise resource planning (ERP) systems. This attempt to make most automation functions accessible from one entry point has led to the term collaborative process automation systems (CPAS); see the book [12]. Functionally, we typically describe the various automation tasks as a hierarchy (see Fig. 5.5 for the ISA-95 definition). What separates the different levels is a number of things, such as the closeness to the physical process, the time scale and latency requirements, the need for computing power, and data storage requirements. This may be summarized as different levels having different quality-of-service requirements. Still to this day, different levels of the automation hierarchy are in many process industries dealt with by different software, in different departments, and not seldom in different geographic locations. The hierarchy illustrated in Fig. 5.5 is of course valid also for discrete manufacturing automation, with PLCs primarily controlling binary on-off signals.
Fig. 5.4 Modern DCS – ABB’s System 800xA. (With permission)
Fig. 5.5 The ISA-95 automation hierarchy: Level 4 – business planning & logistics (plant production scheduling, business management; time frame: months, weeks, days, shifts); Level 3 – manufacturing operations management (dispatching production, detailed production scheduling, quality monitoring; time frame: shifts, hours, minutes, seconds); Level 2 – manufacturing control (process monitoring, supervisory and basic automatic control); Level 1 – process sensing and manipulation; Level 0 – the physical production process
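To make the idea of hierarchy levels with different quality-of-service requirements concrete, here is a minimal Python sketch; the latency and time-horizon entries are rough, invented orders of magnitude for illustration, not normative values from ISA-95.

# Rough, invented quality-of-service attributes attached to the ISA-95 levels.
from dataclasses import dataclass

@dataclass
class IsaLevel:
    level: int
    function: str
    latency: str   # how quickly the function must react (illustrative)
    horizon: str   # the time frame it reasons over (illustrative)

HIERARCHY = [
    IsaLevel(4, "Business planning & logistics", "hours-days", "months to shifts"),
    IsaLevel(3, "Manufacturing operations management", "minutes", "shifts to seconds"),
    IsaLevel(2, "Supervisory and basic automatic control", "seconds", "seconds"),
    IsaLevel(1, "Process sensing and manipulation", "milliseconds", "continuous"),
    IsaLevel(0, "Physical production process", "-", "continuous"),
]

for lvl in HIERARCHY:
    print(f"Level {lvl.level}: {lvl.function} (latency ~{lvl.latency}, horizon {lvl.horizon})")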
As already mentioned above, machine automation is, however, by nature often more distributed, since much of the control is built into the individual machines or robots. An immediate impact of the industrial Internet of Things is that not all measurement signals are channeled through the DCS. Already today, condition monitoring data in particular, such as vibration measurements for motors, may be directly connected to a cloud server. In, for example, [13, 14], it is argued that the future factory, from a communication perspective, may not have a hierarchy at all. Instead, signals from all products and devices would be available at all locations in the system. Hence, from a communication and data flow perspective, the future production facility may not have a hierarchy but rather be connected in a mesh network. Much the same ideas were presented by ExxonMobil already in 2016 [15]. They promoted the use of open standards and proposed that the processing of automation functions be divided between so-called distributed control nodes (DCN) near or in the devices, an on-premise computing platform, and the global cloud. This has now been further developed into a first version of a standard reference architecture presented by The Open Group [16]. A much simplified version of their O-PAS™ architecture is shown in Fig. 5.6, where it has been indicated that this shift will have to be gradual, maintaining some form of connectivity or backward compatibility to legacy systems such as, for example, DCS, PLC, and safety systems.
Fig. 5.6 Potential future control architecture
A natural progression is then that what is today considered a Level 2 function, such as advanced process control (APC), could be carried out in the DCN (if timing critical), in the on-premise platform (most likely), or even in the global cloud (if slow and noncritical). A logical consequence of this is that there needs to be a decoupling between the engineering of automation functions and their deployment. With devices such as pumps and valves containing more computing power, it may be appealing to embed the control functionality directly into the devices instead of using separate hardware called the DCN. Here standardization may become an issue since, for example, there are still many
different ways to parametrize a PID controller. One way to alleviate this would be to group some measurement, actuation, and process devices together into modules. Ongoing development, mainly driven by time-to-market requirements in specialty chemicals and pharma, has led to the standardization of module-type packages (MTP) [17]. Such a modular automation approach will make the engineering process more efficient [18], very much in the same manner as machine automation for discrete manufacturing. Similarly, Level 3 functions such as production planning and scheduling may be carried out either in the on-premise platform or in the global cloud. In fact, as pointed out in a 2018 survey, almost 50% of responding manufacturing companies were already considering shifting their MES to the public cloud [19]. Although the data flow and deployment will become much more flexible, it may well be that the ISA-95 hierarchy will still be helpful to functionally structure the different parts of the automation system into different quality-of-service clusters. This increased level of connectivity and further integration of OT and IT of course puts even more emphasis on cyber security in the future. For a thorough account of past cyber security events see [20]; academic activities on cyber security from an automatic control perspective are nicely surveyed in [21].
5.3.2
Communication

As described in the previous version of this chapter there are many means of communication available to the industrial user. For wired communication there are the traditional 4–20 mA cables but also various fieldbus standards (Foundation Fieldbus, Profibus, etc.). What has happened since the publication of the previous version is a path toward increased use of industrial Ethernet via standards like IEC 61784. In particular, the introduction of the standard on time-sensitive networking (TSN) – IEC/IEEE 60802 – which supplements standard Ethernet with time synchronization and guaranteed delivery times, has made it possible to use industrial Ethernet also for fast and time-critical applications. This harmonization on Ethernet means that it is possible to combine more or less all needs for industrial communication within one standard. Another important development in recent years is the increased adoption of OPC UA [22], with its integrated information model with predescribed modules for many different industrial applications. This promises to greatly reduce the engineering time when configuring the automation system.
For wireless communication, there was already in 2009, at the time of Edition 1 of this Handbook, a number of different technologies available (Bluetooth, Wi-Fi, and GSM/3G). For the process industry, one standard that was only emerging at the time is WirelessHART, first introduced in 2007 but ratified by IEC only in early 2009. WirelessHART uses a time-slotted protocol built on top of the IEEE 802.15.4 radio standard, which like Wi-Fi operates in the 2.4 GHz open industrial, scientific, and medical (ISM) frequency band. Later in 2009, the relatively similar standard ISA 100.11a, built on the same radio standard, was introduced. It is fair to say that both these standards were developed primarily for monitoring applications. With time slots of 10 ms, the capacity is not sufficient to handle closing a large number of control loops in a process industry. Nevertheless, for condition monitoring a reasonably large market has been reached. However, closed-loop wireless process control is to a large extent still aspirational, even though some small-scale experiments with protocols more dedicated to control look very promising [23]. Meanwhile, another very important development is the transition via 4G into the newly released 5G mobile communication standard. With 5G we now for the first time see signs of convergence between the different wireless standards, where the cellular network is split into three use cases:
• Enhanced mobile broadband (eMBB) – for high data rates, wide coverage, and high mobility
• Massive machine-type communication (mMTC) – for wide coverage, low power, and small data amounts
• Ultrareliable low-latency communication (URLLC) – deterministic, low latency, and small data amounts
Very much as Ethernet does for wired communication, this promises that 5G may be used to combine many industrial tasks such as production order dispatch, handheld operator stations, video streaming, condition monitoring, and closed-loop control within the same standard. 5G URLLC in particular corresponds to a major technical leap, making 5G more attractive for closed-loop control in manufacturing and the process industry, particularly since, at least in some countries, dedicated frequencies may become available for industry. Another important development that has mainly occurred in the last decade is the shift to smart mobile phones. It is sometimes easy to forget that the Apple iPhone was only released in June of 2007, with the App Store about a year later, followed by Android Market (now Google Play) the same autumn. For us as consumers it has changed almost every facet of our lives. There are now apps for almost everything we do: how we interact with our friends, how we book travel, do our banking, watch TV, etc. As discussed in the previous section, the concept of apps in industrial platforms is still developing. Although most smartphone apps are consumer oriented, we should not forget that smartphones and tablets have had a profound impact on industry as well. What in the past needed a lot of proprietary solutions can today be realized by programming
an app in Apple iOS or Android. Examples include a handheld mobile DCS operator terminal, work order dispatch in an underground mine with a Wi-Fi network installed, and a user interface for a smart motor sensor that uploads its information via Bluetooth or Wi-Fi.
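As a small illustration of how accessible such connectivity has become, reading a single measurement over OPC UA now takes only a few lines with an open-source client library. The sketch below is ours, not part of the original text: it uses the Python opcua package, and the endpoint URL and node identifier are hypothetical placeholders.

```python
from opcua import Client  # open-source "python-opcua" package

# Hypothetical endpoint and NodeId of a temperature measurement
client = Client("opc.tcp://192.168.0.10:4840")
try:
    client.connect()
    node = client.get_node("ns=2;i=1001")   # placeholder NodeId
    print("measured value:", node.get_value())
finally:
    client.disconnect()
```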
5.3.3
Collaborative Robots
An interesting classification of cyber-physical systems is presented in [11], where a simultaneous development is seen in two independent directions (see Fig. 5.7). One trend, toward autonomous systems, is further discussed in Sect. 5.4.1. The other observed trend is toward more collaborative systems. Nowhere is this trend more visible than for collaborative robots, also known as cobots. As the name indicates, a collaborative robot is intended to work in direct collaboration with humans, or at least in close proximity to humans (Fig. 5.8). Unlike conventional industrial robots, which are fenced off from human contact, cobots, using lightweight, softer materials and limited speed, are designed to be inherently safe for working close to humans. Even though the term "cobot" was coined already in the 1990s, the market did not really take off until the last decade, after the previous version of this handbook was published. The global cobot market was estimated at about 980 million USD in 2019, with a projected compound annual growth rate (CAGR) of more than 40% in the coming years [24]. Notice, however, that today's collaborative robots are merely designed to coexist with humans without a fence, rather than being truly collaborative. For robots to fully collaborate with humans will require significant further steps in their development, for example, introducing intelligence that can anticipate and understand human intent without explicit programming.
5.3.4
Industrial AI
The current popularity of artificial intelligence is mostly driven by the increased availability of data and by the computing power available not only in small devices such as smartphones, but also in cloud computing environments. Furthermore, recent advancements in neural network technologies (e.g., convolutional neural networks) have led to breakthroughs in, for example, image processing. These drivers have made it possible to address larger, more complex problems, mostly by applying variants of neural networks.
Consumer Versus Industrial AI
The recent developments are driven in the consumer space, where companies like Google and Facebook get hold of personal data that was never available before (even if they then use the consumer data in B2B relations with advertisers). The successes in this area have led to increased attention in industrial applications as well. While some of the technologies can easily be applied, some fundamental properties of industrial applications require a more distinct approach. This chapter addresses some of the challenges and proposes an industrial approach to AI. It covers three areas where the difference between a consumer application and an industrial use case is significant:
• Individuals: While most current AI use cases look at a very large number of comparable individuals (humans, images, etc.), industry often deals with a smaller population of greater variance. Current ML approaches are quite sensitive to the type of equipment they are trained on. To arrive at a sufficient amount of data, it needs to be collected from similar devices in different plants. This again may be difficult because customers are quite careful about what data is shared on a common platform.
[Figure: two axes – connectivity (distributed, connected, collaborative) and computational complexity (automatic, adaptive, autonomous)]
Fig. 5.7 Proposed classification of CPS. (Adopted from [11])
Fig. 5.8 ABB’s YuMi introduced in 2015 as the world’s first dual-arm collaborative robot
• Information: In addition to industrial data being more complex to collect, the data contains much less information that can be used for learning. Data collected during steady-state operation hardly varies and provides no new information over time, whereas the interesting abnormal cases are rare. Unlike consumer applications, industrial use cases such as failure detection do not focus on the average, but on the outliers.
• Impact: Industrial systems interact with a lot of energy in potentially dangerous environments. Mistakes may have an immediate and highly unpleasant impact. Systems that deal with industrial installations today (automation and safety systems) therefore take special measures to avoid a malfunction in the plant to the greatest extent possible. Unfortunately, an ML system does not show consistent behavior, and its reliability cannot be properly calculated. At the time of writing, AI struggles to achieve the required reliability for safety applications.
Merging AI with Conventional Algorithms
In applications where the data is generated by known physics, using an AI system on its own would at most reproduce the physical laws we already know. To apply AI successfully, the known physics needs to be calculated out of the data to get to the unknown: the deviations, the failure patterns, and the areas where we observe an effect that was not considered in the models. Even though some of these effects could indeed be modeled by physics, it might not be realistic to do so, either because it is too complex, or because the parameters needed (e.g., material fatigue) are not easily measured. These are the areas where data-driven approaches such as machine learning (ML) are very helpful: to explain effects that were not considered in the physical model. To achieve this, a combination of algorithms, first-principles-based as well as data-driven methods, is most helpful in getting the most insight from process data. Such algorithms can be split into explainable and unexplainable subsystems that are analyzed with the respective methods. It may also be beneficial to create a truly hybrid algorithm that uses AI to tune some of the first-principles parameters or to learn from the deviations. The lack of information from the field, in particular rare data from failures, quality issues, or other events not observed before (e.g., shapes of parts to be gripped), can sometimes be compensated by using simulated data. As we will see in Sect. 5.3.5, equipment is often designed using simulation. Such a simulator model may be used to run the virtual device (i.e., its digital twin) under rare or failure conditions, or a wide variety of shapes can be automatically created to simulate varying parts. If the AI solution is then trained with the simulated data, a large amount of data can be generated by varying the conditions over a wider range. In
operation, the trained model then does not need the complex and expensive simulation tool, but can detect effects similar to those seen in the simulator. The nature of current ML approaches, however, is such that they will still not detect phenomena for which the simulator did not produce data. The allocation of functionality between a physical model, an AI model, and real or simulated data is a nontrivial design decision.
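A minimal sketch of the hybrid idea described above may make it concrete. It is written for illustration only: the pump-like physical model, its coefficients, and the "unmodeled wear" term are invented, and plain least squares stands in for whatever ML method would be used in practice.

```python
import numpy as np

def physical_model(flow):
    # Simplified first-principles relation (illustrative coefficients only)
    return 80.0 - 0.05 * flow**2

rng = np.random.default_rng(0)
flow = rng.uniform(10, 40, size=200)                  # measured operating points
head = physical_model(flow) - 0.3 * flow + rng.normal(0, 0.5, 200)
# The -0.3*flow term stands in for an effect missing from the physical model.

# Learn the residual (measurement minus physics) with ordinary least squares
residual = head - physical_model(flow)
X = np.column_stack([np.ones_like(flow), flow, flow**2])
theta, *_ = np.linalg.lstsq(X, residual, rcond=None)

def hybrid_model(flow):
    x = np.column_stack([np.ones_like(flow), flow, flow**2])
    return physical_model(flow) + x @ theta           # physics + learned correction

print("residual-model coefficients:", theta)
```

The same pattern applies when the data comes from a simulator run under rare or failure conditions rather than from the field.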
5.3.5
Virtual Models and Digital Twin
The broader availability of real-time data (as discussed in section “Industrial Internet of Things (IIoT)”) allows deeper insight into the behavior of the equipment. If all the information about a device is available from the IoT, the idea is to create a digital representation of an asset that behaves exactly in the same way as a real asset, but only in the virtual environment. This representation is commonly referred to as the “digital twin.” The exact nature and scope of a digital twin is widely discussed in literature [25]. If indeed the digital twin is analyzed to retrieve knowledge about the asset in the field, more than just the measurements are required. The additional information needed depends on the use case. If a device is to be analyzed in depth, its exact type needs to be known, together with type-related information: 3D design drawings or manufacturer instructions (installation, operation, and maintenance) are examples of digital representations available for a whole fleet of products. The particular instance of the device may have test records or supplier factory information. Most use cases however also need information about the device’s context: If it is to be physically accessed, its location needs to be known, signal cable wiring is required to diagnose communication problems, and electrical supply and wiring is needed in case it does not have power. In many situations and use cases, the device is analyzed in the context of the process: what is its function, and what other devices are operating to provide that function. All these use cases require different sets of aspects that are typically available in digital form. In all the use cases, the information must be available for human or machine analysis. A consistent digital twin framework that allows the access of a wide variety of digital information is therefore needed to enable this. The different digital representations of an asset are created along its lifecycle (Fig. 5.9). Today’s equipment is mostly designed in software tools, and design-stage simulation already shows how the device will behave in reality. The digital variant of the asset therefore exists before its physical variant. At some point, the asset is created physically and is installed on a plant in the context of its function. Once commissioned, it is operated, and then maintained. The digital twin should
[Figure: along the lifecycle stages Design, Manufacture, Build/integrate, and Operate/maintain, the digital representations accumulate – product type, product instance, system context, and live data – so that the digital twin is joined by the physical instance and finally by a live system twin]
Fig. 5.9 Digital twin aspects along the equipment lifecycle
Fig. 5.10 Virtual commissioning of a robot cell – simulated cell (a) and real cell (b)
carry all the digital representations along the value chain, to make use of information that was already created in an earlier step. The digital twin is therefore not only an important operational concept to analyze equipment but can also serve as an engineering framework where data is only entered once and can seamlessly be accessed through the digital twin interfaces. The engineering integration and required efficiency already listed as an important driver in Edition 1 has found its key platform in digital twin concepts. Figure 5.10 shows the digital twin of a robot cell that can be programmed using the same tools as the real robot. Its behavior can be tested virtually such that when the software is loaded into the real robot controller, the commissioning
time is cut to a fraction because most can already be run in the digital twin. One key functionality of a digital twin that is very often mentioned is the capability to simulate its behavior. Such simulation models are state of the art in the design stage, and at times also in operation. To then map the simulated model to the physically measured parameters to recreate operational situations for in-depth understanding and maybe for what-if simulations is still a more complex step that requires system identification and parameter estimation. It is to be noted that although simulation and modeling is frequently mentioned as one of the advantages of using a digital twin, creating a model that serves the purpose of
[Figure: Level 0 – no autonomy, humans are in complete control without assistance. Level 1 – assistance with or control of subtasks; humans always responsible, specifying set points. Level 2 – occasional autonomy in certain situations; humans always responsible, specifying intent. Level 3 – limited autonomy in certain situations; the system alerts to issues, humans confirm proposed solutions or act as a fallback. Level 4 – system in full control in certain situations; humans might supervise. Level 5 – autonomous operation in all situations; humans may be completely absent. Prerequisite: the automation system monitors the environment]
Fig. 5.11 Proposed autonomy levels for industrial systems. (From [29])
a use case is still not a trivial task. While we had predicted an increasing importance of modeling in Edition 1 of the Handbook of Automation, this has not progressed. Creating a physical model from first principles is still a complex and largely manual task. The digital twin offers the advantage that models created in one lifecycle stage may be reused in another, but the fundamental problem remains for the time being. There has, however, been development in auto-generating a process model from a library of component models once the process topology is available in electronic format [26]. To cover the variety of possible digital representations of not just single devices, but also machines, main equipment, or plants, a digital twin concept in essence requires a data interchange framework. One that has already been successfully applied is the "Asset Administration Shell" defined by the Plattform Industrie 4.0 [27]. It provides relevant guidance on how to uniquely identify components, how to publish and access information, and how to relate components and systems.
5.4
Outlook
After walking through the trends currently observed in implementation projects at various stages, let us take a broader look into the future to see where automation systems may be heading.
5.4.1
Autonomous Industrial Systems
Although the level of automation and the amount of human interaction have been a field of study for many years, see for example [28] and the references therein, the topic has received renewed interest with the ongoing development of self-driving cars. In the automotive industry there are now agreed standards of autonomy levels ranging from fully manual driving – Level 0 – to fully autonomous – Level 5. Inspired by this, a level 0–5 taxonomy for industrial systems autonomy was proposed in [29], see Fig. 5.11. Industrial systems autonomy is, however, more complex than that of autonomous cars, which is mainly about gradually replacing or assisting the driver with more automation. An industrial plant needs automation along the entire lifecycle of its existence: for engineering, operation, and maintenance. For a more elaborate discussion of the implications, see [29]. One way to look at the move toward more autonomy is that it adds another outer layer of feedback. While most classical control loops are based on a single measured variable and consist of the phases sense, analyze, and act, an autonomous system has to process many different inputs together to reach a decision. This can be described in the three steps perceive, understand, and solve, see Fig. 5.12. Perceive combines signals from often heterogeneous sensors into information. In the future, the process industry will also see more automated use of unconventional sensor sources such as visual or infrared images, requiring the application of modern AI technology. Understand is meant to provide the reasoning explaining what is happening and its consequences, while solve should decide on the next steps and implement the resulting action in the system. This loop is superimposed over the automation loop that remains in place. While automation takes care of the stability and safety of the process, the autonomous loop adds flexibility and resilience.
Fig. 5.12 Loops in classical control systems (grey) and autonomous systems (red)
See more on automation and autonomy in Chs. 19 and 20.
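A schematic sketch, ours and not taken from [29], of how the outer perceive–understand–solve loop could be superimposed on an existing control layer; all function names, signals, and thresholds are illustrative placeholders.

```python
import time

def perceive(sensors):
    """Fuse heterogeneous signals (process values, images, ...) into information."""
    return {"level": sensors["level"], "image_ok": sensors["frame"] is not None}

def understand(info):
    """Reason about what is happening and its consequences."""
    return "low_level" if info["level"] < 0.2 else "normal"

def solve(situation):
    """Decide on the next step and return new set points for the control layer."""
    return {"flow_setpoint": 1.2} if situation == "low_level" else {}

def autonomous_loop(read_sensors, apply_setpoints, period_s=10.0):
    # The basic automation loops keep running underneath and retain
    # responsibility for stability and safety of the process.
    while True:
        info = perceive(read_sensors())
        apply_setpoints(solve(understand(info)))
        time.sleep(period_s)
```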
5.4.2
Collaborative Systems
Collaboration in the context of automation has different meanings, depending on who is collaborating with whom:
• Collaboration between two systems
• Collaboration between a system and a human
• Collaboration between humans, supported by a system
Collaboration between two systems is mainly a question of communication technology (as discussed in Sect. 5.3.2) and of protocols that the collaborating entities commonly understand. Collaboration between humans and the system is a key component of automation systems: operator stations in control rooms are the traditional way of interfacing with the system. Little has changed in interaction principles since the early days of graphical screens. Process graphics, alarm lists, and objects with faceplates are still the standard today. Moving from automated to autonomous systems, this takes on an even higher complexity. The autonomy levels discussed in Sect. 5.4.1 show that humans remain involved, moving from "in the loop" toward "on the loop," before they stay completely out of the loop. For the foreseeable future, humans still have at least a supervisory task even in autonomous systems. This results in an effect that is commonly referred to as the "automation paradox" [30]: if the system becomes more automated or autonomous, the human has to interact less often. His or her exposure to the system and experience on the job is therefore lower. However, at the time of interaction, the situation is
more complex, because the autonomous system has run the process to the current state in ways not comprehensible to the operator. This challenge is not solved today. While the traditional control room is located on-site, a higher level of autonomy combined with the installed communication bandwidth allows operation to be shared between local and remote staff, where local staff is primarily skilled to physically interact with the process (e.g., maintenance and repair), whereas process optimization specialists and other experts can be located at a remote center and be available to a number of plants without travel. New technologies, such as augmented or virtual reality (AR/VR), help not only to transmit technical data, but also to allow a more immersive experience for the remote operator. Furthermore, analytics information can be blended with the real-time image to give direct insight into the plant while looking at it. Such remote expert centers may be set up by service providers, e.g., by the equipment supplier. One example is the concept of a "collaborative operations center" in the marine industry, where the vessel, the ship owner's operations center, and the supplier's expert center are connected to continuously monitor not only the vessel's technical performance (fuel consumption and equipment health), but also to consider schedules, weather forecasts, etc. to optimize routes collaboratively [31]. See more on collaborative systems in Chs. 15 and 18.
5.4.3
New Applications
As we have seen earlier in this chapter, devices become more intelligent and have the communication capabilities to interact with other devices. Automation of even simple processes becomes possible at the device level, combined with IoT technology that makes powerful functionality available as a service. Such devices may contain technology that was not available at today's cost level even a few years ago. Hospital labs are seeing highly integrated analyzers that contain automated functions that used to be spread over a whole lab in the past. Further automation in the form of autonomous guided vehicles (AGV) combined with robots is gaining importance in that sector. Another area that is very quickly seeing advancements in automation is agriculture. Again, AGVs play a role here, allowing tractors to plant and harvest crops and vegetables very precisely. Further functions such as quality control and robotic manipulation of food are seen in advanced applications. But automation is also becoming popular further away from the plant floor. Robotic process automation deals neither with robots nor with process automation in the industrial sense, but covers automated business processes at the enterprise level. An enterprise that is fully autonomously
accepting orders, producing and shipping the products, and billing, without human interaction, may be the ultimate autonomous industrial plant we may see in the future. Such efficiency improvements through automated processes are today referred to as "hyperautomation," more and more addressing white-collar tasks. See more on device-level automation in Chs. 13, 16, and 17; on hospital and healthcare automation in the chapters of Section H of this Handbook; on agriculture automation in Ch. 19; and on enterprise and business process automation in Ch. 65.
5.5
Conclusions
Technologies that support automation use cases are going through a major transition. While some have already reached a level of maturity that has resulted in wide application in automation (e.g., cloud in IoT), others are still progressing at a speed that makes projections of their possible impact difficult. What can be said is that they will have an impact. A key driver observed is the steady increase in computing power, also in low-energy environments, which makes very low footprint solutions possible. The mass-market technology used in smartphones may well lead to similarly compact solutions integrated into industrial devices, leading to automation functionality moving far toward the edge. Paired with powerful back-end IoT cloud solutions, a new architecture of intelligent devices and distributed software agents (on edge, fog, and cloud) may evolve that enables new automation functionalities at higher efficiencies. An industrialized flavor of AI is key and could be driving the move from automation to autonomous systems. Further technologies that are appearing on the horizon, such as quantum computing, may have an even larger impact, but this development is still too far out to speculate about a successful industrial implementation. The enthusiasm about the new possibilities of technologies has to be put in relation to the use cases and the value delivered. Still today, most advanced implementations of new technology have just reached a proof-of-concept level and have difficulties scaling. Configuration and engineering effort is in many cases still significantly too high, and new business models that justify the spending for production sites are hard to find. The inclusion of new technologies into the world of industrial automation must consider all lifecycle aspects and above all must focus on the value of the use case. Technology will always be just the enabler.
References
1. Schwab, K.: The Fourth Industrial Revolution: what it means, how to respond. World Economic Forum, 01 2016. [Online]. Available: https://www.weforum.org/agenda/2016/01/the-fourth-industrial-revolution-what-it-means-and-how-to-respond/. Accessed 25 Aug 2020
2. The Economist: The Automation Readiness Index, 08.12.2017. [Online]. Available: https://automationreadiness.eiu.com. Accessed 5 Aug 2020
3. Industrial Internet Consortium: Industrial Internet Consortium [Online]. Available: https://www.iiconsortium.org. Accessed 5 Aug 2020
4. German Government: Plattform Industrie 4.0 [Online]. Available: https://www.plattform-i40.de/PI40/Navigation/EN/Home/home.html. Accessed 5 Aug 2020
5. Industrial Internet Consortium: Industrial Internet Reference Architecture (IIRA) [Online]. Available: https://hub.iiconsortium.org/iira. Accessed 16 Aug 2020
6. Industrial Internet Consortium: Industrial Internet Security Framework (IISF) [Online]. Available: https://hub.iiconsortium.org/iisf. Accessed 16 Aug 2020
7. Plattform Industrie 4.0: RAMI4.0 – a reference framework for digitalization [Online]. Available: https://www.plattform-i40.de/PI40/Redaktion/EN/Downloads/Publikation/rami40-an-introduction.pdf. Accessed 16 Aug 2020
8. Ganz, C.: Buzzword demystifier: cloud, edge and fog computing. ABB Rev. 16(05), 78–79 (2019)
9. Ganz, C.: The internet of things, services and people. In: Industry 4.0 Revealed. Springer, Berlin, Heidelberg (2016)
10. Smith, D.J.: 'Power-by-the-hour': the role of technology in reshaping business strategy at Rolls-Royce. Tech. Anal. Strat. Manag. 25(8), 987–1007 (2013)
11. Allgöwer, F., Borges de Sousa, J., Kapinski, J., Mosterman, P., Oehlerking, J., Panciatici, P., Prandini, M., Rajhans, A., Tabuada, P., Wenzelburger, P.: Position paper on the challenges posed by modern applications to cyber-physical systems theory. Nonlinear Anal. Hybrid Syst. 34, 147–165 (2019)
12. Hollender, M.: Collaborative Process Automation Systems. International Society of Automation, Research Triangle Park (2010)
13. Monostori, L., Kadar, B., Bauernhansl, T., Kondoh, S., Kumara, S., Reinhart, S., Sauer, O., Schuh, G., Sihn, W., Ueda, K.: Cyber-physical systems in manufacturing. CIRP Ann. Manuf. Technol. 65, 621–641 (2016)
14. Isaksson, A.J., Harjunkoski, I., Sand, G.: The impact of digitalization on the future of control and operations. Comput. Chem. Eng. 114, 122–129 (2018)
15. Forbes, H.: ExxonMobil's Quest for the Future of Process Automation. ARC Insights, April (2016)
16. Open Group: O-PAS Standard, Version 1.0: Part 1 – Technical Architecture Overview (Informative). The Open Group, Reading (2019)
17. VDI: Automation engineering of modular systems in the process industry. Part 1: general concept and interfaces. Beuth Verlag, Berlin (2019)
18. Bloch, H., Fay, A., Hoernicke, M.: Analysis of service-oriented architecture approaches suitable for modular process automation. In: IEEE ETFA, Berlin, 7–9 Sept 2016
19. IDC: Industrial Customers are ready for cloud – now. September 2018. [Online]. Available: https://d1.awsstatic.com/analystreports/AWS%20infobrief_final.pdf?trk=ar_card. Accessed 25 Aug 2020
20. Middleton, B.: A History of Cyber Security Attacks – 1980 to Present. CRC Press, Taylor & Francis Group, Boca Raton (2017)
21. Zacchia Lun, Y., D'Innocenzo, A., Smarra, F., Malavolta, I., Di Benedetto, M.D.: State of the art of cyber-physical systems security: an automatic control perspective. J. Syst. Softw. 149, 174–216 (2019)
22. Mahnke, W., Leitner, S.-H., Damm, M.: OPC Unified Architecture. Springer, Berlin, Heidelberg (2009)
23. Ahlén, A., Åkerberg, J., Eriksson, M., Isaksson, A.J., Iwaki, T., Johansson, K.H., Knorn, S., Lindh, T., Sandberg, H.: Toward wireless control in industrial process automation: a case study at a paper mill. IEEE Control. Syst. Mag. 39(5), 36–57 (2019)
24. Market Data Forecast: Global Cobot Market Report, February 2020. [Online]. Available: https://www.marketdataforecast.com/market-reports/cobot-market
25. Malakuti, S., Schlake, J., Grüner, S., Schulz, D., Gitzel, R., Schmitt, J., Platenius-Mohr, M., Vorst, P., Garrels, K.: Digital twin – a key software component of industry 4.0. ABB Rev. 20(11), 27–33 (2018)
26. Arroyo, E., Hoernicke, M., Rodriguez, P., Fay, A.: Automatic derivation of qualitative plant simulation models from legacy piping and instrumentation diagrams. Comput. Chem. Eng. 92, 112–132 (2016)
27. Platenius-Mohr, M., Malakuti, S., Grüner, S., Goldschmidt, T.: Interoperable digital twins in IIoT systems by transformation of information models: a case study with asset administration shell. In: 9th International Conference on the Internet of Things, Bilbao (2019)
28. Parasuraman, R., Sheridan, T.B., Wickens, C.D.: A model for types and levels of human interaction with automation. IEEE Trans. Syst. Man Cybern. Syst. Hum. 30(3), 286–297 (2000)
29. Gamer, T., Hoernicke, M., Kloepper, B., Bauer, R., Isaksson, A.J.: The autonomous industrial plant – future of process engineering, operations and maintenance. J. Process Control. 88, 101–110 (2020)
30. Bainbridge, L.: Ironies of automation. Automatica. 19(6), 775–779 (1983)
31. Ralph, M., Domova, V., Björndahl, P., Vartiainen, E., Zoric, G., Windischhofer, R., Ganz, C.: Improving remote marine service. ABB Rev. 05(12), 44–48 (2016)
Christopher Ganz is an independent innovation consultant with more than 30 years of experience in the entire value chain of industrial innovation, including over 25 years at ABB. His innovation focus is on industrial digitalization and its implementation in service business models. As one of the authors of ABB's digitalization strategy, he was able to build on his work in ABB research and global ABB service management; today he supports companies in innovation processes and teaches the CAS Applied Technologies: R&D and Innovation at ETH Zürich. Christopher holds a doctorate degree (technical sciences) from ETH Zürich.
Alf J. Isaksson is an automation and control expert with ABB Corporate Research in Västerås, Sweden. He first pursued an academic career, culminating in promotion to full professor at the Royal Institute of Technology (KTH) in 1999. In 2001 he joined ABB and has since been one of its foremost control specialists, covering the full range of process automation applications. Between 2012 and 2019 he coordinated all control research at ABB's seven global research centers, and since 2020 he has held a position as corporate research fellow. Alf holds a PhD in Automatic Control from Linköping University.
Part II Automation Theory and Scientific Foundations
6
Linear Control Theory for Automation
István Vajk, Jenő Hetthéssy, and Ruth Bars
Contents
6.1 Systems and Control . . . . 122
6.2 Open Loop Control, Closed Loop Control . . . . 122
6.3 Quality Specifications . . . . 125
6.4 Types of Signals and Models of Systems . . . . 125
6.5 Description of SISO Continuous-Time Linear Systems . . . . 125
6.5.1 Description in the Time Domain . . . . 125
6.5.2 Description in the Frequency Domain . . . . 127
6.5.3 Description in the Laplace Operator Domain . . . . 129
6.6 Description of SISO Discrete-Time Linear Systems . . . . 130
6.6.1 Description in the Time Domain . . . . 130
6.6.2 Description in the z-Operator Domain . . . . 132
6.6.3 Description in the Frequency Domain . . . . 133
6.7 Resulting Transfer Functions of Closed Loop Control Systems . . . . 134
6.8 Stability . . . . 135
6.9 Static and Dynamic Response . . . . 136
6.10 Controller Design . . . . 137
6.10.1 Continuous PID Controller Design . . . . 137
6.10.2 Discrete Time PID Controller Design . . . . 139
6.11 Responses of MIMO Systems and "Abilities" . . . . 141
6.11.1 Transfer Function Models . . . . 141
6.11.2 State-Space Models . . . . 141
6.11.3 Matrix Fraction Description . . . . 143
6.12 Feedback System – Stability Issue . . . . 143
6.13 Performances for MIMO LTI Systems . . . . 147
6.13.1 Control Performances . . . . 148
6.13.2 H2 Optimal Control . . . . 149
6.13.3 H∞ Optimal Control . . . . 151
6.14 Robust Stability and Performance . . . . 152
6.15 LMI in Control Engineering . . . . 155
6.16 Model-Based Predictive Control . . . . 157
6.17 Summary . . . . 159
References . . . . 160
Abstract
Automation means automatic control of various processes in different areas of life. Control is based mainly on the notion of negative feedback. Analysis and design of control systems is a complex field. Mathematical models of the processes to be controlled are needed, and effective control methods should exist to ensure the required performance of the controlled processes. In an industrial environment, besides continuous processes and continuous control systems, sampled-data digital control systems gain more and more importance. In this chapter some basic knowledge on linear control systems with deterministic signals and some advanced control algorithms, both for continuous and discrete-time systems, are presented. In the second part of this chapter mainly continuous-time linear systems with multiple inputs and multiple outputs (MIMO systems) are considered. Specifically, stability, performance, and robustness issues, as well as optimal control strategies, are discussed in detail for MIMO linear systems using algebraic and numerical methods. As an important class for practical applications, predictive controllers are also discussed.
I. Vajk () · J. Hetthéssy () · R. Bars () Department of Automation and Applied Informatics, Faculty of Electrical Engineering and Informatics, Budapest University of Technology and Economics, Budapest, Hungary e-mail: [email protected]; [email protected]; [email protected]
© Springer Nature Switzerland AG 2023 S. Y. Nof (ed.), Springer Handbook of Automation, Springer Handbooks, https://doi.org/10.1007/978-3-030-96729-1_6
Keywords
Control theory · Linear control systems · Negative feedback · Closed loop control · Continuous control systems · Discrete control systems · MIMO systems · Stability · Optimal control · Robustness · Predictive control · LMI in Control Engineering
6.1
Systems and Control
Systems are around us. By their nature they may be technological, biological, economic, etc. systems. Systems are connected to their environment. Input quantities coming from the environment act on the systems, while the systems react with output quantities to the environment. Input and output quantities containing information about the physical, chemical, etc. variables are called signals. The behavior of various systems should match prescribed technical expectations. To achieve this, systems should be controlled, i.e., appropriate inputs should be generated to ensure the required output performance. As an example, consider a technological system (process) generating electrical energy. The produced electrical energy should be available with constant voltage and frequency in spite of the changing daily consumption. In an oil refinery the crude oil is refined in a distillation column, and the concentration of the end product should meet the required value. The temperature of a room should be comfortable in spite of the changing outdoor environment. In transportation systems, besides keeping the prescribed speed, there are a number of other requirements (e.g., to avoid collisions, to protect the environment) which should be fulfilled by control. Further examples of control systems are given in many chapters of this Handbook, e.g., in Chs. 15, 25, 43, and 44. In order to control a system, first its operation should be analyzed. System analysis results in a model of the system. In system modeling for control, the signal transfer properties of the system are of interest, i.e., the way a system transfers the input signals into output signals. System modeling, which investigates the relationship between the input and output signals, can be approached in two ways: one way is to analyze the physical behavior of the system and describe its operation by mathematical relationships (equations, characteristic curves). Another way is to measure the input and output data and fit a mathematical model to these data. In the latter case a model structure is assumed first, and then the parameters of this model are determined. This procedure is called identification. Generally these two methods are used together to establish an adequate process model. The model of the system is symbolized by a rectangular block; its inputs are indicated by arrows acting on the box, and the outputs are indicated by arrows leaving the box. The mathematical relation characterizing the model is written inside the box. Concerning a single box with an input and an output, three problems can be formulated (Fig. 6.1). Identification: Given the measured input and output signals, determine a model of the system (the mathematical relationship between the input and the output, denoted by P). Analysis: Given the model of
Fig. 6.1 Problems of identification, analysis, and design
Fig. 6.2 Open loop control
the system and its input signal, determine the output signal. Synthesis: Given the model of the system and the required output, determine the input signal resulting in the required system response. In more involved system structures several boxes may appear, and the basic control tasks are tracking a reference signal and rejecting the effects of disturbances.
6.2
Open Loop Control, Closed Loop Control
To achieve the required system output, prescribed by a reference signal (denoted by r), open loop or closed loop control systems can be built. In open loop control a control unit (whose signal transfer is denoted by C) is connected serially to the system (whose signal transfer is denoted by P) (Fig. 6.2). A disturbance may act at the input (ydi) or at the output (ydo) of the system. Addition of signals is symbolized on the scheme by a small circle: arrows coming into the circle indicate the signals to be added, while the arrow coming out of the circle represents their sum. This control structure is able to track the reference signal, but it is ineffective regarding disturbance rejection. A good choice of the control transfer C can be the inverse of the system transfer P; then tracking of the reference signal would be ideal. To realize a good open loop control, quite accurate knowledge of both the system and the disturbances would be required.
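As a one-line illustration (added here, not part of the original text) of why the open loop structure cannot reject disturbances, write the output of Fig. 6.2 as

y = P (C r + y_{di}) + y_{do}.

With the ideal choice C = P^{-1} the reference is tracked exactly, y = r + P\,y_{di} + y_{do}, but the disturbance terms still reach the output unattenuated; reducing their effect requires feedback.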
[Figure: a reference signal calculator forms the reference signal r; the controller performs error computation/decision and amplification/signal forming; the actuator acts on the process with the manipulated variable (process input u); the process delivers the controlled variable y, which is sensed by a sensor and fed back]
Fig. 6.3 Structural diagram of closed loop control
Fig. 6.4 Block diagram of closed loop control
In closed loop control a negative feedback loop is built around the system. The output signal of the system (process) is measured by a sensor and compared to the reference signal. The deviation between the reference signal and the output signal forms the error signal. The error signal provides the input to the controller unit, which both amplifies it and modifies its shape. The output of the controller gives the input to the actuator, which modifies the input signal of the system. Figure 6.3 gives the structural diagram, while Fig. 6.4 shows the block diagram of the closed loop control. (F denotes a possible reference signal filter and yn represents a noise component introduced by the sensor.) Closed loop control is able both to track the reference signal and to reject the effect of the disturbances. Whatever causes a deviation between the reference signal and the output signal, the error acts to eliminate this deviation. Because of the dynamics of the system this settling process takes time. The control system should be stable, i.e., after changes in the input signals (reference and disturbances) the transients should decay along the loop and a new steady state should be reached. Appropriate design of the controller is responsible for ensuring the required properties of the control system. Controller design is based on the model of the process and aims to meet the prescribed quality specifications set for the control system. In the sequel closed loop control systems will be considered.
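The settling behavior described above can be made concrete with a small simulation sketch. The example below is added here for illustration: a first-order process with an assumed gain and time constant is controlled by a PI controller, and a step disturbance is added to the output halfway through the run; the controller settings are arbitrary, not a recommended tuning.

```python
import numpy as np

K, T = 2.0, 5.0            # assumed process gain and time constant
Kp, Ki = 1.5, 0.4          # assumed PI controller settings
dt, t_end = 0.01, 40.0

n = int(t_end / dt)
x = 0.0                    # process state (output before the disturbance)
integ = 0.0                # integrator state of the controller
r, d = 1.0, 0.5            # unit step reference; output disturbance at t = 20 s
y_log = np.zeros(n)

for k in range(n):
    t = k * dt
    y = x + (d if t >= 20.0 else 0.0)   # measured output incl. disturbance
    e = r - y                           # error signal
    integ += e * dt
    u = Kp * e + Ki * integ             # PI control law
    x += dt * (-x + K * u) / T          # Euler step of dx/dt = (-x + K*u)/T
    y_log[k] = y

print("output just before the disturbance:", round(y_log[int(19.9 / dt)], 3))
print("output at the end of the run:      ", round(y_log[-1], 3))
```

After both the reference step and the disturbance step, the integral action drives the error back toward zero, which is exactly the tracking and disturbance rejection property discussed above.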
In large control systems (e.g., oil refineries, power plants), in addition to the closed loop controls dominating long-term operation, open loop controls also play an important role, e.g., during the start-up and shut-down phases of the process, when complex sequential operations have to be performed. Open loop and closed loop controls should operate together in a harmonized manner. If, besides the output signal, the disturbance or some inner signals are also measured and available for control, then the performance of the control system can be improved further. If the disturbance is measured, the closed loop control can be supplemented by a feedforward path. This forward path contributes to compensating the effect of the disturbance at the output (see Ch. 25). If the process can be separated into serially connected parts, and besides the output signal some intermediate signals can also be measured, then embedded control loops can be built, improving the performance of the control system (cascade control, see Ch. 25). These solutions are frequently used in the process industries. Today, industrial control systems are increasingly implemented by computer control, where the main control tasks are executed by computers in real time. The process and the computer are connected via analog/digital (A/D) and digital/analog (D/A) converters. The output of the process is sampled and digitized, thus the information on the output signal is available in digital form only at the sampling instants. The computer calculates the actual value of the control signal according to a digital control algorithm and forwards it via a D/A converter as the process input. As the process is continuous, the control signal should act also between the sampling points. This is realized by using a holding unit. The structure of the sampled data control system is shown in Fig. 6.5.
[Figure: the controller algorithm runs in a real-time environment (real-time clock, scheduling, man/machine interface, and network interface supplying the reference signal, sampling interval, and controller parameters); it is connected to the continuous-time process through D/A and A/D converters, with discrete-time signals r[k], u[k], y[k] on the computer side and continuous-time signals u(t), y(t) on the process side]
Fig. 6.5 The structure of sampled data control system realized by computer control
[Figure: discrete controller C(z), zero-order hold (ZOH), continuous process P(s), low-pass filter F(s), and sampler with sampling time Ts in a negative feedback loop; r[k], e[k], u[k], and y[k] are sampled signals, u(t) and y(t) are continuous-time signals]
Fig. 6.6 Block diagram of the closed loop sampled data control system
The block diagram of the closed loop sampled data control system is given in Fig. 6.6, where the behavior of the signals in the time domain at several points of the control loop is also indicated. Selecting the sampling time is an important issue. According to the Shannon sampling theorem, the sampling frequency should be more than twice the highest frequency of the analog signal; then the continuous signal can be reconstructed from its samples. In computer control of industrial processes, more sophisticated control problems can also be solved and distributed control systems can be implemented, where spatially distributed control systems operate in an optimized, aligned manner, communicating with each other.
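As a small illustration of the discrete-time model seen by the computer, the sketch below (added here) discretizes an assumed first-order process with a zero-order hold using SciPy; the gain, time constant, and sampling time are arbitrary example values.

```python
from scipy.signal import cont2discrete

# Continuous first-order process P(s) = K / (T*s + 1), assumed K = 2, T = 5 s
K, T = 2.0, 5.0
num, den = [K], [T, 1.0]

Ts = 0.5  # sampling time, chosen well below the process time constant
numd, dend, _ = cont2discrete((num, den), Ts, method="zoh")
print("discrete numerator:  ", numd)
print("discrete denominator:", dend)
```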
6.3
Quality Specifications
In a feedback control system the main requirements are:
• Stability
• Good reference signal tracking (servo property)
• Good disturbance rejection
• Good measurement noise attenuation
Good performance can be specified, e.g., by the prescribed shape of the unit step response, characterized by the allowed overshoot and the required settling time. For stability, all the signals in the control loop should remain bounded. In addition, to keep operational costs low, small process input values are preferred. Controller design is based on the model of the process. As models always imply uncertainties, design procedures aiming at stability and desirable performance should be extended to tolerate modeling uncertainties as well, i.e., control systems should be robust with respect to the incomplete knowledge available on the process. Thus the specifications, which provide the controller design objectives, are completed by the following requirements:
• Achieve reduced input signals
• Achieve robust stability
• Achieve robust performance
Some of the above design objectives may be conflicting; however, a compromise among them can still provide acceptable control performance. In the sequel, the basic principles of control and methods of analysis and design are discussed briefly. (For a detailed discussion, see [1–5] and [9–12].)
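As an illustration of these specifications, the short sketch below (added here, with an arbitrary underdamped second-order example) extracts the overshoot and the 2% settling time from a simulated unit step response.

```python
import numpy as np
from scipy.signal import TransferFunction, step

sys = TransferFunction([1.0], [1.0, 0.6, 1.0])   # assumed example process
t, y = step(sys, T=np.linspace(0, 30, 3000))

y_final = y[-1]
overshoot = (y.max() - y_final) / y_final * 100.0

# Settling time: last instant the response leaves a +/-2 % band around the final value
outside = np.where(np.abs(y - y_final) > 0.02 * y_final)[0]
settling_time = t[min(outside[-1] + 1, len(t) - 1)] if outside.size else t[0]

print(f"overshoot: {overshoot:.1f} %")
print(f"2 % settling time: {settling_time:.1f} s")
```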
6.4
Types of Signals and Models of Systems
The signal is a physical quantity which carries information. The signals can be external input signals, output signals leaving the system, or inner signals of the system. Signals can be classified in different ways. A signal is continuous if it is continuously maintained over a given time range. The signal is discrete-time or sampled if it provides information only at determined time points. The signal is analog, if its value directly represents the measured variable (e.g., current, velocity, etc.). The signal is digital, if the information is represented by coded values of the physical variable in digits. The signal is deterministic if its course can be described by a function of time. The signal is stochastic if its evolution is probabilistic according to given statistics. The model of a system provides a relationship between the inputs and the outputs of the system. The model is static, if
its output depends only on the actual value of its input. The model is dynamic if the output depends on previous values of the input and output signals as well. A model can be linear or nonlinear. The static characteristic plots the steady-state values of the output signal versus the steady-state values of the input signal. If this curve is a straight line, the system is linear; otherwise it is nonlinear. Generally, systems are nonlinear, but in most cases a linear model can be obtained as the result of a linearization around a working point. There are general methods for analyzing linear systems, while analyzing nonlinear models needs more sophisticated methods. Different analysis methods are needed depending on whether a system is excited by deterministic or stochastic signals. A model may have lumped or distributed parameters. The parameters of a lumped system do not depend on spatial coordinates. In distributed parameter systems, the parameters depend both on time and space. A model can be continuous time or discrete time. Discrete time models, which describe the relationship between the sampled output and the sampled input signals of a system, are applied in the analysis and design of computer control systems. Besides input/output models, state space models are also widely used. Beyond the I/O relations, these models describe the behavior of the inner system states as well. Considering the number of the input and output signals, the model can be Single Input Single Output (SISO), Multi Input Multi Output (MIMO), Single Input Multi Output (SIMO), or Multi Input Single Output (MISO). In the sequel, dynamic, linear or linearized, deterministic, lumped parameter, continuous and discrete time, SISO and MIMO input/output and state space models will be dealt with.
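As a short worked illustration of linearization around a working point (added here; the square-root characteristic is an arbitrary example): for a static nonlinearity y = f(u) operated around (u_0, y_0 = f(u_0)), a first-order Taylor expansion gives

\Delta y \approx \left.\frac{df}{du}\right|_{u_0} \Delta u,

so for small deviations \Delta u = u - u_0 the behavior is linear with gain f'(u_0). For example, for y = \sqrt{u} around u_0 = 4 the linearized gain is 1/(2\sqrt{u_0}) = 0.25.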
6.5
Description of SISO Continuous-Time Linear Systems
Systems can be described in the time, frequency, and operator domain. Those descriptions in different domains can equivalently be transformed into each other. System analysis or design may be more favorable in one or in another form.
6.5.1
Description in the Time Domain
A continuous time (CT) linear SISO system can be described in the time domain by a differential equation of order n or by a system constructed from a set of n first-order differential equations (also called state equations), or it can be characterized by its responses to typical input signals (e.g., impulse response, step response, response to sinusoidal signals) [1, 2, 6]. The differential equation gives the relationship between the output signal y(t) and its derivatives and the input signal u(t) and its derivatives. It is given in the following form:
a_n \frac{d^n y(t)}{dt^n} + a_{n-1} \frac{d^{n-1} y(t)}{dt^{n-1}} + \cdots + a_1 \frac{dy(t)}{dt} + a_0 y(t) = b_m \frac{d^m u(t)}{dt^m} + b_{m-1} \frac{d^{m-1} u(t)}{dt^{m-1}} + \cdots + b_1 \frac{du(t)}{dt} + b_0 u(t)
The condition of physical realizability is m ≤ n. Here a_0, a_1, ..., a_n, b_0, b_1, ..., b_m are constant coefficients. Often the differential equation is written in the so-called time constant form:

T_n^n \frac{d^n y(t)}{dt^n} + T_{n-1}^{n-1} \frac{d^{n-1} y(t)}{dt^{n-1}} + \cdots + T_1 \frac{dy(t)}{dt} + y(t) = A \left( \tau_m^m \frac{d^m u(t)}{dt^m} + \tau_{m-1}^{m-1} \frac{d^{m-1} u(t)}{dt^{m-1}} + \cdots + \tau_1 \frac{du(t)}{dt} + u(t) \right)

where the parameter A = b_0/a_0 is the gain of the system, having a physical dimension, and T_i = \sqrt[i]{a_i/a_0} and \tau_j = \sqrt[j]{b_j/b_0}. The parameters T_1, ..., T_n, \tau_1, ..., \tau_m are time constants; their dimension is seconds. (The above definition of the gain is valid if a_0 ≠ 0 and b_0 ≠ 0.) The time constant form is advantageous, as for lower-order cases the shape of the time response for typical inputs (e.g., for a unit step) can approximately be visualized even without solving the differential equation. The system is called proportional if a_0 ≠ 0 and b_0 ≠ 0. In this case, for a step input the output signal in steady state is constant and proportional to the input signal. The system exhibits a "derivative" action if b_0 = 0 and an "integral" action if a_0 = 0.

In many practical processes the output responds with a delay (also called dead time) to changes in the input signal. In this case the time arguments on the right side of the differential equation become (t − T_d) instead of (t), where T_d > 0 denotes the delay. Accordingly, a process formed by a pure delay and a gain is simply represented by y(t) = A u(t − T_d).

The solution of the differential equation describes the behavior of the system, i.e., the time course of the output signal as a response to a given input. The solution has to fulfil n initial conditions prescribed for the output signal and its derivatives. The solution consists of two components, namely the general solution of the homogeneous equation and one particular solution of the inhomogeneous equation:

y(t) = y_h(t) + y_i(t)

The solution of the homogeneous equation is determined by the roots of the characteristic equation:

a_n \lambda^n + a_{n-1} \lambda^{n-1} + \cdots + a_1 \lambda + a_0 = 0

The general solution of the homogeneous equation (the transient response) in the case of single roots has the following form:

y_h(t) = k_1 e^{\lambda_1 t} + k_2 e^{\lambda_2 t} + \cdots + k_n e^{\lambda_n t} = \sum_{i=1}^{n} k_i e^{\lambda_i t}
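As a brief worked illustration (added here, not part of the original text), consider the first-order case

T \frac{dy(t)}{dt} + y(t) = A u(t)

with a unit step input and zero initial condition. The characteristic equation T\lambda + 1 = 0 gives \lambda_1 = -1/T, so y_h(t) = k_1 e^{-t/T}; a particular solution is y_i(t) = A. The initial condition y(0) = 0 yields k_1 = -A, hence

y(t) = A \left( 1 - e^{-t/T} \right),

the familiar step response of a first-order lag.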
For multiple roots (e.g., supposing a triple root, \lambda_1 = \lambda_2 = \lambda_3), the solution is given as follows:

y_h(t) = \left( k_1 + k_2 t + k_3 t^2 \right) e^{\lambda_1 t} + k_4 e^{\lambda_4 t} + \cdots + k_n e^{\lambda_n t}

The transients decay in time if the roots of the characteristic equation are located in the left half of the complex plane, i.e., \operatorname{Re}\lambda_i < 0. Let f(u) denote the particular solution of the inhomogeneous equation, which depends on the input signal u. f(u) can be found by some mathematical procedure, e.g., by the method of variation of parameters, or by some simple considerations. The general solution of the differential equation becomes

y(t) = y_h(t) + f(u) = k_1 e^{\lambda_1 t} + k_2 e^{\lambda_2 t} + \cdots + k_n e^{\lambda_n t} + f(u)

The constants are determined from the initial conditions. If the input signal is an impulse (Dirac delta, an impulse of zero duration with infinite amplitude and unity area) or a unit step, the solution of the differential equation is simpler. The output signals corresponding to such inputs are the impulse response or weighting function (denoted by w(t)) and the step response (denoted by v(t)), which are characteristic for the system. Knowing them, the output of the system can be calculated for arbitrary input signals as well.

In a system of order n, besides the input and the output signals, n inner variables, the so-called state variables, can be defined. State variables cannot change their values abruptly when the input signal changes abruptly; it takes time to reach the new values determined by the input signal. Their initial values (the initial conditions) represent the information on the past history of the effects of input changes in the system. The state variables are denoted by x_i, i = 1, 2, ..., n, and they are the components of the state vector x. Using the state variables, the system can be described by a state space model in which, instead of a single differential equation of order n, a set of n first-order differential equations is considered. Such models can often be derived from physical considerations. If an input-output model of the system characterized by a single differential equation of order n is known, the state variables can be defined in different ways. Generally they are combinations of higher-order derivatives of the outputs and inputs. The state equation in vector-matrix form is given as:

\dot{x}(t) = A x(t) + b u(t)
y(t) = c x(t) + d u(t)
where A, b, c are the parameter matrices of dimensions (n×n), (n×1), (1×n), respectively, and d is a scalar parameter. For MIMO systems a similar formalism is obtained for the state equation with appropriate dimensions of the parameter matrices. As an example, let us suppose that the right side of the differential equation is g(t) = b0 u(t); then the output signal and its higher-order derivatives (up to n−1) can be chosen as state variables and the state equation can be obtained in the following form:

$$\dot{x}_1 = x_2,\quad \dot{x}_2 = x_3,\quad \ldots,\quad \dot{x}_n = -\frac{a_0}{a_n}x_1 - \frac{a_1}{a_n}x_2 - \cdots - \frac{a_{n-1}}{a_n}x_n + \frac{b_0}{a_n}u,\qquad y = x_1$$

In this case, the parameter matrices are:

$$A = \begin{bmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & & & \ddots & \vdots \\ -a_0/a_n & -a_1/a_n & -a_2/a_n & \cdots & -a_{n-1}/a_n \end{bmatrix};\quad b = \begin{bmatrix} 0 \\ 0 \\ \vdots \\ b_0/a_n \end{bmatrix};\quad c = \begin{bmatrix} 1 & 0 & 0 & \cdots & 0 \end{bmatrix};\quad d = 0.$$

The above state equation is one possible representation, as any linear combination of the state variables provides new state variables, while the relation between the output and the input remains unchanged. A set of state equations with identical I/O properties can be obtained by the so-called similarity transformations [1, 2, 8]. The solution of the state equation is obtained in the following form:

$$x(t) = e^{At}x(0) + \int_0^t e^{A(t-\tau)}\, b\, u(\tau)\, d\tau$$

The solution contains two parts: the free response, which is the effect of the initial conditions x(0), and the forced response, which is the effect of the input signal. Solving a set of first-order differential equations is generally simpler than solving the differential equation of order n. It gives information not only on the output signal, but also on the inner variables. The state space form of a dynamical system also shows properties of the system which otherwise may remain hidden when solving the differential equation describing the input/output relationship (e.g., controllability, observability, [2, 8]).

6.5.2 Description in the Frequency Domain

If y(t) is a periodic signal, it can be expanded into its Fourier series, i.e., it can be decomposed as a sum of sinusoidal components [6]. Let us denote the time period of the signal by T. Then the basic frequency is ω0 = 2π/T. The complex form of the Fourier series is

$$y(t) = \sum_{n=-\infty}^{\infty} c_n\, e^{jn\omega_0 t}$$

where n is an integer and the coefficients cn can be calculated by the relationship

$$c_n = \frac{1}{T}\int_{-T/2}^{T/2} y(t)\, e^{-jn\omega_0 t}\, dt$$

cn is a complex number and $c_n = \hat{c}_{-n}$, where $\hat{c}$ denotes the complex conjugate. The cn amplitudes assigned to the discrete frequencies ω = nω0 compose the discrete amplitude spectrum of the periodic signal y(t). Generally, higher frequency components appear with lower amplitude. In practice the input of a system generally is not periodic. If an aperiodic signal is absolutely integrable, i.e.,

$$\int_{-\infty}^{\infty} |y(t)|\, dt = \text{finite},$$

the continuous complex spectrum Y(jω) of the signal y(t), the so-called Fourier transform of y(t), can be introduced, which is given by the following relationship:

$$Y(j\omega) = \int_{-\infty}^{\infty} y(t)\, e^{-j\omega t}\, dt = \mathcal{F}\{y(t)\}$$

and its inverse transform can be given as

$$y(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} Y(j\omega)\, e^{j\omega t}\, d\omega$$

which can be considered as an extension of the Fourier series. It is also called the Fourier integral. The first derivative of y(t) is

$$\dot{y}(t) = \frac{dy(t)}{dt} = \frac{1}{2\pi}\int_{-\infty}^{\infty} j\omega\, Y(j\omega)\, e^{j\omega t}\, d\omega$$

so the Fourier transform of the first derivative of the signal is jωY(jω).
Fig. 6.7 The frequency function: a sinusoidal input u(t) = Au sin(ωt + φu) applied to P(s) yields the output y(t) = Ay sin(ωt + φy) + ytransient
Applying the Fourier transformation to a differential equation, an algebraic equation is obtained, whose solution is much simpler than the solution of the differential equation. If a linear system is excited by an input signal whose Fourier spectrum is known, the superposition theorem can be applied to calculate the complete system response. The output signal of the system can be approximated by the sum of the responses to the individual sinusoidal components of the input signal. The more frequency components are considered, the more accurate the approximation of the time function of the output signal will be. In the case of a periodic input signal, discrete frequencies should be taken into account; in the case of an aperiodic signal the frequency spectrum is continuous, so the system response to all frequencies should be considered.

The basic property of stable linear systems is that for sinusoidal input signals they respond with sinusoidal output signals of the same frequency as that of the input signal after the transients have decayed (i.e., in steady state). Both the amplitude and the phase angle of the output signal depend on the frequency (Fig. 6.7). The input signal is

$$u(t) = A_u \sin(\omega t + \varphi_u) = \mathrm{Im}\left\{A_u e^{j(\omega t + \varphi_u)}\right\};\qquad y(t) = y_{\text{steady}}(t) + y_{\text{transient}}(t)$$

The steady-state response is obtained as

$$y_{\text{steady}}(t) = A_y(\omega)\sin\left(\omega t + \varphi_y(\omega)\right) = \mathrm{Im}\left\{A_y(\omega)\, e^{j(\omega t + \varphi_y(\omega))}\right\}$$

The frequency function P(jω) is a complex function representing how the amplitude ratio of the output and the input signal and the phase difference between them change with the frequency. The frequency response function can be given as

$$P(j\omega) = \frac{A_y(\omega)}{A_u}\, e^{j(\varphi_y(\omega) - \varphi_u)} = a(\omega)\, e^{j\varphi(\omega)}$$
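To make the time- and frequency-domain descriptions concrete, here is a minimal numerical sketch (not from the handbook; the first-order lag and its data are assumed) using SciPy to obtain the step response v(t) and the frequency function a(ω), φ(ω) of a proportional system:

```python
import numpy as np
from scipy import signal

# Assumed example: a first-order proportional lag P(s) = A / (1 + sT), A = 2, T = 10 s
A_gain, T = 2.0, 10.0
P = signal.TransferFunction([A_gain], [T, 1.0])

t, v = signal.step(P)            # step response v(t); settles at the gain A
w, a_dB, phi = signal.bode(P)    # amplitude a(w) in dB and phase phi(w) in degrees

# At w = 1/T the amplitude has dropped to A/sqrt(2) (about 3 dB below the
# low-frequency gain) and the phase is -45 degrees, i.e., P(jw) = a(w) e^{j phi(w)}.
print(v[-1], np.interp(1 / T, w, phi))
```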
If φt > 0, the (closed-loop) system is stable; if φt = 0, the system is at the boundary of stability; and if φt < 0, the system is unstable. Another measure is the gain margin, which relates to the absolute value of the open loop at the intersection point of the Nyquist diagram with the negative real axis. For stability, this value should be less than 1. These measures can also be read from the Bode diagrams. Generally these stability measures can be used when a single intersection point exists. (In the case of multiple intersection points they can be used only with due care.) If no delay is involved, the control system is stable if ωc is located on the straight-line asymptote of slope −20 dB/decade of the Bode amplitude diagram of
the open loop. If there is a delay in the loop, the ωc < 1/Td condition should also be met. Note that with ωc < 1/(2Td) the phase margin will be approximately 60° [2]. The Nyquist stability criterion can be extended to unstable open-loop systems (generalized Nyquist stability criterion, see [1, 2]).

Fig. 6.15 Nyquist stability criterion

Fig. 6.16 Phase margin
6.9
Static and Dynamic Response
A control system should follow the reference signal and should reject the effect of the disturbances. Because of the dynamics of the system (time constants, delay) the required behavior does not happen immediately after the appearance of the input excitation. The signals in the control system (the output signal, the control signal, the error signal) reach their steady state, static values through a transient (dynamic)
response. Both the static and the transient response should meet prescribed requirements. The static accuracy, i.e., the steady-state error in case of a given reference signal or disturbance, can be calculated based on the resulting transfer functions, using the final value theorem. Namely, the steady-state error depends on the number of integrators in the open loop. The number of integrators is also called the type number. The transfer function of the open loop is

$$L(s) = \frac{k}{s^i}\, L_t(s)$$

where k is the gain, i is the number of integrators (i = 0, 1, or 2) in the loop, and Lt(s) with Lt(s)|s=0 = 1 contains the time delays which influence the transient response. From the overall transfer function related to the error signal, supposing F = 1 (see Fig. 6.4),

$$\lim_{t\to\infty} e(t) = \lim_{s\to 0} s\,\frac{1}{1 + \frac{k}{s^i}L_t(s)}\, r(s) = \lim_{s\to 0} s\,\frac{s^i}{s^i + k\,L_t(s)}\, r(s)$$

Table 6.3 gives the steady-state error for type numbers i = 0, 1, 2 and unit step, ramp, and parabolic reference signals.

Table 6.3 Steady-state error
Type number | r(t): unit step | r(t): unit ramp | r(t): quadratic signal
i = 0 | 1/(1 + k) | ∞ | ∞
i = 1 | 0 | 1/k | ∞
i = 2 | 0 | 0 | 1/k

With more integrators and with higher loop gain k the static accuracy gets better. But static accuracy and stability are conflicting demands. With i > 2, stability generally cannot be ensured. The properties of the dynamic (transient) response can be characterized, e.g., by the overshoot of the step response and the settling time. The overshoot is defined as

$$\sigma = \frac{v_{\max} - v_{\text{steady}}}{v_{\text{steady}}}\, 100\%,$$

where v denotes the step response. The settling time ts is defined as the time when the step response reaches its steady value within 3–5% accuracy. In the frequency domain, the settling time is related to the cut-off frequency ωc (determined from the frequency function of the open loop); it can be approximated as [2]

$$\frac{3}{\omega_c} < t_s < \frac{10}{\omega_c}$$

With a higher value of the cut-off frequency the control system becomes faster, at the price of larger control signal values. The overshoot relates to the phase margin. A practical guideline is: if φt ≈ 60°, the overshoot is less than 10%.
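As a quick check of one entry of Table 6.3 (a sketch with an assumed Lt(s), not code from the handbook), the final value theorem can be evaluated symbolically:

```python
import sympy as sp

# Type number i = 1: L(s) = (k/s) Lt(s) with Lt(0) = 1, unit-ramp reference r(s) = 1/s^2
s, k = sp.symbols('s k', positive=True)
Lt = 1 / (1 + 2 * s)                 # an arbitrary stable factor with Lt(0) = 1 (assumed)
L = k / s * Lt
e_ss = sp.limit(s * 1 / (1 + L) * 1 / s**2, s, 0)   # e_ss = lim_{s->0} s E(s)
print(e_ss)                          # 1/k, matching the i = 1 / unit-ramp entry of Table 6.3
```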
6.10 Controller Design

The behavior of a control system should meet quality specifications, formulated in the time or in the frequency domain. With appropriate controller design the required behavior can be reached or can be well approximated. Figure 6.17 shows the required loop shape of the approximate Bode amplitude diagram of the open loop [2]. The slope of the diagram at low frequencies indicates the number of integrators (0 slope: no integrator, −20 dB/decade: one integrator, −40 dB/decade: two integrators), which determines the static accuracy. The cut-off frequency is related to the settling time. For stability, ωc has to be located on a straight-line segment of slope −20 dB/decade. A phase margin around 60° generally ensures a calm transient settling process without significant overshoot. Delay in the loop restricts the value of the cut-off frequency: besides locating it on a straight-line segment of slope −20 dB/decade, for reaching the required phase margin ωc < 1/(2Td) should be ensured. Therefore the control of systems involving delay will be slow. A controller can be designed in the frequency domain by achieving appropriate "loop shaping." In DT systems, the frequency function is similar to the continuous one in the low frequency domain, when ω < 1/Ts.
6.10.1 Continuous PID Controller Design

The most widely used controllers in industry are PID controllers, which contain parallel-connected proportional, integrating, and differentiating effects [13–16]. The transfer function of the controller is

$$C(s) = k_c\left(1 + \frac{1}{sT_I} + sT_D\right)$$

However, the ideal differentiating effect cannot be realized. Differentiation should appear together with a time lag. The practical algorithm is given as

$$C(s) = k_c\left(1 + \frac{1}{sT_I} + \frac{sT_D}{1 + sT}\right) \approx k_c\,\frac{(1 + sT_I)(1 + sT_D)}{sT_I\,(1 + sT)}$$

where the approximation can be accepted if $T_I \gg T_D \gg T$. Along a practical interpretation of the PID controller, the proportional part reflects the actual value of the error, the integrating part reflects the past course of the error, while the differentiating part provides some prediction of the error. With these effects, for a given model of the system the quality specifications can generally be reached. A P controller uses only the proportional part of the controller algorithm. A PI controller applies the P and I effects. A PD controller uses the P and D parts only.
By introducing an integrator into the control circuit, the static accuracy is improved. Using a differentiating effect accelerates the response of the system to the input signal at the price of higher values of the control signal. Parameter kc and the ratio of the time constants TD/T determine the overexcitation (the ratio of the initial and final values of the step response of the controller). The maximum value of the control signal is restricted due to the technical realization. To realize CT PID controllers, analog operational amplifiers are used. Table 6.4 shows the step responses and the approximate Bode diagrams of the various controllers discussed above.

If the Bode diagram of the open-loop system with a unity controller (C(s) = 1) does not fulfil the quality specifications shown in Fig. 6.17, a more involved controller should be designed. First its structure (P, PI, PD, or PID) has to be chosen, then its parameters have to be determined to shape the Bode diagram of the control loop appropriately. A number of techniques exist to design controllers. A very useful method is pole cancellation, which means that an unfavorable pole of the system is cancelled by a zero of the controller and a favorable pole is introduced instead. With a PI controller, for instance, the largest time constant of the system is cancelled and an integrating effect is introduced instead, ensuring good reference signal tracking properties. The gain of the controller is to be chosen to ensure the appropriate phase margin (about 60°). It has to be emphasized that it is never allowed to cancel unstable system poles. The maximum value of the control signal remains to be checked in order to comply with the technical limits. Typically, the design turns out to be an iterative process.
Table 6.4 Step responses and Bode diagrams of PID controllers (the step responses and asymptotic Bode diagrams themselves are shown graphically in the original table)
PI controller: CPI(s) = kc (1 + 1/(sTI)). Increases static accuracy
PD controller: CPD(s) = kc (1 + sTD/(1 + sT)). Accelerates the control behavior. Overexcitation in the control signal
PID controller: CPID(s) = kc (1 + 1/(sTI) + sTD/(1 + sT)) ≈ kc (1 + sTI)(1 + sTD)/(sTI (1 + sT)). Increases static accuracy and accelerates the control behavior. Overexcitation
Fig. 6.17 Required shape of the asymptotic Bode amplitude diagram of the open loop, |L(jω)| = |C(jω)P(jω)|: at low frequencies the diagram should lie above a constraint providing good tracking, disturbance rejection, and low parameter sensitivity; the cut-off frequency ωc should be high to speed up the control and should lie on a segment of slope −20 dB/decade with φt > 60° (a good choice for stability and a nice transient response); at high frequencies (around ωnoise) the diagram should lie below a constraint to decrease the noise
As an example, consider controller design for a system containing two lags and a delay. The transfer function of the process is

$$P(s) = \frac{1}{(1 + 5s)(1 + 10s)}\, e^{-5s}.$$

A PID controller is designed ensuring pole cancellation and approximately 60° phase margin:

$$C(s) = \frac{(1 + 10s)(1 + 5s)}{10s\,(1 + s)}.$$

Figure 6.18 shows the step response of the output and the control signal of the control system and the Bode diagram of the open loop. In case of significant delay the control system with a PID-like controller will be slow, as for stability and appropriate dynamic performance the cut-off frequency ωc is restricted: its value has to be less than 1/(2Td), namely, the delay itself reduces the phase margin by −ωc Td. In practice, several experimental controller tuning methods are used based on a model of the system (in most cases the model is considered as a lag element with delay, established by measurements). We refer here to the Ziegler-Nichols rules, the Chien-Hrones-Reswick method, the Strejc method, the Åström relay method, and the Åström-Hägglund method [14, 15].
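A minimal numerical cross-check of this design (a sketch, not the authors' code): after the pole cancellation the open loop is L(s) = C(s)P(s) = e^{-5s}/(10s(1+s)), and since the delay only contributes phase, the crossover and phase margin can be computed from the delay-free part plus the extra phase −ωTd:

```python
import numpy as np
from scipy import signal

Td = 5.0
L0 = signal.TransferFunction([1.0], [10.0, 10.0, 0.0])   # 1 / (10s(1+s)); the delay is handled below
w = np.logspace(-3, 1, 5000)
w, mag_dB, phase = signal.bode(L0, w)
phase_total = phase - np.degrees(w * Td)                  # add the delay phase -w*Td

ic = np.argmin(np.abs(mag_dB))                            # gain crossover: |L(jw)| = 0 dB
print(f"wc ~ {w[ic]:.3f} rad/s, phase margin ~ {180.0 + phase_total[ic]:.1f} deg")
# prints roughly wc ~ 0.1 rad/s and a phase margin of about 56 deg, consistent with Fig. 6.18
```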
An important practical problem arises from the possible saturation of the controller output: the output of the integrating part of the controller can "run away" too far, so it has to be restricted to comply with the technical limits. This problem has to be handled (anti-reset windup, [1, 14]).
6.10.2 Discrete Time PID Controller Design

In sampled-data (discrete time, DT) control systems the requirements set for a control system are the same as in the case of continuous systems. The main differences are that in sampled-data systems the information on the output signal is available only at the sampling points, and the input signal is held constant for a sampling period, as a continuous control signal is required at the input of the system. It was seen that sampling and holding introduces an extra delay of about half of the sampling time. The discrete and the continuous loop frequency functions approximate each other up to ω = 1/Ts. In the frequency domain this low frequency range is of interest, as the cut-off frequency in the discrete system cannot go beyond the value of

$$\omega_c = \frac{1}{2\left(\dfrac{T_s}{2} + T_d\right)}$$

in order to ensure the appropriate phase margin. The controller is realized by a real-time recursive algorithm, which runs in every sampling period and forwards the actual calculated control value toward the process.
Fig. 6.18 PID controller design for a system with two lags and delay: step responses of the output y and the control signal u, and the Bode diagram of the open loop (Gm = 8.68 dB at 0.263 rad/s, Pm = 55.8 deg at 0.0995 rad/s)
Table 6.5 Discrete PID controllers (pulse transfer function of the controller and the corresponding difference equation)
P: CP(z) = kc; u[k] = kc e[k]
PI: CPI(z) = kc (z − e^{−Ts/T1})/(z − 1); u[k] = kc e[k] − kc e^{−Ts/T1} e[k−1] + u[k−1]
PD: CPD(z) = kc (z − e^{−Ts/T2})/(z − e^{−Ts/(T2/n)}); u[k] = kc e[k] − kc e^{−Ts/T2} e[k−1] + e^{−Ts/(T2/n)} u[k−1]
PD (ideal): CPDid(z) = kc (z − e^{−Ts/T2})/z; u[k] = kc e[k] − kc e^{−Ts/T2} e[k−1]
PID: CPID(z) = kc (z − e^{−Ts/T1})(z − e^{−Ts/T2})/((z − 1) z); u[k] = kc e[k] − kc (e^{−Ts/T1} + e^{−Ts/T2}) e[k−1] + kc e^{−Ts/T1} e^{−Ts/T2} e[k−2] + u[k−1]
Different approaches exist for discrete controller design [9, 11, 12]. One DT design method is to start by designing a continuous controller considering the extra delay, and then calculating its discretized form. A second method is the design of a discrete controller considering the pulse transfer function of the process, ensuring a prescribed resulting pulse transfer function of the closed-loop control system. A third method is the design of a discrete PID controller based on frequency-domain considerations of the sampled system. This method is discussed here. According to Table 6.2, determine the pulse transfer function of the process. The pole cancellation technique can be applied in the z-domain. In case of lag elements, with a PI controller the pole corresponding to the largest time constant is cancelled and an integrating effect is introduced instead. With the PD effect (which in the discrete case can be ideal, as it is realizable), instead of a large time constant a smaller one can be realized.
With a PID controller both effects are utilized. The gain of the controller is determined to ensure the required phase margin. This would approximately ensure the decreased cut-off frequency given above. The pulse transfer functions of the PID controllers and their corresponding difference equations, providing a real-time algorithm, are given in Table 6.5.

As an example, design a discrete time PID control for the process with two time lags and a delay, considered also in the continuous control case. The sampling time is chosen as Ts = 2.5 s. The pulse transfer function of the process is

$$G(z) = \frac{0.048929\,(z + 0.7788)}{(z - 0.7788)(z - 0.6065)}\, z^{-2}.$$

The pulse transfer function of the PID controller using the pole cancellation technique is

$$C(z) = k_c\,\frac{(z - 0.7788)(z - 0.6065)}{(z - 1)\, z}.$$
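The corresponding difference equation of Table 6.5 (with e^{−Ts/T1} = 0.7788 and e^{−Ts/T2} = 0.6065) can be realized as a real-time recursive algorithm. The following is a minimal sketch (not the authors' code); kc is left as a parameter, and the value kc ≈ 2.2 is selected just below:

```python
class DiscretePID:
    """u[k] = kc e[k] - kc (p1 + p2) e[k-1] + kc p1 p2 e[k-2] + u[k-1]   (Table 6.5, PID row)."""
    def __init__(self, kc, p1, p2):
        self.kc, self.p1, self.p2 = kc, p1, p2
        self.e1 = self.e2 = self.u1 = 0.0          # stored e[k-1], e[k-2], u[k-1]

    def step(self, e):                              # call once every sampling period Ts
        u = (self.kc * e - self.kc * (self.p1 + self.p2) * self.e1
             + self.kc * self.p1 * self.p2 * self.e2 + self.u1)
        self.e2, self.e1, self.u1 = self.e1, e, u   # shift the stored past values
        return u

ctrl = DiscretePID(kc=2.2, p1=0.7788, p2=0.6065)
u0 = ctrl.step(1.0)                                 # first control value for a unit step error
```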
The gain of the controller is chosen to ensure a phase margin of about 60°, kc ≈ 2.2. The simulation result in Fig. 6.19 shows the step responses of the output and the control signal.

Fig. 6.19 Control and output signals in a sampled data control system

Analysis and design of control systems and simulation of the results are supported by software systems. The MATLAB/SIMULINK program can be used effectively in the analysis and design of control systems. Besides PID controllers, advanced control methods are gaining more and more interest and practical acceptance. Especially in discrete control, several other methods can be used and realized by computer programs. For example, predictive control, Youla parameterization, the Smith predictor, and internal model control provide methods ensuring faster behavior in case of systems containing large delays, better disturbance rejection, and robust performance. Control methods based on state space models provide optimal control by state feedback. In this case not only the output signal but also all the state variables (or, if not measurable, their estimated values) are fed back, ensuring the required control performance.

6.11 Responses of MIMO Systems and "Abilities"

In this Section, continuous-time linear systems with multiple inputs and multiple outputs (MIMO systems) will be considered. As far as the mathematical models are concerned, transfer functions, state-space models, and matrix fraction descriptions will be used [17].

6.11.1 Transfer Function Models

Consider a linear process with nu control inputs arranged in a u ∈ R^{nu} input vector and ny outputs arranged in a y ∈ R^{ny} output vector. Then the transfer function matrix contains all possible transfer functions between any of the inputs and any of the outputs:

$$y(s) = \begin{bmatrix} y_1(s) \\ \vdots \\ y_{n_y}(s) \end{bmatrix} = G(s)\,u(s) = \begin{bmatrix} G_{1,1}(s) & \cdots & G_{1,n_u}(s) \\ \vdots & \ddots & \vdots \\ G_{n_y,1}(s) & \cdots & G_{n_y,n_u}(s) \end{bmatrix}\begin{bmatrix} u_1(s) \\ \vdots \\ u_{n_u}(s) \end{bmatrix},$$

where s is the Laplace operator and G_{k,ℓ}(s) denotes the transfer function from the ℓth component of the input u to the kth component of the output y. The transfer function approach has always been an emphasized modeling tool for control practice. One of the reasons is that the G_{k,ℓ}(s) transfer functions deliver the magnitude and phase frequency functions via the formal substitution G_{k,ℓ}(s)|_{s=jω} = A_{k,ℓ}(ω) e^{jφ_{k,ℓ}(ω)}. Note that for real physical processes lim_{ω→∞} A_{k,ℓ}(ω) = 0. The transfer function matrix G(s) is stable if each of its elements is a stable transfer function. Also, the transfer function matrix G(s) will be called proper if each of its elements is a proper transfer function.
6.11.2 State-Space Models

Introducing nx state variables arranged in an x ∈ R^{nx} state vector, the state-space model of a MIMO system is given by the following equations:

$$\dot{x}(t) = Ax(t) + Bu(t)$$
$$y(t) = Cx(t) + Du(t),$$

where A ∈ R^{nx×nx}, B ∈ R^{nx×nu}, C ∈ R^{ny×nx}, and D ∈ R^{ny×nu} are the system parameters [18, 19]. The various "abilities" such as controllability, observability, reachability, constructability, stabilizability, and detectability are important features of the system [10, 20]. Roughly speaking, controllability is the ability of the system to reach any desired state with an appropriate system input in finite time, assuming an arbitrary initial state. Observability is the ability to determine the internal system behavior using observations, i.e., the ability to determine the internal system states from the external input and the output observations. Let us see these terms more precisely. A system is controllable if there exists an input that moves the state from an arbitrary x(t0) to the origin in finite time. A system is reachable if
there exists an input that moves the state from the origin to an arbitrary state x(tf) in finite time. A system is observable if the initial state x(t0) of the system can be determined from the inputs and outputs observed in the time interval t0 ≤ t ≤ tf. A system is constructable if the present state of the system x(tf) can be determined from the present and past inputs and outputs observed in the interval t0 ≤ t ≤ tf.

Using the above-defined abilities a linear time-invariant system can be decomposed into observable and reachable subsystems. A system may have reachable and observable ($x_{ro}$), reachable and unobservable ($x_{r\bar{o}}$), unreachable and observable ($x_{\bar{r}o}$), and unreachable and unobservable ($x_{\bar{r}\bar{o}}$) states. Applying this state separation, the state space description can be transformed into the so-called Kalman decomposition form as follows:

$$\begin{bmatrix} \dot{x}_{r\bar{o}} \\ \dot{x}_{ro} \\ \dot{x}_{\bar{r}\bar{o}} \\ \dot{x}_{\bar{r}o} \end{bmatrix} = \begin{bmatrix} A_{r\bar{o}} & A_{r\bar{o},ro} & A_{r\bar{o},\bar{r}\bar{o}} & A_{r\bar{o},\bar{r}o} \\ 0 & A_{ro} & 0 & A_{ro,\bar{r}o} \\ 0 & 0 & A_{\bar{r}\bar{o}} & A_{\bar{r}\bar{o},\bar{r}o} \\ 0 & 0 & 0 & A_{\bar{r}o} \end{bmatrix}\begin{bmatrix} x_{r\bar{o}} \\ x_{ro} \\ x_{\bar{r}\bar{o}} \\ x_{\bar{r}o} \end{bmatrix} + \begin{bmatrix} B_{r\bar{o}} \\ B_{ro} \\ 0 \\ 0 \end{bmatrix}u$$

$$y = \begin{bmatrix} 0 & C_{ro} & 0 & C_{\bar{r}o} \end{bmatrix}\begin{bmatrix} x_{r\bar{o}} \\ x_{ro} \\ x_{\bar{r}\bar{o}} \\ x_{\bar{r}o} \end{bmatrix} + Du$$

If all components of the state variable are measured, the eigenvalues of A, i.e., the poles of the system, can be changed by state feedback. Using the linear feedback

$$u = -Kx + u_r$$

the closed-loop system is given by

$$\dot{x} = (A - BK)x + Bu_r.$$

{A, B} is stabilizable if there exists K such that A − BK has all eigenvalues with negative real part. If not all states are measured, the states can be estimated by an observer:

$$\dot{\hat{x}} = A\hat{x} + Bu + L\left(y - \hat{y}\right), \qquad \hat{y} = C\hat{x} + Du.$$

{A, C} is detectable if there exists L such that A − LC has all eigenvalues with negative real part. In this case the error $x - \hat{x}$ goes to zero. State feedback and state observer exhibit dual properties and share some common structural features. Comparing Fig. 6.20a, b it is seen that the structure of the state feedback control and that of the full-order observer resemble each other to a large extent. The output signal, as well as the L and C matrices in the observer, play the same role as the control signal, as well as the B and K matrices, do in the state feedback control. The parameters in matrix L and the parameters in matrix K are to be freely adjusted for the observer and for the state feedback control, respectively. In a sense, calculating the controller and observer feedback gain matrices represents dual problems. In this case duality means that any of the structures shown in Fig. 6.20a, b can be turned into its dual form by reversing the direction of the signal propagation, interchanging the input and output signals (u ↔ y), and transforming the summation points to signal nodes and vice versa.

Fig. 6.20 (a, b) Duality of state control and state estimation

Finally, let us see the relation between the transfer function and the state-space descriptions. Using the Laplace transforms in the state-space model equations, the relation between the state-space model and the transfer function matrix can easily be derived as

$$G(s) = C(sI - A)^{-1}B + D.$$

As far as the above relation is concerned, the condition lim_{ω→∞} A_{k,ℓ}(ω) = 0 raised for real physical processes leads to D = 0. Note that the G(s) transfer function contains only the controllable and observable subsystem represented by the state-space model {A, B, C, D}.
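A small numerical sketch (assumed example data, not from the handbook) of checking reachability and observability by matrix ranks and of assigning poles by state feedback and, dually, by an observer gain:

```python
import numpy as np
from scipy import signal

# Assumed 3rd-order single-input, single-output example
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-6.0, -11.0, -6.0]])
B = np.array([[0.0], [0.0], [1.0]])
C = np.array([[1.0, 0.0, 0.0]])
n = A.shape[0]

# Reachability matrix [B, AB, A^2B] and observability matrix [C; CA; CA^2]
ctrb = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
obsv = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])
print(np.linalg.matrix_rank(ctrb), np.linalg.matrix_rank(obsv))   # 3 3 -> fully reachable and observable

# State feedback u = -Kx places the closed-loop poles; the observer gain L is obtained by duality
K = signal.place_poles(A, B, [-2.5, -3.5, -4.5]).gain_matrix
L = signal.place_poles(A.T, C.T, [-8.0, -9.0, -10.0]).gain_matrix.T
print(np.linalg.eigvals(A - B @ K))    # eigenvalues of A - BK at the assigned locations
print(np.linalg.eigvals(A - L @ C))    # eigenvalues of A - LC at the assigned locations
```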
6.11.3 Matrix Fraction Description

Transfer functions can be factorized in several ways. As matrices, in general, do not commute, the MFD (Matrix Fraction Description) form exists as a result of right and left factorization, respectively [10, 21, 22]:

$$G(s) = B_R(s)A_R^{-1}(s) = A_L^{-1}(s)B_L(s),$$

where A_R(s), B_R(s), A_L(s), and B_L(s) are all stable transfer function matrices. In [10] it is shown that the right and left MFDs can be related to stabilizable and detectable state-space models, respectively. To outline the procedure, consider first the Right Matrix Fraction Description (RMFD) G(s) = B_R(s)A_R^{-1}(s). For the sake of simplicity, the practical case of D = 0 will be considered. Assuming that {A, B} is stabilizable, apply a state feedback to stabilize the closed-loop system using a gain matrix K ∈ R^{nu×nx}: u(t) = −Kx(t); then the RMFD components can be derived in a straightforward way as

$$B_R(s) = C(sI - A + BK)^{-1}B$$
$$A_R(s) = I - K(sI - A + BK)^{-1}B.$$

It can be shown that G(s) = B_R(s)A_R^{-1}(s) will not be a function of the stabilizing gain matrix K; however, the proof is rather involved [10]. Also, following the above procedure both B_R(s) and A_R(s) will be stable transfer function matrices. In a similar way, assuming that {A, C} is detectable, apply a state observer to detect the closed-loop system using a gain matrix L. Then the LMFD components can be obtained as

$$B_L(s) = C(sI - A + LC)^{-1}B$$
$$A_L(s) = I - C(sI - A + LC)^{-1}L$$

being stable transfer function matrices. Again, G(s) = A_L^{-1}(s)B_L(s) will be independent of L. Concerning the coprime factorization, an important relation, the Bezout identity, will be used, which holds for the components of the RMFD and LMFD coprime factorization:

$$\begin{bmatrix} X_L(s) & Y_L(s) \\ -B_L(s) & A_L(s) \end{bmatrix}\begin{bmatrix} A_R(s) & -Y_R(s) \\ B_R(s) & X_R(s) \end{bmatrix} = \begin{bmatrix} A_R(s) & -Y_R(s) \\ B_R(s) & X_R(s) \end{bmatrix}\begin{bmatrix} X_L(s) & Y_L(s) \\ -B_L(s) & A_L(s) \end{bmatrix} = I,$$

where

$$Y_R(s) = K(sI - A + BK)^{-1}L$$
$$X_R(s) = I + C(sI - A + BK)^{-1}L$$
$$Y_L(s) = K(sI - A + LC)^{-1}L$$
$$X_L(s) = I + K(sI - A + LC)^{-1}B.$$

Note that the Bezout identity plays an important role in control design. A good review on that can be found in [17]. Also note that an MFD factorization can be accomplished by using the Smith-McMillan form of G(s) [17]. As a result of this procedure, however, A_R(s), B_R(s), A_L(s), and B_L(s) will be polynomial matrices. Moreover, both A_R(s) and A_L(s) will be diagonal matrices. The MFD description will be used in Sect. 6.12 for stability analysis and controller synthesis.
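A brief numerical illustration (assumed example, not the handbook's) that the RMFD components constructed from a stabilizing state feedback indeed reproduce G(s) = B_R(s)A_R^{-1}(s), checked on the frequency axis:

```python
import numpy as np
from scipy import signal

# Assumed unstable SISO plant G(s) = C(sI - A)^{-1}B with a pole at +1
A = np.array([[1.0, 1.0], [0.0, -2.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
K = signal.place_poles(A, B, [-1.0, -3.0]).gain_matrix   # stabilizing state feedback u = -Kx

F = A - B @ K                                            # stable closed-loop dynamics
w = np.logspace(-2, 2, 300)
def freq(Ass, Bss, Css, Dss):
    return signal.freqresp(signal.StateSpace(Ass, Bss, Css, Dss), w)[1]

Gw  = freq(A, B, C, np.zeros((1, 1)))                    # G(jw)
BRw = freq(F, B, C, np.zeros((1, 1)))                    # B_R(jw) = C(sI - A + BK)^{-1}B
ARw = freq(F, B, -K, np.ones((1, 1)))                    # A_R(jw) = I - K(sI - A + BK)^{-1}B
print(np.max(np.abs(Gw - BRw / ARw)))                    # ~ numerical zero: G = B_R A_R^{-1}
```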
6.12 Feedback System – Stability Issue

In general, a feedback control system follows the structure shown in Fig. 6.21, where the control configuration consists of two subsystems. In this general set-up any of the subsystems S1(s) or S2(s) may play the role of the process or the controller [4]. Here {u1, u2} and {y1, y2} are multivariable external input and output signals in a general sense, respectively. Further on, S1(s) and S2(s) represent transfer function matrices according to

$$y_1(s) = S_1(s)\left[u_1(s) + y_2(s)\right]$$
$$y_2(s) = S_2(s)\left[u_2(s) + y_1(s)\right].$$

Fig. 6.21 A general feedback configuration

Being restricted to linear systems, the closed-loop system is internally stable if and only if all four entries of the transfer function matrix

$$\begin{bmatrix} H_{11}(s) & H_{12}(s) \\ H_{21}(s) & H_{22}(s) \end{bmatrix}$$
are asymptotically stable, where

$$\begin{bmatrix} e_1(s) \\ e_2(s) \end{bmatrix} = \begin{bmatrix} u_1(s) + y_2(s) \\ u_2(s) + y_1(s) \end{bmatrix} = \begin{bmatrix} H_{11}(s) & H_{12}(s) \\ H_{21}(s) & H_{22}(s) \end{bmatrix}\begin{bmatrix} u_1(s) \\ u_2(s) \end{bmatrix} = \begin{bmatrix} [I - S_2(s)S_1(s)]^{-1} & [I - S_2(s)S_1(s)]^{-1}S_2(s) \\ [I - S_1(s)S_2(s)]^{-1}S_1(s) & [I - S_1(s)S_2(s)]^{-1} \end{bmatrix}\begin{bmatrix} u_1(s) \\ u_2(s) \end{bmatrix}.$$

Also, from

$$e_1(s) = u_1(s) + y_2(s) = u_1(s) + S_2(s)e_2(s)$$
$$e_2(s) = u_2(s) + y_1(s) = u_2(s) + S_1(s)e_1(s)$$

we have

$$\begin{bmatrix} e_1(s) \\ e_2(s) \end{bmatrix} = \begin{bmatrix} H_{11}(s) & H_{12}(s) \\ H_{21}(s) & H_{22}(s) \end{bmatrix}\begin{bmatrix} u_1(s) \\ u_2(s) \end{bmatrix} = \begin{bmatrix} I & -S_2(s) \\ -S_1(s) & I \end{bmatrix}^{-1}\begin{bmatrix} u_1(s) \\ u_2(s) \end{bmatrix},$$

so for internal stability we need the transfer function matrix

$$\begin{bmatrix} [I - S_2(s)S_1(s)]^{-1} & [I - S_2(s)S_1(s)]^{-1}S_2(s) \\ [I - S_1(s)S_2(s)]^{-1}S_1(s) & [I - S_1(s)S_2(s)]^{-1} \end{bmatrix} = \begin{bmatrix} I & -S_2(s) \\ -S_1(s) & I \end{bmatrix}^{-1}$$

to be asymptotically stable [21]. The feedback interconnection is well-defined if and only if the matrix [I − S1(s)S2(s)] is not identically singular.

In the control system literature, a more practical but still general closed-loop control scheme is shown in Fig. 6.22 with a generalized plant P(s) and controller K(s) [18, 21, 22]. In this configuration u and y represent the process input and output, respectively, w denotes external inputs (command signal, disturbance, or noise), and z is a set of signals representing the closed-loop performance in a general sense. The controller K(s) is to be adjusted to ensure a stable closed-loop system with appropriate performance.

Fig. 6.22 General control system configuration

Adopting the condition earlier developed for internal stability with S1(s) = G(s) and S2(s) = −K(s), it is seen that now we need asymptotic stability for the following four transfer functions:

$$\begin{bmatrix} [I + K(s)G(s)]^{-1} & [I + K(s)G(s)]^{-1}K(s) \\ -[I + G(s)K(s)]^{-1}G(s) & [I + G(s)K(s)]^{-1} \end{bmatrix}.$$

Using the notations y = y1, u = y2, r = u2, u1 = 0, the error signal can be expressed:

$$e(s) = r(s) - y(s) = r(s) - G(s)K(s)e(s) \;\Rightarrow\; e(s) = [I + G(s)K(s)]^{-1}r(s),$$

which leads to

$$u(s) = K(s)e(s) = K(s)[I + G(s)K(s)]^{-1}r(s) = Q(s)r(s),\quad\text{where}\quad Q(s) = K(s)[I + G(s)K(s)]^{-1}.$$

It can easily be shown that in case of a stable G(s) plant any stable Q(s) transfer function, in other words 'Q parameter', results in internal stability. Rearranging the above equation, K(s) parameterized by Q(s) exhibits all stabilizing controllers:

$$K(s) = [I - Q(s)G(s)]^{-1}Q(s).$$

This result is known as the Youla parameterization [21, 23, 52]. Recalling u(s) = Q(s)r(s) and y(s) = G(s)u(s) = G(s)Q(s)r(s) allows one to draw the block diagram of the closed-loop system explicitly using Q(s). The control scheme shown in Fig. 6.23 satisfies u(s) = Q(s)r(s) and y(s) = G(s)Q(s)r(s); moreover, the process modeling uncertainties (G(s) of the physical process and G(s) of the model, as part of the controller, are different) are also taken into account. This is the well-known Internal Model Control (IMC) scheme [24, 25].

Fig. 6.23 Internal model controller
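A small numerical check of the Youla/IMC relations for a stable SISO plant (an assumed example, not from the handbook): with Q(s) chosen as an approximate inverse of G(s), the controller K(s) = [1 − Q(s)G(s)]^{-1}Q(s) recovers exactly the designed closed loop T(s) = G(s)Q(s).

```python
import numpy as np
from scipy import signal

w = np.logspace(-3, 2, 400)
G = signal.TransferFunction([1.0], [10.0, 1.0])       # assumed plant G(s) = 1/(1 + 10s)
Q = signal.TransferFunction([10.0, 1.0], [2.0, 1.0])  # Q(s) = (1 + 10s)/(1 + 2s): approximate inverse with a filter
Gw = signal.freqresp(G, w)[1]
Qw = signal.freqresp(Q, w)[1]

Kw = Qw / (1.0 - Qw * Gw)            # Youla formula K = [1 - QG]^{-1} Q (SISO case)
T_design = Gw * Qw                   # intended closed loop T = GQ
T_loop = Gw * Kw / (1.0 + Gw * Kw)   # conventional unity-feedback loop built with K
print(np.max(np.abs(T_design - T_loop)))   # ~ numerical zero
# For this choice K(s) works out analytically to (1 + 10s)/(2s), i.e., a PI controller.
```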
A quick evaluation of the Youla parameterization should point out a fundamental difference: designing an overall transfer function T(s) from the r(s) reference signal to the y(s) output signal based on

$$y(s) = T(s)r(s) = G(s)K(s)[I + G(s)K(s)]^{-1}r(s)$$

is a nonlinear parameterization in K(s), versus a design by

$$y(s) = T(s)r(s) = G(s)Q(s)r(s),$$

which is linear in Q(s). Further analysis of the relation y(s) = T(s)r(s) = G(s)Q(s)r(s) indicates that Q(s) = G^{-1}(s) is a reasonable choice to achieve an ideal servo controller ensuring y(s) = r(s). However, setting Q(s) = G^{-1}(s) is not a practical way for several reasons [10]:

• Nonminimum-phase processes would exhibit unstable Q(s) controllers and closed-loop systems.
• Problems concerning the realization of G^{-1}(s) are immediately seen for processes with positive relative degree or time delay.
• The ideal servo property would destroy the disturbance rejection capability of the closed-loop system.
• Q(s) = G^{-1}(s) would lead to large control effort.
• Effects of errors in modeling the real process by G(s) need further analysis.

Replacing the exact inverse G^{-1}(s) by an approximate inverse may well be in harmony with practical demands. To discuss the concept of handling processes with time delay, consider SISO systems and assume that
$$G(s) = G_p(s)\,e^{-sT_d},$$

where G_p(s) = B(s)/A(s) is a proper transfer function with no time delay and Td > 0 is the time delay. Recognizing that the time delay characteristic is not invertible,

$$y(s) = T_p(s)\,e^{-sT_d}\,r(s) = G_p(s)Q(s)\,e^{-sT_d}\,r(s)$$

can be assigned as the overall transfer function to be achieved. Updating Fig. 6.23 for G(s) = (B(s)/A(s)) e^{−sTd}, Fig. 6.24 illustrates the resulting control scheme. A key point here, however, is that the parameterization by Q(s) should consider only Gp(s) to achieve the Gp(s)Q(s) specified by the designer. Note that in the model of Fig. 6.24, uncertainties in Gp(s) and in the time delay should both be taken into account when studying the closed-loop system.

Fig. 6.24 IMC control of a plant with time delay

The control scheme in Fig. 6.24 has immediately been derived by applying the Youla parameterization concept to processes with time delay. The idea, however, to let the time delay appear in the overall transfer function and restrict the design procedure to a process with no time delay is more than 50 years old and comes from Otto Smith [26]. The fundamental concept of the design procedure called the Smith predictor is to set up a closed-loop system to control the output signal predicted ahead by the time delay. Then, to meet the causality requirement, the predicted output is delayed to derive the real system output. All these conceptional steps can be summarized in a control scheme: just redraw Fig. 6.24 to Fig. 6.25 with

$$Q(s) = K(s)\left[I + G_p(s)K(s)\right]^{-1}.$$

Fig. 6.25 Controller using Smith predictor

The fact that the output of the internal loop can be considered as the predicted value of the process output explains the name of the controller (see the model output in the Figure). Note that the Smith predictor is applicable for unstable processes as well. In case of unstable plants, stabilization of the closed-loop system needs a more involved discussion. In order to separate the unstable (in a more general sense, the undesired) poles, both the plant and the controller transfer function matrices will be factorized into (right or left) coprime transfer functions as follows:
$$G(s) = B_R(s)A_R^{-1}(s) = A_L^{-1}(s)B_L(s)$$
$$K(s) = Y_R(s)X_R^{-1}(s) = X_L^{-1}(s)Y_L(s)$$

where B_R(s), A_R(s), B_L(s), A_L(s), Y_R(s), X_R(s), Y_L(s), and X_L(s) are all stable coprime transfer functions. The stability implies that B_R(s) should contain all the RHP-zeros of G(s), and A_R(s) should contain as RHP-zeros all the RHP-poles of G(s). Similar statements are valid for the left coprime pairs. As far as the internal stability analysis is concerned, assuming that G(s) is strictly proper and K(s) is proper, the coprime factorization offers the stability analysis via checking the stability of

$$\begin{bmatrix} A_R(s) & -Y_R(s) \\ B_R(s) & X_R(s) \end{bmatrix}^{-1} \quad\text{and}\quad \begin{bmatrix} X_L(s) & Y_L(s) \\ -B_L(s) & A_L(s) \end{bmatrix}^{-1},$$

respectively. According to the Bezout identity [21, 22, 27] there exist X_L(s) and Y_L(s) as stable transfer function matrices satisfying

$$X_L(s)A_R(s) + Y_L(s)B_R(s) = I.$$

In a similar way, a left coprime pair of transfer function matrices X_R(s) and Y_R(s) can be found by

$$B_L(s)Y_R(s) + A_L(s)X_R(s) = I.$$

The stabilizing K(s) = Y_R(s)X_R^{-1}(s) = X_L^{-1}(s)Y_L(s) controllers can be parameterized as follows. Assume that the Bezout identity results in a given stabilizing controller K = Y_{oR}(s)X_{oR}^{-1}(s) = X_{oL}^{-1}(s)Y_{oL}(s); then

$$X_R(s) = X_{oR}(s) - B_R(s)Q(s), \qquad Y_R(s) = Y_{oR}(s) + A_R(s)Q(s)$$
$$X_L(s) = X_{oL}(s) - Q(s)B_L(s), \qquad Y_L(s) = Y_{oL}(s) + Q(s)A_L(s)$$

delivers all stabilizing controllers parameterized by any stable proper Q(s) transfer function matrix of appropriate size. Though the algebra of the controller design procedure may seem rather involved, in terms of block diagrams it can be interpreted in several transparent ways. Herewith below, in Fig. 6.26, two possible realizations are shown to support the reader in comparing the results obtained for unstable processes with those shown earlier in Fig. 6.23 to control stable processes. Another obvious interpretation of the general design procedure can also be read out from the realizations of Fig. 6.26. Namely, the immediate loops around G(s) along Y_{oL}(s) and X_{oL}^{-1}(s), or along X_{oR}^{-1}(s) and Y_{oR}(s), respectively, stabilize the unstable plant; then Q(s) serves the parameterization in a similar way as originally introduced for stable processes.

Fig. 6.26 Two different realizations of all stabilizing controllers for unstable processes

Having the general control structure developed using LMFD or RMFD components (Fig. 6.26 gives the complete review), respectively, we are in the position to show how the control of the state-space model introduced earlier in Sect. 6.11.2 can be parameterized with Q(s). To visualize this capability recall K(s) = X_L^{-1}(s)Y_L(s) and

$$B_L(s) = C(sI - A + LC)^{-1}B$$
$$A_L(s) = I - C(sI - A + LC)^{-1}L$$

and apply these relations in the control scheme of Fig. 6.26 using LMFD components. Similarly, recall K(s) = Y_R(s)X_R^{-1}(s) and

$$B_R(s) = C(sI - A + BK)^{-1}B$$
$$A_R(s) = I - K(sI - A + BK)^{-1}B$$

and apply these relations in the control scheme of Fig. 6.26 using RMFD components. To complete the discussion on the various interpretations of the all stabilizing controllers, observe that the control schemes in Figs. 6.27 and 6.28 both use 4 × n state variables to realize the controller dynamics beyond Q(s). As Fig. 6.29 illustrates, equivalent reduction of the block diagrams of Figs. 6.27 and 6.28, respectively, both lead to the realization of the all stabilizing controllers. Observe that the application of the LMFD and RMFD components leads to identical control schemes. In addition, any of the realizations shown in Figs. 6.26, 6.27, 6.28, and 6.29 can directly be redrawn to form the general control system scheme most frequently used in the literature to summarize the structure of the Youla parameterization. This general control scheme is shown in Fig. 6.30. In fact, the state-space realization of Fig. 6.29 follows the general control scheme shown in Fig. 6.30, assuming z = 0 and w = r. The transfer function J(s) itself is realized by the state estimator and state feedback using the gain matrices L and K, as shown in Fig. 6.29. Note that Fig. 6.30 can also be derived from Fig. 6.22 by interpreting J(s) as a controller stabilizing P(s), thus allowing one to apply an additional all stabilizing Q(s) controller.

Fig. 6.27 State-space realization of all stabilizing controllers derived from LMFD components

Fig. 6.28 State-space realization of all stabilizing controllers derived from RMFD components

Fig. 6.29 State-space realization of all stabilizing controllers

Fig. 6.30 General control system using Youla parameterization
6.13
Performances for MIMO LTI Systems
So far we have derived various closed-loop structures and parameterizations attached to them only to ensure internal stability. Stability, however, is not the only issue for the control system designer. To achieve goals in terms of the closed-loop performance needs further considerations [10, 21, 24]. Just to see an example: in control design it is a widely posed requirement to ensure zero steady-state error while compensating step-like changes in the command or disturbance signals. The practical solution suggests one to insert an integrator into the loop. The same goal can be achieved while using the Youla parameterization, as well. To illustrate this action, SISO systems will be considered. Apply stable Q1 (s) and Q2 (s) transfer functions to form Q(s) = sQ1 (s) + Q2 (s).
Then Fig. 6.23 suggests the transfer function between r and r − y to be 1 − Q(s)G(s). To ensure

$$1 - \left[Q(s)G(s)\right]_{s=0} = 0$$

we need

$$Q_2(0) = \left[G(0)\right]^{-1}.$$

Alternatively, using state models, the selection according to Q(0) = [C(−A + BK + LC)^{-1}B]^{-1} will insert an integrator into the loop.

6.13.1 Control Performances

Several criteria exist to describe the required closed-loop performance. To be able to design closed-loop systems with various performance specifications, appropriate norms for the signals and systems involved should be introduced [21, 24].

Signal Norm
One possibility to characterize the closed-loop performance is to integrate various functions derived from the error signal. Assume that a generalized error signal z(t) has been constructed. Then

$$\|z(t)\|_\nu = \left(\int_0^\infty |z|^\nu\, dt\right)^{1/\nu}$$

defines the L_ν norm of z(t) with ν as a positive integer. The relatively easy calculations required for the evaluations made the L2 norm the most widely used criterion in control. A further advantage of the quadratic function is that the energy represented by a given signal can also be taken into account in this way in many cases. Moreover, applying Parseval's theorem the L2 norm can be evaluated using the signal described in the frequency domain. Namely, having z(s) as the Laplace transform of z(t),

$$z(s) = \int_0^\infty z(t)\,e^{-st}\,dt,$$

Parseval's theorem offers the following closed form to calculate the L2 norm:

$$\|z(t)\|_2 = \left\|z(s)\right|_{s=j\omega}\big\|_2 = \|z(j\omega)\|_2 = \left(\frac{1}{2\pi}\int_{-\infty}^{\infty}|z(j\omega)|^2\,d\omega\right)^{1/2}.$$

Another important selection for ν takes ν → ∞, which results in

$$\|z(t)\|_\infty = \sup_t |z(t)|$$

and it is interpreted as the largest or worst-case error.

System Norms
Frequency functions are extremely useful tools to analyze and design SISO closed-loop control systems. MIMO systems, however, exhibit an input-dependent, variable gain at a given frequency. Consider a MIMO system given by a transfer function matrix G(s), driven by an input signal w and delivering an output signal z. The norm ‖z(jω)‖ = ‖G(jω)w(jω)‖ of the system output z depends both on the magnitude and the direction of the input vector w(jω), where ‖·‖ denotes the Euclidean norm. Bounds for G(jω) are given by

$$\underline{\sigma}\left(G(j\omega)\right) \le \frac{\|G(j\omega)\,w(j\omega)\|}{\|w(j\omega)\|} \le \overline{\sigma}\left(G(j\omega)\right),$$

where $\underline{\sigma}(G(j\omega))$ and $\overline{\sigma}(G(j\omega))$ denote the infimum and supremum of the singular values of G(jω), respectively. The most frequently used system norms are the H2 and H∞ norms defined as follows:

$$\|G\|_2 = \left(\frac{1}{2\pi}\int_{-\infty}^{\infty}\mathrm{trace}\left[G^{T}(-j\omega)\,G(j\omega)\right]d\omega\right)^{1/2}$$

and

$$\|G\|_\infty = \sup_\omega \overline{\sigma}\left(G(j\omega)\right).$$

It is clear that the system norms – as induced norms – can be expressed by using signal norms. Introducing g(t) as the unit impulse response of G(s), Parseval's theorem suggests expressing the H2 system norm by the L2 signal norm:

$$\|G\|_2^2 = \int_0^\infty \mathrm{trace}\left[g^{T}(t)\,g(t)\right]dt.$$

Further on, the H∞ norm can also be expressed as

$$\|G\|_\infty = \sup_\omega \sup_w \frac{\|G(j\omega)\,w\|}{\|w\|}, \qquad w \ne 0,\; w \in \mathbb{C}^{n_w},$$

where w denotes a complex-valued vector. For a dynamic system the above expression leads to

$$\|G\|_\infty = \sup_w \frac{\|z(t)\|_2}{\|w(t)\|_2}, \qquad \|w(t)\|_2 \ne 0,$$

if G(s) is stable and proper. The above expression means that the H∞ norm can be expressed by the L2 signal norm.

Assume a linear system given by a state model {A, B, C}, and calculate its H2 and H∞ norms. Transforming the state model to a transfer function matrix

$$G(s) = C(sI - A)^{-1}B,$$

the H2 norm is obtained by

$$\|G\|_2^2 = \mathrm{trace}\left(C P_o C^{T}\right) = \mathrm{trace}\left(B^{T} P_c B\right),$$

where Pc and Po are delivered by the solution of the following Lyapunov equations:

$$A P_o + P_o A^{T} + B B^{T} = 0$$
$$P_c A + A^{T} P_c + C^{T} C = 0.$$

The calculation of the H∞ norm can be performed via an iterative procedure, where in each step an H2 norm is to be minimized. Assuming a stable system, construct the following Hamiltonian matrix:

$$H = \begin{bmatrix} A & \frac{1}{\gamma^2} B B^{T} \\ -C^{T} C & -A^{T} \end{bmatrix}.$$

For large γ the matrix H has nx eigenvalues with negative real part and nx eigenvalues with positive real part. As γ decreases, these eigenvalues eventually hit the imaginary axis. Thus

$$\|G\|_\infty = \inf_{\gamma > 0}\left\{\gamma \in \mathbb{R} : H \text{ has no eigenvalues with zero real part}\right\}.$$

Note that each step within the γ-iteration procedure is actually equivalent to solving an underlying Riccati equation. The solution of the Riccati equation will be detailed later on in Sect. 6.13.2. So far stability issues have been discussed, and signal and system norms, as performance measures, have been introduced to evaluate the overall operation of the closed-loop system. In the sequel the focus will be turned to design procedures resulting in both stable operation and expected performance. Controller design techniques to achieve appropriate performance measures via optimization procedures related to the H2 and H∞ norms will be discussed, respectively [28].
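A compact numerical sketch of both norms for an assumed state model (example data and code are mine, not the handbook's): the H2 norm via the Lyapunov equations and the H∞ norm via a bisection (γ-iteration) on the imaginary-axis eigenvalues of the Hamiltonian matrix.

```python
import numpy as np
from scipy import linalg

# Assumed stable example: G(s) = C(sI - A)^{-1}B = 1/(s^2 + 3s + 2)
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

# H2 norm from A Po + Po A^T + B B^T = 0 and Pc A + A^T Pc + C^T C = 0
Po = linalg.solve_continuous_lyapunov(A, -B @ B.T)
Pc = linalg.solve_continuous_lyapunov(A.T, -C.T @ C)
h2 = float(np.sqrt(np.trace(C @ Po @ C.T)))      # equals sqrt(trace(B^T Pc B))

# H-infinity norm by gamma-iteration on the Hamiltonian matrix
def imag_axis_eig(gamma, tol=1e-8):
    H = np.block([[A, (1.0 / gamma**2) * (B @ B.T)],
                  [-(C.T @ C), -A.T]])
    return np.any(np.abs(np.linalg.eigvals(H).real) < tol)

lo, hi = 1e-6, 1e3                               # assumed bracketing interval for gamma
for _ in range(60):                              # bisection: shrink towards the smallest feasible gamma
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if imag_axis_eig(mid) else (lo, mid)
print(h2, hi)                                    # hi converges to ||G||_inf (0.5 for this example)
```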
6.13.2 H2 Optimal Control

To start the discussion, consider the general control system configuration shown in Fig. 6.22. Describe the plant by the transfer function

$$\begin{bmatrix} z \\ y \end{bmatrix} = \begin{bmatrix} G_{11} & G_{12} \\ G_{21} & G_{22} \end{bmatrix}\begin{bmatrix} w \\ u \end{bmatrix}$$

or equivalently, by a state model

$$\begin{bmatrix} \dot{x} \\ z \\ y \end{bmatrix} = \begin{bmatrix} A & B_1 & B_2 \\ C_1 & 0 & D_{12} \\ C_2 & D_{21} & 0 \end{bmatrix}\begin{bmatrix} x \\ w \\ u \end{bmatrix}.$$

Assume that (A, B1) is controllable, (A, B2) is stabilizable, (C1, A) is observable, and (C2, A) is detectable. For the sake of simplicity, non-singular D12^T D12 and D21 D21^T matrices, as well as D12^T C1 = 0 and D21 B1^T = 0, will be considered. Using a feedback via K(s),

$$u = -K(s)\,y,$$

the closed-loop system becomes z = F(G(s), K(s)) w, where

$$F(G(s), K(s)) = G_{11}(s) - G_{12}(s)\left(I + K(s)G_{22}(s)\right)^{-1}K(s)G_{21}(s).$$

Aiming at designing optimal control in the H2 sense, the norm

$$J_2 = \left\|F\left(G(j\omega), K(j\omega)\right)\right\|_2^2$$

is to be minimized by a realizable K(s). Note that this control policy can be interpreted as a special case of the linear quadratic (LQ) control problem formulation. To show this relation, assume a weighting matrix Qx assigned to the state variables and a weighting matrix Ru assigned to the input variables. Choosing

$$C_1 = \begin{bmatrix} Q_x^{1/2} \\ 0 \end{bmatrix} \quad\text{and}\quad D_{12} = \begin{bmatrix} 0 \\ R_u^{1/2} \end{bmatrix}$$

and z = C1 x + D12 u as an auxiliary variable, the well-known LQ cost function can be reproduced with

$$Q_x = \left(Q_x^{1/2}\right)^{T} Q_x^{1/2} \quad\text{and}\quad R_u = \left(R_u^{1/2}\right)^{T} R_u^{1/2}.$$
Up to this point the feedback loop has been set up and the design problem has been formulated: find K(s) minimizing J2 = ‖F(G(jω), K(jω))‖2². Note that the optimal controller will be derived as a solution of the state-feedback problem. The optimization procedure is discussed below.

State-Feedback Problem
If all the state variables are available, then the state variable feedback

$$u(t) = -K_2 x(t)$$

is used with the gain

$$K_2 = \left(D_{12}^{T} D_{12}\right)^{-1} B_2^{T} P_c,$$

where Pc represents the positive definite or positive semi-definite solution of the Riccati equation

$$A^{T} P_c + P_c A - P_c B_2 \left(D_{12}^{T} D_{12}\right)^{-1} B_2^{T} P_c + C_1^{T} C_1 = 0.$$

According to this control law, the matrix A − B2K2 will determine the closed-loop stability. As far as the solution of the Riccati equation is concerned, an augmented problem set-up can turn this task into an equivalent eigenvalue-eigenvector decomposition (EVD). In detail, the EVD of the Hamiltonian matrix

$$H = \begin{bmatrix} A & -B_2\left(D_{12}^{T} D_{12}\right)^{-1} B_2^{T} \\ -C_1^{T} C_1 & -A^{T} \end{bmatrix}$$

will separate the eigenvectors belonging to stable and unstable eigenvalues; then the positive definite Pc matrix can be calculated from the eigenvectors belonging to the stable eigenvalues. Denote by Λ the diagonal matrix containing the stable eigenvalues and collect the associated eigenvectors into a block matrix [F; G], i.e.,

$$H\begin{bmatrix} F \\ G \end{bmatrix} = \begin{bmatrix} F \\ G \end{bmatrix}\Lambda.$$

Then it can be shown that the solution of the Riccati equation is obtained by

$$P_c = G F^{-1}.$$

At the same time, it should be noted that there exist further, numerically advanced procedures to find Pc.
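In practice the Riccati equation can also be solved directly; a minimal sketch with assumed data (not the handbook's code) using scipy.linalg.solve_continuous_are:

```python
import numpy as np
from scipy import linalg

# Assumed example: 2 states, 1 input, with C1 and D12 structured as in the LQ weighting above
A   = np.array([[0.0, 1.0], [-1.0, -0.5]])
B2  = np.array([[0.0], [1.0]])
C1  = np.array([[1.0, 0.0], [0.0, 0.0]])   # [Qx^{1/2}; 0] with Qx = diag(1, 0)
D12 = np.array([[0.0], [1.0]])             # [0; Ru^{1/2}] with Ru = 1

Q, R = C1.T @ C1, D12.T @ D12
Pc = linalg.solve_continuous_are(A, B2, Q, R)   # A^T Pc + Pc A - Pc B2 R^{-1} B2^T Pc + Q = 0
K2 = np.linalg.solve(R, B2.T @ Pc)              # K2 = (D12^T D12)^{-1} B2^T Pc
print(np.linalg.eigvals(A - B2 @ K2))           # closed-loop eigenvalues, all in the left half-plane
```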
State-Estimation Problem
The optimal state estimation (state reconstruction) is the dual of the optimal control task [17, 29]. The estimated states are derived as the solution of the following differential equation:

$$\dot{\hat{x}}(t) = A\hat{x}(t) + B_2 u(t) + L_2\left(y(t) - C_2\hat{x}(t)\right),$$

where

$$L_2 = P_o C_2^{T}\left(D_{21} D_{21}^{T}\right)^{-1}$$

and the Po matrix is the positive definite or positive semi-definite solution of the Riccati equation

$$P_o A^{T} + A P_o - P_o C_2^{T}\left(D_{21} D_{21}^{T}\right)^{-1} C_2 P_o + B_1 B_1^{T} = 0.$$

Note that the

$$A - P_o C_2^{T}\left(D_{21} D_{21}^{T}\right)^{-1} C_2$$

matrix characterizing the closed-loop system is stable, i.e., all its eigenvalues are in the left-hand half plane.

Remark 1. Putting the problem discussed so far into a stochastic environment, the above state estimation is also called the Kalman filter.

Remark 2. The gains L2 ∈ R^{nx×ny} and K2 ∈ R^{nu×nx} have been introduced and applied at earlier stages in this Chapter to create the LMFD and RMFD descriptions, respectively. Here their optimal values have been derived in the H2 sense.

Output-Feedback Problem
If the state variables are not available for feedback, the optimal control law utilizes the reconstructed states. In case of designing optimal control in the H2 sense, the control law u(t) = −K2 x(t) is replaced by u(t) = −K2 x̂(t). It is important to prove that the joint state estimation and control leads to stable closed-loop control. The proof is based on observing that the complete system satisfies the following state equations:

$$\begin{bmatrix} \dot{x} \\ \dot{x} - \dot{\hat{x}} \end{bmatrix} = \begin{bmatrix} A - B_2 K_2 & B_2 K_2 \\ 0 & A - L_2 C_2 \end{bmatrix}\begin{bmatrix} x \\ x - \hat{x} \end{bmatrix} + \begin{bmatrix} B_1 \\ B_1 - L_2 D_{21} \end{bmatrix} w.$$
The above form clearly shows that the poles introduced by the controller and those introduced by the observer are separated from each other. The concept is therefore called the separation principle. The importance of this observation lies in the fact that in the course of the design procedure the controller poles and the observer poles can be assigned independently from each other. Note that the control law u(t) = −K2 x̂(t) still exhibits an optimal controller in the sense that ‖F(G(jω), K(jω))‖2 is minimized.

6.13.3 H∞ Optimal Control

Herewith below the optimal control in the H∞ sense will be discussed. The H∞ optimal control minimizes the H∞ norm of the overall transfer function of the closed-loop system,

$$J_\infty = \left\|F\left(G(j\omega), K(j\omega)\right)\right\|_\infty,$$

using a state variable feedback with a constant gain, where F(G(s), K(s)) denotes the overall transfer function matrix of the closed-loop system [10, 21, 22]. Minimizing J∞ requires rather involved procedures. As one option, the γ-iteration has already been discussed earlier. In short, as earlier discussions on the norms pointed out, the H∞ norm can be calculated using L2 norms by

$$J_\infty = \sup_w\left\{\frac{\|z\|_2}{\|w\|_2} : \|w\|_2 \ne 0\right\}.$$

State-Feedback Problem
If all the state variables are available, then the state variable feedback u(t) = −K∞ x(t) is used with the gain K∞ minimizing J∞. Similarly to the H2 optimal control discussed earlier, in each step of the γ-iteration K∞ can be obtained via Pc as the symmetrical positive definite or positive semi-definite solution of the Riccati equation

$$A^{T} P_c + P_c A - P_c B_2\left(D_{12}^{T} D_{12}\right)^{-1} B_2^{T} P_c + \gamma^{-2} P_c B_1 B_1^{T} P_c + C_1^{T} C_1 = 0,$$

provided that the matrix

$$A - B_2\left(D_{12}^{T} D_{12}\right)^{-1} B_2^{T} P_c + \gamma^{-2} B_1 B_1^{T} P_c$$

represents a stable system (all the eigenvalues are in the left-hand half plane). Once the Pc belonging to the minimal γ value has been found, the state variable feedback is realized by using the feedback gain matrix

$$K_\infty = \left(D_{12}^{T} D_{12}\right)^{-1} B_2^{T} P_c.$$

State-Estimation Problem
The optimal state estimation in the H∞ sense requires minimizing

$$J_{\infty o} = \sup_w\left\{\frac{\|z - \hat{z}\|_2}{\|w\|_2} : \|w\|_2 \ne 0\right\}$$

as a function of L. Again, the minimization can be performed by γ-iteration. Specifically, γmin is looked for to satisfy J∞o < γ with γ > 0 for all w. To find the optimal L∞ gain, the symmetrical positive definite or positive semi-definite solution of the following Riccati equation is required:

$$P_o A^{T} + A P_o - P_o C_2^{T}\left(D_{21} D_{21}^{T}\right)^{-1} C_2 P_o + \gamma^{-2} P_o C_1^{T} C_1 P_o + B_1 B_1^{T} = 0,$$

providing that

$$A - P_o C_2^{T}\left(D_{21} D_{21}^{T}\right)^{-1} C_2 + \gamma^{-2} P_o C_1^{T} C_1$$

represents a stable system, i.e., it has all its eigenvalues in the left-hand half plane. Finding the solution Po belonging to the minimal γ value, the optimal feedback gain matrix is obtained by

$$L_\infty = P_o C_2^{T}\left(D_{21} D_{21}^{T}\right)^{-1}.$$

Then

$$\dot{\hat{x}}(t) = A\hat{x}(t) + B_2 u(t) + L_\infty\left(y(t) - C_2\hat{x}(t)\right)$$

is in complete harmony with the filtering procedure obtained earlier for the state reconstruction in the H2 sense.

Output-Feedback Problem
If the state variables are not available for feedback, then a K(s) controller satisfying J∞ < γ is looked for. This controller, similarly to the procedure followed by the H2 optimal control design, can be determined in two phases: first the unavailable states are estimated, then state feedback driven by the estimated states is realized. As far as the state feedback is concerned, similarly to the H2 optimal control law, the H∞ optimal control is accomplished by

$$u(t) = -K_\infty\hat{x}(t).$$

However, the H∞ optimal state estimation is more involved than the H2 optimal state estimation. Namely, the H∞ optimal state estimation includes the worst-case estimation of the exogenous input w, and the feedback matrix L∞ needs to be
modified, too. The modified feedback matrix L*∞ takes the following form:

$$L_\infty^{*} = \left(I - \gamma^{-2} P_o P_c\right)^{-1} L_\infty = \left(I - \gamma^{-2} P_o P_c\right)^{-1} P_o C_2^{T}\left(D_{21} D_{21}^{T}\right)^{-1},$$

then the state estimation applies the above gain according to

$$\dot{\hat{x}}(t) = \left(A + B_1\gamma^{-2}B_1^{T}P_c\right)\hat{x}(t) + B_2 u(t) + L_\infty^{*}\left(y(t) - C_2\hat{x}(t)\right).$$

Reformulating the above results into a transfer function form gives

$$K(s) = K_\infty\left(sI - A - B_1\gamma^{-2}B_1^{T}P_c + B_2 K_\infty + L_\infty^{*}C_2\right)^{-1}L_\infty^{*}.$$

The above K(s) controller satisfies the norm inequality ‖F(G(jω), K(jω))‖∞ < γ and it results in a stable control strategy if the following three conditions are satisfied [22]:

• Pc is a symmetrical positive semi-definite solution of the algebraic Riccati equation
$$A^{T} P_c + P_c A - P_c B_2\left(D_{12}^{T} D_{12}\right)^{-1} B_2^{T} P_c + \gamma^{-2} P_c B_1 B_1^{T} P_c + C_1^{T} C_1 = 0,$$
provided that $A - B_2\left(D_{12}^{T} D_{12}\right)^{-1} B_2^{T} P_c + \gamma^{-2} B_1 B_1^{T} P_c$ is stable.

• Po is a symmetrical positive semi-definite solution of the algebraic Riccati equation
$$P_o A^{T} + A P_o - P_o C_2^{T}\left(D_{21} D_{21}^{T}\right)^{-1} C_2 P_o + \gamma^{-2} P_o C_1^{T} C_1 P_o + B_1 B_1^{T} = 0,$$
provided that $A - P_o C_2^{T}\left(D_{21} D_{21}^{T}\right)^{-1} C_2 + \gamma^{-2} P_o C_1^{T} C_1$ is stable.

• The largest eigenvalue of Pc Po is smaller than γ²: ρ(Pc Po) < γ².

The H∞ optimal output feedback control design procedure minimizes the ‖F(G(jω), K(jω))‖∞ norm via γ-iteration, and while γmin is looked for, all three conditions above should be satisfied. The optimal control in the H∞ sense is accomplished by the K(s) belonging to γmin. The H∞ control can efficiently be solved by using the LMI technique. Its details can be found, e.g., in [23].

Remark: Now that we are ready to design optimal controllers in the H2 or H∞ sense, respectively, it is worth devoting a minute to analyzing what can be expected from these procedures. To compare the nature of the H2 vs. H∞ norms, a relation for ‖G‖2 should be found in which the ‖G‖2 norm is expressed by the singular values. It can be shown that

$$\|G\|_2 = \left(\frac{1}{2\pi}\int_{-\infty}^{\infty}\sum_i\sigma_i^2\left(G(j\omega)\right)d\omega\right)^{1/2}.$$

Comparing now the above expression with

$$\|G\|_\infty = \sup_\omega\overline{\sigma}\left(G(j\omega)\right),$$

it is seen that ‖G‖∞ represents the largest possible singular value, while ‖G‖2 represents the square root of the sum of the squares of all the singular values over all frequencies [22].
6.14
Robust Stability and Performance
When designing control systems, the design procedure needs a model of the process to be controlled. So far it has been assumed that the design is based on a perfect model of the process. Stability analysis based on the nominal process model is qualified as nominal stability (NS) analysis. Similarly, closed-loop performance analysis based on the nominal process model is qualified as nominal performance (NP) analysis. It is evident, however, that some uncertainty is always present in the model. Moreover, an important purpose of using feedback is precisely to reduce the effects of the uncertainty involved in the model. The classical approach introduced the notions of the phase margin and gain margin as measures to handle uncertainty. However, these measures are rather crude and can be contradictory [21, 30]. Though they work fine in a number of practical applications, they are not capable of supporting the design for processes exhibiting unusual frequency behavior (e.g., slightly damped poles). The post-modern era of control theory put special emphasis on modeling uncertainties. Specifically, wide classes of structured, as well as additively or multiplicatively unstructured, uncertainties have been introduced and taken into account in the design procedure. Modeling, analysis, and synthesis methods have been developed under the name robust control [24, 31, 32]. Note that the LQR design method inherits some measure of robustness; however, in general the pure structure of the LQR regulators does not guarantee stability margins [17, 22]. The uncertainties of the description are often modelled by unstructured and structured perturbations. If unstructured uncertainties are assumed, the elements of the perturbation matrix are free. Applying structured uncertainties, the perturbation matrix may have a predefined structure. As far as the unstructured uncertainties are concerned, let $G_0(s)$ denote the nominal transfer function matrix of the process. Then the true plant behavior can be expressed by

$$G(s) = G_0(s) + \Delta_a(s),$$
$$G(s) = G_0(s)\left(I + \Delta_i(s)\right),$$

$$G(s) = \left(I + \Delta_o(s)\right) G_0(s),$$

where $\Delta_a(s)$ represents an additive perturbation, $\Delta_i(s)$ an input multiplicative perturbation, and $\Delta_o(s)$ an output multiplicative perturbation. All the above three perturbation models can be transformed to a common form:

$$G(s) = G_0(s) + W_1(s)\,\Delta(s)\,W_2(s), \qquad \text{where } \|\Delta(s)\|_\infty \le 1.$$

Uncertainties extend the general control system configuration outlined earlier in Fig. 6.22. The nominal plant is now extended by a block representing the uncertainties, and the feedback is still applied in parallel, as Fig. 6.31 shows. This standard model "removes" $\Delta(s)$, as well as the $K(s)$ controller, from the closed-loop system and lets $P(s)$ represent the rest of the components. It may involve additional output signals ($z$) and a set of external signals ($w$) including the set point. Using a priori knowledge of the plant, the concept of uncertainty modeling can be improved further. Separating identical and independent technological components into groups, the perturbations can be expressed as structured uncertainties

$$\Delta(s) = \mathrm{diag}\left(\Delta_1(s), \Delta_2(s), \ldots, \Delta_r(s)\right), \qquad \|\Delta_i(s)\|_\infty \le 1, \quad i = 1, \ldots, r.$$
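As a small illustration of the common perturbation form above, the magnitude of an input multiplicative perturbation can be estimated on a frequency grid from samples of admissible plant behaviors. The routine below is only a sketch and the plant family used in it is hypothetical:

```python
import numpy as np

def multiplicative_uncertainty_bound(G0, G_samples, omegas):
    """For SISO frequency-response callables, estimate an upper bound on the
    multiplicative perturbation |Delta(jw)| = |G/G0 - 1| over a frequency grid."""
    bound = np.zeros(len(omegas))
    for i, w in enumerate(omegas):
        g0 = G0(1j * w)
        bound[i] = max(abs(G(1j * w) / g0 - 1.0) for G in G_samples)
    return bound

# Hypothetical example: a nominal first-order lag and a family of perturbed gains/lags
G0 = lambda s: 1.0 / (s + 1.0)
family = [lambda s, k=k, T=T: k / (T * s + 1.0) for k in (0.9, 1.1) for T in (0.8, 1.2)]
w = np.logspace(-2, 2, 200)
print(multiplicative_uncertainty_bound(G0, family, w).max())
```

The resulting frequency-dependent bound is what the weights $W_1(s)$ and $W_2(s)$ are chosen to cover.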
Structured uncertainties clearly lead to a less conservative design, as unstructured uncertainties may also account for perturbations that never occur in practice. Consider the following control system (see Fig. 6.32) as one possible realization of the standard model shown in Fig. 6.31. As a matter of fact, the common form of the perturbations is used here. The derivation is also straightforward from the standard form of Fig. 6.31 with $z = 0$ and $w = r$. Handling the nominal plant and the feedback as one single unit described by $R(s) = \left(I + K(s)G_0(s)\right)^{-1} K(s)$, the condition for robust stability can easily be derived by applying the small gain theorem (see Fig. 6.33). The small gain theorem is the most fundamental result in robust stabilization under unstructured perturbations. According to the small gain theorem, any closed-loop system consisting of two stable subsystems $G_1(s)$ and $G_2(s)$ is stable provided that $\|G_1(j\omega)\|_\infty \|G_2(j\omega)\|_\infty < 1$.
Applying the small gain theorem to the stability analysis of the system shown in Fig. 6.33, the condition $\|W_2(j\omega)\, R(j\omega)\, W_1(j\omega)\, \Delta(j\omega)\|_\infty < 1$ guarantees closed-loop stability. As $\|W_2(j\omega) R(j\omega) W_1(j\omega) \Delta(j\omega)\|_\infty \le \|W_2(j\omega) R(j\omega) W_1(j\omega)\|_\infty \|\Delta(j\omega)\|_\infty$ and $\|\Delta(j\omega)\|_\infty \le 1$,
Fig. 6.31 Standard model of control system extended by uncertainties
Fig. 6.32 Control system with uncertainties
Fig. 6.33 Control system with uncertainties
thus the stability condition reduces to

$$\|W_2(j\omega)\, R(j\omega)\, W_1(j\omega)\|_\infty < 1.$$

To support the closed-loop design procedure for robust stability, introduce the $\gamma$ norm by $\|W_2(j\omega) R(j\omega) W_1(j\omega)\|_\infty = \gamma < 1$. Finding a $K(s)$ such that this $\gamma$ norm is kept at its minimum, the maximally robust stable controller can be constructed. The resulting closed-loop performance can be rather conservative when only unstructured information on the uncertainties is available. To avoid this drawback in the design procedure, the so-called structured singular value is used instead of the $H_\infty$ norm (which equals the maximum of the singular value). The structured singular value of a matrix $M$ is defined as follows:

$$\mu_\Delta(M) \triangleq 1 \big/ \min\left\{k \;:\; \det\left(I - k M \Delta\right) = 0 \text{ for structured } \Delta,\; \bar{\sigma}(\Delta) \le 1\right\},$$

where $\Delta$ has a block-diagonal form $\Delta = \mathrm{diag}\left(\ldots, \Delta_i, \ldots\right)$ and $\bar{\sigma}(\Delta) \le 1$. This definition suggests the following interpretation: a large value of $\mu_\Delta(M)$ indicates that even a small perturbation can make the matrix $I - M\Delta$ singular. On the other hand, a small value of $\mu_\Delta(M)$ represents favorable conditions in this sense. The structured singular value can be considered as a generalization of the maximal singular value [21, 22]. Using the notion of the structured singular value, the condition for robust stability (RS) can be formulated as follows. Robust stability of the closed-loop system is guaranteed if the maximum of the structured singular value of $W_2(j\omega)R(j\omega)W_1(j\omega)$ lies within the unit uncertainty radius:

$$\sup_\omega \mu_\Delta\left(W_2(j\omega)\, R(j\omega)\, W_1(j\omega)\right) < 1.$$
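The unstructured robust stability test derived above can be evaluated numerically on a frequency grid. The following sketch assumes the loop data are available as frequency-response callables; all names and matrices here are illustrative, and a finite grid can of course only indicate, not prove, that the supremum stays below one:

```python
import numpy as np

def robust_stability_margin(K, G0, W1, W2, omegas):
    """Evaluate sup_w sigma_max(W2(jw) R(jw) W1(jw)) with R = (I + K G0)^(-1) K.
    K, G0, W1, W2 are callables returning complex matrices at s = j*w."""
    worst = 0.0
    for w in omegas:
        s = 1j * w
        Kw, Gw = K(s), G0(s)
        M = Kw @ Gw
        R = np.linalg.solve(np.eye(M.shape[0]) + M, Kw)          # (I + K G0)^(-1) K
        sigma_max = np.linalg.svd(W2(s) @ R @ W1(s), compute_uv=False)[0]
        worst = max(worst, sigma_max)
    return worst   # robust stability against unstructured Delta is indicated if this is < 1
```

For structured uncertainty the singular value in the loop above would be replaced by an upper bound on $\mu_\Delta$, which is what the DK-iteration mentioned below computes.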
Designing robust control systems is a far more involved task than testing robust stability. The design procedure minimizing the supremum of the structured singular value is called structured singular value synthesis or μ synthesis. At this moment there is no direct method to synthesize a μ optimal controller. Related algorithms to perform the minimization are discussed in the literature under the term DK-iteration [22]. In [22], not only a detailed discussion is presented but also a MATLAB program is shown to provide better understanding of the iterations to improve the robust performance conditions. So far the robust stability issue has been discussed in this Section. It has been shown that the closed-loop system remains stable, i.e., it is robustly stable, if stability is guaranteed for all possible uncertainties. In a similar way, the notion of the robust performance (RP) is to be worked out.
The closed-loop system exhibits robust performance if the performance measures are kept within a prescribed limit for all possible uncertainties, including the worst case. Design considerations for robust performance have been illustrated in Fig. 6.31. As the system performance is represented by the signal $z$, robust performance analysis is based on investigating the relation between the external input signal $w$ and the performance output $z$:

$$z = F\left(G(s), K(s), \Delta(s)\right) w.$$

Defining a performance measure on the transfer function matrix $F(G(s), K(s), \Delta(s))$ by $J\left(F(G, K, \Delta)\right)$, the performance of the transfer function from the exogenous inputs $w$ to the outputs $z$ can be calculated. The maximum of this performance measure, even in the worst case possibly delivered by the uncertainties, can be evaluated by

$$\sup\left\{J\left(F(G, K, \Delta)\right) : \|\Delta\|_\infty < 1\right\}.$$
Based on this value, the robust performance of the system can be judged. If the robust performance analysis is to be performed in the $H_\infty$ sense, the measure to be applied is $J_\infty = \|F(G(j\omega), K(j\omega), \Delta(j\omega))\|_\infty$. In this case, the prespecified performance can be normalized and the limit can be selected as unity. Equivalently, the robust performance requirement can be formulated as the following inequality:

$$\|F\left(G(j\omega), K(j\omega), \Delta(j\omega)\right)\|_\infty < 1, \qquad \forall\, \|\Delta(j\omega)\|_\infty \le 1.$$
Robust performance analysis can formally be traced back to robust stability analysis. In this case a fictitious $\Delta_p$ uncertainty block representing the nominal performance requirements should be inserted across $w$ and $z$ (see Fig. 6.34). Then, introducing the extended matrix of uncertainties

$$\tilde{\Delta} = \begin{bmatrix} \Delta_p & 0 \\ 0 & \Delta \end{bmatrix}$$

gives a convenient way to trace the robust performance problem back to the robust stability problem. If robust performance synthesis is used, the performance measure must be minimized. In this case the $\mu$ optimal design problem can be solved as an extended robust stability design problem.
Fig. 6.34 Design for robust performance traced back to robust stability
6.15 LMI in Control Engineering
There exist several control problems with no analytical solution. Consequently, only numerical solutions are available for this kind of problem. In the last few decades, a number of extremely effective tools have been elaborated to solve convex optimization problems [33, 34]. These tools can directly be applied to solve various control problems [35–37]. As far as the usability of these tools is concerned, the primary dilemma is not the linearity-nonlinearity, but rather the convexity-nonconvexity issue. The application of interior point methods provides solutions for a number of convex (quasi-convex) optimization problems in polynomial time. Just to mention a few of these problems:

• LMI feasibility problem: here $x$ is looked for such that $A(x) * 0$ holds, where $A$ is a symmetrical matrix affine in $x$, and $*$ stands for a relation ($>$, $\ge$, $\le$ or $<$).

• Generalized eigenvalue problem (GEVP): $\min\left\{\lambda : B(x) < \lambda\, C(x),\; C(x) > 0,\; A(x) < 0\right\}$, where $A$, $B$, and $C$ are affine symmetrical matrices of $x$. Such an extension in the constraints results in special bilinear constraints; however, the resulting quasi-convex problem still remains solvable in polynomial time.

Stability. According to the Lyapunov theory, the system $\dot{x} = f(x)$ is stable if a function $V(x)$ can be found such that $V(x) > 0$ for all $x \ne 0$ and $\dot{V}(x) = \left(\partial V / \partial x\right)^T f(x) < 0$ for all $x \ne 0$. This is a sufficient condition for stability. For linear systems $\dot{x} = Ax$ we can choose $V(x) = x^T P x / 2$ as a Lyapunov function, where $P$ is a positive definite symmetrical matrix ($P > 0$). The stability condition can be formulated as $\dot{V}(x) = x^T \left(A^T P + P A\right) x \le 0$, i.e., $A^T P + P A < 0$. If a $P$ matrix satisfying $P^T = P > 0$ and $A^T P + P A < 0$ can be found, stability holds. However, finding an appropriate $P$ is an LMI feasibility problem, as the constraints on the elements of the $P$ matrix are linear. Expressed in another way: if all the eigenvalues of matrix $A$ exhibit negative real parts, a $P$ satisfying $P A + A^T P < 0$ and $P = P^T > 0$ can be found. The problem of allocating the eigenvalues can be generalized to detect the location of regional poles. All the eigenvalues of $A$ are found in a region determined by the matrix inequality $D := \{s \in \mathbb{C} : M + sN + s^* N^T < 0\}$, where $M$ is a real symmetrical matrix, while $N$ is a real matrix, if $P$ can be found to satisfy the following matrix inequality:
$$M \otimes P + N \otimes (PA) + N^T \otimes \left(A^T P\right) < 0 \quad \text{and} \quad P = P^T > 0.$$

Here $\otimes$ denotes the Kronecker product. The decay rate is defined to be the largest $\alpha$ such that

$$\lim_{t \to \infty} e^{\alpha t} x(t) = 0.$$

To determine the decay rate, the following equivalent problem is to be solved:

$$\sup_{P, \alpha} \left\{\alpha : A^T P + P A + 2\alpha P < 0,\; P = P^T > 0\right\}.$$

Though the above problem is bilinear both in $P$ and $\alpha$, it still can be solved as an LMIP (GEVP) problem.
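As an illustration of how such problems are posed in a convex-optimization toolbox, the following sketch (using cvxpy rather than the MATLAB/LMI tools cited in this Section; non-strict inequalities with small margins stand in for the strict ones) solves the Lyapunov feasibility problem and bisects on α to estimate the decay rate:

```python
import numpy as np
import cvxpy as cp

def lyapunov_feasible(A, margin=1e-6):
    """LMI feasibility problem: find P = P^T > 0 with A^T P + P A < 0."""
    n = A.shape[0]
    P = cp.Variable((n, n), symmetric=True)
    cons = [P >> margin * np.eye(n), A.T @ P + P @ A << -margin * np.eye(n)]
    prob = cp.Problem(cp.Minimize(0), cons)
    prob.solve(solver=cp.SCS)
    return prob.status in ("optimal", "optimal_inaccurate"), P.value

def decay_rate(A, alpha_hi=10.0, iters=40):
    """GEVP by bisection: for a fixed alpha the constraint is an LMI in P,
    so the largest feasible alpha is found by bisection."""
    n = A.shape[0]
    lo, hi = 0.0, alpha_hi
    for _ in range(iters):
        alpha = 0.5 * (lo + hi)
        P = cp.Variable((n, n), symmetric=True)
        cons = [P >> np.eye(n), A.T @ P + P @ A + 2 * alpha * P << 0]
        prob = cp.Problem(cp.Minimize(0), cons)
        prob.solve(solver=cp.SCS)
        if prob.status in ("optimal", "optimal_inaccurate"):
            lo = alpha          # feasible: the decay rate is at least alpha
        else:
            hi = alpha
    return lo

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # hypothetical example
print(lyapunov_feasible(A)[0], decay_rate(A))
```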
Dissipativity and passivity. A system is dissipative if

$$\int_0^\infty u^T(t)\, y(t)\, dt \;\ge\; \eta \int_0^\infty u^T(t)\, u(t)\, dt$$

holds along all the trajectories developing from zero initial conditions. In the above inequality $\eta > 0$ denotes the dissipativity value. Using the LMI technique, the above condition can be expressed in the following LMIP form:

$$\eta_{\mathrm{dissip}} = \sup_{\eta, P} \left\{\eta : \begin{bmatrix} A^T P + P A & P B - C^T \\ B^T P - C & 2\eta I - \left(D^T + D\right) \end{bmatrix} \le 0,\; P = P^T > 0\right\}.$$

The system is passive if this value is nonnegative.

Expansivity and $H_\infty$ performance. System expansivity can be defined by the following inequality:

$$\int_0^\infty y^T(t)\, y(t)\, dt \;\le\; \gamma^2 \int_0^\infty u^T(t)\, u(t)\, dt$$

for all inputs. In a more compact form: $\|y\|_2 \le \gamma \|u\|_2$. For $\gamma < 1$ the system is nonexpansive. To satisfy the above inequality, for a given system the smallest upper bound for $\gamma$ can be found. This value is called the RMS gain:

$$L_2(H) = \sup \gamma = \sup_{\|u\|_2 \ne 0} \frac{\|y\|_2}{\|u\|_2}.$$

Considering LTI systems, finding the RMS gain is equivalent to solving the following LMIP problem:

$$H_\infty = \min_{\gamma, P} \left\{\gamma : \begin{bmatrix} A^T P + P A + C^T C & P B + C^T D \\ B^T P + D^T C & D^T D - \gamma^2 I \end{bmatrix} \le 0,\; P = P^T > 0\right\}.$$

$H_2$ performance. The $H_2$ norm, frequently used in control studies, can also be calculated using the LMIP technique:

$$H_2^2 = \min_P \left\{\mathrm{tr}\left(B^T P B\right) : P A + A^T P + C^T C < 0,\; P = P^T > 0\right\}.$$

Controller Synthesis. The classical control synthesis looks for a controller achieving certain quality measures and/or ensuring operation below/above certain limits. If all the state variables are available for control, static state feedback can be used:

$$u = -K x.$$

To proceed, the relations developed earlier need to be modified by substituting $A - BK$ for $A$ and $C - DK$ for $C$. Consequently, the related LMI items elaborated for $P$ and $K$ will change to bilinear matrix inequalities (BMI). However, using a congruent transformation and substituting appropriate variables, the BMIs can be reformulated as LMIs. In the case of state variables not available for control, the problem setup leads to the following relation:

$$P\left(A - BKC\right) + \left(A - BKC\right)^T P < 0,$$

which is not jointly convex in $P$ and $K$. The problem itself is nonconvex in this case; however, several heuristic iterative solutions have been worked out to solve it.

Robustness. The classical control problems operate with nominal models; known uncertainties are not taken into account in a direct way. A generalization of the classical dynamic system description is given by differential inclusions. For example, a classical dynamic system description according to

$$\dot{x} = f\left(x(t), t\right), \quad x(0) = x_0$$

can be generalized as follows:

$$\dot{x} \in F\left(x(t), t\right), \quad x(0) = x_0,$$

where $F$ is a set-valued function. The above system offers solutions along a number of trajectories. The aim here is not to derive individual trajectories, but rather to check whether all the trajectories possess certain properties, e.g., whether they all tend to zero. The above description is called a differential inclusion (DI).
It is a typical assumption that the set mapped by $F$ is convex both in $x$ and $t$. As far as linear differential inclusions (LDI) are concerned, the DI restricts itself to

$$\dot{x} \in \Omega x, \quad x(0) = x_0, \qquad \Omega \subset \mathbb{R}^{n \times n}.$$

Namely, the LDI is interpreted as a linear system family with time-variable parameters:

$$\dot{x} = A(t)\, x, \quad x(0) = x_0, \qquad A \in \Omega.$$

So here the key point is the interpretation of the LDI as a time-varying linear uncertainty, where $\Omega$ is the uncertain region of $A$. As an example, study the stability issue assuming a polytopic LDI problem. The system to study is $\dot{x} \in \Omega x$, $x(0) = x_0$, where $\Omega$ can be given by the vertices of the polytope: $\Omega = \mathrm{Co}\{A_1, \ldots, A_n\}$. In that case the condition of quadratic stability is: $P^T = P > 0$, $P A + A^T P < 0$ for all $A \in \Omega$. The above set of conditions can be studied as the following LMI condition set:

$$P A_i + A_i^T P < 0, \quad i = 1, \ldots, n, \qquad \text{and} \qquad P = P^T > 0.$$
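A direct transcription of the vertex conditions above might look as follows (an illustrative sketch with a hypothetical two-vertex polytope; strict inequalities are again replaced by small margins):

```python
import numpy as np
import cvxpy as cp

def quadratic_stability(vertices, margin=1e-6):
    """Quadratic stability of the polytopic LDI dx/dt in Co{A_1..A_n} x:
    look for a single P = P^T > 0 with A_i^T P + P A_i < 0 at every vertex."""
    n = vertices[0].shape[0]
    P = cp.Variable((n, n), symmetric=True)
    cons = [P >> margin * np.eye(n)]
    cons += [Ai.T @ P + P @ Ai << -margin * np.eye(n) for Ai in vertices]
    prob = cp.Problem(cp.Minimize(0), cons)
    prob.solve(solver=cp.SCS)
    return prob.status in ("optimal", "optimal_inaccurate"), P.value

# Hypothetical two-vertex polytope
A1 = np.array([[0.0, 1.0], [-2.0, -1.0]])
A2 = np.array([[0.0, 1.0], [-3.0, -1.5]])
feasible, P = quadratic_stability([A1, A2])
print(feasible)
```

A single Lyapunov matrix for all vertices is only a sufficient condition; failure of the test does not prove that the LDI is unstable.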
In recent years, convex optimization has become a reliable and efficient computational tool of wide interest in engineering, thanks to its ability to solve very large, practical engineering problems. The examples in this Section demonstrate the challenge of reformulating control problems as LMI problems. Once the reformulation is completed, the solution of the original problem is immediately obtained, since powerful programming tools are at hand (e.g., Yalmip) [38]. As far as a number of control problems are concerned, note that linear programming (LP) and second-order cone programming (SOCP) problems, being special semidefinite programming (SDP) problems, can be solved far more efficiently than by general SDP algorithms.
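For example, the RMS-gain LMI quoted earlier in this Section can be cast in a few lines of cvxpy instead of Yalmip; the model below is hypothetical and the bound returned is only as tight as the numerical SDP solution:

```python
import numpy as np
import cvxpy as cp

def rms_gain_lmi(A, B, C, D, margin=1e-7):
    """Upper bound on the RMS gain (H-infinity norm) of dx = Ax + Bu, y = Cx + Du,
    written with t = gamma^2 so the matrix inequality is linear in (P, t)."""
    n, m = A.shape[0], B.shape[1]
    P = cp.Variable((n, n), symmetric=True)
    t = cp.Variable(nonneg=True)                       # t = gamma^2
    blk = cp.bmat([[A.T @ P + P @ A + C.T @ C, P @ B + C.T @ D],
                   [B.T @ P + D.T @ C,         D.T @ D - t * np.eye(m)]])
    cons = [P >> margin * np.eye(n), blk << 0]
    prob = cp.Problem(cp.Minimize(t), cons)
    prob.solve(solver=cp.SCS)
    return float(np.sqrt(t.value))

# Hypothetical lightly damped second-order example
A = np.array([[0.0, 1.0], [-1.0, -0.4]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])
print(rms_gain_lmi(A, B, C, D))
```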
6.16 Model-Based Predictive Control
As we have seen so far, most control system design methods assume a mathematical model of the process to be controlled. Given a process model and knowledge of the external system inputs, the process output can be predicted to some extent. To characterize the behavior of the closed-loop system, a combined loss function can be constructed from the values of the predicted process outputs and those of the associated control inputs. Control strategies minimizing this loss function are called model-based predictive control (MPC). Related algorithms using special loss functions can be interpreted as an LQ (Linear Quadratic) problem with finite horizon. The performance of well-tuned predictive control algorithms is outstanding for processes with delay. Specific model-based predictive control algorithms are also known as Dynamic Matrix Control (DMC), Generalized Predictive Control (GPC), and Receding Horizon Control (RHC) [39–45, 53]. Due to the nature of the model-based control algorithms, the discrete-time (sampled data) version of the control algorithms will be discussed in the sequel. Also, most of the detailed discussion is restricted to SISO systems in this Section.

The fundamental idea of predictive control can be demonstrated through the DMC algorithm [46], where the process output sample $y(k+1)$ is predicted by using all the available process input samples up to the discrete time instant $k$ ($k = 0, 1, 2, \ldots$) via a linear function $\mathrm{func}$:

$$\hat{y}(k+1) = \mathrm{func}\left(u(k), u(k-1), u(k-2), u(k-3), \ldots\right).$$

Repeating the above one-step-ahead prediction for further time instants by

$$\hat{y}(k+2) = \mathrm{func}\left(u(k+1), u(k), u(k-1), u(k-2), \ldots\right)$$
$$\hat{y}(k+3) = \mathrm{func}\left(u(k+2), u(k+1), u(k), u(k-1), \ldots\right)$$
$$\vdots$$

requires the knowledge of the future control actions $u(k+1), u(k+2), u(k+3), \ldots$, as well. Introduce the free response, i.e., the response obtained provided no change occurs in the control input at and after time $k$:

$$y^*(k+1) = \mathrm{func}\left(u(k) = u(k-1), u(k-1), u(k-2), u(k-3), \ldots\right)$$
$$y^*(k+2) = \mathrm{func}\left(u(k+1) = u(k-1), u(k) = u(k-1), u(k-1), u(k-2), \ldots\right)$$
$$y^*(k+3) = \mathrm{func}\left(u(k+2) = u(k-1), u(k+1) = u(k-1), u(k) = u(k-1), u(k-1), \ldots\right)$$
$$\vdots$$

Using the free response just introduced, the predicted process outputs can be expressed by

$$\hat{y}(k+1) = s_1 \Delta u(k) + y^*(k+1)$$
$$\hat{y}(k+2) = s_1 \Delta u(k+1) + s_2 \Delta u(k) + y^*(k+2)$$
$$\hat{y}(k+3) = s_1 \Delta u(k+2) + s_2 \Delta u(k+1) + s_3 \Delta u(k) + y^*(k+3)$$
$$\vdots$$
where $\Delta u(k+i) = u(k+i) - u(k+i-1)$ and $s_i$ denotes the $i$th sample of the discrete-time step response of the process. Now a more compact form of

Predicted value = Forced response + Free response

is looked for in vector/matrix form:

$$\begin{bmatrix} \hat{y}(k+1) \\ \hat{y}(k+2) \\ \hat{y}(k+3) \\ \vdots \end{bmatrix} = \begin{bmatrix} s_1 & 0 & 0 & \ldots \\ s_2 & s_1 & 0 & \ldots \\ s_3 & s_2 & s_1 & \ldots \\ \vdots & \vdots & \vdots & \ddots \end{bmatrix} \begin{bmatrix} \Delta u(k) \\ \Delta u(k+1) \\ \Delta u(k+2) \\ \vdots \end{bmatrix} + \begin{bmatrix} y^*(k+1) \\ y^*(k+2) \\ y^*(k+3) \\ \vdots \end{bmatrix}.$$

Apply here the following notations:

$$S = \begin{bmatrix} s_1 & 0 & 0 & \ldots \\ s_2 & s_1 & 0 & \ldots \\ s_3 & s_2 & s_1 & \ldots \\ \vdots & \vdots & \vdots & \ddots \end{bmatrix}, \qquad \hat{Y} = \begin{bmatrix} \hat{y}(k+1) & \hat{y}(k+2) & \hat{y}(k+3) & \ldots \end{bmatrix}^T,$$

$$\Delta U = \begin{bmatrix} \Delta u(k) & \Delta u(k+1) & \Delta u(k+2) & \ldots \end{bmatrix}^T, \qquad Y^* = \begin{bmatrix} y^*(k+1) & y^*(k+2) & y^*(k+3) & \ldots \end{bmatrix}^T.$$

Utilizing these notations, $\hat{Y} = S\,\Delta U + Y^*$ holds. Assuming that the reference signal (set point) $y_{\mathrm{ref}}(k)$ is available for the future time instants, define the loss function to be minimized by

$$\sum_{i=1}^{N_y} \left(y_{\mathrm{ref}}(k+i) - \hat{y}(k+i)\right)^2.$$

If no restriction on the control signal is taken into account, the minimization leads to

$$\Delta U_{\mathrm{opt}} = S^{-1}\left(Y_{\mathrm{ref}} - Y^*\right),$$

where $Y_{\mathrm{ref}}$ has been constructed from the samples of the future set points. The Receding Horizon Control (RHC) concept utilizes only the very first element of the $\Delta U_{\mathrm{opt}}$ vector according to

$$\Delta u(k) = \ell\, \Delta U_{\mathrm{opt}},$$

where $\ell = \begin{bmatrix} 1 & 0 & 0 & \ldots \end{bmatrix}$. Observe that RHC needs to recalculate $Y^*$ and to update $\Delta U_{\mathrm{opt}}$ in each step. The application of the RHC algorithm results in zero steady-state error; however, it requires considerable control effort while minimizing the related loss function. Smoothing of the control input can be achieved

• by extending the loss function with another component penalizing the control signal or its change, or
• by reducing the control horizon as follows: $\Delta U = \begin{bmatrix} \Delta u(k) & \Delta u(k+1) & \Delta u(k+2) & \ldots & \Delta u(k+N_u-1) & 0 & 0 & \ldots & 0 \end{bmatrix}^T$.

Accordingly, define the loss function by

$$\sum_{i=1}^{N_y} \left(y_{\mathrm{ref}}(k+i) - \hat{y}(k+i)\right)^2 + \lambda \sum_{i=1}^{N_u} \left(u(k+i) - u(k+i-1)\right)^2,$$

then the control signal becomes

$$\Delta u(k) = \ell \left(S^T S + \lambda I\right)^{-1} S^T \left(Y_{\mathrm{ref}} - Y^*\right),$$

where

$$S = \begin{bmatrix} s_1 & 0 & \ldots & 0 & 0 \\ s_2 & s_1 & \ldots & 0 & 0 \\ s_3 & s_2 & \ldots & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ s_{N_y-1} & s_{N_y-2} & \ldots & s_{N_y-N_u+1} & s_{N_y-N_u} \\ s_{N_y} & s_{N_y-1} & \ldots & s_{N_y-N_u+2} & s_{N_y-N_u+1} \end{bmatrix}.$$
All the above relations can easily be modified to cover the control of processes with a known time delay. Simply replace $y(k+1)$ by $y(k+d+1)$ to consider $y(k+d+1)$ as the earliest sample of the process output affected by the control action taken at time $k$, where $d > 0$ represents the discrete time delay.
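The unconstrained DMC/RHC law above reduces to a few lines of linear algebra. The following sketch is not the handbook's algorithmic statement; a hypothetical discrete-time state-space model is used only to generate the step-response samples and the free response. It builds $S$, computes the RHC gain, and applies the first control increment:

```python
import numpy as np

def dmc_gain(step_response, Ny, Nu, lam):
    """Build the dynamic (step-response) matrix S (Ny x Nu) and return the first
    row of (S^T S + lambda I)^{-1} S^T, i.e. the RHC gain applied to (Yref - Y*)."""
    s = np.asarray(step_response)          # s[i] = response at sample i+1 to a unit step
    S = np.zeros((Ny, Nu))
    for i in range(Ny):
        for j in range(Nu):
            if i - j >= 0:
                S[i, j] = s[i - j]
    K = np.linalg.solve(S.T @ S + lam * np.eye(Nu), S.T)
    return K[0, :]                         # only the first control move is applied

def free_response(A, B, C, x, u_prev, Ny):
    """Free response Y*: simulate the model forward with the input frozen at u(k-1)."""
    y_star = np.zeros(Ny)
    for i in range(Ny):
        x = A @ x + B.flatten() * u_prev
        y_star[i] = float(C @ x)
    return y_star

# Hypothetical discrete-time SISO model and one receding-horizon step
A = np.array([[0.9, 0.1], [0.0, 0.8]]); B = np.array([[0.0], [0.1]]); C = np.array([[1.0, 0.0]])
Ny, Nu, lam = 20, 5, 0.1
s = free_response(A, B, C, np.zeros(2), 1.0, Ny)     # step-response samples s_1..s_Ny
k_dmc = dmc_gain(s, Ny, Nu, lam)
x, u_prev, y_ref = np.array([0.2, -0.1]), 0.0, np.ones(Ny)
du = float(k_dmc @ (y_ref - free_response(A, B, C, x.copy(), u_prev, Ny)))
u = u_prev + du                                      # control value actually applied at time k
print(u)
```

In a running loop the state (or a feedback-corrected free response) is refreshed at every sample and the computation is repeated, which is exactly the receding-horizon idea described above.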
As the idea of model-based predictive control is quite flexible, a number of variants of the above-discussed algorithms exist. The tuning parameters of the algorithm are the prediction horizon $N_y$, the control horizon $N_u$, and the $\lambda$ penalizing factor. Predictions related to disturbances can also be included. Just as an example, a loss function of the form

$$\left(\hat{Y} - Y_{\mathrm{ref}}\right)^T W_y \left(\hat{Y} - Y_{\mathrm{ref}}\right) + \Delta U_c^T W_u \Delta U_c$$

can be assigned to incorporate weighting matrices $W_u$ and $W_y$, respectively, and a reduced version of the control signals can be applied according to $\Delta U = T_c \Delta U_c$, where $T_c$ is an a priori defined matrix typically containing zeros and ones. Then minimization of the loss function results in

$$\Delta u(k) = \ell\, T_c \left(T_c^T S^T W_y S\, T_c + W_u\right)^{-1} T_c^T S^T W_y \left(Y_{\mathrm{ref}} - Y^*\right).$$

Constraints existing on the control input open a new class of control algorithms. In this case a quadratic programming problem (conditional minimization of a quadratic loss function subject to control constraints represented by inequalities) should be solved in each step. In detail, the loss function

$$\left(\hat{Y} - Y_{\mathrm{ref}}\right)^T W_y \left(\hat{Y} - Y_{\mathrm{ref}}\right) + \Delta U_c^T W_u \Delta U_c$$

is to be minimized again by $\Delta U_c$, where $\hat{Y} = S\, T_c \Delta U_c + Y^*$, under the constraints

$$u_{\min} \le u(j) \le u_{\max} \qquad \text{or} \qquad \Delta u_{\min} \le \Delta u(j) \le \Delta u_{\max}.$$

The classical DMC approach is based on the samples of the step response of the process. Obviously, the process model can also be represented by a unit impulse response, a state-space model, or a transfer function. Consequently, beyond the control input, the process output prediction can utilize the process output, the state variables, or the estimated state variables, as well. Note that the original DMC is an open-loop design method in nature, which should be extended by a closed-loop aspect or combined with an IMC-compatible concept to utilize the advantages offered by the feedback concept. A further remark relates to stochastic process models. As an example, the Generalized Predictive Control concept [47, 48] applies the following model:

$$A\left(q^{-1}\right) y(k) = B\left(q^{-1}\right) u(k-d) + \frac{C\left(q^{-1}\right)}{\Delta}\, \zeta(k),$$

where $A(q^{-1})$, $B(q^{-1})$, and $C(q^{-1})$ are polynomials of the backward shift operator $q^{-1}$, and $\Delta = 1 - q^{-1}$. Further on, $\zeta(k)$ is a discrete-time white noise sequence. Then the conditional expected value of the loss function

$$E\left\{\left(\hat{Y} - Y_{\mathrm{ref}}\right)^T W_y \left(\hat{Y} - Y_{\mathrm{ref}}\right) + \Delta U_c^T W_u \Delta U_c \,\middle|\, k\right\}$$

is to be minimized by $\Delta U_c$.

Note that model-based predictive control algorithms can be extended for MIMO and nonlinear systems. While LQ design supposing an infinite horizon provides stable performance, predictive control with a finite horizon using the receding horizon strategy lacks stability guarantees. Introduction of a terminal penalty in the cost function, including the quadratic deviations of the states from their final values, is one way to ensure stable performance. Other methods leading to stable performance with detailed stability analysis, as well as proper handling of constraints, are discussed in [41, 42, 49], where mainly sufficient conditions have been derived for stability. For real-time applications fast solutions are required. Effective numerical methods to solve optimization problems with reduced computational demand and suboptimal solutions have been developed [39]. MPC with linear constraints and uncertainties can be formulated as a multiparametric programming problem, which is a technique to obtain the solution of an optimization problem as a function of the uncertain parameters (generally the states). For the different ranges of the states, the calculation can be executed off-line [39, 50]. Different predictive control approaches for robust constrained predictive control of nonlinear systems are also at the forefront of interest [39, 51].
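When the input constraints are active, the single receding-horizon step becomes the quadratic program described above. A possible formulation in cvxpy, with $T_c = I$ and purely hypothetical data, is sketched below:

```python
import numpy as np
import cvxpy as cp

def constrained_mpc_step(S, y_star, y_ref, Wy, Wu, u_prev, u_min, u_max):
    """One receding-horizon step: minimize the weighted loss subject to absolute
    input limits (here T_c = I, so Delta U_c = Delta U)."""
    Nu = S.shape[1]
    dU = cp.Variable(Nu)
    y_hat = S @ dU + y_star
    u = u_prev + cp.cumsum(dU)                    # u(k+j) = u(k-1) + sum of increments
    cost = cp.quad_form(y_hat - y_ref, Wy) + cp.quad_form(dU, Wu)
    cons = [u >= u_min, u <= u_max]
    cp.Problem(cp.Minimize(cost), cons).solve(solver=cp.OSQP)
    return float(dU.value[0])                     # only the first increment is applied

# Hypothetical data: S from a made-up step response, flat reference
Ny, Nu = 15, 4
s = 1.0 - 0.8 ** np.arange(1, Ny + 1)
S = np.array([[s[i - j] if i >= j else 0.0 for j in range(Nu)] for i in range(Ny)])
du0 = constrained_mpc_step(S, np.zeros(Ny), np.ones(Ny), np.eye(Ny), 0.05 * np.eye(Nu),
                           u_prev=0.0, u_min=-0.5, u_max=0.5)
print(du0)
```

Rate (increment) limits of the type mentioned above can be added simply as bounds on the entries of dU.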
6.17 Summary
In this Chapter the classical and advanced control system design methods have been discussed. Controller design is based on the system model. PID is the most widely applied classical controller, and some aspects of continuous and discrete PID controller design have been dealt with. Advanced methods reflect the current state of the art of the related applied research activity in the field, and a number of advanced methods are available today for demanding control applications. One of the driving forces behind reviewing the various advanced techniques has been the applicability of the control algorithms in practical applications. Starting with the stability issues, then covering performance and robustness issues, LMI control strategies and predictive control algorithms have been discussed. The concepts of feedback shown here can be extended to certain nonlinear systems. It has to be mentioned that several MATLAB toolboxes support the analysis and design of conventional and advanced control systems.
References 1. Åström, K.J., Murray, R.M.: Feedback Systems: An Introduction for Scientists and Engineers. 2nd edn. Princeton University Press, Princeton and Oxford (2021) 2. Keviczky, L., Bars, R., Hetthéssy, J., Bányász, Cs.: Control Engineering. Springer Nature Singapore Pte Ltd. (2019) 3. Keviczky, L., Bars, R., Hetthéssy, J., Bányász, Cs.: Control Engineering: MATLAB Exercises. Springer Nature Singapore Pte Ltd. (2019) 4. Levine, W.S. (ed.): The Control Handbook. CRC Press, Boca Raton (1996) 5. Levine, W.S. (ed.): The Control Handbook, 3 volumes set, 2nd edn. CRC Press, Boca Raton (2011) 6. Kamen, E.W., Heck, B.S.: Fundamentals of Signals and Systems Using the Web and Matlab, 3rd edn. Pearson, Harlow (2014) 7. Fodor, Gy.: Laplace Transforms in Engineering. Publishing House of the Hungarian Academy of Sciences, Akadémiai KiadÓ, Budapest (1965) 8. Csáki, F.: State Space Methods for Control Systems. Publishing House of the Hungarian Academy of Sciences, Akadémiai KiadÓ, Budapest (1977) 9. Åström, K.J., Wittenmark, B.: Computer-Controlled Systems: Theory and Design. Prentice Hall, Upper Saddle River (1997)., 3rd edn, 2011 10. Goodwin, G.C., Graebe, S.F., Salgado, M.E.: Control System Design. Prentice Hall, Upper Saddle River (2000) 11. Goodwin, G.C., Middleton, R.H.: Digital Control and Estimation: a Unified Approach. Prentice Hall, Upper Saddle River (1990) 12. Isermann, R.: Digital Control Systems, Vol. I. Fundamentals, Deterministic Control. Springer, Berlin/Heidelberg (1989) 13. Åström, K.J., Hägglund, T.: PID Controllers: Theory, Design and Tuning. ISA: International Society for Measurement and Control (1995) 14. Åström, K.J., Hägglund, T.: Advanced PID Control. ISA: The Instrumentation, Systems, and Automation Society (2006) 15. Dwyer, A.O.: Handbook of PI and PID Controller Tuning Rules, 3nd edn. Imperial College Press, London (2006) 16. Guzmán, J.L., Hägglund, T., Åström, K.J., Dormido, S., Berenguel, M., Piquet, Y.: Understanding PID design through interactive tools, 19th IFAC World Congress. IFAC Proc. 47(3), 12243–12248 (2014) 17. Kailath, T.: Linear Systems. Prentice Hall, Englewood Cliffs, NJ (1980) 18. Aplevich, J.D.: The Essentials of Linear State-Space Systems. Wiley, New York (2000) 19. Decarlo, R.A.: Linear Systems. A State Variable Approach with Numerical Implementation. Prentice Hall, Englewood Cliffs, NJ (1989) 20. Kwakernaak, H., Sivan, R.: Linear Optimal Control Systems. Wiley-Interscience, Wiley Hoboken (1972) 21. Maciejowski, J.M.: Multivariable Feedback Design. England Addison-Wesley Publishing Company (1989)
I. Vajk et al. 22. Skogestad, S., Postlethwaite, I.: Multivariable Feedback Control. Wiley, New York (2007) 23. Gahinet, P., Apkarian, P.: A linear matrix inequality approach to H∞ control. Int. J. Robust Nonlinear Control. 4, 421–448 (1994) 24. Morari, M., Zafiriou, E.: Robust Process Control. Prentice Hall, Englewood Cliffs, NJ (1989) 25. Garcia, E.C., Morari, M.: Internal model control: 1. A unifying review and some new results. Ind. Eng. Chem. Process. Des. Dev. 21, 308–323 (1982) 26. Smith, O.J.M.: Close control of loops with dead time. Chem. Eng. Prog. 53, 217–219 (1957) 27. Kuˇcera, V.: Diophantine equations in control – a survey. Automatica. 29, 1361–1375 (1993) 28. Burl, J.B.: Linear Optimal Control: H2 and H-Infinity Methods. Addison-Wesley, Reading, MA (1999) 29. Liptak, B.G. (ed.): Instrument Engineers’ Handbook, Process Control and Optimization, 4th edn. CRC Press. Taylor & Francis Group, Boca Raton (2006) 30. Doyle, J.C., Francis, B.A., Tannenbaum, A.R.: Feedback Control Theory. Macmillan Publishing Co., New York (1992) 31. Zhou, K., Doyle, J.C., Glover, K.: Robust and Optimal Control. Prentice Hall, Upper Saddle River, NJ (1996) 32. Vidyasagar, M., Kimura, H.: Robust controllers for uncertain linear multivariable systems. Automatica. 22, 85–94 (1986) 33. Boyd, S., Vandenberghe, L.: Convex Optimization. Cambridge University Press, New York (2004) 34. Van Antwerp, J.G., Braatz, R.D.: A tutorial on linear and bilinear matrix inequalities. J. Process Control. 10, 363–385 (2000) 35. Boyd, S., Ghaoui, L.E., Feron, E., Balakrishnan, V.: Linear Matrix Inequalities in System and Control Theory. SIAM, Philadelphia (1994) 36. Duan, G.R., Yu, H.H.: LMIs in Control Systems: Analysis, Design and Application. CRC Press. Taylor & Francis Group, Boca Raton (2013) 37. Herrmann, G., Turner, M.C., Postlethwaite, I.: Linear matrix inequalities in control. In: Turner, M.C., Bates, D.G. (eds.) Mathematical Methods for Robust and Nonlinear Control. London, Springer (2007) 38. Efberg, J.: YALMIP: a toolbox for modeling and optimization in MATLAB. In: IEEE International Symposium on Computer Aided Control Systems Design Taipei, 24 Sept, pp. 284–289 (2004) 39. Camacho, E.F., Bordons, C.: Model Predictive Control. 2nd edn. Springer-Verlag Berlin (2004) 40. Clarke, D.W. (ed.): Advances in Model-Based Predictive Control. Oxford University Press, Oxford (1994) 41. Maciejowski, J.M.: Predictive Control with Constraints. Prentice Hall, London (2002) 42. Rossiter, J.A.: Model-Based Predictive Control – a Practical Approach. CRC Press, London (2003)., 2nd edn, 2017 43. Soeterboek, R.: Predictive Control, a Unified Approach. Prentice Hall, Englewood Cliffs, N.J. (1992) 44. Rakovi´c, S.V., Levine, W.S.: Handbook of Model Predictive Control. Birkhäuser, Cham, Switzerland (2019) 45. Kouvaritakis, B., Cannon, M.: Model Predictive Control, Classical, Robust and Stochastic. Springer Cham, Switzerland (2015) 46. Cutler, C.R., Ramaker, B.L.: Dynamic matrix control – a computer control algorithm. In: Proceedings of JACC, San Francisco (1980) 47. Clarke, D.W., Mohtadi, C., Tuffs, P.S.: Generalised predictive control – part 1. The basic algorithm. Automatica. 23(137) (1987) 48. Clarke, D.W., Mohtadi, C., Tuffs, P.S.: Generalised predictive control – part 2. Extensions and interpretations. Automatica. 23(149) (1987)
49. Mayne, D.Q., Rawlings, J.B., Rao, C.V., Scokaert, P.O.M.: Constrained model predictive control: stability and optimality. Automatica. 36, 789–814 (2000) 50. Borrelli, F.: Constrained Optimal Control of Linear and Hybrid Systems. Springer-Verlag, Berlin (2003) 51. Badgewell, T.A., Qin, S.J.: Nonlinear predictive control chapter: review of nonlinear model predictive control application (IEE Control Eng. Series) (2001) 52. Keviczky, L., Bányász, Cs.: Two-Degree-of-Freedom Control Systems. The Youla Parameterization Approach. Elsevier, London (2015) 53. Haber, R., Bars, R., Schmitz, U.: Predictive Control in Process Engineering. From the Basics to the Applications. Wiley-VCH, Weinheim, Germany (2011)
István Vajk graduated as electrical engineer and obtained his PhD degree in 1977 from the Budapest University of Technology and Economics (BME), Hungary. From 1994 to 2016 he was the head of Department of Automation and Applied Informatics at BME. Currently he is full professor at the same department. His main interest covers the theory and application of control systems, especially adaptive systems and system identification, as well as applied informatics.
Jenő Hetthéssy received his PhD degree in 1975 from the Technical University of Budapest, Faculty of Electrical Engineering. He received his CSc degree in 1978 from the Hungarian Academy of Sciences with a thesis on the theory and industrial application of self-tuning controllers. He spent several years as a visiting professor at the Electrical Engineering Department of the University of Minnesota, USA. He has published more than 90 papers.
Ruth Bars graduated at the Electrical Engineering Faculty of the Technical University of Budapest. She received her CSc degree in 1992 from the Hungarian Academy of Sciences and the PhD degree based on research in predictive control. Currently she is honorary professor at the Department of Automation and Applied Informatics, Budapest University of Technology and Economics. Her research interests are predictive control, advanced control algorithms and development of new ways in control education.
7 Nonlinear Control Theory for Automation
Alberto Isidori
Contents
7.1 Introduction
7.2 Autonomous Dynamical Systems
7.3 Stability and Related Concepts
7.3.1 Stability of Equilibria
7.3.2 Lyapunov Functions
7.4 Asymptotic Behavior
7.4.1 Limit Sets
7.4.2 Steady-State Behavior
7.5 Dynamical Systems with Inputs
7.5.1 Input-to-State Stability (ISS)
7.5.2 Cascade Connections
7.5.3 Feedback Connections
7.5.4 The Steady-State Response
7.6 Stabilization of Nonlinear Systems via State Feedback
7.6.1 Relative Degree, Normal Forms
7.6.2 Feedback Linearization
7.6.3 Global Stabilization via Partial Feedback Linearization
7.6.4 Global Stabilization via Backstepping
7.6.5 Semiglobal Practical Stabilization via High-Gain Partial-State Feedback
7.7 Observers and Stabilization via Output Feedback
7.7.1 Canonical Forms of Observable Nonlinear Systems
7.7.2 High-Gain Observers
7.7.3 The Nonlinear Separation Principle
7.7.4 Robust Feedback Linearization
7.8 Recent Progresses
References

Abstract
The purpose of this chapter is to review some notions of fundamental importance on the analysis and design of feedback laws for nonlinear control systems. The first part of the chapter begins by reviewing the notion of dynamical system and continues with a discussion of the concepts of stability and asymptotic stability of an equilibrium, with emphasis on the method of Lyapunov. Then, it continues by reviewing the notion of input-to-state stability and discussing its role in the analysis of interconnections of systems. Then, the first part is concluded by the analysis of the asymptotic behavior in the presence of persistent inputs. The second part of the chapter is devoted to the presentation of systematic methods for stabilization of relevant classes of nonlinear systems, namely, those possessing a globally defined normal form. Methods for the design of full-state feedback and, also, observer-based dynamic output feedback are presented. A nonlinear separation principle, based on the use of a high-gain observer, is discussed. A special role, in this context, is played by the methods based on feedback linearization, of which a robust version, based on the use of an extended high-gain observer, is also presented.

Keywords
Dynamical systems · Stability · Method of Lyapunov · Input-to-state stability · Steady-state behavior · Normal forms · Feedback linearization · Backstepping · High-gain observers · Separation principle · Extended observers

A. Isidori, Department of Computer, Control and Management Engineering, University of Rome "La Sapienza", Rome, Italy. e-mail: [email protected]
© Springer Nature Switzerland AG 2023. S. Y. Nof (ed.), Springer Handbook of Automation, Springer Handbooks, https://doi.org/10.1007/978-3-030-96729-1_7

7.1 Introduction
Modern engineering systems are very complex and comprise a high number of interconnected subcomponents which, thanks to the remarkable development of communications and electronics, can be spread over broad areas and linked through data networks. Each component of this wide interconnected system is a complex system on its own, and the good functioning of the overall system relies upon the possibility to efficiently control, estimate, or monitor each
one of these components. Each component is usually highdimensional, highly nonlinear, and hybrid in nature and comprises electrical, mechanical, or chemical components, which interact with computers, decision logics, etc. The behavior of each subsystem is affected by the behavior of part or all of the other components of the system. The control of those complex systems can only be achieved in a decentralized mode, by appropriately designing local controllers for each individual component or small group of components. In this setup, the interactions between components are mostly treated as commands, dictated from one particular unit to another one, or as disturbances, generated by the operation of other interconnected units. The tasks of the various local controllers are then coordinated by some supervisory unit. Control and computational capabilities being distributed over the system, a steady exchange of data among the components is required, in order for the system to behave properly. In this setup, each individual component (or small set of components) is viewed as a system whose behavior, in time, is determined or influenced by the behavior of other subsystems. Typically, the physical variables by means of which this influence is exerted can be classified in two disjoint sets: one set consisting of all commands and/or disturbances generated by other components (which in this context are usually referred to as exogenous inputs) and another set consisting of all variables by means of which the accomplishment of the required tasks is actually imposed (which in this context are usually referred to as control inputs). The tasks in question typically comprise the case in which certain variables, called regulated outputs, are required to track the behavior of a set exogenous commands. This leads to the definition, for the variables in question, of a tracking error, which should be kept as small as possible, in spite of the possible variation –in time– of the commands and in spite of all exogenous disturbances. The control input, in turn, is provided by a separate subsystem, the controller, which processes the information provided by a set of appropriate measurements (the measured outputs). The whole control configuration assumes –in this case– the form of a feedback loop, as shown in Fig. 7.1.
Fig. 7.1 Basic feedback loop
In any realistic scenario, the control goal has to be achieved in spite of a good number of phenomena which would cause the system to behave differently than expected. As a matter of fact, in addition to the exogenous phenomena already included in the scheme of Fig. 7.1, that is, the exogenous commands and disturbances, a system may fail to behave as expected also because of endogenous causes, which include the case in which the controlled system responds differently as a consequence of poor knowledge about its behavior due to modeling errors, damages, wear, etc. The ability to successfully handle large uncertainties is one of the main, if not the single most important, reasons for choosing the feedback configuration of Fig. 7.1. To evaluate the overall performance of the system, a number of conventional criteria are chosen. First of all, it must be made sure that the behavior of the variables of the entire system is bounded. In fact, the feedback strategy, which is introduced for the purpose of offsetting exogenous inputs and attenuating the effect of modeling errors, may cause unbounded behaviors, which have to be avoided. Boundedness and convergence to the desired behavior are usually analyzed in conventional terms via the concepts of asymptotic stability and of steady-state behavior, discussed hereafter in Sects. 7.2 through 7.4. Since the systems under consideration are systems with inputs (control inputs and exogenous inputs), the influence of such inputs on the behavior of a system has also to be assessed, as discussed in Sect. 7.5. The analytical tools developed in this way are then taken as a basis for the design of a controller, in which, usually, control structure and free parameters are chosen in such a way as to guarantee that the overall configuration exhibits the desired properties in response to exogenous commands and disturbances and is sufficiently tolerant of any major source of uncertainty. This is discussed in Sects. 7.6 through 7.8.
7.2 Autonomous Dynamical Systems
In loose terms, a dynamical system is a way to describe how certain physical entities of interest, associated with a natural or artificial process, evolve in time and how their behavior is, or can be, influenced by the evolution of other variables. The most usual point of departure in the analysis of the behavior of a natural or artificial process is the construction of a mathematical model, consisting of a set of equations expressing basic physical laws and/or constraints. In the most frequent case, when the study of evolution in time is the issue, the equations in question take the form of an ordinary differential equation, defined on a finite-dimensional Euclidean space. In this chapter, we shall review some fundamental facts underlying the analysis of the solutions of certain ordinary
differential equations arising in the study of physical processes. In this analysis, a convenient point of departure is the case of a mathematical model expressed in state-space form, i.e., by means of a first-order differential equation x˙ = f (x)
(7.1)
in which x ∈ Rn is a vector of variables associated with the physical entities of interest, usually referred to as the state of the system. A solution of the differential equation (7.1) is a differentiable function x¯ : J → Rn defined on some interval J ⊂ R such that, for all t ∈ J, d¯x(t) = f (¯x(t)). dt If the map f : Rn → Rn is locally Lipschitz, i.e., if for every x ∈ Rn there exists a neighborhood U of x and a number L > 0 such that, for all x1 , x2 in U,
same context, the flow of a system like (7.1) and the flow of another system, say y˙ = g(y). In this case, the symbol φ, which represents “the map,” must be replaced by two different symbols, one denoting the flow of (7.1) and the other denoting the flow of the other system. The easiest way to achieve this is to use the symbol x to represent “the map” that characterizes the flow of (7.1) and to use the symbol y to represent “the map” that characterizes the flow of the other system. In this way, the map characterizing the flow of (7.1) is written as x(t, x). This notation at first may seem confusing, because the same symbol x is used to represent “the map” and to represent the “second argument” of the map itself (the argument representing the initial condition of (7.1)), but it is somewhat inevitable. Once the notation has been understood, though, no further confusion should arise. Sometimes, for convenience, the notation x(t, x) is used instead. A dynamical system is said to be complete, if the set S coincides with the whole of R × Rn . In the special case of a linear differential equation x˙ = Ax
|f (x1 ) − f (x2 )| ≤ L|x1 − x2 |, then, for each x0 ∈ Rn there exist two times t − < 0 and t + > 0 and a solution x¯ of (7.1), defined on the interval (t − , t + ) ⊂ R, that satisfies x¯ (0) = x0 . Moreover, if x˜ : (t − , t + ) → Rn is any other solution of (7.1) satisfying x˜ (0) = x0 , then necessarily x˜ (t) = x¯ (t) for all t ∈ (t − , t + ), that is, the solution x¯ is unique. In general, the times t − < 0 and t + > 0 may depend on the point x0 . For each x0 , there is a maximal open interval (tm− (x0 ), tm+ (x0 )) containing 0, on which a solution x¯ with x¯ (0) = x0 is defined: this is the union of all open intervals on which there is a solution with x¯ (0) = x0 (possibly, but not always, tm− (x0 ) = −∞ and/or tm+ (x0 ) = +∞). Given a differential equation of a form (7.1), associated with a locally Lipschitz map f , define a subset S of R × Rn as follows S = {(t, x) : t ∈ (tm− (x), tm+ (x), x ∈ Rn }.
Then, define on S a map φ : W→ Rn as follows: φ(0, x)=x and, for each x ∈ Rn , the function ϕx : (tm− (x), tm+ (x)) → Rn t → φ(t, x) is a solution of (7.1). This map is called the flow of (7.1). In other words, for each fixed x, the restriction of φ(t, x) to the subset of S consisting of all pairs (t, x), for which t ∈ (tm− (x), tm+ (x)), is the unique (and maximally extended in time) solution of (7.1) passing through x at time t = 0. Sometimes, a slightly different notation is used for the flow. This is motivated by the need of expressing, within the
(7.2)
in which A is a n×n matrix of real numbers, the flow is given by φ(t, x) = eAt x where the matrix exponential eAt is defined as the sum of the series ∞ i t i A. eAt = i! i=0 Let S be a subset of Rn . The set S is said to be invariant for (7.1) if, for all x ∈ S, φ(t, x) is defined for all t ∈ (−∞, +∞) and φ(t, x) ∈ S
for all t ∈ R.
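For the linear case (7.2), the flow can be evaluated directly with the matrix exponential. The short sketch below (with an arbitrary, hypothetical stable matrix A) also cross-checks it against a numerical integration of the differential equation:

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

# Flow of the linear system (7.2): phi(t, x) = expm(A t) x
A = np.array([[0.0, 1.0], [-2.0, -3.0]])     # hypothetical stable matrix
x0 = np.array([1.0, -1.0])

def phi(t, x):
    return expm(A * t) @ x

# Cross-check against a numerical integration of x' = A x
sol = solve_ivp(lambda t, x: A @ x, (0.0, 5.0), x0, rtol=1e-9, atol=1e-12)
print(np.allclose(phi(5.0, x0), sol.y[:, -1], atol=1e-6))
```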
A set S is positively (resp. negatively) invariant if for all x ∈ S, φ(t, x) is defined for all t ≥ 0 (resp. for all t ≤ 0) and φ(t, x) ∈ S for all such t. Equation (7.1) defines a dynamical system. To reflect the fact that the map f does not depend on other independent entities (such as the time t or physical entities originated from “external” processes), the system in question is referred to an autonomous system. Complex autonomous systems arising in analysis and design of physical processes are usually obtained as a composition of simpler subsystems, each one modeled by equations of the form x˙ i = fi (xi , ui ) yi = hi (xi , ui )
i = 1, . . . , N
in which xi ∈ Rni . Here, ui ∈ Rmi and, respectively, yi ∈ Rpi are vectors of variables associated with physical entities by
means of which the interconnection of various component parts is achieved.
7.3 Stability and Related Concepts
7.3.1 Stability of Equilibria
Consider an autonomous system as (7.1) and suppose f is locally Lipschitz. A point $x_e \in \mathbb{R}^n$ is called an equilibrium point if $f(x_e) = 0$. Clearly, the constant function $x(t) = x_e$ is a solution of (7.1). Since solutions are unique, no other solution of (7.1) exists passing through $x_e$. The study of equilibria plays a fundamental role in analysis and design of dynamical systems. The most important concept in this respect is that of stability, in the sense of Lyapunov, specified in the following definition. For $x \in \mathbb{R}^n$, let $|x|$ denote the usual Euclidean norm, that is, $|x| = \left(\sum_{i=1}^n x_i^2\right)^{1/2}$.
Definition 1 An equilibrium $x_e$ of (7.1) is stable if, for every $\varepsilon > 0$, there exists $\delta > 0$ such that

$$|x(0) - x_e| \le \delta \quad \Rightarrow \quad |x(t) - x_e| \le \varepsilon \quad \text{for all } t \ge 0.$$

An equilibrium $x_e$ of (7.1) is asymptotically stable if it is stable and, moreover, there exists a number $d > 0$ such that

$$|x(0) - x_e| \le d \quad \Rightarrow \quad \lim_{t \to \infty} |x(t) - x_e| = 0.$$

An equilibrium $x_e$ of (7.1) is globally asymptotically stable if it is asymptotically stable and, moreover,

$$\lim_{t \to \infty} |x(t) - x_e| = 0 \quad \text{for every } x(0) \in \mathbb{R}^n.$$
The most elementary, but rather useful in practice, result in stability analysis is described as follows. Assume $f(x)$ is continuously differentiable and suppose, without loss of generality, that $x_e = 0$ (if not, change $x$ into $\bar{x} := x - x_e$ and observe that $\bar{x}$ satisfies the differential equation $\dot{\bar{x}} = f(\bar{x} + x_e)$, in which now $\bar{x} = 0$ is an equilibrium). Expand $f(x)$ as follows

$$f(x) = Ax + \tilde{f}(x) \qquad (7.3)$$

in which

$$A = \frac{\partial f}{\partial x}(0)$$

is the Jacobian matrix of $f(x)$, evaluated at $x = 0$, and by construction

$$\lim_{x \to 0} \frac{|\tilde{f}(x)|}{|x|} = 0.$$
The linear system x˙ = Ax, with the matrix A defined as indicated, is called the linear approximation of the original nonlinear system (7.1) at the equilibrium x = 0. Theorem 7.1 Let x = 0 be an equilibrium of (7.1). Suppose every eigenvalue of A has real part less than −c, with c > 0. Then, there are numbers d > 0 and M > 0 such that |x(0)| ≤ d ⇒ |x(t)| ≤ Me−c t |x(0)|
for all t ≥ 0. (7.4)
In particular, x = 0 is asymptotically stable. If at least one eigenvalue of A has positive real part, the equilibrium x = 0 is not stable. This property is usually referred to as the principle of stability in the first approximation. The equilibrium x = 0 is said to be hyperbolic, if the matrix A has no eigenvalue with zero real part. Thus, it is seen from the previous theorem that a hyperbolic equilibrium is either unstable or asymptotically stable. The inequality on the right-hand side of (7.4) provides a useful bound on the norm of x(t), expressed as a function of the norm of x(0) and of the time t. This bound, though, is very special and restricted to the case of an hyperbolic equilibrium. In general, bounds of this kind can be obtained by means of the so-called comparison functions, which are defined as follows (see [5, p.95]). Definition 2 A continuous function α : [0, a) → [0, ∞) is said to belong to class K if it is strictly increasing and α(0) = 0. If a = ∞ and limr→∞ α(r) = ∞, the function is said to belong to class K∞ . A continuous function β : [0, a) × [0, ∞) → [0, ∞) is said to belong to class KL if, for each fixed s, the function α : [0, a) → [0, ∞) r → β(r, s) belongs to class K and, for each fixed r, the function ϕ : [0, ∞) → [0, ∞) s → β(r, s) is decreasing and lims→∞ ϕ(s) = 0.
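The principle of stability in the first approximation stated in Theorem 7.1 suggests a simple numerical test: form the Jacobian at the equilibrium, inspect the real parts of its eigenvalues, and, if desired, confirm the predicted local behavior by simulation. A sketch (the system chosen here is hypothetical) is given below:

```python
import numpy as np
from scipy.integrate import solve_ivp

def jacobian(f, xe, eps=1e-6):
    """Finite-difference approximation of the Jacobian of f at the equilibrium xe (the matrix A in (7.3))."""
    n = len(xe)
    A = np.zeros((n, n))
    fe = np.asarray(f(xe))
    for j in range(n):
        dx = np.zeros(n); dx[j] = eps
        A[:, j] = (np.asarray(f(xe + dx)) - fe) / eps
    return A

# Hypothetical system with an equilibrium at the origin
f = lambda x: np.array([x[1], -np.sin(x[0]) - 0.5 * x[1]])
A = jacobian(f, np.zeros(2))
print(max(np.linalg.eigvals(A).real))   # negative: asymptotically stable by Theorem 7.1

# Trajectories of the nonlinear system from a small initial condition decay accordingly
sol = solve_ivp(lambda t, x: f(x), (0.0, 20.0), [0.3, 0.0], rtol=1e-8)
print(np.linalg.norm(sol.y[:, -1]))
```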
The composition of two class K (respectively, class K∞ ) functions α1 (·) and α2 (·), denoted α1 (α2 (·)) or α1 ◦ α2 (·), is a class K (respectively, class K∞ ) function. If α(·) is a class K function, defined on [0, a) and b = limr→a α(r), there exists a unique inverse function, α −1 : [0, b) → [0, a), namely, a function satisfying
α −1 (α(r)) = r,
for all r ∈ [0, a)
a|x|2 ≤ xT Px ≤ a|x|2
for all r ∈ [0, b).
for all x ∈ Rn . The property, of a matrix P, to be positive definite is usually expressed with the shortened notation P > 0 (which actually means xT Px > 0 for all x = 0). In the case of linear systems, the criterion of Lyapunov is expressed as follows.
and α(α −1 (r)) = r,
167
Moreover, α −1 (·) is a class K function. If α(·) is a class K∞ function, so is also α −1 (·). The properties of stability, asymptotic stability, and global asymptotic stability can be easily expressed in terms of inequalities involving comparison functions. In fact, it turns out that the equilibrium x = 0 is stable if and only if there exist a class K function α(·) and a number d > 0 such that |x(t)| ≤ α(|x(0)|)
Theorem 7.2 The linear system x˙ = Ax is asymptotically stable (or, what is the same, the eigenvalues of A have negative real part) if there exists a positive definite matrix P, such that the matrix Q := PA + AT P
for all x(0) such that |x(0)| ≤ d and all t ≥ 0 , the equilibrium x = 0 is asymptotically stable if and only if there exist a class KL function β(·, ·) and a number d > 0 such that |x(t)| ≤ β(|x(0)|, t)
(7.5)
is negative definite. Conversely, if the eigenvalues of A have negative real part, then, for any choice of a negative definite matrix Q, the linear equation PA + AT P = Q
for all x(0) such that |x(0)| ≤ d and all t ≥ 0 , and the equilibrium x = 0 is globally asymptotically stable if and only if there exist a class KL function β(·, ·) such that |x(t)| ≤ β(|x(0)|, t)
has a unique solution P, which is positive definite. Note that, if V(x) = xT Px, ∂V = 2xT P ∂x
for all x(0)and all t ≥ 0. and hence
7.3.2
Lyapunov Functions
The most important criterion for the analysis of the stability properties of an equilibrium is the criterion of Lyapunov. We introduce first the special form that this criterion takes in the case of a linear system x˙ = Ax in which x ∈ Rn . Any symmetric n × n matrix P defines a quadratic form V(x) = xT Px. The matrix P is said to be positive definite (respectively positive semidefinite) if so is the associated quadratic form V(x), i.e., if, for all x = 0, V(x) > 0
respectively V(x) ≥ 0.
The matrix is said to be negative definite (respectively negative semidefinite) if −P is positive definite (respectively positive semidefinite). It is easy to show that a matrix P is positive definite if (and only if) there exist positive numbers a and a satisfying
∂V Ax = xT (PA + AT P)x. ∂x
Thus, to say that the matrix PA + AT P is negative definite is equivalent to say that the form ∂V Ax ∂x is negative definite. The general, nonlinear, version of the criterion of Lyapunov appeals to the existence of a positive definite, but not necessarily quadratic, function of x. The quadratic lower and upper bounds of (7.5) are therefore replaced by bounds of the form α(|x|) ≤ V(x) ≤ α(|x|). (7.6) in which α(·), α(·) are simply class K functions. The criterion in question is summarized as follows. Theorem 7.3 Let V : Rn → R be a continuously differentiable function satisfying (7.6) for some pair of class K functions α(·), α(·). If, for some d > 0, ∂V f (x) ≤ 0 ∂x
for all |x| < d,
(7.7)
the equilibrium x = 0 of (7.1) is stable. If, for some class K function α(·) and some d > 0, ∂V f (x) ≤ −α(|x|) ∂x
for all |x| < d,
(7.8)
the equilibrium x = 0 of (7.1) is locally asymptotically stable. If α(·), α(·) are class K∞ functions and the inequality in (7.8) holds for all x, the equilibrium x = 0 of (7.1) is globally asymptotically stable. A function V(x) satisfying (7.6) and either of the subsequent inequalities is called a Lyapunov function. The inequality on the left-hand side of (7.6) is instrumental, together with (7.7), in establishing existence and boundedness of x(t). A simple explanation of the arguments behind the criterion of Lyapunov can be obtained in this way. Suppose (7.7) holds. Then, if x(0) is small, the differentiable function of time V(x(t)) is defined for all t ≥ 0 and nonincreasing along the trajectory x(t). Using the inequalities in (7.6), one obtains α(|x(t)|) ≤ V(x(t)) ≤ V(x(0) ≤ α(|x(0)|) and hence |x(t)| ≤ α −1 ◦ α(|x(0)|), which establishes the stability of the equilibrium of x = 0. Similar arguments are very useful in order to establish the invariance, in positive time, of certain bounded subsets of Rn . Specifically, suppose the various inequalities considered in Theorem 7.3 hold for d = ∞, and let c denote the set of all x ∈ Rn for which V(x) ≤ c, namely, c = {x ∈ Rn : V(x) ≤ c}. A set of this kind is called a sublevel set of the function V(x). Note that if α(·) is a class K∞ function, then c is a compact set for all c > 0. Now, if ∂V(x) f (x) < 0 ∂x at each point x of the boundary of c , it can be concluded that, for any initial condition in the interior of c , the solution x(t) of (7.1) is defined for all t ≥ 0 and x(t) ∈ c for all t ≥ 0, that is, the set c is invariant in positive time. Indeed, existence and uniqueness are guaranteed by the local Lipschitz property, so long as x(t) ∈ c , because c is a compact set. The fact that x(t) remains in c for all t≥0 is proved by contradiction. For, suppose that, for some trajectory x(t), there is a time t1 such that x(t) is in the interior of c at all t < t1 and x(t1 ) is on the boundary of c . Then, V(x(t)) < c for all t < t1
and
V(x(t1 )) = c
and this contradicts the previous inequality, which shows that the derivative of V(x(t)) is strictly negative at t = t1 . To show that (7.8) guarantees asymptotic decay of x(t) to 0, define γ (r) = α(α −1 (r)), which is a class K function, and observe that α(|x|) ≥ γ (V(x)). Hence, ∂V f (x) ≤ −γ (V(x)). ∂x Observe now that V(x(t), a nonincreasing and nonnegativevalued continuous function of t, has a limit for t → ∞. Let this limit be denoted by V∞ . Suppose V∞ is strictly positive. Then, d V(x(t)) ≤ −γ (V(x(t)) ≤ −γ (V∞ ) < 0. dt Integration with respect to time yields V(x(t)) ≤ V(x(0)) − γ (V∞ )t for all t. This cannot be the case, because for large t the righthand side is negative, while the left-hand side is nonnegative. From this, it follows that V∞ = 0, and therefore, using the fact that V(x) vanishes only at x = 0, it is concluded that limt→∞ x(t) = 0. Example 1 In many cases, stability of a nonlinear system can be checked by means of a quadratic Lyapunov function. For instance, in the case of the system x˙ 1 = −x1 + x1 x2 x˙ 2 = −x2 − x12 , picking the quadratic function V(x1 , x2 ) = x12 + x22 , it is found that ∂V f (x) = −2x12 − 2x22 ∂x and hence, according to Theorem 7.3, it is concluded that the equilibrium x = 0 is globally asymptotically stable. But in the case of the system x˙ 1 = −x1 + x1 x2 x˙ 2 = −x2 − x14 , it is more convenient to pick V(x1 , x2 ) = x14 + 2x22 , yielding ∂V f (x) = −4x14 − 4x22 . ∂x The case of the system (see [11, p.124]) x˙ 1 = x2 x˙ 2 = −h(x1 ) − x2 ,
in which h(·) is a locally Lipschitz function satisfying h(0) = 0 and h(y)y > 0 for all y ≠ 0, can be handled by means of a Lyapunov function consisting of two terms, one of which is a quadratic form and the other of which is an integral term,

V(x) = (1/2) xᵀ [[k, k], [k, 1]] x + ∫_0^{x1} h(y) dy.

A function having such a structure is known as a Lurie-type Lyapunov function. If the quadratic term is positive definite, which is the case if and only if 0 < k < 1, the function in question is positive definite and, in fact, bounded from below as α(|x|) < V(x), in which α(·) is a class K∞ function. A simple calculation shows that

∂V/∂x f(x) = −k h(x1)x1 − (1 − k)x2²,

in which the right-hand side is negative for all nonzero x.

The criterion for asymptotic stability provided by the previous theorem has a converse, namely, the existence of a function V(x) having the properties indicated in Theorem 7.3 is implied by the property of asymptotic stability of the equilibrium x = 0 of (7.1). In particular, the following result holds (see e.g. [22]).

Theorem 7.4 Suppose the equilibrium x = 0 of (7.1) is locally asymptotically stable. Then, there exist d > 0, a continuously differentiable function V : Rn → R, and class K functions α(·), ᾱ(·), α(·), such that (7.6) and (7.8) hold. If the equilibrium x = 0 of (7.1) is globally asymptotically stable, there exist a continuously differentiable function V : Rn → R and class K∞ functions α(·), ᾱ(·), α(·), such that (7.6) and (7.8) hold with d = ∞.

To conclude, observe that, if x = 0 is a hyperbolic equilibrium and all eigenvalues of A have negative real part, |x(t)| is bounded, for small |x(0)|, by a class KL function β(·, ·) of the form β(r, t) = Me^{−λt} r. If the equilibrium x = 0 of system (7.1) is globally asymptotically stable and, moreover, there exist numbers d > 0, M > 0, and λ > 0 such that

|x(t)| ≤ Me^{−λt}|x(0)|   for all |x(0)| ≤ d and all t ≥ 0,

it is said that this equilibrium is globally asymptotically and locally exponentially stable. It can be shown that the equilibrium x = 0 of the nonlinear system (7.1) is globally asymptotically and locally exponentially stable if and only if there exist a continuously differentiable function V(x) : Rn → R, class K∞ functions α(·), ᾱ(·), α(·), and real numbers δ > 0, a > 0, ā > 0, a > 0, such that

α(|x|) ≤ V(x) ≤ ᾱ(|x|)   and   ∂V/∂x f(x) ≤ −α(|x|)   for all x ∈ Rn,

and

α(s) = as², ᾱ(s) = ās², α(s) = as²   for all s ∈ [0, δ].
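To complement Example 1, the following short sketch (an addition, not part of the original text; it assumes numpy and scipy are available) integrates the first system of the example and checks numerically that the quadratic Lyapunov function V(x) = x1² + x2² is nonincreasing along the computed trajectory.

```python
# Numerical sanity check of Example 1 (a sketch): integrate
# x1' = -x1 + x1*x2, x2' = -x2 - x1**2 and verify that V(x) = x1**2 + x2**2
# is (up to integration error) nonincreasing along the trajectory.
import numpy as np
from scipy.integrate import solve_ivp

def f(t, x):
    x1, x2 = x
    return [-x1 + x1 * x2, -x2 - x1**2]

def V(x):
    return x[0]**2 + x[1]**2

sol = solve_ivp(f, (0.0, 10.0), [2.0, -1.5], max_step=0.01)
values = np.array([V(x) for x in sol.y.T])

print("V(x(0)) =", values[0], "  V(x(10)) =", values[-1])
print("largest increase of V between samples:", np.diff(values).max())  # ~ 0 or negative
```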
7.4 Asymptotic Behavior

7.4.1 Limit Sets
In the analysis of dynamical systems, it is often important to determine whether or not, as time increases, the variables characterizing the motion asymptotically converge to special motions exhibiting some form of recurrence. This is the case, for instance, when a system possesses an asymptotically stable equilibrium: all motions issued from initial conditions in a neighborhood of this point converge to a special motion, in which all variables remain constant. A constant motion or, more generally, a periodic motion is characterized by a property of recurrence, and a motion of this kind is usually referred to as a steady-state motion or behavior. The steady-state behavior of a dynamical system can be viewed as a kind of limit behavior, approached either as the actual time t tends to +∞ or, alternatively, as the initial time t0 tends to −∞. Relevant, in this regard, are certain concepts introduced by G.D. Birkhoff in [2]. In particular, a fundamental role is played by the concept of ω-limit set of a given point, defined as follows. Consider an autonomous dynamical system such as (7.1) and let x(t, x0) denote its flow. Assume, in particular, that x(t, x0) is defined for all t ≥ 0. A point x is said to be an ω-limit point of the motion x(t, x0) if there exists a sequence of times {tk}, with lim_{k→∞} tk = ∞, such that

lim_{k→∞} x(tk, x0) = x.
The ω-limit set of a point x0, denoted ω(x0), is the union of all ω-limit points of the motion x(t, x0) (see Fig. 7.2). If xe is an asymptotically stable equilibrium, then xe = ω(x0) for all x0 in a neighborhood of xe. However, in general, an ω-limit point is not necessarily a limit of x(t, x0) as t → ∞, because the function in question may not admit any limit as t → ∞. It happens, though, that if the motion x(t, x0) is bounded, then x(t, x0) asymptotically approaches the set ω(x0).

Fig. 7.2 The ω-limit set of a point x0

Lemma 7.1 Suppose there is a number M such that |x(t, x0)| ≤ M for all t ≥ 0. Then, ω(x0) is a nonempty compact connected set, invariant under (7.1). Moreover, the distance of x(t, x0) from ω(x0) tends to 0 as t → ∞.

It is seen from this that the set ω(x0) is filled by motions of (7.1) which are defined, and bounded, for all backward and forward times. The other remarkable feature is that x(t, x0) approaches ω(x0) as t → ∞, in the sense that the distance of the point x(t, x0) (the value at time t of the solution of (7.1) starting in x0 at time t = 0) to the set ω(x0) tends to 0 as t → ∞. A consequence of this property is that, in a system of the form (7.1), if all motions issued from a set B are bounded, all such motions asymptotically approach the set

⋃_{x0∈B} ω(x0).

However, the convergence of x(t, x0) to this set is not guaranteed to be uniform in x0, even if the set B is compact. There is a larger set, though, which does have this property of uniform convergence. This larger set, known as the ω-limit set of the set B, is precisely defined as follows. Consider again system (7.1), let B be a subset of Rn, and suppose x(t, x0) is defined for all t ≥ 0 and all x0 ∈ B. The ω-limit set of B, denoted ω(B), is the set of all points x for which there exists a sequence of pairs {xk, tk}, with xk ∈ B and lim_{k→∞} tk = ∞, such that

lim_{k→∞} x(tk, xk) = x.

It follows from the definition that if B consists of only one single point x0, all xk's in the definition above are necessarily equal to x0 and the definition in question reduces to the definition of ω-limit set of a point, given earlier. It also follows that, if for some x0 ∈ B the set ω(x0) is nonempty, all points of ω(x0) are points of ω(B). Thus, in particular, if all motions with x0 ∈ B are bounded in positive time,

⋃_{x0∈B} ω(x0) ⊂ ω(B).

However, the converse inclusion is not true in general. The relevant properties of the ω-limit set of a set, which extend those presented earlier in Lemma 7.1, can be summarized as follows [6].
Lemma 7.2 Let B be a nonempty bounded subset of Rn, and suppose there is a number M such that |x(t, x0)| ≤ M for all t ≥ 0 and all x0 ∈ B. Then, ω(B) is a nonempty compact set, invariant under (7.1). Moreover, the distance of x(t, x0) from ω(B) tends to 0 as t → ∞, uniformly in x0 ∈ B. If B is connected, so is ω(B).

Thus, as is the case for the ω-limit set of a point, the ω-limit set of a bounded set B, being compact and invariant, is filled with motions, which exist for all t ∈ (−∞, +∞) and are bounded backward and forward in time. But, above all, the set in question is uniformly approached by motions with initial state x0 ∈ B. An important corollary of the property of uniform convergence is that if ω(B) is contained in the interior of B, then ω(B) is also asymptotically stable.

Lemma 7.3 Let B be a nonempty bounded subset of Rn, and suppose there is a number M such that |x(t, x0)| ≤ M for all t ≥ 0 and all x0 ∈ B. Then, ω(B) is a nonempty compact set, invariant under (7.1). Suppose also that ω(B) is contained in the interior of B. Then, ω(B) is asymptotically stable, with a domain of attraction that contains B.
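The notion of ω-limit set can also be explored numerically. The sketch below is an illustration added here, not taken from the text; the Van der Pol oscillator and all numerical values are assumptions. It approximates ω(x0) by integrating for a long time and retaining only late-time samples, which in this case trace out a limit cycle.

```python
# Sketch: approximate the omega-limit set of a point by a long simulation
# and keep only late-time samples.  For the Van der Pol oscillator the
# resulting set of points traces a limit cycle.
import numpy as np
from scipy.integrate import solve_ivp

def vdp(t, x, mu=1.0):
    return [x[1], mu * (1.0 - x[0]**2) * x[1] - x[0]]

t_end = 200.0
t_eval = np.linspace(0.0, t_end, 20001)
sol = solve_ivp(vdp, (0.0, t_end), [0.1, 0.0], t_eval=t_eval, rtol=1e-8)

late = sol.y[:, t_eval > 0.75 * t_end]      # discard the transient
radii = np.linalg.norm(late, axis=0)
print("number of late-time samples:", late.shape[1])
print("radius range on the approximate omega-limit set:", radii.min(), "to", radii.max())
```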
7.4.2 Steady-State Behavior
Consider now again system (7.1), with initial conditions in a closed subset X ⊂ Rn . Suppose the set X is positively invariant, which means that for any initial condition x0 ∈ X, the solution x(t, x0 ) exists for all t ≥ 0 and x(t, x0 ) ∈ X for all t ≥ 0. The motions of this system are said to be ultimately bounded if there is a bounded subset B with the property that, for every compact subset X0 of X, there is a time T > 0, such that x(t, x0 ) ∈ B for all t ≥ T and all x0 ∈ X0 . In other words, if the motions of the system are ultimately bounded, every motion eventually enters and remains in the bounded set B.
Suppose the motions of (7.1) are ultimately bounded and let B′ be any other bounded subset with the property that, for every compact subset X0 of X, there is a time T′ > 0 such that x(t, x0) ∈ B′ for all t ≥ T′ and all x0 ∈ X0. Then, it is easy to check that ω(B′) = ω(B). Thus, in view of the properties described in Lemma 7.2 above, the following definition can be adopted (see [9]).

Definition 3 Suppose the motions of system (7.1), with initial conditions in a closed and positively invariant set X, are ultimately bounded. A steady-state motion is any motion with initial condition x(0) ∈ ω(B). The set ω(B) is the steady-state locus of (7.1), and the restriction of (7.1) to ω(B) is the steady-state behavior of (7.1).
7.5 Dynamical Systems with Inputs

7.5.1 Input-to-State Stability (ISS)
In this section, we show how to determine the stability properties of an interconnected system, on the basis of the properties of each individual component. The easiest interconnection to be analyzed is a cascade connection of two subsystems, namely, a system of the form

ẋ = f(x, z)
ż = g(z),          (7.9)

with x ∈ Rn, z ∈ Rm, in which we assume f(0, 0) = 0 and g(0) = 0. If the equilibrium x = 0 of ẋ = f(x, 0) is locally asymptotically stable and the equilibrium z = 0 of the lower subsystem is locally asymptotically stable, then the equilibrium (x, z) = (0, 0) of the cascade is locally asymptotically stable. However, in general, global asymptotic stability of the equilibrium x = 0 of ẋ = f(x, 0) and global asymptotic stability of the equilibrium z = 0 of the lower subsystem do not imply global asymptotic stability of the equilibrium (x, z) = (0, 0) of the cascade. To infer global asymptotic stability of the cascade, a stronger condition is needed, which expresses a property describing how, in the upper subsystem, the response x(·) is influenced by its input z(·). The property in question requires that when z(t) is bounded, over the semi-infinite time interval [0, +∞), then also x(t) be bounded, and, in particular, that if z(t) asymptotically decays to 0, then also x(t) decays to 0. These requirements altogether lead to the notion of input-to-state stability, introduced and studied in [15, 16]. The notion in question is defined as follows (see also [8, Chapter 10] for additional details). Consider a nonlinear system

ẋ = f(x, u)          (7.10)

with state x ∈ Rn and input u ∈ Rm, in which f(0, 0) = 0 and f(x, u) is locally Lipschitz on Rn × Rm. The input function u : [0, ∞) → Rm of (7.10) can be any piecewise continuous bounded function. The set of all such functions, endowed with the supremum norm

‖u(·)‖∞ = sup_{t≥0} |u(t)|,

is denoted by L∞^m.

Definition 4 System (7.10) is said to be input-to-state stable if there exist a class KL function β(·, ·) and a class K function γ(·), called a gain function, such that, for any input u(·) ∈ L∞^m and any x0 ∈ Rn, the response x(t) of (7.10) in the initial state x(0) = x0 satisfies

|x(t)| ≤ β(|x0|, t) + γ(‖u(·)‖∞)   for all t ≥ 0.          (7.11)
It is common practice to replace the wording "input-to-state stable" with the acronym "ISS." In this way, a system possessing the property expressed by (7.11) is said to be an ISS system. Since, for any pair β > 0, γ > 0, max{β, γ} ≤ β + γ ≤ max{2β, 2γ}, an alternative way to say that a system is input-to-state stable is to say that there exist a class KL function β(·, ·) and a class K function γ(·), such that, for any input u(·) ∈ L∞^m and any x0 ∈ Rn, the response x(t) of (7.10) in the initial state x(0) = x0 satisfies

|x(t)| ≤ max{β(|x0|, t), γ(‖u(·)‖∞)}   for all t ≥ 0.   (7.12)

The property, for a given system, of being input-to-state stable can be given a characterization which extends the criterion of Lyapunov for asymptotic stability. The key tool for this analysis is the notion of ISS-Lyapunov function, defined as follows.

Definition 5 A C¹ function V : Rn → R is an ISS-Lyapunov function for system (7.10) if there exist class K∞ functions α(·), ᾱ(·), and α(·) and a class K function χ(·), such that

α(|x|) ≤ V(x) ≤ ᾱ(|x|)
for all x ∈ Rn
(7.13)
and

|x| ≥ χ(|u|)   ⇒   ∂V/∂x f(x, u) ≤ −α(|x|)          (7.14)

for all x ∈ Rn and u ∈ Rm.
An alternative, equivalent, definition is the following one.
Definition 6 A C¹ function V : Rn → R is an ISS-Lyapunov function for system (7.10) if there exist class K∞ functions α(·), ᾱ(·), α(·), and a class K function σ(·) such that (7.13) holds and

∂V/∂x f(x, u) ≤ −α(|x|) + σ(|u|)          (7.15)

for all x ∈ Rn and all u ∈ Rm.

The importance of the notion of ISS-Lyapunov function resides in the following criterion, which extends the criterion of Lyapunov for global asymptotic stability to systems with inputs.

Theorem 7.5 System (7.10) is input-to-state stable if and only if there exists an ISS-Lyapunov function.

The comparison functions appearing in the estimates (7.13) and (7.14) are useful to obtain an estimate of the gain function γ(·), which characterizes the bound (7.12). In fact, it can be shown that if system (7.10) possesses an ISS-Lyapunov function V(x), the sublevel set

{x ∈ Rn : V(x) ≤ ᾱ(χ(‖u(·)‖∞))}

is invariant in positive time for (7.10). Thus, in view of the estimates (7.13), if the initial state of the system is inside this sublevel set, the following estimate holds

|x(t)| ≤ α⁻¹(ᾱ(χ(‖u(·)‖∞)))   for all t ≥ 0,

and one can obtain an estimate of γ(·) as γ(r) = α⁻¹ ∘ ᾱ ∘ χ(r). In other words, establishing the existence of an ISS-Lyapunov function V(x) is useful not only to check whether or not the system in question is input-to-state stable but also to determine an estimate of the gain function γ(·). Knowing such an estimate is important, as will be shown later, in using the concept of input-to-state stability to determine the stability of interconnected systems. The following simple examples may help in understanding the concept of input-to-state stability and the associated Lyapunov-like theorem.

Example 2 Consider a linear system ẋ = Ax + Bu with x ∈ Rn and u ∈ Rm, and suppose that all the eigenvalues of the matrix A have negative real part. Let P > 0 denote the unique solution of the Lyapunov equation PA + AᵀP = −I and observe that the function V(x) = xᵀPx satisfies a|x|² ≤ V(x) ≤ ā|x|² for suitable a > 0 and ā > 0 and that

∂V/∂x (Ax + Bu) ≤ −|x|² + 2|x||P||B||u|.

Pick any 0 < ε < 1 and set

c = 2|P||B|/(1 − ε),   χ(r) = cr.

Then,

|x| ≥ χ(|u|)   ⇒   ∂V/∂x (Ax + Bu) ≤ −ε|x|².

Thus, the system is input-to-state stable, with a gain function γ(r) = c(ā/a)^{1/2} r, which is a linear function.

Consider now the simple nonlinear one-dimensional system

ẋ = −axᵏ + xᵖu,

in which k ∈ N is odd, p ∈ N satisfies p < k, and a > 0. Choose a candidate ISS-Lyapunov function as V(x) = (1/2)x², which yields

∂V/∂x f(x, u) = −ax^{k+1} + x^{p+1}u ≤ −a|x|^{k+1} + |x|^{p+1}|u|.

Set ν = k − p to obtain

∂V/∂x f(x, u) ≤ |x|^{p+1}(−a|x|^ν + |u|).

Thus, using the class K∞ function α(r) = εr^{k+1}, with ε > 0, it is deduced that

∂V/∂x f(x, u) ≤ −α(|x|)

provided that (a − ε)|x|^ν ≥ |u|. Taking, without loss of generality, ε < a, it is concluded that condition (7.14) holds for the class K function

χ(r) = (r/(a − ε))^{1/ν}.
Thus, the system is input-to-state stable.
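As a numerical complement to Example 2, the following sketch (assumptions: numpy/scipy, and the illustrative values a = 1, k = 3, p = 1, ε = 0.5) simulates the scalar system with a bounded input and compares the tail of |x(t)| with the gain estimate χ(‖u‖∞) derived above; for V(x) = x²/2 the bounds α and ᾱ coincide, so the gain estimate equals χ.

```python
# Sketch: simulate x' = -a*x**k + x**p * u with a bounded input and compare
# the asymptotic size of |x(t)| with chi(|u|_inf) from Example 2.
import numpy as np
from scipy.integrate import solve_ivp

a, k, p, eps = 1.0, 3, 1, 0.5
u = lambda t: 0.8 * np.sin(t)                    # bounded input, |u|_inf = 0.8
chi = lambda r: (r / (a - eps))**(1.0 / (k - p))

sol = solve_ivp(lambda t, x: [-a * x[0]**k + x[0]**p * u(t)],
                (0.0, 40.0), [3.0], max_step=0.01)

print("gain estimate chi(|u|_inf)      =", chi(0.8))
print("max |x(t)| over the second half =", np.abs(sol.y[0][sol.t > 20.0]).max())
```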
An important feature of the previous example, which made it possible to prove that the system is input-to-state stable, is the inequality p < k. In fact, if this inequality does not hold, the system may fail to be input-to-state stable. This can be seen, for instance, in the simple example

ẋ = −x + xu.

To this end, suppose u(t) = 2 for all t ≥ 0. The state response of the system, to this input, from the initial state x(0) = x0 coincides with that of the autonomous system ẋ = x, i.e., x(t) = eᵗx0, which shows that the bound (7.11) cannot hold.

We conclude with an alternative characterization of the property of input-to-state stability, which is useful in many instances (see [17]).

Theorem 7.6 System (7.10) is input-to-state stable if and only if there exist class K functions γ0(·) and γ(·) such that, for any input u(·) ∈ L∞^m and any x0 ∈ Rn, the response x(t) in the initial state x(0) = x0 satisfies

‖x(·)‖∞ ≤ max{γ0(|x0|), γ(‖u(·)‖∞)}          (7.17)

and

lim sup_{t→∞} |x(t)| ≤ γ(lim sup_{t→∞} |u(t)|).          (7.18)

7.5.2 Cascade Connections

The property of input-to-state stability is of paramount importance in the analysis of interconnected systems. The first application consists in the analysis of the cascade connection. As a matter of fact, the cascade connection of two input-to-state stable systems turns out to be input-to-state stable. More precisely, consider a system of the form (see Fig. 7.3)

ẋ = f(x, z)
ż = g(z, u),          (7.16)

in which x ∈ Rn, z ∈ Rm, f(0, 0) = 0, g(0, 0) = 0, and f(x, z), g(z, u) are locally Lipschitz.

Fig. 7.3 Cascade connection

Theorem 7.7 Suppose that system

ẋ = f(x, z),

viewed as a system with input z and state x, is input-to-state stable and that system

ż = g(z, u),

viewed as a system with input u and state z, is input-to-state stable as well. Then, system (7.16) is input-to-state stable.

As an immediate corollary of this theorem, it is possible to answer the question of when the cascade connection (7.9) is globally asymptotically stable. In fact, if the upper subsystem ẋ = f(x, z), viewed as a system with input z and state x, is input-to-state stable and the equilibrium z = 0 of the lower subsystem is globally asymptotically stable, the equilibrium (x, z) = (0, 0) of system (7.9) is globally asymptotically stable.

Example 3 We have observed before that, in the composite system (7.9), global asymptotic stability of the equilibrium x = 0 of ẋ = f(x, 0) and global asymptotic stability of the equilibrium z = 0 of the lower subsystem do not imply, in general, global asymptotic stability of the equilibrium (x, z) = (0, 0). To see that this is the case, consider the system

ẋ = −x + x²z
ż = −z.

In the upper subsystem, the equilibrium x = 0 for z = 0 is globally asymptotically stable, but the system is not input-to-state stable; the equilibrium z = 0 of the lower subsystem is globally asymptotically stable. It can be shown that the equilibrium (x, z) = (0, 0) is not globally asymptotically stable and, even, that there are initial conditions from which trajectories escape to infinity in finite time. To show that this is the case, consider the differential equation

x̃˙ = −x̃ + x̃²          (7.19)

with initial condition x̃(0) = x0. Its solution is

x̃(t) = −x0 exp(−t) / (x0 − 1 − x0 exp(−t)).

Suppose x0 > 1. Then, x̃(t) escapes to infinity in finite time. In particular, the maximal (positive) time interval on which x̃(t) is defined is the interval [0, tmax(x0)) with

tmax(x0) = ln(x0/(x0 − 1)).

Now, return to system (7.9), with initial condition (x0, z0), and let z0 be such that

z(t) = exp(−t)z0 ≥ 1   for all t ∈ [0, tmax(x0)).
Clearly, on the time interval [0, tmax(x0)), we have

ẋ = −x + x²z ≥ −x + x².

By comparison with (7.19), it follows that

x(t) ≥ x̃(t).

Hence, x(t) escapes to infinity, at a time t* ≤ tmax(x0). On the contrary, in the cascade connection

ẋ = −x³ + x²z
ż = −z

the equilibrium (x, z) = (0, 0) is globally asymptotically stable because the upper subsystem, viewed as a system with input z, is input-to-state stable.
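Example 3 can be reproduced numerically. The sketch below is an illustration added here (the initial values are assumptions): with the non-ISS upper subsystem the solution blows up before t = tmax(x0), while with the input-to-state stable upper subsystem ẋ = −x³ + x²z the state converges.

```python
# Sketch: contrast the two cascades of Example 3.
import numpy as np
from scipy.integrate import solve_ivp

def cascade(t, s, iss):
    x, z = s
    xdot = -x**3 + x**2 * z if iss else -x + x**2 * z
    return [xdot, -z]

def blowup(t, s, iss):            # stop when |x| becomes huge
    return 1e6 - abs(s[0])
blowup.terminal = True

x0, z0 = 3.0, 2.0                 # z0 chosen so that z(t) >= 1 long enough
bad = solve_ivp(cascade, (0.0, 5.0), [x0, z0], args=(False,), events=blowup, max_step=1e-3)
good = solve_ivp(cascade, (0.0, 5.0), [x0, z0], args=(True,), max_step=1e-3)

print("non-ISS cascade: stopped at t =", bad.t[-1], "with |x| =", abs(bad.y[0, -1]))
print("ISS cascade:     |x(5)| =", abs(good.y[0, -1]))       # close to zero
```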
7.5.3 Feedback Connections

In this section, we investigate the stability property of feedback-connected nonlinear systems, and we will see that the property of input-to-state stability lends itself to a simple characterization of an important sufficient condition under which the feedback interconnection of two globally asymptotically stable systems remains globally asymptotically stable. Consider the following interconnected system (Fig. 7.4)

ẋ1 = f1(x1, x2)
ẋ2 = f2(x1, x2, u),          (7.20)

in which x1 ∈ Rn1, x2 ∈ Rn2, u ∈ Rm, and f1(0, 0) = 0, f2(0, 0, 0) = 0. Suppose that the first subsystem, viewed as a system with internal state x1 and input x2, is input-to-state stable. Likewise, suppose that the second subsystem, viewed as a system with internal state x2 and inputs x1 and u, is input-to-state stable. In view of the results presented earlier, the hypothesis of input-to-state stability of the first subsystem is equivalent to the existence of functions β1(·, ·), γ1(·), the first of class KL and the second of class K, such that the response x1(·) to any input x2(·) ∈ L∞^{n2} satisfies

|x1(t)| ≤ max{β1(|x1(0)|, t), γ1(‖x2(·)‖∞)}   for all t ≥ 0.          (7.21)

Likewise, the hypothesis of input-to-state stability of the second subsystem is equivalent to the existence of three class functions β2(·), γ2(·), and γu(·), such that the response x2(·) to any input x1(·) ∈ L∞^{n1}, u(·) ∈ L∞^m satisfies

|x2(t)| ≤ max{β2(|x2(0)|, t), γ2(‖x1(·)‖∞), γu(‖u(·)‖∞)}   for all t ≥ 0.          (7.22)

Fig. 7.4 Feedback connection

The important result, for the analysis of the stability of the interconnected system (7.20), is that if the composite function γ1 ∘ γ2(·) is a simple contraction, i.e., if

γ1(γ2(r)) < r   for all r > 0,          (7.23)

the system in question is input-to-state stable. This result is usually referred to as the small-gain theorem.

Theorem 7.8 If the condition (7.23) holds, system (7.20), viewed as a system with state x = (x1, x2) and input u, is input-to-state stable.

The condition (7.23), i.e., the condition that the composed function γ1 ∘ γ2(·) is a contraction, is usually referred to as the small-gain condition.
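A quick numerical reading of the small-gain condition is sketched below; the gain functions γ1(r) = 0.8r and γ2(r) = r/(1 + r) are hypothetical choices, not taken from the text, used only to show how (7.23) can be checked on a grid.

```python
# Sketch with hypothetical gain functions: check gamma1(gamma2(r)) < r on a grid.
import numpy as np

gamma1 = lambda r: 0.8 * r
gamma2 = lambda r: r / (1.0 + r)

r = np.logspace(-6, 6, 1000)
margin = r - gamma1(gamma2(r))
print("small-gain condition holds on the grid:", bool(np.all(margin > 0)))
```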
7.5.4 The Steady-State Response
In this subsection, we show how the concept of steady state, introduced earlier, and the property of input-to-state stability are useful in the analysis of the steady-state response of a system to inputs generated by a separate autonomous dynamical system (see [10]). Example 4 Consider an n-dimensional, single-input, asymptotically stable linear system z˙ = Fz + Gu
(7.24)

forced by the harmonic input u(t) = u0 sin(ωt + φ0). A simple method to analyze the asymptotic behavior of (7.24) consists in viewing the forcing input u(t) as provided by an autonomous "signal generator" of the form

ẇ = [[0, ω], [−ω, 0]] w := Sw
u = [1  0] w := Qw
and in analyzing the steady-state behavior of the associated "augmented" system

ẇ = Sw
ż = Fz + GQw.
(7.25)
Since the spectra of F and S are disjoint, the Sylvester equation ΠS = FΠ + GQ has a solution Π, and the graph of the linear map z = Πw is an invariant subspace for the system (7.25). Since all trajectories of (7.25) approach this subspace as t → ∞, the limit behavior of (7.25) is determined by the restriction of its motion to this invariant subspace. Revisiting this analysis from the viewpoint of the more general notion of steady state introduced earlier, let W ⊂ R2 be a set of the form

W = {w ∈ R2 : |w| ≤ c}
(7.26)
in which c is a fixed number, and suppose the set of initial conditions for (7.25) is W × Rn. This is in fact the case when the problem of evaluating the periodic response of (7.24) to harmonic inputs whose amplitude does not exceed a fixed number c is addressed. The set W is compact and invariant for the upper subsystem of (7.25), and, as is easy to check, the ω-limit set of W under the motion of the upper subsystem of (7.25) is the subset W itself. The set W × Rn is closed and positively invariant for the full system (7.25), and, moreover, since the lower subsystem of (7.25) is input-to-state stable, the motions of (7.25), for initial conditions taken in W × Rn, are ultimately bounded. Then, it is easy to check that the steady-state locus of (7.25) is the set

ω(B) = {(w, z) ∈ R2 × Rn : w ∈ W, z = Πw},

i.e., the graph of the restriction of the map z = Πw to the set W. The restriction of (7.25) to the invariant set ω(B) characterizes the steady-state behavior of (7.24) under the family of all harmonic inputs of fixed angular frequency ω and amplitude not exceeding c.
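The computation in Example 4 is easy to reproduce. The sketch below uses illustrative matrices F, G and frequency ω (assumptions, not from the text): it solves the Sylvester equation ΠS = FΠ + GQ with scipy and confirms by simulation that z(t) − Πw(t) tends to zero.

```python
# Sketch of Example 4 with illustrative numbers: steady-state response to a
# harmonic input generated by w' = S w, u = Q w.
import numpy as np
from scipy.linalg import solve_sylvester
from scipy.integrate import solve_ivp

omega = 2.0
S = np.array([[0.0, omega], [-omega, 0.0]])
Q = np.array([[1.0, 0.0]])
F = np.array([[0.0, 1.0], [-2.0, -3.0]])      # Hurwitz
G = np.array([[0.0], [1.0]])

# Pi*S = F*Pi + G*Q  <=>  F*Pi + Pi*(-S) = -G*Q
Pi = solve_sylvester(F, -S, -G @ Q)

def aug(t, s):
    w, z = s[:2], s[2:]
    return np.concatenate([S @ w, F @ z + G @ (Q @ w)])

sol = solve_ivp(aug, (0.0, 60.0), [1.0, 0.0, 0.0, 0.0], max_step=0.01)
w_end, z_end = sol.y[:2, -1], sol.y[2:, -1]
print("steady-state error |z - Pi*w| at t = 60:", np.linalg.norm(z_end - Pi @ w_end))
```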
Example 5 A similar conclusion, namely, the fact that the steady-state locus is the graph of a map, can be reached if the "signal generator" is any nonlinear system, with initial conditions chosen in a compact invariant set W. More precisely, consider an augmented system of the form

ẇ = s(w)
ż = Fz + Gq(w),
(7.27)
in which w ∈ W ⊂ Rr and z ∈ Rn, and assume that (i) all eigenvalues of F have negative real part and (ii) the set W is a compact set, invariant for the upper subsystem of (7.27). As in the previous example, since the lower subsystem of (7.27) is input-to-state stable, the motions of system (7.27), for initial conditions taken in W × Rn, are ultimately bounded. It is easy to check that the steady-state locus of (7.27) is the graph of the map π : W → Rn defined by

π(w) = lim_{T→∞} ∫_{−T}^{0} e^{−Fτ} G q(w(τ, w)) dτ.          (7.28)
There are various ways in which the result discussed in these examples can be generalized. For instance, it can be extended to analyze, in the neighborhood of a locally exponentially stable equilibrium point, the steady-state response of a nonlinear system z˙ = f (z, u)
(7.29)
to an input u produced by a signal generator of the form w˙ = s(w) u = q(w)
(7.30)
with initial conditions in a compact invariant set W. In this case, too, it is possible to show that, if W is small enough, the steady-state locus of (7.29)–(7.30) is the graph of a map π : W → Rn .
7.6 Stabilization of Nonlinear Systems via State Feedback

7.6.1 Relative Degree, Normal Forms
In this section, we consider the class of single-input single-output nonlinear systems that can be modeled by equations of the form

ẋ = f(x) + g(x)u
y = h(x)          (7.31)

in which x ∈ Rn and in which f : Rn → Rn and g : Rn → Rn are vector fields and h : Rn → R is a smooth function. Such systems are usually referred to as input-affine systems. Stabilization of nonlinear systems is a very difficult task, and general methods are not available. Only if the equations of the system exhibit a special structure do there exist systematic methods for the design of pure state feedback (or, if necessary, dynamic, output feedback) laws yielding global asymptotic stability of an equilibrium. Special structures of
the equations, which facilitate the design of feedback laws, are revealed by changes of coordinates in the state space. While, in the case of a linear system, a change of coordinates consists in replacing the original state vector x with a new vector x̃ related to x by means of the linear transformation x̃ = Tx, in which T is a nonsingular matrix, in the case of a nonlinear system, a change of coordinates is a transformation x̃ = Φ(x) in which Φ(·) is a map Rn → Rn having the following properties: (i) Φ(·) is invertible, i.e., there exists a map Φ⁻¹ : Rn → Rn such that Φ⁻¹(Φ(x)) = x for all x ∈ Rn and Φ(Φ⁻¹(x̃)) = x̃ for all x̃ ∈ Rn; (ii) Φ(·) and Φ⁻¹(·) are both smooth mappings, i.e., have continuous partial derivatives of any order. A transformation of this type is called a global diffeomorphism. Sometimes, a transformation possessing properties (i) and (ii) and defined for all x is difficult to find. Thus, in some cases, one rather looks at transformations defined only in a neighborhood of a given point. A transformation of this type is called a local diffeomorphism.

For the purpose of deriving a special form of the equations describing the system, it is appropriate to associate with (7.31) an integer, known as relative degree, that, in a nutshell, "counts" the number of times the output y(t) of the system has to be differentiated (with respect to time) so as to have the input u(t) explicitly appearing. Since taking successive derivatives of the output of (7.31) implies taking successive derivatives of h(x(t)), in which x(t) obeys ẋ(t) = f(x(t)) + g(x(t))u(t), it is convenient to introduce a notation by means of which such successive derivatives can be expressed in compact form. Let λ be a real-valued smooth function and f a smooth vector field, both defined on a subset U of Rn. The function Lfλ is the real-valued smooth function defined as

Lfλ(x) = Σ_{i=1}^{n} (∂λ/∂xi) fi(x) := (∂λ/∂x) f(x).
This function is sometimes called the derivative of λ along f. If g is another vector field, the notation Lg Lfλ(x) stands for the derivative of the real-valued function Lfλ along g, and the notation Lf^k λ(x) stands for the derivative of the real-valued function Lf^{k−1}λ along f. The nonlinear system (7.31) is said to have relative degree r at a point x° if there exists a neighborhood U of x° such that: (i) Lg Lf^k h(x) = 0 for all x ∈ U and all k < r − 1; (ii) Lg Lf^{r−1} h(x°) ≠ 0.
The concept thus introduced is a local concept, namely, r may depend on the specific point x° about which the functions Lg Lf^k h(x) are evaluated. The value of r may be different at different points of Rn, and there may be points where a relative degree cannot be defined. However, since f(x), g(x), and h(x) are smooth, the set of points where a relative degree can be defined is an open and dense subset of Rn. Note that a single-input single-output linear system is a special case of a system of the form (7.31), obtained by setting f(x) = Ax, g(x) = B, and h(x) = Cx. In this case, Lf^k h(x) = CA^k x and therefore Lg Lf^k h(x) = CA^k B. Thus, the relative degree r is the smallest integer for which CA^{r−1}B ≠ 0.

In order to check that the integer in question characterizes the number of derivatives of y(t) needed to have u(t) explicitly appearing, assume that the system at time t = 0 is in the state x(0) = x°, and calculate the value of the output y(t) and of its derivatives with respect to time, y^{(k)}(t), for k = 1, 2, . . ., for values of t near t = 0. By definition y(t) = h(x(t)) and

y^{(1)}(t) = (∂h/∂x)(dx/dt) = (∂h/∂x)[f(x(t)) + g(x(t))u(t)] = Lf h(x(t)) + Lg h(x(t))u(t).

At time t = 0, y^{(1)}(0) = Lf h(x°) + Lg h(x°)u(0). Thus, if r = 1, the value y^{(1)}(0) is an affine function of u(0). If r > 1 and |t| is small (in which case x(t) remains in a neighborhood of x°), one has Lg h(x(t)) = 0 for all such t and consequently y^{(1)}(t) = Lf h(x(t)), which in turn yields

y^{(2)}(t) = (∂Lf h/∂x)(dx/dt) = (∂Lf h/∂x)[f(x(t)) + g(x(t))u(t)] = Lf^2 h(x(t)) + Lg Lf h(x(t))u(t).

At time t = 0, y^{(2)}(0) = Lf^2 h(x°) + Lg Lf h(x°)u(0). Thus, if r = 2, the value y^{(2)}(0) is an affine function of u(0). If r > 2 and |t| is small, one has Lg Lf h(x(t)) = 0 and consequently y^{(2)}(t) = Lf^2 h(x(t)). Continuing in the same way, one arrives at
y^{(k)}(t) = Lf^k h(x(t))   for all k < r and all t near t = 0,
y^{(r)}(0) = Lf^r h(x°) + Lg Lf^{r−1} h(x°)u(0).
The calculations above suggest that the functions h(x), Lf h(x), . . . , Lf^{r−1} h(x) must have a special importance. As a matter of fact, such functions can be used in order to define, at least partially, a local coordinate transformation around x°. If r = n, these functions define a full change of coordinates. Otherwise, if r < n, the change of coordinates can be completed by picking n − r additional functions, yielding a full change of coordinates.

Proposition 7.1 Suppose the system has relative degree r at x°. Then, r ≤ n. If r is strictly less than n, it is always possible to find n − r more functions ψ1(x), . . . , ψn−r(x), such that the mapping

Φ(x) = col(ψ1(x), . . . , ψn−r(x), h(x), Lf h(x), . . . , Lf^{r−1} h(x))

qualifies as a local coordinates transformation in a neighborhood of x°. The value at x° of these additional functions can be fixed arbitrarily. Moreover, it is always possible to choose ψ1(x), . . . , ψn−r(x) in such a way that

Lg ψi(x) = 0
for all 1 ≤ i ≤ n − r and all x around x◦ .
The description of the system in the new coordinates is found very easily. Set

z = col(z1, z2, . . . , zn−r) = col(ψ1(x), ψ2(x), . . . , ψn−r(x)),
ξ = col(ξ1, ξ2, . . . , ξr) = col(h(x), Lf h(x), . . . , Lf^{r−1} h(x))

and x̃ = col(z1, . . . , zn−r, ξ1, . . . , ξr) := Φ(x). Bearing in mind the previous calculations, it is seen that

dξ1/dt = (∂h/∂x)(dx/dt) = Lf h(x(t)) = ξ2(t)
· · ·
dξr−1/dt = (∂Lf^{r−2}h/∂x)(dx/dt) = Lf^{r−1} h(x(t)) = ξr(t),

while, for ξr,

dξr/dt = Lf^r h(x(t)) + Lg Lf^{r−1} h(x(t))u(t).
On the right-hand side of this equation, x must be replaced by its expression as a function of x̃, which will be written as x = Φ⁻¹(z, ξ). Thus, setting

q(z, ξ) = Lf^r h(Φ⁻¹(z, ξ))
b(z, ξ) = Lg Lf^{r−1} h(Φ⁻¹(z, ξ)),

the equation in question can be rewritten as

dξr/dt = q(z(t), ξ(t)) + b(z(t), ξ(t))u(t).

Note that, by definition of relative degree, at the point x̃° = col(z°, ξ°) = Φ(x°), we have b(z°, ξ°) = Lg Lf^{r−1} h(x°) ≠ 0. Thus, the coefficient b(z, ξ) is nonzero for all (z, ξ) in a neighborhood of (z°, ξ°). As far as the other new coordinates are concerned, one cannot expect any special structure for the corresponding equations, if nothing else has been specified. However, if ψ1(x), . . . , ψn−r(x) have been chosen in such a way that Lg ψi(x) = 0, then

dzi/dt = (∂ψi/∂x)[f(x(t)) + g(x(t))u(t)] = Lf ψi(x(t)) + Lg ψi(x(t))u(t) = Lf ψi(x(t)).

Setting
f0(z, ξ) = col(Lf ψ1(Φ⁻¹(z, ξ)), . . . , Lf ψn−r(Φ⁻¹(z, ξ))),

the latter can be rewritten as

dz/dt = f0(z(t), ξ(t)).

Thus, in summary, in the new (local) coordinates, the system is described by equations of the form

ż = f0(z, ξ)
ξ̇1 = ξ2
ξ̇2 = ξ3
· · ·
ξ̇r−1 = ξr
ξ̇r = q(z, ξ) + b(z, ξ)u.          (7.32)

In addition to these equations, one has to specify how the output of the system is related to the new state variables. Being y = h(x), it is readily seen that y = ξ1. The equations thus found are said to be in strict normal form. They can be given a compact expression if one uses the three matrices Â ∈ Rr × Rr, B̂ ∈ Rr × R, and Ĉ ∈ R × Rr, defined as
Â = [[0, 1, 0, · · ·, 0], [0, 0, 1, · · ·, 0], [·, ·, ·, · · ·, ·], [0, 0, 0, · · ·, 1], [0, 0, 0, · · ·, 0]],   B̂ = col(0, 0, · · ·, 0, 1),   Ĉ = [1  0  0  · · ·  0].          (7.33)

With the aid of such matrices, the equations above can be rewritten in the form

ż = f0(z, ξ)
ξ̇ = Âξ + B̂[q(z, ξ) + b(z, ξ)u]
y = Ĉξ.
(7.34)
Note that, if f(0) = 0, that is, if x = 0 is an equilibrium of the autonomous system ẋ = f(x), and if h(0) = 0, the functions Lf^k h(x) are zero at x = 0. Since the values of the complementary functions ψj(x) at x = 0 are arbitrary, one can pick them in such a way that ψj(0) = 0. As a result, Φ(0) = 0. Accordingly, in system (7.34), we have f0(0, 0) = 0 and q(0, 0) = 0.

It can be shown that, if the system has relative degree r at each x° ∈ Rn and some additional technical conditions (involving the vector fields f(x), g(x), see [8, Chapter 11]) hold, the coordinates transformation Φ(x) considered in Proposition 7.1 and, consequently, the normal form (7.34) are globally defined. In what follows, we assume that this is the case.
7.6.2 Feedback Linearization

As anticipated, once the system has been expressed in normal form as in (7.34), the design of feedback laws is made easier. In fact, since by definition the coefficient b(z, ξ) is nowhere zero, it is admissible to consider a feedback law of the form

u = (1/b(z, ξ)) (−q(z, ξ) + K̂ξ + v),          (7.35)

in which K̂ ∈ R × Rr is a vector of design parameters. Under this feedback law, the system becomes

ż = f0(z, ξ)
ξ̇ = (Â + B̂K̂)ξ + B̂v
y = Ĉξ.          (7.36)

In the special case in which r = n, the subset z of new coordinates is missing, and the equations above reduce to those of a fully linear system

ξ̇ = (Â + B̂K̂)ξ + B̂v
y = Ĉξ.

It is seen, in this way, that if a system has relative degree n and a globally defined normal form exists, there exist a change of coordinates and a feedback law

ξ = col(h(x), Lf h(x), . . . , Lf^{n−1} h(x)),
u = (1/(Lg Lf^{n−1} h(x))) [−Lf^n h(x) + v]

that transform the system into a fully linear system. This design method is known as feedback linearization. Note also that, since the pair (Â, B̂) is controllable, the eigenvalues of (Â + B̂K̂) can be freely assigned.
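The Lie-derivative computations of this section can be carried out symbolically. The sketch below uses an illustrative pendulum-like system that is not taken from the text (sympy is assumed): it computes Lf h, Lg Lf h, and the feedback-linearizing law u = (1/(Lg Lf h))(−Lf²h + v) for a system with relative degree r = n = 2.

```python
# Symbolic sketch: Lie derivatives and feedback linearization for
# x1' = x2, x2' = -sin(x1) + u, y = x1 (illustrative system).
import sympy as sp

x1, x2, v = sp.symbols('x1 x2 v')
x = sp.Matrix([x1, x2])
f = sp.Matrix([x2, -sp.sin(x1)])   # drift vector field
g = sp.Matrix([0, 1])              # input vector field
h = x1                             # output

Lf = lambda lam: (sp.Matrix([lam]).jacobian(x) * f)[0]
Lg = lambda lam: (sp.Matrix([lam]).jacobian(x) * g)[0]

Lfh = Lf(h)
LgLfh = Lg(Lfh)                    # nonzero, so the relative degree is r = 2 = n
Lf2h = Lf(Lfh)

u = sp.simplify((-Lf2h + v) / LgLfh)
print("Lf h    =", Lfh)
print("Lg Lf h =", LgLfh)
print("u(x, v) =", u)              # here simply u = sin(x1) + v
```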
7.6.3 Global Stabilization via Partial Feedback Linearization

If r < n, the system (7.36) is only partly linear. As a matter of fact, the system is linear from the input-output viewpoint, but there are internal dynamics, namely, those of

ż = f0(z, ξ),          (7.37)

that are possibly nonlinear. Thus, if a feedback law of the form (7.35) is to be used, the role of such internal dynamics should be carefully weighted. In this respect, the following notions play an important role.
Definition 7 A system is globally minimum-phase if the equilibrium z = 0 of z˙ = f0 (z, 0) is globally asymptotically stable. The system is strongly minimum-phase if system (7.37), viewed as a system with input ξ and state z, is input-to-state stable. Remark 1 The terminology “minimum-phase” is borrowed from a terminology in use for linear systems. In fact, in a linear system, the dynamics (7.37) are linear dynamics that can be modeled as z˙ = F0 z + G0 ξ . It can be shown that, if the system is controllable and observable, the eigenvalues of F0 coincide with the zeros of the transfer function of the system. Since, according to a terminology that dates back to the work of H.W. Bode, systems whose transfer function have all zeros with negative real part are said to be “minimum-phase,” it turns out that a linear system is minimum-phase if (and only if) all the eigenvalues of F0 have negative real part, that is, if the system z˙ = F0 z is (globally) asymptotically stable. In the case of a nonlinear system, global stability of the equilibrium z = 0 of z˙ = f0 (z, 0) is only implied by the property of
input-to-state stability of (7.37), and this motivates the use of the attribute “strongly” to denote the latter, stronger, property.
If a system is strongly minimum-phase, global asymptotic stability of the equilibrium (z, ξ) = (0, 0) can be achieved by means of the partially linearizing control

u = (1/b(z, ξ)) (−q(z, ξ) + K̂ξ + v).          (7.38)

In fact, since the pair (Â, B̂) is reachable, it is possible to pick K̂ so that the matrix (Â + B̂K̂) is a Hurwitz matrix. In this case, the resulting system

ż = f0(z, ξ)
ξ̇ = (Â + B̂K̂)ξ + B̂v
appears as a cascade connection in which an input-to-state stable system (the lower subsystem) drives an input-to-state stable system (the upper subsystem). According to Theorem 7.7, such a cascade connection is input-to-state stable (and, hence, its equilibrium (z, ξ) = (0, 0) is globally asymptotically stable).

The feedback law (7.38) is expressed in the (z, ξ) coordinates that characterize the normal form (7.34). To express it in the original coordinates that characterize the model (7.31), it suffices to bear in mind that

b(z, ξ) = Lg Lf^{r−1} h(x),   q(z, ξ) = Lf^r h(x),

and to observe that, since ξi = Lf^{i−1} h(x) for i = 1, . . . , r, then

K̂ξ = Σ_{i=1}^{r} k̂i Lf^{i−1} h(x),

in which k̂1, . . . , k̂r are the entries of the row vector K̂. Thus, the following conclusion holds.

Proposition 7.2 Consider a system of the form (7.31), with f(0) = 0 and h(0) = 0. Suppose the system has relative degree r and possesses a globally defined normal form. Suppose the system is strongly minimum-phase. If K̂ ∈ R × Rr is any vector such that σ(Â + B̂K̂) ⊂ C⁻, the state feedback law

u(x) = (1/(Lg Lf^{r−1} h(x))) (−Lf^r h(x) + Σ_{i=1}^{r} k̂i Lf^{i−1} h(x))          (7.39)

globally asymptotically stabilizes the equilibrium x = 0.

This feedback strategy, although very intuitive and elementary, is not useful in a practical context, because it relies upon exact cancelation of certain nonlinear functions and, as such, is possibly non-robust. Uncertainties in f(x) and g(x) would make this strategy inapplicable. Moreover, the implementation of such a control law requires the availability, for feedback purposes, of the full state x of the system, a condition that might be hard to ensure. It will be seen in what follows how such drawbacks can be overcome.

7.6.4 Global Stabilization via Backstepping

We describe now a method for global asymptotic stabilization of a system of the following form

ż = f0(z, ξ1)
ξ̇ = Âξ + B̂(q(z, ξ) + b(z, ξ)u).          (7.40)

Unlike the case addressed in the previous section, here the dynamics of z are affected only by the first component ξ1 of ξ. The dynamics in question, though, are not required to be input-to-state stable, as in the previous section, but it is simply assumed that the equilibrium z = 0 of ż = f0(z, ξ1) is stabilizable by means of some control law ξ1 = v*(z). The method for stabilization is a recursive method, based on a simple "modular" property.

Lemma 7.4 Consider a system described by equations of the form

ż = f0(z, ξ)
ξ̇ = q(z, ξ) + b(z, ξ)u          (7.41)

in which (z, ξ) ∈ Rn × R and the functions f0(z, ξ), q(z, ξ), and b(z, ξ) are continuously differentiable functions. Suppose that b(z, ξ) ≠ 0 for all (z, ξ) and that f0(0, 0) = 0 and q(0, 0) = 0. If z = 0 is a globally asymptotically stable equilibrium of ż = f0(z, 0), there exists a differentiable function u = u(z, ξ) with u(0, 0) = 0, such that the equilibrium (z, ξ) = (0, 0) of

ż = f0(z, ξ)
ξ̇ = q(z, ξ) + b(z, ξ)u(z, ξ)

is globally asymptotically stable.

The construction of the stabilizing feedback u(z, ξ) is achieved as follows. First of all observe that, using the assumption b(z, ξ) ≠ 0, the imposition of the preliminary feedback law (7.35), with K̂ = 0, yields the simpler system
ż = f0(z, ξ)
ξ̇ = v.

Then, express f0(z, ξ) in the form f0(z, ξ) = f0(z, 0) + p(z, ξ)ξ, in which p(z, ξ) = [f0(z, ξ) − f0(z, 0)]/ξ is at least continuous. Since by assumption z = 0 is a globally asymptotically stable equilibrium of ż = f0(z, 0), by the converse Lyapunov theorem there exists a smooth real-valued function V(z), which is positive definite and proper, satisfying

(∂V/∂z) f0(z, 0) < 0

for all nonzero z. Consider, for the full system, the "candidate" Lyapunov function

W(z, ξ) = V(z) + (1/2)ξ²,

and observe that

(∂W/∂z) ż + (∂W/∂ξ) ξ̇ = (∂V/∂z)[f0(z, 0) + p(z, ξ)ξ] + ξv.

Choosing

v = −ξ − (∂V/∂z) p(z, ξ)          (7.42)

yields

(∂W/∂z) ż + (∂W/∂ξ) ξ̇ = (∂V/∂z) f0(z, 0) − ξ² < 0

for all nonzero (z, ξ), and this, by the direct Lyapunov criterion, shows that the feedback law

u(z, ξ) = (1/b(z, ξ)) (−q(z, ξ) − ξ − (∂V/∂z) p(z, ξ))

globally asymptotically stabilizes the equilibrium (z, ξ) = (0, 0) of the associated closed-loop system.

In the next Lemma (which contains the previous one as a particular case), this result is extended, by showing that, for the purpose of stabilizing the equilibrium (z, ξ) = (0, 0) of system (7.41), it suffices to assume that the equilibrium z = 0 of ż = f0(z, ξ) is stabilizable by means of a virtual control law ξ = v*(z).

Lemma 7.5 Consider again the system described by equations of the form (7.41). Suppose there exists a continuously differentiable function

ξ = v*(z),          (7.43)

with v*(0) = 0, which globally asymptotically stabilizes the equilibrium z = 0 of ż = f0(z, v*(z)). Then, there exists a differentiable function u = u(z, ξ) with u(0, 0) = 0, such that the equilibrium (z, ξ) = (0, 0) of

ż = f0(z, ξ)
ξ̇ = q(z, ξ) + b(z, ξ)u(z, ξ)

is globally asymptotically stable.

To prove the result and to construct the stabilizing feedback, it suffices to consider the (globally defined) change of variables y = ξ − v*(z), which transforms (7.41) into a system

ż = f0(z, v*(z) + y)
ẏ = −(∂v*/∂z) f0(z, v*(z) + y) + q(z, v*(z) + y) + b(z, v*(z) + y)u,

which meets the assumptions of Lemma 7.4, and then follow the construction of a stabilizing feedback as described. Using repeatedly the property indicated in Lemma 7.5, it is straightforward to derive the expression of a globally stabilizing feedback for a system in the form (7.40). It must be observed, though, that also such a feedback strategy, like the one discussed in the previous section, relies upon exact cancelation of certain nonlinear functions and, as such, is possibly non-robust.

Example 6 Consider the system

ż = z − z³ + ξ1
ξ̇1 = ξ2
ξ̇2 = z + ξ1² + u.

The dynamics of z can be stabilized by means of the virtual control ξ1*(z) = −2z. This yields a system ż = −z − z³ with Lyapunov function W1(z) = (1/2)z². Change ξ1 into ζ1 = ξ1 − ξ1*(z) = ξ1 + 2z, to obtain

ż = −z − z³ + ζ1
ζ̇1 = ξ2 − 2z − 2z³ + 2ζ1
ξ̇2 = z + (ζ1 − 2z)² + u.

The subsystem consisting of the two upper equations, viewed as a system with state (z, ζ1) and control ξ2, can be stabilized by means of a virtual control
ξ2*(z, ζ1) = −(−2z − 2z³ + 2ζ1) − ζ1 − ∂W1/∂z = −3ζ1 + z + 2z³.

This yields a system

ż = −z − z³ + ζ1
ζ̇1 = −z − ζ1

with Lyapunov function W2(z, ζ1) = (1/2)(z² + ζ1²). Change now ξ2 into ζ2 = ξ2 − ξ2*(z, ζ1) = ξ2 + 3ζ1 − z − 2z³ to obtain a system of the form

ż = −z − z³ + ζ1
ζ̇1 = −z − ζ1 + ζ2
ζ̇2 = a(z, ζ1, ζ2) + u

in which a(z, ζ1, ζ2) = (ζ1 − 2z)² − z − 4ζ1 − 3ζ2 + 3z³ + 2z⁵ − 2z²ζ1. This system is stabilized by means of the control

u(z, ζ1, ζ2) = −a(z, ζ1, ζ2) − ζ2 − ∂W2/∂ζ1 = −a(z, ζ1, ζ2) − ζ2 − ζ1,

and the corresponding closed-loop system has Lyapunov function

W3(z, ζ1, ζ2) = (1/2)(z² + ζ1² + ζ2²).

Reversing all changes of coordinates used in the above construction, one may find the expression u = ψ(z, ξ1, ξ2) of the stabilizing law in original coordinates (z, ξ1, ξ2).
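A minimal simulation of Example 6 is sketched below (the numerical values are assumptions). To avoid transcription slips, the cancelation term a(z, ζ1, ζ2) is evaluated here through the chain rule from the definitions of ζ1 and ζ2 given above, rather than from its expanded form; the closed loop drives (z, ξ1, ξ2) to the origin.

```python
# Sketch: backstepping controller of Example 6 in original coordinates.
import numpy as np
from scipy.integrate import solve_ivp

def control(z, xi1, xi2):
    z1 = xi1 + 2.0 * z                          # zeta1
    z2 = xi2 + 3.0 * z1 - z - 2.0 * z**3        # zeta2
    # a = (d/dt) zeta2 - u, via the chain rule on zeta2 = xi2 + 3*zeta1 - z - 2*z**3
    a = (z + xi1**2) + 3.0 * (-z - z1 + z2) + (1.0 + 6.0 * z**2) * (z + z**3 - z1)
    return -a - z2 - z1

def closed_loop(t, s):
    z, xi1, xi2 = s
    u = control(z, xi1, xi2)
    return [z - z**3 + xi1, xi2, z + xi1**2 + u]

sol = solve_ivp(closed_loop, (0.0, 15.0), [1.5, -1.0, 2.0], max_step=0.01)
print("|(z, xi1, xi2)| at t = 15:", np.linalg.norm(sol.y[:, -1]))   # close to 0
```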
7.6.5 Semiglobal Practical Stabilization via High-Gain Partial-State Feedback

The global stabilization results presented in the previous sections are indeed conceptually appealing, but the actual implementation of the feedback law requires accurate knowledge of the functions that characterize the model of the system to be stabilized and access to all components of its state. In this section, we show how these drawbacks can, in part, be overcome, if a less ambitious design goal is pursued, namely, if instead of seeking global stabilization one is interested in a feedback law capable of asymptotically steering to the equilibrium point all trajectories which have origin in an a priori fixed (and hence possibly large) bounded set.

Consider again a system satisfying the assumptions of Lemma 7.4. Observe that b(z, ξ), being continuous and nowhere zero, has a well-defined sign. Choose a simple control law of the form

u = −k sign(b) ξ          (7.44)

to obtain the system

ż = f0(z, ξ)
ξ̇ = q(z, ξ) − k|b(z, ξ)|ξ.          (7.45)

Assume that the equilibrium z = 0 of ż = f0(z, 0) is globally asymptotically but also locally exponentially stable. If this is the case, then the linear approximation of the first equation of (7.45) at the point (z, ξ) = (0, 0) is a system of the form

ż = F0z + G0ξ          (7.46)

in which F0 is a Hurwitz matrix. Moreover, the linear approximation of the second equation of (7.45) at the point (z, ξ) = (0, 0) is a system of the form

ξ̇ = Qz + Rξ − kb0ξ,

in which b0 = |b(0, 0)|. It follows that the linear approximation of system (7.45) at the equilibrium (z, ξ) = (0, 0) is a linear system ẋ = Ax in which

A = [[F0, G0], [Q, R − kb0]].
Standard arguments show that, if the number k is large enough, the matrix in question has all eigenvalues with negative real part (in particular, as k increases, n eigenvalues approach the n eigenvalues of F0 and the remaining one is a real eigenvalue that tends to −∞). It is therefore concluded, from the principle of stability in the first approximation, that if k is sufficiently large, the equilibrium (z, ξ) = (0, 0) of the closed-loop system (7.45) is locally asymptotically (actually locally exponentially) stable.

However, a stronger result holds. In fact, it can be proven that, for any arbitrary compact subset K of Rn × R, there exists a number k*, such that, for all k ≥ k*, the equilibrium (z, ξ) = (0, 0) of the closed-loop system (7.45) is locally asymptotically stable and all initial conditions in K produce a trajectory that asymptotically converges to this equilibrium. In other words, the basin of attraction of the equilibrium (z, ξ) = (0, 0) of the closed-loop system contains the set K. Note that the number k* depends on the choice of the set K and, in principle, it increases as the size of K increases. The property in question can be summarized as follows (see [7, Chapter 9] for further details).

A system ẋ = f(x, u) is said to be semiglobally stabilizable (an equivalent, but a bit longer, terminology is asymptotically stabilizable with guaranteed basin of attraction) at a given equilibrium point x̄ if, for each compact subset K ⊂ Rn, there exists a feedback law u = u(x), which in general depends on K, such that in the corresponding closed-loop system ẋ = f(x, u(x)) the point x̄ is a locally asymptotically stable equilibrium and

x(0) ∈ K   ⇒   lim_{t→∞} x(t) = x̄

(i.e., the compact subset K is contained in the basin of attraction of the equilibrium x̄). The result sketched above claims that system (7.41), under the said assumptions, is semiglobally stabilizable at (z, ξ) = (0, 0), by means of a feedback law of the form (7.44).

Example 7 Consider a system, with x = (z, ξ) ∈ R², modeled by

ż = −z + zξ
ξ̇ = z²ξ + u.          (7.47)

In the upper subsystem, the point z = 0 is a globally asymptotically stable equilibrium for ξ = 0. However, the system, viewed as a system with input ξ, is not input-to-state stable, and hence one cannot use the method of partial feedback linearization to achieve global asymptotic stability. One can prove, though, that asymptotic stabilization with a guaranteed region of attraction is possible if a control u = −kξ is used, with k > 0 sufficiently large. To this end, let R be an (arbitrarily large) fixed number, and suppose the initial conditions of (7.47) satisfy |x(0)| ≤ R. Consider, for (7.47), the candidate Lyapunov function V(x) = (1/2)(z² + ξ²) and observe that, with u = −kξ, one has

∂V/∂x f(x) = −z² + z²ξ + z²ξ² − kξ².

So long as |x(t)| ≤ R, the previous expression can be estimated as

∂V/∂x f(x) ≤ −z² + (R + R²)|z||ξ| − kξ² = (|z|  |ξ|) [[−1, (1/2)(R + R²)], [(1/2)(R + R²), −k]] (|z|; |ξ|).

One can make the right-hand side negative definite by choosing a suitably large k. For instance, it can easily be seen that if k ≥ k* = (1/2)(1 + (R + R²)²), then

(|z|  |ξ|) [[−1, (1/2)(R + R²)], [(1/2)(R + R²), −k]] (|z|; |ξ|) ≤ (|z|  |ξ|) [[−1/2, 0], [0, −1/2]] (|z|; |ξ|).

Consider now the sublevel set of V(x) determined by V(x) ≤ (1/2)R², which is the set of all x satisfying |x| ≤ R. On such a set, the estimates above are valid, and, hence, it is seen that there is a number k*, depending on the choice of R, such that if k ≥ k* and |x(t)| ≤ R, then

∂V/∂x f(x) ≤ −(1/2)|x|².

This proves that the set in question is invariant in positive time for the controlled system. For any initial condition satisfying |x(0)| ≤ R, the previous inequality holds, and it can be concluded that x(t) exponentially decays to 0.
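Example 7 can be checked numerically as follows (a sketch; the choices R = 3, k = 75 ≥ k* = 72.5, and the randomly sampled initial conditions are assumptions): every sampled initial condition with |x(0)| ≤ R is steered to the origin by u = −kξ.

```python
# Sketch: semiglobal stabilization of Example 7 with u = -k*xi.
import numpy as np
from scipy.integrate import solve_ivp

R, k = 3.0, 75.0     # k* = 0.5*(1 + (R + R**2)**2) = 72.5 for R = 3

def f(t, x):
    z, xi = x
    return [-z + z * xi, z**2 * xi - k * xi]

rng = np.random.default_rng(0)
worst = 0.0
for _ in range(20):
    x0 = rng.normal(size=2)
    x0 *= R * rng.uniform() / np.linalg.norm(x0)   # random point with |x0| <= R
    sol = solve_ivp(f, (0.0, 30.0), x0, max_step=0.01)
    worst = max(worst, np.linalg.norm(sol.y[:, -1]))

print("largest terminal norm over the sampled initial conditions:", worst)  # ~ 0
```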
The design methods shown above can be easily extended to deal with a system of the form (7.32), having r > 1. Let, in this case, the variable ξr be replaced by a new state variable defined as

θ = ξr + a0ξ1 + a1ξ2 + · · · + ar−2ξr−1,

in which a0, a1, . . . , ar−2 are design parameters. After this change of coordinates, a system is obtained that has a structure identical to that of system (7.41). In fact, having set

ζ = col(z, ξ1, . . . , ξr−1) ∈ Rn−1

and defined f̄0(ζ, θ), q̄(ζ, θ), b̄(ζ, θ) as

f̄0(ζ, θ) = col(f0(z, ξ1, . . . , ξr−1, −Σ_{i=1}^{r−1} ai−1ξi + θ), ξ2, . . . , ξr−1, −Σ_{i=1}^{r−1} ai−1ξi + θ)

q̄(ζ, θ) = a0ξ2 + a1ξ3 + · · · + ar−2(−Σ_{i=1}^{r−1} ai−1ξi + θ) + q(z, ξ1, . . . , ξr−1, −Σ_{i=1}^{r−1} ai−1ξi + θ)

b̄(ζ, θ) = b(z, ξ1, . . . , ξr−1, −Σ_{i=1}^{r−1} ai−1ξi + θ),

the system can be rewritten in the form

ζ̇ = f̄0(ζ, θ)
θ̇ = q̄(ζ, θ) + b̄(ζ, θ)u,          (7.48)

identical to that of (7.41). Thus, identical stability results can be obtained, if f̄0(ζ, θ), q̄(ζ, θ), and b̄(ζ, θ) satisfy conditions corresponding to those assumed on f0(z, ξ), q(z, ξ), and b(z, ξ). From this viewpoint, it is trivial to check that f̄0(0, 0) = 0, q̄(0, 0) = 0, and |b̄(ζ, θ)| > 0 for all (ζ, θ). In order to be able to use the stabilization results indicated above, it remains to check whether the equilibrium ζ = 0 of ζ̇ = f̄0(ζ, 0) is globally asymptotically and also locally exponentially stable. This system has the structure of a cascade interconnection

ż = f0(z, ξ1, . . . , ξr−1, −Σ_{i=1}^{r−1} ai−1ξi)

col(ξ̇1, ξ̇2, . . . , ξ̇r−2, ξ̇r−1) = [[0, 1, 0, · · ·, 0], [0, 0, 1, · · ·, 0], [·, ·, ·, · · ·, ·], [0, 0, 0, · · ·, 1], [−a0, −a1, −a2, · · ·, −ar−2]] col(ξ1, ξ2, . . . , ξr−2, ξr−1).          (7.49)

If the ai's are such that the polynomial

d(λ) = λ^{r−1} + ar−2λ^{r−2} + · · · + a1λ + a0          (7.50)

is Hurwitz, the lower subsystem of the cascade is (globally) asymptotically stable. If system (7.32) is strongly minimum-phase, the upper subsystem of the cascade, viewed as a system with input (ξ1, . . . , ξr−1) and state z, is input-to-state stable. Thus, system (7.49) is globally asymptotically stable. If, in addition, the linear approximation of (7.32) is locally exponentially stable, then system (7.49) is also locally exponentially stable. Under such hypotheses, observing that, in the present context, the stabilizing feedback (7.44) becomes

u = −kθ = −k(a0ξ1 + a1ξ2 + · · · + ar−2ξr−1 + ξr)

and bearing in mind the fact that ξi = Lf^{i−1} h(x) for i = 1, . . . , r, one can conclude the following stabilization result.

Proposition 7.3 Consider system (7.31), with f(0) = 0 and h(0) = 0. Suppose the system has relative degree r and possesses a globally defined normal form. Suppose the system is strongly minimum-phase. Suppose also that the matrix F0 that characterizes the linear approximation (7.46) of f0(z, ξ) at (z, ξ) = (0, 0) is Hurwitz. Let the control be provided by a feedback of the form

u = −k sign(Lg Lf^{r−1} h(x)) Σ_{i=1}^{r} ai Lf^{i−1} h(x),          (7.51)

with the ai's such that the polynomial (7.50) is Hurwitz and ar−1 = 1. Then, for every choice of a compact set K, there is a number k* and a finite time T, such that, if k ≥ k*, the equilibrium x = 0 of the resulting closed-loop system is asymptotically (and locally exponentially) stable, with a domain of attraction A that contains the set K.

In the (z, ξ) coordinates, the feedback law presented above depends only on the components ξ1, . . . , ξr of the state and not on the component z. For this reason, it is usually referred to as a partial-state feedback law.
7.7 Observers and Stabilization via Output Feedback

7.7.1 Canonical Forms of Observable Nonlinear Systems
In this and in the following two sections, we consider nonlinear systems modeled by equations of the form

ẋ = f(x, u)
y = h(x, u)          (7.52)

with state x ∈ Rn, input u ∈ Rm, and output y ∈ R. The design of observers for such systems usually requires the preliminary transformation of the equations describing the system into a form that corresponds to the observability canonical form usually considered for linear systems. A key requirement for the existence of observers is the existence of a global change of coordinates x̃ = Φ(x) carrying system (7.52) into a system of the form

x̃˙1 = f̃1(x̃1, x̃2, u)
x̃˙2 = f̃2(x̃1, x̃2, x̃3, u)
· · ·
x̃˙n−1 = f̃n−1(x̃1, x̃2, . . . , x̃n, u)
x̃˙n = f̃n(x̃1, x̃2, . . . , x̃n, u)
y = h̃(x̃1, u)          (7.53)

in which h̃(x̃1, u) and the f̃i(x̃1, x̃2, . . . , x̃i+1, u)'s satisfy

∂h̃/∂x̃1 ≠ 0   and   ∂f̃i/∂x̃i+1 ≠ 0,   for all i = 1, . . . , n − 1          (7.54)

for all x̃ ∈ Rn and all u ∈ Rm. This form is usually referred to as the uniform observability canonical form.
The existence of canonical forms of this kind can be checked as follows (see [4, Chapter 2]). Define, recursively, a sequence of real-valued functions ϕi(x, u) as

ϕ1(x, u) := h(x, u),   ϕi(x, u) := (∂ϕi−1/∂x) f(x, u),

for i = 1, . . . , n. Using these functions, define a sequence of Ri-valued functions Φi(x, u) as follows

Φi(x, u) = col(ϕ1(x, u), . . . , ϕi(x, u))

for i = 1, . . . , n. Finally, for each of the Φi(x, u)'s, compute the subspace

Ki(x, u) = ker[∂Φi/∂x]|_(x,u),

in which ker[M], the null space of the matrix M, denotes the subspace consisting of all vectors v such that Mv = 0. Note that, since the entries of the matrix ∂Φi/∂x are in general dependent on (x, u), so is its null space Ki(x, u). The role played by the objects thus defined in the construction of the change of coordinates, yielding an observability canonical form, is explained in this result.

Lemma 7.6 Consider system (7.52) and the map x̃ = Φ(x) defined by

Φ(x) = col(ϕ1(x, 0), ϕ2(x, 0), . . . , ϕn(x, 0)).

Suppose that Φ(x) has a globally defined and continuously differentiable inverse. Suppose also that, for all i = 1, . . . , n,

dim[Ki(x, u)] = n − i   for all u ∈ Rm and for all x ∈ Rn,
Ki(x, u) is independent of u.

Then, system (7.52) is globally transformed, via Φ(x), to a system in uniform observability canonical form.

Note that, in the case of an input-affine system (see (7.31)), ϕi(x, 0) = Lf^{i−1} h(x). Hence, in this case

Φ(x) = col(h(x), Lf h(x), . . . , Lf^{n−1} h(x)).

If the system is linear, Lf^i h(x) = CA^i x, and hence x̃ = Φ(x) is the change of coordinates that brings the system into the observability canonical form.
7.7.2
High-Gain Observers
Once a system has been changed into its uniform observability canonical form, an asymptotic observer can be built as follows. Take a copy of the dynamics of (7.53), corrected by an innovation term proportional to the difference between the output of (7.53) and the output of the copy. More precisely, consider a system of the form ˜ x1 , u)) x˙ˆ 1 = f˜1 (ˆx1 , xˆ 2 , u) + κcn−1 (y − h(ˆ ˙xˆ 2 = f˜2 (ˆx1 , xˆ 2 , xˆ 3 , u) + κ 2 cn−2 (y − h(ˆ ˜ x1 , u)) ··· ˜ x1 , u)) x˙ˆ n−1 = f˜n−1 (ˆx, u) + κ n−1 c1 (y − h(ˆ ˙xˆ n = f˜n (ˆx, u) + κ n c0 (y − h(ˆ ˜ x1 , u)) ,
(7.55)
in which κ and cn−1 , cn−2 , . . . , c0 are design parameters. The state of the system thus defined is capable to asymptotically track the state of system (7.53), no matter what the initial conditions x(0), x˜ (0) and the input u(t) are, provided that the two following technical hypotheses hold: (i) Each of the maps f˜i (˜x1 , . . . , x˜ i , x˜ i+1 , u), for i = 1, . . . , n, is globally Lipschitz with respect to (˜x1 , . . . , x˜ i ), uniformly in x˜ i+1 and u. (ii) There exist two real numbers α and β, with 0 < α < β, such that ∂ h˜ α≤ ≤β, ∂ x˜ 1
∂ f˜ i and α ≤ ≤β, ∂ x˜ i+1 for all i = 1, . . . , n − 1 (7.56)
for all x˜ ∈ Rn , and all u ∈ Rm . Let a “scaled observation error” e = col(e1 , . . . , en ) be defined as ei = κ n−i (ˆxi − x˜ i ),
i = 1, 2, . . . , n.
Then, system (7.52) is globally transformed, via (x), to a system in uniform observability canonical form.
Then, the following result holds (see [4, Chapter 6]).
Note that, in the case of an input-affine system (see (7.31)), ϕi (x, 0) = Lfi−1 h(x). Hence, in this case
Theorem 7.9 Suppose assumptions (i) and (ii) hold. Then, there is a choice of the coefficients c0 , c1 , . . . , cn−1 (that depend only on the parameters αandβ in (7.56)) and a
number κ* such that, if κ ≥ κ*, the observation error e(t) exponentially decays to zero as time tends to infinity, regardless of what the initial states x̃(0), x̂(0) and the input u(t) are. The convergence property indicated in this result requires a sufficiently large value of the "gain" parameter κ. For this reason the observer in question is called a high-gain observer (see [4, Chapter 6] for further details).
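To make the construction concrete, the following minimal Python sketch simulates a high-gain observer of the form (7.55) for a hypothetical two-state system already in uniform observability canonical form; the specific dynamics, the gain κ, and the coefficients c_0, c_1 are illustrative choices of our own, not prescriptions from the chapter.

```python
import numpy as np

# Hypothetical plant in uniform observability canonical form (7.53):
# x1' = x2, x2' = -sin(x1) - x2 + u, y = x1  (a toy example)
def plant_rhs(x, u):
    return np.array([x[1], -np.sin(x[0]) - x[1] + u])

# High-gain observer (7.55) with n = 2: innovation gains kappa*c1 and kappa^2*c0
def observer_rhs(xh, u, y, kappa, c1, c0):
    innov = y - xh[0]
    return np.array([xh[1] + kappa * c1 * innov,
                     -np.sin(xh[0]) - xh[1] + u + kappa**2 * c0 * innov])

dt, T = 1e-3, 10.0
kappa, c1, c0 = 20.0, 2.0, 1.0              # s^2 + 2s + 1 is Hurwitz
x, xh = np.array([1.0, -0.5]), np.zeros(2)  # plant and observer start apart

for k in range(int(T / dt)):
    u = np.sin(0.5 * k * dt)                # arbitrary bounded input
    y = x[0]                                # measured output
    x = x + dt * plant_rhs(x, u)            # Euler step for the plant
    xh = xh + dt * observer_rhs(xh, u, y, kappa, c1, c0)

print("final estimation error:", x - xh)    # near zero for sufficiently large kappa
```

Increasing κ speeds up convergence but also amplifies the initial transient of the scaled error, which is the "peaking" effect discussed in the next section.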
7.7.3 The Nonlinear Separation Principle
The observer described in the previous section makes it possible to design a dynamic, output feedback, stabilizing control law, thus extending to the case of nonlinear systems the well-known separation principle for stabilization of linear systems. Let the system be expressed in uniform observability canonical form (7.53), rewritten as

x̃˙ = f̃(x̃, u)
y = h̃(x̃, u).    (7.57)
Suppose a feedback law u = u*(x̃) is known that globally asymptotically stabilizes the equilibrium point x̃ = 0 of the closed-loop system

x̃˙ = f̃(x̃, u*(x̃)).    (7.58)
Since x̃ is not directly available, one might intuitively think of replacing x̃ by its estimate x̂, provided by the observer, in the map u*(x̃), that is, of controlling the system via u = u*(x̂). Such simple intuition, in fact, is the idea behind the separation principle for linear systems. In the context of nonlinear systems, though, an extra precaution is needed, motivated by the following reason. Observe that, if the plant is controlled by u = u*(x̂), the dynamics of x̃ are those of

x̃˙ = f̃(x̃, u*(x̃ − T_κ e)),

in which T_κ is the diagonal matrix T_κ = diag{κ^{−n+1}, . . . , κ^{−1}, 1}, that can be seen as a system with internal state x̃ driven by the input e. The system in question, by assumption, has a globally asymptotically stable equilibrium at x̃ = 0 if e = 0, but there is no guarantee that this system be input-to-state stable. Thus, the effect of a nonzero e(t) has to be taken into account in the analysis of the asymptotic behavior. It has been shown above that asymptotic convergence of the observation error e(t) to zero is achieved by increasing the gain parameter κ of the nonlinear observer (7.55). Since |e(0)| grows unbounded with increasing κ, even if the initial conditions x̂(0) and x̃(0) are taken in a compact set, one should expect that, if κ is large, there is an initial interval of time on which |e(t)| is large (this phenomenon is sometimes referred to as "peaking"). As a consequence, also the term T_κ e(t) in the dynamics above is large and, since the system is nonlinear, this may result in a finite escape time for x̃(t). To avoid such inconvenience, it is appropriate to "saturate" the control, by choosing instead a law of the form

u = g_ℓ(u*(x̂))    (7.59)

in which g_ℓ : R → R is a smooth saturation function, that is, a function characterized by the following properties: (i) g_ℓ(s) = s if |s| ≤ ℓ; (ii) g_ℓ(s) is odd and monotonically increasing, with 0 < g_ℓ′(s) ≤ 1; (iii) lim_{s→∞} g_ℓ(s) = ℓ(1 + c) with 0 < c ≪ 1. The consequence of choosing the control u as in (7.59) is that global asymptotic stability is no longer assured. However, semiglobal stabilizability is still possible (see [4, 13, 18], and [12] for further details) (Fig. 7.5).

Fig. 7.5 Observer-based control for a nonlinear system

Theorem 7.10 Consider system (7.57), assumed to be expressed in uniform observability canonical form, and suppose Assumptions (i) and (ii) hold. Suppose that a state feedback law u = u*(x̃) globally asymptotically stabilizes the equilibrium x̃ = 0 of (7.58). Let the system be controlled by (7.59), in which x̂ is provided by the observer (7.55). Then, for every choice of a compact set K, there exist a number ℓ and a number κ* such that, if κ > κ*, all trajectories of the closed-loop system with initial conditions x̃(0), x̂(0) ∈ K are bounded and lim_{t→∞}(x̃(t), x̂(t)) = (0, 0).

7.7.4 Robust Feedback Linearization

In the previous sections, we have shown that, if a system is strongly minimum-phase, the control law (7.35) makes
the system linear from the input-output viewpoint and also internally stable (if K̂ is properly chosen). We have also observed that this control law is not robust, as it relies upon accurate knowledge of b(z, ξ) and q(z, ξ) and availability of the full state (z, ξ). In this section, we show how such a law can be made robust and input-output feedback linearization (with internal stability) can be achieved up to any arbitrary degree of accuracy (see [3] for more details). The intuition that motivates the construction summarized below is that the components ξ_1, . . . , ξ_r of ξ and the term q(z, ξ) + b(z, ξ)u, quantities that determine the expression of the control (7.35), coincide – respectively – with the output y of the system and with its higher derivatives with respect to time, up to that of order r. As such, these quantities could be approximately estimated by means of a bank of "rough" differentiators. In what follows, it is assumed that, for some fixed pair 0 < b_min < b_max, the coefficient b(z, ξ) satisfies b_min ≤ b(z, ξ) ≤ b_max for all (z, ξ), in which case there exists a number b_0 such that

|b(z, ξ) − b_0| ≤ δ_0 |b_0|  for all (z, ξ)    (7.60)
for some δ_0 < 1. The control law imposed on the system is a law of the form (compare with (7.35))

u(ξ̂, σ) = g_ℓ( (1/b_0)[−σ + K̂ ξ̂ + v] )    (7.61)

in which g_ℓ is a saturation function (as defined in the previous section), b_0 is a number satisfying (7.60), K̂ ∈ R^{1×r} is the same as in (7.35), and ξ̂ ∈ R^r and σ ∈ R are states of the (r + 1)-dimensional dynamical system

ξ̂˙_1 = ξ̂_2 + κ c_r (y − ξ̂_1)
ξ̂˙_2 = ξ̂_3 + κ^2 c_{r−1} (y − ξ̂_1)
· · ·
ξ̂˙_{r−1} = ξ̂_r + κ^{r−1} c_2 (y − ξ̂_1)
ξ̂˙_r = σ + b_0 u(ξ̂, σ) + κ^r c_1 (y − ξ̂_1)
σ̇ = κ^{r+1} c_0 (y − ξ̂_1),    (7.62)

in which the coefficients κ and c_0, c_1, . . . , c_r are design parameters. The dynamical system thus defined is usually referred to as an "extended high-gain observer" and the following relevant result holds (see [3] for more details).

Theorem 7.11 Let system (7.34) be controlled by (7.61), in which (ξ̂, σ) are states of the extended observer (7.62). Suppose system (7.34) is strongly minimum-phase, and let K̂ be such that Â + B̂K̂ is Hurwitz. Let K ⊂ R^{n−r} × R^r × R^r × R be a fixed (but otherwise arbitrary) compact set, and suppose the initial conditions of the resulting closed-loop system satisfy z(0), ξ(0), ξ̂(0), σ(0) ∈ K. Let V > 0 be a fixed (but otherwise arbitrary) number and suppose v(t) in (7.61) satisfies |v(t)| ≤ V for all t ≥ 0. Then, given any number ε, there exist a choice of the design parameters ℓ, c_0, . . . , c_r and a number κ* such that, if κ > κ*, the state z(t), ξ(t), ξ̂(t), σ(t) of the closed-loop system remains bounded for t ≥ 0 and its component ξ(t) satisfies

|ξ(t) − ξ_L(t)| ≤ ε  ∀ t ∈ [0, ∞),    (7.63)

in which ξ_L(t) denotes the trajectory of

ξ̇_L = (Â + B̂K̂) ξ_L + B̂ v,  ξ_L(0) = ξ(0).

Since the output y of (7.34) is given by y = Ĉξ, it is seen from all of the above that, with the indicated control law, a system is obtained whose input-output behavior can be made arbitrarily close to the behavior that would have been obtained by the feedback-linearizing law (7.35), had all the nonlinearities been known and all internal states been available.

Remark 2 It should be stressed that the observers described in the present section guarantee asymptotic decay of the state estimation error if the gain parameter κ is sufficiently large. Since the parameter in question is raised up to the power n in the equations that characterize the observer, it goes without saying that, if the dimension n of the system is large, the presence of such a large parameter might boost the effect of noise affecting the measured output y, let alone possible numerical instabilities. This adverse effect can be mitigated if a different, slightly more elaborate, structure of observer is used, having dimension 2n − 2, but in which the highest power of the gain parameter is only 2, regardless of the actual value of n. Details on this, more recent, approach to the design of high-gain observers can be found in [1].

7.8 Recent Progresses

In recent years, the design methods described in Sects. 7.6 and 7.7 have been extended to multi-input multi-output (MIMO) systems, modeled by equations of the form

ẋ = f(x) + Σ_{i=1}^{m} g_i(x) u_i
y_1 = h_1(x)
· · ·
y_m = h_m(x)
in which f (x), g1 (x), . . . , gm (x) are smooth vector fields and h1 (x), . . . , hm (x) smooth functions, defined on Rn . The extensions of the methods in question are relatively straightforward if controlled plant has a well-defined vector relative degree (see [7, Sec 5.1] for a definition of vector relative degree). It is well-known, though, that MIMO systems having a vector relative degree form only a special class of MIMO systems (regardless of whether a system is linear or nonlinear, in fact). In the case of linear systems, the assumption of the existence of a vector relative degree is seldom used, and more general cases of systems are usually handled. In the (meaningful) case of systems having the same number of input and output components, attention is usually given to the case in which the input-output map is invertible (that is, the transfer function is nonsingular). Indeed, having a nonsingular transfer function is a much weaker assumption than having a vector relative degree. Feedback design for MIMO nonlinear systems that are invertible but do not necessarily possess a vector relative degree was studied in [14] and, more recently, in [19, 20] and [21]. In particular, in the latter paper, a robust design method has been proposed, based on interlaced use of dynamic extensions and extended observers, by means of which it is possible to recover, up to any desired degree of accuracy, the performances achievable by means of the classical, but not robust, technique for feedback linearization based on dynamic extension and state feedback. As such, this result provides an extension, to a broad class of MIMO systems, of the design method sketched in Sect. 7.7.4.
References
1. Astolfi, D., Marconi, L.: A high-gain nonlinear observer with limited gain power. IEEE Trans. Autom. Control 53(10), 2324–2334 (2016)
2. Birkhoff, G.D.: Dynamical Systems. American Mathematical Society, Providence (1927)
3. Freidovich, L.B., Khalil, H.K.: Performance recovery of feedback-linearization-based designs. IEEE Trans. Autom. Control 53, 2324–2334 (2008)
4. Gauthier, J.P., Kupka, I.: Deterministic Observation Theory and Applications. Cambridge University Press, Cambridge (2001)
5. Hahn, W.: Stability of Motions. Springer, Berlin (1967)
6. Hale, J.K., Magalhães, L.T., Oliva, W.M.: Dynamics in Infinite Dimensions. Springer, New York (2002)
7. Isidori, A.: Nonlinear Control Systems, 3rd edn. Springer, Berlin (1995)
8. Isidori, A.: Nonlinear Control Systems II. Springer, Berlin (1999)
9. Isidori, A., Byrnes, C.I.: Steady-state behaviors in nonlinear systems with an application to robust disturbance rejection. Ann. Rev. Control 32(1), 1–16 (2007)
10. Jiang, Z.P., Teel, A.R., Praly, L.: Small-gain theorem for ISS systems and applications. Math. Control Signals Syst. 7, 95–120 (1994)
11. Khalil, H.: Nonlinear Systems, 3rd edn. Prentice Hall, Hoboken (2002)
12. Khalil, H.K.: High-Gain Observers in Nonlinear Feedback Control. SIAM Series: Advances in Design and Control, Philadelphia (2017)
13. Khalil, H.K., Esfandiari, F.: Semiglobal stabilization of a class of nonlinear systems using output feedback. IEEE Trans. Autom. Control AC-38, 1412–1415 (1993)
14. Liberzon, D.: Output-input stability implies feedback stabilization. Syst. Control Lett. 53, 237–248 (2004)
15. Sontag, E.D.: On the input-to-state stability property. Eur. J. Control 1, 24–36 (1995)
16. Sontag, E.D., Wang, Y.: On characterizations of the input-to-state stability property. Syst. Control Lett. 24, 351–359 (1995)
17. Teel, A.R.: A nonlinear small gain theorem for the analysis of control systems with saturations. IEEE Trans. Autom. Control AC-41, 1256–1270 (1996)
18. Teel, A.R., Praly, L.: Tools for semiglobal stabilization by partial state and output feedback. SIAM J. Control Optim. 33, 1443–1485 (1995)
19. Wang, L., Isidori, A., Su, H.: Global stabilization of a class of invertible MIMO nonlinear systems. IEEE Trans. Autom. Control 60(3), 616–631 (2015)
20. Wang, L., Isidori, A., Marconi, L., Su, H.: Stabilization by output feedback of multivariable invertible nonlinear systems. IEEE Trans. Autom. Control AC-62, 2419–2433 (2017)
21. Wu, Y., Isidori, A., Lu, R., Khalil, H.: Performance recovery of dynamic feedback-linearization methods for multivariable nonlinear systems. IEEE Trans. Autom. Control AC-65, 1365–1380 (2020)
22. Yoshizawa, T.: Stability Theory and the Existence of Periodic Solutions and Almost Periodic Solutions. Springer, New York (1975)
Alberto Isidori From 1975 to 2012, he was Professor of Automatic Control at the University of Rome Sapienza, where he is now Professor Emeritus. His research interests are primarily in the analysis and design of nonlinear control systems. He is the author of several books and of more than 120 articles in archival journals. He is the recipient of various prestigious awards, which include the Quazza Medal of IFAC (in 1996), the Bode Lecture Award from the Control Systems Society of IEEE (in 2001), the Honorary Doctorate from KTH of Sweden (in 2009), the Galileo Galilei Award from the Rotary Clubs of Italy (in 2009), and the Control Systems Award of IEEE (in 2012). He is a Fellow of IEEE and of IFAC, was President of IFAC in the triennium 2008–2011, and has been, since 2012, a corresponding member of the Accademia Nazionale dei Lincei.
8 Control of Uncertain Systems

Vaneet Aggarwal and Mridul Agarwal
Contents
8.1 Introduction .......................... 189
8.1.1 Motivation .......................... 189
8.1.2 Historical Background .......................... 190
8.2 General Scheme and Components .......................... 191
8.2.1 Stochastic Optimal Control Problem .......................... 191
8.2.2 Key Approaches .......................... 192
8.3 Challenges and Solutions .......................... 192
8.3.1 Model Predictive Control .......................... 193
8.3.2 Learning System Model Using Gaussian Process .......................... 193
8.3.3 Constrained Markov Decision Processes .......................... 194
8.3.4 Model-Free Reinforcement Learning for Decision-Making .......................... 195
8.3.5 Constrained Reinforcement Learning with Discounted Rewards .......................... 198
8.3.6 Case Study for a Constrained RL Setup .......................... 199
8.4 Application Areas .......................... 201
8.5 Conclusions, Challenges, and Trends .......................... 201
References .......................... 202

Abstract

Dynamical systems typically have uncertainties due to modeling errors, measurement inaccuracy, mutations in the evolutionary processes, etc. The stochastic deviations can compound over time and can lead to catastrophic results. In this chapter, we will present different approaches for decision-making in systems with uncertainties. We first consider model predictive control, which models and learns the system and uses the learned model to optimize the system. In many cases, a prior model of the system is not known. For such cases, we will also explain Gaussian process regression, which is one of the efficient modeling techniques. Further, we present the constrained Markov decision process (CMDP) as an approach for decision-making with uncertainties, where the system is modeled as an MDP with constraints. The formalism of CMDP is extended to a model-free approach based on reinforcement learning to make decisions in the presence of constraints.

Keywords

Uncertain systems · Dynamical systems · Model predictive control · Reinforcement learning · Process constraints · Markov decision processes (MDPs) · Constrained MDPs
V. Aggarwal () · M. Agarwal Purdue University, West Lafayette, IN, USA e-mail: [email protected]; [email protected]
© Springer Nature Switzerland AG 2023 S. Y. Nof (ed.), Springer Handbook of Automation, Springer Handbooks, https://doi.org/10.1007/978-3-030-96729-1_8
8.1 Introduction

8.1.1 Motivation
In many applications of dynamical systems, uncertainties happen frequently. For example, modeling errors such as modeling a nonlinear system with a simple linear model may result in imprecise predictions. Also, one can think of a sensor where the noise results in measurement inaccuracies. Further, consider mutations in the evolutionary processes, such as a change in the demand at a restaurant based on the time of the day. The dynamics of uncertain systems has long been and will continue to be one of the dominant themes in mathematics and engineering applications due to its deep theoretical and immense practical significance. During the last couple of decades, analysis of uncertain dynamical systems and related models has attracted the attention of a wide audience of professionals, such as mathematicians, researchers, and practitioners. In spite of the amount of published results recently focused on such systems, there remain many challenging open questions and novel design requirements. In this chapter, we will review some approaches for handling uncertainties in the design of control systems.
An example domain for such control problem is communication networks, which are designed to enable the simultaneous transmission of heterogeneous types of information: file transfers, interactive messages, computer outputs, facsimile, voice and video, etc. [1]. This information flows over a shared network. In such systems, we wish to maximize throughput and decrease latency, with different forms of decision variables, including the routing and scheduling decisions. The system has various uncertainties, for instance, the service times of the processing nodes, the arrival times of the different requests, the background processes and the buffer capacity of the nodes, etc. In this chapter, we first introduce the problem of stochastic optimal control. Then, we discuss two approaches for the problem. The first is a model-based approach based on model predictive control (MPC). When the system model is not available for the MPC approach, the model needs to be learned as well. We describe Gaussian process regression to learn a nonparametric model of the system. The second is an approach based on constrained Markov decision process (CMDP). Further, a model-free approach is provided for CMDP, which is based on reinforcement learning. The MPC approach models the system parameters, which are then used to optimize the system. On the other hand, modelfree approaches do not explicitly learn the state transition probabilities and directly optimize the controller that gets the best out of the system.
8.1.2 Historical Background
Over the past six decades, there has been significant interest and developments by industry and academia in advanced process control. During the 1960s, advanced control was taken in general as any algorithm that deviated from the classical three-term proportional-integral-derivative (PID) controller. For quality product, safety, and economic process, PID controller accounts for more than 80% of the installed automatic feedback control devices in the process industries [2, 3]. The interest in MPCs can be traced back to a set of papers in the late 1970s [4–7]. MPC has attracted many researchers due to better performance and control of processes including non-minimum phase, long time delay or open-loop unstable characteristics over minimum variance (MV), generalized minimum variance (GMV), and pole placement (PP) techniques [3,8,9]. The main advantages of MPC over structured PID controllers are its ability to handle constraints, nonminimum phase processes, and changes in system parameters (robust control) and its straightforward applicability to large, multivariate processes [3]. The MPC concept has a long history. The connections between the closely related minimum time optimal control problem and linear programming were recognized first by
Zadeh and Whalen [10]. The authors of [11] proposed the moving horizon approach, which is at the core of all MPC algorithms. It became known as “open-loop optimal feedback.” The extensive work on this problem during the 1970s was reviewed by Gutman in his PhD thesis [12]. The connection between this work and MPC was discovered later in [13]. For details, the reader is referred to [7]. MPC uses the system model to find controllers that can ensure robust system performance and/or avoid dangerous control actions. The MPC approach relies on a sufficiently descriptive model of the system to optimize performance and ensure constraint satisfaction. This critically requires a good model to ensure the success of the control system. In practice, however, model descriptions can be subject to considerable uncertainty that may originate from insufficient data, restrictive model classes, or even the presence of external disturbances. The fields of robust [14] and stochastic [15] MPC provide a systematic treatment of numerous sources of uncertainty affecting the MPC controller, ensuring constraint satisfaction with regard to certain disturbance or uncertainty classes. One of the approaches for model learning in MPC is Gaussian process regression [16–19]. The appeal of using Gaussian process regression for model learning stems from the fact that it requires little prior process knowledge and it directly provides a measure of residual model uncertainty. MPC is a model-based control philosophy in which the current control action is obtained by online optimization of objective function. It uses the philosophy of receding horizon and predicts the future outcome of actions in order to determine what action the agent should take in the next step. If the future horizon to consider is sufficiently short and the dynamics is deterministic, the prediction can often be approximated well by linear dynamics, which can be evaluated instantly. However, because MPC must finish its assessment of the future before taking every action, its performance is limited by the speed of the predictions. MPC requires this computation for each time step even if the current state is similar to the ones experienced in the past. Meanwhile, if the prediction is done for only a short horizon, MPC may suggest a move to a state leading to a catastrophe. In order to deal with these issues, the approach of constrained Markov decision process (CMDP) is used. The approach of CMDP has had a lot of applications starting from the 1970s. In 1970, a problem of hospital admission scheduling was considered using CMDP [20]. Golabi et al. have used CMDPs to develop a pavement management system for the state of Arizona to produce optimal maintenance policies for a 7400-mile network of highways [21]. A saving of 14 million dollars was reported in the first year of implementation of the system, and a savings of 101 million dollars was forecast for the following 4 years. Winden and Dekker developed a CMDP model for deter-
mining strategic building and maintenance policies for the Dutch Government Agency (Rijksgebouwendienst), which maintains 3000 state-owned buildings with a replacement value of about 20 billion guilders and an annual budget of some 125 million guilders [22]. Several methods to solve CMDP have been proposed, including the ones based on a linear program (LP), Lagrangian approach, and mixture of stationary policies; for more details on these approaches, the reader is referred to [1]. The approach of CMDP assumes the knowledge of the Markov models. In order to alleviate that, the concept of reinforcement learning has been used to make decisions. Reinforcement learning has made great advances in several applications, ranging from online learning and recommendation engines [23], natural language understanding and generation [24], transportation systems [25–27], networking [28, 29] to mastering games, such as Go [30] and chess [31]. The idea is to learn from extensive experience how to take actions that maximize some notion of reward by interacting with the surrounding environment. The interaction teaches the agent how to maximize its reward without knowing the underlying dynamics of the process. A classical example is swinging up a pendulum in an upright position. By making several attempts to swing up a pendulum and balancing it, one might be able to learn the necessary forces that need to be applied in order to balance the pendulum without knowing the physical model behind it [32]. Reinforcement learning models the problem as a constrained Markov decision process (MDP) and aims to find efficient approaches for the problem. Reinforcement learning-based model-free solutions have been widely considered [33–38]. Recently, the authors of [39] proposed a novel model-free approach for reinforcement learning with constraints, which will be elaborated in this chapter. This approach formulates the problem as zero-sum game, where one player (the agent) solves a Markov decision problem and its opponent solves a bandit optimization problem. The standard Q-learning algorithm is extended to solve the problem, whose convergence guarantees have been shown in [39]. Another effort of identifying system along with controlling is known as adaptive control [40]. Unlike reinforcement learning this method learns the model of the system and then optimizes it. Initially, there may or may not be a prior on the system model. In the case of the existence of a prior on the model, Bayesian model estimation techniques can be used [41]. For linear systems, a method for Bayesian approach is using Kalman filtering to refine the model parameters [42]. In many cases, linear model may not be sufficient to model the system dynamics. For such cases, Gaussian processes can be used [43]. Apart from allowing nonlinear system models, another advantage of Gaussian process is that they are nonparametric models and hence model a larger class of system dynamics.
Table 8.1 Multiple tasks associated with optimizing systems under uncertainty. On choosing a model-based learning, additional model identification is also required

Task | Toolkit | References
Learning-based control – Adaptive control | Gain scheduling, self-tuning | [40]
Learning-based control – Reinforcement learning | SARSA, Q-learning, policy gradients | [44–47]
Identification – Parametric | Least squares estimation, Kalman filtering | [40, 42]
Identification – Nonparametric | Gaussian process regression | [43]
Stochastic control – Model-based control | Model predictive control, linear quadratic control | [48]
Stochastic control – Model-free control | Constrained Markov decision processes | [1]
Broadly, the topic of control of uncertain systems includes the concepts of learning-based control and stochastic control. For model-based learning, system identification approaches are typically used. Some of these approaches are summarized in Table 8.1.
8.2 General Scheme and Components
In this section, we first describe the problem of stochastic optimal control, which models the control of uncertain systems. Then, key approaches for the problem will be described.
8.2.1 Stochastic Optimal Control Problem
The system dynamics in discrete time can be formulated as

x(k + 1) = f_t(x(k), u(k), k, θ_t, w(k)),    (8.1)

where x(k) ∈ X is the system state and u(k) ∈ U is the applied input at time k. We use the subscript "t" to emphasize that these quantities represent the true system dynamics or true optimal control problem. The dynamics are subject to various sources of uncertainty, which we distinguish by using two categories: First is the parametric uncertainty that arises because of the mismatch between the true system model and the assumed system model. The parametric uncertainty of the system is described using a random variable θ_t, which is therefore constant over time. The second source of uncertainty is the disturbances or process noise in the system, such as noise in the sensor or an imperfect actuator that may not always perform the intended control task. w(k) describes a sequence of random variables corresponding to disturbances or process noise in the system, which are often assumed to be independent and identically distributed (i.i.d.).

The aim of the control problem is to design an input sequence {u(k)} such that the expected reward is maximized, where the expected reward is given as

J_t(T) = E[ Σ_{k=1}^{T} r_t(x(k), u(k), k) ],    (8.2)

over a certain horizon T, where the expected value is taken with respect to all random variables, i.e., w(k) and θ_t. r_t(x(k), u(k), k) is the reward that the system gives the controller on taking action u(k) in state x(k) at time step k. Some systems may result in a loss or cost l_t(·) instead of a reward, which can be trivially converted to a reward as r_t(·) = −l_t(·). In systems which may be deployed for eternity, T approaches infinity, and, to keep the summation in (8.2) bounded, a discount factor γ < 1 is introduced, and we redefine the average reward as

J_t(γ) = E[ Σ_{k=1}^{∞} γ^{k−1} r_t(x(k), u(k), k) ].    (8.3)

Often, the system comes with some constraints. For example, consider the example of an autonomous car trying to change lane. Then, along with the objective to reduce the time to merge, the controller must ensure safety. We now describe the formulation of the constraints. Let x = [x(1), · · · , x(T)]^T and u = [u(1), · · · , u(T)]^T. The control problem also has some probabilistic constraints on the state space and applied inputs, which are given as follows:

Pr(x ∈ X_j) ≥ p_j  for all j = 1, · · · , C_x    (8.4)
Pr(u ∈ U_j) ≥ q_j  for all j = 1, · · · , C_u    (8.5)

Here, X_j and U_j are sets which constrain the values x and u can take, respectively. Also, Pr(·) denotes probability, and p_j and q_j are real numbers that are the values of the constraints. For p_j = 1 and q_j = 1, the constraints are strict constraints and must be satisfied almost surely. Here, C_x and C_u are the number of constraints on x and u, respectively. Further, any sets X_j and U_j can be defined for the above definition. The constraints that the inputs or the state at each time must satisfy certain properties can be easily defined as a special case with appropriate representation of the above sets.

The control action u(k) is a function of the states observed up to time k, is represented as a policy π_k : X^k → U, and is given as

u(k) = π_k(x(1), · · · , x(k)).    (8.6)

Thus, the overall aim of the stochastic control problem is to find π_k(·) such that the objective in (8.2) is maximized and the constraints in (8.4)–(8.5) are satisfied.

8.2.2 Key Approaches

The key approaches that have been studied for the problem include:

1. Linear quadratic stochastic control: The control of a linear stochastic differential equation with an additive control, an additive Brownian motion, and a quadratic cost in the state and the control is probably the most well-known control problem and has a simple, explicit solution [49–52]. The problem can be approximated to this setup to have efficient solutions.
2. Model predictive control (MPC): MPC is a model-based control philosophy in which the current control action is obtained by online optimization of an objective function. It uses the philosophy of a receding horizon and predicts the future outcome of actions in order to determine what action the agent should take in the next step [4–7].
3. Constrained Markov decision process (CMDP): This problem models the situation of optimizing one dynamic objective subject to certain dynamic constraints, where the system evolution is modeled as a Markov decision process [1].
4. Constrained stochastic dynamic programming: We note that dynamic programming is one approach to take decisions based on an MDP. Such an approach has been extended for CMDPs to account for the constraints [53–55].
5. Reinforcement learning (RL) with constraints: In this problem, efficient approaches for CMDPs are provided when the state evolution is not known a priori [33–39, 56–61].

In the remainder of this chapter, we will detail the approaches of MPC, CMDP, and RL with constraints.
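As a concrete, simplified illustration of the formulation in (8.1)–(8.3), the Python sketch below builds a toy scalar system with parametric uncertainty θ_t and process noise w(k) and estimates the discounted objective (8.3) for a fixed feedback policy by Monte Carlo; the dynamics, policy, and reward are hypothetical choices of our own, not taken from the chapter.

```python
import numpy as np

rng = np.random.default_rng(0)
gamma = 0.95  # discount factor in (8.3)

def r_t(x, u):
    """Hypothetical reward: penalize deviation from zero and control effort."""
    return -(x**2 + 0.1 * u**2)

def policy(x):
    """A fixed state-feedback policy u(k) = pi(x(k)) (our own choice)."""
    return -0.8 * x

def discounted_return(horizon=200):
    theta_t = rng.normal(1.0, 0.1)   # parametric uncertainty, constant over the rollout
    x, total = 1.0, 0.0
    for k in range(horizon):
        u = policy(x)
        total += gamma**k * r_t(x, u)
        w = rng.normal(0.0, 0.05)    # i.i.d. process noise w(k)
        x = theta_t * x + u + w      # x(k+1) = f_t(x(k), u(k), k, theta_t, w(k))
    return total

# Monte Carlo estimate of the objective J_t(gamma) in (8.3) under this policy,
# averaging over both theta_t and the noise sequence {w(k)}
print(np.mean([discounted_return() for _ in range(500)]))
```

Adding the probabilistic constraints (8.4)–(8.5) on top of such a simulation is what turns the evaluation problem into the constrained design problems discussed next.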
8.3 Challenges and Solutions
The problem of decision-making with uncertainty is challenging because:

1. The policy function π(·) is general, and optimization over a general function is not as straightforward.
2. The reward function or the constraint sets are not necessarily convex, leading to the resulting problem being nonconvex.
3. There is uncertainty in the model parameters, represented as θ and w(k), where the optimal robust control policy must take into account the information on such uncertainty, available only through the state update and the observation of the rewards.

In this section, we will discuss three approaches for the problem: The first is model predictive control (MPC). The second formulates the problem as a constrained Markov decision process (CMDP) and provides approaches to solve the problem. The third uses the concepts of model-free reinforcement learning to solve the CMDP.

8.3.1 Model Predictive Control
The term model predictive control (MPC) does not designate a specific control strategy but rather an ample range of control methods, which make explicit use of a model of the process to obtain the control signal by optimizing an objective function. MPC is particularly attractive to staff with only a limited knowledge of control because the concepts are very intuitive and, at the same time, the tuning is relatively easy. Further, MPC can be used to control a great variety of processes, from those with relatively simple dynamics to more complex ones, including systems with long delay times or non-minimum phase or unstable ones.

The basic structure of MPC is shown in Fig. 8.1. A model is used to predict the future plant outputs, based on past and current values and on the proposed optimal future control actions. These actions are calculated by the optimizer taking into account the cost function (where the future tracking error is considered) as well as the constraints.

Fig. 8.1 Basic structure of MPC

Consider the objective in Eq. (8.2) or in Eq. (8.3). We want to find a control sequence, u(1), u(2), · · · , that maximizes the objective under the constraints defined in Eqs. (8.4) and (8.5). Now, this optimization problem is not always solvable. To make this problem tractable, at time k, we estimate the states and inputs for the future over a shorter time horizon N < T. The prediction of the state at time k + i is given as x_{i,k}, which predicts the state i time steps ahead when at current time k. Similarly, u_{i,k} represents the prediction of the control action i time steps ahead when at current time k. In addition, the system evolution model, given in (8.1), is approximated as

x_{i+1,k} = f(x_{i,k}, u_{i,k}, i + k)  for all i > 0    (8.7)

This prediction step using Eq. (8.7) is also known as forward prediction. Further, the reward function is approximated; thus, r_t will be changed to r, to indicate this is a model-based approximation of the true reward. At each time k, using the approximate system evolution and reward function, the inputs over the time horizon are found as a solution to the optimization problem given below:

J_{MPC,k} = max_{u_{0,k}, ··· , u_{N,k}}  Σ_{i=0}^{N} r(x_{i,k}, u_{i,k}, k + i)
subject to  x_{i+1,k} = f(x_{i,k}, u_{i,k}, i + k),  x_{0,k} = x(k)
            [u_{0,k}, · · · , u_{N,k}] ∈ U_j,  j = 1, · · · , C_u
            [x_{0,k}, · · · , x_{N,k}] ∈ X_j,  j = 1, · · · , C_x    (8.8)

Note that at every time step k, the solution u_{0,k}, · · · , u_{N,k} is the solution of an open-loop control problem for the time horizon N. However, at time step k, after applying the control action u(k) = u_{0,k}, the planning is done again after observing the state update x(k + 1), and MPC becomes closed-loop control. In order to efficiently solve the problem, the reward function and the model evolution are approximated to make the problem convex. The above optimization problem is solved at time k, and the input u_{0,k} is used. This is, then, repeated at time k + 1 after the state at time k + 1 has been observed. The notion of stability of such systems has been widely studied based on Lyapunov-type analysis [62, 63]. One of the key aspects of MPC is the model prediction. Since MPC depends on the system modeling, in the following, we will consider an approach based on Gaussian processes to model the system.
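The receding-horizon loop in (8.7)–(8.8) can be illustrated with a small Python sketch. The linear model, quadratic reward, box constraint on the input, and the use of scipy.optimize.minimize are all assumptions made for this illustration; a practical MPC implementation would typically use a dedicated QP or NLP solver.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical model used by the controller: x_{i+1,k} = a*x_{i,k} + b*u_{i,k}
a, b, N = 0.9, 0.5, 10

def reward(x, u):
    return -(x**2 + 0.1 * u**2)   # model-based approximation r of the true reward

def rollout(x0, u_seq):
    """Forward prediction (8.7): propagate the model over the horizon."""
    xs, x = [x0], x0
    for u in u_seq:
        x = a * x + b * u
        xs.append(x)
    return xs

def plan(x0):
    """Approximately solve (8.8): maximize predicted reward over u_{0,k},...,u_{N,k}."""
    def neg_obj(u_seq):
        xs = rollout(x0, u_seq)
        return -sum(reward(x, u) for x, u in zip(xs, u_seq))
    res = minimize(neg_obj, np.zeros(N), bounds=[(-1.0, 1.0)] * N)  # input constraint set
    return res.x[0]   # receding horizon: apply only the first input u_{0,k}

# Closed loop: the "true" plant has a slightly different gain plus additive noise
rng, x = np.random.default_rng(1), 2.0
for k in range(30):
    u = plan(x)
    x = 0.95 * x + 0.5 * u + rng.normal(0, 0.02)
print("state after 30 steps:", x)
```

Re-planning from the newly observed state at every step is exactly what turns the open-loop solution of (8.8) into a closed-loop controller.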
8.3.2 Learning System Model Using Gaussian Process
Gaussian process (GP) regression has been widely used in supervised machine learning for its flexibility and inherent ability to describe uncertainty in the prediction. In the context of control, it is seeing increasing use for modeling of nonlinear dynamical systems from data, as it allows for direct assessment of the residual model uncertainty.
To use Gaussian processes, we assume that the system state space is X = R^n and the control action space is U = R^m. This allows us to make use of the nice properties provided by complete metric spaces. An example of Gaussian regression is presented in Fig. 8.2. Note that the prediction is extremely close to the true values in the center; however, the extreme ends are not properly fitted.

Fig. 8.2 Fitting Gaussian regression and prediction

GP regression models the system as

x(k + 1) = f(x(k), u(k)) + w(k),    (8.9)

where the dynamics have i.i.d. zero-mean additive noise w(k) ∼ N(0, σ_w^2). Note that f is an approximation of the true system model f_t, and the subsequent state x(k + 1) depends only on the last state x(k) and control action u(k); hence, the system model in Eq. (8.9) is also a Markovian model. Using recorded states and input sequences of length D, X̃ = [x̃(1), · · · , x̃(D)] where x̃(k) = (x(k), u(k)), and considering for each state and applied input the resulting state measurement X^+ = [x(2), · · · , x(D + 1)], the joint distribution of data points and function values at a test point x̃ = (x, u) is

[ X^+ ; f(x̃) ] ∼ N( 0, [ K_{X̃,X̃} + I σ_w^2   K_{X̃,x̃} ; K_{X̃,x̃}^T   K_{x̃,x̃} ] ),    (8.10)

where K_{X̃,X̃} is the Gram matrix of the recorded data X̃ under the kernel κ and K_{X̃,x̃} represents the corresponding entries for the test point x̃. The kernel function κ is essential for shaping the resulting prediction and must be chosen such that any resulting Gram matrix K_{X̃,X̃} is positive definite. Several functions that satisfy this condition are available, the most popular of which is the squared exponential kernel, given as

κ(x̃, x̃′) = σ_f^2 exp( −(1/2)(x̃ − x̃′)^T L^{−1} (x̃ − x̃′) ),    (8.11)

where σ_f^2 is the signal variance and L is a positive diagonal matrix, constituting (together with the noise variance σ_w) the hyperparameters of the GP regression.

A challenge in an MPC formulation based on a GP dynamics model is the propagation of the resulting stochastic state distributions over the prediction horizon. This is typically achieved by assuming that subsequent function evaluations of f(x, u)|X^+ are independent. Approximate Gaussian distributions of the predicted states x_{i,k} over the prediction horizon are then derived using techniques related to extended Kalman filtering, e.g., by linearization, sigma-point transform, or exact moment matching. After fitting a Gaussian process for the model prediction, the MPC problem can be reformulated as

J_{MPC−GP,k} = max_{u_{0,k}, ··· , u_{N,k}}  Σ_{i=0}^{N} r(x_{i,k}, u_{i,k}, k + i)
subject to  x_{0,k} = x(k)
            [ X^+ ; f(x_{i,k}, u_{i,k}) ] ∼ N( 0, [ K_{X̃,X̃} + I σ_w^2   K_{X̃,x̃_{i,k}} ; K_{X̃,x̃_{i,k}}^T   K_{x̃_{i,k},x̃_{i,k}} ] )
            [x_{0,k}, · · · , x_{N,k}] ∈ X_j,  j = 1, · · · , C_x
            [u_{0,k}, · · · , u_{N,k}] ∈ U_j,  j = 1, · · · , C_u    (8.12)

Various solvers have been proposed for efficiently solving Eq. (8.12). The PILCO (probabilistic inference for learning control) algorithm [64] uses analytic gradients for policy improvement. Additionally, particle swarm optimization-based techniques [65] and sampling-based methods [66] can also be used.
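A minimal NumPy sketch of the GP machinery in (8.10)–(8.11) is given below: it conditions a squared-exponential-kernel GP on hypothetical one-dimensional transition data and returns the posterior mean and variance at test inputs. The training data, hyperparameter values, and the scalar input are illustrative assumptions; in practice σ_f, L, and σ_w would be tuned, e.g., by maximizing the marginal likelihood.

```python
import numpy as np

def sq_exp_kernel(A, B, sigma_f=1.0, length=1.0):
    """Squared exponential kernel (8.11) for 1-D inputs (L = length**2)."""
    d2 = (A[:, None] - B[None, :]) ** 2
    return sigma_f**2 * np.exp(-0.5 * d2 / length**2)

def gp_posterior(X_train, y_train, X_test, sigma_w=0.1, sigma_f=1.0, length=1.0):
    """Posterior mean and variance of f at X_test, conditioning (8.10) on the data."""
    K = sq_exp_kernel(X_train, X_train, sigma_f, length) + sigma_w**2 * np.eye(len(X_train))
    K_star = sq_exp_kernel(X_train, X_test, sigma_f, length)
    K_ss = sq_exp_kernel(X_test, X_test, sigma_f, length)
    alpha = np.linalg.solve(K, y_train)
    v = np.linalg.solve(K, K_star)
    mean = K_star.T @ alpha
    var = np.diag(K_ss - K_star.T @ v)
    return mean, var

# Hypothetical recorded transitions: inputs x_tilde(k), targets x(k+1)
rng = np.random.default_rng(0)
X_train = rng.uniform(0, 6, 25)
y_train = np.sin(X_train) + 0.1 * rng.normal(size=25)   # unknown "true" dynamics + noise

X_test = np.linspace(0, 8, 5)
mean, var = gp_posterior(X_train, y_train, X_test)
for x, m, s in zip(X_test, mean, np.sqrt(var)):
    print(f"x={x:4.1f}  mean={m:+.3f}  std={s:.3f}")    # uncertainty grows outside the data
```

The growing predictive standard deviation away from the training inputs is exactly the residual model uncertainty that a GP-based MPC can propagate over its prediction horizon.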
8.3.3 Constrained Markov Decision Processes
Markov decision processes (MDPs), also known as controlled Markov chains, constitute a basic framework for dynamically controlling systems that evolve in a stochastic way. We focus on discrete time models: we observe the system at times t = 1, 2, · · · , T. T is called the horizon and may be either finite or infinite. A controller has an influence on both the costs and the evolution of the system, by choosing at each time unit some parameters, called actions. As is often the case in control theory, we assume that the behavior of the system at each time is determined by what is called the “state” of the system, as well as the control action. The system moves sequentially between different states in a random way; the current state and control action fully determine the probability to move to any given state in the next time unit.
We typically assume that the MDP is unichain, which implies that all trajectories generated by any policy end up with a Markov chain having a single (aperiodic) ergodic class. CMDP is one of the approaches to model the stochastic optimal control problem. In the following, we consider the case where the controller minimizes the objective subject to constraints. We shall call this class of MDPs constrained MDPs, or simply CMDPs. To make the above precise, we define a tuple {X, U, P, c, d} where

• X is the state space, with finite cardinality.
• U is the action set, with finite cardinality. Let K = {(x, u) : x ∈ X, u ∈ U} be the set of state-action pairs.
• P are the transition probabilities; thus, P_{x,u,y} is the probability of moving from state x(k) to x(k + 1) if action u(k) is chosen.
• c : K → R is the instantaneous cost function, which is the objective of the CMDP.
• d : K → R^I are the I values of the constraint cost function, thus providing the cost of the I constraints.

For any policy π and initial distribution β, the objective for a horizon T is given as

C(β, π) = lim_{T→∞} (1/T) Σ_{k=1}^{T} E[c(x(k), u(k))],    (8.13)

where x(k) is the state at time k, u(k) is the action selected at time k, and the expectation is over the transition probability P_{x(k),u(k),x(k+1)} for all k ∈ {1, · · · , T} and the initial state distribution x(1) ∼ β. The overall CMDP problem can be written as

min_π C(β, π)  subject to  lim_{T→∞} (1/T) Σ_{k=1}^{T} E[d_i(x(k), u(k))] ≤ V_i,    (8.14)

where d_i(x(k), u(k)) is the ith element of the vector d(x(k), u(k)) for i ∈ {1, · · · , I}. For simplicity of notation, we will use y to denote x(k + 1) and drop the time step arguments. The optimal solution of the CMDP can be shown to be independent of β, and thus β can be omitted in the above.

We note that the problem of CMDP can be converted to a linear programming problem, as shown in [1]. The corresponding linear problem is obtained by finding ρ ∈ [0, 1]^{|K|} that satisfies

inf_ρ ⟨ρ, c⟩  s.t.  ⟨ρ, d_i⟩ ≤ V_i  ∀ i = 1, · · · , I,
Σ_{y,u} ρ(y, u)(δ_x(y) − P_{y,u,x}) = 0  ∀ x ∈ X,
Σ_{y,u} ρ(y, u) = 1,
ρ(y, u) ≥ 0  ∀ y, u.    (8.15)

Let ρ* be the optimal value of ρ for the above LP. Then, the optimal stationary policy is given as

π*(y) = u  with probability  ρ*(y, u) / Σ_{u′∈U} ρ*(y, u′).    (8.16)

It can be shown that the constrained MDP and the above linear problem are equivalent. The solution for the LP exists if and only if one exists for the CMDP. Further, if a solution exists, the policy π*(y) is optimal for the CMDP. Thus, we note that when the model parameters (MDP parameters, objective function, and the constraint functions) are known, the problem reduces to a linear problem.
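The LP reduction (8.15)–(8.16) can be coded directly. The sketch below solves a small, made-up CMDP (3 states, 2 actions, one constraint) with scipy.optimize.linprog over the occupation measure ρ and then recovers the randomized stationary policy (8.16); the transition probabilities, costs, and the (deliberately loose) constraint level are arbitrary illustrative numbers.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
nS, nA = 3, 2
# Hypothetical CMDP: random transition kernel P[x, u, y], cost c, constraint cost d, bound V
P = rng.dirichlet(np.ones(nS), size=(nS, nA))      # P[x, u, :] sums to 1
c = rng.uniform(0, 1, size=(nS, nA))               # objective cost c(x, u)
d = rng.uniform(0, 1, size=(nS, nA))               # constraint cost d(x, u)
V = 0.7                                            # constraint level (kept loose for feasibility)

idx = lambda x, u: x * nA + u                      # flatten (x, u) -> variable index

c_vec = c.reshape(-1)                              # minimize <rho, c>
A_ub, b_ub = [d.reshape(-1)], [V]                  # <rho, d> <= V
# Stationarity: sum_{y,u} rho(y,u) (delta_x(y) - P[y,u,x]) = 0 for every x, plus normalization
A_eq = np.zeros((nS + 1, nS * nA))
for x in range(nS):
    for y in range(nS):
        for u in range(nA):
            A_eq[x, idx(y, u)] = (1.0 if y == x else 0.0) - P[y, u, x]
A_eq[nS, :] = 1.0
b_eq = np.zeros(nS + 1)
b_eq[nS] = 1.0

res = linprog(c_vec, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, 1)] * (nS * nA), method="highs")
rho = res.x.reshape(nS, nA)
policy = rho / rho.sum(axis=1, keepdims=True)      # randomized stationary policy (8.16)
print("optimal occupation measure:\n", rho.round(3))
print("optimal stationary policy pi*(u|x):\n", policy.round(3))
```

Note that the optimal CMDP policy is in general randomized, which is exactly what the division by the per-state occupation mass produces.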
8.3.4 Model-Free Reinforcement Learning for Decision-Making
MPC is a model-based control philosophy in which the current control action is obtained by online optimization of an objective function. Model-free reinforcement learning, in contrast, formulates a predictive control problem with a control horizon of only length one but takes a decision based on infinite-horizon information. Reinforcement learning has made great advances in several applications [23–31, 67]. Even though reinforcement learning has been widely studied, its application to control requires solving the problem of decision-making with constraints. The previous section considered the aspects of CMDP where the model was known. In this section, we will expand the discussion to provide model-free approaches.
Unconstrained Reinforcement Learning
Before jumping into the domain of constrained RL, we first discuss a few common algorithms for optimizing policies in the unconstrained RL setup. In reinforcement learning, we define the total expected reward obtained by starting in a state x and following a stationary policy π as the value of the state. Formally, we have

V^π(x) = E_π[ Σ_{k=0}^{∞} γ^k r(x(k), π(x(k))) | x(0) = x ],    (8.17)

where the expectation is over the states observed from following the policy π and the transition probabilities P. Along with the value of the state, the state-action value function of a policy π is defined as the total expected value obtained on taking action u in state x and following policy π thereafter. This gives

Q^π(x, u) = E_π[ r(x, u) + Σ_{k=1}^{∞} γ^k r(x(k), π(x(k))) ]    (8.18)
          = r(x, u) + γ E[V(x(k + 1))].    (8.19)

Note that when u = π(x), Q^π(x, u) = V^π(x). Further, this conveniently allows us to define an optimal policy π*(x) as π*(x) = max_a Q^{π*}(x, a). Now, it remains to learn the state-action value function. Let α_k be such that

0 ≤ α_k < 1,  Σ_{k=0}^{∞} α_k = ∞.    (8.20)

It has been shown [46] that updating the Q-value for the state-action pair x(k), u(k) at time step k as

Q(x(k), u(k)) = α_k ( r_k + max_{u′} Q(x(k + 1), u′) ) + (1 − α_k) Q(x(k), u(k))    (8.21)

leads to the convergence of Q-values to the optimal Q-values if each state-action pair is visited infinitely often. To ensure that the state-action pairs are indeed visited infinitely often, we use an ε-greedy Q-learning algorithm [47] described in Algorithm 1. This algorithm performs random exploration with probability ε and selects the optimal action according to the current Q-values with probability 1 − ε. We also note that the Q-value update can be directly extended to large state spaces using neural networks, akin to deep Q-learning [68–71]. This is because the multiple Q-functions for the constraints can be learned using deep neural networks. However, the fundamental idea still follows Eq. (8.21).

Algorithm 1 ε-greedy Q-learning algorithm
1: Initialize Q(x, u) ← 0 for all (x, u) ∈ K. Select α_k according to Eq. (8.20).
2: for time steps k = 0, · · · do
3:   Observe state x(k) and select action u(k) = max_a Q(x(k), a).
4:   Perform a random action with probability ε and u(k) with probability 1 − ε.
5:   Obtain reward r_k and the next state x(k + 1).
6:   Update the Q-value table as
       Q(x(k), u(k)) = α_k ( r_k + max_{u′} Q(x(k + 1), u′) ) + (1 − α_k) Q(x(k), u(k))    (8.22)
7: end for

The Q-learning method described above requires the controller to estimate the expected rewards of taking an action in a state. However, at times, we may be interested in designing a controller that directly learns which actions are good in a state. Q-learning also requires us to have a discrete action space, which may not always be true. Both of these issues can be resolved by learning an optimal stochastic policy π* directly instead of learning Q^{π*}(x, u). A stochastic policy π : X × U → [0, 1] is a distribution over actions given the state. For MDPs, the optimal action is deterministic, and hence, the probability of taking the optimal action becomes 1 with the probability of all the other actions becoming 0. To learn the stochastic policy, we can perform gradient descent on the policy π using the expected total reward J as a function of π:

∇_π J = ∇_π E[ Σ_{k=1}^{T} γ^k r(x(k), u(k)) ]    (8.23)
      = E[ Σ_{k=1}^{T} γ^k ∇_π log(π(u(k)|x(k))) r(x(k), u(k)) ].    (8.24)

Equation (8.24) can be approximated by rolling out multiple trajectories {{x_{k,n}, u_{k,n}}_{k=1}^{T}}_{n=1}^{N} and averaging over them as

∇_π J = Σ_{n=1}^{N} Σ_{k=1}^{T} γ^k ∇_π log(π(u_{k,n}|x_{k,n})) r(x_{k,n}, u_{k,n}),    (8.25)

where x_{k,n} is the state x(k) and u_{k,n} is the applied action at time step k in the nth trajectory. The gradient estimation in Eq. (8.25) was first proposed as the REINFORCE algorithm [44]. A reference pseudocode is also provided in Algorithm 2.

Algorithm 2 Policy gradient algorithm
1: Initialize π ← 0 for all (s, a) ∈ K. Learning rate η.
2: for episodes n = 0, · · · do
3:   for time steps k = 1, · · · , T do
4:     Observe state x_k and select action a_k ∼ π(·|x_k).
5:     Play action a_k, obtain reward r_k and the next state x_{k+1}.
6:   end for
7:   Estimate the gradient
       ∇_π J = Σ_{k=1}^{T} γ^k ∇_π log(π(u(k)|x(k))) r(x(k), u(k))    (8.26)
8:   Update the policy π ← π + η ∇_π J
9: end for

Since then, there has been a lot of work on policy gradients, implementing them using function approximators [45], and better gradient updates have been proposed [72] to ensure that the updated policies converge to the optimal policy. Recently, after the fame of deep neural networks, various other optimization frameworks are used for policy gradients. The Trust Region Policy Optimization algorithm [73] and the Proximal Policy Optimization algorithm [74] are
Control of Uncertain Systems
two of the algorithms that use optimization techniques to ensure a faster convergence of policy gradients.
Constrained Reinforcement Learning with Average Reward Constrained MDP problems are convex, and hence, one can convert the constrained MDP problem to an unconstrained zero-sum game, where the objective is the Lagrangian of the optimization problem [1]. However, when the dynamics and rewards are not known, it doesn’t become apparent how to do it as the Lagrangian will itself become unknown to the optimizing agent. Previous work regarding constrained MDPs, when the dynamics of the stochastic process are not known, considers scalarization through weighted sums of the rewards; see [56] and the references therein. Another approach is to consider Pareto optimality when multiple objectives are present [75]. However, none of the aforementioned approaches guarantee to satisfy lower bounds for a given set of reward functions simultaneously. Further, we note that deterministic policies are not optimal [1]. We note that the objective in the problem can be converted to the constraint as max E[X] is equivalent to finding the largest δ such that E[X] ≥ δ. Thus, using bisection search on δ, objective can be maximized. In the following, we formulate a multi-objective problem, which amounts to finding a policy that satisfies multiple constraints. Let the different objectives be ro ; o = 1, · · · , I. The multi-objective reinforcement learning problem is that for the optimizing agent to find a stationary policy π(x(k)) that satisfies the average expected reward find π ∈
T−1 1 o r (x(k), π(x(k))) ≥ 0 (8.27) s. t. lim E T→∞ T k=0 for o = 1, · · · , I.
The multi-objective reinforcement learning problem of Markov decision processes is that of finding a policy that satisfies a number of constraints of the form (8.27). Reinforcement learning-based model-free solutions for constrained MDP have been widely proposed without guarantees [33–38]. Recently, [76] proposed a policy gradient algorithm with Lagrange multiplier in multi-time scale for discounted constrained reinforcement learning algorithm and proved that the policy converges to a feasible policy. [77] found a feasible policy by using Lagrange multiplier and zero-sum game for reinforcement learning algorithm with convex constraints and discounted reward. [57] showed that constrained reinforcement learning has zero duality gap, which provides a theoretical guarantee to policy gradient algorithms in the dual domain. [78] proposed the C-UCRL
197
Algorithm 3 Zero-sum Markov bandit algorithm for CMDP with average reward 1: Initialize Q(s, a, o) ← 0 and N(s, a, o) ← 0 ∀(s, a, o) ∈ S × A × O. Observe s0 and initialize a0 randomly 2: for Iteration k = 0, . . . , K do 3: Take action ak and observe next state sk+1 4: πk+1 , ok = arg max min πk+1 o∈O R(sk , ak , ok ) + Q(sk+1 , πk+1 (sk+1 ), ok ) 1 5: t = N(sk , ak , ok ) ← N(sk , ak , ok ) + 1; αt = t+1 1 6: f = |S||A|O| s,a,o Q(s, a, o) 7: y = R(sk , ak , ok ) + E[Q(sk+1 , πk+1 (sk+1 ), ok ] − f 8: Q(sk , ak , ok ) ← (1 − αt )Q(sk , ak , ok ) + αk ∗ y 9: Sample ak+1 from the distribution πk+1 (·|sk+1 ) 10: end for
3√ algorithm which achieve sublinear O(T 4 log(T/δ)) with probability 1 − δ while satisfying the constraints. However, this algorithm needs the knowledge of the model dynamics, and hence, it is not model free. [77] proposed four algorithms for the constrained reinforcement learning problem in primal, dual, or primal-dual domains and showed a sublinear bound for regret and constraint violations. [60] proposed an optimism-based algorithm, which achieves a regret bound of √ ˜ hides ˜ AT) with O(1) constraint violations where O O(DS the logarithmic factors. However, all these algorithms are model based. In the following, we will describe a modelfree algorithm to deal with constraints, which has provable guarantees. The algorithm is based on the results in [39]. Let R(s, a, o) = ro (s, a) for o ∈ {1, · · · , I}. The modelfree algorithm is summarized in Algorithm 3. The algorithm uses the concept of zero-sum Markov bandits to have maxmin in line 4, which gives a randomized policy. Suppose that there exists a state x∗ ∈ X which is recurrent for every stationary policy π played by the agent. Further, suppose that the absolute values of the reward functions r and {ro }Io=1 are bounded by some constant c known to the agent. Let there exists a deterministic number d > 0 such that for every x ∈ X , u ∈ U , and o ∈ {1, · · · , I}, with probability 1, we have
lim inf k→∞
N(k, x, u, o) ≥ d, k
where N(k, x, u, o) =
k
1(x,u,o) (x(k ), u(k ), o(k))
k =1
is the visitation count to state-action pair (x, u) and the maximized objective o. Based on the above assumptions, the authors of [39] showed that the recursion
Q_{k+1}(x, u, o) = Q_k(x, u, o) + 1_{(x,u,o)}(x(k), u(k), o(k)) β_{N(k,x,u,o)} [ max_{π∈Π} min_{o(k)∈O} ( R(x, u, o(k)) + E[Q_k(x(k + 1), π(x(k + 1)), o(k))] ) − f(Q_k) − Q_k(x, u, o) ]    (8.28)

converges. Further, let the converged value of the recursion be Q*. Then, the policy

π*(x) = arg max_{π∈Π} min_{o∈O} E[Q*(x, π(x), o)]    (8.29)

is a solution to (8.27) for all x ∈ X.

An example for the above problem is a search engine example [39], formulated as follows. In a search engine, there is a number of documents that are related to a certain query. There are two values that are related to every document, the first being a (advertisement) value u_i of document i for the search engine and the second being a value v_i for the user (could be a measure of how strongly related the document is to the user query). The task of the search engine is to display the documents in rows, in some order, where each row has an attention value A_j for row j. We assume that u_i and v_i are known to the search engine for all i, whereas the attention values {A_j} are not known. The strategy π of the search engine is to display document i in position j, π(i) = j, with probability p_ij. Thus, the expected average reward for the search engine is

R^e = lim_{N→∞} (1/N) Σ_{i=1}^{N} E[u_i A_{π(i)}]

and for the user

R^u = lim_{N→∞} (1/N) Σ_{i=1}^{N} E[v_i A_{π(i)}].

The search engine has multiple objectives here, where it wants to maximize the rewards for the user and itself. One solution is to define a measure for the quality of service for the user, R^u ≥ R̄_u, and at the same time satisfy a certain lower bound R̄_e of its own reward, that is,

find π
s. t.  lim_{N→∞} (1/N) Σ_{i=1}^{N} E(u_i A_{π(i)}) ≥ R̄_e
       lim_{N→∞} (1/N) Σ_{i=1}^{N} E(v_i A_{π(i)}) ≥ R̄_u.

8.3.5 Constrained Reinforcement Learning with Discounted Rewards

Given a stochastic process with state s_k at time step k, reward function r, constraint functions r^j, and a discount factor 0 < γ < 1, the multi-objective reinforcement learning problem is that for the optimizing agent to find a stationary policy π(s_k) that simultaneously satisfies, in the discounted reward setting,

max_π E[ Σ_{k=0}^{∞} γ^k r(s_k, π(s_k)) ]    (8.30)
s.t.  E[ Σ_{k=0}^{∞} γ^k r^j(s_k, π(s_k)) ] ≥ 0.    (8.31)

Note that maximizing Eq. (8.30) is equivalent to maximizing δ subject to the constraint

E[ Σ_{k=0}^{∞} γ^k r(s_k, π(s_k)) ] ≥ δ.

Thus, one could always replace r with r − (1 − γ)δ and obtain a constraint of the form (8.31). Thus, we can rewrite the problem as an optimization problem of finding a stationary policy π subject to the initial state s_0 = s and the constraints (8.31), that is,

find π
s. t.  E[ Σ_{k=0}^{∞} γ^k r^j(s_k, π(s_k)) ] ≥ 0    (8.32)

for j = 1, . . . , J. Let α_k(s, a, o) = α_k · 1_{(s,a,o)}(s_k, a_k, o_k) satisfy

0 ≤ α_k(s, a, o) < 1,  Σ_{k=0}^{∞} α_k(s, a, o) = ∞,  Σ_{k=0}^{∞} α_k^2(s, a, o) < ∞,  ∀(s, a, o) ∈ S × A × O.    (8.33)

The authors of [39] related this problem to the zero-sum Markov bandit problem. This connection was used to provide a reinforcement learning approach to solve this problem. The algorithm is given in Algorithm 4.

Algorithm 4 Zero-sum Markov bandit algorithm for CMDP with discounted reward
1: Initialize Q(s, a, o) ← 0 for all (s, a, o) ∈ S × A × O. Observe s_0 and initialize a_0 randomly. Select α_k according to Eq. (8.33)
2: for Iteration k = 0, . . . , K do
3:   Take action a_k and observe the next state s_{k+1}
4:   π_{k+1}, o_k = arg max_{π_{k+1}} min_{o∈O} Q(s_{k+1}, π_{k+1}(s_{k+1}), o)
5:   Q(s_k, a_k, o_k) ← (1 − α_k) Q(s_k, a_k, o_k) + α_k [ r(s_k, a_k, o_k) + γ E(Q(s_{k+1}, π_{k+1}(s_{k+1}), o_k)) ]
6:   Sample a_{k+1} from the distribution π_{k+1}(·|s_{k+1})
7: end for

Table 8.2 Transition probability of the queue system

Current state | P(x_{t+1} = x_t − 1) | P(x_{t+1} = x_t) | P(x_{t+1} = x_t + 1)
1 ≤ x_t ≤ L − 1 | a(1 − b) | ab + (1 − a)(1 − b) | (1 − a)b
x_t = L | a | 1 − a | 0
x_t = 0 | 0 | 1 − b(1 − a) | b(1 − a)

In line 1, we initialize the Q-table, observe s_0, and select a_0 randomly. In line 3, we take the current action a_k and observe the next state s_{k+1} so that we can compute the max-min operator in line 4. Line 5 updates the Q-table. Line 6 samples the next action from the policy obtained in line 4. Notice that the max-min can be converted to a maximization problem with linear inequalities and can be solved by linear programming efficiently, because the number of inequalities here is limited by the space of the opponent. Even for large-scale problems, efficient algorithms exist for max-min [79]. Based on the converged value of the Q-table (say Q′), the policy is given as
π (s0 ) = arg max min E Q (s0 , π(s), o) π ∈ o∈O
π (s) = arg max E(Q (s, π(s), o ))
(8.34)
π ∈
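A minimal Python sketch of the update loop in Algorithm 4 is given below. This is our own illustration, not the implementation from [39]: the max-min over randomized policies, which the text solves as a linear program, is replaced here by a simpler max over deterministic actions, and the environment interface (env.reset, env.step returning one reward per objective) as well as all names are assumptions.

```python
import numpy as np

def zero_sum_bandit_q_learning(env, n_states, n_actions, n_objectives,
                               gamma=0.5, iterations=100_000, seed=0):
    """Sketch of Algorithm 4: tabular Q-learning with a max-min step over
    objectives. The randomized max-min (an LP) is replaced by a max over
    deterministic actions for simplicity."""
    rng = np.random.default_rng(seed)
    Q = np.zeros((n_states, n_actions, n_objectives))
    visits = np.zeros((n_states, n_actions, n_objectives))

    s = env.reset()                     # assumed environment interface
    a = int(rng.integers(n_actions))    # initialize a_0 randomly
    for k in range(iterations):
        s_next, rewards = env.step(a)   # rewards[o] = r_o(s, a), one entry per objective
        # Line 4 (simplified): best action for the worst-case objective.
        a_next = int(np.argmax(Q[s_next].min(axis=1)))
        o_k = int(np.argmin(Q[s_next, a_next]))       # minimizing objective
        # Step size satisfying the conditions in Eq. (8.33).
        visits[s, a, o_k] += 1
        alpha = 1.0 / (visits[s, a, o_k] + 1)
        target = rewards[o_k] + gamma * Q[s_next, a_next, o_k]   # line 5
        Q[s, a, o_k] = (1 - alpha) * Q[s, a, o_k] + alpha * target
        s, a = s_next, a_next           # line 6 (deterministic stand-in for sampling)
    return Q
```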
8.3.6 Case Study for a Constrained RL Setup
In this section, we evaluate Algorithm 4 on a queuing system with a single server in discrete time. In this model, we assume there is a buffer of finite size L. A possible arrival is assumed to occur at the beginning of the time slot. The state of the system is the number of customers waiting in the queue at the beginning of the time slot, so that |S| = L + 1. We assume there are two kinds of actions: service actions and flow actions. The service action space is a finite subset A of [a_min, a_max] with 0 < a_min ≤ a_max < 1. With a service action a, we assume that the service of a customer is successfully completed with probability a. If the service succeeds, the length of the queue is reduced by one; otherwise, there is no change to the queue. The flow action space is a finite subset B of [b_min, b_max] with 0 ≤ b_min ≤ b_max < 1. Given a flow action b, a customer arrives during the time slot with probability b. Let the state at time t be x_t. We assume that no customer arrives when x_t = L, which we model by the queue length not increasing on a customer arrival when x_t = L. Finally, the overall action space is the product of the service action space and the flow action space, i.e., A × B. Given an action pair (a, b) and current state x_t, the transition of this system, P(x_{t+1} | x_t, a_t = a, b_t = b), is shown in Table 8.2.

Table 8.2 Transition probability of the queue system

Current state       P(x_{t+1} = x_t − 1)    P(x_{t+1} = x_t)          P(x_{t+1} = x_t + 1)
1 ≤ x_t ≤ L − 1     a(1 − b)                ab + (1 − a)(1 − b)       (1 − a)b
x_t = L             a                       1 − a                     0
x_t = 0             0                       1 − b(1 − a)              b(1 − a)

Assuming that γ = 0.5, we want to optimize the total discounted reward collected and satisfy two constraints with respect to service and flow simultaneously. Thus, the overall optimization problem is given as

min_{π^a, π^b}   E[ Σ_{t=1}^{∞} γ^t c(s_t, π^a(s_t), π^b(s_t)) ]
s.t.   E[ Σ_{t=1}^{∞} γ^t c^1(s_t, π^a(s_t), π^b(s_t)) ] ≤ 0        (8.35)
       E[ Σ_{t=1}^{∞} γ^t c^2(s_t, π^a(s_t), π^b(s_t)) ] ≤ 0
where π^a_h and π^b_h are the policies for the service and flow at time slot h, respectively. We note that the expectation above is with respect to both the stochastic policies and the transition probabilities. In order to match the constraint satisfaction formulation modeled in this chapter, we use a bisection search on δ and transform the objective into the constraint E[ Σ_{t=1}^{∞} γ^t c(s_t, π^a(s_t), π^b(s_t)) ] ≤ δ. In the simulation setting, we choose the length of the queue L = 5. We let the service action space be A = {0.3, 0.4, 0.5, 0.6, 0.7} and the flow action space be B = {0, 0.2, 0.4, 0.6} for all states besides the state s = L. Moreover, the cost function is set to c(s, a, b) = s − 5, the constraint function for the service is defined as c^1(s, a, b) = 10a − 5, and the constraint function for the flow is c^2(s, a, b) = 5(1 − b)^2 − 2. For different values of δ, the numerical results are given in Fig. 8.3. To show the performance of the algorithm, we choose values of δ close to the true optimum, and thus the figure shows the performance for δ = 9.5, 9.55, 9.575, 9.6, 9.625, and 9.7. For each value of δ, we run the algorithm for 10^5 iterations. Rather than evaluating the policy in each iteration, we evaluate the policy every 100 iterations, and at every iteration for the last 100 iterations. In order to estimate the expected value of the constraints, we collect 10,000 trajectories and average the constraint function values over them. These constraint function values for the three constraints are plotted in Fig. 8.3. For δ = 9.5, we see that the algorithm converges after about 60,000 iterations and all three constraints are larger than 0, which means that we have found a feasible policy for the setting δ = 9.5. Moreover, it is reasonable that all three constraints converge to the same value, since the proposed Algorithm 4 optimizes the minimal value function among V(s, a, o) with respect to o. We see that the three constraints for δ = 9.5 are close to each other and nonnegative, thus demonstrating that the constraints are satisfied and δ = 9.5 is feasible.
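The transition structure of Table 8.2 can be sampled directly. The following short Python sketch (our own, with assumed names and a standard random-number generator) implements one time step of the queue used in this simulation:

```python
import random

def queue_step(x, a, b, L, rng):
    """One transition of the discrete-time single-server queue of Table 8.2.
    x: current queue length, a: service success probability,
    b: arrival probability, L: buffer size."""
    arrival = x < L and rng.random() < b   # arrivals are blocked when the buffer is full
    service = rng.random() < a             # service attempt succeeds with probability a
    if x == 0:
        # an arriving customer can be served within the same slot,
        # so the queue grows only if the arrival is not served
        return 1 if (arrival and not service) else 0
    return x - (1 if service else 0) + (1 if arrival else 0)

# Example usage with the case-study parameters (L = 5).
rng = random.Random(0)
x = 0
for _ in range(10):
    x = queue_step(x, a=0.5, b=0.4, L=5, rng=rng)
```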
Fig. 8.3 Value of constraints with iterations for the zero-sum Markov bandit algorithm applied to discrete time single-server queue. (a) δ = 9.5. (b) δ = 9.55. (c) δ = 9.575. (d) δ = 9.6. (e) δ = 9.625. (f) δ = 9.7
On the other extreme, we see the case when δ = 9.7. We note that all three constraints are below 0, which means that there is no feasible policy for this setting. Thus, from the cases δ = 9.5 and 9.7, we note that the optimal objective lies between the two values. Looking at the case δ = 9.625, we also note that the service constraint is clearly below zero, so the constraints are not satisfied. Similarly, for δ = 9.55, the constraints are nonnegative; the closest to zero are the service constraints, which cross zero every few iterations, and thus the gap is within the margin. This shows that the optimal objective lies between 9.55 and 9.625. However, the judgment is not as evident between the two regimes: it cannot be clearly determined from δ = 9.575 and δ = 9.6 whether they are feasible or not, since they are neither consistently below zero after 80,000 iterations, as in the cases δ = 9.625 and δ = 9.7, nor mostly above zero, as for δ = 9.5. Thus, looking at the figures, we estimate the value of the optimal objective to be between 9.55 and 9.625. In order to compare the result with the theoretically optimal total reward, we can assume that the dynamics of the MDP are known in advance and use a linear programming algorithm to solve the original problem. The result obtained by the LP is 9.62. We note that Q(s, a, o) has 6 × 5 × 4 × 3 = 360 elements, and it is possible that 10^5 iterations are not enough for all the elements of the Q-table to converge. Further, sampling 10^4 trajectories can only achieve an accuracy of 0.1 with 99% confidence for the constraint function value, and we are within that range. Thus, more iterations and more samples (especially more samples) would help improve the achievable estimate beyond 9.55 in the algorithm performance. Overall, considering the limited iterations and sampling in the simulations, we conclude that the result obtained by the proposed algorithm is close to the optimal result obtained by linear programming. Note that a fundamental difference between the model-based MPC methods and the model-free RL methods is that an MPC method can predict the probable states into which the system can transition, whereas RL-based methods only predict the action to perform in a given state that can ensure the constraints remain satisfied.
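The bisection on δ described above can be sketched as follows. This is our own illustration: is_feasible is a placeholder for one complete run of Algorithm 4 followed by the constraint check on sampled trajectories, and the bracket [9.5, 9.7] comes from the feasibility observations in Fig. 8.3.

```python
def bisect_delta(is_feasible, lo, hi, tol=0.01):
    """Bisection on the objective bound delta: shrink the interval [lo, hi]
    until its width is below tol, keeping lo feasible and hi infeasible."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if is_feasible(mid):   # e.g., run Algorithm 4 with the objective turned into a <= mid constraint
            lo = mid           # mid is achievable, so look for a tighter bound
        else:
            hi = mid
    return lo, hi

# Usage with the case-study bracket (hypothetical feasibility oracle):
# lo, hi = bisect_delta(run_algorithm_4_and_check, 9.5, 9.7)
```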
8.4 Application Areas

Finding optimal control in the presence of uncertainties has seen significant success in recent decades, and the approaches in this chapter have been used as the primary control method for the systematic handling of system constraints [80, 81], with wide adoption in diverse fields, such as process control [82], automotive systems [48], robotics [83], video streaming [84–86], heating systems [87], spacecraft [88], agriculture [89], search engines [39], scheduling [29, 38, 90], resource allocation [28, 91–93], etc. In these systems, a variety of approaches including MPC, CMDP, and RL with constraints have been applied. Thus, the approaches studied in this chapter have widespread applications.

8.5 Conclusions, Challenges, and Trends

In this chapter, we investigated models and approaches for controlling dynamical systems in the presence of uncertainty. We first considered model predictive control, where model parameters are learned, for instance, using Gaussian process regression. We then abstracted the problem to a constrained Markov decision process (CMDP). When the model parameters are unknown, the concept of reinforcement learning (RL) is used. We introduced Q-learning-based algorithms and policy gradient methods for finding optimal policies for the unconstrained RL setup. We then investigated the constrained setup using zero-sum games and bandit optimization. We note that the approaches for machine learning, in particular reinforcement learning, have been advancing rapidly. This is driving an increased use of learning-based approaches in control systems [25, 26, 28, 67, 94–97]. Some of the problems of key interest for further developments in the area include:

Efficient Model-Free Approaches for CMDP with Peak and Expected Costs  In many problems, certain decisions cannot be taken in certain states, and such constraints are called peak constraints. Handling both peak and expected constraints in the setup, and analyzing efficient algorithms for regret bounds, is an open problem.

Scalable Solutions for Model-Free CMDP  Most of the approaches discussed for reinforcement learning with constraints are for the tabular setup, where the sets of actions and states are discrete. However, deep-learning-based approaches, which are scalable and have provable guarantees, are important for more widespread use of these approaches [98].

Beyond MDP  The Markov property requires that the states be organized in such a way that the history (previous states and actions) is not relevant for predicting subsequent dynamics and rewards. This is not always possible. One possibility for addressing such an issue is the use of partially observable MDPs (POMDPs). Even though efficient algorithms for POMDPs are being studied, such approaches in the presence of constraints are in their infancy.
References 1. Altman, E.: Constrained Markov Decision Processes, vol. 7. CRC Press, Boca Raton (1999) 2. Farrell, R., Polli, A.: Comparison of unconstrained dynamic matrix control to conventional feedback control for a first order model. Adv. Instrum. Control 45(2), 1033 (1990) 3. Holkar, K., Waghmare, L.: An overview of model predictive control. Int. J. Control Autom. 3(4), 47–63 (2010) 4. Rachael, J., Rault, A., Testud, J., Papon, J.: Model predictive heuristic control: application to an industrial process. Automatica 14(5), 413–428 (1978) 5. Cutler, C.R., Ramaker, B.L.: Dynamic matrix control—a computer control algorithm. In: Joint Automatic Control Conference, vol. 17, p. 72 (1980) 6. Prett, D.M., Gillette, R.: Optimization and constrained multivariable control of a catalytic cracking unit. In: Joint Automatic Control Conference, vol. 17, p. 73 (1980) 7. Garcia, C.E., Prett, D.M., Morari, M.: Model predictive control: theory and practice—a survey. Automatica 25(3), 335–348 (1989) 8. Mayne, D.Q., Rawlings, J.B., Rao, C.V., Scokaert, P.O.: Constrained model predictive control: stability and optimality. Automatica 36(6), 789–814 (2000) 9. Fernandez-Camacho, E., Bordons-Alba, C.: Model Predictive Control in the Process Industry. Springer, Berlin (1995) 10. Zadeh, L., Whalen, B.: On optimal control and linear programming. IRE Trans. Autom. Control 7(4), 45–46 (1962) 11. Propoi, A.: Application of linear programming methods for the synthesis of automatic sampled-data systems. Avtomat. i Telemeh 24, 912–920 (1963) 12. Gutman, P.O.: Controllers for bilinear and constrained linear systems. PhD Thesis TFRT-1022 (1982) 13. Chang, T., Seborg, D.: A linear programming approach for multivariable feedback control with inequality constraints. Int. J. Control 37(3), 583–597 (1983) 14. Lorenzen, M., Cannon, M., Allgöwer, F.: Robust MPC with recursive model update. Automatica 103, 461–471 (2019) 15. Bujarbaruah, M., Zhang, X., Tanaskovic, M., Borrelli, F.: Adaptive stochastic mpc under time varying uncertainty. IEEE Trans. Autom. Control (2020) 16. Kocijan, J., Murray-Smith, R., Rasmussen, C.E., Girard, A.: Gaussian process model based predictive control. In: Proceedings of the 2004 American Control Conference, vol. 3, pp. 2214–2219. IEEE (2004) 17. Cao, G., Lai, E.M.K., Alam, F.: Gaussian process model predictive control of an unmanned quadrotor. J. Intell. Robot. Syst. 88(1), 147–162 (2017) 18. Hewing, L., Kabzan, J., Zeilinger, M.N.: Cautious model predictive control using gaussian process regression. IEEE Trans. Control Syst. Technol. (2019) 19. Matschek, J., Himmel, A., Sundmacher, K., Findeisen, R.: Constrained Gaussian process learning for model predictive control. IFAC-PapersOnLine 53(2), 971–976 (2020) 20. Kolesar, P.: A markovian model for hospital admission scheduling. Manag. Sci. 16(6), B-384 (1970) 21. Golabi, K., Kulkarni, R.B., Way, G.B.: A statewide pavement management system. Interfaces 12(6), 5–21 (1982) 22. Winden, C., Dekker, R.: Markov decision models for building maintenance: a feasibility study. J. Oper. Res. Soc. 49, 928–935 (1998) 23. Shi, B., Ozsoy, M.G., Hurley, N., Smyth, B., Tragos, E.Z., Geraci, J., Lawlor, A.: Pyrecgym: a reinforcement learning gym for recommender systems. In: Proceedings of the 13th ACM Conference on Recommender Systems, pp. 491–495 (2019) 24. Luketina, J., Nardelli, N., Farquhar, G., Foerster, J., Andreas, J., Grefenstette, E., Whiteson, S., Rocktäschel, T.: A survey of rein-
forcement learning informed by natural language. In: Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI-19, pp. 6309–6317. International Joint Conferences on Artificial Intelligence Organization (2019)
25. Al-Abbasi, A.O., Ghosh, A., Aggarwal, V.: Deeppool: Distributed model-free algorithm for ride-sharing using deep reinforcement learning. IEEE Trans. Intell. Transp. Syst. 20(12), 4714–4727 (2019)
26. Singh, A., Al-Abbasi, A.O., Aggarwal, V.: A distributed model-free algorithm for multi-hop ride-sharing using deep reinforcement learning. IEEE Trans. Intell. Transp. Syst. (2021)
27. Chen, J., Umrawal, A.K., Lan, T., Aggarwal, V.: Deepfreight: A model-free deep-reinforcement-learning-based algorithm for multi-transfer freight delivery. In: International Conference on Automated Planning and Scheduling (ICAPS) (2021)
28. Wang, Y., Li, Y., Lan, T., Aggarwal, V.: Deepchunk: Deep Q-learning for chunk-based caching in wireless data processing networks. IEEE Trans. Cogn. Commun. Netw. 5(4), 1034–1045 (2019)
29. Geng, N., Lan, T., Aggarwal, V., Yang, Y., Xu, M.: A multi-agent reinforcement learning perspective on distributed traffic engineering. In: 2020 IEEE 28th International Conference on Network Protocols (ICNP), pp. 1–11. IEEE (2020)
30. Silver, D., Schrittwieser, J., Simonyan, K., Antonoglou, I., Huang, A., Guez, A., Hubert, T., Baker, L., Lai, M., Bolton, A., Chen, Y., Lillicrap, T., Hui, F., Sifre, L., Driessche, G.V.D., Graepel, T., Hassabis, D.: Mastering the game of go without human knowledge. Nature 550, 354–359 (2017)
31. Silver, D., Hubert, T., Schrittwieser, J., Antonoglou, I., Lai, M., Guez, A., Lanctot, M., Sifre, L., Kumaran, D., Graepel, T., et al.: A general reinforcement learning algorithm that masters chess, Shogi, and go through self-play. Science 362(6419), 1140–1144 (2018)
32. Åström, K.J., Wittenmark, B.: Adaptive Control, 2nd edn. Addison-Wesley Longman Publishing, Boston (1994)
33. Djonin, D.V., Krishnamurthy, V.: Mimo transmission control in fading channels: a constrained markov decision process formulation with monotone randomized policies. IEEE Trans. Signal Process. 55(10), 5069–5083 (2007)
34. Lizotte, D., Bowling, M.H., Murphy, A.S.: Efficient reinforcement learning with multiple reward functions for randomized controlled trial analysis. In: Proceedings of the 27th International Conference on International Conference on Machine Learning, ICML'10, pp. 695–702. Omnipress, USA (2010)
35. Drugan, M.M., Nowe, A.: Designing multi-objective multiarmed bandits algorithms: a study. In: The 2013 International Joint Conference on Neural Networks (IJCNN), pp. 1–8. IEEE (2013)
36. Achiam, J., Held, D., Tamar, A., Abbeel, P.: Constrained policy optimization. In: Proceedings of the 34th International Conference on Machine Learning, Volume 70, ICML'17, pp. 22–31. JMLR.org (2017)
37. Abels, A., Roijers, D., Lenaerts, T., Nowé, A., Steckelmacher, D.: Dynamic weights in multi-objective deep reinforcement learning. In: Chaudhuri, K., Salakhutdinov, R. (eds.) Proceedings of the 36th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 97, pp. 11–20. PMLR, Long Beach (2019)
38. Raghu, R., Upadhyaya, P., Panju, M., Agarwal, V., Sharma, V.: Deep reinforcement learning based power control for wireless multicast systems. In: 2019 57th Annual Allerton Conference on Communication, Control, and Computing (Allerton), pp. 1168–1175. IEEE (2019)
39. Gattami, A., Bai, Q., Agarwal, V.: Reinforcement learning for multi-objective and constrained markov decision processes. In: Proceedings of AISTATS (2021)
40. Sastry, S., Bodson, M.: Adaptive Control: Stability, Convergence and Robustness. Courier Corporation (2011)
41. Kumar, P.R.: A survey of some results in stochastic adaptive control. SIAM J. Control Optim. 23(3), 329–380 (1985) 42. Kalman, R.E.: A new approach to linear filtering and prediction problems. Trans. ASME-J. Basic Eng. 82, 35–45 (1960) 43. Schulz, E., Speekenbrink, M., Krause, A.: A tutorial on gaussian process regression: Modelling, exploring, and exploiting functions. J. Math. Psychol. 85, 1–16 (2018) 44. Williams, R.J.: Simple statistical gradient-following algorithms for connectionist reinforcement learning. Mach. Learn. 8(3–4), 229– 256 (1992) 45. Sutton, R.S., McAllester, D.A., Singh, S.P., Mansour, Y.: Policy gradient methods for reinforcement learning with function approximation. In: Advances in Neural Information Processing Systems, pp. 1057–1063 (2000) 46. Watkins, C.J., Dayan, P.: Q-learning. Mach. Learn. 8(3-4), 279–292 (1992) 47. Sutton, R.S., Barto, A.G.: Reinforcement Learning: An Introduction. MIT Press (2018) 48. Di Cairano, S., Yanakiev, D., Bemporad, A., Kolmanovsky, I.V., Hrovat, D.: An MPC design flow for automotive control and applications to idle speed regulation. In: 2008 47th IEEE Conference on Decision and Control, pp. 5686–5691. IEEE (2008) 49. Fleming, W.H., Rishel, R.W.: Deterministic and Stochastic Optimal Control, vol. 1. Springer, Berlin (2012) 50. Koppang, P., Leland, R.: Linear quadratic stochastic control of atomic hydrogen masers. IEEE Trans. Ultrason. Ferroelectr. Freq. Control 46(3), 517–522 (1999) 51. Duncan, T.E., Pasik-Duncan, B.: A direct approach to linearquadratic stochastic control. Opuscula Math. 37(6), 821–827 (2017) 52. Bank, P., Voß, M.: Linear quadratic stochastic control problems with stochastic terminal constraint. SIAM J. Control Optim. 56(2), 672–699 (2018) 53. Hordijk, A., Kallenberg, L.C.: Constrained undiscounted stochastic dynamic programming. Math. Oper. Res. 9(2), 276–289 (1984) 54. Neto, T.A., Pereira, M.F., Kelman, J.: A risk-constrained stochastic dynamic programming approach to the operation planning of hydrothermal systems. IEEE Trans. Power Apparatus Syst. (2), 273– 279 (1985) 55. Chen, R.C., Blankenship, G.L.: Dynamic programming equations for discounted constrained stochastic control. IEEE Trans. Autom. Control 49(5), 699–709 (2004) 56. Roijers, D.M., Vamplew, P., Whiteson, S., Dazeley, R.: A survey of multi-objective sequential decision-making. J. Artif. Int. Res. 48(1), 67–113 (2013) 57. Paternain, S., Chamon, L., Calvo-Fullana, M., Ribeiro, A.: Constrained reinforcement learning has zero duality gap. In: H. Wallach, H. Larochelle, A. Beygelzimer, F. d’ Alché-Buc, E. Fox, R. Garnett (eds.) Advances in Neural Information Processing Systems, vol. 32, pp. 7555–7565. Curran Associates (2019) 58. Bai, Q., Agarwal, M., Aggarwal, V.: Joint optimization of multiobjective reinforcement learning with policy gradient Based algorithm. J. Artif. Intell. Res. 74, 1565–1597 (2022) 59. Bai, Q., Bedi, A.S., Agarwal, M., Koppel, A., Aggarwal, V.: Achieving zero constraint violation for constrained reinforcement learning via primal-dual approach. In Proceedings of the AAAI Conference on Artificial Intelligence 36(4), 3682–3689 (2022) 60. Agarwal, M., Bai, Q., Aggarwal, V.: Concave utility reinforcement learning with zero-constraint violations (2021). arXiv preprint arXiv:2109.05439 61. Liu, C., Geng, N., Aggarwal, V., Lan, T., Yang, Y., Xu, M.: Cmix: Deep multi-agent reinforcement learning with peak and average constraints. In: Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pp. 157–173. 
Springer, Berlin (2021)
203 62. Preindl, M.: Robust control invariant sets and Lyapunov-based MPC for IPM synchronous motor drives. IEEE Trans. Ind. Electron. 63(6), 3925–3933 (2016) 63. Sopasakis, P., Herceg, D., Bemporad, A., Patrinos, P.: Risk-averse model predictive control. Automatica 100, 281–288 (2019) 64. Deisenroth, M., Rasmussen, C.E.: Pilco: A model-based and dataefficient approach to policy search. In: Proceedings of the 28th International Conference on Machine Learning (ICML-11), pp. 465–472 (2011) 65. Yiqing, L., Xigang, Y., Yongjian, L.: An improved PSO algorithm for solving non-convex NLP/MINLP problems with equality constraints. Comput. Chem. Eng. 31(3), 153–162 (2007) 66. Madani, T., Benallegue, A.: Sliding mode observer and backstepping control for a quadrotor unmanned aerial vehicles. In: 2007 American Control Conference, pp. 5887–5892. IEEE (2007) 67. Manchella, K., Umrawal, A.K., Aggarwal, V.: Flexpool: A distributed model-free deep reinforcement learning algorithm for joint passengers and goods transportation. IEEE Trans. Intell. Transp. Syst. 22(4), 2035–2047 (2021) 68. Mnih, V., Kavukcuoglu, K., Silver, D., Graves, A., Antonoglou, I., Wierstra, D., Riedmiller, M.: Playing Atari with deep reinforcement learning (2013). arXiv preprint arXiv:1312.5602 69. Van Hasselt, H., Guez, A., Silver, D.: Deep reinforcement learning with double q-learning. In: Proceedings of the AAAI conference on artificial intelligence, vol. 30 (2016) 70. Hester, T., Vecerik, M., Pietquin, O., Lanctot, M., Schaul, T., Piot, B., Horgan, D., Quan, J., Sendonaris, A., Osband, I., et al.: Deep q-learning from demonstrations. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 32 (2018) 71. Schrittwieser, J., Antonoglou, I., Hubert, T., Simonyan, K., Sifre, L., Schmitt, S., Guez, A., Lockhart, E., Hassabis, D., Graepel, T., et al.: Mastering Atari, Go, Chess and Shogi by planning with a learned model. Nature 588(7839), 604–609 (2020) 72. Kakade, S.M.: A natural policy gradient. Adv. Neural Inform. Process. Syst. 14, 1531–1538 (2001) 73. Schulman, J., Levine, S., Abbeel, P., Jordan, M., Moritz, P.: Trust region policy optimization. In: International Conference on Machine Learning, pp. 1889–1897 (2015) 74. Schulman, J., Wolski, F., Dhariwal, P., Radford, A., Klimov, O.: Proximal policy optimization algorithms (2017). arXiv preprint arXiv:1707.06347 75. Moffaert, K.V., Nowé, A.: Multi-objective reinforcement learning using sets of pareto dominating policies. J. Mach. Learn. Res. 15, 3663–3692 (2014) 76. Tessler, C., Mankowitz, D.J., Mannor, S.: Reward constrained policy optimization. In: International Conference on Learning Representations (2018) 77. Efroni, Y., Mannor, S., Pirotta, M.: Exploration-exploitation in constrained MDPs (2020). arXiv preprint arXiv:2003.02189 78. Zheng, L., Ratliff, L.: Constrained Upper Confidence Reinforcement Learning, pp. 620–629. PMLR, The Cloud (2020) 79. Parpas, P., Rustem, B.: An algorithm for the global optimization of a class of continuous minimax problems. J. Optim. Theory Appl. 141, 461–473 (2009) 80. Morari, M., Lee, J.H.: Model predictive control: past, present and future. Comput. Chem. Eng. 23(4–5), 667–682 (1999) 81. Hewing, L., Wabersich, K.P., Menner, M., Zeilinger, M.N.: Learning-based model predictive control: Toward safe learning in control. Ann. Rev. Control Robot. Auton. Syst. 3, 269–296 (2020) 82. Darby, M.L., Nikolaou, M.: Mpc: Current practice and challenges. Control Eng. Practice 20(4), 328–342 (2012) 83. 
Incremona, G.P., Ferrara, A., Magni, L.: Mpc for robot manipulators with integral sliding modes generation. IEEE/ASME Trans. Mechatron. 22(3), 1299–1307 (2017) 84. Yin, X., Jindal, A., Sekar, V., Sinopoli, B.: A controltheoretic approach for dynamic adaptive video streaming
over HTTP. In: Proceedings of the 2015 ACM Conference on Special Interest Group on Data Communication, pp. 325–338 (2015)
85. Elgabli, A., Aggarwal, V., Hao, S., Qian, F., Sen, S.: Lbp: Robust rate adaptation algorithm for SVC video streaming. IEEE/ACM Trans. Netw. 26(4), 1633–1645 (2018)
86. Elgabli, A., Aggarwal, V.: Fastscan: Robust low-complexity rate adaptation algorithm for video streaming over HTTP. IEEE Trans. Circuits Syst. Video Technol. 30(7), 2240–2249 (2020)
87. Široký, J., Oldewurtel, F., Cigler, J., Prívara, S.: Experimental analysis of model predictive control for an energy efficient building heating system. Appl. Energy 88(9), 3079–3087 (2011)
88. Saponara, M., Barrena, V., Bemporad, A., Hartley, E., Maciejowski, J.M., Richards, A., Tramutola, A., Trodden, P.: Model predictive control application to spacecraft rendezvous in Mars sample return scenario. EDP Sciences (2013)
89. Ding, Y., Wang, L., Li, Y., Li, D.: Model predictive control and its application in agriculture: A review. Comput. Electron. Agric. 151, 104–117 (2018)
90. Chung, H.M., Maharjan, S., Zhang, Y., Eliassen, F.: Distributed deep reinforcement learning for intelligent load scheduling in residential smart grid. IEEE Trans. Ind. Inform. (2020)
91. Li, R., Zhao, Z., Sun, Q., Chih-Lin, I., Yang, C., Chen, X., Zhao, M., Zhang, H.: Deep reinforcement learning for resource management in network slicing. IEEE Access 6, 74429–74441 (2018)
92. Zeng, D., Gu, L., Pan, S., Cai, J., Guo, S.: Resource management at the network edge: a deep reinforcement learning approach. IEEE Netw. 33(3), 26–33 (2019)
93. Zhang, Y., Yao, J., Guan, H.: Intelligent cloud resource management with deep reinforcement learning. IEEE Cloud Comput. 4(6), 60–69 (2017)
94. Vamvoudakis, K.G., Modares, H., Kiumarsi, B., Lewis, F.L.: Game theory-based control system algorithms with real-time reinforcement learning: how to solve multiplayer games online. IEEE Control Syst. Mag. 37(1), 33–52 (2017)
95. Koch, W., Mancuso, R., West, R., Bestavros, A.: Reinforcement learning for UAV attitude control. ACM Trans. Cyber-Phys. Syst. 3(2), 1–21 (2019)
96. Bai, W., Zhang, B., Zhou, Q., Lu, R.: Multigradient recursive reinforcement learning NN control for affine nonlinear systems with unmodeled dynamics. Int. J. Robust Nonlinear Control 30(4), 1643–1663 (2020)
97. Redder, A., Ramaswamy, A., Quevedo, D.E.: Deep reinforcement learning for scheduling in large-scale networked control systems. IFAC-PapersOnLine 52(20), 333–338 (2019) 98. Bai, Q., Bedi, A. S., Aggarwal, V.: Achieving zero constraint violation for constrained reinforcement learning via conservative natural policy gradient primal-dual algorithm. arXiv preprint (2022) arXiv:2206.05850
Vaneet Aggarwal received the B.Tech. degree in 2005 from Indian Institute of Technology, Kanpur, India, and the MA and PhD degrees in 2007 and 2010, respectively, from Princeton University, Princeton, NJ, USA, all in electrical engineering. He was a researcher at AT&T LabsResearch (2010–2014) and is currently a Full Professor in the School of IE & the School of ECE (by courtesy) at Purdue University.
Mridul Agarwal received his Bachelor of Technology degree in Electrical Engineering from Indian Institute of Technology Kanpur, India, in 2014 and his PhD degree in Electrical and Computer Engineering at Purdue University in 2022. He worked on 4G-LTE modems for Qualcomm for 4 years (2014–2018). His research interests include reinforcement learning, bandits, and recommendation systems.
9 Artificial Intelligence and Automation
Sven Koenig, Shao-Hung Chan, Jiaoyang Li, and Yi Zheng
Contents
9.1 Artificial Intelligence (AI): The Study of Intelligent Agents . . . 205
9.2 AI Techniques . . . 206
9.2.1 Optimization . . . 206
9.2.2 Knowledge Representation and Reasoning . . . 209
9.2.3 Planning . . . 212
9.2.4 Machine Learning . . . 216
9.3 Combining AI Techniques . . . 223
9.4 Case Study: Automated Warehousing . . . 223
9.5 AI History . . . 225
9.6 AI Achievements . . . 225
9.7 AI Ethics . . . 226
References . . . 227
Abstract
In this chapter, we discuss artificial intelligence (AI) and its impact on automation. We explain what AI is, namely, the study of intelligent agents, and explain a variety of AI techniques related to acquiring knowledge from observations of the world (machine learning), storing it in a structured way (knowledge representation), combining it (reasoning), and using it to determine how to behave to maximize task performance (planning), in both deterministic and probabilistic settings and for both single-agent and multi-agent systems. We discuss how to apply some of these techniques, using automated warehousing as case study, and how to combine them. We also discuss the achievements, current trends, and future of AI as well as its ethical aspects.
S. Koenig () · S.-H. Chan · J. Li · Y. Zheng Computer Science Department, University of Southern California, Los Angeles, CA, USA e-mail: [email protected]; [email protected]; [email protected]; [email protected]
Keywords
Artificial intelligence · Automated warehousing · Ethics · Intelligent agents · Knowledge representation · Machine learning · Multi-agent systems · Optimization · Planning · Reasoning
9.1 Artificial Intelligence (AI): The Study of Intelligent Agents
According to the Merriam-Webster dictionary, intelligence is (1) “the ability to learn or understand to deal with new or trying situations” or (2) “the ability to apply knowledge to manipulate one’s environment or to think abstractly as measured by objective criteria (such as tests)” [1]. The field of artificial intelligence (AI) studies how to endow machines with these abilities. Compatible with this definition, creating AI systems is often equated with building (intelligent) agents, where an agent is an input-output system that interacts with the environment, similar to a feedback controller. It receives observations about the state of the environment from its sensors and can execute actions to change it. How to obtain highlevel observations from low-level sensor signals (e.g., with vision, gesture recognition, speech recognition, and natural language processing) and how to translate high-level actions into low-level effector signals are considered part of AI but not of core AI and thus not discussed here. The program of an agent is essentially a function that specifies a mapping from possible sequences of past observations and actions to the next action to execute. Cognitive agents use functions that not only resemble those of humans but also calculate them like humans. In other words, cognitive agents “think” like humans. In general, however, agents can use any function, and the function is evaluated according to a given objective function that evaluates the resulting agent behavior. For believable agents, the objective function measures how close 205
the agent behavior is to that of humans. In other words, believable agents behave like humans, which might include modeling human emotions. Examples are intelligent voice assistants and lifelike non-player characters in video games. For rational agents, the objective function measures how well the agent does with respect to given tasks. In other words, rational agents maximize task performance. Examples are chess programs and autonomous vacuum cleaners. AI was originally motivated mostly by creating cognitive agents since it seems reasonable to build agents by imitating how existing examples of intelligent systems in nature, namely, humans, think. AI quickly turned to noncognitive agents as well since the computing hardware of agents is different from the brains of humans. After all, it is difficult to engineer planes that fly by flapping their wings like existing examples of flying systems in nature, namely, birds, due to their different hardware. For a long time, AI then focused on rational agents, mostly autonomous rational agents without humans in the loop but also autonomous rational agents that are able to obtain input from humans, which often increases the task performance, and nonautonomous rational agents (such as decision-support systems). More recently, AI has also started to focus on believable agents as technological advances in both AI and hardware put such systems within reach. Both rational and believable agents are important for automation. Rational agents can be used to automate a variety of tasks. Believable agents offer human-like interactions with gestures and speech and understand and imitate emotions. They can, for example, be used as teachers [42, 63] and companions [22,134] as well as for elderly care [24,109,131] and entertainment purposes [133]. In this article, we focus on rational agents. A thermostat is a rational agent that tries to keep the temperature close to the desired one by turning the heating and air conditioning on and off, but AI focuses on more complex rational agents, often those that use knowledge to perform tasks that are difficult for humans.
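As a minimal illustration of the agent abstraction, here is a small Python sketch of the thermostat example as a rational agent; the class, names, and setpoint logic are our own and are only meant to show the mapping from observations to the next action.

```python
from typing import List

class ThermostatAgent:
    """A trivially rational agent: its policy maps the latest observation
    (the temperature) to an action that keeps the temperature near a setpoint."""
    def __init__(self, setpoint: float, band: float = 0.5):
        self.setpoint = setpoint
        self.band = band
        self.history: List[float] = []   # past observations (unused by this simple policy)

    def act(self, observation: float) -> str:
        self.history.append(observation)
        if observation > self.setpoint + self.band:
            return "cool"
        if observation < self.setpoint - self.band:
            return "heat"
        return "off"

agent = ThermostatAgent(setpoint=21.0)
print(agent.act(23.4))   # "cool"
```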
9.2 AI Techniques
Knowledge is important since rational agents make use of it. The initial emphasis of AI on cognitive agents explains why the study of rational agents is often structured according to the cognitive functions of humans. One typically distinguishes acquiring knowledge from observations of the world (machine learning), storing it in a structured way (knowledge representation), combining it (reasoning), and using it to determine how to behave to maximize task performance (planning). The interaction between agents is also important. We will discuss all these AI techniques separately in the context of stand-alone examples but also how to combine them. Rational agents can be built based on a variety of
techniques, often related to optimization, since they maximize task performance. Since trivial techniques are typically much too slow, AI develops techniques that exploit problem structure to result in the same or similar agent behavior but satisfy existing time constraints. Some of these techniques originated in AI but others originated much earlier in other disciplines that have also studied how to maximize performance, including operations research, economics, and engineering. AI has adopted many such techniques. The techniques used in AI are therefore very heterogeneous.
9.2.1 Optimization
Optimization is the task of assigning values to variables, possibly under some constraints, so as to minimize the value of a function of these variables. One often cannot systematically enumerate all possible assignments of values to all variables (solutions) and return one with the smallest function value because their number is too large or even infinite. In this case, one often starts with a random solution and then moves from solution to solution to discover one with a small function value (local search). Optimization techniques allow planning to maximize task performance and machine learning to find good models of the world.
Continuous Function Minimization The continuous function minimization problem is to find a global minimum of a function of several variables with continuous domains. (A global maximum can be determined by finding a global minimum of the negative function.) Since this can be difficult to do analytically, one often uses local search in form of gradient descent to find a small local minimum instead. Gradient descent first randomly assigns a value to each variable and then repeatedly tries to decrease the resulting function value by adjusting the values of the variables so that it takes a small step in the direction of the steepest downward slope (against the gradient) of the function at the point given by the current values of all variables. In other words, it first chooses a random solution and then repeatedly moves from the current solution to the best neighboring solution. Once this is no longer possible, it has reached a local minimum. It repeats the procedure for many iterations to find local minima with even smaller function values (random restart), a process which can easily be parallelized, until a time bound is reached or the smallest function value found is sufficiently small. Gradient descent can use a momentum term to increase the step size when the function value decreases substantially (to speed up the process) and decrease it otherwise. Function minimization under constraints has been studied in AI in the context of constraint programming but has also been studied in operations research. A continuous function
minimization problem under constraints can be formulated as follows:

minimize_{x1, ..., xn ∈ R}   f0(x1, ..., xn)
subject to   fi(x1, ..., xn) ≤ 0,   for all i ∈ {1, ..., m}.        (9.1)
Here, x1 , . . . , xn ∈ R are real-valued variables, f0 is the function to be minimized (objective function), and the remaining functions fi are involved in constraints (constrained functions). (An equality constraint fi (x1 , . . . , xn ) = 0 can be expressed as fi (x1 , . . . , xn ) ≤ 0 and −fi (x1 , . . . , xn ) ≤ 0. Similarly, a range constraint xj ∈ [a, b] can be expressed as −xj + a ≤ 0 and xj − b ≤ 0.) Lagrange multipliers can be used for function minimization if all constraints are equality constraints. Other special cases include linear programming (LP), where the objective function and constrained functions are linear in the variables, and quadratic programming, where the objective function is quadratic and the constrained functions are linear [12]. “Programming” is a synonym for optimization in this context.
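The following is a small Python sketch (our own, not from the chapter) of gradient descent with a momentum term and random restarts, as described above, for an unconstrained function of several variables; the quadratic test function at the end is only an illustrative assumption.

```python
import numpy as np

def gradient_descent(f, grad, dim, restarts=10, steps=500,
                     lr=0.05, momentum=0.9, seed=0):
    """Minimize f by gradient descent with momentum, restarting from
    several random points and keeping the best local minimum found."""
    rng = np.random.default_rng(seed)
    best_x, best_val = None, np.inf
    for _ in range(restarts):
        x = rng.uniform(-5, 5, size=dim)   # random initial solution
        v = np.zeros(dim)                  # momentum (velocity) term
        for _ in range(steps):
            v = momentum * v - lr * grad(x)  # small step against the gradient
            x = x + v
        if f(x) < best_val:
            best_x, best_val = x.copy(), f(x)
    return best_x, best_val

# Example: minimize f(x, y) = (x - 1)^2 + 4 * (y + 2)^2, whose minimum is at (1, -2).
f = lambda z: (z[0] - 1) ** 2 + 4 * (z[1] + 2) ** 2
grad = lambda z: np.array([2 * (z[0] - 1), 8 * (z[1] + 2)])
x_min, val = gradient_descent(f, grad, dim=2)
```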
Discrete Function Minimization The discrete function minimization problem is to find the global minimum of a function of several variables with discrete domains. The local search analog to gradient descent for discrete functions is hill climbing. One has to decide what the neighbors of a solution should be. Their number should be small compared to the number of solutions. Like gradient descent, hill climbing first chooses a random solution and then repeatedly moves from the current solution to the best neighboring solution. Unlike gradient descent, hill climbing can determine the best neighboring solution simply by enumerating all neighboring solutions and calculating their function values. For example, the constraint satisfaction problem (CSP) consists of variables, their domains, and constraints among the variables that rule out certain assignments of values from their domains to them. The problem is to find a solution that satisfies all constraints. (The constraint optimization problem is a generalization where every constraint is associated with a nonnegative cost and the problem is to find a solution that minimizes the sum of the costs of the unsatisfied constraints.) For example, the NP-hard map coloring problem can be modeled as a CSP where the countries are the variables, the set of available colors is the domain of each variable, and the constraints specify that two countries that share a border cannot be colored identically. A solution to a map coloring problem is an assignment of a color to each country. To solve a map coloring problem systematically, one can perform a depth-first search by repeatedly first choosing a country and then a color for it. Different strategies exist
for how to choose countries, how to choose their colors, and how to determine when the procedure should undo the last assignment of a color to a country. It should definitely undo the last assignment when two countries that share a border have been colored identically. However, the number of possible map colorings is exponential in the number of countries, which makes such a (systematic) search too slow for a large map. To solve a map coloring problem with hill climbing, one can define the neighbors of a solution to be the solutions that differ in the assignment of a color to exactly one country. The function to be minimized is the number of errors of a solution (error or loss function), that is, the number of pairs of countries that share a border but are colored identically. Solving the map coloring problem then corresponds to finding a global minimum of the error function with zero errors, that is, a solution that obeys all coloring constraints. Hill climbing with random restarts often finds a map coloring much faster than search (although this is not guaranteed) but cannot detect that no map coloring exists. Many other NP-hard problems can be solved with hill climbing as well, including the SATisfiability (SAT) problem, which is the problem of finding an interpretation that makes a propositional sentence true, or the traveling salesperson problem (TSP), which is the problem of finding a shortest route that visits each city of a given set of cities exactly once and returns to the start city. To avoid getting stuck in local minima, local search cannot only make moves that decrease the function value (exploiting moves) but also has to make moves that increase the function value (exploring moves). Exploring moves help local search to reach new solutions. We now discuss several versions of local search that decide when to make an exploring move, when to make an exploiting move, and which moves to choose (exploration-exploitation problem). Tabu Search To encourage exploration by avoiding repeated visits of the same solutions, local search can maintain a tabu list of previous solutions that it will not move to for a number of steps. Tabu search first chooses a random solution and then repeatedly chooses the best neighboring solution that is not in the tabu list and updates the tabu list. Simulated Annealing Simulated annealing models the process of heating a metal and then slowly cooling it down to decrease its defects, which minimizes the energy. It first chooses a random solution and then repeatedly chooses a random neighboring solution. If moving to this neighboring solution decreases the function value, it makes the move (exploitation). Otherwise, it makes the move with a probability that is the larger, the less the function value increases and the less time has passed since
function optimization started (exploration). Thus, simulated annealing makes fewer and fewer exploring moves over time. Genetic Algorithms To allow for the reuse of pieces of solutions in a new context, local search can maintain a set (population) of solutions (individuals or, synonymously, phenotypes) that are represented with strings of a fixed length (chromosomes or, synonymously, genotypes), rather than only one solution, and then create new solutions from pieces of existing ones. A genetic algorithm (GA), similar to evolution in biology, first determines the function values of all n solutions in the population and then creates a new population by synthesizing two new solutions n/2 times, a process which can easily be parallelized, see Fig. 9.1: (1) It chooses two solutions from the population as parent solutions, each randomly with a probability that is inversely proportional to its function value, and generates two offspring solutions for them. Thus, solutions with small function values (fitter solutions) are more likely to reproduce. It splits both parent solutions into two parts by cutting their strings at the same random location. It then reassembles the parts into two new offspring solutions, one of which is the concatenation of the left part of the first parent solution and the right part of the second parent solution and the other one of which is the concatenation of the left part of the second parent solution and the right part of the first parent solution (crossover). This recombination step ensures that the offspring solutions are genetic mixtures of their parent solutions and can thus be fitter than them (exploitation) or less fit than them (exploration). (2) It changes random characters in the strings of the two offspring solutions. This mutation step introduces novelty into the offspring solutions and can thus make them fitter or less fit. Mutation is necessary to create diversity that avoids convergence to a local minimum. For example, if all solutions of a population are identical, then recombination creates only offspring solutions that are identical to their parent solutions. Once all (new) offspring solutions have been generated, they form the next population. This procedure is repeated for many steps (generations) until a time bound is reached, the smallest function value of a solution in the new population is sufficiently small, or this function value seems to have converged. Then, the solution in the most recent population with the smallest function value is returned. A good representation of a solution as string is important to ensure that recombination and mutation likely create strings that are solutions and have a chance to increase the fitness. In general, strings are discarded and replaced with new strings if they are not solutions. GAs can move the best solutions from one population to the next one to avoid losing them. To solve a map coloring problem for m countries with a given number of colors, the function to be minimized is the number of pairs of countries that share a border but are colored identically. GAs can represent a solution with
a string of m characters, where the ith character represents the assignment of a color to country i. The strings of the initial population are created by assigning colors to countries randomly. Recombination chooses a subset of countries and creates two offspring solutions from two parent solutions, one of which consists of the colors assigned by the first parent solution to the countries in the subset and the colors assigned by the second parent solution to the other countries and the other one of which consists of the colors assigned by the second parent solution to the countries in the subset and the colors assigned by the first parent solution to the other countries. Mutation iterates through the countries and changes the color of each country to a random one with a small probability. Thus, recombination and mutation create strings that are solutions. Actual applications of GAs to map coloring use more complex recombination strategies. Genetic programming, a form of GAs, uses tree-like representations instead of strings to evolve computer programs that solve given tasks. Other Discrete Function Minimization Techniques A discrete function minimization problem under constraints can be formulated like its continuous counterpart (9.1), except that all or some of the variables have discrete domains. Special cases are integer linear programming (ILP) and mixed integer linear programming (MILP), where the objective function and constrained functions are linear in the variables, just like for LP, but all (for ILPs) or some (for MILPs) variables have integer rather than real values. An LP relaxation, where all integer-valued variables are replaced with real-valued variables, the resulting LP is solved, and the values of integer-valued variables in the resulting solution are rounded, is often used to find reasonable solutions for ILPs or MILPs quickly. However, since many discrete function minimization problems under constraints can be expressed as ILPs or MILPs, a lot of effort has been devoted to developing more effective but still reasonably efficient solvers for them.
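Returning to the map coloring example described earlier, here is a minimal Python sketch (our own illustration) of hill climbing with random restarts, where the error function counts pairs of bordering countries with the same color and the neighbors of a solution differ in the color of exactly one country:

```python
import random

def color_map(borders, n_countries, colors, restarts=50, steps=2000, seed=0):
    """Hill climbing with random restarts: minimize the number of borders
    whose two countries share a color. borders is a list of (i, j) pairs."""
    rng = random.Random(seed)

    def errors(assign):
        return sum(1 for i, j in borders if assign[i] == assign[j])

    for _ in range(restarts):
        assign = [rng.choice(colors) for _ in range(n_countries)]   # random solution
        for _ in range(steps):
            if errors(assign) == 0:
                return assign                      # all coloring constraints satisfied
            # best neighboring solution: change the color of exactly one country
            best = min(((errors(assign[:i] + [c] + assign[i + 1:]), i, c)
                        for i in range(n_countries) for c in colors),
                       key=lambda t: t[0])
            if best[0] >= errors(assign):
                break                              # local minimum reached, restart
            assign[best[1]] = best[2]
    return None   # no coloring found (this is not a proof that none exists)

# Example: Australia's mainland states/territories with three colors.
borders = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3), (2, 4), (3, 4), (2, 5), (4, 5)]
print(color_map(borders, n_countries=7, colors=["red", "green", "blue"]))
```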
Example Applications LPs have been used for controlling oil refinement processes [57]. CSPs have been used for designing complex systems like hybrid passenger ferries [127], mechanical systems like clusters of gear wheels [152], and software services like plan recognition [104]. TSPs have been used for overhauling gas turbine engines, wiring computers, and plotting masks for printed circuit boards [85]. Simulated annealing has been used for global wiring [135] and optimizing designs of integrated circuits [38]. GAs have been used for tuning operating parameters of manufacturing processes [54,121,156]. MILPs have been used for scheduling chemical processing systems [34]. Other constraint programming techniques have been used for optimizing the layouts of chemical plants [8].
Fig. 9.1 Illustration of crossover and mutation
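To complement Fig. 9.1, the following is a small Python sketch (our own, with illustrative parameter values) of the crossover and mutation steps of a genetic algorithm on fixed-length strings:

```python
import random

def crossover(parent1, parent2, rng):
    """Single-point crossover: cut both parent strings at the same random
    location and swap the right-hand parts (cf. Fig. 9.1)."""
    cut = rng.randrange(1, len(parent1))
    return parent1[:cut] + parent2[cut:], parent2[:cut] + parent1[cut:]

def mutate(individual, alphabet, rng, p=0.05):
    """Change each character to a random one with a small probability."""
    return [c if rng.random() > p else rng.choice(alphabet) for c in individual]

rng = random.Random(0)
p1, p2 = [1, 2, 2, 3, 4, 3], [4, 3, 4, 1, 1, 2]   # two parent solutions
o1, o2 = crossover(p1, p2, rng)                    # recombination
o1, o2 = mutate(o1, [1, 2, 3, 4], rng), mutate(o2, [1, 2, 3, 4], rng)
```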
9.2.2 Knowledge Representation and Reasoning
Knowledge representation is the task of storing knowledge in a structured way. An ontology describes the world in terms of its objects and their properties and relationships. Agents need to represent such knowledge about their environment and themselves in a knowledge base with a knowledgerepresentation language that is unambiguous and expressive. They also need to reason with the knowledge. Reasoning is the task of combining existing knowledge, that is, inferring new facts from given ones. For example, knowledge representation is about representing the rules of a coin flip game “heads, I win; tails, you lose” as well as background knowledge about the world (e.g., that “heads” and “tails” always have opposite truth values and “I win” and “you lose” always have identical truth values), while reasoning is about inferring that “I win” is guaranteed to hold when playing the game. Facts, like “heads; I win”, are expressed with sentences. The syntax of a knowledge-representation language defines which sentences are well-formed, while the semantics of well-formed sentences define their meanings. For example, arithmetic formulas are sentences. “x+2=5” is a well-formed arithmetic formula, and so it makes sense to ask for its meaning, that is, when it is true.
Propositional Logic We first discuss propositional logic as a knowledgerepresentation language. Knowledge Representation with Propositional Logic Sentences in propositional logic represent statements that are either true or false (propositions). The syntax specifies how sentences are formed with the truth values t(rue) and f(alse), propositional symbols (here: in capital letters), and operators, such as ¬, ∧, ∨, ⇒, and ⇔, that roughly represent “not,” “and,” “or” (in the inclusive sense of “and/or”), “if . . . then . . .,” and “if and only if” in English. For example, “heads, I win” can be represented with “H ⇒ I,” where the propositional symbol H represents “coin flip results in
heads” and the propositional symbol I represents “I win.” An interpretation assigns each propositional symbol a truth value. The semantics specifies for which interpretations a well-formed sentence is true. Reasoning with Propositional Logic We want to be able to infer that, whenever the rules of our game are followed, I will win even though we might not know the complete state of the world, such as the outcome of the coin flip. More formally: Whenever an interpretation makes the knowledge base “heads, I win; tails, you lose” (plus the background knowledge) true, then it also makes the sentence “I win” true. In this case, we say that the knowledge base entails the sentence. Entailment can be checked by enumerating all interpretations and checking that the definition of entailment holds for all of them. In our example, it does. However, this procedure is very slow since it needs to check 2100 interpretations for knowledge bases that contain 100 propositional symbols. Instead, AI systems use inference procedures, like proof by contradiction using the inference rule resolution, that check entailment by manipulating the representations of the knowledge base and the sentence, similar to how we add two numbers by manipulating their representations as sequences of digits.
First-Order Logic First-order logic (FOL) is often more adequate to represent knowledge than propositional logic since propositional logic is often not sufficiently expressive. Sentences in FOL still represent propositions, but FOL is a superset of propositional logic that uses names (here: in all caps) to refer to objects, predicates (here: in mixed case) to specify properties of objects or relationships among them, functions to specify mappings from objects to an object, and quantification to express both at least one object makes a given sentence true or that all objects make a given sentence true. For example, the knowledge base “Poodle(FIDO) ∧ ∀ x (Poodle(x) ⇒ Dog(x))” expresses that Fido is a poodle and all poodles are dogs. This knowledge base entails the sentence “Dog(FIDO).” An interpretation now corresponds to functions that map names
to objects, predicates to properties or relationships, and functions to mappings from objects to an object. A finite sentence in FOL, different from one in propositional logic, can have an infinite number of interpretations (because one can specify infinitely many objects, such as the nonnegative integers, e.g., by representing zero and a function that calculates the successor of any given nonnegative integer), so we can no longer check entailment by enumerating all interpretations. In general, there is a fundamental limit for FOL since entailment is only semi-decidable for FOL, that is, there are inference procedures (such as proof by contradiction using resolution) that can always confirm that a knowledge base entails a sentence, but it is impossible to always confirm that a knowledge base does not entail a sentence (e.g., because the inference procedure can run forever in this case).
Other Knowledge Representation Languages
FOL requires sentences to be either true or false. However, the truth of a sentence might need to be quantified, for example, in terms of how true it is (as in “the machine is heavy”), resulting in fuzzy logic, or in terms of the likelihood that it is true (as in “the coin flip will result in heads”), resulting in probabilistic reasoning. For example, fuzzy logic allows one to represent vague (but certain) propositions, based on fuzzy set theory where membership in a set (such as the set of heavy machines) is indicated by a degree of membership between zero (the element is not in the set) and one (the element is in the set) rather than the Boolean values false and true. Figure 9.2 shows membership functions of the fuzzy sets for “light,” “medium,” and “heavy” machines. Fuzzy control systems are control systems based on fuzzy logic. In general, FOL has been extended or modified in many directions, for example, to be able to express facts about time (as in “a traffic light, once red, always becomes green eventually”) and to allow for the retraction of entailment conclusions when more facts are added to a knowledge base (nonmonotonic reasoning). For example, if someone is told that “someone saw a bird yesterday on the roof,” then they will typically assume that the bird could fly (default reasoning).
Fig. 9.2 Membership functions of the three fuzzy sets “light,” “medium,” and “heavy” (membership degree between 0.0 and 1.0 plotted against machine weight in kg)
However, when they are then also told that “the bird had a broken wing,” then they know that it could not fly. Also, proof by contradiction using resolution can be slow for FOL, and its steps can be difficult to explain to users, resulting in hard-to-understand explanations for why a knowledge base entails a sentence. Knowledge-representation languages with less powerful but faster and easier-to-understand reasoning techniques are thus also used, such as rule-based expert systems and semantic networks.
Rule-Based Expert Systems
Rule-based expert systems partition the knowledge base into an “if-then” rule memory and a fact memory (or, synonymously, working memory) since reasoning leaves the rule memory unchanged but adds facts to the fact memory. For example, the rule memory contains “if Poodle(x) then Dog(x)” for our earlier example, and the initial fact memory contains “Poodle(FIDO).” The inference rule that utilizes that “P(A) ∧ ∀ x (P(x) ⇒ Q(x))” entails “Q(A)” (modus ponens) is used for reasoning. In forward chaining, it is used to determine that the knowledge base entails the query “Dog(FIDO),” which is then added to the fact memory. Forward chaining can be used for configuration planning in design and to find all possible diagnoses. In backward chaining, one needs to be given the sentence that one wants to show as being entailed by the knowledge base (query), such as “Dog(FIDO).” The rule “if Poodle(x) then Dog(x)” shows that the knowledge base entails “Dog(FIDO)” if it entails “Poodle(FIDO),” which it does since “Poodle(FIDO)” is in the fact memory. Backward chaining can be used to confirm a given hypothesis in diagnosis. Rule-based expert systems cannot always confirm that a knowledge base entails a sentence. For example, they cannot confirm that the knowledge base consisting of the rule memory “Rule 1: if P then R; Rule 2: if ¬P then R” and an empty fact memory entails “R,” even though it does. Rule-based expert systems have the advantage of representing knowledge in a modular way and isolating the decision of which rules to apply from the knowledge itself, making modifications of the knowledge base easy. They have also been extended to logic programming languages, such as Prolog, by allowing commands in the “then” part of rules.
Semantic Networks
Semantic networks, a concept from psychology, are directed graphs that represent concepts visually with nodes and their properties or relationships with directed edges. For example, Fig. 9.3 shows a semantic network that represents the knowledge that Fido is a poodle, all poodles are dogs, and all dogs can bark. Semantic networks can use specialized and fast reasoning procedures of limited reasoning capability that follow edges to reason about the properties and relationships of concepts. This way, they can determine inherited properties. For example, to find out whether Fido can bark, one can
Fig. 9.3 Semantic network: node Fido has an “element of” edge to node Poodle, Poodle has a “subset of” edge to Dog, and Dog has an “able to” edge to Bark
start at node “Fido” in the semantic network from Fig. 9.3 and see whether one can reach a node with an outgoing edge that is labeled with “able to” and points to node “Bark” by repeatedly following edges labeled with “element of” and “subset of.” They can also disambiguate the meaning of words in sentences. For example, to find out that the word “bank” in the sentence “the bank keeps my money safe” likely refers to the financial institution rather than the land alongside a body of water, one can determine that node “Money” is fewer hops away from the node of the first meaning of “bank” than from the node of the second meaning in a semantic network that expresses knowledge about the world. Semantic networks can be more expressive than FOL by allowing for default reasoning, but the semantics of edges and reasoning procedures need to be very carefully defined and some operators (such as “not” and “or”) are not easy to represent and reason with.
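The inheritance lookup described above amounts to a simple graph traversal. The following minimal Python sketch (the dictionary encoding of the network in Fig. 9.3 is an illustrative assumption) checks whether Fido can bark:

from collections import deque

# Semantic network from Fig. 9.3 as labeled directed edges (illustrative encoding).
edges = {
    "Fido":   [("element of", "Poodle")],
    "Poodle": [("subset of", "Dog")],
    "Dog":    [("able to", "Bark")],
}

def able_to(start, ability):
    # Follow "element of" and "subset of" edges from the start node and check whether
    # any reached node has an outgoing "able to" edge that points to the ability node.
    queue, visited = deque([start]), {start}
    while queue:
        node = queue.popleft()
        for label, target in edges.get(node, []):
            if label == "able to" and target == ability:
                return True
            if label in ("element of", "subset of") and target not in visited:
                visited.add(target)
                queue.append(target)
    return False

print(able_to("Fido", "Bark"))  # prints True: Fido inherits the ability to bark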
Bayesian Networks
Probabilistic reasoning is an especially important extension of FOL because the world is often nondeterministic or the knowledge bases of agents are incomplete, for example, because the agents do not have complete knowledge of the world or omit known facts to keep the sizes of their knowledge bases manageable. For instance, an “if-then” rule “if Jaundice(x) then YellowEyes(x)” states that someone with jaundice always has yellow eyes (in other words, that the conditional probability P(YellowEyes(x)=t|Jaundice(x)=t), which is the probability that YellowEyes(x) is true given that Jaundice(x) is true, is 1), which is not true in reality.
Knowledge Representation with Bayesian Networks
For a medical diagnosis system, each symptom or disease is a random variable, and its presence or absence is indicated by the truth value of that random variable. An interpretation assigns a truth value to each random variable. A joint probability table maps each interpretation to the probability of the corresponding scenario (joint probability). From the joint probability table, one can calculate the conditional probability that a sick person has a certain disease given the presence or absence of some symptoms. However, there are too many interpretations to be able to elicit all joint probabilities from a doctor, and many of them are too close to zero to result in good estimates. Instead, one can represent the joint probability table with a directed acyclic graph that represents random variables visually with nodes and their conditional dependences with directed edges (Bayesian network). Every node is assigned a conditional probability table that maps each combination of values of the predecessor nodes of the node to the conditional probability that the node takes on a certain value if the predecessor nodes take on the given values. Figure 9.4a shows an example Bayesian network with three random variables that take on the values true or false (Boolean random variables): C(old), H(igh Temperature), and R(unning Nose). The joint probabilities are calculated as products of conditional probabilities, one from each conditional probability table. Here, P(C=c ∧ H=h ∧ R=r) = P(C=c) P(H=h|C=c) P(R=r|C=c), where the notation means that the equation holds no matter whether C, H, and R are true or false. Thus, the joint probability table in Fig. 9.4b results. The joint probabilities are products of several conditional probabilities. Thus, the joint probabilities are often close to zero, but the conditional probabilities are not necessarily close to zero. Furthermore, the conditional probabilities directly correspond to medical knowledge (such as the probability that a person with a cold has a high temperature), which makes them often easier to estimate. There are 8 possible interpretations, and one thus needs to specify 8-1 = 7 joint probabilities for the joint probability table. The eighth joint probability does not need to be specified since all joint probabilities sum to one. The reason why one only needs to specify five conditional probabilities across all conditional probability tables of the Bayesian network is that the Bayesian network makes conditional independences visible in its graph structure rather than only in its (conditional) probabilities. Two random variables X and Y are independent if and only if P(X=x ∧ Y=y) = P(X=x) P(Y=y). Thus, one needs only two probabilities, namely P(X=t) and P(Y=t), instead of three probabilities to specify the joint probability table of two independent Boolean random variables. However, making independence assumptions is often too strong. For example, if diseases and symptoms were independent, then the probability that a sick person has a certain disease would not depend on the presence or absence
Fig. 9.4 Bayesian network, corresponding joint probability table, and training examples. (a) Bayesian network with edges C→H and C→R and conditional probability tables P(C=t)=1/4; P(H=t|C=t)=1, P(H=t|C=f)=2/3; P(R=t|C=t)=1, P(R=t|C=f)=1/3. (b) Joint probability table:
P(H=t ∧ R=t ∧ C=t) = 1 · 1 · 1/4 = 1/4
P(H=t ∧ R=t ∧ C=f) = 2/3 · 1/3 · 3/4 = 1/6
P(H=t ∧ R=f ∧ C=t) = 1 · 0 · 1/4 = 0
P(H=t ∧ R=f ∧ C=f) = 2/3 · 2/3 · 3/4 = 1/3
P(H=f ∧ R=t ∧ C=t) = 0 · 1 · 1/4 = 0
P(H=f ∧ R=t ∧ C=f) = 1/3 · 1/3 · 3/4 = 1/12
P(H=f ∧ R=f ∧ C=t) = 0 · 0 · 1/4 = 0
P(H=f ∧ R=f ∧ C=f) = 1/3 · 2/3 · 3/4 = 1/6
(c) Training examples (H, R, C): (t, t, t), (t, f, f), (t, f, f), (f, t, f)
of symptoms. Diseases could thus be diagnosed without seeing the sick person. Making conditional independence assumptions is often more realistic. Two random variables X and Y are conditionally independent given a third random variable Z if and only if P(X=x ∧ Y=y|Z=z) = P(X=x|Z=z) P(Y=y|Z=z), in other words, X and Y are independent if one knows the value of Z (no matter what that value is). The graph structure of a Bayesian network implies conditional independences that hold no matter what the values of the conditional probabilities in its conditional probability tables are. Here, P(H=h ∧ R=r|C=c) = P(H=h|C=c) P(R=r|C=c). While a Bayesian network corresponds to exactly one joint probability table (meaning that both of them specify the same joint probabilities), a joint probability table can correspond to many Bayesian networks (viz., one for each factorization of the joint probabilities, such as P(C=c ∧ H=h ∧ R=r) = P(H=h | R=r ∧ C=c) P(R=r | C=c) P(C=c) or P(C=c ∧ H=h ∧ R=r) = P(C=c | H=h ∧ R=r) P(H=h | R=r) P(R=r)) – and these Bayesian networks can differ in their number of conditional probabilities after their simplification. A Bayesian network with few conditional probabilities can typically be found by making most of its edges go from causes to effects, for example, from diseases to symptoms (as in our example). One then typically needs to specify many fewer conditional probabilities than joint probabilities, which also speeds up reasoning.
Reasoning with Bayesian Networks
Reasoning with Bayesian networks means calculating probabilities, often conditional probabilities that one random variable takes on a given value if the values of some other random variables are known, for example, the probability P(C=t|R=f) that a sick person with no running nose has a cold or the probability P(C=t|H=f ∧ R=f) that a sick person with normal temperature and no running nose has a cold. In general, reasoning is NP-hard but can often be scaled to large Bayesian networks by exploiting their graph structure in form of their conditional independence assumptions.
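For the small network of Fig. 9.4a, reasoning by enumeration can be sketched directly: sum the joint probabilities that are consistent with the evidence and normalize. The following minimal Python sketch (the dictionary encoding of the conditional probability tables is an illustrative assumption) computes such conditional probabilities:

from fractions import Fraction as F

# Conditional probability tables of the Bayesian network in Fig. 9.4a.
P_C = {True: F(1, 4), False: F(3, 4)}
P_H = {True: {True: F(1, 1), False: F(0, 1)},   # P(H=h | C=t) for h = t, f
       False: {True: F(2, 3), False: F(1, 3)}}  # P(H=h | C=f) for h = t, f
P_R = {True: {True: F(1, 1), False: F(0, 1)},   # P(R=r | C=t) for r = t, f
       False: {True: F(1, 3), False: F(2, 3)}}  # P(R=r | C=f) for r = t, f

def joint(c, h, r):
    # Joint probability as a product of conditional probabilities (Fig. 9.4b).
    return P_C[c] * P_H[c][h] * P_R[c][r]

def prob_cold_given(h, r):
    # P(C=t | H=h, R=r) by enumerating the joint distribution and normalizing.
    return joint(True, h, r) / (joint(True, h, r) + joint(False, h, r))

print(prob_cold_given(False, False))  # P(C=t | H=f, R=f) = 0
print(prob_cold_given(True, True))    # P(C=t | H=t, R=t) = 3/5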
Example Applications
FOL has been used for verifying software [115] and proving theorems [33]. Fuzzy logic has been used for controlling machines, including maintaining a constant feed rate for weight belt feeders [153], increasing the machining efficiency for rough milling operations [44], and improving vehicle control of anti-lock braking systems [87, 154]. Rule-based expert systems have been used for tele-monitoring heart failures [116], analyzing satellite images [120], and developing load-shedding schemes for industrial electrical plants [23]. Semantic networks have been used for understanding consumer judgments [39, 43] and capturing knowledge of production scheduling [112]. Bayesian networks have been used in manufacturing for diagnosing and predicting faults [18, 139], calculating failure rates for maintenance planning [56], and predicting the energy consumption of manufacturing processes [94].
9.2.3 Planning
Planning is the optimization task of using knowledge to determine how to behave to maximize task performance. It can use the generic optimization techniques discussed earlier but often solves the specific optimization problem of finding an (ideally) shortest or cost-minimal sequence of actions that allows an agent to transition from a given start state to a given goal state and thus uses specialized optimization techniques to exploit the structure of this problem well and run fast. A state characterizes the information that an agent needs to have about the past and present to choose actions in the future that maximize its task performance. For example, a soda machine does not need to remember in which order a customer has inserted which coins. It only needs to remember how much money the customer has already inserted. An important part of planning is scheduling, which is the assignment of resources to plans (including when actions should be executed). We first discuss planning for an agent in the absence of
other agents (single-agent systems) and then planning in the presence of other agents (multi-agent systems).
Deterministic Planning in Single-Agent Systems
Deterministic planning problems assume that the execution of the same action in the same state always results in the same successor state, and the actions thus have deterministic effects. The input of a deterministic planner is the start state of an agent (which is typically its current state), the (desired) goal state, and a set of actions that the agent can execute to transition from one state to another one. Its output is a plan, which is an (ideally) shortest or, if costs are associated with the actions, cost-minimal sequence of actions that the agent can execute to transition from the start state to the goal state. Since many planning problems have more than 10^100 states [108], one often trades off solution quality for runtime and is satisfied with a short sequence of actions. Consider, for example, the eight puzzle, which is a toy with eight numbered square tiles in a square frame. A state specifies which tiles are in which locations (configuration of the eight puzzle). The current state can be changed by repeatedly moving a tile that is adjacent to the location with the missing tile to that location. Figure 9.5 shows a start state in the upper-left corner and a goal state in the lower-left corner. A shortest plan is to move the 7 tile to the left with MoveTile(T7,L8,L7) and then the 8 tile to the left with MoveTile(T8,L9,L8).
Specifying Deterministic Planning Instances with STRIPS
Deterministic planning is essentially the graph-search problem of finding a short path from the start state to the goal state on the directed graph (state space) that represents states with
Fig. 9.5 Eight puzzle instance (the start state and the goal state, shown as tile configurations; the corresponding STRIPS specification is given below)
vertices and actions with edges. However, the state space of the eight puzzle has 9!/2 = 181,440 states (since, to populate the eight puzzle, the 1 tile can be placed in any of the 9 locations, the 2 tile can be placed in any of the remaining 8 locations, and so on, but only half of the states are reachable from any given state), which makes the state space tedious to specify explicitly by enumeration. Even larger state spaces, such as those with more than 10^100 states, cannot be specified explicitly by enumeration since both the necessary time and amount of memory are too large. Therefore, one specifies state spaces implicitly with a formal specification language for planning instances, such as STRIPS. STRIPS was first used for the Stanford Research Institute Problem Solver, an early planner, which explains its name. States are specified with a simplified version of FOL as conjunctions (often written as sets) of predicates whose arguments are names (grounded predicates). Actions are specified as parameterized action schemata that define when an action can be executed in a state (viz., when one can replace each parameter with a name so that all predicates in its precondition set are part of the state) and how to calculate the resulting successor state (viz., by deleting all predicates in its delete effect set from the state and then adding all predicates in its add effect set). Figure 9.5 shows the input of the planner for our example. The Planning Domain Definition Language (PDDL) [37] is a more expressive version of STRIPS that includes typing, negative preconditions, conditional add and delete effects, and quantification in preconditions, add effects, and delete effects. Its more advanced version PDDL2.1 [35] further supports optimization metrics, durative (rather than instantaneous) actions, and numeric preconditions, add effects, and delete effects. PDDL3 [36] further supports constraints over possible actions in the plan and the states reached by them
Action MoveTile(x,y,z) - "Move tile x from location y to location z"
Precondition set = {At(x,y), MissingTile(z), Adjacent(y,z)}
Add effect set = {At(x,z), MissingTile(y)}
Delete effect set = {At(x,y), MissingTile(z)}
Start state = {At(T1,L1), At(T2,L2), At(T3,L3), At(T4,L4), At(T5,L5), At(T6,L6), At(T7,L8), At(T8,L9), MissingTile(L7), Adjacent(L1,L2), Adjacent(L2,L3), Adjacent(L4,L5), Adjacent(L5,L6), Adjacent(L7,L8), Adjacent(L8,L9), Adjacent(L1,L4), Adjacent(L4,L7), Adjacent(L2,L5), Adjacent(L5,L8), Adjacent(L3,L6), Adjacent(L6,L9), Adjacent(L2,L1), Adjacent(L3,L2), Adjacent(L5,L4), Adjacent(L6,L5), Adjacent(L8,L7), Adjacent(L9,L8), Adjacent(L4,L1), Adjacent(L7,L4), Adjacent(L5,L2), Adjacent(L8,L5), Adjacent(L6,L3), Adjacent(L9,L6)}
Goal state = {At(T1,L1), At(T2,L2), At(T3,L3), At(T4,L4), At(T5,L5), At(T6,L6), At(T7,L7), At(T8,L8), MissingTile(L9)}
(trajectory constraints) as well as soft trajectory constraints, which are desirable but do not necessarily have to be satisfied (preferences).
Solving Deterministic Planning Instances
Deterministic planning instances can be solved by translating them into different optimization instances, such as (M)ILP instances [11, 101] or SAT instances [61, 106]. Deterministic planning instances can also be solved with search techniques that find short paths in the given state space. These search techniques build a search tree that represents states with nodes and actions (that the agent can execute to transition from one state to another) with edges. Initially, the search tree contains only the root node labeled with the start state. In each iteration, the search technique selects an unexpanded leaf node of the search tree for expansion. If no such leaf node exists, then it terminates and returns that no path exists. Otherwise, if the state of the selected node is the goal state, then it terminates and returns the action sequence of the unique path in the search tree from the root node to the selected node. Otherwise, it expands the selected node as follows: For each action that can be executed in the state of the selected node, it creates an edge that connects the selected node with a new node labeled with the successor state reached when the agent executes the action in the state of the selected node. Finally, it repeats the procedure. Different search techniques differ in the information that they use for selecting an unexpanded fringe node. All of the following search techniques find a path from the start state to the goal state or, if none exists, report this fact (they are complete) when used on state spaces with a finite number of states and actions. Figure 9.6 shows a path-finding instance, and Fig. 9.7 shows the resulting search trees.
Uninformed (or, synonymously, blind) search techniques use only information from the search tree for selecting an unexpanded fringe node. The g-value of a node n is the sum of the edge costs from the root node to node n. Depth-first search always selects an unexpanded fringe node with the largest g-value under the assumption that all edge costs are one (no matter what the actual edge costs are), that is, that the g-value of a node n is the number of edges from the root node to node n. To be complete, it has to prune nodes if one of their
Fig. 9.6 A path-finding instance: a weighted graph with start state A, goal state E, and edges A–B (cost 8), A–C (cost 5), B–D (cost 8), B–E (cost 16), C–D (cost 5), and D–E (cost 10)
ancestors is labeled with the same state, which avoids paths with cycles. It is not guaranteed to find shortest paths (even if all actual edge costs are one) but can be implemented with a stack, resulting in a memory consumption that is linear in the search depth. Uniform-cost search always selects an unexpanded fringe node with the smallest g-value (using the actual edge costs). It is guaranteed to find shortest paths and can be implemented with a priority queue. Breadth-first search always selects an unexpanded fringe node with the smallest g-value under the assumption that all edge costs are one (no matter what the actual edge costs are). It is a special case of uniform-cost search since uniform-cost search behaves like breadth-first search if all edge costs are one. It is guaranteed to find shortest paths only if all actual edge costs are one and can be implemented with a first-in first-out queue. Both uniform-cost search and breadth-first search can result in a memory consumption that is exponential in the search depth. Iterative deepening implements a breadth-first search with a series of depth-first searches of increasing search depths. It is guaranteed to find shortest paths in those situations when breadth-first search finds them, can be implemented with a stack, and results in a memory consumption that is linear in the search depth. But it has a runtime overhead compared to breadth-first search. More sophisticated linear-memory search techniques also exist, including for the case in which not all edge costs are one.
Informed (or, synonymously, heuristic) search techniques use additional problem-specific information for selecting an unexpanded fringe node. The h-value of a node n is an estimate of the goal distance of the state of node n, which is the smallest sum of the edge costs from the state of node n to the goal state. An h-value is admissible if and only if it is not an overestimate of the corresponding goal distance. The f-value of a node is the sum of its g- and h-values. A* always selects an unexpanded fringe node with the smallest f-value, that is, the smallest estimated sum of the edge costs from the start state via the state of the node to the goal state. Uniform-cost search is a special case of A* since A* behaves like uniform-cost search if all h-values of A* are zero. A* is guaranteed to find shortest paths (if all h-values are admissible) and can be implemented with a priority queue. There exist linear-memory search variants of A*. The h-values being admissible guarantees that the f-values of all nodes are not overestimates, a principle known in AI as “optimism in the face of uncertainty” (or missing knowledge). This way, when A* expands a node, the state of the node is either on a shortest path from the start state to the goal state (which is good) or not (which allows A* to discover the fact that the state is not on a shortest path). On the other hand, if some h-values are not admissible, then A* might not expand all nodes on a shortest path (because their f-values are too large) and is thus not guaranteed to find shortest paths. An admissible h-value of node n is typically found as the
Fig. 9.7 Search trees for depth-first search, breadth-first search, uniform-cost search, and A* search for the path-finding instance from Fig. 9.6. The admissible h-value of a node for the A* search is the goal distance of its state. The numbers indicate the order of node expansions, and the found paths are shown in bold. Not used here is the common optimization to prune nodes once a node labeled with the same state has been expanded. (a) Depth-first search. (b) Breadth-first search. (c) Uniform-cost search. (d) A* search
smallest sum of the edge costs from the state of node n to the goal state in a state space that contains additional actions (relaxation of the original state space) because adding actions cannot increase the goal distances. For example, for path finding on a map of cities and their highway connections, one could add actions that move off-road on a straight line from any city to any other city. The resulting admissible h-value of a node is the straight-line distance from the city of the node to the goal city. For the eight puzzle, one could add actions that move a tile on top of a neighboring tile. The resulting admissible h-value of a node is the sum, over all tiles, of the absolute differences in the x- and y-coordinates of the location of a tile in the configuration of the node and its location in the goal configuration. For a planning problem specified in STRIPS, one needs large admissible h-values to keep the number of expanded nodes manageable due to the large state space since the larger the admissible h-values, the fewer nodes A* expands. One could delete some or all of the elements of the precondition sets of action schemata but often deletes all elements of their delete sets instead. In both cases, additional actions can be executed in some states because the actions need to satisfy fewer preconditions in the first case and because previous actions did not delete some of their preconditions in the second case. So far, we have assumed a forward search from the start state to the goal state. For this, one needs to determine all possible successor states of a given state. One can also perform a backward search from the goal state to the start state, provided that one is able to determine all possible predecessor
states of a given state. If both forward and backward searches are possible, then one should choose the search direction that results in the smaller average number of child nodes of nodes in the search tree (average branching factor). Bidirectional search searches simultaneously in both directions to search faster and with less memory. If the state space is specified with STRIPS, then it is easy to determine all possible successor states of a given state and thus implement a forward search, while it is more difficult to determine all possible predecessor states of a given state.
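A minimal A* implementation for explicitly given state spaces can be sketched as follows (Python; the graph encoding treats the edges of the instance from Fig. 9.6 as directed toward the goal, and the h-values are the goal distances, both illustrative assumptions):

import heapq

def a_star(start, goal, successors, h):
    # A* search: successors(state) yields (action, next_state, edge_cost) triples,
    # and h(state) is an admissible estimate of the goal distance of the state.
    frontier = [(h(start), 0, start, [])]  # entries: (f-value, g-value, state, actions)
    best_g = {start: 0}
    while frontier:
        f, g, state, actions = heapq.heappop(frontier)
        if state == goal:
            return actions, g
        if g > best_g.get(state, float("inf")):
            continue  # a cheaper path to this state was found in the meantime
        for action, successor, cost in successors(state):
            g_new = g + cost
            if g_new < best_g.get(successor, float("inf")):
                best_g[successor] = g_new
                heapq.heappush(frontier, (g_new + h(successor), g_new, successor,
                                          actions + [action]))
    return None, float("inf")  # no path exists

# Path-finding instance from Fig. 9.6 (edges directed toward the goal).
graph = {"A": [("A-B", "B", 8), ("A-C", "C", 5)],
         "B": [("B-D", "D", 8), ("B-E", "E", 16)],
         "C": [("C-D", "D", 5)],
         "D": [("D-E", "E", 10)],
         "E": []}
h_values = {"A": 20, "B": 16, "C": 15, "D": 10, "E": 0}  # goal distances

actions, cost = a_star("A", "E", lambda s: graph[s], lambda s: h_values[s])
print(actions, cost)  # ['A-C', 'C-D', 'D-E'] 20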
Probabilistic Planning in Single-Agent Systems
Actions often do not have deterministic effects in practice. For example, a sick person might not respond to their medicine. The resulting probabilistic (or, synonymously, decision-theoretic) planning problems are often specified with (totally observable) Markov decision processes (MDPs) or probabilistic PDDL (PPDDL) [147]. MDPs assume that (1) the current state is always known (e.g., can be observed after every action execution), and the probability distributions over the cost and the successor state resulting from an action execution (2) are known and (3) depend only on the executed action and the state it is executed in (Markov property). For deterministic planning problems, a plan is a sequence of actions. However, for MDPs, a plan needs to specify which action to execute in each state that can be reached during its execution. It is a fundamental result that a plan that minimizes the expected total (discounted) plan-
execution cost can be specified as a function that assigns each state the action that should be executed in that state (policy). (The discount factor weighs future costs less than the current ones and is primarily used as a mathematical convenience since it ensures that all infinite sums are finite.) Policies that minimize the expected total (discounted) plan-execution cost can be found with stochastic dynamic programming techniques, such as value iteration and policy iteration. Assumptions 1–3 from above do not always hold in practice. Partially observable MDPs (POMDPs) relax Assumption 1 by assuming that the current state can be observed after every action execution only with noisy observations, which means that only a probability distribution over the current state is known. For example, the disease of a sick person is not directly observable, but the observed symptoms provide noisy clues about it. A policy now assigns each probability distribution over the current state the action that should be executed in this belief state, which results in more complicated and slower stochastic dynamic programming techniques than for MDPs since the number of states is typically finite, but the number of belief states is infinite.
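Policies for small MDPs can be computed with a few lines of value iteration. The following minimal Python sketch uses a tiny hypothetical MDP (the states, actions, costs, and transition probabilities are invented for illustration) and the cost-minimization formulation used in this section:

# Tiny hypothetical MDP: transitions[s][a] lists (probability, cost, successor) triples.
transitions = {
    "s0": {"safe":  [(1.0, 2.0, "s1")],
           "risky": [(0.8, 1.0, "s1"), (0.2, 1.0, "s0")]},
    "s1": {"go":    [(1.0, 1.0, "goal")]},
    "goal": {},  # absorbing goal state without actions
}
gamma = 0.95  # discount factor

def value_iteration(transitions, gamma, iterations=1000):
    # Repeatedly update the value of each state to the smallest expected total
    # (discounted) plan-execution cost over all actions executable in the state.
    v = {s: 0.0 for s in transitions}
    for _ in range(iterations):
        for s, actions in transitions.items():
            if actions:
                v[s] = min(sum(p * (c + gamma * v[s2]) for p, c, s2 in outcomes)
                           for outcomes in actions.values())
    # The policy assigns each state the action that minimizes the expected cost.
    policy = {s: min(actions, key=lambda a: sum(p * (c + gamma * v[s2])
                                                for p, c, s2 in actions[a]))
              for s, actions in transitions.items() if actions}
    return v, policy

v, policy = value_iteration(transitions, gamma)
print(policy)              # {'s0': 'risky', 's1': 'go'}
print(round(v["s0"], 2))   # about 2.17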
Planning in Multi-agent Systems
Multi-agent systems have become more important as parts of companies have been connected with the Internet, computational devices have been increasingly networked as part of the Internet of things, and teams of robots have been fielded successfully. Several agents can be more robust than a single agent since they can compensate for the failure of some agents. Several agents can also act in parallel and reduce the task-completion time. Centralized planning techniques for multi-agent systems can be obtained from planning techniques for single-agent systems and maximize the performance of the team (social welfare) but might raise privacy concerns for the involved agents. Decentralized (and distributed) planning techniques correspond to the many ways of making decentralized collective decisions in multi-agent systems, for example, with negotiating, voting, or bidding. Competitive agents are self-interested, that is, maximize their own performance, while cooperative agents maximize the performance of the team, for example, because they are bound by social norms or contracts (in case of humans) or are programmed this way (in case of robots). In both cases, one needs to understand how the agents can interact (mechanism) and what solutions can result from these interactions. One often wants to design the mechanism so that the solutions have desirable properties. For this, AI can utilize insights from other disciplines. Economics has mostly studied competitive agents, for example, in the context of non-cooperative game theory or auctions. An important part of the design of an auction mechanism in this context is to ensure that agents cannot game the system, for example, with collusion or shilling. Operations research, on the other
hand, has mostly studied cooperative agents, for example, in the context of vehicle routing. Concepts from economics also apply to cooperative agents since they can, for example, use cooperative auctions to assign tasks to themselves: Each agent bids on all tasks and is then assigned the tasks that it must execute, for example, the tasks for which it was the highest bidder. An important part of the design of an auction mechanism in this context is to design the auction so that, if each agent maximizes its own performance, the team maximizes social welfare. Different from many applications in economics, it is often also important that task allocation is fast.
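A cooperative auction of the kind described above can be sketched very simply: each agent reports its estimated cost for every task, and in each round the cheapest remaining (task, agent) bid wins. The following minimal Python sketch uses invented agents, tasks, and bids and ignores synergies between tasks (real sequential auctions let agents re-bid after every assignment):

bids = {  # bids[agent][task] = estimated execution cost (illustrative numbers)
    "robot1": {"deliver": 4, "inspect": 7, "clean": 3},
    "robot2": {"deliver": 6, "inspect": 2, "clean": 5},
}

def auction(bids):
    # Repeatedly assign the task with the lowest remaining bid to the bidding agent.
    unassigned = {task for agent_bids in bids.values() for task in agent_bids}
    assignment = {agent: [] for agent in bids}
    while unassigned:
        agent, task = min(((a, t) for a in bids for t in unassigned),
                          key=lambda pair: bids[pair[0]][pair[1]])
        assignment[agent].append(task)
        unassigned.remove(task)
    return assignment

print(auction(bids))  # {'robot1': ['clean', 'deliver'], 'robot2': ['inspect']}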
Example Applications
PDDL has been used for managing greenhouses [47] and composing and verifying web services [51, 98]. A* has been used for planning the paths of automated guided vehicles [138]. MDPs and POMDPs have been used for planning robot motions [129], tuning the parameters of wireless sensor networks [64, 92], and managing power plants [105]. Planning for multi-agent systems has been used for planning multi-view drone cinematography [93], production planning for auto engine plants [97], and assembling furniture with robots [66].
9.2.4 Machine Learning
Machine learning is the task of acquiring knowledge from observations of the world, often because knowledge is hard to specify by hand. Machine learning typically involves specifying a set of considered functions as possible models of some aspect of the world and choosing a good function from this set based on the observations. The set of considered functions is often specified in form of a parameterized function, and optimization techniques are then used to determine its parameters. The key challenge is to ensure that the resulting learned function generalizes beyond the observations. For example, it is difficult to program an agent to ride a bicycle without falling down, and it thus makes sense to let it learn from experience (that is, past observations) how to do it well, which involves handling situations that the agent did not experience during learning. We discuss three classes of machine learning techniques. Assume that one wants to learn which actions to execute in different situations. Supervised learning is similar to a teacher presenting examples for how to behave optimally in many situations, unsupervised learning is similar to experiencing many situations and trying to make sense of them without any further information, and reinforcement learning is similar to experiencing many situations, experimenting with how to behave in each of them, and receiving feedback
as to how good the behavior was (e.g., when falling down while learning how to ride a bicycle).
Supervised Learning
In supervised learning, an agent is provided with observations in form of labeled examples, where the agent knows the values of a fixed number of features and the value of the label of each example. Its task is to predict the values of the label of the encountered unlabeled examples in use, where the agent knows only the values of the features. This task can be framed as learning a function that maps examples to the predicted values of the label. There is a possibly infinite set of considered functions (hypothesis space), and one wants to find the best function from this set, which is ideally the true function that actually generates the values of the label. If the set of possible values of the label is discrete, then this is a classification task; see Fig. 9.8a. Otherwise, it is a regression task; see Fig. 9.8b. For example, recognizing handwritten digits from the pixels in images or spam emails from the number of times certain words occur in them is a classification task, while predicting the life expectancy of a person from their medical history is a regression task. An objective function captures how well a function predicts the values of the label of examples encountered in use (error function or, synonymously, loss function). The task is to choose the function from the set of considered functions that minimizes the error function. Examples of error functions are mappings from the set of considered functions to the resulting number of prediction errors for classification tasks and the resulting average of the squared differences between the true and predicted values of the label (mean squared error) for regression tasks. One often characterizes the number of prediction errors for classification tasks using precision and recall. Assume, for example, that 100 emails are provided to a spam email detection system, of which 20 are spam emails (actual positive examples). If the system identifies 15 emails as spam emails (identified positive examples) and 10 of them are indeed spam emails (correctly identified positive examples), then the number of correctly identified positive examples divided by the number of identified positive examples (precision) is 10/15, and the number of correctly identified positive examples divided by the number of actual positive examples (recall) is 10/20. Since there are too many examples that could be encountered in use, one cannot provide all of them as labeled examples. Machine learning thus needs to generalize from the labeled examples to the ones encountered in use. To make this possible, one typically assumes that the labeled examples are independently drawn from the same probability distribution as the examples encountered in use, resulting in them being independent and identically distributed (iid). The labeled examples serve different purposes and are therefore often partitioned randomly into three sets, namely, training examples, validation examples, and (if needed) test examples:
• The training examples are used to learn the function with a given machine learning technique and its parameters. For example, one might want to find the polynomial function of a given degree that has the smallest mean squared error on the examples encountered in use. Since these examples are not available during learning, one settles for finding the polynomial function of the given degree that has the smallest mean squared error on the training examples, which can be found, for example, with gradient descent. If the error is small, then one says that the function fits the training examples well. Any machine learning technique has to address underfitting and overfitting; see Fig. 9.9. Underfitting means that the learned function is not similar to the true function, typically because the true function is not in the set of considered functions (resulting in high bias). For example, if the set of considered functions contains only linear functions for regression tasks (linear regression tasks) but the true function is a polynomial function of higher degree, then the best linear function is likely not similar to the true function and thus fits neither the training examples nor the examples encountered in use well, as shown in Fig. 9.9a. Overfitting means that the learned function fits the training examples well but has adapted so much to the training examples that it is no longer similar to the true function even if the true function is in the set of considered functions, typically because there is lots of sampling noise in the training examples due to their small number or because the error function or machine learning technique is too sensitive to the noise in the feature or label values of the training examples (resulting in high variance). For example, if the training examples consist of a single photo that contains a cat, then a function that predicts that every photo contains a cat fits the training examples well but is likely not similar to the true function and thus likely does not fit the examples encountered in use well. Similarly, if there is noise in the values of the features or the label for regression tasks, then a polynomial function of higher degree than the true function can fit the training examples better than the true function but is likely not similar to the true function and thus likely does not fit the examples encountered in use well, as shown in Fig. 9.9. The bias-variance dilemma states that there is a tradeoff between the bias and the variance. To balance them, one can enlarge or reduce the set of considered functions, for example, by increasing or decreasing the number of parameters to be learned. Other techniques exist as well. For example, Least Absolute Shrinkage and Selection Operator (LASSO) regression is a popular technique to reduce overfitting for linear regression tasks. It minimizes
Fig. 9.8 Illustration of classification and regression. (a) Classification instance with training examples (shown as small blue and green dots) with two real-valued features x and y and a label with the discrete values blue or green. The predicted values of the label on one side of the red line are blue, and the predicted values on the other side are green. Not all predictions are accurate. (b) Regression instance with training examples (shown as small blue dots) with one real-valued feature x and real-valued label y. The red line shows the predicted values of the label for all possible values of x. Not all predictions are accurate
Fig. 9.9 Illustration of underfitting and overfitting (each panel shows the learned function, the true function, and the training examples). (a) The learned function underfits. (b) The learned function overfits
the mean squared error plus a weighted penalty term (regularization term) that is the L1 norm of the weights of the linear function to be learned.
• The validation examples are used to select parameters for the machine learning technique (hyperparameters) that allow it to learn a good function, which has to be done with examples that are different from the training examples (cross-validation). Hyperparameters are, for example, the degree of the polynomial function to be learned for polynomial regression tasks or the weight of the regularization term for LASSO. They can be selected manually or automatically with exhaustive search, random search, or machine learning techniques.
• The test examples are used as a proxy for the examples encountered in use when assessing how well the learned function will likely do in use.
The more training and validation examples are available, the better a function can be learned. A fixed split of the non-test examples into training and validation examples does not make good use of the non-test examples, which is a problem if their number is small. One therefore often uses each non-test example sometimes as a training example and sometimes as a validation example. For example, k-fold cross-validation partitions the non-test examples into k sets and then performs k rounds of learning for given hyperparameters, each time using the examples in one of the partitions as validation examples and the remaining non-test examples as training examples. After each round, it calculates how well the resulting function predicts the values of the label of the validation examples. The hyperparameters are then evaluated according to the average of these numbers over all k rounds. Supervised machine learning techniques differ in how they represent the set of considered functions. We discuss several examples in the following because none of them is universally better than the others (no free lunch theorem) [142]. We do not discuss variants of supervised learning, such as the one where one synthesizes examples and then is provided with their values of the label at a cost (active learning) or the one where one uses the knowledge gained from solving one classification or regression task to solve a similar task with fewer examples or less runtime (transfer learning), which is important since machine learning techniques are typically both training data and runtime intensive. Meta-learning, which is learning from the output of other machine learning techniques, can be used to improve the learning ability of machine learning techniques by learning to learn, for example, across related classification or regression tasks for multi-task learning [72].
Decision Tree Learning
Decision tree learning represents the set of considered functions with trees where each non-leaf node is labeled with a
Fig. 9.10 Decision tree for the example from Fig. 9.4c (the root node tests R; the branch R=f leads to the leaf C=f, and the branch R=t leads to a node that tests H, whose branch H=t leads to the leaf C=t and whose branch H=f leads to the leaf C=f)
feature, the branches of each non-leaf node are labeled with a partition of the possible values of that feature, and each leaf node is labeled with a value of the label (decision trees); see Fig. 9.10. Decision trees can be used for both classification and regression tasks (classification and regression trees). The value of the label of an unlabeled example is determined by starting at the root node and always following the branch that is labeled with the example’s value of the feature that labels the current node. The label of the leaf node eventually reached is the predicted value of the label for the example. For example, the decision tree in Fig. 9.10 predicts that a sick person with normal temperature and no running nose does not have a cold. Decision tree learning constructs a small decision tree, typically from the root node down. It chooses the most informative feature for the root node (often using metrics from information theory) and then recursively constructs each subtree below the root node. Overfitting can be reduced by limiting the depth of a decision tree or, alternatively, pruning it after its construction. It can also be reduced by using ensemble learning to learn several decision trees (random forest) and then letting them vote on the value of the label by outputting the most common value among them for classification tasks and the average of their values for regression tasks. One can also use meta-learning to determine how to combine the predictions of the decision trees.
Naïve Bayesian Learning
Naïve Bayesian learning represents the set of considered functions with Bayesian networks of a specific graph structure. The graph structure of the Bayesian networks is the one given in Fig. 9.4a, with an edge from the class to each feature. Naïve Bayesian learning estimates the conditional probabilities, typically using frequencies. Assume, for example, a diagnosis task where the features are the symptoms H(igh Temperature) and R(unning Nose) in Fig. 9.4c and the label is the disease C(old). Three training examples have C=f, and two of those have H=t, so P(H=t|C=f) = 2/3. Overall, the con-
ditional probabilities in Fig. 9.4a result for the given training examples. Bayesian networks of the simple graph structure shown in Fig. 9.4a are called naïve Bayesian networks because the conditional independences implied by their graph structure might not be correct. In fact, the conditional independence P(H=t ∧ R=t|C=f) = P(H=t|C=f) P(R=t|C=f) does not hold for the training examples since P(H=t ∧ R=t|C=f) = 0 ≠ 2/3 · 1/3 = P(H=t|C=f) P(R=t|C=f). Thus, naïve Bayesian learning can underfit but is often expressive enough for the learned function to be sufficiently similar to the true function and then has the advantage that it reduces the set of considered functions compared to Bayesian networks with more complex graph structures and thus reduces overfitting. When one needs to predict the value of the label of an unlabeled example, for example, H=f and R=f, one calculates P(C=t|H=f ∧ R=f) = P(C=t ∧ H=f ∧ R=f) / P(H=f ∧ R=f) = P(C=t ∧ H=f ∧ R=f) / (P(C=t ∧ H=f ∧ R=f) + P(C=f ∧ H=f ∧ R=f)) = P(C=t) P(H=f|C=t) P(R=f|C=t) / (P(C=t) P(H=f|C=t) P(R=f|C=t) + P(C=f) P(H=f|C=f) P(R=f|C=f)) = 0 to determine the probability that the value of the label is true, that is, that a sick person with normal temperature and no running nose has a cold. This calculation is essentially the one of Bayes’ rule P(C=t|H=f) = P(C=t ∧ H=f) / P(H=f) = P(C=t ∧ H=f) / (P(C=t ∧ H=f) + P(C=f ∧ H=f)) = P(C=t) P(H=f|C=t) / (P(C=t) P(H=f|C=t) + P(C=f) P(H=f|C=f)) but generalized to multiple observed symptoms with the conditional independences implied by the graph structure of naïve Bayesian networks.
Neural Network Learning
Neural network learning represents the set of considered functions with artificial neural networks (NN). NNs are directed acyclic graphs, inspired by the networks of neurons in brains, that represent primitive processing units (perceptrons) with nodes and the flow of information from one perceptron to another with directed edges. A perceptron has a number of real-valued inputs and produces one real-valued output. If a single perceptron is used for learning, then the inputs of the perceptron are the feature values, and its output is the value of the label. A weight is associated with each input. The perceptron calculates the weighted sum of its inputs and then applies a nonlinear activation function to the weighted sum to create its output. The activation function is typically monotonically nondecreasing. An example is the sigmoid function σ(x) = 1/(1 + e^(−x)), which is a differentiable approximation of a threshold function to facilitate gradient descent. Perceptron learning finds the weights (including the amount of horizontal translation of the activation function, which is often expressed as a weight) that minimize the error function. A single perceptron can essentially represent all functions from n real-valued feature values (for any n) to a label with two values where all examples that the function maps to one value of the label lie on one side of an
(n − 1)-dimensional separating plane and all examples that the function maps to the other value lie on the opposite side of the plane (the examples are linearly separable). This might be expressive enough for the learned function to be sufficiently similar to the true function and then has the advantage that it reduces the set of considered functions compared to a NN and thus reduces overfitting. Often, however, a single perceptron is not expressive enough (e.g., because it cannot even specify an exclusive or function), which is why one connects perceptrons into a NN. If a NN is used for learning, then the inputs of a perceptron in the NN can be either feature values, which are the inputs of the NN, or the outputs of other perceptrons. The output of a perceptron in the NN can be an input to multiple other perceptrons and/or the value of the label, which is the output of the NN. The perceptrons are often organized in layers. NNs can have multiple outputs (e.g., one for each value of the label, such as the possible diseases of a sick person), which are produced by the output layer. Sometimes, they are postprocessed by a softmax layer to yield a probability distribution over the possible values of the label. The remaining layers are the hidden layers. A NN is deep if and only if it has many hidden layers. There is typically a regularity in the connections from layer to layer. For example, in fully connected layers, the output of each perceptron in a layer is the input of all perceptrons in the next layer. NN learning often, but not always, takes the graph structure of the NN as given and uses gradient descent to find the weights of all perceptrons of the NN that minimize the error function. For example, the backpropagation technique adjusts the weights repeatedly with one forward pass through the layers (that calculates the outputs of all perceptrons for a given training example) and one backward pass (that calculates the gradient of the error function with respect to each weight). Specialized NNs exist for specific tasks: • Convolutional NNs (CNNs) are NNs that are specialized for processing matrix-like data, such as images. They use several kinds of layers in addition to the layers already described. For example, a convolutional layer (without kernel flipping) is associated with a filter whose values are learned. The filter is a matrix A (kernel) that moves over the matrix B of the outputs of the previous layer with a step size that is given by the stride. It calculates the dot product of matrix A and the submatrix of matrix B that corresponds to each location. The matrix of these dot products is the output of the convolutional layer; see Fig. 9.11. Overall, a convolutional layer extracts local features. A pooling layer is similar to a convolutional layer except that, instead of calculating the dot product, it returns the maximum value (max pooling) or the average value (average pooling) of the elements of the submatrix of matrix B that corresponds to each location; see Fig. 9.12.
Fig. 9.11 Illustration of convolution (without kernel flipping) with stride one (an input matrix, a kernel, and the output matrix of the resulting dot products)
Fig. 9.12 Illustration of max pooling and average pooling with stride two. Input matrix:
3 0 1 4
0 1 3 0
1 1 2 1
1 5 1 0
Max pooling with a 2×2 kernel and stride two yields
3 4
5 2
and average pooling with a 2×2 kernel and stride two yields
1 2
2 1
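The operations illustrated in Figs. 9.11 and 9.12 can be sketched in a few lines of Python with numpy (the 2×2 kernel in the last line is an invented example and not the kernel of Fig. 9.11):

import numpy as np

def convolve(B, A, stride=1):
    # Convolution without kernel flipping: slide kernel A over matrix B and
    # record the dot product of A with each submatrix of B that it covers.
    kh, kw = A.shape
    rows = (B.shape[0] - kh) // stride + 1
    cols = (B.shape[1] - kw) // stride + 1
    return np.array([[np.sum(A * B[i * stride:i * stride + kh, j * stride:j * stride + kw])
                      for j in range(cols)] for i in range(rows)])

def pool(B, size=2, stride=2, op=np.max):
    # Max pooling (op=np.max) or average pooling (op=np.mean).
    rows = (B.shape[0] - size) // stride + 1
    cols = (B.shape[1] - size) // stride + 1
    return np.array([[op(B[i * stride:i * stride + size, j * stride:j * stride + size])
                      for j in range(cols)] for i in range(rows)])

B = np.array([[3, 0, 1, 4],
              [0, 1, 3, 0],
              [1, 1, 2, 1],
              [1, 5, 1, 0]])               # input matrix of Fig. 9.12
print(pool(B, op=np.max))                  # [[3 4] [5 2]]
print(pool(B, op=np.mean))                 # [[1. 2.] [2. 1.]]
print(convolve(B, np.array([[1, 0],
                            [0, 1]])))     # dot products with an invented 2x2 kernel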
Overall, a pooling layer compresses the information in a lossy way.
• Recurrent NNs (RNNs) are NNs that are specialized for classifying sequences (of examples) over time, including handwriting and speech. They use internal state to remember the outputs of some perceptrons from the previous time step and can use them as input of some perceptrons in the current time step. Long short-term memory (LSTM) networks are versions of RNNs that make it easier to maintain the internal state by using three gates to the memory, namely, one that decides what to store in memory (input gate), one that decides what to delete from memory (forget gate), and one that decides what to output from memory (output gate).
k-Nearest Neighbor Learning
k-nearest neighbor learning represents the set of considered functions with all training examples rather than parameters.
There is nothing to be learned, but more effort has to be spent when determining the value of the label of an unlabeled example, which involves finding the k training examples most similar to it and then letting them vote on the value.
Support Vector Machine Learning
Linear support vector machine learning considers the same functions as single perceptrons in NNs but represents them differently, namely, with the subset of the training examples that are closest to the separating plane (support vectors), rather than parameters. Support vector machine (SVM) learning improves on k-nearest neighbor learning since it only uses the necessary training examples rather than all of them to represent the learned function. It improves on single perceptrons since its separating plane has the largest possible distances to the training examples (maximum margin separator), which makes it more robust to sampling noise. The set of considered functions and thus the learned function can also be more
complex by embedding the training examples into higher-dimensional spaces with the kernel trick.
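The prediction step of k-nearest neighbor learning described above can be sketched as follows (Python; the training examples and query points are invented for illustration):

import math
from collections import Counter

def knn_predict(train, x, k=3):
    # Predict the value of the label of example x by majority vote among the k
    # training examples (feature vector, label) closest to x in Euclidean distance.
    neighbors = sorted(train, key=lambda example: math.dist(example[0], x))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# Invented training examples with two real-valued features and a discrete label.
train = [((1.0, 1.0), "blue"), ((1.2, 0.8), "blue"), ((0.9, 1.1), "blue"),
         ((3.0, 3.2), "green"), ((3.1, 2.9), "green"), ((2.8, 3.0), "green")]
print(knn_predict(train, (1.1, 0.9)))  # blue
print(knn_predict(train, (3.0, 3.0)))  # green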
Unsupervised Learning
In unsupervised learning, an agent is provided with unlabeled examples, and its task is to identify structure in the examples, for instance, to find simpler representations of the examples while preserving as much information contained in them as possible (dimensionality reduction, which compresses the information in a lossy way and thus might decrease the amount of computation required to process the examples and reduce overfitting), to group similar examples such as photos that contain similar animals (clustering), or to generate new examples that are similar to the given ones (generative modeling). We describe one unsupervised machine learning technique for each of these purposes, namely, principal component analysis for dimensionality reduction, k-means clustering for clustering, and generative adversarial networks for generative modeling. The unlabeled examples can again be partitioned into training, validation, and test examples.
Principal Component Analysis
Principal component analysis (PCA) obtains a simpler representation of the training examples by projecting them onto the first principal components to reduce their number of features. The principal components are new features that are constructed as linear combinations (or mixtures) of the features such that the new features are uncorrelated and contain most of the information in the training examples (that is, preserve most of the variance). Dimensionality reduction of examples can make them easier to visualize, process, and therefore also analyze, which addresses the curse of dimensionality.
k-Means Clustering
k-means clustering partitions the training examples into k sets of similar examples. It randomly chooses k different centroids, one for each set, and then repeats the following two steps until the sets no longer change: First, it assigns each example to the set whose centroid has the smallest squared Euclidean distance to the example. Second, it updates the centroids of the sets to the means of all examples assigned to them.
Generative Adversarial Network Learning
Generative adversarial network learning generates new examples that are similar to the training examples. Generative adversarial networks (GANs) consist of two NNs that are trained simultaneously: The generator attempts to generate examples that are similar to the training examples, and the discriminator attempts to distinguish the generated examples from the training examples. Once the discriminator fails in its task, the generator has achieved its task and can be used to generate new examples that are similar to the training examples.
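The k-means procedure described above can be sketched as follows (Python; the two-dimensional examples are invented, and a fixed number of iterations is used instead of checking whether the sets still change):

import random

def k_means(examples, k, iterations=100, seed=0):
    # Alternate between assigning each example to the nearest centroid (squared
    # Euclidean distance) and updating each centroid to the mean of its examples.
    random.seed(seed)
    centroids = random.sample(examples, k)
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for x in examples:
            nearest = min(range(k), key=lambda i: sum((a - b) ** 2
                                                      for a, b in zip(x, centroids[i])))
            clusters[nearest].append(x)
        centroids = [tuple(sum(values) / len(cluster) for values in zip(*cluster))
                     if cluster else centroids[i]
                     for i, cluster in enumerate(clusters)]
    return centroids, clusters

examples = [(1.0, 1.0), (1.2, 0.8), (0.9, 1.1), (5.0, 5.2), (5.1, 4.9), (4.8, 5.0)]
centroids, clusters = k_means(examples, k=2)
print(centroids)  # one centroid near (1, 1) and one near (5, 5)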
Reinforcement Learning
In reinforcement learning (RL), similar to operant conditioning in behavioral psychology, an agent interacts with the environment by executing actions and receiving feedback (reinforcement) in the form of possibly delayed penalties and rewards, which are represented with real-valued costs. Its task is to identify a behavior (in the form of a policy) that minimizes its expected total (discounted) plan-execution cost. RL can be used to figure out which actions worked and which did not when the agent failed to achieve its objective after executing several actions in a row (credit-assignment problem), for example, when falling down while learning how to ride a bicycle. Genetic programming can be used for RL, but most RL is based on MDPs and relaxes Assumption 2 from Sect. 9.2.3 by assuming that the probability distributions over the cost and successor state resulting from an action execution in a state are unknown before the action execution but can be observed afterward. Model-based RL uses frequencies to estimate these probability distributions from the observed (state, action, cost, successor state) tuples when the agent interacts with the environment by executing actions. It then determines a policy for the resulting MDP. Model-free RL directly estimates a policy. Q-learning, the most popular model-free RL technique, is a stochastic dynamic programming technique that learns a q-value q(s, a) for each (state, action) pair that estimates the smallest possible expected total (discounted) plan-execution cost (until the goal state is reached) if plan execution starts with executing action a in state s. All q-values are initially zero. Whenever the agent executes action a in state s and then observes cost c and successor state s', it updates

q(s, a) ← (1 − α) q(s, a) + α (c + γ min_{a' executable in s'} q(s', a'))   (9.2)
        = q(s, a) + α (c + γ min_{a' executable in s'} q(s', a') − q(s, a)).   (9.3)
This formula can easily be understood by realizing that min_{a' executable in s'} q(s', a') estimates the smallest possible expected total (discounted) plan-execution cost if plan execution starts in state s'. Thus, the formula calculates a weighted average of the old q-value q(s, a) and the smallest possible expected total (discounted) plan-execution cost after the agent executes action a in state s and then observes cost c and successor state s'; see (9.2). Equivalently, the formula changes q(s, a) by taking a small step in the direction of the smallest possible expected total (discounted) plan-execution cost after the agent executes action a in state s and then observes cost c and successor state s'; see (9.3). The learning rate α > 0 is a hyperparameter, typically chosen close to zero, for calculating the weighted average or, equivalently, step size. The discount factor 0 < γ ≤ 1 is the hyperparameter
from Sect. 9.2.3, typically chosen close to one, that is used as a mathematical convenience. Deep NNs can be used to approximate the q-values and generalize the experience of the agent to situations that it did not experience during learning (deep RL). The policy, to be used after learning, maps each state s to the action a with the smallest q-value q(s, a). These exploiting actions utilize the knowledge of the agent to achieve a small expected total (discounted) plan-execution cost. During learning, the agent also needs to execute exploring actions that allow it to gain new knowledge (e.g., help it to visit states that it has visited only a small number of times so far) and improve its policy, as was already discussed in Sect. 9.2.1 in the context of local search. The agent could tackle this exploration-exploitation problem, for example, by choosing a random action that can be executed in its current state with a small probability ε > 0 and the exploiting action otherwise (ε-greedy exploration).
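The q-value update (9.3) combined with ε-greedy exploration can be illustrated with a small tabular sketch. In the following Python fragment, the tiny corridor environment, the hyperparameter values, and all names are invented for illustration; real applications have far larger state spaces and, as noted above, often approximate the q-values with deep NNs:

```python
# Tabular Q-learning with epsilon-greedy exploration, following update (9.3)
# with costs (to be minimized) rather than rewards. The corridor environment
# below is a made-up illustration, not an example from the chapter.
import random

N_STATES, GOAL = 5, 4          # states 0..4; state 4 is the goal
ACTIONS = [-1, +1]             # move left or move right
alpha, gamma, epsilon = 0.1, 0.95, 0.1
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}  # all q-values start at zero

def step(s, a):
    """Hypothetical environment: every move costs 1; positions are clipped to the corridor."""
    s_next = min(max(s + a, 0), N_STATES - 1)
    return 1.0, s_next          # (cost c, successor state s')

for episode in range(500):
    s = 0
    while s != GOAL:
        # Epsilon-greedy: explore with a small probability, otherwise exploit.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = min(ACTIONS, key=lambda a_: q[(s, a_)])
        c, s_next = step(s, a)
        # Cost-to-go from the successor state (zero once the goal is reached).
        target = 0.0 if s_next == GOAL else min(q[(s_next, a_)] for a_ in ACTIONS)
        # Update (9.3): take a small step toward the observed cost-to-go.
        q[(s, a)] += alpha * (c + gamma * target - q[(s, a)])
        s = s_next

policy = {s: min(ACTIONS, key=lambda a_: q[(s, a_)]) for s in range(N_STATES)}
print(policy)   # should map every non-goal state to +1 (move right)
```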
Example Applications
Decision tree learning has been used for diagnosing faults and monitoring the conditions of industrial machines [60] and identifying good features for such tasks [125]. Random forest learning has been used for diagnosing faults of industrial machines [19, 146] and classifying remote sensing data [7, 77]. Naïve Bayesian learning has been used for diagnosing faults of industrial machines [151] and diseases [30, 136]. CNN learning has been used for segmenting manufacturing defects in images [143], detecting and classifying faults in semiconductor manufacturing [69], and finding good grasp configurations for novel objects [68]. The Dexterity Network Dataset (Dex-Net) and Grasp Quality CNN (GQ-CNN) are used for robust learning-based grasp planning [78–82]. RNN learning has been used for classifying objects from their motion trajectories to allow self-driving cars to decide which obstacles to avoid [29] and forecasting time series [48], such as taxi demand [145]. PCA has been used for monitoring and diagnosing faults in industrial processes [21, 107]. SVM learning has been used for classifying remote sensing data [110]. k-nearest neighbor learning has been used for detecting faults in semiconductor manufacturing [74, 155]. k-means clustering has been used for understanding climate and meteorological data, including monitoring pollution, identifying sources, and developing effective control and mitigation strategies [40]. GANs have been used for diagnosing faults [17, 46], detecting credit card fraud [32], and creating novel designs from sketches for rapid prototyping [102]. RL has been used for dispatching orders in the semiconductor industry [124], manipulating objects with industrial robots [59], and following lanes for autonomous driving [62]. Machine vision systems, powered by PCA, SVMs, NNs, and decision trees, have been used for the automated quality inspection of fruit and vegetables [25] as well as machine components [103].
9.3 Combining AI Techniques
AI research has traditionally focused on improving individual AI techniques (sometimes assuming idealized conditions for their use) and thus on the narrow tasks that they can handle. It is often nontrivial to combine them to create agents that solve broad jobs, for example, perform search-and-rescue operations. For example, it makes sense to combine AI techniques that acquire knowledge (like vision or machine learning) with AI techniques that use it (like planning). However, it is difficult to figure out how different AI techniques should pass information and control back and forth between them in order to achieve synergistic interactions. For example, the combination of low-level motion planning in continuous state spaces and high-level task planning in discrete state spaces has been studied for a long time in the context of using robots to re-arrange objects [58], but no consensus has been reached yet on good ways of combining the two planners.

Of course, many complete agents have been built. While one sometimes attempts to determine a monolithic function for such agents directly, one often provides the function as a given composition of subfunctions, resulting in modular agent architectures. Such agent architectures describe how agents are composed of modules, what the modules do, and how they interact. Many such agent architectures are ad hoc, but proposals have been made for general agent architectures, mostly in the context of cognitive agents and robots. For example, the three-layer robot architecture consists of a slow planner, a behavior sequencer that always chooses the current behavior during plan execution based on the plan, and a fast reactive feedback controller that implements the chosen behavior. Other agent architectures include the blackboard architecture and the subsumption architecture.

Meta-reasoning techniques can be used to decide which output quality a module needs to provide or for how long it should run to allow the agent to maximize its performance. For example, an anytime algorithm is one that provides an output quickly and then improves the quality of its output the longer it runs, which can be described by a mapping from its input quality and runtime to its output quality (performance profile). It might simplify meta-reasoning if every module of an agent architecture could be implemented as an anytime algorithm.
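As a rough illustration of the anytime idea, the following Python sketch (a toy example with an invented `anytime_minimize` helper, not a technique from this chapter) returns its best output whenever its time budget expires, so that a longer runtime typically yields a higher-quality output, as a performance profile would describe:

```python
# A toy anytime algorithm: random sampling that keeps the best solution found
# so far and returns it whenever the time budget runs out.
import random, time

def anytime_minimize(f, lo, hi, budget_s=0.05):
    """Keep sampling and keep the best solution; output quality improves with runtime."""
    best_x, best_val = None, float("inf")
    deadline = time.monotonic() + budget_s
    while time.monotonic() < deadline:
        x = random.uniform(lo, hi)
        val = f(x)
        if val < best_val:
            best_x, best_val = x, val
    return best_x, best_val

# A longer budget typically yields a better (lower) objective value.
print(anytime_minimize(lambda x: (x - 3.0) ** 2, -10, 10, budget_s=0.01))
print(anytime_minimize(lambda x: (x - 3.0) ** 2, -10, 10, budget_s=0.1))
```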
9.4 Case Study: Automated Warehousing
In Amazon fulfillment centers, millions of items are stored on special shelves, see Fig. 9.13. When an order needs to be fulfilled, autonomous robots pick up the shelves that store the ordered items and bring them to picking stations at the perimeter of the fulfillment center, where a human worker takes the ordered items from the shelves so that they can be boxed and shipped to the customer. The robots then return the
shelves, either to their original locations or different locations [144].

Fig. 9.13 Amazon fulfillment center. (Photos © Amazon.com, Inc.)

On the order of one thousand robots can operate in such automated warehouses and similar sorting centers (where robots carry packages to chutes that each serve a different loading dock) per floor [2]. Each robot moves one shelf at a time and recharges when necessary. Amazon puts stickers onto the floor that delineate a four-neighbor grid, on which the robots move essentially without location uncertainty. Human workers no longer need to move around in such automated warehouses, collecting the right items under time pressure – a physically demanding job whose automation makes it possible to store 40% more inventory [3] and at least doubles the picking productivity of the human workers (e.g., compared to conveyorized systems) [144]. Many of the manipulation tasks in warehouses cannot be automated yet, but the number of human workers required to operate automated warehouses will be small once they are. Automated warehouses need to store as many items as possible, resulting in the corridors needed to move the shelves being narrow. Robots carrying shelves cannot pass each other in these corridors, and their movements thus need to be planned carefully, which can be done centrally. Shelves close to the perimeter can be fetched faster than shelves in the center. The following questions arise, among others: (1) How should one move the robots to avoid them obstructing each other and allow them to reach their goal locations as quickly as possible to maximize throughput? (2) Where should one place the corridors to maximize throughput? (3) Which picking station should one use for a given order? (4) Which one of several possible shelves should one fetch to obtain an item for a given order? (5) Which robots should one use to fetch a given shelf for a given order? (6) On which shelves should one put items when restocking items? (7) Where should one place a given shelf to maximize throughput, so that shelves that contain frequently ordered items can be fetched fast? (8)
When should one start to process a given order given that different orders have different delivery deadlines? (9) How should one estimate the time that it takes a robot to fetch a shelf and a human worker to pick an item, especially since the picking time varies over the course of a shift? AI techniques, such as those for multi-agent systems, can help make these decisions. For example, the layout optimization problem of Question 2 can be solved with GAs, and the scheduling problems of Questions 3–8 can be solved optimally with MIP techniques or suboptimally with versions of hill climbing. Possible techniques for Question 1 have been studied in the context of the NP-hard multi-agent path finding (MAPF) problem and are described in more detail below. Centralized MAPF planning techniques plan paths for all robots. Efficient techniques exist but do not result in movement plans with quality guarantees. Examples include techniques based on either movement rules [26, 140] or planning paths for one robot after the other, where each path avoids collisions with the paths already planned (prioritized search) [75]. Search can also be used to find movement plans with quality guarantees, but its runtime can then scale exponentially in the number of robots [76, 150]. For example, searching a graph whose vertices represent tuples of locations (one for each robot) is prohibitively slow. Instead, one often divides the overall MAPF problem into mostly independent subproblems by planning a shortest path for each robot under the assumption that the other robots do not exist. If the resulting movement plan has no collisions, then it is optimal. Otherwise, one needs to resolve the collisions. One collision-resolution technique groups all colliding robots into a team and plans for them jointly [122, 123], thus avoiding collisions among them in the future. Another collision-resolution technique chooses one of the collisions, for example, one where two robots are in the same location
x at the same time t [117]. It then chooses one of the two robots and recursively considers two cases, namely, one where the chosen robot is not allowed to be at location x at time t and one where it must be at location x at time t, and the other robots are not allowed to be at location x at time t (disjoint splitting) [73], thus avoiding this collision in the future. The runtime of the search depends on the choice of collision and robot, but how to choose them well is poorly understood, and the search can be slow. Machine learning can be used to make good choices fast [52]. Finally, movement plans with and without quality guarantees can also be found by translating MAPF instances into different optimization instances, such as IPs [149] or SAT instances [126]. Decentralized (and distributed) MAPF planning techniques allow each robot to plan for itself, which avoids the problem of centralized planning that the multi-agent system fails if the central planner fails. One decentralized MAPF planning technique uses deep RL to learn a policy that maps the current state to the next movement of the robot, similar to how DeepMind (which is now part of Google) learned to play Atari video games [89]. The state is characterized by information such as the goal location of the robot, the locations of the other robots and their goal locations, and the locations of the obstacles – all in a field of view centered on the location of the robot. All robots use the same policy. Learning takes time but needs to be done only once. Afterward, it allows for the quick retrieval of the next movement of the robot based on the current state. A combination of deep RL and learning a policy that imitates the movement plans found by a search technique (imitation learning) works even better but is still incomplete [113].
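To illustrate the prioritized-search idea mentioned above, the following Python sketch plans robots one after another on a four-neighbor grid and makes each space-time search avoid the (location, time) pairs already reserved by higher-priority robots. The grid, the priority order, and the simplifications (edge "swap" conflicts are ignored, and robots are assumed to vanish once they reach their goals) are assumptions for this toy example and do not reflect the deployed systems cited above:

```python
# A stripped-down prioritized MAPF planner on a four-neighbor grid: robots are
# planned one after the other, and each space-time search avoids the cells
# reserved by earlier (higher-priority) robots. Illustrative sketch only.
from collections import deque

MOVES = [(0, 0), (0, 1), (0, -1), (1, 0), (-1, 0)]  # wait or move in 4 directions

def plan_path(grid, start, goal, reserved, horizon=50):
    """Space-time BFS that avoids (cell, time) pairs reserved by earlier robots."""
    rows, cols = len(grid), len(grid[0])
    frontier = deque([(start, 0, [start])])
    visited = {(start, 0)}
    while frontier:
        (r, c), t, path = frontier.popleft()
        if (r, c) == goal:
            return path
        for dr, dc in MOVES:
            nr, nc, nt = r + dr, c + dc, t + 1
            if not (0 <= nr < rows and 0 <= nc < cols) or grid[nr][nc] == 1:
                continue                      # off the grid or blocked by a shelf
            if ((nr, nc), nt) in reserved or ((nr, nc), nt) in visited:
                continue                      # collides with an earlier robot's plan
            if nt <= horizon:
                visited.add(((nr, nc), nt))
                frontier.append(((nr, nc), nt, path + [(nr, nc)]))
    return None                               # no collision-free path within the horizon

def prioritized_plan(grid, starts, goals):
    reserved, plans = set(), []
    for start, goal in zip(starts, goals):    # fixed priority order
        path = plan_path(grid, start, goal, reserved)
        if path is None:
            return None                       # prioritized search is incomplete
        for t, cell in enumerate(path):
            reserved.add((cell, t))
        plans.append(path)
    return plans

grid = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]
print(prioritized_plan(grid, starts=[(0, 0), (2, 0)], goals=[(0, 3), (2, 3)]))
```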
9.5 AI History
The state of the art in AI, as described so far, has evolved over a period of more than 60 years. In the 1940s and 1950s, researchers worked on creating artificial brains. In 1950, Alan Turing published his paper “Computing Machinery and Intelligence” [132] that asked whether machines could think and introduced the imitation game (Turing test) to determine whether a computer is intelligent or, in our terminology, a believable agent, namely, if a human judge who corresponds via text messages with two conversation partners, a human participant and a computer, cannot determine who is who. In 1956, the term AI was introduced at the Dartmouth conference to name the newly created research field. AI was driven by the belief that “a physical symbol system has the necessary and sufficient means for general intelligent action” [95], that logic provides a good means for knowledge representation, and that reasoning can be achieved with search. In 1958, the AI programming language Lisp was introduced. In the 1960s, AI thrived, but, in the 1970s, the first period of pessimism
characterized by reduced funding and interest in AI (AI winter) hit due to lack of progress caused by the limited scalability of AI systems and the limited expressiveness of perceptrons. In the 1980s, AI thrived again, due to expert systems and their applications utilizing hand-coded knowledge to overcome the scalability issues, starting an era of knowledge-intensive systems. In the 1980s, the backpropagation technique was reinvented and became widely known. It allowed one to train networks of perceptrons, overcoming the limited expressiveness of single perceptrons, starting the research field of connectionism, which provided an alternative to the physical symbol system hypothesis. In the late 1980s and early 1990s, the second AI winter hit due to a slowdown in the deployment of expert systems (since it was challenging to build expert systems for complex domains with uncertainty) and the collapse of the Lisp machine market (since many AI systems can easily be implemented with conventional programming languages on conventional computers). In the 1990s, AI started to thrive again, due to probabilistic reasoning and planning replacing deterministic logic-based reasoning and planning, and machine learning replacing knowledge acquisition via the work-intensive and error-prone interviewing of experts and hand-coding of their knowledge. Lots of data became available due to the networking of computers via the Internet and the pervasiveness of mobile devices, starting an era of big data and compute-intensive machine learning techniques to understand and exploit them, such as deep learning with CNNs, resulting in many new AI applications. In fact, it was argued that a substantial increase in the amount of training data can result in much better predictions than improved machine learning techniques [6, 45]. In 2017, the One Hundred Year Study on AI, a long-term effort to study and predict how AI will affect society, issued its first AI Index Report. Its reports show that the number of annual AI publications increased around ninefold from 1996 to 2017 [118], and the attendance at the machine learning conference NeurIPS increased about eightfold from 2012 to 2019 [99]. The funding for AI startups increased more than 30-fold from 2010 to 2018 worldwide [99]. As a result, the percentage of AI-related jobs among all jobs in the USA increased about fivefold from 2010 to 2019 [99].
9.6 AI Achievements
The strength of game-playing AI is often used to evaluate the progress of AI as a whole, probably because humans can demonstrate their intellectual strength by winning games. Many initial AI successes were for board games. In the late 1950s, a Checkers program achieved strong amateur level [111]. In the 1990s, the Othello program BILL [70] defeated the highest-ranked US player, the Checkers program Chinook [114] defeated the world champion, and the chess
program Deep Blue [16] defeated the world champion as well [99]. There have been lots of AI successes for non-board games as well. In the 2010s, the Jeopardy! program Watson beat two champions of this question-answer game show [31]. The Poker programs Libratus [13] and DeepStack [91] beat professional players of this card game [99]. A program that played 49 Atari video games demonstrated human-level performance at these early video games by learning directly from video frames [89], the Go program AlphaGo beat one of the world's best Go players of this board game [119], and the StarCraft II program AlphaStar reached Grandmaster level at this modern video game [137]. AI systems now affect all areas of everyday life, industry, and beyond, with especially large visibility in the areas of autonomous driving and intelligent voice assistants. In the 1990s, the Remote Agent controlled a NASA spacecraft without human supervision [10]. Also in the 1990s, a car drove from Pittsburgh to San Diego, with only 2% of the 2850 miles steered by hand [55]. In the 2000s, several cars autonomously completed the 132-mile off-road course of the second DARPA Grand Challenge and the subsequent DARPA Urban Challenge that required them to drive in realistic urban traffic and perform complex maneuvers [14, 90, 130]. In 2018, Waymo cars autonomously drove over 10 million miles on public roads [67], and, in 2020, Waymo announced that it would open its fully driverless service to the general public [71]. Data from GPS devices and other sensors are now used to estimate current and future traffic conditions [49, 50] and improve public transportation systems [9]. The future is always difficult to predict. In a survey of all authors of the machine learning conferences ICML and NeurIPS in 2015, Asian respondents predicted that unaided machines would be able to accomplish every individual task better and more cheaply than human workers by 2046 with 50% probability, while North American authors predicted that it would happen only by 2090 with 50% probability [41].
9.7 AI Ethics
AI can result in cheaper, better performing, more adaptive, more flexible, and more general automation solutions than more traditional automation techniques. Its increasing commercialization has raised a variety of ethical concerns, as the following examples demonstrate in the context of driverless cars: (1) Adding a small amount of noise to images that is not noticeable by humans can change image recognition results dramatically, which might result in driverless cars not recognizing stop signs that have been slightly altered, for example, unintentionally with dirt or intentionally with chewing gum [96]. (2) Driverless trucks might replace millions of truck
drivers in the future [65]. AI also creates jobs, but they often require different skill sets than the eliminated ones [141]. (3) Driverless cars might have to make split-second decisions on how to avoid accidents. If a child suddenly runs onto the road, they might have to decide whether to hit the child or avoid it and potentially injure their passengers (trolley problem [5, 128]), which ideally requires explicit ethical judgment on their part [28]. Of course, other applications have also raised concerns: (1) Microsoft's learning chatbot Tay made racist remarks after less than a day on Twitter [86], and (2) a Tinder chatbot promoted the movie "Ex Machina" by pretending to be a girl on an online dating site [27]. (3) Significantly fewer women than men were shown online ads promoting well-paying jobs [83], and (4) decision-support systems wrongly labeled more African-American than Caucasian arrested people as potential re-offenders [100], which affects their bail bonds. Overall, AI systems can process large quantities of data, detect regularities in them, draw inferences from them, and determine effective courses of action – sometimes as part of hardware that is able to perform many different, versatile, and potentially dangerous actions. The behavior of AI systems can also be difficult to validate, predict, or explain since they are complex, reason in ways different from humans, and can change their behavior with learning. Finally, their behavior can also be difficult to monitor by humans in case of fast decisions, such as buy and sell decisions on stock markets. Therefore, one needs to worry about the reliability, robustness, and safety of AI systems, provide oversight of their operation, ensure that their behavior is consistent with social norms and human values, determine who is liable for their decisions, and ensure that they impact the standard of living, distribution, and quality of work and other social and economic aspects in a positive way. (The previous four sentences were rephrased from [15].) These issues have resulted in the AI community focusing more on the explainability and fairness of decisions made by AI systems and starting conferences such as the AAAI/ACM AI, Ethics, and Society (AIES) conference, policy makers trying to regulate AI and its applications, and the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems creating guidelines and standards for "ethically aligned design" [20] to help designers and developers with the creation of AI systems and safeguard them from liability. In general, philosophy has studied ethics for a long time, and the resulting theories apply in this context as well, such as deontology (law-based ethics, as exemplified by Asimov's three laws of robotics [4]), consequentialism (utilitarian ethics), and teleological ethics (virtue ethics) [148]. See additional details on AI and automation in other chapters of this Handbook, particularly in Chs. 8, 10, 15, 17, 70, and 72; and on AI ethics in Chs. 3 and 34.
Further Reading
See [108], the most popular and very comprehensive AI textbook, for in-depth information on AI and its techniques. We followed it in our emphasis on rational agents and the description of the history of AI in Sect. 9.5. See [53, 84, 88] for additional AI applications in automation.

Acknowledgments We thank Zhe Chen, Taoan Huang, Shariq Iqbal, Nitin Kamra, Zhanhao Xiao, and Han Zhang for their helpful comments on the draft of this chapter. We also thank Amazon Robotics for providing us with photos of one of their fulfillment centers. Our research was supported by the National Science Foundation (NSF) under grant numbers 1409987, 1724392, 1817189, 1837779, and 1935712 as well as a gift from Amazon. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the sponsoring organizations, agencies, or the US government.
References 1. Intelligence. Merriam-Webster (2020). https://www.merriamwebster.com/dictionary/intelligence. Accessed 27 Oct 2020 2. Ackerman, E.: Amazon uses 800 robots to run this warehouse. IEEE Spectrum (2019). Accessed 27 Oct 2020 3. Amazon Staff: What robots do (and don’t do) at Amazon fulfillment centers. The Amazon blog dayone (2020). Accessed 27 Oct 2020 4. Asimov, I.: Runaround. I, Robot p. 40 (1950) 5. Awad, E., Dsouza, S., Kim, R., Schulz, J., Henrich, J., Shariff, A., Bonnefon, J.F., Rahwan, I.: The moral machine experiment. Nature 563, 59–64 (2018) 6. Banko, M., Brill, E.: Scaling to very very large corpora for natural language disambiguation. In: Annual Meeting of the Association for Computational Linguistics, pp. 26–33 (2001) 7. Belgiu, M., Dr˘agu¸t, L.: Random forest in remote sensing: a review of applications and future directions. ISPRS J. Photogramm. Remote Sensing 114, 24–31 (2016) 8. Belov, G., Czauderna, T., de la Banda, M., Klapperstueck, M., Senthooran, I., Smith, M., Wybrow, M., Wallace, M.: Process plant layout optimization: equipment allocation. In: International Conference on Principles and Practice of Constraint Programming, pp. 473–489 (2018) 9. Berlingerio, M., Calabrese, F., Di Lorenzo, G., Nair, R., Pinelli, F., Sbodio, M.L.: AllAboard: A system for exploring urban mobility and optimizing public transport using cellphone data. In: Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pp. 663–666 (2013) 10. Bernard, D., Dorais, G., Fry, C., Jr, E., Kanefsky, B., Kurien, J., Millar, W., Muscettola, N., Nayak, P., Pell, B., Rajan, K., Rouquette, N., Smith, B., Williams, B.: Design of the Remote Agent experiment for spacecraft autonomy. In: IEEE Aerospace Conference, pp. 259–281 (1998) 11. Booth, K.E.C., Tran, T.T., Nejat, G., Beck, J.C.: Mixed-integer and constraint programming techniques for mobile robot task planning. IEEE Robot. Autom. Lett. 1(1), 500–507 (2016) 12. Boyd, S., Vandenberghe, L.: Convex Optimization, pp. 146–156. Cambridge University Press, Cambridge (2004) 13. Brown, N., Sandholm, T., Machine, S.: Libratus: The superhuman AI for no-limit Poker. In: International Joint Conference on Artificial Intelligence, pp. 5226–5228 (2017)
227 14. Buehler, M., Iagnemma, K., Singh, S.: The DARPA Urban Challenge: Autonomous Vehicles in City Traffic. Springer, Berlin (2009) 15. Burton, E., Goldsmith, J., Koenig, S., Kuipers, B., Mattei, N., Walsh, T.: Ethical considerations in artificial intelligence courses. AI Mag. 38(2), 22–34 (2017) 16. Campbell, M., Hoane, A.J., Hsu, F.H.: Deep blue. Artif. Intell. 134(1–2), 57–83 (2002) 17. Cao, S., Wen, L., Li, X., Gao, L.: Application of generative adversarial networks for intelligent fault diagnosis. In: IEEE International Conference on Automation Science and Engineering, pp. 711–715 (2018) 18. Carbery, C.M., Woods, R., Marshall, A.H.: A Bayesian network based learning system for modelling faults in large-scale manufacturing. In: IEEE International Conference on Industrial Technology, pp. 1357–1362 (2018) 19. Cerrada, M., Zurita, G., Cabrera, D., Sánchez, R.V., Artés, M., Li, C.: Fault diagnosis in spur gears based on genetic algorithm and random forest. Mech. Syst. Signal Process. 70–71, 87–103 (2016) 20. Chatila, R., Havens, J.C.: The IEEE global initiative on ethics of autonomous and intelligent systems. In: Robotics and Well-Being, pp. 11–16. Springer, Berlin (2019) 21. Chen, Q., Wynne, R., Goulding, P., Sandoz, D.: The application of principal component analysis and kernel density estimation to enhance process monitoring. Control Eng. Practice 8(5), 531–543 (2000) 22. Chowanda, A., Blanchfield, P., Flintham, M., Valstar, M.F.: ERiSA: Building emotionally realistic social game-agents companions. In: International Conference on Intelligent Virtual Agents, pp. 134–143 (2014) 23. Croce, F., Delfino, B., Fazzini, P.A., Massucco, S., Morini, A., Silvestro, F., Sivieri, M.: Operation and management of the electric system for industrial plants: an expert system prototype for load-shedding operator assistance. IEEE Trans. Ind. Appl. 37(3), 701–708 (2001) 24. Cropp, M.: Virtual healthcare assistant for the elderly piques interest. Radio New Zealand (2019). Accessed 27 Oct 2020 25. Cubero, S., Aleixos, N., Moltó, E., Gómez-Sanchis, J., Blasco, J.: Advances in machine vision applications for automatic inspection and quality evaluation of fruits and vegetables. Food Bioprocess Technol. 4(4), 487–504 (2011) 26. de Wilde, B., ter Mors, A., Witteveen, C.: Push and rotate: cooperative multi-agent path planning. In: International Conference on Autonomous Agents and Multi-Agent Systems, pp. 87–94 (2013) 27. Edwards, C., Edwards, A., Spence, P.R., Westerman, D.: Initial interaction expectations with robots: Testing the human-to-human interaction script. Commun. Stud. 67(2), 227–238 (2016) 28. Etzioni, A., Etzioni, O.: Incorporating ethics into artificial intelligence. J. Ethics 21(4), 403–418 (2017) 29. Fathollahi, M., Kasturi, R.: Autonomous driving challenge: to infer the property of a dynamic object based on its motion pattern. In: European Conference on Computer Vision, pp. 40–46 (2016) 30. Fatima, M., Pasha, M.: Survey of machine learning algorithms for disease diagnostic. J. Intell. Learn. Syst. Appl. 9(1), 1–16 (2017) 31. Ferrucci, D., Brown, E., Chu-Carroll, J., Fan, J., Gondek, D., Kalyanpur, A.A., Lally, A., Murdock, J.W., Nyberg, E., Prager, J., Schlaefer, N., Welty, C.: Building Watson: an overview of the DeepQA project. AI Mag. 31(3), 59–79 (2010) 32. Fiore, U., De Santis, A., Perla, F., Zanetti, P., Palmieri, F.: Using generative adversarial networks for improving classification effectiveness in credit card fraud detection. Inform. Sci. 479, 448– 455 (2019) 33. 
Fitting, M.: First-Order Logic and Automated Theorem Proving, 2nd edn. Springer, Berlin (1996) 34. Floudas, C.A., Lin, X.: Mixed integer linear programming in process scheduling: modeling, algorithms, and applications. Ann. Oper. Res. 139(1), 131–162 (2005)
228 35. Fox, M., Long, D.: PDDL2.1: An extension to PDDL for expressing temporal planning domains. J. Artif. Intell. Res. 20, 61–124 (2003) 36. Gerevini, A., Haslum, P., Long, D., Saetti, A., Dimopoulos, Y.: Deterministic planning in the fifth international planning competition: PDDL3 and experimental evaluation of the planners. Artif. Intell. 173(5-6), 619–668 (2009) 37. Ghallab, M., Knoblock, C., Wilkins, D., Barrett, A., Christianson, D., Friedman, M., Kwok, C., Golden, K., Penberthy, S., Smith, D., Sun, Y., Weld, D.: PDDL—the planning domain definition language. Tech. rep., Yale Center for Computational Vision and Control (1998) 38. Gielen, G.G.E., Walscharts, H.C.C., Sansen, W.: Analog circuit design optimization based on symbolic simulation and simulated annealing. IEEE J. Solid-State Circuits 25(3), 707–713 (1990) 39. Godart, F.C., Claes, K.: Semantic networks and the market interface: lessons from luxury watchmaking. In: Research in the Sociology of Organizations, Structure, Content and Meaning of Organizational Networks, pp. 113–141. Emerald Publishing Limited (2017) 40. Govender, P., Sivakumar, V.: Application of k-means and hierarchical clustering techniques for analysis of air pollution: a review (1980–2019). Atmos. Pollut. Res. 11(1), 40–56 (2020) 41. Grace, K., Salvatier, J., Dafoe, A., Zhang, B., Evans, O.: Viewpoint: when will AI exceed human performance? Evidence from AI experts. J. Artif. Intell. Res. 62, 729–754 (2018) 42. Gratch, J., DeVault, D., Lucas, G.M., Marsella, S.: Negotiation as a challenge problem for virtual humans. In: International Conference on Intelligent Virtual Agents, pp. 201–215 (2015) 43. Grebitus, C., Bruhn, M.: Analyzing semantic networks of pork quality by means of concept mapping. Food Quality Preference 19(1), 86–96 (2008) 44. Haber, R.E., Alique, J.R., Alique, A., Hernández, J., UribeEtxebarria, R.: Embedded fuzzy-control system for machining processes: Results of a case study. Comput. Ind. 50(3), 353–366 (2003) 45. Halevy, A.Y., Norvig, P., Pereira, F.: The unreasonable effectiveness of data. IEEE Intell. Syst. 24(2), 8–12 (2009) 46. Han, T., Liu, C., Yang, W., Jiang, D.: A novel adversarial learning framework in deep convolutional neural network for intelligent diagnosis of mechanical faults. Knowl.-Based Syst. 165, 474–487 (2019) 47. Helmert, M., Lasinger, H.: The scanalyzer domain: greenhouse logistics as a planning problem. In: International Conference on Automated Planning and Scheduling, pp. 234–237 (2010) 48. Hewamalage, H., Bergmeir, C., Bandara, K.: Recurrent neural networks for time series forecasting: Current status and future directions. Int. J. Forecasting 37(1), 388–427 (2021) 49. Hofleitner, A., Herring, R., Abbeel, P., Bayen, A.M.: Learning the dynamics of arterial traffic from probe data using a dynamic Bayesian network. IEEE Trans. Intell. Transp. Syst. 13(4), 1679– 1693 (2012) 50. Horvitz, E., Apacible, J., Sarin, R., Liao, L.: Prediction, expectation, and surprise: methods, designs, and study of a deployed traffic forecasting service. In: Conference on Uncertainty in Artificial Intelligence, pp. 275–283 (2005) 51. Huang, H., Tsai, W.T., Paul, R.: Automated model checking and testing for composite web services. In: IEEE International Symposium on Object-Oriented Real-Time Distributed Computing, pp. 300–307 (2005) 52. Huang, T., Dilkina, B., Koenig, S.: Learning to resolve conflicts for multi-agent path finding with conflict-based search. In: IJCAI Workshop on Multi-Agent Path Finding (2020) 53. 
Ivanov, S., Webster, C.: Robots, Artificial Intelligence, and Service Automation in Travel, Tourism and Hospitality. Emerald Publishing Limited (2019)
S. Koenig et al. 54. Jain, N.K., Jain, V., Deb, K.: Optimization of process parameters of mechanical type advanced machining processes using genetic algorithms. Int. J. Mach. Tools Manuf. 47(6), 900–919 (2007) 55. Jochem, T., Pomerleau, D.: Life in the fast lane: The evolution of an adaptive vehicle control system. AI Mag. 17(2), 11–50 (1996) 56. Jones, B., Jenkinson, I., Yang, Z., Wang, J.: The use of Bayesian network modelling for maintenance planning in a manufacturing industry. Reliab. Eng. Syst. Saf. 95(3), 267–277 (2010) 57. Jones, D.S., Fisher, J.N.: Linear programming aids decisions on refinery configurations. In: Handbook of Petroleum Processing, pp. 1333–1347. Springer, Berlin (2006) 58. Kaelbling, L.P., Lozano-Pérez, T.: Integrated task and motion planning in belief space. Int. J. Robot. Res. 32(9–10), 1194–1227 (2013) 59. Kalashnikov, D., Irpan, A., Pastor, P., Ibarz, J., Herzog, A., Jang, E., Quillen, D., Holly, E., Kalakrishnan, M., Vanhoucke, V., Levine, S.: Scalable deep reinforcement learning for vision-based robotic manipulation. In: Conference on Robot Learning, vol. 87, pp. 651–673 (2018) 60. Karabadji, N.E.I., Khelf, I., Seridi, H., Laouar, L.: Genetic optimization of decision tree choice for fault diagnosis in an industrial ventilator. In: Condition Monitoring of Machinery in NonStationary Operations, pp. 277–283. Springer, Berlin (2012) 61. Kautz, H., Selman, B.: Planning as satisfiability. In: European Conference on Artificial Intelligence, pp. 359–363 (1992) 62. Kendall, A., Hawke, J., Janz, D., Mazur, P., Reda, D., Allen, J., Lam, V., Bewley, A., Shah, A.: Learning to drive in a day. In: IEEE International Conference on Robotics and Automation, pp. 8248–8254 (2019) 63. Kenny, P., Parsons, T.D., Gratch, J., Leuski, A., Rizzo, A.A.: Virtual patients for clinical therapist skills training. In: International Workshop on Intelligent Virtual Agents, pp. 197–210 (2007) 64. Kianpisheh, S., Charkari, N.M.: Dynamic power management for sensor node in WSN using average reward MDP. In: International Conference on Wireless Algorithms, Systems, and Applications, pp. 53–61 (2009) 65. Kitroeff, N.: Robots could replace 1.7 million american truckers in the next decade. Los Angeles Times (2016). Accessed 27 Oct 2020 66. Knepper, R.A., Layton, T., Romanishin, J., Rus, D.: IkeaBot: an autonomous multi-robot coordinated furniture assembly system. In: IEEE International Conference on Robotics and Automation, pp. 855–862 (2013) 67. Krafcik, J.: Where the next 10 million miles will take us. Waymo Blog (2018). Accessed 27 Oct 2020 68. Kumra, S., Kanan, C.: Robotic grasp detection using deep convolutional neural networks. In: IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 769–776 (2017) 69. Lee, K.B., Cheon, S., Kim, C.O.: A convolutional neural network for fault classification and diagnosis in semiconductor manufacturing processes. IEEE Trans. Semicond. Manuf. 30(2), 135–142 (2017) 70. Lee, K.F., Mahajan, S.: BILL: A table-based, knowledge-intensive Othello program. Tech. rep., Carnegie-Mellon University (1986) 71. Lee, T.B.: Waymo finally launches an actual public, driverless taxi service. Ars Technica (2020). Accessed 27 Oct 2020 72. Lemke, C., Budka, M., Gabrys, B.: Metalearning: a survey of trends and technologies. Artif. Intell. Rev. 44(1), 117–130 (2015) 73. Li, J., Harabor, D., Stuckey, P.J., Ma, H., Koenig, S.: Disjoint splitting for multi-agent path finding with conflict-based search. In: International Conference on Automated Planning and Scheduling, pp. 
279–283 (2019) 74. Li, Y., Zhang, X.: Diffusion maps based k-nearest-neighbor rule technique for semiconductor manufacturing process fault detection. Chemom. Intell. Lab. Syst. 136, 47–57 (2014)
75. Ma, H., Harabor, D., Stuckey, P., Li, J., Koenig, S.: Searching with consistent prioritization for multi-agent path finding. In: AAAI Conference on Artificial Intelligence, pp. 7643–7650 (2019) 76. Ma, H., Tovey, C., Sharon, G., Kumar, T.K.S., Koenig, S.: Multi-agent path finding with payload transfers and the packageexchange robot-routing problem. In: AAAI Conference on Artificial Intelligence, pp. 3166–3173 (2016) 77. Ma, L., Cheng, L., Li, M., Liu, Y., Ma, X.: Training set size, scale, and features in geographic object-based image analysis of very high resolution unmanned aerial vehicle imagery. ISPRS J. Photogramm. Remote Sensing 102, 14–27 (2015) 78. Mahler, J., Goldberg, K.: Learning deep policies for robot bin picking by simulating robust grasping sequences. In: Annual Conference on Robot Learning, pp. 515–524 (2017) 79. Mahler, J., Liang, J., Niyaz, S., Laskey, M., Doan, R., Liu, X., Ojea, J.A., Goldberg, K.: Dex-net 2.0: Deep learning to plan robust grasps with synthetic point clouds and analytic grasp metrics. In: Robotics: Science and Systems (2017) 80. Mahler, J., Matl, M., Liu, X., Li, A., Gealy, D., Goldberg, K.: Dex-net 3.0: computing robust robot suction grasp targets in point clouds using a new analytic model and deep learning. In: IEEE International Conference on Robotics and Automation, pp. 1–8 (2018) 81. Mahler, J., Matl, M., Satish, V., Danielczuk, M., DeRose, B., McKinley, S., Goldberg, K.: Learning ambidextrous robot grasping policies. Sci. Robot. 4(26), eaau4984 (2019) 82. Mahler, J., Pokorny, F.T., Hou, B., Roderick, M., Laskey, M., Aubry, M., Kohlhoff, K., Kröger, T., Kuffner, J., Goldberg, K.: Dex-net 1.0: a cloud-based network of 3D objects for robust grasp planning using a multi-armed bandit model with correlated rewards. In: IEEE International Conference on Robotics and Automation, pp. 1957–1964 (2016) 83. Martinho-Truswell, E.: As jobs are automated, will men and women be affected equally? Harvard Business Review (2019). Accessed 27 Oct 2020 84. Masrour, T., Cherrafi, A., Hassani, I.E.: Artificial Intelligence and Industrial Applications: Smart Operation Management. Springer, Berlin (2021) 85. Matai, R., Singh, S.P., Mittal, M.L.: Traveling salesman problem: an overview of applications, formulations, and solution approaches. Traveling Salesman Probl. Theory Appl. 1, 1–24 (2010) 86. Mathur, V., Stavrakas, Y., Singh, S.: Intelligence analysis of Tay Twitter bot. In: International Conference on Contemporary Computing and Informatics, pp. 231–236 (2016) 87. Mirzaei, A., Moallem, M., Mirzaeian, B., Fahimi, B.: Design of an optimal fuzzy controller for antilock braking systems. In: IEEE Vehicle Power and Propulsion Conference, pp. 823–828 (2005) 88. Mishra, S.K., Polkowski, Z., Borah, S., Dash, R.: AI in Manufacturing and Green Technology, 1st edn. CRC Press, Boca Raton (2021) 89. Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A.A., Veness, J., Bellemare, M.G., Graves, A., Riedmiller, M., Fidjeland, A.K., Ostrovski, G., Petersen, S., Beattie, C., Sadik, A., Antonoglou, I., King, H., Kumaran, D., Wierstra, D., Legg, S., Hassabis, D.: Human-level control through deep reinforcement learning. Nature 518(7540), 529–533 (2015) 90. Montemerlo, M., Thrun, S., Dahlkamp, H., Stavens, D., Strohband, S.: Winning the DARPA grand challenge with an ai robot. In: AAAI National Conference on Artificial Intelligence, pp. 982– 987 (2006) 91. 
Moravˇcík, M., Schmid, M., Burch, N., Lis`y, V., Morrill, D., Bard, N., Davis, T., Waugh, K., Johanson, M., Bowling, M.: DeepStack: expert-level artificial intelligence in heads-up no-limit Poker. Science 356(6337), 508–513 (2017) 92. Munir, A., Gordon-Ross, A.: An MDP-based dynamic optimization methodology for wireless sensor networks. IEEE Trans. Parallel Distrib. Syst. 23(4), 616–625 (2012)
229 93. Nägeli, T., Meier, L., Domahidi, A., Alonso-Mora, J., Hilliges, O.: Real-time planning for automated multi-view drone cinematography. ACM Trans. Graph. 36(4) (2017) 94. Nannapaneni, S., Mahadevan, S., Rachuri, S.: Performance evaluation of a manufacturing process under uncertainty using Bayesian networks. J. Cleaner Prod. 113, 947–959 (2016) 95. Newell, A., Simon, H.A.: Computer science as empirical inquiry: symbols and search. Commun. ACM 19(3), 113–126 (1976) 96. Papernot, N., McDaniel, P., Goodfellow, I., Jha, S., Celik, Z.B., Swami, A.: Practical black-box attacks against machine learning. In: ACM on Asia Conference on Computer and Communications Security, pp. 506–519 (2017) 97. Pechoucek, M., Rehak, M., Charvat, P., Vlcek, T., Kolar, M.: Agent-based approach to mass-oriented production planning: case study. IEEE Trans. Syst. Man Cybernet. C (Appl. Rev.) 37(3), 386–395 (2007) 98. Peer, J.: A PDDL based tool for automatic web service composition. In: Principles and Practice of Semantic Web Reasoning, pp. 149–163 (2004) 99. Perrault, R., Shoham, Y., Brynjolfsson, E., Clark, J., Etchemendy, J., Grosz, B., Lyons, T., Manyika, J., Mishra, S., Niebles, J.C.: The AI index 2019 annual report. Tech. rep., Stanford University (2019) 100. Piano, S.L.: Ethical principles in machine learning and artificial intelligence: Cases from the field and possible ways forward. Hum. Soc. Sci. Commun. 7(1), 1–7 (2020) 101. Pochet, Y., Wolsey, L.A.: Production Planning by Mixed Integer Programming. Springer, Berlin (2006) 102. Radhakrishnan, S., Bharadwaj, V., Manjunath, V., Srinath, R.: Creative intelligence—automating car design studio with generative adversarial networks (GAN). In: Machine Learning and Knowledge Extraction, pp. 160–175 (2018) 103. Ravikumar, S., Ramachandran, K.I., Sugumaran, V.: Machine learning approach for automated visual inspection of machine components. Expert Syst. Appl. 38(4), 3260–3266 (2011) 104. Reddy, S., Gal, Y., Shieber, S.M.: Recognition of users’ activities using constraint satisfaction. In: Houben, G.J., McCalla, G., Pianesi, F., Zancanaro, M. (eds.) International Conference on User Modeling, Adaptation, and Personalization, pp. 415–421 (2009) 105. Reyes, A., Ibargüengoytia, P.H., Sucar, L.E.: Power plant operator assistant: an industrial application of factored MDPs. In: Mexican International Conference on Artificial Intelligence, pp. 565–573 (2004) 106. Rintanen, J.: Planning as satisfiability: heuristics. Artif. Intell. 193, 45–86 (2012) 107. Russell, E.L., Chiang, L.H., Braatz, R.D.: Fault detection in industrial processes using canonical variate analysis and dynamic principal component analysis. Chemom. Intell. Lab. Syst. 51(1), 81–93 (2000) 108. Russell, S.J., Norvig, P.: Artificial Intelligence: A Modern Approach, 4th edn. Pearson (2021) 109. Sakai, Y., Nonaka, Y., Yasuda, K., Nakano, Y.I.: Listener agent for elderly people with dementia. In: ACM/IEEE International Conference on Human-Robot Interaction, pp. 199–200 (2012) 110. Salcedo-Sanz, S., Rojo-Álvarez, J.L., Martínez-Ramón, M., Camps-Valls, G.: Support vector machines in engineering: an overview. Wiley Interdiscip. Rev. Data Mining Knowl. Discovery 4(3), 234–267 (2014) 111. Samuel, A.L.: Some studies in machine learning using the game of checkers. IBM J. Res. Dev. 3(3), 211–229 (1959) 112. Santoso, A.F., Supriana, I., Surendro, K.: Designing knowledge of the PPC with semantic network. J. Phys. Conf. Series 801, 12015 (2017) 113. 
Sartoretti, G., Kerr, J., Shi, Y., Wagner, G., Kumar, S., Koenig, S., Choset, H.: PRIMAL: pathfinding via reinforcement and imitation multi-agent learning. IEEE Robot. Autom. Lett. 4(3), 2378–2385 (2019)
230 114. Schaeffer, J., Lake, R., Lu, P., Bryant, M.: CHINOOK: the world man-machine Checkers champion. AI Mag. 17(1), 21–29 (1996) 115. Schmitt, P.H.: First-order logic. In: Deductive Software Verification, pp. 23–47. Springer, Berlin (2016) 116. Seto, E., Leonard, K.J., Cafazzo, J.A., Barnsley, J., Masino, C., Ross, H.J.: Developing healthcare rule-based expert systems: case study of a heart failure telemonitoring system. Int. J. Med. Inform. 81(8), 556–565 (2012) 117. Sharon, G., Stern, R., Felner, A., Sturtevant, N.: Conflict-based search for optimal multi-agent pathfinding. Artif. Intell. 219, 40– 66 (2015) 118. Shoham, Y., Perrault, R., Brynjolfsson, E., Clark, J., LeGassick, C.: The AI index 2017 annual report. Tech. rep., Stanford University (2017) 119. Silver, D., Huang, A., Maddison, C.J., Guez, A., Sifre, L., van den Driessche, G., Schrittwieser, J., Antonoglou, I., Panneershelvam, V., Lanctot, M., Dieleman, S., Grewe, D., Nham, J., Kalchbrenner, N., Sutskever, I., Lillicrap, T., Leach, M., Kavukcuoglu, K., Graepel, T., Hassabis, D.: Mastering the game of Go with deep neural networks and tree search. Nature 529, 484–489 (2016) 120. Soh, L.K., Tsatsoulis, C., Gineris, D., Bertoia, C.: ARKTOS: an intelligent system for SAR sea ice image classification. IEEE Trans. Geosci. Remote Sensing 42(1), 229–248 (2004) 121. Srinivasu, D., Babu, N.R.: A neuro-genetic approach for selection of process parameters in abrasive waterjet cutting considering variation in diameter of focusing nozzle. Appl. Soft Comput. 8(1), 809–819 (2008) 122. Standley, T.: Finding optimal solutions to cooperative pathfinding problems. In: AAAI Conference on Artificial Intelligence, pp. 173–178 (2010) 123. Standley, T., Korf, R.: Complete algorithms for cooperative pathfinding problems. In: International Joint Conference on Artificial Intelligence, pp. 668–673 (2011) 124. Stricker, N., Kuhnle, A., Sturm, R., Friess, S.: Reinforcement learning for adaptive order dispatching in the semiconductor industry. CIRP Ann. 67(1), 511–514 (2018) 125. Sugumaran, V., Muralidharan, V., Ramachandran, K.: Feature selection using decision tree and classification through proximal support vector machine for fault diagnostics of roller bearing. Mech. Syst. Signal Process. 21(2), 930–942 (2007) 126. Surynek, P.: Reduced time-expansion graphs and goal decomposition for solving cooperative path finding sub-optimally. In: International Joint Conference on Artificial Intelligence, pp. 1916– 1922 (2015) 127. Tchertchian, N., Yvars, P.A., Millet, D.: Benefits and limits of a constraint satisfaction problem/life cycle assessment approach for the ecodesign of complex systems: a case applied to a hybrid passenger ferry. J. Cleaner Prod. 42, 1–18 (2013) 128. Thomson, J.J.: The trolley problem. Yale Law J. 94(6), 1395–1415 (1985) 129. Thrun, S., Burgard, W., Fox, D.: Probabilistic Robotics, chap. Application to Robot Control, pp. 503–507. MIT Press, Cambridge (2005) 130. Thrun, S., Montemerlo, M., Dahlkamp, H., Stavens, D., Aron, A., Diebel, J., Fong, P., Gale, J., Halpenny, M., Hoffmann, G., Lau, K., Oakley, C., Palatucci, M., Pratt, V., Stang, P., Strohband, S., Dupont, C., Jendrossek, L.E., Koelen, C., Markey, C., Rummel, C., van Niekerk, J., Jensen, E., Alessandrini, P., Bradski, G., Davies, B., Ettinger, S., Kaehler, A., Nefian, A., Mahoney, P.: Stanley: the robot that won the DARPA grand challenge. J. Field Robot. 23(9), 661–692 (2006) 131. 
Tsiourti, C., Moussa, M.B., Quintas, J., Loke, B., Jochem, I., Lopes, J.A., Konstantas, D.: A virtual assistive companion for older adults: design implications for a real-world application. In: SAI Intelligent Systems Conference, pp. 1014–1033 (2016)
S. Koenig et al. 132. Turing, A.M.: Computing machinery and intelligence. Mind LIX(236), 433–460 (1950) 133. Umarov, I., Mozgovoy, M.: Believable and effective AI agents in virtual worlds: current state and future perspectives. Int. J. Gaming Comput. Med. Simul. 4(2), 37–59 (2012) 134. Vardoulakis, L.P., Ring, L., Barry, B., Sidner, C.L., Bickmore, T.W.: Designing relational agents as long term social companions for older adults. In: International Conference on Intelligent Virtual Agents, pp. 289–302 (2012) 135. Vecchi, M.P., Kirkpatrick, S.: Global wiring by simulated annealing. IEEE Trans. Compu. Aided Design Integr. Circuits Syst. 2(4), 215–222 (1983) 136. Vembandasamy, K., Sasipriya, R., Deepa, E.: Heart diseases detection using naïve Bayes algorithm. Int. J. Innovative Sci. Eng. Technol. 2(9), 441–444 (2015) 137. Vinyals, O., Babuschkin, I., Czarnecki, W.M., Mathieu, M., Dudzik, A., Chung, J., Choi, D.H., Powell, R., Ewalds, T., Georgiev, P., Oh, J., Horgan, D., Kroiss, M., Danihelka, I., Huang, A., Sifre, L., Cai, T., Agapiou, J.P., Jaderberg, M., Vezhnevets, A.S., Leblond, R., Pohlen, T., Dalibard, V., Budden, D., Sulsky, Y., Molloy, J., Paine, T.L., Gulcehre, C., Wang, Z., Pfaff, T., Wu, Y., Ring, R., Yogatama, D., Wünsch, D., McKinney, K., Smith, O., Schaul, T., Lillicrap, T., Kavukcuoglu, K., Hassabis, D., Apps, C., Silver, D.: Grandmaster level in StarCraft II using multi-agent reinforcement learning. Nature 575, 350–354 (2019) 138. Wang, C., Wang, L., Qin, J., Wu, Z., Duan, L., Li, Z., Cao, M., Ou, X., Su, X., Li, W., Lu, Z., Li, M., Wang, Y., Long, J., Huang, M., Li, Y., Wang, Q.: Path planning of automated guided vehicles based on improved a-star algorithm. In: IEEE International Conference on Information and Automation, pp. 2071–2076 (2015) 139. Wang, G., Hasani, R.M., Zhu, Y., Grosu, R.: A novel Bayesian network-based fault prognostic method for semiconductor manufacturing process. In: IEEE International Conference on Industrial Technology, pp. 1450–1454 (2017) 140. Wang, K., Botea, A.: MAPP: a scalable multi-agent path planning algorithm with tractability and completeness guarantees. J. Artif. Intell. Res. 42, 55–90 (2011) 141. Wilson, H.J., Daugherty, P.R., Morini-Bianzino, N.: The jobs that artificial intelligence will create. MIT Sloan Manag. Rev. 58(4), 14–16 (2017) 142. Wolpert, D.H., Macready, W.G.: No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1(1), 67–82 (1997) 143. Wong, V.W., Ferguson, M., Law, K., Lee, Y.T.T., Witherell, P.: Automatic volumetric segmentation of additive manufacturing defects with 3D U-Net. In: AAAI Spring Symposium on AI in Manufacturing (2020) 144. Wurman, P., D’Andrea, R., Mountz, M.: Coordinating hundreds of cooperative, autonomous vehicles in warehouses. AI Mag. 29(1), 9–20 (2008) 145. Xu, J., Rahmatizadeh, R., Bölöni, L., Turgut, D.: Real-time prediction of taxi demand using recurrent neural networks. IEEE Trans. Intell. Transp. Syst. 19(8), 2572–2581 (2018) 146. Yang, B.S., Di, X., Han, T.: Random forests classifier for machine fault diagnosis. J. Mech. Sci. Technol. 22(9), 1716–1725 (2008) 147. Younes, H.L.S., Littman, M.L.: PPDDL1.0: an extension to PDDL for expressing planning domains with probabilistic effects. Tech. rep., Carnegie Mellon University (2004) 148. Yu, H., Shen, Z., Miao, C., Leung, C., Lesser, V.R., Yang, Q.: Building ethics into artificial intelligence. In: International Joint Conference on Artificial Intelligence, pp. 5527–5533 (2018) 149. 
Yu, J., LaValle, S.: Planning optimal paths for multiple robots on graphs. In: IEEE International Conference on Robotics and Automation, pp. 3612–3617 (2013) 150. Yu, J., LaValle, S.: Structure and intractability of optimal multirobot path planning on graphs. In: AAAI Conference on Artificial Intelligence, pp. 1444–1449 (2013)
151. Yusuf, S., Brown, D.J., Mackinnon, A., Papanicolaou, R.: Fault classification improvement in industrial condition monitoring via hidden Markov models and naïve Bayesian modeling. In: IEEE Symposium on Industrial Electronics & Applications, pp. 75–80 (2013) 152. Yvars, P.A.: Using constraint satisfaction for designing mechanical systems. Int. J. Interact. Design Manuf. 2, 161–167 (2008) 153. Zhao, Y., Collins, E.G.: Fuzzy PI control design for an industrial weigh belt feeder. IEEE Trans. Fuzzy Syst. 11(3), 311–319 (2003) 154. Zhao, Z., Yu, Z., Sun, Z.: Research on fuzzy road surface identification and logic control for anti-lock braking system. In: IEEE International Conference on Vehicular Electronics and Safety, pp. 380–387 (2006) 155. Zhou, Z., Wen, C., Yang, C.: Fault detection using random projections and k-nearest neighbor rule for semiconductor manufacturing processes. IEEE Trans. Semicond. Manuf. 28(1), 70–79 (2014) 156. Zolpakar, N.A., Lodhi, S.S., Pathak, S., Sharma, M.A.: Application of multi-objective genetic algorithm (MOGA) optimization in machining processes. In: Optimization of Manufacturing Processes, pp. 185–199. Springer, Berlin (2020)
Sven Koenig is a professor in computer science at the University of Southern California. His research focuses on AI techniques for making decisions that enable single situated agents (such as robots or decision-support systems) and teams of agents to act intelligently in their environments and exhibit goal-directed behavior in real-time, even if they have only incomplete knowledge of their environments, imperfect abilities to manipulate them, limited or noisy perception, or insufficient reasoning speed.
Shao-Hung Chan is a Ph.D. student in computer science at the University of Southern California. He received a Bachelor of Science degree from National Cheng-Kung University in 2017 and a Master of Science degree from National Taiwan University in 2019. His research focuses on designing planning techniques for real-world applications, including hierarchical planning techniques that allow teams of agents to navigate without collisions.
Jiaoyang Li is a Ph.D. student in computer science at the University of Southern California. She received a Bachelor’s degree in automation from Tsinghua University in 2017. Her research is on multi-agent systems and focuses on developing efficient planning techniques that enable hundreds of autonomous robots to fulfill navigation requests without collisions.
Yi Zheng is a Ph.D. student in computer science at the University of Southern California. He received a Bachelor of Science degree in computer science from the University of Southampton in 2019. His research focuses on developing scalable planning techniques for multi-agent systems. He is also interested in applying machine learning to multi-agent planning problems.
10 Cybernetics, Machine Learning, and Stochastic Learning Automata
B. John Oommen, Anis Yazidi, and Sudip Misra
Abstract

This chapter presents the area of Cybernetics and how it is related to Machine Learning (ML), Learning Automata (LA), Reinforcement Learning (RL), and Estimator Algorithms – all considered topics of Artificial Intelligence. In particular, Learning Automata are probabilistic finite state machines which have been used to model how biological systems can learn. The structure of such a machine can be fixed, or it can change with time. A Learning Automaton can also be implemented using action (choosing) probability updating rules which may or may not depend on estimates from the Environment being investigated.

Keywords

Artificial intelligence · Cybernetics · Estimator algorithms · Learning automation · Learning automata · Reinforcement learning · Stochastic learning

Contents

10.1 Introduction . . . 233
10.1.1 A General Overview . . . 234
10.1.2 Automation, Automaton, and Learning Automata . . . 235
10.2 A Learning Automaton . . . 236
10.3 Environment . . . 237
10.4 Classification of Learning Automata . . . 238
10.4.1 Deterministic Learning Automata . . . 238
10.4.2 Stochastic Learning Automata . . . 238
10.5 Estimator Algorithms . . . 241
10.5.1 Rationale and Motivation . . . 241
10.5.2 Continuous Estimator Algorithms . . . 241
10.5.3 Discrete Estimator Algorithms . . . 243
10.5.4 The Use of Bayesian Estimates in PAs . . . 244
10.5.5 Stochastic Estimator Learning Algorithm (SELA) . . . 244
10.6 Challenges in Analysis . . . 245
10.6.1 Previous Flawed Proofs . . . 245
10.6.2 The Rectified Proofs of the PAs . . . 246
10.6.3 Proofs for Finite-Time Analyses . . . 246
10.7 Hierarchical Schemes . . . 246
10.8 Point Location Problems . . . 247
10.9 Emerging Trends and Open Challenges . . . 247
10.10 Conclusions . . . 248
References . . . 248
B. John Oommen () School of Computer Science, Carleton University, Ottawa, ON, Canada Department of Information and Communication Technology, University of Agder, Grimstad, Norway e-mail: [email protected] A. Yazidi Computer Science Department, Oslo Metropolitan University, Oslo, Norway e-mail: [email protected] S. Misra Department of Computer Science and Engineering, Indian Institute of Technology Kharagpur, Kharagpur, India e-mail: [email protected].
10.1 Introduction
We aim to present a brief and comprehensive perspective of the area of "Cybernetics" and to explain how it is related to Machine Learning (ML); in particular, to learning automata (LA), reinforcement learning (RL), and estimator algorithms. Each of these is a field of study in its own right, with its own nomenclature and subfields of research. They are also all knit together within the general concept of artificial intelligence (AI). The aim of this introductory section is to present a nonmathematical perspective of the different fields. The formal concepts will be introduced when we proceed to delve into the details of LA, RL, and estimator algorithms, which will constitute the formal portion of the chapter.
10.1.1 A General Overview

Machine Learning
Machine Learning (ML) has made significant quantum leaps recently, and it is difficult to imagine a field of science or an industry that will not be affected by ML. This accelerated development in ML is mainly due to the breakthroughs in the field of Deep Learning (DL). DL denotes the new developments in the field of neural networks (NNs), which grew rapidly owing to the increase in the computational power of computers in the last two decades. This led to deeper NNs consisting of multiple layers, as well as to innovative architectures of NNs. Informally speaking, ML comprises three areas: supervised learning, unsupervised learning, and reinforcement learning (RL). Supervised learning operates with training data, i.e., data for which the ground truth is known, and the aim is to build a data-driven predictor that can predict the label of a data point whose ground truth is unknown. Unsupervised learning operates without the corresponding labels of the given data points and aspires to find patterns in the data; this, in turn, usually reduces to tasks associated with clustering. RL, under which the field of learning automata (LA) falls, is a form of learning that utilizes "trial and error." RL operates with feedback (typically received instantaneously) related to the performance of the chosen action, so that the learner learns from previous experience and improves a predetermined, long-term performance function. From a holistic perspective, it appears as if the closest field to cybernetics within the realm of ML is RL. Both cybernetics and RL share the same paradigm, namely, that of controlling a system based on a continuous feedback mechanism, which, in turn, formally characterizes the deviation that the decision/action has from the optimal goal.

Overview of ML and Connection to Cybernetics
Norbert Wiener is considered the father of cybernetics, and his works and seminal book [78] are regarded as pioneering references in the field. In an editorial article [43], cybernetics was perceived through the lenses of ML, and the article claimed that the developments within the area of NNs were first anchored in cybernetics. As early as in the 1940s, Norbert Wiener paved the way toward the field of brain-machine interfaces even as he observed and confirmed the need for "feedback processes, involving sensors, signals and actuators, everywhere around him, including in all living systems and in human–machine interactions" [43]. However, when it comes to brain-machine interfaces, cybernetics is concerned with the control, actuation, and understanding of the biological neural signals in order to process them and to issue commands directly from the brain. Such an understanding
of the nervous signals in brain-machine interfaces is a key enabler for medical applications, such as in brain-controlled prosthetics. ML and AI, in general, are rather concerned with mimicking the functioning of NNs to endow machines with humanlike intelligence. There are a multitude of ML models available in the literature. An abridged list of the well-known and classical ML methods includes: Logistic Regression, Decision Trees, Support Vector Machines, Naive Bayes, Random Forests, and neural networks [23]. However, the advent of the field of Deep Learning has led to more sophisticated and superior approaches than what one would call the "classical" ones. Deep Learning includes architectures such as deep NNs, deep belief networks, graph neural networks, recurrent NNs such as Long Short-Term Memory (LSTM), and convolutional NNs [19]. However, while cybernetics usually relies on physical models of the system under study, the latter approaches are concerned with building data-driven models. Based on evidence from the literature, we advocate that since cybernetics is a form of a closed-loop control system, it can benefit most from the advances within deep RL and within recurrent NNs that are made in the realm of Deep Learning. Furthermore, the field of cybernetics has generally benefited from NNs to assist in the design of neural controllers. NNs usually operate based on the concept of feedback backward learning, where the measured error is fed back to adjust the weights of the network, which is akin to the concept of the residual error in a closed-loop cybernetic system.
Applications of Cybernetics in Machine Learning
ML can benefit significantly from cybernetics in many ways. Cybernetics can help in the field of explainable artificial intelligence, in which the results of the proposed solution can be understood by humans. In such cases, one tries to explain complicated black-box ML models in a human-understandable manner. There is a stream of research at the intersection between cybernetics and neural systems, which has to do with the stability of NNs when they are perceived as dynamical systems. This stream is particularly centered on recurrent NNs [7, 9, 68] and concerns the stability of NNs. It is booming and is one of the richest research arenas within ML that is also relevant for the cybernetics community. This is of utmost importance to help the practitioner have a full understanding of the stability of the system, especially when it is implemented for autonomous robots (and vehicles) and in scenarios where the interactions with the physical world can lead to hazardous and unsafe situations [9].

Connection to Reinforcement Learning
From a cybernetic perspective, Markov Decision Processes (MDPs) and RL can be seen as being central to control. RL
algorithms are becoming more and more sophisticated, and their applications are increasing considerably. Stochastic LA fall under the umbrella of RL. The popularity of RL is increasing as new applications of its algorithms emerge. There are a variety of RL algorithms, such as deep Q-learning [25], policy gradient methods [50], trust region policy optimization [47], and proximal policy optimization [66]. MDPs are the basis of RL. An LA (which is the topic of interest of the major portion of this chapter) is a learning scheme, which is an extension of the multiarmed bandit problem. MDPs, however, consider a more complicated case where there are also states in the environment. Upon taking an action in a state, the learner receives an immediate reward and transits to another state according to a Markovian law. The learner is interested in the "long-term reward," which is usually a discounted reward. There have been different attempts to integrate LA as a part of the action selection at each state in MDPs. Some notable works include the results of [11], which uses the pursuit scheme (studied, in detail, later) for MDPs, and the work due to Wheeler and Narendra [67], where the authors propose a nonintuitive way of updating the probabilities of the LA at the different states by carrying out the update whenever the agent revisits the same state. The latter work [67] led to more interest in distributed multiagent learning based on teams of LA [30, 65]. According to Barto [51], who is considered to be the father of RL, "we can view reinforcement learning as a computationally simple, direct approach to the adaptive control of nonlinear systems." In a seminal paper entitled "RL is direct adaptive optimal control," Barto et al. treated Q-learning as a form of a proof-of-concept scheme for a novel control system design. Q-learning is considered a form of model-free learning. Thus, from this perspective, RL can be classified broadly under model-free learning and model-based learning. LA and the temporal difference method [51] also fall under the umbrella of model-free learning, and they all aim at learning an optimal policy without learning a model of the environment. Model-based learning methods such as Adaptive Dynamic Programming [8] learn an optimal policy by iteratively building a model of the environment. In a subsequent work, Barto established a link between dynamic programming, which is considered to be model-based learning, and control theory [8]. RL has gained popularity in control theory, and for a review of the applications of RL in industrial control, we refer the reader to [28]. For instance, Lee et al. designed a bioinspired RL mechanism for controlling an automated artificial pancreatic system [20]. The above is an attempt to provide the reader with an overall overview of ML, RL, AI, and LA and of how these domains are intertwined. We now proceed to a more detailed and formal survey of the area of LA and its applications.
10.1.2 Automation, Automaton, and Learning Automata

While etymology suggests that automaton and automation are closely related, these words describe different concepts, as explained in Ch. 1, Sect. 10.1.1. LAs have scores of applications, as described in this chapter and in the references of this chapter.
Learning Automata and Cybernetics
What is a learning automaton? What is "learning" all about? What are the different types of learning automata (LA) available? How are LA related to the general field of cybernetics? These are some of the fundamental issues that this chapter attempts to describe so that we can understand the potential impact of the mechanisms and their capabilities as primary tools that can be used to solve a host of very complex problems. Webster's dictionary defines "cybernetics" as the science of communication and control theory that is concerned especially with the comparative study of automatic control systems (as the nervous system, the brain, and mechanical-electrical communication systems). The word "cybernetics" itself has its etymological origins in the Greek root kybernan, meaning "to steer" or "to govern." Typically, as explained in the Encyclopaedia Britannica, "cybernetics is associated with models in which a monitor compares what is happening to a system at various sampling times with some standard of what should be happening, and a controller adjusts the system's behaviour accordingly." Of course, the goal of the exercise is to design the "controller" so as to appropriately adjust the system's behavior. Modern cybernetics is an interdisciplinary field, which philosophically encompasses an ensemble of areas including neuroscience, computer science, cognition, control systems, and electrical networks. An automaton is a self-operating machine attempting to achieve a certain goal. As part of its mandate, it is supposed to respond to a sequence of instructions to attain this goal. These instructions are usually predetermined and could be deterministic or stochastic. The automaton is designed so as to work with a random environment, and it does this in such a way that it achieves "learning." From a psychological perspective, the latter implies that the automaton can acquire knowledge and, consequently, modify its behavior based on the experience that it has gleaned. Thus, we refer to the automata studied in this chapter as adaptive automata or learning automata, since they can adapt to the dynamics of the environment in which they operate. More precisely, the adaptive automata that we study in this chapter glean information from the responses that they receive from the environment by virtue of interacting with it. Further, the automata attempt to learn the best action from a set of possible actions that are offered to them by the random environment, which could be
stationary or nonstationary. The automaton is thus a model for a stochastic "decision maker," whose task is to arrive at the best action. What do "learning automata" have to do with "cybernetics"? The answer probably lies in the results of the Russian pioneer Tsetlin [59, 60]. Indeed, when Tsetlin first proposed his theory of learning, his aim was to use the principles of automata theory to model how biological systems could learn. Little did he guess that his seminal results would lead to a completely new paradigm for learning and a subfield of cybernetics. Narendra and Thathachar [27], who have pioneered this field, have described the operations of the LA as follows: ". . . a decision maker operates in the random environment and updates its strategy for choosing actions on the basis of the elicited response. The decision maker, in such a feedback configuration of decision maker (or automaton) and environment, is referred to as the learning automaton. The automaton has a finite set of actions, and corresponding to each action, the response of the environment can be either favorable or unfavorable with a certain probability." The latter is a precise quote from [27] (p. 3). The applications of LA are predominantly in optimization problems where one needs to determine the optimal action from a set of actions. LA are particularly pertinent when the level of noise or uncertainty in the environment is high; conversely, they are not so helpful when the noise level is low, where alternate optimization strategies are advocated [27]. The first studies with LA models date back to the studies by mathematical psychologists like Bush and Mosteller [10] and Atkinson et al. [3]. In 1961, the Russian mathematician Tsetlin [59, 60] studied deterministic LA in detail. Varshavskii and Vorontsova [62] introduced the stochastic variable structure versions of the LA. Tsetlin's deterministic automata [59, 60] and Varshavskii and Vorontsova's stochastic automata [62] were the major initial motivators of further studies in this area. Following them, several theoretical and experimental studies have been conducted by several researchers: K. Narendra, M. A. L. Thathachar, S. Lakshmivarahan, M. Obaidat, K. Najim, A. S. Poznyak, N. Baba, L. G. Mason, G. Papadimitriou, O.-C. Granmo, X. Zhang, L. Jiao, A. Yazidi, and B. J. Oommen, just to mention a few. A comprehensive overview of the research in the field of LA can be found in the classic text by Narendra and Thathachar [27] (and its more recent edition); in the special issues of the journal IEEE Transactions on Systems, Man, and Cybernetics, Part B [31]; and in a recent paper published by two of the present authors and others [71]. It should be noted that none of the work described in this chapter is original. Most of the discussions, terminologies, and all the algorithms that are explained in this chapter are fairly standard and are taken from the corresponding existing pieces of literature.
LA have boasted scores of applications. These include theoretical problems like the graph partitioning problem [36]. They have been used in controlling intelligent vehicles [61]. When it concerns neural networks and hidden Markov models, Meybodi et al. [21] have used them in adapting the former, and the authors of [14] have applied them in training the latter. Network call admission, traffic control, and quality of service routing have been resolved using LA in [5, 6, 64], while the authors of [48] have studied the problem of achieving distributed scheduling using LA. They have also found applications in tackling problems involving network and communications issues [24, 32, 37, 41]. Apart from these, the entire field of LA and stochastic learning has had a myriad of applications listed in the reference books [17, 26, 27, 42, 58]. We conclude this introductory section by emphasizing that this brief chapter is not a comprehensive survey of the field of LA. In particular, we have not addressed LA that possess an infinite number of actions [46], systems that deal with "teachers" and "liars" [40], nor any of the myriad issues that arise when we deal with networks of LA [58]. Also, the reader should not expect a mathematically deep exegesis of the field. Due to space limitations, the results available are merely cited. Additionally, while the results that are reported in the acclaimed books are merely alluded to, we give special attention to the more recent results – namely, those which pertain to the discretized, pursuit and estimator algorithms, the hierarchical schemes, and those that deal with the Stochastic Point Location (SPL) Problem. Finally, we mention that the bibliography cited here is by no means comprehensive. It is brief and is intended to serve as a pointer to the representative papers in the theory and applications of LA.
10.2 A Learning Automaton
To present a formal introduction to the field, we begin with some initial sections, which essentially draw from [17, 26, 27, 42, 58]. In the field of automata theory, an automaton can be defined as a quintuple consisting of a set of states, a set of outputs or actions, an input, a function that maps the current state and input to the next state, and a function that maps a current state (and input) into the current output.

Definition 10.1 An LA is defined by a quintuple ⟨A, B, Q, F(., .), G(.)⟩, where:

(i) A = {α1, α2, ..., αr} is the set of outputs or actions, and α(t) is the action chosen by the automaton at any instant t.
(ii) B = {β1, β2, ..., βm} is the set of inputs to the automaton. β(t) is the input at any instant t. The set B can be
finite or infinite. In this chapter, we consider the case when m = 2, i.e., when B = {0, 1}, where β = 0 represents the event that the LA has been rewarded, and β = 1 represents the event that the LA has been penalized.
(iii) Q = {q1, q2, ..., qs} is the set of finite states, where q(t) denotes the state of the automaton at any instant t.
(iv) F(., .) : Q × B → Q is a mapping in terms of the state and input at the instant t, such that q(t + 1) = F[q(t), β(t)]. It is called a transition function, i.e., a function that determines the state of the automaton at any subsequent time instant t + 1. This mapping can either be deterministic or stochastic.
(v) G(.) is a mapping G : Q → A, called the output function. Depending on the state at a particular instant, this function determines the output of the automaton at the same instant as α(t) = G[q(t)]. This mapping can, again, be either deterministic or stochastic. Without loss of generality, G is deterministic.

If the sets Q, B, and A are all finite, the automaton is said to be finite.
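To make Definition 10.1 concrete, the sketch below shows one possible Python encoding of the quintuple ⟨A, B, Q, F, G⟩ with deterministic F and G given as lookup tables. The class, state, and action names are illustrative assumptions, not part of the chapter.

```python
class FiniteAutomaton:
    """Illustrative encoding of the quintuple <A, B, Q, F, G> of Definition 10.1."""

    def __init__(self, actions, inputs, states, F, G, initial_state):
        self.A = actions          # set of actions {alpha_1, ..., alpha_r}
        self.B = inputs           # set of inputs, here {0, 1} (0 = reward, 1 = penalty)
        self.Q = states           # finite set of states {q_1, ..., q_s}
        self.F = F                # transition map: (state, input) -> next state
        self.G = G                # output map: state -> action
        self.state = initial_state

    def act(self):
        # alpha(t) = G[q(t)]
        return self.G[self.state]

    def update(self, beta):
        # q(t+1) = F[q(t), beta(t)]
        self.state = self.F[(self.state, beta)]

# A two-state, two-action example with a deterministic structure.
F = {("q1", 0): "q1", ("q1", 1): "q2",
     ("q2", 0): "q2", ("q2", 1): "q1"}
G = {"q1": "alpha1", "q2": "alpha2"}
automaton = FiniteAutomaton({"alpha1", "alpha2"}, {0, 1}, {"q1", "q2"}, F, G, "q1")
print(automaton.act())     # -> alpha1
automaton.update(1)        # penalized: switch state
print(automaton.act())     # -> alpha2
```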
10.3 Environment

Fig. 10.1 The automaton-environment feedback loop: the LA chooses an action α(t) ∈ {α1, ..., αr}; the random environment, characterized by the penalty probabilities {c1, ..., cr}, responds with β(t) ∈ {0, 1}

The environment, E, typically, refers to the medium in which the automaton functions. The environment possesses all the external factors that affect the actions of the automaton. Mathematically, the environment can be abstracted by a triple ⟨A, C, B⟩. A, C, and B are defined as follows:

(i) A = {α1, α2, ..., αr} is the set of actions.
(ii) B = {β1, β2, ..., βm} is the output set of the environment. Again, we consider the case when m = 2, i.e., with β = 0 representing a "reward" and β = 1 representing a "penalty."
(iii) C = {c1, c2, ..., cr} is a set of penalty probabilities, where element ci ∈ C corresponds to an input action αi.

The strategy or paradigm by which the learning is achieved is as follows. It involves a simple learning feedback-oriented loop involving a random environment (RE) and the LA. This loop is depicted in Fig. 10.1. The premise of the feedback is the following. The RE possesses a set of possible actions {α1, α2, ..., αr}, and the LA has to choose one of these. Let us assume that the LA chooses αi. This serves as an input to the RE, which operates as an oracle that knows the underlying penalty probability distribution of the system. It, in turn, "prompts" the LA with a reward (denoted by the value "0") or a penalty (denoted by the value "1") depending on the penalty probability ci corresponding to αi. Although the LA does not know the penalty probabilities, the reward/penalty feedback, corresponding to the action that is provided, helps it to choose the subsequent action. This simple process is repeated through a series of environment-automaton interactions. The intention is that, hopefully, the LA finally learns the optimal action, i.e., the one that provides the minimum average penalty, from the environment. We now provide a few important definitions used in the field. P(t) is referred to as the action probability vector, where P(t) = [p1(t), p2(t), ..., pr(t)]^T, in which each element of the vector
pi(t) = Pr[α(t) = αi], i = 1, ..., r, such that

Σ_{i=1}^{r} pi(t) = 1, ∀t.   (10.1)

Given an action probability vector P(t) at time t, the average penalty is:

M(t) = E[β(t)|P(t)] = Pr[β(t) = 1|P(t)] = Σ_{i=1}^{r} Pr[β(t) = 1|α(t) = αi] Pr[α(t) = αi] = Σ_{i=1}^{r} ci pi(t).   (10.2)

The average penalty for the "pure-chance" automaton is given by:

M0 = (1/r) Σ_{i=1}^{r} ci.   (10.3)

As t → ∞, if the average penalty M(t) < M0, at least asymptotically, the automaton is generally considered to be better than the pure-chance automaton. E[M(t)] is given by:

E[M(t)] = E{E[β(t)|P(t)]} = E[β(t)].   (10.4)

An LA that performs better than the pure-chance automaton is said to be expedient.
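As a small illustration of the quantities defined above, the following sketch simulates the random environment and evaluates M(t) and M0 from Eqs. (10.2) and (10.3). The penalty probabilities are made-up values, and the function names are assumptions introduced only for this example.

```python
import random

c = [0.8, 0.4, 0.1]                 # penalty probabilities c_i (illustrative values)
P = [1/3, 1/3, 1/3]                 # action probability vector P(t)

def environment(i, c=c):
    """Return beta = 1 (penalty) with probability c_i, and beta = 0 (reward) otherwise."""
    return 1 if random.random() < c[i] else 0

def average_penalty(P, c):
    """M(t) = sum_i c_i p_i(t), Eq. (10.2)."""
    return sum(ci * pi for ci, pi in zip(c, P))

M_t = average_penalty(P, c)
M_0 = sum(c) / len(c)               # pure-chance automaton, Eq. (10.3)
print(M_t, M_0)                     # an expedient LA drives M(t) below M_0
```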
Definition 10.2 An LA is considered expedient if:

lim_{t→∞} E[M(t)] < M0.

Definition 10.3 An LA is said to be absolutely expedient if E[M(t + 1)|P(t)] < M(t), implying that E[M(t + 1)] < E[M(t)].

Definition 10.4 An LA is considered optimal if lim_{t→∞} E[M(t)] = cl, where cl = mini{ci}.

Definition 10.5 An LA is considered ε-optimal if:

lim_{t→∞} E[M(t)] < cl + ε,   (10.5)

where ε > 0 can be arbitrarily small, by a suitable choice of some parameter of the LA.

It should be noted that one cannot design an optimal LA – such an LA cannot exist. Marginally suboptimal performance, also termed above as ε-optimal performance, is what LA researchers attempt to attain.
10.4 Classification of Learning Automata
10.4.1 Deterministic Learning Automata

An automaton is termed a deterministic automaton if both the transition function F(., .) and the output function G(.), defined in Sect. 10.2, are deterministic. Thus, in a deterministic automaton, the subsequent state and action can be uniquely specified, provided the present state and input are given.
10.4.2 Stochastic Learning Automata

If, however, either the transition function F(., .) or the output function G(.) is stochastic, the automaton is termed a stochastic automaton. In such an automaton, if the current state and input are specified, the subsequent states and actions cannot be specified uniquely. In such a case, F(., .) only provides the probabilities of reaching the various states from a given state. Let F^{β1}, F^{β2}, ..., F^{βm} denote the conditional probability matrices, where each of these conditional matrices F^β (for β ∈ B) is an s × s matrix whose arbitrary element f_ij^β is:

f_ij^β = Pr[q(t + 1) = qj | q(t) = qi, β(t) = β],   i, j = 1, 2, ..., s.   (10.6)

In Equation (10.6), each element f_ij^β of the matrix F^β represents the probability of the automaton moving from state qi to the state qj on receiving an input signal β from the RE. F^β is a Markov matrix, and hence:

Σ_{j=1}^{s} f_ij^β = 1,   where β ∈ B; i = 1, 2, ..., s.   (10.7)

Similarly, in a stochastic automaton, if G(.) is stochastic, we have:

gij = Pr{α(t) = αj | q(t) = qi},   i = 1, 2, ..., s; j = 1, 2, ..., r,   (10.8)

where gij represents the elements of the conditional probability matrix of dimension s × r. Intuitively, gij denotes the probability that, when the automaton is in state qi, it chooses the action αj. As in Equation (10.7), we have:

Σ_{j=1}^{r} gij = 1,   for each row i = 1, 2, ..., s.   (10.9)
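A minimal sketch of how Eqs. (10.6)–(10.9) can be exercised in code is given below: the next state is sampled from row qi of the matrix F^β, and the action from the corresponding row of G. The matrices and helper names are illustrative assumptions.

```python
import random

# Conditional transition matrices F^beta (one s x s stochastic matrix per input beta)
F = {
    0: [[0.9, 0.1], [0.2, 0.8]],   # rows sum to 1, cf. Eq. (10.7)
    1: [[0.3, 0.7], [0.6, 0.4]],
}
# Output matrix G (s x r), rows sum to 1, cf. Eq. (10.9)
G = [[0.95, 0.05],
     [0.10, 0.90]]

def sample(dist):
    """Draw an index according to a discrete probability distribution."""
    u, acc = random.random(), 0.0
    for idx, p in enumerate(dist):
        acc += p
        if u < acc:
            return idx
    return len(dist) - 1

def step(state, beta):
    next_state = sample(F[beta][state])     # sample q(t+1), Eq. (10.6)
    action = sample(G[next_state])          # sample alpha(t+1), Eq. (10.8)
    return next_state, action

print(step(0, 1))
```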
Fixed Structure Learning Automata
In a stochastic LA, if the conditional probabilities f_ij^β and gij are constant, i.e., they do not vary with the time step "t" and the input sequence, the automaton is termed a Fixed Structure Stochastic Automaton (FSSA). The popular examples of these types of automata were proposed by Tsetlin [59, 60], Krylov [16], and Krinsky [15] – all of which are ε-optimal under various conditions. Their details can be found in [27].

Variable Structure Learning Automata
Unlike the FSSA, Variable Structure Stochastic Automata (VSSA) are the ones in which the state transition probabilities are not fixed. In such automata, the state transitions or the action probabilities themselves are updated at every time instant using a suitable scheme. The transition probabilities f_ij^β and the output function gij vary with time, and the action probabilities are updated on the basis of the input. These automata are discussed here in the context of linear schemes, but the concepts discussed below can be extended to nonlinear updating schemes as well. The types of automata that update transition probabilities with time were introduced in 1963 by Varshavskii and Vorontsova [62]. A VSSA depends on a random number generator for its implementation. The action chosen is dependent on the action probability distribution vector, which is, in turn, updated based on the reward/penalty input that the automaton receives from the RE.

Definition 10.6 A VSSA is a quintuple ⟨Q, A, B, G, T⟩, where Q represents the different states of the automaton, A is the set of actions, B is the set of responses from the environment to the LA, G is the output function, and T is the action probability updating scheme T : [0, 1]^r × A × B → [0, 1]^r, such that
P(t + 1) = T(P(t), α(t), β(t)),   (10.10)
where P(t) is the action probability vector. Normally, VSSA involve the updating of both the state and action probabilities. For the sake of simplicity, in practice, it is assumed that in such automata each state corresponds to a distinct action, in which case the action transition mapping G becomes the identity mapping, and the number of states, s, is equal to the number of actions, r (s = r < ∞). VSSA can be analyzed using a discrete-time Markov process, defined on a suitable set of states. If a probability updating scheme T is time invariant, {P(t)}_{t≥0} is a discrete homogeneous Markov process, and the probability vector at the current time instant, P(t), along with α(t) and β(t), completely determines P(t + 1). Hence, each distinct updating scheme, T, identifies a different type of learning algorithm as follows:

• Absorbing algorithms are the ones in which the updating scheme, T, is chosen in such a manner that the Markov process has absorbing states.
• Non-absorbing algorithms are the ones in which the Markov process has no absorbing states.
• Linear algorithms are the ones in which P(t + 1) is a linear function of P(t).
• Nonlinear algorithms are the ones in which P(t + 1) is a nonlinear function of P(t).

In a VSSA, if a chosen action αi is rewarded, the probability for the current action is increased, and the probabilities for all the other actions are decreased. On the other hand, if the chosen action αi is penalized, the probability of the current action is decreased, whereas the probabilities for the rest of the actions could, typically, be increased. This leads to the following different types of learning schemes for VSSA:

• Reward-Penalty (RP): In both cases, when the automaton is rewarded as well as when it is penalized, the action probabilities are updated.
• Inaction-Penalty (IP): When the automaton is penalized, the action probability vector is updated, whereas when the automaton is rewarded, the action probabilities are neither increased nor decreased.
• Reward-Inaction (RI): The action probability vector is updated whenever the automaton is rewarded, and is unchanged whenever the automaton is penalized.

An LA is considered to be a continuous automaton if the probability updating scheme T is continuous, i.e., the probability of choosing an action can be any real number in the closed interval [0, 1]. In a VSSA, if there are r actions operating in a stationary environment with β ∈ {0, 1}, a general action probability updating scheme for a continuous automaton is described below. We assume that the action αi is chosen, and thus, α(t) = αi. The updated action probabilities can be specified as:

For β(t) = 0, ∀j ≠ i:  pj(t + 1) = pj(t) − gj(P(t)),
For β(t) = 1, ∀j ≠ i:  pj(t + 1) = pj(t) + hj(P(t)).   (10.11)

Since P(t) is a probability vector, Σ_{j=1}^{r} pj(t) = 1. Therefore, when β(t) = 0,

pi(t + 1) = pi(t) + Σ_{j=1, j≠i}^{r} gj(P(t)),   (10.12)

and when β(t) = 1,

pi(t + 1) = pi(t) − Σ_{j=1, j≠i}^{r} hj(P(t)).

The functions hj and gj are nonnegative and continuous in [0, 1] and obey, ∀i = 1, 2, ..., r and ∀P ∈ (0, 1)^r,

0 < gj(P) < pj,   (10.13)

together with an analogous condition on the hj(·), which guarantees that every component of P(t + 1) remains in (0, 1).

The following well-known linear schemes are presented here for 2-action LA; their extensions to the case when r > 2 are straightforward and can be found in [27]:

• The Linear Reward-Inaction Scheme (LRI)
• The Linear Inaction-Penalty Scheme (LIP)
• The Symmetric Linear Reward-Penalty Scheme (LRP)
• The Linear Reward-ε-Penalty Scheme (LR−εP)

For a 2-action LA, let

gj(P(t)) = a pj(t)  and  hj(P(t)) = b (1 − pj(t)).   (10.14)

In Equation (10.14), a and b are called the reward and penalty parameters, and they obey the following inequalities: 0 < a < 1, 0 ≤ b < 1. Equation (10.14) will be used further to develop the action probability updating equations. The abovementioned linear schemes are quite popular in LA because of their analytical tractability. They exhibit significantly different characteristics, as can be seen in Table 10.1.
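The sketch below implements the general linear update of Eqs. (10.11)–(10.14) for an r-action VSSA: with b = 0 it reduces to the LRI scheme, and with a = b to the symmetric LRP scheme. The r-action form of hj used here (which reduces to b(1 − pj) when r = 2) and the parameter values are assumptions made only for this illustration.

```python
def linear_update(P, i, beta, a=0.05, b=0.0):
    """One step of the linear VSSA update: action alpha_i was chosen, beta in {0, 1}."""
    r = len(P)
    Q = P[:]                                      # copy of the action probability vector
    if beta == 0:                                 # reward
        for j in range(r):
            if j != i:
                Q[j] = P[j] - a * P[j]            # g_j(P) = a * p_j, Eq. (10.14)
        Q[i] = 1.0 - sum(Q[j] for j in range(r) if j != i)
    else:                                         # penalty
        for j in range(r):
            if j != i:
                # a standard r-action form of h_j; equals b * (1 - p_j) when r = 2
                Q[j] = P[j] + b * (1.0 / (r - 1) - P[j])
        Q[i] = 1.0 - sum(Q[j] for j in range(r) if j != i)
    return Q

# Example: an L_RI step (b = 0) on a rewarded action 0.
print(linear_update([0.5, 0.3, 0.2], i=0, beta=0))
```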
Table 10.1 Properties of the continuous learning schemes

Learning scheme | Learning parameters | Usefulness (good/bad) | Optimality
LRI | a > 0; b = 0 | Good | ε-optimal as a → 0
LIP | a = 0, b > 0 | Very bad | Not even expedient
LRP (Symmetric) | a = b; a, b > 0 | Bad | Never ε-optimal
LR−εP | a > 0, b ≪ a | Good | ε-optimal

Table 10.2 Properties of the discretized learning schemes

Learning scheme | Learning parameter | Usefulness | Optimality | Markovian behavior
DLRI | N > 0 | Good | ε-optimal | Absorbing (stationary E)
DLIP | N > 0 | Very bad | Expedient | Ergodic (nonstationary E)
ADLIP | N > 0 | Good | ε-optimal; sluggish | Artificially absorbing (stationary E)
DLRP | N > 0 | Reasonable | ε-optimal if cmin < 0.5 | Ergodic (nonstationary E)
ADLRP | N > 0 | Good | ε-optimal | Artificially absorbing (stationary E)
MDLRP | N > 0 | Good | ε-optimal | Ergodic (nonstationary E)
In discretized LA, the action probabilities are only allowed to assume values that are integer multiples of 1/N, where N is the so-called resolution parameter. This not only increases the rate of convergence of the algorithm but also reduces the time, in terms of the clock cycles it takes for the processor to do each iteration of the task, and the memory needed. Discretized algorithms have been proven to be both more time and space efficient than the continuous algorithms. Similar to the continuous LA paradigm, the discretized versions, the DLRI, DLIP, and DLRP automata, have also been reported. Their design, analysis, and properties are given in [33–35, 53] and are summarized in Table 10.2.
10.5 Estimator Algorithms

10.5.1 Rationale and Motivation

As we have seen so far, the rate of convergence of learning algorithms is one of the most important considerations, which was the primary reason for designing the family of "discretized" algorithms. With the same goal, Thathachar and Sastry designed a new class of algorithms, called the estimator algorithms [45, 55–57], which have a faster rate of convergence than all the previous families. These algorithms, like the previous ones, maintain and update an action probability vector. However, unlike the previous ones, these algorithms also keep running estimates of the reward probability of each action, using a reward estimate vector, and then use those estimates in the probability updating equations. The reward estimate vector is, typically, denoted in the literature by D̂(t) = [d̂1(t), ..., d̂r(t)]^T. The corresponding state vector is denoted by Q(t) = ⟨P(t), D̂(t)⟩. Unfortunately, the convergence results (proofs) of all the LA presented in this section are flawed and should, strictly speaking, be referred to as "proofs." The flaw was discovered by the authors of [44]. The details of this flaw, and how it has been subsequently rectified, are explained in Sect. 10.6. In a random environment, these algorithms help in choosing an action by increasing the confidence in the reward capabilities of the different actions. For example, these algorithms initially process each action a number of times and then (in one version) could increase the probability of the action with the highest reward estimate [1]. This leads to a scheme with better accuracy in choosing the correct action. The previous non-estimator VSSA algorithms update the probability vector directly on the basis of the response of the environment to the automaton where, depending on the type of vector updating scheme being used, the probability of choosing a rewarded action in the subsequent time instant is increased, and the probabilities of choosing the other actions could be decreased. However, estimator algorithms update the probability vector based on both the estimate vector and the current feedback provided by the environment to the automaton. The environment influences the probability vector both directly and indirectly, the latter being a result of the estimation of the reward estimates of the different actions. This may, thus, lead to increases in action probabilities different from the currently rewarded action. Even though there is an added computational cost involved in maintaining the reward estimates, these estimator algorithms have an order of magnitude superior performance over the non-estimator algorithms previously introduced. Lanctôt and Oommen [18] further introduced the discretized versions of these estimator algorithms, which were proven to have an even faster rate of convergence.

10.5.2 Continuous Estimator Algorithms

Thathachar and Sastry introduced the class of continuous estimator algorithms [45, 55–57] in which the probability updating scheme T is continuous, i.e., the probability of choosing an action can be any real number in the closed interval [0, 1]. As mentioned subsequently, the discretized versions of these algorithms were introduced by Oommen and his coauthors, Lanctôt and Agache [2, 18]. These algorithms are briefly explained in Sect. 10.5.3.
Pursuit Algorithm
The family of pursuit algorithms is a class of estimator algorithms that pursue an action that the automaton "currently" perceives to be the optimal one. The first pursuit algorithm, referred to as the CPRP algorithm and introduced by Thathachar and Sastry [54, 55], pursues the optimal action by changing the probability of the current optimal action whether it receives a reward or a penalty from the environment. In this case, the currently perceived "best action" is rewarded,
and its action probability value is increased with a value directly proportional to its distance to unity, namely, 1 − pm(t), whereas the "less optimal actions" are penalized and their probabilities decreased proportionally. To start with, based on the probability distribution P(t), the algorithm chooses an action α(t). Whether the response was a reward or a penalty, it increases that component of P(t) which has the maximal current reward estimate, and it decreases the probabilities corresponding to the rest of the actions. Finally, the algorithm updates the running estimate of the reward probability of the action chosen, this being the principal idea behind keeping and using the running estimates. The estimate vector D̂(t) can be computed using the following formula, which yields the maximum likelihood estimate:

d̂i(t) = Wi(t) / Zi(t),   ∀i = 1, 2, ..., r,   (10.16)

where Wi(t) is the number of times the action αi has been rewarded until the current time t, and Zi(t) is the number of times αi has been chosen until the current time t. Based on the above concepts, the CPRP algorithm is formally given in [1, 2, 18]. The algorithm is similar in principle to the LRP algorithm because both the CPRP and the LRP algorithms increase/decrease the action probabilities of the vector independently of whether the environment responds to the automaton with a reward or a penalty. The major difference lies in the way the reward estimates are maintained, used, and updated on both reward and penalty. It should be emphasized that whereas the non-pursuit algorithm moves the probability vector in the direction of the most recently rewarded action, the pursuit algorithm moves the probability vector in the direction of the action with the highest reward estimate. Thathachar and Sastry [54] have theoretically proven their ε-optimality and experimentally shown that these pursuit algorithms are more accurate and several orders of magnitude faster than the non-pursuit algorithms. The Reward-Inaction version of this pursuit algorithm is also similar in design and is described in [2, 18]. Also, other pursuit-like estimator schemes have been devised, and they can be found in [2].
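A minimal sketch of the pursuit principle (not the authors' implementation; the environment and parameter values are illustrative) is shown below: the ML estimates d̂i(t) = Wi(t)/Zi(t) of Eq. (10.16) are maintained, and on every iteration the probability vector is moved toward the action with the currently highest estimate, whether the response was a reward or a penalty.

```python
import random

def pursuit_crp(c, lam=0.01, steps=5000, seed=0):
    """Continuous pursuit (CP_RP-style) LA against penalty probabilities c_i."""
    rng = random.Random(seed)
    r = len(c)
    P = [1.0 / r] * r                      # action probabilities
    W = [0] * r                            # rewards received per action
    Z = [0] * r                            # times each action was chosen
    for _ in range(steps):
        i = rng.choices(range(r), weights=P)[0]          # choose alpha_i ~ P(t)
        beta = 1 if rng.random() < c[i] else 0           # environment feedback
        Z[i] += 1
        W[i] += (beta == 0)                              # count rewards
        d_hat = [W[k] / Z[k] if Z[k] else 0.0 for k in range(r)]   # Eq. (10.16)
        m = max(range(r), key=lambda k: d_hat[k])        # currently best estimated action
        # Move P(t) toward the unit vector e_m, regardless of reward/penalty.
        P = [(1 - lam) * p for p in P]
        P[m] += lam
    return P

print(pursuit_crp([0.8, 0.4, 0.1]))    # the last action (lowest penalty) should dominate
```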
TSE Algorithm
A more advanced estimator algorithm, which we refer to as the TSE algorithm to maintain consistency with the existing literature [1, 2, 18], was designed by Thathachar and Sastry [56, 57]. Like the other estimator algorithms, the TSE algorithm maintains the running reward estimates vector D̂(t) and uses it to calculate the action probability vector P(t). When an action αi(t) is rewarded, according to the TSE algorithm, the probability components with a reward estimate greater than d̂i(t) are treated differently from those components with a value lower than d̂i(t). The algorithm does so by increasing the probabilities for all the actions that have a higher estimate than the estimate of the chosen action and decreasing the probabilities of all the actions with a lower estimate. This is done with the help of an indicator function Sij(t) that assumes the value 1 if d̂i(t) > d̂j(t) and the value 0 if d̂i(t) ≤ d̂j(t). Thus, the TSE algorithm uses both the probability vector P(t) and the reward estimates vector D̂(t) to update the action probabilities. The algorithm is formally described in [1]. On careful inspection of the algorithm, it can be observed that P(t + 1) depends indirectly on the response of the environment to the automaton. The feedback from the environment changes the values of the components of D̂(t), which, in turn, affects the values of the functions f(.) and Sij(t) [1, 18, 56, 57]. Analyzing the algorithm carefully, we obtain three cases. If the ith action is rewarded, the probability values of the actions with reward estimates higher than the reward estimate of the currently selected action are updated using the following equation [56]:

pj(t + 1) = pj(t) − λ f(d̂i(t) − d̂j(t)) (pi(t) − pj(t)pi(t)) / (r − 1).   (10.17)

When d̂i(t) < d̂j(t), since the function f(d̂i(t) − d̂j(t)) is monotonic and increasing, f(d̂i(t) − d̂j(t)) is seen to be negative. This leads to a higher value of pj(t + 1) than that of pj(t), which indicates that the probability of choosing actions whose estimates are greater than the estimate of the currently chosen action will increase. For all the actions with reward estimates smaller than the estimate of the currently selected action, the probabilities are updated based on the following equation:

pj(t + 1) = pj(t) − λ f(d̂i(t) − d̂j(t)) pj(t).   (10.18)

In this case, since d̂i(t) > d̂j(t), the function f(d̂i(t) − d̂j(t)) is positive, which indicates that the probability of choosing actions whose estimates are less than the estimate of the currently chosen action will decrease. Thathachar and Sastry have proven that the TSE algorithm is ε-optimal [56]. They have also experimentally shown that the TSE algorithm often converges several orders of magnitude faster than the LRI scheme.
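The rewarded-case update of the TSE algorithm, Eqs. (10.17)–(10.18), can be sketched as follows. The function f is assumed here to be the identity purely for illustration; the chapter only requires a monotonically increasing function of the difference between the reward estimates, and the helper name is an assumption.

```python
def tse_reward_update(P, d_hat, i, lam=0.1, f=lambda x: x):
    """One TSE-style update after action alpha_i is rewarded (Eqs. (10.17)-(10.18))."""
    r = len(P)
    Q = P[:]
    for j in range(r):
        if j == i:
            continue
        diff = f(d_hat[i] - d_hat[j])
        if d_hat[j] > d_hat[i]:
            # Eq. (10.17): diff < 0, so p_j increases
            Q[j] = P[j] - lam * diff * (P[i] - P[j] * P[i]) / (r - 1)
        else:
            # Eq. (10.18): diff >= 0, so p_j decreases
            Q[j] = P[j] - lam * diff * P[j]
    Q[i] = 1.0 - sum(Q[j] for j in range(r) if j != i)   # renormalize onto the chosen action
    return Q
```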
Generalized Pursuit Algorithm
Agache and Oommen [2] proposed a generalized version of the pursuit algorithm (CPRP) of Thathachar and Sastry [54, 55]. Their algorithm, called the Generalized Pursuit Algorithm (GPA), generalizes Thathachar and Sastry's
pursuit algorithm by pursuing all those actions that possess higher reward estimates than the chosen action. In this way, the probability of choosing a wrong action is minimized. Agache and Oommen experimentally compared their pursuit algorithm with the existing algorithms and found that their algorithm is the best in terms of the rate of convergence [2]. In the CPRP algorithm, the probability of the best estimated action is maximized by first decreasing the probability of all the actions in the following manner [2]:

pj(t + 1) = (1 − λ) pj(t),   j = 1, 2, ..., r.   (10.19)

The sum of the action probabilities is made unity with the help of the probability mass Δ, which is given by [2]:

Δ = 1 − Σ_{j=1}^{r} pj(t + 1) = 1 − Σ_{j=1}^{r} (1 − λ) pj(t) = 1 − Σ_{j=1}^{r} pj(t) + λ Σ_{j=1}^{r} pj(t) = λ.   (10.20)

Thereafter, the probability mass Δ is added to the probability of the best estimated action. The GPA algorithm, thus, equidistributes the probability mass Δ to the actions estimated to be superior to the chosen action. This gives us [2]:

pm(t + 1) = (1 − λ) pm(t) + Δ = (1 − λ) pm(t) + λ,   (10.21)

where d̂m = max_{j=1,2,...,r} (d̂j(t)). Thus, the updating scheme is given by [2]:

pj(t + 1) = (1 − λ) pj(t) + λ / K(t),   if d̂j(t) > d̂i(t), j ≠ i,
pj(t + 1) = (1 − λ) pj(t),   if d̂j(t) ≤ d̂i(t), j ≠ i,
pi(t + 1) = 1 − Σ_{j≠i} pj(t + 1),   (10.22)

where K(t) denotes the number of actions that have estimates greater than the estimate of the reward probability of the action currently chosen. The formal algorithm is omitted but can be found in [2].
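A sketch of a single GPA update step, following Eq. (10.22), is given below; the function name and the parameter value are illustrative assumptions.

```python
def gpa_update(P, d_hat, i, lam=0.05):
    """One Generalized Pursuit (GPA) update step, following Eq. (10.22)."""
    r = len(P)
    better = [j for j in range(r) if j != i and d_hat[j] > d_hat[i]]
    K = len(better)                      # K(t): actions with higher estimates than alpha_i
    Q = [0.0] * r
    for j in range(r):
        if j == i:
            continue
        # actions with higher estimates share the mass lambda equally
        Q[j] = (1 - lam) * P[j] + (lam / K if j in better else 0.0)
    Q[i] = 1.0 - sum(Q[j] for j in range(r) if j != i)
    return Q
```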
10.5.3 Discrete Estimator Algorithms

As we have seen so far, discretized LA are superior to their continuous counterparts, and the estimator algorithms are superior to the non-estimator algorithms in terms of the rate of convergence of the learning algorithms. Utilizing the previously proven capabilities of discretization in improving the
speed of convergence of the learning algorithms, Lanctôt and Oommen [18] enhanced the pursuit and the TSE algorithms. This led to the design of classes of learning algorithms, referred to in the literature as the Discrete Estimator Algorithms (DEA) [18]. To this end, as done in the previous discrete algorithms, the components of the action probability vector are allowed to assume a finite set of discrete values in the closed interval [0, 1], which is, in turn, divided into a number of subintervals proportional to the resolution parameter, N. Along with this, a reward estimate vector is maintained to keep an estimate of the reward probability of each action [18]. Lanctôt and Oommen showed that for each member algorithm belonging to the class of DEAs to be ε-optimal, it must possess a pair of properties known as the Property of Moderation and the Monotone Property. Together, these properties help prove the ε-optimality of any DEA algorithm [18].

Moderation Property
A DEA with r actions and a resolution parameter N is said to possess the property of moderation if the maximum magnitude by which an action probability can decrease per iteration is bounded by 1/(rN).

Monotone Property
Suppose there exists an index m and a time instant t0 < ∞, such that d̂m(t) > d̂j(t), ∀j ≠ m and ∀t ≥ t0, where d̂m(t) is the maximal component of D̂(t). A DEA is said to possess the Monotone Property if there exists an integer N0 such that, for all resolution parameters N > N0, pm(t) → 1 with probability one as t → ∞, where pm(t) is the maximal component of P(t).

The discretized versions of the pursuit algorithm and the TSE algorithm possessing the moderation and the monotone properties are presented below.
Discrete Pursuit Algorithm
The Discrete Pursuit Algorithm (formally described in [18]) is referred to as the DPA in the literature and is, to a great extent, similar to its continuous pursuit counterpart, i.e., the CPRI algorithm, except that the updates to the action probabilities for the DPA algorithm are made in discrete steps. Therefore, the equations in the CPRP algorithm that involve multiplication by the learning parameter λ are substituted by the addition or subtraction of quantities proportional to the smallest step size. As in the CPRI algorithm, the DPA algorithm operates in three steps. If Δ = 1/(rN) (where N denotes the resolution and r the number of actions) denotes the smallest step size, the integral multiples of Δ denote the step sizes in which the action probabilities are updated. Like the continuous Reward-Inaction algorithm, when the chosen action α(t) = αi is penalized, the action probabilities remain unchanged. However, when the chosen action α(t) = αi is rewarded, and
the algorithm has not converged, the algorithm decreases, by integral multiples of Δ, the action probabilities that do not correspond to the highest reward estimate. Lanctôt and Oommen have shown that the DPA algorithm possesses the properties of moderation and monotonicity and that it is thus ε-optimal [18]. They have also experimentally shown that, in different ranges of environments from simple to complex, the DPA algorithm is at least 60% faster than the CPRP algorithm [18].
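The following sketch illustrates the discrete pursuit update described above, with the smallest step size Δ = 1/(rN); it follows the Reward-Inaction flavor (no change on a penalty) and is an illustrative reading of the DPA, not the formal algorithm of [18].

```python
def dpa_update(P, d_hat, i, beta, N):
    """One Discrete Pursuit (DPA-style) update: probabilities move in steps of 1/(r*N)."""
    r = len(P)
    if beta == 1:                 # penalty: Reward-Inaction flavor, leave P unchanged
        return P[:]
    delta = 1.0 / (r * N)         # smallest step size
    m = max(range(r), key=lambda k: d_hat[k])     # action with the highest reward estimate
    Q = P[:]
    for j in range(r):
        if j != m:
            Q[j] = max(Q[j] - delta, 0.0)         # decrease the others by one step
    Q[m] = 1.0 - sum(Q[j] for j in range(r) if j != m)
    return Q
```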
Discrete TSE Algorithm
Lanctôt and Oommen also discretized the TSE algorithm and have referred to it as the Discrete TSE Algorithm (DTSE) [18]. Since the algorithm is based on the continuous version of the TSE algorithm, it obviously has the same level of intricacies, if not more. Lanctôt and Oommen theoretically proved that, like the DPA estimator algorithm, this algorithm also possesses the Moderation and the Monotone properties while maintaining many of the qualities of the continuous TSE algorithm. They also provided the proof of convergence of this algorithm. There are two notable parameters in the DTSE algorithm:

1. Δ = 1/(rNθ), where N is the resolution parameter as before.
2. θ is an integer representing the largest value any of the action probabilities can change by in a single iteration.
A formal description of the DTSE algorithm is omitted here but is in [18].
Discretized Generalized Pursuit Algorithm
Agache and Oommen [2] provided a discretized version of their GPA algorithm presented earlier. Their algorithm, called the Discretized Generalized Pursuit Algorithm (DGPA), also essentially generalizes Thathachar and Sastry's pursuit algorithm [54, 55]. But unlike the TSE, it pursues all those actions that possess higher reward estimates than the chosen action. In essence, in any single iteration, the algorithm computes the number of actions that have higher reward estimates than the current chosen action, denoted by K(t), whence the probability of all the actions that have estimates higher than the chosen action is increased by an amount Δ/K(t), and the probabilities for all the other actions are decreased by an amount Δ/(r − K(t)), where Δ = 1/(rN) denotes the resolution step and N the resolution parameter. The DGPA algorithm has been proven to possess the Moderation and Monotone properties and is thus ε-optimal [2]. The detailed steps of the DGPA algorithm are omitted here.
10.5.4 The Use of Bayesian Estimates in PAs

There are many formal methods to achieve estimation, the most fundamental of them being the ML and Bayesian paradigms. These are age-old and have been used for centuries in various application domains. As mentioned in the previous sections, PAs have been designed, since the 1980s, by using ML estimates of the reward probabilities. However, as opposed to these estimates, more recently, the first author of this present chapter (and others) has shown that the family of Bayesian estimates can also be incorporated into the design and analysis of LA. Incorporating the Bayesian philosophy has led to various Bayesian Pursuit Algorithms (BPAs). The first of these BPAs was the Continuous Bayesian Pursuit Algorithm (CBPA) (which, unless otherwise stated, is referred to here as the BPA) [74]. This subsequently paved the way for the development of the Discretized Bayesian Pursuit Algorithm (DBPA) [75]. The BPAs obey the same "pursuit" paradigm for achieving the learning. However, since one can invoke the properties of their a priori and a posteriori distributions, the Bayesian estimates can provide more meaningful estimates when it concerns the field of LA, and they are superior to their ML counterparts. The families of BPAs are, probably, the fastest and most accurate nonhierarchical LA reported in the literature.
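As a sketch of the Bayesian alternative to the ML estimate of Eq. (10.16), the reward probability of each action can be given a Beta prior whose posterior is updated from the reward and selection counts. The posterior mean is used below purely for illustration and is not necessarily the functional of the posterior adopted in [74, 75]; the function name and prior values are assumptions.

```python
def bayesian_estimates(W, Z, a0=1.0, b0=1.0):
    """Posterior-mean reward estimates under a Beta(a0, b0) prior.

    W[k] = rewards obtained by action k, Z[k] = times action k was chosen.
    The posterior of the reward probability is Beta(a0 + W[k], b0 + Z[k] - W[k]).
    """
    return [(a0 + W[k]) / (a0 + b0 + Z[k]) for k in range(len(W))]

# Example: an action rewarded 3 times out of 4 trials.
print(bayesian_estimates([3], [4]))     # -> [0.666...], vs. the ML estimate 0.75
```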
10.5.5 Stochastic Estimator Learning Algorithm (SELA)

The SELA algorithm belongs to the class of discretized LA and was proposed by Vasilakos and Papadimitriou [63]. It has, since then, been used for solving problems in the domain of computer networks [4, 64]. It is an ergodic scheme, which has the ability to converge to the optimal action irrespective of the distribution of the initial state [63, 64]. As before, let A = {α1, α2, ..., αr} denote the set of actions and B = {0, 1} denote the set of responses that can be provided by the environment, where β(t) represents the feedback provided by the environment corresponding to a chosen action α(t) at time t. Let the probability of choosing the kth action at the tth time instant be pk(t). SELA updates the estimated environmental characteristics as the vector E(t), which can be defined as E(t) = ⟨D(t), M(t), U(t)⟩, explained below. D(t) = {d1(t), d2(t), ..., dr(t)} represents the vector of the reward estimates, where:

dk(t) = ( Σ_{i=1}^{W} βk(i) ) / W.   (10.23)
In the above equation, the numerator on the right-hand side represents the total rewards received by the LA over the window covering the last W times a particular action αk was selected by the algorithm. W is called the learning window. The second parameter in E(t) is called the Oldness Vector and is represented as M(t) = {m1(t), m2(t), ..., mr(t)}, where mk(t) represents the time passed (counted as the number of iterations) since the last time the action αk was selected. The last parameter, U(t), is called the Stochastic Estimator Vector and is represented as U(t) = {u1(t), u2(t), ..., ur(t)}, where the stochastic estimate ui(t) of action αi is calculated using the following formula:

ui(t) = di(t) + N(0, σi²(t)),   (10.24)

where N(0, σi²(t)) represents a random number selected from a normal distribution which has a mean of 0 and a standard deviation of σi(t) = min{σmax, a mi(t)}, a is a parameter signifying the rate at which the stochastic estimates become independent, and σmax represents the maximum possible standard deviation that the stochastic estimates can have. In symmetrically distributed noisy stochastic environments, SELA is shown to be ε-optimal and has found applications for routing in ATM networks [4, 64].
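A sketch of how the stochastic estimates ui(t) of Eq. (10.24) can be computed from the windowed estimates di(t) and the oldness values mi(t) is given below; the parameter values are illustrative, and the window bookkeeping of Eq. (10.23) is assumed to have been done elsewhere.

```python
import random

def sela_stochastic_estimates(d, m, a=0.2, sigma_max=1.0, rng=random):
    """Stochastic estimates u_i(t) = d_i(t) + N(0, sigma_i(t)^2), Eq. (10.24).

    d[i] is the windowed reward estimate of Eq. (10.23) and m[i] the 'oldness'
    of action i (iterations since it was last chosen).
    """
    u = []
    for d_i, m_i in zip(d, m):
        sigma_i = min(sigma_max, a * m_i)        # sigma_i(t) = min{sigma_max, a * m_i(t)}
        u.append(d_i + rng.gauss(0.0, sigma_i))
    return u

# The action with the largest stochastic estimate would be selected next.
print(sela_stochastic_estimates([0.6, 0.55, 0.3], [1, 12, 40]))
```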
10.6 Challenges in Analysis
10.6.1 Previous Flawed Proofs

It is relatively easy to design a new LA. Why, then, is it extremely difficult to analyze a new machine? Indeed, the formal proofs of the convergence accuracies of LA are, probably, the most difficult part in the fascinating field involving the design and analysis of LA. Since this is a comprehensive chapter, it would be educative to record the distinct mathematical techniques used for the various families, namely, for the FSSA, VSSA, Discretized, and PAs.

• The proof methodology for the family of FSSA simply involves formulating the Markov chain for the LA. Thereafter, one computes its equilibrium (or steady state) probabilities and then derives the asymptotic action selection probabilities.
• The proofs of the asymptotic convergence for VSSA involve the theory of small-step Markov processes and distance diminishing operators. In more complex cases, they resort to the theory of regular functions.
• The proofs for Discretized LA also consider the asymptotic analysis of the LA's Markov chain in the discretized space. Thereafter, one derives the total convergence probability to the various actions.
• The convergence proofs become much more complex when we have to deal with two rather non-orthogonal, interrelated concepts, as in the case of the families of EAs. These two concepts are the convergence of the reward estimates and the convergence of the action probabilities themselves. Ironically, the combination of these phenomena to achieve the updating rule is what grants the EAs their enhanced speed. But this leads to a surprising dilemma: If the accuracy of the estimates computed is not dependable because of an inferior estimation phase (i.e., if the nonoptimal actions are not examined "enough number of times"), the accuracy of the EA converging to the optimal action can be forfeited. Thus, the proofs for EAs have to consider criteria that the other types of LA do not have to.

The ε-optimality of the EAs and the families of pursuit algorithms, including the CPA, GPA, DPA, TSE, and the DTSE (explained in the previous sections), have been studied and presented in [1, 2, 18, 45, 55–57]. The methodology for all these derivations was the same, and it was erroneous. The estimate used was an MLE, and by the law of large numbers, when all the actions are chosen infinitely often, they claimed that the estimates are ordered in terms of their reward probabilities. They also claimed that these estimates are properly ordered forever after some time instant, t0. We refer to this as the monotonicity property. If now the learning parameter is small/large enough, they will consequently converge to the optimal action with an arbitrarily large probability. The fault in the argument is the following: While such an ordering is, indeed, true by the law of large numbers, it is only valid if all the actions are chosen infinitely often. This, in turn, forces the time instant, t0, to also be infinite. If such an "infinite" selection does not occur, the ordering cannot be guaranteed for all time instants after t0. In other words, the authors of all these papers confused the concept of ordering "forever" with ordering "most of the time" after t0, rendering the "proofs" of the ε-optimality invalid. The authors of [44] discovered the error in the proofs of the abovementioned papers, and the detailed explanation of this discovery is found in [44]. These authors also presented a correct proof for the ε-optimality of the CPA based on the monotonicity property, except that their proof required an external condition that the learning parameter, λ, decreased with time. This, apparently, is the only strategy by which one can prove the ε-optimality of the CPA. However, by designing the LA to jump to an absorbing barrier in a single step when any pj(t) ≥ T, where T is a user-defined threshold close to unity, the updated proofs have shown ε-optimality because of a weaker property, i.e., the submartingale property of pm(t), where t is greater than a finite time index, t0. It is not an elementary exercise to generalize the arguments of [44] for the submartingale property if the value
of λ is kept fixed. However, the submartingale arguments avoid the previously flawed lines of reasoning: the proofs do not require that the estimate process {d̂j(t) | t ≥ 0} satisfies the monotonicity property; rather, they rely on the weaker submartingale property of pm(t). This reasoning underlies the corrected proofs of the CPA, the DPA, and the family of Bayesian PAs.
10.6.2 The Rectified Proofs of the PAs
With regard to the abovementioned flaw, we mention:
1. The primary intent of much of the theoretical work in the field of LA in the more recent past has been to correct the abovementioned flaw for the various families of PAs.
2. To do this, the researchers have introduced a new method that can, hopefully, be used to prove the ε-optimality of all EAs which are specifically enhanced with artificially enforced absorbing states [35] and, in particular, for the ACPA, the CPA with artificially enforced absorbing states (a minimal sketch of the basic CPA update loop is given after this list).
3. Though the method used in [44] is also applicable for absorbing EAs, it has been shown that while the monotonicity property is sufficient for convergence, it is not really necessary for proving that the ACPA is ε-optimal.
4. In [76], the authors have presented a completely new proof methodology, which is based on the convergence theory of submartingales and the theory of regular functions [27]. This proof is, thus, distinct in principle and argument from the proof reported in [44], and it adds insights into the convergence of different absorbing EAs.
5. Thereafter, the authors of [77] have utilized the convergence theory of submartingales and the theory of regular functions to prove the ε-optimality of the DPA.
6. Finally, the authors of [74] and [75] have invoked the same theories (of submartingales and of regular functions) to prove the ε-optimality of the families of Bayesian PAs, where the estimates of the reward probabilities are not computed using an ML methodology but rather via a Bayesian paradigm.
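To make the object of these proofs concrete, the following is a minimal sketch, in Python, of the basic CPA update loop interacting with a stationary Bernoulli environment. It is not the authors' code: the reward probabilities d_true, the learning parameter lam, and the horizon are arbitrary illustrative choices.

import numpy as np

rng = np.random.default_rng(0)
d_true = np.array([0.55, 0.70, 0.60, 0.40])   # unknown reward probabilities of the environment
r = len(d_true)
lam = 0.01                                     # learning parameter

p = np.full(r, 1.0 / r)        # action probability vector
rewards = np.zeros(r)          # cumulative rewards per action
counts = np.zeros(r)           # number of times each action was selected

for t in range(20000):
    a = rng.choice(r, p=p)                     # select an action according to p
    beta = rng.random() < d_true[a]            # stochastic reward from the environment
    counts[a] += 1
    rewards[a] += beta
    # MLE reward estimates (zero for actions never selected)
    d_hat = np.divide(rewards, counts, out=np.zeros(r), where=counts > 0)
    m = int(np.argmax(d_hat))                  # currently estimated best action
    p = (1.0 - lam) * p + lam * np.eye(r)[m]   # pursue the unit vector of that action

print("estimated best action:", int(np.argmax(d_hat)), " p =", np.round(p, 3))

The dilemma discussed in Sect. 10.6.1 is visible in this loop: the ML estimates d_hat are only reliable if every action keeps being selected, which is precisely what pursuing the currently best-estimated action tends to suppress.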
10.6.3 Proofs for Finite-Time Analyses
One of the most challenging analytic issues concerning LA relates to the Finite-Time Analysis (FTA) of LA algorithms. This is a topic for which very little work has been reported in the literature, as opposed to the asymptotic steady-state analysis (i.e., the analysis after the transient phase of the chain has elapsed), for which there is an extensive number of results. For ergodic Markov chains, one obtains the latter by evaluating the eigenvector associated with the eigenvalue λ = 1, and there are many reported closed-form
and computational methods to achieve this. For absorbing chains, one computes the first-passage absorption probabilities by invoking the Chapman-Kolmogorov equations. All of these techniques have formed the basis for analysis even when the number of actions is large and/or when they are arranged hierarchically. The FTA of Markov chains and LA is far more complex. Unlike the steady-state scenario, which only involves the eigenvalue λ = 1, the FTA involves all the non-unity eigenvalues (and predominantly the second largest one) of the Markov matrix, and the convergence occurs in a geometric manner. It is hard enough to analytically evaluate these non-unity eigenvalues, and so very little work has been done on the FTA of any family of LA, including the families of FSSA and/or VSSA. However, recently, Zhang et al. [72] have published some of the most conclusive results for the FTA of the DPA. An understanding of the FTA for other types of algorithms is an interesting direction that deserves future research.
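As a concrete illustration of the steady-state machinery referred to above, the following minimal Python sketch (not taken from the chapter) computes the stationary vector of a small ergodic chain from the eigenvector associated with the eigenvalue 1, and the absorption probabilities of a small absorbing chain via the standard fundamental-matrix route. Both transition matrices are arbitrary illustrative choices.

import numpy as np

# Ergodic chain: the steady-state vector pi solves pi P = pi
P = np.array([[0.6, 0.4, 0.0],
              [0.2, 0.5, 0.3],
              [0.1, 0.3, 0.6]])
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
pi = pi / pi.sum()                               # normalize to a probability vector
print("steady-state probabilities:", np.round(pi, 4))

# Absorbing chain in canonical form [[Q, R], [0, I]]:
# absorption probabilities B = (I - Q)^{-1} R
Q = np.array([[0.5, 0.2],
              [0.3, 0.4]])      # transitions among transient states
R = np.array([[0.3, 0.0],
              [0.0, 0.3]])      # transient -> absorbing transitions
B = np.linalg.solve(np.eye(2) - Q, R)
print("absorption probabilities:\n", np.round(B, 4))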
10.7
Hierarchical Schemes
Hierarchical schemes have recently regained popularity in the field of LA [69, 71, 73]. Arranging the search space in the form of a hierarchy generally yields superior search results when compared to a linear search. The idea of using a hierarchy is not new in the field of LA. In [71], Yazidi et al. laid the foundations of a new model for devising efficient and rapid LA schemes that are especially suitable for large numbers of actions, by introducing a novel paradigm that extends the CPA's capability to such sets of actions. Settings in which the number of actions is large are particularly challenging, since the dimensionality of the probability vector becomes consequently large, and many of its components tend to decay, within a few iterations, to values below machine precision, at which point the corresponding actions cease to be selected. In this case, the LA will be inaccurate, and the theoretical assumption that each action will be probed a large number of times will not be fulfilled, in practice, if we use the family of estimator LA. The most distinguishing characteristic of the scheme reported in [71] is the fact that while it is hierarchical, all the actions reside in the leaf nodes. Further, at each level of the hierarchy, one only needs a two-action LA, and thus, we can easily eliminate the problem of having extremely low action probabilities. By design, all the LA of the hierarchy resort to the pursuit paradigm, and therefore, the optimal action of each level trickles up toward the root. Thus, by recursively applying the "max" operator, whereby the maximum of several local maxima is the global maximum, the overall hierarchy converges to the optimal action.
10.8
Point Location Problems
The Stochastic Point Location (SPL), an LA-flavored problem, was pioneered by Oommen in [38]. Since then, it has been studied and analyzed by numerous researchers over more than two decades. The SPL involves determining an unknown "point" when all that the learning system stochastically knows is whether the current point that has been chosen is to the left or the right of the unknown point. The formal specification of the SPL problem is as follows: An arbitrary Learning Mechanism (LM) is assumed to be trying to infer the optimal value, say μ∗ ∈ [0, 1], of some variable μ. μ could also be some parameter of a black box system. Although the LM is unaware of μ∗, the SPL assumes that it is interacting with a teacher/oracle from whom it receives responses. From the perspective of the field of LA, this entity serves as the intelligent "environment," E. However, rather than provide rewards/penalties, E stochastically (or in a "faulty" manner) informs the LM of whether the current value of μ is too small or too big. Thus, E may provide the feedback to the LM to decrease μ when it should be increased, and vice versa. In the initial versions of the SPL, the model assumes that the probability of receiving an intelligent response is p > 0.5. The index "p" is an indicator of the "quality/effectiveness" of E. So, when the current μ > μ∗, the environment would accurately suggest that the LM decrease μ with probability p, or would inaccurately suggest that it increase μ with probability (1 − p), and vice versa.
The task in solving the SPL involves proposing a strategy by which one can update the present guess μ(n) at time n of μ∗ to yield the subsequent guess, μ(n + 1). One will observe that each updating rule leads to a different potential solution to the SPL (a minimal sketch of one such strategy is given after the list below). The aim in determining a proposed solution is to have it converge to a value μ(∞) whose expected value is arbitrarily close to μ∗. Such a scheme, which attains this property, is characterized as being ε-optimal. We catalog all the reported ε-optimal solutions to the SPL [38], namely:
1. The first-reported SPL solution, which proposed the problem and pioneered a solution operating in a discretized space [38]
2. The Continuous Point Location with Adaptive Tertiary Search (CPL-ATS) solution, where three LA worked in parallel to resolve it [39]
3. The extension of the latter, namely, the Continuous Point Location with Adaptive d-ARY Search (CPL-AdS), which used 'd' LA in parallel and could operate in truth-telling and deceptive environments [40]
4. The General CPL-AdS Methodology that extended the CPL-AdS to possess all the properties of the latter but
which could also operate in nonstationary environments [13]
5. The Hierarchical Stochastic Search on the Line (HSSL), where the authors proposed that the LM moved to distant points in the interval (modeled hierarchically) and specified by a tree [69]
6. The Symmetrical Hierarchical Stochastic Search on the Line (SHSSL), which symmetrically enhanced the HSSL to work in deceptive environments [73]
7. The Adaptive Step Search (ASS), which used historical information within the last three steps to determine the current step size [52]
8. The probability flux-based SPL presented by Mofrad et al. [22], which used the theory of weak estimators to perform the tasks of searching and estimating (in tandem) the effectiveness, p, of the environment
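As promised above, the following is a minimal, illustrative Python sketch of one possible discretized SPL strategy: the current guess moves one step (of size 1/N) in the direction suggested by the stochastic environment, which is correct with probability p > 0.5. It is offered in the spirit of the first-reported solution rather than as a faithful reproduction of any cited scheme; the values of mu_star, p, N, and the horizon are arbitrary illustrative choices.

import numpy as np

rng = np.random.default_rng(1)
mu_star = 0.732       # unknown point to be located (unknown to the LM)
p = 0.8               # probability of an "intelligent" (correct) response
N = 200               # resolution of the discretized interval [0, 1]
k = N // 2            # current guess is mu = k / N

for t in range(50000):
    mu = k / N
    correct_hint = +1 if mu < mu_star else -1          # true direction toward mu_star
    hint = correct_hint if rng.random() < p else -correct_hint
    k = min(max(k + hint, 0), N)                        # move one step, clipped to [0, N]

print("final estimate:", k / N, " true value:", mu_star)

Because the walk is biased toward μ∗ whenever p > 0.5, the guess concentrates around μ∗; the ε-optimal schemes cataloged above refine this basic idea with, for example, hierarchical jumps or adaptive step sizes.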
10.9
Emerging Trends and Open Challenges
Although the field of LA is relatively young, the analytic results that have been obtained are quite phenomenal. Simultaneously, however, it is also fair to assert that the tools available in the field have been far too underutilized in real-life problems. We believe that the main areas of research that will emerge in the next few years will involve applying LA to a host of application domains. Here, as the saying goes, "the sky is the limit," because LA can probably be used in any application where the parameters characterizing the underlying system are unknown and random. Some potential applications are listed below:
1. LA could be used in medicine to help with the diagnosis process.
2. LA have potential applications in Intelligent Tutorial (or Tutorial-like) systems to assist in imparting imperfect knowledge to classrooms of students, where the teacher is also assumed to be imperfect. Some initial work is already available in this regard [12].
3. The use of LA in legal arguments and the associated decision-making processes is open.
4. Although LA have been used in some robotic applications, as far as we know, almost no work has been done for obstacle avoidance and intelligent path planning of real-life robots.
5. We are not aware of any results that use LA in the biomedical application domain. In particular, we believe that they can be fruitfully utilized for learning targets and in the drug design phase.
6. One of the earliest applications of LA was in the routing of telephone calls over landlines. But the real-life application
of LA in wireless and multi-hop networks is still relatively open. We close this section by briefly mentioning that the main challenge in using LA for each of these application domains would be that of modeling what the “environment” and “automaton” are. Besides this, the practitioner would have to consider how the response of a particular solution can be interpreted as the reward/penalty for the automaton or for the network of automata.
10.10 Conclusions In this chapter, we have discussed most of the important learning mechanisms reported in the literature pertaining to learning automata (LA). After briefly stating the concepts of Fixed Structure stochastic LA, the families of continuous and discretized Variable Structure Stochastic Automata were discussed. The chapter, in particular, concentrated on the more recent results involving continuous and discretized pursuit and estimator algorithms and on the current developments concerning hierarchical LA and the Stochastic Point Location Problem. In each case, we have briefly summarized the theoretical and experimental results of the different learning schemes. See additional details on cybernetics, machine learning, and stochastic learning in Chapters 8, 9, and 72.
References 1. Agache, M.: Estimator Based Learning Algorithms. M.C.S. Thesis, School of Computer Science, Carleton University, Ottawa, Ontario, Canada, 2000 2. Agache, M., Oommen, B.J.: Generalized pursuit learning schemes: new families of continuous and discretized learning Automata. IEEE Trans. Syst. Man Cybernet. B, 32(6), 738–749 (2002) 3. Atkinson, C.R., Bower, G.H., Crowthers, E.J.: An Introduction to Mathematical Learning Theory. Wiley, New York (1965) 4. Atlasis, A.F., Saltouros, M.P., Vasilakos, A.V.: On the use of a stochastic estimator learning algorithm to the ATM routing problem: a methodology. Proc. IEEE GLOBECOM 21(6), 538–546 (1998) 5. Atlassis, A.F., Loukas, N.H., Vasilakos, A.V.: The use of learning algorithms in ATM networks call admission control problem: a methodology. Comput. Netw. 34, 341–353 (2000) 6. Atlassis, A.F., Vasilakos, A.V.: The use of reinforcement learning algorithms in traffic control of high speed networks. In: Advances in Computational Intelligence and Learning, pp. 353–369 (2002) 7. Barabanov, N.E., Prokhorov, D.V.: Stability analysis of discretetime recurrent neural networks. IEEE Trans. Neural Netw. 13(2), 292–303 (2002) 8. Barto, A.G., Bradtke, S.J., Singh, S.P.: Learning to act using realtime dynamic programming. Artif. Intell. 72(1–2), 81–138 (1995) 9. Bonassi, F., Terzi, E., Farina, M., Scattolini, R.: LSTM neural networks: Input to state stability and probabilistic safety verification. In: Learning for Dynamics and Control. PMLR, pp. 85–94 (2020)
B. John Oommen et al. 10. Bush, R.R., Mosteller, F.: Stochastic Models for Learning. Wiley, New York (1958) 11. Chang, H.S., Fu, M.C., Hu, J., Marcus, S.I.: Recursive learning automata approach to Markov decision processes,. IEEE Trans. Autom. Control 52(7), 1349–1355 (2007) 12. Hashem, M.K.: Learning Automata-Based Intelligent Tutorial-like Systems, Ph.D. Dissertation, School of Computer Science, Carleton University, Ottawa, Canada, 2007 13. Huang, D., Jiang, W.: A general CPL-AdS methodology for fixing dynamic parameters in dual environments. IEEE Trans. Syst. Man Cybern. SMC-42, 1489–1500 (2012) 14. Kabudian, J., Meybodi, M.R., Homayounpour, M.M.: Applying continuous action reinforcement learning automata (CARLA) to global training of hidden Markov models. In: Proceedings of ITCC’04, the International Conference on Information Technology: Coding and Computing, pp. 638–642. Las Vegas, Nevada, 2004 15. Krinsky, V.I.: An asymptotically optimal automaton with exponential convergence. Biofizika 9, 484–487 (1964) 16. Krylov, V.: On the stochastic automaton which is asymptotically optimal in random medium. Autom. Remote Control 24, 1114– 1116 (1964) 17. Lakshmivarahan, S.: Learning Algorithms Theory and Applications. Springer, Berlin (1981) 18. Lanctôt, J.K., Oommen, B.J.: Discretized estimator learning automata. IEEE Trans. Syst. Man Cybern. 22, 1473–1483 (1992) 19. LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. Nature 521(7553), 436–444 (2015) 20. Lee, S., Kim, J., Park, S.W., Jin, S.-M., Park, S.-M.: Toward a fully automated artificial pancreas system using a bioinspired reinforcement learning design: IEEE J. Biomed. Health Inform. 25(2), 536–546 (2020) 21. Meybodi, M.R., Beigy, H.: New learning automata based algorithms for adaptation of backpropagation algorithm pararmeters. Int. J. Neural Syst. 12, 45–67 (2002) 22. Mofrad, A.A., Yazidi, A., Hammer, H.L.: On solving the SPL problem using the concept of probability flux. Appl. Intell. 49, 2699–2722 (2019) 23. Mohri, M., Rostamizadeh, A., Talwalkar, A.: Foundations of Machine Learning. MIT Press, Cambridge (2018) 24. Misra, S., Oommen, B.J.: GPSPA : a new adaptive algorithm for maintaining shortest path routing trees in stochastic networks. Int. J. Commun. Syst. 17, 963–984 (2004) 25. Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A.A., Veness, J., Bellemare, M.G., Graves, A., Riedmiller, M., Fidjeland, A.K., Ostrovski, G., et al.: Human-level control through deep reinforcement learning. Nature 518(7540), 529–533 (2015) 26. Najim, K., Poznyak, A.S.: Learning Automata: Theory and Applications. Pergamon Press, Oxford (1994) 27. Narendra, K.S., Thathachar, M.A.L.: Learning Automata. PrenticeHall, Englewood Cliffs (1989) 28. Nian, R., Liu, J., Huang, B.: A review on reinforcement learning: introduction and applications in industrial process control. Comput. Chem. Eng. 139, 106886 (2020) 29. Norman, M.F.: On linear models with two absorbing barriers. J. Math. Psychol. 5, 225–241 (1968) 30. Nowé, A., Verbeeck, K., Peeters, M.: Learning automata as a basis for multi agent reinforcement learning. In: International Workshop on Learning and Adaption in Multi-Agent Systems, pp. 71–85. Springer, Berlin (2005) 31. Obaidat, M.S., Papadimitriou, G.I., Pomportsis, A.S.: Learning automata: theory, paradigms, and applications. IEEE Trans. Syst. Man Cybern. B 32, 706–709 (2002) 32. Obaidat, M.S., Papadimitriou, G.I., Pomportsis, A.S., Laskaridis, H.S.: Learning automata-based bus arbitration for shared-medium ATM switches. IEEE Trans. Syst. Man Cybern. 
B 32, 815–820 (2002)
33. Oommen, B.J., Christensen, J.P.R.: -optimal discretized linear reward-penalty learning automata. IEEE Trans. Syst. Man Cybern. B 18, 451–457 (1998) 34. Oommen, B.J., Hansen, E.R.: The asymptotic optimality of discretized linear reward-inaction learning automata. IEEE Trans. Syst. Man Cybern. 14, 542–545 (1984) 35. Oommen, B.J.: Absorbing and ergodic discretized two action learning automata. IEEE Trans. Syst. Man Cybern. 16, 282–293 (1986) 36. Oommen, B.J., de St. Croix, E.V.: Graph partitioning using learning automata. IEEE Trans. Comput. C-45, 195–208 (1995) 37. Oommen, B.J., Roberts, T.D.: Continuous learning automata solutions to the capacity assignment problem. IEEE Trans. Comput. C-49, 608–620 (2000) 38. Oommen, B.J.: Stochastic searching on the line and its applications to parameter learning in nonlinear optimization. IEEE Trans. Syst. Man Cybern. SMC-27, 733–739 (1997) 39. Oommen, B.J., Raghunath, G.: Automata learning and intelligent tertiary searching for stochastic point location. IEEE Trans. Syst. Man Cybern. SMC-28B, 947–954 (1998) 40. Oommen, B.J., Raghunath, G., Kuipers, B.: Parameter learning from stochastic teachers and stochastic compulsive liars. IEEE Trans. Syst. Man Cybern. B 36, 820–836 (2006) 41. Papadimitriou, G.I., Pomportsis, A.S.: Learning-automata-based TDMA protocols for broadcast communication systems with bursty traffic. IEEE Commun. Lett. 4(3)107–109 (2000) 42. Poznyak, A.S., Najim, K.: Learning Automata and Stochastic Optimization. Springer, Berlin (1997) 43. Return of cybernetics. Nat. Mach. Intell. 1(9), 385–385, 2019. https://doi.org/10.1038/s42256-019-0100-x 44. Ryan, M., Omkar, T.: On -optimality of the pursuit learning algorithm. J. Appl. Probab. 49(3), 795–805 (2012) 45. Sastry, P.S.: Systems of Learning Automata: Estimator Algorithms Applications, Ph.D. Thesis, Department of Electrical Engineering, Indian Institute of Science, Bangalore, India, 1985 46. Santharam, G., Sastry, P.S., Thathachar, M.A.L.: Continuous action set learning automata for stochastic optimization. J. Franklin Inst. 331B5, 607–628 (1994) 47. Schulman, J., Levine, S., Abbeel, P., Jordan, M., Moritz, P.: Trust region policy optimization. In: International Conference On Machine Learning, pp. 1889–1897. PMLR (2015) 48. Seredynski, F.: Distributed scheduling using simple learning machines. Eur. J. Oper. Res. 107, 401–413 (1998) 49. Shapiro, I.J., Narendra, K.S.: Use of stochastic automata for parameter self-optimization with multi-modal performance criteria. IEEE Trans. Syst. Sci. Cybern. SSC-5, 352–360 (1969) 50. Sutton, R.S., Barto, A.G.: Reinforcement Learning: An Introduction. MIT Press, Cambridge (2018) 51. Sutton, R.S., Barto, A.G., Williams, R.J.: Reinforcement learning is direct adaptive optimal control. IEEE Control Syst. Mag. 12(2), 19–22 (1992) 52. Tao, T., Ge, H., Cai, G., Li, S.: Adaptive step searching for solving stochastic point location problem. Intern. Conf. Intel. Comput. Theo ICICT-13, 192–198 (2013) 53. Thathachar, M.A.L., Oommen, B.J.: Discretized reward-inaction learning automata. J. Cybern. Inform. Sci. Spring, 24–29 (1979) 54. Thathachar, M.A.L., Sastry, P.S.: Pursuit algorithm for learning automata. Unpublished paper that can be available from the authors. 55. Thathachar, M.A.L., Sastry, P.S.: A new approach to designing reinforcement schemes for learning automata. In: Proceedings of the IEEE International Conference on Cybernetics and Society, Bombay, India, 1984 56. 
Thathachar, M.A.L., Sastry, P.S.: A class of rapidly converging algorithms for learning automata. IEEE Trans. Syst. Man Cybern. SMC-15, 168–175 (1985)
57. Thathachar, M.A.L., Sastry, P.S.: Estimator algorithms for learning automata. In: Proceedings of the Platinum Jubilee Conference on Systems and Signal Processing. Department of Electrical Engineering, Indian Institute of Science, Bangalore (1986)
58. Thathachar, M.A.L., Sastry, P.S.: Networks of Learning Automata: Techniques for Online Stochastic Optimization. Kluwer Academic, Boston (2003)
59. Tsetlin, M.L.: On the behaviour of finite automata in random media. Autom. Remote Control 22, 1210–1219 (1962). Originally in Avtomatika i Telemekhanika 22, 1345–1354 (1961)
60. Tsetlin, M.L.: Automaton Theory and Modeling of Biological Systems. Academic Press, New York (1973)
61. Unsal, C., Kachroo, P., Bay, J.S.: Simulation study of multiple intelligent vehicle control using stochastic learning automata. Trans. Soc. Comput. Simul. Int. 14, 193–210 (1997)
62. Varshavskii, V.I., Vorontsova, I.P.: On the behavior of stochastic automata with a variable structure. Autom. Remote Control 24, 327–333 (1963)
63. Vasilakos, A.V., Papadimitriou, G.: Ergodic discretized estimator learning automata with high accuracy and high adaptation rate for nonstationary environments. Neurocomputing 4, 181–196 (1992)
64. Vasilakos, A., Saltouros, M.P., Atlassis, A.F., Pedrycz, W.: Optimizing QoS routing in hierarchical ATM networks using computational intelligence techniques. IEEE Trans. Syst. Sci. Cybern. C 33, 297–312 (2003)
65. Verbeeck, K., Nowe, A.: Colonies of learning automata. IEEE Trans. Syst. Man Cybern. B (Cybernetics) 32(6), 772–780 (2002)
66. Wang, Y., He, H., Tan, X.: Truly proximal policy optimization. In: Uncertainty in Artificial Intelligence, pp. 113–122. PMLR (2020)
67. Wheeler, R., Narendra, K.: Decentralized learning in finite Markov chains. IEEE Trans. Autom. Control 31(6), 519–526 (1986)
68. Wu, L., Feng, Z., Lam, J.: Stability and synchronization of discrete-time neural networks with switching parameters and time-varying delays. IEEE Trans. Neural Netw. Learn. Syst. 24(12), 1957–1972 (2013)
69. Yazidi, A., Granmo, O.-C., Oommen, B.J., Goodwin, M.: A novel strategy for solving the stochastic point location problem using a hierarchical searching scheme. IEEE Trans. Syst. Man Cybern. SMC-44, 2202–2220 (2014)
70. Yazidi, A., Hassan, I., Hammer, H.L., Oommen, B.J.: Achieving fair load balancing by invoking a learning automata-based two-time-scale separation paradigm. IEEE Trans. Neural Netw. Learn. Syst. 32(8), 3444–3457 (2020)
71. Yazidi, A., Zhang, X., Jiao, L., Oommen, B.J.: The hierarchical continuous pursuit learning automation: a novel scheme for environments with large numbers of actions. IEEE Trans. Neural Netw. Learn. Syst. 31, 512–526 (2019)
72. Zhang, X., Jiao, L., Oommen, B.J., Granmo, O.-C.: A conclusive analysis of the finite-time behavior of the discretized pursuit learning automaton. IEEE Trans. Neural Netw. Learn. Syst. 31, 284–294 (2019)
73. Zhang, J., Wang, Y., Wang, C., Zhou, M.: Symmetrical hierarchical stochastic searching on the line in informative and deceptive environments. IEEE Trans. Syst. Man Cybern. SMC-47, 626–635 (2017)
74. Zhang, X., Granmo, O.C., Oommen, B.J.: The Bayesian pursuit algorithm: a new family of estimator learning automata. In: Proceedings of IEA/AIE 2011, pp. 608–620. Springer, New York (2011)
75. Zhang, X., Granmo, O.C., Oommen, B.J.: On incorporating the paradigms of discretization and Bayesian estimation to create a new family of pursuit learning automata. Appl. Intell. 39, 782–792 (2013)
76. Zhang, X., Granmo, O.C., Oommen, B.J., Jiao, L.: A formal proof of the ε-optimality of absorbing continuous pursuit algorithms using the theory of regular functions. Appl. Intell. 41, 974–985 (2014)
77. Zhang, X., Oommen, B.J., Granmo, O.C., Jiao, L.: A formal proof of the ε-optimality of discretized pursuit algorithms. Appl. Intell. (2015). https://doi.org/10.1007/s10489-015-0670-1
78. Wiener, N.: Cybernetics or Control and Communication in the Animal and the Machine. MIT Press, Cambridge (2019)
Dr. Sudip Misra is a professor in the Department of Computer Science & Engineering at IIT Kharagpur. He received his PhD from Carleton University, Ottawa. His research interests are in IoT, sensor networks, and learning automata applications. He is an associate editor of IEEE TMC, TVT, and SJ. He is a fellow of NASI (India), IET (UK), BCS (UK), and RSPH (UK).
Dr. B. John Oommen is a chancellor’s professor at Carleton University in Ottawa, Canada. He obtained his B. Tech. from IIT Madras, India (1975); his MS from IISc in Bangalore, India (1977); and his PhD from Purdue University, USA (1982). He is an IEEE and IAPR fellow. He is also an adjunct professor with the University of Agder in Grimstad, Norway.
Dr. Anis Yazidi received the MSc and PhD degrees from the University of Agder, Grimstad, Norway, in 2008 and 2012, respectively. He is currently a full professor with the Department of Computer Science, OsloMet-Oslo Metropolitan University, Oslo, Norway. He was a senior researcher with Teknova AS, Grimstad. He is leading the research group on Autonomous Systems and Networks.
Network Science and Automation
11
Lorenzo Zino, Baruch Barzel, and Alessandro Rizzo
Contents
11.1 Overview ..... 251
11.2 Network Structure and Definitions ..... 252
11.2.1 Notation ..... 252
11.2.2 Networks as Graphs ..... 252
11.2.3 Network Matrices and Their Properties ..... 253
11.2.4 Real-World Networks ..... 254
11.3 Main Results on Dynamics on Networks ..... 254
11.3.1 Consensus Problem ..... 254
11.3.2 Synchronization ..... 256
11.3.3 Perturbative Analysis ..... 257
11.3.4 Control ..... 258
11.4 Applications ..... 260
11.4.1 Distributed Sensing ..... 260
11.4.2 Infrastructure Models and Cyberphysical Systems ..... 263
11.4.3 Motion Coordination ..... 266
11.5 Ongoing Research and Future Challenges ..... 268
11.5.1 Networks with Adversarial and Malicious Nodes ..... 268
11.5.2 Dynamics on Time-Varying and Adaptive Networks ..... 269
11.5.3 Controllability of Brain Networks ..... 270
References ..... 271
Abstract
From distributed sensing to autonomous vehicles, networks are a crucial component of almost all our automated systems. Indeed, automation requires a coordinated functionality among different, self-driven, autonomous units. For example, robots that must mobilize in unison, vehicles or drones that must safely share space with each other, and, of course, complex infrastructure networks, such as the Internet, which require cooperative dynamics among their millions of interdependent routers. At the heart of such multi-component coordination lies a complex network, capturing the patterns of interaction between its constituting autonomous nodes. This network allows the different units to exchange information, influence each other's functionality, and, ultimately, achieve globally synchronous behavior. Here, we lay out the mathematical foundations for such emergent large-scale network-based cooperation. First, analyzing the structural patterns of networks in automation, and then showing how these patterns contribute to the system's resilient and coordinated functionality. With this toolbox at hand, we discuss common applications, from cyber-resilience to sensor networks and coordinated robotic motion.
Keywords
Cascading failures · Consensus · Coupled oscillators · Distributed control · Distributed sensing · Dynamics on networks · Networks · Pinning control · Resilience · Synchronization
L. Zino: Department of Electronics and Telecommunications, Politecnico di Torino, Torino, Italy; Engineering and Technology Institute Groningen, University of Groningen, Groningen, The Netherlands. e-mail: [email protected]
B. Barzel: Department of Mathematics, Bar-Ilan University, Ramat-Gan, Israel; The Gonda Interdisciplinary Brain Science Center, Bar-Ilan University, Ramat-Gan, Israel. e-mail: [email protected]
A. Rizzo: Department of Electronics and Telecommunications, Politecnico di Torino, Torino, Italy; Institute for Invention, Innovation, and Entrepreneurship, New York University Tandon School of Engineering, Brooklyn, NY, USA. e-mail: [email protected]
11.1
Overview
In its classic form, automation focuses on replacing human labor and, at times, also cognitive skills, with autonomous units. Robots and machines, for example, take the place of manual workers in production lines, and computers enhance 251
our ability to analyze data and conduct elaborate calculations. Yet, modern automation reaches beyond the autonomous units and scales our view up to the autonomous system. Indeed, many of our most crucial technological systems cannot be simply reduced to their technical components, but rather their true function and value are rooted in their ability to drive these components toward cooperative large-scale behavior. For example, the Internet, in technical terms, is a critical infrastructure comprising routers, cables, and computers. Yet, put together, the Internet is not a big computer or router. It is an evolving system that interacts and responds to its human environment, shares, produces, and processes information – and – in its essence – exhibits functions that supersede those of its autonomous components. More broadly, whether it involves drones, autonomous vehicles, or sensing units, modern automation is an emergent phenomenon, in which technological units – nodes – cooperate to generate higher-level functionalities. Simple building blocks come together to perform complex functions. Lacking central design, and without specific control over each and every unit, these complex multi-component systems often amaze us with their reliable and resilient functionality. Indeed, we can carelessly send an email and have no doubt that the Internet’s globally distributed routers will find an efficient path for it to reach its destination. Similarly, we turn on electrical units and need not to think about the automatic redistribution of loads across all power grid components. The basis for this seemingly uncoordinated cooperation lies not at the technology of the units themselves but rather at their patterns of connectivity – namely the network. Each of these systems has an underlying complex network, allowing the components to transfer information, distribute loads, and propagate signals, thus enabling a synchronous action among distributed, self-driven, autonomous nodes. Now, we therefore transition our viewpoint, from the specific technology of the components, to the design principles of their interaction networks. What characteristics of the network allow units to reach a functional consensus? What are the bottlenecks and Achilles’ heels of the network that mark its potential vulnerabilities? Conversely, what ensures the network’s resilience in the face of failures and attacks? In this chapter, we seek the fundamental principles that look beyond the components and allow us to mathematically characterize, predict, and ultimately control the behavior of their networks.
11.2
Network Structure and Definitions
Seeking the connectivity patterns that drive networks and automation, we first begin by laying the mathematical foundations – indeed, the language – of network science. We focus on the network characteristics that, as we show below, play
a key role in the emergent behavior of our interconnected autonomous systems.
11.2.1 Notation
We gather here the notational conventions used throughout this chapter. Unless otherwise stated, vectors and matrices have dimension n and n × n, respectively. The sets R and R+ denote the sets of real and non-negative real numbers, respectively. The sets Z and Z+ denote the sets of integer and non-negative integer numbers. The set C denotes the set of complex numbers. Given a complex number z ∈ C, we denote by |z| its modulus, by R(z) its real part, and by I(z) its imaginary part. The imaginary unit is denoted as ι. Given a vector x or a matrix M, we denote by x⊤ and M⊤ the transpose vector and matrix, respectively. We use ||x|| to denote x's Euclidean norm. The symbol 1 denotes a vector of all-1 entries, i.e., 1 = (1, . . . , 1)⊤; I denotes the identity matrix; and, more generally, diag(x) represents a matrix with vector x on its diagonal and 0 for all its off-diagonal entries. Given a set S, we denote by |S| its cardinality.
11.2.2 Networks as Graphs
The most basic mathematical model of a network is given by a graph with n nodes, indexed by positive integers, forming the node set V = {1, . . . , n}. These nodes are connected through a set of directed edges (or links), i.e., the edge set E ⊆ V × V, such that (i, j) ∈ E if and only if node i is connected to node j. An edge that connects node i to itself, (i, i), is called a self-loop. The pair of sets G = (V, E) defines the network. To track the spread of information between nodes in a network we map its pathways, linking potentially distant nodes through indirect connections. A path of length ℓ from node i to node j is a sequence of edges (i, i1), (i1, i2), . . . , (iℓ−1, j) ∈ E. In case i = j, the path is called a cycle. A path (cycle) is said to be simple if no node is repeated in the sequence of edges (apart from the first and last nodes in a cycle). The length lij of the shortest path(s) between i and j provides a measure of distance between them. This, however, is not formally a metric, as the distance from i to j may, in general, be different from that from j to i. The greatest common divisor of the lengths of all the cycles passing through a node i is denoted as pi and called the period of node i, with the convention that, if no cycles pass through i, then pi = ∞. If pi = 1, we say that node i is aperiodic. Note that, if i has a self-loop, then it is necessarily aperiodic. If there exists a path from i to j, we say that j is reachable from i. If a node i is reachable from any j ∈ V, we say that i is a globally reachable node. If i is reachable from j and j is reachable from i, we say that
the two nodes are connected. Connectivity is an equivalence relation, which determines a set of equivalence classes, called strongly connected components. If all the nodes belong to the same strongly connected component, we say that the network is strongly connected. It can be shown that the period of a node and its global reachability are class properties, that is, all the nodes belonging to the same class have the same period and, if one node is globally reachable, then all the nodes share this property. Hence, we will say that a strongly connected component is globally reachable and has period p.
For strongly connected graphs, paths provide a measure of node "centrality" in the network. Specifically, denoting by Sij the set containing all the shortest paths between two nodes i and j, we define the betweenness centrality of a node i as the (normalized) number of shortest paths traversing through i, namely
γi ∝ Σ_{j,k∈V} |{s ∈ Sjk : i ∈ s}| / |Sjk|. (11.1)
As we shall see in Sect. 11.4.2, the betweenness centrality plays a key role in determining the vulnerability of a network to cascading failures. Betweenness centrality, we emphasize, is just one of the many centrality measures that can be defined to rank the importance of nodes in a network. For example, in the next section, we present the Bonacich centrality [1].
Given a node i ∈ V, we denote by
Ni+ := {j ∈ V : (i, j) ∈ E} and Ni− := {j ∈ V : (j, i) ∈ E} (11.2)
the sets of out-neighbors and in-neighbors, respectively, namely, the set of nodes that i links to, and the nodes linking to i, respectively. The cardinality of Ni+ is i's out-degree di+ := |Ni+|, and that of Ni− is its in-degree di− := |Ni−|.
A network is said to be undirected if, for any (i, j) ∈ E, then (j, i) ∈ E. For undirected networks, much of the notation presented above simplifies. In fact, if i is reachable from j, then also j is necessarily reachable from i. As a consequence, for undirected graphs, if there exists a globally reachable node, then the network is necessarily strongly connected. Furthermore, the in- and out-neighbors coincide, and consequently the in-degree is equal to the out-degree. Hence, we can omit the prefix in/out and refer simply to the neighbors and the degree of a node. In an undirected network, distance is symmetric and each edge is also a cycle of length 2. See Fig. 11.1 for some illustrative examples of networks and their properties.
11.2.3 Network Matrices and Their Properties
Besides graphs, we can use matrices to represent networks. Given a network G = (V, E), we define a non-negative-valued matrix W ∈ R+^{n×n}, such that Wij > 0 ⇐⇒ (i, j) ∈ E. The resulting matrix W associates each edge (i, j) of the graph with a weight Wij, forming the network's weighted adjacency matrix. Together, this yields the weighted network, denoted by the triplet G = (V, E, W). For each node i ∈ V, we denote by wi+ := Σ_{j∈V} Wij and wi− := Σ_{j∈V} Wji its weighted out- and in-degrees, respectively. See Fig. 11.1 for an example.
Often the weights Wij capture transition rates between nodes, and hence the weight matrix W is assumed to be a stochastic matrix, in which all rows sum to 1. For such matrices, the Perron-Frobenius theory can be used to study the spectral properties of W [2]. An immediate observation is that 1 is an eigenvalue of W, associated with the right-eigenvector 1. All the other eigenvalues of W are not larger than 1 in modulus. Further results can be established, depending on the connectivity properties of the network,
Fig. 11.1 Three examples of networks: a) an undirected network, b) a directed network, and c) a weighted (directed) network. The network in a) is strongly connected and aperiodic, that is, p = 1. Aperiodicity can be verified by observing that through node 7 we have the cyan cycle c1 = (7, 6), (6, 7) and the green cycle c2 = (7, 5), (5, 8), (8, 7), of length equal to 2 and 3, respectively. Node 6 has a self-loop and degree d6 = 2. In b), there are four strongly connected components, highlighted in different colors: {6}, {5, 7, 8}, {1, 2}, and {3, 4}; the latter (the cyan one) is globally reachable and has period p3 = p4 = 2. Node 1 has out-degree d1+ = 2 and in-degree d1− = 1. In c), we show the network in b) equipped with a weight matrix. Note that weight W12 = 1, and that node 1 has weighted out-degree w1+ = 3 and weighted in-degree w1− = 2 (out-going and in-going edges are denoted in green and cyan, respectively).
which can be directly mapped into properties of the matrix W. For instance, strong connectivity of the network yields irreducibility of W, which guarantees that the eigenvalue 1 is simple, and it admits a left-eigenvector π with strictly positive entries. This result can be extended to networks with a unique globally reachable strongly connected component, whereby the left-eigenvector π has strictly positive entries corresponding to the nodes that belong to the component, whereas the other entries are equal to 0 (see, e.g., [3]). Note that, under these hypotheses, we have
lim_{t→∞} W^t = 1π⊤. (11.3)
Such repeated multiplications of W, as t → ∞, capture the convergence of a linear process driven by W. Therefore, the left-eigenvector π, the Bonacich centrality of the weighted network, plays a key role in determining the consensus point of linear averaging dynamics, as we will discuss in Sect. 11.3.1.
Next, we define the weighted Laplacian matrix as
L = diag(w+) − W, (11.4)
where w+ is the vector that assembles the weighted out-degrees of all nodes. In simple terms, the diagonal entries of the Laplacian contain the weighted out-degree of each node, while the off-diagonal entries are equal to −Wij, the sign inverse of the (i, j) weight. As opposed to W, which contains the entire information on the network structure, in the Laplacian the information on the presence and weight of the self-loops is lost. By design, the rows of L sum to 0. Consequently, the all-1 vector, a right eigenvector of the Laplacian, now has eigenvalue 0. For stochastic matrices, as all rows sum to one, the Laplacian reduces to L = I − W. Hence, the two matrices share the same eigenvectors ξ1, . . . , ξn, and their eigenvalues are shifted by 1, namely, if μk is an eigenvalue of W, then λk = 1 − μk is an eigenvalue of L. Therefore, all the spectral properties described above for stochastic matrices W apply directly to the study of the Laplacian spectrum.
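The following is a minimal Python sketch of the matrix objects introduced in this section: a weighted adjacency matrix, the weighted in- and out-degrees, and the Laplacian of (11.4). The 4-node weighted digraph is an arbitrary illustrative choice and is not the network of Fig. 11.1.

import numpy as np

W = np.array([
    [0.0, 2.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 3.0],
    [1.0, 0.0, 0.0, 1.0],
    [0.0, 2.0, 0.0, 0.0],
])

w_out = W.sum(axis=1)          # weighted out-degrees w_i^+
w_in = W.sum(axis=0)           # weighted in-degrees  w_i^-
L = np.diag(w_out) - W         # weighted Laplacian of (11.4)

print("weighted out-degrees:", w_out)
print("weighted in-degrees :", w_in)
print("Laplacian row sums  :", L.sum(axis=1))            # all zero by construction
print("Laplacian eigenvalues:", np.round(np.linalg.eigvals(L), 4))
# For a row-stochastic W (rows summing to 1), L reduces to I - W, and the
# eigenvalues of L are 1 minus those of W, as noted in the text.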
11.2.4 Real-World Networks
Social, biological, and technological networks have been extensively studied over the past two decades, uncovering several widespread characteristics, universally observed across these different domains. For example, most real networks exhibit extremely short paths between all connected nodes, with the average shortest path length ℓ̄ often scaling as ℓ̄ ∼ log n, a slow logarithmic growth with the network size [1]. Another universal feature, crossing domains of inquiry, is degree-heterogeneity. This is captured by P±(d), the probability for a randomly selected node to have an in-degree di− = d (out-degree di+ = d). In many real-world networks, these probabilities are fat-tailed, describing a majority of low-degree nodes coexisting alongside a small minority of hubs, which can have orders of magnitude more neighbors than the average. According to empirical observations [4], several real-world degree distributions in technological and biological networks can be accurately approximated by a power law of the form P±(d) ∼ d^{−γ}, describing a collection of degrees that lacks a typical scale, i.e., scale-free networks [1]. The exponent γ plays a key role in shaping the degree distribution, whereby smaller values of γ yield more heterogeneous and disperse distributions. In particular, power-law distributions with exponent γ ∈ [2, 3), which are often observed in real-world networks, exhibit a constant average degree, whereas the variance grows with the network size. Other real-world networks, including many social networks, exhibit weaker scale-free properties, which can be captured, for instance, by log-normal distributions [5]. Finally, another key feature of many real-world networks, especially in the social and biological domains, is the tendency of nodes to create clusters, whereby two nodes that are connected have a higher probability of having other connections "in common." Such a property, in combination with the slow logarithmic growth of the shortest paths, characterizes the so-called small-world network models, which have become popular in network science in the past few decades [6].
11.3 Main Results on Dynamics on Networks
Up to this point, we have discussed the characterization of the network's static structure – who is connected to whom. However, networks are truly designed to capture dynamic processes, in which nodes spread information and influence each other's activity [7, 8]. Hence, we seek to use the network to map who is influenced by whom. We, therefore, discuss below dynamics flowing on networks.
11.3.1 Consensus Problem
Since the second half of the twentieth century, the consensus problem has attracted growing interest from the systems and control community. The reason for such interest is that the consensus problem, initially proposed by J. French and M.H. De Groot as a model for social influence [9, 10], has found a wide range of applications, encompassing opinion dynamics [11, 12], distributed sensing [13], and formation control [14]. In its very essence, the problem consists of studying whether a network of dynamical systems is able to reach a common state among its nodes in a distributed fashion, via pairwise exchanges of information and lacking any sort of centralized entity. In the following, we provide a
formal definition of the problem and illustrate some of its main results. Here, our nodes represent units (also called agents) V = {1, . . . , n}, modeling, for instance, individuals, sensors, or robots. Agents are connected through a weighted network G = (V, E, W), where W is a stochastic matrix. Each agent i ∈ V is characterized by a state variable xi(t) ∈ R, which, at every time-step, is updated to the weighted average of the states of its neighboring agents - hence the consensus dynamics. The consensus problem can be formulated either in a discrete-time framework (t ∈ Z+), or in continuous time (t ∈ R+). In discrete-time, the simplest state update rule is formulated via
xi(t + 1) = Σ_{j∈V} Wij xj(t), (11.5)
or, in compact matrix form, as
x(t + 1) = Wx(t), (11.6)
where x(t) is an n-dimensional vector capturing all the agents' states. In the continuous-time version, (11.5) and (11.6) are formulated using the weighted Laplacian matrix as
ẋ(t) = −εLx(t), (11.7)
where ε > 0 is a positive constant that determines the rate of the consensus dynamics. One of the main goals of the consensus problem is to study how the topological properties of the network influence the asymptotic behavior of x(t). In particular, the key question is to determine under which conditions the states of the agents converge to a common quantity, reaching a consensus state x∗ . Perron-Frobenius theory [2] provides effective tools to study the consensus problem in terms of the spectral properties of the weight matrix W, establishing conditions that can be directly mapped into connectivity patterns of the network. Specifically, it is shown that a consensus state is reached for any initial condition x(0) if and only if the graph described by W has a unique globally reachable strongly connected component. In the case of discrete-time consensus, it is also required that such a strongly connected component is aperiodic [3]. An extremely important consequence of this observation, especially for applications seeking to drive agents toward consensus, is that such convergence is guaranteed if the network is strongly connected and, for discrete-time consensus, if at least one self-loop is present. This captures rather general conditions to reach consensus. A second key result obtained through the PerronFrobenius theory touches on the characterization of the consensus state. Specifically, given the initial conditions
x(0) and the (unique) left eigenvector π of W associated with the unit eigenvalue, the consensus point coincides with the weighted average
x∗ = π⊤x(0) = Σ_{i∈V} πi xi(0). (11.8)
More specifically, each agent contributes to the consensus state proportionally to its Bonacich (eigenvector) centrality. This result has several important consequences. Notably, agents that do not belong to the unique globally reachable aperiodic strongly connected component do not contribute to the formation of the consensus state, as their centrality is equal to 0 [3]. A special case of the consensus problem is when the consensus is driven toward the arithmetical average of the initial conditions. This is observed if and only if matrix W is doubly stochastic (a trivial case occurs when the matrix is symmetric). A simulation of a consensus dynamics with W doubly stochastic is illustrated in Fig. 11.2b.
Consensus state x∗ captures, when indeed reached, the long-term state of the system. Next, we seek to estimate the rate of convergence to this consensus. For a review of the main findings, see, e.g., [15]. Given the linear nature of the consensus dynamics, the convergence rates are closely related to the eigenvalues of the stochastic matrix W. Intuitively, the speed of convergence of the consensus dynamics depends on the speed of convergence of matrix W^t to 1π⊤, which is determined by the second largest eigenvalue in modulus, denoted by ρ2. Under the convergence conditions for the discrete-time consensus dynamics, we recall that ρ2 < 1. For symmetric matrices W, all the eigenvalues are real and positive, so ρ2 is the second largest eigenvalue. Following [16], we can directly write the speed of convergence of the discrete-time consensus as
||x(t) − 1x∗|| ≤ ρ2^t ||x(0)||. (11.9)
A similar expression can be obtained for the continuoustime model in (11.7). For non-symmetric matrices, the result is slightly more complicated, due to the potential presence of non-trivial Jordan blocks. More details and an explicit derivation of the speed of convergence for generic matrices W can be found in [3]. Building on these seminal results, the systems and control community has extensively studied the consensus problem toward designing algorithms to converge to desired consensus points and expanding its original formulation to account for a wide range of extensions. These extensions include, but are not limited to, asynchronous and time-varying update rules [17, 18], the presence of antagonistic interactions [19], quantized communications [20], and the investigation of different operators other than the weighted average, such as maximum and minimum operators [21].
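To illustrate the basic results above, the following minimal Python sketch iterates the discrete-time consensus dynamics (11.6) on a small strongly connected network with a self-loop and compares the outcome with the weighted average π⊤x(0) of (11.8). The weight matrix and initial states are arbitrary illustrative choices.

import numpy as np

# Row-stochastic weight matrix of a strongly connected network with a self-loop
W = np.array([
    [0.5, 0.5, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],
    [0.3, 0.0, 0.0, 0.7],
    [0.0, 0.4, 0.6, 0.0],
])

x = np.array([1.0, -2.0, 4.0, 0.5])    # initial states x(0)
x0 = x.copy()
for _ in range(200):                    # iterate x(t+1) = W x(t)
    x = W @ x

# Bonacich centrality: left eigenvector of W for eigenvalue 1, normalized to sum 1
eigvals, eigvecs = np.linalg.eig(W.T)
pi = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
pi = pi / pi.sum()

print("states after 200 steps        :", np.round(x, 6))
print("predicted consensus pi^T x(0) :", np.round(pi @ x0, 6))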
Fig. 11.2 Consensus dynamics and synchronization. We consider six dynamical systems connected through the 3-regular network illustrated in a). All the weights are Wij = 1/3, as in (11.20). In b), we show the trajectories of a continuous-time consensus dynamics, which converge to the average of the initial conditions. In c) and d), we illustrate the trajectories of six Rössler oscillators, with equations ẋi = −yi, ẏi = xi + 0.2yi, żi = 0.2 + zi(xi − 0.2) (for the sake of readability, we plot only the first variable xi for each of the dynamical systems; a similar behavior is observed for the other variables). In c), the oscillators are not coupled and the trajectories are not synchronized. In d), the dynamical systems are coupled with ε = 0.3 and G(xj(t)) = [xj(t), 0, 0]⊤, leading the trajectories to the synchronization manifold.
11.3.2 Synchronization
The consensus problem captures a cooperative phenomenon, in which agents reach a common state in the absence of centralized control. More broadly, this problem is part of a general class of challenges, consisting of synchronizing a set of coupled dynamical systems in a distributed fashion toward a common trajectory. From a historical perspective, and also due to its applicability to real-world settings [22], the synchronization challenge is often formulated in continuous time. In this vein, synchronization focuses on a state vector associated with each agent, xi ∈ Rd, rather than a scalar state variable. In the absence of a coupling network, these state variables evolve according to a generic nonlinear function F : Rd → Rd, capturing each individual node's self-dynamics. The couplings via Wij then introduce a distributed, potentially nonlinear averaging dynamics, regulated by a function G : Rd → Rd. Overall, the state of agent i is governed by the differential equation
ẋi(t) = F(xi) − ε Σ_{j∈V} Lij G(xj(t)), (11.10)
where ε ≥ 0 is a non-negative parameter that quantifies the strength of the coupling between agents. Setting the function F null and G equal to the identity, (11.10) reduces to the consensus dynamics in (11.7). Figure 11.2c,d illustrate an example of a set of Rössler oscillators [22] in the absence and in the presence of coupling, respectively, on the network illustrated in Fig. 11.2a. The goal of the synchronization problem consists of determining the conditions that the nonlinear functions F and G and the network - here, represented by the Laplacian matrix L - have to satisfy in order to guarantee that the vector
state of each node xi(t) converges to a common trajectory x∗(t). Formally, this maps to studying the stability of the synchronization manifold
S := {x(t) : xi(t) = xj(t) = x∗(t), ∀ i, j ∈ V}. (11.11)
From (11.10) and from the zero-sum property of L, it is straightforward to observe that the synchronization manifold S is invariant, that is, if x(t∗) ∈ S for some t∗ ≥ 0, then the system will stay synchronized for all t ≥ t∗. Hence, the problem of the synchronization of coupled dynamical systems ultimately boils down to determining the asymptotic stability conditions for such a manifold. The seminal work by [23] introduced an effective and elegant technique to study the local stability of the synchronous state, through the study of the stability of the modes transverse to it. The approach, based on a master stability function (MSF), decouples the synchronization problem in two independent sub-problems, one focusing on the node's dynamics, the other on the network topology. First, the dynamical system in (11.10) is linearized about the synchronous state x∗(t) by defining δxi(t) = xi(t) − x∗(t), and writing the linearized equations
δẋi = JF(x∗)δxi − εJG(x∗) Σ_{j∈V} Lij δxj, (11.12)
with i ∈ V, and where JF ∈ Rd×d and JG ∈ Rd×d are the Jacobian matrices of F and G, respectively. The system of equations in (11.12) can be further decomposed into n decoupled equations by projecting them along the eigenvectors of the Laplacian L, obtaining
ξ̇k = [JF(x∗) − ελk JG(x∗)]ξk, (11.13)
where λ1, . . . , λn are the eigenvalues of L, in ascending order in modulus. Assuming W to be strongly connected, we have λ1 = 0 and λ2 ≠ 0. Hence, the first term k = 1 of (11.13) coincides with the perturbation parallel to the synchronization manifold, and all the other n − 1 equations correspond to its transverse directions. Therefore, the synchronization manifold is stable if all the equations k = 2, . . . , n in (11.13) damp out. We define the MSF λmax(α) as the function that associates each complex number α ∈ C with the maximum Lyapunov exponent of ξ̇ = [JF(x∗) − αJG(x∗)]ξ. Hence, if λmax(ελk) < 0, for all k = 2, . . . , n, the synchronization manifold is locally stable. Interestingly, the MSF allows us to decouple the effects of the dynamics and the role of the network. The nodal dynamics, captured via JF and JG, fully determines the critical region of the complex plane in which λmax(α) < 0; the network topology, in contrast, determines the eigenvalues λk of L. Synchronization is attained if all the eigenvalues belong to the critical region. Hence, the MSF can be effectively utilized
to design a network structure to synchronize a general set of dynamical systems in a distributed fashion. Generalizations of the MSF approach have been proposed to deal with many different scenarios, including moving agents [24], time-varying and stochastic couplings [25], and more complex network structures such as multi-layer networks [26, 27] and simplicial complexes [28]. The main limitations for the use of the MSF are that, in general, the computation of the largest Lyapunov exponent can be performed only numerically, thus limiting the analytical tractability of the problem, and that, using the MSF, only local stability results can be established. Different approaches to study the stability of the synchronization manifolds have been developed in the last two decades by means of Lyapunov methods [29] and contraction theory [30]. While these approaches have allowed specific rigorous, global convergence results, they typically provide sufficient conditions for synchronization that, in general, are more conservative than those established via MSF-based approaches.
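The following minimal Python sketch illustrates how the MSF decoupling is used in practice: the network enters only through the Laplacian eigenvalues λ2, . . . , λn, which must all be placed, after scaling by ε, inside the region where the MSF is negative. The 3-regular graph is an arbitrary illustrative choice, and the stability interval (alpha_lo, alpha_hi) is a hypothetical placeholder standing in for the interval that would be computed numerically from F and G.

import numpy as np

# Undirected 3-regular graph on 6 nodes: ring edges plus "antipodal" chords
n = 6
A = np.zeros((n, n))
for i in range(n):
    for j in (i - 1, i + 1, i + 3):
        A[i, j % n] = 1.0
L = np.diag(A.sum(axis=1)) - A
lam = np.sort(np.linalg.eigvalsh(L))      # 0 = lambda_1 <= lambda_2 <= ...

alpha_lo, alpha_hi = 0.2, 4.5             # hypothetical interval where the MSF is negative
# All eps * lambda_k (k >= 2) must fall inside (alpha_lo, alpha_hi):
eps_min = alpha_lo / lam[1]
eps_max = alpha_hi / lam[-1]
print("Laplacian eigenvalues:", np.round(lam, 3))
if eps_min < eps_max:
    print(f"synchronizing couplings: eps in ({eps_min:.3f}, {eps_max:.3f})")
else:
    print("no single eps places every transverse eigenvalue in the stable region")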
11.3.3 Perturbative Analysis
In case (11.10) exhibits a stable fixed-point x∗, the challenge is to understand its fixed-point dynamics [7, 8, 31]. This can be observed by studying its response to small perturbations, either structural – removing nodes/links or altering weights, or dynamic – small activity perturbations x∗ → x∗ + δx(t). Such a perturbation instigates a signal S, which then propagates through all network pathways to impact the state of all other nodes. First, we wish to track the signal's spatiotemporal propagation, namely how much time it will take for the signal to travel from a source node i to a target node j. This can be done by extracting the specific response time τi of all nodes to the incoming signal, then summing over all subsequent responses along the path from i to j. The resulting propagation patterns uncover a complex interplay between the weighted network structure Wij and the nonlinear functions F and G, resulting in three dynamic classes [32]:
• Ultra-fast. For certain dynamics, the highly connected hubs respond rapidly (τi small), and hence, as most pathways traverse through them, they significantly expedite the signal propagation through the network.
• Fast. In other dynamics, the degrees play no role, the system exhibits a typical response time τi for all nodes, and the signal propagation is limited by the shortest path length of the network.
• Ultra-slow. In this last class, hubs are bottlenecks (τi large), causing signals to spread extremely slowly.
The long-term response of the system to signal S is observed once the propagation is completed and all nodes have reached their final, perturbed state xi∗ + δxi(S). This defines the cascade C = {i ∈ V : δxi(S) > Th}, the group of all nodes whose response exceeded a pre-defined threshold Th.
Often, the cascade sizes |C| are broadly distributed - most perturbations have an insignificant impact, while a selected few may cause a major disruption [8]. A sufficiently large perturbation can lead to instability, resulting in an abrupt transition from one fixed-point, the desired state, to another, potentially undesired [33]. This can capture, for instance, a major blackout, or an Internet failure, which can be caused by infrastructure damage, i.e., structural perturbations, or by overloads, capturing activity perturbations.
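A minimal simulation sketch can make the notion of a cascade concrete. The snippet below assumes illustrative nodal and coupling functions, F(x) = −x and G(x) = tanh(x), which are not those of the works cited above; it applies a permanent perturbation to a source node, integrates the dynamics to a new steady state, and returns the cascade C of nodes whose deviation exceeds the threshold Th.

```python
import numpy as np

def cascade_set(W, source, S, Th=1e-2, eps=0.2, T=50.0, dt=0.01):
    """Nodes whose new steady state deviates from the fixed point x* = 0 by more
    than Th, after a permanent perturbation of size S at `source`.
    Illustrative dynamics: x_i' = -x_i + eps * sum_j W_ij tanh(x_j) + S*1{i=source}."""
    n = W.shape[0]
    x = np.zeros(n)                      # start at the unperturbed fixed point
    bias = np.zeros(n)
    bias[source] = S                     # permanent perturbation at the source node
    for _ in range(int(T / dt)):         # forward-Euler integration to steady state
        x = x + dt * (-x + eps * W @ np.tanh(x) + bias)
    return {i for i in range(n) if abs(x[i]) > Th}

# Toy usage: a directed chain 0 -> 1 -> 2 -> 3 -> 4, perturbed at node 0.
W = np.diag(np.ones(4), k=-1)            # W[i, i-1] = 1: node i-1 influences node i
print(cascade_set(W, source=0, S=1.0))   # {0, 1, 2}: the perturbation decays along the chain
```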
11.3.4 Control
Fig. 11.3 Schematic of a pinning control scheme. The master node (in black) is connected to the two pinned nodes (in red), namely 1 and 6
Consensus and synchronization are emergent phenomena, that is, their onset is spontaneously achieved over time by the network nodes. Similarly, perturbation analysis captures the network's response to naturally occurring disturbances. With this in mind, the natural next step is to seek controlled interventions to drive the network toward a desired dynamic behavior, e.g., an equilibrium point, a limit cycle, an attractor, etc. Typically, we wish to achieve such control with limited resources, i.e., acting upon only a subset of the network nodes or links, or using a limited amount of energy. Network control can be achieved via off-line strategies, such as controlling the nodal dynamics or their couplings, or on-line strategies, through adaptive rewiring of the network structure [34]. Here, to outline the principles of control, we focus on a representative problem, where one seeks to synchronize a network system whose uncontrolled dynamics in (11.10) does not naturally converge to a synchronous solution [35]. One of the first and most successful methods to synchronize a network is pinning control, originally proposed in [36], where one controls the entire network by driving just a small number of nodes. First, we designate a master node, whose state is fully under our control, namely, we can freely determine its dynamics by designing a suitable control input signal. The master node is then linked to a small number of pinned nodes 1, . . . , m, influenced indirectly, via their links to the master node, by our input signal, as illustrated in Fig. 11.3. These pinned nodes follow

ẋi(t) = F(xi(t)) − ε Σ_{j∈V} Lij G(xj(t)) + ui(t) ,    (11.14)
where ui(t) represents the control exerted by the master node on the pinned node i ∈ {1, . . . , m}. The remaining n − m nodes continue to evolve according to (11.10). The fundamental question is how many, and which, nodes should be pinned in order to control the network. Clearly, a general answer to this problem does not exist. Indeed, such an answer inevitably depends on the properties of the nonlinear
functions F and G, on the weighted network topology Wij, and on the form of the control inputs ui. Therefore, the first approach to tackle this challenge is under linear dynamics, mapping (11.14) to

ẋ(t) = Wx(t) + Bu(t) ,    (11.15)
where W is the weighted network matrix, as in the consensus problem in (11.7), and B ∈ R^{n×m} is a rectangular matrix routing the control inputs u(t) to the m pinned nodes. The linear equation in (11.15) reduces the control problem to a structural one, seeking the conditions for Wij that guarantee controllability [37]. With a single pinned node (m = 1), the controllability of the network is determined by the presence of a subgraph of G that is a cactus, that is, a subgraph that spans all the nodes of the network and in which every edge belongs to at most one simple cycle [37]. Other results based on structural controllability are summarized in [38]. For the general case with m ≥ 1, standard control-theoretic tools may be leveraged, specifically Kalman's controllability rank condition, showing that (11.15) is controllable if the matrix C := [B, WB, W^2 B, . . . , W^{n−1} B] has full rank, i.e., Rank(C) = n. The challenge is to verify Kalman's condition, which becomes infeasible for a large-scale real-world network. To address this, it was shown that this condition can be mapped to the more scalable graph-theoretic concept of maximal matching [39], where one seeks the largest set of edges with different start and end nodes. Linking the size of the maximal matching to the number of nodes that should be pinned to control the network, this criterion helps predict the controllability of (11.15). Controllability is found to be primarily driven by the network's degree distribution P±(d), independent of the weights Wij, and, counter-intuitively, often driven by the low-degree nodes, especially for directed networks.
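The Kalman rank condition itself is straightforward to check numerically for moderately sized networks. The following sketch builds the input matrix B for a given set of pinned nodes and tests whether Rank(C) = n; the chain example merely illustrates how controllability depends on which node is pinned.

```python
import numpy as np

def kalman_controllable(W, pinned):
    """Kalman rank test for x' = Wx + Bu, with B routing one input to each pinned node."""
    n = W.shape[0]
    B = np.zeros((n, len(pinned)))
    for col, node in enumerate(pinned):
        B[node, col] = 1.0
    blocks, M = [], B
    for _ in range(n):                   # C = [B, WB, W^2 B, ..., W^(n-1) B]
        blocks.append(M)
        M = W @ M
    return np.linalg.matrix_rank(np.hstack(blocks)) == n

# Toy usage: a directed chain 0 -> 1 -> 2 -> 3 is controllable by pinning its
# root node, but not by pinning the last node.
W = np.diag(np.ones(3), k=-1)
print(kalman_controllable(W, pinned=[0]))   # True
print(kalman_controllable(W, pinned=[3]))   # False
```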
Next, if indeed the network is controllable, we wish to design the control inputs ui(t) to drive it to the desired dynamic state. For a detailed discussion, we refer to the review papers [40, 41]. A common approach employs a feedback strategy, setting

ui(t) = ki G(x∗(t) − xi(t)) ,    (11.16)
where x∗(t) is the desired dynamic state and ki ≥ 0 are the control gains [36]. For example, such feedback control can steer the system to the stationary state x∗(t) = x̄, achieving consensus toward a desired equilibrium via control rather than through spontaneous emergence as in (11.7). In undirected scale-free networks, this strategy shows that controlling the hubs is most effective for reaching synchronization, as it requires a smaller number of nodes to be pinned [36]. The specific feedback structure in (11.16) has enabled the scientific community to establish several important results in network control, even though a complete theory is still lacking. In fact, the expression for ui(t) in (11.16) allows us to write (11.14) in a form similar to (11.10), with an additional term that accounts for the master node. The MSF approach used to study synchronization for the uncontrolled problem can then be extended to tackle this scenario. Specifically, in [42], the authors propose to add an (n + 1)-th equation for the master node, that is, simply ẋn+1 = F(xn+1). Consequently, all the n + 1 equations can be written in the form of (11.10), where the Laplacian matrix L is substituted by an augmented Laplacian matrix M, in which a row and a column corresponding to the master node are added. All the entries in this added row are equal to zero, while the added column contains the control gains. Furthermore, the corresponding control gains are subtracted from the diagonal of the augmented Laplacian. The MSF approach can then be applied to the augmented system of n + 1 equations to determine the stability, depending on k1, . . . , km. One of the main limitations of the approaches described so far is that they rely on the assumption that all the dynamical systems on the network nodes are identical, which ensures stability of the synchronous trajectory once it has been attained. Unfortunately, in many real-world applications, we face the problem of synchronizing networks of heterogeneous dynamical systems, often in the presence of noise and disturbances. To overcome these problems, in [43, 44], the authors proposed to use a coupling term that includes a distributed integral action. Building on this intuition, in [45], a distributed proportional-integral-derivative (PID) control protocol was proposed to reach consensus in a network of linear heterogeneous systems in the presence of disturbances. In its simplest formulation, we can assume that each node of the network is a unidimensional dynamical system with linear dynamics. Hence, the proposed model can be formulated as follows:
ẋi(t) = ai xi(t) + bi + ui(t) ,    (11.17)
for all i ∈ V, where ai ∈ R is a constant that determines the uncoupled dynamics of the system, bi ∈ R is a constant disturbance (e.g., an exogenous input/output acting on the dynamical system i), and ui(t) is the control action applied to node i at time t. In the standard coupling presented so far, the control action coincides with the continuous-time consensus dynamics, that is,

ui(t) = −ε Σ_{j∈V} Lij xj(t) .    (11.18)
However, the control action in (11.18) is not able to guarantee convergence to consensus in many scenarios where the parameters ai and bi are heterogeneous. In [45], a controller in the following form has been proposed:

ui(t) = − Σ_{j∈V} Lij ( α xj(t) + β ∫_0^t xj(s) ds + γ ẋj(t) ) ,    (11.19)
where α, β, γ ∈ R+ are constant parameters that weigh the three terms in the controller. Note that, besides utilizing the information from the state of the neighboring nodes xj(t), the proposed controller also uses their integrals and derivatives. The analysis of (11.19) has allowed the authors of [45] to explicitly derive conditions for the parameters α, β, and γ to control the system toward a consensus. These conditions, besides depending on the disturbances bi, depend on the network structure through the second largest eigenvalue in modulus of W. Recently, several efforts have started to address the control problem also from an energy point of view, namely, toward the development of a minimum-energy theory for the control of network systems. Several results have been established for linear systems in the form of (11.15). In [46], the authors establish upper and lower bounds on the control cost, defined in terms of the smallest eigenvalue of the Gramian matrix, as scaling laws of the time required to reach synchronization. Further development in this direction has been proposed in [47], where rigorous bounds on the smallest eigenvalue of the Gramian matrix are derived as functions of the eigenvalues of the matrix W and of the number of pinned nodes m, establishing non-trivial trade-offs between the number of pinned nodes and the control cost. Recently, an interesting discussion on minimum-energy control has been presented in [48]. Therein, using an approach based on standard control-theoretic tools, the authors express the control energy as a function of the real part of the eigenvalues of W. An interesting heuristic has then been proposed for generic (directed) networks, according to which the cost for controlling a network is reduced if the pinned nodes are chosen as those with large (weighted) out-degree (wi^+) but small in-degree (wi^−).
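As a rough numerical companion to these energy considerations, the sketch below computes the smallest eigenvalue of the controllability Gramian for a pinned linear network system. Since the infinite-horizon Gramian exists only for a stable state matrix, the sketch shifts W by a stabilizing term, an assumption made purely for illustration; smaller eigenvalues indicate directions in state space that are more expensive to control.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def min_gramian_eig(W, pinned, shift=None):
    """Smallest eigenvalue of the controllability Gramian of x' = Ax + Bu,
    where A = W - shift*I is made Hurwitz by a stabilizing shift (an assumption
    used only in this sketch) and B routes one input to each pinned node."""
    n = W.shape[0]
    if shift is None:
        shift = np.max(np.linalg.eigvals(W).real) + 1.0
    A = W - shift * np.eye(n)
    B = np.zeros((n, len(pinned)))
    for col, node in enumerate(pinned):
        B[node, col] = 1.0
    G = solve_continuous_lyapunov(A, -B @ B.T)   # A G + G A^T + B B^T = 0
    return float(np.min(np.linalg.eigvalsh(G)))

# Toy usage: compare pinning sets with large vs. small (row-sum) out-degree.
rng = np.random.default_rng(0)
W = (rng.random((20, 20)) < 0.15).astype(float)
np.fill_diagonal(W, 0.0)
order = np.argsort(W.sum(axis=1))                     # nodes sorted by out-degree
print(min_gramian_eig(W, pinned=list(order[-3:])),    # three largest out-degrees
      min_gramian_eig(W, pinned=list(order[:3])))     # three smallest out-degrees
```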
11.4 Applications
Many of our modern technological applications rely on networks. Such systems, e.g., the Internet or the power grid, behave almost as natural phenomena, often lacking a blueprint or central control, hence functioning based on the emergent cooperation between their components. Therefore, the network principles of consensus, synchronization, and control play a key role in an array of modern-day applications, from distributed sensing to cyber-resilience and coordinated robotic motion.
11.4.1 Distributed Sensing

Sensor networks are a key application of the theoretical findings discussed in Sect. 11.3. Since the 1990s, the use of wired or wireless networks of sensors able to exchange data has become widespread in technological and industrial systems [49]. The Internet of Things has pushed this technology further, providing the market with a plethora of inexpensive devices able to perform sensing, computation, and communication at a negligible cost [49]. One of the most common applications of sensor networks is distributed estimation, that is, the joint estimation of the value of a quantity of interest, with the advantage of reducing the measurement uncertainty, without requiring a centralized processor to gather all the sensors' measurements and centrally perform the desired estimation. There are several advantages to performing such estimations in a distributed fashion, from high robustness against noise, errors, and malicious attacks, to lower costs by relinquishing the need for a central processor. Most importantly, a network of redundant sensors performing under distributed operations avoids the risk of a single point of weakness leading to systemic failure. Formally, in the distributed sensing problem, each of the n sensors is represented by a node, and the communication patterns between sensors by links, leading to the sensor network G = (V, E). Each sensor i ∈ V performs a measurement of a common quantity of interest x, whose result we denote by x̂i. The distributed sensing problem consists of designing an algorithm to estimate the actual value of the quantity in a distributed fashion, using local exchanges of information only between the linked sensors, namely sensor i can only share information with its out-neighborhood Ni+. The goal of such estimation is to converge, asymptotically or in finite time, to a common estimate x∗ of the quantity x, thus reducing the uncertainty associated with each local measurement x̂i. Assuming that the sensors are all identical and unbiased, that is, the expected value of their measurements coincides with x and they all have the same variance σ^2, the best estimate of the quantity of interest can be obtained by computing the arithmetic average of the sensors' measurements, whose variance, following the central limit theorem, approaches
σ^2/n [50]. Recalling the consensus problem formalized and discussed in Sect. 11.3.1, the arithmetic average of a set of measurements can be computed in a distributed fashion, by initializing the state of each sensor to the measured value, i.e., setting xi(0) = x̂i, and then updating the state of the sensors according to the algorithm in (11.5), with a suitable weight matrix W adapted to the sensor network, in order to ensure convergence to the average. Assuming that the network of sensors is strongly connected and that each sensor can access its own state (all the nodes have a self-loop), we know from Sect. 11.3.1 that the state of each node converges to a common quantity, precisely equal to the weighted average of the initial conditions, with weights determined by each sensor's Bonacich centrality, i.e., xi(t) → x∗ = π⊤x̂. Hence, to estimate the arithmetic average of the initial measurements, one has to design the matrix W such that its Bonacich centrality is uniform among all nodes. This can be satisfied by designing W to be doubly stochastic, having both its rows and columns sum to 1. Hence, the problem of designing an algorithm for distributed sensing ultimately boils down to designing a doubly stochastic matrix W adapted to the graph representing the sensor network. Several methods have been proposed to successfully address this problem, depending on the properties of the network. The simplest method is to select symmetric weights W = W⊤, in which case stochasticity directly implies double-stochasticity. If the network is undirected and regular, namely all nodes have the same degree di = d for all i ∈ V, this amounts to setting W with uniform weights as

Wij = 1/d ,    (11.20)

for all (i, j) ∈ E. In this case, at each time step, each sensor will average its own state with those of all its neighbors. For non-regular networks, where degrees are potentially heterogeneous, the practical solution is to use Metropolis weights, setting

Wij = 1/max{di, dj} ,    (11.21)

for all (i, j) ∈ E such that j ≠ i, and

Wii = 1 − Σ_{j∈V\{i}} Wij .    (11.22)
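The Metropolis rule is simple to implement; the sketch below builds the doubly stochastic matrix of (11.21)–(11.22) from an undirected adjacency matrix and runs the resulting consensus iteration on a small set of noisy measurements (the graph and numbers are illustrative).

```python
import numpy as np

def metropolis_weights(A):
    """Doubly stochastic weight matrix from an undirected adjacency matrix A,
    following the Metropolis rule (11.21)-(11.22)."""
    n = A.shape[0]
    deg = A.sum(axis=1)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j and A[i, j] > 0:
                W[i, j] = 1.0 / max(deg[i], deg[j])
        W[i, i] = 1.0 - W[i].sum()       # (11.22): self-weight closes the row
    return W

# Toy usage: distributed averaging of noisy measurements on a 4-node graph.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
W = metropolis_weights(A)
rng = np.random.default_rng(1)
x = 5.0 + rng.normal(0.0, 1.0, size=4)   # x_i(0): noisy measurements of the value 5
for _ in range(200):
    x = W @ x                            # consensus iteration (11.5)
print(x, x.mean())                       # every entry approaches the initial average
```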
Finally, if, in addition, the network is directed, double-stochasticity requires the use of Birkhoff's theorem for doubly-stochastic matrices [51]. Specifically, selecting a set of simple cycles C = {c1, . . . , cm} that span all the edges of the network, we let Vh be the nodes through which cycle ch passes and define the permutation matrix W^(h) associated with cycle ch as

Wij^(h) = 1   if (i, j) ∈ ch, or i = j and i ∉ Vh ,
Wij^(h) = 0   otherwise .    (11.23)

Note that each matrix W^(h) is doubly stochastic, but in general, they are not adapted to the network, since only a subset of the edges of the network are associated with non-zero entries of a permutation matrix (precisely, those associated with the corresponding cycle). Then, we construct the doubly stochastic matrix adapted to the network as

W = (1/(m + 1)) ( I + Σ_{h=1}^{m} W^(h) ) .    (11.24)
With an appropriate construction, it is now verified that the consensus dynamics will solve the distributed sensing problem, leading us to investigate the system's performance. First, we ask how fast the algorithm converges and, consequently, how to optimally interconnect a set of sensors to guarantee rapid convergence. This problem ultimately boils down to estimating the speed of convergence of the consensus algorithm and, hence, to the computation of the second largest eigenvalue of W. In addition to speed, we wish to evaluate accuracy, asking how noise and errors in each sensor's measurement propagate to the distributed estimate x∗. As we will show later on, this is also strictly related to the spectral properties of the weight matrix W.

The problem of understanding how to interconnect a set of nodes to have fast convergence to consensus is a problem of paramount importance in the design of networks of sensors and, more generally, of networked systems. The rate of convergence of the state of the nodes to the desired estimate can be bounded using (11.9), that is, as a function of ρ2, the second largest eigenvalue of W in modulus. Another important metric that measures the convergence performance of the distributed sensing algorithm is

J := Σ_{t≥0} (1/n) ||x(t) − 1x∗||^2 ,    (11.25)

which can be associated with the cost of a linear-quadratic regulator problem. Also this metric can be related to the spectrum of the matrix W. Specifically, in [52], the authors show that for a set of homogeneous unbiased sensors with variance σ^2 in the measurements, it holds

J = (σ^2/n) Σ_{i=2}^{n} 1/(1 − |μi|^2) .    (11.26)

In light of the previous two metrics, the optimal solution would be to utilize a fully connected (complete) network, where convergence is reached in one step, since each sensor can access the information of all the others and compute the average (in fact, for complete graphs it holds μ2 = · · · = μn = 0). However, complete network structures may be infeasible in large-scale systems because they require too many connections, while often sensors may be able to process only a limited number of inputs. The problem of identifying families of graphs with bounded degree that yield fast consensus has been extensively studied in the literature. In [52], the family of de Bruijn graphs has been identified as a good candidate for this goal, ensuring not only a much faster convergence rate than other topologies with bounded degree, such as lattices [53], but also finite-time convergence. Other important networks for which fast convergence results have been established include several realistic models of complex networks, including small-world networks [54, 55].

So far, we have focused on the performance of a distributed sensing algorithm in terms of its speed of convergence to consensus. Another important performance metric quantifies how the error and the noise in the initial sensors' measurements propagate to the distributed estimation. If the sensors are all equal and unbiased, we can assume that each measurement can be expressed as x̂i = x + ηi, where ηi ∼ N(0, σ^2) is a Gaussian distributed random variable with mean equal to 0 and variance σ^2. If W is symmetric (e.g., using W constructed according to (11.20) or (11.21)), then we can compute the error at the t-th iteration of the algorithm as

(1/n) E[||x(t) − 1x||^2] = (1/n) E[||W^t x(0) − 1x||^2] = (1/n) E[||W^t η||^2] = (σ^2/n) Σ_{i=1}^{n} |μi|^{2t} .    (11.27)

Note that the estimation error converges to the asymptotic value σ^2/n, which coincides with the error of a centralized estimator. The speed of convergence is again associated with the other eigenvalues of W and, in particular, with the second largest in modulus, which governs the speed of convergence to zero of the slowest mode.

In a scenario of unbiased heterogeneous sensors, the arithmetic average of the sensors' measurements may not be the best estimate of the desired quantity. In fact, assuming that the measurement of sensor i ∈ V is a Gaussian random variable with mean equal to x and variance equal to σi^2, that is, x̂i ∼ N(x, σi^2), the arithmetic average of the measurements x∗ would lead to an estimate of the desired quantity such that

x∗ ∼ N( x , (1/n^2) Σ_{i∈V} σi^2 ) ,    (11.28)

that is, the average of independent Gaussian random variables [50]. Instead, the minimum-variance estimator xbest is obtained through a weighted average, where the weight given to the measurement of sensor i is proportional to the inverse of its variance [50]. In fact, the estimator obtained with these weights has the following distribution

xbest ∼ N( x , 1 / Σ_{i∈V} (1/σi^2) ) ,    (11.29)
whose variance is always less than or equal to the one of the arithmetic mean, with equality holding only if all the variances are equal (in that case, the two averages coincide). Such an estimator can be easily implemented in a distributed fashion, without the need of re-designing the weight matrix W. In fact, the minimum-variance estimation can be computed by running two consensus dynamics in parallel, with a doubly stochastic matrix W. Specifically, each sensor i ∈ V has a two-dimensional state, initialized as

yi(0) = x̂i/σi^2   and   zi(0) = 1/σi^2 ,    (11.30)

where x̂i is the measurement performed by sensor i. Being W doubly stochastic, the consensus point of the two states will be equal to

y∗ = (1/n) Σ_{i∈V} x̂i/σi^2   and   z∗ = (1/n) Σ_{i∈V} 1/σi^2 ,    (11.31)

which implies that each sensor can calculate the minimum-variance estimator by simply computing the ratio xi(t) = yi(t)/zi(t), which converges to xbest. A comparison between a distributed sensing algorithm based on the computation of the average of the measurements and the one proposed in (11.31) is illustrated in Fig. 11.4.

The problem of understanding how noisy measurements affect the distributed estimation process becomes crucial when one adopts the distributed estimation process to reconstruct the state of a dynamical system from a set of repeated measurements obtained by a network of sensors. Such a problem has been extensively studied in the literature, and a two-stage strategy to tackle it has been originally proposed and demonstrated in [56, 57]. This strategy entails the use of a Kalman filter, which does not require any information from the other sensors, and then p updates, performed according to a consensus dynamics, as illustrated in the schematic in Fig. 11.5. In its simplest implementation, denoting by yi(t) the measurement of sensor i at time t, the estimate xi(t) is updated as
zi(t) = (1 − ℓ) xi(t) + ℓ yi(t) ,
xi(t + 1) = Σ_{j∈V} (W^p)_{ij} zj(t) ,    (11.32)

where at the first step, a Kalman filter is applied, with ℓ ∈ (0, 1) the (common) Kalman gain, producing a (local) estimate zi(t); then, the sensors average their estimates according to p consecutive steps of a consensus dynamics, using the local estimates of the neighboring sensors zj, thus producing a distributed estimate xi(t). Subsequent studies have allowed the systems and control community to derive rigorous estimations on the error of this estimation process, relating it to the eigenvalues of W, the number of consensus steps p, and the Kalman gain ℓ, and formalizing optimization problems to optimally design such an estimation process [53, 58]. Several extensions of this problem have been proposed, for instance, to address heterogeneity across the sensors [59], or to distributed estimation of unknown payloads through robotic manipulation [60].

Fig. 11.4 Comparison between two different methods to perform distributed sensing. In both simulations, 100 sensors make a noisy measurement of a quantity, which is equal to 0. The first 20 sensors have variance σi^2 = 0.01; the other 80 sensors have variance σi^2 = 1. Sensors are connected through a random regular graph with degree 6, generated with a configuration model [1], and W is defined according to (11.20). In a), we use a standard consensus dynamics; in b), we use the algorithm proposed in (11.31). We observe that the latter noticeably reduces the error in the distributed estimation

Fig. 11.5 Schematic of a distributed Kalman filter. The scheme in panel b) refers to node 1, highlighted in red in panel a)
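The two parallel consensus iterations of (11.30)–(11.31) translate almost directly into code. The following sketch assumes that a doubly stochastic weight matrix is already available (here a small hard-coded one) and returns each sensor's ratio yi(t)/zi(t), which approaches the minimum-variance estimate; the measurements and variances are illustrative.

```python
import numpy as np

def min_variance_estimate(W, measurements, variances, steps=200):
    """Ratio consensus of (11.30)-(11.31): two parallel consensus iterations
    with a doubly stochastic W; each sensor's ratio y_i/z_i tends to x_best."""
    y = np.asarray(measurements) / np.asarray(variances)   # y_i(0) = x_hat_i / sigma_i^2
    z = 1.0 / np.asarray(variances)                        # z_i(0) = 1 / sigma_i^2
    for _ in range(steps):
        y, z = W @ y, W @ z
    return y / z                                           # one (common) estimate per sensor

# Toy usage: a doubly stochastic matrix on a 4-node ring with self-loops.
W = np.array([[0.5 , 0.25, 0.0 , 0.25],
              [0.25, 0.5 , 0.25, 0.0 ],
              [0.0 , 0.25, 0.5 , 0.25],
              [0.25, 0.0 , 0.25, 0.5 ]])
x_hat = np.array([0.05, -0.10, 0.90, -1.20])    # noisy measurements of x = 0
sigma2 = np.array([0.01, 0.01, 1.00, 1.00])     # two precise and two noisy sensors
print(min_variance_estimate(W, x_hat, sigma2))  # entries close to the weighted estimate
```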
11.4.2 Infrastructure Models and Cyberphysical Systems

The opportunity of mathematically modeling a set of possibly heterogeneous dynamical systems, evolving and interacting over a topology of any complexity, makes networks an intuitive modeling paradigm for cyberphysical systems (CPS) and infrastructure models. CPSs are complex interconnected systems of different nature (mechanical, electrical, chemical, etc.), governed by software components, where the behavior of hardware and software is usually deeply intertwined. They usually operate over multiple spatial and temporal scales and require transdisciplinary knowledge to be studied and managed. Typical examples of CPSs are industrial control systems, robotics systems, autonomous vehicles, avionics systems, medical monitoring, tele-surgery, and the Internet of Things, to name a few [61, 62]. Infrastructure models, on the other hand, are complex connections of units of the same nature, interconnected or interacting over a spatially extended network. Typical examples are power grids (see Fig. 11.6), water grids, transportation networks, computer networks, and telecommunication networks [63–68]. It is evident that single infrastructure systems are heavily interdependent: for example, a shortage in the power grid for a prolonged time interval would cut the mobile communication network off. These critical interdependencies have led scholars to gather together single infrastructure models in more complex, interconnected ones, leading to the concept
of critical infrastructure models. Modeling tools for such systems are interdependent paradigms, such as systems-of-systems, networks-of-networks, or multilayer networks [69].

Fig. 11.6 Northern European power grid, composed of 236 nodes and 320 edges (reprinted from [64])

The most important problem in all these systems concerns the effect of isolated failures on the global functioning of the system, that is, how the performance of the system degrades in the presence of isolated or multiple failures [70–74]. In critical infrastructures, moreover, a problem of interest is to study how failures propagate both over and across the single infrastructures [61, 62]. The study of this problem is key toward understanding the mechanisms that trigger cascading failures, for instance, causing massive blackouts in power grids [63], which in turn may damage the functionality of the telecommunication networks [61]. Hence, these results can be applied to detect the vulnerability of CPSs and infrastructure systems, preventing the emergence of these disastrous cascading phenomena. In terms of topological properties of a network, the vulnerability of a network system is strictly related to the network resilience, that is, the ability of the network to maintain its connectivity when nodes and/or edges are removed. Such a problem has been extensively studied in statistical physics by means of percolation theory [75–77]. In these works, the robustness of different network topologies has been analytically studied, establishing important results that relate the vulnerability of a network to the degree distribution of the nodes and the correlation between their degrees. In [75], the authors use percolation theory to study the resilience of different types of random networks, depending on their degree distribution P(d). Specifically, they assume that a fraction p of the nodes is randomly removed, and they study whether there exists a critical threshold for p such that, below such a threshold, the network has a giant strongly connected component, while, above the threshold, such a giant component vanishes. Interestingly, the authors find that such a threshold depends on the ratio between the second moment of the degree distribution and its average. Specifically, the threshold tends to 1 as this ratio increases. Hence, networks with power-law degree
distribution P(d) ∝ d^{−α}, with α ≤ 3, are extremely resilient, as the threshold p → 1 for these types of networks. These networks are extremely important for applications, as they are effective proxies of several real-world networks, including social networks and the Internet [1]. However, this "static" concept of network resilience per se is often not sufficient to accurately determine the presence of vulnerabilities in a network of dynamical systems, since it is not able to capture the complex interplay between the structural properties of the network and the dynamical aspects of the processes unfolding over the network [33]. In fact, in real-world infrastructures and cyberphysical systems, the global
performance of the network may be only mildly affected by the loss of connectivity of marginal nodes, whereas serious damage to the system and its performance can occur even if the network remains connected (e.g., due to overloads). This requires us to use the tools of our perturbative analysis of Sect. 11.3.3, seeking to predict the onset and propagation of dynamic cascades in the network. The theory of dynamic vulnerability has been developed starting from the concept of cascading failure, that is, understanding when the failure of a node in a network of dynamical systems can trigger a large-scale cascade of failures, causing the collapse of the entire network or of a
substantial part of it. In [70], the authors propose a simple but effective model for studying cascading failures caused by overloads in complex networks, such as power grids or cyberphysical systems. In their model, it is assumed that each pair of nodes of a strongly connected network exchanges one unit of quantity (e.g., information or energy) through the shortest path between them. According to this model, the load of a node bi(t) is equal to the number of shortest paths passing through it, and thus proportional to the betweenness centrality γi. The authors assume that the capacity of each node is proportional to its initial load, that is,

Ci = (1 + α) bi(0) ∝ γi ,    (11.33)
for some α ≥ 0, and that nodes whose load is greater than their capacity (bi(t) > Ci) fail and are thus removed from the network. Clearly, each removal will change the network structure, and this may change the loads on the nodes, potentially triggering a cascade of failures. In their work, the authors have shown that the effect of the failure (removal) of a node depends on its betweenness centrality. Specifically, the failure of a node i with small γi has a small impact on the network, while the failure of a node i with large γi would likely trigger a cascade, especially when the betweenness centrality of the nodes is highly heterogeneous. Interestingly, the authors show that for scale-free networks the failure of the node with the highest betweenness centrality may cause the failure of more than 60% of the nodes in a network with 5000 nodes. In [71], it has been shown that the immediate removal of some nodes and edges after the initial failure, but before the propagation of the cascade (specifically, nodes with low centrality and edges on which the initial failure would strongly increase the load), can drastically reduce the size of the cascade. In [72], a similar model for cascading failures has been proposed for weighted networks, where the weight of each edge Wij represents the efficiency of the communication across that edge. In this model, overloaded nodes are not removed, but the communication through the edges incident to them is degraded and thus the corresponding efficiency is reduced, according to the following formula:

Wij(t + 1) = Wij            if bi(t) ≤ Ci ,
Wij(t + 1) = Wij Ci/bi(t)   if bi(t) > Ci .    (11.34)
The authors utilize this model to study the effect of failures on the network performance, evaluated as the average link efficiency, while different performance measures have been discussed in [73, 78]. Even for this model, the effect of the failure of a node is related to the (weighted) betweenness centrality of the nodes that fail. Building on these seminal works, a huge body of literature has been developed toward including more real-world features in the modeling frameworks.
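A minimal sketch of the betweenness-based overload mechanism behind (11.33) is given below, using NetworkX to recompute loads after each round of removals; the capacity rule and the parameters are illustrative rather than a faithful reimplementation of the cited studies.

```python
import networkx as nx

def overload_cascade(G, failed_node, alpha=0.2):
    """Iterative overload cascade: capacities are (1 + alpha) times the initial
    loads (unnormalized betweenness), the chosen node is removed, and nodes
    whose recomputed load exceeds their capacity fail in turn.
    Returns the set of nodes that failed beyond the initial one."""
    load0 = nx.betweenness_centrality(G, normalized=False)      # initial loads b_i(0)
    capacity = {i: (1.0 + alpha) * load0[i] for i in G}
    H = G.copy()
    H.remove_node(failed_node)
    failed = set()
    while True:
        load = nx.betweenness_centrality(H, normalized=False)
        overloaded = [i for i in H if load[i] > capacity[i]]
        if not overloaded:
            return failed
        failed.update(overloaded)
        H.remove_nodes_from(overloaded)

# Toy usage: remove the most central node of a scale-free-like graph.
G = nx.barabasi_albert_graph(200, 2, seed=0)
bc = nx.betweenness_centrality(G)
hub = max(bc, key=bc.get)
print(len(overload_cascade(G, hub, alpha=0.2)))
```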
Specifically, accurate models of power grids have been proposed [74], which have allowed the researchers to accurately identify critical properties of the network structure [64], sets of vulnerable nodes [65] and vulnerable edges [66], by leveraging network-theoretic tools. Besides power grids, these models for cascading failures have been tailored to study the vulnerability of other complex systems, including water supply networks [67] and production networks [68]. These models, while assuming a dynamic exchange of energy between all nodes, are in their essence still structural, mapping the loads on the nodes to the network structure via γi or updating the weights according to (11.34). To truly capture the role of the dynamics we must include the nonlinear mechanisms F, G driving the system, à la (11.10). First, one maps the system's resilience function that captures its potential fixed points x∗, desirable vs. undesirable. For random networks with arbitrary degree sequence d± = (d1^±, . . . , dn^±)⊤, it is shown [33] that the control parameter governing the transitions between these states is determined by Wij via

β = (1⊤ W d−) / (1⊤ W 1) .    (11.35)

Hence, the network's dynamic response to any form of perturbation (changing weights, removing/adding nodes or links) can be reduced to the perturbation's subsequent impact on the macroscopic parameter β. This parameter, in turn, fully determines the perturbation outcome, specifically whether the system will transition to the undesired x∗. These vulnerability results have highlighted the richness and complexity of the emergent behavior of infrastructure networks and have shed light on the phenomenon of cascading failures on a network. However, in the real world, power grids, water supply systems, production networks, transportation and communication networks, and many other infrastructure networks are not isolated; indeed, they are often mutually dependent and interdependent, whereby the failure of one of these layers may cause severe consequences on the other layers. Such complex systems of interconnected complex systems determine the so-called critical infrastructure of a country, whose modeling and vulnerability analysis is key for many applications in our complex, hyper-connected world. In [61], the authors proposed an approach based on the functional interconnection of different networks, to study how failures in one network affect the functioning of the others. In the proposed approach, the interdependency between different systems is captured by a matrix function called interdependence matrix. In the example proposed in that paper, the authors investigate the interconnection of the Italian electric grid and the telecommunication network. In [62], a simple but effective model to study cascades of failures in interdependent networks was proposed and analyzed. In the simplest scenario, two networks G1 = (V1, E1)
and G2 = (V2, E2) are considered. Each network Gi is characterized by its degree distribution Pi(d), which associates with each number d the probability that a node of Gi has degree d. The interdependence between the two networks is modeled as follows. Each node i ∈ V1 is associated with a node ī ∈ V2 through a functional dependence, whereby the functionality of node i depends on that of ī. This represents, for instance, the fact that ī supplies some critical resources to node i. Similarly, each node i ∈ V2 is associated with a node in V1 through a functional dependence. Then, when a node fails, as a consequence all the nodes that have a functional dependence on it fail too, possibly triggering a cascade. The model can be easily generalized to multiple dependencies and networks. The authors utilize an analytical approach based on branching processes to study a system where some random nodes fail in one of the two networks, consider different network structures, and reach some non-trivial findings. Interestingly, network structures with broad degree distribution, which exhibit a high resilience when the network is isolated, display instead high vulnerability when interconnected. Recent developments of the theory of interconnected networks are extensively discussed in the review paper [69].
11.4.3 Motion Coordination

Motion coordination is a very important phenomenon in many biological systems, being key to the emergence of collective behaviors such as fish schooling and bird flocking. Remarkably, even though the single members of a fish school or of a bird flock are not aware of the entire state of the system, they are able to achieve global coordination by means of local interactions with other members [79]. The observation and study of these biological systems have inspired the design of distributed protocols to reproduce these coordinated behaviors in groups of robots. Motion coordination for groups of robots finds a wide range of applications, spanning from the coordination of drones to the platooning of autonomous vehicles and coverage problems in surveillance [38, 80, 81]. As we shall see, network science is an important tool for the analysis and the design of these systems, where global coordination emerges as an effect of local interactions. In its simplest formulation, the problem of motion coordination can be summarized as follows. We consider a group of n robots, each one characterized by a position xi(t) ∈ R^d and a velocity vi(t) ∈ R^d, where d is the dimension of the space in which the robots are moving. For the sake of simplicity, we introduce the scenario d = 1, in which the robots are moving in a uni-dimensional space. Similar to biological systems, robots coordinate their motion in a distributed fashion, that is, the system of robots is connected by the presence of a network structure G = (V, E) that determines how robots can communicate, and the control action implemented on robot
i ∈ V is a function of the information exchanged with its neighbors, without the intervention of a central unit. Control actions can apply a force to the robot, and thus they determine the acceleration of the robot. Hence, in its simplest form, the dynamics of robot i ∈ V is described by the following system of equations:

ẋi(t) = vi(t) ,
v̇i(t) = ui(t) ,    (11.36)

where ui(t) is the control exerted on node i and is a function of xj(t) and vj(t), for j ∈ Ni+. Even though the model in (11.36) is quite simplistic, many robot dynamics can be simplified to this form by applying suitable nonlinear feedback linearization techniques, based on the inverse dynamics of the system [82]. In [80], the authors consider the rendezvous problem, where a group of robots governed by (11.36) must coordinate in a decentralized fashion toward reaching a consensus in which xi(t) = xj(t) and vi(t) = vj(t), for all i, j ∈ V. Note that the latter task (that is, convergence of the velocities) can be easily achieved by defining a control ui(t) according to a standard consensus dynamics, that is, in the form ui(t) = −Σ_{j∈V} Lij vj(t), for some matrix W adapted to the graph G. However, the convergence of the velocities does not in general imply that all the positions converge. Hence, to address this problem, the authors have proposed a second-order linear consensus protocol, in which

ui(t) = − Σ_{j∈V} Lij ( α xj(t) + β vj(t) ) ,    (11.37)
where W is a weight matrix adapted to G. In plain words, the control action applied to robot i is a linear combination of a weighted average of the positions and velocities of the neighboring robots. The case α = 0 reduces to the consensus algorithm on the velocities, whose limitations were previously discussed. In [80], the authors establish conditions for the dynamics in (11.36) to coordinate the network under the second-order linear consensus protocol in (11.37). Specifically, they show that the presence of a globally reachable node in G is a necessary condition to reach coordination. A sufficient condition is also proved, for the special case α = 1. Further studies establish a necessary and sufficient condition for reaching coordination [83]. Specifically, besides the presence of a globally reachable node, the authors found that (11.37) succeeds in controlling the system if

β^2/α > max_{i=2,...,n} I(λi)^2 / ( |λi|^2 R(λi) ) ,    (11.38)

where λ2, . . . , λn are the non-zero eigenvalues of L, and R(x) and I(x) denote the real and imaginary part of a complex number x ∈ C, respectively. Note that the convergence of the second-order consensus dynamics is also related to the spectral properties of the network.
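The double-integrator protocol (11.36)–(11.37) and the spectral condition (11.38) can be explored with a few lines of code. The sketch below builds the Laplacian as L = D − W, checks (11.38), and integrates the closed-loop dynamics with a simple Euler scheme; all numerical choices (gains, time step, network) are illustrative.

```python
import numpy as np

def laplacian(W):
    return np.diag(W.sum(axis=1)) - W

def condition_11_38(W, alpha, beta):
    """Spectral condition (11.38) on the nonzero eigenvalues of L."""
    lam = np.linalg.eigvals(laplacian(W))
    lam = lam[np.abs(lam) > 1e-9]                 # discard the zero eigenvalue
    rhs = max(l.imag**2 / (abs(l)**2 * l.real) for l in lam)
    return beta**2 / alpha > rhs

def second_order_consensus(W, x0, v0, alpha=1.0, beta=2.0, T=30.0, dt=0.01):
    """Forward-Euler simulation of (11.36) under the protocol (11.37)."""
    L = laplacian(W)
    x, v = np.array(x0, float), np.array(v0, float)
    for _ in range(int(T / dt)):
        u = -L @ (alpha * x + beta * v)
        x, v = x + dt * v, v + dt * u
    return x, v

# Toy usage: 5 robots communicating over an undirected path graph.
W = np.diag(np.ones(4), 1) + np.diag(np.ones(4), -1)
rng = np.random.default_rng(2)
x, v = second_order_consensus(W, rng.normal(size=5), rng.normal(size=5))
print(condition_11_38(W, 1.0, 2.0))   # True for undirected graphs (real eigenvalues)
print(np.ptp(x), np.ptp(v))           # spreads shrink toward zero: the robots agree
```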
Rendezvous is a very important and ambitious coordination task, which is not always required. Simpler coordination tasks are also of great interest to the robotics and control community. Flocking, for instance, requires that robots reach a coordination state in which they have the same velocities but can maintain different positions (often with some requirements on the distances) [84]; formation control problems, on the other hand, require the robots to arrange themselves according to some specific shape and follow a (possibly predefined) trajectory as a rigid body [14]. In the previous discussion, we have observed that the coordination of the velocities, which is a common goal of these problems, can be directly reduced to a consensus problem in one-dimensional space. However, such a task may become non-trivial in the plane or in three-dimensional space, in particular when some additional constraints on the robots' positions are posed. Here, we illustrate a simple formulation of such types of problems in a two-dimensional plane [85]. Particles with coupled oscillator dynamics constitute a valuable and successful framework to model and study collective motion on the plane. In its simplest formulation, we assume that a set of particles can move on a circle. In this setting, each node (representing a member of the group) is characterized by an oscillator, which models the position of the node as a point on the complex plane. Specifically, given the Cartesian coordinates (xi(t), yi(t)) of the position of node i at time t, such a position is encoded as the complex number zi(t) = xi(t) + ι yi(t) = e^{ιθi(t)}, where θi(t) is the phase of node i at time t, as illustrated in Fig. 11.7. The state of each node is thus characterized by its phase θi(t), which evolves on the one-dimensional torus (i.e., the quotient space R with the equivalence relation 0 = 2π).

Fig. 11.7 Example of six coupled oscillators moving on the unit circle. In the figure, we highlight the phase of node 1, denoted by θ1

Similar to the uni-dimensional model described
above, the control action can apply a force to the robot, affecting its (angular) velocity. Hence, the state of node i evolves according to the following system of equations:
żi(t) = e^{ιθi(t)} ,
θ̇i(t) = ui(t) ,    (11.39)

where ui(t) is a control action that depends on zj(t) and θj(t), with j ∈ Ni+. The approach in [85] consists of writing the control action as the sum of three contributions: a constant term ωi, termed natural oscillation; a term that depends only on the phases of the other nodes, ui^or(t), called orientation control; and a term that also depends on the others' positions, ui^sp(t), termed spacing control, as

ui(t) = ωi + ui^or(t) + ui^sp(t) .    (11.40)

When the goal of the controller is to coordinate the phases of the nodes, one can set the spacing control ui^sp(t) = 0 and focus on the system

θ̇i(t) = ui(t) = ωi + ui^or(θ(t)) ,    (11.41)

which evolves on an n-dimensional torus. Several analyses of the system in (11.39) have been pursued, for different shapes of the control action ui^or(θ(t)) [85]. Of particular interest is the Kuramoto model, in which it is assumed that nodes can interact through a weighted network G = (V, E, W), and it is set as

ui^or(θ(t)) = K Σ_{j∈V} Wij sin( θj(t) − θi(t) ) ,    (11.42)
for some constant K > 0 that represents the coupling strength. Most of the studies of the model have been proposed for complete networks with uniform weights, that is, Wij = 1/N for all i, j ∈ V . See, e.g., the recent review [86] for more details. These results shed light on the critical role played by the coupling strength K on the emergence of synchronization, with the observation of interesting phenomena such as explosive synchronization and the emergence of Chimera states. However, several interesting results have been established for scenarios in which the communication between nodes is constrained by the presence of network structures, relating the emergence of synchronization to the coupling strength K and to network properties such as its connectivity and its degree distribution. For more details see, for instance, [87]. So far, we have assumed that the network of interactions is not affected by the motion of the robots. However, in many real-world applications, the interactions between robots depend on the position of the robots, whereby robots often have a limited range of interaction. In these scenarios, it has been shown in [38] that the application of a standard
linear consensus dynamics as a control action may cause the loss of the necessary connectivity properties, thus hindering convergence. Hence, in many applications, one of the main problems is that the network G is not fixed but evolves together with the dynamical process, and more sophisticated nonlinear control actions have to be designed to ensure that the network retains some connectivity properties (e.g., the presence of a globally reachable node) that guarantee convergence [38]. Several extensions of the standard theory of consensus and synchronization dynamics have been proposed to deal with time-varying networks [3].
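Before moving on, the Kuramoto coupling in (11.42) lends itself to a compact simulation. The sketch below integrates (11.41)–(11.42) with an Euler scheme and reports the usual order parameter r = |mean_i e^{ιθi}| as a rough measure of phase synchronization; the all-to-all network and the parameter values are illustrative.

```python
import numpy as np

def kuramoto_order_parameter(W, omega, K, T=20.0, dt=0.01, seed=3):
    """Forward-Euler simulation of (11.41)-(11.42); returns the order parameter
    r = |mean_i exp(i*theta_i)| in [0, 1] (r close to 1: phase synchronization)."""
    n = W.shape[0]
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.0, 2.0 * np.pi, n)          # random initial phases
    for _ in range(int(T / dt)):
        # u_i^or = K * sum_j W_ij * sin(theta_j - theta_i)
        coupling = K * (W * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
        theta = theta + dt * (omega + coupling)
    return np.abs(np.exp(1j * theta).mean())

# Toy usage: all-to-all coupling with uniform weights W_ij = 1/n.
n = 50
W = np.full((n, n), 1.0 / n)
np.fill_diagonal(W, 0.0)
omega = np.random.default_rng(4).normal(0.0, 0.5, n)  # heterogeneous natural frequencies
print(kuramoto_order_parameter(W, omega, K=0.2),      # weak coupling: incoherent, small r
      kuramoto_order_parameter(W, omega, K=5.0))      # strong coupling: r close to 1
```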
11.5 Ongoing Research and Future Challenges
Network science, an emerging field, is continuously evolving, helping us uncover both the universal and the system-specific connectivity patterns of real-world complex systems. While we currently have a rather strong grip on the structure of many relevant networks, we continue to seek progress on several challenges, for example, networks that host heterogeneous types of nodes, or ones whose links vary with time. Another important challenge is to systematically translate our advances on network structure into predictions on their actual observed dynamic behavior. We wish to exploit the powerful toolbox of network science toward practical modern-day applications, ultimately aiming to understand, predict, and control our most crucial complex technological systems.
11.5.1 Networks with Adversarial and Malicious Nodes

A common feature of all the processes described in this chapter is the presence of cooperative dynamics between the nodes. In the consensus dynamics described in Sect. 11.3.1, nodes cooperate to reach a common state, and such a modeling framework is utilized to design distributed sensing algorithms (Sect. 11.4.1). In the synchronization problem (in Sect. 11.3.2), the presence of a cooperative coupling between a group of dynamical systems leads to the emergence of collective network behaviors, as also discussed in the applications illustrated in Sect. 11.4.3. However, many real-world applications witness the presence of not only cooperative interactions but also of antagonistic interactions. Such antagonistic interactions model, for instance, malicious attacks on distributed sensing systems, or the presence of noncooperative robots in multi-agent systems. Signed networks have emerged as a powerful tool to represent and study the presence of cooperative and antagonistic interactions. In a signed network, the edge set E is partitioned into two complementary subsets: the positive edge set E+, which models cooperative interactions, and the negative edge set E−, which models antagonistic interactions. The use of signed networks started becoming popular in the 1940s, within the social psychology literature, with the definition of the important concept of structural balance [88]. A signed network is structurally balanced if and only if its nodes can be partitioned into two sets, with all positive edges connecting the nodes within each set and negative edges connecting the nodes between the two sets. In the last decade, the study of dynamical processes on signed networks has started growing in popularity, in particular for its several insightful applications to biological and social systems [89]. The consensus dynamics on signed graphs was originally proposed and analyzed in a seminal paper by C. Altafini [19]. Therein, the consensus on signed networks is defined in a continuous-time framework by means of the signed Laplacian matrix of a graph. Specifically, given a stochastic weight matrix W ∈ R+^{n×n} (which represents the strength of the interactions between nodes), and the positive and negative edge sets E+ and E− (which represent the type of such interactions, either cooperative or antagonistic), the signed Laplacian matrix L̄ is defined as

L̄ij = −Wij             if i ≠ j and (i, j) ∈ E+ ,
L̄ij = Wij              if i ≠ j and (i, j) ∈ E− ,
L̄ij = Σ_{k≠i} Wik      if i = j ,
L̄ij = 0                if i ≠ j and (i, j) ∉ E .    (11.43)

The consensus dynamics is then defined as

ẋ(t) = −L̄ x(t) .    (11.44)
Note that, differently from the standard Laplacian matrix, the terms corresponding to negative edges appear in the Laplacian with a positive sign. Hence, instead of averaging their state with the ones of their neighbors, nodes move away from those with whom they are connected through a negative edge. The theoretical analysis of the consensus on signed networks, initially performed in [19] for undirected networks via a Lyapunov argument, and then extended to more general cases, including directed networks [90], has highlighted the importance of the graph-theoretic concept of structural balance. In fact, for strongly connected networks, it has been shown that, if the signed network is structurally balanced, then the states of the nodes converge to a bipartite consensus (see Fig. 11.8a,b), in which the states of all nodes converge to the same quantity in absolute value, but with different signs. Otherwise, if the network is not structurally balanced, the states of all the nodes converge to the consensus point x∗ = 0 (see Fig. 11.8c,d).

Fig. 11.8 Consensus dynamics on signed networks. In the networks in a) and c), positive edges are denoted in blue, and negative edges are denoted in red. The network in a) is structurally balanced (the two sets are V1 = {1, 3, 4, 6} and V2 = {2, 5, 7, 8}). Hence, the corresponding trajectories of the consensus dynamics in b) converge to a polarized equilibrium. The network in c) is not structurally balanced, and the corresponding trajectories in d) converge to the consensus point x∗ = 0

Extensions of these results, accounting for time-varying network topologies and for more complex dynamics, including synchronization of dynamical systems and stochastic dynamics, have been proposed and
analyzed; see, e.g., the review paper [89]. These results have allowed us to understand the mechanisms of many important phenomena that emerge whenever cooperative and antagonistic interactions are present in a system, including polarization phenomena in social systems.
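The signed Laplacian (11.43) and the dynamics (11.44) are easy to experiment with numerically. The sketch below builds L̄ from separate matrices of positive- and negative-edge weights (a representational choice made here for convenience) and integrates (11.44) on a small structurally balanced example, where the trajectories approach a bipartite consensus.

```python
import numpy as np

def signed_laplacian(W_pos, W_neg):
    """Signed Laplacian (11.43), built from nonnegative matrices of positive-
    and negative-edge weights (both with zero diagonal)."""
    A = W_pos - W_neg                                  # signed adjacency
    return np.diag((W_pos + W_neg).sum(axis=1)) - A

def signed_consensus(W_pos, W_neg, x0, T=20.0, dt=0.01):
    """Forward-Euler simulation of the Altafini dynamics (11.44)."""
    L_bar = signed_laplacian(W_pos, W_neg)
    x = np.array(x0, float)
    for _ in range(int(T / dt)):
        x = x - dt * (L_bar @ x)
    return x

# Toy usage: a structurally balanced 4-node network with two antagonistic camps.
pos = np.zeros((4, 4)); neg = np.zeros((4, 4))
pos[0, 1] = pos[1, 0] = pos[2, 3] = pos[3, 2] = 1.0    # positive edges within camps
neg[0, 2] = neg[2, 0] = neg[1, 3] = neg[3, 1] = 1.0    # negative edges across camps
print(signed_consensus(pos, neg, x0=[1.0, 2.0, -0.5, 0.3]))
# The states approach (+c, +c, -c, -c): a bipartite consensus.
```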
11.5.2 Dynamics on Time-Varying and Adaptive Networks

In this chapter, we have focused our discussion on groups of interacting dynamical systems whose patterns of interaction do not change in time, and thus they can be represented by a time-invariant network. However, such an assumption may be simplistic in many real-world scenarios. In motion coordination, we have already mentioned that agents may have a limited range of interactions, and thus the position of the agents may influence the network structure [38]. In sensor networks performing distributed estimation, we have assumed that all the sensors perform their tasks (namely, exchanging information and averaging the measurements) in a synchronized fashion, that is, according to a centralized clock. However, in many applications, the sensors may perform their tasks asynchronously and in the presence of disturbances and communication errors, thus yielding a time-varying pattern of information exchange [18, 91–93]. Furthermore, the use of adaptive networks may be a strategy to enhance the control of networks of dynamical systems and
improve the performance [94, 95]. These three real-world problems are just three examples of the several motivations that have called for an extension of the theory of interacting dynamical systems to dynamical network structures [96, 97]. For all these reasons, the last 15 years have witnessed a growing interest in the study of dynamics evolving on time-varying networks of interactions. A time-varying network can be described by a time-invariant node set V = {1, . . . , n}, a time-varying set of edges E(t) ⊆ V × V, and (possibly) a time-varying weight matrix W(t) ∈ R+^{n×n}, such that Wij(t) > 0 ⇐⇒ (i, j) ∈ E(t), where the network can evolve in discrete time t ∈ Z+ or in continuous time t ∈ R+. Such a growing body of literature has started developing from a small set of seminal papers written at the beginning of the 2000s by different groups of researchers from the systems and control community [18, 91–93]. In these papers, the consensus dynamics on time-varying networks is formalized in both the discrete- and continuous-time frameworks, and necessary and sufficient conditions on the network structure for convergence of the states of all nodes to a consensus state are derived. These results typically require that the network obtained by combining all the edge sets over a sufficiently long time horizon has some connectivity properties, such as being strongly connected for undirected graphs [91], or having a globally reachable node [93]. Further conditions may be required to reach desired consensus states, such as the average of the initial conditions [92]. These seminal works laid the foundation of several lines of research, including the analysis of the performance and of the convergence times of consensus dynamics on time-varying networks, scenarios with communication noise or disturbances, and the extension of these results to more complex dynamical systems such as synchronization and motion coordination problems. Further details can be found in several surveys and books, for instance, see [3, 97]. Among these several extensions, of particular interest is the scenario in which the network of interactions evolves according to a stochastic process. In fact, the stochasticity in the network formation process may hinder the direct application of the convergence results established in the seminal papers on consensus on time-varying networks [18, 91–93] and their more recent extensions to other dynamics. Hence, dynamics on stochastically switching networks have called for the development of a different set of tools for their analysis, grounded in the theory of stochastic dynamical systems. The main difficulty of the analysis of such problems is that the network and the nodal dynamics co-evolve, and thus the analytical approaches should be tailored to the specific properties of both dynamics and to the corresponding time-scales, hindering the possibility of developing a general theory for these dynamics. In particular, several interesting results have been established in scenarios in which the network formation dynamics is much faster than the nodal dynamics, which
often goes under the name of blinking networks. In these scenarios, under some conditions, the stochastic network dynamics behaves like a deterministic system determined by the expectation of the stochastic variables. For these scenarios, the results for deterministic consensus and synchronization problems have been adapted and extended [98, 99]. In a regime in which the dynamics of the networks and the nodal dynamics co-evolve at comparable time scales, the aforementioned averaging techniques cannot be applied. Rigorous convergence results for the consensus problem have initially been established for simple time-varying random network structures, where each edge is present or absent with a fixed probability, independently of the others [100]. Extensions have then been developed to incorporate non-trivial correlations between edges for networks of homogeneous nodes [101]. In the last few years, activity-driven networks have been proposed as a valuable framework to perform rigorous analyses of dynamics on time-varying networks with heterogeneous nodes [102–104]. In [105], the consensus dynamics on activity-driven networks is studied by means of a perturbation technique, establishing closed-form results on the speed of convergence that highlight that heterogeneity hinders distributed coordination. In all these works, it is assumed that the dynamics of the network is not directly influenced by the nodal dynamics. However, in many realistic scenarios, the state of the nodes influences the network formation process. Besides the already mentioned problems related to motion coordination, in which the network connectivity is passively affected by the position of the agents in the space [24, 38, 106], an interesting scenario is that of adaptive topologies [94, 107], in which the state of the nodes actively affects the network formation process, whereby nodes adapt their contact in order to facilitate the coordination with others. In particular, we mention the edge-snapping model, proposed in [94], in which nodes implement local decentralized rules to activate and deactivate edges by means of a second-order dynamical process. In [94], the effectiveness of such an adaptive topology in synchronizing networks of dynamical systems has been proved utilizing a MSF-based approach. Based on this seminal paper, adaptive topologies have then been proposed to enhance pinning control schemes to facilitate the synchronization process [95].
11.5.3 Controllability of Brain Networks

Recent discoveries in neuroscience have allowed the scientific community to increase our understanding of the structure and the functioning of the brain [108]. The brain is a highly complex system, formed by a large number of neurons, spanning from the 302 neurons of C. elegans – a nematode often used as a model organism by researchers –
to the tens of billions of neurons of human beings. Each neuron has non-trivial internal dynamics, and neurons also interact with one another. Such a structure has led an increasing number of researchers to investigate the functioning of the brain from the perspective of networks of coupled dynamical systems, starting from the seminal paper by D. S. Bassett and E. Bullmore [109]. In these works, networks are used to represent the patterns of interactions within brains, where nodes may represent either single neurons or brain units that perform a specific function [108]. Among the growing body of literature on brain networks, we would like to mention the work by S. Gu et al. [110]. In this paper, the authors investigate the controllability of structural brain networks, utilizing an approach based on the computation of the smallest eigenvalue of the controllability Gramian, as proposed in [47] (see Sect. 11.3.4 for more details). Specifically, the authors study the controllability of a network of coupled linear dynamical systems, connected according to the structure of the brain, where each node represents an area of the brain, and its state represents the magnitude of the neurophysiological activity of the corresponding area. The results of this study provide some interesting insights into the functioning of the brain: distinct areas of the brain (with different connectivity characteristics) have been identified as critical to control the network, depending on the state that the network has to reach. While linear dynamics may be used to study the functioning of the brain at the coarse level of brain areas, neuronal activity is highly nonlinear, calling for the use of networks of coupled nonlinear dynamical systems. Networks of coupled oscillators – such as the Kuramoto oscillators presented in Sect. 11.4.3 – have emerged as a suitable modeling framework to study brain networks at the level of single neurons or small groups of neurons. Of particular interest is the analysis of partial (or clustered) synchronization [111]. In fact, in a healthy functioning brain, we typically observe the emergence of cluster synchronization, where neurons belonging to the same area of the brain have synchronized activity, but such activity is not synchronized with that of neurons in other areas; global synchronization, instead, is often observed in the pathological brains of epileptic patients. Recent results on the stability of clustered synchronization for coupled nonlinear dynamical systems [112] and Kuramoto oscillators [113] have helped shed light on this important phenomenon, establishing quantitative conditions on the network structure and on the coupling between the dynamical systems that guarantee the stability of cluster-synchronized states, and specifically highlighting the role of symmetries in the emergent behavior of cluster synchronization and, potentially, in its controllability [114, 115]. These control-theoretic results might be used to help design stimulation techniques to prevent the emergence of epileptic seizures.
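A rough numerical sketch of the Gramian-based reasoning used in [47] and [110] is given below; the network, the single-input matrix, the stabilizing rescaling, and the horizon are all invented for illustration and are not taken from those papers. The smallest eigenvalue of the finite-horizon controllability Gramian quantifies how much input energy is needed to steer the network along its hardest-to-reach direction, so ranking nodes by this value singles out the ones from which the whole network is easiest to control.

```python
import numpy as np

# Hypothetical symmetric weighted adjacency matrix of a 5-node network.
A = np.array([
    [0.0, 0.4, 0.0, 0.0, 0.2],
    [0.4, 0.0, 0.3, 0.0, 0.0],
    [0.0, 0.3, 0.0, 0.5, 0.0],
    [0.0, 0.0, 0.5, 0.0, 0.3],
    [0.2, 0.0, 0.0, 0.3, 0.0],
])
A = A / (1.1 * np.max(np.abs(np.linalg.eigvals(A))))  # rescale so x(t+1) = A x(t) is stable

def min_gramian_eig(A, control_node, horizon=50):
    """Smallest eigenvalue of W_T = sum_{k=0}^{T-1} A^k B B^T (A^T)^k for one control node."""
    n = A.shape[0]
    B = np.zeros((n, 1))
    B[control_node, 0] = 1.0
    W, Ak = np.zeros((n, n)), np.eye(n)
    for _ in range(horizon):
        v = Ak @ B
        W += v @ v.T
        Ak = A @ Ak
    return float(np.min(np.linalg.eigvalsh(W)))

# Rank the nodes: a larger smallest eigenvalue means less control energy is needed
# when the input is injected at that node.
scores = {node: min_gramian_eig(A, node) for node in range(A.shape[0])}
print(sorted(scores.items(), key=lambda kv: kv[1], reverse=True))
```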
Further Reading
– A. Barrat, M. Barthélemy, and A. Vespignani. Dynamical Processes on Complex Networks. Cambridge University Press, Cambridge, UK, 2008.
– M. E. J. Newman. Networks: An Introduction. Oxford University Press, Oxford, UK, 2010.
– A.-L. Barabási. Network Science. Cambridge University Press, Cambridge, UK, 2016.
– V. Latora, V. Nicosia, and G. Russo. Complex Networks: Principles, Methods and Applications. Cambridge University Press, Cambridge, UK, 2017.

Acknowledgments The authors are indebted to Claudio Altafini, Francesco Bullo, Mario di Bernardo, Mattia Frasca, Maurizio Porfiri, and Sandro Zampieri for valuable discussions and advice.
References
1. Boccaletti, S., Latora, V., Moreno, Y., Chavez, M., Hwang, D.-U.: Complex networks: Structure and dynamics. Phys. Rep. 424(4), 175–308 (2006)
2. Seneta, E.: Non-negative Matrices and Markov Chains. Springer, New York (1981)
3. Fagnani, F., Frasca, P.: Introduction to Averaging Dynamics over Networks. Springer, Cham (2018)
4. Clauset, A., Shalizi, C.R., Newman, M.E.J.: Power-law distributions in empirical data. SIAM Rev. 51(4), 661–703 (2009)
5. Broido, A.D., Clauset, A.: Scale-free networks are rare. Nat. Commun. 10(1), 1017 (2019)
6. Watts, D.J., Strogatz, S.H.: Collective dynamics of 'small-world' networks. Nature 393(6684), 440–442 (1998)
7. Barzel, B., Biham, O.: Quantifying the connectivity of a network: the network correlation function method. Phys. Rev. E 80(4), 046104–046113 (2009)
8. Barzel, B., Barabási, A.-L.: Universality in network dynamics. Nat. Phys. 9, 673–681 (2013)
9. French, J.R.P.: A formal theory of social power. Psychol. Rev. 63(3), 181–194 (1956)
10. DeGroot, M.H.: Reaching a consensus. J. Am. Stat. Assoc. 69(345), 118–121 (1974)
11. Jia, P., MirTabatabaei, A., Friedkin, N.E., Bullo, F.: Opinion dynamics and the evolution of social power in influence networks. SIAM Rev. 57(3), 367–397 (2015)
12. Anderson, B.D.O., Ye, M.: Recent advances in the modelling and analysis of opinion dynamics on influence networks. Int. J. Autom. Comput. 16(2), 129–149 (2019)
13. Ren, W., Beard, R.W.: Distributed Consensus in Multi-vehicle Cooperative Control, 1st edn. Springer, London (2003)
14. Oh, K.K., Park, M.C., Ahn, H.S.: A survey of multi-agent formation control. Automatica 53, 424–440 (2015)
15. Olshevsky, A., Tsitsiklis, J.N.: Convergence speed in distributed consensus and averaging. SIAM J. Control. Optim. 48(1), 33–55 (2009)
16. Xiao, L., Boyd, S.: Fast linear iterations for distributed averaging. Syst. Control Lett. 53(1), 65–78 (2004)
17. Tsitsiklis, J., Bertsekas, D., Athans, M.: Distributed asynchronous deterministic and stochastic gradient optimization algorithms. IEEE Trans. Autom. Control 31(9), 803–812 (1986)
18. Moreau, L.: Stability of multiagent systems with time-dependent communication links. IEEE Trans. Autom. Control 50(2), 169–182 (2005)
19. Altafini, C.: Consensus problems on networks with antagonistic interactions. IEEE Trans. Autom. Control 58(4), 935–946 (2013)
20. Frasca, P., Carli, R., Fagnani, F., Zampieri, S.: Average consensus on networks with quantized communication. Int. J. Robust Nonlinear Control 19(16), 1787–1816 (2009)
21. Giannini, S., Petitti, A., Di Paola, D., Rizzo, A.: Asynchronous max-consensus protocol with time delays: convergence results and applications. IEEE Trans. Circuits Syst. Regul. Pap. 63(2), 256–264 (2016)
22. Pikovsky, A., Rosenblum, M.G., Kurths, J.: Synchronization, A Universal Concept in Nonlinear Sciences. Cambridge University, Cambridge (2001)
23. Pecora, L.M., Carroll, T.L.: Master stability functions for synchronized coupled systems. Phys. Rev. Lett. 80, 2109–2112 (1998)
24. Frasca, M., Buscarino, A., Rizzo, A., Fortuna, L., Boccaletti, S.: Synchronization of moving chaotic agents. Phys. Rev. Lett. 100, 044102 (2008)
25. Porfiri, M.: A master stability function for stochastically coupled chaotic maps. EPL (Europhys. Lett.) 96(4), 40014 (2011)
26. Del Genio, C.I., Gómez-Gardeñes, J., Bonamassa, I., Boccaletti, S.: Synchronization in networks with multiple interaction layers. Sci. Adv. 2(11), e1601679 (2016)
27. Tang, L., Wu, X., Lü, J., Lu, J.-A., D'Souza, R.M.: Master stability functions for complete, intralayer, and interlayer synchronization in multiplex networks of coupled Rössler oscillators. Phys. Rev. E 99(1), 012304 (2019)
28. Gambuzza, L.V., Di Patti, F., Gallo, L., Lepri, S., Romance, M., Criado, R., Frasca, M., Latora, V., Boccaletti, S.: Stability of synchronization in simplicial complexes. Nat. Commun. 12(1), 1255 (2021)
29. DeLellis, P., di Bernardo, M., Russo, G.: On QUAD, Lipschitz, and contracting vector fields for consensus and synchronization of networks. IEEE Trans. Circuits Syst. Regul. Pap. 58(3), 576–583 (2011)
30. Wang, W., Slotine, J.-J.E.: On partial contraction analysis for coupled nonlinear oscillators. Biol. Cybern. 92(1), 38–53 (2004)
31. Harush, U., Barzel, B.: Dynamic patterns of information flow in complex networks. Nat. Commun. 8, 1–11 (2016)
32. Hens, C., Harush, U., Haber, S., Cohen, R., Barzel, B.: Spatiotemporal signal propagation in complex networks. Nat. Phys. 15, 403–412 (2019)
33. Gao, J., Barzel, B., Barabási, A.-L.: Universal resilience patterns in complex networks. Nature 530, 307–312 (2016)
34. DeLellis, P., di Bernardo, M., Gorochowski, T.E., Russo, G.: Synchronization and control of complex networks via contraction, adaptation and evolution. IEEE Circuits Syst. Mag. 10(3), 64–82 (2010)
35. Tang, Y., Qian, F., Gao, H., Kurths, J.: Synchronization in complex networks and its application – a survey of recent advances and challenges. Annu. Rev. Control. 38(2), 184–198 (2014)
36. Wang, X.F., Chen, G.: Pinning control of scale-free dynamical networks. Physica A: Statistical Mechanics and its Applications 310(3), 521–531 (2002)
37. Lin, C.-T.: Structural controllability. IEEE Trans. Autom. Control 19(3), 201–208 (1974)
38. Mesbahi, M., Egerstedt, M.: Graph Theoretic Methods in Multiagent Networks, 1st edn. Princeton University, Princeton (2010)
39. Liu, Y.Y., Slotine, J.J., Barabási, A.L.: Controllability of complex networks. Nature 473(7346), 167–173 (2011)
40. Wang, X., Su, H.: Pinning control of complex networked systems: A decade after and beyond. Annu. Rev. Control. 38(1), 103–111 (2014)
41. Xing, W., Shi, P., Agarwal, R.K., Zhao, Y.: A survey on global pinning synchronization of complex networks. J. Frankl. Inst. 356(6), 3590–3611 (2019)
42. Sorrentino, F., di Bernardo, M., Garofalo, F., Chen, G.: Controllability of complex networks via pinning. Phys. Rev. E 75, 046103 (2007)
43. Carli, R., D'Elia, E., Zampieri, S.: A PI controller based on asymmetric gossip communications for clocks synchronization in wireless sensors networks. In: 2011 50th IEEE Conference on Decision and Control and European Control Conference, pp. 7512–7517 (2011)
44. Andreasson, M., Dimarogonas, D.V., Sandberg, H., Johansson, K.H.: Distributed control of networked dynamical systems: Static feedback, integral action and consensus. IEEE Trans. Autom. Control 59(7), 1750–1764 (2014)
45. Burbano Lombana, D.A., di Bernardo, M.: Distributed PID control for consensus of homogeneous and heterogeneous networks. IEEE Trans. Control Netw. Syst. 2(2), 154–163 (2015)
46. Yan, G., Ren, J., Lai, Y.-C., Lai, C.-H., Li, B.: Controlling complex networks: How much energy is needed? Phys. Rev. Lett. 108, 218703 (2012)
47. Pasqualetti, F., Zampieri, S., Bullo, F.: Controllability metrics, limitations and algorithms for complex networks. IEEE Trans. Control Netw. Syst. 1(1), 40–52 (2014)
48. Lindmark, G., Altafini, C.: Minimum energy control for complex networks. Sci. Rep. 8(1), 1–14 (2018)
49. Akyildiz, I.F., Su, W., Sankarasubramaniam, Y., Cayirci, E.: A survey on sensor networks. IEEE Commun. Mag. 40(8), 102–114 (2002)
50. Ross, S.M.: Introduction to Probability Models, 11th edn. Academic Press, Cambridge (2014)
51. Bernstein, D.S.: Matrix Mathematics: Theory, Facts, and Formulas, 2nd edn. Princeton University, Princeton (2011)
52. Delvenne, J.-C., Carli, R., Zampieri, S.: Optimal strategies in the average consensus problem. Syst. Control Lett. 58(10), 759–765 (2009)
53. Carli, R., Fagnani, F., Speranzon, A., Zampieri, S.: Communication constraints in the average consensus problem. Automatica 44(3), 671–684 (2008)
54. Olfati-Saber, R.: Ultrafast consensus in small-world networks. In: Proceedings of the 2005 American Control Conference, vol. 4, pp. 2371–2378 (2005)
55. Tahbaz-Salehi, A., Jadbabaie, A.: Small world phenomenon, rapidly mixing Markov chains, and average consensus algorithms. In: Proceedings of the 2007 46th IEEE Conference on Decision and Control, pp. 276–281 (2007)
56. Olfati-Saber, R., Shamma, J.S.: Consensus filters for sensor networks and distributed sensor fusion. In: Proceedings of the 44th IEEE Conference on Decision and Control, pp. 6698–6703 (2005)
57. Spanos, D.P., Olfati-Saber, R., Murray, R.M.: Approximate distributed Kalman filtering in sensor networks with quantifiable performance. In: IPSN 2005, Fourth International Symposium on Information Processing in Sensor Networks, pp. 133–139 (2005)
58. Cattivelli, F.S., Sayed, A.H.: Diffusion strategies for distributed Kalman filtering and smoothing. IEEE Trans. Autom. Control 55(9), 2069–2084 (2010)
59. Di Paola, D., Petitti, A., Rizzo, A.: Distributed Kalman filtering via node selection in heterogeneous sensor networks. Int. J. Syst. Sci. 46(14), 2572–2583 (2015)
60. Franchi, A., Petitti, A., Rizzo, A.: Distributed estimation of state and parameters in multiagent cooperative load manipulation. IEEE Trans. Control Netw. Syst. 6(2), 690–701 (2018)
61. Rosato, V., Issacharoff, L., Tiriticco, F., Meloni, S., Porcellinis, S.D., Setola, R.: Modelling interdependent infrastructures using interacting dynamical models. Int. J. Crit. Infrastruct. 4(1/2), 63 (2008)
62. Buldyrev, S.V., Parshani, R., Paul, G., Stanley, H.E., Havlin, S.: Catastrophic cascade of failures in interdependent networks. Nature 464(7291), 1025–1028 (2010)
63. Atputharajah, A., Saha, T.K.: Power system blackouts – literature review. In: 2009 International Conference on Industrial and Information Systems (ICIIS), pp. 460–465 (2009)
64. Menck, P.J., Heitzig, J., Kurths, J., Schellnhuber, H.J.: How dead ends undermine power grid stability. Nat. Commun. 5(1), 1–8 (2014)
65. Yang, Y., Nishikawa, T., Motter, A.E.: Small vulnerable sets determine large network cascades in power grids. Science 358(6365), eaan3184 (2017)
66. Schäfer, B., Witthaut, D., Timme, M., Latora, V.: Dynamically induced cascading failures in power grids. Nat. Commun. 9(1), 1–13 (2018)
67. Zhong, H., Nof, S.Y.: The dynamic lines of collaboration model: collaborative disruption response in cyber–physical systems. Comput. Ind. Eng. 87, 370–382 (2015)
68. Baqaee, D.R.: Cascading failures in production networks. Econometrica 86(5), 1819–1838 (2018)
69. Valdez, L.D., Shekhtman, L., La Rocca, C.E., Zhang, X., Buldyrev, S.V., Trunfio, P.A., Braunstein, L.A., Havlin, S.: Cascading failures in complex networks. J. Complex Networks 8(2), cnaa013 (2020)
70. Motter, A.E., Lai, Y.-C.: Cascade-based attacks on complex networks. Phys. Rev. E 66, 065102 (2002)
71. Motter, A.E.: Cascade control and defense in complex networks. Phys. Rev. Lett. 93(9), 1–4 (2004)
72. Crucitti, P., Latora, V., Marchiori, M.: Model for cascading failures in complex networks. Phys. Rev. E 69(4), 4 (2004)
73. Arianos, S., Bompard, E., Carbone, A., Xue, F.: Power grid vulnerability: a complex network approach. Chaos 19(1), 013119 (2009)
74. Bompard, E., Pons, E., Wu, D.: Extended topological metrics for the analysis of power grid vulnerability. IEEE Syst. J. 6(3), 481–487 (2012)
75. Callaway, D.S., Newman, M.E.J., Strogatz, S.H., Watts, D.J.: Network robustness and fragility: Percolation on random graphs. Phys. Rev. Lett. 85, 5468–5471 (2000)
76. Cohen, R., Erez, K., Ben Avraham, D., Havlin, S.: Resilience of the internet to random breakdowns. Phys. Rev. Lett. 85, 4626–4628 (2000)
77. Cohen, R., Erez, K., Ben Avraham, D., Havlin, S.: Breakdown of the internet under intentional attack. Phys. Rev. Lett. 86, 3682–3685 (2001)
78. Bompard, E., Napoli, R., Xue, F.: Analysis of structural vulnerabilities in power transmission grids. Int. J. Crit. Infrastruct. Prot. 2(1–2), 5–12 (2009)
79. Sumpter, D.J.T.: Collective Animal Behavior. Princeton University, Princeton (2011)
80. Ren, W., Atkins, E.: Distributed multi-vehicle coordinated control via local information exchange. Int. J. Robust Nonlinear Control 17(10–11), 1002–1033 (2007)
81. Bullo, F., Cortés, J., Martínez, S.: Distributed Control of Robotic Networks: A Mathematical Approach to Motion Coordination Algorithms. Princeton University, Princeton (2009)
82. Isidori, A.: Nonlinear Control Systems, 3rd edn. Springer, London (1995)
83. Yu, W., Chen, G., Cao, M.: Some necessary and sufficient conditions for second-order consensus in multi-agent dynamical systems. Automatica 46(6), 1089–1095 (2010)
84. Olfati-Saber, R.: Flocking for multi-agent dynamic systems: algorithms and theory. IEEE Trans. Autom. Control 51(3), 401–420 (2006)
85. Paley, D.A., Leonard, N.E., Sepulchre, R., Grunbaum, D., Parrish, J.K.: Oscillator models and collective motion. IEEE Control. Syst. Mag. 27(4), 89–105 (2007)
86. Wu, J., Li, X.: Collective synchronization of Kuramoto-oscillator networks. IEEE Circuits Syst. Mag. 20(3), 46–67 (2020)
87. Rodrigues, F.A., Peron, T.K.D., Ji, P., Kurths, J.: The Kuramoto model in complex networks. Phys. Rep. 610, 1–98 (2016)
88. Heider, F.: Attitudes and cognitive organization. J. Psychol. 21(1), 107–112 (1946). PMID: 21010780
89. Shi, G., Altafini, C., Baras, J.S.: Dynamics over signed networks. SIAM Rev. 61(2), 229–257 (2019)
90. Xia, W., Cao, M., Johansson, K.H.: Structural balance and opinion separation in trust–mistrust social networks. IEEE Trans. Control Netw. Syst. 3(1), 46–56 (2016)
91. Jadbabaie, A., Lin, J., Morse, A.S.: Coordination of groups of mobile autonomous agents using nearest neighbor rules. IEEE Trans. Autom. Control 48(6), 988–1001 (2003)
92. Olfati-Saber, R., Murray, R.M.: Consensus problems in networks of agents with switching topology and time-delays. IEEE Trans. Autom. Control 49(9), 1520–1533 (2004)
93. Ren, W., Beard, R.W.: Consensus seeking in multiagent systems under dynamically changing interaction topologies. IEEE Trans. Autom. Control 50(5), 655–661 (2005)
94. DeLellis, P., di Bernardo, M., Garofalo, F., Porfiri, M.: Evolution of complex networks via edge snapping. IEEE Trans. Circuits Syst. Regul. Pap. 57(8), 2132–2143 (2010)
95. DeLellis, P., di Bernardo, M., Porfiri, M.: Pinning control of complex networks via edge snapping. Chaos: An Interdisciplinary Journal of Nonlinear Science 21(3), 033119 (2011)
96. Belykh, I., di Bernardo, M., Kurths, J., Porfiri, M.: Evolving dynamical networks. Physica D: Nonlinear Phenomena 267, 1–6 (2014)
97. Kia, S.S., Scoy, B.V., Cortes, J., Freeman, R.A., Lynch, K.M., Martinez, S.: Tutorial on dynamic average consensus: The problem, its applications, and the algorithms. IEEE Control Syst. 39(3), 40–72 (2019)
98. Belykh, I.V., Belykh, V.N., Hasler, M.: Blinking model and synchronization in small-world networks with a time-varying coupling. Physica D: Nonlinear Phenomena 195(1), 188–206 (2004)
99. Porfiri, M.: Stochastic synchronization in blinking networks of chaotic maps. Phys. Rev. E 85, 056114 (2012)
100. Hatano, Y., Mesbahi, M.: Agreement over random networks. IEEE Trans. Autom. Control 50(11), 1867–1872 (2005)
101. Abaid, N., Igel, I., Porfiri, M.: On the consensus protocol of conspecific agents. Linear Algebra Appl. 437(1), 221–235 (2012)
102. Perra, N., Gonçalves, B., Pastor-Satorras, R., Vespignani, A.: Activity driven modeling of time varying networks. Sci. Rep. 2(1), 469 (2012)
103. Zino, L., Rizzo, A., Porfiri, M.: Continuous-time discrete-distribution theory for activity-driven networks. Phys. Rev. Lett. 117, 228302 (2016)
104. Zino, L., Rizzo, A., Porfiri, M.: An analytical framework for the study of epidemic models on activity driven networks. J. Complex Networks 5(6), 924–952 (2017)
105. Zino, L., Rizzo, A., Porfiri, M.: Consensus over activity-driven networks. IEEE Trans. Control Netw. Syst. 7(2), 866–877 (2020)
106. Frasca, M., Buscarino, A., Rizzo, A., Fortuna, L.: Spatial pinning control. Phys. Rev. Lett. 108(20), 204102 (2012)
107. Zhou, C., Kurths, J.: Dynamical weights and enhanced synchronization in adaptive complex networks. Phys. Rev. Lett. 96, 164102 (2006)
108. Tang, E., Bassett, D.S.: Colloquium: control of dynamics in brain networks. Rev. Mod. Phys. 90, 031003 (2018)
109. Bassett, D.S., Bullmore, E.: Small-world brain networks. Neuroscientist 12(6), 512–523 (2006)
110. Gu, S., Pasqualetti, F., Cieslak, M., Telesford, Q.K., Yu, A.B., Kahn, A.E., Medaglia, J.D., Vettel, J.M., Miller, M.B., Grafton, S.T., Bassett, D.S.: Controllability of structural brain networks. Nat. Commun. 6(1), 1–10 (2015)
111. Pecora, L.M., Sorrentino, F., Hagerstrom, A.M., Murphy, T.E., Roy, R.: Cluster synchronization and isolated desynchronization in complex networks with symmetries. Nat. Commun. 5(1), 1–8 (2014)
112. Gambuzza, L.V., Frasca, M., Latora, V.: Distributed control of synchronization of a group of network nodes. IEEE Trans. Autom. Control 64(1), 365–372 (2018)
113. Menara, T., Baggio, G., Bassett, D.S., Pasqualetti, F.: Stability conditions for cluster synchronization in networks of heterogeneous Kuramoto oscillators. IEEE Trans. Control Netw. Syst. 7(1), 302–314 (2020)
114. Gambuzza, L., Frasca, M., Sorrentino, F., Pecora, L., Boccaletti, S.: Controlling symmetries and clustered dynamics of complex networks. IEEE Trans. Netw. Sci. Eng. 8(1), 282–293 (2020)
115. Della Rossa, F., Pecora, L., Blaha, K., Shirin, A., Klickstein, I., Sorrentino, F.: Symmetries and cluster synchronization in multilayer networks. Nat. Commun. 11(1), 1–17 (2020)
Lorenzo Zino has been an Assistant Professor at Politecnico di Torino, Italy, since 2022. He received his Ph.D. in Pure and Applied Mathematics (with honors) from Politecnico di Torino and Università di Torino (joint doctorate program) in 2018. His research interests encompass the modeling, analysis, and control of dynamical processes over networks, applied probability, network modeling and analysis, and game theory. (https://lorenzozino90.wixsite.com/lzino)
Baruch Barzel is an Associate Professor at Bar-Ilan University, Israel. He received his Ph.D. in physics from the Hebrew University of Jerusalem, Israel, in 2010, and went on to pursue his postdoctoral training at the Network Science Institute of Northeastern University and at the Channing Division of Network Medicine at Harvard Medical School, Boston. Today Baruch directs the Network Science Center at Bar-Ilan University, and his lab's goal is to develop the statistical physics of network dynamics. (https://www.barzellab.com)
Alessandro Rizzo is an Associate Professor at Politecnico di Torino, Italy, and a Visiting Professor at the NYU Tandon School of Engineering, Brooklyn, NY, USA. He received his Ph.D. in Electronics and Automation Engineering and the Laurea in Computer Engineering from the University of Catania, Italy, in 2000 and 1996, respectively. During his tenure in Torino, he has established the Complex Systems Laboratory, where he conducts and supervises research on complex networks and systems, dynamical systems, and robotics. (https://csl.polito.it)
12 What Can Be Automated? What Cannot Be Automated?

Richard D. Patton and Peter C. Patton
Contents
12.1 The Limits of Automation
12.2 The Limits of Mechanization
12.3 Expanding the Limit
12.4 The Current State of the Art
12.5 A General Principle
12.6 Editor's Notes for the New Edition
12.6.1 What Can We Automate, But We Prefer Not to Automate
12.6.2 What Should Not Be Automated?
References
Abstract
The question of what can and what cannot be automated challenged engineers, scientists, and philosophers even before the term automation was defined. While this question may also raise ethical and educational issues, the focus here is scientific. In this chapter, the limits of automation and mechanization are explored and explained in an effort to address this fundamental conundrum. The evolution of computer languages to provide domain-specific solutions to automation design problems is reviewed as an illustration and a model of the limitations of mechanization. The current state of the art and a general automation principle are also provided.
R. D. Patton, Lawson Software, St. Paul, MN, USA, e-mail: [email protected]
P. C. Patton, School of Engineering, Oklahoma Christian University, Oklahoma City, OK, USA, e-mail: [email protected]
© Springer Nature Switzerland AG 2023 S. Y. Nof (ed.), Springer Handbook of Automation, Springer Handbooks, https://doi.org/10.1007/978-3-030-96729-1_12
Keywords
Complex adaptive system · Execution model · European Economic Community · Pattern language · Complex interacting system
12.1 The Limits of Automation
The recent (1948) neologism automation comes from autom(atic oper)ation. The term automatic means self-moving or self-dictating (Greek autómatos) [1]. The Oxford English Dictionary (OED) defines it as: Automatic control of the manufacture of a product through a number of successive stages; the application of automatic control to any branch of industry or science; by extension, the use of electronic or other mechanical devices to replace human labor.
Thus, the primary theoretical limit of automation is built into the denotative meaning of the word itself. Automation is all about self-moving or self-dictating as opposed to self-organizing. Another way of saying this is that the very notion of automation is based upon, and thus limited by, its own mechanical metaphor. In fact, most people would agree that, if an automated process began to self-organize into something else, then it would not be a very good piece of automation per se. It would perhaps be a brilliant act of creating a new form of life (i.e., a self-organizing system), but that is certainly not what automation is all about. Automation is fundamentally about taking some process that itself was created by a life process and making it more mechanical or, in the modern computing metaphor, hardwired, such that it can be executed without any volitional or further expenditure of life-process energy. This phenomenon can even be seen in living organisms themselves. The whole point of skill-building in humans is to drive brain and other neural processes to become more nearly hardwired. It is well known that, as the brain executes some particular circuit more and more, it hardwires itself by growing more and more synaptic connections. This, in effect, automates a volitional
process by making it more mechanical and thus more highly automated. Such nerve training is driven by practice and repetition to achieve greater performance levels in athletes and musicians. This is also what happens in our more conscious processes of automation. We create some new process and then seek to automate it by making it more mechanical. An objection to this notion might be to say that the brain itself is a mechanism, albeit a very complex one, and since the brain is capable of this self-organizing or volitional behavior, how can we make this distinction? I think there are primarily two answers to this, and possibly a third, depending on one's belief system.
1. For those who believe that there is life after death, it seems to me conclusive to say that, if we live on, then there must be a distinction between our mind and our brain.
2. For almost everyone else there is good research evidence for a distinction between the mind and the brain. This can especially be seen in people with obsessive-compulsive disorder and, in fact, The Mind and the Brain is very persuasive in showing that a person has a mind which is capable of programming their brain, or of overcoming some faulty programming of their brain – even when the brain has been hardwired in some particularly unhelpful way [2].
3. However, the utter materialist would further argue that the mind is merely an artifact of the electrochemical function of the brain as an ensemble of neurons and synapses and does not exist as a separate entity in spite of its phenomenological differences in functionality.
In any case, as long as we acknowledge that there is some operational degree of distinction between the mind as analogous to software and the brain as analogous to hardware, then we simply cannot determine to what extent the mind and the brain together are operating as a mechanism and to what extent they are operating as a self-organizing open system. The ethical and educational issues related to what can and cannot be automated, though important, are outside of the scope of this chapter. Readers are invited to read Ch. 47 on Ethical Issues of Automation Management, Ch. 17 on The Human Role in Automation, and Ch. 44 on Education and Qualification.
12.2 The Limits of Mechanization
So, another way of asking what the limits of automation are is to ask: what are the limits of mechanization? Or: what are machines ultimately capable of doing autonomously? But what does mechanical mean? Fundamentally it means linear or stepwise, i.e., able to carry out an algorithmically defined process having clear inputs and clear outputs.
Fig. 12.1 The Carnot circle as a system definition tool: a circle separating "the system under study" from "the rest of the universe," with an arrow in for input and an arrow out for output
The well-known Carnot circle, used by the military engineer and founder of thermodynamics N. L. S. (Sadi) Carnot (1796–1832) to describe the heat engine and other mechanical systems (Fig. 12.1), separates the engine or system under study from the rest of the universe by a circle with an arrow in for input and an arrow out for output. This is a simple but powerful philosophical (and graphical) tool for understanding the functions and limits of systems, and it is familiar to every mechanical and systems engineer. More complex mechanical processes may have to go beyond strictly linear algorithmic control to include negative feedback, but they still operate in such a way that they satisfy a clear objective function or goal. When the goal requires extremely precise positioning of the automation, a subprocess or correction function called dither is added to force the feedback loop to hunt for the precise solution or positioning in a heuristic manner, but still subsidiary to and controlled by the algorithm itself. In any case, mechanical means non-context-sensitive and discrete, even if it involves dither. Machine theory is basically the opposite of general system theory. And by a general system we mean an open system, i.e., a system that is capable of locally overcoming entropy and is self-organizing. Today such systems are typically referred to as complex adaptive systems (CAS). An open system is fundamentally different from a machine. The core difference is that an open system is nonlinear. That is, in general, everything within the system is sensitive to its entire context, i.e., to everything else in the system, not just the previous step in an algorithm. This is most easily seen in quantum theory with the two-slit experiment. Somehow the single electron is aware that there are two slits rather than one and thus behaves like a wave rather than a particle. In essence, the single electron's behavior is sensitive to the entire experimental context. Now imagine a human brain with 100 billion neurons interconnected with some 100 trillion synapses, where even local synaptic structures make their own locally complex circuits. The human brain is also bathed in chemicals that influence the operation of this vast network of neurons, whose resultant behavior then influences the mix of chemicals. And then there are the hypothesized quantum effects, giving rise to potential nonlocal effects, within any
particular neuron itself. This is clearly a vastly different sort of thing than a machine: a difference in degree of context sensitivity that amounts to a difference in kind. One way to illustrate this mathematically is to define a system of simultaneous differential equations. Denoting some measure of elements p_i (i = 1, 2, . . . , n) by Q_i, these, for a finite number of elements and in the simplest case, will be of the form

dQ_1/dt = f_1(Q_1, Q_2, . . . , Q_n)
dQ_2/dt = f_2(Q_1, Q_2, . . . , Q_n)
· · ·
dQ_n/dt = f_n(Q_1, Q_2, . . . , Q_n).

Change of any measure Q_i is therefore a function of all the Q, from Q_1 to Q_n; conversely, change of any Q_i entails change of all other measures and of the system as a whole [3]. What has been missing from the strong artificial intelligence (AI) debate, and indeed from most of Western thought, is the nature of open systems and their propensity to generate emergent behavior. We are still mostly caught up in the mechanical metaphor, believing that it is possible to mechanize or model any system using machines or algorithms, but this is exactly the opposite of what is actually happening. It is systems themselves that use a mechanizing process to improve their performance and move on to higher levels of emergent behavior. However, it is no wonder that we are so caught up in the mechanical metaphor, as it is the very basis of the Industrial Revolution and is thus responsible for much of our wealth today. In the field of business application software this peripheral blindness (or, rather, intense focus) has led to huge failures in building complex business application software systems. The presumption is that building a software system is like building a bridge; that it is like a construction project; that it is fundamentally an engineering problem where there is a design process and then a construction process, where the design process can fully specify something that can be constructed using engineering principles; more specifically, that there is some design process done by designers, and that the construction process is the process of programming done by programmers. However, it is not. It is really a design problem through and through and, moreover, a design problem that does not have general-purpose engineering-like principles and solutions that have already been reduced to code by years of practice. It is a design problem of pure logic, where the logic can only be fully specified in a completed program; the construction process is then simply running a compiler against the program to generate machine instructions that carry out the logic. There are no general-purpose solutions to all logic design problems. There are only special-purpose domain-specific solutions. This can be seen in the evolution, or lately lack thereof, of computer languages. We went from
first-generation languages (machine code) to second-generation languages (assembler), to third-generation languages (FORTRAN, COBOL, and C), and to fourth-generation languages (Java and Progress), where each generation represented roughly a tenfold productivity improvement in our ability to produce systems (Fig. 12.2). This impressive progression, however, slowed down and essentially stopped dead in its tracks some 20 years ago in its ability to significantly improve productivity. Since then there have been many attempts to come up with the next (fifth-) generation language or general-purpose framework for dramatically improving productivity. In the 1980s, computer-aided software engineering (CASE) tools were the hoped-for solution. In the 1990s, object-oriented frameworks looked promising as the ultimate solution, and now the industry as a whole seems primarily focused on the notion of service-oriented architecture (SOA) as our best way forward. While these attempts have helped boost productivity to some extent, they have all failed to produce an order-of-magnitude shift in the productivity of building software systems (Fig. 12.3). What they all have in common is the continuation of the mechanical metaphor. They are all attempts at general-purpose engineering-like solutions to this problem. General-purpose languages can only go so far in solving the productivity problem. However, special-purpose languages or domain-specific languages (DSLs) take us much farther. There are multiple categories of DSLs. There are horizontal DSLs that address some functional layer, like SQL does for data access or UML does for conceptual design (as opposed to the detail design, which is the actual complete program in this metaphor). And there are vertical DSLs which address some topical area such as mathematical analysis, molecular modeling, or business process applications. There is also the distinction between a DSL and a domain-specific design language (DSDL). All DSDLs are DSLs, but not all DSLs are DSDLs (a prescission distinction in Peirce's nomenclature [4]). A design language is distinct from a programming or implementation language in that in a design language everything is defined relative to everything else. This is like the distinction between a design within a CAD system and the actually built component. In the CAD system one can move a line and everything changes around it. In the actually built component that line is a physical entity in some hardened structural relationship. Then there is the further prescission distinction of a pattern language versus a DSDL. A pattern language is a DSDL that has built-in syntax that allows for specific design solutions to be applied to specific domain problems. This tends to be the most vertically domain-specific and also the most powerful sort of language for improving the productivity of building systems. Lawson Landmark is one example of a pattern language that delivers a 20-fold improvement over fourth-generation languages in its particular domain.
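The toy sketch below (an invented illustration, not Lawson Landmark or any real product) hints at the flavor of a vertical, declarative specification embedded in a general-purpose language: the domain expert states pricing rules as data, and a small generic engine interprets them, so the "program" is essentially the specification itself.

```python
# A toy, invented domain-specific rule specification for pricing an order.
# The domain expert writes only the declarative rules; the tiny generic
# engine below interprets them, so changing the rules changes the behavior.
RULES = [
    {"when": lambda o: o["customer_type"] == "government", "discount": 0.10},
    {"when": lambda o: o["amount"] > 10_000,               "discount": 0.05},
    {"when": lambda o: True,                               "discount": 0.00},  # default rule
]

def invoice_total(order, rules=RULES):
    """Generic engine: apply the first matching rule to price the order."""
    for rule in rules:
        if rule["when"](order):
            return order["amount"] * (1.0 - rule["discount"])
    raise ValueError("no rule matched")

print(invoice_total({"customer_type": "government", "amount": 12_000}))  # -> 10800.0
```

Real pattern languages go much further, but the design choice is the same: the vertical, domain-specific part stays declarative and close to the expert's vocabulary, while the general-purpose machinery remains generic and reusable.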
Fig. 12.2 Software language generations:
– 1st generation: machine code.
– 2nd generation: assembler (machine-specific symbolic languages); roughly 10× over the 1st generation; allowed complex operating systems to be built – the cost was built into the hardware price.
– 3rd generation: high-level languages (C, COBOL, FORTRAN); roughly 10× over the 2nd generation; allowed independent software companies to become profitable.
– 4th generation: high-level languages integrated with a (virtual) machine environment (Visual Basic, Powerbuilder, Java); roughly 10× over the 3rd generation; greatly increased the scale to which a software company could grow.
– "5th generation": non-existent as a general-purpose language, since it must be domain-specific, e.g., Lawson Landmark for business applications; 10–20× over the 4th generation.
Fig. 12.3 Productivity progression of languages: developer productivity (log scale), together with quality, adaptability, and innovation, plotted against generality, from general-purpose and broad 3GLs (C, COBOL) through 4GLs (VB, Powerbuilder) and the incremental gains of CASE, objects, and SOA, to a vertical domain-specific 5GL with a roughly 20× improvement. The industry has made only incremental progress beyond 4GLs for more than 20 years, and a 5GL works only for a specific problem domain
It is not possible to mechanize or model an entire complex system. We can only mechanize aspects of any particular system. To mechanize the entire system is to destroy the system as such; i.e., it is no longer a system. A system analyzed into its constituent parts loses its essence as a system, if a system is truly more than the sum of its parts. That is, the mechanistic model lacks the system's inherent capability for self-organization and thus adaptability, and so becomes more rigid and more limited in its capabilities, although, perhaps, much faster. And the limit to what aspects of a system can be mechanized is really the limit of our cognitive
ability to model those aspects. In other words, it is a design problem. In general, the theoretical limit of mechanization is always the current limit of our capacity to design linear stepwise models in any particular solution domain. Thus the key to what can be automated is the depth of our knowledge and understanding within any specific problem domain, the depth required being that necessary to design automatons that fit, where fit is determined both by the automaton actually producing what is expected and by its fitting appropriately within the system as a whole; i.e., if the automation disturbs the system that it is within beyond some threshold, then the very system that gives rise to the useful automaton will change so dramatically that the automation is no longer useful. An example of this might be a software program for paying employees. If this program does pay employees correctly but requires dramatic changes to the manner in which time card information is gathered, such that it becomes too great a burden on some part of the system, then the program will likely soon be surrounded by organizational T cells and rejected. Organisms are not machines, but they can to a certain extent become machines, or perhaps congeal into machines. Never completely, however, for a thoroughly mechanized organism would be incapable of reacting to the incessantly changing conditions of the outside world. The principle of progressive mechanization expresses the transition from undifferentiated wholeness to higher function, made possible by specialization and division of labor; this principle also implies loss of potentialities in the components and of regularity in the whole [5].
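A hypothetical fragment of that payroll example makes the distinction concrete: the part inside the Carnot circle has enumerable inputs, enumerable outputs, and a fixed transfer function, and is therefore automatable, while the surrounding organizational process stays outside the circle. (The pay rules and figures below are invented purely for illustration.)

```python
from dataclasses import dataclass

@dataclass
class TimeCard:
    employee_id: str
    regular_hours: float
    overtime_hours: float

def gross_pay(card: TimeCard, hourly_rate: float, overtime_factor: float = 1.5) -> float:
    """Clear inputs, clear outputs, fixed transfer function: this part can be mechanized."""
    return (card.regular_hours + overtime_factor * card.overtime_hours) * hourly_rate

# The open-system part - how time cards are collected, approved, and disputed, and how the
# pay policy itself evolves - is exactly what resists this kind of mechanization.
print(gross_pay(TimeCard("E-17", regular_hours=40, overtime_hours=6), hourly_rate=30.0))  # 1470.0
```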
12.3 Expanding the Limit
Human business processes (buying, selling, manufacturing, invoicing, and so on), like actual organisms, operate fundamentally as open systems, or rather complex adaptive systems (CAS). Other examples of CAS are cities, insect colonies, and so on. The architecture of an African termite mound requires more bits to describe than the number of neurons in a termite brain. So, where do they store it? In fact, they do not, because each termite is programmed as an automaton to instinctively perform certain functions under certain stimuli, and the combined activity of the colony produces and maintains the mound. Thus, the entire world is fundamentally made up of vastly complex interacting systems. When we automate, we are ourselves engaging in the principle of progressive mechanization. We are mechanizing some portion or aspect of a system. We can only mechanize those aspects that are themselves already highly specialized within the system itself or that we can make highly specialized; for example, it is possible to make a mechanical heart, but it is impossible to make a mechanical stem cell or a mechanical human brain. It is possible to make a mechanical payroll program but impossible to make a mechanical company that responds to dynamic market forces and makes a profit. The result of this is that any mechanization requires specific understanding of some particular specialization within some particular system. In other words, there are innumerable specific domains of specialization within all of the complex interacting systems in the world. And thus any mechanization is inherently domain specific and requires detailed domain knowledge. The process of mechanization is therefore fundamentally a domain-specific design problem. Thus, the limit of mechanization is our ability to comprehend some particular aspect of a system well enough to be able to extract some portion or aspect of that system in which it is possible to define a boundary or Carnot circle with clear inputs and clear outputs, along with a transfer function mapping those inputs to those outputs. If we cannot enumerate the inputs and outputs and then describe what inputs result in what outputs, then we simply cannot automate the process. This is the fundamental requirement of mechanization. This does not, however, mean that we are limited to our current simple mechanical metaphor of a machine as a set of inputs into a box that transforms those inputs into their associated outputs. Indeed, the next step of mechanization under way is to try to mimic aspects of how a system itself works. One aspect of this is the object-oriented revolution in software, which itself grew out of the work done in complex adaptive systems research. Unfortunately, the prevailing simple mechanical metaphor and the lack of understanding of the nature of CAS have blunted the evolution of object-oriented technology and limited its impact.
Fig. 12.4 The engineer's black box: an input arrow entering "the black box" and an output arrow leaving it
Object-oriented concepts have taken us from a simplistic view of a machine as a black-box process or function F, as shown in Fig. 12.4, in which outputs = F[inputs], to conceptualizing a machine as an interacting set of agents or objects. These concepts have allowed us to manage higher levels of complexity in the machines we build, but they have not taken us to a higher order of magnitude in our ability to mechanize complex systems. In the realm of business process software, what has been missing is the following:
1. The full realization that what we are modeling with business process software is really a portion or aspect of a complex adaptive system.
2. That this is fundamentally a logic design problem rather than an engineering problem.
3. That, in this case, the problem domain is primarily about semiotics and pragmatics, i.e., the nature of sign processes and the impact on those who use them.
Fortunately, there is a depth of research in these areas that provides guidance toward more powerful techniques and tools for making further progress. John Holland, the designer of the Holland Machine, one of the first parallel computers, has done extensive research into CAS and has discovered many basic principles that all CAS seem to have in common. His work led to object-oriented concepts; however, the object-oriented community has not continued to seek to implement his further discoveries and principles regarding CAS into object-oriented computing languages. One very useful concept is how CAS use rule-block formations in driving their behavior, and the way adaptation continues to build rule-block hierarchies on top of existing rule-block structures [6]. Christopher Alexander [7] is a building architect who discovered the notion of a pattern language for designing and constructing buildings and cities [8]. In general, a pattern language is a set of patterns or design solutions to some particular design problem in some particular domain. The key insight here is that design is a domain-specific problem that takes deep understanding of the problem domain and that, once good design solutions are found to any specific problem, we can codify them into reusable patterns. A simple example of a pattern language and the nature of domain specificity is perhaps how farmers go about building barns. They do not hire architects but rather get together and, through a set of rules of thumb based on how much livestock they have and storage and processing considerations, build a barn with such and such dimensions. One simple pattern is that, when the
barn gets to be too long for just doors on the ends, they put a door in the middle. Now, can someone use these rules of thumb or this pattern language to build a skyscraper in New York? Charles Sanders Peirce was a nineteenth-century American philosopher and semiotician. He is one of the founders of the quintessential American philosophy known as pragmatism, and he came up with a triadic semiotics based on his metaphysical categories of firstness, secondness, and thirdness [9]. Firstness is the category of idea or possibility. Secondness is the category of brute fact or instance, and thirdness is the category of laws or behavior. Peirce argues that existence itself requires these categories; that they are in essence a unity and that there is no existence without these three categories. Using this understanding, one could argue that this demystifies to some extent the Christian notion of God being a unity and yet also being triune, by understanding the Father as firstness, the Son as secondness, and the Holy Spirit as thirdness. Thus the trinity of God is essentially a metaphysical statement about the nature of existence. Peirce goes on to build a system of signs or semiotics based on this triadic structure and in essence argues that reality is ultimately the stuff of signs; e.g., God spoke the universe into existence. At first one could seek to use this notion to support the strong AI view that all intelligence is symbol representation and symbol processing and thus that a computer can ultimately model this reality. However, a key concept of Peirce is that the meaning of a sign requires an interpretant, which itself is a sign and thus also requires a further interpretant to give it meaning, in a never-ending process of meaning creation; that is, it becomes an eternally recursive process. Another way of viewing this is to say that this process of making meaning is ultimately an open system. And while machines can model and hence automate closed systems, they cannot fully model open systems. Peirce's semiotics and his notion of firstness, secondness, and thirdness provide the key insights required for building robust ontology models of any particular domain. The key to a robust ontology model is not that it is right once and for all, but rather that it can be built using a design language and that it has perfect fidelity with the resulting execution model. An ontology model is never right; it is just more and more useful. This is because of the notion of the relativity of categories. Perception is universally human, determined by man's psychophysical equipment. Conceptualization is culture-bound because it depends on the symbolic systems we apply. These symbolic systems are largely determined by linguistic factors, the structure of the language applied. Technical language, including the symbolism of mathematics, is, in the last resort, an efflorescence of everyday language, and so will not be independent of the structure of the latter. This, of course, does not mean that the content of mathematics is true only within a certain culture. It is a tautological system of a
hypothetico-deductive nature, and hence any rational being accepting the premises must agree to all its deductions [10]. The most critical aspect of achieving the theoretical limit of automation is the ability to continue to make execution models that are more and more useful. And this is true for two reasons:
1. CAS are so complex that we cannot possibly understand large portions of them perfectly ab initio. We need to model them, explore them, and then improve them.
2. CAS are continually adapting. They are continually changing and thus, even if we perfectly modeled some aspect of a CAS, it will change and our model will become less and less useful.
In order to do this, we need a DSDL capable of quickly building high-fidelity models, that is, models which are themselves the drivers of the actual execution of the automation. In order for the model of the CAS to ultimately drive the execution, it is critical that the ontology model be in perfect fidelity with the execution model. Today there is a big disconnect between what one can build in an analysis model and the resultant executing code or execution model. This is analogous to having a design of a two-story home on a blueprint and then having the result become a three-story office building. What is required is a design language that can both model the ontology in its full richness, as the analysis model, and model the full execution in perfect fidelity with the ontology model. In essence this means a single model that fully incorporates firstness, secondness, and thirdness in all their richness and that can then either be fully interpreted on a machine, or fully generated to some other machine language, or a combination of both. Only with such a powerful design or pattern language can we overcome the inherent limitations we have in our ability to comprehend the workings of some particular CAS, model and then automate those portions that can be mechanized, and continue to keep pace with the continually evolving CAS that we are seeking to progressively mechanize.
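A minimal sketch of the "single model that drives execution" idea follows (entirely invented, and far simpler than any real DSDL): the process is described once, as data, and the same description serves both as the analysis model that can be inspected or revised and as the artifact that a small interpreter executes, so the two cannot drift apart.

```python
# One declarative model of a tiny approval process; the interpreter below executes
# it directly, so the design model and the execution model are the same artifact.
MODEL = {
    "start": "submitted",
    "transitions": {
        ("submitted", "approve"): "approved",
        ("submitted", "reject"):  "rejected",
        ("approved", "pay"):      "paid",
    },
}

def run(model, events):
    """Generic interpreter: follow the declared transitions, ignoring invalid events."""
    state = model["start"]
    for event in events:
        state = model["transitions"].get((state, event), state)
    return state

print(run(MODEL, ["approve", "pay"]))  # -> "paid"
# Adding a state or a transition to MODEL immediately changes the behavior;
# there is no separately hand-written program that could fall out of fidelity.
```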
12.4 The Current State of the Art
The computing industry in general is still very much caught up in the mechanical metaphor. While object-oriented design and programming technology is now decades old, and there is a patterns movement in the industry that is just over a decade old, there are very few examples of this new DSDL technology in use today. However, where it has been used, the results have been dramatic, i.e., on the order of a 20-fold reduction of complexity as measured in lines of code. Lawson Landmark™ is such a pattern language, intended for the precise functional specification of business enterprise computer applications by domain specialists rather
281
of objectives and the algorithms needed to accomplish them and let it function. The annual autonomous vehicle race in the Nevada desert shows that at least simple goals can be met autonomously, or at least as well as by a mildly intoxicated human driver. The semiautonomous Mars rovers show off this technology even better but still fall far short of being intelligent. Another of the early intrinsically human activities that AI researchers tried to automate was typewriting. Professor Marvin Minsky started his AI career as a graduate student at MIT in the mid-1950s working on a voice typewriter to type military correspondence for the Department of Defense (DoD). Now, more than 50 years later, the technology is modestly successful as a software program. At the same time Professor Anthony Oettinger did his dissertation at the Harvard Computation Laboratory on automatic language translation with the Automatic Russian–English Dictionary. It was modestly successful for a narrow genre of Russian technical literature and created the whole new field of computational linguistics. Today, more than 50 years later, a similar technology is available as a software program for Russian and a few other languages with a library of genre-specific and technical-area-specific vocabulary plug-ins sold separately. The best success story on automatic language translation today is the European Economic Community (EEC), which writes its memos in French in the Brussels headquarters and converts them into the 11 languages of the EEC and then sends them out to member nations. French bureaucratese is a very narrow language genre and probably the easiest case for autotranslation automation, due not only to the genre but also to the source language. No one has ever suggested that Eugene Onegin will ever be translated automatically from Russian to English, since so far it has even resisted human efforts to do so. Amazing as it may seem, Shakespeare's poetry translates beautifully to Russian but Pushkin's does not translate easily to English. Linguists are divided, like philosophers, on whether computational linguistics technology will ever really achieve true automation, but we think that it probably will, subject to human pre- and post-editing in actual volume production practice, and then only for very narrow subject areas in a few restricted genres. Technical prose in which the translator is only copying one fact at a time from one language to another will be possible, but poetry will always be too complex, since poetry always violates the rules of grammar of its own source language.
12.5 A General Principle
What we need is a general principle which will cleanly divide all the things that can be done into those which can be automated and those which cannot be automated. You cannot automate what you cannot do manually, but the converse is not true, since you cannot always automate everything you
can do manually [4, 18, 19]. However, this principle is much too blunt. In his book Darwin's Black Box, Professor Michael Behe argued from the principle of irreducible complexity that neo-Darwinism was inadequate to explain the development of the eye or of the endocrine system because too many mutations and their immediate useful adaptations had to happen at once, since each of 20 or more required mutations would have no competitive advantage in adaptation individually [20]. In his second book, The Edge of Evolution, he sharpens his principle significantly to divide biological adaptations into those which can be explained by neo-Darwinism (single mutations) and those which cannot (multiple mutations). The refined principle, while sharper, still leaves a ragged edge between the two classes of adaptive biological systems, according to Behe [21]. In our case we postulate the principle of design. Anything that can be copied can be copied automatically. However, any process involving design cannot be automated, and any sufficiently (i.e., irreducibly) complex adaptive system cannot be automated (as Behe shows); simple adaptive systems, however, can be modeled to some extent mechanistically. Malaria can become resistant to a new antibiotic in weeks by Darwin's black box if it requires only a one-gene change; however, in 10,000 years malaria has not been able to overcome the sickle cell mutation in humans because it would require too many concurrent mutations. We conclude that anything that can be reduced to an algorithm or computational process can be automated, but that some things, like most human thought and most functions of complex adaptive systems, are not reducible to a logical algorithm or a computational process and therefore cannot be automated.
12.6 Editor's Notes for the New Edition
On the advice of reviewers and our advisory board, this chapter is repeated from the first edition. Two common and related questions have been expressed by readers since its publication, and they are mentioned next.
12.6.1 What Can We Automate but Prefer Not to Automate?

In the early 1980s, I worked with a doctoral student on what was then a challenge: developing two robots that can interact and work together (what we now define as "robot collaboration"). One of the challenges was that robot programming was in its infancy. Another was that, to secure the computational power needed for such a task, we had to reserve exclusive use of the entire university computer network. (We were allowed to do so only between 4 and 6 AM.) We began making progress.
The student, who was a tennis player, said to me one day: "I like this project. Often, my tennis partner does not show up for a game. I can program the robot to play tennis with me instead." We both laughed, and I said: "Why not program both robots to play the game, so you can also sleep in?" We both laughed even harder. It meant that we could automate and delegate fun tasks but would then lose all the fun of playing. We can generalize this realization: let us not automate ourselves out of activities that we enjoy and have time to enjoy.
Examples: Automatic scanning and interpretation of documents can be and is automated for efficiency and accuracy, yet we still prefer to read a good book ourselves, at our own pace. The same holds for driving a car, navigating a boat, playing sports and competing, dancing, singing in a choir, creating artwork, and many more.
12.6.2 What Should Not Be Automated?

This question is part of the problem of automation and ethics (see Ch. 34).
References

1. American Machinist, 21 Oct. 1948. Creation of the term automation is usually attributed to Delmar S. Harder
2. Schwartz, J.M., Begley, S.: The Mind and the Brain: Neuroplasticity and the Power of Mental Force. Harper, New York (2002)
3. von Bertalanffy, K.L.: General System Theory: Foundations, Development, Applications, p. 56. George Braziller, New York (1976)
4. Spencer, D.D.: What Computers Can Do. Macmillan, New York (1984)
5. von Bertalanffy, K.L.: General System Theory: Foundations, Development, Applications, p. 213. George Braziller, New York (1976). Revised edition
6. Holland, J.: Hidden Order: How Adaptation Builds Complexity. Addison-Wesley, Reading (1995)
7. Alexander, C.: The Timeless Way of Building. Oxford University Press, New York (1979)
8. Alexander, C.: A Pattern Language. Oxford University Press, New York (1981)
9. Peirce, C.S.: Collected Papers of Charles Sanders Peirce, Vols. 1–6 ed. by C. Hartshorne, P. Weiss, 1931–1935; Vols. 7–8 ed. by A.W. Burks. Harvard University Press, Cambridge (1958)
10. von Bertalanffy, K.L.: General System Theory: Foundations, Development, Applications, p. 237. George Braziller, New York (1976)
11. Jayaswal, B.K., Patton, P.C.: Design for Trustworthy Software: Tools, Techniques and Methodology of Producing Robust Software, p. 501. Prentice-Hall, Upper Saddle River (2006)
12. Dreyfus, H.: What Computers Can't Do: A Critique of Artificial Intelligence. Harper Collins, New York (1978)
13. Dreyfus, H.: What Computers Still Can't Do: A Critique of Artificial Reason. MIT Press, Cambridge (1992)
14. Kurzweil, R.: The Age of Intelligent Machines. MIT Press, Cambridge (1992)
15. Kurzweil, R.: The Age of Spiritual Machines: When Computers Exceed Human Intelligence. Viking, New York (1999)
16. Kurzweil, R.: The Singularity Is Near: When Humans Transcend Biology. Viking, New York (2005)
17. Simon, H.: Perception in chess. Cognitive Psychol. 4, 11–27 (1973)
18. Arden, B.W. (ed.): What Can Be Automated? The Computer Science and Engineering Research Study (COSERS). MIT Press, Cambridge (1980)
19. Wilson, I.Q., Wilson, M.E.: What Computers Cannot Do. Vertex, New York (1970)
20. Behe, M.: Darwin's Black Box: The Biochemical Challenge to Evolution. Free Press, New York (2006)
21. Behe, M.: The Edge of Evolution: The Search for the Limits of Darwinism. Free Press, New York (2007)
Further Reading

Frank, M., Roehrig, P., Pring, B.: What to Do When Machines Do Everything: How to Get Ahead in a World of AI, Algorithms, Bots, and Big Data. Wiley, New York (2017)
Pasquale, F.: The automated public sphere. In: The Politics and Policies of Big Data: Big Data, Big Brother, pp. 110–128. Taylor and Francis (2018)
Verbeek, P.P.: What Things Do. Penn State University Press, University Park (2021)
Wash, R., Rader, E., Vaniea, K., Rizor, M.: Out of the loop: how automated software updates cause unintended security consequences. In: 10th Symposium on Usable Privacy and Security (SOUPS), pp. 89–104 (2014)
West, D.M.: The Future of Work: Robots, AI, and Automation. Brookings Institution Press, Washington, DC (2018)
Richard D. Patton is Chief Technology Officer of Lawson Software and has been involved with building business application software languages and development methodologies for over 25 years. His current project is building a new application development language based on the latest research in the areas of pattern languages and complex adaptive systems theory as well as Charles Sanders Peirce’s triadic semiotics.
Peter C. Patton has taught computer science, mathematics, aerospace engineering, ancient history, and classical civilizations at the Universities of Kansas, Minnesota, Stuttgart, and Pennsylvania. His most recent book, Design for Trustworthy Software (with Bijay Jayaswal), won the Crosby Medal from the ASQ. He is currently teaching mechanical engineering, Western civilization, and philosophy to engineering students at Oklahoma Christian University, where he also teaches software engineering to aerospace engineering in-service graduate student engineers as an adjunct professor of engineering science.
Part III Automation Design: Theory, Elements, and Methods
13 Designs and Specification of Mechatronic Systems

Rolf Isermann
Institute of Automatic Control, Research Group on Control Systems and Process Automation, Darmstadt University of Technology, Darmstadt, Germany
e-mail: [email protected]

Contents
13.1 From Mechanical to Mechatronic Systems
13.2 Mechanical Systems and Mechatronic Developments
13.2.1 Machine Elements and Mechanical Components
13.2.2 Electrical Drives and Servosystems
13.2.3 Power-Generating Machines
13.2.4 Power-Consuming Machines
13.2.5 Vehicles
13.2.6 Trains
13.3 Functions of Mechatronic Systems
13.3.1 Basic Mechanical Design
13.3.2 Distribution of Mechanical and Electronic Functions
13.3.3 Operating Properties
13.3.4 New Functions
13.3.5 Other Developments
13.4 Integration Forms of Processes with Electronics
13.5 Design Procedures for Mechatronic Systems
13.6 Computer-Aided Design of Mechatronic Systems
13.7 Model-Based Control Function Development
13.7.1 Model-in-the-Loop Simulation and Control Prototyping
13.7.2 Software-in-the-Loop and Hardware-in-the-Loop Simulation
13.8 Control Software Development
13.9 Internet of Things for Mechatronics
13.10 Mechatronic Developments for Vehicles
13.11 Mechatronic Brake Systems
13.11.1 Hydraulic Brake System
13.11.2 Antilock Control with Switching Valves (ABS)
13.11.3 Electromechanical Brake Booster
13.11.4 Electrohydraulic Brake System (EHB)
13.11.5 Electromechanical Brake (EMB)
13.12 Mechatronic Steering Systems
13.12.1 Electrical Power Steering (EPS)
13.12.2 Basic Designs of EPS Systems
13.12.3 Fault-Tolerant EPS Structures
13.13 Conclusion and Emerging Trends
References
Abstract
Many technical processes and products in the area of mechanical and electrical engineering show increasing integration of mechanics with digital electronics and information processing. This integration is between the components (hardware) and the information-driven functions (software), resulting in integrated systems called mechatronic systems. Their development involves finding an optimal balance between the basic mechanical structure, sensor and actuator implementation, and automatic information processing and overall control. Frequently, formerly mechanical functions are replaced by electronically controlled functions, resulting in simpler mechanical structures and increased functionality. The development of mechatronic systems opens the door to many innovative solutions and synergetic effects which are not possible with mechanics or electronics alone. This technical progress has a very strong influence on a multitude of products in the areas of mechanical, electrical, and electronic engineering and is increasingly changing the design, for example, of conventional electromechanical components, machines, vehicles, and precision mechanical devices, and their connection to cloud services. Besides general aspects, the contribution describes a procedure for the computer-aided design and specification of mechatronic systems, model-based control system design, and control software development, and it illustrates mechatronic developments for the brake and steering systems of automobiles.
Keywords
Mechatronic systems · Drives · Machines · Mechatronic functions · Design procedures · Hardware-in-the-loop simulation · V-development model · Model-based control · Control software development · Mechatronic brakes · Mechatronic steering systems
13.1 From Mechanical to Mechatronic Systems
Mechanical systems generate certain motions or transfer forces or torques. For the oriented command of, e.g., displacements, velocities, or forces, feedforward and feedback control systems have been applied for many years. The control systems operate either without auxiliary energy (e.g., a fly-ball governor) or with electrical, hydraulic, or pneumatic auxiliary energy, to manipulate the commanded variables directly or with a power amplifier. A realization with added
fixed-wired (analog) devices turns out to enable only relatively simple and limited control functions. If these analog devices are replaced with digital computers in the form of, e.g., online coupled microcomputers, the information processing can be designed to be considerably more flexible and more comprehensive. Figure 13.1 shows the example of a machine set, consisting of a power-generating machine (DC motor) and a power-consuming machine (circulation pump): (a) a scheme of the components, (b) the resulting signal flow diagram in two-port representation, and (c) the open-loop process with one or several manipulated variables as input variables and several measured variables as output variables. This process is characterized by different controllable energy flows (electrical, mechanical, and hydraulic). The first and last flow can be manipulated by a manipulated variable of low power (auxiliary power), e.g., through a power electronics device and a flow valve actuator. Several sensors yield measurable variables.

Fig. 13.1 Schematic representation of a machine set: (a) scheme of the components; (b) signal flow diagram (two-port representation); (c) open-loop process. V – voltage; VA – armature voltage; IA – armature current; T – torque; ω – angular frequency; Pi – drive power; Po – consumer power

For a mechanical–electronic system, a digital electronic system is added to the process. This electronic
system acts on the process based on the measurements or external command variables in a feedforward or feedback manner (Fig. 13.2). If the electronic and the mechanical system are then merged into an autonomous overall system, an integrated mechanical–electronic system results. The electronic system processes information, and such a system is characterized at least by a mechanical energy flow and an information flow. These integrated mechanical–electronic systems are called mechatronic systems.

Fig. 13.2 Mechanical process and information processing develop toward a mechatronic system

Thus, mechanics and electronics
are joined. The word mechatronics was probably first created by a Japanese engineer in 1969 [43] and was a trademark of a Japanese company until 1972 [19]. Several definitions can be found in [24, 44, 47, 51, 61]. All definitions agree that mechatronics is an interdisciplinary field, in which the following disciplines act together (Fig. 13.3):

• Mechanical systems (mechanical elements, machines, and precision mechanics)
• Electronic systems (microelectronics, power electronics, and sensor and actuator technology)
• Information technology (systems theory, control and automation, software engineering, artificial intelligence, and cloud services)

Fig. 13.3 Mechatronics: synergetic integration of different disciplines

The solution of tasks to design mechatronic systems is performed on the mechanical as well as on the digital-electronic side. Thus, interrelations during design play an important role, because the mechanical system influences the electronic system and, vice versa, the electronic system influences the design of the mechanical system (Fig. 13.4). This means that simultaneous engineering has to take place, with the goal of designing an overall integrated system (an organic system) and also creating synergetic effects. A further feature of mechatronic systems is integrated digital information processing. As well as basic control functions, more sophisticated control functions may be realized, e.g., calculation of nonmeasurable variables, adaptation of controller parameters, detection and diagnosis of faults, and, in the case of failures, reconfiguration to redundant components. A connection to cloud services, like remote maintenance, remote fault diagnosis, and teleoperation, offers further possibilities in the frame of the Internet of Things (IoT). Hence, mechatronic systems are developing with adaptive or
even learning behavior; such systems can also be called intelligent mechatronic systems. The developments to date can be found in [15, 19, 27, 61, 65, 71]. Insight into general aspects is given editorially in journals [24, 47], in conference proceedings such as [2, 12, 25, 26, 38, 66], in journal articles [30, 68], and in books [6, 9, 21, 28, 35, 46, 54]. A summary of research projects at the Darmstadt University of Technology can be found in [33].
13.2 Mechanical Systems and Mechatronic Developments

Mechanical systems can be applied to a large area of mechanical engineering. According to their construction, they can be subdivided into mechanical components, machines, vehicles, precision mechanical devices, and micromechanical components. The design of mechanical products is influenced by the interplay of energy, matter, and information. With regard to the basic problem and its solution, frequently either the energy, matter, or information flow is dominant. Therefore, one main flow and at least one side flow can be distinguished [52]. In the following, some examples of mechatronic developments are given. The area of mechanical components, machines, and vehicles is covered by Fig. 13.5.
Fig. 13.4 Interrelations during the design and construction of mechatronic systems: (a) conventional procedure with separate components; (b) mechatronic procedure leading to a mechatronic overall system

13.2.1 Machine Elements and Mechanical Components
Machine elements are usually purely mechanical. Figure 13.5 shows some examples. Properties that can be improved by electronics are, for example, self-adaptive stiffness and damping, self-adaptive free motion or pretension, automatic operating functions such as coupling or gear shifting, and supervisory functions. Some examples of mechatronic approaches are hydrobearings for combustion engines with electronic control of damping, magnetic bearings with position control, automatic electronic–hydraulic gears, adaptive shock absorbers for wheel suspensions, and electromechanical steering systems (EPS), [57].
Fig. 13.5 Examples of mechatronic systems: mechatronic machine components (semi-active hydraulic dampers, automatic gears, magnetic bearings); mechatronic motion generators (integrated electrical, hydraulic, and pneumatic servo drives; multi-axis and mobile robots); mechatronic power-producing machines (brushless DC motors, integrated AC drives, mechatronic combustion engines); mechatronic power-consuming machines (integrated multi-axis machine tools, integrated hydraulic pumps); mechatronic automobiles (anti-lock braking systems (ABS), electrohydraulic brake (EHB), active suspension, electrical power steering (EPS)); mechatronic trains (tilting trains, active bogies, magnetically levitated trains (MAGLEV))
13.2.2 Electrical Drives and Servosystems
Electrical drives with direct-current, universal, asynchronous, and synchronous motors have used integration with gears, speed sensors or position sensors, and power electronics for many years. Especially the development of transistor-based voltage supplies and cheaper power electronics on the basis of transistors and thyristors with variable-frequency three-phase current supported speed control drives also for smaller power. Herewith, a trend toward decentralized drives with integrated electronics can be observed. The way of integration or attachment depends, e.g., on space requirement, cooling, contamination, vibrations, and accessibility for maintenance. Electrical servodrives require special designs for positioning. Hydraulic and pneumatic servodrives for linear and rotatory positioning show increasingly integrated sensors and control electronics. Motivations are requirements for easy-to-assemble drives, small space, fast change, and increased functions. Multiaxis robots and mobile robots show mechatronic properties from the beginning of their design.

13.2.3 Power-Generating Machines

Machines show an especially broad variability. Power-producing machines are characterized by the conversion of hydraulic, thermodynamic, or electrical energy and delivery of power. Power-consuming machines convert mechanical energy to another form, thereby absorbing energy. Vehicles transfer mechanical energy into movement, thereby consuming power. Examples of mechatronic electrical power-generating machines are brushless DC motors with electronic commutation or speed-controlled asynchronous and synchronous motors with variable-frequency power converters. Combustion engines increasingly contain mechatronic components, especially in the area of actuators [57]. Gasoline engines showed, for example, the following steps of development: microelectronic-controlled injection and ignition (1979), electrical throttle (1991), direct injection with electromechanical (1999) and piezoelectric injection valves (2003), and variable valve control (2004); see, for example, [56]. Diesel engines first had mechanical injection pumps (1927), then analog-electronic-controlled axial piston pumps (1986), and digital-electronic-controlled high-pressure pumps, since 1997 with common-rail systems [57]. Further developments are exhaust turbochargers with wastegate or controllable vanes (variable turbine geometry, VTG), since about 1993.

13.2.4 Power-Consuming Machines

Examples of mechatronic power-consuming machines are multiaxis machine tools with trajectory control, force control, tools with integrated sensors, and robot transport of the products; see, e.g., [64]. In addition to these machine tools with open kinematic chains between basic frame and tools and linear or rotatory axes with one degree of freedom, machines with parallel kinematics will be developed. Machine tools show a tendency toward magnetic bearings if ball bearings cannot be applied for high speeds, e.g., for high-speed milling of aluminum, and also for ultracentrifuges [39]. Within the area of manufacturing, many machinery, sorting, and transportation devices are characterized by integration with electronics, but as yet they are mostly not fully hardware integrated. For hydraulic piston pumps the control electronics is now attached to the casing. Further examples are packing machines with decentralized drives and trajectory control or offset printing machines with replacement of the mechanical synchronization axis through decentralized drives with digital electronic synchronization and high precision.

13.2.5 Vehicles

Many mechatronic components have been introduced, especially in the area of vehicles, or are in development: antilock braking control (ABS), controllable shock absorbers, controlled adaptive suspensions, active suspensions, drive dynamic control through individual braking (electronic stability control, ESC), electrohydraulic brakes (2001), active front steering (AFS) (2003), and electrical power steering (EPS). Of the innovations for vehicles 80–90% are based on electronic/mechatronic developments. Here, the value of electronics/electrics of vehicles increases to about 30% or more. Advanced driver assistance systems (ADAS) make use of automatic braking and acceleration for adaptive cruise control, wheel individual braking for electronic stability control (ESC) and traction control (TRC), and lane keeping control (LKC) based on mechatronic brake and steering systems; see, for example, [8, 10, 23, 32, 40, 67].
13.2.6 Trains Trains with steam, diesel, or electrical locomotives have followed a very long development. For wagons the design with two boogies with two axes is standard. ABS braking control can be seen as the first mechatronic influence in this area. The high-speed trains (TGV and ICE) contain modern asynchronous motors with power electronic control. The trolleys are supplied with electronic force and position control. Tilting trains show a mechatronic design (1997) and
292
R. Isermann
actively damped and steerable boogies also [17]. Further, magnetically levitated trains are based on mechatronic constructions; see, e.g., [17].
13.3
Functions of Mechatronic Systems
Mechatronic systems enable, after the integration of the components, many improved and also new functions. This will be discussed by using examples based on [26, 28].
13.3.1 Basic Mechanical Design The basic mechanical construction first has to satisfy the task of transferring the mechanical energy flow (force and torque) to generate motions or special movements, etc. Known traditional methods are applied, such as material selection, as well as calculation of strengths, manufacturing, production costs, etc. [69] By attaching sensors, actuators, and mechanical controllers, in earlier times, simple control functions were realized, e.g., the fly-ball governor. Then gradually pneumatic, hydraulic, and electrical analog controllers were introduced. After the advent of digital control systems, especially with the development of microprocessors around 1975, the information processing part could be designed to be much more sophisticated. These digitally controlled systems were first added to the basic mechanical construction and were limited by the properties of the sensors, actuators, and electronics, i.e., they frequently did not satisfy reliability and lifetime requirements under rough environmental conditions (temperature, vibrations, and contamination) and had a relatively large space requirement and cable connections, and low computational speed. However, many of these initial drawbacks were removed with time, and since about 1980 electronic hardware has become greatly miniaturized, robust, and powerful, and has been connected by field bus systems. Based on this, the emphasis on the electronic side could be increased and the mechanical construction could be designed as a mechanical–electronic system from the very beginning. The aim was to result in more autonomy, for example, by decentralized control, field bus connections, plug-and-play approaches, and distributed energy supply, such that selfcontained units emerge.
13.3.2 Distribution of Mechanical and Electronic Functions In the design of mechatronic systems, interplay for the realization of functions in the mechanical and electronic parts is crucial. Compared with pure mechanical realizations, the use of amplifiers and actuators with electrical auxiliary energy
has already led to considerable simplifications, as can be seen in watches, electronic typewriters, and cameras. A further considerable simplification in the mechanics resulted from the introduction of microcomputers in connection with decentralized electrical drives, e.g., for electronic typewriters, sewing machines, multiaxis handling systems, and automatic gears. The design of lightweight constructions leads to elastic systems that are weakly damped through the material itself. Electronic damping through position, speed, or vibration sensors and electronic feedback can be realized with the additional advantage of adjustable damping through algorithms. Examples are elastic drive trains of vehicles with damping algorithms in the engine electronics, elastic robots, hydraulic systems, far-reaching cranes, and space constructions (e.g., with flywheels). The addition of closed-loop control, e.g., for position, speed, or force, does not only result in precise tracking of reference variables, but also an approximate linear overall behavior, even though mechanical systems may show nonlinear behavior. By omitting the constraint of linearization on the mechanical side, the effort for construction and manufacturing may be reduced. Examples are simple mechanical pneumatic and electromechanical actuators and flow valves with electronic control. With the aid of freely programmable reference variable generation, the adaptation of nonlinear mechanical systems to the operator can be improved. This is already used for driving pedal characteristics within engine electronics for automobiles, telemanipulation of vehicles and aircraft, and in the development of hydraulically actuated excavators and electric power steering. However, with increasing number of sensors, actuators, switches, and control units, the cables and electrical connections also increase, such that reliability, cost, weight, and required space are major concerns. Therefore, the development of suitable bus systems, plug systems, and fault-tolerant and reconfigurable electronic systems is a challenge for the designer, as can be seen in modern automotive platforms.
13.3.3 Operating Properties By applying active feedback control, the precision of, e.g., a position is reached by comparison of a programmed reference variable with a measured control variable and not only through the high mechanical precision of a passively feedforward-controlled mechanical element. Therefore, the mechanical precision in design and manufacturing may be reduced somewhat and simpler constructions for bearings or slideways can be used. An important aspect in this regard is compensation of larger and time-variant friction by adaptive friction compensation. Larger friction at the cost of backlash
may also be intended (e.g., gears with pretension), because it is usually easier to compensate for friction than for backlash. Model-based and adaptive control allow operation at more operating points (wide-range operation) compared with fixed control with unsatisfactory performance (danger of instability or sluggish behavior). A combination of robust and adaptive control enables wide-range operation, e.g., for flow, force, and speed control, and for processes involving engines, vehicles, and aircraft. Better control performance allows the reference variables to be moved closer to constraints with improved efficiencies and yields (e.g., higher temperatures, pressures for combustion engines and turbines, compressors at stalling limits, and higher tensions and higher speed for paper machines and steel mills).
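As an illustration of the adaptive friction compensation mentioned above, the following minimal Python sketch (not taken from the chapter or its references) assumes a Coulomb-plus-viscous friction model whose two parameters are estimated online with recursive least squares and then used as a feedforward term added to the controller output; all names and numerical values are illustrative assumptions.

```python
import numpy as np

class AdaptiveFrictionCompensator:
    """Feedforward friction compensation u_ff = Fc*sign(v) + Fv*v, with the
    Coulomb (Fc) and viscous (Fv) coefficients estimated online by recursive
    least squares from measured velocity and the torque attributed to friction."""

    def __init__(self, forgetting=0.995):
        self.theta = np.zeros(2)      # current estimates [Fc, Fv]
        self.P = np.eye(2) * 1e3      # covariance of the estimates
        self.lam = forgetting         # forgetting factor tracks slow drift

    def update(self, velocity, friction_torque):
        # Regressor of the assumed friction model
        phi = np.array([np.sign(velocity), velocity])
        k = self.P @ phi / (self.lam + phi @ self.P @ phi)   # RLS gain
        self.theta += k * (friction_torque - phi @ self.theta)
        self.P = (self.P - np.outer(k, phi @ self.P)) / self.lam

    def feedforward(self, velocity):
        Fc, Fv = self.theta
        return Fc * np.sign(velocity) + Fv * velocity
```

In use, the compensator output would simply be added to the feedback controller output, e.g., u = u_controller + comp.feedforward(v), so that the remaining feedback loop no longer has to fight the (time-variant) friction.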
13.3.4 New Functions

Mechatronic systems also enable functions that could not be performed without digital electronics. Firstly, nonmeasurable quantities can be calculated on the basis of measured signals and influenced by feedforward or feedback control. Examples are time-dependent variables such as the slip for tires, internal tensions, temperatures, the slip angle and ground speed for steering control of vehicles, or parameters such as damping and stiffness coefficients, and resistances. The automatic adaptation of parameters, such as damping and stiffness for oscillating systems based on measurements of displacements or accelerations, is another example. Integrated supervision and fault diagnosis becomes increasingly important with more automatic functions, increasing complexity, and higher demands on reliability and safety. Then, fault tolerance by triggering of redundant components and system reconfiguration, maintenance on request, and any kind of teleservice make the system more intelligent. Cloud services in the frame of the Internet of Things (IoT) are further possibilities.
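To make the calculation of nonmeasurable quantities concrete, the following minimal sketch shows a discrete-time Luenberger observer that reconstructs an unmeasured velocity from measured position samples. The double-integrator model, sampling time, observer gain, and noise level are assumed purely for illustration and are not taken from the chapter.

```python
import numpy as np

# Double-integrator model x = [position, velocity]; only the position is measured.
dt = 0.001
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt**2], [dt]])
C = np.array([[1.0, 0.0]])
L_gain = np.array([[0.4], [20.0]])   # assumed gain; A - L_gain @ C is stable here

def observer_step(x_hat, u, y_measured):
    """One Luenberger observer update: model prediction plus output correction."""
    y_hat = float(C @ x_hat)
    return A @ x_hat + B * u + L_gain * (y_measured - y_hat)

# Reconstruct the unmeasured velocity from noisy position measurements.
rng = np.random.default_rng(1)
x = np.array([[0.0], [0.0]])         # true state of the simulated process
x_hat = np.zeros((2, 1))             # observer state
for _ in range(2000):
    u = 1.0                          # constant acceleration command
    x = A @ x + B * u
    y = float(x[0, 0] + 0.001 * rng.standard_normal())   # measured position
    x_hat = observer_step(x_hat, u, y)
print(f"true velocity: {x[1, 0]:.3f}, estimated velocity: {x_hat[1, 0]:.3f}")
```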
13.3.5 Other Developments Mechatronic systems frequently allow flexible adaptation to boundary conditions. A part of the functions and also precision becomes programmable and rapidly changeable. Advanced simulations enable the reduction of experimental investigations with many parameter variations. Also, shorter time to market is possible if the basic elements are developed in parallel and the functional integration results from the software. A far-reaching integration of the process and the electronics is much easier if the customer obtains the functioning system from one manufacturer. Usually, this is the manufacturer of the machine, the device, or the apparatus. Although
these manufacturers have to invest a lot of effort in coping with the electronics and the information processing, they gain the chance to add to the value of the product. For small devices and machines with large production numbers, this is obvious. In the case of larger machines and apparatus, the process and its automation frequently comes from different manufacturers. Then, special effort is needed to produce integrated solutions. Table 13.1 summarizes some properties of mechatronic systems compared with conventional electromechanical systems.
13.4 Integration Forms of Processes with Electronics
Figure 13.6a shows a general scheme of a classical mechanical–electronic system. Such systems resulted from adding available sensors and actuators and analog or digital controllers to the mechanical components. The limits of this approach were the lack of suitable sensors and actuators, unsatisfactory lifetime under rough operating conditions (acceleration, temperature, and contamination), large space requirements, the required cables, and relatively slow data processing. With increasing improvements in the miniaturization, robustness, and computing power of microelectronic components, one can now try to place more emphasis on the electronic side and design the mechanical part from the beginning with a view to a mechatronic overall system. Then, more autonomous systems can be envisaged, e.g., in the form of encapsulated units with noncontacting signal transfer or bus connections and robust microelectronics. Integration within a mechatronic system can be performed mainly in two ways: through the integration of components and through integration by information processing (see also Table 13.1). The integration of components (hardware integration) results from designing the mechatronic system as an overall system and embedding the sensors, actuators, and microcomputers into the mechanical process (Fig. 13.6b). This spatial integration may be limited to the process and sensor or the process and actuator. The microcomputers can be integrated with the actuator, the process, or sensor, or be arranged at several places. Integrated sensors and microcomputers lead to smart sensors, and integrated actuators and microcomputers develop into smart actuators. For larger systems, bus connections will replace the many cables. Hence, there are several possibilities for building an integrated overall system by proper integration of the hardware. Integration by information processing (software integration) is mostly based on advanced control functions. Besides basic feedforward and feedback control, an additional
Table 13.1 Some properties of conventional and mechatronic designed systems

Conventional design – Mechatronic design
Added components – Integration of components (hardware)
Bulky – Compact
Complex – Simple mechanisms
Cable problems – Bus or wireless communication
Connected components – Autonomous units
Simple control – Integration by information processing (software)
Stiff construction – Elastic construction with damping by electronic feedback
Feedforward control, linear (analog) control – Programmable feedback (nonlinear) digital control
Precision through narrow tolerances – Precision through measurement and feedback control
Nonmeasurable quantities change arbitrarily – Control of nonmeasurable estimated quantities
Simple monitoring – Supervision with fault diagnosis
Fixed abilities – Adaptive and learning abilities
Fig. 13.6 Integration of mechatronic systems: (a) general scheme of a (classical) mechanical–electronic system; (b) integration through components (hardware integration); (c) integration through functions (software integration)
influence may take place through process knowledge and corresponding online information processing (Fig. 13.6c). This means processing of available signals at higher levels, as will be discussed in the next section. This includes the solution of tasks such as supervision with fault diagnosis, optimization, and general process management. The corresponding problem solutions result in online information processing, especially using real-time algorithms, which must be adapted to the properties of the mechanical process, e.g., expressed by mathematical models in the form of static characteristics, differential equations, etc. (Fig. 13.7).

Fig. 13.7 Integration of mechatronic systems: integration of components (hardware integration); integration by information processing (software integration)

Therefore,
a knowledge base is required, comprising methods for design and information gain, process models, and performance criteria. In this way, the mechanical parts are governed in various ways through higher-level information processing with intelligent properties, possibly including learning, thus resulting in integration with process-adapted software. Both types of integration are summarized in Fig. 13.7. In the following, mainly integration through information processing will be considered.
Fig. 13.8 Different levels of information processing for process automation. u: manipulated variables; y: measured variables; v: input variables; r: reference variables

Recent approaches for mechatronic systems mostly use signal processing at lower levels, e.g., damping or control of motions or simple supervision. Digital information processing, however, allows the solution of many more tasks, such as adaptive control, learning control, supervision with fault diagnosis, decisions for maintenance or even fault-tolerance actions, economic optimization, and coordination. These higher-level tasks are sometimes summarized as pro-
cess management. Information processing at several levels under real-time condition is typical for extensive process automation (Fig. 13.8). With the increasing number of automatic functions (autonomy) including electronic components, sensors, and actuators, increasing complexity, and increasing demands on reliability and safety, integrated supervision with fault diagnosis becomes increasingly important. This is, therefore, a significant natural feature of an intelligent mechatronic system. Figure 13.9 shows a process influenced by faults. These faults indicate unpermitted deviations from normal states and can be generated either externally or internally. External faults are, e.g., caused by the power supply, contamination or collision, internal faults by wear, missing lubrication, and actuator or sensor faults. The classic methods for fault detection are limit-value checking and plausibility checks of a few measurable variables. However, incipient and intermittent faults cannot usually be detected, and in-depth fault diagnosis is not possible with this simple approach. Therefore, model-based fault detection and diagnosis methods have been developed in recent years, allowing early detection of small faults with normally measured signals, also in closed loops [11, 16, 29]. Based on measured input signals U(t), output signals Y(t), and process models, features are generated by, e.g., parameter estimation, state and output observers, and parity equations (Fig. 13.9).
Fig. 13.9 Scheme for model-based fault detection and diagnosis [29]
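As a simple illustration of the feature-generation step, the following sketch computes a parity-equation-style residual for an assumed first-order process model and flags threshold violations as analytical symptoms. The model coefficients, threshold, and fault scenario are illustrative assumptions and do not reproduce the methods of [29].

```python
import numpy as np

def detect_fault(u, y, a=0.95, b=0.05, threshold=0.2):
    """Parity-equation-style residual for a nominal first-order model
    y[k] = a*y[k-1] + b*u[k-1] (assumed identified beforehand).
    The residual stays near zero in the fault-free case; threshold
    violations are reported as analytical symptoms."""
    u, y = np.asarray(u), np.asarray(y)
    residual = y[1:] - (a * y[:-1] + b * u[:-1])
    symptoms = np.abs(residual) > threshold
    return residual, symptoms

# Illustrative use: an additive sensor offset appearing at sample 60.
rng = np.random.default_rng(0)
u = np.ones(100)
y = np.zeros(100)
for k in range(1, 100):
    y[k] = 0.95 * y[k-1] + 0.05 * u[k-1] + 0.005 * rng.standard_normal()
y[60:] += 0.5                         # measured output jumps due to the fault
residual, symptoms = detect_fault(u, y)
print("first symptom at sample", int(np.argmax(symptoms)) + 1)
```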
13.5 Design Procedures for Mechatronic Systems
The design of comprehensive mechatronic systems requires systematic development and the use of modern software design tools. As with any design, mechatronic design is also an iterative procedure. However, it is much more involved than for pure mechanical or electrical systems. Figure 13.10 shows that, in addition to traditional domain-specific engineering, integrated simultaneous (concurrent) engineering is required due to the integration of engineering across traditional boundaries that is typical of the development of mechatronic systems. Hence, mechatronic design requires a simultaneous procedure in broad engineering areas. Traditionally, the design of mechanics, electrics and electronics, control, and the human–machine interface was performed in different departments with only occasional contact, sometimes sequentially (bottom-up design). Because of the requirements for integration of hardware and software functions, these areas
have to work together and the products have to be developed more or less simultaneously to an overall optimum (concurrent engineering, top-down design). Usually, this can only be realized with suitable teams. The principal procedure for the design of mechatronic systems is, e.g., described in the VDI guideline 2206 [71]. A flexible procedural model is described, consisting of the following elements:

1. Cycles of problem solutions at microscale:
• Search for solutions by analysis and synthesis of basic steps
• Comparison of requirements and reality
• Performance and decisions
• Planning

2. Macroscale cycles in the form of a V-model:
• Logical sequence of steps
• Requirements
• System design
• Domain-specific design
Fig. 13.10 From domain-specific traditional engineering to integrated, simultaneous engineering (iteration steps are not indicated)
• System integration
• Verification and validation
• Modeling (supporting)
• Products: laboratory model, functional model, and preseries product

3. Process elements for repeating working steps:
• Repeating process elements
• System design, modeling, element design, integration, etc.

The V-model, according to [14, 71], is distinguished with regard to system design and system integration with domain-specific design in mechanical engineering, electrical engineering, and information processing. Usually, several design cycles are required, resulting, e.g., in the following intermediate products:

• Laboratory model: first functions and solutions, rough design, and first function-specific investigations
• Functional model: further development, fine-tuning, integration of distributed components, power measurements, and standard interfaces
• Pre-series product: consideration of manufacturing, standardization, further modular integration steps, encapsulation, and field tests

The V-model originates most likely from software development [63]. Some important design steps for mechatronic systems are shown in Fig. 13.11 in the form of an extended V-model, where the following are distinguished: system design up to laboratory model, system integration up to functional model, and system tests up to pre-series product. The maturity of the product increases as the individual steps of the V-model are followed. However, several iterations have to be performed, which is not illustrated in the figure. One intention of the V-model is that the results, tests, and documents of the right branch correspond to the development procedures of the left branch.
Fig. 13.11 A "V" development scheme for mechatronic systems
Depending on the type of product, the degree of mechatronic design is different. For precision mechanic devices the integration is already well developed. In the case of mechanical components one can use as a basis well-proven constructions. Sensors, actuators, and electronics can be integrated by corresponding changes, as can be seen, e.g., in adaptive shock absorbers, hydraulic brakes, and fluidic actuators. In machines and vehicles it can be observed that the basic mechanical construction remains constant but is complemented by mechatronic components, as is the case for machine tools, combustion engines, and automobiles. Some important steps of the V-model for the development of hardware and software functions can be described as follows: 1. Requirements • User requirements • Definition of general (overall) functions and data (rated values) of the final product • General solution outline • Development and manufacturing costs • Timely development and milestones • Result: requirements document (does not include technical implementation) 2. Specifications • Definition of the product that fulfills the requirements • Partitioning in manageable modules • Specification of features and data of the modules • Consideration of the sources, tools, and limitations for the development and final manufacturing and maintenance • Specification of hardware data • Specification of used software, compilers, and development systems • Result: specification document 3. System design • Detailed partitioning into electronic, mechanic, hydraulic, pneumatic, and thermal components with their auxiliary power supplies • Detailed fixing of type of sensors and actuators and their data • Detailed data of interfaces between microcomputer, sensors, and actuators • Task distribution between sensors and actuators with integrated electronics • Specification of power-related data • Hardware design: data of microprocessors, data storages, interfaces, bus systems, cabling, and plug systems • Control engineering design: – Definition of sensor inputs and outputs to the actuators
4.
5.
6.
7.
– Control system structure: feedforward control (open loop) and feedback control (closed loop) – Required engine and component models – Model-based design – Calibration (parametrization) methods – Supervision and diagnosis functions • Reliability and safety issues: FMEA (fault mode and effect analysis) studies for sensor, actuator, and ECU faults and failures • Result: control system design document Modeling and simulation • Required mathematical models of processes, sensors, actuators, drives, and transmission • Theoretical/physical modeling • Experimental modeling • Use of modeling/identification tools • Measurement procedures for test benches and design of experiments • Kind of models: stationary (lookup tables, polynomials, and neural networks) and dynamic (differential equations and neural networks) • Result: overall process and component models Domain-specific component design • Design of mechanics, sensors, actuators, and electronics • Hardware and software integration of functions • Prototype solutions • Human-machine interface • Laboratory models Control function development • Hierarchical control structure • Computer-supported design and manual design • Operation states: from start-up to shut-off • Control functions: process state dependent and time dependent • Sampling times and word length • Supervision and diagnosis functions • Model-in-the-loop simulation: process and μCmodels • Rapid control prototyping with development μC, bypass computer and test bench • Result: control structure and control algorithms Control software development • Software architecture layers and modules • Software-component interfaces • High-level language: selection, floating point (e.g., Ccode, MATLAB/Simulink) • Availability of compilers for target software • Implementation of control functions and modules into software structure • Standardization and reuse of software modules • Code optimization
8.
9.
10.
11.
12.
13.
14.
• Testing of software modules with, e.g., model-in-theloop simulation • Rapid control prototyping with bypass computer and test bench experiments • Transfer of high-level language control software into machine code. Use of cross-compiler • Test of control functions with simulated process • Software-in-the-loop simulation • Hardware-in-the-loop simulation if real-time functions with components are of interest • Result: implemented control software in target microcomputer (μC) Mechatronic components integration • Integration of components • Mechanical integration • Integration of mechanics, sensors, actuators, and microcomputer • Integration with cables, switches, and bus systems • Result: integrated prototype product Hardware and software testing • Hardware-in-the-loop simulation • Real-time software analysis • Stress analysis Final system integration (hardware) • Assembling • Mutual adaption of components • Optimization with regard to manufacturing • Creation of synergies Final system integration (software functions) • Use of real signals • Filtering of signals • Calibration of control parameters • Tests for stationary and dynamic behavior • Tests of the human-machine interface System testing • Use of test benches, if required • Stress testing • Reliability and safety tests • Electromagnetic compatibility Field testing • Final product • Normal and abnormal use • Statistics • Certification, if required Production • Preparations for series production • Used machines • Assembling lines • Quality control
One special topic within the domain-specific component design (no. 5) is the design of the human-machine interface (HMI). This holds especially for mechatronic equipment,
machines, and vehicles and needs special consideration with regard to the human role and capabilities. In this context, a design with virtual reality and visual simulations can be used as described in [5]. If the overall system consists of many different components a modular design and specification of the subcomponents is advisable with a hierarchical structure and horizontal and vertical branches. Some examples are given in [45].
13.6 Computer-Aided Design of Mechatronic Systems
One general goal in the design of mechatronic systems is the use of computer-aided design tools from different domains. A survey is given in [71]. The design model given in [14] distinguishes the following integration levels: • Basic level: specific product development, computeraided engineering (CAE) tools • Process-oriented level: design packages, status, process management, and data management • Model-oriented level: common product model for data exchange (STEP) • System-oriented level: coupling of information technology (IT) tools with, e.g., CORBA, DCOM, and JAVA The domain-specific design is usually designed on general CASE tools, such as CAD/CAE for mechanics, twodimensional (2D) and three-dimensional (3D) design with AutoCAD, computational fluid dynamics (CFD) tools for fluidics, electronics, and board layout (PADS), microelectronics (VHDL), and computer-aided design of control systems CADCS tools for the control design (see, e.g., [14, 34]). For overall modeling, object-oriented software is especially of interest based on the use of general model-building laws. The models are first formulated as noncausal objects installed in libraries. They are then coupled with graphical support (object diagrams) by using methods of inheritance and reusability. Examples are MODELICA, MOBILE, VHDL-AMS, and 20 SIM; see, e.g., [13, 34, 48, 49, 50]. A broadly used tool for simulation and dynamics design is MATLAB/SIMULINK. When designing mechatronic systems the traditional borders of various disciplines have to be crossed. For the classical mechanical engineer this frequently means that knowledge of electronic components, information processing, and systems theory has to be deepened, and for the electrical/electronic engineer that knowledge on thermodynamics, fluid mechanics, and engineering mechanics has to be enlarged. For both, more knowledge on modern control principles, software engineering, and information technology may be necessary (see also [69]).
13.7 Model-Based Control Function Development
A systematic and efficient development of control functions and their optimization and calibration requires special simulation methods and computers, which support the control function development as well as calibration and software testing. This is part of the control system integration, represented in the right branch of the V-development model in Fig. 13.11.
13.7.1 Model-in-the-Loop Simulation and Control Prototyping

The design of new control functions in an early development phase may be based on simulations with process models and an ECU model, i.e., control algorithms in a high-level language such as MATLAB-Simulink. This is called model-in-the-loop simulation (MiL). Both the ECU and the process
are then represented as models, i.e., virtual pictures of the real parts. If some control functions of a real development ECU can already be applied to the real process on a test bench, some new control functions may be tested as prototypes on a special real-time computer operating in parallel to the ECU. This is called rapid control prototyping (RCP). Frequently the new control functions operate in a bypass mode and use the interfaces of the ECU to the sensors and the actuators, see Fig. 13.12. The computing power of the experimental RCP computer exceeds that of the ECU, and it operates with a high-level language. Thus, the new control functions do not have to be implemented in machine code within the limited computing power and fixed-point restrictions of an ECU. This may save considerable development time by trying and testing new functions directly on a higher software level with the real engine. If a development ECU is not available, a powerful real-time computer can be used if the required sensor and actuator interfaces are implemented. This is then called full-pass mode, see [60].
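As a minimal sketch of the MiL principle, the following Python fragment closes the loop between an "ECU model" (a discrete PI control algorithm in floating point) and a "process model" (a hypothetical first-order charging-pressure dynamics). No real hardware is involved, and all gains, time constants, and signal names are invented.

```python
# Model-in-the-loop sketch: controller model + process model, no real hardware.

def process_model(p2, u, dt, K=2.0, T=0.8):
    """Charging pressure p2 [bar] driven by actuator command u (first-order lag)."""
    return p2 + dt * (K * u - p2) / T

def make_pi_controller(Kp=1.5, Ki=4.0, dt=0.01, u_min=0.0, u_max=1.0):
    integral = 0.0
    def control(setpoint, measurement):
        nonlocal integral
        e = setpoint - measurement
        integral += Ki * e * dt
        integral = max(min(integral, u_max), u_min)        # simple anti-windup
        return max(min(Kp * e + integral, u_max), u_min)
    return control

dt, p2, controller = 0.01, 0.0, make_pi_controller()
for k in range(500):                                       # 5 s of simulated time
    setpoint = 1.2 if k >= 100 else 0.8                    # step in desired boost pressure
    u = controller(setpoint, p2)                           # "ECU model"
    p2 = process_model(p2, u, dt)                          # "process model"
print(f"p2 after 5 s: {p2:.3f} bar")
```

In RCP the same control law would instead run on a powerful real-time computer connected in bypass to the real ECU and process.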
Fig. 13.12 Different couplings of process and electronics for a mechatronic design. SiL: software-in-the-loop; RCP: rapid control prototyping; HiL: hardware-in-the-loop [31]
13.7.2 Software-in-the-Loop and Hardware-in-the-Loop Simulation

Existing process models in a high-level language can be used for the validation of software functions as test candidates in an early development phase by software-in-the-loop simulation (SiL), see Fig. 13.12. The software functions may already be implemented with fixed-point or floating-point arithmetic and the required interfaces before they are implemented on the target ECU. Real-time behavior is not required. For a final validation of control functions the target ECU with its interfaces has to operate with real signals. In order not to use the real process on expensive test benches, real-time process models are implemented in a powerful development computer. The sensor signals may be generated by special electronic modules, and the output signals are frequently transferred to real actuators. Thus, the real ECU with its implemented software operates with some real components but with simulated real-time high-performance process models; this is known as hardware-in-the-loop simulation (HiL), see Fig. 13.12. The advantages are that, e.g., software functions can be tested under real-time constraints, validation tests are reproducible and can be automated, critical boundary conditions (high speed and high load) can be realized without being dangerous, the reaction to faults and failures can be investigated, on-board diagnosis functions can be tested, etc.
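One typical SiL activity is checking a fixed-point implementation of a control law against its floating-point reference before it is flashed to the target ECU. The Python sketch below illustrates this under strongly simplifying assumptions; the Q10 scaling, the control law, and the swept error range are invented.

```python
# Software-in-the-loop sketch: compare a fixed-point implementation of a control
# law with its floating-point reference on a simulated input sweep (no real time needed).
SCALE = 2 ** 10          # hypothetical Q10 scaling of the target implementation

def control_float(e):
    return 0.75 * e + 0.1                      # reference law in floating point

def control_fixed(e_raw):
    # target-style integer arithmetic: inputs and outputs are scaled by SCALE
    KP_Q, OFFSET_Q = int(0.75 * SCALE), int(0.1 * SCALE)
    return (KP_Q * e_raw) // SCALE + OFFSET_Q

worst = 0.0
for k in range(-2000, 2001):                   # sweep the expected error range
    e = k / 1000.0                             # -2.0 ... +2.0
    y_ref = control_float(e)
    y_fix = control_fixed(int(round(e * SCALE))) / SCALE
    worst = max(worst, abs(y_ref - y_fix))
print(f"max quantization error over sweep: {worst:.5f}")
```

In an HiL setup the same comparison would be made against the compiled code running on the real ECU, with the process model executed in real time on the simulation computer.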
13.8 Control Software Development
According to the V-development model in Fig. 13.11, the first steps are the control system development and the control functions development in high-end software (e.g., MATLABSimulinkTM and StateflowTM). The next steps are then the control software development and the software implementation on the ECU for series production using special software tools and computers for code generation and testing with compilers and real-time simulation. In the following, some remarks are given briefly for the software architecture for the example of combustion engine control, which may also be applied to mechatronic systems in general. The design of the software architecture has to consider many aspects from software development to the requirements of the target microprocessor and includes connected modules which have to be flexible with regard to continuous changes and variants. Several software layers have to be defined. The minimum is two layers, a platform software and an application software, as shown in Fig. 13.13, [60]. The platform software is oriented to the ECU and comprises the operating system, as well as communications and network management according to OSEK/VDX (2005) standards and diagnostic protocols. OSEK stands for “Open Systems and Interfaces for Automotive Electronics” and is the result of a committee
of major automotive manufacturers and component suppliers to support portability and reusability of application software under real-time constraints, started in 1993. It also contains standardized flash memory programming procedures. The standardization of the platform software is additionally advantageous during the software development with regard to software changes and parametrization. Interfaces for measurement and calibration via CAN protocols support the development phase as well, see [7, 8]. A hardware abstraction layer (HAL) gives access to the peripheral components of the ECU and is specified for the used microprocessors. The application software can be designed by the vehicle manufacturer and contains vehicle-specific functions. Standardization takes place for control functions, ranging from lookup tables and their interpolation to dynamic control algorithms. The standardization is, e.g., treated in the MSRMEGMA working group and ASAM [3]. The configuration of standardized software components allows a specific application by using configuration tools. An automated configuration comprises, e.g., the handling of signals, messages, buses, nodes, and functions. It may contain export and import interfaces with data exchange formats and a documentation interface. More details like data models for engine and vehicle variants, storage in volatile (RAM) or nonvolatile memories (ROM, PROM, EPROM, or flash memory), and description files for data structure can be found, e.g., in [60]. Activities for an open industry standard of the automotive software architecture between suppliers and manufacturers are going on in the AUTOSAR consortium (AUTomotive Open System ARchitecture) since 2003, [4, 22]. One of its aims is an open and standardized automotive software architecture. The standard includes specifications describing software architecture components and defining their interfaces. The AUTOSAR architecture separates the basis software from the application software and connects them by standardized interfaces, see Fig. 13.14. To master the complexity, several layers are defined, see [72]. The connection to the microcomputer is provided by the lowest level, the microcontroller abstraction layer. Here, the interfaces are defined to the memories, the I/O drivers, their communication, and to additional features which are not part of the microcontroller. The second layer is the ECU abstraction layer, comprising the hardware design of the ECU including driver to external components. The service layer at the third level provides basic software modules like the operating system, memory administration, and bus communication. This layer is relatively independent on the ECU hardware. The fourth level is the runtime environment (RTE), which separates the basis software and application software and carries out data exchange in both directions. Therefore, the application software components have standardized interfaces to the RTE. The
Fig. 13.13 Software architecture composed of standardized software components, [60]
and testing. An exchange of information becomes possible through standardized software input and output ports. Thus a validation of the interaction of the SWCs and interfaces is possible before software implementation.
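The following Python sketch illustrates, by analogy only, the idea of software components that communicate exclusively through named ports of a runtime environment, so that they can be validated and relinked without code changes. It is not AUTOSAR code; all class and port names are invented.

```python
# Sketch of the idea behind SWCs wired by a runtime environment (RTE):
# components talk only to named ports, so they can be relinked or moved to
# another ECU without touching their code. Purely illustrative, not AUTOSAR.

class RTE:
    def __init__(self):
        self.signals = {}                        # port name -> last written value
    def write(self, port, value):
        self.signals[port] = value
    def read(self, port, default=0.0):
        return self.signals.get(port, default)

class SensorSWC:
    def __init__(self, rte): self.rte = rte
    def run(self, raw_voltage):
        self.rte.write("PedalPosition", raw_voltage / 5.0)    # normalize to 0..1

class ControlSWC:
    def __init__(self, rte): self.rte = rte
    def run(self):
        pedal = self.rte.read("PedalPosition")
        self.rte.write("ThrottleCmd", min(1.0, 1.2 * pedal))  # toy control law

class ActuatorSWC:
    def __init__(self, rte): self.rte = rte
    def run(self):
        return self.rte.read("ThrottleCmd")                   # would drive the output stage

rte = RTE()
sensor, ctrl, act = SensorSWC(rte), ControlSWC(rte), ActuatorSWC(rte)
sensor.run(raw_voltage=2.5)    # one scheduling cycle
ctrl.run()
print("throttle command:", act.run())
```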
Fig. 13.14 AUTOSAR layer structure of automotive software, [72]
RTE also integrates the application software components (SWCs), see Fig. 13.15. This separation and integration with standardized interfaces enables a hardware-independent software development. The application software components can therefore be transferred to other ECUs and reused. A virtual function bus (VFB) connects the various software components during the design and allows a configuration independent of a specific hardware. Thus the SWCs are runnable entities and can be linked together for development
13.9 Internet of Things for Mechatronics
The Internet of Things (IoT) covers a network of physical objects with embedded sensors, microcomputers, and communication links to connect and exchange data with other devices and systems over the Internet. The applications of IoT devices comprise many areas like, for example, consumers (smart home, wearable equipment, medical, and health care), industry (manufacturing, technical processes, and energy systems), and infrastructure (buildings, bridges, roadways, and environment), see, e.g., [1, 18, 55]. IoT devices may use several networks to communicate by wireless or wired technologies. Their addresses are, e.g., based on radio frequency identification (RFID) tags (an electromechanic electronic product code) or on an IP address for linking with the Internet or the use of an Internet protocol. For wireless communication, Bluetooth, Wi-Fi, LTE, or 5G
Fig. 13.15 AUTOSAR software architecture components and interfaces (RTE: runtime environment), [41]
are possible, and for wired communication Ethernet or power line communication (power and data), see [36, 62]. In the frame of mechatronic systems the principle of IoT may be applied for mechatronic machine components (special suspensions and bearings), motion generators (servo drives, robots), power-producing machines (electrical drives and combustion engines), power-consuming machines (machine tools and pumps), mechatronic components of vehicles (automatic gears, ABS, and EPS), and mechatronic trains (bogie drives), see Fig. 13.5. The additional functions enabled by communication via the Internet are supervision and monitoring of the status for fault detection and maintenance, adaptation of control algorithms, teleoperation, and statistics for the manufacturer. This holds especially for parts subject to wear and aging, like actuators, drives, and tools. The data obtained from sensors and actuators are treated locally if fast decisions have to be made, at higher levels, like an edge gateway, if more computations are required, or via the cloud for, e.g., teleservice.
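How such data may be split between a fast local reaction, aggregation on an edge gateway, and cloud-based teleservice can be sketched as follows. The vibration threshold, the message format, and the print-based "uplink" are placeholders; a real device would use a protocol stack such as MQTT over Wi-Fi or 5G.

```python
import json, random

VIBRATION_TRIP = 12.0       # hypothetical limit for an immediate local reaction [m/s^2]

def local_fast_decision(sample):
    """Runs on the machine itself: react within the control cycle."""
    return "SHUTDOWN" if sample["vibration"] > VIBRATION_TRIP else "OK"

def edge_aggregate(buffer):
    """Runs on an edge gateway: condense raw samples for fault detection/maintenance."""
    vib = [s["vibration"] for s in buffer]
    return {"t": buffer[-1]["t"], "vib_mean": sum(vib) / len(vib), "vib_max": max(vib)}

def send_to_cloud(report):
    """Placeholder for the real uplink (e.g., an MQTT publish) used for teleservice/statistics."""
    print("cloud <-", json.dumps(report))

buffer = []
for k in range(20):                                    # simulated sensor readings
    sample = {"t": k, "vibration": random.uniform(2.0, 9.0)}
    if local_fast_decision(sample) == "SHUTDOWN":
        print("local trip at t =", sample["t"])
    buffer.append(sample)
    if len(buffer) == 10:                              # batch to the edge every 10 samples
        send_to_cloud(edge_aggregate(buffer))
        buffer.clear()
```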
13.10 Mechatronic Developments for Vehicles

Many automotive developments in the last three decades have been possible through an increasing number of mechatronic
components in the power train and the chassis. Figure 13.16 gives some examples for engines, drive trains, suspensions, brakes, and steering systems. This development has a considerable influence on the design and operation of the power train, consisting of the combustion engine, the drive train, and the chassis with suspension, steering, and braking systems. In the case of hybrid drives this also includes the electrical motor and the battery. The mechatronic components replace formerly pure mechanical, hydraulic, or pneumatic parts and use sensors with electrical outputs, actuators with electrical inputs, and digital electronics for control. The available electrical sensor measurements open access to internal functions and thus enable new possibilities not only for control but also for fault detection and diagnosis. The development of sensors, actuators, and electronic control for automobiles is depicted in Fig. 13.17. The first mechatronic components for the control of vehicles were wheel speed sensors and electrohydraulic switching valves for the brake system (1979), the electrical throttle (1986), and semi-active shock absorbers (1988). These basic mechatronic components then allowed the development of control systems for antilock braking (ABS, 1979), automatic traction control (TCS, 1986), electronic stability control (ESC, 1995), and active body control (ABC, 1999). The next development steps were electric power steering (EPS, 1996) and active front steering (AFS, 2003). These mechatronic systems served mainly to
(Content of Fig. 13.16) Mechatronic vehicle components:
• Mechatronic combustion engines: electric throttle, mechatronic fuel injection, mechatronic valve trains, variable geometry turbocharger (VGT), emission control, evaporative emission control, electrical pumps and fans
• Mechatronic drive trains: automatic hydrodynamic transmission, automatic mechanic shift transmission, continuously variable transmission (CVT), automatic traction control (ATC), automatic speed and distance control (ACC)
• Hybrid/electric drives: asynchronous and synchronous electric motors; parallel, series, and power-split hybrid drives; Li-ion and 48-V batteries
• Mechatronic suspensions: semi-active shock absorbers, active hydraulic suspension (ABC), active pneumatic suspension, active anti-roll bars (dynamic drive control (DDC) or roll control)
• Mechatronic brakes: hydraulic antilock braking (ABS), electronic stability control (ESC), electro-hydraulic brake (EHB), electromechanical brake (EMB), electrical parking brake
• Mechatronic steering: parameter-controlled power-assisted steering, electromechanical power-assisted steering (EPS), active front steering (AFS)
Fig. 13.16 Mechatronic components and systems for automobiles and engines [32]
(Content of Fig. 13.17) Sensors: wheel speed, pedal position, yaw rate, lateral and longitudinal acceleration, steering angle, suspension deflection, wheel acceleration, steering torque, roll angle, distance (radar), brake pressure. Driver-assistance systems and mechatronic components: antilock brakes (ABS, 1979), traction control (TCS, 1986), electronic stability control (ESC, 1995), brake assist (BA, 1996), electrical power steering (EPS, 1996), electronic air suspension control (EAS, 1998), adaptive cruise control (ACC, 1999), active body control (ABC, 1999), electro-hydraulic brake (EHB, 2001), continuous damping control (CDC, 2002), dynamic drive control (DDC, 2003), parking assistance (2003), lane keeping assist (LKA, 2007), anti-collision avoidance (20xx), highly automated driving (20xx). Actuators: hydraulic pump, electronic throttle valves, electro-pneumatic brake booster, magnetic proportional valves, magnet switching valves, electro-hydraulic absorbers, hydraulic pump with accumulator, electro-hydraulic or electro-motoric stabiliser, electrical superposition angle actuator.
Fig. 13.17 Examples for the introduction of sensors, actuators, and electronic control systems for automobiles [32]
increase safety and comfort and required many new sensors with electrical outputs, actuators with electrical inputs, and micro-controller-based electronic control units. Because they support the driver in performing driving maneuvers, they are driver assistance systems.
Parallel to the increase of electronic control functions for the chassis the engines and drive trains have shown a similar development. This has to be seen together with the improvements of the combustion, fuel consumption and
emission reductions, as well as hybrid and electrical drives, see, e.g., [8, 56, 57]. In the following mechatronic engineering for automobiles is illustrated by considering the developments of brake and steering systems.
13.11 Mechatronic Brake Systems
In the frame of active vehicle safety, the optimal function of brake systems has one of the highest priorities. In the long history of the development of mechanical, hydraulic, and pneumatic brake systems, the improvements of the braking performance by electronic control since the introduction of ABS brake control around 1978 play a dominant role for automotive control.
13.11.1 Hydraulic Brake System

Service brake systems for passenger cars and light utility vehicles usually consist of two independent hydraulic circuits. They are configured either diagonally, i.e., the right front wheel and the left rear wheel belong to one circuit, or in parallel, where the two front wheels and the two rear wheels each form one circuit. The brake pedal acts on a dual master cylinder and the brake booster, which amplifies the pedal force, e.g., by a vacuum-operated booster with a factor of 4–10, and acts on the brake master cylinder. There, the force is converted into hydraulic pressure of up to 120–180 bar. A hydraulic unit carries the solenoid valves and the ECU for the brake control system (ABS, ESC). The pressurized brake fluid transmits the brake pressure through brake lines and flexible brake hoses to the wheel brake cylinders. The pistons of the brake cylinders then generate the force to press the brake pads against the disc or drum brake. Brake force boosters use auxiliary energy to amplify the pedal force, stemming either from air vacuum or from hydraulic pressure. Vacuum brake boosters use the (variable) vacuum in the intake manifold of gasoline engines or from special vacuum pumps in the case of diesel engines.
13.11.2 Antilock Control with Switching Valves (ABS)

In order to avoid locking wheels during strong braking, antilock braking systems (ABS) are installed. They ensure that the wheels rotate with a certain angular speed close to the optimal slip, with the goals of keeping the vehicle's steerability and stability and of shortening braking distances. A hydraulic-electronic control unit (HECU) consists of a central hydraulics block with valves, an electric-driven pump
(Legend to Fig. 13.18) Hydraulic control unit (HCU): 1 brake master cylinder, 2 wheel brake cylinder, 3 inlet valve, 4 outlet valve, 5 intermediate accumulator, 6 return pump, 7 pulsation damper, 8 flow restrictor
Fig. 13.18 Hydraulic brake system with magnetic 2/2-way valves for ABS-braking
(90 W to 220 W), see Fig. 13.18, and a coil carrier with an electronic control unit (two microcontrollers, ∼20 MHz, ROM with 256 kB to 1 MB). The coil carrier is connected to the hydraulics unit with a magnetic connector. An inlet valve and an outlet valve (both 2/2-way) within the hydraulic block enable the brake pressure to be modulated for each wheel. The inlet valve is normally open and the outlet valve is normally closed. Based on the wheel speed sensor, the wheel's acceleration is determined. If this acceleration shows a sharply negative value during braking, i.e., a strong deceleration, the braking pressure in the wheel brake cylinder has to be reduced in order to avoid locking of the wheel, even though the driver applies the brake pedal for strong braking. Therefore, first the inlet valve is closed and the brake pressure remains constant. If the deceleration continues, the outlet valve is opened, the braking pressure becomes smaller, and the wheel slip decreases, thus avoiding complete locking of the wheel. By determining the wheel speed rate and the tire slip, a sequence of valve commands is generated, resulting in an optimal range of brake slip values, see [8, 32, 70]. Figure 13.19 shows a hydraulic system for four wheels. The ABS control unit comprises, for each of the two separated brake circuits, six magnetic valves, one DC-driven return pump, a damper chamber, and a low-pressure accumulator. Together with the tandem master cylinder and the pneumatic brake booster, the brake system forms a hydraulic mechatronic system, where the electrohydraulic components are integrated together with the ECU in one ABS block.
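A strongly simplified sketch of this inlet/outlet-valve logic is given below. Production ABS controllers use far more elaborate wheel-speed processing and slip estimation; the thresholds here are invented for illustration.

```python
def abs_valve_command(wheel_decel, brake_slip,
                      decel_limit=-15.0, slip_low=0.1, slip_high=0.3):
    """
    Decide the 2/2-way valve states for one wheel (illustrative thresholds).
    wheel_decel: wheel circumferential acceleration [m/s^2] (negative = decelerating)
    brake_slip:  (v_vehicle - v_wheel) / v_vehicle, between 0 and 1
    Returns (inlet_open, outlet_open): build / hold / release pressure.
    """
    if wheel_decel < decel_limit and brake_slip > slip_high:
        return (False, True)    # wheel about to lock: keep inlet closed, release pressure
    if wheel_decel < decel_limit or brake_slip > slip_high:
        return (False, False)   # tendency to lock: hold pressure constant
    if brake_slip < slip_low:
        return (True, False)    # too little slip: build up pressure again
    return (False, False)       # inside the optimal slip band: hold

# Examples: strong deceleration with high slip -> release; low slip -> build up
print(abs_valve_command(wheel_decel=-25.0, brake_slip=0.4))   # (False, True)
print(abs_valve_command(wheel_decel=-5.0,  brake_slip=0.05))  # (True, False)
```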
(Legend to Fig. 13.19) Vacuum brake booster, tandem master cylinder, push rod, pressure chamber, intermediate piston, floating chamber, isolating valve, changeover valve, pressure-control valve, DC motor, return pump, damper chamber, throttling point, inlet valve, outlet valve, non-return valve, low-pressure accumulator; FL front left, FR front right, RL rear left, RR rear right; floating circuit (F), pressure circuit (P); MK 60 hydraulic unit, Continental Teves AG
Fig. 13.19 Hydraulic brake system for antilock braking of a passenger car in a diagonal configuration, [32]
The brake fluid leaving the brake circuit is stored in an intermediate accumulator and then pumped back by the return pump to the brake master cylinder.
13.11.3 Electromechanical Brake Booster

The conventional vacuum brake booster requires a vacuum source, either from the manifold of gasoline engines or an extra vacuum pump for diesel engines. Additionally, the required packaging space is relatively large. With the goal
of an integrated mechatronic brake force generator, an electromechanical brake booster (i-booster) has been developed, which uses the dual master cylinder and an electrical motor with a gear to amplify the pedal force and has been used in series production since 2013. The advantages are faster brake pressure generation, better use of electrical inputs for driver assistance systems, redundancy for automatic driving, and suitability for electrical and hybrid drives. The generation of the supporting brake force is performed, e.g., by a permanent-magnet synchronous motor (PMSM) followed by a two-step reduction gear and a worm gear, with a power of about 300 W, see [8].
Fig. 13.20 Signal flow for an electromechanical brake booster acting on the master piston of a dual master cylinder with difference position control. Please note that only the master piston for brake circuit I is considered [32]
A schematic illustration of the signal flow is depicted in Fig. 13.20. The generation of the supporting braking force is based on measuring the travel of the pedal push rod and of the electromotorically driven rack. The controlled variable is the difference of the two travel distances, with the goal of bringing it to zero. An alternative is to measure the difference of the two travel distances directly. This electromechanical brake booster is more precise and faster than a vacuum booster. Because the braking force is actuated electrically, it allows direct blending with recuperative braking by generators or electrical drives and direct use for automatic driving.
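A minimal sketch of the difference-position control of Fig. 13.20, in which a PI controller drives the difference between pedal push-rod travel and motor-driven rack travel to zero, is given below; the first-order motor/gear dynamics and all gains are invented.

```python
# Difference-position control sketch for an electromechanical brake booster:
# the motor-driven rack is servoed so that its travel follows the pedal push rod
# (difference dz -> 0). First-order rack dynamics and all gains are illustrative.

def simulate_booster(t_end=1.0, dt=0.001, Kp=60.0, Ki=200.0, T_motor=0.02):
    z_rack, v_rack, integral, z_pedal = 0.0, 0.0, 0.0, 0.0
    for k in range(int(t_end / dt)):
        t = k * dt
        z_pedal = 5.0 if t >= 0.1 else 0.0        # pedal push-rod travel [mm], step input
        dz = z_pedal - z_rack                     # difference to be driven to zero
        integral += Ki * dz * dt
        v_cmd = Kp * dz + integral                # rack velocity command [mm/s]
        v_rack += dt * (v_cmd - v_rack) / T_motor # motor/gear modeled as first-order lag
        z_rack += dt * v_rack
    return z_pedal - z_rack

print(f"remaining travel difference after 1 s: {simulate_booster():.4f} mm")
```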
13.11.4 Electrohydraulic Brake System (EHB)

A further mechatronic development is the electrohydraulic brake system (EHB), where the brake forces for each wheel are individually controlled electronically by continuously operating valves. This allows ABS, TCS, and ESC functions to be integrated directly in one compact mechatronic braking unit. The EHB came on the market in 2001 and consists of the following components:
• Actuation unit with dual master cylinder, pedal travel force simulator, backup travel sensor, and brake fluid reservoir
• Hydraulic control unit (HCU) with motor, three piston pumps, high-pressure accumulator, eight continuously operating electromechanical valves, two isolation valves, two balancing valves, and six pressure sensors
• Electronic control unit (ECU) with two microprocessors
The brake pedal force simulator generates a suitable pedal travel/force characteristic with damping. The pedal travel is measured by two separate angle position sensors. Additionally, the brake pressure of the brake master cylinder is measured, such that a threefold redundancy of the driver's brake command exists. In normal driving mode the electrically driven hydraulic three-piston pump charges the high-pressure metal diaphragm accumulator, leading to a pressure between 90 and 180 bar, which is measured by an accumulator pressure sensor. The isolating valves are closed if the brake pedal is activated and separate the pressure in the master cylinder from the brake calipers. The braking demand of the driver, measured by the pedal travel sensors, is then transferred to the ECU, which controls the brake pressures individually for each wheel "by wire," see [37]. The software of the ECU is characterized by a modular structure with different levels, from basic functions through yaw control, traction control (TCS), and antilock control (ABS) to electronic stability control (ESC), see [59].
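The threefold-redundant measurement of the braking demand lends itself to a simple plausibility or voting scheme. The sketch below shows a 2-out-of-3-style check with an invented tolerance; production monitoring concepts are considerably more refined.

```python
def vote_brake_demand(angle_1, angle_2, pressure_based, tol=0.05):
    """
    2-out-of-3 style plausibility check on the driver's brake demand.
    All three inputs are assumed to be normalized to 0..1; tol is an invented tolerance.
    Returns (demand, degraded_flag).
    """
    candidates = [angle_1, angle_2, pressure_based]
    # collect pairs of channels that agree within the tolerance
    agreeing = [(a, b) for i, a in enumerate(candidates)
                       for b in candidates[i + 1:] if abs(a - b) <= tol]
    if agreeing:
        a, b = agreeing[0]
        return (a + b) / 2.0, len(agreeing) < 3   # degraded if one channel deviates
    return 0.0, True                               # no agreement: fall back and warn

print(vote_brake_demand(0.42, 0.43, 0.41))   # all channels consistent
print(vote_brake_demand(0.42, 0.43, 0.90))   # pressure channel implausible
```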
13.11.5 Electromechanical Brake (EMB)

The number of registered cars equipped with modern electronic driver-assisting brake control systems such as antilock braking (ABS), traction control system (TCS), and electronic stability control (ESC) has increased steadily. However, to embed such functionality in conventional hydraulic brake systems, a large number of electrohydraulic components is required. In recent years, the automotive industry and its suppliers have therefore started to develop
brake-by-wire systems as typical mechatronic solutions. There are two concepts favored, the electrohydraulic system (EHB) and the fully electromechanical system (EMB). The electrohydraulic brake-by-wire system still uses brake fluid and conventional brake actuators but proportional valves. However, with regard to the disadvantages of the electrohydraulic system (brake fluid, brake lines, proportional valves, etc.) the fully electromechanical brake system is a promising concept. An electromechanically actuated brake system provides an ideal basis for converting electrical command signals into clamping forces at the brakes. Standard and advanced braking functions can then be realized with uniform hardware and software. The software modules of the control unit and the sensor equipment determine the functionality of the brake-by-wire system. The reduction of vehicle hardware and entire system weight are not the only motivational factors contributing to the development of a fully mechatronic brake-by-wire system. The system is environmentally friendly due to the lack of brake fluid and requires little maintenance (only pads and discs). Its decoupled brake pedal can be mounted in a crash-compatible and space-saving manner in the passenger compartment. There are no restraints to the design of the pedal characteristics, so ergonomic and safety aspects can be directly included. This “plug and play” concept with a minimized number of parts also reduces production and logistics costs. A wheel brake module consists of an electromechanical brake with servo-amplifier and microcontroller unit, see Fig. 13.21. The microcontroller performs the clamping force and position control as well as the clearance management, the communication, and the supervision of the wheel module. To drive the DC brushless motor of the wheel brake, a compact and powerful servo-amplifier is used. It processes the motor rotor position, which is provided by a resolver, for electronic commutation. Additionally, the servo-amplifier
is equipped with two analog controllers: a motor current controller and a rotor velocity controller. The design of this mechatronic brake, consisting of an electromechanical converter, gear, friction brake, and sensors, is mainly driven by the demand for minimized space and lightweight construction. A gear set consisting of the motor-driven planetary gear and a spindle generates the clamping force.
13.12 Mechatronic Steering Systems

Except for very small cars, electrical power-assisted steering systems (EPS) are standard for small to larger passenger cars, and hydraulic power-assisted steering systems (HPS) for large passenger cars and for lightweight and heavy commercial vehicles. These power-assisted steering systems amplify the driver's muscular forces by adding steering forces generated from hydraulic or electrical auxiliary energy. Some development steps for power steering systems are shown in Fig. 13.22. In addition to the mechanical steering system, a torque sensor in the steering column measures the manual torque M_H of the driver via a torsion bar generating a small twist angle δ_M. In the case of a hydraulic actuator this value directly changes an orifice valve to influence the hydraulic flows, or it is the electrical output of a torque sensor in the case of electrical power steering. Based on this torque measurement the hydraulic or electrical power actuator is feedforward controlled via a calibrated characteristic curve U_PA = f(δ_M), which determines the actuator force depending on the steering angle δ_H and the vehicle speed v_X. Many different constructions exist for both hydraulic and electric power steering systems, see, e.g., [8, 23, 53].
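Such a calibrated characteristic can be pictured as a speed-dependent lookup map with interpolation. The following Python sketch shows the principle with invented breakpoints and assist values; real maps are calibrated on the vehicle.

```python
# Speed-dependent assist characteristic (feedforward map) for power steering.
# Breakpoints and gains are invented; real maps are calibrated on the vehicle.

TORQUE_BP = [0.0, 1.0, 2.0, 4.0, 8.0]             # driver torque M_H [Nm]
SPEED_ROWS = {                                     # assist torque rows per vehicle speed [km/h]
      0.0: [0.0, 2.0, 5.0, 12.0, 20.0],            # parking: strong assistance
     50.0: [0.0, 1.0, 3.0,  7.0, 12.0],
    150.0: [0.0, 0.3, 1.0,  2.5,  4.0],            # high speed: little assistance
}

def interp1(x, xs, ys):
    if x <= xs[0]:  return ys[0]
    if x >= xs[-1]: return ys[-1]
    for i in range(len(xs) - 1):
        if xs[i] <= x <= xs[i + 1]:
            w = (x - xs[i]) / (xs[i + 1] - xs[i])
            return ys[i] + w * (ys[i + 1] - ys[i])

def assist_torque(m_driver, v_kmh):
    speeds = sorted(SPEED_ROWS)
    rows = [interp1(abs(m_driver), TORQUE_BP, SPEED_ROWS[v]) for v in speeds]
    value = interp1(v_kmh, speeds, rows)           # interpolate over torque, then speed
    return value if m_driver >= 0 else -value      # assist acts in the driver's direction

print(assist_torque(3.0, 10.0))    # low speed  -> large assist torque
print(assist_torque(3.0, 120.0))   # high speed -> small assist torque
```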
Fig. 13.21 Block diagram of an electromechanical wheel brake module [32]
(Content of Fig. 13.22) Hydraulic power steering (HPS), electrical power steering (EPS), electrical power-assisted steering (HPS + EPS), active front steering (AFS), steer-by-wire (SbW); hydraulic and electrical realizations.
Fig. 13.22 On the development of different power steering systems [32]
13.12.1 Electrical Power Steering (EPS)

An electrical power steering system (EPS) uses an electric motor powered by the electrical board net to support the manual torque of the driver through a mechanical transmission. Though the energy density of electrical systems is less than that of hydraulic systems, so that more package space may be required around the actuator, they offer many advantages and have therefore penetrated passenger cars since about 1988, from the A segment to heavy vehicles, with electrical power from 150 to 1000 W. This partially required a 24-V power supply. The upcoming 48-V voltage level will further increase their application rate. The advantages of EPS compared to HPS are mainly:
• Reduction of fuel consumption
• Elimination of the engine-driven oil pump, flexible hoses, hydraulic cylinder, and many seals
• Complete system modules delivered as a single assembly, as ready and tested units
• An EPS electronic control unit (ECU) together with a steering angle sensor allows precise position and torque control
• The ECU enables the use of additional inputs from chassis sensors and other ECUs via bus systems
• Enhanced steering functions like electrical damping, active return of the steering wheel to straight-line driving, and compensation of disturbance torques from the road or side wind by control software
• Addition of driver assistance functions like automated parking, lane keeping, and trailer backup assist, and support for automated driving
Thus EPS systems show some typical advantages of a mechatronic design: compactness, integration of mechanics, electrics, and electronics, integration by software, precision through digital feedback control, and autonomous units.
Examples of EPS systems are shown in Fig. 13.23. The signal of a torque sensor in the steering shaft, as a measure of the driver's manual torque, is transmitted to a specific ECU, which, together with other measurements like the vehicle speed, determines the assisting torque of the electric motor for the power steering.
13.12.2 Basic Designs of EPS Systems

Depending on the arrangement of the electrical motor at the column or at the rack with different gears, the following EPS types can be distinguished; see, e.g., [53, 58] and Fig. 13.23.

Column-type EPS systems, consisting of the torque sensor, the electric motor, and a reduction gear, are arranged directly at the steering column inside the vehicle. The ECU is installed separately or attached to the motor or torque sensor. The reduction gear can be a worm gear, allowing sufficient back-turning capability. This type is usually applied for light vehicles and uses the existing rack-and-pinion steering system.

Pinion-type EPS (single pinion) may also use the existing rack-and-pinion system. The electric motor with ECU is connected to the pinion by a worm gear. This enables a more direct connection to the rack, but the unit undergoes harder environmental influences (temperature and splash water).

Dual-pinion-type EPS acts with a second pinion on an additional rack, e.g., by a worm gear, and allows more freedom for the integration in the vehicle. It also contains a gear set and the ECU.

Axis-parallel-type EPS (APA) has the electric motor arranged parallel to the rack and transmits the rotary movement into a straight rack movement by a ball screw, which is driven by a tooth belt and the electric motor. The transfer of forces from the ball nut to the rack is performed by a chain of hardened steel balls with ball screw threads. The gear set operates without backlash. The advantages are high precision
Fig. 13.24 Axis-parallel EPS with PMSM electric motor, tooth belt, recirculation ball gear, and rack spindle. (Source: [74], Imax = 80 A, Pel,max = 960 W, Frack,max = 12,000 N)
and suitability also for heavier vehicles. Figure 13.24 depicts an example. Axis-parallel rack-type EPS (RC: rack concentric) has the ball nut of the ball screw without additional gear directly operating on the rack. The electrical motor has then a hollow shaft where the rack ball screw passes through. This very direct, stiff design requires a high torque motor and precise control. The generated rack forces range from about 6000 N for the column-type to 15,000 N for the axis-parallel-type EPS, with electrical motors power from 200–1000 W.
13.12.3 Fault-Tolerant EPS Structures
Lore Fig. 13.23 Different types of electrical power steering systems: (a) column type (b) pinion type (c) dual pinion type (d) axis-parallel type (e) rack-centered axis-parallel type [32]
In the case of faults and even failures of the electrical power steering system, the driver can take over the required steering torque for small and medium cars. However, for larger cars and light commercial vehicles the EPS should be fail-operational with regard to failures. For automatic driving the driver is out of the loop. Automatic closed-loop systems usually cover smaller additive and parametric faults of the controlled process. However, if the faults become larger, sluggish, less damped, or even unstable behavior may result. The disengaged driver may then not perform the right steering command in time. Therefore, EPS systems for automatically controlled driving will be required to have fail-operational functions in the case of certain faults, i.e., they have to be fault tolerant. For most components of the steering actuator system, hardware redundancy is required in the form of dynamic redundancy with hot or cold standby, see [28]. This means that a selection of components has to be duplicated. Figure 13.25 depicts redundancy structures with different degrees of redundancy.
(Content of Fig. 13.25) Redundancy structures (a)–(e) with increasing degree of redundancy; redundant components per structure: (a) none; (b) sensors, ECU, and inverter duplicated (with switch); (c) sensors, ECU, and inverter duplicated plus a multiphase (six-phase) motor; (d) sensors, ECU, and inverter duplicated with two three-phase motors in series; (e) complete duplication including two three-phase motors in parallel and duplicated gears.
Fig. 13.25 Redundancy structures for EPS systems [32]
At first, the torque sensor can be duplicated, or an analytical redundancy concept can be programmed. In a next step the torque sensor, the ECU, and the inverter are duplicated. The arrangement can be cold or hot standby. In the case of a fault in one channel, the other channel stays active and the faulty channel is switched off. Case C in Fig. 13.25 has a further redundancy in the windings through a multiphase configuration, see [20, 73]. A serial connection of two motors is shown in Fig. 13.25d. Figure 13.25e depicts a duplication of the complete EPS actuator. In this parallel arrangement the gear is also duplicated. The degree of redundancy increases from case A to E, however at the cost of hardware extent, installation space, cost, and weight. The selection of the redundancies also depends on fault statistics for the different components. Case C seems to be a reasonable compromise, but requires a special motor design. In the case of a winding fault the power is reduced. In cases D and E the power can be distributed differently to the two motors in the normal operating range.
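How a duplicated hot-standby channel might be supervised and switched over can be sketched as follows; the "fault detection" is reduced to a trivial check, and every class name, gain, and threshold is invented for illustration.

```python
class EPSChannel:
    """One sensor/ECU/inverter channel of a redundant EPS actuator (illustrative)."""
    def __init__(self, name):
        self.name, self.healthy = name, True
    def torque_request(self, driver_torque):
        if not self.healthy:
            raise RuntimeError(f"{self.name} failed")
        return 2.5 * driver_torque                  # toy assist law

class RedundantEPS:
    def __init__(self):
        self.active, self.standby = EPSChannel("channel A"), EPSChannel("channel B")
    def step(self, driver_torque):
        try:
            u = self.active.torque_request(driver_torque)
        except RuntimeError:
            # fail-operational switch-over to the hot-standby channel
            self.active, self.standby = self.standby, self.active
            u = self.active.torque_request(driver_torque)
        if abs(u) > 50.0:                           # trivial plausibility check
            self.active.healthy = False
        return u

eps = RedundantEPS()
print(eps.step(3.0))            # normal operation on channel A
eps.active.healthy = False      # inject a fault into the active channel
print(eps.step(3.0))            # seamless takeover by channel B
```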
13.13 Conclusion and Emerging Trends

This chapter could only give a brief overview of the design of mechatronic systems. As outlined, mechatronic systems cover a very broad area of engineering disciplines. Advanced mechatronic components and systems are realized in many products such as automobiles, combustion engines, aircraft, electrical drives, actuators, robots, and precision mechanics and micromechanics. However, the integration aspects of
mechanics and electronics include increasingly more components and systems in the wide areas of mechanical and electrical engineering. The design of mechatronic systems follows a V-model and includes modeling, HiL and SiL simulation, integration aspects, prototyping, and testing. Further developments add communication links to higher levels in the frame of the Internet of Things, teleservice, and cloud computing. As the development toward the integration of computer-based information processing into products and their manufacturing comprises large areas of engineering, suitable education and training in modern engineering is fundamental for technological progress. This means, among other things, taking multidisciplinary aspects and method-oriented procedures into account. The development of curricula for mechatronics as a proper combination of electrical and mechanical engineering and computer science during the last decades shows this tendency. For additional content on education and training for automation see Chs. 32, 63, and 68; on fault tolerance, error, conflict, and disruption prevention, Ch. 22; and on CAD/CAE of automation, Ch. 28. On the design of human-machine and human-automation interfaces, see Chs. 19 and 20.
References 1. Acharjya, D.P., Geetha, M.K. (eds.): Internet of Things: Novel Advances and Envisioned Applications. Springer, Berlin (2017)
312 2. AIM – IEEE/ASME Conference on Advanced Intelligent Mechatronics. Atlanta (1999), Como (2001), Kobe (2003), Monterey (2005), Zürich (2007). Montreal (2010), Budapest (2011), Munich (2017), Hongkong (2019), virtual (2021) 3. ASAM:(Association for Standardization of Automation and Measuring Systems). http://www.asam.net 4. AUTOSAR: Automotive Open System Architecture. www.autosar.org (2012) 5. Banerjee, P.P.: Virtual reality and automation. Chapter 15. In: Nof, S.Y. (ed.) Springer Handbook of Automation, pp. 269–278. Springer, Dordrecht (2009) 6. Bishop, C.: The Mechatronics Handbook. CRC, Boca Raton (2002) 7. Borgeest, K.: Elektronik in der Fahrzeugtechnik. Vieweg, Wiesbaden (2008) 8. Bosch, R.: Automotive Handbook, 10th edn. Wiley, Chichester (2018) 9. Bradley, D., Dawson, D., Burd, D., Loader, A.: MechatronicsElectronics in Products and Processes. Chapman Hall, London (1991) 10. Breuer, B., Bill, K.H.: Bremsenhandbuch (Handbook of Brakes), 2nd edn. Vieweg, Wiesbaden (2006) in German 11. Chen, J., Patton, R.J.: Robust Model-Based Fault Diagnosis for Dynamic Systems. Kluwer, Boston (1999) 12. DUIS: Mechatronics and robotics. In: Hiller, M., Fink, B. (eds.) 2nd Conf., Duisburg/Moers, Sept 27–29. IMECH, Moers (1993) 13. Elmqvist, H.: Object-Oriented Modeling and Automatic Formula Manipulation in Dymola. Scand. Simul. Soc. SIMS, Kongsberg (1993) 14. Gausemeier, J., Moehringer, S.: New guideline 2206: a flexible procedure model for the design of mechatronic systems. In: Proceedings of ICED 03, 14th International Conference on Engineering Design, Stockholm (2003) 15. Gausemeier, J., Brexel, D., Frank, T., Humpert, A.: Integrated product development. In: 3rd Conf. Mechatron. Robot. Teubner, Paderborn/Stuttgart (1996) 16. Gertler, J.: Fault Detection and Diagnosis in Engineering Systems. Marcel Dekker, New York (1998) 17. Goodall, R., Kortüm, W.: Mechatronics developments for railway vehicles of the future. In: IFAC Conf. Mechatron. Syst. Elsevier, Darmstadt/London (2000) 18. Greengard, S.: The Internet of Things. MIT-Press (2015) 19. Harashima, F., Tomizuka, M.: Mechatronics – “what it is, why and how?”. IEEE/ASME Trans. Mechatron. 1, 1–2 (1996) 20. Hayashi, J.: Road Map of the Motor for an Electric Power Steering System. 4th ATZ-Konferenz chassis.tech plus, München (2013) 21. Heimann, B., Gerth, W., Popp, K.: Mechatronik (Mechatronics). Fachbuchverlag Leipzig, Leipzig (2001) in German 22. Heinecke, H., Schnelle, K.P., Fennel, H., Bortolazzi, J., Lundh, L., Leflour, J., Maté, J.L., Nishikawa, K., Scharnhorst, T.: AUTomotive Open System ARchitecture - an industry-wide initiative to manage the complexity of emerging automotive E/Earchitectures. In: SAE 2004 Convergence, pp. 325–332 23. Heissing, B., Ersoy, M. (eds.): Chassis Handbook. Vieweg, Teubner, Wiesbaden (2011) 24. IEEE/ASME Trans. Mechatron. 1(1). IEEE, Piscataway (1996), (scope) 25. IFAC-Symposium on Mechatronic Systems: Darmstadt (2000), Berkeley (2002), Sydney (2004), Heidelberg (2006), Boston (2010), Hangzhou (2013) Loughborough (2016), Vienna (2019). Elsevier, Oxford (2000–2019) 26. Isermann, R. (ed.): IMES. Integrated Mechanical Electronic Systems Conference (in German) TU Darmstadt, March 2–3, Fortschr.Ber. VDI Series 12, 179. VDI, Düsseldorf (1993) 27. Isermann, R.: Modeling and design methodology of mechatronic systems. IEEE/ASME Trans. Mechatron. 1, 16–28 (1996)
R. Isermann 28. Isermann, R.: Mechatronic Systems. Springer, Berlin (2003) German editions: 1999, 2003, 2005 29. Isermann, R.: Fault-Diagnosis Systems – An Introduction from Fault Detection to Fault Tolerance. Springer, Berlin/Heidelberg (2006) 30. Isermann, R.: Mechatronic systems – innovative products with embedded control. Control. Eng. Pract. 16, 14–29 (2008) 31. Isermann, R.: Engine Modeling and Control. Spinger, Berlin (2014) 32. Isermann, R.: Automotive Control Systems. Springer, Heidelberg (2021) 33. Isermann, R., Breuer, B., Hartnagel, H. (eds.): Mechatronische Systeme für den Maschinenbau (Mechatronic Systems for Mechanical Engineering). Wiley, Weinheim (2002) results of the special research project 241 IMES in German 34. James, J., Cellier, F., Pang, G., Gray, J., Mattson, S.E.: The state of computer-aided control system design (CACSD). IEEE Control. Syst. Mag. 15(2), 6–7 (1995) 35. Janschek, K.: Systementwurf mechatronischer Systeme. Springer, Berlin (2010) 36. Jeschke, S., Brecher, C., Song, H., Rawat, D.B. (eds.): Industrial Internet of Things. Springer, Berlin (2017) 37. Jonner, W., Winner, H., Dreilich, L., Schunck, E.: Electrohydraulic Brake System – The First Approach SAE Technical Paper Series. Society of Automotive Engineers, Warrendale (1996) 38. O. Kaynak, M. Özkan, N. Bekiroglu, I. Tunay (Eds.): Recent Advances in Mechatronics, Proceedings of International Conference ICRAM’95, Istanbul, 1995) 39. Kern, S., Roth, M., Abele, E., Nordmann, R.: Active damping of chatter vibrations in high speed milling using an integrated active magnetic bearing. In: Adaptronic Congress 2006; Conference Proceedings, Göttingen (2006) 40. Kiencke, U., Nielsen, L.: Automotive Control Systems, 2nd edn. Springer, Berlin (2005) 41. Kirschke-Biller, F.: Autosar – a worldwide standard current developments, rollout and outlook. In: 15th International VDI Congress Electronic Systems for Vehicles, Baden-Baden, Germany (2011) 42. Kitaura, K.: Industrial Mechatronics. New East Business (1986) in Japanese 43. Kyura, N., Oho, H.: Mechatronics – an industrial perspective. IEEE/ASME Trans. Mechatron. 1, 10–15 (1996) 44. MacConaill, P.A., Drews, P., Robrock, K.-H.: Mechatronics and Robotics I. ICS, Amsterdam (1991) 45. Mann, W.: Practical automation specification. Chapter 56. In: Nof, S.Y. (ed.) Springer Handbook of Automation, pp. 797–808. Springer, Dordrecht (2009) 46. McConaill, P., Drews, P., Robrock, K.-H.: Mechatronics and Robotics. ICS, Amsterdam (1991) 47. Mechatronics: An International Journal. Aims and Scope. Pergamon Press, Oxford (1991) 48. Otter, M., Cellier, C.: Software for modeling and simulating control systems. In: Levine, W.S. (ed.) The Control Handbook, pp. 415– 428. CRC, Boca Raton (1996) 49. Otter, M., Elmqvist, E.: Modelica – language, libraries, tools, workshop and EU-project RealSim. Simul. News Eur. 29/30, 3–8 (2000) 50. Otter, M., Schweiger, C.: Modellierung mechatronischer Systeme mit MODELICA (Modelling of mechatronic systems with MODELICA). In: Mechatronischer Systementwurf: Methoden – Werkzeuge – Erfahrungen – Anwendungen, Darmstadt, 2004. VDI Ber. 1842, pp. 39–50. VDI, Düsseldorf (2004) 51. Ovaska, S.J.: Electronics and information technology in high range elevator systems. Mechatronics. 2, 88–99 (1992) 52. Pahl, G., Beitz, W., Feldhusen, J., Grote, K.-H.: Engineering Design, 3rd edn. Springer, London (2007) 53. Pfeffer, P., Harrer, M.: Lenkungshandbuch, 2nd edn. Vieweg Teubner Verlag, Springer, Wiesbaden (2013)
54. Pfeiffer, F.: Mechanical System Dynamics. Springer, Berlin (2008) 55. Raji, R.S.: Smart networks for control. IEEE Spectr. 31(6), 49–55 (1994) 56. Reif, K.: Gasoline Engine Management. Springer, Wiesbaden (2015a) 57. Reif, K.: Diesel Engine Management. Springer, Wiesbaden (2015b) 58. Reimann, G., Brenner, P., Büring, H.: Steering actuator systems. In: Winner, H., et al. (eds.) Handbook of Driver Assistance Systems. Springer International Publishing, Switzerland (2016) 59. Remfrey, J., Gruber, S., Ocvirk, N.: Hydraulic brake systems for passenger vehicles. In: Winner, H., Hakuli, S., Lotz, F., Singer, C. (eds.) Handbook of Driver Assistance Systems, pp. 1–23. Springer International Publishing, Switzerland (2016) 60. Schäuffele, J., Zurawka, T.: Automotive Software Engineering, 6th edn. Springer, Wiesbaden (2016) 61. Schweitzer, G.: Mechatronics – a concept with examples in active magnetic bearings. Mechatronics. 2, 65–74 (1992) 62. Serpanos, D., Wolf, M.: Internet-of-Things (IoT) Systems. Springer, Berlin (2018) 63. STARTS Guide: The STARTS Purchases Handbook: Soft-Ware Tools for Application to Large Real-Time Systems, 2nd edn. National Computing Centre Publications, Manchester (1989) 64. Stephenson, D.A., Agapiou, J.S.: Metal Cutting Theory and Practice, 2nd edn. CRC, Boca Raton (2005) 65. Tomizuka, M.: Mechatronics: from the 20th to the 21st century. In: 1st IFAC Conf. Mechatron. Syst., pp. 1–10. Elsevier, Oxford/Darmstadt (2000) 66. UK Mechatronics Forum. Conferences in Cambridge (1990), Dundee (1992), Budapest (1994), Guimaraes (1996), Skovde (1998), Atlanta (2000), Twente (2002). IEE & ImechE (1990– 2002) 67. Ulsoy, A.G., Peng, H., Cakmakci, M.: Automotive Control Systems. Cambridge University Press (2012) 68. van Amerongen, J.: Mechatronic design. Mechatronics. 13, 1045– 1066 (2003) 69. van Amerongen, J.: Mechatronic education and research – 15 years of experience. In: 3rd IFAC Symp. Mechatron. Syst., pp. 595–607, Sydney (2004)
70. van Zanten, A., Kost, F.: Brake-based assistance functions. Chapter 39. In: Winner, H., Hakuli, S., Lotz, F., Singer, C. (eds.) Handbook of Driver Assistance Systems. Springer International Publishing, Switzerland (2016) 71. VDI 2206: Design Methodology for Mechatronic Systems. Beuth, Berlin (2004) in German 72. Wernicke, M., Rein, J.: Integration of existing ECU software in the AUTOSAR architecture. ATZ-Elektronik. (1), 20–25 (2007) 73. Yoneki, S., Hitozumi, E., Collerais, B.: Fail-operational EPS by Distributed Architecture, pp. 421–442. 4. ATZ-Konferenz chassis.tech plus, München (2013) 74. ZF-Lenksysteme (product information, 2010)
Rolf Isermann served as a professor for control systems and process automation at the Institute of Automatic Control of Darmstadt University of Technology from 1977–2006. Since 2006 he has been professor emeritus and head of the Research Group for Control Systems and Process Automation at the same institution. He has published several books and his current research concentrates on fault-tolerant systems, control of combustion engines and automobiles, and mechatronic systems. Rolf Isermann has held several chair positions in VDI/VDE and IFAC and organized several national and international conferences.
14 Sensors, Machine Vision, and Sensor Networks

Wootae Jeong
Contents
14.1 Sensors ............ 315
14.1.1 Sensing Principles ............ 316
14.1.2 Position Sensors ............ 317
14.1.3 Velocity Sensors ............ 317
14.1.4 Acceleration Sensors ............ 317
14.1.5 Flow Sensors ............ 317
14.1.6 Ultrasonic Sensors ............ 318
14.1.7 Micro- and Nanosensors ............ 318
14.1.8 Miscellaneous Sensors ............ 318
14.2 Machine Vision ............ 319
14.2.1 Image-based Automation Technology ............ 319
14.2.2 Machine Vision System Components ............ 320
14.2.3 Artificial Intelligence and Machine Vision ............ 322
14.3 Sensor Networks ............ 322
14.3.1 Sensor Network Systems ............ 322
14.3.2 Multisensor Data Fusion Methods ............ 323
14.3.3 Sensor Network Design Considerations ............ 325
14.3.4 Sensor Network Architectures ............ 326
14.3.5 Sensor Network Protocols ............ 327
14.3.6 Sensor Network Security ............ 328
14.3.7 Sensor Network Applications ............ 329
14.3.8 Industrial Internet of Things (IIoT) ............ 329
14.4 Emerging Trends ............ 330
14.4.1 Heterogeneous Sensors and Applications ............ 331
14.4.2 Appropriate Quality-of-Service (QoS) Model ............ 331
14.4.3 Integration with Other Networks ............ 331
References ............ 332
Abstract
Sensors are essential devices in many industrial applications such as factory automation, digital appliances, automotive applications, environmental monitoring, and system diagnostics. The main role of these sensors is to measure changes in the physical quantities of their surroundings. In general, sensors are embedded with circuitry into sensory devices as part of a system. In this chapter, various types of sensors and their working principles are briefly explained, and their technical advancement toward recent applications such as self-driving vehicles is introduced. In addition, artificial intelligence-based machine vision systems are briefly described as a core technology for current smart factories. The individual sensor perspective is also extended to wirelessly networked sensors and their applications from recent research activities. Through this chapter, readers can understand the working principles of various sensors and also how individual or networked sensors can be configured and can collaborate with each other to provide higher performance and reliability within networked sensor systems.

W. Jeong () Korea Railroad Research Institute, Uiwang, South Korea, e-mail: [email protected]

Keywords
Sensor node · Sensor network · Wireless sensor network · Machine vision · Machine learning
14.1 Sensors
A sensor is an instrument that responds to a specific physical stimulus and produces a measurable corresponding electrical signal. A sensor can be mechanical, electrical, electromechanical, magnetic, or optical. Any device that is directly altered in a predictable, measurable way by changes in a real-world parameter can be a sensor for that parameter. Sensors have an important role in daily life because of the need to gather information and process it conveniently for specific tasks. Recent advances in microdevice technology, microfabrication, chemical processes, and digital signal processing have enabled the development of micro/nanosized, low-cost, and low-power sensors called microsensors. Microsensors have been successfully applied to many practical areas, including medical and space devices, military equipment, telecommunication, and manufacturing applications [1, 2]. When compared with conventional sensors, microsensors
have certain advantages, such as interfering less with the environment they measure, requiring less manufacturing cost, being used in narrow spaces and harsh environments, etc. The successful application of microsensors depends on sensor capability, cost, and reliability.
14.1.1 Sensing Principles

Sensors can be technically classified into various types according to their working principle, as listed in Table 14.1. That is, sensors can measure physical phenomena by capturing resistance change, inductance change, capacitance change, the thermoelectric effect, the piezoelectric effect, the photoelectric effect, the Hall effect, and so on [3, 4]. Among these effects, most sensors utilize the resistance change of a conductor, i.e., resistivity. As long as the current density is uniform in the insulator, the resistance R of a conductor of cross-sectional area A can be computed as

$$R = \frac{\rho\,l}{A}, \tag{14.1}$$

where l is the length of the conductor, A is the cross-sectional area, and ρ is the electrical resistivity of the material. Resistivity is a measure of the material's ability to oppose electric current. Therefore, a change of resistance can be measured by detecting physical deformation (l or A) of conductive materials or by sensing the resistivity (ρ) of the conductor. As an example, a strain gage is a sensor that measures resistance by deformation of length or cross-sectional area, and a thermometer is a sensor that measures resistance by examining the resistivity change of a material. By expanding (14.1) in a Taylor series and then simplifying the equation, the resistance change can be expressed as

$$\Delta R = \frac{l}{A}\,\Delta\rho + \frac{\rho}{A}\,\Delta l - \frac{\rho\,l}{A^{2}}\,\Delta A. \tag{14.2}$$

Table 14.1 Technical classification of sensors according to their working principle
• Resistance change: strain gage, potentiometer, resistance temperature detector (RTD), potentiometric throttle position sensor (TPS), magnetoresistive sensor, thermistor, piezoresistive sensor, photoresistive sensor
• Capacitance change: capacitive-type torque meter, capacitance level sensor
• Inductance change: inductive angular position sensor (magnetic pick-up), inductive torque meter, linear variable differential transformer (LVDT)
• Electromagnetic induction: electromagnetic flow meter
• Thermoelectric effect: thermocouple
• Piezoelectric effect: piezoelectric accelerometer, sound navigation and ranging (SONAR)
• Photoelectric effect: photodiode, phototransistor, photo-interrupter (optical encoder)
• Hall effect: Hall sensor
By dividing both side of (14.2) by the resistance R, the resistance change rate can be expressed as R ρ l A ρ = + − = + ε + 2νε R ρ l A ρ
(14.3)
where ν and ε are the Poisson's ratio of the material and the strain, respectively. When the resistivity (ρ) of a sensing material is nearly constant, the resistance change can be determined from the strain (ε) and the Poisson's ratio (ν) of the material (e.g., a strain gage). When the resistivity of a sensing material is sensitive to the measured quantity and the contributions of ε and ν can be neglected, the resistance change can be obtained from the resistivity change (Δρ/ρ) (e.g., a resistance temperature detector (RTD)). A capacitance-based sensor measures the amount of electric charge stored between the two plates of a capacitor. The capacitance C can be calculated as

C = ε·A / d,   (14.4)
where d is the separation between the plates, A is the area of each plate, and ε is the dielectric constant (or permittivity) of the material between the plates. The dielectric constant of a number of very useful dielectrics changes as a function of the applied electric field. Thus, capacitance-based sensors utilize capacitance change by measuring the dielectric constant, the area (A) of each plate, or the separation (d) between the plates. A capacitive-type torque meter is an example of a capacitance-based sensor. Inductance-based sensors measure the ratio of the magnetic flux to the current. Linear variable differential transformers (LVDT; Fig. 14.1) and magnetic pick-up sensors are representative inductance-based sensors. Electromagnetic induction-based sensors are based on Faraday's law of induction, which is involved in the operation of inductors, transformers, and many forms of electrical generators. The law states that the induced electromotive force (EMF) in any closed circuit is equal to the time rate of change of the magnetic flux through the circuit. Quantitatively, the law takes the form

E = −dΦ_B/dt,   (14.5)
Fig. 14.1 Cutaway view of an LVDT. Current is driven through the primary coil at A, causing an induction current to be generated through the secondary coils at B
where E is the electromotive force (EMF) and Φ_B is the magnetic flux through the circuit. Besides these types of sensors, thermocouples measure the relative difference in temperature between two points rather than absolute temperature. In traditional applications, one of the junctions (the cold junction) is maintained at a reference temperature, while the other end is attached to a probe. Because keeping a cold junction at a known temperature is not practical for most directly connected control instruments, such instruments incorporate an artificial cold junction into their circuits, using another thermally sensitive device such as a diode or thermistor to measure the temperature of the input connections at the instrument, with special care taken to minimize any temperature gradient between terminals. In this way, the voltage from a known cold junction can be simulated and the appropriate correction applied. Photodiodes, phototransistors, and photo-interrupters are sensors that use the photoelectric effect. The various types of sensors and their working principles are summarized in Table 14.1. Sensors can also be classified by the physical phenomenon measured, such as position, velocity, acceleration, heat, pressure, flow rate, or sound. This classification of sensors is briefly explained below.
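Before turning to that classification, the resistance-change relation (14.3) can be made concrete with a minimal Python sketch that computes the relative resistance change of a hypothetical metallic strain gage, assuming the resistivity term Δρ/ρ is negligible; the strain and Poisson's ratio values are illustrative only.

# Relative resistance change of a strain gage from Eq. (14.3),
# assuming the resistivity change (delta_rho/rho) is negligible.
def strain_gage_dR_over_R(strain, poisson_ratio, d_rho_over_rho=0.0):
    """Return dR/R = d_rho/rho + strain + 2 * nu * strain."""
    return d_rho_over_rho + strain + 2.0 * poisson_ratio * strain

epsilon = 1000e-6   # 1000 microstrain (illustrative)
nu = 0.3            # Poisson's ratio of the gage material (assumed)
dR_R = strain_gage_dR_over_R(epsilon, nu)
print(f"dR/R = {dR_R:.6f}, effective gage factor = {dR_R / epsilon:.2f}")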
14.1.2 Position Sensors

A position sensor is any device that enables position measurement. Position sensors include limit switches and proximity sensors that detect whether something is close to, or has reached, a limit of travel. They also include potentiometers that measure rotational or linear position. The linear variable differential transformer (LVDT) is an example of a sensor for measuring linear displacement, while resolvers and optical encoders measure the rotational position of a rotating shaft. The resolver and the LVDT function much like a transformer. The optical encoder produces digital
signals that convert motion into a sequence of digital pulses; optical encoders for measuring linear motion also exist. Some position sensors are classified by their measuring technique: sonar measures distance with sonic/ultrasonic waves, and radar uses electromagnetic (radio) waves to detect objects or measure the distance between two objects. Many other sensors are used to measure position or distance.
14.1.3 Velocity Sensors

Speed can be measured by taking consecutive position measurements at known time intervals and computing the derivative of the position values. A tachometer is an example of a velocity sensor that does this for a rotating shaft; its typical dynamic time constant is in the range 10–100 μs. A tachometer is a passive analog sensor that provides an output voltage proportional to the velocity of a shaft, with no need for an external reference or excitation voltage. Traditionally, tachometers were used for velocity measurement and control only, but modern tachometers provide quadrature outputs that are used for velocity, position, and direction measurements, making them effectively functional as position sensors.
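As a minimal sketch of the differentiation idea described above, the following Python snippet estimates velocity from consecutive position samples taken at a fixed interval; the sample values and interval are hypothetical.

# Velocity estimated by finite differences of sampled positions:
# v[i] ~ (x[i+1] - x[i]) / dt, as described in the text above.
def velocity_from_positions(positions, dt):
    """Return finite-difference velocity estimates between samples."""
    return [(b - a) / dt for a, b in zip(positions, positions[1:])]

dt = 0.01                                   # 10 ms sampling interval (assumed)
positions = [0.00, 0.12, 0.25, 0.39, 0.52]  # shaft angle in radians (illustrative)
print(velocity_from_positions(positions, dt))   # angular velocity in rad/s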
14.1.4 Acceleration Sensors

An acceleration sensor, or accelerometer, is a sensor designed to measure continuous mechanical vibration, such as aerodynamic flutter, and transitory vibration, such as shock waves, blasts, or impacts. Accelerometers are normally mechanically attached or bonded to the object or structure whose acceleration is to be measured. The accelerometer detects acceleration along one axis and is insensitive to motion in orthogonal directions. Since the acceleration of an object is directly related to the applied force by Newton's second law of motion, accelerometers are broadly used in engineering dynamics applications. Strain gages or piezoelectric elements constitute the sensing element of an accelerometer, converting vibration into a voltage signal. The design of an accelerometer is based on the inertial effects associated with a mass connected to a moving object. Detailed information and technical working processes for position, velocity, and acceleration sensors can be found in many references [5, 6].
14.1.5 Flow Sensors

A flow sensor is a device for sensing the rate of fluid flow. In general, a flow sensor is the sensing element used in a flow
meter to record the flow of fluids. Some flow sensors have a vane that is pushed by the fluid, with its deflection read by, for example, a potentiometer, while other flow sensors are based on heat transfer caused by the moving medium.
14.1.6 Ultrasonic Sensors

An ultrasonic sensor or transducer generates high-frequency sound waves and evaluates the echo received back by the sensor. An ultrasonic sensor computes the time interval between sending the signal and receiving the echo to determine the distance to an object. Radar and sonar work on a principle similar to that of the ultrasonic sensor. Some sensors are depicted in Fig. 14.2; many other groups of sensors are not listed in this section. With the advent of semiconductor electronics and manufacturing technology, sensors have become miniaturized and more accurate, bringing micro/nanosensors into existence.
14.1.7 Micro- and Nanosensors

A microsensor is a miniature electronic device functioning similarly to existing large-scale sensors. With recent microelectromechanical system (MEMS) technology, microsensors are integrated with signal-processing circuits, analog-to-digital (A/D) converters, programmable memory, and a microprocessor into a so-called smart microsensor [7, 8]. Current smart microsensors contain an antenna for radio signal transmission. Wireless microsensors are now commercially available and are evolving with more powerful functionalities, as illustrated in Fig. 14.3. In general, a wireless microsensor consists of a sensing unit, a processing unit, a power unit, and communication elements. The sensing unit is the electrical part that detects the physical variable from the environment. The processing unit (a tiny microprocessor) performs signal-processing functions, i.e., integrating data and performing the computation required to process the information. The communication elements consist of a receiver, a transmitter, and an amplifier if needed. The power unit supplies energy to the other units (Fig. 14.4). Basically, all individual sensor nodes operate on a limited battery, but a base-station node, as the final data collection center, can be modeled with an unlimited energy source. Below the microscale, nanosensors are used in chemical and biological sensory applications to deliver information about nanoparticles. As an example, nanotubes are used to sense various properties of gaseous molecules. In developing and commercializing nanosensors, developers still need to overcome high production costs and reliability challenges. Nanosensors are currently used in specific applications, but there is still tremendous room to improve the technology for practical use in real-life applications.
Fig. 14.2 Various sensors: (a) absolute encoder, (b) photoresistor, (c) sonar, and (d) digital load cell cutaway. (Courtesy of Society of Robots)

14.1.8 Miscellaneous Sensors
Among the many technologies that make self-driving, autonomous vehicles possible is a combination of sensors and
Fig. 14.3 Evolution of smart wireless microsensors: WeC (1999, smart rock), Rene (2000), Dot (2001, demo scale), Mica (2002), Mica2 (2002), and Spec (2003, mote on a chip). (Courtesy of Crossbow Technology Inc.)
Fig. 14.4 Wireless micronode model. Each node has a sensing unit (sensor, analog-to-digital converter (ADC)), a processing unit (processor, memory), a power unit, and communication elements (receiver, transmitter, amplifier)
actuators. In particular, diverse sensors are utilized in autonomous vehicles, but the essential sensors for autonomous driving assistance are the camera, radar, and lidar. These sensors help cars detect objects, monitor their surroundings, and safely plan their paths. Video cameras in autonomous cars replace the human driver's eyes to see and interpret objects on the road. Vehicles with cameras at suitable positions are capable of maintaining a 360° view of their external environment, thereby providing a broader picture of the traffic conditions around the car. By utilizing mostly 2D cameras today, and increasingly 3D cameras, they can provide highly detailed and realistic images to detect obstacles, classify them, and determine the distances between them and the vehicle. However, these cameras are highly dependent on weather conditions such as rain, snow, and fog. Radar (radio detection and ranging) sensors send out radio waves that detect objects and gauge their distance and speed relative to the vehicle in real time. While short-range (24 GHz) radar applications enable blind-spot monitoring, lane-keeping assistance, and parking aids, the roles of long-range (77 GHz) radar sensors include automatic distance control and brake assistance. Unlike camera sensors, radar systems typically have relatively little trouble identifying objects in fog or rain [9]. However, the commonly used 2D radars are limited in accurately determining an object's height, so a wider variety of 3D radar sensors is being developed. Lidar (light detection and ranging) sensors work very similarly to radar sensors but use laser light instead of radio waves [10–12]. Lidar is sometimes called 3D laser scanning. Apart from measuring the distances to various objects on the road, lidar allows creating 3D images of the detected objects and mapping the surroundings. Moreover, lidar can be configured to create a full 360-degree map around the vehicle rather than relying on a narrow field of view. These two advantages lead autonomous vehicle manufacturers such
as Tesla [13], Google [14, 15], and Uber [16] to choose lidar systems, although lidar sensors are still much more expensive than radar sensors. There are also miscellaneous sensors for measuring physical quantities such as strain, force, temperature, humidity, flow, and pressure. Force sensors are typified by the load cell, which is used to measure a force; a load cell consists of several strain gages connected in a bridge circuit to yield a voltage proportional to the load. Temperature sensors are devices that indirectly measure quantities such as pressure, volume, electrical resistance, and strain, and then convert the values using the physical relationship between the quantity and temperature; for example: (a) a bimetallic strip composed of two metal layers with different coefficients of thermal expansion utilizes the difference in the thermal expansion of the two layers, (b) a resistance temperature sensor constructed of metallic wire wound around a ceramic or glass core and hermetically sealed utilizes the resistance change of the metallic wire with temperature, and (c) a thermocouple constructed by connecting two dissimilar metals in contact produces a voltage proportional to the temperature of the junction [17, 18].
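As a small illustration of indirect temperature measurement with a resistance temperature sensor, the sketch below inverts the common linear approximation R(T) ≈ R0(1 + αT); the nominal resistance, coefficient, and reading are assumptions used only for illustration.

# Indirect temperature measurement with an RTD: invert the linear
# approximation R(T) = R0 * (1 + alpha * T) to recover T from a reading.
def rtd_temperature(resistance_ohm, r0_ohm=100.0, alpha=0.00385):
    """Return the temperature (degC) implied by the measured resistance.
    r0_ohm and alpha are assumed values for a platinum element."""
    return (resistance_ohm / r0_ohm - 1.0) / alpha

print(rtd_temperature(119.40))   # about 50.4 degC for the assumed element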
14.2 Machine Vision
14.2.1 Image-based Automation Technology

Machine vision is also a widely used form of sensing. A vision sensor is typically embedded in a camera-based vision system. A vision system is generally used for measurement, guidance, and inspection applications. Visual measurement can improve factory productivity by measuring key dimensions, counting parts or features, and assessing surface quality and assembly results. Visual guidance can significantly improve manufacturing efficiency and body assembly quality by guiding machines to complete automatic handling, optimum matching assembly, and precise drilling. Visual inspection can monitor the stability of the body manufacturing process and can also be used to ensure the integrity and traceability of the product, reducing the cost of manufacturing. Machine vision and image-based automation technology have improved significantly over the last decade and have become rather standard smart sensing components in most factory automation systems for part sorting, object inspection, and robot guidance. In general, a vision system consists of a vision camera, a lighting module, and an image-processing system, as shown in Fig. 14.5. The basic principle of operation of a vision system is that it forms an image by measuring the light reflected from objects, and the sensor head analyzes the output voltage produced by the light intensity
Fig. 14.5 Overall procedure of a machine vision system: LED lighting, camera (lens, imager, electronics, power/control signal), frame grabber or other signal conversion, digital image, and computer, with an optional external computer for the operator interface; packaged variants include the all-in-one system, smart camera, and vision sensor
Fig. 14.6 Types of vision sensor applications: (a) automated low-volume/high-variety production, (b) vision sensors for error-proof oil cap assembly, (c) defect-free parts with 360◦ inspection, (d) inspection of two-dimensional (2D) matrix-marked codes. (Courtesy of Cognex Corp.)
received. The sensor head consists of an array of photosensitive elements such as photodiodes or charge-coupled devices (CCDs). Currently, various signal-processing techniques for the reflected signals are applied in many industrial applications to provide accurate outputs, as illustrated in Fig. 14.6 [19, 20].
14.2.2 Machine Vision System Components

Lighting: Since machine vision itself does not provide lighting, proper illumination is essential for the machine vision system to identify the object clearly. Key features of various lighting
techniques are summarized in Table 14.2, and the corresponding lighting arrangements are depicted in Fig. 14.7. Industrial Cameras: Industrial cameras were originally used in PC-based machine vision systems. The
Table 14.2 Key features of various lighting techniques
Lighting technique | Features
Back lighting | Accurate outline of objects for applications
Axial or coaxial lighting | Very even illumination and homogeneous image on specular surfaces; reducing low-contrast surface features and highlighting high-contrast geometric surface features depending on reflectivity
Structured light | Providing contrast-independent surface inspection and 3D information in 2D images
Dark-field illumination | Preferred for low-contrast applications using low-angle ring light; specular light is reflected away from the camera
Bright-field illumination | Preferred for high-contrast applications using high-angle ring light; producing sharp shadows and inconsistent illumination throughout the entire field of view
Constant diffuse illumination (dome lighting) | Providing the most uniform illumination of features of interest; masking irregularities that are not of interest
Collimated illumination | Highly accurate back lighting; reducing stray light and highlighting surface features as a front light
PC-based machine vision still offers the best performance and flexibility. Recently, the industrial camera has been evolving into smart cameras and board cameras. The smart camera, an evolution of PC-based machine vision, is an integrated system with onboard computing capabilities; it is easier to implement, but its flexibility and performance are limited. The board camera is another evolution of the industrial camera: by adopting CMOS imagers instead of CCDs, the components of the industrial camera are simplified and reduced to a single board. Line Scan Camera and Area Scan Camera: As depicted in Fig. 14.8, line scan cameras record a single row of pixels at a time and build continuous images, allowing for much higher resolutions than area scan cameras. Thus, the line scan camera is effective when the object or camera is moving. Area scan cameras capture an area image of a given scene in one exposure cycle with a designed number of vertical and horizontal pixels. Area scan cameras are easier to set up and implement than line scan cameras, and they are suited to cases where the object is stationary. Image Processing: Image processing in a vision system is the procedure of extracting the required information from a digital image, performed by various software tools. The machine vision software should provide sufficient algorithmic depth and capability to perform the required inspection tasks. It should also offer adequate flexibility in the process configuration to serve the automation requirements. The
Fig. 14.7 Lighting techniques for vision sensor applications: (a) back lighting, (b) axial or coaxial lighting, (c) structured lighting, (d) dark-field illumination, (e) bright-field illumination, and (f) constant diffuse illumination (dome lighting)
Fig. 14.8 Line scan camera (a) and area scan cameras (b)
major functions of the vision algorithms are image transformation, preprocessing (including image enhancement), content statistics, edge detection, geometric search, color image processing, correlation, and connectivity. Today, vision system components are becoming compact and miniaturized so that they can easily be implemented in manufacturing. Machine vision components are also becoming more intelligent and sophisticated as they are combined with artificial intelligence technology.
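Edge detection, one of the vision-algorithm functions listed above, can be illustrated with a minimal NumPy sketch that slides the classic Sobel kernels over a grayscale image; the tiny test image is synthetic and the implementation is deliberately unoptimized.

# Minimal Sobel edge detection on a grayscale image (NumPy only).
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def filter2d(img, kernel):
    """Slide a 3x3 kernel over the image (valid region, correlation)."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i + 3, j:j + 3] * kernel)
    return out

def sobel_magnitude(img):
    gx = filter2d(img, SOBEL_X)     # horizontal gradient
    gy = filter2d(img, SOBEL_Y)     # vertical gradient
    return np.hypot(gx, gy)         # edge strength per pixel

image = np.zeros((6, 6))
image[:, 3:] = 1.0                  # synthetic vertical edge
print(sobel_magnitude(image))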
14.2.3 Artificial Intelligence and Machine Vision

One of the revolutionary advances in machine vision systems is the extensive use of machine learning. As a subset of artificial intelligence, machine learning based on convolutional neural networks (CNNs) [21] and deep learning algorithms [22–24] makes the vision system more intelligent. CNNs were rapidly integrated into vision systems to provide more intelligent decisions by solving image classification, localization, object detection, segmentation, and other problems with state-of-the-art accuracy. As a result, embedded deep learning has radically improved the value proposition for machine vision by accelerating inspection processes, improving operational efficiency, and increasing productivity in applications that were not cost-effective to pursue in the past. Deep learning, and also reinforcement learning [25], provides solutions for vision applications that are difficult to program with rule-based algorithms; it handles complex surface textures and variations in part appearance and adapts to new examples without reprogramming the core networks. This technology handles judgment-based inspection, part location, classification, and character recognition challenges more efficiently than conventional machine vision systems. Deep learning based on CNNs has become the most widely used approach for a variety of computer vision problems, although it requires greater computer processing capability.
For machine learning models to be accurate in various automation applications, very large amounts of digital image data are required for a comprehensive training process so that the models can classify new objects independently afterward. Many public datasets are available for building intelligent machine learning models, but many industrial applications require very specialized datasets. Therefore, many companies providing intelligent machine vision solutions put effort into collecting the right image data and metadata with their own smart cameras. Successful examples of AI-based vision systems can be found among automation service vendors such as Keyence [26], Cognex [27], and Adlink [28]. Today, the AI-based machine vision system is becoming an essential technology of smart factories.
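A minimal sketch of the kind of CNN classifier described above, written with PyTorch (assumed to be available); the two-class output, 64 × 64 grayscale input, and layer sizes are illustrative choices and do not represent any particular vendor's system.

# Minimal CNN for a two-class inspection task (e.g., pass/fail).
# PyTorch is assumed to be installed; all sizes are illustrative.
import torch
import torch.nn as nn

class TinyInspectionCNN(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(16 * 16 * 16, num_classes)  # for 64x64 input

    def forward(self, x):
        x = self.features(x)
        return self.classifier(torch.flatten(x, 1))

model = TinyInspectionCNN()
dummy = torch.randn(4, 1, 64, 64)   # a batch of 4 grayscale images
print(model(dummy).shape)           # torch.Size([4, 2])

In practice, such a network would be trained on the large, application-specific image datasets discussed above before being deployed for inspection.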
14.3 Sensor Networks
14.3.1 Sensor Network Systems

Before the advent of wireless and microminiaturization technology, single-sensor systems had an important role in a variety of practical applications because they were relatively easy to construct and analyze. Single-sensor systems were also the only solution when implementation space was critically limited. For recently emerging applications, however, single-sensor systems have various limitations and disadvantages:

• They have limited applications and uses; for instance, if a system must measure several variables, e.g., temperature, pressure, and flow rate, at the same time, single-sensor systems are insufficient.
• They cannot tolerate the variety of failures that may take place unexpectedly.
• A single sensor cannot guarantee timely delivery of accurate information all of the time, because it is inevitably affected by noise and other uncertain disruptions.

These limitations are critical when a system requires highly reliable and timely information. Therefore, single-sensor systems are not suitable when robust and accurate information is required by the application. To overcome the critical disadvantages of single-sensor systems in most applications, multisensor-based wireless network systems that rely on replicated sensory information have been studied, along with their communication network technologies. Replicated sensor systems are practical not only because microfabrication technology enables production of various microsensors at low manufacturing cost, but also because microsensors can be embedded in a system with replicated deployment. These redundantly deployed sensors enable a system to improve accuracy and tolerate
sensor failure; i.e., distributed microsensor arrays and networks (DMSA/DMSN) are built from collections of spatially scattered microsensor nodes. Each node has the ability to measure the local physical variable within its accuracy limit, process the raw sensory data, and cooperate with its neighboring nodes. Sensors incorporating dedicated signal-processing functions are called intelligent, or smart, sensors. The main roles of the dedicated signal-processing functions are to enhance design flexibility and realize new sensing functions; additional roles are to reduce the loads on central processing units and signal transmission lines by distributing information processing to the lower layers of the system [8]. A set of microsensors deployed close to each other to measure the same physical quantity of interest is called a cluster. Sensors in a cluster can be of either the same or different types, forming a distributed sensor network (DSN). A DSN can be utilized as a widely distributed sensor system or implemented as a locally concentrated configuration with high density.
14.3.2 Multisensor Data Fusion Methods

There are three major ways in which multiple sensors interact [29, 30]: (1) competitive, when sensors provide independent measurements of the same information regarding a physical phenomenon; (2) complementary, when sensors do not depend on each other directly but are combined to give a more complete image of the phenomenon being studied; and (3) cooperative, when data from independent sensors are combined to derive information that would be unavailable from the individual sensors. In order to combine the information collected from each sensor, various multisensor data fusion methods can be applied. Multisensor data fusion is the process of combining observations from a number of different sensors to provide a robust and complete description of an environment or process of interest. Most current data fusion methods employ probabilistic descriptions of observations and processes and use Bayes' rule to combine this information [31, 32]. Although the probabilistic grid and the Kalman filter described in the following sections are more commonly used for integrating multiple readings from a single sensor, they are also effective in processing sensing data from distributed sensors.
Bayes' Rule
Bayes' rule provides the basic logic underlying most data fusion methods. In general, Bayes' rule provides a means to make inferences about an object or environment of interest described by a state x, given an observation z. Based on the rule of conditional probabilities, Bayes' rule is obtained as
P(x | z) = P(z | x) P(x) / P(z).   (14.6)

The conditional probability P(z | x) serves the role of a sensor model: the probability is constructed by fixing the value of x and then asking what probability density on z results. The multisensor form of Bayes' rule requires conditional independence,

P(z_1, ..., z_n | x) = P(z_1 | x) ··· P(z_n | x) = Π_{i=1}^{n} P(z_i | x).   (14.7)

The recursive form of Bayes' rule is

P(x | Z^k) = P(z_k | x) P(x | Z^{k−1}) / P(z_k | Z^{k−1}).   (14.8)

From this equation, one needs to compute and store only the posterior density P(x | Z^{k−1}), which contains a complete summary of all past information.
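A minimal sketch of (14.6) and (14.7) for a discrete state (e.g., "defect" versus "ok") observed by two conditionally independent sensors; the prior and the sensor likelihoods are invented for illustration.

# Discrete multisensor Bayes fusion, Eqs. (14.6)-(14.7):
# posterior(x) is proportional to prior(x) * product_i P(z_i | x).
def bayes_fuse(prior, likelihoods):
    """prior: {state: P(x)}; likelihoods: one {state: P(z_i | x)} per sensor."""
    post = dict(prior)
    for lik in likelihoods:              # conditional independence across sensors
        for state in post:
            post[state] *= lik[state]
    total = sum(post.values())           # the normalizer P(z)
    return {state: p / total for state, p in post.items()}

prior = {"defect": 0.1, "ok": 0.9}       # assumed prior
z1 = {"defect": 0.8, "ok": 0.2}          # likelihood of sensor 1's reading
z2 = {"defect": 0.7, "ok": 0.3}          # likelihood of sensor 2's reading
print(bayes_fuse(prior, [z1, z2]))       # fused posterior over the state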
Probabilistic Grids
Probabilistic grids, also called occupancy grids, are a means of applying the Bayesian data fusion technique to problems in mapping [33] and tracking [34]. Practically, a grid of likelihoods on the states x_ij is produced in the form P(z = z | x_ij) = Λ(x_ij). It is then trivial to apply Bayes' rule to update the property value at each grid cell as

P+(x_ij) = C Λ(x_ij) P(x_ij), ∀ i, j,   (14.9)

where C is a normalizing constant obtained by normalizing the posterior probabilities to sum to 1 at node ij only. Computationally, this is a simple pointwise multiplication of two grids. Grid-based fusion is appropriate to situations where the domain size and dimension are modest. In such cases, grid-based methods provide straightforward and effective fusion algorithms. Monte Carlo and particle filtering methods can be considered grid-based methods in which the grid cells themselves are samples of the underlying probability density for the state.
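A minimal sketch of the per-cell update (14.9): the prior occupancy probability of every grid cell is multiplied pointwise by the observation likelihood and renormalized over the two property values (occupied/empty); all grids are hypothetical.

# Pointwise occupancy-grid update, Eq. (14.9):
# P+(occupied) = C * likelihood(occupied) * P(occupied), normalized per cell.
import numpy as np

def update_grid(prior_occ, lik_occ, lik_emp):
    """All arguments are 2-D arrays of per-cell probabilities/likelihoods."""
    num = lik_occ * prior_occ                  # unnormalized posterior (occupied)
    den = num + lik_emp * (1.0 - prior_occ)    # per-cell normalizing constant
    return num / den

prior = np.full((2, 2), 0.5)                        # uninformed prior
lik_occupied = np.array([[0.9, 0.6], [0.4, 0.2]])   # hypothetical sensor model
lik_empty    = np.array([[0.1, 0.4], [0.6, 0.8]])
print(update_grid(prior, lik_occupied, lik_empty))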
The Kalman Filter
The Kalman filter is a recursive linear estimator that successively calculates an estimate for a continuous-valued state on the basis of periodic observations of that state. The Kalman filter may be considered a specific instance of the recursive Bayesian filter [35] for the case where the probability densities on states are Gaussian. The Kalman filter algorithm produces estimates that minimize the mean-squared estimation error conditioned on a given observation sequence and so is the conditional mean
x̂(i | j) ≜ E[x(i) | z(1), ..., z(j)] ≜ E[x(i) | Z^j].   (14.10)

The estimate variance is defined as the mean-squared error in this estimate,

P(i | j) ≜ E{[x(i) − x̂(i | j)][x(i) − x̂(i | j)]^T | Z^j}.   (14.11)

The estimate of the state at a time k, given all information up to time k, is written x̂(k | k). The estimate of the state at a time k given only information up to time k − 1 is called a one-step-ahead prediction and is written x̂(k | k − 1). The Kalman filter is appropriate to data fusion problems where the entity of interest is well defined by a continuous parametric state; it is thus useful for estimating the position, attitude, and velocity of an object, or for tracking a simple geometric feature. Kalman filters, however, are inappropriate for estimating properties such as spatial occupancy, discrete labels, or processes whose error characteristics are not easily parameterized.
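A minimal one-dimensional Kalman filter sketch for a nearly constant quantity observed with noise, using a random-walk process model; the noise variances and readings are assumptions for illustration, and the full matrix form is omitted.

# One-dimensional Kalman filter: predict with a random-walk model, then update.
def kalman_1d(measurements, q=1e-4, r=0.04, x0=0.0, p0=1.0):
    """q: process-noise variance, r: measurement-noise variance (both assumed)."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = p + q                   # predict: state unchanged, uncertainty grows
        k = p / (p + r)             # Kalman gain
        x = x + k * (z - x)         # update with the innovation (z - x)
        p = (1.0 - k) * p
        estimates.append(x)
    return estimates

readings = [2.1, 1.9, 2.05, 2.0, 1.95]   # hypothetical noisy sensor readings
print(kalman_1d(readings))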
Sequential Monte Carlo Methods
The sequential Monte Carlo (SMC) filtering method is a simulation of the recursive Bayes update equations using sample support values and weights to describe the underlying probability distributions. The SMC recursion begins with a posterior probability density represented by a set of support values x^i_{k−1} and weights w^i_{k−1|k−1}, i = 1, ..., N_{k−1}, in the form

P(x_{k−1} | Z^{k−1}) = Σ_{i=1}^{N_{k−1}} w^i_{k−1} δ(x_{k−1} − x^i_{k−1}).   (14.12)

Leaving the weights unchanged, w^i_k = w^i_{k−1}, and allowing the new support values x^i_k to be drawn on the basis of the old support values x^i_{k−1}, the prediction becomes

P(x_k | Z^{k−1}) = Σ_{i=1}^{N_{k−1}} w^i_{k−1} δ(x_k − x^i_k).   (14.13)

The SMC observation update step is relatively straightforward and is described in Refs. [31, 36]. SMC methods are well suited to problems where state-transition models and observation models are highly nonlinear. However, they are inappropriate for problems where the state space is of high dimension: the number of samples required to model a given density faithfully increases exponentially with the state-space dimension.

Interval Calculus
Interval representation of uncertainty has a number of potential advantages over probabilistic techniques. An interval bounding the true parameter value provides a good measure of uncertainty in situations where there is a lack of probabilistic information, but in which sensor and parameter errors are known to be bounded. In this technique, the uncertainty in a parameter x is simply described by the statement that the true value of the state x is known to be bounded between a and b, i.e., x ∈ [a, b]; no other probabilistic structure is implied. With a, b, c, d ∈ R, interval arithmetic is also possible, for example through the interval distance

d([a, b], [c, d]) = max(|a − c|, |b − d|).   (14.14)

Interval calculus methods are sometimes used for detection, but are not generally used in data fusion problems because of the difficulty of getting results to converge to useful values and of encoding dependencies between variables.

Fuzzy Logic
Fuzzy logic has achieved widespread popularity for representing uncertainty in high-level data fusion tasks. It provides an ideal tool for inexact reasoning, particularly in rule-based systems. In the conventional logic system, a membership function μ_A(x) (also called the characteristic function) is defined; the fuzzy membership function then assigns a value between 0 and 1, indicating the degree of membership of every x to the set A. Composition rules for fuzzy sets follow the composition processes for normal crisp sets:

A ∩ B: μ_{A∩B}(x) = min[μ_A(x), μ_B(x)],   (14.15)
A ∪ B: μ_{A∪B}(x) = max[μ_A(x), μ_B(x)].   (14.16)

There exist a number of similarities between fuzzy set theory and probability theory, as the probability density function and the fuzzy membership function resemble each other. However, the relation between probability theory and fuzzy logic has been an object of discussion [37, 38].
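The fuzzy composition rules (14.15) and (14.16) reduce to elementwise minimum and maximum of membership values, as in the short sketch below; the membership functions are invented.

# Fuzzy set composition, Eqs. (14.15)-(14.16): intersection = min, union = max.
import numpy as np

mu_warm = np.array([0.0, 0.2, 0.6, 0.9, 1.0])   # membership in "warm" (assumed)
mu_safe = np.array([1.0, 1.0, 0.8, 0.4, 0.1])   # membership in "safe" (assumed)

mu_warm_and_safe = np.minimum(mu_warm, mu_safe)   # A intersect B
mu_warm_or_safe  = np.maximum(mu_warm, mu_safe)   # A union B
print(mu_warm_and_safe)
print(mu_warm_or_safe)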
Evidential Reasoning
Evidential reasoning methods are qualitatively different from both probabilistic methods and fuzzy set theory. In probability theory, a belief mass may be placed on any element x_r ∈ χ and on any subset A ⊆ χ; in evidential reasoning, belief mass can be placed not only on elements and sets but also on sets of sets, so the domain of evidential reasoning is the power set 2^χ. Evidential reasoning methods play an important role in discrete data fusion, attribute fusion, and situation assessment, where information may be unknown or ambiguous. Multisensor fusion methods and their models are summarized in Table 14.3, and details can be found in Refs. [31, 32].
Table 14.3 Multisensor data fusion methods [31]
Approach | Method | Fusion model and rule
Probabilistic modeling | Bayes' rule | P(x | Z^k) = P(z_k | x) P(x | Z^{k−1}) / P(z_k | Z^{k−1})
Probabilistic modeling | Probabilistic grids | P+(x_ij) = C Λ(x_ij) P(x_ij)
Probabilistic modeling | The Kalman filter | P(i | j) ≜ E{[x(i) − x̂(i | j)][x(i) − x̂(i | j)]^T | Z^j}
Nonprobabilistic modeling | Sequential Monte Carlo methods | P(x_k | Z^k) = C Σ_{i=1}^{N_k} w^i_{k−1} P(z_k = z_k | x_k = x^i_k) δ(x_k − x^i_k)
Nonprobabilistic modeling | Interval calculus | d([a, b], [c, d]) = max(|a − c|, |b − d|)
Nonprobabilistic modeling | Fuzzy logic | A ∩ B: μ_{A∩B}(x) = min[μ_A(x), μ_B(x)]; A ∪ B: μ_{A∪B}(x) = max[μ_A(x), μ_B(x)]
Nonprobabilistic modeling | Evidential reasoning | 2^χ = {{occupied, empty}, ..., {occupied}, {empty}, ∅}
In addition, multisensor integration or fusion is not only the process of combining inputs from sensors with information from other sensors, but also the logical procedure of inducing an optimal output, in one representative format, from multiple inputs [39]. In the fusion of large distributed sensor networks, an additional advantage of multisensor integration (MSI) is the ability to obtain more fault-tolerant information. This fault tolerance is based on redundant sensory information that compensates for faulty or erroneous sensor readings. There are several types of multisensor fusion and integration methods, depending on the types of sensors and their deployment [40]. This topic has received increasing interest in recent years because of the feasibility of networks built with many low-cost micro- and nanosensors. As an example, a recent improvement of the fault-tolerant sensor integration algorithm (FTSIA) by Liu and Nof [41, 42] enables it not only to detect possibly faulty and widely faulty sensors, but also to generate a final data interval estimate from the correct sensors after removing the readings of the faulty ones.
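In the spirit of fault-tolerant multisensor integration (a much simplified sketch, not the published FTSIA [41, 42]), the snippet below fuses sensor reading intervals by keeping only the range covered by a chosen quorum of sensors, so that a minority of faulty sensors cannot distort the final interval estimate; the readings are invented.

# Simplified fault-tolerant interval fusion: return the first range of values
# that is covered by at least `quorum` of the sensors' reading intervals.
def fuse_intervals(intervals, quorum):
    events = sorted([(lo, +1) for lo, hi in intervals] +
                    [(hi, -1) for lo, hi in intervals])
    count, start = 0, None
    for point, delta in events:            # sweep over interval endpoints
        count += delta
        if count >= quorum and start is None:
            start = point                  # quorum coverage begins here
        elif count < quorum and start is not None:
            return (start, point)          # quorum coverage ends here
    return None                            # no region reached the quorum

readings = [(9.8, 10.2), (9.9, 10.3), (10.0, 10.4), (14.9, 15.1)]  # last is faulty
print(fuse_intervals(readings, quorum=3))   # interval agreed on by 3 of 4 sensors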
14.3.3 Sensor Network Design Considerations

Sensor networks are somewhat different from traditional operating networks because sensor nodes, especially microsensors, are highly prone to failure over time. As sensor nodes weaken or even die, the topology of the active sensor network changes frequently. Especially when mobility is introduced into the sensor nodes, maintaining robustness and consistently discovering the topology become challenging. Therefore, the algorithms developed for sensor network communication and task administration should be flexible and stable against changes of network topology and should work properly under unexpected sensor failures. In addition, in order to be used in most applications, DSN systems should be designed with application-specific communication algorithms and task administration protocols, because microsensors and their networking systems are extremely resource constrained. Therefore, most research
efforts have focused on application-specific protocols with respect to energy consumption and network parameters such as node density, radio transmission range, network coverage, latency, and distribution. Current network protocols also use broadcasting for communication, while traditional and ad hoc networks use point-to-point communication. Hence, routing protocols, in general, should be designed by considering the following crucial sensor network features:

1. Fault tolerance: Over time, there is always a possibility that sensor nodes may fail or be blocked due to lack of power, physical damage, or environmental interference. The failure of sensor nodes, however, should not affect the overall operation of the sensor network. Thus, fault tolerance or reliability is the ability to sustain sensor network functionality despite likely problems.
2. Accuracy improvement: Redundancy of information can reduce overall uncertainty and increase the accuracy with which events are perceived. Since nodes located close to each other combine information about the same event, fused data improve the quality of the event information.
3. Network topology: A large number of nodes deployed throughout the sensory field should be maintained by a carefully designed topology, because any changes in sensor nodes and their deployment affect the overall performance of the DSN. Therefore, a simple and flexible topology is usually preferred.
4. Timeliness: A DSN can provide the processing parallelism that may be needed to achieve an effective integration process, either at the actual speed that a single sensor could provide or at an even faster operation speed.
5. Energy consumption: Since each wireless sensor node works with a limited power source, the design of power-saving protocols and algorithms is a significant issue for providing a longer lifetime of sensor network systems.
6. Lower cost: Despite the use of redundancy, a distributed microsensor system obtains information at lower cost than the equivalent information expected from a single sensor,
because it does not require the additional cost of functions to obtain the same reliability and accuracy. The current cost of a microsensor node, e.g., a dust mote [7], is still high, but it is expected to drop below US$ 1 in the near future, so that sensor networks can be justified.
7. Scalability: The coverage area of a sensor network system depends on the transmission range of each node and the density of the deployed sensors. The density of the deployed nodes should be carefully designed to provide a topology appropriate for the specific application.

To provide the optimal solution to meet these design criteria in a sensor network, researchers have considered various protocols and algorithms. However, none of these studies has improved all design factors, because the design of a sensor network system has typically been application specific. A distributed network of microsensor arrays (MSA) can yield more accurate and reliable results based on built-in redundancy. While micro-electro-mechanical systems (MEMS) sensor technology has advanced significantly in recent years, scientists now realize the need to design effective MEMS sensor communication networks and task administration.
14.3.4 Sensor Network Architectures

Recent developments of flexible and robust protocols with improved fault tolerance will not only meet essential requirements in distributed systems but will also provide advanced features needed in specific applications. They can produce widely accessible, reliable, and accurate information about physical environments. Various architectures have been proposed and developed to improve the performance of systems and the fault-tolerance functionality of complex networks, depending on their applications. General DSN structures for multisensor systems were first discussed by Wesson et al. [43]. Iyengar et al. [44] and Nadig et al. [45] improved and developed new architectures for distributed sensor integration. A network is a general graph G = (V, L), where V is a set of nodes (or vertices) and L is a set of communicating links (or edges) between nodes. For a DSN, a node is an intelligent sensing node consisting of a computational processor and associated sensors, and an edge is the connectivity between nodes. As shown in Fig. 14.9, a DSN consists of a set of sensor nodes, a set of cluster-head (CH) nodes, and a communication network interconnecting the nodes [40, 44]. In general, one sensor node communicates with more than
Fig. 14.9 Four different configurations of wireless sensor networks: (a) single hop with clustering, (b) multihop with clustering, (c) single hop without clustering, and (d) multihop without clustering. BS base station
one CH, and a set of nodes communicating with a CH is called a cluster. A clustering architecture can increase system capacity and enable better resource allocation [45, 46]. Data are integrated at the CH, which receives the required information from the associated sensors of its cluster. CHs can interact not only with other CHs but also with higher-level CHs or a base station. A number of network configurations have been developed to prolong network lifetime and reduce the energy consumed in forwarding data. In order to minimize energy consumption, routing schemes can be broadly classified into two categories: (1) clustering-based data forwarding schemes (Fig. 14.9a, b) and (2) multihop data forwarding schemes without clustering (Fig. 14.9c, d). In recent years, with the advancement of wireless mobile communication technologies such as fifth-generation network communication, ad hoc wireless sensor networks (AWSNs) have become important. With this advancement, the above wired (microwired) architectures remain relevant only where wireless communication is physically prohibited; otherwise, wireless architectures are considered superior. The architecture of an AWSN is fully flexible and dynamic; that is, a mobile ad hoc network represents a system of wireless nodes that can freely reorganize into temporary networks as needed, allowing nodes to communicate in areas with no existing infrastructure. Thus, the interconnections between nodes can be dynamically changed, and the network is set up only for a short period of communication [47, 48]. The AWSN with an optimal ad hoc routing scheme has now become an important design concern. In applications where there is no given pattern of sensor deployment, such as product monitoring in flexible automa-
tion environment, the AWSN approach can provide efficient sensor networking. Especially in dynamic network environments such as AWSNs, three main distributed services, i.e., the lookup service, the composition service, and the dynamic adaptation service provided by self-organizing sensor networks, have also been studied to control the system (see, for instance, Ref. [49]). In order to route information in an energy-efficient way, directed diffusion routing protocols based on the localized computation model [50, 51] have been studied for robust communication. The data consumer initiates requests for data with certain attributes; nodes then diffuse the requests towards producers via a sequence of local interactions. This process sets up gradients in the network which channel the delivery of data. Even though the network status is dynamic, the impact of the dynamics can be localized. A mobile-agent-based DSN (MADSN) [52] utilizes the formal concept of an agent to reduce network bandwidth requirements. A mobile agent is a floating processor migrating from node to node in the DSN and performing data processing autonomously. Each mobile agent carries partially integrated data which will be fused at the final CH with other agents' information. To save time and energy, as soon as certain requirements of the network are satisfied during its tour, the mobile agent returns to the base station without having to visit the remaining nodes on its route. This logic reduces network load, overcomes network latency, and improves fault-tolerance performance.
14.3.5 Sensor Network Protocols

Communication protocols for distributed microsensor networks provide systems with better network capability and performance by creating efficient paths and accomplishing effective communication between the sensor nodes [49, 53, 54]. The point-to-point protocol (PTP) is the simplest communication protocol; a node transmits data to only one of its neighbors, as illustrated in Fig. 14.10a. However, PTP is not appropriate for a DSN, because there is no alternative communication path in case of node or link failures.
Fig. 14.10 Three basic communication protocols: (a) point-to-point protocol (PTP), (b) flooding protocol (FP), and (c) gossiping protocol (GP)
In the flooding protocol (FP), the information sent out by the sender node is addressed to all of its neighbors, as shown in Fig. 14.10b. This disseminates data quickly in a network where bandwidth is not limited and links are not loss-prone. However, since a node always sends data to its neighbors, regardless of whether or not a neighbor has already received the data from another source, flooding leads to the implosion problem and wastes resources by sending duplicate copies of data to the same node. The gossiping protocol (GP) [55, 56] is an alternative to classic flooding in which, instead of indiscriminately sending information to all its neighboring nodes, each sensor node forwards the data to only one randomly selected neighbor, as depicted in Fig. 14.10c. While GP distributes information more slowly than FP, it dissipates resources, such as energy, at a relatively lower rate. On the other hand, it is not as robust to link failures as a broadcasting protocol (BP), because a node can rely on only one other node to resend the information for it in the case of link failure. In order to solve the problems of implosion and overlap, Heinzelman et al. [57] proposed the sensor protocol for information via negotiation (SPIN). SPIN nodes negotiate with each other before transmitting data, which helps ensure that only useful transmissions of information are executed. Nodes in the SPIN protocol use three types of messages to communicate: ADV (new data advertisement), REQ (request for data), and DATA (data message). Thus, the SPIN protocol works in three stages: ADV–REQ–DATA. The protocol begins when a node advertises new data that are ready to be disseminated by sending an ADV message to its neighbors, naming the new data (ADV stage). Upon receiving an ADV, a neighboring node checks whether it has already received or requested the advertised data, to avoid the implosion and overlap problems. If not, it responds by sending a REQ message for the missing data back to the sender (REQ stage). The protocol completes when the initiator responds to the REQ with a DATA message containing the missing data (DATA stage). In a relatively large sensor network, a clustering architecture with local cluster heads (CHs) is necessary. Heinzelman et al. [58] proposed the low-energy adaptive clustering hierarchy (LEACH), a clustering-based protocol that uses randomized rotation of local cluster base stations to evenly distribute the energy load of sensors in a DSN. Energy-minimizing routing protocols have also been developed to extend the lifetime of the sensing nodes in a wireless network; for example, the minimum transmission energy (MTE) routing protocol [59] chooses intermediate nodes such that the sum of squared distances is minimized, assuming a square-of-distance power loss between two nodes. This protocol, however, results in unbalanced termination of nodes with respect to the entire network.
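To make the flooding-versus-gossiping trade-off concrete, the toy Python simulation below runs the gossiping protocol on a small, hypothetical topology: in each round, every informed node forwards the data to one randomly chosen neighbor, spreading information more slowly than flooding but with far fewer transmissions.

# Toy simulation of the gossiping protocol (GP): each informed node forwards the
# data to ONE randomly selected neighbor per round (flooding would use all).
import random

def gossip(neighbors, source, seed=1):
    """neighbors: {node: [adjacent nodes]}. Returns (rounds, messages) needed."""
    random.seed(seed)
    informed = {source}
    rounds = messages = 0
    while len(informed) < len(neighbors):
        rounds += 1
        newly_informed = set()
        for node in informed:
            target = random.choice(neighbors[node])   # one random neighbor
            messages += 1
            newly_informed.add(target)
        informed |= newly_informed
    return rounds, messages

# Hypothetical six-node sensor field: a ring with two extra shortcut links.
topology = {0: [1, 5], 1: [0, 2], 2: [1, 3, 5], 3: [2, 4], 4: [3, 5], 5: [4, 0, 2]}
print(gossip(topology, source=0))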
In recent years, a time-based network protocol has also been developed. The objective of a time-based protocol is to ensure that, when any tasks keep a resource idle for too long, their exclusive service by the resource is disabled; that is, the time-based control protocol is intended to provide a rational collaboration rule among tasks and resources in the networked system [41]. Here, slow sensors will delay timely responses, and other sensors may need to consume extra energy. The patented fault-tolerant time-out protocol (FTTP) applies the basic concept of a time-out scheme to microsensor communication control (FTTP is a patent-pending protocol of the PRISM Center at Purdue University, USA). The design of industrial open protocols for mostly wired communication, known as fieldbuses, such as DeviceNet and ControlNet, has also evolved to provide open data exchange and a messaging framework [60]. Further development for wireless communication has been investigated in asset monitoring and maintenance using an open communication protocol such as ZigBee [61]. Wireless sensor network application designers also require middleware to deliver a general runtime environment that covers common requirements across application specifications. In order to provide robust functions to industrial applications under limited resource constraints and the dynamics of the environment, appropriate middleware is also required to embody trade-offs between the essential quality-of-service (QoS) requirements of applications. Typically, sensor network middleware has a layered architecture that is distributed among the networked sensor systems. Based on traditional architectures, researchers [62] have recently been developing a middleware for facility sensor networks (MidFSN) whose structure is classified into three layers, as depicted in Fig. 14.11.

Fig. 14.11 Middleware architecture for facility sensor network applications (MidFSN): sensor level (networked sensors; low-level data abstraction: data dissemination, self-configuration, sensor rate configuration, communication scheduling, battery power monitoring, sensor state control, cluster control messages), middleware level (context management: resource, cluster, energy, and temporal context; services: service interpreter, application specification, QoS requirements, data aggregation, analysis service; QoS control: error handling, service differentiation, network control, event detection and coordination, middleware component control), and application level (application manager: data analysis, database management, application requirements, industrial application classification, trade-offs). (After Ref. [36])

14.3.6 Sensor Network Security

Another issue in wireless sensor networks is related to sensor network security. Since a sensor network may operate in a hostile environment, security needs to be built into the network design and not added as an afterthought. That is, network techniques providing low-latency, survivable, and secure networks are required. However, sensor network security, especially in wireless sensor networks, is difficult to achieve because of the fundamental vulnerability of wireless links. In general, a low probability of communication detection is needed because sensors are envisioned for use behind enemy lines. For the same reasons, the network should be protected against intrusion and spoofing. For wireless network security, the following solutions currently exist [63]:

1. Authentication during all routing phases: Implementing an authentication procedure during all routing phases to exclude unauthorized nodes from participating in the routing.
2. Randomized message forwarding: A node selects a requesting message at random to forward. This method, however, increases the delay of route discovery.
3. Secure neighbor verification: By using a three-round authenticated message exchange between two nodes, this solution prevents the illegal use of a high power range to launch rushing attacks.
4. Trust-level metric: Mirroring the minimum trust value required by the sender. However, it is difficult to define the trust level based on a proper key-sharing mechanism.
Fig. 14.12 Wireless microsensor network system with two different types of microsensor nodes (photoelectronic and inductive sensors) communicating through middleware (MidFSN, with resource management and QoS control services) to a base station in an industrial automation application. (After Ref. [62])
14.3.7 Sensor Network Applications

Distributed microsensor networks have mostly been applied to military applications. However, a recent trend in sensor networks has been to apply the technology to various industrial applications. Figure 14.12 illustrates a networked sensor application used in factory automation. Environmental applications, such as the examination of flowing water and the detection of air contaminants, require a flexible and dynamic sensor network topology. Biomedical applications that collect information from inside the human body are based on bio/nanotechnology. For these applications, the geometrical and dynamical characteristics of the system must be considered at the design step of the network architecture [51]. It is also essential to use fault-tolerant network protocols to aggregate very weak signals without losing any critical signals. Specifically designed sensor network systems are also applicable to intelligent transportation systems, material flow monitoring, and home/office network systems. Public transportation systems are another example of sensor network applications. Sensor networks have been successfully implemented in highway systems for vehicle and traffic monitoring, providing key technology for intelligent transportation systems (ITS). Recently, various networked sensors have been applied to railway systems to monitor the location of rolling stock and to detect objects or obstacles on the rail in advance (Fig. 14.13). Networked sensors can cover a wide monitoring area and deliver more accurate information by implementing multisensor fusion algorithms [64–66].
Recent self-driving, autonomous cars utilize a diverse array of sensors (Fig. 14.14), many of which share information for better vehicle intelligence. The majority of automotive manufacturers most commonly use three types of sensors in the autonomous vehicle: cameras, radars, and lidars. These sensors play an essential role in automated driving, such as detecting oncoming obstacles, monitoring the surroundings, and safely planning the vehicle's path, and they are integrated with the main computer of the vehicle. Rapid improvement of autonomous vehicle technology is drawing attention to research on vehicle-to-vehicle (V2V) communication based on sensor network technology.
14.3.8 Industrial Internet of Things (IIoT)

In the era of Industry 4.0, the fourth industrial revolution, modern industry brings computerization and interconnectivity into traditional industry [67]. Industry 4.0 makes factories and manufacturing processes more flexible and intelligent by using sensors, autonomous machines, and microchips. In particular, the wide availability of high-speed wired or wireless internet accelerates processing, communication, and networking capabilities. Since the term internet of things (IoT) was first used in 1999 [68], the industrial internet of things (IIoT) has additionally come to denote the extensive use of the IoT in manufacturing through automation, optimization, intelligent or smart factories, industrial control, and maintenance, as illustrated in Fig. 14.15 [69–71]. While there are plenty of IoT definitions, one of them is "network
Fig. 14.13 Networked sensors for train tracking and tracing: global positioning system (GPS), accelerometer, Doppler sensor, gyroscope, tachometer, and transponder readings combined through a Kalman filter
Fig. 14.14 Diverse sensors used in an autonomous driving car: lidars, radars, cameras, and the main computer in the truck
connectivity and computing capability to objects, devices, sensors, and items not ordinarily considered to be computers. These 'smart objects' require minimal human intervention to generate, exchange, and consume data; they often feature connectivity to remote data collection, analysis, and management capabilities" [68]. Fundamental aspects of the IIoT are listed as follows:

• Architecture: While an IIoT architecture can be designed differently depending on the application, the Industrial Internet Consortium provides models from four different viewpoints: business, functional, usage, and implementation views [69].
• Connectivity: The IIoT and its communication protocols should provide interoperability at different levels, as well as the scalability and flexibility to transfer smart data among existing heterogeneous communication protocols.
• Standardization: Since the fundamental goal of the IIoT is information sharing across heterogeneous environments, standardization activities should ensure interoperability by identifying potential global needs, benefits, concepts, and conditions for the factory.
Table 14.4 compares key features of the IoT and the IIoT. The main benefits of adopting the IIoT include improved operational efficiency, maximized asset utilization, reduced downtime, and higher productivity.
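To make the "smart object" definition above concrete, the sketch below shows one way a networked sensor node might package and emit telemetry as structured, self-describing data. The field names, the topic string, and the use of JSON with a print stub as the transport are illustrative assumptions only; a real IIoT deployment would hand the payload to an interoperable protocol stack (for example MQTT or OPC UA) chosen according to the connectivity and standardization considerations listed above.

```python
import json
import time
import random

def read_temperature_c() -> float:
    """Stand-in for a real transducer driver (assumed interface)."""
    return 20.0 + random.random()

def build_payload(device_id: str, value: float) -> str:
    """Package a reading as self-describing JSON so heterogeneous
    consumers (MES, cloud analytics, maintenance tools) can parse it."""
    return json.dumps({
        "device_id": device_id,
        "measurement": "temperature",
        "unit": "degC",
        "value": round(value, 2),
        "timestamp": time.time(),
    })

def publish(topic: str, payload: str) -> None:
    # Transport stub: in practice this would be an MQTT/OPC UA client call
    # rather than a print statement.
    print(f"{topic} -> {payload}")

if __name__ == "__main__":
    for _ in range(3):
        publish("factory/line1/press7/temperature",
                build_payload("press7-temp-01", read_temperature_c()))
        time.sleep(0.1)
```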
14.4 Emerging Trends
Energy-conserving microsensor network protocols have drawn great attention over the years, and other important metrics such as latency, scalability, and connectivity have also been studied in depth. Nevertheless, there are still emerging research issues in sensor network systems. Although current wireless microsensor network research is moving toward more practical application areas, the following new topics should be examined further.
Fig. 14.15 Industrial internet of things (IIoT) and industry 4.0 in manufacturing: smart factory solutions, automation robots, smart products, cyber-physical production systems (CPPS), and their impact on the factory
14.4.1 Heterogeneous Sensors and Applications
In many networked sensor applications and their performance evaluations, homogeneous or identical sensors have most commonly been considered; network performance has therefore been determined mainly by the geometrical distances between sensors and the remaining energy of each sensor. In practical applications, other factors can also influence coverage, such as obstacles, environmental conditions, and noise. In addition to nonhomogeneous sensors, other sensor models can deal with nonisotropic sensitivities, where sensors have different sensitivities in different directions.
The integration of multiple types of sensors, such as seismic, acoustic, and optical sensors, in a single network platform, and the study of the overall coverage of such a system, also present several interesting challenges. The integration of diverse sensors for self-driving, autonomous vehicles is a recent example of heterogeneous sensor utilization. In addition, when sensor nodes must be shared by multiple applications with differing goals, protocols must serve those applications simultaneously and efficiently. For the development of heterogeneous sensors and their network applications, several research questions should therefore be considered:
1. How should resources be utilized optimally in heterogeneous sensor networks?
2. How should heterogeneous data be handled efficiently?
3. How much and what type of data should be processed to meet quality-of-service (QoS) goals while minimizing energy usage?
Table 14.4 Comparison between IoT and IIoT [70]
Feature | IoT | IIoT
Status | New devices and standards | Existing devices and standards
Model | Human-centered | Machine-oriented
Connectivity | Ad hoc | Structured (nodes are fixed; centralized network management)
Criticality | Not stringent | Mission critical (timing, reliability, security, privacy)
Data volume | Medium to high | Very high
14.4.2 Appropriate Quality-of-Service (QoS) Model
Research on QoS has received considerable attention over the years. QoS has to be supported at the media access control (MAC), routing, and transport layers. Most existing ad hoc routing protocols do not support QoS; the routing metric used in current work is still the shortest path or minimum hop count. However, bandwidth, delay, jitter, and packet loss (reliability, or data delivery ratio) are other important QoS parameters. Hence, the mechanisms of current ad hoc routing protocols should allow route selection based on both the QoS requirements and the QoS availability. In addition to establishing QoS routes, QoS assurance during route reconfiguration also has to be supported, so that end-to-end QoS requirements continue to be met. There is therefore still significant room for research in this area.
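A hedged sketch of the QoS-aware route selection discussed above: instead of the minimum-hop path, the code keeps only links whose residual bandwidth meets the application's requirement and then picks the admissible path with the lowest end-to-end delay, rejecting routes that exceed the delay bound. The topology, link metrics, and thresholds are invented for illustration and do not come from any particular protocol.

```python
import heapq

# Each link: neighbor -> (delay_ms, bandwidth_kbps); values are illustrative.
links = {
    "A": {"B": (10, 250), "C": (40, 900)},
    "B": {"D": (15, 120)},
    "C": {"D": (20, 800)},
    "D": {},
}

def qos_route(src, dst, max_delay_ms, min_bw_kbps):
    """Shortest-delay path using only links that meet the bandwidth
    requirement; reject the route if its total delay exceeds the bound."""
    best = {src: (0, [src])}
    heap = [(0, src, [src])]
    while heap:
        delay, node, path = heapq.heappop(heap)
        if node == dst:
            return (path, delay) if delay <= max_delay_ms else None
        for nxt, (d, bw) in links[node].items():
            if bw < min_bw_kbps:          # QoS availability check per link
                continue
            nd = delay + d
            if nxt not in best or nd < best[nxt][0]:
                best[nxt] = (nd, path + [nxt])
                heapq.heappush(heap, (nd, nxt, path + [nxt]))
    return None

# The B-D link is filtered out for lacking bandwidth, so A-C-D is chosen.
print(qos_route("A", "D", max_delay_ms=70, min_bw_kbps=200))
```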
14.4.3 Integration with Other Networks
For information exchange and data fusion, sensor networks may interface with other networks, such as a Wi-Fi network, a cellular network, or the industrial internet of things (IIoT). Finding the best way to interface these networks while meeting the required security will therefore be a major issue. Sensor network protocols should support the protocols of the other networks (or at least be integrated as a separate network); otherwise, sensors could be given dual network interface capabilities.
References 1. Luo, R.C.: Sensor technologies and microsensor issues for mechatronics systems. IEEE/ASME Trans. Mechatron. 1(1), 39–49 (1996) 2. Luo, R.C., Yih, C., Su, K.L.: Multisensor fusion and integration: approaches, applications, and future research directions. IEEE Sensors J. 2(2), 107–119 (2002) 3. Wilson, J.S.: Sensor Technology Handbook. Newnes Elsevier, Amsterdam (2004) 4. Fraden, J.: Handbook of Modern Sensors: Physics, Designs, and Applications, 3rd edn. Springer, Berlin/Heidelberg (2003) 5. Webster, J.G. (ed.): The Measurement, Instrumentation and Sensors Handbook. CRC Press, Boca Raton (1998) 6. de Silva, C.W. (ed.): Mechatronic Systems: Devices, Design, Control, Operation and Monitoring. CRC Press, Boca Raton (2008) 7. Warneke, B., Last, M., Liebowitz, B., Pister, K.S.J.: Smart dust: communicating with a cubic-millimeter computer. Computer. 34(1), 44–51 (2001) 8. Yamasaki, H. (ed.): Intelligent Sensors, Handbook of Sensors and Actuators, vol. 3. Elsevier, Amsterdam (1996) 9. Richards, M.A., Scheer, J., Melvin, W.A.: Principle of Modern Radar. Scitech Publishing (2010) 10. National Oceanic and Atmospheric Administration: What is LIDAR. US Department of Commerce. https:// oceanservice.noaa.gov/facts/lidar.html. Accessed 23 Mar 2021 11. Travis, S.: Introduction to Laser Science and Engineering. CRC Press (2019) 12. Shan, J., Toth, C.K.: Topographic Laser Ranging and Scanning, 2nd edn. CRC Press (2018) 13. Tesla.: https://www.tesla.com. Accessed 25 Mar 2021 14. Harris, M.: The unknown start-up that built Google’s first selfdriving car. In: IEEE Spectrum: Technology, Engineering, and Science News (2014) 15. Waymo LLC.: https://waymo.com. Accessed 25 Mar 2021 16. Uber Advanced Technologies Group (ATG).: https:// www.uber.com/us/en/atg/. Accessed 25 Mar 2021 17. Cetinkunt, S.: Mechatronics. Wiley, New York (2007) 18. Alciatore, D.G., Histand, M.B.: Introduction to Mechatronics and Measurement Systems. McGraw-Hill, New York (2007) 19. Batchelor, B.G. (ed.): Machine Vision Handbook. Springer, London (2012) 20. Hornberg, A. (ed.): Handbook of Machine and Computer Vision. Wiley-VCH Verlag GmbH (2017) 21. Valueva, M.V., Nagornov, N.N., Lyakhov, P.A., Valuev, G.V., Chervyakov, N.I.: Application of the residue number system to reduce hardware costs of the convolutional neural network implementation. Math. Comput. Simul. 177, 232–243 (2020) 22. Bengio, Y., Courville, A., Vincent, P.: Representation learning: a review and new perspectives. IEEE Trans. Pattern Anal. Mach. Intell. 35(8), 1798–1828 (2013) 23. Schmidhuber, J.: Deep Learning in Neural Networks: An Overview. Neural Netw. 61, 85–117 (2015) 24. Bengio, Y., LeCun, Y., Hinton, G.: Deep learning. Nature. 521(7553), 436–444 (2015) 25. Hu, J., Niu, H., Carrasco, J., Lennox, B., Arvin, F.: Voronoi-based multi-robot autonomous exploration in unknown environments via deep reinforcement learning. IEEE Trans. Veh. Technol. 69(12), 14413–14423 (2020) 26. Keyence Vision Sensor with Built-in AI.: https://keyence.com/ products/vision/vision-sensor/iv2/. Accessed 3 Aug 2020 27. Cognex Deep Learning Tools.: https://cognex.com/products/deeplearning. Accessed 3 Aug 2020 28. Chen, N.: Breaking the Boundaries of Smart Camera and Embedded Vision Systems, White Paper. ADLINK Technology (2020)
W. Jeong 29. Brooks, R., Iyengar, S.: Multi-Sensor Fusion. Prentice Hall, New York (1998) 30. Faceli, K., Andre de Carvalho, C.P.L.F., Rezende, S.O.: Combining intelligent techniques for sensor fusion. Appl. Intell. 20, 199–213 (2004) 31. Hugh, D.-W., Henderson, T.C.: Multisensor data fusion. In: Siciliano, B., Khatib, O. (eds.) Springer Handbook of Robotics. Springer, Berlin/Heidelberg (2008) 32. Mitchell, H.B.: Multi-Sensor Data Fusion. Springer, Berlin/Heidelberg (2007) 33. Elfes, A.: Sonar-based real-world mapping and navigation. IEEE Trans. Robot. Autom. 3(3), 249–265 (1987) 34. Stone, L.D., Barlow, C.A., Corwin, T.L.: Bayesian Multiple Target Tracking. Artech House, Norwood (1999) 35. Mayback, P.S.: Stochastic Models, Estimation and Control, vol. I. Academic, New York (1979) 36. Bar-Shamon, Y., Fortmann, T.E.: Tracking and Data Association. Academic, New York (1998) 37. Zadeh, L.A.: Discussion: probability theory and fuzzy logic are complementary rather than competitive. Technometrics. 37(3), 271–276 (1995) 38. Gaines, B.R.: Fuzzy and probability uncertainty logics. Inf. Control. 38(2), 154–169 (1978) 39. Luo, R.C., Kay, M.G.: Multisensor integration and fusion in intelligent systems. IEEE Trans. Syst. Man Cybern. 19(5), 901–931 (1989) 40. Iyengar, S.S., Prasad, L., Min, H.: Advances in Distributed Sensor Integration: Application and Theory, Environmental and Intelligent Manufacturing Systems, vol. 7. Prentice Hall, Upper Saddle River (1995) 41. Liu, Y., Nof, S.Y.: Distributed micro flow-sensor arrays and networks: design of architecture and communication protocols. Int. J. Prod. Res. 42(15), 3101–3115 (2004) 42. Nof, S.Y., Liu, Y., Jeong, W.: Fault-tolerant time-out communication protocol and sensor apparatus for using same, patent pending, Purdue University (2003) 43. Wesson, R., Hayes-Roth, F., Burge, J.W., Stasz, C., Sunshine, C.A.: Network structures for distributed situation assessment. IEEE Trans. Syst. Man Cybern. 11(1), 5–23 (1981) 44. Iyengar, S.S., Jayasimha, D.N., Nadig, D.: A versatile architecture for the distributed sensor integration problem. IEEE Trans. Comput. 43(2), 175–185 (1994) 45. Nadig, D., Iyengar, S.S., Jayasimha, D.N.: A new architecture for distributed sensor integration. In: Proceedings of IEEE Southeastcon’93E (1993) 46. Ghiasi, S., Srivastava, A., Yang, X., Sarrafzadeh, M.: Optimal energy aware clustering in sensor networks. Sensors. 2, 258–269 (2002) 47. Lin, C.R., Gerla, M.: Adaptive clustering for mobile wireless networks. IEEE J. Sel. Areas Commun. 15(7), 1265–1275 (1997) 48. Ilyas, M.: The Handbook of Ad Hoc Wireless Networks. CRC Press, Boca Raton (2002) 49. Lim, A.: Distributed services for information dissemination in selforganizing sensor networks. J. Frankl. Inst. 338(6), 707–727 (2001) 50. Estrin, D., Heidemann, J., Govindan, R., Kumar, S.: Next century challenges: scalable coordination in sensor networks. In: Proceedings of the 5th Annual International Conference on Mobile Computing and Networking (MobiCOM ’99) (1999) 51. Intanagonwiwat, C., Govindan, R., Estrin, D., Heidemann, J., Silva, F.: Directed diffusion for wireless sensor networking. IEEE/ACM Trans. Netw. 11(1), 2–16 (2003) 52. Chau, Y.A., Geraniotis, E.: Multisensor correlation and quantization in distributed detection systems. In: Proceedings of the 29th IEEE Conference on Decision and Control, vol. 5, pp. 2692–2697 (1990)
53. Heinzelman, W.B., Chandrakasan, A.P., Balakrishnan, H.: An application-specific protocol architecture for wireless microsensor networks. IEEE Trans. Wirel. Commun. 1(4), 660–670 (2002) 54. Iyengar, S.S., Sharma, M.B., Kashyap, R.L.: Information routing and reliability issues in distributed sensor networks. IEEE Trans. Signal Process. 40(12), 3012–3021 (1992) 55. A.S. Ween, N. Tomecko, D.E. Gossink: Communications architectures and capability improvement evaluation methodology, 21st Century Military Communications Conference on Proceedings (MILCOM 2000), 2 (2000) pp. 988–993 56. Hedetniemi, S.M., Hedetniemi, S.T., Liestman, A.L.: A survey of gossipingand broadcasting in communication networks. Networks. 18, 319–349 (1988) 57. Heinzelman, W.R., Kulik, J., Balakrishnan, H.: Adaptive protocols for information dissemination in wireless sensor networks. In: Proceedings of the 5th Annual ACM/IEEE International Conference on Mobile Computing and Networking (MobiCom’99), pp. 174– 185 (1999) 58. Heinzelman, W.R., Chandrakasan, A., Balakrishnan, H.: Energyefficient communication protocol for wireless microsensor networks. In: Proceedings of 33rd Annual Hawaii International Conference on System Sciences, pp. 3005–3014 (2000) 59. Ettus, M.: System capacity, latency, and power consumption in multihop-routed SS-CDMA wireless networks. In: Radio and Wireless Conference (RAWCON’98), pp. 55–58 (1998) 60. OPC HAD specifications: Version 1.20.1.00 edition (2003). http:// www.opcfoundation.org 61. ZigBee Alliance: Network Specification, Version 1.0, 2004 62. Jeong, W., Nof, S.Y.: A collaborative sensor network middleware for automated production systems. Comput. Ind. Eng. 57(1), 106– 113 (2009) 63. Djenouri, D., Khelladi, L., Badache, A.N.: A survey of security issues in mobile ad hoc and sensor networks. IEEE Commun. Surv. Tutorials. 7(4), 2–28 (2005) 64. Jeong, W., Nof, S.Y.: Performance evaluation of wireless sensor network protocols for industrial applications. J. Intell. Manuf. 19(3), 335–345 (2008) 65. Jeong, W., Jeong, D.: Acoustic roughness measurement of railhead surface using an optimal sensor batch algorithm. Appl. Sci. 10, 2110 (2020)
66. Jeong, D., Choi, H.S., Choi, Y.J., Jeong, W.: Measuring acoustic roughness of a longitudinal railhead profile using a multi-sensor integration technique. Sensors. 19, 1610 (2019) 67. Lu, Y.: Industry 4.0: a survey on technologies, applications and open research issues. J. Ind. Inf. Integr. 6, 1–10 (2017) 68. Rose, K., Eldridge, S., Chapin, L.: The internet of things: an overview. Internet Society (ISOC). 80, 1–50 (2015) 69. Boyes, H., Hallaq, B., Cunningham, J., Watson, T.: The industrial internet of things (IIoT): an analysis framework. Comput. Ind. 101, 1–12 (2018) 70. Sisinni, E., Saifullah, A., Han, S., Jennehag, U., Gidlund, M.: Industrial internet of things: challenges, opportunities, and directions. IEEE Trans. Ind. Inf. 14(11), 4724–4734 (2018) 71. Industrial Internet Consortium. Industrial internet reference architecture. http://www.iiconsortium.org/IIRA.htm. Accessed 25 Mar 2021
Wootae Jeong received his PhD from Purdue University in 2006. He is currently a Principal Researcher at the Korea Railroad Research Institute and has concurrently served as an adjunct Professor at the University of Science and Technology since 2007. His research interests include the measurement and control of automation and production systems, autonomous measuring devices, and intelligent robot systems.
15 Intelligent and Collaborative Robots
Kenji Yamaguchi and Kiyonori Inaba
K. Yamaguchi · K. Inaba
FANUC CORPORATION, Yamanashi, Japan
e-mail: [email protected]; [email protected]

Contents
15.1 The Industrial Robot Market
15.2 Emergence of Intelligent and Collaborative Robots
15.3 Intelligent and Collaborative Robots
15.3.1 Basic Technology for Industrial Robots
15.3.2 Intelligent Robot
15.3.3 Collaborative Robot
15.4 Offline Programming and IoT
15.4.1 Offline Programming
15.4.2 IoT
15.5 Applications of Intelligent and Collaborative Robots
15.5.1 Welding
15.5.2 Machining
15.5.3 Assembly
15.5.4 Picking, Packing, and Palletizing
15.5.5 Painting
15.6 Installation Guidelines
15.6.1 Range of Automation
15.6.2 Return on Investment
15.7 Conclusion
References

Abstract
It has been a belief for a long time since the birth of the industrial robot that the only thing it can do is play back simple motions that are taught in advance. At the beginning of the twenty-first century, the industrial robot was reinvented as the industrial intelligent robot, able to perform highly complicated tasks like skilled production workers, mainly due to the rapid advancement in vision and force sensors. In addition, collaborative robots were developed to meet the demand of collaboration between human operators and industrial robots. The industrial intelligent robot and the collaborative robot have recently become a key technology for solving issues that today's manufacturing industry is faced with, including a decreasing number of skilled workers and demands for reduced manufacturing costs and delivery time. In this chapter, the latest trends in key technologies such as vision sensors, force sensors, and safety measures for collaborative robots are introduced, together with some robot cell application examples that have succeeded in drastically reducing machining costs.

Keywords
Industrial robot · Intelligent robot · Collaborative robot · Vision sensor · Force sensor · ISO 10218 · Mobile robot · Offline programming · IoT · Applications
15.1 The Industrial Robot Market
Automation involving the use of industrial robots has helped to improve productivity and manufacturing quality, mainly in the automotive industry. Recently, securing an adequate workforce has become a serious issue because of declining birth rates and aging populations all over the world. Therefore, the demand for robotization has grown in various production processes, not only in the automotive industry, but also in other markets such as the logistics and food industries. The number of industrial robots in operation in major industrialized countries is shown in Table 15.1 [1]. Conventional industrial robots are typically used in applications such as spot welding, arc welding, material handling, loading/unloading, assembly, and painting, as shown in Figs. 15.1, 15.2, 15.3, and 15.4. Devol started the history of the industrial robot by filing the patent for its basic idea in 1954. A teaching-playback-type industrial robot was delivered as a product for the first time in the USA in 1961, and many teaching-playback-type robots were adopted in factories around the world.
Table 15.1 Number of industrial robots in operation in select countries. (Source: World Robotics 2020) Country/Region Africa South Africa Rest of Africa Egypt Morocco Tunisia Other Africa America North America Canada Mexico United States South America Brazil Rest of South America Argentina Chile Colombia Peru Puerto Rico Venezuela America, not specified Asia/Australia South East Asia China India Indonesia Japan Republic of Korea Malaysia Singapore Chinese Taipei Thailand Vietnam Other South/East Asia Hong Kong, China North Korea Macau Philippines Rest of Asia Iran Kuwait Oman Pakistan Quatar Saudia Arabia United Arab Emirates Uzbekistan Other Asia Australia/New Zealand Australia
2014 3,874 3,452 422 112 128 141 41 248,430 236,891 8,180 9,277 219,434 11,405 9,557 1,848 1,593 102 68 20 41 24 134 780,128 764,409 189,358 11,760 5,201 295,829 176,833 5,730 7,454 43,484 23,893 1,945 2,922 1,998 43 10 871 669 554 2 2 2 1 70 34 4 6,259 8,791 7,927
2015 4,114 3,604 510 143 142 155 70 273,607 260,642 11,654 14,743 234,245 12,818 10,733 2,085 1,761 130 103 23 43 25 147 887,397 871,207 256,463 13,768 6,265 286,554 210,458 6,537 9,301 49,230 26,293 2,455 3,883 2,817 43 10 1,013 389 226 2 6 3 1 81 66 4 7,069 8,732 7,742
2016 4,906 4,322 584 167 173 158 86 299,502 285,143 13,988 20,676 250,479 14,189 11,732 2,457 2,078 151 124 33 46 25 170 1,034,397 1,015,504 349,470 16,026 7,155 287,323 246,374 8,168 11,666 53,119 28,182 4,059 3,962 2,808 44 10 1,100 373 168 2 6 11 3 91 88 4 9,879 8,641 7,536
2017 5,153 4,457 696 188 230 180 98 323,704 307,135 18,045 27,032 262,058 15,071 12,413 2,658 2,238 182 149 48 16 25 1,498 1,253,498 1,231,550 501,185 19,000 7,913 297,215 273,146 10,788 15,801 59,930 30,110 12,234 4,228 2,903 45
2018 5,521 4,416 1,105 253 536 199 117 361,006 339,354 21,627 32,713 285,014 17,180 14,179 3,001 2,504 211 196 49 16 25 4,472 1,477,878 1,449,835 649,447 22,935 8,655 318,110 300,197 12,400 19,858 67,768 32,331 13,782 4,352 2,889 56
2019 6,545 5,120 1,425 295 746 242 142 389,233 362,136 25,230 37,275 299,631 18,925 15,294 3,631 3,064 250 220 58 16 23 8,172 1,687,763 1,653,951 783,358 26,306 9,147 354,878 319,022 13,114 21,935 71,782 33,962 15,865 4,582 2,858 56
2019/2018 +19% +16% +29% +17% +39% +22% +21% +8% +7% +17% +14% +5% +10% +8% +21% +22% +18% +12% +18%
1,280 429 175 1 7 44 4 103 95
1,407 516 190 3 7 67 4 133 110 2 19,380 8,147 6,927
1,668 569 201 3 7 73 11 157 115 2 25,316 7,927 6,649
+19% +10% +6%
13,221 8,298 7,126
–8% +83% +14% +14% +21% +15% +6% +12% +6% +6% +10% +6% +5% +15% +5% –1%
+9% +175% +18% +5% +31% –3% –4%
CAGR 2014–2019 +11% +8% +28% +21% +42% +11% +28% +9% +9% +25% +32% +6% +11% +10% +14% +14% +20% +26% +24% –17% –1% +128% +17% +17% +33% +17% +12% +4% +13% +18% +24% +11% +7% +52% +9% +7%
+14% –3% –18% +8% +28% +105% +62% +18% +28% –13% +32% –2% –3% (continued)
Table 15.1 (continued) Country/Region 2014 New Zealand 864 Europe 411,062 Central and Eastern Europe 30,782 Balkan countries 1,989 Bosnia-Herzegowina 7 Croatia 121 Serbia 42 Slovenia 1,819 Czech Republic 9,543 Hungary 4,302 Poland 6,401 Romania 1,361 Russian Federation 2,694 Slovakia 3,891 Other Eastern Europe 601 Belarus 140 Bulgaria 197 Estonia 83 Latvia 19 Lithuania 57 Moldova 5 Ukraine 100 Western Europe 345,078 Austria 7,237 Belgium 7,995 Germany 175,768 Spain 27,983 France 32,233 Italy 59,823 Netherlands 8,470 Portugal 2,870 Switzerland 5,764 United Kingdom 16,935 Nordic countries 21,047 Denmark 5,119 Finland 4,178 Norway 1,008 Sweden 10,742 Rest of Europe 8,317 Turkey 6,286 All other European countries 2,031 Greece 392 Iceland 22 Ireland 667 Israel 938 Malta 12 Other Europe 5,838 Others not specified 28,594 Total 1,472,088 Sources: IFR, national associations
2015 990 433,303 36,273 2,290 9 136 65 2,080 11,238 4,784 8,136 1,702 3,032 4,378 713 145 250 97 22 93 5 101 358,267 7,859 7,989 182,632 29,718 32,161 61,282 9,739 3,160 6,258 17,469 22,508 5,459 4,124 1,068 11,857 10,269 7,940 2,329 446 23 763 1,080 17 5,986 33,229 1,631,650
2016 1,105 459,972 43,612 2,727 19 163 93 2,452 13,049 5,424 9,693 2,468 3,366 6,071 814 150 288 123 24 128 5 96 373,575 9,000 8,521 189,305 30,811 33,384 62,068 11,320 3,942 6,753 18,471 24,181 5,915 4,422 1,173 12,671 12,481 9,756 2,725 491 31 880 1,278 45 6,123 38,782 1,837,559
2017 1,172 498,045 52,863 3,128 25 175 123 2,805 15,429 7,711 11,360 3,076 4,028 7,093 1,038 161 371 170 28 195 5 108 396,027 10,156 9,207 200,497 32,352 35,321 64,403 12,505 4,622 7,476 19,488 25,202 6,361 4,342 1,250 13,249 14,782 11,599 3,183 568 42 945 1,533 95 9,171 44,876 2,125,276
2018 1,220 543,220 61,271 3,815 28 196 177 3,414 17,603 8,481 13,632 3,555 4,994 7,796 1,395 168 479 220 58 290 5 175 426,558 11,162 9,561 215,795 35,209 38,079 69,142 13,385 5,050 8,492 20,683 26,036 6,617 4,553 1,219 13,647 17,076 13,498 3,578 640 37 1,026 1,770 105 12,279 51,918 2,439,543
2019 1,278 579,948 69,091 4,456 30 239 246 3,941 19,391 9,212 15,769 4,057 6,185 8,326 1,695 192 560 283 72 385 5 198 447,833 12,016 9,965 221,547 36,716 42,019 74,420 14,370 5,620 9,506 21,654 27,036 6,820 4,721 1,271 14,224 18,945 15,022 3,923 665 37 1,130 1,979 112 17,043 58,588 2,722,077
2019/2018 +5% +7% +13% +17% +7% +22% +39% +15% +10% +9% +16% +14% +24% +7% +22% +14% +17% +29% +24% +33%
CAGR 2014–2019 +8% +7% +18% +18% +34% +15% +42% +17% +15% +16% +20% +24% +18% +16% +23% +7% +23% +28% +31% +47%
+13% +5% +8% +4% +3% +4% +10% +8% +7% +11% +12% +5% +4% +3% +4% +4% +4% +11% +11% +10% +4%
+15% +5% +11% +5% +5% +6% +5% +4% +11% +14% +11% +5% +5% +6% +2% +5% +6% +18% +19% +14% +11% +11% +11% +16% +56% +24% +15% +13%
+10% +12% +7% +39% +13% +12%
Fig. 15.1 Spot welding
Fig. 15.3 Loading/unloading
Fig. 15.2 Arc welding
Fig. 15.4 Painting
However, in the 2000s, owing to the diversification of market demands, it was required to meet not only conventional mass-production needs, but also the small-batch production of a higher variety of products. As a result, having to prepare various kinds of fixtures for each part type, which were needed for teaching-playback-type industrial robots, made automation difficult in many cases. It was necessary for robots to work flexibly like a human operator, adjusting their movements even when parts are not located precisely in a predetermined position. This requirement led to the development of intelligent robots equipped with vision or force sensors. Today, many of these intelligent robots have been widely deployed in production processes. Since industrial robots for heavy part handling or high-speed operations inherently have high energy, it is necessary
to enclose them with safety fences so as not to harm nearby humans. On the other hand, in manual assembly processes such as automobile final assembly and electronic device manufacturing, the robots are required to support human work to address labor shortages. At such manufacturing sites, since the safety fences can prevent human operators from getting close to the robots, the need increased for robots to be able to work collaboratively with humans without safety fences.
15.2 Emergence of Intelligent and Collaborative Robots
As mentioned above, in teaching-playback-type robots, dedicated equipment was used to supply workpieces to the robots. But to do so, human operators had to set the workpieces
into the dedicated equipment beforehand so that the robot was able to pick them up and handle them, such as when loading parts into a machine tool. In 2001, industrial intelligent robots appeared on the manufacturing scene, mainly to automate the loading of workpieces onto the fixtures of machine tools such as machining centers. After the first private-sector numerical control (NC) was developed in the 1950s, the machining process itself was almost completely automated by NCs. However, the loading and unloading of workpieces to and from machine tools was still done by human operators even in the 1990s. The term intelligent robot does not mean a humanoid robot that walks and talks like a human being, but rather one that performs highly complicated tasks like a skilled worker on the production site by utilizing vision sensors and force sensors. By having robots use vision to find and pick workpieces randomly located inside the containers in which they are delivered, the loading of workpieces onto the fixture of the machining center could be automated without human intervention or dedicated part-supply machinery. Thus, many intelligent robots have been deployed in production, mainly because of their high potential for enhancing global competitiveness and as a key technology for solving issues that today's manufacturing industry is faced with, including the decreasing number of skilled workers and demands for reducing manufacturing costs and delivery time. Recently, intelligent robots have been applied to more flexible and complicated tasks, and their use is increasing owing to the development of 3D vision sensors, vision technology utilizing AI, and force sensor technology that enables assembly work. On the other hand, there are still many manufacturing processes where robot automation of assembly is difficult, so many tasks in these processes still rely on manual work. Many advanced industrial countries, however, have difficulties in hiring operators because of falling birth rates and aging populations. At such manufacturing sites, robot-assisted operations are expected to reduce the burden on the operators. However, conventional robots that require safety fences make it difficult to achieve effective robot assistance [2–5]. For these reasons, the demand for collaborative robots that can work safely with human operators without safety fences has rapidly increased, and the development of collaborative robots began in earnest around 2010. In 2010 [8–10], the requirements for collaborative robots in ISO 10218-1, the industrial robot safety standard, were revised, initiating the introduction of collaborative robots for use at production sites. From the late 2010s to the present, many collaborative robots have been developed, which shows how high the market demand is.
15.3 Intelligent and Collaborative Robots
15.3.1 Basic Technology for Industrial Robots
Figure 15.5 shows a typical configuration of a vertical articulated six-axis robot [12]. There are no major mechanical differences between the structure of an intelligent robot or a collaborative robot and that of a conventional robot. They are composed of several servomotors, reducers, bearings, arm castings, and so on. As shown in Fig. 15.6, the sensor interface, which enables sensors such as vision sensors, force sensors, and safety sensors for collaborative robots to be connected to the controller, together with high-speed microprocessors and communication interfaces, is the key difference between intelligent and collaborative robots and conventional robots. The servo control of an intelligent robot or a collaborative robot is similar to that of a conventional robot, as shown in Fig. 15.7. However, the performance of several servo control functions of intelligent and collaborative robots, such as the interpolation times, is greatly enhanced compared with conventional robots [20–23].
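The cascaded position–velocity–current structure shown in Fig. 15.7 can be sketched as nested control loops. The snippet below simulates one axis with a proportional position loop wrapped around a PI velocity loop driving a simple inertia model; the gains, inertia, torque limit, and sample time are arbitrary illustrative values, not those of any real controller.

```python
# Minimal cascaded servo loop (position P -> velocity PI -> torque) on a
# single rigid axis, integrated with explicit Euler.
dt = 0.001            # servo interpolation period (s), assumed
J = 0.01              # axis inertia (kg*m^2), assumed
Kp_pos = 10.0         # position loop gain -> velocity command
Kp_vel, Ki_vel = 2.0, 50.0   # velocity PI gains -> torque command
tau_max = 5.0         # torque limit of the amplifier (N*m), assumed

pos, vel, vel_err_int = 0.0, 0.0, 0.0
target = 1.0          # target position (rad)

for step in range(3000):                 # 3 s of simulated motion
    vel_cmd = Kp_pos * (target - pos)    # outer position loop
    vel_err = vel_cmd - vel
    vel_err_int += vel_err * dt
    tau = Kp_vel * vel_err + Ki_vel * vel_err_int   # inner velocity PI loop
    tau = max(-tau_max, min(tau_max, tau))          # current/torque limit
    vel += (tau / J) * dt                # plant: torque -> acceleration
    pos += vel * dt

print(f"final position: {pos:.4f} rad (target {target} rad)")
```

In a real controller the innermost current loop and the feedback from the pulse coder run at much higher rates than this toy simulation suggests.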
15.3.2 Intelligent Robot The intelligent robot [11] is a robot that is controlled based on the visual and tactile information from the sensors it is equipped with, and can respond flexibly to the surrounding environment.
Vision Sensors Two types of vision sensors are often used in production sites, two-dimensional (2D) and three-dimensional (3D) vision sensors. The 2D vision sensor acquires two-dimensional images of an object by irradiating natural light or artificial
Fig. 15.5 Mechanical structure of an industrial robot: J1–J6 axes and motors, reducers, wrist flange, arm, and base
Fig. 15.6 Control system: motion-control and communication CPUs with SRAM, DRAM, and flash ROM, a servo processor and six-axis amplifier driving the servomotors, USB/RS-232C/RS-422/Ethernet interfaces, an I/O unit, a teach pendant, and connections for safety, force, and vision sensors
Fig. 15.7 Servo control: cascaded position, velocity, and current control loops, with position, velocity, and phase feedback provided by the pulse coder on the servomotor
light onto the object and taking an image of the reflected light by a CCD or other camera. On the other hand, with recent processing capacity improvements, 3D vision sensors have
been used in various robot applications, and it is possible to automate more complicated tasks such as picking randomly oriented parts from containers.
15
Intelligent and Collaborative Robots
341
Camera cable
Robot controller
Camera and lens
Lighting equipment
Workpiece
Fig. 15.8 Vision system
Depth image Generate 3D data (3D data generator tool)
3D application
Convert to grayscale image
2D application
2D image
2D application
Fig. 15.9 3D vision detection
The vision for industrial robots is required to be reliable in tough manufacturing environments. As shown in Fig. 15.6, by incorporating the image processing device into the robot controller and connecting the vision sensor directly to the robot controller, the vision attains the same high reliability and ease of use as the industrial robot itself. As shown in Fig. 15.8, a simple vision system can thus be realized. A robot can use a 3D vision sensor to measure the three-dimensional shape of parts, detect the position and orienta-
tion of each part that is three-dimensionally placed, and pick them up. As shown in Fig. 15.9, the measured distance is output as a depth image and is used to process with various detection methods to find a part. The 3D vision sensor shown in Fig. 15.10 has two cameras and one projector, all built-in. The 3D vision sensor projects a pattern from the built-in projector, acquires images by the two built-in cameras, and measures the 3D shape of the part. By measuring the 3D shape with one pattern projection, the
15
342
K. Yamaguchi and K. Inaba
3D vision sensor
Fig. 15.10 3D vision sensor Fig. 15.12 Bag palletizing system 3D vision sensor
3D vision sensor
Fig. 15.13 AI bin picking system by operator pointing
Fig. 15.11 Bin picking system using 3D vision sensor
measurement time can be reduced compared to projecting multiple patterns. By shortening the measurement time and directly controlling in the robot controller, the robot can acquire a depth image without stopping. In addition, it is possible to measure the image for visual tracking where a sensor or a part is moving. As an application example using the 3D vision sensor, Fig. 15.11 shows a handling robot with a sensor picking multimixed parts from a tote. Based on the information from the sensor, the robot picks up various types of small parts in the container and transfers them to the conveyor. By detecting the parts not as a shape but as an arbitrary blob from the acquired depth image, multiple types of parts can be picked up without registering the shape of each part. In addition, due to the compactness of 3D vision sensors and the short measurement time, it has enabled the sensors to be mounted on the robots without increasing the cycle time. Mounting the 3D vision sensor on the robot can eliminate the
need to install multiple sensors around the robot, and enables it to pick up the part from multiple places, which makes it possible for a simple and inexpensive system. In the example of Fig. 15.12, the 3D vision sensor detects bags, and the robot picks up the part and transfers it. In this case, the 3D vision sensor is used not only for picking, but also for detecting the shifted bag after loading on the pallet. In recent years, AI has been used to simplify the teaching, and improve robustness and accuracy of part detection. Bin picking applications require adjustments of settings of the vision sensor in order to have the robot detect a part and pick it up, which requires skilled knowledge of the vision systems. Many work hours for system setup are required. On the other hand, in the case of Fig. 15.13, the AI learns how to detect the part by intuitive instruction work such as directly pointing to the part on the image with a mouse or a touch panel. The operator can easily set up the system without any special knowledge. Also, by utilizing AI, the robot can make pass/fail judgments based on good and bad images of the object. In the example of Fig. 15.14, the vision is used to check the presence or absence of assembled parts such as welded nuts.
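A hedged sketch of the blob-style detection described above: rather than matching a taught shape, the code thresholds a depth image at the expected picking height and extracts connected regions as pick candidates. The synthetic depth data, the threshold, and the size filter are illustrative assumptions; SciPy's connected-component labelling stands in for the sensor vendor's own detection tools.

```python
import numpy as np
from scipy import ndimage

# Synthetic depth image (metres from the sensor): a flat tote bottom at
# 1.00 m with two parts protruding to roughly 0.95 m.
depth = np.full((120, 160), 1.00)
depth[20:45, 30:60] = 0.95
depth[70:100, 90:140] = 0.94

# Anything closer than the tote bottom by more than 2 cm is treated as a part.
candidates = depth < 0.98

labels, num = ndimage.label(candidates)          # connected components
for blob_id in range(1, num + 1):
    area_px = int((labels == blob_id).sum())
    if area_px < 200:                            # reject tiny noise blobs
        continue
    cy, cx = ndimage.center_of_mass(candidates, labels, blob_id)
    top_z = float(depth[labels == blob_id].min())
    print(f"blob {blob_id}: centre=({cx:.1f},{cy:.1f}) px, "
          f"top surface at {top_z:.3f} m, area={area_px} px")
```

Each reported blob centre and top-surface height could then be converted to robot coordinates to plan a pick, without ever registering the shape of the individual part types.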
15
Intelligent and Collaborative Robots
Pass Pass Fail Fail
343
Mx Fx
Fz Mz
My
Fy
Robot
15 Controller
Force sensor
Fig. 15.15 Robot system with force sensor
Fig. 15.14 AI inspection
In conventional vision systems, the presence/absence of a part was determined by matching a part shape taught in advance. However, welding spatter or soot around the target object, or halation on the metal surface, etc. could cause an erroneous judgement, and the robot user was required to have skilled knowledge on how to set up the vision system to get reliable judgements. This kind of error proofing when using AI judges the pass/fail from the image using machine learning [18], instead of trying to find the same shape and position of the part from the taught image. This results in a robust inspection process that is resistant to fluctuations due to surrounding conditions. High-precision error proofing can be easily performed by learning several to several tens of pass/fail images as a dataset, without requiring a fine-tuning of vision settings.
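As a simplified stand-in for the machine-learning error proofing described above, the sketch below trains a pass/fail classifier from a few dozen labelled images and applies it to a new one. A production system would use a deep network on real camera images; here a logistic-regression model on tiny synthetic images keeps the example self-contained. The image size, the brightness-based synthetic data, and the labelling scheme are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def synthetic_image(nut_present: bool) -> np.ndarray:
    """Tiny 16x16 grayscale image; a bright disc stands in for a welded nut."""
    img = rng.normal(0.2, 0.05, size=(16, 16))
    if nut_present:
        yy, xx = np.mgrid[:16, :16]
        img[(yy - 8) ** 2 + (xx - 8) ** 2 < 16] += 0.6
    return img.clip(0, 1)

# Several tens of labelled pass/fail images, as mentioned in the text.
X = np.array([synthetic_image(i % 2 == 0).ravel() for i in range(40)])
y = np.array([1 if i % 2 == 0 else 0 for i in range(40)])     # 1 = pass

clf = LogisticRegression(max_iter=1000).fit(X, y)

test = synthetic_image(nut_present=True).ravel()
print("judgement:", "PASS" if clf.predict([test])[0] == 1 else "FAIL")
```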
Force Sensors
The force sensor is used to improve a robot's dexterity. It detects the force and moment in real time when the robot contacts any object. With force sensors, robots can push with a designated force or realize compliant motion by moving in the direction of an external force. This enables robots to smoothly carry out difficult assembly tasks that only skilled workers were previously able to do, such as precise fitting with a clearance of only several micrometers, the assembly and meshing of gears, and polishing/deburring. A robot system with a force sensor on the robot's wrist is shown in Fig. 15.15. An example of the force sensor is shown in Fig. 15.16. The force sensor has a part that deforms slightly under an external force. By detecting the deformation amount accurately, it is possible to calculate the force in three directions (X, Y, and Z) and the moments around the X, Y, and Z axes. There are several kinds of force sensor, classified by detection principle: strain gauge
Force sensor
Fig. 15.16 Force sensor
type, capacitance type, piezoelectric effect type, and optical type. Figure 15.17 shows an example of the assembly of a crank mechanical unit of an injection molding machine utilizing a force sensor.
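A hedged sketch of the compliant, force-controlled motion described above: a simple admittance law converts the error between the measured and the desired contact force into a small corrective velocity of the tool, which is one way to approximate "push with a designated force" behaviour. The gains, the simulated environment stiffness, and the single-axis simplification are illustrative assumptions, not any vendor's force-control implementation.

```python
# One-axis admittance control: regulate the contact force along Z by moving
# the tool with a velocity proportional to the force error.
dt = 0.002                 # control period (s), assumed
k_env = 20000.0            # environment stiffness (N/m), assumed
admittance = 0.001         # commanded velocity per Newton of error (m/s/N)
f_desired = 10.0           # desired pressing force (N)

z = 0.0                    # tool displacement into the part; contact at z = 0
for _ in range(2000):
    f_measured = max(0.0, k_env * z)      # simulated force-sensor reading
    velocity = admittance * (f_desired - f_measured)
    z += velocity * dt                    # move toward the force setpoint

print(f"steady-state force: {k_env * z:.2f} N (target {f_desired} N)")
```

The same error-to-motion idea, applied in several axes and combined with search motions, underlies the precise fitting and deburring tasks mentioned above.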
15.3.3 Collaborative Robot
Collaborative robots can work safely with operators without safety fences. As shown in Fig. 15.18, processes requiring operators and robots to coexist without safety fences can be configured using collaborative robots in two styles:
1. Workspace sharing
2. Collaborative operation
In "workspace sharing," the operators and the robots work independently, but also engage in work close to each other without safety fences. In this case, the operator and the robot have a risk of contacting each other, but the hazards are
minimized by using collaborative robots that are designed not to inflict harm on a person upon contact. The other style is "collaborative operation," in which, for example, the operator performs assembly work while the robot assists by handling heavy components. The operators are freed from heavy lifting work, which enhances productivity and allows labor reduction, because only one
operator is needed to complete work that previously required two or more operators. With the expansion of the applications in which industrial robots are used, and in response to a growing need for enhanced productivity, safety standards were also required to change, and in 2011 the international standard ISO 10218-1:2011 was revised to allow operators and robots to coexist without safety fences. The collaborative operation requirements specified in ISO 10218-1:2011 are the following; any one of the four requirements must be met before a robot is operated without safety fences. An overview of the requirements is shown in Fig. 15.19.
Fig. 15.17 Assembly of a crank mechanical unit with a force sensor
Fig. 15.18 (a) Workspace sharing. (b) Collaborative operation
1. Safety-rated monitored stop
2. Hand guidance
3. Speed and separation monitoring
4. Power and force limiting
For item 1, safety-rated monitored stop, the robot must not move while an operator is inside the operating space of the robot. For item 2, hand guidance, which is an operation mode in which an operator directly handles the robot to operate it, the robot must be equipped with an emergency stop switch, an enabling device, and an operating speed monitor. For item 3, speed and separation monitoring, the robot must stop or slow down when the relative speed and distance between the robot and the operator exceed certain criteria.
Fig. 15.20 Collaborative robots
Fig. 15.19 ISO 10218-1:2011 collaborative operation requirements
For item 4, power and force limiting, the force generated by the robot is restricted, and when the applied force exceeds certain criteria, the robot must stop. Each requirement was created to ensure high reliability of the safety functions and, specifically, to conform to safety category 3, performance level (PL) d, specified in ISO 13849-1. Here, the authors would like to introduce the history of collaborative robots. The concept of collaborative robots was proposed in the mid-1990s as a method for direct physical interaction between humans and robots. Initially, the method was investigated to ensure human safety, as in intelligent assist devices (IADs), which are computer-controlled tools that enable production workers to lift, move, and position payloads quickly, accurately, and safely. From the late 1990s to the early 2000s, collaborative robots driven by limited power were developed, and in the late 2000s the development of collaborative robots by industrial robot manufacturers began in earnest. From the viewpoint of intrinsic safety, the initial collaborative robots were mostly small types with a light robot weight and a payload of less than 10 kg. However, in response to the growing demand for robots to perform heavy-duty work instead of humans, mainly at the production sites of automobile manufacturers, a collaborative robot with a payload of over 30 kg appeared in the mid-2010s, based on functional safety [14]. In addition, with the aim of performing more complex tasks like humans, collaborative robots with dual arms have also appeared. As such, collaborative robots have diversified.
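The following sketch illustrates the logic behind item 3, speed and separation monitoring, using a simplified protective-distance check loosely inspired by the ISO/TS 15066 formulation: the robot may keep moving only if the current human–robot distance exceeds the distance both could close during the reaction and stopping time, plus a margin. All times, speeds, thresholds, and margins are invented for illustration and are not normative values.

```python
def minimum_protective_distance(v_human, v_robot, t_reaction, t_stop, margin):
    """Distance both parties may close before the robot is fully stopped
    (simplified: constant speeds during reaction + stopping, plus a margin)."""
    return (v_human + v_robot) * (t_reaction + t_stop) + margin

def monitor(distance_m, v_human, v_robot,
            t_reaction=0.1, t_stop=0.4, margin=0.2):
    """Return the commanded robot state for one monitoring cycle."""
    s_min = minimum_protective_distance(v_human, v_robot,
                                        t_reaction, t_stop, margin)
    if distance_m <= s_min:
        return "protective stop"
    if distance_m <= 1.5 * s_min:
        return "reduced speed"
    return "full speed"

# Example cycle: operator walking at 1.6 m/s toward a robot moving at 1.0 m/s.
for d in (3.0, 2.0, 1.2):
    print(f"separation {d:.1f} m -> {monitor(d, v_human=1.6, v_robot=1.0)}")
```

In a certified system this check would run on safety-rated hardware with safety-rated distance sensing, not in application code.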
In recent years, as shown in Fig. 15.20, collaborative robots have been developed in accordance with the ISO 10218-1 standard. There are various safety measures, but the following approach is common: in accordance with the safety requirements of item 4, power and force limiting, a sensor is built into the robot, and when the robot contacts a person or an obstacle, the robot detects it and instantly stops. In addition, it is important for a collaborative robot to provide simple robot operation and teaching so that it can be used easily at production sites where people perform most of the work. For example, the manual guided teaching function, with which the operator moves the arm directly, makes teaching more intuitive than conventional teaching with a teach pendant [19]. Furthermore, since it is not necessary to surround the collaborative robot with safety fences, as shown in Fig. 15.21, a mobile robot consisting of a collaborative robot mounted on an automatic guided vehicle (AGV) can move freely and continue to work at its destination. Depending on the drive system, the mobile robot can move in any direction: forward, backward, right, and left. In addition, an AGV guided by SLAM (simultaneous localization and mapping) registers map information for the driving area, allowing the AGV to travel while estimating its own position and to respond flexibly to equipment layout changes without laying magnetic tape guides. Generally, the stopping accuracy of the mobile robot is about several centimeters, which is insufficient for the robot to work accurately. Therefore, a vision sensor mounted on the robot is used to compensate for its position so that parts can be picked accurately.
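A hedged sketch of the position compensation mentioned above: after the AGV stops, the robot's camera observes a reference feature (for example a fiducial on the machine), compares its pose with the pose recorded when the program was taught, and applies the resulting planar offset to every taught point. The 2-D rigid-transform simplification, the fiducial idea, and all pose values are assumptions for illustration.

```python
import math

def planar_offset(taught_ref, observed_ref):
    """Rigid 2-D transform (dx, dy, dtheta) that maps the taught reference
    pose (x, y, theta) onto the pose observed after the AGV has stopped."""
    dtheta = observed_ref[2] - taught_ref[2]
    c, s = math.cos(dtheta), math.sin(dtheta)
    # Rotate the taught reference, then translate onto the observed one.
    rx = c * taught_ref[0] - s * taught_ref[1]
    ry = s * taught_ref[0] + c * taught_ref[1]
    return observed_ref[0] - rx, observed_ref[1] - ry, dtheta

def compensate(point, offset):
    """Apply the offset to a taught robot point (x, y, theta)."""
    dx, dy, dth = offset
    c, s = math.cos(dth), math.sin(dth)
    x = c * point[0] - s * point[1] + dx
    y = s * point[0] + c * point[1] + dy
    return x, y, point[2] + dth

taught_marker = (0.500, 0.200, 0.000)   # marker pose recorded at teaching (m, rad)
seen_marker = (0.512, 0.185, 0.020)     # marker pose seen after the AGV stops
offset = planar_offset(taught_marker, seen_marker)

taught_pick = (0.650, 0.300, 1.571)     # taught pick point near the marker
print("compensated pick point:", compensate(taught_pick, offset))
```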
346
K. Yamaguchi and K. Inaba
without any collisions with machine tools or safety fences in the system where the robot loads/unloads a work piece. Even the person who is not skilled at teaching robots can generate an optimum robot program in a short time.
15.4.2 IoT
Fig. 15.21 Mobile robot
15.4
Offline Programming and IoT
15.4.1 Offline Programming Offline programming systems have greatly contributed to decreasing robot programming hours as PC performance has improved. There is usually a library of robot models in the offline programming system. The data of workpiece shapes used in 3D CAD systems is read into the offline programming system. Peripheral equipment is often defined by using an easy shape-generating function of the offline programming system, or by using 3D CAD data. As shown in Fig. 15.22, after loading the workpiece’s 3D CAD data into the offline programming system, the robot motion program can be easily generated automatically, just by designating the position or posture of the workpieces or the tool on the robot wrist. Furthermore, the offline programming system can verify the cycle time, robot trajectory, etc. The robot programs created by the offline programming systems are loaded to an actual robot, then the robot executes the robot programs. The offline programming system can generate a motion path without any collisions with fixtures and without any singularity positions. Figure 15.23 shows how the motion path for loading/unloading a workpiece is automatically generated
Robot Maintenance Support Tool In a large production line, a problem with one robot would cause a lengthy production downtime. To avoid such downtime, prognostics and health management (PHM) becomes important. The PHM is a prediction of remaining life by modeling equipment deterioration progress, and execution of preventive maintenance to avoid sudden downtime. The prediction includes analysis processes, such as sending notifications of any signs of failure before the robot stops. Requests for preventive maintenance in advance have been increasing to help reduce maintenance cost and avoid the extra burden on understaffed maintenance workers. Robot maintenance support tools based on the PHM contain multiple functions helpful for preventive maintenance and support production uptime improvement by managing data on a server and notifying customers upon detection of abnormal conditions in advance. IoT technology for preventive maintenance allows robot users to conduct maintenance before a robot failure by predicting and detecting abnormalities from big data that is acquired by connecting a large number of robots over multiple countries and factories (Fig. 15.24) [13, 15]. As required, robot users are notified of the results which can lead them to perform maintenance work before an actual failure, and realize zero downtime. IoT Platform In recent years, an IoT platform that connects various devices (machine tools, industrial machines, robots, PCs, sensors, etc.) on the factory floor with a network (Ethernet) has been introduced on the factory floor. There are also methods of collecting equipment operation information in real time and sending it to a host server, but this data could be large and burden the network or increase network costs. On the other hand, as shown in Fig. 15.25, when an IoT platform is built between the factory floor (edge layer) and the higher side server (cloud layer or fog layer), it is possible to acquire large amounts of data from the edge layer, and to increase the communication speed. Since a large amount of data (big data) contain important information related to factory productivity and reliability, strict security management is required to prevent it from leaking outside the factory. Some IoT platforms are open, allowing users to create their own application software.
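A hedged sketch of the kind of check a maintenance support tool can run on collected robot data: learn the normal range of a health indicator (here, a joint torque average) from history, and raise a notification when recent values drift beyond a statistical band. The z-score rule, the window sizes, and the synthetic data are illustrative assumptions and not a description of any vendor's PHM analytics.

```python
import statistics

# Daily average torque of one joint (N*m); the slow upward drift at the end
# stands in for progressing reducer wear.
history = [41.8, 42.1, 41.9, 42.0, 42.2, 41.7, 42.0, 42.1, 41.9, 42.0,
           42.3, 42.1, 42.4, 42.8, 43.3, 43.9]

baseline = history[:10]                       # period assumed to be healthy
mean = statistics.mean(baseline)
std = statistics.pstdev(baseline)

def check(recent, threshold_sigma=3.0):
    """Flag readings that leave the normal band, before an actual failure."""
    alerts = []
    for day, value in enumerate(recent, start=len(baseline) + 1):
        z = (value - mean) / std
        if abs(z) > threshold_sigma:
            alerts.append((day, value, round(z, 1)))
    return alerts

for day, value, z in check(history[10:]):
    print(f"day {day}: torque {value} N*m is {z} sigma from baseline "
          f"-> schedule preventive maintenance")
```

In an IoT deployment the same logic would run on data aggregated from many robots across factories, with notifications sent to maintenance staff instead of printed.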
15
Intelligent and Collaborative Robots
347
Layout investigation
Program generation
Verification
Layout
Programming
Simulation
Library Robot
x2D layout import xLayout creation
xCAD-to-path xGraphic jogging xEasy teaching
xCycle time xRobot trajectory
Others
Virtual TP Simple modeling 3D CAD data IGES format
x2D-3D conversion xParts modeling xLibrary registration
Load program to actual robot
Teaching robot is the same operation as actual robot
CAD interface
Fig. 15.22 Offline programming system
15.5
Applications of Intelligent and Collaborative Robots
Intelligent robots and collaborative robots have been introduced to various production sites, contributing to automation of production processes and operating rate improvements. The following are typical application examples.
15.5.1 Welding Up to now, many industrial robots have been used for spot welding and arc welding mainly in the automotive industry. An automotive welding site is one of the most automated production sites. The first mass introduction of industrial robots at the welding site was in spot welding lines for automotive manufacturing. Even today, a large amount of industrial robots with a servo welding gun are used for body welding. In this application, cycle time and welding quality are very important and have been dependent on people’s skilled teaching. To address this issue, learning robots, which have external sensors and can automatically generate optimized motion programs with the feedback from the sensor, are useful and can increase productivity without requiring skilled workers (Fig. 15.26).
When a spot welding production line is being installed, it is often set up and programmed at a different location in advance of the actual installation. In such cases, it consumes a lot of time to reteach each robot position that will shift upon transitioning from the two locations. To improve the efficiency of this, a vision sensor is used to automatically adjust the robot program based on the shifting of the robots, peripheral equipment, and car bodies at the actual site. Vision sensors have been used for inspection to confirm the quality of welds, as shown in Fig. 15.27. Predictable preventive maintenance mentioned above is especially important for spot welding lines. Alarms are sent in advance to notify of abnormalities, which can allow planned maintenance and realize zero downtime. In arc welding, laser vision sensors are mounted on the robot arm to make the robot system more intelligent. When welding inconsistent production parts, the laser vision sensor adjusts the weld program to maintain the weld quality. The laser vision sensor can also perform real-time tracking where it tracks the joint to be welded just ahead of the current position during welding and compensates the position to the detected joint. The laser vision sensor’s adaptive welding function is used to adjust and optimize the weld parameters based on sensor feedback in addition to real-time tracking (Fig. 15.28).
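A hedged sketch of the real-time tracking idea described above: the laser vision sensor reports the lateral offset of the joint a short distance ahead of the torch, and the controller shifts the upcoming programmed points by that offset, optionally also adapting a weld parameter with the measured gap. The offsets, the gap-to-wire-feed gain, and the data are invented for illustration; real adaptive-welding packages operate on the controller's own path representation.

```python
# Programmed weld path (x along the seam, y across it) and the lateral joint
# offsets reported by the look-ahead laser sensor at those x positions.
programmed_path = [(x * 10.0, 0.0) for x in range(6)]            # mm
sensed_offset_y = [0.0, 0.3, 0.6, 0.8, 0.7, 0.5]                 # mm
sensed_gap_mm = [0.8, 0.9, 1.1, 1.3, 1.2, 1.0]                   # joint gap

base_wire_feed = 8.0          # m/min, assumed nominal parameter
gap_gain = 1.5                # extra wire feed per mm of gap, assumed

corrected = []
for (x, y), dy, gap in zip(programmed_path, sensed_offset_y, sensed_gap_mm):
    corrected_point = (x, y + dy)                         # track the real joint
    wire_feed = base_wire_feed + gap_gain * (gap - 1.0)   # adaptive parameter
    corrected.append((corrected_point, round(wire_feed, 2)))

for pt, wf in corrected:
    print(f"weld point {pt} mm, wire feed {wf} m/min")
```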
15
348
K. Yamaguchi and K. Inaba
Fig. 15.23 Auto path generation
Data center Factory A
Factory B
Head office factory
Data collector
Fig. 15.24 IoT for maintenance
Data collector
Data collector
15
Intelligent and Collaborative Robots
349
Cloud
15
Application software Open platform Fog Edge heavy
Edge Machine tools, robots, PLC sensors, camera
Fig. 15.27 Vision inspection in welding
Fig. 15.25 IoT platform structure
Fig. 15.28 Intelligent arc welding system
Fig. 15.26 Learning robots
Also, collaborative robots provide a new ability to work more closely with operators. These robots come with easy-to-use software designed to reduce work by reducing programming time. The easy-to-program interface supports simple applications that were previously welded manually. No special knowledge about robot teaching is required: the operator holds the torch by hand, moves it to a rough welding position using manual guided teaching, and teaches the welding start and end points. The torch posture is automatically converted to a preset angle to complete the welding operation, and the welding conditions are also set automatically by specifying the joint type and the materials of the workpiece to be welded (Figs. 15.28 and 15.29).
Fig. 15.29 Arc welding using a collaborative robot
350
15.5.2 Machining In machining, exchanging workpieces on a machine tool and removing them after machining are relatively simple supplemental tasks, but the automation rate is not very high. Such tasks are still dependent on manual labor in many production processes. The reason for this is that skilled knowledge is needed, and acquiring such knowledge and skills requires a lot of time. The collaborative robots, which do not require any safety fences, and robot vision systems that allow only rough positioning of workpieces, have helped to simplify systems, thereby reducing the time and costs needed to perform design and setup. Figure 15.30 shows a machine tending system with two ROBODRILLs and one robot. With this system, a robot is installed between two machine tools, and the robot uses the vision sensor mounted to its hand to detect and remove two types of randomly arranged workpieces from a container. Each workpiece is fed by the robot to the machine tools on the right or left from the side doors. In a study of this system, an offline programming system was used to arrange CAD data for the two machine tools, the robot, the workpieces, and the peripheral equipment, study the layout, and generate a robot program using an auto path generation function. These collaborative robots eliminate the need to install safety fences used for conventional robots, making it easy to install the robot and reducing the space required. For example, the collaborative robot can be installed in any place at any time by moving it on a handcart [16]. In addition, the collaborative robot mounted on the handcart can be used to exchange the workpieces on a machine tool (Fig. 15.31). After the robot is roughly located in place using the handcart, an integrated robot vision system automatically compensates for the position by means of the camera mounted on
Fig. 15.30 Machine tending
K. Yamaguchi and K. Inaba
the robot. This allows the robot to exchange the workpiece easily and accurately without the operator having to reteach the robot each time it is relocated. In addition to using the handcart as described above, the operator can use the mobile robot – which features a small collaborative robot on an AGV – to use a single robot in different places without fixing the robot in place. Mounting a collaborative robot on an AGV eliminates the need to enclose the robot with safety fences at the site wherever it goes, making it possible to transfer workpieces in various layouts by using a simple system. In addition, the layout can be easily changed when the product model is changed. Figure 15.32 shows a machine tending system for a machine tool with a mobile robot. With this system, a tray containing several workpieces is carried from the workpiece stocker by the mobile robot, and fed to the robot, which is located on the side of the machine tool. The robot takes a workpiece from the tray and feeds it from the side of the machine tool. The mobile robot also collects a tray containing multiple machined workpieces and carries it to the workpiece unloading area. As mentioned in Sect. 15.3.3, the mobile robot can move in every direction, including forward, backward, right, and left. In addition, the mobile robot is guided by simultaneous localization and mapping (SLAM). This registers map information for the area that the mobile robot travels, allowing the AGV to travel while estimating its own position. Since this means that there is no need to install guides, such as magnetic tape, it offers the flexibility to change the equipment layout.
Fig. 15.31 Machine tending by a collaborative robot
15
Intelligent and Collaborative Robots
351
Intelligent robot performing assembly
Robot upper arm to be assembled
15
Intelligent robot performing assembly
Fig. 15.34 Robot total assembly
Fig. 15.32 Machine tending by a mobile robot
Force sensor Robot upper arm to be assembled
Intelligent robot performing assembly 3D vision sensor Robot wrist to be assembled
Fig. 15.33 Robot arm assembly
15.5.3 Assembly The intelligent robot plays an active part in assembling various machines or electronic equipment. As an example, the assembly of a large 200 kg payload robot is shown in Fig. 15.33. An intelligent robot assembles the wrist unit of the robot to the upper arm unit, and Fig. 15.34 shows intelligent robots assembling the upper arm unit to the lower arm unit.
The upper arm unit weighs more than 200 kg. When the robot picks up a wrist unit, it first detects the location of the wrist unit with a vision sensor and then picks it up with a gripper. However, the actual position of the wrist unit within the gripper may shift slightly when it is gripped. To correct this slight misalignment, the wrist unit position within the gripper is detected again by a vision sensor after picking, so that it can be precisely adjusted before being attached to the upper arm. When fitting and inserting the wrist unit into the upper arm, a precise fit with a clearance of less than 10 μm is achieved by force control using the force sensor. Detection and compensation with the vision sensor, combined with force control using the force sensor, allows stable automatic assembly. The process of inserting bearing units onto a ball screw is introduced as an effective example of assembly using a collaborative robot. Figure 15.35 shows a system overview and Fig. 15.36 shows how bearings are inserted. This process inserts several types of bearings into a ball screw to produce a complete unit with a weight of about 20 kg. Because there are many parts to assemble, this process is not suitable for full automation, and therefore the work was done manually. However, since a completed assembled unit is heavy, two operators were needed to move and carry it with caution. Although safety fences would have been needed for automation using conventional robots, they could not be installed due to the need for frequent access by operators, making the introduction of robots difficult. To reduce the burden on operators and the risk of injuries, a collaborative robot was introduced into this process. Specifically, a collaborative robot was introduced to transfer the ball screw units and to assist in the assembly so that the operators could be exclusively engaged in the bearing insertion work. As shown in Fig. 15.35, the collaborative robot lifts the ball screw from a stocker and carries it to a bearing press insertion
Fig. 15.35 System for insertion of bearings to a ball screw
Fig. 15.36 Insertion of bearings
machine. Subsequently, the robot performs assembly assistance by supporting the ball screw to prevent it from falling. Finally, while the robot is carrying the complete unit back to the stocker, the operator can perform the initial setup, such as opening the next package with bearing units, leading to increased productivity. With the introduction of the collaborative robot, labor savings have been achieved: just one person can now be allocated to work that previously required two. Moreover, the risk of operator injury has been drastically reduced because operators are freed from heavy manual work and their workload is lowered.
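The assembly examples above share a common pattern: a vision sensor provides a rough part pose, a second measurement after gripping corrects the residual offset, and a force-controlled motion completes the tight-clearance fit. The following Python sketch is a purely conceptual, self-contained toy simulation of that idea (a one-dimensional insertion with an artificial contact-force model); it is not the vendor's control code, and all quantities are illustrative assumptions.

```python
import random

def simulate_insertion(target_depth_mm=30.0, force_limit_n=15.0, step_mm=0.5):
    """Toy 1-D model of vision-corrected, force-controlled insertion."""
    grip_offset_mm = random.uniform(-0.5, 0.5)   # misalignment of the part in the gripper
    residual_mm = grip_offset_mm * 0.05          # re-measuring after picking removes most of it

    depth_mm = 0.0
    while depth_mm < target_depth_mm:
        # Crude contact model: jamming force grows with the remaining misalignment.
        force_n = abs(residual_mm) * 1000.0 + random.uniform(0.0, 2.0)
        if force_n > force_limit_n:
            # Force control reacts to excessive force: back off and re-center
            # instead of pushing harder.
            depth_mm = max(0.0, depth_mm - step_mm)
            residual_mm *= 0.5
        else:
            depth_mm += step_mm
    return depth_mm

if __name__ == "__main__":
    print(f"inserted to {simulate_insertion():.1f} mm")
```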
Fig. 15.37 Picking system
15.5.4 Picking, Packing, and Palletizing
Intelligent robots have become widespread at manufacturing sites for food, pharmaceuticals, cosmetics, etc. In general, these manufacturing processes consist of a picking process for products manufactured by specialized machines, a packing process for packing them into a box, and a palletizing process for loading the boxes on shipping pallets. An example of this series of processes is described below. In the picking process, as shown in Fig. 15.37, the robot detects the position of each passing biscuit using the camera placed above the conveyor. The robot then picks each one from the conveyor in synchronization with the conveyor movement and sequentially stacks it on the conveyor of the next process. The cycle time of this process directly affects manufacturing output. Therefore, the robot used here is required to have high-speed and high-duty performance, as well as accurate synchronous control with high-speed vision processing. In the packing process, as shown in Fig. 15.38, the position of a stack of biscuits is detected by a camera placed above the conveyor, and the robot, synchronized with the conveyor, picks the stack and places it into the box. In this process, since multiple products may be picked collectively, the gripper becomes larger than the gripper in the picking process, so the robots used here are required to have higher payloads in addition to high-speed and high-duty performance. In the palletizing process, as shown in Fig. 15.39, the packed cases are picked together and placed on the shipping pallet. In this process, it is common for multiple products to be carried at once, and it is necessary to stack the products higher on the pallet in order to improve transportation efficiency. Therefore, the robots used here are required to have a large gripper and a wide operating range capable of accommodating multistage stacking, in addition to high transportability.
15.5.5 Painting
Painting robots apply various types of waterborne or solventborne liquid and powder materials to component surfaces via a painting applicator.
Fig. 15.40 Painting robots
Fig. 15.38 Packing system
Fig. 15.39 Palletizing system
Painting was initially a completely manual process. Manually applying paint required personnel to stand in a hazardous environment for long periods of time, and paint defects such as touch marks, dirt, and nonuniform film thickness were typical due to human fatigue. Painting robots were introduced nearly 40 years ago into paint booths to remove painting personnel from the dangerous atmosphere of the paint booth and to achieve improved paint quality. The majority of paint robots are six-axis devices. Some exterior painting robots have five axes, which permits simpler exterior paint path development and maintenance. The initial seven-axis robots were six-axis versions mounted on a robot transfer unit or rail to increase the robot's work envelope. Recently, seven-axis pedestal painting robots have been introduced into the market. The unique seven-axis pedestal
robot improves the robot's reach within the painting work envelope and is used to minimize the paint booth length and width (Fig. 15.40). Over time, paint robot intelligence has improved. For example, on the applicator side, manually calibrated spray guns were replaced with high-performance rotary spray atomizers to improve paint transfer efficiency. Opener robots were developed so the painting robots can paint the interior surfaces. The robots communicate via a network, and this communication improves system throughput and eliminates robot-to-part collisions. In Fig. 15.41, an opener robot lifts the hood to allow the painting robot to coat the interior surface. Improvements in robot controller software and hardware have dramatically improved robot intelligence. The controller constantly evaluates the robot's position within the paint booth using a simplified CAD model of the robot. This permits the booth size to be significantly smaller by minimizing the space required for safety devices. The controller can also monitor the robot and paint application devices to eliminate any production downtime. While monitoring the robot servo system, the robot can predict when maintenance will be required and alert the end user to this need. Maintenance can then be planned during nonproduction hours, resulting in zero downtime during production. Painting robots have also started to use vision in the painting process. Some paint systems utilize a vision system to track the part through the painting booth, providing offsets to the robot. In other systems, 3D vision is used to offset the painting path to account for parts that are not exactly positioned on the conveyor system. In Fig. 15.42, a 3D vision system determines the part location on the moving conveyor. The intelligent vision system allows for some flexibility in the actual part position and eliminates the need to constantly maintain parts on the conveyor for position accuracy.
Fig. 15.41 Painting robot system
Fig. 15.43 Smart factory
Fig. 15.42 Intelligent painting
15.6 Installation Guidelines
The following items are guidelines that should be examined before intelligent robots and collaborative robots are introduced.
15.6.1 Range of Automation Though intelligent robots can load workpieces into machine tools or assemble parts with high accuracy, they cannot do everything a skilled worker can do. For instance, for handling and assembling soft or flexible parts such as wire
harnesses, human operators are more skillful than intelligent robots. It is necessary to clearly separate the work for automation and work that relies on skilled people before considering the introduction of robots. On the other hand, based on such work separation between people and robots, further improvement of the rate of automatization can be realized by utilizing the collaborative robot. For example, it is possible to construct a production system in which human operators concentrate on the complicated work such as a flexible cable assembly, and simple work such as parts transfer or assembly of simple parts is performed by the collaborative robots. Since the collaborative robots do not need safety fences, it is possible to install the robots at existing human-centered production sites, which can promote the automation step by step without stopping production. In the future, as shown in Fig. 15.43, smart factories will be realized, where intelligent and collaborative robots and mobile robots or drone robots can move around the production site to support human operators or transfer many kinds of parts. In the smart factory, the IoT platform and AI will realize a highly efficient production system by allowing each device to autonomously move according to production plans and human production activities [17].
15.6.2 Return on Investment One of the biggest benefits of introducing intelligent robots is the simplification of the peripheral equipment, since flexibility is one of the major features of intelligent robots. Collaborative robots do not require the installation of costly safety fences or additional costs for layout changes because, as previously mentioned, they can be directly introduced at existing human-centered production sites. Also, with conventional robots, because the entire robot system must be enclosed with safety fences, all target processes have to be automated simultaneously. Collaborative robots without safety fences, on the other hand, allow step-by-step automation focusing on the most necessary processes, which minimizes engineering effort and installation costs. Collaborative robots play a similar role to traditional intelligent assist devices (IADs) [6, 7] from the viewpoint of supporting human work without safety fences. In addition to productivity, flexibility, safety, etc., collaborative robots and IADs have another very important characteristic: both dramatically extend the performance capabilities of human workers in highly demanding assembly tasks. Furthermore, from the viewpoint of automatic support for humans, collaborative robots have a further advantage over IADs. With IADs, human workers usually need to operate the device themselves to move parts to the target location. With collaborative robots, human workers can not only move parts themselves using the hand guidance function, as with IADs, but can also perform other work near the robot while it moves or assembles parts automatically. For example, collaborative robots can perform extremely tight-tolerance insertions. Similarly, collaborative robots can enable humans to perform tight-tolerance insertions even in the case of large, heavy, unwieldy parts, such as inserting automotive dash assemblies into automobiles. They also allow humans to perform blind insertions efficiently, where the human cannot see or feel the fitting surfaces involved in the insertion operation. From the viewpoint of supplementing human work, collaborative robots contribute to optimizing the allocation of human resources at production sites.
15.7 Conclusion
Intelligent robots using vision and force sensors appeared on the factory floor at the beginning of the 2000s. Around 2010, collaborative robots were also developed for manufacturing. They could work unattended at night and during weekends because they reduced manual operations, such as the arrangement or transfer of workpieces, and/or the necessity of system monitoring compared with
conventional robots. Thus, the efficiency of equipment has risen, and machining and assembly costs have been reduced, which has led to improvements in global competitiveness. Industrial intelligent robots and collaborative robots have reached a high level of skill, as explained above. However, there are still many tasks that skilled workers can do but robots cannot, such as the assembly of flexible objects like wire harnesses. Several research and development activities are ongoing to solve these challenges. One idea is to automate such tasks completely, and another is to do so partially by utilizing collaborative robots. In the future, more and more collaboration between robots and humans will be seen [24]. Acknowledgments The authors would like to thank S.Y. Nof, Y. Inaba, S. Sakakibara, K. Abe, S. Kato, and M. Morioka for their contributions to this work.
References
1. IFR: World Robotics 2020 – Industrial Robots, pp. 46–47, 24 Sept 2020
2. Cobots: US Patent 5,952,796
3. Teresko, J.: Here Come the Cobots! Industry Week, 21 Dec 2004
4. Cobot Architecture: IEEE Transactions on Robotics and Automation 17(4) (2001)
5. A Brief History of Collaborative Robots. Engineering.com, 19 May 2016
6. Intelligent Assist Devices: Revolutionary Technology for Material Handling (PDF). Archived from the original (PDF) on 5 Jan 2017. Retrieved 29 May 2016
7. Robotic Industries Association: BSR/T15.1, March 2020
8. ISO 10218-1:2011 Robots and Robotic Devices – Safety Requirements for Industrial Robots – Part 1: Robots. International Organization for Standardization (ISO)
9. ISO 10218-2:2011 Robots and Robotic Devices – Safety Requirements for Industrial Robots – Part 2: Robot Systems and Integration. International Organization for Standardization (ISO)
10. ISO/TS 15066:2016 Robots and Robotic Devices – Collaborative Robots. International Organization for Standardization (ISO)
11. Inaba, Y., Sakakibara, S.: Industrial Intelligent Robots. In: Springer Handbook of Automation, pp. 349–363. Springer, Berlin/Heidelberg (2009)
12. Ibayashi, J., Oe, M., Wang, K., Ban, K., Kokubo, K.: The latest technology for high speed & high precision on FANUC Robot R2000iC. FANUC Tech. Rev. 23(1), 38–46 (2015)
13. Takahashi, H., Nagatsuka, Y., Arita, S.: Zero down time function for improvement in robot utilization. FANUC Tech. Rev. 23(2), 20–28 (2015)
14. Morioka, M., Iwayama, T., Inoue, T., Yamamoto, T., Naito, Y., Sato, T., Toda, S., Takahashi, S.: The human-collaborative industrial robot – development of "green robot". FANUC Tech. Rev. 24(1), 20–29 (2016)
15. Nagatsuka, Y., Aramaki, T., Yamamoto, T., Tanno, Y., Takikawa, R., Nakagawa, H.: ZDT on ROBOT-LINKi. FANUC Tech. Rev. 27(1), 13–22 (2019)
16. Morioka, M., Inaba, G., Namiki, Y., Takeda, T.: QSSR functions for robots. FANUC Tech. Rev. 28(1), 8–17 (2020)
Further Reading
Aliev, R.A., Yusupbekov, N.R., et al. (eds.): 11th World Conference on Intelligent System for Industrial Automation. Springer Nature, Cham (2021)
Bianchini, M., Simic, M., Ghosh, A., Shaw, R.N. (eds.): Machine Learning for Robotics Applications. Springer Nature, Singapore (2021)
Gurgul, M.: Industrial Robots and Cobots: Everything You Need to Know About Your Future Co-Worker. Michal Gurgul (2018)
Nof, S.Y. (ed.): Handbook of Industrial Robotics, 2nd edn. Wiley, New York (1999)
Ross, L.T., Fardo, S.W., Walach, M.F.: Industrial Robotics Fundamentals: Theory and Applications, 3rd edn. Goodheart-Willcox, Tinley Park, IL (2017)
Siciliano, B., Khatib, O. (eds.): Springer Handbook of Robotics, 2nd edn. Springer International Publishing, Cham (2016)
Spong, M.W., Hutchinson, S., Vidyasagar, M.: Robot Modeling and Control, 2nd edn. Wiley, New York (2020)
Suzman, J.: Work: A Deep History, from the Stone Age to the Age of Robots. Penguin Press, New York (2021)
Kenji Yamaguchi completed the master’s program in Precision Machinery Engineering, Graduate School of Engineering, the University of Tokyo in 1993, and in the same year, joined FANUC CORPORATION. He was appointed President and COO in 2016, and has held the position of President and CEO since 2019.
Kiyonori Inaba graduated from the Department of Mechanical Engineering of the University of California, Berkeley in 2009, with a doctoral degree in engineering. He joined FANUC CORPORATION in 2009 and was appointed Executive Managing Officer in 2013.
Control Architecture for Automation
16
Oliver Riedel, Armin Lechler, and Alexander W. Verl
Contents
16.1 Historical Background and the Motivation for a Change 357
16.1.1 Motion Control and Path Planning 358
16.1.2 Logic Controllers 358
16.2 General Scheme for Control Architectures for Automation 359
16.2.1 Typical Control Architectures and Systems 359
16.2.2 Industrial Communication and Fieldbuses 360
16.2.3 Proprietary and Partially Open Interfaces 360
16.2.4 Resources in the Network: Cloud and Edge for Storage and Compute 360
16.2.5 Engineering and Virtual Tryout 361
16.3 Challenges 361
16.4 New Architectural Components and Solutions 361
16.4.1 Virtualization Techniques 362
16.4.2 Components for Communication 366
16.4.3 Hardware-in-the-Loop Simulation 369
16.4.4 Data Analytics and AI 372
16.5 Conclusion and Trends 374
References 375
Abstract
Automation technology has always tried to ensure efficient reusability of technology building blocks and effective implementation of solutions with modular approaches. These approaches, also known as control architectures for automation, are undergoing an increasing shift toward more software-based methods. All classical architectures, such as NC, CNC, and PLC, are currently reaching their limits in terms of flexibility, adaptability, connectivity, and expandability. In addition, more and more functionalities are shifting from the classic pyramid-shaped communication structure to the smallest embedded devices, or even to systems on higher levels of the communication pyramid such as MES. Operation technology is increasingly penetrated by IT, which, due to its high speed of innovation, opens up ever-growing solution spaces. These include convergent networks that are equipped with real-time functionality and can be used directly in production for all kinds of horizontal and vertical communication needs. New wireless communication technologies such as Wi-Fi 6 and 5G also represent a current revolution in automation technology – although this is not the case in consumer electronics, where they are already established. It is therefore time to sketch new control architectures for automation. The advantage of these new architectures lies not only in the classical application area of production but also in engineering in general. For example, the method of hardware-in-the-loop simulation has considerable potential in production control as well as in the upfront simulation of production plants and virtual commissioning. As in many other fields of application, artificial intelligence is added as a further architectural component. AI has recently become a very versatile tool for solving a broad range of problems at different hierarchical levels of the automation pyramid.
O. Riedel · A. Lechler · A. W. Verl
Institute for Control Engineering of Machine Tools and Manufacturing Units, University of Stuttgart, Stuttgart, Germany
e-mail: [email protected]; [email protected]; [email protected]
© Springer Nature Switzerland AG 2023
S. Y. Nof (ed.), Springer Handbook of Automation, Springer Handbooks, https://doi.org/10.1007/978-3-030-96729-1_16
Keywords
Convergent networks · Time-sensitive networks · Virtualization and containers · Hardware-in-the-loop simulation · OPC UA · Data analytics · Virtual commissioning
16.1 Historical Background and the Motivation for a Change
Two main methods are necessary for the automation of manual activities. The first method aims to automate motion
sequences in the form of paths in the workspace, the second to model logical sequences.
16.1.1 Motion Control and Path Planning The first approaches to automation were purely mechanical. Cam discs were used, e.g., to represent the memory for path and speed curves. Cam discs usually rotate at a constant speed. During one revolution, the required motion profile in terms of path and speed is transmitted via a probe tip of the transmission element to the component to be moved, e.g., the machine tool slide. An important area of application for cam disc controls lies in the field of automatic turning and milling machines. The production process is automatically controlled by cam discs, which are mounted on control shafts. The cam discs store the path and speed and transmit the feed power required at the actuator, as well as the torques or forces required for acceleration and for compensating friction and weights [1]. Other transmission elements consist of mechanical components such as rollers, levers, or racks with pinions. Such mechanical controls are not very susceptible to faults; however, deformations and backlash in the transmission elements can impair the working accuracy, and the masses that need to be accelerated limit the dynamic behavior. The piece times required per workpiece are very short and are usually in the seconds range. Due to the high programming and setup times, as well as the relatively high manufacturing costs for the curves, the use of these automatic machines was limited to large series and mass production. Flexibility is very limited, since changes in the motion sequence require the mechanical exchange of cam discs. The first numerical control (NC) was developed by John T. Parsons in the 1940s to increase flexibility in path control. By exchanging the memory, another program could be run by the control. In NCs, the program describes the path of the tool in space, which then results in the geometry of the workpiece. Later, the control was implemented with a computer, which allowed the programs to be loaded into the control from an external source. The first computerized numerical control (CNC) systems were developed in the late 1960s in the USA (Bendix) and Europe (AEG, Siemens). The flexible programming of control systems has opened up a wide application area with different requirements and production processes. Over the last decades, the number and complexity of algorithms included in control systems have increased. During the early 1990s, open architectures of CNCs were developed to divide the complexity into single software components that are independent of hardware, manufacturers, and each other [2]. In parallel to the requirements of CNCs for multiaxis free-forming of workpieces, other industries require high
performance and flexible path planning. Their kinematics requirements are not as complex as those of milling machines. To control packaging processes, motion controllers have been developed that allow flexible programming of the motions of such machines.
16.1.2 Logic Controllers Position- or time-dependent mechanical cam switches are used to store logical sequences. Cam switches move a tappet when passed over, which triggers a switching function of a mechanical, electrical, hydraulic, or pneumatic type. The cam switches are mounted on cam rails or cam rollers, which can have several cam switch tracks. They can be fixed at any position. The logical program sequence is determined by the position, or respectively the spacing, of their fixation. The cam bar, which can be fixed to the slide or machine tool bed, is a path memory. The cam roller, which rotates with the control shaft, is a time memory of the cam program control. To a large extent, a machine's control is based on linking conditions and processes that can be derived from binary structures. By using Boolean algebra, it is possible to design and describe binary circuits for control systems not only purely intuitively but also with the help of mathematical methods. The systematic treatment of control problems independently of the energy form used, and the possibility of applying minimization and adaptation calculations to an existing component, are of great advantage. Electrical contact control is characterized by its basic element, the electromagnetic relay. With contact control, even larger power ratings can be switched, and the switching times are in the range of milliseconds. Delay circuits can also be realized via simple circuits with direct-current relays. A disadvantage is the low flexibility of relay circuits. Therefore, contact control was replaced by more flexible electronic or software-based solutions at the beginning of the 1960s. Nevertheless, contact control is still used to a limited extent for the automation of equipment. It can particularly be found in the power control range and in areas with special personal protection requirements (e.g., emergency stop and safety interlocks). For facilities with safety requirements, extensive national and international standards, guidelines, and safety regulations apply. Before commissioning, the system or machine manufacturer must provide proof of safety to the relevant testing authorities, for example, in the form of a type test. In contrast to electronic or software-based solutions, electromechanical safety circuits often simplify providing this proof of safety, since relays can be used to describe a defined error response and their design permits certain error exclusions. In practice, this means that safety-relevant functions are often
realized in contact control and the nonsafety-relevant functions are realized electronically or in software. Programmable logic controllers (PLCs) offer further flexibility and are nowadays the most important type of device for solving automation tasks in control engineering. The history of the PLC began in 1968. In contrast to connection-programmed (hardwired) contact controls, PLCs were to be electronic controls, for which the following criteria were essential:
• The control system should be easy to program and, if necessary, should also be reprogrammable (e.g., when changing over transfer lines).
• The control system should work more reliably than an electromechanical solution and should be superior to the known solutions in the price/performance ratio.
The development of semiconductor technology made it possible to meet these requirements to a large extent with "programmable controllers." Around the beginning of the 1980s, programmable logic controllers became the central component of automation technology. Upon market launch, manufacturers were initially faced with the problem of familiarizing a clientele experienced in conventional contact control with this "new technology." The main difficulty was to exploit the possibilities of the PLC to the greatest possible extent, while at the same time enabling application-oriented programming that is quick and easy to learn without requiring extensive prior training as a programmer.
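As a small illustration of the Boolean-algebra approach mentioned above, the following Python snippet checks by exhaustive truth-table evaluation that a relay-style seal-in circuit and its minimized form switch identically. The circuit is a generic textbook example, not one taken from this handbook.

```python
from itertools import product

def original(start, run, guard):
    # Relay seal-in circuit: (start AND guard) OR (run AND guard)
    return (start and guard) or (run and guard)

def minimized(start, run, guard):
    # Boolean minimization factors out the common guard term: guard AND (start OR run)
    return guard and (start or run)

# Exhaustive truth-table check proves both circuit descriptions are equivalent.
assert all(original(*v) == minimized(*v) for v in product([False, True], repeat=3))
print("both circuit descriptions agree for all 8 input combinations")
```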
16.2 General Scheme for Control Architectures for Automation
Up to the Fourth Industrial Revolution, further developments toward digitalization and the adoption of information technology solutions have been incorporated into production systems. CNCs and PLCs, communicating via fieldbus systems, still form the core control of production processes. Interfaces have been defined by the community to simplify the standard functionality of control systems. Only recently have cloud and edge computing been taking hold in production systems, along with the continuous integration of production knowledge into the control by engineering toolchains. The following sections summarize the state of the art in these areas.
16.2.1 Typical Control Architectures and Systems Most machines or production systems use CNCs and PLCs to control production processes and operations. These controls
can be found in the middle layer of the automation pyramid, clustering all field-level devices, such as axes, sensors, and subcomponents, and reporting to or taking orders from manufacturing execution systems (MES) or enterprise resource planning (ERP), i.e., higher-level planning systems. For specific applications, the central control unit can also be a robot control (RC), which covers specific requirements in terms of kinematic transformation and actuation when a robot is part of a production task. AGVs (automated guided vehicles) also have individual controls, often relying on custom control designs for localization and actuation of movements. If production equipment, such as a machine tool, a handling robot, and an AGV for logistical support, is working together on a joint production task, a master controller is often used to control the interaction between the individually controlled entities. Protocols for the interaction between automation components and manufacturing equipment are specified in VDMA 34180/ISO 21919. For an equipment controller that combines the motion planning task as well as the logical controller, e.g., a machine tool, the controls need to interact and to cover additional functionalities. All computations relevant to motion and path planning are executed in the NC. To activate further process requirements, such as the coolant supply or the chip conveyor, the NC offers machine functions ("M" commands in the NC programming code following DIN 66025) that directly trigger the PLC, which is responsible for all logical machine functions. The NC also handles job management in many cases, e.g., the production program administration and execution, as well as the tool management of all necessary tools that are available in the tool magazine of the machine and that might be needed for the execution of different programs. These tasks are mostly handled manually in small manufacturing plants, meaning personnel will be in charge of loading workpieces into the machine tool workspace and be responsible for the exchange and replacement of tools in the tool magazine. Larger manufacturing cells offer further automation of these tasks, such as pallet changing systems, as well as automated loading and unloading of the machine tool. The user can follow all processes and adapt necessary parameters at the human-machine interface (Fig. 16.1). The standard IEC 61131-3 specifies the most widely used programmable logic controller languages. Possible formats in which to program logic operations in a control include text-based languages such as structured text (ST; similar to Pascal) and instruction list (IL). Three graphical languages are available: ladder diagram, function block diagram (FBD), and sequential function chart (SFC). Additional function libraries, such as PLCopen motion control [3], offer functionality to incorporate the abovementioned motion functionality in this programming language as well.
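The interplay between NC and PLC described above can be sketched as a simple dispatch of auxiliary functions: the NC decodes an "M" word from the part program and hands it to the logic controller, which switches the corresponding machine function. The following Python sketch only illustrates this idea; in a real controller the logic would be written in an IEC 61131-3 language, and M-code assignments beyond the common ones shown here are machine-specific.

```python
# Hypothetical sketch of the NC/PLC interplay: the NC program issues auxiliary
# "M" functions, and the logic controller maps them to machine functions.

def coolant(on: bool):
    print("coolant", "on" if on else "off")

def program_end():
    print("program end, reset modal states")

M_FUNCTIONS = {
    "M08": lambda: coolant(True),    # coolant on
    "M09": lambda: coolant(False),   # coolant off
    "M30": program_end,              # end of program
}

def plc_handle(m_code: str) -> None:
    """Called by the NC kernel whenever an M word is decoded in the part program."""
    action = M_FUNCTIONS.get(m_code)
    if action is None:
        raise ValueError(f"unassigned machine function {m_code}")
    action()

for code in ("M08", "M09", "M30"):   # e.g., decoded from a DIN 66025 part program
    plc_handle(code)
```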
Fig. 16.1 Architectural structure of an equipment controller
Motion control is a specific task which considers all moving actuators and defines their synchronized movements. Different architectures and control loop designs are used in practice, depending on the performance measures and the application of the actuated components in manufacturing [4]. For complex motions, such as machining workpieces, CNCs offer extended functionality, such as the transformation of the contour definition into axis movements, the correction of tool geometries, etc. [5, 6].
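A core piece of this extended CNC functionality is the interpolator, which converts a programmed contour segment and feedrate into axis setpoints for every control cycle. The following Python sketch shows the principle for a straight-line move under simplifying assumptions (constant feed, no acceleration ramps, no tool-geometry compensation); it is illustrative only.

```python
import math

def interpolate_line(start, end, feed_mm_min, cycle_s=0.001):
    """Yield per-cycle (x, y, z) setpoints for a straight move at constant feed.
    Simplified: no acceleration ramps, no tool-geometry compensation."""
    length = math.dist(start, end)
    if length == 0.0:
        return
    step = feed_mm_min / 60.0 * cycle_s          # path increment per control cycle in mm
    steps = max(1, math.ceil(length / step))
    for i in range(1, steps + 1):
        t = min(1.0, i * step / length)
        yield tuple(s + t * (e - s) for s, e in zip(start, end))

# Example: a 100 mm move in X at 600 mm/min yields 10 000 setpoints at a 1 ms cycle.
setpoints = list(interpolate_line((0, 0, 0), (100, 0, 0), feed_mm_min=600))
print(len(setpoints), setpoints[-1])
```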
Starting with initially two technologies for industrial communication systems, Ethernet and Modbus, 18 different fieldbus protocols or even protocol families have developed in the meantime. Current commercial solutions additionally combine the cabling of communication technology with the power supply in a common hybrid cable. This further reduces the effort required for commissioning axis systems. Noteworthy characteristics for fieldbus systems in manufacturing technology are • • • •
Short cycle times, real-time capability, and synchronicity High-reliability requirements Line or ring topologies Applications with many participants but very little data per participant
Parallel to fieldbuses for communication between controller and field-level devices, communication between controller and higher-level systems as MES and ERP has been developed. In addition to many proprietary interfaces offered by controller manufacturers, Open Platform Communications Unified Architecture (OPC UA) has emerged as the Industry 4.0 protocol for accessing information of machine tools [7], robots [8], and other machinery equipment [9].
16.2.3 Proprietary and Partially Open Interfaces 16.2.2 Industrial Communication and Fieldbuses Since central controllers control a varying number of axes, commercial systems are modular in design. In most cases, the central control runs on a structurally separate unit. The servo drives are directly connected to the motors. Drive and central control communicate via so-called fieldbus protocols. The first development of fieldbus protocols started in 1975. At the turn of the millennium, IEC 61158 defined the basic structure in layers, divided into services and protocols. Existing fieldbuses have been incorporated into this structure; implementation concepts with profiles have been summarized in IEC 61784 in 2002. Today’s fieldbuses use the physical layers of the open systems interconnection model (OSI; reference model for network protocols) via the industrial Ethernet, however extended by special message formats, as well as real-time and security aspects. The constructional advantage is that an Ethernet interface is common regardless of the fieldbus technology used. To ensure functional safety in industrial production, the main focus lies on fast and reliable communication. Fieldbus communication and control processes usually run in a time cycle of 1–10 ms and numerous extensions for realtime features have been implemented.
Digitization of Industry 4.0 drove the development of more and more open interfaces. But even earlier, initiatives have been working on developing standards in order to enable interoperability and user-defined extensions for factory automation. For controller architectures, Open System Architecture for Controls Within Automation Systems (OSACA) [10] and a project called “cross-manufacturer modules for user-oriented use of the open control architecture (Hümnos)” were initiatives taking place before the turn of the century to develop a definition of a hardware-independent reference architecture for numerical machine tool equipment including the functionalities of NC, RC, PLC, and cell controls (CC) [11]. For higher-level access to manufacturing components, activities to define asset administration shells [12], description formats such as automation ML [13] as well as a data standard in ECLASS [14] were taking place.
16.2.4 Resources in the Network: Cloud and Edge for Storage and Compute State-of-the-art manufacturing systems calculate all control and logic calculation on the controller device. Cloud con-
nectivity offered by, e.g., Siemens MindSphere [15] currently concentrates on services such as analysis and prediction, connectivity to machines, and monitoring. In specific applications, additional edge computing devices offer additional computing power to the central control unit. Most edge devices are currently installed in production systems to enable systems without state-of-the-art connectivity to connect to cloud systems, or to offer the controller data in a specified format, such as asset administration shells or OPC UA [16].
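As a sketch of such standardized access, the snippet below reads a value from a controller's OPC UA server using the open-source asyncua package for Python. The endpoint address and node identifier are hypothetical placeholders; in practice they are defined by the machine's information model or a companion specification.

```python
# Minimal sketch of reading machine information over OPC UA with the open-source
# "asyncua" package. The endpoint and node identifier below are placeholders.
import asyncio
from asyncua import Client

async def read_machine_state():
    async with Client(url="opc.tcp://machine.example:4840") as client:
        node = client.get_node("ns=2;s=Machine.State")   # hypothetical node id
        value = await node.read_value()
        print("machine state:", value)

asyncio.run(read_machine_state())
```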
16.2.5 Engineering and Virtual Tryout Current engineering for manufacturing uses toolchains such as the CAD-CAM-CNC chain, where manufacturing instructions can be derived directly from the three-dimensional product description and then converted into G-code according to DIN 66025. However, this toolchain must be individually adapted to the specific manufacturing technology or machine tool. Advances, such as STEP-NC [17], combine the geometric description of workpiece features with processing information. However, until today there are only individual integrations into existing CNC systems. For the logical programming of controllers, manufacturer-dependent tools are offered by, e.g., Siemens (TIA Portal), Bosch Rexroth (IndraWorks), and Beckhoff (TwinCAT), many of which rely on the Codesys® IDE [18]. These tools allow the mapping of hardware inputs and outputs to runtime variables and the programming of logic control according to IEC 61131-3. Motion control configuration and basic programming can also be handled in these tools, with options to synchronize axes, define cams, and, in some cases, even configure transformations for a machine setup. Virtual testing is a more recent development in which control engineering can be tested with a virtual machine before the ramp-up of the hardware machine. The virtual machine is modeled to behave according to the real machine and is provided with a visual representation so that the behavior executed by the controller can be monitored. The virtual tryout shortens development times and offers the possibility to detect errors in control logic before the ramp-up with very little risk, as the virtual machine cannot be damaged in trial runs.
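To make the end of the CAD-CAM-CNC chain concrete, the following toy "post-processor" turns a planar contour into a few lines of DIN 66025 G-code. It is only a sketch under strong simplifications (no tool data, compensation, or technology parameters) and does not represent any particular CAM system.

```python
# Toy post-processor stage of the CAD-CAM-CNC chain: a planar contour becomes G-code.
def to_gcode(contour, feed=500, safe_z=5.0, depth=-2.0):
    lines = ["G90 G21", f"G00 Z{safe_z:.1f}"]                 # absolute positioning, mm
    x0, y0 = contour[0]
    lines += [f"G00 X{x0:.3f} Y{y0:.3f}", f"G01 Z{depth:.3f} F{feed}"]
    for x, y in contour[1:]:
        lines.append(f"G01 X{x:.3f} Y{y:.3f} F{feed}")
    lines += [f"G00 Z{safe_z:.1f}", "M30"]
    return "\n".join(lines)

print(to_gcode([(0, 0), (40, 0), (40, 20), (0, 20), (0, 0)]))
```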
16.3 Challenges
Nevertheless, the greatest challenge in automation technology is the control of complex manufacturing processes in a control architecture that is still quite rigid today. Besides, increasing adaptability to requirements from the market, such as quantities or product changes, is required for an economical application. Up to now, machine builders have very
often differentiated themselves through mechanical solutions, and automation systems are optimized for one operating state. Increasingly, however, flexibilization of automation solutions is required. Differentiation between machine builders is, therefore, increasingly found in the algorithms and the software. It is insufficient to just buy control functions: the control systems must be sufficiently open to integrate individual algorithms. Operators must also be able to adapt the operating point of machines and systems to new market requirements. This may involve adjustments or extensions of algorithms from the field of control engineering, but also data analysis or artificial intelligence. In addition to the challenges described before, an increasing demand for the convergence of operation technology (OT) and IT can be observed. Driven by the short innovation cycles of information technology, shifting the physical properties of machines and processes into the virtual world of software sounds increasingly tempting. As a result, increases in flexibility and the mastery of even the greatest complexity, even with small production quantities, are within reach. However, not every promise of IT can be fulfilled under the tough conditions of production. The requirements for security, accuracy, reliability, availability, and others are still the top priority for production planners. The more tempting the offerings of cloud and edge technology become, and as much as the new paradigms of Industry 4.0 are demanded, the more important it is that new architectures for automation make the connection between state-of-the-art IT and the demands of production. Industry 4.0 concepts can only generate added value through new algorithms and services. New architectures and solutions are, therefore, required, which are described below.
16.4 New Architectural Components and Solutions
Many developments outside of production offer high-performance approaches to the use of modern computer technology. Often, "only" an addition or modification at one point or another of a standard or de facto standard has to be made to implement this in automation software. In the last decade, several attempts have been made to transform traditional architectural models into new, technically modern architectures. The classical three-part division of the architecture has always been in the foreground [19, 20]: on the one hand, the physical level for describing the real resources of which a system is composed; second, the functional level, which describes the necessary functions that the real system described on the physical level should or can perform; and third, the level describing the interaction between the functional and the physical level. Among the multitude of new approaches, three have emerged
as particularly noteworthy: ISA-95, RAMI 4.0, and IIRA – these three candidates are critically compared and evaluated in [21, 22]. What they all have in common is the dissolution of rigid structures into freely networkable structural elements, which are particularly needed by cyber-physical production systems (CPPS). Unfortunately, many promising approaches turn out to be useless in production on closer examination, as important general conditions are not fulfilled. The new architectures for automation can be divided into the following categories, most of which have already grown out of the research stage and are – as of 2020 – in productive use:
• Virtualization technologies (virtual machines, containers, etc.) to separate software from dedicated hardware
• New hardware/protocols for communication (time-sensitive networking, wireless technologies, 5G, extensions to OPC UA, etc.) to allow flexible and performant any-to-any information exchange
• Converged industrial communication based on the above technologies to secure investment over a long time
• Hardware-in-the-loop simulation, e.g., digital twins, to recognize potential problems upfront
• Data analytics and artificial intelligence to drill down into the newly available data
In the following, the details of these new architectural components are discussed.
16.4.1 Virtualization Techniques A hierarchical separation between individual control levels with regard to the control architecture exists in today’s production plants. In the respective control level, new target values for the level below are calculated, based on actual values using static algorithms. The control in today’s production is thus always top down. The individual control systems are self-contained units that can exchange mostly statically configured information via a large number of different interfaces. In addition, the individual controls have a fixed scope of functions and computing power. The software of these control systems has a fixed link to the used computing hardware. Methods of virtualization are used to counteract exactly these limitations, considering architectures and solutions in IT. System virtualization allows the decoupling of physical and virtual computing platforms – which is a potential for automation control architectures, too. In the field of cloud computing, a variety of use cases is enabled. This includes the consolidation of services on multiple virtual machines, which would have traditionally been deployed on different physical platforms. This is possible, because so-called “hypervisors” provide strong isolation between multiple virtual
operating system (OS) instances, as well as between the host system and virtual machines. Furthermore, resources such as memory, communication devices, and processes can be dynamically allocated to virtual machines according to specified quality of service (QoS) parameters. Hence, virtualization makes load balancing and optimization of resource usage in clusters of physical computers possible. While hypervisor-based virtualization provides strong and dynamic isolation between virtual machines, it is considered heavyweight concerning overheads, such as memory usage and especially latencies. These drawbacks are overcome by OS-level virtualization, e.g., container-based virtualization and methods whose virtual deployment units are software containers. Container-based virtualization allows running applications in isolated areas. No additional guest operating systems are executed, because containers share the kernel of the host operating system. Whereas the abstraction level for virtual machines is at the hardware level, the abstraction level in OS-level virtualization lies at the system call level (see Fig. 16.2). This weaker form of isolation between the individual containers is realized by using namespaces and control groups. With the help of namespaces, the OS kernel provides different processes with different views on the system. Global resources, such as the file system and process identities, are surrounded by a namespace layer. Thus, each container has an individual view on the system. The management of resources is carried out by control groups, which are used to limit resources for process groups. A namespace regulates what a container "sees"; control groups regulate the access of a container instance to resources. Unlike virtual machines, containers achieve near-native performance in terms of memory usage and latencies. Their main drawback is the weak isolation between instances: e.g., faulty software that causes the host kernel to crash will affect all other containers and regular processes on the same host. Thus, software containers can only be used if all containers belong to the trusted codebase of an ecosystem. To alleviate these limitations, cloud providers employ additional levels of indirection. These include lightweight virtual machines [23], where every container is executed inside a minimal virtual machine, and user space kernels [24], which implement large amounts of kernel functionality in user space. Isolation is achieved as they only access a limited set of system calls to realize the required functionalities. As latency is not the most important property in the higher levels of automation systems, management, planning, and supervisory systems can easily be migrated to cloud systems. However, the same does not apply to lower-level systems. The convergence of automation technology and information technology at, e.g., the control level entails similar advantages as cloud computing, such as optimized business processes and hardware utilization. Decentralization is one major trend of Industry 4.0, which includes a flattening of the classical communication hierarchies and the integration of higher-level functionalities even into small automation devices.
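A minimal sketch of how such container-based deployment looks from the user's side is given below, using the Docker SDK for Python to start a containerized control service with cgroup-backed resource limits. The image name and command are hypothetical placeholders, and a real low-latency deployment would additionally involve real-time scheduling and an appropriately configured OCI runtime.

```python
# Sketch of deploying a control microservice in a software container with
# cgroup-based resource limits, using the Docker SDK for Python.
import docker

client = docker.from_env()
container = client.containers.run(
    image="example/plc-runtime:latest",   # hypothetical control-service image
    command="run --cycle-time 1ms",       # hypothetical entry-point arguments
    detach=True,
    cpuset_cpus="2",                      # pin the service to an isolated CPU core
    mem_limit="256m",                     # memory limit enforced via control groups
    network_mode="host",                  # avoid NAT overhead for fieldbus/OPC UA traffic
)
print(container.short_id, container.status)
```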
Fig. 16.2 OS-level virtualization, types of hypervisors, and paravirtualization
Because this type of automation system is based on multiple, distributed systems, which interact to achieve cohesive intelligence, the technologies of cloud computing are an appealing basis for realizing the deployment. However, as latency is an important factor in control systems, the flexible form of hypervisor-based virtualization, especially on cloud computing resources, is not suitable for all use cases. The main reason for this is that the hypervisors used in cloud systems typically employ hierarchical schedulers. For example, QEMU/KVM, which makes Linux a hypervisor, uses the Linux scheduler to schedule virtual CPUs (vCPUs). vCPUs are regular user space threads that can be assigned to virtual machines. The scheduler of the virtual machine, in turn, assigns the guest's threads to vCPUs. While hierarchical real-time scheduling is theoretically proven [25–27], this approach suffers from high scheduling overheads. Furthermore, there is a semantic gap between the host scheduler and the guest scheduler that comes with problems such as the lock holder preemption problem [28]. While paravirtualized scheduling, in which hosts and guests can communicate to overcome the semantic gap, is possible, this approach violates the isolation paradigm [25]. With statically partitioning hypervisors, in contrast, lower latencies are achieved because interrupts do not have to be received by the hypervisor and forwarded to a virtual machine, as interrupt remapping enables the virtual machines to receive interrupts directly [29]. This approach is typically employed if strong isolation and low latency are needed by applications. Use cases include configurations where critical and noncritical software components
are deployed on the same physical host system and strong isolation is mandatory [29]. For container-based deployments, every node runs a midlevel container runtime which implements a standardized interface such as the container runtime interface (CRI). The midlevel runtimes in turn delegate commands to low-level runtimes. Low-level container runtimes compatible with the Open Container Initiative (OCI), such as runc or crun, set up the necessary control groups and namespaces and execute commands inside them. These container tools are an appealing basis for modular, distributed control applications. Containers are used as units of deployment, which encapsulate functionalities, such as services in service-oriented architectures (SOA) or, more specifically, microservices. The cloud-based control platform for cyber-physical production systems pICASSO [30, 31] is an application example for virtualization in industrial control, which provides machine controls as a service (MCaaS) from local/global private/public clouds. The need for more flexible architectures of production systems arises from the hierarchical separation between the individual control levels. New setpoints for the subordinate level are calculated based on the actual values using static control algorithms. The architecture of today's production systems is thus top down. Individual control systems have fixed functional scopes. Individual controllers are self-contained units that use a large variety of communication interfaces to exchange mostly statically configured information. This leads to a significantly restricted efficiency of production systems. To overcome these limitations, cloud-based control platforms such as pICASSO enable increases in efficiency through flexible provisioning of control systems for cyber-physical production systems. The existing, monolithic
control architectures are broken up, modularized, and extended with cloud-native technologies such as centralized data processing and service-oriented architectures. In this way, expert knowledge of plant manufacturers and plant operators can be utilized optimally. Furthermore, cloudbased control systems offer a suitable basis for the dynamic provisioning of computing resources for cyber-physical production systems. Cloud-based control architectures enable a large variety of use cases, e.g., robot control as a service [32], control of robots and machine tools with superordinate factory clouds [33], cloud-based control of partially automated manual workplaces, etc. While the issues of dynamic scaling and dynamic provisioning of modularized control systems can be solved by cloud computing, another challenge arises. Physical parts of the machines, such as sensors and actors, which are located on the shop floor, must communicate with control algorithms, which in turn are located in remote cloud-based systems. Among traditional distributed industrial control networks, fieldbuses and realtime Ethernet (RTE) are used to achieve deterministic, reliable communication with low latencies. To overcome the inherent nondeterministic behavior of standard Ethernet and its TCP/UDP/IP protocols, modifications are applied to different levels of the communication stack, starting from the application level and ending at the hardware level. Opportunity analysis for the communication of control data between machines and cloud-based control systems based on two application scenarios is presented as part of pICASSO. The first use case is the control of a five-axis milling machine, the second one the control of a three-axis milling machine. Three types of two-way data streams could be identified in the control system. Data is exchanged between a CNC and spindle/axis drives. These include control and status words, setpoints, actual positions, and additional settings such as modes of operation. The second type of data streams includes actual values and setpoints. These are exchanged between PLCs and I/O terminals. The third stream includes data of the human-machine interface (HMI) (Fig. 16.3). Only the axis control loop is considered critically with cycle times of 1 ms because extra delays in the PLC and HMI communication path only decrease the machine’s performance or the user’s satisfaction, but will not influence the quality of the workpiece. It has been shown by benchmarks that TCP is not applicable for control software as a service (CSaaS), e.g., between a machine located in Germany and the control system in New Zealand because of missing bandwidth capabilities. Even though UDP provides the necessary bandwidth, an average telegram loss of 3–4% with up to 10.000 consecutive missed packages makes additional measures necessary. Consecutive message loss is solved by using buffers located in the communication modules. Furthermore, they may interpolate single missing telegrams. The best results in terms of the average round trip time were achieved
Fig. 16.3 The cloud-based control platform for cyber-physical production systems pICASSO [34]
by WebSockets. The WebSocket protocol is based on HTTP and TCP, and is thus reliable, while allowing datagram-based telegrams to be transmitted like UDP. The high average round trip time and the nondeterministic jitter of WANs prevent cloud-based control systems from being used universally. Nevertheless, cloud-based control systems provide an appealing basis for resource-efficient and scalable production systems if these drawbacks can be counteracted by suitable algorithms, or if high latencies and jitter are acceptable for specific production processes. Another source of latency besides communication is the virtualization overhead. Most virtual machine monitors are designed to provide high throughput but not low latencies and deterministic behavior. Low latencies are achieved by adapting non-real-time hypervisors or by using dedicated real-time hypervisors like XtratuM [35] and RT-Xen [36]. As the latter originate from the field of embedded systems, they are not widely available in public cloud systems. One approach to low-latency cloud instances is QEMU/KVM in combination with the PREEMPT_RT patch. Both the host system, e.g., the QEMU/KVM hypervisor, and the guest OS are patched to provide real-time scheduling. One appealing component of such a real-time scheduler framework is the earliest-deadline-first scheduler, which is combined with the constant bandwidth server (CBS). This algorithm guarantees that a task receives exactly its configured runtime (in microseconds) before its deadline, counted from the beginning of each period. Test frameworks for hierarchical scheduling, such as multiprocessor periodic resource models [27] and the parallel supply function [26], are based on this property. Although schedulable configurations are theoretically guaranteed to meet all temporal requirements, hierarchical scheduling is not widely adopted in real-time systems, where low latency is required. For this and
For these reasons, partitioning hypervisors may be used instead.

Modularization entails a greater need for orchestration between services, as the traditional centralistic components of control technology are omitted. A service registry based on the OPC UA standard is used to manage and coordinate the individual components and to enable the exchange of information between them. For this purpose, information models are generated, or existing models such as OPC UA companion specifications are used. Other implementations of cloud-based control systems have similar architectures and differ mainly in the hypervisors and IP-based communication protocols used. These architectures leverage server-based virtualization technologies such as hypervisor-based virtual machines. Due to the nondeterministic components and communication protocols, and thus the high execution jitter and communication delays, such solutions are unsuitable for time-critical and deterministic applications. To reduce the communication delay between virtualized, server-based controllers and field devices and to minimize the execution jitter of control programs, virtualization software is integrated into embedded devices [37]. The overhead that comes along with hypervisor-based virtualization makes the implementation of hard real-time applications almost impossible. Most of the existing literature on the virtualization of soft PLCs follows hypervisor-based approaches [38–40]. Because of this and because of their restricted modularity, these architectures still form monolithic control systems characterized by nondeterministic behavior, which makes them unsuitable for applications that require hard real-time capabilities, even though especially the cloud-based architectures provide non-negligible advantages such as dynamic scaling and optimized resource consumption. The modularity is restricted because PLC programs written in the programming languages of IEC 61131-3 themselves form monolithic applications and could benefit from further modularization.

Besides the cyclic execution of control programs on a PLC according to IEC 61131, IEC 61499 specifies a generic model for distributed automation systems in which the execution of the control logic is event based. IEC 61499 allows for object-oriented programming of systems consisting of several programmable devices connected via a network. The basis for IEC 61499 applications is the basic function block, which, in contrast to the function blocks of IEC 61131, has inputs and outputs for both data and events. The algorithms of a basic function block are not executed cyclically but when an event is received at an event input. The behavior of a basic function block is defined in its execution control chart, in which states, transitions, and actions are defined. An action consists of an algorithm, which is executed when changing to the corresponding state, and the setting of an output event. Both standards are considered a basis for modular control applications.
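To give a flavor of the event-driven execution model of IEC 61499, a basic function block can be approximated as an object whose algorithm runs only when an event input fires and which then emits an output event. The following Python sketch is purely illustrative and is not an implementation of the standard or of any particular runtime:

```python
class ThresholdFB:
    """Toy analogue of an IEC 61499 basic function block.

    Data input : 'value'   Data output: 'alarm'
    Event input: 'REQ'     Event output: 'CNF' (delivered via callback)
    """

    def __init__(self, limit: float, on_cnf):
        self.limit = limit
        self.on_cnf = on_cnf      # subscriber for the CNF output event
        self.value = 0.0          # data input
        self.alarm = False        # data output

    def event_req(self, value: float) -> None:
        """REQ event: run the algorithm once, then emit CNF."""
        self.value = value
        self.alarm = self.value > self.limit   # the block's algorithm
        self.on_cnf(self.alarm)                # output event carrying output data


# Wiring: the CNF event of one block triggers a downstream reaction
fb = ThresholdFB(limit=80.0, on_cnf=lambda alarm: print("alarm:", alarm))
fb.event_req(75.0)   # alarm: False
fb.event_req(92.5)   # alarm: True
```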
A similar concept for modularized control software is specified by the proprietary TwinCAT Component Object Model (TcCOM), which defines the properties and the behavior of software modules. Modules have many applications; all tasks of an automation system can be specified as modules. Consequently, no distinction is made between modules that represent basic functions of an automation system, such as real-time tasks, fieldbus drivers, or a PLC runtime system, and modules that implement more user- and application-specific algorithms for the open-loop or closed-loop control of a machine unit. There are three types of inter-module communication in TcCOM: I/O mapping, which copies the outputs of modules into the inputs of other modules; I/O data pointers, which realize communication via shared memory and blocking remote procedure calls; and the automation device specification (ADS), which provides acyclic, event-based communication within a single device and between modules on multiple devices via a network. One major drawback of TcCOM is the limited isolation, as real-time modules are executed directly in kernel mode. Furthermore, every module must implement a specific state machine, which might reduce the flexibility regarding the module's functionality. The main drawback, however, is the low portability of modules, as a TcCOM module will not run directly on PLC hardware of a different vendor.

In [41, 42], an architecture for implementing IEC 61499 is presented. Software containers are used as deployment units for function blocks. The authors identify a strong isolation mechanism as a necessary basis for the architecture but still utilize software containers due to the high overhead of virtual machines. For the orchestration and scheduling of the function blocks, a central component called the flexible control manager is used, which combines the functionalities of a container image registry and a service registry. Further publications based on software containers for the encapsulation of control software are [43–47]. Tasci et al. propose an architecture for real-time container runtime environments and real-time inter-process communication between modules, i.e., microservices that are written in IEC 61131-3 or any other programming language and encapsulated in containers [46], as shown in Fig. 16.4. Real-time messaging is used for communication between microservices. Compared to other communication technologies, such as representational state transfer (REST) or remote procedure calls (RPC), messaging offers the highest degree of flexibility, scalability, and loose coupling. For performance reasons, the messaging system must support different protocols, which behave identically from the perspective of the application. Thus, an efficient inter-process communication (IPC) method is used for communication between microservices running on the same node, and a network protocol is used for communication across multiple nodes. This approach is similar to the communication infrastructure in TcCOM but differs in terms of loose coupling and dependencies. Modules can be implemented in any programming language and can differ completely in behavior.
Fig. 16.4 Simplified architecture of a container-based control system [46]: each node runs a low-level and a high-level container runtime hosting services (e.g., IEC 61131-3 programs), coordinated via an image registry and a service registry
The only dependency lies in the interfaces between services, which are specified by common message structures. For telegram exchange, protocols such as UDP and TCP, or real-time-capable protocols, are used. A basic technology for implementing real-time-capable Ethernet protocols is time-sensitive networking (TSN), which standardizes real-time mechanisms on the data link layer and the properties of the hardware. Depending on the operating system, a variety of different mechanisms can be used to implement IPC, such as signals, shared memory, pipes, or Unix domain sockets under Linux, or I/O data pointers and I/O mapping in TwinCAT. To avoid having to consider the peculiarities of each transport mechanism within the applications, the API of the messaging library abstracts them. The implementation [47] leverages UDP/IP for inter-node communication and Unix domain sockets for intra-node communication. Either Linux with the PREEMPT_RT patch or, for applications that require particularly low latency, Xenomai is utilized as the operating system. Container technologies are increasingly finding their way into commercial control systems.
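The transport abstraction can be sketched as follows (an illustrative Python example, not the messaging library from [47]; addresses and socket paths are placeholders): the application publishes a message through one API, while the transport, a Unix domain socket for services on the same node or UDP for remote nodes, is selected purely by configuration.

```python
import socket

class Transport:
    """Minimal send-only transport abstraction for intra- and inter-node messaging."""

    def __init__(self, target: str):
        if target.startswith("/"):                       # a path -> Unix domain socket (same node)
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
            self.addr = target
        else:                                            # "host:port" -> UDP (remote node)
            host, port = target.rsplit(":", 1)
            self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            self.addr = (host, int(port))

    def publish(self, payload: bytes) -> None:
        """Identical call for both transports; the application does not care which is used."""
        self.sock.sendto(payload, self.addr)


# Same application code, different deployment:
local = Transport("/tmp/axis_setpoints.sock")   # microservice on the same node
remote = Transport("192.0.2.10:5555")           # microservice on another node
for t in (local, remote):
    try:
        t.publish(b"setpoint:12.5")
    except OSError as err:                      # no peer is listening in this sketch
        print("delivery failed:", err)
```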
16.4.2 Components for Communication

The digitalization of production, with its innovative solutions such as big data or AI-based approaches, is the key trend in the automation industry. Most solutions are based on IT systems with a close link to the real processes and their control systems (i.e., operational technology). Some applications, such as visualization or diagnostics, only require access to the data,
while others require real-time integration, for example a feedback loop for challenges such as vision-based control. For efficient digitalization, converged networks are required that allow the interconnection of IT and OT systems while maintaining deterministic requirements. Converged networks are already well established for large-scale networks in other industries; for instance, a unified infrastructure is used for telephone, Internet, television, and data services. Converged networks generally have two dimensions: a convergence of different traffic types (data with different requirements) and applications on the one side, and a convergence of technologies on the other. For industrial communication, and especially on the shop floor, converged networks are not yet well established. They need to support deterministic and safety-critical real-time traffic as well as typical IT traffic at the same time, without interference. A successful transformation would allow the transition from a communication setup according to the automation pyramid to a flat network, as depicted in Fig. 16.5 for cyber-physical production systems. This transition requires significant changes and new technologies. Fieldbuses typically provide all-in-one solutions covering all layers of the OSI network layer model. In contrast, IT-based systems use a stack of protocols, combining functions on different layers. Some of the existing IT and OT technologies can be used in such a stack-based approach. To fill the gaps, new key technologies are emerging, including TSN, which adds real-time capabilities to standard Ethernet according to IEEE 802.1; wireless technologies with real-time capabilities, including 5G and Wi-Fi 6; and OPC UA, providing a unified solution on the higher layers.
Fig. 16.5 Transformation from the automation pyramid (planning, management, supervisory, control, and field levels) to a CPPS-based automation, according to [48]
Time-Sensitive Networking

A key enabling technology for converged networks is time-sensitive networking (TSN), referring to a set of new standards in IEEE 802.1 that extend the well-established Ethernet with real-time capabilities [49]. Ethernet, as the prevalent standard for IT communication, has not been suitable for industrial applications; TSN Ethernet, however, can potentially be used from the cloud down to the field level. In contrast to fieldbuses, TSN only covers the lower layers of communication and therefore fits a layered approach. The work of the IEEE 802.1 TSN Task Group is based on the IEEE Audio Video Bridging (AVB) standards. To enhance Ethernet with deterministic capabilities, the new standards cover time synchronization, scheduling, performance optimization, and configuration. TSN is not a single standard but rather a collection of functions and tools to guarantee determinism. Depending on the application and its specific requirements, different TSN standards can be combined; they must be supported by the network infrastructure as well as by the endpoints. However, even endpoints that do not support any TSN features are interoperable and can play a useful role in a network. TSN does not strictly prioritize one traffic type over another, as done by other real-time communication systems, but instead allows a parallel and individual allocation of resources. TSN supports topologies similar to standard Ethernet but adds capabilities for redundant paths to improve reliability. Besides endpoints and network infrastructure, switched endpoints, which are popular in industrial automation, are supported as well; from a TSN perspective, these devices are a switch and an endpoint combined in a single enclosure. The most relevant TSN standards are introduced in the following:

• Time synchronization of the devices in a TSN network is essential and the basis for the other standards. IEEE 802.1AS defines the synchronization mechanism and is derived from the precision time protocol (PTP) IEEE 1588. The common time base is used for TSN mechanisms like traffic shaping, but also on the application layer, replacing event-based synchronization. In the newest revision, new mechanisms for redundancy were introduced, thereby fulfilling requirements from the automation industry.
• Traffic shaping is used to allow deterministic data transmission. IEEE 802.1Qbv specifies a time-aware shaper that allows time-gated transmission of data of a certain traffic type according to a cyclically recurring schedule (a simplified gate-schedule sketch follows this list). With well-designed time slots across the network, minimal latencies can be guaranteed and collisions ruled out. Alternatively, guarantees regarding bandwidth can be based on the credit-based shaper standardized in IEEE 802.1Qav.
• Redundancy is provided by the standard IEEE 802.1CB. It allows sending data on two independent paths by duplicating packets at one point and dropping the copies at another. Thereby, seamless redundancy is guaranteed, so that an application is not affected at all by the loss of a path.
• Frame preemption, which is specified in IEEE 802.1Qbu, allows the interruption of the transmission of a frame in order to transmit a frame with a higher priority; the remainder of the preempted frame is sent afterward. This feature gives certain real-time guarantees based on prioritized traffic, as well as a reduction of guard bands for scheduled traffic.
• The configuration of a TSN network, including the infrastructure and the endpoints, is essential to guarantee deterministic behavior. IEEE 802.1Qcc introduces different configuration models (centralized, distributed, and hybrid), which are further detailed by subsequent standards. Each model has specific advantages and disadvantages.
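As referenced in the traffic-shaping item above, the idea of a cyclic gate-control list can be illustrated with a small Python sketch (illustrative only; it does not configure real TSN hardware, and the traffic-class names and slot lengths are arbitrary): each entry opens the gates of selected traffic classes for a fixed time slice, and the list repeats every cycle.

```python
from dataclasses import dataclass

@dataclass
class GateEntry:
    open_classes: set   # traffic classes whose gate is open
    duration_us: int    # length of the time slice

# One 1000-us cycle: an exclusive slot for cyclic control data first, then the rest
schedule = [
    GateEntry({"control"}, 250),
    GateEntry({"audio_video"}, 250),
    GateEntry({"best_effort", "audio_video"}, 500),
]

def gate_state(t_us: int):
    """Return the traffic classes whose gate is open at time t (modulo the cycle)."""
    cycle = sum(e.duration_us for e in schedule)
    t = t_us % cycle
    for entry in schedule:
        if t < entry.duration_us:
            return entry.open_classes
        t -= entry.duration_us

print(gate_state(100))    # {'control'}
print(gate_state(1300))   # {'audio_video'}
```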
Wireless Technologies

Industrial real-time networks are almost exclusively based on wired networks, since wireless technologies do not provide the required determinism. The changing requirements due to the digital transformation cause an increasing need for flexible and mobile connections. Two emerging wireless
technologies are potential future solutions, based on WLAN and cellular networks.
Deterministic WLAN: Wi-Fi 6

Wireless LAN (WLAN) according to IEEE 802.11 is the prevalent wireless standard for IT networks. With the standard IEEE 802.11ax, released in 2021, new features were integrated that improve performance and, for the first time, enable real-time communication [50]. The Wi-Fi Alliance, the industrial consortium that guarantees interoperability between devices and provides certification, refers to this standard as Wi-Fi 6. Key features of Wi-Fi 6 are:

• Support for bands beyond 6 GHz (Wi-Fi 6E).
• The spatial multiplexing technique multiuser multiple-input multiple-output (MIMO), introduced in Wi-Fi 5 (IEEE 802.11ac) to allow simultaneous data transmission from an access point (AP) to multiple clients, is extended to the upload direction.
• OFDMA (orthogonal frequency-division multiple access) provides multiplexing in the frequency domain. Channels are split into so-called resource units (RUs) that can be assigned to different clients in the network, allowing collision-free simultaneous transmission. The RUs are assigned by a central management, thereby allowing deterministic network access with latency guarantees.
• The target wake time (TWT) mechanism allows devices to wake up only in specified periods to reduce energy consumption.

Hardware with Wi-Fi 6 support was expected on the market in 2020, with the first generic devices in 2021. Wi-Fi 6 provides, like TSN, base functions to build a real-time network; how to combine and configure these functions and how to integrate them with higher-layer protocols and wired technologies is an ongoing development. A key advantage of Wi-Fi 6 is that it operates entirely on license-exempt bands and can be set up and maintained by local administrators, so no network operators are required. The projected performance numbers match many requirements of the automation industry.
5G: Fifth-Generation Cellular Networks

5G refers to the fifth generation of the standard for broadband cellular networks specified by the 3rd Generation Partnership Project (3GPP). It is based on the long-term evolution (LTE, 4G) standard and is released in multiple stages, starting in 2018. Key features are a higher bandwidth, new frequency bands, capabilities for large numbers of devices, and real-time guarantees. Mechanisms like OFDM (orthogonal frequency-division multiplexing) are used to assign the resources to different devices.
5G is designed for three use cases:

• Enhanced mobile broadband (eMBB) focuses on providing high data rates to end devices.
• Massive machine-type communication (mMTC) has the goal of connecting large numbers of devices as an enabler of the Internet of Things (IoT), including mobility and smart cities, but also industrial applications.
• Ultra-reliable low-latency communication (URLLC) has the goal of supporting real-time communication with guaranteed latencies and high reliability, potentially usable for industrial automation.

In the beginning, the focus was on eMBB; it is now shifting toward mMTC and URLLC. Even though the goal of 5G is to guarantee latencies of less than 1 ms, its usability for industrial real-time communication in control applications will be limited, because this number only refers to the latency of a single wireless link, not to application-to-application latency. However, many applications related to the digitalization of production have matching requirements. In order to be integrated into a larger real-time network, 5G should be transparently usable in a TSN network. The goal is to manage a 5G link in the same way as a cable-based link, albeit with a higher latency and jitter. This approach requires a close link between the configuration of the 5G and the TSN network and is still under development.
OPC Unified Architecture (OPC UA)

To fulfill the increasing demand for interconnection between OT and IT systems, a standardized protocol up to the application layer was required. OPC UA, driven by a large number of companies and managed by the OPC Foundation, fills this role. OPC UA combines the technical specification of the transport layer with a data model and associated modeling rules. Based on these two basic layers, the OPC UA base services are specified; they cover communication setup, session handling, and methods for signing and encryption. OPC UA also specifies methods for data access (DA), alarms and conditions (AC), historical access to information (HA), and complete programs (Prog). Two additional layers allow the detailed modeling of devices, machines, processes, or services based on information models. To ensure interoperability, companion specifications can be defined for specific device types across vendors. Single vendors can extend these and integrate additional vendor-specific specifications (Fig. 16.6). In standard OPC UA, the communication is based on a client-server model; publish-subscribe mechanisms were recently added in Part 14 of the OPC UA standard [51].
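For illustration, the following sketch reads a value from an OPC UA server using the open-source opcua-asyncio (asyncua) Python library; the endpoint URL and the node identifier are placeholders and depend on the information model exposed by the particular server or companion specification.

```python
import asyncio
from asyncua import Client  # pip install asyncua

async def read_temperature():
    # Placeholder endpoint of a machine's OPC UA server
    url = "opc.tcp://192.0.2.20:4840"
    async with Client(url=url) as client:
        # Placeholder NodeId; real ids come from the server's (companion) information model
        node = client.get_node("ns=2;s=Machine1.Spindle.Temperature")
        value = await node.read_value()
        print("spindle temperature:", value)

if __name__ == "__main__":
    asyncio.run(read_temperature())
```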
Fig. 16.6 OPC UA architecture: transport and data model at the base, the OPC UA base services, the DA, AC, HA, and Prog specifications, companion specifications, and vendor-specific specifications
A current initiative running under the name of umati (universal machine technology interface) aims to gather OPC UA specifications connected to the field of production under a common roof. In the umati community, information modeling for OPC UA relies on a base framework defined by the OPC UA for Machinery specification [9]. OPC UA information models for specific technologies or production equipment, such as robotics [8], machine tools [7], plastics and rubber machinery [52–55], and many more, are based on this common foundation and are in steady exchange to harmonize elements and components that are frequently used in production information models. The goals of the initiative [56] are to:

• Simplify the effort for machine connection to customer-specific IT infrastructures and ecosystems
• Simplify the effort for machine-to-machine and machine-to-device communication
• Reduce costs through faster realization of customer-specific projects

To achieve this, umati creates a community and an ecosystem consisting of:

1. An OPC UA companion specification to define globally applicable semantics for machine tools
2. Default communication requirements for the implementation of an OPC UA environment (e.g., encryption, authentication, and server settings such as ports and protocols) to allow plug-and-play connectivity between machines and software
3. Quality assurance through testing specifications and tools, certification, and serving as ombudsman for supplier-client disputes
4. Marketing and a label for visibility in the market through a global community of machine builders, component suppliers, and added-value services
TSN-Based Converged Industrial Communication

TSN, in combination with deterministic wireless technologies, is seen as the key enabler for converged real-time networks for industrial applications. The performance of TSN regarding synchronization and deterministic delivery guarantees is comparable to established fieldbuses. Following the stacked communication approach, other protocols are required on the higher layers. Besides the established IT protocols, which will continue to be used, specific industrial solutions are required. There are two current trends:

• Fieldbus over TSN: Two approaches for transmitting fieldbus communication over TSN are used. Either OSI layers 1 and 2 are fully replaced by standard TSN according to IEEE 802.1, or active network gateways are used to convert all fieldbus traffic to TSN-compatible Ethernet frames [57–59].
• Unified OPC UA-based solution: Within the OPC Foundation, the Field Level Communications (FLC) initiative is working on communication from the field to the application level based on TSN networks [60, 61]. The goal is to deliver an open approach to implementing OPC UA, including TSN and associated application profiles. Working groups within FLC align with the TSN profile for industrial automation (TSN-IA profile), which is standardized by the IEC/IEEE 60802 joint working group, so that OPC UA can share one common multivendor TSN network infrastructure together with other applications.
16.4.3 Hardware-in-the-Loop Simulation

Hardware-in-the-loop (HiL) simulation is used as a virtual environment for developing and testing the automation software of production systems. First, the motivation for a virtual test environment is given. The second part outlines the setup of HiL simulations. Next, real-time HiL simulation, as a special subset of HiL, is motivated and explained. Finally, the automation of the test procedure using HiL simulation is considered.
Motivation and Use of HiL Simulation

According to the “rule of ten,” late changes during the life cycle of a system or product result in exponentially increasing costs [62]. Early fault detection and the frontloading of changes are therefore an important cost, time, and quality factor for a system or product. The automation in production systems, running on logic, motion, robot, and numerical controllers, is realized as software components. Since the engineering process of the overall production system is historically driven by the mechanical discipline, the software engineers are more often than not the last ones to try out and test their solutions
on the assembled production system. This is an unacceptable state in the engineering process, as this sequential and first-time-right approach leads to delays and late problem-solving. To cope with the problems of a sequential engineering process of production systems, the idea is to give software engineers a virtual environment that allows frontloaded, parallelized software programming and testing. The frontloaded testing allows for earlier bug identification and fixing in the software. The use of HiL simulation enables the following applications [63]:

• A virtual environment for control hardware and software
• Tests of the automation software and the HMI
• Optimizations of the automation software
• Training of operators for the real production system
Figure 16.7 illustrates the allegory of a HiL simulation: The HiL simulation allows switching between the real, physical production system and a virtual, simulated production system. HiL simulation is applicable from single machine tools to entire production plants.
Setup of a HiL Simulation

A HiL simulation consists of three components: a real control system (any kind of automation software and hardware), a simulation, and a communication route in between [63]; the setup is a closed-loop approach. The real control system consists of a controller (e.g., PLC, NC, or RC) running the automation software and possibly an HMI with push-button panels and displays. The simulation of the remaining production system runs on a powerful PC and is coupled with the control system. For the simulation, a simulation model is created
containing the components, the processes, and the communication interface of the production system [64]. Further elements of the simulation model may include special (e.g., physics) behavior, workpieces, tools, and parts of the HMI. The communication route between the control system and the simulation is the fieldbus as used in the real production system; the fieldbus covers the forward and backward communication between control system and simulation. For simulating the production system, a powerful PC and a simulation tool are necessary. Simulation tools are available as commercial products (e.g., SIMIT, ISG-virtuos, and Simulink with MATLAB) as well as open-source applications (e.g., OpenModelica and Scilab). Within the simulation tool, the user creates the simulation models which map the production system. The simulation tools provide two main components: first, a graphical user interface for modeling the simulation model and providing a 3D view of the production system; second, a solver for the calculation of the simulation steps. After each step, the results are transferred via the fieldbus to other real and/or virtual components. Depending on the industry sector, the simulation models are created with a different model width and level of detail. For example, within the simulation model of an entire assembly line, the material flow between the assembly modules is very important, whereas a robot welding cell focuses on the kinematics of the robot and collision detection. Figure 16.8, taken from [64], shows a classification of different model types used in simulation models for HiL simulation.
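The closed-loop coupling of controller and plant model can be illustrated with a minimal Python sketch (purely illustrative; a real HiL setup exchanges these signals cyclically over the fieldbus rather than via function calls): in each cycle the simulated plant consumes the controller's outputs and returns updated sensor values.

```python
def controller(level: float) -> bool:
    """Toy two-point logic: open the valve while the tank level is below 0.8."""
    return level < 0.8

def plant(level: float, valve_open: bool, dt: float) -> float:
    """Simple tank model: inflow when the valve is open, constant outflow."""
    inflow = 0.05 if valve_open else 0.0
    outflow = 0.02
    return max(0.0, level + (inflow - outflow) * dt)

# Cyclic closed-loop exchange, one iteration per (simulated) fieldbus cycle
level, dt = 0.0, 1.0
for cycle in range(60):
    valve = controller(level)        # controller reads sensor value, writes actuator command
    level = plant(level, valve, dt)  # simulation consumes command, returns new sensor value
print(f"level after 60 cycles: {level:.2f}")
```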
Fig. 16.7 Allegory of a hardware-in-the-loop simulation: switching between the real and the virtual production system (PLC – programmable logic controller, NC – numerical control, MC – motion control, RC – robot control)
Fig. 16.8 Overview and classification of models for HiL simulation [64]: virtual commissioning models comprise I/O models, device models (e.g., cold commissioning models, network models, state machines, kinematic models with and without 3D representation, and dynamic models), and process models for continuous and discrete processes
In addition, the created simulation models can be used in model-in-the-loop (MiL) or software-in-the-loop (SiL) simulations, which have their own advantages and disadvantages compared to HiL simulations; see [64]. Of the three types MiL, SiL, and HiL, the HiL simulation provides the most realistic virtual environment for the control system. Among the HiL simulation types, a real-time hardware-in-the-loop (RT-HiL) simulation delivers the most accurate behavior for the control system.
Real-Time HiL Simulation

An RT-HiL simulation is capable of delivering simulation results within the cycle time of the fieldbus communication route. This leads to two advantages over a non-RT HiL simulation:

• Components with a high update frequency close to the cycle time of the fieldbus, like drives and material flow elements, can be simulated realistically.
• Every signal change coming from the control system is detected and processed by the simulation.

Non-RT HiL simulations need to emulate the communication with the fieldbus, which leads to missed signal changes or late response behavior of the simulation. The real-time capability of a HiL simulation is determined by the simulation tool and its environment on the PC. The solver, the algorithms, and the communication within the simulation tool, as well as the scheduling and task management of the underlying OS, have to fulfill special requirements. For the simulation tool, there are three essentials for a real-time-capable simulation:

1. Simulation and control need to be synchronized in communication.
2. A time-deterministic calculation of the simulation results is necessary.
3. Loss-free communication via the fieldbus is needed.

Early work on real-time HiL simulations was done by Pritschow and Röck [65]. According to them, the most important requirement for an RT-HiL simulation is a time-deterministic reaction to signal changes from the control system. The required time synchronicity between simulation and control can only be guaranteed if the simulator is running on a real-time operating system (RTOS). As commercial PC operating systems prioritize the interaction with the user, they are not suitable for RT-HiL applications; a dedicated RTOS is needed to prioritize the two tasks of time-deterministic calculation and loss-free communication of the simulation results. The decision to use RT-HiL simulation depends on the cycle times of the controller and those of the processes within the production system.
Usage of HiL Simulation

The most widely adopted application of HiL simulation in industry is the virtual commissioning of production systems. Virtual commissioning is intended to test-run the entire production system in a virtual environment before the system is delivered to the customer. The goal is to fix errors and to optimize the output of the machine tool or production plant before the start of production. HiL simulations serve as the virtual environment for these tests. Besides acceptance tests, integration tests and overall system tests can be carried out in the virtual environment as well [64]. To guarantee seamless porting from the HiL simulation onto the shop floor, no changes to the control system or the fieldbus configuration should be applied in the virtual environment. The simulation model, on the other hand, offers a new range of testing possibilities through the following modifications [64]:

• Virtual push-button panels and indicators can be integrated into the simulation model and used to support the test run carried out by the commissioning specialist.
• By parameterizing the simulation model before starting the simulation, the virtual production system can be started in any desired state.
• Additionally, signals can be traced in parallel to the test run within the simulation tool.
• Any state of the production system can be reached by overwriting signals in the running simulation.

Furthermore, the simulation model offers the possibility to test failure and error situations realistically. Failure and error situations can hardly be tested on the physical production system, as doing so leads to damage to the production system and injuries to the personnel involved. Erroneous behavior can instead be modeled in the simulation and triggered within a test run. The use of HiL simulation does not end with the start of production. In today's production, changes arising from evolving product requirements, legal changes, and customer needs demand ever higher flexibility. These changes in requirements lead to new engineering, optimizations, and expansions of the production system. To try out, test, and commission the new version of the production system, the HiL simulations are reused.
Automating the Test Procedure for HiL

Testing automation software using a HiL simulation is a so-called dynamic test process, as the automation software is executed for the test. The dynamic test process is divided into four phases [66]:
1. Design: The testing procedure is described explicitly in so-called test cases, which direct the tester to test specific functions, error cases, or requirements.
2. Setup: Activities to provide and maintain the test environment, e.g., the HiL simulation.
3. Execution: The procedure described in the test cases is executed by operating and observing the HiL simulation. Results and checks during the test execution are noted down in a log.
4. Evaluation: Results and checks from the log are examined for further actions. A test report might be created to share with third parties.

Automation of one or several of these steps is referred to as test automation. A need for more awareness, and thus for structured and automated testing in the engineering process of production systems, is outlined in [67, 68]. Once test cases are specified, their execution and evaluation are rather straightforward activities. This becomes especially true when minor changes in the automation software during engineering lead to a retest of the entire automation to cross-check for errors, so-called regression testing. These regression tests require little intellectual effort but are very time consuming. To free capacities of test personnel and to limit errors during manual test execution, the automation of the execution and evaluation phases should be tackled first; it can be realized by introducing a computer-aided software testing tool [67], see [63]. The automated generation of test cases is a more difficult step, as the sources for test case design (requirements, specifications, and other descriptions of functions and behavior) would need to be formal, complete, and consistent. Thus, such approaches can, as of today, mostly be found in academia only [69–72].
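A minimal sketch of automating the execution and evaluation phases could look as follows (illustrative only; the `hil` object with `write_signal`/`read_signal` is a hypothetical stand-in for a simulation tool's API, not a real product interface): test cases are described as data, executed against the HiL simulation, and evaluated into a log.

```python
import time

# A test case as data: stimulate an input, wait, then check an expected output
test_cases = [
    {"name": "door_interlock", "set": ("door_closed", False),
     "wait_s": 0.1, "expect": ("spindle_enable", False)},
    {"name": "start_button", "set": ("start", True),
     "wait_s": 0.1, "expect": ("conveyor_running", True)},
]

def run_tests(hil, cases):
    """Execute all test cases against a HiL interface and return a result log."""
    log = []
    for case in cases:
        signal, value = case["set"]
        hil.write_signal(signal, value)       # execution: stimulate the simulation
        time.sleep(case["wait_s"])            # let the control system react
        out_signal, expected = case["expect"]
        actual = hil.read_signal(out_signal)  # evaluation: compare observed vs. expected
        log.append({"test": case["name"], "passed": actual == expected,
                    "expected": expected, "actual": actual})
    return log

# log = run_tests(simulation, test_cases)  # 'simulation' must provide write_signal/read_signal
```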
16.4.4 Data Analytics and AI

It is interesting to note that artificial intelligence (AI) and its subfield machine learning have, from a historical perspective, always had a tight relationship with control technology. Early on, the cybernetics concept of Norbert Wiener, one of the roots of AI as a field, addressed, among others, the issue of a closed signal loop consisting of a sensor, a controller, and an actuator [73]. One of the early successes in the field of AI in the 1960s was the Stanford Research Institute Problem Solver (STRIPS) planning system [74], used to control the behavior of Shakey the Robot. More recently, the success of reinforcement learning methods, among others, has been fueling AI research. Reinforcement learning has been established as a common thread of three different research directions, one of which was optimal control [75]. This historical relationship is visible if we look at the academic examples and benchmark problems for search and planning, which often use robot-related
tasks as examples, and at reinforcement learning, which is often benchmarked using well-known problems of control technology [76]. In its short history, AI saw periods of rapid growth and periods of near stagnation, and it has recently become a very powerful tool for solving a broad range of problems at different hierarchical levels of the automation pyramid.
Closed-Loop Control Problems

Many examples of solving classical control problems using learning-based methods can be found in the scientific literature. Learning control systems are capable of learning from their environment, and their performance does not directly depend on imprecise analytic models. Besides solving problems that other methods can also solve, reinforcement learning methods, presented in Fig. 16.9, can offer the added benefit of directly linking image processing tasks to control tasks by using convolutional neural networks (CNNs) as a policy [77]. In many cases, these approaches suffer from sample inefficiency. Improving this sample inefficiency is one of the main concerns of the reinforcement learning community; to address it, techniques like hindsight experience replay [78] and the use of probabilistic models (like Gaussian processes) as policies have emerged [79]. Lately, a new approach called differentiable programming has attracted attention [78]. Differentiable programming requires algorithmic models (or programs) that can be differentiated (using automatic differentiation) and allows the direct chaining of the differentiable algorithmic model with a deep neural network.

Machine Vision

Image processing can be helpful not only as part of a closed-loop control system but also independently. The detection of obstacles for robot navigation and the identification of grasping points for manipulation [80] or bin picking [81] based on camera information are valuable tasks that can be performed using machine learning methods. Transfer learning [82], the technique of partially reusing already trained networks, has improved the training efficiency of such models and made their training suitable for industrial use cases as well.

Data-Driven Models

In many cases, control architectures include analytical or numerical models, for example the inverse kinematics model of a robot controller or a friction model for a positioning stage. These analytical models are often only approximations and do not precisely reflect the real situation; data-driven models have therefore been developed to recreate them [83, 84]. Data-driven models are created from data originating from the phenomenon being modeled and can therefore better reflect reality. Furthermore, data-driven models are also used in analytics applications, as described below.
Fig. 16.9 General structure of a reinforcement learning algorithm applied to the well-known inverted pendulum problem: the agent's policy selects actions (e.g., the force to move the cart), while the environment returns states or observations (e.g., cart position and velocity, pendulum angle and angular velocity) and a reward (e.g., the deviation of the inverted pendulum)
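The agent-environment loop of Fig. 16.9 can be reproduced in a few lines with the open-source gymnasium package and its CartPole environment (a sketch with a trivial hand-coded policy rather than a trained one; availability of the package is an assumption):

```python
import gymnasium as gym  # pip install gymnasium

env = gym.make("CartPole-v1")
obs, info = env.reset(seed=0)

total_reward = 0.0
for step in range(200):
    # Trivial policy: push the cart in the direction the pole is falling
    pole_angle = obs[2]
    action = 1 if pole_angle > 0 else 0
    obs, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    if terminated or truncated:
        break

print("episode return:", total_reward)
env.close()
```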
Planning Problems

Planning problems represent a deliberation process in which a sequence of actions is defined to reach an a priori known goal [85]. One of the early successes of AI as a field was solving planning problems, and methods have been continuously improved and refined ever since. Planning problems in the classical sense of the word refer to the creation of a sequence of actions in a discrete state and action space in which states and actions are described symbolically. Planning problems are highly relevant for automation in general, as they provide the automated job dispatching ability of MES and/or ERP systems [86]. Adding semantic states and actions and continuous state and/or action spaces to the problem significantly increases its complexity [87]. A further increase in complexity, by embedding the planning problem in a physical simulation, reaches the limits of current solutions [88]. As the world watched Lee Sedol being defeated by DeepMind's AlphaGo, many wondered why a robot did not move the stones on the Go board when AlphaGo made a move. This question was answered recently in a publication from DeepMind [88], in which physically embedded planning problems (e.g., a robot playing a board game by physically moving the pieces and reasoning about the game) were identified as a limit of current AI methods. As modern control architectures become less closed and more capable of facilitating data exchange with other systems, the use of data analysis and AI methods will become more common. Data analytics has a close relationship to AI and machine learning, but also to statistics.
Data Analytics

Analytics has recently gained attention through the emergence of the industrial Internet of Things, which allows the aggregation of data from multiple sources to an unprecedented extent. As a result, analytics initially had to cope with large amounts of data, but later had to use this data more intelligently. As shown in [63], three forms of analytics have emerged [89]:

• Descriptive analytics is the simplest form of analytics, offering a correlation between past behavior and past outcomes (past successes and failures). It is a “post-mortem” analytic.
• Predictive analytics, building on descriptive analytics, uses data from the past to predict future events. In automation, a well-known example is predictive maintenance: based on historical data that describes the past behavior of a certain component or piece of equipment, a prediction of future behavior or remaining useful life is made using statistical and/or machine learning models.
• Prescriptive analytics, building on both descriptive and predictive analytics, not only predicts future behavior but also provides the reason for this prediction and offers decision options on how to exploit or mitigate the predicted situation. Applied to maintenance, prescriptive maintenance would not only report the predicted remaining useful life, but also the reason for the predicted failure and a decision to replace a soon-to-be-defective component to avoid the failure.
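As a toy illustration of the predictive step (synthetic data and a simple linear degradation model are assumed; real applications use far richer features and models), the following sketch fits a degradation trend to historical vibration readings and extrapolates the remaining useful life until a failure threshold is reached:

```python
import numpy as np

# Synthetic history: vibration level drifting upward over operating hours
hours = np.arange(0, 500, 10, dtype=float)
vibration = 1.0 + 0.004 * hours + np.random.default_rng(0).normal(0, 0.05, hours.size)

# Descriptive: summarize past behavior
print("mean vibration so far:", vibration.mean().round(3))

# Predictive: fit a linear degradation trend and extrapolate to the failure threshold
slope, intercept = np.polyfit(hours, vibration, deg=1)
failure_threshold = 4.0
hours_at_failure = (failure_threshold - intercept) / slope
remaining_useful_life = hours_at_failure - hours[-1]
print(f"estimated remaining useful life: {remaining_useful_life:.0f} h")
```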
16.5 Conclusion and Trends
In the future, it will be increasingly difficult to speak of “typical” architectures in automation technology. The convergence of operational technology (OT) and information technology (IT) is in full swing and cannot be reversed. The traditional pyramid structure of automation technology and its networking with “higher” levels of corporate functions is dissolving, in favor of any-to-any networking instead of purely horizontal or vertical communication. Along with this goes the possibility of an almost arbitrary distribution of functionalities in this flexible network. The planners and operators of automation technology have already learned that IT innovation and investment cycles no longer follow the laws of production. The challenge now is to develop the two technologies asynchronously and yet in step, and to link them with each other. This chapter has shown which basic architectural building blocks help us today to lead automation technology into the future with reliable investments. As explained in this chapter, cloud computing, virtualization techniques, converged networks, and the first solutions using AI algorithms for processing large amounts of data are already established on the market. Many of these solutions come from the IT sector, consumer electronics, or other application domains and are now being adapted for use in production. The best example is TSN, which originated in the distribution of audio and video streams and is now well on the way to replacing the classic fieldbuses. However, without a combination with higher-level communication protocols such as OPC UA, its companion specifications, and umati, this will neither establish itself nor happen quickly. When we talk about the disappearance of traditional architectures and the pace of innovation in IT, it is worth taking a look at upcoming technologies. Three topics stand out from these trends and will be briefly examined.

Advanced systems engineering (ASE): Today's advanced machine tool and production systems engineering uses mechatronic, system-based modular kits to offer machines, automation technology, and production systems in an economical and customer-specific manner. The tools developed in recent years for “virtual validation” and “real-time-capable virtual commissioning” have not been part of this engineering process yet; they are used in special cases between the engineering and the ramp-up of the machine tool or production system. A further development as well as a continuous virtual validation is currently not taking place. The approach of model-based systems engineering (MBSE) and advanced systems engineering (ASE) has been known in the automotive and aerospace industry for product development for some time. It promises digitization and model-based development across all engineering disciplines for product development and
will be expanded to the field of production. In order to meet the challenges of customer-specific production in a dynamic environment, the established engineering process has to be redesigned using MBSE concepts. The new engineering methods must meet the requirements of adaptability, communication, consistency, and data integration in order to demonstrate efficiency.

Software-defined manufacturing (SDM): The share of software R&D cost in production machines will continue to rise to over 50%. A key aspect of Industry 4.0 is adaptability and reconfigurability to cope with new tasks. Whereas the processing of information per se is flexible and not limited to a specific hardware, factory objects of today are not flexible and not automatically adaptable. The modularization and reconfiguration of production systems and their automation components are limited today because there is no proper separation between the executing physical part of the machine and the automation software. This software is firmly connected to the machine, and a subsequent upgrade of the functionality is almost impossible unless the upgrade has been planned in advance and is already implemented in the system. The focus of software-defined manufacturing (SDM) is therefore derived from IT and communication technology: the so-called software-defined anything (SDx). SDx represents concepts which permit the (re-)configuration of system functionality via software only. SDM-based production would mean that the entire manufacturing process is automatically configured by software based on the desired properties of the final product. In addition, the data generated during product development and during the manufacturing process is recorded and linked together so that optimizations can be made.

Artificial intelligence: ASE, MBSE, and SDM generate a lot of data in operative production and its engineering process. The amount of this data and its classification – the so-called labeling – enables the application of AI technology in the production systems of the future. On the one hand, algorithms, especially those for machine learning, are constantly evolving. On the other hand, the necessary computing and storage capacity is becoming available in ever smaller and thus production-ready embedded devices. The repeatedly quoted credo “big data is the new oil of the industry” is correct – it is only necessary to ensure that the large amounts of data generated in production are linked to correct and consistent data models. Once this is achieved – which corresponds to a continuous process – there is little to prevent the successful use of neural networks and so-called deep learning.

The authors of this chapter are certain that the speed of innovation in automation technology will increase even further through the use of IT. It will be important to monitor this trend carefully. Necessary and useful components of these trends should be subjected to a critical analysis regarding their applicability in automation technology. The most
challenging task at the moment is certainly the continuous integration/deployment of these new technologies into greenfield and brownfield production plants.

Acknowledgments The authors would like to acknowledge the help of all colleagues involved in this chapter and especially the contributors who could not be listed as authors: Dr. Akos Csiszar, Caren Dripke, Florian Frick, Karl Kübler, and Timur Tasci. Without their support and their specific content, this chapter would not have become a reality. Also, many thanks to the sponsors (German Research Foundation, Federal Ministry of Education and Research, Federal Ministry for Economic Affairs and Energy) of various projects through which the results could be generated.
References 1. Arnold, H.: The recent history of the machine tool industry and the effects of technological change. Münchner Betriebswirtschaftliche Beiträge, vol. 14 (2001) 2. Brecher, C., Verl, A., Lechler, A., Servos, M.: Open control systems: state of the art. Prod. Eng. Res. Dev. 4(2-3), 247–254 (2010). https://doi.org/10.1007/s11740-010-0218-5 3. PLC Open: Function Blocks for Motion Control: Part 1. [Online]. Available: https://plcopen.org/system/files/downloads/ plcopen_motion_control_part_1_version_2.0.pdf. Accessed 17 May 2020 4. Tan, K.K., Lee, T.H., Huang, S.: Precision Motion Control: Design and Implementation. Springer, London (2007) 5. Suh, S.-H., Kang, S.-K., Chung, D.-H., Stroud, I.: Theory and design of CNC, Springer Science and Business Media (2008) 6. Keinert, M., Kaiser, B., Lechler, A., Verl, A.: Analysis of CNC software modules regarding parallelization capability. In: Proceedings of the 24th International Conference on Flexible Automation and Intelligent Manufacturing (FAIM), San Antonio (2014) 7. OPC UA for Machine Tools: Part 1 – Monitoring and Job, 40501-1, OPC Foundation, Sept (2020) 8. OPC Unified Architecture for Robotics: Part 1: Vertical Integration, OPC 40010-1, OPC Foundation, May (2019) 9. OPC UA for Machinery: Part 1 – Basic Building Blocks, 40001-1, OPC Foundation, Sept (2020) 10. Cordis: European Commission:CORDIS:Projects and Results: Open System Architecture for Controls within Automation Systems. [Online]. Available: https://cordis.europa.eu/project/id/1783. Accessed 25 Sept 2020 11. Brecher, C., Verl, A., Lechler, A., Servos, M.: Open control systems: state of the art. Prod. Eng. Res. Dev. 4(2–3), 247–254. https:/ /doi.org/10.1007/s11740-010-0218-5 12. Plattform Industrie 4.0: Details of the Administration Shell – Part 1. [Online]. Available: https://www.dex.siemens.com/mindsphere/ mindsphere-packages. Accessed 26 Sept 2020 13. AutomationML e.V.: AutomationML. [Online]. Available: https://www.automationml.org/o.red.c/home.html. Accessed 26 Sept 2020 14. eCl@ss e.V.: eCl@ss Content. [Online]. Available: https:// www.eclasscontent.com/index.php. Accessed 26 Sept 2020 15. Siemens Industry Software Inc.: MindSphere Packages. [Online]. Available: https://www.dex.siemens.com/mindsphere/mindspherepackages. Accessed 26 Sept 2020 16. Schlechtendahl, J., Keinert, M., Kretschmer, F., Lechler, A., Verl, A.: Making existing production systems industry 4.0-ready. Prod. Eng. Res. Dev. 9(1), 143–148 (2015)
375 17. Industrial automation systems and integration. Physical device control. Data model for computerized numerical controllers, 14649, ISO, Apr 2004 18. 3S-Smart Software Solutions GmbH, CODESYS® Runtime: Converting any intelligent device into an IEC 61131-3 controller using the CODESYS Control Runtime System 19. Levis, A.H.: Systems architecture. In: Handbook of Systems Engineering, 2nd edn, pp. 479–506. Wiley (2009) 20. Buede, D.M., Miller, W.D.: The Engineering Design of Systems: Models and Methods. Wiley, Hoboken, (2016) 21. Moghaddam, M., Cadavid, M.N., Kenley, C.R., Deshmukh, A.V.: Reference architectures for smart manufacturing: a critical review. J. Manuf. Syst. 49, 215–225 (2018). https://doi.org/10.1016/ j.jmsy.2018.10.006 22. Morel, G., Pereira, C.E., Nof, S.Y.: Historical survey and emerging challenges of manufacturing automation modeling and control: a systems architecting perspective. Annu. Rev. Control. 47, 21–34 (2019). https://doi.org/10.1016/j.arcontrol.2019.01.002 23. katacontainers: Kata Containers Docs: Understand the basics, contribute to and try using Kata Containers. [Online]. Available: https:/ /katacontainers.io/docs/. Accessed 21 Apr 2020 24. Google LLC: gVisor Documentation. [Online]. Available: https:// gvisor.dev/docs/. Accessed 22 Apr 2020 25. Abeni, L., Biondi, A., Bini, E.: Hierarchical scheduling of real-time tasks over Linux-based virtual machines. J. Syst. Softw. 149, 234– 249 (2019). https://doi.org/10.1016/j.jss.2018.12.008 26. Bini, E., Bertogna, M., Baruah, S.: Virtual multiprocessor platforms: specification and use. In: 30th IEEE Real-Time Systems Symposium, 2009: RTSS 2009; 1–4 Dec. 2009, Washington, DC, USA; proceedings, Washington, DC, pp. 437–446 (2009) 27. Shin, I., Easwaran, A., Lee, I.: Hierarchical scheduling framework for virtual clustering of multiprocessors. In: Euromicro Conference on Real-Time Systems, 2008: ECRTS ‘08; 2–4 July 2008, Prague, Czech Republic, Prague, pp. 181–190 (2008) 28. Friebel, T., Biemueller, S.: How to Deal with Lock Holder Preemption, Xen Summit North America 164 (2008) 29. Ramsauer, R., Kiszka, J., Lohmann, D., Mauerer, W.: Look mum, no VM exits! (almost). In: Proceedings of the 13th Workshop on Operating Systems Platforms for Embedded Real-Time Applications (OSPERT). [Online]. Available: https://arxiv.org/pdf/ 1705.06932 30. Schlechtendahl, J., Kretschmer, F., Lechler, A., Verl, A.: Communication Mechanisms for Cloud Based Machine Controls, Procedia CIRP 17, pp. 830–834 (2014). https://doi.org/10.1016/ J.PROCIR.2014.01.074 31. Schlechtendahl, J., Kretschmer, F., Sang, Z., Lechler, A., Xu, X.: Extended study of network capability for cloud based control systems. Robot. Comput. Integr. Manuf. 43, 89–95 (2017). https:// doi.org/10.1016/j.rcim.2015.10.012 32. Vonasek, V., Pˇeniˇcka, R., Vick, A., Krüger, J.: Robot control as a service – towards cloud-based motion planning and control for industrial robots, In 10th International Workshop on Robot Motion and Control (RoMoCo), pp. 33–39 (2015) 33. Vick, A., Horn, C., Rudorfer, M., Krüger, J.: Control of robots and machine tools with an extended factory cloud. In: IEEE International Workshop on Factory Communication Systems – Proceedings, WFCS, vol. 2015 (2015). https://doi.org/10.1109/ WFCS.2015.7160575 34. Kretschmer, F., Friedl, S., Lechler, A., Verl, A.: Communication extension for cloud-based machine control of simulated robot processes. In 2016 IEEE International Conference on Industrial Technology (ICIT), pp. 54–58 (2016) 35. 
Masmano, M., Ripoll, I., Crespo, A., Jean-Jacques, M.: XtratuM: A Hypervisor for Safety Critical Embedded Systems, In 11th RealTime Linux Workshop, pp. 263–272 (2009)
376 36. Xi, S., Wilson, J., Lu, C., Gill, C.: RT-Xen: towards real-time hypervisor scheduling in Xen. In: 2011 Proceedings of the Ninth ACM International Conference on Embedded Software (EMSOFT), pp. 39–48 (2011) 37. Sandstrom, K., Vulgarakis, A., Lindgren, M., Nolte, T.: Virtualization technologies in embedded real-time systems, In IEEE 18th Conference on Emerging Technologies & Factory Automation (ETFA), pp. 1–8 (2013) 38. Auer, M.E., Zutin, D.G. (eds.): Online engineering & Internet of Things: Proceedings of the 14th International Conference on Remote Engineering and Virtual Instrumentation REV 2017, held 1517 March 2017, Columbia University, New York, USA. Springer International Publishing, Cham (2018) 39. Givehchi, O., Imtiaz, J., Trsek, H., Jasperneite, J.: Control-as-aservice from the cloud: a case study for using virtualized PLCs. In: 2014 10th IEEE Workshop on Factory Communication Systems (WFCS 2014), pp. 1–4 (2014) 40. Goldschmidt, T., Murugaiah, M.K., Sonntag, C., Schlich, B., Biallas, S., Weber, P.: Cloud-based control: a multi-tenant, horizontally scalable soft-PLC. In: 2015 IEEE 8th International Conference on Cloud Computing, pp. 909–916 (2015) 41. Garcia, C.A., Garcia, M.V., Irisarri, E., Pérez, F., Marcos, M., Estevez, E.: Flexible container platform architecture for industrial robot control. In: 2018 IEEE 23rd International Conference on Emerging Technologies and Factory Automation (ETFA), pp. 1056–1059 (2018) 42. Nikolakis, N., Senington, R., Sipsas, K., Syberfeldt, A., Makris, S.: On a containerized approach for the dynamic planning and control of a cyber – physical production system. Robot. Comput. Integr. Manuf. 64, 101919 (2020). https://doi.org/10.1016/ j.rcim.2019.101919 43. Telschig, K., Schönberger, A., Knapp, A.: A real-time container architecture for dependable distributed embedded applications. In: 2018 IEEE 14th International Conference on Automation Science and Engineering (CASE), pp. 1367–1374 (2018) 44. Morabito, R.: Virtualization on internet of things edge devices with container technologies: a performance evaluation. IEEE Access. 5, 8835–8850 (2017). https://doi.org/10.1109/ ACCESS.2017.2704444 45. Goldschmidt, T., Hauck-Stattelmann, S.: Software containers for industrial control. In: 2016 42th Euromicro Conference on Software Engineering and Advanced Applications (SEAA), pp. 258– 265 (2016) 46. Tasci, T., Walker, M., Lechler, A., Verl, A.: Towards modular and distributed automation systems: towards modular and distributed automation systems: leveraging software containers for industrial control. In: Automation2020 (2020) 47. Tasci, T., Melcher, J., Verl, A.: A container-based architecture for real-time control applications. In: Conference proceedings, ICE/IEEE ITMC 2018: 2018 IEEE International Conference on Engineering, Technology and Innovation (ICE/ITMC): Stuttgart 17.06.–20.06.18, Stuttgart, pp. 1–9 (2018) 48. Faller, C., Höftmann, M.: Service-oriented communication model for cyber-physical-production-systems. Procedia CIRP. 67, 156– 161 (2018). https://doi.org/10.1016/j.procir.2017.12.192 49. Lo Bello, L., Steiner, W.: A perspective on IEEE time-sensitive networking for industrial communication and automation systems. Proc. IEEE. 107(6), 1094–1120 (2019). https://doi.org/10.1109/ JPROC.2019.2905334 50. Cisco Systems: IEEE 802.11ax: The Sixth Generation of Wi-Fi White Paper. [Online]. Available: https://www.cisco.com/c/en/us/ products/collateral/wireless/white-paper-c11-740788.html 51. 
Unified Architecture Part 14: PubSub, OPC Foundation, Feb 2018. [Online]. Available: https://opcfoundation.org/developertools/specifications-unified-architecture/part-14-pubsub/
O. Riedel et al. 52. OPC UA for Plastics and Rubber Machinery: Extrusion Lines and Their Components, 40084, OPC Foundation, Apr 2020 53. OPC UA for Plastics and Rubber Machinery: Data exchange Between Injection Moulding Machines and MES, 40077, OPC Foundation, Apr 2020 54. OPC UA for Plastics and Rubber Machinery: Peripheral devices – Part 1: Temperature Control Devices, 40082-1, OPC Foundation, Apr 2020 55. OPC UA for Plastics and Rubber Machinery: General Type Definitions, 40083, OPC Foundation, Apr 2020 56. umati Initiative: umati – universal machine technology interface. [Online]. Available: https://www.umati.org/about/. Accessed 26 Sept 2020 57. PROFIBUS Nutzerorganisation e.V.: PROFINET over TSN: Guideline, 1st ed. [Online]. Available: https://www.profibus.com/ download/profinet-over-tsn/ 58. Weber, K.: EtherCAT and TSN – Best Practices for Industrial Ethernet System Architectures. Available: https://www.ethercat.org/ download/documents/Whitepaper_EtherCAT_and_TSN.pdf. Accessed 1 Apr 2020 59. Szancer, S., Meyer, P., Korf, F.: Migration from SERCOS III to TSN-simulation based comparison of TDMA and CBS transportation. In: OMNeT++, pp. 52–62 (2018) 60. Bruckner, D., et al.: OPC UA TSN: A New Solution for Industrial Communication. Available: https://www.intel.com/content/dam/ www/programmable/us/en/pdfs/literature/wp/opc-ua-tsn-solutionindustrial-comm-wp.pdf. Accessed 9 Apr 2020 61. OPC Foundation: Initiative: Field Level Communications (FLC): OPC Foundation extends OPC UA including TSN down to field level. Available: https://opcfoundation.org/flc-pdf. Accessed 24 Apr 2020 62. Fricke, E., Gebhard, B., Negele, H., Igenbergs, E.: Coping with changes: causes, findings, and strategies. Syst. Eng. 3(4), 169–179 (2000). https://doi.org/10.1002/1520-6858(2000)3:43.0.CO;2-W 63. Röck, S.: Hardware in the loop simulation of production systems dynamics. Prod. Eng. Res. Dev. 5(3), 329–337 (2011). https:// doi.org/10.1007/s11740-011-0302-5 64. Virtual commissioning: model types and glossary, 3693 Part 1, Verein Deutscher Ingenieure e.V. (VDI); Verband der Elektrotechnik Elektronik Informationstechnik e.V. (VDE), Berlin, Aug 2016 65. Pritschow, G., Röck, S.: “Hardware in the loop” simulation of machine tools. CIRP Ann. Manuf. Technol. 53(1), 295–298 (2004). https://doi.org/10.1016/S0007-8506(07)60701-X 66. Software and systems engineering part 2: software testing: test processes, 29119-2, International Organization for Standardization (ISO); International Electrotechnical Commission (IEC); Institute of Electrical and Electronics Engineers (IEEE), Geneva, s.l., New York, Sept 2013 67. Kübler, K., Neyrinck, A., Schlechtendahl, J., Lechler, A., Verl, A.: Approach for manufacturer independent automated machine tool control software test. In: Wulfsberg, J.P. (ed.) Applied Mechanics and Materials, v.794, WGP Congress 2015: Progress in production engineering; selected, peer reviewed papers from the 2015 WGP Congress, September 7–8, 2015, Hamburg, Germany, Pfaffikon: Trans Tech Publications Inc, pp. 347–354 (2015) 68. Kormann, B., Tikhonov, D., Vogel-Heuser, B.: Automated PLC software testing using adapted UML sequence diagrams. IFAC Proc. Vol. 45(6), 1615–1621 (2012). https://doi.org/10.3182/ 20120523-3-RO-2023.00148 69. Kormann, B., Vogel-Heuser, B.: Automated test case generation approach for PLC control software exception handling using fault injection. In: Proceedings of 2011 IECON, Melbourne, pp. 365– 372 (2011)
16
Control Architecture for Automation
70. Kübler, K., Schwarz, E., Verl, A.: Test case generation for production systems with model-implemented fault injection consideration. Procedia CIRP. 79, 268–273 (2019). https://doi.org/10.1016/ j.procir.2019.02.065 ˇ 71. Enoiu, E.P., Cauševi´ c, A., Ostrand, T.J., Weyuker, E.J., Sundmark, D., Pettersson, P.: Automated test generation using model checking: an industrial evaluation. Int. J. Softw. Tools Technol. Transfer. 18(3), 335–353 (2016). https://doi.org/10.1007/s10009014-0355-9 72. Rösch, S., Tikhonov, D., Schütz, D., Vogel-Heuser, B.: Modelbased testing of PLC software: test of plants’ reliability by using fault injection on component level. IFAC Proc. Vol. 47(3), 3509– 3515 (2014). https://doi.org/10.3182/20140824-6-ZA-1003.01238 73. Wiener, N., Mitter, S., Hill, D.: Cybernetics or Control and Communication in the Animal and the Machine, Reissue of the 1961, 2nd edn. The MIT Press, S.l., Cambridge, Massachusetts (2019) 74. Kuipers, B., Feigenbaum, E.A., Hart, P.E., Nilsson, N.J.: Shakey: from conception to history. AIMag. 38(1), 88 (2017). https:// doi.org/10.1609/aimag.v38i1.2716 75. Sutton, R.S., Barto, A.G.: Reinforcement Learning: An Introduction. The MIT Press, Cambridge, MA (2018) 76. Tunyasuvunakool, S., et al.: dm_control: software and tasks for continuous control. Softw. Impacts. 6, 100022 (2020). https:// doi.org/10.1016/j.simpa.2020.100022 77. Ross, S., et al.: Learning monocular reactive UAV control in cluttered natural environments. In: 2013 IEEE International Conference on Robotics and Automation, Karlsruhe, May 2013, pp. 1765–1772 (2013) 78. Andrychowicz, M., et al.: Hindsight Experience Replay, July 2017. [Online]. Available: http://arxiv.org/pdf/1707.01495v3 79. Deisenroth, M.P., Rasmussen, C.E.: PILCO: a model-based and data-efficient approach to policy search. In: 28th International Conference on Machine Learningg, ICML, pp. 465–473 (2011) 80. Mahler, J., et al.: Dex-Net 2.0: Deep Learning to Plan Robust Grasps with Synthetic Point Clouds and Analytic Grasp Metrics, Mar 2017. Available: http://arxiv.org/pdf/1703.09312v3 81. Mahler, J., Goldberg, K.: Learning deep policies for robot bin picking by simulating robust grasping sequences. In: Conference on Robot Learning, pp. 515–524 (2017) 82. Shin, H.-C., et al.: Deep convolutional neural networks for computer-aided detection: CNN architectures, dataset characteristics and transfer learning. IEEE Trans. Med. Imaging. 35(5), 1285– 1298 (2016). https://doi.org/10.1109/TMI.2016.2528162 83. Csiszar, A., Eilers, J., Verl, A.: On solving the inverse kinematics problem using neural networks. In: 2017 24th International Conference on Mechatronics and Machine Vision in Practice (M2VIP), Auckland, Nov 2017, pp. 1–6 (2017) 84. Huang, S.N., Tan, K.K., Lee, T.H.: Adaptive friction compensation using neural network approximations. IEEE Trans. Syst. Man Cybern. C. 30(4), 551–557 (2000). https://doi.org/10.1109/ 5326.897081 85. Ghallab, M., Nau, D.S., Traverso, P.: Automated Planning and Acting. Cambridge University Press, New York (2016). [Online]. Available: https://www.loc.gov/catdir/enhancements/fy1618/ 2016017697-b.html 86. Helo, P., Suorsa, M., Hao, Y., Anussornnitisarn, P.: Toward a cloudbased manufacturing execution system for distributed manufacturing. Comput. Ind. 65(4), 646–656 (2014). https://doi.org/10.1016/ j.compind.2014.01.015 87. Dornhege, C., Eyerich, P., Keller, T., Trueg, S., Brenner, M., Nebel, B.: Semantic attachments for domain-independent planning systems. 
In: Towards Service Robots for Everyday Environments
377 88. Mirza, M., et al.: Physically Embedded Planning Problems: New Challenges for Reinforcement Learning, Sept 2020. Available: http://arxiv.org/pdf/2009.05524v1 89. Lepenioti, K., Bousdekis, A., Apostolou, D., Mentzas, G.: Prescriptive analytics: literature review and research challenges. Int. J. Inf. Manag. 50, 57–70 (2020). https://doi.org/10.1016/ j.ijinfomgt.2019.04.003
16
Professor Riedel studied technical cybernetics at the University of Stuttgart and earned his doctorate at the Faculty of Engineering Design and Production Engineering. For over 25 years, he has been involved with the theory and practical application of virtual validation methods in product development and production. After 18 years in the international IT and automotive industry, he was appointed professor at the University of Stuttgart, where he is managing director of the Institute for Control Engineering of Machine Tools and Manufacturing Units (ISW) and holds the chair for Information Technology in Production; he is also managing director of the Fraunhofer Institute for Industrial Engineering (IAO). Oliver Riedel specializes in information technology for production and product development, IT engineering methods, and data analytics.
Dr.-Ing. Armin Lechler studied technical cybernetics with an emphasis on production engineering at the University of Stuttgart. In 2006 he became a research assistant at the Institute for Control Engineering of Machine Tools and Manufacturing Units (ISW) at the University of Stuttgart in the research area "Information and Communication Technology." He served as head of the Control Engineering Department, was appointed managing chief engineer at ISW, and became its deputy director in 2015. Armin Lechler specializes in industrial communication and control architectures.
Alexander W. Verl studied electrical engineering at Friedrich-Alexander-University Erlangen-Nuernberg and earned his Dr.-Ing. degree in control engineering at the Institute of Robotics and Mechatronics (DLR). Since 2005 he has been full professor and managing director of the Institute for Control Technology of Machine Tools and Manufacturing Units (ISW) at the University of Stuttgart. From 1992 to 1994 he worked as a development engineer at Siemens AG. From 1997 to 2005 he was founder and managing director of AMATEC Robotics GmbH (which became part of KUKA Roboter GmbH in 2005), and from 2006 to 2014 he was additionally head of institute at Fraunhofer IPA. From 2014 to 2016 he was on leave from the university, serving as executive vice president of Technology Marketing and Business Models at the Fraunhofer-Gesellschaft e.V. Alexander Verl specializes in industrial control technology, machine tool control, industrial robotics, and software-defined manufacturing.
Cyber-Physical Automation
17
Cesar Martinez Spessot
Contents
17.1 Status of the Cyber-Physical Automation . . . . 380
17.1.1 Vertical Approach on Cyber-Physical Automation . . . . 380
17.1.2 Horizontal Approach on Cyber-Physical Automation . . . . 380
17.2 Challenges in Implementing Cyber-Physical Automation . . . . 381
17.2.1 The Problems of the Vertical Cyber-Physical Approach . . . . 381
17.2.2 The Need for Edge Computing and Workload Consolidation . . . . 381
17.2.3 The Need for Services Unification . . . . 382
17.3 Cyber-Physical Unified Services Framework . . . . 382
17.3.1 Cyber-Physical Architecture . . . . 383
17.3.2 Cyber-Physical Components . . . . 383
17.3.3 Communications . . . . 384
17.3.4 Cyber-Physical Infrastructure . . . . 385
17.3.5 Automation Software . . . . 389
17.3.6 Cyber-Physical Manageability . . . . 390
17.3.7 Security . . . . 393
17.3.8 Data Management and Analytics . . . . 397
17.3.9 Reliability and Safety . . . . 399
17.3.10 IT/OT Integration . . . . 400
17.4 Conclusion, Emerging Trends, and Challenges . . . . 401
References . . . . 403
Abstract
The digital transformation is already changing companies and even entire industries. Over the past decade, a new breed of digital companies has arisen, quickening the tempo of business. These companies built their core business on cyber-physical automation, delivering more convenient, cost-efficient, and engaging experiences for their customers while disrupting entire industry business models.
Today the potential impacts of cyber-physical automation extend beyond these disruptors and are a pressing issue for organizations spanning almost every industry. As more companies, people, systems, and things become interconnected, the scope to create new and novel digital value chains rapidly increases. At the same time, this rapid change is creating new and complex challenges for many industries. Fragmented solutions, limited standards, immature security models, and inadequate approaches to the maintenance of digital assets are among the barriers that may prevent organizations from scaling valuable cyber-physical solutions. C-level executives cite several roadblocks behind these barriers, such as difficulties integrating with existing infrastructure and processes and the lack of an overall security posture. Considering the growing number of cyber-physical components, the rapid increase in data volumes, and existing infrastructure constraints, it is especially critical to reduce complexity and simplify scaling requirements. Two key concepts for achieving those goals are the unification of services and workload consolidation. In this chapter, we review each of the constituent technologies of a cyber-physical system, examine how each contributes to automation, and present a solution for the roadblocks described. The chapter provides a broad analysis of cyber-physical systems, past and present, together with associated trends.
Keywords
Cyber-physical unified framework · Edge computing · Workload consolidation · Digital transformation · Cyber-physical security
C. Martinez Spessot () Intel Corporation, Hillsboro, OR, USA e-mail: [email protected]
© Springer Nature Switzerland AG 2023 S. Y. Nof (ed.), Springer Handbook of Automation, Springer Handbooks, https://doi.org/10.1007/978-3-030-96729-1_17
17.1 Status of the Cyber-Physical Automation
In the last five years, cyber-physical automation [16] has benefited from tremendous innovation driven by the emergence of cloud-native and edge technologies. Collectively, these approaches enable organizations to transform their business in ways that are more scalable, flexible, and data driven than previously possible. Cyber-physical automation has transformed how operational technologies (OT) work by improving scalability, opening up connectivity, and reducing cost. The initial implementations of cyber-physical automation [17] were based on a cloud-centric architectural approach, in which sensors send the acquired data directly to databases located in the cloud. This approach allowed automation systems to cover everything from field and environment equipment to services delivered in the cloud. It also helped define a series of microservices that establish the infrastructure to support processing in the cloud. Any cyber-physical system (CPS) includes complex interactions and interdependencies among a large group of production and control facilities and organizations (legacy systems). With the rapid development of information technology and cybernetics, intensive computing resources are used to connect computerized physical devices and provide control, communication, coordination, and collaboration [15]. Cyber-physical technologies have introduced new infrastructure layers, each with its own administration measures required for effective management. In response to these changes, the broader industry has developed solutions, services, and products that fall primarily into two categories: (1) vertical or solution-based technologies and (2) horizontal or platform-based technologies.
17.1.1 Vertical Approach on Cyber-Physical Automation

A commonly observed trend is the emergence of a variety of point solutions called vertical cyber-physical solutions. A vertical cyber-physical solution is frequently based on a cloud-centric architecture created to solve just one particular automation use case. Typical examples are industrial chiller maintenance automation, light automation based on workforce presence and behavior, parking automation based on people's demand, and computer vision for industrial automation scenarios. In this first cyber-physical automation wave, all solution deployments were end-to-end implementations. With these systems, the organization rarely integrates its information technology/operational technology (IT/OT) systems directly with the brownfield. It focuses on cloud-to-cloud communication (typically using SOA), delegating the analytical capacity to the cloud infrastructure. These vertical cyber-physical automation solutions share a similar bill of materials (such as data ingestion agents, sensors, gateways, and databases). However, there was no common understanding of the cyber-physical automation domain and no attempt to standardize these solutions. The lack of standardization increased the cost of implementation, support, and maintenance, raised the likelihood of failure, and provided limited or no interoperability. Furthermore, many of these solutions are based on inappropriate models of governance, which fundamentally neglect privacy and security in their design and have challenges in terms of scalability. At the same time, a new business model called IoT-as-a-Service (IoTaaS) emerged: companies realized that they could reduce costs by accessing applications located in a third-party data center without investing in infrastructure, software, and human resources. This business model has demonstrated remarkable success in the consumer market, where the principal factor is the monetization of data. With this approach, however, the panorama becomes a set of value-added services that must be consumed continuously, and the customer cannot freely develop analytical capacity or improve their knowledge of the business and operation. These solutions are still far from being embedded in the rapid day-to-day operation of an organization; nevertheless, they are beginning to deliver insights for aspects such as maintenance, workforce management, or asset monitoring that previously required in situ human supervision.
17.1.2 Horizontal Approach on Cyber-Physical Automation

After the first wave of implementations, which was usually based on the vertical approach, organizations began looking for options that provide holistic administration of the entire cyber-physical stack. Up to that point, implementations built from point solutions had demonstrated the feasibility of cyber-physical automation but failed to scale across the whole organization. Furthermore, a critical factor for scalable automation is security across the entire lifecycle, from the time the automation is built through runtime, and the vertical approach failed to provide this critical requisite. With this precedent, organizations started to work on standardization and on a generalist platform approach that offers the components and the stack necessary to cover the ingestion, processing, and analysis of information, building the solutions themselves.
The horizontal approach produced a foundational platform capable of acquiring, summarizing, normalizing, and moving data to where it will be consumed. At the same time, it solved many of the challenges listed above by creating a scalable infrastructure. The two approaches reinforce each other, since most vertical solutions (except some born in specific niches from the beginning) are built on the leading cloud providers and their comprehensive ecosystems. The mix of both approaches has complicated technology management and challenged IT/OT environments; as a result, many programs become entangled in organizational and political issues when implementing new technologies and struggle to turn large deployments into real value.
17.2 Challenges in Implementing Cyber-Physical Automation
17.2.1 The Problems of the Vertical Cyber-Physical Approach

Cyber-physical automation is transforming the way companies and organizations work by increasing efficiency, improving reliability, and reducing waste. The cloud is a powerful tool for many IT/OT workloads, but it can fall short for IoT use cases. Issues such as latency on the control plane, massive volumes of remote data, and data aggregation (while maintaining context) require a complementing capability. The cyber-physical paradigm based only on a vertical approach suffers from six major problems [3]:
• Creates silos, where every automation use case has its own building blocks from the device (sensor and actuator) to the cloud.
• Generates inconsistent redundancy: many automation use cases use the same building blocks, so those blocks are repeated in the infrastructure and cannot benefit from software reusability patterns. The configurations can also differ, complicating software configuration management.
• Increases deployment complexity as a result of this inconsistent redundancy: because building-block configurations differ, intrinsic complexity is introduced into software configuration management, automation manageability, and infrastructure deployment.
• Demonstrates the limitation of the cloud-centric architecture for AI and computer vision, whose use cases demand near-real-time decisions in the automation environment; the principal bottleneck in this case is communication capability and cost.
• Increases total cost of ownership: the duplicated infrastructures need dedicated hardware (such as gateways and telecommunication routers) for every solution, which incurs high capital and operating expenses; duplicated licenses and support further increase operating expenses.
• Creates incompatible protocols, interfaces, and radio frequencies: devices operate with different interface protocols and in overlapping radio frequencies, creating collisions, interference, and noise in the system.
Most of the organizations and companies that opted for rapid adoption of first-generation cyber-physical automation solutions are now facing the problems mentioned above after starting a digital transformation program.
17.2.2 The Need for Edge Computing and Workload Consolidation

The growth in cyber-physical automation and the vast amount of new data it produces create opportunities and challenges that require more distributed compute performance and capabilities [5]. System-wide collaboration with a holistic view of the automation places new demands on existing infrastructure, where tightly coupled OS, firmware, and hardware make adding or changing functions difficult and costly. More analytics and more devices introduced into cyber-physical networks require more advanced and flexible ways of managing data to ensure that organizations extract maximum and timely value from them. Addressing regulatory compliance and organizational policy concerns means keeping more data local and limiting data dispersion across multiple networks and locations. Technology advances in compute performance, security, and AI are converging to transform the new data into actionable insights, driving faster, more informed decision-making. Cyber-physical automation solutions based on these capabilities are becoming an essential part of operational performance and product/service delivery. Nevertheless, these advances face resource constraints and other concerns that sole reliance on data center and cloud interactions presents, including:
• Low latency
• Privacy and data security
• Lack of persistent/reliable connectivity
• Bandwidth cost and availability
The evolution of edge computing resolves the aforementioned issues, adding real-time analytics and a high level of decentralization. Edge computing can be defined as "distributed processing where data processing is placed closer to the people or things that produce and/or consume information for fast service delivery." Edge computing applies the "divide and conquer" mantra to the algorithmic resolution of complex problems, dividing responsibility for processing and data (a separation of concerns). It not only divides algorithms into more straightforward elements but also stratifies responsibility for the calculations to be performed in each layer. Additionally, edge computing establishes a set of levels of data abstraction that range from raw data (produced by cyber-physical components) to business indicators for decision-making. With edge computing, cheap direct access to raw data is possible, and the application of analytics on this information is in the hands of the end customer who exploits the asset in their facilities. Edge computing should therefore be understood not only as a technical capability but as a key lever to accelerate new software solutions for organizational operations, providing advanced distributed data analytics. Through this, edge workload consolidation responds to the need to manage and obtain value from machine-generated data and to support new operational use cases that require low latency. Given this aim, edge workload consolidation offers a unified way to distribute computing, storage, and service capacity among the different layers of the infrastructure. In this sense, edge workload consolidation complements traditional IoT in facing the new challenges with its three main characteristics:
• Distributed processing: high-speed and flexible computational intelligence to distribute workloads quickly, allowing efficient management of different software and device tools
• Distributed storage: scalable storage infrastructure to manage different sources of data from legacy systems, business tools, and operational/nonoperational data
• Fast delivery: fast data distribution for both machines and operators, which allows the deployment of advanced use cases that add value to the raw data
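As a concrete illustration of these ideas, the following minimal sketch (not from the chapter; all names and values are invented) shows how an edge node can apply the data-abstraction levels described above: it aggregates raw sensor readings locally and forwards only compact summaries upstream, reducing latency-sensitive round trips and bandwidth cost.

```python
from dataclasses import dataclass
from statistics import mean
from typing import List

@dataclass
class Summary:
    """Aggregated view of raw readings, suitable for forwarding upstream."""
    sensor_id: str
    count: int
    minimum: float
    maximum: float
    average: float

def summarize_window(sensor_id: str, raw_readings: List[float]) -> Summary:
    """Reduce a window of raw edge data to a compact summary (edge tier)."""
    return Summary(
        sensor_id=sensor_id,
        count=len(raw_readings),
        minimum=min(raw_readings),
        maximum=max(raw_readings),
        average=mean(raw_readings),
    )

# Raw data stays on the edge node; only the summary would be sent to the cloud.
window = [20.1, 20.4, 35.7, 20.2]
print(summarize_window("chiller-01/temperature", window))
```

The same pattern generalizes to any of the data-abstraction levels mentioned above: each tier consumes the level below it and exposes a more compact, more business-oriented view to the level above.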
17.2.3 The Need for Services Unification

As discussed, the cloud is a powerful tool for many IT/OT workloads, but it can fall short for IoT use cases. Issues such as latency on the control plane, massive volumes of remote data, and data aggregation (while maintaining context) require a complementing capability, and the continuing evolution of edge computing provides solutions to many of these issues. To realize the true benefit of IoT, enterprises should adopt a platform-based approach that provides the common ground for building cyber-physical automation implementations. This approach must be supported by a reference architecture that describes the essential building blocks and addresses concerns such as security, privacy, and performance. Interfaces should be standardized, and best practices for functionality and information usage need to be provided.
17.3 Cyber-Physical Unified Services Framework
The cyber-physical unified services framework is composed of ten disciplines (aspects) that any organization or company should consider in order to have a scalable and manageable cyber-physical implementation that provides real business value. Ultimately, this framework helps the entire organization gain a clear understanding of the value of standardization in each of the disciplines it includes and of the options available, so that informed decisions can be made about what is best for the business. As shown in Fig. 17.1, the disciplines and their definitions are as follows:
1. Cyber-physical architecture: defines how the cyber-physical systems, work, and infrastructure should be designed based on microservices, with the objective of having highly flexible, scalable, responsive, reliable, and available solutions
2. Cyber-physical components: defines the cyber elements (such as digital twins and software agents) and the physical elements (such as sensors and actuators) to be used for industrial automation
3. Communications: defines the technologies based on industrial use cases and environmental constraints (such as wired/wireless, distance, noise, and used bands), intending to provide the basis for intelligent collaboration
4. Cyber-physical infrastructure: defines how workloads are processed and orchestrated on the edge and in the cloud to efficiently distribute loads across the different architecture tiers while optimizing scalability, availability, and performance
5. Automation software: defines the cyber-physical applications (workloads) that perform advanced control on the standardized infrastructure and identifies the standard services that support them
6. Cyber-physical manageability: defines the functions required to manage edge devices at scale, including features such as secure device provisioning, over-the-air (OTA) updates, and decommissioning of physical components
Fig. 17.1 Cyber-physical unified service framework
7. Security: provides the requirements for keeping the complete infrastructure secure, including physical and logical integrity, bootstrapping and provisioning, intrusion prevention, and protection of data at rest and in transit
8. Data management and analytics: defines a set of policies, procedures, practices, and tools to guarantee the quality and availability of data, including the definition of conventional infrastructure data brokers (per tier) that allow data enrichment and aggregation
9. Reliability and safety: defines physical and logical reliability, such as health indication, remote power cycle, and remote diagnosis, and safety considerations aligned with industrial government policies and organizational guidelines
10. IT/OT integration: defines the integration of IT and OT into a unified infrastructure, allowing the generation of the synergies required to make the infrastructure more efficient
The cyber-physical unified framework provides a foundation for building end-to-end IoT solutions. It includes guidance for selecting the right building blocks to support business needs and for providing consistency, scalability, and completeness of IoT solution architectures across the enterprise. The framework also includes a reference architecture that customers may implement using their preferred building blocks.
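As a lightweight way to operationalize the framework, the sketch below (illustrative only, not from the chapter) captures the ten disciplines as an enumeration and scores a deployment's readiness against them; the discipline names are taken from the list above, while the scoring scheme and all values are invented.

```python
from enum import Enum

class Discipline(Enum):
    ARCHITECTURE = "Cyber-physical architecture"
    COMPONENTS = "Cyber-physical components"
    COMMUNICATIONS = "Communications"
    INFRASTRUCTURE = "Cyber-physical infrastructure"
    AUTOMATION_SOFTWARE = "Automation software"
    MANAGEABILITY = "Cyber-physical manageability"
    SECURITY = "Security"
    DATA_MANAGEMENT = "Data management and analytics"
    RELIABILITY_SAFETY = "Reliability and safety"
    IT_OT_INTEGRATION = "IT/OT integration"

def readiness_gaps(scores: dict) -> list:
    """Return the disciplines scoring below 3 on a 0-5 maturity scale (gaps to address)."""
    return [d for d in Discipline if scores.get(d, 0) < 3]

# Hypothetical self-assessment of a plant's current deployment.
assessment = {Discipline.SECURITY: 2, Discipline.COMMUNICATIONS: 4, Discipline.MANAGEABILITY: 1}
print([gap.value for gap in readiness_gaps(assessment)])
```

Such a checklist is one possible way to make the "informed decisions" mentioned above repeatable across sites.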
17.3.1 Cyber-Physical Architecture

The first step in working through a cyber-physical solution is to define a base architecture that can be used by the various internal and external stakeholders to build their solutions in the company. This reference architecture provides a framework on which to construct architectures for specific use cases, as well as guidance on selecting and leveraging components to meet the company's needs. By nature, the reference architecture is more abstract than system architectures designed for a particular application. At the same time, a better understanding of system constraints can provide input to the architectural design, which will then identify future opportunities. System architects can use this document as an architectural template to define their unique cyber-physical system requirements and to design concrete architectures that address them. Using this common approach to architectural design builds consistent implementations across different use cases. Equally important, it helps achieve a common understanding of the overall system, both internally and externally, among its diverse stakeholders, which aids system deployment and significantly enhances system interoperability across the company.
17.3.2 Cyber-Physical Components

In the context of a cyber-physical system, components (also referred to as things, smart devices, or embedded devices) comprise the sensors and actuators used as environmental instruments. These components use communications to exchange data or to actuate on the environment. The hardware that the sensors interface with has varying degrees of capability concerning connectivity, computation, and security, which determines the best way to connect the hardware to a network, monitor event data, and make decisions based on that data. This type of hardware is generally
characterized by low power consumption and the use of a low-power microprocessor or microcontroller. As the hardware's counterpart, the software executed on the components is task specific. In general, the following bill of materials (BoM) should be considered when implementing cyber-physical components:
•
• Firmware and operating system
• Input/Output interfaces to connect to sensors and other devices
• Storage
• Memory
• Compute/Processing capability
• Communication hardware and software (like digital twin's edge layers and management agents)
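As a hedged illustration only (none of these field names or values come from the chapter), such a BoM can be captured as a machine-readable manifest so that a fleet inventory or manageability service can reason about each component:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ComponentBom:
    """Minimal, illustrative bill of materials for one cyber-physical component."""
    component_id: str
    firmware: str
    operating_system: str
    io_interfaces: List[str] = field(default_factory=list)   # e.g., GPIO, RS-485
    storage_mb: int = 0
    memory_mb: int = 0
    compute: str = "microcontroller"                          # or "low-power microprocessor"
    communication: List[str] = field(default_factory=list)    # e.g., management agent, MQTT client

vibration_sensor = ComponentBom(
    component_id="press-7/vibration",
    firmware="vendor-fw-3.1",
    operating_system="FreeRTOS",
    io_interfaces=["SPI"],
    storage_mb=16,
    memory_mb=8,
    communication=["management agent", "MQTT client"],
)
print(vibration_sensor)
```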
17.3.3 Communications In support of the cyber-physical tenet of seamless data ingestion and device control, the architecture should consider the implementation of a broad protocol normalization and closed-loop control systems. A key aspect is enabling multiprotocol data communication among the devices at the edge as well as among the endpoint devices/gateways, network, and data center. Figure 17.2 depicts the three types of networks involved in this process. Proximity networks (PAN) and local area networks (LAN) connect to sensors, actuators, devices, control systems, and assets, which are collectively called “edge nodes.” PANs are usually wireless and more constrained by antenna distance (and sometimes battery life) than LANs. Wide area networks (WAN) provide connectivity for data and control flows between the endpoint devices and the remote data center services. They may be corporate networks, overlays of private networks over the public Internet, 4/5G mobile networks, or even satellite networks. The gateways in the middle of Fig. 17.2 are the primary on-premises devices of the reference architecture. They perform protocol normalization, ingest data from things, and control things based on their own application software or commands from the data center or cloud. Since the gateways both ingest data and execute commands, they are ideal for implementing closed-loop control systems. Gateways unify the broad range of endpoint things, which are characterized by low-cost, low-power, purpose-built, limited, and disjoint features. The architecture of the automation and deployment should consider the following nonfunctional requirements: • RF coverage and range: The required range of the technology given a particular link budget can help provide
•
•
•
•
essential propagation models taking operating frequency, transmit power, receive sensitivity, identify gains/losses, and antenna placements. Certain deployments can pose additional RF propagation challenges, including different degrees of slow and multipath fading. Density and scale: A sufficient level of planning around the scale of the system is necessary to identify the right wireless communication technology and design the system correctly. Many contentions-based communication protocols start to add much overhead reducing the overthe-air efficiency (% useful data) due to the very nature of the on-air access methodology (e.g., CSMA/CA). Contention-free and semi-contention-free solutions exist (such as IEEE 802.15.4/TSCH and ZigBee). The former (and derivatives of it) exists where a more significant degree of scale/density can be offered through time synchronization, effectively limiting or eliminating interference/collisions due to a larger degree of nodes otherwise contending over-air access. Spectrum: Most IoT wireless solutions today operate in unlicensed spectrums (e.g., 868 MHz, 902 MHz, 2.4 GHz, and 5.2 GHz), which are attractive due to their costeffectiveness compared to the technologies in the licensed spectrum (e.g., LTE-NB). This operation usually creates very crowded spaces subject to interference from other sources. Quality of service: It is essential to understand the different traffic models in the network and to identify if some must be treated differently due to their QoS (e.g., on-air priority), retransmissions, in-order delivery, etc. Different options exist with support for a higher degree of QoS, service guarantee/reliability, and contracts. Traffic: Understanding the traffic patterns, models, and flows and associated bandwidth requirements for the system such as asymmetry between uplink versus downlink and communication methodology (such as RESTful interactions, and public/subscription) is essential. Additionally, understanding other boundaries such as maximum tolerated latencies and jitter (i.e., the variance of latency) is also essential. Resilience and fail-over: Several wireless solutions offer a different type of mesh solutions that can help increase the effective range of coverage through intermediate routing nodes. These solutions come with the expense of additional latency as well as increased power consumption. Additionally, several solutions provide a layer of resilience without a single point of failure and fail-over that allows the network to be functional even if a routing node stops working. This implies that proper network planning with redundancy is built into the design and deployment phase of the system.
17
Fixed (Wi-Fi, Ethernet) Mobile (2G, 3G, LTE) Other (LTE-M, unlicensed access technology, satellite...)
• Connectivity & access management • Policy controls (deterministic) • Simple provisioning management, usage reporting • Secure (TLS, DNS DANE)
Fig. 17.2 Detailed view of communication
Additionally, the automation infrastructure requires the integration of many protocols. Some of them are directly related to the IoT infrastructure like MQTT, CoAP, XMPP, and other related to the industrial environment like CAN bus (ISO 11898-2/ ISO 11898-3), Modbus, or Profibus. To unify industrial automation, the cyber-physical functions should apply to different resources, and they cannot rely on specific communication functions directly. This presents the need for generic communication services that will define the middle layer [25]. An example of this is OPC UA (open platform communications unified architecture), a service-oriented machine-to-machine communication protocol widely used in industrial automation and defined in the IEC 62541 specification. It has use cases across the oil and gas, utilities, and transportation. It is platform independent and thus can run on various operating systems and hardware platforms. OPC UA can scale down to embedded controllers or mobile devices up to powerful servers controlling a collection of machines. Furthermore, it can also be integrated into cloud platforms [26]. The various features and components of OPC UA are described in different specification parts released and publicly available by the OPC Foundation [27]. The major strength of OPC UA is the semantic description of the address space model together with various companion specifications, which extend the basic semantic descriptions for many domains such as computer vision, openPLC, or robotics.
Another data functional middle layer used for the automation is the DDS (data distribution service), a data-centric publish/subscribe middleware for highly dynamic distributed systems using either TCP/IP or UDP/IP. It is standardized by the OMG (Object Management Group). It can be used to link multiple processes together in a publisher/subscriber relationship. Unlike a server-client relationship like basic TCP, DDS can operate with multiple publishers and subscribers together, allowing for much more seamless integration between multiple modules. Another advantage of DDS is that it can use RTPS discovery, allowing the system to automatically handle any new connections to an existing network and reconnections [28]. These communication middle layers are evolving to bridge the gaps and unify the DSPS (data stream processing strategy) required to measure the automation project [18, 19]. The next stage to cover is the cyber-physical infrastructure in charge of the metrics validation, processing, and automation execution.
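To make the role of such a publish/subscribe middle layer concrete, here is a minimal, hedged sketch (not part of the chapter; topic names, broker address, and payload format are invented) that uses the paho-mqtt package (1.x-style client API) to subscribe to raw field telemetry and republish it as a normalized JSON message that upstream services can consume uniformly. MQTT is used here simply because it is one of the protocols named above; the same normalization idea applies to OPC UA or DDS.

```python
import json
import paho.mqtt.client as mqtt  # assumes the paho-mqtt package is installed

RAW_TOPIC = "plant/line1/raw"            # hypothetical topic published by a field gateway
NORMALIZED_TOPIC = "plant/line1/normalized"

def on_message(client, userdata, message):
    """Normalize a raw CSV payload ('sensor,value,unit') into a JSON document."""
    sensor, value, unit = message.payload.decode().split(",")
    normalized = {"sensor": sensor, "value": float(value), "unit": unit}
    client.publish(NORMALIZED_TOPIC, json.dumps(normalized))

client = mqtt.Client()
client.on_message = on_message
client.connect("broker.local", 1883)     # hypothetical on-premises broker
client.subscribe(RAW_TOPIC)
client.loop_forever()                    # blocks and dispatches incoming messages
```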
17.3.4 Cyber-Physical Infrastructure

Industrial automation infrastructures are complex and are becoming even more complex and heterogeneous. Instrumenting the environment (such as a manufacturing plant or a building) has produced inconsistent deployments that create a roadblock to scaling [24].
17
The fundamental requirements have always remained unchanged: to execute industrial processes in a timely, reliable, and uniform way. The absence of an optimal technology to meet these goals established the critical need for unified approaches. To scale automation organically across the industry, companies need to unify the cyber-physical infrastructure.
Workload Consolidation

The previous section described the need for edge computing and workload consolidation from an overview perspective; an essential aspect of the autonomous infrastructure is workload consolidation. Consolidating new and legacy systems in edge computing environments can serve multiple purposes and lower the total cost of ownership in several ways. The benefits of workload consolidation are the following (a brief consolidation sketch follows this list):
• Decreases system equipment footprint: Instead of having a dedicated edge compute device for each point solution, a single, more resourceful device can host several IoT solutions/components.
• Eases deployment and management: By reducing the number of devices used by point solutions, organizations can considerably reduce the number of devices they need to manage, which means less operational work.
• Increases security: Organizations have the potential to minimize the attack surface of their network by reducing their hardware, firmware, and software.
• Reduces system integration complexity and eases access to data: A framework is required to allow different workloads to coexist on the same device. This simplifies solution deployment and integration. The enterprise data bus receives data provided by different solutions.
• Improves reliability of underlying process control systems: Duplication of data requests is minimized, and the load on critical (but aging) control systems is reduced.
• Ensures no vendor lock-in: Customers can get their data from each solution, and vendors have to adapt to hardware defined by the customer.
• Optimizes utilization of the aggregated computation at the edge: The inefficient use of edge device resources, which only run a few services per device, is eliminated.
• Accelerates the adoption of cyber-physical technologies, helping enterprises with their digital transformation.
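As an illustration of the consolidation idea, the following hedged sketch (not from the chapter) uses the Docker SDK for Python to run two independent automation workloads as resource-limited containers on a single edge device instead of two dedicated boxes; the image names and limits are invented, and a local Docker engine plus the docker package are assumed.

```python
import docker  # Docker SDK for Python; assumes a local Docker engine is available

client = docker.from_env()

# Two formerly separate point solutions consolidated on one edge compute device,
# each constrained so that neither can starve the other.
workloads = [
    {"name": "vision-inspection", "image": "example/vision-inspection:1.0"},
    {"name": "energy-monitoring", "image": "example/energy-monitoring:2.3"},
]

for spec in workloads:
    client.containers.run(
        spec["image"],
        name=spec["name"],
        detach=True,                # keep running in the background
        mem_limit="512m",           # cap memory per workload
        nano_cpus=500_000_000,      # roughly half a CPU core per workload
        restart_policy={"Name": "unless-stopped"},
    )

print([c.name for c in client.containers.list()])
```

In practice an orchestrator (see the multitier schema below) would own this placement instead of a hand-written script, but the resource isolation shown here is what makes several workloads coexist safely on one device.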
Infrastructure Anatomy

A cyber-physical system must support the execution of multiple types of workloads in order to provide flexibility, allow legacy solutions to run on top of the platform, and make optimal use of the installed technologies.
To support the requisites mentioned earlier, the autonomous infrastructure should be based on a multitier system schema (see Fig. 17.3) with four main components:
• Edge compute devices: devices commonly referred to as IoT gateways or edge compute devices, with enough processing power to host several IoT solutions. They are deployed close to the sensors, receiving data directly from them and interacting with actuators.
• Edge compute server: on-premise server clusters (usually highly available and reliable servers) required for consolidating data coming from many edge nodes. These local servers provide independence from cloud providers, allowing higher availability, low latency, and reduced data transmission.
• Core: server clusters used to deploy the fundamental management tools that orchestrate the edge devices and edge servers. This tier provides the agility and speed needed to face real-time use cases and to manage large amounts of data efficiently.
• Cloud: the group of elements reserved for deploying cloud tools, allowing cloud operation in the plant such as ingestion, digital twins, or cluster oversizing.
This architecture makes it possible to deploy solutions without compromising data security within the facility.
Edge Compute Device Tier Anatomy

The implementation of the edge compute device tier (see Fig. 17.4) should be architected considering the following building blocks:
1. Computer hardware: processor-based devices, including hardware security features, networking, memory/storage, I/O for communications, virtualization, and accelerators (such as an FPGA, vision processing unit, or graphics processing unit)
2. Host OS: the base operating system required to run the full stack
3. Hypervisor: software that creates and runs virtual machines; this is an optional component on the edge compute device
4. Workload orchestrator/AEP: management and orchestration of workloads to reach efficient resource utilization, including VMs and containers, and workload transfers across tiers
5. Guest OS: present only when the edge compute device includes a hypervisor; the operating system specifically required by workloads
6. Containerization: a flexible and lightweight way to run isolated workloads
7. Applications: workloads coming from solution providers
Fig. 17.3 Multitier architecture: edge to cloud
Visualization
Fig. 17.4 Edge compute device
8. Data processing: tools for converting, transforming, enriching, and cleansing data
9. Data visualization: required for presenting the data
10. Security: cross-item enforcing enterprise security policies
11. Communications and connectivity: cross-item defining the communication technologies and protocols used to ingest and exchange data
12. Manageability: cross-component for remote management of the infrastructure
13. Data management: cross-item for managing data, including data brokers and data schemas
Edge Server Tier Anatomy

The implementation of the edge server tier (see Fig. 17.5) should be architected considering the following building blocks:
1. Computer hardware: on-premise server clusters, including hardware security features, networking, memory/storage, I/O for communications, virtualization, and accelerators
2. Hypervisor: a bare-metal hypervisor
3. Workload orchestrator/AEP: management and orchestration of workloads to reach efficient resource utilization, including VMs and containers, and workload transfers across tiers
4. Guest OS: the operating system specifically required by workloads
5. Containerization: a flexible and lightweight method for running isolated workloads
6. Applications: workloads coming from solution providers
7. Plant historian: short-term data storage
8. Plant data lake: data storage and all types of databases, with longer data retention policies, covering structured and nonstructured data
9. Data processing: tools for converting, transforming, enriching, and cleansing data
10. Data visualization: presenting the data in an easy-to-interpret manner
11. Security: cross-item enforcing enterprise security policies
12. Communications and connectivity: cross-item capability defining the communication technologies and protocols used to ingest and exchange data
13. Manageability: cross-item capability for remote management of the infrastructure
14. Data management: cross-item capability for managing data, including data brokers and data schemas
1 Visualization
Fig. 17.5 Edge server
17.3.5 Automation Software A critical part of the automation is the software that resides on a cyber-physical infrastructure described in the last point. Commonly known as “applications,” the automation software has the goal of satisfying the business and technical requirements of executing a behavior in a particular aspect of the environment. From an organization standpoint, the software should achieve systemic cost reduction (compared to the activity executed before) or create a new revenue stream (new product of or services). As presented in Sect. 17.2.2, the new wave of automation software requires the deployment on the edge by using workload consolidation. One of the critical advantages of having an automation software workload consolidable is the feasibility of moving the application from one hardware to another. This allows the capability of having ubiquitous computing [11], very useful in cyber-physical automation in case that an HW cannot process the workload due their limitation (like memory, processor, networking, and/or hard disk) and the software needs to be moved to a most capable HW. The workload consolidation architecture is genuinely the foundational element to have the benefits of cyber-physical
ubiquitous computing, allowing a workload to be migrated through the infrastructure. In every architecture, the technical requirements will identify the bill of materials (BOM) where the application can be executed. Based on the workload consolidation strategy, the application can be classified as agnostic, adapted, or specific (see Fig. 17.6) by where the code can be executed. An agnostic automation software can be executed in any standard hardware and support (or can be ported) to a virtual execution. This class represents all the applications that typically can be found in the cloud or on-premises working from small processing units to servers. The adapted class refers to the applications which some part of the whole code is executed on particular programmable processing units, like vision processing units (VPUs) or field programmable gate arrays (FPGA). This software is optimized to be executed in specific hardware, which is a mandatory prerequisite if we need to move from one HW service unit to another. There is a particular type of code that only can be executed in connection to the edge sensor or actuators (such as drivers, protocol conversion, video compression for communication, or servo-motor applications) and/or require real-time operations. These applications usually are fixed to the hardware
Every piece of automation software will change over time, and it is indispensable to have the right component to manage changes to the code and the data. That component, as mentioned earlier, is part of cyber-physical manageability.
17.3.6 Cyber-Physical Manageability Adapted • Uses specialized functions with low/medium variability • Optimized to run on a specific platform, SW and/or HW • Benefits from using accelerators (FPGA, movidius, etc.)
Specific • Uses unique and fixed function in a particular HW • Requires real-time operations and results • Requires direct connection with the edge sensors/actuators • Too expensive to port/move it
Fig. 17.6 Automation software classification
and have a low probability of changing. This automation software is classified as specific, due to the low chances to be moved to other hardware. Usually, this software needs to be executed in “bare-metal” that means without the virtual machine or containerization layers. To make an application workload consolidable, the data, configuration, and code should accomplish the following criteria: • Security: provides security on communications, allows certificate management and use of HW features (such as TPM), and provides encryption (data at rest) and secure update/migration. • Prioritization between workloads: The application should provide a manifest declaring the performance characteristics like traffic/network consumption and processing prioritization. • Packaging: The application should support virtualization or containerization. • Parametrizable workloads: Every application should present their parameters like OS/container/VM environment variables to configure ports, APIs, etc. This is important to avoid collisions with other apps. • Data messaging and protocols: The application should declare the protocols and messaging to ensure collision avoidance with other solutions. • Well-defined resource utilization metrics: In the automation software’s manifest, it should declare the HW/SW constraints (where the application can be executed) and the use of specific HW such as accelerators.
The management of a device is a fundamental component that provides the control plane of the cyber-physical architecture. It is vital that the selected platform is secure and that it covers all types of devices and connectivity options required for the infrastructure. There are two main lifecycles to cover with such platforms: the device lifecycle and the software and firmware deployment lifecycle. Manageability allows an IoT infrastructure to manage and maintain edge devices, edge servers, and cyber-physical components. The ability to perform updates, to see a device's health, properties, and metrics, and to securely onboard, provision, and decommission devices with ease are the basic requirements put forth by any cyber-physical infrastructure.
Stages and Functions Required by a Device Management Component

Cyber-physical devices will have different stages throughout their life (see Fig. 17.7). These stages are:
• Early life, in which the devices are prepared for being used
• Useful life, in which the devices are deployed and used in the field
• End of life, in which the devices are planned to be removed or replaced
• Reuse life, in which the devices are still usable, but they are repurposed
• Decommission, in which the devices are discarded by following a set of clearing and invalidating procedures
Each of the stages includes device lifecycle and/or software lifecycle functions.
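The stage model above can be captured directly in code on the management backend. The following is a minimal sketch, assuming the five stages listed in the text and a plausible (not normative) set of allowed transitions; real platforms may model additional stages, such as a pre-life stage before commissioning.

from enum import Enum, auto

class Stage(Enum):
    EARLY_LIFE = auto()    # device is prepared for use (commissioning, configuration)
    USEFUL_LIFE = auto()   # device is deployed and used in the field
    END_OF_LIFE = auto()   # device is planned to be removed or replaced
    REUSE_LIFE = auto()    # device is still usable and gets repurposed
    DECOMMISSION = auto()  # device is discarded after clearing/invalidating procedures

# Plausible transitions between stages; the chapter does not prescribe an exact graph.
ALLOWED = {
    Stage.EARLY_LIFE: {Stage.USEFUL_LIFE, Stage.DECOMMISSION},
    Stage.USEFUL_LIFE: {Stage.END_OF_LIFE},
    Stage.END_OF_LIFE: {Stage.REUSE_LIFE, Stage.DECOMMISSION},
    Stage.REUSE_LIFE: {Stage.USEFUL_LIFE, Stage.DECOMMISSION},
    Stage.DECOMMISSION: set(),  # terminal: data sanitized, device unregistered
}

def advance(current: Stage, target: Stage) -> Stage:
    """Move a device to a new lifecycle stage, rejecting transitions the model does not allow."""
    if target not in ALLOWED[current]:
        raise ValueError(f"illegal transition {current.name} -> {target.name}")
    return target

# Example: a device is repurposed instead of being discarded.
stage = Stage.END_OF_LIFE
stage = advance(stage, Stage.REUSE_LIFE)
stage = advance(stage, Stage.USEFUL_LIFE)
print(stage.name)  # USEFUL_LIFE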
Functions of Device Manageability

During the commissioning phase, the device enrollment process leverages the trusted platform module (TPM) identity function, where devices can be preregistered based on their immutable public endorsement key, which is tied to the physical instance of the TPM. During the registration phase, the device is challenged to prove that it is a legitimate device holding the private part of the endorsement key. One approach is to have the management backend generate a random nonce and encrypt it with the endorsement key.
Fig. 17.7 Device manageability – stages and functions. The figure maps the stages (pre-life, early life, useful life, end of life/reuse life, decommission) to device-lifecycle functions (commission, configuration, migration, rollback and recovery, remote access and diagnosis, decommission) and software-lifecycle functions (provisioning, activation, updates, configuration, deactivation, uninstall).
The device can meet the challenge by decrypting the nonce and signing it before returning the response to the device management backend. The registration and enrollment flow used here is not necessarily coupled to the TPM; it could also be used with other hardware security modules (HSMs), such as the device identity composition engine (DICE). Ideally, this function is fully automated and instrumented as zero touch, to allow IoT device deployment to scale.

The configuration activity is used to customize the device in order to create a specific behavior. This activity can be part of the early life. It can be performed before or after deployment (specifically during useful life), either through manual intervention (not ideal for scalability from the operations point of view) or in an automated way (by triggering scripts from the manageability solution). This function allows the configuration of items, such as device parameters, that are required to set the communication mode, security, logging, etc.

Remote access and diagnosis are critical activities used to monitor, troubleshoot, and solve issues from a remote location without the need to send a technician to the device.
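Returning to the enrollment challenge described above, the following sketch illustrates the nonce round trip with plain RSA keys as a stand-in for the TPM endorsement key. This is a simplification for illustration only: a real TPM enrollment would use the TPM's own credential-activation and signing primitives rather than exporting key operations to Python, and the cryptography package used here is simply a convenient, assumed dependency.

import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Stand-in for the device's endorsement key pair; in reality the private part never leaves the TPM.
device_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
device_pub = device_key.public_key()   # preregistered with the backend during commissioning

# Backend side: generate a random nonce and encrypt it with the device's public endorsement key.
nonce = os.urandom(32)
challenge = device_pub.encrypt(
    nonce,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()), algorithm=hashes.SHA256(), label=None),
)

# Device side: decrypt the nonce and sign it to prove possession of the private key.
recovered = device_key.decrypt(
    challenge,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()), algorithm=hashes.SHA256(), label=None),
)
signature = device_key.sign(
    recovered,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# Backend side: verify the signature over the original nonce; raises InvalidSignature on failure.
device_pub.verify(
    signature,
    nonce,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)
print("device proved possession of the endorsement key")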
During its standard operational lifetime, the device must meet the compliance requirements, which include integrity verification of the platform state. The remote attestation feature provides the means for the device to measure and collect integrity measurements emanating from a variety of sources on the platform, such as the BIOS firmware, bootloader, kernel, filesystem, etc. TPM 2.0 is used to get signed quotes from the underlying platform configuration register (PCR) banks along with data from the TPM event log. The verifier then uses this information to determine the compliance state of the device in order to take appropriate action (e.g., alarm, log, etc.).

During the useful-life stage, software/firmware lifecycle management is vital since it allows the device to take on different behaviors. Activities in this lifecycle include:
• Provisioning: one of the main activities, since it allows the installation of software/firmware.
• Activation/Deactivation: software under subscription models can be enabled or disabled using the available digital rights management (DRM) mechanism.
• Updates: the primary function required by any software component. Without the update mechanism, the solution cannot be improved or fixed (business-logic functionality, general bugs, or, more importantly, security issues such as zero-day attacks).
• Configuration: required to remotely modify the software and firmware, e.g., to change behavior or improve performance for a specific use case.
• Uninstall: it is always convenient to remove software that is no longer required on the device, considering storage limitations and possible security issues.

Rollback and recovery activities are essential after a failed deployment (provisioning, update, configuration, etc.), as they help revert the changes to the previous stable configuration.

Migration can happen when a device is in the end-of-life stage (planned to be replaced). All the configurations (software, firmware, data, and settings) are gathered and made available to a new device. Migration can also happen in the reuse-life stage, when a device is still usable but requires some maintenance; afterwards, the previous configuration can be redeployed.

In the decommissioning phase, a device may be removed, deleted, blacklisted, and unregistered. This process includes securely sanitizing any sensitive data from the device, which may be particularly important if the device ends up missing. The device management component can trigger the data sanitization process, but this requires the device to be online and connected; there is also a set of locally enforced triggers for offline scenarios that are important to support. During a data sanitization process, sensitive data is rendered inaccessible so that it becomes infeasible for an adversary to recover it with a significant level of effort. There are different levels of data sanitization, depending on how difficult it is for an adversary to gain access to the data. Cryptographic erase (CE), a process that effectively destroys the passphrase to the entire volume from its storage (in this case, the TPM nonvolatile memory), is considered here. There are several prerequisites and guiding principles to ensure the effectiveness of this approach, such as:
• Local backups of the partition
• Length and entropy of the passphrase protecting the volume
• Cryptographic algorithms used and associated key lengths
• Configuration of hibernation and swap partitions
Device Management: Features and Functionalities Many characteristics go into choosing a device manageability component. Customer requirements for each solution are unique, and a device management component should provide a range of functionalities to enable the maximum capability. However, to select a device management component that meets the customer needs, offerings of all the device management components must be reviewed and understood.
It is also important to note that these features are subjective and are based on what the customer considers to be a priority. For a baseline standard to efficiently and safely manage IoT devices, the following are considered to be of utmost importance:
Automatic Device Onboarding It should be as close to zero-touch provisioning as possible. Device onboarding is simplified to avoid manual intervention. This includes automated key/certs provisioning. For example, an OS image is built with the device management agent. This implementation enables the device to automatically onboard when it is turned on and connected to a network source. Device Manageability Dashboard Most device management components come with preexisting dashboard availability for default methods and properties of devices. Device’s health can be monitored using device status (online/offline), location, CPU utilization, memory availability, etc. These are very important for operation teams in improving user experience and in simplifying adoption, as opposed to having to develop custom dashboards based on APIs. Remote Login This characteristic is intended for troubleshooting and diagnosis of devices that are not physically accessible. Device management components usually offer a secure shell (SSH) capability. A device that is connected in the same network can SSH into another device that has SSH enabled. Other vendors directly offer remote desktop or terminal capability that allows looking into the device. Device Grouping/Hierarchy Management Batch device management and support maintain different levels of organization/department. In some components, they are referred to as tenants. The ability of a component to maintain a hierarchical structure of devices allows servicing of more than one customer/project and allows customers to organize their devices into multiple tenants. Device grouping can also be performed by filtering or searching devices based on a feature called “Tags.” It is the ability to give one or more descriptive tags to a group of devices so they can be filtered accordingly for searching/performing updates, etc. Remote Script Execution A script or a command can be run directly from the device management portal. Most vendors offer ways to configure the parameters, choose an executable, etc.
OTA The over-the-air update is a popular feature among device management components, where a package is deployed onto a device along with instructions to perform any action like updating, installation, and uninstallation. OTA is used for firmware, BIOS, or software updates. In some platforms, this may be based on the previously mentioned remote script execution, although it is preferred as a separate feature showing the progress of package deployment. Campaigns The campaign is a functionality that allows multiple devices (even under multiple tenants) to be updated as a batch remote execution and not just an over-the-air update. Device Configuration Provisioning Device management components offer commands to provision device properties, OS configuration files, network files, etc. Rule Engine A rule engine is required for operational data processing. Users can create rules from the dashboard that react to events and triggers and execute actions like sending notifications, alerts, etc. Deployment Model Support Device management components offer a variety of support – on premise, cloud (SaaS), or hybrid. Manageability Standards Support A versatile device management component includes support for widely used standards like LWM2M, TR069, and OMADM. This allows a simple deployment process for devices. Cyber-Physical Components Management This is the administration of components, such as sensors and actuators, that need to be paired with edge devices or gateways. Device management components offer things management; the same way edge devices are managed. Certificate Management Most device management vendors include features to manage certificates (generation, renovation, and revocation), including custom CA. Protocols Supported In any industrial environment, there are often many different devices in use, with each having its own protocol. This means that organizations typically need to handle several different protocols to gather data. The ability to provide protocol conversion in a human-machine interface (HMI), like Red Lion’s Graphite HMIs, or any other automation product, is
a mandatory requirement to connect several different devices with different protocols and to aggregate the overall data collection. The device management components include support for the communication protocols described in Sect. 17.3.3, such as HTTP/S, MQTT, DDS, OPC UA, ROS, and CoAP, which is required for seamless integration with the enterprise data infrastructure. The support and administration of protocol conversion components, such as translating from MQTT to CoAP [23], are required to integrate every device with the automation infrastructure.
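As a purely conceptual illustration of such a conversion component (not a complete bridge, and independent of any particular MQTT or CoAP library), the sketch below maps an MQTT-style publication onto the CoAP request a hypothetical converter might issue; real converters such as the one described in [23] also have to deal with observation, QoS, and payload re-encoding.

from dataclasses import dataclass

@dataclass
class MqttPublish:
    topic: str       # e.g., "factory/line1/oven/temperature"
    payload: bytes
    retain: bool = False

@dataclass
class CoapRequest:
    method: str      # CoAP code, e.g., "PUT" or "POST"
    uri: str
    payload: bytes
    content_format: int = 0  # 0 = text/plain in the CoAP content-format registry

def mqtt_to_coap(msg: MqttPublish, coap_host: str = "coap://gateway.local") -> CoapRequest:
    """Map an MQTT publication to an equivalent CoAP request on a (hypothetical) gateway.

    The MQTT topic levels become CoAP URI path segments; a retained message is treated as a
    resource update (PUT), while a non-retained one is delivered as an event (POST).
    """
    path = "/".join(segment for segment in msg.topic.split("/") if segment)
    method = "PUT" if msg.retain else "POST"
    return CoapRequest(method=method, uri=f"{coap_host}/{path}", payload=msg.payload)

# Example: a retained temperature reading becomes a PUT to the mirrored CoAP resource.
request = mqtt_to_coap(MqttPublish(topic="factory/line1/oven/temperature", payload=b"245.3", retain=True))
print(request.method, request.uri)  # PUT coap://gateway.local/factory/line1/oven/temperature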
Scalability Scalability is the characteristic that allows IoT solutions to scale, include a large number of edge devices, and connect devices like sensors and actuators. Device management components need to provide support for scalability to manage and maintain thousands of devices without restricting them.
17.3.7 Security

Historically, production and manufacturing deployments have been implemented in separate networks and based on client-server communication; for example, a gas tank level sensor sends information to the industrial control system so that it can adapt its processes. Typical SCADA systems periodically request the state of connected programmable logic controllers (PLCs) or remote terminal units (RTUs) and respond with control commands or updates of setpoints [20]. With cyber-physical automation, this communication model changed completely, owing to the massive increase of sensors and actuators required to deploy Industry 4.0. As a result, a vast exchange of device data emerged, generating dynamic communication relationships between various endpoints and moving from "one-to-one" to "machine-to-machine" (M2M) communication [21]. These complex communication relationships render traditional end-to-end security insufficient for protecting the sensitive and safety-critical data transmitted in industrial systems [22].

In addition to the communication issue, and as discussed in the previous points, one of the critical areas to define in the cyber-physical architecture is the edge compute. While the primary goal of edge computing is to provide more efficient, performant, and lightweight computing, a security-by-design approach should be taken [4]; this approach helps avoid exposing the edge computing infrastructure directly to broader attack surfaces. On the basis of the security-by-design approach, the consideration of hardware is fundamental to protecting the trustworthiness of the whole compute chain. A hardware root of trust (HRT) is mandatory to have a highly secure device on
the edge [6]. It protects the device secrets on the hardware, in addition to providing physical countermeasures against side-channel attacks. As opposed to software-only approaches, hardware security provides two important properties that may be used to establish device security:
• First, the hardware has a specific purpose, and an attacker cannot reuse it for unintended actions.
• Second, the hardware can detect and mitigate physical attacks.
When used to protect secrets and the device's correctness, the hardware provides a robust root of trust upon which additional software functionality can be implemented securely and safely.
The cornerstones of cyber-physical automation security are confidentiality, integrity, and availability, which relate directly to the capabilities shown in Table 17.1. The following points demonstrate how these capabilities can be achieved.

Table 17.1 Edge security capabilities and implementation
Capabilities                     Implementation
Trustworthiness                  Secure boot, credential storage
Protection of data at rest       Disk encryption, data sanitization
Protection of data in transit    Data encryption, credential storage
Protection of data in use        Credential storage
Secure Boot Secure boot is a technology that became an industry standard, where the “firmware verifies that the system’s bootloader, kernel, and, potentially, user space, are signed with a cryptographic key authorized by a database stored in the firmware.” The unified extensible firmware interface (UEFI) specification defines the above process as secure boot, which is “based on the Public Key Infrastructure (PKI) process to authenticate modules before they are allowed to execute. These modules can include firmware drivers, option ROMs, UEFI drivers on disk, UEFI applications, or UEFI boot loaders. Through image authentication before execution, Secure Boot reduces the risk of pre-boot malware attacks such as rootkits.” Secure boot relies on cryptographic signatures that are embedded into files using the authenticode file format. The integrity of the executable is verified by checking the hash. Authenticity and trust are established by checking the signature. The signature is based on X.509 certificates, and the platform must trust it. The system’s firmware has four different sets of keys:
Platform key (PK) UEFI secure boot supports a single PK that establishes a trust relationship between the platform owner and the platform firmware, by controlling access to the Key Exchange Key (KEK) database. Each platform has a unique key. The public portion of the key is installed into the system, likely at production or manufacturing time. Therefore, this key is: • The highest-level key in secure boot • Usually provided by the motherboard manufacturer • Owned by the manufacturer Key Exchange Key (KEK) These keys establish a trust relationship between the firmware and the operating system. Typically, there are multiple KEKs on a platform; it is their private portion that is required to authorize and make changes to the “DB” or “DBX.” Authorized signatures/Whitelist database (DB) This database contains a list of public keys used to verify the authenticity of digital signatures of certificates and hashes on any given firmware or software object. For example, an image, either signed with a certificate enrolled in DB, or that has a hash stored in DB, will be allowed to boot. The forbidden signatures/Blacklist database (DBX) This database contains a list of public keys that correspond to unauthorized or malicious software or firmware. It is used to invalidate EFI binaries and loadable ROMs when the platform is operating in a secure mode. It is stored in the DBX variable. The DBX variable may contain either keys, signatures, or hashes. In the secure boot mode, the signature stored in the EFI binary (or computed using SHA-256 if the binary is unsigned) is compared against the entries in the database.
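To make the allow/deny logic of the DB and DBX concrete, here is a minimal sketch of the decision a UEFI-style loader applies to an image, using hashes only; it is a simplification (real firmware also handles certificate chains, signature formats, and timestamping), and the helper names are hypothetical.

import hashlib

def image_digest(image: bytes) -> str:
    """SHA-256 digest of the binary, the hash form used when an image is unsigned."""
    return hashlib.sha256(image).hexdigest()

def secure_boot_allows(image: bytes, db_hashes: set, dbx_hashes: set) -> bool:
    """Deny if the image appears in the forbidden database (DBX); otherwise allow it only
    if it appears in the authorized database (DB). Everything else is rejected."""
    digest = image_digest(image)
    if digest in dbx_hashes:        # revoked/blacklisted binaries always lose
        return False
    return digest in db_hashes      # unknown binaries are not executed either

# Example: a bootloader enrolled in DB boots, a revoked one does not.
bootloader = b"\x7fELF...pretend-bootloader..."
db = {image_digest(bootloader)}
dbx = set()
print(secure_boot_allows(bootloader, db, dbx))          # True
print(secure_boot_allows(bootloader, db, dbx | db))     # False: now revoked via DBX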
Data Encryption

Data encryption protects the private data stored on each of the hosts in the IoT infrastructure. The Linux Unified Key Setup (LUKS) is the standard for Linux hard disk encryption; by providing a standard on-disk format, it facilitates compatibility among different distributions and secure management of multiple encryption keys. Using TPM 2.0 as credential storage for disk encryption keys mitigates the security risk of having those keys stored in plain text on the device disk. Additionally, it removes the need to distribute keys through an insecure channel where they could be disclosed. Moreover, a platform integrity requirement can be established by using TPM 2.0 Platform Configuration Register (PCR) policies, where the TPM will release the disk encryption key only if the PCR values are the expected ones configured at the time of key creation. Combining PCR policies with secure/measured boot prevents unsealing of
secrets if an unexpected boot binary or boot configuration is used in the platform.

One of the benefits of TPM 2.0 is the availability of authorized policies, which update the authorized boot binaries and boot-configuration values in the event of a platform update. In this case, a policy is signed by an authorized principal and can be verified using TPM primitives, allowing the contents of the PCRs to differ at the moment the disk encryption keys are unsealed. This implementation solves the "PCR fragility" problem associated with the TPM 1.2 family, which was considered a scale barrier for TPM-based security adoption (see Fig. 17.8, which contrasts a static passphrase stored on the filesystem, a static passphrase securely stored in the TPM and decoupled from the physical drive, a TPM passphrase released only under a static PCR policy so that an attacker cannot boot an alternative OS and mount the volume, and a TPM passphrase with a flexible signed policy that addresses the PCR brittleness concern and supports system updates).

From the operating system, TPM-based disk encryption can be configured and automated using the Clevis open-source tool. Clevis is a framework for automated encryption/decryption operations, with support for different key management techniques based on "pins" (Tang, SSS, and TPM2). The TPM2 pin uses Intel's TPM2 software stack to interact with the TPM and automate the cryptographic operations used in disk encryption.
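As a minimal sketch of that workflow, the following shows how a LUKS volume could be bound to the TPM with a PCR policy using the clevis command-line tool, driven here from Python for consistency with the other examples. It assumes clevis (with the tpm2 pin) and a LUKS2-formatted device are present; the device path and PCR selection are illustrative only.

import json
import subprocess

def bind_luks_to_tpm(device: str, pcr_ids: str = "0,7", pcr_bank: str = "sha256") -> None:
    """Bind an existing LUKS volume to the local TPM 2.0 via the clevis tpm2 pin.

    After binding, the volume can be unlocked automatically at boot only while the selected
    PCRs still hold the values they had at bind time (i.e., the measured boot state is intact).
    """
    pin_config = json.dumps({"pcr_bank": pcr_bank, "pcr_ids": pcr_ids})
    subprocess.run(
        ["clevis", "luks", "bind", "-d", device, "tpm2", pin_config],
        check=True,  # raise if the binding fails (e.g., no TPM, wrong LUKS passphrase)
    )

if __name__ == "__main__":
    # Hypothetical data partition; in practice this is chosen during provisioning.
    bind_luks_to_tpm("/dev/nvme0n1p3")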
Execution Policies and Integrity Protection

The goals of the integrity measurement architecture (IMA) are to:
1. Detect whether files have been accidentally or maliciously altered, both remotely and locally
2. Appraise a file's measurement against a "good" value stored as an extended attribute
3. Enforce local file integrity
These goals are complementary to the mandatory access control (MAC) protections provided by LSM modules, such as SELinux and Smack, which can protect file integrity depending on their policies.
Fig. 17.9 TPM 2.0 and IMA. For each measured file, the hash and path are appended to the IMA measurement list, and the aggregate measurement is extended into PCR 10 of the TPM 2.0.
As is schematized in Fig. 17.9, the following modules provide several integrity functions: • Collect – measures a file before it is accessed • Store – adds the measurement to a kernel resident list and, if the hardware TPM is present, extends the IMA PCR • Attest – if present uses the TPM to sign the IMA PCR value in order to allow a remote attestation of the measurement list • Appraise – enforces local validation of a measurement against a “good” value stored in an extended attribute of the file • Protect – protects a file’s security extended attributes (including appraisal hash) against offline attack • Audit – audits file hashes
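The appraise step can be pictured with the following sketch, which compares a file's current SHA-256 digest with a reference value kept in an extended attribute. It is a simplification: the real security.ima attribute is written by the kernel/evmctl with its own binary header (and possibly a signature rather than a bare hash), and reading security.* attributes normally requires elevated privileges, so the attribute name used below is illustrative and chosen so the sketch can run unprivileged.

import hashlib
import os

def file_digest(path: str) -> bytes:
    """SHA-256 of the file contents, the measurement IMA would collect for this file."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.digest()

def appraise(path: str, xattr_name: str = "user.good_sha256") -> bool:
    """Compare the current measurement against the 'good' value stored as an extended attribute.

    A real IMA appraisal reads security.ima (root only) and may verify a digital signature
    instead of a bare hash; this sketch uses a user.* attribute so it can run unprivileged.
    """
    try:
        expected = os.getxattr(path, xattr_name)
    except OSError:
        return False  # no reference value recorded: treat as failing appraisal
    return expected == file_digest(path)

if __name__ == "__main__":
    target = "./ima-demo.bin"
    with open(target, "wb") as f:
        f.write(b"trusted payload")
    os.setxattr(target, "user.good_sha256", file_digest(target))  # record the good value
    print(appraise(target))   # True: file matches its recorded measurement
    with open(target, "ab") as f:
        f.write(b" tampered")
    print(appraise(target))   # False: the measurement no longer matches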
Components

IMA-measurement is part of the overall IMA, based on the Trusted Computing Group's open standards. IMA-measurement maintains a runtime measurement list and, if anchored in the hardware TPM, an aggregate integrity value over this list. The benefit of anchoring the aggregate integrity value in the TPM is that a software attack cannot compromise the measurement list without being detectable. IMA-measurement can be used to attest to the system's runtime integrity: based on these measurements, a remote party can detect whether critical system files have been modified or whether malicious software has been executed.

IMA-appraisal extends the "secure boot" concept of verifying a file's integrity before transferring control or allowing the file to be accessed by the OS. The IMA-appraisal extension adds local integrity validation and enforcement of the measurement against a "good" value stored in the extended security.ima attribute. The primary methods for validating security.ima are hash based, which provides file data integrity, and digital-signature based, which provides authenticity in addition to file data integrity. IMA-audit includes file hashes in the system audit logs, which can be used to augment existing system security analytics/forensics.

The IMA-measurement, IMA-appraisal, and IMA-audit aspects of the kernel's integrity subsystem complement one another but can be configured and used independently. IMA-measurement and IMA-appraisal allow the extension of the hardware root-of-trust model into user-space applications and configuration files, including the container system. The benefits of IMA-appraisal are similar to those provided by secure boot, in the sense that the Linux kernel first appraises each file in the trusted computing base against a known good value before passing control to it. For example, the dockerd binary will only be executed if the actual digest of the dockerd file matches the approved digest stored in the security.ima attribute; in the digital-signature case, the binary will only be executed if the signature can be validated against a public signing key stored in the machine owner's key infrastructure. The benefits of IMA-measurement, on the other hand, are similar to those provided by the measured boot process, where the Linux kernel first measures each file in the trusted computing base before passing control to it. These measurements are stored in a particular file in the Linux security file system, which can later be used in a remote attestation scenario.

Credential Storage

The TPM generates strong, secure cryptographic keys: strong in the sense that the keys are derived from a true random source and a large keyspace, and secure in the sense that the private key material never leaves the TPM secure boundary in plain form. The TPM stores keys in one of four hierarchies:
1. Endorsement hierarchy
2. Platform hierarchy
3. Owner hierarchy, also known as the storage hierarchy
4. Null hierarchy
A hierarchy is a logical collection of entities: keys and NV data blobs. Each hierarchy has a different seed and different authorization policies, and hierarchies differ as to when their seeds are created and who certifies their primary keys. The owner hierarchy is reserved for the final user of the platform, and it can be used as a secure symmetric/asymmetric key storage facility for any user-space application or service that requires it. An example of symmetric key storage is presented in the disk encryption section, where a random key is stored in the TPM and a TPM PCR policy is used to enforce integrity-state validation before unsealing the key.

An example of asymmetric key storage is the use case of VPN credentials. VPN clients, like OpenVPN, use a combination of public and private keys and certificates (in addition to other security mechanisms such as Diffie-Hellman parameters and TLS authentication keys) to establish identity and authorization against the VPN server. By using the TPM, a private key is created internally in the NVRAM, and a certificate signing request is derived from the private key using Intel's TPM 2.0 software stack. The certificate signing request is then signed by the organization's PKI, and a certificate is provided to the VPN client to establish authentication with the VPN server. At the moment of establishing the connection, the VPN client needs to prove that it owns the private part, interacting either directly with the TPM or indirectly through the PKCS#11 cryptographic token interface.

Another example of asymmetric key storage is Docker Enterprise. To allow a host to pull and run Docker containers from a Docker Trusted Registry, it is required to use a Docker client bundle, which consists of a public and private key pair signed by the Docker Enterprise Certificate Authority. Similarly to the VPN client case, a private key can be securely generated in the TPM, and a certificate signing request can be derived from the private key to be signed by the Docker Enterprise CA. Once a certificate is issued to the client, the Docker daemon will need to interact with the TPM to prove that it has access to the private key at the moment of establishing a TLS connection with the Docker Enterprise service.
Data Sanitization

Data sanitization is a process that renders access to target data on the media infeasible for a given level of recovery effort. The level of effort applied when attempting to retrieve data may range widely: for example, a party might attempt simple keyboard attacks without the use of specialized tools, skills,
or knowledge of the medium's characteristics. At the other end of the spectrum, the party might have extensive capabilities and be able to apply state-of-the-art laboratory techniques in their attempts to retrieve the data.

To implement an effective data sanitization process, we rely on cryptographic erase (CE). This technique, used widely in self-encrypting drives, leverages the encryption of the target data by enabling the sanitization of the target data's encryption key. This leaves only ciphertext remaining on the media, effectively sanitizing the data by preventing read access: without the encryption key used to encrypt the target data, the data is unrecoverable, and the level of effort needed to recover it is then determined by the strength of the cryptographic key and of the cryptographic algorithm and mode of operation used to encrypt the data. If strong cryptography is used, the sanitization of the target data is reduced to the sanitization of the encryption key(s) used to encrypt it. Thus, with CE, sanitization may be performed with high assurance much faster than with other sanitization techniques; the encryption itself acts to sanitize the data.

By storing all the sensitive data in encrypted partitions and using the TPM 2.0 to store the disk encryption key, the cryptographic erase process can be applied by issuing a TPM2_CC_Clear command that erases the relevant content of the TPM NV RAM. After this operation, any attempt to unlock the encrypted partition by requesting the key from the TPM will result in an error, because the TPM no longer holds the disk encryption key. It has to be noted that the sanitization process relies on the key being securely stored in the TPM only: the key should not be stored in plain text on the device disk at any time. Additionally, to prevent disclosure of the key while it is in device memory, an encrypted swap partition is used.
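A minimal sketch of that trigger, assuming the tpm2-tools package is installed and that the disk encryption key is sealed only in the TPM (as described above), could look as follows; lockout-authorization handling and any remote trigger logic are deliberately omitted.

import subprocess

def cryptographic_erase() -> None:
    """Issue TPM2_Clear via tpm2-tools, discarding the sealed disk encryption key.

    Once the command succeeds, the LUKS volume can no longer be unlocked through the TPM,
    which renders the (still encrypted) data on disk inaccessible: the cryptographic erase.
    """
    subprocess.run(["tpm2_clear"], check=True)  # may require lockout authorization on some platforms

if __name__ == "__main__":
    # In a real deployment this would be gated by a decommissioning or tamper trigger.
    cryptographic_erase()
    print("TPM cleared: sealed disk encryption keys are gone")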
17.3.8 Data Management and Analytics

Section 17.3.3 presented how to transmit the information and the middle layers' requirements for defining the automation protocols. Additionally, Sect. 17.3.4 presented the infrastructure anatomy that supports the execution of multiple workloads to process the metrics and run the decision engine that produces the automation. Data management and analytics address how to create the decision algorithms based on artificial intelligence. There is no doubt that AI, including machine learning and deep learning algorithms, as well as the hardware to accelerate them, is a transformative technology. AI is already providing profound capabilities and benefits that were not achievable a few years ago. Looking to the future, AI has the potential to help solve some of humanity's biggest challenges.
While AI algorithms have existed for many years, rapid expansion in AI-based capabilities across the organizations has been observed recently. This is because the processing and data storage costs have fallen at similarly dramatic rates. In parallel, computer scientists have advanced AI algorithm design, including neural networks, leading to greater accuracy in training models. Despite this clear potential, many organizations are yet to get started with AI, and adoption is often not necessarily happening as fast as many reports from the media and academia might suggest [7]. As enterprises look to begin their AI journeys, the more common use cases exist around predictive maintenance, computer vision, and natural language processing (NLP). Example opportunities for deep and machine learning include: • Vertical: Many organizations are looking to solve challenges specific to their industries, e.g., manufacturing process and spares management, retail inventory management, and patient outcomes in health care. • Line of business: Across industries, corporations have similar needs depending on individual lines of businesses. For example, natural language processing has applications in customer service departments, and image recognition and predictive maintenance have relevance to supply chain applications. • Technology architecture: Many examples of AI that we come across have similar architectures, even if they use different data pools and deliver different results. For example, the image processing and anomaly detection used by one customer to detect solar panel defects can be based on a similar platform to that which conservationists might use to “listen” for behavioral changes in bats. • IT related: Some applications of AI can exist across applications and services because they are about managing data flows, preempting bottlenecks, predicting faults, and responding quickly to failures and breaches. To take advantage of the new and exciting AI opportunities, one of the first considerations is a suitable infrastructure. AI solutions frequently demand new hardware and software, like those around collation and annotation of data sources and scalable processing or creating and fine-tuning of models as new data becomes available. For any given AI solution, the options include: • Repurpose existing hardware to deliver the AI solution at a minimal cost • Buy a one-off AI solution to address the needs of the use case only
• Build a broader platform that can support the needs of multiple AI solutions
• Outsource AI solution delivery to a third-party resource, including the cloud

AI and the opportunities it represents are evolving rapidly, so budget holders may be reluctant to invest too heavily. A lack of in-house expertise, for example, can undermine solution delivery, creating the potential for reputational risk if mistakes or delays occur. Meanwhile, a lack of trust in the effectiveness of AI can create a substantial barrier to realizing its full value. One of the most common problems organizations face is the lack of a process to take a solution from the proof of concept (PoC) to the production release.

Stage 1: Confirm the Opportunities and Business Value
It is vital to be clear from the outset on the goals to be achieved with AI, their importance to the organization, and the respective prioritization of the use cases. To identify the opportunities, the organization should assess where AI can make the most immediate difference. Steps that could be taken are as follows:
• Identify areas of the business with an apparent problem to be solved or value to be gained from AI
• Work with existing pools of expertise, using the skills and experience available in house
• Benchmark what other organizations are doing in the same industry

Stage 2: Characterize the Problem and Profile the Data
After the AI use case selection and prioritization, the next step is to gather the requirements in more detail, mapping them to broad categories such as reasoning, perception, or computer vision. Additionally, several nonfunctional requirements need to be analyzed, such as:
• The hardware on which it will be executed, data center capacity, and the use of accelerators, based on data and benchmarks
• Data security, privacy, and regulatory factors
• Forecast and sizing of the new data/information; size of the training model

Stage 3: Architecture and Solution Deployment Infrastructure
The next stage of designing and deploying the AI solution is to identify the technologies to be used, including:
• Underlying products and systems infrastructure
• AI-specific software to drive the infrastructure
• Enabling AI frameworks to support the planned solution
• Visualization and front-end software and/or hardware
One of the critical goals of this definition is to identify which components need to be built, bought, or reused, and/or which should make use of cloud services.

Stage 4: Pilot Implementation
As part of the AI project execution, it is recommended to perform a pilot deployment. The goal of this effort is to train the personnel, understand the implications for the organization, validate the approach, and confirm and evaluate the right resources to be applied in the final implementation. It also serves as a "lighthouse" to understand the political needs to be satisfied in a specific organization.

Stage 5: Solution Tuning and Optimization
The result of the pilot clearly identifies where the algorithms and processes need to be optimized. Additionally, the engineering team is better prepared, based on the learning provided by the environment instrumentation produced in the pilot deployment. The next step is to optimize the software in areas such as data curation and labeling, and to experiment with, train, and deploy new models that may give better results. Figure 17.10 presents the optimization possibilities for improving the execution performance of the algorithms and processes. It demonstrates that AI execution usually does not scale linearly: for instance, scaling a single-node configuration to a cluster of 50 nodes will not necessarily result in a 50-times improvement in performance. The engineering team needs to test and optimize a multinode configuration in much the same way as is done for a single-node configuration (a simple back-of-the-envelope model of this sublinear scaling is sketched after the scale-up steps below).

Stage 6: Solution Scale-Up
Once the implementation is built, tested, and deployed in the environment, the next step is to analyze the experiences of the users. Positive experiences among the consumers of the solution can lead to greater demand and, therefore, higher levels of success. A scalability analysis is needed in order to understand the capabilities, satisfy future demands, and support future users technically. The recommended steps are the following:
• Scale up the broader infrastructure: AI success requires examination of every link in the inference chain and a review of existing technology platforms, networks, and storage to increase the amount of data available and to improve timeliness and latency. This will minimize the potential for future bottlenecks while maximizing the value that can be derived from the data sources.
• Scale out to other business scenarios: the solution may have applications in other parts of the business. For instance, a predictive maintenance solution may have been deployed for one area of the manufacturing operation and now needs to be broadened. The goal here is to adopt a portfolio approach to manage the extension of solutions across a more extensive user base.
• Plan for management and operations: by their nature, many AI use cases require the systems to perform inference in real time rather than offline or in batch mode. Also, models may need to be retrained and updated over time.

Fig. 17.10 Performance optimization. Hierarchical parallelism combines coarse-grained, multi-node domain decomposition with fine-grained parallelism within a node (multi-level domain decomposition, e.g., across layers, and data decomposition, i.e., layer parallelism). Scaling depends on improved load balancing and on reducing synchronization and all-to-all communication events; within a node, all cores are utilized (OpenMP, MPI, TBB, etc.), code is vectorized for SIMD with unit-strided access per SIMD lane, high vector efficiency, and data alignment, and memory/cache are used efficiently (blocking, data reuse, prefetching, careful memory allocation).
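The sublinear scaling mentioned in Stage 5 can be reasoned about with a simple Amdahl's-law-style model: if a fraction of each training or inference step stays serial (synchronization, all-to-all communication, I/O), the achievable speedup on N nodes is bounded. The fractions used below are illustrative assumptions, not measurements.

def amdahl_speedup(nodes: int, serial_fraction: float) -> float:
    """Upper bound on speedup when `serial_fraction` of the work cannot be parallelized."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / nodes)

# Even a modest 5% serial fraction caps a 50-node cluster far below a 50x speedup.
for serial in (0.01, 0.05, 0.10):
    s = amdahl_speedup(50, serial)
    print(f"serial fraction {serial:.0%}: ~{s:.1f}x speedup on 50 nodes "
          f"({s / 50:.0%} scaling efficiency)")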
17.3.9 Reliability and Safety

The primary intent of a functionally safe cyber-physical automation system is to prevent the risk of death or injury as a result of systematic, random, or common-cause failures in safety-relevant systems. So-called random failures are caused by a malfunction of the safety system's parts or components, in contrast to systematic failures, which are the result of a wrong or inadequate specification of a safety function. The motivation for deploying functional safety systems is to ensure safe operation as well as safe behavior in cases of failure. Because of their great impact on the environment (humans, infrastructure, and ecosystem), cyber-physical systems require proof in the form of safety evidence [8]. The correctness of the solution needs to be verified from a systemic standpoint, because testing only the automation behavior may miss important
issues. Consequently, the need for a standard is imperative to evaluate the components and their execution as a whole. The International Electrotechnical Commission’s standard, IEC 61508: “Functional safety of electrical/electronic/programmable electronic safety-related systems” [9] is understood as the standard for designing safety systems for electrical, electronic, and programmable electronic (E/E/PE) equipment. This standard was developed in the mid-1980s and has been revised several times to cover the technical advances in various industries. Also, derivative standards have been developed for specific markets and applications that prescribe the particular requirements on functional safety systems in these industry applications. Example applications include process automation (IEC 61511), machine automation (IEC 62061), transportation (railway EN 50128), medical (IEC 62304), automotive (ISO 26262), power generation, distribution, and transportation. Cyber-physical automation solutions must follow internationally accepted safety standards. Solutions’ designs first need approval from certification bodies or trade associations that testify that these designs comply with their appropriate safety standards and legislations. To provide the certification body with a complete picture of the system concerning safety, the owners or operators must have all available information and documentation for the components used to build the cyber-physical automation. Since the HW and SW developers typically source automation components, such as sensors, actuators, and processing units (like PLCs), from automation
component manufacturers, these components consequently must also be designed to the applicable standards. This also means that the component suppliers must provide the relevant safety documentation (safety manuals) to the solution designers. The functional safety consideration of a system, therefore, covers all aspects of components, both software and hardware, that have high integrity, self-test mechanisms, and a fail-safe state. In the end, the designers will reach a compound or average certainty level of failure probability (the probability of a failure per hour of operation that introduces "danger") and risk reduction factor (RRF) for a design, which is categorized in Table 17.2.

Table 17.2 Safety integrity levels
SIL   PFH                        PFH (power)      RRF
1     0.00001–0.000001           10^-5 – 10^-6    100,000–1,000,000
2     0.000001–0.0000001         10^-6 – 10^-7    1,000,000–10,000,000
3     0.0000001–0.00000001       10^-7 – 10^-8    10,000,000–100,000,000
4     0.00000001–0.000000001     10^-8 – 10^-9    100,000,000–1,000,000,000

IEC 61508 describes a methodology to design, deploy, operate, and decommission safety devices. Every company that follows this standard can testify to and document the fitness for safety of its designs or machines and sell the device as IEC 61508 compliant. Cyber-physical automation providers and factory owners are used to relying on certifications from third-party certification bodies, which state that a neutral party has reviewed the component's development processes and confirms its compliance with safety concepts, that the specifications meet the standards, and that its failure calculations are correct. Automation component manufacturers are particularly affected when designing functional safety devices because these devices are considered to be sophisticated electrical and electronic devices. For these components, such as software, hardware, tools, mechanical parts, etc., safety has implications for the complete system design and across all stages of life, from concept to inception to decommissioning. Safety can only be assured by looking at all aspects of a system, including both hardware and software. As an example, software alone cannot provide safety assurance, as its correct operation depends on the system hardware; similarly, hardware alone cannot satisfy the safety requirements. Therefore, an integrated approach to safety is essential. Also, functional safety designs are not guaranteed simply by good design. The correct or incorrect behavior of any system is influenced by many factors, including failure rates, production, programming, and use. To simplify and speed up the safety certification process, Intel® worked closely with TÜV Rheinland (a product certification company) to provide an IEC 61508 certified Functional Safety Data Package, which includes:
• Safety manuals for Intel® FPGAs and the Intel® Quartus® Prime Design Software
• Diagnostic and standard intellectual property (IP) such as the Nios® II processor
• FPGA design flows, including a safety separation design flow
• Development tools, including the Intel® Quartus® Prime Design Software

Having immediate access to qualified semiconductor data, intellectual property (IP), development flows, and design tools from a vendor like Intel® can help significantly shorten the overall project timeline, by one and a half to two years [10]. By reusing a system concept for a drive that followed a preapproved implementation and by following a qualified design methodology, a qualified design flow, tools/IP, and typical application development can be significantly accelerated. The certification is also accelerated, as reliability data for the components is immediately available and provided in a format that can be easily integrated into the overall documentation for the safety qualification. Designers can take advantage of flexible design integration using FPGAs for both safety and system design. As the safety aspect is considered a critical requirement for the application, it is integrated into the overall concept and can be realized while meeting cost and time-to-implementation targets.
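The SIL bands in Table 17.2 can be turned into a small helper for sanity-checking a design's failure-probability budget. The function below simply encodes the table for high-demand/continuous-mode PFH values and is not a substitute for the full IEC 61508 assessment process.

from typing import Optional

def sil_from_pfh(pfh: float) -> Optional[int]:
    """Return the safety integrity level for a probability of dangerous failure per hour,
    following the bands of Table 17.2 (high-demand/continuous mode). None = out of range."""
    bands = [
        (1, 1e-6, 1e-5),   # SIL 1: 10^-6 <= PFH < 10^-5
        (2, 1e-7, 1e-6),   # SIL 2
        (3, 1e-8, 1e-7),   # SIL 3
        (4, 1e-9, 1e-8),   # SIL 4
    ]
    for sil, low, high in bands:
        if low <= pfh < high:
            return sil
    return None

def risk_reduction_factor(pfh: float) -> float:
    """RRF is the reciprocal of the failure probability per hour, matching the table's last column."""
    return 1.0 / pfh

print(sil_from_pfh(3e-8), f"{risk_reduction_factor(3e-8):,.0f}")  # 3 33,333,333
print(sil_from_pfh(5e-5))                                          # None: worse than SIL 1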
17.3.10 IT/OT Integration

With the rise of Industry 4.0 and cyber-physical automation, the integration between operational technology (OT) systems and information technology (IT) systems is becoming a necessity. As discussed in the previous points, edge computing instrumented an environment that previously belonged exclusively to OT. Table 17.3 describes the differences between IT and OT infrastructures.

Table 17.3 IT/OT ownership: infrastructure
• Service delivery: IT can provide IaaS, PaaS, and SaaS, usually in a hybrid cloud architecture. OT equipment usually works as an individual unit or in a closed vendor network, with limited or no access to the Internet.
• Confidentiality: in IT, data is protected and usually confidential; the privacy vector enters the equation, and IT has the tools to implement the right level of access and protection. In OT, operational data is highly protected and usually treated as top secret.
• Remote access: IT infrastructure is in most cases designed to have some level of access to the Internet with the appropriate layers of security. OT equipment was usually not designed for remote access, which makes exposing it to the Internet insecure.
• Data integration and insights: IT has the infrastructure to collect, integrate, and analyze data coming from the entire company and to generate useful information for decision-makers. OT usually keeps each solution isolated (by design) and does not have the infrastructure to integrate and analyze the data generated; each solution may have its own mechanism to analyze the data it generates but usually does not offer an interface to integrate with other solutions.

The challenges that organizations face when implementing cyber-physical automation are:
• The current data management infrastructures of IT and OT are separated, which generates redundancy and fragmentation in the infrastructure and increases the total cost of ownership.
• Operational requirements are not met by all IT technologies, which creates adoption barriers at the lower levels of the OT models.
• Legacy OT infrastructure was not designed to be exposed to IT environments without compromising security; the same is true for data access management.
• OT infrastructure is usually proprietary and does not expose a standard interface for integration.

As cyber-physical solutions were implemented, many organizations moved their enterprise-grade IT out of the data center, putting it on the edge. With the need for workload consolidation, OT functions are integrated into the HW (edge devices and/or edge servers) previously managed exclusively by IT. In sum, several IT components, such as systems management, scalable storage, and high-performance processors, are combined with OT components such as control systems, data acquisition systems, and industrial networks. Cyber-physical workload consolidation integrates old legacy systems with multipurpose local edge compute devices by using virtualization/containerization (see Fig. 17.11). There is research [12] demonstrating the advantages of running OT functionalities in a VM, such as virtual RTUs [13] or virtual PLCs [14]. As an example of cyber-physical automation in industry, Fig. 17.11 presents the evolution of the OT infrastructure in a printed circuit board (PCB) manufacturing space. In this evolution, many of the
standard OT functions are executed in one single device, following the architecture presented in Sect. 17.3.4. Additionally, industrial control systems can benefit from many decades of security research and development by leveraging, as an example, real-time monitoring for intrusion detection [14]. The expected adoption of workload consolidation will increase the level of automation in the provisioning and delivery of IT/OT services. Each of the constituent infrastructures brings a unique automation capability into the mix.

Fig. 17.11 OT evolution. Machines on the manufacturing floor (chip inserter, reflow oven, punch printer) connect through edge devices running soft or virtual PLCs to an edge server hosting consolidated workloads in VMs: a virtualized PLC, digital twin monitoring, and predictive maintenance.
17.4 Conclusion, Emerging Trends, and Challenges
The first wave of cyber-physical deployments proved that digital transformation was feasible in many organizations. These efforts generated automation that improved safety and efficiency in numerous application domains; examples include autonomous drones, self-driving cars, air quality monitoring systems for smart cities, home health care devices, and safety systems for humans in manufacturing. At the same time, these implementations generated a significant issue for organizations and consumers: technology fragmentation. The lack of standardization created silos in the implementations; many solutions use the same HW/SW components (duplicated infrastructure) that cannot be integrated and, in extreme cases, generate incompatibilities. Of even greater importance, they do not have a holistic approach to security, creating vulnerabilities and threats to the system.

With the second wave of digital transformation, organizations focused on generating cyber-physical automation systems that can be integrated with their infrastructure (positioning every deployment from a systemic standpoint). Companies worked on controlling their data through any permutation of on-premises or cloud data integration, instead of just sending data into a cloud and then having no choice but to pay to access and use it throughout their applications. It is essential to decouple industry-specific domain knowledge from the underlying technology: many cyber-physical automation platform providers tout the ability to do, for example, predictive maintenance, yet their developers do not have the necessary years of hands-on experience and know-how to define the data failure patterns of any particular type of machine. Many companies, like Minsait [3] and Intel, are joining efforts to standardize the implementation of cyber-physical systems. The common ground of all these efforts is to decouple infrastructure from applications, creating a unified open platform to host all the automation use cases. The cyber-physical unified framework allows the disassociation of the infrastructure from automation use cases by using the workload consolidation principles (virtual machine
and containerization). The execution of this framework is implemented as close as possible to the point of data creation. Additionally, it defines a clear separation between the transport protocols, application frameworks, virtualization/containerization engines, operating systems, and HW firmware.

Holistic security is increasingly crucial in light of developing cyber-physical automation. The integration of automation solutions, from a security perspective, is complicated by the enormous numbers of noncomputing devices, edge devices, and edge servers being outfitted with networking and data transfer capabilities. Because these systems often communicate through external parties (public or hybrid clouds) and/or interface with other networks, they and their extended environments must be secured. The hardware-based root of trust, implemented with the trusted platform module (TPM) and supported in many software layers, has been demonstrated to be the solution for these security issues. The foundational cyber-physical automation platform presented in this document provides an enterprise-grade architecture to enable digital transformation at scale, with emphasis on meeting the security requirements for the protection of sensitive data at rest, in transit, and in use, and for the trustworthiness of the overall system. ExxonMobil [1] and Intel have contributed, with key ecosystem players in the IoT technology space, to paving the road for better and more secure cyber-physical implementations.

Based on the increasing availability and commoditization of high-bandwidth networks (such as 5G), architectures based on workload consolidation will allow the realization of ubiquitous cyber-physical automation. The execution of the automation will not be hosted in one particular dedicated computation node of the infrastructure; instead, the workload will be divided among many different nodes of the
infrastructure, depending on dynamic requirements and user consumption. The subsequent development of high-speed networks will permit the cyber-physical components (sensors and actuators) to reduce their hardware and software footprint; consequently, their computation power and memory size will be remarkably reduced, and the cyber-physical components will rely on the next processor in the infrastructure chain to compute the data. These efforts will result in the creation of thin cyber-physical components (TCCs): lightweight devices optimized for establishing a remote connection with an autonomous workload consolidation infrastructure. The processing will be done in the infrastructure without a clear division between edge computing and cloud/core computing; the cyber-physical software will execute in either place indistinctly.

AI and computer vision technologies need to be democratized in order to enable subject matter experts to apply their knowledge to create cyber-physical automation that delivers excellent outcomes with easy-to-use hardware and software applications. Workloads based on artificial intelligence and computer vision will be commonly adopted, and the inference and learning processes will use the advantages of a cyber-physical unified framework to gather all the integrated information of the organization.

The cyber-physical unified framework provides the disciplines and guidance that organizations need to create and efficiently scale end-to-end automation solutions. By using this framework to consolidate and orchestrate workloads at the edge, rather than continuing to support an increasingly complex and costly array of disparate cyber-physical solutions, organizations can improve security, performance, and reliability while reducing capital expenses and lowering their total cost of ownership. This platform has been proven by ExxonMobil
[1] and Georgia Pacific [2] to be the right approach to deliver cyber-physical automation in the industry. Edge computing and workload consolidation represent a massive opportunity for organizations to move digitalization to a new global scale and accelerate the generation of impact on both their business and operations. However, there are still multiple technological and operational challenges, such as the generation and management of a broad ecosystem of partners, the generalization and dissemination of the concept, and its transfer to use cases and projects with a quantifiable impact on the company’s profit and loss.
References
1. Martinez Spessot, C., Hedge, D., Carranza, M., Nuyen, D., et al.: Securing the IoT Edge. https://software.intel.com/content/www/us/en/develop/articles/securing-the-iot-edge.html
2. Martinez Spessot, C., Carranza, M.E., et al.: Introducing the Intel® IoT Unified Edge Framework using workload consolidation to efficiently scale Industrial IoT solutions. https://www.intel.com/content/www/us/en/internet-of-things/industrial-iot/unified-edge.html
3. Martinez Spessot, C., Ortega de Mues, M., Carranza, M., Lang, J., et al.: Creating an effective and scalable IoT infrastructure by introducing Edge Workload Consolidation (eWLC). https://software.intel.com/en-us/articles/edge-workload-consolidation-eWLC
4. Xiao, Y., Jia, Y., Liu, C., Cheng, X., Yu, J., Weifeng, L.: Edge computing security: state of the art and challenges. IEEE, June 2019. https://doi.org/10.1109/JPROC.2019.2918437. https://www.researchgate.net/publication/333883752
5. Martinez Spessot, C., Guzman, M., Oliver, N., Carranza, M., et al.: Scaling edge inference deployments on enterprise IoT implementations. https://www.wwt.com/white-paper/scaling-edge-inference-deployments-on-enterprise-iot-implementations
6. Hunt, G., Letey, G., Nightingale, E.B.: The Seven Properties of Highly Secure Devices. Microsoft Research NExT Operating Systems Technologies Group (2018)
7. Steps to an AI Proof of Concept. Intel® Corporation. https://www.intel.com/content/dam/www/public/us/en/documents/whitepapers/ai-poc-whitepaper.pdf
8. Platzer, A.: Logical Foundations of Cyber-Physical Systems. Springer. ISBN 978-3-319-63588-0
9. IEC 61508:2010 CMV. https://www.iec.ch/functionalsafety/standards/
10. Qualified Functional Safety Data Package. Intel® Corporation. https://www.intel.com/content/dam/www/programmable/us/en/pdfs/literature/po/ss-functional-safety.pdf
11. Salsabeel, S., Zualkerman, I.: IoT for ubiquitous learning applications: current trends and future prospects. In: Ubiquitous Computing and Computing Security of IoT. Springer, Cham (2019)
12. Ansari, S., Castro, F., Weller, D., Babazadeh, D., Lehnhoff, S.: Towards virtualization of operational technology to enable large-scale system testing. In: IEEE EUROCON 2019 – 18th International Conference on Smart Technologies, Novi Sad, Serbia, 2019, pp. 1–5. https://doi.org/10.1109/EUROCON.2019.8861980
13. Top, P., et al.: Simulation of a RTU cyber attack on a transformer bank. In: 2017 IEEE Power & Energy Society General Meeting, Chicago, 2017, pp. 1–5
14. Adepu, S., et al.: Control behavior integrity for distributed cyber-physical systems. In: 2020 ACM/IEEE 11th International Conference on Cyber-Physical Systems (ICCPS), Sydney, 2020, pp. 30–40. https://doi.org/10.1109/ICCPS48487.2020.00011
15. Zhong, H., Nof, S.Y.: The dynamic lines of collaboration model: collaborative disruption response in cyber–physical systems. Comput. Ind. Eng. 87, 370–382 (2015). ISSN 0360-8352
16. Nof, S.Y.: Automation: what it means to us around the world. In: Springer Handbook of Automation, pp. 13–52. ISBN 978-3-540-78831-7
17. Nayak, A., Levalle, R.R., Lee, S., Nof, S.Y.: Resource sharing in cyber-physical systems: modelling framework and case studies. Int. J. Prod. Res. 54(23), 6969–6983 (2016). https://doi.org/10.1080/00207543.2016.1146419
18. Diván, M.J., Sánchez-Reynoso, M.L.: Metadata-based measurements transmission verified by a Merkle Tree. Knowl.-Based Syst. 219, 106871 (2021). ISSN 0950-7051
19. Diván, M.J., Sánchez-Reynoso, M.L.: A metadata and Z score-based load-shedding technique in IoT-based data collection systems. Int. J. Math. Eng. Manag. Sci. 23(1), 363–382. https://doi.org/10.33889/IJMEMS.2021.6.1.023
20. Gao, J., Liu, J., Rajan, B., Nori, R., et al.: SCADA communication and security issues. Secur. Commun. Netw. 7(1), 175–194 (2014). https://doi.org/10.1002/sec.698
21. Roman, R., Zhou, J., Lopez, J.: On the features and challenges of security and privacy in distributed internet of things. Comput. Netw. 57(10), 2266–2279 (2013). https://doi.org/10.1016/j.comnet.2012.12.018
22. Dahlmanns, M., Pennekamp, J., Fink, I.B., Schoolmann, B., Wehrle, K., Henze, M.: Transparent end-to-end security for publish/subscribe communication in cyber-physical systems. In: Proceedings of the 1st ACM Workshop on Secure and Trustworthy Cyber-Physical Systems (SaT-CPS '21)
23. Saito, K., Nishi, H.: Application protocol conversion corresponding to various IoT protocols. In: IECON 2020 – the 46th Annual Conference of the IEEE Industrial Electronics Society, Singapore, 2020, pp. 5219–5225. https://doi.org/10.1109/IECON43393.2020.9255101
24. Bortolini, M., Faccio, M., Galizia, F.G., Gamberi, M., Pilati, F.: Adaptive automation assembly systems in the industry 4.0 era: a reference framework and full-scale prototype. Appl. Sci. 11, 1256 (2021). https://doi.org/10.3390/app11031256
25. Wollschlaeger, M., Sauter, T., Jasperneite, J.: The future of industrial communication: automation networks in the era of the internet of things and industry 4.0. IEEE Ind. Electron. Mag. 11(1), 17–27 (2017). https://doi.org/10.1109/MIE.2017.2649104
26. Mühlbauer, N., Kirdan, E., Pahl, M.-O., Waedt, K.: Feature-based comparison of open source OPC-UA implementations. In: Reussner, R.H., Koziolek, A., Heinrich, R. (eds.) INFORMATIK 2020, pp. 367–377. Gesellschaft für Informatik, Bonn (2021). https://doi.org/10.18420/inf2020_34
27. Ungurean, I., Gaitan, N.C., Gaitan, V.G.: A middleware based architecture for the industrial internet of things. KSII Trans. Internet Inf. Syst. 10(7), 2874–2891 (2016)
28. Larsson, A.: Integrating Data Distribution Service DDS in an existing software architecture: exploring the effects of performance in scaling. Dissertation (2020). Retrieved from http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-172257
Cesar Martinez Spessot is the general manager of services and senior AI architect with Intel Corporation. He leads an organization chartered to guide Fortune 100 companies through their digital transformations, generating billion-dollar businesses. Cesar is a strategic technical advisor to C-level executives implementing automation as part of their core business. He holds several international patents in the digital transformation and AI domains. He previously worked as an AI and HPC researcher and professor at several universities. He holds MS degrees in information systems and computer science (UTN), an MBA, the PMI® PMP certification, and several executive certifications from MIT Sloan.
Collaborative Control and E-work Automation
18
Mohsen Moghaddam and Shimon Y. Nof
Contents
18.1 Background and Definitions
18.2 Theoretical Foundations for e-Work and CCT
18.2.1 e-Work
18.2.2 Integration and Communication
18.2.3 Distributed Decision Support
18.2.4 Active Middleware
18.3 Architectural Enablers for Collaborative e-Work
18.3.1 Internet-of-Things (Physical Architecture)
18.3.2 Internet-of-Services (Functional Architecture)
18.3.3 IoT-IoS Integration (Allocated Architecture)
18.4 Design Principles for Collaborative e-Work, Collaborative Automation, and CCT
18.4.1 Generic Framework
18.4.2 Design Principles
18.4.3 Emerging Thrusts
18.5 Conclusions and Challenges
18.6 Further Reading
References

Abstract
A major requirement for automation of complex and distributed industrial control systems is represented by collaborative e-Work, e-Business, and e-Service. The potential benefits, opportunities, and sustainability of emerging electronic activities, such as virtual manufacturing, e-Healthcare, e-Commerce, e-Production, e-Collaboration, e-Logistics, and other e-Activities, cannot be realized without the design of effective collaborative e-Work. In this chapter, the theoretical and technological foundations of collaborative e-Work and Collaborative Control Theory (CCT) are reviewed. The “four-wheels” of collaborative e-Work along with their respective e-Dimensions and their role in e-Business and e-Service are explained and illustrated. The architectural enablers for collaborative e-Work, which are based on recent developments in cyber technologies, Internet-of-Things (IoT), and Internet-of-Services (IoS), are discussed as blueprints for designing smart agents, functions and services, and their integration. The design principles of CCT for effectiveness in the design and operation of e-Work automation solutions are presented, along with case studies of e-Work, e-Manufacturing, e-Logistics, e-Business, and e-Service to enable readers to get a glimpse into the depth and breadth of ongoing efforts to revolutionize collaboration in such e-Systems. Challenges and emerging thrusts are discussed to stimulate future discoveries in CCT and e-Work automation.

Keywords
Collaborative Control Theory (CCT) · Collaborative automation · Collaborative intelligence · Cyber-physical systems · Internet-of-Things · Internet-of-Services

M. Moghaddam: Department of Mechanical and Industrial Engineering, Northeastern University, Boston, MA, USA, e-mail: [email protected]
S. Y. Nof: PRISM Center and School of Industrial Engineering, Purdue University, West Lafayette, IN, USA, e-mail: [email protected]
© Springer Nature Switzerland AG 2023. S. Y. Nof (ed.), Springer Handbook of Automation, Springer Handbooks, https://doi.org/10.1007/978-3-030-96729-1_18

18.1 Background and Definitions

Collaboration is a fundamental phenomenon inherent in every stage and every aspect of our lives. Increasingly, so is collaborative automation. We learn to collaborate with our parents and then our family and friends early in life. As goal-oriented beings, we are “pre-programmed” to communicate our needs and opinions and support, complement, and occasionally compete with other people to achieve our goals. Collaboration is not limited to humans and encompasses any biological or artificial system, where entities are bound to using shared information and resources and committing to shared responsibilities for a common agenda. Examples
of biological systems range from colonies of ants to flocks of birds, schools of fish, and packs of wolves, all depending upon each other's signals and actions to survive and succeed. Examples of artificial systems include collaboration among humans and machines (e.g., drivers-vehicles, pilots-aircraft), humans and computers (e.g., designers-CAD software, children-computer games), computers and computers (e.g., clients-servers), and machines and machines (e.g., multi-robot manufacturing cells).

Collaboration has always been around and has evolved along with our daily lives. With the growth and abundance of information and communication technologies (e.g., Internet, smartphones, social media, videoconferencing), we collaborate more frequently and easily. Yet, although there is no question about the necessity of collaboration, a key fundamental question remains to be answered: How can a better collaboration process be achieved and validated to produce better outcomes, and is there collaboration intelligence?

This chapter addresses these questions by explaining how collaborative e-Work (“e”: electronic, as opposed to manual), as the foundation of collaborative automation, such as e-Business, e-Service, e-Commerce, and other e-Activities, can be enabled and optimized, and how effective, efficient, and sustainable collaboration can be designed and guided through recent advances in Collaborative Control Theory (CCT) [1–4] (Fig. 18.1). The remainder of this section provides the background and fundamental definitions for e-Work and collaboration, followed by the motivation and objectives of this chapter.

e-Work plays a transformative role in enabling and augmenting collaboration among humans and agents:

As power fields, such as magnetic fields and gravitation, influence bodies to organize and stabilize, so does the sphere of computing and information technologies. It envelops us and influences us to organize our work systems in a different way, and purposefully, to stabilize work while effectively producing the desired outcomes [1].
The transformation of work through e-Work in turn influences and transforms its various aspects such as business, commerce, and service (Fig. 18.2):
• e-Business. It refers to the integration of the products, procedures, and services of an enterprise using the Internet and information and communication technologies. e-Business allows internal and external business transactions to be automated by electronic means, including internal business processes and external collaboration with suppliers, stakeholders, and business partners [5].
• e-Commerce. It refers to the integration of purchasing/sales transactions for products and services, including business-to-business and business-to-customer, using the Internet and information and communication technologies. Besides facilitating online purchases (e.g., Amazon, eBay, online travel agencies), e-Commerce
continues to play a disruptive role in the way companies interact with their customers and identify their needs to better tailor the design of their products, services, and systems to what customers desire. Google reports a more than 600% increase over the past 5 years in the amount of time people spend exploring others' experiences before making a decision on a product or a service [6]. Specifically, recent studies show that 91% of 18–34 year-olds trust e-Commerce reviews as much as personal recommendations and that e-Commerce platform reviews influence the purchase decisions of 93% of customers [7].
• e-Service. It refers to the provision of services by means of the Internet and information and communication technologies. The three elements of an e-Service system are the service provider, the service consumer, and the channels of provision. e-Service goes beyond service organizations and encompasses all enterprises, including manufacturers of goods that provide support throughout the life cycle of their products, either through direct communication with customers or through automated performance monitoring of smart products with built-in sensing and communication capabilities.

The activities enabled by e-Work are not limited to e-Business, e-Commerce, and e-Service; they also include e-Healthcare, e-Training, e-Logistics, v-Factories (“v”: virtual), and v-Enterprises, among others (Fig. 18.3). These e-Activities are enabled by information and communication technologies and augmented and optimized through collaboration algorithms and protocols that support the interactions between humans, computers, and machines. “e-Activities,” in this context, refer to the activities enabled by and executed through electronic information and communication systems, supported by the Internet.

The terms coordination, cooperation, and collaboration are often used interchangeably despite their subtle and key differences. Highlighting and understanding the distinctions between these terms and their implications are critical for the design of e-Work systems:
• Coordination. The dictionary definitions for coordination are “the process of organizing people or groups so that they work together properly and well” and “the harmonious functioning of parts for effective results” (Merriam-Webster). Coordination in the context of e-Work involves the use of information and communication technologies to reach mutual benefits among entities and enable them to work harmoniously.
• Cooperation. The dictionary definitions for cooperation are “the actions of someone who is being helpful by doing what is wanted or asked for: common effort” and “association of persons for common benefit” (Merriam-Webster). Cooperation in the context of e-Work involves all aspects of coordination, in addition to a resource-sharing
Fig. 18.1 (a) Collaboration: humans and other entities use cyber technologies to collaborate. (b) Optimized collaboration: humans and other entities use cyber technologies to collaborate, while the collaboration process itself is augmented by cyber collaboration algorithms and protocols. (After [4])
dimension to support the achievement of individual and common goals. Cooperation requires the division of labor among the participating entities, and its outcome is the aggregate result of their individual efforts.
• Collaboration. The dictionary definitions for collaboration are “to work jointly with others or together, especially in an intellectual endeavor” and “to cooperate with an agency or instrumentality with which one is not immediately connected” (Merriam-Webster). Collaboration is the most complex among these processes: all involved entities must share information, resources, and responsibilities to jointly plan, execute, and assess actions and create value that can exceed the aggregate result of individual efforts.
Collaborative control through e-Work and CCT thus refers to the joint planning, execution, and assessment of the actions and decisions taken by distributed and autonomous agents, enabled by effective sharing of information, resources, and responsibilities. It augments the performance of individuals and organizations of humans, machines, robots, software, and services and enables better results with physical tools and infrastructure by cyber-support for collaborative intelligence (Fig. 18.4). Collaborative intelligence, in general, means the combined intelligence from multiple sources of intelligence, for instance, multiple distributed agents and knowledge sources. It has also been defined as:

a measure of an agent's capability in perceiving and comprehending new information, sharing required resources, information, and responsibilities with other peers to resolve new local and global problems in a dynamic environment. In brief, collaborative intelligence is a combined measure of an agent's ‘collaboration-ability’ and ‘adaptability’ in dealing with emergence. [8]
In advanced collaborative automation systems, hubs of collaborative intelligence, called HUB-CIs, enable the exchange and streamlining of combined intelligence (signals, information, know-how, etc.) from multiple agents, systems, and sources, thus further improving timely decisions about collaborative tasks and actions [89]. What makes the control of such distributed and collaborative systems challenging, however, is the fact that:
1. The level of collaboration between agents and with each agent may vary, and agents may even compete for limited resources (e.g., space, equipment, time, knowledge) in certain situations despite being members of a collaborative system [1, 2, 4]. Otherwise, the system could always be controlled as a single entity [10].
2. Collaboration between the agents must be optimized based on the CCT design principles, algorithms, protocols, and criteria (Table 18.1) to ensure the best matching of collaborating entities; minimize unnecessary, costly, and potentially harmful collaboration [11]; allow resilience in dealing with dynamic and unexpected changes in the environment [12]; and enable effective and proactive handling of conflicts and disruptions [13].

This chapter first provides an overview of the theoretical foundations for e-Work and CCT [3, 14] in Sect. 18.2, including the fundamentals of e-Work, integration and communication technologies, distributed decision support systems, and active middleware technologies.
Fig. 18.2 e-Work as the foundation for e-Business, e-Service, e-Commerce, and other e-Activities. (After [1])
Fig. 18.3 The scope of e-Work as the enabler of e-Activities. (After [1])
Section 18.3 discusses the architectural and technological enablers of collaborative e-Work, including the emerging cyber technologies, Internet-of-Things (IoT) and Internet-of-Services (IoS), and the blueprints for describing smart objects, functions, and their allocation for collaborative control in e-Work systems. Section 18.4 discusses the CCT design principles, algorithms, and protocols [4] addressing the aforementioned challenges associated with controlling and optimizing the interactions and decision-making processes of distributed and autonomous agents. Section 18.5
provides concluding remarks and directions for future research and discoveries in CCT and e-Work automation.
18.2 Theoretical Foundations for e-Work and CCT
Collaborative e-Work, also known as cyber-collaborative work, is a key enabler for the collaborative control of agents through CCT [3, 4].
Fig. 18.4 The collaborative factory of the future [9], where humans, robots, machines, products, processes, and systems with augmented e-Work capabilities harmoniously share information, resources, and responsibilities and optimize their collaborative workflow actions and decisions through CCT principles, algorithms, and protocols. (Extended illustration from [89])
Table 18.1 Criteria for assessment of collaborative e-Work systems, augmented and optimized through CCT [4]
Agility: The ability of a network of agents to respond and adapt to changes in real time
Autonomy: The level of delegation of authority, task assignments, and decentralization in a distributed system
Dependability: The probability of a task being successfully executed, which requires availability (readiness of the system for service), reliability (continuity of service), sustainability (avoiding catastrophic consequences), and integrity (avoiding improper alterations of information)
Integration-ability: The ability to integrate data from distributed agents, thus increasing its usefulness
Reachability: The feasibility and quality of interconnections and interactions between individual agents in a distributed system
Resilience: The ability to survive unforeseen circumstances, risks, disruptions, and high impact events
Scalability: The ability of a process, system, network, or organization to handle the increasing amount of work and accommodate that growth
Viability: The trade-off between the cost of operating and sustaining a distributed system of agents and the reward gained from their service
CCT has been developed over the last three decades to augment and optimize collaborations in distributed e-Work, e-Business, and e-Service systems. The “e” in collaborative e-Work is supported by the continuous creation of knowledge across the four dimensions described in this section (Fig. 18.5).
18.2.1 e-Work

The concepts, models, and theories that support collaboration among agents and the fundamental building-blocks of e-Work include:
• Reference models and architectures. The blueprints that describe the agents, their relationships, and interaction
rules, intended to augment (1) human abilities and capabilities at work, (2) organizational abilities to accomplish their goals, and (3) the abilities of all organizational agents (e.g., humans, computers, robots) for collaboration. Section 18.3 provides an overview of the emerging industry reference models and architectures for e-Work and collaborative control.
• Agents. Independently operating and executing programs, capable of responding autonomously to expected or unexpected events. Activities, resources, and tasks in an e-Work system can be automated and integrated by agents. An agent is an independent and autonomous entity with agency—the ability to execute a set of functions in an environment shared with other agents and governed by a set of rules (a minimal illustrative sketch of this abstraction follows this list).
Fig. 18.5 The foundations and dimensions of collaborative e-Work. (Adapted from [3])
An agent must be independent, sociable, and able to respond to observed environment states and take initiative.
• Protocols. Rules and platforms for autonomous agents to interact, make decisions, and fulfill their individual and collective goals. The protocols of interest in e-Work are those that function at the application level and determine workflow control. Protocols at this level enable effective collaboration by coordinating information exchange and decisions such as resource allocation among production tasks. Section 18.4.1 discusses a generic collaboration control protocol for formalizing, structuring, and administering agent interactions.
• Workflows. The systematic assignment and processing of activities in dynamic, distributed environments by enabling process scalability, availability, and performance reliability. The dynamics of e-Work systems lead to different entities continuously joining, leaving, or remaining (reconfiguring) in a collaborative network [15, 16]. Regardless of the decision of a given entity to join/leave/remain, all scenarios require that existing data resources, processes, and services be integrated via workflow models.
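As a concrete, greatly simplified illustration of the agent abstraction above (the sketch referenced in the Agents item), the following Python fragment shows an autonomous entity that registers response rules and reacts to observed events. The class, event kinds, and handler names are invented for illustration and are not part of any CCT reference implementation.

from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Event:
    kind: str        # e.g., "task_arrival", "machine_fault" (illustrative)
    payload: dict

@dataclass
class Agent:
    """A minimal autonomous agent: independent, sociable, and responsive."""
    name: str
    handlers: Dict[str, Callable[["Agent", Event], None]] = field(default_factory=dict)
    outbox: List[str] = field(default_factory=list)

    def on(self, kind: str, handler: Callable[["Agent", Event], None]) -> None:
        self.handlers[kind] = handler          # register an interaction rule

    def perceive(self, event: Event) -> None:
        handler = self.handlers.get(event.kind)
        if handler:
            handler(self, event)               # respond autonomously
        else:
            self.outbox.append(f"{self.name}: no rule for {event.kind}")

# Usage: a robot agent that takes initiative when a task arrives
robot = Agent("robot-1")
robot.on("task_arrival", lambda a, e: a.outbox.append(f"{a.name} accepts task {e.payload['id']}"))
robot.perceive(Event("task_arrival", {"id": "T-42"}))
robot.perceive(Event("machine_fault", {"code": 7}))
print(robot.outbox)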
18.2.2 Integration and Communication

The technologies that support seamless communication and information sharing among agents include:
• Communication protocols. The system of rules and languages that allow heterogeneous agents to exchange information. They provide a platform for interaction and collaboration among autonomous agents in order to achieve higher productivity. Emerging smart CPS in industrial control systems (e.g., the I4.0 components) are connected via the IoT and communicate through standard protocols such as OPC-UA [17], AutomationML [18], and MTConnect [19] (a generic message-exchange sketch follows this list).
• Smart objects. Embedded sensing and communication technologies that allow tracing the digital thread of a product or a system. The I4.0 components are a formalism of smart objects (e.g., hardware, software, products, ideas, or concepts) that enable identifiability and communication capabilities through OPC-UA, Fieldbus, or TCP/IP. Smart products/equipment with I4.0 component capabilities can also communicate their status to the manufacturer and other stakeholders throughout their life cycle.
• Human-Machine Interaction (HMI). The technologies that enable interactions between humans and machines, from simple interfaces to augmented, virtual, and mixed reality. HMI enables the development of systems, platforms, and interfaces that support humans in their roles as learners, workers, and researchers in computer environments. HMI technologies are gaining increasing importance because social distancing and tele-work may become necessary in the post-COVID-19 economy to avoid the detrimental effects of future epidemics on public health.
Fig. 18.6 Global networking through horizontal integration of supply chain, external and internal logistics, manufacturing processes, shipping and delivery, and customer service. (Courtesy of Festo)
• Connected value chains. The digital channels that allow the integration of insights and analytics throughout a product's or system's life cycle. They enable the creation of models, architectures, and methodologies that integrate e-Work processes beyond the boundaries of a single enterprise. For example, the RAMI4.0 standard refers to them as horizontal integration across value chains, enabled through communication, data management, intelligent logistics, secure IT, and cloud services (as illustrated in Fig. 18.6).
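To make the idea of heterogeneous agents exchanging self-describing status information more tangible, the sketch below (referenced in the Communication protocols item) encodes and decodes a smart object's status as a JSON message using only the Python standard library. It is a generic stand-in for the role played by standard protocols such as OPC-UA or MTConnect, not an implementation of those specifications; the asset identifier and measurement names are invented.

import json
from datetime import datetime, timezone

def encode_status(asset_id: str, state: str, measurements: dict) -> bytes:
    """Serialize a smart object's status into a self-describing JSON message."""
    message = {
        "asset_id": asset_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "state": state,                   # e.g., "RUNNING", "IDLE", "FAULT"
        "measurements": measurements,     # unit-annotated readings
    }
    return json.dumps(message).encode("utf-8")

def decode_status(raw: bytes) -> dict:
    """Any agent (MES, scheduler, digital twin) can parse the same message."""
    return json.loads(raw.decode("utf-8"))

# Usage: a (hypothetical) milling machine reports its spindle load
raw = encode_status(
    "mill-07", "RUNNING",
    {"spindle_load": {"value": 62.5, "unit": "%"},
     "temperature": {"value": 41.2, "unit": "degC"}},
)
print(decode_status(raw)["measurements"]["spindle_load"])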
18.2.3 Distributed Decision Support

The theories, models, and technologies that enable collaborative decision-making by agents, including:
• Decision models. The generic knowledge representation languages with normative status and computational properties of decision-modeling formalisms and algorithms. Decision models integrate computer tools and methodologies to provide more effective and better quality decisions.
• Distributed control systems. Computerized control systems for industrial processes with no centralized or supervisory control function that empower systems with a greater degree of autonomy by enabling all agents to negotiate for the assignment/allocation of tasks and their respective individual and group reward (incentives). What differentiates distributed control from centralized control is that the level of collaboration between the agents may vary, and agents may choose to compete in certain situations despite being members of a collaborative system. Section 18.4 discusses the design principles and analytical models for distributed control in collaborative e-Work systems.
• Collaborative decision-making. The methods and frameworks that support the collaborative actions of autonomous agents to avoid conflicts and achieve harmonized decisions. The distributed decisions in multiagent systems may have different relationships in terms of hierarchy, from fully hierarchical control, where the decision of one agent is fully controlled by higher-level agents, to fully heterarchical, where no hierarchy can be identified and the decision-making relationship graph of the agents is strongly connected [20].

18.2.4 Active Middleware

The platforms and technologies that enable interoperability among distributed, heterogeneous agents, which include:
• Middleware technology. A layer of software residing between the application and network layers of heterogeneous platforms that enables interoperable activities. Middleware is a layer of services or applications that provides the connectivity between the operating systems, network layers, and the application layer. Active middleware consists of six major components classified into two categories (Fig. 18.7):
– Active: (1) multi-agent systems (enhance proactivity and adaptability), (2) workflow management systems (integrate and automate process execution), and (3)
M. Moghaddam and S. Y. Nof
a)
b)
c) Active middleware
Middleware technology e.g., CORBA, DCOM/COM
Intranet Distributed ERP databases
Internet
Decision support systems
Workflow Multi-agent mgmt. systems system
Databases Protocols
Modeling tools
User interface
Fig. 18.7 The active middleware architecture. Supports (a) the formulation, description, decomposition, and allocation of problems; (b) the synthesis of the actions and decisions of distributed and autonomous
agents; (c) interoperability among applications, conflict resolution, and group decision-making. (Courtesy of IBM)
protocols (provide platforms for the agents to interact and automate the workflow). – Supportive: (4) decision support systems, (5) modeling tools, and (6) databases. These components support the active components through providing information, rules, and specifications. • Cloud computing. The on-demand accessibility of computing and storage services provided by public or private platforms. The National Institute of Standards and Technology (NIST) defines cloud computing as “a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction” [21]. • Knowledge-based systems. Systems that provide the capabilities to mine, construct, and extract information from a variety of sources and transform it into knowledge capital. Distributed knowledge systems are a collection of autonomous knowledge-based objects, also referred to as knowledge agents, which interact with each other to evolve the knowledge needed to improve their individual and collective decisions.
impact. To enable these changes, new reference models and architectures are required to allow cyber collaboration within complex networks of smart objects and services, referred to as IoT (Internet-of-Things) and IoS (Internet-of-Services), respectively. The vision is to enable seamless connectivity and interoperability between isolated systems and integrate the computational and physical elements of industrial machines and equipment, bridging the gap between the “cyber” and “physical” worlds [4]. Cyber technologies enable the integration of physical processes with computation through embedded computers and feedback control loops [22, 23]. Building on these capabilities, IoT and IoS offer access to a full-fledged Internet of smart CPS objects [24], ranging from machines to systems, products, and humans. The convergence of industrial control systems with the CPS and IoT technologies [25] creates new possibilities for more dependable, resilient, and intelligent industrial automation. The emerging reference models and architectures such as the Industrial Internet Reference Architecture (United States), Reference Architecture Model for Industry 4.0 (Germany), Alliance Industrie du Futur (France), Piano Impresa 4.0 (Italy), Industrial Value-Chain Initiative (Japan), and Made in China 2025 are some pioneering international efforts for building next-generation industrial control systems. These architectures aim to reimagine the hierarchical Purdue Enterprise Reference Architecture (PERA) [26], or the automation hierarchy [27], as networks of cyber-augmented agents that enable adaptive control of industrial processes through advanced information and communication technologies [28–30]. The control capabilities envisioned by these architectures align with the traditional control paradigms such as holonic systems [31–34] introduced in the late 1990s.
18.3
Architectural Enablers for Collaborative e-Work
The evolution of the industry has always been driven by new technology that facilitates improvements such as increased throughput, lower costs, and reduced environmental
18
Collaborative Control and E-work Automation
The fundamental challenge, as elaborated in Sect. 18.1, is to enable these IoT and IoS agents to autonomously function as a system and fulfill their individual and collective goals. As a prerequisite for designing, optimizing, and augmenting a collaborative system through CCT, a reference architecture is defined as a “blueprint” that provides current or future descriptions of a domain composed of components and their interconnections, actions or activities those components perform, and the rules or constraints for those activities. [35]
In the context of e-Work, a reference architecture is comprised of three fundamental architectural elements, including the following [36]: (1) the physical architecture describes the components, interconnections, and activities associated with the IoT; (2) the functional architecture describes the components, interconnections, and activities associated with the IoS; and (3) the allocated architecture integrates the physical and functional architectures (i.e., IoT-IoS integration). This section provides an overview of IoT and IoS architectures and discusses the integration and optimization challenges addressed by the CCT design principles, algorithms, and protocols (described in Sect. 18.4).
18.3.1 Internet-of-Things (Physical Architecture) Collaborative e-Work is based on inter-networking of agents, or in the IoT terminology, “things” (e.g., sensors, computing devices, robots, machines, systems) supporting a wide range of interaction, communication, and computing services. These agents are meant to perform various actions or activities following certain rules and constraints. ISA95 [27], the international standard for enterprise control system integration based on PERA [26], defines the physical architecture of an industrial system as a hierarchical model starting from top-level components such as enterprise and sites and progressing down to areas, work centers, work units, equipment, and physical processes (Fig. 18.8a). ISA-95 suggests a monolithic and hierarchical model for the systematic representation of physical architectures of an industrial control system. The complex and dynamic nature of modern industrial control systems, however, calls for a higher “agility quotient” [37]: the ability to effectively accommodate time pressure, recover from errors/conflicts, handle inadequate options, invent innovative solutions, handle disruptions, and self-configure (see Table 18.1, the collaborative e-Work criteria). These requirements in turn call for a modular design of the physical architecture—a transition from the automation hierarchy to a network of distributed, self-contained, and autonomous entities with integrated local collaborative autonomy and global orchestration.
413
RAMI4.0 formalizes the physical objects and resources as I4.0 components (Fig. 18.8b), which may refer to a group of hardware, software, products, ideas, or concepts, configured on a dynamic basis to carry out different functions on demand. It defines a set of characteristics for a physical to be considered an I4.0 component, including identifiability, communication ability (e.g., via OPC-UA [17]), compliant services, states and semantics, virtual description, safety, security, quality of service, nestability, and separability (DIN 2016). RAMI4.0 proposes an administration shell as the logical unit of an I4.0 component responsible for virtual representation, interaction with the system, and resource management. Built upon the hierarchical model of PERA, an I4.0 component may represent a smart product, a field device to a controller, a station, or a work center, as long as they fulfill two key requirements: (1) communication capability, the ability of digital communication (e.g., via Fieldbus or TCP/IP), and (2) presentation, the “publicity” of a component in the information system [38].
18.3.2 Internet-of-Services (Functional Architecture) The purpose of any smart device, controller, machine, robot, software, or simply any other agent is to accomplish a set of functions. Thus, parallel to the physical architecture of an e-Work system characterized by IoT, there must be a functional architecture characterized as an IoS (Internetof-Services). In this context, a service refers to: a logical representation of a set of activities that has specified outcomes, is self-contained, may be composed of other services, and is a “black box” to consumers of the service. [39]
ISA-95 [27] defines a functional hierarchy model (known as the automation hierarchy or “pyramid”) that classifies those functions into five hierarchical levels (Fig. 18.9a). Unlike the hierarchical ISA-95/PERA model where each functional level is directly mapped onto their corresponding physical level (Fig. 18.8), emerging functional architectures aim to allow the tasks specified under each layer to be directly assigned to different distributed and autonomous agents, thus transitioning from a hierarchical model to a network model (Fig. 18.9b). The IBM reference architecture [41], for example, proposes a functional architecture for industrial control systems with two layers (Fig. 18.10): 1. Platform/hybrid cloud layer. Functions for plant-wide data processing and analytics are performed. Data is sent to the Enterprise level, and commands are sent back down to the Edge. The Enterprise layer provides similar functions but with a broader scope and with data from multiple
18
414
M. Moghaddam and S. Y. Nof
a)
b)
External world Enterprise Site Area Process cell
Production unit
Production line
Storage zone
Unit
Unit
Work cell
Storage module
Lower level resources in batch ops.
Lower level resources in continuous ops.
Lower level resources in discrete ops.
Lower level resources in handling ops.
Administration shell
Manifest
Virtual representation technical functionality
Resource
Object Object
Smart devices & products ISA-95 (PERA) physical architecture
I4.0 component
Fig. 18.8 The ISA-95/PERA role-based equipment hierarchy (a) and the I4.0 component (b). RAMI4.0 incorporates three additional elements in the original ISA-95 hierarchy including smart devices, smart products, and external world
Fig. 18.9 Transition from the hierarchical functional architecture of ISA-95/PERA to interconnected networks of agents (IoT) and services (IoS). (Adapted from [40])
plants. These functions are supported by the overlaying functions of device management, configuration management, security, visualization, development support, management, and cognitive services. 2. Equipment/device layer. The IBM architecture utilizes the Edge as a middle-agent between smart devices and tools and the plant and enterprise layers. The Edge is responsible for receiving data from these devices, providing basic analytics, and determining which information gets sent to the higher levels. It also sends commands to the smart devices.
The NIST architecture [42] is another example of service-oriented functional architectures for industrial control systems (Fig. 18.11). Designed specifically for manufacturing control, the NIST architecture utilizes a manufacturing service bus to connect various types of services in the system, including the operational technology (OT) and information technology (IT) domains, the virtual domain, and management. A service bus then connects the enterprise to external collaborators through business intelligence services. The OT domain services include functions provided by physical components such as machines, robots, and production lines
Management
Partner IT
Partner systems
Cognitive service
Fig. 18.10 The IBM reference architecture for IoS. (Courtesy of IBM [41])
(i.e., IoT). The IT domain services are services provided by the enterprise IT and include manufacturing operations management, enterprise resource planning, and supply chain management services. The virtual domain services are provided by the digital factory, digital twin, or virtual model of the enterprise and include queries and simulation services. The management or common services include all other services required by the enterprise including management of life cycle, data, security, knowledge, devices, service quality, and network configuration. All of the above enable effective collaborative automation and collaborative e-Work.
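A toy rendering of the service-bus idea can help make the functional architecture concrete: services register under a domain (OT, IT, virtual) and consumers look them up by name. The Python sketch below is illustrative only; the domain and service names are assumptions for the example and do not follow the NIST specification.

from collections import defaultdict
from typing import Callable, Dict

class ServiceBus:
    """Toy service bus: services register under a domain; consumers look them up."""
    def __init__(self) -> None:
        self._services: Dict[str, Dict[str, Callable[..., object]]] = defaultdict(dict)

    def register(self, domain: str, name: str, fn: Callable[..., object]) -> None:
        self._services[domain][name] = fn

    def call(self, domain: str, name: str, **kwargs) -> object:
        return self._services[domain][name](**kwargs)

bus = ServiceBus()
# An OT-domain service offered by a production line and an IT-domain ERP service
bus.register("OT", "cycle_time", lambda line: {"line": line, "seconds": 47.0})
bus.register("IT", "open_orders", lambda product: 12)

print(bus.call("OT", "cycle_time", line="L3"))
print(bus.call("IT", "open_orders", product="gearbox"))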
18.3.3 IoT-IoS Integration (Allocated Architecture)

With the IoT and IoS architectures defined in the previous section, the third architectural element is the allocated architecture, which aims at integrating IoT with IoS. An allocated architecture is defined as:

a complete description of the system design, including the functional architecture allocated to the physical architecture, derived input/output, technology and system-wide trade-off and qualification requirements for each component, an interface architecture that has been integrated as one of the components, and complete documentation of the design and major design decisions. [36]
The best matching of the IoT and IoS components is enabled by forming a collaborative network of organizations (CNO) [15, 16], where the organizations share their respective resources and services to accomplish their individual and collective goals (Fig. 18.12). Due to the dynamic variations in demand and capacity, organizations may encounter capacity shortage/surplus over time and thus collaborate by sharing their shareable services to balance the overall workload of the network and minimize idle resources, congestion, and delays. These decisions deal
OT domain Database knowledge base
IT domain Virtual domain
Fig. 18.11 The NIST reference architecture. (Adapted from [42])
Enterprise service bus
Internet of ‘services’ (IoS) A
Service
B
Servicecomponent integration A
B Edge connectivity
Internet of ‘things’ (IoT)
Component
Fig. 18.12 IoT-IoS integration in a collaborative network of organizations composed of two organizations, A and B. (After [43])
with intra-/inter-organization matching of services and components. Best matching for collaborative service-component integration can be done such that the overall fulfillment of the requested services is maximized while unnecessary collaboration is avoided to minimize its associated costs [43]. In other words, the collaboration can be optimized. Experiments show that best matching of components and services in a
CNO can significantly improve the fulfillment of requested services while avoiding unnecessary collaboration and that the impact of best matching grows with the size of the CNO network. That is, as the number and diversity of components and services increase, the capabilities of the organizations and the CNO as a whole in fulfilling their allocated services increase, which is due to having access to a larger and
more diverse pool of components and thus higher likelihood of finding the best match for each individual component and service. The allocated architecture, in essence, can help define the allocation of functions to physical objects and the execution of functions by distributed and autonomous agents. Advanced analytical models and tools are required for controlling this dynamic composition and execution process, as described next.
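One way to make the best-matching idea concrete is to cast service-component matching as an assignment problem. The Python sketch below uses SciPy's Hungarian-algorithm solver on a small, invented fulfillment matrix and a simple penalty for cross-organization collaboration; it illustrates the optimization intent, not the matching protocols developed in the cited work.

import numpy as np
from scipy.optimize import linear_sum_assignment

# fulfillment[i, j]: how well component j could fulfill requested service i (0..1)
fulfillment = np.array([
    [0.9, 0.4, 0.2],   # service s1
    [0.3, 0.8, 0.5],   # service s2
    [0.1, 0.6, 0.7],   # service s3
])
# 1 if the component belongs to a different organization than the requesting one
cross_org = np.array([
    [0, 1, 1],
    [1, 0, 1],
    [1, 1, 0],
])
collab_penalty = 0.15   # cost of unnecessary cross-organization collaboration

# Maximize total fulfillment minus collaboration cost (minimize its negative)
score = fulfillment - collab_penalty * cross_org
rows, cols = linear_sum_assignment(-score)
for s, c in zip(rows, cols):
    print(f"service s{s + 1} -> component c{c + 1} (net score {score[s, c]:.2f})")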
18.4
18.4.1 Generic Framework TAP [45] is a generic framework for effective task allocation and administration in collaborative e-Work systems. TAP enables efficient processing of several tasks (e.g., service requests, intermediate processes) through interaction and collaboration among distributed agents. The fulfillment of individual tasks depends on how individual agents collaborate with each other. TAP supports efficient control, coordination, and harmonization of tasks by enabling the exchange of information and decisions between the participating agents. ISO 14258:1998 [46] formalizes a task data model with three main data elements: primary (e.g., product, shop floor, planning and control), secondary (e.g., responsibility), and tertiary (e.g., resource). TAP allows agents to handle two types of situations, where the dependencies between the tasks and agents are dynamic (a function of time) and where
Design Principles for Collaborative e-Work, Collaborative Automation, and CCT
The “networked” nature of the emerging e-Work systems demands the participating entities (e.g., machines, devices, robots, humans) to be more autonomous, self- and environment-aware, and intelligent. They also have to share distributed and decentralized control of tasks and resources. That introduces new problems related to optimizing the interactions in an ecosystem of agents [44]. Hence, rigorous analytical methods and mechanisms must be and have been developed for guiding, optimizing, and governing the interactions between the agents—from simple data exchange to high-level cognitive collaborations [4]. This section first presents a consolidated framework for collaborative e-Work based on Task Administration Protocols (TAP) to demonstrate potential scenarios handled by various CCT design principles. Next, the basic and emerging CCT design principles are discussed and illustrated based on years of research and discovery, experimental, and practice validations, rooted in successful, proven principles and models of heterogeneous, autonomous, distributed, and parallel computing and communication.
Fig. 18.13 A generic Petri net representation of Task Administration Protocols. (After [9])
complex decisions beyond simple coordination need to be made (e.g., tasks close to the deadline, dynamic priorities, or preemption). Figure 18.13 shows a generic Petri net associated with an e-Work system, where a set of agents must process a set of tasks. The Petri net is composed of a set of places P = {p1, …, p13} and a set of transitions T = {t1, …, t7}, which respectively represent the possible states of the system and the actions that change those states. The following steps indicate how TAP enables the system of agents to transition between different places:
1. Arrival of tasks (t1): Upon arrival, the collaboration requirements and dependencies of the tasks must be identified. This transition is made before the tasks are ready to be processed by the agents (i.e., p1). The task with the highest priority should be served first. The arriving tasks may be new or returning. The returning tasks can be classified into two categories:
• Timed-out tasks. Some tasks may be returned due to unresolved conflicts or errors (i.e., p11), excessive agent occupation, preemption, adaptation, or relaxation of their requirements (i.e., p12).
• Tasks with unavailable agents. Some agents may temporarily or permanently become unavailable (i.e., p7). As a result, their associated task is returned to the queue.
Transition t1 is supported by the Collaboration Requirement Planning and Parallelism principles of CCT discussed in Sect. 18.4.2.
2. Arrival of resources (t2): Similar to t1, this transition is made before the agents are ready to process the tasks (i.e., p2). The agents available in the system could be new
agents, agents freed after a successful operation (i.e., p9), agents with timed-out tasks (i.e., p13), or agents with cancelled tasks (i.e., p8). Through this transition, the optimal configuration and assignment of agents are identified with respect to the current requirements of the system. Transition t2 is supported by the Parallelism, Resilience by Teaming, Error and Conflict Prognostics, Association-Dissociation, and Dynamic Lines of Collaboration principles of CCT discussed in Sect. 18.4.2.
3. Task-resource matching (t3): Upon identifying available tasks and agents in the system, this transition is performed to match and synchronize the best agents (i.e., p4) and tasks (i.e., p3). As explained earlier, Collaboration Requirement Planning is the key principle applied for dynamic matching of tasks and agents, determining "who does what and when." Transition t3 is supported by the Collaboration Requirement Planning, Parallelism, Association-Dissociation, Dynamic Lines of Collaboration, and Best Matching principles of CCT discussed in Sect. 18.4.2.
4. Real-time execution (t4): This transition allows real-time modification of collaboration plans and provision of feedback on each individual task and agent. It may lead to the following potential states in the system: errors/conflicts in the process (i.e., p5), abnormal processes with time-out conditions (i.e., p6), unavailability of agents (i.e., p7), cancellation of tasks (i.e., p8), or successful completion of the process (i.e., p9). Transition t4 is supported by the Collaboration Requirement Planning principle of CCT discussed in Sect. 18.4.2.
5. Resolution of errors/conflicts (t5): This transition is activated once an error or a conflict is detected (i.e., p5). Supported by the Error and Conflict Prognostics principle of collaborative e-Work, this transition allows the error/conflict to be identified and isolated and future occurrences to be prevented. The outcome, however, is not necessarily an error-/conflict-free system. Thus, the potential states of the system after this transition may include recovered/resolved (i.e., p10) or unrecovered/unresolved (i.e., p11) error/conflict. If p10 occurs, the system will return to normal execution (i.e., t4), while under p11, the system will undergo the t6 transition. Transition t5 is supported by the Error and Conflict Prognostics principle of CCT discussed in Sect. 18.4.2.
6. Timed-out task & resource release (t6): This transition follows abnormal processes with time-out conditions (i.e., p6) and/or unrecovered/unresolved errors/conflicts (i.e., p11). The time-out condition occurs when a task occupies an agent for more than a certain time-out threshold. Following this transition, the respective agent and task are released and returned to the set of available agents (see t2) and available tasks (see t1), respectively. Transition t6 is supported by
the Parallelism and Error and Conflict Prognostics principles of CCT discussed in Sect. 18.4.2.
7. Re-synchronization and emergence handling (t7): This transition is triggered once an agent becomes unavailable (i.e., p7), a task is cancelled (i.e., p8), or both. The configuration and assignment of the agents and tasks are then updated and optimized in real time; the excess tasks (i.e., p12) are returned to their corresponding queues, and the freed agents (i.e., p13) become available. Transition t7 is supported by the Association-Dissociation, Dynamic Lines of Collaboration, and Best Matching principles of CCT discussed in Sect. 18.4.2.
TAP provides a foundational framework for developing analytical methods, algorithms, and protocols that enable and optimize agent interactions and behaviors in collaborative e-Work systems based on the CCT design principles discussed next.
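The Petri-net flavor of TAP can be sketched as a small token game. The Python fragment below renders a few of the transitions of Fig. 18.13 (t1–t4) with invented auxiliary places for arrivals; it is a toy illustration of how transitions move the system between places, not an implementation of the protocols in [45].

from collections import Counter

# A few transitions of Fig. 18.13 as (input places, output places) token counts;
# "task_arrived"/"agent_arrived" are auxiliary places invented for the example.
transitions = {
    "t1": ({"task_arrived": 1}, {"p1": 1}),           # arrival of tasks
    "t2": ({"agent_arrived": 1}, {"p2": 1}),          # arrival of resources
    "t3": ({"p1": 1, "p2": 1}, {"p3": 1, "p4": 1}),   # task-resource matching
    "t4": ({"p3": 1, "p4": 1}, {"p9": 1}),            # real-time execution -> success

}

def fire(marking: Counter, name: str) -> Counter:
    """Fire a transition if every input place holds enough tokens."""
    inputs, outputs = transitions[name]
    if any(marking[p] < n for p, n in inputs.items()):
        raise ValueError(f"{name} is not enabled")
    new = marking.copy()
    for p, n in inputs.items():
        new[p] -= n
    for p, n in outputs.items():
        new[p] += n
    return new

marking = Counter({"task_arrived": 1, "agent_arrived": 1})
for t in ["t1", "t2", "t3", "t4"]:
    marking = fire(marking, t)
print(+marking)   # Counter({'p9': 1}): the task completed successfully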
18.4.2 Design Principles Over the years, several principles have been developed at the PRISM (Production, Robotics, and Integration Software for Manufacturing and Management) Center at Purdue University for the design and engineering of collaborative control and e-Work automation. These principles address many of the collaborative e-Work dimensions (Fig. 18.5) in various manufacturing, production, and service application domains. This research has been rooted in successful, proven principles and models of heterogeneous, autonomous, distributed, and parallel computing and communication for effective task administration in collaborative e-Work systems. The remainder of this section elaborates on various design principles for collaborative e-Work and CCT (see [4] for details). Principle of Collaboration Requirement Planning This principle specifies that effective collaboration can only be achieved through advanced planning and feedback loops, generally in two phases: (1) Plan generation, where a binary collaboration requirement matrix is developed to determine the tasks and collaborators of each agent. The entries of this matrix signify whether a resource (one or a group of multiple agents) is capable of performing a task or not. (2) Plan execution & revision, where the plan generated in the first phase is revised and executed in real-time. This phase is intended to resolve any potential conflicts in the plan that were overlooked in the first phase. In essence, this principle involves solving a problem of dynamic matching [11] between agents and tasks, as required by the allocated architecture described in Sect. 18.3.3.
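The plan-generation phase of Collaboration Requirement Planning can be illustrated with a binary collaboration requirement matrix and a simple greedy assignment, as sketched below in Python. The matrix values and the most-constrained-task-first heuristic are invented for the example; the actual CCT algorithms for dynamic matching are considerably more elaborate.

import numpy as np

# Binary collaboration requirement matrix: capable[r, t] = 1 if resource r
# (an agent or a group of agents) can perform task t (values invented)
capable = np.array([
    [1, 0, 1, 0],   # resource r1
    [0, 1, 1, 0],   # resource r2
    [0, 0, 1, 1],   # resource r3
], dtype=int)

def plan_generation(capable: np.ndarray) -> dict:
    """Greedy pass: give each task to a capable, still-unassigned resource."""
    assignment, used = {}, set()
    # Handle the most constrained tasks (fewest capable resources) first
    for t in np.argsort(capable.sum(axis=0)):
        for r in np.flatnonzero(capable[:, t]):
            if r not in used:
                assignment[f"task{t + 1}"] = f"resource{r + 1}"
                used.add(r)
                break
    return assignment

plan = plan_generation(capable)
print(plan)   # plan execution & revision would then adjust this in real time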
18
Collaborative Control and E-work Automation
419
Principle of Parallelism In any distributed network of agents, some tasks must be performed in parallel. Thus, task dependencies and their types must be considered before they are assigned to agents. Two tasks can be identical, complementary, or unrelated, which determines whether they can be performed in parallel or not. The economic aspect of performing tasks should also be considered. This principle states that agents must be enabled to communicate and interact to collaboratively perform their assigned tasks. This comes with a trade-off between the operation cost and the communication cost—the higher/lower the communicate rate, the higher/lower the communication cost, the lower/higher the operation cost. Obtaining an optimal degree of parallelism is key to minimizing the total cost. This principle is built upon the Distributed Planning of Integrated Execution Method (DPIEM) [47] and includes five guidelines to enable e-Work support systems (Fig. 18.14):
1. Formulate, decompose, and allocate problems.
2. Enable applications to communicate and interact under TAP.
3. Trigger and resynchronize independent agents to coherently and cohesively take actions and make decisions.
4. Enable agents to reason, interact, and coordinate with each other and with their environment.
5. Develop conflict resolution, error recovery, and diagnostic and recovery strategies based on the Error and Conflict Prognostics principle.

The parallelism principle is closely related to the KISS (Keep It Simple System!) principle. Some e-Work systems either are too complicated and confusing for human workers or change drastically over time with updates. Such systems are not cost-effective from the training standpoint; this additional workload and cost ought to be minimized. Building on traditional human-computer and human-automation usability design principles, KISS states that e-Work systems must be as complicated as necessary and as simple as possible.
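Returning to the parallelism trade-off described above, the toy cost model below illustrates how an optimal degree of parallelism can balance operation cost against communication cost; the cost functions and parameters are invented for illustration and are not part of DPIEM.

```python
def total_cost(p, op_base=100.0, comm_unit=12.0):
    """Hypothetical cost model: operation cost shrinks roughly as 1/p while
    communication/coordination cost grows linearly with the degree of
    parallelism p (number of cooperating agents)."""
    operation = op_base / p
    communication = comm_unit * (p - 1)
    return operation + communication

best_p = min(range(1, 11), key=total_cost)
print(best_p, round(total_cost(best_p), 1))   # p = 3 minimizes this toy model
```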
Principle of Error and Conflict Prognostics
Any e-Work system, regardless of its size and degree of distribution, is prone to errors and/or conflicts. This principle states that errors and conflicts must be predicted and prevented early and with minimum cost, in order to avoid the eventual collapse of the system (Fig. 18.15).

• Error. An error occurs when an agent's goal, result, or state differs from the expectation by more than a margin of error. An error is defined as follows:
∃E[ω_a(t)],  if s_{ω_a} --dissatisfy--> κ_r(t)      (18.1)
where E is error, ω_a(t) is agent a at time t, s_{ω_a} is the state of agent a at time t, and κ_r(t) is system constraint r at time t.
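A minimal sketch of how the error predicate of Eq. (18.1) might be evaluated in software, assuming interval-type system constraints; the function names and the margin handling are illustrative assumptions.

```python
def violates(state, constraint):
    """True when the monitored state dissatisfies a simple interval constraint."""
    low, high = constraint
    return not (low <= state <= high)

def detect_error(agent_state, constraints, margin=0.0):
    """Flag an error E[omega_a(t)] if any constraint kappa_r(t) is
    dissatisfied beyond the allowed margin (cf. Eq. 18.1)."""
    return any(violates(agent_state, (lo - margin, hi + margin))
               for lo, hi in constraints)

# an agent reporting temperature 82.5 against an allowed range of 60-80
print(detect_error(82.5, [(60.0, 80.0)], margin=1.0))  # True -> error flagged
```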
Fig. 18.14 The operational stages of the Distributed Planning of Integrated Execution Method (DPIEM). (After [47]). Y: degree of parallelism, F: operational cost of an integrated task graph, F^d: distributed operational cost, K: the costs of distributed assignment of integrated tasks, W: integrated organization of agents, ω: individual agents
Fig. 18.15 The error and conflict prognostics framework in a collaboration network. (After [48])
• Conflict. A conflict occurs between two or more agents when one agent's goal or error hinders another agent's actions. A conflict is defined as follows:

∃C[Ω(t)],  if s_Ω --dissatisfy--> κ_r(t)      (18.2)
where C is conflict, Ω(t) is the integrated agent network at time t, s_Ω is the state of the integrated agent network at time t, and κ_r(t) is system constraint r at time t. As the system becomes more distributed, the rate of interactions among agents increases and, as a consequence, the probability of error or conflict occurrence increases. See Chapter 22 for detailed discussions on automated error and conflict prognostics.

Principle of Association-Dissociation
This principle states that the dynamic variations in the topology, size, and operations of collaborative networks of agents must be optimized with regard to three key questions:
1. When and why should an agent be associated with or dissociated from a network of agents [16]?
2. What are the expected rewards and costs for an individual agent to participate in or remain in a network [15]?
3. What are the impacts of the interdependencies between the agents' preferences and goals on the overall performance of the network [49]?
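As a rough, hypothetical illustration of questions 1-3 (not the models of [15, 16, 49]), the sketch below lets a joining agent pick the cluster with the highest net benefit, where the benefit degrades with cluster load as a crude stand-in for interdependent preferences.

```python
def best_cluster(clusters, reward, cost):
    """Pick the cluster with the highest net benefit for a joining agent;
    clusters with a non-positive net benefit are rejected (dissociate/stay out)."""
    def net(c):
        return reward(c) - cost(c)
    candidates = [c for c in clusters if net(c) > 0]
    return max(candidates, key=net) if candidates else None

clusters = [{"id": 1, "load": 3}, {"id": 2, "load": 7}, {"id": 3, "load": 9}]
choice = best_cluster(
    clusters,
    reward=lambda c: 10.0,            # value of being connected to any cluster
    cost=lambda c: 1.2 * c["load"],   # congestion-dependent participation cost
)
print(choice)   # cluster 1: lowest load, highest net benefit
```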
Consider, for example, a clustered network of sensors, where sensors in each cluster transmit signals to their respective cluster head, and the sensors continually associate with or dissociate from the network when recharged or when they run out of power, respectively (Fig. 18.16). The three aforementioned questions become critical in this case, because the Association-Dissociation decisions are subject to the topology of each cluster as well as the status of each sensor, thus requiring dynamic matching conditioned on the interdependent preferences and rewards of distributed agents [49]. Experiments on distributed systems like this example demonstrate the critical role of the Association-Dissociation principle in rapidly adjusting and optimizing the overall system performance through best matching (Fig. 18.17). Other application examples of this principle include networked organizations, swarm robotics, machine scheduling, and storage assignment, among others [11]. The Association-Dissociation principle analyzes the conditions and timing for individual agents or networks of agents to associate with or dissociate from a collaborative network. The decisions are made at different levels in the entire network: by individual agents, by subnetworks or clusters of agents, or among multiple networks. The analysis includes several phases, from creation and execution to dissolution and support.

Principle of Resilience by Teaming
This principle states that a network of weak agents is more efficient and reliable than a single strong agent [50]. An argument supporting this premise is that a monolithic system is at risk of shutting down in case of an unexpected incident or error in its single agent. A distributed network of agents, on the other hand, will continue to function, although perhaps not as optimally as before, when an agent fails. Exploration and research of the mechanisms behind the failure of e-Work networks have revealed that those capable of surviving are not only robust, but resilient [12]. In a given network of distributed and autonomous agents that interact to enable the flow of tasks, physical goods, and/or information, each agent contains a set of internal resources R_a for transforming the input flow into output flow, following a set of control protocols CP_a. A communication link cl_{i→j} represents the connection between the control protocols of agents i and j (from i to j). A flow link fl_{i→j}, on the other hand, represents the flow of tasks, physical goods, and/or information from agent i to agent j (Fig. 18.18). Each individual agent is responsible for controlling its own internal resource network, to enable intra-agent flow, and for defining sourcing and distribution networks, to enable inter-agent flow. The Resilience by Teaming principle, based on the agent network formalism in Fig. 18.18, defines a two-level framework
Fig. 18.16 The Association-Dissociation principle in a sensor network with single- and multi-hop clusters. Sensor ω13 is dissociating from cluster 5 and sensor ω0 is joining the network. Two key questions must be answered: (a) What cluster should ω0 associate with? (b) Will the association of ω0 and dissociation of ω13 impact the overall, optimal topology of the network? (After [11], Ch. 6)
to ensure the global and local resilience of the network (Fig. 18.19) [52]:

• Network-level requirements for global resilience:
  – Situation awareness. Agents are required to participate in secure and privacy-preserving transactions with other agents and with the network regulators to collaboratively detect and resolve potential vulnerabilities in network topology or agent interactions.
  – Negotiation. Agents are required to follow certain negotiation mechanisms and protocols for all strategic decisions to be made collaboratively, including topological requirements, demand and capacity sharing, and defining service-level agreements. The purpose of these negotiations is to enhance the individual and collective resilience of agents.
• Agent-level requirements for local resilience:
  – Design. Agents are required to follow two team design protocols, the Sourcing Team Formation/Re-configuration Protocol (STF/RP) and the Distribution Network Formation/Re-configuration Protocol (DNF/RP), to dynamically make sourcing and distribution network decisions, respectively. This is done through communication links cl_{i→j}, by making or breaking flow links fl_{i→j}.
  – Operation. Agents are required to operationalize the design decisions by following three protocols to manage sourcing from predecessors (SFCP), control flow through internal resources (IFCP), and enable the distribution of flow to successors of an agent (DFCP).
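The sketch below illustrates, under simplified assumptions, the kind of sourcing re-configuration the STF/RP protocol addresses: an agent that loses a source breaks the corresponding flow link and forms a new one with a backup known through its communication links. The class and field names are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class NetworkAgent:
    name: str
    resources: List[str] = field(default_factory=list)   # internal resources R_a
    sources: List[str] = field(default_factory=list)     # active flow links fl_{i->a}
    backups: List[str] = field(default_factory=list)     # candidates known via cl_{i->a}

    def reconfigure_sourcing(self, failed: str) -> List[str]:
        """STF/RP-like step (hypothetical): break the flow link to a failed
        source and form a new one with the first viable backup, if any."""
        if failed in self.sources:
            self.sources.remove(failed)
            for candidate in self.backups:
                if candidate != failed and candidate not in self.sources:
                    self.sources.append(candidate)
                    break
        return self.sources

a = NetworkAgent("assembler", resources=["fixture", "robot"],
                 sources=["supplier_1"], backups=["supplier_2", "supplier_3"])
print(a.reconfigure_sourcing("supplier_1"))   # ['supplier_2']
```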
According to the Resilience by Teaming principle, resilience is an inherent ability of agents and an emergent ability of agent networks; it involves restoring/maintaining quality of service (QoS) through the active prognosis of errors and conflicts that potentially lead to disruptions [53].

Principle of Dynamic Lines of Collaboration
Due to the interconnected nature of CPS agents, disruptions that initially affect only a small part of any network tend to escalate and cascade throughout the entire system. This principle states that intelligent technologies for interaction, communication, sharing, and collaboration must support first responders and emergency handlers and enhance their responsiveness and ability to collaborate with one another in controlling disruptions and preventing their escalation [13]. This principle is particularly critical for disruptions with a cascade failure effect, where, depending on the local network structure, an initial failure either fades away quickly or propagates to a system collapse (Fig. 18.20a). In such situations, the failure can be prevented by deploying a response team, provided that at least two responders are available that are connected to each vertex of the failed link and that the responders are able to collaborate within a limited response time (Fig. 18.20b).
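A toy version of the repair condition just described (not the models of [13, 54]): a failed link is restored only if two distinct, idle responders, one connected to each endpoint, can respond within the time limit; otherwise the failure is left to cascade. The data layout is an illustrative assumption.

```python
def handle_link_failure(link, responders, response_limit):
    """Hypothetical DLOC rule: repair failed link (u, v) if two distinct,
    idle responders, one covering each endpoint, can arrive in time."""
    u, v = link

    def qualified(node):
        return [r for r in responders
                if r["idle"] and node in r["covers"]
                and r["travel_time"] <= response_limit]

    for ru in qualified(u):
        for rv in qualified(v):
            if ru is not rv:
                ru["idle"] = rv["idle"] = False   # dispatch the response team
                return "repaired"
    return "cascades"

responders = [
    {"covers": ["n1", "n4"], "idle": True, "travel_time": 3},
    {"covers": ["n2", "n5"], "idle": True, "travel_time": 5},
    {"covers": ["n7"],       "idle": True, "travel_time": 1},
]
print(handle_link_failure(("n1", "n2"), responders, response_limit=6))  # repaired
```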
Fig. 18.17 The impact of best matching with interdependent preferences and rewards on the ability of a network of agents (e.g., Fig. 18.16) to rapidly adapt to the association/dissociation of individual agents or to the arrangement/dissolution of networks, and to optimize its overall fitness value, in two different problem sets with small and large numbers of agents/networks. (After [49]). Solid/no-fill line: best matching with/without considering interdependent preferences and rewards
Fig. 18.18 Intra- and inter-agent communication and flow networks. (After [51])
Fig. 18.19 Formalism and framework for the Resilience by Teaming principle in a network of distributed and autonomous agents depicted in Fig. 18.18. (After [52]). SN: supply network, DCS: demand-capacity sharing, SLA: service-level agreement, IRN: internal resource network, GO: topographical overlap, TO: topological overlap, TL: target level of storage, P_a: predecessors of agent a, S_a: successors of agent a, ν[r]: design capacity of resource r ∈ R_a, r_S: storage-type resource in R_a, H_{i→j}: set of intermediaries between i and j, t: time, θ(x, w, t_r): fraction of sourcing requests that can be served by x agents (out of w) within t_r, d_min^GEO: minimum distance between agents to avoid proximity-correlated disruptions, ρ, κ: model parameters
Dynamic Lines of Collaboration is an emerging CCT principle, which still requires extensive, multidisciplinary investigations in several directions [13]. Two crucial requirements for emergency response and disruption handling and prevention in distributed networks are reducing the response time and controlling the spread of disruptions. Strategies observed in biological systems (e.g., immune system) can inspire the control of collaborating responders to achieve this end. Furthermore, new mechanisms are required to tailor the advanced response strategies to the unique features and requirements of each particular domain. Dynamic teaming and resource reinforcement functions are also required for effective and proactive response to both random disruptions and targeted attacks, errors, or conflicts (Fig. 18.21).
Principle of Best Matching
This principle states that best matching of distributed and autonomous entities in an e-Work system, either natural or artificial, is essential to ensure their smooth conduct and successful competitiveness [55]. The principle is motivated by the need for optimizing agent interactions through best matching in planning and scheduling, enterprise network design, transportation and construction planning, recruitment, problem-solving, selective assembly, team formation, and sensor network design, among others [56–61]. The PRISM taxonomy of best matching [11], depicted in Fig. 18.22, is a standardized framework for synthesizing best matching problems in different domains across 3 + 1 dimensions representing the sets of entities and their relationships (D1), the conditions that govern the matching processes (D2), the criteria for finding the best match (D3), and the dynamics of the matching process over time (D+).
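As an illustration of the matching decision itself (D1-D3), the following exhaustive sketch finds the assignment of tasks to agents that maximizes a total satisfaction score; it is suitable only for small instances and is not one of the PRISM matching algorithms.

```python
from itertools import permutations

def best_match(agents, tasks, score):
    """Exhaustive best matching for small instances: try every one-to-one
    assignment of tasks to agents and keep the one with the highest total
    score (an overall-satisfaction objective)."""
    best, best_value = None, float("-inf")
    for perm in permutations(agents, len(tasks)):
        value = sum(score[a][t] for a, t in zip(perm, tasks))
        if value > best_value:
            best, best_value = dict(zip(tasks, perm)), value
    return best, best_value

score = {"A1": {"t1": 9, "t2": 2},
         "A2": {"t1": 6, "t2": 7},
         "A3": {"t1": 4, "t2": 8}}
print(best_match(["A1", "A2", "A3"], ["t1", "t2"], score))
# ({'t1': 'A1', 't2': 'A3'}, 17)
```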
Best matching plays a key role in optimizing collaborative activities and transactions in distributed e-Work systems. In collaborative supply networks, for instance, demand-capacity sharing between individual firms has become a common practice and an attractive strategy for both competing and non-competing suppliers (e.g., airlines, test
Fig. 18.20 Cascade failure (a) and link repair (b) with the Dynamic Lines of Collaboration disruption handling and prevention models. (After [13]). In (a), at t = 0 the highlighted node fails, at t = 1 the dashed links fail, and at t = 2 nodes fail according to ϕ. In (b), at t = 0 the dashed link fails and two highlighted responders start to respond; at t = t_D one responder disconnects from its original node (depot) and connects with a desired node to repair the failed link; at t = t_D + t_R the link is restored and all responders return to the idle state. ϕ: cascading threshold, t: time, t_D: responder travel time, t_R: repair timespan, G: a network representation of the cyber-physical system, R: a network representation of the response team
Fig. 18.21 Emergency disruption handling in the Hetch Hetchy water system based on the principle of Dynamic Lines of Collaboration. (After [54]). Hetch Hetchy serves 2.6 million residential, commercial, and industrial customers in the San Francisco Bay Area from 11 reservoirs. (Map source: San Francisco Public Utilities Commission, 2014)
Fig. 18.22 The PRISM taxonomy of best matching in dynamic and distributed e-Work systems. (After [11]). D1, Sets: number of sets; pairwise relationships (one-to-one, one-to-many, many-to-many). D2, Conditions: resource constraints, precedence relations, resource sharing, interdependent preferences, sequential matching, etc. D3, Criteria: number and type of criteria; objective function (overall satisfaction, bottleneck, minimum deviation, goal programming, multi-objective, etc.). D+, Time: dynamic system parameters, emergent sets of entities
Fig. 18.23 Improvements in demand fulfillment, capacity utilization, stability, and total cost of a collaborative supply network through dynamic best matching of capacity-/demand-sharing proposals and suppliers-customers. Baseline: collaboration with fixed pre-matching of suppliers and customers and under the demand-capacity sharing protocol. (After [59])
and assembly factories, outsourced maintenance/logistics providers). Demand-capacity sharing decisions are intended to maximize profit and improve resource utilization, customer satisfaction, and the overall stability of the supply network, by allowing suppliers with capacity shortages to utilize the excess capacities of other suppliers in fulfilling their excess demand [62–64]. However, a high frequency of "unoptimized" collaboration may impose additional costs associated with transactions, negotiations, and lateral transshipment of stocks. Best matching can play a key role in minimizing the collaboration costs and
improving demand fulfillment, capacity utilization, and network stability, through dynamic matching of suppliers and customers based on the capacity-demand gap for each supplier, and improving the collaboration process by matching the demand-/capacity-sharing proposals associated with different suppliers (Fig. 18.23) [56]. Another application example of the best matching principle is in collaborative assembly systems [60], where manufacturing resources or “tools” are dynamically shared between busy and idle workstations to alleviate bottlenecks (Fig. 18.24). This process can be accomplished through two
Fig. 18.24 Collaborative assembly architecture. Workstation agents (WAs) monitor the processes and calculate the estimated workload in each cycle. Tool agents (TAs) negotiate with WAs to identify their best match(es) during each cycle and establish their viability (survivability) and autonomy (independence). (Adapted from [11])
integrated best matching mechanisms: (1) dynamic tool sharing between fully loaded (i.e., bottleneck) and partially loaded workstations, and (2) dynamic matching of tasks and workstations. This dynamic matching process, accomplished through distributed multi-agent negotiation mechanisms [61], has been shown to enhance the overall balanceability of assembly lines (i.e., how evenly the overall workload is distributed among all active workstations) and to minimize the deviation of the assembly system from the planned cycle time in case of dynamic and unexpected changes in processing times and/or system performance (Fig. 18.25). The CCT design principles, protocols, and tools described above have been validated experimentally and have proven successful in industrial, service, and government applications. Examples are described in [4] and in [12, 16, 47, 51–53, 56, 59, 62, 63, 65, 66] for manufacturing and supply chains, [57, 60, 61] for assembly, [13, 67] for disruption handling, and [68–70] for greenhouse automation.
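A simplified, hypothetical sketch of the first mechanism (dynamic tool sharing): the bottleneck workstation borrows a tool from the most lightly loaded station, and its estimated workload is scaled down under a crude linear model; the real mechanism relies on WA-TA negotiation [61].

```python
def share_tool(workstations):
    """One illustrative round of tool sharing: move a tool from the most
    lightly loaded workstation to the bottleneck and rescale its workload.
    The linear workload model here is an assumption for illustration."""
    donor = min(workstations, key=lambda w: w["load"])
    bottleneck = max(workstations, key=lambda w: w["load"])
    if donor is bottleneck or donor["tools"] <= 1:
        return workstations
    donor["tools"] -= 1
    old_tools = bottleneck["tools"]
    bottleneck["tools"] += 1
    bottleneck["load"] *= old_tools / (old_tools + 1)
    return workstations

stations = [
    {"name": "W1", "tools": 2, "load": 40.0},
    {"name": "W2", "tools": 3, "load": 95.0},   # bottleneck
    {"name": "W3", "tools": 2, "load": 55.0},
]
for s in share_tool(stations):
    print(s["name"], s["tools"], round(s["load"], 1))
```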
18.4.3 Emerging Thrusts

Bioinspired Collaborative Control
The knowledge and discoveries in biological sciences have recently begun influencing collaborative control in e-Work, e-Business, and e-Service. Animal-to-animal interactions, such as those observed among ants, bees, fish, and birds, have provided a source of inspiration for devising collaborative control mechanisms. The autocatalytic behavior [71] of these insects/animals creates a positive feedback loop that over time enables them to collectively make decisions that optimize their overall performance (e.g., detect the shortest path to a food source or find the best navigation trajectory that minimizes energy consumption). In a similar fashion, the complexity of decisions in a collaborative e-Work system can be reduced by sharing appropriate hints among the agents, enabling them to make decisions with local information and thus minimizing communication overload (Fig. 18.26). Applications in e-Work, e-Business, and e-Service include distributed decision-making by mobile software agents, message-based interactions among agents and robots, and measures such as viability (survivability) and autonomy (independence).

Learning and Adaptation in Collaborative Control
The negotiation-based methods for collaborative control are often capable of modeling only a generic and limited set of states and actions, and they depend upon hand-engineered policies that
Fig. 18.25 Improvements in the deviation from the planned cycle time through the collaborative assembly architecture in Fig. 18.24. Best matching not only reduces the deviation compared to the baseline approach with no matching, it is also able to improve the optimal plan by responding to the dynamic changes in the system during execution. (After [61])
Fig. 18.26 Exploring mechanism of an ant agent in multi-agent manufacturing control systems. (After [72]). R: resource, R_a: resource agent, t: time
are difficult to build, are domain-specific, and are unable to capture the complex nature of agent interactions in collaborative e-Work, e-Business, and e-Service systems. One of the established approaches that tackles this limitation is dynamical systems theory [73–76], which mathematically describes the behavior of complex, collaborative dynamical systems through advanced differential equations. Although these methods allow precise mathematical models of actions, states, state transitions, and policies to be developed, they are not the best platform for building collaborative e-Work systems for one main reason: they are "model-based" and thus
domain-specific and require a significant amount of time and effort to build and maintain. Recent advances in artificial intelligence and deep learning [77] have enabled new possibilities for collaborative control of distributed and autonomous agents in e-Work systems through deep reinforcement learning (DRL) [78]. This setup is ideal for industrial applications where, for example, an industrial robot has to take several steps to finish an operation (e.g., assembly, welding, pick-and-place), a mobile robot has to navigate through various points in a warehouse to complete
a delivery, or a smart product has to go through a series of processing equipment to be finished. The resurgence of artificial neural networks after the multi-decade-long "AI winter" and the rise of deep learning allowed DRL to boom and reach superhuman performance in playing both single- and multi-agent Atari games just a few years ago [79–82]. These recent remarkable achievements of DRL signify its potential for solving more complex and practical problems, including the collaborative control and automation of multi-agent industrial systems. Multi-agent DRL [83–86] involves multiple agents that learn by dynamically interacting with a shared environment (Fig. 18.27).

Fig. 18.27 The stochastic game for multi-agent RL. s_k^(i): state of agent i at step k, a_k^(i): action of agent i at step k, r_k^(i): reward of agent i at step k

At the intersection of DRL, game theory, and direct policy search techniques, multi-agent RL is formally defined as a stochastic game: a dynamic game with probabilistic transitions played in a sequence of stages, where the outcomes of each stage depend on (1) the previous stage of the game played and (2) the actions of the agents in that game [84]. Notwithstanding its potential, the application of multi-agent DRL for intelligent control of distributed and collaborative e-Work, e-Business, and e-Service systems requires several theoretical and technological challenges to be addressed:
1. The first and foremost is the curse of dimensionality: the state-action space of the agents grows exponentially as a function of the number of states, actions, and agents in the system.
2. Specifying the agents' learning goals in the general stochastic game is a challenging task, as the agents' returns are correlated and cannot be maximized independently.
3. All agents in the system are learning simultaneously, and thus each agent is faced with a moving-target learning problem, known as non-stationarity; that is, the best overall policy changes as the other agents' policies change.
4. The agents' actions must be mutually consistent in order to achieve their individual and collective goals.
5. Application of multi-agent RL to practical e-Work settings requires precise and detailed digital replicas of the physical environments for the agents to experience different policies and master their respective complex tasks through trial-and-error.
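As a toy illustration of the stochastic-game setting in Fig. 18.27 (not of the cited DRL algorithms), the sketch below trains two independent, stateless Q-learners on a repeated coordination game; the reward structure and learning parameters are arbitrary assumptions, and the non-stationarity challenge above arises because each learner's environment includes the other learner.

```python
import random

# A tiny two-agent coordination game: both agents get reward 1 only when
# they pick the same action (a single, stateless stage repeated over time).
ACTIONS = [0, 1]

def reward(a1, a2):
    return 1.0 if a1 == a2 else 0.0

def train(episodes=2000, alpha=0.1, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    q1 = {a: 0.0 for a in ACTIONS}   # independent learners: each agent sees
    q2 = {a: 0.0 for a in ACTIONS}   # only its own action and reward
    for _ in range(episodes):
        a1 = rng.choice(ACTIONS) if rng.random() < epsilon else max(q1, key=q1.get)
        a2 = rng.choice(ACTIONS) if rng.random() < epsilon else max(q2, key=q2.get)
        r = reward(a1, a2)
        q1[a1] += alpha * (r - q1[a1])   # stateless (bandit-style) Q-update
        q2[a2] += alpha * (r - q2[a2])
    return q1, q2

q1, q2 = train()
print(max(q1, key=q1.get), max(q2, key=q2.get))  # typically the same action
```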
Further research will address and overcome the above limitations. As a result, learning and adaptive collaborative control will enable better collaboration engineering of collaborative automation and collaborative e-Work.
18.5 Conclusions and Challenges
Collaboration must be automated in distributed and complex systems of machines, humans, and organizations to augment the abilities of individual entities and of entire e-Work systems to accomplish their missions. The benefits of e-Work, e-Business, and e-Service, and of any collaborative automation in general, are the direct results of automating collaborative processes, of integrating physical processes with computation through embedded computers and feedback control loops, and of access to a full-fledged Internet of smart agents, machines, systems, products, and humans. Collaborative e-Work and CCT enable the cyber-based automation of processes and strategies for error/conflict/disruption mitigation and prevention that are proven to guarantee better service, faster response, and more profit. The adaptability and autonomy of e-Work systems and enterprises enabled by CCT allow the automation of business processes 24/7 with a decreasing need for human intervention, from factory floors to transportation systems, hospitals, agricultural fields, trade markets, and offices. Moreover, the abundance of and immediate access to business and operational data from anywhere and at any time enable workers and enterprises to collaborate more efficiently and more effectively toward their individual and shared goals. The methods by which distributed agents, from micro- and nano-sensors to computers, machines, humans, and enterprises, interact and collaborate are being rapidly transformed. Future research and discovery must advance the theoretical foundations of e-Work and the principles of CCT in various theoretical and technological directions [4, 87, 88], among them:
1. Computer and communication security. Emerging collaborative automation systems demand higher levels of dependability and security, and effective structured backup and recovery mechanisms for information, computer, and communication systems. New theories, methods, and technologies are required for preventing and eliminating any conceivable errors, failures, and conflicts, and for sustaining critical continuity of operations.
2. Information assurance. A key requirement for next-generation e-Work systems is to manage the risks
associated with the use, processing, storage, and transmission of information among distributed entities. Information assurance is critical for ensuring confidentiality, integrity, and availability of information and knowledge-based systems.
3. Ethics. The design of collaborative automation systems must be cognizant of potential ethical issues and challenges. Examples include privacy violation, anonymous access to individual property, recording of conversations and proprietary knowledge, cybercrime, cyberterrorism, and information hiding or obscuring.
4. Sustainability. Sustainable design of collaborative automation systems requires new technical and organizational solutions surrounding business processes, supply networks, product design and life cycle management, process innovation, and technology advancements. Major sustainability challenges are:
   • Environmental: Strategic reduction of waste, reduction of industrial pollution, energy consumption and hazards, and environmental disasters caused by the industry
   • Social: Employee satisfaction, quality of life, expenditure on peripheral development, and workplace safety and health
   • Economic: Sustaining profitability and viability, transportation and civil activities, turnover ratio, gross margin, and capital investment
5. Ability to collaborate more effectively. The ubiquitous and real-time access to abundant business and operational data and platforms through collaborative automation systems (e.g., mobile commerce, tele-medicine, fee-based information delivery, ubiquitous computing) must enhance the ability of enterprises to collaborate internally and with the external world. This requires new communication technologies and collaborative control mechanisms that provide all entities with the information and decision-making capabilities they need to work toward their individual and collective goals.
6. Incentive-based collaboration. Collaborative automation systems must always be designed and administered around a fundamental question: Why share and not compete, and what are the role and optimal level of collaboration to ensure the competitiveness of an individual entity? Collaboration by itself may not always lead to success; it must be balanced by a certain degree of competition to ensure both sustainability and growth.
7. Multicultural interactions. With trends toward globalization, the design of collaborative automation systems must incorporate cross-cultural communications, interactions, and collaborations among individuals from a variety of cultural backgrounds working together to achieve common goals. The challenge is to integrate and harmonize
the knowledge and expertise dispersed among several disciplines and research areas to avoid potential cultural conflicts. 8. Leveraging recent advances in artificial intelligence. The rise of deep learning after the decades-long AI winter must be leveraged to invent methods and algorithms that enable agents to directly learn from abundant real-time sensory data to optimize the performance of distributed e-Work systems in a distributed, synchronous, and autonomous fashion. Future research must focus on the development of the digital tools and analytical methods and algorithms that can augment CCT and e-Work automation by bringing this vision to life. That includes defining and developing the foundations for collaborative control to create synergies between existing CCT design principles and AI methods such as reinforcement learning, expanding the breadth of investigation and implementation of collaborative e-Work principles and methodologies, and defining and assessing new measures for productivity, efficiency, resilience, and adaptability to better understand and improve the emerging e-Work systems and collaborative automation environments. See more on collaborative robots automation in Chapter 15, and on interoperability in Chapter 33.
18.6 Further Reading
• X.W. Chen: Network Science Models for Data Analytics Automation: Theories and Applications (Springer ACES Series, Cham, Switzerland 2022) vol. 9. • L.N. de Castro: Fundamentals of Natural Computing: Basic Concepts, Algorithms, and Applications (Chapman Hall/CRC, New York 2006). • H.M. Deitel, P.J. Deitel, K. Steinbuhler: e-Business and e-Commerce for Managers (Prentice-Hall, New Jersey 2001). • V.G. Duffy, S.J. Landry, J.D. Lee, N. Stanton (Eds.): Human-Automation Interaction: Transportation (Springer ACES Series, Cham, Switzerland 2023). • V.G. Duffy, M. Lehto, Y. Yih, R.W. Proctor (Eds.): Human-Automation Interaction: Manufacturing, Services and User Experience (Springer ACES Series, Cham, Switzerland 2023). • V.G. Duffy, M. Ziefle, P-L.P. Rau, M.M. Tseng (Eds.): Human-Automation Interaction: Mobile Computing (Springer ACES Series, Cham, Switzerland 2023). • P.O. Dusadeerungsikul, X. He, M. Sreeram, S.Y. Nof: Multi-agent system optimisation in factories of the future: cyber collaborative warehouse study. International journal of production research. 60(20), 6072–6086 (2022). • P. Harmon, M. Rosen, M. Guttman: Developing e-Business Systems and Architectures: A Manager’s Guide (Academic, San Diego 2001).
• S. Johnson: Emergence: The Connected Lives of Ants, Brains, Cities and Software (Scribner, New York 2002).
• K.M. Passino: Biomimicry for Optimization, Control, and Automation (Springer, London 2005).
• P.B. Lowry, J.O. Cherrington, R.R. Watson (Eds.): The eBusiness Handbook (St. Lucie, New York 2002).
• C. Ma, A. Concepcion: A security evaluation model for multi-agent distributed systems. In: Technologies for Business Information Systems, ed. by W. Abramowicz, H.C. Mayr (Springer, New York 2007) pp. 403–415.
• S. McKie: e-Business Best Practices Leveraging Technology for Business Advantage (Wiley, New York 2001).
• M. Moghaddam, S.Y. Nof: Best Matching Theory & Applications (Springer International Publishing, 2017).
• F. Munoz, A. Nayak, S.C. Lee: Engineering Applications of Social Welfare Functions: Generic Framework of Dynamic Resource Allocation (Springer ACES Series 2023).
• W.P.V. Nguyen, O.P. Dusadeerungsikul, S.Y. Nof: Collaborative Control, Task Administration, and Fault Tolerance for Supply Network. In: Supply Network Dynamics and Control, ed. by A. Dolgui, D. Ivanov, B. Sokolov (Springer Series in Supply Chain Management 2022).
• S.Y. Nof, J. Ceroni, W. Jeong, M. Moghaddam: Revolutionizing Collaboration through e-Work, e-Business, and e-Service (Springer International Publishing, 2015).
• R. Reyes Levalle: Resilience by Teaming in Supply Chains and Networks (Springer International Publishing, 2018).
• R.T. Rust, P.K. Kannan (Eds.): e-Service: New Directions in Theory and Practice (Sharpe, New York 2002).
• H. Zhong, S.Y. Nof: Dynamic Lines of Collaboration – Disruption Handling & Control (Springer International Publishing, 2020).
• I. Tkach, Y. Edan: Distributed Heterogeneous Multi Sensor Task Allocation Systems (Springer ACES Series, Cham, Switzerland 2020) vol. 7.
References 1. Nof, S.Y.: Design of effective e-work: review of models, tools, and emerging challenges. Prod. Plan. Control 14(8), 681–703 (2003) 2. Nof, S.Y.: Collaborative e-work and e-manufacturing: challenges for production and logistics managers. J. Intell. Manuf. 17(6), 689– 701 (2006) 3. Nof, S.Y.: Collaborative control theory for e-Work, e-Production, and e-Service. Ann. Rev. Control 31, 281–292 (2007) 4. Nof, S.Y., Ceroni, J., Jeong, W., Moghaddam, M.: Revolutionizing Collaboration Through e-Work, e-Business, and e-Service. Springer, Berlin (2015) 5. Gasos, J., Thoben, K.-D.: E-Business Applications: Technologies for Tommorow’s Solutions. Springer Science & Business Media, Berlin (2002) 6. Google.: 6X growth in mobile watch time of travel diaries and vlogs in the past two years – YouTube Data, U.S. (2017)
M. Moghaddam and S. Y. Nof 7. Fullerton, L.: Online reviews impact purchasing decisions for over 93% of consumers (2017) 8. Zhong, H., Levalle, R.R., Moghaddam, M., Nof, S.Y.: Collaborative intelligence – definition and measured impacts on Internetworked e-Work. Manag. Prod. Eng. Rev. 6(1) (2015) 9. Moghaddam, M., Nof, S.Y.: The collaborative factory of the future. Int. J. Comput. Integ. Manuf. 30(1), 23–43 (2017) 10. Murphey, R., Pardalos, P.M.: (eds.) Cooperative Control and Optimization, vol. 66. Applied Optimization. Springer US, Boston (2002) 11. Moghaddam, M., Nof, S.Y.: Best Matching Theory & Applications, vol. 3. Automation, Collaboration, & E-Services. Springer International Publishing, Berlin (2017) 12. Levalle, R.R.: Resilience by Teaming in Supply Chains and Networks, vol. 5. Automation, Collaboration, & E-Services. Springer International Publishing, Cham (2018) 13. Zhong, H., Nof, S.Y.: Dynamic Lines of Collaboration, vol. 6. Automation, Collaboration, & E-Services. Springer International Publishing, Cham (2020) 14. Nof, S.Y.: Springer Handbook of Automation, Ch. 88. Springer Science & Business Media, Berlin (2009) 15. Chituc, C.-M., Nof, S.Y.: The join/leave/remain (JLR) decision in collaborative networked organizations. Comput. Ind. Eng. 53(1), 173–195 (2007) 16. Yoon, S.W., Nof, S.Y.: Affiliation/dissociation decision models in demand and capacity sharing collaborative network. Int. J. Prod. Eco. 130(2), 135–143 (2011) 17. Mahnke, W., Leitner, S.-H., Damm, M.: OPC Unified Architecture. Springer, Berlin (2011) 18. Schleipen, M., Drath, R.: Three-view-concept for modeling process or manufacturing plants with AutomationML (2009) 19. Vijayaraghavan, A., Sobel, W., Fox, A., Dornfeld, D., Warndorf, P.: Improving machine tool interoperability using standardized interface protocols: Mt connect (2008) 20. Trentesaux, D.: Distributed control of production systems. Eng. Appl. Artif. Intel. 22(7), 971–978 (2009) 21. Mell, P., Grance, T.: Perspectives on cloud computing and standards. national institute of standards and technology (NIST). Inform. Technol. Lab. 45–49 (2009) 22. Riedl, M., Zipper, H., Meier, M., Diedrich, C.: Cyber-physical systems alter automation architectures. Ann. Rev. Control 38(1), 123–133 (2014) 23. Lee, E.A.: Cyber physical systems: design challenges. In: 2008 11th IEEE International Symposium on Object and ComponentOriented Real-Time Distributed Computing (ISORC), pp. 363– 369. IEEE, Piscataway (2008) 24. Nolin, J., Olson, N.: The Internet of Things and convenience. Int. Res. 26(2), 360–376 (2016) 25. Monostori, L., Kádár, B., Bauernhansl, T., Kondoh, S., Kumara, S., Reinhart, G., Sauer, O., Schuh, G., Sihn, W., Ueda, K.: Cyberphysical systems in manufacturing. CIRP Ann. Manuf. Technol. 65(2), 621–641 (2016) 26. Williams, T.J.: The Purdue enterprise reference architecture. Comput. Ind. 24(2–3), 141–158 (1994) 27. IEC: IEC 62264:2013: ISA95 – Enterprise-Control System Integration. Technical report (2013) 28. Luder, A., Schleipen, M., Schmidt, N., Pfrommer, J., Hensen, R.: One step towards an industry 4.0 component. In: 2017 13th IEEE Conference on Automation Science and Engineering (CASE), pp. 1268–1273. IEEE, Piscataway (2017) 29. Moghaddam, M., Kenley, C.R., Colby, J.M., Berns, M.N.C., Rausch, R., Markham, J., Skeffington, W.M., Garrity, J., Chaturvedi, A.R., Deshmukh, A.V.: Next-generation enterprise architectures: common vernacular and evolution towards service-
orientation. In: Proceedings – 2017 IEEE 15th International Conference on Industrial Informatics, INDIN 2017 (2017)
30. Moghaddam, M., Cadavid, M.N., Kenley, C.R., Deshmukh, A.V.: Reference architectures for smart manufacturing: a critical review. J. Manuf. Syst. 49 (2018)
31. Márkus, A., Kis Váncza, T., Monostori, L.: A market approach to holonic manufacturing. CIRP Ann. 45(1), 433–436 (1996)
32. Van Brussel, H., Wyns, J., Valckenaers, P., Bongaerts, L., Peeters, P.: Reference architecture for holonic manufacturing systems: PROSA. Comput. Ind. 37(3), 255–274 (1998)
33. Leitão, P., Restivo, F.: ADACOR: a holonic architecture for agile and adaptive manufacturing control. Comput. Ind. 57(2), 121–130 (2006)
34. Váncza, J., Monostori, L., Lutters, D., Kumara, S.R., Tseng, M., Valckenaers, P., Van Brussel, H.: Cooperative and responsive manufacturing enterprises. CIRP Ann. Manuf. Technol. 60(2), 797–820 (2011)
35. Levis, A.H.: Systems architecture. Wiley Encyclopedia of Electrical and Electronics Engineering (2001)
36. Buede, D.M., Miller, W.D.: The engineering design of systems: models and methods. John Wiley & Sons, Hoboken (2016)
37. Dove, R., LaBarge, R.: 8.4.1 Fundamentals of Agile Systems Engineering – Part 1 (2014)
38. DIN: DIN SPEC 91345:2016-04, Reference Architecture Model Industrie 4.0 (RAMI4.0). Technical report, DIN Deutsches Institut für Normung e. V., Berlin (2016)
39. Moghaddam, M., Kenley, C.R., Colby, J.M., Berns, M.N.C., Rausch, R., Markham, J., Skeffington, W.M., Garrity, J., Chaturvedi, A.R., Deshmukh, A.V.: Next-generation enterprise architectures: common vernacular and evolution towards service-orientation. In: 2017 IEEE 15th International Conference on Industrial Informatics (INDIN), pp. 32–37. IEEE, Piscataway (2017)
40. Monostori, L., Kádár, B., Bauernhansl, T., Kondoh, S., Kumara, S., Reinhart, G., Sauer, O., Schuh, G., Sihn, W., Ueda, K.: Cyber-physical systems in manufacturing. CIRP Ann. 65(2), 621–641 (2016)
41. IBM: IBM Industry 4.0 Reference Architecture (2018). https://www.ibm.com/cloud/architecture/architectures/iot_industrie_40
42. Lu, Y., Riddick, F., Ivezic, N.: The Paradigm Shift in Smart Manufacturing System Architecture, pp. 767–776. Springer, Cham (2016)
43. Moghaddam, M., Nof, S.Y.: Collaborative service-component integration in cloud manufacturing. Int. J. Prod. Res. 56(1–2), 677–691 (2018)
44. Hollnagel, E., Woods, D.D.: Joint Cognitive Systems: Foundations of Cognitive Systems Engineering. Taylor & Francis, Milton Park (2005)
45. Ko, H.S., Nof, S.Y.: Design and application of task administration protocols for collaborative production and service systems. Int. J. Prod. Eco. 135(1), 177–189 (2012)
46. ISO: ISO 14258:1998 – Industrial automation systems—concepts and rules for enterprise models
47. Ceroni, J.A., Nof, S.Y.: A workflow model based on parallelism for distributed organizations. J. Intell. Manuf. 13(6), 439–461 (2002)
48. Chen, X.W., Nof, S.Y.: Prognostics and diagnostics of conflicts and errors over e-work networks. In: Proceedings of ICPR-19 (2007)
49. Moghaddam, M., Nof, S.Y.: Best-matching with interdependent preferences – implications for capacitated cluster formation and evolution. Decis. Support. Syst. 79, 125–137 (2015)
50. Jeong, W.: Fault-tolerant timeout communication protocols for distributed micro-sensor network systems. PhD Thesis, Purdue University (2006)
51. Reyes Levalle, R., Nof, S.Y.: Resilience by teaming in supply network formation and re-configuration. Int. J. Prod. Eco. 160, 80–93 (2015)
431 52. Levalle, R.R., Nof, S.Y.: A resilience by teaming framework for collaborative supply networks. Comput. Ind. Eng. 90, 67–85 (2015) 53. Levalle, R.R., Nof, S.Y.: Resilience in supply networks: definition, dimensions, and levels. Ann. Rev. Control 43, 224–236 (2017) 54. Zhong, H., Nof, S.Y.: The dynamic lines of collaboration model: Collaborative disruption response in cyber–physical systems. Comput. Ind. Eng. 87, 370–382 (2015) 55. Moghaddam, M.: Best matching processes in distributed systems, Ph.D. Dissertation, Purdue University (2016) 56. Moghaddam, M., Nof, S.Y.: Combined demand and capacity sharing with best matching decisions in enterprise collaboration. Int. J. Prod. Eco. 148, 93–109 (2014) 57. Moghaddam, M., Nof, S.Y.: Best-matching with interdependent preferences—implications for capacitated cluster formation and evolution. Decis. Support. Syst. 79, 125–137 (2015) 58. Moghaddam, M., Nof, S.Y., Menipaz, E.: Design and administration of collaborative networked headquarters. Int. J. Prod. Res. 54(23) (2016) 59. Moghaddam, M., Nof, S.Y.: Real-time optimization and control mechanisms for collaborative demand and capacity sharing. Int. J. Prod. Eco. 171 (2016) 60. Moghaddam, M., Nof, S.Y.: Balanceable assembly lines with dynamic tool sharing and best matching decisions—a collaborative assembly framework. IIE Trans. 47(12), 1363–1378 (2015) 61. Moghaddam, M., Nof, S.Y.: Real-time administration of tool sharing and best matching to enhance assembly lines balanceability and flexibility. Mechatronics 31, 147–157 (2015) 62. Yoon, S.W., Nof, S.Y.: Demand and capacity sharing decisions and protocols in a collaborative network of enterprises. Decis. Support Syst. 49(4), 442–450 (2010) 63. Seok, H., Nof, S.Y.: Dynamic coalition reformation for adaptive demand and capacity sharing. Int. J. Prod. Eco. 147, 136–146 (2014) 64. Yilmaz, I., Yoon, S.W., Seok, H.: A framework and algorithm for fair demand and capacity sharing in collaborative networks. Int. J. Prod. Eco. 193, 137–147 (2017) 65. Seok, H., Nof, S.Y., Filip, F.G.: Sustainability decision support system based on collaborative control theory. Ann. Rev. Control 36(1), 85–100 (2012) 66. Scavarda, M., Seok, H., Puranik, A.S., Nof, S.Y.: Adaptive direct/indirect delivery decision protocol by collaborative negotiation among manufacturers, distributors, and retailers. Int. J. Prod. Eco. 167, 232–245 (2015) 67. Nguyen, W.P.V., Nof, S.Y.: Strategic lines of collaboration in response to disruption propagation (CRDP) through cyber-physical systems. Int. J. Prod. Eco. 230, 107865 (2020) 68. Ajidarma, P., Nof, S.Y.: Collaborative detection and prevention of errors and conflicts in an agricultural robotic system. Stud. Inf. Control 30(1), 19–28 (2021) 69. Dusadeerungsikul, P.O., Nof, S.Y.: A collaborative control protocol for agricultural robot routing with online adaptation. Comput. Ind. Eng. 135, 456–466 (2019) 70. Dusadeerungsikul, P.O., Nof, S.Y.: A cyber collaborative protocol for real-time communication and control in human-robot-sensor work. Int. J. Comput. Commun. Control 16(3) (2021) 71. Dorigo, M., Maniezzo, V., Colorni, A.: Ant system: optimization by a colony of cooperating agents. IEEE Trans. Syst. Man Cybern. B Cybern. 26(1), 29–41 (1996) 72. Hadeli, K.: Bio-inspired multi-agent manufacturing control system with social behaviour. PhD Thesis, Thesis at Katholieke Universiteit Leuven, Department of Department of …(2006) 73. Murray, R.M.: Recent research in cooperative control of multivehicle systems (2007) 74. 
Qu, Z., Wang, J., Hull, R.A.: Cooperative control of dynamical systems with application to autonomous vehicles. IEEE Trans. Auto. Control 53(4), 894–911 (2008)
432 75. Qu, Z.: Cooperative control of dynamical systems: applications to autonomous vehicles. Springer Science & Business Media, Berlin (2009) 76. Wang, Y., Garcia, E., Casbeer, D., Zhang, F.: Cooperative control of multi-agent systems: theory and applications. John Wiley & Sons, Hoboken (2017) 77. Goodfellow, I.: NIPS 2016 Tutorial: Generative Adversarial Networks (2016) 78. Sutton, R.S., Barto, A.G.: Reinforcement learning: an introduction. MIT Press, Cambridge (1998) 79. Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A.A., Veness, J., Bellemare, M.G., Graves, A., Riedmiller, M., Fidjeland, A.K., Ostrovski, G., Petersen, S., Beattie, C., Sadik, A., Antonoglou, I., King, H., Kumaran, D., Wierstra, D., Legg, S., Hassabis, D.: Human-level control through deep reinforcement learning. Nature 518(7540), 529–533 (2015) 80. Silver, D., Huang, A., Maddison, C.J., Guez, A., Sifre, L., Van Den Driessche, G., Schrittwieser, J., Antonoglou, I., Panneershelvam, V., Lanctot, M., Dieleman, S., Grewe, D., Nham, J., Kalchbrenner, N., Sutskever, I., Lillicrap, T., Leach, M., Kavukcuoglu, K., Graepel, T., Hassabis, D.: Mastering the game of Go with deep neural networks and tree search. Nature 529(7587), 484–489 (2016) 81. Silver, D., Hassabis, D.: AlphaGo Zero: starting from scratch (2017) 82. Vinyals, O., Babuschkin, I., Czarnecki, W.M., Mathieu, M., Dudzik, A., Chung, J., Choi, D.H., Powell, R., Ewalds, T., Georgiev, P., Oh, J., Horgan, D., Kroiss, M., Danihelka, I., Huang, A., Sifre, L., Cai, T., Agapiou, J.P., Jaderberg, M., Vezhnevets, A.S., Leblond, R., Pohlen, T., Dalibard, V., Budden, D., Sulsky, Y., Molloy, J., Paine, T.L., Gulcehre, C., Wang, Z., Pfaff, T., Wu, Y., Ring, R., Yogatama, D., Wünsch, D., McKinney, K., Smith, O., Schaul, T., Lillicrap, T., Kavukcuoglu, K., Hassabis, D., Apps, C., Silver, D.: Grandmaster level in StarCraft II using multi-agent reinforcement learning. Nature 575(7782), 350–354 (2019) 83. Zhang, K., Yang, Z., Ba¸sar, T.: Multi-agent reinforcement learning: a selective overview of theories and algorithms. Preprint. arXiv:1911.10635 (2019) 84. Bu¸soniu, L., Babuška, R., De Schutter, B.: Multi-agent reinforcement learning: an overview. In: Innovations in Multi-Agent Systems and Applications-1, pp. 183–221. Springer, Berlin (2010) 85. Gupta, J.K., Egorov, M., Kochenderfer, M.: Cooperative multiagent control using deep reinforcement learning. In: International Conference on Autonomous Agents and Multiagent Systems, pp. 66–83. Springer, Berlin (2017) 86. Egorov, M.: Multi-agent deep reinforcement learning. In: CS231n: Convolutional Neural Networks for Visual Recognition (2016) 87. Nof, S.Y., Morel, G., Monostori, L., Molina, A., Filip, F.: From plant and logistics control to multi-enterprise collaboration. Ann. Rev. Control 30(1), 55–68 (2006) 88. Morel, G., Pereira, C.E., Nof, S.Y.: Historical survey and emerging challenges of manufacturing automation modeling and control: a systems architecting perspective. Ann. Rev. Control 47, 21–34 (2019) 89. Dusadeerungsikul, P. O., Nof, S. Y.: Cyber collaborative warehouse with dual-cycle operations design, Int. J. of Prod. Res., 1–13 (2022)
Mohsen Moghaddam, Ph.D., is an Assistant Professor of Mechanical and Industrial Engineering at Northeastern University, Boston. Prior to joining Northeastern, he was with the GE-Purdue Partnership in Research and Innovation in Advanced Manufacturing as a Postdoctoral Associate. His areas of research interest include cyber-physical manufacturing, human-technology collaboration, user-centered design, artificial intelligence, and machine learning. He is coauthor of over 40 refereed journal articles and two books including Revolutionizing Collaboration Through e-Work, e-Business, and e-Service (Springer, 2015) and Best Matching Theory Applications (Springer, 2017). He also served as a reviewer for several international journals such as Int. J. Production Economics; Int. J. Production Research; J. Intelligent Manufacturing, Computers Industrial Engineering; Decision Support Systems; Computers in Industry; IEEE Trans. on Industrial Informatics; IEEE Trans. on Systems, Man, Cybernetics; and IEEE Trans. on Automation Science and Engineering. His scholarly research is supported by the US National Science Foundation and industry.
Shimon Y. Nof, Ph.D., D.H.C., is a Professor of Industrial Engineering and the director of the NSF-industry supported PRISM Center, Purdue University. Has held visiting positions at MIT and universities in Chile, EU, Hong Kong, Israel, Japan, Mexico, Philippines, and Taiwan. Received his B.Sc. and M.Sc. in Industrial Engineering and Management (Human-Machine Systems), Technion, Haifa, Israel; Ph.D. in Industrial and Operations Engineering, University of Michigan, Ann Arbor. In 2002, he was awarded with the Engellerger Medal for Robotics Education. Fellow of IISE; Fellow, President and former Secretary General of IFPR; former Chair of IFAC CC-Mfg. and Logistics systems, Dr. Nof has published over 550 articles on production engineering and cyber technology and is the author/editor of sixteen books, including the Handbook of Industrial Robotics (Wiley, 1985) and the International Encyclopedia of Robotics (Wiley, 1988), both winners of the “Most Outstanding Book in Science and Engineering” award by the Association of American Publishers; Industrial Assembly; the Handbook of Industrial Robotics (2nd Edition) (Wiley, 1999); the Handbook of Automation (Springer, 2009); Cultural Factors in Systems Design: Decision Making and Action (CRC press, 2012); Laser and Photonic Systems: Design and Integration (CRC Press, 2014); Revolutionizing Collaboration through e-Work, e-Business, and e-Service in the Automation, Collaboration, and E-Services (ACES) Book Series (Springer, 2015); and Best Matching Theory and Applications (ACES, Springer 2017).
Design for Human-Automation and Human-Autonomous Systems
19
John D. Lee and Bobbie D. Seppelt
Contents
19.1 Introduction . . . 433
19.2 The Variety of Automation and Increasingly Autonomous Systems . . . 434
19.3 Challenges with Automation and Autonomy . . . 435
19.3.1 Changes in Feedback . . . 436
19.3.2 Changes in Tasks and Task Structure . . . 437
19.3.3 Operators' Cognitive and Emotional Response . . . 438
19.4 Automation and System Characteristics . . . 439
19.4.1 Authority and Autonomy at Information Processing Stages . . . 440
19.4.2 Complexity, Observability, and Directability . . . 440
19.4.3 Timescale and Multitasking Demands . . . 441
19.4.4 Agent Interdependencies . . . 441
19.4.5 Environment Interactions . . . 442
19.5 Automation Design Methods and Application Examples . . . 442
19.5.1 Human-Centered Design . . . 442
19.5.2 Fitts' List and Function Allocation . . . 444
19.5.3 Operator-Automation Simulation . . . 445
19.5.4 Enhanced Feedback and Representation Aiding . . . 446
19.5.5 Expectation Matching and Simplification . . . 447
19.6 Future Challenges in Automation Design . . . 449
19.6.1 Swarm Automation . . . 449
19.6.2 Operator–Automation Networks . . . 449
References . . . 450
J. D. Lee, Department of Industrial and Systems Engineering, University of Wisconsin-Madison, Madison, WI, USA, e-mail: [email protected]
B. D. Seppelt, Autonomous Vehicles, Ford Motor Company, Dearborn, MI, USA, e-mail: [email protected]
© Springer Nature Switzerland AG 2023, S. Y. Nof (ed.), Springer Handbook of Automation, Springer Handbooks, https://doi.org/10.1007/978-3-030-96729-1_19

Abstract
Designers frequently look toward automation as a way to increase system efficiency and safety by reducing human involvement. This approach can disappoint because the contribution of people often becomes
more, not less, important as automation becomes more powerful and prevalent. More powerful automation demands greater attention to its design, supervisory responsibilities, system maintenance, software upgrades, and automation coordination. Developing automation without consideration of the human operator can lead to new and more catastrophic failures. For automation to fulfill its promise, designers must avoid a technologycentered approach that often yields strong but silent forms of automation, and instead adopt an approach that considers the joint operator-automation system that yields more collaborative, communicative forms of automation. Automation-related problems arise because introducing automation changes the type and extent of feedback that operators receive, as well as the nature and structure of tasks. Also, operators’ behavioral, cognitive, and emotional responses to these changes can leave the system vulnerable to failure. No single approach can address all of these challenges because automation is a heterogeneous technology. There are many types and forms of automation and each poses different design challenges. This chapter describes how different types of automation place different demands on operators. It also presents strategies that can help designers achieve promised benefits of automation. The chapter concludes with future challenges in automation design and human interaction with increasingly autonomous systems. Keywords
Keywords: Automation design · Vehicle automation · Mental models · Supply chains · Trust
19.1 Introduction
Designers often view automation as the path toward greater efficiency and safety. In many cases, automation does deliver these benefits. In the case of the control of cargo ships
and oil tankers, automation has made it possible to operate a vessel with as few as 8–12 crew members, compared with the 30–40 that were required 50 years ago [1]. In the case of aviation, automation has reduced flight times and increased fuel efficiency [2]. Similarly, automation in the form of decision-support systems has been credited with saving millions of dollars in guiding policy and production decisions [3]. Machine learning and artificial intelligence (AI) have become a central part of many businesses, such as the algorithms that manage Facebook news feeds and the algorithms that help stylists select clothing at StitchFix. Automation promises greater efficiency, lower workload, and fewer human errors; however, these promises are not always fulfilled. A common fallacy is that automation can improve system performance by eliminating human variability and errors. This fallacy often leads to mishaps that surprise operators, managers, and designers. As an example, the cruise ship Royal Majesty ran aground because the global positioning system (GPS) signal was lost and the position estimation reverted to position extrapolation based on speed and heading (dead reckoning). For over 24 h the crew followed the compelling electronic chart display and did not notice that the GPS signal had been lost or that the position error had been accumulating. The crew failed to heed indications from boats in the area, lights on the shore, and even salient changes in water color that signal shoals. The GPS failure was discovered only when the ship ran aground [4, 5]. As this example shows, automation does not guarantee improved efficiency and error-free performance. For automation to fulfill its promise, we must focus not on the design of the automation, but on designing the joint human-automation system. A study of 3000 organizations in 29 industries found that only 11% had seen a sizable return on their investment in AI. In contrast, 73% of those organizations that blended AI and human capabilities into a team that learns reported a sizable return on their investment [6]. Automation often fails to provide expected benefits because it does not simply replace the human in performing a task, but also transforms the job and introduces a new set of tasks and organizational requirements [7]. One way to view the automation failure that led to the grounding of the Royal Majesty is that it was simply a malfunction of an otherwise well-designed system – a problem with the technical implementation. Another view is that the grounding occurred because the interface design failed to support the new navigation task and failed to counteract a general tendency for people to over-rely on generally reliable automation – a problem with human-technology integration. Although it is often easiest to blame automation failures on technical problems or human errors, many problems result
from a failure to consider the challenges of designing not just the automation, but a joint human-automation system. Automation fails because the role of the person performing the task is often underestimated, particularly the need for people to compensate for the unexpected. Although automation can handle typical cases it often lacks the flexibility of humans to handle unanticipated situations. In most applications, neither the human nor the automation can accommodate all situations – each has limits. Avoiding these failures requires a design process with a focus on the joint humanautomation team. Automation design itself can be viewed as people controlling a system at a distance. In automation design, developers must anticipate, understand, and define the roles of human operators and automation. Complex settings make it difficult for designers to anticipate how their automation design choices will perform, particularly how people will work with it. Successful automation design must empower the operator to compensate for the limits of the automation as well as capitalize on its capabilities. This chapter describes automation and increasingly autonomous systems and some of the problems they frequently encounter. It then describes how these problems relate to types of automation and what design strategies can help designers achieve the promise of automation. The chapter concludes with future challenges in automation design. Figure 19.1 provides an overview of the chapter and a general approach for achieving the promise of automation. This approach begins by understanding the challenges posed by automation and recognizing the diversity of automation. This then leads to applying design methods suited to the characteristics of a particular instance of automation.
19.2 The Variety of Automation and Increasingly Autonomous Systems
Automation and increasingly autonomous systems encompass an expanding variety of technologies. Unlike the automation and telerobotics of 50 years ago, which were relegated to technical domains, such as process control and aviation, today automation in its various forms is moving into daily life. Algorithms govern the news we see, guide the cars we drive, and even inform parole and policing decisions [8, 9]. Factory automation that was once separated from people is moving into closer relationships, such as robots that collaborate with people or exoskeletons that people wear [10, 11]. Some automation even physically melds with our bodies and senses, such as prostheses that are coupled to the brain or augmented and virtual reality that allows you to see and hear a different world [12–14]. Automation can extend human capabilities profoundly.
Fig. 19.1 Automation challenges, characteristics, and design methods (Challenges: changes in feedback, changes in task structure, cognitive and emotional response. Characteristics: authority and autonomy; complexity, observability, and directability; timescale and multitasking demands; agent interdependencies; environment interactions. Methods: human-centered design; Fitts' list and function allocation; operator-automation simulation; enhanced feedback and representation aiding; expectation matching and simplification)
Some forms of automation act not as extensions of our capability, but as supervisors or coaches. Such automation can monitor drivers to detect indications of distraction, drowsiness, or other impairments [15, 16]. Wearable technology can detect falls and call for aid, and even remind people to wash their hands. As work and teaching move out of the office and classroom to the home, such automation can coach and police behavior by tracking productivity and cheating [14, 17]. Such automation raises concerns about the ethics of privacy and fairness that go beyond the more typical focus on performance and safety. In all of these instances, automation does not eliminate the person; it amplifies and extends human influence. This increasing influence reflects the greater span of control that automation gives operators, as well as the greater influence developers' decisions have on how automation acts in the world. Increasingly autonomous automation increases the degree to which developers can directly contribute to mishaps. Consequently, automation can amplify "human errors" of both developers and operators. The role of "human error" in mishaps shifts from the traditional focus on "operator error" to "automation designer error". The balance of this chapter describes potential problems and design strategies to ensure automation provides the benefits designers expect.
19.3 Challenges with Automation and Autonomy
Automation is often designed and implemented with a focus on the technical aspects of sensors, algorithms, and actuators. These are necessary but not sufficient design considerations to ensure that automation enhances system performance. Such a technology-centered design approach often confronts the human operator with challenges that lead
to system failures. Because automation often dramatically extends operators’ influence on systems (e.g., automation makes it possible for one person to do the work of ten), the consequences of these failures can be catastrophic. The factors underlying these failures are complex and interacting. Failures arise because introducing automation changes the type and extent of feedback that operators receive, as well as the nature and structure of tasks. Also, operators’ behavioral, cognitive, and emotional responses to these changes can leave the system vulnerable to failure. A technology-centered approach to automation design often ignores these challenges, and as a consequence, fails to realize the promise of automation. The quest for “autonomous systems” often accentuates the problems of a technology-centered approach. Autonomy is often framed in terms of the ability of a system to operate without the need for human involvement. In this context, autonomy is pursued to eliminate human error, variability, and costs; however, autonomous systems do not eliminate people, they simply change the roles of people. Ironically, these new roles often make the occasional human error more consequential. Demonstrations of autonomy often neglect the network of sensors, algorithms, and people needed to support autonomous systems, particularly when such systems confront situations that violate the developers’ assumptions [18]. Rather than considering autonomy as an aspirational endpoint, it may be more useful to consider systems as “increasingly autonomous” with a realization that increasing autonomy makes the reduced human input more, not less, consequential [19]. In the case of algorithmic trading in financial markets, an entry error, a spreadsheet typo, or a poorly specified algorithm can now do billions of dollars of damage where a single person’s influence without algorithmic trading is much more limited. In addition, autonomy suggests a system that is self-governing and free
to act independently of people. Such independence brings new challenges associated with aligning the goals and intentions of the technology with those of the people that it affects [20–22]. Many of these challenges can be grouped into three areas: changes in feedback, changes in the task structure, and the cognitive and emotional response to these changes. Greater autonomy is needed in some situations, such as space exploration, where distance precludes direct control; this need helps to clarify the tradeoffs associated with increasing autonomy. Specifically, autonomy can be considered in terms of autonomy from whom? This implies that some degree of control will be lost for some people. Communication delays, communication bandwidth, and control delays may all require greater autonomy and an associated loss of some control by some people. Also, autonomy can be considered in terms of autonomy to do what? As with people in an organization, autonomy is granted with limits and with people deciding the purpose of the increasingly autonomous automation. Chapter 51 discusses issues of autonomy and space exploration in more detail. Here we consider how increasingly autonomous systems can be crafted so they enhance rather than diminish human agency, avoiding the false dichotomy that automation autonomy necessarily reduces human autonomy.
19.3.1 Changes in Feedback

Feedback is central to control. One reason why automation fails is that automation often dramatically changes the type and extent of the feedback the operator receives. In the context of driving a car, the driver keeps the car in the center of the lane by adjusting the steering wheel according to visual feedback regarding the position of the car on the road and haptic feedback from the forces on the steering wheel. Emerging vehicle technology may automate lane keeping, but in doing so, it might remove haptic cues, thereby leaving the driver with only visual cues. Diminished or eliminated feedback is a common occurrence with automation and it can leave people less prepared to intervene if manual control is required [23, 24]. Automation can eliminate and change feedback in a way that can undermine the ability of people to work with automation to control the system. Automation often replaces the feedback available in manual control with qualitatively different feedback. As an example, introducing automation into paper-making plants moved operators from the plant floor and placed them in control rooms. This move distanced them from the physical process and eliminated the informal feedback associated with vibrations, sounds, and smells that many operators relied upon [25]. At best, this change in cues requires operators to relearn
how to control the plant. At worst, the instrumentation and associated displays may not have the information needed for operators to diagnose automation failures and intervene appropriately. Automation can also qualitatively shift the feedback from raw system data to processed, integrated information. Although such integrated data can be simple and easily understood, particularly during routine situations, it may also lack the detail necessary to detect and understand system failures. As an example, the bridge of the cruise ship Royal Majesty had an electronic chart that automatically integrated inertial and GPS navigation data to show operators their position relative to their intended path. This high-level representation of the ship's position remained on the intended course even when the underlying GPS data were no longer used and the ship's actual position drifted many miles off the intended route. In this case, the lack of low-level data and any indication of the integrated data quality left operators without the feedback they needed to diagnose and respond to the failures of the automation. The diminished feedback that accompanies automation often has a direct influence on a mishap, as in the case of the Royal Majesty. However, diminished feedback can also act over a longer period to undermine operators' ability to perform tasks. In situations in which the automation takes on the tasks previously assigned to the operator, the operator's skills may atrophy as they go unexercised [26]. Operators with substantial previous experience and well-developed mental models detect disturbances more rapidly than operators without this experience, but extended periods of monitoring automatic control may undermine skills and diminish operators' ability to generate expectations of correct behavior [24]. Such deskilling leaves operators without the skills to accommodate the demands of the job if they need to assume manual control. This is a particular concern in aviation, where pilots' aircraft handling skills may degrade when they rely on the autopilot. In response, some pilots disengage the autopilot and fly the aircraft manually to maintain their skills [27, 28]. One approach to combat the potential for disengagement and deskilling is to amplify rather than mask indications that the automation is approaching its operating boundary [29, 30]. More generally, designers need to consider how to design feedback that communicates what the automation is doing and what it will do next to help operators decide if and when to intervene [31]. Automation design requires the specification of sensor, algorithm, and actuator characteristics and their interactions. A technology-centered approach might stop there; however, automation that works effectively requires specification of the feedback provided to its operators. Without careful consideration of feedback mechanisms, implementing automation can eliminate and change feedback operators receive in a way that can undermine the ability of automation to enhance overall task performance.
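To make the feedback gap concrete, the short sketch below is a minimal illustration, not an analysis from the chapter: the ship speed, heading bias, and drift rate are assumed values. It shows how a dead-reckoned position estimate can drift steadily off track once GPS corrections silently stop, while a chart display that plots only the integrated estimate continues to look perfectly on course.

```python
import math

def cross_track_error_m(speed_mps, heading_bias_deg, drift_mps, hours):
    """Dead reckoning: extrapolate position from speed and heading alone.
    A small heading bias plus an unmodeled current produces a cross-track
    error that grows linearly with time once GPS corrections are lost."""
    bias_rad = math.radians(heading_bias_deg)
    cross_track_speed = speed_mps * math.sin(bias_rad) + drift_mps
    return cross_track_speed * hours * 3600.0

# Illustrative numbers only: a ship at roughly 14 kn with a 1 degree heading
# bias and a 0.3 m/s unmodeled current. A display that shows only the
# integrated estimate would keep the vessel on its intended track.
for hours in (1, 6, 12, 24):
    err_nm = cross_track_error_m(7.2, 1.0, 0.3, hours) / 1852.0
    print(f"{hours:2d} h without GPS fixes -> ~{err_nm:4.1f} nautical miles off track")
```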
19.3.2 Changes in Tasks and Task Structure

One reason for automation is that it can relieve operators of repetitive, labor-intensive, and error-prone tasks. Frequently, however, the situation becomes more complex because automation does not simply relieve the operator of tasks, it changes the nature of tasks and adds new tasks. This often means that automation requires new skills of operators. Often, automation eliminates simple physical tasks and leaves complex cognitive tasks that may appear easy. These superficially easy, yet complex, tasks often lead organizations to place less emphasis on training. On ships, training and certification unmatched to the demands of the automation have led to accidents because of the operators' misunderstanding of new radar and collision avoidance systems [32]. For example, on the exam used by the US Coast Guard to certify radar operators, 75% of the items assess skills that have been automated and are not required by the new technology [33]. The new technology makes it possible to monitor a greater number of ships, thereby increasing the need for interpretive skills such as understanding the rules of the road that govern maritime navigation and automation. These are the very skills that are underrepresented on the Coast Guard exam. Though automation may relieve the operator of some tasks, it often leads to new and more complex tasks that require more, not less, training. Automation can also change the nature and structure of tasks so that easy tasks are made easier and hard tasks harder – a phenomenon referred to as clumsy automation [34]. As Bainbridge [35] notes, one of the ironies of automation is that designers often leave the operator with the most difficult tasks because the designers found them difficult to automate. Because the easy tasks have been automated, the operator has less experience and an impoverished context for responding to the difficult tasks. In this situation, automation has the effect of both reducing workload during already low-workload periods and increasing it during high-workload periods; for example, a flight management system tends to make the low-workload phases of flight (e.g., straight and level flight or a routine climb) easier, but high-workload phases (e.g., the maneuvers in preparation for landing) more difficult as pilots have to share their time between landing procedures, communication, and programming the flight management system. Such effects are seen not only in aviation but in many other domains, such as the operating room [36, 37]. The effects of clumsy automation often occur at the micro level of individual operators and over several minutes, but such effects can also occur across teams of operators over hours or days of operation. Such macro-level clumsy automation is evident in maritime operations, where automation used for open-ocean sailing reduces the task requirements of the crew, prompting reductions in the crew size. In this situation,
automation can have the consequence of making the easy part of the voyage (e.g., open-ocean sailing) easier and the hard part (e.g., port activities) harder [38]. Avoiding clumsy automation requires a broad consideration of how automation affects both the micro and macro task structure of operators. Because automation changes the task structure, new forms of human error often emerge. Ironically, managers and system designers introduce automation to eliminate human error, but new and more disastrous errors often result, in part because automation extends the scope of and reduces the redundancy of human actions. As a consequence, human errors may be more likely to go undetected and do more damage; for example, a flight-planning system for pilots can induce dramatically poor decisions because the automation assumes weather forecasts represent reality and lacks the flexibility to consider situations in which the actual weather might deviate from the forecast [39]. This designer-induced automation fragility becomes a more prominent threat as systems become increasingly autonomous. Automation-induced errors also occur because the task structure changes in a way that undermines collaboration between operators. Effective system performance involves performing both formal and informal tasks. Informal tasks enable operators to compensate for the limits of the formal task structure; for example, with paper charts mariners will check each other's work, share uncertainties, and informally train each other as positions are plotted [40]. Eliminating these informal tasks can make it more difficult to detect and recover from errors, such as the one that led to the grounding of the Royal Majesty. Automation can also disrupt the cooperation between operators reflected in these informal tasks. Cooperation occurs when a person acts in a way that is in the best interests of the group even when it is contrary to his or her own best interests. Most complex, multiperson systems depend on cooperation. Automation can disrupt interactions between people and undermine the ability and willingness of one operator to compensate for another. Because automation also acts on behalf of people, it can undermine cooperation by giving one operator the impression that another operator is acting competitively, such as when the automation's behavior is due to a malfunction rather than the operator's intent [41]. Automation does not simply eliminate tasks once performed by the operator: it changes the task structure and creates new tasks that need to be supported, thereby opening the door to new types of errors. When automation is introduced into a dynamic task environment (e.g., driving, aviation, ship navigation, etc.), additional tasks are introduced that the person must perform [24, 35, 36]. Still part of the overall system, the person cannot simply or complacently relegate tasks to automation: they must supervise the automation. New skills are required in the role of supervisor. Supervision of automation in a dynamic, uncertain environment involves information integration and analysis, system expertise, analytical
decision-making, sustained attention, and maintenance of manual skill [27, 42]. Contrary to the expectations of a technology-centered approach to automation design, introducing automation makes it more rather than less important to consider the operators' tasks and added roles.
19.3.3 Operators’ Cognitive and Emotional Response Automation sometimes causes problems because it changes operators’ feedback and tasks. Operators’ cognitive and emotional responses to these changes can amplify these problems; for example, as automation changes the operator’s task from direct control to monitoring, the operator may be more prone to direct attention away from the monitoring task, further diminishing the feedback the operator receives from the system. The tendency to overtrust and complacently rely on automation, particularly during multitasking situations, may underlie this tendency to disengage from the monitoring task [43–45]. People are not passive recipients of the changes to the task structure that automation makes. Instead, people adapt to automation and this adaptation leads to a new task structure. One element of this adaptation is captured by the ideas of reliance and compliance [46]. Reliance refers to the degree to which operators depend on automation to perform a function. Compliance refers to the degree to which automation changes the operators’ response to a situation. Inappropriate reliance and compliance are common automation problems that occur when people rely on or comply with automation in situations where it performs poorly, or when people fail to capitalize on its capabilities [47]. Maladaptive adaptation generally, and inappropriate reliance specifically, depends, in part, on operators’ attitudes, such as trust and self-confidence [48, 49]. In the context of operator reliance on automation, trust has been defined as an attitude that the automation will help achieve an operator’s goals in a situation characterized by uncertainty and vulnerability [50]. Trust is relevant to describing the relationship between people and automation because people respond socially to technology in a way that is similar to how they respond to other people [51]. Just as trust mediates relationships between people, it may also mediate relationships between people and automation [52, 53, 54–58]. Many studies have shown that trust is a useful concept in describing human-automation interaction, both in naturalistic [25] and in laboratory settings [59–61]. As an example, the difference in operators’ trust in a route-planning aid and their selfconfidence in their ability was highly predictive of reliance on the aid [62]. This and other studies show that people tend to rely on automation they trust and to reject automation they do not trust [50].
Inappropriate reliance often stems from a failure of trust to match the true capabilities of the automation. Trust calibration refers to the correspondence between a person’s trust in the automation and the automation’s capabilities [50]. Overtrust is poor calibration in which trust exceeds system capabilities; with distrust, trust falls short of automation capabilities. Trust often responds to automation as one might expect; it increases over time as automation performs well and declines when automation fails. Importantly, however, trust does not always follow the changes in automation performance. Often, it is poorly calibrated. Trust displays inertia and changes gradually over time rather than responding immediately to changes in (automation) performance. After a period of unreliable performance, trust is often slow to recover, remaining low even when the automation performs well [60]. More surprisingly, trust sometimes depends on surface features of the system that seem unrelated to its capabilities, such as the colors and layout of the interface [63–65]. Attitudes such as trust and the associated influence on reliance can exacerbate automation problems such as clumsy automation. As noted earlier, clumsy automation occurs when automation makes easy tasks easier and hard tasks harder. Inappropriate trust can make automation more clumsy because it leads operators to be more willing to delegate tasks to the automation during periods of low workload, compared with periods of high workload [35]. This observation demonstrates that clumsy automation is not simply a problem of task structure, but one that depends on operator adaptation that is mediated by attitudes, as with trust. The automation-related problems associated with inappropriate trust often stem from operators’ shifts from being a direct controller to a monitor of the automation. This shift also changes how operators perceive feedback. Automation shifts people from direct involvement in the perception– action loop to supervisory control [66, 67]. Passive observation that is associated with supervisory control is qualitatively different than active monitoring associated with manual control [68, 69]. In manual control, perception directly supports control, and control actions guide perception [70]. Monitoring automation disconnects the operators’ actions from actions on the system. Such disconnects can undermine the operator’s mental model (i.e., their working knowledge of system dynamics, structure, and causal relationships between components), leaving the mental model inadequate to guide expectations and control [71, 72]. The shift from being a direct controller to a supervisory controller can also have subtle, but important, effects on behavior as operators adapt to the automation. Over time automation can unexpectedly shift operators’ safety norms and behavior relative to safety boundaries. Behavioral adaptation describes this effect and refers to the tendency of operators
to adapt to the new capabilities of automation, in which they change their behavior so that the potential safety benefits of the technology are not realized. Automation intended by designers to enhance safety may instead lead operators to reduce effort and leave safety unaffected or even diminished. Behavioral adaptation can occur at the individual [73–75], organizational [76], and societal levels [77]. Antilock brake systems (ABS) for cars demonstrate behavioral adaptation. ABS automatically modulates brake pressure to maintain maximum brake force without skidding. This automation makes it possible for drivers to maintain control in extreme crash avoidance maneuvers, which should enhance safety. However, ABS has not produced expected safety benefits. One reason is that drivers of cars with ABS tend to drive less conservatively, adopting higher speeds and shorter following distances [78]. Vision enhancement systems provide another example of behavioral adaption. These systems make it possible for drivers to see more at night – a potential safety enhancement; however, drivers tend to adapt to the vision systems by increasing their speed [79]. A related form of behavioral adaptation that undermines the benefits of automation is the phenomenon in which the presence of automation causes diffusion of responsibility and a tendency to exert less effort when the automation is available [80, 81]. As a result, people tend to commit more omission errors (failing to detect events not detected by the automation) and more commission errors (incorrectly concurring with erroneous detection of events by the automation) when they work with automation. This effect parallels the adaptation of people when they work in groups; diffusion of responsibility leads people to perform more poorly when they are part of a group compared to individually [82]. The issues noted above have primarily addressed the direct performance problems associated with automation. Job satisfaction is another human–automation interaction issue that goes well beyond performance to consider the morale of workers whose job is being changed by automation [83]. Automation that is introduced merely because it increases the profit of the company may not necessarily be well received. Automation often has the effect of deskilling a job, making skills that operators worked for years to perfect suddenly obsolete. Properly implemented, automation should reskill workers and make it possible for them to leverage their old skills into new ones that are extended by its presence. Many operators are highly skilled and proud of their craft; automation can either empower or demoralize them [25]. Demoralized operators may fail to capitalize on the potential of an automated system. The cognitive and emotional response of operators to automation can also compromise their health. If automation creates an environment in which the demands of the work increase, but the decision latitude decreases, it may then lead
to problems ranging from increased heart disease to increased incidents of depression [84]. However, if automation extends the capability of the operator and gives them greater agency and decision latitude, then job satisfaction and health can improve. As an example of improved satisfaction, night-shift operators, who had greater decision latitude than day-shift operators, leveraged their increased latitude to learn how to manage the automation more effectively [25]. Increasingly autonomous automation can undermine or enhance users’ autonomy. Automation that enhances the autonomy of people brings benefits of greater system performance as well as health and satisfaction of operators. Automation problems can be described independently, but they often reflect an interacting and dynamic process [85]. One problem can lead to another through positive feedback and vicious cycles. As an example, inadequate training may lead the operator to disengage from the monitoring task. This disengagement leads to poorly calibrated trust and overreliance, which in turn leads to skill loss and further disengagement. A similar dynamic exists between clumsy automation and automation-induced errors. Clumsy automation produces workload peaks, which increase the chance of mode and configuration errors. Recovering from these errors can further increase workload, and so on. Designing and implementing automation without regard for human capabilities and defining the human role as a byproduct is likely to initiate these negative dynamics.
19.4 Automation and System Characteristics
The likelihood and consequences of automation-related problems depend on the characteristics of the automation and the system being controlled. Automation is not a homogeneous technology. Instead, there are many types of automation and each poses different design challenges. As an example, automation can highlight, alert, filter, interpret, decide, and act for the operator. It can assume different degrees of control and can operate over timescales that range from milliseconds to months. The type of automation and the operating environment interact with the human to produce the problems just discussed. As an example, if only a single person manages the system then diminished cooperation and collaboration are not a concern. Some important system and automation characteristics include:
• Authority and autonomy of information processing stages
• Complexity, observability, and directability
• Timescale and multitasking demands
• Agent interdependencies
• Interaction with the environment
19.4.1 Authority and Autonomy at Information Processing Stages

Defining automation in terms of information processing stages describes it according to the information processing functions of the person that it supports or replaces. Automation can sense the world, analyze information, identify appropriate responses to states of the world, or control actuators to change those states [86, 87]. Information acquisition automation refers to technology that replaces the process of human perception. Such automation highlights targets [88, 89], provides alerts and warnings [90, 91], organizes, prioritizes, and filters information. Information analysis automation refers to technology that supplants the interpretation of a situation. An example of this type of automation is a system that critiques a diagnosis generated by the operator [92]. Action selection automation refers to technology that combines information to make decisions on behalf of the operator. Unlike information acquisition and analysis, action selection automation suggests or decides on actions using assumptions about the state of the world and the costs and values of the possible options [87]. Action implementation automation supplants the operators' activity in executing a response. The types of automation at each of these four stages of information processing can differ according to the degree of authority and autonomy. Automation authority and autonomy concern the degree to which automation can influence the system [93]. Authority reflects the extent to which the automation amplifies the influence of operators' actions and overrides the actions of other agents. High-authority automation has the power to control. One facet of authority concerns whether or not operators interact with automation by switching between manual and automatic control. With some automation, such as adaptive cruise control in cars, drivers simply engage or disengage the automation, whereas automation on the flight deck involves managing a complex network of modes that are appropriate for some situations and not for others. Interacting with such flight-deck automation requires the operator to coordinate multiple goals and strategies to select the mode of operation that fits the situation [94]. With such multilevel automation, the idea of manual control may not be relevant, and so the issues of skill loss and other challenges with manual intervention may be of less concern. In these situations, the concern becomes whether the design supports automation management. The problems with high-authority, multilevel automation are more likely to be those associated with mode confusion and configuration errors. Autonomy reflects the degree to which automation acts without operator input or the opportunity to intervene. Billings [28] describes two levels of autonomy: management by consent, in which the automation acts only with the consent of the operator, and management by exception,
in which automation initiates activities autonomously. As another example, automation can either highlight targets [89], filter information, or provide alerts and warnings [90]. Highlighting targets exemplifies a relatively low degree of autonomy because it preserves the underlying data and allows operators to guide their attention to the information they believe to be most critical. Filtering exemplifies a higher degree of automation autonomy because operators are forced to attend to the information the automation deems relevant. Alerts and warnings similarly exemplify a relatively high level of autonomy because they guide the operator’s attention to automation-dictated information and environmental states. High levels of authority and autonomy can make automation appear to act as an independent agent, even if the designers had not intended operators to perceive it as such [95]. High levels of these two automation characteristics are an important cause of clumsy automation and mode error and can also undermine cooperation between people [96].
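These stage and autonomy distinctions can be captured directly in a design artifact. The sketch below is only an illustration of the taxonomy described above; the specific functions and the numeric authority ratings are made-up examples, not values from the chapter.

```python
from dataclasses import dataclass
from enum import Enum

class Stage(Enum):
    """Information-processing stages that automation can support or replace."""
    INFORMATION_ACQUISITION = 1   # sensing, highlighting, filtering, alerting
    INFORMATION_ANALYSIS = 2      # integrating and interpreting a situation
    ACTION_SELECTION = 3          # deciding or suggesting what to do
    ACTION_IMPLEMENTATION = 4     # executing the response

class Autonomy(Enum):
    """Coarse degrees of autonomy, after Billings' distinction."""
    MANAGEMENT_BY_CONSENT = 1     # acts only with operator consent
    MANAGEMENT_BY_EXCEPTION = 2   # acts unless the operator intervenes

@dataclass
class AutomationFunction:
    name: str
    stage: Stage
    autonomy: Autonomy
    authority: float  # 0..1, illustrative: how strongly it can override others

profile = [
    AutomationFunction("target highlighting", Stage.INFORMATION_ACQUISITION,
                       Autonomy.MANAGEMENT_BY_CONSENT, authority=0.2),
    AutomationFunction("collision warning", Stage.INFORMATION_ANALYSIS,
                       Autonomy.MANAGEMENT_BY_EXCEPTION, authority=0.5),
    AutomationFunction("adaptive cruise control", Stage.ACTION_IMPLEMENTATION,
                       Autonomy.MANAGEMENT_BY_EXCEPTION, authority=0.8),
]

for f in profile:
    print(f"{f.name}: stage={f.stage.name}, autonomy={f.autonomy.name}, "
          f"authority={f.authority}")
```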
19.4.2 Complexity, Observability, and Directability

Complexity, observability, and directability refer to the degrees of freedom resolved by the automation, how that complexity is revealed to the operator, and the degree to which the operator can influence how the automation resolves degrees of freedom [93]. As automation becomes increasingly complex, it can transition from what operators might consider a tool that they use to act on the environment to an agent that acts as a semiautonomous partner. According to the agent metaphor, the operator no longer acts directly on the environment but acts through an intermediary agent [97] or intelligent associate [98]. As an agent, automation initiates actions that are not in direct response to operators' commands. Automation that acts as an agent is typically very complex and may or may not be observable. Observable automation reveals what it is doing, what it is planning, and why it is acting as it is. One of the greatest challenges with automated agents is that of mutual intelligibility. Instructing the agent to perform even simple tasks can be onerous, and agents that try to infer operators' intent and act autonomously can surprise operators, who might lack accurate mental models of agent behavior. One approach is for the agents to learn and adapt to the characteristics of the operator through a process of remembering what they have been told to do in similar situations [99]. After the agent completes a task, it can be equally challenging to make the results observable and meaningful to the operator [97]. Because of these characteristics, agents are most useful for highly repetitive, simple, low-risk activities, where the cost of failure is limited. In high-risk situations, constructing effective management strategies and
providing feedback to clarify agent intent and communicate behavior becomes critical [100]. The challenges associated with agents reflect a general tradeoff with automation design: more complex automation is often more capable, but less understandable. As a consequence, even though more complex automation may appear superior, the performance of the resulting human-automation system may be inferior to that of a simpler, less capable version of the automation. Directability concerns the degree to which the operator can influence the behavior of the automation. At a high level, this involves empowering people to align the intent, or more technically, the cost function, of the automation with their intent and goals. Directable automation operates on a hierarchy of goals, subgoals, intents, and trajectories that are consistent with how people think about the activity, and can be adjusted at each level by people. For example, spell check is directable in that it can be easily overridden either at the level of the immediate intent (rejecting a suggested change) or at the level of the overall goal (e.g., changing the dictionary). Directable automation makes it possible for people to actively explore its boundaries, not passively receive an explanation. In machine learning (ML) and Artificial Intelligence (AI), which underlies much automation, substantial effort has been devoted to explaining the output of ML models [101]. The emerging field of explainable AI can be thought of as making it more observable. Beyond explanation, the concept of directability merits consideration as a way of helping people not just understand the automation but to enable people to guide it toward more robust behavior. Another approach is to create a simpler, inherently interpretable model. Most data analytic problems include several models (e.g., a deep learning neural net and a simple decision tree) that can produce predictions that are close to optimal – they are part of a Rashomon set [102]. Of the members of this set, the interpretable model might have slightly lower predictive accuracy. However, this simpler, more interpretable model will help people detect assumption violations and detect when the situation drifts away from the situation that produced the training data. For this reason, it is often better to search for a simple ML solution rather than try to explain a complex one [102]. AI, machine learning, and automation more generally, often benefit by reducing complexity and by making it more observable and directable.
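A minimal sketch of the Rashomon-set argument, assuming scikit-learn is available: if a simple, inspectable model performs within a small tolerance of a more complex one, prefer the simple model. The synthetic dataset, the two candidate models, and the 2% tolerance are illustrative choices, not recommendations from the chapter.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

# Candidates are listed from simplest to most complex.
candidates = {
    "shallow decision tree (interpretable)": DecisionTreeClassifier(max_depth=3),
    "random forest (harder to inspect)": RandomForestClassifier(
        n_estimators=200, random_state=0),
}

scores = {name: cross_val_score(model, X, y, cv=5).mean()
          for name, model in candidates.items()}
best = max(scores.values())
tolerance = 0.02   # illustrative: accept up to two points of accuracy loss

# Prefer the first (simplest) candidate whose accuracy is within tolerance
# of the best candidate -- one practical reading of the Rashomon-set idea.
for name, score in scores.items():
    if best - score <= tolerance:
        print(f"selected: {name} (accuracy {score:.3f}, best {best:.3f})")
        break
```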
19.4.3 Timescale and Multitasking Demands

Timescale concerns the tempo of the interactions with automation. The timescale of automation varies dramatically, from decision-support systems that guide corporate strategies over months and years to antilock brake systems that modulate brake pressure over milliseconds. These distinctions can be described in terms of strategic, tactical, and operational
automation. Strategic automation concerns balancing values and costs, as well as defining goals; tactical automation, on the other hand, involves setting priorities and coordinating tasks. Operational automation concerns the moment-to-moment perception of system state and adjustment. With operational automation, operators can experience substantial time pressure when the tempo of activity, on the order of milliseconds to seconds, exceeds their capacity to monitor the automation and respond promptly to its limits [103, 104]. The timing of driver intervention with vehicle automation that maintains both the distance to the vehicle ahead and the position within the lane is another example of a timescale that can challenge human capabilities. As vehicle automation becomes more capable, drivers may be able to disengage for many minutes to perform other tasks, but then they may need to intervene within seconds when the automation encounters a situation that it cannot accommodate, such as a stopped vehicle or a tight curve [30, 105]. Monitoring automation for long periods with the requirement for a rapid intervention is a well-known design problem that has contributed to fatal crashes [106]. The timescale of automation also limits the opportunity to explain the behavior of automation. Complex displays that convey nuanced explanations may be useful for tactical and strategic automation, but simple peripheral displays that do not require operators' focused attention may be more appropriate for operational automation.
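The operational time pressure can be made concrete with back-of-the-envelope arithmetic; the speeds and detection distance below are illustrative assumptions rather than data from the cited studies.

```python
def takeover_budget_s(speed_kph, obstacle_distance_m):
    """Seconds until reaching a stopped obstacle if neither the automation
    nor the driver brakes -- an upper bound on the time available to notice
    the problem, regain situation awareness, and intervene."""
    return obstacle_distance_m / (speed_kph / 3.6)

for speed in (50, 100, 130):
    print(f"{speed:3d} km/h, stopped vehicle first detected at 120 m -> "
          f"~{takeover_budget_s(speed, 120.0):.1f} s to take over")
```

At highway speeds the budget shrinks to a few seconds, which is why a driver who has disengaged for many minutes may be poorly placed to intervene in time.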
19.4.4 Agent Interdependencies

Agent interdependencies describe how tightly coupled the work of one operator or element of automation is with another [7, 76]. In some situations, automation might directly support the work of a team of people; in other situations, automation might support the activity of a person who has little interaction with others. An important source of automation-related problems is the assumption that automation affects only one person or one set of tasks, causing interactions with other operators to be neglected. Seemingly independent tasks may be coupled, and automation tends to tighten this coupling. As an example, on the surface, adaptive cruise control affects only the individual driver who is using the system. Because adaptive cruise control responds to the behavior of the vehicle ahead, however, its behavior cannot be considered without taking into account the surrounding traffic dynamics. Failing to consider these interactions of inter-vehicle velocity changes can lead to oscillations and instabilities in the traffic speed, potentially compromising driver safety and highway capacity [107, 108]. Similar failures occur in supply chains, as well as in petrochemical processes where people and automation sometimes fail to coordinate their activities [109]. Designing for such situations requires a change in perspective from one centered on
a single operator and a single element of automation to one that considers a network of multi-operator–multi-automation interactions [110, 111].
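The coupling between adaptive cruise control and surrounding traffic can be illustrated with a toy platoon simulation. The constant-spacing proportional controller and its gains below are hypothetical (production ACC typically regulates a time gap rather than a fixed distance), but the sketch shows how a brief slowdown by the lead vehicle tends to grow as it propagates down a string of followers.

```python
# Toy platoon: vehicle 0 is the lead; each follower regulates the gap to the
# vehicle ahead with a proportional controller on spacing error and relative
# speed. With a constant-spacing policy, disturbances tend to amplify toward
# the rear of the platoon (string instability).
N, DT, STEPS = 8, 0.1, 800
GAP, KP, KV = 30.0, 0.6, 0.8            # illustrative gains, not a real ACC law

pos = [-i * GAP for i in range(N)]
vel = [25.0] * N
min_speed = list(vel)

for step in range(STEPS):
    t = step * DT
    lead_target = 20.0 if 5.0 <= t < 15.0 else 25.0   # brief slowdown by the lead
    acc = [1.0 * (lead_target - vel[0])]
    for i in range(1, N):
        spacing_error = (pos[i - 1] - pos[i]) - GAP
        acc.append(KP * spacing_error + KV * (vel[i - 1] - vel[i]))
    for i in range(N):
        vel[i] += acc[i] * DT
        pos[i] += vel[i] * DT
        min_speed[i] = min(min_speed[i], vel[i])

for i, v in enumerate(min_speed):
    print(f"vehicle {i}: minimum speed {v:5.2f} m/s")
```

The deeper speed dips toward the rear of the platoon are the kind of inter-vehicle oscillation that a single-vehicle view of the automation would never reveal.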
19.4.5 Environment Interactions

Interaction with the environment refers to the degree to which the automation system is isolated from or interacts with the surrounding environment. Closed systems do not interact with an external environment, whereas open systems require that the automation consider the environmental context. The environmental context can affect the reliability and behavior of the automation, the operator's perception of the automation, and thus the overall effectiveness of the human–automation partnership [112–115]. An explicit environmental representation improves understanding of joint human-automation performance [114]. More broadly, some types of automation and machine learning can dramatically outperform people in closed systems. Closed systems, such as board games, are complex but bounded. In contrast, open systems often involve unknowable interactions with other agents, sensor failures, and variability of meaning in sensed objects. Current machine learning, AI, and automation tend to perform well in closed systems, but struggle in open systems that demand error-free performance. More concretely, AI can easily beat a person in a computer-based chess game, but might lose badly when playing on a physical chessboard selected by the person. Some open systems present particular challenges to AI and are sometimes called wicked domains [116]. Wicked domains have a set of characteristics that make learning difficult for people and for ML and AI. In contrast with kind domains, in wicked domains the information available for learning poorly matches the information available when that learning is applied to make choices and predictions. In human learning, wicked domains fuel decision biases when the environment eliminates failures and people learn only from successes (i.e., survivorship bias) or when people focus on an unrepresentative sample (i.e., selection bias). Considered from the perspective of machine learning, characteristics of wicked domains include [21, 117]:
• High-hazard, low-risk: Challenging situations occur infrequently, but when they do, failure has catastrophic consequences.
• A long tail of edge cases: Available data does not cover the range of situations that will be encountered occasionally.
• Non-stationary systems: The statistical patterns to be learned change over time.
• Aleatoric uncertainty: Uncertainty that reflects the inherent unpredictability of the system, rather than epistemic uncertainty that reflects a lack of knowledge that can be rectified with more data.
• Interdependent dynamics: Feedback loops where the predictions and actions of the algorithm can influence the behavior it tries to predict.
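One of these characteristics, non-stationarity, is easy to demonstrate: a model fit under the conditions seen during development can degrade sharply once the world drifts. The one-dimensional threshold classifier below is purely illustrative; the distributions and the size of the shift are assumptions chosen to make the effect visible.

```python
import random

random.seed(1)

def sample(mean_pos, n=2000):
    """Toy data: class 1 centered at mean_pos, class 0 centered at 0."""
    data = []
    for _ in range(n):
        label = random.random() < 0.5
        x = random.gauss(mean_pos if label else 0.0, 1.0)
        data.append((x, int(label)))
    return data

def fit_threshold(data):
    """'Learn' the midpoint between the two class means."""
    m0 = sum(x for x, y in data if y == 0) / sum(1 for _, y in data if y == 0)
    m1 = sum(x for x, y in data if y == 1) / sum(1 for _, y in data if y == 1)
    return (m0 + m1) / 2.0

def accuracy(threshold, data):
    return sum((x > threshold) == bool(y) for x, y in data) / len(data)

train = sample(mean_pos=3.0)          # conditions seen during development
threshold = fit_threshold(train)
print(f"accuracy on the training regime:  {accuracy(threshold, sample(3.0)):.2f}")
print(f"accuracy after the world shifts:  {accuracy(threshold, sample(1.0)):.2f}")
```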
19.5 Automation Design Methods and Application Examples
The previous section described some important characteristics of automation and systems that contribute to automation-related problems; these distinctions help to identify design approaches to minimize them. This section describes specific strategies for designing effective automation, which include:
• Human-centered design
• Function allocation with Fitts' list
• Operator-automation simulation and analysis
• Representation aiding and enhanced feedback
• Expectation matching and automation simplification
19.5.1 Human-Centered Design

Human-centered design maintains a focus on the people the automation is meant to support. Such a focus can address many of the issues discussed in the preceding sections. The human-centered design process can be simplified into three phases: understand users, create a design concept, and evaluate the design concept [118, 119]. Understanding users involves careful observations of people and the tasks they perform, as well as principles of human behavior. With automation design, this also involves understanding the system, such as whether it is open or closed, wicked or kind. Creating involves using this understanding to produce initial design concepts. These concepts are then evaluated with heuristic evaluations and usability tests. Evaluation completes the cycle and helps designers better understand the users and their needs, providing input to the next cycle to create an improved design concept. Figure 19.2 shows iterative cycles of understand, create, evaluate moving outward as the design concept evolves. The cycles vary in how long they take to complete, with the outer cycles taking months or years and inner cycles taking minutes. An informal review of a paper prototype might take minutes, but the in-service monitoring of released products might extend over years. The human-centered design process in Fig. 19.2 applies to many aspects of automation design from the details of the interface layout to the interaction architecture. Automation design sometimes focuses on interface design and broader
aspects of automation design get neglected because designers assume that automation can substitute for the person. However, as discussed previously, automation often fundamentally changes the role of the person. Job design provides a framework for considering the broader aspects of automation design. At the dawn of the Industrial Revolution, the pioneering work of Frederick Taylor revolutionized work through job design. He minimized waste by identifying the most efficient method to perform the job. His analysis showed that proceduralizing
Fig. 19.2 A human-centered design process that applies to automation design. It is an iterative cycle (understand, create, evaluate) that repeats at multiple time scales, from a paper prototype to a pre-production model to the released product [118]
and standardizing tasks enabled people to become very good at a narrowly defined job [118]. This approach to proceduralizing and standardizing work and to specializing workers, known as Taylorism, often underlies automation design. Automation that follows Taylorism can leave workers with little latitude for dealing with unexpected events. Taylorism can leave people performing highly repetitive tasks with little autonomy. This can make them vulnerable to repetitive motion disorders, poor job satisfaction, and a variety of stress-related health problems [120, 121]. In response to the limits of Taylorism, other approaches to job design have been developed [122]. These approaches question some of the fundamental assumptions of Taylor and of automation design: the idea that proceduralizing and standardization improve performance. Figure 19.3 integrates elements of job design that might also help guide automation design [123–125]. The core of this approach is that job characteristics influence the psychological states of the person, which in turn affect work outcomes. Automation that supports rather than constrains human autonomy and provides good feedback about the situation can help workers improvise solutions that go beyond the capability of automation. At the center of Fig. 19.3 is the idea of task load and balance, which describes how people respond and adapt to job demands [125]. Balance guides job design because giving workers autonomy and variety can eliminate boredom and increase job satisfaction, but can also overwhelm workers with new responsibilities. Properly designed automation can help people balance this load. Unfortunately, automation is not always designed to support balancing.
Fig. 19.3 Job design provides a broad perspective for designing automation and increasingly autonomous systems. Job characteristics (significance, identity, variety, autonomy, feedback), together with individual characteristics (knowledge, skills, abilities) and team characteristics (composition, teamwork skills, psychological safety), shape the psychological and physical state of the worker – task load and balance across cognitive, physical, social, and technical demands, experienced meaningfulness, experienced responsibility, and knowledge of results – which drives short-term outcomes (safety, performance, satisfaction, fatigue, stress) and long-term outcomes (health, innovation, engagement, retention), feeding back through training and selection [118]
As discussed above, clumsy automation may reduce the pilots' load during straight and level flight, but increase it during the already demanding periods surrounding takeoff and landing. To make automation less clumsy, balance can be supported by enabling people to perform tasks ahead of time, as well as to delay and delegate tasks. Balance can go beyond making automation less clumsy. The three feedback loops in Fig. 19.3 show how people balance load at a range of timescales [118]. The innermost loop connects the psychological state to task load and balance. Here people adjust what tasks they perform, as well as when and how they perform them, based on their physical and psychological state. This innermost loop operates over minutes to days. The next loop connects the short-term outcomes, particularly satisfaction, to show how people adjust their behavior over the space of days and weeks. The outermost loop describes how long-term effects influence the people and the characteristics of the people performing the job. This loop shows how long-term outcomes, such as retention, can impose demands on training new workers to replace those who have left because of a poorly designed job. A well-designed job enhances health, engagement, and retention, as well as the accumulation of knowledge and skills of the workers. In many organizations, introducing automation often accompanies other changes, such as a shift from well-defined jobs to work being defined by contributions to multiple teams performing activities not part of their core job: a shift from hierarchical to lateral control [20, 126]. For this reason, automation design should go beyond a focus on individual tasks to consider broader issues of teamwork and lateral control. Lateral control places emphasis on teams in job design, highlighted in the lower left of Fig. 19.3. Automation design can be considered as adding a team member, with the expectation that the automated team member enters with certain teamwork skills, including [127]:
• Agree to work together
• Be mutually predictable in their actions
• Be mutually directable
• Maintain common ground – knowledge, assumptions, capabilities, and intentions regarding the joint work
A user-centered approach to automation design must consider such teamwork skills in addition to the interface and interaction design elements of automation. AI teaming is an increasingly important consideration in automation design as automation becomes more capable.
19.5.2 Fitts’ List and Function Allocation Function allocation with the Fitts’ list is a long-standing technique for identifying the role of operators and automation.
This approach assesses each function and whether a person or automation might be best suited to perform it [113, 128, 129]. Functions better performed by automation are automated and the operator remains responsible for the rest, and for compensating for the limits of the automation. The relative capability of the automation and the human depends on the stage of automation [87]. Applying a Fitts’ list to determine an appropriate allocation of function has, however, substantial weaknesses. One weakness is that any description of functions is a somewhat arbitrary decomposition of activities that can mask complex interdependencies. As a consequence, automating functions as if they were independent tends to fractionate the operator’s role, leaving the operator with an incoherent collection of functions that were too difficult to automate [35]. Another weakness is that this approach neglects the tendency for operators to use automation in unanticipated ways because automation often makes new functions possible [130]. Another challenge with this general approach is that it often carries the implicit assumption that automation can substitute for functions previously performed by operators and that operators do not need to be supported in performing functions allocated to the automation – the substitution myth [131]. This substitution-based allocation of functions fails to consider the qualitative change automation can bring to the operators’ work and the adaptive nature of the operator. As a consequence of these challenges, the Fitts’ list provides only general guidance for automation design and has been widely recognized as problematic [131, 132]. Ideally, the function allocation process should not focus on what functions should be allocated to the automation or the human but should identify how the human and the automation can complement each other in jointly satisfying the functions required for system success [117, 133]. Although imperfect, the Fitts’ list approach has some general considerations that can improve the design. People tend to be effective in perceiving patterns and relationships amongst data and less so with tasks requiring precise repetition [83]. Human memory tends to organize large amounts of related information in a network of associations that can support effective judgments. Based on a deep well of experience people can also reframe situations and adjust when the situation fundamentally changes. Reframing is beyond the capacity of even the most advanced artificial intelligence. People adapt, improvise, and accommodate unexpected variability – they provide adaptive capacity to accommodate the unexpected that automation cannot. For these reasons, it is important to leave the big picture to the human and the details to the automation [83].
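A Fitts-list-style analysis can be sketched as data: each function gets a rough rating of human and automation suitability, and instead of assigning every function to whichever agent scores higher, the design flags functions where joint, complementary performance is needed. The functions, scores, and threshold below are hypothetical examples used only to illustrate the complement-rather-than-substitute point; they are not part of the chapter's method.

```python
# Hypothetical ratings (0..1) of how well the human and the automation could
# perform each function in isolation. The point is the decision rule, not the
# numbers: look for complementarity instead of simple substitution.
functions = {
    "monitor routine lane keeping":           {"human": 0.4, "automation": 0.9},
    "detect a stopped vehicle in dense fog":  {"human": 0.5, "automation": 0.5},
    "reframe the task when assumptions fail": {"human": 0.8, "automation": 0.2},
}

for name, score in functions.items():
    if abs(score["human"] - score["automation"]) < 0.2:
        plan = "design for joint performance (shared feedback and authority)"
    elif score["automation"] > score["human"]:
        plan = "automate, but keep the operator informed and able to intervene"
    else:
        plan = "keep with the operator, supported by the automation"
    print(f"{name}: {plan}")
```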
19.5.3 Operator-Automation Simulation

Operator-automation simulation refers to computer-based techniques that explore the space of operator–automation interaction to identify potential problems. Discrete event simulation tools commonly used to evaluate manufacturing processes are well-suited to operator-automation analysis. Such techniques provide a rough estimate of some of the consequences of introducing automation into complex dynamic systems. As an example, simulation of a supervisory control situation made it possible to assess how characteristics of the automation interacted with the operating environment to govern system performance [134]. This analysis showed that the time taken to engage the automation interacted with the dynamics of the environment to undermine the value of the automation such that manual control was more appropriate than engaging the automation. Although discrete event simulation tools can incorporate cognitive mechanisms and performance constraints, developing this capacity requires substantial effort. For automation analysis that requires a detailed cognitive representation, cognitive architectures, such as adaptive control of thought-rational (ACT-R), offer a promising approach [135]. ACT-R is a useful tool for estimating the costs and benefits of various automation alternatives when a simple discrete event simulation does not provide a sufficiently detailed representation of the operator [136].

Simulation tools can be used to explore the potential behavior of the joint human–automation system, but may not be the most efficient way of identifying potential human–automation mismatches associated with inadequate mental models and automation-related errors. Network analysis techniques offer an alternative. State-transition networks can describe operator–automation behavior in terms of a finite number of states, transitions between those states, and actions. Figure 19.4 provides an example representation, defining at a high level the behavior of adaptive cruise control (ACC). This formal modeling language makes it possible to identify automation problems that occur when the interface or the operator's mental model is inadequate to manage the automation [137]. Combining the concurrent processes of the ACC model with the associated driver model of the ACC's behavior reveals mismatches. These mismatches can cause automation-related errors and surprises. More specifically, when the automation model enters a particular state and the operator's model does not include this state, the analysis predicts that the associated ambiguity will surprise operators and lead to errors [138]. Such ambiguities have been discovered in actual aircraft autopilot systems, and network analysis can identify how to avoid them with improvements to the interface and training materials [138].
[Figure 19.4: a state-transition diagram of adaptive cruise control. State labels in the diagram include ACC off, ACC standby, ACC active, ACC speed control, and ACC distance (following) control; transition labels include Press On button, Press Off button, Press Set speed or Resume button, Press Accel or Coast button, Press Time gap + or Time gap – button, Depress brake pedal, Initialize throttle actuator, Initialize brake or throttle actuators, ACC system fault detected, and ACC deceleration required > 0.2 g or current ACC vehicle speed < 20 mph.]

Fig. 19.4 ACC states and transitions. Dashed lines represent driver-triggered transitions. Solid lines represent ACC-triggered transitions [139]
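The sketch below illustrates the kind of model-comparison analysis described above. The state and event names loosely follow Fig. 19.4, but the transition tables are simplified assumptions (not the published ACC or driver models), and the driver model is deliberately incomplete so the analysis can flag the states and transitions that would surprise the operator.

```python
# Illustrative sketch: compare an automation state machine with the operator's
# mental model and report states/transitions the operator does not represent.
# Transition tables are simplified assumptions loosely echoing Fig. 19.4.

acc_model = {
    "ACC off": {"press On": "ACC standby"},
    "ACC standby": {"press Set speed/Resume": "ACC speed control"},
    "ACC speed control": {
        "lead vehicle detected": "ACC distance control",
        "press Off": "ACC off",
        "system fault detected": "ACC standby",
    },
    "ACC distance control": {
        "deceleration required > 0.2 g": "driver must brake",
        "press Off": "ACC off",
    },
    "driver must brake": {"depress brake pedal": "ACC standby"},
}

driver_model = {
    "ACC off": {"press On": "ACC standby"},
    "ACC standby": {"press Set speed/Resume": "ACC speed control"},
    "ACC speed control": {
        "lead vehicle detected": "ACC distance control",
        "press Off": "ACC off",
    },
    "ACC distance control": {"press Off": "ACC off"},
}

def surprise_candidates(automation, mental_model):
    """States and transitions in the automation that the operator's mental model
    does not represent; the analysis above predicts these produce surprises."""
    missing_states = sorted(set(automation) - set(mental_model))
    missing_transitions = sorted(
        (state, event)
        for state, edges in automation.items()
        if state in mental_model
        for event in edges
        if event not in mental_model[state]
    )
    return missing_states, missing_transitions

states, transitions = surprise_candidates(acc_model, driver_model)
print("states the driver does not represent:", states)
print("transitions the driver does not expect:", transitions)
```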
19.5.4 Enhanced Feedback and Representation Aiding

Enhanced feedback and representation aiding can help prevent problems associated with inadequate feedback, which range from difficulty developing appropriate trust and clumsy automation to the out-of-the-loop phenomenon. Automation often lacks adequate feedback [140]. Providing sufficient feedback without overwhelming the operator is a critical design challenge. Insufficient feedback can leave the operator unaware that the automation has reached its limit, but poorly presented or excessive feedback can increase operator workload and undermine the benefits of automation [141]. A promising approach to avoid overloading the operator is to provide feedback through sensory channels that are not otherwise used (e.g., haptic, tactile, and auditory) to prevent overload of the more commonly used visual channel. Haptic feedback (i.e., vibration on the wrist) has proven more effective in alerting pilots to mode changes of cockpit automation than visual cues [142]. Pilots receiving visual alerts only detected 83% of the mode changes, but those with haptic warnings detected 100% of the changes. Importantly, the haptic warnings did not interfere with performance of concurrent visual tasks. Even within the visual modality, presenting feedback in the periphery helped pilots detect uncommanded mode transitions, and such feedback did not interfere with concurrent visual tasks any more than currently available automation feedback [143].

Similarly, Seppelt and Lee [144] combined a more complex array of variables in a peripheral visual display for ACC. Fig. 19.5 shows how this display includes relevant variables for headway control (i.e., time headway, time-to-collision, and range rate) relative to the operating limits of the ACC. This display promoted faster failure detection and more appropriate engagement strategies compared with a standard ACC interface. Although promising, haptic, auditory, and peripheral visual displays cannot convey the detail possible in visual displays, making it difficult to convey the complex relationships that sometimes govern automation behavior. An important design tradeoff emerges: provide sufficient detail regarding automation behavior, but avoid overloading and distracting the operator.

Simply enhancing the feedback operators receive regarding automation is insufficient. Without the proper context, abstraction, and integration, feedback may not be understandable. Representation aiding capitalizes on the power of visual perception to convey complex information; for example, graphical representations for pilots can augment the traditional airspeed indicator with target airspeeds and acceleration indicators. Integrating this information into a traditional flight instrument allows pilots to assimilate automation-related information with little additional effort [111]. Using a display that combines pitch, roll, altitude, airspeed, and heading can directly specify task-relevant
information such as what is too low, as opposed to operators being required to infer such relationships from the set of variables [145]. Integrating automation-related information with traditional displays and combining low-level data into meaningful information can help operators understand automation behavior.

In the context of process control, Guerlain and colleagues [146] identified three specific strategies for visual representation of complex process control algorithms. First, create visual forms whose emergent features correspond to higher-order relationships. Emergent features are salient symmetries or patterns that depend on the interaction of the individual data elements. A simple emergent feature is parallelism that can occur with a pair of lines. Higher-order relationships are combinations of the individual data elements that govern system behavior. The boiling point of water is a higher-order relationship that depends on temperature and pressure. Second, use appropriate visual features to represent the dimensional properties of the data; for example, magnitude is a dimensional property that should be displayed using position or size on a visual display, not color or texture, which are ambiguous ways to represent an increase or decrease in amount. Third, place data in a meaningful context. The meaningful context for any variable depends on what comparisons need to be made. For automation, this includes the allowable ranges relative to the current control variable setting, and the output relative to its desired level. Similarly, Dekker and Woods [131] suggest event-based representations that highlight changes, historical representations that help operators project future states, and pattern-based representations that allow operators to synthesize complex relationships perceptually rather than through arduous mental transformations.

Representation aiding helps operators trust automation appropriately. However, trust also depends on more subtle elements of the interface [50]. In many cases, trust and credibility depend on surface features of the interface that have no obvious link to the true capabilities of the automation [147, 148]. An online survey of over 1400 people found that for websites, credibility depends heavily on real-world feel, which is defined by factors such as response speed, a physical address, and photos of the organization [149]. Similarly, a formal photograph of the author enhanced trustworthiness of a research article, whereas an informal photograph decreased trust [150]. These results show that trust tends to increase when information is displayed in a way that provides concrete details that are consistent and clearly organized.

Representation aiding might take too long for operators to process when managing automation that operates at a timescale of seconds. Such automation may require integration of input to the automation and feedback from the automation that is very tightly coupled.
[Figure 19.5: the enhanced feedback display. Labels in the figure include Set speed – LV speed, Range rate, TTC–1, LV position within sensor field, Braking capacity (0.2 g), THW, Set THW, and shape trajectories. A note indicates that image color (bright yellow) and shape texture/fill degrade with increasing uncertainty of the sensor signal.]

Fig. 19.5 A peripheral display to help drivers understand adaptive cruise control [144]. (TTC – time-to-collision; THW – time headway). The outset image shows an expanded view of potential shape trajectories. (Note: labels on the images were not present on the display but are provided here for reference)
One such approach has been termed haptic shared control [151, 152]. Haptic shared control allows people to feel the operation of the automation through a control surface, such as a steering wheel. It goes beyond simply enhancing feedback by allowing people to influence the automation through the same control surface. Drivers can override and guide the automation controlling the vehicle steering by exerting force on the steering wheel. The automation can influence the person by resisting movement of the control surface when such input might jeopardize system safety, with the resistance proportional to the certainty and severity associated with a deviation from the automation's planned trajectory. Drivers might feel mild resistance to moving slightly left to give a bicycle more space, but might encounter substantial resistance to changing lanes into a vehicle that the driver had not noticed. A major limitation of haptic shared control is that it requires the operator to be physically engaged with the automation and controlled process, which is infeasible for many applications of automation.
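A minimal sketch of the resistance idea described above, not a real steering controller: the force opposing the driver's input grows with the size of the deviation from the automation's planned path, weighted by the automation's certainty and the estimated severity. The gain and cap values are arbitrary assumptions.

```python
# Illustrative only: resistance on a shared control surface proportional to the
# certainty and severity associated with a deviation from the planned trajectory.
# Gain (k) and cap are hypothetical values, not from any published controller.

def resistance_torque(deviation_m, certainty, severity, k=2.0, cap=6.0):
    """Torque (N*m) with which the automation resists the driver's steering input.

    deviation_m -- lateral deviation the driver's input would produce from the
                   automation's planned trajectory
    certainty   -- automation's confidence in its plan, 0..1
    severity    -- estimated consequence of the deviation, 0..1
    """
    return min(k * deviation_m * certainty * severity, cap)  # never lock the driver out

# Drifting slightly left to give a bicycle more room: mild resistance
print(resistance_torque(deviation_m=0.3, certainty=0.9, severity=0.2))
# Changing lanes into an unnoticed vehicle: strong resistance
print(resistance_torque(deviation_m=1.5, certainty=0.95, severity=0.9))
```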
19.5.5 Expectation Matching and Simplification

Expectation matching and simplification help operators understand automation by using algorithms that are more comprehensible. One strategy is to simplify the automation by reducing the number of functions, modes, and contingencies [153]. Another is to match its algorithms to the operators' mental model [154]. Automation designed to perform in a manner consistent with operators' mental model, prefer-
ences, and expectations can make it easier for operators to recognize failures and intervene. Expectation matching and simplification are particularly effective when a technologycentered approach has created an overly complex array of modes and features. ACC is a specific example of where matching the mental model of an operator to the automation’s algorithms may be quite effective. Because ACC can only apply moderate levels of braking, drivers must intervene if the car ahead brakes heavily. If drivers must intervene, they must quickly enter the control loop because fractions of a second can make the difference in avoiding a collision. If the automation behaves in a manner consistent with drivers’ expectations, drivers will be more likely to detect and respond to the operational limits of the automation quickly. Goodrich and Boer [154] designed an ACC algorithm consistent with drivers’ mental models such that ACC behavior was partitioned according to perceptually relevant variables of inverse time-to-collision and time headway. Inverse time-to-collision is the relative velocity divided by the distance between the vehicles. Time headway is the distance between the vehicles divided by the velocity of the driver’s vehicle. Using these variables it is possible to identify a perceptually salient boundary that separates routine speed regulation and headway maintenance from active braking associated with collision avoidance. A general property of human cognition is the tendency to organize information and activities into hierarchical chunks [155, 156]. These hierarchical structures also form the basis for delegation in human organizations [157]. Automation that replicates this hierarchical organization will likely be more observable and directable, making it possible for the operator to delegate and intervene or direct as the situation demands [21]. A concrete example of such an approach is the automation playbook [158]. Playbooks take the name from how sports teams draw on a book of pre-defined plays. These plays provide a common language for organizing activities across the team. Automation playbooks provide a similar function by creating a common language between operators and the automation. This language defines a hierarchical goal decomposition that enables an operator to delegate coherent sets of activities to the automation and to direct the automation at any level of the hierarchy. Organizing the automation behavior in a hierarchical fashion that matches how people think about an activity can make the automation more observable and directable. For situations in which the metaphor for automation is an agent, the mental model people may adopt to understand automation is that of a human collaborator. Specifically, Miller [159] suggests that computer etiquette may have an important influence on human–automation interaction. Etiquette may influence trust because category membership associated with adherence to a particular etiquette helps people to infer how
the automation will perform. Some examples of automation etiquette are for the automation to make it easy for operators to override and recover from errors, to enable interactive features only when and if necessary, to explain what is being done and why, to interrupt operators only in emergencies, and to provide information that the operators do not already know about. Developing automation etiquette could promote appropriate trust, but also has the potential to lead to inappropriate trust if people infer inappropriate category memberships and develop distorted expectations regarding the capability of the automation. Even in simple interactions with technology, people often respond as they would to another person [51, 160]. If anticipated, this tendency could help operators develop appropriate expectations regarding the behavior of the automation; however, unanticipated anthropomorphism could lead to surprising misunderstandings. Beyond automation etiquette, another approach is to design to support the process of trusting by creating collegial automation [161]. Collegiality includes the capacity to align goals. Goal alignment can be achieved by the automation adjusting to meet the person’s goals, through negotiation to arrive at new goals, or by nudging the person to adopt new goals. It involves monitoring the trust of the partner and repairing, dampening, and tempering trust [162]. Collegiality implies the automation can take initiative to build, repair, or temper trust, but that the person can also take initiative. For example, the automation can offer an apology after a period of poor performance or the person can ask for an explanation. An important prerequisite for designing automation according to the mental model of the operator is the existence of a consistent mental model. Individual differences may lead to many different mental models and expectations. This is particularly true for automation that acts as an agent, in which a mental-model-based design must conform to complex social and cultural expectations. Also, the mental model must be consistent with the physical constraints of the system if the automation is to work properly [163]. Mental models often contain misconceptions that could lead to serious misunderstandings and automation failures. Even if an operator’s mental model is consistent with the system constraints, automation based on such a mental model may not achieve the same benefits as automation based on more sophisticated algorithms. In this case, designers must consider the tradeoff between the benefits of a complex control algorithm and the costs of an operator not understanding that algorithm. Enhanced feedback and representation aiding can mitigate this tradeoff. If the operator’s mental model is inaccurate the transparency and feedback from the automation can help the operator understand not only the automation but the overall system operation.
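The following sketch illustrates the mental-model-matched partition described earlier in this subsection, using the two perceptual variables Goodrich and Boer [154] identify: inverse time-to-collision (relative velocity divided by the gap between the vehicles) and time headway (gap divided by the driver's own speed). The boundary values here are illustrative assumptions, not the published ones.

```python
# Illustrative partition of driving into routine regulation vs. active braking,
# using perceptually relevant variables. Boundary values are assumptions.

def inverse_ttc(closing_speed_mps, gap_m):
    """Inverse time-to-collision: relative (closing) velocity divided by the gap (1/s)."""
    return closing_speed_mps / gap_m

def time_headway(gap_m, own_speed_mps):
    """Time headway: gap divided by the driver's own speed (s)."""
    return gap_m / own_speed_mps

def regime(closing_speed_mps, gap_m, own_speed_mps, ittc_limit=0.25, thw_limit=1.0):
    """Classify the situation relative to a perceptually salient boundary."""
    if (inverse_ttc(closing_speed_mps, gap_m) > ittc_limit
            or time_headway(gap_m, own_speed_mps) < thw_limit):
        return "active braking / collision avoidance (driver should expect to intervene)"
    return "routine speed and headway regulation (within the ACC's comfort zone)"

# Closing at 2 m/s on a 40 m gap at 25 m/s own speed: routine regulation
print(regime(2.0, 40.0, 25.0))
# Closing at 8 m/s on a 20 m gap at 25 m/s own speed: beyond the boundary
print(regime(8.0, 20.0, 25.0))
```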
19.6 Future Challenges in Automation Design
The previous section outlined strategies that can make the operator–automation partnership more effective. The challenges in applying the Fitts' list show that the application of these strategies, either individually or collectively, does not guarantee effective automation. The rapid advances in software and hardware development, combined with an ever-expanding range of applications, make future problems with automation likely. The following sections highlight some of these emerging challenges. The first concerns the demands of managing swarm automation, in which many semiautonomous agents work together. The second concerns interconnected networks of people and automation, in which issues of cooperation and competition become critical.
19.6.1 Swarm Automation

Swarm automation consists of many simple, semiautonomous entities whose emergent behavior provides a robust response to environmental variability. Swarm automation has important applications in a wide range of domains, including planetary exploration, unmanned aerial vehicle reconnaissance, land-mine neutralization, and intelligence gathering; in short, it is applicable in any situation in which hundreds of simple agents might be more effective than a single, complex agent. Biology-inspired robotics provides a specific example of swarm automation. Instead of the traditional approach of relying on one or two larger robots, system designers employ swarms of insect robots [164, 165]. The swarm robot concept assumes that small robots with simple behaviors can perform important functions more reliably and with lower power and mass requirements than can larger robots [166–168]. Typically, the simple algorithms controlling the individual entity can elicit desirable emergent behaviors in the swarm [169, 170]. As an example, the collective foraging behavior of honeybees shows that agents can act as a coordinated group to locate and exploit resources without a complex central controller.

In addition to physical examples of swarm automation, swarm automation has potential in searching large complex data sets for useful information. Current approaches to searching such data sources are limited. People miss important documents, disregard data that is a significant departure from initial assumptions, misinterpret data that conflicts with an emerging understanding, and disregard more recent data that could revise an initial interpretation [171]. The parameters that govern the discovery and exploitation of food sources for ants might also apply to the control of software agents in their discovery and exploitation of information. Just
as swarm automation might help explore physical spaces, it might also help explore information spaces [172]. The concept of hortatory control describes some of the challenges of controlling swarm automation. Hortatory control describes situations where the system being controlled retains a high degree of autonomy and operators must exert indirect rather than direct control [173]. Interacting with swarm automation requires people to consider swarm dynamics and not just the behavior of the individual agents. In these situations, it is most useful for the operator to control parameters affecting the group rather than individual agents and to receive feedback about the group rather than individual behavior. Parameters for control might include the degree to which each agent tends to follow successful agents (positive feedback), the degree to which they follow the emergent structure of their own behavior (stigmergy), and the amount of random variation that guides their paths [174]. In exploration, a greater amount of random variation will lead to a more complete search, and a greater tendency to follow successful agents will speed search and exploitation [175]. Although swarms are often intentionally designed, they may also emerge when many automated systems are deployed in an environment where they interact with each other. Such scenarios include internet infrastructure and algorithmic securities trading. In recent years both of these systems have failed due to emergent properties of the interaction of an interconnected swarm of algorithms. Full self-driving automated vehicles may form similar swarms that may also be vulnerable to fleet failures that emerge from the interactions of the many vehicles on the road. Already traffic is vulnerable to waves of congestion and gridlock that emerge from interactions across many vehicles. How automated vehicles exacerbate or mitigate such problems remains an open question. Swarm automation, either designed or not, has great potential to extend human capabilities, but only if a thorough empirical and analytic investigation identifies the display requirements, viable control mechanisms, and the range of swarm dynamics that can be comprehended and controlled by humans [176].
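A toy sketch of the swarm-control parameters mentioned above: the tendency to follow successful agents, the tendency to follow the swarm's own emergent trail structure (stigmergy), and random variation. Everything here (site counts, quality values, evaporation rate) is an assumption meant only to show the exploration-exploitation trade-off, not a design for any particular swarm.

```python
import random

def forage(follow_success, follow_trail, randomness,
           n_agents=20, n_sites=100, steps=20, seed=1):
    """Return (fraction of sites visited, best site quality found, fraction of
    visits spent following the currently advertised best site)."""
    random.seed(seed)
    total = follow_success + follow_trail + randomness
    p_success, p_trail = follow_success / total, follow_trail / total
    quality = [random.random() for _ in range(n_sites)]
    trail = [0.0] * n_sites      # pheromone-like trail strength (stigmergy)
    best_known = 0               # site advertised by successful agents
    visited, exploit_visits = set(), 0
    for _ in range(steps):
        for _ in range(n_agents):
            r = random.random()
            if r < p_success:
                site = best_known                                   # follow successful agents
                exploit_visits += 1
            elif r < p_success + p_trail and max(trail) > 0:
                site = max(range(n_sites), key=lambda s: trail[s])  # follow the emergent trail
            else:
                site = random.randrange(n_sites)                    # random exploration
            visited.add(site)
            trail[site] += quality[site]
            if quality[site] > quality[best_known]:
                best_known = site
        trail = [0.95 * t for t in trail]                           # trail evaporation
    return (len(visited) / n_sites,
            round(quality[best_known], 3),
            round(exploit_visits / (n_agents * steps), 3))

# Heavy following: effort concentrates quickly on the best-known site
print(forage(follow_success=0.7, follow_trail=0.2, randomness=0.1))
# Heavy random variation: a more complete search of the space
print(forage(follow_success=0.2, follow_trail=0.2, randomness=0.6))
```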
19.6.2 Operator–Automation Networks

Complex operator–automation networks emerge as automation becomes more pervasive. With networked automation, the appropriate unit of analysis shifts from a single operator interacting with a single element of automation to that of multiple operators interacting with multiple elements of automation. Important dynamics can only be explained with this more complex unit of analysis. The factors affecting micro-level behavior may have unexpected effects on macro-level behavior [177]. As the degree of coupling increases,
poor coordination between operators and inappropriate reliance on automation have greater consequences for system performance [7]. Supply chains can represent an important example of multi-operator–multi-automation systems. A supply chain is composed of a network of suppliers, transporters, and purchasers who work together, usually as a decentralized virtual company, to convert raw materials into products. The growing popularity of supply chains reflects the general trend of companies to move away from vertical integration, where a single company converts raw materials into products. Increasingly, manufacturers rely on supply chains [178] and attempt to manage them with automation [109].

Supply chains can suffer from serious problems that erode their promised benefits. One is the bullwhip effect, in which small variations in end-item demand induce large order oscillations, excess inventory, and back-orders [179]. The bullwhip effect can undermine a company's efficiency and value. Automation that forecasts demands can moderate these oscillations [180, 181]. However, people must trust and rely on that automation, and substantial cooperation between supply-chain members must exist to share such information. Vicious cycles also undermine supply-chain performance through an escalating series of conflicts between members [182]. Vicious cycles can have dramatic negative consequences for supply chains; for example, a strategic alliance between Office Max and Ryder International Logistics devolved into a legal fight in which Office Max sued Ryder for US $21.4 million, and then Ryder sued Office Max for US $75 million [183]. Beyond the legal costs, these breakdowns undermine the market value of the companies involved [178]. Vicious cycles also undermine information sharing, which can exacerbate the bullwhip effect. Even with the substantial benefits of cooperation, supply chains can fall into a vicious cycle of diminishing cooperation. Inappropriate use of automation can contribute to both vicious cycles and the bullwhip effect.

A recent study used a simulation model to examine how reliance on automation influences cooperation and how sharing two types of automation-related information influences cooperation between operators in the context of a two-manufacturer, one-retailer supply chain [41]. This study used a decision field-theoretic model of the human operator [184–186] to assess the effects of automation failures on cooperation and the benefit of sharing automation-related information in promoting cooperation. Sharing information regarding automation performance improved operators' reliance on automation, and the more appropriate reliance promoted cooperation by avoiding unintended competitive behaviors caused by inappropriate use of automation. Sharing information regarding the reliance on automation increased willingness to cooperate even when the other operator occasionally engaged in competitive behavior. Sharing information regarding the
operators' reliance on automation led to a more charitable interpretation of the other's intent and therefore increased trust in the other operator. The consequence of enhanced trust is an increased chance of cooperation. Cooperation depends on the appropriate use of automation, and sharing automation-related information can have a profound effect on cooperation [20, 41, 161]. The interaction between automation, cooperation, and performance seen with supply-chain management also applies to other domains; for example, power-grid management involves a decentralized network that makes it possible to efficiently supply power, but it can fail catastrophically when cooperation and information sharing break down [187]. Similarly, datalink-enabled air-traffic control makes it possible for pilots to negotiate flight paths efficiently, but it can fail when pilots do not cooperate or have trouble anticipating the complex dynamics of the system [188, 189]. Overall, technology creates many highly interconnected networks that have great potential, but that also raise important concerns. Resolving these concerns partially depends on designing effective multi-operator–multi-automation interactions. Swarm automation and complex operator–automation networks pose challenges beyond those of traditional systems and require new design strategies. The automation design strategies described earlier, such as function allocation, operator–automation simulation, representation aiding, and expectation matching, are somewhat limited in addressing these challenges. New approaches to automation design involve developing analytic tools, interface designs, and interaction concepts that consider issues of cooperation and coordination in operator–automation interactions [190, 191]. See Chapters 15, 18, and 20 for additional details about the design of communication and collaboration in human-automation systems.
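As a self-contained illustration of the bullwhip effect mentioned above (a generic toy model, not the decision field-theoretic model of [41]), each tier in the sketch below forecasts the demand it observes and over-orders when arrivals exceed its forecast; the variance of the demand signal grows as it moves upstream from the retailer toward the supplier. All parameter values are assumptions.

```python
import random

def simulate_bullwhip(weeks=200, tiers=4, smoothing=0.2, overreact=0.8, seed=3):
    """Return the variance of the demand observed at each tier (tier 0 = retailer)."""
    random.seed(seed)
    forecasts = [10.0] * tiers
    demand_seen = [[] for _ in range(tiers)]
    for _ in range(weeks):
        incoming = 10.0 + random.uniform(-1.0, 1.0)   # small end-customer variation
        for t in range(tiers):
            demand_seen[t].append(incoming)
            # exponential smoothing forecast of the demand this tier observes
            forecasts[t] += smoothing * (incoming - forecasts[t])
            # order what just arrived plus an over-reaction to the gap between the
            # arrival and the forecast (chasing safety stock), never less than zero
            order = max(0.0, incoming + overreact * (incoming - forecasts[t]))
            incoming = order                          # becomes demand for the next tier
    def variance(xs):
        mean = sum(xs) / len(xs)
        return sum((x - mean) ** 2 for x in xs) / len(xs)
    return [round(variance(d), 2) for d in demand_seen]

# Demand variance typically grows from the retailer (tier 0) upstream, even though
# end-customer demand barely varies.
print(simulate_bullwhip())
```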
References

1. Grabowski, M.R., Hendrick, H.: How low can we go?: validation and verification of a decision support system for safe shipboard manning. IEEE Trans. Eng. Manag. 40(1), 41–53 (1993) 2. Nagel, D.C.: Human error in aviation operations. In: Wiener, E.L., Nagel, D.C. (eds.) Human Factors in Aviation, pp. 263–303. Academic, New York (1988) 3. Singh, D.T., Singh, P.P.: Aiding DSS users in the use of complex OR models. Ann. Oper. Res. 72, 5–27 (1997) 4. Lutzhoft, M.H., Dekker, S.W.: On your watch: automation on the bridge. J. Navig. 55(1), 83–96 (2002) 5. NTSB: Grounding of the Panamanian passenger ship Royal Majesty on Rose and Crown shoal near Nantucket, MA, 10 June 1995 (NTSB/MAR-97/01). Washington, DC (1997) 6. Ransbotham, S., Khodabandeh, S., Kiron, D., Candelon, F., Chu, M., LaFountain, B.: Expanding AI's Impact with Organizational Learning. Available: https://sloanreview.mit.edu/projects/expanding-ais-impact-with-organizational-learning/ (2020). Accessed 18 Jan 2021
7. Woods, D.D.: Decomposing automation: apparent simplicity, real complexity. In: Automation and Human Performance: Theory and Applications, pp. 3–17. Erlbaum, Mahwah (1996) 8. Fisher, D.L., Horrey, W.J., Lee, J.D., Regan, M.A.: Handbook of Human Factors for Automated, Connected, and Intelligent Vehicles. CRC Press, Boca Raton (2020) 9. O’Neil, K.: Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Crown, New York (2016) 10. Pearce, M., Mutlu, B., Shah, J., Radwin, R.: Optimizing Makespan and ergonomics in integrating collaborative robots into manufacturing processes. IEEE Trans. Autom. Sci. Eng. 15(4), 1772–1784 (2018). https://doi.org/10.1109/TASE.2018.2789820 11. Anam, K., Al-Jumaily, A.A.: Active exoskeleton control systems: state of the art. Procedia Eng. 41, 988–994 (2012). https://doi.org/ 10.1016/j.proeng.2012.07.273 12. McFarland, D.J., Wolpaw, J.R.: Brain-computer interface operation of robotic and prosthetic devices. Computer. 41(10), 52–56 (2008). https://doi.org/10.1109/MC.2008.409 13. van Krevelen, D.W.F., Poelman, R.: A survey of augmented reality technologies, applications and limitations. Int. J. Virtual Real. 9(2), 1–20 (2010). https://doi.org/10.1155/2011/721827 14. Sawyer, B.D., Miller, D.B., Canham, M., Karwowksi, W.: Human factors and ergonomics in design of A3 : automation, autonomy, and artificial intelligence. In: Handbook of Human Factors and Ergonomic. Wiley, New York (2021) 15. Bergasa, L.M., Nuevo, J., Sotelo, M.A., Barea, R., Lopez, M.E.: Real-time system for monitoring driver vigilance. IEEE Trans. Intell. Transp. Syst. 7(1), 63–77 (2006) 16. Ghazizadeh, M., Lee, J.D.: Modeling driver acceptance: from feedback to monitoring and mentoring. In: Regan, M.A., Horberry, T., Stevens, A. (eds.) Driver Acceptance of New Technology: Theory, Measurement and Optimisation, pp. 51–72. CRC Press, Boca Raton (2013) 17. Diedenhofen, B., Musch, J.: PageFocus: using paradata to detect and prevent cheating on online achievement tests. Behav. Res. 49(4), 1444–1459 (2017). https://doi.org/10.3758/s13428016-0800-7 18. Woods, D.D.: The risks of autonomy: Doyles Catch. J. Cogn. Eng. Decis. Mak. 10(2), 131–133 (2016). https://doi.org/10.1177/ 1555343416653562 19. DSB: The Role of Autonomy in DoD Systems. Department of Defense, Defense Science Board (2012) 20. Chiou, E.K., Lee, E.K.: Trusting automation: Designing for responsivity and resilience. Hum. Factors, 00187208211009995 (2021). 21. Russell, S.: Human Compatible: AI and the Problem of Control. Penguin, New York (2019) 22. Wiener, N.: The Human Use of Human Beings: Cybernetics and Society. Eyre and Spottiswoode, London. Available: http://csi-india.org.in/document_library/ Gopal_The%20Human%20Use%20of%20Human%20Beings14f 0.pdf (1954). Accessed 31 Jan 2021 23. McFadden, S., Vimalachandran, A., Blackmore, E.: Factors affecting performance on a target monitoring task employing an automatic tracker. Ergonomics. 47(3), 257–280 (2003) 24. Wickens, C.D., Kessel, C.: Failure detection in dynamic systems. In: Human Detection and Diagnosis of System Failures, pp. 155– 169. Springer US, Boston (1981) 25. Zuboff, S.: In the Age of Smart Machines: the Future of Work, Technology and Power. Basic Books, New York (1988) 26. Endsley, M.R., Kiris, E.O.: The out-of-the-loop performance problem and level of control in automation. Hum. Factors. 37(2), 381–394 (1995). https://doi.org/10.1518/001872095779064555 27. Bhana, H.: Trust but verify. AeroSafety World, 5(5), 13–14 (2010)
451 28. Billings, C.E.: Aviation Automation: The Search for a HumanCentered Approach. Erlbaum, Mahwah (1997) 29. Alsaid, A., Lee, J.D., Price, M.A.: Moving into the loop: an investigation of drivers’ steering behavior in highly automated vehicles. Hum. Factors. 62(4), 671–683 (2019) 30. Merat, N., Seppelt, B., Louw, T., Engstrom, J., Lee, J.D., Johansson, E., Green, C.A., Katazaki, S., Monk, C., Itoh, M., McGehee, D., Sunda, T., Kiyozumi, U., Victor, T., Schieben, A., Andreas, K.: The ‘Out-of-the-Loop’ concept in automated driving: proposed definition, measures and implications. Cogn. Tech. Work. 21(1), 87–98 (2019). https://doi.org/10.1007/s10111-018-0525-8 31. Sarter, N.B., Woods, D.D., Billings, C.E.: Automation surprises. In: Salvendy, G. (ed.) Handbook of Human Factors and Ergonomics, 2nd edn, pp. 1926–1943. Wiley, New York (1997) 32. NTSB: Marine accident report – grounding of the U.S. Tankship Exxon Valdez on Bligh Reef, Prince William Sound, Valdez, 24 Mar 1989. NTSB, Washington, DC (1997) 33. Lee, J.D., Sanquist, T.F.: Augmenting the operator function model with cognitive operations: assessing the cognitive demands of technological innovation in ship navigation. IEEE Trans. Syst. Man Cybern. Syst. Hum. 30(3), 273–285 (2000) 34. Wiener, E.L.: Human Factors of Advanced Technology (‘Glass Cockpit’) Transport Aircraft. NASA Ames Research Center, NASA Contractor Report 177528 (1989) 35. Bainbridge, L.: Ironies of automation. Automatica. 19(6), 775– 779 (1983). https://doi.org/10.1016/0005-1098(83)90046-8 36. Cook, R.I., Woods, D.D., McColligan, E., Howie, M.B.: Cognitive consequences of ‘clumsy’ automation on high workload, high consequence human performance. In: SOAR 90, Space Operations, Applications and Research Symposium, NASA Johnson Space Center (1990) 37. Johannesen, L., Woods, D.D., Potter, S.S., Holloway, M.: Human Interaction with Intelligent Systems: an Overview and Bibliography. The Ohio State University, Columbus (1991) 38. Lee, J.D., Morgan, J.: Identifying clumsy automation at the macro level: development of a tool to estimate ship staffing requirements. In: Proceedings of the Human Factors and Ergonomics Society 38th Annual Meeting, Santa Monica, vol. 2, pp. 878–882 (1994) 39. Smith, P.J., Layton, C., McCoy, C.E.: Brittleness in the design of cooperative problem-solving systems: the effects on user performance. IEEE Trans. Syst. Man Cybern. 27(3), 360–371 (1997) 40. Hutchins, E.L.: Cognition in the Wild. The MIT Press, Cambridge, MA (1995) 41. Gao, J., Lee, J.D., Zhang, Y.: A dynamic model of interaction between reliance on automation and cooperation in multi-operator multi-automation situations. Int. J. Ind. Ergon. 36(5), 511–526 (2006) 42. Casner, S.M., Geven, R.W., Recker, M.P., Schooler, J.W.: The retention of manual flying skills in the automated cockpit. Hum. Factors. 56(8), 1506–1516 (2014). https://doi.org/10.1177/ 0018720814535628 43. Kirwan, B.: The role of the controller in the accelerating industry of air traffic management. Saf. Sci. 37(2–3), 151–185 (2001) 44. Parasuraman, R., Molloy, R., Singh, I.L.: Performance consequences of automation-induced ‘complacency’. Int. J. Aviat. Psychol. 3(1), 1–23 (1993). https://doi.org/10.1207/ s15327108ijap0301_1 45. Parasuraman, R., Mouloua, M., Molloy, R.: Monitoring automation failures in human-machine systems. In: Mouloua, M., Parasuraman, R. (eds.) Human Performance in Automated Systems: Current Research and Trends, pp. 45–49. Lawrence Erlbaum Associates, Hillsdale (1994) 46. 
Meyer, J.: Effects of warning validity and proximity on responses to warnings. Hum. Factors. 43(4), 563–572 (2001) 47. Parasuraman, R., Riley, V.A.: Humans and automation: use, misuse, disuse, abuse. Hum. Factors. 39(2), 230–253 (1997)
452 48. Dzindolet, M.T., Pierce, L.G., Beck, H.P., Dawe, L.A., Anderson, B.W.: Predicting misuse and disuse of combat identification systems. Mil. Psychol. 13(3), 147–164 (2001) 49. Lee, J.D., Moray, N.: Trust, self-confidence, and operators’ adaptation to automation. Int. J. Hum.-Comput. Stud. 40(1), 153–184 (1994). https://doi.org/10.1006/ijhc.1994.1007 50. Lee, J.D., See, K.A.: Trust in automation: designing for appropriate reliance. Hum. Factors. 46(1), 50–80 (2004) 51. Reeves, B., Nass, C.: The Media Equation: how People Treat Computers, Television, and New Media like Real People and Places. Cambridge University Press, New York (1996) 52. Sheridan, T.B., Ferrell, W.R.: Man-Machine Systems: Information, Control, and Decision Models of Human Performance. MIT Press, Cambridge, MA (1974) 53. Sheridan, T.B., Hennessy, R.T.: Research and Modeling of Supervisory Control Behavior. National Academy Press, Washington, DC (1984) 54. Deutsch, M.: The effect of motivational orientation upon trust and suspicion. Hum. Relat. 13, 123–139 (1960) 55. Deutsch, M.: Trust and suspicion. Confl. Resolut. III(4), 265–279 (1969) 56. Rempel, J.K., Holmes, J.G., Zanna, M.P.: Trust in close relationships. J. Pers. Soc. Psychol. 49(1), 95–112 (1985) 57. Ross, W.H., LaCroix, J.: Multiple meanings of trust in negotiation theory and research: a literature review and integrative model. Int. J. Confl. Manag. 7(4), 314–360 (1996) 58. Rotter, J.B.: A new scale for the measurement of interpersonal trust. J. Pers. 35(4), 651–665 (1967) 59. Lee, J.D., Moray, N.: Trust, control strategies and allocation of function in human-machine systems. Ergonomics. 35(10), 1243– 1270 (1992). https://doi.org/10.1080/00140139208967392 60. Lewandowsky, S., Mundy, M., Tan, G.P.A.: The dynamics of trust: comparing humans to automation. J. Exp. Psychol. Appl. 6(2), 104–123 (2000) 61. Muir, B.M., Moray, N.: Trust in automation. Part II. Experimental studies of trust and human intervention in a process control simulation. Ergonomics. 39(3), 429–460 (1996) 62. de Vries, P., Midden, C., Bouwhuis, D.: The effects of errors on system trust, self-confidence, and the allocation of control in route planning. Int. J. Hum.-Comput. Stud. 58(6), 719–735 (2003). https://doi.org/10.1016/S1071-5819(03)00039-9 63. Gefen, D., Karahanna, E., Straub, D.W.: Trust and TAM in online shopping: an integrated model. MIS Q. 27(1), 51–90 (2003) 64. Kim, J., Moon, J.Y.: Designing towards emotional usability in customer interfaces – trustworthiness of cyber-banking system interfaces. Interact. Comput. 10(1), 1–29 (1998) 65. Wang, Y.D., Emurian, H.H.: An overview of online trust: concepts, elements, and implications. Comput. Hum. Behav. 21(1), 105–125 (2005) 66. Sheridan, T.B.: Supervisory control. In: Salvendy, G. (ed.) Handbook of Human Factors, pp. 1243–1268. Wiley, New York (1987) 67. Sheridan, T.B.: Telerobotics, Automation, and Human Supervisory Control. The MIT Press, Cambridge, MA (1992) 68. Eprath, A.R., Curry, R.E., Ephrath, A.R., Curry, R.E.: Detection of pilots of system failures during instrument landings. IEEE Trans. Syst. Man Cybern. 7(12), 841–848 (1977) 69. Gibson, J.J.: Observations on active touch. Psychol. Rev. 69(6), 477–491 (1962) 70. Jagacinski, R.J., Flach, J.M.: Control Theory for Humans: Quantitative Approaches to Modeling Performance. Lawrence Erlbaum Associates, Mahwah (2003) 71. Bainbridge, L.: Mathematical equations of processing routines. In: Rasmussen, J., Rouse, W.B. (eds.) Human Detection and Diagnosis of System Failures, pp. 259–286. 
Plenum Press, New York (1981)
J. D Lee and B. D Seppelt 72. Moray, N.: Human factors in process control. In: Salvendy, G. (ed.) The Handbook of Human Factors and Ergonomics, 2nd edn. Wiley, New York (1997) 73. Evans, L.: Traffic Safety. Science Serving Society, Bloomfield Hills/Michigan (2004) 74. Wilde, G.J.S.: Risk homeostasis theory and traffic accidents: propositions, deductions and discussion of dissension in recent reactions. Ergonomics. 31(4), 441–468 (1988) 75. Wilde, G.J.S.: Accident countermeasures and behavioral compensation: the position of risk homeostasis theory. J. Occup. Accid. 10(4), 267–292 (1989) 76. Perrow, C.: Normal Accidents. Basic Books, New York (1984) 77. Tenner, E.: Why Things Bite Back: Technology and the Revenge of Unanticipated Consequences. Knopf, New York (1996) 78. Sagberg, F., Fosser, S., Saetermo, I.a.F., Sktermo, F.: An investigation of behavioural adaptation to airbags and antilock brakes among taxi drivers. Accid. Anal. Prev. 29(3), 293–302 (1997) 79. Stanton, N.A., Pinto, M.: Behavioural compensation by drivers of a simulator when using a vision enhancement system. Ergonomics. 43(9), 1359–1370 (2000) 80. Mosier, K.L., Skitka, L.J., Heers, S., Burdick, M.: Automation bias: decision making and performance in high-tech cockpits. Int. J. Aviat. Psychol. 8(1), 47–63 (1998) 81. Skitka, L.J., Mosier, K.L., Burdick, M.D.: Accountability and automation bias. Int. J. Hum.-Comput. Stud. 52(4), 701–717 (2000) 82. Skitka, L.J., Mosier, K.L., Burdick, M.: Does automation bias decision-making? Int. J. Hum.-Comput. Stud. 51(5), 991–1006 (1999) 83. Sheridan, T.B.: Humans and Automation. Wiley, New York (2002) 84. Vicente, K.J.: Cognitive Work Analysis: toward Safe, Productive, and Healthy Computer-Based Work. Lawrence Erlbaum Associates, Mahwah/London (1999) 85. Lee, J.D.: Human factors and ergonomics in automation design. In: Salvendy, G. (ed.) Handbook of Human Factors and Ergonomics, pp. 1570–1596. Wiley, Hoboken (2006) 86. Lee, J.D., Sanquist, T.F.: Maritime automation. In: Parasuraman, R., Mouloua, M. (eds.) Automation and Human Performance: Theory and Applications, pp. 365–384. Lawrence Erlbaum Associates, Mahwah (1996) 87. Parasuraman, R., Sheridan, T.B., Wickens, C.D.: A model for types and levels of human interaction with automation. IEEE Trans. Syst. Man Cybern. Syst. Hum. 30(3), 286–297 (2000) 88. Dzindolet, M.T., Pierce, L.G., Beck, H.P., Dawe, L.A.: The perceived utility of human and automated aids in a visual detection task. Hum. Factors. 44(1), 79–94 (2002) 89. Yeh, M., Wickens, C.D.: Display signaling in augmented reality: effects of cue reliability and image realism on attention allocation and trust calibration. Hum. Factors. 43, 355–365 (2001) 90. Bliss, J.P.: Alarm reaction patterns by pilots as a function of reaction modality. Int. J. Aviat. Psychol. 7(1), 1–14 (1997) 91. Bliss, J.P., Acton, S.A.: Alarm mistrust in automobiles: how collision alarm reliability affects driving. Appl. Ergon. 34(6), 499–509 (2003). https://doi.org/10.1016/j.apergo.2003.07.003 92. Guerlain, S.A., Smith, P., Obradovich, J., Rudmann, S., Strohm, P., Smith, J., Svirbely, J.: Dealing with brittleness in the design of expert systems for immunohematology. Immunohematology. 12(3), 101–107 (1996) 93. Sarter, N.B., Woods, D.D.: Decomposing automation: autonomy, authority, observability and perceived animacy. In: Mouloua, M., Parasuraman, R. (eds.) Human Performance in Automated Systems: Current Research and Trends, pp. 22–27. Lawrence Erlbaum Associates, Hillsdale (1994) 94. 
Olson, W.A., Sarter, N.B.: Automation management strategies: pilot preferences and operational experiences. Int. J. Aviat. Psychol. 10(4), 327–341 (2000). https://doi.org/10.1207/ S15327108IJAP1004_2
95. Sarter, N.B., Woods, D.D.: Team play with a powerful and independent agent: operational experiences and automation surprises on the Airbus A-320. Hum. Factors. 39(3), 390–402 (1997) 96. Sarter, N.B., Woods, D.D.: Team play with a powerful and independent agent: a full-mission simulation study. Hum. Factors. 42(3), 390–402 (2000) 97. Lewis, M.: Designing for human-agent interaction. AI Mag. 19(2), 67–78 (1998) 98. Jones, P.M., Jacobs, J.L.: Cooperative problem solving in humanmachine systems: theory, models, and intelligent associate systems. IEEE Trans. Syst. Man Cybern. Part C Appl. Rev. 30(4), 397–407 (2000) 99. Bocionek, S.R.: Agent systems that negotiate and learn. Int. J. Hum.-Comput. Stud. 42(3), 265–288 (1995) 100. Sarter, N.B.: The need for multisensory interfaces in support of effective attention allocation in highly dynamic event-driven domains: the case of cockpit automation. Int. J. Aviat. Psychol. 10(3), 231–245 (2000) 101. Miller, T.: Explanation in artificial intelligence: insights from the social sciences. Artif. Intell. 267, 1–38 (2019). https://doi.org/ 10.1016/j.artint.2018.07.007 102. Rudin, C.: Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell. 1(5), 206–215 (2019). https://doi.org/10.1038/ s42256-019-0048-x 103. Inagaki, T.: Automation and the cost of authority. Int. J. Ind. Ergon. 31(3), 169–174 (2003) 104. Moray, N., Inagaki, T., Itoh, M.: Adaptive automation, trust, and self-confidence in fault management of time-critical tasks. J. Exp. Psychol. Appl. 6(1), 44–58 (2000) 105. Endsley, M.R.: Autonomous driving systems: a preliminary naturalistic study of the Tesla Model S. J. Cogn. Eng. Decis. Mak. 11(3), 225–238 (2017). https://doi.org/10.1177/ 1555343417695197 106. NTSB: Collision Between a Car Operating with Automated Vehicle Control Systems and a Tractor-Semitrailer Truck Near Williston, Florida. Accident Report NYST/HAR-17/-2 (2017) 107. Liang, C.Y., Peng, H.: Optimal adaptive cruise control with guaranteed string stability. Veh. Syst. Dyn. 32(4–5), 313–330 (1999) 108. Liang, C.Y., Peng, H.: String stability analysis of adaptive cruise controlled vehicles. JSME Int. J. Ser. C-Mech. Syst. Mach. Elem. Manuf. 43(3), 671–677 (2000) 109. Lee, J.D., Gao, J.: Trust, information technology, and cooperation in supply chains. Supply Chain Forum: Int. J. 6(2), 82–89 (2005) 110. Gao, J., Lee, J.D.: Information sharing, trust, and reliance – a dynamic model of multi-operator multi-automation interaction. In: Proceedings of the 5th Conference on Human Performance, Situation Awareness and Automation Technology, Mahwah, vol. 2, pp. 34–39 (2004) 111. Hollan, J., Hutchins, E.L., Kirsh, D.: Distributed cognition: toward a new foundation for human-computer interaction research. ACM Trans. Comput.-Hum. Interact. 7(2), 174–196 (2000) 112. Flach, J.M.: The ecology of human-machine systems I: introduction. Ecol. Psychol. 2(3), 191–205 (1990) 113. Kantowitz, B.H., Sorkin, R.D.: Allocation of functions. In: Salvendy, G. (ed.) Handbook of Human Factors, pp. 355–369. Wiley, New York (1987) 114. Kirlik, A., Miller, R.A., Jagacinski, R.J.: Supervisory control in a dynamic and uncertain environment: a process model of skilled human-environment interaction. IEEE Trans. Syst. Man Cybern. 23(4), 929–952 (1993) 115. Vicente, K.J., Rasmussen, J.: The Ecology of Human-Machine Systems II: Mediating ‘Direct Perception’ in Complex Work Domains. Risø National Laboratory and Technical University of Denmark (1990)
453 116. Hogarth, R.M., Lejarraga, T., Soyer, E.: The two settings of kind and wicked learning environments. Curr. Dir. Psychol. Sci. 24(5), 379–385 (2015). https://doi.org/10.1177/0963721415591878 117. Tomsett, R., Preece, A., Braines, D., Cerutti, F., Chakraborty, S.: Rapid trust calibration through interpretable and uncertaintyaware AI. Patterns. 1(4), 100049 (2020). https://doi.org/10.1016/ j.patter.2020.100049 118. Lee, J.D., Wickens, C.D., Liu, Y., Boyle, L.N.: Designing for People: An Introduction to Human Factors Engineering. CreateSpace, Charleston (2017) 119. Woods, D.D., Patterson, E.S., Corban, J., Watts, J.: Bridging the gap between user-centered intentions and actual design practice. In: Proceedings of the Human Factors and Ergonomics Society 40th Annual Meeting, Santa Monica, vol. 2, pp. 967–971 (1996) 120. Bosma, H., Marmot, M.G., Hemingway, H., Nicholson, A.C., Brunner, E., Stansfeld, S.A.: Low job control and risk of coronary heart disease in Whitehall II (prospective cohort) study. Br. Med. J. 314(7080), 558–565 (1997) 121. Bosma, H., Peter, R., Siegrist, J., Marmot, M.G.: Two alternative job stress models and the risk of coronary heart disease. Am. J. Public Health. 88(1), 68–74 (1998) 122. Morgeson, F.P., Campion, M.A., Bruning, P.F.: Job and team design. In: Salvendy, G. (ed.) Handbook of Human Factors and Ergonomics, 4th edn, pp. 441–474. Wiley, New York (2012). https://doi.org/10.1002/9781118131350.ch15 123. Hackman, J.R.R., Oldham, G.R.: Motivation through the design of work: test of a theory. Organ. Behav. Hum. Perform. 16(2), 250– 279 (1976). https://doi.org/10.1016/0030-5073(76)90016-7 124. Herzberg, F.I.: Work and the Nature of Man. World Press, Oxford, UK (1966) 125. Smith, M.J., Sainfort, P.C.: A balance theory of job design for stress reduction. Int. J. Ind. Ergon. 4(1), 67–79 (1989). https:// doi.org/10.1016/0169-8141(89)90051-6 126. Oldman, G.R., Hackman, J.R.: Not what it was and not what it will be: the future of job design research. J. Organ. Behav. 31, 463–479 (2010) 127. Klein, G.A., Woods, D.D., Bradshaw, J.M., Hoffman, R.R., Feltovich, P.J.: Ten challenges for making automation a ‘Team Player’ in joint human-agent activity. IEEE Intell. Syst. 19(6), 91– 95 (2004) 128. Kaber, D.B.: Issues in human-automation interaction modeling: presumptive aspects of frameworks of types and levels of automation. J. Cogn. Eng. Decis. Mak. 12(1), 7–24 (2018) 129. Sharit, J.: Perspectives on computer aiding in cognitive work domains: toward predictions of effectiveness and use. Ergonomics. 46(1–3), 126–140 (2003) 130. Dearden, A., Harrison, M., Wright, P.: Allocation of function: scenarios, context and the economics of effort. Int. J. Hum.Comput. Stud. 52(2), 289–318 (2000) 131. Dekker, S.W., Woods, D.D.: Maba-Maba or abracadabra? Progress on human-automation coordination. Cogn. Tech. Work. 4(4), 1–13 (2002) 132. Sheridan, T.B., Studies, H.: Function allocation: algorithm, alchemy or apostasy? Int. J. Hum.-Comput. Stud. 52(2), 203–216 (2000). https://doi.org/10.1006/ijhc.1999.0285 133. Hollnagel, E., Bye, A.: Principles for modelling function allocation. Int. J. Hum.-Comput. Stud. 52(2), 253–265 (2000) 134. Kirlik, A.: Modeling strategic behavior in human-automation interaction: why an “aid” can (and should) go unused. Hum. Factors. 35(2), 221–242 (1993) 135. Anderson, J.R., Libiere, C.: Atomic Components of Thought. Lawrence Erlbaum, Hillsdale (1998) 136. Byrne, M.D., Kirlik, A.: Using computational cognitive modeling to diagnose possible sources of aviation error. 
Int. J. Aviat. Psychol. 15(2), 135–155 (2005)
454 137. Degani, A., Kirlik, A.: Modes in human-automation interaction: Initial observations about a modeling approach. In: IEEE Transactions on System, Man, and Cybernetics, Vancouver, vol. 4, p. to appear (1995) 138. Degani, A., Heymann, M.: Formal verification of humanautomation interaction. Hum. Factors. 44(1), 28–43 (2002) 139. Seppelt, B.D., Lee, J.D.: Modeling driver response to imperfect vehicle control automation. Procedia Manuf. 3, 2621–2628 (2015) 140. Norman, D.A.: The ‘problem’ with automation: inappropriate feedback and interaction, not ‘over-automation’. Philosophical Transactions of the Royal Society London, Series B, Biological Sciences. 327, 585–593 (1990). https://doi.org/10.1098/ rstb.1990.0101. Great Britain, Human Factors in High Risk Situations, Human Factors in High Risk Situations 141. Entin, E.B.E.E., Serfaty, D.: Optimizing aided target-recognition performance. In: Proceedings of the Human Factors and Ergonomics Society, Santa Monica, vol. 1, pp. 233–237 (1996) 142. Sklar, A.E., Sarter, N.B.: Good vibrations: tactile feedback in support of attention allocation and human-automation coordination in event-driven domains. Hum. Factors. 41(4), 543–552 (1999) 143. Nikolic, M.I., Sarter, N.B.: Peripheral visual feedback: a powerful means of supporting effective attention allocation in event-driven, data-rich environments. Hum. Factors. 43(1), 30–38 (2001) 144. Seppelt, B.D., Lee, J.D.: Making adaptive cruise control (ACC) limits visible. Int. J. Hum.-Comput. Stud. 65(3), 192–205 (2007). https://doi.org/10.1016/j.ijhcs.2006.10.001 145. Flach, J.M.: Ready, fire, aim: a ‘meaning-processing’ approach to display design. In: Gopher, D., Koriat, A. (eds.) Attention and Performance XVII: Cognitive Regulation of Performance: Interaction of Theory and Application, vol. 17, pp. 197–221. MIT Press, Cambridge, MA (1999) 146. Guerlain, S.A., Jamieson, G.A., Bullemer, P., Blair, R.: The MPC elucidator: a case study in the design for human- automation interaction. IEEE Trans. Syst. Man Cybern. Syst. Hum. 32(1), 25– 40 (2002) 147. Briggs, P., Burford, B., Dracup, C.: Modeling self-confidence in users of a computer-based system showing unrepresentative design. Int. J. Hum.-Comput. Stud. 49(5), 717–742 (1998) 148. Fogg, B.J., Tseng, H.: The elements of computer credibility. In Proceedings of the SIGCHI conference on Human Factors in Computing Systems, 80–87 (1999) 149. Fogg, B.J., Marshall, J., Laraki, O., Ale, O., Varma, C., Fang, N., Jyoti, P., Rangnekar, A., Shon, J., Swani, R., Treinen, M.: What makes web sites credible? A report on a large quantitative study. In: CHI Conference on Human Factors in Computing Systems, Seattle, pp. 61–68 (2001) 150. Fogg, B.J., Marshall, J., Kameda, T., Solomon, J., Rangnekar, A., Boyd, J., Brown, B.: Web credibility research: a method for online experiments and early study results. In: CHI Conference on Human Factors in Computing Systems, pp. 293–294 (2001) 151. Abbink, D.A., Mulder, M., Boer, E.R.: Haptic shared control: smoothly shifting control authority? Cogn. Tech. Work. 14(1), 19– 28 (2011). https://doi.org/10.1007/s10111-011-0192-5 152. Mars, F., Deroo, M., Hoc, J.M.: Analysis of human-machine cooperation when driving with different degrees of haptic shared control. IEEE Trans. Haptics. 7(3), 324–333 (2014). https://doi.org/ 10.1109/TOH.2013.2295095 153. Riley, V.A.: A new language for pilot interfaces. Ergon. Des. 9(2), 21–27 (2001) 154. Goodrich, M.A., Boer, E.R.: Model-based human-centered task automation: a case study in ACC system design. 
John D. Lee graduated with degrees in psychology and mechanical engineering from Lehigh University and a Ph.D. in mechanical engineering from the University of Illinois. He is a Professor in the Department of Industrial Engineering at the University of Wisconsin-Madison. His research focuses on the safety and acceptance of human-machine systems by considering how technology mediates attention.
Bobbie D. Seppelt graduated with an MS degree in Engineering Psychology from the University of Illinois and a Ph.D. in Industrial Engineering from the University of Iowa. Her research interests include interface design for driver support systems, measurement and modeling of driver attention, and understanding the influence of mental models on automation trust and use.
Teleoperation and Level of Automation
20
Luis Basañez, Emmanuel Nuño, and Carlos I. Aldana
Contents
20.1 Introduction . . . 457
20.2 Historical Background and Motivation . . . 458
20.3 Levels of Automation and General Schemes . . . 459
20.3.1 Levels of Automation . . . 459
20.3.2 Bilateral Teleoperation . . . 460
20.3.3 Cooperative Teleoperation Systems . . . 463
20.4 Challenges and Solutions . . . 464
20.4.1 Control Objectives and Algorithms . . . 464
20.4.2 Communication Channels . . . 466
20.4.3 Situation Awareness and Immersion . . . 467
20.4.4 Teleoperation Aids . . . 469
20.4.5 Teleoperation of Unmanned Aerial Vehicles/Drones . . . 472
20.5 Application Fields . . . 473
20.5.1 Industry and Construction . . . 473
20.5.2 Agriculture . . . 474
20.5.3 Mining . . . 474
20.5.4 Underwater . . . 474
20.5.5 Space . . . 475
20.5.6 Healthcare and Surgery . . . 476
20.5.7 Assistance . . . 477
20.5.8 Humanitarian Demining . . . 477
20.5.9 Education . . . 477
20.6 Conclusions and Trends . . . 478
References . . . 478
Abstract
This chapter presents an overview of the teleoperation of robotic systems, starting with a historical background and positioning these systems on a scale of levels of automation. Next, as a representative example, an up-to-date specific bilateral teleoperation scheme is described in order to illustrate the typical components and functional modules of this kind of system. As a natural extension of bilateral teleoperators, cooperative teleoperation systems are introduced. Some specific topics in the field are discussed in particular, for instance, the control objectives and algorithms for both bilateral teleoperators and cooperative teleoperation systems, the communication channels, the use of graphical simulation and task planning, and the usefulness of virtual and augmented reality. The last part of the chapter includes a description of the most typical application fields, such as industry and construction, agriculture, mining, underwater, space, healthcare and surgery, assistance, humanitarian demining, and education, where some of the pioneering, significant, and latest contributions are briefly presented. Finally, some conclusions and the trends in the field close the chapter.

Keywords
Teleoperation systems · Cooperative teleoperation · Levels of automation · Teleoperation control · Augmented reality · Relational positioning

L. Basañez, Institute of Industrial and Control Engineering (IOC), Universitat Politècnica de Catalunya, BarcelonaTech (UPC), Barcelona, Spain. e-mail: [email protected]
E. Nuño and C. I. Aldana, Department of Computer Science, University of Guadalajara, Guadalajara, Mexico. e-mail: [email protected]; [email protected]
20.1 Introduction
A bilateral teleoperator, in which two robots, called local and remote, are connected through a communication channel and one of them, the local robot, is handled by a human operator, constitutes an interesting type of collaborative framework between humans and robots. The remote robot has to track the local robot motion, and its interaction with the environment is reflected back to the human operator via the local robot. The teleoperation of robots can also be considered an extension of the human sensory and acting capabilities, since it allows the human to obtain information and perform tasks at a distance, in a remote environment. Teleoperation also
improves the accuracy of human movements, as in robotic surgery, and makes it possible to deal with dangerous tasks and environments, such as explosive deactivation and nuclear plant accident recovery. In this respect, teleoperation combines the best of human and robot characteristics [1, 2].

A natural extension of teleoperators is cooperative teleoperation systems, in which several robots and humans collaborate in order to perform more complex and difficult tasks that cannot be accomplished by one robot. In both cases, teleoperators and cooperative systems, the interconnection between the different agents is a crucial issue, and the Internet and wireless links open new and exciting possibilities in this field. Another key aspect of these systems is their control, which must guarantee both stability and performance while coping with drawbacks such as communication delays and loss of information in the communication channels. An exposition of the main proposed control strategies and algorithms, both in the robot joint space and in the operational space, is one of the objectives of this chapter.

Section 20.2 presents the beginnings of teleoperation and the main steps in its development. Section 20.3 briefly introduces the levels of automation concept and some of its more recognized scales in the human-automation interaction (HAI) context in which robot teleoperation fits; then, a general scheme of a bilateral teleoperator is described together with its typical components and functional modules. Section 20.4 discusses important challenges in the field and their current solutions, specifically, control objectives and algorithms for both bilateral teleoperators and cooperative teleoperation systems, communication channels, situation awareness and immersion, aids for the human operators, and the teleoperation of unmanned aerial vehicles/drones. The main application fields, such as industry and construction, agriculture, mining, underwater, space, healthcare and surgery, assistance, humanitarian demining, and education, are the object of Sect. 20.5. Finally, some conclusions and the trends in the field close the chapter.
20.2 Historical Background and Motivation
The term teleoperation comes from the combination of the Greek word τηλε- (tele-), i.e., off-site or remote, and the Latin word operatio, -onis (operation), i.e., something done. So, teleoperation means performing some work or action from some distance away. Although, in this sense, teleoperation could be applied to any operation performed remotely, the term is most commonly associated with robotics and indicates the driving of manipulators and mobile robots from a place far from these machines' location.
Many topics are involved in a teleoperated robotic system, including human-machine interaction, distributed control laws, communications, graphic simulation, task planning, virtual and augmented reality, and dexterous grasping and manipulation. The fields of application of these systems are also very wide, and teleoperation offers great possibilities for profitable use.

Humans have long used a range of tools to increase their manipulation capabilities. In the beginning these tools were simple tree branches, which evolved into long poles with tweezers, such as the blacksmith's tools that help to handle hot pieces of iron. These developments were the ancestors of local-remote robotic systems, where the remote robot reproduces the local robot motions controlled by a human operator [3]. Teleoperated robotic systems allow humans to interact with robotic manipulators and vehicles and to handle objects located in a remote environment, extending human manipulation capabilities to far-off locations, allowing the execution of quite complex tasks, and avoiding dangerous situations.

The beginnings of teleoperation can be traced back to the beginnings of radio communication, when Nikola Tesla developed what can be considered the first teleoperated apparatus, dated November 8, 1898. This development was reported in US Patent No. 613,809, Method of and Apparatus for Controlling Mechanism of Moving Vessels or Vehicles. However, bilateral teleoperation systems did not appear until the late 1940s. The first bilateral manipulators were developed for handling radioactive materials. Outstanding pioneers were Raymond Goertz and his colleagues at the Argonne National Laboratory outside Chicago and Jean Vertut at a counterpart nuclear engineering laboratory near Paris. These first devices were mechanically coupled, and the slave manipulator mimicked the master motions, both being very similar mechanisms. It was not until the mid-1950s that Goertz presented the first electrically coupled master-slave manipulator [4].

In the 1960s the applications were extended to underwater teleoperation, where the submersible devices carried cameras and the operator could watch the remote robot and its interaction with the submerged environment. The beginnings of space teleoperation date from the 1970s, and in this application the presence of time delays brought instability problems. Technology has evolved with giant steps, resulting in better robotic manipulators and, in particular, improved communication means, from mechanical to electrical transmission, using optical fiber, radio signals, and the Internet, which practically removes any distance limitation.

Today, the applications of teleoperation systems are found in a large number of fields. The most illustrative are space, underwater, medicine, and hazardous environments, among others, which are discussed in Sect. 20.5.
[Fig. 20.1 (Fitts List) contrasts the abilities in which humans surpass machines (detecting small amounts of visual or acoustic energy, perceiving patterns of light or sound, improvising and using flexible procedures, storing very large amounts of information for long periods and recalling relevant facts at the appropriate time, reasoning inductively, and exercising judgment) with those in which machines surpass humans (responding quickly to control signals and applying great force smoothly and precisely, performing repetitive routine tasks, storing information briefly and then erasing it completely, reasoning deductively including computation, and handling highly complex simultaneous operations).]
Fig. 20.1 Fitts List of things humans can do better than machines and vice versa [6]. (Courtesy of the National Academy Press)
20.3 Levels of Automation and General Schemes
In order to illustrate the typical components and functional modules of a teleoperation system, this section describes an up-to-date specific bilateral teleoperation scheme and introduces cooperative teleoperation systems. Both kinds of systems are representative cases of human-robot interaction (HRI), and, for that reason, it is useful first to introduce the Levels of Automation concept and to present some of its more recognized taxonomies.
Table 20.1 Scale of degrees of automation [4]. Each succeeding level of the scale below assumes some previous ones (when ANDed) or imposes more restrictive constraints (when ORed):
1. The computer offers no assistance; human must do it all.
2. The computer offers a complete set of action alternatives, and
3. narrows the selection down to a few, or
4. suggests one, and
5. executes that suggestion if the human approves, or
6. allows the human a restricted time to veto before automatic execution, or
7. executes automatically, then necessarily informs the human, or
8. informs him after execution only if he asks, or
9. informs him after execution if it, the computer, decides to.
10. The computer decides everything and acts autonomously, ignoring the human.
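For readers who implement supervisory interfaces, the scale of Table 20.1 can also be written down as a small data structure that the software consults before acting. The following Python sketch is purely illustrative; the enum and helper names are hypothetical and do not come from any system cited in this chapter.

```python
from enum import IntEnum

class DegreeOfAutomation(IntEnum):
    """Sheridan-Verplanck scale of Table 20.1, encoded as an enumeration."""
    HUMAN_DOES_ALL = 1         # the computer offers no assistance
    OFFERS_ALTERNATIVES = 2    # complete set of action alternatives
    NARROWS_SELECTION = 3      # narrows the selection down to a few
    SUGGESTS_ONE = 4           # suggests one alternative
    EXECUTES_IF_APPROVED = 5   # executes the suggestion if the human approves
    VETO_WINDOW = 6            # restricted time for the human to veto
    EXECUTES_THEN_INFORMS = 7  # executes, then necessarily informs the human
    INFORMS_IF_ASKED = 8       # informs after execution only if asked
    INFORMS_IF_IT_DECIDES = 9  # informs after execution if the computer decides to
    FULL_AUTONOMY = 10         # decides everything, ignoring the human

def needs_human_consent(level: DegreeOfAutomation) -> bool:
    """Below level 6 the computer never acts without explicit human approval."""
    return level <= DegreeOfAutomation.EXECUTES_IF_APPROVED

print(needs_human_consent(DegreeOfAutomation.VETO_WINDOW))  # False
```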
20.3.1 Levels of Automation One of the key aspects to be considered in a robotic teleoperation system is the sharing out of the decision-making and control authority between the human and the machine [5]. In general, leaving aside the completely autonomous systems, a range of situations can be envisaged, covering from plain teleoperation, where every action of the local manipulator conducted by the human operator is mimicked by the remote robot, to supervisory control, corresponding to schemes in which the system generates options, selects the option to implement, and carries out that action and the human monitors the process and only intervenes if necessary. Function allocation is then one of the critical issues in HRI in which the robot teleoperation fits. The question whether tasks should be performed by humans or by machines has been a research object for more than a half a century. One
early example is Paul Fitts [6], who, together with some fellow founding fathers of the Human Factors discipline, proposed in 1951 the famous Fitts List (Fig. 20.1) compiling the kinds of things men can do better than machines at that time and vice versa, the so-called MABA-MABA lists (Men Are Better At-Machines Are Better At) [7]. Fitts’s work was followed by several seminal studies on how HRI could be specified for complex systems for which fully autonomous capability was not yet possible. In 1978 Sheridan and Verplanck [8] described different ways in which a decision could be made and implemented by the coordinated actions of a human operator and a computer, constituting different Levels of Automation (LOA), and they developed a LOA taxonomy which incorporates ten levels. A later refinement of this scale [4] is shown in Table 20.1.
Table 20.2 Endsley and Kaber Levels of Automation [9]
1. Manual control: The human performs all tasks: monitoring the state, generating options, selecting the option to perform (decision-making), and physically implementing it.
2. Action support: The system assists the operator with performance of the selected action, although some human control actions are required.
3. Batch processing: The human generates and selects the options to be performed. Then they are turned over to the system to be carried out automatically.
4. Shared control: Both the human and the system generate possible decision options. The human still retains full control over the selection of which option to implement; however, carrying out the actions is a shared task.
5. Decision support: The system generates a list of decision options that the human can select from, or the human may generate his or her own options. Once the human has selected an option, the system implements it.
6. Blended decision-making: The system generates a list of options that it selects from and carries out if the human consents, but the human may select another option generated by the system or by herself/himself, to be carried out by the system.
7. Rigid system: The system presents a set of actions to the human, whose role is to select from among this set. The system implements the selected action.
8. Automated decision-making: The system selects the best option and carries it out, based upon a list it generates (augmented by alternatives suggested by the human).
9. Supervisory control: The system generates options, selects, and carries out a desired option. The human monitors the system and intervenes if necessary (shifting, in this case, to the decision support LOA).
10. Full automation: The system carries out all actions.
This work served as the foundation for several subsequent LOA taxonomies [9–11], including reformulations of levels based on broader types of information processing automation. For instance, Endsley and Kaber [9], in order to represent general and pervasive stages of information processing in HRI, considered four functions: monitoring system status, generating strategy options, selecting a particular option or strategy, and implementing the chosen strategy. By assigning these functions to the human or the computer or a combination of the two, they formulated ten levels of automation (Table 20.2). Additionally, intermediary levels of automation have also been proposed in order to maintain operator involvement in system performance, improving in this way the situation awareness and reducing the out-of-theloop performance problems [9]. Some authors [11] have addressed criticisms on the utility or appropriateness of the Levels of Automation taxonomies, but, in general, they agree that these taxonomies
have been highly useful in both informing research on human -automation interaction (HAI) and in providing guidance for the design of automated and autonomous systems [10]. The benefits of LOA taxonomies have been realized on several practical and theoretical grounds. An interesting taxonomy of Levels of Robot Autonomy (LORA), in the context of teleoperation systems, has been suggested by Beer, Fisk, and Rogers [12]. Each level is specified from the perspective of the interaction between the human and the robot and the roles each play (Table 20.3). The function allocation between robot and human is expressed using the primitives sense, plan, act into which any task, no matter how simple or complex, can be divided [13]. Level Manual represents a situation where no robot is involved in performing the task, and in level Full Automation, the human has no role. The teleoperation system described in Sect. 20.3.2 could be classified in level 4. Shared Control of the Endsley and Kaber Levels of Automation Taxonomy (Table 20.2) and in level Shared Control with Human Initiative of the Beer, Fisk, and Rogers Taxonomy of Levels of Robot Autonomy for HRI (Table 20.3).
20.3.2 Bilateral Teleoperation A modern teleoperation system is composed of several functional modules according to the aim of the system. As a paradigm of an up-to-date teleoperated robotic system, the one developed at the Robotics Laboratory of the Institute of Industrial and Control Engineering (IOC), Universitat Politècnica de Catalunya - BarcelonaTech (UPC), Spain [14, 15], is described below. The outline of the IOC teleoperation system is represented in Fig. 20.2. The diagram contains two large blocks that correspond to the local station, where the human operator and the local robot (haptic device) are located, and the remote station, which includes one or more industrial manipulators as remote robots. There is also an intermediate block representing the communication channel. Although this system has been originally designed for a robot manipulator, this could be replaced by a mobile robot or by a mobile manipulator (a manipulator mounted on a mobile platform) as long as its mobility does not exhibit nonholonomic restrictions.
Local Station In the local station (Fig. 20.2), the human operator physically interacts with a haptic device and with the teleoperation aids modules by means of a graphical user interface. The modules of this station are described below.
Table 20.3 Beer, Fisk, and Rogers taxonomy of Levels of Robot Autonomy for HRI [12] (H = Human, R = Robot)
Manual (Sense: H, Plan: H, Act: H). The human performs all aspects of the task, including sensing the environment, generating plans/options/goals, and implementing processes.
Teleoperation (Sense: H/R, Plan: H, Act: H/R). The robot assists the human with action implementation. However, sensing and planning are allocated to the human. For example, a human may teleoperate a robot, but the human may choose to prompt the robot to assist with some aspects of a task (e.g., gripping objects).
Assisted teleoperation (Sense: H/R, Plan: H, Act: H/R). The human assists with all aspects of the task. However, the robot senses the environment and chooses to intervene with the task. For example, if the user navigates the robot too close to an obstacle, the robot will automatically steer to avoid collision.
Batch processing (Sense: H/R, Plan: H, Act: R). Both the human and robot monitor and sense the environment. The human, however, determines the goals and plans of the task. The robot then implements the task.
Decision support (Sense: H/R, Plan: H/R, Act: R). Both the human and robot sense the environment and generate a task plan. However, the human chooses the task plan and commands the robot to implement actions.
Shared control with human initiative (Sense: H/R, Plan: H/R, Act: R). The robot autonomously senses the environment, develops plans and goals, and implements actions. However, the human monitors the robot's progress and may intervene and influence the robot with new goals and plans if the robot is having difficulty.
Shared control with robot initiative (Sense: H/R, Plan: H/R, Act: R). The robot performs all aspects of the task (sense, plan, act). If the robot encounters difficulty, it can prompt the human for assistance in setting new goals and plans.
Executive control (Sense: R, Plan: H/R, Act: R). The human may give an abstract high-level goal (e.g., navigate in environment to a specified location). The robot autonomously senses the environment, sets the plan, and implements action.
Supervisory control (Sense: H/R, Plan: R, Act: R). The robot performs all aspects of the task, but the human continuously monitors the robot, the environment, and the task. The human has override capability and may set a new goal and plan. In this case, the autonomy would shift to executive control, shared control, or decision support.
Full automation (Sense: R, Plan: R, Act: R). The robot performs all aspects of a task autonomously without human intervention with sensing, planning, or implementing action.
Local Teleoperation Control Modules
Haptic Rendering module. This module is responsible for calculating the torque $\tau_l^*$, in the joint space, to be fed back to the operator as a combination of the following forces:
• Constraint force $f_c$, in the operational space, computed by the Relational Positioning module in order to restrict the movements to a submanifold of free space during the manipulation of the haptic device by the operator
• Virtual force $f_v$, in the operational space, computed by the Virtual Contacts module as a response to the detection of potential collision situations
• Guiding force $f_g$, in the operational space, computed by the Guiding module to sweep the operator along a collision-free path toward the goal
• Control torque $\tau_l$, in the joint space, produced by the controller as a result of tracking errors between the local haptic device and the remote robot manipulator
The total torque is given by $\tau_l^* = J_l^\top(q_l)\,[f_c + f_g + f_v] + \tau_l$, where $J_l^\top(q_l)$ is the transpose of the Jacobian matrix of the haptic device, evaluated at the configuration $q_l$.
Control Algorithms module. It executes the motion and force control algorithms that allow position tracking while maintaining stability despite the communication delays between stations.
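As a concrete reading of the haptic rendering computation above, the short Python sketch below maps the operational-space constraint, guiding, and virtual forces to joint torques through the transpose of the device Jacobian and adds the controller torque. The Jacobian and force values are placeholders chosen only for illustration; this is not code from the IOC system.

```python
import numpy as np

def haptic_rendering_torque(J_l, f_c, f_g, f_v, tau_l):
    """tau_l_star = J_l^T (f_c + f_g + f_v) + tau_l (cf. the Haptic Rendering module)."""
    return J_l.T @ (f_c + f_g + f_v) + tau_l

# Hypothetical 6-DOF haptic device at some configuration q_l (placeholder Jacobian).
J_l = np.eye(6)
f_c = np.array([0.0, 0.0, 5.0, 0.0, 0.0, 0.0])   # constraint force from Relational Positioning
f_g = np.zeros(6)                                  # guiding force toward the goal
f_v = np.array([0.0, -2.0, 0.0, 0.0, 0.0, 0.0])   # repulsion force from Virtual Contacts
tau_l = np.zeros(6)                                # controller torque from tracking errors
print(haptic_rendering_torque(J_l, f_c, f_g, f_v, tau_l))
```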
Geometric Conversion module. It is in charge of the conversion between the coordinates of the haptic devices and those of the remote robots.
State Determination module. It updates the system state variables that are used by the teleoperation control and the Teleoperation Aids modules.
Local Teleoperation Aids Modules Relational Positioning module The operator makes use of this module to define geometric relationships between the part manipulated by the remote robot and the parts in the environment. These relationships can completely define the position of the manipulated part and then fix all the robots’ degrees of freedom (DOFs), or they can partially determine the part position and orientation and therefore fix only some DOFs. In the latter case, the remaining degrees of freedom are those that the operator will be able to control by means of the haptic device (local robot). The output of this module is the solution submanifold in which the constraints imposed by the relationships are satisfied. Virtual Contacts module It detects possible collisions of the manipulated part with the environment and generates the corresponding repulsion forces for helping the
by the Sensing modules and transmitted to the local station. Its modules are the following. Local station Human operator
Teleoperation aids
Haptic device
Teleoperation control
Relational positioning
Virtual contacts
Haptic rendering
Control algorithms
Guiding
Augmented reality
Geometric conversion
State determination
Communications management Communication channel Communications management
Sensing
Teleoperation control
State determination
Control algorithms
Audio / video
Planning
Environment
Robot manipulator
Remote Teleoperation Control Modules Control Algorithms module It receives motion commands from the local station and sends the corresponding torque references to the joint actuators of the remote robot manipulator. Planning module It computes how the part manipulated by the robot must follow the trajectory specified by the operator with the haptic device, avoiding, at the same time, collisions between the robot and the environment. Remote Sensing Modules State Determination module It gathers and processes the remote system state variables that are fed back to the local station. Audio/Video module This module is responsible of the capture and transmission of sounds and images of the remote environment.
Communications In both local and remote stations, there exists a Communications Management module that takes in charge of communications between these stations through the used channel (e.g., Internet). It consists of the following submodules for the information processing in the local and remote stations:
Remote station
Fig. 20.2 General scheme of the IOC teleoperation system
operator to react as soon as possible when virtual contacts situations occur. Guiding module It computes, by means of motion planning techniques, forces that guide the operator during task execution. Augmented Reality module This module is in charge of enhancing the view of the remote station with the addition of useful information like motion restrictions imposed by the operator or by the Teleoperation Aids modules and graphical models of the remote robot using simulated configurations provided by the operator or received from the remote station.
Remote Station In the remote station, the robot is controlled by the Remote Teleoperation Control modules, connected with the Local Teleoperation Control modules through the communication channel. The information generated at this station is captured
• Command codification/decodification. These submodules are responsible for the codification and decodification of the motion commands exchanged between the local and the remote stations. These commands should contain the information of the degrees of freedom constrained to satisfy the geometric relationships and the motion variables on the unrestricted ones, following the movements specified by the operator by means of the haptic device. The following three qualitatively different situations are possible: – The motion subspace satisfying the constraints defined by the relationships fixed by the operator has dimension zero. This means that the constraints completely determine the position and orientation (pose) of the manipulated object. In this case the command is this pose. – The motion subspace has dimension six, i.e., the operator does not have any relationship fixed. In this case the operator can manipulate the six degrees of freedom of the haptic device, and the command sent to the remote station is composed of the values of the six joint variables. – The motion subspace has dimension from one to five. In this case the commands are composed of the information of this subspace and the variables that de-
Robot 2 controller
Fig. 20.3 Example of application of a cooperative teleoperation system
scribe the motion inside it calculated from the coordinates introduced by the operator through the haptic device. • State codification/decodification. These submodules generate and interpret the messages between the remote and the local stations. The robot state is coded as the combination of the position and force information. • Network monitoring system. This submodule analyzes in real time the Quality of Service (QoS) of the communication channel in order to properly adapt the teleoperation parameters and the sensorial feedback.
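The command codification just described distinguishes three cases according to the dimension of the constrained motion subspace, and the state messages combine position and force information. A minimal sketch of how such a command packet could be structured is shown below; the field names and the use of a Python dataclass are assumptions made for illustration, not the actual IOC message format.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class TeleopCommand:
    """Illustrative command packet exchanged between the local and remote stations."""
    seq: int                          # sequence number, to detect loss and reordering
    timestamp: float                  # send time, to estimate channel delay
    subspace_dim: int                 # 0..6, dimension of the allowed motion subspace
    pose: Optional[List[float]] = None           # full pose when the constraints fix all DOFs (dim 0)
    joint_values: Optional[List[float]] = None   # six joint variables when no relationship is fixed (dim 6)
    subspace_coords: List[float] = field(default_factory=list)  # motion inside the subspace (dim 1..5)

# Example: two relationships leave a 2-DOF subspace; only those coordinates are sent.
cmd = TeleopCommand(seq=1042, timestamp=12.34, subspace_dim=2, subspace_coords=[0.05, -0.01])
print(cmd)
```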
Operation Principle The performing of a teleoperated task using the teleoperation framework just described is done as follows. The operator first determines the goal configurations of the task and sets the motion constraints that must be maintained during the task execution, by specifying the relative positions of the part or tool manipulated by the robot with respect to the environment. Then, in order to remotely control the robot performing the task, he/she moves the haptic device with the help of the following feedback forces: 1. Forces that restrain her/him within the task submanifold defined by the constraints set, letting her/him concentrate on the commanding of motions relevant to the task 2. Forces resulting from collisions predicted in the virtual environment, which allow her/him to react on time because they prevent her/him of imminent collisions in the remote station 3. Guiding forces that, from any point within the task submanifold, swept her/him along a collision-free path toward the goal configuration, resulting in a faster task commanding
4. Forces generated by the controller as a result of tracking errors that occur due to real interactions on the remote station Position and force correspondence between the haptic device and the robot is guaranteed, including the possibility to perform mouse jumps required when the size of their workspaces differs substantially. The performance of the task being teleoperated is continually monitored by the operator using the 3D image of the scene augmented with extra relevant information, like the graphical representation of the motion subspace, the graphical model of the robot updated with the last received data, and other outstanding information for the good and easy performance of the task.
20.3.3 Cooperative Teleoperation Systems Cooperative robot systems have advantages over single robots, such as increased dexterity and improved handling and loading capacity. Moreover, the execution of some robotic tasks requires two or more robots, and, in such a case, a cooperative strategy with several robot manipulators becomes necessary to perform the tasks that cannot be executed by a single manipulator (Fig. 20.3). Typical cooperation examples include tasks such as handling large and heavy rigid and non-rigid objects and assembly and mating of mechanical parts and space robotics applications [16, 17]. In spite of the potential benefits achievable with multiple robot manipulators, the analysis and the control of these systems become more complex due to the kinematic and the dynamic interactions imposed by the cooperation. This means that, in all those tasks requiring effective cooperation,
20
one cannot extend the well-known results for the kinematics, dynamics, and control of a single arm. The difficulty increases if the cooperating robots have different kinematic structures [18].
20.4
Challenges and Solutions
When designing a bilateral teleoperation system or a cooperative teleoperation system, different challenges must be faced. On the one hand, the robot controller has to be designed considering the special characteristics of the communication channels like the possible time-varying delays. On the other hand, it is necessary a high feeling of immersion of the operator on the remote site that can be obtained by using different sensors together with augmented reality techniques. Teleoperation aids must be also considered in order to help the operator performing a teleoperated task in an easy and efficient way. Another interesting recent challenge is the teleoperation of unmanned aerial vehicles/drones. All these challenges are analyzed in the following subsections.
20.4.1 Control Objectives and Algorithms A control algorithm for a teleoperation system has two main objectives: telepresence and stability. Obviously, the minimum requirement for a control scheme is to preserve stability despite the existence of time-delays and the behavior of the operator and the environment. Telepresence means that the information about the remote environment is displayed to the operator in a natural manner, which implies a feeling of presence at the remote site (immersion). Good telepresence increases the feasibility of the remote manipulation task. The degree of telepresence associated to a teleoperation system is called transparency [19]. However, due to time-delays, complete transparency and Lyapunov stability cannot be, sometimes, achieved simultaneously [20]. The same control objectives apply for the teleoperation of cooperative systems.
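Transparency is commonly formalized by comparing the impedance felt by the operator through the local device with the impedance of the remote environment. As a reminder of this standard definition, stated here in generic notation rather than taken verbatim from the works cited above:

```latex
\[
  Z_t(s) = \frac{F_h(s)}{\dot{X}_l(s)}, \qquad
  Z_e(s) = \frac{F_e(s)}{\dot{X}_r(s)}, \qquad
  \text{ideal transparency:}\quad Z_t(s) = Z_e(s),
\]
% i.e., the teleoperator is perfectly transparent when the operator feels exactly the
% impedance of the environment; achieving this together with stability is what
% time-delays make difficult.
```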
Bilateral Teleoperation Control In order to ensure stability when delays appear in the communication channel, different control approaches have been proposed. Although scattering-based control has dominated this field, other schemes like passivity-based, adaptive, and sliding controllers have also been proved to provide position tracking capabilities under constant or variable time-delays, even when velocities cannot be measured. Scattering-based control has dominated the control field in teleoperation systems since it was first proposed by Anderson and Spong [21], creating the basis of modern teleoperation system control. Their approach was to render passive the
communications using the analogy of a lossless transmission line with scattering theory. They showed that the scattering transformation ensures passivity of the communications despite any constant time-delay. Following the former scattering approach, it was proved [22] that, by matching the impedances of the local and remote robot controllers with the impedance of the virtual transmission line, wave reflections are avoided. For a historical survey along this line, the reader may refer to [1]. Passivity is the basic property behind the scattering-based schemes to ensure stability, and it has also been used to show that simple Proportional plus damping (P+d) schemes can provide the teleoperator with position tracking capabilities for the constant delay case [23], as well as for the variable delay case [24]. Moreover passivity has been also employed to increase transparency as the scheme reported in [25]. A passivity-based control tutorial for bilateral teleoperators can be found in [26]. Most of the passivity-based schemes rely on gravity cancellation to ensure position tracking. However, they require the exact knowledge of the system physical parameters [24, 27]. To deal with parameter uncertainty, adaptive control schemes have been proposed both for the constant delay case [28] and for variable delays [29]. These results have been latter extended, for example, to deal with parameter estimation convergence [30] and input saturation [31]. Many of the reported control schemes have tackled the control problem assuming that velocities are measurable. However, most of the commercially available devices are not equipped with velocity sensors, and those with them are often prone to noise that cannot be easily filtered [32]. Among the works that treat the output-feedback case, i.e., without velocity sensors, can be cited [33–36] and, more recent, [37–39]. In [33] a sliding control technique is used to control a linearized version of the local and remote manipulators, making use of measurements of the human and the environment forces. An adaptive control scheme, in order to render the teleoperator Input-to-State stable, is provided in [34]. The work of [35] proves boundedness of the position error using a high-gain velocity observer. For the undelayed case, the work reported in [36] proposes the use of a velocity filter to achieve position tracking. A Immersion and Invariance (I&I) velocity observer [40], together with a proportional plus damping scheme, is reported in [37], but this approach has the practical drawback that the observer design requires the exact knowledge of the system dynamics. This drawback has been obviated either using a dynamic controller based on the energy shaping methodology that back-propagates damping to the plant [38] or using a sliding-mode velocity observer [39]. Figure 20.4 shows a control diagram of a common teleoperation system in the Cartesian/task space for which velocity
+
–
fe
Fig. 20.4 Teleoperation control scheme in the Cartesian/task space
measurements are not available [41]. In this scheme the joint position vectors qi of the local and remote robots (sub-index i = for the local robot and i = r for the remote robot) are measured, and, via a Forward Kinematics module, they are translated into Cartesian poses xi . Then, using a dirty velocity filter, the filtered velocities y˙ i are employed to inject damping through the controllers to the robots. Both controllers are simple P+d schemes implemented in the Cartesian space, and their output control signals f i are translated to the joint space control signals τ i using the transpose of the robot Jacobians Ji (qi ). The forces injected by the human and the environment are f h and f e , respectively.
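The P+d structure of Fig. 20.4 can be condensed into a few lines of code. The following discrete-time Python sketch is only schematic: it assumes known Jacobians, constant gains, and a first-order "dirty-derivative" filter standing in for the unmeasured velocities, and it is not the specific controllers analyzed in [41].

```python
import numpy as np

class PdTeleopController:
    """Cartesian P+d law f = -Kp (x_own - x_other_delayed) - Kd y_dot, mapped to joints by J^T."""
    def __init__(self, kp, kd, cutoff, dt):
        self.kp, self.kd, self.dt = kp, kd, dt
        self.alpha = cutoff * dt      # first-order filter coefficient (keep cutoff*dt < 1)
        self.y = None                 # internal filter state

    def filtered_velocity(self, x):
        # Dirty-derivative filter: approximates x_dot without a velocity sensor.
        if self.y is None:
            self.y = x.copy()
        y_dot = self.alpha * (x - self.y) / self.dt
        self.y = self.y + self.alpha * (x - self.y)
        return y_dot

    def joint_torque(self, J, x_own, x_other_delayed):
        y_dot = self.filtered_velocity(x_own)
        f = -self.kp * (x_own - x_other_delayed) - self.kd * y_dot
        return J.T @ f

# Local-side example with placeholder values and a 1 kHz control loop.
ctrl = PdTeleopController(kp=50.0, kd=5.0, cutoff=100.0, dt=0.001)
tau_l = ctrl.joint_torque(J=np.eye(6), x_own=np.zeros(6), x_other_delayed=0.01 * np.ones(6))
print(tau_l)
```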
Cooperative Teleoperation Control The bilateral teleoperation of multiple remote robots with one or multiple local robots is becoming increasingly popular. Among the first works on the control of cooperative teleoperation systems that can be mentioned are those of Sirouspour [16] and Lee and Spong [42]. In the latter, the authors report a formation of multiple manipulators being teleoperated by a single local manipulator using a passive decomposition that is robust to constant time-delays, by separating the teleoperation system into two decoupled systems: the shape system describing the cooperative grasping aspect and the locked system representing the overall behavior of the multiple remotes. In [16], multiple local manipulators command multiple remote manipulators, and each local one is coupled to one of the remote robots. The proposed control scheme uses linearized dynamics of the elements of the teleoperation system, and no time-delays are considered. These two mentioned works were the beginnings of a series of control proposals along this line, for example, [43] where a PD controller is designed to enforce motion tracking and formation control of local and remote vehicles under constant time-delays; [44] that proposes a Lyapunov-based adaptive control approach for two types of asymmetrical teleoperation systems, the first one composed of two local
manipulators and one redundant remote robot and the second one composed of one local robot and one remote robot with less DOFs than the local and using a control scheme that does not consider time-delays; [45] that proposes a control framework based on the small-gain arguments for teleoperation systems composed of multiple Single Local and Single Remote (SL-SR) pairs that interact through an environment; and [46] where two (master) local devices teleoperate a remote manipulator for the undelayed case. Recently, Pliego-Jiménez et al. [18] treat the dexterous cooperative telemanipulation of an object being manipulated by two robots. The proposed controller is able to estimate the mass and the inertia of the object being manipulated. Yang et al. [47] propose a neural network approach together with an integral terminal sliding mode to ensure fixed-time stability for an uncertain cooperative teleoperation system. Using the joint space adaptive controller of [48], Aldana et al. [17] present an adaptive controller in the task space for a cooperative teleoperation system that is capable of controlling the internal forces over the manipulated object. Extending the consensus idea in [48,49], a leaderless consensus algorithm that can be employed in the teleoperation of multiple remote manipulators in the task space, when velocities are not available for measurement, has been proposed in [50]. However, this solution is not robust to communication delays. Recently, Montaño et al. [51], making use of the energy shaping methodology described in [52], present a bilateral telemanipulation scheme for the dexterous in-hand manipulation of unknown objects that is robust to time-delays and that does not need velocity measurements. Figure 20.5 shows an adaptive control scheme for a cooperative teleoperation system [17]. This scheme is a possible implementation of the application represented in Fig. 20.3. The key idea in this scheme is that the human operator, through a single local robot, teleoperates a remote object using a cooperative robotic system while the internal forces over the object are under control. In the
Controller
q·J qJ
tJ
T JJ
Fig. 20.5 Adaptive control scheme for a cooperative teleoperation system in the Cartesian/task space
following description, sub-index i = stands for local, and i = {1, 2, . . . , N} stands for each robot in the remote collaboration system. The adaptive controllers dynamically estimate the robots’ physical parameters θˆ Di along with the object parameters θˆ Do . The controllers are implemented in the Cartesian space, and their output control signals f i are translated, using the robots’ Jacobians Ji (qi ), to the joint space, resulting in τ i . By measuring the robots’ joint space positions qi and velocities q˙ i , some auxiliary error signals i and φ i are computed to ensure that position tracking is achieved while maintaining a desired value of the internal forces f di, int . The robot force/motion contribution reflected to the object is determined using matrices Wi .
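One standard way to read the role of the matrices W_i mentioned above, written here in generic grasp-matrix notation rather than in the exact form used in [17], is to stack the manipulator wrenches and split them into a part that produces the object wrench and an internal part that lies in the null space of the grasp map and only squeezes the object:

```latex
\[
  f_o = G \begin{bmatrix} f_1 \\ \vdots \\ f_N \end{bmatrix},
  \qquad G = \begin{bmatrix} W_1 & \cdots & W_N \end{bmatrix},
  \qquad
  \begin{bmatrix} f_1 \\ \vdots \\ f_N \end{bmatrix}
   = G^{\dagger} f_o + f_{\mathrm{int}},
  \quad G\, f_{\mathrm{int}} = 0 .
\]
% The desired internal forces of Fig. 20.5 act in this null space, so they can be
% regulated without disturbing the commanded object motion.
```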
20.4.2 Communication Channels Communication channels can be classified in terms of two aspects: their physical nature and their mode of operation. According to the first aspect, two groups can be defined: physically connected (mechanically, electrically, optically wired, pneumatically, and hydraulically) and physically disconnected (radiofrequency and optically coupled such as
via infrared). The second aspect entails the following three groups: Time-delay free. The communication channel connecting the local and the remote stations does not affect the stability of the overall teleoperation system. In general this is the kind of channel present when the two stations are near to each other. Examples of these communication channels are some surgical systems, where the master and slave are located in the same room and connected through wires or radio. Constant time-delay. These are often associated with communications in space, underwater teleoperation using sound signals, and systems with dedicated wires across large distances. Variable time-delay. This is the case, for instance, of packet-switched networks where variable time-delays are caused by many reasons such as routing, acknowledge response, and packing and unpacking data. One of the most promising teleoperation communication channels is the Internet, which is a packet-switched network, i.e., it uses protocols that divide the messages into packets before transmission. Each packet is then transmitted individually and can follow a different route to its destination. Once all packets forming a message have arrived at the
20
destination, they are recompiled into the original message. When using packet-switched networks for real-time teleoperation systems, besides bandwidth, three effects can result in decreased performance of the communication channel: packet loss, variable time-delays, and, in some cases, loss of order in packet arrival. Among the transport protocols, there are in the literature three standard protocols, Transmission Control Protocol (TCP), User Datagram Protocol (UDP), and Real-Time Transport Protocol (RTP), and three modified protocols for teleoperation systems, Real-Time Network Protocol (RTNP), Interactive Real-Time Protocol (IRTP), and Efficient Transport Protocol (ETP). In order to achieve relatively faster transmission of the local robot data and to maintain lower variations in delay, UDP is widely used for many real-time applications over the network [53]. However, since UDP provides an unreliable and connectionless service, packet loss, packet replication, and out-of-order delivery problems are more significant with this protocol. Then, some modifications of the packet structure of UDP have been proposed to supply additional information about unreliable network conditions to the application layer [54]. In order to improve the performance of teleoperation systems, Quality of Service (QoS)-based schemes have been used to provide priorities on the communication channel. The main drawback of today’s best-effort Internet service is due to network congestion. The use of high-speed networks with recently created protocols, such as the Internet Protocol version 6 (IPv6), improves the performance of the whole teleoperation system [55]. Besides QoS, IPv6 presents other important improvements. The current 32 bit address space of IPv4 is not able to satisfy the increasing number of Internet users. IPv6 quadruples this address space to 128 bits, which provides more than enough globally unique IP addresses for every network device on the planet. Teleoperation has been largely relegated to fixed communication connections, but with the new enhancements in the wireless technologies, this will change. 5G is the 5th generation mobile network. It is a new global wireless standard after 1G, 2G, 3G, and 4G networks. 5G enables a new kind of network that is designed to connect virtually everyone and everything together including machines, objects, and devices. 5G wireless technology is meant to deliver higher multi-Gbps peak data speeds, ultra low latency, more reliability, massive network capacity, increased availability, and a more uniform user experience [56]. The ITU (International Telecommunication Union) and the whole community of 5G stakeholders have collaborated in developing the standards for IMT-2020 (International Mobile Telecommunications) [57]. These standards outline eight criteria for mobile networks, which should be fulfilled in order to qualify them as 5G [58]:
10 Gbps maximum achievable data rate 10 (Mbit/s)/m2 traffic capacity across coverage area 1 ms latency 106 /km2 total number of connected devices 3 times higher spectrum efficiency compared to 4G Full (100%) network coverage Mobility of the connected device up to 500 km/h with acceptable Quality of Service • 10 times lower network energy usage compared to 4G A major design challenge of 5G is to provide ultra-low delay communication over the network which would enable real-time interactions across wireless networks. This, in turn, will empower people to wirelessly control both real and virtual objects. 5G will add a new dimension to humanmachine interaction and will lead to a revolution in almost every segment of society with applications and use cases like mobile augmented video content, road traffic/autonomous driving, healthcare, smart grid, remote education, remote immersion/interaction, Internet of Things (IoT), and tactile Internet, among others [58, 59]. Several 5G implementations have already been reported, like [60], in which a tonne crawler excavator located over 8500 km away from the operator is commanded, 5G being essential to deliver live video streaming at the operator’s station in a reliable way, minimizing the time lag in the system for the operator. This teleoperation system is used for operating excavators in dangerous applications such as industrial waste disposal, involving hazardous, toxic, or radioactive substances. In [61] advances in remote surgery are reported. In one of them, a doctor in Fujian, a province in southeastern China, removed the liver of a test animal about 30 miles away. It was reportedly the world’s first remote surgery over 5G, which kept the lag time to 0.1 s. But progress does not stop, and, although service providers around the world are only now rolling out their mobile 5G networks, some researchers are contemplating the use of terahertz waves for the next generation of wireless networks, 6G, that would occupy the 300 gigahertz to 3 terahertz band of the electromagnetic spectrum [62]. These frequencies are higher than the highest frequencies used by 5G, which are known as millimeter waves and fall between 30 and 300 gigahertz. It is expected that terahertz waves should be able to carry more data more quickly with even lower latency, though they will not be able to propagate as far. In any case, it remains to be seen.
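Returning to the transport-level issues discussed earlier in this section, one simple way to give a plain UDP stream the information needed to detect loss, reordering, and delay is to prepend a small application-layer header to every datagram. The sketch below is a generic illustration of that idea, not one of the modified protocols (RTNP, IRTP, ETP) mentioned above.

```python
import socket
import struct
import time

HEADER = struct.Struct("!Id")   # 4-byte sequence number + 8-byte send timestamp

def send_sample(sock, addr, seq, payload: bytes):
    # The receiver uses seq to detect loss/reordering and the timestamp to estimate delay.
    sock.sendto(HEADER.pack(seq, time.time()) + payload, addr)

def parse_sample(datagram: bytes):
    seq, t_sent = HEADER.unpack_from(datagram)
    return seq, t_sent, datagram[HEADER.size:]

if __name__ == "__main__":
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    send_sample(sock, ("127.0.0.1", 50000), seq=1, payload=b"\x00" * 24)
```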
20.4.3 Situation Awareness and Immersion Human beings are able to perceive information from the real world in order to interact with it. However, sometimes, for engineering purposes, there is a need to interact with systems
20
that are difficult to actually build or that, due to their physical behavior, present unknown features or limitations. Then, in order to allow better human interaction with such systems, as well as their evaluation and understanding, it is necessary the use of sensory systems that provide the required information to improve human situation awareness. In this context, situation awareness can be defined as the ability to perceive elements within a volume of space, be able to comprehend the meaning of these elements, and be able to predict the status of these elements in the future [63]. It is the perception, comprehension, and projection of information relevant to the user regarding their immediate environment, which plays an important role in teleoperation tasks. It is also a primary basis for operator’s performance [64]. In the teleoperation case, besides the force feedback originated from the contact of the remote robot with other objects, the visual feedback of the remote environment is of capital importance, because it provides the operator with information that will help her/him navigate through the remote workspace. This is especially true for task operations that take place in free space, where there is no force feedback. Depth perception also improves the operator’s situation awareness and can be achieved, for example, with a stereoscopic visual feedback system that combines images gathered from two remotely actuated video cameras. In such a system, the stereoscopic effect is achieved by alternatively displaying the image corresponding to the left and right eyes on a computer monitor or wall projector, switching between them at a frequency of 100–120Hz or higher. Many people can visualize the 3D scene at the same time by wearing pairs of shutter glasses that are synchronized with the switching of the video display [15]. Graphical and vibro-tactile feedback are used in [65] to increase situation awareness. Immersion is the sensation of being in an environment that actually does not exist and that can be a purely mental state or can be accomplished through physical elements. In order to improve the operator immersion in the remote environment of a teleoperation system, two important techniques are of great helpfulness: virtual reality and augmented reality. In virtual reality a nonexistent world can be simulated with a compelling sense of realism for a specific environment. So, the real world is replaced by a computer-generated world that uses input devices to obtain information from the user and capture data from the real world (e.g., using trackers and transducers) and uses output devices to represent the responses of the virtual world by means of visual, touch, aural, or taste displays, like haptic devices, head-mounted displays (HMD), and headphones, in order to be perceived by any of the human senses [66, 67]. Augmented reality is a form of Human-Computer Interaction (HCI) that superimposes information created by computers over a real environment. Augmented reality enriches the surrounding environment instead of replacing it as in
the case of virtual reality, and it can also be applied to any of the human senses. Although some authors put attention on hearing and touching, the main augmentation route is through visual data addition. Furthermore augmented reality can remove real objects or change their appearance [68], operations known as diminished or mediated reality. In this case, the information that is shown and superposed depends on the context, i.e., on the observed objects. Augmented reality can improve task performance by increasing the degree of reliability and speed of the operator due to the addition or reduction of specific information. Reality augmentation can be of two types: modal or multimodal. In the modal type, augmentation is referred to the enrichment of a particular sense (normally sight), whereas, in the multimodal type, augmentation includes several senses. Research done to date has faced other important challenges [69]. In teleoperation environments, augmented reality has been used to complement human sensorial perception in order to help the operator to perform teleoperated tasks [70]. In this context, augmented reality can reduce or eliminate the factors that break true perception of the remote station, such as timedelays in the communication channel, poor visibility of the remote scene, and poor perception of the interaction with the remote environment. For instance, the operator experience can be enhanced at the local station by overlaying a colocated virtual scene on the video streams corresponding to each eye. This virtual scene can contain visual cues and annotations that are not present in the real world, but can improve task performance. Examples of graphical entities that can be rendered are geometric constraints the movement of an object is subject to, virtual objects an operator may be haptically interacting with, guidance paths, magnitude and direction of interaction forces, and boundaries of the robot workspace. Limited-bandwidth communications may reduce the frame rate of the live video streams below acceptable levels. In such cases a virtual version of the remote robot can be displayed and refreshed at a higher frequency while using very little bandwidth, since it only requires updating the current joint variables. The appearance of an augmented environment with both real and virtual objects must be visually compelling and, for this, must obey overlay and occlusion visibility rules. That is, the parts of a virtual object that are in the foreground are rendered and block the real objects that lie behind (if the virtual object is semitransparent, a blending effect occurs), and the parts of a virtual object that are behind a real one are not rendered. Occlusions are achieved by rendering transparent models of the real objects in the virtual scene. Unmodeled real objects are unable to produce occlusion effects. Figure 20.6 shows the left and right eye views of a scene augmented with three visible virtual entities: a planar surface that represents the geometric constraints acting on the
robot end-effector, the end-effector coordinate frame, and a semitransparent rendering of the robot in a different configuration. Notice the overlay and occlusion effects between the real robot and the virtual entities.

Fig. 20.6 (a) and (b) parts of the stereographic view of a remote scene with augmented reality aids

Among the applications of augmented reality, it is worthwhile to mention interaction between the operator and the remote site for better visualization [71, 72], better collaboration capacity [73], better path or motion planning for robots [74], addition of specific virtual tools [75], and multisensorial perception enrichment [76].
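The earlier remark on limited-bandwidth links can be made concrete with the following minimal Python sketch, which is an illustration under assumed rates and a simulated data source, not part of any system described above: a low-rate thread stands in for the reception of joint values from the remote station, while a high-rate loop refreshes the virtual robot overlay. Six joint values sent even at 60 Hz amount to well under 2 kB/s, orders of magnitude below a live video stream.

```python
import threading, time

NUM_JOINTS = 6      # assumed arm size: 6 joint values (~48 bytes per update)
RECEIVE_HZ = 5      # assumed low-bandwidth rate of joint updates from the remote site
RENDER_HZ = 60      # local refresh rate of the virtual robot overlay

latest_q = [0.0] * NUM_JOINTS
lock = threading.Lock()

def receive_joint_updates():
    """Stand-in for the network side: in a real system this would read the
    current joint variables sent by the remote robot controller."""
    q = [0.0] * NUM_JOINTS
    while True:
        time.sleep(1.0 / RECEIVE_HZ)
        q = [qi + 0.01 for qi in q]            # fake remote motion
        with lock:
            latest_q[:] = q

def render_loop(duration_s=2.0):
    """Refresh the virtual robot overlay at RENDER_HZ using only the most
    recent joint vector; actual rendering is replaced by a print statement."""
    t_end = time.time() + duration_s
    while time.time() < t_end:
        with lock:
            q = list(latest_q)
        print("render virtual robot at joints", [round(qi, 3) for qi in q])
        time.sleep(1.0 / RENDER_HZ)

threading.Thread(target=receive_joint_updates, daemon=True).start()
render_loop()
```

In a real implementation the render step would drive the forward kinematics of the virtual robot model overlaid on the video streams; interpolating between the two most recent joint samples could further smooth the displayed motion.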
20.4.4 Teleoperation Aids

Some of the problems arising in teleoperated systems, such as an unstructured environment, communication delays, human operator uncertainty, and safety at the remote site, among others, can be reduced using teleoperation aids. Among the teleoperation aids aimed at diminishing the uncertainty of the human operator, improving repeatability, and reducing fatigue, one can highlight Relational Positioning, Virtual Contacts, and Guiding.
Relational Positioning

Tasks where an object has to be positioned with respect to its surroundings are ubiquitous in robotics and can often be decomposed into a series of constrained movements that do not require the six DOFs an object has in free space. For example, the spray-painting of a flat surface takes place in a two-DOF planar space, and the insertion of a prismatic peg in a hole also requires only two DOFs, translation
and rotation around the hole axis, provided that the axes of the two elements are aligned. Although operator skills are needed for the successful execution of many teleoperated tasks, maintaining both the tool and the manipulated object inside a specific space region can be both challenging and tiring. Such a region can be easily described in terms of geometric constraints that, when satisfied, define a submanifold of allowed movements in the Special Euclidean space of dimension three, denoted SE(3). Haptic feedback can be used to assist the operator by restricting his/her movements to the submanifold of interest, lowering the mental burden needed to execute the task. A Relational Positioning module explicitly addresses these issues. Its core consists of a geometric constraint solver that, given a set of constraints between various objects, finds the positions that each object should have in order to comply with this set. There exist many methods for solving geometric constraint problems [77], most of which can be classified as graph-based, logic-based, algebraic, or a combination of these approaches. Graph-based methods construct a (hyper)graph in which the nodes represent geometric elements and the arcs represent constraints. Topological features like cyclic dependencies and open chains can be easily detected. Graph analysis identifies simpler and solvable subproblems whose solutions are combined while maintaining compatibility with the initial problem. There exist algorithms with O(n²) [78] and O(nm) [79] time complexity, where n is the number of geometric elements and m is the number of constraints. Logic-based methods represent the geometric elements and constraints using a set of axioms and assertions. The
solution is obtained following general logic reasoning and constraint rewriting techniques [80]. Algebraic methods translate the problem into a set of nonlinear equations, which can be solved using a variety of numeric and symbolic methods. Numeric methods range from the Newton-Raphson method [81], which is simple but guarantees neither convergence nor finding all possible solutions, to more sophisticated ones like homotopy [82], which guarantees both. They tend to have O(n²)–O(n³) time complexity. Symbolic methods use elimination techniques such as Gröbner bases to find an exact generic solution to the problem, which can be evaluated with numerical values to obtain particular solutions [83]. These methods are extremely slow, since they have O(cⁿ), c > 1, time complexity. Hybrid methods combine more than one solution technique in order to compensate for the shortfalls of a specific approach and thus to create faster and more capable solvers. For example, a graph can be used to represent the problem, and the identified subproblems can be solved using a logic-based method for the cases with known closed-form solution and a numeric-algebraic method for the unhandled ones. A computationally efficient logic-based geometric constraint solver for assisting the execution of teleoperated tasks is PMF (Positioning Mobile with respect to Fixed), which finds the map between constraint sets and parameterized solution submanifolds [84]. Constraints are defined between elements of the manipulated object and elements of its (fixed) environment. The solver accepts, as input constraints, distance and angle relations between points, lines, and planes and exploits the fact that, in a set of geometric constraints, the rotational component can often be separated from the translational one and solved independently. By means of logic reasoning and constraint rewriting, the solver is able to map a broad family of input problems to a few rotational and translational scenarios with known closed-form solution. The PMF solution process is represented in Fig. 20.7. The solver can handle under-, well-, and over-constrained (redundant or incompatible) problems with multiple solutions and is computationally very efficient, so it can be included in high-frequency loops that require response times within the millisecond order of magnitude (e.g., update solutions when the geometry of the problem changes, as in the case of moving obstacles). In teleoperation systems making use of impedance-type haptic devices (motion input, force/torque output), the generation of virtual constraint forces and torques requires translating the kinematic information provided by the geometric constraint solver into a dynamic model. These systems take advantage of the high backdrivability and low inertia/friction of the haptic device: the controller leaves the dynamics of the unconstrained directions unchanged and generates forces in the constrained directions based on the difference e_l between the actual x_l and desired x_ld positions of the end-effector in
operational space coordinates: e_l = x_l − x_ld, where x_ld represents the projection of x_l on the current solution submanifolds. For example, the force expression f_c = K_P e_l + K_D ė_l (a PD-like controller) is analogous to attaching a virtual spring and a virtual damper between x_l and x_ld. A similar scheme is used by the Virtual Contacts and the Guiding modules described in the following subsections. Table 20.4 shows the translational and rotational submanifolds to which the haptic device end-effector can be constrained. All combinations between rotational and translational submanifolds are possible.

Fig. 20.7 PMF solver solution process. CI represents the input constraint set; CR and CT, the rotational and translational fundamental constraint sets, respectively (CRimpl contains the implicit rotational constraints); and R and T, the rotational and translational components of the solution, respectively. Prime symbols indicate that the elements of a constraint set may have changed

Table 20.4 Translational and rotational submanifolds

Translational submanifold | DOFs | Rotational submanifold | DOFs
R³                        | 3    | SO(3)*                 | 3
Plane                     | 2    | Vectors at an angle    | 2
Sphere                    | 2    | Parallel vectors       | 1
Cylinder                  | 2    | Fixed rotation         | 0
Line                      | 1    |                        |
Ellipse                   | 1    |                        |
Point                     | 0    |                        |

*Special Orthogonal Rotation group of order 3
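As a concrete illustration of the virtual spring-damper scheme described before Table 20.4, the sketch below projects the haptic end-effector position onto an assumed planar constraint and returns a PD restoring force toward that projection, leaving the in-plane (unconstrained) directions untouched. The gains, the constraint plane, and the sign convention (the returned force is the restoring force applied to the device) are illustrative assumptions, not the PMF implementation.

```python
import numpy as np

K_P, K_D = 200.0, 5.0          # assumed stiffness and damping gains
n = np.array([0.0, 0.0, 1.0])  # unit normal of an assumed constraint plane z = 0

def constraint_force(x, v):
    """PD force pulling the end-effector toward its projection on the plane.
    Only the component normal to the plane is penalized, so motion inside
    the constraint submanifold remains unaffected."""
    x_d = x - np.dot(n, x) * n        # projection of the position onto the plane
    v_d = v - np.dot(n, v) * n        # projection of the velocity
    e, e_dot = x - x_d, v - v_d       # error and its derivative (normal components)
    return -K_P * e - K_D * e_dot     # virtual spring-damper between x and x_d

# Example: end-effector 2 cm above the plane, moving away at 0.1 m/s
x = np.array([0.10, -0.05, 0.02])
v = np.array([0.00, 0.00, 0.10])
print(constraint_force(x, v))         # force pushing back toward the plane
```

For other entries of Table 20.4 the only change would be the projection step (e.g., onto a line, sphere, or cylinder), while the PD structure of the force remains the same.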
Virtual Contacts

Haptic devices allow the operator to interact with a virtual world and to feel the reaction forces that arise when the manipulated virtual object collides with the objects in the virtual environment [15]. The haptic rendering of virtual contacts can be a useful teleoperation aid because virtual contacts can make the user react in time, since they may occur before the real ones (when bounding volumes are considered for the models) and are not affected by time-delays. Simple and efficient procedures have been developed for punctual haptic interaction. Nevertheless, the haptic rendering of virtual contact forces between 3D objects requires the use of collision detection algorithms and the approximation of the reaction force and torque by the interpolation or the sum of the forces computed at each contact point. When the task is composed of convex polyhedra representing the bounding volumes of the objects, face-face or edge-face contacts are not uncommon (these types of contact may also occur with virtual fixtures defined by the operator). Approaches based on collision detection algorithms do not provide, in these situations, a good haptic rendering. This can be solved if the knowledge of the current type of contacts taking place is used. In this line, a method based on the task configuration space (C-space T) can be very efficient. Since in the C-space T the manipulated object is represented by a point, the method becomes similar to punctual haptic interaction methods. The procedure, illustrated in Fig. 20.8, is based on the following three steps [85]: (1) C-space T modeling: Assuming objects modelled with convex polyhedra, their interference is represented by (convex) C-obstacles. Each C-obstacle can be modelled with a graph G whose nodes represent basic contacts. The node whose C-face is closest to the current position of the manipulated object, together with those neighbor nodes that satisfy the applicability condition for the current orientation θ, constitutes a subgraph, Gnear(θ), that represents the local C-space T. (2) C-space updating: Each time the user changes the orientation θ of the manipulated object, Gnear(θ) is updated (note that G(θ) remains unchanged whenever the user does a pure translation motion). (3) Haptic rendering: The collision detection between the manipulated object and the obstacles is done by evaluating whether the object position lies inside a C-obstacle. This is done only considering the nodes of Gnear(θ) by verifying if the object position is below the plane that contains the C-face of the basic contact for the current orientation. The reaction force is computed proportional to the penetration depth. The reaction torque is computed as a function of the point where the reaction force is applied.

Fig. 20.8 Rendering of virtual contact forces between 3D objects using the task configuration space
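A much simplified sketch of step (3) is given below: each C-face of the local subgraph is represented here by an assumed plane (unit normal pointing toward free space plus an offset), the object position is tested against every plane, and a reaction force proportional to the penetration depth is accumulated. The planes and the stiffness value are illustrative placeholders; the actual method in [85] derives them from the contact graph built in steps (1) and (2).

```python
import numpy as np

K_CONTACT = 500.0   # assumed contact stiffness

# Assumed local C-faces for the current orientation: (unit normal, offset),
# with the free region on the side where n . p - d >= 0.
c_faces = [
    (np.array([0.0, 0.0, 1.0]), 0.0),    # "floor" C-face at z = 0
    (np.array([1.0, 0.0, 0.0]), -0.2),   # "wall" C-face at x = -0.2
]

def reaction_force(p):
    """Sum penalty forces for every C-face the object position has penetrated.
    The penetration depth is the signed distance behind the C-face plane."""
    f = np.zeros(3)
    for n, d in c_faces:
        depth = d - np.dot(n, p)         # > 0 when p is below the plane
        if depth > 0.0:
            f += K_CONTACT * depth * n   # push back along the face normal
    return f

print(reaction_force(np.array([0.1, 0.0, -0.01])))   # slight floor penetration
print(reaction_force(np.array([0.1, 0.0,  0.05])))   # free space: zero force
```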
Guiding

Haptic devices can also provide guiding forces to assist the user in safely teleoperating a robot or to train the user in the performance of a task. Some simple guiding forces may constrain the user motions along a line or curve or over a given working plane or surface, e.g., for a peg-in-hole task, a line can be defined along the axis of the hole, and the user may feel an increasing force as he moves the peg away from that line. Although these simple guides can already be very helpful, some tasks may require more demanding guiding forces to aid the user all along the task execution. Motion planning strategies based on potential fields can cope with these guiding requirements, by defining trajectories that follow a gradient descent of the potential. One promising approach following this line is the use of harmonic functions that guarantee the existence of a single minimum at the goal configuration [86]. This approach, illustrated in Fig. 20.9 for the teleoperation of a bent-corridor planar task, relies on the following three points: (1) Task configuration space modeling: A 2ⁿ-tree hierarchical cell decomposition of the C-space T is built based on an iterative procedure that samples configurations (using a deterministic sampling sequence), evaluates and classifies them, and updates the cell partition when necessary. The cells are characterized by a transparency parameter computed as a function of the number of free and collision samples they contain. The transparency parameter is used for both evaluating the necessity of performing collision checks (i.e., as a lazy-evaluation control) and controlling the partitioning procedure of the cell decomposition. (2) Harmonic functions computation: Two harmonic functions, H1 and H2, are computed interspersed with the
configuration space modeling. They make it possible to globally capture the current knowledge of the C-space T: H1 is used to find a solution channel from the initial cell to the goal cell on the current cell decomposition, and H2 is used to propagate the information of the channel in order to bias the sampling toward the regions around it. The harmonic functions are computed not only over the free cells (fixing the obstacle cells at a high value) but over the whole set of cells (using the transparency as a weighting parameter). (3) Generation of haptic guiding forces: Guiding forces are generated from the harmonic functions. A guiding force generated from H2 pushes the user toward the solution channel, and a guiding force generated from H1 guides the user within the channel toward the goal cell. The forces are computed using a simple point-attraction primitive of a haptic programming toolkit [87]. From the current cell, the user is attracted to the next one (following the negated gradient) by a force directed to its center. This force is felt until the user is within a given distance threshold of the cell center. Then the current cell is updated and the procedure is repeated until the goal cell is reached.

Fig. 20.9 Teleoperation of a bent-corridor task using guiding forces
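The sketch below conveys the essence of this guiding scheme on a plain occupancy grid rather than the 2ⁿ-tree, transparency-weighted decomposition of [86]: a harmonic-like potential is obtained by repeatedly averaging the free cells while keeping obstacle cells at a high value and the goal cell at zero, and the guiding force simply attracts the user toward the neighboring cell of lowest potential (roughly the negated gradient). The grid, wall layout, gain, and iteration count are assumptions for illustration.

```python
import numpy as np

FREE, OBST = 0, 1
grid = np.full((10, 10), OBST)
grid[1:9, 1:9] = FREE                    # free interior surrounded by an obstacle border
grid[5, 1:8] = OBST                      # assumed bent-corridor-like wall
goal = (8, 8)

# Harmonic-like potential: obstacle cells fixed at 1, the goal fixed at 0, and
# every free cell repeatedly replaced by the mean of its 4-neighbors (Jacobi).
u = np.ones(grid.shape)
u[goal] = 0.0
for _ in range(2000):
    avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                  np.roll(u, 1, 1) + np.roll(u, -1, 1))
    u = np.where(grid == FREE, avg, u)
    u[goal] = 0.0                        # keep the boundary condition fixed

def guiding_force(cell, pos, k_att=10.0):
    """Pull the user toward the center of the lowest-potential free neighbor
    of the current (free, interior) cell, i.e., roughly along the negated
    gradient of u, mimicking a point-attraction primitive."""
    i, j = cell
    nbrs = [(i + di, j + dj) for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if grid[i + di, j + dj] == FREE]
    nxt = min(nbrs, key=lambda c: u[c])
    target = np.array(nxt, dtype=float) + 0.5     # center of the next cell
    return k_att * (target - pos)

print(guiding_force((1, 1), np.array([1.5, 1.5])))
```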
20.4.5 Teleoperation of Unmanned Aerial Vehicles/Drones

Bilateral teleoperation can be used to ensure a precise and safe remote piloting of unmanned aerial vehicles (UAVs), the name commonly used to describe an airborne vehicle without any pilot on board that operates under either remote or autonomous control. UAVs are also referred to as remotely piloted vehicles (RPVs), remotely operated aircraft (ROAs), unmanned vehicle systems (UVSs), or simply drones [88]. There are three methods to remotely operate UAVs. The Level of Robot Autonomy (LORA) of the first method is Supervisory Control (Table 20.3), that is, the UAV operates in full autonomy while moving from one place to another and the human only sets the initial and final destinations. In this mode, human operators are not responsible for the intermediate flying decisions between two consecutive waypoints. In the second method, the UAV is piloted by an operator who is monitoring and controlling its course from a fixed base on the ground with a direct line of sight (LORA: Teleoperation). Finally, the third method consists of piloting the UAV as if the operator were onboard (no direct line of sight); this mode requires sensory equipment such as cameras to explore
the environment (LORA: Manual). In the last two cases the human operator typically has to perceive the remote environment using only two-dimensional visual feedback. The limited field of view often leads to low levels of situational awareness, which can make it difficult to safely and accurately control the UAV. Several works in the literature [89–91] propose to provide the human operator with force-based haptic feedback about the robot's environment. Such feedback has proven to help reduce collisions between a human-controlled UAV and the environment and improve operator situational awareness [91]. The small size and agility of many UAVs make such robotic platforms a great fit for a wide range of applications in both military and civilian domains. In the first domain, UAVs are used for surveillance and launching of military operations. Also, UAVs are used for search and rescue operations or for assessing harsh locations. For instance, the Florida State Emergency Response Team deployed the first documented use of small UAVs for disaster response following Hurricane Katrina in 2005 [92]. Flights were carried out to determine whether people were stranded in unreachable areas and if the river was posing immediate threats or to examine structural damage at different buildings. Following that, UAVs have also been deployed in several earthquake scenarios such as L'Aquila in 2009 [89]. Aerial robots have also been used to survey the state of buildings or to measure radiation. The research field on radiation detection using robotic systems comprises two main topics, i.e., localization and mapping. In particular, radiation mapping is focused on the problem of generating a complete map of an area, and localization addresses the problem of finding a point source of radiation using the perceived radiation distribution as a guide in the search behavior of the robot.
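The force-based haptic feedback mentioned above can be conveyed with the following minimal sketch; it is a generic illustration, not the specific algorithms of [89–91] (which include, for instance, control barrier functions): obstacles closer than an activation distance generate on the haptic master a repulsive force that opposes commanded motion toward them. The distances, gain, and obstacle positions are assumed values.

```python
import numpy as np

D_ACT = 2.0      # assumed activation distance [m]
K_REP = 3.0      # assumed repulsion gain

def haptic_feedback_force(p_uav, obstacles):
    """Sum repulsive forces from all obstacles within the activation distance.
    Each term grows as the UAV gets closer and points away from the obstacle."""
    f = np.zeros(3)
    for p_obs in obstacles:
        r = p_uav - p_obs
        d = np.linalg.norm(r)
        if 0.0 < d < D_ACT:
            f += K_REP * (1.0 / d - 1.0 / D_ACT) * (r / d)
    return f

obstacles = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 3.0, 0.0])]
print(haptic_feedback_force(np.array([0.0, 0.0, 0.0]), obstacles))
```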
20.5 Application Fields
The following subsections present several application fields where teleoperation plays a significant role, describing their main particular aspects and some relevant works.
20.5.1 Industry and Construction

Teleoperation in industry-related applications covers a wide range of branches. One of them is mostly oriented toward inspection, repair, and maintenance operations in places with difficult or dangerous access, particularly in power plants [93], as well as toward the management of toxic wastes [94]. In the nuclear industry, the main reason to avoid the presence of human workers is the existence of a continuously radioactive environment, which results in international regulations limiting the number of hours that humans can work in these conditions.
Fig. 20.10 Robot ROBTET for maintenance of electrical power lines. (Courtesy of DISAM, Technical University of Madrid – UPM)
This application was actually the motivation for early real telemanipulation developments, as stated in Sect. 20.2. Some common teleoperated actions in nuclear plants are the maintenance of nuclear reactors, decommissioning and dismantling of nuclear facilities, and emergency interventions. The challenges in these tasks include operation in confined areas with high radiation levels, risk of contamination, unforeseen accidents, and manipulation of materials that can be liquid or solid or have a muddy consistency. Another kind of application is the maintenance of electrical power lines, which requires operations such as replacement of ceramic insulators or opening and reclosing bridges, which are very risky for human operators due to the height of the lines and the possibility of electric shocks, especially under poor weather conditions [95]. That is why electric power companies are interested in the use of robotic teleoperated systems for live-line power maintenance. Examples of these robots are the TOMCAT [96] and the ROBTET (Fig. 20.10) [97]. An interesting application field is construction, where teleoperation can improve productivity, reliability, and safety. Typical tasks in this field are earth-moving, compaction, road construction and maintenance, and trenchless technologies [98]. In general, applications in this field are based on direct visual feedback. One example is radio operation of construction machinery, such as bulldozers, hydraulic shovels, and crawler dump trucks, to build containment barriers against volcanic eruptions [99]. Another example is the use of an experimental robotized crane with a six-DOF parallel kinematic structure to study techniques and technologies to reduce the time required to erect steel structures [100]. Since the tasks to be done are quite different according to the particular application, the specific hardware and devices
used in each case can vary a lot, ranging from a fixed remote station in the dangerous area of a nuclear plant to a mobile remote station assembled on a truck that has to move along an electrical power line or a heavy vehicle in construction.
20.5.2 Agriculture

Agriculture is another application field that can benefit from the use of teleoperation technologies. However, this application field poses several challenges: the agricultural environment is not structured [101]; agricultural production is composed of live produce whose physical characteristics change from one species to another [102]; and the terrain, landscape, and atmospheric conditions vary continuously, and this variation might be unpredictable [103]. Robots in agriculture can be used for fruit harvesting, monitoring, irrigation, fumigation, and fertilization, among other tasks. In particular, ground and aerial mobile robots are well suited for these tasks due to their extended workspace capabilities [104, 105]. Examples of teleoperated robots include a bilateral haptic teleoperation architecture, reported in [106], for controlling a swarm of agricultural UAVs. The proposed scheme is composed of a velocity controller together with a connectivity preservation term and a collision avoidance algorithm. The work in [107] reports an analysis of the usability of different human-machine interaction modes for the teleoperation of an agricultural mobile robot designed for spraying tasks. Another example is a teleoperated manipulator, mounted on a conveyor belt, employed to monitor and take care of urban crops [108].
20.5.3 Mining

Mining is another interesting field of application for teleoperation. The reason is quite clear: operation of a drill underground is very dangerous, and sometimes mines themselves are almost inaccessible. One of the first applications started in 1985, when the thin-seam continuous mining Jeffrey model 102HP was extensively modified by the US Bureau of Mines to be adapted for teleoperation. Communication was achieved using 0.6 inch wires, and the desired entry orientation was controlled using a laser beam [109]. Later, in 1991, a semiautomated haulage truck was used underground and since then has hauled 1.5 million tons of ore without failure. The truck has an on-board personal computer (PC) and video cameras, and the operator can stay on the surface and teleoperate the vehicle using an interface that simulates the dashboard of the truck [110]. The most common devices used for teleoperation in mining are Load-Haul-Dump (LHD) machines and Thin-Seam Continuous Mining (TSCM) machines, which can work in a semiautonomous and teleoperated way. Position measurement, needed for control, is not easy to obtain when the vehicle is beneath the surface, and interference can be a problem, depending on the mine material. Moreover, for the same reason, video feedback has very poor quality. In order to overcome these problems, the use of gyroscopes, magnetic electronic compasses, and radars to locate the position of vehicles while underground has been considered [111]. The problems with visual feedback could be solved by integrating, for instance, data from live video, computer-aided design (CAD) mine models, and process control parameters, and presenting the operator with a view of the environment with augmented reality [112]. In this field, in addition to information directly related to the teleoperation, the operator has to know other measurements for safety reasons, for instance, the volatile gas (like methane) concentration, to avoid explosions produced due to sparks generated by the drilling action. Teleoperated mining is not only considered on Earth. If it is too expensive and dangerous to have a man underground operating a mining system, it is much more so for the performance of mining tasks on the Moon. As stated in Sect. 20.5.5, for space applications, in addition to the particularities of mining, the long transmission delay between the local and remote stations is a significant problem. So, the degree of autonomy has to be increased to perform the simplest tasks locally while allowing a human teleoperator to perform the complex tasks at a higher level [113]. When the machines in the remote station are performing automated actions, the operator can teleoperate some other machinery; thus productivity can be improved by using a multiuser schema at the local station to operate multiple mining systems at the remote station [114].
20.5.4 Underwater

Underwater teleoperation is motivated by the abundance of living and nonliving resources in the oceans, combined with the difficulty for human beings to operate in this environment. The most common applications are related to rescue missions and underwater engineering works, among other scientific and military applications. Typical tasks are pipeline welding, seafloor mapping, inspection and repair of underwater structures, collection of underwater objects, ship hull inspection, laying of submarine cables, sample collection from the ocean bed, and study of marine creatures. A pioneering application was the Cable-Controlled Undersea Recovery Vehicle (CURV) used by the US Navy in 1966 to recover, in the Mediterranean Sea south of Spain, the bombs lost due to a bomber accident [115]. More recent relevant applications are related to the inspection and object
collection from famous sunken vessels, such as the Titanic with the ARGO robot [116], and to ecological disasters, such as the sealing of crevices in the hull of the oil tanker Prestige, which sank in the Atlantic in 2002 [117]. Specific problems in deep underwater environments are the high pressure, the frequently poor visibility, and corrosion. Technological issues that must be considered include robust underwater communication, power source, and sensors for navigation. A particular problem in several underwater applications is the position and force control of the remote actuator when it is floating without a fixed holding point. The most common unmanned underwater robots are remotely operated vehicles (ROVs) (Fig. 20.11), which are typically commanded from a ship by an operator using joysticks. Communication between the local and remote stations is frequently achieved using an umbilical cable with coaxial cables or optical fiber, and power is also supplied by cable. Most of these underwater vehicles carry a robotic arm manipulator (sometimes with hydraulic actuators), whose effect may be negligible on a large vehicle but which introduces significant perturbations in the system dynamics of a small one. Moreover, there are several sources of uncertainty, mainly due to buoyancy, inertial effects, hydrodynamic effects (of waves and currents), and drag forces [118], which have motivated the development of several specific control schemes to deal with these effects [119]. The operational cost of these vehicles is very high, and their performance largely depends on the skills of the operator, because it is difficult to operate them accurately as they are always subject to undesired motion. In the oil industry, for instance, it is common to use two arms: one to provide stability by gripping a nearby structure and another to perform the assigned task. A new use of underwater robots is as a practice tool to prepare and test exploration robots for remote planets and moons [120].

Fig. 20.11 Underwater robot Garbi III AUV. (Courtesy of University of Girona – UdG)
Fig. 20.12 Canadarm 2. (Courtesy of NASA)
20.5.5 Space

The main motivation for the development of space teleoperation is that, nowadays, sending a human into space is difficult, risky, and quite expensive, while the interest in having some devices in space is continuously growing, from the practical (communications satellites) as well as the scientific point of view. The first explorations of space were carried out by robotic spacecraft, such as the Surveyor probes that landed on the lunar surface between 1966 and 1968. The probes transmitted to Earth images and analysis data of soil samples gathered with an extensible claw. Since then, several other ROVs have been used in space exploration, such as in the Voyager missions [121]. Various manipulation systems have been used in space missions. The remote manipulator system, named Canadarm after the country that built it, was installed aboard the space shuttle Columbia in 1981 and since then has been employed in a variety of tasks, mainly focused on the capture and redeployment of defective satellites, besides providing support for other crew activities. In 2001, the Canadarm 2 (Fig. 20.12) was added to the International Space Station (ISS), with more load capacity and maneuverability, to help in more sensitive tasks such as inspection and fault detection of the ISS structure itself. In 2021, the European Robotic Arm (ERA) is expected to be attached to the Russian segment of the ISS, primarily to be used outside the ISS in service tasks requiring precise handling of components [122]. Control algorithms are among the main issues in this type of application, basically due to the significant delay between the transmission of information from the local station on the Earth and the reception of the response from the remote station in space (Sect. 20.4.1). A number of experimental ground-based platforms for telemanipulation such as the Ranger [123], the Robonaut [124], and the space experiment
ROTEX [125] have demonstrated sufficient dexterity in a variety of operations such as plug/unplug tasks and tools manipulation. Another interesting experiment under development is the Autonomous Extravehicular Activity Robotic Camera Sprint (AERCam) [126], a teleoperated free-flying sphere to be used for remote inspection tasks. An experiment in bilateral teleoperation was developed by the National Space Development Agency of Japan (NASDA) [127] with the Engineering Test Satellite (ETS-VII), overcoming the significant time-delay (up to 7s was reported) in the communication channel between the robot and the ground-based control station. Currently, most effort in planetary surface exploration is focused on Mars, and several remotely operated rovers have been sent to this planet [128]. In these experiments the long time-delays in the control signals between Earth-based commands and Mars-based rovers are especially relevant. The aim is to avoid the effect of these delays by providing more autonomy to the rovers. So, only high-level control signals are provided by the controllers on Earth, while the rover solves low-level planning of the commanded tasks (Supervisory Control in the taxonomy of Levels of Robot Autonomy of Table 20.2). NASA’s last-generation Perseverance rover – the largest, most advanced rover NASA has so far sent to another world – follows this function allocation scheme. Perseverance rover touched down on Mars on February 18, 2021 [129]. Another possible scenario to minimize the effect of delays is teleoperation of the rovers with humans closer to them (perhaps in orbit around Mars) to guarantee a short time-delay that will allow the operator to have real-time control of the rover, allowing more efficient exploration of the surface of the planet [130].
20.5.6 Healthcare and Surgery

There are two main reasons for using robot teleoperation in the surgical field. The first is the improvement or extension of the surgeon's abilities when his/her actions are mapped to the remote station, increasing, for instance, the range of position and motion of the surgical tool (motion scaling) or applying very precise small forces without oscillations; this has greatly contributed to major advances in the field of microsurgery, as well as to the development of minimally invasive surgery (MIS) techniques. Using teleoperated systems, surgeries are quicker and patients suffer less than with the conventional approach, also allowing faster recovery. The second reason is to exploit the expertise of very good surgeons around the world without requiring them to travel, which could waste time and fatigue these surgeons. A basic initial step preceding teleoperation in surgical applications was telediagnosis, i.e., the motion of a device, acting as the remote station, to obtain information without
working on the patient. A simple endoscope could be considered as a basic initial application in this regard, since the position of a camera is teleoperated to obtain an appropriate view inside the human body. Telediagnosis also eliminates the possibility of transmitting infectious diseases between patients and healthcare professionals, which can be of great importance in pandemic situations such as the one caused by COVID-19 [131]. Apart from medically isolated areas, telemedicine will also play a key role in areas of natural disasters and war zones where consistent healthcare is unavailable or there is no time to transport a patient to a hospital [132]. It is worth highlighting the first real remote telesurgery performed successfully on September 7, 2001 [133]. The scenario was as follows: the local station, i.e., the surgeon, was located in New York City, USA, and the remote station, i.e., the patient, was in Strasbourg, France. The performed surgery was a laparoscopic cholecystectomy on a 68-year-old female, and it was called Operation Lindbergh, a name derived from the American aviator Charles Lindbergh, because he was the first person to make a solo, nonstop flight across the Atlantic Ocean. This surgery was possible thanks to the availability of a very secure high-speed communication line, allowing a mean total time-delay between the local and remote stations of 155 ms. The time needed to set up the robotic system, in this case the Zeus system [134], was 16 min., and the operation was done in 54 min. without complications. The patient was discharged 48 h later without any particular postoperative problems. A key problem in this application field is that someone's life is at risk, and this affects the way in which information is processed, how the system is designed, the amount of redundancy used, and any other factors that may increase safety. Also, the surgical tool design must integrate sensing and actuation on the millimeter scale. Some instruments used in MIS have only four degrees of freedom, therefore losing the ability to orient the instrument tip arbitrarily, but specialized equipment such as Intuitive Surgical Inc.'s da Vinci Surgical System [135] already incorporates a three-DOF wrist close to the instrument tip that makes the whole system benefit from seven degrees of freedom. In order to perform an operation, at least three surgical instruments are required (the usual number is four): one is an endoscope that provides the video feedback and the other two are grippers or scissors with electric scalpel functions, which should provide some tactile and/or force feedback [136] (Fig. 20.13). The trend now is to extend the application field of the current surgical devices so that they can be used in different types of surgical procedures, particularly including tactile feedback and virtual fixtures to minimize the effect of any imprecise motion of the surgeon [137]. So far, there are more than 25 surgical procedures in at least 6 medical fields that have been successfully performed with telerobotic techniques [138].
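Motion scaling, mentioned above as one way teleoperation extends the surgeon's abilities, amounts to multiplying master displacements by a factor well below one before they are applied to the surgical tool and, when force feedback is available, often amplifying the reflected forces. The snippet below is a generic, hypothetical illustration of that mapping, not the control law of any commercial system; both scale factors are assumed values.

```python
import numpy as np

POS_SCALE = 0.2     # assumed: 1 cm of hand motion -> 2 mm of tool motion
FORCE_SCALE = 3.0   # assumed amplification of reflected forces

def master_to_slave(dx_master, slave_pos):
    """Scale down the master displacement and apply it to the slave tool."""
    return slave_pos + POS_SCALE * dx_master

def slave_to_master_force(f_slave):
    """Scale up the measured interaction force before displaying it."""
    return FORCE_SCALE * f_slave

print(master_to_slave(np.array([0.01, 0.0, 0.0]), np.zeros(3)))   # 2 mm tool step
print(slave_to_master_force(np.array([0.0, 0.0, 0.5])))           # 1.5 N felt
```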
Fig. 20.13 Robotic surgery at Dresden Hospital. (With permission from Intuitive Surgical, Inc. 2007)

20.5.7 Assistance

The main motivation in this field is to provide independence to disabled and elderly people in their daily domestic activities, increasing in this way their quality of life. One of the first relevant applications in this line was seen in 1987, with the development of Handy 1 [139], to enable an 11-year-old boy with cerebral palsy to gain independence at mealtimes. The main components of Handy 1 were a robotic arm, a microcomputer (used as a controller for the system), and an expanded keyboard as Human-Machine Interface (HMI). One of the most difficult parts in developing assistance applications is the HMI, as it must be intuitive and appropriate for people who do not have full capabilities. In this regard, different approaches are considered, such as tactile, voice recognition, joystick/haptic interfaces, buttons, and gesture recognition, among others [140]. Another very important issue, which is a significant difference with respect to most teleoperation scenarios, is that the local and the remote stations share the same space, i.e., the human operator is not isolated from the working area; on the contrary, he/she is actually part of it. This makes the safety of the operator one of the main topics. The remote station is quite frequently composed of a mobile platform and one or more arms installed on it, and the whole system should be adaptable to unstructured and/or unknown environments (different houses), as it is desirable to perform actions such as going up and down stairs, opening various kinds of doors, grasping and manipulating different kinds of objects, and so on. Improvement of the HMI to include different and more friendly ways of use is one of the main current challenges: the interfaces must be even more intuitive and must achieve a higher level of abstraction in terms of user commands. A typical example is the understanding of an order when a voice recognition system is used [141]. Various physical systems are considered for teleoperation in this field, for instance, fixed devices (the disabled person has to get into the device workspace) or devices based on wheelchairs or mobile robots; the latter are the most flexible and versatile and therefore the most used in recently developed assistance robots, such as RobChair [142], ARPH [143], Pearl NurseBot [144], and ASIBOT [140].

20.5.8 Humanitarian Demining
This particular application field is included in a separate subsection due to its relevance from the humanitarian point of view. Land mines are very easy to place but very hard to be removed. Specific robots have been developed to help in the removal of land mines, especially to reduce the high risk that exists when this task is performed by humans. Humanitarian demining differs from the military approach. In the latter it is only required to find a path through a minefield in the minimum time, while the aim in humanitarian demining is to cover the whole area to detect mines, mark them, and remove/destroy all of them. The time involved may affect the cost of the procedure, but should not affect its efficiency. One key aspect in the design of teleoperated devices for demining is that the remote station has to be robust enough to resist a mine explosion or cheap enough to minimize the loss when the manipulation fails and the mine explodes. The removal of a mine is quite a complex task, which is why demining tools include not only teleoperated robotic arms but also teleoperated robotic hands [145]. Some proposals are based on walking machines, such as TITAN-IX [146] and SILO6 [147] (Fig. 20.14). A different method includes the use of machines to mechanically activate the mine, like the Mini Flail, Bozena 4, Tempest, or Dervish, among others; many of these robotic systems have been tested and used in the removal of mines in countries such as Japan, Croatia, and Vietnam [148].
Fig. 20.14 SILO6: A six-legged robot for humanitarian demining tasks. (Courtesy of IAI, Spanish National Research Council - CSIC)

20.5.9 Education

Recently, teleoperation has been introduced in education, and its use can be grouped into two main types. In one of these, the professor uses teleoperation to illustrate the (theoretical) concepts to the students during a lecture by means of the operation of a remote real plant, which obviously cannot be brought to the classroom and would otherwise require a special visit, probably expensive and time-consuming. The second type of educational application is the availability of remote experimental plants where the students can carry out experiments and training, working at common facilities at the school or in their own homes at different
times. In this regard, during the last years, a number of remote laboratory projects have been developed to teach fundamental concepts of various engineering fields, thanks to remote operation and control of scientific facilities via the Internet [149]. The development of e-Laboratory platforms, designed to enable distance training of students in real scenarios of robot programming, has proven useful in engineering training for mechatronic systems [150]. Experiments performed in these laboratories are wide-ranging; they may go from a single user testing control algorithms in a remote real plant [151] to multiple users simulating and teleoperating multiple virtual and real robots in a whole production cell [152]. The main feature in this type of applications is the almost exclusive use of the Internet as the communication channel between the local and remote stations. Due to its ubiquitous characteristic, these applications are becoming increasingly frequent.
20.6 Conclusions and Trends
Teleoperation is a highly topical subject with great potential for expansion in its scientific and technical development as well as in its applications. The development of new wireless communication systems, like 5G, and the diffusion of global communication networks, such as the Internet, can tremendously facilitate the implementation of teleoperation systems. Nevertheless, at the same time, these developments give rise to new problems such as real-time requirements, delays in signal transmission, and loss of information. Research into new control algorithms that guarantee stability even with variable delays constitutes an answer to some of these problems. On the other hand, the creation of new networks that will be able to guarantee a Quality of Service will help considerably in meeting the real-time needs of teleoperated systems.
The information that the human operator receives about what is happening at the remote station is essential for good execution of teleoperated tasks. In this regard, new techniques and devices are necessary in order to facilitate the situation awareness of the human operator in the task that he/she is carrying out. Virtual reality, augmented reality, haptics, and 3D vision systems are key elements for this purpose. The function of the human operator can also be greatly facilitated by aids to teleoperation. These aids, such as relational positioning, virtual guides, collision avoidance methods, and operation planning, can help the construction of efficient teleoperation systems. In any case, it is expected that, in the future, the teleoperation systems will scale to higher levels of robot autonomy, increasing the automation and reducing the role of the human operator. The fields of application of teleoperation are multiple nowadays and will become even more vast in the future, as research continues to outline new solutions to the aforementioned challenges.
References 1. Hokayem, P.F., Spong, M.W.: Bilateral teleoperation: An historical survey. Automatica 42(12), 2035–2057 (2006) 2. Mihelj, M., Podobnik, J.: Haptics for virtual reality and teleoperation. In: Intelligent Systems, Control and Automation: Science and Engineering, vol. 64. Springer, Berlin (2012) 3. Bicker, R., Burn, K., Hu, Z., Pongaen, W., Bashir, A.: The early development of remote tele-manipulation systems. In: Ceccarelli, M. (ed.) International Symposium on History of Machines and Mechanisms, pp. 391–404 (2004) 4. Sheridan, T.B.: Telerobotics, Automation, and Human Supervisory Control. The MIT Press, Cambridge (1992) 5. Milgram, P., Rastogi, A., Grodski, J.J.: Telerobotic control using augmented reality. In: Proceedings 4th IEEE International Workshop on Robot and Human Communication, Tokyo, pp. 21–29 (1995) 6. Fitts, P.M.(ed.) Human Engineering for an Effective AirNavigation and Traffic-Control System. National Research Council, Washington (1951) 7. Save, L., Feuerberg, B.: Designing human-automation interaction: a new level of Automation Taxonomy. In: Human Factors: A View from an Integrative Perspective, Toulouse, pp. 43–55 (2012) 8. Sheridan, T.B., Verplank, W.L.: Human and Computer Control of Undersea Teleoperators. MIT Man-Machine Systems Laboratory, Cambridge (1978) 9. Endsley, M.R., Kaber, D.B.: Level of automation effects on performance, situation awareness and workload in a dynamic control task. Ergonomics 42(3), 462–492 (1999) 10. Endsley, M.R.: Level of automation forms a key aspect of autonomy design. J. Cognitive Engineering and Decision Making 12(1), 29–34 (2018) 11. Kaber, D.B.: Issues in human-automation interaction modeling: presumptive aspects of frameworks of types and levels of automation. J. Cognitive Engineering and Decision Making 12(1), 7–24 (2018) 12. Beer, J.M., Fisk, A.D., Rogers, W.A.: Toward a framework for levels of robot autonomy in human-robot interaction. Journal of Human-Robot Interaction 3(2), 74–99 (2014)
13. Murphy, R.R. Introduction to AI Robotics, 2nd ed. The MIT Press, Cambridge (2019) 14. Nuño, E., Rodríguez, A., Basañez, L.: Force reflecting teleoperation via ipv6 protocol with geometric constraints haptic guidance. In: Advances in Telerobotics. Springer Tracts in Advanced Robotics, vol. 31, pp. 445–458. Springer, Berlin (2007) 15. Basañez, L., Rosell, J., Palomo-Avellaneda, L., Nuño, E., Portilla, H.: A framework for robotized teleoperated tasks. In: Proceedings Robot 2011, pp. 573–580 (2011) 16. Sirouspour, S.: Modeling and control of cooperative teleoperation systems. IEEE Trans. Robot. 21(6), 1220–1225 (2005) 17. Aldana, C., Nuño, E., Basañez, L.: Bilateral teleoperation of cooperative manipulators. In: IEEE International Conference on Robotics and Automation, pp. 4274–4279 (2012) 18. Pliego-Jiménez, J., Arteaga-Pérez, M.A., Cruz-Hernández, C.: Dexterous remote manipulation by means of a teleoperation system. Robotica 37(8), 1457–1476 (2019) 19. Lawrence, D.A.: Stability and transparency in bilateral teleoperation. IEEE Trans. Robot. Autom. 9(5), 624–637 (1993) 20. Mobasser, F., Hashtrudi-Zaad, K.: Transparent rate mode bilateral teleoperation control. Int. J. Robot. Res. 27(1), 57–72 (2008) 21. Anderson, R.J., Spong, M.W.: Bilateral control of teleoperators with time delay. IEEE Trans. Autom. Control 34(5), 494–501 (1989) 22. Niemeyer, G., Slotine, J.J.E.: Stable adaptive teleoperation. IEEE J. Ocean. Eng. 16(1), 152–162 (1991) 23. Nuño, E., Ortega, R., Barabanov, N., Basañez, L.: A globally stable PD controller for bilateral teleoperators. IEEE Trans. Robot. 24(3), 753–758 (2008) 24. Nuño, E., Basañez, L., Ortega, R., Spong, M.W.: Position tracking for nonlinear teleoperators with variable time-delay. Int. J. Robot. Res. 28(7), 895–910 (2009) 25. Secchi, C., Stramigioli, S., Fantuzzi, C.: Transparency in porthamiltonian-based telemanipulation. IEEE Trans. Robot. 24(4), 903–910 (2008) 26. Nuño, E., Basañez, L., Ortega, R.: Passivity-based control for bilateral teleoperation: a tutorial. Automatica 47(3), 485–495 (2011) 27. Chopra, N., Spong, M.W., Ortega, R., Barbanov, N.: On tracking performance in bilateral teleoperation. IEEE Trans. Robot. 22(4), 844–847 (2006) 28. Nuño, E., Ortega, R., Basañez, L.: An Adaptive Controller for Nonlinear Teleoperators. Automatica 46(1), 155–159 (2010) 29. Sarras, I., Nuño, E., Basañez, L.: An adaptive controller for nonlinear teleoperators with variable time-delays. J. Frankl. Inst. 351(10), 4817–4837 (2014) 30. Li, Y., Yin, Y., Zhang, S., Dong, J., Johansson, R.: Composite adaptive control for bilateral teleoperation systems without persistency of excitation. J. Frankl. Inst. 357(2), 773–795 (2020) 31. Zhang, H., Song, A., Li, H., Shen, S.: Novel adaptive finite-time control of teleoperation system with time-varying delays and input saturation. IEEE Trans. Cybern. 51(7), 1–14 (2019) 32. Arteaga, M., Kelly, R.: Robot control without velocity measurements: new theory and experimental results. IEEE Trans. Robot. Autom. 20(2), 297–308 (2004) 33. Garcia-Valdovinos, L., Parra-Vega, V., Arteaga, M.: Observerbased sliding mode impedance control of bilateral teleoperation under constant unknown time delay. Robot. Auton. Syst. 55(8), 609–617 (2007) 34. Polushin, I.G., Tayebi, A., Marquez, H.J.: Control schemes for stable teleoperation with communication delay based on IOS small gain theorem. Automatica 42(6), 905–915 (2006) 35. Hua, C.C., Liu, X.P.: Teleoperation over the internet with/without velocity signal. IEEE Trans. Instrum. Meas. 
60(1), 4–13 (2011) 36. Nuño, E., Basañez, L., López-Franco, C., Arana-Daniel, N.: Stability of nonlinear teleoperators using PD controllers without velocity measurements. J. Frankl. Inst. 351(1), 241–258 (2014)
479 37. Sarras, I., Nuño, E., Basañez, L., Kinnaert, M.: Position tracking in delayed bilateral teleoperators without velocity measurements. Int. J. Robust Nonlinear Control 26(7), 1437–1455 (2016) 38. Nuño, E., Arteaga-Pérez, M., Espinosa-Pérez, G.: Control of bilateral teleoperators with time delays using only position measurements. Int. J. Robust Nonlinear Control 28(3), 808–824 (2018) 39. Arteaga-Pérez, M.A., Morales, M., López, M., Nuño, E.: Observer design for the synchronization of bilateral delayed teleoperators. Eur. J. Control. 43(9), 20–32 (2018) 40. Astolfi, A., Ortega, R., Venkatraman, A.: A globally exponentially convergent immersion and invariance speed observer for mechanical systems with non-holonomic constraints. Automatica 46(1), 182–189 (2010) 41. Aldana, C., Nuño, E., Basañez, L.: Control of bilateral teleoperators in the operational space without velocity measurements. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 5445–5450 (2013) 42. Lee, D., Spong, M.W.: Bilateral teleoperation of multiple cooperative robots over delayed communication networks: Theory. In: IEEE International Conference on Robotics and Automation, pp. 360–365 (2005) 43. Rodriguez-Seda, E.J., Troy, J.J., Erignac, C.A., Murray, P., Stipanovic, D.M., Spong, M.W.: Bilateral teleoperation of multiple mobile agents: coordinated motion and collision avoidance. IEEE Trans. Control Syst. Technol. 18(4), 984–992 (2010) 44. Malysz, P., Sirouspour, S.: A kinematic control framework for single-slave asymmetric teleoperation systems. IEEE Trans. Robot. 27(5), 901–917 (2011) 45. Polushin, I.G., Dashkovskiy, S.N., Takhmar, A., Patel, R.V.: A small gain framework for networked cooperative force-reflecting teleoperation. Automatica 49(2), 338–348 (2013) 46. Li, J., Tavakoli, M., Huang, Q.: Stability of cooperative teleoperation using haptic devices with complementary degrees of freedom. IET Control Theory Appl. 8(12), 1062–1070 (2014) 47. Yang, Y., Hua, C., Guan, X.: Multi-manipulators coordination for bilateral teleoperation system using fixed-time control approach. Int. J. Robust Nonlinear Control 28(18), 5667–5687 (2018) 48. Nuño, E., Ortega, R., Basañez, L., Hill, D.: Synchronization of networks of nonidentical Euler-Lagrange systems with uncertain parameters and communication delays. IEEE Trans. Autom. Control 56(4), 935–941 (2011) 49. Nuño, E., Sarras, I., Basañez, L.: Consensus in networks of nonidentical Euler-Lagrange systems using P+d controllers. IEEE Trans. Robot. 29(6), 1503–1508 (2013) 50. Aldana, C.I., Nuño, E., Basañez, L., Romero, E.: Operational space consensus of multiple heterogeneous robots without velocity measurements. J. Frankl. Inst. 351(3), 1517–1539 (2014) 51. Montaño, A., Suárez, R., Aldana, C.I., Nuño, E.: Bilateral telemanipulation of unknown objects using remote dexterous in-hand manipulation strategies. In: Proceedings of the 2020 IFAC World Congress. Berlin, Germany (2020) 52. Nuño, E., Ortega, R.: Achieving consensus of Euler-Lagrange agents with interconnecting delays and without velocity measurements via passivity-based control. IEEE Trans. Control Syst. Technol. 26(1), 222–232 (2018) 53. Rodriguez-Seda, E.J., Lee, D., Spong, M.W.: Experimental comparison study of control architectures for Bilateral teleoperators. IEEE Trans. Robot. 25(6), 1304–1318 (2009) 54. Lee, J.-Y., Payandeh, S.: Haptic Teleoperation Systems: Signal Processing Perspective. Springer, Switzerland (2015) 55. Loshin, P.: IPv6: Theory, Protocol, and Practice, 2nd edn. 
Morgan Kaufmann Publishers Inc., San Francisco (2003) 56. Everything you need to know about 5G. https://www.qualcomm. com/invention/5g/what-is-5g. Accessed: 2020-06-02 57. ITU-R. IMT Vision—Framework and overall objectives of the future development of IMT for 2020 and beyond. In: M Series, Recommendation ITU-R M.2083-0 (2015)
480 58. Minopoulos, G., Kokkonis, G., Psannis, K.E., Ishibashi, Y.: A survey on haptic data over 5G networks. Int. J. Future Gener. Commun. Networking 12(2), 37–54 (2019) 59. Antonakoglou, K., Xu, X., Steinbach, E., Mahmoodi, T., Dohler, M.: Toward Haptic communications over the 5G Tactile Internet. IEEE Commun. Surv. Tutorials 20(4), 3034–3059 (2018) 60. Doosan First To Use 5G for Worldwide TeleOperation. https:// eu.doosanequipment.com/en/news/2019-28-03-doosan-to-use-5g. Accessed: 2020-06-02 61. Remote surgery using robots advances with 5G tests in China. https://www.therobotreport.com/remote-surgery-via-robots-adva nces-china-5g-tests/. Accessed: 2020-06-02 62. Koziol, M.: Terahertz Waves Could Push 5G to 6G. In: IEEE Spectrum (2019) 63. Endsley, M.R.: Direct measurement of situation awareness: validity and use of SAGAT. Situation Awareness: Analysis and Measurement 10, 147–174 (2000) 64. Endsley, M.R.: Toward a theory of situation awareness in dynamic systems. Hum. Factors 37(1), 32–64 (1995) 65. de Barros, P.G., Lindeman, R.W., Ward, M.O.: Enhancing robot teleoperator situation awareness and performance using vibrotactile and graphical feedback. In: 2011 IEEE Symposium on 3D User Interfaces (3DUI), pp. 47–54 (2011) 66. Sherman, W., Craig, A.: Understanding Virtual Reality: Interface, Application, and Design. Morgan Kaufmann Publishers Inc., Burlington (2002) 67. Stotko, P., Krumpen, S., Schwarz, M., Lenz, C., Behnke, S., Klein, R., Weinmann, M.: A VR system for immersive teleoperation and live exploration with a mobile robot. In: 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 3630–3637 (2019) 68. Inami, M., Kawakami, N., Tachi, S.: Optical camouflage using retro-reflective projection technology. In: Proceedings International Symposium Mixed and Augmented Reality, pp. 348–349 (2003) 69. Azuma, R.T.: The most important challenge facing augmented reality. Presence Teleop. Virt. Environments 25(3), 234–238 (2017) 70. Portilla, H., Basañez, L.: Augmented reality tools for enhanced robotics teleoperation systems. In: Proceedings 3DTV Conference, pp. 1–4 (2007) 71. Rastogi, A., Milgram, P., Drascic, D.: Telerobotic control with stereoscopic augmented reality. In: Proceedings of SPIE—The International Society for Optical Engineering, pp. 115–122 (1996) 72. Kron, A., Schmidt, G., Petzold, B., Zah, M.I., Hinterseer, P., Steinbach, E.: Disposal of explosive ordnances by use of a bimanual haptic telepresence system. In: Proceedings of the IEEE International Conference on Robotics and Automation, 2004 (ICRA ’04), vol. 2, pp. 1968–1973 (2004) 73. Ansar, A., Rodrigues, D., Desai, J.P., Daniilidis, K., Kumar, V., Campos, M.F.M.: Visual and haptic collaborative tele-presence. Comput. Graph. 25(5), 789–798 (2001) 74. DeJong, B.P., Faulring, E.L., Colgate, J.E., Peshkin, M.A., Kang, H., Park, Y.S., Ewing, T.F.: Lessons learned from a novel teleoperation testbed. Ind. Robot. Int. J. 33(3), 187–193 (2006) 75. Otmane, S., Mallem, M., Kheddar, A., Chavand, F.: Active virtual guides as an apparatus for augmented reality based telemanipulation system on the Internet. In: Proceedings of the 33rd Annual Simulation Symposium, Washington, pp. 185–191 (2000) 76. Gu, J., Augirre, E., Cohen, P.: An augmented-reality interface for telerobotic applications. In: Proceedings of the 6th IEEE Workshop on Applications of Computer Vision, Orlando, pp. 220– 224 (2002) 77. Hoffmann, C., Joan-Arinyo, R.: A brief on constraint solving. Comput.-Aided Des. Applic. 2(5), 655–663 (2005) 78. 
Luis Basañez is an Emeritus Professor at the Technical University of Catalonia, where he was Full Professor of System Engineering and Automatic Control from 1986 to 2014 and Director of the Cybernetics Institute. He is the Spanish delegate at the International Federation of Robotics (IFR) and a fellow member of the International Federation of Automatic Control (IFAC). His present research interests include teleoperation, multirobot coordination, sensor integration, and active perception.
Emmanuel Nuño received the B.Sc. degree from the University of Guadalajara (UDG), Mexico, in 2002, and the Ph.D. degree from the Universitat Politècnica de Catalunya, Spain, in 2008. Since 2009, he has been a Titular Professor with the Department of Computer Science at UDG. He is an Editor of the International Journal of Adaptive Control and Signal Processing. His research interests include the control of single and multiple robots.
Carlos I. Aldana received the B.Sc. degree from the University of Guadalajara (UDG) in 2002, the M.Sc. degree from CINVESTAV-IPN in 2004, and the Ph.D. degree in Automatic Control, Robotics and Computer Vision from the Universitat Politècnica de Catalunya (UPC), Spain, in 2015. He is currently a professor in the Department of Computer Science at UDG. His current research interests include task space control, robot networks, teleoperation, and predictive control.
Nature-Inspired and Evolutionary Techniques for Automation
21
Mitsuo Gen and Lin Lin
Contents
21.1 Nature-Inspired and Evolutionary Techniques
21.1.1 Genetic Algorithm
21.1.2 Swarm Intelligences
21.1.3 Other Nature-Inspired Optimization Algorithms
21.1.4 Evolutionary Multi-objective Optimization
21.1.5 Features of Evolutionary Search
21.1.6 Evolutionary Design Automation
21.2 Evolutionary Techniques for Automation
21.2.1 Advanced Planning and Scheduling
21.2.2 Assembly Line System
21.2.3 Logistics and Transportation
21.3 AGV Dispatching in Manufacturing System
21.3.1 Network Modeling for AGV Dispatching
21.3.2 A Priority-Based GA
21.3.3 Case Study of AGV Dispatching
21.4 Robot-Based Assembly Line System
21.4.1 Assembly Line Balancing Problems
21.4.2 Robot-Based Assembly Line Model
21.4.3 Evolutionary Algorithm Approaches
21.4.4 Case Study of Robot-Based Assembly Line Model
21.5 Conclusions
References
Abstract
In this chapter, nature-inspired and evolutionary techniques (ET) will be introduced for treating automation problems in advanced planning and scheduling, assembly line systems, logistics, and transportation. ET is the most popular meta-heuristic method for solving NP-hard optimization problems. In the past few years, ETs have been exploited to solve design automation problems. Concurrently, the field of ET reveals a significant interest in evolvable hardware and in problems such as scheduling, placement, or test pattern generation. The rest of this chapter is organized as follows. First, the background and developments of nature-inspired techniques and ETs are described. Then the basic schemes and working mechanisms of genetic algorithms (GA), swarm intelligence, and other nature-inspired optimization algorithms are given. Multi-objective evolutionary algorithms for treating optimization problems with multiple and conflicting objectives are presented. Features of evolutionary search, such as hybrid evolutionary search and enhanced EA via learning, and evolutionary design automation are presented. Next, the various applications based on ETs for solving nonlinear/combinatorial optimization problems in automation are surveyed. In terms of advanced planning and scheduling (APS), the facility layout problems, planning and scheduling of manufacturing systems, and resource-constrained project scheduling problems are included. In terms of assembly line systems, various assembly line balancing (ALB) models are included. In terms of logistics and transportation, location allocation models and various types of logistics network models are included.
Keywords
Evolutionary algorithms · Flexible manufacturing system · Multi-objective optimization problem · Automated guided vehicle · Assembly line balancing · Facility layout problem
M. Gen
Department of Research, Fuzzy Logic Systems Institute (FLSI), Iizuka, Japan
Research Institute of Science and Technology, Tokyo University of Science, Tokyo, Japan
e-mail: [email protected]; [email protected]

L. Lin
International School of Information Science and Engineering, Dalian University of Technology, Economy and Technology Development Area, Dalian, China
e-mail: [email protected]
21.1 Nature-Inspired and Evolutionary Techniques
Nature-inspired technology (NIT) and evolutionary techniques (ET) are subfields of artificial intelligence (AI), and refer to a synthesis of methodologies from fuzzy logic (FL), neural networks (NN), genetic algorithms (GA), and other evolutionary algorithms (EAs). ET uses the evolutionary process within a computer to provide a means for addressing complex engineering problems involving chaotic disturbances, randomness, and complex nonlinear dynamics that traditional algorithms have been unable to conquer.

Computer simulations of evolution started as early as 1954; however, most publications were not widely noticed. From these beginnings, computer simulation of evolution by biologists became more common in the early 1960s. Evolution strategies (ES) were introduced by Rechenberg in the 1960s and early 1970s [1]. Evolutionary programming (EP) was first used by Lawrence J. Fogel in the USA in the 1960s in order to use simulated evolution as a learning process aiming to generate artificial intelligence [2]. The genetic algorithm (GA) in particular became popular through the work of Holland in the early 1970s [3]. His work originated with studies of cellular automata, conducted by Holland and his students at the University of Michigan. Holland introduced a formalized framework for predicting the quality of the next generation, known as Holland's schema theorem. Research in GAs remained largely theoretical until the mid-1980s. Genetic programming (GP) is an extended technique of GA popularized by Koza in which computer programs, rather than function parameters, are optimized [4]. Genetic programming often uses tree-based internal data structures to represent the computer programs for adaptation, instead of the list structures typical of genetic algorithms.

Evolutionary techniques are generic population-based meta-heuristic optimization algorithms, summarized in Fig. 21.1. Similar techniques differ in genetic representation, in other implementation details, and in the nature of the particular problem to which they are applied. In general, ETs have five basic components, as summarized by Michalewicz [5]:

1. A genetic representation of potential solutions to the problem
2. A way to create a population (an initial set of potential solutions)
3. An evaluation function rating solutions in terms of their fitness
4. Genetic operators that alter the genetic composition of offspring (crossover, mutation, selection, etc.)
5. Parameter values that genetic algorithms use (population size, probabilities of applying genetic operators, etc.)
Several journals and conferences provide an easy way of exchanging new ideas, progress, and experience on ETs and promote better understanding and collaboration between the theorists and practitioners in this field. According to Google Scholar Metrics (2020), the top 10 publications are: Applied Soft Computing, IEEE Congress on Evolutionary Computation, Soft Computing, Swarm and Evolutionary Computation, Conference on Genetic and Evolutionary Computation, Evolutionary Computation, IEEE Symposium Series on Computational Intelligence, Memetic Computing, International Journal of Bio-Inspired Computation, and Natural Computing.
21.1.1 Genetic Algorithm

Genetic algorithm (GA) is the most widely known nature-inspired method among the ETs. GA includes the common essential elements of ETs and has wide real-world applications. GA is a stochastic search technique based on the mechanism of natural selection and natural genetics. The central theme of research on GA is to keep a balance between exploitation and exploration in its search for the optimal solution, for survival in many different environments. Features for self-repair, self-guidance, and reproduction are the rule in biological systems, whereas they barely exist in the most sophisticated artificial systems. GA has been theoretically and empirically proven to provide a robust search in complex search spaces.

GA, differing from conventional search techniques, starts with an initial set of random solutions called the population. The i-th individual in the population P(t) is called a chromosome v_t(i), representing a solution to the problem at hand. A chromosome is a string of symbols, usually, but not necessarily, a binary bit string. The chromosomes evolve through successive iterations, called generations. During each generation t, the chromosomes are evaluated using some measure of fitness fit_t(i) or eval(P). To create the next generation, new chromosomes, called offspring C(t), are generated by merging two chromosomes from the current generation using a crossover operator and/or modifying a chromosome using a mutation operator. A new generation is formed by selecting some of the parents and offspring, according to the fitness values, and rejecting others so as to keep the population size constant. Fitter chromosomes have higher probabilities of being selected. After several generations, the algorithm converges to the best chromosome, which hopefully represents the optimum or a suboptimal solution to the problem. Figure 21.2 shows the evolution process of GA, and the pseudo-code of the GA is given in Algorithm 1.
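As a concrete counterpart to this description and to Algorithm 1, the following minimal Python sketch evolves binary chromosomes against the toy OneMax fitness (the number of 1-bits); the fitness function, parameter values, and operator choices are illustrative assumptions rather than anything prescribed in this chapter.

import random

def one_max(chromosome):            # toy fitness: number of 1-bits (assumed example)
    return sum(chromosome)

def genetic_algorithm(n_bits=20, pop_size=30, generations=100, pc=0.8, pm=0.02):
    # initial population P(0): random binary chromosomes
    population = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    best = max(population, key=one_max)
    for _ in range(generations):
        def select():               # binary tournament selection
            a, b = random.sample(population, 2)
            return a if one_max(a) >= one_max(b) else b
        offspring = []
        while len(offspring) < pop_size:
            p1, p2 = select(), select()
            if random.random() < pc:            # one-cut-point crossover
                cut = random.randint(1, n_bits - 1)
                c1, c2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            else:
                c1, c2 = p1[:], p2[:]
            for child in (c1, c2):              # bit-flip mutation, rate pm per gene
                for i in range(n_bits):
                    if random.random() < pm:
                        child[i] = 1 - child[i]
                offspring.append(child)
        population = offspring[:pop_size]
        best = max(population + [best], key=one_max)
    return best, one_max(best)

if __name__ == "__main__":
    solution, fitness = genetic_algorithm()
    print(solution, fitness)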
Fig. 21.1 Types of nature-inspired and evolutionary techniques (a timeline from the 1970s to the 2010s covering GA, ES, CS, EP, PGA, GP, LCS, EMO, IEC, MA, ACO, CEA, LLGA, NeuroE, EDA, DE, NSGA, EVH, PSO, NSGA II, and ABC)
Fig. 21.2 Evolution process of genetic algorithm (chromosomes v_t(i) of population P(t) are evaluated with fitness fit_t(i); crossover and mutation create offspring C(t); evaluation and selection form the new population, repeated until the stop condition yields the best individual)
21.1.2 Swarm Intelligences

Ant Colony Optimization
Ant colony optimization (ACO) is a kind of bionic algorithm inspired by the foraging behavior of ants in nature [6]. Research on the behavior of ants shows that most of the information transmitted between individuals in a group, and between individuals and the environment, is carried by chemical substances produced by the ants. This is a special
substance called pheromone. They use pheromone to mark the path on the ground, such as the path from a food source to an ant colony. When ants walk from a food source to an ant nest or from an ant nest to a food source, they will release pheromone on the ground they pass by, thus forming a path containing pheromone. Ants can perceive the concentration of pheromone on the path, and select the path with the highest pheromone concentration with a higher probability. Ants find the location of food along the way by sensing the
pheromone released by other ants. This way of influencing the path selection of ant colonies, based on the information of chemical substances released by other ants, is the inspiration for ACO. The algorithm has the characteristics of distributed computing, positive feedback of information, and heuristic search. It has been widely recognized, and its application has been extended to all aspects of the optimization field.

ACO starts from a set of random initial solutions, called the ant colony P(t). Each ant in P(t) represents a solution, and the number of ants is N. After a certain number of generations of state transitions, ACO can find the best solution. During each generation t, the ant colony is evaluated using some measure of fitness fit_t(i). For the ant colony after the state transition, the state transition probability is calculated according to the pheromone Ph(t). A new ant colony, called C(t), is searched globally or locally based on the state transition probability. The first-generation pheromone is represented by the fitness of the first-generation ant colony: the greater the fitness, the more the pheromone. Subsequent updates follow the formula:

Ph(t + 1) = (1 − ρ) Ph(t)    (21.1)

where ρ is the pheromone evaporation rate, 0 < ρ < 1, whose purpose is to avoid unlimited accumulation of pheromone. A new generation is formed by selecting some of the ant colony before the state transition and some of the ant colony after the state transition, according to the fitness values, and rejecting others so as to keep the population size constant. After several generations, the algorithm converges to the best ant, which hopefully represents the optimum or a suboptimal solution to the problem. The pseudo-code of the ACO is given in Algorithm 2.

Algorithm 1: Genetic Algorithm (GA)
Input: problem data, parameters;
Process: t ← 0;
  initialize P(t) by encoding routine;
  evaluate eval(P) by decoding routine & keep the best solution;
  while (not terminating condition) do
    create C(t) from P(t) by crossover routine;
    create C(t) from P(t) by mutation routine;
    evaluate C(t) by decoding routine & update the best solution;
    reproduce P(t+1) from P(t) and C(t) by selection routine;
    t ← t + 1;
  end
Output: the best solution;

Algorithm 2: Ant Colony Optimization (ACO)
Input: problem data, parameters;
Process: t ← 0;
  initialize pheromone Ph(t) by encoding routine;
  evaluate P(t) by decoding routine & keep the best solution;
  while (not terminating condition) do
    calculate state transition probability by pheromone Ph(t);
    create C(t) from P(t) by local search or global search;
    evaluate C(t) by decoding routine & update the best solution;
    reproduce P(t+1) from P(t) and C(t) by selection routine;
    update pheromone Ph(t+1) from pheromone Ph(t) by (21.1);
    t ← t + 1;
  end
Output: the best solution;

Particle Swarm Optimization
Particle swarm optimization (PSO) is a swarm intelligence optimization algorithm first proposed by Kennedy and Eberhart in 1995 [7]. The algorithm uses swarm iterations, allowing particles to follow the optimal particle in the solution space, to simulate the mutual cooperation mechanism found in the foraging behavior of flocks of birds and schools of fish and to find the optimal solution to the problem. All particles search in a D-dimensional space. Each particle is assigned a fitness value by a fitness function to judge its current position. Each particle is endowed with a memory to remember the best position it has found. Each particle also has a velocity that determines the distance and direction of its flight; this velocity is dynamically adjusted based on its own experience and the flying experience of its peers. The d-dimensional velocity and position update formulas of particle i are:

v_{id}^{k} = w v_{id}^{k-1} + c_1 r_1 (pbest_{id} − x_{id}^{k-1}) + c_2 r_2 (gbest_{id} − x_{id}^{k-1})    (21.2)

x_{id}^{k} = x_{id}^{k-1} + v_{id}^{k}    (21.3)

where x_{id}^{k} is the d-dimensional component of the position vector of particle i in the k-th iteration and v_{id}^{k} is the d-dimensional component of its flying velocity vector. c_1 and c_2 are acceleration constants that adjust the maximum learning step length. r_1 and r_2 are two random numbers in the range [0, 1], introduced to increase the randomness of the search. w is a non-negative inertia weight that adjusts the search range over the solution space. In 1998, Shi and Eberhart [8] introduced the inertia weight ω and proposed to adjust it dynamically in order to balance global convergence and convergence speed. This
algorithm is called the standard PSO algorithm. The inertia weight ω describes the influence of the particle's previous-generation velocity on the current-generation velocity. In 1999, Clerc [9] introduced a shrinkage (constriction) factor to ensure the convergence of the algorithm. The velocity update formula becomes:

v_{id} = K (v_{id} + ϕ_1 r_1 (pbest_{id} − x_{id}) + ϕ_2 r_2 (gbest_{id} − x_{id}))    (21.4)

where the shrinkage factor K plays the role of ω and is limited by ϕ_1 and ϕ_2; ϕ_1 and ϕ_2 are model parameters that need to be set in advance. The shrinkage-factor method controls the behavior of the system so that it finally converges, and it can effectively search different areas. In 2001, van den Bergh and Engelbrecht [10] proposed a cooperative particle swarm algorithm. The specific steps are to assume that the dimension of the particle swarm is n, divide each particle into n smaller parts, and then optimize each part separately; after evaluating the fitness value, the parts are merged into a complete particle. The results show that the algorithm achieves a faster convergence rate on many problems.
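The updates of Eqs. (21.2) and (21.3) translate into a few lines of code. The sketch below assumes a simple sphere objective and arbitrary coefficient values purely for illustration.

import random

def sphere(x):                        # assumed toy objective to minimize
    return sum(xi * xi for xi in x)

def pso(dim=5, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5):
    X = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in X]                       # personal best positions
    gbest = min(pbest, key=sphere)                  # global best position
    for _ in range(iters):
        for k in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Eq. (21.2): inertia + cognitive + social terms
                V[k][d] = (w * V[k][d]
                           + c1 * r1 * (pbest[k][d] - X[k][d])
                           + c2 * r2 * (gbest[d] - X[k][d]))
                X[k][d] += V[k][d]                  # Eq. (21.3): move the particle
            if sphere(X[k]) < sphere(pbest[k]):     # update personal best
                pbest[k] = X[k][:]
        gbest = min(pbest + [gbest], key=sphere)    # update global best
    return gbest, sphere(gbest)

if __name__ == "__main__":
    best, value = pso()
    print(value)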
Algorithm 3: Particle Swarm Optimization (PSO)
Input: problem data, parameters;
Process: t ← 0;
  initialize velocity v(t) and position x(t) by encoding routine;
  evaluate v(t) and x(t) by decoding routine & keep the best solution gBest;
  while (not terminating condition) do
    for each particle x^k(t) in swarm do
      update velocity v^k(t+1) by (21.2);
      update position x^k(t+1) by (21.3);
      evaluate x^k(t+1) by decoding routine;
      if f(x^k(t+1)) < f(pBest^k) then
        update pBest^k(t) = x^k(t+1);
    update gBest = argmin { f(pBest^k(t)), f(gBest(t)) };
    t ← t + 1;
  end
Output: the best solution gBest;

21.1.3 Other Nature-Inspired Optimization Algorithms

Differential Evolution
Differential evolution (DE) is a population-based heuristic algorithm. DE was proposed by Storn and Price in 1995 [11] and aims to find the global optimum of possibly nonlinear and non-differentiable functions in continuous space. DE fulfills three requirements of a practical algorithm: first, it can find the global optimum no matter how the parameters are initialized; second, its convergence is fast; third, it is easy to use because it requires few control parameters. As one of the most powerful and versatile evolutionary optimizers, DE has been demonstrated to be efficient in a variety of fields, such as artificial intelligence (AI) and electricity.

DE starts from a set of random initial solutions, called the population P. Each individual in P represents a solution, denoted x, and the number of individuals is N. After a certain number of generations of evolution, DE can find the global optimal solution. As in GA and PSO, the evolutionary operations include crossover, mutation, selection, and so on. Different from other optimization algorithms, the evolution of the DE algorithm is driven by the differential information of multiple individuals. The mutation of DE randomly selects two different individuals in the population and then scales their vector difference to perform vector synthesis with the individual to be mutated. The formula is as follows:

v_r(t + 1) = x_{r1}(t) + F (x_{r2}(t) − x_{r3}(t))    (21.5)

in which r1, r2, r3 belong to [1, N] and r1 ≠ r2 ≠ r3. F is the scale factor and t represents the generation. The difference x_{r2}(t) − x_{r3}(t) tends to adapt to the natural scales of the objective landscape through the iteration of the population. For example, the difference becomes smaller when a variable of the population becomes compact, and this kind of adaptive adjustment helps speed up the exploration of the solution space, which makes DE more effective. The crossover operation is as follows:

u_γ(t + 1) = v_γ(t + 1), if U(0, 1) ≥ CR;  u_γ(t + 1) = x_γ(t), otherwise    (21.6)

where U(0,1) represents a random real number uniformly distributed in [0,1], and CR represents the crossover probability. In DE, a greedy selection strategy is adopted: when an offspring is generated, its fitness value is compared with that of the corresponding parent, and the individual with the better fitness is selected to enter the next generation. The selection formula is as follows:

x_γ(t + 1) = u_γ(t + 1), if f(u_γ(t + 1)) ≤ f(x_γ(t));  x_γ(t + 1) = x_γ(t), otherwise    (21.7)

where f represents the fitness function and the goal is to minimize the fitness value. The pseudo-code of the DE is given in Algorithm 4.
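Equations (21.5), (21.6), and (21.7) map almost directly onto code. The sketch below is one minimal reading of them, assuming a sphere objective and typical control parameter values; it uses the common binomial crossover rule (take the mutant component when U(0,1) ≤ CR, with one guaranteed mutant gene), which differs slightly in form from the inequality written in Eq. (21.6).

import random

def sphere(x):                                   # assumed toy objective to minimize
    return sum(xi * xi for xi in x)

def differential_evolution(dim=5, n=30, iters=300, F=0.5, CR=0.9):
    pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    for _ in range(iters):
        for i in range(n):
            # mutation, Eq. (21.5): v = x_r1 + F * (x_r2 - x_r3), with distinct r1, r2, r3
            r1, r2, r3 = random.sample([j for j in range(n) if j != i], 3)
            v = [pop[r1][d] + F * (pop[r2][d] - pop[r3][d]) for d in range(dim)]
            # binomial crossover, cf. Eq. (21.6): mix mutant and target components
            jrand = random.randrange(dim)        # guarantee at least one mutant gene
            u = [v[d] if (random.random() <= CR or d == jrand) else pop[i][d]
                 for d in range(dim)]
            # greedy selection, Eq. (21.7): keep the better of trial and target
            if sphere(u) <= sphere(pop[i]):
                pop[i] = u
    best = min(pop, key=sphere)
    return best, sphere(best)

if __name__ == "__main__":
    best, value = differential_evolution()
    print(value)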
Algorithm 4: Differential Evolution (DE)
Input: problem data, parameters;
Process: t ← 0;
  initialize v(t), u(t), x(t) by real number encoding;
  evaluate eval(t) by fitness function & keep the best solution;
  while (not terminating condition) do
    create v(t+1) by (21.5);
    create u(t+1) by (21.6);
    create x(t+1) by (21.7);
    reproduce P(t+1) from P(t) and x(t);
    t ← t + 1;
  end
Output: the best solution gBest;

Although DE requires only three control parameters (the population size N, the crossover probability CR, and the scale factor F), it achieves remarkable performance in accuracy, robustness, and convergence speed compared with other ETs such as GA and PSO.

Estimation of Distribution Algorithm
Estimation of distribution algorithm (EDA), based on statistical optimization techniques, is one of the emerging types of ETs. Compared with other evolutionary algorithms, which use traditional evolutionary operators to produce the next generation, EDA predicts the most promising area of the solution space by constructing a probabilistic model and samples that model to generate excellent individuals. Constructing the probabilistic model is the core of EDA: different probabilistic models should be designed for different types of optimization problems to describe the distribution of their solution spaces. Compared with the genetics-based micro-level evolution of GA, EDA adopts a macro-level evolution method based on the search space, which gives it stronger global search capabilities and the ability to solve high-dimensional complex problems. Hao et al. proposed an effective estimation of distribution algorithm for the stochastic job shop scheduling problem [12].

EDA typically works with all individuals in the population, starting by initializing the population according to a uniform distribution. The i-th solution is represented as the i-th individual in the population. During each generation, the fitness function fit_t(i) is used to score the individuals of the population Pop(t); the higher the score, the better the fitness. The selection operator selects the elite individuals of this generation to form the dominant sub-population by setting a threshold in the ranked population. EDA estimates the probability distribution p_t(Pop) of all individuals in the dominant sub-population by constructing a probabilistic model, and new individuals are generated by sampling the distribution of the encoded model. These new individuals are merged with, or directly replace, individuals from the old population to form a new population. This step is repeated until the algorithm converges or meets the termination condition, and the pseudo-code is given in Algorithm 5.

Algorithm 5: Estimation of Distribution Algorithm (EDA)
Input: problem data, parameters;
Process: t ← 0;
  initialize Pop_t() by encoding routine and calculate p_t(Pop);
  evaluate Pop_t() by decoding & keep the best solution;
  while (not terminating condition) do
    supPop = select(Pop_t());                  // select some superior populations
    p_{t+1}() = estimate(supPop, p_t(Pop));    // estimate next probability
    newPop = create(p_{t+1}(), supPop);        // create new population
    evaluate newPop by decoding routine and update the best solution Sbest;
    Pop_{t+1}() = reproduce(newPop, Pop_t());  // reproduce next population
    t ← t + 1;
  end
Output: the best solution Sbest;
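For binary problems, the probabilistic model in Algorithm 5 can be as simple as one independent probability of a 1-bit per gene, estimated from the superior sub-population and re-sampled each generation (a UMDA-style model). The OneMax fitness and all parameter values below are assumptions for illustration.

import random

def one_max(x):                                   # assumed toy fitness to maximize
    return sum(x)

def umda(n_bits=30, pop_size=50, n_select=20, generations=60):
    p = [0.5] * n_bits                            # one probability of '1' per gene
    best = None
    for _ in range(generations):
        # sample a new population from the probabilistic model
        pop = [[1 if random.random() < p[d] else 0 for d in range(n_bits)]
               for _ in range(pop_size)]
        pop.sort(key=one_max, reverse=True)
        if best is None or one_max(pop[0]) > one_max(best):
            best = pop[0]
        elite = pop[:n_select]                    # superior sub-population
        # re-estimate the marginal probabilities from the elite individuals
        p = [sum(ind[d] for ind in elite) / n_select for d in range(n_bits)]
        # keep probabilities away from 0/1 so sampling retains diversity
        p = [min(max(pd, 0.05), 0.95) for pd in p]
    return best, one_max(best)

if __name__ == "__main__":
    best, fit = umda()
    print(fit)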
Simulated Annealing
The simulated annealing (SA) algorithm is an important heuristic algorithm that simulates the process of solid annealing. It accepts a solution worse than the current one with a certain probability, so it has a greater chance of jumping out of a local optimum and finding the global optimal solution. The pseudo-code for SA is shown in Algorithm 6.

Algorithm 6: Simulated Annealing (SA)
Input: current temperature T, minimum temperature Tmin, fitness function J, current position Y, decay rate r
Process: i ← 0;
  while T > Tmin do
    dE = J(Y(i+1)) − J(Y(i));
    if dE ≥ 0 then
      Y(i+1) = Y(i);
    else if exp(dE/T) > random(0, 1) then
      Y(i+1) = Y(i);
    else
      continue;
    T = r * T;
    i = i + 1;
Output: the best position Y(i)
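A compact continuous-domain sketch of the same idea, using the standard Metropolis acceptance rule exp(−ΔE/T) for worse moves and geometric cooling, is given below; the objective, the neighborhood move, and the cooling parameters are assumptions chosen only for illustration.

import math
import random

def energy(x):                                   # assumed objective to minimize
    return sum(xi * xi for xi in x)

def simulated_annealing(dim=3, T=1.0, T_min=1e-4, r=0.95, moves_per_T=50):
    current = [random.uniform(-5, 5) for _ in range(dim)]
    best = current[:]
    while T > T_min:
        for _ in range(moves_per_T):
            candidate = current[:]               # random neighbor: perturb one coordinate
            d = random.randrange(dim)
            candidate[d] += random.gauss(0, 0.5)
            dE = energy(candidate) - energy(current)
            # accept improvements always, worse moves with probability exp(-dE/T)
            if dE <= 0 or random.random() < math.exp(-dE / T):
                current = candidate
                if energy(current) < energy(best):
                    best = current[:]
        T *= r                                   # geometric cooling schedule
    return best, energy(best)

if __name__ == "__main__":
    best, value = simulated_annealing()
    print(value)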
Matai, Singh, and Mittal proposed a modified simulated annealing (mSA) algorithm for the equal-area facility layout problem [13]. Pourvaziri and Pierreval proposed a simulation-based GA for the layout of machines in a manufacturing system, considering the aisle structure and its capacity [14].
21.1.4 Evolutionary Multi-objective Optimization Multiple objective problems arise in the design, modeling, and planning of many complex real systems in the areas of industrial production, urban transportation, capital budgeting, forest management, reservoir management, layout and landscaping of new cities, energy distribution, etc. It is easy to find that almost every important real-world decision problem involves multiple and conflicting objectives which need to be tackled while respecting various constraints, leading to overwhelming problem complexity. Since the 1990s, ETs have been received considerable attention as a novel approach to multi-objective optimization problems, resulting in a fresh body of research and applications known as evolutionary multi-objective optimization (EMO). A special issue in the multi-objective optimization problem is the fitness assignment mechanism. Since the 1980s, several fitness assignment mechanisms have been proposed and applied to multi-objective optimization problems, although most fitness assignment mechanisms are just different approaches and are applicable to different cases of multi-objective optimization problems. In order to understand the development of EMO, we classify the fitness assignment mechanisms according to the published year. Type 1: Vector evaluation approach. The vector evaluated genetic algorithm (VEGA) was the first notable work to solve multi-objective problems in which a vector fitness measure is used to create the next generation [15]. Type 2: Pareto ranking + diversity: Fonseca and Fleming proposed a multi-objective genetic algorithm (MoGA) in which the rank of a certain individual corresponds to the number of individuals in the current population that dominate it [16]. Srinivas and Deb also developed a Pareto-rankingbased fitness assignment and called it the nondominated sorting genetic algorithm (NSGA) [17]. In each method, the nondominated solutions constituting a nondominated front are assigned the same dummy fitness value. He et al. combined fuzzy logic with Pareto sorting into moEA, called fuzzy dominance genetic algorithm (FDGA), using fuzzy logic based on the left Gaussian function to quantify the degree of domination, from dominate to being dominated in each objective, and then using the set theoretic operator to combine multiple fuzzy sets to allow two individuals to compare between multiple objectives [18]. Yuan et al. also developed an improved Pareto dominated domain called θ dominance-based evolutionary algorithm (θ-DEA), in which only the solutions under the same cluster have competitive relations [19]. Li, Yang, and Liu proposed a shift-based density estimation strategy (SDE), which incorporates additional information on the basis of individual distribution information. This convergence information can reflect the relative
degree of proximity between individuals and Pareto frontiers [20]. Xiang proposed a vector angle-based evolutionary algorithm (VAEA), which adopts a selection operator that combines the maximum-vector-angle-first principle and the worse-elimination principle to ensure the balance between convergence and diversity [21]. Yang et al. proposed a gridbased Evolutionary Algorithm (GBEA), which defines the dominance and evaluation criteria under the grid environment, and can effectively increase the selection pressure of moving to the Pareto front [22]. Type 3: Weighted sum + elitist preserve: Ishibuchi and Murata proposed a weighted-sum-based fitness assignment method, called the random-weight genetic algorithm (RWGA), to obtain a variable search direction toward the Pareto frontier [23]. The weighted-sum approach can be viewed as an extension of methods used in the multi-objective optimizations to GAs. It assigns weights to each objective function and combines the weighted objectives into a single objective function. Gen et al. proposed another weightsum-based fitness assignment method called the adaptiveweight genetic algorithm (AWGA), which readjusts weights for each objective based on the values of nondominated solutions in the current population to obtain fitness values combined with the weights toward the Pareto frontier [24]. Zitzler and Thiele proposed the strength Pareto evolutionary algorithm (SPEA) [25] and an extended version SPEA 2 [26] that combines several features of previous multiobjective genetic algorithms (MoGA) in a unique manner. Deb suggested a nondominated sorting-based approach called the nondominated sorting genetic algorithm II (NSGA II) [27], which alleviates three difficulties: computational complexity, nonelitism approach, and the need to specify a sharing parameter. NSGA II was advanced from its origin, NSGA. Gen et al. proposed an interactive adaptive-weight genetic algorithm (i-AWGA), which is an improved adaptiveweight fitness assignment approach with consideration of the disadvantages of the weighted-sum and Pareto-rankingbased approaches [24]. On the basis of NSGA II, Deb and Jain proposed an more suitable algorithm for solving manyobjective optimization problems, called NSGA III, which uses a number of widespread reference points in selection operators and has better ability to maintain population diversity [28]. Zhang et al. proposed a hybrid sampling strategy based multi-objective evolutionary algorithms (MoEA-HSS) by combining archive mechanisms based on VEGA and Pareto-based fitness functions and applied it to the process planning and scheduling problem [29]. Recently Zhang et al. also proposed HMoEA with fast sampling strategy-based global search and route sequence differencebased local search (HMoEA-FSS.RSD) and applied it to VRP with time window [30]. Ke, Zhang, and Battiti proposed an algorithm that combines a decomposition-based strategy with swarm
intelligence, called multi-objective evolutionary algorithm with decomposition and ant colony optimization (MoEA/dACO). This method decomposes the multi-objective problem into a set of single objective subproblems, and evolves the individuals of each subpopulation to approach the Pareto frontier by sharing pheromones [31]. Li et al. developed another decomposition-based method, called multiple objective evolutionary algorithm based on directional diversity (MoEA/dd), in which a two-layer vector is designed to adjust the weight vector adaptively [32].
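Most of the fitness assignment mechanisms surveyed above build on the notion of Pareto dominance. The following minimal sketch, with assumed toy objective values, shows dominance checking and the extraction of the nondominated set that ranking-based methods such as NSGA or SPEA start from.

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization assumed)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated(front):
    """Filter a list of objective vectors down to its nondominated set."""
    return [a for a in front
            if not any(dominates(b, a) for b in front if b is not a)]

if __name__ == "__main__":
    # toy bi-objective values (illustrative assumption, not from the chapter)
    objs = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0), (2.5, 2.5)]
    print(nondominated(objs))   # (3.0, 4.0) drops out: it is dominated by (2.0, 3.0)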
21.1.5 Features of Evolutionary Search
Exploitation and Exploration Search is one of the more universal problem solving methods for such problems one cannot determine a priori sequence of steps leading to a solution. Search can be performed with either blind strategies or heuristic strategies [33]. Blind search strategies do not use information about the problem domain. Heuristic search strategies use additional information to guide search move along with the best search directions. There are two important issues in search strategies: exploiting the best solution and exploring the search space [34]. ET is a class of general purpose search methods combining elements of directed and stochastic search which can produce a remarkable balance between exploration and exploitation of the search space. At the beginning of evolution search, there is a widely random and diverse population and the blind strategies tend to perform widespread search for exploring all solution space. As the high fitness solutions develop, the heuristic strategies provide exploration in the neighborhood of each of them. In other words, the evolutionary operators would be determined by the environment of evolutionary system (the diversity of population) but not by the operator itself. In addition, simple evolutionary operators are designed as general purpose search methods (the domain-independent search methods) they perform essentially a blind search and could not guarantee to yield an improved offspring. Hybrid Evolutionary Search ET has proved to be a versatile and effective approach for solving optimization problems. Nevertheless, there are many situations in which the simple ET does not perform particularly well, and need to design specific methods of hybridization based on specific problems. One of most common forms of hybrid evolutionary search is to incorporate local optimization as an add-on extra to the canonical ET loop of recombination and selection (Fig. 21.3). With the hybrid approach, local optimization is applied to each newly generated offspring to move it to a local optimum before injecting it into the population. ET is used to perform
global exploration among a population, while heuristic methods are used to perform local exploitation around chromosomes. Because of the complementary properties of ET and conventional heuristics, the hybrid approach often outperforms either method operating alone.

Fig. 21.3 General structure of hybrid evolutionary search

Another common form is to incorporate ET parameter adaptation. The behavior of ET is characterized by the balance between exploitation and exploration in the search space. The balance is strongly affected by the strategy parameters, such as the population size, the maximum generation, and the evolutionary operation probabilities. How to choose a value for each of the parameters, and how to find these values efficiently, are very important and promising areas of research on ET.
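The "local optimization as an add-on" structure of Fig. 21.3 can be sketched as follows: each newly created offspring is improved by a cheap local search before it is injected into the population. The bit-flip hill climber and the OneMax-style fitness are illustrative assumptions.

import random

def fitness(x):                                   # assumed toy fitness to maximize
    return sum(x)

def hill_climb(child, max_flips=20):
    """Local exploitation: repeated random bit flips, keeping improvements."""
    child = child[:]
    for _ in range(max_flips):
        i = random.randrange(len(child))
        neighbor = child[:]
        neighbor[i] = 1 - neighbor[i]
        if fitness(neighbor) > fitness(child):
            child = neighbor
    return child

def make_offspring(p1, p2):
    """Global exploration (one-cut-point crossover) followed by local search."""
    cut = random.randint(1, len(p1) - 1)
    child = p1[:cut] + p2[cut:]
    return hill_climb(child)                      # memetic step before insertion

if __name__ == "__main__":
    a = [random.randint(0, 1) for _ in range(16)]
    b = [random.randint(0, 1) for _ in range(16)]
    print(fitness(make_offspring(a, b)))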
Enhanced EA via Learning As early as 1988, Goldberg and Holland began to pay attention to the interaction between machine learning (ML) and ETs [35]. ETs are equipped with outstanding search ability, and they could store a large number of problem features data, search information data, and population information data during the search process. Benefiting from the outstanding analysis ability, ML could help analyze the huge amounts of data and improve the search ability of ETs. By combining ML and ETs, useful information can be extracted to help understand the search behavior and search global optimum. There are many ML techniques which can be combined with ETs: statistical methods (e.g., mean and variance), interpolation and regression, clustering analysis (CA), principle component analysis (PCA), artificial neural networks (ANN), orthogonal experimental design (OED), support vector machines (SVM), case-based reasoning, opposition-based learning (OBL), reinforcement learning, competitive learning, and Bayesian network. And the combination of ML and ETs has been proved to be useful in
Fig. 21.4 An illustration of the hybridization taxonomy (knowledge: a priori or dynamic; localization in the EA: population initialization, evaluation, evolutionary operations, population distribution, parameter adaptation; aim: speeding-up or quality improvement)
speeding up convergence and obtaining better solutions. ML-technique-enhanced ETs have been investigated by Jourdan et al. [36] and Zhang et al. [37]. Incorporating different ML techniques into different ETs can lead to different effects. Jourdan et al. [36] suggested that the hybridization of ML techniques and ETs can be classified according to knowledge type, aim, and localization, as shown in Fig. 21.4 [38]. Based on the five basic components of ETs discussed above, the hybridization with ML can occur in each process of the ET:

1. Population initialization (learning for the distribution and quality of initial solutions)
2. Evaluation (learning to reduce the evaluation process, replace the evaluation function, and avoid evaluations; see the sketch after this list)
3. Evolutionary operations (learning-enhanced evolutionary operators and learning-enhanced local searches)
4. Population distribution (learning to incorporate search experience, predict promising regions, and maintain population diversity)
5. Parameter adaptation
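As a small sketch of item 2 (learning to reduce the evaluation process), a cheap learned surrogate can answer most fitness queries so that the expensive true evaluation is reserved for the most promising individuals. The nearest-neighbor surrogate and the "expensive" function below are assumptions for illustration only.

import random

def expensive_fitness(x):                 # assumed costly simulation (minimize)
    return sum((xi - 1.0) ** 2 for xi in x)

class NearestNeighborSurrogate:
    """Memory of evaluated points; predicts fitness from the closest known point."""
    def __init__(self):
        self.archive = []                 # list of (point, true_fitness)

    def add(self, x, f):
        self.archive.append((list(x), f))

    def predict(self, x):
        point, f = min(self.archive,
                       key=lambda pf: sum((a - b) ** 2 for a, b in zip(pf[0], x)))
        return f

surrogate = NearestNeighborSurrogate()
population = [[random.uniform(-2, 2) for _ in range(3)] for _ in range(10)]
for ind in population:                    # seed the surrogate with true evaluations
    surrogate.add(ind, expensive_fitness(ind))

offspring = [[g + random.gauss(0, 0.3) for g in ind] for ind in population]
screened = sorted(offspring, key=surrogate.predict)[:3]   # pre-screen cheaply
best = min(screened, key=expensive_fitness)               # only a few true evaluations
print(expensive_fitness(best))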
21.1.6 Evolutionary Design Automation

Automation is the use of control systems, such as computers, to control industrial machinery and processes, replacing human operators. In the scope of industrialization, it is a step beyond mechanization. Whereas mechanization provided human operators with machinery to assist them with the physical requirements of work, automation greatly reduces the need for human sensory and mental requirements as well. Processes and systems can also be automated. Automation plays an increasingly important role in the global economy and in daily experience. Engineers strive to combine automated devices with mathematical and organizational tools to create complex systems for a rapidly expanding range of applications and human activities.

ETs have received considerable attention regarding their potential as novel optimization techniques, and there are three major advantages when applying ETs to design automation:

Adaptability: ETs do not have many mathematical requirements regarding the optimization problem. Due to their evolutionary nature, ETs search for solutions without regard to the specific internal workings of the problem. ETs can handle any kind of objective function and any kind of constraint (i.e., linear or nonlinear, defined on discrete, continuous, or mixed search spaces).

Robustness: The use of evolution operators makes ETs very effective at performing global search (in probability), while most conventional heuristics usually perform local search. Many studies have proven that EAs are more efficient and more robust at locating optimal solutions and reducing computational effort than other conventional heuristics.

Flexibility: ETs provide great flexibility to hybridize with domain-dependent heuristics to make an efficient implementation for a specific problem.

However, to exploit the benefits of an effective ET for solving design automation problems, it is usually necessary to examine whether we can build an effective evolutionary search with the encoding. Several principles were proposed to evaluate effectiveness [24]:
need for human sensory and mental requirements as well. Processes and systems can also be automated. Automation plays an increasingly important role in the global economy and in daily experience. Engineers strive to combine automated devices with mathematical and organizational tools to create complex systems for a rapidly expanding range of applications and human activities. ETs have received considerable attention regarding their potential as novel optimization techniques. There are three major advantages when applying ETs to design automation. Adaptability: ETs do not have many mathematical requirements regarding the optimization problem. Due to their evolutionary nature, ETs will search for solutions without regard to the specific internal workings of the problem. ETs can handle any kind of objective functions and any kind of constraints (i.e., linear or nonlinear, defined on discrete, continuous, or mixed search spaces). Robustness: The use of evolution operators makes ET very effective in performing global search (in probability), while most conventional heuristics usually perform local search. It has been proven by many studies that EA is more efficient and more robust at locating optimal solution and reducing computational effort than other conventional heuristics. Flexibility: ETs provide great flexibility to hybridize with domain-dependent heuristics to make an efficient implementation for a specific problem. However, to exploit the benefits of an effective ET to solve design automation problems, it is usually necessary to examine whether we can build an effective evolutionary search with the encoding. Several principles were proposed to evaluate effectiveness [24]:
21
492
M. Gen and L. Lin
1. Space: Individual should not require extravagant amounts of memory. 2. Time: The time required for executing evaluation, recombination on individuals should not be great. 3. Feasibility: An individual corresponds to a feasible solution. 4. Legality: Any permutation of an individual corresponds to a solution. 5. Completeness: Any solution has a corresponding individual. 6. Uniqueness: The mapping from individuals to solutions (decoding) may belong to one of the following three cases: 1-to-1 mapping, n-to-1 mapping, and 1-to-n mapping. The 1-to-1 mapping is the best among the three cases, and 1to-n mapping is the most undesirable. 7. Heritability: Offspring of simple recombination (i.e., onecut-point crossover) should correspond to solutions which combine the basic features of their parents. 8. Locality: A small change in an individual should imply a small change in its corresponding solution.
21.2
Evolutionary Techniques for Automation
Currently, for manufacturing, the purpose of automation has shifted from increasing productivity and reducing costs to broader issues, such as increasing quality and flexibility in the manufacturing process. For example, automobile and truck pistons used to be installed into engines manually. This is rapidly being transitioned to automated machine installation, because the error rate for manual installment was around 1–1.5%, but has been reduced to 0.00001% with automation. Hazardous operations, such as oil refining, manufacturing of industrial chemicals, and all forms of metal working, were always early contenders for automation. However, many applications of automation, such as optimizations and automatic controls, are formulated with complex structures, complex constraints, and multiple objects simultaneously, which makes the problem intractable to traditional approaches. In recent years, the evolutionary techniques community has turned much of its attention toward applications in industrial automation.
21.2.1 Advanced Planning and Scheduling Advanced planning and scheduling (APS) refers to a manufacturing process by which raw materials and production capacity are optimally allocated to meet demand. APS is especially well-suited to environments where simpler planning methods cannot adequately address complex trade-offs between competing priorities. However, most scheduling
problems of APS in the real world face inevitable constraints such as due date, capability, transportation cost, setup cost, and available resources. Generally speaking, we should obtain an effective “flexibility” not only as a response to the real complex environment but also to satisfy all the combinatorial constraints. Thus, how to formulate the complex problems of APS and find satisfactory solutions play an important role in manufacturing systems. Design: Design problems generally have to be decided only once in APS, and they form some kinds of fixed inputs for other subsequent problems, such as manufacturing and planning problems. Typically, design problems in an integrated manufacturing system include layout design, assembly planning, group technology, and so on. Planning: Compared to scheduling problems, planning problems have a longer horizon. Hence the demand information needed to find the optimal solution for a planning problem comes from forecasting rather than arrived orders. Process planning, operation sequencing, production planning, and assembly line balancing fall into the class of planning problems. Manufacturing: In manufacturing, there are two kinds of essential issues: scheduling and routing. Machining, assembly, material handling, and other manufacturing functions are performed to the best efficiency. Such kinds of problems are generally triggered by a new order. Distribution: The efficient distribution of products is very important in IMS, as transportation costs become a nonnegligible part of the purchase price of products in competitive markets. This efficiency is achieved through sophisticated logistic network design and efficient traffic routing. To find the optimal solutions in those fields gives rise to complex combinatorial optimization problems. Unfortunately, most of them fall into the class of NP-hard problems. Hence to find a satisfactory solution in an acceptable time span is crucial for the performance of APS. ETs have turned out to be potent methods to solve such kinds of optimization problems. In APS, the basic problems indicated in Fig. 21.5 are likely to use ETs. A flexible machining system is one of the forms of factory automation. The design of a flexible machining system (FMS) involves the layout of machine and workstations. The layout of machines in an FMS is typically determined by the type of material-handling devices used. Layout design is concerned with the optimum arrangement of physical facilities, such as departments or machines, in a certain area. Kusiak and Heragu (1987) wrote a survey paper on the machine layout problem [39]. Usually the design criterion is considered to be minimizing material-handling costs. Because of the combinatorial nature of the facility layout problem, the heuristic technique is the most promising approach for solving practical-size layout problems. The interest in application of ETs to facility layout design has been growing rapidly.
21
Nature-Inspired and Evolutionary Techniques for Automation
493
Advanced planning and scheduling
Design
Planning
Manufacturing
• Layout design • Group technology • Part and product design • Tool and fixture • Design
• Process planning • Operation sequencing • Advanced planning & scheduling • Production planning • Assembly planning
• Lot flow scheduling • Material handing & vehicle routing • Flow shop scheduling • Job shop scheduling • Flexible job shop scheduling • Assembly line balancing • Maintenance scheduling
Distribution
• Distribution systems design • Logistics network planning • Transportation planning • Location & routing planning • Vehicle routing & scheduling
Fig. 21.5 Basic problems in advanced planning and scheduling
Cohoon et al. proposed a distributed GA for the floorplan design problem [40]. Tam reported his experiences of applying genetic algorithms to the facility layout problem [41]. Tate and Smith applied GA to the shape-constrained unequal-area facility layout problem [42]. Gen and Cheng provided various GA approaches for the machine layout and facility layout problems [43]. Liu and Liu also proposed a modified multiobjective ant colony optimization (MoACO) algorithm [44]. Garcia-Hernandez et al. developed a new multi-method evolutionary algorithm for unequal-area facility layout problem, called Coral Reefs Optimization algorithm with Substrate Layers (CRO-sl), which increases the diversity of individual distribution in the population by designing several replication mechanisms suitable for improving the exploration of the searching space [45]. Recently Suer and Gen edited advanced development, analysis, and case studies for the cellular manufacturing systems [46]. The planning and scheduling of manufacturing systems always require resource capacity constraints, disjunctive constraints, and precedence constraints, owing to the tight due dates, multiple customer-specific orders, and flexible process strategies. Here, some hot topics of applications of ETs in APS are introduced. These models mainly support the integrated, constraint-based planning of the manufacturing system to reduce lead times, lower inventories, increase throughput, etc. The flexible jobshop scheduling problem (FJSP) is a generalization of the jobshop and parallel machine environment [47, 48], which provides a closer approximation to a wide range of real manufacturing systems. Kacem et al. proposed
the operations machine-based GA approach [49], which is based on a traditional representation known as the schemata theorem representation. Zhang and Gen proposed a multistage operation-based encoding for the FJSP [50]. Zhang et al. proposed an effective gene expression programming (eGEP) algorithm for the energy-efficient FJSP, which combines multigene representation, self-study, and unsupervised learning [51]. Shahsavari-Pour and Ghasemishabankareh designed a hybrid genetic algorithm and simulated annealing algorithm (nhGASA) for solving the multi-objective FJSP [52]. Kemmoe-Tchomte, Lamy, and Tchernev proposed an improved greedy randomized adaptive search procedure with a multi-level evolutionary local search (GRASP-mels) algorithm based on neighborhood structure optimization [53]. Xu et al. designed a teaching-learning-based optimization (TLBO) algorithm for the FJSP with fuzzy processing times [54]. Gen, Lin, and Ohwata recently surveyed hybrid evolutionary algorithms for fuzzy flexible job-shop scheduling problems [55]. The objective of the resource-constrained project scheduling problem (rcPSP) is to schedule activities such that precedence and resource constraints are obeyed and the makespan of the project is minimized. Gen and Cheng adopted priority-based encoding for the rcPSP [48]. In order to improve the effectiveness of the priority-based GA approach for an extended resource-constrained multiple project scheduling problem, Kim et al. combined priority dispatching rules with the priority-based encoding process [56]. To enhance the local search capability of the GA, Joshi et al. proposed a variable neighborhood search GA (vnsGA) [57]. Elsayed et al. designed a two-layer framework for adaptive selection of the
optimal evaluation method: the upper layer selects the optimization algorithm, and the lower layer selects the operator [58]. Cheng et al. proposed a fuzzy clustering differential evolution (FCDE) algorithm, which incorporates the fuzzy C-means clustering technique [59]. He et al. proposed a filter-and-fan approach with adaptive neighborhood switching (FFANS); it is a hybrid metaheuristic based on a single solution that combines local search, a multi-neighborhood filter-and-fan search, and an adaptive neighborhood switching step, and it adapts autonomously to the problem scale [60]. Tian et al. proposed a hybrid multi-objective EDA for robust resource-constrained project scheduling under uncertainty [61]. The advanced planning and scheduling (APS) model includes a range of capabilities, from finite capacity planning at the plant floor level through constraint-based planning to the latest applications of advanced logic for supply-chain planning and collaboration [62]. Several related works by Moon et al. [63] and Moon and Seo [64] have reported GA approaches especially for solving such APS problems. Progressing "from easy to difficult" and "from simple to complex", the core APS models are summarized in Fig. 21.6.
Fig. 21.6 Core advanced planning and scheduling models: the job-shop scheduling model (JSP), the flexible JSP model (fJSP), the integrated resource selection and operation sequence model, the integrated scheduling model, and the manufacturing and logistics model, each extending the previous one with additional objectives (minimizing makespan, balancing workloads, machine and plant transition times) and constraints (precedence constraints, alternative machines, lot size information, multi-plant chain, pickup and delivery)
21.2.2 Assembly Line System
From ancient times to the modern day, the concept of assembly has naturally evolved a great deal. The most important milestone in assembly is the invention of assembly lines (ALs). In 1913, Henry Ford completely changed the general
concept of assembly by introducing ALs in automobile manufacturing for the first time. He was the first to introduce a moving belt in a factory, where the workers were able to build the famous Model T cars one piece at a time instead of one car at a time. Since then, the AL concept has revolutionized the way products are made while reducing the cost of production. Over the years, the design of efficient assembly lines has received considerable attention from both companies and academics. A well-known assembly design problem is assembly line balancing (ALB), which deals with the allocation of tasks among workstations so that a given objective function is optimized. As shown in Fig. 21.7, ALB models can be classified into two groups based on the model structure. The first group [65] includes single-model assembly line balancing (smALB), multi-model assembly line balancing (muALB), and mixed-model assembly line balancing (mALB); the second group [66] includes simple assembly line balancing (sALB) and general assembly line balancing (gALB).
Fig. 21.7 Classification of assembly line balancing models based on problem structure: according to ALB model type, single-model ALB (smALB), multi-model ALB (muALB), and mixed-model ALB (mALB); according to ALB problem structure, simple ALB (sALB) and general ALB (gALB)
The smALB model involves only one product. Additionally, several versions of ALB problems arise by varying the objective function [65]. Type-F is an objective-independent problem, which is to establish whether or not a feasible line balance exists. Type-1 and Type-2 have a dual relationship: the first tries to minimize the number of stations for a given cycle time, and the second tries to minimize the cycle time for a given number of stations. Type-E is the most general problem version, which tries to maximize the line efficiency by simultaneously minimizing the cycle time and the number of stations. Finally, Type-3, Type-4, and Type-5 correspond to maximization of workload smoothness, maximization of work
relatedness, and multiple objectives combining Type-3 and Type-4, respectively [66–68]. Falkenauer and Delchambre [69] were the first to solve ALB with GAs. Following Falkenauer and Delchambre [69], the application of GAs for solving ALB models has been studied by many researchers. For example, Zhang and Gen [70] used a multi-objective GA to solve mixed-model assembly lines. Kucukkoc and Zhang [71] used a mathematical formulation to model the parallel two-sided assembly line balancing problem and proposed a new genetic algorithm (GA)-based approach to solve it. Triki et al. [72] proposed an innovative hybrid genetic algorithm (HGA) scheme, hybridized with a local search procedure, to solve the task-restricted assembly line balancing (TRALB) problem. Quyen et al. [73] proposed a hybrid genetic algorithm (HGA) that includes two stages to solve the resource-constrained assembly line balancing (RCALB) problem. Defersha and Mohebalizadehgashti [74] proposed a multi-phased linear programming embedded genetic algorithm to solve the mixed-model assembly line balancing problem. Due to very different conditions in real manufacturing environments, assembly line systems show a great diversity. In particular, ALs can be distinguished with regard to the line layout, such as serial and U-shaped lines. In U-shaped lines, the stations are arranged along a rather narrow U, where both legs are close together, and the entrance and the exit of the line are in the same position. Stations in between those legs may work at two segments of the line facing each other simultaneously. This means that the workpieces can revisit the same station at a later stage in the production process without changing the flow direction of the line. This can result in a better balance of station loads due to a larger number of task-station combinations, where operators can handle adjacent tasks as well as tasks on both sides of the U-shaped line. Zha and Yu [75] proposed a new hybrid algorithm of
ant colony optimization and filtered beam search for solving the U-line balancing and rebalancing problem. In the process of constructing a path, each ant chooses the best next step by global and local evaluation with a given probability. Alavidoost et al. [76] utilized a hybrid multi-objective genetic algorithm, in which a one-fifth success rule was deployed, to solve U-shaped assembly line balancing problems with fuzzy task processing times. Sahin and Kellegoz [77] designed grouping genetic and simulated annealing algorithms to solve U-shaped assembly line balancing problems. Zhang et al. [78] modified ant colony optimization, inspired by the process of simulated annealing, to reduce the possibility of being trapped in a local optimum when solving the U-type assembly line balancing (uALB) problem. In the past decades, robots have been extensively used in assembly lines, called robotic assembly lines (rALs). Usually, specific tooling is developed to perform the activities needed at each station. Such tooling is attached to the robot at the station. In order to avoid the wasted time required for tool changes, the design of the tooling can take place only after the line has been balanced. Different robot types may exist at the assembly facility. Each robot type may have different capabilities and efficiencies for various elements of the assembly tasks. Hence, allocating the most appropriate robot to each station is critical for the performance of robotic assembly lines. Unlike manual assembly lines, where actual processing times for performing tasks vary considerably and an optimal balance is rather of theoretical importance, the performance of rALs depends strictly on the quality of their balance. As an extension of sALB, robotic assembly line balancing (RALB) is also NP-hard. Rubinovitz and Bukchin were the first to formulate the RALB model as one of allocating equal amounts of work to the stations on the line while assigning the most efficient
robot type from the given set of available robots to each workstation [79]. Their objective is to minimize the number of workstations for a given cycle time. Gao et al. proposed a genetic algorithm (GA) hybridized with local search for solving the type II robotic assembly line balancing (RALB-II) problem, in which five local search procedures are developed to enhance the search ability of the GA [80]. Yoosefelahi et al. utilized three versions of multi-objective evolution strategies (MOES) to tackle the multi-objective RALB problem [81]. Janardhanan and Ponnambalam [82] utilized particle swarm optimization (PSO) to optimize robotic assembly line balancing (RALB) problems with the objective of maximizing line efficiency. Nilakantan et al. proposed two bio-inspired search algorithms to solve the RALB problem [83].
Fig. 21.8 A simple network of three stages in a logistics network: suppliers s ∈ S to plants k ∈ K (first stage), plants to distribution centers j ∈ J (second stage), and DCs to customers i ∈ I (third stage)
21.2.3 Logistics and Transportation
Logistics is the last frontier for cost reduction and the third profit source of enterprises [84]. The interest in developing effective logistics system design models and efficient optimization methods has been stimulated by the high costs of logistics and the potential for securing considerable savings. Logistics network design is one of the most comprehensive strategic decision problems that needs to be optimized for the long-term efficient operation of the whole supply chain. It determines the number, location, capacity, and type of plants, warehouses, and distribution centers (DCs) to be used. It also establishes distribution channels and the amount of materials and items to consume, produce, and ship from suppliers to customers. The logistics network models cover a wide range of formulations, ranging from simple single-product types to complex multi-product ones, and from linear deterministic models to complex nonlinear stochastic ones. A simple network of three stages in a logistics network is illustrated in Fig. 21.8. During the last decades, there has been a growing interest in using ETs to solve a variety of single- and multi-objective problems in logistics systems that are combinatorial and NP-hard. Based on the model structure, the core network models of logistics are summarized in Fig. 21.9.
Fig. 21.9 The core models of logistics network design: basic logistics models (linear, generalized, capacitated, fixed-charge, exclusionary side constrained, and multiobjective logistics models), location allocation models (capacitated location allocation model and location allocation model with obstacles), and multi-stage logistics models (two-stage, multi-stage, and flexible logistics models, the integrated logistics model with multi-time period and inventory, and reverse logistics models)
The efficient and effective movement of goods from raw-material sites to processing facilities, component fabrication plants, finished-goods assembly plants, DCs, retailers, and customers is critical in today's competitive environment [85]. Within individual industries, the logistics share of the cost of a finished item delivered to the final consumer can easily be substantial. Logistics entails not only the movement of goods but also decisions about (1) where to produce, what to produce, and how much to produce at each site, (2) what quantity of goods to hold in inventory at each stage of the process, (3) how to share information among parties in the process, and finally, (4)
where to locate plants and distribution centers. Logistics problems can be classified into location problems, allocation problems, and location-allocation problems. The location-allocation problem (LAP) is to locate a group of new facilities. The goal of the LAP is to minimize the transportation cost from facilities to customers while satisfying the minimal requirements of the customers in an area of interest [86]. Since the LAP was proposed by Cooper in 1963 [87], it has attracted a lot of attention. The basic LAP is usually composed of three components: facilities, customers, and locations. Different types of these components lead to different types of LAP. Scaparra and Scutella proposed a unified framework for combining and relating the three components [88]. The methods for solving LAPs can be divided into three types: exact methods, heuristic methods, and meta-heuristic methods. As one class of meta-heuristic methods, ETs show good performance in solving LAPs, because the LAP is NP-hard and ETs are among the most efficient approaches for NP-hard problems. Based on GA and combined with some traditional optimization techniques, Gong et al. proposed a hybrid evolutionary method to solve the obstacle location-allocation problem [89]. Vlachos used a modified ant colony system (ACS), with special treatment of the pheromone, to solve the capacitated location-allocation problem on a line (CLAAL), which is one type of location-allocation problem [90]. Chen et al. investigated LA problems with various restricted regional constraints and proposed a hybrid evolutionary approach named immune algorithm with particle swarm optimization (IA-PSO) to solve them [91]. Pitakaso et al. modified the crossover process of basic DE, integrated a PSO algorithm, and proposed a modified DE algorithm to explore solutions of the multi-objective, source and stage location-allocation problem (MoSS-LAP) [92]. The transportation problem (TP) is a well-known basic network problem, which was originally proposed by Hitchcock [93]. The objective is to find the way of transporting
homogeneous products from several sources to several destinations so that the total cost is minimized. For some real-world applications, the transportation problem is often extended to satisfy several additional constraints or is performed in several stages. Logistics is often defined as the art of bringing the right amount of the right product to the right place at the right time, and it usually refers to supply chain problems (Tilanus [94]). The efficiency of the logistics system is influenced by many factors; one of them is to decide the number of DCs and to find good locations to be opened, in such a way that customer demand can be satisfied at minimum DC opening cost and minimum shipping cost. ETs have been successfully applied to logistics network models. Michalewicz et al. and Vignaux and Michalewicz first discussed the use of GAs for solving linear and nonlinear transportation problems [95, 96]. Syarif and Gen considered a production/distribution problem modeled as a two-stage transportation problem (tsTP) and proposed a hybrid genetic algorithm [97]. Gen et al. developed a priority-based genetic algorithm (priGA) with new decoding and encoding procedures considering the characteristics of the tsTP [98]. Altiparmak et al. extended the priGA to solve a single-product, multi-stage logistics design problem [99]. The objectives are minimization of the total cost of the supply chain, maximization of the customer service that can be rendered to
customers in terms of acceptable delivery time (coverage), and maximization of the capacity utilization balance for DCs (i.e., equity of utilization ratios). Furthermore, Altiparmak et al. also applied the priGA to solve a single-source, multi-product, multi-stage logistics design problem [100]. As an extended multi-stage logistics network model, Lee et al. applied the priGA to solve a multi-stage reverse logistics network (MRLN) problem, minimizing the total cost of reverse logistics, i.e., the shipping cost and the fixed costs of opening the disassembly centers and processing centers [101]. Although the traditional multistage logistics network and its applications have been very successful in both theory and business practice, over time some shortcomings of the traditional structure of logistics networks have come to light, making it unable to fit fast-changing competitive environments or to meet diversified customer demands very well. Today's energetic business competition presses enterprises to build up logistics networks productively and flexibly. At this point, DELL gives us some clear ideas on designing cost-effective logistics networks [102]. The company's entire direct-to-consumer business model, not just its logistics approach, gives it a remarkable edge over its competitors. Lin et al. proposed a direct path-based GA and priGA for solving flexible multistage logistics network (FMLN) problems (Fig. 21.10) [103, 104].
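As a rough illustration of the priority-based encoding idea used by priGA-type algorithms (a simplified sketch under assumed rules, not the exact procedure of [97, 98]; the supply, demand, and cost data are hypothetical), a chromosome assigns one priority to every source and destination, and decoding repeatedly matches the highest-priority node with its cheapest feasible counterpart:

def decode_priority(priority, supply, demand, cost):
    """priority has length len(supply) + len(demand); returns a shipment matrix."""
    supply, demand = supply[:], demand[:]
    m, n = len(supply), len(demand)
    ship = [[0] * n for _ in range(m)]
    prio = priority[:]
    while sum(demand) > 0:
        t = max(range(m + n), key=lambda v: prio[v])   # highest-priority node
        if t < m:                                      # node is a source i
            i = t
            j = min((j for j in range(n) if demand[j] > 0), key=lambda j: cost[i][j])
        else:                                          # node is a destination j
            j = t - m
            i = min((i for i in range(m) if supply[i] > 0), key=lambda i: cost[i][j])
        q = min(supply[i], demand[j])                  # ship as much as possible
        ship[i][j] += q
        supply[i] -= q
        demand[j] -= q
        if supply[i] == 0:
            prio[i] = 0                                # exhausted nodes leave the race
        if demand[j] == 0:
            prio[m + j] = 0
    return ship

# Hypothetical 2-source / 3-customer instance (total supply equals total demand).
cost = [[4, 6, 9], [5, 3, 7]]
print(decode_priority([3, 1, 5, 2, 4], [80, 70], [40, 60, 50], cost))

In this sketch any vector of positive priorities decodes to a feasible shipment plan, which is what makes such encodings attractive for genetic operators: no repair step is needed after crossover or mutation.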
Fig. 21.10 A flexible multistage logistics network with normal delivery through plants, DCs, and retailers to customers, plus direct shipment and direct delivery options
21.3 AGV Dispatching in Manufacturing System
Automated material handling has been called the key to integrated manufacturing. An integrated system is useless without a fully integrated, automated material handling system. In the manufacturing environment, there are many automated material handling possibilities. Currently, automated guided vehicle systems, which comprise automated guided vehicles (AGVs), are the state of the art and are often used to facilitate automatic storage and retrieval systems (AS/RS). Traditionally, AGV systems were mostly used in manufacturing systems. In manufacturing areas, AGVs are used to transport all types of materials related to the manufacturing process. The transportation network connects all stationary installations (e.g., machines) in the center. At the stations, pickup and delivery points are installed that operate as interfaces between the production/storage system and the transportation system of the center. At these points a load is transferred, for example by a conveyor, from the station to the AGV and vice versa. AGVs travel from one pickup and delivery point to another on fixed or free paths. Guide paths are determined by, for example, wires in the ground or markings on the floor. More recent technologies allow AGVs to operate without physical guide paths. Hwang et al. proposed an integrated model for simultaneously designing an end-of-aisle order-picking system and determining the unit load sizes of AGVs [105, 106].
21.3.1 Network Modeling for AGV Dispatching
In this subsection, we introduce the simultaneous scheduling and routing of AGVs in a flexible manufacturing system (FMS) [107]. An FMS environment requires a flexible and adaptable material handling system, and AGVs provide such a system. An AGV is a piece of material-handling equipment that travels on a network of guide paths. The FMS is composed of various cells, also called workstations (or machines), each with a specific operation such as milling, washing, or assembly. Each cell is connected to the guide path network by a pickup/delivery (P/D) point where pallets are transferred from/to the AGVs. Pallets of products are moved between the cells by the AGVs. The assumptions considered are as follows:
1. AGVs carry only one kind of product at a time.
2. A network of guide paths is defined in advance, and the guide paths have to pass through all pickup/delivery points.
3. The vehicles are assumed to travel at a constant speed.
4. The vehicles can only travel forward, not backward.
5. Since many vehicles travel on the guide path simultaneously, collisions are avoided by hardware and are not considered herein.
6. At each workstation, there is pickup space to store the processed material and delivery space to store the material for the next operation.
7. An operation can start any time after an AGV has delivered the required material, and the AGV can transport the processed material from the pickup point to the next delivery point at any time.
Definition 1. A node is defined as a task $T_{ij}$, which represents a transition task of the $j$-th process of job $J_i$, moving from the pickup point of machine $M_{i,j-1}$ to the delivery point of machine $M_{ij}$.
Definition 2. An arc can be defined by many decision variables, such as the capacity of AGVs, precedence constraints among the tasks, or costs of movement. Lin et al. defined an arc as a precedence constraint and assigned to it a transition time $c_{jj'}$ from the delivery point of machine $M_{ij}$ to the pickup point of machine $M_{i'j'}$.
Definition 3. We define the task precedence for each job; for example, the task precedence for three jobs is shown in Fig. 21.11.
Job J1: T11 → T12 → T13 → T14; Job J2: T21 → T22; Job J3: T31 → T32 → T33
Fig. 21.11 Illustration of the network structure of the example
The notation used in this chapter is summarized as follows.
Indices: $i, i'$: index of jobs, $i, i' = 1, 2, \dots, n$; $j, j'$: index of processes, $j, j' = 1, 2, \dots, n_i$.
Parameters: $n$: total number of jobs; $m$: total number of machines; $n_i$: total number of operations of job $i$; $o_{ij}$: the $j$-th operation of job $i$; $p_{ij}$: processing time of operation $o_{ij}$; $M_{ij}$: machine assigned to operation $o_{ij}$; $T_{ij}$: transition task for operation $o_{ij}$; $t_{ij}$: transition time from $M_{i,j-1}$ to $M_{ij}$.
Decision variables: $x_{ij}$: AGV number assigned to task $T_{ij}$; $t^S_{ij}$: starting time of task $T_{ij}$; $c^S_{ij}$: starting time of operation $o_{ij}$.
The objective functions are to minimize the time required to complete all jobs (i.e., the makespan) $t_{MS}$ and the number of AGVs $n_{AGV}$, and the problem can be formulated as follows:
$\min\; t_{MS} = \max_i \left\{ t^S_{i,n_i} + t_{M_{i,n_i},0} \right\}$  (21.8)
$\min\; n_{AGV} = \max_{i,j} \left\{ x_{ij} \right\}$  (21.9)
$\text{s.t.}\;\; c^S_{ij} - c^S_{i,j-1} \ge p_{i,j-1} + t_{ij}, \quad \forall i,\; j = 2, \dots, n_i$  (21.10)
$\bigl(c^S_{ij} - c^S_{i'j'} - p_{i'j'} + H\,|M_{ij} - M_{i'j'}| \ge 0\bigr) \;\vee\; \bigl(c^S_{i'j'} - c^S_{ij} - p_{ij} + H\,|M_{ij} - M_{i'j'}| \ge 0\bigr), \quad \forall (i,j), (i',j')$  (21.11)
$\bigl(t^S_{ij} - t^S_{i'j'} - t_{i'j'} + H\,|x_{ij} - x_{i'j'}| \ge 0\bigr) \;\vee\; \bigl(t^S_{i'j'} - t^S_{ij} - t_{ij} + H\,|x_{ij} - x_{i'j'}| \ge 0\bigr), \quad \forall (i,j), (i',j')$  (21.12)
$c^S_{ij} \ge t^S_{i,j+1} - p_{ij}, \quad \forall i, j$  (21.13)
$x_{ij} \ge 0, \quad \forall i, j$  (21.14)
$t^S_{ij} \ge 0, \quad \forall i, j$  (21.15)
where $H$ is a very large number, and $t_{M_{i,n_i},0}$ is the transition time from the pickup point of machine $M_{i,n_i}$ to the delivery point of the loading/unloading station. Constraint (21.10) describes the operation precedence constraints. In (21.11), (21.12), and (21.13), since one or the other inequality must hold, each is called a disjunctive constraint; they represent the operation non-overlapping constraint (21.11) and the AGV non-overlapping constraints (21.12) and (21.13).
21.3.2 A Priority-Based GA
For solving the AGV dispatching problem in an FMS, a special difficulty arises from the facts that (1) the task sequencing is an NP-hard problem, and (2) a random sequence of AGV dispatching usually does not satisfy the operation precedence constraints and routing constraints. First, we give a priority-based encoding method, an indirect approach that encodes guiding information used to construct a sequence of all tasks. As is known, a gene in a chromosome is characterized by two factors: the locus, that is, the position of the gene within the structure of the chromosome, and the allele, that is, the value the gene takes. In this encoding method, the position of a gene represents the task ID (mapped to the tasks in Fig. 21.11), and its value represents the priority of that task when constructing a sequence among the candidates. A feasible sequence can be uniquely determined from this encoding with consideration of the operation precedence constraints. An example of a generated chromosome and its decoded task sequence is shown in Fig. 21.12 for the network structure of Fig. 21.11.
Task ID: 1 2 3 4 5 6 7 8 9; Priority: 1 5 7 2 6 8 3 9 4; decoded task sequence: T11 → T12 → T13 → T14 → T21 → T22 → T31 → T32 → T33
Fig. 21.12 Example generated chromosome and its decoded task sequence
After generating the task sequence, we separate the tasks into several groups for assignment to different AGVs. We find the breakpoints, i.e., the tasks that are the final transport of a job i from the pickup point of operation $O_{i,n_i}$ to the delivery point of loading/unloading, and then cut the task sequence at these breakpoints. An example of grouping, using the chromosome of Fig. 21.12, is shown as follows:
AGV1: T11 → T12 → T13 → T14; AGV2: T21 → T22; AGV3: T31 → T32 → T33.
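A compact sketch of the two steps just described, priority-based decoding followed by grouping at breakpoints, is given below. The selection rule (pick the highest-priority eligible task) and the job data are illustrative assumptions only; the authors' exact procedures are detailed in [108, 109].

jobs = {1: ["T11", "T12", "T13", "T14"], 2: ["T21", "T22"], 3: ["T31", "T32", "T33"]}
priority = {"T11": 1, "T12": 5, "T13": 7, "T14": 2,
            "T21": 6, "T22": 8, "T31": 3, "T32": 9, "T33": 4}

def decode(jobs, priority):
    """Build a task sequence that respects the within-job precedence T_i1 -> T_i2 -> ..."""
    next_idx = {j: 0 for j in jobs}          # next unscheduled transport task of each job
    sequence = []
    while any(next_idx[j] < len(ops) for j, ops in jobs.items()):
        eligible = [ops[next_idx[j]] for j, ops in jobs.items() if next_idx[j] < len(ops)]
        task = max(eligible, key=lambda t: priority[t])   # assumed selection rule
        sequence.append(task)
        next_idx[int(task[1])] += 1          # job index encoded in the task name
    return sequence

def group_by_breakpoints(sequence, jobs):
    """Cut the sequence after each job's final transport task; one AGV per group."""
    finals = {ops[-1] for ops in jobs.values()}
    groups, current = [], []
    for task in sequence:
        current.append(task)
        if task in finals:
            groups.append(current)
            current = []
    if current:
        groups.append(current)
    return groups

seq = decode(jobs, priority)
for agv, tasks in enumerate(group_by_breakpoints(seq, jobs), start=1):
    print("AGV%d:" % agv, " -> ".join(tasks))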
As genetic operators, we combine a weight mapping crossover (WMX), insertion mutation, and an immigration operator, based on the characteristics of this representation, and adopt an interactive adaptive-weight fitness assignment mechanism that assigns weights to each objective and combines the weighted objectives into a single objective function. The detailed procedures are shown in [108, 109].
21.3.3 Case Study of AGV Dispatching
For evaluating the efficiency of the suggested AGV dispatching algorithm in a case study, a simulation program was developed using Java on a Pentium IV processor (3.2 GHz clock). The detailed test data are given by Yang [110] and Kim et al. [111]. As an example of the facility layout of the AGV model in the FMS, we consider the simple model shown in Fig. 21.13. Note that, although the network of guide paths is unidirectional, traveling from the pickup point (PP) to the delivery point (DP) on the same machine would otherwise require a very large transition time, which is unrealistic in practice; therefore, we defined an inside cycle for each machine so that the transition time from P to D equals that from D to P. A routing example for carrying out job J1 using one AGV is given in Fig. 21.14.
Fig. 21.13 Example of layout of facility
Fig. 21.14 A routing example for carrying out job J1
GA parameter settings were taken as follows: population size, popSize = 20; crossover probability, pC = 0.70; mutation
probability, pM = 0.50; immigration rate, μ = 0.15. In the FMS case study, ten jobs are to be scheduled on five machines. The maximum number of processes for the operations is four. The detailed data sets are shown in [112]. We can draw a network based on the precedence constraints among the tasks {Tij} of the case study. The best result is shown in Fig. 21.15: the final time required to complete all jobs (i.e., the makespan) is 574, and four AGVs are used. Figure 21.15 shows the result on a Gantt chart. As discussed above, the AGV dispatching problem is difficult to solve by conventional heuristics. Adaptability, robustness, and flexibility make EAs very effective for such automation systems.
AGV1: T11 → T12 → T41 → T81 → T91 → T82 → T92 → T83 → T84; AGV2: T21 → T41 → T12 → T15 → T10,2 → T52 → T71 → T44; AGV3: T61 → T62 → T63 → T64 → T43 → T72; AGV4: T31 → T32 → T10,1 → T33 → T13 → T10,3 → T93 (makespan tMS = 574)
Fig. 21.15 Gantt chart of the schedule of the case study considering AGV routing
21.4 Robot-Based Assembly Line System
Assembly lines are flow-oriented production systems which are still typical in the industrial production of high-quantity standardized commodities and are even gaining importance in low-volume production of customized products. Usually, specific tooling is developed to perform the activities needed at each station. Such tooling is attached to the robot at the station. In order to avoid the wasted time required for tool change, the design of the tooling can take place only after the line has been balanced. Different robot types may exist at the assembly facility. Each robot type may have different capabilities and efficiencies for various elements of the assembly tasks. Hence, allocating the most appropriate robot for each station is critical for the performance of robotic assembly lines.
21.4.1 Assembly Line Balancing Problems
This problem concerns how to assign the tasks to stations and how to allocate the available robots to the stations in order to minimize the cycle time under the precedence-relationship constraints. Let us consider a simple example to describe the problem, in which ten tasks are to be assigned to four workstations, and four robots are to be equipped on the four stations. Figure 21.16 shows the precedence constraints for the ten tasks, and Table 21.1 gives the processing time for each of the tasks processed by each robot. We show a feasible solution for this example in Fig. 21.17. The balancing chart for the solution can be drawn to analyze it: Fig. 21.17 shows that the idle time of stations 1–3 is very large, which means that this line is not well balanced for production. In the real world, an assembly line is not used to produce just one unit of the product; it should produce several units. So we give the Gantt chart for three units to analyze the solution, as shown in Fig. 21.18.
Fig. 21.16 Precedence graph of the example problem

Table 21.1 Data for the example (Suc(i): successor of task i; R1–R4: processing time of task i on robots 1–4)
i  Suc(i)  R1  R2  R3  R4
1  4       17  22  19  13
2  4       21  22  16  20
3  5       12  25  27  15
4  6       21  21  19  16
5  10      31  25  26  22
6  8       28  18  20  21
7  8       42  28  23  34
8  10      27  33  40  25
9  10      19  13  17  34
10 –       26  27  35  26

Fig. 21.17 A feasible solution for the example (stn: workstation no., rbn: robot no.)
Fig. 21.18 Gantt chart for producing three units

21.4.2 Robot-Based Assembly Line Model
The following assumptions are stated to clarify the setting in which the problem arises:
A1. The precedence relationships among assembly activities are known and invariable.
A2. The duration of an activity is deterministic. Activities cannot be subdivided.
A3. The duration of an activity depends on the assigned robot.
A4. There are no limitations on the assignment of an activity or a robot to any station. If a task cannot be processed on a robot, the assembly time of the task on that robot is set to a very large number.
A5. A single robot is assigned to each station.
A6. Material handling, loading, and unloading times, as well as setup and tool changing times, are negligible or are included in the activity times. This assumption
is realistic on a single-model assembly line that works on the single product for which it is balanced. Tooling on such a robotic line is usually designed such that tool changes are minimized within a station. If a tool change or another type of setup activity is necessary, it can be included in the activity time, since the transfer lot size on such a line is a single product.
A7. The number of workstations is determined by the number of robots, since the problem aims to maximize the productivity by using all robots at hand.
A8. The line is balanced for a single product.
The notation used in this section can be summarized as follows.
Indices: $i, j$: index of assembly tasks, $i, j = 1, 2, \dots, n$; $k$: index of workstations, $k = 1, 2, \dots, m$; $l$: index of robots, $l = 1, 2, \dots, m$.
Parameters: $n$: total number of assembly tasks; $m$: total number of workstations (robots); $t_{il}$: processing time of the $i$-th task by robot $l$; $pre(i)$: the set of predecessors of task $i$ in the precedence diagram.
Decision variables:
$x_{ik} = 1$ if task $i$ is assigned to workstation $k$, and 0 otherwise;
$y_{kl} = 1$ if robot $l$ is allocated to workstation $k$, and 0 otherwise.
Problem formulation: the rALB-II problem is then formulated as the following zero-one integer programming (0–1 IP) model:
$\min\; c = \max_{1 \le k \le m} \sum_{i=1}^{n} \sum_{l=1}^{m} t_{il}\, x_{ik}\, y_{kl}$  (21.10)
s.t.
$\sum_{k=1}^{m} k\, x_{ik} - \sum_{k=1}^{m} k\, x_{jk} \le 0, \quad \forall i \in pre(j),\; \forall j$  (21.11)
$\sum_{k=1}^{m} x_{ik} = 1, \quad \forall i$  (21.12)
$\sum_{l=1}^{m} y_{kl} = 1, \quad \forall k$  (21.13)
$\sum_{k=1}^{m} y_{kl} = 1, \quad \forall l$  (21.14)
$x_{ik} \in \{0, 1\}, \quad \forall i, k$  (21.15)
$y_{kl} \in \{0, 1\}, \quad \forall k, l$  (21.16)
The objective (21.10) is to minimize the cycle time, $c$. Constraint (21.11) represents the precedence constraints: it ensures that, for each pair of assembly activities with a precedence relation, the preceding activity cannot be assigned to a later station than its successor. Constraint (21.12) ensures that each task is assigned to exactly one station. Constraint (21.13) ensures that each station is equipped with one robot. Constraint (21.14) ensures that each robot can only be assigned to one station.
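The objective (21.10) is easy to evaluate for a candidate assignment, since the cycle time is simply the largest workload over the stations. The following sketch computes it directly from the decision matrices; the 4-task, 2-station instance is hypothetical.

def cycle_time(t, x, y):
    """t[i][l]: time of task i on robot l; x[i][k] = 1 if task i is on station k;
    y[k][l] = 1 if robot l is allocated to station k (cf. Eq. (21.10))."""
    n, m = len(x), len(y)
    return max(
        sum(t[i][l] * x[i][k] * y[k][l] for i in range(n) for l in range(m))
        for k in range(m)
    )

# Tiny hypothetical instance: 4 tasks, 2 stations/robots.
t = [[5, 7], [6, 4], [3, 8], [9, 2]]
x = [[1, 0], [1, 0], [0, 1], [0, 1]]   # tasks 1-2 on station 1, tasks 3-4 on station 2
y = [[1, 0], [0, 1]]                    # robot 1 on station 1, robot 2 on station 2
print(cycle_time(t, x, y))              # station loads are 11 and 10 -> cycle time 11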
21.4.3 Evolutionary Algorithm Approaches
Tasan and Tunali noted that most of the problems involving the design and planning of manufacturing systems are combinatorial and NP-hard, and that a well-known manufacturing optimization problem is the assembly line balancing (ALB) problem [113]. Due to the complexity of the problem, a growing number of researchers have employed genetic algorithms (GAs). Zhang et al. proposed an effective multi-objective genetic algorithm (MoGA)-based approach for solving the multi-objective assembly line balancing (MoALB) problem with worker allocation [114]. Zhang and Gen reported an efficient multi-objective genetic algorithm (MoGA) for solving the mixed-model assembly line balancing (mALB) problem considering a demand ratio-based cycle time [115]. They proposed a generalized Pareto-based scale-independent fitness function (GP-SIFF) as the fitness function of each individual in the population at each generation:
$eval(S_i) = p(S_i) - q(S_i) + c, \quad i = 1, 2, \dots, popSize$  (21.17)
where $p(S_i)$ is the number of individuals that can be dominated by the individual $S_i$, and $q(S_i)$ is the number of individuals that can dominate the individual $S_i$ in the objective space. Also, $c$ is the number of all participant individuals. The case study of the multi-objective mALB problem is to minimize the cycle time of the assembly line based on the demand ratio of each model, to minimize the variation of workload, and to minimize the total worker cost. The solution process consists of five phases: Phase 1, combining all the models; Phase 2, creating a task sequence; Phase 3, assigning tasks to each station; Phase 4, assigning workers to each station; Phase 5, designing a schedule. The detailed calculation process, with an illustrative mALB example having ten tasks, four stations, four workers, and two models, is introduced in [115]. To improve the convergence performance of GP-SIFF, a new Pareto dominating and dominated relationship-based fitness function (PDDR-FF) was proposed by Zhang et al. [29]. The PDDR-FF is used to evaluate the individuals when solving multi-objective optimization problems:
$eval(S_i) = q(S_i) + \dfrac{1}{p(S_i) + 1}, \quad i = 1, 2, \dots, popSize$  (21.18)
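Both fitness functions are built from the same two dominance counts. The sketch below (assuming minimization objectives and a hypothetical bi-objective population) computes p(S), q(S), and the resulting GP-SIFF and PDDR-FF values:

def dominates(a, b):
    """a Pareto-dominates b, with all objectives to be minimized."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def gp_siff(objs):
    """GP-SIFF of Eq. (21.17): p(S) - q(S) + c, with c = population size."""
    c = len(objs)
    return [sum(dominates(s, o) for o in objs)          # p(S): individuals S dominates
            - sum(dominates(o, s) for o in objs) + c    # q(S): individuals dominating S
            for s in objs]

def pddr_ff(objs):
    """PDDR-FF of Eq. (21.18): q(S) + 1 / (p(S) + 1)."""
    return [sum(dominates(o, s) for o in objs)
            + 1.0 / (sum(dominates(s, o) for o in objs) + 1)
            for s in objs]

# Hypothetical bi-objective population (e.g., cycle time and workload variation).
pop = [(10, 4), (9, 6), (12, 3), (11, 5)]
print(gp_siff(pop))
print(pddr_ff(pop))

Note that under (21.18) every non-dominated individual obtains a value of at most 1, while every dominated individual obtains a value greater than 1, so the two groups are cleanly separated.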
The solution of the RALB problem involves an optimal assignment of robots to line stations and a balanced distribution of work among the stations. It aims at maximizing the production rate of the line
(Levitin et al. [116]). This is a typical combinatorial optimization model, and many nature-inspired and evolutionary techniques have been applied to find solutions to this problem. Gao et al. [80] reported an innovative genetic algorithm (IGA) hybridized with local search based on five different neighborhood structures. In order to enhance the search ability, the local search procedures work within the framework of the hybrid GA. The local search investigates neighbor solutions that have the potential to outperform the incumbent one and adjusts the neighborhood structures dynamically. The decoding procedure generates a feasible solution based on the task sequence and the robot assignment scheme expressed in a chromosome. Considering the precedence constraints, a task sequence is infeasible when a successor task appears in front of any of its predecessors. A reordering procedure is used to repair infeasible task sequences into feasible ones. The mixed order crossover consists of two different crossover methods: order crossover (OX) and partial-mapped crossover (PMX), which operate on the task sequence vector and the robot assignment vector, respectively. For the mutation operation, allele-based mutation is implemented for both the task sequence and robot assignment vectors. A solution of the robot-based assembly line balancing (rALB) problem can be represented by two integer vectors: the task sequence vector, v1, which contains a permutation of the assembly tasks ordered according to their technological precedence sequence, and the robot assignment vector, v2. The solution representation method is illustrated in Fig. 21.19. The balancing chart for the solution is drawn in Fig. 21.20. The detailed processes of decoding the task sequence and assigning robots to workstations are shown in [80, 117]. In the real world, an assembly line is not used to produce just one unit of the product; it should produce several units. So we give the Gantt chart for three units to analyze the solution, as shown in Fig. 21.21. By comparing it with the Gantt chart of the earlier feasible solution (Fig. 21.18), we can see that this solution reduces the waiting time on the line. This also means that the better solution improves the assembly line balance.
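The reordering repair mentioned above can be sketched as follows; the precedence sets are hypothetical, and this is only one simple way to realize such a repair, not the authors' exact routine: scan the sequence and emit each task only once all of its predecessors have already been emitted.

def repair(sequence, pre):
    """Return a precedence-feasible permutation derived from `sequence`."""
    emitted, result, pending = set(), [], list(sequence)
    while pending:
        for idx, task in enumerate(pending):
            if pre.get(task, set()) <= emitted:     # all predecessors already placed
                result.append(task)
                emitted.add(task)
                del pending[idx]
                break
        else:
            raise ValueError("cyclic precedence constraints")
    return result

# Hypothetical precedence relations: task 4 needs 1 and 2, task 5 needs 3, etc.
pre = {4: {1, 2}, 5: {3}, 6: {4}, 7: {5, 6}}
print(repair([4, 7, 1, 5, 2, 3, 6], pre))   # -> [1, 2, 4, 3, 5, 6, 7]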
Fig. 21.19 Solution representation of a sample problem. Phase 1, task sequence (v1): 2 1 3 4 9 6 5 7 8 10, grouped into stations S1–S4; Phase 2, robot assignment (v2): robots 1–4 allocated to stations 1–4
21.4.4 Case Study of Robot-Based Assembly Line Model
An example of the RALB problem (25 tasks, 6 robots, 6 workstations), Scholl-93 (modified), is shown as follows. The test dataset of the example is shown in Table 21.2, and the precedence graph of the example is shown in Fig. 21.22. The best solution, with the task sequence, breakpoints, and robot assignment, is shown in Fig. 21.23, and the chart of the cycle time is shown in Fig. 21.24.
Fig. 21.20 The balancing chart of the best solution (stn: workstation no., rbn: robot no.)
Fig. 21.21 Gantt chart for producing three units
Table 21.2 Dataset of the RALB example (succ(i): successors of task i; R1–R6: processing time of task i on robots 1–6)
i   succ(i)      R1   R2   R3   R4   R5   R6
1   3            87   62   42   60   44   76
2   3            67   47   42   45   53   100
3   4            82   58   54   40   61   60
4   5, 8         182  58   62   60   55   100
5   6            71   47   28   57   62   76
6   7, 10        139  48   51   73   61   117
7   11, 12       98   99   44   49   59   82
8   9, 11        70   40   33   29   36   52
9   10, 13       60   114  47   72   63   93
10  –            112  67   85   63   49   86
11  13           51   35   41   44   85   69
12  15           79   39   50   80   67   95
13  14           57   47   56   85   41   49
14  16, 19, 20   139  65   40   38   87   105
15  17, 22       95   63   42   65   61   167
16  18           54   48   51   34   71   133
17  18, 23       71   28   35   29   32   41
18  25           112  29   49   58   84   69
19  22           109  47   38   37   52   69
20  21, 25       63   45   39   43   36   57
21  22, 24       75   68   45   79   84   83
22  –            87   36   74   29   82   109
23  25           58   36   55   38   42   107
24  –            44   54   23   21   36   71
25  –            79   64   48   35   48   97

Fig. 21.22 Precedence graph of example problem
Fig. 21.23 The best solution with the task sequence, breakpoints, and robot assignment
Fig. 21.24 The chart of cycle time
21.5 Conclusions
In this chapter, we introduced nature-inspired and evolutionary techniques (ETs) for treating automation problems in advanced planning and scheduling (APS), assembly line systems, and logistics and transportation. ETs are among the most popular metaheuristic methods for solving NP-hard combinatorial optimization problems. First, the background and development of nature-inspired techniques and ETs are briefly described. Then the basic schemes and working mechanisms of genetic algorithms (GAs), swarm intelligence, and other nature-inspired optimization algorithms are introduced. Multi-objective evolutionary algorithms for treating optimization problems with multiple and conflicting objectives are introduced. Features of evolutionary search, such as hybrid evolutionary search, EAs enhanced via learning, and evolutionary design automation, are introduced. Next, the various applications based on ETs for solving nonlinear/combinatorial optimization problems in automation are surveyed. In terms of APS, facility layout problems, planning and scheduling of manufacturing systems, and resource-constrained project scheduling problems are included. In terms of assembly line systems, various assembly line balancing (ALB) models are introduced. In terms of logistics and transportation, location allocation models and various types of logistics network models are surveyed with examples.
References
1. Rechenberg, I.: Evolution Strategie: Optimierung Technischer Systeme nach Prinzipien der Biologischen Evolution. Frommann-Holzboog, Stuttgart (1973) 2. Fogel, L.A., Walsh, M.: Artificial Intelligence Through Simulated Evolution. Wiley, New York (1966) 3. Holland, J.: Adaptation in Natural and Artificial Systems. University of Michigan Press, Ann Arbor (1975), MIT Press, Cambridge (1992) 4. Koza, J.R.: Genetic Programming. MIT Press, Cambridge (1992) 5. Michalewicz, Z.: Genetic Algorithm + Data Structures = Evolution Programs, 3rd edn. Springer, New York (1996)
6. Colorni, A., Dorigo, M., Maniezzo, V.: Distributed optimization by ant colonies. In: Proceedings of ECAL'91, European Conference on Artificial Life. Elsevier, Amsterdam (1991) 7. Kennedy, J., Eberhart, R.C.: Particle swarm optimization. In: Proc. of the IEEE International Conference on Neural Networks, vol. 4, pp. 1942–1948 (1995) 8. Shi, Y., Eberhart, R.C.: Parameter selection in particle swarm optimization. In: Proc. of Inter. Conf. on Evolutionary Prog., pp. 591–600 (1999) 9. Clerc, M.: The swarm and the queen: towards a deterministic and adaptive particle swarm optimization. In: Proc. of IEEE Conference of Evolutionary Computation, pp. 1951–1957 (1999) 10. van den Bergh, F., Engelbrecht, A.P.: Training product unit networks using cooperative particle swarm optimisers. In: IJCNN, pp. 126–131 (2001) 11. Storn, R., Price, K.: Differential evolution: a simple and efficient adaptive scheme for global optimization over continuous spaces. J. Glob. Optim. 23(1) (1995) 12. Hao, X.C., Lin, L., Gen, M., et al.: Effective estimation of distribution algorithm for stochastic job shop scheduling problem. Procedia Comput. Sci. 20, 102–107 (2013) 13. Matai, R., Singh, S.P., Mittal, M.: Modified simulated annealing based approach for multi objective facility layout problem. Int. J. Prod. Res. 51(13–14), 4273–4288 (2013) 14. Pourvaziri, H., Pierreval, H.: Combining metaheuristic search and simulation to deal with capacitated aisles in facility layout. Neurocomputing. 452, 443 (2020) 15. Schaffer, J.D.: Multiple objective optimization with vector evaluated genetic algorithms. In: Proc. 1st Int. Conf. on Genet. Algorithms, pp. 93–100 (1985) 16. Fonseca, C., Fleming, P.: An overview of evolutionary algorithms in multiobjective optimization. Evol. Comput. 3(1), 1–16 (1995) 17. Srinivas, N., Deb, K.: Multiobjective function optimization using nondominated sorting genetic algorithms. Evol. Comput. 3, 221–248 (1995) 18. He, Z., Yen, G.G., Zhang, J.: Fuzzy-based Pareto optimality for many-objective evolutionary algorithms. IEEE Trans. Evol. Comput. 18(2), 269–285 (2014) 19. Yuan, Y., Xu, H., Wang, B., et al.: A new dominance relation-based evolutionary algorithm for many-objective optimization. IEEE Trans. Evol. Comput. 20(1), 16–37 (2016) 20. Li, M., Yang, S., Liu, X.: Shift-based density estimation for Pareto-based algorithms in many-objective optimization. IEEE Trans. Evol. Comput. 18(3), 348–365 (2014) 21. Xiang, Y., Zhou, Y., Li, M., et al.: A vector angle-based evolutionary algorithm for unconstrained many-objective optimization. IEEE Trans. Evol. Comput. 21(1), 131–152 (2017) 22. Yang, S., Li, M., Liu, X., et al.: A grid-based evolutionary algorithm for many-objective optimization. IEEE Trans. Evol. Comput. 17(5), 721–736 (2013) 23. Ishibuchi, H., Murata, T.: A multiobjective genetic local search algorithm and its application to flowshop scheduling. IEEE Trans. Syst. Man Cybern. 28(3), 392–403 (1998)
506 24. Gen, M., Cheng, R., Lin, L.: Network Models and Optimization: Multiobjective Genetic Algorithm Approach. Springer, London (2008) 25. Zitzler, E., Thiele, L.: Multiobjective evolutionary algorithms: a comparative case study and the strength Pareto approach. IEEE Trans. Evol. Comput. 3(4), 257–271 (1999) 26. Zitzler, E., Thiele, L.: SPEA2: Improving the Strength Pareto Evolutionary Algorithm, Technical Report 103. Computer Engineering and Communication Networks Lab, Zurich (2001) 27. Deb, K.: Multi-Objective Optimization Using Evolutionary Algorithms. Wiley, New York (2001) 28. Jain, H., et al.: An evolutionary many-objective optimization algorithm using reference-point based nondominated sorting approach, Part II: Handling constraints and extending to an adaptive approach. IEEE Trans. Evol. Comput. 18(4), 602–622 (2014) 29. Zhang, W.Q., Gen, M., Jo, J.B.: Hybrid sampling strategy-based multiobjective evolutionary algorithm for process planning and scheduling problem. J. Intel. Manuf. 25(5), 881–897 (2014) 30. Zhang, W.Q., Yang, D.J., Zhang, G.H., Gen, M.: Hybrid multiobjective evolutionary algorithm with fast sampling strategy-based global search and route sequence difference-based local search for VRPTW. Expert Syst. Appl. 145, 1–16 (2020) 31. Ke, L., Zhang, Q., Battiti, R.: MOEA/D-ACO: a multiobjective evolutionary algorithm using decomposition and AntColony. IEEE Trans. Cybernet. 43(6), 1845–1859 (2013) 32. Ke, L., Dev, K., Zhang, Q., Kwong, S.: An evolutionary manyobjective optimization algorithm based on dominance and decomposition. IEEE Trans. Evol. Comput. 19(5), 694–716 (2015) 33. Bolc, L., Cytowski, J.: Search Methods for Artificial Intelligence. Academic, London (1992) 34. Booker, L.: Improving search in genetic algorithms. In: Davis, L. (ed.) Genetic Algorithms and Simulated Annealing. Morgan Kaufmann Publishers, Los Altos (1987) 35. Goldberg, D.E., Holland, J.H.: Genetic algorithms and machine learning. Mach. Learn. 3(2), 95–99 (1988) 36. Jourdan, L., Dhaenens, C., Talbi, E.G.: Using datamining techniques to help metaheuristics: a short survey. Hybrid Metaheuristics. 4030, 57–69 (2006) 37. Zhang, J., Zhan, Z.-H., Ying, L., et al.: Evolutionary computation meets machine learning: a survey. IEEE Comput. Intell. Mag. 6(4), 68–75 (2011) 38. Lin, L., Gen, M.: Hybrid evolutionary optimisation with learning for production scheduling: state-of-the-art survey on algorithms and applications. Int. J. Prod. Res. 56, 193 (2018) 39. Kusiak, A., Heragu, S.: The facility layout problem. Eur. J. Oper. Res. 29, 229–251 (1987) 40. Cohoon, J., Hegde, S., Martin, N.: Distributed genetic algorithms for the floor-plan design problem. IEEE Trans. Comput.-Aided Des. 10, 483–491 (1991) 41. Tam, K.: Genetic algorithms, function optimization, facility layout design. Eur. J. Oper. Res. 63, 322–346 (1992) 42. Tate, D., Smith, A.: Unequal-area facility layout by genetic search. IIE Trans. 27, 465–472 (1995) 43. Gen, M., Cheng, R.: Genetic Algorithms and Engineering Design. Wiley, New York (1997) 44. Liu, J., Liu, J.: Applying multi-objective ant colony optimization algorithm for solving the unequal area facility layout problems. Appl. Soft Comput. 74, 167 (2018) 45. Garcia-Hernandez, L., Garcia-Hernandez, J.A., Salas-Morera, L., et al.: Addressing unequal area facility layout problems with the coral reef optimization algorithm with substrate layers. Eng. Appl. Artif. Intell. 93, 103697 (2020) 46. Suer, G., Gen, M. (eds.): Cellular Manufacturing Systems: Recent Development, Analysis and Case Studies. 
Nova Science Publishers, New York, 615pp (2018)
M. Gen and L. Lin 47. Pinedo, M.: Scheduling Theory, Algorithms and Systems. Prentice-Hall, Upper Saddle River (2002) 48. Gen, M., Cheng, R.: Genetic Algorithms and Engineering Optimization. Wiley, New York (2000) 49. Kacem, I., Hammadi, S., Borne, P.: Approach by localization and multiobjective evolutionary optimization for flexible job-shop scheduling problems. IEEE Trans. Syst. Man Cybern. Part C. 32(1), 408–419 (2002) 50. Zhang, H., Gen, M.: Multistage-based genetic algorithm for flexible job-shop scheduling problem. J. Complex. Int. 11, 223–232 (2005) 51. Zhang, L., Tang, Q., Wu, Z., et al.: Mathematical modeling and evolutionary generation of rule sets for energy-efficient flexible job shops. Energy. 138, 210–227 (2017) 52. Shahsavari-Pour, P., Ghasemishabankareh, B.: A novel hybrid meta-heuristic algorithm for solving multi objective flexible job shop scheduling. J. Manuf. Syst. 32(4), 771–780 (2013) 53. Kemmoe-Tchomte, S., Lamy, D., Tchernev, N.: An effective multi-start multi-level evolutionary local search for the flexible job-shop problem. Eng. Appl. Artif. Intell. 62, 80–95 (2017) 54. Xu, Y., Wang, L., Wang, S.Y., et al.: An effective teachinglearning-based optimization algorithm for the flexible job-shop scheduling problem with fuzzy processing time. Neurocomputing. 148, 260–26848 (2015) 55. Gen, M., Lin, L., Ohwata, H.: Advances in hybrid evolutionary algorithms for fuzzy flexible job-shop scheduling: State-of-theArt Survey. In: Proc. of the 13th Inter. Conf. on Agents and Artificial Intelligence, pp. 562–573 (2021) 56. Kim, K.W., Yun, Y.S., Yoon, J.M., Gen, M., Yamazaki, G.: Hybrid genetic algorithm with adaptive abilities for resource-constrained multiple project scheduling. Comput. Ind. 56(2), 143–160 (2005) 57. Joshi, K., Jain, K., Bilolikar, V.: A VNS-GA-based hybrid metaheuristics for resource constrained project scheduling problem. Int. J. Oper. Res. 27(3), 437–449 (2016) 58. Elsayed, S., Sarker, R., Ray, T., et al.: Consolidated optimization algorithm for resource-constrained project scheduling problems. Inf. Sci. 418, 346–362 (2017) 59. Cheng, M.Y., Tran, D.H., Wu, Y.W.: Using a fuzzy clustering chaotic-based differential evolution with serial method to solve resource-constrained project scheduling problems. Autom. Constr. 37, 88–97 (2014) 60. He, J., Chen, X., Chen, X.: A filter-and-fan approach with adaptive neighborhood switching for resource-constrained project scheduling. Comput. Oper. Res. 71, 71–81 (2016) 61. Tian, J., Hao, X.C., Gen, M.: A hybrid multi-objective EDA for robust resource constraint project scheduling with uncertainty. Comput. Ind. Eng. 128, 317–326 (2019) 62. Turbide, D.: Advanced planning and scheduling (APS) systems, Midrange ERP Mag. (1998) 63. Moon, C., Kim, J.S., Gen, M.: Advanced planning and scheduling based on precedence and resource constraints for e-Plant chains. Int. J. Prod. Res. 42(15), 2941–2955 (2004) 64. Moon, C., Seo, Y.: Evolutionary algorithm for advanced process planning and scheduling in a multi-plant. Comput. Ind. Eng. 48(2), 311–325 (2005) 65. Scholl, A.: Balancing and Sequencing of Assembly Lines. Physica-Verlag, Heidelberg (1999) 66. Becker, C., Scholl, A.: A survey on problems and methods in generalized assembly line balancing. Eur. J. Oper. Res. 168, 694– 715 (2006) 67. Baybars, I.: A survey of exact algorithms for the simple assembly line balancing problem. Manag. Sci. 32, 909–932 (1986) 68. Kim, Y.K., Kim, Y.J., Kim, Y.H.: Genetic algorithms for assembly line balancing with various objectives. Comput. Ind. Eng. 
30(3), 397–409 (1996)
Mitsuo Gen received his PhD in Engineering from Kogakuin University in 1974 and a PhD in Informatics from Kyoto University in 2006. He was a faculty member at Ashikaga Institute of Technology from 1974 to 2003 and at Waseda University from 2003 to 2010, and held visiting positions at the University of California, Berkeley (August 1999 to March 2000), Texas A&M University (2000), Hanyang University in South Korea (2010-2012), and National Tsing Hua University in Taiwan (2012-2014). With colleagues, he published several books on genetic algorithms, including Network Models and Optimization (Springer, London, 2008) and Introduction to Evolutionary Algorithms (Springer, London, 2010).
Lin Lin received the PhD degree from Waseda University, Japan, in 2008. He is a Professor with the International School of Information Science and Engineering, Dalian University of Technology, China, and a Senior Researcher with the Fuzzy Logic Systems Institute, Japan. His research interests include computational intelligence and its applications in combinatorial optimization and pattern recognition.
22 Automating Prognostics and Prevention of Errors, Conflicts, and Disruptions

Xin W. Chen and Shimon Y. Nof
Contents
22.1   Definitions .......................................................... 509
22.2   Error Prognostics and Prevention Applications ........................ 512
22.2.1 Error Detection in Assembly and Inspection ........................... 512
22.2.2 Process Monitoring and Error Management .............................. 512
22.2.3 Hardware Testing Algorithms .......................................... 512
22.2.4 Error Detection in Software Design ................................... 514
22.2.5 Error Detection and Diagnostics in Discrete-Event Systems ............ 515
22.2.6 Error Detection and Disruption Prevention in Service Industries and Healthcare ... 516
22.2.7 Error Detection and Prevention Algorithms for Production and Service Automation ... 516
22.2.8 Error-Prevention Culture (EPC) ....................................... 517
22.3   Conflict Prognostics and Prevention Applications ..................... 517
22.4   Integrated Error and Conflict Prognostics and Prevention ............. 518
22.4.1 Active Middleware .................................................... 518
22.4.2 Conflict and Error Detection Model ................................... 519
22.4.3 Performance Measures ................................................. 520
22.5   Error Recovery, Conflict Resolutions, and Disruption Prevention ...... 520
22.5.1 Error Recovery ....................................................... 521
22.5.2 Conflict Resolution .................................................. 522
22.5.3 Disruption Prevention ................................................ 522
22.6   Emerging Trends ...................................................... 524
22.6.1 Decentralized and Agent-Based Error and Conflict Prognostics and Prevention ... 524
22.6.2 Intelligent Error and Conflict Prognostics and Prevention ............ 524
22.6.3 Graph and Network Theories ........................................... 524
22.6.4 Financial Models for Prognostics Economy ............................. 524
22.7   Conclusions and Emerging Trends ...................................... 527
References .................................................................. 527
Abstract
Errors, conflicts, and disruptions exist in many systems. A fundamental question from industry is whether they can be eliminated by automation, or whether automation can at least minimize their damage. The purpose of this chapter is to illustrate the theoretical background and applications of how to automatically prevent errors, conflicts, and disruptions with various devices, technologies, methods, and systems. Eight key functions to prevent errors and conflicts are identified, and their theoretical background and applications in both production and service are explained with examples. As systems and networks become larger and more complex, such as global enterprises, the Internet, and healthcare networks, error and conflict prognostics and prevention become more important and challenging; the focus is shifting from passive response to proactive and predictive prognostics and prevention. Additional theoretical developments and implementation efforts are needed to advance the prognostics and prevention of errors and conflicts in many real-world applications.

Keywords
Model check · Conflict resolution · Multiagent system · Error detection · Error recovery · Disruption propagation

X. W. Chen
Department of Industrial Engineering, Southern Illinois University, Edwardsville, IL, USA
e-mail: [email protected]

S. Y. Nof
PRISM Center and School of Industrial Engineering, Purdue University, West Lafayette, IN, USA
e-mail: [email protected]
© Springer Nature Switzerland AG 2023 S. Y. Nof (ed.), Springer Handbook of Automation, Springer Handbooks, https://doi.org/10.1007/978-3-030-96729-1_22
22.1 Definitions

All humans commit errors ("To err is human") and encounter conflicts and disruptions. In the context of automation, there are two main questions: (1) Does automation commit errors and encounter conflicts? (2) Can automation help humans
prevent errors, eliminate conflicts, and overcome or avoid disruptions? All human-made automation includes humancommitted errors and conflicts, for example, programming errors, design errors, and conflicts between human planners. Two automation systems, designed separately by different human teams, will encounter conflicts when they are expected to collaborate, for instance, the need for communication protocol standards to enable computers to interact automatically. Some errors, conflicts, and disruptions are inherent to automation, similar to all human-made creations, for instance, a robot mechanical structure that collapses under weight overload. An error is any input, output, or intermediate result that has occurred or will occur in a system and does not meet system specification, expectation, or comparison objective. A conflict is an inconsistency between different units’ goals, plans, tasks, or other activities in a system. A system usually has multiple units, some of which collaborate, cooperate, and/or coordinate to complete tasks. The most important difference between an error and a conflict is that an error involves only one unit, whereas a conflict involves two or more units in a system. An error at a unit may cause other errors or conflicts, for instance, a workstation that cannot provide the required number of products to an assembly line (a conflict) because one machine at the workstation breaks down (an error). Similarly, a conflict may cause other errors and conflicts, for instance, a machine that does not receive required products (an error) because the automated guided vehicles that carry the products collide when they move toward each other on the same path (a conflict). These phenomena, errors leading to other errors or conflicts, and conflicts leading to other errors or conflicts, are called error
and conflict propagation. Both errors and conflicts cause disruptions, which are disturbances to quality, reliability, and continuity of operations. Errors and conflicts are different but related. The definition of the two terms is often subject to the understanding and modeling of a system and its units. Mathematical equations can help define errors and conflicts. An error is defined as

$$\exists E\left[u_{r,i}(t)\right], \quad \text{if } \vartheta_i(t) \xrightarrow{\text{Dissatisfy}} con_r(t). \tag{22.1}$$

$E[u_{r,i}(t)]$ is an error, $u_i(t)$ is unit $i$ in a system at time $t$, $\vartheta_i(t)$ is unit $i$'s state at time $t$ that describes what has occurred with unit $i$ by time $t$, $con_r(t)$ denotes constraint $r$ in the system at time $t$, and $\xrightarrow{\text{Dissatisfy}}$ denotes that a constraint is not satisfied. Similarly, a conflict is defined as

$$\exists C\left[n_r(t)\right], \quad \text{if } \theta_i(t) \xrightarrow{\text{Dissatisfy}} con_r(t). \tag{22.2}$$
$C[n_r(t)]$ is a conflict and $n_r(t)$ is a network of units that need to satisfy $con_r(t)$ at time $t$. The use of constraints helps define errors and conflicts unambiguously. A constraint is the system specification, expectation, comparison objective, or acceptable difference between different units' goals, plans, tasks, or other activities. Tables 22.1 and 22.2 illustrate errors and conflicts in automation with some typical examples. There are also human errors and conflicts that exist in automation systems. Figure 22.1 describes the difference between errors and conflicts in pin insertion. This chapter provides a theoretical background and illustrates applications of how to prevent errors and conflicts automatically in production and service.
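To make the definitions concrete, the following minimal sketch (in Python; all unit names, states, and constraints are hypothetical) applies (22.1) and (22.2) directly: a violated single-unit constraint is flagged as an error, and a violated constraint over a network of units is flagged as a conflict.

```python
# Minimal sketch of definitions (22.1) and (22.2): a constraint violated by a
# single unit's state is an error; a constraint violated by the joint state of
# a network of units is a conflict. All names and values are illustrative.

def check(units, constraints):
    """units: {unit_id: state}; constraints: list of (scope, predicate)."""
    errors, conflicts = [], []
    for scope, predicate in constraints:
        states = [units[u] for u in scope]
        if not predicate(states):                 # constraint con_r is dissatisfied
            if len(scope) == 1:
                errors.append(scope[0])           # E[u_{r,i}(t)], Eq. (22.1)
            else:
                conflicts.append(tuple(scope))    # C[n_r(t)], Eq. (22.2)
    return errors, conflicts

# Hypothetical example: a workstation must produce at least 10 parts
# (single-unit constraint); two AGVs must not occupy the same path segment
# (network constraint involving two units).
units = {"workstation_1": {"parts": 8},
         "agv_1": {"segment": 4},
         "agv_2": {"segment": 4}}
constraints = [
    (["workstation_1"], lambda s: s[0]["parts"] >= 10),
    (["agv_1", "agv_2"], lambda s: s[0]["segment"] != s[1]["segment"]),
]
print(check(units, constraints))   # (['workstation_1'], [('agv_1', 'agv_2')])
```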
Table 22.1 Examples of errors and conflicts in production automation

Errors:
- A robot drops a circuit board while moving it between two locations
- A machine punches two holes on a metal sheet while only one is needed, because the size of the metal sheet is recognized incorrectly by the vision system
- A lathe stops processing a shaft due to power outage
- The server of a computer-integrated manufacturing system crashes due to high temperature
- A facility layout generated by a software program cannot be implemented due to irregular shapes

Conflicts:
- Two numerically controlled machines request help from the same operator at the same time
- Three different software packages are used to generate optimal schedules of jobs for a production facility; the schedules generated are different
- Two automated guided vehicles collide
- A DWG (drawing) file prepared by an engineer with AutoCAD cannot be opened by another engineer with the same software
- Overlapping workspace defined by two cooperating robots
Table 22.2 Examples of errors and conflicts in service automation

Errors:
- The engine of an airplane shuts down unexpectedly during the flight
- A patient's electronic medical records are accidentally deleted during system recovery
- A pacemaker stops working
- Traffic lights go off due to lightning
- A vending machine does not deliver drinks or snacks after the payment
- Automatic doors do not open
- An elevator stops between two floors
- A cellphone automatically initiates phone calls due to a software glitch

Conflicts:
- The time between two flights in an itinerary generated by an online booking system is too short for the transition from one flight to the other
- A ticket machine sells more tickets than the number of available seats
- An ATM machine dispenses $250 when a customer withdraws $260
- A translation software incorrectly interprets text
- Two surgeries are scheduled in the same room due to a glitch in a sensor that determines if the room is empty
Fig. 22.1 Errors and conflicts in a pin insertion task: (a) successful insertion; (b-f) unsuccessful insertions, which are (1) errors if the pin and the two other components are considered as one unit in a system, or (2) conflicts if the pin is one unit and the two other components are considered as another unit in the system [1]
Different terms have been used to describe the concept of errors and conflicts, for instance, failure [2-5], fault [4, 6], exception [7], and flaw [8]. Error and conflict are the most popular terms appearing in the literature [3, 4, 6, 9-15]. The related terms listed here are also useful descriptions of errors and conflicts. Depending on the context, some of these terms are interchangeable with error; some are interchangeable with conflict; and the rest refer to both error and conflict. Eight key functions have been identified as useful to prevent errors and conflicts automatically, as described below [16-19]. Functions 5-8 prevent errors and conflicts with the support of functions 1-4. Functions 6-8 prevent errors and conflicts by managing those that have already occurred. Function 5, prognostics, is the only function that actively determines which errors and conflicts will occur, and prevents them. All other seven functions are designed to manage errors and conflicts that have already occurred, although as a result they can prevent future errors and conflicts directly or indirectly. Figure 22.2 describes error and conflict propagation and their relationship with the eight functions:

1. Detection is a procedure to determine if an error or conflict has occurred.
2. Identification is a procedure to identify the observation variables most relevant to diagnosing an error or conflict; it answers the question: Which of them has already occurred?
3. Isolation is a procedure to determine the exact location of an error or conflict. Isolation provides more information than the identification function, in which only the observation variables associated with the error or conflict are determined. Isolation does not provide as much information as the diagnostics function, however, in which the type, magnitude, and time of the error or conflict are determined. Isolation answers the question: Where has an error or conflict occurred?
4. Diagnostics is a procedure to determine which error or conflict has occurred, what their specific characteristics are, or the cause of the observed out-of-control status.
Fig. 22.2 Error and conflict propagation and eight functions to prevent errors and conflicts
5. Prognostics is a procedure to prevent errors and conflicts through analysis and prediction of error and conflict propagation.
6. Error recovery is a procedure to remove or mitigate the effect of an error.
7. Conflict resolution is a procedure to resolve a conflict.
8. Exception handling is a procedure to manage exceptions. Exceptions are deviations from an ideal process that uses the available resources to achieve the task requirement (goal) in an optimal way.

There has been extensive research on the eight functions, except prognostics. Various models, methods, tools, and algorithms have been developed to automate the management of errors and conflicts in production and service. Their main limitation is that most of them are designed for a specific application area, or even a specific error or conflict. The main challenge of automating the management of errors
and conflicts is how to prevent them through prognostics, which is supported by the other seven functions and requires substantial research and developments.
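Because prognostics reasons about propagation (Fig. 22.2), a first step is often to ask which downstream units are at risk once an error is observed. The sketch below is a simplified illustration over an assumed dependency graph of units, not the chapter's formal model.

```python
from collections import deque

# Hypothetical dependency graph: an edge u -> v means unit v depends on the
# output of unit u, so an error at u can propagate to v (cf. Fig. 22.2).
DEPENDS = {
    "machine_1": ["workstation_A"],
    "machine_2": ["workstation_A"],
    "workstation_A": ["assembly_line"],
    "assembly_line": ["shipping"],
}

def at_risk(failed_unit, graph):
    """Breadth-first propagation: return downstream units that may be disrupted."""
    risk, queue = set(), deque([failed_unit])
    while queue:
        u = queue.popleft()
        for v in graph.get(u, []):
            if v not in risk:
                risk.add(v)
                queue.append(v)
    return risk

# An error at machine_1 puts the workstation, the line, and shipping at risk.
print(at_risk("machine_1", DEPENDS))   # {'workstation_A', 'assembly_line', 'shipping'}
```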
22.2 Error Prognostics and Prevention Applications
A variety of applications in production and service industries are related to error prognostics and prevention [20–24]. The majority of applications aim to detect and diagnose errors in design and production.
22.2.1 Error Detection in Assembly and Inspection As the first step to prevent errors, error detection has attracted much attention, especially in assembly and inspection; for instance, researchers [3] have studied an integrated sensorbased control system for a flexible assembly cell which includes error detection function. An error knowledge base has been developed to store information about previous errors that had occurred in assembly operations, and corresponding recovery programs which had been used to correct them. The knowledge base provides support for both error detection and recovery. In addition, a similar machine-learning approach to error detection and recovery in assembly has been discussed. To enable error recovery, failure diagnostics has been emphasized as a necessary step after the detection and before the recovery. It is noted that, in assembly, error detection and recovery are often integrated. Automatic inspection has been applied in various manufacturing processes to detect, identify, and isolate errors or defects with computer vision. It is mostly used to detect defects on printed circuit board [25–27] and dirt in paper pulps [28, 29]. The use of robots has enabled automatic inspection of hazardous materials [30] and in environments that human operators cannot access, e.g., pipelines [31]. Automatic inspection has also been adopted to detect errors in many other products such as fuel pellets [32], printing the contents of soft drink cans [33], oranges [34], aircraft components [35], and microdrills [36]. The key technologies involved in automatic inspection include but are not limited to computer or machine vision, feature extraction, and pattern recognition [37–39].
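As a toy illustration of the thresholding idea behind such vision-based inspection (real systems use calibrated imaging, feature extraction, and trained classifiers), the following sketch compares a captured image with a defect-free reference:

```python
# Toy automatic-inspection sketch: compare a captured grayscale image with a
# reference ("golden") image and flag large deviations as candidate defects.
# This only illustrates the thresholding idea; it is not an industrial method.

def inspect(image, reference, threshold=40):
    defects = []
    for r, (row, ref_row) in enumerate(zip(image, reference)):
        for c, (pix, ref) in enumerate(zip(row, ref_row)):
            if abs(pix - ref) > threshold:
                defects.append((r, c))     # pixel location of a candidate defect
    return defects

reference = [[200, 200, 200], [200, 200, 200]]   # defect-free part
captured  = [[198, 90, 201], [199, 202, 197]]    # dark spot at row 0, column 1
print(inspect(captured, reference))               # [(0, 1)]
```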
Fig. 22.3 Techniques of fault management in process monitoring: the analytical approach (parameter estimation, observers, parity relations); the data-driven approach (univariate statistical monitoring with Shewhart, cumulative sum (CUSUM), and exponentially weighted moving average (EWMA) charts; multivariate statistical techniques such as principal component analysis (PCA), Fisher discriminant analysis (FDA), partial least squares (PLS), and canonical variate analysis (CVA)); and the knowledge-based approach (causal analysis techniques such as the signed directed graph (SDG) and the symptom tree model (STM); expert systems; and pattern recognition techniques such as artificial neural networks (ANN) and the self-organizing map (SOM))
Three approaches to manage faults for process monitoring are summarized in Fig. 22.3. The analytical approach generates features using detailed mathematical models. Faults can be detected and diagnosed by comparing the observed features with the features associated with normal operating conditions directly or after some transformation [19]. The data-driven approach applies statistical tools on large amount of data obtained from complex systems. Many quality control methods are examples of the data-driven approach. The knowledge-based approach uses qualitative models to detect and analyze faults. It is especially suited for systems in which detailed mathematical models are not available. Among these three approaches, the data-driven approach is considered most promising because of its solid theoretical foundation compared with the knowledge-based approach and its ability to deal with large amount of data compared with the analytical approach. The knowledge-based approach, however, has gained much attention recently. Many errors and conflicts can be detected and diagnosed only by experts who have extensive knowledge and experience, which need to be modeled and captured to automate error and conflict prognostics and prevention.
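As a minimal illustration of the data-driven approach, the sketch below implements one of the univariate charts listed in Fig. 22.3, an EWMA chart, on synthetic measurements; the target, standard deviation, and control-limit constant are assumed rather than estimated from real in-control (phase I) data.

```python
# Minimal EWMA control chart (data-driven fault detection). The target, sigma,
# smoothing constant, and measurements below are synthetic assumptions.

def ewma_alarms(data, target, sigma, lam=0.2, L=3.0):
    alarms, z = [], target
    for i, x in enumerate(data):
        z = lam * x + (1 - lam) * z                                  # EWMA statistic
        var = sigma**2 * (lam / (2 - lam)) * (1 - (1 - lam)**(2 * (i + 1)))
        if abs(z - target) > L * var**0.5:                           # outside limits
            alarms.append((i, round(z, 3)))
    return alarms

measurements = [10.1, 9.9, 10.0, 10.2, 10.1, 11.5, 12.0, 12.2]       # drift at the end
print(ewma_alarms(measurements, target=10.0, sigma=0.2))             # alarms from index 5 on
```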
22.2.2 Process Monitoring and Error Management

Process monitoring, or fault detection and diagnostics in industrial systems, has become a new subdiscipline within the broad subject of control and signal processing [40].

22.2.3 Hardware Testing Algorithms
The three fault management approaches discussed in Sect. 22.2.2 can also be classified according to the way that a system is modeled. In the analytical approach, quantitative models are used which require the complete specification of system components, state variables, observed variables,
and functional relationships among them for the purpose of fault management. The data-driven approach can be considered as the effort to develop qualitative models in which previous and current data obtained from a system are used. Qualitative models usually require less information about a system than do quantitative models. The knowledge-based approach uses qualitative models and other types of models; for instance, pattern recognition techniques use multivariate statistical tools and employ qualitative models, whereas the signed directed graph is a typical dependence model which represents the cause–effect relationships in the form of a directed graph [41]. Similar to algorithms used in quantitative and qualitative models, optimal and near-optimal test sequences have been developed to diagnose faults in hardware [41–50]. The goal of the test sequencing problem is to design a test algorithm that is able to unambiguously identify the occurrence of any system state (faulty or fault-free state) using the test in the test set and minimizes the expected testing cost [42]. The test sequencing problem belongs to the general class of binary identification problems. The problem to diagnose single fault is a perfectly observed Markov decision problem (MDP). The solution to the MDP is a deterministic AND/OR binary decision tree with OR nodes labeled by the suspect set of system states and AND nodes denoting tests (decisions) (Fig. 22.4). It is well known that the construction of the optimal decision tree is an NP-complete problem [42]. To subdue the computational explosion of the optimal test sequencing problem, algorithms that integrate concepts from information theory and heuristic search have been developed and first used to diagnose faults in electronic and electromechanical systems with a single fault [42]. An XWindows-based software tool, the testability engineering
and maintenance system (TEAMS), has been developed for testability analysis of large systems containing as many as 50,000 faults and 45,000 test points [41]. TEAMS can be used to model individual systems and generate near-optimal diagnostic procedures. Research on test sequencing has expanded to diagnose multiple faults [46–50] in various realworld systems including the Space Shuttle’s main propulsion system. Test sequencing algorithms with unreliable tests [45] and multivalued tests [50] have also been studied. To diagnose a single fault in a system, the relationship between the faulty states and tests can be modeled by directed graph (digraph model) (Fig. 22.5). Once a system is described in a diagraph model, the full order dependences among failure states and tests can be captured by a binary test matrix, also called a dependency matrix (D-matrix, Table 22.3). Other researchers have used digraph model to diagnose faults in hypercube microprocessors [51]. The directed graph is a powerful tool to describe dependences among system components and tests. Three important issues have been brought to light by extensive research on test sequencing problem and should be considered when diagnosing faults for hardware: 1. The order of dependences. The first-order cause–effect dependence between two nodes, i.e., how a faulty node affects another node directly, is the simplest dependence relationship between two nodes. Earlier research did not consider the dependences among nodes [42, 43], whereas in most recent research, different algorithms and test strategies have been developed with the consideration of not only the first-order but also high-order dependences among nodes [48–50]. The high-order dependences describe relationships between nodes that are related to each other through other nodes.
Fig. 22.4 Single-fault test strategy
Fig. 22.5 Digraph model of an example system (nodes represent components with or without an associated test)
Table 22.3 D-matrix of the example system derived from Fig. 22.5 (rows: tests; columns: system states S1-S15; node numbers from Fig. 22.5 in parentheses)

Test/state  S1(1) S2(2) S3(3) S4(4) S5(5) S6(6) S7(7) S8(8) S9(9) S10(10) S11(11) S12(12) S13(13) S14(14) S15(15)
T1 (5)        0     0     0     0     1     0     0     0     0     0       0       0       0       0       0
T2 (6)        1     0     0     0     0     1     0     0     0     0       0       0       0       0       0
T3 (8)        0     1     0     1     0     0     0     1     0     0       0       0       0       0       0
T4 (11)       1     1     0     0     0     0     1     0     0     0       1       0       0       0       0
T5 (12)       1     0     0     1     0     1     0     0     1     0       0       1       0       0       0
T6 (13)       0     1     0     0     0     0     0     0     0     0       0       0       1       0       0
T7 (14)       0     1     0     1     1     0     0     1     0     0       0       0       0       1       0
T8 (15)       0     0     1     0     0     0     0     0     0     1       0       0       0       0       1
2. Types of faults. Faults can be classified into two categories: functional faults and general faults. A component or unit in a complex system may have more than one function. Each function may become faulty. A component may therefore have one or more functional faults, each of which involves only one function of the component. General faults are those faults that cause faults in all functions of a component. If a component has a general fault, all its functions are faulty. Models that describe only general faults are often called worst-case models [41] because of their poor diagnostics ability. 3. Fault propagation time. Systems can be classified into two categories: zero-time and nonzero-time systems [50]. Fault propagation in zero-time systems is instantaneous to an observer, whereas in nonzero-time systems, it is several orders of magnitude slower than the response time of the observer. Zero-time systems can be abstracted by taking the propagation times to be zero. Another interesting aspect of the test sequencing problem is several assumptions that have been discussed in various articles, which are useful guidelines for the development of algorithms for hardware testing: 1. At most one faulty state (component or unit) in a system at any time [42]. This may be achieved if the system is tested frequently enough [47]. 2. All faults are permanent faults [42]. 3. Tests can identify system states unambiguously [42]. In other words, a faulty state is either identified or not identified. There is not a situation such as: There is a 60% probability that a faulty state has occurred. 4. Tests are 100% reliable [45, 50]. Both false positive and false negative rates are zero.
5. Tests do not have common setup operations [47]. This assumption has been proposed to simplify the cost comparison among tests. 6. Faults are independent [47]. 7. Failure states that are replaced/repaired are 100% functional [47]. 8. Systems are zero-time systems [50]. Note the critical difference between assumptions 3 and 4. Assumption 3 is related to diagnostics ability. When an unambiguous test detects a fault, the conclusion is that the fault has definitely occurred with 100% probability. Nevertheless, this conclusion could be wrong if the false positive rate is not zero. This is the test (diagnostics) reliability described in assumption 4. When an unambiguous test does not detect a fault, the conclusion is that the fault has not occurred with 100% probability. Similarly, this conclusion could be wrong if the false negative rate is not zero. Unambiguous tests have better diagnostics ability than ambiguous tests. If a fault has occurred, ambiguous tests conclude that the fault has occurred with a probability less than one. Similarly, if the fault has not occurred, ambiguous tests conclude that the fault has not occurred with a probability less than one. In summary, if assumption 3 is true, a test gives only two results: a fault has occurred or has not occurred, always with a probability of one. If both assumptions 3 and 4 are true, (1) a fault must have occurred if the test concludes that it has occurred, and (2) a fault must have not occurred if the test concludes that it has not occurred.
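Returning to the dependency-matrix idea of Table 22.3, the following sketch is a simplified greedy single-fault test-selection heuristic under assumptions 1-4 above. It only illustrates splitting the suspect set with a D-matrix; it is not the optimal AND/OR decision-tree construction of [42], and the matrix used here is a small hypothetical one.

```python
# Greedy single-fault isolation over a D-matrix (test -> set of fault states it
# detects, in the spirit of Table 22.3). Illustrative heuristic only.

D = {  # hypothetical D-matrix
    "T1": {"S5"},
    "T2": {"S1", "S6"},
    "T3": {"S2", "S4", "S8"},
    "T4": {"S1", "S2", "S7"},
}

def isolate(suspects, tests, observe):
    """Repeatedly pick the test that splits the suspect set most evenly."""
    suspects = set(suspects)
    while len(suspects) > 1 and tests:
        best = max(tests, key=lambda t: -abs(len(suspects & D[t]) * 2 - len(suspects)))
        passed = observe(best)                                 # run the test (True = pass)
        suspects &= (suspects - D[best]) if passed else D[best]
        tests = [t for t in tests if t != best]
    return suspects

# Simulate a system whose true (single, permanent) fault is S2: a reliable,
# unambiguous test fails iff it detects S2 (assumptions 1-4).
truth = "S2"
result = isolate({"S1", "S2", "S4", "S5"}, list(D), observe=lambda t: truth not in D[t])
print(result)   # {'S2'}
```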
22.2.4 Error Detection in Software Design The most prevalent method to detect errors in software is model checking. As Clarke et al. [52] state, model checking is a method to verify algorithmically if the model of software or hardware design satisfies given requirements and specifications through exhaustive enumeration of all the states reachable by the system and the behaviors that traverse them. Model checking has been successfully applied to identify incorrect hardware and protocol designs, and recently there has been a surge in work on applying it to reason about a wide variety of software artifacts; for example, model checking frameworks have been applied to reason about software process models [53], different families of software requirements models [54], architectural frameworks [55], design models [56], and system implementations [57–60]. The potential of model checking technology for (1) detecting coding errors that are hard to detect using existing quality assurance methods, e.g., bugs that arise from unanticipated interleavings in concurrent programs, and (2) verifying that system models and implementations satisfy crucial temporal properties and other lightweight specifications has led
a number of international corporations and government research laboratories such as Microsoft, International Business Machines Corporation (IBM), Lucent, Nippon Electric Company (NEC), National Aeronautics and Space Administration (NASA), and Jet Propulsion Laboratory (JPL) to fund their own software model checking projects. A drawback of model checking is the state-explosion problem. Software tends to be less structured than hardware and is considered as a concurrent but asynchronous system. In other words, two independent processes in software executing concurrently in either order result in the same global state [52]. Failing to execute checking because of too many states is a particularly serious problem for software. Several methods, including symbolic representation, partial order reduction, compositional reasoning, abstraction, symmetry, and induction, have been developed either to decrease the number of states in the model or to accommodate more states, although none of them has been able to solve the problem by allowing a general number of states in the system. Based on the observation that software model checking has been particularly successful when it can be optimized by taking into account properties of a specific application domain, Hatcliff and colleagues have developed Bogor [61], which is a highly modular model-checking framework that can be tailored to specific domains. Bogor’s extensible modeling language allows new modeling primitives that correspond to domain properties to be incorporated into the modeling language as first-class citizens. Bogor’s modular architecture enables its core model-checking algorithms to be replaced by optimized domain-specific algorithms. Bogor has been incorporated into Cadena and tailored to checking avionics designs in the common object request broker architecture (CORBA) component model (CCM), yielding orders of magnitude reduction in verification costs. Specifically, Bogor’s modeling language has been extended with primitives to capture CCM interfaces and a real-time CORBA (RTCORBA) event channel interface, and Bogor’s scheduling and state-space exploration algorithms were replaced with a scheduling algorithm that captures the particular scheduling strategy of the RT-CORBA event channel and a customized state-space storage strategy that takes advantage of the periodic computation of avionics software. Despite this successful customizable strategy, there are additional issues that need to be addressed when incorporating model checking into an overall design/development methodology. A basic problem concerns incorrect or incomplete specifications: before verification, specifications in some logical formalism (usually temporal logic) need to be extracted from design requirements (properties). Model checking can verify if a model of the design satisfies a given specification. It is impossible, however, to determine if the derived specifications are consistent with or cover all design properties that the system should satisfy. That
is, it is unknown if the design satisfies any unspecified properties, which are often assumed by designers. Even if all necessary properties are verified through model checking, code generated to implement the design is not guaranteed to meet design specifications, or more importantly, design properties. Model-based software testing is being studied to connect the two ends in software design: requirements and code. The detection of design errors in software engineering has received much attention. In addition to model checking and software testing, for instance, Miceli et al. [8] has proposed a metric-based technique for design flaw detection and correction. In parallel computing, synchronization errors are major problems, and a nonintrusive detection method for synchronization errors using execution replay has been developed [14]. Besides, concurrent error detection (CED) is well known for detecting errors in distributed computing systems and its use of duplications [9, 62], which is sometimes considered a drawback.
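The core idea of explicit-state model checking can be sketched compactly: enumerate the reachable states of a model and report any state that violates a safety property, together with a counterexample path. The transition system and property below are toy assumptions; real model checkers add symbolic representation, partial-order reduction, and abstraction to fight state explosion.

```python
from collections import deque

# Toy explicit-state model checking of a safety property by exhaustive
# exploration of reachable states. Everything here is a made-up example.

def successors(state):
    a, b = state                      # two counters, each may be incremented
    return [(a + 1, b), (a, b + 1)] if a < 3 and b < 3 else []

def safe(state):
    a, b = state
    return b <= a                     # safety property: b must never exceed a

def check(initial):
    frontier, parent = deque([initial]), {initial: None}
    while frontier:
        s = frontier.popleft()
        if not safe(s):               # violation found: rebuild counterexample path
            path, cur = [], s
            while cur is not None:
                path.append(cur)
                cur = parent[cur]
            return list(reversed(path))
        for nxt in successors(s):
            if nxt not in parent:
                parent[nxt] = s
                frontier.append(nxt)
    return None                       # no reachable violation

print(check((0, 0)))                  # [(0, 0), (0, 1)] -- a counterexample
```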
22.2.5 Error Detection and Diagnostics in Discrete-Event Systems Recently, Petri nets have been applied in fault detection and diagnostics [63–65] and fault analysis [66–68]. Petri nets are formal modeling and analysis tool for discrete-event or asynchronous systems. For hybrid systems that have both event-driven and time-driven (synchronous) elements, Petri nets can be extended to global Petri nets to model both discrete-time and event elements. To detect and diagnose faults in discrete-event systems (DES), Petri nets can be used together with finite-state machines (FSM) [69, 70]. The notion of diagnosability and a construction procedure for the diagnoser have been developed to detect faults in diagnosable systems [69]. A summary of the use of Petri nets in error detection and recovery before the 1990s can be found in the work of Zhou and DiCesare [71]. To detect and diagnose faults with Petri nets, some of the places in a Petri net are assumed observable and others are not. All transitions in the Petri net are also unobservable. Unobservable places, i.e., faults, indicate that the number of tokens in those places is not observable, whereas unobservable transitions indicate that their occurrences cannot be observed [63, 65]. The objective of the detection and diagnostics is to identify the occurrence and type of a fault based on observable places within finite steps of observation after the occurrence of the fault. It is clear that to detect and diagnose faults with Petri nets, system modeling is complex and time-consuming because faulty transitions and places must be included in a model. Research on this subject has mainly involved the extension of previous work using FSM and has made limited progress.
Faults in DES can be diagnosed with the decentralized approach [72]. Distributed diagnostics can be performed by either diagnosers communicating with each other directly or through a coordinator. Alternatively, diagnostics decisions can be made completely locally without combining the information gathered [72]. The decentralized approach is a viable direction for error detection and diagnostics in large and complex systems.
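A minimal sketch of such a decentralized scheme (with made-up local observations and fault signatures): each local diagnoser keeps the fault hypotheses consistent with its own observations, and a coordinator intersects the local verdicts.

```python
# Toy decentralized diagnosis: each site observes only some events and keeps
# the fault hypotheses consistent with what it saw; a coordinator combines the
# local verdicts by intersection. Signatures and observations are made up.

SIGNATURES = {            # fault -> events it would produce, per site
    "valve_stuck":  {"site_1": {"low_flow"},      "site_2": {"pressure_rise"}},
    "sensor_drift": {"site_1": {"noisy_reading"}, "site_2": set()},
    "pump_failure": {"site_1": {"no_flow"},       "site_2": {"pressure_drop"}},
}

def local_diagnosis(site, observed):
    # A fault is locally consistent if all events it would produce were observed.
    return {f for f, sig in SIGNATURES.items() if sig[site] <= observed}

obs = {"site_1": {"low_flow"}, "site_2": {"pressure_rise"}}
local = {site: local_diagnosis(site, events) for site, events in obs.items()}
verdict = set.intersection(*local.values())      # coordinator merges verdicts
print(local)      # site_1 suspects valve_stuck; site_2 suspects valve_stuck, sensor_drift
print(verdict)    # {'valve_stuck'}
```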
22.2.6 Error Detection and Disruption Prevention in Service Industries and Healthcare Errors tend to occur frequently in certain service industries that involve intensive human operations. As the use of computers and other automation devices, e.g., handwriting recognition and sorting machines in postal service, becomes increasingly prevalent, errors can be effectively and automatically prevented and reduced to minimum in many service industries including delivery, transportation, e-Business, and e-Commerce. In some other service industries, especially in healthcare systems, error detection is critical and limited research has been conducted to help develop systems that can automatically detect human errors and other types of errors [73–77]. Several systems and modeling tools have been studied and applied to detect errors in health industries with the help of automation devices [78–81]. Much more research needs to be conducted to advance the development of automated error detection in service industries. At the turn of year 2019 to 2020, a global coronavirus pandemic has brought the detection and diagnosis of viruses such as severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) to the forefront of research in epidemiology. The pandemic caused a significant loss of human life worldwide. The economic and social disruption caused by the pandemic was devastating. Millions of people lost jobs and fell into poverty. Many companies filed for bankruptcy or closed permanently. Almost all areas of global economy such as transportation, tourism, and international trade were severely disrupted. SARS-CoV-2 has a rapid human to human transmission and causes fatal pneumonia [82]. There are only a few approved specific drugs and vaccines for treatment and prevention. Early detection and diagnosis are crucial to prevent the extensive spread of the SARS-CoV-2. Two main detection strategies are available for the diagnosis of SARS-CoV-2 [82]. First, the SARS-CoV-2 viral ribonucleic acid (RNA) can be detected by polymerase chain reaction (PCR) or nucleic acid hybridization techniques [83] with a nasopharyngeal or oral swab [84]. Secondly, the virus antibody or antigens can be detected using immunological and serological assays [85]. These two detection strategies complement
each other. The determination of the viral RNA leads to the detection of the virus in its active stage, whereas the serological assays identify people whose immune system has already developed antibodies to fight the infection. A third strategy for the diagnosis of SARS-CoV-2 is to use biosensors that are bioanalytical devices combining the selectivity features of a biomolecule with the sensitivity of a physicochemical transducer. For example, trained scent dogs can be used to detect saliva or tracheobronchial secretions of SARS-CoV-2 [86]. For another example, deep learning can be applied to analyze chest X-ray images for detection and diagnosis of SARS-CoV-2 [87, 88]. In addition to identifying individuals who have SARS-CoV-2, several studies have focused on early detection of SARS-CoV-2 outbreak [89, 90]. For example, biosensors, PCR, or paper-based indicator methods can be used to detect SARS-CoV-2 in wastewater. The results can help localize infection clusters of the primary wave and detect potential subsequent waves of SARS-CoV-2 outbreak.
22.2.7 Error Detection and Prevention Algorithms for Production and Service Automation

The fundamental work system has evolved from manual power, human-machine systems, computer-aided and computer-integrated systems, and then to e-Work [91], which enables distributed and decentralized operations where errors and conflicts propagate and affect not only the local workstation but the entire production/service network. Agent-based algorithms, e.g., (22.3), have been developed to detect and prevent errors in the process of providing a single product/service in a sequential production/service line [92, 93]. $Q_i$ is the performance of unit $i$ in the production/service line. $U'_m$ and $L'_m$ are the upper limit and lower limit, respectively, of the acceptable performance of unit $m$. $U_m$ and $L_m$ are the upper limit and lower limit, respectively, of the acceptable level of the quality of a product/service after the operation of unit $m$. Units 1 through $m-1$ complete their operation on a product/service before unit $m$ starts its operation on the same product/service. An agent deployed at unit $m$ executes (22.3) to prevent errors:

$$\exists E\left[u_m\right], \quad \text{if } \left(U_m - L'_m < \sum_{i=1}^{m-1} Q_i\right) \cup \left(L_m - U'_m > \sum_{i=1}^{m-1} Q_i\right). \tag{22.3}$$

In the process of providing multiple products/services, traditionally, the centralized algorithm (22.4) is used to predict errors in a sequential production/service line.
$I_i(0)$ is the quantity of available raw materials for unit $i$ at time 0. $\eta_i$ is the probability that a product/service is within specifications after being operated on by unit $i$, assuming the product/service is within specifications before being operated on by unit $i$. $\phi_m(t)$ is the needed number of qualified products/services after the operation of unit $m$ at time $t$. Equation (22.4) predicts at time 0 the potential errors that may occur at unit $m$ at time $t$. Equation (22.4) is executed by a central control unit that is aware of $I_i(0)$ and $\eta_i$ of all units. Equation (22.4) often has low reliability, i.e., a high false positive rate (errors are predicted but do not occur), or low preventability, i.e., a high false negative rate (errors occur but are not predicted), because it is difficult to obtain accurate $\eta_i$ when there are many units in the system.

$$\exists E\left[u_m(t)\right], \quad \text{if } \min_i \left( I_i(0) \times \prod_{j=i}^{m} \eta_j \right) < \phi_m(t). \tag{22.4}$$

Fig. 22.6 Incident mapping
To improve reliability and preventability, agent-based error prevention algorithms, e.g., (22.5), have been developed to prevent errors in the process of providing multiple products/services [94]. $C_m(t')$ is the number of cumulative conformities produced by unit $m$ by time $t'$. $N_m(t')$ is the number of cumulative nonconformities produced by unit $m$ by time $t'$. An agent deployed at unit $m$ executes (22.5) by using information about unit $m-1$, i.e., $I_{m-1}(t')$, $\eta_{m-1}$, and $C_{m-1}(t')$, to prevent errors that may occur at time $t$, $t' < t$. Multiple agents deployed at different units can execute (22.5) simultaneously to prevent errors. Each agent can have its own attitude, i.e., optimistic or pessimistic, toward the possible occurrence of errors. Additional details about agent-based error prevention algorithms can be found in the work by Chen and Nof [94]:

$$\exists E\left[u_m(t)\right], \quad \text{if } \min\left\{ I_m(t'),\; I_{m-1}(t')\,\eta_{m-1} + C_{m-1}(t') - N_m(t') - C_m(t') \right\} \times \eta_m + C_m(t') < \phi_m(t), \quad t' < t. \tag{22.5}$$
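As a small worked illustration of the agent-side check in (22.5), the sketch below uses made-up numbers; the variable names mirror the symbols of the equation.

```python
# Illustrative agent-side evaluation of Eq. (22.5) at unit m, with made-up
# numbers. I_m, I_m1: available inputs at time t'; eta: conformity
# probabilities; C, N: cumulative conformities / nonconformities by time t'.

def predict_error(I_m, I_m1, eta_m1, eta_m, C_m1, C_m, N_m, phi_m_t):
    # Input still reachable by unit m: its own stock, or what unit m-1 can
    # still supply minus what unit m has already consumed.
    supply = min(I_m, I_m1 * eta_m1 + C_m1 - N_m - C_m)
    projected_conformities = supply * eta_m + C_m
    return projected_conformities < phi_m_t          # True => error predicted

# Hypothetical snapshot at time t': demand of 90 conforming items by time t.
print(predict_error(I_m=60, I_m1=50, eta_m1=0.9, eta_m=0.95,
                    C_m1=30, C_m=20, N_m=5, phi_m_t=90))   # True (shortfall)
```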
22.2.8 Error-Prevention Culture (EPC)

To prevent errors effectively, an organization is expected to cultivate an enduring error-prevention culture (EPC) [95], i.e., the organization knows what to do to prevent errors even when no one is telling it what to do. The EPC model has five components [95]:

1. Performance management: the human performance system helps manage valuable assets and involves five key areas: (a) an environment to minimize errors, (b) human resources that are capable of performing tasks, (c) task monitoring to audit work, (d) feedback provided by individuals or teams through collaboration, and (e) consequences provided to encourage or discourage people for their behaviors.
2. System alignment: an organization's operating systems must be aligned to get work done with discipline, routines, and best practices.
3. Technical excellence: an organization must promote shared technical and operational understanding of how a process, system, or asset should technically perform.
4. Standardization: standardization supports error prevention with a balanced combination of good manufacturing practices.
5. Problem-resolution skills: an organization needs people with effective statistical diagnostics and issue-resolution skills to address operational process challenges.
Not all errors can be prevented manually and/or by automation systems. When an error does occur, incident mapping (Fig. 22.6) [95] as one of the exception-handling tools can be used to analyze the error and proactively prevent future errors.
22.3 Conflict Prognostics and Prevention Applications
Conflicts can be categorized into three classes [96]: goal conflicts, plan conflicts, and belief conflicts. Goals of an agent are modeled with an intended goal structure (IGS; e.g., Fig. 22.7), which is extended from a goal structure tree [97]. Plans of an agent are modeled with the extended project estimation and review technique (E-PERT) diagram (e.g., Fig. 22.8). An agent has (1) a set of goals which are represented by circles (Fig. 22.7), or circles containing a number (Fig. 22.8), (2) activities such as Act 1 and Act 2 to achieve the goals, (3) the time needed to complete an activity, e.g., T1, and (4) resources, e.g., R1 and R2 (Fig. 22.8). Goal conflicts are detected by comparing goals by agents. Each
The three common characteristics of available conflict detection approaches are: (1) they use the agent concept because a conflict involves at least two units in a system, (2) an agent is modeled for multiple times because each agent has at least two distinct attributes: goal and plan, and (3) they not only detect, but mainly prevent conflicts because goals and plans are determined before agents start any activities to achieve them. The main difference between the IGS and PERT approach, and the Petri net approach is that agents communicate with each other to detect conflicts in the former approach whereas a centralized control unit analyzes the integrated Petri net to detect conflicts in the latter approach [99]. The Petri net approach does not detect conflicts using agents, although systems are modeled with agent technology. Conflict detection has been mostly applied in collaborative design [100–102]. The ability to detect conflicts in distributed design activities is vital to their success because multiple designers tend to pursue individual (local) goals prior to considering common (global) goals.
agent has a PERT diagram and plan conflicts are detected if agents fail to merge PERT diagrams or the merged PERT diagrams violate certain rules [96]. The three classes of conflicts can also be modeled by Petri nets with the help of four basic modules [98]: sequence, parallel, decision, and decision-free, to detect conflicts in a multiagent system. Each agent’s goal and plan are modeled by separate Petri nets [99], and many Petri nets are integrated using a bottom-up approach [71, 98] with three types of operations [99]: AND, OR, and precedence. The synthesized Petri net is analyzed to detect conflicts. Only normal transitions and places are modeled in Petri nets for conflict detection. The Petri-net-based approach for conflict detection developed so far has been rather limited. It has emphasized more the modeling of a system and its agents than the analysis process through which conflicts are detected.
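The goal-comparison step can be sketched very simply (hypothetical goals; actual IGS/E-PERT comparison is much richer): agents publish the resource and time-slot claims implied by their goals, and a conflict is flagged when two agents' claims collide.

```python
from itertools import combinations

# Toy goal-conflict detection: agents publish goals as (resource, time_slot)
# claims; two agents conflict if they claim the same resource in the same slot.
# Real IGS/PERT-based detection compares richer goal and plan structures.

goals = {
    "agent_A": {("robot_1", 1), ("fixture", 2)},
    "agent_B": {("robot_1", 1), ("conveyor", 3)},
    "agent_C": {("fixture", 4)},
}

def goal_conflicts(goals):
    conflicts = []
    for a, b in combinations(goals, 2):           # pairwise comparison by agents
        shared = goals[a] & goals[b]
        if shared:
            conflicts.append((a, b, sorted(shared)))
    return conflicts

print(goal_conflicts(goals))   # [('agent_A', 'agent_B', [('robot_1', 1)])]
```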
22.4 Integrated Error and Conflict Prognostics and Prevention

22.4.1 Active Middleware
Middleware was originally defined as software that connects two separate applications or separate products and serves as the glue between two applications; for example, in Fig. 22.9, middleware can link several different database systems to several different web servers. The middleware allows users to request data from any database system that is connected to the middleware using the form displayed on the web browser of one of the web servers. Active middleware is one of the four circles of the “e-” in e-Work as defined by the PRISM Center (Production,
Fig. 22.7 Development of agent A’s intended goal structure (IGS) over time
Fig. 22.8 Merged project estimation and review technique (PERT) diagram
Fig. 22.9 Middleware in a database server system
Robotics, and Integration Software for Manufacturing & Management) at Purdue University [91]. Six major components in active middleware have been identified [103, 104]: modeling tool, workflows, task/activity database, decision support system (DSS), multiagent system (MAS), and collaborative work protocols. Active middleware has been developed to optimize the performance of interactions in heterogeneous, autonomous, and distributed (HAD) environments, and is able to provide an e-Work platform and enables a universal model for error and conflict prognostics and prevention in a distributed environment. Figure 22.10 shows the structure of the active middleware; each component is described below:

1. Modeling tool: The goal of a modeling tool is to create a representation model for a multiagent system. The model can be transformed to next-level models, which will be the base of the system implementation.
2. Workflows: Workflows describe the sequence and relations of tasks in a system. Workflows store the answer to
Fig. 22.10 Active middleware architecture. (After Ref. [103])
22.4.2 Conflict and Error Detection Model Robotics, and Integration Software for Manufacturing & Management) at Purdue University [91]. Six major components in active middleware have been identified [103, 104]: modeling tool, workflows, task/activity database, decision support system (DSS), multiagent system (MAS), and collaborative work protocols. Active middleware has been developed to optimize the performance of interactions in heterogeneous, autonomous, and distributed (HAD) environments, and is able to provide an e-Work platform and enables a universal model for error and conflict prognostics and prevention in a distributed environment. Figure 22.10 shows the structure of the active middleware; each component is described below: 1. Modeling tool: The goal of a modeling tool is to create a representation model for a multiagent system. The model can be transformed to next-level models, which will be the base of the system implementation. 2. Workflows: Workflows describe the sequence and relations of tasks in a system. Workflows store the answer to
A conflict and error detection model (CEDM; Fig. 22.11) that is supported by the conflict and error detection protocol (CEDP, part of collaborative work protocols) and conflict and error detection agents (CEDAs, part of MAS) has been developed [100] to detect errors and conflicts in different network topologies. The CEDM integrates CEDP, CEDAs, and four error and conflict detection components (Fig. 22.11). A CEDA is deployed at each unit of a system to (1) detect errors and conflicts by three components (detection policy generation, error detection, and conflict evaluation), which interact with and are supported by error knowledge base, and (2) communicate with other CEDAs to send and receive error and conflict announcements with the support of CEDP. The CEDM has been applied to four different network topologies and the results show that the performance of CEDM is sometimes counterintuitive, i.e., it performs better on networks that seem more complex. To be able to detect both errors and conflicts is desired when they exist in the same system. Because errors are different from conflicts, the activities to detect them are often different and need to be integrated.
22
520
X. W. Chen and S. Y. Nof
CEDP Error knowledge base
Detection policy generation
CEDA
Conflict evaluation
Send
Error and conflict announcement
Error detection Receive CEDP Error and conflict announcement
Fig. 22.11 Conflict and error detection model (CEDM)
22.4.3 Performance Measures Performance measures are necessary for the evaluation and comparison of various error and conflict prognostics and prevention methods. Several measures have already been defined and developed in previous research: 1. Detection latency: The time between the instant that an error occurs and the instant that the error is detected [10, 105]. 2. Error coverage: The percentage of detected errors with respect to the total number of errors [10]. 3. Cost: The overhead caused by including error detection capability with respect to the system without the capability [10]. 4. Conflict severity: The severity of a conflict. It is the sum of the severity caused by the conflict at each involving unit [disser]. 5. Detectability: The ability of a detection method. It is a function of detection accuracy, cost, and time [106]. 6. Preventability: The ratio of the number of errors prevented divided by the total number of errors [94]. 7. Reliability: The ratio of the number of errors prevented divided by the number of errors identified or predicted, or the ratio of the number of errors detected divided by the total number of errors [45, 50, 94]. Other performance measures, e.g., total damage and cost– benefit ratio, can be developed to compare different methods. Appropriate performance measures help determine how a
specific method performs in different situations and are often required when there are multiple methods available.
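The measures above can be computed directly from detection records. The sketch below is illustrative only; the log fields are invented, and the computed quantities follow the definitions of detection latency, error coverage, preventability, and the first variant of reliability.

```python
# Illustrative computation of performance measures from a hypothetical detection log.

def evaluate(errors):
    """errors: list of dicts with keys
       't_occurred', 't_detected' (None if missed), 'prevented' (bool)."""
    detected = [e for e in errors if e["t_detected"] is not None]
    latencies = [e["t_detected"] - e["t_occurred"] for e in detected]

    mean_latency = sum(latencies) / len(latencies) if latencies else float("nan")
    coverage = len(detected) / len(errors)                           # error coverage
    preventability = sum(e["prevented"] for e in errors) / len(errors)
    reliability = (sum(e["prevented"] for e in detected) / len(detected)
                   if detected else float("nan"))                    # prevented / detected

    return {"mean_detection_latency": mean_latency,
            "error_coverage": coverage,
            "preventability": preventability,
            "reliability": reliability}


if __name__ == "__main__":
    log = [
        {"t_occurred": 0.0, "t_detected": 0.4, "prevented": True},
        {"t_occurred": 1.0, "t_detected": 2.5, "prevented": False},
        {"t_occurred": 3.0, "t_detected": None, "prevented": False},  # missed error
    ]
    print(evaluate(log))
```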
22.5 Error Recovery, Conflict Resolutions, and Disruption Prevention
When an error or a conflict occurs and is detected, identified, isolated, or diagnosed, there are three possible consequences: (1) other errors or conflicts that are caused by the error or conflict have occurred, (2) other errors or conflicts that are caused by the error or conflict will (probably) occur, and (3) other errors or conflicts, or the same error or conflict, will (probably) occur if the current error or conflict is not recovered or resolved, respectively. One of the objectives of error recovery and conflict resolution is to avoid the third consequence when an error or a conflict occurs. They are therefore part of error and conflict prognostics and prevention. There has been extensive research on automated error recovery and conflict resolution, which are often domain specific. Many methods have been developed and applied in various real-world applications in which the main objective of error recovery and conflict resolution is to keep the production or service flowing; for instance, Fig. 22.12 shows a recovery tree for rheostat pick-up and insertion, which is programmed for automatic error recovery. Traditionally, error recovery and conflict resolution are not considered as an approach to prevent errors and conflicts. In the next two sections, we describe two examples, error recovery in
robotics [107] and conflict resolution in collaborative facility design [102, 108], to illustrate how to perform these two functions automatically.

Fig. 22.12 Recovery tree for rheostat pick-up and insertion recovery. A branch may only be entered once; on success branch downward; on failure branch to right if possible, otherwise branch left; when the end of a branch is reached, unless otherwise specified return to last sensing position; "?" signifies sensing position where sensors or variables are evaluated. (After Ref. [1])
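The branching rules in the caption of Fig. 22.12 can be read as a small execution algorithm. The sketch below is a hypothetical recovery-tree executor; node names and simulated sensing outcomes are invented and only indicate the control flow of such preprogrammed recovery.

```python
# Hypothetical executor for a recovery tree in the spirit of Fig. 22.12:
# sensing nodes evaluate a condition; on success execution branches to the
# "success" child, on failure to the "failure" child. Names and the simulated
# sensor readings are illustrative only.

from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class Node:
    name: str
    sense: Optional[Callable[[], bool]] = None   # None for pure action (leaf) nodes
    on_success: Optional["Node"] = None
    on_failure: Optional["Node"] = None


def run(node: Node):
    """Walk the tree until a leaf (an unconditional recovery action) is reached."""
    trace = []
    while node is not None:
        trace.append(node.name)
        if node.sense is None:                    # leaf action, e.g. "Freeze; call operator"
            return trace
        node = node.on_success if node.sense() else node.on_failure
    return trace


if __name__ == "__main__":
    freeze = Node("Freeze; call operator")
    next_rheostat = Node("Move to next rheostat")
    recalibrate = Node("Recalibrate robot and return to start")

    inserted = Node("Rheostat inserted?", sense=lambda: False,
                    on_success=next_rheostat, on_failure=recalibrate)
    positioned = Node("Rheostat positioned correctly?", sense=lambda: True,
                      on_success=inserted, on_failure=freeze)

    print(" -> ".join(run(positioned)))
    # Rheostat positioned correctly? -> Rheostat inserted? -> Recalibrate robot and return to start
```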
22.5.1 Error Recovery

Error recovery cannot be avoided when using robots because errors are an inherent characteristic of robotic applications [109] that are often not fault tolerant. Most error recovery applications implement preprogrammed, nonintelligent corrective actions [109–112]. Due to the large number of possible errors and the inherent complexity of recovery actions, it is difficult to automate error recovery fully without human intervention. The emerging trend in error recovery is to equip systems with human intelligence so that they can correct errors through reasoning and high-level decision-making. An example of an intelligent error recovery system is the neural-fuzzy system for error recovery (NEFUSER) [107]. The NEFUSER is both an intelligent system and a design tool of fuzzy logic and neural-fuzzy models for error detection and recovery. The NEFUSER has been applied to a single robot working in an assembly cell. The NEFUSER enables interactions among the robot, the operator,
and computer-supported applications. It interprets data and information collected by the robot and provided by the operator, analyzes the data and information with fuzzy logic and/or neural-fuzzy models, and makes appropriate error recovery decisions. The NEFUSER has learning ability to improve corrective actions and adapt to different errors. The NEFUSER therefore increases the level of automation by decreasing the number of times that the robot has to stop and the operator has to intervene due to errors. Figure 22.13 shows the interactions between the robot, the operator, and computer-supported applications. The NEFUSER is the error recovery brain and is programmed and run on MATLAB, which provides a friendly, windows-oriented fuzzy inference system (FIS) that incorporates the graphical user interface tools of the fuzzy logic toolbox [113]. The example in Fig. 22.13 includes a robot and an operator in an assembly cell. In general, the NEFUSER design for error recovery includes three main tasks: (1) design the FIS, (2) manage and evaluate information, and (3) train the FIS with real data and information. Automation of machine operations has led to hardware and software error compensation technologies. One error compensation method is the use of error tables. The machine controller can access error tables to adjust the target positions used in motion servo algorithms (feedforward control). Modern machine controllers have more sophisticated compensation
tables that enable two- or three-dimensional error compensation based on measurements of error motions [114, 115]. Real-time error compensation of geometric and thermally induced errors can improve the precision of machine tools by up to an order of magnitude.
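As an illustration of the table-based compensation mentioned above, the following sketch applies a one-dimensional error table to a commanded axis position by linear interpolation. The axis, table values, and units are made up; real controllers use vendor-specific, multi-dimensional compensation tables.

```python
# Hypothetical one-axis error-table compensation: the commanded position is
# corrected by the interpolated error measured at calibration points
# (a feedforward correction in the spirit of the compensation tables above).

from bisect import bisect_left

# calibration data: axis position (mm) -> measured positioning error (µm)
POSITIONS = [0.0, 100.0, 200.0, 300.0]
ERRORS_UM = [0.0, 4.0, 7.0, 5.0]


def interpolated_error(pos_mm: float) -> float:
    """Linear interpolation of the error table, clamped at the table ends."""
    if pos_mm <= POSITIONS[0]:
        return ERRORS_UM[0]
    if pos_mm >= POSITIONS[-1]:
        return ERRORS_UM[-1]
    i = bisect_left(POSITIONS, pos_mm)
    x0, x1 = POSITIONS[i - 1], POSITIONS[i]
    y0, y1 = ERRORS_UM[i - 1], ERRORS_UM[i]
    return y0 + (y1 - y0) * (pos_mm - x0) / (x1 - x0)


def compensated_target(pos_mm: float) -> float:
    """Shift the commanded target so that the physical axis lands on pos_mm."""
    return pos_mm - interpolated_error(pos_mm) / 1000.0   # µm -> mm


if __name__ == "__main__":
    for target in (50.0, 150.0, 250.0):
        print(target, "->", round(compensated_target(target), 4))
```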
22.5.2 Conflict Resolution

There is a growing demand for knowledge-intensive collaboration in distributed design [108, 116, 117]. Conflict detection has been studied extensively in collaborative design, as has conflict resolution, which is often the next step after a conflict is detected. There has been extensive research on conflict resolution [118–123]. Recently, a multiapproach method to conflict resolution in collaborative design has been introduced with the development of the facility description language–conflict resolution (FDL-CR) [102]. The critical role of computer-supported conflict resolution in distributed organizations has been discussed in great detail [91, 124–126]. In addition, Ceroni and Velasquez [100] have developed the conflict detection and management system (CDMS); their work shows that both product complexity and the number of participating designers have a statistically significant effect on the ratio of conflicts resolved to those detected, but that only complexity has a statistically significant effect on design duration. Based on this previous work, a new method, Mcr (Table 22.4), has recently been developed to automatically resolve conflict situations common in collaborative facility design using computer-supported tools [102, 108]. The method uses both traditional human conflict-resolution approaches that have been used successfully by others and principles of conflict prevention to improve design performance, and it applies computer-based learning to improve usefulness. A graph model for conflict resolution is used to facilitate conflict modeling and analysis. The performance of the new method has been validated by implementing its conflict resolution capabilities in the FDL, a computer tool for collaborative facility design, and by applying FDL-CR to resolve typical conflict situations. Table 22.4 describes the Mcr structure.
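The staged structure of Mcr (Table 22.4) can be illustrated by its first stage, direct negotiation, in which an originating agent and its counterparts exchange proposals and counteroffers until one is accepted or the conflict is escalated to mediation. The agent model, concession rule, and tolerance values below are hypothetical and only sketch the control flow of Mcr(1).

```python
# Hypothetical sketch of the Mcr(1) direct-negotiation stage of Table 22.4:
# an originating agent proposes, counterparts either accept or counteroffer,
# and unresolved conflicts are escalated to the next stage, Mcr(2).

class DesignAgent:
    def __init__(self, name, preferred_value, tolerance):
        self.name, self.preferred, self.tolerance = name, preferred_value, tolerance

    def propose(self):
        return self.preferred

    def accepts(self, value):
        return abs(value - self.preferred) <= self.tolerance

    def counteroffer(self, value):
        # move halfway toward the received proposal (an illustrative concession rule)
        return (self.preferred + value) / 2.0


def mcr1_direct_negotiation(originator, counterparts, max_rounds=5):
    proposal = originator.propose()                         # step 1
    for _ in range(max_rounds):
        if all(c.accepts(proposal) for c in counterparts):  # step 2
            return ("resolved", proposal)                   # step 5
        counter = sum(c.counteroffer(proposal) for c in counterparts) / len(counterparts)  # step 3
        if originator.accepts(counter):                     # step 4
            return ("resolved", counter)
        proposal = originator.counteroffer(counter)
    return ("escalate to Mcr(2)", proposal)


if __name__ == "__main__":
    result = mcr1_direct_negotiation(
        DesignAgent("layout", preferred_value=3.0, tolerance=0.4),
        [DesignAgent("safety", 3.6, 0.3), DesignAgent("cost", 2.8, 0.5)],
    )
    print(result)   # e.g. ('resolved', 3.1) for an aisle-width conflict
```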
Fig. 22.13 Interactions with NEFUSER. (After Ref. [107])

22.5.3 Disruption Prevention
Disruptions occur when manufacturing, production, or service activities do not perform as specified or become unavailable. Disruptions are part of the consequences when conflicts and errors are not effectively managed. Many disruptions are temporary, but certain disruptions last for an extended period of time. In terms of impact, small disruptions such as minor traffic accidents affect a local community, whereas large disruptions have significant global impact. Disruptions may be random [131] and often propagate and cause
other errors, conflicts, and disruptions [132, 133]. Disruption prevention is domain specific. There have been several studies of disruption prevention in nuclear fusion [134–136], supply chain [137], assembly [138, 139], and transportation [140]. The field of disruption prevention, and especially disruption propagation prevention, is emerging and requires additional research. Table 22.5 summarizes error and conflict prognostics and prevention methods and technologies in various production and service applications.

Table 22.4 Multiapproach conflict resolution in collaborative design (Mcr) structure [102, 108]. (After Ref. [108], courtesy Elsevier, 2008)

Mcr(1), direct negotiation. Steps: (1) the originating agent prepares a resolution proposal and sends it to the counterparts; (2) the counterpart agents evaluate the proposal; if they accept it, go to step 5, otherwise go to step 3; (3) the counterpart agents prepare a counteroffer and send it back to the originating agent; (4) the agent evaluates the counteroffer; if accepted, go to step 5, otherwise go to Mcr(2); (5) end of the conflict resolution process. Methodologies and tools: heuristics; knowledge-based interactions; multiagent systems.

Mcr(2), third-party mediation. Steps: (1) a third-party agent prepares a resolution proposal and sends it to the counterparts; (2) the counterpart agents evaluate the proposal; if accepted, go to step 5, otherwise go to step 3; (3) the counterpart agents prepare a counteroffer and send it back to the third-party agent; (4) the third-party agent evaluates the counteroffer; if accepted, go to step 5, otherwise go to Mcr(3); (5) end of the conflict resolution process. Methodologies and tools: heuristics; knowledge-based interactions; multiagent systems; PERSUADER [127].

Mcr(3), incorporation of additional parties. Steps: (1) a specialized agent prepares a resolution proposal and sends it to the counterparts; (2) the counterpart agents evaluate the proposal; if accepted, go to step 5, otherwise go to step 3; (3) the counterpart agents prepare a counteroffer and send it back to the specialized agent; (4) the specialized agent evaluates the counteroffer; if accepted, go to step 5, otherwise go to Mcr(4); (5) end of the conflict resolution process. Methodologies and tools: heuristics; knowledge-based interactions; expert systems.

Mcr(4), persuasion. Steps: (1) a third-party agent prepares persuasive arguments and sends them to the counterparts; (2) the counterpart agents evaluate the arguments; (3) if the arguments are effective, go to step 4, otherwise go to Mcr(5); (4) end of the conflict resolution process. Methodologies and tools: PERSUADER [127]; case-based reasoning.

Mcr(5), arbitration. Steps: (1) if conflict management and analysis results in common proposals (X), conflict resolution is achieved through management and analysis; (2) if conflict management and analysis results in mutually exclusive proposals (Y), conflict resolution is achieved through conflict confrontation; (3) if conflict management and analysis results in no conflict resolution proposals (Z), conflict resolution must be used. Methodologies and tools: graph model for conflict resolution (GMCR) [128] for conflict management and analysis; adaptive neural-fuzzy inference system (ANFIS) [129] for conflict confrontation; dependency analysis [130] and product flow analysis for conflict resolution.
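Because unmanaged disruptions propagate through networked units, as noted in Sect. 22.5.3, a simple way to reason about their reach is a spreading simulation over the network of units. The graph and propagation probability below are illustrative only.

```python
# Hypothetical simulation of disruption propagation: starting from one
# disrupted unit, each affected unit disrupts its neighbors with a given
# probability; containment corresponds to removing edges or lowering p.

import random


def propagate(adjacency, start, p, rng):
    """Breadth-first spread: returns the set of units reached by the disruption."""
    affected, frontier = {start}, [start]
    while frontier:
        unit = frontier.pop()
        for neighbor in adjacency[unit]:
            if neighbor not in affected and rng.random() < p:
                affected.add(neighbor)
                frontier.append(neighbor)
    return affected


if __name__ == "__main__":
    # a small supply-network-like graph (illustrative only)
    adjacency = {
        "supplier": ["plant-1", "plant-2"],
        "plant-1": ["supplier", "warehouse"],
        "plant-2": ["supplier", "warehouse"],
        "warehouse": ["plant-1", "plant-2", "customer"],
        "customer": ["warehouse"],
    }
    rng = random.Random(42)
    sizes = [len(propagate(adjacency, "supplier", p=0.4, rng=rng)) for _ in range(1000)]
    print("mean number of affected units:", sum(sizes) / len(sizes))
```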
22.6 Emerging Trends
22.6.1 Decentralized and Agent-Based Error and Conflict Prognostics and Prevention

Most error and conflict prognostics and prevention methods developed so far are centralized approaches (Table 22.5), in which a central control unit controls data and information and executes some or all of the eight functions to prevent errors and conflicts. The centralized approach often requires substantial time to execute the various functions, and the central control unit often possesses incomplete or incorrect data and information [94]. These disadvantages become apparent when a system has many units that need to be examined for errors and conflicts. To overcome the disadvantages of the centralized approach, decentralized approaches that take advantage of the parallel activities of multiple agents have been developed [16, 72, 93, 94, 105]. In the decentralized approach, distributed agents detect, identify, or isolate errors and conflicts at individual units of a system, and communicate with each other to diagnose and prevent errors and conflicts. The main challenge of the decentralized approach is to develop robust protocols that can ensure effective communication between agents. Further research is needed to develop and improve decentralized approaches for implementation in various applications.
22.6.2 Intelligent Error and Conflict Prognostics and Prevention

Compared with humans, automation systems perform better when they are used to prevent errors and conflicts through the violation of specifications or violation in comparisons [13]. Humans, however, have the ability to prevent errors and conflicts through the violation of expectations, i.e., with tacit knowledge and high-level decision-making. To increase the effectiveness and degree of automation of error and conflict prognostics and prevention, it is necessary to equip automation systems with human intelligence through appropriate modeling techniques such as fuzzy logic, pattern recognition, and artificial neural networks. There has been some preliminary work to incorporate high-level human intelligence in error detection and recovery [3, 107] and conflict resolution [102, 108]. Additional work is needed to develop self-learning, self-improving artificial intelligence systems for error and conflict prognostics and prevention. Recent work in this area includes supply chain and supply network security and robustness against errors, conflicts, and disruptions [141–143], and collaborative control protocols and algorithms for disruption prevention and disruption propagation prevention [132, 133, 144].
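As a toy illustration of the fuzzy-logic direction mentioned above, the sketch below grades a normalized deviation into overlapping "normal", "warning", and "error" categories. The membership functions and thresholds are invented and are not taken from NEFUSER or any other cited system.

```python
# Toy fuzzy grading of a measured deviation into overlapping categories;
# the membership functions and the decision rule are illustrative only.

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)


def grade_deviation(dev):
    memberships = {
        "normal":  tri(dev, -0.1, 0.0, 0.5),
        "warning": tri(dev, 0.3, 0.6, 0.9),
        "error":   tri(dev, 0.7, 1.0, 10.0),
    }
    label = max(memberships, key=memberships.get)   # strongest membership wins
    return label, memberships


if __name__ == "__main__":
    for dev in (0.1, 0.55, 0.95):
        label, degrees = grade_deviation(dev)
        print(f"deviation={dev}: {label}  {degrees}")
```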
22.6.3 Graph and Network Theories

The performance of an error and conflict prognostics and prevention method is significantly influenced by the number of units in a system and their relationships. A system can be viewed as a graph or network with many nodes, each of which represents a unit in the system. The relationship between units is represented by the link between nodes. The study of network topologies has a long history stretching back at least to the 1730s. The classic model of a network, the random network, was first discussed in the early 1950s [145] and was rediscovered and analyzed in a series of papers published in the late 1950s and early 1960s [146–148]. More recently, several network models have been discovered and extensively studied, for instance, the small-world network [149], the scale-free network [150–153], and the Bose–Einstein condensation network [154]. Bioinspired network models for collaborative control have recently been studied by Nof [155] (see also Chs. 11 in Sect. 5 and 21 for more details). Because the same prognostics and prevention method may perform quite differently on networks with different topologies and attributes, or with the same network topology and attributes but with different parameters, it is imperative to study the performance of prognostics and prevention methods with respect to different networks for the best match between methods and networks. There is ample room for research, development, and implementation of error and conflict prognostics and prevention methods supported by graph and network theories [156].
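To see why topology matters, one can generate networks with different structures and compare a simple property that affects how quickly detection announcements spread, such as the mean hop distance from a monitoring node. The generators, parameters, and the proxy measure below are illustrative only.

```python
# Illustrative comparison of two network topologies (random vs. preferential
# attachment) by the mean hop distance from a single monitoring node.

import random
from collections import deque


def random_graph(n, p, rng):
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].add(j); adj[j].add(i)
    return adj


def preferential_graph(n, m, rng):
    """Each new node attaches to m existing nodes chosen in proportion to degree."""
    adj = {i: set() for i in range(n)}
    targets, repeated = list(range(m)), []
    for new in range(m, n):
        for t in set(targets):
            adj[new].add(t); adj[t].add(new)
            repeated.extend([new, t])
        targets = [rng.choice(repeated) for _ in range(m)]
    return adj


def mean_hops(adj, source=0):
    dist, queue = {source: 0}, deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    reached = [d for node, d in dist.items() if node != source]
    return sum(reached) / len(reached) if reached else float("inf")


if __name__ == "__main__":
    rng, n = random.Random(1), 200
    print("random graph:       ", round(mean_hops(random_graph(n, 0.03, rng)), 2))
    print("preferential (hubs):", round(mean_hops(preferential_graph(n, 3, rng)), 2))
```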
22.6.4 Financial Models for Prognostics Economy

Most errors and conflicts must be detected, isolated, identified, diagnosed, or prevented. Certain errors and conflicts, however, may be tolerable in certain systems, i.e., fault-tolerant systems. Also, the cost of automating some or all
eight functions of error and conflict prognostics and prevention may far exceed the damages caused by certain errors and conflicts. In both situations, cost–benefit analyses can be used to determine if an error or a conflict needs to be dealt with. In general, financial models are used to analyze the economy of prognostics and prevention methods for specific errors and conflicts, to help decide which of the eight functions will be executed and how they will be executed, e.g., at what frequency. There has been limited research on how to use financial models to help justify the automation of error and conflict prognostics and prevention [106, 157]. One of the challenges is how to appropriately evaluate or assess the damage of errors and conflicts, e.g., short-term damage, long-term damage, and intangible damage. Additional research is needed to address these economic decisions.

Table 22.5 Summary of error and conflict prognostics and prevention theories, applications, and open challenges

Assembly and inspection. Methods/technologies: control theory, knowledge base, computer/machine vision, robotics, feature extraction, pattern recognition. Errors/conflicts: E; centralized/decentralized: C. Strengths: integration of error detection and recovery. Weaknesses: domain specific; lack of general methods. [3, 25–39]

Process monitoring. Methods/technologies: analytical, data-driven, and knowledge-based methods. Errors/conflicts: E; centralized/decentralized: C. Strengths: analytical methods are accurate and reliable, data-driven methods can process large amounts of data, and knowledge-based methods do not require detailed system information. Weaknesses: analytical methods require mathematical models that are often not available, data-driven methods rely on the quantity, quality, and timeliness of data, and knowledge-based results are subjective and may not be reliable. [17, 19, 40]

Hardware testing. Methods/technologies: information theory, heuristic search. Errors/conflicts: E; centralized/decentralized: C. Strengths: accurate and reliable. Weaknesses: difficult to derive optimal algorithms to minimize cost; time consuming for large systems. [41–50]

Software testing. Methods/technologies: model checking, Bogor, Cadena, concurrent error detection (CED). Errors/conflicts: E; centralized/decentralized: C. Strengths: thorough verification with formal methods. Weaknesses: state explosion; duplications needed in CED; cannot deal with incorrect or incomplete specifications. [8, 9, 14, 52–62]

Discrete event systems. Methods/technologies: Petri net, finite-state machine (FSM). Errors/conflicts: E; centralized/decentralized: C/D. Strengths: formal method applicable to various systems. Weaknesses: state explosion for large systems; system modeling is complex and time-consuming. [63–72]

Multiagent systems. Methods/technologies: intended goal structure (IGS), project evaluation and review technique (PERT), Petri net, conflict detection and management system (CDMS). Errors/conflicts: C; centralized/decentralized: C/D. Strengths: modeling of systems with agent-based technology. Weaknesses: an agent may be modeled multiple times due to the many conflicts it is involved in. [71, 96–102]

Collaborative design. Methods/technologies: facility description language (FDL), Mcr, CDMS. Errors/conflicts: C; centralized/decentralized: C/D. Strengths: integration of traditional human conflict resolutions and computer-based learning. Weaknesses: the adaptability of the methods to other design activities has not been validated. [91, 100, 102, 108], [116–126]

Production and service. Methods/technologies: detection and prevention algorithms, reliability theory, process modeling, workflow. Errors/conflicts: E; centralized/decentralized: C/D. Strengths: reliable; easy to apply. Weaknesses: limited to sequential production and service lines; domain specific. [73–81, 91–94]

Production and service. Methods/technologies: conflict and error detection model (CEDM), active middleware. Errors/conflicts: E/C; centralized/decentralized: D. Strengths: short detection time. Weaknesses: needs further development and validation. [91, 103–105]

Production and service. Methods/technologies: fuzzy logic, artificial intelligence. Errors/conflicts: E; centralized/decentralized: C/D. Strengths: correct errors through reasoning and high-level decision-making. Weaknesses: needs further development for various applications. [107, 109–113]
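The economic trade-off discussed in Sect. 22.6.4 can be screened with a simple expected-value calculation: automating detection and prevention for an error class is worthwhile when the expected avoided damage exceeds the cost of running the corresponding functions. All figures in the sketch below are made up.

```python
# Illustrative cost-benefit screening of prognostics-and-prevention functions
# for one error class; all monetary figures and rates are invented.

def net_benefit(error_rate_per_year, damage_per_error,
                coverage, prevention_success, annual_cost):
    """Expected avoided damage per year minus the cost of the added functions."""
    avoided = error_rate_per_year * damage_per_error * coverage * prevention_success
    return avoided - annual_cost


if __name__ == "__main__":
    scenarios = {
        "minor handling errors":  dict(error_rate_per_year=120, damage_per_error=80,
                                       coverage=0.9, prevention_success=0.7, annual_cost=15000),
        "line-stopping failures": dict(error_rate_per_year=6, damage_per_error=25000,
                                       coverage=0.8, prevention_success=0.6, annual_cost=20000),
    }
    for name, s in scenarios.items():
        nb = net_benefit(**s)
        print(f"{name}: net benefit {nb:+.0f} per year ->",
              "automate" if nb > 0 else "tolerate / handle manually")
```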
22.7 Conclusions and Emerging Trends
In this chapter, we have discussed the eight functions that are used to automate prognostics and prevention of errors, conflicts, and disruptions, to prevent and reduce or recover from them. Applications of these eight functions in various design, production, and service areas are described and illustrated. Prognostics and prevention methods for errors, conflicts, and disruptions are developed based on extensive theoretical advancements in many science and engineering domains, and have been applied with significant effectiveness to various real-world problems. As systems and networks become larger and more complex, such as global enterprises, the Internet, and service networks, error, conflict, and disruption prognostics and prevention become more important. Therefore, the emerging focus is on shifting from passive, reactive responses to active, predictive, and self-learning prognostics and prevention automation. Examples are illustrated in Refs. [158, 159].
References

1. Nof, S.Y., Wilhelm, W.E., Warnecke, H.-J.: Industrial Assembly. Springer (2017) 2. Lopes, L.S., Camarinha-Matos, L.M.: A machine learning approach to error detection and recovery in assembly. In: Proc. IEEE/RSJ Int. Conf. Intell. Robot. Syst. 95, 'Human Robot Interaction and Cooperative Robots', vol. 3, pp. 197–203 (1995) 3. Najjari, H., Steiner, S.J.: Integrated sensor-based control system for a flexible assembly. Mechatronics. 7(3), 231–262 (1997) 4. Steininger, A., Scherrer, C.: On finding an optimal combination of error detection mechanisms based on results of fault injection experiments. In: Proc. 27th Annu. Int. Symp. Fault-Toler. Comput., FTCS-27, Digest of Papers, pp. 238–247 (1997) 5. Toguyeni, K.A., Craye, E., Gentina, J.C.: Framework to design a distributed diagnosis in FMS. Proc. IEEE Int. Conf. Syst. Man. Cybern. 4, 2774–2779 (1996) 6. Kao, J.F.: Optimal recovery strategies for manufacturing systems. Eur. J. Oper. Res. 80(2), 252–263 (1995)
7. Bruccoleri, M., Pasek, Z.J.: Operational issues in reconfigurable manufacturing systems: exception handling. In: Proc. 5th Biannu. World Autom. Congr. (2002) 8. Miceli, T., Sahraoui, H.A., Godin, R.: A metric based technique for design flaws detection and correction. In: Proc. 14th IEEE Int. Conf. Autom. Softw. Eng., pp. 307–310 (1999) 9. Bolchini, C., Fornaciari, W., Salice, F., Sciuto, D.: Concurrent error detection at architectural level. In: Proc. 11st Int. Symp. Syst. Synth., pp. 72–75 (1998) 10. Bolchini, C., Pomante, L., Salice, F., Sciuto, D.: Reliability properties assessment at system level: a co-design framework. J. Electron. Test. 18(3), 351–356 (2002) 11. Jeng, M.D.: Petri nets for modeling automated manufacturing systems with error recovery. IEEE Trans. Robot. Autom. 13(5), 752–760 (1997) 12. Kanawati, G.A., Nair, V.S.S., Krishnamurthy, N., Abraham, J.A.: Evaluation of integrated system-level checks for on-line error detection. In: Proc. IEEE Int. Comput. Perform. Dependability Symp., pp. 292–301 (1996) 13. Klein, B.D.: How do actuaries use data containing errors?: models of error detection and error correction. Inf. Resour. Manag. J. 10(4), 27–36 (1997) 14. Ronsse, M., Bosschere, K.: Non-intrusive detection of synchronization errors using execution replay. Autom. Softw. Eng. 9(1), 95–121 (2002) 15. Svenson, O., Salo, I.: Latency and mode of error detection in a process industry. Reliab. Eng. Syst. Saf. 73(1), 83–90 (2001) 16. Chen, X.W., Nof, S.Y.: Conflict and error prevention and detection in complex networks. Automatica. 48, 770–778 (2012) 17. Gertler, J.: Fault Detection and Diagnosis in Engineering Systems. Marcel Dekker, New York (1998) 18. Klein, M., Dellarocas, C.: A knowledge-based approach to handling exceptions in workflow systems. Comput. Support. Coop. Work. 9, 399–412 (2000) 19. Raich, A., Cinar, A.: Statistical process monitoring and disturbance diagnosis in multivariable continuous processes. AICHE J. 42(4), 995–1009 (1996) 20. Chen, X.W., Nof, S.Y.: Interactive, Constraint-Network Prognostics and Diagnostics to Control Errors and Conflicts (IPDN). U.S. Patent 10,496,463, 2019 21. Chen, X.W., Nof, S.Y.: Interactive, Constraint-Network Prognostics and Diagnostics to Control Errors and Conflicts (IPDN). U.S. Patent 9,760,422, 2017 22. Nof, S.Y., Chen, X.W.: Failure Repair Sequence Generation for Nodal Network. U.S. Patent 9,166,907, 2015 23. Chen, X.W., Nof, S.Y.: Interactive, Constraint-Network Prognostics and Diagnostics to Control Errors and Conflicts (IPDN). U.S. Patent 9,009,530, 2015 24. Chen, X.W., Nof, S.Y.: Interactive Conflict Detection and Resolution for Air and Air-Ground Traffic Control. U.S. Patent 8,831,864, 2014 25. Chang, C.-Y., Chang, J.-W., Jeng, M.D.: An unsupervised selforganizing neural network for automatic semiconductor wafer defect inspection. In: IEEE Int. Conf. Robot. Autom. ICRA, pp. 3000–3005 (2005) 26. Moganti, M., Ercal, F.: Automatic PCB inspection systems. IEEE Potentials. 14(3), 6–10 (1995) 27. Rau, H., Wu, C.-H.: Automatic optical inspection for detecting defects on printed circuit board inner layers. Int. J. Adv. Manuf. Technol. 25(9–10), 940–946 (2005) 28. Calderon-Martinez, J.A., Campoy-Cervera, P.: An application of convolutional neural networks for automatic inspection. In: IEEE Conf. Cybern. Intell. Syst., pp. 1–6 (2006) 29. Duarte, F., Arauio, H., Dourado, A.: Automatic system for dirt in pulp inspection using hierarchical image segmentation. Comput. Ind. Eng. 37(1–2), 343–346 (1999)
30. Wilson, J.C., Berardo, P.A.: Automatic inspection of hazardous materials by mobile robot. Proc. IEEE Int. Conf. Syst. Man. Cybern. 4, 3280–3285 (1995) 31. Choi, J.Y., Lim, H., Yi, B.-J.: Semi-automatic pipeline inspection robot systems. In: SICE-ICASE Int. Jt. Conf., pp. 2266–2269 (2006) 32. Finogenoy, L.V., Beloborodov, A.V., Ladygin, V.I., Chugui, Y.V., Zagoruiko, N.G., Gulvaevskii, S.Y., Shul'man, Y.S., Lavrenyuk, P.I., Pimenov, Y.V.: An optoelectronic system for automatic inspection of the external view of fuel pellets. Russ. J. Nondestruct. Test. 43(10), 692–699 (2007) 33. Ni, C.W.: Automatic inspection of the printing contents of soft drink cans by image processing analysis. Proc. SPIE. 3652, 86–93 (2004) 34. Cai, J., Zhang, G., Zhou, Z.: The application of area-reconstruction operator in automatic visual inspection of quality control. Proc. World Congr. Intell. Control Autom. (WCICA). 2, 10111–10115 (2006) 35. Erne, O., Walz, T., Ettemeyer, A.: Automatic shearography inspection systems for aircraft components in production. Proc. SPIE. 3824, 326–328 (1999) 36. Huang, C.K., Wang, L.G., Tang, H.C., Tarng, Y.S.: Automatic laser inspection of outer diameter, run-out taper of micro-drills. J. Mater. Process. Technol. 171(2), 306–313 (2006) 37. Chen, L., Wang, X., Suzuki, M., Yoshimura, N.: Optimizing the lighting in automatic inspection system using Monte Carlo method. Jpn. J. Appl. Phys. Part 1. 38(10), 6123–6129 (1999) 38. Godoi, W.C., da Silva, R.R., Swinka-Filho, V.: Pattern recognition in the automatic inspection of flaws in polymeric insulators. Insight Nondestr. Test. Cond. Monit. 47(10), 608–614 (2005) 39. Khan, U.S., Igbal, J., Khan, M.A.: Automatic inspection system using machine vision. In: Proc. 34th Appl. Imag. Pattern Recognit. Workshop, pp. 210–215 (2005) 40. Chiang, L.H., Braatz, R.D., Russell, E.: Fault Detection and Diagnosis in Industrial Systems. Springer, London/New York (2001) 41. Deb, S., Pattipati, K.R., Raghavan, V., Shakeri, M., Shrestha, R.: Multi-signal flow graphs: a novel approach for system testability analysis and fault diagnosis. IEEE Aerosp. Electron. Syst. Mag. 10(5), 14–25 (1995) 42. Pattipati, K.R., Alexandridis, M.G.: Application of heuristic search and information theory to sequential fault diagnosis. IEEE Trans. Syst. Man Cybern. 20(4), 872–887 (1990) 43. Pattipati, K.R., Dontamsetty, M.: On a generalized test sequencing problem. IEEE Trans. Syst. Man Cybern. 22(2), 392–396 (1992) 44. Raghavan, V., Shakeri, M., Pattipati, K.: Optimal and near-optimal test sequencing algorithms with realistic test models. IEEE Trans. Syst. Man. Cybern. A. 29(1), 11–26 (1999) 45. Raghavan, V., Shakeri, M., Pattipati, K.: Test sequencing algorithms with unreliable tests. IEEE Trans. Syst. Man. Cybern. A. 29(4), 347–357 (1999) 46. Shakeri, M., Pattipati, K.R., Raghavan, V., Patterson-Hine, A., Kell, T.: Sequential Test Strategies for Multiple Fault Isolation. IEEE, Atlanta (1995) 47. Shakeri, M., Raghavan, V., Pattipati, K.R., Patterson-Hine, A.: Sequential testing algorithms for multiple fault diagnosis. IEEE Trans. Syst. Man. Cybern. A. 30(1), 1–14 (2000) 48. Tu, F., Pattipati, K., Deb, S., Malepati, V.N.: Multiple Fault Diagnosis in Graph-Based Systems. International Society for Optical Engineering, Orlando (2002) 49. Tu, F., Pattipati, K.R.: Rollout strategies for sequential fault diagnosis. IEEE Trans. Syst. Man. Cybern. A. 33(1), 86–99 (2003) 50.
Tu, F., Pattipati, K.R., Deb, S., Malepati, V.N.: Computationally efficient algorithms for multiple fault diagnosis in large graphbased systems. IEEE Trans. Syst. Man. Cybern. A. 33(1), 73–85 (2003)
51. Feng, C., Bhuyan, L.N., Lombardi, F.: Adaptive system-level diagnosis for hypercube multiprocessors. IEEE Trans. Comput. 45(10), 1157–1170 (1996) 52. Clarke, E.M., Grumberg, O., Peled, D.A.: Model Checking. MIT Press, Cambridge (2000) 53. Karamanolis, C., Giannakopolou, D., Magee, J., Wheather, S.: Model checking of workflow schemas. In: 4th Int. Enterp. Distrib. Object Comp. Conf., pp. 170–181 (2000) 54. Chan, W., Anderson, R.J., Beame, P., Notkin, D., Jones, D.H., Warner, W.E.: Optimizing symbolic model checking for state charts. IEEE Trans. Softw. Eng. 27(2), 170–190 (2001) 55. Garlan, D., Khersonsky, S., Kim, J.S.: Model checking publish-subscribe systems. In: Proc. 10th Int. SPIN Workshop Model Checking Softw. (2003) 56. Hatcliff, J., Deng, W., Dwyer, M., Jung, G., Ranganath, V.P.: Cadena: an integrated development, analysis, and verification environment for component-based systems. In: Proc. 2003 Int. Conf. Softw. Eng. ICSE, Portland (2003) 57. Ball, T., Rajamani, S.: Bebop: a symbolic model checker for Boolean programs. Proc. 7th Int. SPIN Workshop, Lect. Notes Comput. Sci. 1885, 113–130 (2000) 58. Brat, G., Havelund, K., Park, S., Visser, W.: Java PathFinder – a second generation of a Java model-checker. In: Proc. Workshop Adv. Verif. (2000) 59. Corbett, J.C., Dwyer, M.B., Hatcliff, J., Laubach, S., Pasareanu, C.S., Robby, H.Z.: Bandera: extracting finite-state models from Java source code. In: Proceedings of the 22nd International Conference on Software Engineering (2000) 60. Godefroid, P.: Model-checking for programming languages using VeriSoft. In: Proceedings of the 24th Symposium on Principles of Programming Languages (POPL'97), pp. 174–186 (1997) 61. Robby, Dwyer, M.B., Hatcliff, J.: Bogor: an extensible and highly-modular model checking framework. In: Proceedings of the 9th European Software Engineering Conference held jointly with the 11th ACM SIGSOFT Symposium on Foundations of Software Engineering (2003) 62. Mitra, S., McCluskey, E.J.: Diversity techniques for concurrent error detection. In: Proceedings of 2nd International Symposium on Quality Electronic Design, IEEE Computer Society, pp. 249–250 (2001) 63. Chung, S.-L., Wu, C.-C., Jeng, M.: Failure Diagnosis: A Case Study on Modeling and Analysis by Petri Nets. IEEE, Washington, DC (2003) 64. Georgilakis, P.S., Katsigiannis, J.A., Valavanis, K.P., Souflaris, A.T.: A systematic stochastic Petri net based methodology for transformer fault diagnosis and repair actions. J. Intell. Robot. Syst. Theory Appl. 45(2), 181–201 (2006) 65. Ushio, T., Onishi, I., Okuda, K.: Fault Detection Based on Petri Net Models with Faulty Behaviors. IEEE, San Diego (1998) 66. Rezai, M., Ito, M.R., Lawrence, P.D.: Modeling and Simulation of Hybrid Control Systems by Global Petri Nets. IEEE, Seattle (1995) 67. Rezai, M., Lawrence, P.D., Ito, M.R.: Analysis of Faults in Hybrid Systems by Global Petri Nets. IEEE, Vancouver (1995) 68. Rezai, M., Lawrence, P.D., Ito, M.B.: Hybrid Modeling and Simulation of Manufacturing Systems. IEEE, Los Angeles (1997) 69. Sampath, M., Sengupta, R., Lafortune, S., Sinnamohideen, K., Teneketzis, D.: Diagnosability of discrete-event systems. IEEE Trans. Autom. Control. 40(9), 1555–1575 (1995) 70. Zad, S.H., Kwong, R.H., Wonham, W.M.: Fault diagnosis in discrete-event systems: framework and model reduction. IEEE Trans. Autom. Control. 48(7), 1199–1212 (2003) 71. Zhou, M., DiCesare, F.: Petri Net Synthesis for Discrete Event Control of Manufacturing Systems. Kluwer, Boston (1993) 72.
Wenbin, Q., Kumar, R.: Decentralized failure diagnosis of discrete event systems. IEEE Trans. Syst. Man. Cybern. A. 36(2), 384–395 (2006)
73. Brall, A.: Human reliability issues in medical care – a customer viewpoint. In: Proceedings of Annual Reliability and Maintainability Symposium, pp. 46–50 (2006) 74. Furukawa, H.: Challenge for preventing medication errors-learn from errors-: what is the most effective label display to prevent medication error for injectable drug? In: Proceedings of the 12th International Conference on Human Computer Interaction: HCI Intelligent Multimodal Interaction Environments, Lecture Notes on Computer Science, 4553, pp. 437–442 (2007) 75. Huang, G., Medlam, G., Lee, J., Billingsley, S., Bissonnette, J.P., Ringash, J., Kane, G., Hodgson, D.C.: Error in the delivery of radiation therapy: results of a quality assurance review. Int. J. Radiat. Oncol. Biol. Phys. 61(5), 1590–1595 (2005) 76. Nyssen, A.-S., Blavier, A.: A study in anesthesia. Ergonomics. 49(5/6), 517–525 (2006) 77. Unruh, K.T., Pratt, W.: Patients as actors: the patient’s role in detecting, preventing, and recovering from medical errors. Int. J. Med. Inform. 76(1), 236–244 (2007) 78. Chao, C.C., Jen, W.Y., Hung, M.C., Li, Y.C., Chi, Y.P.: An innovative mobile approach for patient safety services: the case of a Taiwan health care provider. Technovation. 27(6–7), 342–361 (2007) 79. Malhotra, S., Jordan, D., Shortliffe, E., Patel, V.L.: Workflow modeling in critical care: piecing together your own puzzle. J. Biomed. Inform. 40(2), 81–92 (2007) 80. Morris, T.J., Pajak, J., Havlik, F., Kenyon, J., Calcagni, D.: Battlefield medical information system-tactical (BMIST): the application of mobile computing technologies to support health surveillance in the Department of Defense, Telemed. J. e-Health. 12(4), 409–416 (2006) 81. Rajendran, M., Dhillon, B.S.: Human error in health care systems: bibliography. Int. J. Reliab. Qual. Saf. Eng. 10(1), 99–117 (2003) 82. Sheikhzadeh, E., Eissa, S., Ismail, A., Zourob, M.: Diagnostic techniques for COVID-19 and new developments. Talanta. 220, 121392 (2020) 83. Lieberman, J.A., Pepper, G., Naccache, S.N., Huang, M.-L., Jerome, K.R., Greningera, A.L.: Comparison of commercially available and laboratory-developed assays for in vitro detection of SARS-CoV-2 in clinical laboratories. J. Clin. Microbiol. 58(8), e00821–e00820 (2020) 84. Azzi, L., Carcano, G., Gianfagna, F., Grossi, P., Gasperina, D.D., Genoni, A., Fasano, M., Sessa, F., Tettamanti, L., Carinci, F., Maurino, V., Rossi, A., Tagliabue, A., Baj, A.: Saliva is a reliable tool to detect SARS-CoV-2. J. Infect. 81, e45–e50 (2020) 85. Amanat, F., Stadlbauer, D., Strohmeier, S., Nguyen, T.H.O., Chromikova, V., McMahon, M., Jiang, K., Arunkumar, G.A., Jurczyszak, D., Polanco, J., Bermudez-Gonzalez, M., Kleiner, G., Aydillo, T., Miorin, L., Fierer, D.S., Lugo, L.A., Kojic, E.M., Stoever, J., Liu, S.T.H., Cunningham-Rundles, C., Felgner, P.L., Moran, T., García-Sastre, A., Caplivski, D., Cheng, A.C., Kedzierska, K., Vapalahti, O., Hepojoki, J.M., Simon, V., Krammer, F.: A serological assay to detect SARS-CoV-2 seroconversion in humans. Nat. Med. 26, 1033–1036 (2020) 86. Jendrny, P., Schulz, C., Twele, F., Meller, S., Köckritz-Blickwede, M., Osterhaus, A.D.M.E., Ebbers, J., Pilchová, V., Pink, I., Welte, T., Manns, M.P., Fathi, A., Ernst, C., Addo, M.M., Schalke, E., Volk, H.A.: Scent dog identification of samples from COVID19 patients – a pilot study. BMC Infect. Dis. 20(536), 1–7 (2020) 87. Khan, A.I., Shah, J.L., Bhat, M.M.: CoroNet: a deep neural network for detection and diagnosis of COVID-19 from chest xray images. Comput. Methods Prog. Biomed. 
196, 105581 (2020) 88. Brunese, L., Mercaldo, F., Reginelli, A., Santone, A.: Explainable deep learning for pulmonary disease and coronavirus COVID19 detection from X-rays. Comput. Methods Prog. Biomed. 196, 105608 (2020)
89. Orive, G., Lertxundi, U., Barcelo, D.: Early SARS-CoV-2 outbreak detection by sewage-based epidemiology. Sci. Total Environ. 732, 139298 (2020) 90. Lesimple, A., Jasim, S.Y., Johnson, D.J., Hilal, N.: The role of wastewater treatment plants as tools for SARS-CoV-2 early detection and removal. J. Water Proc. Eng. 38, 101544 (2020) 91. Nof, S.Y.: Design of effective e-Work: review of models, tools, and emerging challenges. Product. Plan. Control. 14(8), 681–703 (2003) 92. Chen, X.: Error Detection and Prediction Agents and Their Algorithms. M.S. Thesis, School of Industrial Engineering, Purdue University, West Lafayette (2005) 93. Chen, X.W., Nof, S.Y.: Error detection and prediction algorithms: application in robotics. J. Intell. Robot. Syst. 48(2), 225–252 (2007) 94. Chen, X.W., Nof, S.Y.: Agent-based error prevention algorithms. Expert Syst. Appl. 39, 280–287 (2012) 95. Duffy, K.: Safety for profit: building an error-prevention culture. Ind. Eng. Mag. 9, 41–45 (2008) 96. Barber, K.S., Liu, T.H., Ramaswamy, S.: Conflict detection during plan integration for multi-agent systems. IEEE Trans. Syst. Man Cybern. B. 31(4), 616–628 (2001) 97. O’Hare, G.M.P., Jennings, N.: Foundations of Distributed Artificial Intelligence. Wiley, New York (1996) 98. Zhou, M., DiCesare, F., Desrochers, A.A.: A hybrid methodology for synthesis of Petri net models for manufacturing systems. IEEE Trans. Robot. Autom. 8(3), 350–361 (1992) 99. Shiau, J.-Y.: A Formalism for Conflict Detection and Resolution in a Multi-Agent System. Ph.D. Thesis, Arizona State University, Arizona (2002) 100. Ceroni, J.A., Velásquez, A.A.: Conflict detection and resolution in distributed design. Prod. Plan. Control. 14(8), 734–742 (2003) 101. Jiang, T., Nevill Jr., G.E.: Conflict cause identification in webbased concurrent engineering design system. Concurr. Eng. Res. Appl. 10(1), 15–26 (2002) 102. Lara, M.A., Nof, S.Y.: Computer-supported conflict resolution for collaborative facility designers. Int. J. Prod. Res. 41(2), 207–233 (2003) 103. Anussornnitisarn, P., Nof, S.Y.: The Design of Active Middleware for e-Work Interactions, PRISM Res. Memorandum. School of Industrial Engineering, Purdue University, West Lafayette (2001) 104. Anussornnitisarn, P., Nof, S.Y.: e-Work: the challenge of the next generation ERP systems. Prod. Plan. Control. 14(8), 753–765 (2003) 105. Chen, X.W.: Prognostics and Diagnostics of Conflicts and Errors with Prediction and Detection Logic. Ph.D. Dissertation, School of Industrial Engineering, Purdue University, West Lafayette (2009) 106. Yang, C.L., Nof, S.Y.: Analysis, Detection Policy, and Performance Measures of Detection Task Planning Errors and Conflicts, PRISM Res. Memorandum, 2004-P2. School of Industrial Engineering, Purdue University, West Lafayette (2004) 107. Avila-Soria, J.: Interactive Error Recovery for Robotic Assembly Using a Neural-Fuzzy Approach. Master Thesis, School of Industrial Engineering, Purdue University, West Lafayette (1999) 108. Velásquez, J.D., Lara, M.A., Nof, S.Y.: Systematic resolution of conflict situation in collaborative facility design. Int. J. Prod. Econ. 116(1), 139–153 (2008) 109. Nof, S.Y., Maimon, O.Z., Wilhelm, R.G.: Experiments for planning error-recovery programs in robotic work. Proc. Int. Comput. Eng. Conf. Exhib. 2, 253–264 (1987) 110. Imai, M., Hiraki, K., Anzai, Y.: Human-robot interface with attention. Syst. Comput. Jpn. 26(12), 83–95 (1995) 111. 
Lueth, T.C., Nassal, U.M., Rembold, U.: Reliability and integrated capabilities of locomotion and manipulation for autonomous robot assembly. Robot. Auton. Syst. 14, 185–198 (1995)
112. Wu, H.-J., Joshi, S.B.: Error recovery in MPSG-based controllers for shop floor control. Proc. IEEE Int. Conf. Robot. Autom. ICRA. 2, 1374–1379 (1994) 113. Jang, J.-S.R., Gulley, N.: Fuzzy Systems Toolbox for Use with MATLAB. The Math Works (1997) 114. Yee, K.W., Gavin, R.J.: Implementing Fast Probing and Error Compensation on Machine Tools, NISTIR 4447. The National Institute of Standards and Technology, Gaithersburg (1990) 115. Donmez, M.A., Lee, K., Liu, R., Barash, M.: A real-time error compensation system for a computerized numerical control turning center. In: Proceedings of IEEE International Conference on Robotics and Automation (1986) 116. Zha, X.F., Du, H.: Knowledge-intensive collaborative design modeling and support part I: review, distributed models and framework. Comput. Ind. 57, 39–55 (2006) 117. Zha, X.F., Du, H.: Knowledge-intensive collaborative design modeling and support part II: system implementation and application. Comput. Ind. 57, 56–71 (2006) 118. Klein, M., Lu, S.C.-Y.: Conflict resolution in cooperative design. Artif. Intell. Eng. 4(4), 168–180 (1989) 119. Klein, M.: Supporting conflict resolution in cooperative design systems. IEEE Trans. Syst. Man Cybern. 21(6), 1379–1390 (1991) 120. Klein, M.: Capturing design rationale in concurrent engineering teams. IEEE Comput. 26(1), 39–47 (1993) 121. Klein, M.: Conflict management as part of an integrated exception handling approach. Artif. Intell. Eng. Des. Anal. Manuf. 9, 259–267 (1995) 122. Li, X., Zhou, X.H., Ruan, X.Y.: Study on conflict management for collaborative design system. J. Shanghai Jiaotong Univ. (English ed.). 5(2), 88–93 (2000) 123. Li, X., Zhou, X.H., Ruan, X.Y.: Conflict management in closely coupled collaborative design system. Int. J. Comput. Integr. Manuf. 15(4), 345–352 (2000) 124. Huang, C.Y., Ceroni, J.A., Nof, S.Y.: Agility of networked enterprises: parallelism, error recovery and conflict resolution. Comput. Ind. 42, 73–78 (2000) 125. Nof, S.Y.: Tools and models of e-work. In: Proceedings of 5th International Conference on Simulation AI, Mexico City, pp. 249–258 (2000) 126. Nof, S.Y.: Collaborative e-work and e-manufacturing: challenges for production and logistics managers. J. Intell. Manuf. 17(6), 689–701 (2006) 127. Sycara, K.: Negotiation planning: an AI approach. Eur. J. Oper. Res. 46(2), 216–234 (1990) 128. Fang, L., Hipel, K.W., Kilgour, D.M.: Interactive Decision Making. Wiley, New York (1993) 129. Jang, J.-S.R.: ANFIS: adaptive-network-based fuzzy inference systems. IEEE Trans. Syst. Man Cybern. 23, 665–685 (1993) 130. Kusiak, A., Wang, J.: Dependency analysis in constraint negotiation. IEEE Trans. Syst. Man Cybern. 25(9), 1301–1313 (1995) 131. Jiang, Z., Ouyang, Y.: Reliable location of first responder stations for cooperative response to disasters. Transp. Res. B Methodol. 149, 20–32 (2021) 132. Zhong, H., Nof, S.Y.: Dynamic Lines of Collaboration – Disruption Handling & Control. Automation, Collaboration, and E-Services (ACES) Book Series. Springer (2020) 133. Nguyen, W.P.V., Nof, S.Y.: Strategic lines of collaboration in response to disruption propagation (CRDP) through cyber-physical systems. Int. J. Prod. Econ. 230 (2020) 134. Li, D., Yu, Q., Ding, Y., Wang, N., Hu, F., Jia, R., Peng, L., Rao, B., Hu, Q., Jin, H., Li, M., Zhu, L.: Disruption prevention using rotating resonant magnetic perturbation on J-TEXT. Nucl. Fusion. 60(5), 056022 (2020) 135.
Pau, A., Fanni, A., Carcangiu, S., Cannas, B., Sias, G., Murari, A., Rimini, F.: A machine learning approach based on generative topographic mapping for disruption prevention and avoidance at JET. Nucl. Fusion. 59(1022), 106017 (2019)
136. Strait, E.J., Barr, J.L., Baruzzo, M., Berkery, J.W., Buttery, R.J., De Vries, P.C., Eidietis, N.W., Granetz, R.S., Hanson, J.M., Holcomb, C.T., Humphreys, D.A., Kim, J.H.: Progress in disruption prevention for ITER. Nucl. Fusion. 59(115), 112012 (2019)
137. Wang, W., Xue, K., Sun, X.: Cost sharing in the prevention of supply chain disruption. Math. Probl. Eng. 2017, 7843465 (2017)
138. Burggraef, P., Wagner, J., Dannapfel, M., Vierschilling, S.P.: Simulating the benefit of disruption prevention in assembly. J. Model. Manag. 14(1), 214–231 (2019)
139. Burggraf, P., Wagner, J., Luck, K., Adlon, T.: Cost-benefit analysis for disruption prevention in low-volume assembly. Prod. Eng. 11(3), 331–3421 (2017)
140. Taylor, R.S.: Ice-related disruptions to ferry services in Eastern Canada: prevention and consequence mitigation strategies. Transp. Res. Procedia. 25, 279–290 (2017)
141. Tkach, I., Edan, Y., Nof, S.Y.: Multi-sensor task allocation framework for supply networks security using task administration protocols. Int. J. Prod. Res. 55(18), 5202–5224 (2017)
142. Nguyen, W.P.V., Nof, S.Y.: Resilience informatics for cyber-augmented manufacturing networks (CMN): centrality, flow, and disruption. Stud. Inf. Control. 27(4), 377–384 (2018)
143. Reyes Levalle, R.: Resilience by Teaming in Supply Chains and Networks. Automation, Collaboration, and E-Services (ACES) Book Series. Springer (2018)
144. Ajidarma, P., Nof, S.Y.: Collaborative detection and prevention of errors and conflicts in an agricultural robotic system. Stud. Inf. Control. 30(1), 19–28 (2021)
145. Solomonoff, R., Rapoport, A.: Connectivity of random nets. Bull. Math. Biophys. 13, 107–117 (1951)
146. Erdos, P., Renyi, A.: On random graphs. Publ. Math. Debr. 6, 290–291 (1959)
147. Erdos, P., Renyi, A.: On the evolution of random graphs. Magy. Tud. Akad. Mat. Kutato Int. Kozl. 5, 17–61 (1960)
148. Erdos, P., Renyi, A.: On the strength of connectedness of a random graph. Acta Math. Acad. Sci. Hung. 12, 261–267 (1961)
149. Watts, D.J., Strogatz, S.H.: Collective dynamics of 'small-world' networks. Nature. 393(6684), 440–442 (1998)
150. Albert, R., Jeong, H., Barabasi, A.L.: Internet: diameter of the world-wide web. Nature. 401(6749), 130–131 (1999)
151. Barabasi, A.L., Albert, R.: Emergence of scaling in random networks. Science. 286(5439), 509–512 (1999)
152. Broder, A., Kumar, R., Maghoul, F., Raghavan, P., Rajagopalan, S., Stata, R., Tomkins, A., Wiener, J.: Graph structure in the Web. Comput. Netw. 33(1), 309–320 (2000)
153. de Solla Price, D.J.: Networks of scientific papers. Science. 149, 510–515 (1965)
154. Bianconi, G., Barabasi, A.L.: Bose-Einstein condensation in complex networks. Phys. Rev. Lett. 86(24), 5632–5635 (2001)
155. Nof, S.Y.: Collaborative control theory for e-work, e-production, and e-service. Annu. Rev. Control. 31(2), 281–292 (2007)
156. Chen, X.W.: Knowledge-based analytics for massively distributed networks with noisy data. Int. J. Prod. Res. 56(8), 2841–2854 (2018)
157. Chen, X.W., Nof, S.Y.: Constraint-based conflict and error management. Eng. Optim. 44(7), 821–841 (2012)
158. Susto, G.A., Schirru, A., Pampuri, S., McLoone, S., Beghi, A.: Machine learning for predictive maintenance: a multiple classifier approach. IEEE Trans. Ind. Inf. 11(3), 812–820 (2014)
159. Nikolopoulos, K., Punia, S., Schäfers, A., Tsinopoulos, C., Vasilakis, C.: Forecasting and planning during a pandemic: COVID-19 growth rates, supply chain disruptions, and governmental decisions. Eur. J. Oper. Res. 290(1), 99–115 (2021)
Xin W. Chen is Professor in the School of Engineering at Southern Illinois University Edwardsville. He received the MS and PhD degrees in Industrial Engineering from Purdue University and BS degree in Mechanical Engineering from Shanghai Jiao Tong University. His research interests cover several related topics in the area of conflict and error prognostics and prevention, production/service optimization, and decision analysis.
Shimon Y. Nof is Professor of IE, Purdue University, and Director of PRISM Center; former President of IFPR, the International Foundation for Production Research; former Chair of the IFAC (International Federation of Automatic Control) Coordinating Committee of Manufacturing and Logistics Systems; and Fellow of IISE and of IFPR. Among his awards is the Joseph Engelberger Medal for Robotics Education. Author/editor of 16 books, his interests include cyber-automated collaboration of robotics and cyber-physical work; systems security, integrity, and assurance; and integrated production and service operations with decentralized decision support networks. He has five patents in automation, four of them jointly with Xin W. Chen.
Part IV Automation Design: Theory and Methods for Integration
23 Communication Protocols for Automation

Carlos E. Pereira, Christian Diedrich, and Peter Neumann
Contents

23.1 Introduction
23.1.1 History
23.1.2 Requirements and Classification
23.1.3 Chapter Overview
23.2 Wired Industrial Communications
23.2.1 Classification According to Automation Hierarchy
23.2.2 Sensor/Actuator Networks
23.2.3 Fieldbus Systems
23.2.4 Industrial Ethernet-Based Networks
23.2.5 Time-Sensitive Networking (TSN)
23.2.6 Advanced Physical Layer (APL)
23.2.7 Internet of Things (IoT) Communication
23.3 Wireless Industrial Communications
23.3.1 Classification
23.3.2 Wireless Local Area Networks (WLAN)
23.3.3 Wireless Sensor/Actuator Networks
23.3.4 Low-Power Wide Area Network (LPWAN)
23.3.5 5G
23.4 Virtual Automation Networks
23.4.1 Motivation
23.4.2 Domains
23.4.3 Architectures for VAN Solutions
23.5 Wide Area Communications
23.5.1 Contextualization
23.5.2 Best Effort Communication in Automation
23.5.3 Real-Time Communication in Automation
23.6 General Overview About Industrial Protocol Features
23.7 Conclusions
23.8 Emerging Trends
References
Abstract
Industrial communication networks and their protocols have become even more important in automation over the last decade with the advent of concepts such as Industry 4.0, the Industrial Internet-of-Things (IIoT), and others. The introduction of fieldbus systems has been associated with a paradigm change that enables the deployment of distributed industrial automation systems, supporting device autonomy and decentralized decision-making and control loops. The chapter presents the main wired and wireless industrial protocols used in industrial automation, manufacturing, and process control applications. In order to help readers better understand the differences between industrial communication protocols and protocols used in general computer networking, the chapter also discusses the specific requirements of industrial applications. The concept of virtual automation networks, which integrate local and wide area as well as wired and wireless communication systems, is also discussed.

Keywords
Wireless sensor network · Medium access control · Wireless local area network · Medium access control layer · Controller area network
C. E. Pereira
Automation Engineering Department, Federal University of Rio Grande do Sul (UFRGS), Porto Alegre, Brazil
e-mail: [email protected]

C. Diedrich
Institute for Automation Engineering, Otto von Guericke University Magdeburg, Magdeburg, Germany
e-mail: [email protected]

P. Neumann
Institut für Automation und Kommunikation – ifak, Magdeburg, Germany
e-mail: [email protected]

23.1 Introduction

23.1.1 History

Digital communication is now well established in distributed computer control systems, both in discrete manufacturing and in the process control industries. Proprietary communication systems within SCADA (supervisory control and data acquisition) systems have been supplemented and partially displaced by fieldbus and sensor bus systems. The introduction of fieldbus systems has been associated with a change of paradigm to deploy distributed industrial automation systems, emphasizing device autonomy and decentralized decision-making and control loops. Nowadays, (wired) fieldbus systems are standardized and are the most important communication systems used in commercial control installations. At the same time, Ethernet won the battle as the most commonly used communication technology within the office domain, resulting in low component prices caused by mass production. This has led to an increasing interest in adapting Ethernet for industrial applications, and several approaches have been proposed (Sect. 23.2.4). Ethernet-based solutions are now dominating as a merging technology. In parallel to advances on Ethernet-based industrial protocols, the use of wireless technologies in the industrial domain has also been increasingly researched. Following the trend to merge automation and office networks, heterogeneous networks (virtual automation networks [VAN]), consisting of local and wide area networks as well as wired and wireless communication systems, are becoming important [1]. Advances in the areas of embedded hardware/software and sensor/actuator systems, industrial communication protocols, as well as cloud and edge computing have enabled a Fourth Industrial Revolution, the so-called Industry 4.0 [2]. Some of the main characteristics enabled by Industry 4.0 are [3]: mass customization, flexible production, and tracking and self-awareness of parts and products, among others. Cyber-physical systems (CPS) have been proposed as a key concept of Industry 4.0 architectures and pave the way to smart manufacturing. A CPS can be defined as a set of physical devices, objects, and equipment that interact via a virtual cyberspace through communication networks. Each physical device will have its cyber part as a digital representation of the real device, culminating in the "Digital Twin" (DT) concept [3–8].
23.1.2 Requirements and Classification

Industrial communication protocols play a key role in the digitalization process of industries and must meet several important requirements:

• Reliability: This describes the ability of a system or component to perform its intended function under stated conditions, without failure, for a given period of time.
• Functional safety: Protection against hazards caused by incorrect functioning, including communication via heterogeneous networks. There are several safety integrity levels (SIL) [2]. It includes the influence of noisy environments and the degree of reliability.
• Security: This means a common security concept for distributed automation using a heterogeneous network with different security integrity levels.
• Real-time behavior: Within the automation domain, real-time requirements are of utmost importance and are focused on the response-time behavior of data packets. Three real-time classes can be identified based on the required temporal behavior (a simple classification sketch follows this list):
  – Class 1: best effort. Scalable cycle time, used in factory floor and process automation in cases where no severe problems occur when deadlines are not met, e.g., diagnosis, maintenance, commissioning, and slow mobile applications.
  – Class 2: real time. Typical cycle times from 1 to 10 ms, used for time-critical closed-loop control, e.g., fast mobile applications and machine tools.
  – Class 3: isochronous real time. Cycle times from 250 μs to 1 ms, with tight restrictions on jitter (usually less than 1 μs), used for motion control applications.
  – Additionally, there is a non-real-time class, i.e., systems without real-time requirements; these are not considered here. In industrial automation this means, e.g., the exchange of engineering and maintenance data.

There are additional classification features:

• Distribution: The most important achievement of industrial communication systems are local area communication systems, consisting of sensor/actuator networks (Ch. 14 and Sect. 23.2.2), fieldbus systems, and Ethernet-based local area networks (LAN). Of increasing importance is the use of the Internet (i.e., IP-based communication using wide area networks (WAN) and telecommunication networks). Thus, it is advantageous to consider the Internet as part of an industrial communication system (Sect. 23.2).
• Homogeneity: There are homogeneous parts (e.g., standardized fieldbus systems) within an industrial communication system. In real applications, however, the use of heterogeneous networks is more common, especially when using the Internet and when connected with services of network providers.
• Installation types: While most of the installed enterprise networks are currently wired, the number of wireless installations is increasing and this trend will continue.
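The three real-time classes can serve as a simple screening rule when matching an application's traffic requirements to a network technology, as sketched below. The thresholds encode the cycle-time and jitter bounds quoted above; the function itself is only illustrative.

```python
# Illustrative mapping of required cycle time and jitter to the real-time
# classes defined above (Class 1: best effort, Class 2: real time,
# Class 3: isochronous real time).

def real_time_class(cycle_time_s: float, jitter_s: float) -> str:
    if cycle_time_s <= 1e-3 and jitter_s <= 1e-6:
        return "Class 3 (isochronous real time, e.g., motion control)"
    if cycle_time_s <= 10e-3:
        return "Class 2 (real time, e.g., closed-loop control)"
    return "Class 1 (best effort, e.g., diagnosis and maintenance)"


if __name__ == "__main__":
    print(real_time_class(500e-6, 0.5e-6))   # motion-control-like requirement
    print(real_time_class(5e-3, 100e-6))     # closed-loop control requirement
    print(real_time_class(100e-3, 10e-3))    # monitoring / diagnosis traffic
```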
23.1.3 Chapter Overview

The scope of this chapter is restricted to communication protocols adopted in industrial automation applications. Other application areas that also demand communication protocols with strict timing requirements, such as vehicular ad hoc networks (VANets) used in autonomous driving applications, including vehicle-to-vehicle (v2v), vehicle-to-infrastructure (v2i), and vehicle-to-everything (v2x) communication, are not discussed in this chapter. Readers interested in knowing more about VANets are referred to survey papers such as [9, 10]. The remainder of the chapter is structured as follows. In Sect. 23.2, the main wired industrial communication protocols are presented and compared, while Sect. 23.3 discusses wireless industrial communication systems. Section 23.4 discusses the concept of virtual automation networks (VANs), a key concept in future distributed automation systems, which will be composed of (partially homogeneous) local and wide area as well as wired and wireless communication systems, leading to complex heterogeneous communication networks. Section 23.5 deals with the use of wide area communications to execute remote automation operations.
23.2 Wired Industrial Communications
23.2.1 Classification According to Automation Hierarchy

Wired digital communication has been an important driving force of computer control systems for the last 30 years. To allow access to data in the various layers of an enterprise information system by different users, there is a need to merge different digital communication systems within the plant, control, and device levels of an enterprise network. On these different levels, there are distinct requirements dictated by the nature and type of information being exchanged. Network physical size, number of supported devices, network bandwidth, response time, sampling frequency, and payload size are some of the performance characteristics used to classify and group specific network technologies. Real-time requirements depend on the type of messages to be exchanged: deadlines for end-to-end data transmission, maximum allowed jitter for audio and video stream transmission, etc. Additionally, available resources at the various network levels may vary significantly. At the device level, there are extremely limited resources (hardware and communications), but at the plant level powerful computers allow comfortable software and memory consumption. Due to the different requirements described above, there are different types of industrial communication systems as part of a hierarchical automation system within an enterprise:
• Sensor/actuator networks: at the field (sensor/actuator) level
• Fieldbus systems: at the field level, collecting/distributing process data from/to sensors/actuators, communication medium between field devices and controllers/PLCs/management consoles
• Controller networks: at the controller level, transmitting data between powerful field devices and controllers as well as between controllers
• Wide area networks: at the enterprise level, connecting networked segments of an enterprise automation system

Vendors of industrial communication systems offer a set of fitting solutions for these levels of the automation/communication hierarchy. Nowadays, switched Ethernet-based systems combined with the Internet Protocol are ready to dominate wide ranges of application domains.
23.2.2 Sensor/Actuator Networks

At this level, several well-established and widely adopted protocols are available:

• HART (HART Communication Foundation): highway addressable remote transducer, coupling analog process devices with engineering tools [11]
• ASi (ASi Club): actuator sensor interface, coupling binary sensors in factory automation with control devices [12]

Additionally, CAN-based solutions (CAN in Automation [CiA]) are used for widespread application fields (e.g., within cars, machines, or locomotives), coupling decentralized devices with centralized devices based on the physical and MAC layers of the controller area network [13]. Recently, IO-Link has been specified for bidirectional digital transmission of parameters between simple sensor/actuator devices in factory automation [14, 15].
HART
HART Communication [11] is a protocol specification, which performs a bidirectional digital transmission of parameters (used for configuration and parameterization of intelligent field instruments by a host system) over analog transmission lines. The host system may be a distributed control system (DCS), a programmable logic controller (PLC), an asset management system, a safety system, or a handheld device. HART technology is easy to use and very reliable. The HART protocol uses the Bell 202 Frequency Shift Keying (FSK) standard to superimpose digital communication signals at a low level on top of the 4–20 mA analog signal.
Fig. 23.1 A HART system with two masters [11]

The HART protocol communicates at 1200 bps without interrupting the 4–20 mA signal and
allows a host application (master) to get two or more digital updates per second from a field device. As the digital FSK signal is phase continuous, there is no interference with the 4–20 mA signal. The HART protocol permits all digital communication with field devices in either point-to-point or multidrop network configurations. HART provides for up to two masters (primary and secondary). As depicted in Fig. 23.1, this allows secondary masters (such as handheld communicators) to be used without interfering with communications to/from the primary master (i.e., control/monitoring system).
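The superposition principle can be illustrated numerically. The sketch below assumes the Bell 202 tone pair (1200 Hz for a logical 1, 2200 Hz for a logical 0) and an illustrative ±0.5 mA signal amplitude; it generates a phase-continuous FSK waveform riding on a 4–20 mA loop current and is not an implementation of the HART protocol itself.

```python
# Sketch of the HART idea: a low-amplitude Bell 202 FSK signal rides on the
# 4-20 mA loop current. The +/-0.5 mA amplitude is an assumption used here
# only for illustration.
import numpy as np

BIT_RATE = 1200                      # bps, as stated for HART
F_MARK, F_SPACE = 1200.0, 2200.0     # Bell 202 tone frequencies in Hz
FS = 48_000                          # simulation sample rate

def fsk_current(bits, loop_current_ma=12.0, amplitude_ma=0.5):
    samples_per_bit = FS // BIT_RATE
    t = np.arange(samples_per_bit) / FS
    phase, out = 0.0, []
    for b in bits:
        f = F_MARK if b else F_SPACE
        out.append(np.sin(2 * np.pi * f * t + phase))
        phase += 2 * np.pi * f * samples_per_bit / FS   # keep the signal phase continuous
    return loop_current_ma + amplitude_ma * np.concatenate(out)

signal = fsk_current([1, 0, 1, 1, 0])
print(signal.min(), signal.max())    # stays close to the analog loop value
```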
ASi (IEC 62026-2)
ASi [12] is a network of actuators and sensors (optical, inductive, and capacitive) with binary input/output signals. An unshielded twisted-pair cable for data and power (max. 2 A; max. 100 m) enables the connection of up to 31 slaves (max. 124 binary signals of sensors and/or actuators). This enables a modular design using any network topology (i.e., bus, star, and tree). Each slave can receive any available address and be connected to the cable at any location. AS-Interface uses the APM method (alternating pulse modulation) for data transfer. The medium access is controlled by a master–slave principle with cyclic polling of all nodes. ASi masters are embedded (ASi) communication controllers of PLCs or PCs, as well as gateways to other fieldbus systems. To connect legacy sensors and actuators to the transmission line, various coupling modules are used. AS-Interface messages can be classified as follows:

• Single transactions: a maximum of 4 bits of information transmitted from master to slave (output information) and from slave to master (input information).
• Combined transactions: more than 4 bits of coherent information are transmitted, composed of a series of master calls and slave replies in a defined context.

For more details, see [12].
IO-Link
IO-Link [14] is a point-to-point connection between an IO-Link master and sensors and actuators (Fig. 23.2). The standard 24 V wired connection carrying analog measurement and actuation signals is extended by a digital protocol on top of it. Three transmission rates are specified for IO-Link mode in IO-Link specification V1.1: 4.8 kbaud, 38.4 kbaud, and 230.4 kbaud. A slave supports only one of these baud rates, while the IO-Link master supports all three. The IO-Link master can be a separate device acting as remote I/O or can be integrated in a PLC input/output module (Fig. 23.2). IO-Link is therefore not a fieldbus system. The protocol allows process data and the value status of the connection port to be transmitted cyclically, and device data and events acyclically. An IO-Link system is standardized in IEC 61131-9 and consists of the following basic components:

• IO-Link master
• IO-Link devices (e.g., sensors, RFID readers, valves, motor starters, and I/O modules)
• Unshielded three- or five-conductor standard cables (pin assignment according to IEC 60947-5-2)
• Engineering tool for configuring and assigning parameters of IO-Link devices
23.2.3 Fieldbus Systems

Nowadays, fieldbus systems are standardized (though unfortunately not unified) and widely used in industrial automation. The IEC 61158 and IEC 61784 standards [16, 17] contain ten different fieldbus concepts. Seven of these concepts have their own complete protocol suite: PROFIBUS (Siemens, PROFIBUS International); Interbus (Phoenix Contact, Interbus Club); Foundation Fieldbus H1 (Emerson, Fieldbus Foundation); SwiftNet (B. Crowder); P-Net (Process Data); and WorldFIP (Schneider, WorldFIP). Three of them are based on Ethernet functionality: high-speed Ethernet (HSE) (Emerson, Fieldbus Foundation); Ethernet/IP (Rockwell, ODVA); and PROFINET (Siemens, PROFIBUS International). Regarding the number of installed fieldbus nodes, the worldwide leading position within the automation domain is held by PROFIBUS, followed by DeviceNet (Rockwell, ODVA), which has not been part of the IEC 61158 standard. For this reason, the basic concepts of PROFIBUS and DeviceNet are explained very briefly below.
Fig. 23.2 An IO-Link system with possible IO-Link master’s allocation [15]
Readers interested in a more comprehensive description are referred to the related websites.
PROFIBUS
PROFIBUS is a universal fieldbus for plantwide use across all sectors of the manufacturing and process industries based on the IEC 61158 and IEC 61784 standards. Different transmission technologies are supported [18]:

• RS 485: Type of medium attachment unit (MAU) corresponding to [19]. Suited mainly for factory automation. For technical details, see [18, 19]. Number of stations: 32 (master stations, slave stations, or repeaters); data rates: 9.6/19.2/45.45/93.75/187.5/500/1500/3000/6000/12000 kb/s.
• Manchester bus powered (MBP): Type of MAU suited for process automation: line, tree, and star topology with two-wire transmission; 31.25 kBd (preferred); high-speed variants without bus powering and intrinsic safety; synchronous transmission (Manchester encoding); optional: bus-powered devices (≥10 mA per device; low-power option); optional: intrinsic safety (Ex-i) via additional constraints according to the FISCO model. Intrinsic safety means a type of protection in which a portion of the electrical system contains only intrinsically safe equipment (apparatus, circuits, and wiring) that
is incapable of causing ignition in the surrounding atmosphere. No single device or wiring is intrinsically safe by itself (except for battery-operated self-contained apparatus such as portable pagers, transceivers, gas detectors, etc., which are specifically designed as intrinsically safe self-contained devices) but is intrinsically safe only when employed in a properly designed intrinsically safe system. There are couplers/link devices to couple the MBP and RS 485 transmission technologies.
• Fibre optics (not explained here, see [18]).

There are two medium access control (MAC) mechanisms (Fig. 23.3):

1. Master–master traffic using token passing
2. Master–slave traffic using polling

PROFIBUS differentiates between two types of masters:

1. Master class 1, which is basically a central controller that cyclically exchanges information with the distributed stations (slaves) at a specified message cycle.
2. Master class 2, which are engineering, configuration, or operating devices.

The slave-to-slave communication is based on the publisher/subscriber application model using the same MAC mechanisms.
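The interplay of the two MAC mechanisms can be sketched schematically: a token circulates among the masters, and the master currently holding the token polls its assigned slaves. The following toy simulation ignores all timing and frame details of the real protocol; all names are illustrative.

```python
# Schematic sketch (not the real protocol) of the two PROFIBUS MAC
# mechanisms: a token circulating among masters, and each token holder
# cyclically polling its assigned slaves.
from collections import deque

class Master:
    def __init__(self, name, slaves):
        self.name, self.slaves = name, slaves

    def poll_slaves(self):
        for s in self.slaves:                      # master-slave polling
            print(f"{self.name} -> poll {s} -> {s} replies with I/O data")

def token_rotation(masters, rounds=2):
    ring = deque(masters)
    for _ in range(rounds * len(masters)):
        holder = ring[0]                           # master currently holding the token
        print(f"token at {holder.name}")
        holder.poll_slaves()
        ring.rotate(-1)                            # pass the token to the next master

token_rotation([Master("class-1 master", ["slave 1", "slave 2"]),
                Master("class-2 master", ["slave 3"])])
```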
Fig. 23.3 Profibus medium access control. (From [18])
The dominating PROFIBUS protocol is the application protocol DP (decentralized periphery), embedded into the protocol suite (Fig. 23.4). Depending upon the functionality of the masters, there are different volumes of DP specifications. There are various profiles, which are grouped as follows:

1. Common application profiles (regarding functional safety, synchronization, redundancy, etc.)
2. Application field-specific profiles (e.g., process automation, semiconductor industries, and motion control)

These profiles reflect the broad experience of the PROFIBUS International organization.

Fig. 23.4 PROFIBUS protocol suite. (From [18])
DeviceNet
DeviceNet is a digital, multidrop network that connects and serves as a communication network between industrial controllers and I/O devices. Each device and/or controller is a node on the network. DeviceNet uses a trunk-line/drop-line topology that provides separate twisted-pair busses for both signal and power distribution. The possible variants of this topology are shown in [20]. Thick or thin cables can be used for either trunk-lines or drop-lines. The maximum end-to-end network length varies with data rate and cable thickness. DeviceNet allows transmission of the necessary power on the network. This allows devices with limited power requirements to be powered directly from the network, reducing connection points and physical size. DeviceNet systems can be configured to operate in a master–slave or a distributed control architecture using peer-to-peer communication. At the application layer, DeviceNet uses a producer/consumer application model. DeviceNet systems offer a single point of connection for configuration and control by supporting both I/O and explicit messaging. DeviceNet uses CAN (controller area network [13]) for its data link layer, and CIP (common industrial protocol) for the upper network layers. As with all CIP networks, DeviceNet implements CIP at the session layer (i.e., data management services) and above, and adapts CIP to the specific DeviceNet technology at the network and transport layer and below. Figure 23.5 depicts the DeviceNet protocol suite. The data link layer is defined by the CAN specification and by the implementation of CAN controller chips. The CAN specification [13] defines two bus states called
Fig. 23.5 DeviceNet protocol suite. (From [20])

Fig. 23.6 Overview of industrial Ethernet protocols
dominant (logic 0) and recessive (logic 1). Any transmitter can drive the bus to a dominant state. The bus can only be in the recessive state when no transmitter is in the dominant state. A connection with a device must first be established in order to exchange information with that device. To establish a connection, each DeviceNet node will implement either an unconnected message manager (UCMM) or a Group 2 unconnected port. Both perform their function by reserving some of the available CAN identifiers. When either the UCMM or the Group 2 unconnected port is selected to establish an explicit messaging connection, that connection is then used to move information from one node to the other (using a publisher/subscriber application model), or to establish additional I/O connections. Once I/O connections have been established, I/O data may be moved among devices on the network. At this point, all the protocol variants of the DeviceNet I/O message are contained within the 11-bit CAN identifier. CIP is strictly object oriented. Each object has attributes (data), services (commands), and behavior (reaction to events). Two different types of objects are defined in the CIP specification: communication objects and application-specific objects. Vendor-specific objects can also be defined by vendors for situations where a product requires functionality that is not in the specification. For a given device type, a minimum set of common objects will be implemented. An important advantage of using CIP is that for other CIP-based networks the application data remains the same regardless of which network hosts the device. The application programmer does not even need to know to which network a device is connected. CIP also defines device profiles, which identifies the minimum set of objects, configuration options, and the I/O data formats for different types of devices. Devices that follow
one of the standard profiles will have the same I/O data and configuration options, will respond to all the same commands, and will have the same behavior as other devices that follow that same profile. For more information on DeviceNet, readers are referred to [20].
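The arbitration consequence of the dominant/recessive bus states can be illustrated with a small sketch: because a dominant bit (logic 0) overrides a recessive bit (logic 1), the node transmitting the numerically lowest identifier wins bus access without any frame being destroyed. The sketch models only the bitwise wired-AND behavior, not a real CAN controller.

```python
# Sketch of CAN bitwise arbitration: a dominant bit (logic 0) overrides a
# recessive bit (logic 1), so the node sending the lowest 11-bit identifier
# wins the bus while the others back off.

def arbitrate(identifiers, bits=11):
    contenders = set(identifiers)
    for bit in reversed(range(bits)):              # MSB is transmitted first
        sent = {i: (i >> bit) & 1 for i in contenders}
        bus_level = min(sent.values())             # wired-AND: 0 is dominant
        # nodes that sent recessive but read dominant back off
        contenders = {i for i, b in sent.items() if b == bus_level}
    (winner,) = contenders
    return winner

print(hex(arbitrate([0x1A5, 0x0F3, 0x2C0])))       # 0xf3 wins (lowest identifier)
```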
23.2.4 Industrial Ethernet-Based Networks

Industrial Ethernet is a concept related to the use of Ethernet in industrial environments with protocols that provide determinism and real-time control. It has followed a two-step development: it started with the introduction of controller devices at the control network level and is nowadays ready to serve as the common network for field instrumentation, controller networks, as well as supervision, diagnostics, and maintenance. Considering these kinds of networks based on switched Ethernet technology, one can distinguish between (related to the real-time classes, see Sect. 23.1):

1. Local soft real-time approaches (real-time class 1)
2. Deterministic real-time approaches (real-time class 2)
3. Isochronous real-time approaches (real-time class 3)

The standardization process started in 2004. Nearly 20 Ethernet-based communication systems became part of the extended fieldbus standard IEC 61158 and the related IEC 61784 (Release 2019), among them Ethernet/IP (Rockwell, ODVA) and PROFINET (Siemens, PROFIBUS International). Figure 23.6 shows the general structure common to the different solutions. Layers one and two are used by all solutions. The solutions differ in whether they use the UDP/TCP/IP protocols and in the way they map the application-specific protocols onto standard Ethernet. Figure 23.6 also shows the rather new TSN and APL protocols, which are described in Sects. 23.2.5 and 23.2.6.
In the remainder of this section, a short survey of communication systems related to the previously mentioned real-time classes is given, and two practical examples are examined.
Local Soft Real-Time Approaches (Real-Time Class 1)
These approaches use TCP (UDP)/IP mechanisms over shared and/or switched Ethernet networks. They can be distinguished by different functionalities on top of TCP (UDP)/IP, as well as by their object models and application process mechanisms. Protocols based on Ethernet-TCP/IP offer response times in the lower millisecond range but are not deterministic, since data transmission is based on the best-effort principle. Some examples are given below.

MODBUS TCP/IP (Schneider)
MODBUS [21] is an application layer messaging protocol for client/server communication between devices connected via different types of buses or networks. Using Ethernet as the transmission technology, the application layer protocol data unit (A-PDU) of MODBUS (function code and data) is encapsulated into an Ethernet frame; a minimal framing sketch is given at the end of this subsection. The connection management on top of TCP/IP controls the access to TCP.

Ethernet/IP (Rockwell, ControlNet International, Open DeviceNet Vendor Association), Uses the Common Industrial Protocol (CIP)
In this context, IP stands for industrial protocol (not for Internet Protocol). CIP [22, 23] represents a common application layer for all physical networks of Ethernet/IP, ControlNet, and DeviceNet. Data packets are transmitted via a CIP router between the networks. For the real-time I/O data transfer, CIP works on top of UDP/IP. For the explicit messaging, CIP works on top of TCP/IP. The application process is based on a producer/consumer model. The general concept for real-time communication is based on a producer/consumer messaging model in which the data owner transmits the data and the receiving devices can consume the data simultaneously. This principle is supported by the Internet Protocol (IP) multicast service, which, in turn, is supported by the Ethernet multicast service. The addresses used are related to connection IDs, not to the MAC addresses of the consuming devices. The consumer subscribes once at the beginning and receives the data as long as the sender provides it.

PROFINET (PROFIBUS International, Siemens)
For non-real-time communication, PROFINET uses the standard TCP/UDP and IP protocols with best-effort transport performance. Open-source code and various exemplary implementations/portations for different operating systems are available on the PNO website [24].

All of the abovementioned approaches are able to support widely used office domain protocols, such as SMTP, SNMP, and HTTP. Some of the approaches support BOOTP and DHCP for web access and/or for engineering data exchange. But the object models of the approaches differ.
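As an illustration of the MODBUS TCP encapsulation mentioned above, the following sketch builds a "read holding registers" request by prefixing the MODBUS A-PDU (function code and data) with the MBAP header used for TCP transport. It follows the publicly documented frame layout but is only an illustration, not a complete client.

```python
# Minimal sketch of MODBUS TCP encapsulation: the application PDU
# (function code + data) is prefixed with the MBAP header and sent over
# TCP port 502.
import struct

def read_holding_registers_request(transaction_id, unit_id, start_addr, quantity):
    pdu = struct.pack(">BHH", 0x03, start_addr, quantity)   # function code 0x03
    mbap = struct.pack(">HHHB",
                       transaction_id,      # echoed back by the server
                       0x0000,              # protocol identifier (always 0)
                       len(pdu) + 1,        # remaining byte count incl. unit id
                       unit_id)
    return mbap + pdu

frame = read_holding_registers_request(1, 0x01, start_addr=0, quantity=10)
print(frame.hex())   # 000100000006 01 03 0000 000a
```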
Deterministic Real-Time Approaches (Real-Time Class 2)
These approaches use a middleware on top of the data link layer to implement scheduling and smoothing functions. The middleware is normally represented by a software implementation. Industrial examples include the following.

PROFINET (PROFIBUS International, Siemens)
This variant of the Ethernet-based PROFINET [24] system (using the main application model background of the fieldbus PROFIBUS DP) uses the object model IO (input/output). Figure 23.7 roughly depicts the PROFINET protocol suite, containing the connection establishment for PROFINET via connection-oriented RPC on the left side, as well as for PROFINET via connectionless RPC on the right side. The exchange of (mostly cyclic) productive data uses the real-time functions in the center. The PROFINET service definition and protocol specification [26] covers the communication between programmable logic controllers (PLCs), supervisory systems, and field devices or remote input and output devices. The PROFINET specification complies with IEC 61158, Parts 5 and 6, especially the fieldbus application layer (FAL). The PROFINET protocol is defined by a set of protocol machines. For more details see [27].

Time-Critical Control Network (Tcnet, Toshiba)
Tcnet [28] specifies in the application layer a so-called common memory for time-critical applications, and uses the same mechanisms as mentioned for PROFINET for TCP(UDP)/
Fig. 23.7 PROFINET protocol suite. (From [25]) (active control connection object [ACCO], connection oriented [CO], connectionless [CL], and remote procedure call [RPC])
IP-based non-real-time applications. An extended data link layer contains the scheduling functionality. The common memory is a virtual memory globally shared by participating nodes as well as application processes running on each node. It provides temporal and spatial coherence of data distribution. The common memory is divided into blocks with several memory lengths. Each block is transmitted to member nodes using multicast services, supported by a publisher node. A cyclic broadcast transmission mechanism is responsible for refreshing the data blocks. Therefore, the common memory consists of dedicated areas for the transmitted data to be refreshed in each node. Thus, the application program of a node has quick access to all (distributed) data. The application layer protocol (FAL) consists of three protocol machines: the FAL service protocol machine (FSPM), the application relationship protocol machine (ARPM), and the data link mapping protocol machine (DMPM). The scheduling mechanism in the data link layer follows a token passing mechanism.

Vnet (Yokogawa)
Vnet [29] supports up to 254 subnetworks with up to 254 nodes each. In its application layer, three kinds of application data transfers are supported:

• A one-way communication path used by an endpoint for inputs or outputs (conveyance paths)
• A trigger policy
• Data transfer using a buffer model or a queue model (conveyance policy)

The application layer (FAL) contains three types of protocol machines: the FAL service protocol machine (FSPM), application relationship protocol machines (ARPMs), and the data link layer mapping protocol machine (DMPM). For real-time data transfer, the data link layer offers three services:

1. Connectionless DL service
2. DL-SAP management service
3. DL management service

Real-time and non-real-time traffic scheduling is located on top of the MAC layer. Therefore, one or more time slots can be used within a macrocycle (depending on the service subtype). The data can be ordered by four priorities: urgent, high, normal, and time available. Each node has its own synchronized macrocycle. The data link layer is responsible for clock synchronization.
Isochronous Real-Time Approaches (Real-Time Class 3)
The main examples are as follows.
Powerlink (Ethernet PowerLink Standardization Group [EPSG], Bernecker and Rainer), Developed for Motion Control
Powerlink [30] offers two modes: protected mode and open mode. The protected mode uses a proprietary (B&R) real-time protocol on top of the shared Ethernet for protected subnetworks. These subnetworks can be connected to an open standard network via a router. Within the protected subnetwork the nodes cyclically exchange real-time data, avoiding collisions. The scheduling mechanism is a time division scheme. Every node uses its own time slot (slot communication network management [SCNM]) to send its data. The mechanism uses a manager node, which acts comparably to a bus master, while managed nodes act similarly to slaves. This mechanism avoids Ethernet collisions. The Powerlink protocol transfers the real-time data isochronously. The open mode can be used for TCP(UDP)/IP-based applications. The network normally uses switches. The traffic has to be transmitted within an asynchronous period of the cycle.

EtherCAT (EtherCAT Technology Group [ETG], Beckhoff), Developed as a Fast Backplane Communication System
EtherCAT [31] distinguishes two modes: direct mode and open mode. Using the direct mode, a master device uses a standard Ethernet port between the Ethernet master and an EtherCAT segment. EtherCAT uses a ring topology within the segment. The medium access control adopts the master–slave principle, where the master node (typically the control system) sends the Ethernet frame to the slave nodes (Ethernet devices). One single Ethernet device is the head node of an EtherCAT segment consisting of a large number of EtherCAT slaves with their own transmission technology. The Ethernet MAC address of the first node of a segment is used for addressing the EtherCAT segment. For the segment, special hardware can be used. The Ethernet frame passes each node. Each node identifies its subframe and receives/sends the suitable information using that subframe. Within the EtherCAT segment, the EtherCAT slave devices extract data from and insert data into these frames. Using the open mode, one or several EtherCAT segments can be connected via switches with one or more master devices and Ethernet-based basic slave devices.

PROFINET/Isochronous Technology (PROFIBUS International, Siemens), Developed for Any Industrial Application
PROFINET/Isochronous Technology [32] uses a middleware on top of the Ethernet MAC layer to enable high-performance transfers, cyclic data exchange, and event-controlled signal transmission. The layer 7 functionality is directly linked to the middleware. The middleware itself contains the scheduling and smoothing functions.
Fig. 23.8 LMPM MAC access used in PROFINET [27] (cyclic real time [cRT], acyclic real time [aRT], non-real time [non-RT], medium access control [MAC], and link layer mapping protocol machine [LMPM])
This means that TCP/IP does not influence the PDU structure. A special Ethertype is used to identify real-time PDUs (only one PDU type for real-time communication). This enables easy hardware support for the real-time PDUs. The technical background is a 100 Mbps full-duplex Ethernet (switched Ethernet). PROFINET adds an isochronous real-time channel to the RT channels of real-time class 2 option channels. This channel enables a high-performance transfer of cyclic data in an isochronous mode [33]. Time synchronization and node scheduling mechanisms are located within and on top of the Ethernet MAC layer. The offered bandwidth is separated for cyclic hard real-time and soft/non-real-time traffic. This means that within a cycle there are separate time domains for cyclic hard real time, for soft/non-real time over TCP/IP traffic, and for the synchronization mechanism; see also Fig. 23.8. The cycle time should be in the range of 250 μs (35 nodes) up to 1 ms (150 nodes) when TCP/IP traffic of about 6 Mbps is transmitted simultaneously. The jitter will be less than 1 μs. PROFINET/IRT uses switched Ethernet (full duplex). Special four-port and two-port switch ASICs have been developed and allow the integration of the switches into the devices (nodes), substituting the legacy communication controllers of fieldbus systems. Distances of 100 m per segment (electrical) and 3 km per segment (fiber optical) can be bridged.

Ethernet/IP with Time Synchronization (ODVA, Rockwell Automation)
Ethernet/IP with time synchronization [34], an extension of Ethernet/IP, uses the CIP Synch protocol to enable isochronous data transfer. Since the CIP Synch protocol is fully compatible with standard Ethernet, additional devices without CIP Synch features can be used in the same Ethernet system. The CIP Synch protocol uses the precision clock synchronization protocol [35] to synchronize the node clocks using an additional hardware function. CIP Synch can deliver a time synchronization accuracy of less than 500 ns between devices, which meets the requirements of the most demanding real-time applications. The jitter between master and slave clocks can be less than 200 ns.
SERCOS III (IG SERCOS Interface e.V.)
A SERCOS network [36], developed for motion control, consists of masters and slaves. Slaves contain integrated repeaters, which have a constant delay time Trep (input/output). The nodes are connected via point-to-point transmission lines. Each node (participant) has two communication ports, which are interchangeable. The topology can be either a ring or a line structure. The ring structure consists of a primary and a secondary channel. All slaves work in forwarding mode. The redundancy provided by the ring structure prevents any downtime caused by a broken cable. The line structure consists of either a primary or a secondary channel. The last physical slave performs the loopback function. All other slaves work in forwarding mode. No redundancy against cable breakage is achieved. It is also possible to insert and remove slaves during operation (hot plug); this is restricted to the last physical slave.
23.2.5 Time-Sensitive Networking (TSN)

The current industrial trends require the use of converged networks, since it is no longer the case that only one Ethernet-based industrial communication system [16] is used in a network. These networks require flexibility in the configuration as well as scalability in the number of devices while still supporting time-critical real-time communication by ensuring bounded latency. The approach to meet these requirements is called time-sensitive networking (TSN) [37, 38]. TSN is a series of standards that extend the Ethernet (IEEE 802.3) and bridging (IEEE 802.1Q) standards. For industrial communication, the following standards are the most important ones:

• IEEE 802.1AS (time synchronization)
• IEEE 802.1Qbv (enhancements for scheduled traffic)
• IEEE 802.1Qbu (frame preemption)
• IEEE 802.1AB (station and media access control connectivity discovery)
• IEEE 802.3br (interspersing express traffic)
• IEEE 802.1CB (frame replication and elimination for reliability)

The use of these standards enables the simultaneous use of the network by different industrial communication systems as well as the concurrency of deterministic and nondeterministic traffic. The communication systems can utilize a number of different scheduling algorithms to ensure the bounded latency, for example, the Time-Aware Shaper (TAS) with fixed time slots for every stream or simple strict priority scheduling. For every scheduling algorithm, a common
understanding of time is a fundamental requirement, making the support of IEEE 802.1AS crucial for the usage of TSN. Concrete requirements that go beyond the technical specifications mentioned above are defined in the IEC/IEEE 60802 joint profile for industrial automation. This profile defines minimum requirements for devices and bridges as well as the configuration protocols that are supported by industrial networks. This profile is the basis for the convergence of different industrial communication systems within the same network.
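The effect of a scheduling mechanism such as the Time-Aware Shaper can be illustrated with a toy model of an IEEE 802.1Qbv gate control list: within a repeating cycle, only the traffic classes whose gates are open in the current window may transmit, which bounds the latency of the scheduled streams. The cycle length, window sizes, and class names below are invented for illustration.

```python
# Toy model of an IEEE 802.1Qbv gate control list: within each cycle,
# only traffic classes whose gate is open in the current window may
# transmit. All values are illustrative assumptions.

CYCLE_US = 1000
GATE_CONTROL_LIST = [                 # (window length in us, open traffic classes)
    (200, {"isochronous"}),           # protected window for scheduled traffic
    (300, {"cyclic", "isochronous"}),
    (500, {"best-effort"}),           # remaining bandwidth for everything else
]

def open_classes(t_us):
    """Return which traffic classes may transmit at time t_us."""
    offset = t_us % CYCLE_US
    for length, classes in GATE_CONTROL_LIST:
        if offset < length:
            return classes
        offset -= length
    return set()

for t in (50, 350, 900, 1050):
    print(t, "->", sorted(open_classes(t)))
```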
23.2.6 Advanced Physical Layer (APL)

APL [39] is an Ethernet physical layer variant designed specifically for the process control domain to transmit data across long distances with high reliability and to provide electrical power to the devices (Fig. 23.9). The two-wire Ethernet variant is based on 10BASE-T1L. The transmission rate is 10 Mbit/s, using 4B3T coding and three-level pulse amplitude modulation (PAM-3) at 7.5 MBaud, full duplex, with a switched network topology. The main features are:

• Standards IEEE 802.3 (10BASE-T1L), IEC 60079 (according to IEEE 802.3cg)
• Power supply output (Ethernet APL power switch) up to 60 W
• Redundant cables and switches optional
• Reference cable type for intrinsic safety IEC 61158-2, Type A
• Maximum trunk length 1000 m/into Zone 1, Div. 2
• Maximum spur length 200 m/into Zone 0, Div. 1
• Speed 10 Mbps, full duplex

This brings the benefits of industrial Ethernet to process automation and instrumentation (Fig. 23.10):
• A converged long-distance communication network for process automation and field instrumentation
• The ability to locate Ethernet-based field devices in hazardous areas by virtue of being intrinsically safe
• Two-wire connection using industry-standard fieldbus cable with loop-powered devices
• Increased bandwidth provided by 10 megabit, full-duplex Ethernet communication, enabling productivity gains over the lifecycle of field devices
• Reuse of existing two-wire cable
• Ethernet based, for any protocol or application
• Explosion hazardous area protection with intrinsic safety, including simple validation
• Transparent connection to any IT network
• Support for the familiar trunk-and-spur topology
• Device access anytime and anywhere
• Fast and efficient communications
23.2.7 Internet of Things (IoT) Communication

The term "Internet of Things" (IoT) has not been used uniformly in the literature. It describes the network of physical objects – "things" – that are embedded with sensors, software, and other technologies for the purpose of connecting and exchanging data with other devices and systems over the Internet. The term IoT usually summarizes technologies which provide access to devices based on Internet technology. In several cases it is identified by the use of the TCP/UDP/IP communication protocol stack. Two communication protocols have recently received increasing attention for use in automation applications: OPC UA [40] and MQTT [41]. OPC UA is native to automation technology and has been in use even before IoT became visible. MQTT comes from business IT and is more and more used in the automation domain as well.
Fig. 23.9 APL allocation in the industrial communication stack [39]
Fig. 23.10 Typical network topology for APL [39]
Fig. 23.11 General structure of the OPC UA interface and object model [40] (DA: data access, AC: alarm condition, HA: history access, Prog: program invocation)
OPC UA
OPC UA is an interface for industrial communication systems, standardized in IEC 62541. Originally, it was launched for fieldbus-independent access from SCADA systems to field devices. It has been further developed as an interface combining layer 7 communication services with a universal information model based on the object-oriented modeling paradigm. The communication services provide different means of access to the application objects of the communication
partners. The OPC UA interface is usually mapped to UDP and TCP/IP (Fig. 23.11). As a basis, a client-server concept is provided. This means that a device offers access to its objects and another application can use these. The way of usage depends on the type of the objects (see DA, AC, in Fig. 23.11). In addition to the classical read and write services, infrastructure services such as session management, security, event and alarm management, and discovery are offered by the OPC UA technology.
Fig. 23.12 Protocol nesting of OPC UA transport layers [40]
The session and security services are embedded in the transport channels of the OPC UA communication stack (Fig. 23.12). OPC UA has been extended by a publisher-subscriber interaction model [40].
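As a minimal illustration of the client-server access described above, the following hedged sketch reads a single node value, assuming the open-source python-opcua package; the endpoint URL and node identifier are placeholders that depend entirely on the concrete server's address space.

```python
# Hedged sketch of an OPC UA client read using the open-source
# python-opcua (FreeOpcUa) package; endpoint URL and node id are
# hypothetical placeholders.
from opcua import Client

client = Client("opc.tcp://192.168.0.10:4840")   # hypothetical endpoint
client.connect()
try:
    node = client.get_node("ns=2;i=1001")        # hypothetical process-value node
    print("value:", node.get_value())
    print("browse name:", node.get_browse_name())
finally:
    client.disconnect()
```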
Message Queue Telemetry Transport (MQTT)
The TCP/IP-based Message Queue Telemetry Transport (MQTT) [42] is an important example of a lightweight protocol. It was developed as a machine-to-machine communication protocol and is standardized by the Organization for the Advancement of Structured Information Standards (OASIS) [41]. MQTT is an event-oriented communication protocol. The protocol uses TCP/IP (see also Fig. 23.6), or other network protocols that provide ordered, lossless, bidirectional connections. Its main features are:

• Use of the publish/subscribe message pattern, which provides multi- and broadcast message distribution
• Small overhead of datagrams and protocol exchanges to minimize network load
• Notification to interested partners when an abnormal disconnection occurs
• Decoupling of applications with a lightweight protocol

Additionally, it defines a quality of service (QoS) for each message. The QoS describes whether and how often a message is transmitted, and three cases are defined: at most once (0), at least once (1), and exactly once (2). The selected quality of service does not define when a message will arrive at the recipient. It only supervises the transport of the messages and gives indications about it in the case of QoS = 1 and QoS = 2. The end-to-end response time between the sending of a message by the publisher and the receipt of the message by the subscriber has to be supervised using extra means if this is required (Fig. 23.13).
Fig. 23.13 MQTT publisher-subscriber communication [43]
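A minimal publish/subscribe sketch, assuming the Eclipse paho-mqtt package (1.x callback API), illustrates the QoS levels and the last-will mechanism used to notify interested partners of an abnormal disconnection; the broker address and topic names are placeholders.

```python
# Hedged sketch using the Eclipse paho-mqtt package (1.x callback API);
# broker address and topics are hypothetical placeholders.
import paho.mqtt.client as mqtt

BROKER = "broker.example.local"        # hypothetical MQTT broker

def on_message(client, userdata, msg):
    print(f"{msg.topic}: {msg.payload.decode()} (qos={msg.qos})")

subscriber = mqtt.Client(client_id="component-x")
subscriber.on_message = on_message
subscriber.connect(BROKER, 1883)
subscriber.subscribe("plant/line1/temperature", qos=1)        # at least once
subscriber.loop_start()

publisher = mqtt.Client(client_id="component-a")
publisher.will_set("plant/line1/status", "offline", qos=1, retain=True)
publisher.connect(BROKER, 1883)
publisher.publish("plant/line1/temperature", "21.7", qos=0)   # at most once
publisher.publish("plant/line1/setpoint", "22.0", qos=2)      # exactly once
```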
23.3 Wireless Industrial Communications
23.3.1 Classification

Wireless communication networks are increasingly penetrating the application area of wired communication systems. Therefore, they have been faced with the requirements of industrial automation. Wireless technology has been introduced in automation as wireless local area networks (WLAN) and wireless personal area networks (WPAN). Currently, wireless sensor networks (WSN) are under discussion for process and factory automation, and there are some standards that try to cope with the industrial automation requirements. WSNs allow the integration of Industrial Internet of Things (IIoT) concepts in industrial applications. For specific application aspects of wireless communications, see Ch. 14. The basic standards are the following:

• Mobile communications standards: GSM, GPRS, and UMTS; wireless telephones (DECT)
• Lower layer standards (IEEE 802.11: wireless LAN [44], and 802.15 [45]: personal area networks) as a basis of radio-based local networks (WLANs, pico networks, and sensor/actuator networks)
• Higher layer standards (application layers on top of IEEE 802.11 and 802.15.4, e.g., Wi-Fi, Bluetooth [46], WirelessHART [47], and ZigBee [48])
• Upcoming radio technologies such as ultrawide band (UWB) and WiMedia
• Low-power wide area networks (LPWAN) [49]
• 5G [50]

For more detailed information and surveys, see [49, 51–55].
23.3.2 Wireless Local Area Networks (WLAN)

WLAN is a mature technology and is implemented in PCs, laptops, and PDAs. Modules for embedded systems development are also available. WLAN can be used almost worldwide. WLAN enables wireless access to Ethernet-based LANs and is helpful for vertical integration in an automated manufacturing environment. It offers high-speed data transmission that can be used to transmit production data and management data in parallel. The WLAN propagation characteristics fit a number of possible automation applications. WLAN enables more flexibility and a cost-effective installation in automation, associated with mobility and localization. The transition to Ethernet is simple and other gateways are possible. The largest part of the implementation is achieved in hardware; however, improvements can be made above the MAC layer. In the last few years the Wi-Fi Alliance has introduced simplified names to identify device and product descriptions. The latest version of Wi-Fi is based on the IEEE 802.11ax standard and is known as Wi-Fi 6. Devices based on the IEEE 802.11ac standard are identified as Wi-Fi 5. One of the main differences between the versions is that the data transmission rate of Wi-Fi 6 is theoretically three times higher than that of Wi-Fi 5. In addition, a Wi-Fi 6 access point is capable of communicating with eight devices simultaneously, while a Wi-Fi 5 access point can serve only four. In this way, Wi-Fi 6 is suitable for dense environments with hundreds of devices simultaneously accessing the network.
23.3.3 Wireless Sensor/Actuator Networks

Various wireless sensor network (WSN) concepts are under discussion, especially in the area of industrial automation. Features such as time-synchronized operation, frequency hopping, self-organization (with respect to star, tree, and mesh network topologies), redundant routing, and secure data transmission are desired. Interesting surveys on this topic are available in [56] and [57]. Currently, the most prominent technologies in industrial wireless sensor/actuator networks are:

• ZigBee (ZigBee Alliance) [48]
• WirelessHART [47, 58]
• ISA SP100.11a [56]
• WIA-PA [57, 59]
• WIA-FA [57]
The first three technologies use the standard IEEE 802.15.4 (2003) low-rate wireless personal area network (WPAN) [45], specifying the physical layer and parts of the data link layer (medium access control). The features of these technologies make them suitable for process automation but limit their adoption in critical scenarios. On the other hand, WIA-FA was designed to address the high communication requirements of factory automation applications, and its physical layer is based on IEEE 802.11.
ZigBee
ZigBee distinguishes between three device types:

• Coordinator ZC: root of the network tree, storing the network information and security keys. It is responsible for enabling the connection of the ZigBee network to other networks.
• Router ZR: transmits data of other devices.
• End device ZED: automation device (e.g., sensor or actuator), which can communicate with ZR and ZC, but is unable to transmit data of other devices.

An enhanced version allows one to group devices and to store data for neighboring devices. Additionally, to save energy, there are full-function devices and reduced-function devices. The ZigBee application layer (APL) consists of three sublayers: the application support layer (APS) (containing the connection lists of the connected devices), an application framework (AF), and ZigBee device objects (ZDO) (definition of device roles, handling of connection requests, and establishment of communication relations between devices). For process automation, the ZigBee application model and the ZigBee profiles are very interesting. The application functions are represented by application objects (AO), and the generic device functions by device objects (DO). Each object of a ZigBee profile can contain one or more clusters and attributes, transferred to the target AO (in the target device) directly or to a coordinator, which transfers them to one or more target objects.
WirelessHART
Revision 7 of the HART protocol includes the specification of WirelessHART (WH) [47]. The elements that form a WH network are at least one gateway, one network manager and one access point, and many field devices. The mesh-type network allows the use of redundant communication
paths between the radio-based nodes. The temporal behavior is determined by the time-synchronized mesh protocol (TSMP) [60, 61]. TSMP enables a synchronous operation of the network nodes (called motes) based on a time slot mechanism. It uses various radio channels (supported by the MAC layer) for end-to-end communication between distributed devices. In order to increase reliability, the adopted time division multiple access (TDMA) concept is combined with channel hopping mechanisms. WirelessHART employs nonadaptive frequency hopping in which each link in the network switches randomly between the 15 available channels. Moreover, channels subject to interference may be eliminated through blacklisting. TSMP supports star, tree, as well as mesh topologies. All nodes have the complete routing function (contrary to ZigBee). A self-organization mechanism enables devices to acquire information about neighboring nodes and to establish connections between them. The messages have their own network identifier. Thus, different networks can work together in the same radio area. Each node has its own list of neighbors, which can be updated when failures have been recognized. To support security, TSMP uses mechanisms for encryption (128-bit symmetric key), authentication (32-bit MIC for the source address), and integrity (32-bit MIC for the message content). Additionally, the frequency hopping mechanism improves the security features. For detailed information see [47] and [62].
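The combination of TDMA and channel hopping can be illustrated with a toy sketch: in every time slot a link moves to another of the 15 available channels, skipping blacklisted ones. The 10 ms slot length and the simple modular hop sequence are illustrative assumptions and do not reproduce the actual TSMP schedule.

```python
# Toy sketch of the TDMA + channel-hopping idea: each time slot the link
# uses another of the 15 channels, skipping blacklisted ones. Slot length
# and hop sequence are illustrative assumptions only.

CHANNELS = list(range(15))            # 15 usable channels
BLACKLIST = {3, 7}                    # channels suffering from interference
SLOT_MS = 10                          # assumed slot length

def channel_for_slot(slot, link_offset):
    usable = [c for c in CHANNELS if c not in BLACKLIST]
    return usable[(slot + link_offset) % len(usable)]

for slot in range(6):
    ch = channel_for_slot(slot, link_offset=5)
    print(f"t={slot * SLOT_MS:3d} ms  slot {slot}  channel {ch}")
```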
ISA SP100.11a
Developed by the International Society of Automation (ISA), the ISA SP100.11a standard is also a TSMP-based protocol designed for process automation applications. It also allows point-to-point and mesh communication. One of the differences in relation to WH is a more comprehensive set of specifications, such as security manager policies. In addition, while WH uses a single channel hopping scheme, in ISA100 three different techniques can be applied. Another feature is that the time slot can be adjusted from 10 to 14 ms. At the network layer, the protocol uses IPv6 over low-power WPAN (6LoWPAN), thus allowing IP-based communication via IEEE 802.15.4. In this way, the sensor network nodes can be accessed directly over the Internet, which is not possible with the WirelessHART, WIA-PA, and ZigBee protocols [56].

WIA-PA
In 2007 the Chinese Industrial Wireless Alliance (CIWA) proposed the Wireless Networks for Industrial Automation – Process Automation (WIA-PA) standard, which is a communication protocol based on the IEEE 802.15.4 stack. It was approved by the IEC in 2008, making it the second industrial wireless communication standard. The topology used is star intracluster and mesh intercluster. The physical devices of
the protocol include host, gateway, field devices, routers, and handheld devices. The central point of each cluster is a router that forwards messages to the gateway. The physical and MAC layers were built based on the IEEE 802.15.4 standard, and the MAC layer has contention access periods (CAP) and contention-free periods (CFP) using CSMA/CA for medium access. In the active period, the contention portion is used for adding devices to the network, intracluster management, and retransmissions. The contention-free portion is used for communication between field devices and routers [57, 59].
WIA-FA
The WIA-FA standard is a protocol designed by CIWA to cope with factory automation, whose requirements are delays in the order of milliseconds and more than 99% transmission reliability. The physical layer is based on the IEEE 802.11-2012 standard and supports different modulation modes such as FHSS, DSSS, and OFDM, operating in the 2.4 GHz unlicensed frequency range. A TDMA scheme is used to provide the necessary determinism in communications, and the topology used is star with redundancy of access points. The application layer is object oriented and can encapsulate information from other systems to provide tunneling. The protocol can also convert different industrial protocols such as HART, Modbus, and PROFIBUS, thus allowing interoperability with systems already installed [57].

Bluetooth Low Energy
Bluetooth Low Energy (BLE), or Bluetooth Smart, is a technology developed by the Bluetooth Special Interest Group (SIG). It has been part of Bluetooth technology since version 4. The technology focuses on transmitting infrequent data between devices and uses the unlicensed ISM frequency band from 2.4000 to 2.4835 GHz, subdivided into 40 channels. The application layer defines different profiles specified by the Bluetooth SIG that allow interoperability between devices from different manufacturers. BLE can be used for cases where a device transfers small amounts of data, such as data from a sensor or data to control an actuator. In addition, there are transmission-only devices, also called beacons, which have the function of transmitting messages so that other devices can discover them and read their data. With the help of a Bluetooth beacon, a device can find its approximate position. The topology used is star, also called a piconet. There is also the possibility of forming a mesh with BLE devices. The specifications for the Bluetooth mesh network were launched in 2017 and allow messages transmitted by a node to hop from one node to another over the network until reaching their destination. It is well suited for control systems, monitoring, sensor networks, asset tracking, and other IIoT applications that require dozens, hundreds, or thousands of devices to communicate with each other. For more details see [46], [63], and [64].
23.3.4 Low-Power Wide Area Network (LPWAN)

The main characteristics of LPWAN networks are long battery life, large coverage area, low transmission rate, and low installation costs. Some of the main LPWAN technologies are Sigfox, LoRaWAN, Narrowband IoT (NB-IoT), and Weightless, among others. They can be adopted in different application areas such as environment monitoring, smart cities, smart utilities, precision agriculture, and logistics control, among others [49].
SIGFOX
SIGFOX [65] is a proprietary ultra-narrowband technology with a low transmission rate and long range. It supports bidirectional communication in which DBPSK modulation is used for the uplink and GFSK for the downlink. Due to the narrowband feature, the effect of noise in the receiver is mitigated by allowing the receiver to listen in only a tiny slice of spectrum. Sigfox has low-cost sensors and SDR-based base stations to manage the network connection to the Internet. To increase reliability, devices transmit each message several times. Sigfox is installed by network operators and, therefore, if users want to use the network, they must pay a subscription fee. The supported data rate is 600 bps for the downlink and up to 200 kbps for the uplink. The maximum payload size for each message is 12 bytes for the uplink and 8 bytes for the downlink, and each device can send up to 140 messages per day.

LoRaWAN
LoRa is a physical layer technology, owned by the company Semtech [66], that operates in the unlicensed ISM frequency range. LoRa is based on chirp spread spectrum (CSS) modulation. The upper layers of the network stack are open for development. LoRaWAN is a MAC layer that uses LoRa as the physical layer. The LoRaWAN protocol is an open protocol managed by the LoRa Alliance [67]. Its main features are long battery duration and long transmission range. The transmission range of a LoRa device can be modified by changing parameters such as bandwidth, transmission power, and spreading factor. A LoRaWAN network is not managed by a provider, so there are no subscription fees as required by the Sigfox protocol. An additional advantage over Sigfox is that there is no message limit per device; on the other hand, the regulated duty cycle in the region of installation must be observed (a small duty-cycle budget sketch is given at the end of this subsection). The supported data rate is 50 kbps for downlink and uplink. The maximum payload size for each message is 243 bytes.

Weightless
The Weightless protocol [68] is managed by the Weightless Special Interest Group. Initially, three standards were
proposed: Weightless-N, Weightless-W, and Weightless-P. Weightless-W was designed to operate in the white space of the TV spectrum. This limits the use of the protocol, since the use of this frequency range is only allowed in some regions. Weightless-N is an LPWAN technology similar to Sigfox; it uses slotted ALOHA in the unlicensed band and supports only unidirectional communication (uplink only) from end devices to the base station. Weightless-P (or simply Weightless) offers narrowband bidirectional communication designed to operate on globally licensed and unlicensed ISM frequencies. The Weightless protocol uses GMSK and QPSK for signal modulation. It has a shorter communication range and shorter battery life when compared with other LPWAN technologies. The supported data rate is 100 kbps for downlink and uplink. The maximum payload size for each message is 255 bytes.
NB-IoT
Narrowband IoT (NB-IoT) [69] is an LPWAN technology defined by Release 13 of 3GPP. It can coexist with the global system for mobile communications (GSM) and LTE in licensed frequency bands of 700 MHz, 800 MHz, and 900 MHz. The communication is bidirectional, using orthogonal frequency division multiple access (OFDMA) for the downlink and single-carrier frequency division multiple access (SC-FDMA) for the uplink. NB-IoT is designed to optimize and reduce the resources of LTE technology, prioritizing the characteristics for infrequent data and low-power data transmissions. The supported data rate is 200 kbps for downlink and 20 kbps for uplink. The maximum payload size for each message is 1600 bytes [70].
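The duty-cycle constraint mentioned for LoRaWAN above can be made concrete with a rough budget sketch. A 1% duty-cycle limit (as applied in some EU868 sub-bands) and a 100 ms time-on-air per uplink are assumptions for illustration; real airtime depends on spreading factor, bandwidth, and payload size.

```python
# Rough duty-cycle budget sketch for an unlicensed-band LPWAN device.
# The 1 % duty cycle and 100 ms time-on-air are illustrative assumptions.

DUTY_CYCLE = 0.01          # 1 % of each hour may be used for transmission
AIRTIME_S = 0.10           # assumed time-on-air of one uplink message

budget_per_hour_s = 3600 * DUTY_CYCLE
max_messages_per_hour = int(budget_per_hour_s // AIRTIME_S)
min_interval_s = AIRTIME_S / DUTY_CYCLE

print(f"airtime budget per hour: {budget_per_hour_s:.0f} s")
print(f"max. messages per hour : {max_messages_per_hour}")
print(f"min. average interval  : {min_interval_s:.0f} s between messages")
```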
23.3.5 5G
5G, the fifth generation of mobile communication, is considered a strong candidate to serve industrial IoT (IIoT) applications. Its high data transfer rates and lower latencies compared with current LTE technologies expand the use cases in which mobile communication can be applied. The new 5G radio technologies will support three essential types of communication: enhanced mobile broadband (eMBB), ultra-reliable low-latency communication (uRLLC), and massive machine-type communication (mMTC), the latter allowing the use of thousands of field devices in a specific area. This technology promises to cover several industrial automation use cases, mainly factory automation applications, which generally require lower latencies than process automation. Applications in the manufacturing domain include motion control, mobile robots, massive wireless sensor networks, remote access and remote maintenance, augmented reality applications, closed-loop process control, process monitoring,
and plant asset management, among others. A new concept introduced in 5G, called network slicing, will allow the simultaneous and isolated provisioning of several services by the same network infrastructure. It allows the network operator to provide customized networks for specific services and to achieve varying degrees of isolation between the various types of service traffic and the network functions associated with those services. This is attractive because, within a single network infrastructure, there are different needs in terms of functionality (for example, priority, security, and mobility), performance (for example, latency, mobility, and data rates), and operational requirements (for example, monitoring, root cause analysis, etc.). In addition, slices can be configured to serve specific user organizations (for example, public security agencies and corporate customers). For additional details, readers are referred to [70] and [50]. As the number of wireless protocols finding application in industrial automation increases, it is important to keep in mind that each wireless technology has characteristics that make it more suitable for a specific type of application. Therefore, before choosing a communication protocol for a given case, it is necessary to understand the application requirements. The required range and data rate are two features that can serve as a first step in the selection of a wireless technology. Figure 23.14 depicts the most common wireless technologies and where they fit with respect to these two aspects. Contact technologies, such as RFID, usually have a range of a few meters. Wireless personal area networks (WPAN), such as Bluetooth and WirelessHART, cover ranges between 10 and 100 m. Wireless local area networks (WLAN) generally have a short/medium range between 100 and 1000 m, while the technologies called LPWAN handle long-distance transmissions and can have ranges of 10–50 km.
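As a first-cut illustration of this range and data-rate screening, the short Python sketch below encodes the rough technology classes of Fig. 23.14 and returns the candidate families for a given requirement. The numeric boundaries are the approximate envelopes quoted in the text and figure, not hard limits of the individual standards.

# Rough technology classes from Fig. 23.14 (approximate envelopes only)
CLASSES = [
    # (family,                                      max range [m], max data rate [bit/s])
    ("contact (NFC/RFID)",                          10,            1e6),
    ("WPAN (Bluetooth, WirelessHART, ZigBee, ...)", 100,           3e6),
    ("WLAN (Wi-Fi 5/6)",                            1_000,         1e9),
    ("LPWAN (NB-IoT, LoRaWAN, Sigfox, ...)",        50_000,        2e5),
    ("cellular (4G/5G)",                            50_000,        1e9),
]

def candidate_families(required_range_m, required_rate_bps):
    # Return the technology families whose rough envelope covers the requirement
    return [name for name, rng, rate in CLASSES
            if required_range_m <= rng and required_rate_bps <= rate]

print(candidate_families(5_000, 1_000))       # e.g. LPWAN or cellular
print(candidate_families(50, 2_000_000))      # e.g. WPAN, WLAN or cellular

A shortlist produced this way still has to be checked against the other requirements discussed in this chapter, such as latency class, payload size, power budget, and licensing.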
23.4 Virtual Automation Networks
23.4.1 Motivation
Future scenarios of distributed automation create a need for mechanisms that support geographically distributed automation functions, for various reasons:
• Centralized supervision and control of (many) decentralized (small) technological plants
• Remote control, commissioning, parameterization, and maintenance of distributed automation systems
• Inclusion of remote experts or external machine-readable knowledge for plant operation and maintenance (for example, asset management, condition monitoring, etc.)
This means that heterogeneous networks, consisting of (partially homogeneous) local and wide areas as well as wired and wireless communication systems, will continue to play an increasing role. Figure 23.15 depicts the communication environment of a complex automation scenario. Following a unique design concept regarding the objects to be transmitted between geographically distributed communication end points, the heterogeneous network becomes a virtual automation network (VAN) [71, 72]. VAN characteristics are defined for domains, where the expression domain is widely used to address areas and devices with common properties/behavior, common network technology, or common application purposes.
Fig. 23.14 Comparison of range and data rate of some of the current wireless technologies (NFC/RFID, Bluetooth BR/EDR and Low Energy, ZigBee, WirelessHART, ISA100, WIA-PA, WIA-FA, Wi-Fi 5/6, 4G/5G cellular, and the LPWAN technologies NB-IoT, LoRaWAN, Sigfox, and Weightless-P)
Fig. 23.15 Different VAN domains related to different automation applications
23.4.2 Domains
Within the overall automation and communication environment, a VAN domain covers all devices that are grouped together on a logical or virtual basis to represent a complex application, such as an industrial application. The encompassed networks may therefore be heterogeneous, and devices can be geographically distributed over the physical environment that has to be covered by the overall application. Figure 23.15 depicts VAN domain examples representing three different distributed applications. Devices related to a VAN domain may reside in a homogeneous network domain (e.g., the industrial domain shown in Fig. 23.15). However, depending on the application, additional VAN-relevant devices may only be reached by crossing other network types (e.g., wide area network communication), or they need to use proxy technology to be represented in the VAN domain view of a complex application. Additionally, an application can follow a vertical or a horizontal data flow. Vertical means that automation devices deliver their data to a higher-level application such as fault analysis or plant performance monitoring. Horizontal means that the application closes the loop from sensor to actuator devices, or uses controller-to-controller communication at the same level of the automation hierarchy.
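To make the vertical data flow tangible, a field device (or a proxy acting on its behalf) typically pushes its process values upward to a higher-level application, for instance over MQTT via an IIoT gateway. The Python snippet below is only an illustrative sketch using the open-source paho-mqtt client; the gateway address and topic hierarchy are invented for the example and are not prescribed by the VAN concept.

# Vertical data flow sketch: a device publishes its values to a higher-level
# application via an IIoT gateway acting as MQTT broker (hypothetical names)
import json, time
import paho.mqtt.publish as publish

GATEWAY = "iiot-gateway.example.local"    # assumed broker address
TOPIC = "plant1/cell3/device42/process"   # assumed topic hierarchy

sample = {"timestamp": time.time(), "temperature_c": 71.4, "valve_pos": 0.35}

# QoS 1 gives at-least-once delivery on this non-real-time, best-effort path
publish.single(TOPIC, payload=json.dumps(sample), qos=1, hostname=GATEWAY)

A horizontal, real-time loop between controllers would instead rely on the runtime mechanisms of the chosen industrial protocol (e.g., PROFINET), as discussed later in this chapter.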
23.4.3 Architectures for VAN Solutions
As mentioned in the previous sections, a VAN network consists of both horizontal and vertical communication paths and of network transitions. Figure 23.16 depicts the required transitions in both heterogeneous and IP-based networks. There are different options for achieving a communication path between heterogeneous network segments/devices (or their combinations); this is achieved by multiple links, gateways, or servers. In this solution, these infrastructure communication devices take advantage of the quality of service and addressing schemes of automation-related communication systems such as PROFIBUS and PROFINET, so both vertical and horizontal applications are in the main focus. This VAN infrastructure allows logical communication clusters independent of the communication structure. More recently, edge- and cloud-based solutions in combination with IIoT (Industrial Internet of Things) gateways take advantage of a homogeneous IP-based network and reach high connectivity from the single automation device up to the cloud application. These vertically oriented solutions have to make an extra effort to extend the IP communication with application services in order to reach interoperability. There is an explosion in the amount of data being generated by digital devices. The traditional model of processing and
Fig. 23.16 Network solutions with mostly heterogeneous (a) and mostly IP-based (b) networks (local area networks [LAN], wireless LAN [WL], wired/wireless [W/WL], real-time Ethernet [RTE], metropolitan area network [MAN], and wide area network [WAN])
storing all data in the cloud is becoming too costly and often too slow to meet the requirements of the end user. Edge computing brings computation and data storage closer to the devices where the data are being gathered, rather than relying on a central location that can be thousands of miles away. This is done so that data, especially real-time data, do not suffer latency issues that can affect an application's performance. Virtualization is the logical abstraction, through software, of hardware devices on a network. In this way, control and hardware are separated, facilitating management and modifications. Virtualization can contribute to the heterogeneity, scalability, and interoperability of networks. Two possible approaches are software-defined networking (SDN) and network function virtualization (NFV) [73]. Even though SDN and NFV are maintained by different standardization organizations, they are not competing but complementary approaches. Together they enable dynamic network resource allocation for heterogeneous QoS requirements [74]. SDN centralizes network management by decoupling the data forwarding plane from the control plane. The central manager, also called the SDN controller, has a global view of the network resources and can therefore gather network state information and act on the configuration of the forwarding devices. A virtual control plane is created that enforces smart management decisions between network functions, filling the gap between network management and service provisioning [73–75]. NFV is an approach that decouples network functions such as routers, firewalls, etc. from the network devices and runs them as software, also called virtual network functions (VNFs), in a data center. NFV reduces the required equipment and installation effort and hence the deployment costs. Through NFV it is possible to dynamically allocate network functions (NFs), making it much more manageable for service providers to add, remove, or update a function for all or subsets of customers. It provides flexibility that enables service providers to scale services up or down to address changing customer demands [73–75].
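The following sketch illustrates the edge-computing idea from the preceding paragraphs in its simplest form: raw samples are aggregated and screened locally, and only compact summaries or threshold violations are forwarded to the central (cloud) application. The window size and threshold are arbitrary example values.

from statistics import mean

WINDOW = 50            # samples aggregated locally before forwarding (assumed)
ALARM_LIMIT = 80.0     # example threshold for immediate forwarding

def edge_filter(samples, send_to_cloud):
    # Forward only aggregates and limit violations instead of every raw sample
    buffer = []
    for value in samples:
        if value > ALARM_LIMIT:
            send_to_cloud({"type": "alarm", "value": value})   # latency-critical
        buffer.append(value)
        if len(buffer) == WINDOW:
            send_to_cloud({"type": "aggregate",
                           "mean": mean(buffer),
                           "max": max(buffer)})
            buffer.clear()

# Usage with a stand-in transport (e.g., the MQTT publish shown earlier)
edge_filter([20.5, 81.2, 19.9] * 40, send_to_cloud=print)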
23.5 Wide Area Communications
23.5.1 Contextualization
With the application of remote automation mechanisms (remote supervision, operation, and service) using wide area networks, the stock of available communication technology becomes broader and includes the following [25]:
• All appearances of the Internet (mostly supporting best-effort quality of service)
• Public digital wired telecommunication systems: either line switched (e.g., integrated services digital network [ISDN]) or packet switched (such as asymmetric/symmetric digital subscriber line [ADSL, SDSL])
• Public digital wireless telecommunication systems (GSM, GPRS, and UMTS based)
• Private wireless telecommunication systems, e.g., trunk radio systems
The transition between different network technologies can be made easier by using multiprotocol label switching (MPLS) and synchronous digital hierarchy (SDH). Several private protocols (over leased lines, tunneling mechanisms, etc.) have been used in the automation domain on top of these technologies. Most of the wireless radio networks can be used in non-real-time (class 1: best effort) applications, and some of them in soft real-time (class 2) applications; however, industrial environments and the industrial,
Fig. 23.17 Remote communication channels over WAN (input/output [IO]; communication relation [CR])
scientific, and medical (ISM) band limit the applications. Figure 23.17 depicts the necessary remote channels. The end-to-end connection behavior via these telecommunication systems depends on the offered quality of service (QoS). The following application areas have to be distinguished.
23.5.2 Best Effort Communication in Automation
• Non-real-time communication (standard IT: upload/download, SNMP) with lower priority than real-time communication: for configuration, diagnostics, and automation-specific upload/download
• Manufacturing-specific functions: context management, establishment of application relationships and connection relationships to configure IO devices, application monitoring to read status data (diagnostics, I&M), read/write services (HMI, application program), and open-loop control
The automation domain has the following impact on best-effort WAN connections: addressing between multiple distributed address spaces, and redundant transmission for minimized downtime to ensure availability for a running distributed application.
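One common way to realize the redundant transmission mentioned above is to hand every telegram to two independent lines and to discard duplicates at the receiver. The sequence-number scheme below is a generic Python sketch of this idea, not the algorithm of any specific standard.

class DuplicateFilter:
    # Receiver-side duplicate elimination for frames arriving over two WAN paths
    def __init__(self):
        self.seen = set()   # in a real system old sequence numbers would be aged out

    def accept(self, seq_no):
        if seq_no in self.seen:
            return False    # copy already received over the other path
        self.seen.add(seq_no)
        return True

def send_redundant(frame, seq_no, path_a, path_b):
    # The same frame is handed to both providers/lines
    path_a(seq_no, frame)
    path_b(seq_no, frame)

# Usage sketch: both paths deliver, but the filter passes the frame only once
rx = DuplicateFilter()
send_redundant(b"config-block", 1,
               path_a=lambda n, f: print("accepted" if rx.accept(n) else "dropped"),
               path_b=lambda n, f: print("accepted" if rx.accept(n) else "dropped"))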
23.5.3 Real-Time Communication in Automation
• Cyclic real-time communication (e.g., PROFINET data) for closed-loop control and acyclic alarms (e.g., PROFINET alarms) as the major manufacturing-specific services
• Transfer (and addressing) methods for RT data across the WAN, which can be distinguished as follows:
– MAC based: tunnel (real-time class 1, partially real-time class 2 for longer send intervals, e.g., 512 ms), clock across the WAN, and a reserved send phase for real-time transmission
– IP based: real-time over UDP (routed); web services based [76, 77]
The automation domain has the following impact on real-time WAN connections: a constant delay-sensitive and jitter-sensitive real-time base load (e.g., in a LAN: up to 50% bandwidth reservation for real-time transmission). To use a wide area network for geographically distributed automation functions, the following basic design decisions were made, following the definitions in Sect. 23.2:
• A virtual automation network (VAN) is an infrastructure for standard LAN-based distributed industrial automation concepts (e.g., PROFINET or others) in an extended environment. The automation functions (applications) are described by the object models used in existing industrial communications. The application service elements (ASEs), as specified in the IEC 61158 standard, can additionally be used.
• The establishment of the end-to-end connections between distributed objects within a heterogeneous network is based on web services. Once such a connection has been established, the runtime channel between these objects is equivalent to the runtime channel within the local area, using PROFINET (or other) runtime mechanisms.
• The VAN addressing scheme is based on names, to avoid the use of IP and MAC addresses during establishment of the end-to-end path between logically connected applications within a VAN domain. The IP and MAC addresses therefore remain transparent to the connected application objects.
• Since there is no new fieldbus or real-time Ethernet protocol, no newly specified application layer is necessary. Thus, the well-tried models of industrial communications (as specified in the IEC 61158 standard) can be used. Only the additional requirements caused by the influence of wide area networks have to be considered, and they lead to additional functionality following the abovementioned design guidelines.
Most of the WAN systems that offer quality-of-service (QoS) support cannot provide real guarantees, and this strongly limits the use of these systems within the automation domain. To guarantee a defined QoS for data transmission between two application access points via a wide area network, written agreements between customer and service provider (service-level agreements
[SLA]) must be contracted. In cases where the provider cannot deliver the promised QoS, an alternative line must be established to hold the connection for the distributed automation function (this operation is fully transparent to the application function). This line should be available from another provider, independent of the currently used provider. The automation devices (so-called VAN access points [VAN-APs]) should support functions to switch (either manually or automatically) to an alternative line [78]. There are different mechanisms to realize a connection between remote entities:
• The VAN switching connection: the logical connection between two VAN-APs over a WAN or a public network. One VAN switching connection owns one or more communication paths. A VAN switching line is defined as one physical communication path between two VAN-APs over a WAN or a public network. The endpoints of a switching connection are VAN-APs.
• The VAN switching line: the physical communication path between two VAN-APs over a WAN or a public network. A VAN switching line has its own provider and its own QoS parameters. If a provider offers connections with different warranted QoS, each of these shall be a separate VAN switching line.
• VAN switching endpoint access: the physical communication path between one VAN-AP and a WAN or a public network. This is a newly introduced class for using the switching application service elements of virtual automation networks for communication via WANs or public networks.
These mechanisms are very important for the concept of VANs using heterogeneous networks for automation. Depending on the priority and importance of the data transmitted between distributed communication partners, the kind of transportation service and communication technology is selected based on economic aspects. The VAN provider switching considers the following alternatives:
• Use case 1: For packet-oriented data transmission via public networks, a connection from the corresponding VAN-AP to the public network has to be established. The crossover from/to the public network is represented by the VAN switching endpoint access. The requirements made for this line have to be fulfilled by the service-level agreements with the chosen provider. Within the public network it is not possible to influence the quality of service. The data packet leaves the public network when the VAN switching endpoint access of the facing communication partner is reached. The connection from the public network to the facing VAN-AP is also provided by the same or an alternative provider and
guarantees the defined requirements. The data exchanges between the two communication partners are independent of each other.
• Use case 2: For connection-oriented data transmission (or data packets with high priority), manageable data transport technology is needed. The VAN switching line represents such a manageable connection. A direct, known connection between two VAN-APs has to be established, and a VAN switching endpoint access is not needed. The chosen provider guarantees the defined requirements for the complete line. When the current line no longer meets the promised requirements, the VAN-APs can be instructed to build up an alternative line and to hold or disconnect the current line automatically.
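Whether a given line, local or leased under an SLA, can actually carry the cyclic real-time base load discussed in Sect. 23.5.3 can be estimated with a simple budget: the aggregated cyclic traffic of all IO devices must stay within the share of the link reserved for real-time transmission (up to 50% in the LAN example above). The frame overhead and the 50% reservation in the Python sketch below are assumptions for illustration only.

def cyclic_load_bps(devices, payload_bytes, cycle_time_s, overhead_bytes=80):
    # Aggregate cyclic real-time load: one frame per device per cycle;
    # overhead_bytes is a rough allowance for framing (assumption)
    return devices * (payload_bytes + overhead_bytes) * 8 / cycle_time_s

def fits_reservation(load_bps, link_bps, rt_share=0.5):
    # True if the cyclic load fits into the share reserved for real-time traffic
    return load_bps <= rt_share * link_bps

load = cyclic_load_bps(devices=100, payload_bytes=64, cycle_time_s=0.004)
print(f"cyclic load: {load / 1e6:.1f} Mbit/s")           # about 28.8 Mbit/s
print("fits a 100 Mbit/s link:", fits_reservation(load, 100e6))

Jitter and delay bounds have to be checked separately; a line that is wide enough in terms of bandwidth may still be unusable if the provider cannot bound its latency.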
23.6 General Overview About Industrial Protocol Features
Table 23.1 presents a comparative overview of the main industrial communication protocols discussed in this chapter. As can be observed, there is a diversity of protocols presenting almost all possible combinations of the characteristics compared. Because of that, there is no single communication protocol that fulfills the requirements of all possible applications, so each protocol usually has its "application niches," whose requirements are better met by the characteristics of that specific protocol. As previously discussed in this chapter, this growing number of protocols makes interoperability a challenge, so concepts such as virtual automation networks, which offer system-wide integration among different domains with real-time guarantees, become very important.
23.7 Conclusions
The area of industrial communication protocols has experienced a tremendous evolution over the last decades, strongly influenced by advances in information technology and in hardware/software development. Existing industrial communication protocols have had a very positive impact both on the operation of industrial plants, thanks to enhanced diagnostic capabilities that enable advanced maintenance strategies such as predictive maintenance, and on the development of complex automation systems. This chapter has reviewed the main concepts of industrial communication networks and presented the most prominent wired and wireless protocols that are already incorporated in a large number of industrial devices (from thousands to millions).
Table 23.1 Comparative overview of the main industrial communication protocols
(columns: bus power / cabling redundancy / max. devices / synchronization / submillisecond cycle)
Wired:
AS-Interface: Yes / No / 62 / No / No
CANopen: No / No / 127 / Yes / No
ControlNet: No / Yes / 99 / No / No
CC-Link: No / No / 64 / No / No
DeviceNet: Yes / No / 64 / No / No
EtherCAT: Yes / Yes / 65,536 / Yes / Yes
Ethernet Powerlink: No / Optional / 240 / Yes / Yes
EtherNet/IP: No / Optional / Almost unlimited / Yes / Yes
Interbus: No / No / 511 / No / No
Modbus: No / No / 246 / No / No
PROFIBUS DP: No / Optional / 126 / Yes / No
PROFIBUS PA: Yes / No / 126 / No / No
PROFINET IO: No / Optional / Almost unlimited / No / No
PROFINET IRT: No / Optional / Almost unlimited / Yes / Yes
SERCOS III: No / Yes / 511 / Yes / Yes
SERCOS interface: No / No / 254 / Yes / Yes
Foundation Fieldbus H1: Yes / No / 240 / Yes / No
Wireless:
Zigbee: No / No / 100 / Yes / No
Bluetooth: No / No / 8 / Yes / No
Bluetooth LE: No / No / Usually 8 (depends on the connection interval) / Yes / No
WirelessHART: No / No / 100 / Yes / No
ISA 100: No / No / 100 / Yes / No
WIA-PA: No / No / 100 / Yes / No
Wi-Fi 5 and 6: No / No / 250 / No / No
LoRa/LoRaWAN: No / No / Almost unlimited / No / No
NB-IoT: No / No / Almost unlimited / Yes / No
Sigfox: No / No / Almost unlimited / No / No
5G: No / No / Almost unlimited / Yes / Yes
23.8 Emerging Trends
The number of commercially available industrial communication protocols has continued to increase, despite several attempts to converge to a single and unified protocol, in particular during the definition of the IEC 61158 standard; the automation community has come to accept that no single protocol will be able to meet all the different communication requirements of different application areas. This trend will be continued by the emerging wireless sensor networks as well as the integration of wireless communication technologies in all the automation-related communication concepts mentioned above. Therefore, increasing attention has been given to concepts and techniques that allow integration among heterogeneous networks, and within this context virtual automation networks are playing an increasing role. Currently, the Internet Protocol plays a major role for this integration, enabling the rise of the "Industrial Internet of Things" (IIoT) era, where concepts such as cloud/edge/fog computing were introduced. Similar to what happened in the past, those concepts have to be
carefully extended to ensure that automation requirements are addressed. With the proliferation of networked devices with increasing computing capabilities, the trend of decentralization in industrial automation systems will increase even further in the future (Figs. 23.18 and 23.19). This situation will lead to an increased interest in autonomic systems with self-X capabilities, where X stands for alternatives such as configuration, organization, optimization, healing, etc. The idea is to develop automation systems and devices that are able to manage themselves given high-level objectives. Those systems should have sufficient degrees of freedom to allow a self-organized behavior, which will adapt to dynamically changing requirements. The ability to deal with widely varying time and resource demands while still delivering dependable and adaptable services with guaranteed temporal qualities is a key aspect for future automation systems [79]. With the emergence of technologies that provide latencies in the range of milliseconds, such as 5G, interest in connected vehicle networks has been growing. Different modes of communication are explored, such as vehicle-to-vehicle (V2V),
Fig. 23.18 The wireless factory
Fig. 23.19 Indoor positioning systems in the smart factory: Ubisense UWB real-time positioning, RFID grid for mobile workshop navigation, and Cricket ultrasonic indoor location. (From [80])
vehicle-to-infrastructure (V2I), vehicle-to-pedestrian (V2P), and vehicle-to-everything (V2X). This type of communication will be used in the coming years to increase the safety of drivers and pedestrians and to reduce traffic congestion [10].
Acknowledgments The authors would like to acknowledge the support of Gustavo Cainelli, Marco Meier, and Professor Dr.-Ing. Ulrich Jumar.
References
1. Neumann, P.: Communication in industrial automation – what is going on? In: INCOM 2004, 11th IFAC Symposium on Information Control Problems in Manufacturing, Salvador da Bahia (2004)
2. Kumar, K., Zindani, D., Davim, J.P.: Industry 4.0 – Developments Towards the Fourth Industrial Revolution. Cham, Switzerland: Springer (2019) 3. Schroeder, G.N., Steinmetz, C., Pereira, C.E., Espindola, D.B.: Digital twin data modeling with automationML and a communication methodology for data exchange. IFAC-PapersOnLine. 49(30), 12–17 (2016). ISSN 2405-8963 4. Grieves, M.: Digital twin: manufacturing excellence through virtual factory replication. http://www.apriso.com (2014). Accessed 14 Sept 2020 5. Lee, E.A.: Cyber physical systems: design challenges. In: Object Oriented Real-Time Distributed Computing (ISORC) 11th IEEE International Symposium on, pp. 363–369 (2008) 6. Baheti, R., Gill, H.: Cyber-physical systems. In: The Impact of Control Technology, vol. 12, pp. 161–166 (2011) 7. Shafto, M., Conroy, M., Doyle, R., Glaessgen, E., Kemp, C., LeMoigne, J., Wang, L.: Draft modeling, simulation, information technology & processing roadmap. Technology Area, 11 (2010)
558 8. Shafto, M., Conroy, M., Doyle, R., Glaessgen, E., Kemp, C., LeMoigne, J., Wang, L.: NASA Technology Roadmap: Modeling, Simulation. Information Technology & Processing Roadmap Technology Area, 11 (2012) 9. Zhou, H., Xu, W., Chen, J., Wang, W.: Evolutionary V2X technologies toward the internet of vehicles: challenges and opportunities. Proc. IEEE. 108(2), 308–323 (2020). https://doi.org/10.1109/ JPROC.2019.2961937 10. Zeadally, S., Guerrero, J., Contreras, J.: A tutorial survey on vehicle-to-vehicle communications. Telecommun. Syst. 73 (2020). https://doi.org/10.1007/s11235-019-00639-8 11. Fieldcomm Group: HART protocol. https://fieldcommgroup.org/ (2020). Accessed 14 Sept 2020 12. ASi: Actuator/Sensor Interface. http://www.as-interface.net/ System/Description (2020). Accessed 14 Sept 2020 13. CAN: IEC 11898 Controller Area Networks. http://de.wikipedia.o rg/wiki/Controller_Area_Network (2020). Accessed 14 Sept 2020 14. IO-Link Community: Interface and System Specification, V 1.1.3 (2020) 15. IO-Link Integration: Part 1, V. 1.00; PROFIBUS International 2.812 (2008) 16. IEC: IEC 61158 Ser., Edition 3: Digital data communication for measurement and control – fieldbus for use in industrial control systems (2003) 17. IEC: IEC 61784–1: Digital data communications for measurement and control – part 1: profile sets for continuous and discrete manufacturing relative to fieldbus use in industrial control systems (2003) 18. PROFIBUS: PROFIBUS System Description Technology and Application. http://www.profibus.com/ (2016). Accessed 20 Sept 2020 19. ANSI/TIA/EIA-485-A: Electrical characteristics of generators and receivers for use in balanced digital multipoint systems (1998) 20. ODVA: http://www.odva.org/ (2020) Accessed 20 Sept 2020 21. MODBUS TCP/IP. IEC 65C/341/NP: Real-Time Ethernet: MODBUS-RTPS (2004) 22. ControlNet International and Open DeviceNet Vendor Association. Ethernet IP: Ethernet/IP specification, Release 1.0 (2001) 23. ODVA. EtherNet/IP: Network Infrastructure for EtherNet/IP™ – Introduction and Considerations. Publication Number: PUB00035R0. Copyright © 2007 Open DeviceNet Vendor Association, Inc. https://www.odva.org/wp-content/uploads/2020/ 05/PUB00035R0_Infrastructure_Guide.pdf. Accessed 25 Sept 2020 24. PROFIBUS Nutzerorganisation: PROFInet Architecture Description and Specification, Version V 2.0., PROFIBUS Guideline (2003) 25. Neumann, P.: Communication in industrial automation. What is going on? Control. Eng. Pract. 15, 1332–1347 (2007) 26. PROFINET: IEC 65C/359/NP: Real-Time Ethernet: PROFINET, Application Layer Service Definition and Application Layer Protocol Specification (2004) 27. Neumann, P., Pöschmann, A.: Ethernet-based real-time communication with PROFINET. WSEAS Trans. Commun. 4(5), 235–245 (2005) 28. IEC: Time-critical Control Network: IEC 65C/353/NP: Real-Time Ethernet: Tcnet (2007) 29. IECT: Vnet/IP: IEC 65C/352/NP: Real-Time Ethernet: Vnet/IP (2007) 30. IEC: POWERLINK: IEC 65C/356/NP: Real-Time Ethernet: POWERLINK (2007) 31. IEC: ETHERCAT: IEC 65C/355/NP: Real-Time Ethernet: ETHERCAT (2007) 32. Manges, W., Fuhr, P. (eds.): PROFINET/Isochronous Technology. IFAC Summer School Control, Computing, Communications, Prague (2005)
C. E. Pereira et al. 33. Jasperneite, J., Shehab, K., Weber, K.: Enhancements to the time synchronization standard IEEE-1588 for a system of cascaded bridges. In: 5th IEEE International Workshop on Factory Communication Systems, WFCS2004, Vienna, pp. 239–244 (2004) 34. IEC: EtherNet/IP with time synchronization: IEC 65C/361/NP: Real-Time Ethernet (2007) 35. IEC: IEC 61588: Precision clock synchronization protocol for networked measurement and control systems (2002) 36. IEC: SERCOS III: IEC 65C/358/NP: Real-Time Ethernet: SERCOS III (2007) 37. PROFIBUS User Organization: PROFINET over TSN. Guideline. Version 1.21 Order No.: 7.252. PNO. https://www.profibus.com/ download/profinet-over-tsn/ (2020). Accessed 17 Sept 2020 38. Time-Sensitive Networking (TSN) Task Group: TSN Standards, Ongoing TSN Projects and Meetings and Conference Calls. https:/ /1.ieee802.org/tsn/ (2020). Accessed 17 Sept 202 39. PROFIBUS International: Advanced Physical Layer. https:// www.profibus.com/index.php?eID=dumpFile&t=f&f=107608&to ken=97abc376e3e83e010df1902d93deba94d1a30d22 (2020). Accessed 14 Sept 2020 40. Mahnke, W., Leitner, S.-H., Damm, M.: OPC Unified Architecture. Springer, Berlin/Heidelberg (2009). https://doi.org/10.1007/9783-540-68899-0. ISBN 978-3-540-68898-3, e-ISBN 978-3-54068899-0 41. OASIS: MQTT Version 3.1.1: OASIS Standard. http://docs.oasisopen.org/mqtt/mqtt/v3.1.1/os/mqtt-v3.1.1-os.html (2014). Accessed 14 Sept 2020 42. MQTT: The Standard for IoT Messages. http://mqtt.org/ (2020). Accessed 14 Sept 2020 43. Höme, S., Diedrich, C.: Analyse des Zeitverhaltens von Steuerungssystemen mit zyklischer Kommunikation. In: Kommunikation in der Automation. Lemgo, p. 7. Kongress: KOMMA; 3 (Lemgo) (2012) 44. IEEE: Wireless LAN: IEEE 802.11: IEEE 802.11 Wireless Local Area Networks Working Group for WLAN Standards. http:// ieee802.org/11/ (2020). Accessed 20 Sept 2020 45. IEEE: Personal Area Networks: IEEE 802.15: IEEE 802.15 Working Group for Wireless Personal Area Networks (2020), Task Group 1 (TG1), http://www.ieee802.org/15/pub/TG1.html15/pub/ TG1.html. Accessed 20 Sept 2020 and Task Group 4 (TG4). http:/ /www.ieee802.org/15/pub/TG4.html. Accessed 20 Sept 2020 46. Bluetooth Special Interest Group: Bluetooth technology description. https://www.bluetooth.com/ (2020). Accessed 18 Aug 2020 47. HART Communication Foundation: HART 7 – HART Protocol Specification, Revision 7. https://fieldcommgroup.org/ (2007). Accessed 20 Sept 2020 48. ZigBee Alliance: Zigbee. https://zigbeealliance.org/ (2020). Accessed 20 Sept 2020 49. Butun, I. (ed.): Industrial IoT – Challenges, Design Principles, Applications and Security. Springer Springer International Publishing (2020) 50. 5G-ACIA: 5G for Connected Industries and Automation – White Paper. ZVEI Frankfurt (2020) 51. ABI Research: Forecast and Information on Emerging Wireless Technologies. http://www.abiresearch.com (2020). Accessed 20 Sept 2020 52. Stallings, W.: IEEE 8O2.11. Wireless LANs from a to n. IT Prof. 6(5), 32–37 (2004) 53. Mustafa Ergen: IEEE 802.11 Tutorial. https://www-inst.eecs .berkeley.edu/∼ee228a/fa03/228A03/802.11%20wlan/80 2.11_tutorial.pdf (2002). Accessed 20 Sept 2020 54. Willig, A., Matheus, K., Wolisz, A.: Wireless technology in industrial networks. Proc. IEEE. 93(6), 1130–1151 (2005) 55. Neumann, P.: Wireless sensor networks in process automation, survey and standardisation. atp, 3(49), 61–67 (2007). (in German)
56. Zand, P., Chatterjea, S., Das, K., Havinga, P.: Wireless industrial monitoring and control networks: the journey so far and the road ahead. J. Sens. Actuator Netw. 1, 123–152 (2012) 57. Liang, W., Zheng, M., Zhang, J., Shi, H.: WIA-FA and its applications to digital factory: a wireless network solution for factory automation. Proc. IEEE. 107(6), 1053–1073 (2019) 58. Griessmann, J.-L.: HART protocol rev. 7 including WirelessHART. atp Int.-Autom. Technol. Pract. 2, 21–22 (2007) 59. Valadão, Y.d.N., Künzel, G., Müller, I., Pereira, C.E.: Industrial wireless automation: overview and evolution of WIA-PA. IFACPapersOnLine, pp. 175–180 (2018) 60. Dust Networks: TSMP Seminar. Online-Präsentation (2006) 61. Dust Networks: Technical Overview of Time Synchronized Mesh Protocol (TSMP). White Paper (2006) 62. Muller, I., Netto, J.C., Pereira, C.E.: WirelessHART field devices. IEEE Instrum. Meas. Mag. 14(6), 20–25 (2011) 63. Darroudi, S.M., Gomez, C.: Bluetooth low energy mesh networks: a survey. Sensors. 17, 1467 (2017) 64. Tosi, J., Taffoni, F., Santacatterina, M., Sannino, R.: Performance evaluation of Bluetooth low energy: a systematic review. Sensors. 17, 2898 (2017) 65. Sigfox: Sigfox Technology. https://www.sigfox.com/en/whatsigfox/technology (2020). Accessed 18 Aug 2020 66. Semtech: Semtech LoRa Technology Overview. https:// www.semtech.com/lora (2020). Accessed 20 Sept 2020 67. Lora Alliance: What is the LoRaWAN® Specification?. https:// lora-alliance.org/about-lorawan (2020). Accessed 20 Sept 2020 68. Weightless SIG: Weightless – Setting the Standard for IoT. http:// www.weightless.org/ (2020). Accessed 20 Sept 2020 69. 3GPP: Overview of release 13. https://www.3gpp.org/release-13 (2016). Accessed 20 Sept 2020 70. Chettri, L., Bera, R.: A comprehensive survey on internet of things (IoT) toward 5G wireless systems. IEEE Internet Things J. 7(1), 16–32 (2020) 71. Neumann, P., Pöschmann, A., Flaschka, E.: Virtual automation networks, heterogeneous networks for industrial automation. atp Int. Autom. Technol. Pract. 2, 36–46 (2007) 72. European Integrated Project: Virtual Automation Networks – European Integrated Project FP6/2004/IST/NMP/2 016696 VAN. Deliverable D02.2-1: Topology Architecture for the VAN Virtual Automation Domain (2006) 73. Jovan, I., Sharif, K., Li, F., Latif, Z., Karim, M.M., Biswas, S., Wang, Y., Nour, B.: A survey of network virtualization techniques for internet of things using SDN and NFV. ACM Comput. Surv. (2020). https://doi.org/10.1145/3379444 74. Mekikis, P.-V., Ramantas, K., Sanabria-Russo, L., Serra, J., Antonopoulos, A., Pubill, D., Kartsakli, E., Verikoukis, C. (eds.): NFV-enabled experimental platform for 5G tactile internet support in industrial environments. IEEE Trans. Ind. Inf. (2019). https:// doi.org/10.1109/TII.2019.2917914 75. Barakabitze, A., Ahmad, A., Hines, A., Mijumbi, R.: 5G network slicing using SDN and NFV: a survey of taxonomy, architectures and future challenges. Comput. Netw. 167, 106984 (2019). https:// doi.org/10.1016/j.comnet.2019.106984 76. IBM: Standards and Web services. https://www.ibm.com/de-de (2020). Accessed 20 Sept 2020 77. Wilkes, L.: The web services protocol stack. Report from CBDI Web Services Roadmap. http://www.everware-cbdi.com/ (2005). Accessed 20 Sept 2020 78. European Integrated Project: Virtual Automation Networks – European Integrated Project FP6/2004/IST/NMP/2 016696 VAN. Deliverable D07.2-1: Integration Concept, Architecture Specification (2007)
79. Morel, G., Pereira, C.E., Nof, S.Y.: Historical survey and emerging challenges of manufacturing automation modeling and control: a systems architecting perspective. Annu. Rev. Control. 47, 21–34 (2019)
80. Zuehlke, D.: Smart Factory from vision to reality in factory technologies. IFAC Congress, Seoul (2008)
Further Reading
Books
Alam, M., Shakil, K.A., Khan, S. (eds.): Internet of Things (IoT) – Concepts and Applications. Springer (2020)
Franco, L.R., Pereira, C.E.: Real-time characteristics of the foundation fieldbus protocol. In: Mahalik, N.P. (ed.) Fieldbus Technology: Industrial Network Standards for Real-Time Distributed Control, pp. 57–94. Springer, Berlin/Heidelberg (2003)
Liu, J.W.S.: Real-Time Systems. Prentice Hall, Upper Saddle River (2000)
Marshall, P.S., Rinaldi, J.S.: Industrial Ethernet. ISA (2004)
Pedreiras, P., Almeida, L.: Approaches to enforce real-time behavior in Ethernet. In: Zurawski, R. (ed.) The Industrial Communication Technology Handbook. CRC, Boca Raton (2005)
Various Communication Standards
IEC: IEC 61158 Ser., Edition 3: Digital data communication for measurement and control – Fieldbus for use in industrial control systems (2003)
IEC: IEC 61508: Functional safety of electrical/electronic/programmable electronic safety-related systems (2000)
IEC: IEC 61784-1: Digital data communications for measurement and control – Part 1: Profile sets for continuous and discrete manufacturing relative to Fieldbus use in industrial control systems (2003)
PROFIBUS Guideline: PROFInet Architecture Description and Specification, Version V 2.0. PNO, Karlsruhe (2003)
Carlos Eduardo Pereira received the Dr.-Ing. degree from the University of Stuttgart, Germany (1995), and also holds an MSc in computer science (1990) and a BS in electrical engineering (1987), both from UFRGS in Brazil. He is a full professor of automation engineering at UFRGS and director of operations at EMBRAPII, a Brazilian innovation agency. He also acts as vice president for Technical Activities of the International Federation of Automatic Control (IFAC). He received in 2012 the Friedrich Bessel Research Award from the Humboldt Foundation. His research focuses on methodologies and tool support for the development of distributed real-time embedded systems, with special emphasis on industrial automation applications.
Christian Diedrich received a German Diplom Ingenieur degree in electrical engineering with a minor in automation (1985) and a Ph.D. degree (1994) from the Otto-von-Guericke-University in Magdeburg, Germany. He acts as deputy head of the Institut für Automation und Kommunikation (ifak) and as chair of integrated automation at the Otto-von-Guericke-University, both in Magdeburg, Germany. His activity field covers the entire engineering life cycle of field devices of production systems. He has worked on many German and European R&D projects in the areas of industrial communication and engineering of automation systems and is active in national and international standardization activities.
Peter Neumann received his PhD in 1967. He was a full professor at the Otto-von-Guericke-University Magdeburg from 1981 to 1994. In 1991 he founded the applied research institute "ifak" (Institut für Automation und Kommunikation), which he headed until 2004. His interests are industrial communications, device management, and formal methods in the engineering of distributed computer control systems.
Product Automation and Innovation
24
Kenichi Funaki
Contents
24.1 Historical Background of Automation . . . 562
24.2 Definition of Product Automation . . . 563
24.3 Fundamental Core Functions . . . 563
24.4 Innovation of Product Automation in the IoT Age . . . 564
24.4.1 Technology as a Driver to Change Life and Industry . . . 564
24.4.2 Expansion of Automation Applications and Innovations . . . 565
24.4.3 Benefit and Value of Automation . . . 566
24.4.4 Business Trends and Orientation . . . 567
24.4.5 Products in Experience-Value Economy . . . 568
24.4.6 New Requirements for Product Automation . . . 568
24.5 Modern Functional Architecture of Automation . . . 569
24.6 Key Technologies . . . 570
24.6.1 Localization and Mapping . . . 570
24.6.2 Edge Intelligence . . . 573
24.6.3 OTA (Over-the-Air) Technology . . . 574
24.6.4 Anomaly Detection . . . 576
24.7 Product and Service Lifecycle Management in the IoT Age . . . 578
24.7.1 Management Model in the Experience-Value Economy . . . 578
24.7.2 PSS (Product-Service System) Discussions . . . 579
24.7.3 How to Realize Valuable Customer Experience . . . 580
24.7.4 Business Model Making . . . 581
24.8 Conclusion and Next Topics . . . 581
References . . . 582
Abstract
Product automation is the attempt to equip products with functionality so that they can fulfill their tasks fully or partly in an automated way. Thanks to recent advances in
edge intelligence and cognitive technologies, automated products are becoming autonomous and mobile. Automated products with autonomous mobility can operate and move in open, unstructured fields such as roads, cities, mountains, the air, and any terrain. They bring benefits not only in efficiency and productivity but also in work and lifestyle, freeing workers from dangerous and harsh environments and people from inconvenient and cumbersome processes. In addition, as the world is shifting to the experience-value economy, people's interest is focused more on the outcome obtained by using a product than on its function and performance. Since people's expectations for the outcome depend on the individual case, a product must be software defined and IoT connected so that it can be configured to the user's requirements and can monitor and adjust its performance to best fit the customer's preferences and provide the utmost experience. In this chapter, we provide an overview of modern product automation and its innovations, together with the recent business trends that drive the need for software-defined and IoT-connected product automation with autonomous and intelligent properties. Key technologies to realize such properties include localization and mapping, edge intelligence, OTA (over the air), and anomaly detection, which are shown with some cases of implementation. Finally, we discuss the transformation of the business model of product automation from a "one-time make & sell" model to a "continuous engagement" model driven by the experience-value economy, and its impact on product and service lifecycle management.
Keywords
Software defined · IoT connected · Localization and mapping · Edge intelligence · OTA (over the air) · Anomaly detection · Experience-value economy
K. Funaki: Corporate Venturing Office, Hitachi Ltd., Tokyo, Japan. e-mail: [email protected]
24.1
Historical Background of Automation
Historical roots of automation technology can be traced back to the three origins of automation needs: (1) industrial automation, (2) robotization under governmental initiatives, and (3) life and office automation. It goes without saying that the industrial robot has played a central role in the history of industrial automation. The first modern industrial robot was the Unimate, invented by George Devol in 1954 [1]; its commercial version was sold to General Motors in 1960 and realized an automated die-casting process at the Ewing Township plant [2]. The early robots were pneumatic and hydraulic with predetermined motions. With advances in electric power transmission and programmable controllers, the first microcomputer-controlled robot, the T3, appeared in 1974 [2]. Most of the early robots in factories were for the automation of material handling and other simple manipulations at a fixed location. In the late 1960s, the mobile robot Shakey was developed at the Artificial Intelligence Center of the Stanford Research Institute; it was one of the first robots focusing on movement and travel [3]. Since then, sensing and routing functions have been the essential keys to enhancing the mobility of robots. Thanks to the progress of sensing and routing technologies in the last two decades, together with continuous cost reduction, the journey of mobile robots has now reached the level of advanced industrial unmanned mobile robots such as AGVs (automated guided vehicles) and unmanned forklifts. Governmental initiatives have also been strong drivers for advancing automation technologies, especially robotization. Governmental initiatives include robotization in the military and aerospace domains, and for the public interest. Since remotely controlled weapons appeared in the 1930s, continuous R&D investment in military robotization has brought significant advancement, especially in all-terrain and aerial robots and micro-machines. Although many are not disclosed, there have been many streams of military technology development in various countries, some of which have been converted to civil use, such as search and rescue [4], cyber security, and the trajectory computation developed for homing devices. For aerospace missions, all-terrain robots, UAVs (unmanned aerial vehicles), and robot arms have been intensively developed. Not only complex and reliable robotics but also technologies for special needs in space, such as radiation-resistant devices, are remarkable achievements. In addition to military and aerospace applications, public benefit is also a motivation for governments to advance automation technologies. Construction and maintenance of infrastructure, safety and security of towns, and processes of administration and public services are good areas for automation. Many challenges, such as water leaks, frequent blackouts, traffic jams, and intricate manual processes, call for automation technologies. Since it is not the purpose of this chapter
Fig. 24.1 The "Ultrasonic Bath" made by Sanyo Electric. (By Tokumeigakarinoaoshima – Own work, CC0, https://commons.wikimedia.org/w/index.php?curid=37060176)
to cover the entire history of government-initiated automation technologies, it is simply noted here that government initiatives have played an important role in R&D for automation technologies, which requires heavy upfront investment under uncertainty. The other pillar of automation technology development has been in the life and office space. Typical examples include the automation of home appliances like washing machines, cookers, and cleaners, and robots for the surveillance and security of buildings and towns. The automatic clothes washing machine was introduced on a commercial basis in the US market in 1911, and the first microwave oven was introduced by Raytheon in 1947 [5]. At the Japan World Exposition '70, held in Osaka in 1970, a future-looking washing machine with a flavor of science fiction named the "Ultrasonic Bath" was exhibited (Fig. 24.1). The machine, made by Sanyo Electric, could automatically wash and rinse the human body and thus drew much attention at the time. Life and office have been a good testing ground for human-machine interaction as well. Since there are many kinds of tasks to be done through human-machine interaction in life and office, researchers and engineers have to design the best configuration of the total system as a human-machine combination. In designing the best human-machine interaction, human senses and feelings are important, as well as ergonomic factors and user interface design. Kansei engineering is one of the approaches that emerged in human-machine interaction design, where "Kansei" is a Japanese word meaning sensibility, feelings, and cognition [6]. Automation in life and office is not only about hardware but also about software. When you ask for help upon trouble in withdrawing cash from an ATM, you may talk through the microphone on the ATM to a robot that can chat and listen
to your request or complaint to understand the situation, and then provide you with appropriate guidance autonomously in accordance with the kind of trouble you are facing. Now the ATM becomes a true "Automated Teller Machine!" Conversation and interaction technologies powered by NLP (natural language processing), together with backend process execution by RPA (robotic process automation), are the key elements to realize automation in life and office.
24.2
Definition of Product Automation
In this chapter, we follow the definition of product automation made by Pinnekamp [7] in the former edition of the Springer Handbook of Automation, i.e., product automation is the attempt to equip products with functionality so that they can fulfill their tasks fully or partly in an automated way. The tasks include various kinds of work, assignments, and missions in industry and society, as already seen in the previous section. As Pinnekamp [7] pointed out, product automation can be confused with production automation or automation products. Again following Pinnekamp [7], production automation is the automation of individual steps or of the whole chain of steps necessary to produce, and products used to automate production are called automation products, mainly in industries that produce such tools or machines as their products.
24.3
Fundamental Core Functions
Fundamental body to realize product automation is equipped with the minimum essential functions to fulfill a task which include identifying and measuring a target or an object to work on, generating and applying force to actuate the product, and controlling all the pieces necessary to execute processes to fulfill the task as well as observing and inspecting its environment and surroundings. Figure 24.2 shows the traditional functional blocks required for product automation [7]. Sensors identify and measure objects and observe and inspect environment and surroundings to feed sensed data to the control system so as to let the product take right actions and operate effectively. There is a large variety of sensing types by the gauged properties such as motional (linear and rotary), kinetic (force, torque, acceleration and velocity, pressure, and vibration), physical (distance, range, angle, density, temperature, and flow), electrical (voltage, current, phase, and frequency), optical (luminous intensity, color, and image), acoustic, haptic sensing, and others. Actual implementation of sensors is much dependent on the application cases.
Sensors
24
Fig. 24.2 Traditional functional blocks of product automation [7]
For product automation, elements of a control system depend on types of controlled components and their logical and physical implementation. Minimum core of controlling unit for automation of physical action is motion control. Motion controller executes control programs to generate and send commands to actuators, mostly motors, by determining necessary motion (direction and speed) to reduce gap between the feedback on sensed position and the target position. For whole actions to fulfill the task, operational control must be implemented, which could be a logical controller for discrete event control or a PID (proportional integral derivative) controller for continuous process control. For further intelligent logical behavior like selection of relevant picking objects and self-protection against danger, computational control with rule-based engine could be a part of the control system. Once an actuator receives a command from the controller, it takes physical actions through mechanisms installed in the product. The basic categories of actuator can be mechanical, hydraulic, pneumatic, and electric. Mechanical actuator is basically composed of gears and springs, such ones introduced in early versions of elevators and excavators. Hydraulic actuator is made of pumps and valves through which hydraulic fluid conveys power and force to moving parts. Generally speaking, hydraulic system is effective for heavy duty, thus many old mechanical actuators have been replaced with hydraulic systems as seen in elevators and excavators. Pneumatic actuator is a system that uses compressed air to convey power and force. The most widely used mechanism is electric actuator where motors and drives are the core components. As the size of electric motor is getting smaller and more efficient thanks to advances in materials and architecture, various kinds of small-sized multiaxis motion can be automated. Other than the actuator types above, there are various actuators for specific purposes using unique properties like magnetic force, chemical reaction, and thermodynamic characteristics.
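As a generic illustration of the feedback idea behind the motion and process control functions described above, and not tied to any particular product, a discrete-time PID controller can be sketched in a few lines of Python; the gains, sampling time, and the crude one-line plant model are arbitrary example values.

class PID:
    # Minimal discrete PID controller: u = Kp*e + Ki*integral(e) + Kd*de/dt
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, target, measured):
        error = target - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy closed loop: the actuator command nudges a simulated position each cycle
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.01)
position = 0.0
for _ in range(500):
    command = pid.update(target=1.0, measured=position)
    position += command * pid.dt      # crude stand-in for motor and load dynamics
print(round(position, 3))             # approaches the 1.0 target

In a real product the command would be sent to a motor drive and the measured position would come from the sensors discussed earlier in this section; logical and rule-based control layers would sit on top of such loops.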
24
None of those functions above can be active without energy. Energy supply function is not just to receive and send energy to the components of the product, but an assembly of managed elements to supply energy efficiently and (recently) eco-friendly, which may include energy generator and storage, pulley and belt (if mechanical), heat exchanger (if thermal), and converter and inverter (if electric).
24.4
Innovation of Product Automation in the IoT Age
In this section, we discuss how technologies, applications, and business trends drive innovation of product automation, and then derive new requirements for product automation.
24.4.1 Technology as a Driver to Change Life and Industry Technology is definitely one of strong drivers to change people’s life and industry. In agriculture, which has been a fundamental human occupation since the era before industrialization, all the farming tasks from sowing to harvesting were done by human workforce in the age of “The Sower” or “The Gleaners” by Millet. Now, through a hundred-year history of mechanization, every aspect of farm operation including farm tractors, water sprinklers, weeding and harvesting machines, and air and humidity control in a greenhouse is robotized and automated. All the physical elements and conditions including soil, seeds, water, temperature, and sunshine as well as crops are monitored and measured to determine and take optimal actions for the best yield performance. Once the machines and robots in the farm are connected by IoT and equipped with AI-enabled edge intelligence, they can operate autonomously and furthermore cooperate with each other. Now, agro-drone can sow seeds and spray chemicals 50 times faster than human farmers. Autonomous machines free farmers from burden and shift the human role from a mere labor source to a value-creating resource. Of course, technology’s impact on industry has not been confined to agriculture. Through multiple industrial revolutions in the history, technologies have freed people from harsh and dangerous work or tediously monotonous and noncreative work. As the main drivers for industrial revolution have changed from energy to mechanization, electrification, computerization, and recently networked intelligence, available tools for automation and scope of their applications have been expanding across various industries. If we look at transportation space, it can be seen how technological drivers of transportation revolution have been changing. In the era of the first Industrial Revolution from the eighteenth century up to the mid-nineteenth century,
steam engine completely changed the power train of transportation, shifting water transport from sailboat to steamship and land transport from horse tramway to steam locomotive. Together with the evolution of the power train, mechanization enabled highly functional and powerful vehicles, such that the horse-drawn carriage was completely replaced by the steel automobile with a gasoline engine. Mechanization even created new types of transportation, such as ropeways in the mountains (aerial transportation) and elevators in buildings (vertical transportation). Electrification was another enabler of efficient city transportation such as tramcars and trolley cars. Since electrification of a railway system saves weight in the traction unit of a train compared with a combustion-engine car, most modern high-speed trains are powered by electricity. As the wave of computerization extended to the transportation industry, the level and quality of service and management improved drastically. Ticketing and booking services, scheduling and operation planning, dispatching and fleet management, and traffic and safety management are typical examples that were greatly advanced by computerization. Now, with rapid advances in AI and connectivity, all modes of transportation are heading toward automation. Automation of ships, trains, and vehicles requires the capability of sensing objects and surroundings, and connectivity with other moving objects and infrastructure as well as with their controlling systems. Each automated ship, train, or vehicle has to think and determine the best action to take according to the observed situation, and here IoT and AI play important roles. Automation of operation and control frees human operators from time-consuming and tiring manipulation work and lets people spend their travel time efficiently and creatively. Rio Tinto, a world-leading mining company, launched the world's first autonomous heavy-freight rail operation, with special locomotives carrying an onboard driver module, in 2018. The train runs 280 km from the mining site to the port, hauling 18,000 t of ore, and is monitored constantly from a central operations center about 1000 km away [8]. Stena Line, a world-class ferry service company, announced a pilot study of an AI-assisted vessel in 2018, which is expected to reduce fuel consumption without relying on the human captain's knowledge and thus aims to minimize environmental impact [9]. This is a good example showing that automation is not only about operational efficiency or safety but also belongs to the broader context of a company's management policy, which leads to industrial transformation. In addition to these challenges in existing transportation services, the introduction of autonomous vehicles for next-generation city transportation is no longer a dream but, without doubt, a journey of less than ten years. Although it may proceed as a step-by-step societal challenge, the transformation of city transportation will surely change our lives.
24.4.2 Expansion of Automation Applications and Innovations

Applications of automation are expanding into many segments, and various innovations are emerging in many ways, encompassing both technology advancement and business-model revolution.
Automation Applications and Innovations in Industry
Industrial automation, especially in manufacturing and logistics, with various automated entities such as robots, NC machines, and AGVs, is the most prevalent application area. The latest innovative technologies for industrial automation include (1) 3D vision sensing and recognition, (2) self-learning and programming-less operation, and (3) autonomous coordinated control. Kyoto Robotics is one of the leading companies in 3D vision and recognition technology and its application to pick-and-place work in warehouses. Its 3D object recognition technology realizes fully automated palletizing and depalletizing robots for mixed-size bulk goods, one of the most difficult pick-and-place applications [10]. One hurdle in implementing automation is the burden of labor-intensive setup work, which increases CAPEX. In setting up a robotic system, production engineers, with the help of a robot system integrator, have to design and teach its motion and movement in advance by setting various parameters adjusted to the real situation and conditions. To mitigate this burden, self-learning or programming-less approaches are in strong demand. Direct teaching is the most straightforward alternative, in which an engineer or operator physically teaches the movement and trajectory of the robot by manipulating a moving part such as its hand; this is said to be effective in human-robot cooperation cases [11]. A more intuitive and advanced method is proposed by Southie Autonomy, aiming for a "no-code robot solution" [12]. Southie Autonomy is developing intelligent software backed by AI and AR (augmented reality) that enables users to teach robots what to do with gestures and voice commands. Especially where machines and human workers collaborate in a shared physical space, motion and movement harmful to the human workers must be avoided so that they can co-work with the machine without risk or fear of danger. For such situations, Realtime Robotics proposed the interesting concept of "robot motion planning on a chip" [13], which enables collision-free dynamic motion planning and control at the millisecond level with less preprogramming. In establishing automation of a complex system with a unified purpose, such as a factory or warehouse, each element
composing the system has to be well controlled so that all elements co-work in alignment with that purpose. Since the system is by nature dynamic, exposed to varying external conditions on one hand and subject to internal system status on the other, it must be capable of coordinating all its elements so that the entire system behaves effectively in any situation. The latest advances in such coordinated automation appear in use cases where multiple robots of various kinds autonomously co-work toward a purpose. Hitachi and the University of Edinburgh developed a multiple-AI coordination control technology that integrates the control of picking robots and AGVs and realizes smooth collaborative work, picking specified items from goods carried by an AGV without the AGV stopping. It is reported that a picking system controlled by multiple-AI coordination reduced operation time by 38% [14].
Home Automation Applications and Innovations
Home automation is another highly prevalent area of automation applications. Automated home appliances such as washing machines, air conditioners, and cookers are the most familiar examples. Recently, thanks to advances in AI-enabled human interfaces, various requests to such home appliances and devices can be conveyed by spoken instructions such as "turn on the light" and "lower the temperature to 20 degrees centigrade." Furthermore, once a smart house is equipped with AI-enabled sensors that detect and understand the context of people's movement and room occupancy, it can control home appliances and devices according to the situation automatically, without human instruction. Intellithings offers a smart room sensor, connected to a smart phone by Bluetooth, that detects the location and movement of a person in the house to execute personalized control of the relevant appliances and devices automatically [15]. For instance, lights, thermostats, or speakers automatically turn on when a person enters the room and turn off when the person leaves, with a mode set according to the person's preference. The home is also a good application field for automation with autonomous mobility. The lawn mower and the house cleaner are typical embodiments. Innovation in self-localization and mapping is the key to accurate and reliable movement. Roomba, the iconic robot cleaner invented by iRobot, was one of the first to employ visual mapping technology together with navigation algorithms in a home application [16].

Automation Applications and Innovations for Autonomous Vehicles and Transport
As a rapidly developing area of automation, we cannot ignore autonomous vehicles and transport. In terms of autonomous vehicles, we are still at the entrance to Level 3 of the well-known SAE International Standard J3016 [17], where
levels of automation for on-road vehicles are defined as Level 0 (no automation), Level 1 (driver assistance), Level 2 (partial automation), Level 3 (conditional automation), Level 4 (high automation), and Level 5 (full automation). (Note: SAE International does not use the term "autonomous," since its usage and scope of meaning vary by user and case, ranging from full driving automation (Level 5) to the capacity for self-governance in jurisprudence.) An intuitive interpretation of the levels is "hands on" for Level 1, "hands off" for Level 2, "eyes off" for Level 3, "mind off" for Level 4, and "steering wheel optional" for Level 5. The definition of the levels is based on the varying roles of the human driver and the driving automation system, determined by the allocation of the dynamic driving task and its fallback performance between the system and the human driver (or other users if necessary). The fallback is the response by the driver or user to either perform the driving task or achieve a minimal-risk condition upon a system failure or an unintended event outside the scope and assumptions taken into account at design time. If no fallback performance by a driver or user is expected, the driving automation system takes fallback actions to achieve a minimal-risk condition. The technical challenges in driving automation reside in localization and mapping as well as visual recognition of the objects surrounding the vehicle. While the ultimate Level 5 autonomous vehicle seems to need time to reach the quality of practical use with full acceptance by society, some other transport services have been introducing autonomous operation. Public transportation services using exclusive tracks, such as monorail systems and elevated city movers, already enjoy the benefits of autonomous operation with unmanned trains. Recently, an increasing number of BRT (bus rapid transit) service providers and authorities are testing autonomous buses on exclusive lanes and discussing when to put them into practice [18]. Long-distance industrial railways also have strong needs for autonomous operation with driverless trains. As mentioned in Sect. 24.4.1, Rio Tinto introduced autonomous heavy-freight rail operation on the 280 km railway from mine to port. The total length of the autonomous train, 240 ore cars pulled by three locomotives equipped with onboard driver modules, reaches 2.5 km. It travels through mostly inhospitable terrain without a driver on board, monitored instead from the central operations center 1000 km away [19].
Automation Applications and Innovations in Logistics and Delivery
Logistics and delivery is another hot spot with many real-world applications of automation. In a warehouse, many picking and shipping robots are at work, and at a port, a large portion of dockside loading and unloading is automated with cranes and loaders. In addition to these
current practices, truck platooning is one of the most immediate future applications of automation in the logistics space. ACEA, the European Automobile Manufacturers' Association, defines truck platooning as the linking of two or more trucks in a convoy, using connectivity technology and automated driving support systems [20]. It is expected to bring various benefits, ranging from safer road transportation to more efficient energy consumption with lower CO2 emissions and more effective use of roads with fewer traffic jams. Last-mile delivery is another application expected in the immediate future. Amazon's famous trial of last-mile delivery using drones showed its practical value. For sidewalk delivery, such as short-distance food delivery, many robotics companies, many of them startups, have developed various kinds of autonomous delivery robots and are conducting real-life testing. For instance, TechCrunch reported that Starship Technologies had made more than 100,000 commercial deliveries by August 2019 using its unique rolling (slow-speed) autonomous delivery robot [21]. This sidewalk delivery robot travels at a top speed of four miles per hour and represents a completely new category of robot in the age of AI and IoT.
Other Areas of Automation Applications and Innovations
Although it is not the purpose of this chapter to list all automation applications and innovations in all industries, it seems meaningful at least to list some topical areas of automation in other industries. Many are still at the stage of semi-automation, but some are entering the full-automation world. Table 24.1 shows industries with topical applications of automation.
24.4.3 Benefit and Value of Automation

Automation-powered operation gains tremendous speed and dynamics. Well-designed automated systems enjoy full adaptability and flexibility. For instance, Siemens' Electronics Works Amberg gains the full flexibility to run its operation smoothly under demand fluctuation with 350 production
Table 24.1 Typical applications of automation in various industries
Health care and medicine: surgical robot and micro-robot pill
Security and safety: aero surveillance, patrol robot, and rescue robot
Nursing: companion robot and rehabilitation robot
Information and guidance: chatbot and communication robot
Farming: cropping, fertilizing, watering, pesticides, and agronomical decision-making
Forestry: unmanned harvester and forwarder
changeovers per day for a portfolio of roughly 1200 different products. Siemens says technologies such as AI, edge computing, and cloud computing enable highly flexible and extremely efficient and reliable production sequences [22]. In addition to such efficiency-related benefits, people can be freed from harsh and dangerous environments, burdensome and tiresome work, and stressful but low-value experiences. Inspection work on extremely tall structures such as high-voltage power transmission towers can be replaced by autonomous drones with high-resolution intelligent cameras. Automation also contributes to sustainability. Automatic control of air conditioning and lighting in a building has an enormous impact on energy saving. Keppel Bay Tower in Singapore introduced an occupancy-based smart lighting system to achieve about an additional 12% in lighting energy savings [23]. As seen in the case of Stena Line's AI-assisted vessel, the transportation industry is another arena where automation can have a huge impact on energy savings and the minimization of environmental impact. As people's consciousness of social and environmental issues increases, the multifaceted benefits of automation will be realized. Furthermore, in some cases, automated machines or robots can create new value rather than merely replacing human work. LOVOT, a product by Groove X with the unique concept of a totally useless robot without an intended purpose, provides its owner with happiness and a sense of attachment to the pet-like robot [24]. It may relieve loneliness and a sense of isolation in some cases and give relaxing time and happiness in others. While many see such potential benefits and values delivered by automation, there still exist significant gaps between the ambition for automation and the actual level of implementation. BCG reported that in most industries, ranging from health care to automotive, transportation and logistics, consumer goods, process industries, engineered products, and the technology industry, the number of companies that have implemented advanced robotics is much smaller than the number of companies interested in introducing it [25]. With this evidence in mind, it can be said that we are amid the transition to a coming automation-prevalent era in which people embrace the various benefits of automation.
24.4.4 Business Trends and Orientation

In the past age of material shortage, the main mission of the manufacturing industry, its reason for existence, was to provide people with goods and materials to satisfy their needs. Through continuous industrial development and several revolutions, people have enjoyed an increasing variety of goods with more functions and higher quality, making life easier and better. People enjoyed the product itself and were happy to own it; indeed, the
purpose of buying a product was often to own the product, and ownership sometimes represented a symbol of wealth. It is true that many products replaced human work with machine work and freed people from burdensome and dangerous work. This meant people could enjoy more free time and safety for a more constructive life. However, once people reached a certain standard of living and became satisfied with a materialized life, many realized that there was no further need for products, or that there was little gain and excitement in obtaining new ones. Recent generations, especially in mature countries, who are "sufficient-life natives," tend to see less value in product ownership. This is true not only of consumers but also of industries. Once a company realizes that true value comes from the results of using its assets rather than from the assets themselves, it pursues the outcomes gained through the use of those assets and becomes indifferent to owning them. We see many cases where product makers try to shift their business models from product supply to outcome delivery. Kaeser Kompressoren, a German air compressor company, introduced a new service model called the "operator model," in which Kaeser operates and manages the air supply for its customers instead of supplying air compressors [26]. In this model, the customer enjoys on-demand air supply with energy efficiency and system reliability, without carrying a heavy asset on its balance sheet, since Kaeser owns and operates the air supply asset (the compressor) at the required level of quality and reliability and focuses on delivering the outcome, a stable on-demand air supply, to the customer. More generally and in a broader context, what people want to enjoy is a good experience. Wherever you are and whatever you are doing, you want more convenience, more ease and peace of mind, and more excitement and happiness. This is an economy of experience value, already advocated in the framework of "the Progression of Economic Value" in 1998 [27]. With this in mind, it can be said that such an experience-oriented economy requires various industries to transform or evolve from function-based, siloed industries into value-oriented, contextualized service industries. Together with the technological advances and innovations described in Sect. 24.4.2, this transformation is now taking on reality. Taking the transportation industry as an illustrative case, we can see the potential transformation (evolution) of the industry as shown in Fig. 24.3. Currently we choose and use the transportation services we need; however, in order to travel to a destination, we often have to access multiple services separately. Many are siloed and force passengers to endure inefficiency and inconvenience. By connecting those separated services using digital technologies and data, one can realize a one-stop mobility service that provides travelers with the best mobility experience tailored to their demands and purposes, such as seamless travel for commuters, comfortable and easy-access mobility
Fig. 24.3 Evolution of transportation service (from siloed, inefficient, and inconvenient services to value-oriented services that are eco-friendly, human-friendly, seamless, comfortable, and safe, enabled by digital technology and data across sea, land, and building transport)
Fig. 24.4 Evolution of service industries (seamless mobility/MaaS, wellness, and energy saving/green grid)
for elderly people, and safe routes for kids. Such an evolution opportunity is not confined to the transportation industry and mobility services. It can also be seen in other areas, such as lifecycle services for health and well-being and end-to-end services for sustainable energy consumption, as briefly illustrated in Fig. 24.4.
24.4.5 Products in the Experience-Value Economy

In preparing for the shift to the experience-value economy, a product must become software defined and IoT connected so that it can be configured to the customer's requirements, can monitor and adjust its performance to best fit the customer's preferences, and can thus improve the customer's experience. As we saw in the mobility service case above, all the products necessary to deliver the service, such as automobiles, ships, trains, and even elevators and escalators, have to be connected and capable of being configured and controlled in accordance with service needs. If you drop in at an electronics or robotics show like CES, you can easily find
various concepts and prototypes of software-defined and IoT-connected vehicles. The smart house is another example of a software-defined and IoT-connected product. The smart house can adjust and control its appliances and devices to deliver a better experience for the people living there. As we saw in Sect. 24.4.2, Intellithings' offering is an example of making a house software defined and IoT connected. Individual control menus and parameter settings can be configured to each family member's preferences. If one wants to provide more comprehensive lifecycle services, the house can also be connected to other services that are updated as the life stages of the resident family progress, such as food delivery or fitness recommendations for a young double-income family, nursing and education programs for a family with kids, and health care and medical services when needed. Software-defined and IoT-connected products benefit makers as well. If a product at a customer site can be updated by software and commands sent via IoT, the product becomes "evergreen." This mode of remote update is called OTA (over the air) and is one of the features inherent in a software-defined and IoT-connected product. OTA was initially introduced for less critical mobile devices such as navigation systems for drivers and smart phones for consumers, but it is now widely used for more critical assets such as industrial machines and connected vehicles, along with advancements in secure network and communication technologies. Thanks to the spread of OTA applications, it is now widely possible to update products at customer sites remotely. The automation of product updates is directly linked to the automation of updating the customer experience in the experience-value economy.
24.4.6 New Requirements for Product Automation

In addition to the traditional functions of automation, the expanding application opportunities and the business trend
24.5 Modern Functional Architecture of Automation
A representation of the functional architecture of modern automation is illustrated in Fig. 24.5. As we saw in the previous sections, autonomous and intelligent properties with cognitive capability and connectivity must be built into the modern architecture. The core portion of the modern architecture is the "Sense-Think-Act" function. This is not conventional rule-based control supervised by a "ladder diagram" of a PLC (programmable logic controller), but intelligent "thinking" control backed by AI as well as estimation and optimization capabilities. In order to realize high-level thought by the machine, the "sensing" function must be upgraded to a cognitive role that feeds proper data to the thinking function with accuracy and relevance to the context of its use. The "acting" function, embodied by actuators and other moving parts of the product, is required to execute the task ordered by the thinking function with sufficient exactness and accuracy.
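As an illustration of how the "Sense-Think-Act" loop can be organized in software, the sketch below wires a sensing step, a thinking (decision) step, and an acting step into one control cycle. The sensor reading, the decision rule, and the actuator interface are hypothetical placeholders, not part of any product described in this chapter.

```python
# Minimal sketch of a Sense-Think-Act control cycle (hypothetical interfaces).
import random
import time


def sense():
    # Placeholder: read a distance-to-obstacle value from a hypothetical sensor.
    return {"obstacle_distance_m": random.uniform(0.0, 5.0)}


def think(observation, safe_distance_m=1.0):
    # Simple decision logic standing in for AI-backed estimation and optimization.
    if observation["obstacle_distance_m"] < safe_distance_m:
        return {"action": "stop"}
    return {"action": "move", "speed_mps": 0.5}


def act(command):
    # Placeholder: forward the command to the actuators (motors, brakes, ...).
    print(f"actuator command: {command}")


if __name__ == "__main__":
    for _ in range(5):          # one iteration per control cycle
        act(think(sense()))
        time.sleep(0.1)
```

In a real product, the thinking step would be backed by learned models and optimization, and the loop would run at a fixed control rate on the edge device.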
of the experience-value economy pose new requirements for product automation. Application areas of automation are expanding into open and unstructured fields: roads, cities, mountains, the air, and any terrain. As machines enter open spaces and move around automatically, they have to behave according to their purpose and situation with a degree of autonomy as well as the necessary supervision. In a world where many kinds of automated machines interact and co-work, each machine has to communicate with others and recognize who it is, who the others are, where it is, and which direction the others are heading. As each machine has its own purpose and its own way of behaving, no single entity can control and supervise all the machines in the field. As a matter of course, control systems become distributed, and more intelligence is embedded at the "edge." If a group of distributed pieces of the system shares a common purpose and responsibility, the group should be controlled in a way that coordinates its members. Since the situation and conditions each machine faces may change, and the required behavior and mode of functions may be altered, the machine has to be configurable accordingly, and sometimes personalized to fit an individual case. In summary, in the age of IoT, an automated product must be an autonomous entity equipped with cognitive capabilities such as self-localization and mapping as well as mutual recognition. Control systems are naturally distributed, under which machines are well coordinated if they share a common purpose and responsibility. Each machine is equipped with more intelligence backed by edge computing and gains more flexibility, with the capability of configuration and personalization, so as to behave well in any situation and, even more, to cope with any problem it faces.
Fig. 24.5 Modern functional architecture of automation (the core Sense-Think-Act function, supported by learn & update, energy management, report & monitor, interface & access control, a digital twin, and reliability, safety, and security)
In order to adapt to surrounding conditions and gain the further capabilities required to behave more effectively and efficiently, a product must continuously upgrade the "Sense-Think-Act" function through a learning and updating function. The "Learn and update" function supports the core "Sense-Think-Act" loop and is realized by machine learning as well as estimation and optimization, identifying new patterns or rules that upgrade the "Sense-Think-Act" function. The "Sense-Think-Act" and "Learn and update" functions form the fundamental body of edge intelligence. Whatever the form of energy supply, "Energy management" is a critical function, not only as an enabler that keeps the entire product active but also as an intelligent controller for energy-efficient operation. If the product runs on electricity, the function is realized by an assembly of a battery and battery management system, a charging inlet, converters and inverters, capacitors if necessary, and other devices such as fuses, together with software. Large products are often powered by fuel such as gasoline or diesel oil, but with increasing attention to environmental impact, fuel cells using nonfossil fuels such as hydrogen are strongly anticipated. The "Report and monitor" function is becoming more important than ever to keep the product "evergreen." The modern automated product monitors itself and reports its operational status and health condition to its supervisor on a regular basis to receive necessary and timely support, often remotely, such as relevant software updates and preventive maintenance, so as to stay in the best condition. The modern automated product is also required to interact with external entities, including other automated products,
smart devices such as smart phones, and surrounding objects such as road infrastructure, as well as human operators on-site or supervisors at a control center. The "Interface and access control" function plays a critical role in interacting with such external entities securely. Normally, a machine has to validate each access from any external entity to shut out harmful access and avoid wrong interactions. Thus the "Interface and access control" function is equipped with authentication and filtering capabilities under authorized protocols. Interaction with a human operator is often validated by biometric authentication such as recognition of the face, iris, fingerprint, or vein. As fundamental nonfunctional elements, the "reliability, safety, and security" of the product are "must" requirements. Since the requirements for an individual product regarding reliability, safety, and security depend on its mission and role as well as its physical and logical structure, the actual implementation of these elements is determined on a case-by-case basis. It is more than just reporting and monitoring: protection functions for reliability, safety, and security must be embedded when the product is shipped from the factory and should be continuously updated during operation in the field. A self-recovery function, or a minimal-operation function for emergencies, is also a "must have" feature of an autonomous product. As an advanced mode of automation management, an automated product is copied in cyber space as a "digital twin," which can reproduce its structure, functions, mechanisms, and actions, and even its aging or deterioration over time. The digital twin enables the operator and supervisor to analyze and simulate the product's state and behavior within cyber space, so that they can identify (potential) problems of the physical product and verify decisions and actions prior to actual execution in the physical space. Thanks to advancements in communication and large-scale data processing technologies, the digital twin can reproduce the physical product's behavior on a near real-time basis, and decisions and instructions verified on the digital twin can be transferred to the physical product in a timely manner. While the cyber-physical system comprising a digital twin and its corresponding physical product will become a new standard of product lifecycle management for automation, there are many variations in its design and implementation, and the establishment of a theoretical framework is much anticipated [28].
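To make the "Interface and access control" idea concrete, the following sketch shows one common way a gateway-like component can validate incoming messages, an HMAC check over the payload with a shared secret. The key handling, message format, and rejection policy here are simplified assumptions for illustration, not the mechanism of any specific product or gateway.

```python
# Minimal sketch of message authentication at a gateway (assumed shared-secret scheme).
import hashlib
import hmac

SHARED_SECRET = b"example-secret-key"  # hypothetical; real systems use managed keys


def sign(payload: bytes) -> str:
    """Sender side: compute an HMAC-SHA256 tag over the payload."""
    return hmac.new(SHARED_SECRET, payload, hashlib.sha256).hexdigest()


def validate(payload: bytes, tag: str) -> bool:
    """Gateway side: accept the message only if the tag matches."""
    return hmac.compare_digest(sign(payload), tag)


if __name__ == "__main__":
    message = b'{"cmd": "set_temperature", "value": 20}'
    tag = sign(message)
    print(validate(message, tag))                    # True: authentic message
    print(validate(b'{"cmd": "unlock_door"}', tag))  # False: rejected
```

In practice this check would sit alongside device authentication, protocol filtering, and key management rather than replace them.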
24.6 Key Technologies
In this section, some of the key technologies needed to realize product automation are briefly reviewed. The technologies selected here mainly concern the defining features of modern automation, that is, autonomous and intelligent properties
with cognitive capability and connectivity. We concentrate on concrete examples that let readers touch the latest cases, rather than providing a comprehensive survey or dictionary-like explanations.
24.6.1 Localization and Mapping

An autonomous mobile machine moves based on its own recognition of its location and its own decision on the direction to go. To do so, the machine has to build a map of its surroundings and identify its location on that map dynamically as it moves. The localization and mapping function must be realized using data from sensors mounted on the machine. This means that sensor data measured in the machine's coordinate system must be projected onto the absolute map coordinate system, which itself has to be generated from the data collected by the machine. This mutually dependent, dynamic localization and mapping technology is called SLAM (simultaneous localization and mapping). Since any autonomous mobile machine needs self-localization and mapping capability, SLAM is used in many applications, such as AGVs in factories and warehouses, AMRs (autonomous mobile robots) for food delivery and patrol in buildings, robotic cleaners at home, automatic lawn mowers in gardens, deep-sea probes, and even rovers for planetary exploration. They can move autonomously without preset tracks thanks to SLAM technology. SLAM is one of the typical embodiments of the "Sense-Think-Act" function of the modern automation system shown in Fig. 24.5. Although research on localization and mapping techniques has a long history dating back to the early 1980s, the acronym SLAM appeared for the first time in 1995 [29].
Sensor's Role for SLAM
The actual implementation of SLAM depends strongly on the sensor modalities employed in each individual case. The major sensor modalities used for autonomous mobile machines are optical sensors such as LiDAR (light detection and ranging) and cameras, since they can scan objects or capture images rich enough to extract and identify landmarks against which the machine's location can be estimated. LiDAR identifies the distance and orientation of the objects surrounding the machine by transmitting laser pulses to the target objects and receiving their reflections. The distance to a target object is calculated from the time between the transmitted and returned pulse multiplied by the speed of light (and halved to account for the round trip). The output of LiDAR can be 2D or 3D point cloud data, and a 3D representation can be made from laser return times and wavelengths. The amount of movement is estimated by matching the point clouds that the LiDAR sensor scans iteratively as the machine moves; by accumulating these increments, the machine can localize itself. Since laser sensors
like LiDAR are far more accurate than vision sensors like cameras, they are essential in cases where the machine moves rather fast, such as autonomous vehicles and drones. On the other hand, the point clouds generated by a LiDAR sensor are less informative, in the sense of information density, than the raw images captured by a camera. This means LiDAR is not good at finding the specific landmarks needed to identify the machine's position; other types of sensors that can detect landmarks and signs are therefore necessary to complete a SLAM implementation. Thanks to the increasing availability of inexpensive image sensors, many autonomous mobile machines are equipped with camera-type visual sensors for exteroceptive sensing. Camera-type sensors include monocular, stereo, and RGB-D cameras. However, such visual sensors are prone to errors because of their brittleness to changes in light, weather, and scene structure, especially when they cannot detect sufficient light from the environment. To add robustness in outdoor environments, GNSS (global navigation satellite system) and radar can be employed. Since radar by itself does not provide the accuracy required for 3D mapping, the fusion of radar and visual sensors is challenging and remains an open issue. Other physical sensors such as accelerometers, gyro sensors, and odometers can also help estimate the position and motion of the machine. From a sensory-model perspective, there are two ways to generate information for localization and mapping: landmark-based and raw-data use. Landmarks are often used to (re)identify the machine's location and correct distortion in the estimated pose map; visual sensors such as cameras are the most relevant for the landmark-based approach. In the raw-data approach, 3D point clouds generated by LiDAR sensors and images captured by cameras are the essential inputs for the iterative estimation and identification of the machine's location as well as the iterative creation and correction of the 3D map.
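As a small illustration of how raw range data feeds this pipeline, the sketch below converts a (range, bearing) LiDAR-style scan into Cartesian points in the machine frame and then projects them into the map frame given a pose estimate. The scan values and the pose are invented numbers used only to show the coordinate projection discussed above.

```python
# Minimal sketch: project a 2D range scan from the machine frame into the map frame.
import math


def scan_to_points(ranges, angle_min, angle_step):
    """Convert (range, bearing) readings into x, y points in the machine frame."""
    points = []
    for i, r in enumerate(ranges):
        theta = angle_min + i * angle_step
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points


def to_map_frame(points, pose):
    """Transform machine-frame points into the map frame using pose (x, y, heading)."""
    px, py, heading = pose
    cos_h, sin_h = math.cos(heading), math.sin(heading)
    return [(px + cos_h * x - sin_h * y, py + sin_h * x + cos_h * y) for x, y in points]


if __name__ == "__main__":
    ranges = [2.0, 2.1, 2.3, 2.6]                       # made-up range readings (m)
    local = scan_to_points(ranges, angle_min=-0.1, angle_step=0.1)
    print(to_map_frame(local, pose=(1.0, 0.5, math.pi / 4)))
```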
SLAM Methods and Algorithms
Assuming the autonomous mobile machine is moving through an unknown environment, localization and mapping is a probabilistic exercise based on knowledge acquired iteratively through a series of measurements of the machine's state and observations of the surroundings, including landmarks specified to obtain geographical information. Given this probabilistic nature, SLAM is constructed as a recursive estimation process consisting of two main steps, prediction and update, as follows:
1. Prediction (time update)
1-1. State prediction (odometry): predict the machine state at time k based on the history of landmark observations and machine controls
1-2. Measurement prediction: predict the measurements of the landmarks from the predicted machine state at time k
2. Measurement and update
2-1. Measurement: observe landmarks and obtain measurements from the machine at time k
2-2. Update: update the landmark measurements as well as the machine state with maximum likelihood
If a new (or unknown) landmark is identified, it is integrated into the map and used in the following predictions and measurement updates. In the SLAM research community, a graphical model (Bayesian network) such as that shown in Fig. 24.6 is often used to illustrate the interrelations of the SLAM parameters, where
x_k: the machine state vector describing the location and orientation of the machine at time k
u_k: the control vector applied at time k-1 to move the machine to the state x_k at time k
m_i: a vector describing the location of the ith landmark
z_{k,i}: an observation of the location of the ith landmark at time k
The work by Smith and Cheeseman [30] was the first to provide a method for solving the probabilistic SLAM problem. The method employs the EKF (extended Kalman filter) to estimate the state with a state-space model with additive Gaussian noise. EKF SLAM does not scale to broad applications because of the linearity assumptions in its machine motion models. FastSLAM is another representative method with wider applicability, since it models the machine motion in a form not bound to linearity or Gaussian probability distributions, using recursive Monte Carlo sampling (particle filtering) instead [31]. Beyond these two legendary methods for the probabilistic SLAM problem, many alternative methods have been proposed; for an introduction to the various approaches, see [32]. The core of a SLAM algorithm in general is to solve the nonlinear simultaneous equations consisting of the state prediction formula and the measurement prediction formula. One unique feature of the equation model is its inherent redundancy, which comes from the observation model: a landmark can be observed from different machine states at multiple times, as seen in Fig. 24.6. This redundancy is essential to making the map more accurate, and it further acts as a catalyst that mitigates the risk of errors accumulating over the iterated state transitions, which form a Markov process. Another unique property of the model is that the relative location between landmarks is also observed from different machine states at multiple times. Although Fig. 24.6 does not show geographical topology, one can see the situation in which the machine at state x_k observes m_i and m_{i+1} from its location at time k, and it then observes
Fig. 24.6 Graphical model of SLAM parameters (machine states x_{k-1}, x_k, x_{k+1}, x_{k+2} linked by controls u_k, u_{k+1}, u_{k+2}, with observations z_{k-1,i}, z_{k,i}, z_{k+1,i}, z_{k,i+1}, z_{k+1,i+1}, z_{k+2,i+1} of landmarks m_i and m_{i+1})
them again at time k + 1. Since the relative relationship between the two observed landmarks is independent of the coordinate system the machine uses, the estimate of the relative location of the landmarks becomes "sharper" as the machine moves, with the knowledge accumulated through multiple observations of the same relative location. For the linear Gaussian case, it has been proved that the correlations between landmark estimates increase monotonically as the number of observations increases [33]. For more concrete mathematical descriptions, refer to [34, 35].
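To make the predict/update cycle tangible, the sketch below runs a deliberately simplified, one-dimensional Kalman-filter version of it: the machine state is a single position, one landmark has a known position, and both motion and measurement are linear with Gaussian noise. It is a toy stand-in for the EKF machinery cited above, with all numbers invented for illustration.

```python
# Toy 1D predict/update cycle (Kalman filter) illustrating the shape of the SLAM recursion.
import random


def predict(x, P, u, Q):
    """State prediction: apply control u; motion noise variance Q grows the uncertainty."""
    return x + u, P + Q


def update(x, P, z, landmark, R):
    """Measurement update: z is the observed distance to a landmark at a known position."""
    z_pred = landmark - x            # measurement prediction, h(x) = landmark - x
    innovation = z - z_pred
    S = P + R                        # innovation variance (H = -1, so H*P*H = P)
    K = -P / S                       # Kalman gain (the H = -1 sign is folded in)
    return x + K * innovation, (1 + K) * P   # (1 - K*H) * P with H = -1


if __name__ == "__main__":
    landmark, Q, R = 10.0, 0.1, 0.5
    x_true, x_est, P = 0.0, 0.0, 1.0
    for _ in range(20):
        u = 0.5                                     # commanded step toward the landmark
        x_true += u + random.gauss(0, Q ** 0.5)     # real (noisy) motion
        x_est, P = predict(x_est, P, u, Q)
        z = (landmark - x_true) + random.gauss(0, R ** 0.5)
        x_est, P = update(x_est, P, z, landmark, R)
    print(f"true={x_true:.2f} est={x_est:.2f} var={P:.3f}")
```

A full EKF SLAM additionally keeps the landmark positions in the state vector and linearizes nonlinear motion and observation models at each step.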
Types of SLAM Implementation
As seen in Sect. 24.6.1, various types of sensor data can be used for SLAM depending on the individual case and, conversely, the SLAM implementation depends on the input data coming from the sensors. As the major sensor types are broadly divided into LiDAR and camera (visual sensor), SLAM implementations can likewise be categorized into LiDAR SLAM and visual SLAM. LiDAR SLAM is a mode of implementation using 2D or 3D mapping with laser scanners, in which the scan-matching approach plays a key role. The most classical but performance-proven method for point cloud matching and registration is the ICP (iterative closest point) algorithm [36, 37]. The fundamental framework of ICP is to iterate two steps until convergence: (1) find point correspondences between the currently scanned point cloud and the previously scanned point
Fig. 24.7 LiDAR SLAM framework (sensor data feeds a scan-matching/odometry front end, a graph-optimization/pose-graph back end, and offline loop closing, producing the map and pose graph)
cloud using a closest-point rule and (2) estimate the machine location under the assumption of those point correspondences. Following the scan matching, the history of the machine's states, the pose graph, should be adjusted to minimize the errors. A pose graph is a graph in which every node corresponds to a machine position and a sensor measurement, and an edge between two nodes represents the spatial constraint between them. The graph optimization is done by the nonlinear least squares method [38, 39]. The overall framework of LiDAR SLAM with graph optimization is shown in Fig. 24.7 and has three elements: (1) a front end for odometry (graph construction), mainly by scan matching; (2) a back end for pose graph optimization by nonlinear optimization; and (3) loop-closure detection and adjustment. A typical example of the pose graphs generated by the front-end and back-end processes can be seen in [39].
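The sketch below illustrates the back-end idea on a deliberately reduced problem: poses are 2D positions only (orientation is ignored), edges store relative displacements, and one anchored pose plus a linear least-squares solve stands in for the full nonlinear graph optimization. All measurements are invented for illustration.

```python
# Simplified pose-graph adjustment: 2D positions, relative-displacement edges,
# solved as a linear least-squares problem (a stand-in for nonlinear graph optimization).
import numpy as np

n_poses = 4
# Each edge: (i, j, measured displacement from pose i to pose j).
edges = [
    (0, 1, np.array([1.0, 0.0])),
    (1, 2, np.array([1.1, 0.1])),
    (2, 3, np.array([0.9, -0.1])),
    (0, 3, np.array([3.0, 0.0])),   # loop-closure-like constraint
]

rows, rhs = [], []
for i, j, d in edges:               # constraint: pose_j - pose_i = d (per axis)
    for axis in range(2):
        row = np.zeros(2 * n_poses)
        row[2 * j + axis] = 1.0
        row[2 * i + axis] = -1.0
        rows.append(row)
        rhs.append(d[axis])

for axis in range(2):               # anchor pose 0 at the origin for uniqueness
    row = np.zeros(2 * n_poses)
    row[axis] = 1.0
    rows.append(row)
    rhs.append(0.0)

A, b = np.vstack(rows), np.array(rhs)
solution, *_ = np.linalg.lstsq(A, b, rcond=None)
print(solution.reshape(n_poses, 2))  # adjusted 2D positions of the four poses
```

A real back end keeps the heading angle in each pose, which makes the constraints nonlinear and requires iterative nonlinear least squares, as in [38, 39].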
Visual SLAM is SLAM performed with sequential images captured by visual sensors (cameras) mounted on the autonomous mobile machine. Visual SLAM algorithms fall into two categories: feature-based methods and direct methods. A feature-based method detects and tracks specific features in the image and matches them to estimate the pose and the map. MonoSLAM was the first feature-based method to estimate features and machine locations for the case where a single (monocular) camera is used as the sensor [40, 41]. MonoSLAM employed the extended Kalman filter for estimation, and therefore needs more computational capacity as the number of observed features increases. One of the most widely used approaches in the visual SLAM category is PTAM (parallel tracking and mapping) [42]. As the name suggests, PTAM parallelizes feature tracking and mapping to realize fast estimation with bundle adjustment-based SLAM. ORB-SLAM is another popular feature-based approach, applicable to images captured by monocular, stereo, and RGB-D cameras [43, 44]. A direct method uses the full image information and the motion and change of its pixels, instead of features, to register two successive images. There are many derivations and variations for various purposes: DTAM (dense tracking and mapping) for dense mapping in relatively small areas [45], and LSD-SLAM [46] and DSO [47, 48] for large-scale use cases.
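As a small taste of the feature-based front end, the sketch below detects and matches ORB features between two consecutive frames with OpenCV. The image file names are placeholders, and a real visual SLAM system would feed such matches into pose estimation and mapping rather than simply counting them.

```python
# Minimal sketch of a feature-based visual SLAM front end: ORB detection + matching.
# Requires opencv-python; "frame1.png" and "frame2.png" are placeholder file names.
import cv2

img1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(img1, None)   # keypoints + binary descriptors
kp2, des2 = orb.detectAndCompute(img2, None)

# Brute-force matching with Hamming distance (suitable for ORB's binary descriptors).
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

print(f"{len(matches)} matches found")
# A real pipeline would pass the matched keypoint pairs to essential-matrix or PnP
# estimation to recover the relative camera motion, and then to mapping.
```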
24.6.2 Edge Intelligence

When a machine on the ground was merely an execution entity, always waiting for instructions from someone else, all the intelligent parts were left to a high-spec central computer. However, as automated machines are expected to have more autonomy and a self-contained capability to solve the problems they face, the "edge" side (the machine), rather than the "center" side (the server computer), has to be equipped with higher intelligence. The autonomous machine has to perform the "Sense-Think-Act" routine by itself as an "edge," with capabilities for recognition, optimization, and decision-making. Implementing such edge intelligence is always a battle against the limited capacity of the edge, such as small computing power and limited electric power supply. In this section, using a robot motion application case, we discuss one of the recent achievements in edge intelligence implementation that takes advantage of both software and hardware technologies.

Needs of Edge Intelligence in Logistics
There is no room to doubt that logistics is one of the areas most urgently in need of unmanned operation. One can easily list various reasons for this demand: a shortage of truck drivers, harsh work environments, the skyrocketing expansion of EC (electronic commerce), saving energy for lighting and air conditioning for human workers, and so on. Among these, warehouse automation is currently the hottest topic, due to the increasing demand for further efficiency as well as economies of scale. Indeed, we are beginning to see real examples of almost unmanned warehouses. CB Insights reports that JD.com, a China-based retailer, launched a 100,000 square foot smart warehouse in 2018 that sorts up to 16,000 packages per hour with just four human employees, introducing all the available digital technologies, including AI, robotics, and image scanners [49]. Among the various warehouse operations, the most frequent yet most difficult task to automate is object picking. As evidence, even the world-class EC and logistics giant Amazon calls for global excellence in picking automation through a contest-type event called the "Amazon Picking Challenge" [50]. The difficulty of this challenge resides not only in technological factors but also in economic ones. Warehouse operation is generally less profitable and asset heavy in nature; therefore, even in automation investment, there is strong pressure to squeeze both CAPEX (the size of the automation system) and OPEX (operating costs, including energy and maintenance). Accordingly, a robot introduced into a warehouse must be as light as possible in both physical and economic terms. This means the robot should be realized with cheaper devices and lower energy consumption, but with the maximum intelligence possible subject to that configuration.
Edge Intelligence Implementation: A Case
Thanks to dedicated and extensive research efforts worldwide, we can see hints of potential solutions to such difficult challenges. As an example, the remainder of this section describes the achievement of [51], one of the recent advances in research on picking-robot intelligence for real-world applications and a good example of a real practice carried out through a combination of software and hardware technologies. To make warehouse operation efficient, it is crucial to raise the throughput of each picking operation, since a reduction in picking time has a direct impact on overall productivity, multiplied by the number of goods handled. However, under limited computational resources and power supply, it is a big challenge to realize a picking robot whose throughput surpasses a human worker's. The main reason for the low throughput of picking by a robot is the long computation time required to identify objects and define the picking motion. According to [51], a picking robot takes the following computational steps: (1) capture an image of the target object with a 3D visual sensor such as an RGB-D camera and identify the class of the object, typically with a CNN (convolutional
neural network); (2) estimate the object pose with the ICP (iterative closest point) algorithm; (3) identify relevant gripping points based on the estimated object pose; and (4) plan the motion to be taken to pick up the object. Among these steps, object pose estimation in step (2) is the most time-consuming. The ICP algorithm for object pose estimation iterates two processes: a k-NN search using the point clouds of the object model and the observed data from the sensor, and estimation of the rotation matrix R and translation vector T. The k-NN search finds the up-to-k nearest neighboring points in the point cloud of the object model for each point of the observed scene data, to identify the set of most closely corresponding points of the object model (Fig. 24.8). Due to its enumeration-dependent nature, the k-NN search takes much computation time. The resulting R and T from the ICP algorithm give the optimized transformation between the object model and the observed scene data (Fig. 24.9). In order to reduce the object pose estimation time, [51] took an algorithm-level acceleration approach as well as a hardware-level acceleration approach. For the algorithm-level approach, a coarse-to-fine multi-resolution ICP was employed, together with the authors' original idea of introducing a hierarchical graph [52] instead of the conventional Kd-tree. It is well known that multi-resolution ICP reduces the number of iterations in a later, finer-resolution ICP, since the transformation estimated by the preceding ICP at lower
resolution can serve as an initial guess. The introduction of the hierarchical graph data structure eliminates duplicated evaluations in the k-NN search by reusing knowledge obtained in the preceding evaluation steps. Although the details are not repeated here, the order of computation time is reduced to O(log N), more stable and smaller than that of the Kd-tree, O(k × log N) (see Table I in [51]). While this approach gains much efficiency in the k-NN search itself, the introduction of a graph-type data structure brings heavy computation to generate the graph, since the graph generation procedure contains many repetitive distance calculations and sorting operations and requires a large memory for storing information on neighboring points. In [51], further acceleration was realized in hardware by an SoC-FPGA-based ICP accelerator with the unique configuration shown in Fig. 24.10. The excellence of the accelerator circuit is that the two operations of graph generation and k-NN search are realized by partial reconfiguration of the FPGA, exploiting the fact that the two operations have a similar structure, consisting of distance calculations and sorting operations, and are not executed simultaneously. This enables resource-efficient execution with inexpensive hardware. To accelerate the two operations, the distance calculation is parallelized and the sorting is implemented by a "sorting network" [53], a data-flow-type circuit effective in minimizing memory access. The operations for estimating the rotation matrix and translation vector are implemented on the CPU and interact with the k-NN search operations. Using Amazon Picking Challenge datasets with a folded T-shirt model, the acceleration capability and energy efficiency of the proposed ICP accelerator were confirmed successfully, with an ICP time of 0.72 s and a power consumption of 4.2 W (see Table III in [51]).
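To illustrate the two processes inside one ICP iteration, correspondence search and estimation of R and T, the sketch below uses a KD-tree for the nearest-neighbor step and the standard SVD-based rigid alignment for the estimation step. It is a generic point-to-point ICP iteration in NumPy/SciPy, not the hierarchical-graph or FPGA implementation of [51], and the point clouds are synthetic.

```python
# One generic ICP iteration: nearest-neighbor correspondences + SVD rigid-transform fit.
import numpy as np
from scipy.spatial import cKDTree


def icp_iteration(model, scene):
    """Return R, T aligning the scene points onto the model points (point-to-point)."""
    # 1) Correspondence search: nearest model point for every scene point.
    _, idx = cKDTree(model).query(scene)
    matched = model[idx]

    # 2) Estimate R, T by the SVD (Kabsch) method.
    scene_c = scene - scene.mean(axis=0)
    matched_c = matched - matched.mean(axis=0)
    U, _, Vt = np.linalg.svd(scene_c.T @ matched_c)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    T = matched.mean(axis=0) - R @ scene.mean(axis=0)
    return R, T


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    model = rng.random((200, 3))
    angle = 0.1                              # synthetic scene: slight rotation + shift
    R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                       [np.sin(angle),  np.cos(angle), 0],
                       [0, 0, 1]])
    scene = model @ R_true.T + np.array([0.05, -0.02, 0.01])
    R, T = icp_iteration(model, scene)
    print(np.round(R, 3), np.round(T, 3))
```

In practice this iteration is repeated until a termination criterion is met, and the nearest-neighbor step is exactly the part that [51] accelerates with the hierarchical graph and FPGA circuit.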
24.6.3 OTA (Over-the-Air) Technology
Fig. 24.8 The k-NN search operation (object model point cloud, k nearest points with k = 3, observed scene point cloud)
Fig. 24.9 ICP algorithm framework (R, T transforming the object model onto the observed scene)
Needs and Benefit of OTA
OTA is a mode and method of wirelessly delivering software or other content data to mobile entities. If a product becomes software defined and IoT connected, its configuration and functions can be changed and updated remotely by rewriting its software over the air. For an automated product, the ability to reconfigure and update remotely has a huge impact on stakeholders' benefits as well as on business models. Product makers can continuously upgrade the product at a customer site at small cost even after the sale. Users can always enjoy the latest functions and services through a continuously updated "evergreen" product, without carrying it to the maker or dealer. Makers can stay connected with each customer, and with the customer's operations, on a day-to-day basis, and there will then be a driver to shift their busi-
Fig. 24.10 SoC-FPGA-based ICP accelerator employing partial reconfiguration (scene data and the object model are downsampled; a CPU in the FPGA updates the current R, T estimate against a termination criterion, while the programmable logic hosts the graph generator and the graph k-NN search, sharing BRAM for graph data, with circuit configurations held in storage). © 2020 IEEE. (Reprinted, with permission, from Kosuge et al. (2020))
ness model from a "one-time make & sell" model to a "continuous engagement" model spanning each customer's whole lifecycle. A user of an OTA-enabled automated product can focus more on its outcome than on its maintenance and renewal costs, and will be more interested in its performance than in its ownership. In this sense, OTA can be a key infrastructure for the "product" subscription economy. Indeed, Tesla recently announced its plan to introduce a subscription model for the self-driving function to be mounted on Tesla cars by the end of 2020 [54]. In addition to these immediate impacts, both makers and users can mitigate the potential risk of incidents or the threat of cyberattacks. Furthermore, if a maker is allowed to monitor the machines or devices at customer sites, proactive services fully personalized to the customer can be offered in a timely manner.
Technical Challenges
Although remote update of software and content has been put into practice for consumer electronics products such as smart phones and Internet TVs, the OTA function for automated machines is, in many cases, required to provide more secure, safer, and more stable delivery and update. For a consumer electronics product, for instance, users can wait some time while their smart devices update, and can even afford to accept an occasional failed update and redo it. However, automated machines often have "important" missions to complete and cannot be interrupted by a time-consuming or failed update. Especially for machines that are constantly exposed to the risk of accidents and other critical situations, such as autonomous vehicles and industrial robots, any errors and
failures in an update must be avoided. With this in mind, OTA for product automation has to satisfy the demand for quick and secured updates.
OTA Implementation: A Case
To explain the technologies required to implement OTA, the case of the autonomous vehicle is a good example, since an autonomous vehicle is expected to be the safest and most secure of machines. At the time of writing, the possible target components of OTA for an autonomous vehicle include vehicle control ECUs (electronic control units) for ADAS (advanced driver assistance systems) or AD (autonomous driving) and the engine, infotainment ECUs for IVI (in-vehicle infotainment) and the TCU (telematics control unit), and other parts such as doors, mirrors, and cockpit displays. A typical structure of an OTA system is illustrated in Fig. 24.11 [55]. The system is composed of an OTA center, which generates, manages, and distributes the software data for update, and the target vehicle, which is equipped with the necessary components, including IVI components, TCUs, gateways, and ECUs. The gateway works as a router for the onboard communication across different network domains in the vehicle on one hand, and as a guardian maintaining security within the vehicle by monitoring its internal communications on the other (onboard communication security). In order to meet the aforementioned demand for quick and secured updates, the OTA system should be equipped with two capabilities: differential update and centralized security management at the gateway.
Fig. 24.11 OTA system structure: an OTA center (vehicle configuration and software management, old and new versions, differential data generation, encryption and signing, secure distribution, fault recovery management) connected to the vehicle, where OTA clients in the IVI, TCU, and gateway decrypt and verify signatures, and the gateway's ECU update control performs differential update and fault recovery for the AD ECU, engine ECU, and other ECUs. (Courtesy of Hitachi Ltd.)
Differential Update
To minimize the amount of update information, the OTA system employs a "differential update" approach, in which, when new software is ready for release, only the portion that differs from the old software is sent for update. First, at the OTA center, the difference between the new software content and the old content installed in each vehicle is evaluated, and the differential portion is then packaged and sent to the corresponding vehicle. This process can be implemented by dividing the new and old programs into blocks according to the erase block of a flash ROM and comparing each block of the new program with the corresponding block of the old one to extract the differential data per block (a block-wise sketch of this comparison appears at the end of this OTA discussion). Once the vehicle receives the differential data at its gateway, with a security check, the data is transferred to the target ECU, where the update operation is applied to build the new software, replacing the old one already installed. This approach keeps the whole update process lean, with less memory use. As for the update information, there are two types of software data: scripts and compiled code. A script is an instruction, or a set of rules, applied to operations executed by another program. While a script can be easily updated by simple editing, since no compilation is needed, it consumes more CPU and memory resources than compiled code, which in contrast needs fewer resources but takes more steps to update, since a compile operation is required. Exploiting the fact that compiled code mainly covers key operations with infrequent updates, such as machine learning and statistical computations, whereas scripts handle simple comparisons and function calls but are subject to frequent change, the best mix and allocation of scripts and
compiled codes can be designed to reduce overall update time as well as resource use. Security Management at the Gateway Since gateway is a router for the onboard communication network and a guardian of the internal system security, the gateway should manage the security related to the software update to maintain the functional safety and security. OTA update control function is installed in the gateway to execute various protection functions against the internal and external threats to the vehicle, which include filtering of unauthorized communications, secure boot to detect unauthorized attempts to overwrite software, device authentication to prevent unauthorized connection, message authentication to verify its validity, and finally countermeasures against DoS (denialof-service) attacks [55]. As a basis for this security control, encryption key management for verification of various communications between the OTA center and the vehicle as well as within the onboard network is essential and the gateway plays a central role in both generating a key and its distribution to the onboard devices.
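The block-wise comparison performed at the OTA center and the integrity check performed at the gateway can be illustrated with a minimal sketch. This is only an illustrative simplification, not Hitachi's implementation: the block size, the use of HMAC-SHA256 in place of the actual signature scheme, and all function names are assumptions.

```python
import hashlib
import hmac

BLOCK_SIZE = 4096  # assumed flash erase-block size; real ECUs differ


def split_blocks(image: bytes):
    """Split a firmware image into erase-block-sized chunks."""
    return [image[i:i + BLOCK_SIZE] for i in range(0, len(image), BLOCK_SIZE)]


def make_delta(old_image: bytes, new_image: bytes):
    """OTA-center side: compare the new image with the old one block by block
    and keep only the blocks that differ."""
    old, new = split_blocks(old_image), split_blocks(new_image)
    return {i: blk for i, blk in enumerate(new) if i >= len(old) or blk != old[i]}


def serialize(delta):
    """Deterministic byte form of the delta, used both for signing and transfer."""
    return b"".join(i.to_bytes(4, "big") + blk for i, blk in sorted(delta.items()))


def sign(delta, key: bytes) -> bytes:
    """OTA-center side: attach an integrity tag (HMAC-SHA256 stands in here
    for the real signing scheme)."""
    return hmac.new(key, serialize(delta), hashlib.sha256).digest()


def verify_and_apply(old_image: bytes, delta, tag: bytes, key: bytes) -> bytes:
    """Gateway/ECU side: verify the tag, then rebuild the new image by
    overwriting only the changed blocks of the installed (old) image."""
    expected = hmac.new(key, serialize(delta), hashlib.sha256).digest()
    if not hmac.compare_digest(expected, tag):
        raise ValueError("integrity check failed; update rejected")
    blocks = split_blocks(old_image)
    for i, blk in sorted(delta.items()):
        if i < len(blocks):
            blocks[i] = blk
        else:
            blocks.append(blk)  # image grew: new trailing blocks
    return b"".join(blocks)
```

In a real vehicle the gateway would additionally route the verified blocks to the target ECU over the in-vehicle network and trigger fault recovery if the update fails.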
24.6.4 Anomaly Detection

Needs and Benefit of Anomaly Detection
A user or a beneficiary of an automated product always expects the product to operate continuously to complete its task. In many cases, the user or the beneficiary is not the owner of the product, and the owner has to take care of the product so as to keep it working. Conventional practice for such caretaking is a combination of on-site
inspection and preventive maintenance. In order to avoid interrupting the product's operation, such inspections and maintenance work are planned and executed on a periodic schedule. This scheme always faces a dilemma between overmaintenance and the risk of a sudden failure, but both users and owners have accepted the scheme as it is. Recently, thanks to advances in sensors and IoT infrastructure as well as analytics technologies, a product can collect its condition data and transmit it to a remote site with millisecond-order cycles, and parallel distributed data processing technologies like Hadoop can extract insights about the product's condition from the huge amount of data coming from the product in real time. If an automated machine can avoid sudden failures and unplanned stops by detecting symptoms that imply potential malfunction and deterioration in advance, without waiting for a periodic inspection, both users and owners gain much benefit, being freed from the burden of periodic inspections and maintenance work as well as from the dilemma between overmaintenance and the risk of a sudden failure. Anomaly detection is the core technique to realize this approach. Although it is not specific to automated products, it is especially helpful in unmanned situations. An anomaly is a phenomenon or a pattern captured in the data which does not conform to a normal situation or operation. Anomaly detection is a general term for a category of methods that detect behavior of the data which differs from its behavior under normal situations. The basic idea of anomaly detection is to define and identify anomalies against the normal, instead of modeling a set of anomalous situations directly.
Anomaly Detection Methods
Basically, anomaly detection has two stages: training and detection. At the training stage, a model of normal conditions is built using the best available knowledge about normal data; the actual method depends on the available data and the application. At the detection stage, observed data is compared with the normal model to detect anomalies using anomaly measures that are predefined according to the anomaly detection method employed. In an actual implementation, all aspects of anomaly detection, ranging from selection of the relevant dataset to consistent use of data and selection of an appropriate method, require domain knowledge to be successful. Since research on anomaly detection originated in the data mining area, with many applications such as cyberattack detection, anti-money laundering, fraud detection, cardiogram analysis, and condition monitoring of machines, a large variety of methods has been proposed so far [56, 57]. Accordingly, there are many ways to categorize the methods. From a target anomaly data perspective, they can be categorized by the mode of anomaly data representation. Point anomaly detection detects an individual data instance
that is anomalous. Contextual anomaly detection identifies an anomalous situation based on an underlying context, such as a trend or seasonality, and detects an individual data instance that deviates from that context. Collective anomaly detection detects an anomalous collection of related data instances, such as an anomalous subsequence in a time series dataset. If we look at the techniques used to detect anomalous data, we can categorize anomaly detection methods by detection approach. The rule learning approach flags data as anomalous when the data does not follow a rule that was identified through learning on a normal dataset. The regression approach detects anomalous data based on deviation from a regression equation fitted to the normal dataset. The clustering approach forms groups (clusters) of similar data from the normal dataset and identifies data that does not belong to any of the clusters as anomalous. The classification approach judges whether a data instance is normal or anomalous using a predefined classification model. The classification approach is further subcategorized into three types according to the classification modeling: supervised, semi-supervised, and unsupervised anomaly detection. Supervised anomaly detection needs both normal- and anomaly-labeled data and builds a classifier to distinguish between the normal and anomaly classes. Semi-supervised anomaly detection is based only on a normal model built from normal-labeled data; data that deviates from the normal model is identified as anomalous. Unsupervised anomaly detection, under the assumption that most of the data instances are normal, detects rare data that does not fit the rest of the dataset. Finally, the nearest neighbor approach flags data as anomalous when it is far from the other data, based on the assumption that normal data points lie close to one another. In the remainder of this section, we show an application case of point anomaly detection with a nearest neighbor approach for monitoring an automated machine remotely.
Anomaly Detection Implementation: A Case
In this case, the target machine operates automatically in an unmanned mode. The condition monitoring system for the machine is composed of a rule setting subsystem and an anomaly detection subsystem (Fig. 24.12). The rule setting subsystem generates a normal data model and sets the rules to be used in anomaly detection through learning from the training data. The rule setting can be performed off-line. The anomaly detection subsystem receives sensor data and other event signals online to detect anomalous situations using the rules provided by the rule setting subsystem. Sensor data from the machine includes temperature, pressure, voltage, and other physical or electrical condition data. An event signal is transmitted upon operational events such as start, stop, and standby, by which one can recognize the operational status. Sensor data and event signals are synchronized. Anomaly detection rules are logically organized as a decision tree composed of hierarchical if-then branches which
Fig. 24.12 Condition monitoring system
Fig. 24.13 Anomaly detection by LSC. (Courtesy of Hitachi Ltd.)
Fig. 24.14 Anomaly detection by VQC. (Courtesy of Hitachi Ltd.)
are set by key parameters with threshold values. In order to construct the decision tree, the rule setting subsystem performs a learning process using training data generated from normal-labeled data extracted from the historical sensor data. An anomalous situation of the machine is detected from an anomalous data instance from the sensors, and the anomaly measure is the distance from the normal datasets. In this case, either the LSC (local subspace classifier) or the VQC (vector quantization clustering) algorithm is employed to build the normal model used for anomaly detection. LSC uses as the anomaly measure the projection distance from the unknown (diagnosis) pattern to an affine subspace defined by its k-nearest-neighbor training points (Fig. 24.13). VQC measures the distance between the observed data and clusters of normal data to detect anomalies (Fig. 24.14). Both LSC and VQC are distribution-free, model-free, nonparametric methods that are widely applicable to point anomaly detection. In some actual cases, it has been reported that a symptom of a potential future failure was successfully detected several days ahead of the actual failure occurrence [58].
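To make the nearest-neighbor idea concrete, the following minimal sketch scores a new sensor vector by its mean distance to the k closest points of the normal training set and raises an alarm when the score exceeds a threshold fixed off-line. It is a simplification in the spirit of the LSC/VQC approaches described above, not the actual Hitachi algorithms; the sensor channels, the threshold rule, and the parameter k are assumptions.

```python
import numpy as np


def anomaly_score(normal_data: np.ndarray, x: np.ndarray, k: int = 5) -> float:
    """Degree of abnormality of observation x: mean Euclidean distance to the
    k nearest points of the normal (training) dataset."""
    d = np.linalg.norm(normal_data - x, axis=1)
    return float(np.sort(d)[:k].mean())


def detect(normal_data: np.ndarray, x: np.ndarray, threshold: float, k: int = 5):
    """Return (alarm, score); an alarm is raised when the score exceeds the
    threshold that was fixed off-line from normal data."""
    score = anomaly_score(normal_data, x, k)
    return score > threshold, score


# Hypothetical usage with two sensor channels (e.g., temperature, pressure).
rng = np.random.default_rng(0)
normal = rng.normal([60.0, 1.2], [1.0, 0.05], size=(500, 2))   # training data

# Off-line "rule setting": threshold = high percentile of training scores
# (each training point is scored against the whole set, including itself,
# which is acceptable for this rough sketch).
scores = [anomaly_score(normal, v) for v in normal]
threshold = float(np.percentile(scores, 99))

alarm, score = detect(normal, np.array([66.0, 1.6]), threshold)
```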
24.7 Product and Service Lifecycle Management in the IoT Age
The roles of product and service are changing in the IoT age. Lastly, we treat this topic from a management perspective.
24.7.1 Management Model in the Experience-Value Economy
As the experience-value economy spreads, product makers have to master the capability to deliver experience to customers. An automated product at the customer site is no longer a precious thing in itself; it becomes a vehicle that delivers valuable experience to the customer through the outcome made possible by its performance. Making the product software defined and IoT connected promotes this experience delivery. As discussed in Sect. 24.6.3, OTA adds further capability to
Fig. 24.15 From "One time make & sell" to "Continuous engagement"
update the software-defined product remotely and to upgrade the experience delivered to the customer. Since the value offering point will shift from product supply to experience delivery, as we saw in the case of Kaeser Kompressoren, the conventional "One time make & sell" business model will be replaced with the "Continuous engagement" business model, in which the product maker owns and uses the product to deliver valuable experience to the customer (Fig. 24.15). In the "Continuous engagement" model, conventional PLM (product lifecycle management) is not sufficient to manage the whole business cycle, since the PLM concept and methods are established basically from a product management point of view. Under "Continuous engagement," the maker is expected to deliver experience as a service throughout the customer's lifecycle. As the customer's life stage progresses, the product, as a service vehicle, has to be updated to upgrade the customer experience. In order to establish a management system for the "Continuous engagement" model, both the product lifecycle and the customer experience lifecycle should be considered and combined through a continuous reinvention cycle of obtaining continuous feedback from the customer and reflecting that feedback in continuous learning to update the product (and service) for the next stage of customer experience (Fig. 24.16). The key functions necessary to handle these three cycles are product and service engineering, service design for experience, and a relevant business model, all of which must be supported by a high-performance computing and data processing infrastructure, since a huge amount of data is exchanged between customers and the maker and various digital engineering software packages run for rapid product and service engineering. An architect must be assigned as a central hub to orchestrate those key functions, and this role is essential to make the whole product and service lifecycle management work. Hardware engineers and software engineers, together with the customer engagement team, must work together, and the ICT infrastructure team must support their continuous reinvention work, with its iterative cycle of analysis, design, testing, and updating, by preparing ICT infrastructure
Fig. 24.16 Management model for the "Continuous engagement"
with the capability of high-performance computing, massive data handling, and ultrafast communications. Recently, Clatworthy [59] described organizational requirements and a strategy to make a company experience centric. Although the transformation to an experience-centric organization is a difficult journey with challenges to conventional institutional logics, continuous feedback and learning practice can be a driver to accelerate such a transformation. While we are still on the way to finding a rationale and establishing a justified framework for product and service lifecycle management with a formal definition and proven good practices, we can point to some related research streams and achievements that give theoretical and practical background for establishing the new management model.
24.7.2 PSS (Product-Service System) Discussions
As the manufacturing industry realized that the origin of value had been shifting from product to service, experts, both researchers and practitioners, initiated a discussion on new business models and systematic frameworks to deliver sustainable values through combinations of products and services. The concept of the PSS (product-service system) was proposed in 1999, and the term was introduced by [60], where it is defined as follows: "A Product Service system (PS system, or product service combination) is a marketable set of products and services, jointly capable of fulfilling
a client's need." The original concept of PSS was rather broad and covered not only the capability to keep a sustainable relationship between companies and customers but also hints for governments on policymaking concerning sustainable production and consumption patterns. Following the original definition and adding further discussion, Mont [61, p. 71] provided a more formal definition of PSS: "A system of products, services, infrastructure, and support networks that continually strive to be competitive, satisfy customer needs, and result in a lower environmental impact than traditional business models." It can be said that PSS needs comprehensive processes of planning, development, implementation, and operation of the system composed of product and service to meet its goal. It is also natural that the actual implementation of PSS depends on the business model the company employs. Tukker et al. [62] categorized PSS implementations into three types: product oriented, use oriented, and result oriented. The product-oriented PSS is the traditional sale of a product embracing some additional service, and the use-oriented PSS is the sale of the use or availability of a product to customers in various forms such as leasing or sharing. While the product-oriented and the use-oriented PSSs correspond to conventional service business models, the result-oriented PSS is an advanced model that fits the experience-value economy: the sale of the result, function, or capability of a product to customers while the company retains ownership of the product. Although PSS is not specifically focused on automated products, software-defined and IoT-connected automatic products will amplify the benefits of PSS by enabling the whole system to continuously adjust and redirect the improvement of product and service so as to optimize the customer's experience. As for the benefits of PSS, Nemoto et al. [63] categorized the values delivered by PSS into seven types: performance improvement, cost reduction, comfort and simplicity, convenience, risk reduction, emotional experience, and eco-friendliness.
24.7.3 How to Realize Valuable Customer Experience
Once the automated product becomes merely a vehicle to deliver experience to its customer, the competitiveness of a business comes from its capability to make the customer experience valuable. Interestingly, in past discussions on customer experience, there seem to be two perspectives on how to deliver it: one is that experience design should be an essential business art [27], and the other is that customer experience is not designed but cocreated through interactions between the customer and service elements [64]. Although we do not intend to discuss their pros and cons or judge between the two perspectives,
the approach to realizing a relevant customer experience may change according to the relationship with the customer and the level of dominance over the customer experience. Experience at an amusement park or a restaurant, for example, should be well designed, since the guests are fully immersed in the world prepared by the amusement park or the restaurant. Automation of the rides at the amusement park has to be designed and built so as to deliver the designed experience. On the contrary, in our daily life, people's perception of services depends on the context each person brings. One person may expect energy-efficient performance from an automatic home cleaner, while another may prefer its powerful cleaning capability. In an industrial case, an automated assembly machine installed in a production line is expected to deliver the performance that keeps the target throughput the customer (e.g., a factory manager) sets according to the factory's goal. In these situations, customer experience is cocreated by the automated product and the customer. In this context, customer experience cannot be designed solely by the product and service provider; instead, the service should be designed with an assessment of how each service element influences the customer experience [64]. In this chapter, we take this idea as shown in Fig. 24.16. In designing a service with the aim of making the customer experience better, a hypothetical model of the customer experience is necessary as a fundamental input. Teixeira et al. [64] proposed a model-based method called CEM (customer experience modeling), which employs HAM (human activity modeling) with a participation map [65] to describe relations among stakeholders and related entities, and the MSD (multilevel service design) method [66] to identify the structure of the customer experience. With the MSD method, a service designer can gain a whole view of the customer experience structure, consisting of the service concept, with the target area of customer experience described by a customer value constellation map, and the service system architecture, with a definition of the service encounters (or customer touch points) in the system. Once the service is designed, it must be integrated with the products that realize the service elements. Product and service engineering identifies the necessary functions realized by the product and their specifications together with supporting technologies. Well-known methods like QFD (quality function deployment) and MRP (manufacturing resource planning) for product design and manufacturing can be applied to identify the necessary functional components in product and service engineering. Kimita et al. [67] proposed a method to explore and identify newly necessary technologies to meet customer needs, which can map relevant technologies associated with those needs. In general, the product and service are realized by a combination of hardware and software. Some software must be embedded in hardware, and sometimes even built into a silicon device, to achieve the performance needed to yield valuable customer experience
with a cost target. This makes the design of product and service complicated and thus difficult to review and reuse for potential revision and update toward customer experience upgrades in the future. To avoid such complexity and enable easier review and reuse, MBD (model-based design or model-based development) and MBSE (model-based systems engineering) are widely introduced to let designers and engineers from different disciplines share a common understanding and view of the product and service system under development. In order to continuously align the service contents and product performance with the latest customer demand, feedback on the customer experience is essential. Experience feedback is a knowledge management practice that creates a continuous positive loop for a company [68]. By obtaining such feedback from customers, service designers and product engineers can reflect lessons learned in the next version of the product and service, which may be delivered to the customer immediately, remotely, and automatically by OTA.
24.7.4 Business Model Making
In the experience-value economy, customers pay for the value obtained through experience, not for the acquisition of a product. As stated in Sect. 24.7.1, product makers have to shift their business models from the "One time make & sell" model to the "Continuous engagement" model to be successful in the experience-value economy. The business model that best fits "Continuous engagement" is the subscription model [69]. The subscription model requires a company to change its mindset and transform all of its processes and systems, shifting from product centric to customer centric. Not only a change in operations but also a change in the financial system is crucial to make the subscription business successful. Instead of the conventional result-based financial status reports, the financial statement for a subscription business shows how the revenue and profit will look in the future based on the prospect of recurring revenue and churn rate. The success of "Continuous engagement" is measured by recurring revenue such as ARR (annual recurring revenue), and the risk of losing customers due to a discrepancy between the customer's expectation and the actual experience is measured by the churn rate. It is also clear that a legacy IT system architecture does not work in the subscription model [69]. The ideal IT system for a subscription business is totally customer centric and can update the service contents as well as pricing on a minute-order cycle. Taking the discussion of the customer-centric mindset a step further, the concept of customer success was introduced by Mehta et al. [70], where customer success is defined as a composite of three concepts, an organization, a discipline, and a philosophy, that concentrates all efforts on the goal of maximizing retention and long-term value by
focusing on the customer experience. It is also backed by ten laws for the practice of customer success, among which Law 4, "Relentlessly monitor and manage customer health," is the most essential for "Continuous engagement" and the most enabled by service with software-defined and IoT-connected product automation. Recently many companies assign a CSM (customer success manager) and put the concept into practice. As a method to design a new business model, the Business Model Canvas is the best-known and most prevalent tool [71]. The Business Model Canvas is a template to conceptualize and describe a business model with a logical structure of building blocks covering customer, value propositions, infrastructure, and finance. The Business Model Canvas is a one-page sheet whose right portion is about the customer and is composed of the building blocks of customer segments, customer relationships, and channels. The left portion is infrastructure related and has the building blocks of key activities and key resources as well as key partners. By placing the value propositions building block in the middle, the user can confirm consistency between the target customer and the key infrastructure needed to realize the proposed values. The building blocks of cost structure and revenue streams are described at the bottom of the canvas and provide the user with information to check the financial requirements and feasibility of the business. Although the Business Model Canvas is a logical and well-visualized tool, it does not capture the dynamics and temporal properties of the business model. In ordinary practice, the business model conceptualized with the Business Model Canvas is translated into a business model diagram, including flows of information and money among the stakeholders, so as to understand and simulate the overall dynamics of the model and confirm its feasibility and effectiveness.
24.8 Conclusion and Next Topics
In this chapter, we provided an overview of modern product automation and innovations, along with the recent business trends and orientation that drive the need for software-defined and IoT-connected product automation. The feature of modern automation resides in autonomous and intelligent properties with cognitive capability and connectivity. As the key technologies to realize this feature, localization and mapping, edge intelligence, OTA (over-the-air) update, and anomaly detection were presented with some implementation cases. We used the last part of the chapter to supplement the management perspective with a view on how the experience-value economy drives the transformation of the business model of product automation from the "One time make & sell" model to the "Continuous engagement" model, and how it influences product and service lifecycle management. Although we focused on autonomous and intelligent properties
of automation in this chapter, the foundations of automation are made of mechatronic components and control systems, as ever. For those foundations and practical guidance, the reader can refer to [72]. As automated products with autonomous intelligence and mobility spread in society, they pose a new agenda for next-generation industry and people's lives. Security and safety are one item on the immediate agenda, which includes protection of the products from malicious attacks, prevention of unintended dangerous behavior harmful to people and society, authentication of access to and communications between products, and backup and recovery systems in case of emergency. Management and control of fleets of multi-vendor products that operate and move in public space is another issue. Not only how to do it but who should do it is still an open question. It must be product-maker agnostic and reliable from society's standpoint. Its mission ranges from local conflict and collision avoidance to totally efficient and effective operation as a piece of the whole societal system. Currently, fleet management services for automated products are provided vertically by the corresponding maker of the products. In order to manage and control fleets of products that operate in a common public space and are made by various makers, some horizontal player who can consider the public interest and total optimization will be necessary. With those in mind, ethics and compliance are essential and fundamental topics to be discussed. Although this is not only about technology but also about social acceptability as well as legal matters, technologies that promote alignment with social common sense and the public good, such as explainable automation and emotionally understandable interaction, are expected.
References 1. Devol Jr, G.C.: Programmed article transfer. US Patent 2,988,237, 13 June 1961 2. Gasparetto, A., Scalera, L.: From the Unimate to the Delta Robot: the early decades of industrial robotics. In: Zhang, B., Ceccarelli, M. (eds.) Proceedings of the 2018 HMM IFToMM Symposium on History of Machines and Mechanisms. Explorations in the History and Heritage of Machines and Mechanisms (History of Mechanism and Machine Science), vol 37, pp. 284–295. Springer (2019) 3. Engineering and Technology History Wiki. Milestones: SHAKEY: the world’s first mobile intelligent robot. https://ethw.org/ Milestones:SHAKEY:_The_World% E2%80%99s_First_Mobile_ Intelligent_Robot,_1972 (1972). Accessed 29 Aug 2020 4. iRobot news release: iRobot introduces PackBot Explorer Robot: new unmanned vehicle equipped with multiple cameras and sensors, Burlington. https://investor.irobot.com/news-releases/news -release-details/irobot-introduces-packbot-explorer-robot (2005). Accessed 29 Aug 2020
K. Funaki 5. Lui, T.J.: Automation in home appliances. In: Nof, S.Y. (ed.) Handbook of Automation, pp. 1469–1483. Springer, Heidelberg (2009) 6. Ishihara, S., Ishihara, K., Nakagawa, R., Nagamachi, M., Sako, H., Fujiwara, Y., Naito, M.: Development of a washer-dryer with kansei ergonomics. Eng. Lett. 18(3), 243–249 (2010) 7. Pinnekamp, F.: Product automation. In: Nof, S.Y. (ed.) Handbook of Automation, pp. 545–558. Springer, Heidelberg (2009) 8. Rio Tinto launches the autonomous heavy freight rail operation. http: / / sts.hitachirail.com/en/news/rio-tinto-launches-autonomousheavy-freight-rail-operation (2018). Accessed 25 June 2020 9. AI Captain: The future of ship navigation inspiring Stena Line to embrace AI technology. https://social-innovation.hitachi/en-eu/ case_studies/stena_line (2019). Accessed 29 Aug 2020 10. Kyoto Robotics’ Technology: Our technology for 3D vision. https://www.kyotorobotics.co.jp/en/technology-3dvision.html. Accessed 29 Aug 2020 11. Ren, T., Dong, Y., Wu, D., Chen, K.: Design of direct teaching behavior of collaborative robot based on force interaction. J. Intell. Robot. Syst. 96, 83–93 (2019) 12. Southie Autonomy: https://www.southieautonomyworks.com. Accessed 29 Aug 2020 13. Murray, S., Floyd-Jones, Q.W., Sorin, D., Konidaris, G.: Robot motion planning on a chip. Robot. Sci. Syst. (2016). https://doi.org/ 10.15607/RSS.2016.XII.004 14. Hitachi Group Corporate Information: Multiple AI coordination control developed in collaboration with University of Edinburgh. https: / / www.hitachi.com / rd / sc / story / ai_coordination/index.html (2018). Accessed 29 Aug 2020 15. Intellithings: https://www.intellithings.net. Accessed 29 Aug 2020 16. iRobot: Vision simultaneous localization and mapping. https:// www.irobot.com/en/roomba/innovations. Accessed 26 July 2020 17. SAE International: SAE International Standard J3016: taxonomy and definition for terms related to driving automation systems for on-road motor vehicles (2018) 18. Lutin, J.M.: Not if, but when: autonomous driving and the future of transit. J. Public Transp. 21(1), 92–103 (2018) 19. Hitachi social innovation in Oceania: creating the world’s largest robot.https://social-innovation.hitachi/en-au/about/our-social-inno vators/AutoHaul-train (2018). Accessed 29 Aug 2020 20. ACEA: Truck platooning. https://www.acea.be/industry-topics/tag/ category/truck-platooning. Accessed 29 Aug 2020 21. Etherington, D.: Techcrunch: Starship Technologies raises $40M, crosses 100K deliveries and plans to expand to 100 new universities. https://techcrunch.com/2019/08/20/starshiptechnologies-raises-40m-crosses-100k-deliveries-and-plans-to-exp and-to-100-new-universities (2019). Accessed 29 Aug 2020 22. Siemens News: Digital transformation: leading by example. https:// new.siemens.com/global/en/company/stories/industry/electronicsdigitalenterpr (2019). Accessed 13 June 2020 23. Hitachi News Release: BCA launches first one-stop smart portal on green building solutions in the region. https://www.hitachi.com.sg/ press/press_2019/20190904.html (2019). Accessed 29 Aug 2020 24. Groove X: https://groove-x.com/en/. Accessed 29 Aug 2020 25. Kupper, D., Lorenz, M., Knizek, C., Kuhlmann, K., Maue, A., Lassig, R., Buchner, T.: Advanced Robotics in the Factory of the Future. BCG Report. https://www.bcg.com/ja-jp/publications/ 2019/advanced-robotics-factory-future.aspx (2019). Accessed 29 Aug 2020 26. Kaeser Kompressoren: Operator model. https://www.kaeser.com/ int-en/solutions/operator-models. Accessed 29 Aug 2020 27. 
Pine, B., Gilmore, J.: Welcome to the experience economy. Harv. Bus. Rev. 76, 97–105 (1998)
28. Lim, K.Y.H., Zheng, P., Chen, C.-H.: A state-of-the-art survey of Digital Twin: techniques, engineering product lifecycle management and business innovation perspectives. J. Intell. Manuf. 31, 1313–1337 (2020) 29. Durrant-Whyte, H., Rye, D., Nebot, E.: Localisation of automatic guided vehicles. In: Giralt, G., Hirzinger, G. (eds.) Robotics Research: The 7th International Symposium (ISRR’95), pp. 613–625. Springer, New York (1996) 30. Smith, R.C., Cheeseman, P.: On the representation and estimation of spatial uncertainty. Int. J. Robot. Res. 5(4), 56–68 (1986) 31. Montemerlo, M., Thrun, S., Koller, D., Wegbreit, B.: FastSLAM: a factored solution to the simultaneous localization and mapping problem. In: Proceedings of the AAAI National Conference on Artificial Intelligence, pp. 593–598 (2002) 32. Bailey, T., Durrant-Whyte, H.: Simultaneous localization and mapping (SLAM): part II. IEEE Robot. Autom. Mag. 13(3), 108–117 (2006) 33. Dissanayake, G., Newman, P., Durrant-Whyte, H.F., Clark, S., Csobra, M.: A solution to the simultaneous localization and map building (SLAM) problem. IEEE Trans. Robot. Autom. 17(3), 229– 241 (2001) 34. Thrun, S., Burgard, W., Fox, D.: Probabilistic Robotics. MIT Press, Cambridge, MA (2005) 35. Durrant-Whyte, H., Bailey, T.: Simultaneous localization and mapping: part I. IEEE Robot. Autom. Mag. 13(2), 99–108 (2006) 36. Chen, Y., Medioni, G.: Object modeling by registration of multiple range image. In: Proceedings of the 1991 IEEE International Conference on Robotics and Automation, Sacramento (1991) 37. Besl, P.J., Mckay, N.D.: A method for registration of 3-D shapes. IEEE Trans. Pattern Anal. Mach. Intell. 14(2), 239–256 (1992) 38. Konolige, K., Grisetti, G., Kümmerle, R., Burgard, W., Limketkai, B., Vincent, R.: Efficient sparse pose adjustment for 2D mapping. In: Proceedings of the 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems, Taipei, pp. 22–29 (2010) 39. Grisetti, G., Kummerle, R., Stachniss, C., Burgard, W.: A tutorial on graph-based SLAM. IEEE Intell. Transp. Syst. Mag. 2(4), 31–43 (2010) 40. Davison, A.J., Reid, I.D., Molton, N., Stasse, O.: MonoSLAM: real-time single camera SLAM. IEEE Trans. Pattern Anal. Mach. Intell. 29, 1052–1067 (2007) 41. Davison, A.J.: Real-time simultaneous localisation and mapping with a single camera. In: Proceedings of the Ninth IEEE International Conference on Computer Vision, vol. 2, Nice, pp. 1403–1410 (2003) 42. Klein, G., Murray, D.: Parallel tracking and mapping for small AR workspaces. In: Proceedings of the 2007 6th IEEE and ACM International Symposium on Mixed and Augmented Reality, Nara, pp. 225–234 (2007) 43. Mur-Artal, R., Montiel, J.M.M., Tardós, J.D.: ORB-SLAM: a versatile and accurate monocular SLAM system. IEEE Trans. Robot. 31, 1147–1163 (2015) 44. Mur-Artal, R., Tardós, J.D.: Orb-slam2: an open-source slam system for monocular, stereo, and rgb-d cameras. IEEE Trans. Robot. 33, 1255–1262 (2017) 45. Newcombe, R.A., Lovegrove, S.J., Davison, A.J.: DTAM: dense tracking and mapping in real-time. In: Proceedings of the 2011 International Conference on Computer Vision, Barcelona, pp. 2320– 2327 (2011) 46. Engel, J., Schöps, T., Cremers, D.: LSD-SLAM: large-scale direct monocular SLAM. In: Proceedings of the European Conference on Computer Vision, Zurich, pp. 834–849. (2014) 47. Engel, J., Koltun, V., Cremers, C.: Direct sparse odometry. IEEE Trans. Pattern Anal. Mach. Intell. 40, 611–625 (2017)
583 48. Wang, R., Schworer, M., Cremers, C.: Stereo DSO: large-scale direct sparse visual odometry with stereo cameras. In: Proceedings of the IEEE International Conference on Computer Vision, Venice, pp. 3903–3911 (2017) 49. CB Insights Report: Retail Trends 2019, 21 (2019) 50. Amazon Picking Challenge: http://amazonpickingchallenge.org. Accessed 29 Aug 2020 51. Kosuge, A., Yamamoto, K., Akamine, Y., Oshima, T.: An SoCFPGA-based iterative-closest-point accelerator enabling faster picking robots. IEEE Trans. Ind. Electron. (pre-print). (2020). https://doi.org/10.1109/TIE.2020.2978722 52. Malkov, Y.A., Yashunin, D.A.: Efficient and robust approximate nearest neighbor search using hierarchical navigable small world graphs (online). Available at https://arxiv.org/abs/1603.09320 (2018) 53. Mashimo, S., Chu, T.V., Kise, K.: High-performance hardware merge sorter. In: Proceedings of IEEE 25th Annual International Symposium on Field-Programmable Custom Computing Machines, pp. 1–8 (2017) 54. Business Insider: Tesla will likely roll out a monthly subscription plan for customers who aren’t yet ready to drop $7,000 upfront for its self-driving technology. https://www.businessinsider.com/teslaelon-musk -monthly -subscription -full-self-drive-autopilot-2020-4 (2020). Accessed 29 Aug 2020 55. Sakurai, K., Kataoka, M., Kodaka, H., Kato, A., Teraoka, H., Kiyama, N.: Connected car solutions based on IoT. Hitachi Rev. 67(1), 72–78 (2018) 56. Agrawal, S., Agrawal, J.: Survey on anomaly detection using data mining techniques. Procedia Comput. Sci. 60, 708–713 (2015) 57. Thudumu, S., Branch, P., Jin, J., Singh, J.: A comprehensive survey of anomaly detection techniques for high dimensional big data. J. Big Data. 7 (2020) 58. Shibuya, H., Maeda, S., Suzuki, S., Noda, T.: Criteria set up support technology of remote monitoring for prognosis. In: The Proceedings of the National Symposium on Power and Energy Systems, pp. 43–46 (2011) 59. Clatworthy, S.D.: The Experience-Centric Organization: How to Win through Customer Experience. O’Reilly & Associates Inc, Beijing (2019) 60. Goedkoop, M.J., van Halen, C.J.G., te Riele, H.R.M., Rommens, P.J.M.: Product Service Systems, Ecological and Economic Basics. PricewaterhouseCoopers N.V., The Hague (1999) 61. Mont, O.: Product-service systems: panacea or myth?, Dissertation, Lund University. Available at http://lup.lub.lu.se/record/467248 (2004) 62. Tukker, A., Tischner, U.: New Business for Old Europe – ProductService Development, Competitiveness and Sustainability. Greenleaf Publishing Ltd., Sheffield (2006) 63. Nemoto, Y., Akasaka, F., Shimomura, Y.: Knowledge-based design support system for conceptual design of product-service systems. In: Meier, H. (ed.) Product-Service Integration for Sustainable Solutions: Proceedings of the 5th CIRP International Conference on Industrial Product-Service Systems, Bochum, Germany, March 14th–15th 2013. Lecture Notes in Production Engineering, pp. 41–52. Springer, Heidelberg (2013) 64. Teixeira, J., Patricio, L., Nunes, N.J., Nobrega, L., Fisk, R.P., Constantine, L.: Customer experience modeling: from customer experience to service design. J. Serv. Manag. 23(3), 362–376 (2012) 65. Constantine, L.: Human activity modeling: toward a pragmatic integration of activity theory and usage-centered design. In: Seffah, A., Vanderdonckt, J., Desmarais, M.C. (eds.) Human-Centered Software Engineering: Software Engineering Models, Patterns and Architectures for HCI, pp. 27–51. Springer, London (2009)
584 66. Patricio, L., Fisk, R.P., Cunha, J.F., Constantine, L.: Multilevel service design: from customer value constellation to service experience blueprinting. J. Serv. Res. 14(2), 180–200 (2011) 67. Kimita, K., Shimomura, Y.: A method for exploring PSS technologies based on customer needs. In: Meier, H. (ed.) Product-Service Integration for Sustainable Solutions: Proceedings of the 5th CIRP International Conference on Industrial Product-Service Systems, Bochum, Germany, March 14th–15th 2013. Lecture Notes in Production Engineering, pp. 215–225. Springer, Heidelberg (2013) 68. Bergmann, R.: Experience Management: Foundations, Development Methodology, and Internet-Based Applications. Springer, Heidelberg (2002) 69. Tzuo, T., Weisert, G.: Subscribed: Why the Subscription Model Will Be Your Company’s Future – And What to Do About It. Portfolio Penguin, New York (2018) 70. Mehta, N., Steinman, D., Murphy, L.: Customer Success: How Innovative Companies Are Reducing Churn and Growing Recurring Revenue. Wiley, Hoboken (2016) 71. Osterwalder, A., Pigneur, Y.: Business Model Generation. Wiley, Hoboken (2010) 72. Sands, N.P., Verhappen, I. (eds.): A Guide to the Automation Body of Knowledge, 3rd edn. ISA, Research Triangle Park (2018)
Kenichi Funaki received his doctoral degree in industrial engineering and management from Waseda University, Tokyo in 2001. Funaki began his career at Production Engineering Research Laboratory, Hitachi Ltd. in 1993, and has over 20 years of experience in developing production, logistics, and supply chain management systems. He has conducted joint research projects with world-class institutes including Gintic Institute of Manufacturing Technology (currently SIMTech) in Singapore, the Institute for Computer Science and Control of Hungarian Academy of Science (SZTAKI), and the Supply Chain and Logistics Institute at Georgia Institute of Technology where he stayed as a research executive between 2008 and 2009.
25 Process Automation
Juergen Hahn and B. Wayne Bequette
Contents
25.1 Overview ..... 585
25.2 Enterprise View of Process Automation ..... 586
25.2.1 Measurement and Actuation (Level 1) ..... 586
25.2.2 Safety and Environmental/Equipment Protection (Level 2) ..... 586
25.2.3 Regulatory Control (Level 3a) ..... 586
25.2.4 Multivariable and Constraint Control (Level 3b) ..... 586
25.2.5 Real-Time Optimization (Level 4) ..... 587
25.2.6 Planning and Scheduling (Level 5) ..... 587
25.3 Process Dynamics and Mathematical Models ..... 587
25.4 Regulatory Control ..... 589
25.4.1 Sensors ..... 589
25.4.2 Control Valves ..... 590
25.4.3 Controllers ..... 590
25.4.4 PID Enhancements ..... 590
25.5 Control System Design ..... 590
25.5.1 Multivariable Control ..... 591
25.6 Batch Process Automation ..... 593
25.7 Automation and Process Safety ..... 597
25.8 Emerging Trends ..... 598
References ..... 600
Abstract
The field of process automation is concerned with the analysis of dynamic behavior of chemical processes, design of automatic controllers, and associated instrumentation. Process automation as practiced in the process industries has undergone significant changes since it was first introduced a century ago. Perhaps the most significant influence on the changes in process control technology has been the introduction of inexpensive digital computers and instruments with greater capabilities than their analog predecessors. During the past decade, the increasing availability of low-cost, yet "smart," sensors and actuators has ushered in an era of "smart manufacturing" and a greater integration of the supply and product chains with manufacturing facilities.

Keywords
Control valve · Model predictive control · International Standard Organization · Control system design · Process analytical technology · Smart manufacturing

J. Hahn, Department of Biomedical Engineering and Department of Chemical & Biological Engineering, Rensselaer Polytechnic Institute, Troy, NY, USA, e-mail: [email protected]
B. W. Bequette, Department of Chemical & Biological Engineering, Rensselaer Polytechnic Institute, Troy, NY, USA, e-mail: [email protected]

© Springer Nature Switzerland AG 2023. S. Y. Nof (ed.), Springer Handbook of Automation, Springer Handbooks, https://doi.org/10.1007/978-3-030-96729-1_25
25.1 Overview
Large-scale chemical manufacturing processes began to evolve in the late nineteenth century, with prime examples of sulfuric acid and soda ash production [1]. For many decades these processes were operated using manual measurements and adjustments. By around 1917, simple temperature feedback control systems, which were largely on-off controllers, came into use, with the notion of proportional and proportional-integral control evolving through the 1930s, and with the first proportional-integral-derivative controller commercially available in 1939 [2]. In this chapter, we first provide an overview of the hierarchical control structure for chemical processes, then review process dynamics and mathematical models. Basic regulatory controllers are followed by control system design, including multivariable control and model predictive control, largely to enforce constraints. The batch processes that dominate the pharmaceutical, food, and semiconductor device industries are covered in a separate section. Safety should be
the highest priority in any manufacturing process, so safety instrumented systems (SIS) are discussed. Finally, we assess emerging trends of smart manufacturing and the importance of the human-in-the-loop.
25.2 Enterprise View of Process Automation
Process automation is used in order to maximize production while maintaining a desired level of product quality and safety and making the process more economical. Because these goals apply to a variety of industries, process control systems are used in facilities for the production of chemicals, pulp and paper, metals, food, and pharmaceuticals. While the methods of production vary from industry to industry, the principles of automatic control are generic in nature and can be universally applied, regardless of the size of the plant. In Fig. 25.1, the process automation activities are organized in the form of a hierarchy with required functions at the lower levels and desirable functions at the higher levels. The time scale for each activity is shown on the left side of Fig. 25.1. Note that the frequency of execution is much lower for the higher-level functions.
5. Planning and scheduling (days–months): demand forecasting, supply chain management, raw materials and product planning/scheduling
4. Real-time optimization (hours–days): plant-wide and individual unit real-time optimization, parameter estimation, supervisory control, data reconciliation
3b. Multivariable and constraint control (minutes–hours): multivariable control, model predictive control
3a. Regulatory control (seconds–minutes): PID control, advanced control techniques, control loop performance monitoring
2. Safety, environmental/equipment protection (< 1 second): alarm management, emergency shutdown
1. Measurement and actuation (< 1 second): sensor and actuator validation, limit checking
25.2.1 Measurement and Actuation (Level 1) Measurement devices (sensors and transmitters) and actuation equipment (for example, control valves) are used to measure process variables and implement the calculated control actions. These devices are interfaced to the control system, usually digital control equipment such as a digital computer. Clearly, the measurement and actuation functions are an indispensable part of any control system.
25.2.2 Safety and Environmental/Equipment Protection (Level 2) The level 2 functions play a critical role by ensuring that the process is operating safely and satisfies environmental regulations. Process safety relies on the principle of multiple protection layers that involve groupings of equipment and human actions. One layer includes process control functions, such as alarm management during abnormal situations, and safety instrumented systems for emergency shutdowns. The safety equipment (including sensors and control valves) operates independently of the regular instrumentation used for regulatory control in level 3a. Sensor validation techniques can be employed to confirm that the sensors are functioning properly.
Fig. 25.1 The five levels of process control and optimization in manufacturing. Time scales are shown for each level [3]
25.2.3 Regulatory Control (Level 3a) Successful operation of a process requires that key process variables such as flow rates, temperatures, pressures, and compositions be operated at, or close to, their set points. This level 3a activity, regulatory control, is achieved by applying standard feedback and feedforward control techniques. If the standard control techniques are not satisfactory, a variety of advanced control techniques are available. In recent years, there has been increased interest in monitoring control system performance.
25.2.4 Multivariable and Constraint Control (Level 3b) Many difficult process control problems have two distinguishing characteristics: (1) significant interactions occur among key process variables and (2) inequality constraints exist for manipulated and controlled variables. The inequality constraints include upper and lower limits; for example,
each manipulated flow rate has an upper limit determined by the pump and control valve characteristics. The lower limit may be zero or a small positive value based on safety considerations. Limits on controlled variables reflect equipment constraints (for example, metallurgical limits) and the operating objectives for the process; for example, a reactor temperature may have an upper limit to avoid undesired side reactions or catalyst degradation, and a lower limit to ensure that the reaction(s) proceed. The ability to operate a process close to a limiting constraint is an important objective for advanced process control. For many industrial processes, the optimum operating condition occurs at a constraint limit, for example, the maximum allowed impurity level in a product stream. For these situations, the set point should not be the constraint value because a process disturbance could force the controlled variable beyond the limit. Thus, the set point should be set conservatively, based on the ability of the control system to reduce the effects of disturbances. The standard process control techniques of level 3a may not be adequate for difficult control problems that have serious process interactions and inequality constraints. For these situations, the advanced control techniques of level 3b, multivariable control and constraint control, should be considered. In particular, the model predictive control (MPC) strategy was developed to deal with both process interactions and inequality constraints.
25.2.5 Real-Time Optimization (Level 4) The optimum operating conditions for a plant are determined as part of the process design, but during plant operations, the optimum conditions can change frequently owing to changes in equipment availability, process disturbances, and economic conditions (for example, raw materials costs and product prices). Consequently, it can be very profitable to recalculate the optimum operating conditions on a regular basis. The new optimum conditions are then implemented as set points for controlled variables. Real-time optimization (RTO) calculations are based on a steady-state model of the plant and economic data such as costs and product values. A typical objective for the optimization is to minimize operating cost or maximize the operating profit. The RTO calculations can be performed for a single process unit and/or on a plant-wide basis. The level 4 activities also include data analysis to ensure that the process model used in the RTO calculations is accurate for the current conditions. Thus, data reconciliation techniques can be used to ensure that steady-state mass and energy balances are satisfied. Also, the process model can be updated using parameter estimation techniques and recent plant data.
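As a minimal illustration of a level 4 calculation, the sketch below poses a steady-state real-time optimization as a small linear program that chooses production rates to maximize profit subject to capacity constraints. All coefficients and constraint values are invented for illustration; an industrial RTO would typically use a rigorous, nonlinear steady-state plant model rather than a two-variable linear program.

```python
from scipy.optimize import linprog

# Choose production rates x1, x2 (t/h) to maximize profit 30*x1 + 40*x2
# subject to limits on shared feed and steam; all numbers are hypothetical.
profit = [30.0, 40.0]
A_ub = [[1.0, 2.0],   # feed consumed per unit of each product  <= 80
        [3.0, 1.0]]   # steam consumed per unit of each product <= 90
b_ub = [80.0, 90.0]

# linprog minimizes, so the profit coefficients are negated.
res = linprog(c=[-p for p in profit], A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None), (0, None)])
optimal_rates, optimal_profit = res.x, -res.fun
# The optimal rates would then be passed to level 3 as controller set points.
```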
25.2.6 Planning and Scheduling (Level 5) The highest level of the process control hierarchy is concerned with planning and scheduling operations for the entire plant. For continuous processes, the production rates of all products and intermediates must be planned and coordinated, based on equipment constraints, storage capacity, sales projections, and the operation of other plants, sometimes on a global basis. For the intermittent operation of batch and semibatch processes, the production control problem becomes a batch scheduling problem based on similar considerations. Thus, planning and scheduling activities pose large-scale optimization problems that are based on both engineering considerations and business projections. The activities of levels 1–3a in Fig. 25.1 are required for all manufacturing plants, while the activities in levels 3b–5 are optional but can be very profitable. The decision to implement one or more of these higher-level activities depends very much on the application and the company. The decision hinges strongly on economic considerations (for example, a cost-benefit analysis), and company priorities for their limited resources, both human and financial. The immediacy of the activity decreases from level 1 to level 5 in the hierarchy. However, the amount of analysis and the computational requirements increase from the lowest to the highest level. The process control activities at different levels should be carefully coordinated and require information transfer from one level to the next. The successful implementation of these process control activities is a critical factor in making plant operation as profitable as possible.
25.3 Process Dynamics and Mathematical Models
Development of dynamic models forms a key component for process automation, as controller design and tuning is often performed by using a mathematical representation of the process. A model can be derived either from first principles knowledge about the system or from past plant data. Once a dynamic model has been developed, it can be solved for a variety of conditions that include changes in the input variables or variations in the model parameters. The transient responses of the output variables are calculated by numerical integration after specifying both the initial conditions and the inputs as functions of time. A large number of numerical integration techniques are available, ranging from simple techniques (e.g., the Euler and Runge–Kutta methods) to more complicated ones (e.g., the implicit Euler and Gear methods). All of these techniques represent some compromise between computational effort (computing time) and accuracy. Although a dynamic model can always be solved in principle, for some situations it may
be difficult to generate useful numerical solutions. Dynamic models that exhibit a wide range of time scales (stiff equations) are quite difficult to solve accurately in a reasonable amount of computation time. Software for integrating ordinary and partial differential equations is readily available. Popular software packages include MATLAB, Mathematica, IMSL, Mathcad, and GNU Octave. For dynamic models that contain large numbers of algebraic and ordinary differential equations, generation of solutions using standard programs has been developed to assist in this task. A graphical user interface (GUI) allows the user to enter the algebraic and ordinary differential equations and related information such as the total integration period, error tolerances, the variables to be plotted, and so on. The simulation program then assumes responsibility for: 1. Checking to ensure that the set of equations is exactly specified 2. Sorting the equations into an appropriate sequence for iterative solution 3. Integrating the equations 4. Providing numerical and graphical output Examples of equation-oriented simulators used in the process industries include gPROMS, Aspen Custom Modeler, and Aspen HYSYS. One disadvantage of many equation-oriented packages is the amount of time and effort required to develop all of the equations for a complex process. An alternative approach is to use modular simulations where prewritten subroutines provide models of individual process units such as distillation columns or chemical reactors. Consequently, this type of simulator has a direct correspondence to the process flowsheet. The modular approach has the significant advantage that plant-scale simulations only require the user to identify the appropriate modules and to supply the numerical values of model parameters and initial conditions. This activity requires much less effort than writing all of the equations and, furthermore, the software is responsible for all aspects of the solution. Because each module is rather general in form, the user can simulate alternative flowsheets for a complex process, for example, different configurations of distillation towers and heat exchangers, or different types of chemical reactors. Similarly, alternative process control strategies can be quickly evaluated. Some software packages allow the user to add custom modules for novel applications. Modular dynamic simulators have been available since the early 1970s. Several commercial products are available from Aspen Technology and Honeywell (Unisim). Modelica is an example of a collaborative effort that provides modeling capability for a number of application areas. These packages also offer equation-oriented capabilities. Modular dynamic
simulators are achieving a high degree of acceptance in process engineering and control studies because they allow plant dynamics, real-time optimization, and alternative control configurations to be evaluated for an existing or a new plant. They also can be used for operator training. This feature allows dynamic simulators to be integrated with software for other applications such as control system design and optimization. While most processes can be accurately represented by a set of nonlinear differential equations, a process is usually operated within a certain neighborhood of its normal operating point (steady state), thus the process model can be closely approximated by a linearized version of the model. A linear model is beneficial because it permits the use of more convenient and compact methods for representing process dynamics, namely Laplace transforms. The main advantage of Laplace transforms is that they provide a compact representation of a dynamic system that is especially useful for the analysis of feedback control systems. The Laplace transform of a set of linear ordinary differential equations is a set of algebraic equations in the new variable s, called the Laplace variable. The Laplace transform is given by

F(s) = \mathcal{L}[f(t)] = \int_0^{\infty} f(t)\, e^{-st}\, dt   (25.1)
where F(s) is the symbol for the Laplace transform, f (t) is some function of time, and L is the Laplace operator, defined by the integral. Tables of Laplace transforms are well documented for common functions [3, 4]. A linear differential equation with a single input u and single output y can be converted into a transfer function using Laplace transforms as follows: Y(s) = G(s) U(s),
(25.2)
where U(s) is the Laplace transform of the input variable u(t), Y(s) is the Laplace transform of the output variable y(t), and G(s) is the transfer function, obtained from transforming the differential equation. The transfer function G(s) describes the dynamic characteristic of the process. For linear systems it is independent of the input variable and so it can readily be applied to any time-dependent input signal. As an example, the first-order differential equation

\tau \frac{dy(t)}{dt} + y(t) = K u(t)   (25.3)

can be Laplace-transformed to

Y(s) = \frac{K}{\tau s + 1} U(s).   (25.4)
Note that the parameters K and τ , known as the process gain and time constant, respectively, map into the transfer function as unspecified parameters. Numerical values for parameters such as K and τ have to be determined for controller design or for simulation purposes. Several different methods for the identification of model parameters in transfer functions are available. The most common approach is to perform a step test on the process and collect the data along the trajectory until it reaches steady state. In order to identify the parameters, the form of the transfer function model needs to be postulated and the parameters of the transfer function can be estimated by using nonlinear regression. For more details on the development of various transfer functions, see [3, 4].
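To make the identification procedure concrete, the short Python sketch below simulates a step test on the first-order model (25.3) by explicit Euler integration and then recovers the gain and time constant from the response. The values K = 2 and τ = 5 min, the integration step, and the simple 63.2 % rule are illustrative assumptions; with real plant data the postulated transfer function would instead be fitted by nonlinear regression, as noted above.

import numpy as np

# Illustrative first-order process dy/dt = (K*u - y)/tau, as in (25.3); all values are assumed.
K, tau = 2.0, 5.0            # process gain and time constant (min)
dt, t_end = 0.01, 50.0       # Euler integration step and simulation horizon (min)

t = np.arange(0.0, t_end, dt)
u = np.ones_like(t)          # unit step in the input at t = 0
y = np.zeros_like(t)
for i in range(1, len(t)):
    # explicit Euler step: y[i] = y[i-1] + dt * dy/dt
    y[i] = y[i - 1] + dt * (K * u[i - 1] - y[i - 1]) / tau

# Simple step-test identification: gain from the final steady state,
# time constant from the time needed to reach 63.2 % of the total change.
K_hat = y[-1] / u[-1]
tau_hat = t[np.argmax(y >= 0.632 * y[-1])]
print(f"identified K = {K_hat:.3f}, tau = {tau_hat:.2f} min")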
25.4
Regulatory Control
When the components of a control system are connected, their overall dynamic behavior can be described by combining the transfer functions for each component. Each block describes how changes in the input variables of the block will affect the output variables of the block. One example is the feedback/feedforward control block diagram shown in Fig. 25.2, which contains the important components of a typical control system, namely process, controller, sensor, and final control element. Regulatory control deals with the treatment of disturbances that enter the system, as shown in Fig. 25.2. These components are discussed in more detail below. Most modern control equipment requires a digital signal for displays and control algorithms, thus the analog-to-digital converter (ADC) transforms the transmitter analog signal to a digital format. Because ADCs may be relatively expensive if adequate digital resolution is required, incoming analog signals are usually multiplexed. Prior to sending the desired control action, which is often in a digital format, to the final control element in the field, the desired control action is usually transformed by a digital-to-analog converter (DAC) to an analog signal for transmission. DACs are relatively inexpensive and are not normally multiplexed. Widespread use of digital control technologies has made ADCs and DACs standard parts of the control system.
25.4.1 Sensors

The hardware components of a typical modern digital control loop shown in Fig. 25.2 are discussed next. The function of the process measurement device is to sense the values, or changes in values, of process variables. The actual sensing device may generate, e.g., a physical movement, a pressure signal, or a millivolt signal. A transducer transforms the measurement signal from one physical or chemical quantity to another, e.g., pressure to milliamps. The transduced signal is then transmitted to a control room through the transmission line. The transmitter is therefore a signal generator and a line driver. Often the transducer and the transmitter are contained in the same device. The most commonly measured process variables are temperature, flow, pressure, level, and composition. When appropriate, other physical properties are also measured. The selection of the proper instrumentation for a particular application is dependent on factors such as: the type and nature of the fluid or solid involved; relevant process conditions; range, accuracy, and repeatability required; response time; installed cost; and maintainability and reliability. Various handbooks are available that can assist in selecting sensors for particular applications (e.g., [5]). Sensors are discussed in detail in Ch. 14.
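As a small illustration of the transmitter's role as a signal generator, the sketch below converts a live-zero 4–20 mA current signal into engineering units over a calibrated range; the temperature range and the reading used in the example are hypothetical values chosen only for demonstration.

def ma_to_eng(current_ma: float, lo: float, hi: float) -> float:
    # Linearly scale a live-zero 4-20 mA transmitter signal to its calibrated range [lo, hi].
    if not 4.0 <= current_ma <= 20.0:
        raise ValueError("signal outside 4-20 mA range; possible transmitter or wiring fault")
    return lo + (current_ma - 4.0) / 16.0 * (hi - lo)

# Example: a temperature transmitter calibrated for 0-200 degC reading 12 mA corresponds to 100 degC.
print(ma_to_eng(12.0, 0.0, 200.0))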
[Fig. 25.2 Block diagram of a process under feedback/feedforward control: supervisory control, set point YSP, error E, feedback and feedforward controllers, manipulated variable U, final control element, process, disturbance D, controlled variable Y, and measurement device]
25.4.2 Control Valves
Material and energy flow rates are the most commonly selected manipulated variables for control schemes. Thus, good control valve performance is an essential ingredient for achieving good control performance. A control valve consists of two principal assemblies: a valve body and an actuator. Good control valve performance requires consideration of the process characteristics and requirements such as fluid characteristics, range, shutoff, and safety, as well as control requirements, e.g., installed control valve characteristics and response time. The proper selection and sizing of control valves and actuators is an extensive topic in its own right [5].
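One widely used inherent valve characteristic, not derived in the text, is the equal-percentage form Cv = Cv,max R^(l−1), where l is the fractional lift and R the rangeability. The sketch below evaluates it for assumed values of Cv,max and R; the installed characteristic additionally depends on how the pressure drop is distributed across the valve and the rest of the line.

import numpy as np

def equal_percentage_cv(lift: np.ndarray, cv_max: float, rangeability: float = 50.0) -> np.ndarray:
    # Inherent equal-percentage characteristic: Cv = Cv_max * R**(lift - 1), lift in [0, 1].
    # At zero lift this gives Cv_max / R rather than zero, which is one reason real valves
    # deviate from the ideal characteristic near the closed position.
    return cv_max * rangeability ** (lift - 1.0)

lift = np.linspace(0.0, 1.0, 5)                 # 0, 25, 50, 75, 100 % lift
print(equal_percentage_cv(lift, cv_max=100.0))  # assumed Cv_max = 100 and R = 50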
25.4.3 Controllers

The most commonly employed feedback controller in the process industry is the proportional-integral (PI) controller, which can be described by the following equation:

u(t) = \bar{u} + K_C \left( e(t) + \frac{1}{\tau_I} \int_0^t e(t')\, dt' \right)   (25.5)
Note that the controller includes proportional as well as integral action. The controller has two tuning parameters: the proportional constant KC and the integral time constant τI. The integral action will eliminate offset for constant load disturbances but it can potentially lead to a phenomenon known as reset windup. When there is a sustained error, the large integral term in (25.5) causes the controller output to saturate. This can occur during start-up of batch processes, or after large set-point changes or large sustained disturbances. PI controllers make up the vast majority of controllers that are currently used in the chemical process industries. If it is important to achieve a faster response that is offset free, a PID (D = derivative) controller can be utilized, described by the following expression:

u(t) = \bar{u} + K_C \left( e(t) + \frac{1}{\tau_I} \int_0^t e(t')\, dt' + \tau_D \frac{de(t)}{dt} \right)   (25.6)
The PID controller of (25.6) contains three tuning parameters because the derivative mode adds a third adjustable parameter τD. However, if the process measurement is noisy, the value of the derivative of the error may change rapidly and derivative action will amplify the noise; in this case, a filter on the error signal can be employed. Digital control systems are ubiquitous in process plants, mostly employing a discrete (finite-difference) form of the PID controller equation given by

u_k = \bar{u} + K_C \left( e_k + \frac{\Delta t}{\tau_I} \sum_{i=0}^{k} e_i + \tau_D \frac{e_k - e_{k-1}}{\Delta t} \right)   (25.7)

where Δt is the sampling period for the control calculations and k represents the current sampling instant. If the process and the measurements permit the sampling period Δt to be chosen small, then the behavior of the digital PID controller is essentially the same as that of an analog PID controller.
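A minimal sketch of the discrete position-form PID of (25.7), applied to the same illustrative first-order process used earlier, is given below; the sampling period and tuning constants are assumptions chosen only to produce a stable, well-damped response, not recommended settings.

import numpy as np

# Illustrative first-order process (K = 2, tau = 5 min) under the discrete PID of (25.7).
K, tau = 2.0, 5.0
dt = 0.1                          # sampling period Delta t (min)
Kc, tau_i, tau_d = 1.5, 3.0, 0.0  # assumed tuning; tau_d = 0 reduces (25.7) to a PI controller
u_bar, y_sp = 0.0, 1.0            # controller bias and set point

y, e_sum, e_prev = 0.0, 0.0, 0.0
for k in range(int(20.0 / dt)):
    e = y_sp - y                                                       # error at sampling instant k
    e_sum += e                                                         # running sum of errors
    u = u_bar + Kc * (e + dt / tau_i * e_sum + tau_d * (e - e_prev) / dt)
    e_prev = e
    y += dt * (K * u - y) / tau                                        # Euler step of the process
print(f"output after 20 min: y = {y:.3f} (set point {y_sp})")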
25.4.4 PID Enhancements

The derivative action in a PID controller is particularly beneficial for counteracting unmeasured disturbances. When a disturbance can be measured, feedforward control is often used for better disturbance rejection. For many blending or combustion problems it is useful to maintain a ratio between two stream flowrates. For example, a ratio controller can be used to supply combustion air to a furnace at a desired air-to-fuel ratio to assure complete combustion. It is also common to use cascade control, where the output of a higher-level loop is the set point to a faster lower-level loop (e.g., flow control).
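As a small illustration of the ratio control idea described above, the function below turns a measured (wild) fuel flow into an air flow set point; the air-to-fuel ratio of 10 is an assumed illustrative number, not a combustion recommendation.

def air_flow_setpoint(fuel_flow: float, air_fuel_ratio: float = 10.0) -> float:
    # Ratio control: the measured fuel flow times the desired air-to-fuel ratio
    # becomes the set point of the air flow loop (often the inner loop of a cascade).
    return air_fuel_ratio * fuel_flow

print(air_flow_setpoint(3.2))  # a fuel flow of 3.2 kg/s gives an air flow set point of 32 kg/s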
25.5
Control System Design
Traditionally, process design and control system design have been separate engineering activities. Thus, in the traditional approach, control system design is not initiated until after plant design is well underway and major pieces of equipment may even have been ordered. This approach has serious limitations because the plant design determines the process dynamics as well as the operability of the plant. In extreme situations, the process may be uncontrollable, even though the design appears satisfactory from a steady-state point of view. A more desirable approach is to consider process dynamics and control issues early in the process design. The two general approaches to control system design are:
1. Traditional approach. The control strategy and control system hardware are selected based on knowledge of the process, experience, and insight. After the control system is installed in the plant, the controller settings (such as in a PID controller) are adjusted. This activity is referred to as controller tuning. 2. Model-based approach. A dynamic model of the process is first developed that can be helpful in at least three ways: (a) it can be used as the basis for model-based
controller design methods, (b) the dynamic model can be incorporated directly in the control law (for example, model predictive control), and (c) the model can be used in a computer simulation to evaluate alternative control strategies and to determine preliminary values of the controller settings. For many simple process control problems controller specification is relatively straightforward and a detailed analysis or an explicit model is not required. However, for complex processes, a process model is invaluable both for control system design and for an improved understanding of the process. The major steps involved in designing and installing a control system using the model-based approach are shown in the flowchart of Fig. 25.3. The first step, formulation of the control objectives, is a critical decision. The formulation is based on the operating objectives for the plants and the process constraints; for example, in the distillation column control problem, the objective might be to regulate a key component in the distillate stream, the bottoms stream, or key components in both streams. An alternative would be to minimize energy consumption (e.g., heat input to the reboiler) while meeting product quality specifications on one or both product streams. The inequality constraints should include upper and lower limits on manipulated variables, conditions that lead to flooding or weeping in the column, and product impurity levels. After the control objectives have been formulated, a dynamic model of the process is developed. The dynamic model can have a theoretical basis, for example, physical and chemical principles such as conservation laws and rates of reactions, or the model can be developed empirically from experimental data. If experimental data are available, the dynamic model should be validated, with the data and the model accuracy characterized. This latter information is useful for control system design and tuning. The next step in the control system design is to devise an appropriate control strategy that will meet the control objectives while satisfying process constraints. As indicated in Fig. 25.3, this design activity is based on models and plant data. Finally, the control system can be installed, with final adjustments performed once the plant is operating.
25.5.1 Multivariable Control

In most industrial processes, there are a number of variables that must be controlled, and a number of variables that can be manipulated. These problems are referred to as multiple-input multiple-output (MIMO) control problems. For almost all important processes, at least two variables must be controlled: product quality and throughput. Several examples of processes with two controlled variables and two manipulated
[Fig. 25.3 Major steps in control system development [3]: formulate control objectives (using management objectives and information from existing plants, if available) → develop process model (using physical and chemical principles, plant data, and computer simulation) → devise control strategy (using process control theory and computer simulation) → select control hardware and software (using vendor information) → install control system (using experience with existing plants, if available) → adjust controller settings → final control system]
variables are shown in Fig. 25.4. These examples illustrate a characteristic feature of MIMO control problems, namely the presence of process interactions, that is, each manipulated variable can affect both controlled variables. Consider the inline blending system shown in Fig. 25.4a. Two streams containing species A and B, respectively, are to be blended to produce a product stream with mass flow rate w and composition x, the mass fraction of A. Adjusting either manipulated flow rate, wA or wB , affects both w and x. Similarly, for the distillation column in Fig. 25.4b, adjusting either reflux flow rate R or steam flow S will affect both distillate composition xD and bottoms composition xB . For the gas-liquid separator in Fig. 25.4c, adjusting the gas flow rate G will have a direct effect on pressure P and a slower, indirect effect on liquid level h because changing the pressure in the vessel will tend to change the liquid flow rate L and thus affect h. In contrast, adjusting the other manipulated variable L directly affects h but has only a relatively small and indirect effect on P.
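The interactions in the blending example can be quantified by the steady-state gain matrix of (w, x) with respect to (wA, wB), obtained by differentiating w = wA + wB and x = wA/(wA + wB). The sketch below evaluates the gains at an assumed operating point and also computes the relative gain array discussed later in this subsection for loop pairing; every gain is nonzero, confirming that each flow affects both controlled variables.

import numpy as np

wA, wB = 2.0, 3.0                       # assumed steady-state flows of streams A and B
w = wA + wB                             # product flow rate
# Steady-state gains of (w, x) with respect to (wA, wB)
Kmat = np.array([[1.0,        1.0],
                 [wB / w**2, -wA / w**2]])
print("gain matrix:\n", Kmat)

# Relative gain array: elementwise product of K with the transpose of its inverse.
Lambda = Kmat * np.linalg.inv(Kmat).T
print("RGA:\n", Lambda)                 # for this system the diagonal elements equal x = wA / w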
[Fig. 25.4 Physical examples of multivariable control problems [3]: (a) inline blending system with manipulated stream flows wA and wB, product flow w, and composition x; (b) distillation column with reflux flow R, steam flow S, distillate composition xD (analyzer AT), and bottoms composition xB; (c) gas–liquid separator with gas flow G, pressure P (transmitter PT), liquid flow L, and level h (transmitter LT)]
[Fig. 25.5 Block diagram for model predictive control [3]: set-point calculations, prediction, and control calculations act on the process inputs; a model runs in parallel with the process, and the residuals (process outputs minus model outputs) are fed back to the prediction block, which supplies the set points (targets) and predictions used by the control calculations]
Pairing of a single controlled variable and a single manipulated variable via a PID feedback controller is possible if the number of manipulated variables is equal to the number of controlled variables. The relative gain array (RGA) provides insights about the best pairing of multiple single-input single-output (SISO) controllers, as well as sensitivity to individual loop failures (including saturation of manipulated variables). MIMO control problems are inherently more complex than SISO control problems because process interactions occur between controlled and manipulated variables. In general, a change in a manipulated variable, say u1, will affect all of the controlled variables y1, y2, ..., yn. Because of process interactions, selection of the best pairing of controlled and manipulated variables for a multiloop control scheme can be a difficult task. In particular, for a control problem with n controlled variables and n manipulated variables, there are n! possible multiloop control configurations. There is a growing trend to use multivariable control, in particular an approach called model predictive control (MPC). Model predictive control offers several important advantages: (1) the process model captures the dynamic and static interactions between input, output, and disturbance variables; (2) constraints on inputs and outputs are considered in a systematic manner; (3) the control calculations can be coordinated with the calculation of optimum set points; and (4) accurate model predictions can provide early warnings
of potential problems. Clearly, the success of MPC (or any other model-based approach) depends on the accuracy of the process model. Inaccurate predictions can make matters worse, instead of better. First-generation MPC systems were developed independently in the 1970s by two pioneering industrial research groups. Dynamic matrix control (DMC) was devised by Shell Oil [6], and a related approach was developed by ADERSA [7]. Model predictive control has had a major impact on industrial practice; for example, an MPC survey by Qin and Badgwell [8] reported that there were over 4500 applications worldwide by the end of 1999, primarily in oil refineries and petrochemical plants. In these industries, MPC has become the method of choice for difficult multivariable control problems that include inequality constraints. The overall objectives of an MPC controller are as follows: 1. Prevent violations of input and output constraints 2. Drive some output variables to their optimal set points, while maintaining other outputs within specified ranges 3. Prevent excessive movement of the manipulated variables 4. Control as many process variables as possible when a sensor or actuator is not available A block diagram of a model predictive control system is shown in Fig. 25.5. A process model is used to predict the current values of the output variables. The residuals (the differences between the actual and predicted outputs) serve as the feedback signal to a prediction block. The predictions are used in two types of MPC calculations that are performed at each sampling instant: set-point calculations and control calculations. Inequality constraints on the input and output variables, such as upper and lower limits, can be included in either type of calculation. The model acts in parallel with the process and the residual serves as a feedback signal, however, it should be noted that the coordination of the control and set-point calculation
is a unique feature of MPC. Furthermore, MPC has had a significant impact on industrial practice because it is more suitable for constrained MIMO control problems. The set points for the control calculations, also called targets, are calculated from an economic optimization based on a steady-state model of the process, traditionally a linear steady-state model. Typical optimization objectives include maximizing a profit function, minimizing a cost function, or maximizing a production rate. The optimum values of set points are changed frequently owing to varying process conditions, especially changes in the inequality constraints. The constraint changes are due to variations in process conditions, equipment, and instrumentation, as well as economic data such as prices and costs. In MPC, the set points are typically calculated each time the control calculations are performed. The control calculations are based on current measurements and predictions of the future values of the outputs. The predictions are made using a dynamic model, typically a linear empirical model such as a multivariable version of the step response models that were discussed in Sect. 25.2. Alternatively, transfer function or state-space models can be employed. For very nonlinear processes, it can be advantageous to predict future output values using a nonlinear dynamic model. Both physical models and empirical models, such as neural networks, have been used in nonlinear MPC [8]. The objective of the MPC control calculations is to determine a sequence of control moves (that is, manipulated input changes) so that the predicted response moves to the set point in an optimal manner. The actual output y, predicted output ŷ, and manipulated input u are shown in Fig. 25.6. At the current sampling instant, denoted by k, the MPC strategy calculates a set of M values of the input {u(k + i − 1), i = 1, 2, ..., M}. The set consists of the current input u(k) and M − 1 future inputs. The input is held constant after the M control moves. The inputs are calculated so that a set of P predicted outputs {ŷ(k + i), i = 1, 2, ..., P} reaches the set point in an optimal manner. The control calculations are based on optimizing an objective function. The number of predictions P is referred to as the prediction horizon, while the number of control moves M is called the control horizon. A distinguishing feature of MPC is its receding horizon approach. Although a sequence of M control moves is calculated at each sampling instant, only the first move is actually implemented. Then a new sequence is calculated at the next sampling instant; after new measurements become available, again only the first input move is implemented. This procedure is repeated at each sampling instant. In MPC applications, the calculated input moves are usually implemented as set points for regulatory control loops at the distributed control system (DCS) level, such as flow control loops. If a DCS control loop has been disabled or placed in manual mode, the input variable is no longer
available for control. In this situation, the control degrees of freedom are reduced by one. Even though an input variable is unavailable for control, it can serve as a disturbance variable if it is still measured. Before each control execution, it is necessary to determine which outputs (controlled variables [CV]), inputs (manipulated variables [MV]), and disturbance variables (DVs) are currently available for the MPC calculations. The variables available for the control calculations can change from one control execution time to the next for a variety of reasons; for example, a sensor may not be available owing to routine maintenance or recalibration. Inequality constraints on input and output variables are important characteristics for MPC applications. In fact, inequality constraints were a primary motivation for the early development of MPC. Input constraints occur as a result of physical limitations on plant equipment such as pumps, control valves, and heat exchangers; for example, a manipulated flow rate might have a lower limit of zero and an upper limit determined by the pump, control valve, and piping characteristics. The dynamics associated with large control valves impose rate-of-change limits on manipulated flow rates. Constraints on output variables are a key component of the plant operating strategy; for example, a common distillation column control objective is to maximize the production rate while satisfying constraints on product quality and avoiding undesirable operating regimes such as flooding or weeping. It is convenient to make a distinction between hard and soft constraints. As the name implies, a hard constraint cannot be violated at any time. By contrast, a soft constraint can be violated, but the amount of violation is penalized by a modification of the cost function. This approach allows small constraint violations to be tolerated for short periods of time [3].
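The following toy sketch illustrates the receding-horizon calculation for an unconstrained single-input, single-output case; it is not an industrial MPC implementation, and the discrete first-order model, horizons, and move-suppression weight are all assumed purely for illustration. At each sampling instant the full input sequence over the control horizon is recomputed, but only the first move is applied.

import numpy as np

# Toy unconstrained receding-horizon controller for an assumed discrete first-order model
# y[k+1] = a*y[k] + b*u[k]; all numerical values are illustrative.
K, tau, dt = 2.0, 5.0, 1.0
a, b = np.exp(-dt / tau), K * (1.0 - np.exp(-dt / tau))
P, M, lam = 10, 3, 0.5            # prediction horizon, control horizon, move-suppression weight
y_sp = 1.0                        # set point (target)

# G[i, j]: effect of the j-th future input on the prediction of y[k+i+1];
# the last of the M inputs is held constant over the remainder of the horizon.
G = np.zeros((P, M))
for i in range(P):
    for j in range(M):
        if j < M - 1:
            G[i, j] = b * a ** (i - j) if i >= j else 0.0
        else:
            G[i, j] = b * sum(a ** (i - m) for m in range(M - 1, i + 1))

D = np.eye(M) - np.eye(M, k=-1)   # differencing matrix, so D @ u gives the input moves
e1 = np.zeros(M)
e1[0] = 1.0

y, u_prev = 0.0, 0.0
for k in range(25):
    free = np.array([a ** (i + 1) for i in range(P)]) * y      # zero-input (free) response
    r = y_sp - free                                            # correction needed to reach the target
    H = G.T @ G + lam * (D.T @ D)
    g = G.T @ r + lam * (D.T @ e1) * u_prev
    u_seq = np.linalg.solve(H, g)    # least-squares optimal input sequence over the horizon
    u_prev = u_seq[0]                # receding horizon: implement only the first move
    y = a * y + b * u_prev           # plant response (here the plant equals the model)
print(f"output after 25 sampling instants: y = {y:.3f} (set point {y_sp})")

An industrial controller would additionally impose the input and output inequality constraints discussed above, which turns the least-squares problem into a quadratic program.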
25.6
Batch Process Automation
Batch processing is an alternative to continuous processing. In batch processing, a sequence of one or more steps, either in a single vessel or in multiple vessels, is performed in a defined order, yielding a specific quantity of a finished product. Because the volume of product is normally small, large production runs are achieved by repeating the process steps on a predetermined schedule. In batch processing, the production amounts are usually smaller than for continuous processing; hence, it is usually not economically feasible to dedicate processing equipment to the manufacture of a single product. Instead, batch processing units are organized so that a range of products (from a few to possibly hundreds) can be manufactured with a given set of process equipment. Batch processing can be complicated by having multiple stages, multiple products made from the same equipment, or
[Fig. 25.6 Basic concept for model predictive control: past and future at sampling instant k, showing the set point (target), the past output y, the predicted future output ŷ, and the past and future control actions u, with the control horizon M (moves at k, ..., k+M−1) and the prediction horizon P (predictions up to k+P)]
parallel processing lines. The key challenge for batch plants is to consistently manufacture each product in accordance with its specifications while maximizing the utilization of available equipment. Benefits include reduced inventories and shortened response times to make a specialty product compared with continuous processing plants. Typically, it is not possible to use blending of multiple batches in order to obtain the desired product quality, so product quality specifications must be satisfied by each batch. Batch processing is widely used to manufacture specialty chemicals, metals, electronic materials, ceramics, polymers, food and agricultural materials, biochemicals and pharmaceuticals, multiphase materials/blends, coatings, and composites – an extremely broad range of processes and products. The unit operations in batch processing are also quite diverse, and some are analogous to operations for continuous processing. In analogy with the different levels of plant control depicted in Fig. 25.1, batch control systems operate at various levels:
• Batch sequencing and logic controls (levels 1 and 2)
• Control during the batch (level 3)
• Run-to-run control (levels 4 and 5)
• Batch production management (level 5)
Figure 25.7 shows the interconnections of the different types of control used in a typical batch process. Run-to-run control is a type of supervisory control that resides
[Fig. 25.7 Overview of a batch control system, showing production management, run-to-run control, control during the batch, equipment control, sequential control, logic control, and safety interlocks]
principally in the production management block. In contrast to continuous processing, the focus of control shifts from regulation to set-point changes, and sequencing of batches and equipment takes on a much greater role. Batch control systems must be very versatile to be able to handle pulse inputs and discrete input/output (I/O) as well as analog signals for sensors and actuators. Functional control activities are summarized as follows: 1. Batch sequencing and logic control: Sequencing of control steps that follow a recipe involves, for example, mixing of ingredients, heating, waiting for a reaction to complete, cooling, and discharging the resulting products. Transfer of materials to and from batch tanks or reactors includes metering of materials as they are charged (as
specified by the recipe), as well as transfer of materials at the completion of the process operation. In addition to discrete logic for the control steps, logic is needed for safety interlocks to protect personnel, equipment, and the environment from unsafe conditions. Process interlocks ensure that process operations can only occur in the correct time sequence. 2. Control during the batch: Feedback control of flow rate, temperature, pressure, composition, and level, including advanced control strategies, falls in this category, which is also called within-the-batch control [9]. In sophisticated applications, this requires specification of an operating trajectory for the batch (that is, temperature or flow rate as a function of time). In simpler cases, it involves tracking of set points of the controlled variables, which includes ramping the controlled variables up and down and/or holding them constant for a prescribed period of time. Detection of when the batch operations should be terminated (end point) may be performed by inferential measurements of product quality, if direct measurement is not feasible. 3. Run-to-run control: Also called batch-to-batch control, this supervisory function is based on offline product quality measurements at the end of a run. Operating conditions and profiles for the batch are adjusted between runs to improve the product quality using tools such as optimization. 4. Batch production management: This activity entails advising the plant operator of process status and how to interact with the recipes and the sequential, regulatory, and discrete controls. Complete information (recipes) is maintained for manufacturing each product grade, including the names and amounts of ingredients, process variable set points, ramp rates, processing times, and sampling procedures. Other database information includes batches produced on a shift, daily, or weekly basis, as well as material and energy balances. Scheduling of process units is based on availability of raw materials and equipment and customer demand. Recipe modifications from one run to the next are common in many batch processes. Typical examples are modifying the reaction time, feed stoichiometry, or reactor temperature. When such modifications are done at the beginning of a run (rather than during a run), the control strategy is called run-to-run control. Run-to-run control is frequently motivated by the lack of online measurements of the product quality during a batch run. In batch chemical production, online measurements are often not available during the run, but the product can be analyzed by laboratory samples at the end of the run. The process engineer must specify a recipe that contains the values of the inputs (which may be time varying) that will meet the product requirements. The task of the run-to-run controller is to adjust the recipe after each
run to reduce variability in the output product from the stated specifications. Batch run-to-run control is particularly useful to compensate for processes where the controlled variable drifts over time; for example, in a chemical vapor deposition process the reactor walls may become fouled owing to by-product deposition. This slow drift in the reactor chamber condition requires occasional changes to the batch recipe in order to ensure that the controlled variables remain on target. Eventually, the reactor chamber must be cleaned to remove the wall deposits, effectively causing a step disturbance to the process outputs when the inputs are held constant. Just as the run-to-run controller compensates for the drifting process, it can also return the process to target after a step disturbance [10, 11]. The Instrument Society of America (ISA) SP-88 standard deals with the terminology involved in batch control [12]. There is a hierarchy of activities that take place in a batch processing system [13]. At the highest level, procedures identify how the products are made, that is, the actions to be performed (and their order) as well as the associated control requirements for these actions. Operations are equivalent to unit operations in continuous processing and include steps such as charging, reacting, separating, and discharging. Within each operation are logical points called phases, where processing can be interrupted by the operator or computer interaction. Examples of different phases include the sequential addition of ingredients, heating a batch to a prescribed temperature, mixing, and so on. Control steps involve direct commands to final control elements, specified by individual control instructions in software. As an example, for {operation = charge reactant} and {phase = add ingredient B}, the control steps would be: (1) open the B supply valve, (2) total the flow of B over a period of time until the prescribed amount has been added, and (3) close the B supply valve. The term recipe has a range of definitions in batch processing, but in general a recipe is a procedure with the set of data, operations, and control steps required to manufacture a particular grade of product. A formula is the list of recipe parameters, which includes the raw materials, processing parameters, and product outputs. A recipe procedure has operations for both normal and abnormal conditions. Each operation contains resource requests for certain ingredients (and their amounts). The operations in the recipe can adjust set points and turn equipment on and off. The complete production run for a specific recipe is called a campaign (multiple batches). In multigrade batch processing, the instructions remain the same from batch to batch, but the formula can be changed to yield modest variations in the product; for example, in emulsion polymerization, different grades of polymers are manufactured by changing the formula. In flexible batch
processing, both the formula (recipe parameters) and the processing instructions can change from batch to batch. The recipe for each product must specify both the raw materials required and how conditions within the reactor are to be sequenced in order to make the desired product. Many batch plants, especially those used to manufacture pharmaceuticals, are certified by the International Organization for Standardization (ISO). ISO 9000 (and the related ISO standards 9001–9004) state that every manufactured product should have an established, documented procedure, and the manufacturer should be able to document the procedure that was followed. Companies must pass periodic audits to maintain ISO 9000 status. Both ISO 9000 and the US Food and Drug Administration (FDA) require that only a certified recipe be used. Thus, if the operation of a batch becomes abnormal, performing any unusual corrective action to bring it back within the normal limits is not an option. In addition, if a slight change in the recipe apparently produces superior batches, the improvement cannot be implemented unless the entire recipe is recertified. The FDA typically requires product and raw materials tracking, so that product abnormalities can be traced back to their sources. Recently, in an effort to increase the safety, efficiency, and affordability of medicines, the FDA has proposed a new framework for the regulation of pharmaceutical development, manufacturing, and quality assurance. The primary focus of the initiative is to reduce variability through a better understanding of processes than can be obtained by the traditional approach. Process analytical technology (PAT) has become an acronym in the pharmaceutical industry for designing, analyzing, and controlling manufacturing through timely measurements (i.e., during processing) of critical quality and performance attributes of raw and in-process materials and processes, with the goal of ensuring final product quality. Process variations that could possibly contribute to patient risk are determined through modeling and timely measurements of critical quality attributes, which are then addressed by process control. In this manner, processes can be developed and controlled in such a way that quality of product is guaranteed. Semiconductor manufacturing is an example of a large-volume batch process [10]. In semiconductor manufacturing an integrated circuit consists of several layers of carefully patterned thin films, each chemically altered to achieve desired electrical characteristics. These devices are manufactured through a series of physical and/or chemical batch unit operations similar to the way in which specialty chemicals are made. From 30 to 300 process steps are typically required to construct a set of circuits on a single-crystalline substrate called a wafer. The wafers are 4–12 inch (100–300 mm) in diameter, 400–700 μm thick, and serve as the substrate upon which microelectronic circuits (devices) are built. Circuits are constructed by depositing the thin films (0.01–10 μm)
of material of carefully controlled composition in specific patterns and then etching these films to exacting geometries (0.35–10 μm). The main unit operations in semiconductor manufacturing are crystal growth, oxidation, deposition (dielectrics, silicon, and metals), physical vapor deposition, dopant diffusion, dopant-ion implantation, photolithography, etch, and chemical-mechanical polishing. Most processes in semiconductor manufacturing are semibatch; for example, in a single-wafer processing tool the following steps are carried out: 1. A robotic arm loads the boat of wafers. 2. The machine transfers a single wafer into the processing chamber. 3. Gases flow continuously and reaction occurs. 4. The machine removes the wafer. 5. The next wafer is processed. When all wafers are finished processing, the operator takes the boat of wafers to the next machine. All of these steps are carried out in a clean room designed to minimize device damage by particulate matter. For a given tool or unit operation, a specified number of wafers are processed together in a lot, which is carried in a boat. There is usually an extra slot in the boat for a pilot wafer, which is used for metrology reasons. A cluster tool refers to equipment which has several single-wafer processing chambers. The chambers may carry out the same process or different processes; some vendors base their chamber designs on series operation, while others utilize parallel processing schemes. The recipe for the batch consists of the regulatory set points and parameters for the real-time controllers on the equipment. The equipment controllers are normally not capable of receiving a continuous set-point trajectory. Only furnaces and rapid thermal processing tools are able to ramp up, hold, and ramp down their temperature or power supply. A recipe can consist of several steps; each step processes a different film based on specific chemistry. The same recipe on the same type of chamber may produce different results, due to different processes used in the chamber previously. This lack of repeatability across chambers is a big problem with cluster tools or when a fabrication plant (fab) has multiple machines of the same type, because it requires that a fab keeps track of different recipes for each chamber. The controller translates the desired specs into a machine recipe. Thus, the fab supervisory controller only keeps track of the product specifications. Factory automation in semiconductor manufacturing integrates the individual equipment into higher levels of automation, in order to reduce the total cycle time, increase fab productivity, and increase product yield [11]. The major functions provided by the automation system include:
1. Planning of factory operation from order entry through wafer production 2. Scheduling of factory resources to meet the production plan 3. Modeling and simulation of factory operation 4. Generation and maintenance of process and product specification and recipes 5. Tracking of work-in-progress (WIP) 6. Monitoring of factory performance 7. Machine monitoring, control, and diagnosis 8. Process monitoring, control, and diagnosis Automation of semiconductor manufacturing in the future will consist of meeting a range of technological challenges. These include the need for faster yield ramp, increasing cost pressures that compel productivity improvements, environmental safety and health concerns, and shrinking device dimensions and chip size. The development of 300 mm platforms in the last few years has spawned equipment with new software systems and capabilities. These systems will allow smart data collection, storage, and processing on the equipment, and transfer of data and information in a more efficient manner. Smart data management implies that data are collected as needed and based upon events and metrology results. As a result of immediate and automatic processing of data, a larger fraction of data can be analyzed, and more decisions are data driven. New software platforms provide the biggest opportunity for a control paradigm shift seen in the industry since the introduction of statistical process control.
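Run-to-run control of the kind described in this section is often implemented as an EWMA (exponentially weighted moving average) recipe adjustment. The sketch below is a generic illustration with an assumed linear gain, target, drift rate, and measurement noise level; it is not a description of any particular fab controller.

import numpy as np

rng = np.random.default_rng(0)
beta = 1.0           # assumed gain from the recipe setting to the measured quality
target = 50.0        # quality target
w = 0.4              # EWMA weight (0 < w <= 1)

d_hat = 0.0                                           # estimate of the drifting disturbance
u = target / beta                                     # initial recipe setting
for run in range(30):
    drift = 0.2 * run                                 # slow drift, e.g., gradual chamber wall fouling
    quality = beta * u + drift + rng.normal(0.0, 0.3) # offline quality measurement after the run
    d_hat = w * (quality - beta * u) + (1 - w) * d_hat  # EWMA update of the disturbance estimate
    u = (target - d_hat) / beta                       # adjusted recipe for the next run
print(f"final recipe setting after 30 runs: {u:.2f}")

Because the disturbance estimate is filtered over many runs, the controller compensates for slow drift while remaining relatively insensitive to run-to-run measurement noise.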
25.7
Automation and Process Safety
In modern chemical plants, process safety relies on the principle of multiple protection layers. A typical configuration is shown in Fig. 25.8. Each layer of protection consists of a grouping of equipment and/or human actions. The protection layers are shown in the order of activation that occurs as a plant incident develops. In the inner layer, the process design itself provides the first level of protection, including the specification of redundant equipment. For example, almost all pumps in liquid service are backed up by a spare pump, which can be switched on if the main pump fails or needs to be removed from service. Similarly, almost all control valves are installed in a parallel arrangement, so that a failing control valve can be taken out of service with flow controlled manually using a bypass valve. Level transmitters can fail, so most tanks and vessels with a gas/liquid interface include a sight glass, which can be used to manually check the level. The next two layers consist of the basic process control system (BPCS) augmented with two levels of alarms and operator supervision or intervention. An alarm indicates that
[Fig. 25.8 Typical layers of protection in a modern chemical plant [14], from the innermost layer outward: process design; basic controls, process alarms, and operator supervision; critical alarms, operator supervision, and manual intervention; automatic action (SIS or ESD); physical protection (relief devices); physical protection (dikes); plant emergency response; community emergency response. Note: protection layers for a typical process are shown in the order of activation expected as a hazardous condition is approached. ESD = emergency shutdown; SIS = safety interlock system]
a measurement has exceeded its specified limits and may require operator action. The fourth layer consists of a safety interlock system (SIS), which is also referred to as a safety instrumented system or as an emergency shutdown (ESD) system. The SIS automatically takes corrective action when the process and BPCS layers are unable to handle an emergency; for example, the SIS could automatically turn off the reactant pumps after a high-temperature alarm occurs for a chemical reactor. Relief devices such as rupture discs and relief valves provide physical protection by venting gas or vapor if overpressurization occurs. As a last resort, dikes are located around process units and storage tanks to contain liquid spills. Emergency response plans are used to address emergency situations and to inform the community. The functioning of the multiple layer protection system can be summarized as follows [14]: Most failures in well-designed and operated chemical processes are contained by the first one or two protection layers. The middle levels guard against major releases and the outermost layers provide mitigation response to very unlikely major events. For major hazard potential, even more layers may be necessary.
It is evident from Fig. 25.8 that automation plays an important role in ensuring process safety. In particular, many of the protection layers in Fig. 25.8 involve instrumentation and control equipment. The SIS operation is designed to provide automatic responses after alarms indicate potentially hazardous situations. The objective is to have the process reach a safe condition. The automatic responses are implemented via interlocks and automatic shutdown and start-up systems. Distinctions are sometimes made between safety interlocks and process interlocks; process interlocks are used for less critical situations to provide protection against minor equipment damage and undesirable process conditions such as the production of off-specification product. Two simple interlock systems are shown in Fig. 25.9. For the liquid storage system, the liquid level must stay above a minimum value in order to avoid pump damage such as cavitation. If the level drops below the specified limit, the low-level switch (LSL) triggers both an alarm and a solenoid (S), which acts as a relay and turns the pump off. For the gas storage system in Fig. 25.9b, the solenoid-operated valve is normally closed. However, if the pressure of the hydrocarbon gas in the storage tank exceeds a specified limit, the high-pressure switch (PSH) activates an alarm and causes the valve to open fully, thus reducing the pressure in the tank. For interlock and other safety systems, a switch can be replaced by a transmitter if the measurement is required. Also, transmitters tend to be more reliable. The SIS in Fig. 25.9 serves as an emergency backup system for the BPCS. The SIS automatically starts when a critical process variable exceeds specified alarm limits that define the allowable operating region. Its initiation results in a drastic action such as starting or stopping a pump or shutting down a process unit. Consequently, it is used only as a last resort to prevent injury to people or equipment. It is very important that the SIS functions independently of the BPCS, otherwise emergency protection will be unavailable during periods when the BPCS is not operating (e.g., due to a malfunction or power failure). Thus, the SIS should be physically separated from the BPCS and have its own sensors and actuators. Sometimes redundant sensors and actuators are utilized; for example, triply redundant sensors are used for critical measurements, with SIS actions based on the median of the three measurements. This strategy prevents a single sensor failure from crippling SIS operation. The SIS also has a separate set of alarms so that the operator can be notified when the SIS initiates an action (e.g., turning on an emergency cooling pump), even if the BPCS is not operational. While chemical process plants have generally become safer over the years, there are still disasters that occur, largely due to the lack of a “safety culture.” A prime example is the BP Texas City disaster in 2005, where a distillation tower on an isomerization unit overfilled, resulting in an explosion that killed 15 people. An investigation revealed that there was a plant history of safety violations and that one of the
[Fig. 25.9 Two interlock configurations [3]: (a) low-level interlock on a liquid storage tank, where a low-level switch (LSL) acts through a solenoid switch (S) to stop the pump; (b) high-pressure interlock on a gas storage tank, where a high-pressure switch (PSH) opens a solenoid valve to send gas to the flare stack]
problems was due to the reward structure for plant managers, who would typically be in the position for a year or two before being promoted to other positions in the company, hence important safety-related expenditures might be pushed into the future. Critical equipment maintenance had been deferred, and numerous instruments were out of calibration. Further, there was a history of violating process unit start-up procedure during previous start-ups over a five-year period. It is known that start-ups and shutdowns are the most dangerous times, with an incident rate ten times that of normal [15].
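The interlock logic of Fig. 25.9 and the median-of-three voting mentioned above for redundant SIS sensors can be sketched in a few lines; the trip limits and readings below are illustrative assumptions, not values taken from any standard.

def median_of_three(a: float, b: float, c: float) -> float:
    # Vote on the median so that a single failed sensor can neither trip nor block the SIS.
    return sorted((a, b, c))[1]

def high_pressure_interlock(pressures_bar, limit_bar: float = 10.0) -> bool:
    # PSH logic of Fig. 25.9b: return True if the solenoid valve to the flare should open.
    return median_of_three(*pressures_bar) > limit_bar

def low_level_interlock(level_m: float, limit_m: float = 0.5) -> bool:
    # LSL logic of Fig. 25.9a: return True if the pump should be tripped and an alarm raised.
    return level_m < limit_m

print(high_pressure_interlock([9.6, 14.2, 9.8]))    # False: one spuriously high reading is outvoted
print(high_pressure_interlock([11.3, 14.2, 11.1]))  # True: consistent high readings open the vent
print(low_level_interlock(0.3))                     # True: trip the pump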
25.8
Emerging Trends
Main emerging trends in process automation have to do with process integration, information integration, and engineering integration. The theme across all of these is the need for a greater degree of integration of all components of the process. Emerging concepts that play important roles in future implementations of process automation include smart manufacturing, the Industrial Internet of Things, cyber-physical systems, and human-in-the-loop or human-cyber-physical systems. While most control loops consist of classic control algorithms, such as PID or MPC, artificial intelligence (AI), including machine learning (ML), is playing an increasingly important role in process automation.
Traditionally, a process plant implementing distributed control systems would choose a specific vendor that used proprietary hardware, software, and communication protocols, making it difficult and/or expensive to expand or retrofit the system. This has motivated a movement to create open standards to enable a “plug and play” culture where sensors, actuators, and control algorithms from different vendors can be integrated into advanced control strategies. The Open Process Automation (OPA) Forum is focused on developing a standards-based open architecture for process control (https://www.opengroup.org/forum/open-process-automation-forum). Open standards are an important component of what is known as Industry 4.0 in Europe and smart manufacturing in the USA. The Industrial Internet of Things (IIoT) enables data across many spatial and time scales to be used in models, leading to improved automation, control, and decision support systems throughout the entire manufacturing enterprise. That is, better and more frequent information about the supply chain, manufacturing plant operations, and the product chain can enable better decisions to be made at all levels of the corporate enterprise [16, 17]. Particularly in the specialty chemicals and pharmaceutical industries, it is important to be able to rapidly move from laboratory-scale to pilot-scale, and to full-scale manufacturing. With smart manufacturing techniques, information can be easily shared between each of these scales/groups. Fundamental thermodynamic and chemical reaction model parameters are determined using laboratory-scale experiments. These models are then used to aid engineers who perform studies in pilot-scale vessels to better understand scale effects, such as imperfect mixing. This information is then used in further models to design and control large-scale production facilities. Cyber-physical systems (CPS) are composed of physical devices and software; such combinations have been part of process automation for some 40 years, during which digital control has been common in manufacturing. In the past, the physical and software components were largely independent, with data transmitted to a control room or control computers, computations performed by the control system, and commands sent back to the manufacturing equipment. In modern and emerging CPS, software is increasingly integrated directly into the equipment such that the equipment is “self-aware.” Sensors and actuators incorporate computational devices and can perform diagnostics locally. Indeed, with distributed computation, the hierarchical implementation shown in Fig. 25.1, with the decoupling of time and spatial scales, is a rather simplistic representation of a highly distributed system. While advanced algorithms automatically optimize and control chemical manufacturing processes, humans continue to be in the loop, so there are many research efforts to better
understand the role of the “human-in-the-loop” and human-cyber-physical systems (HCPS) [18]. As automation gets more and more complex, it is often difficult for a human operator to understand and troubleshoot an operating problem. A prime example from the aviation industry is the Boeing implementation of a maneuvering characteristics augmentation system (MCAS), which played a role in two crashes of Boeing 737 MAX aircraft in 2018 and 2019. Because of a redesign of the aircraft, including new engines, to increase operating efficiency, the MCAS was implemented to avoid the risk of an aircraft stall. Unfortunately, an error in the angle of attack sensor could cause the aircraft control system to force the nose of the plane down even when not in a stall condition. Further, this “fault detection” algorithm was only activated when the plane was under manual control, which is counterintuitive to pilots [19]. In both of the crashes the pilot was fighting against the actions of the MCAS, causing the plane to oscillate vertically but eventually resulting in a loss of control. One could envision similar events occurring in chemical process manufacturing, particularly without sufficient operator training. As noted in [17], chemical process systems have a near 40-year history of AI applications. Stephanopoulos [20] provided a comprehensive vision for AI in process development, design, operations, and control (including planning and scheduling). His perspective focused on automated reasoning, with little discussion of machine learning and related techniques. A recent assessment of AI in chemical engineering is provided by Venkatasubramanian [21], who classifies three overlapping phases: (i) expert systems (1984–1995), (ii) neural networks (1990–2008), and (iii) deep learning and data science (2005 to present). He views self-organization and emergence as the next crucial phase of AI. Lee et al. [22] provide an overview of machine learning applications in process systems engineering; Shin et al. focus on reinforcement learning, specifically for process control applications [23]; and Qin and Chiang discuss machine learning for process data analytics [24]. On the other hand, Marcus lists ten major limitations to current deep learning techniques [25]. The four most relevant to this chapter include: (i) deep learning cannot inherently distinguish causation from correlation; (ii) deep learning has no natural way to deal with hierarchical structure; (iii) deep learning has not been well integrated with prior knowledge; and (iv) deep learning is difficult to engineer with. This chapter has reviewed process modeling and control that are important in typical continuous and batch chemical process plants. Specific examples of related chemical process automation problems are detailed in the following chapters:
27. Infrastructure and Complex Systems
38. Semiconductor Manufacturing Automation
42. Automation in Wood, Paper, and Fiber Industry
44. Automation in Food Processing
References
1. Stephanopoulos, G., Reklaitis, G.V.: Process systems engineering: from Solvay to modern bio- and nanotechnology. A history of development, successes and prospects for the future. Chem. Eng. Sci. 66, 4272–4306 (2011)
2. Bequette, B.W.: Process control education and practice: past, present and future. Comp. Chem. Eng. 128, 538–556 (2019)
3. Seborg, D.E., Edgar, T.F., Mellichamp, D.A., Doyle III, F.J.: Process Dynamics and Control, 4th edn. Wiley, New York (2016)
4. Bequette, B.W.: Process Control. Modeling, Design, and Simulation, 2nd edn. Prentice Hall, Upper Saddle River (2021)
5. Edgar, T.F., Smith, C.L., Bequette, B.W., Hahn, J.: Process control. In: Green, D.W., Southard, M.Z. (eds.) Perry’s Chemical Engineers’ Handbook. McGraw-Hill, New York (2019)
6. Cutler, C.R., Ramaker, B.L.: Dynamic matrix control – a computer control algorithm. In: Proc. Jt. Auto. Control Conf., paper WP5-B, San Francisco (1980)
7. Richalet, J., Rault, A., Testud, J.L., Papon, J.: Model predictive heuristic control: applications to industrial processes. Automatica. 14, 413–428 (1978)
8. Qin, S.J., Badgwell, T.A.: A survey of industrial model predictive control technology. Control. Eng. Pract. 11, 733–764 (2003)
9. Bonvin, D.: Optimal operation of batch reactors – a personal view. J. Process Control. 8, 355–368 (1998)
10. Fisher, T.G.: Batch Control Systems: Design, Application, and Implementation. ISA, Research Triangle Park (1990)
11. Parshall, J., Lamb, L.: Applying S88: Batch Control from a User’s Perspective. ISA, Research Triangle Park (2000)
12. Edgar, T.F., Butler, S.W., Campbell, W.J., Pfeiffer, C., Bode, C., Hwang, S.B., Balakrishnan, K.S., Hahn, J.: Automatic control in microelectronics manufacturing: practices, challenges and possibilities. Automatica. 36, 1567–1603 (2000)
13. Moyne, J., del Castillo, E., Hurwitz, A.M. (eds.): Run to Run Control in Semiconductor Manufacturing. CRC Press, Boca Raton (2001)
14. AIChE Center for Chemical Process Safety: Guidelines for Safe Automation of Chemical Processes. AIChE, New York (1993)
15. Investigation Report: Refinery Explosion and Fire, BP Texas City, Texas; Report no. 2005-04-I-TX; U.S. Chemical Safety and Hazard Investigation Board, 2007. https://www.csb.gov/bp-america-refinery-explosion
16. Davis, J., Edgar, T.F., Graybill, R., Korambath, P., Schott, B., Swink, D., Wang, J., Wetzel, J.: Smart manufacturing. Annu. Rev. Chem. Biomol. Eng. 6, 141–160 (2015)
17. Bequette, B.W.: Commentary: the smart human in smart manufacturing. Ind. Eng. Chem. Res. 58, 19317–19321 (2019)
18. Ghosh, S., Bequette, B.W.: Process systems engineering and the human-in-the-loop – the smart control room. Ind. Eng. Chem. Res. 59(6), 2422–2429 (2020). https://doi.org/10.1021/acs.iecr.9b04739
19. Endsley, M.R., Kiris, E.O.: The out-of-the-loop performance problem and level of control in automation. Hum. Factors J. Hum. Factors Ergon. Soc. 37(2), 381–394 (1995). https://doi.org/10.1518/001872095779064555
20. Stephanopoulos, G.: Artificial intelligence in process engineering – current state and future trends. Comp. Chem. Eng. 14(11), 1259–1270 (1990)
21. Venkatasubramanian, V.: Artificial intelligence in chemical engineering: is it here, finally? AICHE J. 65(2), 466–478 (2019)
22. Lee, J.H., Shin, J., Realff, M.J.: Machine learning: overview of the recent progress and implications for the process systems engineering field. Comp. Chem. Eng. 114, 111–121 (2017)
23. Shin, J., Badgwell, T.A., Liu, K.-H., Lee, J.H.: Reinforcement learning – overview of recent progress and implications for process control. Comp. Chem. Eng. 127, 282–294 (2019)
24. Qin, S.J., Chiang, L.H.: Advances and opportunities in machine learning for process data analytics. Comp. Chem. Eng. 126, 465–473 (2019)
25. Marcus, G.: Deep Learning: A Critical Appraisal. arXiv:1801.00631. https://arxiv.org/abs/1801.00631
Further Reading
Cichocki, A., Ansari, H.A., Rusinkiewicz, M., Woelk, D.: Workflow and Process Automation: Concepts and Technology, 1st edn. Springer, London (1997)
Cleveland, P.: Process automation systems. Control. Eng. 55(2), 65–74 (2008)
Jämsä-Jounela, S.-L.: Future trends in process automation. Annu. Rev. Control. 31(2), 211–220 (2007)
Love, J.: Process Automation Handbook: A Guide to Theory and Practice, 1st edn. Springer, London (2007)
Juergen Hahn is a professor of Biomedical Engineering and of Chemical & Biological Engineering at the Rensselaer Polytechnic Institute. He received his diploma from RWTH Aachen, Germany, and his PhD from the University of Texas at Austin. His research deals with process systems engineering with special emphasis on systems biology.
B. Wayne Bequette is a professor of Chemical and Biological Engineering at the Rensselaer Polytechnic Institute. He received his PhD from the University of Texas at Austin. His research includes a wide range of control systems applications, from health care (automated insulin dosing) to large-scale chemical process manufacturing.
26 Service Automation
Christopher Ganz and Shaun West
Contents
26.1 Service 601
26.1.1 Definition of Service 602
26.1.2 Service Properties 602
26.1.3 Service Industries 603
26.1.4 Life Cycle of Product-Service Systems 603
26.1.5 Service Business Models 606
26.2 Operational Considerations 607
26.2.1 Operation Driven by Market Situation 608
26.2.2 Long-Term Continuous Operation 608
26.2.3 Batch or Shift Operation 608
26.3 Service, Maintenance, and Repair Strategies 609
26.3.1 Key Performance Indicators 609
26.3.2 Corrective Maintenance 609
26.3.3 Preventive Maintenance 610
26.3.4 Condition-Based Maintenance 610
26.3.5 Predictive Maintenance 610
26.3.6 Prescriptive Maintenance 611
26.4 Technology and Solutions 611
26.4.1 Condition Assessment and Prediction 612
26.4.2 Remote Services: Internet of Things 612
26.4.3 Service Information Management in a Digital Twin 614
26.4.4 Service Support Tools 614
26.4.5 Toward Fully Automated Service 615
26.5 Conclusions and Emerging Challenges 615
References 615
Abstract
Throughout the life cycle of an industrial installation, services play an important role. Discussion of service in an industrial context very often refers only to "after sales services" for a product, i.e., keeping it operational through maintenance and repair. Yet when planning a factory, engineering may be one of the services provided, and in operation, most logistics transports are services. A fast and effective industrial service, supporting plants with a broad spectrum of assistance from emergency repair to prescriptive maintenance, rests on two legs: the provision and analysis of the vast variety of information required by service personnel to figure out what is going on and the remedy for the situation, including physical interaction with the equipment and transport of people and equipment. While the automation of physical interaction is limited, data management for efficient servicing, including optimized logistics for transport, is increasingly expanding throughout the service industry. This chapter discusses the basic requirements for the automation of service (predominantly but not limited to after sales services) and gives examples of how the challenging issues involved can be solved.
C. Ganz
C. Ganz Innovation Services, Zurich, Switzerland
e-mail: [email protected]
S. West
Institute of Innovation and Technology Management, Lucerne University of Applied Science and Art, Horw, Switzerland
e-mail: [email protected]
© Springer Nature Switzerland AG 2023 S. Y. Nof (ed.), Springer Handbook of Automation, Springer Handbooks, https://doi.org/10.1007/978-3-030-96729-1_26
Keywords
Industrial services · Product-service systems · Product life cycle · Servitization · Service design · Maintenance strategies · Predictive maintenance · Remote service
26.1 Service
Fig. 26.1 Average margins (EBIT as % of sales) for a range of goods and services for manufacturing firms (Adapted from [1])
Many manufacturing firms make more of their profit margins from their service (or after-market) activities than from selling their new products. The firm's effort is often focused on the design and manufacture of a product, yet the reward here is low, which has been confirmed by many studies (Fig. 26.1). Besides generating a margin for the business, services are essential for customers, and it has been seen that good
services improve customer satisfaction, leading to improved customer retention. Services in many manufacturing firms are considered as a marketing tool and given away “free” to sweeten the equipment sale, presenting the services as a cost to the firm. A product sale is a one-time event, whereas services can include presale consulting, installation, commissioning, maintenance, operational support, and upgrades. We will expand on the description of services in the following sections and describe where automation can support services by either supporting the customer’s processes or the product.
26.1.1 Definition of Service To be able to describe the automation of service, we must first define what we mean by service. Service in general is defined in [2] to be “useful labor that does not produce a tangible commodity.” Another definition is given in [3]: “Work that is done for others as an occupation or business.” While a product is an object, service is an activity. In the context of this chapter, we shall restrict ourselves to considering the provision of the activities that industry requires. In particular, we want to address organized systems of apparatus, appliances, employees, etc. for providing supportive activities to industrial operations. In this context, the term is mostly associated with repair and maintenance activities and servicing
equipment. However, industrial services can be interpreted more broadly. “Useful labor” supplied to a customer can be more than just repairing equipment; it may include other supporting activities, i.e., operation optimization support, logistics, or engineering. Furthermore, recent years have made the concept of equipment as a service popular, where equipment is not owned but charged for on a usage basis.
26.1.2 Service Properties To distinguish properties of the two key components of economies – goods and services – a mental model can help: goods are typically referred to as nouns and services as verbs. Goods as tangible commodities can be acted upon, while services are the actions performed. Service properties derived from that model are the following [4]: • Intangible: as mentioned in the definition, services are actions, not physical objects, and therefore intangible. • Not stored: the “intangible” nature implies the inability to store. A service activity is consumed as it is executed. • Does not result in ownership: since there is nothing to be stored, there is nothing to be owned. This has a few implications for the automation of services:
• A consequence of the lack of storage is the difference in value chain. A service cannot be designed, produced, stored, and delivered. Service may be designed, but delivery is immediate, i.e., the acting subject (the service executor) is present at the same time the object (the serviced equipment) is. • While the quality of a produced item can be checked through analysis and tests, proper delivery of a service is much harder to assess. The evidence of the quality of service is a property of service that is difficult to measure. • Today, many services are delivered by people. Good services build on people’s cognitive capabilities and rely on the interaction between people. This is one of the major challenges in automating services. • As part of a service activity, some components may be tangible and change ownership. Spare parts or replacement equipment are indeed tangible and are transferred to the customer. The service contract then covers the sale of these goods in addition to the services provided.
26.1.3 Service Industries In the global economy, service industries form the tertiary sector (primary, extraction; secondary, production). This sector covers any kind of service offerings, for example: financial services, entertainment, education, healthcare, etc. Such services can be utilizing industrial equipment for the delivery of the service and can be automated as well. But these are not the focus of this section. Some are discussed in more detail in the parts G, H, and I of this handbook. In this section, we look into tertiary (service) activities in the secondary sector (manufacturing and production). Within these services, we distinguish between the following: • Administrative and business services: manufacturing industries do need financial and other externally purchased business services to operate. Some may even need restaurant services (canteen) or housing (for employees). These are covered in their respective sections of this book. • Services along the value chain: an important component of the manufacturing ecosystem is logistics: supplying material to the factory and delivering products to customers. In the case of consumer end-customers, this includes the distribution and retail system. We also do not go into further details on those. • Services to the operation of the production: most prominently, industrial plants utilize services when it comes to maintaining and repairing equipment. But other services, such as engineering, operational optimization, or quality certification (to name a few), are important to optimize a plant’s output. In this narrower sense, we will discuss how industrial service is automated.
26.1.4 Life Cycle of Product-Service Systems The cradle-to-grave life cycle of the equipment [5] provides a valuable introduction to the range of products and services that can support the customer’s processes or the product before the initial purchase of the equipment installation, through the operational phase to the end of its life. Many services are hidden and often provided for free to the customer, as they are “bundled” with the product sale. From a single OEM perspective, the challenge of services over the life cycle can be pretty complex. However, from the customer’s perspective, managing such services is equally complex as they probably operate equipment from multiple suppliers. The complex environment creates a “product-service system,” where both the product and the services over the equipment life cycle are essential for its successful operation. One of the key drivers of industrial services is the life cycle of the equipment. Figure 26.2 describes a range of services that are typically needed over during the beginning of life, middle of life, and end of life of the equipment. In a perfect world, equipment is purchased, commissioned, and operated in a plant, without the need for further interaction. Unfortunately, equipment wears and ages, to the point where performance is reduced or the equipment fails. Moving from a product to a core product with added services to a product-service system is a transformational process known as “servitization.” The most basic offer is when service is considered a market differentiator, making services a bolt-on to the product. In contrast, in the productservice system, value creation is enabled by integrating the product and services. When developing PSS, it is always important to consider the development of both elements together and continue evolving the services during the middle of life and end of life to improve value creation. Tukker [8] provided a model based on the concept of servitization in a product-service system context (Fig. 26.3). The “product-orientated” block contains minimal services, the services themselves being focused on the product, which means that the provider generally limits their services to installation, commissioning, warranty, and maintenance, although, at the extremes, they may provide spare parts rather than maintenance services. Generally, this is provided on a transactional basis with the pull for “aftermarket” services coming directly from the equipment owner or operator. With a use-orientated product-service system, the supplier focuses on the inputs to keep the equipment’s productivity high by focusing on the maintenance of the equipment, so here condition-based maintenance is many suppliers’ goal. Often long-term agreements are used to provide a use-oriented product-service system. A results-orientated product-service system means that the system is now paid based on its output and that the supplier has a much more comprehensive range of obligations.
Fig. 26.2 Life cycle management from multiple perspectives helps firms to identify new service opportunities; the figure covers beginning of life (design, manufacture, distribution), middle of life (use/operate, support, maintain, upgrade, repair), and end of life (retire, recycle, resell). (Adapted from [6] and [7])
Fig. 26.3 PSS model: product content (tangible) versus service content (intangible), ranging from pure product through product-oriented, use-oriented, and results-oriented product-service systems to pure service. (Adapted from [8])
Tukker’s product-service system model [8] has been the basis for the characterization of services [9]. In the Kowalkowski and Ulaga model [9], the nature of the supplier’s value proposition (input- or output-based) is compared with the orientation of the services (e.g., the product or the customer’s processes). This leads to four different service classifications: • Product life cycle services – many of the traditional industrial services are in this category as the value proposition is input-based and oriented toward the product. • Process support services – here, the supplier supports the customer’s process and again focuses on the inputs rather than the outcomes. • Asset efficiency services – the focus is again the product where the supplier is paid based on the product’s outputs rather than the inputs.
• Process delegation services – the supplier takes responsibility for the outcomes, and the customer, in effect, outsources their process to the supplier. Service businesses are different from product businesses in more than just the fact that their value propositions come mainly from the intangible rather than the tangible [10]. Mathieu [11, 12] identified that service businesses were based on supporting the customer, whereas product-based firms supported the product and had efficiency-based innovation processes. Bretani [13] explained that product firms had a more precise separation of production and delivery. In contrast, services often combined production with delivery and were frequently codependent on the customer for part of the delivery process. The cultural aspects of service cannot be underplayed; Schlesinger and Hasket [14] established that the “employee as a disposable tool” is a very costly approach in a service
business. This was in part due to the importance of personal relationships and interactions within service transitions. To humanize service delivery, they developed a four-element model that: • Values people as an investment as much as technology. • Considers that technology is there to support people (not replace them). • Highlights that selection and training for frontline staff is as important as for managers. • Compensation should be related to performance. According to Mathieu [12], the cultural aspect of a service business is related to the service strategy for manufacturing firms. The model defines three levels of organizational intensity for services: “tactical” (e.g., we do it because our competitors do it), “strategic” (e.g., we do it because our senior managers tell us to do it), and “cultural” (e.g., why would you not consider services?).
Service Innovation and Service Design The effort and risk of developing services and products are different; the Kindström, D., Kowalkowski, C., and Sandberg, E. [15] study into service innovation provided the evidence for this by assessing the different development processes (Fig. 26.4). Service development has costs in the later stages of launch and follow-up, whereas for product development the costs are in the earlier phases of innovation. This can explain to some degree why innovation management tools, such as “StageGate” processes, need adaptation to make them more suitable for service innovation. Service innovation is closer to more general business development and can be considered more disruptive as it can force adaptation of traditional business models. The tools used for innovating in services are also different from product development tools as the innovation is often “outside in” and driven by customer knowledge. It involves a wider range of participants from the supplier and the customer than is the case with product development. The
outcome of service innovation is often much more intangible and often difficult to visualize. However, in order to automate services, defining and visualizing the service process are essential.
Outcomes for Operators For the operator of an industrial plant, it is of utmost importance that the equipment is available with optimum performance at any time that it is needed. For some systems in a plant, this means 24 hours a day and 365 days a year, while for others, this may apply for only a few weeks spread over the year. Even though industrial plants prefer to have equipment that does not require any service, maintenance, or repair, planned outage for overhaul is generally accepted and sometimes unavoidable. A large rotating kiln in a cement plant, for example, must usually be fitted with a new liner every year, stopping production for a short time. What is unacceptable, however, are unplanned outages due to malfunction of equipment. The better prepared a supplier of this equipment is to react to these unplanned events, the more appreciated this service will be. Taking this into account, industrial service must strive to: • Keep equipment at maximum performance whenever operated. Where degradation is exceeding the level of what can be recovered in operation • Provide well-planned service at times of planned outage. If there are still unplanned outages • Restore optimum performance as quickly and effectively as possible From these steps, it can be surmised that service strategy planning is related to risk management: operational maintenance and outage planning are risk mitigation actions, avoiding an unplanned shutdown and related contingency costs, while operation restoration is a contingency action,
Fig. 26.4 Product vs. service development process, with the phases pre-study/concept study, development, industrialization, launch, and follow-up. (Adapted from [16])
aiming at minimizing the cost of lost opportunity [3]. At the same time, over-maintenance will result in high costs, while not adding significantly to the availability of the equipment. We will cover this approach in more detail in Sect. 26.3. Already described is the need to consider the entire life cycle when considering services. For services, it can be helpful to consider the life cycle management from both the supplier’s and the customer’s perspectives. This is because the customer may keep the equipment in operation for a significantly longer time than the supplier’s “design life” suggests. This extension of life (and by default, an increase in the accessible installed base) means that new service opportunities may exist that would otherwise be overlooked. Over time new technologies are developed from the product development process that can be reapplied to the existing installed base. This can offer a low-risk and high-margin opportunity for firms that have an active PSS portfolio. The technologies can support the customer to adapt their mission to the new market conditions or a new asset management strategy.
26.1.5 Service Business Models At the core of a business model discussion stands the value proposition: what value does the offering provide to the customer. That value defines whether and how much the customer is willing to pay for it, which defines the supplier’s revenue model. Osterwalder nicely visualizes this in his “business model canvas” [17]. A value proposition is best expressed in the customer’s words: “the solution helps [persona/s] to [activity] in order to [outcome] when [situation].” In the case of a product offering, the value proposition materializes over time, as the customer is using the product, while the payment is made at the time of purchase. The customer is making an upfront investment for a foreseen value to be delivered. In services, the value mostly materializes at the point of service delivery, which may be over a longer period of time (e.g., a service contract). For the customer, the “value flow” (cash flow, if value is equated to money) is much more favorable. Note that the value proposition is in the eyes of the customer. Not all value propositions may be attractive to all customer segments, so customer segmentation and focused value propositions are required. GE does this very well by having three very different value positions for the maintenance of gas turbines: one based on the “power-by-the-hour” concept, one based on an advanced rate sheet valid for 6 years or more, and one on a purely transactional basis. In effect, all value propositions are based on the same underlying service modules of parts, repairs, and spares. Alstom’s gas turbine business
(now part of GE) had one of the most comprehensive suites of business models and value propositions [18] (Fig. 26.5). In a product-service system, the value proposition of the individual product and of the services often differs, and the product sale and delivery may be separated from the service sale and delivery, even when operating in the same market segment. Buyer preferences can (and should) mean that the value proposition is adjusted to their requirements, needs, and preferences. Value propositions may be hierarchical, combining those of the product and the services [5]. Traditionally, industrial services are delivered under an activity-based revenue model. The customer (owner, operator) asked for an activity to be executed on the plant, and the service provider performs all actions required to fulfill this contract. The customer is then charged for time and materials spent. Another common business model for service is the service contract. Under this, the customer has the right to consume a number of services over a defined period of time or a usagebased metric (e.g., distance driven) for a fixed fee. Service contracts have the advantage of easing service budgeting for the customer and create a continuous revenue stream for the service provider. A variant of the service contract is performance-based services. The agreed outcome of the service contract is not the number of interventions but a KPI to be achieved by the equipment. Contracts may be tied to a maximum unplanned outage time, and the service provider has to take all necessary service measures to make sure this is fulfilled. A performance-based service contract may also include a bonus clause if the KPI is significantly exceeded. In recent years, companies are striving to increase the service part of their business model to create more recurring revenue. This trend is generally referred to as “servitization” [19, 20]. One servitization variant, the product as a service, has gained popularity particularly in the software industry. Instead of buying a product and then paying for software upgrades, the software is charged for on a time or usage base. Similar contracts have reached industry by offering a product as a service. In many cases this is not more than a leasing contract, where the equipment stays on the leaser’s balance sheet. Other models, particularly those where the product is exchanged over time, provide more customer value: Hilti provides tools as a service (Hilti Fleet Management Service, [21]), where a construction company gets the tools required at every stage of the construction and later in building maintenance. This kind of contract provides flexibility to the customer to not buy all tools and use them for a short time. For the supplier to have a benefit requires the possibility to increase the utilization of the equipment, i.e., by using equipment in several contracts in sequence.
Fig. 26.5 Alstom gas turbine suite of value propositions with underlying business models, ranging from transactional part purchase orders and field services through multi-year maintenance agreements and contractual service agreements to full operation and maintenance. (Adapted from [18])
The idea of manufacturing as a service goes even further: the complete plant is used as a service, and the customer can produce components without owning manufacturing capacity. As mentioned before, the “as a service” provider can increase the utilization of the plant by offering this service to several customers in sequence. The complexity of most industrial plants may require a mixed approach depending on the complexity and criticality of subsystems. More and more, plants are built in a modular fashion, where individual subsystems are packaged in modules by a supplier, including all equipment and automation. Separate modules may follow different service models, where some modules are key to the operation and are owned and operated by the plant owner, whereas others may be utilized as a-service, with maintenance and possibly even operations provided by the supplier. Modularity in services is similar to modularity in products; it supports the standardization of the back end of the service delivery to help ensure the quality of the services with economies of scale. The frontend modules allow a “pick-and-mix” approach that allows the necessary customization for the customer. In effect this enables flexibility along the continuum of “do it for me” to “do it with me” and “do it yourself.” Caterpillar, Hilti, and Rolls-Royce all have excellent value propositions based on PSS concepts, and today the majority of their growth is from the service solutions that they offer. They have nested value propositions that have clearly focused messages; each of the firms is well known for the quality of their products. For each of the firms, the innovative value propositions have been cannibalistic to their existing offerings; nevertheless they successfully adapted their busi-
ness models to deliver the new value propositions yet still provide the traditional products and services to a market segment.
26.2 Operational Considerations
Any maintenance strategy needs to fit into the overall operation scheme of an enterprise. Essentially, the business processes define the operation schedule. This is the key input required to define a maintenance strategy. When can equipment be taken off-line and for how long, depending on the current commitments to customers and the market situation? Can plant operation be reduced to extend the life cycle of an asset? What is the impact on the business? In the following sections, we will discuss a few typical situations and their impact on a maintenance strategy. When considering the operation (and hence the maintenance demand for the equipment), it is vital to consider the relation between operations (asset load and demand impacting asset health) and maintenance (asset maintenance and repair requiring downtime reducing operations). Each point of view impacts operation and the maintenance strategy for the asset in question, and each links back to the value proposition. It also requires the supplier firm and the customer to understand the forms of maintenance (e.g., planned, unplanned, routine/operational) and the approach to maintenance (e.g., time-based, condition-based, breakdown). Different owners all have different strategies, and here, the OEM’s team needs to bear in mind that service is not about “keeping the equipment shiny.”
26.2.1 Operation Driven by Market Situation Some industries are driven by highly changeable market situations. Food and beverage industries may depend on the harvest season. Power interruption in a winter resort is less critical in summer. And in some industries, production is critical prior to the launch of a new product or for large orders from key customers. In all these examples, production interruption has a very high impact in the high utilization time window and is less critical in between. Some of the utilization peaks may be shifted, but in some situations, this is not possible without impact. It is essential that failure is avoided while under high utilization, and maintenance can be postponed until the low utilization or standstill time. A maintenance schedule must therefore minimize opportunity cost, i.e., the difference between the possible unrestricted revenue and the achieved revenue under the maintenance regime.
26.2.2 Long-Term Continuous Operation
Industry verticals like oil and gas or conventional power generation require continuous operation over a long period. A complete plant shutdown is costly and time-consuming and is avoided whenever possible, so shutdowns are normally scheduled up to years ahead. It is then that all critical maintenance has to be completed within a predefined time window.
Main Equipment Planned Shutdown The maintenance schedule of many plants in these particularly sensitive industries is mostly defined by the main equipment, i.e., by the manufacturer of that equipment. These systems are the key production assets that are essential for the operation; examples are turbines in power generation or compressors in oil and gas extraction. All other maintenance activities are then aligned to that schedule. The key question for condition monitoring and predictive maintenance is the probability of failure before the next maintenance interval. If failure is predicted before the maintenance window, intermediate action is to be taken; if the prediction shows a potential failure after the planned shutdown, maintenance is to be scheduled in the regular maintenance window. Operational risk is usually prescribed by equipment manufacturers based on generic usage assumptions and conservative operating models to define a recommended preventative maintenance schedule. Due to market pressures on production and the desire for more production uptime, operators are currently challenging these assumptions and models and asking equipment manufacturers to extend the recommended maintenance periods. One method to allow operators to safely extend the maintenance period is to move from
preventive maintenance (based on assumptions) to condition-based maintenance based on the real plant operating conditions.
Failures Reducing Flexibility Most of the equipment in a plant is not part of the main equipment fleet, but a failure still generates a plant outage. Examples are pumps with their associated motors and drives, cooling fans, etc. Smaller equipment that would have a high impact when failing is often available in a redundant configuration, i.e., the plant still operates with one failed device. Some of the equipment may be used in particular operational situations only, for example, at start-up or shutdown, cleaning or grade change. Even when the facility is in full operation, redundant devices or equipment that is not operating may be exchanged between main inspections. The maintenance window for such devices is defined by the operational risk. A failed device in a redundant pair significantly increases the risk of failure. Even though a plant may operate without redundancy for some time, it is wise to still repair the device as soon as possible. Some equipment reduces the plant capacity, for instance, a cooling fan that fails in a large fan arrangement may result in a reduction of peak production or the failure of one of six export compressors can reduce the plant capacity without causing a complete shutdown. The production schedule drives the acceptable time window of reduced production. If a failing device is part of a configuration that is currently not operating, the production schedule needs to be consulted for the next scheduled use. In some situations, the production schedule can be rearranged to avoid that particular mode of operation.
26.2.3 Batch or Shift Operation Some plants are shut down at regular intervals. Some factories only operate in one or two shifts, i.e., they are shut down at night. Others are driven by batch schedules, and between batches, a shutdown may be needed, e.g., for cooling, cleaning, or reconfiguring. A comparable situation exists if the plant is reconfigured frequently to produce a different product. A few weeks of uninterrupted operation may be followed by a few days of reconfiguration. In either situation, small maintenance tasks that can be completed during the shutdown can be scheduled quickly without having an impact on the operation schedule. Longer interventions may result in a delay of the production schedule and may need to be shifted until the next planning cycle. Depending on the business cycle, plant utilization may be shifted. This still creates a production loss, but it does not affect a commitment for delivery.
26.3 Service, Maintenance, and Repair Strategies
The requirements from the business environment, the life cycle stage, the nature of the plant (continuous, discrete), and other boundary conditions define the requirements for the maintenance strategy. Cost of downtime needs to be balanced with cost of maintenance, an optimum that may also change over time. In this section we look at different strategies for plant maintenance.
26.3.1 Key Performance Indicators
One of the key performance indicators that can be influenced by an optimized maintenance program is plant availability: the proportion of time a system is in a functional condition. Availability A is calculated based on the expected values E[·] of the uptime and downtime, respectively:
A = E[uptime] / (E[uptime] + E[downtime])
Depending on how the formula is used, there are two interpretations of availability:
• Operational availability: availability under the normal operation scheme of the plant. Uptime and downtime are driven by equipment reliability as well as by commercial considerations. The overall plant uptime and downtime are used, irrespective of what caused the downtime.
• Inherent availability: availability determined solely by failures and their recovery.
The operational availability is optimized based on the business conditions. The inherent availability is a KPI that is normally maximized by minimizing outages or their impact. For the inherent availability, E[uptime] is equal to the mean time to failure (MTTF), and E[downtime] is equal to the mean time to repair (MTTR). An optimal maintenance strategy can influence the inherent availability by extending the MTTF through avoiding failures altogether, and by reducing the MTTR to bring a plant back online in a shorter time. Over the life cycle of the equipment, the MTTF changes: there is a higher probability that the equipment will fail early in its life cycle ("infant mortality") due to production or installation errors, and this probability rises again when the equipment is aging and wearing out. This reliability function is generally referred to as the "bathtub curve," shown in Fig. 26.6. In the context of predictive maintenance, MTTR has the following aspects:
Fig. 26.6 Failure rate over the life cycle of the equipment, "bathtub curve": infant mortality, random failures, and wear-out over the equipment life
• Detection time: the time from the plant being unavailable until it is detected. Even though one would assume that detection of downtime is immediate, one can imagine cases of subsystems or remote plant sections where a failure goes unnoticed for some time.
• Preparation time: the time spent to prepare the repair.
• Repair time: the time spent to repair a fault.
If equipment fails unexpectedly, the preparation time cannot be spent before the incident. MTTR is therefore the sum of the repair time and the preparation time. If a fault can be predicted, the preparation time can be spent before the outage, and only the repair time affects the availability. A comprehensive overview of the literature in the field is given in [22]. If the maintenance strategy is aligned with the business outcome, this optimization considers times of high and low production. A more comprehensive, but more complex, KPI for the effectiveness of a maintenance program may therefore be the cumulative opportunity cost. This KPI considers repair cost but also the lost sales due to the plant not being available in critical business situations.
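To make the effect of prediction on these KPIs concrete, the following is a minimal Python sketch of the availability calculation given above; the MTTF and the detection, preparation, and repair times are purely illustrative values, not figures from this chapter.

```python
# Illustrative sketch of the availability KPI discussed above.
# All numbers are hypothetical; MTTF/MTTR would come from plant records.

def inherent_availability(mttf_h: float, mttr_h: float) -> float:
    """A = E[uptime] / (E[uptime] + E[downtime]) with E[uptime]=MTTF, E[downtime]=MTTR."""
    return mttf_h / (mttf_h + mttr_h)

mttf = 4_000.0           # mean time to failure, hours (assumed)
detection = 2.0          # hours until the failure is noticed
preparation = 46.0       # hours to mobilize people, tools, and spare parts
repair = 24.0            # hours of hands-on repair work

# Unpredicted failure: detection and preparation count toward downtime.
a_reactive = inherent_availability(mttf, detection + preparation + repair)

# Predicted failure: preparation happens before the outage, so only repair counts.
a_predictive = inherent_availability(mttf, repair)

print(f"reactive:   {a_reactive:.4f}")    # ~0.9823
print(f"predictive: {a_predictive:.4f}")  # ~0.9940
```

The same bookkeeping can be extended with lost-production cost per hour to approximate the cumulative opportunity cost mentioned above.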
26.3.2 Corrective Maintenance In this approach, maintenance is only done when something fails. In the best case, the failed equipment is redundant, either by having another device installed that is performing the same function or by executing the function by different means (e.g., manual instead of automatic). In most cases, this approach leads to plant downtime. Downtime can be shortened by keeping an inventory of spare parts and spare equipment, which in turn has an impact on working capital. Relating back to the definition of availability, the factor that can be influenced by corrective maintenance is the duration of the downtime. Proper detection and reporting mechanisms help to detect the failure and, in the best case,
also locate it and explain its reason quickly. Spare part inventory, and well-trained service personnel, located close to the plant further helps to shorten the preparation time. To then repair the failed component, it is essential to have efficient personnel as well as ease of repair built into the design of the broken component. One way to achieve personnel efficiency is proper training. New technologies applying augmented reality (AR) may be of help in achieving this, as the AR allows the trainee to get hands-on experience before seeing a real plant. Furthermore, a service person may not need the support of a senior expert traveling with him or her to the site, as the expert may well be connected to the system remotely. Ease of repair is something that needs to be designed into the equipment. Today, design for manufacturing is state of the art. To extend that concept to include a service environment, where the equipment is installed in its operational environment with only limited tools available, is a challenge that is often not considered in the design phase. To include service technicians in the design stage of an R&D project is a best practice worth exploring [23].
26.3.3 Preventive Maintenance In this maintenance approach, equipment is maintained based on a schedule, hours in operation, use cycles, or similar indications of usage that relates to the equipment’s wear and tear. The intervals in which the maintenance activity has to occur are derived from statistics and the experience of the equipment manufacturer. This approach is taken by car manufacturers, where a car must be serviced at regular intervals (according to time or mileage). This approach prevents many unplanned failures. However, it requires service activity where it may not yet be needed due to the state of the equipment. If a fault develops between maintenance intervals, it will still lead to an unplanned outage. To come up with an optimal preventive maintenance strategy, information about the following aspects of the equipment have to be analyzed: • Design parameters: the equipment was laid out in the design phase to fulfill life cycle requirements that have an impact on its reliability. Tolerances are then chosen to further reduce the failure probability within the expected life cycle. Good industrial design flags components that need to be serviced or replaced within the life cycle of the equipment. This a priori information is the base of a good preventive maintenance plan. • Failure statistics and best practices: service records from the field can be used to further improve the strategy. If, despite the design, some failures occur more frequently, the
maintenance instructions for the fleet have to be updated. The service tools in place (covered in Sect. 26.4.1) should be used to both collect the data and also to roll out refined service plans to service personnel and existing customers.
26.3.4 Condition-Based Maintenance Maintaining a device once it has broken down is one extreme case of condition-based maintenance, where the condition of the equipment has “failed.” Good diagnostic capabilities, however, can indicate an upcoming failure before it results in an outage of the plant. The device can then be maintained in times of low or stopped production, which has a minor impact on the plant’s production. Some equipment conditions can be detected by the experienced maintenance person, through sound, sight, or other senses. This can be extended by adding sensing capabilities to the equipment and by running diagnostic algorithms to detect developing issues before they are spotted by humans. Automating equipment monitoring requires sensors. These may be added to the equipment for the sake of monitoring, but they may already be available for other purposes, e.g., for control. In a simple case, a failure of a component is indicated by a sensor that surpasses a defined limit. If a bearing temperature or vibration measurement is too high, this may point to a problem. In many cases however, further analysis is required. The amplitude of a vibration signal may still be in the normal range, while some frequencies are outside the norm [24]. Such condition monitoring systems may combine several sensor readings to detect the failure. In Fig. 26.7, the current machine condition is one data point that indicates the current status (labeled “monitored condition”).
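As a rough illustration of the kind of automated check described above, the sketch below flags a developing issue when either the overall vibration level or the energy in selected frequency bands exceeds a limit. The sampling rate, band edges, and limits are assumed example values, not vendor specifications.

```python
import numpy as np

FS = 10_000            # sample rate in Hz (assumed)
RMS_LIMIT = 2.0        # overall vibration limit (assumed units)
BAND_LIMITS = {        # (low_hz, high_hz): allowed band energy (assumed)
    (140.0, 160.0): 0.15,   # e.g., a bearing defect frequency band
    (290.0, 310.0): 0.10,
}

def band_energy(signal: np.ndarray, low: float, high: float) -> float:
    """Energy of the signal within a frequency band, via the FFT spectrum."""
    spectrum = np.abs(np.fft.rfft(signal)) / len(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / FS)
    mask = (freqs >= low) & (freqs <= high)
    return float(np.sqrt(np.sum(spectrum[mask] ** 2)))

def assess(signal: np.ndarray) -> list[str]:
    """Return a list of findings; an empty list means no developing issue detected."""
    findings = []
    rms = float(np.sqrt(np.mean(signal ** 2)))
    if rms > RMS_LIMIT:
        findings.append(f"overall vibration high (rms={rms:.2f})")
    for (low, high), limit in BAND_LIMITS.items():
        e = band_energy(signal, low, high)
        if e > limit:
            findings.append(f"band {low:.0f}-{high:.0f} Hz elevated ({e:.3f})")
    return findings
```

The point of the band check is exactly the situation described above: the overall amplitude may still look normal while individual defect frequencies are already outside the norm.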
26.3.5 Predictive Maintenance An extension of condition-based maintenance is predictive maintenance. In addition to monitoring the device’s condition, an algorithm is used to estimate the device’s predicted failure time. This may be done based on past measurements on that device but also based on statistics over a fleet of devices where typical failure probabilities are calculated on the condition that a given effect is observed on a device. A predictive approach allows better optimization of the maintenance schedule. How far in the future the failure can be predicted depends on the capabilities of the analytics used and the amount of data that is available from which to derive the failure probabilities. The result can be an unplanned outage that is scheduled to match a time of low production
Fig. 26.7 Relation of maintenance strategies (monitored condition, failure initiated, prediction, prescription, statistical spread, catastrophic failure)
(low lost opportunity), or the prediction can even assure that a device will survive until the next planned outage. The longer lead time indicated by the predictive algorithm may have an additional impact on the spare part inventory. If the predicted time is long enough, spare parts can be ordered ahead from the manufacturer and do not need to be kept in stock. To enable predictive maintenance, all aspects of condition-based maintenance must be present, i.e., the capability to collect and analyze data. Many of the common algorithms applied also require recordings of past data in periods leading to failures, in order to detect trends. To achieve a more precise prediction that also considers the intended future load of a plant, accurate models are required that take the current state for calibration and calculate the future failure behavior. Since modeling of wear and aging is difficult using first-principles physical models, statistical models or neural networks show good performance in such applications. Figure 26.7 shows the prediction used to forecast future machine behavior given the currently assessed condition.
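A very simple form of such a prediction is a trend extrapolation of a monitored health indicator toward a failure threshold. The sketch below, with made-up measurements, a hypothetical failure level, and an assumed outage schedule, illustrates the idea; production systems would typically rely on fleet statistics or trained models rather than a straight line.

```python
import numpy as np

# Minimal trend-based prediction sketch: fit a linear trend to a monitored
# health indicator and extrapolate to the failure threshold. All values are
# illustrative assumptions.

def predicted_hours_to_failure(hours: np.ndarray,
                               indicator: np.ndarray,
                               failure_level: float) -> float | None:
    slope, intercept = np.polyfit(hours, indicator, deg=1)
    if slope <= 0:            # indicator not degrading -> no prediction
        return None
    t_fail = (failure_level - intercept) / slope
    return max(t_fail - hours[-1], 0.0)

hours = np.array([0, 200, 400, 600, 800], dtype=float)
vibration = np.array([1.0, 1.1, 1.3, 1.4, 1.6])   # monitored condition
rul = predicted_hours_to_failure(hours, vibration, failure_level=2.5)

next_planned_outage_h = 1_500.0                   # assumed schedule
if rul is not None and rul < next_planned_outage_h:
    print(f"schedule intervention: predicted failure in ~{rul:.0f} h")
```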
26.3.6 Prescriptive Maintenance One further maintenance strategy is to investigate the options indicated by the prediction algorithm. A device that is suffering from reduced health may deteriorate more slowly if
not loaded to full capacity. A minor decrease in production may prevent an outage and buy time until the next scheduled maintenance. Alternatively, a short outage to install a temporary fix may buy the time needed to order a new device or to properly schedule the maintenance activity according to the overall maintenance plan. Again, prescriptive maintenance requires all aspects of predictive maintenance to be present. It requires predictive algorithms that not only rely on statistics but also that can run “what-if” simulations to calculate the impact of a particular intervention. Figure 26.7 indicates the impact of prescriptive maintenance to move the failure condition further into the future to allow for corrective actions in an optimal time window.
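The "what-if" reasoning can be sketched in the same spirit: candidate interventions (here, derating the equipment) are simulated against the predicted time to failure, and the cheapest option that survives until the planned outage is selected. The degradation model, load options, and cost figures below are illustrative assumptions only.

```python
# "What-if" sketch for prescriptive maintenance. The degradation model and
# cost numbers are hypothetical and stand in for a calibrated asset model.

PLANNED_OUTAGE_H = 1_000.0
BASE_HOURS_TO_FAILURE = 700.0      # prediction at 100% load (assumed)

def hours_to_failure(load_fraction: float) -> float:
    # Assumed model: degradation rate scales roughly with the square of load.
    return BASE_HOURS_TO_FAILURE / (load_fraction ** 2)

def lost_margin_per_hour(load_fraction: float) -> float:
    return (1.0 - load_fraction) * 500.0   # assumed cost of lost production per hour

options = [1.0, 0.9, 0.8, 0.7]
feasible = [(lost_margin_per_hour(f) * PLANNED_OUTAGE_H, f)
            for f in options
            if hours_to_failure(f) >= PLANNED_OUTAGE_H]

if feasible:
    cost, load = min(feasible)
    print(f"run at {load:.0%} load until the planned outage "
          f"(opportunity cost ≈ {cost:,.0f})")
else:
    print("no derating option reaches the planned outage; schedule a short fix")
```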
26.4 Technology and Solutions
In the previous sections, we have covered the various aspects of industrial services and what the drivers are. In this section we will now cover the solutions available to increase the automation level of these services. It is to be noted that “industrial services” is one field where human intervention is still very important. Automation technologies can support humans in their tasks to increase their effectiveness, but fully automated service delivery is still limited to niche applications.
26.4.1 Condition Assessment and Prediction A properly designed condition assessment and prediction solution is tailored to the equipment it is evaluating. In an optimal case, the functionality is already built into the device as it is designed. The engineers who do this tailor much of the information available and are using it to design the device for reliability. In order to properly construct a condition monitoring system, the elements to be observed need to be those that create the biggest value when monitored [25]. In the first step, possible failure modes need to be analyzed. What can go wrong? What impact would that failure have? If combined with life cycle data from existing products, or market failure statistics, the most frequent failures can be identified. The next step in the design of a monitoring system is to identify possible variables that would indicate a failure as early as possible. Where are sensors installed that could observe an effect of a degradation? The volume of sensors now used in smartphones and cars has led to a significant cost reduction. While vibration monitoring using accelerometers was expensive in the past, such sensors have become available in large numbers. This means adding sensors to a device only for monitoring purposes has become possible even for lower cost products such as low-voltage motors. If such sensors are not available in the installed base of older equipment, retrofitting devices with monitoring packages that comprise sensing, basic analytics, and communication capabilities has become possible. Figure 26.8 shows the ABB Smart Sensor attached to motors in a factory [26]. If it is still not possible to directly measure a failure with a sensor, it may be calculated from adjacent sensors. Are there physical models available that provide the necessary algorithms to do these calculations? If a failure is difficult to model, a statistical modeling approach can be taken, where
Fig. 26.8 Condition monitoring retrofit solution for motors (ABB Smart Sensor)
data from earlier failures are analyzed and used to calculate the respective parameters. A special case of a statistical model is using machine learning. Data from past operations can be used to train a neural network that can then be used to infer the system’s state of health. To train a neural network or statistical model, a large number of observations of healthy as well as faulty devices need to be available. Since industrial equipment rarely fails in normal use, the chance of seeing a failure within a single plant is low. Data from a vast fleet of devices therefore needs to be used for that training. To make this data available for analysis, concepts of the Internet of Things (IoT) have to be applied [27].
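As a hedged illustration of such fleet-based training, the sketch below fits a simple classifier to synthetic sensor features from "healthy" and "faulty" devices and then scores a new device. The feature names, the data, and the choice of scikit-learn's logistic regression are assumptions made for the example, not a description of any particular vendor's analytics.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Fleet-based health inference sketch: features extracted from sensor data of
# many devices (healthy and failed) train a classifier that estimates the
# probability that a device is developing a fault. All data are synthetic.

rng = np.random.default_rng(0)

# Columns: [vibration_rms, bearing_temp_C, current_imbalance] (assumed features)
healthy = rng.normal([1.0, 60.0, 0.02], [0.2, 5.0, 0.01], size=(200, 3))
faulty = rng.normal([1.8, 75.0, 0.06], [0.3, 6.0, 0.02], size=(40, 3))

X = np.vstack([healthy, faulty])
y = np.array([0] * len(healthy) + [1] * len(faulty))

model = LogisticRegression(max_iter=1000).fit(X, y)

new_device = np.array([[1.5, 70.0, 0.04]])
p_fault = model.predict_proba(new_device)[0, 1]
print(f"estimated fault probability: {p_fault:.2f}")
```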
26.4.2 Remote Services: Internet of Things With the proliferation of the Internet of Things into industrial applications [28], many of the monitoring and analytical functions described so far have moved from the plant to the cloud. Computing and storage capacity are more scalable and elastic in cloud environments compared to on-site computation. Furthermore, to learn from as many incidents as possible and to extract the corresponding data patterns to predict that failure in other installations, the data needs to be accessible from one central point where the algorithms can be trained. This point has been established by a number of vendors to be the IoT cloud. Figure 26.9 shows the issue resolution process, where the issue is detected or predicted by the analytics algorithms in the cloud platform. It is then further resolved by the service expert, who triggers the service intervention process that finally brings the field engineer to the plant to fix the problem. The data that is analyzed for the detection of potential issues can also be used to analyze the plant’s performance in general. Performance indicators such as energy efficiency, throughput, or quality may also be derived from collected data, and the plant operator can be informed about possible measures to improve operations. Since all required data is available off-site, broad collaboration between different stakeholders is enabled. The plant that lacks the top-level experts on-site may still be guided by those specialists (located in another plant or in a corporate office), and the supplier can add equipment specialists to discuss any operational or maintenance issues whenever required. Such a collaborative operation center (example screens are shown in Fig. 26.10), where plant personnel, customer’s experts, and supplier’s experts can interact in real time across the globe without the need for travel, is a key concept to deliver digital services beyond maintenance support [29, 30]. Independent of the location and the staffing of the plant, key experts (internal and external) can be made available depending on the needs of the plant operation. This enables
Fig. 26.9 Industrial Internet of Things service ecosystem. (Courtesy ABB)
Fig. 26.10 ABB Ability™ collaborative operation center – plant and asset view
faster reaction and broader analysis of technical as well as production issues. Experts can be called upon without the need for a physical presence on-site. And changing needs of production can be matched by bringing in the experts needed in a wide range of situations.
in a plant, for example, the system would display the data of last inspection; the present performance status, analyzed by a background system; the scheduled maintenance interval; etc. Drawings of the part can be displayed, and maintenance instructions can be given.
26.4.3 Service Information Management in a Digital Twin
26.4.4 Service Support Tools
The detailed information about the equipment’s condition is not sufficient to efficiently execute a maintenance task. In addition to this information, a service organization needs to know more, for example: • Location of the equipment and description of how to reach it, including local access procedures • Purpose of the equipment and expected performance and connection of equipment to other parts of the installation • Proposed service actions and detailed advice of how to carry out these actions, including required tools • Safety issues in handling the equipment • Technical data, including drawings and part lists • Historical data for the equipment, including previous service actions • Information about spare parts and their availability and location • Information about urgency and time constraints • Data for administrative procedures and work orders Most of this information is available in digital form. It is essential that it is available at the place and the time of the service intervention. Ensuring that this information is correct is not a trivial matter. All available information must be associated with the equipment to be serviced. Outdated or incorrect information may in the best case prolong the service activity or in the worst case create a dangerous situation. The challenge of managing digital information along the life cycle has been addressed by introducing the concept of a digital twin [31]. If properly set up, a digital twin not only shows the cause of the problem but also contains 3D models, work instructions, spare part bills of material, and corresponding spare part inventory levels. If the digital twin information is consistently managed throughout the life cycle of the equipment, it can provide most of the information required for a service intervention. The digital twin information not only needs to be collected, but it also needs to be available at the place and time of the service intervention. Handheld devices such as smartphones and tablets are a convenient way to carry the information or to access back-end systems that provide it. Another means to access it may be augmented reality. These tools combine the real picture of a device with displayed information about the object inspected. Looking at a valve
Apart from the information required on-site, further automation can be achieved by managing the service intervention. If the following tasks are not aligned, there may be further delay in providing a remedy: • Availability of a properly trained service team in optimal distance to the plant • Availability of the required tools and spare parts • Deployment of people and material on-site • Alignment with the plant’s shutdown schedule and procedures • Necessary approvals and work orders from the plant • Administrative back-end processes, such as purchase orders and billing All these steps can be taken on paper and coordinated over the phone, but today’s service management tools provide the support required to keep the information flows consistent and avoid misunderstandings. Connecting the real-time information from the field to the service process tools, and integrating the information that is available in the digital twin to the processes managed by the corresponding service and logistics tools, creates the information base to enable automated business processes. From notification of the service organization through high-level event processing based on IoT data to detection of successful remedy by analyzing IoT data that may lead to automatic billing, many processes can be automated. Still, even in highly automated service delivery systems, humans stay in the loop as well, taking the relevant decisions and finally executing the service actions in the field. The technology available in the IoT cloud, through artificial intelligence and advanced optimization algorithms, helps to make this process as efficient as possible to reduce downtime and increase the productivity of a plant. When considering automation, it is vital to consider the digital landscape that exists, as service can tap into these as resources to improve the service delivery, the operation management, and the design of the next generation of products. The installed base information provides a core service base as it includes information on where the equipment is, how it is operated, how it is maintained, and who is the operator. This provides important insights for both sales and the service business. The installed base’s starting point can be data in the customer relationship management (CRM)
system that considers the bidding process or simply the ship-to address on the delivery. When the installed base is integrated with a life cycle management model in the CRM and coupled with operational data, it is possible to forecast service sales revenues. The CRM data provides information on the customers, and this can be used to drive the service sales team, ensuring repeat visits and triggers for sales processes. Journey mapping can support this process further and allows for the automation of trigger marketing. ServiceMax, for instance, provides a system similar to a manufacturing execution system (MES) for services that supports the service business, drawing from data in CRM and ERP systems and the installed base with operational service management. This helps to automate the service operations to drive customer experience and efficiency. Machine data can be collected in the form of process data and sensor data to provide, for example, automated replenishment as part of a service contract. Veolia Environment, for example, used sensors on waste bins to allow an adaptive collection process to be developed.
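As a hedged illustration of such sensor-driven service triggers, the short sketch below schedules collections only for assets whose measured fill level exceeds a threshold, instead of visiting every site on a fixed calendar. The asset identifiers, the threshold value, and the data structure are hypothetical and only stand in for whatever the real contract and telemetry define.

from dataclasses import dataclass

# Hypothetical reading from a connected fill-level sensor on a waste bin
# (or a consumable tank covered by a service contract).
@dataclass
class SensorReading:
    asset_id: str
    fill_level: float   # 0.0 (empty) .. 1.0 (full)

def plan_collections(readings, threshold=0.8):
    """Return the asset ids that should be scheduled for collection or replenishment."""
    return [r.asset_id for r in readings if r.fill_level >= threshold]

readings = [
    SensorReading("bin-017", 0.92),
    SensorReading("bin-018", 0.35),
    SensorReading("bin-019", 0.81),
]
print(plan_collections(readings))   # ['bin-017', 'bin-019']

The design point is simply that the trigger comes from measured machine data rather than from a fixed schedule; the same pattern applies to spare-part replenishment under a service contract.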
26.4.5 Toward Fully Automated Service
As we laid out in Sect. 26.1.2, one of the key properties of services is their intangible nature. When developing a physical product, its design is described so that a factory or workshop can build it from the design instructions. The machines in a factory are capable of using some of these instructions (possibly converted into machine instructions) to automatically build the product. Service delivery is unique in time and place on each occasion it is performed for a customer. Its execution still often depends on the individual performing the service. To standardize service delivery, organizations have clearly described the service process, similar to the instructions and plans that are used to build a product. Such defined processes are the basis for service automation. As we have seen in Sect. 26.4.4, tools to support the back-end processes in a more automated way are available today. To properly automate service processes, feedback from the equipment is essential. Consistently collecting and analyzing information as described in Sect. 26.4.2 is required to close the automation loop: from condition assessment to failure prediction, from spare part inventory to the availability of properly trained personnel. The concept of a digital twin can serve as a central information platform that is then used for information analysis and operation optimization. Still, human judgement and manual skills will be required to finally resolve an issue on-site. However, technologies that allow remote interaction with the plant (remote operations, augmented/virtual reality) enable an expert to interact with on-site personnel, advise them, and guide them quickly to the correct solution.
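A minimal sketch of this closed loop is given below, under the assumption of a very crude linear-trend failure prediction; the names (DigitalTwin, create_work_order) and all numbers are hypothetical illustrations of the idea, not an implementation of any specific asset management system.

# Sketch of the closed service-automation loop described above:
# condition data -> failure prediction -> work order filled from digital-twin data.
from dataclasses import dataclass, field
from typing import List

@dataclass
class DigitalTwin:
    equipment_id: str
    spare_parts: List[str] = field(default_factory=list)
    work_instructions: str = ""

def predict_days_to_failure(vibration_trend: List[float], limit: float = 7.0) -> float:
    """Very crude trend extrapolation: days until the vibration level crosses 'limit'."""
    current, previous = vibration_trend[-1], vibration_trend[-2]
    slope_per_day = max(current - previous, 1e-6)
    return max((limit - current) / slope_per_day, 0.0)

def create_work_order(twin: DigitalTwin, days_left: float) -> dict:
    return {
        "equipment": twin.equipment_id,
        "due_in_days": round(days_left, 1),
        "parts_to_reserve": twin.spare_parts,
        "instructions": twin.work_instructions,
    }

twin = DigitalTwin("pump-42", ["seal kit", "bearing 6204"], "Isolate, lock out, replace seal.")
days_left = predict_days_to_failure([3.1, 3.4, 3.9, 4.6])
if days_left < 14:                        # plan the intervention before the predicted failure
    print(create_work_order(twin, days_left))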
The use of industrial IoT has brought automated services a great step forward, but further information integration is required to optimally serve an industrial customer. Service processes can be automated similarly to other business processes where robotic process automation is increasing efficiency.
26.5 Conclusions and Emerging Challenges
Industrial services cover most of a product or system's life cycle, from the design stages until its end of life. In particular, the services around managing the installed system's operational performance and uptime share a common pattern: measured field data and other information available in the plant's or supplier's IT systems are analyzed to detect or predict performance degradation or failures. Based on this information, a service intervention is planned and executed. Service automation is easiest to implement in the data collection and analysis parts of this process. Today's asset management systems are capable of doing these tasks in an automated or supported way. These systems are also well suited to automate some of the background processes: scheduling, spare parts management, etc. A well-designed product-service system (PSS) can help optimize these steps. Where automation is still not easily implemented is in the execution of the service intervention. Repair and maintenance activities still require human involvement to perform the steps necessary to get the equipment back to optimal performance. In this chapter we have outlined some of the challenges and presented current approaches around creating product-service system offerings that not only maintain the product's functionality but also extend it with services along the life cycle. We have also outlined some of the technologies that can be employed to put such systems into action. If properly implemented, these approaches not only provide better margins for the supplier but also create better customer relationships and are a door opener to continued collaboration on expanding the customer's operations. See related topics in Chapter 31.
References
1. Fischer, T., Gebauer, H., Fleisch, E.: Service Business Development: Strategies for Value Creation in Manufacturing Firms. Cambridge University Press, Cambridge (2012)
2. Merriam Webster [Online]. Available: https://www.merriamwebster.com/dictionary/service. Accessed 30 June 2020
3. The American Heritage Dictionary of the English Language [Online]. Available: https://www.ahdictionary.com. Accessed 8 Mar 2021
4. InvestorWorlds [Online]. Available: http://www.investorwords.com/6664/service.html. Accessed 30 June 2020
5. West, S., Pascual, A.: The use of equipment life-cycle analysis to identify new service opportunities. In: Baines, T., Harrison, D.K. (eds.) Servitization: The Theory and Impact: Proceedings of the Spring Servitization Conference, Aston (2015)
6. Thorben, K.-D., Wiesner, S.A., Wuest, T.: "Industrie 4.0" and smart manufacturing - a review of research issues and application example. J. Autom. Technol. 11(1), 4–19 (2017)
7. Terzi, S., Bouras, A., Dutta, D., Garetti, M., Kiritsis, D.: Product lifecycle management - from its history to its new role. Int. J. Prod. Lifecycle Manag. 4(4), 360–389 (2010)
8. Tukker, A.: Eight types of product-service system: eight ways to sustainability? Experiences from suspronet. Bus. Strateg. Environ. 13(4), 246–260 (2004)
9. Kowalkowski, C., Ulaga, W.: Service Strategy in Action: a Practical Guide for Growing Your B2B Service and Solution Business. Service Strategy Press, Helsinki (2017)
10. Kotler, P., Keller, K.L.: Marketing Management, 13th edn. Prentice Hall, Upper Saddle River (2009)
11. Mathieu, V.: Service strategies within the manufacturing sector: benefits, costs and partnership. Int. J. Serv. Ind. Manag. 12(5), 451–475 (2001)
12. Mathieu, V.: Product services: from a service supporting the product to a service supporting the client. J. Bus. Ind. Mark. 16(1), 39–61 (2001)
13. de Brentani, U.: Success and failure in new industrial services. J. Prod. Innov. Manag. 6(4), 239–258 (1989)
14. Schlesinger, L.A., Heskett, J.L.: The service-driven service company. Harv. Bus. Rev. 69(5), 71–81 (1991)
15. Kindström, D., Kowalkowski, C., Sandberg, E.: Enabling service innovation: a dynamic capabilities approach. J. Bus. Res. 66(8), 1063–1073 (2013)
16. Kowalkowski, C.: Service innovation in industrial contexts. In: Toivonen, M. (ed.) Service Innovation, Translational Systems Sciences, vol. 6. Springer, Tokyo (2016)
17. Osterwalder, A., Pigneur, Y.: Business Model Generation: a Handbook for Visionaries, Game Changers, and Challengers. Wiley, Hoboken (2010)
18. Alstom: Gas Service Solutions. Alstom, Baden (2009)
19. Raddats, C., Kowalkowski, C., Benedettini, O., Burton, J., Gebauer, H.: Servitization: a contemporary thematic review of four major research streams. In: Industrial Marketing Management, vol. 83, pp. 207–223 (2019)
20. Kowalkowski, C., Gebauer, H., Kamp, B., Parry, G.: Servitization and deservitization: overview, concepts, and definitions. In: Industrial Marketing Management, vol. 60, pp. 4–10 (2017)
21. Wurzer, A.J., Nöken, S., Söllner, O.: International Institute for IP Management, 15 July 2020 [Online]. Available: https://www.i3pm.org/MIPLM-Industry-Case-Studies/MIPLM_Industry_Case_Study_Series_Hilti.pdf
22. Bousdekis, A., Lepenioti, K., Apostolou, D., Mentzas, G.: Decision making in predictive maintenance: literature review and research agenda for Industry 4.0. In: 9th IFAC Conference on Manufacturing Modelling, Management and Control MIM 2019, Berlin (2019)
23. Ganz, C.: ABB's service technologies are crucial to ensuring longevity in its products. ABB Rev., 4, 6–11 (2012)
24. Ottewill, J.: What currents tell us about vibrations. ABB Rev., 1, 72–79 (2018)
25. Kloepper, B., Hoffmann, M.W., Ottewill, J.: Stepping up value in AI industrial projects with co-innovation. ABB Rev., 1, 36–41 (2020)
26. Oriol, M., Yu, D.-Y., Orman, M., Tp, N., Svoen, G., Sommer, P., Schlottig, G., Sokolov, A., Stanciulescu, S., Sutton, F.: Smart Sensor for hazardous areas. ABB Rev. 14, 05 (2020)
27. Gitzel, R., Schmitt, B., Kotriwala, A., Amihai, I., Sosale, G., Ottewill, J., Heese, M., Pareschi, D., Subbiah, S.: Transforming condition monitoring of rotating machines. ABB Rev., 2, 58–63 (2019)
28. Porter, M., Heppelmann, J.: Managing the internet of things: how smart, connected products are changing the competitive landscape. Harv. Bus. Rev. 92(11), 64–88 (2014)
29. Ralph, M., Domova, V., Björndal, P., Vartiainen, E., Zoric, G., Windischhofer, R., Ganz, C.: Improving remote marine service. ABB Rev., 4, 44–48 (2016)
30. Hamm, P.: The IIoT evolution in the capital goods sector. Master Thesis, Lucerne University of Applied Sciences and Arts, Horw (2019)
31. Schmitt, J., Malakuti, S., Grüner, S.: Digital Twins in practice: Cloud-based integrated lifecycle management. ATP Magazine (2020)
Christopher Ganz is an independent innovation consultant with more than 30 years of experience in the entire value chain of industrial innovation, including over 25 years at ABB. His innovation focus is on industrial digitalization and its implementation in service business models. As one of the authors of ABB’s digitalization strategy, he was able to build on his work in ABB research and global ABB service management and today supports companies in innovation processes. Christopher holds a doctorate degree (technical sciences) from ETH Zürich.
Shaun West gained a PhD from Imperial College in London and then worked for over 25 years in several businesses related to industrial services. Now at the Lucerne University of Applied Sciences and Arts, he is the professor of product-service system innovation. He focuses his research on supporting industrial firms to develop and deliver new services and service-friendly business models.
27 Infrastructure and Complex Systems Automation
Florin Gheorghe Filip, The Romanian Academy, Bucharest, Romania
Kauko Leiviskä, Control Engineering Laboratory, University of Oulu, Oulu, Finland
Contents
27.1 Background and Scope . . . . . . . . . . . . . . . . . . . . . . . . . . 618
27.2 Control Methods for Large-Scale Complex Systems . . . . 620
27.2.1 Multilevel Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 620
27.2.2 Decentralized Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . 623
27.2.3 Computer Supported Decision-Making . . . . . . . . . . . . . . 623
27.3 Modern Automation Architectures and Essential Enabling Technologies . . . . . . . . . . . . . . . . . . . . . . . . . . . 625
27.3.1 Internet of Things . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 625
27.3.2 Computing Technologies . . . . . . . . . . . . . . . . . . . . . . . . . 626
27.3.3 Tools and Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 628
27.3.4 Networked Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 629
27.4 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 631
27.4.1 Smart Cities as Large-Scale Systems . . . . . . . . . . . . . . . . 631
27.4.2 Environmental Protection . . . . . . . . . . . . . . . . . . . . . . . . . 634
27.4.3 Other Infrastructure Automation Cases . . . . . . . . . . . . . . 635
27.5 Design and Security Issues . . . . . . . . . . . . . . . . . . . . . . . . 635
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 636
Abstract
The set of traditional characteristic features of large-scale complex systems (LSS) included the large number of variables, structure of interconnected subsystems, and other features that complicate the control models, such as nonlinearities, time delays, and uncertainties. The advances in information and communication technologies (ICT) and the modern business models have led to important evolution in the concepts and the corresponding management and control infrastructures of large-scale complex systems. The last three decades have highlighted several
new characteristic features, such as networked structure, enhanced geographical distribution associated with the increased cooperation of subsystems, evolutionary development, higher risk sensitivity, presence of more, possibly conflicting, objectives, and security and environment concerns. This chapter aims to present a balanced review of several traditional well-established methods (such as multilevel and decentralized control) and modern control solutions (such as cooperative and networked control) for LSS, together with the technology development and new application domains, such as the smart city with heating and water distribution systems, and environmental monitoring and protection. Particular attention is paid to automation infrastructures and associated enabling technologies, together with security aspects.

Keywords
Complex systems · Cloud Computing · Decision support · Environment protection · ICT · Interconnected systems · Internet of Things · Networked control · Smart City
There is not yet a universally accepted definition of large-scale complex systems (LSS), even though the LSS movement started more than 50 years ago and many definitions have been proposed [1–6, and others]. In addition, the concept has continuously evolved under the influence of the human factor, technology advances, and changes in business models. The LSS have traditionally been characterized by large numbers of variables, a structure of interconnected subsystems, and other features that complicate the control models, such as nonlinearities, time delays, and uncertainties. Their decomposition into smaller, more manageable subsystems, possibly organized in a hierarchy, has been associated with intense and time-critical information exchange and with the need for efficient decentralization and coordination mechanisms [7].
This chapter is an updated version of Ch. 36, entitled "Large-Scale Complex Systems," of the first edition of the Springer Handbook of Automation.
27.1 Background and Scope
In real life, one can encounter many natural, manmade, and social entities that can be viewed as LSS. From the early years of the LSS movement, the general LSS class has included several specific subclasses, such as steelworks, petrochemical plants, power systems, transportation networks, water systems, and societal organizations. The interest in designing effective control schemes for such systems was primarily motivated by the fact that even small improvements in the LSS operations could lead to large savings and important economic effects [6]. The structure of interconnected subsystems has apparently been the characteristic feature of LSS to be found in most definitions. Several subclasses of interconnections can be noticed (Fig. 27.1). Firstly, there are the resource sharing interconnections described by Findeisen [8], which can be identified at the system level as remarked by Takatsu [9]. Also, at the system level, subsystems may be interconnected through their common objectives, collective constraints, and information flows. Besides the above virtual interconnections, the subsystems may also be physically interconnected through buffer units (tanks), which are meant to attenuate the effects of possible differences in the operation regimes of plants, which feed or drain the stock in the buffer. This type of flexible interconnection can frequently be met in large industrial and related systems, such as refineries, steelworks, and water systems [10]. In some cases, buffering units are not allowed because of technological reasons; for example, electric power cannot be stocked at all and reheated ingots in steelworks must go immediately to rolling mills to be processed. When there are no buffer units, the subsystems are coupled through direct interconnections, at the process level [9]. In the 1990s, integration of systems continued and new paradigms, such as the extended/networked/virtual enterprise, were articulated to reflect real-life developments. In this context, Mårtenson [11] remarked that complex systems became more and more complex. She provided several arguments to support her remark. First, the ever-larger number of interacting subsystems that perform various functions and utilize technologies belonging to different domains, such as mechanics, electronics, and Information and Communication Technologies (ICT). Second, experts from different domains can encounter hard-to-solve communication problems. Also, people in charge of control and maintenance tasks, who have to treat both routine and emergency situations, possess uneven levels of skills and training and might even belong to different cultures. Several years later, Nof et al. [12] stated:
There is the need to create the next generation manufacturing systems with higher levels of flexibility, allowing these systems to respond as a component of enterprise networks in a timely manner to highly dynamic supply-and-demand networked markets.
The ever-growing preoccupation for environment quality as a part of sustainable development has led to circular economy [13] and environmentally conscious manufacturing and product recovery trends [14]. To implement those concepts, a specific form of virtual LSS could be identified. It consists of complex loops made up of forwards activities (design, manufacturing, distribution, and use) and backwards activities (collecting end-of-life products, disassembling, followed by remanufacturing/reusing/repairing/disposal and recycling). To make those activities effective, several multicriteria methods can be used [15, 16]. Besides manufacturing, there has been a growing trend to understand the design, management, and control aspects of complex supersystems or Systems of Systems (SoS). Systems of Systems can be met in space exploration, military, and civil applications, such as computer networks, integrated education systems, healthcare networks, and air transportation systems. There are several definitions of SoS, most of them being articulated in the context of particular applications; for example, Sage and Cuppan [17] stated that an SoS is not a monolithic entity and possesses the majority of the following characteristics: geographic distribution, operational and management independence of its subsystems, emergent behavior, and evolutionary development. All these developments obviously imply ever more complex control and decision problems. A particular case that has received a lot of attention over recent years is the class of largescale critical infrastructures (communication networks, the Internet, highways, water systems, power systems) that serve the society in general [18]. Several SoS may show variable structures under the influence of strong external perturbations (weather conditions, pandemic, economic crisis, and so on) or/and new business models and, consequently, may request a multimodel approach. For example, the same large water distribution system needs different models in normal, floods, and drought situations [19]. The recent advances in information and communication technologies have led to new developments and concepts. Among them, the following ones are of interest: a) increased collaboration of various entities, such as enterprises, humans, computers, networks, and machines, b) the digital transformation and associated concepts, such as Cyber-Physical Systems (CPS) and Cyber-Physical-Social Systems (CPSS), and c) new control schemes, such as Artificial Intelligence (AI)-based solutions and Networked Control Systems (NCS). All new concepts and control schemes are enabled by corresponding effective ICT infrastructures.
Fig. 27.1 Interconnection patterns: (a) resource sharing, (b) direct interconnection, (c) flexible interconnection; SSy subsystem, m control variable, y output variable, w disturbance, u interconnection input, z interconnection output, g(·) stock dynamics function, h(·) interconnection function
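A minimal numerical sketch of the flexible (buffered) interconnection of Fig. 27.1c may help: the buffer stock s evolves according to the stock dynamics g(·), and the downstream subsystem receives u = h(s). The linear outflow law, the Euler integration, and all numerical values below are illustrative assumptions only.

def g(s, z, outflow_coeff=0.3):
    """Stock dynamics: net change = inflow from SSy1 minus outflow towards SSy2."""
    return z - outflow_coeff * s

def h(s, outflow_coeff=0.3):
    """Interconnection input to SSy2, drawn from the buffer stock."""
    return outflow_coeff * s

s, dt = 10.0, 0.1
for k in range(50):
    z = 2.0 if k < 25 else 0.5      # output of SSy1 drops halfway through the run
    s = max(s + dt * g(s, z), 0.0)  # stock stays nonnegative
u = h(s)
print(f"stock={s:.2f}, feed to SSy2={u:.2f}")

The point of the sketch is that the buffer attenuates the effect of a changing operation regime in the feeding plant on the plant that drains the stock, which is exactly why such flexible interconnections are used in refineries, steelworks, and water systems.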
Lee [20] defines CPS as "integrations of computation and physical processes where embedded computers and networks monitor and control the physical processes, usually with feedback loops where physical processes affect computations and vice versa." Yilma et al. [21] define a CPSS as a system composed of two parts: a) a CPS and b) a social system (SS). The components of CPS and SS interact in a virtual and physical environment. See more on CPS later in Sect. 27.3.3.1. Nof et al. [22, 23] emphasized that e-manufacturing is highly dependent on the efficiency of collaborative human–human and human–machine e-work. Emergency situations, such as operation disruptions, can be effectively handled in the context of collaborative e-work [24] (see Ch. 18). In recent years, the social operator 4.0 and the cooperation of social machines within a social factory have been envisaged [25]. Industry 4.0 [26, 27] and other similar concepts, such as Economy 4.0, Agriculture 4.0, Health 4.0, Water 4.0, Education 4.0, Tourism 4.0, and so on, associated with digital transformation [28], can be viewed as new specific subclasses of the general class of LSS.
A plethora of methods have been proposed over the last decades for managing and controlling LSS, such as decomposition [29], multilevel control and optimization [6], decentralized control [30], model reduction [31], singular perturbation-based techniques [32], intelligent control including soft computing [33, 34], multimodel approaches [35, 36], collaborative control [23], network systems and networked control [37–39], and so on. Several common ideas can be found in most approaches proposed so far, such as:
1. Exploiting the particular structure of each system to the extent possible by replacing the original problem with a set of simpler ones, which can be solved with the available tools, and accepting a satisfactory, near-optimal solution (a small numerical illustration is given after this list)
2. Making use of communication infrastructures that enable an intensive exchange of data among the various entities of a complex system, such as controlled objects, sensors, actuators, computers, and people, as a preeminent characteristic feature of modern management and control schemes
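The first idea can be sketched with a small decomposition-and-coordination example: a shared-resource allocation problem is split into two local subproblems that are coordinated by a higher-level unit through an iteratively adjusted price (a Lagrange multiplier). The quadratic local objectives, the capacity value, and the step size are illustrative assumptions, not taken from the cited methods.

def local_decision(a, price):
    """Each subsystem solves its own problem: max a*x - 0.5*x**2 - price*x, so x = max(a - price, 0)."""
    return max(a - price, 0.0)

def coordinate(a1=4.0, a2=6.0, capacity=6.0, step=0.1, iterations=200):
    price = 0.0
    for _ in range(iterations):
        x1 = local_decision(a1, price)
        x2 = local_decision(a2, price)
        # The coordinating unit raises the price if the shared resource is over-used.
        price = max(price + step * (x1 + x2 - capacity), 0.0)
    return x1, x2, price

print(coordinate())   # approaches x1 = 2, x2 = 4, price = 2

The two local problems never see each other's model; only the scalar price is exchanged, which is the essence of replacing one large problem with a set of simpler, coordinated ones.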
The rest of the chapter is organized as follows. Section 27.2 describes several traditional and modern control methods, such as multilevel schemes, decentralized and collaborative control, and decision support systems. Modern automation architectures and several technologies, which enable the control of the LSS, are described in Sect. 27.3. Several particular issues such as large heat and water distribution systems, environment protection, and other infrastructure automation cases are addressed in Sect. 27.4. The design and security aspects are reviewed in Sect. 27.5.
27.2 Control Methods for Large-Scale Complex Systems
Even though many ideas and methods for controlling LSSs have been proposed since the late 1960s, one could admit that the book of Mesarovic et al. published in 1970 [6] had a significant role in laying the theoretical foundations for the LSS domain and triggering the LSS movement. The concepts contained in that book have inspired many academics and practitioners. A series of books, including those of Ho and Mitter [40], Sage [41], Singh and Titli [42], Findeisen et al. [43], Jamshidi [5], Šiljak [30], Brdys and Tatjewski [44], and so on, followed and contributed to the consolidation of the LSS domain of research and paved the way for practical applications. In 1976, the first International Federation of Automatic Control (IFAC) conference on Large-Scale Systems: Theory and Applications was held in Udine, Italy. It was followed by a series of symposia, which were organized by the specialized Technical Committee of IFAC and took place in various cities, such as Toulouse, Warsaw, Zurich, Berlin, Beijing, London, Patras, Bucharest, Osaka, Gdansk, Lille, Shanghai, and Delft.
27.2.1 Multilevel Methods
The central idea of the Hierarchical Multilevel Systems (HMS) approach to LSS consists of replacing the original system (and the associated control problem) with a multilevel structure of smaller subsystems (and associated less complicated subproblems). The subproblems at the bottom of the hierarchy are defined by the interventions made by the higher-level subproblems, which in turn utilize the feedback information they receive from the solutions of the lower-level subproblems. There are three main subclasses of hierarchies, which can be obtained in accordance with the complexity of description, control task, and organization [6].
Levels of Description
The first step in analyzing an LSS and designing the corresponding control scheme consists of model building.
As Steward [45] points out, practical experience witnessed that there is a paradoxical law of systems. If the description of the plant is too complicated, then the designer is tempted to consider only a part of the system or a limited set of aspects, which characterize its behavior. In this case, it is likely that the very ignored parts and aspects have a crucial importance. Consequently, it emerges that more aspects should be considered, but this may lead to a problem which is too complex to be solved in due time. To solve the conflict between the necessary simplicity (to allow for the usage of existing methods and tools with a reasonable consumption of time and other computing resources) and the acceptable precision (to avoid obtaining wrong or unreliable results), the LSS can be represented by a family of models. These models reflect the behavior of the LSS as viewed from various perspectives, called [6] levels of description or strata, or levels of influence [41, 46]. The description levels are governed by independent laws and principles and use different sets of descriptive variables. The lower the level is, the more detailed the description of a certain entity is. For example, the same manufacturing system can be described from the top stratum in terms of economic and financial models, and, at the same time, by control variables (states, controls, and disturbances), as viewed from the middle stratum, or by physical and chemical variables, as viewed from the bottom description level. In the context of digital transformation of the present day, a new decomposition axis could be added to contain the physical level and the cyber one. Smart city is a recent concept that is also associated with digital transformation and may be viewed as an LSS that can be described by using a 3-dimension (technology, human, and institutions) approach [47].
Levels of Control
In order to act in due time even in emergency situations, when the available data are uncertain and the decision consequences cannot be fully explored and evaluated, a hierarchy of specialized control functions can be an effective solution, as shown by Eckman and Lefkowitz [48]. Several examples of sets of levels of control are:
• Regulation, optimization, and organization [49]
• Stabilization, dynamic coordination, static optimization, and dynamic optimization [8]
• Knowledge-based Enterprise Resource Planning-ERP (the business level), the High-Level Controller (the general performance level), and the Low-Level Controller (the operative level) [50]
• Device level consisting of field devices such as sensors and actuators, and machines and process elements such as PLC (Programmable Logic Controllers). Control level involving networking machines, work cells, and work areas where Supervisory Control and Data Acquisition
(SCADA) are usually implemented. Informational level that is meant to gather the information from the lower level and deals with large volumes of data that are neither in constant use nor time critical [51].
The levels of control, also called layers by Mesarovic et al. [6], can be the result of a time-scale decomposition. They can be defined based on time horizons, or the frequency of disturbances, which may show up in process variables, operation conditions, parameters, and structure of the plant [46].
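The layer idea can be sketched as two loops running at different time scales: a slow optimization layer recomputes the setpoint from slowly varying (e.g., economic) data, while a fast regulation layer tracks it at every step. The first-order plant model, the PI tuning, and the toy price rule below are illustrative assumptions, not taken from the cited schemes.

def optimize_setpoint(price):
    # "Static optimization" layer: choose an operating point from slowly varying data.
    return 8.0 if price < 50 else 5.0

def simulate(prices, dt=0.1):
    y, integral = 0.0, 0.0
    setpoint = optimize_setpoint(prices[0])
    kp, ki = 2.0, 0.5
    for k, price in enumerate(prices):
        if k % 50 == 0:                   # slow layer acts only occasionally
            setpoint = optimize_setpoint(price)
        error = setpoint - y              # fast regulation layer acts at every step
        integral += error * dt
        u = kp * error + ki * integral
        y += dt * (-y + u)                # first-order plant: y' = -y + u
    return y, setpoint

y, sp = simulate([40] * 100 + [80] * 100)
print(f"final output {y:.2f} tracking setpoint {sp}")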
Levels of Organization
Brosilow et al. [52] proposed hierarchies based on the complexity of organization in the mid-1960s, and Mesarovic et al. [6] formalized these in detail. The hierarchy with several levels of organization, also called echelons by Mesarovic et al. [6], has been, for many years, a natural solution for management of large-scale military, industrial, and social systems, which are made up of several interconnected subsystems, when a centralized scheme is neither technically possible nor economically acceptable. The central idea of the multiechelon hierarchy is to place the control/decision units, which might have different objectives and information bases, on several levels of a management and control pyramid. While the multilayer systems implement the vertical division of the control effort, the multiechelon systems also include a horizontal division of work (Fig. 27.2). Thus, on the n-th organization level, the i-th control unit, CUn.i, has a limited autonomy. It sends coordination signals downwards to a well-defined subset of control units, which are placed at level n − 1, and it receives coordination signals from the corresponding unit placed on level n + 1. The unit on the top of the pyramid is called the supremal coordinator and the units to be found at the bottom level are called infimal units [6].

Towards More Collaborative Schemes
The traditional multilevel systems proposed in the 1970s to be used for the management and control of large-scale systems can be viewed as pure hierarchies. They are characterized by the circulation of feedback and intervention signals only along the vertical axis, up and down, respectively, in accordance with traditional concepts of the command-and-control systems. They constituted a theoretical basis for various industrial distributed control systems, which possessed at the highest level a powerful minicomputer. Also, the multilayer and multiechelon hierarchies served in the 1980s as a conceptual reference model for the efforts to design computer-integrated manufacturing (CIM) systems [53]. Several new schemes have been proposed over the last decades to overcome the identified drawbacks and limits of the practical management and control systems designed in
accordance with the concepts of pure hierarchies, such as inflexibility, difficult maintenance, and limited robustness to major disturbances. The more recent solutions exhibit ever more increased communication and cooperation capabilities of the management and control units. This trend has been supported by the advances in communication technology; for example, already in 1977, Binder [35] introduced the concept of decentralized-coordinated control with cooperation, which allowed limited communication among the control unit placed at the same level. Several years later, Hatvany [54] proposed the heterarchical organization concept, which allowed for exchange of information among the decision and control units placed at various levels of the hierarchy. Heterarchical control is characterized by the full autonomy of local units associated with a certain limited cooperation. Since there are no master/slave relationships among the decision and control units, increased reactivity and improved robustness to local disturbances are expected. In order to reduce, to the extent possible, the myopic effects, semiheterarchical schemes were proposed by Rey et al. [55]. The term holon was first proposed by Koestler in 1967 [56] with a view to describing a general organization scheme able to explain the evolution and life of biological and social systems. A holon cooperates with other holons to build up a larger structure (or to solve a complex problem) and, at the same time, it works toward attaining its own objectives and treats the various situations it faces without waiting for any instructions from the entities placed at higher levels. A holarchy is a hierarchy made up of holons. It is characterized by several features as follows: (1) It tends to continuously grow up by attracting new holons. (2) The structure of the holarchy may constantly change. (3) There are various patterns of interactions among holons, such as communication messages, negotiations, and even aggressions. (4) A holon may belong to more than one holarchy if it observes their operation rules. (5) Some holarchies may work as pure hierarchies and others may behave as heterarchical organizations [57]. It can be noticed [58] that the pure hierarchies and heterarchies are particular cases of holarchies (Fig. 27.3). Management and control structures based on holarchy concepts were proposed in [59] for implementation in complex discrete-part manufacturing systems. To increase the autonomy of the decision and control units and their cooperation, the multiagent technology is recommended by Parunak [60]. An intelligent software agent encapsulates its code and data, is able to act in a proactive way, and cooperates with other agents to achieve a common goal [61]. Several design principles of the Collaborative Control Theory were proposed by Nof [62] in the context of e-activities. The original set of principles included: a) CRP (Cooperation Requirement Planning), b) DPIEM (Distributed Planning of Integrated Execution Method), c) PCR (the Principle of Conflict Resolution in collaborative
Fig. 27.2 A simple two-level multilevel control system (CU control unit, SSy controlled subsystem, H interconnection function, m control variable, u input interconnection variable, z output interconnection variable, y output variable, w disturbance)
Fig. 27.3 Holarchies: an object-oriented description
e-work), d) PCFT (the Principle of Collaborative Fault Tolerance), e) JLR (the Join/Leave/Remain), and f) LOCC (Lines of Collaboration and Command). Monostori et al. [63] present various advantages of the cooperative control in the context of production and logistic applications, such as a) openness (it is easier to build and change the control system), b) reliability, c) higher performance due to distributed execution of tasks, d) scalability and incremental design, e) flexibility allowing heterogeneity and redesign, and so on. The same authors warn about the disadvantages of cooperative control, such as a) communication overhead, b) potential threats for data security, c) decision “myopia” caused by focus on local optima, d) chaotic behavior including “butterfly effects” and bottlenecks, and so on.
27.2.2 Decentralized Control Feedback control of LSS poses the standard control problem: to find a controller for a given system with control input and control output ensuring closed-loop systems stability and reach a suitable input–output behavior. The fundamental difference between small and large systems is usually described by a pragmatic view: A system is large scale and complex if it is conceptually or computationally attractive to decompose it into several less complex subsystems, which can be solved easier than the original system. Then, the subsystem solutions can be combined in some manner to obtain a satisfactory result for the overall system. Decentralized control has consistently been a control of choice for LSS [64, 65]. The prominent reason for adopting this approach is its capability to effectively solve the problems of dimensionality, uncertainty, information structure constraints, and time delays. It also attenuates the problems that communication lines may cause, such as latency and signal corruption by noise. While in the hierarchical control schemes, as shown above, the control units are coordinated through intervention signals and may, sometimes, be allowed to exchange a few cooperation messages, in decentralized control, the units are completely independent or at least almost independent. This means that the information flow network among the control units can be divided into completely independent partitions. The units that belong to different subnetworks are separate from each other. Only restricted communication at certain time moments or intervals or limited to small part of information among the units is allowed. The foundation of decentralized control can be traced back in the paper of Wang and Davison [66]. The basic decentralized control schemes are as follows [67]: • Multichannel system. The global system is considered as one whole. The control inputs and the control outputs
623
operate only locally. This means that each channel has at its disposal only local information about the system and influences only a local part of the system. • Interconnected systems. The overall system is decomposed according to a selected criterion. Then, local controllers are designed for each subsystem. The subsystems can be strongly coupled or weakly interconnected. While in the first case, the local controllers should possess a minimal approximate model of the rest of the system, in the second, the coupling links could be neglected.
27.2.3 Computer Supported Decision-Making The Role of Human in the Control System One traditionally speaks about automation when a computer or another device executes certain functions that the human agent would otherwise have to perform. There have been several traditional approaches in allocating the tasks to human and automation devices [68]: • Comparison allocation based on MABA-MABA (Men Are Better At-Machines Are Better At) list of Fitts [69]. It consists of assigning each function to the most capable agent (either human or machine). Fitts, himself, suggested that great caution be exercised in assuming that human can successfully monitor complex automatic machines and “take over” if the machine breaks down. • Leftover allocation: allocating to automation equipment every function that can be automated, the rest remain to be performed by people. • Economic design: the allocation scheme ensures economic efficiency. A more flexible and dynamic approach is proposed in [70] based on several criteria, for example, authority, ability, responsibility, and control. A possible solution to many LSS control problems is the use of artificial intelligence-based tools. However, in practical applications, due to strange combinations of external influences and circumstances, rare or new situations may show up that were not taken into consideration at time of design. Since several decades ago, there have been cautious opinions concerning the replacement of the human by automation devices. For example, Bibby et al. [71] stated that “even highly automated systems need human beings for supervision, adjustment, maintenance and improvement,” and Bainbridge [72] described the ironies of automation. In 1990, Martin et al. remarked that [73]: Although AI and expert systems were successful in solving problems that resisted to classical numerical methods, their role remains confined to support functions, whereas the belief that evaluation by man of the computerized solutions may become superfluous is a very dangerous error.
27
624
Based on this observation, Martin et al. [73] recommended appropriate automation, which integrates technical, human, organizational, economical, and cultural factors. Schneiderman [74] has recently expressed a similar view. Over the years, several solutions have been proposed such as human centered automation, balanced automation, collaborative automation. They have one feature in common: The human should have a place and role in the control system.
Decision Support Systems The Decision Support System (DSS) concept appeared in the early 1970s [75]. As with any new term, the significance of DSS was, at the beginning, rather vague and controversial. Since then, many research and development activities and practical applications have witnessed that the DSS concept meets a real need and there is a significant market for such information systems even in the context of real-time settings in the industrial milieu [76]. See more on DSS in Ch. 66. The decision is the result of human conscious activities aiming at choosing a course of action for attaining a certain objective (or set of objectives) and normally implies allocating the necessary resources. It is the result of processing information and knowledge performed by an empowered person (or a decision unit composed of several persons) who has to make the choice and is accountable for the quality of the solution adopted to solve a particular problem or situation [77]. If, at a certain time moment, a decision problem cannot be entirely clarified and all possible ways of action cannot be fully identified and evaluated before a choice is made, then the problem is said to be unstructured or semi-structured. The DSS forms a specific subclass of anthropocentric and evolving information systems which are designed to implement the functions of a human support system (composed of consultants, collaborating experts, and so on) that would otherwise be necessary to help the decision-maker to overcome his/her limits and constraints that he/she may encounter when trying to solve semi-structured, complex, and complicated decision problems that matter [78]. Most of the developments in the DSS domain have initially addressed business applications not involving any realtime control. However, even in the early 1980s, DSS were reported to be used in manufacturing control [79, 80]. The usage of DSS in disaster management and control is presented in [81]. Several aspects characterize real-time decision-making processes (RTDMP) for control applications: • They involve continuous monitoring of a dynamic environment. • They are short time horizon-oriented and are carried out on a repetitive basis. • They normally occur under time pressure. • Long-term effects are difficult to predict [82].
F. G. Filip and K. Leiviskä
It is quite unlikely for an econological (economically logic) approach, involving optimization, be technically possible for genuine RTDMPs. Satisficing approaches [83], which reduce the search space at the expense of the decision quality, or fully automated decision-making systems, if taken separately, cannot be accepted either, but for some exceptions. At the same time, one can notice that genuine RT DMP are typical for emergency situations [84]. In many practical problems concerning the management and control of LSS, decisions are made by a group of persons instead of an individual [85]. Consequently, a group (or multiparticipant) DSS needs an important communication facility, possibly ensured by ICT platforms [86]. Nof [87] applies the collaborative control theory to DSS design and the case of very large number of participants is treated in [88]. The generic framework of a DSS proposed by Bonczek et al. [89] and subsequently refined by Holsapple and Whinston [90] is quite general and can conceptually accommodate the most recent technologies. It is based on three essential components to perform specific tasks, such as a) language subsystem (to enable the communication among user, system, and various data feeders and actuators), b) knowledge subsystem (to store and maintain the data, models, and results) and c) problem processing subsystem. Several decades ago, Simon [91] stated that it is worth considering combining mathematical models with artificial intelligence-based tools. In this context, there is a significant trend towards combining computerized numerical models with software that emulates the human reasoning with a view to build advanced mixed-knowledge DSS [92–95]. Possible work division between human and computer is also proposed in [77, 96].
Digital Cognitive Systems Traditional DSS have been assumed useful ICT tools when approaching semi-structured problems. A new generation of ICT products, namely, [digital] cognitive systems, is expected to be of effective use in the case of the problems that do not possess structure at all. When proposing the Stanford Research Institute (SRI), Engelbart [97] defined the concept of augmenting human intellect as increasing the capability of a man to approach a complex problem situation, to gain comprehension to suit his particular needs, and to derive solutions to problems.
The topics of [digital] cognitive system was addressed by Hollnagel and Woods [98] and Kelly [99] who defined cognitive computing as systems that a) learn at scale, b) reason with purpose, and c) interact with humans naturally. Rather than being explicitly programmed, they learn and reason from their interactions with us and from their experiences with their environment. [ . . . ] a. They can make sense of the 80 percent of the world’s data that
27
Infrastructure and Complex Systems Automation computer scientists call “unstructured.” This enables them to keep pace with the volume, complexity and unpredictability of information and systems in the modern world.
Several available digital cognitive systems, such as Apple Siri, IBM Watson, Google’s Now, Brain, AlphaGo, Home, Assistant, Amazon’s Alexa, Microsoft’s Adam, Braina & Cortana, and so on, are analyzed by Siddike et al. [100]. In a paper about the prospects for automating intelligence versus augmenting human intelligence, Rouse and Spohrer [101] noticed that, at present, it is necessary to device a new perspective on the automation-augmentation continuum to synergize the human and digital cognitive systems and to create the best cognitive team or cognitive organization to address the problems at hand. The repartition of functions in the mixed teams designed to solve decision and control problems has evolved over time under the influence of technology developments and ever more enriched knowledge of the human/group of humans (the natural cognitive part of the team). Siddike et al. [100] view digital cognitive systems as a new wave of decision support tools meant to enable humans to augment their capabilities and expertise. The above authors forecast that the digital cognitive systems can evolve from information tools to cognitive assistants, collaborators, coaches, and even mediators. However, at the end of this section, it is worth mentioning what was stated in a recent document of the High-Level Expert Group on Artificial Intelligence of the European Commission: AI systems should empower human beings, allowing them to make informed decisions and fostering their fundamental rights. At the same time, proper oversight mechanisms need to be ensured, which can be achieved through a) human-in-the-loop b) human-on-the-loop, and c) human-in-command approaches. [102]
27.3
Modern Automation Architectures and Essential Enabling Technologies
The progress of several technologies has had a clear impact on the development of automation during the last 10–20 years: 1. Information and communications technology has changed our view in automation from the lack of data to Big Data concerning with processes and phenomena that we are controlling. This is now visible in terms like Industry 4.0 and Internet of Things (IoT). This last term indicates the origin of the technology that is now also being utilized in automation, the Internet. 2. The increasing amount of data has led to the need for new computing technologies, and terms like Cloud Computing are being used quite fluently, maybe without thinking too much about what they really represent.
625
3. The third factor is the systems technology, that is, the methods that are needed for running the new technological tools. How can one analyze the huge amount of data coming from different sources? How to model the largescale and complex systems that have to be monitored and controlled? This has brought two more terms along: Cyber-Physical Systems (CPS) and Big Data Analytics (BDA). 4. The fourth factor that has changed our vision on automation is the change in the application fields. Automation is now used in several infrastructures of the society, in the housing sector, in power generation and distribution, in traffic, etc. This has made it necessary to consider several new features of automation compared to the industrial applications: What is the division between the real-time and not-so-real-time functions? What is the role of (smart) measurements? How much user intervention is needed (allowed)? This technological development with increasing data communications, a need for processing huge amounts of data, explaining both fast and slow dynamics, and visualizing the results to the operators at different levels, has given a new life to an old paradigm, Artificial Intelligence (AI). It is present in different forms, in Intelligent Systems, Machine Learning, (ML), and so on. A quick look at the published literature reveals that it is now more visible than ever before. See also Ch. 16 on automation architectures.
27.3.1 Internet of Things There are several definitions for the term Internet of Things (IoT), but the following one from IoT Agenda [103] serves us well: The internet of things, or IOT, is a system of interrelated computing devices, mechanical and digital machines, objects, animals or people that are provided with unique identifiers (UIDs) and the ability to transfer data over a network without requiring humanto-human or human-to-computer interaction.
Figure 27.4 presents the main operation cycle of the IoT system. The system consists of web-enabled devices that collect and transfer information to active nodes in the network that process and analyze it. In automation, IoT helps in monitoring and controlling larger entities, but, of course, putting a large number of digital systems together makes the overall system vulnerable to inside and outside threats. For example, a failure in one of the system’s components can spread and harm the whole system or a malevolent intrusion from outside can stop the operation of some important machinery. This has increased the importance of safety and security factors in today’s digital systems. In industrial automation, the terms like Industrial Internet of Things (IIoT) or Industrial Internet are frequently used.
27
626
F. G. Filip and K. Leiviskä
Internet environment
Sensors Systems
Collect
Communicate
Use (act)
Fig. 27.4 The main functions of IoT
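The collect–communicate–use cycle of Fig. 27.4 can be pictured with a short sketch. The device below reads a sensor value, transfers it to a back-end node, and acts on the returned command; the function names (read_sensor, send_to_backend, apply_command) are hypothetical placeholders rather than a real IoT API.

import time

def read_sensor() -> float:
    return 21.7                          # collect, e.g., a temperature value

def send_to_backend(value: float) -> dict:
    # In a real system this would publish over MQTT/HTTP to a processing node;
    # here the node's analysis and returned command are only emulated.
    return {"command": "open_vent"} if value > 21.0 else {"command": "idle"}

def apply_command(command: str) -> None:
    print(f"actuator executes: {command}")  # use (act)

def iot_cycle(period_s: float = 0.1, cycles: int = 3) -> None:
    for _ in range(cycles):
        value = read_sensor()            # collect
        reply = send_to_backend(value)   # communicate
        apply_command(reply["command"])  # use (act)
        time.sleep(period_s)

iot_cycle()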
References [104, 105] present several definitions of IIoT that can be combined as follows: Industrial Internet of Things is a system consisting of “Things” that: enable real-time, intelligent, and autonomous access, collection, analysis, communications, and exchange of process, product and/or service information, within the industrial environment, so as to optimize the overall production value. [105]
The references cited above list several “Things” like networked smart systems, cyber-physical systems, together with cloud or edge computing platforms. The related term Internet of Services is explained below in Sect. 27.3.2. Another term widely used in this connection is Industry 4.0 (or Industrie 4.0). According to [106], Industry 4.0 lays on nine main pillars: a) Big Data, b) autonomous robots, c) simulation, d) additive manufacturing, e) IoT, f) cloud computing, g) augmented reality, h) horizontal and vertical integration, and i) cybersecurity.
27.3.2 Computing Technologies The Internet has offered several new possibilities for distributing computations and data storage. This has led to reforming the ICT business from the hardware and software orientation to service orientation. The following sections introduce some of these relatively new technologies.
Cloud Computing Cloud computing refers to the application services offered via Internet and the cloud infrastructure. It turned computing resources into the utility that the service providers can offer to clients, inside an enterprise or as commercial products. According to the National Institute of Standards and Technology (NIST), the following five characteristic features define Cloud computing [107]: 1. Cloud resources (storage, applications, platforms, and infrastructure) are available ubiquitously on demand with the minimal human interaction from the service provider.
2. Resources are available automatically through usual, convenient interfaces (possibly mobile) via the network (the Internet). 3. Resources (both physical and virtual) are open to multiple customers and they are assigned dynamically according to the users’ needs (pooling). 4. Resources evolve with the application, that is, they scale up and down automatically according to users’ needs (elasticity). 5. Resources are measurable and optimizable. Providers and customers can monitor, control, and follow their use and activities. A cloud infrastructure includes both hardware and software components. Cloud computing follows a ServiceOriented Architecture (SOA) model. According to NIST, it provides three main services: a) software for applications (Software as a Service, SaaS), b) platform for developers (PaaS), and c) infrastructure for high-level applications (IaaS). These services may form hierarchical layers, and they are offered separately or in different combinations.
Edge Computing Hamilton [108] states Like the metaphorical cloud and the Internet of Things, the edge is a buzzword meaning everything and nothing
Edge computing dates back to distributed computing of the 1960s, but the Internet has given the new meaning for it. Edge computing refers to data processing on the edge of the cloud, closer to its point of origin. It decreases the delay, latency, and the bandwidth and the overhead of the data centers. Data processing takes place close to its origin and fewer data (mainly for storage) goes over the network to the cloud. This may mean an increased data security, too. Several reasons have led to the need for edge computing [111]: Enormous amount of data has turned data transfer to cloud into a system bottleneck causing delays in the response. There will be more and more data producers in the system,
27
Infrastructure and Complex Systems Automation
and it becomes impossible for cloud computing to handle all the requests coming from several IoT systems. This is, of course, an essential feature in process automation, where a real-time response is a requirement. This calls for computing closer to the data source. This becomes even more important when the level of automation in traffic and vehicles (and other safety-critical applications) is increasing and the personal safety is an important feature. Shi et al. [111] also list several potential applications for Edge computing: 1. Moving the workload from Cloud computing in customer applications, for example, in on-line shopping services 2. Video analytics, for example, surveillance applications 3. In large IoT applications that deal with different data and different dynamics, for example, smart home automation and smart cities. 4. Applications that require collaboration of different parties, for example, in healthcare, pandemic follow-up, etc., with strict privacy requirements Fog computing is another concept that extends the cloud closer to the “Things” that use IoT data; it makes computing at the edge possible. Computing devices, fog nodes, can be any device with memory and network connection. Edge computing focuses more on the “Things,” while Fog computing emphasizes the role of the infrastructure [111]. Fog is rather a standard and edge a concept [108]. See more on Fog computing and LSS in Sect. 27.4.3. Internet of Services (IoS) is a related term and standard [109, 110]. Following the service orientation of cloud computing, it emphasizes that the data gleaned from the IoT is useful through an Internet of services, sometimes called Apps, which serve as the infrastructure of IoT data-based applications and addedvalue. In automation literature, it is usual to present the relationships between control operations in hierarchies. Figure 27.5 shows that Cloud computing and Edge computing locate on the higher levels of automation hierarchy. The division is case-dependent, but the applications that are more latencycritical belong to the lower levels of the hierarchy.
Mobile Computing
Mobile computing is a generic term that refers to the possibility of accessing computing systems or data storage anywhere and at any time. It can handle data, voice, and video over a conventional Local Area Network (LAN) or by using Wi-Fi or other wireless technology. Mobile devices cover a wide range from smartphones to mobile computers [85]. Depending on the location of the computing resources, one can speak about mobile cloud computing [112] and mobile edge computing [113].
Fig. 27.5 Hierarchical relationships in the control hierarchy, from the top down: Cloud computing (data storage), Edge computing (pre-processing), Internet gateways (SCADA), and sensors and actuators at the field level. SCADA means Supervisory Control And Data Acquisition. (Modified from [105])
In both cases, the actual computation takes place outside the mobile device. This increases battery lifetime and data storage capacity and improves reliability and flexibility in providing and sharing computing resources. Computing at the edge provides some further advantages with regard to the real-time properties of the system, such as a) faster response (lower propagation, communication, and computation latency) and b) possibilities for context-aware and augmented reality computations. Moving from public cloud servers to edge servers can also improve the privacy and security of the system. At the same time, the distribution of computations over networks increases the vulnerability of the system to different threats. At least four basic threats can be recognized [114]:
1. Application-based threats that come along with downloadable applications. Malicious software can be malware or spyware intended to harm the user's system or to steal and use the data. The threats can also come in the form of vulnerable applications that make it possible for an intruder to break into the user's system, and they can also endanger privacy. Repackaging of applications is one commonly used technique.
2. Web-based threats stem from the continuous use of Internet services. Malicious software often downloads automatically when a web page is opened or even when the browser is accessed.
3. Network threats originate from cellular networks or local wireless networks.
4. There are also physical threats in the form of lost or stolen devices.
In industrial automation, mobile computing has several application areas, such as remote monitoring and control, mobile data acquisition from moving machines and vehicles (for example, in mining applications) and for environmental monitoring purposes (air and water quality), and advisory systems for the maintenance of process or automation devices.
27.3.3 Tools and Methods

Cyber-Physical Systems
A Cyber-Physical System (CPS) in an industrial context is a combination of computer algorithms and a physical system with sensors and actuators for monitoring and controlling industrial production and products. In this basic form, it could cover all applications of industrial automation since the mid-1970s, that is, since the beginning of the era of digital automation. The modern definition adds several features, such as a) the integration of physical and software components, embedded systems, and networked systems, b) scalability in space and time, and c) multiple possibilities for user interaction, together with wireless and mobile systems [115]. IoT makes up a perfect framework for describing the CPS architecture [116]. CPS "theory" is interdisciplinary and includes cybernetics, mechatronics, control and process engineering, and design. The whole selection of signal processing, communication, and control methods is available. In the networked environment, cyber security is a very important aspect of CPS. In addition to industrial automation, CPS is used in connection with mobile robotics, smart grids, and autonomous vehicles. Several aspects that make the design of CPS challenging [117] are:
• Cyber-Physical Systems are heterogeneous systems consisting of smart machines, inventory systems, and manufacturing plants that integrate and interact at several levels.
• A high level of automation is typical for CPS and means that the design of the "cyber" part requires using different software environments for the design of data models, monitoring and control functions, quality control, registration of production, decision making at different organizational levels, and so on.
• Networking at multiple spatial and temporal levels requires design tools for system architectures.
• Modern systems also require capabilities for reconfiguring (self-organizing) in order to survive changes in production routes, new products and raw materials, and machine failures.
Lee et al. [116] proposed a five-level CPS architecture (5C). It is a functional hierarchy with the following levels:
• The lowest level – Connection – is the smart communication level, which collects the data from the different systems mentioned above and transmits and stores it for further use at the upper levels of the hierarchy.
• The second level – Conversion – is the data-to-information conversion level, which upgrades the data.
• The third level – Cyber – is the computing level, which employs models and machine learning tools to produce information describing the status and performance of machines and processes.
• The fourth level – Cognition – uses the available information for monitoring and decision making, be it automatic or human-based.
• The uppermost level – Configuration – creates the feedback from the cyber space to the physical system.
This hierarchy is basically meant for manufacturing processes, and the three lowermost levels usually include the process hierarchy: sensors – components – machines – machine groups. Distributed architectures depend heavily on the application area, but, in general, this architecture includes five layers according to Fig. 27.6. This is, once again, a functional description, which utilizes the available hardware and software resources in an IIoT system. It includes two multilevel layers: one for the process and another for automation. The Digital Twin is an object of vivid discussion nowadays. Its origin goes back to the beginning of the 2000s. It can be defined as a representation of a physical object in a virtual environment that can generate information and analysis results almost in real time. Integrating it with CPS could transform manufacturing systems from knowledge-based intelligent ones into data-driven and knowledge-enabled smart manufacturing entities [118]. See also Ch. 17 on CPS.
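The 5C hierarchy listed above can be read as a processing chain from raw data to feedback. The toy Python sketch below is illustrative only; the function bodies and stub classes are hypothetical placeholders and do not reproduce the architecture of Lee et al. [116], they merely make the flow through the five levels explicit.

```python
def connection(sensors):
    """Level 1 - Connection: collect raw data from machines and sensors."""
    return [s.read() for s in sensors]

def conversion(raw):
    """Level 2 - Conversion: turn raw data into information (e.g., features)."""
    return {"mean_load": sum(raw) / len(raw), "peak_load": max(raw)}

def cyber(features, model):
    """Level 3 - Cyber: apply models/machine learning to assess the status."""
    return model(features)

def cognition(status):
    """Level 4 - Cognition: support (automatic or human) decision making."""
    return "reduce_feed_rate" if status == "overloaded" else "keep_setpoints"

def configuration(decision, actuators):
    """Level 5 - Configuration: feed the decision back to the physical system."""
    for a in actuators:
        a.apply(decision)

class _Sensor:              # hypothetical stubs so the sketch runs end to end
    def __init__(self, v): self.v = v
    def read(self): return self.v

class _Actuator:
    def apply(self, decision): print("actuator received:", decision)

features = conversion(connection([_Sensor(0.7), _Sensor(0.9)]))
status = cyber(features, model=lambda f: "overloaded" if f["peak_load"] > 0.8 else "normal")
configuration(cognition(status), [_Actuator()])
```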
Fig. 27.6 General description of the distributed architecture for CPS in an IIoT environment. The five layers are: a distributed decision support layer (monitoring, control, reporting); distributed IIoT tools and services (SaaS, PaaS, IaaS); data models and storage (cloud/edge computing); a communication layer (public/private networks); and a distributed physical layer (IoT devices, unit processes, production lines, plants)
Big Data Analytics
Big Data is associated in people's minds with massive volumes of data, but describing it more closely requires several features [78, 119]. Three V-concepts were initially used [119, 120]:
1. Volume refers to the amount of data available for the application. In Big Data, one is dealing with massive data sets that are impossible to handle with conventional software tools.
2. Velocity means high data generation rates. It is an important attribute for automation because of the real-time requirement, and it calls for efficient data flow applications, especially in wireless sensor networks, to decrease latency and reduce massive data communications.
3. Variety means the richness of data representation: numeric, textual, audio, and video. Data usually changes dynamically, and it is incomplete, noisy, and corrupted. Very often data is unstructured and unorganized, sometimes semi-structured or even structured. Extracting useful information from unstructured video data dynamically in real time is the most difficult task.
Several other V-terms have been listed in the literature: Veracity, Validity, Value, Variability, Venue, Vocabulary, and Vagueness [119]. Big Data also brings certain new viewpoints to the scene: data quality (repeatability, accuracy, uncertainty), privacy, security, and storage. There are several challenges in Big Data [78]:
1. Processing data with the available Data Mining tools requires structured data.
2. Complexity and uncertainty make systematic modeling a challenging task.
3. The heterogeneity of data, knowledge, and decision making requires special care.
4. Engineering decision-making uses cross-industry standards, for example, for Data Mining.
There are several ways to define Big Data Analytics. IBM's definition is [121]: "Big data analytics is the use of advanced analytic techniques against very large, diverse data sets that include structured, semistructured and unstructured data, from different sources, and in different sizes from terabytes to zettabytes."
The advanced analytic techniques include text and data mining methods, intelligent machine learning algorithms, and predictive analytics, together with statistical and probabilistic methods. In order to overcome the bottlenecks in storage and computation capacity, data mining from industrial data can be carried out in the cloud environment. Industry 4.0 offers a framework for building industrial platforms for data analytics.
Big data has also changed process monitoring from univariate to high-dimensional, from homogeneous data to heterogeneous datasets, from static to dynamic and nonstationary models, from monitoring the mean values to monitoring the variability and correlations, and from structured to unstructured monitoring [122].
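As a hedged illustration of the shift from univariate to high-dimensional monitoring, the sketch below computes a Hotelling-type T² statistic over several correlated process variables. This is one standard multivariate technique chosen here only for illustration; it is not prescribed by [122], and the synthetic data and alarm logic are invented.

```python
import numpy as np

def fit_t2_monitor(reference: np.ndarray):
    """Fit mean and covariance on in-control reference data (rows = samples)."""
    mu = reference.mean(axis=0)
    cov = np.cov(reference, rowvar=False)
    cov_inv = np.linalg.pinv(cov)           # pseudo-inverse guards against singularity
    return mu, cov_inv

def t2_statistic(x: np.ndarray, mu: np.ndarray, cov_inv: np.ndarray) -> float:
    """Hotelling-type T^2: joint deviation of one new sample from the reference."""
    d = x - mu
    return float(d @ cov_inv @ d)

# Usage with synthetic data: three correlated variables, one drifting sample.
rng = np.random.default_rng(0)
reference = rng.multivariate_normal([0, 0, 0],
                                    [[1.0, 0.8, 0.3],
                                     [0.8, 1.0, 0.4],
                                     [0.3, 0.4, 1.0]], size=500)
mu, cov_inv = fit_t2_monitor(reference)
print(t2_statistic(np.array([0.1, -0.2, 0.0]), mu, cov_inv))   # small: in control
print(t2_statistic(np.array([2.5, -2.0, 1.5]), mu, cov_inv))   # large: raise an alarm
```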
27.3.4 Networked Control

The networked interconnection pattern has become, over the years, one of the most characteristic features of modern LSS. Network theory was used in [123] for the study of the controllability and dynamics of network-type complex systems. At the same time, the advances in computer-based communication networks have stimulated the combination of control methods with information and communication technologies [124]. At present, networked control is a central constituent of large-scale infrastructure automation solutions.
Networked Control Systems
The various entities of a control loop that are connected via communication links have become ever more numerous, especially in the case of LSS, which are composed of several, possibly remotely placed, subsystems. Over the years, the links that connect the controllers, sensors, HCI (human-computer interfaces), and actuators have evolved. Traditionally, control methods assumed ideal channels with delay-free communication links connecting the entities of the control loops. The initial point-to-point centralized solution has proven unsuitable in view of the ever-increasing concerns with easy diagnosis and maintenance, decentralization, modularity, and low cost. The next step in the evolution of control architectures was the usage of a common bus to connect the entities of the control loop [51]. The solution based on a common bus has led, in time, to new problems, such as signal losses and transmission time delays. The delays are caused by both the communication medium itself and the computation time required for performing various operations, such as signal coding and communication processing. In the setting of an LSS composed of several subsystems, centralized control may be affected by a great number of problems caused by the transfer of signals between the controllers and numerous sensors and actuators. Decentralized solutions (see Sect. 27.2.2) help avoid the need for transferring data from a local controller to other remote constituents of the control system. Quasi-decentralized schemes allow keeping the data transfer volume at a reduced level. A networked control system (NCS) can be defined as a scheme in which the traditional control loops are closed
through a communication network, so that the control and feedback signals can be exchanged among all the components (sensors, controllers, HCI, and actuators) through a common network [39]. There are several aspects that should be considered in designing an NCS [124]:
• The constituents of the control loop are spatially distributed.
• The communication networks utilized in NCS are shared-band digital ones.
• A limited set of local controllers must solve the time-critical problems.
As pointed out by Antsaklis and Baillieul [124], the current practical implementations of control systems are influenced by the available technology, and the old, hardwired connections between sensors and controllers should be replaced by low-cost solutions that include microprocessors and shared digital networks, sometimes deploying wireless systems. The same authors note that the analysis and design of NCS should address many aspects concerning:
• The impact of the communication quality of the network, including its influence on the dynamic behavior of the system
• The modification of priorities and requirements for the algorithms and control schemes, such as limited autonomous operation and possible reconsideration of feedforward solutions versus feedback ones
Two main classes of problems were defined in the context of NCS [39, 125]:
1. Control of networks (CN), which aims at attaining a good quality of service (QoS) for the communication networks that support the NCS schemes and an efficient and, at the same time, fair usage of the network.
2. Control over the network (CON), which aims to attain a high quality of control (QoC) solution and to compensate for the possibly imperfect operation of the communication network. The class of problems associated with CON includes sampled-data control, event-triggered control, security control, and so on.
The main advantages of NCS are a) the modularity of the solution associated with the availability of plug-and-play devices, b) the quasi-elimination of the need for wiring, and c) agility and the premises for easy diagnosis and maintenance. Consequently, NCS solutions have gained ground in many LSS control application areas, such as automated highways, smart grids, fleets of unmanned aerial vehicles, and complex manufacturing systems [126, 127]. At the same time, there are several concerns mainly related to the safety
and security of the network operation, including aspects such as data availability, integrity, and confidentiality.
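Event-triggered control, mentioned above among the CON problems, is one way to spare the shared network: a measurement is transmitted only when it deviates sufficiently from the last transmitted value. The sketch below is a generic illustration under assumed names (EventTriggeredSender, the send callback, the threshold value); it is not taken from the cited references.

```python
class EventTriggeredSender:
    """Send a state update over the network only when it is worth the bandwidth."""

    def __init__(self, threshold: float, send):
        self.threshold = threshold   # trigger level on the measurement deviation
        self.send = send             # network transmission callback (assumed)
        self.last_sent = None

    def update(self, y: float) -> None:
        if self.last_sent is None or abs(y - self.last_sent) > self.threshold:
            self.send(y)             # traffic is generated only on significant change
            self.last_sent = y
        # otherwise the controller keeps using the previously transmitted value

# Usage: a slowly varying signal causes far fewer transmissions than periodic sampling.
sender = EventTriggeredSender(threshold=0.5, send=lambda y: print("sent", round(y, 2)))
for k in range(50):
    sender.update(0.05 * k)          # only roughly every tenth sample is transmitted
```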
Cloud Control Systems
Zhan et al. [128] noticed that the NCS can no longer be a satisfactory solution in the current context characterized by the Big Data phenomenon, which has resulted from the continuous expansion of the Internet of Things (IoT). Consequently, new control paradigms, such as the Cloud Control System (CCS) and the Fog Control System (FCS), emerged. Mahmoud [39] states that the CCS is a natural development within the NCS domain. In a CCS, the control algorithms are placed in the cloud, which can be physically far away from the controlled plant. Hence, a CCS can be viewed as a CPS composed of two distinct parts:
• The physical part, which is made up of the plant and several other constituents of the control loop, such as sensors, actuators, and communication lines
• The cyber part, which includes the control algorithms
Since the cloud typically operates as a service-providing system, the entire CCS may also be viewed as a Service Oriented System (SOS). In comparison with NCS, a CCS possesses several advantages, such as a) integrating and fully utilizing all kinds of resources, b) saving energy, and c) increasing system reliability. On the other hand, there are also several open problems and challenges caused by various factors, including:
• The vulnerability to various types of attacks
• The need to provide real-time services by a cloud simultaneously utilized by several users
• Possible time delays caused by the transfer of big volumes of data from the plant to the cloud
• Supporting the initial and running costs needed for the operation of cloud computing services [129]
Fog Computing and Control
Even though CCSs have numerous advantages, the huge volumes of data to be transferred from the constituents of the physical part to the algorithms placed in the cloud and back may sometimes cause serious problems for genuine real-time operation, such as latency and network congestion [129, 130]. In various control schemes (hierarchical, decentralized, quasi-decentralized), a part of the data is used for performing local functions in real time, and it is not necessary to send it to a central site for processing. In order to reduce the data traffic to and from the cloud and, consequently, to avoid the latency problems, an extension of cloud computing named fog computing (FC) can be used. The basic idea of FC is to keep and process a part of the data in an intermediate layer, called the fog, that is near to both the data sources (the sensors from which data are collected) and the control signal
destinations (the actuators) placed in the physical part of the overall computing and control system. As its name suggests, the fog is placed near the field components, in contrast with the cloud, which is supposed to be placed at a higher altitude, "in the sky." The OpenFog Consortium, which is now a part of the Industrial Internet Consortium, defines FC as "a system-level horizontal architecture that distributes resources and services of computing, storage, control, and networking anywhere along a continuum from cloud to things" [131].
Several characteristic features of FC that are useful in the LSS (including infrastructure automation) context are: a) allowing scalability and geographical distribution, b) heterogeneity and interoperability of new devices installed with the purpose of extending the system, and c) allowing the usage of wireless communication to reduce the costs [132]. A comparison of fog computing and cloud computing with reference to several attributes, such as the target user, location of servers, types of service, distance between the cloud and the servers, latency, type of connectivity, security level, and so on, is provided in [130]. It is worth noticing that FC does not replace the entire CCS, but only the real-time part. The architecture of fog computing can be represented as a multilevel system (see Sect. 27.2.1) composed of several layers [132]. The layers are a) physical and virtualization of ground components, b) monitoring of activities and resource services, c) preprocessing, which consists of operations such as data analysis, filtering, reconstruction, and trimming, d) temporary storage, e) ensuring security, privacy, and integrity, and f) transportation of preprocessed and secured data to the cloud. Besides the advantages of fog computing, there are several open problems, key concerns, and challenges. The most significant ones are: a) authentication of end-user devices before allowing their access to resources, b) secure communication between the user devices, the fog, and the cloud, c) data integrity, d) secure data storage, and e) intrusion detection and prevention [130, 132]. Edge computing (EC) represents an extension of cloud computing towards the source of the data and is meant to diminish the burden of data flows even further. It consists of embedding intelligent controllers with limited processing power and increased security capabilities in order to create peer-to-peer, self-organizing networks [111, 133–135].
27.4 Examples
This section includes some examples of large-scale complex systems representing monitoring and control in smart cities and smart environments.
27.4.1 Smart Cities as Large-Scale Systems

Smart Buildings
Figure 27.7 illustrates how the role of different automation functions changes while moving from detached (one-family) houses to the automation of buildings at the city level. To keep the description simple, the automation scheme is divided into three layers: a) direct measurements and control, b) data acquisition and communications, and c) management and optimization. The figure shows that the role of direct measurements and controls is the biggest in the case of detached houses and decreases when proceeding to larger units. It gives way to the management and optimization functions for balancing the consumption, production, and delivery of commodities such as power and water. The importance of these functions increases further when all possible constraints, disturbances, and maintenance actions are taken into account. The role of automation at the data acquisition and communications level is quite interesting. In comparison with the other two layers, it remains quite constant, as shown in Fig. 27.7. Its importance, however, increases when the overall degree of automation increases. This is mainly because of the increasing need for data storage and communication. The modern tools for data transfer and storage facilitate performing such functions. The functions vary from one application class to another. For example, in one-family houses, they are limited to reading measurements, launching control actions, and recording and communicating consumption data to the energy company. The need for data storage and communication increases in the case of a region and of an entire city simply because of the high number of customers. Customer connections in the form of registering consumption, monitoring, reporting, accounting, and billing contribute to increasing the function complexity. The interactions between the consumption and generation of energy, and among the consumers, increase while moving from one-family houses to the region or city level. Now one comes to the area of large-scale complex systems. This calls for optimization and balancing functions and for possibilities to control energy production. It requires versatile methods for modeling processes, sometimes with slow and varying dynamics, optimization for choosing the best compensating capacities, and tools for setting constraints on consumption. The trend towards increasing energy production at the consumer site is changing this picture. The amount of data increases radically when moving along the horizontal axis of Fig. 27.7. The nature of the data is changing too. In the case of one-family houses, one is mostly dealing with direct process measurements that are filtered and used in control. In some cases, they are used in modeling and for reporting the consumption.
Fig. 27.7 The change in the role of automation from detached houses to larger units. The horizontal axis runs from detached and semidetached houses through apartment houses, apartment blocks, and city regions to the whole city; the layers shown are direct measurements and controls, data acquisition and communication functions, and management and optimization. Along this axis, the amount of data and the interaction between production and consumption of energy increase, while the amount of detail in the models decreases
At the region and city levels, data is more condensed, averaged according to the customers, customer blocks, and time interval, and then integrated over the time period. There are also legal requirements to archive data for longer periods. While the amount of data increases, the number of details in the models decreases. In the case of one-family houses, relatively detailed physical or hybrid black-box/physical dynamic models are used [136]. In the case of apartment houses or apartment blocks, physics-based experimental models are satisfactory for optimization even at the city level [137]. The following paragraphs highlight these two cases. See more on smart buildings in Ch. 48.
Modeling for home electricity management. The modeling of the electrical demand in buildings can use either the top-down or the bottom-up approach. The top-down approach views the building as a black box without detailed knowledge of the users or appliances inside it. It is more suitable for studying the general behavior at higher levels. The bottom-up approach helps to model the electricity consumption of individual users in a more detailed way. Louis et al. [138] give an example of modeling for electricity analysis in residential homes. The model inputs consist of daily appliances (washing machine, electric oven, electric heater, etc.) and user profiles that, when summed up, give the electricity demand. Converting demand information to economic values requires the electricity price and information on the state of the electrical grid. The simulation starts by generating the events, that is, by defining the starting and ending times for using each appliance. This proceeds from the weekly and daily levels to the hourly level over the whole simulation period and for each appliance. Typical examples are using the coffee machine
during breakfast time or the sauna during the weekends. The outputs are the power demand and its price. Separate models are needed if the heat demand or possible own power generation (e.g., solar power) is calculated. Emission models usually use fixed factors for the calculations; in this case, a dynamic model is used, which works on an hourly basis and takes the fuel used into account [139]. What is the role of home automation in the Smart City concept? The data generated at the single-house level is the cornerstone for designing the large-scale automation at the city level. Its availability and reliability are of key importance. Combining the direct measurement data with some representative index values and integrating it over time as close to the data source as possible would decrease the load on data communication. As mentioned before, an increase in on-site production and its selling to the grid would complicate the situation. Even though the above case refers to electric power, the same conclusions also hold true for heat energy in district heating networks. This case is considered next from the optimization point of view. Electric grids are considered in detail elsewhere in this book (Ch. 47).
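To make the bottom-up idea of Louis et al. [138] more concrete, the following minimal sketch sums hypothetical appliance events into an hourly electricity demand profile and prices it with an assumed flat tariff. The appliance powers, schedules, and price are invented for illustration only and are not taken from the cited study.

```python
# Hypothetical appliance events: (name, rated power in kW, hours of use during one day)
EVENTS = [
    ("coffee_machine",  1.2, [7]),
    ("electric_oven",   2.0, [17, 18]),
    ("washing_machine", 0.9, [20]),
    ("electric_heater", 1.5, list(range(24))),   # base heating load, all hours
]

PRICE_PER_KWH = 0.15   # assumed flat tariff, EUR/kWh

def hourly_demand(events):
    """Bottom-up aggregation: sum appliance powers for each hour of the day."""
    demand = [0.0] * 24
    for _, power_kw, hours in events:
        for h in hours:
            demand[h] += power_kw
    return demand

demand = hourly_demand(EVENTS)
daily_energy = sum(demand)                    # kWh, since each slot is one hour
print("peak hour:", demand.index(max(demand)), "peak load:", max(demand), "kW")
print("daily cost:", round(daily_energy * PRICE_PER_KWH, 2), "EUR")
```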
City-level optimization of heat demand in buildings. Handling peak demand situations is important in large-scale district heating networks at the city level. In such situations, the heat demand exceeds the existing production capacity and all reserve capacities are in use. This means increased production costs and a higher environmental load. Dealing with peak load situations by structural means may lead to high investment costs related to the production machinery. It may also limit the extension of the network with more customers.
Demand Side Management (DSM) is one tool for handling peak demand situations. It was originally used for electricity networks and was defined as the set of activities that try to change the electricity use of customers in ways that will lead to desired changes in the load. It is equally applicable to district heating networks. Handling peak load situations requires forecasting the heat consumption at the hourly level, preferably by using adaptive models of the single buildings while also taking the uncertainties in the weather forecasts into account. This has proved to be the bottleneck. Smart meters have largely solved the problem of lacking measurements, but the modeling work has proved to be too tedious. Models for single buildings exist, but only a few studies of their application on a larger scale exist. Hietaharju et al. [137] presented simulation studies using easily adaptable models for the indoor temperature and heat demand. They tested the strategy of optimizing the heat demand and reducing the peak loads by utilizing the buildings themselves as short-term heat storages. Measured district heating data from 201 large buildings (apartment buildings, schools, and commercial, public, and office buildings) was used. Simple recursive discrete-time models predicted the indoor temperature and heat demand with an accuracy of 5% and 4%, respectively [140, 141]. It is possible to forecast the outdoor temperature, but in this case, the measured values were used. The forecasting period was 48 h. Each model included four parameters. Their estimates were based on the existing data by using the simulated annealing method [140]. The models were used in the optimization of the heat demand at the city level. The number of free parameters and input variables should not be too high in order to ensure the easy application of these models to a building stock consisting of hundreds or even thousands of individual buildings. The biggest difficulty in parameter estimation was that the indoor temperature was not available for all buildings. Therefore, either the simulated value or the target value was used. The estimated values of two physical parameters, the thermal capacity and the heat loss, were presented as a function of the volume of the building. The objective of the optimization was to minimize the variance of the heating power while not increasing the total heat demand and keeping the indoor temperature within acceptable limits. The cost function was [137]
$$J_k = \mathrm{var}\left(P_{opt,k}\right) + P_{p,k} + \sum_{t=1}^{N} T_{p,k}(t)$$
where $P_{opt,k}$ is the optimal heating power during the optimization for period k, N is the number of hours in the optimization period, $P_{p,k}$ is the penalty for an increase in total heat demand, and $T_{p,k}$ is the penalty for the indoor temperature getting outside the acceptable limits. The optimization was subject to three constraints limiting the difference between the forecasted and optimized heat demand and keeping the heat demand and indoor temperature within the acceptable limits. The optimization was carried out for over 480 h for each individual building recursively in slots of 48 h using the receding horizon approach. During each iteration, the heat demand and the indoor temperature were forecasted for 48 h and the optimization result of the first hour was kept over the control horizon of one hour. These calculations were repeated with one-hour intervals until the results for 480 h were available. The optimization used the pattern search algorithm from the Global Optimization Toolbox of MATLAB®. The same paper also proposed another cost function that aims at minimum peak load cutting while using the model for the indoor temperature. The cost function for peak load cutting minimized the difference between the forecasted and the optimized heat demand while simultaneously keeping the change in the total heat demand at a minimum [137]
$$J = \sum_{t=1}^{N} \left(P_{f,k} - P_{opt,k}\right) + \sum_{t=1}^{N} \left(P_{cp,k}(t) + T_{p,k}(t)\right)$$
Here $P_{f,k}$ is the forecasted power and $P_{cp,k}$ is the penalty for too large changes in the heating power per one hour. There are two constraints in this case: one limits the change in the heating power and the other keeps the optimized power between 30% and 100% of the forecasted one. In the test with 201 individual buildings, the potential for peak load cutting varied depending on the building type and within building categories, too. The median was 15% and the maximum 70%. The optimization was done for individual buildings, and no advantage was noticed in coupling the optimization of separate buildings or building groups together. The optimization used the simulated inside temperature of the buildings, and better results would follow if the actual temperatures were available. The most important advantage of this approach is the ease of modeling. The number of inputs and model parameters is so low that it is possible to optimize even a large building stock. The parameter estimation needs about one week's measurement data (heating power and output temperatures). One possible way to improve the results would be to include the model uncertainty in the system.
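A highly simplified sketch of the receding-horizon procedure described above is given below: at each hour, the heat demand over the next 48 h is optimized against a J_k-type cost, only the first hour of the result is applied, and the horizon is shifted forward. The stand-in forecast model, the penalty weight, the 30-120% bounds, and the use of SciPy's general-purpose minimizer are illustrative assumptions; the cited study used recursive building models and MATLAB's pattern search.

```python
import numpy as np
from scipy.optimize import minimize

H = 48                                   # optimization horizon in hours

def forecast_demand(t0: int) -> np.ndarray:
    """Stand-in forecast of heat demand (kW) for the next H hours (invented numbers)."""
    t = np.arange(t0, t0 + H)
    return 100 + 30 * np.sin(2 * np.pi * t / 24)        # simple daily cycle

def cost(p_opt: np.ndarray, p_forecast: np.ndarray) -> float:
    """J_k-type cost: variance of heating power plus a penalty on extra total demand."""
    extra = max(0.0, p_opt.sum() - p_forecast.sum())     # penalize increased total heat
    return float(np.var(p_opt)) + 10.0 * extra

applied = []
for hour in range(72):                   # receding horizon: re-optimize every hour
    p_f = forecast_demand(hour)
    res = minimize(cost, x0=p_f, args=(p_f,),
                   bounds=[(0.3 * p, 1.2 * p) for p in p_f])  # assumed 30-120 % band
    applied.append(res.x[0])             # keep only the first hour of each plan
print("first applied setpoints:", np.round(applied[:5], 1))
```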
Water Treatment and Distribution Infrastructures
There are many similarities between district heating and water distribution networks, but from the automation and optimization point of view, they differ in some very important respects, such as:
• In water distribution networks, the level of automation on the consumer side is low, and a smart meter is enough for measuring the water consumption. This means a low need for data transfer and ease of automatic reading. The lowest reading frequency is usually once per hour. This does not decrease the importance of the measurement accuracy and the necessity to check it, but this takes place at the utility site. On the other hand, a higher level of automation is required for monitoring and control at the treatment plants of both drinking water and wastewater, and the requirements related to data communications are higher there.
• In water distribution networks, the role of the water reservoirs is just as important for peak level cutting as in district heating. Water reservoirs and their compensating capacities are easier to measure and handle than in the above case of district heating. Because the operation of the water supply and distribution system is based on forecasting the demand, the future behavior of the water reservoirs can be simulated on the basis of these forecasts.
• Water security is a complicated problem that includes, in addition to ensuring the sufficiency of drinking water, also observing the water quality and preventing possible vandalism or cyber threats.
• When estimating the costs of water usage and its effect on the environment, both the production of drinking water and the purification of wastewater must be taken into account.
The water system as a whole is a large-scale complex system, and the design of its automation follows the design methods for LSS. The smart meter is the basic building block of a Smart Water System (SWS). It performs the real-time monitoring of water consumption in the Smart City environment. Smart meters are also necessary for detecting possible leaks in the network, building forecasts of the future consumption, and updating the models used in the optimization of the network operation. There is a recent review with 109 references considering SWS [142]. It presents the current definitions (an SWS is also called a Smart Water Grid or Smart Water Management) and the historical development of SWS. It introduces a new five-layer architecture of SWS. It resembles the other five-layer architecture presented in Sect. 27.3, but there are several functional differences. The layers are named Instrument, Property, Function, Benefit, and Application. The instrument layer includes both physical and cyber instruments. The property layer ensures the system's ability to respond to various threats through four properties: automation, resourcefulness, real-time functions, and connectivity. Instruments and properties define the functions of the system. They deal with data handling, simulation, planning, and optimization. The benefit layer deals with the costs, sustainability, and lifecycle
benefits of the system. The application layer also includes public applications in addition to user applications.
27.4.2 Environmental Protection

Modern technologies such as the IoT, joining together distributed sensors, the Internet, and distributed monitoring, alarming, and registering systems, contribute a lot to environmental protection in future Smart Cities [143]. They enable the centralized monitoring of the effects of industrial and traffic emissions and effluents, changing weather conditions, housing and farming, etc., on the quality of air, water, and soil. Disaster forecasting and early warning systems, which are used for natural factors such as seismic and volcanic activity, tsunamis, forest fires, and health risks caused by heat or radiation, also belong to the area of environmental monitoring. These systems can operate with manual samples and analyses, automatic analyzers and data loggers, individual point-wise measurements, results from large-scale meteorological models, and even with exceptional findings from individuals in the field in a crowdsourcing scheme. Data acquisition can utilize smart sensors, wireless and wired sensors and networks, and network connections. As with any real large-scale complex system, the design of an environmental monitoring and protection system requires methods for the correct decomposition of the large system as well as data mining and visualization tools to provide easy-to-use information for the system users. Li et al. [144] introduce a three-level structure: a data collection and processing layer, a network layer, and an application layer. Their application relies on Cloud computing. Zhang and Huang [145] propose the use of Edge computing and a four-level architecture consisting of a Perception layer, a Transport layer, a Management layer, and a Service layer. There is a recent review on the use of IoT and sensors in smart environment monitoring systems [146]. It critically analyzed 113 papers from various viewpoints: applications in agriculture, water, and air monitoring, together with the use of Machine Learning and IoT tools. The authors noticed a strong increase in the number of papers published in the domain over the last 10 years. The poor quality of the sensor data (repeatability, reliability) is a serious obstacle in applications. As far as Machine Learning is concerned, more research on handling big and noisy data is required. Another extensive review considered marine environments such as oceans, water quality, fish farming, coral reefs, and wave and current monitoring [147]. It referred to various sensors and sensor networks, communication systems, and data analysis methods. The biggest challenges were identified with regard to battery management in wireless systems, standardization of IoT devices and platforms, computing and data storage technologies, and data analysis.
Table 27.1 Large-scale infrastructure automation examples
Controlled object / References
Utility networks
  Heating networks in smart cities: [148–150]
  Electric power grids and gas networks: [130, 135, 151–155, 176, 177]
  Water networks: [19, 44, 154, 155, 156, 178]
Large-scale environment monitoring systems: [157, 158, 159, 160, 161]
Transportation systems
  Traffic light control: [42, 132, 162, 163]
  Fleet control: [164, 165]
  Connected parking systems, automotive, and smart mobility: [130, 132, 166]
27.4.3 Other Infrastructure Automation Cases

Due to space limitations, a detailed presentation of other infrastructure automation examples is not possible. Consequently, several papers are indicated in Table 27.1, which mainly includes survey papers on automation in different areas of modern infrastructures. The interested reader can find, in the volume edited by Kyriakides and Polycarpou [167], several other examples of critical infrastructure automation solutions deployed in various domains, for example, electric power systems, telecommunications, water systems, and transportation networks. As Brdys [168] notices, such systems are often characterized by a spatial and networked structure, multiple objectives, nonlinearities, and different time scales. In addition, their operating conditions may be affected by numerous disturbances, such as varying operating ranges, faults in the automation devices (sensors and actuators), and abnormal functioning of the controlled objects themselves. Consequently, the automation schemes may vary from genuine distributed control to hierarchical solutions.
27.5 Design and Security Issues
Designing a control system for an LSS can be viewed as a set of decisions to be made concerning various aspects, such as a) the people to be involved in the project, b) the development approach adopted (waterfall model or incremental design), c) the selection of ICT products to be deployed, d) the evaluation procedures, and so on [169]. Using multi-criterion and multi-attribute decision-making (MCDM/MADM) methods [170] can effectively support the process of choosing the appropriate solutions. In [85], a classification of criteria to be used in selecting ICT is provided. An example of using TOPSIS, a multi-attribute method, for selecting a cloud provider is presented in [171], and a DSS for selecting a piece of software (a database management system) is described in [172].
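To illustrate the kind of multi-attribute support mentioned above, the sketch below implements a generic TOPSIS ranking for choosing among hypothetical cloud providers. The criteria, weights, and scores are invented for illustration, and the code is not the method or data of [171]; it only reproduces the textbook TOPSIS steps (normalization, weighting, distances to the ideal and anti-ideal solutions, closeness).

```python
import numpy as np

def topsis(scores: np.ndarray, weights: np.ndarray, benefit: np.ndarray) -> np.ndarray:
    """Return the closeness-to-ideal of each alternative (rows) over criteria (columns)."""
    norm = scores / np.linalg.norm(scores, axis=0)       # vector-normalize each criterion
    v = norm * weights                                   # apply criterion weights
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti, axis=1)
    return d_neg / (d_pos + d_neg)                       # higher = closer to the ideal

# Hypothetical providers scored on cost and latency (lower is better) and on
# availability and security (higher is better).
scores = np.array([[120., 40., 99.5, 7.],
                   [150., 25., 99.9, 9.],
                   [100., 60., 99.0, 6.]])
weights = np.array([0.3, 0.2, 0.3, 0.2])
benefit = np.array([False, False, True, True])
print(topsis(scores, weights, benefit))                  # rank providers by closeness
```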
The LSS and their control infrastructures may show various vulnerabilities and may be affected by various attacks,
especially in the cases when they are composed of distributed subsystems and communication links are used. Critical infrastructure systems are expected to be available 24 h a day, 7 days a week. Nevertheless, they occasionally do not operate as expected because of various causes such as natural disasters, equipment malfunctions, human errors, or malicious attacks. The economic and societal consequences may be very high. Security concerns and open problems have been addressed frequently throughout Sect. 27.3 of this chapter. In particular, one can mention the specific cybersecurity issues for industrial critical infrastructures that are presented in [173]. Two frequently used security standards (ISO 27001 and NIST SP 800-53) are described in detail in [174]. Ogie [175] analyzes cybersecurity incidents on critical infrastructure and industrial networks. The author classifies the possible incidents according to three criteria:
• Intent: theft, intended and unintended service disruption, sabotage, espionage, accident, and unknown
• Method of operation: malware, unauthorized insider (local or remote) access, interruption of service, non-cyberattack, and unknown
• "Perpetrator" who deliberately launches the attack: lone hacker, organized group of hackers, vendor, employee, unknown, and none
Enoch et al. [176] compare various existing security metrics for networks based on the concept of network reachable information and propose a composite one for network security analytics. Fiedler et al. [177] propose a DSS approach for investment in cybersecurity. This approach is based on game theory and combinatorial optimization. Stepanova and Zegzhda [178] have studied the evolution of IT security and the usage of control theory in analyzing large-scale, heterogeneous distributed systems. The authors apply three basic concepts of control theory – feedback, adaptive control, and system state prediction – to define a formal representation of security for four types of systems: static, active, adaptive, and dynamic. The principles of control theory, which are applied in security assessment, are meant to identify and
eliminate the threats in critical infrastructures such as power systems [179], gas transmission lines [180], water distribution systems [181], and air transportation systems [182]. In decision-making situations concerning LSS, it is likely that the participants (the decision-makers and their assistants) are located in various, possibly remote places, and their number is not limited by the size of a classical meeting room. Moreover, at present, some people may need to attend virtual decision-making meetings using their mobile devices. Security aspects concerning the authentication of and access granting to participants who are authorized to take part in sensitive decision-making activities can be approached by using biometric methods and tools [85, 94]. A review of the current state of the art in cyber-security training offerings for critical infrastructure protection and the key performance indicators is provided in [183].
References 1. Tomovic, R.: Control of large systems. In: Troch, I. (ed.) Simulation of Control Systems, pp. 3–6. North Holland, Amsterdam (1972) 2. Mahmoud, M.S.: Multilevel systems control and applications. IEEE Trans. Syst. Man Cybern. SMC–7, 125–143 (1977) 3. Šiljak, D.D.: Large Scale Dynamical Systems: Stability and Structure. North Holland, Amsterdam (1978) 4. Athans, M.: Advances and open problems on the control of largescale systems, Plenary paper. In: Proc. 7th IFAC Congress, pp. 2371–2382. Pergamon Press, Oxford (1978) 5. Jamshidi, M.: Large Scale Systems: Modeling and Control. North Holland, New York (1983); 2nd edn. Prentice Hall, Upper Saddle River (1997) 6. Mesarovic, M.D., Macko, D., Takahara, Y.: Theory of Hierarchical Multilevel Systems. Academic, New York (1970) 7. Filip, F.G., Dumitrache, I., Iliescu, S.S.: Foreword. In: Proceedings of the 9th IFAC Symposium on Large Scale Systems: Theory and Applications 2001, Bucharest, Romania, 18– 20 July 2001, pp. V-VII. IFAC Proceedings Volumes 3 (8) (2001). https://www.sciencedirect.com/journal/ifac-proceedingsvolumes/vol/34/issue/8. Accessed 10 Aug 2020 8. Findeisen, W.: Decentralized and hierarchical control under consistence or disagreements of interests. Automatica. 18(6), 647– 664 (1982) 9. Takatsu, S.: Coordination principles for two-level satisfactory decision-making systems. Syst. Sci. 7(3/4), 266–284 (1982) 10. Filip, F.G., Donciulescu, D.A.: On an online dynamic coordination method in process industry. IFAC J. Automat. 19(3), 317–320 (1983) 11. Mårtenson, L.: Are operators in control of complex systems? In: Proc. 13th IFAC World Congress, vol. B, pp. 259–270. Pergamon, Oxford (1990) 12. Nof, S.Y., Morel, G., Monostori, L., Molina, A., Filip, F.G.: From plant and logistics control to multi-enterprise collaboration. Milestones report of the manufacturing & logistic systems coordinating committee. Annu. Rev. Control. 30(1), 55–68 (2006) 13. Stahel, W.R.: The circular economy. Nature. 531, 435–438 (2016). https://doi.org/10.1038/531435a
F. G. Filip and K. Leiviskä 14. Ilgin, M.A., Gupta, S.M.: Environmentally conscious manufacturing and product recovery (ECMPRO): a review of the state of the art. J. Environ. Manag. 91, 563–559 (2010) 15. Duta, L., Filip, F.G.: Control and decision-making process in disassembling used electronic products. Stud. Inform. Control. 17(1), 17–26 (2008) 16. Gupta, S.M., Ilgin, A.M.: Multiple Criteria Decision-Making Applications in Environmentally Conscious Manufacturing and Product Recovery. CRC Press, Boca Raton (2018) 17. Sage, A.P., Cuppan, C.D.: On the system engineering of systems of systems and federations of systems. Inf. Knowl. Syst. Manage. 2(4), 325–349 (2001) 18. Gheorghe, A.V., Vamanu, D.V., Katina, P.F., Pulfer, R.: Critical infrastructures, key resources, and key assets. In: Critical Infrastructures, Key Resources, Key Assets. Topics in Safety, Risk, Reliability and Quality, vol. 34. Springer, Cham (2018) 19. Donciulescu, D.A., Filip, F.G.: DISPECER-H – a decision supporting system in water resources dispatching. Annu. Rev. Autom. Program. 12(2), 263–266 (1985) 20. Lee, E.A.: Cyber physical systems: design challenges. In: 11th IEEE International Symposium on Object and ComponentOriented Real-Time Distributed Computing (ISORC), Orlando, pp. 363–369 (2008) 21. Yilma, B., Panetto, H., Naudet, Y.: A meta-model of cyberphysical-social system: the CPSS paradigm to support humanmachine collaboration in Industry 4.0. Collab. Netw. Digital Transform. 568, 11–20 (2019) 22. Nof, S.Y., Morel, G., Monostori, L., Molina, A., Filip, F.: From plant and logistic control to multienterprise. milestones report of the manufacturing & logistic systems coordinating committee. Annu. Rev. Control. 30(1), 55–68 (2006) 23. Nof, S.Y., Ceroni, J., Jeong, W., Moghaddam, M.: Revolutionizing Collaboration through e-Work, e-Business, and e-Service. Springer, Berlin/Heidelberg (2015) 24. Zhong, H., Nof, S.Y.: Collaborative e-work and collaborative control theory for disruption handling and control. In: Dynamic Lines of Collaboration. Automation, Collaboration, & E-Services, vol. 6, pp. 23–31. Springer, Cham (2020) 25. Romero, D., Bernus, P., Noran, O., Stahre, J., Fast-Berglund, Å.: The Operator 4.0: human cyber-physical systems & adaptive automation towards human-automation symbiosis. In: Nääs, I., et al. (eds.) Advances in Production Management Systems. Initiatives for a Sustainable World. APMS 2016. IFIP AICT, vol. 488. Springer (2016) 26. Vernadat, F.B., Chen, F.T.S., Molina, A., Nof, S.Y., Panetto, H.: Information systems and knowledge management in industrial engineering: recent advances and new perspectives. Int. J. Prod. Res. 56(8), 2707–2713 (2018) 27. Borangiu, T., Morariu, O., Raileanu, S., Trentesaux, D., Leitão, P., Barata, J.: Digital transformation manufacturing. Industry of the future with cyber-physical production systems. ROMJIST. 23(1), 3–37 (2020) 28. Camarinha-Matos, L.M., Fornasiero, R., Ramezani, J., Ferrada, F.: Collaborative networks: a pillar of digital transformation. Appl. Sci. 9, 5431 (2019) 29. Ledet, W.P., Himmelblau, D.M.: Decomposition procedures for the solving of large scale systems. Adv. Chem. Eng. 8, 185–254 (1970) 30. Šiljak, D.D.: Decentralized Control of Complex Systems. Academic, New York (1991) 31. Obinata, G., Anderson, B.D.O.: Model reduction for control system design. In: Series Communications and Control Engineering. Springer, London (2001) 32. Kokotovi´c, P.V.: Applications of singular perturbation techniques to control problems. SIAM Rev. 26(4), 501–555 (1984) 33. 
Antsaklis, P.J., Passino, K.M. (eds.): An Introduction to Intelligent and Autonomous Control. Kluwer Academic (1993)
34. Precup, R.E., Hellendoorn, H.: A survey on industrial applications of fuzzy control. Comput. Ind. 62(3), 213–226 (2011) 35. Binder, Z.: Sur l’organisation et la conduite des systèmes complexes. Thèse de Docteur. LAG, Grenoble (1977) 36. Lupu, C., Borne, P., Popescu, D.: Multi-model adaptive control systems. CEAI. 10(1), 49–56 (2008) 37. Liu, Y.-Y., Barabási, A.-L.: Control principles of complex systems. Rev. Mod. Phys. 88, 035006 (2016) 38. Hespanha, J.P., Naghshtabriz, P., Xu, Y.G.: A survey of recent results in networked control systems. Proc. IEEE. 95(1), 138–162 (2007) 39. Mahmoud, M.S., Hamdan, M.M.: Fundamental issues in networked control systems. IEEE/CAA J. Automat. Sin. 5(5), 902– 922 (2018) 40. Ho, Y.C., Mitter, S.K.: Directions in Large-Scale Systems. Plenum, New York (1976) 41. Sage, A.P.: Methodology for Large Scale Systems. McGraw Hill, New York (1977) 42. Singh, M.D., Titli, A.: Systems: Decomposition, Optimisation, and Control. Pergamon, Oxford (1978) 43. Findeisen, W., Brdys, M., Malinowski, K., Tatjewski, P., Wozniak, A.: Control and Coordination in Hierarchical Systems. Wiley, Chichester (1980) 44. Brdys, M., Tatjewski, P.: Iterative Algorithms for Multilayer Optimizing Control. Imperial College, London (2001) 45. Steward, D.: Systems Analysis and Management: Structure, Strategy and Design. Petrocelli Books, New York (1981) 46. Schoeffler, J.: Online multilevel systems. In: Wismer, D. (ed.) Optimization Methods for Large Scale Systems, pp. 291–330. McGraw Hill, New York (1971) 47. Nam, T., Pardo, T.A.: Conceptualizing smart city with dimensions of technology, people, and institutions. In: dg.o ‘11, Proc. 12th Annu. Intern. Digital Gov. Res. Conf: Digital Government Innovation in Challenging Times, June 2011, pp. 282–291 (2011) 48. Eckman, P., Lefkowitz, I.: Principles of model technique in optimizing control. In: Proc. 1st IFAC World Congr., Moscow 1960, pp. 970–974 (1960) 49. Lefkowitz, I.: Multilevel approach to control system design. In: Proc. JACC, pp. 100–109 (1965) 50. Grassi, A., Guizzi, G., Santillo, L.C., Vespoli, S.: A semiheterarchical production control architecture for industry 4.0based manufacturing systems. Manuf. Lett. 24, 43–46 (2020) 51. Kim, D.-S., Tran-Dang, H.: Industrial sensors and controls in communication networks. Comput. Commun. Netw. (2019). https://doi.org/10.1007/978-3-030-04927-0_1 52. Brosilow, C.B., Ladson, L.S., Pearson, J.D.: Feasible optimization methods for interconnected systems. In: Proc. Joint Autom. Control Conf. – JACC, pp. 79–84. Rensselaer Polytechnic Institute, Troy/New York (1965) 53. Williams, T.J.: Analysis and Design of Hierarchical Control Systems with Special Reference to Steel Plant Operations. Elsevier, Amsterdam (1985) 54. Hatvany, J.: Intelligence and cooperation in heterarchic manufacturing systems. Robot. Comput. Integr. Manuf. 2(2), 101–104 (1985) 55. Rey, G.Z., Pach, C., Aissani, N., Berger, T., Trentesaux, D.: The control of myopic behavior in semi-heterarchical production systems: a holonic framework. Eng. Appl. Artif. Intell. 26(2), 800–817 (2013) 56. Koestler, A.: The Ghost in the Machine. Hutchinson, London (1967) 57. Hopf, M., Schoeffer, C.F.: Holonic manufacturing systems. In: Goossenaerts, J., et al. (eds.) Information Infrastructure Systems for Manufacturing, pp. 431–438. Springer Science+Business Media, Dordrecht (1997)
637 58. Filip, F.G., Leiviskä, K.: Large-scale complex systems. In: Nof, S.Y. (ed.) Springer Handbook of Automation, pp. 619–638. Springer, Berlin/Heidelberg (2009) 59. Valckenaers, P., Van Brussel, H., Hadeli, K., Bochmann, O., Germain, B.S., Zamfirescu, C.: On the design of emergent systems an investigation of integration and interoperability issues. Eng. Appl. Artif. Intell. 16, 377–393 (2003) 60. Parunak, H.V.D.: Practical and Industrial Applications of AgentBased Systems. Industrial Technology Institute, Ann Arbor (1998) 61. Tecuci, G., Dybala, T.: Building Intelligent Agents: An Apprenticeship Multistrategy Learning Theory, Methodology, Tool and Case Studies. Academic, New York (1998) 62. Nof, S.Y.: Collaborative control theory for e-work, e-production and e-service. Annu. Rev. Control. 31(2), 281–292 (2007) 63. Monostori, L., Valkenaerts, P., Dolgui, A., Panetto, H., Brdys, M., Csáji, B.C.: Cooperative control in production and logistics. Annu. Rev. Control. 39, 12–29 (2015) 64. Siljak, D.: Decentralized Control of Complex Systems. Dover Publications, Mineola (2012) Original Academic Press (1991) 65. Davison, E.J., Aghdam, A.G., Miller, D.: Decentralized Control of Large-Scale Systems. Springer, New York (2020) 66. Wang, S.-H., Davison, E.J.: On the stabilization of decentralized control system. IEEE Trans. Autom. Control. 18(5) (1973) 67. Bakule, L.: Decentralized control: an overview. Annu. Rev. Control. 32, 87–98 (2008) 68. Inagaki, T.: Adaptive automation: sharing and trading of control. In: Hollnagel, E. (ed.) Handbook of Cognitive Task Design, pp. 147–169. LEA (2003) 69. Fitts, P.M.: Human Engineering for an Effective Air Navigation and Traffic Control System. Nat. Res. Council, Washington, DC (1951) 70. Flemisch, F., Heesen, M., Hesse, T., Kelsch, J., Schieben, A., Beller, J.: Towards a dynamic balance between humans and automation: authority, ability, responsibility, and control in shared and cooperative control situations. Cogn. Tech. Work. 14(1), 3–8 (2012) 71. Bibby, K.S., Margulies, F., Rijnsdorp, J.E., Whithers, R.M.: Man’s role in control systems. In: Proceedings, IFAC 6th Triennial World Congress, Boston, Cambridge, MA, pp. 24–30 (1975) 72. Bainbridge, L.: Ironies of automation. IFAC J. Automatica. 19(6), 775–779 (1983) 73. Martin, T., Kivinen, J., Rijnsdorp, J.E., Rodd, M.G., Rouse, W.B.: Appropriate automation – integrating human, organizational, economic, and cultural factors. In: Proc. IFAC 11th World Congress, vol. 23, issue 8, Part 1, pp. 1–19 (1990) 74. Schneiderman, B.: Human-centered artificial intelligence: reliable, safe & trustworthy. Int. J. Human–Comput. Interact. 36(6), 495–504 (2020) 75. Power, D.J., Heavin, C., Keenan, P.: Decision systems redux. J. Decis. Syst. 28(1), 1–18 (2019) 76. Filip, F.G., Suduc, A.M., Bizoi, M.: DSS in numbers. Technol. Econ. Dev. Econ. 20(1), 154–164 (2014) 77. Filip, F.G.: Decision support and control for large-scale complex systems. Annu. Rev. Control. 32(1), 62–70 (2008) 78. Filip, F.G.: DSS – a class of evolving information systems. In: Dzemyda, G., Bernataviˇcien˙e, J., Kacprzyk, J. (eds.) Data Science: New Issues, Challenges and Applications. Studies in Computational Intelligence, vol. 869, pp. 253–277. Springer, Cham (2020) 79. Nof, S.Y.: Theory and practice in decision support for manufacturing control. In: Holsapple, C.W., Whinston, A.B. (eds.) Data Base Management, pp. 325–348. Reidel, Dordrecht (1981) 80. Filip, F.G., Neagu, G., Donciulescu, D.A.: Jobshop scheduling optimization in real-time production control. 
Comput. Ind. 4(3), 395–403 (1983)
638 81. Cioca, M., Cioca, L.-I.: Decision support systems used in disaster management. In: Jao, C.S. (ed.) Decision Support Systems, pp. 371–390. INTECH (2010). https://doi.org/10.5772/39452 82. Charturverdi, A.R., Hutchinson, G.K., Nazareth, D.J.: Supporting real-time decision-making through machine learning. Decis. Support. Syst. 10, 213–233 (1997) 83. Simon, H.: Rational choice and the structure of environment. Psychol. Rev. 63(2), 129–138 (1956) 84. Van de Walle, B., Turoff, M.: Decision support for emergency situations. Inf. Syst. E-Bus. Manage. 6, 295–316 (2008) 85. Filip, F.G., Zamfirescu, C.B., Ciurea, C.: Computer-Supported Collaborative Decision-Making. Springer, Cham (2017) 86. Candea, C., Filip, F.G.: Towards intelligent collaborative decision support platforms. Stud. Inform. Control. 25(2), 143–152 (2016) 87. Nof, S.Y.: Collaborative control theory and decision support systems. Comput. Sci. J. Moldova. 25(2), 15–144 (2017) 88. Ding, R.X., Palomares, I., Wang, X., Yang, G.-R., Liu, B., Dong, Y., Herrera-Viedma, E., Herrera, F.: Large-scale decision-making: characterization, taxonomy, challenges and future directions from an artificial intelligence and applications perspective. Inform. Fusion. 59, 84–102 (2020) 89. Bonczek, R.H., Holsapple, C.W., Whinston, A.B.: Foundations of Decision Support Systems. Academic, New York (1981) 90. Holsapple, C.W., Whinston, A.B.: Decision Support System: A Knowledge-Based Approach. West Publishing, Minneapolis (1996) 91. Simon, H.: Two heads are better than one: the collaboration between AI and OR. Interfaces. 17(4), 8–15 (1987) 92. Kusiak, A., Chen, M.: Expert systems for planning and scheduling manufacturing systems. Eur. J. Oper. Res. 34(2), 113–130 (1988) 93. Filip, F.G.: System analysis and expert systems techniques for operative decision making. J. Syst. Anal. Model. Simul. 8(2), 296– 404 (1990) 94. Kaklauskas, A.: Biometric and Intelligent Decision-Making Support. Springer International Publishing, Cham (2015) 95. Zaraté, P., Liu, S.: A new trend for knowledge-based decision support systems design. Int. J. Inform. Decis. Sci. 8(3), 305–324 (2016) 96. Dutta, A.: Integrated AI and optimization for decision support: a survey. Decis. Support. Syst. 18, 213–226 (1996) 97. Engelbart, D.C.: Augmenting Human Intellect: a Conceptual Framework. SRI Project 3578 (1962). https://apps.dtic.mil/dtic/tr/ fulltext/u2/289565.pdf. Accessed 10 Aug 2020 98. Hollnagel, E., Woods, D.D.: Cognitive systems engineering: new wine in new bottles. Int. J. Man-Mach. Stud. 18(6), 583– 600 (reprinted in Intern J Human-Comp Stud 51:339–356) (1983/1999) 99. Kelly III, J.E.: Computing, Cognition and the Future of Knowing. How Humans and Machines are Forging a New Age of Understanding. IBM Global Service (2015) 100. Siddike, M.A.K., Spohrer, J., Demirka, H., Kohda, J.: People’s interactions with cognitive assistants for enhanced performances. In: Proc. 51st International Conference on System Sciences, pp. 1640–1648 (2018) 101. Rouse, W.B., Spohrer, J.C.: Automating versus augmenting intelligence. J. Enterprise Transform. (2018). https://doi.org/10.1080/ 19488289.2018.1424059 102. AIHLEG: Ethics Guidelines for Trustworthy AI. European Commission (2019). https://ec.europa.eu/futurium/en/ai-allianceconsultation/guidelines#Top. Accessed 10 Aug 2020 103. Rouse, M.: Definition: Internet of Things (IoT). IoT Agenda (2020). https://internetofthingsagenda.techtarget.com/definition/ Internet-of-Things-IoT. Accessed 30 June 2020 104. 
Jeschke, S., Brecher, C., Meisen, D., Özdemir, D., Eschert T.: Industrial Internet of Things and Cyber Manufacturing Systems. Springer Series in Wireless Technology (2017)
F. G. Filip and K. Leiviskä 105. Boyes, H., Hallaq, B., Cunningham, J., Watson, T.: The industrial internet of things (IIoT): an analysis framework. Comput. Ind. 101, 1–12 (2018) 106. Erboz, G.: How to define Industry4.0: the main pillars of industry 4.0. In: Conference Managerial Trends in the Development of Enterprises in Globalization Era, at Slovak University of Agriculture in Nitra, Slovakia, pp. 761–767 (2017) 107. Mell, P., Grance, T.: The NIST definition of cloud computing. Special publication 800-15. (2011). http://csrc.nist.gov/ publications/nistpubs/800-145/SP800-145.pdf. Accessed 30 June 2020 108. Hamilton, E.: What Is Edge Computing: The Network Edge Explained. (2018). https://www.cloudwards.net/what-is-edgecomputing. Accessed 30 June 2020 109. ISO/IEC 18384:2016. 2016. Information Technology - Reference Architecture for Service Oriented Architecture (SOA RA). Geneva: International Organization for Standardization 110. Moghaddam, M., and Nof, S.Y. Collaborative service-component integration in cloud manufacturing, Int. J. of Prod. Res., 56(1-2), 677–691 (2018) 111. Shi, W., Cao, J., Zhang, Q., Li, T., Xu, L.: Edge computing: vision and challenges. IEEE Internet Things J. 3(5) (2016) 112. Dinh, H.T., Lee, C., Niyato, D., Wang, P.: A survey of mobile cloud computing: architecture, applications, and approaches. Wirel. Commun. Mob. Comput. 13, 1587–1611 (2013) 113. Mao, Y., You, C., Zhang, J., Huang, K., Letaief, K.B.: Mobile edge computing: survey and research outlook. Research Gate. (2017). https://www.researchgate.net/publication/312061424_Mobile_E dge_Computing_Survey_and_Research_Outlook. Accessed 4 July 2020 114. Gangula, A., Ansari, S., Gondhalekar, M.: Survey on mobile computing security. In: 2013 European Modelling Symposium, Manchester, pp. 536–542 (2013) 115. Baheti, R., Gill H.: Cyber-physical systems. (2011). http://ieeecss.org/sites/ieeecss/files/2019-07/IoCT-Part3-02Cyber physicalSystems.pdf . Accessed 10 July 2020 116. Lee, J., Bagheri, B., Kao, H.-A.: A cyber-physical systems architecture for industry 4.0-based manufacturing systems. Manuf. Lett. 3, 18–23 (2015) 117. Khaitan, S., McCalley, J.D.: Design techniques and applications of cyberphysical systems: a survey. IEEE Syst. J. 9(2), 350–365 (2015) 118. Tao, F., Qi, Q., Wang, L., Nee, A.Y.C.: Digital twins and cyber–physical systems toward smart manufacturing and industry 4.0: correlation and comparison. Engineering. 5, 653–661 (2019) 119. Grimes, S.: Big Data: avoid ‘Wanna V’ confusion. Information Week (2013) 120. Alexandru, A., Alexandru, C.A., Coardos, D., Tudora, E.: Big Data: concepts, technologies and applications in the public sector. Int. J. Comput. Inf. Eng. 10(10), 1670–1676 (2016) 121. IBM: What is Big Data? https://www.ibm.com/analytics/hadoop/ big-data-analytics. Accessed 16 July (2020) 122. Reis, M.S., Gins, G.: Industrial process monitoring in the Big Data/Industry 4.0 era: from detection, to diagnosis, to prognosis. PRO. 5(35) (2017) 123. Li, A., Liu, Y.-Y.: Controlling network dynamics. Adv. Complex Syst. 22(07n08), 1950021 (2019) 124. Antsaklis, P., Baillieul, J.: Special issue on technology of networked control systems. Proc. IEEE. 95(1), 5–8 (2007) 125. Murray, R.M., Åström, K.J., Boyd, S.P., Brockett, R.W., Stein, G.: Future directions in control in an information-rich world. IEEE Control. Syst. Mag. 20–33 (2003) 126. Sandberg, H., Amin, S., Johansson, K.H.: Cyberphysical security in networked control systems: an introduction to the issue. IEEE Control. Syst. Mag. 35(1), 20–23 (2015)
27
Infrastructure and Complex Systems Automation
127. Zhang, X.-M., Han, Q.-L., Ge, X., Ding, D., Ding, L., Yue, D., Peng, C.: Networked control systems: a survey of trends and techniques. IEEE/CAA J Automat. Sin. 7(1), 1–17 (2020) 128. Zhan, Y., Xia, Y., Vasilakos, A.V.: Future directions of networked control systems: a combination of cloud control and fog control approach. Comput. Netw. 161(9), 235–248 (2019) 129. Xia, Y.: Cloud control systems. IEEE/CAA J. Automat. Sin. 2(2), 134–142 (2015) 130. Kunal, S., Saha, A., Amin, R.: An overview of cloud-fog computing: architectures, applications with security challenges. Secur. Privacy 2(4), 14 p. (2019) 131. Open Fog: OpenFog Reference Architecture for Fog Computing (2017). https://www.iiconsortium.org/pdf/OpenFog_Reference_ Architecture_2_09_17.pdf. Accessed 10 Aug 2020 132. Atlam, H.F., Walters, R.J., Wills, G.B.: Fog computing and the internet of things: a review. Big Data Cogn. Comput. 2(10) (2018) 133. Rad, B.B., Shareef, A.A.: Fog computing: a short review of concept and applications. IJCSNS Int. J. Comput. Sci. Netw. Secur. 17(11), 68–74 (2017) 134. Satyanarayanan, M.: The emergence of edge computing. Computer. 50(1), 30–39 (2017). https://doi.org/10.1109/MC.2017.9 135. Basir, R., Qaisar, S., Ali, M., Aldwairi, M., Ashraf, M.I., Mahmood, A., Gidlund, M.: Fog computing enabling industrial internet of things: State-of-the-art and research challenges. Big Data Cogn. Comput. 2(10) (2018). https://doi.org/10.3390/ bdcc2020010 136. Louis, J.-N., Caló, A., Leiviskä, K., Pongrácz, E.: Modelling home electricity management for sustainability: the impact of response levels, technological deployment & occupancy. Energ. Buildings. 119, 218–232 (2016) 137. Hietaharju, P., Ruusunen, M., Leiviskä, K., Paavola, M.: Predictive optimization of the heat demand in buildings at the city level. Appl. Sci. 9, 16 p. (2019) 138. Louis, J.-N., Caló, A., Leiviskä, K., Pongrácz, E.: Environmental impacts and benefits of smart home automation: life cycle assessment of home energy management system. In: Breitenecker, F., Kugi, A., Troch, I. (eds.) 8th Vienna International Conference on Mathematical Modelling – MATHMOD 2015, pp. 880–885. IFAC-PapersOnLine, Vienna (2015) 139. Louis, J.-N., Caló, A., Leiviskä, K., Pongrácz, E.: A methodology for accounting the co2 emissions of electricity generation in Finland – the contribution of home automation to decarbonisation in the residential sector. Int. J. Adv. Intell. Syst. 8(3&4), 560–571 (2014) 140. Hietaharju, P., Ruusunen, M., Leiviskä, K.: A dynamic model for indoor temperature prediction in buildings. Energies. 11, 1477 (2018) 141. Hietaharju, P., Ruusunen, M.: Peak load cutting in district heating network. In: Proceedings of the 9th EUROSIM Congress on Modelling and Simulation, Oulu, Finland, 12–16 September (2016) 142. Li, J., Yang, X., Sitzenfrei, R.: Rethinking the framework of smart water system: a review. Water. 12, 412 (2020) 143. Gulati, A.: How IoT is playing a key role in protecting the environment. ETTelecom News, June 13 (2017) 144. Li, S., Wang, H., Xu, T., Zhou, G.: Application study on internet of things in environment protection field. In: Yang, D. (ed.) Informatics in Control, Automation and Robotics. Lecture Notes in Electrical Engineering, vol. 133, pp. 99–106. Springer, Berlin/Heidelberg (2011) 145. Zhang, X., Huang, Z.: Research on smart environmental protection IoT application based on edge computing. In: 2019 International Conference on Computer, Network, Communication and Information Systems (CNCI 2019). 
Advances in Computer Science Research, vol. 88, pp. 546–551 (2019) 146. Ullo, S.L., Sinha, G.R.: Advances in smart environment monitoring systems using IoT and sensors. Sensors. 20, 3113 (2020)
639 147. Xu, G., Shi, Y., Sun, X., Shen, W.: Internet of Things in marine environment monitoring: a review. Sensors (Basel). 19(7), 1711 (2019) 148. Ahmad, T., Chen, H., Guo, Y., Wang, J.: A comprehensive overview on the data driven and large scale based approaches for forecasting of building energy demand: a review. Energ. Buildings. 165, 301–320 (2018) 149. Frayssinet, L., Merlier, L., Kuznik, F., Hubert, J.-L., Milliez, M., Roux, J.-J.: Modeling the heating and cooling energy demand of urban buildings at city scale. Renew. Sust. Energ. Rev. 81, 2318– 2327 (2018) 150. Tardioli, G., Kerrigan, R., Oates, M., O’Donnell, J., Finn, D.: Data driven approaches for prediction of building energy consumption at urban level. Energy Procedia. 78, 3378–3383 (2015) 151. Dkhili, N., Eynard, J., Thil, S., Grieu, S.: A survey of modelling and smart management tools for power grids with prolific distributed generation. Sustain. Energ. Grids Netw. 21 (2020) 152. Naveen, P., Ing, W.K., Danquah, M.K., Sidhu, A.D., Abu-Siada, A.: Cloud computing for energy management in smart grid – an application survey. IOP Conf. Ser.: Mater. Sci. Eng. 121 (2016) 153. Siano, P.: Demand response and smart grids – a survey. Renew. Sust. Energ. Rev. 30, 461–478 (2014) 154. Lee, S.W., Sarp, S., Jeon, D.J., Kim, J.H.: Smart water grid: the future water management platform. Desalin. Water Treat. 55, 339– 346 (2015) 155. Yea, Y., Lianga, L., Zhaoa, H., Jianga, Y.: The system architecture of smart water grid for water security. Procedia Eng. 154, 361–368 (2016) 156. Public Utilities Board Singapore: Managing the water distribution network with a smart Water Grid. Smart Water, 1(4), 13 p. (2016) 157. Mih˘ai¸ta˘ , A.S., Dupont, L., Cheryc, O., Camargo, M., Cai, C.: Evaluating air quality by combining stationary, smart mobile pollution monitoring and data-driven modelling. J. Clean. Prod. 221, 398–418 (2019) 158. Pule, M., Yahya, A., Chuma, J.: Wireless sensor networks: a survey on monitoring water quality. J. Appl. Res. Technol. 15(6), 562–570 (2017) 159. Mitchell, L.E., Crosman, E.T., Jacques, A.A., Fasoli, B., LeclairMarzolf, L., Horel, J., Bowling, D.R., Ehleringer, J.R., Lin, J.C.: Monitoring of greenhouse gases and pollutants across an urban area using a light-rail public transit platform. Atmos. Environ. 187, 9–23 (2018) 160. Smit, R., Kingston, P., Neale, D.W., Brown, M.K., Verran, B., Nolan, T.: Monitoring on-road air quality and measuring vehicle emissions with remote sensing in an urban area. Atmos. Environ. 218, 116978 (2019) 161. Kyung, C.-M. (ed.): Smart Sensors for Health and Environment Monitoring. Springer, Dordrecht (2015) 162. Wei, H., Zheng, G., Gayah, V., Li, Z.: A Survey on Traffic Signal Control Methods. ArXiv, abs/1904.08117. (2019) 163. Jácome, L., Benavides, L., Jara, D., Riofrio, G., Alvarado, F., Pesantez, M.: A Survey on Intelligent Traffic Lights, 2018 IEEE International Conference on Automation/XXIII Congress of the Chilean Association of Automatic Control (ICA-ACCA), Concepcion, pp. 1–6 (2018) 164. Soni, A., Hu, H.: Formation control for a fleet of autonomous ground vehicles: a survey. Robotics. 77, 67 (2018) 165. Zhou, L., Liang, Z., Chou, C.A.: Airline planning and scheduling: models and solution methodologies. Front. Eng. Manag. 7, 1–26 (2020) 166. Badue, C., et al.: Self-driving cars: a survey. Expert Syst. Appl. 165, 113816 (2021) 167. Kyriakides, E., Polycarpou., M. (eds.): Intelligent Monitoring, Control, and Security of Critical Infrastructure Systems. 
Springer Studies in Computational Intelligence, vol. 565. Springer, Berlin/Heidelberg (2015)
27
640 168. Brdys, M.A.: Algorithms and tools for intelligent control of critical infrastructure systems. In: Kyriakides, E., Polycarpou, M. (eds.) Intelligent Monitoring, Control, and Security of Critical Infrastructure Systems. Studies in Computational Intelligence, vol. 565. Springer, Berlin/Heidelberg (2015) 169. Filip, F.G.: A decision-making perspective for designing and building information systems. Int. J. Comput. Commun. 7(2), 264–272 (2012) 170. Zavadskas, E.K., Turskis, Z., Kildien˙e, S.: State of art surveys of overviews on MCDM/MADM methods. Technol. Econ. Dev. Econ. 20(1), 165–179 (2014) 171. R˘adulescu, C.Z., R˘adulescu, I.: An extended TOPSIS approach for ranking cloud service providers. Stud. Inform. Control. 26(2), 183–192 (2017) 172. Farshidi, S., Jansen, S., de Jong, R., Brinkkemper, S.: A decision support system for software technology selection. J. Decis. Syst. 27(1), 98–110 (2018) 173. Ani U.P.D., He, H. (M.), Tiwari, A.: Review of cybersecurity issues in industrial critical infrastructure: manufacturing in perspective. J. Cyber Secur. Technol. 1(1), 32–74 (2016). https:// doi.org/10.1080/23742917.2016.1252211 174. Montesino, R., Fenz, S.: Information security automation: how far can we go? In: 2011 Sixth International Conference on Availability, Reliability and Security, pp. 2080–2085. IEEE Computer Society (2011). https://doi.org/10.1109/ARES.2011.48 175. Ogie, R.I.: Cyber security incidents on critical infrastructure and industrial networks. In: ICCAE ‘17: Proceedings of the 9th International Conference on Computer and Automation Engineering, pp. 254–258. ACM, New York (2017) 176. Yusuf, S., Hong, J.B., Ge, M., Kim, D.S.: Composite Metrics for Network Security Analysis. Cornell University (2020). arXiv:2007.03486v2 [cs.CR] 177. Fielder, A., Panaousis, E., Malacaria, P., Hankina, C., Smeraldi, F.: Decision support approaches for cyber security investment. Decis. Support. Syst. 86, 13–23 (2016) 178. Stepanova, T.V., Zegzhda, D.P.: Large-scale systems security evolution: control theory approach. In: Proceedings of the 8th International Conference on Security of Information and Networks (2015). https://doi.org/10.1145/2799979.2799993 179. Kargarian, A., Fu, Y., Li, Z.: Distributed security-constrained unit commitment for large-scale power systems. IEEE Trans. Power Syst. 30(4), 1925–1936 (2015). https://doi.org/10.1109/ TPWRS.2014.2360063 180. Liu, C., Shahidehpour, M., Fu, Y., Li, Z.: Security-constrained unit commitment with natural gas transmission constraints power systems. IEEE Trans. 24(3), 1523–1536 (2009) 181. Hart, W.E., Murray, R.: Review of sensor placement strategies for contamination warning systems in drinking water distribution systems. J. Water Resour. Plan. Manag. 136(6), 611–619 (2010)
F. G. Filip and K. Leiviskä 182. Laracy, J.: A systems-theoretic security model for large scale complex systems applied to the US air transportation system. Int. J. Commun. Netw. Syst. Sci. 10, 75–105 (2017). https://doi.org/ 10.4236/ijcns.2017.105005 183. Chowdhury, N., Gkioulos, V.: Cyber security training for critical infrastructure protection: a literature review. Comput. Sci. Rev. 40, 100361 (2021)
Florin-Gheorghe Filip received his MSc and PhD degrees in Automation from the "Politehnica" University of Bucharest in 1970 and 1982, respectively. In 1991, he was elected a member of the Romanian Academy, and he served as its Vice-President from 2000 to 2010. He was the chair of the IFAC Technical Committee "Large-scale complex systems" (2003–2009). His scientific interests include complex systems and decision support systems. He has authored or co-authored some 350 papers and 13 monographs and edited or coedited 33 volumes.
Kauko Leiviskä received the degree of Diploma Engineer (MSc) in Process Engineering in 1975 and the Doctor of Science (Technology) degree in Control Engineering in 1982, both from the University of Oulu. He is Professor Emeritus in Control Engineering at the University of Oulu, Finland. He was the IFAC Contact Person of the Finnish NMO (2000–2020) and a member of several IFAC Technical Committees. His research interests include applications of soft computing methods in the modeling and control of industrial processes. He is the (co)author of more than 400 publications.
28 Computer-Aided Design, Computer-Aided Engineering, and Visualization

Jorge D. Camba, Nathan Hartman, and Gary R. Bertoline

Contents
28.1 Introduction ... 641
28.2 Product Lifecycle Management in the Digital Enterprise ... 642
28.3 3D Modeling ... 644
28.4 Parametric Solid Modeling ... 646
28.5 Parametric Geometry Creation Process ... 647
28.6 Electronic Design Automation (EDA) ... 648
28.7 Geometry Automation Mechanisms in the Modern CAD Environment ... 650
28.8 User Characteristics Related to CAD Systems ... 651
28.9 Visualization ... 652
28.10 Emerging Visualization Technologies: Virtual/Augmented/Mixed Reality ... 653
28.10.1 Augmented Reality ... 654
28.10.2 Virtual Reality ... 656
28.11 Conclusions and Emerging Trends ... 657
References ... 657

Abstract
This chapter is an overview of computer-aided design (CAD) and computer-aided engineering and includes elements of computer graphics, visualization, and emerging visualization technologies. Commercial three-dimensional (3D) modeling tools are dimension driven, parametric, feature based, and constraint based. This means that, when geometry is created, the user specifies numerical values and requisite geometric conditions for the elemental dimensional and geometric constraints that define the object. Many of today's modern CAD tools also operate on similar interfaces with similar geometry creation command sequences that operate interdependently to control the modeling process. Core modules include the sketcher, the solid modeling system itself, the dimensional constraint engine, the feature manager, and the assembly manager. In most cases, there is also a drawing tool, and other modules that interface with analysis, manufacturing process planning, and machining. These processes begin with the 3D geometry generated by CAD systems in the design process, or 3D models can be created as a separate process. The second half of the chapter examines emerging visualization technologies (e.g., augmented and virtual reality) and their connection to CAD/CAM. We provide an overview of the technology, its capabilities and limitations, and how it is used in industrial environments. Finally, we discuss the challenges that are hindering its implementation and adoption.

Keywords
Computer-aided design · Digital product representation · Design intent · Product lifecycle management · Product data management

J. D. Camba · N. Hartman · G. R. Bertoline
School of Engineering Technology, Purdue University, West Lafayette, IN, USA
e-mail: [email protected]; [email protected]; [email protected]
28.1 Introduction
Computer-aided design, manufacturing, and engineering (CAD/CAM/CAE) technologies continue to transform engineering and technical disciplines in unprecedented ways. The tools originally used to replace manual drafting just a few decades ago have evolved into sophisticated design solutions that support the entire product lifecycle with increasingly powerful levels of automation, simulation, connectivity, and autonomy. The recent advances in CAD/CAM and visualization technologies coupled with the development
and affordability of new hardware such as high-resolution displays, multitouch surfaces, and mixed-reality headsets are, once again, changing the engineering landscape, accelerating product development, and enhancing technical communication. From automotive and aerospace engineering to civil engineering and construction, and to electronics, biomedicine, and fashion design, CAD tools are ubiquitous across all disciplines and industries. In this chapter, we discuss the technology, methods, and practices to build 3D models for engineering applications, emphasizing parametric feature-based solid modeling technology and its corresponding modeling processes. Finally, we review emerging trends in product visualization and examine the state of the art of virtual/augmented/mixed-reality technologies in industry.
28.2 Product Lifecycle Management in the Digital Enterprise
The increased usage of digital data and tools throughout organizations is one effect of the larger digital transformation that is sweeping the industrial sector across the globe. Companies are now using machine learning, artificial intelligence, and modeling and simulation to better design and manufacture their products, and to make better business decisions. More specifically to product design and manufacturing is the concept of a model-based enterprise (MBE), which is an approach to product design, production, and support whereby a digital, three-dimensional representation of the product serves as the ubiquitous source for information communicated throughout the product’s lifecycle. In the MBE environment, the digital product model serves as a container, carrying shape definition (i.e., geometry, PMI, annotations, etc.), behavioral definition (i.e., materials, functional logic, genealogy, etc.), and contextual attributes (i.e., supply chain, in use, assembly, etc.). Since the information is embedded in the 3D model and structured in a manner that is machine readable, it can be used directly to feed processes such as CAM, CNC, and coordinate-measuring machine (CMM) applications, among others. An example of a model with graphic PMI data is illustrated in Fig. 28.1. At the core of the digital enterprise is product lifecycle management, or PLM. Commonly mischaracterized as a software tool or system itself, PLM is actually a process leveraged by the integration of enterprise systems to manage a product from concept to disposal. The general phases associated with a product’s lifecycle are illustrated in Fig. 28.2. Modern PLM technologies are designed to operate in multi-CAD environments and interface with other enterprise systems such as manufacturing execution systems (MES), extending the system to suppliers and customers, providing cloud-based solutions, and connecting with
systems engineering and configuration management. Today, the global PLM market is dominated by Siemens Teamcenter, PTC Windchill, and Dassault Systèmes 3DExperience. Other packages include SolidWorks PDM, Autodesk Vault, Fusion Lifecycle PLM, Aras Innovator, and Arena Solutions. While products have varying lifecycle spans, some may not have a need for maintaining data for extended periods of time; however, some industrial sectors, such as aerospace and defense, have product platforms and programs with lifecycles measured in decades. Understanding the characteristics of the data needed during these lifecycles is important to both adequate compliance and to minimizing costs [2, 3]. These industry sectors are also two of the most dependent users of product lifecycle management (PLM) technologies and processes, which by nature promote the use of a 3D modelbased environment and a digital, model-based product definition. However, the process of creating and maintaining this sort of model-based definition is made more difficult by the current state of the art in product definition authoring tools – their disparate proprietary data formats are incompatible. There is no homogeneous, monolithic “file” to encompass a model-based definition and the visualization and integration within a PLM system is challenging [4, 5]. Moreover, the exchange of digital product data between these different authoring and consuming tools is often incorrect, incomplete, and highly ambiguous. While a corporate directive to migrate to a single-vendor PLM toolset may seem expeditious, it is highly unlikely and incredibly costly given the organization and makeup of modern enterprises and their respective supply chains [6, 7]. Digital, model-based product definition is an enabler for effective communications and capturing of business value throughout the product lifecycle in modern enterprises. As the digital backbone that supports the modern product data enterprise has been propagated throughout the extended product lifecycle, an obvious medium of communication has evolved – the 3D model and its various derivative forms – to support their products. Not only must this channel support human-to-human communication, but it must also support human-to-machine, machine-to-human, and machine-tomachine communication as well. In a drawing-based environment, the drawing itself (particularly when paper based) contains the relevant product data and serves as a trusted information source for those using it. When drawings are shared, there is typically little worry about information being lost due to translation or exchange error in the medium [8–10]. In order to manage a product’s lifecycle, one must manage the data lifecycle that accompanies the product. Naturally, as the complexity of the product increases, so does the volume and complexity of its related data lifecycle. The advent of the Internet of Things (IoT) and the Industrial Internet of Things (IIoT), coupled with advances in cyberphysical systems and their enabling technologies are fueling
Fig. 28.1 CAD model with PMI data. (Source: NIST – MBE PMI Validation and Conformance Testing Project [1])
Fig. 28.2 General phases of a product's lifecycle: concept, design, manufacturing, service and support, and disposal
the development of new and increasingly more complex products. In many cases, these physical products become sophisticated systems, or systems of systems, that combine mechanical, electrical, and embedded software components. From a product lifecycle standpoint, being able to organize, track, and effectively manage the vast volumes of highly heterogeneous data associated to a particular aspect of a product becomes critical as well as challenging. In these scenarios, data overlaps due to duplication are common, as the same system must often be described from different perspectives. Data relationships are also common, such as connecting requirements to CAD data to test cases and simulation data. Furthermore, handling engineering changes to a product or system, or managing product variability and customization is even more difficult. Indeed, variability means that variant-
specific product configurations must be created and maintained to drive manufacturing and coordinate supply chain processes. Change management is a subset of a larger field of study called configuration management (CM) that refers to the strategies involved in identifying, evaluating, preparing, executing, and validating change. Configuration management involves “applying appropriate processes, resources, and controls, to establish and maintain consistency between product configuration information, and the product” [11]. Successful change management depends on robust change processes to ensure changes are properly executed and formally documented. Change management processes can be centralized, normalized, and to some extent automated, by leveraging the strengths of PLM technology. The key elements and tasks
Fig. 28.3 Change request form in PTC Windchill
that describe a change can be captured by the PLM system, facilitating the implementation and documentation of the corresponding processes. An example of a change request form in a commercial PLM system (i.e., PTC Windchill) is shown in Fig. 28.3. The use of embedded software and network connectivity in modern smart products enables services such as performance monitoring, predictive maintenance, and on-demand software updates. However, it also means that software libraries, source code, and the related files become key elements in the product configuration and change processes, which must be managed carefully. Software and hardware also become tightly coupled, as software must be tested against specific hardware configurations to ensure safety, compliance, and performance. A change in processes and skillsets is necessary to support the product data lifecycle. In the sociotechnical system that surrounds a product through its lifecycle, product data curators not only need to appreciate the functional domain requirements for authoring and consuming product data, but they must also understand computing architecture and communications mechanisms for dissemination; business processes for security and archival of data; and the model data translation, validation, and exchange processes for multiple authors and consumers.
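To make the idea of a centralized, partially automated change process more concrete, the short sketch below models a change request as a record that moves through an explicit workflow while keeping an auditable history. The states, transitions, and field names are illustrative assumptions only; they do not reproduce the configurable lifecycles of any particular PLM product.

```python
from dataclasses import dataclass, field

# Illustrative workflow states and transitions; commercial PLM systems
# define their own configurable lifecycles.
TRANSITIONS = {
    "draft": {"submitted"},
    "submitted": {"under_review", "rejected"},
    "under_review": {"approved", "rejected"},
    "approved": {"implemented"},
    "implemented": {"validated"},
    "validated": set(),   # terminal state
    "rejected": set(),    # terminal state
}

@dataclass
class ChangeRequest:
    """Hypothetical change-request record with an auditable state history."""
    number: str
    description: str
    affected_items: list
    state: str = "draft"
    history: list = field(default_factory=list)

    def advance(self, new_state: str, actor: str) -> None:
        # Enforce the workflow: only transitions declared above are legal.
        if new_state not in TRANSITIONS[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.history.append((self.state, actor))
        self.state = new_state

# Example usage with invented identifiers.
cr = ChangeRequest("CR-0042", "Increase fillet radius on bracket", ["BRACKET-001"])
cr.advance("submitted", "designer")
cr.advance("under_review", "change board")
cr.advance("approved", "change board")
print(cr.state, cr.history)
```

Encoding the legal transitions in data, as above, is one way a change process can be "centralized, normalized, and to some extent automated": every change follows the same path and leaves the same documentation trail.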
28.3 3D Modeling
The term “model” refers to any representation of an item, system, process, or phenomenon. Depending on the type,
models may serve different purposes. For example, financial models are used to guide investment strategies and determine the behavior of the stock market, physical models are developed to predict the path and impact of major hurricanes, and graphical models are used in many fields to visually communicate complex ideas. In a CAD context, modeling is “the spatial description and placement of objects, characters, environments and scenes with a computer system” [12]. Different methods exist to represent and produce 3D computer models. Wireframe models are the oldest and most basic representation, in which geometry is represented as a collection of vertices connected by edges. Although wireframe representations are relatively simple and can be processed and rendered quickly by the computer, they are not realistic (particularly when representing nonpolyhedral geometry) and can be ambiguous (due to the lack of face information), which makes their use very limited and unsuitable for even basic design communication. Surface models are a type of boundary representation (B-Rep) where the geometry represents the infinitely thin “boundary” that separates the inside volume of the object from the outside. This “boundary” can be described in different ways. For example, in a polygonal model, the “boundary” is defined by a set of connected flat faces (i.e., polygons), which are bound by edges, which are further bound by end vertices, respectively. As a result, the model’s geometry is approximated by a mesh of polygons. The finer the mesh, the higher the resolution of the model. An alternative approach to represent the “boundary” of the model is through a series of mathematical equations that generate precise curves and surfaces. A common type of curve to produce 3D surface models consist of nonuniform rational B-splines (NURBS). Surface models are created explicitly by manipulating the properties of the entities (vertices, edges, and faces, in the case of a polygonal model; and curves, in the case of a surface model) of the model in 3D space. Surface models allow for the representation of complex geometry and sophisticated organic shapes, which make them appropriate for product visualization and animation applications. However, information about the physical properties of the object represented by the model is not available, which makes these representations unsuitable for engineering analysis and other calculations. Surface models can be saved to a variety of formats. Some file formats are exclusive to specific software packages (proprietary formats), while others are portable, which means they can be exchanged among different programs. Two common portable formats are “.obj” (short for object) introduced by Alias for high-end computer animation and visual effects productions, and the drawing interchange format (DXF) developed by Autodesk and widely used to exchange models between CAD and 3D animation programs.
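The polygonal boundary representation described above can be illustrated with a few lines of code: a mesh is stored as a list of shared vertices plus faces that index into that list, and it can be written out in the Wavefront OBJ format mentioned earlier, whose "v x y z" and "f i j k" records are 1-indexed. The pyramid geometry is only an illustrative example.

```python
# A minimal polygonal (boundary) representation: shared vertices plus
# triangular faces that index into the vertex list (1-indexed, as in OBJ).
vertices = [
    (0.0, 0.0, 0.0),  # 1: base corner
    (1.0, 0.0, 0.0),  # 2
    (1.0, 1.0, 0.0),  # 3
    (0.0, 1.0, 0.0),  # 4
    (0.5, 0.5, 1.0),  # 5: apex
]
faces = [
    (1, 2, 5), (2, 3, 5), (3, 4, 5), (4, 1, 5),  # four sides
    (1, 4, 3), (1, 3, 2),                        # square base split into two triangles
]

# Write the mesh as a portable Wavefront OBJ file.
with open("pyramid.obj", "w") as f:
    for x, y, z in vertices:
        f.write(f"v {x} {y} {z}\n")
    for a, b, c in faces:
        f.write(f"f {a} {b} {c}\n")
```

The finer the face list, the higher the resolution of the model, but the representation remains an approximation of the true surface, which is why precise NURBS curves and surfaces are preferred when exact geometry matters.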
Solid models provide a complete, accurate, unambiguous, and topologically valid representation of the object, including its physical properties. These characteristics make solid models appropriate for a wide range of engineering applications, including analysis and simulation studies. In fact, solid models are the core representation technology used in modern CAD systems to create and manipulate engineering models. Unlike the boundary representation methods described above where only the outside surface of the model is represented, volumetric models use voxels to represent the inside of an object using a discretely sampled 3D set. Volumetric models are common in medical visualization applications, for example, where MRI or CT scan data are used to capture the internal geometry. Additional modeling techniques have been developed for specific purposes. For example, particle-system modeling is an approach used to represent various phenomena such as fire, snow, clouds, smoke, etc. which do not have a stable and well-defined shape. Such phenomena would be very difficult to model with surface or solid modeling techniques because they are composed of large amounts of moleculesized particles rather than discernible surfaces. In particlesystem modeling, the animator creates a system of particles, i.e., graphical primitives such as points or lines, and defines the physical attributes of the particles. These attributes control how the particles move, how they interact with each other and with the environment, and how they are rendered. Dynamics fields can also be used to control the particles’ motion.
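The particle-system idea can be sketched in a few lines: each particle carries a position, a velocity, and an age, and a global dynamics field updates them every frame. The gravity, drag, lifetime, and emission values below are arbitrary illustrative choices, not parameters of any particular animation package.

```python
import random

# Minimal particle system: emit particles, integrate simple dynamics,
# retire particles that exceed their lifetime. All constants are
# illustrative assumptions.
GRAVITY = (0.0, 0.0, -9.8)
DRAG = 0.02
LIFETIME = 2.0  # seconds

def emit(n):
    return [{"pos": [0.0, 0.0, 0.0],
             "vel": [random.uniform(-1, 1), random.uniform(-1, 1), random.uniform(4, 6)],
             "age": 0.0} for _ in range(n)]

def step(particles, dt):
    alive = []
    for p in particles:
        for i in range(3):
            p["vel"][i] += GRAVITY[i] * dt    # gravity field
            p["vel"][i] *= (1.0 - DRAG)       # simple drag
            p["pos"][i] += p["vel"][i] * dt   # integrate position
        p["age"] += dt
        if p["age"] < LIFETIME:
            alive.append(p)
    return alive

particles = emit(100)
for _ in range(45):              # simulate 45 frames at 30 fps
    particles = step(particles, 1.0 / 30.0)
print(len(particles), "particles still alive")
```

Rendering attributes (size, color, transparency) would be attached to each particle in the same way, which is why such phenomena are controlled through attributes and fields rather than explicit surfaces.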
Procedural modeling includes a number of techniques to create 3D models from sets of rules. L-systems, fractals, and generative modeling are examples of procedural modeling techniques since they apply algorithms for producing scenes; for instance, a mountainous terrain model can be produced by plotting an equation of fractal mathematics that recursively subdivides and displaces a geometric patch. For the past few years, generative modeling has been gaining popularity in the research community, particularly because of its potential applications in design and engineering. Generative modeling is an iterative process that involves the automatic generation of geometry by defining a set of design goals. The user inputs the criteria and the conditions desired for these goals, and an algorithm generates a number of solutions that meet those criteria. These criteria may be related to design performance, materials and manufacturing methods, and spatial requirements, among others. The resulting geometry is context aware and usually organic in form, reminiscent of natural lattice structures. Generative design tools can be used for topology optimization [13], exploration of design spaces [14], and as a creative tool to automatically generate design configurations [15]. An example of the generative design functionality applied to design space exploration in a commercial CAD system is shown in Fig. 28.4. When a physical model of an object already exists, it is possible to create a corresponding 3D model using various digitization methods. Examples of digitization tools include 3D digitizing pens and laser contour scanners. Each time the tip of a 3D pen touches the surface of the object to be
Fig. 28.4 Exploration of design alternatives in a commercial CAD system. The designs were produced automatically using generative techniques
digitized, the location of a point is recorded. In this way it is possible to compile a list of 3D coordinates that represent key points on the surface. These points form a point cloud that is used by the 3D modeling software to build the corresponding digital mesh, which is often a polygonal surface. In laser contour scanning the physical object is placed on a turntable, a laser beam is projected onto its surface, and the distance the beam travels to the object is recorded. After each 360◦ rotation a contour curve is produced and the beam is lowered a bit. When all contour curves have been generated, the 3D software builds a lofted surface.
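A rough sketch of the contour-scanning idea described above follows: scanned points are grouped into horizontal slices, and each slice is ordered by angle around its centroid to approximate a closed contour curve that a modeler could later loft. The synthetic cylinder data stands in for real scanner output, and the slicing scheme is a simplification of what commercial reverse-engineering software does.

```python
import math
import random

# Synthetic "scan" of a roughly cylindrical object (stand-in for scanner data).
cloud = [(math.cos(a) + random.gauss(0, 0.01),
          math.sin(a) + random.gauss(0, 0.01),
          z * 0.1)
         for z in range(20)
         for a in [random.uniform(0, 2 * math.pi) for _ in range(200)]]

def contours(points, slice_height=0.1):
    """Group points into horizontal slices and order each slice by angle."""
    slices = {}
    for x, y, z in points:
        slices.setdefault(round(z / slice_height), []).append((x, y, z))
    curves = []
    for k in sorted(slices):
        pts = slices[k]
        cx = sum(p[0] for p in pts) / len(pts)
        cy = sum(p[1] for p in pts) / len(pts)
        # Sort around the centroid to obtain an ordered, closed contour.
        pts.sort(key=lambda p: math.atan2(p[1] - cy, p[0] - cx))
        curves.append(pts)
    return curves

curves = contours(cloud)
print(len(curves), "contour curves, first has", len(curves[0]), "points")
```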
28.4 Parametric Solid Modeling
Solid models can be created using a number of techniques. Traditionally, solid models used constructive solid geometry (CSG), where basic geometric primitives (e.g., cube, sphere, cone, cylinder, etc.) are combined using Boolean operators. Three Boolean operations are commonly used: addition or union, subtraction, and intersection. The addition operation combines two solids into a single, unified solid; the subtraction operation takes away from one solid the space occupied by another solid; and the intersection operation produces a solid consisting of only the volume shared by two overlapping solids. Complex geometry is built by combining these Boolean operations sequentially as a tree structure, where the output of one operation is used as the input for the next. Traditionally, CSG approaches were efficient for the storage of the database. However, due to the inherent limitations of the primitive shapes, and the difficulty to manipulate individual surfaces, CSG has evolved into more sophisticated approaches such as the ability to create swept solids on demand where needed, and combine them with existing geometry. Today’s 3D modeling tools implement a variety of technologies. They are dimension driven, parametric, feature based, and constraint based. These terms have come to be synonymous when describing modern CAD systems [16]. The term parametric means that, when geometry is created, the user specifies numerical values and requisite geometric conditions for the elemental dimensional and geometric constraints that define the object; for example, a rectangular prism would be defined by parameter dimensions that control its height, width, and depth. In addition, many of today’s modern CAD tools also operate on similar interfaces with similar geometry creation command sequences [17]. Generally, most constraint-based CAD tools consist of software modules that operate interdependently to control the 3D modeling process. They include core modules such as the sketcher, the solid modeling system itself, the dimensional
constraint engine, the feature manager, and the assembly manager [18]. In most cases, there is also a drawing tool, and other modules that interface with analysis, manufacturing process planning, and machining. The core modules are used in conjunction with each other (or separately as necessary) to develop a 3D model of the desired product. In so doing, most modern CAD systems will produce the same kinds of geometry, irrespective of the software interface they possess. Many of the modern 3D CAD tools combine constructive solid geometry (CSG) and boundary representation (B-rep) modeling functionality to form hybrid 3D modeling packages [16, 18]. Constraint-based CAD tools create a solid model as a series of features that correspond to operations that would be used to create the physical object. Features are geometric abstractions that can be created dependently or independently of each other with respect to the effects of modifications made to the geometry. If features are dependent, then an update to the parent feature will affect the child feature. This is known as a parent–child reference, and these references are typically at the heart of most modeling processes performed by the user [18]. The geometry of each feature is controlled by the use of modifiable constraints that allow for the dynamic update of model geometry as the design criteria change. When a parent feature is modified, it typically creates a ripple effect that yields changes in the child features. This is one example of associativity – the fact that design changes propagate through the geometric database and associated derivatives of the model due to the interrelationships between model features. This dynamic editing capability is also reflected in assembly models that are used to document the manner in which components of a product interact with each other. Modifications to features contained in a part will be displayed in the parent part as well as in the assembly that contains the part. Any working drawings of the part or assembly will also update to reflect the changes. This is another example of associativity. Feature-based modeling enables a more intuitive approach to modeling by making design intent explicit and easy to understand. It also links modeling with engineering and manufacturing. Different classifications of features have been proposed. Some of them, such as material and analysis features, are nongeometric. These types of features are useful for many engineering processes, but not directly relevant to 3D CAD modeling. CAD systems will often organize features based on their type. However, there is not a consistent agreed-upon classification across CAD systems. For example, based on their effect on geometry, features can be classified as positive (features that add new material to the part) and negative (features that remove material from the part). Some features (replication features) are used to replicate existing geometry by creating patterns of existing features in the part, whereas
dress-up features add cosmetic and finishing geometry (such as fillets and rounds). Based on their functional characteristics, features can be classified as design features (features which serve a particular purpose related to the function of the part), form features (features that represent certain geometric aspects of a part, with no implied functional purpose or manufacturing method), and manufacturing features (geometric characteristics that result from the application of a specific manufacturing operation). A critical issue in the use of constraint-based CAD tools is the planning that happens prior to the creation of the model [16]. This is known as design intent. Much of the power and utility of constraint-based CAD tools is derived from the fact that users can edit and redefine part geometry as opposed to deleting and recreating it. This requires a certain amount of thought with respect to the relationships that will be established between and within features of a part and between components in an assembly. The ways in which the model will be used in the future and how it could potentially be manipulated during design changes are both factors to consider when building the model. The manner in which the user expects the CAD model to behave under a given set of circumstances, and the effects of that behavior on other portions of the same model or on other models within the assembly, is known as design intent [16, 18]. The eventual use and reuse of the model will have a profound effect on the relationships that are established within the model as well as the types of features that are used to create it, and vice versa.
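The Boolean operations described at the start of this section can be illustrated with implicit (signed-distance) representations of the primitives, where union, subtraction, and intersection reduce to pointwise min/max combinations of distance values. This is only one way to realize CSG, chosen here because it fits in a few lines; it is not how commercial B-rep modeling kernels implement Boolean operations.

```python
import math

# CSG on implicit solids: a point is inside a solid where its signed
# distance is negative. Union/intersection/subtraction become pointwise
# min/max combinations (illustrative functional approach only).
def sphere(cx, cy, cz, r):
    return lambda x, y, z: math.dist((x, y, z), (cx, cy, cz)) - r

def box(hx, hy, hz):  # axis-aligned box centered at the origin (half-extents)
    return lambda x, y, z: max(abs(x) - hx, abs(y) - hy, abs(z) - hz)

def union(a, b):        return lambda x, y, z: min(a(x, y, z), b(x, y, z))
def intersection(a, b): return lambda x, y, z: max(a(x, y, z), b(x, y, z))
def subtraction(a, b):  return lambda x, y, z: max(a(x, y, z), -b(x, y, z))

# A block with a spherical pocket cut into its top face.
part = subtraction(box(1.0, 1.0, 0.5), sphere(0.0, 0.0, 0.5, 0.4))

inside = part(0.8, 0.8, 0.0) < 0     # True: solid material remains here
in_pocket = part(0.0, 0.0, 0.4) < 0  # False: material was removed here
print(inside, in_pocket)
```

Because each combination is itself a function, complex parts are built exactly as the text describes: Boolean operations are composed sequentially as a tree, with the output of one operation feeding the next.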
28.5 Parametric Geometry Creation Process
In modern constraint-based CAD systems, geometry is created by using the modules and functionality described above, especially the sketcher, the dimensional constraint engine, the solid modeling system, and the feature manager. To create solid geometry, the user considers their design intent and proceeds to make the first feature of the model. The most common way to create feature geometry within a part file is to sketch the main feature’s cross-section on a datum plane (or flat planar surface already existing in the part file), dimension and constrain the sketched profile, and then apply a feature form to the cross-section. Due to the inherent inaccuracies of sketching geometric entities on a computer screen with a mouse, CAD systems typically employ a constraint solver. This portion of the software is responsible for resolving the geometric relationships and general proportions between the sketched entities and the dimensions that the user applies to them, which is an example of automation in the geometric modeling process. The final stage of geometry creation is typically the application of a feature form, which is what
Fig. 28.5 Sketch geometry created on a plane on a 3D model
gives a sketch its depth element. This model creation process is illustrated in Fig. 28.5. This automated process of capturing dimensional and parametric information as part of the geometry creation process is what gives modern CAD systems their advantage over traditional engineering drawing techniques in terms of return on investment and efficiency of work. Without this level of automation, CAD systems would be nothing more than an electronic drawing board, with the user being required to recreate a design from scratch each time. As the user continues to use the feature creation functions in the CAD system, the feature list continues to grow. It lists all of the features used to create a model in chronological order. The creation of features in a particular order also captures design intent from the user, since the order in which geometry is created will have a final bearing on the look (and possibly the function) of the object. In most cases, the feature tree is also the location where the user would go to consider modifying the order in which the model's features were created (and rebuilt whenever a change is made to the topology of the model). The notion of maintaining the sequence of steps and making it available to the user for editing the model is known as history-based modeling. As a recent alternative approach to history-based parametric modeling, the direct modeling paradigm allows users to create geometry without keeping a log of the modeling sequence or any parent–child relationships between features. Instead, geometric entities in the model can be manipulated directly however the designer desires. This type of modeling eliminates parent–child relationships entirely, which may be effective for certain tasks, but it also eliminates the ability to add design intelligence to and capture design intent within the 3D model. As users become more proficient at using a constraint-based CAD system to create geometry, they adopt their own mental model for interfacing with the software [19]. This mental model typically evolves to match the software
Fig. 28.6 Expert mental model of modern CAD system operation: software processes, design considerations, and user characteristics surrounding geometry creation and geometry manipulation
interface metaphor of the CAD system. In so doing, they are able to leverage their expertise regarding the operation of the software to devise highly sophisticated methods for using the CAD systems. This level of sophistication and automation by the user is due in some part to the nature of the constraint-based CAD tools. It is also what enables the user to dissect geometric models created by others (or themselves at a prior time) and reuse them to develop new or modified designs. Effective use of the tools requires that the user’s own knowledge base comprised of the conceptual relationships regarding the capture of design intent in the geometric model and the specific software skills necessary to create geometry be used. This requires the use of an object–action interface model and metaphor on the part of the user in order to be effective. This interface model correlates the objects and actions used in the software with those used in the physical construction of the object being modeled. If a person is to use the CAD tool effectively, these two sets of models should be similar. In relation to the object–action interface model is the idea of a user’s mental model of the software tool. This mental model is comprised of semantic knowledge of how the CAD system operates, the relationships between the different modules and commands, and syntactic knowledge that is comprised of specific knowledge about commands and the interface (Fig. 28.6). The process of creating 3D geometry in this fashion allows the user to automate the capture of their design intent. Semantic and syntactic knowledge are combined once in the initial creation of the model to develop the intended shape of the object being modeled. This encoding of design knowledge allows the labor of creating geometry to be stored and used again when the model is used in the future. This labor storage is manifested within the CAD system inside the geometric
features themselves, and the script for playing back that knowledge-embedding process is captured within the feature manager as described in Sect. 28.2. It is generally common knowledge within the modern 3D modeling environment that a user will likely work with models created by other people and vice versa. As such, having a predictable means to include design intent in the geometric model is critical for the reuse of existing CAD models within an organization.
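A toy sketch of this history-based, parametric behavior is given below: named parameters drive an ordered feature list, and rebuilding the model replays the features in chronological order so that dependent (child) geometry updates automatically when a driving dimension changes. The "features" here only track a volume as a stand-in for real geometry, and all names and values are invented for illustration.

```python
import math

# Toy history-based parametric model: named parameters drive an ordered
# feature list; rebuilding replays the features so that children update
# when a parent dimension changes. Real CAD kernels build geometry;
# this stand-in only accumulates a volume.
params = {"width": 60.0, "depth": 40.0, "height": 20.0, "hole_dia": 10.0}

def base_pad(state, p):       # parent feature: rectangular pad
    state["volume"] = p["width"] * p["depth"] * p["height"]

def through_hole(state, p):   # child feature: depends on the pad height
    state["volume"] -= math.pi * (p["hole_dia"] / 2) ** 2 * p["height"]

feature_tree = [("Pad.1", base_pad), ("Hole.1", through_hole)]  # creation order

def rebuild(tree, p):
    state = {}
    for name, feature in tree:   # replay the history in chronological order
        feature(state, p)
    return state

print(rebuild(feature_tree, params)["volume"])

# A design change: edit one driving dimension and rebuild; the hole
# (child feature) updates because it references the same height parameter.
params["height"] = 30.0
print(rebuild(feature_tree, params)["volume"])
```

The stored feature list plays the role of the "script" mentioned above: the labor of creating geometry is encoded once and replayed whenever the model, or a design variant of it, is regenerated.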
28.6 Electronic Design Automation (EDA)
The design of intelligent products and systems that integrate electrical, mechanical, and software characteristics requires the integration and interoperability of the various design and engineering workflows and tools. Traditional mechanical computer-aided design (MCAD) tools must work together with electronic computer-aided design (ECAD) systems to enable a unified approach to design. Electronic design automation (EDA) refers to the software tools for electrical or electronic computer-aided design (ECAD) of electronic systems ranging from printed circuit boards (PCBs) to integrated circuits (see Fig. 28.7). EDA can work on digital circuits and analog circuits. In this section, we focus on EDA tools for digital integrated circuits because they are more prominent in the current EDA industry and occupy the major portion of the EDA market. For analog and mixed-signal circuit design automation, readers are referred to [20] and [21] for more details. Note that we can only briefly introduce the key techniques in EDA. Interested readers can refer to [22–24] for additional information. The majority of the development effort for CAD techniques is devoted to the design of single-purpose processors using semicustom or programmable logic device (PLD) integrated circuit technologies. A typical step-by-step design flow is shown in Fig. 28.8. The synthesis stage includes behavioral synthesis, RTL synthesis, and logic synthesis. The basic problem of behavioral synthesis or high-level synthesis is the mapping of a behavioral description of a circuit into a cycle-accurate register transfer level (RTL) design consisting of a data path and a control unit. Designers can skip behavioral synthesis and directly write RTL codes for circuit design. This design style is facing increasing challenges due to the growing complexity of circuit design. The next step after behavioral synthesis is RTL synthesis. RTL synthesis performs optimizations on the registertransfer-level design. Input to an RTL synthesis tool is a Verilog or VHDL design that includes the number of data path components, the binding of operations/variables/transfers to data path components, and a controller that contains the detailed schedule of computational, input/output (I/O), and memory operations.
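To give a flavor of what behavioral (high-level) synthesis must decide, the sketch below computes an as-soon-as-possible (ASAP) schedule for a tiny dataflow graph, assigning each operation to the earliest control step permitted by its data dependences. The operation latencies and the example expression are illustrative assumptions; real schedulers also handle resource constraints, chaining, and pipelining.

```python
# ASAP scheduling of a small dataflow graph, one of the core decisions a
# behavioral synthesis tool makes when deriving a cycle-accurate schedule.
LATENCY = {"*": 2, "+": 1}   # assumed latencies in control steps

# y = (a * b) + (c * d) + e, written as node: (operation, [predecessor nodes])
dfg = {
    "m1": ("*", []),           # a * b
    "m2": ("*", []),           # c * d
    "a1": ("+", ["m1", "m2"]),
    "a2": ("+", ["a1"]),       # ... + e
}

def asap(graph):
    start = {}
    def schedule(node):
        if node in start:
            return start[node]
        op, preds = graph[node]
        # Earliest start = latest finish time among all predecessors.
        earliest = max((schedule(p) + LATENCY[graph[p][0]] for p in preds), default=0)
        start[node] = earliest
        return earliest
    for n in graph:
        schedule(n)
    return start

for node, step in sorted(asap(dfg).items(), key=lambda kv: kv[1]):
    print(f"{node}: starts at control step {step}")
```

The resulting schedule, together with a binding of operations to data path components, is exactly the kind of information that appears in the RTL description handed to the next synthesis stage.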
Fig. 28.7 ECAD system for PCB design
Fig. 28.8 Typical design flow: the synthesis stages (behavioral synthesis, RTL synthesis, and logic synthesis) are followed by the physical design stages (partitioning and floorplan, placement, and routing); the semicustom/ASIC flow then proceeds to GDSII generation and IC fabrication, while the PLD flow proceeds to bitstream generation and programming of the PLD
Logic synthesis is the task of generating a structural view of the logic-level implementation of the design. It can take the generic Boolean network generated from the RTL
synthesis and perform logic optimization on top of it. Such optimizations include both sequential logic optimization and combinational logic optimization.
In terms of physical design, the design stages include partitioning, floorplan, placement, and routing. The input of physical design is a circuit netlist, and the output is the layout of the circuit. Partitioning and floorplan are optional design stages. They are required only when the circuit is highly complex. Partitioning is usually required for multimillion-gate designs. For such a large design, it is not feasible to layout the entire chip in one step due to the limitation of memory and computation resources. Instead, the circuit will be first partitioned into subcircuits (blocks), and then these blocks can go through a process called floorplan to set up the foundation of a good layout. Floorplan will select a good layout alternative for each block and for the entire chip as well. Floorplan will consider the area and the aspect ratio of the blocks, which can be estimated after partitioning. The number of terminals (pins) required by each block and the nets used to connect the blocks are also known after partitioning. In order to complete the layout, we need to determine the shape and orientation of each block and place them on the surface of the layout. These blocks should be placed in such a way as to reduce the total area of the circuit. Meanwhile, the pin-to-pin wire delay needs to be optimized. Floorplan also needs to consider whether there is sufficient routing area between the blocks so that the routing algorithms can complete the routing task without experiencing routing congestions. Placement is a key step in the physical design flow. It deals with the similar problem as floorplan – determining the positions of physical objects (logic blocks and/or logic cells) on the layout surface. The difference is that, in placement, we can deal with a large number of objects (up to millions of objects) and the shape of each object is predetermined and fixed. Therefore, placement is a scaled and restricted version of the floorplan problem and is usually applied within regions created during floorplanning. Because of the importance of placement, an extensive amount of research has been carried out in the CAD community. Placement algorithms can be mainly categorized into simulated-annealing-based (e.g., [25, 26]), partitioning-based (e.g., CAPO [27]), analytical (e.g., BonnPlace [28]), and multilevel placement (e.g., mPL [29]). After placement, the routing stage determines the geometric layouts of the nets to connect all the logic blocks and/or logic cells together. Routing is the last step in the design flow before either creating the GDSII (graphic data system II) file for fabrication in the semicustom/ASIC design style or generating the bitstream to program the PLD. GDSII is a database file format used as the industry standard for integrated circuit layout data exchange. The objective of routing can be reducing the total wire length, minimizing the critical path delay, minimizing power consumption, or improving manufacturability.
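As a small illustration of the placement step, the sketch below runs simulated-annealing placement on a toy problem: cells are assigned to grid slots, and random swaps are accepted or rejected so as to reduce the total half-perimeter wirelength (HPWL) of the nets. The netlist, grid, and cooling schedule are invented for illustration; production placers handle millions of cells with far more sophisticated cost models.

```python
import math
import random

# Toy simulated-annealing placement minimizing half-perimeter wirelength.
random.seed(1)
cells = ["c0", "c1", "c2", "c3", "c4", "c5"]
nets = [("c0", "c1"), ("c1", "c2", "c3"), ("c3", "c4"), ("c4", "c5", "c0")]
slots = [(x, y) for x in range(3) for y in range(2)]    # 3 x 2 placement grid

place = dict(zip(cells, random.sample(slots, len(cells))))

def hpwl(placement):
    total = 0
    for net in nets:
        xs = [placement[c][0] for c in net]
        ys = [placement[c][1] for c in net]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

temp, cost = 5.0, hpwl(place)
while temp > 0.01:
    a, b = random.sample(cells, 2)
    place[a], place[b] = place[b], place[a]         # propose a swap
    new_cost = hpwl(place)
    if new_cost <= cost or random.random() < math.exp((cost - new_cost) / temp):
        cost = new_cost                             # accept (possibly uphill) move
    else:
        place[a], place[b] = place[b], place[a]     # undo the swap
    temp *= 0.995                                   # cool down

print("final wirelength:", cost, place)
```

Accepting occasional uphill moves at high temperature is what lets annealing-based placers escape poor local minima, at the cost of long run times on large designs.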
Technology computer-aided design (TCAD) is an important branch in CAD which carries out numeric simulations of semiconductor processes and devices. Process TCAD takes a process flow, including essential steps such as ion implantation, diffusion, etching, deposition, lithography, oxidation, and silicidation, and simulates the active dopant distribution, the stress distribution, and the device geometry. Mask layout is also an input for process simulation. The layout can be selected as a linear cut in a full layout for a twodimensional simulation or a rectangular cut from the layout for a three-dimensional simulation. Process TCAD produces a final cross-sectional structure. Such a structure is then provided to device TCAD for modeling the device electrical characteristics. The device characteristics can be used to either generate the coefficients of compact device models or develop the compact models themselves. These models are then used in circuit simulators to model the circuit behavior. Because of the detailed physical modeling involved, TCAD is mostly used to aid the design of single devices. TCAD has become a critical tool in the development of next-generation integrated circuit processes and devices. The challenge for TCAD is that the physics and chemistry of fabrication processes are still not well understood. Therefore, TCAD cannot replace experiments except in very limited applications so far. Reference [30] summarizes applications of TCAD in four areas: • Technology selection: TCAD tools can be used to eliminate or narrow technology development options prior to starting experiments. • Process optimization: tune process variables and design rules to optimize performance, reliability, cost, and manufacturability. • Process control: aid the transfer of a process from one facility to another (including from development to manufacturing) and serve as reference models for diagnosing yield issues and aiding process control in manufacturing. • Design optimization: optimize the circuits for cost, power, performance, and reliability.
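For a sense of the process-simulation side of TCAD described above, the fragment below integrates one-dimensional dopant diffusion (Fick's second law) with an explicit finite-difference scheme. The diffusivity, grid, time step, and initial implant profile are arbitrary illustrative numbers, not calibrated process data, and a real process simulator couples many such models in two or three dimensions.

```python
# Explicit finite-difference integration of 1D dopant diffusion,
# dC/dt = D * d2C/dx2, as a toy stand-in for process TCAD.
NX, DX = 100, 1e-8          # 100 grid points, 10 nm spacing (metres) - assumed
D = 1e-18                   # assumed diffusivity, m^2/s
DT = 0.4 * DX * DX / D      # time step within the explicit stability limit
STEPS = 2000

# Shallow "implant": dopant concentrated in the first few cells (per m^3).
conc = [1e25 if i < 5 else 1e21 for i in range(NX)]

for _ in range(STEPS):
    new = conc[:]
    for i in range(1, NX - 1):
        new[i] = conc[i] + D * DT / (DX * DX) * (conc[i + 1] - 2 * conc[i] + conc[i - 1])
    new[0], new[-1] = new[1], new[-2]   # zero-flux boundary conditions
    conc = new

print("surface concentration:", f"{conc[0]:.3e}",
      "concentration at mid-depth:", f"{conc[NX // 2]:.3e}")
```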
28.7 Geometry Automation Mechanisms in the Modern CAD Environment
Computer-aided design systems are used in many places within a product design environment, but each scenario tends to have a common element: the need to accurately define the geometry which represents an object. This could be in the design engineering phase to depict a product, or during the manufacturing planning stage for the design of a fixture to hold a workpiece. Recently, these CAD systems have been coupled with product data management (PDM) systems to track the ongoing changes through the lifecycle of a
product. By so doing, the inherent use of the CAD system can be tracked, knowledge about the design can be stored, and permissions can be granted to appropriate users of the system. While the concept of concurrent engineering is not new, contemporary depictions of that model typically show a CAD system (and often a PDM system) at the center of the conceptual model, disseminating embedded information for use by the entire product development team throughout the product lifecycle [16].

To use a modern CAD system effectively, one must understand the common inputs and outputs of the system, typically in light of a concurrent and distributed design and manufacturing environment. Input usually takes the form of numerical information regarding size, shape, and orientation of geometry during the product model creation process. This information generally comes directly from the user responsible for developing the product; however, it is not uncommon to get CAD input data from laser scanning devices used for quality control and inspection, automated scripts for generating seed geometry, or translated files from other systems. As with other types of systems, the quality of the information put into the system greatly affects the quality of the data coming out of the system. In today's geographically dispersed product development environment, CAD geometry is often exported from the CAD system in a neutral file format (e.g., IGES or STEP) to be shared with other users up and down the supply chain. Detailed two-dimensional drawings are often derived from the 3D model in a semiautomated fashion to document the product and to communicate with suppliers. In addition, 3D CAD data is generally shared in an automated way (due to integration between digital systems) with structural and manufacturing analysts for testing and process planning.

Geometry creation within CAD systems is also automated for certain tasks, especially those of a repetitive nature. The use of geometry duplication functions often involves copying, manipulating, or moving selected entities from one area to another on a model. This reduces the amount of time that it takes a user to create their finished model. However, it is critical that the user be mindful of parent–child references as described previously. While these references are elemental to the very nature of modern CAD systems, they can make the modification and reuse of design geometry tenuous at a later date, thereby negating any positive effects of a user having copied geometry in an effort to save time. CAD systems provide specialized automation mechanisms to work with part families and configurations. These mechanisms leverage the fact that many products are slight variations of the same existing design. Part families and configuration tools allow users to build a template model for a "base part" and define what geometric characteristics will vary between the different model instances in the part family. Configurations can be created manually, by describing the parameters of each model, or through a spreadsheet type of
table where each row specifies the geometric characteristics of one model instance. Likewise, they can be defined at the part level, when building the geometry of a single part, or at the assembly level, when building variations of assembly models with different components. When used effectively, part families and configurations can significantly increase productivity, simplify models, and promote model reuse.

Standard parts and standard libraries are an additional mechanism available in CAD to enable automation. Libraries are organized repositories of features and models that can be added to a design, typically by dragging and dropping the desired item onto the geometry and selecting the desired parameters. Some libraries are fully integrated within the CAD system. Others are custom made, or provided by a third-party company. Libraries allow users to create models and features once and reuse the geometry when needed. An example of a library of standard parts in a commercial CAD system (i.e., SolidWorks) is shown in Fig. 28.9.

Finally, geometry automation also exists in the form of scripting and programming functionality to generate geometry through the definition of algorithms. This scenario requires extensive knowledge of the CAD system's application programming interface (API), but is particularly helpful when it is necessary to produce geometry with a high degree of accuracy and around which exists a fair amount of tribal knowledge and corporate practice. A set of parameters is created that represents corporate knowledge to be embedded into the geometry to control its shape and behavior, and the CAD system then generates the desired geometry based on user inputs (Figs. 28.10 and 28.11). In the example of the airfoil, aerodynamic data has been captured by an engineering analyst and input into a CAD system using a knowledge capture module of the software. These types of modules allow a user to configure the behavior of the CAD system when it is supplied with a certain type of data in the requisite format. This data represents the work of the analyst, which is then used to automate the creation of the 3D geometry to represent the airfoil. Such techniques are replacing the manual geometric modeling tasks performed by users on designs that require a direct tie to engineering analysis data, or on those designs where a common geometry is shared among various design options.
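The spreadsheet-driven configuration approach described above can be sketched in a few lines of Python. The cad_api module and its open_template, set_parameter, rebuild, and save_as calls are hypothetical placeholders for a real CAD system's automation interface; the bolt-family table is likewise an assumed example, included only to show how one row of a table maps to one model instance.

# Sketch of table-driven configuration of a part family: each CSV row
# overrides named parameters of a template model and saves one instance.
import csv

def build_family(template_path, table_path, cad_api):
    instances = []
    with open(table_path, newline="") as f:
        for row in csv.DictReader(f):                      # one row = one configuration
            name = row.pop("config_name")
            model = cad_api.open_template(template_path)   # hypothetical call
            for param, value in row.items():
                model.set_parameter(param, float(value))   # hypothetical call
            model.rebuild()
            model.save_as(f"{name}.prt")
            instances.append(name)
    return instances

# Example table (illustrative bolt family):
# config_name,shaft_length,shaft_diameter,head_height
# M8x20,20,8,5.3
# M8x30,30,8,5.3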
28.8 User Characteristics Related to CAD Systems
Contemporary CAD systems require a technological knowledge base independent of (yet complementary to) normal engineering fundamentals. An understanding of design intent related to product function and how that is manifested in the creation of geometry to represent the product is critical [19, 31]. Users require the knowledge of how the various
Fig. 28.9 Library of standard parts in DS SolidWorks
modules of a CAD system work and the impact of their command choices on the usability of geometry downstream in the design and manufacturing process. To enable users to accomplish their tasks when using CAD tools, training in how to use the system is critical, not just at a basic level of understanding the commands themselves, but also through the development of a community of practice that supports the ongoing integration of user knowledge into organizational culture and best practices. Complementary to user training, and one of the reasons why relevant training is important in the use of CAD tools, is the development of strategic knowledge in the use of the design systems. Strategic knowledge is the application of procedural and factual knowledge in the use of CAD systems directed toward a goal within a specific context [32, 33]. It is through the development of strategic knowledge that users are able to effectively utilize the myriad functionality within modern CAD systems and to adapt, transfer, and apply that knowledge when migrating to a new CAD system. Nearly all commercial CAD systems have similar user interfaces, similar geometry creation techniques, and similar required inputs and optional outputs. Productivity in the use of CAD systems
requires that users employ their knowledge of engineering fundamentals and the tacit knowledge gathered from their environment, in conjunction with technological and strategic knowledge of the CAD system’s capabilities, to generate a solution to the design problem at hand.
28.9 Visualization
Information visualization is useful in automation not only for supervision, control, and decision support, but also for training. A variety of visualization methods and software are available, including geographic information systems (GIS) and virtual and augmented reality (VR/AR). Information visualization is an essential aspect of any decision-making process. Therefore, it is critical that the information is presented in a clear, unambiguous, and understandable manner. A geographic information system is a computer-based system for capturing, manipulating, and displaying information using digitized maps. Its key characteristic is that every digital record has an identified geographical location. This process, called geocoding, enables automation applications
Fig. 28.10 User script for automatic geometry generation (excerpt of a knowledge-based engineering script that collects the airfoil section points by side and returns them in radial or wrap order)
for planning and decision-making by mapping visualized information. Virtual and augmented reality technologies provide interactive, computer-generated, three-dimensional imagery to users in a variety of modalities that are immersive and/or blend with reality. Virtual and augmented reality can be powerful media for communication and collaboration, as well as entertainment and learning. The capabilities of both AR and VR will be discussed in the next section. Table 28.1 lists examples of visualization applications.
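The geocoding idea described above, in which every record carries an identified geographic location, can be illustrated with a short Python sketch. The record fields, coordinates, and bounding-box query are illustrative assumptions; a real GIS adds map projections, layers, and spatial indexing.

# Sketch of geocoded records and a simple spatial query.
from dataclasses import dataclass

@dataclass
class GeoRecord:
    asset_id: str
    lat: float
    lon: float
    status: str

def within_bbox(records, lat_min, lat_max, lon_min, lon_max):
    """Return the records whose geocoded location falls inside a bounding box."""
    return [r for r in records
            if lat_min <= r.lat <= lat_max and lon_min <= r.lon <= lon_max]

plants = [
    GeoRecord("pump-17", 40.42, -86.91, "alarm"),
    GeoRecord("valve-03", 40.44, -86.89, "ok"),
]
print(within_bbox(plants, 40.40, 40.43, -87.00, -86.90))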
28.10 Emerging Visualization Technologies: Virtual/Augmented/Mixed Reality
Emerging visualization technologies are drastically changing the way 3D models are viewed, shared, and manipulated. In design and manufacturing environments, these advanced visualization technologies enable users to view 3D models in context and in a more natural and intuitive setting, which has advantages at various levels (particularly operational) when the data can be made accessible and fully integrated into the business processes of the organization. As such, the
technology is finding its way into many stages of the product lifecycle. Today, many modern CAD systems provide integrated support for both augmented reality and virtual reality experiences. According to Milgram [34], any experience that involves a combination of real and/or virtual content can be viewed as a point in a spectrum, as shown in Fig. 28.12. On one end of the spectrum there is the physical reality we experience in our daily lives. On the opposite end, there is virtuality, in which the view and experience of reality is completely replaced by an artificial counterpart. Any experience along this spectrum can be described as mixed reality, augmented reality, or extended reality. Depending on how much access is given to the real world or how much virtual content is involved in the experience, the terminology may vary slightly. Virtual/Augmented/Mixed-reality technologies have been used across a wide range of disciplines and industries for different purposes. Although in many aspects these technologies are related, each one of them has gone through its own evolution and development. The applications, interaction types, limitations, and capabilities enabled by these technologies also differ. Therefore, it is important to examine them separately. In this section, the technological basis
Fig. 28.11 Airfoil geometry generated from script (labels not generated as part of script); the labeled features include the cap, leading edge, trailing edge, suction side, pressure side, section splines, radial spline, and partial flowpath
of virtual reality, augmented reality, and spatial augmented reality is discussed. We review current advances and trends in VR/AR/MR technologies in the context of computeraided design and engineering, and examine future research directions.
28.10.1 Augmented Reality Augmented reality (AR) refers to the combined experience of real and virtual content. AR technologies enable a live interactive view of our physical world whose elements are enhanced with virtual imagery. The goal of AR is to enhance our interactions and experiences with everyday objects and environments by providing on-site and on-demand information about our world. This combination of real and virtual content can be experienced directly or indirectly. Indirect augmented reality employs pre-captured and preregistered images instead of a live video feed to provide the blended experience to the user. The approach typically uses a camera, such as a webcam or a phone camera, to record the point of view of the user and then superimpose computer-generated content. Indirect augmented reality is generally low cost compared to other direct-view approaches. A common technique to implement it is through the so-called “magic mirror” paradigm, in which the user experiences the augmented content by looking at a “reflection” in the computer screen. In this scenario, a camera located in front of the user captures the scene and the AR software processes the
Table 28.1 Examples of visualization applications
Application domain – Examples of visualization applications
Manufacturing – Virtual prototyping and engineering analysis; Training and experimenting; Process planning and logistics; Assembly processes; Ergonomics and virtual simulation
Design – Urban planning and architecture; Civil engineering design; Electrical design; Mechanical design
Business – Advertising and marketing; Presentation in e-commerce, e-business; Presentation of financial information
Science – Chemical and biological visualizations; Astrophysics phenomena; GIS and surveying; Meteorological data visualization for weather forecast
Medicine – Physical therapy and recovery; Safety procedures; Interpretation of medical information and planning surgeries; Medical training
Research and development – Virtual laboratories; Representation of complex math and statistical models; Cultural heritage applications; Spatial configurations; Virtual explorations: art and science
Learning and entertainment – Virtual reality games; Colocated learning; Learning and educational simulators
Fig. 28.12 Reality-Virtuality continuum [34]: the mixed-reality range spans from the real environment through augmented reality (AR) and augmented virtuality (AV) to the fully virtual environment
image and overlays the virtual content in the right position with respect to the real scene. The combined scene is then displayed to the user in a computer screen. An alternative method of indirect AR is the “magic lens” or “magic window” approach, in which a device (usually a phone or a tablet) is used as a “window” to the augmented world. The user experiences the content by holding the device and moving it around while the AR software updates the virtual imagery based on the scene the camera is capturing. Because of the popularity of AR software and the prevalence
Fig. 28.13 (a) Indirect augmented reality: “magic mirror” (left) and (b) “magic lens” (right) paradigms [36]
of mobile devices, indirect AR has become a popular and affordable method to deliver and experience AR. Companies have used this technology extensively in marketing campaigns, gaming, and entertainment applications. In industrial environments, indirect AR applications have been developed for remote maintenance, servicing, and training. In education, several books and technical materials have implemented indirect AR as a mechanism to supplement written content [35]. Examples of indirect augmented reality applications are shown in Fig. 28.13. Direct augmented reality is generally experienced via a wearable see-through headset, which renders the virtual imagery in front the user’s eyes. Many commercial headsets are available such as the Microsoft HoloLens or the MagicLeap 1. The 3D cameras integrated in these headsets map the 3D environment in front of the user in real time. The information is then used to determine the location where the virtual content needs to be displayed. Spatial augmented reality is an alternative type of direct augmented reality that uses projectors to render virtual content onto the surfaces of physical objects. This type of experience can be viewed directly without a headset and has been used extensively in the entertainment industry. For example, projection mapping technology has been showcased at theme parks, sports events, and corporate promotional events. It has also been used in museums to enhance the visitor’s experience. Nonentertainment uses of the technology include the well-known augmented reality sandbox developed by the University of California, Davis [37]. The sandbox was designed as an interactive visualization tool for surveying, topography, and civil engineering applications. It uses a projector and a 3D camera – the Microsoft Kinect – to analyze the shape of the terrain defined by the sand inside a physical sandbox and display the corresponding contour lines and colors to visualize elevation data. The application
is fully interactive, so users can manipulate the sand and the visualization will update in real time. In addition, the system can be used to visualize water flow based on the conditions of the terrain. Spatial augmented reality has also been employed in the automotive industry, for example, for in situ support of spot welding tasks [38], and in space architecture applications as a mechanism to help mitigate the psychological effects of long-duration spaceflight [39]. Augmented reality applications for design education and creative work have also been proposed [36, 40]. Registration of the 3D content with the real scene is required to ensure the user experiences a consistent and cohesive view of the combined imagery. AR software must keep track of the position and orientation of the camera in real time and render the correct view of the virtual content. The content is dynamically updated as the view of the real scene changes. The most basic method to register the 3D content is by using fiducial markers. Fiducial markers are physical elements that can be easily detected and processed by computer vision algorithms. They are typically black and white images with distinctive patterns that can be immediately recognized, as shown in Fig. 28.14. Each marker has an associated digital asset, which is rendered on the screen when the physical marker is detected. Multiple markers can be used at the same time for more interactive and multiuser experiences. When used strategically, markers enable basic interaction with the content, such as simulating physical buttons. The action of physically covering and uncovering the marker can be translated to the two states of a button, which will trigger the corresponding actions. Each action can be connected to a different 3D asset or to a different state (e.g., animated) of the same asset. Advances in computer vision have enabled the use of more sophisticated image-based markers. An image-based marker
Fig. 28.14 Fiducial markers used in marker-based augmented reality applications
is a type of fiducial marker that uses images instead of black and white patterns. Image-based markers are less intrusive and can be integrated more naturally and inconspicuously into physical elements such as books, catalogs, posters, and user manuals. In engineering and industrial environments, the use of AR dates back to the 1990s with the various AR-based applications developed at Boeing. Since then, many more applications have been proposed. In fact, many industrial processes can benefit from the adoption of AR, particularly in situations where the digital object can be linked to its counterpart in the real world (i.e., the digital twin). However, many technical challenges remain, which currently hinder its adoption. One of the most significant ones is the recognition, identification, tracking, and estimation of the position and orientation of untextured 3D objects in real time [41]. This is relevant as many objects in these environments are made of untextured metallic and/or plastic materials, in some cases with highly reflective surfaces. In addition to the difficulties of identifying and tracking these types of objects, the environments where these objects exist present their unique set of challenges. Industrial settings are intrinsically complex, busy, and often visually cluttered. Many objects may occlude others, making them only partially visible to the AR devices. Additionally, many industrial settings have environmental conditions that are less than ideal for technology (poor or inconsistent lighting, high temperatures, gas emissions, hazardous materials, grease, liquids, etc.).
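A minimal sketch of marker-based registration of the kind described above can be written with OpenCV's ArUco module (available in opencv-contrib-python; the function-style calls below follow the pre-4.7 API, which changed in later releases). The detected marker corners give the image location where a registered virtual asset would be rendered; here the sketch only draws the marker outlines.

# Sketch of "magic mirror" style marker detection with a webcam feed.
import cv2

aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)

cap = cv2.VideoCapture(0)                      # live camera feed
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = cv2.aruco.detectMarkers(gray, aruco_dict)
    if ids is not None:
        # Draw the detected markers; a full AR application would instead
        # estimate the camera pose and render a registered 3D asset here.
        cv2.aruco.drawDetectedMarkers(frame, corners, ids)
    cv2.imshow("AR preview", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()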
28.10.2 Virtual Reality Virtual reality (VR) is a visualization technology that fully replaces our view of real world with an artificial environment. VR usually implies a stereoscopic real-time first-person view of the environment. The environment is immersive and a tracking system responds to the user’s changes in position and orientation (at least at the level of head motion) to recal-
Fig. 28.15 CAVE system at EVL, University of Illinois at Chicago (public domain image)
culate the view. In truly immersive experiences, there is also auditory, haptic, and tactile feedback when the user interacts with the elements of the virtual world. Motion capture (Mocap) suits enable the real-time capturing and recording of body movements of the wearer during the VR experience, which are particularly useful in human factors and ergonomic studies. VR can be experienced in different modalities, such as cave automatic virtual environment (CAVE) systems, power walls, and individual VR headsets, as shown in Fig. 28.15. In industrial settings, the integration of VR technology with CAD/CAM systems is becoming more prevalent not only during conceptual design but also for detailed 3D modeling, visualization of simulation and analysis processes, and validation and design reviews. The use of visualization technologies in engineering has evolved from the traditional power walls and complex CAVE systems used to visualize digital mock-ups (DMUs) and perform ergonomic assessments, to the immersive environments delivered by head-mounted displays (HMDs) and high-definition power walls used for design reviews. The affordability of consumer VR devices has facilitated the implementation of technology in more areas and stages of the product lifecycle. For example, in manufacturing, VR has been successfully employed to support plant layout design, process planning, resource allocation, and training. However, there are many challenges that hamper further implementation. First, importing CAD data into a VR system (which typically runs on some type of game engine technology) is not a straightforward process. CAD models must be converted to appropriate formats and optimized, which is time consuming, error prone, and requires significant expertise in computer graphics. In addition, there is currently a disconnect between the VR data and the PDM system, which
prevents VR from fully integrating into the product lifecycle. As a result, when changes are made to the native CAD model, the geometry needs to be reconverted and optimized before it is sent back to the VR system. Moreover, the conversion process is unidirectional. Any changes made to a model in the VR environment cannot be automatically transferred back to the CAD file. This results in unproductive repetitive tasks [42] and potential data inconsistencies [43]. In many scenarios, the complexity, precision, and level of detail of a CAD model tend to be high. In this regard, despite the powerful computer hardware and displays available today, processing this type of geometry in real time is challenging, especially in a VR experience that is multiuser or if a large amount of data need to be transferred over the network to other users with low network speed. These limitations may have a direct impact on the rendering quality and/or the frame rate at which the experience is delivered. From an ergonomics standpoint, VR headsets are still heavy, bulky, and uncomfortable to wear for long periods of time. In some cases, the headsets are tethered, which significantly constrain the user’s movements in the virtual space. In addition, low frame rates and high latencies can induce “simulator sickness” which may lead to discomfort, dizziness, and nausea. Haptic and tactile feedback is limited, audio is generally neglected in industrial VR applications, and more intuitive user interfaces and interaction mechanisms are needed, particularly when VR is used to create, manipulate, and edit 3D content. Integration of additional sensory experiences such as motion and acceleration is also essential in immersive applications with high levels of presence (for example, in a driving simulation for the automotive industry or a VR application for astronaut training). Finally, the VR authoring process is still cumbersome. There are no visual tools that are intuitive and can assist users in the creation of VR content beyond basic product visualization applications. Usually, the authoring process requires significant technical knowledge and programming experience.
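A back-of-the-envelope calculation helps explain why dense CAD geometry must be decimated before it enters a VR session. The throughput figure and overhead fraction in the sketch below are assumed, illustrative numbers, not benchmarks of any particular headset or engine.

# Rough per-frame triangle budget for a target VR frame rate.
def max_triangles_per_frame(target_fps, tris_per_second, overhead_fraction=0.3):
    """Triangles that fit in one frame once engine/render overhead is reserved."""
    frame_budget_s = 1.0 / target_fps
    usable_s = frame_budget_s * (1.0 - overhead_fraction)
    return int(usable_s * tris_per_second)

budget = max_triangles_per_frame(target_fps=90, tris_per_second=300e6)
print(f"~{budget:,} triangles/frame at 90 FPS")   # about 2.3 million with these assumptions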
28.11 Conclusions and Emerging Trends
This chapter provided an overview of computer-aided design and computer-aided engineering and included elements of computer graphics and visualization. Today's commercial brands of 3D modeling tools contain many of the same functions, irrespective of which software vendor is selected. CAD software programs are dimension driven, parametric, feature based, and constraint based, and these terms have come to be synonymous when describing modern CAD systems. Computer animations and simulations are commonly used in the engineering design process to visualize movement of parts, determine possible interferences of parts, and to
simulate design analysis attributes such as fluid and thermal dynamics. Today, the 3D model files of most CAD systems can be converted into a format that can be used as input into popular AR and VR software programs. Many modern CAD systems are beginning to provide native support for some of these applications. In the future, it is anticipated that there will be a tighter integration between CAD and other systems involved in the product lifecycle. CAD vendors will be under greater pressure to partner with large enterprise software companies and become more product lifecycle management (PLM) centric. This will result in CAD and visualization becoming a part of a larger suite of software tools used in industry. The rapid development of information technology and computer graphics technology will impact the hardware platforms and software development related to CAD, visualization, and simulation. This will result in even more feature-rich CAD software programs and capabilities. More powerful computer hardware, faster screen refresh rates of large CAD models, the ability to collaborate at great distances in real time, shorter rendering times, and higher-resolution images are a few improvements that will result from these developments. Overall, there is an exciting future for CAD and visualization that will result in positive impacts and changes for the many industries and businesses that depend on CAD as a part of their day-to-day business. See additional details on CAD, CAE, and visualization in Chapters 13 and 33.
References 1. NIST – MBE PMI Validation and Conformance Testing Project. Available at https://www.nist.gov/el/systems-integration-division73400/mbe-pmi-validation-and-conformance-testing-project 2. DoD Directive Number 4630.05. (May 5, 2004) 3. Kasunic, M., Andrew, T.: Ensure interoperability. In: DoD Software Tech News 13(2) (June 2010) 4. Camba, J.D., Contero, M., Company, P., Pérez, D.: On the integration of model-based feature information in product lifecycle management systems. Int. J. Inf. Manag. 37(6), 611–621 (2017) 5. Camba, J., Contero, M., Johnson, M.: Management of visual clutter in annotated 3D CAD models: a comparative study. In: International Conference of Design, User Experience, and Usability, pp. 405–416. Springer, Cham (June 2014) 6. Hartman, N.: Using metrics to justify interoperability projects and measure effectiveness. In: Proceedings of the Collaboration and Interoperability Congress (May 2009) 7. Horst, J., Hartman, N., Wong, G.: Defining quantitative and simple metrics for developing a return-on- investment (ROI) for interoperability efforts. In: Proceedings of the Collaboration and Interoperability Congress (May 2010) 8. Brown, L.D., Hua, H., Gao, C.: A widget framework for augmented interaction in SCAPE. In: Proceedings of the 16th Annual ACM Symposium on User Interface Software and Technology (Vancouver, Canada, November 02–05, 2003), pp. 1–10. UIST ‘03. ACM, New York (2003)
658 9. Kasunic, M.: Measuring Systems Interoperability: Challenges and Opportunities. Software Engineering Institute, Carnegie Mellon University (2001) 10. Institute of Electrical and Electronics Engineers: Standard Computer Dictionary: A Compilation of Ieee Standard Computer Glossaries. New York (1990) 11. SAE International. Configuration Management Standard EIA649C (2019) 12. Kerlow, I.V.: The Art of 3-D: Computer Animation and Effects, 2nd edn. Wiley, Indianapolis (2000) 13. Tyflopoulos, E., Tollnes, F.D., Steinert, M., Olsen, A.: State of the art of generative design and topology optimization and potential research needs. In: DS 91: Proceedings of NordDesign 2018, Linköping, 14–17 Aug 2018 14. Kazi, R.H., Grossman, T., Cheong, H., Hashemi, A., Fitzmaurice, G.W.: DreamSketch: Early Stage 3D Design Explorations with Sketching and Generative Design. In: UIST 2017 Oct 20, vol. 14, pp. 401–414 15. Khan, S., Awan, M.J.: A generative design technique for exploring shape variations. Adv. Eng. Inform. 38, 712–724 (2018) 16. Bertoline, G.R., Wiebe, E.N.: Fundamentals of Graphic Communications, 5th edn. McGraw-Hill, Boston (2006) 17. Wiebe, E.N.: 3-D constraint-based modeling: finding common themes. Eng. Des. Graph. J. 63(3), 15–31 (1999) 18. Hanratty, P.J.: Parametric/relational solid modeling. In: Lacourse, D.E. (ed.) Handbook of Solid Modeling, pp. 8.1–8.25. McGrawHill, New York (1995) 19. Hartman, N.W.: Defining expertise in the use of constraint-based CAD tools by examining practicing professional. Eng. Des. Graph. J. 69(1), 6–15 (2005) 20. Gielen, G.G.E., Rutenbar, R.A.: Computer-aided design of analog and mixed-signal integrated circuits. Proc. IEEE. 88(12), 1825– 1854 (2000) 21. Wambacq, P., Vandersteen, G., Phillips, J., Roychowdhury, J., Eberle, W., Yang, B., Long, D., Demir, A.: CAD for RF circuits. Proc. Des. Autom. Test Eur. 520–527 (2001) 22. Scheffer, L., Lavagno, L., Martin, G. (eds.): Electronic Design Automation for Integrated Circuits Handbook. CRC, Boca Raton (2006) 23. Jansen, D. (ed.): The Electronic Design Automation Handbook. Springer, Norwell (2003) 24. Alpert, C.J., Mehta, D.P., Sapatnekar, S.S. (eds.): The Handbook of Algorithms for VLSI Physical Design Automation. CRC, Boca Raton (2007) 25. Sun, W., Sechen, C.: Efficient and effective placement for very large circuits. IEEE Trans. CAD Integr. Circuits Syst. 14(3), 349–359 (1995) 26. Betz, V., Rose, J., Marquardt, A.: Architecture and CAD for DeepSubmicron FPGAs. Kluwer, Dordrecht (1999) 27. Caldwell, A., Kahng, A.B., Markov, I.: Can recursive bisection produce routable placements? Proc. IEEE/ACM Des. Autom. Conf., 477–482 (2000) 28. Brenner, U., Rohe, A.: An effective congestion-driven placement framework. Proc. Int. Symp. Phys. Des. 387–394 (2002) 29. Chan, T., Cong, J., Kong, T., Shinnerl, J.: Multilevel circuit placement, Chapter 4. In: Cong, J., Shinnerl, J. (eds.) Multilevel Optimization in VLSICAD. Kluwer, Boston 22(4) (2003) 30. Mar, J.: The application of TCAD in industry. Proc. Int. Conf. Simul. Semiconduct. Process. Dev. 139–145 (1996) 31. Hartman, N.W.: The development of expertise in the use of constraint-based CAD tools: examining practicing professionals. Eng. Des. Graph. J. 68(2), 14–25 (2004)
J. D. Camba et al. 32. Bhavnani, S.K., John, B.E.: Exploring the unrealized potential of computer-aided drafting. In: Proc. CHI’96, pp. 332–339 (1996) 33. Bhavnani, S.K., John, B.E.: From sufficient to efficient usage: an analysis of strategic knowledge. In: Proc. CHI’97, pp. 91–98 (1997) 34. Milgram, P., Kishino, F.: A taxonomy of mixed reality visual displays. IEICE Trans. Inf. Syst. 77(12), 1321–1329 (1994) 35. Camba, J.D., Otey, J., Contero, M., Alcañiz, M.: Visualization and Engineering Design Graphics with Augmented Reality. SDC Publications, Mission, KS (2013) 36. Camba, J.D., Contero, M.: From reality to augmented reality: rapid strategies for developing marker-based AR content using image capturing and authoring tools. In: 2015 IEEE Frontiers in Education Conference (FIE), pp. 1–6. IEEE (2015) 37. Reed, S., Kreylos, O., Hsi, S., Kellogg, L., Schladow, G., Yikilmaz, M.B., Segale, H., Silverman, J., Yalowitz, S., Sato, E.: Shaping Watersheds Exhibit: An Interactive, Augmented Reality Sandbox for Advancing Earth Science Education, American Geophysical Union (AGU) Fall Meeting 2014, Abstract no. ED34A-01 (2014) 38. Zhou, J., Lee, I., Thomas, B., Menassa, R., Farrant, A., Sansome, A.: Applying spatial augmented reality to facilitate in-situ support for automotive spot welding inspection. In: Proceedings of the 10th International Conference on Virtual Reality Continuum and Its Applications in Industry, pp. 195–200 (2011) 39. Bannova, O., Camba, J.D., Bishop, S.: Projection-based visualization technology and its design implications in space habitats. Acta Astronaut. 160, 310–316 (2019) 40. Camba, J.D., Soler, J.L., Contero, M.: Immersive visualization technologies to facilitate multidisciplinary design education. In: International Conference on Learning and Collaboration Technologies, pp. 3–11. Springer, Cham (2017) 41. Radkowski, R., Garrett, T., Ingebrand, J., Wehr, D.: TrackingExpert: a versatile tracking toolbox for augmented reality. In: ASME 2016 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers Digital Collection, August 2016 42. Jerard, R.B., Ryou, O.: Internet based fabrication of discrete mechanical parts. In: Proceedings of the NSF Design & Manufacturing Research Conference, Vancouver, January 2000 43. Zorriassatine, F., Wykes, C., Parkin, R., Gindy, N.: A survey of virtual prototyping techniques for mechanical product development. Proc. Inst. Mech. Eng. B J. Eng. Manuf. 217(4), 513–530 (2003)
Jorge D. Camba is an associate professor of computer graphics technology at Purdue University. He holds degrees in computer science (BS, MS), technology (MS), and systems and engineering management (PhD). His research activities focus on 3D model complexity and optimization, design automation and reusability, knowledge management, and immersive environments.
Dr. Nathan Hartman is the Dauch Family Professor of advanced manufacturing and department head of the Department of Computer Graphics Technology at Purdue University. Professor Hartman’s research areas focus on the process and methodology for creating model-based definitions; examining the use of the model-based definition in the product lifecycle; developing the model-based enterprise; geometry automation; and data interoperability and reuse. He holds a B.S. in technical graphics and M.S. in industrial technology from Purdue University and an Ed.D. in technology education, with emphasis in training and development and cognitive psychology from North Carolina State University.
Dr. Gary R. Bertoline is the dean of the Purdue Polytechnic Institute and a distinguished professor of computer graphics technology and computer and information technology at Purdue University. Gary’s research interests are in scientific visualization, interactive immersive environments, distributed and grid computing, and STEM education. He has authored numerous papers on engineering and computer graphics, computer-aided design, and visualization research. He has authored and coauthored seven textbooks in the areas of computer-aided design and engineering design graphics.
Safety Warnings for Automation
29
Mark R. Lehto and Gaurav Nanda
Contents
29.1 Introduction – 661
29.2 Warning Roles – 662
29.3 Types of Warnings – 664
29.3.1 Static Versus Dynamic Warnings – 664
29.3.2 Warning Sensory Modality – 665
29.4 Automated Warning Systems – 667
29.5 Models of Warning Effectiveness – 669
29.5.1 Warning Effectiveness Measures – 669
29.5.2 The Warning Compliance Hypothesis – 669
29.5.3 Information Quality – 670
29.5.4 Information Integration – 670
29.5.5 The Value of Warning Information – 671
29.5.6 Team Decision-Making – 671
29.5.7 Time Pressure and Stress – 672
29.6 Design Guidelines and Requirements – 672
29.6.1 Legal Requirements – 673
29.6.2 Voluntary Safety Standards – 673
29.6.3 Design Specifications – 674
29.7 Challenges and Emerging Trends – 674
References – 676
Abstract
Automated systems can provide tremendous benefits to users; however, there are also potential hazards that users must be aware of to safely operate and interact with them. To address this need, safety warnings are often provided to operators and others who might be placed at risk by the system. This chapter discusses some of the roles safety warnings can play in automated systems. During this discussion, the chapter addresses some of the types of warnings that might be used, along with issues and challenges related to warning effectiveness. Design recommendations and guidelines are also presented.
M. R. Lehto, School of Industrial Engineering, Purdue University, West Lafayette, IN, USA, e-mail: [email protected]
G. Nanda, School of Engineering Technology, Purdue University, West Lafayette, IN, USA, e-mail: [email protected]
Keywords
False alarm · Warning system · Material safety data sheet · Fault tree analysis · American National Standard Institute
29.1 Introduction
Automated systems have become increasingly prevalent in our society, in both our work and personal lives. Automation involves the execution by a computer (or machine) of a task that was formerly executed by human operators [1]; for example, automation may be applied to a particular function in order to complete tasks that humans cannot perform or do not want to perform, to complete tasks that humans perform poorly or that incur high workload demands, or to augment the capabilities and performance of the human operator [2]. The potential benefits of automation include increased productivity and quality, greater system safety and reliability, as well as fewer human errors, injuries, or occupational illnesses. These benefits follow because some demanding or dangerous tasks previously performed by the operator can be completely eliminated through automation, and many others can be made easier. On the other hand, automation can create new hazards and increase the potential for catastrophic human errors [3]; for example, in advanced manufacturing settings, the use of robots and other forms of automation has reduced the need to expose workers to potentially hazardous materials in welding, painting, and other operations, but in turn has created a more complex set of maintenance, repair, and setup tasks, for which human errors can have serious consequences, such as damaging expensive equipment, long
periods of system downtime, production of multiple runs of defective parts, and even injury or death. As implied by the above example, a key issue is that automation increases the complexity of systems [4]. A second issue is that the introduction of automation into a system or task does not necessarily remove the human operator from the task or system. Instead, the role and responsibilities of the operator change. One common result of automation is that operators may go from active participants in a task to passive monitors of the system function [5, 6]. This shift in roles from active participation to passive monitoring can reduce the operator’s situation awareness and ability to respond appropriately to automation failures [7]. Part of the problem is that the operator may have few opportunities to practice their skills because automation failures tend to be rare events. Further complicating the issue, system monitoring might be done from a remote location using one or more displays that show the status of many different subsystems. This also can reduce situation awareness, for many different reasons. Another common problem is that workload may be too low during routine operation of the automated system, causing the operator to become complacent and easily distracted. Furthermore, designers may assign additional unrelated tasks to operators to make up for the reduced workload due to automation. This again can impair situation awareness, as performing these unrelated tasks can draw the operator’s attention away from the automated system. The need to perform these additional tasks can also contribute to a potentially disastrous increase in workload in nonroutine situations in which the operator has to take over control from the automated system. Many other aspects of automated systems can make it difficult for operators and others to be adequately aware of the hazards they face and how to respond to them [4, 8]. To address this issue, safety warnings are often employed in such systems. This chapter discusses some of the roles safety warnings can play in automated systems, issues related to the effectiveness of warnings, and design recommendations and guidelines.
29.2 Warning Roles
Warnings are sometimes viewed as a method of last resort to be relied upon when more fundamental solutions to safety problems are infeasible. This view corresponds to the socalled hierarchy of hazard control, which can be thought of as a simple model that prioritizes control methods from most to least effective. One version of this model proposes the following sequence: (1) eliminate the hazard, (2) contain or reduce the hazard, (3) contain or control people, (4) train or educate people, and (5) warn people [9]. The basic idea is that designers should first consider design solutions
that completely eliminate the hazard. If such solutions are technically or economically infeasible, solutions that reduce but do not eliminate the hazard should then be considered. Warnings and other means of changing human behavior, such as training, education, and supervision, fall in this latter category for obvious reasons. Simply put, these behaviororiented approaches will never completely eliminate human errors and violations. On the other hand, this is also true for most design solutions. Consequently, warnings are often a necessary supplement to other methods of hazard control [10]. There are many ways warnings can be used as a supplement to other methods of hazard control; for example, warnings can be included in safety training materials, hazard communication programs, and within various forms of safety propaganda, including safety posters and campaigns, to educate workers about risks and persuade them to behave safely. Particularly critical procedures include start-up and shut-down procedures, setup procedures, lock-out and tagout procedures during maintenance, testing procedures, diagnosis procedures, programming and teaching procedures, and numerous procedures specific to particular applications. The focus here is to reduce errors and intentional violations of safety rules by improving worker knowledge of what the hazards are and their severity, how to identify and avoid them, and what to do after exposure. Inexperienced workers are often the target audience at this stage. Warnings can also be included in manuals or job performance aids (JPAs), such as written procedures, checklists, and instructions. Such warnings usually consist of brief statements that either instruct less skilled workers or remind skilled workers to take necessary precautions when performing infrequent maintenance or repair tasks. This approach can prevent workers from omitting precautions or other critical steps in a task. To increase their effectiveness, such warnings are often embedded at the appropriate stage within step-by-step instructions describing how to perform a task. Warning signs, barriers, or markings at appropriate locations can play a similar role; for example, a warning sign placed on a safety barrier or fence surrounding a robot installation might state that no one except properly authorized personnel is allowed to enter the area. Placing a label on a guard to warn that removing the guard creates a hazard also illustrates this approach. Warning signals can also serve as a supplement to other safety devices such as interlocks or emergency braking systems; for example, presence sensing, intrusion warning, and interlock devices are sometimes used in installations of robots to sense and react to potentially dangerous workplace conditions [11]. Sensors used in such systems include: (1) pressure-sensitive floor mats, (2) light curtains, (3) endeffector sensors, (4) ultrasound, capacitive, infrared, and microwave sensing systems, and (5) computer vision. Floor mats and light curtains are used to determine whether
Fig. 29.1 SICK deTec4 light curtain installed to ensure protection of worker from the rotating table system. (Courtesy of Sick Inc., Minneapolis) [12]
someone has crossed the safety boundary surrounding the perimeter of the robot. Perimeter penetration will trigger a warning signal and, in some cases, will cause the robot to stop. End-effector sensors detect the beginning of a collision and trigger emergency stops. Ultrasound, capacitive, infrared, and microwave sensing systems are used to detect intrusions. Computer vision theoretically can play a similar role in detecting safety problems. Figure 29.1 illustrates how a presence sensing system, in this case a safety light curtain device in an assembly station, might be installed for point of operation, area, perimeter, or entry/exit safeguarding [12]. Figure 29.2 illustrates a safety laser scanner. By reducing or eliminating the need for physical barriers, such systems make it easier to access the robot system during setup and maintenance [13]. By providing early warnings prior to entry of the operator into the safety zone, such systems can also reduce the prevalence of nuisance machine shutdowns (Fig. 29.3) [12]. Furthermore, such systems can reduce the number of accidents by providing warning signals and noises to alert personnel on the floor (Fig. 29.4) [13]. Modern-day advanced safety systems are intelligent enough to differentiate a person's presence from an object's presence. For example, the SICK deTec4 light curtain has a “mute” functionality that triggers alerts only when a person, rather than an object, passes through the light curtain boundary. Combinations of such sophisticated and intelligent safety equipment can be used to design warning systems that enable safer human-robot interaction, such as robots and a human-operated forklift working in adjacent zones defined by two SICK deTec light curtains with the mute feature (Fig. 29.5) [14], or a safety system for the semiautomated assembly of electric motors built using a combination of a microScan3 safety laser scanner, a deTec4 Core safety light curtain, and a Flexi Soft safety controller (Fig. 29.6) [15].
Fig. 29.2 Safety laser scanner application. (Courtesy of Sick Inc., Minneapolis) [13]
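The zoned behavior described above (warn on approach, stop on intrusion) can be expressed as a very small piece of control logic. The following Python sketch is only an illustration of the policy: the zone distances and the mapping to actions are assumptions for the example, not the configuration or behavior of any specific product.

# Minimal sketch of a two-zone presence-sensing policy: an outer warning
# zone triggers an alert while the inner protective zone triggers a stop.
from enum import Enum

class Action(Enum):
    NONE = 0
    WARN = 1             # audible/visual warning, robot keeps running
    PROTECTIVE_STOP = 2  # robot motion halted until the zone is clear and reset

def evaluate_zones(distance_m, warning_zone_m=2.0, protective_zone_m=0.8):
    """Map the closest detected intrusion distance to a safety action."""
    if distance_m <= protective_zone_m:
        return Action.PROTECTIVE_STOP
    if distance_m <= warning_zone_m:
        return Action.WARN
    return Action.NONE

for d in (3.0, 1.5, 0.5):
    print(d, evaluate_zones(d))
# 3.0 -> NONE, 1.5 -> WARN, 0.5 -> PROTECTIVE_STOP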
Fig. 29.3 C4000 safety light curtain hazardous point protection. (Courtesy of Sick Inc., Minneapolis) [12]
Fig. 29.5 Intelligent safety concept for protecting robots. (Courtesy of Sick Inc., Minneapolis) [14]
Fig. 29.4 Safety laser scanner AGV (automated guided vehicle) application. (Courtesy of Sick Inc., Minneapolis) [13]
Fig. 29.6 Safe human-robot collaboration in the final assembly of electric motors. (Courtesy of Sick Inc., Minneapolis) [15]
29.3 Types of Warnings
Warnings can be of different types based on physical characteristics (e.g., static or dynamic), sensory modality (e.g., visual, auditory, haptic, or multimodal), and other properties. The effectiveness of a warning type varies depending on the nature of human-automation interactions in a task, as discussed in detail in this section.
29.3.1 Static Versus Dynamic Warnings Perhaps the most familiar types of warnings are the visually based signs and labels that we encounter everyday whether on the road (e.g., slippery when wet), in the workplace (in-
29
Safety Warnings for Automation
dustrial warnings such as entanglement hazard – keep clear of moving gears), or on consumer products (e.g., do not take this medication if you might be pregnant). These signs and labels indicate the presence of a hazard and may also indicate required or prohibited actions to reduce the associated risks, as well as the potential consequences of failing to comply with the warning. This type of warning is static in the sense that its status does not change over time [16]. However, as noted by Lehto [17], even these static displays have a dynamic component in that they are noticed at particular points in time. In order to increase the likelihood that a static warning, such as a sign or label, is received (i.e., noticed, perceived, and understood) by the user at the appropriate moment, it should be physically as well as temporally placed such that using the product requires interaction with the label prior to the introduction of the hazard to the situation; for example, Duffy et al. [18] examined the effectiveness of a label on an extension cord which stated: “Warning. Electric shock and fire. Do not plug more than two items into this cord.” Interactive labels in which the label was affixed to the outlet cover on the female receptacle were found to produce greater compliance than a no-label control condition, and a tag condition in which the warning label was attached to the extension cord 5 cm above the female receptacle. Other studies [19, 20] have found that a warning that interrupts a user’s script for interacting with a product increases compliance. A script consists of a series of temporally ordered actions or events which are typical of a user’s interactions with a class of objects [21]. Additionally, varying the warning’s physical characteristics can also increase its conspicuity or noticeability [22]; for example, larger objects are more likely to capture attention than smaller objects. Brightness and contrast are also important in determining whether an object is discernible from a background. As a specific form of contrast, highlighting can be used to emphasize different portions of a warning label or sign [23]. Additionally, lighting conditions influence detectability of signs and labels (i.e., reduced contrast). In contrast to static warnings, dynamic warning systems produce different messages or alerts based on input received from a sensing system – therefore, they indicate the presence of a hazard that is not normally present [16, 24]. Environmental variables are monitored by a sensor, and an alert is produced if the monitored variables exceed some threshold. The threshold can be changed based on the criticality of potential consequences – the more trivial the consequences, the higher the threshold and, for more serious consequences, the threshold would be set lower so as to reduce the likelihood of missing the critical event. However, the greater the system’s sensitivity, the greater the likelihood that an alert will be generated when there is no hazard present. Ideally, an alert should always be produced when there is a hazard present, but never be produced in the absence of a hazard.
665
29.3.2 Warning Sensory Modality Warnings serve to alert users to the presence of a hazard and its associated risks. Accordingly, they must readily capture attention and be easily/quickly understood. The ability of the warning to capture attention is especially important in the case of complex systems, in which an abundance of available information may overload the operator’s limited attentional resources. When designing a warning system, it is critical to take into account the context in which the warning will appear; for example, in a noisy construction environment, in which workers may be wearing hearing protection, an auditory warning is not likely to be effective. Whatever the context, the warning should be designed to stand out against any background information (i.e., visual clutter and ambient noise). Sanderson [25] has provided a taxonomy/terminology for thinking about sensory modality in terms of whether information is persistent in time; whether information delivery is localized, ubiquitous, or personal; whether sensing the information is optional or obligatory; whether the information is socially inclusive; whether monitoring occurs through sampling, peripheral awareness, or is interrupt based; and information density (Table 29.1). Next, advantages and disadvantages of different modes of warning presentation will be discussed in related terms.
Visual Warnings The primary challenge in using visual warnings is that the user/operator needs to be looking at a specific location in order to be alerted, or the warning needs to be sufficiently salient to cause the operator to reorient their focus toward the warning. As discussed earlier in the section on static warnings, the conspicuity of visually-based signals can be maximized by increasing size, brightness, and contrast [22]. Additionally, flashing lights attract attention better than continuous indicator-type lights (e.g., traffic signals incorporating a flashing light into the red phase) [22]. Since flash rates should not be greater than the critical flicker fusion frequency (≈24 Hz, resulting in the perception of a continuous light), or so slow that the on time might be missed, Sanders and McCormick [26] recommend flash rates of around 10 Hz. Auditory Warnings Auditory stimuli have a naturally alerting quality and, unlike visual warnings, the user/operator does not have to be oriented toward an auditory warning in order to be alerted, that is, auditory warnings are omnidirectional (or ubiquitous in Sanderson’s terminology) [27, 28]. Additionally, localization is possible based on cues provided by the difference in time and intensity of the sound waves arriving at the two ears. To maximize the likelihood that the auditory warning is effective, the signal should be within the range of about 800–5000 Hz (the human auditory system is most sensitive
29
666
M. R. Lehto and G. Nanda
Table 29.1 Contrast between the fundamental properties of visual, auditory, and haptic modalities of information processing. (After Sanderson [25]) Visual Persistent – signal typically persistent in time so that information about past has same sensory status as information about present Localized – can be sensed only from specific locations such as monitors or other projections (eyeballs needed) Optional – there are proximal physical means for completely eliminating signal (eyeballs, eyelids, and turn) Moderately socially inclusive – others aware of signal but need not look at screen Sampling-based monitoring – temporal sampling process needed for coverage of all needed variables High information density – many variables and relationships can be simultaneously presented
Auditory Transitory – signal happens in time and recedes into past: creating persistent information or information about past is a design challenge Ubiquitous – can be sensed from any location unless technology is used to create localized qualities Obligatory – there are no proximal physical means for completely eliminating signal (unless earplugs block signal) Socially inclusive – others always receive signal unless signal is sent only to an earpiece Peripheral monitoring – temporal properties of process locked into temporal properties of display Moderate information density – several variables and relationships can be simultaneously presented
to frequencies within this range – frequencies contained in speech; e.g., Coren and Ward [29]) – and should have a tonal quality that is distinct from that of expected environmental sounds – to help reduce the possibility that it will be masked by those sounds (see Edworthy and Hellier [30] for an indepth discussion of auditory warning signals).
Verbal Versus Nonverbal Warnings Any auditory stimulus, from a simple tone to speech, can serve as an alert as long as it easily attracts attention. However, the human auditory system is most sensitive to sound frequencies contained within human speech. Speech warnings have the further advantage of being composed of signals (i.e., words) which have already been well learned by the user/operator. There is a redundancy in the speech signal such that, if part of the signal is lost, it can be filled in based on the context provided by the remaining sounds [31, 32]. However, since speech is a temporally based code that unfolds over time, it is only physically available for a very limited duration. Therefore, earlier portions of a warning message must be held in working memory while the remainder of the message continues to be processed. As a result, working memory may become overloaded and portions of the warning message may be lost. With visually based verbal warnings, on the other hand, there is the option of returning to, and rereading, earlier portions of the warning; that is, they persist over time. However, since the eyes must be directed toward the warning source, the placement of visually based warnings is critical so as to minimize the loss of other potentially critical information (i.e., such that other signals can be processed in peripheral vision). While verbal signals have the obvious advantage that their meaning is already established, speech warnings require the use of recorded, digitized, or synthesized speech which
Haptic Transitory – signal happens in time and recedes into past: creating persistent information or information about past is a design challenge Personal – can be sensed only by the person whom the display is directed (unless network or shared) Obligatory – there are no proximal physical means for completely eliminating signal (unless remove device) Not socially inclusive – others probably unaware of signal Interrupt-based monitoring – monitoring based on interrupts Low information density – few variables and relationships can be simultaneously presented
will be produced within a noisy background – therefore, intelligibility is a major issue [30]. Additionally, as indicated earlier, the speech signal unfolds over time and may take longer to produce/receive than a simpler nonverbal warning signal. However, nonverbal signals must somehow encode the urgency of the situation – that is, how quickly a response is required by the user/operator. Extensive research in the auditory domain indicates that higher-frequency sounds have a higher perceived urgency than lower-frequency sounds, that increasing the modulation of the amplitude or frequency of a pulse decreases urgency, that increasing the number of harmonics increases perceived urgency, and that spectral shape also impacts perceived urgency [30, 33, 34]. Edworthy et al. [34] found that faster, more repetitive bursts are judged to be more urgent; regular rhythms are perceived as more urgent than syncopated rhythms; bursts that are speeded up are perceived as more urgent than those that stay the same or slow down; and the larger the difference between the highest and lowest pitched pulse in a burst, the higher the perceived urgency.
Haptic/Tactile Warnings
While the visual and auditory channels are most often used to present warnings, haptic or tactile warnings are also sometimes employed; for example, the improper maneuvering of a jet will cause tactile vibrations to be delivered through the pilot’s control stick – this alert serves to signal the need to reorient the control. In the domain of vehicle collision warnings, Lee et al. [35] examined driver preferences for auditory or haptic warning systems as supplements to a visual warning system. Visual warnings were presented on a head-down display in conjunction with either an auditory warning or a haptic warning, in the form of a vibrating seat. Preference data indicated that drivers found auditory
warnings to be more annoying and that they would be more likely to purchase a haptic warning system. In the domain of patient monitoring, Ng et al. [36] reported that a vibrotactile wristband results in a higher identification rate for heart rate alarms than does an auditory display. In Sanderson’s [25] terminology, a haptic alert is discrete, transitory, has low precision, has obligatory properties, and allows the visual and auditory modalities to continue to monitor other information sources. Therefore, patient monitoring represents a good use of haptic alarms. Another example of a tactile or haptic warning is the use of rumble strips on the side of the highway. When a vehicle crosses these strips, vibration and noise are created within the vehicle. Some studies have reported that the installation of these strips reduced drift-off-road accidents by about 70% [37].
Multimodal Warnings
For the most part, we have focused our discussion on the use of individual sensory channels for the presentation of automated warnings. However, research suggests that multimodal presentation results in significantly improved warning processing. Selcon et al. [38] examined multimodal cockpit warnings. These warnings must convey the nature of the problem to the pilot as quickly as possible so that immediate action can be taken. The warnings studied were visual (presented using pictorials), auditory (presented by voice), or both (incorporating visual and auditory components) and described real aircraft warning (high-priority/threat) and caution (low-priority/threat) situations. Participants were asked to classify each situation as either warning or caution and then to rate the threat associated with it. Response times were measured. Depth of understanding was assessed using a measure of situational awareness [39, 40]. Performance was faster in the condition incorporating both visual and auditory components (1.55 s) than in the visual (1.74 s) and auditory (3.77 s) conditions, and there was some indication that this condition was less demanding and resulted in improved depth of understanding as well. Sklar and Sarter [41] examined the effectiveness of visual, tactile, and redundant visual and tactile cues for indicating unexpected changes in the status of an automated cockpit system. The tactile conditions produced higher detection rates and faster response times. Furthermore, provision of tactile feedback and performance of concurrent visual tasks did not result in any interference, suggesting that tactile feedback may better support human-machine communication in information-rich domains (see also Ng et al.’s [36] findings reviewed earlier). Ho et al. [42] examined the effectiveness of unimodal auditory, unimodal vibrotactile, and multisensory combined audiotactile warning signals to alert drivers about a likely rear-end collision while driving. They found that multisensory warning signals were more effective in capturing
attention of users already engaged in a demanding task such as driving. Previous studies have found multimodal presentation of warnings in the form of voice and text to be more effective than a single modality of either voice or text. Some examples include: better awareness of a wet floor at a shopping center in a field study; higher compliance with personal protective equipment requirements in a chemical laboratory [27, 28]; and higher compliance with supplementary voice and print warnings accompanying a product manual during unpacking of a computer disk drive [43]. To summarize, by providing redundant delivery channels, a multimodal approach helps to ensure that the warning attracts attention, is received (i.e., understood) by the user/operator, and is remembered [44]. Future research should focus on further developing relatively underused channels for warning delivery (e.g., haptic) and using multiple modalities in parallel in order to increase the attentional and performance capacity of the user/operator [25].
29.4 Automated Warning Systems
Automated warning systems come in many forms, across a wide variety of domain applications including aviation, medicine, process control and manufacturing, automobiles and other surface transportation, military applications, and weather forecasting, among others. Some specific examples of automated systems include collision warning systems and ground proximity warning systems in automobiles and aircraft, respectively. These systems will alert drivers or pilots when a collision with another vehicle or the ground is likely, so that they can take evasive action. In medicine, anesthesiologists and medical care workers must monitor patients’ vitals, sometimes remotely. Similarly, in process control settings, such as nuclear power plants, workers must continuously monitor multiple subsystems to ensure that they remain at safe and tolerable levels. In these situations, automated alerts can be used to inform operators of any significant departures from normal and acceptable levels, whether in the patients’ condition or in plant operation and safety. Automation may be particularly important for complex systems, which may involve too much information (sometimes referred to as raw data), creating difficulties for operators in finding relevant information at the appropriate times. In addition to simply informing or alerting the human operator, automation can play many different roles, from guiding human information gathering to taking full control of the system. The role of a warning system varies significantly depending on its level of automation. For example, in the case of automobiles, six levels of automation (Levels 0–5) are defined by the Society of Automotive Engineers (SAE) [45], as illustrated in Fig. 29.7.
Fig. 29.7 Different levels of automation in automobiles defined by the Society of Automotive Engineers (SAE) [45]:
• Level 0 – No automation: zero autonomy; the driver performs all driving tasks.
• Level 1 – Driver assistance: the vehicle is controlled by the driver, but some driving assist features may be included in the vehicle design.
• Level 2 – Partial automation: the vehicle has combined automated functions, such as acceleration and steering, but the driver must remain engaged with the driving task and always monitor the environment.
• Level 3 – Conditional automation: a driver is still a necessity but is not required to monitor the environment; the driver must be ready to take control of the vehicle at all times with notice.
• Level 4 – High driving automation: the vehicle can perform all driving functions under certain conditions; the driver may have the option to control the vehicle.
• Level 5 – Full driving automation: the vehicle can perform all driving functions under all conditions; the driver may have the option to control the vehicle.
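For readers who want a programmatic handle on these categories, the levels in Fig. 29.7 can be captured in a small data structure. The following Python sketch is purely illustrative: the enum, its names, and the helper function are conveniences invented here and are not part of the SAE standard or of this chapter.

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """Illustrative encoding of the SAE driving automation levels shown in Fig. 29.7."""
    NO_AUTOMATION = 0            # driver performs all driving tasks
    DRIVER_ASSISTANCE = 1        # some driving-assist features, driver in control
    PARTIAL_AUTOMATION = 2       # combined automated functions, driver must monitor
    CONDITIONAL_AUTOMATION = 3   # driver need not monitor, but must take over with notice
    HIGH_AUTOMATION = 4          # all driving functions under certain conditions
    FULL_AUTOMATION = 5          # all driving functions under all conditions

def driver_must_monitor(level: SAELevel) -> bool:
    # Per Fig. 29.7, the human remains responsible for monitoring the environment
    # at Levels 0-2 but not at Level 3 and above.
    return level <= SAELevel.PARTIAL_AUTOMATION
```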
For each level of automation shown in Fig. 29.7, the roles of humans and warning systems vary significantly. To model the human interaction with different levels of automation, various psychological models have been developed. Parasuraman et al. [46] proposed a taxonomy of human-automation interaction organized around the stages of human information processing that the automation is intended to replace or supplement. This model proposed by Parasuraman et al. [46] also aligns with other psychological models of situation awareness (SA) [47, 49], which indicate that automation in early stages of human information processing contributes to the establishment and maintenance of SA [47–49]. Next, we discuss the varying role of warning systems for different levels of automation. There has been extensive research into automated warning systems that alert users in target detection tasks. Basic research has reliably demonstrated the capacity for visual cues to reduce search times in target search tasks [50]. Applied research has also demonstrated these benefits in military situations [51, 52], helicopter hazard detection [53], aviation and air traffic control [6, 54], and a number of other domains. These generally positive results support the potential value of stage 1 automation in applications where operators can receive an excessive number of warnings. Example applications might filter the warnings in terms of urgency or limit the warnings to relevant subsystems, as sketched below.
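As a concrete, hypothetical sketch of such stage-1-style filtering (the Alert structure, urgency scale, and subsystem names below are invented for illustration, not drawn from any of the cited systems):

```python
from dataclasses import dataclass

@dataclass
class Alert:
    subsystem: str
    urgency: int          # hypothetical scale: 1 = advisory ... 3 = critical
    message: str

def filter_alerts(alerts, active_subsystems, min_urgency=2):
    # Keep an alert if it is urgent enough, or if it concerns a subsystem
    # the operator is currently responsible for; suppress the rest.
    return [a for a in alerts
            if a.urgency >= min_urgency or a.subsystem in active_subsystems]

alerts = [Alert("coolant", 1, "temperature trending up"),
          Alert("spindle", 3, "overload imminent"),
          Alert("conveyor", 1, "minor jam cleared")]
print(filter_alerts(alerts, active_subsystems={"coolant"}))
```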
Advanced automated warning systems further help the operators by integrating the raw data, drawing inferences, and/or generating predictions. Lower levels of automation may extrapolate current information and predict future status (e.g., cockpit predictor displays [55]). Higher levels of automation may reduce information from a number of sources into a single hypothesis regarding the state of the world; for example, collision warning systems in automobiles will use information regarding the speed of the vehicle ahead, the intervehicle separation, and the driver’s own velocity (among other potential information) to indicate to the driver when a forward collision is likely [56–59]. In general, operators are quicker to respond to the relevant event when provided with these alerts. Studies of such automated alerts have been performed in many different domains, including aviation [60], process control [61], unmanned aerial vehicle operation [62], medicine [63], air traffic control [6], and battlefield operations [49]. Some automated warning systems may provide users with a complete set (or subset) of alternatives from which the operator will select which one to execute (whether correct or no). Higher levels of automation in the warning system may only present the optimal decision or action or may automatically select the appropriate course of action. At this stage the automation will utilize implicit or explicit assumptions about the costs and benefits of different decision outcomes; for example, the ground proximity warning system in aviation –
a system designed to help avoid aircraft-ground collisions – will recommend a single maneuver (pull up) to pilots when the aircraft is in danger of hitting the ground. Automated warning systems also aid the user in the execution of the selected action. A low level of automation may simply provide assistance in the execution of the action (e.g., power steering). High levels of automation may take control from the operator; for example, adaptive cruise control (ACC) systems in automobiles will automatically adjust the vehicle’s headway by speeding up or slowing down in order to maintain the desired separation. In general, one of the ironies of automation is that those systems that incorporate higher stages of automation tend to yield the greatest performance in normal situations; however, these also tend to come with the greatest costs in off-normal situations, where the automated response to the situation is inappropriate or erroneous [64].
A complicating issue is that the design of warnings is a polycentric problem [65], that is, the designer will have to balance several conflicting objectives when designing a warning. The most noticeable warning will not necessarily be the easiest to understand, and so on. As argued by Lehto [17], this dilemma can be partially resolved by focusing on decision quality as the primary criterion for evaluating applications of warnings; that is, warnings should be evaluated in terms of their effect on the overall quality of the judgments and decisions made by the targeted population in their naturalistic environment. This perspective assumes that decision quality can be measured by comparing people’s choices and judgments to those prescribed by some objective gold standard. In some cases, the gold standard might be prescribed by a normative mathematical model, as expanded upon below.
29.5 Models of Warning Effectiveness
In this section, we will first introduce some commonly used measures of effectiveness. Attention will then shift to modeling perspectives and related research findings which provide guidance as to when, where, and why warnings will be effective.
29.5.1 Warning Effectiveness Measures
The performance of a warning can be measured in many different ways [24]. The ultimate measure of effectiveness is whether a warning reduces the frequency and severity of human errors and accidents in the real world. However, such data is generally unavailable, forcing effectiveness to be evaluated using other measures, sometimes obtained in controlled settings that simulate the conditions under which people receive the warning; for example, the effectiveness of a collision avoidance warning might be assessed by comparing how quickly subjects using a driving simulator notice and respond to obstacles with and without the warning system. For the most part, such measures can be derived from models of human information processing that describe what must happen after exposure to the warning, for the warning to be effective [17, 24]. That is, the human operator must notice the warning, correctly comprehend its meaning, decide on the appropriate action, and perform it correctly. Analysis of these intervening events can provide substantial guidance into factors influencing the overall effectiveness of a particular warning.
29.5.2 The Warning Compliance Hypothesis
The warning compliance hypothesis [66] states that people’s choices should approximate those obtained by applying the following optimality criterion: If the expected cost of complying with the warning is greater than the expected cost of not complying, then it is optimal to ignore the warning; otherwise, the warning should be followed.
The warning compliance hypothesis is based on statistical decision theory, which holds that a rational decision-maker should make choices that maximize expected value or utility [67, 68]. The expected value of choosing an action A_i is calculated by weighting its consequences C_ik over all events k by the probability P_ik that the event will occur. More generally, the decision-maker’s preference for a given consequence C_ik might be defined by a value or utility function V(C_ik), which transforms consequences into preference values. The preference values are then weighted using the same equation. The expected value of a given action A_i becomes

EV[A_i] = \sum_{k} P_{ik} V(C_{ik})     (29.1)
From this perspective, people who decide to ignore the warning judge that the typically small but certain cost of compliance outweighs the large but relatively unlikely cost of an accident or other potential consequence of not complying with the warning. The warning compliance hypothesis clearly implies that the effectiveness of warnings might be improved by:
1. Reducing the expected cost of compliance
2. Increasing the expected cost of ignoring the warning
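To make this comparison concrete, here is a minimal numerical sketch of the expected-cost comparison implied by the warning compliance hypothesis and Eq. (29.1); the probability and cost values are hypothetical, not taken from the chapter.

```python
# Hypothetical illustration of the warning compliance hypothesis:
# comply when the expected cost of compliance is lower than that of ignoring.

p_hazard = 0.01          # assumed probability that the warned-against hazard is present
cost_precaution = 2.0    # assumed cost (e.g., minutes lost) of taking the precaution
cost_accident = 500.0    # assumed cost of an accident if the hazard is present and ignored

expected_cost_comply = cost_precaution
expected_cost_ignore = p_hazard * cost_accident

if expected_cost_comply <= expected_cost_ignore:
    print("Optimal choice: comply with the warning")
else:
    print("Optimal choice: ignore the warning")
```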
Many strategies might be followed to attain these objectives; for example, the expected cost of compliance might be reduced by modifying the task or equipment so the required precautionary behavior is easier to perform. The benefit of following this strategy is supported by numerous studies showing that even a small cost of compliance (i.e., a short delay or inconvenience) can encourage people to ignore warnings (for reviews see Lehto and Papastavrou [69], Wogalter et al. [70], and Miller and Lehto [71]). Some strategies for reducing the cost of compliance include providing the warning at the time it is most relevant or more convenient to respond to. Increasing the expected cost of ignoring the warning is another strategy for increasing warning effectiveness suggested by the warning compliance hypothesis. The potential value of this approach is supported by studies indicating that people will be more likely to take precautions when they believe the danger is present and perceive a significant benefit to taking the precaution. This might be done through supervision and enforcement or other methods of increasing the cost of ignoring the warning. Also, assuming the warning is sometimes given when it does not need to be followed (i.e., a false alarm), the expected cost of ignoring the warning will increase if the warning is modified in a way that reduces the number of false alarms. This point leads us to the topic of information quality .
29.5.3 Information Quality
In a perfect world, people would be given warnings if, and only if, a hazard is present that they are not already aware of. When warning systems are perfect, the receiver can optimize performance by simply following the warning when it is provided [72]. Imperfect warning systems, on the other hand, force the receiver to decide whether to consult and comply with the provided warning. The problem with imperfect warning systems is that they sometimes provide false alarms or fail to detect the hazard. From a short-term perspective, false alarms are often merely a nuisance to the operator. However, there are also some important long-run costs, because repeated false alarms shape people’s attitudes and influence their actions. One problem is the cry wolf effect which encourages people to ignore (or mistrust) warnings [73, 74]. Even worse, people may decide to completely eliminate the nuisance by disconnecting the warning system [75]. Misses are also an important issue, because people may be exposed to hazards if they are relying on the warning system to detect the hazard. Another concern is that misses might reduce operator trust in the system [53]. Due to the potentially severe consequences of a miss, misses are often viewed as automation failures. The designers of warning systems consequently tend to focus heavily on
designing systems that reliably provide a warning when a hazard is present. One concern, based on studies of operator overreliance upon imperfect automation [1, 54, 76], is that this tendency may encourage overreliance on warning systems. Another issue is that this focus on avoiding misses causes warning systems to provide many false alarms for each correct identification of the hazard. This tendency has been found for warning systems across a wide range of application areas [72, 74].
29.5.4 Information Integration
As discussed by Edworthy [77] and many others, in real-life situations people are occasionally faced with choices where they must combine what they already know about a hazard with information they obtain from hazard cues and a warning of some kind. In some cases, this might be a warning sign or label. In others, it might be a warning signal or alarm that indicates a hazard is present that normally is not there. A starting point for analyzing how people might integrate the information from the warning with what they already know or have determined from other sources of information is given by Bayes’ rule, which describes how to infer the probability of a given event from one or more pieces of evidence [68]. Bayes’ rule states that the posterior probability of hypothesis H_i given that evidence E_j is present, or P(H_i | E_j), is given by

P(H_i | E_j) = \frac{P(E_j | H_i) P(H_i)}{P(E_j)}     (29.2)
where P(Hi ) is the probability of the hypothesis being true prior to obtaining the evidence Ej , and P(Ej | Hi ) is the probability of obtaining the evidence Ej given that the hypothesis Hi is true. When a receiver is given an imperfect warning, we can replace P(Ej | Hi ) in the above equation with P(W | H) to calculate the probability that the hazard is present after receiving a warning. That is,
P(H | W) = \frac{P(W | H) P(H)}{P(W)}     (29.3)
where P(H) is the prior probability of the hazard, P(W) is the probability of sending a warning, P(W | H) is the probability of sending a warning given the hazard is present, and P(H | W) is the probability that the hazard is present after receiving the warning.
A number of other models have been developed in psychology that describe mathematically how people combine sources of information. Some examples include social judgment theory, policy capturing, multiple cue probability learning models, information integration theory, and conjoint measurement approaches [17]. From the perspective of warnings design, these approaches can be used to check which cues are actually used by people when they make safety-related decisions and how this information is integrated. A potential problem is that research on judgment and decision-making clearly shows that people integrate information inconsistently with the prescriptions of Bayes’ rule in some settings [78]; for example, several studies show that people are more likely to attend to highly salient stimuli. This effect can explain the tendency for people to overestimate the likelihood of highly salient events. One overall conclusion is that significant deviations from Bayes’ rule become more likely when people must combine evidence in artificial settings where they are not able to fully exploit their knowledge and cues found in their naturalistic environment [79–81]. This does not mean that people are unable to make accurate inferences, as emphasized by both Simon and researchers embracing the ecological [82, 83] and naturalistic [84] models of decision-making. In fact, the use of simple heuristics in rich environments can lead to inferences that are in many cases more accurate than those made using naive Bayes, or linear, regression [82]. Unfortunately, many applications of automation place the operator in a situation which removes them from the rich set of naturalistic cues available in less automated settings, and forces the operator to make inferences from information provided on displays. In such situations, it may become difficult for operators to make accurate inferences since they no longer can rely on simple heuristics or decision rules that are adapted to particular environments.
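As a hedged numerical illustration of Eq. (29.3), the short sketch below shows how a warning with a modest false alarm rate can have low diagnostic value even when it rarely misses; all numbers are assumed for illustration only.

```python
# Hypothetical numbers plugged into Eq. (29.3): how diagnostic is an imperfect warning?
p_h = 0.01              # prior probability that the hazard is present, P(H)
p_w_given_h = 0.95      # assumed hit rate, P(W | H)
p_w_given_not_h = 0.10  # assumed false alarm rate, P(W | no hazard)

p_w = p_w_given_h * p_h + p_w_given_not_h * (1 - p_h)   # total probability of a warning
p_h_given_w = p_w_given_h * p_h / p_w                   # posterior from Eq. (29.3)

print(f"P(H | W) = {p_h_given_w:.3f}")  # roughly 0.09: most warnings are false alarms
```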
29.5.5 The Value of Warning Information
As mentioned earlier, the designers of warning systems tend to focus heavily on designing systems that reliably provide a warning when a hazard is present, which results in many false alarms. From a theoretical perspective, it might be better to design the warning system so that it is less conservative. That is, a system that occasionally fails to detect the hazard but provides fewer false alarms might improve operator performance. From the perspective of warning design, the critical question is to determine how selective the warning should be to minimize the expected cost to the user as a function of the number of false alarms and correct identifications made by the warning [75, 85, 86]. Given that costs can be assigned to false alarms and misses, an optimal warning threshold can be calculated that maximizes the expected value of the provided information. If it is assumed that people will simply follow the recommendation of a warning system (i.e., the warning system is the sole decision-maker) the optimal warning threshold can be
calculated using classical signal detection theory (SDT) [87, 88]. That is, a warning should be given when the likelihood ratio P(E | S)/P(E | N) exceeds the optimal warning threshold β, calculated as shown below:
β = \frac{1 − P_S}{P_S} \times \frac{c_r − c_a}{c_i − c_m}     (29.4)

where
P_S        the a priori probability of a signal (hazard) being present
P(E | S)   the conditional probability of the evidence given a signal (hazard) is present
P(E | N)   the conditional probability of the evidence given a signal (hazard) is not present
c_r        the cost of a correct rejection
c_a        the cost of a false alarm
c_i        the cost of a correct identification
c_m        the cost of a missed signal
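A minimal sketch of how Eq. (29.4) might be applied when the warning system is treated as the sole decision-maker; the probabilities and costs below are invented for illustration.

```python
# Hypothetical application of the optimal warning threshold in Eq. (29.4).
p_signal = 0.02   # a priori probability that the hazard is present, P_S
c_r = 0.0         # cost of a correct rejection
c_a = 5.0         # cost of a false alarm (nuisance, lost time)
c_i = 1.0         # cost of a correct identification (the precaution still has some cost)
c_m = 200.0       # cost of a missed signal (accident)

beta = ((1 - p_signal) / p_signal) * ((c_r - c_a) / (c_i - c_m))

# The system should warn when the likelihood ratio of the sensed evidence exceeds beta.
likelihood_ratio = 3.5   # hypothetical P(E | S) / P(E | N) for the current evidence
print(f"beta = {beta:.2f}, issue warning: {likelihood_ratio > beta}")
```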
In reality, the problem is more complicated because, as mentioned earlier, people might consider other sources in addition to a warning when making decisions. The latter situation corresponds to a team decision made by a person and warning system working together, as expanded upon below.
29.5.6 Team Decision-Making
The distributed signal detection theoretic (DSDT) model focuses on how to determine the optimal decision thresholds of both the warning system and the human operator when they work together to make the best possible decision [72]. The proposed approach is based on the distributed signal detection model [89–93]. The key insight is that a warning system and human operator are both decision-makers who jointly try to make an optimal team decision. The DSDT model has many interesting implications and applications. One is that the warning system and human decision-maker should adjust their decision thresholds in a way that depends upon what the other is doing. If the warning system uses a low threshold and provides a warning even when there is not much evidence of the hazard, the DSDT model shows that the human decision-maker should adjust their own threshold in the opposite direction. That is, the rational human decision-maker will require more evidence from the environment or other source before complying with the warning. At some point, as the threshold for providing the warning gets lower, the rational decision-maker will ignore the warning completely.
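The DSDT derivation itself is not reproduced here, but the intuition can be illustrated with an ordinary equal-variance Gaussian signal detection model: as the warning threshold is lowered, the false alarm rate grows faster than the hit rate, the posterior probability of a hazard given a warning falls, and a rational receiver should therefore demand more corroborating evidence of their own. The detector parameters below are assumptions for illustration, not values from the cited studies.

```python
from math import erf, sqrt

def phi(x):
    # Standard normal cumulative distribution function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def posterior_given_warning(threshold, d_prime=1.5, p_hazard=0.05):
    """P(hazard | warning) for an equal-variance Gaussian detector that warns
    whenever its internal evidence exceeds `threshold` (hypothetical parameters)."""
    hit_rate = 1.0 - phi(threshold - d_prime)      # P(W | hazard)
    false_alarm_rate = 1.0 - phi(threshold)        # P(W | no hazard)
    p_warning = hit_rate * p_hazard + false_alarm_rate * (1.0 - p_hazard)
    return hit_rate * p_hazard / p_warning

for t in (2.0, 1.0, 0.0, -1.0):   # progressively lower warning thresholds
    print(f"threshold {t:+.1f}: P(hazard | warning) = {posterior_given_warning(t):.3f}")
```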
The DSDT model also implies that the warning system should use different thresholds depending upon how the receiver is performing. If the receiver is disabled or unable to take their observation from the environment, the warning system should take the role of primary decision-maker and set its threshold to that prescribed by traditional signal detection theory (i.e., the optimal warning threshold β, calculated as shown above in Eq. 29.4). If the decision-maker is not responding in an optimal manner when the warning is or is not given, the DSDT model prescribes ways of modifying the warning system’s threshold; for example, if the decision-maker is too willing to take the precaution when the warning is provided, the warning system should use a stricter warning threshold. That is, the warning system should require more evidence before sending a warning. Previous research has observed human behavior to be consistent with the predictions of the DSDT model [72, 94, 95]. For example, a study comparing collision avoidance system thresholds showed that using the DSDT threshold resulted in significantly improved passing decisions by drivers in a truck driving simulator compared to using the optimal SDT warning threshold β (which assumes a single decision-maker and is calculated as shown above in Eq. 29.4). Drivers also changed their own decision thresholds, in the way the DSDT model predicted they should, when the warning threshold changed. Another interesting result was that use of the DSDT threshold resulted in either risk-neutral or risk-averse behavior, while, on the other hand, use of the SDT threshold resulted in some risk-seeking behavior; that is, people were more likely to ignore the warning and visual cues indicating that a car might be coming. Overall, these results suggest that, for familiar decisions such as choosing when to pass, people can behave nearly optimally. One of the more interesting aspects of the DSDT model is that it suggests ways of adjusting the warning threshold, as explained above, in response to how the operator is performing to improve team performance. This implies that warning systems can improve their effectiveness by monitoring the receiver’s behavior and making adjustments based on how the receiver responds [94].
29.5.7 Time Pressure and Stress
Time pressure and stress are important issues in many applications of warnings. Reviews of the literature suggest that time pressure often results in poorer task performance and that it can cause shifts between the cognitive strategies used in judgment and decision-making situations [96, 97]. One change is that people show a tendency to shift to noncompensatory decision rules. This finding is consistent with contingency theories of strategy selection. In other words, this shift may be justified when little time is available, because a noncompensatory rule can be applied more quickly. Maule and Hockey [96] also note that people tend to filter out low-priority types of information, omit processing information, and accelerate mental activity when they are under time pressure. The general findings above indicate that warnings may be useful when people are under time pressure and stress. People in such situations are especially likely to make mistakes. Consequently, warnings that alert people after they make mistakes may be useful. A second issue is that people under time pressure will not have a lot of extra time available, so it will become especially important to avoid false alarms. A limited amount of research addresses the impact of time stress on warning compliance. In particular, a study by Wogalter et al. [70] showed that time pressure reduced compliance with warnings. Interestingly, subjects performed better in both low- and high-stress conditions when the warnings were placed in the task instructions than on a sign posted nearby. The latter result supports the conclusion that warnings which efficiently and quickly transmit their information may be better when people are under time stress or pressure. In some situations, this may force the designer to carefully consider the trade-off between the amount of information provided and the need for brevity. Providing more detailed information might improve understanding of the message but actually reduce effectiveness if processing the message requires too much time and attentional effort on the part of the receiver.
29.6 Design Guidelines and Requirements
Safety warnings can vary greatly in behavioral objectives, intended audiences, content, level of detail, format, and mode of presentation. Lehto [98] proposed a model of human behavior that described four hierarchical levels of operator performance and suggested suitable warning design for each level based on likely errors and information overload considerations associated with each performance level. The four levels of performance, in decreasing order, were: judgement based, knowledge based, rule based, and skill based. The different forms of warning information were: values, symbols, signs, and signals. Based on the model, values would be appropriate for the judgement-based performance level. Symbols are most likely to be effective when changing behavior from a knowledge-based to a rule-based level or a judgement-based level. Signs are likely to be effective at a rule-based level. Signals are best for guiding needed transitions from a skill-based to a rule-based level. Design of adequate warnings often requires extensive investigations and development activities involving significant resources and time [99], which are beyond the scope of this chapter. Some of the main approaches and guidelines for warnings are discussed here.
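Before turning to those approaches, the mapping just described can be summarized in a small lookup structure; this is a purely illustrative paraphrase of the text above, not an artifact of Lehto’s model.

```python
# Purely illustrative mapping of performance level to the suggested form of warning
# information, paraphrased from the discussion above.
WARNING_FORM_BY_PERFORMANCE_LEVEL = {
    "judgement-based": "values",
    "knowledge-based": "symbols",   # to move behavior toward a rule- or judgement-based level
    "rule-based": "signs",
    "skill-based": "signals",       # to trigger a transition to rule-based processing
}
```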
The first step in the development of warnings is to identify the hazards to be warned against. This process is guided by past experience, codes and regulations, checklists, and other sources, and is often organized by separately considering systems and subsystems, and potential malfunctions at each stage in their life cycles. Numerous complementary hazard analysis methods, which also guide the process of hazard identification, are available [100, 101]. Commonly used methods include work safety analysis, human error analysis, failure modes and effects analysis, and fault tree analysis.

Work safety analysis (WSA) [102] and human error analysis (HEA) [103] are related approaches that organize the analysis around tasks rather than system components. This process involves the initial division of tasks into subtasks. For each subtask, potential effects of product malfunctions and human errors are then documented, along with the implemented and the potential countermeasures. In automation applications, the tasks that would be analyzed fall into the categories of normal operation, programming, and maintenance.

Failure modes and effects analysis (FMEA) is a systematic procedure for documenting the effects of system malfunctions on reliability and safety [101, 104]. It lists the components of a system, their potential failure modes, the likelihood and effects of each failure, and both the implemented and the potential countermeasures that might be taken to prevent the failure or its effects. Fault tree analysis (FTA) is a closely related, top-down approach that begins with a malfunction or accident and works downward to basic events at the bottom of the tree [101]. Computerized tools for FTA have made the analysis and construction of fault trees more convenient [105–107]. FTA and FMEA are complementary tools for documenting sources of reliability and safety problems, and also help organize efforts to control these problems. The primary shortcoming of both approaches is that the large number of components in many automated systems imposes a significant practical limitation on the analysis; that is, the number of event combinations that might occur is an exponential function of the large number of components. Applications for complex forms of automation consequently are confined to fairly simplified analyses of failures [108].

Dhillon [109] provides a comprehensive overview of documents, data banks, and organizations for obtaining failure data to use in robot reliability analysis. Component reliabilities used in such analysis can be obtained from sources such as handbooks [110], data provided by manufacturers [111], or past experience. Limited data is also available that documents error rates of personnel performing reliability-related tasks, such as maintenance [112]. Methods for estimating human error rates have been developed [113], such as the technique for human error rate prediction (THERP) [114] and success likelihood index method-multiattribute utility
decomposition (SLIM-MAUD) [115]. Based on the system component failure rates or probabilities, quantitative models such as the systems block diagram or the system fault tree can be developed showing how system reliability is functionally determined by each component. Fault trees and system block diagrams are both useful for describing the effect of component configurations on system reliability. The most commonly considered configurations in such analysis are: (1) serial systems, (2) parallel systems, and (3) mixed serial and parallel systems.
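For the serial and parallel configurations just listed, the standard reliability calculations can be sketched as follows; the component reliabilities and the example system are hypothetical.

```python
from math import prod

def series_reliability(reliabilities):
    # All components must work: R = r1 * r2 * ... * rn.
    return prod(reliabilities)

def parallel_reliability(reliabilities):
    # The system fails only if every redundant component fails: R = 1 - (1-r1)(1-r2)...
    return 1.0 - prod(1.0 - r for r in reliabilities)

# Hypothetical component reliabilities for a guarded robot cell:
sensor, controller = 0.98, 0.995            # in series
redundant_estops = [0.99, 0.99]             # in parallel with each other

r_system = series_reliability([sensor, controller,
                               parallel_reliability(redundant_estops)])
print(f"System reliability = {r_system:.4f}")
```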
29.6.1 Legal Requirements
In most industrialized countries, governmental regulations require that certain warnings be provided to workers and others who might be exposed to hazards. The most well-known governmental standards in the USA applicable to applications of automation are the general industry standards specified by the Occupational Safety and Health Administration (OSHA). Training, container labeling and other forms of warnings, and material safety data sheets are all required elements of the OSHA hazard communication standard. Other relevant OSHA publications addressing automation are the Guidelines for Robotics Safety [116] and the Occupational Safety and Health Technical Manual [117]. In the USA, the failure to warn also can be grounds for litigation holding manufacturers and others liable for injuries incurred by workers. In establishing liability, the theory of negligence considers whether the failure to adequately warn constitutes unreasonable conduct based on (1) the foreseeability of the danger to the manufacturer, (2) the reasonableness of the assumption that a user would realize the danger, and (3) the degree of care that the manufacturer took to inform the user of the danger. The theory of strict liability only requires that the failure to warn caused the injury or loss.
29.6.2 Voluntary Safety Standards
A large set of existing standards provides voluntary recommendations regarding the use and design of safety information. These standards have been developed by both (1) international groups, such as the United Nations, the European Economic Community (EURONORM), the International Organization for Standardization (ISO), and the International Electrotechnical Commission (IEC), and (2) national groups, such as the American National Standards Institute (ANSI), the British Standards Institute, the Canadian Standards Association, the German Institute for Normalization (DIN), and the Japanese Industrial Standards Committee. Among consensus standards, those developed by ANSI in the USA are of special significance. Some of the ANSI standards focusing on safety signs and labels include: (1) ANSI Z535.1-
2006 (R 2011) Safety Color Code; (2) ANSI Z535.2-2011 Environmental and Facility Safety Signs; (3) ANSI Z535.3-2011 Criteria for Safety Symbols; (4) ANSI Z535.4-2011 Product Safety Signs and Labels; (5) ANSI Z535.5-2011 Accident Prevention Tags; and (6) ANSI Z129.1-2000, Hazardous Industrial Chemicals – Precautionary Labeling. Furthermore, ANSI has also published a Guide for Developing Product Information. Warning requirements for automated equipment can also be found in many other standards. The most well-known standard in the USA that addresses automation safety is ANSI/RIA R15.06-2012, which was revised in 2013 and is an adoption of the international standards ISO 10218-1 and ISO 10218-2 for industrial robots and robot systems. This standard was first published in 1986 by the Robotic Industries Association (RIA) and the American National Standards Institute (ANSI) as ANSI/RIA R15.06, the American National Standard for Industrial Robots and Robot Systems – Safety Requirements [118]. ISO 10218 for robots and robotic devices details safety requirements for industrial robots specifically. ISO 13482 focuses on personal care robots, which involve closer human-robot interaction and contact. The ISO/TS 15066 technical specification provides guidance on safe operations in a workspace shared by humans and collaborative robots [119]. Several other standards developed by ANSI are potentially important in automation applications. The latter standards address a wide variety of topics such as machine tool safety, machine guarding, lockout/tagout procedures, mechanical power transmission, chemical labeling, material safety data sheets, personal protective equipment, safety markings, workplace signs, and product labels. Other potentially relevant standards developed by nongovernmental groups include the National Electrical Code, the Life Safety Code, and the UL1740 safety standard for industrial robots and robotics equipment. Also, many companies that use or manufacture automated systems will develop their own guidelines [120]. Companies often start with the ANSI/RIA R15.06-2012 robot safety standard, and then add detailed information relevant to them.
29.6.3 Design Specifications
Design specifications can be found in consensus and governmental safety standards specifying how to design (1) material safety data sheets (MSDS), (2) instructional labels and manuals, (3) safety symbols, and (4) warning signs, labels, and tags. ANSI and other standards organizations provide very specific recommendations about how to design warning signs, labels, and tags. These include, among other factors, particular signal words and text, color coding schemes, typography, symbols, arrangement, and hazard identification [9, 24].
There are also standards which specifically address the design of automated systems and alerts; for example, the Department of Transportation, Federal Aviation Administration’s (FAA) Human Factors Design Standard (HFDS) (2003) indicates that alarm systems should alert the user to the existence of a problem, inform of the priority and nature of the problem, guide the user’s initial response, and confirm whether the user’s response corrected the problem. Furthermore, it should be possible to identify the first event in a series of alarm events (information valuable in determining the cause of a problem). Consistent with our earlier discussion, the standard suggests that information should be provided in multiple formats (e.g., visual and auditory) to improve communication and reduce mental workload, that auditory signals be used to draw attention to the location of a visual display, and that false alarms should not occur so frequently so as to undermine user trust in the system. Additionally, users should be informed of the inevitability of false alarms (especially in the case of low base rates). A more in-depth discussion of the FAA’s recommendations is beyond the scope of this chapter – for additional information, see the Human Factors Design Standard [121]. Recommendations also exist for numerous other applications including automated cruise control/collision warning systems for commercial trucks greater than 10,000 pounds [122] as well as for horns, backup alarms, and automatic warning devices for mobile equipment [123].
29.7 Challenges and Emerging Trends
The preceding sections of this chapter reveal that warnings can certainly play an important role in automated systems. In today’s era, where the level of automation in vehicles is continuously increasing and highly automated vehicles are being deployed in populated environments [124], ineffective human-automation interaction can lead to serious consequences such as fatal crashes of highly automated vehicles [125]. The nature of human-automation interaction in highly automated vehicles has similar characteristics of a vigilance task where drivers have to maintain constant attention over a long period of time and be able to identify and react to rare and unpredictable hazardous events which the automated system is not trained to handle [126]. Studies [126, 127] examining drivers of highly automated vehicles have found their ability to monitor and detect automation failures declines with time due to vigilance decrement, which is a well-studied phenomenon in psychology, described as “deterioration in the ability to remain vigilant for critical signals with time, as indicated by a decline in the rate of the correct detection of signals” [128]. This human factors challenge in operating highly automated vehicles can be addressed to some extent with the use of effective warning
systems that can identify potentially hazardous conditions accurately and alert the user in advance, allowing sufficient reaction time to take appropriate decisions and actions in the real-world situation [129]. One of the more encouraging results is that people often tend to behave consistently with the predictions of normative models of decision-making. From a general perspective, this is true both if people comply with warnings because they believe the hazard is more serious, and if people ignore warnings with little diagnostic value or when the cost of compliance is believed to be high. This result supports the conclusion that normative models can play an important role in suggesting and evaluating design solutions that address issues such as operator mistrust of warnings, complacency, and overreliance on warnings. The “paradox of automation” refers to the phenomenon that factors (such as automation reliability, level of automation, the operator’s trust in the system, and complacency potential) that have a positive impact on performance when the automation is working may undermine performance when the automation fails. The paradox of automation can be addressed to some extent by designing effective visual displays and warning systems, providing training to reduce complacency, and enabling better understanding of the logic of the automated system [130]. Perhaps the most important design challenge is that of increasing the value of the information provided by automated warning systems. Doing so would also address the issue of operator mistrust. There have been recent studies to model and evaluate interventions to better calibrate the driver’s trust in automated vehicles as it changes with experience [131]. An experimental study examined how drivers of a vehicle with highly reliable but supervised automation responded to a conflict object requiring a crash-avoidance maneuver; drivers with a higher level of trust in the automation system were slower to respond, which led to crashes, whereas drivers with a lower level of trust in the automation were quicker to react and were able to avoid the crash [132]. The most fundamental method of addressing this issue is to develop improved sensor systems that more accurately measure important variables that are strongly related to hazards or other warned-against events. Successful implementations of this approach could increase the diagnostic value of the warnings provided by automated warning systems by reducing either false alarms or misses, and hopefully both. Given the significant improvements and reduced costs of sensor technology that have been observed in recent years, this strategy seems quite promising. Another promising strategy for increasing the diagnostic value of the warnings is to develop better algorithms for both integrating information from multiple sensors and deciding upon when to provide a warning. Such algorithms might include methods of adaptive automation that monitor the operator’s behavior and respond accordingly; for example, if the system detects evidence that
the user is ignoring the provided warnings, a secondary, more urgent warning that requires a confirmatory response might be given to determine if the user is disabled (i.e., unable to respond because they are distracted or even sleeping). Recent studies have found such alerts about inattention to be effective when driving an SAE Level 2 (Fig. 29.7) automobile [133]. Other algorithms might track the performance of particular operators over a longer period and use this data to estimate the skill of the operator or determine the types of information the operator uses to make decisions. Such tracking might reveal the degree to which the operator relies on the warning system. It also might reveal the extent to which the other sources of information used by the operator are redundant or independent of the warning. Chancey et al. investigated the link between trust in automation systems and the compliance-reliance paradigm, which is based on the operator’s trust in the signals and nonsignals provided by the system. They found that the false alarm rate affected compliance but not reliance, and the miss rate affected reliance but not compliance [134]. For successful interaction with automated systems, it is important that the design parameters of the automation align well with the operator’s system representation. People gather information about the automation system they are interacting with and adjust their behavior (e.g., calibrating reliance on warnings based on false alarm rate) to collaborate with the automation to achieve optimal human-automation system performance [135]. Another challenge that has been barely, if at all, addressed in most current applications of warning systems is related to the tacit assumption that the perceived costs and benefits of correct detections, misses, false alarms, and correct rejections are constant across operators and situations. This assumption is clearly false, because operators will differ in their attitudes toward risk. Furthermore, the costs and benefits are also likely to change greatly between situations; for example, the expected severity of an automobile accident changes, depending on the speed of the vehicle. This issue might be addressed by algorithms based on normative models which treat the costs and benefits of correct detections, misses, false alarms, and correct rejections as random variables which are a function of particular operators and situations. Many other challenges and areas of opportunity exist for improving automated warnings; for example, more focus might be placed on developing warning systems that are easier for operators to understand. Such systems might include capabilities for explaining why the warning is being provided and how strong the evidence is. Other systems might give the user more control over how the warning operates. Such capabilities will help build trust in warning systems in highly automated systems such as self-driving automobiles and increase their customer acceptance and commercial viability. See a related Case Study example in Chapter 71.
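A hedged sketch of the escalation idea mentioned above; the channels, timings, and function names are hypothetical and are not drawn from any deployed system or the cited studies.

```python
import time

def issue_warning(channel: str, message: str) -> None:
    # Placeholder for whatever display or annunciator the real system would use.
    print(f"[{channel}] {message}")

def adaptive_warning(acknowledged, timeout_s: float = 4.0, max_ignored: int = 2) -> None:
    """Escalate to a more urgent, multimodal warning that requires a confirmatory
    response if the operator repeatedly fails to acknowledge the initial alerts."""
    ignored = 0
    while ignored < max_ignored:
        issue_warning("visual", "Hazard detected - please respond")
        time.sleep(timeout_s)
        if acknowledged():          # callback supplied by the surrounding system
            return
        ignored += 1
    issue_warning("auditory+haptic",
                  "URGENT: confirm that you are able to take control")
```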
Dr. Mark R. Lehto is a professor and cochair of the Interdisciplinary Graduate Program in Human Factors Engineering at Purdue University. His research and teaching interests include human factors and safety engineering, hazard communication, and decision support systems.
Dr. Gaurav Nanda is an assistant professor in the School of Engineering Technology at Purdue University. His research interests include applied machine learning, text mining, and decision support systems in the areas of safety, health care, and education. He obtained his Ph.D. in industrial engineering and postdoctoral training from Purdue University, and his bachelor's and master's degrees from the Indian Institute of Technology (IIT) Kharagpur, India. He has also worked in the software industry for 5 years.
Part V Automation Management
30 Economic Rationalization of Automation Projects and Quality of Service
José A. Ceroni
Contents
30.1 Introduction – 683
30.2 General Economic Rationalization Procedure – 684
30.2.1 General Procedure for Automation Systems Project Rationalization – 684
30.2.2 Pre-cost-Analysis Phase – 684
30.2.3 Cost-Analysis Phase – 689
30.2.4 Considerations of the Economic Evaluation Procedure – 691
30.3 Alternative Approach to the Rationalization of Automation Projects – 692
30.3.1 Strategical Justification of Automation – 692
30.3.2 Analytical Hierarchy Process (AHP) – 693
30.4 Final Additional Considerations in Automation Rationalization – 693
30.4.1 Investment Risk Effects on the Minimum Acceptable Rate of Return – 693
30.4.2 Equipment Depreciation and Salvage Value Profiles – 695
30.5 Conclusions – 696
30.6 Recommended Additional Reading – 696
References – 697

Abstract
The future of any investment project is undeniably linked to its economic rationalization. The chance that a project is realized depends on our ability to demonstrate the benefits that it can convey to a company. However, traditional investment evaluation must be enhanced and used carefully in the context of rationalization to reflect adequately the characteristics of modern automation systems. Nowadays, automation systems often take the form of complex, strongly related autonomous systems that are able to operate in a coordinated fashion in distributed environments. Reconfigurability is the capacity of a system designed for rapid change in structure, as well as in hardware and software components. Its objective is to quickly adjust production capacity and functionality within a part family in response to sudden changes in market or regulatory requirements. Reconfigurability of a system is a key factor affecting automation systems' economic evaluation due to the reusability of equipment and software for the service and manufacturing of multiple products. A new method based on an analytical hierarchy process for project selection is reviewed. A brief discussion on risk and salvage consideration is included, as are emerging aspects needing further development in future rationalization techniques.

Keywords
Cash flow · Analytical hierarchy process · Reconfigurable system · Depreciation schedule · Automation project

J. A. Ceroni, School of Industrial Engineering, Pontificia Universidad Católica de Valparaíso, Valparaíso, Chile; e-mail: [email protected]
30.1 Introduction
Industry-wide recognition of automation's contribution to the benefits underlying the creation of products and services has strongly driven the adoption of advanced automation technologies [1]. Adequate definition and selection of automation technology offer substantial potential for cost savings, increased flexibility, better product consistency, and higher throughput. Hence, justifying automation technology solely on traditional economic criteria is at best biased and often wrong. Disregarding the strategic and long-term benefits of automation for company strategies has often led to failure in adopting it [2–5]. Nowadays, the long-range cost of not automating can turn out to be considerably greater than the short-term cost of acquiring automation technology [6].
An automation project is the effort of engineering and other related disciplines to develop a new integrated system able to provide financial, operational, and strategic benefits, avoiding replication of current operational methods and support systems. An automation project differs from any other capital equipment project in four key ways:
• Automation provides flexibility in production capability, enabling companies to respond effectively to market changes, an aspect with clear economic value.
• Automation solutions force users to rethink and systematically define and integrate the functions of their operations. This reengineering process creates major economic benefits.
• Modern automation solutions are reprogrammable and reusable, with components often having lifecycles longer than the planned production facility.
• Using automation significantly reduces requirements for services and related facilities.
These differences lead to operational benefits that include increased flexibility, increased productivity, reduced operating costs, increased product quality, elimination of health and safety hazards, higher precision, the ability to run longer shifts, and reduced floor space.
Time-based competition and mass customization of global markets are key competitive strategies of present-day manufacturing companies [7]. The average product lifecycle in the marketplace has shrunk from years to months for products based on rapidly evolving technologies. This demands agile, collaborative, and reconfigurable automated systems, created through the concept of common manufacturing processes organized modularly to allow rapid deployment in alternative configurations. Justification of reconfigurable automation systems must necessarily include strategic aspects when comparing them with traditional manufacturing systems developed under the product-centric paradigm. Product-specific systems generally lack the reconfiguration ability that would allow them to meet the needs of additional products. Consequently, traditional systems are typically decommissioned before their capital cost can be recovered, held in storage until fully depreciated for tax purposes, and then sold at salvage value. Reconfigurability, however, calls for additional investment in the design, implementation, and operation of the system. It is estimated that generic system capabilities can increase the cost of reconfigurable system hardware by as much as 25% over that of a comparable dedicated system. On the other hand, the software required to configure and run reconfigurable
automation systems is often much more expensive than simple part-specific programs. Traditional economic evaluation methods fail to consider the benefits of capital reutilization over multiple projects and also disregard the strategic benefits of technology. Traditional economic evaluation methods therefore need to be upgraded to account for the short-term economic and long-term strategic value of investing in reconfigurable automation technologies that support the evolving production requirements of a family of products. In this chapter, the traditional approach to the economic justification of automation systems is addressed first. A related discussion of economic aspects of automation not covered in this chapter can be found in Ch. 5. New approaches to automated system justification based on strategic considerations are presented next. Finally, a discussion of justification approaches currently being researched for reconfigurable systems is presented.
30.2 General Economic Rationalization Procedure
An economic rationalization enables us to compare the financial benefits expected from a given investment project with alternative uses of investment capital. The economic evaluation measures the capital cost plus operating expenses against the cash-flow benefits estimated for the project. This section describes a general approach to the economic rationalization and justification of automation system projects.
30.2.1 General Procedure for Automation Systems Project Rationalization
The general procedure for rationalization and analysis of automation projects presented here consists of a pre-cost-analysis phase, followed by a cost-analysis phase. Figure 30.1 presents the procedure steps and their sequence. The steps in Fig. 30.1 are reviewed in detail in the rest of this section, and an example cost-analysis phase is described.
30.2.2 Pre-cost-Analysis Phase
The pre-cost-analysis phase evaluates the automation project feasibility. Feasibility is evaluated in terms of the technical capability to achieve production capacity and utilization as estimated in production schedules. The first six steps of the procedure include determining the most suitable manufacturing method, selecting the tasks to automate, and the feasibility assessment of these options (Fig. 30.1). Noneconomic considerations must be studied and all data pertinent to product volumes and operation times gathered.
Fig. 30.1 Automation project economic evaluation procedure (flowchart: the pre-cost-analysis phase covers alternative automated manufacturing methods, technical feasibility evaluation, selection of tasks to automate, noneconomic and intangible considerations, determination of costs and benefits, and utilization analysis, with failed checks leading to developing new methods without automation or holding the plan; the cost-analysis phase covers period evaluation, depreciation, and tax data requirements, project cost analysis, and economic evaluation by net present value, return on invested capital, and payback period, leading to the decision)
Alternative Automated Manufacturing Methods
Production unit costs at varying production volumes for the three main alternative manufacturing methods (manual labor, flexible automation, and hard automation) are compared in Fig. 30.2 [8]. Manual labor is usually the most cost-effective method for low production volumes; however, reconfigurable assembly is changing this situation drastically. Flexible, programmable automation is most effective for medium production volumes, ranging from a few tens or hundreds of products per year per part type to hundreds of thousands of products per year. Finally, annual production volumes of 500,000 or above seem to justify the utilization of dedicated (hard) automation systems.
Fig. 30.2 Comparison of manufacturing methods for different production volumes (cost per unit produced versus units per year for manual methods, programmable automation, and dedicated automation)
Boothroyd et al. [9] have derived specific formulas for determining the assembly cost (Table 30.1). By using these formulas, they compare alternative assembly systems such as the one-operator assembly line, assembly center with two arms, universal assembly center, free-transfer machine with programmable workheads, and dedicated machine. The last three systems are robotics-based automated systems. The general expression derived by Boothroyd et al. [9] is
$$C_{pr} = t_{pr}\left(W_t + \frac{WM}{SQ}\right),$$
where Cpr = unit assembly cost, tpr = average assembly time per part, Wt = labor cost per unit time, W = operator rate in dollars per second, M = assembly equipment cost per unit time, S = number of shifts, and Q = operator cost in terms of a capital equivalent. The parameters and variables in this expression take different forms depending on the automation level and flexibility of the assembly system. Figures 30.3 and 30.4 show the unit cost for the assembly systems at varying annual production volumes. It can be seen that producing multiple products increases costs by approximately 100% for the assembly center with two arms and the free-transfer machine with programmable workheads, and by 1000% for the dedicated hybrid machine.
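As a minimal illustration of how this expression is used, the following Python sketch evaluates the unit assembly cost for two assembly systems; all parameter values are hypothetical assumptions chosen only to exercise the formula, not data from Boothroyd et al. [9].

```python
def unit_assembly_cost(t_pr, w_t, w, m, s, q):
    """Unit assembly cost C_pr = t_pr * (W_t + W*M / (S*Q)).

    t_pr: average assembly time per part (s)
    w_t : labor cost per unit time ($/s)
    w   : operator rate ($/s)
    m   : assembly equipment cost per unit time ($/s)
    s   : number of shifts
    q   : operator cost expressed as a capital equivalent ($)
    """
    return t_pr * (w_t + w * m / (s * q))

# Illustrative (hypothetical) comparison of a manual and a robotic assembly system.
manual = unit_assembly_cost(t_pr=8.0, w_t=0.012, w=0.012, m=0.0, s=1, q=1.0)
robotic = unit_assembly_cost(t_pr=3.0, w_t=0.004, w=0.012, m=0.010, s=2, q=90_000.0)
print(f"manual:  ${manual:.4f} per part")
print(f"robotic: ${robotic:.4f} per part")
```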
Table 30.1 Assembly equipment cost for alternative automation levels
Automation level | Average assembly time per part | Labor cost per time | Assembly equipment cost per time
Manual assembly and feeding | Kt0(1 + x) | nW/k | (n/k)(2CB + Np CC)
Manual assembly with automated feeding | Kt0(1 + x) | nW/k | (n/k)(2CB + Np CC) + Np(ny + Nd)CF
Dedicated hybrid assembly machine | t + xT | 3W | [nyCT + (T/t)(ny)CB] + Np{(ny + Nd)(CF + CW) + [ny + (T/2t)(ny)]CC}
Assembly machine with programmable workheads and automated transfer of parts | k(t + xT) | 3W | (n/k)[CdA + (T/t + 1)CB] + Np[(ny + Nd)CM + nCg + (n/k)(T/2t + 0.5)CC]
Automated assembly center with two robotic arms | n(t/2 + xT) | 3W | 2CdA + Np[CC + nCg + (ny + Nd)CM]
Flexible universal assembly machine center | n(t/2 + xT) | 3W | (2CdA + nyCPF + 2Cug) + Np CC
Notation: Cpr product unit assembly cost, S number of shifts, Q equivalent cost of an operator in terms of capital, W operator rate in dollars per second, d degrees of freedom, k number of parts assembled by each operator or programmable workhead, n total number of parts, Np number of products, Nd number of design changes, CdA cost of a programmable robot or workhead, CB cost of a transfer device per workstation, CC cost of a work carrier, CF cost of an automatic feeding device, Cg cost of a gripper per part, CM cost of a manually loaded magazine, CPF cost of a programmable feeder, CS cost of a workstation for a single-station assembly, Ct cost of a transfer device per workstation, Cug cost of a universal gripper, CW cost of a dedicated workhead, T machine downtime for defective parts, t0 machine downtime due to defective parts, t mean time of assembly for one part, x ratio of faulty parts to acceptable parts, y product styles.
Fig. 30.3 Comparison of alternative assembly systems (one product): assembly cost per part versus annual production volume for 50 parts in one assembly (n = 50), one product (Np = 1), one style of each product (y = 1), and no design changes (Nd = 0)
Fig. 30.4 Comparison of alternative assembly systems (20 products): assembly cost per part versus annual production volume for 50 parts in one assembly (n = 50), 20 products (Np = 20), one style of each product (y = 1), and no design changes (Nd = 0)
Evaluation of Technical Feasibility for Alternative Methods
Feasibility of the automation system plan must be reviewed carefully. It is perfectly possible for an automation project to have a positive economic evaluation but problems with its feasibility. Although this situation may seem strange, it must be considered that an automation project is rather complex and demands specific operational conditions, far more demanding than those in conventional production systems. A thorough feasibility review must consider aspects such as the answers to the following questions in the case of automated assembly:
• Is the product designed for automated assembly?
• Is it possible to do the job with the planned procedure and within the given cycle time?
• Can reliability be ensured as a component of the total system?
• Is the system sufficiently staffed and operated by assigned engineers and operators?
• Is it possible to maintain safety and the designed quality level?
• Can inventory and material handling be reduced in the plant?
• Are the material-handling systems adequate?
• Can the product be routed in a smooth batch-lot flow operation?
The comprehensive analysis of these aspects of any given automation project is rather cumbersome and prone to shortcomings in the full understanding of the decision makers in charge of choosing the best automation proposal. Techniques such as the technology-enabled engagement process (TEEP) [10–12] are leading the way in providing decision makers with an immersive environment in which to perceive, through virtual reality, automation components functioning in an integrated operation. The operation of the system is not only visualized but also reported as the key performance indicators that provide the most insightful decision-making appraisal. This framework is currently being utilized in design training programs in education [13]. A similar approach is followed in Ref. [14], which proposes a system that uses data mining to evaluate alternative automation projects in companies. Following the feasibility analysis, feasible alternatives are considered further in the evaluation. If the plan fails due to lack of feasibility, a search for other types of solutions is in order. Alternative solutions may involve the development of new equipment, improvement of the proposed equipment, or development of other manufacturing alternatives.
Selection of Tasks to Automate
Selection of tasks for automation is a difficult process. The following five job grouping strategies may assist in determining the tasks to automate:
• Components of products of the same family
• Products presently being manufactured in proximity
• Products consisting of similar components that could share part-feeding devices
• Products of similar size, dimensions, weight, and number of components
• Products with a simple design that can be manufactured within a short cycle time
Noneconomic and Intangible Considerations
Issues related to specific company characteristics, company policy, social responsibility, and management policy need to be addressed both quantitatively and qualitatively in the automation project. Adequate justification of automation systems needs to consider aspects such as:
• Compliance with the general direction of the company's automation strategy
• Satisfaction of equipment and facilities standardization policies
• Adequate accommodation of future product model changes or production plans
• Improvement of working life quality and worker morale
• Reliability and consistency to achieve and maintain the expected operational results
• Positive impact on company reputation
• Promotion of technical progress and knowledge creation at the company
Differences between automated solutions (e.g., robots) and other case-specific capital equipment also provide numerous intangible benefits, as the following list illustrates:
• Robots are reusable.
• Robots are multipurpose and can be reprogrammed for many different tasks.
• Because of reprogrammability, robotic system service life can often be three or more times longer than that of fixed (hard) automation devices.
• Tooling costs for robotic systems also tend to be lower owing to the ability to program around certain physical constraints.
• Production startup occurs sooner because of fewer construction and tooling constraints.
• Plant modernization can be implemented by eliminating discontinued automation systems.
Determination of Costs and Benefits
Although the costs and benefits expected from automation projects vary according to each particular case being analyzed, a general classification of costs can include operators' wages, capital, maintenance, design, and power costs. However, it must be noted that, while wages usually decrease at higher levels of automation, the rest of the costs tend to increase. Figure 30.5 shows the behavior of assembly costs at different automation levels [15]. Consequently, it is possible to determine the optimal degree of automation based on the minimum total operational cost of the system.
Fig. 30.5 Assembly costs as functions of the degree of automation in the assembly case (personnel cost falls with the degree of automation while capital and maintenance/energy costs rise, so the total assembly cost reaches a minimum at the optimal degree of automation)
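The following sketch illustrates that idea numerically: it minimizes a total cost built from hypothetical personnel, capital, and maintenance/energy cost curves over the degree of automation. The functional forms and constants are invented for illustration and are not taken from [15].

```python
def personnel_cost(a):            # decreases as the degree of automation a (0..1) grows
    return 100.0 * (1.0 - a) ** 1.5

def capital_cost(a):              # grows with the degree of automation
    return 20.0 + 60.0 * a ** 2

def maintenance_energy_cost(a):   # grows with the degree of automation
    return 10.0 + 25.0 * a

def total_assembly_cost(a):
    return personnel_cost(a) + capital_cost(a) + maintenance_energy_cost(a)

# Grid search for the cost-minimizing degree of automation.
grid = [i / 1000 for i in range(1001)]
a_opt = min(grid, key=total_assembly_cost)
print(f"optimal degree of automation ~= {a_opt:.2f}, "
      f"minimum total cost ~= {total_assembly_cost(a_opt):.1f}")
```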
Considering automation's potential to sustainably deliver value to the organization, a useful concept regarding the cost and expected benefits of automation projects is quality of service (QoS) [16]. QoS involves issues of cost, affordability, energy, maintenance, and dependability being delivered by the automated system at a stable rate, according to a service-level agreement. For the purposes of this chapter, the relevant aspects of QoS are cost, affordability, and energy. Cost-effective or cost-oriented automation is part of a strategy called low-cost automation, which covers the life cycle of an automation system with respect to its owners: design, production, operating and maintenance, refitting, or recycling. Affordable automation is another part of the strategy and focuses on automation or automatic control in small enterprises to enhance their competitiveness in manufacturing and service. It is directly observable how the cost-control aspect of the latter two strategies contributes to the economic evaluation process of alternative automation projects. Despite relatively expensive components, the automation system may present low cost with respect to its operation and maintenance.

To be utilized broadly, a new technology must demonstrate tangible and sustainable benefits, be easier to implement and maintain, and/or substantially improve performance and efficiency. Sometimes a new control method is not pursued due to poor usability during operation and troubleshooting in an industrial environment. Low-cost automation is now established as a strategy to achieve the same performance as sophisticated automation but with lower costs. The designers of automation systems therefore have a cost frame within which to find solutions. This is a challenge to the theory and technology of automatic control as the main parts of automation projects. Cost aspects are mostly considered when designing automation systems. However, in the end, industry is looking for intelligent solutions and engineering strategies that save cost but nevertheless deliver secure, high performance. Field robots in several domains such as manufacturing plants, buildings, offices, agriculture, and mining are candidates for reducing operation cost. Enterprise integration and support for networked enterprises are considered as cost-saving strategies. Condition monitoring of machines to reduce maintenance cost and avoid downtime of machines and equipment, if possible, is also a new challenge, and promotes e-Maintenance [17] and e-Service [18].

Reliability of low-cost automation is independent of the grade of automation, i.e., it covers all possible circumstances in its field of application. Often it is more suitable to reduce the grade of automation and involve human experience and capabilities to bridge the gap between theoretical findings and practical requirements [19]. Low-cost automation also concerns the implementation of automation systems. This should be as easy as possible and should also facilitate maintenance. Maintenance is very often the crucial point and an important cost factor to be considered. Standardization of the components of automation systems could also be very helpful to reduce
cost, because it fosters usability, distribution, and innovation in new applications, for example, fieldbus technology in manufacturing and building automation. Distributed collaborative engineering, i.e., the control of common work over remote sites, is an important topic in cost-oriented automation. Integrated product and process development as a cost-saving strategy has been partly introduced in industry. However, as Nnaji et al. [20] mention, a lack of information from suppliers and working partners, incompleteness and inconsistency of product information/knowledge within the collaborating group, and the incapability of processing information/data from other parties due to the problem of interoperability hamper effective use. Hence, collaborative design tools are needed to improve collaboration among distributed design groups, enhance knowledge sharing, and assist in better decision-making [21]. (See also Chs. 18 and 19.) Mixed-reality concepts could be useful for collaborative distributed work because they address two major issues: seamlessness and enhancing reality. In mixed-reality distributed environments, information flow can cross the border between reality and virtuality in an arbitrary bidirectional way. Reality may be the continuation of virtuality, or vice versa [22], which provides a seamless connection between the two worlds. This bridging or mixing of reality and virtuality opens up some new perspectives not only for work environments but also for learning or training environments [23, 24]. (See also Chs. 32 and 63.)

The changing global context is having an impact on local and regional economies, particularly on small and medium-sized enterprises. Global integration and international competitive pressures are intensifying at a time when some of the traditional competitive advantages – such as relatively low labor costs – enjoyed by certain countries are vanishing. Energy-saving strategies and also individual solutions for reducing energy consumption are challenging politicians, the public, and researchers. The cost aspect is very important. Components of controllers such as sensors and actuators may be expensive, but one has to calculate the savings in energy costs over a certain period or the life cycle of a building or plant. Cost-oriented automation as part of the strategy of low-cost automation considers the cost of ownership with respect to the life cycle of the system:
• Designing
• Implementing
• Operating
• Reconfiguring
• Maintenance
• Recycling
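As a rough illustration of a cost-of-ownership tally over these life cycle phases, the sketch below discounts hypothetical phase costs over an assumed horizon; all cost figures, timings, and the discount rate are illustrative assumptions.

```python
# (phase, year incurred, cost in $) -- all values are illustrative assumptions
phases = [
    ("designing",     0, 40_000),
    ("implementing",  0, 80_000),
    ("operating",     1, 25_000),   # recurring annually from year 1 (handled below)
    ("reconfiguring", 4, 30_000),
    ("maintenance",   1, 12_000),   # recurring annually from year 1 (handled below)
    ("recycling",     8, 10_000),
]
horizon_years = 8
discount_rate = 0.10

def present_value(cost, year, rate):
    return cost / (1.0 + rate) ** year

tco = 0.0
for name, year, cost in phases:
    if name in ("operating", "maintenance"):
        # annual recurring costs from their start year to the end of the horizon
        tco += sum(present_value(cost, y, discount_rate)
                   for y in range(year, horizon_years + 1))
    else:
        tco += present_value(cost, year, discount_rate)

print(f"Discounted cost of ownership over {horizon_years} years: ${tco:,.0f}")
```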
Life cycle assessment of automation projects could become very cumbersome due to the amount of data that needs to be gathered and processed if the analysis is intended to be as comprehensive as possible [25]. Methods to assist this evaluation are available in the literature; for example, Ref. [26] provides an open-source tool for modeling and performing the life cycle assessment of a wide variety of projects, which can be used for automation projects as well. The research by Yeom et al. [27] presents a life cycle analysis of a robot system for painting building exteriors and can be used as a reference. Components and instruments could be expensive if life cycle costs are to decrease. An example is enterprise integration or networked enterprises as production systems that are vertically (supply chain) or horizontally and vertically (network) organized. Cost-effective product and process realization has to consider several aspects regarding automatic control [24]:
• Virtual manufacturing supporting integrated product and process development
• Web-based maintenance (cost reduction with e-Maintenance systems in manufacturing)
• Small and medium-sized enterprise (SME)-oriented agile manufacturing
Agile manufacturing here is understood as the synthesis of a number of independent enterprises forming a network to join their core skills. As mentioned above, life cycle management of automation systems is important regarding the cost of ownership. The complete production process has to be considered with respect to its performance, where maintenance is the most important driver of cost. Nof et al. [28] consider the performance of the complete automation system, which interests the owner in terms of cost, rather than only the performance of the control system; i.e., a compromise between the cost of maintenance and the cost of downtime of the automation system has to be found.

Questions regarding the benefits of automation systems often arise concerning long-range, unmeasurable effects on economic issues. A few such issues include the impact of the automation system on:
• Product value and price
• Increase of sales volume
• Decrease of production cost
• Decrease of initial investment requirements
• Reduction of product lead times
• Decrease of manufacturing costs
• Decrease of inventory costs
• Decrease of direct and indirect labor costs
• Decrease of overhead rate
• Full utilization of automated equipment
• Decrease of setup time and cost
• Decrease of material-handling cost
• Decrease of damage and scrap costs
Table 30.2 Difficult-to-quantify impacts of automation
Automation can increase/improve/maximize: (1) flexibility, (2) sustainability, (3) resilience, (4) quality of service, (5) convenience, (6) plant modernization, (7) labor skills of employees, (8) employee satisfaction, (9) methods and operations, (10) manufacturing productivity capacity, (11) reaction to market fluctuations, (12) product quality, (13) business opportunities, (14) market share, (15) profitability, (16) competitiveness, (17) growth opportunities, (18) handling of short product lifecycles, (19) handling of potential labor shortages, (20) space utility of plant.
Automation can decrease/eliminate/minimize: (1) regulatory violations and deviations, (2) hazardous and unsafe human jobs, (3) tedious human jobs, (4) safety violations, (5) accidents and injuries, (6) scrap and rework, (7) errors and conflicts, (8) time for training, (9) trainers' costs, (10) clerical costs, (11) food costs, (12) dining rooms, (13) restrooms, bathrooms, and parking areas, (14) burden, direct, and other overhead costs, (15) manual material handling, (16) inventory levels, (17) scrap and errors, (18) time to market.
However, automation may also result in hard-to-quantify benefits that could improve system performance or help to ameliorate negative impacts in industrial operations. The correct economic assessment of each of these intangible factors is required for them to be included in the financial analysis in order to discriminate among the automation alternatives. Some examples of hard-to-quantify automation impacts are listed in Table 30.2.
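One simple way to make such impacts explicit when comparing alternatives is a weighted scoring model. The sketch below shows only the mechanics, with hypothetical factors, weights, and scores; it is not part of the chapter's evaluation procedure.

```python
# Illustrative importance weights and 1-5 scores per alternative (all hypothetical).
weights = {"flexibility": 0.30, "product quality": 0.25,
           "safety": 0.25, "plant modernization": 0.20}

alternatives = {
    "dedicated line":      {"flexibility": 2, "product quality": 4, "safety": 4, "plant modernization": 2},
    "robotic cell":        {"flexibility": 4, "product quality": 4, "safety": 4, "plant modernization": 4},
    "reconfigurable cell": {"flexibility": 5, "product quality": 4, "safety": 4, "plant modernization": 5},
}

def weighted_score(scores):
    return sum(weights[f] * scores[f] for f in weights)

for name, scores in sorted(alternatives.items(), key=lambda kv: -weighted_score(kv[1])):
    print(f"{name:20s} weighted intangible score = {weighted_score(scores):.2f}")
```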
Utilization Analysis
Underutilized automated systems usually cannot be cost-justified, mainly because of the high initial startup expenses and the low labor savings they produce. Consideration of additional applications or planned future growth is required to drive the potential cost-effectiveness up; however, there are also additional costs to consider, for example, the tooling and feeder costs associated with new applications.
30.2.3 Cost-Analysis Phase
This phase of the methodology focuses on detailed cost analysis for investment justification and includes five steps (Fig. 30.1). To evaluate the automation project installation economically, the following data are required:
• Capital investment of the project
• Estimated changes in gross incomes (revenue, sales, and savings) and costs expected from the project
30
To illustrate the remaining steps of the methodology, an example will be developed. The installation cost, operating costs, and salvage value for the example are given in Table 30.3.
Period Evaluation, Depreciation, and Tax Data Requirements
Before proceeding with the economic evaluation, the evaluation period, tax rates, and tax depreciation method must be specified. We will consider in the example an evaluation period of 6 years and use the US Internal Revenue Service's Modified Accelerated Cost Recovery System (MACRS) for 5 years (Table 30.4). The tax rate considered is 40%. These values are not fixed and can be changed if deemed appropriate.
Project Cost Analysis
The project cost is as given in Table 30.3 (US$168,000). To continue, it is necessary to determine (estimate) the yearly changes in operational costs and cost savings (benefits). For the example, these are as shown in Table 30.5.
Economic Rationalization
The techniques used for the economic analysis of automation applications are similar to those for any manufacturing equipment purchase. They are usually based on net present value, rate of return, or payback (payout) methods. All of these methods require the determination of the yearly net cash flows, which are defined as
$$X_j = (G - C)_j - (G - C - D)_j\,T - K + L_j,$$
where Xj = net cash flow in year j, Gj = gross income (savings, revenues) for year j, Cj = total costs for year j, Dj = tax
depreciation for year j, T = tax rate (assumed constant), K = project cost (capital expenditure), and Lj = salvage value in year j. The net cash flows are given in Table 30.6.

Table 30.3 Example data
Project costs: machine cost $80,000; tooling cost $13,000; software integration $30,000; part feeders $20,000; installation cost $25,000; total $168,000. Actual realizable salvage: $12,000.

Table 30.4 MACRS percentages
Year:       1     2     3     4     5     6
Percentage: 20.00 32.00 19.20 11.52 11.52 5.76

Net Present Value (NPV)
Once the cash flows have been determined, the net present value (NPV) is determined using the equation
$$\mathrm{NPV} = \sum_{j=0}^{n} \frac{X_j}{(1+k)^j} = \sum_{j=0}^{n} X_j\,(P/F, k, j),$$
where Xj = net cash flow for year j, n = number of years of cash flow, k = minimum acceptable rate of return (MARR), and 1/(1 + k)^j = discount factor, usually designated as (P/F, k, j). With the cash flows of Table 30.6 and k = 25%, the NPV is NPV = −168,000 + 53,640(P/F, 25, 1) + 61,704(P/F, 25, 2) + · · · + 78,271(P/F, 25, 6) = $18,431. The project is economically acceptable if its NPV is positive. Also, a positive NPV indicates that the rate of return is greater than k.

Return on Invested Capital (ROIC)
The ROIC or rate of return is the interest rate that makes the NPV = 0. It is sometimes also referred to as the internal rate of return (IRR). Mathematically, the ROIC is defined as
$$0 = \sum_{j=0}^{n} \frac{X_j}{(1+i)^j} = \sum_{j=0}^{n} X_j\,(P/F, i, j),$$
where i = ROIC. For this example, the ROIC is determined from the expression 0 = −168,000 + 53,640(P/F, i, 1) + 61,704(P/F, i, 2) + · · · + 78,271(P/F, i, 6). To solve this expression for i, a trial-and-error approach is needed. Assuming 25%, the right-hand side gives $18,431 (the NPV calculation), and with 35% it is −$11,719. Therefore, the ROIC by linear interpolation is approximately 31%. This ROIC is then compared with the minimum acceptable rate of return (MARR). In this example, the MARR is that used for calculating the NPV. If ROIC ≥ MARR, the project is acceptable; otherwise it is unacceptable. Consequently, the NPV and rate-of-return methods will give the same decision regarding the economic desirability of a project (investment).
Table 30.5 Costs and savings (dollars per year)
Year   Labor savings   Quality savings   Operating costs (increase)
1      70,000          22,000            (25,000)
2      70,000          22,000            (25,000)
3      88,000          28,000            (19,000)
4      88,000          28,000            (12,000)
5      88,000          28,000            (12,000)
6      88,000          28,000            (12,000)

Table 30.6 Net cash flow with project cost of K = 168,000 and salvage value of L6 = 12,000
End of year   Total G(a)   C        D(b)     X
0             –            –        –        −168,000
1             92,000       25,000   33,600   53,640
2             92,000       25,000   53,760   61,704
3             116,000      19,000   32,256   71,102
4             116,000      12,000   19,354   70,142
5             116,000      12,000   19,354   70,142
6             116,000      12,000   9,677    78,271
(a) These are the sums of labor and quality savings (Table 30.5)
(b) Computed with the MACRS percentage for each year

It is pointed out that the definitions of cash flow and MARR are not independent. Also, the omission of debt interest in the cash-flow equation does not necessarily imply that the initial project cost (capital expenditure) is not being financed by some combination of debt and equity capital. When total cash flows are used, the debt interest is included (approximately) in the definition of MARR as
$$\mathrm{MARR} = k_e (1 - c) + k_d (1 - T)\,c,$$
where ke = required return for equity capital, kd = required return for debt capital, T = tax rate, and c = debt ratio of the pool of capital used for current capital investments. It is not uncommon in practice to adjust (increase) ke and kd to account for project risk and uncertainties in economic conditions. In this case, risk is the combination of the probability or frequency of occurrence of a defined threat or opportunity and the magnitude of the consequences of the occurrence. The effects of automation on ROIC (Fig. 30.6) are documented elsewhere in the literature [29]. The main effects of automation can be classified into reduction of capital or increased profits or, more desirably, both simultaneously. Automation may generate investment capital savings in project engineering, procurement costs, purchase price, installation, configuration, calibration, or project execution. Working capital requirements may be lowered by reducing raw materials (quantity or price), product inventories, spare parts for equipment, energy and utilities utilization, or by increasing product yields. Maintenance costs are diminished in automation solutions by reducing unscheduled maintenance, the number of routine checks, the time required for maintenance tasks, material purchases, and the number and cost of sched-
uled shutdown tasks. Automation also contributes to reducing impacts (often hard to quantify) due to health, safety, and environmental issues in production systems. Profits could increase due to automation by increasing the yield of more valuable products. Reduced work-in-process inventory and waste result in higher revenue per unit of input to the system. Although higher production yield will be meaningful only if the additional products can be sold, today's global markets will surely respond positively to added production capacity.
Payback (Payout) Period
An alternative method used for the economic evaluation of a project is the payback period (or payout period). The payback period is the number of years required for incoming cash flows to balance the cash outflows. The payback period (p) is obtained from the expression
$$0 = \sum_{j=0}^{p} X_j.$$
This is one definition of the payback period, although an alternative definition that employs a discounting procedure is most often used in practice. Using the cash flows given in Table 30.6, the payback equation for 2 years gives −168,000 + 53,640 + 61,704 = −$52,656, and for 3 years it is −168,000 + 53,640 + 61,704 + 71,102 = $18,446. Therefore, using linear interpolation, the payback period is p = 2 +
52,656/71,102 = 2.74 years.
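The example can also be scripted. The sketch below rebuilds the net cash flows of Table 30.6 from the data in Tables 30.3, 30.4, and 30.5 using the cash-flow definition given above, and then evaluates the NPV at the 25% MARR, the ROIC (by bisection rather than manual interpolation), and the payback period. The cash flows and payback it produces match Table 30.6; because of rounding and convention differences, the NPV and ROIC it prints may differ somewhat from the values quoted in the text.

```python
K = 168_000          # project cost (Table 30.3)
salvage = 12_000     # realizable salvage received at the end of year 6
tax_rate = 0.40
macrs = [0.20, 0.32, 0.192, 0.1152, 0.1152, 0.0576]                 # Table 30.4
gross = [92_000, 92_000, 116_000, 116_000, 116_000, 116_000]        # labor + quality savings (Table 30.5)
costs = [25_000, 25_000, 19_000, 12_000, 12_000, 12_000]            # operating cost increases (Table 30.5)

def cash_flows():
    flows = [-K]                                  # year 0: capital expenditure
    for j in range(6):
        d = K * macrs[j]                          # tax depreciation for the year
        g, c = gross[j], costs[j]
        x = (g - c) - (g - c - d) * tax_rate      # after-tax net cash flow
        if j == 5:
            x += salvage                          # salvage value in the last year
        flows.append(x)
    return flows

def npv(rate, flows):
    return sum(x / (1 + rate) ** j for j, x in enumerate(flows))

def irr(flows, lo=0.0, hi=1.0, tol=1e-6):
    # bisection, assuming npv(lo) > 0 > npv(hi)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, flows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def payback(flows):
    cumulative = 0.0
    for j, x in enumerate(flows):
        previous = cumulative
        cumulative += x
        if cumulative >= 0:
            return (j - 1) + (-previous) / x      # linear interpolation within year j
    return None

flows = cash_flows()
print("net cash flows:", [round(x) for x in flows])
print(f"NPV at 25% MARR: ${npv(0.25, flows):,.0f}")
print(f"ROIC (IRR): {irr(flows):.1%}")
print(f"payback period: {payback(flows):.2f} years")
```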
30.2.4 Considerations of the Economic Evaluation Procedure
The application of the general economic evaluation procedure and the related techniques described previously must take into consideration the aspects described next. The risk of ending with a wrong economic evaluation and therefore an
30
Fig. 30.6 Effects of automation on ROIC (increased ROIC via reduced capital: lower capital investment, product/input/WIP inventory, warehouse, and spares; and via increased profit: reduced costs for energy and utilities, maintenance, waste, staff, and exceptions, or increased revenue from higher prices and increased production, enabled by higher production yield, improved product quality, increased equipment capacity, and reduced unscheduled downtime and scheduled shutdowns)
incorrect alternative project selection is usually the result of not considering the following aspects:
• Cash flows are incremental since they represent increases or decreases resulting directly from the investment alternative under consideration.
• NPV and rate of return are inversely related to the payback period.
• The payback period should not be used as a primary selection criterion, since it disregards the cash flows generated after it.
• Mutually exclusive alternative automation projects should be evaluated only by selecting the project with the highest NPV; using the highest rate of return is incorrect (see Refs. [30–32]).
• When selecting a subset of projects from a group of independent projects in order to respect some restriction, the selection objective should be to maximize the NPV of the subset (see the sketch after this list).
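As a sketch of the last point, the following fragment enumerates subsets of hypothetical independent projects and selects the one that maximizes total NPV within a capital budget; the project data are invented for illustration.

```python
from itertools import combinations

# (name, capital required, NPV) -- hypothetical data
projects = [("robotic welding cell", 168_000, 18_000),
            ("AGV material handling", 120_000, 25_000),
            ("vision inspection",      60_000, 14_000),
            ("palletizing robot",      90_000, 11_000)]
budget = 300_000

best_subset, best_npv = (), 0.0
for r in range(1, len(projects) + 1):
    for subset in combinations(projects, r):
        cost = sum(p[1] for p in subset)
        value = sum(p[2] for p in subset)
        if cost <= budget and value > best_npv:
            best_subset, best_npv = subset, value

print("selected projects:", [p[0] for p in best_subset])
print(f"capital used: ${sum(p[1] for p in best_subset):,}; total NPV: ${best_npv:,.0f}")
```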
30.3 Alternative Approach to the Rationalization of Automation Projects

30.3.1 Strategic Justification of Automation
The evaluation analysis described so far in this chapter is financially driven and covers the span of the automation project itself. However, the adoption of automation systems and their related technology has undeniable strategic and intangible benefits that surpass the capabilities of traditional economic evaluation methods. There are well-documented issues, discussed at length elsewhere in the literature, that make it hard to justify and adopt strategic automation technologies [33]. Additional difficulties in justifying integrated automation technologies may arise from the cultural and organizational aspects of implementing them. The success or failure of the implementation depends heavily on the commitment of all participants involved, both people and companies. Such widespread impacts are usually not considered in traditional economic evaluation methodologies, which normally overlook the need for consensus on the technology to adopt.
Flexibility is recurrently cited in the literature as a key benefit of implementing automated systems [34]. Jönsson et al. [35] analyze the types of flexibility resulting from the operation of automated systems:
• Product mix flexibility: different products use the same production process at the same time.
• Product volume flexibility: the process can be set for additional or reduced throughput.
• Equipment multifunction flexibility: devices perform different production tasks with tool changes.
• New product flexibility: the process can be changed and reprogrammed to accommodate product changes or new products.
Additional frequently reported benefits include improved product quality, process resilience and sustainability, shorter response time, increased product and process consistency, lower inventory levels, improved worker safety, effective management processes, shorter cycle times and setups, and support for continuous improvement [3–6].
30.3.2 Analytical Hierarchy Process (AHP)
Rarely do automation systems comprise out-of-the-box solutions. In fact, most automated systems nowadays comprise a collection of equipment properly integrated into an effective solution. This integration process makes the evaluation of alternative solutions by traditional economic methods more complex, due to the many possible combinations of the available components and the inherent fuzziness of the selection criteria. To assist the equipment selection process, AHP has been implemented in the form of decision support systems (DSSs) [36]. AHP was developed by Saaty [37] as a way to convey the relative importance of a set of activities in a quantitative and qualitative multicriteria decision problem. The AHP method is based on three principles: the structure of the model [38], comparative judgment of alternatives and criteria [39], and synthesis of priorities. Despite the wide utilization of AHP in applications such as casting process selection [40], improvement of human performance in decision-making [41], and quality-based investment decisions [42], the method has shortcomings related to its inability to handle the decision-maker's uncertainty and imprecision in assigning values in the pairwise comparison process. Another difficulty with AHP lies in the fact that not every decision-making problem can be cast into a hierarchical structure. Next, a proposed method implementing AHP is reviewed using a numerical application for computer numerical control (CNC) machine selection.
The AHP-PROMETHEE Method
The preference ranking organization method for enrichment evaluation (PROMETHEE) is a multicriteria decision-making method developed by Brans et al. [42, 43]. Implementation of PROMETHEE requires two types of information: (1) the relative importance (weights) of the criteria considered and (2) the decision-maker's preference function (for comparing the contribution of alternatives in terms of each separate criterion). The weight coefficients are calculated in this case using AHP. Figure 30.7 presents the steps of the AHP-PROMETHEE integration method.
The AHP-PROMETHEE method is applied to a manufacturing company that wants to purchase a number of milling machines in order to reduce work-in-process inventory and replace old equipment [36]. A decision-making team was formed, and its first task was to determine the five candidate milling machines for purchase and six evaluation criteria: price, weight, power, spindle, diameter, and stroke. The decision structure is depicted in Fig. 30.8. The next step is for the decision team experts to assign weights to the decision criteria on a pairwise basis, as presented in Table 30.7. Results from the AHP calculations are shown in Table 30.8 and show that the top three criteria for this case are spindle, weight, and
diameter. The consistency ratio of the pairwise comparison matrix is 0.032 < 0.1, which indicates that the weights are consistent and valid. The AHP results in Table 30.8 are obtained with a largest eigenvalue λmax = 6.201, a consistency index CI = 0.040, a random index RI = 1.24, and a consistency ratio CR = 0.032.
Following the application of AHP, the PROMETHEE steps are carried out. The first step comprises the evaluation of the five alternative milling machines according to the evaluation criteria previously defined. The resulting evaluation matrix is shown in Table 30.9. Next, a preference function (PF) and related thresholds are defined by the decision-making team for each criterion. The PFs and thresholds consider features of the milling machines and the company's purchasing policy. Table 30.10 shows the preference functions and their thresholds.
The partial ranking of alternatives is determined according to PROMETHEE I, based on the positive and negative flows shown in Table 30.11. The resulting partial ranking is shown in Fig. 30.9 and reveals that machines 5, 2, 4, and 1 are preferred over machine 3, and that machine 4 is preferred over machine 1. The partial ranking also shows that machines 5, 4, and 2 are not comparable, nor are machines 5 and 1, or machines 2 and 1. PROMETHEE II uses the net flow in Table 30.11 to compute a complete ranking and identify the best alternative. According to the complete ranking, machine 5 is selected as the best alternative, and the remaining machines are ranked machine 4, machine 2, machine 1, then machine 3 (Fig. 30.10). The geometrical analytic for interactive aid (GAIA) plane [44] representing the decision (Fig. 30.11) shows that price has great differentiation power, that criteria 1 (price) and 3 (power) are conflicting, that machine 2 is very good in terms of criterion 3 (power), and that machine 3 is very good in terms of criteria 2 and 4. The vector π (decision axis) represents the compromise solution (the selection must be in this direction).
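The AHP weight and consistency calculations referenced above (λmax, CI, RI, CR) can be sketched in a few lines of Python. The 3 × 3 pairwise matrix below is a hypothetical illustration; the six-criteria matrix of Table 30.7 can be substituted, although the published weights in Table 30.8 reflect the authors' own rounding and may not be reproduced exactly.

```python
import numpy as np

# Hypothetical 3x3 pairwise comparison matrix on Saaty's 1-9 scale
# (the six-criteria matrix of Table 30.7 can be substituted here).
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 3.0],
    [1/5, 1/3, 1.0],
])

eigvalues, eigvectors = np.linalg.eig(A)
k = np.argmax(eigvalues.real)
lam_max = eigvalues[k].real
weights = np.abs(eigvectors[:, k].real)
weights /= weights.sum()                 # normalized priority vector

n = A.shape[0]
CI = (lam_max - n) / (n - 1)             # consistency index
RI = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24}[n]   # random index (Saaty)
CR = CI / RI                             # consistency ratio, accepted if < 0.1

print("weights:", np.round(weights, 3))
print(f"lambda_max={lam_max:.3f}  CI={CI:.3f}  CR={CR:.3f}")
```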
30.4 Final Additional Considerations in Automation Rationalization
30.4.1 Investment Risk Effects on the Minimum Acceptable Rate of Return
Capital investment projects involve particular levels of risk, where risk is an expected measure of the project aspects that could fail. Including risk in the economic analysis would imply taking into consideration the probabilities of occurrence of each possible scenario, turning the evaluation model
Fig. 30.7 Steps of the AHP-PROMETHEE method (Stage 1: data gathering; Stage 2: AHP calculations; Stage 3: PROMETHEE calculations; Stage 4: decision making)
Fig. 30.8 Decision structure (goal: selection of the best equipment; criteria: price, weight, power, spindle, diameter, stroke; alternatives: machines 1–5)
highly complex. A simpler way to incorporate risk into the normal economic evaluation is to relate the MARR to project risk [45–47], resulting in higher MARR values for projects
with higher risk levels. Additionally, other characteristics of the automation project may be related to MARR as well. Consider, for example, the possibility that the
proposed equipment to be acquired could be utilized in current or alternative automation projects, diminishing the potential losses from a project that fails due to internal or external causes. As a note of caution, higher MARR rates should be applied only to those components of the proposed automation project that are exposed to the risk being considered; in this way, the aspects of the project that are under good levels of control by the organization are not penalized. By expecting higher levels of return from risky decisions, companies can force strategic concerns into the initial configuration of automation projects. In such a scenario, companies have strong incentives to consider low-risk projects with reconfigurable automation equipment, which allows the use of lower MARR rates [48, 49].
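As a rough illustration of this risk adjustment, the following sketch evaluates the MARR expression given in Sect. 30.2 with and without a risk premium; the capital-structure figures are hypothetical.

```python
def marr(k_e, k_d, tax_rate, debt_ratio, risk_premium=0.0):
    """MARR = ke(1 - c) + kd(1 - T)c, optionally increased by a project risk premium."""
    base = k_e * (1 - debt_ratio) + k_d * (1 - tax_rate) * debt_ratio
    return base + risk_premium

# Hypothetical capital structure: 14% equity return, 7% debt cost, 27% tax, 40% debt ratio
print(round(marr(0.14, 0.07, 0.27, 0.40), 4))                        # baseline MARR
print(round(marr(0.14, 0.07, 0.27, 0.40, risk_premium=0.03), 4))     # risk-adjusted MARR
```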
30.4.2 Equipment Depreciation and Salvage Value Profiles
Financial evaluation of automation projects is deeply affected by how equipment capital is depreciated during the project life and by how the estimated salvage values at the end of the project are determined [50–52]. Two alternative depreciation mechanisms (the loss of monetary value of equipment within the company assets) are to be considered: tax and book depreciation. Tax depreciation methods consist of a systematic mechanism to reduce the capital asset value over time. Since depreciation is tax-deductible, companies usually seek to depreciate assets in the shortest period of time possible, usually determined by the tax code of the country where the company operates. Careful consideration of tax depreciation is due when comparing product-specific automation with reconfigurable systems in cases where the projected product life is shorter than the tax-law depreciation life. In this scenario, product-specific systems can be sold and the remaining tax depreciation would be lost; alternatively, the reconfigurable system would be decommissioned and kept by the company until fully depreciated and then sold. Due to its reconfigurability, the reconfigurable system will usually remain in service for no less than the tax depreciation period. Tax depreciation terms depend on local tax and accounting conventions, regardless of the equipment operation. Book depreciation schedules are predictions of the equipment's realizable salvage value at the end of its useful life. The realizable salvage value can be determined based on the internal asset value for the organization (given its usability in alternative projects) or by the asset value in the salvage market. Redeploying automation equipment within the same company usually results in higher profit values than selling to third parties. Product-specific automation equipment usually has value for potential salvage buyers because of its key system components. Custom product and process tooling will have no value for equipment resellers, reducing the salvage to raw materials, if usable. Alternatively, if the system is based on known modular and reusable components, its value may be comparable to that of new components. A company's inventory of reusable automation resources will enable it to quickly redeploy such resources into new systems, improving its time to market and equipment profitability, among other positive impacts [53].

Table 30.7 Pairwise comparison of criteria and weights
Criteria   Price  Weight  Power  Spindle  Diameter  Stroke
Price      1.00   0.52    0.62   0.41     0.55      0.59
Weight     1.93   1.00    2.60   1.21     1.30      2.39
Power      1.61   0.38    1.00   0.40     0.41      1.64
Spindle    2.44   0.83    2.47   1.00     0.40      0.46
Diameter   1.80   0.77    2.40   2.49     1.00      2.40
Stroke     1.69   0.42    0.61   2.17     0.42      1.00

Table 30.8 AHP results
Criteria   Weights (ω)
Price      0.090
Weight     0.244
Power      0.113
Spindle    0.266
Diameter   0.186
Stroke     0.101

Table 30.9 Evaluation matrix for the milling machine case
Criteria   Unit   Max/min  Weight  Machine 1  Machine 2  Machine 3  Machine 4  Machine 5
Price      US $   Min      0.090   936        1265       680        650        580
Weight     kg     Min      0.244   4.8        6.0        3.5        5.2        3.5
Power      W      Max      0.113   1300       2000       900        1600       1050
Spindle    rpm    Max      0.266   24,000     21,000     24,000     22,000     25,000
Diameter   mm     Max      0.186   12.7       12.7       8.0        12.0       12.0
Stroke     mm     Max      0.101   58         65         50         62         62

Table 30.10 Preference functions
Criteria   Preference function type   q        p        s
Price      Level                      600      800      –
Weight     Gaussian                   –        –        4
Power      Level                      800      1200     –
Spindle    Level                      20,000   23,000   –
Diameter   Gaussian                   –        –        6
Stroke     V-shape                    –        50       –
q: indifference threshold (the largest deviation considered negligible on that criterion); p: preference threshold (the smallest deviation considered decisive in the preference of one alternative over another); s: Gaussian threshold (only used with the Gaussian preference function)
Table 30.11 PROMETHEE flows
Alternatives   Φ+      Φ−      Φ
Machine 1      0.0199  0.0139  0.0061
Machine 2      0.0553  0.0480  0.0073
Machine 3      0.0192  0.0810  −0.0618
Machine 4      0.0298  0.0130  0.0168
Machine 5      0.0478  0.0163  0.0315
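The positive, negative, and net flows above can be reproduced in outline with the short Python sketch below, which applies the Table 30.9 evaluation matrix, the Table 30.8 weights, and the Table 30.10 preference functions. The exact figures depend on threshold conventions and rounding, so the output is only expected to approximate Table 30.11.

```python
import math

# Evaluation matrix (Table 30.9): criterion -> (direction, weight, values per machine)
criteria = {
    "price":    ("min", 0.090, [936, 1265, 680, 650, 580]),
    "weight":   ("min", 0.244, [4.8, 6.0, 3.5, 5.2, 3.5]),
    "power":    ("max", 0.113, [1300, 2000, 900, 1600, 1050]),
    "spindle":  ("max", 0.266, [24000, 21000, 24000, 22000, 25000]),
    "diameter": ("max", 0.186, [12.7, 12.7, 8.0, 12.0, 12.0]),
    "stroke":   ("max", 0.101, [58, 65, 50, 62, 62]),
}

def preference(name, d):
    """Preference functions and thresholds from Table 30.10."""
    if d <= 0:
        return 0.0
    if name == "price":                      # level, q=600, p=800
        return 0.0 if d <= 600 else (0.5 if d <= 800 else 1.0)
    if name == "power":                      # level, q=800, p=1200
        return 0.0 if d <= 800 else (0.5 if d <= 1200 else 1.0)
    if name == "spindle":                    # level, q=20000, p=23000
        return 0.0 if d <= 20000 else (0.5 if d <= 23000 else 1.0)
    if name == "weight":                     # Gaussian, s=4
        return 1.0 - math.exp(-d * d / (2 * 4.0 ** 2))
    if name == "diameter":                   # Gaussian, s=6
        return 1.0 - math.exp(-d * d / (2 * 6.0 ** 2))
    if name == "stroke":                     # V-shape, p=50
        return min(d / 50.0, 1.0)

n = 5
phi_plus, phi_minus = [0.0] * n, [0.0] * n
for a in range(n):
    for b in range(n):
        if a == b:
            continue
        pi_ab = 0.0
        for name, (direction, w, vals) in criteria.items():
            d = vals[a] - vals[b] if direction == "max" else vals[b] - vals[a]
            pi_ab += w * preference(name, d)
        phi_plus[a] += pi_ab / (n - 1)
        phi_minus[b] += pi_ab / (n - 1)

for i in range(n):
    print(f"Machine {i+1}: phi+={phi_plus[i]:.4f} phi-={phi_minus[i]:.4f} "
          f"phi={phi_plus[i] - phi_minus[i]:.4f}")
```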
Fig. 30.9 PROMETHEE I partial ranking
Fig. 30.10 PROMETHEE II complete ranking
Fig. 30.11 GAIA decision plane
30.5 Conclusions
Although computation of economic performance indicators for automation projects is often straightforward, rationalization of automation technology is fraught with difficulty and many opportunities for long-term improvements are lost because purely economic evaluation apparently showed no direct economic benefit. Modern methods, taking into account risks involved in technology implementation or comparison of complex projects, are emerging to avoid such high-impact mistakes [54–56]. In this chapter, in addition to
providing the traditional economic rationalization methodology, this chapter includes strategic considerations and a discussion of the current trends of automation systems towards agility and reconfigurability. These last two aspects of modern automation systems should be considered together with the collaboration environment already established in modern production and automation systems [21, 22, 28, 50, 57]. A growing focus on extremely complex technical processes has turned the production of goods and services into collaborative efforts; therefore, collaboration should necessarily be taken into account in the economic evaluation of modern automation. Quality of service (QoS) is also an important part of the rationalization of automation projects, since it ensures that automation benefits are reached in a sustainable manner [4, 58]. This benefit of QoS also has an impact at the strategic level, since it establishes organizational guidelines and policies for achieving automation sustainability, for example, through the reconfigurability of alternative automation projects. The evaluation methods are finally complemented with the analytic hierarchy process to cover situations where the selection criteria are not only economic but also consider other, sometimes conflicting, aspects.
30.6 Recommended Additional Reading
Readers interested in more detailed published material may look for the following references:
Production Economics: Evaluating Costs of Operations in Manufacturing and Service Industries, Anoop Desai, Aashi Mital, CRC Press, 2018, ISBN 9781138033269
The Digital Shopfloor: Industrial Automation in the Industry 4.0 Era, Soldatos, J., Lazaro, O., Cavadini, F. (eds.), River Publishers Series in Automation, Control and Robotics, 2019, ISBN 9788770220415
Project Evaluation: Making Investments Succeed, Samset, Knut, Fagbokforlaget, 2003, ISBN 8251918405
Project Evaluation Explained: Project Management Books Volume 5, Akdeniz, Can, 2015, ISBN 1518801838
Acknowledgments Parts of Ch. 41, Quality of Service (QoS) of Automation, by Heinz-Hermann Erbe from the first edition of this Handbook of Automation have been used in this chapter [16]. The author sincerely thanks Professor Erbe for the materials used in this chapter of the handbook.
References 1. Brown, S., Squire, B., Blackmon, K.: The contribution of manufacturing strategy involvement and alignment to world-class manufacturing performance. Int. J. Oper. Prod. Manag. 27(3–4), 282–302 (2007) 2. Azemi, F., Simunovic, G., Lujic, R.: The use of advanced manufacturing technology to reduce product cost. Acta Polytech. Hung. 16(7), 115–131 (2019) 3. Palucha, K.: World class manufacturing model production management. Arch. Mater. Sci. Eng. 58, 227–234 (2012) 4. Hyun, Y.G., Lee, J.Y.: Trends analysis and direction of business process automations, RPA (Robotic Process Automation) in the times of convergence. J. Digit. Converg. 16(11), 313–327 (2018) 5. Petrillo, A., De Felice, F., Zomparelli, F.: Performance measurement for world class manufacturing: a model for the Italian automotive industry. Total Qual. Manag. Bus. Excell. 30(7–8), 908–935 (2019) 6. Quaile, R.: What does automation cost – calculating total lifecycle costs of automated production equipment such as automotive component manufacturing systems isn’t straightforward. Manuf. Eng. 138(5), 175 (2007) 7. Vazquez-Bustelo, D., Avella, L., Fernandez, E.: Agility drivers, enablers and outcomes – empirical test of an integrated agile manufacturing model. Int. J. Oper. Prod. Manag. 27(12), 1303– 1332 (2007) 8. Mills, J.J., Stevens, G.T., Huff, B., Presley, A.: Justification of robotics systems. In: Nof, S.Y. (ed.) Handbook of Industrial Robotics, pp. 675–694. Wiley, New York (1999) 9. Boothroyd, G., Poli, C., Murch, L.E.: Automatic Assembly. Marcel Dekker, New York (1982) 10. Heller, J., Chylinski, M., de Ruyter, K., Keeling, D.I., Hilken, T., Mahr, D.: Tangible service automation: decomposing the technology-enabled engagement process (TEEP) for augmented reality. J. Serv. Res. 24(1), 84 (2020) 11. Househ, M., Kushniruk, A., Cloutier-Fisher, D., Carleton, B.: Technology enabled knowledge exchange: development of a conceptual framework. J. Med. Syst. 35, 713–721 (2011) 12. Concentrix, How to engage customers through technology. Available at www.concentrix.com. Last access date 10 June 2020 13. Ontario College of Art and Design, OCAD, Technology-enabled learning strategy. Available at www.ocadu.ca/Assets/content/ teaching-learning/TEL%20Strategy.pdf. Last access date 11 July 2020 14. Shi, L., Newnes, L., Culley, S., Allen, B.: Modelling, monitoring and evaluation to support automatic engineering process management. Proc. IMechE Part B J. Eng. Manuf. 232(1), 17–31 (2018) 15. Nof, S.Y., Wilhelm, W.E., Warnecke, H.-J.: Industrial Assembly. Chapman Hall, London (1997) 16. Erbe, H.H.: Quality of Service (QoS) of automation. In: Nof, S.Y. (ed.) Handbook of Automation, pp. 715–733. Springer, New York (2009)
17. Chen, Z., Lee, J., Qui, H.: Infotronics technologies and predictive tools for next-generation maintenance systems. In: Proceedings of the 11th Symposium on Information Control Problems in Manufacturing. Elsevier (2004) 18. Berger, R., Hohwieler, E.: Service platform for web-based services. Proc. 36th CIRP Int. Semin. Manuf. Syst. Produktionstech. 29, 209–213 (2003) 19. Lay, G.: Is high automation a dead end? Cutbacks in production overengineering in the German industry. In: Proceedings of the 6th IFAC Symposium on Cost Oriented Automation. Elsevier, Oxford (2002) 20. Nnaji, B.O., Wang, Y., Kim, K.Y.: Cost effective product realization. In: Proceedings of the 7th IFAC Symposium on Cost Oriented Automation. Elsevier, Oxford (2005) 21. Nof, S.Y., Ceroni, J., Jeong, W., Moghadam, M.: Revolutionizing Collaboration Through e-Work, e-Business, and e-Service Springer ACES Book Series. Springer (2015) 22. Bruns, F.W.: Hyper-bonds – distributed collaboration in mixed reality. Annu. Rev. Control. 29(1), 117–123 (2005) 23. Müller, D.: Designing learning spaces for mechatronics. In: Müller, D. (ed.) MARVEL – Mechatronics Training in Real and Virtual Environments Impuls, vol. 18. NA/Bundesinstitut für Berufsbildung, Bremen (2005) 24. Erbe, H.H.: Introduction to low cost/cost effective automation. Robotica. 21(3), 219–221 (2003) 25. Hossein Tabatabaie, S.M., Tahami, H., Murthy, G.S.: A regional life cycle assessment and economic analysis of camelina biodiesel production in the Pacific Northwestern US. J. Clean. Prod. 172, 2389–2400 (2018) 26. openLCA.: Available at: www.openlca.org. Last access date 11 July 2020 27. Yeom, D.J., Na, E.J., Lee, M.Y., Kim, Y.J., Kim, Y.S., Cho, C.S.: Performance evaluation and life cycle cost analysis model of a gondola-type exterior wall painting robot. Sustainability. 9, 1809 (2017) 28. Nof, S.Y., Morel, G., Monostori, L., Molina, A., Filip, F.: From plant and logistics control to multi-enterprise collaboration. Annu. Rev. Control. 30(1), 55–68 (2006) 29. White, D.C.: Calculating ROI for automation projects. Emerson Process Manag. (2007). Available at www.EmersonProcess.com/ solutions/Advanced Automation. Last access date 10 Aug 2020 30. Stevens Jr., G.T.: The Economic Analysis of Capital Expenditures for Managers and Engineers. Ginn, Needham Heights (1993) 31. Blank, L.T., Tarquin, A.J.: Engineering Economy, 7th edn. McGraw-Hill, New York (2011) 32. Sullivan, W.G., Wicks, E.M., Koelling, C.P.: Engineering Economy, 17th edn. Pearson, New York (2018) 33. Kuzgunkaya, O., ElMaraghy, H.A.: Economic and strategic perspectives on investing in RMS and FMS. Int. J. Flex. Manuf. Syst. 19(3), 217–246 (2007) 34. Raouf, S.I.A., Ahmad, I. (eds.): Flexible Manufacturing-Recent Developments in FMS, Robotics, CAD/CAM, CIM. Elsevier (1985) 35. Jönsson, M., Andersson, C., Stal, J.E.: Relations between volume flexibility and part cost in assembly lines. Robot. Comput. Integr. Manuf. 27, 669–673 (2011) 36. Dagdeviren, M.: Decision making in equipment selection: an integrated approach with AHP and PROMETHEE. J. Intell. Manuf. 19, 397–406 (2008) 37. Saaty, T.L.: The Analytic Hierarchy Process. McGraw-Hill, New York (1980) 38. Acharya, V., Sharma, S.K., Gupta, S.K.: Analyzing the factors in industrial automation using analytic hierarchy process. Comput. Electr. Eng. 71, 877–886 (2018) 39. Wang, J.J., Yang, D.L.: Using hybrid multi-criteria decision aid method for information systems outsourcing. Comput. Oper. Res. 34, 3691–3700 (2007)
698 40. Chen, Y., Bouferguene, A., Al-Hussein, M.: Analytic hierarchy process-simulation framework for lighting maintenance decisionmaking based on clustered network. J. Perform. Constr. Facil. 32(1) (2018) 41. Güngör, Z., Arikan, F.: Using fuzzy decision making system to improve quality-based investment. J. Intell. Manuf. 18, 197–207 (2007) 42. Brans, J.P., Vincke, P.H.: A preference ranking organization method. Manag. Sci. 31, 647–656 (1985) 43. Brans, J.P., Vincke, P.H., Mareschall, B.: How to select and how to rank projects: the PROMETHEE method. Eur. J. Oper. Res. 14, 228–238 (1986) 44. Albadvi, A., Chaharsooghi, S.K., Esfahanipour, A.: Decision making in stock trading: an application of PROMETHEE. Eur. J. Oper. Res. 177, 673–683 (2007) 45. Kuznietsova, N., Kot, L., Kot, O.: Risks of innovative activity: economic and legal analysis. Baltic J. Econ. Stud. 6(1), 67–73 (2020) 46. Hassler, M.L., Andrews, D.J., Ezell, B.C., Polmateer, T.L.: Multiperspective scenario-based preferences in enterprise risk analysis of public safety wireless broadband network. Reliab. Eng. Syst. Saf. 197 (2020) 47. Wang, Z., Jia, G.: Augmented sample-based approach for efficient evaluation of risk sensitivity with respect to epistemic uncertainty in distribution parameters. Reliab. Eng. Syst. Saf. 197 (2020) 48. MacKenzie, C.A., Hu, C.: Decision making under uncertainty for design of resilient engineered systems. Reliab. Eng. Syst. Saf. 197 (2018) 49. Hassler, M.L., Andrews, D.J., Ezell, B.C., Polmateer, T.L., Lambert, J.H.: Multi-perspective scenario-based preferences in enterprise risk analysis of public safety wireless broadband network. Reliab. Eng. Syst. Saf. 197 (2018) 50. Alkaraan, F.: Strategic investment decision-making practices in large manufacturing companies: a role for emergent analysis techniques? Meditari Account. Res. 28(4), 633–653 (2020) 51. Soares, B.A.R., Henriques, E., Ribeiro, I.: Cost analysis of alternative automated technologies for composite parts production. Int. J. Prod. Res. 57(6), 1797–1810 (2019) 52. Sisbot, S.: Execution and evaluation of complex industrial automation and control projects using the systems engineering approach. Syst. Eng. 14(2), 193–207 (2011) 53. Oztemel, E., Gursev, S.: Literature review of Industry 4.0 and related technologies. J Intell. Manuf. 31, 127–182 (2020)
J. A. Ceroni 54. Santos, C., Mehrsai, A., Barros, A.C., Araújo, M., Ares, E.: Towards Industry 4.0: an overview of European strategic roadmaps. Procedia Manuf. 13, 972–979 (2017) 55. Becker, T., Stern, H.: Future trends in human work area design for cyber-physical production systems. Procedia CIRP. 57, 404–409 (2016) 56. Dombrowski, U., Richter, T., Krenkel, P.: Interdependencies of industrie 4.0 & lean production systems – a use cases analysis. Procedia Manuf. 11, 1061–1068 (2017) 57. Ceroni, J., Quezada, L.: Development of collaborative production systems in emerging economies. In Special Issue Based on the 19th International Conference on Production Research International Journal of Production Economics, vol. 122, no. 1 (2009) 58. Miranda, P.A., Garrido, R.A., Ceroni, J.A.: A collaborative approach for a strategic logistic network design problem with fleet design and customer clustering decisions. In Computers & Industrial Engineering. Special issue on Collaborative e-Work Networks in Industrial Engineering, vol. 57, no. 1 (2009)
José A. Ceroni graduated as an Industrial Engineer from Pontifical Catholic University of Valparaiso, Chile, and received his Master of Science and PhD in Industrial Engineering from Purdue University, Indiana, USA. His research interests include collaborative production and control, industrial robotics systems, collaborative robotics agents, and collaborative control in logistics systems. He is member of the Board of the International Federation for Production Research, and a member of IFAC and IEEE.
Reliability, Maintainability, Safety, and Sustainability
31
Elsayed A. Elsayed
Contents
31.1 Introduction
31.2 Reliability
31.2.1 Non-repairable Systems
31.2.2 Repairable Systems
31.3 Maintainability
31.4 Safety
31.4.1 Fault-Tree Analysis
31.4.2 Failure Modes and Effects Analysis
31.5 Resilience
31.5.1 Definition
31.5.2 Resilience Quantification
31.6 Reliability, Maintainability, Safety, and Resilience (RMSR) Engineering
31.6.1 Resilience Quantifications of Supply Chain Network
31.6.2 Degradation Modeling in Manufacturing Systems
31.6.3 Product Reliability
31.7 Conclusion and Future Trends
References

Abstract
This chapter presents modeling and analysis of the reliability, maintainability, safety, and resilience of manufacturing and automation systems and their elements. It serves as a guide during the design and operation of highly "reliable" automation systems. It discusses reliability metrics for both non-repairable and repairable systems. The maintainability of the system and methods for its improvement are presented. The reliability and safety of systems are intertwined, and approaches for minimizing the effect of system failures on production output and safety are also discussed. The reliability metrics are then extended to include the impact of the system's failure due to external hazards on its performance by introducing resilience measures. The ability of the system to "absorb" the impact of the hazards and the ability to quickly recover to its normal performance level are captured in the quantification of its resilience.

Keywords
Reliability of manufacturing systems · Availability · Maintainability · Safety · Resilience quantification · Burn-in testing · Accelerated life testing · Degradation modeling
E. A. Elsayed () Department of Industrial and Systems Engineering, Rutgers University, Piscataway, NJ, USA e-mail: [email protected]
31.1 Introduction
The continuous and accelerated rate of development of new materials, manufacturing processes, sensors, data acquisition systems, and software with extensive capabilities has resulted in the introduction of many new products, monitoring and control systems, and the automation of many processes and their functions. One of the most important characteristics of these products and systems is their safety and reliability. Assessing and ensuring the safety and reliability of such complex systems is challenging, and it requires the expertise of multidisciplinary teams. In addition, the increase in cyber threats and natural hazards such as hurricanes, earthquakes, fires, and floods has extended the system reliability concept to include system resilience. Automation systems are complex systems that require the integration of machines, sensors, control, software, and safety to provide high reliability and uninterrupted function over their design life. This chapter provides definitions and methods for estimating and improving systems' reliability, availability, maintainability, safety, and resilience.
31.2 Reliability
The generic reliability term is used loosely to imply the durability of components, units, or systems. It also implies their expected operational life, their availability for use, and their remaining useful lives. However, it is important to quantify "reliability" so that proper actions for improvement can be taken. Therefore, we classify systems (components, subsystems, units) as either non-repairable or repairable. Each has different reliability metrics and methods for their estimation. Examples of non-repairable systems include satellites, missiles, rockets, and disposable products. Examples of repairable systems include power-distribution networks, automation and manufacturing systems, automobiles, airplanes, and most household appliances. This section introduces definitions and quantification of the reliability metrics of both types of systems.
31.2.1 Non-repairable Systems The formal definition of reliability of non-repairable system is the probability that a system, subsystem, component, product operates properly for a specified period under design conditions without failure. It is a metric of the product (system) performance [1]. The reliability is therefore a monotonically decreasing function of time and systems (products) are considered operating properly until failures occur or when the reliability reaches an unacceptable specified threshold [1]. Therefore, the failure-time distributions of the system (and its components) are important for estimating the system’s reliability. These distributions are obtained through reliability testing, failure observations in the field, engineering experience of similar components, and reliability information references. The characteristics of the distribution are based on the components’ failure rate, which is defined as “the probability that a failure per unit time occurs in a small interval of time given that no failure has occurred prior to the beginning of the interval” [1]. In general, the failure rate of a product, piece of equipment, or a system, exhibits the well-known bathtub curve shown in Fig. 31.1. The limit of the failure rate as time approaches zero is referred to as the hazard rate. The hazard rate and failure rate are used interchangeably in the reliability engineering field. There are three failure rate regions as shown in Fig. 31.1. Region 1 is the early life region where the failure rate starts with a high value attributed to poor assembly, material defects, manufacturing defects, design blunders, and others. The manufacturers attempt to “weed out” such early failures before the release of the product or the system. It is followed by Region 2 where the failure rate is constant and occurs randomly during the period T1 − T2 which is common for most of the electronic products. In Region 3, the failure rate increases with time due to wear out and degradation of the
Fig. 31.1 Failure rate regions (1: early life region; 2: constant failure rate region; 3: wear-out region)
product as in the case of most of the rotating (or sliding) mechanical components such as shafts, brake systems, and bearings. The reliability of a product or system is estimated based on its failure rate function which is often expressed by a probability density function (p.d.f.). This is demonstrated by considering Region 2 where the failure rate (used interchangeably with hazard rate) is constant and is expressed as h(t) = λ
(31.1)
where h(t) is the hazard rate function at time t. Since the failure rate is constant then the failure times are exponentially distributed p.d.f., f (t), given by f (t) = λe−λt
(31.2)
The probability of failure at time t, F(t) (or cumulative distribution function, CDF), is

F(t) = ∫₀ᵗ f(τ) dτ    (31.3)
The reliability, the probability of survival to time t, is then obtained as

R(t) = 1 − F(t) = 1 − ∫₀ᵗ f(τ) dτ    (31.4)
Dividing Eq. (31.2) by Eq. (31.4) results in

f(t)/R(t) = λe^{−λt}/e^{−λt} = λ = hazard rate    (31.5)
Equations (31.3), (31.4), and (31.5) are the key equations for the reliability modeling.
Back to Fig. 31.1, the hazard rates in Regions 1 and 3 are decreasing and increasing, respectively. One of the most commonly used expressions in practice for modeling the hazard function (decreasing or increasing) under this condition is

h(t) = (γ/θ)(t/θ)^{γ−1}    (31.6)

This model is referred to as the Weibull model, and its probability density function of the failure-time distribution, f(t), is given as

f(t) = (γ/θ)(t/θ)^{γ−1} e^{−(t/θ)^γ},  t > 0,    (31.7)

where θ and γ are positive and are referred to as the characteristic life and the shape parameter of the distribution, respectively. When γ = 1, f(t) becomes an exponential density. When γ = 2, the density function becomes a Rayleigh distribution (linearly increasing failure rate). Figure 31.2 shows the p.d.f. of the Weibull model for different values of the shape parameter γ. Other models may be used to describe the failure rate as a function of the operating conditions (temperature, humidity, vibration, shocks, impact loads, voltage, current, and others). A clear understanding of the failure mechanisms and failure modes is key to obtaining accurate estimates of the unit's reliability function. For example, the reliability function of the Weibull model in Fig. 31.2 is obtained using Eq. (31.4) and is expressed as given in Eq. (31.8); the corresponding reliability graphs are shown in Fig. 31.3.

R(t) = e^{−(t/θ)^γ}    (31.8)

Fig. 31.2 Weibull density function for different γ and θ = 5000
Fig. 31.3 The reliability function for different γ and θ = 5000

For the Weibull model in Fig. 31.3, reliability decreases rapidly as the shape parameter increases.
The parameters of the reliability function are obtained using failure-time data obtained through actual reliability testing, field data, historical data, or other data sources. The data are then analyzed using reliability software, mean rank or median rank models, or probability-distribution fitting approaches [1]. In addition to the failure rate and reliability function, the reliability metrics of non-repairable systems include the mean time to failure (MTTF), time to first failure (TTFF), and mean residual life (MRL). They are estimated using Eqs. (31.3), (31.6), and (31.8) and are expressed as

MTTF = ∫₀^∞ R(t) dt    (31.9)

Alternatively, it can be obtained as

MTTF = ∫₀^∞ t f(t) dt

For the Weibull model,

MTTF = θ Γ(1 + 1/γ)
where Γ(·) is the gamma function, whose value can be obtained from gamma tables or by using software such as Matlab©. The TTFF is commonly used for one-shot products such as airbags, missiles, and disposable medical devices. When a batch of products is released for actual use, the producers are interested in estimating the time of the first failure from this batch (this is important for product recalls or for estimating warranty length). Assume a batch of N units is produced and that the failure-time distribution is exponential with the p.d.f. for a single unit given by Eq. (31.2), where 1/λ is the design life of the units. The probability that the first failure occurs in [t, t + dt] is f₁(t):
f₁(t) = N f(t) [∫ₜ^∞ f(τ) dτ]^{N−1}
In other words, it is the probability that a unit fails in [t, t + dt] and the remaining N − 1 units fail in [t, ∞]. Other characteristics can then be obtained based on the above expression.
Other reliability metrics include the mean residual life (MRL), defined as

L(t) = E[T − t | T ≥ t],  t ≥ 0    (31.10)

Indeed, the mean residual life function is the expected remaining life, T − t, given that the product, component, or system has survived to time t [2]. It is expressed as

L(t) = (1/R(t)) ∫ₜ^∞ τ f(τ) dτ − t    (31.11)
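As an illustration of these metrics (not part of the original chapter), the following minimal Python sketch evaluates the Weibull reliability function of Eq. (31.8) and the MTTF θΓ(1 + 1/γ) for the characteristic life θ = 5000 and the shape parameters used in Figs. 31.2 and 31.3.

```python
import math

def weibull_reliability(t, theta, gamma):
    """R(t) = exp(-(t/theta)^gamma), Eq. (31.8)."""
    return math.exp(-((t / theta) ** gamma))

def weibull_mttf(theta, gamma):
    """MTTF = theta * Gamma(1 + 1/gamma)."""
    return theta * math.gamma(1.0 + 1.0 / gamma)

theta = 5000.0
for gamma in (0.6, 1.0, 2.2, 3.2):          # shape parameters used in Fig. 31.2
    print(f"gamma={gamma}: R(4000)={weibull_reliability(4000, theta, gamma):.3f}, "
          f"MTTF={weibull_mttf(theta, gamma):.0f}")
```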
System Configurations
Systems are composed of subsystems and components that are configured to perform required functions. The failure rates of the components and their configuration in the system are considered simultaneously in estimating the overall system reliability metrics. The system configuration has a major impact on its reliability. The configuration of the system refers to the arrangement and interconnection among its components to provide the desired function of the system while meeting the required reliability metrics. The most common and simplest configuration is connecting the components to form a series system. The components may also be arranged in parallel to provide redundancy and improve system reliability. Configurations that combine both arrangements are common in series-parallel and parallel-series systems. In complex configurations, the components are arranged to form a network with multiple paths between nodes (components) to perform the desired function, as in electric power distribution networks and telecommunication networks. There are other special configurations such as gamma networks and k-out-of-n systems (where a minimum of k components out of a total of n components must operate properly for the system to perform its function). Once the system is configured, its reliability is evaluated and compared with the expected reliability metric of the system. If it does not meet the requirements, the system is redesigned by changing its configuration, selecting components with different failure rates, or a combination of both, until the system achieves its reliability goals. This is illustrated for the two most common configurations, the series and the parallel systems, as follows.
Consider a series system consisting of n components, where component i has a constant failure rate λi. The reliability of each component is Ri(t) = e^{−λi t}, and the system reliability Rs(t) is the product of the components' reliabilities:

Rs(t) = R1(t) × R2(t) × · · · × Rn(t)

and the system's MTTF = 1 / Σᵢ λᵢ.
Similarly, when n components are configured in parallel, the system's reliability is

Rs(t) = 1 − (1 − R1(t))(1 − R2(t)) · · · (1 − Rn(t))

and the system's MTTF is obtained using Eq. (31.9), which results in

MTTF = Σᵢ 1/λᵢ − Σᵢ Σ_{j>i} 1/(λᵢ + λⱼ) + Σᵢ Σ_{j>i} Σ_{k>j} 1/(λᵢ + λⱼ + λₖ) − · · · + (−1)^{n+1} 1/(Σᵢ λᵢ)
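A short sketch, with hypothetical failure rates, shows how the series and parallel expressions above can be evaluated; the parallel MTTF uses the inclusion-exclusion sum given in the last equation.

```python
from itertools import combinations
import math

def series_reliability(t, rates):
    """Series system of components with constant failure rates."""
    return math.exp(-sum(rates) * t)

def parallel_reliability(t, rates):
    """Parallel system: fails only when every component has failed."""
    prod = 1.0
    for lam in rates:
        prod *= (1.0 - math.exp(-lam * t))
    return 1.0 - prod

def parallel_mttf(rates):
    """Inclusion-exclusion expression for the MTTF of a parallel system."""
    n = len(rates)
    total = 0.0
    for k in range(1, n + 1):
        sign = (-1) ** (k + 1)
        for subset in combinations(rates, k):
            total += sign / sum(subset)
    return total

rates = [1e-4, 2e-4, 5e-4]   # hypothetical failure rates (per hour)
print(series_reliability(1000, rates), parallel_reliability(1000, rates))
print("series MTTF:", 1.0 / sum(rates), "parallel MTTF:", parallel_mttf(rates))
```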
Of course, many systems are configured as networks, and the estimation of their reliability becomes more complex, especially when there are many components (nodes) and links in the network. Methods for estimating the reliability metrics of such complex structures are presented in [1, 3, 4].
31.2.2 Repairable Systems Repairable systems are those systems that are repaired upon failure, such as power distribution networks, communication networks, computers, airplanes, automobiles, and most household appliances. The reliability metric of such systems is its availability which is defined as “the probability that the system is available for use when requested” [1]. It is important to note that availability is one of the key reliability metrics of repairable systems. There are variant definitions of availability; they include instantaneous availability, steadystate availability, mission availability, achieved availability, and others. Failures and repairs have a major impact on the system availability. Therefore, the availability estimation includes both the failure and repair rates of the system. There are several approaches for availability estimation such as simulation modeling of the system failure and repairs, the alternating renewal process, and the Markov process model. The Markov process is commonly used under the assumption that both the failure and repair rates are constant, that is, the failure time and the repair time follow exponential distributions. Both the instantaneous and steady-state availabilities are obtained by defining all of the mutually exclusive states of the system and deriving its state transition equations (from one state to another) which are then solved simultaneously. The availability is then estimated by summing the probabilities of
the system being in a working state (up). This is illustrated as follows. Consider a component that exhibits a constant failure rate, λ. When the component fails, it is repaired with a constant repair rate, μ. We define two mutually exclusive states for the component: State s0 represents the working state of the component, and State s1 represents the nonworking state. Let Pi(t) be the probability that the component is in State i at time t, and Ṗi(t) its derivative with time. The state-transition equations of the component are

dP0(t)/dt = Ṗ0(t) = −λP0(t) + μP1(t)    (31.12)
dP1(t)/dt = Ṗ1(t) = −μP1(t) + λP0(t)    (31.13)

Solution of these equations results in the instantaneous availability of the component:

P0(t) = μ/(λ + μ) + [λ/(λ + μ)] e^{−(λ+μ)t}    (31.14)

The steady-state availability is obtained by taking the limit of Eq. (31.14) as t → ∞:

A(t → ∞) = μ/(λ + μ)    (31.15)

Based on Eq. (31.15), a general steady-state availability of systems (units or components) is

A(t → ∞) = MTBF/(MTBF + MTTR)    (31.16)

where MTBF and MTTR are the mean time between failures and the mean time to repair, respectively.
The Markov model is inapplicable when the failure rate and/or repair rate are time dependent, and the alternating renewal process becomes an alternative approach for estimating the availability of systems. This process is briefly described as follows: consider a repairable system that has a failure-time distribution with probability density function w(t) and a repair-time distribution with probability density function g(t). When the system fails, it is repaired and restored to its initial working condition (repair may occur immediately or after a delay). This process of failure and repair is repeated [1]. We refer to this process as an alternating renewal process. We define f(t) as the density function of the renewal process; it is the convolution of w and g. In other words,

f*(s) = w*(s) g*(s)    (31.17)

where f*(s), w*(s), and g*(s) are the Laplace transforms of the corresponding density functions, that is, f*(s) = ∫₀^∞ e^{−st} f(t) dt. It is shown in [1] that the Laplace transform of the system availability is expressed as

A*(s) = [1 − w*(s)] / {s [1 − w*(s) g*(s)]}    (31.18)

The Laplace inverse of A*(s) results in the instantaneous availability, A(t). Often, a closed-form expression of the inverse of A*(s) is difficult to obtain, and numerical solutions or approximations become the only alternatives for obtaining A(t). The steady-state availability, A, is

A = lim_{t→∞} A(t) = lim_{s→0} s A*(s)    (31.19)

The alternating renewal process is a general approach for obtaining the system availability by simply substituting the Laplace transforms of both the p.d.f. of the failure-time and repair-time distributions in Eq. (31.18). The steady-state availability is also obtained by using Eq. (31.19), and the expected number of renewals during a time period tp is estimated by obtaining the inverse of the renewal density, expressed in terms of w*(s) and g*(s), and then integrating the inverse between 0 and tp. The renewal density is

m*(s) = w*(s) g*(s) / [1 − w*(s) g*(s)]

The expected number of failures in the interval (0, tp) is M(tp) = ∫₀^{tp} m(t) dt. This can be obtained numerically [5] or using closed-form expressions.
The second important reliability metric of repairable systems is the mean time between failures, which is obtained as follows: consider the alternating renewal process; the time to failure is governed by the failure-time density function w(t), and the system then goes under repair for a period based on the probability density function of the repair-time distribution g(t). Once repaired, the system resumes its function until it fails; it is then repaired, and the cycle is repeated. The mean time between failures is the expected length of time that includes the failure time and the repair time: MTBF = MTTF + MTTR.
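The Markov results above are easy to evaluate numerically; the sketch below (with hypothetical failure and repair rates) computes the instantaneous availability of Eq. (31.14) and the steady-state availability of Eq. (31.16).

```python
import math

def instantaneous_availability(t, lam, mu):
    """A(t) for a single repairable unit with constant failure/repair rates, Eq. (31.14)."""
    return mu / (lam + mu) + (lam / (lam + mu)) * math.exp(-(lam + mu) * t)

def steady_state_availability(mtbf, mttr):
    """Eq. (31.16): A = MTBF / (MTBF + MTTR)."""
    return mtbf / (mtbf + mttr)

lam, mu = 1 / 500.0, 1 / 8.0   # hypothetical: MTBF = 500 h, MTTR = 8 h
for t in (0, 10, 50, 1000):
    print(t, round(instantaneous_availability(t, lam, mu), 4))
print("steady state:", round(steady_state_availability(500, 8), 4))
```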
31.3 Maintainability
Maintainability is applicable for repairable systems. It is defined as the probability of performing a successful repair action within a given time under normal operating condition of the unit under repair. In other words, maintainability is a measure of the ease with which a device or system can be repaired or modified to correct and prevent faults,
anticipate degradation, improve performance, or adapt to a changed environment. Beyond simple physical accessibility, it is the ability to reach a component to perform the required maintenance task: maintainability should be described [6] as the characteristic of material design and installation that determines the requirements for maintenance expenditures, including time, manpower, personnel skill, test equipment, technical data, and facilities, to accomplish operational objectives in the user’s operational environment. Mathematically, we express maintainability, M(t), as the probability of repairing the failed unit before an elapsed repair time, t, assuming a repair rate, μ(t), and is expressed as
M(t) = 1 − e^{−∫₀ᵗ μ(τ) dτ}    (31.20)

When μ(t) is constant with rate μ, then

M(t) = 1 − e^{−∫₀ᵗ μ dτ} = 1 − e^{−μt}    (31.21)
It is important to note that there are different inspection and maintenance approaches that are utilized to improve the overall system’s availability. They include corrective maintenance (CM) or failure replacement where maintenance and repair are performed when failures occur. This is common for components that exhibit constant failure rates and do not have degradation indicators to monitor their performance such as the controllers’ circuit boards and display monitors for examples. This type of maintenance might cause significant down time of the manufacturing equipment and interruptions of the production output. In this situation, analysis of the historical failure data of such units becomes necessary for estimating the number of spares for the components, inspection strategies, and redundancies of critical components. The second maintenance approach is preventive maintenance (PM), which is applicable when the system exhibits increasing failure rate (Region 3 in Fig. 31.1). Thus, in an automation environment, PM is performed at prescheduled times that minimize the downtime of the equipment and machines or minimizes its cost. Parts and components that may utilize PM include rotating and sliding components and cutting tools. On the other hand, PM does not improve the availability of units that exhibit constant failure rate (Region 2) due to the memoryless property of the exponential distribution. In this region, we perform failure replacement (replace or maintain the unit only when it fails). The optimal time interval for performing PM is obtained by minimizing system downtime or cost. The advances in sensors technology, image analysis, data acquisition, and data analysis gave rise to the development of dynamic maintenance and self-maintenance systems. Lee et al. [7] refer to the former as condition-based maintenance (CBM) or predictive maintenance and the latter as prescriptive maintenance. CBM is only suitable when there is a degradation indicator for the units or components being
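The optimal PM interval mentioned above is commonly found by minimizing the long-run cost rate of an age-replacement policy. The sketch below is a standard textbook formulation rather than the chapter's own model; the Weibull parameters and the preventive/corrective cost figures are hypothetical.

```python
import math

def weibull_R(t, theta, gamma):
    return math.exp(-((t / theta) ** gamma))

def cost_rate(T, theta, gamma, c_pm, c_cm, steps=2000):
    """Long-run cost per unit time of an age-replacement policy with interval T."""
    R_T = weibull_R(T, theta, gamma)
    expected_cost = c_pm * R_T + c_cm * (1.0 - R_T)
    # Expected cycle length = integral of R(t) from 0 to T (trapezoidal rule).
    dt = T / steps
    cycle = sum(0.5 * (weibull_R(i * dt, theta, gamma) + weibull_R((i + 1) * dt, theta, gamma)) * dt
                for i in range(steps))
    return expected_cost / cycle

theta, gamma, c_pm, c_cm = 5000.0, 3.2, 1.0, 10.0   # hypothetical values (wear-out region)
best_T = min(range(500, 10001, 100), key=lambda T: cost_rate(T, theta, gamma, c_pm, c_cm))
print("approximate optimal PM interval:", best_T, "hours")
```

As the chapter notes, this optimization is only meaningful in the increasing-failure-rate region; for a constant failure rate, PM does not improve availability and failure replacement is used instead.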
monitored. The degradation measurements are continuously observed, and the degradation path (relationship between the “amount” of degradation and time) is updated when new observations are obtained and the reliability is estimated accordingly, as described later in this chapter. Thus, maintenance and/or replacement decisions are made based on a predefined threshold of the degradation level. The threshold level may be obtained based on physics-of-failure (PoF) approach or statistics approach. Accurate determination of the threshold level is important since a low level of the threshold results in more frequent maintenance (higher cost and less system availability), whereas a high threshold level results in potential failures of the system before it reaches the threshold causing system’s unavailability. On the other hand prescriptive maintenance extends CBM beyond failure prediction and maintenance decisions, it allows the units (machines) to carry out their own decisions based on machine learning, for example, and on-line analysis of historical observations. It integrates inputs from sensors, historical data, and current data to carry out its own maintenance actions when possible. In general, in automation environment, sensors can be widely implemented for units that exhibit degradation and can be continuously monitored to obtain accurate predictions of the time when the degradation reaches a specified threshold level for maintenance and repair. Integrating these maintenance approaches leads to e-Maintenance which is an organizational point of view of maintenance. Morel et al. [6] state that “The concept of e-Maintenance comes from remote maintenance capabilities coupled with information and communication capabilities. Remote maintenance was first a concept of remote data acquisition or consultation. Data are accessible during a limited time. In order to realize e-Maintenance objectives data storage must be organized to allow flexible access to historical data.” In order to improve remote maintenance, a new concept of e-Maintenance emerged at the end of the 1990s. “The e-Maintenance concept integrates cooperation, collaboration, and knowledge-sharing capabilities in order to evolve the existing maintenance processes and to try to tend towards new enterprise concepts: extended enterprise, supply-chain management, lean maintenance, distributed support and expert centers, etc.” (Morel et al. [6]). Recent advances in computation, data storage capacity, ease of access to historical data, and data exploration methods such as data mining and machine learning have accelerated the e-maintenance implementations. Morel et al. [6] also state that “e-Maintenance is not based on software functions but on maintenance services that are well-defined, self- contained, and do not depend on the context or state of other services. So, with the advent of service-oriented architectures (SOA) and enterprise service-
bus technologies [8], e-Maintenance platforms are easy to evolve and can provide interoperability, flexibility, adaptability, and agility. e-Maintenance platforms are a kind of hub for maintenance services based on existing, new, and future applications.”
31.4 Safety
In general, systems, equipment, and machines experience failures, which might cause minor or major interruptions of their functions and/or catastrophic failures that result in injuries or deaths. Therefore, understanding the physics of failures, the potential failure modes, and the effect of the failures on the operator, machines, and products provides the designers of manufacturing and automation systems with important information for designing fail-safe systems. Moreover, identifying potential failure modes and their causes results in determining the critical components and subsystems and in implementing approaches that minimize such failures (or potential accidents). As presented in the maintainability section, the availability of a wide range of sensors and onboard control systems makes it possible to design effective and safe systems. There are many approaches that can be used to determine the criticality of a system's components. Among them are the well-known importance measures commonly used to quantify the importance of components in terms of their effect on the system's reliability [9, 10]. These measures do not necessarily address the system's safety due to failures of its components. However, there are two commonly used approaches to design safety into systems: Fault Tree Analysis (FTA) [11] and Failure Mode and Effects Analysis (FMEA). FTA is widely used and is a required safety analysis for critical systems such as nuclear power generation, medical equipment such as computerized tomography (CAT) scanners, aerospace systems, and chemical plants. The main objective of FTA is to identify the critical components of the system whose failure may result in an "unsafe" outcome. Therefore, FTA is conducted using a graphical presentation of the events and of the components or causes that result in the occurrence of the events. It starts by defining the top event of the tree (an undesired event, such as a radiation leak in a nuclear energy plant), and all potential causes that may lead to the occurrence of this event are identified. The next level of the tree uses the events or causes of the previous level and identifies the events, component failures, and conditions that cause them. The process continues until all potential events and their causes at all levels are identified. The probabilities of the occurrence of events are estimated upward (from the lowest level of the fault tree) until the probability of the top event is estimated. Actions are then taken to reduce this probability, as described below.
FTA considers the probability of event occurrence but does not consider the severity of the event in terms of losses and associated risk. Therefore, Failure Mode and Effects Analysis (FMEA) is conducted in parallel to assign “weights” to the events (in addition to their probabilities). FMEA is a method for evaluating a process, a system, or a product to estimate the severity of the failures and their associated risk, that is, it considers both the frequency of failures (obtained from reliability modeling and failure time distribution) and the severity of the failure in terms of safety and other losses. As the acronym implies, it also deals with identifying the failure modes and their effect as discussed next.
31.4.1 Fault-Tree Analysis
Construction of the fault tree is a logical process. It is usually carried out by a team of experts and professionals in the field of the subject being analyzed. As stated earlier, the top event is clearly identified, and logical gates and events are used to construct the tree. The most common gates are the AND and OR gates [12]; some gates can be conditional or dependent on others. In an automation and manufacturing system, a top event might be "the system fails to produce items as specified" or "a manufacturing cell does not start." This event could be attributed to either hardware or software failures. Each type of failure is then explored down to the lowest level of the tree, inserting the proper logical gate for every event. For complex systems, it is suggested that they be decomposed into subsystems, each with its own FTA; the probability of the system top event is then obtained using the probabilities of the top events of the subsystems and the associated logical gates. For example, a system consisting of three components connected in series fails (an undesired event occurs) due to the failure of the first OR the second OR the third component. In this case, the logic gate is an OR gate and is represented by the symbol shown in Fig. 31.4a; the events are the failures of components 1, 2, and 3. Likewise, in a system composed of three components in parallel, the system fails (undesired event) when the first AND the second AND the third component fail. In this case, the logic gate is an AND gate and is represented by the symbol shown in Fig. 31.4b. Fault-tree diagrams of three-component series and parallel systems are shown in Fig. 31.4c. More logical gates and their functions are provided in [13]. The probability of the occurrence of an event in the fault-tree diagram is then calculated using probability laws, as exemplified for Fig. 31.5. We utilize Eqs. (31.22) and (31.23) to obtain the probability of events occurring as a result of AND or OR gates.
Fig. 31.4 OR and AND gates: (a) OR gate symbol, (b) AND gate symbol, (c) fault-tree diagrams of three-component series and parallel systems
The probability of either event A or event B occurring (assuming independence) is expressed as

P(A ∨ B) = P(A) + P(B) − P(AB)   (31.22)

The probability of event A and event B occurring (assuming independence) is

P(A ∧ B) = P(A)P(B)   (31.23)

where ∧ and ∨ are the AND and OR Boolean operators, respectively. Therefore, for the fault tree in Fig. 31.5,

P(E1) = P(A1 ∧ A2) = P(A1)P(A2)
P(E2) = P(B1 ∨ B2) = P(B1) + P(B2) − P(B1B2)

Thus P(Top) = P(E1 ∨ E2) = P(E1) + P(E2) − P(E1E2). Other gates may have conditional probabilities, and Bayes' theorem and the rule of total probability may be applied. When the top event represents different undesired safety events, an individual fault-tree diagram for each event is constructed to ensure that all of the potential safety hazards are investigated and addressed during the design of the system. It is important to conduct FTA on a regular basis, since the reliability and degradation of the components may introduce new undesired safety concerns. Moreover, updates of software, hardware, design changes, and the addition of new equipment may cause other top events to occur.

Fig. 31.5 Fault-tree diagram: the top event Top is fed by an OR gate combining intermediate events E1 (an AND gate over basic events A1 and A2) and E2 (an OR gate over basic events B1 and B2)
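To make the gate algebra concrete, the short sketch below (not part of the chapter; written in Python with invented component probabilities) evaluates the fault tree of Fig. 31.5 using Eqs. (31.22) and (31.23) under the independence assumption.

```python
# Hypothetical illustration of Eqs. (31.22) and (31.23): AND/OR gate
# probabilities for independent basic events (the values are made up).

def p_and(*probs):
    """P(E1 and E2 and ...) for independent events: product of probabilities."""
    result = 1.0
    for p in probs:
        result *= p
    return result

def p_or(p_a, p_b):
    """P(A or B) for independent events: P(A) + P(B) - P(A)P(B)."""
    return p_a + p_b - p_a * p_b

# Basic-event probabilities (illustrative only)
p_a1, p_a2 = 0.02, 0.05   # inputs to the AND gate feeding E1
p_b1, p_b2 = 0.01, 0.03   # inputs to the OR gate feeding E2

p_e1 = p_and(p_a1, p_a2)   # Eq. (31.23)
p_e2 = p_or(p_b1, p_b2)    # Eq. (31.22)
p_top = p_or(p_e1, p_e2)   # top event fed by an OR gate

print(f"P(E1) = {p_e1:.4f}, P(E2) = {p_e2:.4f}, P(Top) = {p_top:.4f}")
```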
31.4.2 Failure Modes and Effects Analysis
FMEA is a method for evaluating a process, a system, or a product to identify potential failures and assess their impacts on the overall performance and safety of the system in order to identify the critical components. FMEA methods have three general steps [1].
Step 1: FMEA begins by identifying all potential failures and failure modes of the process, product, or system under consideration, as discussed in the FTA process. These failures are usually based on historical data such as repair and warranty data and/or on the input of experts and professionals in the system being studied. This step should be conducted regularly, since new failure modes may occur.
Step 2: This is a quantification step of the failures (frequency and severity). Weights are assigned to the probability of the failure occurrence (based on the failure time distribution or the degradation path of the unit, as discussed later in this chapter). Weights are also assigned to the severity of the failure and its impact on the process (it may cause interruptions of the process for a long time or may cause interruptions of short duration). Finally, probability estimates for the detection of the failure are obtained. This quantification step requires extensive discussions with Subject Matter Experts (SMEs) and several iterations before reaching these weights.
Step 3: Assign a score value on a 1–10 scale for the three attributes in Step 2 and multiply the three scores to obtain a Risk Priority Number (RPN). This RPN is used to assign priorities for component improvements, monitoring, and repair [1].
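As a minimal illustration of Step 3 (the failure modes and scores below are invented, not taken from the chapter), the following sketch multiplies occurrence, severity, and detection scores on a 1–10 scale and ranks failure modes by RPN.

```python
# Hypothetical FMEA worksheet: each failure mode gets occurrence (O),
# severity (S), and detection (D) scores on a 1-10 scale; RPN = O * S * D.

failure_modes = [
    # (description,               O, S, D)  -- illustrative scores only
    ("Spindle bearing seizure",    3, 9, 4),
    ("Coolant pump leak",          6, 4, 2),
    ("PLC communication dropout",  2, 7, 8),
]

ranked = sorted(
    ((desc, o * s * d) for desc, o, s, d in failure_modes),
    key=lambda item: item[1],
    reverse=True,
)

for desc, rpn in ranked:
    print(f"RPN = {rpn:4d}  {desc}")
# Higher RPN -> higher priority for improvement, monitoring, and repair.
```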
31.5 Resilience

Using the methods presented in the previous sections and reliability metrics such as reliability, mean time to failure (MTTF), and failure rate, system designers gain insights into the inherent failures of the systems and make appropriate decisions that result in improvements of the system's reliability, safety, and performance. Apart from the inherent failures, unexpected natural and man-made hazards (e.g., earthquakes, hurricanes, floods, physical attacks, and cyberattacks) account for a large proportion of the interruptions of systems' expected performance. Indeed, severe hazards may render repairable systems non-repairable, as in the case of the triple meltdown at the Fukushima Daiichi nuclear plant in Japan in 2011 and the Chernobyl nuclear power plant in Ukraine in 1986. Therefore, the reliability metrics need to be extended to include resilience (a measure of a system's ability to absorb the effects of hazards and resume operation at a desired level in a short time).

31.5.1 Definition
Resilience is defined as the system's ability to minimize the negative impact of hazards (e.g., system-performance loss), which is generally understood as system robustness. The system's ability to adapt to the degraded environment and still maintain (at least partially) its functionality is interpreted as the system's adaptive ability. As stated earlier, systems are classified as non-repairable and repairable. The resilience of a non-repairable system is the ability of the system to absorb the impact of the hazard while its performance does not fall below the non-functional level of the system. The resilience of a repairable system, in addition to its ability to degrade slowly to the unacceptable performance level, is its ability to recover to its normal performance level in a short time. These are explained further in Fig. 31.6. Like reliability, which assumes values between 0 and 1 inclusive, resilience needs to be quantified in order to assess the proper countermeasures against the hazards during the design and operation phases of the system.

Fig. 31.6 Schematic diagram of the system performance behavior for non-repairable systems (a) and repairable systems (b): the performance measure P(t) is plotted versus time, with the hazard occurring at th, the degraded level P(td) reached at td, and (for repairable systems) recovery to P(tr) at time tr
31.5.2 Resilience Quantification
Figure 31.6 illustrates the system's resilience behavior. We denote P(t) as the system performance at time t. A system operates with a steady-state performance (under inherent failures) P(t0) from time t0 until the occurrence of the hazard at time th. For a non-repairable system, the system performance deteriorates to level P(td) at time td. The level of deterioration of the system performance and the time elapsed to reach this level are indicative of the system's resilience: the higher the level of P(td) and the longer the time td, the more resilient the system. For a repairable system, the maintenance and recovery actions start the restoration until the system reaches a desired performance level P(tr) at time tr, where we assume without loss of generality that P(tr) ≤ P(t0). Higher levels of P(tr) reached in a shorter time tr are indicative of the resilience of the repairable system. As shown in Fig. 31.6b, the resilience curve generally includes two segments: Segment 1 starts from th, when the hazard occurs, and extends to the unacceptable-performance inflection time td; this segment represents the period when no recovery action is taken and the system sustains the impact of the hazard. Segment 2 extends from time td to the current recovery time tr. Considering that under most circumstances system performance data are available at discrete time instants, we denote ti as the ith time unit (say, day, hour, etc.) after the hazard's occurrence. The explicit form of each segment of the resilience curve is developed as follows [14]:
• For period (th, td), i.e., Segment 1:

Ψ1(tn) = 1 + (1/n) Σ_{i=1}^{n} [P(ti) − P(ti−1)] / (ti − ti−1),   th ≤ t < td   (31.24)

where Ψ1(tn) is the resilience of the system at time tn. Note that in Eq. (31.24) the times ti need to be normalized w.r.t. the period (th, td) such that th = 0 and td = 1 after the normalization. In essence, the resilience considers the weighted performance decrement per unit time during this time period. Clearly, this rate is not necessarily constant, and the average of the performance decrement rate until time t is adopted, as presented in Eq. (31.24). Since P(ti) ≤ P(ti−1) is always a valid assumption during this period, Ψ1(t) ranges between (0, 1). Note that Eq. (31.24) is the resilience of non-repairable systems at an arbitrary time instant t.

• For period (td, tr), that is, Segment 2:

Ψ2(tn) = Ψ1(td) + (1/n) Σ_{i=1}^{n} [P(ti) − P(ti−1)] / (ti − ti−1),   td < t ≤ tr   (31.25)

where Ψ2(tn) is the resilience of the system at time tn. Note that we normalize the time period (td, tr) such that td = 0 and tr = 1. Resilience during this period reflects the average rate of the system performance improvement with time. Combining Eqs. (31.24) and (31.25), the resilience of repairable systems at time instant t can then be written as shown in Eq. (31.26):

Ψ(tn) = 1 + (1/n) Σ_{i=1}^{n} [P(ti) − P(ti−1)] / (ti − ti−1),            th ≤ t < td
Ψ(tn) = Ψ1(td) + (1/n) Σ_{i=1}^{n} [P(ti) − P(ti−1)] / (ti − ti−1),       td < t ≤ tr   (31.26)

A highly resilient system should exhibit a low performance degradation rate, that is, it is expected to take a long time to reach the unacceptable performance level P(td) in Segment 1 and a short time period to reach an acceptable performance level during its recovery in Segment 2. We further derive the overall system resilience Ψs(tn) as the integral of the two segments over their corresponding time periods, Eq. (31.27). Considering the computational efficiency and the difficulty in obtaining an explicit form of the resilience curve, the integral operation in Eq. (31.27) can be replaced by a summation over the discrete times of the integration range.

Ψs(tn) = ∫_{th}^{tn} [1 + (1/n) Σ_{i=1}^{n} (P(ti) − P(ti−1))/(ti − ti−1)] dti,                                    th < tn < td
Ψs(tn) = ∫_{th}^{td} Ψ1(ti) dti + ∫_{td}^{tn} [(1/n) Σ_{i=1}^{n} (P(ti) − P(ti−1))/(ti − ti−1)] dti,               td < tn ≤ tr   (31.27)

Analogous to the average failure rate (AFR) in reliability engineering, the average resilience can be used as an effective measure for comparing the resilience of two systems over a period T, as shown in Eq. (31.28):

Ψ̄s(tn) = Ψs(tn) / T   (31.28)

Note that since P(th), P(td), P(tr), th, td, and tr have different units, they need to be normalized. Specifically, the resilience is built from a low level and increases with time; therefore, the normalization needs to be done within each segment. This ensures that in Segment 1 the system is losing performance (and resilience), and in Segment 2 the system is building recovery (resilience) from td = 0 onward.
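The discrete forms of Eqs. (31.24)–(31.26) lend themselves directly to computation. The sketch below is an illustration only: the performance trace is synthetic, and the times within each segment are assumed to be pre-normalized to [0, 1], as required by the text.

```python
# Resilience of a repairable system from discrete performance samples,
# following the averaged-rate form of Eqs. (31.24)-(31.26).
# Assumption: times within each segment are already normalized so that the
# segment starts at 0 and ends at 1.

def segment_rate(times, perf):
    """Average of [P(t_i) - P(t_{i-1})] / (t_i - t_{i-1}) over a segment."""
    n = len(times) - 1
    rates = [(perf[i] - perf[i - 1]) / (times[i] - times[i - 1])
             for i in range(1, n + 1)]
    return sum(rates) / n

# Segment 1 (t_h to t_d): performance degrades after the hazard.
t1 = [0.0, 0.25, 0.5, 0.75, 1.0]          # normalized times
p1 = [0.998, 0.95, 0.90, 0.87, 0.86]      # synthetic performance values

# Segment 2 (t_d to t_r): performance recovers.
t2 = [0.0, 0.25, 0.5, 0.75, 1.0]
p2 = [0.86, 0.90, 0.94, 0.97, 0.995]

res_1 = 1.0 + segment_rate(t1, p1)        # Eq. (31.24), lies in (0, 1)
res_2 = res_1 + segment_rate(t2, p2)      # Eq. (31.25)

print(f"Resilience at t_d (end of degradation): {res_1:.3f}")
print(f"Resilience at t_r (end of recovery):    {res_2:.3f}")
```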
31.6 Reliability, Maintainability, Safety, and Resilience (RMSR) Engineering
RMSR methods for estimating a system's reliability, maintainability, safety, and resilience are presented in Sects. 31.2, 31.3, 31.4, and 31.5. These methods are interrelated in a typical automated manufacturing system, and together they constitute system "dependability," as shown in Fig. 31.7. Manufacturing systems encompass several interdependent entities, starting with the input supply chain of material, parts, equipment, and other resources for product manufacturing from different vendors (V1, V2, ...) and the supply chain of the finished and semi-finished products for distribution to different users, warehouses, customers, and vendors (D1, D2, ...). Both supply chains are subject to interruptions due to natural hazards such as floods, hurricanes, and earthquakes, as well as man-made hazards such as the introduction of counterfeit components, cyber-attacks, blockage of ports and shipping lanes, and labor strikes. In other words, the supply chains are subject to interruptions in terms of
severity and duration. Therefore, resilience quantification of these supply chains is essential to minimize the impact of interruptions on the manufacturing systems' operations, as shown in Fig. 31.8. Manufacturing systems also include manufacturing equipment and processes for product manufacturing, and the reliability and availability of such equipment are important to ensure minimum interruptions of the production output. Therefore, reliability modeling is a necessary task that assesses the ability of the system to operate without interruptions. Moreover, the ability of the system to recover (resume production) after failures or events that cause its interruption needs to be assessed (maintainability and resilience). Likewise, at the process level, the optimal process parameters need to be maintained by continuous monitoring and control. Continuous degradation of the tools and other process parameters may be modeled to determine the degradation threshold levels that result in unacceptable products. The degradation and wear-out of the manufacturing equipment may result in the production of products with unacceptable quality metrics. Continuous monitoring of the critical components of the equipment results in condition-based maintenance strategies that minimize the equipment downtime and the replacement and repair cost. Therefore, the implementation of condition-based maintenance is essential in automated manufacturing systems. Finally, the product reliability and safety need to be continuously monitored and assessed. Figure 31.8 shows a schematic diagram of the typical components of automated manufacturing systems, and the RMSR use in each is demonstrated in the following sections.

Fig. 31.7 System dependability: reliability, maintainability, safety, and resilience

Fig. 31.8 Schematic diagram of a typical manufacturing system: an input supply chain from vendors (V1, V2, V3, ...) and an output supply chain to distribution points (D1, D2, D3, ...), both assessed through resilience modeling; manufacturing equipment and processes assessed through reliability, maintainability, safety, and resilience; and products assessed through reliability and safety
31.6.1 Resilience Quantification of Supply Chain Networks
In this section, we demonstrate the resilience calculations for a supply chain network. The calculations are applicable to the components of the manufacturing system. Consider Fig. 31.9: the supply chain network has a steady-state performance indicator of 0.998 under normal operations. When a threat (hazard) occurs at time th, the performance degrades to an unacceptable level at time td (Day 30), when the recovery begins, until the performance reaches the steady-state level at time tr (Day 116). Clearly, the rate of performance degradation is a function of the supply chain design (redundancy, for example), and the recovery rate is a function of the available resources. The loss of performance until time td (Day 30) and the gain of performance until time tr (Day 116) are included in the resilience calculation as given by Eq. (31.26) and are shown in Table 31.1. Since the supply chain network is an integral component of the total manufacturing system, it is incumbent on the designer of such systems to quantify the resilience of its supply chain networks and ensure their ability to absorb the effect of the hazards (threats) and quickly recover to the steady-state performance. Resilience quantification can also be considered within the manufacturing system, as in the case of configuring the processing and manufacturing equipment in such a way that the impact of failures of one or more of the processes does not significantly affect the process output and the quality of its products.
Table 31.1 Performance degradation due to supply chain interruptions (partial listing of the daily performance indicator and the corresponding resilience values from Day 1 through Day 34)

Fig. 31.9 Performance degradation and recovery of the supply chain network: the performance level degrades from its steady-state value after the hazard at th, reaches its lowest level at td, and recovers to the steady state at tr

31.6.2 Degradation Modeling in Manufacturing Systems
As shown in Fig. 31.8, the manufacturing processes, equipment, and products constitute the main components within the manufacturing system. Their reliability, maintainability, and safety are essential to ensure proper function with minimum or no interruptions. These components may exhibit failures and degradation. We described in Sect. 31.2.1 how reliability is estimated and predicted based on failure data. In this section, we address one of the critical indicators of a component's failure, namely the degradation indicator. Degradation is observed in tool wear, machine and equipment wear-out, and oil and fluid degradation (presence of particles, loss of viscosity). In such cases, the degradation indicator can be continuously monitored, and "degraded" units may be maintained or replaced before failures occur. Modeling degradation in order to estimate the component's reliability (time to failure, for example) may be achieved through physics-of-failure models [15], physics-statistics-based models, or statistics-based (data-driven) models. The last approach is the most prevalent due to the advances in sensor technologies, data collection, and analysis. Moreover, the approach can be effectively used in monitoring machines, process parameters, tool wear, and others, as described next.

Degradation Modeling and Reliability Prediction
There are several stochastic models for characterizing the degradation path (cumulative degradation with time). They include the gamma process, the inverse Gaussian process, the modified inverse Gaussian process, and the Brownian motion process [16]. We describe the Brownian motion process due to its wide use in many applications.
Brownian Motion Degradation Model
A Brownian motion with drift degradation process is the solution of the following stochastic differential equation with constant drift coefficient μ and diffusion coefficient σ, where X(t) is the degradation level at time t, as given by Eq. (31.29):

dX(t) = μ dt + σ dW(t)   (31.29)

W(t) is the standard Brownian motion at time t, and it is normally distributed with mean zero and variance t. Equation (31.29) assumes the initial condition X(0) = x0. By direct integration, the degradation X(t) is given by Eq. (31.30):

X(t) = x0 + μ t + σ W(t)   (31.30)

Since the degradation X(t) is a linear function of a normally distributed random variable, the degradation also follows a normal distribution with mean μt and variance σ²t. Similarly, the degradation over any increment of time Δt follows a normal distribution with mean μΔt and variance σ²Δt. This is due to the independent-increment property of the standard Brownian motion. The density function of X(t) is given by Eq. (31.31):

f_X(t)(x; t) = [1/(σ √(2πt))] exp[−(x − x0 − μt)² / (2σ²t)]   (31.31)

The drawback of the standard Brownian motion assumption is that the degradation can assume decreasing values; therefore, there is a challenge in the physical interpretation of this model. The geometric Brownian motion model is a suitable alternative, since it is a strictly positive process, as is the gamma process model, where the degradation is monotonically increasing with time [17, 18]. Typical Brownian motion degradation paths are shown in Fig. 31.10 for different values of μ and σ. The parameters μ and σ of a degradation path are estimated from the degradation data as shown next.

Fig. 31.10 Brownian motion degradation paths for (μ = 0.081, σ = 0.00181), (μ = 0.061, σ = 0.00151), and (μ = 0.001, σ = 0.00151)

Maximum Likelihood Estimation of the Brownian Motion Parameters
The property in Eq. (31.32) is applied in order to estimate the parameters of the Brownian motion model:

Y = X(t + Δt) − X(t) = μi Δt + σ [W(t + Δt) − W(t)] ∼ N(μi Δt, σ² Δt)   (31.32)

The variable Y is normally distributed. Therefore, the likelihood function L is described by Eq. (31.33). Let v = σ² Δt and mi = μi Δt; then

L(μi, σ | y1, y2, ..., yn) = Π_{i=1}^{n} f(yi | μi, σ)
L(mi, v | y1, y2, ..., yn) = Π_{i=1}^{n} f(yi | mi, v)   (31.33)

The log-likelihood is given by Eq. (31.34):

log L(mi, v | y1, y2, ..., yn) = Σ_{i=1}^{n} log f(yi | mi, v) = Σ_{i=1}^{n} [log(1/√(2πv)) − (yi − mi)²/(2v)]
= −(n/2) log(2π) − (n/2) log(v) − (1/(2v)) Σ_{i=1}^{n} (yi − mi)²   (31.34)

The maximum likelihood estimates of the parameters are obtained by equating the partial derivatives of the log-likelihood with respect to the model parameters to zero. The closed-form expressions for the parameters μ and σ are obtained using Eq. (31.35):

m̂i = (1/n) Σ_{i=1}^{n} yi,   m̂i = μ̂i Δt  ⇒  μ̂i = (1/Δt) (1/n) Σ_{i=1}^{n} yi
v̂ = (1/n) Σ_{i=1}^{n} (yi − m̂i)²,   v̂ = σ̂² Δt  ⇒  σ̂² = (1/Δt) (1/n) Σ_{i=1}^{n} (yi − m̂i)²   (31.35)
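To connect Eqs. (31.29)–(31.32) and (31.35), the sketch below (illustrative only; the drift, diffusion, and sampling interval are arbitrary choices, with μ and σ loosely matching one of the paths in Fig. 31.10) simulates a Brownian degradation path on an equally spaced grid and recovers μ and σ from the increments using the closed-form estimates of Eq. (31.35).

```python
# Simulate X(t) = x0 + mu*t + sigma*W(t) on an equally spaced grid and
# estimate (mu, sigma) from the increments y_i = X(t_i) - X(t_{i-1}),
# which are i.i.d. Normal(mu*dt, sigma^2*dt) per Eq. (31.32).
import math
import random

random.seed(1)

mu_true, sigma_true = 0.061, 0.00151   # illustrative values (cf. Fig. 31.10)
x0, dt, n_steps = 0.0, 0.001, 1000     # arbitrary sampling setup

# --- simulation (Eq. (31.30) via independent Gaussian increments) ---
x = [x0]
for _ in range(n_steps):
    increment = mu_true * dt + sigma_true * math.sqrt(dt) * random.gauss(0, 1)
    x.append(x[-1] + increment)

# --- maximum likelihood estimation (Eq. (31.35)) ---
y = [x[i] - x[i - 1] for i in range(1, len(x))]
n = len(y)
m_hat = sum(y) / n                               # mean increment
v_hat = sum((yi - m_hat) ** 2 for yi in y) / n   # increment variance
mu_hat = m_hat / dt
sigma_hat = math.sqrt(v_hat / dt)

print(f"mu:    true {mu_true:.5f}  estimated {mu_hat:.5f}")
print(f"sigma: true {sigma_true:.5f}  estimated {sigma_hat:.5f}")
```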
Reliability Prediction
The reliability function for a degradation model is defined as the probability of not failing over time for a given degradation threshold level. In the degradation context, failure implies that the degradation level has crossed the threshold level Df. In another context of degradation analysis, a failure could imply that the unit suddenly ceases functioning altogether, as presented in the reliability section of this chapter; here the unit is considered failed when the degradation level reaches the threshold, as given in Eq. (31.36):

F(t) = P(T ≤ t) = Pr[X(t; μ, σ²) ≥ Df]   (31.36)

Since X(t) ∼ Normal(μt, σ²t), the probability of failure is given by Eq. (31.37):

F(t) = P(T ≤ t) = 1 − Φ[(Df − μt) / (σ√t)]   (31.37)

It is well known that, when the degradation path is linear, the first passage time (failure time) T of the process to a failure threshold level Df follows the Inverse Gaussian (IG) distribution [19], IG(t; μ, σX²), with mean time to failure (MTTF) (Df − x(0))/μ and probability density function (p.d.f.) as given in Eq. (31.38):

IG(t; μ, σX², Df) = [(Df − x(0)) / (σX √t³)] φ[(Df − x(0) − μt) / (σX √t)],   μ > 0, Df > x(0)   (31.38)

where φ(·) is the p.d.f. of the standard Normal distribution. The reliability function is given by Eq. (31.39):

R(t) = Φ[(Df − x(0) − μt) / (σX √t)] − exp[2μ(Df − x(0)) / σX²] Φ[−(Df − x(0) + μt) / (σX √t)]   (31.39)

where Φ(·) is the cumulative distribution function (CDF) of the standard Normal distribution and x(0) is the initial degradation. Figure 31.11 shows the reliability functions of the degradation paths shown in Fig. 31.10 for a degradation threshold of 0.10. These functions are obtained using Eq. (31.39).

Fig. 31.11 Reliability functions of three degradation paths
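Equation (31.39) involves only the standard Normal CDF, so reliability curves such as those in Fig. 31.11 can be reproduced in a few lines. The sketch below is illustrative; it assumes the parameters of one degradation path from Fig. 31.10 and the threshold Df = 0.10, and it evaluates the exponential correction term in log space because exp(2μDf/σ²) can overflow for these parameter values.

```python
# Reliability of a linear Brownian degradation path, Eq. (31.39): survival of
# the first-passage (Inverse Gaussian) time to a threshold Df.
import math

SQRT2 = math.sqrt(2.0)
SQRT2PI = math.sqrt(2.0 * math.pi)

def log_phi_neg(z):
    """log Phi(-z) for z >= 0: erfc for moderate z, Mills-ratio asymptote beyond."""
    if z < 30.0:
        return math.log(0.5 * math.erfc(z / SQRT2))
    return -0.5 * z * z - math.log(z * SQRT2PI)

def std_normal_cdf(z):
    return 0.5 * math.erfc(-z / SQRT2)

def reliability(t, mu, sigma, d_f, x0=0.0):
    """R(t) per Eq. (31.39) with drift mu, diffusion sigma, threshold d_f."""
    a = d_f - x0
    s = sigma * math.sqrt(t)
    term1 = std_normal_cdf((a - mu * t) / s)
    log_term2 = 2.0 * mu * a / sigma**2 + log_phi_neg((a + mu * t) / s)
    term2 = math.exp(log_term2) if log_term2 > -700.0 else 0.0
    return max(term1 - term2, 0.0)   # clamp tiny negative round-off

# Illustrative parameters: one of the paths in Fig. 31.10, threshold Df = 0.10.
mu, sigma, d_f = 0.061, 0.00151, 0.10
for t in (1.0, 1.5, 1.6, 1.64, 1.7, 2.0):
    print(f"t = {t:5.2f}   R(t) = {reliability(t, mu, sigma, d_f):.4f}")
```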
31.6.3 Product Reliability
One of the major activities in manufacturing systems is ensuring that produced units meet the reliability requirements. This entails, in addition to the quality engineering activities during manufacturing, the verification of the reliability requirements before release. This is accomplished by performing "burn-in" testing, acceptance testing, and accelerated life testing, as described next.
Burn-in Testing
As shown in Fig. 31.1, the early-life region (Region 1) exhibits a high failure rate at the release time of the product. This is attributed to manufacturing defects, poor assembly, material defects, and others. Burn-in testing is conducted at conditions slightly higher than normal operating conditions to "weed out" unacceptable units from production and identify the causes of the early failures. This in turn increases the mean residual life (MRL) of the released production units and minimizes the dead-on-arrival (DOA) units at the end users. The two parameters of the test are the test conditions (stresses such as temperature, humidity, voltage, ...) and the test duration. Long test times reduce the mean useful life of the units and increase the cost of the test, whereas short test times may be insufficient to remove unacceptable units and increase the cost of product service after release. Therefore, it is important to determine the optimal test duration, as described in the following numerical example. Let Cb be the cost per unit time of burn-in, Cf the cost of a failure in actual use of the units (field use), Cfb the cost of a failure during burn-in (Cfb < Cf), and tb the burn-in test period. Assume a production run of n units to be tested before release, each with reliability function R(t). All units are subjected to burn-in testing; the failed units are analyzed and are not released to users, whereas the surviving units are released. The probability that a unit fails during burn-in is 1 − R(tb), so the total cost of failures during burn-in is nCfb[1 − R(tb)]. The expected number of failed units in the field during the design life t is n[R(tb) − R(tb + t)], with corresponding expected cost nCf[R(tb) − R(tb + t)], and the cost of burn-in of the units is nCb tb. The sum of these expected cost elements gives E[TC] = nCb tb + nCf[R(tb) − R(tb + t)] + nCfb[1 − R(tb)], and the expected cost per unit is
E[C] = Cb tb + Cf [R(tb) − R(tb + t)] + Cfb [1 − R(tb)]   (31.40)

The optimum burn-in period is obtained by minimization of Eq. (31.40). The Weibull model exhibits a decreasing failure rate region when the shape parameter γ < 1. Therefore, Eq. (31.40) for the Weibull model is obtained as shown in Eq. (31.41):

E[C] = Cb tb + Cf [exp(−(tb/θ)^γ) − exp(−((tb + t)/θ)^γ)] + Cfb [1 − exp(−(tb/θ)^γ)]   (31.41)

where t is the design life of the unit. Assume that the manufacturer is interested in conducting a burn-in test for a product with the following parameters: θ = 18,000 h, γ = 0.25, the cost of failure during burn-in testing Cfb = $1000, the cost per unit time of burn-in Cb = $150, and the cost of failure during the operational life (design life) of 100,000 h is Cf = $9000. Substituting in Eq. (31.41) yields

E[C] = 150 tb + 9000 [exp(−(tb/18000)^0.25) − exp(−((tb + 100000)/18000)^0.25)] + 1000 [1 − exp(−(tb/18000)^0.25)]   (31.42)

Equation (31.42) is solved numerically, resulting in tb ≈ 1 h. A partial listing of the expected cost for different burn-in periods is shown in Table 31.2, and its plot versus the burn-in period is shown in Fig. 31.12. A typical burn-in period ranges from 1 to 48 h for electronic products and longer for electromechanical systems.

Table 31.2 Partial listing of the expected cost per unit for different burn-in periods

Burn-in time (h)   Expected cost per unit
0.45               $6,583
0.50               $6,576
0.55               $6,571
0.60               $6,566
0.65               $6,562
0.70               $6,559
0.75               $6,556
0.80               $6,554
0.85               $6,552
0.90               $6,551
0.95               $6,550
1.00               $6,550
1.05               $6,549
1.10               $6,549
1.15               $6,550
1.20               $6,550
1.25               $6,551
1.30               $6,552

Fig. 31.12 Expected cost of burn-in versus the burn-in period
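The numerical example above is easy to reproduce. The sketch below is a minimal illustration of Eq. (31.41) with the example's parameter values; it simply evaluates the expected cost per unit over a grid of candidate burn-in times and reports the approximate minimizer.

```python
# Expected burn-in cost per unit, Eq. (31.41), for a Weibull early-life model,
# evaluated on a grid of candidate burn-in times (parameters from the example).
import math

theta, gamma = 18_000.0, 0.25            # Weibull scale (h) and shape
c_b, c_fb, c_f = 150.0, 1000.0, 9000.0   # burn-in, burn-in failure, field failure costs
design_life = 100_000.0                  # design life t (h)

def weibull_reliability(t):
    return math.exp(-((t / theta) ** gamma))

def expected_cost(t_b):
    """E[C] = Cb*tb + Cf*[R(tb) - R(tb + t)] + Cfb*[1 - R(tb)]."""
    r_b = weibull_reliability(t_b)
    r_bt = weibull_reliability(t_b + design_life)
    return c_b * t_b + c_f * (r_b - r_bt) + c_fb * (1.0 - r_b)

grid = [0.05 * k for k in range(1, 81)]   # 0.05 h ... 4.0 h
best_tb = min(grid, key=expected_cost)
print(f"optimal burn-in ~ {best_tb:.2f} h, "
      f"expected cost ~ ${expected_cost(best_tb):,.0f}")
```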
Accelerated Testing
The significant annual increase in the introduction of new products, coupled with the significant reduction in time from product design to manufacturing, as well as the ever-increasing customer expectations for high reliability and longer warranties, have prompted industry to shorten its product test durations. In many cases, accelerated life testing (ALT) might be the only viable approach to assess whether the product meets the expected long-term reliability requirements. In ALT experiments, a sample of production units is selected and subjected to stresses (one or more) more severe than the normal operating stress. Failure and/or degradation data are then used to extrapolate the reliability at normal conditions. Traditionally, ALT is conducted under constant stresses during the entire test duration. The test results are used to extrapolate the product life at normal conditions. In practice, constant-stress tests are easier to carry out but need more test units and a long time at low stress levels to yield sufficient
degradation or failure data. However, in many cases the available number of test units and the test duration are extremely limited. This has prompted industry to consider step-stress tests, where the test units are first subjected to a lower stress level for some time; if no failures or only a small number of failures occur, the stress is increased to a higher level and held for another amount of time; the steps are repeated until all units fail or a predetermined test time has expired. Usually, step-stress tests yield failures in a much shorter time than constant-stress tests, but the statistical inference from the data is more difficult to make. Moreover, since the test duration is short and a large proportion of failures occur at high stress levels far from the design stress level, much extrapolation has to be made, which may lead to poor estimation accuracy. On the other hand, there could be other choices of stress loading (e.g., cyclic-stress and ramp-stress) in conducting
ALT experiments. In this section, we present a simple extrapolation approach for predicting reliability at normal operating conditions using the results from accelerated life testing. One of the most commonly used extrapolation models assumes that the accelerated stresses are within a range such that they do not induce failure mechanisms different from those at normal operating conditions. It also assumes that the lifetime scale between normal conditions and stress conditions is linear. The subscripts s and o refer to stress conditions and normal operating conditions, respectively. Since the lives of the units at s and o are linearly proportional, we express this relationship by Eq. (31.43):

to = Af ts   (31.43)

where Af is the acceleration factor; it is obtained experimentally by testing samples at three different stress levels and relating their mean lives to the applied stresses. Thus, the cumulative distribution function (CDF) of the failure time at normal operating conditions is related to the CDF at stress testing as Fo(t) = Fs(t/Af) and Ro(t) = Rs(t/Af). The p.d.f. is the derivative of the CDF w.r.t. t; thus

fo(t) = (1/Af) fs(t/Af)   and   ho(t) = (1/Af) hs(t/Af)

The above general relationships are applicable to all failure-time distributions, as shown next for the Weibull model:

Fo(t) = Fs(t/Af) = 1 − exp[−(t/(θs Af))^γs]   (31.44)

It is noted that this underlying accelerated failure-time (AFT) relationship implies that the scale and shape parameters at stress testing and at normal operating conditions are related by θo = Af θs and γo = γs. The p.d.f. at normal operating conditions is

fo(t) = [γ/(θs Af)] [t/(θs Af)]^(γ−1) exp[−(t/(θs Af))^γ],   γ ≥ 0, θs ≥ 0   (31.45)

The MTTF is

MTTF = θo Γ(1 + 1/γ)

Of course, the design of ALT plans and extrapolation models depends on the product, the stress types, and the physics of failure [20]. Similar accelerated testing for units that exhibit degradation is referred to as accelerated degradation testing (ADT); in this case, the number of units required for testing is reduced significantly [21]. Other reliability testing includes acceptance testing and reliability demonstration testing. An overview of a variety of reliability tests is given in [22].
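As a minimal sketch of the extrapolation in Eqs. (31.43)–(31.45) (the acceleration factor and stress-level Weibull parameters below are invented for illustration), the code converts Weibull parameters estimated at stress conditions to normal operating conditions and evaluates the corresponding MTTF.

```python
# Extrapolating Weibull life from accelerated stress to operating conditions:
# theta_o = Af * theta_s, gamma_o = gamma_s, MTTF = theta_o * Gamma(1 + 1/gamma).
import math

a_f = 20.0        # acceleration factor (illustrative; estimated from tests)
theta_s = 400.0   # Weibull scale at stress conditions, hours (illustrative)
gamma_s = 1.8     # Weibull shape (unchanged by the linear time scaling)

theta_o = a_f * theta_s
gamma_o = gamma_s
mttf_o = theta_o * math.gamma(1.0 + 1.0 / gamma_o)

def reliability_operating(t):
    """R_o(t) = R_s(t / Af) = exp(-(t / (Af*theta_s))**gamma_s), per Eq. (31.44)."""
    return math.exp(-((t / theta_o) ** gamma_o))

print(f"theta_o = {theta_o:.0f} h, MTTF at operating conditions = {mttf_o:.0f} h")
print(f"R_o(5000 h) = {reliability_operating(5000.0):.3f}")
```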
31.7 Conclusion and Future Trends
The performance of automated manufacturing systems depends on their reliability, maintainability, safety, and resilience. As demonstrated in this chapter, these must be fully integrated during the design and operational stages of the systems. The system and its products need to function properly during their design lives, must be readily available for use, must be safe to operate and use, and must be resilient to random hazards and threats, whether natural or man-made. Advances in sensors and computational resources, as well as the development of data analytics methods and tools, present new trends in automation. Systems will be continuously monitored, and extensive data will be acquired and analyzed as soon as new observations are collected. The systems will identify abnormal behavior and changes in process parameters, predict potential failures, and self-correct and repair when possible. Failures and downtime of the systems will be minimized, and the quality of the products will be improved. Major challenges will continue to exist due to the wide range of manufacturing processes, from traditional processes to 3D printing and nano-manufacturing. However, the pace of the advances in technological innovations, manufacturing processes, and computational algorithms such as machine learning will continue to accelerate and will result in significant and continuous improvements in automated manufacturing systems.
References
1. Elsayed, E.A.: Reliability Engineering. Wiley, Hoboken (2021)
2. Leemis, L.M.: Reliability: Probabilistic Models and Statistical Methods. Prentice Hall, Upper Saddle River (2009)
3. Birolini, A.: Reliability Engineering: Theory and Practice. Springer, Berlin (2013)
4. Bertsche, B.: Reliability in Automotive and Mechanical Engineering. Springer, Berlin (2008)
5. Koutsellis, T., Mourelatos, Z.P., Hu, Z.: Numerical estimation of expected number of failures for repairable systems using a generalized renewal process model. ASCE-ASME J. Risk Uncertain. Eng. Syst. Part B Mech. Eng. (2019). https://doi.org/10.1115/1.4042848
6. Morel, G., Pétin, J.-F., Johnson, T.L.: Reliability, maintainability, and safety. In: Nof, S. (ed.) Springer Handbook of Automation, pp. 735–747. Springer, Berlin (2009)
7. Lee, J., Ni, J., Singh, J., Jiang, B., Azamfar, M., Feng, J.: Intelligent maintenance systems and predictive manufacturing. J. Manuf. Sci. Eng. 142(11), 110805-1–110805-23 (2020)
8. Vernadat, F.B.: Interoperable enterprise systems: principles, concepts, and methods. Annu. Rev. Control. 31(1), 137–145 (2007)
9. Aven, T., Nøkland, T.: On the use of uncertainty importance measures in reliability and risk analysis. Reliab. Eng. Syst. Saf. 95(2), 127–133 (2010)
10. Natvig, B., Gåsemyr, J.: New results on the Barlow–Proschan and Natvig measures of component importance in nonrepairable and repairable systems. Methodol. Comput. Appl. Probab. 11(4), 603–620 (2009)
11. Jeong, H.S., Elsayed, E.A.: On-line surveillance and monitoring. In: Ben-Daya, M., Duffuaa, S., Raouf, A. (eds.) Maintenance Modeling and Optimization: State of the Art, pp. 309–343. Kluwer, Boston (2000)
12. Clemens, P.: Fault tree analysis primer. Prof. Saf. 59(1), 26 (2014)
13. Blokdyk, G.: Fault-Tree Analysis: A Complete Guide – 2019 Edition. Emereo Pty Limited, Brendale (2019)
14. Elsayed, E.A., Cheng, Y., Chen, X.: Reliability and resilience modeling and quantification. In: Ettouney, M. (ed.) Objective Resilience Manual. ASCE, New York, forthcoming 2021
15. McPherson, J.W.: Reliability Physics and Engineering: Time-to-Failure Modeling. Springer, Berlin (2010)
16. Borodin, A.N., Salminen, P.: Handbook of Brownian Motion – Facts and Formulae. Springer Science & Business Media, Berlin (2015)
17. Lawless, J., Crowder, M.: Covariates and random effects in a gamma process model with application to degradation and failure. Lifetime Data Anal. 10(3), 213–227 (2004)
18. Ye, Z.S., Xie, M.: Stochastic modelling and analysis of degradation for highly reliable products. Appl. Stoch. Model. Bus. Ind. 31(1), 16–32 (2015)
19. Chhikara, R., Folks, J.: The Inverse Gaussian Distribution: Theory, Methodology, and Applications. CRC, New York (1989)
20. Pham, H.: Handbook of Reliability Engineering. Springer, Berlin (2003)
21. Liao, H., Elsayed, E.A.: Reliability inference for field conditions from accelerated degradation testing. Nav. Res. Logist. 53(6), 576–587 (2006)
22. Elsayed, E.A.: Overview of reliability testing. IEEE Trans. Reliab. 61(2), 282–291 (2012)
E. A. Elsayed is a Distinguished Professor in the Department of Industrial and Systems Engineering, Rutgers University. He served as Chairman of the Department of Industrial and Systems Engineering, Rutgers University, from 1983 to 2001. His research interests are in the areas of quality and reliability engineering and production planning and control. He is a co-author of Quality Engineering in Production Systems, McGraw-Hill Book Company, 1989. He is also the author of Reliability Engineering, 3rd Edition, John Wiley & Sons, 2021.
32 Education and Qualification for Control and Automation
Juan Diego Velasquez de Bedout, Bozenna Pasik-Duncan, and Matthew A. Verleger
Contents
32.1 Education for Control and Automation: Its Influence on Society in the Twenty-First Century
32.2 Automation Education in K-12
32.2.1 Systems and Control Education in K-12
32.2.2 Automation in K-12 Education
32.3 Control and Automation in Higher Education
32.3.1 First-Year College Students: Building an Early Understanding of Control Through Modeling Skills
32.3.2 A First Course in Systems and Control Engineering
32.3.3 Integrating Research, Teaching, and Learning in Control at a University Level
32.3.4 Remote and Virtual Labs
32.4 Conclusions
References
Abstract
Control is used in most common devices and systems: cell phones, computer hard drives, automobiles, aircraft, and many more, but it is usually hidden from our view. This deep integration means that the control and automation fields span science, technology, engineering, and mathematics (STEM) but often lack much-needed visibility. To advance the discipline, we must expand our outreach and education efforts. Integrating interdisciplinary research, teaching, learning, and outreach at all levels of education has become the key for success in other
areas of knowledge and it is that same level of integration that control, systems, and automation societies should pursue. Organizations such as the International Federation of Automatic Control (IFAC), IEEE Control Systems Society (CSS), and the American Automatic Control Council (AACC) have recognized that, to better support interdisciplinary work, control education must move well beyond isolated courses in upper-division STEM curricula to broaden participation and knowledge about controls for all STEM and non-STEM disciplines and to look at the entire educational pipeline starting in K-12. This chapter focuses on the importance of education in control and automation as well as some of the efforts and approaches being used to introduce controls within the educational pipeline from K-12 to higher education institutions (HEIs). The focus is on innovative approaches to teaching control and automation while engaging students as being major players in the learning process. Student feedback is an essential input in the continuous adaptation of developing teaching techniques and learning new skills, and therefore, teaching itself, we believe, needs to be viewed as a control system.
Keywords
Control education · Holistic modeling · K-12 education · Computational thinking · Remote and virtual labs · Teaching · Pipeline programs
J. D. V. de Bedout () Purdue University, West Lafayette, IN, USA e-mail: [email protected] B. Pasik-Duncan Department of Mathematics, University of Kansas, Lawrence, KS, USA e-mail: [email protected]; [email protected] M. A. Verleger Embry-Riddle Aeronautical University, Engineering Fundamentals Department, Daytona Beach, FL, USA e-mail: [email protected]
© Springer Nature Switzerland AG 2023 S. Y. Nof (ed.), Springer Handbook of Automation, Springer Handbooks, https://doi.org/10.1007/978-3-030-96729-1_32
32.1 Education for Control and Automation: Its Influence on Society in the Twenty-First Century
In 2017, the US National Science Foundation (NSF) announced the 10 Big Ideas for Future Investment [1] and has since been financially supporting pioneering work in
these areas. These research ideas, such as "Harnessing the Data Revolution" or the "Future of Work at the Human-Technology Frontier," all require expertise from multiple disciplines to come together to address the challenges that societies are facing. For instance, one of the ideas, "Growing Convergence Research," is explicitly about expanding our need to conduct interdisciplinary research that stimulates discovery and innovation by merging ideas, technologies, and methods from diverse areas of knowledge. "The goal was to motivate dynamic, fundamental, interdisciplinary research building on a theme that science is strongest when science works together," said Dawn Tilbury, head of the NSF Directorate of Engineering, in the opening of her talk during the plenary of the National Academy of Sciences 2020 Symposium [2]. Advances in computing and sensor development, and systems and control education, will have a significant role in the interdisciplinary research needed to address and provide solutions to NSF's 10 Big Ideas for the Future. Similarly, since the release of the Engineer of 2020: Visions of Engineering in the New Century [3] and Educating the Engineer of 2020: Adapting Engineering Education to the New Century [4], which collectively explored ways to better prepare STEM graduates for an ever-changing world market and context, the challenges of educating a new society and its needed workforce continue to resonate. The systems and control communities need to remain attentive and engaged to address the societal needs and reshape curricula to bring attention to the power of systems, controls, automation, and tools not only in STEM fields but in all areas of knowledge as they become more and more embedded in our everyday lives and way of life. It is a better integration of control and systems education in the education systems that will enable us to provide meaningful solutions to cross-disciplinary efforts like the 10 Big Ideas for Future Investment and the United Nations Sustainable Development Goals. Automatic control has had a broad and rich history [5–7] that has permeated many fields of work. At the center of control systems are functions that include modeling, identification, simulation, planning, decision-making, optimization, and deterministic and stochastic adaptation, among others. While the overarching goal of any control system is to aid automation, the successful application of the defining principles requires the integration of tools from complementary disciplines like electronics, communication, algorithms, sensors and actuators, as well as real-time computing, to name a few. As new discoveries emerge, and their corresponding knowledge communities are built, control and automation will very likely play an integral role in continuing to add to its history of transforming the world. In 2020, a public symposium sponsored by the National Science Foundation and the National Academies of Sciences, Engineering, and Medicine attracted participation from academia, government, and industry to discuss the future of undergraduate STEM
education to better meet the needs of students, science, and society with an eye on the graduates of 2040 [8] and start the conversation on what that graduate should look like as the world enters new knowledge frontiers. It is imperative that the control and systems communities continue to advocate and participate in these types of discussions and open societal and academic forums to bring to the forefront the importance of these areas of knowledge as the graduate of 2040 is defined. In yet another example of the influence of control and automation in a global context, in "Back to the Future of Education: Four OECD Scenarios for Schooling" [9] the organization showcases how even well-crafted plans can be upended by the uncertainties of the world we live in. For example, COVID-19 brought to the forefront needs and areas of attention for the world to better prepare for and mitigate, or reduce, the effects of pandemics on society. In the case of education, the authors showcase four scenarios (see Table 32.1) for the future of schooling as we look at the graduates of 2040, and they are discussed along factors such as teaching workforce, governance, and organization and structure. When one considers the different scenarios, it behooves control and automation societies, working groups, and connected industries to be prepared to collaborate and innovate with their active members (students, teachers, families, and providers) in the different scenarios. Several of the scenarios presented in Table 32.1 will require significant investments in automation and control to reach larger populations and handle decentralized learning and learn-as-you-go models. However, it is not just health crises like COVID-19 that remind the world of the need for more robust infrastructure and a better educated society to respond to the unforeseen. In the OECD report the authors also bring to the reader's attention a few other shocks that the world might experience in the not-so-distant future. Several of those shocks may originate in control and automation or will require solutions provided by the control and automation knowledge communities; they include: cyber war or attacks, Internet and communications disruptions, whether in undersea fiber cables or satellites, human-machine interfaces, and artificial intelligence. Therefore, starting to build pipeline programs for students to be exposed at an early age to automation and control will be key to our success in addressing these challenges. Lastly, another reminder of the need for, and influence on our society of, automation and control education comes from the world's existing physical infrastructure. With aging societies and infrastructure in many countries around the world, it is imperative to equip students with the most comprehensive education that allows them to navigate complex problems framed from a systems-of-systems perspective, all while working in a more inclusive society. For instance, the American Society of Civil Engineers (ASCE) grades different segments of the US infrastructure every four years to share
with citizens and policymakers the investment needs, which in turn can be extended to human capacity building needs. In 2009, Pasik-Duncan and Verleger [10] linked the National Academies of Engineering report on the vision of the Engineer of 2020 to the needs of STEM graduates to address the upcoming challenges. As seen from Table 32.2, not much has changed since then as far as trends go; infrastructure needs, their ratings, and the workforce needed to provide solutions continue to be among the issues requiring attention. Many of these infrastructure areas will continue to demand highly trained STEM graduates, many in areas in which control plays a dominant role (i.e., aviation, rail, transit, access to water, etc.).

Table 32.1 Four OECD scenarios for the future of schooling (modified from original figure) [9]

Schooling extended – Goals and functions: schools as actors in socialization, qualification, and credentialing. Organization and structure: education monopolies retain all functions of school systems. Teaching workforce: teachers in monopolies with new economies of scale and division of tasks. Governance and geopolitics: traditional administration, emphasis on international collaboration. Challenges for public authorities: accommodate diversity and ensure quality; trade-off between consensus and innovation.

Outsourced education – Goals and functions: fragmentation of demand with self-reliant clients looking for flexibility. Organization and structure: multiple organization structures available to individuals. Teaching workforce: diversity of roles and status within and outside schools. Governance and geopolitics: schooling systems as players in wider markets. Challenges for public authorities: supporting access, fixing "market" failures; competing with others and ensuring information flows.

Schools as hubs for learning – Goals and functions: flexible arrangements allow personalization and community involvement. Organization and structure: schools as hubs of multiple local-global resource configurations. Teaching workforce: professional teachers as nodes of networks of flexible expertise. Governance and geopolitics: strong focus on local decisions; self-organizing units in partnerships. Challenges for public authorities: diverse interests and power dynamics; large variation in local capacity.

Learn-as-you-go model – Goals and functions: technology overwrites traditional goals and functions. Organization and structure: dismantling of school as a social institution. Teaching workforce: open market with a central role for communities of practice. Governance and geopolitics: global governance of data and digital technologies is key. Challenges for public authorities: potential for high interventionism (public/private) impacts democracy and individual rights.

Table 32.2 North America's infrastructure report card from 2001 to 2017 by the American Society of Civil Engineers [11] (a dash indicates the area was not graded that year)

Area: 2001 / 2009 / 2017
Aviation: D / D / D
Bridges: C / C / C+
Dams: D / D / D
Drinking water: D / D− / D
Energy: D+ / D+ / D+
Hazardous waste: D+ / D / D+
Inland waterways: D+ / D− / D
Levees: – / D− / D
Ports: – / – / C+
Public parks: – / C− / D+
Rail: – / C− / B
Roads: D+ / D− / D
Schools: D− / D / D+
Solid waste: C+ / C+ / C+
Transit: C− / D / D−
Wastewater: D / D− / D+
Overall: D+ / D / D+

32.2 Automation Education in K-12

32.2.1 Systems and Control Education in K-12
Getting students exposed to control theory and automation early in their educational formation is critical to their development and future careers; therefore, it is imperative to start that exposure early and to connect the concepts to applications relevant to their context. In the USA, the concepts of automation, control theory, and the like often have to be framed within the context of educational initiatives like the Next Generation Science Standards [12] and the Common Core Standards Initiative [13], as most school boards and K-12 educators use them as their basis for covering concepts in all areas of knowledge and meeting testing standards.
In the work by Yadav, Hong, and Stephenson [14], the authors highlight the importance and significance of embedding computational thinking in K-12 and provide insight into doing so within the Next Generation Science Standards and the Common Core Standards Initiative. The authors focus their attention on computational thinking as the foundation for covering topics like algorithms, abstraction, and automation and provide examples by which teachers can embed the ideas into their content being presented. Computational thinking is defined by the authors as a process by which
one breaks down complex problems into more manageable components, going through a sequence of steps to solve the problems, reviewing the solution and how it might transfer to other contexts, and finally exploring whether a computer can help find the solution (automation). Finally, the importance of embedding the concept of computational thinking, and therefore automation, within multiple contexts is brought to the forefront when one looks at the hesitancy of school boards in rural communities to embed the concepts, often because of a lack of relevance to their communities. However, we might have an opening due to COVID-19 to get rural communities to embrace the concepts of computational thinking because of the challenges the pandemic placed on educational systems. Another activity that requires the attention of the community for building pipelines of K-12 students interested in control and automation, as previously discussed, is to engage girls (young women) early on with content that is relevant and that addresses their interests. STEM areas are highly dominated by men, and control and automation is a fitting example of how that lack of representation translates to academia, from students to faculty [15]. There are many stereotypes regarding the preferences and interests of young women that need to be addressed [16, 17], and, therefore, programs that explore ways to do outreach and be more inclusive are critical to automation and control educators – a few examples of such programs and their descriptions can be found and accessed online. Some of the initiatives include the National Girls Collaborative Project [18], Girls Who Code [19], Million Women Mentors [20], etc. But not all efforts require large infrastructure and complex organizations. During the 21st IFAC World Congress, held in 2020 and virtually hosted by Germany, a workshop titled "Girls in Control (GiC)" [21] led by researchers from Germany and Norway welcomed over 100 girls (10–15 years old) from 20 countries to stimulate interest in control and the pursuit of careers in STEM. Likewise, the Society of Women Engineers (SWE) student chapter at Embry-Riddle Aeronautical University has been leading an annual "Introduce a Girl to Engineering" workshop for up to 350 girls in third through fifth grade [22].
32.2.2 Automation in K-12 Education
It is not just students who need to be exposed to control and automation; so do teachers in K-12 settings, to gain deeper knowledge and to understand how the profession and their qualifications need to evolve. The profession of teaching is facing many pressures, among them: (1) the number of working hours per week – an average of 50 h per week, a number that has grown 3% over the last five years according to an OECD survey [23]; (2) teacher turnover due to burnout and high attrition rates; and (3) the concern of being replaced by robots and artificial intelligence. But the picture is not
as dire for K-12 teachers as some have painted, in part due to the need in emerging economies for more educators, the realization that automation can in fact be a benefit for teachers if those benefits are garnered properly, and the fact that many of the activities they perform do not lend themselves to automation (Fig. 32.1). In India and China, for example, the estimated demand for teachers will grow more than 100% from 2016 to 2030 [24]. Automation, particularly aspects of artificial intelligence, can free between 20% and 40% of educators' time from routine work using automated tools and technologies [25]. This "gained" time can then be used with students for more meaningful interactions that computers, robots, and artificial intelligence cannot provide.
32.3 Control and Automation in Higher Education
32.3.1 First-Year College Students: Building an Early Understanding of Control Through Modeling Skills
Control engineering is functionally dependent on one's ability to develop sophisticated system models, and while control engineering tends to focus on applying discipline-specific tools to those models, students need a broader foundational knowledge about models to be successful in their studies and careers [26, 27]. One approach to building those competencies is presented by Rodgers and colleagues [28–30] through a holistic modeling perspective (HMP). HMP focuses on students' interactions with all types of models, placing a strong emphasis on moving beyond simple model creation into a deeper understanding of how models can represent real systems and phenomena. HMP is a perspective on how students interact with models, with models representing a broad set of tools and techniques for translating problem complexity that includes mathematical models, physical models (e.g., drawings and prototypes), and computational models (e.g., simulations and programming – logic trees, concept maps, and flow charts). The goal is to increase students' awareness of and comfort with engineering as modeling, which should then translate into greater skill across a broad range of topics, including systems and control. In the authors' work, the primary approach to implementing HMP has been through revising the language and activities used in first-year engineering courses. For example, at Embry-Riddle Aeronautical University, three core courses have been revised: EGR101 – Introduction to Engineering Design, EGR120 – Introduction to Graphical Communication, and EGR115 – Introduction to Computer Programming. The EGR101 – Introduction to Engineering Design course covers a multitude of professional skills and introductory material.
Fig. 32.1 Technical potential for automation across sectors depending on mix of activity types [24]
The two required components of the course are teamwork and ethics training. In redesigning the course, an explicit unit about modeling was added, exploring the various kinds of models and how modeling is a fundamental design tool. Greater emphasis was placed on modeling a design for a physical object, iterating on the design to address any underlying issues, and then implementing it according to the modeled design. A key outcome from this course was building familiarity with the language and terminology around modeling.

EGR120 – Introduction to Graphical Communication is split into two primary topics: (1) hand sketching and (2) parametric modeling using CATIA. The revision involved adjusting the terminology to emphasize that the work being done is a form of modeling. Because of the inherent nature of parametric modeling, few changes were needed to the existing content. The biggest change was in better demonstrating the potential interplay between the mathematical/computational models and the virtual physical models that students were developing. This took the form of demonstrating how CATIA has built-in or add-on toolboxes for visualizing kinematics, conducting finite element analysis, and performing computational fluid dynamics. Emphasis was also placed on how each of these three components involves mathematical and computational modeling as part of its calculation processes.

EGR115 – Introduction to Computer Programming received the most revision. This course is split into two primary topics: (1) problem-solving and algorithm development and (2) computer programming in MATLAB. The goal is to teach students how to develop mathematical models and then code these models in MATLAB – making them computational models. This shift required explicitly teaching students mathematical and computational modeling skills by revising and replacing the existing topic about problem-solving and algorithm development.
One key approach in this revision was the introduction of modeling problems. Each term, two problems are given, each with three phases. During the first phase, students develop a written solution to the modeling problem, identifying the inputs and outputs, relevant diagrams, needed theory, necessary assumptions, working a small test case, and verifying its accuracy. During the second phase, they develop a pseudocode/flowchart to implement their model, describing not only the specific steps and calculations but also their rationale for taking each step. During the final phase, they translate their pseudocode into a MATLAB program. Throughout each phase, the problem evolves, including additional inputs and larger sample datasets. As an example of the types of problems being given to students, one problem asks students to develop a model for selecting wind farm locations. They are provided with a basic diagram relating wind speed to power output, with indicators of the speeds necessary for starting a turbine (cut-in speed), the speed at which peak output power is given (rated output speed), and the maximum speed, after which the turbine is locked down for safety reasons (cutout speed). In solution phase 1, students are given annual average and peak wind speeds for five locations. In phase 2, the dataset is expanded to include average and peak wind speeds and average temperature and pressure for each month for seven locations. In phase 3, the data is expanded to be
the daily statistics for an entire year for seven locations. By changing the datasets in this way, students come to expect that their solutions must be iterated on and will evolve over time. These changes have produced a meaningful shift in students' ability to solve complex modeling problems compared to students who did not receive the modeling intervention. The focus on modeling in the first year sets the stage for more advanced courses, including control and systems engineering courses, building on that familiarity in a meaningful way.
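To make the wind-farm modeling problem concrete, a minimal sketch of a phase-1 style solution is shown below. The course work itself is done in MATLAB; this sketch is in Python purely for illustration, and the power-curve parameters and site wind speeds are invented assumptions, not data from the course.

```python
# Minimal sketch (not the course's MATLAB solution) of a phase-1 wind-farm model:
# a piecewise power curve with cut-in, rated-output, and cut-out speeds, used to
# rank candidate locations by their annual average wind speed. All numbers below
# are illustrative assumptions.

CUT_IN = 3.5       # m/s: below this speed the turbine produces no power
RATED = 13.0       # m/s: speed at which peak output is reached
CUT_OUT = 25.0     # m/s: above this speed the turbine is locked down for safety
RATED_POWER = 2.0  # MW: peak output of the hypothetical turbine

def power_output(wind_speed):
    """Estimate turbine output (MW) for a given wind speed (m/s)."""
    if wind_speed < CUT_IN or wind_speed > CUT_OUT:
        return 0.0
    if wind_speed >= RATED:
        return RATED_POWER
    # Simple linear ramp between cut-in and rated speed (a common textbook simplification).
    return RATED_POWER * (wind_speed - CUT_IN) / (RATED - CUT_IN)

# Phase-1 style input: annual average wind speed for five candidate locations.
locations = {"A": 6.2, "B": 9.8, "C": 4.1, "D": 12.5, "E": 7.9}

ranked = sorted(locations.items(), key=lambda kv: power_output(kv[1]), reverse=True)
for name, speed in ranked:
    print(f"Site {name}: avg {speed:4.1f} m/s -> {power_output(speed):.2f} MW estimated")
```

In later phases, the same power-curve function would simply be applied to the larger monthly and daily datasets, which is exactly the kind of iteration the three-phase structure is designed to encourage.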
32.3.2 A First Course in Systems and Control Engineering

Control theory appears in most engineering curricula but has historically come across as dull and purely mathematical. In introductory courses we need to inspire and engage students to see and experience the exciting possibilities that an understanding of control enables, so that they are keen to select more advanced courses or, better yet, to pursue this area of work within their careers. With this in mind, the two main international technical committees on control education (of the IEEE Control Systems Society – CSS – and the International Federation of Automatic Control – IFAC) undertook a survey of the global community on what would constitute an ideal first course in control [31]. Perhaps unsurprisingly, the most important output was an overwhelming consensus that a first course should focus on concepts, case studies, motivation, context, and so forth, as shown by the responses in Fig. 32.2. While some mathematical depth and rigor is important and needed, students can focus on these aspects later once their interest and enthusiasm are confirmed. Furthermore, stimulating industrial case studies and hardware experiments are strongly encouraged to emphasize the context and core issues.
Fig. 32.2 Survey response: a first course should focus on concepts, case studies, motivation, and context. Responses from academia and industry range from "strongly agree" to "strongly disagree," with 49% of academic and 38% of industry respondents strongly agreeing
With the emphasis on concepts accepted, the next question instructors might ask is what topics and content should be included and how these should be delivered. A detailed survey of pedagogical issues and good practices for content delivery in control education is presented in [32]; suffice it to say that the understanding of effective learning and teaching approaches is always evolving and educators need to adopt these new insights. For example, flipped learning approaches [33, 34] supported by suitable resources are now well accepted and becoming more common in most STEM courses. One popular example is the potential for remote-access laboratories [35] and take-home laboratories [36] to give students more freedom to explore and apply their learning to authentic scenarios. Such independent learning approaches are increasingly relevant in a modern digital world and also support wider development of student skills. A good course should also include a sizable amount of self-assessment resources (e.g., computer quizzes and software tools) that students can use independently to support and self-assess their progress. Moreover, staff should include a range of learning outcomes that go beyond the technical aspects and include professional skills. In terms of technical content, there has been reasonable international consensus on the more important topics to be included. Aside from authentic case studies and laboratories to set the scene, the following topics are important: (i) first-principles modeling for a range of systems; (ii) system dynamics and the quantification of behaviors, including stability; (iii) performance measures leading to quantification of the weaknesses of open-loop control; (iv) the importance of feedback and simple implementations; and (v) an introduction to PID tuning. Finally, there was a strong consensus that Laplace tools would be appropriate to support much of the above learning, as well as software tools to support the simulation aspects. Many important topics are not included in this first course description, but with limited lectures and time it is better to get students to buy into the importance and prevalence of the topic so they can transition smoothly to the workplace. A good independent learner who is enthused and convinced of a topic's importance will easily pick up further technical content as needed.
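To give a concrete flavor of topics (iv) and (v) above, the following minimal simulation sketch closes a feedback loop around a simple first-order process with a PID controller. It is an illustrative example only; the plant model, gains, and sample time are assumed values invented for this sketch, not recommendations from the survey or any specific course.

```python
# A minimal closed-loop sketch: a PID controller regulating a first-order plant
# dy/dt = (K*u - y)/tau, discretized with forward Euler. All numbers are assumptions.

K, tau = 2.0, 5.0           # plant gain and time constant (assumed)
Kp, Ki, Kd = 1.2, 0.3, 0.1  # PID gains (assumed, roughly hand-tuned)
dt, setpoint = 0.1, 1.0     # sample time [s] and reference value

y, integral, prev_error = 0.0, 0.0, 0.0
for step in range(300):                                 # simulate 30 s
    error = setpoint - y
    integral += error * dt
    derivative = (error - prev_error) / dt
    u = Kp * error + Ki * integral + Kd * derivative    # PID control law
    prev_error = error

    # Forward-Euler step of the first-order plant driven by the control input u
    y += dt * (K * u - y) / tau

    if step % 50 == 0:
        print(f"t = {step*dt:5.1f} s   y = {y:.3f}")
```

Running the loop shows the output settling at the setpoint, which is the kind of hands-on observation that remote-access and take-home laboratories aim to make routine.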
32.3.3 Integrating Research, Teaching, and Learning in Control at a University Level

At the University of Kansas, Math 750 – Stochastic Adaptive Control, a course developed 30 years ago based on research initiated in Poland [37], has been adapted regularly to new developments in the stochastic adaptive control area. This course focuses on innovative methods of teaching stochastic adaptive control to students who represent all science, technology, engineering, and mathematics (STEM) fields.
The course attracts junior and senior undergraduate and graduate students and leads toward honors theses for undergraduates and master's and doctoral theses for graduate students. The theory of stochastic adaptive control generates many attractive problems for mathematics, engineering, and computer science as well as for economics and business students. The Math 750 course was developed and designed for students from all STEM fields to increase, in a friendly and welcoming way, general awareness of the importance of control and control technology and its cross-disciplinary nature. It requires a very good mathematical understanding of real analysis, probability, and statistics as well as good computational skills. The diversity of students differs from year to year; therefore, the way the course is taught, its content, and its illustrative examples of applications differ too. Some of the major ideas and concepts behind teaching this course are described below, and some of the same techniques have been adopted in other courses such as probability, mathematical statistics, and calculus.

Classroom Teaching as a Stochastic Adaptive Control Problem
The classroom with students and their instructor is considered a controlled system. It is stochastic because there is a lot of randomness in the classroom. The classroom becomes a scientific laboratory in which information is collected. The students in the class are unknown to the instructor at the beginning of a semester; they come from different departments with different academic backgrounds. The challenge for the instructor is to learn about the students by collecting relevant course information about them. Too many unknowns make the system unknown, so learning or identifying the system is as critical an issue in teaching as it is in research.

Short Bio as the First Assignment
One of the best practices for learning about students from the beginning of a semester is to ask them to prepare a short bio with information that will help the instructor find optimal adaptive strategies. The information should cover the student's family and academic background, math and science courses taken, any significant recognitions, motivation for taking the course, short- and long-term goals for study and career, research and real-world problem interests, and hobbies and favorite free-time activities. These short bios should be revisited by the instructor as the semester progresses, and students should have an opportunity to update them twice: in the middle and at the end of the semester. This method of teaching is called scholarship in teaching: the idea is to treat teaching as a stochastic process that changes over time, a process with several components such as vision, design, data collection, and data analysis, and, finally, to integrate teaching and learning.
C5 – Curiosity, Creativity, Connections, Communication, and Collaboration: A Key to Success
Curiosity is the most important part of learning in this course, as it covers many different topics. Students are engaged in research discussion through curiosity and creativity. They are alert in this course to what they are doing, they ask why things are the way they are, and they try different ways to explain to each other what they observe. They make connections. The cross-boundaries nature of stochastic adaptive control, probability, mathematical statistics, or even calculus is exciting for students. Students love collaboration: they bring their individual skills while searching for the correct models for solving problems of the modern, complex real world.

Reviewing Papers Is a Powerful Method to Prepare Students for Writing a Scientific Project
In recent years a new practice of reviewing a conference paper has been added to the course, and it has turned out to be the most effective way of teaching students how to write a scientific project. Students are provided with a sample of very professionally done reviews, and they have done outstanding work. Their reviews are very constructive and detailed, providing missing references and missing steps in mathematical proofs, indicating weaknesses in the interpretation of results and in particular graphs, and offering insightful remarks on editing. Their reviews have led to a significant improvement in the quality of the paper presentation.

Guest Speakers Provide New Perspectives
Guest speakers energize, inspire, and motivate students. Former students who took the Stochastic Adaptive Control course, or research collaborators, are invited as guest speakers during the middle of the semester to have a conversation with students and share their industrial or academic career experiences. Real-world problems generate new mathematical problems. The mathematics of finance and of networks, as well as the understanding of the brain as the most complex system, generate new stochastic control problems, such as the identification, detection, and control of stochastic systems with noise modeled by a fractional Brownian motion. Neurologists, mathematicians, and engineers from medical research centers talk about epilepsy, seizures, and COVID-19. Students discover the amazing creativity and connections of STEM fields. Bringing research collaborators, especially those from other countries, to the classroom, with good preparation for their visit, changes students' perspectives. It opens their eyes and awakens their imagination and creativity. They see how math and control can be found everywhere and how often they are hidden. They see the role of the broad and inspiring vision of control. They find these visits to be most inspirational and most motivating for continuing the study of mathematics and engineering and for doing research in stochastic adaptive control. In a summary of the
reflections on a talk, a student wrote: "Today's visitor is a fine example of how an engineer should proceed with his career and deal with his success. I hope that the inspiration I have drawn from the speaker can help fuel some creativity for me in my field and motivate me to do my work for the betterment of the world."

Vertical and Horizontal Integration and Peer Tutoring
In courses with a very diverse group of undergraduate and graduate students, vertical and horizontal integration as well as peer tutoring play crucial roles in building a community of learners. Vertical integration incorporates students at different levels in their studying and learning approach; it involves and engages both undergraduate and graduate students. Horizontal integration incorporates students from various disciplines in learning and in teaching. Through peer tutoring, students become teachers for their peers: they enthusiastically volunteer to teach the entire class a topic that they learned well or studied independently. A peer tutoring program was created because some students needed help and others could offer that help. The students build not only a community of learners but, importantly, lifetime friendships.

Students Who Are Broadly Prepared Have a Higher Chance of Finding Attractive Jobs
The Math 750 – Stochastic Adaptive Control course prepares students broadly for studying and for doing research, and for the new challenges and opportunities of the twenty-first century. Students are better prepared to process information, work with complex systems, manage large datasets using advanced mathematical and statistical methods, and understand randomness and uncertainties in a system. Integrating research, learning, and teaching using the stochastic adaptive control approach described above has proven to be both effective and enjoyable.
32.3.4 Remote and Virtual Labs

A key tool for solidifying the teaching and understanding of key concepts is the laboratory. Laboratories give students the ability to try out and apply their learning in a setting that allows them to interact with the system, make mistakes, see results, and learn in an interactive fashion. Learning by doing, a theory first promoted by John Dewey, reinforces the significance of learning environments like laboratories for adapting and learning. Remote labs and virtual labs are now part of the pedagogical toolset that educators can leverage to help their students reinforce concepts, particularly in STEM fields. Virtual and remote labs (VRLs) have been used in academic settings for the last 15–20 years to supplement students' learning [35].
Fig. 32.3 Global reach of nanoHUB. Red dots represent lectures and tutorials. Yellow dots represent simulation users, and green lines represent collaborations documented by publications citing nanoHUB as a resource. (Network for Computational Nanotechnology/Purdue University)
Virtual labs have allowed educational institutions to reach students in many locations, as the labs can be accessed anywhere and at any time. They have also enabled greater accessibility for students with disabilities and, in some settings, provide a safer environment for students to learn (e.g., the manipulation of chemical reactions). For instance, Purdue University's nanoHUB was the first end-to-end science and engineering cloud of its kind, allowing users to model and simulate for enhanced collaboration and learning and, in turn, enabling simulation-powered curricula. nanoHUB was launched in 2002 with support from the National Science Foundation as a collaboration among six partner universities, and its reach is now global, as seen in Fig. 32.3. Although the work started in 2002 with nanoHUB, Purdue has continued to innovate and lead in the development of virtual and online labs. When the COVID-19 pandemic struck in 2020, the College of Engineering was well positioned to continue providing quality education to its students: the college had already started to transition to online and virtual labs to address a decade's growth in undergraduate and graduate education as well as the significant increase in online classes. The push to bring educational content online or to the cloud, through VRLs and massive open online courses (MOOCs), was brought to the forefront and has received more attention and investment due to the COVID-19 pandemic [38].
32.4 Conclusions
Control education is at a crossroads. Low-cost computing and sensors, advances in artificial intelligence and big data management, and the necessity of interdisciplinary research for solving modern problems have increased the need for broader participation and awareness in control education. Control systems now play critical roles in many fields that include
electronics manufacturing, communications and telecommunications, autonomous vehicles and transportation, power systems and energy, finance, biology, biomedicine, actuarial sciences, computers and networks, neuroscience and computational neuroscience, neurology, many biological systems, and many military systems. Meaningful advancement in these and other areas depends on our ability to adequately engage future engineers and scientists in control education and to foster curiosity well before they reach a formal controls course. Readers might want to look at Ch. 54 "Library Automation and Knowledge Sharing" and Ch. 63 "Automation in Education, Training, and Learning Systems," two chapters that discuss in detail the emerging capabilities for expanding the reach, engagement, and accessibility of augmenting educational tools and knowledge, which are complementary to our discussion of emerging needs. In addition, readers might also want to read Ch. 68 "Case Study: Automation Education and Qualification Apprenticeships" as another related extension of the work described in this chapter. In the new era, stochastic control and stochastic systems will play an increasingly important role. Not only are they used in traditional fields such as engineering, but also in emerging application areas such as financial engineering, economics, risk management, mathematical biology, environmental sciences, and social networks, to mention just a few. It is well known that a good understanding of randomness in systems is a critical and central issue in all areas of science. Noise in a system cannot be disregarded, as there are examples showing that noise can stabilize or destabilize a system. This is particularly important in the rapidly advancing biomedical and financial engineering application areas, but also in all areas where measurements carry errors. Mathematical statistics with a good understanding of randomness in big data, analyzing and modeling data, and making proper statistical inferences are all things students must learn, along with programming languages, now and for the next 10 years. Specific topics of central interest include societal systems, autonomous systems, infectious disease systems, the brain system, and universal access to technology with its focus on security, privacy, and ethics. Data-driven science approaches will lead toward artificial intelligence (AI). Education and training curricula for K-12 and beyond need to respond to all the new challenges described above. As we look into the emerging future of education, one can see efforts like the Colombia Purdue Partnership, a university-to-country win-win collaboration, as a means to reduce educational disparities, provide access to higher education without borders, and address and provide solutions to local, glocal, and global challenges, including in areas of control and systems. Programs like the Undergraduate Research Experience, which pairs students with mentors at Purdue to work on jointly devised research programs, will enable students
who usually have limited exposure to global programs to work collaboratively with their advisors and to pursue apprenticeships and the certification of skills they did not possess prior to the experience. It is programs like these, which include experiential learning and research internships, that educators need to leverage to awaken students' interest in pursuing work in the areas of controls and systems. The longevity of the control field depends on its continuous success in attracting the most gifted young people to the profession. Early exposure is the key to that goal. The idea is that education at all levels is an inclusive process: it should integrate scholarship, teaching, and learning both horizontally and vertically, creating a learning experience for students of all ages, from K-12 to higher education and beyond. This year marked the twentieth anniversary of the Ideas and Technology Control Systems workshops for middle and high school teachers and students, recently renamed the Power, Beauty, and Excitement of the Cross-Boundaries Nature of Control, a Field that Spans Science, Technology, Engineering, and Mathematics (STEM). These workshops are held twice a year in conjunction with the ACC, CDC, and IFAC meetings and congresses. During the last 20 years, over 20,000 middle and high school students and their teachers, as well as undergraduate students, have been reached through our educational activities. Over 225 academic and industry representatives have shared their passion and given inspirational talks at these workshops. The purpose of these workshops is to increase awareness among students and teachers of the importance and cross-disciplinary nature of control and systems technology in everyday life. The workshops show the power of cross-boundaries research and have been presented to students and teachers in Atlanta, Baltimore, Chicago, Denver, Las Vegas, Los Angeles, Maui, New Orleans, Orlando, Portland, San Diego, Seattle, St. Louis, Washington, DC, Boston, Milwaukee, and Philadelphia, and have been held in the Czech Republic, Cyprus, South Korea, Poland, Spain, and South Africa. The model of a sustainable outreach partnership between our control communities and the school districts in the places where our major conferences are held was established and subsequently followed by other organizations and societies. This outreach partnership has provided a vehicle for demonstrating the importance of control. The workshop activities include presentations by control systems experts from the control community, informal discussions, and the opportunity for teachers to meet passionate researchers and educators from academia and industry. A "Plain Talks" initiative, closely related to these outreach efforts, was also started; its goal was to develop short and inspirational presentations for teachers and students as well as for noncontrol engineering communities.
Acknowledgments We would like to recognize John Anthony Rossiter from the University of Sheffield in the UK for his contribution to the material on courses in systems and control engineering.
References
1. NSF – National Science Foundation: "NSF's 10 Big Ideas – Special Report," 2017. https://www.nsf.gov/news/special_reports/big_ideas/ (accessed 02 Dec 2020)
2. NAS – National Academy of Sciences: "Grand Challenges in Science Plenary Talk – Panel 1: Views from Other Initiatives," 2020. http://www.nasonline.org/about-nas/events/annual-meeting/nas157/symposium.html (accessed 02 Dec 2020)
3. Clough, G.W.: The Engineer of 2020: Visions of Engineering in the New Century. National Academies Press, Washington, DC (2004)
4. Committee on the Engineer of 2020 and National Academy of Engineering: Educating the Engineer of 2020: Adapting Engineering Education to the New Century. The National Academies Press, Washington, DC (2005). https://doi.org/10.17226/11338
5. Fleming, W.H.: Report of the Panel on Future Directions in Control Theory: A Mathematical Perspective. Soc. for Industrial and Applied Math. (1988)
6. Murray, R.M.: Control in an Information Rich World: Report of the Panel on Future Directions in Control, Dynamics, and Systems. SIAM (2003)
7. Bittanti, S., Gevers, M.: On the dawn and development of control science in the XX century (2007)
8. Symposium on Imagining the Future of Undergraduate STEM Education. https://stemfutureshighered.secure-platform.com/a/ (accessed 09 Dec 2021)
9. OECD: Back to the Future of Education (2020). https://doi.org/10.1787/178ef527-en
10. Pasik-Duncan, B., Verleger, M.: Education and qualification for control and automation. In: Springer Handbook of Automation, pp. 767–778. Springer, Berlin Heidelberg (2009). https://doi.org/10.1007/978-3-540-78831-7_44
11. America's Infrastructure Report Card 2021 | GPA: C-. https://infrastructurereportcard.org/ (accessed 09 Dec 2021)
12. Next Generation Science Standards. https://www.nextgenscience.org/ (accessed 09 Dec 2021)
13. Home | Common Core State Standards Initiative. http://www.corestandards.org/ (accessed 09 Dec 2021)
14. Yadav, A., Hong, H., Stephenson, C.: Computational thinking for all: pedagogical approaches to embedding 21st century problem solving in K-12 classrooms. TechTrends 60(6), 565–568 (2016). https://doi.org/10.1007/s11528-016-0087-7
15. Annaswamy, A.: Women in the IEEE Control Systems Society [President's Message]. IEEE Control Syst. 40(5), 8–11 (2020)
16. Schuster, C., Martiny, S.E.: Not feeling good in STEM: effects of stereotype activation and anticipated affect on women's career aspirations. Sex Roles 76(1–2), 40–55 (2017). https://doi.org/10.1007/s11199-016-0665-3
17. Makarova, E., Aeschlimann, B., Herzog, W.: The gender gap in STEM fields: the impact of the gender stereotype of math and science on secondary students' career aspirations. Front. Educ. 4, 60 (2019). https://doi.org/10.3389/feduc.2019.00060
18. National Girls Collaborative Project. https://ngcproject.org/ (accessed 09 Dec 2021)
19. Girls Who Code | Home. https://girlswhocode.com/ (accessed 09 Dec 2021)
20. Million Women Mentors. https://mwm.stemconnector.com/ (accessed 09 Dec 2021)
21. Girls in Control (GIC) Workshop and Material – IFAC International Federation of Automatic Control. https://www.ifac-control.org/areas/girls-in-control-gic-workshop-and-material (accessed 09 Dec 2021)
22. IGEW, Dec. 01, 2021. https://www.igew.org/ (accessed 09 Dec 2021)
23. OECD: TALIS 2018 Results (Volume I): Teachers and School Leaders as Lifelong Learners. OECD Publishing, Paris (2019)
24. Manyika, J., et al.: A Future That Works: Automation, Employment, and Productivity. McKinsey Global Institute (2021). https://www.mckinsey.com/~/media/mckinsey/featured%20insights/Digital%20Disruption/Harnessing%20automation%20for%20a%20future%20that%20works/MGI-A-future-that-works-Executive-summary.ashx (accessed 09 Dec 2021)
25. Bryant, J., Heitz, C., Sanghvi, S., Wagle, D.: How artificial intelligence will impact K-12 teachers. Retrieved May 12, 2020 (2020)
26. Hazelrigg, G.A.: On the role and use of mathematical models in engineering design. J. Mech. Des. Trans. ASME 121(3), 336–341 (1999). https://doi.org/10.1115/1.2829465
27. Zawojewski, J.S., Diefes-Dux, H.A., Bowman, K.J.: Models and Modeling in Engineering Education: Designing Experiences for All Students. Sense Publishers, Rotterdam (2008)
28. Marbouti, F., Rodgers, K.J., Verleger, M.A.: Change in student understanding of modeling during first year engineering courses. In: ASEE Annual Conference Proceedings (2020)
29. Rodgers, K.J., Verleger, M.A., Marbouti, F.: Comparing students' solutions to an open-ended problem in an introductory programming course with and without explicit modeling interventions. In: ASEE Annual Conference Proceedings (2020)
30. Rodgers, K.J., McNeil, J.C., Verleger, M.A., Marbouti, F.: Impact of a modeling intervention in an introductory programming course. In: American Society for Engineering Education (ASEE) 126th Annual Conference and Exposition (2019)
31. Rossiter, A., Serbezov, A., Visioli, A., Žáková, K., Huba, M.: A survey of international views on a first course in systems and control for engineering undergraduates. IFAC J. Syst. Control 13, 100092 (2020). https://doi.org/10.1016/j.ifacsc.2020.100092
32. Rossiter, J.A., Pasik-Duncan, B., Dormido, S., Vlacic, L., Jones, B., Murray, R.: A survey of good practice in control education. Eur. J. Eng. Educ. 43(6), 801–823 (2018). https://doi.org/10.1080/03043797.2018.1428530
33. Crouch, C.H., Mazur, E.: Peer instruction: ten years of experience and results. Am. J. Phys. 69(9), 970–977 (2001)
34. Bishop, J.L., Verleger, M.A.: The flipped classroom: a survey of the research. In: Paper Presented at the ASEE Annual Conference, pp. 1–18 (2013)
35. Heradio, R., de la Torre, L., Dormido, S.: Virtual and remote labs in control education: a survey. Annu. Rev. Control 42, 1–10 (2016). https://doi.org/10.1016/j.arcontrol.2016.08.001
36. Taylor, B., Eastwood, P., Jones, B.L.: Development of a low-cost, portable hardware platform for teaching control and systems theory. In: IFAC Proceedings Volumes (IFAC-PapersOnline), vol. 10, pp. 208–213 (2013). https://doi.org/10.3182/20130828-3-UK-2039.00009
37. Pasik, B.: On Adaptive Control. SGPiS (1986)
38. Bhute, V.J., Inguva, P., Shah, U., Brechtelsbauer, C.: Transforming traditional teaching laboratories for effective remote delivery – a review. Educ. Chem. Eng. 35, 96–104 (2021). https://doi.org/10.1016/j.ece.2021.01.008

Juan Diego Velasquez de Bedout has worked 8 years in international higher education and US-Colombia academic cooperation. Juan is the managing director of the Colombia Purdue Partnership in the Office of Global Partnerships, where he manages a portfolio of programs between Purdue's 10 colleges and over 20 Colombian entities (academic, nonacademic, and governmental). His previous positions include assistant director of TA and Curricular Development in the Center for Instructional Excellence and, most recently, managing director of Strategic Initiatives for the College of Engineering (both positions at Purdue). He has authored several refereed journal articles and conference proceedings and has received several awards for teaching at Purdue University. He holds a Ph.D. in industrial engineering from Purdue University.

Bozenna Pasik-Duncan received her Ph.D. and D.Sc. from the Mathematics Department of the Warsaw School of Economics (SGH) in 1978 and 1986, respectively. She is currently professor of mathematics, Chancellor's Club Teaching Professor, courtesy professor of EECS and AE, and investigator at ITTC. Her current research interests are primarily in stochastic adaptive control, data analysis, and stochastic modeling with applications to finance, insurance, medicine, and telecommunications. Her other interests include STEM and control engineering education. She has been actively involved in IEEE CSS, AACC, and IFAC in a number of capacities. She is Immediate Past Global Chair of IEEE WIE and cofounder of CSS Women in Control, Past Chair of the AACC, IFAC, and CSS TCs on Control Education, and Immediate Past IFAC TB Member for Education and of the IFAC Task Force on Diversity and Inclusion. She has been the founder and coordinator of the KU Mathematics Outreach Program since 1992. She received the IEEE Third Millennium Medal, the IEEE CSS Distinguished Member Award, and the IFAC Outstanding Service Award. She is an IEEE Life Fellow, IFAC Fellow, AWM Fellow, and member of the KU Women's Hall of Fame and IEEE-HKN.
Dr. Matthew A. Verleger received his PhD in engineering education from Purdue University (2010). He is currently a professor of engineering fundamentals at Embry-Riddle Aeronautical University. His research interests are in developing educational software to enhance student learning and exploring how students learn to develop models and modeling skills. Currently, he is developing the tools to gamify courses within a standard LMS environment as a means of increasing student engagement.
33 Interoperability and Standards for Automation
François B. Vernadat
Contents
33.1 Introduction – 729
33.2 Interoperability in Automation – 730
33.2.1 The Need for Integration and Interoperability – 730
33.2.2 Systems Interoperability – 731
33.2.3 Enterprise Interoperability and Enterprise Integration – 731
33.3 Integration and Interoperability Frameworks – 733
33.3.1 Technical Interoperability – 734
33.3.2 Semantic Interoperability – 735
33.3.3 Organizational Interoperability – 735
33.3.4 Technologies for Interoperability – 735
33.4 Standards for Automation – 738
33.4.1 Standards for Automation Project or System Management – 738
33.4.2 Standards for Automated Systems Modeling, Integration, and Interoperability – 740
33.4.3 Standards for E-Commerce and E-Business – 740
33.4.4 Standards for Emerging Industry (Including I4.0, IoT, and CPS) – 740
33.4.5 Standards for Industrial Data – 740
33.4.6 Standards for Industrial Automation – 740
33.4.7 Standards for Industrial and Service Robotics – 742
33.4.8 Standards on Usability and Human-Computer Interaction – 742
33.5 Overview and Conclusion – 743
References – 751

Abstract
Interoperability is an essential characteristic of systems when they have to communicate, cooperate, or collaborate within an organization or across organizations, be they single organizations or networked organizations. Automated systems are subject to interoperability when they have to work together, i.e., to interoperate. They are also subject to a number of standards for their design, implementation, integration, maintenance, and operations. This chapter combines two parts. First, it defines systems interoperability in the context of automation and enterprise integration; reviews relevant interoperability frameworks; discusses technical, semantic, and organizational dimensions of interoperability; and presents essential underlying technologies. Second, major international standards for automation projects in small, medium, or large organizations are listed before concluding.

Keywords
Automated systems · Systems interoperability · Enterprise interoperability · Enterprise integration · Automation standards · ISO · IEC · ANSI · CEN standards

F. B. Vernadat, Laboratoire de Génie Informatique, de Production et de Maintenance (LGIPM), University of Lorraine, Metz, France

© Springer Nature Switzerland AG 2023
S. Y. Nof (ed.), Springer Handbook of Automation, Springer Handbooks, https://doi.org/10.1007/978-3-030-96729-1_33

33.1 Introduction
Automation is the art of providing devices, machines, or any kind of man-made systems with autonomy (i.e., operation and self-regulation) and even decision-making or collaborative abilities resulting in automated systems, i.e., systems acting in and reacting to their environment without significant human intervention. In simpler terms, "Automation is a way for humans to extend the capabilities of their tools and machines" as stated by Prof. Williams [1]. While initially extensively applied to manufacturing, industrial, and military systems, automation is nowadays present in all sectors of human society (e.g., home robotics, automobile industry and automotive products, aeronautical and space industries, maritime systems, energy power plants, water treatment, medical equipment and health care systems, agriculture, or financial systems to name a few) as illustrated in Ch. 1 Automation: What It Means to Us Around the World, Definitions, Its Impact and Outlook.
Automation has its foundation in automatic control but also heavily relies on advances in information and communications technologies (ICT) as well as artificial intelligence (AI) techniques, the best example being robotics developments. Due to the ever-increasing complexity of human society, automated systems no longer exist in isolation, and islands of automation are becoming the exception. In many cases, these systems have to interact within cooperative or collaborative environments. For instance, if a bank and an insurance company want to jointly offer real estate loans with life insurance, they have to align their customer data files, agree on loan acceptance conditions and rules, and coordinate respective local services (e.g., loan request processing, loan approval/refusal, insurance condition setting, etc.) into a common business process to offer a composite service to customers. In other words, they have to be integrated and to interoperate. Indeed, integration and interoperability come into play any time that two or more systems need to work together or need to share common information. Industrial automation has been an intensive application domain of automation for decades with programmable logic controllers (PLCs), computer numerically controlled (CNC) machines, flexible manufacturing systems (FMSs), automated guided vehicles (AGVs), industrial robotics, or computer-integrated manufacturing (CIM). The history of industrial automation in process industries has been reviewed by Williams [1] and in Ch. 2, while examples of applications to process and discrete manufacturing can be found in chapters throughout this handbook. New trends in automated manufacturing are called smart manufacturing ([2] and Ch. 45) or Industry 4.0 (I4.0) [3], presented as the fourth revolution in manufacturing (after the first revolution in the late eighteenth century based on steam power that enabled mechanization, the second revolution based on electricity power that made mass production possible in the first half of the twentieth century, and the recent third revolution based on computerization that led to industrial automation). The term Industry 4.0 was introduced in 2011 by the German government as a national program to boost research and development of the manufacturing industry [4]. I4.0 is based on intensive digitalization, i.e., integration of digital technologies with production facilities and technologies, altogether forming a so-called cyber-physical system (CPS) that can sense its environment and quickly respond to changes [5]. While the idea of smart manufacturing is to interconnect all manufacturing resources as smart entities (e.g., smart products, smart devices, smart machines, etc.) to decentralize decision-making and increase agility, the aim of I4.0 is to utilize recent advances in information technologies and the Internet (cloud computing, Internet of Things or IoT, IoS, CPS, big data, etc.) to interconnect people, machines, tools, devices, sensors, and their digital
twins, into decentralized intelligent systems that can sense and adapt to the environment. In addition to interoperability, both require big data analytics and data mining technologies to analyze the tremendous amount of data exchanged. Putting in place automation solutions, be it in private, public, or government sectors, is the subject of automation projects. These projects are themselves subject to a number of constraints, best practices, rules, and applicable standards in terms of risk management, quality management, environment regulations, or sustainable development, as well as in terms of design and analysis methods, applicable technologies, or human-related factors such as safety and man-machine interaction. The role of international standards in these projects must be stressed because, in many cases, applying and respecting them facilitates easier communication, smoother collaboration, and better interoperability between systems. The chapter has the mandate to cover both automation interoperability and automation standards. It therefore consists of two parts. First, interoperability in the context of integrated automated systems is defined, relevant architectural frameworks are indicated, and the underlying technologies are discussed from three perspectives (technical, application/semantic, and organizational dimensions). Next, important international standards to be considered for carrying out automation projects in small, medium, or large organizations are listed. Finally, an overview discussion concludes the chapter.
33.2 Interoperability in Automation
This section concentrates on interoperability aspects of automated systems in particular but also addresses their integration aspects to explain the global context of the problem.
33.2.1 The Need for Integration and Interoperability

Broadly speaking, integration and interoperability aim at facilitating data/information exchange and collaboration between systems or business entities. A business entity is any part of an organization (such as a work team, an organization unit, a manufacturing plant, an information system such as an ERP system, a set of business processes, a node in a supply chain, etc.) or the organization itself. It is made of people, processes, and/or technologies.

Caution: The two terms, integration and interoperability, are intentionally used in this chapter, but they have their own meaning and are not interchangeable. Integration is about making a consistent whole from heterogeneous components, while interoperability is about using functionality of another
entity. While integration is the big picture and can apply to people, organizations, processes, or data, interoperability is specific to systems. While entities can be more or less integrated (fully, tightly, loosely, or not at all), systems are either interoperable or not (although some authors have tried to define degrees of interoperability and interoperability maturity models). Indeed, people are not interoperable but they can communicate, collaborate, cooperate, or coordinate their actions. Data per se cannot interoperate because they are facts without behavior, but data exchanges can be made interoperable. The need for systems integration and interoperability results from the increasing need to interconnect and seamlessly operate technical, social, and information systems of different nature and technology, of geographically dispersed enterprises, or of different enterprises working together. It also comes from the need to integrate business entities, i.e., remove organizational boundaries between these entities [6]. This mostly concerns IT systems and software applications but also the internal business processes and services of a given enterprise (i.e., intraorganizational integration) as well as cross-organizational business processes spanning partner companies or enterprise networks (i.e., interorganizational integration). Part of the need also comes from the emergence of new organizational structures such as collaborative networked organizations (CNOs), the objective of which is to bring together largely autonomous, geographically distributed, and heterogeneous business entities in terms of their operating environment, culture, and goals, that have to collaborate to better achieve common or compatible goals, thus jointly generating value, and whose interactions are supported by ICT [7, 8]. The best examples of CNOs are extended enterprises, virtual enterprises, or supply chains. Finally, since 2000, the need for interoperability has been intensified by the ever-growing development of collaborative e-work, e-business, and e-services and their requirements for efficient communication, coordination, cooperation, and collaboration mechanisms ([9]; Ch. 18). The greatest challenge of integrating enterprise systems or business entities and making them interoperable comes from the large number and high heterogeneity of the components to be interconnected. When two systems have to talk to each other or interoperate, they have to agree on common message formats, transfer protocols, shared (data) semantics, security and identification aspects, and possibly on legal aspects. In terms of solution, the major pitfall to be avoided is building monolithic solutions that may result from dependence on commercial offerings from the same or preferred suppliers. This would trap users into staying with a given solution because the costs of conversion become very significant barriers, thus denying users the benefits of new technology as it becomes available. Hence, a
recommended practice is to base solutions on open standards and open solutions to avoid such a dependency and to control the pain and costs of maintainability.
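As a small illustration of what "agreeing on a common message format and shared semantics" can mean in practice, the sketch below shows two systems exchanging a JSON document and validating it against a jointly agreed structure before use. The field names, types, and units are assumptions invented for the example; real automation projects would typically rely on the open standards discussed later in this chapter rather than an ad hoc contract.

```python
import json

# Illustrative sketch only: a shared "contract" for an order message, agreed by both
# systems, against which every incoming message is checked. All fields are hypothetical.
AGREED_SCHEMA = {
    "order_id": str,
    "part_number": str,
    "quantity": int,
    "unit": str,            # shared semantics: quantities are expressed in "pieces"
    "requested_date": str,  # shared convention: ISO 8601 date, e.g. "2023-05-01"
}

def validate(message: dict) -> list:
    """Return a list of problems; an empty list means the message respects the contract."""
    problems = []
    for field, expected_type in AGREED_SCHEMA.items():
        if field not in message:
            problems.append(f"missing field: {field}")
        elif not isinstance(message[field], expected_type):
            problems.append(f"wrong type for {field}: expected {expected_type.__name__}")
    return problems

# Sender side: serialize the message to the agreed open format (JSON).
outgoing = {"order_id": "O-2023-0042", "part_number": "PN-7731", "quantity": 250,
            "unit": "pieces", "requested_date": "2023-05-01"}
wire_payload = json.dumps(outgoing)

# Receiver side: parse and validate before feeding the data to internal applications.
incoming = json.loads(wire_payload)
print(validate(incoming) or "message accepted")
```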
33.2.2 Systems Interoperability

Interoperability is defined in the Webster dictionary as "the ability of a system to use parts of another system." In IT terms, it has been defined as the ability of systems to exchange information and use the information that has been exchanged [10]. From an enterprise engineering point of view, interoperability of enterprise applications has been defined by the EU Athena research project as the ability of a system to work with other systems without special effort on the part of their users [11]. More generally and in systemic terms, interoperability can be defined as the ability of two or more systems to exchange data, share common information and knowledge, and/or use functionality of one another to get synergistic gains [6, 12]. For instance, in the context of Industry 4.0, the digital twin of a product, i.e., a software entity representing the physical product in the cyber world, must remain connected in real time to its physical counterpart in the real world to be a faithful digital image of the product while, at the same time, being able to exchange data with machine tools for its assembly, call functions of the production management system to update its status or get information, and possibly interact with other products or pieces of equipment. This assumes on the part of the ICT infrastructure the ability for systems to send and receive messages (requests or responses) or data streams, to call functions or services of other systems, and to provide execution services to orchestrate operation execution (e.g., workflow engines or enterprise operating systems [13]). These exchanges can be performed in synchronous or asynchronous modes depending on the communication or the collaboration needs. The Internet and web service technologies, and especially their associated open standards (TCP/IP, HTTP, and XML as explained in Sect. 33.3.4), have boosted technical development and widespread use of systems interoperability in industry as well as in many other sectors of our modern life. As already mentioned, interoperability is a specific characteristic of technical systems but it has been extended to the business process and even enterprise levels.
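The following sketch illustrates, in simplified form, the two interaction modes mentioned above: a synchronous request/response call and asynchronous status messages from a digital twin to a consumer. A simple in-process queue stands in for messaging middleware; all names and message fields are hypothetical, and this is not an actual Industry 4.0 interface.

```python
import queue
import threading
import time

status_bus = queue.Queue()   # asynchronous channel (fire-and-forget status messages)

def production_management_lookup(product_id: str) -> dict:
    """Synchronous request/response: the caller blocks until the answer is returned."""
    return {"product_id": product_id, "next_operation": "assembly-station-3"}

def digital_twin(product_id: str):
    # Synchronous call: the twin asks the (hypothetical) production management system
    # for its routing and waits for the response.
    routing = production_management_lookup(product_id)
    # Asynchronous publication: status messages are queued without waiting for the consumer.
    for progress in (10, 50, 100):
        status_bus.put({"product_id": product_id, "progress_pct": progress,
                        "operation": routing["next_operation"]})
        time.sleep(0.01)
    status_bus.put(None)     # sentinel: no more messages

def production_monitor():
    # Consumer side: process status updates as they arrive, independently of the producer.
    while (msg := status_bus.get()) is not None:
        print(f"received update: {msg}")

t = threading.Thread(target=digital_twin, args=("PROD-001",))
t.start()
production_monitor()
t.join()
```

In a real deployment the queue would be replaced by middleware (e.g., a message broker or enterprise service bus) and the synchronous call by a web-service request, which is exactly where the open standards of Sect. 33.3.4 come into play.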
33.2.3 Enterprise Interoperability and Enterprise Integration

Enterprise interoperability can be defined as the ability of an enterprise to use information and/or services (i.e., functionality) provided by one or more other enterprises [12, 14].
In other words, enterprise interoperability is the ability of a business entity of an enterprise, or more generally of an organization, to work with business entities of other enterprises or organizations without special effort [11]. The capability to exchange information, interact, or use third-party services both internally and with external stakeholders (partners, suppliers, subcontractors, and customers) is a key feature in the economic and public sectors, especially to facilitate e-business, e-government, e-work, e-services, or any type of collaborative networked activities. Enterprise interoperability emerged as an essential feature in enterprise integration, which itself has been a generalization of computer-integrated manufacturing (CIM), i.e., integration in manufacturing enterprises, toward the integration of all functions of the enterprise [15]. Enterprise integration (EI) occurs when there is a need to remove organizational barriers and/or improve interactions among people, systems, applications, departments, and companies (especially in terms of rationalizing and/or fluidifying material, information/decision, and workflows) [6, 15]. The goal is to create synergy, i.e., the integrated system must offer more capabilities than the sum of its components. The complexity of EI stems from the fact that enterprises typically comprise hundreds, if not thousands, of applications (be they packaged solutions, custom-built, or legacy systems), some of them being remotely located and supporting an even larger number of business processes. Such environments get even more complicated as companies are often involved in merger, fusion, acquisition, divestiture, and partnership opportunities. Integration of enterprise activities has long been, and often still is, considered a pure IT problem. While it is true that at the end of the day the prime challenge of EI is to provide the right information to the right place at the right time, business integration needs must drive information system integration and not vice versa. Thus, EI has a strong organizational dimension in addition to its technological dimensions. From a pure IT standpoint, EI mostly means connecting computer systems and IT applications to support business process operations. It involves technologies such as enterprise portals, service-oriented approaches, shared business functions with remote method invocation, enterprise service buses, distributed business process management (or workflow engines), and business-to-business (B2B) integration [16, 17]. From an organizational standpoint, EI is concerned with facilitating information, control, and material flows across organization boundaries by connecting all the necessary functions and heterogeneous functional entities (e.g., information systems, devices, applications, and people) in order to improve the 4Cs: communication (data and information exchanges at physical system level), coordination (synchronized and orderly interoperation at application level),
and cooperation or collaboration (trust-based transactions and timely orchestration of process steps at business level) (Fig. 33.1) [6, 15]. The goal is to enhance overall productivity, flexibility, and capacity for the management of change (i.e., agility). Li and Williams [18] provided a broader definition of EI stating that enterprise integration is the coordination of all elements including business, processes, people, and technology of the enterprise(s) working together in order to achieve the optimal fulfillment of the business mission of that enterprise(s) as defined by the management. Enterprise integration can apply vertically, horizontally, or end to end within an organization. Vertical integration refers to integrating a line of business from its top management down to its tactical planning and operational levels (i.e., along the control structure). Horizontal integration refers to integrating various domains (i.e., business areas) of the enterprise or with its partners and market environment (i.e., along the supplier/consumer chain structure). End-to-end integration refers to integrating an entire value chain such as a supply chain. Integration can range from no integration at all to loosely coupled, tightly coupled, up to full system integration. Full integration means that component systems are no longer distinguishable in the whole system. Tightly coupled integration means that components are still distinguishable in the whole but any modification to one of them may have direct impact on others. Loosely coupled integration means that component systems continue to exist on their own and can still work as independent entities of the integrated system. In this context, enterprise interoperability appears to be one of the many facets of enterprise integration. Indeed, an integrated family of systems must necessarily comprise interoperable components in one form or another while interoperable systems do not necessarily need to be integrated. In fact, enterprise interoperability equates to loosely coupled enterprise integration. It provides two or more business entities (of the same organization or from different organizations and irrespective of their location) with the ability to exchange or share information (wherever it is and at any time) and to use functions or services of one another in a distributed and heterogeneous environment. Interacting component systems are preserved as they are, can still work on their own, but can at the same time interoperate seamlessly. This is the reason why, although EI provides the broad picture and was studied first in the 1990s, emerging from all the efforts made on CIM in the 1980s, enterprise interoperability is receiving more attention nowadays because of its greater flexibility and less monolithic approach. Putting in place interoperable enterprise systems is essential to achieving enterprise integration with the level of flexibility and agility required by public or private organizations in order to cope with the need for frequent organizational changes.
A prerequisite for developing integrated or interoperable enterprise systems is enterprise modeling (EM) [6]. EM is the art of developing models (e.g., functional, process, information, resource, or organizational models) describing the current or desired structure, behavior, and organization of business entities that need to work together. EM has been defined as "the art of externalizing knowledge about the enterprise that needs to be shared or adds value to the enterprise" [19]. The models should be interpretable by humans and computers, be expressed in terms of enterprise modeling languages (EMLs), and be stored in enterprise model repositories for sharing, reuse, and cleaning (i.e., validation and verification) purposes. Examples of well-known EMLs are the IDEF suite of languages (IDEF0, IDEF1x, and IDEF3), the GRAI, CIMOSA, and ARIS languages, the set of modeling constructs of ISO 19440, and the business process modeling notation (BPMN) for business processes, as surveyed by Vernadat [6, 19]. As indicated by Fig. 33.1, building interoperable enterprise systems and reaching the successive levels of integration require building on a number of standards, languages, and technologies. Some of the key technologies are summarized in Sect. 33.3.4, while the initial standardization efforts have been reviewed in [20] and are summarized in Sect. 33.4.
33.3 Integration and Interoperability Frameworks
Enterprise integration and interoperability frameworks are made of a set of principles, standards, guidelines, methods, and sometimes models that practitioners should apply to put in place the right components in the proper order to achieve integration and interoperability in their organization. An example of an EI framework is given by Fig. 33.1. It comes from the computer-integrated manufacturing open system architecture (CIMOSA), which was specifically designed around 1990 for CIM systems [15]. The framework defines three integration levels, namely systems integration, application/service integration, and business integration, and positions different technologies or languages to be used at each level.

Fig. 33.1 Enterprise integration levels and related technologies (essential technical terms and acronyms appearing in the figure are explained in Sect. 33.3.4 of this chapter and more information on each of them can be found at http://whatis.techtarget.com) (FTP file transfer protocol, ISO-OSI International Standards Organization Open System Interconnection, J2EE Java 2 Enterprise Edition, CSCW Computer Supported Collaborative Work)

Other similar architectures or frameworks for enterprise integration exist; for instance, PERA (Purdue enterprise reference architecture), GIM (GRAI [graphes de résultats et activités interreliés] integrated methodology), or GERAM (generalized enterprise reference architecture and methodology). Details on those can be found in [6, 21]. Similarly, a number of frameworks have been developed for interoperability; for instance, LISI, FEI, and EIF.

The framework of levels of information systems interoperability (LISI) has been proposed by an Architecture Working Group of the US Department of Defense (on Command, Control, Communications, Computers, Intelligence, Surveillance and Reconnaissance – C4ISR). It defines five levels of interoperability [22]:
• Level 0 – isolated systems (manual extraction and integration of data)
• Level 1 – connected interoperability in a peer-to-peer environment
• Level 2 – functional interoperability in a distributed environment
• Level 3 – domain-based interoperability in an integrated environment
• Level 4 – enterprise-based interoperability in a universal environment

The Framework for Enterprise Interoperability (FEI) [23], developed by the EU Network of Excellence INTEROP-NOE, is useful to identify different barriers to interoperability and potential approaches to be adopted for solving the identified barriers. It is organized as a cubic structure along three dimensions (Fig. 33.2): interoperability concerns, which represent the different levels of an enterprise where interoperation can take place (namely, data, service, process, and business levels); interoperability barriers, which refer to the incompatibilities between two enterprise systems (classified as conceptual, technical, and
organizational barriers); and interoperability approaches regarding the solution to be implemented for removing identified barriers (namely, federated, unified, and integrated approaches). FEI has been adopted as an international standard under the number ISO 11354-1. The European Interoperability Framework (EIF) is a more generic framework jointly developed by the European Commission (EC) and the member states of the European Union (EU) to address business and government needs for information exchange [24]. This framework defines three essential levels (or dimensions) of interoperability, namely technical, semantic, and organizational, from the bottom up, as depicted by Fig. 33.3. In the new version (2017), a fourth layer has been added to deal with legal aspects, i.e., alignment of legal frameworks, policies, or strategies that might hinder interoperability. Because EIF is gaining wide acceptance, these dimensions are further explained in the next sections.
Fig. 33.2 Framework for Enterprise Interoperability (FEI): a cube relating interoperability concerns (business, process, service, data), interoperability barriers (conceptual, organizational, technological), and interoperability approaches (integrated, unified, federated). (After [23])

Fig. 33.3 European Interoperability Framework (EIF): organizational interoperability (organisation and process alignment, e.g., BPM and process coordination), semantic interoperability (semantic alignment via metadata registries and ontologies), and technical interoperability (syntax, interaction, and transport, e.g., HTTP or SMTP over TCP/IP, SOAP) between real-world systems and their information systems. (After [24])

33.3.1 Technical Interoperability

This dimension covers technical issues or plumbing aspects of interoperability. It deals with connectivity and is to date the most developed dimension. It ensures that systems get physically connected and that data and messages can be reliably transferred across systems. It is about linking computer systems and applications or interconnecting information systems by means of so-called middleware components. It includes key aspects such as open interfaces, interconnection services, data transport, data presentation and exchange, accessibility, and security services. At the very bottom level are network protocols (e.g., TCP/IP [Transmission Control Protocol/Internet Protocol], SOAP [Simple Object Access Protocol], etc.) and on top of them transport protocols (e.g., HTTP [Hypertext Transfer Protocol], SMTP [Simple Mail Transfer Protocol], etc.) to exchange data in the form of XML messages.
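As a concrete illustration of this plumbing layer, the short sketch below builds an XML message and transfers it over HTTP using only the Python standard library; the endpoint URL and message fields are hypothetical placeholders rather than part of any standard discussed in this chapter.

```python
import urllib.request
import xml.etree.ElementTree as ET

# Build a simple XML business message (hypothetical content).
order = ET.Element("Order", attrib={"id": "PO-1234"})
ET.SubElement(order, "Item", attrib={"code": "VALVE-42", "quantity": "10"})
payload = ET.tostring(order, encoding="utf-8")  # bytes, including the XML declaration

# Transfer the message over HTTP to a (hypothetical) partner endpoint.
request = urllib.request.Request(
    url="https://partner.example.org/orders",   # placeholder endpoint
    data=payload,
    headers={"Content-Type": "application/xml"},
    method="POST",
)
with urllib.request.urlopen(request, timeout=10) as response:
    print(response.status, response.read().decode("utf-8"))
```

At this level only the reliable transfer of the bytes matters; what the receiving application makes of the message is the concern of the semantic and organizational dimensions discussed next.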
33.3.2 Semantic Interoperability

This dimension of interoperability concerns semantic alignment, i.e., the ability to achieve meaningful exchange and sharing of information, and not only data, among independently developed systems. It is concerned with ensuring that the precise meaning of exchanged information is understandable by any other application that was not initially developed for this purpose. Semantic interoperability, using encoded domain or expert knowledge, enables systems to interpret, reformat, or adapt received information (e.g., interpret a social security code or adapt applicable value-added tax rates depending on the country), combine it with other information resources (for instance, apply machining compensation factors in some circumstances depending on material hardness, temperature, pressure, or the like), or process it in a meaningful manner for the recipient. It is therefore a prerequisite for the front-end (multilingual) delivery of services to users. Beyond system interoperability, coordination between systems should also be possible. This means that the different systems must be able to provide each other with context-aware services, i.e., services that can be used in different types of situations, that they can call or use to enable the emergence of higher-level composite services. Technologies involved at this level include metadata registries, thesauri, or ontologies to map semantic definitions of the same concepts used by the different systems involved. In IT, an ontology is defined as “a shared formal specification of some domain knowledge” [25]. For instance, a circle can be formally defined by its center and radius, themselves formally defined as a point in space and a strictly positive real number, respectively. An ontology is usually expressed by means of a set of axioms and predicates in first-order logic or formal languages such as knowledge query and manipulation language (KQML), common logic (CL), now an international standard (ISO/IEC 24707:2007), or in the form of semantic networks or taxonomies, for instance, using the Resource Description Framework (RDF) and OWL, the World Wide Web Consortium (W3C) Web Ontology Language [26]. Another technology is semantic annotation, or tagging [27]. It consists in adding textual or formal annotations or comments to objects (e.g., information items, data schemata, process models, and even web services) to enrich their description. These annotations can even refer to an ontological definition of the concept at hand in a domain ontology. The added information can then be interpreted depending
on context and purpose. Semantic annotation uses ontology information to enrich an object’s description so that a computer can interpret the meaning of the object and its relations to other concepts. It can, for instance, be used to bridge the gap between models by providing additional information that helps the description, discovery, and composition of data items or web services [28].
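The following sketch (plain Python; the field names, concept URIs, VAT-free sample values, and schemas are invented for illustration and do not come from any particular ontology) shows the basic idea behind semantic alignment: each system’s local field names are annotated with shared concept identifiers so that a record produced by one system can be re-expressed in the vocabulary of another.

```python
# Minimal illustration of semantic alignment between two systems.
# Field names, concept URIs, and sample values are hypothetical.

SHARED_VOCABULARY = {
    # local field name -> shared concept identifier
    "zip":         "urn:example:concept:PostalCode",
    "postal_code": "urn:example:concept:PostalCode",
    "ssn":         "urn:example:concept:SocialSecurityNumber",
    "natl_id":     "urn:example:concept:SocialSecurityNumber",
}

def annotate(record: dict) -> dict:
    """Tag each field of a record with the shared concept it denotes."""
    return {SHARED_VOCABULARY[k]: v for k, v in record.items() if k in SHARED_VOCABULARY}

def translate(record: dict, target_schema: dict) -> dict:
    """Re-express a record in another system's local field names."""
    by_concept = annotate(record)
    return {local: by_concept[concept]
            for local, concept in target_schema.items() if concept in by_concept}

# System A exports its local view of a customer ...
system_a_record = {"zip": "57000", "ssn": "1-85-03-57-463-217"}
# ... and system B declares which shared concepts its own fields stand for.
system_b_schema = {"postal_code": "urn:example:concept:PostalCode",
                   "natl_id":     "urn:example:concept:SocialSecurityNumber"}

print(translate(system_a_record, system_b_schema))
# {'postal_code': '57000', 'natl_id': '1-85-03-57-463-217'}
```

In practice the shared concepts would live in a metadata registry or an RDF/OWL ontology rather than in a hard-coded dictionary, but the mapping principle is the same.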
33.3.3 Organizational Interoperability
This dimension of interoperability is concerned with defining business goals, engineering business processes, and bringing cooperation or collaboration capabilities to organizations that wish to exchange information and may have different internal structures and processes. Moreover, organizational interoperability aims at addressing the requirements of the user community by making services available, easily identifiable, accessible, and user centric. In other words, it is the ability of organizations to provide services to each other as well as to users or the wider public. To achieve organizational interoperability, it is necessary to align business processes of cooperating business entities, define synchronization steps and messages, and define coordination and collaboration mechanisms for interorganizational processes. This requires enterprise modeling to describe the organization, business process management (BPM) for the modeling, analysis, and control of the business processes, workflow engines for the coordination of the execution of the process steps defined as business services, trust-based and collaborative tools, and enterprise portals to provide user-friendly access to business services and information pages made available to end users.
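To make the coordination aspect more tangible, the toy sketch below (Python; the organizations, process steps, and case data are hypothetical) mimics what a workflow engine does when it executes an aligned interorganizational process: each step is a business service owned by one partner, and a coordinator invokes them in the agreed order while tracking the state of the case.

```python
# Toy coordination of an interorganizational process.
# Organization names, step names, and the data passed around are hypothetical.

def supplier_confirm_order(case):
    case["order_status"] = "confirmed"
    return case

def carrier_book_transport(case):
    case["transport"] = f"booked for order {case['order_id']}"
    return case

def customer_notify(case):
    case["notification"] = "customer informed"
    return case

# The aligned, agreed-upon process: each step names the responsible organization
# and the business service it exposes to its partners.
PROCESS = [
    ("Supplier", supplier_confirm_order),
    ("Carrier",  carrier_book_transport),
    ("Customer", customer_notify),
]

def run_process(case):
    """Execute the process steps in order, as a workflow engine would."""
    for organization, service in PROCESS:
        case = service(case)
        print(f"{organization:<9} completed {service.__name__}: {case}")
    return case

run_process({"order_id": "PO-1234"})
```

A real deployment would replace the in-process function calls with the web services, workflow engines, and portals discussed in the next section, but the alignment and ordering of steps is the organizational agreement this dimension is about.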
33.3.4 Technologies for Interoperability

As illustrated by Figs. 33.1 and 33.3, a certain number of technologies and open standards need to be considered for achieving the three levels of enterprise integration and interoperability. Essential ones include (from the bottom up in both figures):
1. TCP/IP: The transmission control protocol over Internet Protocol remains the easiest-to-use transfer protocol for web services, mail messages, XML messages, file transfer, and multimedia data. It is based on the ISO-OSI standard for open system interchange (ISO 7498) but only uses four layers of the seven-layer structure defined by ISO-OSI. The only recent evolution is the use of IPv6, the sixth version of IP, which replaced IPv4 to allow faster data transfer and a much larger IP address space.
2. XML: The extensible markup language is a widely used, standardized tagged language proposed and maintained
by the World Wide Web Consortium (W3C). It has been proposed to be a universal format for structured content and data on the web but can indeed be used for any computer-based exchange. Its simple and open structure has revolutionized data, information, and request/reply exchanges among computer systems because XML messages can be handled by virtually any transport protocol due to their alphanumeric and platform-independent nature [29].
3. HTTP/HTTPS: The hypertext transfer protocol is the set of rules for transferring files (text, graphics, images, sound, video, and other multimedia files) on the World Wide Web. HTTP is an application protocol that runs on top of the TCP/IP suite of protocols. Because of its ubiquity, it has become an essential standard for communications. HTTP 1.1 is the current version (http://www.w3.org/Protocols/). HTTPS is the secured version with encryption of messages using public and private keys.
4. Web services and service-oriented architectures (SOAs): Since the year 2000, service-oriented architectures represent a new generation of IT system architectures taking advantage of message-oriented, loosely coupled, asynchronous systems as well as web services [30–32]. SOAs provide business analysts, integration architects, and IT developers with a broad and abstract view of applications and integration components to be dealt with as encapsulated and reusable high-level services. Web services can be defined as interoperable software objects that can be assembled over the Internet using standard protocols and exchange formats to perform functions or execute business processes [31]. They are accessible to the external world via their uniform resource locator (URL), are described by their interface, and can be implemented in any computer language. Web services use the following de facto standards for their implementation:
• WSDL: The web service description language is a contract language used to declare the web service interface and access methods in a universal language using specific description templates [33].
• SOAP: The simple object access protocol is a communication protocol and a message layout specification that defines a uniform way of passing XML-encoded data between two interacting software entities (for instance, web services). It has been designed to be a simple way to do remote procedure calls (RPCs) over the Internet using the GET and POST methods. SOAP became a W3C standard in the field of Internet computing in 2002 (SOAP 1.1) and the current version is SOAP 1.2 [34]. While SOAP 1.1 was based on XML 1.0, SOAP 1.2 is based on the XML information set (XML Infoset). The XML information set (infoset) provides a way to describe an XML document with XML schema definition (XSD). SOAP still lacks
security mechanisms for the transfer of sensitive data and messages but can deal with data encryption.
• REST: Representational state transfer is a constrained interface architecture often presented as an alternative to SOAP. REST promotes the use of HTTP and its basic operations for building large-scale distributed systems. In this architecture, the HTTP operations PUT, POST, GET, and DELETE form a standardized, constrained interface. These operations are then applied to resources, located by their uniform resource locator (URL). REST makes it possible to build large distributed applications using only these four operations [35].
• UDDI: The universal description, discovery, and integration specification is an XML-based registry proposal for worldwide businesses to list their services offered on the Internet as web services. The goal is to streamline online transactions by enabling companies to find one another on the web and make their services interoperable for e-commerce. UDDI is often compared to a telephone book’s white, yellow, and green pages. Within a company or within a network of companies, a private service registry can be established in a similar way and format to UDDI in order to manage all shared business and data services used across the business units.
5. Service registries: A service registry is a metadata repository that maintains a common description of all services registered for the functional domain for which it has been set up. Services are described by their name, their owner, their service level agreement and quality of service, their location, and their access method. It is used by service owners to expose their services and by service users to locate services. Commercial products based on UDDI specs are available.
6. Message and service buses – MOM/ESB: The key to building interoperable enterprise systems and service-oriented architectures that are reliable and scalable is to ensure loose coupling among services and applications. This can be achieved using message queuing techniques, and especially message-oriented middleware (MOM) products and their extensions known as enterprise service buses (ESB) [36]. Services and IT applications exchange messages in a neutral format (preferably XML) using simple transport protocols (e.g., XML/SOAP on TCP/IP, SMTP, or HTTP). A message-oriented middleware is a messaging system that provides the ability to connect applications in an asynchronous message exchange fashion using message queues. It provides the ability to create and manage message queues, to manage the routing of messages, and to set priorities of messages. Messages can be delivered according to three basic messaging models (a minimal sketch illustrating them follows this technology list):
• Point-to-point model, in which only one consumer may receive messages that are sent to a queue by one or more producers
• Publish and subscribe, in which multiple consumers may register, or subscribe, to receive messages (called topics) from the same message queue
• Request/reply, which allows reliable bidirectional communications between two peer systems (using input queues and output queues)
An enterprise service bus goes beyond the capabilities of a MOM. It is a standards-based integration platform that combines the messaging of a MOM, web services, data transformation, database access services, intelligent routing of messages, and even workflow execution to reliably connect and coordinate the interaction of significant numbers of diverse applications across an extended organization with transactional integrity. It is capable of being adopted for any general-purpose integration project and can scale beyond the limits of a hub-and-spoke enterprise application integration broker. Data transformation is the ability to apply XSLT (extensible stylesheet language transformations, or XML style sheets) to XML messages to reformat messages during transport depending on the type of receivers that will consume the messages (for instance, the ZIP code is interpreted as a separate piece of information for one application while it is part of a customer address for another one). Intelligent or rule-based routing is the ability to use message properties to route message delivery to different queues according to message content. Database services simplify access to database systems using SQL (structured query language) and Java database connectivity (JDBC) or open database connectivity (ODBC). Useful enterprise integration patterns to be used in messaging applications can be found in the book by Hohpe and Woolf [17], who have thoroughly analyzed and described the patterns most commonly used in practice.
7. Workflow engines: A workflow engine is a software application that is able to enable and monitor the execution of instances of a business process from its model specification (for instance, expressed in the BPEL language – see next bullet). It executes a process step by step and monitors the state of each process step. Workflow engines are widely available on the marketplace.
8. BPM: Business process management consists of reviewing, reengineering, and automating business processes of the organization. A business process is a partially ordered sequence of steps. Some of the process steps are services offered by IT systems; others are activities or tasks. Once automated, these processes take the form of workflows, i.e., state-transition machines. Two languages can be used to model business processes, one at the business
user level and one at the workflow system level. These are:
• BPMN: The business process modeling notation is a diagrammatic and semistructured notation that provides business analysts with the capability to represent internal business procedures in a graphical language [37]. It gives organizations the ability to depict and communicate these procedures in a standard manner. Furthermore, the graphical notation facilitates the description of how collaborations as well as business transactions are performed between organizations. This ensures that business participants will understand each other’s business and that they will be able, when necessary, to adjust organization process models to new internal and B2B business circumstances quickly. The current version is BPMN 2.0.2, which has become a standard (ISO/IEC 19510).
• BPEL: The business process execution language is an XML-based language designed to enable task sharing for a distributed computing environment – even across multiple organizations – using a combination of web services (BPEL was first named BPEL4WS). It has been developed as a de facto standard by the IT industry [38]. BPEL specifies the business process logic that defines the choreography of interactions between a number of web services as a workflow. The BPEL standard defines the structure, tags, and attributes of an XML document that corresponds to a valid BPEL specification. Conversion mechanisms exist to translate BPMN models into BPEL specifications so that the corresponding business processes can be executed by workflow engines.
9. Enterprise portals: Enterprise portals provide a single point of access to enterprise information and services organized as web pages, easily made accessible via a web browser. Services are implemented as web services (also called portlets) and information contents are stored in web content managers. Enterprise portals provide a unified web-based framework for integrating information, people, and processes across organizational boundaries. If the portal is only accessible by the employees of an enterprise, it is called an intranet. If in addition it can be accessed by authorized external people, it is called an extranet. If it can be freely accessed by anyone, it is a web portal or public Internet site.
10. EDM/WCM: Enterprise content management (ECM) consists in managing unstructured information in electronic form (letters, contracts, invoices, e-mails, miscellaneous electronic documents, photos, films, etc.), as opposed to structured information stored in databases. It includes all of the management and formatting of content published on large-scale
company websites (intranets, extranets, and Internet websites). Web content management (WCM) specifically deals with content published on web pages. A web content management system (WCMS) provides website authoring, collaboration, and administration tools that help users with little knowledge of web programming or markup languages to create and manage website content. Both EDM and WCM tools are essential systems to store and manage the contents presented by enterprise portals.
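The three messaging models listed for message-oriented middleware (item 6 above) can be mimicked with in-process queues. The sketch below (plain Python standard library; queue names, topics, and messages are hypothetical) contrasts point-to-point delivery, where each message is consumed exactly once, with publish/subscribe, where every subscriber receives its own copy; a request/reply interaction would simply pair two such queues.

```python
import queue

# --- Point-to-point: one queue; producers put, a single consumer takes each message ---
orders = queue.Queue()
orders.put("<order id='1'/>")               # producer A
orders.put("<order id='2'/>")               # producer B
print("consumer received:", orders.get())   # each message is delivered to only one consumer

# --- Publish/subscribe: every subscriber of a topic gets its own copy ---
class Broker:
    def __init__(self):
        self.topics = {}                     # topic name -> list of subscriber queues

    def subscribe(self, topic):
        q = queue.Queue()
        self.topics.setdefault(topic, []).append(q)
        return q

    def publish(self, topic, message):
        for q in self.topics.get(topic, []):
            q.put(message)                   # each subscriber receives the message

broker = Broker()
billing = broker.subscribe("new-orders")
shipping = broker.subscribe("new-orders")
broker.publish("new-orders", "<order id='3'/>")
print("billing saw:", billing.get())
print("shipping saw:", shipping.get())
```

A real MOM or ESB adds persistence, transactional delivery, routing rules, and data transformation on top of this basic queuing behavior.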
33.4 Standards for Automation
Standardization is the process of elaborating official documents, i.e., sets of guidelines and standards, on the basis of a large consensus at national or international level on a specific technical domain, a technology, a methodology or set of procedures, or a body of knowledge. Standards are useful to public or private organizations, and especially to industry, because they provide commonly agreed terminology, rules, and normative information or specifications about procedures, methods, or technical solutions for their design, management, improvement, operations, maintenance, or decommissioning, or those of the products or services they produce or offer. The aim of this section is to list major standards applicable to the design, implementation, and management of automated systems, be they interoperable or not. Standardization organizations producing relevant standards for automation domains include ISO, IEC, ANSI, CEN, and NIST. ISO (International Organization for Standardization), based in Geneva, Switzerland, was founded in 1946 and has 165 member countries. Standardization work at ISO is organized in technical committees (TCs), each TC being made of subcommittees (SCs) comprising working groups (WGs). TC 184 on automation systems and integration is the main ISO TC dealing with industrial automation and covering physical device control (SC 1), industrial and service robotics (SC 2), industrial data (SC 4), and interoperability and integration technologies (SC 5). IEC (International Electrotechnical Commission), founded in 1906 and also based in Geneva, Switzerland, is the world’s leading organization for the preparation and publication of international standards for all so-called “electrotechnologies,” i.e., electrical, electronic, and related technologies. ISO and IEC formed the joint technical committee JTC 1 dealing with the development of worldwide information and communication technology (ICT) standards for business and consumer applications. ANSI (American National Standards Institute) is a private, not-for-profit organization founded in 1918 and based in Washington, DC, USA. Its mission is to enhance both the
global competitiveness of US business and the US quality of life by promoting and facilitating voluntary consensus standards and conformity assessment systems, and safeguarding their integrity. CEN (European Committee for Standardization) is based in Brussels, Belgium. It is an association that brings together the National Standardization Bodies of 34 European countries. It provides a platform for the development of European standards. CEN/TC 310 on advanced automation technologies and their applications has been working since 1990 to ensure the availability of the standards the European industry needs for integrating and operating the various physical, electronic, software, and human resources required for automated manufacturing. It works closely with ISO/TC 184 and other committees. CEN/ISO documents are referenced as EN ISO documents for European norms (EN). NIST (National Institute of Standards and Technology) is a US standardization institute founded in 1901 and based in Gaithersburg, MD. It is now a branch of the US Department of Commerce and is organized in laboratories. The central mission of NIST is to enable innovation and industrial competitiveness through standards. Together with ISO and ANSI, NIST has significantly contributed to automation standards developments, e.g., automation frameworks, cyber-physical system frameworks, risk management framework (RMF) automation, security automation, etc. The following list of standards is by no means an exhaustive presentation. It contains the most popular standards relevant for automation projects or automated systems. Standards are presented in tabular form and classified by themes, excluding industrial communications and computer network technologies, which are addressed in Ch. 23. Copies of the standards can be obtained from the respective website of the responsible organization (usually with a fee) using the standard identification code indicated in the following tables. For each standard, a short description extracted from publicly available information on its website is proposed.
33.4.1 Standards for Automation Project or System Management

Table 33.1 identifies high-level standards relevant for project managers of automation projects or automated systems. Indeed, depending on projects, managers must pay attention to quality management of products, services, or processes (ISO 9000), assess and mitigate risks (ISO 31000), care about environmental responsibilities (ISO 14000) as well as social responsibilities including sustainable development (ISO 26000), guarantee information asset security (ISO/IEC 27000), measure and control performance of manufacturing operations (ISO 22400), ensure business continuity of operations (ISO 22301), and carry out audits or obtain certification for their products, services, or systems (ISO 19011).
Table 33.1 Standards for automation project or system management Standard identification ISO 9000 Family
Title Quality management
Short description Proposes a number of quality management principles including a strong customer focus, the motivation and implication of top management, the process approach, and continual improvement. ISO 9001 can be used as the basis for audit or certification of enterprise processes or services. The family comprises: ISO 9000:2015 Quality management systems – Fundamentals and vocabulary ISO 9001:2015 Quality management systems – Requirements ISO 9004:2018 Quality management – Quality of an organization – Guidance to achieve sustained success ISO 14000 Family Environmental management Concerns companies or organizations of any type that require practical tools to manage their environmental responsibilities. It comprises: ISO 14001:2015 Environmental management systems – Requirements with guidance for use ISO 14004:2016 Environmental management systems – General guidelines on implementation ISO 14005:2019 Environmental management systems – Guidelines for a flexible approach to phased implementation ISO 14040: 2006 Environmental management – Life cycle assessment – Principles and framework ISO 19011:2018 Guidelines for auditing Provides guidance on auditing management systems, including management systems the principles of auditing, managing an audit program, and conducting management system audits, as well as guidance on the evaluation of competence of individuals involved in the audit process ISO 22301:2012 Societal security – Business Specifies requirements to plan, establish, implement, operate, continuity management monitor, review, maintain, and continually improve a systems – Requirements documented management system to protect against, reduce the likelihood of occurrence, prepare for, respond to, and recover from disruptive incidents when they arise ISO 22400 Family Automation systems Specifies an industry-neutral framework for defining, integration – Key performance composing, exchanging, and using key performance indicators indicators (KPIs) for (KPIs) for manufacturing operations management, as defined manufacturing operations in IEC 62264-1 for batch, continuous, and discrete industries. management It comprises: ISO 24001:2014 Part 1: Overview, concepts, and terminology ISO 24002:2014 Part 2: Definitions and descriptions (contains a selected number of KPIs) ISO 24010:2018 Part 10: Operational sequence description of data acquisition ISO 26000:2010 Guidance on social Is intended to assist organizations in contributing to responsibility sustainable development; also intended to encourage them to go beyond legal compliance, recognizing that compliance with law is a fundamental duty of any organization and an essential part of their social responsibility ISO/IEC 27000 Family Information Technology – Enables organizations of any kind to manage the security of Security management assets such as financial information, intellectual property, employee details, or information entrusted by third parties in their information systems ISO 31000:2018 Risk management – Guidelines Provides guidelines and a common approach to managing any kind of risks faced by organizations. The application of these guidelines can be customized to any organization and its context
Responsible entity ISO/TC 176/SC2
33 ISO/TC 207
ISO/TC 176/SC3
ISO/TC 292
ISO/TC 184/SC5
ISO Technical Management Board
ISO/IEC JTC 1/SC27
ISO/TC 262
33.4.2 Standards for Automated Systems Modeling, Integration, and Interoperability

Table 33.2 summarizes the most important standards applicable to interoperability or integration projects of enterprise systems or automation applications. They concern project managers and business analysts when they have to plan and organize enterprise/automation system integration or interoperability projects (ISO 15704), assess interoperability maturity (ISO 11354-2), define system architecture (EN ISO 19439, ISO/IEC 10746), apply enterprise or system modeling techniques to describe business entities (ISO 14258, ISO 19440, ISO/IEC 15414) or business processes (ISO/IEC 19150), specify and analyze system design with Petri nets (ISO/IEC 15909), and put in place interoperability solutions using open systems application integration (ISO 15745) or manufacturing software capability profiling (ISO 16100).

33.4.3 Standards for E-Commerce and E-Business

Table 33.3 mentions only three standardization documents, including one in working draft (WD) status and one standard family (ISO 15000) on ebXML for e-commerce and e-business as far as automation is concerned.

33.4.4 Standards for Emerging Industry (Including I4.0, IoT, and CPS)

Table 33.4 indicates a few standards relevant to Industry 4.0/smart manufacturing. Because these are recent and still emerging concepts, only a limited number of standards or prestandards can be pointed out. The most important one at the moment is the publicly available specification (PAS) document about the reference architecture for Industry 4.0 known as RAMI 4.0 (IEC/PAS 63088). For this reason, it is listed first. Also included in this category are standards on RFID (ISO/IEC 18000 Series) as used in inventory management, Reference Architecture for Service-Oriented Architectures (SOA RA) (ISO/IEC 18384:2016), drones (ISO 21384 and ISO 21895) to be used to deliver shipments, Internet of Things (IoT) (ISO/IEC 21823 and ISO/IEC TR 30166:2020) for smart devices and their connection, and the digital factory framework (IEC 62832-1:2016).

33.4.5 Standards for Industrial Data

Table 33.5 suggests a list of major standards to be considered when handling or processing industrial data in automated systems. They concern data quality and the exchange of master data (ISO 8000), exchange of product data with the STEP format (ISO 10303), exchange of parts library data (ISO 13584), and specification and exchange of manufacturing process data with the PSL language (ISO 18629). It also concerns industrial manufacturing management data as defined in the various parts of the MANDATE standard (ISO 15531), for which it defines a series of conceptual models (resource model, time model, and flow management model).
33.4.6 Standards for Industrial Automation

The sum of standardization efforts in industrial automation is immense. It covers specifications for everything from network communication profiles, through the interoperability of devices and the tools physically interacting with the products, to equipment reliability, safety, and efficiency. Therefore, Table 33.6 only provides a list of essential standards for industrial automation as recommended by ANSI and ISA (the International Society of Automation). Among this set of documents, we specifically recommend reading the documents of the ISO/IEC 62264 family (listed first), formerly known as ANSI/ISA-95, which defines a generic hierarchical control framework, also called the automation pyramid, for the management and control of manufacturing and production systems. The pyramid defines five levels: level 0 at the bottom for production processes with sensors and signals, level 1 for sensing and manipulating using PLCs, level 2 for monitoring and supervising using supervisory control and data acquisition (SCADA) systems accessible by means of human-machine interfaces (HMIs), level 3 for manufacturing operations management using manufacturing execution systems (MESs), and level 4 at the top for business planning and management using enterprise resource planning (ERP) systems. Part 3 of the standard also needs consideration as it defines generic activity models of manufacturing operations management that enable integration between enterprise systems and control systems. Three other significant and widely used standards are: IEC 61131 about programmable logic controllers (PLCs), ISO 9506 about manufacturing message specification (MMS), and ANSI/MTC1.6-2020 about MTConnect, an ANSI/ISA-95-compliant interoperability solution for data exchanges between machine tools and other pieces of facility equipment.
Table 33.2 Standards for automated systems modeling, integration, and interoperability Standard identification Title ISO/IEC 10746 Family Information technology – (known as RM-ODP) Open distributed processing – Reference model
EN ISO 11354 Family
ISO 14258:1998
ISO/IEC 15414:2015
ISO 15704:2019
ISO 15745 Family
Short description Provides a coordinating framework for the standardization of open distributed processing (ODP). This supports distribution, interworking, portability, and platform and technology independence. It establishes an enterprise architecture framework for the specification of ODP systems based on five model views (namely enterprise, information, computational, engineering, and technology viewpoints). The family comprises: ISO/IEC 10746-1:1998 Part 1: Overview ISO/IEC 10746-2:2009 Part 2: Foundations ISO/IEC 10746-3:2009 Part 3: Architecture ISO/IEC 10746-4:1998 Part 4: Architectural semantics Advanced automation Specifies Framework for Enterprise Interoperability (FEI) that technologies and their establishes dimensions and viewpoints to address interoperability applications – barriers, their potential solutions, and the relationships between them, Requirements for and specifies maturity levels to represent the capability of an establishing manufacturing enterprise to interoperate with other enterprises. It comprises: enterprise process ISO 11354-1:2011 Part 1: Framework for Enterprise Interoperability interoperability ISO 11354-2: 2015 Part 2: Maturity model for assessing enterprise interoperability Industrial automation Defines general concepts and rules applicable for elaborating systems – Concepts and enterprise models rules for enterprise models Information technology – Provides (a) a language (the enterprise language) comprising Open distributed concepts, structures, and rules for developing, representing, and processing – Reference reasoning about a specification of an open distributed processing model – Enterprise system from the enterprise viewpoint (as defined in ISO/IEC language 10746-3); and (b) rules which establish correspondences between the enterprise language and the other viewpoint languages (defined in ISO/IEC 10746-3) to ensure the overall consistency of a specification Enterprise modeling and Specifies a reference base of concepts and principles for enterprise architecture – architectures that enable enterprise development, enterprise Requirements for integration, enterprise interoperability, human understanding, and enterprise-referencing computer processing. The document further specifies requirements architectures and for models and languages created for expressing such enterprise methodologies architectures. It specifies those terms, concepts, and principles considered necessary to address stakeholder concerns and to carry out enterprise creation programs as well as any incremental change projects required by the enterprise throughout the whole life of the enterprise. The document forms the basis by which enterprise architecture and modeling standards can be developed or aligned Industrial automation Defines an application integration framework, i.e., a set of elements systems and integration – and rules for describing integration models and application Open systems application interoperability profiles, together with their component profiles integration framework (process profiles, information exchange profiles, and resource profiles). It is applicable to industrial automation applications such as discrete manufacturing, process automation, electronics assembly, semiconductor fabrication, and wide-area material handling. It may also be applicable to other automation and control applications such as utility automation, agriculture, off-road vehicles, medical and laboratory automation, and public transport systems. 
It comprises: ISO 15745-1:2003 Part 1: Generic reference description ISO 15745-2:2003 Part 2: Reference description for ISO 11898-based control systems ISO 15745-3:2003 Part 3: Reference description for IEC 61158-based control system ISO 15745-4:2003 Part 4: Reference description for Ethernet-based control system
Responsible entity ISO/IEC JTC 1/SC 7
33 ISO/TC 184/SC5
ISO/TC 184/SC5
ISO/IEC JTC 1/SC 7
ISO/TC 184/SC5
Title Industrial automation systems and integration – Manufacturing software capability profiling for interoperability
Short description Specifies a framework, independent of a particular system architecture or implementation platform, for the interoperability of a set of software products used in the manufacturing domain and to facilitate its integration into a manufacturing application. The full framework addresses information exchange models, software object models, interfaces, services, protocols, capability profiles, and conformance test methods. It comprises: ISO 16100-1:2009 Part 1: Framework ISO 16100-2:2003 Part 2: Profiling methodology ISO 16100-3:2005 Part 3: Interface services, protocols, and capability templates ISO 16100-4:2006 Part 4: Conformance test methods, criteria, and reports ISO 16100-5:2009 Part 5: Methodology for profile matching using multiple capability class structures ISO/IEC 19150:2013 Information technology – Provides a notation that is readily understandable by all business (on BPMN) Object Management Group users, from the business analysts that create the initial drafts of the Business Process Model processes to the technical developers responsible for implementing and Notation the technology that will perform those processes, and finally to the business people who will manage and monitor those processes. Based on BPMN 2.0 EN ISO 19439:2006 Enterprise integration – Specifies a framework based on CIMOSA, PERA, and GERAM Framework for enterprise conforming to requirements of ISO 15704 made of four model views modeling (function, information, resource, and organization). It serves as the basis for further standards for the development of enterprise models that will be computer-enactable and enable business process model-based decision support leading to model-based operation, monitoring, and control EN ISO 19440:2020 Enterprise integration – Identifies and specifies a set of constructs necessary for modeling Constructs for enterprise enterprises aspects in conformance with ISO 19439. This document modeling focuses on, but is not restricted to, engineering and the integration of manufacturing and related services in the enterprise. The constructs enable the description of structure and functioning of an enterprise for use in configuring or implementing in different application domains ISO/IEC 15909 Family Systems and software Defines a Petri net modeling language or technique, called high-level engineering – High-level Petri nets, including its syntax and semantics. It provides a reference Petri nets definition that can be used both within and between organizations to ensure a common understanding of the technique and of the specifications written using the technique. The standard also defines an XML-based transfer format for Petri nets to facilitate the development and interoperability of Petri net computer support tools. It comprises: ISO/IEC 15909:2019 Part 1: Concepts, definitions, and graphical notation ISO/IEC 15909:2011 Part 2: Transfer format
Responsible entity ISO/TC 184/SC5
ISO/IEC JTC 1
ISO/TC 184/SC5
ISO/TC 184/SC5
ISO/IEC JTC 1
33.4.7 Standards for Industrial and Service Robotics

Many standards apply to industrial and service robots. Table 33.7 gives a limited but useful list for manipulating robots covering aspects such as established vocabulary, general characteristics, coordinate systems, effectors, mechanical interfaces, or safety requirements.

33.4.8 Standards on Usability and Human-Computer Interaction

Table 33.8 provides a nonexhaustive list of useful standards related to ease of use and interactions between humans and computerized systems in automated systems, with a focus on ergonomic requirements (including multimedia systems), different aspects of man-machine interaction, and usability.

Table 33.3 Standards for e-commerce and e-business
• ISO 10008:2013 – Quality management – Customer satisfaction – Guidelines for business-to-consumer electronic commerce transactions. Provides guidance for planning, designing, developing, implementing, maintaining, and improving an effective and efficient business-to-consumer electronic commerce transaction system within an organization. (Responsible entity: ISO/TC 176/SC3)
• ISO 15000 Family (ebXML) – Electronic business eXtensible Markup Language (ebXML). Commonly known as e-business XML, or ebXML, it is a family of XML-based standards sponsored by OASIS and UN/CEFACT to provide an open, XML-based infrastructure that enables the global use of electronic business information in an interoperable, secure, and consistent manner by all trading partners. It currently comprises: ISO/DIS 15000-1 Part 1: Messaging service core specification; ISO/DIS 15000-2 Part 2: Applicability Statement (AS) profile of ebXML messaging service; ISO 15000-5:2014 Part 5: Core Components Specification (CCS). (Responsible entity: ISO/TC 154)
• ISO/IEC 15944-1:2011 (Open-edi) – Information technology – Business operational view – Part 1: Operational aspects of Open-edi for implementation. Concerns open electronic data interchange (Open-edi) and addresses the fundamental requirements of the commercial and legal frameworks and their environments on business transactions of commercial data between the participating organizations. (Responsible entity: ISO/IEC JTC 1/SC 32)
• ISO/WD 32110 – Transaction assurance in e-commerce. Ongoing work on standardization in the field of transaction assurance in e-commerce-related upstream/downstream processes. (Responsible entity: ISO/TC 321)

33.5 Overview and Conclusion
To synthesize, Fig. 33.4, adapted from [39], puts together a number of concepts and principles addressed in this chapter to draw a comprehensive view of the big picture. In addition, it indicates some of the research challenges that are receiving a lot of attention from different academic communities in the various domains of enterprise integration, interoperability, and networking when they are applied to enterprise automation. As illustrated in Sect. 33.4, all these domains are the subject of extensive standardization efforts to consolidate advances and encourage harmonized developments. Due to the tremendous need for data/information/knowledge exchange or sharing, as well as the ever-increasing need for cooperative or collaborative work around the world and between organizations, nearly all activity sectors are facing integration and interoperability problems, ranging from manufacturing supply chains to banking activities, the insurance sector, medical services, and public or government organizations. Typical recent examples are the manufacturing world on one hand with smart manufacturing/Industry 4.0, and all kinds of e-activities on the other hand (e.g., e-business,
e-commerce, e-services, e-work, or e-government). In both cases, the state of the art for meeting these needs is to implement XML-based messaging communications and loosely coupled system interoperation thanks to XML/HTTP or web services over the Internet (TCP/IP). More specifically, electronic commerce is a rapidly growing application domain (for web-based shopping, B2B commerce, e-procurement, electronic payment, EDI, etc.). Dedicated technologies developed for this domain include ebXML (electronic business XML – now ISO 15000) for XML electronic data interchange, and RosettaNet (www.rosettanet.org) for standard electronic data exchange in supply chains. In addition, e-work, and especially teleworking and videoconferencing, is another application domain that has boomed as an effect of the COVID-19 crisis in 2020. Regarding Industry 4.0, the need is to decouple systems, and the primary technologies used internally are cyber-physical systems (CPSs), Internet-of-Things/Internet-of-Services, service-oriented architectures, and MOM/ESB, while cloud computing, enterprise portals, and collaborative platforms are required to support interorganization exchanges. In terms of trends that can be observed, there has been a shift from the pure business process paradigm that prevailed until the end of the 1990s to the service orientation paradigm since 2000. This puts more emphasis than before on service interoperability rather than on IT systems interoperability or enterprise application integration [40]. Among the future trends, one can mention the emergence of composite services, i.e., services made of services,
Table 33.4 Standards for Industry 4.0 Standard identification IEC/PAS 63088:2017 (known as RAMI 4.0)
Title Smart manufacturing – Reference architecture model industry 4.0 (RAMI 4.0)
Short description Describes a reference architecture model in the form of a cubic layer model, which shows technical objects (assets) in the form of layers, and allows them to be described, tracked over their entire lifetime (or “vita”), and assigned to technical and/or organizational hierarchies. It also describes the structure and function of Industry 4.0 components as essential parts of the virtual representation of assets ISO/IEC 18000 Series Information technology – Describes a series of diverse radio frequency identification (RFID) (on RFID) Radio frequency technologies, each using a unique frequency range. The series is made identification for item of several parts: management Part 1: Reference architecture and definition of parameters to be standardized Parts 2 to 7: Parameters for air interface communications for frequency ranges below 135 KHz, at 13.56 MHz, at 2.45 GHz, at 860 MHz to 960 MHz, and 433 MHz, respectively Part 6 (ISO/IEC 18000-6) is a large document that has been split into five parts as follows: Part 6: Parameters for air interface communications at 860 MHz to 960 MHz General Part 61: Parameters for air interface communications at 860 MHz to 960 MHz Type A Part 62: Parameters for air interface communications at 860 MHz to 960 MHz Type B Part 63: Parameters for air interface communications at 860 MHz to 960 MHz Type C Part 64: Parameters for air interface communications at 860 MHz to 960 MHz Type D ISO/IEC 18384:2016 Information technology – Describes a Reference Architecture for SOA Solutions which applies Reference Architecture for to functional design, performance, development, deployment, and Service-Oriented management of SOA Solutions. It includes a domain-independent Architecture (SOA RA) – framework, addressing functional requirements and nonfunctional Part 2: Reference requirements, as well as capabilities and best practices to support Architecture for SOA those requirements Solutions ISO 21384 Series (for Unmanned aircraft systems Refers to unmanned aircraft systems (UASs) that are widely used in drones) science and technology and specifies the requirements for safe commercial UAS operations. The series, still under development and approval, comprises: ISO/DIS 21384-1 Part 1: General specification ISO/CD 21384-2 Part 2: Product systems ISO 21384-3:2019 Part 3: Operational procedures ISO 21384-4:2020 Part 4: Vocabulary ISO 21895:2020 Categorization and Specifies requirements for the classification and grading of civil classification of civil unmanned aircraft system (UAS). It applies to the industrial unmanned aircraft systems conception, development, design, production, and delivery of civil UAS ISO/IEC 21823 Family Internet of Things (IoT) – Deals with interoperability as it applies to IoT systems, providing a Interoperability for IoT framework for interoperability for IoT systems that enables systems peer-to-peer interoperability between separate IoT systems. 
It comprises: ISO/IEC 21823-1:2019 Part 1: Framework ISO/IEC 21823-2: 2020 Part 2: Transport interoperability ISO/IEC TR Internet of Things (IoT) – Technical report that describes: general Industrial IoT (IioT) systems 30166:2020 Industrial IoT and landscapes which outline characteristics, technical aspects and functional as well as nonfunctional elements of the IioT structure, and a listing of standardizing organizations, consortia, and open-source communities with work on all aspects on IIoT IEC 62832-1:2016 Industrial process It is a technical specification (TS) that defines the general principles measurement, control, and of the digital factory framework (DF framework), which is a set of automation – Digital model elements (DF reference model) and rules for modeling factory framework – Part 1: production systems General principles
Responsible entity IEC/TC 65
ISO/IEC JTC 1/SC 31
ISO/IEC JTC 1/SC 38
ISO/TC 20/SC 16
ISO/TC 20/SC 16
ISO/IEC JTC 1/SC 41
ISO/IEC JTC 1/SC 41
IEC/TC 65
Table 33.5 Standards for industrial data Standard identification ISO 8000 Series
ISO 10303 Series (informally known as STEP)
ISO 13584 Family (known as PLIB)
ISO 15531 Family (known as MANDATE)
ISO 18629 Family (on PSL)
Title Data quality
Short description: Is a global set of standards for data quality and enterprise master data (including critical business information about products, services and materials, constituents, clients and counterparties, and for certain immutable transactional and operational records). It describes the features and defines the requirements for standard exchange of master data among business partners.
Responsible entity: ISO/TC 184/SC4

Title: Industrial automation systems and integration – Product data representation and exchange (ISO 10303, STEP)
Short description: Is the standard for the computer-interpretable representation and exchange of product manufacturing information, and especially product data for the entire product life cycle, using STEP (the standard for the exchange of product model data). STEP can be used to exchange geometric, dimensioning, tolerancing, or descriptive data between computer-aided design, computer-aided manufacturing, computer-aided engineering, product data management/enterprise data modeling, and other computer-aided systems. ISO 10303 is closely related to ISO 13584 (PLIB). ISO 10303 is made of many parts and many application protocols (APs). Essential ones are: ISO 10303-1:1994 Part 1: Overview and fundamental principles; ISO 10303-11:2004 Part 11: Description methods: The EXPRESS language reference manual; ISO 10303 Parts 2x: Implementation methods (STEP-file, STEP-XML, ...); ISO 10303 Parts 3x: Conformance testing methodology and framework; ISO 10303 Parts 4x and 5x: Integrated generic resources; ISO 10303 Parts 1xx: Integrated application resources. Key application protocols include: AP 201, Explicit draughting (simple 2D drawing geometry related to a product; no association, no assembly hierarchy); AP 202, Associative draughting (2D/3D drawing with association, but no product structure); AP 203, Configuration-controlled 3D designs of mechanical parts and assemblies; AP 204, Mechanical design using boundary representation; AP 214, Core data for automotive mechanical design processes; AP 242, Managed model-based 3D engineering. (A minimal file-reading sketch follows this table.)
Responsible entity: ISO/TC 184/SC4

Title: Industrial automation systems and integration – Parts library (ISO 13584, PLIB)
Short description: Is an international standard for the computer-interpretable representation and exchange of parts library data. The objective is to provide a neutral mechanism capable of transferring parts library data, independent of any application that is using a parts library data system. The nature of this description makes it suitable not only for the exchange of files containing parts, but also as a basis for implementing and sharing databases of parts library data.
Responsible entity: ISO/TC 184/SC4

Title: Industrial automation systems and integration – Industrial manufacturing management data (MANDATE)
Short description: MANDATE addresses the modeling of manufacturing management data such as: resources management data (resource model), time-related features (time model), or flow management data in manufacturing (flow management model). MANDATE, in association with STEP, PLIB, and other ISO/SC 4 (or non-SC 4) standards, may be used in any software application that addresses manufacturing management–related information such as resources management data, flow management data, etc. As such, the standard is intended to facilitate information exchanges between software applications such as ERP, manufacturing management software, maintenance management software, quotation software, etc. MANDATE has been written in EXPRESS (ISO 10303-11).
Responsible entity: ISO/TC 184/SC4

Title: Industrial automation systems and integration – Process specification language (ISO 18629, PSL)
Short description: Defines a process specification language (PSL) aimed at identifying, formally defining, and structuring the semantic concepts intrinsic to the capture and exchange of process information related to discrete manufacturing. PSL is defined as a set of logic terms, specified in an ontology, used to describe manufacturing, engineering, and business processes. This family is made of nine parts, among which the most important are: ISO 18629-1:2004 Part 1: Overview and basic principles; ISO 18629-11:2005 Part 11: PSL core; ISO 18629-12:2005 Part 12: PSL outer core.
Responsible entity: ISO/TC 184/SC4
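To give a concrete feel for how STEP data such as that described above is exchanged in practice, the following minimal Python sketch inspects the header of an ISO 10303-21 clear-text exchange file (a "STEP-file"). The file name and the regular-expression approach are illustrative assumptions only; real projects would normally rely on a dedicated STEP toolkit rather than ad hoc parsing.

```python
# Minimal sketch: inspect the header of an ISO 10303-21 (STEP-file) exchange file.
# Assumptions: "bracket.stp" is a hypothetical example file; real STEP files may use
# multi-line records, so production code should use a dedicated STEP library instead.
import re

def read_step_header(path: str) -> dict:
    """Return the declared schema name and a rough entity count of a Part 21 file."""
    text = open(path, encoding="utf-8", errors="replace").read()
    info = {}
    # FILE_SCHEMA(('...')) appears in the HEADER section of every Part 21 file.
    schema = re.search(r"FILE_SCHEMA\s*\(\s*\(\s*'([^']+)'", text)
    if schema:
        info["schema"] = schema.group(1)
    # DATA section entities look like "#12=CARTESIAN_POINT('',(0.,0.,0.));"
    data = text.split("DATA;", 1)[-1]
    info["entity_count"] = len(re.findall(r"#\d+\s*=", data))
    return info

if __name__ == "__main__":
    print(read_step_header("bracket.stp"))  # e.g. {'schema': 'AUTOMOTIVE_DESIGN', 'entity_count': 4211}
```

Such a header check is often the first step when deciding which application protocol (e.g., AP 203 or AP 242) a received file claims to conform to.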
Table 33.6 Essential standards for industrial automation

Standard identification: ISO/IEC 62264 Family (known as ANSI/ISA-95 or ISA-95)
Title: Enterprise control system integration
Short description: Is an international standard for developing an automated interface between enterprise and control systems, developed to be applied in all industries and in all sorts of processes, like batch, continuous, and repetitive processes. It comprises: IEC 62264-1:2013 Part 1: Models and terminology; IEC 62264-2:2015 Part 2: Objects and attributes for enterprise control system integration; IEC 62264-3:2016 Part 3: Activity models of manufacturing operations management; IEC 62264-4:2016 Part 4: Activity models of manufacturing operations management integration; IEC 62264-5:2016 Part 5: Business to manufacturing transactions
Responsible entity: ISO/TC 184/SC 5

Standard identification: ISO 9506 Family (known as MMS)
Title: Industrial automation systems – Manufacturing message specification
Short description: Deals with the manufacturing message specification (MMS) for transferring real-time process data and supervisory control information between networked devices or computer applications. It comprises: ISO 9506-1:2003 Part 1: Service definition; ISO 9506-2:2003 Part 2: Protocol specification
Responsible entity: ISO/TC 184/SC 5

Standard identification: ISO 11161:2007
Title: Safety of machinery – Integrated manufacturing systems – Basic requirements
Short description: Specifies the safety requirements for integrated manufacturing systems that incorporate two or more interconnected machines for specific applications, such as component manufacturing or assembly. It gives requirements and recommendations for the safe design, safeguarding, and information for the use of such systems
Responsible entity: ISO/TC 199

Standard identification: ISO 13281:1997 (known as MAPLE)
Title: Industrial automation systems – Manufacturing Automation Programming Environment (MAPLE) – Functional architecture
Short description: Specifies the functional architecture of MAPLE, a manufacturing automation programming environment. MAPLE is a vendor-independent neutral facility for the programming of multiple manufacturing devices and controls. For the programming of manufacturing devices and controls the following areas will be supported: connection between various manufacturing data and manufacturing application programs; management of several manufacturing databases; and sharing of manufacturing application programs and software tools
Responsible entity: ISO/TC 184/SC 5

Standard identification: ISO 20242-1:2005
Title: Industrial automation systems and integration – Service interface for testing applications – Part 1: Overview
Short description: Provides an overview of the particularities of the international standard ISO 20242 and its use in the computer-aided testing environment
Responsible entity: ISO/TC 184/SC 5

Standard identification: ISO 23570-1:2005
Title: Industrial automation systems and integration – Distributed installation in industrial applications – Part 1: Sensors and actuators
Short description: Specifies the interconnection of elements in the control system of machine tools and similar large pieces of industrial automation and specifies the interconnection of sensors and actuators with I/O modules. This specification includes cable types, sizes and sheath colors, connector types and contact assignments, and diagnostic functions appropriate to the sensors and actuators
Responsible entity: ISO/TC 184/SC 1

Standard identification: IEC 61131:2020 Series
Title: Programmable controllers
Short description: Is a set of IEC standards for programmable controllers. It comprises: IEC 61131-1:2003 Part 1: General information; IEC 61131-2:2017 Part 2: Equipment requirements and tests; IEC 61131-3:2013 Part 3: Programming languages; IEC TR 61131-4:2004 Part 4: User guidelines; IEC 61131-5:2000 Part 5: Communications; IEC 61131-6:2012 Part 6: Functional safety; IEC 61131-10:2019 Part 10: PLC open XML exchange format
Responsible entity: IEC/TC 65/SC 65B

Standard identification: IEC 61158-1:2019
Title: Industrial communication networks – Fieldbus specifications – Part 1: Overview and guidance for the IEC 61158 and IEC 61784 series
Short description: Specifies the generic concept of fieldbuses. This document presents an overview and guidance for the IEC 61158 series, explaining the structure and content of the IEC 61158 series, relating the structure of the IEC 61158 series to the ISO/IEC 7498-1 OSI Basic Reference Model, and showing how to use parts of the IEC 61158 series in combination with the IEC 61784 series
Responsible entity: IEC/TC 65/SC 65C
Table 33.6 (continued)

Standard identification: IEC 61298-1:2008
Title: Process measurement and control devices – General methods and procedures for evaluating performance – Part 1: General considerations
Short description: Specifies general methods and procedures for conducting tests, and reporting on the functional and performance characteristics of process measurement and control devices
Responsible entity: IEC/TC 65/SC 65B

Standard identification: IEC 61508-1:2010
Title: Functional safety of electrical/electronic/programmable electronic safety-related systems – Part 1: General requirements
Short description: Covers the aspects to be considered when electrical/electronic/programmable electronic systems are used to carry out safety functions
Responsible entity: IEC/TC 65/SC 65A

Standard identification: IEC 61784-1:2019
Title: Industrial communication networks – Profiles – Part 1: Fieldbus profiles
Short description: Defines a set of protocol-specific communication profiles based primarily on the IEC 61158 series, to be used in the design of devices involved in communications in factory manufacturing and process control
Responsible entity: IEC/TC 65/SC 65C

Standard identification: IEC 61850 Series
Title: Communication networks and systems for power utility automation
Short description: Set of standards for communication networks and systems for power utility automation – All parts. It is applicable to power utility automation systems and defines the communication between intelligent electronic devices in such systems
Responsible entity: IEC/TC 57

Standard identification: IEC 61987-1:2006
Title: Industrial process measurement and control – Data structures and elements in process equipment catalogues – Part 1: Measuring equipment with analog and digital output
Short description: Defines a generic structure in which product features of industrial process measurement and control equipment with analog or digital output should be arranged, in order to facilitate the understanding of product descriptions when they are transferred from one party to another
Responsible entity: IEC/TC 65/SC 65E

Standard identification: IEC 62381:2012
Title: Automation systems in the process industry – Factory Acceptance Test (FAT), Site Acceptance Test (SAT), and Site Integration Test (SIT)
Short description: Defines the procedures and specifications for the Factory Acceptance Test (FAT), the Site Acceptance Test (SAT), and the Site Integration Test (SIT). These tests are carried out to prove that the automation system is in accordance with the specification. The description of activities given in this standard can be taken as a guideline and adapted to the specific requirements of the process, plant, or equipment
Responsible entity: IEC/TC 65/SC 65E

Standard identification: IEC 62453-1:2016
Title: Field device tool (FDT) interface specification – Part 1: Overview and guidance
Short description: Presents an overview and guidance for the IEC 62453 series on field device tool (FDT) interfaces
Responsible entity: IEC/TC 65/SC 65E

Standard identification: ANSI/MTC1.6-2020
Title: MTConnect Standard Version 1.6
Short description: MTConnect is an open and free-of-charge data and information exchange standard supporting interoperability between devices, sensors, and software applications by publishing data over a network. It is based on a data dictionary of terms describing information associated with manufacturing operations. The standard also defines a series of semantic data models that provide a clear and unambiguous representation of how that information relates to a manufacturing operation. (A minimal client sketch follows this table.)
Responsible entity: ANSI
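Because MTConnect publishes data over a plain network interface, a very small client is enough to illustrate the idea. The sketch below fetches a "current" document from an agent and lists its observations; the agent address is a hypothetical assumption, and element and attribute names should be checked against the schema version of the agent actually in use.

```python
# Minimal sketch of reading data from an MTConnect agent over HTTP.
# Assumptions: the agent URL is hypothetical; the element/attribute names follow the
# MTConnectStreams document structure but must be verified against the agent's schema.
import urllib.request
import xml.etree.ElementTree as ET

AGENT_URL = "http://mtconnect-agent.example.com:5000"  # hypothetical agent address

def current_observations(agent_url: str = AGENT_URL):
    """Fetch the /current document and yield (device, dataItemId, value) triples."""
    with urllib.request.urlopen(f"{agent_url}/current", timeout=5) as resp:
        root = ET.fromstring(resp.read())

    def local(tag: str) -> str:
        # MTConnect documents are namespaced; match on local names to stay version-agnostic.
        return tag.rsplit("}", 1)[-1]

    for device_stream in root.iter():
        if local(device_stream.tag) != "DeviceStream":
            continue
        device = device_stream.get("name", "unknown")
        for obs in device_stream.iter():
            item_id = obs.get("dataItemId")
            if item_id:  # observations carry a dataItemId attribute
                yield device, item_id, (obs.text or "").strip()

if __name__ == "__main__":
    for device, item, value in current_observations():
        print(device, item, value)
```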
especially to build hubs of integrated e-services. This need is rapidly growing both in business (to combine competencies and capabilities of several partners into common services) as well as in government organizations (for instance, pan-European services offered to EU citizens where a service offered to citizens through a common portal will in fact be a combination of local services running in member states in national languages, e.g., application for the blue card or European identity card). At the technical level, integration and interoperability are still evolving and will continue to evolve with technological advances (e.g., faster communication networks, authentication mechanisms, security issues, electronic signature, or multilingual issues). At the semantic level, many researchers agree that one of the major challenges for integration and interoperability remains the semantic unification of concepts, thus hindering development of smart systems. There are simply too many (often trivial) ontologies around, developed in isolation and ignoring each other, and no common way of representing these ontologies, although OWL-S is a de facto standard. Proposed underlying microtheories must be
Table 33.7 Standards for industrial and service robotics

Standard identification: ISO 8373:2012
Title: Robots and robotic devices – Vocabulary
Short description: Defines terms used in relation with robots and robotic devices operating in both industrial and nonindustrial environments
Responsible entity: ISO/TC 299

Standard identification: ISO 9409-1:2004
Title: Manipulating industrial robots – Mechanical interfaces – Part 1: Plates
Short description: Defines the main dimensions, designation, and marking for a circular plate as mechanical interface. It is intended to ensure the exchangeability and to keep the orientation of hand-mounted end-effectors
Responsible entity: ISO/TC 299

Standard identification: ISO 9409-2:2002
Title: Manipulating industrial robots – Mechanical interfaces – Part 2: Shafts
Short description: Defines the main dimensions, designation, and marking for a shaft with cylindrical projection as mechanical interface. It is intended to ensure the exchangeability and to keep the orientation of hand-mounted end-effectors
Responsible entity: ISO/TC 299

Standard identification: ISO 9283:1998
Title: Manipulating industrial robots – Performance criteria and related test methods
Short description: Is intended to facilitate understanding between users and manufacturers of robots and robot systems, defining the important performance characteristics, describing how they shall be specified, and recommending how they should be tested. (An illustrative repeatability calculation follows this table.)
Responsible entity: ISO/TC 299

Standard identification: ISO 9787:2013
Title: Robots and robotic devices – Coordinate systems and motion nomenclatures
Short description: Defines and specifies robot coordinate systems. It also provides nomenclature, including notations, for the basic robot motions. It is intended to aid in robot alignment, testing, and programming. It applies to all robots and robotic devices as defined in ISO 8373
Responsible entity: ISO/TC 299

Standard identification: ISO 9946:1999
Title: Manipulating industrial robots – Presentation of characteristics
Short description: Because the number of manipulating industrial robots used in a manufacturing environment is constantly increasing and this has underlined the need for a standard format for the specification and presentation of robot characteristics, the document is intended to assist users and manufacturers in the understanding and comparison of various types of robots and contains a vocabulary and a format for the presentation of automatic end-effector exchange systems characteristics
Responsible entity: ISO/TC 299

Standard identification: EN ISO 10218-1:2011
Title: Robots and robotic devices – Safety requirements for industrial robots – Part 1: Robots
Short description: Specifies requirements and guidelines for the inherent safe design, protective measures, and information for use of industrial robots. It describes basic hazards associated with robots and provides requirements to eliminate, or adequately reduce, the risks associated with these hazards
Responsible entity: ISO/TC 299

Standard identification: EN ISO 10218-2:2011
Title: Robots and robotic devices – Safety requirements for industrial robots – Part 2: Robot systems and integration
Short description: Specifies safety requirements for the integration of industrial robots and industrial robot systems as defined in ISO 10218-1, and industrial robot cell(s)
Responsible entity: ISO/TC 299

Standard identification: ISO 11593:1996
Title: Manipulating industrial robots – Automatic end-effector exchange systems – Vocabulary and presentation of characteristics
Short description: Defines terms relevant to automatic end-effector exchange systems used for manipulating industrial robots. The terms are presented by their symbol, unit, definition, and description. The definition includes references to existing standards
Responsible entity: ISO/TC 299

Standard identification: ISO/TR 13309:1995
Title: Manipulating industrial robots – Informative guide on test equipment and metrology methods of operation for robot performance evaluation in accordance with ISO 9283
Short description: This technical report supplies information on the state of the art of test equipment operating principles. Additional information is provided that describes the applications of current test equipment technology to ISO 9283
Responsible entity: ISO/TC 299

Standard identification: EN ISO 13482:2014
Title: Robots and robotic devices – Safety requirements for personal care robots
Short description: Specifies requirements and guidelines for the inherently safe design, protective measures, and information for use of personal care robots, in particular the following three types of personal care robots: mobile servant robots, physical assistant robots, and person carrier robots. These robots typically perform tasks to improve the quality of life of intended users, irrespective of age or capability
Responsible entity: ISO/TC 299

Standard identification: ISO 14539:2000
Title: Manipulating industrial robots – Object handling with grasp-type grippers – Vocabulary and presentation of characteristics
Short description: Is one of a series of standards dealing with the requirements of manipulating industrial robots. This standard provides the vocabulary for understanding and planning of object handling and presentation of characteristics of grasp-type grippers
Responsible entity: ISO/TC 299
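The performance characteristics addressed by ISO 9283 lend themselves to a short numerical illustration. The sketch below summarizes positioning repeatability as the mean distance of the attained positions from their barycentre plus three standard deviations of that distance; this formulation and the sample data are an illustrative assumption, and the standard itself must be consulted for the normative definitions, test cycles, and pose sets.

```python
# Illustrative sketch of a pose-repeatability calculation in the spirit of ISO 9283.
# Assumption: repeatability is summarized as RP = mean distance from the barycentre
# plus three standard deviations of that distance; not a normative implementation.
import math

def positioning_repeatability(points):
    """points: list of (x, y, z) positions attained when commanding the same pose."""
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    cz = sum(p[2] for p in points) / n
    dists = [math.dist(p, (cx, cy, cz)) for p in points]
    mean_l = sum(dists) / n
    s_l = math.sqrt(sum((d - mean_l) ** 2 for d in dists) / (n - 1))
    return mean_l + 3.0 * s_l

if __name__ == "__main__":
    # Hypothetical attained positions (mm) for five repeated visits to one commanded pose.
    attained = [(100.02, 50.01, 25.00), (99.98, 49.99, 25.02),
                (100.01, 50.02, 24.99), (100.00, 49.98, 25.01),
                (99.99, 50.00, 25.00)]
    print(f"RP = {positioning_repeatability(attained):.3f} mm")
```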
Table 33.8 Standards on usability and human-computer interaction

Standard identification: ISO 9241 Family
Title: Ergonomic requirements for office work with visual display terminals (VDTs); renamed from 2006 as: Ergonomics of human-system interaction
Short description: This voluminous standard made of several parts and series provides detailed guidance on the design of user interfaces of interactive systems. Recommended readings are: ISO 9241-1:1997 Part 1: General introduction; ISO 9241-2:1992 Part 2: Guidance on task requirements; ISO 9241-11:2018 Part 11: Usability: Definitions and concepts; ISO 9241-20:2008 Part 20: Accessibility guidelines for information/communication technology (ICT) equipment and services; ISO/TR 9241-100:2010 Part 100: Introduction to standards related to software ergonomics; ISO 9241-171:2008 Part 171: Guidance on software accessibility (former ISO/TS 16071:2003); ISO 9241-210:2019 Part 210: Human-centered design for interactive systems (former ISO 13407:1999); ISO 9241-220:2019 Part 220: Processes for enabling, executing, and assessing human-centered design within organizations (former ISO/TR 18529:2000). Since 2006, standards documents have been organized in series as follows: 100 series: Software ergonomics; 200 series: Human-system interaction processes; 300 series: Displays and display-related hardware; 400 series: Physical input devices – ergonomics principles; 500 series: Workplace ergonomics; 600 series: Environment ergonomics; 700 series: Application domains – Control rooms; 900 series: Tactile and haptic interactions
Responsible entity: ISO/TC 159/SC 4

Standard identification: ISO 13407:1999
Title: Human-centered design processes for interactive systems
Short description: Replaced by ISO 9241-210:2019
Responsible entity: ISO/TC 159/SC 4

Standard identification: ISO 14915
Title: Software ergonomics for multimedia user interfaces
Short description: Provides principles, recommendations, requirements, and guidance for the design of multimedia user interfaces with respect to the following aspects: design of the organization of the content, navigation, and media-control issues. It comprises: ISO 14915-1:2002 Part 1: Design principles and framework; ISO 14915-2:2003 Part 2: Multimedia navigation and control; ISO 14915-3:2002 Part 3: Media selection and combination
Responsible entity: ISO/IEC JTC 1/SC 35

Standard identification: ISO/TS 16071:2003
Title: Ergonomics of human-system interaction – Guidance on accessibility for human-computer interfaces
Short description: Replaced by ISO 9241-171:2008
Responsible entity: ISO/TC 159/SC 4

Standard identification: ISO/TR 16982:2002
Title: Ergonomics of human-system interaction – Usability methods supporting human-centered design
Short description: Provides information for project managers on human-centered usability methods which can be used for design and evaluation. It details the advantages, disadvantages, and other factors relevant to using each usability method. It explains the implications of the stage of the life cycle and the individual project characteristics for the selection of usability methods and provides examples of usability methods in context
Responsible entity: ISO/TC 159/SC 4

Standard identification: ISO/TR 18529:2000
Title: Ergonomics – Ergonomics of human-system interaction – Human-centered life cycle process descriptions
Short description: Replaced by ISO 9241-220:2019
Responsible entity: ISO/TC 159/SC 4

Standard identification: IEC/TR 61997:2001
Title: Guidelines for the user interfaces in multimedia equipment for general purpose use
Short description: Applies to the design of multimedia equipment such as information and communications equipment or audio-video equipment and systems. The purpose of the guidelines set in this technical report is to take note of those inconveniences in the operation of multimedia equipment observed today, and to specify checkpoints that should be given primary consideration in the development of good multimedia products and systems that the general, nonprofessional user can use with confidence
Responsible entity: IEC/TC 100
(Figure 33.4 summarizes, in three panels, the current landscape. Domains: Industry 4.0 (I4.0), e-services, e-work, and collaborative networked organizations (CNOs), illustrated by hubs of integrated e-services. Theories, models, languages, and tools: enterprise modeling and reference models; enterprise integration; enterprise and system interoperability, with interoperability approaches (integrated, unified, federated), interoperability barriers (conceptual, organizational, technological), and interoperability concerns (business, process, service, data); enterprise model repositories with functional and information models; and ontology and annotation management services. Research challenges: enterprise modeling languages (EMLs) and EMLs for enterprise integration and interoperability; ontologies for (networked) enterprise interoperability and semantic interoperability; reference architectures, ontologies, and functional models; validation and verification of enterprise models; interoperability architectures, interoperability process classification frameworks, and model repository classification schemes; collaboration and bio-inspired models and tools; theories of collaboration, CNO reference models, cyber-physical systems, formal methods and tools, and empirical studies; and process definition, execution, tracking, analysis, evaluation, monitoring, optimization, data mining, and visualization.)
Fig. 33.4 The scope of current challenges in enterprise integration and interoperability (after [39])
aligned, validated, and internationally approved (and may be standardized) to provide sound foundations to the field, while (XML-based) standards for ontology representation should be welcome. At the organizational level, many techniques and languages exist to capture and represent business processes and services in a satisfactory way from a practitioner’s viewpoint. Recent research trends have concerned the development of interoperability maturity models with associated metrics to assess the level of interoperability of companies and the development of methodologies for improving enterprise integration or interoperability. A major challenge for developing smart interoperable enterprise systems concerns organizational learning and knowledge processing at enterprise level [41]. Anyway, for the years to come, the driving forces imposing the new development trends will still come from new business conditions: market globalization in some sectors versus industrial relocalization in other areas, increased communications and electronic activities of all kinds,
teleoperations, knowledge sharing, and skill-/competency-based hubs of integrated services, etc. The reader should also see Ch. 18 for related content and other drivers. To date, thanks to advances in Internet computing and service orientation, enterprise integration and systems interoperability have become a reality that largely contributes to making enterprises more agile, more collaborative, and more interoperable. Enterprise integration and systems interoperability are fairly mature at the technical level, and will continue to evolve with the progress of new ICT technologies, but they are not yet fully mature at the organizational and, more specifically, at the semantic level. There are, however, some other issues (out of the scope of this chapter) that should be considered. They concern legal issues, confidentiality issues, or linguistic aspects, to name a few, especially in the case of international contexts (multinational firms, international organizations, or unions of member states). They can have a political dimension and be even harder to solve than the purely technical or organizational issues mentioned in this chapter.
References
1. Williams, T.J.: Advances in industrial automation: historical perspectives. In: Nof, S.Y. (ed.) Springer Handbook of Automation, pp. 5–11. Springer, Berlin Heidelberg (2009)
2. Kusiak, A.: Smart manufacturing. Int. J. Prod. Res. 56(1–2), 508–517 (2018)
3. Xu, L.D., Xu, E.L., Li, L.: Industry 4.0: state of the art and future trends. Int. J. Prod. Res. 56(8), 2941–2962 (2018)
4. Kagermann, H., Wahlster, W., Helbig, J. (eds.): Recommendations for Implementing the Strategic Initiative Industrie 4.0: Final Report of the Industrie 4.0 Working Group. Federal Ministry of Education and Research, Frankfurt (2013)
5. Monostori, L., Kádár, B., Bauernhansl, T., Kondoh, S., Kumara, S., Reinhart, G., Sauer, O., Schuh, G., Sihn, W., Ueda, K.: Cyber-physical systems in manufacturing. CIRP Ann. 65(2), 621–664 (2016)
6. Vernadat, F.B.: Enterprise Modeling and Integration: Principles and Applications. Chapman & Hall, London (1996)
7. Camarinha-Matos, L.M., Afsarmanesh, H., Galeano, N., Molina, A.: Collaborative networked organizations – concepts and practice in manufacturing enterprises. Comput. Ind. Eng. 57, 46–60 (2009)
8. Jagdev, H.S., Thoben, K.D.: Anatomy of enterprise collaborations. Prod. Plan. Control 12(5), 437–451 (2001)
9. Velásquez, J.D., Nof, S.Y.: Collaborative e-work, e-business, and e-service. In: Nof, S.Y. (ed.) Springer Handbook of Automation, pp. 1549–1576. Springer, Berlin Heidelberg (2009)
10. IEEE: IEEE standard computer dictionary: a compilation of IEEE standard computer glossaries, IEEE Std 610 (1991). https://doi.org/10.1109/IEEESTD.1991.106963
11. ATHENA: Advanced Technologies for Interoperability of Heterogeneous Enterprise Networks and their Applications, FP6-507312IST1 Integrated Project (2005). www.ist-athena.org
12. Vernadat, F.B.: Interoperable enterprise systems: principles, concepts, and methods. Annu. Rev. Control 31(1), 137–145 (2007)
13. Youssef, J.R., Chen, D., Zacharewicz, G., Vernadat, F.B.: EOS: enterprise operating systems. Int. J. Prod. Res. 56(8), 2714–2732 (2018)
14. Panetto, H., Molina, A.: Enterprise integration and interoperability in manufacturing systems: trends and issues. Comput. Ind. 59(7), 641–646 (2008)
15. AMICE: CIMOSA: CIM Open System Architecture, 2nd edn. Springer, Berlin Heidelberg (1993)
16. Gold-Bernstein, B., Ruh, W.: Enterprise Integration: The Essential Guide to Integration Solutions. Addison-Wesley, Boston (2005)
17. Hohpe, G., Woolf, B.: Enterprise Integration Patterns: Designing, Building, and Deploying Messaging Solutions. Addison-Wesley, Reading (2004)
18. Li, H., Williams, T.J.: A vision of enterprise integration considerations: a holistic perspective as shown by the Purdue enterprise reference architecture. In: Proc. 4th Int. Conf. Enterp. Integr. Model. Technol. (ICEIMT'04), Toronto (2004)
19. Vernadat, F.B.: Enterprise modelling: research review and outlook. Comput. Ind. 122 (2020)
20. Chen, D., Vernadat, F.: Enterprise interoperability: a standardization view. In: Kosanke, K., Jochem, R., Nell, J.G., Ortiz Bas, A. (eds.) Enterprise Inter- and Intra-Organizational Integration: Building International Consensus, pp. 273–282. Kluwer Academic Publishers, Dordrecht (2003)
21. Chen, D., Doumeingts, G., Vernadat, F.: Architectures for enterprise integration and interoperability: past, present and future. Comput. Ind. 59(7), 647–659 (2008)
22. C4ISR: C4ISR Architecture Framework, V. 2.0, Architecture Working Group (AWG). US Department of Defense (DoD) (1998)
23. Chen, D., Dassisti, M., Elveseater, B., Panetto, H., Daclin, N., Jeakel, F.-W., Knothe, T., Solberg, A., Anaya, V., Gisbert, R.S., Kalampoukas, K., Pantelopoulos, S., Kalaboukas, K., Bertoni, M., Bor, M., Assogna, P.: DI.3: Enterprise Interoperability – Framework and Knowledge Corpus, Final Report. Research report of INTEROP NoE, FP6 Network of Excellence, Contract no. 508011, Deliverable DI.3
24. ISA2: The New European Interoperability Framework. European Commission, Brussels (2017). https://ec.europa.eu/isa2/eif_en
25. Gruber, T.R.: Toward principles for the design of ontologies used for knowledge sharing. Int. J. Human-Comput. Stud. 43, 907–928 (1995)
26. World Wide Web Consortium (W3C): OWL-S: Semantic markup for web services (2004). www.w3.org/2004/OWL
27. Oren, E., Möller, K.H., Scerri, S., Handschuh, S., Sintek, M.: What Are Semantic Annotations? Technical Report, DERI, Galway (2006)
28. Liao, Y., Lezoche, M., Panetto, H., Boudjlida, N., Rocha Loures, E.: Semantic annotation for knowledge explicitation in a product lifecycle management context: a survey. Comput. Ind. 71, 24–34 (2015)
29. World Wide Web Consortium (W3C): XML: eXtensible mark-up language (2000). www.w3.org/xml
30. Herzum, P.: Web Services and Service-Oriented Architecture. Executive Report 4(10). Cutter Distributed Enterprise Architecture Advisory Service (2002)
31. Khalaf, R., Curbera, F., Nagy, W., Mukhi, N., Tai, S., Duftler, M.: Understanding web services. In: Singh, M. (ed.) Practical Handbook of Internet Computing, Ch. 27. CRC, Boca Raton (2004)
32. Kreger, H.: Web Services Conceptual Architecture (WSCA 1.0). IBM Software Group, Somers (2001)
33. World Wide Web Consortium (W3C): WSDL: Web service description language (2001). www.w3.org/TR/wsdl
34. World Wide Web Consortium (W3C): SOAP: Simple object access protocol (2002). www.w3.org/TR/SOAP
35. Webber, J., Parastatidis, S., Robinson, I.: REST in Practice. O'Reilly Media, Sebastopol (2010)
36. Chappell, D.A.: Enterprise Service Bus. O'Reilly, Sebastopol (2004)
37. OMG: Business Process Model and Notation (BPMN). Object Management Group (2011). http://www.omg.org/spec/BPMN/2.0
38. Andrews, T., Curbera, F., Dholakia, H., Goland, Y., Klein, J., Leymann, F., Liu, K., Roller, D., Smith, D., Thatte, S., Trickovic, I., Weerawanana, S.: Business Process Execution Language for Web Services, Version 1.1 (2003). http://download.boulder.ibm.com/ibmdl/pub/software/dw/specs/ws-bpel/ws-bpel.pdf
39. Nof, S.Y., Filip, F.G., Molina, A., Monostori, L., Pereira, C.E.: Advances in e-Manufacturing, e-Logistics, and e-Service Systems. In: Proc. IFAC Congress, Seoul, Korea (2008)
40. Molina, A., Panetto, H., Chen, D., Whitman, L., Chapurlat, V., Vernadat, F.: Enterprise integration and networking: challenges and trends. Stud. Inform. Control 16(4), 353–368 (2007)
41. Weichhart, G., Stary, C., Vernadat, F.: Enterprise modelling for interoperable and knowledge-based enterprises. Int. J. Prod. Res. 56(8), 2818–2840 (2018)
François B. Vernadat received his PhD in electrical engineering and automatic control from the University of Clermont, France, in 1981. He was a research officer at the National Research Council of Canada in the 1980s and at the Institut National de Recherche en Informatique et Automatique in France in the 1990s. He joined the University of Lorraine at Metz, France, in 1995 as a full professor and founded the LGIPM research laboratory. His research interests include enterprise modeling, enterprise architectures, enterprise integration and interoperability, information systems design and analysis, and performance management. He has contributed to seven books and published over 300 papers. He has been a member of IEEE and ACM, has held several vice-chairman positions at IFAC, and has been an associate or area editor for many years of Computers in Industry, IJCIM, IJPR, Enterprise Information Systems, and Computers and Industrial Engineering. In parallel, from 2001 until 2016, he held IT management positions as head of departments in IT directorates of European institutions in Luxembourg (first the European Commission and then the European Court of Auditors).
34 Automation and Ethics

Micha Hofri
Contents
34.1 Introduction . . . 753
34.2 What Is Ethics, and Why Is It Used for Automation? . . . 754
34.3 Dimensions of Ethics . . . 754
34.3.1 Theories of Ethics . . . 755
34.3.2 Principles of Ethics . . . 757
34.3.3 Automation Ethical Concerns . . . 758
34.3.4 Automation Failures and Their Ethical Aspects . . . 758
34.3.5 Artificial Intelligence and Its Ethical Aspects . . . 761
34.4 Protocols for Ethical Analysis . . . 764
34.5 Codes of Ethics . . . 764
34.6 Online Resources for Ethics of Automation . . . 764
34.7 Sources for Automation and Ethics . . . 765
References . . . 771
Abstract
This chapter surveys aspects of automation in technology, its design, implementation, and usage, as they interact with values that underpin our society and its civilization. The framework chosen for this survey is that of moral or ethical theories and attitudes. To this avail we describe several of the ethical theories used currently to anchor discussions about values in our society. Several significant failures of automatic systems, including a fictional one, are quarried for ethical insights and lessons. The currently intractable nature of “neural networks” artificial intelligence systems trained via machine learning is shown to be a moral conundrum. The concepts of code of ethics and of ethical analysis are presented in some detail.
M. Hofri () Department of Computer Science, Worcester Polytechnic Institute, Worcester, MA, USA e-mail: [email protected]
© Springer Nature Switzerland AG 2023 S. Y. Nof (ed.), Springer Handbook of Automation, Springer Handbooks, https://doi.org/10.1007/978-3-030-96729-1_34
Keywords
Ethical framework · Theory · Model · Conflict analysis · Role of ethics in automation design · Automation effect on society · Automation failure analysis · Artificial intelligence opacity · Internet and communications technology · Code of ethics and professional conduct
34.1 Introduction
Mankind has evolved to be the dominant species in the world. On the way humans have developed technologies that are changing this world. A convenient time point for the beginning of our "technological age" is 1804; that year the first steam locomotive was built for a railroad in a coal mine in Wales, and the US Census Bureau estimates that in that year the global population reached one billion, for the first time. It is now (at the end of summer 2022) just under eight billion. This growth has required the development of many technologies, notably in food production, construction, transportation, and communications. We now need these technologies; the current way much of humanity lives depends on them and on many others that provide for needs and wants beyond mere subsistence; with time, this dependence is certain to increase in depth and criticality. On the way to the present, people have discovered that these technologies, and their applications, have downsides as well. This chapter considers the ways in which this balancing of benefits and harms is handled. The framework which has been found useful for the needed consideration is that of ethics, also referred to as morality. Note that "the ethics of technology" is a topic in numerous publications, where the actual discussions have little to do with ethics: mostly they raise worries about the influence of technology on society, or report about damage due to technology, and discuss ways to control and repair these concerns; the discussion is technical or legal (regulatory) rather than ethical; that usage is not
considered in this chapter. It is a sign of the times that many universities have obtained generous grants to build research centers focusing on this topic. Such centers tend to have brief lives (probably due to the grant that started them petering out). The quotation heading the next section suggests an intuition behind the choice of our topic.
34.2 What Is Ethics, and Why Is It Used for Automation?
Two things fill the mind with ever new and increasing admiration and awe …. the starry heavens above me and the moral law within me. …. I see them before me and connect them immediately with the consciousness of my existence. —Immanuel Kant, Critique of Practical Reason, 1788.
As Kant claims, a sense for ethics, a "moral law" in his words, is innate to humans. Similar realizations have moved thinkers to analyze this sense in the earliest writings of mankind [1], going back to Classical Greece. Our moral sense is needed, so that we act properly, in ways that this sense approves of, and it lets us then feel we are "doing the right thing." Humans have other senses, such as vision and touch, and we have found that they can be enhanced by instruments, such as a microscope and a thermometer. You may think of the various ethical theories and models that people have spawned as tools to sharpen our moral sense. The most direct answer to the question in the title of this section is to show how easy it is to draw a list of woes, scenarios of automatic systems or operations which give rise to undesirable, or even catastrophic results; these would not be the outcomes of "doing the right thing." This can happen in many and diverse ways:

1. Traffic lights that "favor" one road over another intersecting it, creating large, unneeded delays. A distributed system of such lights can have even more elaborate failure modes, such as the creation of a "red wave" along a main road.
2. An airline reservation system open to the Internet that makes it possible for anyone to discover who flies on a given flight by using an ordinary Internet search engine.
3. A driverless car which does not allow adequately for weather conditions.
4. The machines in a printer room of an academic department manage the queue of documents to be printed, unaware that some of them are confidential (such as tests for current courses), and the room needs to be monitored when these are printed.
5. An automatic assembly machine which manufactures products inferior to those that human employees produced before. Customer complaints would now be answered by blaming the machine, "the computer did it."
6. An automatic irrigation system which is not provided input on the state of the soil and predictions of precipitation and wastes water and/or spaces the watering sessions too sparsely, and plants die off.

In the next section, we give a more detailed description of the concerns with automatic systems, which can be ameliorated through a disciplined process of ethical analysis of the decision space. The list of examples above, where faulty design led to unanticipated behavior, suggests how much decision-makers could be helped by such analysis. Often failures of automation are assumed to be due to its novelty; this need not be the case. Automation came early: the above-mentioned beginning of the technical age cited the use of a steam engine in a locomotive. Such steam engines were even then provided with a centrifugal governor that regulated the flow of steam to the cylinder and maintained the engine speed within a prescribed range, automatically. Now consider the examples in the list above; how do they differ from the early locomotive? They are all computerized systems. As will become evident, in much of the discussion of automation, we need hardly distinguish between the terms "automatic system" and "computer-based system." Similarly, the important distinction in computers between the hardware and the software is rarely material for us. It is important to note: the creation and operation of an automatic system involves several categories of personnel, not all necessarily acting or even existing concurrently. Among those we distinguish designers, implementers, deployment teams, maintainers, operators, and possibly more. On another level there are the owner(s), decision-maker(s), and external users (think of an ATM), all of them, at different levels of involvement, stakeholders. There is a basic question that underlies much of our discussion: does technology carry in and of itself any values, or is it value-neutral, with ethical concerns relating to it only through its embedding in human society? This chapter hews closely to the latter attitude; humans, their needs and wants, are the measure of our ethics concerns. The question is treated in some detail in [28, Chapter 3].
34.3 Dimensions of Ethics
The discussion begins with the main kinds of theories of ethics that have been developed and then presents a list of issues or concerns that the personnel connected to automation needs to take into account.
34.3.1 Theories of Ethics

The problem is that no ethical system has ever achieved consensus. Ethical systems are completely unlike mathematics or science. This is a source of concern.
—Daniel Dennett, Interview with Hari Kunzru, December 17, 2008.
The concern Dennett raises leads some ethicists to refer to ethical frameworks, or systems, as he does, rather than theories. We use these collective names as equivalent. How can an ethical issue be addressed? We have an inherent moral sense, and it is natural for us to appeal to its guidance. It is, however, personal and as such cannot be the basis for moral principles that are widely accepted, ideally by all of society. What can be the source of authority or persuasion that can back such wide acceptance of an ethical principle? Traditionally we recognize three such sources: religion, the Law, and theories of ethics. We argue that only the last is a suitable choice [2, Chapter 2]. Using religion as the basis of ethical decisions has an attractive side; it has been practiced in various forms for a very long time, and for many among us it is invested with the recognition and demands of a higher authority, even a supreme being. However, in the diverse, pluralistic society in which we exist, this cannot provide a common source recognized by most of the people we consider. In addition, it may be pointed out that automation systems exist at a societal level that is not addressed well by most religious moral teachings. This cannot be an adequate basis, any more than we can appeal to the code of Hammurabi to order our lives. Can the legal system provide us the support we need to determine the ethical value of actions? Unlike religious teachings it is recognized as impersonal, it is not subject to disputations, and it is quite explicit. There are difficulties in using it as a source of consensus. It is not uniform across national borders (and in several ways even across state borders within the United States). The main reason we hesitate to see it as our source of moral authority is the incongruity: not all that is moral is legal; not all that is legal is moral. For example, till the 1860s slavery was legal in the United States; conversely, in many states, due to public health concerns, restaurants may not give charity kitchens their surplus prepared food. What is morally acceptable by society changes with time, and so does legislation, but their changes are rarely well-synchronized. What theories of ethics are at our disposal? A readable survey of the area is provided by Michael Sandel in [6]. Over time, several types have been proposed:

1. Relativistic ethics frameworks
2. Consequence-based ethics theories
3. Duty-based ethics theories
4. Character-based ethics theories
This list immediately reinforces the statement of Daniel Dennett above. Note: We keep to the main flavors; the complete record would also mention social contract-based ethical theories [3, 5] and rights-based theories [7]. When automation is considered, these are probably less significant than in other contexts.
Relativistic Ethics Theories
Such theories deny the existence of universal ethics norms [8, Chapter 2]. Relativism came to be considered a valid approach only in the twentieth century [4]. Ethicists have defined two such approaches to ethics, with self-explanatory names. Subjective relativism posits that each person makes moral decisions according to his individual values. The working principle is acceptance that "what is moral for me may not be right for you." While this may be a workable arrangement for a community of sensible people, it is not appropriate as the ethical protocol for managing a technology likely to be novel to many persons or organizations. The second type of relativism, cultural relativism, enlarges the domain of agreement about moral norms from an individual to a community, which may encompass few or many individuals; yet similarly, it denies the need for agreement among such groups. The same difficulty holds: since the technologies we consider are expected to transcend any particular culture, their design and operation cannot be held hostage to any such local set of norms. There is even a third view, sometimes called "ethical egoism," which defines the good and the bad actions or issues as those that advance or, conversely, harm the interests of a person. In such a view, any issue which neither abets nor harms the person has no moral dimension. Such thinking is commonly associated with the author Ayn Rand; it is no better than the relativistic formulations for our purposes. David Oderberg is quoted in [17], saying that relative ethics ultimately dissolves into moral nihilism. Albert Einstein observed, "Relativity applies to physics, not ethics."

Consequence-Based Ethics Theories
We look now at theories that judge an action moral to the extent that it leads to the increase of the happiness of the most people. This is the way it was offered in the eighteenth and nineteenth centuries by its founders, Jeremy Bentham (1748–1832) and John Stuart Mill (1806–1873) [2, 9]. The underlying idea is attractive, but its application requires attention, to avoid unacceptable outcomes. A known example suggests that if a society enslaves 1% of the population, making them produce desirable products and services for the
rest and improving their quality of life, many more people would experience greater happiness, swamping the suffering of the few, making it qualify as a highly ethical act! Nevertheless, this approach has found much favor and has been modified in several ways, to improve its usability. While happiness is desirable by all people, comparing happiness levels provided by different acts is not easy, arguably impossible, and has been typically replaced by quantifying the merit, often by equivalent sums of money. Economists that have been using it found that a more flexible application is possible introducing the concept of utility function, to represent the merit of decisions [10]. Consequently, a common name for this approach is utilitarianism. Unlike the relativistic approaches to ethics, utilitarianism is factual and objective and allows decision-makers to explain and justify their decisions. Yet another important development of the theory emerged as it was realized that there is merit in avoiding the need to conduct a complete analysis for each situation which needs an ethical decision, if we adopt rule-utilitarianism, which calls on us to formulate rules of action by analyzing their utility, rules that would allow us a simpler decision-making, by finding for a pending action the rule that would provide us guidance for it. It has been observed that much of the merit of utilitarianism is that its application requires a systematic, exhaustive, prudential, and objective analysis of the decision space that is relevant to the act, or rule under consideration, as the case may be, guaranteeing, to the extent possible, that the decision-makers understand the context of the decision and are in command of the facts available.
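One way to picture the kind of systematic, objective analysis of the decision space just described is a small scoring sketch in which alternative design decisions are compared by a stakeholder-weighted utility. The stakeholders, weights, and scores below are invented purely for illustration; they are not a method prescribed by this chapter, and a real analysis would document how each estimate was obtained.

```python
# Illustrative sketch of a stakeholder-weighted utility comparison, in the spirit of the
# utilitarian analysis described above. All stakeholders, weights, and scores are
# invented for illustration only.
def expected_utility(option, weights):
    """option: {stakeholder: estimated benefit in [0, 1]}; weights: {stakeholder: importance}."""
    return sum(weights[s] * option.get(s, 0.0) for s in weights)

weights = {"operators": 0.4, "customers": 0.4, "owner": 0.2}
options = {
    "add hardware interlock and alarm": {"operators": 0.9, "customers": 0.6, "owner": 0.5},
    "software check only":              {"operators": 0.5, "customers": 0.6, "owner": 0.8},
}
for name, scores in options.items():
    print(f"{name}: utility = {expected_utility(scores, weights):.2f}")
```

The value of such an exercise lies less in the final numbers than in forcing the decision-makers to enumerate stakeholders and consequences explicitly.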
Duty-Based Ethics Theories Immanuel Kant, the German philosopher, claimed, as seen when Sect. 34.2 was introduced, that his consciousness is tied up with a moral law. Throughout his work he struggled to reconcile this fact with his conviction that morality of action must ultimately be grounded in the duty that speaks to the action, which mainly springs from the obligations that humans have to each other, but never in the possible consequence of the actions taken. He refused to tie ethics to the promotion of happiness in any form, explaining in his Prolegomena to Any Future Metaphysics that it is so, “…because happiness is not an ideal of reason, but of the imagination,” and noting further that performing a duty may be unpleasant or even lead to a result which is undesirable from certain points of view. (Could this observation have led to George B. Shaw, acute social critic that he was, to observe “An Englishman thinks he is moral when he is only uncomfortable,” in Man and Superman, Act 3, 1903?). But ‘duty calls,’ and this call is supreme. The term deontics or deontology is used to describe such theories. (A traditional view states that justice flows from three sciences: economics; deontics, study of duty; and juridics, study of law).
Kant had a second string to his bow; his conviction that while the moral law is an essential difference between humans and all other beings, there is another difference, our ability to reason and be rational. Hence, he stated, moral decisions need to be reached by rational analysis of the situation at hand and judging which moral rules or principles apply. Moral analysis, like the mathematical kind, needs a basis. In mathematics the need is provided by axioms, and for the moral variety, Kant formulated similar primary directives, for which he coined the term categorical imperative. Kant has provided a few versions of this basis, in several of his books, as his theories developed. The two most often used for such analysis are on their face quite different.

Kant's categorical imperatives:
First: The only valid moral rules, at any time, are such that can be universal moral laws.
Second: Act so that you always treat yourself and other people as ends in themselves, and never as only means to an end [8].
A more direct reformulation of the first imperative states that you should only act on moral rules that you can imagine everyone else following, at any suitable time, without deriving a logical contradiction [6]. The second imperative is based on Kant’s special view of humans. While Kant’s duty-based moral theories have achieved much acclaim, applying them can run into difficulties, especially when the decision-maker is faced with competing duties. Resolving the difficulty calls for ranking different duties, which is usually a fraught procedure. Kant already considered duties that can be primary and secondary, and subsequent treatments went much further than we wish to follow in the current discussion. The names of Benjamin Constant [11] and David Ross are relevant here [12]. To conclude this brief exposition, let us quote one more statement of Kant that exhibits his standards: “In law, a man is guilty when he violates the rights of another. In ethics he is guilty if he only thinks of doing so.”
Character-Based Ethics Theories
The approach to ethics described as character-based is the oldest among those we consider, following the writings by Plato and Aristotle, more than two millennia ago. Similar tracks were laid in oriental traditions by Confucius and later Mencius; the latter was a contemporary of Aristotle. Another term applied to it is virtue ethics, since in this approach it is the acting person and his virtues, rather than the action (or the decision about it), which is at the center of attention, and accordingly neither the consequences of actions nor duties that may compel them are considered. The virtuous man is described as the person who, at the right time, does the right act, for the right reason. The "right reason" is culture appro-
priate. In pre-platonic times, for example, physical prowess was prized; in our time tolerance may be claimed to have a similar significance or in a different vein—high scholarly achievement, from which the right actions follow. Over the last several decades, numerous ethicists have seen the consequentialist and duty-based approaches as obscuring the importance of living the moral life, acquiring moral education, with ethical and emotional family and community relationships; virtue ethics appears as a way to attend to these missing components. What sets apart this view of ethics is that it is not based on inculcating a theory, or learning principles, but on practicing the ethical behavior. It is a process of growth, with the aim for the person to get to a stage where doing the correct action, adopting honest behavior, becomes a part of his or her nature and that doing so will feel good and become a character trait. We argue for our preference of this approach when considering decisions about speculative situations, where deontic rules may be hard to apply, and consequentialist analyses encounter numerous unknowns. Automation practice may often give rise to just such situations. A shortcoming recognized for this approach is that the tight view of the actor as an individual does not lend it for easy use by institutions, including agencies of government. Yet such an agency is a human institution, and distinguishable people create its actions. It must be clear that the distinction between these frameworks, or theories, is not a moral one but a matter of what a person values at a given situation. Looking at healthcare system, and at regional transportation control, arguably should lead to different types of argumentation.
34.3.2 Principles of Ethics The theories above display how thinking about morality over time has become codified in a variety of formal ways, exhibiting deep differences, tied to cultural and personal moral values. Cultures have found other ways to preserve and display the reactions of thoughtful people to the ethical conundrums life throws at us at every turn: they have created ethical proverbs or maxims. There is the classic Golden Rule, “do unto others as you would have them do unto you.” It has been attributed to many people, going back thousands of years, and is found in the teachings of most religions. It encapsulates the ethic of reciprocity, a basic sense of justice. (This exposition would be remiss if it did not mention the demurral of George Bernard Shaw, who objected to this rule, observing that your and the others’ taste or preferences may well differ.) The negative form of this adage is not as common. Hillel the Elder, a Jewish scholar living in the first century bc, is quoted in the Talmud: “That which is hateful to you, do not do to your
757
neighbor; this is the whole law. The rest is commentary on it; go, study.” Cognate forms can be found in Muslim and Confucian writings. Societies can disagree in surprising ways. Ancient Greeks believed that revenge is sanctioned by the gods (and discharged by Nemesis). It provides a unifying basis to many of the classical tragedies of Greek theater. Hence, it was said, “to seek revenge is rational.” Kant, indeed, elevated the role of reason to a main tool of ethical analysis, yet he would not have approved. Reason may encounter further hindrances: “It is useless to attempt to reason a man out of a thing he was never reasoned into” (Jonathan Swift, 1721), in A Letter to a Young Gentleman…. Many have expressed this sentiment since. “As they were not reasoned up, they cannot be reasoned down” (Fisher Ames). Here is a scattering of others; most can be seen as pertaining to one of the frameworks discussed above. Several have known first formulators: “Any tool can be used for good or bad.” “If it is not right, do not do it; if it is not true, do not say it” (Marcus Aurelius, Meditations, second century ad). “Honesty is the first chapter in the book of wisdom” (Thomas Jefferson). “There cannot any one moral rule be proposed whereof a man may not justly demand a reason” (John Locke, 1690). “Integrity has no need of rules” (Albert Camus). “Practice what you preach” (After Dean Rusk). “Even the most rational approach to ethics is defenseless if there isn’t the will to do what is right” (Alexander Solzhenitsyn). “Ethical axioms are found and tested not very differently from the axioms of science. Truth is what stands the test of experience ” (Albert Einstein, 1950). “No moral system can rest solely on authority” (A. J. Ayer). “Morality is the custom of one’s country, and the current feeling of one’s peers” (Samuel Butler, 1900). “The needs of a society determines its ethics” (Maya Angelou). “There can be no final truth in ethics any more than in physics, until the last man has had his experience and has said his say” (William James, 1896). “Science, by itself, cannot supply us with an ethic” (Bertrand Russel, 1950).
Asimov’s Laws of Robotics—Consideration In 1940 Isaac Asimov, a professor of chemistry and also a very prolific science and science-fiction writer, invented three laws designed to constrain (very advanced) robots so they operate ethically [13].
First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
The laws were included in a collection of science-fiction short stories, and they appeared plausible. The stories, however, for the most part were designed to display a variety of situations which were not captured adequately by the laws, mostly due to their informal language. A subsequent collection of stories [14] continued this demonstration. Remarkably, even though these laws were merely a literary device—included in a book considered a low-brow genre, with stories that mostly displayed modes of their failure— the laws captured wide attention, of a variety of other writers and also scholars in machine ethics [15, 16]. It soon became obvious that the laws are inadequate in many ways for the purpose intended; we referred to their informal, even sloppy language, but a deeper difficulty is their adversarial formulation. It would make a robot that adheres to them a strictly deontological-ethics device, an approach which is not satisfactory on its own, as mentioned. The two papers just referenced describe some of these inadequacies in depth. While there have been many attempts to extend or convert these laws to a working design (one of them by Asimov himself, in [14]), the consensus appears to be that no adequate extensions exist; this is probably due to the unhappy fact that we do not have currently a sufficient understanding of the possible nature(s) of advanced General Artificial Intelligence (GAI) devices. An interesting initiative to clarify the situation is due to the authors of the “2017 Montreal Declaration for a Responsible Development of AI” [26]. They produced a genuine attempt to corral the ethical issues in the development of automatic systems. The web page shows the principles only; the pdf document it points to has a thoughtful expansion of these principles. How successful is this recent attempt? It remains to be seen.
34.3.3 Automation Ethical Concerns As explained above, ethical concerns arise when engineers design systems whose operation may lead to improper outcomes, regardless of their actual nature. Of particular interest are failure modes which should have been avoided during the system design phase, since there are now powerful design verification tools.
This could be due to any of the personnel bringing the system to operation. The ethical concerns that automation raises can be parsed as follows (some overlap of the categories is inherent): 1. Viewing the system as a software-based operation raises the issues known as those of any information processing operation [8, 17, 39]; we remind the reader that some of those are correctness of operation, privacy and safety of users and operators, data security, intellectual property integrity, maintainability, and fairness. 2. Concerns due to the physical actions associated with the functions of the automation. These include safety of personnel, integrity of the physical plant (including products), acceptable wear and tear, and conservation of input and process materials. 3. Intellectual property issues that are unrelated to the software, in particular, ensuring the validity of agreements needed to use patented or copyrighted material. Also the applications to protect patented material created in the design of the system, including devices and methods of operation. 4. Issues that arise during system design. These include beyond all those above the need to ensure that the system achieves its purposes, at the desired level of quality, subject to budgetary constraints, and satisfies subsidiary requirements, such as accessibility and physical security of the system, and satisfies relevant electrical and building codes. 5. AI & ethics—artificial intelligence gives rise to issues that are unique to it and differ from other software-based systems. It is considered in Sect. 34.3.5. Note on Verification and Validation This topic is of such importance in the development of automated systems that omitting it, or not using its full potential, amounts to acting unethically. The same consideration prompted us to include this note. Discussing it however is beyond the mandate of this chapter. It has generated by now an enormous corpus of sources: books, software tools, journals, conference proceedings, and blogs. For reference we suggest the relevant current standard, “1012-2016—IEEE Standard for System, Software, and Hardware Verification and Validation” [29], and a collection of articles specific to an industry or technique [30].
34.3.4 Automation Failures and Their Ethical Aspects
Several automation systems with unfortunate stories are presented here. Why tell such sorry tales? Rumor has it that getting people, engineers included, to mend their erring ways is a complex business, and often the cues need to be multiple and pitched right: set in the context of a narrative, they send the lesson home. We now show a few; indeed, they are very different from each other!
Royal Majesty Grounding, 1995
The cruise ship Royal Majesty (rm) left St. George’s in Bermuda bound for Boston at 12:00 noon on June 9, 1995, and ran aground just east of the island of Nantucket at 22:25 the next day [21]. None of the over 1000 passengers were injured; repairs and lost income amounted to $7 million [18]. The rm was equipped with an automatic navigation system consisting of two main components, a gps receiver and a Navigation and Command System (nacos), an autopilot that controls the ship’s steering to the desired destination. There are a few subsidiary subsystems, such as a gyrocompass for azimuth, a Doppler log for speed, radar, Loran-C (an alternative positioning system), and more. However, the bridge personnel referred almost exclusively to the nacos display. At the time of this trip, the ship had been in service for 3 years, and the crew had come to trust the ship’s instrumentation. The accident had, as is the rule, several contributing causes; the immediate technical problem was the rupture of the cable from the gps antenna to its receiver, which happened within an hour of departure, for an unknown reason. As a result the receiver changed to dead-reckoning mode, using compass and log. The receiver displayed its status in much smaller letters than the position reading, as dr instead of the usual sol (Satellite on Line); none of the crew noticed it till the grounding. When the ship was equipped, the gps and nacos were provided by different manufacturers and used somewhat different communications standards. The following is from [18]: Due to these differing standards and versions, valid position data and invalid dr data sent from the gps to the nacos were both “labelled” with the same code (gp). The installers of the bridge equipment were not told, nor did they expect, that position data (gp-labelled) sent to the nacos would be anything but valid position data. The designers of the nacos expected that if invalid data were received it would have another format. Due to this misunderstanding the gps used the same “data label” for valid and invalid data, and thus the autopilot could not distinguish between them. Since the nacos could not detect that the gps-provided data was invalid the ship sailed on an autopilot that was using estimated positions until a few minutes before the grounding.
That is all that was needed: an immediate cause, the broken cable, compounded by errors built in when the system was installed, namely incompatible communication protocols and a device that signaled its unusual status unobtrusively. Human nature contributed as well: a crew that included experienced professionals had been lulled into complacency by automation that had worked flawlessly for three years.
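The labeling problem lends itself to a small illustration. The following sketch, in Python, uses an invented, simplified position message (it is not the actual gps/nacos interface, nor real NMEA 0183) to show how a consumer that keys only on the shared "GP" label, and ignores the mode field, ends up steering on dead-reckoning estimates without raising any alarm.

# Hypothetical sketch of the integration flaw: two kinds of data share one
# label ("GP"), so a consumer that keys only on the label cannot tell a
# satellite fix from a dead-reckoning estimate. Message layout is invented.
from dataclasses import dataclass

@dataclass
class Position:
    lat: float
    lon: float
    mode: str  # "SOL" = satellite fix, "DR" = dead reckoning (estimated)

def parse_position(msg: str) -> Position:
    label, lat, lon, mode = msg.split(",")
    assert label == "GP"          # valid and estimated data both carry "GP"
    return Position(float(lat), float(lon), mode)

def autopilot_accepts(pos: Position, require_fix: bool) -> bool:
    # Flawed integration: require_fix=False trusts any "GP" message.
    # Safer design: reject (and alarm on) dead-reckoning positions.
    return True if not require_fix else pos.mode == "SOL"

fix = parse_position("GP,41.28,-69.95,SOL")
dr  = parse_position("GP,41.28,-69.95,DR")
assert autopilot_accepts(dr, require_fix=False)      # silently steers on an estimate
assert not autopilot_accepts(dr, require_fix=True)   # the check that was missing

The design lesson is in the last two lines: the validity check costs almost nothing at design time; its absence is what let the autopilot steer on estimates for over a day.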
The authors of [18], who are, respectively, a researcher in maritime human factors and an academician who is very prolific about safety and automation, argue that, because of human behavior, the answer to ineffective automation is not more automation, so long as humans are kept in the control loop; and while they do not say so explicitly, they do not seem to consider any alternative viable.
Boeing 737 Max Grounding, 2019
Boeing announced the 737 Max, the fourth generation of its very popular 737 series, in August 2011 and had the first 737-8 enter service in May 2017. (The 737 Max was offered in several configurations: 737-7, 737-8, 737-8-200, 737-9, and 737-10. By December 2019 Boeing had 4932 orders and had delivered 387 planes of the series [Boeing press releases].) Nearly half a million flights later, on October 29, 2018, Lion Air Flight 610 crashed, and on March 10, 2019, Ethiopian Airlines Flight 302 crashed. Not one of the 346 passengers and crew on the two flights survived. A rate of four crashes per million flights is very high for the industry, and there were consequences beyond the loss of life. The US FAA grounded all 737 Max planes on March 13, 2019, as did similar agencies in other countries. At the time of writing (November 2020), the plane has just been re-certified, with the first flight expected early next year. The monetary cost to Boeing so far is close to $20 billion. Is there an automation-design story behind this tale, which is extreme in several ways? Indeed, there is. The 737 series was introduced in 1967: a narrow-body plane, with the attraction (for airlines) of requiring only two pilots in the cockpit. As time passed and customer needs changed, Boeing introduced more planes in the series, larger, of higher capacity, with larger engines, yet maintained the “airframe type,” which means that a pilot licensed to fly one of the series can fly all the others. For the airlines, having no need to train pilots on a simulator, or to run re-certification test flights, means significant savings; and since Boeing was in hot competition with Airbus, the European manufacturer, this was an important marketing feature. The 737 Max was the largest of the series, and in particular, its engines were so large that, to preserve ground clearance, they had to be moved forward, beyond the wing, and mounted higher, so that they protruded above the wing (Figure 2 in [19]). This changed the aerodynamics of the plane compared with its predecessors in the series: when power is applied, most planes tend to raise their nose, and this was much more pronounced in the 737 Max. Autopilots in planes have been used for a long time: the Sperry Corporation introduced one as early as 1912 (we note that auto-piloting a plane is much simpler than achieving this in a car, on land, since the “lanes” are assigned by air traffic controllers and are non-intersecting). To have pilots feel the
plane is still a 737, Boeing stealthily introduced a software function, mcas, into the autopilot computers, one that would prevent the plane from increasing its angle-of-attack (AoA) too much, since a high AoA can cause aerodynamic stall at low speeds. In that case the mcas moves the horizontal stabilizer in the tail to push the nose down; this is done with considerable force. The mcas relied on a single AoA sensor mounted on the outside of the cockpit. This is a fragile device, often damaged, and when it misreports, the mcas activates in error and the plane descends abruptly. After the Lion Air crash, Boeing distributed to the airlines instructions on how pilots can recover from this state; they needed to complete the process—cutting out the mcas and the stabilizer trim motors and trimming manually to the correct position—within 4 seconds. The Ethiopian Airlines crew knew of the procedure, but apparently did not complete it in time. Boeing could pull this off because the FAA had relinquished most of its certification oversight to the company’s team of Designated Engineering Representatives (DER). These experienced, veteran engineers were under pressure from management to get the plane certified early. Not only was the mcas barely mentioned in the operation manual, it was not mentioned at all in the documentation provided to the FAA [19]. Multiple investigations by the US Congress, the Transportation Department, the FBI, the FAA, and the NTSB (and comparable agencies in other countries) faulted Boeing on an array of issues. This is entirely unlike the ship grounding: a mix of greed, hubris, disdain of authorities, snubbing of law and professional responsibility, and callousness toward customers and passengers combined into a revolting outcome. Automation was introduced to hide the deviation of the 737 Max from the 737 airframe type. As detailed in [19], it failed because it was poorly and “optimistically” designed.
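As an illustration only, the following hypothetical sketch (in Python, and in no way Boeing's actual logic) shows the kind of redundancy cross-check whose absence was widely criticized: an automatic nose-down command should not be driven by a single, failure-prone sensor. All names, thresholds, and values are invented.

# Hypothetical, simplified cross-check of two angle-of-attack (AoA) sensors
# before any automatic nose-down command. Thresholds are invented.
AOA_STALL_THRESHOLD_DEG = 14.0   # hypothetical angle-of-attack limit
MAX_SENSOR_DISAGREE_DEG = 5.5    # hypothetical cross-check tolerance

def mcas_like_command(aoa_left_deg: float, aoa_right_deg: float) -> str:
    """Return 'NOSE_DOWN', 'NO_ACTION', or 'ALERT_CREW' for one control cycle."""
    disagree = abs(aoa_left_deg - aoa_right_deg)
    if disagree > MAX_SENSOR_DISAGREE_DEG:
        # Redundancy check: with inconsistent sensors, do not act automatically.
        return "ALERT_CREW"
    aoa = (aoa_left_deg + aoa_right_deg) / 2.0
    return "NOSE_DOWN" if aoa > AOA_STALL_THRESHOLD_DEG else "NO_ACTION"

# A single vane reading falsely high no longer forces the nose down:
assert mcas_like_command(25.0, 6.0) == "ALERT_CREW"
assert mcas_like_command(16.0, 15.5) == "NOSE_DOWN"
assert mcas_like_command(5.0, 5.2) == "NO_ACTION"

Boeing's post-grounding fix reportedly included comparing the two AoA sensors and limiting the authority of mcas; the sketch above conveys only the principle, not the certified design.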
Therac-25 Linear Accelerator, 1980s
Radiation therapy is one of the common methods of treating cancer patients (the same treatment is occasionally also used to destroy non-cancerous tissue). A linear accelerator like the Therac-25 is a device that can provide it, producing a high-energy beam directed at the target tissue. The Therac-25 is a dual-mode machine. Its basic operation is to create a narrow beam of electrons, tunable in the energy range 5–25 MeV. The beam can either be sculpted by an array of magnets and directed at the target or be used (when at top energy) to generate a beam of X-rays, which penetrate more deeply into the patient’s body. The Therac-25, manufactured by AECL of Canada in the 1980s, followed a series of earlier accelerators, the Therac-6 and Therac-20 [8, §8.5]. This last version introduced several advances in the accelerator itself; yet for the purpose of our discussion, the main difference was that it was entirely software-controlled. All the interaction between operator and machine was via a keyboard-driven
computer interface. The software inherited some modules that were used in the earlier models but was much enhanced. One of the principal changes in this last stage was that several safety measures that had previously been implemented by hardware interlocks were replaced by software controls. This led to a simpler, smaller machine that was cheaper to manufacture. Between 1985 and 1987, four machines, of the eleven then installed in Canada and the United States, suffered several incidents in which patients were treated with overdoses of radiation. Four patients died, and several others carried radiation damage for the rest of their lives. The report by Nancy Leveson [22] gives a detailed survey of the machine structure, the software architecture, and these incidents, as well as of the resulting interactions of AECL with the FDA and similar Canadian agencies. Careful analyses revealed that previous safety evaluations had ignored the software entirely! They assumed it would operate as expected. When the software was put to thorough analysis, a plethora of defects was revealed; some of them were found to have been inherited from the Therac-20, where they caused no recorded damage, presumably owing to the hardware preventive measures it carried. Some software errors were minor mistakes, never noticed before. A notable one involved a 1-byte flag that was tested before the beam was turned on and had to be zero for the operation to continue and the beam to be activated. That flag was incremented by one whenever the readiness tests failed. The tests were run continually while the machine reacted to commands that required the physical arrangement of components, an activity that took up to 8 seconds; even though the computer in question, a dec pdp-11, was slow, it still ran the test that the machine is set up correctly hundreds of times during such a hardware reconfiguration. The 1-byte flag cycled through 0 every 256 tests; if its status was queried at that instant, a go-ahead would result, regardless of the actual situation. While this flaw is trivial to repair (just set the flag to one when the status is not ready, as AECL did later), it is a symptom of a different and deeper problem: poor comprehension of the software operations. The basic software activity consisted of a cycle of tasks, re-initiated by a clock interrupt every 100 milliseconds. Several of those tasks used shared variables to coordinate activities and modified them as needed. In cases where the tasks did not complete within the renewal interval (0.1 sec), they were interrupted, occasionally left in an inconsistent state, and restarted. It is impossible to predict the erroneous results that may then occur. In the summer of 1985 the FDA had AECL recall the machine and introduce some modifications, but lethal incidents continued. In February 1987 the FDA had AECL recall the machine and stop its use. Several iterations between the FDA and AECL during that year resulted in several changes
to the software and the installation of hardware enhancements. No further accidents ensued. Some of the software errors that were revealed are embarrassing when viewed with a modern understanding of real-time, multi-threaded software. AECL did not use any of the operating systems available at the time, but rolled its own [22]. May we assume such mistakes would not occur in current software? Probably not: unfortunately, the half-life of basic software errors is near infinity. Yet times have changed. Significant effort has been expended on software development environments and methodologies, and later on tools for software verification and validation. In systems as complex as the one just discussed, involving hardware of various types, software components prepared separately, and user operations, verification will always be a scene that mixes art with mathematics; still, much improvement has been achieved. The publicly available information about the Therac-25 debacle shows AECL as a negligent, callous, unresponsive, and quite irresponsible company [22]. No information is publicly available about the internal processes that led to this turn of events. The company is now out of the business of medical devices.
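To make the counter flaw described above concrete, here is a schematic reconstruction in Python; it is not AECL's code, and the names are invented, but the arithmetic of the failure is the same: a one-byte "not ready" flag that is incremented, rather than set, wraps to zero every 256 checks.

# Schematic reconstruction of the one-byte-counter flaw (invented names).
# The setup-check task records "not ready" by incrementing a shared one-byte
# flag; the beam task treats flag == 0 as "ready". Every 256th increment the
# flag wraps to 0, so a query at that instant authorizes the beam anyway.
class SharedState:
    def __init__(self):
        self.not_ready_flag = 0  # one byte: 0 means "setup verified"

def record_setup_not_ready(state: SharedState) -> None:
    # The flawed update: increment modulo 256 instead of setting a fixed value.
    state.not_ready_flag = (state.not_ready_flag + 1) % 256

def beam_authorized(state: SharedState) -> bool:
    return state.not_ready_flag == 0

state = SharedState()
false_go_aheads = 0
for check in range(1, 1000):          # setup is never actually complete here
    record_setup_not_ready(state)
    if beam_authorized(state):        # the beam task happens to poll now
        false_go_aheads += 1
assert false_go_aheads == 3           # every 256th poll wrongly authorizes the beam
# The trivial repair mentioned in the text: assign 1 instead of incrementing.

The repair mentioned in the text, assigning the value one instead of incrementing, removes the wrap-around; the deeper lesson, that shared state updated by racing tasks defies such spot fixes, is what the later hardware enhancements addressed.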
The Machine Stops [20]
Here we encounter a different kettle of fish: an imagined automatic system. It is only implied, though quite vividly portrayed, and it is fictional, brought to us as a short story by the master story-teller Edward M. Forster, who published it in 1910, likely influenced by interactions with his friend H.G. Wells and by conversations about the tale the latter had written, “The Time Machine” (1895), which described the insufferable Morlocks living underground. In Forster’s tale the entire world population lives in warrens underground. All their needs are provided by a system called “the Machine.” When the story begins, we are led to understand this has been the state of affairs for a long time; durations are vague, but the people take the Machine for granted, and many, including the story’s protagonist, have come to regard it as a deity. The surface of the earth is abandoned, considered deadly. Communications technology is presented much as we know it in the twenty-first century, with Internet-like capabilities, all mediated by the Machine. Although Forster witnessed none of that, he kept company with people who could already fantasize so. A very effective transportation system covers the earth, but it is viewed as a relic and only lightly used; whatever a person needs is brought to her individual room. Forster adds a large number of hints at the ways of this society, which suggest that much societal engineering prepared the population as the Machine was developed; our focus all along is the Machine. A “Committee of the Machine” exists, also called the “Central Committee;” the two
may be the same; it is mentioned as a body that enforces, and rarely changes, policies. Forster mentions in passing some measures, presumably put in place by the Central Committee, that we would find intolerable: living space is assigned rather than selected; some form of eugenics is maintained; freedom of movement is limited. Values could change too, he suggests, and he describes how originality came to be deprecated. Maintenance of the Machine is mentioned as a set of rote tasks that were performed, as time went on, with less and less understanding. It is said that no one really knew much about its operation, and that nobody understood it as a whole. The Machine was built with a “mending apparatus,” which also protected it from attacks. The place of the Machine in the world view of the population was such that the phrase “the Machine stops” was meaningless to most, when used by one of its denizens. Yet as the Machine started failing, with interruptions during the streaming of music or bad food delivered, the people learned to live with it and expected the Machine to heal itself. The deterioration is relatively quick, and the entire system crumbles as various operations fail. The story was an impressive tour de force by the author at the time, yet we can read it as an account of the demise of a society that trusted its engineers but did not understand the nature of engineered systems: the need for maintenance, not just of the machinery but also of the connection and interaction between the machine and the population, so that a dynamic develops in which the routine operation of the Machine depends on human intervention. It is a riveting exercise for the reader to imagine what improbable duress forced that society underground and made it choose such a Machine as its solution. Possibly global warming getting out of hand? And why were the interfaces needed to keep the system alive so poorly designed? (Naturally, because Forster was not an expert in systems engineering, and none was available for him to consult.) Would our current society have done better if such extreme pressure were forced on us? With technology much evolved beyond what was available, or even imaginable, in 1910, would our solution be similarly superior to that Machine, and are our engineering skills up to it? Designing for long-term survival is unlike any challenge humans have faced!
34.3.5 Artificial Intelligence and Its Ethical Aspects
As the internet and increased computing power have facilitated the accumulation and analysis of vast data, unprecedented vistas for human understanding have emerged … . [We may have] generated a potentially dominating technology in search of a guiding philosophy. —Henry A. Kissinger: How the Enlightenment Ends, 2018, [27].
Artificial intelligence (AI) was declared in 1956 to be the “next big field” in computer science and kept exactly this designation for better than half a century. The field was not idle: multiple techniques were developed, such as genetic algorithms, Bayesian networks, and simulated annealing; various vogues had their day, such as enormous world ontologies or expert systems, but successes were narrow and few and far between. The situation started changing in the first decade of the twenty-first century, with the growth and then explosion of machine learning. The growth was facilitated by the significant, continuing decline in the costs of computer power and data storage, on one hand, and by the enormous growth of data available on the Internet, from blogs to social networks to the databases of many government agencies and corporations that conduct their business in cyberspace, on the other. AI continued to employ a variety of techniques, but soon machine learning (ML), based on (artificial) neural networks, became the much-preferred paradigm. When Kissinger, an experienced historian and statesman, chanced to be exposed to these facts, and after dialogues with technically aware colleagues, he was moved to write [27], where the instructive quotation above is found. What the (neural) network learns may be seen as the ability to classify items—which can be, for example, strings of characters or images—and to decide whether a given item belongs to a specified class; with much ingenuity this capability has been leveraged into any number of applications, from driving a car to recognizing faces in a crowd. While most of the ethical attention raised by AI is similar to that which any software modality entails, AI, especially its recent developments, machine learning and deep learning, merits a special note, as more and more areas of our life and affairs, and of the automatic tools we build, are impacted by it. In this it is not different from other theory-rich areas of computer science, such as database management or compilation; but unlike those, little in the theory of AI is currently settled. The recent survey [25] provides a detailed summary and a wealth of references. In Sect. 34.3.2 we presented the three “laws” of robotics invented by Isaac Asimov and referred briefly to their inadequacy to prevent a robot from making wrong moves. The 2017 Montreal Declaration for a Responsible Development of Artificial Intelligence [26] can be seen as a careful, disciplined “call to arms” for this difficult task. It replaces those three laws by ten principles, which are further developed into several directives each.
Opacity of Machine Learning AI
AI systems developed by training a neural network have a special property: even as such systems are shown to be very proficient in a remarkable range of fields and applications, where they provide a wealth of good or useful decisions, they cannot provide an explanation of how, and
why, any of their decisions was reached, and the system owners (its users), as well as its developers, cannot do so either: such systems function as black boxes. The concept of a subsystem treated as a black box is commonplace, dating to the middle of the twentieth century; the traditional black box refers to a mere transfer function, which can be used with no need to know its operational details. It comes, however, with the understanding that if the need for details arises, the box can be opened and inspected and any information about its internals made available. That is not the case with the neural-network machine: it is an opaque classifier of clumps of data (a classification which may be put to further use in a following part of an algorithm), but it is not responsive to any attempt to probe it for enlightenment about its modus operandi, its successes as well as its failures. If wished, its internal structure and the weights associated with each node and other elements can be listed, but this sheds no light and provides no meaningful explanation. This was not the case with previous generations of AI, which can be described as rule-based; their operations can be explained at any required depth; but alas, they were not as capable! This property is alarming for three reasons beyond the chill it casts (nobody likes to be thwarted in this way):
(a) Brittleness: Even an excellent ML classifier is reliable only on a part of its possible input space, a part which is effectively impossible to determine; there is no guarantee the system responds correctly in any particular case, and in fact such systems are notorious for being occasionally derailed by trivial features in their input [31]. Researchers have found many ways of tricking such systems by changing the input subtly, in a manner that is unobtrusive to humans, yet entirely confuses the system’s responses. The systems are considered open to hackers, who can “game” and twiddle them at will. Since there is no understanding, there is no defense.
(b) Bias: The selection of data used to train the system naturally affects the way different characteristics of the input get translated to outcomes. A voice recognition system can be more likely to misunderstand speech in a regional accent it was not trained with, and thereby fail to provide the desired service or help. A loan-application scanner program in a bank is likely to mishandle an application from a person with an unusual life trajectory. Just as the AI cannot be asked to verify its response, the opacity makes it impossible to critique or effectively appeal the system’s determinations. This applies especially to the extremely large “deep learning” kind of AI system, which is discussed further below. Different system architectures, training regimes, and approximation criteria are being attempted. Some observers of the scene are confident the opacity can be dissolved in some manner,
maybe by added complexity; others see this AI approach as doomed, due to its being “Brittle, Greedy, Opaque, and Shallow,” with its progress likely to hit a wall before long. Even our vocabulary for cognitive processes does not fit such AI systems well. One result is the difficulty of pursuing any ethical analysis of such systems: they are opaque! Is there hope that rule-based AI can be combined with the neural-network-based recognizer, making its responses explicable? The difficulty seems similar to that of comprehending the deep-learning AI directly. It is an open question, and one pursued with great vigor.
(c) Errors in input: Only recently have researchers turned to examine the quality of the labeling in several of the most popular labeled datasets used to train and test such networks [33]; the results were surprising. They found that labeling errors averaged 3.4% of the dataset items. They also found that the impact on performance over test sets was more nuanced and depended on the model used and the size of the training set.
A word after a word after a word, is power. —Margaret Atwood
Recently a particular type of large Deep Learning model was introduced by several companies, generally called a Large Language Model. The most famous, probably since it came first, in 2020, is GPT-3, the third in a sequence of Generative Pre-trained Transformers created by OpenAI in San Francisco. Its native purpose is to generate text when given a prompt, producing one token (usually a word) at a time, recursively (a schematic sketch of this loop is given at the end of this section). The model contains 175 billion learned parameters; for its training input the creators collected hundreds of billions of tokens of text (by skimming the web and adding digitized books). A measure of its complexity is the size of its context, the “status descriptor” that determines the generated output: a string up to 2048 tokens long. The generated text is usually of a high, human-created quality. While clearly ‘there is no there there’ in such a text beyond the initial prompt (which can be as short as a word), people have reported that they enjoyed reading the output, finding it of substance, interest, etc. Note: Because this type of automaton is currently under feverish development, the main reference recommended is the Wikipedia article “GPT-3.” It is likely to be kept current, and it lists many of the original sources. This model has already found numerous uses, leading to serious questions, such as whether the claimed authorship of submitted written work (essays, for example) can be trusted in any imaginable context. A recent article describes getting GPT-3 to write a scientific paper about itself [41], with minimal tweaks by humans. GPT-3 ends the paper (which as of this writing, September 2022, is still in review) with “Overall, we believe that the
benefits of letting GPT-3 write about itself outweigh the risks. However, we recommend that any such writing be closely monitored by researchers in order to mitigate any potential negative consequences.” One of the developments based on GPT-3 was DALL-E, now followed by DALL-E 2. The tokens these programs generate are pixels rather than words: they create images instead of text when prompted by verbal cues. They can draw and paint in any style that was digitally available when their input was collected, and do it very effectively; they can also produce photo-realistic images. The ethical implications of unleashing such systems are of a different order than anything we have witnessed or described before, for two main reasons, both of which pertain to issues of automation:
(a) These tools automate activities hitherto considered the domain of high-order human creativity, as suggested by the quotation from Margaret Atwood above. Their release was received with howls of protest by artists and others who are affronted by the perceived loss of human exceptionality. These denunciations were quite similar to the reactions two centuries earlier, when the invention of photography alarmed painters and art lovers. Interestingly, that invention drove painters to develop, and art lovers to appreciate, a large number of new styles, genres, materials, and techniques, as ways to distinguish their creations from the “mechanical” ones.
(b) The purpose of the designer of any automation tool has been to create a system with certain properties. The so-called “world models” described here were designed to have specific capabilities: generate scads of tokens, emulating the ways they are seen to be used by humans. The properties of the tools, or rather of their output, were not designed for: they are emergent, not yet known, and the task of the engineer is then to discover them, rather than to verify the existence of a target behavior. The properties then need to be evaluated; the questions to be asked concern the potential of such output, possibly produced in large quantity with such properties, to benefit or harm society. Such analyses are analogous to the work of biologists who investigate the nature of novel creatures caught in the wild, or sometimes grown in the lab by a process akin to genetic engineering.
From the point of view of public platforms (social media, open blogs), such systems would be seen as content creators, on a par with humans. Users of the media can even have automata of this type post their output without prior inspection. A question of an ethical, and possibly legal, nature that needs to be settled is whether such postings should be required to be marked as coming from an other-than-human source.
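Returning to GPT-3: the token-by-token generation loop described earlier can be sketched schematically as follows, in Python. The model class and its next_token method are hypothetical stand-ins for a large language model; only the recursive loop and the fixed-size context window correspond to the description in the text.

# Schematic sketch of autoregressive generation: one token at a time,
# conditioned on a fixed-size context window. The "model" is a toy stand-in.
import random

CONTEXT_WINDOW = 2048  # tokens the model can condition on at once

class ToyLanguageModel:
    def next_token(self, context: list[str]) -> str:
        # A real model returns a probability distribution over its vocabulary,
        # conditioned on the context; here we merely fake one token.
        return random.choice(["the", "machine", "stops", "ethics", "."])

def generate(model: ToyLanguageModel, prompt: list[str], n_tokens: int) -> list[str]:
    tokens = list(prompt)
    for _ in range(n_tokens):
        context = tokens[-CONTEXT_WINDOW:]   # only the most recent window is visible
        tokens.append(model.next_token(context))
    return tokens

print(" ".join(generate(ToyLanguageModel(), ["automation"], 20)))

Everything that makes the output interesting, or worrying, lives inside next_token, the opaque, learned part; the surrounding loop is trivial, which is precisely why the ethical questions concentrate on the model and its training data.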
34.4 Protocols for Ethical Analysis
You’re not going to find an exceptionless rule …Sometimes there isn’t an answer in the moral domain, and sometimes we have to agree to disagree, and come together and arrive at a good solution about what we will live with. —Patricia Churchland, [32].
The knowledge of ethical principles, coupled with an understanding of ethical theories and their relation to our society, does not yet mean that whenever we need to make a decision that has an ethical dimension, the right choice is immediately available. Applying this knowledge to the specific case at hand requires what have been called ethical protocols, or, more fully, protocols for ethical analysis, as this section is named. There is a very large literature about these protocols; see, for example, [34, 35]. As Maner says in [34], numerous protocols have been prepared, reflecting the different domains which were the areas of concern of the protocol designers. We show now a skeletal protocol and defer to Appendix B a much more detailed one, adapted from [36]. Each is seen as a sequence of questions or tasks (a small sketch of how such a protocol might be kept as a working checklist follows the list):
1. What is the question? What needs to be decided?
2. Who are the stakeholders?
3. What are the conflicting issues?
4. Which values are involved?
5. How are stakeholders, issues, and values related?
6. What (effective) actions exist? Which of them are available to us?
7. Are the following ethical tests satisfied by the actions we consider:
(a) Does it conflict with accepted principles? (Ten Commandments, criminal law …)
(b) Is it in agreement with the Golden Rule?
(c) Does it agree with the Rule of Universality (what if everyone acted this way?)
(d) Does it agree with the Rule of Consistency (what if we always acted this way?)
(e) Does it agree with the Rules of Disclosure (what if our action is known to everybody?)
(f) Does it satisfy the Rule of Best Outcome?
Question 7 invokes nearly all the decision aids we saw in this chapter.
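A skeletal protocol like the one above is easier to apply, and to audit afterwards, if it is kept as a structured artifact rather than an oral tradition. The following minimal Python sketch is one illustrative way to do that; the data layout and the example are our assumptions, not taken from [34–36].

# Minimal, illustrative encoding of the skeletal protocol as a reviewable record.
from dataclasses import dataclass, field

@dataclass
class EthicalReview:
    question: str                                                # step 1
    stakeholders: list[str] = field(default_factory=list)        # step 2
    issues: list[str] = field(default_factory=list)              # step 3
    values: list[str] = field(default_factory=list)              # step 4
    candidate_actions: list[str] = field(default_factory=list)   # step 6
    # step 7: each action mapped to the ethical tests it fails, if any
    failed_tests: dict[str, list[str]] = field(default_factory=dict)

    def acceptable_actions(self) -> list[str]:
        return [a for a in self.candidate_actions if not self.failed_tests.get(a)]

review = EthicalReview(
    question="Ship the release before the safety interlock is re-tested?",
    stakeholders=["users", "operators", "employer"],
    issues=["schedule pressure vs. user safety"],
    values=["safety", "honesty"],
    candidate_actions=["ship now", "delay and re-test"],
    failed_tests={"ship now": ["Rule of Disclosure", "Rule of Best Outcome"]},
)
assert review.acceptable_actions() == ["delay and re-test"]

In practice such a record would also capture who was consulted and when, so that the review itself can satisfy the Rules of Disclosure and Consistency it applies.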
34.5 Codes of Ethics
Nearly all professional societies create and advertise a code of ethics adapted to the profession in question (other terms
often used are code of behavior and code of conduct). There are several roles, or functions, of such a code, internal and external to the professional society:
• Inspire the members to professionalism
• Educate about the profession (both the members and society)
• Guide professional actions
• Inform about accountability (both the members and society)
• Provide enforcement information (for the licensed professions)
The code needs to be read by professionals as they find their place in the profession. Appendix A exhibits the code of ethics for software engineers. It was prepared by a joint committee of the ACM and the Computer Society of the IEEE, the two major organizations for computing professionals in the United States. A survey and an interesting evaluation of the impact of the code of ethics of the ACM, which is similar to the Software Engineering code in Appendix A, are at [37]. Most engineering societies adapted for their needs the Code of Conduct of the National Academy of Engineering, available at [38]. The code of conduct of ifac, the International Federation of Automatic Control, also provided in Appendix A, is starkly different, since ifac has no individuals as members, only national member organizations; other control-oriented organizations can also be affiliates, and those organizations are the immediate audience of the code.
34.6 Online Resources for Ethics of Automation
If it is not on the ’net, it does not exist. —Street scuttlebutt.
The street need not be taken too literally, yet the claim expresses much that is true. In addition to the “specialized” sites in the list below, each of the professional societies that touches on automation has parts of its website dealing with professional ethics: acm, ieee, asee, ifac, and the umbrella societies aaas and aaes. Online is where we shall find what is new, and in the field of automation, “new” is a key word. The list below is a rich one; while all its entries are active at the time of writing, many (most?) websites are short-lived species, and you may need to look for more. Use the links in those sources to delve farther and deeper, together with your favorite web search engine. Finally, the Stanford Encyclopedia of Philosophy, SEP, is impressive in its selection and
quality of ethics-related entries. The Internet Encyclopedia of Philosophy, IEP, has a more practical mien and is highly recommended as well.

http://www.bsa.org - Global software industry advocate
http://catless.ncl.ac.uk/risks - The Risks Digest; an ACM-moderated forum
http://www.cerias.purdue.edu - Center for Information Security at Purdue Univ.
http://www.dhs.gov/dhspublic - Bulletin board of the DHS
http://ethics.iit.edu - Center for research in professional ethics
http://privacyrights.org - A privacy rights clearing house
https://plato.stanford.edu/ - Stanford Encyclopedia of Philosophy
https://iep.utm.edu/ - Internet Encyclopedia of Philosophy
https://esc.umich.edu/ - Center for ethics, society, and computing at the Univ. of Michigan
https://cssh.northeastern.edu/informationethics/ - Multiple centers for computing ethics research at Northeastern Univ.
https://www.nationalacademies.org/our-work/responsible-computing-research-ethics-and-governance-of-computing-research-and-its-applications - Responsible Computing research at the National Academies
https://ocean.sagepub.com/blog/10-organizations-leading-the-way-in-ethical-ai - A roster of organizations for ethical AI research

34.7 Sources for Automation and Ethics
Comments About Sources for This Chapter
The bibliography lists, as usual, the specific publications which have been used in preparing this chapter, in order of citation. We wish to draw attention to three sources. One is available online, The Stanford Encyclopedia of Philosophy; some of its articles recur in the bibliography. The articles are by recognized authorities and are revised for currency every several (5 to 10) years. Its editors take a remarkably expansive view of their domain, possibly of eighteenth-century vintage, and include many articles relevant to the questions we consider, for example, [28]. Two other sources are print books [39, 40], anthologies of wide-ranging articles, many of which deal with topics that are important to the interactions of ethics with automation. They are not new, appearing in 1995 and 2011, respectively, yet remain very much of interest.
Appendix A: Code of Ethics Examples

Software Engineering Code of Ethics and Professional Practice
This code is maintained on the site of the ACM at SE CODE (https://ethics.acm.org/code-of-ethics/software-engineering-code/).
Short Version
PREAMBLE
The short version of the code summarizes aspirations at a high level of abstraction. The clauses that are included in the full version give examples and details of how these aspirations change the way we act as software engineering professionals. Without the aspirations, the details can become legalistic and tedious; without the details, the aspirations can become high sounding but empty; together, the aspirations and the details form a cohesive code.
Software engineers shall commit themselves to making the analysis, specification, design, development, testing, and maintenance of software a beneficial and respected profession. In accordance with their commitment to the health, safety, and welfare of the public, software engineers shall adhere to the following Eight Principles:
1. Public
Software engineers shall act consistently with the public interest.
2. Client and employer
Software engineers shall act in a manner that is in the best interests of their client and employer, consistent with the public interest.
3. Product
Software engineers shall ensure that their products and related modifications meet the highest professional standards possible.
4. Judgment
Software engineers shall maintain integrity and independence in their professional judgment.
5. Management
Software engineering managers and leaders shall subscribe to and promote an ethical approach to the management of software development and maintenance.
6. Profession
Software engineers shall advance the integrity and reputation of the profession consistent with the public interest.
7. Colleagues
Software engineers shall be fair to and supportive of their colleagues.
8. Self
Software engineers shall participate in lifelong learning regarding the practice of their profession and shall promote an ethical approach to the practice of the profession. ——————————————————— Full Version PREAMBLE Computers have a central and growing role in commerce, industry, government, medicine, education, entertainment, and society at large. Software engineers are those who contribute by direct participation or by teaching, to the analysis, specification, design, development, certification, maintenance, and testing of software systems. Because of their roles in developing software systems, software engineers have significant opportunities to do good or cause harm, to enable others to do good or cause harm, or to influence others to do good or cause harm. To ensure, as much as possible, that their efforts will be used for good, software engineers must commit themselves to making software engineering a beneficial and respected profession. In accordance with that commitment, software engineers shall adhere to the following Code of Ethics and Professional Practice. The Code contains eight Principles related to the behavior of and decisions made by professional software engineers, including practitioners, educators, managers, supervisors, and policy-makers, as well as trainees and students of the profession. The Principles identify the ethically responsible relationships in which individuals, groups, and organizations participate and the primary obligations within these relationships. The Clauses of each Principle are illustrations of some of the obligations included in these relationships. These obligations are founded in the software engineer’s humanity, in special care owed to people affected by the work of software engineers and in the unique elements of the practice of software engineering. The Code prescribes these as obligations of anyone claiming to be or aspiring to be a software engineer. It is not intended that the individual parts of the Code be used in isolation to justify errors of omission or commission. The list of Principles and Clauses is not exhaustive. The Clauses should not be read as separating the acceptable from the unacceptable in professional conduct in all practical situations. The Code is not a simple ethical algorithm that generates ethical decisions. In some situations, standards may be in tension with each other or with standards from other sources. These situations require the software engineer to use ethical judgment to act in a manner which is most consistent with the spirit of the Code of Ethics and Professional Practice, given the circumstances. Ethical tensions can best be addressed by thoughtful consideration of fundamental principles, rather than blind reliance on detailed regulations. These Principles should influ-
ence software engineers to consider broadly who is affected by their work; to examine if they and their colleagues are treating other human beings with due respect; to consider how the public, if reasonably well informed, would view their decisions; to analyze how the least empowered will be affected by their decisions; and to consider whether their acts would be judged worthy of the ideal professional working as a software engineer. In all these judgments, concern for the health, safety, and welfare of the public is primary; that is, the “Public Interest” is central to this Code. The dynamic and demanding context of software engineering requires a code that is adaptable and relevant to new situations as they occur. However, even in this generality, the Code provides support for software engineers and managers of software engineers who need to take positive action in a specific case by documenting the ethical stance of the profession. The Code provides an ethical foundation to which individuals within teams and the team as a whole can appeal. The Code helps to define those actions that are ethically improper to request of a software engineer or teams of software engineers. The Code is not simply for adjudicating the nature of questionable acts; it also has an important educational function. As this Code expresses the consensus of the profession on ethical issues, it is a means to educate both the public and aspiring professionals about the ethical obligations of all software engineers. Principles Principle 1: Public Software engineers shall act consistently with the public interest. In particular, software engineers shall, as appropriate: 1.01. Accept full responsibility for their own work. 1.02. Moderate the interests of the software engineer, the employer, the client, and the users with the public good. 1.03. Approve software only if they have a well-founded belief that it is safe, meets specifications, passes appropriate tests, and does not diminish quality of life, diminish privacy, or harm the environment. The ultimate effect of the work should be to the public good. 1.04. Disclose to appropriate persons or authorities any actual or potential danger to the user, the public, or the environment, that they reasonably believe to be associated with software or related documents. 1.05. Cooperate in efforts to address matters of grave public concern caused by software, its installation, maintenance, support, or documentation. 1.06. Be fair and avoid deception in all statements, particularly public ones, concerning software or related documents, methods, and tools.
1.07. Consider issues of physical disabilities, allocation of resources, economic disadvantage, and other factors that can diminish access to the benefits of software. 1.08. Be encouraged to volunteer professional skills to good causes and to contribute to public education concerning the discipline. Principle 2: Client and employer Software engineers shall act in a manner that is in the best interests of their client and employer, consistent with the public interest. In particular, software engineers shall, as appropriate: 2.01. Provide service in their areas of competence, being honest and forthright about any limitations of their experience and education. 2.02. Not knowingly use software that is obtained or retained either illegally or unethically. 2.03. Use the property of a client or employer only in ways properly authorized and with the client’s or employer’s knowledge and consent. 2.04. Ensure that any document upon which they rely has been approved, when required, by someone authorized to approve it. 2.05. Keep private any confidential information gained in their professional work, where such confidentiality is consistent with the public interest and consistent with the law. 2.06. Identify, document, collect evidence, and report to the client or the employer promptly if, in their opinion, a project is likely to fail, to prove too expensive, to violate intellectual property law, or otherwise to be problematic. 2.07. Identify, document, and report significant issues of social concern, of which they are aware, in software or related documents, to the employer or the client. 2.08. Accept no outside work detrimental to the work they perform for their primary employer. 2.09. Promote no interest adverse to their employer or client, unless a higher ethical concern is being compromised; in that case, inform the employer or another appropriate authority of the ethical concern. Principle 3: Product Software engineers shall ensure that their products and related modifications meet the highest professional standards possible. In particular, software engineers shall, as appropriate: 3.01. Strive for high quality, acceptable cost, and a reasonable schedule, ensuring significant tradeoffs are clear to and accepted by the employer and the client and are available for consideration by the user and the public.
3.02. Ensure proper and achievable goals and objectives for any project on which they work or propose. 3.03. Identify, define, and address ethical, economic, cultural, legal, and environmental issues related to work projects. 3.04. Ensure that they are qualified for any project on which they work or propose to work, by an appropriate combination of education, training, and experience. 3.05. Ensure that an appropriate method is used for any project on which they work or propose to work. 3.06. Work to follow professional standards, when available, that are most appropriate for the task at hand, departing from these only when ethically or technically justified. 3.07. Strive to fully understand the specifications for software on which they work. 3.08. Ensure that specifications for software on which they work have been well documented, satisfy the users’ requirements, and have the appropriate approvals. 3.09. Ensure realistic quantitative estimates of cost, scheduling, personnel, quality, and outcomes on any project on which they work or propose to work and provide an uncertainty assessment of these estimates. 3.10. Ensure adequate testing, debugging, and review of software and related documents on which they work. 3.11. Ensure adequate documentation, including significant problems discovered and solutions adopted, for any project on which they work. 3.12. Work to develop software and related documents that respect the privacy of those who will be affected by that software. 3.13. Be careful to use only accurate data derived by ethical and lawful means and use it only in ways properly authorized. 3.14. Maintain the integrity of data, being sensitive to outdated or flawed occurrences. 3.15. Treat all forms of software maintenance with the same professionalism as new development. Principle 4: Judgment Software engineers shall maintain integrity and independence in their professional judgment. In particular, software engineers shall, as appropriate: 4.01. Temper all technical judgments by the need to support and maintain human values. 4.02. Only endorse documents either prepared under their supervision or within their areas of competence and with which they are in agreement. 4.03. Maintain professional objectivity with respect to any software or related documents they are asked to evaluate.
4.04. Not engage in deceptive financial practices such as bribery, double billing, or other improper financial practices. 4.05. Disclose to all concerned parties those conflicts of interest that cannot reasonably be avoided or escaped. 4.06. Refuse to participate, as members or advisors, in a private, governmental, or professional body concerned with software-related issues, in which they, their employers, or their clients have undisclosed potential conflicts of interest. Principle 5: Management Software engineering managers and leaders shall subscribe to and promote an ethical approach to the management of software development and maintenance. In particular, those managing or leading software engineers shall, as appropriate: 5.01. Ensure good management for any project on which they work, including effective procedures for promotion of quality and reduction of risk. 5.02. Ensure that software engineers are informed of standards before being held to them. 5.03. Ensure that software engineers know the employer’s policies and procedures for protecting passwords, files, and information that is confidential to the employer or confidential to others. 5.04. Assign work only after taking into account appropriate contributions of education and experience tempered with a desire to further that education and experience. 5.05. Ensure realistic quantitative estimates of cost, scheduling, personnel, quality, and outcomes on any project on which they work or propose to work, and provide an uncertainty assessment of these estimates. 5.06. Attract potential software engineers only by full and accurate description of the conditions of employment. 5.07. Offer fair and just remuneration. 5.08. Not unjustly prevent someone from taking a position for which that person is suitably qualified. 5.09. Ensure that there is a fair agreement concerning ownership of any software, processes, research, writing, or other intellectual property to which a software engineer has contributed. 5.10. Provide for due process in hearing charges of violation of an employer’s policy or of this Code. 5.11. Not ask a software engineer to do anything inconsistent with this Code. 5.12. Not punish anyone for expressing ethical concerns about a project. Principle 6: Profession
Software engineers shall advance the integrity and reputation of the profession consistent with the public interest. In particular, software engineers shall, as appropriate: 6.01. Help develop an organizational environment favorable to acting ethically. 6.02. Promote public knowledge of software engineering. 6.03. Extend software engineering knowledge by appropriate participation in professional organizations, meetings, and publications. 6.04. Support, as members of a profession, other software engineers striving to follow this Code. 6.05. Not promote their own interest at the expense of the profession, client, or employer. 6.06. Obey all laws governing their work, unless, in exceptional circumstances; such compliance is inconsistent with the public interest. 6.07. Be accurate in stating the characteristics of software on which they work, avoiding not only false claims but also claims that might reasonably be supposed to be speculative, vacuous, deceptive, misleading, or doubtful. 6.08. Take responsibility for detecting, correcting, and reporting errors in software and associated documents on which they work. 6.09. Ensure that clients, employers, and supervisors know of the software engineer’s commitment to this Code of ethics and the subsequent ramifications of such commitment. 6.10. Avoid associations with businesses and organizations which are in conflict with this Code. 6.11. Recognize that violations of this Code are inconsistent with being a professional software engineer. 6.12. Express concerns to the people involved when significant violations of this Code are detected unless this is impossible, counter-productive, or dangerous. 6.13. Report significant violations of this Code to appropriate authorities when it is clear that consultation with people involved in these significant violations is impossible, counter-productive, or dangerous. Principle 7: Colleagues Software engineers shall be fair to and supportive of their colleagues. In particular, software engineers shall, as appropriate: 7.01. Encourage colleagues to adhere to this Code. 7.02. Assist colleagues in professional development. 7.03. Credit fully the work of others and refrain from taking undue credit. 7.04. Review the work of others in an objective, candid, and properly documented way.
7.05. Give a fair hearing to the opinions, concerns, or complaints of a colleague. 7.06. Assist colleagues in being fully aware of current standard work practices including policies and procedures for protecting passwords, files and other confidential information, and security measures in general. 7.07. Not unfairly intervene in the career of any colleague; however, concern for the employer, the client, or public interest may compel software engineers, in good faith, to question the competence of a colleague. 7.08. In situations outside of their own areas of competence, call upon the opinions of other professionals who have competence in that area. Principle 8: Self Software engineers shall participate in lifelong learning regarding the practice of their profession and shall promote an ethical approach to the practice of the profession. In particular, software engineers shall continually endeavor to: 8.01. Further their knowledge of developments in the analysis, specification, design, development, maintenance, and testing of software and related documents, together with the management of the development process. 8.02. Improve their ability to create safe, reliable, and useful quality software at reasonable cost and within a reasonable time. 8.03. Improve their ability to produce accurate, informative, and well-written documentation. 8.04. Improve their understanding of the software and related documents on which they work and of the environment in which they will be used. 8.05. Improve their knowledge of relevant standards and the law governing the software and related documents on which they work. 8.06. Improve their knowledge of this Code, its interpretation, and its application to their work. 8.07. Not give unfair treatment to anyone because of any irrelevant prejudices. 8.08. Not influence others to undertake any action that involves a breach of this Code. 8.09. Recognize that personal violations of this Code are inconsistent with being a professional software engineer. This Code was developed by the IEEE-CS/ACM joint task force on Software Engineering Ethics and Professional Practices (SEEPP): Executive Committee: Donald Gotterbarn (Chair), Keith Miller, and Simon Rogerson Members: Steve Barber, Peter Barnes, Ilene Burnstein, Michael Davis, Amr El-Kadi, N. Ben Fairweather, Milton Fulghum, N. Jayaram, Tom Jewett, Mark Kanko, Ernie Kallman, Duncan Langford, Joyce Currie Little, Ed Mechler,
Manuel J. Norman, Douglas Phillips, Peter Ron Prinzivalli, Patrick Sullivan, John Weckert, Vivian Weil, S. Weisband, and Laurie Honour Werth ©1999 by the Institute of Electrical and Electronics Engineers, Inc. and the Association for Computing Machinery, Inc. This Code may be published without permission as long as it is not changed in any way and it carries the copyright notice.
International Federation of Automatic Control—Code of Conduct
IFAC recognizes its role as a worldwide federation for promoting automatic control for the benefit of humankind. In agreement with and in implementation of the approved IFAC—Mission and Vision—this document summarizes the commitment and obligation of IFAC to maintain ethical and professional standards in its academic and industrial activities. All activities within IFAC as well as volunteers acting on behalf of or for IFAC are to act in accordance with this Code of Conduct.
1. Honesty and Integrity
Activities conducted by IFAC shall always be fair, honest, transparent, and in accordance with the IFAC—Mission and Vision. That is, their main goal is to contribute to the promotion of the science and technology of control in the broadest sense. IFAC disapproves any actions which are in conflict with existing laws, are motivated by criminal intentions, or include scientifically dishonest practices such as plagiarism, infringement, or falsification of results. IFAC will not only not retaliate against any person who reports violations of this principle but will rather encourage such reporting.
2. Excellence and Relevance
IFAC recognizes its responsibility to promote the science and technology of automatic control through technical meetings, publications, and other means consistent with the goals and values defined in the IFAC—Mission and Vision. Further, IFAC has the responsibility to be a trusted source of publication material on automatic control renowned for its technical excellence. IFAC acknowledges its professional obligation toward employees, volunteers, cooperating or member organizations and companies, and further partners.
3. Sustainability
A major challenge in future automatic control is the development of modern techniques which reduce the ecological damage caused by technology to a minimum. IFAC acknowledges this fact and contributes to a solution by promoting the importance of automatic control and its impact on the society and by advancing the knowledge in automatic control and
its applications. IFAC disapproves of any actions which are in conflict with the above philosophy, in particular those which have a negative impact on the environment.
4. Diversity and Inclusivity
IFAC is a diverse, global organization with the goal to create a fruitful environment for people from different cultures dealing with automatic control in theory and practical applications. People shall be treated fairly and respectfully, and their human rights shall be protected. IFAC is committed to the highest principles of equality, diversity, and inclusion without boundaries. IFAC disapproves of any harassment, bullying, or discrimination.
5. Compliance with Laws
The purpose of any action conducted by IFAC is to further the goals defined in IFAC's constitution and the consequences thereof. Activities on behalf of IFAC cannot be in conflict with ethical principles or laws existing in countries where IFAC operates. This includes but is not limited to any form of bribery, corruption, or fraud. IFAC disapproves of unethical or illegal business practices which restrain competition, such as price fixing or other kinds of market manipulation. Conflicts of interest are to be prevented if possible and revealed immediately whenever they occur. IFAC assures the protection of confidential information belonging to its member organizations and further partners.
Appendix B: Steps of the Ethical Decision-Making Process
The following has been closely adapted from [36]. While the process is presented here as a sequence of actions, in practice the decision-maker may have to return to an earlier stage and fill in details whose need only became apparent later:
1. Gather all relevant facts.
• Don't jump to conclusions without the facts.
• Questions to ask: who, what, where, when, how, and why. However, facts may be difficult to find because of the uncertainty often surrounding ethical issues.
• Some facts are not available.
• Assemble as many facts as possible before proceeding.
• Clarify what assumptions you are making!
2. Define the ethical issue(s).
• Don't jump to solutions without first identifying the ethical issue(s) in the situation.
• Define the ethical basis for the issue you want to focus on.
• There may be multiple ethical issues—focus on one major one at a time.
3. Identify the affected parties.
• Identify all stakeholders, and then determine:
• Who are the primary or direct stakeholders?
• Who are the secondary or indirect stakeholders?
• Why are they stakeholders for the issue?
• Perspective-taking—try to see the situation through the eyes of those affected; interview them if possible.
4. Identify the consequences of possible actions.
• Think about the potential positive and negative consequences of the decision for the affected parties. Focus on primary stakeholders initially.
• Estimate the magnitude of the consequences and the probability that the consequences will happen.
• Short-term vs. long-term consequences—will the decision be valid over time?
• Broader systemic consequences—tied to symbolic and secrecy consequences as follows:
• Symbolic consequences—each decision sends a message.
• Secrecy consequences—what are the consequences if the decision or action becomes public?
• Did you consider relevant cognitive barriers/biases?
• Consider what your decision would be based only on consequences—then move on and see if it is similar given other considerations.
5. Identify the relevant principles, rights, and justice issues.
• Obligations should be thought of in terms of the principles and rights involved:
(A) What obligations are created because of particular ethical principles you might use in the situation? Examples: do no harm; do unto others as you would have them do unto you; do what you would have anyone in your position do in the given context.
(B) What obligations are created because of the specific rights of the stakeholders? What rights are more basic vs. secondary in nature? Which help protect an individual's basic autonomy? What types of rights are involved—negative or positive?
(C) What concepts of justice (fairness) are relevant—distributive or procedural justice?
• Did you consider any relevant cognitive barriers/biases?
• Formulate the appropriate decision or action based solely on the above analysis of these obligations.
6. Consider your character and integrity.
• Consider what your relevant community members would consider to be the kind of decision that an individual of integrity would make in this situation.
• What specific virtues are relevant in the situation?
• Disclosure rule—what would you do if various media channels reported your action and everyone were to read about it?
• Think about how your decision will be remembered when you are gone.
• Did you consider any relevant cognitive biases/barriers?
• What decision would you come to based solely on character considerations?
7. Think creatively about potential actions.
• Be sure you have not been unnecessarily forced into a corner.
• You may have some choices or alternatives that have not been considered.
• If you have come up with solutions "a" and "b," try to brainstorm and come up with a "c" solution that might satisfy the interests of the primary parties involved in the situation.
8. Check your gut.
• Even though the prior steps have argued for a highly rational process, it is always good to check your moral sense, as observed by Kant.
• Intuition is gaining credibility as a source for good decision-making; feeling that something is not "right" is a useful trigger. This is particularly relevant if you have a lot of experience in the area and are considered an expert decision-maker.
9. Decide on your course of action, and prepare responses to those who may oppose your position.
• Consider potential actions based on the consequences, obligations, and character approaches.
• Do you come up with similar answers from the different perspectives?
• Do your obligations and character help you evaluate the consequentialist preferred action?
• How can you protect the rights of those involved (or your own character) while still maximizing the overall good for all of the stakeholders?
• What arguments are most compelling to you to justify the action ethically? How will you respond to those with opposing viewpoints?
References
1. Sayre-McCord, G.: Metaethics, The Stanford Encyclopedia of Philosophy (Summer 2014 Edition). Edward N. Zalta (ed.). SEP
2. Tavani, H.: Ethics and Technology, 3rd edn. Wiley, Hoboken, NJ (2011)
3. Cudd, A., Eftekhari, S.: Contractarianism, The Stanford Encyclopedia of Philosophy (Winter 2021 Edition). Edward N. Zalta (ed.). SEP
4. Gowans, C.: Moral Relativism, The Stanford Encyclopedia of Philosophy (Summer 2019 Edition). Edward N. Zalta (ed.). SEP
5. D'Agostino, F., Gaus, G., Thrasher, J.: Contemporary Approaches to the Social Contract, The Stanford Encyclopedia of Philosophy (Fall 2019 Edition). Edward N. Zalta (ed.). SEP
6. Sandel, M.J.: Justice: What's the Right Thing to Do? Farrar, Straus and Giroux, New York (2009)
7. de Mori, B.: What moral theory for human rights? Naturalization vs. denaturalization. Etica e Politica 2(1) (2000)
8. Quinn, M.J.: Ethics for the Information Age, 7th edn. Pearson Education Inc. (2017)
9. Sinnott-Armstrong, W.: Consequentialism, The Stanford Encyclopedia of Philosophy (Summer 2019 Edition). Edward N. Zalta (ed.). SEP
10. Gilboa, I.: Theory of Decision Under Uncertainty. Cambridge University Press, Cambridge (2009)
11. Benton, R.J.: Political expediency and lying: Kant vs Benjamin Constant. J. Hist. Ideas 43(1), 135–144 (1982)
12. Ross, D.: The Right and the Good. Oxford University Press (1930)
13. Asimov, I.: I, Robot. Doubleday, New York (1950)
14. Asimov, I.: The Rest of the Robots. Doubleday, New York (1964)
15. Clarke, R.: Asimov's laws of robotics: Implications for information technology. IEEE Computer 26(12), 53–61 (1993), and 27(1), 57–66 (1994). Reprinted as Chapter 15 in [40]
16. Anderson, S.L.: The Unacceptability of Asimov's Three Laws of Robotics as a Basis for Machine Ethics, Chapter 16 in [40]
17. Spinello, R.A.: Cyberethics: Morality and Law in Cyberspace. Jones & Bartlett Learning, Burlington, Massachusetts (2021)
18. Lützhöft, M.H., Dekker, S.W.A.: On your watch: Automation on the bridge. J. Navigat. 55, 83–96 (2002)
19. Travis, G.: How the Boeing 737 Max disaster looks to a software developer. IEEE Spectrum (18 April 2019)
20. Forster, E.M.: The Machine Stops (1910). Reprinted as the final chapter in [39]
21. NTSB: Grounding of the Panamanian Passenger Ship Royal Majesty on Rose and Crown Shoal near Nantucket, Massachusetts, June 10, 1995 (NTSB/MAR-97/01). National Transportation Safety Board, Washington DC (1997)
22. Leveson, N.G.: Safeware: System Safety and Computers. Appendix A: Medical Devices: The Therac-25. Addison-Wesley (1995). This report is separately available at Leveson
23. Baase, S.: A Gift of Fire: Social, Legal, and Ethical Issues for Computing Technology, 4th edn. Pearson, Boston (2013)
24. International Federation of Robotics (IFR): World Robotics 2020 Edition
25. Müller, V.C.: Ethics of Artificial Intelligence and Robotics, The Stanford Encyclopedia of Philosophy (Winter 2020 Edition). Edward N. Zalta (ed.). SEP
26. Montreal Declaration for a Responsible Development of AI (2017). https://www.montrealdeclaration-responsibleai.com/thedeclaration
27. Kissinger, H.: How the Enlightenment Ends. The Atlantic, June 2018. https://www.theatlantic.com/magazine/archive/2018/06/henrykissinger-ai-could-mean-the-end-of-human-history/559124/
28. Franssen, M., Lokhorst, G.-J., van de Poel, I.: Philosophy of Technology, The Stanford Encyclopedia of Philosophy (Fall 2018 Edition). Edward N. Zalta (ed.). SEP
29. IEEE Standard for System, Software, and Hardware Verification and Validation. In: IEEE Std 1012-2016 (Revision of IEEE Std 1012-2012/Incorporates IEEE Std 1012-2016/Cor1-2017), pp. 1–260, 29 Sept. 2017. https://doi.org/10.1109/IEEESTD.2017.8055462
30. Leitner, A., Watzenig, D., Ibanez-Guzman, J. (eds.): Validation and Verification of Automated Systems, Results of the ENABLE-S3 Project. Springer (2020)
31. Heaven, D.: Why deep-learning AIs are so easy to fool. Nature 574, 163–166 (2019). https://doi.org/10.1038/d41586-019-03013-5
32. Churchland, P.: The biology of ethics. The Chronicle of Higher Education (June 12, 2011). Available at Rule Breaker
33. Northcutt, C.G., Athalye, A., Mueller, J.: Pervasive Label Errors in Test Sets Destabilize Machine Learning Benchmarks. arXiv:2103.14749v2 [stat.ML]. Summarised by Kyle Wiggers: MIT study finds 'systematic' labeling errors in popular AI benchmark datasets. VentureBeat
34. Maner, W.: Heuristic methods for computer ethics. Metaphilosophy 33(3), 339–365 (2002). Available at Maner
35. Liffick, B.W.: Analyzing ethical scenarios. Ethicomp. J. 1 (2004)
36. May, D.R.: Steps of the Ethical Decision-Making Process. Accessed at research.ku.edu, November 24, 2020
37. Peslak, A.R.: A review of the impact of ACM code of conduct on information technology moral judgment and intent. J. Comput. Inf. Syst. 47(3), 1–10 (2007)
38. NAE Code of Conduct. https://www.nae.edu/225756/Code_of_Conduct
39. Johnson, D.G., Nissenbaum, H. (eds.): Computers, Ethics & Social Values. Prentice Hall, Upper Saddle River, NJ (1995)
40. Anderson, M., Anderson, S.L. (eds.): Machine Ethics. Cambridge University Press, New York, NY (2011)
41. Thunström, A.O.: We Asked GPT-3 to Write an Academic Paper about Itself – Then We Tried to Get It Published. Scientific American 327(3), September (2022)
Micha Hofri earned his first two degrees in the Department of Physics at the Technion (IIT) in Haifa, Israel. For his PhD he climbed to a higher hill on the same campus, to the Department of Management and Industrial Engineering. His dissertation, for the D.Sc. degree in Industrial Engineering and Operations Research, concerned performance evaluation of computer systems. He has been on the faculty at the Technion, Purdue, the University of Houston, and Rice, and came to WPI as the CS department chairman. He has taught several different courses on operating systems, analysis of algorithms, and probability and combinatorics in computing, and more recently on the societal impact of computing and communications. Professor Emeritus Hofri continues to offer this recent course. His latest book, Algorithmics of Nonuniformity: Tools and Paradigms, authored with Hosam Mahmoud, was published in 2019 by CRC Press.
Part VI Industrial Automation
Machine Tool Automation
35
Keiichi Shirase and Susumu Fujii
Contents
35.1 Introduction
35.2 The Advent of the NC Machine Tool
35.2.1 From Hand Tool to Powered Machine
35.2.2 Copy Milling Machine
35.2.3 NC Machine Tools
35.3 Development of Machining Center and Turning Center
35.3.1 Machining Center
35.3.2 Turning Center
35.3.3 Fully Automated Machining: FMS and FMC
35.4 NC Part Programming
35.4.1 Manual Part Programming
35.4.2 Computer-Assisted Part Programming: APT and EXAPT
35.4.3 CAM-Assisted Part Programming
35.5 Technical Innovation in NC Machine Tools
35.5.1 Functional and Structural Innovation by Multitasking and Multiaxis
35.5.2 Innovation in Control Systems Toward Intelligent CNC Machine Tools
35.5.3 Current Technologies of Advanced CNC Machine Tools
35.5.4 Autonomous and Intelligent Machine Tool
35.5.5 Advanced Intelligent Technology for Machine Tools
35.6 Metal Additive Manufacturing Machines or Metal 3D Printers
35.6.1 Rapid Growth of Additive Manufacturing
35.6.2 Laser Additive Manufacturing Machine Using Powder Bed Fusion
35.6.3 Five-Axis Milling Machining Center Combining Directed Energy Deposition
35.7 Key Technologies for Future Intelligent Machine Tool
References

Abstract
Numerical control (NC) is one of the greatest innovations in the achievement of machine tool automation in manufacturing. In this chapter, first a history of the development up to the advent of NC machine tools is briefly reviewed (Sect. 35.2). Then the machining centers and the turning centers are described with their key modules and integration into flexible manufacturing systems (FMS) and flexible manufacturing cells (FMC) in Sect. 35.3. NC part programming is described from manual programming to the computer-aided manufacturing (CAM) system in Sect. 35.4. In Sects. 35.5, 35.6, and 35.7, following the technical innovations in the advanced hardware and software systems of NC machine tools, future control systems for intelligent CNC machine tools are presented.

Keywords
Machine tool · Computer numerical control · Flexible manufacturing system · Computer numerical control machine tool · Numerical control program

K. Shirase, Department of Mechanical Engineering, Kobe University, Kobe, Japan, e-mail: [email protected]
S. Fujii, Kobe University, Kobe, Japan

© Springer Nature Switzerland AG 2023
S. Y. Nof (ed.), Springer Handbook of Automation, Springer Handbooks, https://doi.org/10.1007/978-3-030-96729-1_35

35.1 Introduction
Numerical control (NC) is one of the greatest innovations in the achievement of machine tool automation in manufacturing. Machine tools have expanded their performance and ability since the era of the Industrial Revolution; however, all machine tools were operated manually until the birth of the NC machine tool in 1952. Numerical control enabled control of the motion and sequence of machining operations with high accuracy and repeatability. In the 1960s, computers added even greater flexibility and reliability of machining operations. These machine tools, which had computer numerical control, were called CNC machine tools. A machining center, which is a highly automated NC milling machine performing
multiple milling operations, was developed to realize process integration as well as machining automation in 1958. A turning center, which is a highly automated NC lathe performing multiple turning operations, was also developed. These machine tools contributed to realize the flexible manufacturing system (FMS), which had been proposed during the mid-1960s. FMS aims to perform automatic machining operations unaided by human operators to machine various parts. The automatically programmed tool (APT) is the most important computer-assisted part programming language and was first used to generate part programs in production around 1960. The extended subset of APT (EXAPT) was developed to add functions such as setting of cutting conditions, selection of cutting tool, and operation planning besides the functions of APT. Another pioneering NC programming language, COMPACT II, was developed by Manufacturing Data Systems Inc. (MDSI) in 1967. Technologies developed beyond APT and EXAPT were succeeded by computer-aided manufacturing (CAM). CAM provides interactive part programming with a visual and graphical environment and saves significant programming time and effort for part programming. For example, COMPACT II has evolved into open CNC software, which enables integration of off-the-shelf hardware and software technologies. NC languages have also been integrated with computer-aided design (CAD) and CAM systems.
Fig. 35.1 Evolution of machine tools toward the intelligent machine for the future

In the past five decades, NC machine tools have become more sophisticated to achieve higher accuracy and faster machining operation with greater flexibility. Certainly, the conventional NC control system can perform sophisticated motion control but not cutting process control. This means that further intelligence of the NC control system is still required to achieve more sophisticated process control. In the near future, all machine tools will have advanced functions for process planning, tool-path generation, cutting process monitoring, cutting process prediction, self-monitoring, failure prediction, etc. Information technology (IT) will be the key issue to realize these advanced functions. The paradigm is evolving toward the concept of autonomy to yield next-generation NC machine tools for sophisticated manufacturing systems. Machine tools have expanded their performance and abilities as shown in Fig. 35.1. The first innovation took place during the era of the Industrial Revolution. Most conventional machine tools, such as lathes and milling machines, have been developed since the Industrial Revolution. High-speed machining, high-precision machining, and high productivity have been achieved by these modern machine tools to realize mass production. The second innovation was numerical control (NC). A prototype machine was demonstrated at MIT in 1952. The accuracy and repeatability of NC machine tools became far
better than those of manually operated machine tools. NC is a key concept to realize programmable automation. The principle of NC is to control the motion and sequence of machining operations. Computer numerical control (CNC) was introduced, and computer technology replaced the hardware control circuit boards on NC, greatly increasing the reliability and functionality of NC. The most important functionality to be realized was adaptive control (AC). In order to improve the productivity of the machining process and the quality of machined surfaces, several AC systems for real-time adjustment of cutting parameters have been proposed and developed [1]. As mentioned above, machine tools have evolved through advances in hardware and control technologies. However, the machining operations are fully dominated by the predetermined NC commands, and conventional machine tools are not generally allowed to change the machining sequence or the cutting conditions during machining operations. This means that conventional NC machine tools are allowed to perform only automatic machining operations that are preinstructed by NC programs. In order to realize an intelligent machine tool for the future, some innovative technical breakthroughs are required. An intelligent machine tool should be good at learning, understanding, and thinking in a logical way about the cutting process and machining operation, and no NC commands will be required to instruct machining operations as an intelligent machine tool thinks about machining operations and adapts the cutting processes itself. This means that an intelligent machine tool can perform autonomous operations that are instructed by in-process planning made by the tool itself. Information technology (IT) will be the key issue to realize this third innovation.
35.2
The Advent of the NC Machine Tool
35.2.1 From Hand Tool to Powered Machine It is well known that John Wilkinson’s boring machine (Fig. 35.2) was able to machine a high-accuracy cylinder to build Watt’s steam engine. The performance of steam engines was improved drastically by the high-accuracy cylinder. With the spread of steam engines, machine tools changed from hand tools to powered machines, and metal cutting became widespread to achieve modern industrialization. During the era of the Industrial Revolution, most conventional machine tools, such as lathes and milling machines, were developed. Maudsley’s screw-cutting lathe with mechanized tool carriage (Fig. 35.3) was a great invention which was able to machine high-accuracy screw threads. The screw-cutting lathe was developed to machine screw threads accurately; however
Fig. 35.2 Wilkinson’s boring machine (1775)
Fig. 35.3 Maudsley’s screw-cutting lathe with mechanized tool carriage (1800)
the mechanical tool carriage equipped with a screw allowed precise repetition of machined shapes. Precise repetition of machined shape is an important requirement to produce many of the component parts for mass production. Therefore, Maudsley’s screw-cutting lathe became a prototype of lathes. Whitney’s milling machine (Fig. 35.4) is believed to be the first successful milling machine used for cutting plane of metal parts. However, it appears that Whitney’s milling machine was made after Whitney’s death. Whitney’s milling machine was designed to manufacture interchangeable musket parts. Interchangeable parts require high-precision machine tools to make exact shapes. Fitch’s turret lathe (Fig. 35.5) was the first automatic turret lathe. Turret lathes were used to produce complex-shaped cylindrical parts that required several operating sequences and tools. Also, turret lathes can perform automatic machining with a single setup and can achieve high productivity. High productivity is an important requirement to produce many of the component parts for mass production. As mentioned above, the most important modern machine tools required to realize mass production were developed during the era of the Industrial Revolution. Also high-speed machining, high-precision machining, and high productivity had been achieved by these modern machine tools.
Fig. 35.4 Whitney's milling machine (1818)
Fig. 35.5 Fitch's turret lathe (1845)
Fig. 35.6 A copy milling machine

35.2.2 Copy Milling Machine
A copy milling machine, also called a tracer milling machine or a profiling milling machine, can duplicate freeform geometry represented by a master model for making molds, dies, and other shaped cavities and forms. A probe tracing the model contour is controlled to follow a three-dimensional master model, and the cutting tool follows the path taken by the tracer to machine the desired shape. Usually, a tracing probe is fed by a human operator, and the motion of the tracer is converted to the motion of the tool by hydraulic or electronic mechanisms. The motion in Fig. 35.6 shows an example of copy milling. In this case, three spindle heads or three cutting tools follow the path taken by the tracer simultaneously. In some copy milling machines the ratio between the motion of the tracer and the cutting tool can be changed to machine shapes that are similar to the master model. Copy milling machines were widely used to machine molds and dies which were difficult to generate with simple tool paths, until CAD/CAM systems became widespread to generate NC programs freely for machining three-dimensional freeform shapes.

35.2.3 NC Machine Tools
The first prototype NC machine tool, shown in Fig. 35.7, was demonstrated at the MIT in 1952. The name numerical control was given to the machine tool, as it was controlled numerically. It is well known that numerical control was required to develop more efficient manufacturing methods for modern aircraft, as aircraft components became more complex and required more machining. The accuracy, repeatability, and productivity of NC machine tools became far better than those of machine tools operated manually. The concept of numerical control is very important and innovative for programmable automation, in which the motions of machine tools are controlled or instructed by a program containing coded alphanumeric data. According to the concept of numerical control, machining operation becomes programmable, and machining shape is changeable. The concept of the flexible manufacturing system (FMS) mentioned later required the prior development of numerical control. A program to control NC machine tools is called a part program, and the importance of a part program was recognized from the beginning of NC machine tools. In particular, the definition for machining shapes of more complex parts is
Fig. 35.7 The first NC machine tool, which was demonstrated at MIT in 1952 Fig. 35.8 Machining center, equipped with an ATC. (Courtesy of Makino Milling Machine Co. Ltd.)
difficult by manual operation. Therefore, a part programming language, APT, was developed at MIT to realize computer-assisted part programming. Recently, CNC has become widespread, and in most cases the term NC is used synonymously with CNC. Originally, CNC corresponded to an NC system operated by an internal computer, which realized storage of part programs, editing of part programs, manual data input (MDI), and so on. The latest CNC tools allow generation of a part program interactively by a machine operator and help avoid machine crashes caused by mistakes in the part program. High-speed and high-accuracy control of machine tools to realize highly automated machining operation requires the latest central processing unit (CPU) to perform high-speed data processing for several functions.
35.3
Development of Machining Center and Turning Center
Fig. 35.9 Horizontal machining center. (Courtesy of Yamazaki Mazak Corp.)
35.3.1 Machining Center A machining center is a highly automated NC milling machine that performs multiple machining operations such as end milling, drilling, and tapping. It was developed to realize process integration as well as machining automation, in 1958. Figure 35.8 shows an early machining center equipped with an automatic tool changer (ATC). Most machining centers are equipped with an ATC and an automatic pallet changer (APC) to perform multiple cutting operations in a single machine setup and to reduce nonproductive time in the whole machining cycle. Machining centers are classified into horizontal and vertical types according to the orientation of the spindle axis. Figures 35.9 and 35.10 show typical horizontal and vertical
machining centers, respectively. Most horizontal machining centers have a rotary table to index the machined part at some specific angle relative to the cutting tool. A horizontal machining center which has a rotary table can machine the four vertical faces of boxed workpieces in single setup with minimal human assistance. Therefore, a horizontal machining center is widely used in an automated shop floor with a loading and unloading system for workpieces to realize machining automation. On the other hand, a vertical machining center is widely used in a die and mold machine shop. In a vertical machining center, the cutting tool can machine only the top surface of boxed workpieces, but it is easy for human operators to understand tool motion relative to the machined part.
Fig. 35.10 Vertical machining center. (Courtesy of Yamazaki Mazak Corp.)
Automatic Tool Changer (ATC) ATC stands for automatic tool changer, which permits loading and unloading of cutting tools from one machining operation to the next. The ATC is designed to exchange cutting tools between the spindle and a tool magazine, which can store more than 20 tools. The large capacity of the tool magazine allows a variety of workpieces to be machined. Additionally, higher tool-change speed and reliability are required to achieve a fast machining cycle. Figure 35.11 shows an example of a twin-arm-type ATC driven by a cam mechanism to ensure reliable high-speed tool change.
Fig. 35.11 ATC: automatic tool changer. (Courtesy of Yamazaki Mazak Corp.)
Automatic Pallet Changer (APC) APC stands for automatic pallet changer, which permits loading and unloading of workpieces for machining automation. Most horizontal machining centers have two pallet tables to exchange the parts before and after machining automatically. Figure 35.12 shows an example of an APC. The operator can be unloading the finished part and loading the next part on one pallet, while the machining center is processing the current part on another pallet. Fig. 35.12 APC: automatic pallet changer. (Courtesy of Yamazaki Mazak Corp.)
35.3.2 Turning Center A turning center is a highly automated NC lathe to perform multiple turning operations. Figure 35.13 shows a typical turning center. Changing of cutting tools is performed by a turret tool changer which can hold about ten turning and milling tools. Therefore, a turning center enables not only turning operations but also milling operations such as end milling, drilling, and tapping in a single machine setup. Some turning centers have two spindles and two or more turret tool changers to complete all machining operations of cylindrical parts in a single machine setup. In this case, the first half of
the machining operations of the workpiece is carried out on one spindle, and then the second half of the machining operations is carried out on another spindle, without unloading and loading of the workpiece. This reduces production time.
Turret Tool Changer Fig. 35.14 shows a tool turret with 12 cutting tools. A suitable cutting tool for the target machining operation is indexed automatically under numerical control for continuous machining operations. The most sophisticated turning centers
Fig. 35.13 Turning center or CNC lathe. (Courtesy of Yamazaki Mazak Corp.)
Fig. 35.15 Flexible manufacturing system. (Courtesy of Yamazaki Mazak Corp.)
Fig. 35.14 Tool turret in turning center. (Courtesy of Yamazaki Mazak Corp.)
have tool monitoring systems which check tool length and diameter for automatic tool alignment and sense tool wear for automatic tool changing.
35.3.3 Fully Automated Machining: FMS and FMC Flexible Manufacturing System (FMS) The concept of the flexible manufacturing system (FMS) was proposed during the mid-1960s. It aims to perform automatic machining operations unaided by human operators to machine various parts. Machining centers are key components of the FMS for flexible machining operations. Figure 35.15 shows a typical FMS, which consists of five machining
centers, one conveyor, one load/unload station, and a central computer that controls and manages the components of the FMS. No manufacturing system can be completely flexible. FMSs are typically used for mid-volume and mid-variety production. An FMS is designed to machine parts within a range of style, sizes, and processes, and its degree of flexibility is limited. Additionally, the machining shape is changeable through the part programs that control the NC machine tools, and the part programs required for every shape to be machined have to be prepared before the machining operation. Therefore a new shape that needs a part program is not acceptable in conventional FMSs, which is why a third innovation of machine tools is required to achieve autonomous machining operations instead of automatic machining operations to achieve true FMS. An FMS consists of several NC machine tools such as machining centers and turning centers, material-handling or loading/unloading systems such as industrial robots and pallet changer, conveyer systems such as conveyors and automated guided vehicles (AGV), and storage systems. Additionally, an FMS has a central computer to coordinate all of the activities of the FMS, and all hardware components of the FMS generally have their own microcomputer for control. The central computer downloads NC part programs and controls the material-handling system, conveyer system, storage system, management of materials and cutting tools, etc. Human operators play important roles in FMSs, performing the following tasks: 1. Loading/unloading parts at loading/unloading stations 2. Changing and setting of cutting tools
3. NC part programming 4. Maintenance of hardware components 5. Operation of the computer system
These tasks are indispensable to manage the FMS successfully.
Flexible Manufacturing Cell (FMC) Basically, FMSs are large systems to realize manufacturing automation for mid-volume and mid-variety production. In some cases, small systems are applicable to realize manufacturing automation. The term flexible manufacturing cell (FMC) is used to represent small systems or compact cells of FMSs. Usually, the number of machine tools included in a FMC is three or fewer. One can consider that an FMS is a large manufacturing system composed of several FMCs.
35.4
NC Part Programming
The task of programming to operate machine tools automatically is called NC part programming because the program is prepared for a part to be machined. NC part programming requires the programmer to be familiar with both the cutting processes and programming procedures. The NC part program includes the detailed commands to control the positions and motion of the machine tool. In numerical control, the three linear axes (x, y, z) of the Cartesian coordinate system are used to specify cutting tool positions, and three rotational axes (a, b, c) are used to specify the cutting tool postures. In turning operations, the position of the cutting tool is defined in the x–z plane for cylindrical parts, as shown in Fig. 35.16a. In milling operations, the position of the cutting tool is defined by the x-, y-, and z-axes for cuboid parts, as shown in Fig. 35.16b. Numerical control realizes programmable automation of machining. The mechanical actions or motions of the cutting tool relative to the workpiece and the control sequence of the machine tool equipments are coded by alphanumerical data in a program. NC part programming requires a programmer
who is familiar with the metal cutting process to define the points, lines, and surfaces of the workpiece and to generate the alphanumerical data. The most important NC part programming techniques are summarized as follows:
Fig. 35.16 Coordinate systems in numerical control. (a) Cylindrical part for turning; (b) cuboid part for milling
1. Manual part programming
2. Computer-assisted part programming – APT and EXAPT
3. CAM-assisted part programming
35.4.1 Manual Part Programming This is the simplest way to generate a part program. Basic numeric data and alphanumeric codes are entered manually into the NC controller. The simplest command example is shown as follows:
N0010 M03 S1000 F100 EOB
N0020 G00 X20.000 Y50.000 EOB
N0030 Z20.000 EOB
N0040 G01 Z-20.000 EOB
Each code in the statement has a meaning to define a machining operation. The “N” code shows the sequence number of the statement. The “M” code and the following two-digit number define miscellaneous functions; “M03” means to spindle on with clockwise rotation. The “S” code defines the spindle speed; “S1000” means that the spindle speed is 1000 rpm. The “F” code defines the feed speed; “F100” means that the feed is 100 mm/min. “EOB” stands for “end of block” and shows the end of the statement. The “G” code and the following two-digit number define preparatory functions; “G00” means rapid positioning by point-to-point control. The “X” and “Y” codes indicate the x- and y-coordinates. The cutting tool moves rapidly to the position x = 20 mm and y = 50 mm with the second statement. Then, the cutting tool moves rapidly again to the position z = 20 mm with the third statement. “G01” means linear positioning at controlled feed speed. Then the cutting tool moves with the feed speed, defined by “F100” in this example, to position z = −20 mm. The positioning control can be classified into two types, (1) point-to-point control and (2) continuous path control. “G00” is a positioning command for point-to-point control. This command only identifies the next position required at which a subsequent machining operation such as drilling is performed. The path to get to the position is not considered in point-to-point control. On the other hand, the path to get to the position is controlled simultaneously in more than one axis to follow a line or circle in continuous path control. “G01” is a positioning command for linear interpolation. “G02” and “G03” are positioning commands for circular
interpolation. These commands permit the generation of two-dimensional curves or three-dimensional surfaces by turning or milling.
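To make the behavior of these interpolation commands concrete, the following Python sketch generates the discrete positions an interpolator might issue for a G01 linear move and a G02 clockwise circular move at a programmed feed speed. It is illustrative only; the function names and the 10-ms control interval are assumptions, not part of any real controller.

import math

def linear_interp(start, end, feed_mm_min, dt_s=0.01):
    """Sample positions along a straight G01 move at a constant feed speed."""
    (x0, y0), (x1, y1) = start, end
    length = math.hypot(x1 - x0, y1 - y0)
    step = feed_mm_min / 60.0 * dt_s                 # distance covered per control interval
    n = max(1, int(length / step))
    return [(x0 + (x1 - x0) * i / n, y0 + (y1 - y0) * i / n) for i in range(n + 1)]

def circular_interp_cw(start, end, center, feed_mm_min, dt_s=0.01):
    """Sample positions along a clockwise G02 arc defined by its center point."""
    (x0, y0), (x1, y1), (cx, cy) = start, end, center
    r = math.hypot(x0 - cx, y0 - cy)
    a0 = math.atan2(y0 - cy, x0 - cx)
    a1 = math.atan2(y1 - cy, x1 - cx)
    if a1 >= a0:                                     # force clockwise (decreasing angle) traversal
        a1 -= 2.0 * math.pi
    arc_len = r * abs(a1 - a0)
    step = feed_mm_min / 60.0 * dt_s
    n = max(1, int(arc_len / step))
    return [(cx + r * math.cos(a0 + (a1 - a0) * i / n),
             cy + r * math.sin(a0 + (a1 - a0) * i / n)) for i in range(n + 1)]

# Example: a G01 move from (20, 50) to (60, 50) at F100 (100 mm/min)
path = linear_interp((20.0, 50.0), (60.0, 50.0), feed_mm_min=100.0)

The point of the sketch is the contrast with point-to-point control: every intermediate position, not only the end point, is commanded, so the path itself is under control.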
35.4.2 Computer-Assisted Part Programming: APT and EXAPT Automatically programmed tools is the most important computer-assisted part programming language and was first used to generate part programs in production around 1960. EXAPT contains additional functions such as setting of cutting conditions, selection of cutting tool, and operation planning besides the functions of APT. APT provides two steps to generate part programs: (1) definition of part geometry and (2) specification of tool motion and operation sequence. An example program list is shown in Fig. 35.17. The following APT statements define the contour of the part geometry based on basic geometric elements such as points, lines, and circles:
LN1 = LINE/20, 20, 20, 70
LN2 = LINE/(POINT/20, 70), ATANGL, 75, LN1
LN3 = LINE/(POINT/40, 20), ATANGL, 45
LN4 = LINE/20, 20, 40, 20
CIR = CIRCLE/YSMALL, LN2, YLARGE, LN3, RADIUS, 10
where LN1 is the line that goes through points (20, 20) and (20, 70); LN2 is the line that goes from point (20, 70) at 75° to LN1; LN3 is the line that goes from point (40, 20) at 45° to the horizontal line; LN4 is the line that goes through points (20, 20) and (40, 20); and CIR is the circle tangent to lines LN2 and LN3 with radius 10. Most part shapes can be described using these APT statements. On the other hand, tool motions are specified by the following APT statements:
TLLFT, GOLFT/LN1, PAST, LN2
GORGT/LN2, TANTO, CIR
GOFWD/CIR, TANTO, LN3
where “TLLFT, GOLFT/LN1” indicates that the tool positions left (TLLFT) of the line LN1, goes left (GOLFT), and moves along the line LN1. “PAST, LN2” indicates that the tool moves until past (PAST) the line LN2. “GORGT/LN2” indicates that the tool goes right (GORGT) and moves along the line LN2. “TANTO, CIR” indicates that the tool moves until tangent to (TANTO) the circle CIR. GOFWD/CIR indicates that the tool goes forward (GOFWD) and moves along the circle CIR. “TANTO, LN3” indicates that the tool moves until tangent to the line LN3. Additional APT statements are prepared to define feed speed, spindle speed, tool size, and tolerances of tool paths. The APT program completed by the part programmer is
Fig. 35.17 Example program list in APT
translated by the computer to the cutter location (CL) data, which consists of all the geometry and cutter location information required to machine the part. This process is called main processing or preprocessing to generate NC commands. The CL data is converted to the part program, which is understood by the NC machine tool controller. This process is called postprocessing to add NC commands to specify feed speed, spindle speed, and auxiliary functions for the machining operation.
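The division of labor between main processing and postprocessing can be illustrated with a deliberately simplified Python sketch. The CL record format (plain x, y, z triples) and the G-code template below are assumptions chosen for illustration; real CL files and postprocessors are machine- and vendor-specific.

def postprocess(cl_points, feed_mm_min, spindle_rpm):
    """Convert a list of cutter-location points (x, y, z) into simple G-code blocks."""
    blocks = ["N0010 M03 S{:d} F{:d}".format(spindle_rpm, feed_mm_min)]  # spindle on, clockwise
    n = 20
    first = True
    for x, y, z in cl_points:
        g = "G00" if first else "G01"                # rapid to the first point, then feed moves
        blocks.append("N{:04d} {} X{:.3f} Y{:.3f} Z{:.3f}".format(n, g, x, y, z))
        n += 10
        first = False
    blocks.append("N{:04d} M05".format(n))           # spindle off
    return "\n".join(blocks)

# Example: three CL points as they might come out of the main (pre)processing step
print(postprocess([(20.0, 50.0, 20.0), (20.0, 50.0, -20.0), (60.0, 50.0, -20.0)],
                  feed_mm_min=100, spindle_rpm=1000))

Even in this toy form, the sketch shows why postprocessing is machine-dependent: the block numbering, preparatory codes, and auxiliary functions are properties of the target controller, not of the part geometry.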
35.4.3 CAM-Assisted Part Programming CAM systems grew based on technologies relating to APT and EXAPT. Originally, CAM stood for computer-aided manufacturing and was used as a general term for computer software to assist all operations while realizing manufacturing. However, CAM is now used to indicate computer software to assist part programming in a narrow sense. The biggest difference between part programming assisted by APT and CAM is usability. Part programming assisted by APT is based on batch processing. Therefore, many programming errors are not detected until the end of computer processing. On the other hand, part programming assisted by CAM is interactive-mode processing with a visual and graphical environment. It therefore becomes easy to complete a part program after repeated trial and error using visual verification. Additionally, close cooperation between CAD and CAM offers a significant benefit in terms of part programming. The geometrical data for each part designed by CAD are available for automatic tool-path generation, such as surface profiling, contouring, and pocket milling, in CAM through software routines. This saves significant programming time and effort for part programming. Recently, some simulation technologies have become available to verify part programs free from machining trouble. Optimization of feed speed and detection of machine crash are two major functions for part program verification. These functions also save significant production lead time.
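As a rough illustration of the kind of tool-path routine a CAM system automates, the Python sketch below generates a zig-zag (raster) path for a rectangular pocket. The rectangular boundary, the fixed stepover rule, and the function name are assumptions made for illustration; production CAM systems additionally handle arbitrary boundaries, islands, tool-radius compensation, and entry/exit strategies.

def zigzag_pocket(x_min, x_max, y_min, y_max, tool_diameter, stepover_ratio=0.5):
    """Generate a raster tool path (list of XY points) covering a rectangular pocket.
    The boundary is offset inward by the tool radius so the cutter stays inside the pocket."""
    r = tool_diameter / 2.0
    x0, x1 = x_min + r, x_max - r
    y0, y1 = y_min + r, y_max - r
    step = tool_diameter * stepover_ratio            # distance between adjacent passes
    path, y, left_to_right = [], y0, True
    while y <= y1 + 1e-9:
        xs = (x0, x1) if left_to_right else (x1, x0)
        path.append((xs[0], y))
        path.append((xs[1], y))
        left_to_right = not left_to_right
        y += step
    return path

# Example: a 10-mm end mill clearing a 100 x 60 mm pocket with 50 % stepover
points = zigzag_pocket(0.0, 100.0, 0.0, 60.0, tool_diameter=10.0)

The resulting point list would then be handed to a postprocessing step of the kind sketched above to produce machine-specific NC blocks.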
35.5
Technical Innovation in NC Machine Tools
35.5.1 Functional and Structural Innovation by Multitasking and Multiaxis Turning and Milling Integrated Machine Tool Recently, a turning and milling integrated machine tool has been developed as a sophisticated turning center. It also has a rotating cutting tool which can perform milling operation besides turning operation, as shown in Fig. 35.18. The benefits of the use of turning and milling integrated machine tools are as follows:
Fig. 35.18 Milling and turning integrated machine tool. (Courtesy of Yamazaki Mazak Corp.)
1. Reduction of production time
2. Improved machining accuracy
3. Reduction of floor space and initial cost

As the high performance of these machine tools was accepted, the configuration became more and more complicated. Multispindles and multiturrets are integrated to perform multitasks simultaneously. The machine tool shown in Fig. 35.18 has two spindles: one milling spindle with four axes and one turret with two axes. Increasing the complexity of these machine tools increases the risk of machine crashes during machining operation and requires careful part programming to avoid them.
Five-Axis Machining Center
Multiaxis machining centers are rapidly expanding in practical applications. The multiaxis machining center is applied to generate a workpiece with complex geometry in a single machine setup. In particular, five-axis machining centers have become popular for machining aircraft parts and complicated surfaces such as dies and molds. A typical five-axis machining center is shown in Fig. 35.19. Benefits of the use of multiaxis machining centers are as follows:
1. Reduction of preparation time
2. Reduction of production time
3. Improved machining accuracy
Parallel Kinematic Machine Tool A parallel kinematic machine tool is classified as a multiaxis machine tool. In the past years, parallel kinematic machine tools (PKM) have been studied with interest for their advantages of high stiffness, low inertia, high accuracy,
Fig. 35.19 Five-axis machining center. (Courtesy of Mori Seiki Co. Ltd.)
and high-speed capability. Okuma Corporation in Japan developed the parallel mechanism machine tool COSMO CENTER PM-600 shown in Fig. 35.20. This machine tool achieves high-speed and high-degrees-of-freedom machining operation for practical products. Also, high-speed milling of a free surface is shown in Fig. 35.20.
Ultraprecision Machine Tool Recently, ultraprecision machining technology has experienced major advances in machine design, performance, and productivity. Ultraprecision machining was successfully adopted for the manufacture of computer memory disks used in hard disk drives (HDD) and also photoreflector components used in photocopiers and printers. These applications require extremely high geometrical accuracies and form deviations in combination with supersmooth surfaces. The FANUC ROBONANO α-0iB is shown in Fig. 35.21 as an example of a five-axis ultraprecision machine tool. Nanometer servo-control technologies and air-bearing technologies are combined to realize an ultraprecision machine tool. This machine provides various machining methods for mass production with nanometer precision in the fields of optical electronics, semiconductor, medical, and biotechnology.
Fig. 35.20 Parallel kinematic machining center. (Courtesy of OKUMA Corp.)
35.5.2 Innovation in Control Systems Toward Intelligent CNC Machine Tools The framework of future intelligent CNC machine tools is summarized in Fig. 35.22. A conventional CNC control system has two major levels: the servo control (level 1 in Fig. 35.22) and the interpolator (level 2) for the axial motion control of machine tools. Certainly, the conventional CNC control system can achieve highly sophisticated motion control, but it cannot achieve sophisticated cutting process control. Two additional levels of control hierarchy, levels 3
Fig. 35.21 Ultraprecision machine tool (cross-groove machining example: V-angle 90°, pitch 0.3 µm, height 0.15 µm, Ni-P plate). (Courtesy of Fanuc Ltd.)
and 4 in Fig. 35.22, are required for a future intelligent CNC control system to achieve more sophisticated process control.
Fig. 35.22 Framework of intelligent machine tools (CAPP – computer-aided process planning)
Machining operations by conventional CNC machine tools are generally dominated by NC programs, and only feed speed can be adapted. For sophisticated cutting process control, dynamic adaptation of cutting parameters is indispensable. The adaptive control (AC) scheme is assigned at a higher level (level 3) of the control hierarchy, enabling intelligent process monitoring, which can detect machining state independently of cutting conditions and machining operation. Level 4 in Fig. 35.22 is usually regarded as a supervisory level that receives feedback from measurements of the finished part. A reasonable index to evaluate the cutting results and a reasonable strategy to improve cutting results are required at this level. For this purpose, the utilization of knowledge, know-how, and skill related to machining operations has to be considered. Effective utilization of feedback information regarding the cutting results is very important. Additionally, an autonomous process planning strategy, which can generate a flexible and adaptive working plan, is required as a function of intelligent CNC machine tools. It must be responsive and adaptive to unpredictable changes, such as job delay, job insertion, and machine breakdown on machining shop floors. In order to generate the operation plan autonomously, several planning and information processing functions are needed. Operation planning, cutting tool selection, cutting parameters assignment, and tool-path generation for each machining operation are required at the machine level. Product data analysis and machining feature recognition are important issues as part of information processing.
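A minimal sketch of the level-3 idea is given below: an adaptive controller that trims the feed-rate override so that a measured cutting force stays near a reference value. The proportional-integral update law, the override limits, and the read_cutting_force()/set_feed_override() calls are assumptions standing in for whatever sensing and override interfaces a particular open CNC exposes; they are not taken from the handbook or from any specific controller.

class AdaptiveFeedController:
    """Keep the measured cutting force near a reference by adjusting the feed override (%)."""

    def __init__(self, force_ref_n, kp=0.05, ki=0.02, min_override=20.0, max_override=150.0):
        self.force_ref = force_ref_n
        self.kp, self.ki = kp, ki
        self.min_ov, self.max_ov = min_override, max_override
        self.integral = 0.0
        self.override = 100.0                        # start at the programmed feed

    def update(self, measured_force_n, dt_s):
        error = self.force_ref - measured_force_n    # positive error -> cutting too lightly
        self.integral += error * dt_s
        self.override = 100.0 + self.kp * error + self.ki * self.integral
        self.override = max(self.min_ov, min(self.max_ov, self.override))
        return self.override

# Hypothetical usage inside a 10-ms monitoring loop (sensor and CNC calls are assumed interfaces):
# controller = AdaptiveFeedController(force_ref_n=400.0)
# while machining:
#     force = read_cutting_force()
#     set_feed_override(controller.update(force, dt_s=0.01))

The sketch only adapts feed speed, which matches what conventional adaptive control already offers; the point of levels 3 and 4 is that richer decisions (tool change, sequence change, replanning) would sit above such a loop.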
35.5.3 Current Technologies of Advanced CNC Machine Tools
Open Architecture Control
The concept of open architecture control (OAC) was proposed in the early 1990s. The main aim of OAC was easy implementation and integration of customer-specific controls by means of open interfaces and configuration methods in a vendor-neutral standardized environment [2]. It provides the methods and utilities for integrating user-specific requirements, and it is required to implement several intelligent control applications for process monitoring and control. Altintas has developed a user-friendly, reconfigurable, and modular toolkit called the open real-time operating system (ORTS). ORTS has several intelligent machining modules, as shown in Fig. 35.23. It can be used for the development of real-time signal processing, motion, and process control applications. A sample tool-path generation using quintic spline interpolation for high-speed machining is described as an application, and a sample cutting force control has also been demonstrated [3]. Mori and Yamazaki developed an open servo-control system for an intelligent CNC machine tool to minimize the engineering task required for implementing custom intelligent control functions. The conceptual design of this system is shown in Fig. 35.24. The software model reference adaptive control was implemented as a custom intelligent function, and a feasibility study was conducted to show the effectiveness of the open servo control [4]. Open architecture control
Fig. 35.23 Application of ORTS on the design of CNC and machining process monitoring. (After [3]) (DSP, digital signal processor; FFT, fast Fourier transform; FRF, frequency response function; PID, proportional-integral-derivative controller; PPC, pole placement controller; CCC, cross-coupling controller; ZPETC, zero-phase error tracking controller; I/O, input/output; MT, machine tool)
Fig. 35.24 Conceptual design of open servo system. (After [4])
will reach the level of maturity required to replace current CNC controllers in the near future. The custom intelligent control functions required for an intelligent machine tool will
be easy to implement with the CNC controller. Machining performance in terms of higher accuracy and productivity will thereby be enhanced.
Feedback of Cutting Information Yamazaki proposed TRUE-CNC as a future-oriented CNC controller. (TRUE-CNC was named after the following keywords: T, transparent, transportable, transplantable; R, revivable; U, user-reconfigurable; and E, evolving.) The system consists of an information service, quality control and diagnosis, monitoring, control, analysis, and planning sections, as shown in Fig. 35.25 [5]. TRUE-CNC allows the operator to achieve maximum productivity and highest quality for machined parts in a given environment with autonomous capture of machining operation proficiency and machining know-how. The autonomous coordinate measurement planning (ACMP) system is a component of TRUE-CNC and enhances the operability of coordinate measuring machines (CMMs). The ACMP generates probe paths autonomously for in-line measurement of machined parts [6]. Inspection results or measurement data are utilized to evaluate the machining process to be finished and to assist in the decisionmaking process for new operation planning. The autonomous machining process analyzer (AMPA) system is also a component of TRUE-CNC. In order to retrieve knowledge, knowhow, and skill related to machining operations, the AMPA analyzes NC programs coded by experienced machining
operators and gathers machining information. The machining process sequence, cutting conditions, machining time, and machining features are detected automatically and stored in the machining know-how database [7], which is then used to generate new operation plans. Mitsuishi developed a CAD/CAM mutual information feedback machining system with capabilities for cutting state monitoring, adaptive control, and learning. The system consists of a CAD system, a database, and a real-time controller, as shown in Fig. 35.26 [8]. A CNC machine tool equipped with a six-axis force sensor was controlled to obtain the stability lobe diagram. Cutting parameters, such as the depth of cut, spindle speed, and feed speed, are modified dynamically according to a search sequence for stable cutting states, and the stability lobe diagram is obtained autonomously. The stability lobe diagram is then used to determine chatter-free cutting conditions. Furthermore, Mitsuishi proposed a networked remote manufacturing system which provides remote operation and monitoring [9]. The system demonstrated the capability to transmit the machining state in real time to an operator located far from the machine tool, and the operator can modify the cutting conditions in real time depending on the monitored machining state.
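A minimal sketch of the kind of trial-cut search loop such a system might use to map stable cutting states is shown below. The chatter detector, speed range, and depth steps are invented placeholders; in a real system the decision would be driven by the monitored cutting force signal.

```python
# Illustrative sketch: searching spindle speed / depth-of-cut combinations for
# chatter-free states, in the spirit of building a stability lobe diagram from
# monitored cutting data. is_chatter() stands in for a force-sensor-based
# detector; here it is only a placeholder model.

def is_chatter(spindle_rpm, depth_mm):
    # Placeholder: pretend depths above a speed-dependent limit cause chatter.
    limit = 1.0 + 0.5 * ((spindle_rpm // 500) % 4)
    return depth_mm > limit

def build_stability_map(speeds_rpm, depths_mm):
    """Return {spindle_rpm: largest stable depth found} from trial cuts."""
    stability = {}
    for n in speeds_rpm:
        stable_depth = 0.0
        for ap in depths_mm:            # increase depth until chatter appears
            if is_chatter(n, ap):
                break
            stable_depth = ap
        stability[n] = stable_depth
    return stability

lobes = build_stability_map(range(8000, 16001, 1000), [0.5, 1.0, 1.5, 2.0, 2.5])
best_speed = max(lobes, key=lobes.get)  # chatter-free condition with most depth
print(best_speed, lobes[best_speed])
```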
Fig. 35.25 Architecture of TRUE-CNC. (After [5])
Fig. 35.26 Software system configuration of open architecture CNC. (After [8])
Five-Axis Control
Most commercial CAM systems are not capable of generating suitable cutter location (CL) data for five-axis control machining. The CL data must be adequately generated and verified to avoid tool collisions with the workpiece, fixture, and machine tool. In general, five-axis control machining has the advantage of enabling an arbitrary tool posture, but this makes it difficult to find a tool posture that suits the machining strategy without tool collision. Morishige and Takeuchi applied the concept of C-space to generate collision-free CL data for five-axis control [10, 11]. A two-dimensional C-space is used to represent the relation between the tool posture and the collision area, as shown in Fig. 35.27. A three-dimensional C-space is also used to generate the most suitable CL data satisfying the machining strategy, smooth tool movement, good surface roughness, and so on. Experimental five-axis-control collision-free machining was performed successfully, as shown in Fig. 35.28.
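The two-dimensional C-space idea can be illustrated with the minimal Python sketch below, which discretizes the tool posture angles and marks each posture as colliding or free. The collision test is a placeholder; a real implementation would query the geometric models of the workpiece, fixture, and machine.

```python
# Illustrative sketch of the two-dimensional configuration-space (C-space) idea:
# discretize the tool posture angles (inclination theta, rotation phi) and mark
# each cell as colliding or collision-free. check_collision() is a placeholder
# for a real geometric test against the workpiece, fixture, and machine models.

import math

def check_collision(theta, phi):
    # Placeholder: pretend an obstacle blocks postures leaning toward phi ~ 90 deg.
    return theta > math.radians(30) and abs(phi - math.pi / 2) < math.radians(40)

def build_cspace(n_theta=19, n_phi=73, theta_max=math.radians(60)):
    """Return a grid of booleans: True = collision-free posture."""
    free = []
    for i in range(n_theta):
        theta = theta_max * i / (n_theta - 1)
        row = []
        for j in range(n_phi):
            phi = 2 * math.pi * j / (n_phi - 1)
            row.append(not check_collision(theta, phi))
        free.append(row)
    return free

cspace = build_cspace()
print(sum(cell for row in cspace for cell in row), "collision-free postures")
```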
35.5.4 Autonomous and Intelligent Machine Tool
The whole machining operation of a conventional CNC machine tool is predetermined by NC programs. Once the cutting conditions, such as the depth of cut and stepover, are given by the machining commands in the NC programs, they generally cannot be changed during the machining operation. NC programs must therefore be adequately prepared and verified in advance, which requires extensive time and effort. Moreover, NC programs with fixed commands cannot respond to unpredictable changes, such as job delays, job insertions, and machine breakdowns, found on machining shop floors. Shirase proposed a new architecture to control the cutting process autonomously without NC programs. Figure 35.29 shows the conceptual structure of autonomous and intelligent machine tools (AIMac). AIMac consists of four functional modules called management, strategy, prediction, and observation. All functional modules are connected with each other to share cutting information.
Digital Copy Milling for Real-Time Tool-Path Generation
A technique called digital copy milling has been developed to control a CNC machine tool directly. The digital copy milling system can generate tool paths in real time based on the principle of traditional copy milling. In digital copy milling, the tracing probe and master model of traditional copy milling are represented by three-dimensional (3D) virtual models in a computer.
Fig. 35.27 Configuration space to define tool posture. (After [11])
A virtual tracing probe is simulated to follow the virtual master model, and cutter locations are generated dynamically in real time according to the motion of the virtual tracing probe. Because the cutter locations are generated autonomously, an NC machine tool can be instructed to perform milling operations without NC programs. Additionally, not only the stepover but also the radial and axial depths of cut can be modified, as shown in Fig. 35.30. Digital copy milling can also generate new tool paths to avoid cutting problems and change the machining sequence during operation [12]. Furthermore, the capability for in-process cutting parameter modification was demonstrated, as shown in Fig. 35.31 [13]. Real-time tool-path generation and the monitored actual milling are shown in the lower-left and upper-right corners of this figure, and the monitored cutting torque, adapted feed rate, and radial and axial depths of cut are shown in the lower-right corner. The cutting parameters can be modified dynamically to maintain the cutting load.
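The following minimal Python sketch illustrates the digital copy milling principle: a virtual probe is stepped across a virtual master model, here a simple height function, and cutter locations are emitted on the fly rather than read from an NC program. The surface function, step sizes, and zigzag strategy are invented example values.

```python
# Illustrative sketch of digital copy milling: a virtual tracing probe is stepped
# across a virtual master model (here a shallow dome z = f(x, y)) and cutter
# locations are generated on the fly instead of being read from an NC program.
# The master_surface function and step sizes are invented for this example.

def master_surface(x, y):
    return 5.0 - 0.01 * (x ** 2 + y ** 2)          # virtual master model

def copy_milling_paths(x_range, y_range, stepover, feed_step):
    """Yield cutter locations (x, y, z) in a bilateral zigzag pattern."""
    y = y_range[0]
    direction = 1
    while y <= y_range[1]:
        xs = range(int(x_range[0]), int(x_range[1]) + 1, feed_step)
        for x in (xs if direction > 0 else reversed(list(xs))):
            yield (x, y, master_surface(x, y))      # probe "touches" the master
        y += stepover                               # stepover can change in-process
        direction = -direction

for cl in copy_milling_paths((-10, 10), (0, 4), stepover=2, feed_step=5):
    print(cl)
```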
Fig. 35.28 Five-axis control machining. (After [11])
Flexible Process and Operation Planning System
A flexible process and operation planning system has been developed to generate cutting parameters dynamically for the machining operation. The system generates the production plan from the total removal volume (TRV). The TRV is extracted from the initial and finished shapes of the product and is divided into machining primitives, or machining features. The flexible process and operation planning system then generates cutting parameters according to the machining features detected. Figure 35.32 shows the operation sequence, the cutting tools to be used, and the cutting parameters determined for an experimental machining shape. The digital copy milling system can generate the tool paths or CL data dynamically according to these results and perform the autonomous milling operation without requiring any NC program.

Adaptive Control Using Virtual Milling Simulation
In this adaptive control, the predicted cutting force is used to control the milling process. For this purpose, two functions are required, as shown in Fig. 35.33 [15].
Fig. 35.29 Conceptual structure of AIMac
Real-time instruction to change the cutting conditions is one function; the digital copy milling system mentioned above can provide this real-time instruction. Real-time cutting force prediction is the other function; the virtual milling simulator can provide real-time cutting force prediction and eliminate the need for cutting force measurement. The virtual milling simulator consists of a machining shape simulator and a cutting force simulator. Figure 35.34 [15] shows the experimental verification of this adaptive control. The left column of the figure shows tool paths generated by the digital copy milling system and a photo of the experimental milling operation. The right column shows the collaboration between the virtual milling simulator and the digital copy milling system: the virtual milling simulator calculates the changing shape of the material and predicts the cutting force and torque, while the digital copy milling system simultaneously generates tool paths that reflect the instructed feed speed. Adaptive control based on the predicted cutting force or torque can thus be performed successfully.
Figure 35.35 shows the details of the changes in the cutting torque and tool feed rate on the first and third of five loops of tool paths when adaptive control is performed. The experimental results show that the predicted cutting torque agrees well with the measured one, confirming the effectiveness of using the predicted cutting torque for adaptive control. Between I and J in the first loop, the predicted cutting torque stays within the upper and lower limits of the desired torque, except for the transient states at the beginning and end of the milling operation, and the tool feed speed is kept constant. On the other hand, between K and L in the third loop, the predicted cutting torque varies from zero to above the upper limit of the desired torque, and the tool feed speed is controlled appropriately according to the increase or decrease of the predicted cutting torque. The time required to complete the whole milling operation was 6 min 31 s with adaptive control, a reduction of about 25% from the 8 min 42 s required without adaptive control.
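The feed-override logic described here can be illustrated with the minimal Python sketch below. The torque prediction is a stand-in for the virtual milling simulator, and the limits, gains, and depth values are arbitrary example numbers, not the values used in the experiment.

```python
# Illustrative sketch of torque-based adaptive feed control: the feed rate is
# raised while the predicted cutting torque stays below the lower limit and
# reduced when it exceeds the upper limit. predict_torque() is a stand-in for
# the virtual milling simulator; gains and limits are arbitrary example values.

def predict_torque(feed_mm_min, depth_mm):
    return 0.002 * feed_mm_min * depth_mm           # toy mechanistic model (N m)

def adapt_feed(feed, depth, lower=1.0, upper=2.0, gain=0.1,
               feed_min=100.0, feed_max=1200.0):
    torque = predict_torque(feed, depth)
    if torque > upper:
        feed *= (1.0 - gain)                        # decelerate to protect the tool
    elif torque < lower:
        feed *= (1.0 + gain)                        # accelerate to cut machining time
    return max(feed_min, min(feed_max, feed)), torque

feed = 600.0
for depth in [1.0, 1.5, 2.5, 2.5, 0.5]:             # engagement changes along the path
    feed, torque = adapt_feed(feed, depth)
    print(f"depth={depth:.1f} mm  torque={torque:.2f} N m  feed={feed:.0f} mm/min")
```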
Machining Error Correction Based on Predicted Machining Error in Milling Operation
Recent improvements in computer performance have been remarkable, and extremely complicated and sophisticated simulations of the cutting force and machining error in end milling operations can now be performed.
Fig. 35.30 Example of real-time tool-path generation. (a) Bilateral zigzag paths; (b) contouring paths; (c) change of stepover; (d) change of cutting depth
Fig. 35.31 Adaptive milling on AIMac
Fig. 35.32 Results of machining process planning on AIMac: the operation sequence from the raw material shape to the finished shape, with the cutting tool (from a Ø 80 mm face mill and Ø 6–16 mm end mills to a Ø 3 mm center drill and a Ø 10 mm drill), machining mode, spindle speed, feed rate, and radial and axial depths of cut determined for each machining feature (face, pockets, slots, free form, and blind holes)
For example, in an instantaneous force model with static deflection feedback, the elastic deformation of the tool and tool holder caused by the cutting force is calculated, and the resulting change of the actual uncut chip thickness is calculated exactly.
Fig. 35.33 Adaptive control based on predicted cutting force
Unlike an instantaneous rigid force model, the three mutually dependent relationships, namely the actual uncut chip thickness that changes with the elastic deformation, the cutting force that changes with the actual uncut chip thickness, and the elastic deformation that changes with the cutting force, are calculated without any contradiction. The milling surface predicted in this way and the measured milling surface are shown in Fig. 35.36. The graph shows the machining error corresponding to the cross section of the central part of the milling surface. In this example, it can be seen that the tool deformed toward the workpiece and the workpiece was over-cut. Since the machining error can be predicted exactly in consideration of the tool deformation, it can be eliminated by correcting the tool position and posture during the machining operation using a five-axis control machine tool. Because the tool position and posture during the machining operation are known from the machining error simulation described above, the offset and inclination of the tool from the TCP can be corrected as shown in Fig. 35.37. However, if the offset and inclination are corrected, the actual uncut chip thickness changes, the cutting force changes, and the machining error also changes. It is therefore necessary to iterate the calculation to determine the offset and inclination at which the machining error approaches zero.
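The iteration described above can be sketched as a simple fixed-point loop: predict the error, apply it as a correction, and re-predict until the residual is negligible. In the Python sketch below, predict_error() is only a placeholder for the cutting-force and deflection simulation, and the compliance and force values are invented.

```python
# Illustrative sketch of the iterative tool-position correction: the predicted
# machining error is fed back as an offset correction, the error is re-predicted
# with the corrected position, and the loop repeats until the error is close to
# zero. predict_error() stands in for the deflection simulation; values are invented.

def predict_error(offset_mm, compliance=0.08, nominal_force=50.0):
    """Over-cut (negative) error remaining after applying a given tool offset."""
    deflection = compliance * nominal_force / 100.0   # ~0.04 mm static deflection
    return -(deflection - offset_mm)                  # error shrinks as offset grows

def correct_offset(tolerance=1e-4, max_iter=20):
    offset = 0.0
    for i in range(max_iter):
        error = predict_error(offset)
        if abs(error) < tolerance:
            break
        offset -= error                               # move the tool by the predicted error
    return offset, error, i

offset, error, iterations = correct_offset()
print(f"offset={offset:.4f} mm after {iterations} iterations, residual={error:.2e} mm")
```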
Fig. 35.34 Verification of adaptive control milling
Fig. 35.35 Measured and predicted cutting torque and feed speed in adaptive control milling (machining time: 8 min 42 s without control, 6 min 31 s with control, about 25% reduction; average cutting torque: 0.988 N m without control, 1.02 N m with control)
Fig. 35.36 Predicted and measured milling surface and machining error (work material C3604, Ø 6 mm square end mill, up cut, spindle speed 2000 rpm, tool feed speed 480 mm/min, radial depth of cut 2 mm, axial depth of cut 5 mm) [16]
A machining experiment was conducted to verify the effectiveness of this machining error correction method. The experimental results are shown in Fig. 35.38, which compares the simulated milling surfaces obtained when the tool offset and inclination are not corrected and when they are corrected. The graph also compares the measured machining errors without and with correction. From these results, it can be seen that the over-cut machining error is corrected very well.
35.5.5 Advanced Intelligent Technology for Machine Tools
Advanced Thermal Deformation Compensation
The machining accuracy of a workpiece changes significantly due to the ambient temperature around the machine tool, the heat generated by the machine tool itself, and the heat generated in the machining operation. The latest machine tools are equipped with advanced thermal deformation compensation to achieve high machining accuracy under the temperature changes of a normal factory environment. For effective thermal deformation compensation, the machine tools are designed with a simple construction that produces predictable deformation without torsion or tilting. Thermal deformation caused by changes in room temperature and by frequent changes in spindle speed is controlled by highly accurate compensation technology regardless of wet or dry cutting. Advanced thermal deformation compensation can thus provide excellent dimensional stability against thermal deformation.
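A very simple form of such compensation estimates the spindle growth from temperature-sensor readings with a linear model and subtracts it from the axis command. The Python sketch below is only an illustration of this principle; the sensor names and coefficients are invented, and real controllers use machine-specific models identified from test cuts.

```python
# Illustrative sketch of thermal deformation compensation: spindle growth is
# estimated from temperature readings with a simple linear model and subtracted
# from the commanded Z position. Sensor names and coefficients are invented.

GROWTH_COEFF_UM_PER_K = {"spindle_bearing": 1.8, "headstock": 0.9, "ambient": -0.4}
REFERENCE_TEMP_C = 20.0

def estimated_growth_um(temperatures_c):
    return sum(coeff * (temperatures_c[name] - REFERENCE_TEMP_C)
               for name, coeff in GROWTH_COEFF_UM_PER_K.items())

def compensated_z(z_command_mm, temperatures_c):
    return z_command_mm - estimated_growth_um(temperatures_c) / 1000.0

temps = {"spindle_bearing": 27.5, "headstock": 24.0, "ambient": 22.0}
print(f"estimated growth ~ {estimated_growth_um(temps):.1f} um")
print(f"Z command 150.000 -> {compensated_z(150.0, temps):.4f} mm")
```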
Fig. 35.37 Correction of tool position and inclination to eliminate machining error [17]
Fig. 35.38 Experimental verification to eliminate machining error [17]
Figure 35.39 illustrates the relationship between the spindle speed and the spindle thermal displacement. The top plot shows the spindle thermal deformation without thermal deformation compensation, while the bottom plot shows the same characteristic with thermal deformation compensation. The advanced thermal deformation compensation provides excellent dimensional stability within 4 μm both when the spindle speed is constant and when it changes frequently. This capability helps improve productivity by allowing high-accuracy machining without a warm-up operation of the machine.
Optimum Spindle Speed Control for Chatter-Free Machining
Cost reduction, through shorter cycle times and higher productivity, is continuously required in machining operations. However, determining the cutting conditions that bring out the best performance of the machine and cutting tool is quite difficult for machine operators. In particular, determining the optimum spindle speed for chatter-free cutting requires considerable skill. The optimum spindle speed control system measures chatter vibration with built-in sensors and automatically changes the spindle speed to the optimum value. In addition, this system provides advanced graphics of the optimal cutting conditions, as shown in Fig. 35.40. A stability lobe diagram showing the chatter characteristics is analyzed, and the advanced graphics present effective alternatives to suppress chatter vibration throughout the low to high spindle speed zones.
Fig. 35.39 Effect of advanced thermal deformation compensation. (Courtesy of OKUMA Corp.)
Fig. 35.40 Recommendation of optimal spindle speed to suppress chatter vibration. (Courtesy of OKUMA Corp.)
This navigation can be used effectively to achieve shorter cycle times and higher productivity. Figure 35.41 shows the effect of chatter suppression on productivity and surface quality in a milling operation. When chatter vibration is detected at a spindle speed of 11,000 min−1, the spindle speed is increased to 16,225 min−1, the value recommended by the optimum spindle speed control system. The chatter vibration is suppressed and the surface quality of the workpiece is improved. Moreover, the increased spindle speed translates into a higher feed rate, and 1.5 times higher productivity was achieved.
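A widely used rule for selecting a chatter-avoiding spindle speed, not necessarily the method implemented in the commercial system described above, places the tooth-passing frequency at an integer fraction of the detected chatter frequency. The Python sketch below illustrates this rule; the chatter frequency of 1,082 Hz and the 4-tooth cutter are invented example values.

```python
# Illustrative sketch of a common chatter-avoidance rule: when chatter at
# frequency f_c (Hz) is detected, candidate spindle speeds are chosen so that
# the tooth-passing frequency is an integer fraction of f_c, i.e.
# n = 60 * f_c / (N * k) for a cutter with N teeth and lobe number k = 1, 2, 3, ...

def candidate_speeds(chatter_freq_hz, n_teeth, k_max=6, rpm_limit=20000):
    speeds = [60.0 * chatter_freq_hz / (n_teeth * k) for k in range(1, k_max + 1)]
    return [round(n) for n in speeds if n <= rpm_limit]

def recommend_speed(chatter_freq_hz, n_teeth, rpm_limit=20000):
    """Highest candidate below the spindle limit (higher speed allows a higher feed)."""
    return max(candidate_speeds(chatter_freq_hz, n_teeth, rpm_limit=rpm_limit))

# Example: chatter detected at 1,082 Hz while milling with a 4-tooth cutter.
print(candidate_speeds(1082.0, 4))        # [16230, 8115, 5410, 4058, 3246, 2705]
print("recommended:", recommend_speed(1082.0, 4))
```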
Automatic Geometric Error Compensation in Five-Axis Machines
In five-axis machining, 13 types of geometric errors, such as the misalignment of a rotary axis, greatly affect machining accuracy. Geometric error compensation in five-axis machines is therefore indispensable for achieving high accuracy in five-axis machining. The automatic geometric error compensation system measures the geometric errors automatically using a touch probe and a datum sphere, and the 11 types of geometric errors shown in Fig. 35.42 can be compensated automatically based on the measurement results. Figure 35.43 shows the effect of automatic geometric error compensation in five-axis machines. In this experimental milling operation, indexed machining from multiple directions, in which the cutting tool or machine table is tilted to various angles, was performed. Before the geometric error compensation, the maximum step height between the machined surfaces is 12 μm. After the geometric error compensation, the maximum step height is reduced to 3 μm, and most adjacent machined surfaces show no step at all. This result shows that geometric error compensation in five-axis machines is quite important for achieving high-accuracy machining.
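As a greatly simplified illustration of how one class of measured error can be compensated, the Python sketch below shifts the linear-axis commands by the measured C-axis center offset so that programmed rotations occur about the true rotation center. A real controller handles all 11 error types, including squareness and tilt errors, through its kinematic model; the numbers here are invented.

```python
# Illustrative sketch only: compensating one class of measured geometric error
# (the C-axis center position error in X and Y) by shifting the linear-axis
# commands so that programmed rotations take place about the true, measured
# rotation center. Values are invented example measurements.

MEASURED_ERRORS = {"c_center_x_mm": 0.012, "c_center_y_mm": -0.007}

def compensate_xy(x_cmd_mm, y_cmd_mm):
    """Offset the X/Y commands by the measured C-axis center error."""
    return (x_cmd_mm + MEASURED_ERRORS["c_center_x_mm"],
            y_cmd_mm + MEASURED_ERRORS["c_center_y_mm"])

print(compensate_xy(100.000, 50.000))   # -> (100.012, 49.993)
```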
35.6 Metal Additive Manufacturing Machines or Metal 3D Printers
35.6.1 Rapid Growth of Additive Manufacturing
Additive manufacturing (AM) technology became a hot topic in 2013. AM, also known as 3D printing, uses computer-aided design data to build objects layer by layer. AM contrasts with traditional manufacturing, which removes unwanted material from a solid piece of stock, often metal.
Fig. 35.41 Effect of chatter suppression in milling operation (spindle speed increased from 11,000 to 16,225 min−1, feed rate from 2,200 to 3,200 mm/min). (Courtesy of OKUMA Corp.)
Fig. 35.42 Eleven types of geometric errors for automatic compensation. (Courtesy of OKUMA Corp.)
Fig. 35.43 Effect of automatic geometric error compensation. (Courtesy of OKUMA Corp.)
Fig. 35.44 Growth of additive manufacturing. (a) Money spent on final part production by AM worldwide. (b) Sales of AM systems for metal parts. (Source: Wohlers Report 2020)
Originally, AM was called rapid prototyping because it shortened the period needed to develop a prototype in the 1990s. However, its applications were limited, because the processing performance, material properties, and production cost allowed for prototyping only. In Europe and the United States, on the other hand, applications have gradually expanded from simple models to functional models that can be used for functional tests, and to actual parts that are used only once, such as parts for competition vehicles. In addition, practical products such as medical implants that can withstand long-term use have come to be manufactured by additive manufacturing technology. New equipment, technologies, and materials in AM are driving down the costs of building parts, devices, and products in industries such as aerospace, medicine, automotive, and more. AM is classified by the ASTM standard into seven categories, which stipulate the construction methods in more detail. The seven categories of AM are as follows:

A. Vat photopolymerization (VP)
B. Powder bed fusion (PBF)
C. Binder jetting (BJ)
D. Sheet lamination (SL)
E. Material extrusion (ME)
F. Material jetting (MJ)
G. Directed energy deposition (DED)
The vat photopolymerization (VP) method, proposed in 1980, is the first AM method and was originally called rapid prototyping. Metal forming methods using metal powder include the powder bed fusion (PBF) method and the directed energy deposition (DED) method. Metal AM, or metal 3D printing, has great benefits for complex and high-value parts, reducing the material usage and weight of the product. In the case of turboprop engines, there are examples of reducing more than 800 traditional parts to 12 metal AM parts, and more than 300 conventional parts to 7 metal AM parts. The rapid growth of AM can be seen from the money spent on final part production by AM, shown in Fig. 35.44. Sales of metal AM systems have also increased in recent years, as shown in Fig. 35.44.
35.6.2 Laser Additive Manufacturing Machine Using Powder Bed Fusion
Powder bed fusion (PBF) includes direct metal laser melting (DMLM), direct metal laser sintering (DMLS), electron beam melting (EBM), selective laser sintering (SLS), selective heat sintering (SHS), and so on. The various PBF processes use an electron beam, a laser, or a thermal printing head to melt or partially melt a thin layer of material to build parts. The LASERTEC 30 SLM made by DMG Mori Co., Ltd. is a powder bed additive manufacturing machine. In this machine, metal powders are built up layer by layer and selectively melted by a laser, as shown in Fig. 35.45. It is best suited to creating complex-shaped parts which are difficult to cut; for example, several impellers can be built up simultaneously, as shown in Fig. 35.45. However, the dimensional accuracy and surface roughness of products built up by powder bed fusion (PBF) are not
sufficient for mechanical parts. Therefore, a hybrid metal 3D printer, which combines metal laser sintering using a laser beam to melt powder with high-speed, high-precision milling, was proposed in 2002. The LUMEX series made by Matsuura Machinery Corp. is the world's first hybrid metal 3D printer. In this machine, metal laser processing and high-speed, high-precision milling are repeated to build up complex-shaped parts with machining accuracy and surface finish comparable to machining centers, surpassing the capability of conventional metal 3D printers. As shown in Fig. 35.46, both the dimensional accuracy and the surface roughness are greatly improved by the high-speed, high-precision milling operation. A hybrid metal 3D printer can make deep ribs in a single process without EDM, as well as internal free-form channels that are difficult to form by conventional post-process machining. This feature is well suited to mold manufacturing for injection molding.
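The alternation between laser sintering and milling can be sketched as a simple build loop, shown below in Python. The layer thickness and milling interval are invented example values; real machines use job- and material-specific settings.

```python
# Illustrative sketch of a hybrid build cycle: layers are laser-sintered and,
# every few layers, the solidified contour is finish-milled while it is still
# accessible. Layer thickness and milling interval are example assumptions only.

LAYER_THICKNESS_MM = 0.05
MILL_EVERY_N_LAYERS = 10     # assumption for illustration only

def laser_sinter_layer(layer):
    print(f"sinter layer {layer}")

def mill_contours(up_to_layer):
    print(f"mill contours up to layer {up_to_layer}")   # finish walls, ribs, channels

def build_part(total_height_mm):
    n_layers = int(round(total_height_mm / LAYER_THICKNESS_MM))
    for layer in range(1, n_layers + 1):
        laser_sinter_layer(layer)
        if layer % MILL_EVERY_N_LAYERS == 0 or layer == n_layers:
            mill_contours(up_to_layer=layer)

build_part(total_height_mm=2.0)
```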
Fig. 35.45 AM with selective laser melting. (Courtesy of DMG Mori Co., Ltd.)
Fig. 35.46 The world's first hybrid metal 3D printer, the LUMEX Avance-25: laser processing alone gives a dimensional accuracy of approx. ±0.1 mm and a surface roughness of approx. Rz 50 µm, whereas combined laser processing and machining gives approx. ±0.025 mm and approx. Rz 10 µm. (Courtesy of Matsuura Machinery Corp.)
35.6.3 Five-Axis Milling Machining Center Combining Directed Energy Deposition
Directed energy deposition can use various materials, including ceramics, metals, and polymers. A laser, electric arc, or electron beam gun mounted on an arm melts wire, filament material, or powder and stacks the material by moving the bed vertically. However, the shapes that can be built up are limited by the three-axis control of the arm and table, so a hybrid with a five-axis machine tool was devised to increase the freedom of the built-up shapes. The LASERTEC 65 DED hybrid is a hybrid solution that incorporates the additive manufacturing function into a five-axis machining center, combining directed energy deposition and milling on one machine, as shown in Fig. 35.47. The LASERTEC 65 DED hybrid demonstrates outstanding performance in various applications, such as the machining of complex-shaped parts and the repair or coating of parts for protection against corrosive wear.
35.7 Key Technologies for Future Intelligent Machine Tools
Several architectures and technologies have been proposed and investigated, as described in the previous sections. However, they are not yet mature enough to be widely applied in practice, and the achievements of these technologies are limited to specific cases. The achievements of key technologies for future intelligent machine tools are summarized in Fig. 35.48. Process and machining quality control will become more important than adaptive control. Dynamic tool-path generation and in-process cutting parameter modification are required to realize the flexible machining operation needed for process and machining quality control. Additionally, intelligent process monitoring is needed to evaluate the cutting process and machining quality.
Fig. 35.47 Five-axis machining center with directed energy deposition. (Courtesy of DMG Mori Co., Ltd.)
Fig. 35.48 Achievements of key technologies for future intelligent machine tools: the maturity of motion control, adaptive control, process and quality control, monitoring (sensing), intelligent process monitoring, the open architecture concept, process planning, operation planning, utilization of know-how, learning of know-how, network communication, and distributed computing is charted from the conceptual stage through the confirmed stage to the practical stage
A reasonable strategy to control the cutting process and a reasonable index to evaluate machining quality are required. It is therefore necessary to consider the utilization and learning of knowledge, know-how, and skill regarding machining operations. A process planning strategy that can generate flexible and adaptive working plans is required. An operation planning strategy is also required to determine the cutting tools and cutting parameters. Product data analysis and machining feature recognition are important issues for generating operation plans autonomously. Sections 35.5.2, 35.5.3, "Open Architecture Control," "Feedback of Cutting Information," "Five-Axis Control," 35.5.4, "Digital Copy Milling for Real-Time Tool-Path Generation," "Flexible Process and Operation Planning System," and 35.7 are quoted from [14].
References
1. Koren, Y.: Control of machine tools. ASME J. Manuf. Sci. Eng. 119, 749–755 (1997)
2. Pritschow, G., Altintas, Y., Jovane, F., Koren, Y., Mitsuishi, M., Takata, S., Brussel, H., Weck, M., Yamazaki, K.: Open controller architecture – past, present and future. Ann. CIRP. 50(2), 463–470 (2001)
3. Altintas, Y., Erol, N.A.: Open architecture modular tool kit for motion and machining process control. Ann. CIRP. 47(1), 295–300 (1998)
4. Mori, M., Yamazaki, K., Fujishima, M., Liu, J., Furukawa, N.: A study on development of an open servo system for intelligent control of a CNC machine tool. Ann. CIRP. 50(1), 247–250 (2001)
5. Yamazaki, K., Hanaki, Y., Mori, Y., Tezuka, K.: Autonomously proficient CNC controller for high performance machine tool based on an open architecture concept. Ann. CIRP. 46(1), 275–278 (1997)
6. Ng, H., Liu, J., Yamazaki, K., Nakanishi, K., Tezuka, K., Lee, S.: Autonomous coordinate measurement planning with work-in-process measurement for TRUE-CNC. Ann. CIRP. 47(1), 455–458 (1998)
7. Yan, X., Yamazaki, K., Liu, J.: Extraction of milling know-how from NC programs through reverse engineering. Int. J. Prod. Res. 38(11), 2443–2457 (2000)
8. Mitsuishi, M., Nagao, T., Okabe, H., Hashiguchi, M., Tanaka, K.: An open architecture CNC CAD-CAM machining system with data-base sharing and mutual information feedback. Ann. CIRP. 46(1), 269–274 (1997)
9. Mitsuishi, M., Nagao, T.: Networked manufacturing with reality sensation for technology transfer. Ann. CIRP. 48(1), 409–412 (1999)
10. Morishige, K., Takeuchi, Y., Kase, K.: Tool path generation using C-space for 5-axis control machining. ASME J. Manuf. Sci. Eng. 121, 144–149 (1999)
11. Morishige, K., Takeuchi, Y.: Strategic tool attitude determination for five-axis control machining based on configuration space. CIRP J. Manuf. Syst. 31(3), 247–252 (2003)
12. Shirase, K., Kondo, T., Okamoto, M., Wakamatsu, H., Arai, E.: Trial of NC programless milling for a basic autonomous CNC machine tool. In: Proceedings of the 2000 JAPAN-USA Symposium on Flexible Automation (JUSFA2000), pp. 507–513 (2000)
13. Shirase, K., Nakamoto, K., Arai, E., Moriwaki, T.: Digital copy milling – autonomous milling process control without an NC program. Robot. Comput. Integr. Manuf. 21(4–5), 312–317 (2005)
14. Moriwaki, T., Shirase, K.: Intelligent machine tools: current status and evolutional architecture. Int. J. Manuf. Technol. Manag. 9(3/4), 204–218 (2006)
15. Shirase, K.: CAM-CNC integration for innovative intelligent machine tool. In: Proceedings of the 8th International Conference on Leading Edge Manufacturing in 21st Century (LEM21), A01 (2015)
16. Nishida, I., Okumura, R., Sato, R., Shirase, K.: Cutting force and finish surface simulation of end milling operation in consideration of static tool deflection by using voxel model. In: Proceedings of the 8th CIRP Conference on High Performance Cutting (HPC 2018), ID139 (2018)
17. Nishida, I., Shirase, K.: Machining error correction based on predicted machining error caused by elastic deflection of tool system. J. Jpn. Soc. Precis. Eng. 85(1), 91–97 (2019) (in Japanese)

Further Reading
Altintas, Y.: Manufacturing Automation: Metal Cutting Mechanics, Machine Tool Vibrations, and CNC Design, 2nd edn. Cambridge University Press, Cambridge (2000)
Black, J.T., Kohser, R.A.: DeGarmo's Materials and Processes in Manufacturing, 13th edn. Wiley, New York (2020)
Bollinger, J.G., Duffie, N.A.: Computer Control of Machines and Processes. Addison-Wesley, Boston (1988)
Evans, K.: Programming of CNC Machines. Industrial Press, New York (2016)
Gibson, I., Rosen, D., Stucker, B., Khorasani, M.: Additive Manufacturing Technologies. Springer (2021)
Groover, M.P.: Fundamentals of Modern Manufacturing: Materials, Processes, and Systems, 7th edn. Wiley, New York (2019)
Ito, Y.: Modular Design for Machine Tools. McGraw-Hill, New York (2008)
Ito, Y.: Theory and Practice in Machining Systems. Springer (2017)
John, K.-H., Tiegelkamp, M.: IEC 61131-3: Programming Industrial Automation Systems. Springer, Berlin/Heidelberg (2013)
Joshi, P.H.: Machine Tools Handbook: Design and Operation. McGraw-Hill (2009)
Krar, S.: Technology of Machine Tools, 8th edn. McGraw-Hill Higher Education (2017)
Marinescu, I.D., Ispas, C., Boboc, D.: Handbook of Machine Tool Analysis. CRC, Boca Raton (2002)
Niebel, B.W., Draper, A.B., Wysk, R.A.: Modern Manufacturing Process Engineering. McGraw-Hill, London (1989)
Thyer, G.E.: Computer Numerical Control of Machine Tools. Butterworth-Heinemann, London (1991)
Thyer, G.E.: Computer Numerical Control of Machine Tools, 2nd edn. Butterworth-Heinemann, London (2014)
Wang, L., Gao, R.X.: Condition Monitoring and Control for Intelligent Manufacturing. Springer (2005)
Wionzek, K.-H.: Numerically Controlled Machine Tools as a Special Case of Automation. Didaktischer Dienst, Berlin (1982)
Wohlers Report: Wohlers Associates (2020)
Professor Shirase received a Master of Engineering from Kobe University and joined Kanazawa University as a research associate in 1984. He received a Dr. Eng. from Kobe University in 1989. He became an associate professor at Kanazawa University in 1995 and at Osaka University in 1996. Since 2003, he has been a professor at Kobe University. His research interests are mainly in autonomous machine tools and intelligent CAD/CAM systems. He is a fellow of the Japan Society of Mechanical Engineers and the Japan Society for Precision Engineering.
Professor Fujii received a Master of Engineering from Kyoto University and a PhD from the University of Wisconsin, Madison, in 1967 and 1971, respectively. He is a professor emeritus of Kobe University. His research interests include the automation of manufacturing systems, system simulation, and production management and control. He is an honorary member of the Institute of Systems, Control and Information Engineers and of the Japan Society for Precision Engineering, and a fellow of the Operations Research Society of Japan.
36 Digital Manufacturing Systems
Chi Fai Cheung, Ka Man Lee, Pai Zheng, and Wing Bun Lee
Contents
36.1 Introduction ... 805
36.2 Digital Manufacturing Based on Virtual Manufacturing and Smart Manufacturing Systems ... 806
36.2.1 Virtual Manufacturing ... 806
36.2.2 Smart Manufacturing Systems ... 807
36.2.3 Digital Manufacturing by Industrial Internet of Things (IIoT)-Based Automation ... 810
36.2.4 Case Studies of Digital Manufacturing ... 813
36.2.5 Conclusion ... 823
References ... 824
Advances in the Internet, communication technologies, and computation power have accelerated the cycle of new product development as well as supply chain efficiency in an unprecedented manner. Digital technology provides not only an important means for the optimization of production efficiency through simulations prior to the start of actual operations but also facilitates manufacturing process automation through efficient and effective automatic tracking of production data from the flow of materials, finished goods, and people to the movement of equipment and assets in the value chain. There are two major applications of digital technology in manufacturing. The first deals with the modeling, simulation, and visualization of manufacturing systems, and the second deals with the automatic “acquisition, retrieval, and processing of manufacturing data used in the supply chain.” This chapter summarizes the state of the art of digital manufacturing which is based on virtual manufacturing (VM) systems,
C. F. Cheung () · K. M. Lee · P. Zheng · W. B. Lee Behaviour and Knowledge Engineering Research Centre, Department of Industrial and Systems Engineering, The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong e-mail: [email protected]
© Springer Nature Switzerland AG 2023 S. Y. Nof (ed.), Springer Handbook of Automation, Springer Handbooks, https://doi.org/10.1007/978-3-030-96729-1_36
smart manufacturing (SM) systems, and industrial Internet of Things (IIoT). The associated technologies, their key techniques, and current research work are highlighted. In addition, the social and technological obstacles in the development of a VM system and SM system and an IIoT-based manufacturing process automation system and some practical application case studies of digital manufacturing are also discussed. Keywords
Digital manufacturing · Smart manufacturing · Virtual manufacturing · Industrial Internet of Things (IIoT) automation · Radio frequency identification (RFID) · Industry 4.0
36.1
Introduction
The industrial world is undergoing profound changes as the information age unfolds [1]. The competitive advantage in manufacturing has shifted from the mass production paradigm to one that is based on fast responsiveness and on flexibility [2]. The scope of digital manufacturing includes manifesting physical parts directly from 3D CAD files or data, using additive manufacturing technologies such as 3D printing, rapid prototyping from smart manufacturing (SM) models, and the use of industrial Internet of Things (IIoT) for supporting manufacturing process optimization and resources planning. To achieve the integration, digital manufacturing, which covers all the engineering functions, the information flow, and the precise characteristics of a manufacturing system, is needed. Manufacturing enterprises digitize manufacturing information and accelerate their manufacturing innovation to improve their competitive edge in the global market. There are two major applications of digital technology in manufacturing which include SM dealing with the artificial intelligence, modeling, simulation, and visualization 805
806
C. F. Cheung et al.
of manufacturing systems, while the other is related to the automation of the manufacturing process dealing with the acquisition, retrieval, and processing of manufacturing data in the supply chain. In this chapter, the state of the art of digital manufacturing based on SM and IIoT is reviewed. The concept and benefits of SM and IIoT are presented together with the associated technologies, the key techniques, and current research. Moreover, the social and technological obstacles in the development of an SM system and an IIoTbased manufacturing process automation system are discussed with some practical application case studies of digital manufacturing at the end of the chapter.
36.2
tion. A VM system produces digital information to facilitate physical manufacturing processes. The concept, significance, and key techniques of VM were addressed by Lawrence Associate Inc. [3], while the contribution and achievements of VM were reviewed by Shukla [4]. As mentioned by Kimura [5], a typical VM system consists of a manufacturing resource model, a manufacturing environment model, a product model, and a virtual prototyping model. Onosato and Iwata [6] developed the concept of a VM system, and Kimura [5] described the product and process model of a VM system. Based on the concept and the model, Iwata et al. [7] proposed a general modeling and simulation architecture for a VM system. Gausemeier et al. [8] has developed a cyberbike VM system for the real-time simulation of an enterprise that produces bicycles. With the use of a VM system, people can observe the information that is equivalent to a real manufacturing environment in a virtual environment [9, 10]. A conceptual view of a VM system according to Kimura [5] is shown in Fig. 36.1. The manufacturing activities and processes are modeled before, and sometimes in parallel with, the operations in the real world. Interaction between the virtual and real worlds is accomplished by continuous monitoring of the performance of the VM system. Since a VM model is based on real manufacturing facilities and processes, it provides realistic information about the product
Digital Manufacturing Based on Virtual Manufacturing and Smart Manufacturing Systems
36.2.1 Virtual Manufacturing Digital manufacturing based on virtual manufacturing (VM) integrates manufacturing activities dealing with models and simulations, instead of objects and their operations in the real world. This provides a digital tool for the optimization of the efficiency and effectiveness of the manufacturing process through simulations prior to actual operation and produc-
Virtual world Manufacturing environment model
Product model
Virtual prototyping
Manufacturing resource model Comparison and model
Engineering activity
maintenance
Task organization
Product design
Manufacturing preparation
Prototype and product
Production management
Manufacturing resources
Manufacturing environment Real world
Fig. 36.1 Conceptual view of a virtual manufacturing system. (After Kimura et al. [5])
Monitoring
36
Digital Manufacturing Systems
807
(vii) Verification and evaluation: Technologies including standards of evaluation, decision tools, and evaluation methods are needed to ensure the output from the VM system is equivalent to that from the RM system.
and its manufacturing processes and also allows for their evaluation and validation. Since no physical conversion of materials into products is involved in VM, this helps to enhance production flexibility and reduce the cost of production as the cost for making the physical prototypes can be reduced. The development of VM demands multidisciplinary knowledge and technologies as follows:
36.2.2 Smart Manufacturing Systems
(i) Visualization technologies: VM makes use of computer graphic interfaces to display highly accurate, easily understandable, and acceptable input and output information for the user. This involves advanced visualization technologies including image processing, augmented reality (AR), virtual reality (VR), multimedia, design of graphic interfaces, animation, etc. (ii) Technologies for building a VM environment: A computerized environment for VM includes hardware and software for the computer, modeling and simulation of the information flow, the interface between the real and the virtual environment, etc. This needs the research and development of devices for VM operational environment, interface and control between the VM system and the real manufacturing (RM) system, information and knowledge integration and acquisition, etc. (iii) Information-integrated infrastructure: This refers to the hardware and software for supporting the models and the sharing of resources, information, and communication technologies (ICT) among dispersed enterprises. (iv) Methods of information presentation: The information about product design and manufacturing processes and related solid objects are represented using different data formats, languages, and data structure so as to achieve the data sharing in the information system. There is a need for advanced technologies for 3D geometrical representation, knowledge-based system description, rulebased system description, customer-orientated expert systems, feature-based modeling and physical process description, etc. (v) Model formulation and reengineering: Various technologies, including model exchange, model management, model structure, data exchange, etc., are employed to define, develop, and establish methods and techniques for realizing the functions and interrelationships among various models in the VM system. (vi) Modeling and simulation: This involves the processes and methods used to mimic the real world on the computer. Research on the technologies related to dispersed network modeling, continuous system modeling, model databases and its management optimization analysis, validation of simulation results, development of simulation tools, and software packaging technique are indispensable.
Information and communication technology (ICT) is evolving, and many disruptive technologies, including cloud computing, Internet of Things (IoT), big data analytics, and artificial intelligence, have emerged. These technologies make the ICT to be smart and capable of addressing current challenges, such as increasingly customized requirements, improved quality, and reduced time to market [11]. An increasing number of sensors are used in equipment (e.g., machine tools) to enable it to self-sense, self-act, and communicate with one another [12]. With the use of these technologies, real-time production data can be obtained and shared to facilitate rapid and accurate decision-making. The connection of physical manufacturing equipment and devices over the Internet together with big data analytics in the digital world (e.g., the cloud) has resulted in the emergence of a revolutionary means of production named cyber-physical production systems (CPPSs). CPPSs are a materialization of the general concept cyber-physical systems (CPS) in the manufacturing environment. The interconnection and interoperability of CPS entities in manufacturing shop floors together with analytics and knowledge learning methodology provide an intelligent decision support system [13]. The widespread application of CPS (or CPPS) has ushered in the Industry 4.0 [14]. Although extensive effort has been done to make manufacturing systems digitalized and smart, smart manufacturing systems do not have a widely accepted definition. According to Industry 4.0, CPPSs can be regarded as smart manufacturing systems. CPPSs comprise smart machines, warehousing systems, and production facilities that have been developed digitally and feature end-to-end ICT-based integration from inbound logistics to production, marketing, outbound logistics, and service [13]. Smart manufacturing systems can generally be defined as fully integrated and collaborative manufacturing systems that respond in real time to meet the changing demands and conditions in factories and supply networks and satisfy varying customer needs [15]. Key enabling technologies for smart manufacturing systems include CPS, IoT, Internet of services (IoS), cloud-based solutions, artificial intelligence (AI), and big data analytics. Meanwhile, in line with the prevailing implementation of CPS, the next-generation human-in-the-loop manufacturing system was brought up, which is known as the humancyber-physical system (HCPS) [16]. It is defined as “a kind of composite intelligent system comprising humans, cyber
36
808
C. F. Cheung et al.
The Industry 4.0 concept in the manufacturing sector systems, and physical systems with the aim of achieving specific goals at an optimized level” [17]. In this type of covers a wide range of applications from product design to digital manufacturing context, human-machine is integrated logistics. The role of mechatronics, a basic concept in manuin a collaborative and cyber-physical integrated manner, such facturing system design, has been modified to suit CPS [34]. as AR-assisted machining process monitoring [18], industrial Smart product design based on customized requirements robot inspection [19], human-robot collaborative assembly that target individualized products has been proposed [35]. [20], etc. It not only helps to enhance the efficiency and pro- Predictive maintenance [36] and its application in machine ductivity of the modern manufacturing process with optimal health prognosis are popular topics in Industry 4.0-based CPS human and machine engagement (e.g., the high flexibility [37]. Machine Tools 4.0 as the next generation of machine of human body and high efficiency of robot arms) [21] but tools has been introduced in machining sites [38]. Energy also mitigates the workload (e.g., lifting heavy things) and Management 4.0 has also been proposed for decision-based improves the physical well-being of human operators (e.g., energy data and has transformed energy monitoring systems into autonomous systems with self-optimized energy use dangerous/extreme working conditions) [22]. With the rapid advancement of cyber-physical systems, [39]. Moreover, the implication of Industry 4.0 technologies digital twin (DT) is gaining increasing attention owing to its on logistic systems has been investigated [40]. Figure 36.2 presents a framework of Industry 4.0 smart great capabilities to realize Industry 4.0. Enterprises from different fields are taking advantages of its ability to sim- manufacturing systems. The horizontal axis shows typical ulate real-time working conditions and perform intelligent issues in Industry 4.0, including smart design, smart machindecision-making, where a cost-effective solution can be de- ing, smart monitoring, smart control, smart scheduling, and livered to meet individual stakeholder demands. As DT tech- industrial applications, which are the focus of this work. The nology becomes more sophisticated [23], described it as vertical axis shows issues in another dimension of Industry one of the strategic important directions for manufacturing 4.0 ranging from sensor and actuator deployment to data enterprises to progress. Designed to improve manufacturing collection, data analysis, and decision-making. In Industry efficiency, DT is a digital duplication of entities with real- 4.0, data acquisition and analysis are the main sources of the time two-way communication enabled between the cyber smartness of activities shown on the horizontal axis. and physical spaces [24]. By providing a means to monitor, optimize, and forecast processes, DT is envisioned by [25] (i) Smart design: Traditional design has been transformed to become smart due to the rapid development of new techas an approach for continuous improvement toward human nologies including virtual reality (VR) and augmented well-being and quality of life. The maturity of this technology reality (AR). Hybrid prototyping using VR techniques has also attracted attention from a wide range of industries, has been incorporated to additive manufacturing. Design including healthcare and urban planning. 
City planners, assoftware (e.g., CAD/CAM) can now interact with smart sisted by DT technology, are able to interact with a dataphysical prototype systems in real time via 3D printing rich city simulation, laying the foundation for a smart city integrated with CPS and AR [41]. As a result, engineeras seen in the case of Singapore [26]. Gartner, a prominent ing changes and physical realizations could be integrated global research and advisory firm, describes DT as one of the to achieve a smart design paradigm. top ten strategic technology trends in 2019 [27]. Meanwhile, Grand View Research forecasts the DT market to grow to (ii) Smart machining: In Industry 4.0, smart machining can be achieved with the aid of smart robots and other types USD $27.06 billion by 2025, an approximate tenfold increase of smart objects that can sense and interact with one anfrom USD $2.26 billion back in 2017 [28]. other in real time [42]. For example, CPS-enabled smart With the capability of DT technology, Industry 4.0 is no machine tools can capture real-time data and transfer longer a “future trend,” and many leading organizations have them to a cloud-based central system so that machine made it the center of their strategic agenda. For instance, tools and their twined services can be synchronized to with DT simulations and optimized decision-making, new provide smart manufacturing solutions. Moreover, selfinsights can be obtained to produce smart products with selfoptimization control systems provide in-process quality awareness [29]. Enterprises that are able to capitalize on control and eliminate the need for post-process quality this will benefit from the competitive advantages that are inspection [43]. available to early adopters [30]. [31] described an enormous range of benefits ranging from product design and verifi- (iii) Smart monitoring: Monitoring is an important aspect in the operation, maintenance, and optimal scheduling cation, product life cycle monitoring, to shop floor design, of Industry 4.0 manufacturing systems [44]. The optimizing manufacturing processes and maintenance. [32, widespread deployment of various sensors has made 33] pointed out the role of DT technology in making smart smart monitoring possible. For example, data on various machine tools via optimal decision support and machine manufacturing objects, such as temperature, electricity health awareness analysis.
Digital Manufacturing Systems
Data-driven machining
Data-driven (realtime) monitoring
Data-driven (realtime) control
Design data analysis
Monitoring data analysis
Machining data analysis
Control data analysis
Scheduling data analysis
Design data collection
Monitoring data collection
Machining data collection
Control data collection
Scheduling data collection
Monitoringoriented sensor deployment
Machiningoriented sensor (e.g., temperature, force, pressure sensors) and actuator deployment
Control-oriented sensor and actuator deployment
Smart monitoring
Smart machining
Smart control
Sensor and actuator deployment
Data-driven design
Data-driven (realtime) scheduling (e.g., data-driven modeling and predication)
Big data analysis
809
Data collection
Big data-driven decision-making
36
Design-oriented sensor deployment
Smart design
Smart scheduling
Fig. 36.2 A framework of smart manufacturing systems
consumption, vibrations, and speed, can be obtained in real time. Smart monitoring provides not only a graphical visualization of these data but also alerts when abnormality occurs in machines or tools [45, 46]. CPS and IoT are key technologies that enable smart monitoring in Industry 4.0 smart manufacturing systems. (iv) Smart control: In Industry 4.0, high-resolution, adaptive production control (i.e., smart control) can be achieved by developing cyber-physical production control systems [47]. Smart control is mainly executed to manage various smart machines or tools physically through a cloud-enabled platform [48]. End users can switch off a machine or robot via their smartphones [49]. Decisions can then be timely reflected in frontline manufacturing sites, such as robot-based assembly lines or smart machines [50]. (v) Smart scheduling: Smart scheduling mainly utilizes advanced models and algorithms to draw information from data acquired by sensors. Data-driven techniques and
advanced decision architecture can be used to perform smart scheduling. For example, distributed smart models that utilize a hierarchical interactive architecture can be used for reliable real-time scheduling and execution [51]. Production behavior and procedures can be carried out automatically and effectively because of the wellestablished structures and services. With the aid of data input mechanisms, the output resolutions are fed back to the parties involved in different ways [52]. Vertically, several key challenges are summarized as follows: (i) Smart design and manufacturing: Research at this level encompasses smart design, smart prototyping, smart controllers, and smart sensors [53, 54]. Real-time control and monitoring support the realization of smart manufacturing [55]. Supporting technologies include IoT, STEPNC, 3D printing, industrial robotics, and wireless communication [56].
36
810
(ii) Smart decision-making: Smart decision-making is at the center of Industry 4.0. The ultimate goal of deploying widespread sensors is to achieve smart decision-making through comprehensive data collection. The realization of smart decision-making requires real-time information sharing and collaboration [57]. Big data and its analytics play an important role in smart decision-making tasks, such as data-driven modeling and data-enabled predictive maintenance [58]. Many technologies, including CPS, big data analytics, cloud computing, modeling, and simulation, contribute to the realization of smart decision-making [59–61]. (iii) Big data analytics: CPS and IIoT-based manufacturing systems involve the generation of vast amounts of data in Industry 4.0 [62], and big data analytics is crucial for the design and operations of manufacturing systems [63]. For example, by using the big data analytics approach, a holistic framework for data-driven risk assessment for industrial manufacturing systems has been presented based on real-time data [64]. Such a topic has been widely reported to support production optimization and manufacturing CPS visualization [65–67]. (iv) Industrial implementations: Industrial applications are the ultimate aim of Industry 4.0. Almost all industries, including manufacturing, agriculture, information and media, service, logistics, and transportation, can benefit from the new industrial revolution. Many new opportunities will be available for industrial parties [68]. Companies may focus on their core business values or challenges, which could be upgraded or addressed with Industry 4.0-enabled solutions.
36.2.3 Digital Manufacturing by Industrial Internet of Things (IIoT)-Based Automation Another major application of digital manufacturing deals with the automatic acquisition and processing of manufacturing data in the supply chain. This is due to the fact that the keen competition in global manufacturing has rekindled the interest for lean manufacturing, in reducing inventory and efficiency in production control. There has been worldwide a growing interest in the use of Internet of Things (IoT) to digitalize the manufacturing information so as to automate the manufacturing process. As technical problems are slowly being overcome and the cost of using IoT is decreasing, IoT is becoming popular in the manufacturing industries. According to the published report of Fortune Business Insights, IoT in manufacturing size is at USD27.76 billion in 2018 and is projected to reach USD136.83 billion by 2026, thereby exhibiting CARGR of 22.1% within 2019–2026 [69].
C. F. Cheung et al.
The rapid increase in the use of IoT technology in the retail industry has been driven by major players such as Cisco Systems Inc., IBM Corporation, General Electric Company, and Siemens AG. Cisco, one of the world’s network hardware and software provider, has provided IoT devices and network and implementing new business solution in manufacturing, wholesale, and retail industry. The aim of the IoT applications is to connect the devices, facilities, and people by providing resilient wireless mesh. It can help to improve agility to support production needs and maximize the uptime through leveraging data collected in the shop floor. GE implemented industrial Internet of Things (IIoT) for predictive analysis. Sensors help to collect the data of machine or equipment status, and those data can be sent via network for further analysis so as realize the machine life cycle. Apart from predicting machine breakdown so as to improve the overall equipment effectiveness, IIoT can realize the dangerous working condition so as alert the worker to evacuate earlier. Another main benefit of IIoT is location tracking such that the equipment and tools can be located quickly and it can greatly reduce the request and searching time. Those applications mentioned above demonstrate how IoT can revamp the industry and provide new customer experiences. With better sensor and 5G network and the release of IoT open platforms, it is likely that IIoT will be widely adopted across various industries. The use of radio frequency identification (RFID) as a type of IoT enables real-time visibility and increases processing efficiency of shop floor manufacturing data. RFID also supports information flow in process-linked applications. Moreover, it can help to minimize the need for reworking, improve efficiency, reduce line stoppages, and replenish justin-time materials on the production line. RFID can assist in automating assembly line processes and thus reduce labor and cost and minimize errors on the plant floor. The integration of RFID with various manufacturing systems is still a challenge to many corporations. As most large retailers will gradually demand the use of RFID in the goods from their suppliers, this creates both pressure and the opportunity for SMEs to adopt this technology in their logistics operations and extended it to the control of their manufacturing process. Some previous work [70] discussed the point that RFID can be more cost-effective in bridging the gap between automation and information flow by providing better traceability and reliability on the shop floor. Traditional shop floor control in a production environment although computerized still requires manual input of shop floor data to various systems such as the ERP for production planning and scheduling. Such data includes product characteristics, labor, machinery, equipment utilization, and inspection records. Companies such as Lockheed Martin, Raytheon, Boeing [71], and Bell helicopter have installed lean data-capture software and technologies and are in the
36
Digital Manufacturing Systems
process of converting barcodes to RFID. Honeywell was using barcodes to collect data related to part histories, inventories, billing, and sharing data with its clients and is accelerating its plan to switch from barcodes to RFID. The use of RFID has several advantages over barcodes as tags that contain microchips that can store as well as transmit dynamic data have a fast response, do not require line of sight, and possess high security. RFID offers greater scope for the automation of data capture for manufacturing process control. In recent years, the cost of the RFID tags is continuously decreasing [72, 73], while the data capability is increasing which makes the practical applications of the RFID technology in manufacturing automation economically feasible. For manufacturers, it is increasingly important to design and integrate RFID information into various enterprise application software packages and to solve connectivity issues related to plant floor and warehousing. Real-time manufacturing process automation is dependent on the principle of closed-loop automation that sense, decide, and respond from automation to plant and enterprise operations. For example, pharmaceutical manufacturer [74] makes use of RFID to trace the route or history taken by an individual product at multiple locations along the production line. This allows the pharmaceutical manufacturer easily to trace all final products that might have been affected by any production miscarriage. An aerospace company named Nordam Group uses RFID to track its high-cost molds. Through the use of RFID tags, they save the cost of real-time tool tracking. With growing emphasis on real-time responsiveness, manufacturers are seeking to control more effectively the production processes in real time in order to eliminate waste and boost throughput. The desire to extend supply chain execution dispatching within plant makes closed-loop automation an imperative. The inability to achieve true closed-loop manufacturing process automation presents one of the biggest barriers to successful real-time operation strategies. Critical elements that support manufacturing process automation include the following: • Dynamic IT systems to support high mix, variable environments • Dynamic modeling, monitoring, and management of manufacturing resources • Real-time synchronization between activities in the offices, manufacturing plant, and supply chain • Visibility into the real-time status of production resources Although some previous studies [75–77] have shown that RFID technology has the potential to address the above problems and has great potential for supporting manufacturing automation, critical deficiencies in the current systems in keeping track of processes under changing conditions include the following:
811
• Lack of integration of manufacturing with the supply chain • Lack of real-time shop floor data for predictive analysis and for decision support • Lack of a common data model for all operational applications • Inability to sense and analyze inputs ranging from the factory floor to the supply chain • Ineffective integration links between manufacturing process automation and enterprise IT applications • Inability to provide intelligent recommendations and directives to targeted decision points for quick action In light of the above, this chapter presents a review of the industrial Internet of Things (IIoT)-based manufacturing process automation system (MPAS), which embraces heterogeneous technologies including RFID that can be applied in the manufacturing process environment with the objective to enhance digital manufacturing at the level of automation across the enterprise and throughout the value chain. The proposed IIoT-based manufacturing process automation system aims to address those deficiencies.
Key RFID Technologies RFID is an advanced automatic identification technology, which makes use of radio frequency signals to capture data remotely from tags within reading range [78, 79]. The basic principle of RFID, i.e., the reflection of power as the method of communication, was first described in 1948. One of the first applications of RFID technology was “identify friend or foe” (IFF) detection deployed by the British Royal Air Force during World War II. The IFF system allowed radar operators and pilots to automatically distinguish between friendly and enemy aircraft using radio frequency (RF) signals. The main objective was to prevent “friendly fire” and to aid the effective interception of enemy aircraft. The radio frequency is the critical factor for the type of application an RFID system is best suited for. Basically, the radio frequencies can be classified as shown in Table 36.1. A typical RFID system contains several components including an RFID tag, which is the identification device attached to the item to be tracked, and an RFID reader and antenna, which are devices that can recognize the presence of RFID tags and read the information stored on them. After receiving the information, in order to process the transmission of information between the reader and other applications, RFID middleware is needed, which is software that facilitates the communication between the system and the RFID devices. Figure 36.3 shows a typical RFID system to illustrate how the pieces fit together. As shown in Fig. 36.3, RF tags are devices that contain identification and other information that can be communicated to a reader from a distance. The tag comprises a simple
36
812
C. F. Cheung et al.
Table 36.1 Comparison of RFID frequency band and their respective applications Frequency Low frequency, LF (125 kHz)
Approximate read range 25 usually behaves like a soft wire, which deforms or bends under point forces. The rotation behavior usually happens to nanowires with aspect ratio of η < 15, which behaves like a rod. In this case, the pushing force F generated by the AFM tip induces a the friction and shear force F = μot F + νFot along the nanowire axial direction when the pushing direction is not perpendicular to the nanowire axis, where μot and ν represent the friction and shear coefficients between the tip and the
a)
Nanowire
s
Static point
F
C
l
Since the shear force is usually proportional to the contact area and the contact area between a nanowire and a substrate surface is much greater than that between the nanowire and an AFM tip, therefore: a 90◦ and, otherwise, Ps1 . The process of manipulating a nanowire from its initial position to its destination is illustrated in Fig. 39.10. A nanowire has to be manipulated to go through a zigzag strategy for positioning the nanowire with specified orientation. When alternating pushing forces are applied to two points of the nanowire, it will rotate around two pivots Pi and Qi . The distance L1 from Pi to Qi can be calculated as the pushing points are determined. The two pivots Ps and Pd (as shown in Fig. 39.10) are linked to form a straight line, and the distance d between the two points can be calculated. The total distance d can be then divided into N segments as the number of manipulation steps. Then the step size Lp (as shown in Fig. 39.10) can be calculated as:
Fig. 39.9 The initial status of the nanowire and the desired status where it is manipulated
q2
for l = L/2.
Pi
Nanowire (initial position)
Pd
Fig. 39.10 The manipulation process of a nanowire from an initial status to its destination status
(23)
39
874
N. Xi et al.
During the manipulation process, the pivot Pi is always on the line defined by the two points Ps and Pd ; therefore, the rotation angle for each step can be determined as:
Lp θ = 2a cos 2L1
.
(24)
The rotation angle θ stays the same during manipulation. The initial pushing angle θ1 and the final pushing angle θ2 (as shown in Fig. 39.10) can be calculated by finding the starting position and the ending position. When θ is determined, the pivots Pi and Qi (i = 1, . . . , N) can be obtained, and the pushing points can then be planned. For instance, we will show how to determine the pushing points when a nanowire rotates around the pivot Pi . Figure 39.11 illustrates the frames used for determining the AFM tip position. The following transformation matrix can be easily calculated. The transformation matrix of the frame originated at Ps relative to the original frame can be represented as Ts . The transformation matrix of the frame originated at Pi relative to the frame originated at Ps can be denoted as Ti . Assume the rotation angle is β (0 < β ≤ θ ); then the transformation matrix relative to the frame originated at Pi can be represented as Tβi . β can be calculated via setting a step size of manipulation. The frames of the pushing point can then be calculated as: ⎤ ⎡ ⎤ XF 0 ⎣YF ⎦ = Ts Ti Tβi ⎣L − 2s⎦ . 1 1 ⎡
(25)
Similar procedure can be followed to calculate the pushing position for a nanowire rotating around the pivot Qi , i ∈ [1, N]. After the coordinates of the pushing point is determined, the manipulation path of AFM tip can then be generated.
y q Qi
yi F
ys
Ps O
xs a
q Pi
Pd
xi Pi+1 q /2
39.2.3 Local Scan-Based Searching and Compensation Methods for Nanomanipulation The random positioning error of AFM tip caused by the thermal extension or contraction puts a major challenge to the nanomanipulation efficiency since the object may be easily lost or manipulated to wrong destinations. Before the manipulation, the objects within the manipulation area are identified, and their locations are labeled, and the corresponding image is called a reference map for guidance. However, the labeled locations of the nanoobjects have errors due to the uncertainty of the AFM system. To manipulate the nanoobjects to follow the planned path, the actual location of each nanoobject has to be identified, and the AFM tip positioning error has to be overcome before each operation. To meet this requirement, this subsection develops spiral local scan-based object searching and associated positioning error compensation approaches.
Spiral Local Scan Method for Structured Nanoobject Searching Local scan approaches have a long successful history for detecting objects with known directions relative to the AFM tip [47, 48]. However, for the case without tip-target orientation information, the conventional nanoobject detection scheme could be inefficient or even unable to find the object location. The main reason causing such an inefficiency is that the conventional local scan strategy employs straight-linetype scanning path with limited discrete searching directions which may not be dense enough to find the target sitting a little far away from the tip. Due to the same reason, the conventional local scan strategy is unable to give qualified image of the ROI for guiding AFM tip to overcome the positioning error. To tackle this challenge, spiral scanning pattern can be a proper option, which has been shown pretty efficient for guiding AFM tip movement [49, 50], thanks to the merits of smoothness, simplicity, and the directionindependent characteristics [29, 51, 52]. In order to perform efficient manipulations for assembling the ball-/rod-like structured nanoobjects under various scenarios, the spiral trajectory-based target searching strategy is developed [49] (as shown in Fig. 39.12), and the main scanning path can be described as ⎧ xL (t) = RL (t) · sin(ωc · t) + x0 ⎪ ⎪ ⎨ yL (t) = RL (t) · cos(ωc · t) + y0 , ⎪ ⎪ ⎩ RL (t) = A/2π · ωc · t
(26)
x
Fig. 39.11 The frames used to determine the AFM tip pushing location at each step
where RL (t) denotes the path radius increasing at the rate A along time t; ωc is a constant radian rotation velocity; [xL (t), yL (t)] represents the searching location at time t; and
39
Nanomanufacturing Automation
875
a)
b) P5
Y
Y RS P 1
A
P2
P4 P1
P3
RC
P1,S
A P1,E
RL
P3
P2
P4
RL PO
PO
39 O
Ball-like target
X
O
Rod-like target
X
Fig. 39.12 Spiral local scan strategy for structured target searching, (a) ball-like object searching trajectory, (b) rod-like object searching trajectory
[x0 , y0 ] denotes the tip starting point PO of the spiral local scan (as shown in Fig. 39.12). Ball-Like Target Searching Strategy To search a ball-like target (length-to-width ratio η should be within the range [1, 1.4] empirically) using spiral local scan, the procedure is developed and elaborated as follows. Taking PO as the AFM tip position when the local scan is triggered, it will be the starting point of the spiral scan trajectory, and the radius increasing rate A of the spiral trajectory can be set as the nanoparticle radius RS /2 empirically. When PO and A are selected, the spiral scan pattern will be ready, and the AFM tip follows the trajectory and hits the point P1,S (detected via judging the collision event defined by a relative height threshold δ) which is on the surface of the ball-like object (as shown in Fig. 39.12a). After the collision, AFM tip continues to move along the desired path to the point P1,E , which is the first point less than the predefined relative height δ. It is noted that there must be one local point with the maximum relative height between P1,S and P1,E , and such a point is defined as P1 . Based on the geometry relation shown in Fig. 39.12a, one can see P1 has to be on the line defined by PO and object center P3 . The AFM tip will continue to move to the point P2 and then turn back to PO . Subsequently, the tip will move along the straight line linking PO and P1 to the destination P4 and then come back to PO to finish the searching. The highest point within the line between PO , P1 will be P3 , which is the center of the ball-like object, and the nanomanipulation system will label the object with the location P3 . Rod-Like Target Searching Strategy To search a rod-like target (length-to-width ratio η is larger than 1.4 but less than 25, otherwise, a string-like object) using spiral local scan,
the searching path is designed as (26), which has a second narrower sub-spiral scanning path defined as: ⎧ xS (t) = RS (t) · sin(ωc · t) + xP1 , ⎪ ⎪ ⎨ yS (t) = RS (t) · cos(ωc · t) + yP1 , ⎪ ⎪ ⎩ RS (t) = AS /2π · ωc · t, s.t. ωc · t ≤ 1.5π ,
(27)
where RS (t) denotes the varying radius of the scan path; [xS (t), yS (t)] represents the location of the sub-local scan path at the moment t; xP1 , yP1 are the regarding components of point P1 shown in Fig. 39.12b, respectively; the rotation angle constraint, 1.5π , is set to end the sub-local scan process. In order to guarantee a universal searching capability, the radius increasing rate AS of the spiral scanning pattern should be smaller than the diameter RC of the rodlike object (as shown in Fig. 39.12b). The AFM tip begins the searching from PO and follows the spiral path to touch the nanowire by detecting the point P1 via judging the collision event defined by a relative height threshold δ. After the collision, the AFM tip takes P1 as the center and performs the narrower sub-local scan to find details of the rode-like target. After rotating by 1.5π degree along the narrower spiral path, two local highest points P2 and P3 can be found, which define the orientation of the nano-rode target. To reveal the two end points of the target, AFM tip first goes from P3 to P4 and then goes back from P4 to P5 (as shown in Fig. 39.12b). Finally, AFM tip will leave from P5 to return to the starting point PO . With this strategy, the status of the rode-like target (e.g., a nanowire) can be efficiently captured, and the regarding location and orientation informant can be updated to the system for automatic nanomanipulation.
876
N. Xi et al.
Optimal Archimedean Spiral Local Scan for ROI Imaging In order to overcome the positioning error of the AFM tip for performing accurate operation to the nanoobject at the ROI, local image is chosen as the feedback. To obtain fast and qualified local images, the scanning pattern should be planned properly [52]. Similar to the object detection task, spiral scanning pattern is selected, which has been proven to be such an ideal option [29, 51, 52]. There is also difference between the object detection task and the ROI imaging task, that is, to guarantee orientationindependent feedback control, the imaging task should collect data uniformly to cover all unstructured features within the ROI. To fully utilize an AFM scanner to collect uniform data as fast as possible, Ziegler et al. proposed the ideal spiral pattern [29]; specifically, it is an Archimedean spiral curve, which can be generally represented as (28): ⎧ ⎪ ⎪ x(t) = RL (t) · sin(θ (t)) + x0 , ⎨ y(t) = RL (t) · cos(θ (t)) + y0 , ⎪ ⎪ ⎩ RL (t)/θ (t) ≡ A/2π,
(28)
where [x(t), y(t)] represents AFM tip location with center locating at [x0 , y0 ] in the Cartesian coordinate (as shown in Fig. 39.13a and b); θ(t) denotes the radian angle, and A represents the pitch which is a constant for the Archimedean spiral curve. To quantitatively describe the ROI image quality, data density of the ROI is introduced, which should be as uniform and dense as possible when the sampling rate Fr and scanning time TS (as shown in Fig. 39.13c) are fixed. To design such a desirable spiral curve, θ(t) and RL (t) are further formulated as (29):
RL (t) = N · A · f (t), θ(t) = 2π · N · f (t),
s.t. t ∈ [0, TS ], f (0) = 0, f (TS ) = 1, (29)
where f (·) represents an arbitrary function to be designed and N denotes the number of loops. Subsequently, the density function can be described using (30) [29]: ρ(t) =
Fr . 2f (t)f˙ (t)π(N · A)2
(30)
To guarantee uniformness, ρ(t) should be a constant. Therefore, one has to eliminate the term “f (t)f˙ (t)” shown in (30), and this results in (31): f (t) =
C1 · t/TS + C2 ,
(31)
with C1 and C2 representing two constant parameters. To meet the constraints f (0) = 0 and f (TS ) = 1, the parameters
C1 = 1 and C2 = 0 can be determined, and this makes the spiral curve (28) capable of generating an almost constant velocity vmax by tuning the scanning time TS to promptly, uniformly, and densely cover the ROI. With the representation (31), the density function ρ(t) can be further calculated as (32), which is called constant velocity density function. ρCV (t) =
Fr · T S . C1 π(N · A)2
(32)
Although one spiral pattern (28) is uniform for collecting data at all the time by choosing (31), it is noted that the ˙ approaches infinity as the time t apangular velocity θ(t) proaches zero. This problem deteriorates the imaging quality since the AFM scanner is unable to follow the high-frequency command properly due to limited bandwidth of its dynamics. Therefore, data at those desired locations may be lost or labeled wrongly. To fully utilize the AFM scanner while respecting its dynamics which should be under the upper boundary frequency ωU , the spiral pattern with constant angular velocity becomes the ideal option for the beginning period of a spiral trajectory, and this segment of trajectory can be represented as (33): f (t) =
ωU · t , 2π · N
s.t. 0 ≤ t ≤ TA ,
(33)
where TA is the end time (as illustrated in Fig. 39.13c) for the constant angular velocity trajectory. Therefore, the next problem comes to combining (31) and (33) together to shorten the scanning time while respecting the density requirement as well as the dynamics bandwidth limitation. To make the piecewise time function (34) smooth (the 1st order derivative should be continuous at the transition time TA ), the parameters TA , C1 , and C2 can be determined according to (34) as well [29]: ⎧ ω ·t U ⎪ , s.t. 0 ≤ t ≤ TA ⎪ ⎪ 2π ·N ⎨ f (t) = C1 · t/TS + C2 , s.t. TA < t ≤ TS , with ⎪ ⎪ ⎪ ⎩ ⎧ ⎪ ⎪ = T − TS2 − (2π · N)2 /ωU2 T S ⎪ ⎨ A . C1 = ωU2 · TA · TS /(2π 2 · N 2 ) ⎪ ⎪ ⎪ ⎩ C2 = 1 − C1 (34) Based on the required data density (32) that meets closedloop control requirement, the optimal imaging time TS can be resolved, and the optimal Archimedean spiral trajectory can be achieved.
Nanomanufacturing Automation
a)
877
b)
Y
c) TL
Working area
AFM tip (x0,y0)
R
X-axis Y-axis Local scan trajectory
L(
A
t)
(x0,y0)
Amplitude
39
TA Scanning time Ts O
X
Return
39
Time
Fig. 39.13 Optimal spiral local scan trajectory, (a) trajectory in the spatial domain, (b) scene of AFM tip performing local scan in the large working area (reference map), (c) trajectories of the X-axis and Y-axis in the time domain
Extended NVS Control Theory for Overcoming AFM Tip Positioning Error In order to perform prompt and reliable task space localization at the nanoscale, it is required to capture local images fast while maintaining acceptable quality. However, system uncertainty and random noise usually make the optimal spiral imaging strategy break the data density criterion (32), leading to sparse sampling with measurement uncertainty (typical comparison between the full and sparse imaging results can be found in Fig. 39.14). To robustly utilize the non-ideal but fast image feedback, the previous set-based NVS control theory is employed, which is insensitive to image distortion or disturbance [32, 33]. To further tackle the noisy situation, the extended version of NVS, ENVS control scheme (as shown in Fig. 39.14), is developed, which employs subsets as the calculation units to evaluate the distance between the desired set and the actual set (difference between two ROI images). Subset-Based Set Distance To reduce positioning error using image feedback-based NVS control scheme, first, the set formulation of an image and the associated set distance should be defined. A n × n pixels image H, where H = 2 [hi×n+j ] ∈ Rn , s.t. 1 ≤ i, j ≤ n, can be represented as a set {hi,j }n2 with n2 pixel elements. To bring in the order information of an image for relative location calculation, the subset {hk,l }m ⊂ {hi,j }n2 based new set formulation can be adopted, and the set X can be described as (35):
and hi,j denotes the pixel value with the 2D index [i, j]. It is noted that the basic element of the set X is actually an array comprised of formatted pixel values from subset {hk,l }m and the representative index [i, j]. Since the basic element of X contains the relative location information [i, j] as well as the adjacent circumstance information, it can guarantee all elements to be distinguishable even under heavy noise or the situation without obvious features, which is a necessary condition required by the setbased control approach [32]. To judge whether two sets are close to each other, the set distance concept is adopted to describe this metric. Any set distance d(X, Y) between set X and Y must satisfy three properties: symmetry, positive definiteness, and triangle inequality [53]. As proposed in this team’s works [30, 31], the Hausdorff distance is such an option, and the modified form is proposed as follows. Given a finite set of X ⊂ P = {[i, j, Vi,jT ]T Vi,j ∈ Rm , i, j ∈ Z + }, the distance between the set X and an element y ∈ P is defined as (36): dX (y) = min W(x − y) , x∈X
PX (y) =
Vi,j = [{hk,l | k = i + p, l = j + q, 1 ≤ k, l ≤ n}m ] ∈ R , (35) m
where Vi,j represents a vector generated from a subset {hk,l }m with m representing the number of the contained elements
(36)
where W ∈ R(m+2)×(m+2) denotes a diagonal weight matrix used to emphasize intensity information inside an element (for instance, it is defined as diag(w, w, 1, · · · , 1), w ∈ R+ in this chapter) and · represents Euclidean norm. The projection from y to X is defined as (37):
T T T T ] , · · · , [i, j, Vi,jT ]T , · · · , [n, n, Vn,n ] }, X = {[1, 1, V1,1
s.t. x, y ∈ Rm+2 ,
y, if dX (y) > ε, argx∈X {W(x − y) = dX (y)}, else,
(37)
where ε denotes a threshold for judging whether the projection is reliable, quantitatively. Based on the established projection function (37), the Hausdorff distance between the
878
N. Xi et al.
Center position (xi, xj) Subsetbased NVS law Γ(·)
uv(t)
Disturbance and noise +
1 s
Kinematic inversion
+
u(t)
+
+
Tip motion Extended NVS scheme
Desired image Xd(t)
AFM system
Non-ideal image X(t)
Optimal spiral local scan strategy
Xp(t)
Fig. 39.14 Schematic diagram of the extended non-vector space (ENVS) control scheme for AFM tip motion control
where : P → U is the projection from the feedback set X(t) to the control output u(t) ∈ U representing the set of D(X, Y) = max max{y − PX (y)}, max{x − PY (x)} , all the possible control signals and ϕ : E × U → BL(E, P) y∈Y x∈X denotes a mapping that projects a state to a bounded Lipschitz (38) function, which is the dynamics of element x(t) ∈ X(t). The which only evaluates the qualified elements. This property controllable set mutation (41) establishes a framework for is vital for robustness; it can remove those heavily distorted solving set matching problem. For a general case, the element X(t) could be diverse, thus may elements which are usually disturbances (e.g., those random dynamics x˙ = ϕ(x, u) ∈ ˚ features shown in the actual image but do not appear in the not be easy to solve [54]. However, for AFM tip localization task which is typically a 2D translation control problem, the reference image). controllable set mutation (41) turns to be simple [30]. Description of the Set Mutation To design an ENVS controller, the set dynamics of the AFM system should be Set Mutation Establishment for an AFM System To make studied. As a matter of fact, the set dynamics describes the (41) implementable, each element x of a local scan collected set-form output X(t) ⊂ P evolving with time, and it can be set X is formatted as an m + 2 dimensional array x = [xi , xj , Vi,jT ]T with Vi,j defined in (35) consisting of local topogmathematically represented as a mapping: X : R+ → P. To describe dynamics of each element inside the set X(t), raphy information of the ROI and xi , xj being the horizontal a bounded Lipschitz function ϕ : E → P with E ⊂ P is and vertical coordinates of the element location, which is introduced [54], and the set of all such functions is denoted governed by the dynamics of an AFM system. The AFM employed in this chapter has X-/Y-axis feedas BL(E, P). Based on these definitions, the transition of a backs, which greatly attenuate the input-output nonlinearity set (35) can be formulated as (39): of the system, such as hysteresis [19,20]; thus the kinematics (39) inversion (as shown in Fig. 39.14) can be treated as an identity Tϕ (t, X0 ) = {x(t) x˙ = ϕ(x), x(0) ∈ X0 }, X matrix I2×2 , and thus the element dynamics x˙ = ϕ(x, u) ∈ ˚ where x(0) ∈ P represents the element of X0 ⊂ P, which can be simply represented as (42): X(t) defined as denotes the initial state of the set dynamics ˚ (42) x˙ (t) = L(x)uv (t), with L(x) = B(I2×2 + d ), (40) based on mutation analysis [54]: set X and Y can then be defined as (38):
˚ X(t) = ϕ(x) ∈ BL(E, P) lim + t→0
1 D(X(t + t), Tϕ ( t, X(t))) = 0 . t
(40)
Based on (40), the controllable mutation equation for each element of set X(t) can be generally described as (41): ϕ(x(t), u(t)) ∈ ˚ X(t)
with u(t) = (X(t)),
(41)
where uv (t) ∈ R2×1 is the control law designed to regulate the virtual plant (42) consisting of an integral portion, an inverse kinematics part, and the physical plant (as shown in Fig. 39.14) and L(x) ∈ R(m+2)×2 is a general mapping matrix represented using B = [−I2×2 , [0]2×(m+2) ]T and d ∈ R2×2 being the uncertainty matrix (generated by system uncertainty) with Td d < I2×2 [32]. From the matrix B, one can see V(i, j) (topography data of the ROI) of element
39
Nanomanufacturing Automation
879
x is assumed to be time-invariant, and this can be guaranteed by setting ε of (37) to a small value to remove those fast changing elements such as the random disturbances. It is noted that each element of the controllable set muX(t) can be uniformly represented as (43) due to tation ˚ AFM translational motion property, which simplifies ENVS controller design. ϕ(x(t), uv (t)) ∈ ˚ X(t) = L(x)uv (t),
(43)
ENVS Control Law Establishment The problem of compensating positioning error at a desired location using the local image feedback can be formulated as designing an ENVS control law uv (t) to properly regulate (43) based on feedback X(t) to realize D(X(t), Xd (t)) → 0 as t → ∞, with Xd representing the desired local image in its set form. To build up such a control law uv (t), first a Lyapunov function (J : P → R+ ) should be established for describing convergence of the closed-loop system. This research employs the weighted Lyapunov function based on the modified Hausdorff distance (38), and it can be expressed as (44): J(X, t) =
1 2
+
[(x − PXd (x))T (x − PXd (x))]dx X
[(xd − PX (xd ))T (xd − PX (xd ))]dxd ,
(44)
Xd
where ∈ R(m+2)×(m+2) is a diagonal matrix to emphasize location information inside an element, and in this chapter, it is defined as (1 + w)I − W. Based on the Lyapunov function (44), the locally stabilizing control law uv (t) = (X(t)) can be derived. For the specific AFM-based nanomanipulator, the governing dynamics of x(t) ∈ X(t) is defined as x(t) independent linear equation (42). Thus, the stabilizing control law uv (t) exists as long as the term (I + d ) of (43) satisfies the positive definite condition [32]. Since the uncertainty matrix d is much less than the nominal output of the AFM system under most cases, the stabilizing control law uv (t) can be designed as (45): uv (t) = −αE(X, t),
(45)
error caused by system uncertainty (e.g., thermal drift), and highly accurate tip positioning capability can be achieved. Furthermore, nanoobject under manipulation can be detected online quickly through the local scan searching function, which enhances the nanomanufacturing efficiency.
39.3
Nanomanufacturing Processes of CNT-Based Devices
The nanomanufacturing process for nanodevices is not straightforward, especially in nano-material preparation, selection, and deposition processes. To prepare the nanomaterial, nanoobjects are usually dissolved into solution; then the nanoobject suspension is put in a ultrasonicator or a centrifuge for dispersing the nanoobjects. Afterward, specific properties of the nanoobjects should be selected. Finally, they are delivered to assemble the nanodevices. However, the nanoobjects are too small to be manipulated by traditional robotic systems; novel devices and systems have to be developed. Since the nanoobjects are dissolved into fluids, dielectrophoresis and microfluidic technology can be considered to perform the tasks. The material preparation can be done by micro mixers. The selection process can be done by micro filters. The deposition process can be done by integrating the micro channel and micro-active nozzle to deposit the nanoobject suspension. Since CNT is one of the most common nanoobjects with promising properties for making the next generation of nanodevices, in this session, the development of a novel automated CNT separation system to classify the electronic types of CNTs will be described, which involves the analysis for DEP force on CNTs and fabrication of a DEP micro chamber. Moreover, this DEP micro chamber was successfully integrated into an automated deposition workstation to manipulate a single CNT to multiple pairs of microelectrodes repeatedly. The automated deposition processes for both SWCNTs and MWCNTs will be presented. As a result, CNTbased nanodevices with specific and consistent electronic properties can be manufactured automatically. The resulting devices can potentially be used in commercial applications.
where α ∈ R+ is a gain to regulate the convergence rate and E(X, t) ∈ R2×1 is a column vector defined as (46): E(X, t) = LT −L
39.3.1 Dielectrophoretic Force on Nanoobjects
[(x − PXd (x))]dx X
(46) [(xd − PX (xd ))]dxd .
T Xd
By implementing the optimal spiral local scan-based ENVS scheme, prompt action can be generated by the AFMbased nanomanipulation system to overcome the positioning
Dielectrophoresis has been used to manipulate and separate different types of biological cells. DEP forces can be combined with field-flow fractionation for simultaneous separation and measurement [55]. DEP force induces movement of a particle or a nanoobject under non-uniform electric fields in liquid medium as shown in Fig. 39.15.
39
880
N. Xi et al.
FDEP =
AC voltage
(50)
where V is the volume of the nanoobject and ∇ | E |2 is the root-mean-square of the applied electric field. Based on this equation, the direction of DEP force is determined by the real part of the CM factor K. When Re[K] > 0, the DEP force is positive, and therefore the CNT is moved toward the microelectrode in the high electric field region. When Re[K] < 0, the DEP force is negative, and the particle is repelled away from the microelectrode. Besides, we can know that the magnitude and direction of DEP forces depend on the size and material properties of the nanoobjects, so separation of nanoobjects can be done.
Liquid medium Nonuniform electric field
Particle
1 Vεm Re(K)∇ | E |2 , 2
FDEP
Microelectrodes
39.3.2 Separating CNTs by Electronic Property Using Dielectrophoretic Effect Fig. 39.15 Illustration of the dielectrophoretic manipulation
The nanoobject is polarized when it is subjected to an electric field. The movement of the nanoobject depends on its polarization with respect to the surrounding medium [56]. When the nanoobject is more polarizable than the medium, a net dipole is induced parallel to the electric field in the nanoobject, and therefore the nanoobject is attracted to the high electric field region. On the contrary, an opposite net dipole is induced when the nanoobject is less polarizable than the medium, and the nanoobject is repelled by the high electric field region. The direction of DEP force on the particle is given by Clausius-Mossotti factor (CM factor, K). It is defined as a complex factor, describing a relaxation in the effective permittivity of the particle with a relaxation time described by [56, 57]: K(εp∗ , εm∗ ) =
εp∗ − εm∗ εp∗ + 2εm∗
.
(47)
Complex permittivities of the nanoobject (εp∗ ) and medium (εm∗ ) are defined and given by [56, 57]: εp∗ = εp − i
σp , ω
(48)
σm , (49) ω where εp and εm are the real permittivities of the nanoobject and the medium, respectively; σp and σm are the conductivities of the nanoobject and the medium, respectively; ω is the angular frequency of the applied electric field; so, the CM factor is frequency-dependent. The time-averaged DEP force acting on the particle is given by [56, 57]: εm∗ = εm − i
A theoretical analysis of the DEP manipulation on a CNT was performed; CM factors were calculated for a metallic SWCNT (m-SWCNT) and a semi-conducting MWCNTs (s-SWCNT), respectively. In the analysis, semi-conducting and metallic CNT mixtures are dispersed in the alcohol medium assuming the permittivities of a s-SWCNT and a m-SWCNT are 5ε0 [42] and 104 ε0 [58], respectively, where ε0 is the permittivity of free space (ε0 = 8.854188 × 10−12 F/m). The conductivities of a s-SWCNT and a m-SWCNT are 105 S/m and 108 S/m [58], respectively. The permittivity and conductivity of the alcohol are 20 ε0 and 0.13 μS/m, respectively. Based on these parameters and Eq. (47), plots of Re[K] for different CNTs are obtained and shown in Fig. 39.16. The result indicated that s-SWCNTs undergo a positive DEP force at low frequencies (< 1 MHz) while the DEP force is negative when the applied frequency is larger than 10 MHz. However, m-SWCNTs always undergo the positive DEP force at the applied frequency from 10 Hz to 109 Hz. The result also matches the experimental result from [43], which showed that the positive DEP effect on SWCNTs reduced as the frequency of applied electric field increased. In addition, the theoretical result provides a better understanding of the DEP manipulation on different types of CNTs. DEP force can be used to separate and identify different electronic types of CNTs (metallic and semi-conducting). Based on the result shown in Fig. 39.16, metallic CNTs can be selectively attracted to the microelectrodes by applying AC voltage in high frequency range (> 10 MHz). However, semi-conducting CNTs cannot be attracted by using the same frequency range; this makes the selection of semi-conducting CNTs become difficult. In order to select semi-conducting CNTs to make nanodevices, a micro chamber (DEP chamber) is developed with arrays of microelectrodes to filter metallic CNTs in the medium. Design and fabrication of the DEP chamber will be discussed in the next section.
39
Nanomanufacturing Automation
881
39.3.3 DEP Micro Chamber for Screening CNTs A DEP micro chamber was designed and fabricated to filter metallic CNTs in CNT suspension as shown in Fig. 39.17a. A lot of finger-like gold microelectrodes were first fabricated inside the chamber. The performance of the filtering process was affected by the design of these finger-like microelectrodes; the microelectrode structure with higher density induced stronger DEP force such that more CNTs could be attracted to the microelectrodes. The gap distance between these microelectrodes is 5–10 μm. The micro pump pumped the CNT suspension to the DEP chamber; a high frequency AC voltage was applied to the finger-like microelectrodes so that metallic CNTs were attracted to the microelectrodes and stayed in the DEP chamber. Semi-conducting CNTs remained in the suspension and flowed out of the chamber. Finally, the filtered suspension (with semi-conducting CNTs only) was transferred to an active nozzle for the CNT deposition process which will be described in the next subsection. The fabrication process of the DEP micro chamber is shown in Fig. 39.18. It was composed of two different substrates. Polymethylmethacrylate (PMMA) was used as the
K(w)
1 0.8 0.6
Positive DEP force
0.4 0.2 0
K (w) =
K(w) =
Negative DEP force
ep* – em*
top substrate because it is electrically and thermally insulating, optically transparent, and biocompatible. By using hot embossing technique, the PMMA substrate was patterned with a micro channel (5 mm L × 1 mm W × 500 μm H) and a micro chamber (1 cm L× 5 mm W× 500 μm H) by replicating from a fabricated metal mold. In order to protect the PMMA substrate from the CNTs-alcohol suspension, a parylene C thin film layer was coated on the substrate because parylene resists chemical attack and is insoluble in all organic solvents. Alternatively, quartz was used as the bottom substrate, and arrays of the gold microelectrodes were fabricated on the substrate by using a standard photolithography process. A layer of AZ5214E photoresist with thickness of 1.5 μm was first spun onto the 2 × 1 quartz substrate. It was then patterned by AB-M mask aligner and developed in AZ300 developer. A layer of titanium with thickness of 3 nm was deposited by thermal evaporator followed by depositing a layer of gold with thickness of 30 nm. The titanium provided a better adhesion between gold and quartz. Afterward, photoresist was removed in acetone solution, and arrays of microelectrodes were formed on the substrate. Finally, PMMA and quartz substrate were bonded together by UV-glue to form a close chamber. The fittings were connected at the ends of the channel to form an inlet and an outlet for the DEP chamber. The separation performance of the DEP chamber should be optimized for different nanoobjects. Several parameters should be considered in the process: the concentration of nanoobject in the suspension; the strength of the DEP force; the flow rate of the suspension in the DEP chamber; the structure of the channel; and the microelectrodes of the DEP chamber.
ep* + 2e m*
–0.2 –0.4 –0.6
Re [K(w )] for s-SWCNT in alcohol Re [K (w )] for m-SWCNT in alcohol 102
104
106
108
39.3.4 Automated Robotic CNT Deposition Workstation
Frequency (Hz)
Fig. 39.16 Plots of Re[K(ω)] that indicated positive and negative DEP forces on different CNTs
In order to manipulate a specific type of CNTs precisely and fabricate the CNT-based nanodevices effectively, a CNT deposition workstation has been developed as shown in
a)
b) Semiconducting CNTs
CNTs
Metallic CNTs CNT dilution in (with metallic and semiconducting CNTs)
Apply AC voltage to microelectrodes
CNT dilution out (semiconducting CNTs)
Microelectrodes
Fig. 39.17 Illustration of a DEP micro chamber to filter metallic CNT. Metallic CNTs are attracted on microelectrodes. Only semi-conducing CNTs flow to the outlet. (a) side view and (b) top view
39
882
N. Xi et al.
Fig. 39.19. The system consists of a micro-active nozzle, a DEP micro chamber, a DC micro-diaphragm pump, and three micromanipulators. By integrating these components into the deposition workstation, a specific type of CNTs can be deposited to the desired position of the microelectrodes precisely and automatically. The micron-sized active nozzle with a diameter of 10 μm was fabricated from a micro pipette using a mechanical puller and is shown in Fig. 39.20. It transferred the CNT suspension to the microelectrodes on a microchip, and a small droplet of the CNT suspension
Top substrate
Bottom substrate
Hot embossing of PMMA substrate on metal mold
Spin on photoresist on quartz substrate
PMMA
Pattern and develop PR
Metal mold
Deposit Ti and Au
Coat parylene C Remove PR
UV glue bonding Microchamber
PMMA
Parylene C
Quartz
Photoresist
Titanium
Gold
(about 400 ul) was deposited on the microchip due to the small diameter of the active nozzle. The volume of the droplet is critical because excessive CNT suspension causes multiple CNTs formation easily. The micro-active nozzle was then connected to the DEP chamber which was designed to filter metallic CNTs and select semiconducting CNTs in the CNT suspension. The raw CNT suspension was firstly pumped to the DEP chamber through a DC micro-diaphragm pump (NF10, KNF Neuberger, Inc.). After the filtering process, the CNT suspension from the DEP chamber was delivered to the active nozzle for CNT deposition. By mounting the active nozzle to one of the computer-controllable micromanipulators (CAP945, Signatone Corp.), the active nozzle could be moved to the desired position of the microelectrodes automatically. In order to apply the electric field to the microelectrodes during the deposition process, the other pair of micromanipulators were connected to an electrical circuit and moved to the desired location of the microelectrodes; therefore, AC voltage with different magnitudes and frequencies could be applied. The micromanipulators, DC micro-diaphragm pump, and electrical circuit were connected to the computer and controlled simultaneously during the deposition process. By controlling the position of the micromanipulators, the magnitude and frequency of the applied AC voltage, and the flow rate of the micro pump, the CNT suspension can be handled automatically and deposited to the desired position. With the help of the robotic CNT deposition workstation and the filtering system, mass of qualified CNTs with semiconducting property can be selected and locate onto the ROI of a microchip coarsely; subsequently, the CNTs can be attracted to the microelectrodes area or the neighborhood through the DEP force; finally, AFM-based manipulation system can be utilized to pattern the CNTs accurately to form circuits, and a CNT-based sensor can be then fabricated.
Fig. 39.18 The fabrication process of a DEP micro chamber
Electrical circuit of AC voltage for DEP manipulation
PC
Micropump for delivering CNT suspension
Micromanipulator for positioning active nozzle
Active nozzle
DEP chamber for filtering
Another electrical circuit of AC voltage for filtering
Fig. 39.19 Illustration of the CNT deposition workstation
CNT suspension
Microelectrodes on microchip
39
Nanomanufacturing Automation
39.4
883
Experimental Demonstration of the Nanomanufacturing Techniques
To elaborate effectiveness of the developed AFM-assisted automatic nanomanufacturing techniques, the following experiments and analysis are carried out.
39.4.1 Demonstration of the Local Scan-Based Nanomanipulation System To demonstrate performance of the key techniques of manufacturing at the nanoscale, an AFM-based nanomanipulation system was developed, and its configuration is shown in Fig. 39.21, consisting of one AFM (Dimension ICON,
15 kV 12.3 mm ×50 SE(U) 9/4/2006 18:32
1 mm
Fig. 39.20 SEM image of the micro active nozzle with 10 μm tip diameter
Haptic joystick Control command; Visualization;
Non-RT big data;
Spiral Local Scan-Based Positioning Error Compensation Test To show AFM positioning uncertainty, two groups of characterization tests were carried out on a polymer substrate: one is performed by scanning an area with 0◦ rotation angle, and the other is performed by scanning the same area with 90◦ rotation angle, respectively. Size of the scanned area was selected as 10 μm × 10 μm with 1024 × 1024 pixels. After capturing the reference image (about 20 min), more than 100 locations (as shown in Fig. 39.22a and d, marked with “+” ) were taken and imaged using the proposed optimal spiral local scan method. For each local image, the diameter was set to 41-pixel (about 400 nm). Finally, cross-correlation method was employed to roughly evaluate the location shifts between these actual local images and the corresponding images captured directly from the reference map [50]. Characterization results of the AFM tip positioning uncertainty are shown in Fig. 39.22b, c, e and f, and the distribu-
NSV AFM controller
AFM PC
3D location; Virtual force;
Bruker Inc., Santa Barbara, CA, USA) with a signal access module (SAM) for reading out AFM various sensor signals and receiving customer control commands; two PCs, one for running AFM imaging software to communicate with AFM controller directly and the other for running the developed nanomanipulation control scheme with functions such as tip positioning control (ENVS method), optimal spiral local scan, and visualization; one DAQ card (NI-PCI6221) for realtime control and sensing signal transferring; and a haptic joystick (Geomagic 3D Touch Stylus) for sending users’ command and feeding back virtual force to the users. During the experiments, peak-force-tapping mode and ScanAsist-air probe were employed to image and manipulate polystyrene nanoparticles (100 nm diameter) on the polymer material substrate.
AFM imaging signal;
Nanomanipulation PC
Nanomanipulation scheme
Fig. 39.21 Configuration of the AFM-based nanomanipulation system
Signal access module
NI DAQ card
RT control and sensing signals;
AFM physical system
All control signals; All sensor signals;
39
884
N. Xi et al.
tions are fitted using Gaussian distribution. The mean values of these distributions are the averaged difference between the reference locations and the actual locations, where Ave. = 0.7562 for (b), Ave. = −4.8070 for (c), Ave. = 2.6587 for (e), and Ave. = −4.8541 for (f), respectively. Based on these data, one can see there always be an offset for each distribution, which means the entire scanned area shifted in the relatively long imaging time (about 20 min). The standard deviation (S.D.) of each distribution is also obtained, where S.D. = 2.9266 for (b), S.D. = 3.6003 for (c), S.D. = 2.1731 for (e), and S.D. = 3.6679 for (f), respectively. It is interesting to see that the fast axis standard deviations are all larger than the counterparts of the slow axis, rendering nonuniform imaging quality of the raster scan. This test tells that there does exist AFM tip positioning error when the working time is relatively long, which is a common case for AFM-based nanomanipulation scenarios. Therefore, compensation of the tip positioning error is pretty necessary. To reduce the tip positioning uncertainty, the ENVS control scheme with optimal spiral local scan strategy is employed. The area shown in Fig. 39.22a was taken as the reference map. During the test, the following parameters were selected: 41 pixels for the local scan maximum diameter, 121 for the subset dimension m of the element x ∈ X of (35), and 0.5 for the gain α in control law (45) to guarantee stability and smoothness of the controller. In this test, ten locations
a)
marked with “+” in Fig. 39.23a were arbitrarily chosen as the centers of the desired areas, and the optimal spiral local scan was performed to image them. Based on the collected actual images of the local areas, the ENVS controller steered the AFM tip toward the desired locations to reduce the errors. After positioning error reduction, the desired and the final actual images are illustrated in Fig. 39.23b, and the error distributions of the fast and the slow axes are shown in Fig. 39.23c and d, respectively. By comparing with the distributions shown in Fig. 39.22b and c, one can see the average positioning errors have been greatly reduced, most of the errors are within 1 pixel (Ave. = −0.2703 for the slow axis and Ave. = 0.2055 for the fast axis), and the uncertainty standard deviation (S.D. = 0.6753 for the slow axis and S.D. = 0.8125 for the fast axis) has been attenuated by around 4 folds. This test demonstrates effectiveness of the proposed ENVS control scheme for overcoming the positioning uncertainty.
Spiral Local Scan-Based Nanoparticle Manipulation
To illustrate the delicate manipulation capability based on the spiral local scan target-searching strategy, a nanoparticle-pushing-for-alignment test was conducted. As shown in Fig. 39.24b, before the manipulation there were three particles sitting on a polymer substrate (7 μm × 7 μm).
Fig. 39.22 AFM positioning uncertainty characterization: (a) reference map with arbitrarily chosen locations marked by "+", imaged with 0° rotation angle; (b) slow-axis uncertainty distribution for the 0° rotation angle case; (c) fast-axis uncertainty distribution for the 0° rotation angle case; (d) reference map scanned with 90° rotation angle; (e) slow-axis uncertainty distribution for the 90° rotation angle case; (f) fast-axis uncertainty distribution for the 90° rotation angle case
Fig. 39.23 Reduction of positioning uncertainty by the ENVS control scheme: (a) arbitrarily chosen locations (marked by "+") in a reference image; (b) comparison between the desired and actual local images captured by the optimal spiral local scan approach; (c) error distribution of the slow axis; (d) error distribution of the fast axis
During the test, one marked nanoparticle was chosen and pushed to align with another on its right side. During the pushing process, the visualized model-based simulation was executed in real time to predict the behavior of the particle and provide visual feedback to the user. Figure 39.24e shows the time-domain data of the entire manipulation process; during the period between the 1st and the 9th second, the AFM tip was lowered 200 nm to reach the substrate surface for a reliable pushing operation. However, the pushing period (indicated by the probe bending status in Fig. 39.24e) did not last for the whole manipulation period; the tip–particle contact was lost after the 5th second. This can be caused by environmental uncertainty, such as the complicated interaction force between the nanoparticle and the substrate, which allows the particle to slip away from the AFM tip. When the "non-contact" bending status of the AFM probe had been detected for a while during the pushing operation, the spiral local scan was started. The 3D scanning path is shown in Fig. 39.24d, where three points with locally maximum height (labeled (1), (2), and (3) in plots (d) and (e), respectively) were detected, based on which the nanoparticle location was captured and illustrated in the visualized simulation system
(as shown in Fig. 39.24a), and the user can then manipulate the particle efficiently according to the simulated status. After the manipulation, a new AFM scan was performed to capture the actual status of the nanoparticles (as shown in Fig. 39.24c), where one can see that the actual result is very close to the simulated manipulation result. By combining the precise positioning capability regulated by the ENVS scheme with the local scan searching function, nanoobjects can be automatically patterned along a planned path.
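The target-searching step can be pictured with a short sketch: generate an Archimedean spiral of waypoints around the last known particle position, sample the height at each waypoint, and take the cluster of high points as the new particle location. The Python fragment below is only an illustration of that idea, not the chapter's optimal spiral implementation; the `height_at` callback, the pitch and step parameters, and the height threshold are assumptions.

```python
import numpy as np

def archimedean_spiral(pitch_nm=50.0, max_radius_nm=500.0, step_nm=10.0):
    """Waypoints of an Archimedean spiral r = pitch * theta / (2*pi), out to
    max_radius_nm, with roughly step_nm spacing along the path."""
    a = pitch_nm / (2.0 * np.pi)
    pts, theta = [], 0.0
    while a * theta <= max_radius_nm:
        r = a * theta
        pts.append((r * np.cos(theta), r * np.sin(theta)))
        theta += step_nm / max(r, step_nm)   # approximate arc-length stepping
    return np.array(pts)

def find_particle(height_at, path, threshold_nm=20.0):
    """Sample the height along the spiral and average the locations whose height
    exceeds threshold_nm, taken here as the particle position estimate."""
    hits = [(x, y) for x, y in path if height_at(x, y) > threshold_nm]
    if not hits:
        return None
    xs, ys = zip(*hits)
    return float(np.mean(xs)), float(np.mean(ys))

if __name__ == "__main__":
    # toy height field: a bump centered 150 nm to the right of the tip position
    bump = lambda x, y: 50.0 * np.exp(-((x - 150.0) ** 2 + y ** 2) / (2 * 50.0 ** 2))
    print(find_particle(bump, archimedean_spiral()))
```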
39.4.2 Fabrication and Testing of CNT-Based Infrared Detector
As a demonstration, a CNT-based infrared detector was fabricated. The fabrication process comprises filtering to obtain qualified CNTs, depositing them onto the microchip ROIs, and attracting and manipulating them via the DEP method and the AFM-based manipulation approach to form circuits.
Fig. 39.24 Demonstration of spiral local scan-based nanoparticle delicate manipulation: (a) simulation illustrating the desired status of the particle after manipulation; (b) particle status before manipulation during the experiment; (c) the real particle status after manipulation captured by a new AFM scan; (d) 3D trajectory of the tip motion during the spiral local scan for target searching; (e) time-domain signals recorded during the pushing operation
Fig. 39.25 CNT deposition process flow observed under the optical microscope. (a) The active nozzle tip aligned to the initial electrodes, (b) CNT suspension deposited, (c) the nozzle was moving to the next electrodes, (d) CNT suspension deposited on the second electrodes
In the deposition process, an AC voltage of 1.5 V peak-to-peak at a frequency of 1 kHz was applied, and a positive DEP force was induced to attract CNTs to the microelectrodes. CNT deposition on multiple pairs of microelectrodes was implemented by controlling the movement of the micromanipulator to which the active nozzle was attached. Since the position of each pair of microelectrodes was known from the design CAD file, the distances (along the x and y axes) between the pairs of microelectrodes were calculated and recorded in the deposition system. At the
start, the active nozzle was aligned to the first pair of the microelectrodes as shown in Fig. 39.25a. The position of the active nozzle tip was 2 mm above the microchip. When the deposition process started, the active nozzle moved down 2 mm, and a droplet of CNT suspension was deposited on the first pair of microelectrodes as shown in Fig. 39.25b. Afterward, the active nozzle moved up 2 mm and traveled to the next pair of microelectrodes as shown in Fig. 39.25c. The micromanipulator moved down again to deposit the CNT suspension on the second pair of microelectrodes as shown in
Fig. 39.25d. This process was repeated until CNT suspension had been deposited on each pair of microelectrodes on the microchip. By then activating the AC voltage on all pairs simultaneously, a CNT was attracted and connected between each pair of microelectrodes. The activation time was kept short (∼2 s) to avoid the formation of CNT bundles on the microelectrodes. After the deposition process, the robotic AFM was used to check and manipulate the CNT formation as shown in Fig. 39.26. Sometimes impurities, or more than one CNT, are trapped between the microelectrodes, as shown in Fig. 39.26f. It is therefore necessary to take another step to clean up the microelectrode gap area and adjust the position of the CNT to make the connection. This final, critical step is termed CNT assembly and can be performed by the AFM-based nanomanipulation system. I-V characteristics of the CNT-based devices were also obtained, as shown in the inset images of Fig. 39.26. The results indicate that both SWCNTs and MWCNTs can be repeatedly and automatically placed between the microelectrodes using the deposition system. This CNT deposition workstation integrates all the essential components to manipulate the specified type of CNTs to the desired positions precisely by DEP force.
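As a compact illustration of this deposition cycle (align, lower the nozzle, dispense a droplet, raise, move on, then briefly activate the DEP voltage), the following Python sketch strings the steps together. It is not the workstation's actual control code: the electrode-pair coordinates and every hardware callback (`move_nozzle_to`, `dispense_droplet`, `apply_ac_voltage`, and so on) are placeholders; only the 2 mm travel, the 1.5 Vpp/1 kHz excitation, and the roughly 2 s activation time are taken from the text.

```python
# Positions of the microelectrode pairs as they would be read from the design
# CAD file (the coordinates below are invented for the example).
ELECTRODE_PAIRS_MM = [(0.00, 0.00), (0.35, 0.00), (0.70, 0.00), (0.70, 0.35)]

Z_TRAVEL_MM = 2.0        # nozzle travel above the chip, as described in the text
ACTIVATION_TIME_S = 2.0  # keep the DEP activation short to avoid CNT bundles

def deposit_all(move_nozzle_to, move_nozzle_down, move_nozzle_up,
                dispense_droplet, apply_ac_voltage):
    """One pass of the deposition cycle over every electrode pair, followed by a
    brief simultaneous DEP activation; all callbacks are hardware placeholders."""
    for x_mm, y_mm in ELECTRODE_PAIRS_MM:
        move_nozzle_to(x_mm, y_mm)       # align the active nozzle with the pair
        move_nozzle_down(Z_TRAVEL_MM)    # bring the tip down toward the chip
        dispense_droplet()               # deposit a droplet of CNT suspension
        move_nozzle_up(Z_TRAVEL_MM)      # retract before traveling on
    apply_ac_voltage(volts_pp=1.5, freq_hz=1_000, duration_s=ACTIVATION_TIME_S)

if __name__ == "__main__":
    deposit_all(
        move_nozzle_to=lambda x, y: print(f"align at ({x:.2f}, {y:.2f}) mm"),
        move_nozzle_down=lambda dz: print(f"down {dz} mm"),
        move_nozzle_up=lambda dz: print(f"up {dz} mm"),
        dispense_droplet=lambda: print("dispense CNT droplet"),
        apply_ac_voltage=lambda volts_pp, freq_hz, duration_s: print(
            f"DEP activation: {volts_pp} Vpp at {freq_hz} Hz for {duration_s} s"),
    )
```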
The development of this system benefits the assembly and manufacturing of CNT-based devices. The yield of depositing CNTs on the microelectrodes is very high after optimizing the following factors: the concentration of the CNT suspension; the volume of the CNT suspension droplet; and the activation time, magnitude, and frequency of the applied electric field. To validate the separation performance of the DEP chamber for different CNTs, experiments were conducted with both the raw CNT suspension (before passing through the DEP chamber) and the filtered CNT suspension. The procedure for preparing the CNT suspension was the same as the process presented in the previous section. SWCNT powder (BU-203, Bucky USA, Nanotex Corp.) was dispersed in an alcohol liquid medium, and the CNT suspension was placed in the ultrasonicator for 15 min. The length of the SWCNTs is 0.5–4 μm. The resulting raw SWCNT suspension had a concentration of about 1.1 μg/ml. Afterward, the separation process was performed on the raw SWCNT suspension as illustrated in Fig. 39.27. During the process, the raw SWCNT suspension was pumped into the DEP chamber by the micropump. The flow rate was
Fig. 39.26 AFM images showing individual CNTs deposited on the microelectrodes: (a)–(c) SWCNTs; (d)–(f) MWCNTs. The inset images are the corresponding I-V curves of each CNT
Fig. 39.27 Illustration of the CNT filtering process (raw CNT suspension → micropump → selection chamber → filtered CNT suspension)
Fig. 39.28 I-V characteristics of SWCNTs. (a) For the raw SWCNT suspension. (b) For the filtered SWCNT suspension
about 0.03 l/min. A high-frequency AC voltage (1.5 Vpp, 40 MHz) was applied to the microelectrodes in the DEP chamber, inducing a positive DEP force on the metallic SWCNTs in the suspension and a negative DEP force on the semiconducting SWCNTs. Since the metallic SWCNTs were attracted to the microelectrodes and stayed in the DEP chamber, it is expected that mainly semiconducting SWCNTs remained in the filtered SWCNT suspension. The filtered SWCNT suspension was then collected at the outlet of the chamber for the subsequent CNT deposition process. After preparing the raw and filtered SWCNT suspensions, the deposition process was performed using our CNT deposition workstation. In the experiment, the raw SWCNT suspension and the filtered SWCNT suspension were each deposited on the microchip 20 times. The electronic properties of the CNTs in both suspensions were then studied by measuring their I-V curves, and the yields of semiconducting CNTs obtained from the raw and the filtered CNT suspensions were compared. Based
on the preliminary results, the yield of depositing semiconducting SWCNTs from the raw SWCNT suspension was about 33%, as shown in Fig. 39.28a, whereas the yield from the filtered SWCNT suspension was about 65%, as shown in Fig. 39.28b. The yield of semiconducting CNTs is important because many devices require material with semiconducting properties, and it directly affects the success rate of nanodevice fabrication. The results therefore indicate a significant improvement in forming semiconducting CNTs on the microelectrodes when the DEP chamber is used: the yield changed from 33% (before the filtering process) to 65% (after the filtering process). The yield could be improved further by optimizing the concentration of the CNT suspension, the strength of the DEP force, the flow rate of the suspension in the DEP chamber, the structure of the channel, and the microelectrodes of the DEP chamber. The yield of semiconducting CNTs was thus increased by using the developed automated system.
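The yield figures come from labeling each deposited device as metallic or semiconducting from its measured I-V curve. A minimal way to automate that bookkeeping is sketched below; the low-bias conductance threshold and the synthetic I-V data are illustrative assumptions, not the classification criterion or values used in the chapter.

```python
import numpy as np

def classify_device(v, i, g_threshold_s=1e-7):
    """Label one deposited CNT device from its I-V sweep: a large, nearly ohmic
    low-bias conductance is treated as metallic, otherwise semiconducting.
    The 0.2 V window and the threshold are illustrative assumptions."""
    low_bias = np.abs(v) < 0.2
    conductance = np.polyfit(v[low_bias], i[low_bias], 1)[0]   # slope in siemens
    return "metallic" if conductance > g_threshold_s else "semiconducting"

def semiconducting_yield(devices):
    """Fraction of devices classified as semiconducting."""
    labels = [classify_device(v, i) for v, i in devices]
    return labels.count("semiconducting") / len(labels)

if __name__ == "__main__":
    v = np.linspace(-1.0, 1.0, 101)
    metallic = (v, 2e-5 * v)                    # ohmic, tens of microsiemens
    semi = (v, 1e-9 * np.sinh(3.0 * v))         # nonlinear, nanoampere currents
    print(semiconducting_yield([metallic, semi, semi]))   # -> about 0.67
```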
Fig. 39.29 Temporal photoresponses of a CNT-based IR detector
It should be noted that both CNT synthesis methods and post-processing separation methods are important and can be combined for different applications; the developed separation system is a post-processing method that can be used together with different CNT synthesis methods. Since our system uses electrical signals to control the DEP manipulation and separation, it can easily be integrated with current robotic manufacturing systems, and eventually the process can be operated automatically and precisely. As a result, batch nanomanufacturing of nanodevices can be achieved by this system. Once semiconducting CNTs are deposited on the microelectrodes, the photonic effects of the CNT-based nanodevice can be studied. For instance, a fabricated CNT-based microchip detector was placed under an infrared (IR) laser source (UH5-30G-830-PV, World Star Tech; optical power, 30 mW; wavelength, 830 nm), and the photocurrent from the CNT-based nanodevice was measured. The laser source was configured to switch on and off over several cycles; the temporal photoresponses of the device are shown in Fig. 39.29. The experimental results show that the CNT-based device is sensitive to the IR laser and can thus be used to make novel IR detectors.
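A simple way to quantify such a temporal photoresponse is to compare the mean current while the laser is on against the mean current while it is off. The sketch below illustrates this on synthetic data; the 5 s switching period, the noise level, and the current values are assumptions chosen only to mimic the shape of Fig. 39.29.

```python
import numpy as np

def photoresponse_na(t_s, current_na, period_s=5.0):
    """Photoresponse as the difference between the mean current with the laser ON
    and with it OFF, assuming the laser toggles every period_s seconds and starts
    OFF (the timing is an assumption for the example)."""
    laser_on = (np.floor(t_s / period_s).astype(int) % 2) == 1
    return current_na[laser_on].mean() - current_na[~laser_on].mean()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.arange(0.0, 30.0, 0.1)
    on = (np.floor(t / 5.0).astype(int) % 2) == 1
    i = np.where(on, -1.5, -0.3) + 0.02 * rng.standard_normal(t.size)  # synthetic trace
    print(f"photoresponse = {photoresponse_na(t, i):.2f} nA")          # about -1.2
```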
39.5 Conclusions and Emerging Trends
Automated nanomanipulation is desirable to increase the efficiency and accuracy of nano-assembly. Automated nano-assembly of nanostructures is very challenging because of the need to generate manipulation paths for different nanoobjects and because of the position errors caused by system uncertainty (e.g., thermal noise). This chapter discusses automated nanomanipulation technology for nano-assembly. Automated nanomanipulation methods for nanoobjects have been developed. An automated spiral local scan method is presented to detect the actual location of nanoobjects under manipulation, and the ENVS control scheme is introduced to reduce the random positioning error of the AFM tip. A CAD-guided automated nano-assembly method is also developed; CAD-guided automated nano-assembly could open the door to assembling complicated nanostructures and nanodevices. In addition, CNT separation by a DEP chamber and the development of an automated CNT deposition workstation that applies DEP manipulation to CNTs have been presented. The system assembles semiconducting CNTs onto the microelectrodes effectively, and it is therefore possible to improve the success rate of fabricating nanodevices. The separation method developed in this chapter is a post-processing method that can be used together with different CNT synthesis methods. Since our system uses electrical signals to control the CNT separation and DEP manipulation, it can easily be integrated into current robotic manufacturing systems, and eventually the process can be operated automatically and precisely. It opens the possibility of batch fabrication of CNT-based devices. We should also mention that, compared with the prominent large-scale nanomanufacturing methods, such as the BCP self-assembly method [3] and the vapor-/chemical-based methods [1], the developed CAD-guided robotic AFM method for CNT nanodevice fabrication has the drawback of lower throughput. However, it provides a much more flexible and affordable way to quickly manufacture prototype nanodevices and verify protocols, which is vital for smaller R&D facilities. Nowadays, promising attempts illustrate the advantages of combining AFM-based technology with SEM and FIB for advanced nanomanufacturing [4], and there are also many notable and emerging scanning probe microscopy-based methodologies; see [59] for more technical details.
References 1. Cooper, K.: Scalable nanomanufacturing—A review. Micromachines 8(1), 20 (2017) 2. Binnig, G., Quate, C.F., Gerber, C.: Atomic force microscope. Phys. Rev. Lett. 56(9), 930 (1986) 3. Cummins, C., Lundy, R., Walsh, J.J., Ponsinet, V., Fleury, G., Morris, M.A.: Enabling future nanomanufacturing through block copolymer self-assembly: A review. Nano Today 35, 100936 (2020) 4. Rangelow, I.W., Kaestner, M., Ivanov, T., Ahmad, A., Lenk, S., Lenk, C., Guliyev, E., Reum, A., Hofmann, M., Reuter, C., et al.: Atomic force microscope integrated with a scanning electron microscope for correlative nanofabrication and microscopy. J. Vacuum Sci. Tech. B Nanotechnol. Microelectron. Mater. Process. Measur. Phenomena 36(6), 06J102 (2018) 5. Engstrom, D.S., Porter, B., Pacios, M., Bhaskaran, H.: Additive nanomanufacturing—A review. J. Mater. Res. 29(17), 1792–1816 (2014)
890 6. Lyuksyutov, S.F., Vaia, R.A., Paramonov, P.B., Juhl, S., Waterhouse, L., Ralich, R.M., Sigalov, G., Sancaktar, E.: Electrostatic nanolithography in polymers using atomic force microscopy. Nature Materials 2(7), 468–472 (2003) 7. Salaita, K., Wang, Y., Mirkin, C.A.: Applications of dip-pen nanolithography. Nature Nanotechnology 2(3), 145–155 (2007) 8. Sugimoto, Y., Pou, P., Custance, O., Jelinek, P., Abe, M., Perez, R., Morita, S.: Complex patterning by vertical interchange atom manipulation using atomic force microscopy. Science 322(5900), 413–417 (2008) 9. Custance, O., Perez, R., Morita, S.: Atomic force microscopy as a tool for atom manipulation. Nature Nanotechnology 4(12), 803– 810 (2009) 10. Korayem, M., Khaksar, H.: A survey on dynamic modeling of manipulation of nanoparticles based on atomic force microscope and investigation of involved factors. J. Nanopart. Res. 22(1), 1–19 (2020) 11. Yang, R., Song, B., Sun, Z., Lai, K.W.C., Fung, C.K.M., Patterson, K.C., Seiffert-Sinha, K., Sinha, A.A., Xi, N.: Cellular level robotic surgery: Nanodissection of intermediate filaments in live keratinocytes. Nanomed. Nanotechnol. Biol. Med. (2014) 12. Nikooienejad, N., Alipour, A., Maroufi, M., Moheimani, S.R.: Video-rate non-raster afm imaging with cycloid trajectory. IEEE Trans. Control Syst. Technol. (2018) 13. Xie, H., Wen, Y., Shen, X., Zhang, H., Sun, L.: High-speed afm imaging of nanopositioning stages using h∞ and iterative learning control. IEEE Trans. Ind. Electron. (2019) 14. Sitti, M., Hashimoto, H.: Tele-nanorobotics using atomic force microscope. In: Proceedings. 1998 IEEE/RSJ International Conference on Intelligent Robots and Systems. Innovations in Theory, Practice and Applications (Cat. No. 98CH36190), vol. 3, pp. 1739– 1746. IEEE (1998) 15. Guthold, M., Falvo, M.R., Matthews, W.G., Paulson, S., Washburn, S., Erie, D.A., Superfine, R., Brooks, F., Taylor, R.M.: Controlled manipulation of molecular samples with the nanomanipulator. IEEE/ASME Trans. Mechatron. 5(2), 189–198 (2000) 16. Li, G., Xi, N., Yu, M., Fung, W.K.: 3d nanomanipulation using atomic force microscopy. In: 2003 IEEE International Conference on Robotics and Automation (Cat. No. 03CH37422), vol. 3, pp. 3642–3647. IEEE (2003) 17. Leang, K.K., Devasia, S.: Feedback-linearized inverse feedforward for creep, hysteresis, and vibration compensation in afm piezoactuators. IEEE Trans. Control Syst. Technol. 15(5), 927–935 (2007) 18. Zhang, J., Merced, E., Sepúlveda, N., Tan, X.: Optimal compression of generalized prandtl–ishlinskii hysteresis models. Automatica 57, 170–179 (2015) 19. Oliveri, A., Stellino, F., Caluori, G., Parodi, M., Storace, M.: Openloop compensation of hysteresis and creep through a power-law circuit model. IEEE Trans. Circuits Syst. I Regular Papers 63(3), 413–422 (2016) 20. Sun, Z., Song, B., Xi, N., Yang, R., Hao, L., Yang, Y., Chen, L.: Asymmetric hysteresis modeling and compensation approach for nanomanipulation system motion control considering workingrange effect. IEEE Trans. Ind. Electron. 64(7), 5513 (2017) 21. Staub, R., Alliata, D., Nicolini, C.: Drift elimination in the calibration of scanning probe microscopes. Rev. Sci. Instrum. 66(3), 2513–2516 (1995) 22. Mokaberi, B., Requicha, A.A.: Drift compensation for automatic nanomanipulation with scanning probe microscopes. IEEE Trans. Autom. Sci. Eng. 3(3), 199–207 (2006) 23. Belikov, S., Shi, J., Su, C.: Afm image based pattern detection for adaptive drift compensation and positioning at the nanometer scale. 
In: American Control Conference, 2008, pp. 2046–2051. IEEE (2008)
N. Xi et al. 24. Rahe, P., Schütte, J., Schniederberend, W., Reichling, M., Abe, M., Sugimoto, Y., Kühnle, A.: Flexible drift-compensation system for precise 3d force mapping in severe drift environments. Rev. Sci. Instrum. 82(6), 063704 (2011) 25. Li, G., Wang, Y., Liu, L.: Drift compensation in afm-based nanomanipulation by strategic local scan. IEEE Trans. Autom. Sci. Eng. 9(4), 755–762 (2012) 26. Meyer, T.R., Ziegler, D., Brune, C., Chen, A., Farnham, R., Huynh, N., Chang, J.M., Bertozzi, A.L., Ashby, P.D.: Height drift correction in non-raster atomic force microscopy. Ultramicroscopy 137, 48–54 (2014) 27. Yuan, S., Wang, Z., Liu, L., Xi, N., Wang, Y.: Stochastic approach for feature-based tip localization and planning in nanomanipulations. IEEE Trans. Autom. Sci. Eng. 14(4), 1643–1654 (2017) 28. Chen, H., Xi, N., Sheng, W., Chen, Y.: General framework of optimal tool trajectory planning for free-form surfaces in surface manufacturing. J. Manuf. Sci. Eng. 127(1), 49–59 (2005) 29. Ziegler, D., Meyer, T.R., Amrein, A., Bertozzi, A.L., Ashby, P.D.: Ideal scan path for high-speed atomic force microscopy. IEEE/ASME Trans. Mechatron. 22(1), 381–391 (2016) 30. Zhao, J., Song, B., Xi, N., Sun, L., Chen, H., Jia, Y.: Non-vector space approach for nanoscale motion control. Automatica (2014) 31. Song, B., Zhao, J., Xi, N., Chen, H., Lai, K.W.C., Yang, R., Chen, L.: Compressive feedback-based motion control for nanomanipulation—theory and applications. IEEE Trans. Robot. 30(1) (2014) 32. Song, B., Sun, Z., Xi, N., Yang, R., Cheng, Y., Chen, L., Dong, L.: Enhanced nonvector space approach for nanoscale motion control. IEEE Trans. Nanotechnol. 17(5), 994–1005 (2018) 33. Sun, Z., Cheng, Y., Xi, N., Yang, R., Yang, Y., Chen, L., Song, B.: Characterizing afm tip lateral positioning variability through nonvector space control-based nanometrology. IEEE Trans. Nanotechnol. 19, 56–60 (2019) 34. Franklin, A.D., Luisier, M., Han, S.J., Tulevski, G., Breslin, C.M., Gignac, L., Lundstrom, M.S., Haensch, W.: Sub-10 nm carbon nanotube transistor. Nano Letters 12(2), 758–762 (2012) 35. Shulaker, M.M., Hills, G., Patil, N., Wei, H., Chen, H.Y., Wong, H.S.P., Mitra, S.: Carbon nanotube computer. Nature 501(7468), 526–530 (2013) 36. Chen, H., Xi, N., Song, B., Chen, L., Zhao, J., Lai, K.W.C., Yang, R.: Infrared camera using a single nano-photodetector. IEEE Sens. J. 13(3), 949–958 (2012) 37. Chen, L., Yu, M., Xi, N., Song, B., Yang, Y., Zhou, Z., Sun, Z., Cheng, Y., Wu, Y., Hou, C., et al.: Characterization of carbon nanotube based infrared photodetector using digital microscopy. J. Nanosci. Nanotechnol. 17(1), 482–487 (2017) 38. Valentini, L., Armentano, I., Kenny, J., Cantalini, C., Lozzi, L., Santucci, S.: Sensors for sub-ppm no 2 gas detection based on carbon nanotube thin films. Appl. Phys. Lett. 82(6), 961–963 (2003) 39. Schütt, F., Postica, V., Adelung, R., Lupan, O.: Single and networked zno–cnt hybrid tetrapods for selective room-temperature high-performance ammonia sensors. ACS Appl. Mater. Interf. 9(27), 23107–23118 (2017) 40. Arnold, M.S., Green, A.A., Hulvat, J.F., Stupp, S.I., Hersam, M.C.: Sorting carbon nanotubes by electronic structure using density differentiation. Nature Nanotechnology 1(1), 60–65 (2006) 41. Collins, P.G., Arnold, M.S., Avouris, P.: Engineering carbon nanotubes and nanotube circuits using electrical breakdown. Science 292(5517), 706–709 (2001) 42. Krupke, R., Hennrich, F., Löhneysen, H.v., Kappes, M.M.: Separation of metallic from semiconducting single-walled carbon nanotubes. 
Science 301(5631), 344–347 (2003)
43. Krupke, R., Linden, S., Rapp, M., Hennrich, F.: Thin films of metallic carbon nanotubes prepared by dielectrophoresis. Advanced Materials 18(11), 1468–1470 (2006) 44. Israelachvili, J.N.: Intermolecular and Surface Forces. Academic Press (2011) 45. McCormick, G.P.: Nonlinear programming; theory, algorithms, and applications. Tech. rep. (1983) 46. Avriel, M.: Nonlinear Programming: Analysis and Methods. Courier Corporation (2003) 47. Liu, L., Luo, Y., Xi, N., Wang, Y., Zhang, J., Li, G.: Sensor referenced real-time videolization of atomic force microscopy for nanomanipulations. IEEE/ASME Trans. Mechatron. 13(1), 76–85 (2008) 48. Yuan, S., Wang, Z., Xi, N., Wang, Y., Liu, L.: Afm tip position control in situ for effective nanomanipulation. IEEE/ASME Trans. Mechatron. 23(6), 2825–2836 (2018) 49. Sun, Z., Xi, N., Yu, H., Xue, Y., Bi, S., Chen, L.: Enhancing environmental sensing capability of afm-based nanorobot via spiral local scan strategy. In: 2019 IEEE 19th International Conference on Nanotechnology (IEEE-NANO), pp. 141–146. IEEE (2019) 50. Sun, Z., Xi, N., Xue, Y., Cheng, Y., Chen, L., Yang, R., Song, B.: Task space motion control for afm-based nanorobot using optimal and ultralimit archimedean spiral local scan. IEEE Robot. Autom. Lett. 5(2), 282–289 (2019) 51. Rana, M., Pota, H., Petersen, I.: Spiral scanning with improved control for faster imaging of afm. IEEE Trans. Nanotechnol. 13(3), 541–550 (2014) 52. Das, S.K., Badal, F.R., Rahman, M.A., Islam, M.A., Sarker, S.K., Paul, N.: Improvement of alternative non-raster scanning methods for high speed atomic force microscopy: A review. IEEE Access 7, 115603–115624 (2019) 53. Rudin, W.: Principles of Mathematical Analysis, 3rd edn. McGrawHill Science/Engineering/Math, New York City, US (1976) 54. Doyen, L.: Mutational equations for shapes and vision-based control. J. Math. Imag. Vis. 5(2), 99–109 (1995) 55. Giddings, J.C.: Field-flow fractionation: analysis of macromolecular, colloidal, and particulate materials. Science 260(5113), 1456– 1465 (1993) 56. Morgan, H., Green, N.: Ac Electrokinetics: Colloids and Nanoparticles, vol. 324. Research Studies Press Ltd., Hertfordshire, England (2003) 57. Jones, T.B., Jones, T.B.: Electromechanics of Particles. Cambridge University Press (2005) 58. Dimaki, M., Bøggild, P.: Dielectrophoresis of carbon nanotubes using microelectrodes: A numerical study. Nanotechnology 15(8), 1095 (2004) 59. Bhushan, B.: Springer Handbook of Nanotechnology. Springer (2017)
Ning Xi received the D.Sc. degree in systems science and mathematics from Washington University, St. Louis, MO, USA, in 1993. He is currently the Chair Professor of Robotics and Automation with the Department of Industrial and Manufacturing Systems Engineering, The University of Hong Kong, Hong Kong SAR, China. Before joining The University of Hong Kong, he was a University Distinguished Professor, the John D. Ryder Professor of Electrical and Computer Engineering, Michigan State University, East Lansing, MI, USA. He served as the Founding Head for the Department of Biomedical Engineering of the City University of Hong Kong from 2011 to 2013. He also served as the President for the IEEE Nanotechnology Council from 2010 to 2011. His research interests include robotics, manufacturing automation, micro/nanomanufacturing, nano-bio technology, sensors, and intelligent control and systems.
King Wai Chiu Lai received the Ph.D. degree from the Department of Automation and Computer-Aided Engineering, Chinese University of Hong Kong, Hong Kong SAR, China, in 2005. He was a Post-Doctoral Research Associate with the Department of Electrical and Computer Engineering, Michigan State University, East Lansing, MI, USA, from 2006 to 2012. He is currently an Associate Professor with the Department of Biomedical Engineering, City University of Hong Kong, Hong Kong SAR, China. His current research interests include the development of micro/nano-sensors using MEMS and nanotechnology, nanobiotechnology, automation, and micro/nanomanipulation.
Heping Chen received the Ph.D. degree in electrical and computer engineering from Michigan State University, East Lansing, MI, USA, in 2003. From 2005 to 2010, he was with the Robotics and Automation Labs, ABB Corporate Research, ABB, Inc., Windsor, CT, USA. He is currently an Associate Professor with the Ingram School of Engineering, Texas State University, San Marcos, TX, USA. His research interests include micro/nanomanufacturing, micro/nanorobotics, industrial automation, and control system design and implementation.
Zhiyong Sun received the Ph.D. degree in mechatronics engineering from Northeastern University, Shenyang, China, in 2016. He was a joint Ph.D. student with the Department of Electrical and Computer Engineering, Michigan State University, East Lansing, MI, USA, from 2013 to 2015. He was a Post-Doctoral Fellow with the Department of Industrial and Manufacturing Systems Engineering, The University of Hong Kong, Hong Kong SAR, China, from 2016 to 2019. He is currently an Associate Professor with the Institute of Intelligent Machines, Hefei Institute of Physical Science, CAS, Hefei, China. His research interests include micro/nanorobotics, bioMEMS, and smart material sensors and actuators.
40 Production, Supply, Logistics, and Distribution
Manuel Scavarda Basaldúa and Rodrigo J. Cruz Di Palma
Contents
40.1 Historical Background . . . 893
40.2 Machines and Equipment Automation for Production . . . 894
40.2.1 Production Equipment and Machinery . . . 894
40.2.2 Material Handling and Storage for Production and Distribution . . . 895
40.2.3 Process Control Systems in Production . . . 896
40.3 Computing and Communication Automation for Planning and Operations Decisions . . . 896
40.3.1 Supply Chain Planning . . . 896
40.3.2 Production Planning and Programming . . . 897
40.3.3 Logistic Execution Systems . . . 898
40.3.4 Customer-Oriented Systems . . . 898
40.4 Automation Design Strategy . . . 898
40.4.1 Labor Costs and Automation Economics . . . 898
40.4.2 The Role of Simulation Software . . . 899
40.4.3 Balancing Agility, Flexibility, and Productivity . . . 899
40.5 Emerging Trends and Challenges . . . 900
40.5.1 RFID, IoT, and IoS for Smart Manufacturing and Warehousing . . . 900
40.5.2 AI and Smart Warehouses . . . 903
40.5.3 Drones for Logistics and Distribution . . . 904
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 906
Abstract
To effectively manage a supply chain, it is necessary to coordinate the flow of materials and information both within and among companies. This flow goes from suppliers to consumers, as it passes through manufacturers, wholesalers, and retailers. While materials and information move through the supply chain, automation is used in a variety of forms and levels as a way to raise productivity, enhance product quality, decrease labor costs, improve safety, and even perform tasks that go beyond the precision and reliability of humans. Rapid and constant developments in information technology continuously transform the way people work and interact with each other; electronic media enable enterprises to collaborate on their work and missions within each organization and with other independent enterprises, including suppliers and customers. Within this chapter, the focus is on the main benefits of automation in production, supply, logistics, and distribution environments. The first section centers on machines and equipment automation for production. The second section focuses on computing/communication automation for planning and operations decisions. Finally, the last section highlights some considerations regarding economics, productivity, and flexibility important to bear in mind while designing an automation strategy.
Keywords
Supply chain · Supply chain management · Automated guided vehicle · Enterprise resource planning system · Electronic data interchange
M. S. Basaldúa () Kimberly-Clark Corporation, Neenah, WI, USA, e-mail: [email protected]
R. J. Cruz Di Palma () Dollarcity, Antiguo Cuscatlan, El Salvador
© Springer Nature Switzerland AG 2023 S. Y. Nof (ed.), Springer Handbook of Automation, Springer Handbooks, https://doi.org/10.1007/978-3-030-96729-1_40
40.1
Historical Background
Automation is any technique, method, or system of operating or controlling a process without continuous input from an operator, thereby reducing human intervention to a minimum. Many believe that automation of supply chain networks began with the use of personal computers in the late 1970s, while others date it back to the use of electricity in the early 1900s. The fact is that, regardless of when we place the beginning of automation, it has changed the way we work, think, and live our lives. Children are now in contact with automation from the day they are born, through automated machines that monitor the vital signs of premature
infants. As people grow older, they continually have contact with automation via automatic teller machines (ATMs), self-operated airplanes, and self-parking cars. In the industrial realm, automation can now be seen in everything from simple tasks such as milking a cow to complex repetitive tasks such as building a new car. Computing and communications have also transformed production and service organizations over the past 50 years. While error recovery and conflict resolution in parallel work have been handled by human workers since the early days of industry, they have recently been transformed by computers into integrated functions [1]. Automation is the foundation of many of society's advances. Through productivity advances and reductions in costs, automation allows more complex and sophisticated products and services to be available to larger portions of the world's population. The evolution of supply chain management at a company, as an example of automation in production, supply, logistics, and distribution, is described
in Table 40.1. An example of such evolution for a specific company is discussed in [2].
40.2
Machines and Equipment Automation for Production
40.2.1 Production Equipment and Machinery Automation has been used for many years in manufacturing as a way to increase speed of production, enhance product quality, decrease labor costs, reduce routine labor-intensive work, improve safety, and even perform tasks that go beyond the precision and reliability of normal human abilities [3, 4]. Examples of automation range from the employment of robots to install a windshield on a car, machine tools that process parts to build a computer processor, to inspection and testing systems for quality control to make certain the precise
Table 40.1 Supply chain evolution at a food products production and service company Before supply chain After supply chain design Forecasting/ordering The company determines the amount of nuts that The company and its customer share sales a customer will expect in its food products forecasts based on current point-of-sale data, past demand patterns, and upcoming promotions and agree on an amount and schedule to supply Procurement The company phones its Brazilian office, and The company enters purchase orders in its employees deliver the orders in person to local ERP system and are automatically received farmers, who load the raw nuts on trucks and by employees at the Brazilian office. deliver them to the port Brazilian employees still contact local farmers personally Transportation The shipping company notifies the company Shippers and truckers share up-to-date data when the nuts have sailed. When the nuts arrive online via a collaborative global logistics in a US port, a freight forwarder processes the system that connects multiple manufacturers paperwork to clear the shipment through and transportation companies and handles customs, locates a truck to deliver them to the the customs’ process. The system matches company plants, and delivers the nuts to the orders with carriers to assure that trucks company’s manufacturing plant, although it may travel with full loads be only half full and return empty, costing the company extra money Manufacturing The nuts are cleaned, roasted, and integrated Production forecast is updated based on the with various food products manufactured by the current demand for product mix, and company according to original production products are manufactured under lean forecast manufacturing and just-in-time principles Distribution The products are packed, and trucks take them to After food products are inspected and packed the company’s multiple warehouses across the at the plant, the company sends the products country, from where they are ready to be shipped to a third-party distributor, which relieves the to stores. However, they may not be near the company of a supply chain activity not store where the customer needs them because among its core competencies. The distributor local demand has not been considered consolidates the products on trucks with other products, resulting in full loads and better service Customer If the company ordered too many nuts, they will The company correctly knows the customer’s turn soft in the warehouses, and if they ordered needs so there is neither a shortage nor an too few, the customer will buy food products oversupply of the products. Transportation, with nuts elsewhere distribution, warehousing, and inventory costs drop, and product and service quality improve
Advantages Forecasting accuracy, collaborative replenishment planning
Enhanced communication
Online tracking, transportation cost minimization
Capacity utilization, production efficiency
Enhanced distribution planning, inventory management and control
Increased customer satisfaction. Cost minimization, profit maximization
amount of cereal is in each box of cornflakes. An example of industrial robots for car production is shown in Fig. 40.1. There are many reasons why automation has become increasingly important in today's production systems. The main justifications arise from the relative strengths of automation in comparison with humans within a production environment. Automated machines are able to perform systematic and repetitive tasks requiring precision, store large amounts of information, execute commands quickly and accurately, handle multiple tasks simultaneously, and carry out dangerous or hazardous work [5, 6]. Emerging trends over the last 10 years are influencing the way organizations design, plan, and run their manufacturing operations. E-commerce continues to grow, providing customers with higher transparency and easy access to a multitude of options, which translates into growing customer service expectations, demand fragmentation, and order granularization and customization. These dynamics call for faster, more agile, and more precise supply chain operations, elevating not only the role of production equipment automation but also the criticality of the interconnection and integration of this automated equipment into larger intra- and interorganizational collaborative supply chain systems. See Ch. 64 for more details about eCommerce.
Fig. 40.1 Industrial robotics in car production. (Courtesy of KUKA Robotics)
New and extraordinary technologies such as artificial intelligence (AI) and robotics, additive manufacturing, neurotechnologies, biotechnologies, and virtual and augmented reality are driving the Fourth Industrial Revolution [7]; but is it the increasing shift in focus from single pieces of equipment, machinery, or technology to their interaction and integration into collaborative systems that is shaping the future of Industry 4.0?
40.2.2 Material Handling and Storage for Production and Distribution Material handling equipment plays an important role in the automation of production, storage, and distribution systems, by interconnecting the fixed or flexible workstations that compose them. Typical automated material handling systems include conveyors for moving product in a general direction inside a facility, sorters and carousels for distributing products to specific locations, automated storage and retrieval systems (ASRS) for storage, and automated guided vehicles (AGVs) for transporting materials between work stations [8]. For instance, AGVs play an important role in the paper industry, where moving rolls quickly and efficiently is critical. The key is not to damage rolls during this handling process. An example of an AGV in the paper industry is shown in Fig. 40.2. Furthermore, many material handling applications, such as part selection, piece picking, palletizing, or machine loading, are integrating the use of robotics to add flexibility and precision to supply chain systems. Conveyors are believed to be the most common material handling system used in production and distribution processes. These systems are used when materials must be transferred in relatively large quantities between specific
Fig. 40.2 Inertial guidance automatic guided vehicle. (Courtesy of the Jervis B. Webb Company)
40
machines or workstations following a predetermined path or route. Most of these systems use belts or wheels to transfer materials horizontally, or gravity to move materials between points at different heights. Frequently, several types of conveyors are utilized in combination within production or distribution systems, constituting conveyor networks or integrated systems. In more sophisticated conveyor networks, sorters are utilized. A sorter consists of an array of closely coupled, high-density diverters used to sort units (materials, products, parts, etc.) to specific lanes for further consolidation. Consequently, in addition to the transfer functionality supported by regular conveyor systems, sorter mechanisms provide, by means of sensors, a classification capability that makes them especially attractive for highly variable, small-size shipments. Examples of operations that utilize sorters for their shipments are Federal Express, United Parcel Service, and Amazon. Kimberly-Clark claims that a sorter mechanism implemented at one of its distribution facilities in Latin America has reduced the truck loading operation time from 1–3 h to 20 min. With a capacity of 200 cases per minute and fully customizable logic (i.e., it can be programmed to follow a balanced sorting sequence per dock or a sorting sequence by customer orders), this sorter has significantly increased truck rotation at the Kimberly-Clark distribution center versus the previous manual system.
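The "fully customizable logic" of such a sorter boils down to the rule that maps each case to a divert lane. The short Python sketch below contrasts the two rules mentioned above, a balanced sequence per dock and a sequence driven by customer orders; the dock names, case fields, and order-to-dock mapping are invented for the example.

```python
from collections import defaultdict
from itertools import cycle

DOCKS = ["dock_1", "dock_2", "dock_3"]          # invented dock names

def assign_balanced(cases):
    """Balanced sorting sequence: spread incoming cases across docks round-robin."""
    lanes, docks = defaultdict(list), cycle(DOCKS)
    for case in cases:
        lanes[next(docks)].append(case["sku"])
    return dict(lanes)

def assign_by_order(cases, order_to_dock):
    """Customer-order sequence: divert each case to the dock loading its order."""
    lanes = defaultdict(list)
    for case in cases:
        lanes[order_to_dock[case["order"]]].append(case["sku"])
    return dict(lanes)

if __name__ == "__main__":
    cases = [{"sku": f"SKU{i}", "order": "A" if i % 2 else "B"} for i in range(6)]
    print(assign_balanced(cases))
    print(assign_by_order(cases, {"A": "dock_1", "B": "dock_2"}))
```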
40.2.3 Process Control Systems in Production Production systems can be designed with different levels of automation. However, in all automated production systems, even at the lowest levels of sophistication constituted by automated devices such as valves and actuators, a control system of some kind is required. At the individual machine level, the automatic control is executed by computer systems. An example is computer numerical control (CNC), which reads instructions from an operator in order to drive a machine tool. At the automatic process level, there are two main types of control systems, the programmable logic controller (PLC) and the distributed control system (DCS). These systems were initially developed to support distinctive process control functions. PLCs began replacing conventional relay/solid-state logic in machine control, while DCSs were merely a digital replacement of analog controllers and panel board displays. However, the technology of both types of process control systems has evolved over the years, and the differences in their functionalities have become less straightforward. Further up in the hierarchy, at the production cell level, cell controllers provide coordination among individual workstations by interacting with multiple PLCs, DCSs, and other automated devices.
At the highest production automation level, control systems such as manufacturing control systems and/or integrated plant systems initiate, coordinate, and provide visibility of the manufacturing operation of all other lower control levels [6, 9].
40.3
Computing and Communication Automation for Planning and Operations Decisions
40.3.1 Supply Chain Planning To manage the supply chain effectively, it is necessary to coordinate the flow of materials and information both within and between companies [10]. The focus of the supply chain planning process is to synchronize activities from raw materials to the final customer. Supply chain planning processes strive to find an integrated solution for the strategic, tactical, and operational activities in order to allow companies to balance supply and demand for the movement of goods and services. Information and communication technology (ICT) plays a vital role in supply chain planning by facilitating the flow of information and enhancing the cooperation between customers, suppliers, and third-party partners. As shown in Fig. 40.3, intranets can be used to integrate information from isolated business processes within the firm to help them manage their internal supply chains. Access to these private intranets can also be extended to authorized suppliers, distributors, logistics services, and retail customers to improve coordination of external supply chain processes [11]. Electronic data interchange (EDI) and other similar technologies not only save money by reducing or eliminating human intervention and data entry but also pave the way for collaboration initiatives between organizations. An example would be vendor-managed inventory (VMI) where customers send inventory and consumption information to suppliers, who schedule deliveries to keep customer inventory within agreed upon ranges. VMI not only provides benefits for the customer but also increases demand visibility for the supplier and leads to cost savings and reduced inventory investment for the supplier. As shown in Fig. 40.4, EDI can significantly improve productivity. A typical illustration is the case of Warner-Lambert that increased its products’ shelf-fill rate at its retailer Walmart from 87% to 98% by EDI application, earning the company about US $ 8 million a year in additional sales [12]. ICT is key to the strategic, tactical, and operational analysis required for supply chain planning. Transactional ICT helps to automate the acquisition, processing, and
40
Production, Supply, Logistics, and Distribution
897
Suppliers
Order processing Procurement
Inventory Logistics services
Customers Enterprise intranet & extranet
Planning & scheduling Retailers
Shipping Distributors Product development
Production Marketing
Partner enterprises
Fig. 40.3 Intranet and extranet for supply chain planning
40.3.2 Production Planning and Programming Retailer
Producer
Operational system
ERP
I Data n warehouse t Retail link r a Sales data n at POS e
I n t EDI Retailer server
Producer server
SCM r manufacturing a planning n e
Forecast
t
t Inventory plan
Review and comments
Fig. 40.4 Producer and retailer EDI application
communication of raw data related to historic and current supply chain operations. This data is utilized by ICT systems for evaluating and disseminating decisions based on rules; for example, when inventory reaches a certain level, a purchase order for replenishment is automatically generated [13]. The data from ICT systems can also be used for simulating and optimizing a system. An example could be where ICT data is used to simulate the impact on a process with different levels of work in process on total productivity.
As the rate of technological innovation increases, maintaining a competitive cost structure may rely heavily on production efficiency generated by an effective production planning and programming process. For large-scale, global enterprises where traditional MRPI (material resource planning (1st generation)) and MRPII (material resource planning (2nd generation)) approaches may not be sufficient, new enterprise resource planning (ERP) solutions are being deployed that not only link all aspects from the bill of materials to suppliers to customer orders but also utilize different algorithms in order to efficiently solve the scheduling conundrum. A general framework of ERP implementation is shown in Fig. 40.5. Off-the-shelf solutions such as those provided by software supplier SAP advance planner and optimizer (APO) provide automatic data classification and retrieval, allowing key measures to be retrieved for strategic, tactical, and operational planning. However, typical ERP systems are rigid and have a difficult time adjusting to the complexity in demand requirements and constant innovation of the product portfolio mix. Global companies that compete in diverse marketplaces may choose to address these issues by building their own largescale optimization models. With a solid database structure, these models are able to adapt to continuous changes in portfolios and can incorporate external influences on demand,
40
898
M. S. Basaldúa and R. J. Cruz Di Palma
Human resource management
Material requirement planning
Procurement
Marketing studied
Product development & quality assurance
Production scheduling & re-scheduling
Price & promotion
Inventory planning & control
Sales
Distribution planning
Customers
Suppliers
Accounting and financial management
Order entry
ERP analytics
Fig. 40.5 A general framework of ERP implementation
such as market trends in promotions and advertisement. Over the past decade, there has been a shift in focus from business functions to business processes and further into value chains. Nowadays, enterprises focus on the effectiveness of the operation that requires functions to be combined to achieve endto-end business processes [14].
addition of labor and transportation management in a LES has expanded the span of WMS functionality to the point of practically embracing every logistic function from receiving to the actual shipping of goods. An example of a WMS is shown in Fig. 40.6.
40.3.4 Customer-Oriented Systems 40.3.3 Logistic Execution Systems Logistic execution systems (LES) seek to consolidate all logistic functions, such as receiving, storage, inventory management, order processing, order preparation, yard management, and shipping, into an integrated process. The LES can be designed to communicate with the firm’s enterprise network and external entities such as customers and suppliers. The LES might be part of the ERP system or otherwise should communicate with it through interfaces in order to interact with other modules such as finance, purchasing, administration, and production planning. Usually it also communicates with customers, suppliers, and carriers via EDI or web-based systems with the purpose of sharing logistic information relevant to all parties such as order status, trucks availability, confirmation of order reception, etc. A LES is usually composed of a warehouse management system (WMS) complemented with a labor management system (LMS) and a transportation management system (TMS). Essentially WMS issues, manages, and monitors tasks related to warehouse operation performance. A WMS improves efficiency and productivity of warehousing operations usually supported by (1) barcode and (2) radio frequency data communications technologies. Both technologies provide a WMS with instantaneous visibility of warehouse operations, facilitating precise inventory control as well as accurate knowledge of labor and equipment resource availability. The
Many times, automation is thought to only apply to the production environment. However, there are a myriad of examples where other critical non-production-related processes have been automated, such as order entry, inventory management, customer service, and product portfolio management. The goal of these applications is to improve upon the customers’ experience with suppliers. For many companies, the customer experience is a very tangible and measurable effect that should be viewed through the eyes of the customer. The goal for the supplier is to be easy to do business with. For this reason, many companies have decided to automate how the customer exchanges information related to sales and logistics functions. One example of this automation is the use of cellular phone messaging technology to coordinate in real time to the customer the status of their order throughout the delivery process.
40.4
Automation Design Strategy
40.4.1 Labor Costs and Automation Economics Competitive dynamics and consumer needs in today’s marketplace demonstrate the need for manufacturer flexibility in response to the speed of change. Modern facilities must be aligned with the frequent creation of new products and pro-
40
Production, Supply, Logistics, and Distribution
899
Cross docking Incoming shipments directed to shipping dock to fill outgoing orders without put away and picking
Yard management Controls dock activities and schedules dock appointments to avoid bottlenecks
Order management Orders added, modified, or cancelled in real time
Order tracking Tracks inbound and outbound shipments
Warehouse optimization Slotting optimizes placement of items Warehouse management Labor management Plans, manages, and reports on performance of warehouse personnel
Customer labeling and packaging Special packaging; bar coding
40 Fig. 40.6 A warehouse management system with decision support systems
40.4.2 The Role of Simulation Software Production unit cost Hard automation
Manual manufacturing Manual
Robotic manufacturing Flexible Fixed
Determining the benefits of automation may prove to be a challenge, especially when there are complex relationships among processes. Will there be improvement in throughput and will this be enough to justify the required investment? With the availability of software such as ARENA by Rockwell Automation, simulation can be used as a tool to test different possible scenarios without having to make any physical changes to an existing system. This tool can be especially useful in cases where the required investment for automation is high and the expected benefits are not easily measured. In addition, the visual simulation capabilities related to this type of software facilitates the analysis of the process as well as obtaining top management support.
Production volumes
Fig. 40.7 Automation trade-offs based on production volumes
cesses and be prepared to manage these changes and resulting technological shifts. Therefore, one of the most difficult issues now facing companies is identifying a manufacturing strategy that includes the optimal degree of automation for a given competitive environment. As shown in Fig. 40.7, increased production costs, including the mitigation of possible labor shortages, is one of the initial reasons companies look toward automation. However, production costs alone are usually not enough to justify the cost of investment. Productivity, customer response time, and speed to market are many times key factors to the success of automating a production process and must be measured in order to effectively determine the impact on costs and expected revenues in the future.
40.4.3 Balancing Agility, Flexibility, and Productivity Typical business concerns such as increasing sales and operating profit are always considered key performance drivers that lead the investment strategy within an organization. However, with the complexity that can be found within certain marketplaces, a local niche, or a well-developed global market, the need for sustainable growth has increasingly become one of the most important aspects when determining a competitive strategy. A major competitive concern in the global market is agility in order to react to a dynamic market. To maintain agility between autonomous and geographically distributed functions, significant investment in automated error detection is required to facilitate recovery and conflict resolution [1].
This complexity has adjusted the traditional return-on-investment (ROI) approach to automation investment justification, so that success is determined by how well the solution is able to create value for customers through services or products. What are customers looking for? Is it on-shelf availability at a slightly higher price? A highly computerized ASRS (automated storage and retrieval system) may provide increased throughput, fewer errors, and speed in distribution. What if the customers change how and where products are required? An ASRS may not be flexible enough to adjust quickly to changes in demand patterns or shifts in the demand network nodes. The correct solution should address customers' expectations regarding product quality, order response times, and service costs, seeking to obtain and sustain customer loyalty.
40.5
Emerging Trends and Challenges
The use of automation in production and supply chain processes has expanded dramatically in recent years. As globalization advances along with product and process innovation, the importance of automation will continue to intensify into the future. The landscape of global manufacturing is changing. More and more production plants are being built in India, Brazil, China, Indonesia, Mexico, and other developing countries. The playing field is rapidly being leveled. However, by crossing borders, companies also increase the complexity of operations and supply chains. Virtually seamless horizontal and vertical integration of information, communication, and automation technology throughout the organization is needed by companies such as Walmart, Kimberly-Clark, Procter and Gamble, Clorox, and Nestle, in order to address the dynamics of today's manufacturing environment. Collaborative design, manufacturing, planning, commerce, plant-to-business connectivity, and digital manufacturing are just some of the many models that seem to be on the horizon for leading manufacturing companies in order to induce further integration of processes. Several technologies and systems are reaching the required level of maturity to support these models and thereby accelerate the adoption of automation in production and supply chains. Examples of these technologies would be the following:
• Supply chain planning systems
• Supply chain security
• Manufacturing operations management solutions
• Active radio frequency identification (RFID)
• Sensor-based supply functions
• Industrial process automation
Furthermore, other technologies, such as artificial intelligence (AI), cyber-physical systems (CPS), the Internet of Things (IoT), drones, and blockchains, are at an emerging or adolescent level of maturity but will have a fundamental and transformational role as enablers of the Fourth Industrial Revolution. In this context, collaborative e-work theory and techniques continue to gain relevance as powerful automation support for faster, more agile, and more precise collaborative supply chain operations [15–17].
40.5.1 RFID, IoT, and IoS for Smart Manufacturing and Warehousing
RFID technology has existed for decades but has recently regained enormous interest with the emergence of IoT. RFID is described as the last-mile connection between the physical (living and nonliving) and the digital (data-driven) world in the domain of identity management [18], and together with IoT it is a key enabling technology for Industry 4.0 smart factories and smart warehouses. An RFID tag is a small computer chip that can send a small amount of information over a short distance via radio frequency. The signal is captured by an RFID antenna and then transferred to a computer network for data processing (Fig. 40.8). The general RFID system architecture, applications, frequencies, and standards are shown in Fig. 40.9 and Table 40.2. Some RFID applications are summarized in Table 40.3. In supply chain applications, current uses of RFID technology are focused on location identification of products. These uses are as varied as identifying trailers and ocean freight containers in a trailer yard, preventing shoplifting of small but higher-priced fast-moving consumer goods such as razor blades, and serving as an alternative to a WMS.
Fig. 40.8 How RFID works. The RFID server sends a "talk" request to the reader; the antenna broadcasts the request; tags within the RF field "wake up" and exchange their EPC data ("Who are you?" – identify yourself by EPC number); the antenna recognizes the tag signal and transmits the data back to the reader; and the reader communicates the collected EPC data and attributes back to the RFID server for applications and analytics
Fig. 40.9 RFID in the supply chain. (Courtesy of BearingPoint's RFID solution.) The possible-state vision spans the chain: tagged raw materials, cartons, and pallets automate receiving, inventory tracking, and lot control, with unique EPC numbers assigned at tag creation (factory programmed) or written later (field programmable); readers record everything that leaves the factory and report status to the inventory system of record; cross-dock receiving, sorting, staging, and shipping are streamlined; load authentication compares truck weight with the tag-reported contents so errors are detected earlier; in-transit sensors record temperature, humidity, and other conditions; EPC-linked digital certification speeds customs inspections and reduces counterfeiting; the warehouse management system (WMS) updates inventory in real time with each read event and validates products, quantities, and destinations before loading; in the store, readers help prevent stock-outs, supplement consolidated point-of-sale data for faster and more accurate replenishment, and simplify product recall management, tamper tracing, and loss prevention
Fig. 40.9 (continued) How the technology works: a tag (a microchip with an attached antenna, typically on a self-adhesive label less than 2 in. across) stores a unique electronic product code (EPC) that acts like a key unlocking detailed information about the product; a reader beams a radio signal that "wakes up" the tag so it replies with its EPC number, even when products are concealed in shipping containers; middleware filters the raw read data and applies relevant business rules before passing it to the enterprise transactional system; and an ONS server matches the EPC number from a read event to the address of a specific server on the EPC Information Services Network, where companies establish rules governing data, access, and security among trading partners
In broader uses, retailers widely leverage RFID technology to increase sales by making sure that inventory at the store's loading dock is actually placed on the shelf. RFID coupled with sensor and actuator networks offers advanced monitoring and control of real-world processes at an unprecedented scale [20], which together with IoT and IoS is of significant relevance for CPS-based smart factories and warehouses. Figure 40.10 illustrates the relationship between these technologies for the case of a smart factory. The physical system comprises the set of interconnected automated machines or equipment required to complete a certain production process. These machines are integrated by an IoT, which monitors and controls the operation in real time through RFID and multiple sensors and actuators. The cyberspace comprises a digital copy, or "digital twin," of each of the machines operating in the physical system. The digital twins are integrated by an IoS, which provides the connection, communication, interaction, and collaboration among services based on virtualization, cloud computing, and web service technologies [20]. The CPS constitutes the bridge between the physical and the cyber systems.
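The middleware step described for Fig. 40.9 — filtering raw read events before they reach the enterprise system — can be illustrated with a minimal sketch. The tag identifiers, reader names, and time window below are hypothetical; real RFID middleware applies far richer business rules.

```python
# Hypothetical middleware step: suppress repeated reads of the same tag at the same
# reader within a short window before handing data to the enterprise system.
from collections import namedtuple

ReadEvent = namedtuple("ReadEvent", "epc reader_id timestamp")

def filter_reads(raw_events, min_interval_s=5):
    """Keep one read per (tag, reader) within each min_interval_s window."""
    last_seen = {}          # (epc, reader_id) -> timestamp of last accepted read
    accepted = []
    for ev in sorted(raw_events, key=lambda e: e.timestamp):
        key = (ev.epc, ev.reader_id)
        if key not in last_seen or ev.timestamp - last_seen[key] >= min_interval_s:
            accepted.append(ev)
            last_seen[key] = ev.timestamp
    return accepted

# Hypothetical raw reads: the same pallet tag is reported many times per second
raw = [
    ReadEvent("urn:epc:id:sgtin:0614141.107346.2017", "dock-door-1", 0.0),
    ReadEvent("urn:epc:id:sgtin:0614141.107346.2017", "dock-door-1", 0.4),
    ReadEvent("urn:epc:id:sgtin:0614141.107346.2017", "dock-door-1", 6.1),
    ReadEvent("urn:epc:id:sgtin:0614141.112345.4711", "dock-door-2", 1.2),
]
for ev in filter_reads(raw):
    print(ev)
```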
Table 40.2 RFID functions, frequencies, and standards [19]
Fig. 46.13 Package implementation decision options. The package's study requirements, work flow sequence, data structure, and organization structure are compared against the existing business systems, the current data structure, and the current data base; where no major differences are found, the package is implemented as is, and otherwise a data interface is built or the package is modified before the new system is run
Networks have grown in complexity from local to wide to metropolitan area networks. Their scope and coverage have been extended from the corporate segment to the mass segment with the inclusion of PCs, laptops, smart mobile phones, and devices such as the Blackberry and the iPhone, whose ubiquitous Internet connectivity has stretched network delivery and management challenges to higher orbits. Clemm [43] defines network management to encompass the activities, methods, procedures, and tools that pertain to the operation (running the network), administration (managing network resources), maintenance (repair and upgrade), and provisioning (intended-use assurance) of networked systems (Fig. 46.14). Usage monitoring, resource logs, device simulations, and agents deployed on the infrastructure form the data sources for network management. Cisco Systems, a leader in network management, provides many tools for performing these functions [44]. Networks are differentiated in terms of their topology (line, bus, star, hub and spoke, tree, etc.), which defines the logical or physical connections between devices; their network protocols, the languages of rules and conventions for communication between network devices (hypertext transfer protocol (HTTP), transmission control protocol/Internet protocol (TCP/IP), simple mail transfer protocol (SMTP), etc.); and the standards facilitating interoperability. Designing a specific network for a given role is a function of the devices involved, their locations and cost, and many other specifications.
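Much of the routine operations work described above reduces to small automated checks run continuously against a device inventory. The following is a minimal sketch of such a check; the hostnames and ports are placeholders, and real network management tools add SNMP polling, logging, alerting, and SLA tracking on top of this basic idea.

```python
# Minimal sketch of one routine network-operations task: automated reachability
# checks against a device inventory. Hostnames and ports are placeholders.
import socket
import time

DEVICES = [("core-router.example.net", 22), ("mail-gateway.example.net", 25)]

def is_reachable(host, port, timeout=2.0):
    """True if a TCP connection to (host, port) succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for _ in range(3):                 # a real tool would poll indefinitely, log, and alert
    for host, port in DEVICES:
        status = "UP" if is_reachable(host, port) else "DOWN"
        print(f"{time.strftime('%H:%M:%S')}  {host}:{port}  {status}")
    time.sleep(60)                 # poll once a minute
```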
Managing networks for performance is mission critical. However, no network can function without embedding security considerations throughout. Security concerns cover access rights, person authentication, and transaction validity at a person level and the threat of viral attacks, intrusion of spies, and denial of service (DoS) at a macro level [45]. Every system defines who is a legitimate user and what usage rights the person has; for example, a transport department employee may have access to the names of employees in all other departments of the firm but not to their personal details. A customer may do an online query about the account balance in a bank account but may not be authorized for online funds transfer. Over the years these requirements have extended to time of day and location too. Certain transactions cannot be performed outside office hours and cannot be executed from a nondesignated place or terminal even within office hours. Likewise, denial of legitimate access can also invite trouble to the organization. It then becomes the responsibility of the system to ensure that access rights are adhered to. Users, locations, and devices are assigned unique identities (IDs) to ensure this. The second aspect of security is person authentication. One has to ensure that the system is being accessed only by the authorized person and by no one else. Protection against impersonation is the key concern. Earlier systems relied heavily on passwords associated with each identified user (user ID). Password protection systems have been strengthened over the years. Frequent change in password is mandated along with selection of passwords that are difficult to guess.
Fig. 46.14 Network and security management services. Network management functions cover operations, network asset management, network asset maintenance, provisioning, and usage accounting and billing; security management covers access control, identity management, transaction validity, and intrusion prevention across both physical and virtual resources
At the next level, systems seek additional personal details (such as date of birth) to validate the person's identity. These are found to be inadequate in modern systems. Impressive advancement in signature recognition and fingerprint (a biometric) recognition technologies has enabled firms to integrate these devices into computer systems. Voice print is used in some cases to validate identity [46]. Transaction validity facilitates fraud prevention. A separate password is associated with the transactions, and nowadays a one-time key is generated specific to each transaction. After use, the key is destroyed. Digital certification technology has come into play to authenticate the combination of person and transaction relating to transmitted documents. One of the most significant inventions is the encryption mechanism. Data is encoded prior to transmission over a network and decoded at the final point. Seminal work has been done by RSA [47] in designing and implementing a public-key approach. Encryption standards have emerged and are being mandated for critical transactions. Vendors and products have flooded the market with services for virus protection and intrusion prevention. Firewalls are being built at various levels to filter out unwanted visitors. The black list is no longer static; it is updated dynamically by most vendors. New upgrades or patches are issued routinely to protect against emerging new threats. It has become a legal necessity to subject most systems to a periodic security audit and to obtain a compliance certification. Vendors have equipped the service providers with many tools for automated network monitoring, performance analysis and reporting, provisioning, adherence to service-level agreements (SLAs), preventive maintenance, and configuration management [48].
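The one-time keys mentioned above are commonly derived from a shared secret and the current time; the widely used TOTP scheme (RFC 6238) is one such mechanism. The sketch below, using only the Python standard library, illustrates that general idea rather than the exact mechanism of any particular vendor; the shared secret is a placeholder.

```python
import hmac, hashlib, struct, time

def totp(secret: bytes, time_step: int = 30, digits: int = 6) -> str:
    """RFC 6238-style time-based one-time password (illustrative only)."""
    counter = int(time.time()) // time_step
    msg = struct.pack(">Q", counter)                       # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()  # HOTP uses HMAC-SHA1
    offset = digest[-1] & 0x0F                             # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# Server and client derive the same short-lived code from a shared secret;
# the code is useless once the time window passes, so interception has little value.
shared_secret = b"demo-shared-secret"   # hypothetical; real systems provision this securely
print(totp(shared_secret))
```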
The developments for further automation have progressed along the biometric path [49]. Eye or facial feature recognition and their combination with fingerprints have come into play. Pattern recognition algorithms are becoming increasingly sophisticated. Sharing of data about events, impersonators, and system loopholes or bugs is being attempted within select special-interest groups. Other attempts cover mapping data from disparate systems and intervening to filter suspected transactions using ML, and these are becoming popular. Robotic handheld devices for measuring the temperature of entrants have been provided with person authentication functionality (using thermal imaging techniques) during the current Covid-19 period [50]. IoT technology in combination with ML is thus paving the way for advanced security systems of the future but concurrently raising concerns about intrusion into privacy [51]. An interesting development has been the research efforts directed toward psychological profiling of impostors and cybercriminals. Many of them seem to be technology savvy, young, and motivated by altogether different reasons compared with other types of criminals. Hacker conventions are held annually to learn from them and to recruit experts among them to provide protection services.
46.4.5 Hosting and Infrastructure Management
IBM, the industry leader, preferred to lease the services of its hardware, software, and human resources in the formative years. This practice continued for nearly two decades and was followed by its competitors as well. However, the unbundling phenomenon, as it took roots, encouraged firms to own the equipment, license the software, and build in-house teams for system development and maintenance.
The entry of midrange systems in the 1970s and the proliferation of PCs from the 1980s gave further impetus to this trend, though not for long. Many large enterprises found that the tasks of selecting the hardware, upgrading the software, and recruiting and retaining staff were draining management resources. They were caught between decisions to go for centralized versus distributed systems, to select best-of-breed solutions or an integrated suite of applications, and to insource or outsource the application maintenance function over the years. The rapid pace at which firms in multiple industries were merging or acquiring others compounded these management issues. Multiple contracts with multiple vendors had to be negotiated often, large volumes of data had to be merged or migrated frequently, and it was difficult to hold any individual service provider responsible when failure occurred. Soon they also realized that information technology is a service function, essential for their survival and sometimes an enabler of competitive advantage, but not their core activity. The retail chain had to focus on customer acquisition and retention, while the financial services firms had the challenge of product innovation. They opted to outsource the IT function in part or in full. Thus, a market opportunity was created for firms with expertise in managing IT infrastructure. The infrastructure usually consists of processors, storage units, printers, desktops, networking equipment such as routers and switches, security devices, power supply sources, voltage regulators, surge protectors, etc. It could include operating personnel too. This opportunity is aggressively pursued by many large vendors, extending their services to data center management as well [52]. Firms had the advantage of signing one contract with the infrastructure management services (IMS) service provider instead of many contracts with multiple vendors, service providers, contractors, etc. There was further flexibility for the client firms in owning versus not owning the equipment. Figure 46.15 shows the components of infrastructure management services. Automation has a central role to play in IMS. The agreement between the client firm and the service provider is governed by a service-level agreement (SLA) that calls for device uptime on a category basis and transaction response time on a type basis. Implementation of the monitoring, reporting, and managing functions related to SLAs is automated to a high degree. Many times, self-diagnostic and repair tasks are performed by the respective devices through sophisticated software. The Internet has added a major dimension to this field. Until its arrival IMS was popular only in large commercial firms. Now, in a new avatar called web hosting, IMS has reached out to small- and medium-sized enterprises (SMEs) and the masses. Today, it is estimated to be a multibillion-dollar market.
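As a toy illustration of the automated SLA monitoring described above, the following sketch computes monthly availability per device category from recorded outage minutes and flags breaches against a contracted uptime target; all figures and targets are hypothetical.

```python
# Hypothetical SLA check: compute monthly availability per device category from
# outage records and compare against the contracted uptime target.
outages_minutes = {          # device category -> total outage minutes this month
    "core network": 12,
    "servers": 95,
    "desktops": 260,
}
sla_target = {"core network": 99.99, "servers": 99.9, "desktops": 99.0}  # percent uptime

minutes_in_month = 30 * 24 * 60
for category, downtime in outages_minutes.items():
    availability = 100.0 * (minutes_in_month - downtime) / minutes_in_month
    status = "OK" if availability >= sla_target[category] else "SLA BREACH"
    print(f"{category:12s} availability={availability:.3f}%  "
          f"target={sla_target[category]}%  {status}")
```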
Fig. 46.15 Infrastructure management services. I.T. infrastructure management encompasses contract and access management, performance, hardware, software, network, security, and disaster recovery
Hosting is a subset of IMS. The server equipment can be installed anywhere geographically (typically on the premises of the host firm), and the Internet or a private network is leveraged to provide services seamlessly to client firms and to the end customers of client firms. Normal web services include email, static pages, help desk, auction sites, etc. IBM, the largest web hosting service provider globally, has included managed events such as Wimbledon, the Grammy Awards, and the Ryder Cup within this service ambit [53]. Web hosting services rely almost entirely on web-based and automated tools to perform their roles. Specialist software firms write the applications required for web hosting to cover performance, content delivery, and security aspects. The need for a hosting and IMS service provider to tie together multiple products is apparent when we note that IBM's service portfolio embraces Equinix, AT&T, Akamai Technologies, Keynote Systems, Cisco, i2, Ariba, and others [53]. Web-hosted services often cross international borders. A customer living in London can access a US car company site hosted in Ireland to order an accessory. Countries often debate the territorial rights of such transactions and try to figure out who sold what to whom. Both IMS and hosted services are evolving with technological advancement and globalization. They are spinning out new business models. Pay per transaction (including software as a service (SaaS) [54]), as opposed to ownership of devices and resources, has become the accepted modus operandi with small and medium enterprises. Startups prefer it too, so that operational costs are contained in the early stages. In this process, automation of such services has included robust mechanisms for service accounting, billing, and receivable management on the business side and transaction integrity, authenticity, and nonrepudiation aspects on the technical side.
Another distinct aspect of IMS comes into play when severe service interruptions due to global events occur. War threats between countries from which components are procured, labor unrest in an offshore factory, a major fire shutting down operations in a region, and a global pandemic like Covid-19 are events whose occurrence and impact are unpredictable. These events have the potential to shut down infrastructure management services, whether insourced or outsourced. Firms seek protection against prolonged shutdowns, and for quick and early recovery, by including disaster management and recovery services in this portfolio. A duplicated facility is located in a different region, and sophisticated software systems are put in place for orderly shutdown and early recovery. With the rapid adoption of ML, AV, and IoT technologies, it is likely that automation of such services, at least at the level of many subtasks, will become a reality in the coming decades.
46.5
Automation Path in Data Science Services
46.5.1 Data Science
The term data science, as an interdisciplinary field with varying definitions and attributes, has existed since the 1960s, but since 2001, as propounded by Cleveland [55], it has come to represent the gathering and analysis of large data sets of numbers, text, audio, and video (called big data, both structured and unstructured) using concepts from statistics, mathematics, computer science, and other fields. The field has grown during the succeeding years to become the most sought-after career choice from 2016 onward [56]. Linear and logistic regression, clustering, dimensionality reduction, optimization, and machine learning techniques are included in it, and computer languages such as Python, R, and Julia are widely adopted by data scientists. For an introductory list of frameworks (such as TensorFlow), visualization tools (e.g., PowerBI), and platforms (similar to MATLAB), see Ref. [57]. Data science projects are found in almost every business and social domain now. For example:
1. Identifying the best locations for telecommunication towers so that signal strength is uniformly high for a widely dispersed population
2. Selecting citizens for priority vaccination while a pandemic is present
3. Identifying the most likely defaulters within the next year among all high-risk borrowers of a bank
4. Forecasting the expected amount of rainfall in a cropping area for a specified time window, before the start of the sowing season
5. Exploring oil fields for the depth and spread of the field underground
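As a minimal illustration of example 3 in the list above, the following sketch fits a logistic regression to synthetic borrower data using scikit-learn; the feature names and data are hypothetical stand-ins for a bank's actual records.

```python
# Minimal sketch: scoring high-risk borrowers for likely default.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.normal(0.4, 0.2, n),     # utilization of existing credit lines (hypothetical)
    rng.integers(0, 6, n),       # number of late payments in the past year
    rng.normal(3.0, 1.5, n),     # debt-to-income ratio
])
# Synthetic "ground truth": default risk rises with all three features
logit = -4.0 + 3.0 * X[:, 0] + 0.6 * X[:, 1] + 0.5 * X[:, 2]
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

print("held-out accuracy:", model.score(X_test, y_test))
print("default probability of first test borrower:", model.predict_proba(X_test[:1])[0, 1])
```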
In the field of healthcare, the data consist of the numeric output of many devices (which measure pulse rate, blood pressure, blood sugar, blood composition, minerals, oxygen saturation level, etc.), charts and graphs (from ECG), X-ray films, 2D/3D images (ultrasound, MRI), and voice (physician statements). They form a complex data set for storage as well as for subsequent selective retrieval. Any inference can be drawn by analyzing them only in the context of past data as well as with reference to a large data set of similar patients who have been treated. Hence data science projects tend to draw expertise from multiple disciplines. The components and workflow sequence of a DS project are shown in Fig. 46.16. We can notice that all data acquisition components are automated. ETL (extraction, transformation, and load) tools have existed for two decades. There are numerous vendors (SAP BW, Oracle Data Warehouse) in the marketplace offering data warehousing solutions on conventional platforms, and the Cloudera data platform exists in the cloud environment. The solution analysis is a hybrid stage with both manual and automated activities intertwined. Python and R facilitate hypothesis testing on big data, thus crunching the time needed for project completion. Most of the activities in the modeling stage are performed using the optimization tools in analytics. The implementation analysis is by and large manual at present. Platforms exist to enable the automation of functions such as user training and user feedback collection and analysis during implementation. All outputs are available from automated devices. Thus, it can be seen that "automating the automation projects" of DS has progressed at a rapid pace in this decade. DS has invaded the field of robotics by giving birth to cobots. Cobots are robots that are meant to work in conjunction with human beings, thus posing no threat to the latter's safety. Unlike robots that reside in a barricaded zone, cobots work with no physical separation from humans. They can assist humans in task execution and can be programmed to take independent decisions as well. Service robots have been introduced in this Covid-19 era to administer tests, dispense disinfectants, and deliver medicines [58]. Universal Robots, Techman Robot Inc., and FANUC are some of the leading vendors of cobots. MarketsandMarkets, a market study firm, has reported that the global market for cobots is projected to grow from the current level of less than one billion USD to nearly eight billion by 2026 [59].
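As a minimal illustration of the ETL step in Fig. 46.16, the following sketch extracts hypothetical device readings, standardizes units, drops an invalid row, and loads a per-patient daily summary; in practice the extract would read from device exports or a database rather than an inline table, and the column names are placeholders.

```python
import pandas as pd

# Extract: in practice this would be pd.read_csv(...) over device exports;
# a tiny inline frame keeps the sketch self-contained.
raw = pd.DataFrame({
    "patient_id": ["p01", "p01", "p02", "p02"],
    "timestamp": pd.to_datetime(["2023-05-01 08:00", "2023-05-01 20:00",
                                 "2023-05-01 09:00", "2023-05-02 09:00"]),
    "pulse": [72, 80, 95, 600],          # 600 is an obviously bad reading
    "spo2": [98, 97, 93, 94],
    "temp_f": [98.6, 99.1, 100.4, 98.2],
})

# Transform: convert units, drop invalid rows, summarize per patient per day
raw["temp_c"] = (raw["temp_f"] - 32) * 5.0 / 9.0
clean = raw[raw["pulse"].between(20, 250)]
daily = (clean
         .groupby(["patient_id", pd.Grouper(key="timestamp", freq="D")])
         [["pulse", "spo2", "temp_c"]]
         .mean()
         .reset_index())

# Load: append the summary into the analytical store (a CSV file stands in for it here)
daily.to_csv("vitals_daily.csv", index=False)
print(daily)
```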
Fig. 46.16 Workflow in data science projects. Data acquisition draws on databases, data warehouses, ETL tools, wearables, POS systems, and IoT devices; the situation analysis identifies objectives, resources, constraints, and context and extracts the relevant data; data preparation and the modeling and analysis stage involve choosing modeling techniques, tools, and interfaces, forming and testing hypotheses, building and testing the model, and iterating; implementation covers analyzing model output for implementability, consulting and training users, preparing visual and digital outputs, implementing changes, and seeking user feedback
Self-driving vehicles, called autonomous vehicles (AV), are not far from invading transportation services. They can be considered the pinnacle of DS systems since they are fully automated for data capture, analysis, decision-making, and execution while delivering a comprehensive and vital service. They are embedded with advanced AI-based subsystems for voice recognition, identification of surrounding objects, vehicle guidance, and machine vision. Google, Uber, and many car manufacturers have been testing AV cars on public roads since 2016. AV trucks are likely to be introduced within this decade. Armed forces and military applications for AV technology also look very promising. Advances in DS have been enabled by machine learning (ML) and its subbranches of reinforcement learning (RL) and deep learning (DL). Both RL and DL are autonomous self-teaching methodologies. While RL uses a trial-and-error, feedback-based approach to improve action and control elements, DL depends on large volumes of data analyzed in multiple layers to build the algorithm. Areas such as drug discovery [60], speech recognition [61], air traffic control, and AV are likely to be significantly impacted by DL and RL.
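To make the trial-and-error, feedback-based character of RL concrete, the following toy sketch applies tabular Q-learning to a five-state "walk right to reach the goal" problem; all states, rewards, and parameters are illustrative and unrelated to any application named in the text.

```python
# Toy illustration of the reinforcement learning idea: tabular Q-learning on a
# tiny chain of 5 states where moving right eventually reaches the goal.
import random

n_states, actions = 5, [-1, +1]          # move left or right
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.1, 0.9, 0.2    # learning rate, discount, exploration rate

for episode in range(2000):
    s = 0
    while s != n_states - 1:             # state 4 is the goal
        # explore sometimes, otherwise act greedily on current estimates
        a = random.choice(actions) if random.random() < epsilon \
            else max(actions, key=lambda a: Q[(s, a)])
        s_next = min(max(s + a, 0), n_states - 1)
        r = 1.0 if s_next == n_states - 1 else 0.0
        # feedback-based update: move Q toward reward plus discounted future value
        best_next = max(Q[(s_next, b)] for b in actions)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s_next

print({s: max(actions, key=lambda a: Q[(s, a)]) for s in range(n_states - 1)})
# expected: every non-goal state learns to prefer the +1 (move right) action
```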
Many concerns have emerged in the use of DS services and systems. Since data is captured automatically and can be shared with collaborating network devices wherever they are, citizens wary of privacy intrusion have raised alarms. Countries wish to treat data as a national resource and protect it from going across borders. Firms and governments worry about security breaches. When intelligent machines are permitted to analyze data and take decisions, data and decision bias is a possibility. There is no assurance that the data set used is unbiased or that the decision algorithm is fair. Hence many societies want to tread with caution in fully automating DS systems for decision-making. Further, due to their speed of execution, some decisions (as in algorithmic trading) can result in major losses in a short time interval, such as the crash of the Dow Jones index on the NYSE on February 5, 2018. Moral hazard is yet another unwanted and feared impact: will the system end up treating citizens as unequal based on birth, age, gender, or other factors? Earth-shaking tremors are expected in the job markets in the coming decades due to AI and allied technologies. The economic growth seen in the last decade in most of the developed countries is described as jobless GDP growth, meaning that the polarization of citizens into high-income and low-income groups becomes sharper while the middle-income group dwindles. Some existing job categories are projected for extinction, and many others will require massive reskilling. A McKinsey study in 2017 [62] has predicted nearly 450 million jobs vanishing and another 400 million requiring retraining in the next 20 years. But it has also shown the silver lining of an equivalent number of new jobs being created in newer job categories. Even though such anxieties are real and legislators have begun articulating their views in many countries, none would want a blanket ban on DS projects, as the benefits far outweigh the risks.
46.6
Impact Analysis
Computers and IT-based services are not mere automation aids that enhance resource productivity. Their impact on corporations and society is deep and wide. They have created
new channels, markets, products, labor markets, and business paradigms. They have also had societal impact in terms of a higher level of transparency and fairness and have helped to eliminate system leakages and root out corruption in many countries. At an individual level, they have changed perceptions, altered human behavior, enhanced aspirations, and acted as catalysts to feed the innovative spirit of humanity. With direct access to the Internet for every customer or stakeholder, a new channel of delivery has been opened up. It has the advantages of low cost, high reliability, and the possibility of asynchronous communication. With the convergence of technology, the Internet as a channel is also the most cost-efficient means of delivering data, text, voice, and images. Companies face a wide range of options in terms of how they wish to leverage this channel. Many traditional organizations, including banks such as ICICI in India, have chosen to use it as the primary vehicle for service delivery. Newer products have emerged to meet both primary and secondary needs. The iPod delivers digital music and, in the process, has eliminated all intermediate storage media such as compact disks (CDs) and digital versatile disks (DVDs). Books need not be printed anymore as their content can be digitally delivered. GPS-based navigation systems guide travelers to their destinations. Confidential delivery of sensitive and business-critical documents through the Internet has created the demand for digital certification and person identification services. Newer markets for teleconferencing, tele-education, online auctions, buyer–seller meets, and tele-healthcare have emerged. Further, finer market segments have been created to cater to the needs of the aged, the handicapped, and people on the move. Information asymmetry in earlier society created many intermediary roles that facilitated bridging between the supply and demand sides. These have been destroyed in the Internet age. Any new role can emerge now only on the strength of demonstrable value. Consequently, enhanced skill levels have to be closely related to market needs. An effective example of this is the role of distributors and retailers as intermediaries. They bridged the communication gap between the manufacturer and the end customer while moving goods across the supply chain effectively. The Internet has limited their role as a communicator. Consequently, they need to excel in providing the comparative value of goods from different manufacturers to enhance their value proposition to end customers. Since many tasks can be executed unbounded by geography, outsourcing has crossed international borders with ease and has created employment opportunities for millions in developing countries. Concerns about the ramifications of this for jobs in developed countries have emerged [63]. However, the market for high-end products and services has expanded globally due to rising affordability in emerging economies.
The emergence of new business models is noteworthy. Fixed costs are presented as variable costs, thus altering investment requirements. Development costs are overshadowed by marketing and customer reach costs. The economics of the information society is very different from that of the engineering society. Companies have to make an appropriate transition to ensure their survival and growth. With individuals, face-to-face interaction has been replaced with virtual meetings, sequential exchanges are now concurrent, and judgment or wisdom, if not supported by data and evidence, is not accepted. New paradigms have emerged in interactive behavior. Age as a proxy for accumulated knowledge is no longer revered. Governments have been aggressive in introducing Internet-based services to reach out to citizens. Apart from the filing of tax returns, the registration of land records and motor vehicles, the disbursal of pension funds, the updating of voter registries, and government-to-business transactions fall within the purview of eGovernance. When implemented, these initiatives have made operations faster and more cost-effective and have rooted out sources of corruption in many instances. The impact on society at large, with respect to volunteer activities, is explored in [64]. The legislative support required for use of information technology by the physically disadvantaged is documented in [65]. The societal impact of computers, the Internet, and information technology is even more significant when we transcend the corporate sector. Facebook and LinkedIn are Internet-based networks of special-interest groups that demonstrate what viral growth is all about. YouTube has handed the world a platform to showcase one's talent at virtually no cost. These developments are altering many paradigms that we have grown up with. Minor and major tremors are predicted in the global employment market over the next 20 years. The McKinsey study of 2017, referred to earlier [62], has projected the destruction of nearly 800 million existing jobs and the complete elimination of a set of job categories in this period. The study, however, has also identified that more jobs would be created than destroyed, but in many new skill categories. This would pose two challenges, viz., reskilling millions of employees and totally revamping the syllabus and curriculum in schools and colleges. With most devices becoming intelligent and capable of interaction among themselves (IoT), collaborating with humans (cobots), and many vehicles turning autonomous (AV), it is projected that employment opportunities in services would outgrow the manufacturing sector. A macro-level model of skill sets trifurcation (Fig. 46.17) is proposed for training people to be aligned with the job markets [66]. A small percentage of the jobs would require very high-order (products and services) design skills. A larger set of people need to be trained to customize, install, or implement these at widely dispersed firms globally.
Fig. 46.17 Skill sets trifurcation model: conceive and design generic products and services; implement and maintain them at specific client sites; and provide services to end customers
Employees in all firms are to be trained to utilize these intelligent products or artifacts to render service to millions of customers. In other words, acquiring higher-order skills that include the capability to coexist in a cyber-physical world is likely to be mandated for all, due to automation.
46.7
Emerging Trends
As stated earlier technology convergence, real-time analytical processing, digital delivery of integrated and multimedia information, smart mobile usage, cloud computing, AI, and software as a service (SaaS) are prevalent within the information and technology services industry. Web services are leading to services-on-demand (SoD) business models. Both SaaS and SoD are rewriting business models by converting fixed costs to variable costs. It makes eminent business sense to adopt these models at the early growth stages of a firm and then switch over to the traditional ownership model when the firm has reached a higher sales volume. The entire IT infrastructure consisting of bundled hardware, software, and services is treated akin to a utility [67] that is essential for business operations but may not bestow any competitive advantage. The latter can come only through market and product innovations and analytics. In the scientific community, a model that is being tested is grid computing [68]. It is focused on leveraging the widely dispersed processing power of the Internet for faster execution of complex and time-consuming calculations. Weather forecasting and genetic mapping fall into this domain. Geographical information systems (GIS) that combine location maps with other attributes are already common [69]. Multilayering of GIS and recognizing patterns for effective design of systems is on the anvil. Computer animation services are poised for a giant leap with sophisticated tools and high-speed devices. Movie production and postproduction work, integrated with special
effects, is being automated at a relentless pace by companies such as Pixar and DreamWorks. Simulation combined with animation is turning out niche products such as factory layout and workflow sequencing for the manufacturing industry. Object-oriented design awaits the development of completely interchangeable, plug-and-play components for corporate functions. It will then be coupled with automated and remote diagnostics, repair, testing, and maintenance services. Legal acceptance of digitally filed documents in place of printed matter is growing worldwide. Standards are being enacted and enabling legislation is being introduced in many countries. Cross-border transactional validity and resolution of the attendant tax issues are complex. It is expected that the digital highway will become integral to many aspects of human life when these challenges are tackled. The day is not far off when every legal entity, be it a person or a firm, has a unique digital identity, although this is likely to happen faster in some nations than in others. Rapid digitalization is the mantra for survival and market efficiency in all business spheres. Remote working has become a necessity due to Covid-19 everywhere. It is likely to become an integral part of human life even in the post-Covid era. As identified in an earlier section, massive shifts are projected to occur in job markets, leaving no country untouched. Citizens and their governments will be required to discuss, decide, and legislate all aspects of living in a cyber-physical world, including who will decide what, who will perform a given task, how to protect privacy and provide safety and security, and how to ensure all allocation systems are fair and objective.
Acknowledgments My sincere gratitude and appreciation are extended to Mr. Gautam Kumar Raja, Chief Executive Officer, Kapuchin Games Private Limited, Bangalore, for his advice and contribution to the material added in the Data Science section and for improving the readability of the document overall.
References 1. Indian Elections: The electronic voting machine. http://www.indian-elections.com/electoralsystem/electricvotingma chine.html 2. ESI Group: Engineering simulation industry. http://www.esigroup.com/corporate/news-media/press-releases/2007-english-pr/e si-group-visual-environment-optimized- for- intel2019s - platformsoffers-premium-performance-in-simulation (2007) 3. Balasubramanian, P.: Chapter 71: Automating information and technology services. In: Nof, S.Y. (ed.) Handbook of Automation, pp. 1265–1284. Springer-Verlag, Cham (2009) 4. Insights Team at Forbes: How AI Builds a Better Manufacturing Process. https://www.forbes.com/sites/insights-intelai/2018/07/ 17/how-ai-builds-a-better-manufacturing-process/#9e41e721e842. July 17, (2018) 5. Mar Gudmundsson: Governor of the Central Bank of Iceland. The Financial Crisis in Iceland and the fault Line in Cross Border Banking. https://www.bis.org/review/r100129a.pdf.(2010) 6. Isidore, C.: GM bankruptcy: End of an era, CNN Money.com article last updated on June 2, 2009. https://money.cnn.com/2009/06/01/ news/companies/gm_bankruptcy/
7. Reported Cases and Deaths by Country, Territory, or Conveyance, Worldometer daily report. https://www.worldometers.info/ coronavirus/?utm_campaign=homeAdvegas1? Page view on Oct 10, 2020 8. Chandler, A.D., Cortada, J.W. (eds.): A Nation Transformed by Information. Oxford Univ. Press, Cambridge (2000) 9. Singh, J.: Are Smartphones An Essential Item Whose sales in India Be Resumed Amid Covid-19 Lockdown? Gadget 360 Features Section Article Dated 7 Apr 2020. https://gadgets.ndtv.com/ mobiles/features/phone-sales-in-india-covid-19-coronavirus-lockd own-essential-services-2207196 10. Yenne, B.: 100 Inventions that Shaped World History. Bluewood Books, New York (1993) 11. Russel, B.: The printing press: technology that changed the world forever. http://www.associatedcontent.com/article/%25%E2 %80%94%E2%80%94%E2%80 %94%E2%80%94%E2 %80%94 %E2%80%94%E2%80%93324287/the_printing_press_technolog y_that.html (2007) 12. Price, C.: The Gutenberg device: the printing press. http:// www.associatedcontent.com/article/82603/the_gutenberg_device_ the_printing_press.html (2006) 13. Sebastian, E.: The history of mass media in America. http:// www.associatedcontent.com/article/13499/the_history_of_mass_ media_in_america.html (2005) 14. Bellis, M.: Your guide to inventors. http://inventors.about.com/od/ gstartinventions/Famous_Invention_History_G.htm (2007) 15. Swedin, E.G., Ferro, D.L.: Computers, the Life Story of a Technology, Greenwood Technographics. Greenwood, Westport (2005) 16. Dhar, V.: Data science and prediction. Commun. ACM. 56(12), 64–73 (2013). https://doi.org/10.1145/2500499. Archivedfrom the original on 9 November 2014. Retrieved 2 September 2015 17. Park, A.: Top 10 AI Applications for Healthcare in 2020: Accenture Report, Becker’s Health IT, 15 Jan 2020. https:// www.beckershospitalreview.com/artificial-intelligence/top-10-aiapplications-for-healthcare-in-2020-accenture-report.html 18. Daley, S.: 32 Examples of AI in Healthcare that will make you feel better about the future, 29 Jul 2010. https://builtin.com/artificialintelligence/artificial-intelligence-healthcare 19. Thomson Reuters: Thomson Reuters company history. http:/ /thomsonreuters.com/about/company, http://thomsonreuters.com/ about/company_history/#1890_-_1799 (2008) 20. Bloomberg: Bloomberg company website. http:// about.bloomberg.com/ (2008) 21. Hacklin, F.: Management of Convergence in Innovation. Springer, Berlin Heidelberg (2008) 22. Government of India program Swayam website: https:// swayam.gov.in/about 23. Ahouandjinou, A. S. R. M., Assogba, K., Motamad, C.: Smart and pervasive ICU based IoT for improving intensive health care. https://ieeexplore.ieee.org/document/7835599, presented in BioSmart Conference in Dubai, UAE in Dec 2018 Published in IEEE Xplore in Jan 2017 24. Drake, N., Turner, B.: Best Video Conferencing Software in 2020, Techradarpro. https://www.techradar.com/in/best/bestvideo-conferencing-software,downloaded on Oct 10, 2020 25. Deepa: Financial value of outsourcing I–IV, BPO J. Online http:// bponews.blogspot.com (2007) 26. Sankar, P.: “Chatbots – The Future of IT Support”, page down load on 23, Mar 2021, https://freshservice.com/itsm/chatbots-future-ofit-support-blog/ 27. Davenport, T.H.: Competing on analytics. Harvard Bus. Rev. 84(1), 98 (2006)., reprint R0R01H 1–10 28. Balasubramanian, P.: Chap. 26: Future of operations research, a practitioner’s perspective. In: Ravindran, A.R. (ed.) Handbook of Operations Research and Management Science. CRC, Boca Raton (2007)
29. Mayadas, A.F., Durbeck, R.C., Hinsberg, W.D., McCrossin, J.M.: The evolution of printers and displays. IBM Syst. J. 25(314), 399– 416 (1986) 30. Hansen, C.D., Johnson, C.: Visualization handbook. Academic, New York (2004) 31. Orten-Jones, C.: Five countries Shaking Up Tax Reporting. Racounter.net, https://www.raconteur.net/finance/tax/five-countrie s-tax-reporting/ . June 23, 2019 32. RSA: Digital certificate solutions. http://www.rsa.com/ node.aspx?id=2604(2008) 33. Balasubramanian, P.: Frontiers of digital supply chain management and the smart factory: smart strategy for Indian Enterprises, Invited session at M.S. Ramaiah Institute of Technology, Bangalore, India on Nov 16, 2017, http://themeworkanalytics.com/files/ DSCSFv3pdf.pdf 34. Swaine, M.: Dr. Dobb’s excellence in programming award 2008, Dr. Dobb’s J. 4, 16–17 (2008) 35. Wirfs-Brock, R.J.: Connecting design with code. IEEE Softw. 25(2), 20–21 (2008) 36. Rodriguez-Martinez, M., Sequel, J., Greer, M.: Open source cloud computing tools: a case study with a weather application. IEEE International conference on Cloud Computing, https://www.researchgate.net/publication/221399941_Open_Sourc e_Cloud_Computing_Tools_A_Case_Study_with_a_ Weather_Ap plication Cloud 2010, Miami, FL,USA, July 2010 37. C. Burns: Top 10 Cloud Tools, https://www.networkworld.com/ article/2164791/top-10-cloud-tools.html. Apr 8, 2013 Networkworld article 38. What is DevOps?, Amazon Web Services, https:// aws.amazon.com/devops/what-is-devops/. Page download as on Oct 10, 2020 39. Top 10 Devops tools in cloud from mindmajix.com, https:/ /mindmajix.com/top-10-cloud-or-iaas-or-paas-devops-tools. Page download on Oct 10, 2020 40. Kaner, C., Falk, J., Nguyen, H.Q.: Testing computer software. Van Nostrand Reinhhold, New York (1993) 41. Jalote, P.: Software project management in practice. AddisonWesley, Boston (2002) 42. Jalote, P.: CMM in practice: processes for executing software. Addison-Wesley, Boston (1999) 43. Clemm, A.: Network Management Fundamentals. Cisco, Indianapolis (2006) 44. Cisco: Cisco website. http://www.cisco.com/en/US/products/sw/ netmgtsw (2008) 45. Turban, E.: Information technology for management. Wiley, New York (2006) 46. D. Tynan: The next 20 years, PC World Mag., 109–112 (2008) 47. Rivest, R., Shamir, A., Adleman, L.: A method for obtaining digital signatures and public-key cryptosystems. Commun. ACM. 21(2), 120–126 (1978) 48. DNSStuff: Beginner’s guide to network management systems, best practices and more by dnsstuff on Sept 12, 2019. https:// www.dnsstuff.com/network-management 49. Ratha, N.K., Govindaraju, V.: Advances in biometrics, services, algorithms and systems. Springer, Berlin Heidelberg (2008) 50. Waltz, E.: Entering a building may soon involve a thermal scan and facial recognition. IEEE Spectrum, 26 Jun 2020. https:// spectrum.ieee.org/the-human-os/biomedical/devices/entering-a-bui lding-may-soon-involve-a-thermal-scan-and-facial-recognition 51. Natta, M.V., Chen, P., Herbek, S., Jain, R., Kastelic, N., Katz, E., Stuble, M., Vanam, V., Vattikonda, N.: The rise and regulation of thermal facial recognition technology during the Covid-19 pandemic. J. Law Biosci. 7(1), Isaa038 (2020). https://doi.org/10.1093/ jlb/lsaa038
1014 52. HP: HP infrastructure management services: tailored solutions put you in control. http://www.hp.com/pub/services/infrastructure/ info/in_service_brief.pdf (2008) 53. IBM: IBM – the world’s choice for web hosting. http://www.c2crm.com/c2/c2web/collateral.nsf/0/2CB826453C7D 68FF862571C400697E18/$file/why-ibm-hosting.pdf 54. Microsoft: Microsoft’s software as a service. http:// www.microsoft.com/serviceproviders/saas/default.mspx (2008) 55. W.S. Cleveland. Data science: an action plan for expanding the technical areas of the field of statistics. ISI Rev. 69, 21–26 (2001) and ˆ Gupta, Shanti (11 December 2015). “William S Cleveland”. Retrieved 2 April 2020 56. 50 Best Jobs in America 2020: "Best Jobs in America". Glassdoor. Retrieved 3 April 2020 57. Data Science, article in Wikipedia: https://en.wikipedia.org/wiki/ Data_science 58. Tao, M.: Cobots v Covid: How Universal Robots and others are helping in the fight against coronavirus. Robot. Autom. J. article published on June 19, 2020 https://ro boticsandautomationnews.com/2020/06/19/cobots-v -covid-how-u niversal-robots-is-helping-in-the-fight-against-coronavirus/33285/ 59. Markets and Markets: Collaborative Robot (Cobot) Market worth $ 7.972 million by 2026. Page download on Oct 10, 2020, https:/ /www.marketsandmarkets.com/PressReleases/collaborative-robot .asp 60. Chen, H., Engkvist, O., Wang, Y., Olivecrona, M., Blaschke, T.: The rice of deep learning in drug discovery. Drug Discov Today (Elsevier). 23(6, 1241–1250 (2018) https://www.sciencedirect.com/ science/article/pii/S1359644617303598 61. Manjunath, K.E., Rao, K.S.: Improvement of phone recognition accuracy using articulatory features, circuits, systems, and signal processing. Springer. 37(2), 704–728 (2017). https://doi.org/ 10.1007/s00034-017-0568-8 62. Mckinsey Global Institute Report on Jobs Lost Jobs Gained: Workforce Transitions in a time of Automation, Dec 2017. https://www.mckinsey.com/˜/media/McKinsey/ Industries /Public% 20and%20Social%20Sector/Our%20Insights/What%20the%20fut ure%20of%20work%20will%20mean%20for%20jobs %20skills% 20and%20wages/MGI-Jobs-Lost-Jobs-Gained- Executive-summar y-December-6-2017.pdf 63. McKinsey Global Institute: Who wins in offshoring? http:// www.cfr.org/content/meetings/innovation_rt/2_4_2004/Farrell.pdf (2004) 64. Proulx, K., Hager, M.A., Wittstock, D.: The Promise of Information and Communication technology in Volunteer Administration, Chapter 10 in the book. In: Ariza-Montes, J.A., Lucia-Casademunt, A.M. (eds.) Information and communication technologies Management in Nonprofit Organizations. IGI Global, Hersey (2014) 65. Recktenwald, J.: Technology for the disabled: What does federal law mean for IT? http://articles.techrepublic.com.com/510010878-1036687.html (2000)
66. Balasubramanian, P.: Tremors in the Job Markets and Trifurcation of Skill Sets, invited presentation at IIT Madras BC Alumni Meet at Bangalore on Apr 7 2018. https://www.researchgate.net/publication/338357654_Tremors_in_Job_markets_and_Trifurcation_of_skill_sets_in_three_decades_IITM_BC_Alumni_Meet 67. Carr, N.G.: Does IT Matter? Harvard Business School Press, Boston (2004) 68. Buyya, R., Venugopal, S.: A gentle introduction to grid computing and technologies. CSI Commun. 29(1), 9–19 (2005) 69. GIS: What is GIS? http://www.gis.com/whatisgis/index.html (2008)
Parasuram Balasubramanian He has been a Consultant, Group Leader, CIO, CEO, and Profit Centre Head in the Information Technology Industry in a career spanning five decades. He has played a noteworthy role in establishing the analytics practice in India and in establishing India as a premier offshore service destination. He was instrumental in introducing high-quality standards for software delivery services and in building up the skill sets of thousands of employees. He has worked in India, Jamaica, and the USA. Over the years, he has sustained considerable interest in academics and executive training. He has been a guest faculty member and invited speaker in numerous colleges, executive training programs, and industry fora. He has written chapters in Handbook of Operations Research (CRC Press), Handbook of Automation (Springer Publishers), and Cultural Factors in Systems Design: Decision Making and Action (CRC Press). He is a member of the Enterprise Integration Consortium at Penn State University, USA; the Board of Studies at RVCE and MSRIT in Bangalore; and CIT in Coimbatore. He has Engineering and Management degrees from IIT, Madras, and a Doctorate from the School of Industrial Engineering at Purdue University in 1977 specializing in Operations Research. He has been recognized as an Outstanding Industrial Engineering Alumnus by Purdue University in March 2001, Fellow at Infosys Technologies Limited in 2002, and made an Honorary Fellow of the Indian Institute of Materials Management in 2005. He was an honorary Entrepreneur in Residence at Purdue University, Discovery Park, from 2006 to 2011. He was honored as a Distinguished Engineering Alumnus by the College of Engineering at Purdue University in February 2022.
Power Grid and Electrical Power System Security
47
Veronica R. Bosquezfoti and Andrew L. Liu
Contents
47.1 Background of Electric Power Systems . . . . . 1015
47.1.1 A Brief History . . . . . 1015
47.1.2 Definition of Power System Security . . . . . 1016
47.1.3 Multi-Scale Planning and Operation . . . . . 1017
47.2 Power System Planning . . . . . 1018
47.2.1 System Planning for Regulated Utilities . . . . . 1018
47.2.2 System Planning Under Deregulation . . . . . 1022
47.2.3 Challenges to System Planning . . . . . 1022
47.3 Power System Operation . . . . . 1023
47.3.1 Unit Commitment and Economic Dispatch . . . . . 1023
47.3.2 Co-optimization of Energy and Reserve . . . . . 1028
47.3.3 Challenges to Short-Term Operations . . . . . 1030
47.4 Power System Cybersecurity . . . . . 1031
References . . . . . 1033
Abstract
Automation in power systems has a very long tradition. Nowadays most power plants are equipped with automatic generation control and sophisticated communication devices, and sensors are placed throughout electricity networks to collect real-time information, all to ensure power system security. However, due to the physical requirement of continuous balancing of electricity supply and demand, ensuring the security of power systems calls for planning and operation processes that are inherently of multiple temporal scales, ranging from a decade to a millisecond. In addition, while traditional power systems are operated in a centralized fashion, with bulk, dispatchable fossil fuel plants being the main generation resources, the recent development of renewable energy and various demand-side resources is reshaping power systems with more uncertainty and less controllability. This chapter aims to provide a high-level overview of power system planning and operation to ensure their security, with a focus on their multi-scale nature, and how they adapt to meet the emerging challenges.
Keywords
Reliability · Security · Long-term planning · Unit commitment · Economic dispatch · Ancillary service · Distributed energy resources · Renewable energy · Cybersecurity
47.1
Background of Electric Power Systems
V. R. Bosquezfoti · A. L. Liu () School of Industrial Engineering, Purdue University, West Lafayette, IN, USA e-mail: [email protected]; [email protected]
© Springer Nature Switzerland AG 2023 S. Y. Nof (ed.), Springer Handbook of Automation, Springer Handbooks, https://doi.org/10.1007/978-3-030-96729-1_47
47.1.1 A Brief History
The bulk electric power delivery system is a large set of interconnected devices, often spanning several countries. It is planned, built, operated, and manually and automatically controlled with the ultimate purpose of supplying the variable demand for electric power in a secure, economic, and environmentally responsible fashion. The first high-voltage transmission lines were built to transport power from early hydroelectric generators to customers in nearby cities. As the use of electric power increased, electric power companies that owned and operated generation, high-voltage transmission, and electric distribution systems expanded their systems to interconnect other generation stations to meet the needs fueled by increasing industrialization. While power companies eventually built transmission to interconnect their systems, such transmission was largely used for emergency support. Larger transmission systems improved the reliability of the electric power supply and eventually expanded the access to low-cost power sources. However, transmission
interconnection also resulted in an increasingly complex system prone to large-scale failures. The North American Electric Reliability Corporation (NERC) was formed in 1968, shortly after a blackout in 1965 left the Northeastern United States and Southeastern Ontario without electric power for up to 13 h. NERC establishes and enforces standards for the reliable planning and operation of the bulk electric power system in the United States, Canada, and part of Mexico. In the 1990s, a movement for the deregulation of the electric industry resulted in the separation of transmission planning and operation functions from the generation, distribution, and commercialization of electric power. Chile was one of the first countries, in 1982, to deregulate its electric industry, followed shortly by many others, such as New Zealand in 1987 and England and Wales in 1989. In the United States, the Energy Policy Act of 1992 encouraged energy regulators to promote competition in wholesale electric markets by implementing open-access tariffs that provided independent power producers nondiscriminatory access to the transmission system. Nowadays, the main function of transmission operators around the world is to continually balance electric power production and demand to maintain power grid reliability. In the United States, independent system operators (ISOs) were established to administer open-access transmission tariffs. Some ISOs are also established as regional transmission organizations (RTOs), with functions that include market-based management of transmission system congestion, regional transmission operation and planning, and inter-regional coordination. The United States has three interconnected transmission systems: the Eastern Interconnect, the Western Interconnect, and the Electric Reliability Council of Texas (ERCOT), which covers most of Texas. ERCOT functions as an RTO, whereas the
Eastern and Western Interconnects include several RTOs that embody both deregulated markets and vertically integrated utility regions. The California ISO (CAISO) covers most of the state of California and part of Nevada and offers real-time balancing services to a large portion of the Western Interconnect. The Eastern Interconnect includes non-RTO regions, particularly in the Southeastern United States, and five RTOs: the Southwest Power Pool (SPP), the Midcontinent ISO (MISO), PJM, the New York ISO (NYISO), and ISO New England (ISO-NE). The geographic locations of US interconnections and system operators are depicted in Fig. 47.1. The figure also shows the so-called balancing authorities, which balance the supply and demand within their control areas, maintain the interchange of power with other balancing authorities, and maintain the frequency of the electrical power system within reasonable limits.
47.1.2 Definition of Power System Security
A typical power system consists of four major components: generation, transmission, distribution, and consumption, as depicted in Fig. 47.2. In an electrical system, the power generation and consumption must be balanced to prevent major material losses, and even losses of human life, as energy shortages or stability issues may cause sustained power outages that can affect climate control, medical equipment, and critical infrastructure. All grid operators/balancing authorities are charged with maintaining the reliability of the systems under their control. Throughout this chapter, we use system operators, or SOs, to refer to the balancing authorities, which include ISOs/RTOs. Within their footprints, SOs oversee and direct
Fig. 47.1 U.S. electric power regions: interconnections (Eastern, Western, and ERCOT), ISOs/RTOs, and balancing authorities; circles represent the 66 balancing authorities. (Source: EIA [42])
Fig. 47.2 Typical components of an electric power system: a generating station with a generating step-up transformer; transmission lines (765, 500, 345, 230, and 138 kV) serving transmission customers (138 kV or 230 kV); a substation step-down transformer; subtransmission customers (26 kV and 69 kV); primary customers (13 kV and 4 kV); and secondary customers (120 V and 240 V)
the high-voltage bulk power system and coordinate electricity generation to maintain a reliable supply of electrical power to electricity end users. Reliability of a power system, based on NERC's definition, means that the system can "meet the electricity needs of end-use customers even when unexpected equipment failures or other factors reduce the amount of available electricity" [33]. This definition covers two sub-concepts: adequacy and security. Adequacy means that the system has sufficient resources to ensure a constant balance between supply and demand, even with scheduled or forced outages of facilities/equipment. Here the resources are very broadly defined to include both the traditional bulk generation and transmission facilities and emerging resources, such as renewables, energy storage, distributed generation, and demand response. Security, on the other hand, refers to a system's ability to withstand both unexpected disturbances, such as the unexpected loss of system elements due to natural disasters, and man-made physical or cyberattacks. While this chapter focuses on the security aspect of power systems, it is not possible to address only the narrower definition of security without addressing adequacy as well. In the following subsection, we show that ensuring power system security inherently requires planning and operation at multiple timescales.
47.1.3 Multi-Scale Planning and Operation
To ensure a power system's security, its planning and operation must be done at multiple temporal scales. This multi-scale nature is the focus of this chapter, which aims to provide a nontechnical overview of the detailed planning and operation process. Figure 47.3 provides a good summary of the different timescales of power
system planning and operation. As it takes a long time to find proper sites, to obtain the required permits, and to actually build assets and connect them to the main grid, planning of bulk power plants and transmission lines usually begins ten years or more before the assets can be up and running. Along the way, maintenance schedules, retrofit decisions, and compliance with various reliability and environmental regulations need to be considered at a yearly scale. As the timescale moves to weeks and days, SOs need to determine how to best utilize the available power generation resources to meet the short-term projected demand, while considering transmission network constraints and the contingent situations that may arise in real-time operation. In addition to committing and dispatching power plants to maintain the supply-demand balance, an SO also needs to acquire sufficient backup generation resources, termed reserves, to ensure the system's reliability. Demand-side resources, such as demand response, distributed generation, and storage, while not under an SO's control, can participate in the wholesale markets by submitting their supply and demand bids to the SO, who will need to consider these demand-side resources as well in the overall decision-making process. At the second to millisecond level, all SOs employ advanced systems, known as supervisory control and data acquisition (SCADA), to maintain system stability. A SCADA system contains both hardware and software elements and communicates with and controls programmable logic controllers (PLCs), remote terminal units (RTUs), and multiple sensors throughout a transmission network. Such a system helps automate the command and control process to ensure that the power system can withstand small changes or disturbances from equilibrium without loss of synchronicity.
Fig. 47.3 Multi-scale of electricity systems planning and operation [13]. Timescales range from power plant and transmission siting and construction (roughly 15 to 5 years ahead), through long-term forward markets, load forecasting, maintenance scheduling, and closed-loop control and relay setpoint selection (about 1 year to 1 month ahead), to the day-ahead market with unit commitment, the hour-ahead market, and the five-minute market (1 week to 5 min ahead), and finally closed-loop control and relay action (seconds)
With the abovementioned physical characteristics of power systems, ensuring system security inevitably requires a multi-scale approach, as summarized in Fig. 47.4, which is taken from [40]. Shahidehpour et al. [40] have already provided a good overview of power system security. This chapter provides a complementary account and some high-level details of power systems' long-term planning and short-term operation, and addresses some emerging issues associated with the rapid development and deployment of renewable and demand-side resources.
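To make the fastest timescale in Fig. 47.3 concrete, the following toy simulation (not from the chapter; the inertia constant, control gain, and imbalance are illustrative assumptions) shows how an automatic, feedback-based control loop of the kind that SCADA/AGC systems implement can arrest a frequency deviation after a sudden supply-demand imbalance, while the slower market processes operate on much longer horizons.

# Toy illustration (not from the chapter): a one-machine "swing equation" model
# with a proportional frequency controller, loosely mimicking governor/AGC action.
# All numbers are illustrative assumptions, not data from any real system.

F_NOM = 60.0     # nominal frequency [Hz]
H = 5.0          # assumed inertia constant [s]
K_P = 20.0       # assumed proportional control gain [p.u. power per p.u. frequency error]
DT = 0.1         # simulation time step [s]

def simulate_step_loss(imbalance_pu: float, steps: int = 300) -> list:
    """Simulate frequency after a sudden loss of `imbalance_pu` of generation (per unit)."""
    freq = F_NOM
    trajectory = []
    for _ in range(steps):
        error_pu = (F_NOM - freq) / F_NOM            # per-unit frequency error
        control_pu = K_P * error_pu                  # proportional response (governor-like)
        net_power_pu = control_pu - imbalance_pu     # net accelerating power
        # Simplified swing dynamics: d(freq)/dt is proportional to the power imbalance.
        freq += (net_power_pu / (2.0 * H)) * F_NOM * DT
        trajectory.append(freq)
    return trajectory

if __name__ == "__main__":
    traj = simulate_step_loss(imbalance_pu=0.02)     # sudden 2% generation deficit
    print(f"frequency nadir: {min(traj):.3f} Hz")
    print(f"frequency after 30 s: {traj[-1]:.3f} Hz")

In this sketch the controller settles the frequency slightly below 60 Hz within seconds; restoring it exactly to nominal is the job of the slower, market-based regulation and reserve processes discussed later in the chapter.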
47.2 Power System Planning
Government regulators, power system operators, and electricity market participants must have a view of the long-term direction of electric power generation and consumption in order to establish the plans and policies required to guide efficient investment, to ensure system reliability over the next decade, and, increasingly, to meet long-term environmental and sustainability goals (such as California's ambitious goal of 100% clean electric power by 2045). The planning process, however, is very different under a regulated regime versus a deregulated market, as discussed in detail below.
47.2.1 System Planning for Regulated Utilities
Under a regulated regime in which an electric utility company owns the generation and transmission assets and is solely responsible for meeting electricity demand in its control area, the utility company needs to carry out the so-called
integrated resource planning (IRP), which is a process to consider all resource options (generation, transmission, and various demand-side management/energy efficiency options) to meet the projected future demand with lowest costs, while ensuring reliability and meeting other requirements (such as environmental regulations). Two key elements in an IRP process are demand forecast and identification of all resource options in the future.
Fig. 47.4 Multi-scale risk analysis of power systems security [40]: long-term planning (generation resource planning, transmission planning); mid-term operation planning (maintenance scheduling, fuel allocation, emission allowances, optimal operation cost); short-term operation (security-constrained unit commitment, security-constrained optimal power flow); and real-time security analysis (system monitoring, contingency analysis)

Demand Forecast
Demand forecasts generally include projections of both the amount of energy (watt-hours) and power (watts). In the context of resource planning, forecasts usually look at energy and power requirements from 5 to 30 years into the future. While projections of the energy and power needed over the course of a day, a week, and a year are all useful, the key driver for resource planning is peak demand, which is the highest electrical power demand that occurs over a specified time period (usually a year). To have built-in redundancy, an SO usually ensures that the available resources in a future year will be at least (1 + RM) times the projected peak demand in the same year, where RM is a percentage (e.g., 15%) referred to as the reserve margin.

Long-term demand forecasting requires both micro- and macro-level data inputs. At the micro level, energy end-use data by sector are very useful. More specifically, such data include the number of households using specific electric appliances and the number of commercial and industrial consumers using different types of electric equipment. In addition, usage profiles of the appliances and equipment are needed to produce the aggregated load profile. Such micro data need to be projected into the future, with consideration of technology penetration and development (such as the adoption of more energy-efficient appliances and more electric vehicles). At the macro level, economic and demographic projections are also very important, as both have significant impacts on the growth of future demand in a specific region. In terms of specific forecasting models and methodologies, a combination of trending, econometric analysis, and end-use simulation is usually employed. More detailed accounts of demand forecasting can be found in many resources, such as [2], and will not be discussed further here.
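As a simple illustration of the reserve-margin logic described above, the sketch below (not from the chapter; the growth rate, fleet, and derating factors are made-up assumptions) projects peak demand with a constant growth trend, applies a reserve margin RM, and checks whether the derated (UCAP) fleet capacity would satisfy the adequacy requirement in each year.

# Minimal sketch (illustrative assumptions only): trend-based peak-demand projection,
# reserve-margin requirement, and an ICAP -> UCAP derating check.

RESERVE_MARGIN = 0.15          # RM = 15% (assumed)
ANNUAL_GROWTH = 0.02           # assumed 2% peak-demand growth per year
PEAK_TODAY_MW = 10_000.0       # assumed current peak demand

# Assumed installed capacity (ICAP) and derating factors by resource type.
fleet = {
    "natural_gas": {"icap_mw": 10_000.0, "derating": 0.95},
    "wind":        {"icap_mw": 4_000.0,  "derating": 0.30},
    "solar":       {"icap_mw": 2_000.0,  "derating": 0.50},
    "storage":     {"icap_mw": 1_000.0,  "derating": 0.90},
}

def projected_peak(year: int) -> float:
    """Peak-demand forecast via a simple growth trend."""
    return PEAK_TODAY_MW * (1.0 + ANNUAL_GROWTH) ** year

def ucap_mw() -> float:
    """Unforced capacity: installed capacity derated for expected unavailability."""
    return sum(r["icap_mw"] * r["derating"] for r in fleet.values())

for year in range(0, 11):
    requirement = (1.0 + RESERVE_MARGIN) * projected_peak(year)
    ok = ucap_mw() >= requirement
    print(f"year {year:2d}: need {requirement:8.0f} MW, have {ucap_mw():8.0f} MW UCAP "
          f"-> {'adequate' if ok else 'shortfall'}")

With these invented numbers the fleet is adequate in the early years and develops a shortfall a few years out, which is exactly the kind of signal that triggers the resource-identification step described next.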
Identifying Resources to Meet Future Demand
In addition to demand forecasts, power system planning is also driven by the potential retirement of existing power plants and, if applicable, environmental policies. Once all the above pieces of information are obtained, the next step is to determine all the resources available to meet the projected demand and the various regulatory and environmental goals. Traditionally the options were limited to which type of fossil-fueled power plant to build, such as coal versus natural gas plants. With the rapid growth of renewable energy, energy storage, and various demand-side resources, there are now significantly more options to choose from. To evaluate all the options, many factors need to be considered, including construction costs and time, available capacity, operating costs and characteristics (such as fast versus slow ramping), emissions profile, and expected lifetime, to name a few. For the available capacity, a planner needs to consider both the nameplate capacity (also referred to as the installed capacity, or ICAP) and the capacity actually usable in real time (referred to as unforced capacity, or UCAP). For example, wind plants' real-time outputs obviously depend on wind conditions and may be much less than their installed capacity.

While considering the various factors, a system planner also needs to consider the need to build transmission lines to integrate the resources into the existing grid. Transmission investment can be quite capital intensive, and the process of obtaining the various permits to build transmission lines can be lengthy and difficult. As a result, there is an array of emerging resources that aim to make electricity demand more flexible, hence making it easier (and most likely cheaper) to meet future demand growth. Such resources are broadly termed non-wire alternatives (NWA). Based on the definition from a report by Navigant Consulting [12], a non-wire alternative is "an electricity grid investment or project that uses non-traditional transmission and distribution (T&D) solutions, such as distributed generation (DG), energy storage, energy efficiency (EE), demand response (DR), and grid software and controls, to defer or replace the need for specific equipment upgrades, such as T&D lines or transformers, by reducing load at a substation or circuit level."

While the task of considering all the possible options to balance future supply and demand appears daunting, this is a perfect situation in which optimization models and algorithms can help decision-makers find optimal or near-optimal solutions based on the available data and forecasts. The basic idea of a long-term planning model is relatively simple, as illustrated in Fig. 47.5, which is adapted from the comprehensive report on resource planning modeling and practice [28]. As seen in Fig. 47.5, a power system planning model generally contains two major components: the long-term aspect related to investments and the short-term aspect related to actual operations in real time. This reflects the fundamental trade-offs between the different types of resources, elaborated after Fig. 47.5 and Tables 47.1 and 47.2 below.
Fig. 47.5 Conceptual modeling framework of power systems long-term planning. Minimize: investment cost (new generation, new transmission lines, retrofit/retirement, demand-side resources) plus operation cost (fuel costs for fossil fuel plants, operation and maintenance costs). Subject to: investment constraints (investment budget, technology constraints, location/permit constraints, other regulations) and operation constraints (supply meeting demand, generation capacity limits, transmission line constraints, air pollutant emission constraints)

Table 47.1 Parameters and indices related to long-term investment planning
t = 1, ..., T: time periods (or stages) in the upper-level model
j = 1, ..., N: locations (ĵ = 1, ..., N̂: reserve margin regions)
e = 1, ..., E: generation types (including storage) or transmission voltage levels
l = 1, ..., L: number of transmission line segments in a network (existing or potential)
G_j: set of existing generating units at bus j
Ḡ_j: set of potential units that can be built at bus j
VC^t_{je}, VC^t_{le}: variable investment costs for generation capacities or transmission [$/MW]
FC^t_{je}, FC^t_{le}: fixed investment costs for generation capacities or transmission [$]
M^t_{je}, M^t_{le}: maximum capacity expansion of a generation plant or transmission [MW]
PK^t_{ĵ}: peak demand in reserve margin region ĵ in time t [MW]
RM^t_{ĵ}: reserve margin requirement for region ĵ in time t [%]
DF_{je}: derating factor of generation unit/type e at location j [%]
δ: discount factor

Table 47.2 Decision variables related to long-term investment
x^t_{je}, x^t_{le}: incremental capacity of generation plants or transmission lines [MW]
κ^t_{je}, κ^t_{le}: cumulative generation or transmission capacity [MW]
σ^t_{je}: binary variable for generation capacity expansion of type e at region j in t
ϕ^t_{le}: binary variable for transmission expansion of voltage type e built in t
φ^t_{ije}: binary variable indicating the existence of transmission line i–j of voltage type e in t
Renewable energy, such as large-scale wind or solar farms, can be capital intensive to build compared with natural-gas-fired plants; however, renewables have zero fuel costs when generating electricity, while natural gas plants obviously incur fuel costs. In addition, from the security perspective, the demands for natural gas and electricity can be high at the same time, such as during the 2014 polar vortex in PJM and the 2021 deep freeze in Texas, which may leave natural-gas-fired plants short of fuel and hence significantly raise the possibility of a blackout. During such periods, however, renewable energy's availability is not certain either, due to its dependence on weather, which makes other types of resources, especially energy storage and demand-side resources, more valuable. No single resource type has both the lowest investment and operating costs and 100% reliability in real time.
Optimization Models of Long-Term Planning
To provide a more concrete idea of long-term planning models, we present a high-level mathematical formulation. Similar models (with varying degrees of modeling fidelity) have been widely used by government agencies, system operators, utilities, and power companies. Comprehensive reviews of such models can be found in [25, 28]. To ease the presentation, we summarize the needed parameters and variables in Tables 47.1 and 47.2, which are related to the investment costs and constraints in Fig. 47.5. The corresponding parameters, variables, and constraints for the operation part in Fig. 47.5 will be provided in Sect. 47.3.
To present a specific formulation for the objective function of a planning model, we use TIC to denote the total investment cost, corresponding to the first box in Fig. 47.5, and TOC to denote the total operation cost, corresponding to the second box in Fig. 47.5. Typical forms of TIC and TOC are as follows:

\mathrm{TIC}^t(x^t) := \underbrace{\sum_{j=1}^{N}\sum_{e=1}^{E}\left(\mathrm{VC}^t_{je}\, x^t_{je} + \mathrm{FC}^t_{je}\, \sigma^t_{je}\right)}_{\text{generation/storage investment costs}} + \underbrace{\sum_{l=1}^{L}\sum_{e=1}^{E}\left(\mathrm{VC}^t_{le}\, x^t_{le} + \mathrm{FC}^t_{le}\, \sigma^t_{le}\right)}_{\text{transmission investment costs}}   (47.1)

\mathrm{TOC}^t(z^t) := \underbrace{\sum_{h\in H}\sum_{j\in N}\sum_{g\in G_j\cup \bar{G}_j} \mathrm{SC}^t_{jgh}\, \gamma^t_{jgh}}_{\text{start-up costs}} + \underbrace{\sum_{h\in H}\sum_{j\in N}\sum_{g\in G_j\cup \bar{G}_j} C_g\!\left(p^t_{jgh}\right)}_{\text{generation costs}}   (47.2)

where the variable z^t = (γ^t, p^t) represents the operational-level variables. The variable γ^t_{jgh} is a binary variable that determines whether a unit g at location j is turned on at the specific hour h, and, similarly, p^t_{jgh} represents the corresponding real (as opposed to reactive) power generated by unit g. SC^t_{jgh} represents the start-up cost for unit g, and C_g(·) is a generic operation cost function, including fuel and other variable O&M costs. The superscript t in the variables highlights the multi-scale nature of the long-term planning model, as t usually represents coarse temporal scales, such as years, while the subindex h represents finer temporal scales, usually hours. A detailed account of TOC and the corresponding operational-level constraints is provided in Sect. 47.3. Without needing the operational-level details, we can present a basic long-term planning model in the following form, which is adapted from [21].
\min_{x,\, z} \quad \sum_{t=1}^{T} \delta^t\, \mathrm{TIC}^t(x^t) + \sum_{t=1}^{T} \delta^t\, \mathrm{TOC}^t(z^t)   (47.3)

subject to:

Capacity expansion accounting constraints
\kappa^t_{je} = \kappa^{t-1}_{je} + x^t_{je}, \quad \kappa^t_{le} = \kappa^{t-1}_{le} + x^t_{le}, \quad \forall j, e, l, t   (47.4)

Capacity expansion upper bound constraints
x^t_{je} \le \sigma^t_{je}\, M^t_{je}, \quad x^t_{le} \le \sigma^t_{le}\, M^t_{le}, \quad \forall j, l, e, t   (47.5)

Zonal resource adequacy constraint
\sum_{j\in \hat{j}} \sum_{e=1}^{E} \mathrm{DF}_{je}\, \kappa^t_{je} \ge \left(1 + \mathrm{RM}^t_{\hat{j}}\right) \mathrm{PK}^t_{\hat{j}}, \quad \forall \hat{j}, t   (47.6)

Nonnegativity and binary constraints
\kappa^t_{je}, \kappa^t_{le} \ge 0, \quad \sigma^t_{je}, \sigma^t_{le} \in \{0, 1\}, \quad \forall j, l, e, t   (47.7)

Real-time operation constraints
z^t \in X(\kappa^t), \quad \forall t   (47.8)
In the above formulation, Eq. (47.4) reflects the fact that long-term expansion models usually can only model investments by asset type, such as natural gas plants, wind plants, storage, etc., as opposed to specific generation units. Hence, the same type of asset may be added gradually over the years, and it is generally more convenient to deal with explicitly defined incremental and cumulative capacity variables than to use incremental or cumulative variables alone. Constraint (47.5) means that at any particular location, within a certain time, there is an upper bound on how much of each type of asset can be added (otherwise, the model would always choose to build the cheapest overall asset, which in reality is unlikely to be feasible).

Constraint (47.6) expresses three key aspects of long-term planning that have been discussed above. First, it is the peak demand (PK^t) that drives asset expansion, as the summation of all the (cumulative) capacities (κ^t) needs to be no smaller than the peak demand in any time t. The second aspect is that, to maintain reliability, a power system needs to have built-in redundant capacity, which is represented by the reserve margin requirement RM^t_ĵ. The subscript ĵ stresses that, for the reserve margin requirement, there are usually predefined reserve margin regions, and different regions may have their own reserve requirements. This is mainly to ensure that in future emergency situations a reserve margin region has sufficient capacity to meet its own peak load without needing imports from other regions, which can be unreliable due to transmission constraints. The third aspect is the distinction between ICAP and UCAP: κ represents the installed capacity, while the derating factor DF accounts for potential forced outages of the corresponding power plant type, and hence DF·κ yields the UCAP. Only UCAP is useful in meeting peak demand, as reflected in Constraint (47.6).

Constraint (47.7) is self-explanatory. Note that the integer variables are not always necessary; whether they are needed depends on the asset type. For example, natural gas power plants' capacities are determined by the types of engines, which have specific configurations. Consequently, one cannot choose to build a natural gas plant of arbitrary capacity; only certain capacity levels are technologically feasible. On the other hand, wind or solar farms, which contain multiple wind turbines or solar panels, do not have such restrictions, and in this case the corresponding integer variable σ in (47.5) is not needed. The last constraint, (47.8), is an abstraction of the real-time system operations, and the detailed formulations are given in Sect. 47.3.1. Note that transmission network modeling and constraints are included in (47.8). The optimization problem (47.3)–(47.8) is only meant to provide a glimpse of a long-term planning model. In real-world applications, many more details are needed, such as investment budget constraints and any environmental policy constraints.
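To make the structure of (47.3)–(47.8) more tangible, here is a minimal, self-contained sketch (not the authors' model; a single region, a single period, and illustrative cost, demand, and derating numbers) of a capacity expansion problem with a reserve margin constraint, written with the open-source PuLP modeling library.

# Minimal capacity-expansion sketch (illustrative assumptions only), in the spirit of
# (47.3)-(47.8): choose how much capacity of each technology to build so that derated
# capacity covers peak demand plus a reserve margin, at minimum annualized cost.
from pulp import LpProblem, LpVariable, LpMinimize, lpSum, LpBinary, PULP_CBC_CMD

PEAK_MW = 12_000.0
RESERVE_MARGIN = 0.15

# Assumed technology data: annualized variable cost [$/MW], fixed cost [$ if built],
# derating factor [-], and maximum buildable capacity [MW].
techs = {
    "gas_cc":  {"vc": 95_000.0,  "fc": 4.0e7, "df": 0.95, "max_mw": 10_000.0},
    "wind":    {"vc": 70_000.0,  "fc": 2.0e7, "df": 0.30, "max_mw": 12_000.0},
    "solar":   {"vc": 60_000.0,  "fc": 1.5e7, "df": 0.50, "max_mw": 8_000.0},
    "storage": {"vc": 120_000.0, "fc": 1.0e7, "df": 0.90, "max_mw": 4_000.0},
}

model = LpProblem("toy_capacity_expansion", LpMinimize)
build_mw = {t: LpVariable(f"x_{t}", lowBound=0) for t in techs}           # x in (47.4)
build_flag = {t: LpVariable(f"sigma_{t}", cat=LpBinary) for t in techs}   # sigma in (47.5)

# Objective: variable plus fixed investment costs, as in (47.1).
model += lpSum(techs[t]["vc"] * build_mw[t] + techs[t]["fc"] * build_flag[t] for t in techs)

# (47.5): capacity can only be added if the technology is selected, up to its maximum.
for t in techs:
    model += build_mw[t] <= techs[t]["max_mw"] * build_flag[t]

# (47.6): derated capacity must cover peak demand times (1 + reserve margin).
model += lpSum(techs[t]["df"] * build_mw[t] for t in techs) >= (1 + RESERVE_MARGIN) * PEAK_MW

model.solve(PULP_CBC_CMD(msg=False))
for t in techs:
    print(f"{t:8s}: build {build_mw[t].value():8.1f} MW")

The sketch deliberately omits the multi-year accounting constraints (47.4) and the operational block (47.8); adding hourly dispatch would turn it into the kind of multi-scale model whose computational challenges are discussed in Sect. 47.2.3.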
47.2.2 System Planning Under Deregulation
In a deregulated electricity market, such as an RTO market in the United States, a system operator does not have the authority to dictate the investment decisions of electric power companies. In this case, however, an optimization problem similar to that in the IRP process still has its merits. First and foremost, one of the primary functions of RTOs in the United States is to plan the enhancement and expansion of transmission systems so that they can operate securely in the steady state and so that the transient response to disturbances does not cause instability. While market participants may determine their own investments based on their own assessments of current and future market conditions, if they wish to interconnect a new generator to the bulk transmission system, they must submit interconnection requests. Sophisticated interconnection studies need to be conducted to understand the contribution and impact of the new generators in meeting future demand. The optimization framework discussed in Sect. 47.2.1 is certainly applicable to such situations, and all SOs are responsible for forecasting future demand to conduct such studies.

Second, to ensure both resource adequacy and security, SOs need to have sufficient additional capacity to cover unforeseen future emergency situations. The redundancy requirement is exactly the reserve margin requirement in constraint (47.6). Four of the seven ISOs in the United States (ISO-NE, NYISO, PJM, and MISO) have an organized capacity market, in which the SO announces the reserve requirement and solicits offers from all resources (including generation, transmission, and demand-side resources). The lowest-cost resources that can help meet the future peak demand plus the reserve requirement are then selected. Mathematically speaking, this is just a decentralized way to pick resources to satisfy constraint (47.6) while minimizing the total investment cost as in Eq. (47.1). In this case, individual investors or utility companies can run an optimization problem similar to the one in the previous subsection to get a good sense of what types of resources they should invest in and how much they should bid into a capacity market, provided that they have access to the good market data needed to run such an optimization problem.

The third reason for the resource planning model to be useful in a deregulated market is the fundamental welfare theorems in economics. In brief, the theorems state that under perfect competition (i.e., firms and consumers are all price-takers and have perfect information), the market outcomes in an equilibrium are Pareto optimal (i.e., the most efficient way to allocate resources to meet demand). To relate this to the power system planning situation, it means that when each
individual power company or utility tries to maximize its payoffs (or to minimize its costs) from investing in various generation or demand resources, so long as each market participant treats the capacity market payouts and real-time operation payouts as exogenous variables (i.e., not subject to its own control), then the market outcome is equivalent to that from the IRP process (i.e., a centralized optimization), which corresponds to a Pareto optimal allocation of resources. Such a result is proven in [9], and an extended version with uncertainties is shown in [27]. This fact can be quite useful for SOs and policy makers studying the impact of certain policies on power systems' development, as they can formulate various policies as mathematical constraints (such as a cap-and-trade policy on air pollutant emissions from power plants or requirements on the adoption of renewable energy) and add them to the optimization model (47.3)–(47.8). Such an approach is applicable not just to a regulated market but to deregulated markets as well, thanks to the fundamental welfare theorems. This is the theoretical justification for the US Environmental Protection Agency (EPA) to use such a model, named the Integrated Planning Model®, to analyze the projected impact of various environmental policies on the electric power sector [22].
47.2.3 Challenges to System Planning
While energy system capacity expansion has been studied for over 30 years, new challenges have emerged that call for new modeling and algorithm development. First and foremost, with the large-scale penetration of renewable energy, power systems face uncertainties not just from the demand side but from the supply side as well. As mentioned earlier, just having enough nameplate capacity (i.e., ICAP) in a system does not necessarily guarantee resource adequacy. Now even with sufficient unforced capacity (i.e., UCAP), power system security can still be at risk due to the variability of renewable energy outputs in real time. Flexibility of generation resources becomes key to ensuring the system's reliability. Ramping rates and minimum uptime/downtime are two important measures of generation resources' flexibility, which will be introduced in detail when we present the unit commitment problem in Sect. 47.3.1. Such properties, however, can only be modeled at the hourly or even subhourly level (as ramping rates are usually measured at the minute level). However, as illustrated in Fig. 47.3, resource expansion decisions are made years or even a decade ahead. To simultaneously consider decisions from the decade to the subhourly level, while explicitly accounting for various uncertainties, would result in a "monster model" [15] that is extremely challenging to solve even with the most advanced computing infrastructure. Various decomposition methods have been proposed to address the multi-scale "monster model" under uncertainty, including [8, 21, 29].
Another major challenge of capacity expansion problems lies in the interdependence among critical infrastructure systems, with electricity, natural gas, and water being the main focus here. Interdependence can pose serious security concerns across multiple sectors, as the failure of one sector's function may propagate to other sectors. For example, large-scale gas-fired power generators are part of the backbone of power supply systems, yet they sit at the downstream end of natural gas usage, meaning that the power plants depend on natural gas pipelines to deliver their fuel. In addition, power plants' natural gas usage is usually of lower priority than residential heating, which means that when there is a shortage of fuel, natural gas is delivered for residential use first before being delivered to power plants. This is one of the major reasons PJM was pushed to the brink of blackout during the 2014 Northeast polar vortex [23]. On the other hand, compressor stations along natural gas pipelines rely on electricity to keep natural gas flowing. While they may have backup generators on site, a prolonged power outage could still affect natural gas pipeline operation, which would further aggravate fuel shortages for power generation.

Electricity and water are also intertwined. In the power sector, water is used in different processes during electricity generation [31], but mainly for condensing steam, with the condensate fed back to a boiler to be reused to drive a steam turbine. Most thermal generators, such as coal, nuclear, and natural gas plants, and certain renewable generators, such as biopower and concentrated solar power, use water as the cooling medium to condense steam. As a result, the power sector is the largest freshwater-withdrawing sector in the United States, followed by the irrigation sector. Water shortages can limit power plants' output levels, and the drought in California in 2000 was one of the contributing factors to the collapse of California's energy market. The dependency in the opposite direction is mainly related to the energy needed for the abstraction, conveyance, distribution, and treatment of freshwater and wastewater. During the 2021 deep freeze in Texas, power outages prevented treatment plants from properly treating water, making the water delivered to customers unsafe to drink directly [34].

The interdependence will only grow more convoluted, due to the electrification of the transportation sector and power systems' increasing reliance on communication networks to collect real-time data from digital sensors throughout the energy network and to send out commands or signals. Each of the abovementioned critical infrastructures has its own complicated models; simply combining all the sectors' models into one gigantic model is counterproductive, as the modeling details would be too much for anyone to comprehend, and no algorithms or computational resources could produce meaningful results with verifiable quality. In this case, decentralized optimization algorithms and carefully designed coordination mechanisms among the sectors are
likely the solution for improving the reliability and resilience of all the interconnected sectors. While the interdependence between the electricity and natural gas sectors has been studied over the past decade [39, 48, 51], the interdependence between electricity and other sectors has only recently been gaining attention [19, 35, 49], and significant research efforts are still needed to improve the modeling, algorithms, and policy design for interconnected systems.
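As a toy illustration of the decentralized coordination idea mentioned above (not a method from the chapter; the two quadratic "sector" subproblems and all coefficients are invented for illustration), the sketch below coordinates an electricity subproblem and a gas subproblem that share a fuel quantity, using a simple price-adjustment (dual decomposition) loop: each sector optimizes privately against a shared price, and the price is updated until the requested and delivered quantities match.

# Toy dual-decomposition sketch (illustrative only): an "electricity" agent demands gas,
# a "gas" agent supplies it, and a coordinator adjusts the gas price until supply = demand.
# Each agent solves a tiny quadratic problem in closed form against the current price.

def electricity_demand(price: float) -> float:
    # Electricity sector: value of burning q units of gas = 40*q - 0.5*q^2, cost = price*q.
    # Maximizing (40*q - 0.5*q^2 - price*q) over q >= 0 gives q = max(0, 40 - price).
    return max(0.0, 40.0 - price)

def gas_supply(price: float) -> float:
    # Gas sector: production cost of q units = 10*q + 0.25*q^2, revenue = price*q.
    # Maximizing (price*q - 10*q - 0.25*q^2) over q >= 0 gives q = max(0, 2*(price - 10)).
    return max(0.0, 2.0 * (price - 10.0))

price, step = 15.0, 0.1          # initial price guess and update step size (assumed)
for it in range(200):
    mismatch = electricity_demand(price) - gas_supply(price)
    if abs(mismatch) < 1e-6:
        break
    price += step * mismatch      # raise the price if demand exceeds supply, lower it otherwise

print(f"coordinated gas price: {price:.2f}")
print(f"quantity exchanged:    {gas_supply(price):.2f}")

Neither agent ever sees the other's internal model, which is the appeal of this style of coordination for interconnected infrastructures; real cross-sector schemes are, of course, far richer than this two-function sketch.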
47.3 Power System Operation
On the timescale of weeks to days, a power system operator needs to constantly solve optimization and control problems to ensure the continuous balancing of supply and demand, subject to various physical constraints. In the following, we provide some details of such problems.
47.3.1 Unit Commitment and Economic Dispatch
In US RTO markets, a system operator runs a two-settlement process that includes day-ahead (DA) and real-time (RT) markets to ensure both reliability and the most efficient use of available resources. The day-ahead market is cleared based on expected supply and demand conditions in the next day, and it issues hourly schedules to allow all the market participants sufficient time for operational planning (such as starting a unit that is not currently running). The real-time market (also referred to as the balancing market) adjusts the scheduled dispatch based on actual system conditions, with updated instructions usually issued with a frequency of 5 min or less.

The day-ahead process is needed because most fossil-fueled generators need time to start or shut down (i.e., there are constraints on minimum uptime/downtime) and hence need advance notice to operate properly. The problem of selecting which power plants to run (or to shut down in the event of oversupply) and at what output level is referred to as the unit commitment (UC) problem. Once the commitment schedules are set, system operators continue to run optimization problems at hourly or even finer timescales, up to real time. Such problems are referred to as economic dispatch (ED) problems, which take the fixed set of committed units as input and determine the output levels of the available units to ensure that real-time load is balanced at minimum cost, subject to various physical constraints and the constantly changing system conditions (such as real-time demand variations, forced outages of power plants or transmission lines, and the availability of renewable outputs). The typical timeline of a two-settlement system is depicted in Fig. 47.6, which is taken from [53].
Fig. 47.6 Typical timeline of a two-settlement system [53]. Day ahead: post operating reserve requirements; submit DA bids; clear the DA market using UC/ED (by 1100); run security-constrained unit commitment and collect revised bids (1100–1600); run the post-DA reliability UC; post results (DA energy and reserves) by 1700. Operating day: submit RT bids (up to 30 min before the operating hour); run intraday reliability UC; clear the RT market using ED (every 5 min); post results (RT energy and reserves). (DA: day ahead; RT: real time; UC: unit commitment; ED: economic dispatch)
In Fig. 47.6, the operating reserve requirements posted at the beginning of each DA operation will be the main subject of Sect. 47.3.2. The DA market proceeds by collecting supply and demand bids from all market participants. The market is cleared if the total supply meets the total demand. However, such clearing may not consider transmission and other physical constraints. Hence, a UC optimization problem is run to identify potential security issues. This is usually done after the DA market clearing, between 11 AM and 4 PM, as indicated in Fig. 47.6. If the currently accepted supply and demand bids may cause potential violations of physical constraints in the next day, the SO will ask the related market participants to submit revised bids to help resolve such issues. After the revised bids are collected, a specific version of UC, referred to as the reliability unit commitment problem, which takes into account possible contingencies in real time, is usually solved with the updated bids. This is again to ensure adequacy and security in the next day, even in the event of forced power plant or transmission line outages. As time proceeds to the real-time market, SOs continuously solve ED problems to adjust the committed units' output levels (and to call on the reserves, if needed) to balance the real-time fluctuation of supply and demand. In the following subsections, we introduce the details of the UC/ED problems first, followed by an introduction to operating reserves and ancillary services.
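The financial side of the two-settlement design can be illustrated with a tiny calculation (not from the chapter; the prices and quantities are made up): a generator is paid for its day-ahead schedule at the DA price, and deviations of its real-time output from that schedule are settled at the RT price.

# Two-settlement illustration for a single hour (made-up numbers): the DA schedule is
# settled at the DA price, and real-time deviations are settled at the RT price.

def two_settlement_revenue(da_mw: float, da_price: float,
                           rt_mw: float, rt_price: float) -> float:
    """Revenue of a generator under a two-settlement market design (one hour)."""
    da_payment = da_mw * da_price                    # payment for the day-ahead schedule
    deviation_payment = (rt_mw - da_mw) * rt_price   # buy back / sell extra at the RT price
    return da_payment + deviation_payment

# Example: scheduled 100 MW at $30/MWh day ahead, but delivered only 95 MW in real
# time when the RT price spiked to $80/MWh, so the 5 MW shortfall is bought back at $80.
print(two_settlement_revenue(da_mw=100, da_price=30, rt_mw=95, rt_price=80))  # 2600.0

This settlement structure is what gives participants an incentive to follow their day-ahead schedules, since any shortfall must be covered at the (possibly much higher) real-time price.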
Optimization Models of Unit Commitment and Economic Dispatch
As different generation technologies have very different generation characteristics and costs, and the number of generation units in a power system is usually very large (e.g., MISO has
6692 generating units as of 2020), the problem of choosing which power generation units to run to yield the lowest cost, subject to transmission and other constraints, can be very complicated. Table 47.3 from [17] provides a concrete idea of the characteristics of different power plant types.

Unit Commitment
The UC problem is designed exactly to help SOs automate the process of selecting the best option for committing supply and demand resources to meet real-time load, subject to various engineering and transmission network constraints. A UC problem is usually formulated as a mixed-integer optimization problem. To provide a more concrete idea, we present a basic UC formulation below, which is also adapted from [21]. Various UC models have been widely used by all SOs in the United States on a daily basis; detailed accounts of the models and applications can be found, for example, for PJM [36], MISO [10], and CAISO [4], to name a few. Because of the importance of UC problems, as any improvement in either algorithm performance or solution quality could lead to improved system reliability and lower energy costs, the US DOE recently organized an open competition, named the Grid Optimization (GO) Competition, which aimed to find the best algorithm implementations to solve UC problems. The winners of the competition can be found at https://gocompetition.energy.gov/, and they represent the state of the art in computational performance. Similar to the previous section, we first summarize the parameters and variables in Tables 47.4 and 47.5. With the defined indices, parameters, and variables, a typical UC problem is presented in Eqs. (47.9)–(47.21).
Table 47.3 Generation characteristics of different power plant technologies [17]
Technology: Minimum load (% full load) | Ramping rate (% full load/min) | Hot start-up time (h)
Hydro reservoir: 5 | 15 | 0.1
Simple cycle gas turbine: 15 | 20 | 0.16
Geothermal: 15 | 5 | 1.5
Gas turbine combined cycle: 20 | 8 | 2
Concentrated solar power: 25 | 6 | 2.5
Steam plants (gas, oil): 30 | 7 | 3
Coal power: 30 | 6 | 3
Bioenergy: 50 | 8 | 3
Lignite: 50 | 4 | 6
Nuclear: 50 | 2 | 24

Table 47.4 Parameters, sets, and indices for the short-term UC model
SU_{jgh}: start-up cost of unit g at bus j in hour h [$]
F_g(·): generation cost function for unit g [$]
P^min_g, P^max_g: min/max amount of power generated by unit g [MW]
RU_{jg}, RD_{jg}: ramp-up/ramp-down limits of unit g at bus j [MW/h]
ρ_j: storage efficiency at bus j [%]
D_{jh}: real-time demand at bus j in hour h
B_{ijh}: susceptance between bus i and bus j in hour h
N: set of generating nodes (buses)
A: set of all lines among buses in the network
L_g: required run time of generator g [h]
H: set of hours or finer time periods within the decision period (e.g., the next 24 h)

Table 47.5 Short-term decision variables
α_{jgh}: binary variable; on/off status of unit g in hour h at bus j
γ_{jgh}: binary variable; start-up/shut-down of unit g in hour h at bus j
p_{jgh}: amount of power generated by unit g in hour h at bus j [MWh]
p_{jh}: p_{jh} = Σ_{g∈G_j} p_{jgh}, the total generation at location j in hour h [MWh]
f_{ijh}: power flow between bus i and bus j in hour h [MWh]
s_{jgh}: spinning reserve of generator unit g in hour h at bus j
θ_{jh}: phase angle at bus j in hour h, ∈ [0, 360°]
u_{jh}: total power withdrawn from storage facilities at bus j in hour h
v_{jh}: total power injected into storage facilities at bus j in hour h
r_{jh}: total remaining power in storage facilities at bus j at the beginning of hour h

\min \quad \sum_{h\in H}\sum_{j\in N}\sum_{g\in G_j} \Bigl[\, \mathrm{SU}_{jgh}\,\gamma_{jgh} + C_g\!\left(p_{jgh}\right) \Bigr]   (47.9)

subject to:

Lower and upper bounds on real power output
\alpha_{jgh} P^{\min}_{g} \le p_{jgh} \le P^{\max}_{g} \alpha_{jgh}, \quad \forall g \in G_j,\ j \in N,\ h \in H   (47.10)

Minimum up and down constraints of all generators
\alpha_{jg\tau} - 1 \le \alpha_{jgh} - \alpha_{jg(h-1)} \le \alpha_{jg\tau}, \quad \tau = h, \ldots, \min\{h + L_g - 1, |H|\},\ \forall g \in G_j,\ j \in N,\ h \in H   (47.11)

Start-up action indicator constraint, where \gamma_{jgh} denotes the start-up action
\gamma_{jgh} \ge \alpha_{jgh} - \alpha_{jg(h-1)}, \quad \forall g \in G_j,\ j \in N,\ h \in H   (47.12)

Maximum ramping-up and ramping-down limits
p_{jgh} - p_{jg(h-1)} \le RU_{jg}, \quad p_{jg(h-1)} - p_{jgh} \le RD_{jg}, \quad \forall g \in G_j,\ j \in N,\ h \in H   (47.13)

Limit on energy output from storage
u_{jh} \le r_{jh}, \quad \forall j \in N,\ h \in H   (47.14)

Available energy in storage
r_{jh} = r_{j(h-1)} + v_{j(h-1)} - u_{j(h-1)}, \quad \forall j \in N,\ h \in H   (47.15)

Energy storage capacity
0 \le r_{jh} \le \kappa_{jy}, \quad y:\ \text{storage},\ \forall j \in N,\ h \in H   (47.16)

DC Kirchhoff's current law with transmission losses
\sum_{(i,j)\in A_i^{+}} f_{ijh} - \sum_{(j,i)\in A_i^{-}} \bigl(f_{jih} - l_{jih}\bigr) = \rho_i u_{ih} + p_{ih} - v_{ih} - D_{ih}, \quad \forall i \in N,\ h \in H   (47.17)

Calculation of power losses
l_{ijh} = B_{ij}\, f_{ijh}^{2}, \quad \forall (i,j) \in A,\ h \in H   (47.18)

DC Kirchhoff's voltage law (including potential new lines)
f_{ijh} - B_{ijh}\bigl(\theta_{ih} - \theta_{jh}\bigr) \le 0, \quad \forall (i,j) \in A,\ h \in H   (47.19)

Transmission line capacity constraint
0 \le f_{ijh} \le M_{ij}, \quad \forall (i,j) \in A,\ h \in H   (47.20)

Binary and nonnegativity constraints
\alpha_{jgh}, \gamma_{jgh} \in \{0, 1\}, \quad p_{jgh}, r_{jh}, u_{jh}, v_{jh} \ge 0, \quad \forall g \in G_j,\ j \in N,\ h \in H   (47.21)
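The following is a minimal, single-bus sketch of a UC problem in the spirit of (47.9)–(47.21) (not the SOs' production models; the network, storage, and reserve constraints are omitted, and the three generators, their costs, and the 6-hour demand profile are invented for illustration). It keeps the binary commitment and start-up variables, the output bounds (47.10), the start-up indicator (47.12), and the ramping limits (47.13), and is written with the open-source PuLP library.

# Toy single-bus unit commitment sketch (illustrative data; no network, storage, or
# reserves), following the structure of (47.9)-(47.21): binary on/off and start-up
# variables, output bounds, start-up indicator, ramping limits, and load balance.
from pulp import LpProblem, LpVariable, LpMinimize, LpBinary, lpSum, PULP_CBC_CMD

HOURS = range(6)
DEMAND = [320, 350, 420, 480, 440, 360]  # MW, assumed

# Assumed generator data: capacity bounds, ramp limit, marginal cost, start-up cost.
GENS = {
    "coal":   {"pmin": 150, "pmax": 350, "ramp": 80,  "cost": 25, "startup": 4000, "init_on": 1},
    "ccgt":   {"pmin": 80,  "pmax": 250, "ramp": 120, "cost": 40, "startup": 1500, "init_on": 1},
    "peaker": {"pmin": 20,  "pmax": 120, "ramp": 120, "cost": 90, "startup": 300,  "init_on": 0},
}

m = LpProblem("toy_unit_commitment", LpMinimize)
on = {(g, h): LpVariable(f"on_{g}_{h}", cat=LpBinary) for g in GENS for h in HOURS}        # alpha
start = {(g, h): LpVariable(f"start_{g}_{h}", cat=LpBinary) for g in GENS for h in HOURS}  # gamma
p = {(g, h): LpVariable(f"p_{g}_{h}", lowBound=0) for g in GENS for h in HOURS}

# Objective (47.9): start-up costs plus generation costs.
m += lpSum(GENS[g]["startup"] * start[g, h] + GENS[g]["cost"] * p[g, h]
           for g in GENS for h in HOURS)

for h in HOURS:
    # Load balance (single-bus version of (47.17), with no losses or storage).
    m += lpSum(p[g, h] for g in GENS) == DEMAND[h]
    for g in GENS:
        d = GENS[g]
        # Output bounds (47.10).
        m += p[g, h] >= d["pmin"] * on[g, h]
        m += p[g, h] <= d["pmax"] * on[g, h]
        prev_on = d["init_on"] if h == 0 else on[g, h - 1]
        prev_p = d["pmin"] * d["init_on"] if h == 0 else p[g, h - 1]
        # Start-up indicator (47.12).
        m += start[g, h] >= on[g, h] - prev_on
        # Ramping limits (47.13).
        m += p[g, h] - prev_p <= d["ramp"]
        m += prev_p - p[g, h] <= d["ramp"]

m.solve(PULP_CBC_CMD(msg=False))
for h in HOURS:
    sched = ", ".join(f"{g}={p[g, h].value():.0f}" for g in GENS if on[g, h].value() > 0.5)
    print(f"hour {h}: {sched}")

Even this toy version shows why commitment decisions must be made ahead of time: the binary variables couple the hours together through the start-up and ramping constraints, which is what makes the full-scale problem a large mixed-integer program.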
The objective function in (47.9) is exactly the one given in (47.2), and it is very generic. In this form, it includes the start-up cost (the first term) and the fuel cost (the C_g(·) function). For the first term, since it takes energy to bring a fossil-fueled plant up and running (and the fuel efficiency in the initial stage is lower than when running at nearly full capacity), there are costs associated with starting each fossil-fueled plant from a completely off state. The second term, the fuel cost function C_g(p), is usually a linear or a convex quadratic function of the output power p. In addition, the cost term may also include penalties when there is a blackout (i.e., supply and demand cannot be matched by the available resources without violating any of the constraints). The penalty must be set very high; otherwise, the model may intentionally choose a blackout, as it may be cheaper than committing additional resources to meet demand.

To understand the constraints in the UC formulation, we start with the two sets of binary variables. The variable α_{jgh} indicates the current status of a generation resource g, with 1 meaning that the resource is on in hour h and 0 indicating off. By constraint (47.12), it can be inferred that γ_{jgh} = 1 when the generation unit g is started up in hour h. If there is no status change from hour h − 1 to h (i.e., the unit is either on or off in both hours), then at an optimal solution γ_{jgh} will stay at 0 (because otherwise it would incur unnecessary start-up cost in the objective function). For the bound constraint (47.10) on (real) power output, while the upper bound is the capacity constraint, the lower bound reflects the minimum load requirement in Table 47.3, as most fossil-fueled (and nuclear) power plants cannot go below a certain percentage of their full capacity when they are running. The start-up time in Table 47.3 is reflected in the minimum-down-time constraint (47.11), while the same constraint also includes requirements on minimum up time, meaning that for most conventional power plants, once they are up and running, they cannot be turned off immediately. Another constraint related to power plants' flexibility is the ramping constraint (47.13), which reflects the second column in Table 47.3 and means that from hour to hour, generation outputs can only change within a certain range. Both the minimum-up/minimum-down constraints and the ramping constraints highlight the difficulties of large-scale wind/solar integration, as the power outputs from wind and solar can change quickly with changing weather conditions, and if there are not sufficient flexible resources available to compensate for the renewables' output changes, the system's reliability can be jeopardized. Several ISOs/RTOs in the United States (such as CAISO, MISO, and ISO-NE) have considered a market approach to procure flexible generation resources (such as those that can start and shut down within 15 min and can freely change their outputs from zero to full capacity with less stringent ramping constraints), which aims to add financial incentives for market participants to bring more flexible generation assets to the grid.

Constraint (47.17) is a direct current (DC) version of Kirchhoff's current law, which states that the current flowing into a node must equal the current flowing out of it. As all transmission conductors have resistance, there is always energy lost as heat in the conductors; the term l in (47.17) explicitly accounts for such losses, and the loss level can be (approximately) determined by Eq. (47.18). Constraint (47.19) is the DC version of Kirchhoff's voltage law, which states that the directed sum of the potential differences (voltages) around any closed loop circuit is zero. Transmission line capacity is enforced by Constraint (47.20).

The UC problem represented by Eqs. (47.9)–(47.21) is a basic version of the UC problems used by SOs to ensure power systems' reliability at the daily and hourly level. The UC problems arising from real power system operation usually have millions of variables and constraints, and significant research has been done to improve the efficiency of computational algorithms for solving the resulting large-scale mixed-integer programs (MIPs). Commercial solvers, including CPLEX and Gurobi, have made notable advancements and are well suited to solving challenging UC problems. That being said, computational challenges abound for such problems in real-world settings. In Constraints (47.17) and (47.19), the transmission network is modeled as a DC network. However, the high-voltage transmission lines connecting bulk generators to distant demand regions are mainly AC lines. While a DC network is an approximation of an AC network, the approximation becomes less accurate when the energy flows in the transmission lines are near the line capacities. In addition, a DC network does not have reactive power, which is only
present in an AC network. Reactive power is either generated or absorbed by electric generators to maintain a constant voltage level, which is crucial for a power system's stability. (More details on reactive power are given in Sect. 47.3.2.) Full AC network modeling, however, inevitably introduces additional non-convex constraints into a UC problem, which is already non-convex because of the binary variables. UC problems with full AC network modeling form another active research area, and more details can be found in [16] and [6], for example.

In addition to traditional optimization approaches to address the computational difficulties, there has been recent work on applying machine learning (ML)-based approaches. One particular idea is based on linear approximation, that is, approximating the nonconvex UC problem with a linear program (LP) (by relaxing the binary variables and linearizing nonlinear constraints). An ML model is then trained to learn the differences between the solutions of the LP and of the original UC problem. The goal is to solve only the relaxed LP in real time, which is much faster, and to use the LP solutions and the trained ML model to infer the corresponding UC solutions. Such an approach is documented in [26]. There has been other work (e.g., [45]) using ML-based models to classify UC problems as "easy" versus "hard," since the complexity of a UC also depends on the underlying network topology and available resources. If an ML model can identify "hard" UC problems before solving them, heuristic algorithms could be applied to them, which may obtain good-quality solutions much faster than blindly applying a MIP solver. While these methods have not been tested in real-world control situations, they represent promising areas for combining new developments in both optimization and machine learning. A comprehensive review of ML approaches for UC problems can be found in [46]. Two other modeling enhancements of the basic UC model, considering forced outages in real time and co-optimizing energy and reserve resource allocation, are discussed in the following.

Reliability Unit Commitment
An immediate and important extension of the basic UC problem is to account for generation or transmission resource outages in real time. A typical approach is the so-called N-1 contingency analysis, which takes out each generation asset or transmission line in turn and ensures that the remaining assets can still meet the projected demand in real time. This process corresponds to the post-DA reliability UC (RUC) in the two-settlement system, as illustrated in Fig. 47.6. The RUC can provide SOs with a contingency plan corresponding to the outage of each generation asset and transmission line. To further improve system security, more thorough analysis can be done for N-1-1 (loss of two assets consecutively) and N-2 (loss of two assets simultaneously) contingencies. Mathematically,
however, even the N-1 analysis brings significant computational challenges. To modify the basic UC model to account for N-1 contingencies, every decision variable and constraint in (47.10)–(47.21) gets an added subscript, say c, to represent each contingency from 1 to C, with C being the total number of N-1 contingencies, such as the total number of generation plants and transmission lines in a power system. As such, the RUC has a problem size (in terms of the number of decision variables and constraints) that is C times that of the basic UC, with C likely in the thousands. [20] provides a detailed formulation of an RUC with N-1 contingencies and its corresponding solution method. [7] discusses the N-1-1 procedure at MISO, which only checks for feasibility and does not include the optimization. Even so, the process is still quite valuable, as it can be developed into a fast and automated process to help SOs identify potential reliability concerns.

Economic Dispatch
An ED problem is usually solved after a UC problem. With the UC problem determining the on/off status of the power generation resources, the ED problem is simply the UC problem with the binary variables fixed at 0 or 1; it determines how much energy to generate from the resources that have been committed (i.e., those with α = 1 in constraint (47.10) in the particular hour). Since an ED problem is an optimization problem with only continuous variables, its computational complexity is much lower than that of a UC problem, and it is usually solved hourly (or even every 5 min, as shown in Fig. 47.6) on a rolling basis, with updated near-term forecasts of demand and renewable energy outputs.

In addition to ensuring the balancing of supply and demand in RT markets, solving an ED problem also yields the so-called locational marginal price (LMP) at each pricing node. An LMP is a way for wholesale electric energy prices to reflect the value of electric energy at different locations, accounting for the patterns of load and generation, the physical limits of transmission lines, and transmission losses. Mathematically speaking, an LMP is exactly the Lagrangian multiplier (i.e., the dual variable) of the supply and demand balancing constraint (47.17) in an ED problem. The LMPs are directly linked to the energy component of consumers' electric bills. (The other components of electric bills are charges made by a utility company to maintain and upgrade its transmission and distribution networks to ensure the delivery of energy to end users.) The LMPs also send market signals to market participants, as locations with high LMPs indicate potentially high payoffs if additional generation or demand resources are added at those locations. Texas' system operator, ERCOT, relies solely on the LMPs as signals to incentivize potential investors to build generation resources or to develop DERs to ensure future resource adequacy, as opposed to organizing a separate capacity market.
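To illustrate economic dispatch and the marginal-price idea in the simplest possible setting (a single bus with no network constraints or losses, and invented generator data, so the LMP collapses to the marginal cost of the last unit dispatched), consider the following sketch.

# Single-bus economic dispatch sketch (illustrative data): stack committed units in
# merit order (cheapest first) and dispatch until load is met. With no network limits
# or losses, the system marginal price equals the marginal cost of the last unit used.

committed_units = [
    # (name, available capacity in MW, marginal cost in $/MWh) -- assumed values
    ("nuclear", 400, 10.0),
    ("coal",    300, 25.0),
    ("ccgt",    250, 40.0),
    ("peaker",  120, 90.0),
]

def economic_dispatch(load_mw: float):
    """Return (dispatch dict, marginal price) for a single bus with no constraints."""
    dispatch, remaining, price = {}, load_mw, 0.0
    for name, cap, cost in sorted(committed_units, key=lambda u: u[2]):
        take = min(cap, remaining)
        if take > 0:
            dispatch[name] = take
            price = cost          # the last (most expensive) unit used sets the price
            remaining -= take
    if remaining > 1e-9:
        raise ValueError("load exceeds committed capacity")
    return dispatch, price

schedule, marginal_price = economic_dispatch(load_mw=820)
print(schedule)                                  # {'nuclear': 400, 'coal': 300, 'ccgt': 120}
print(f"system marginal price: ${marginal_price}/MWh")   # 40.0 in this example

In a full network-constrained ED, the LMPs are instead read off as the dual variables of the nodal balance constraints (47.17) and can differ by location because of congestion and losses; this single-bus version only captures the "marginal unit sets the price" intuition.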
47.3.2 Co-optimization of Energy and Reserve
From an SO's perspective, energy consumption and generation are inherently stochastic in real time. On the consumption side, an SO would not know how many electricity-consuming devices will be turned on or off at any particular moment; on the generation side, power plants may be out due to unplanned mechanical or other technical issues, or due to a shortage of fuel during extreme weather conditions. The rapid development of variable-output renewable resources makes the supply side even more unpredictable. In addition, the transmission lines connecting supply and demand may be subject to outages too. The 2003 Northeastern blackout is arguably the most prominent example, where the massive blackout was triggered by tree branches touching power lines.

Energy supply and demand in an electrical system directly influence the system's frequency, which is a measure of the rotation speed of the synchronized generators. As the total demand increases, the system frequency decreases, and vice versa. Under undisturbed conditions, the system frequency must be maintained within strict limits in order to ensure full and rapid deployment of control facilities in response to a disturbance. Such frequency limits are illustrated in Fig. 47.7, in which 60 Hz is the standard in North America, while 50 Hz is used in most European and Asian countries. With these multifaceted uncertainties, just solving the UC/ED problems, even with contingency analysis, is not sufficient to keep the system frequency in the required range in real time. Backup generation resources (and demand resources as well) that can be quickly deployed to help
balance supply and demand are needed. Such resources are referred to as operating reserves, and the various services they provide, all aimed at ensuring the instantaneous and continuous balance of electricity generation and load, are called ancillary services.
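As a rough illustration of the supply-demand-frequency coupling described above, the sketch below applies the standard primary frequency response (governor droop) relation, in which a unit with droop R changes its output by ΔP = −(Δf/f_nom)/R × P_rated; the fleet sizes, droop value, and frequency deviation used here are hypothetical.

```python
# Simple primary-frequency-response (governor droop) illustration of why
# frequency sags when supply drops and how online, governor-equipped
# units arrest the deviation. All figures below are hypothetical.

F_NOM = 60.0   # Hz nominal frequency (North America)

def droop_response_mw(delta_f_hz, p_rated_mw, droop=0.05):
    """Extra output a governor-equipped unit provides for a frequency deviation."""
    return -(delta_f_hz / F_NOM) / droop * p_rated_mw

# Frequency has sagged 0.05 Hz after losing a generator; three online
# units with 5% droop respond in proportion to their ratings.
fleet_mw = [300, 200, 100]
response = [droop_response_mw(-0.05, p) for p in fleet_mw]
print([round(r, 1) for r in response], "MW, total", round(sum(response), 1))
# -> [5.0, 3.3, 1.7] MW, roughly 10 MW picked up by governor action
```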
Types of Operating Reserve Resources and Services
Operating reserves usually include regulation and load-following reserves under normal system conditions and spinning and non-spinning reserves for contingency situations [24]. Under normal conditions, generation units providing regulation service are usually equipped with automatic generation control (AGC) and can respond quickly (at the minute level) to real-time supply/demand fluctuations. Since at any moment load may be less or more than supply, there are correspondingly regulation-up and regulation-down services. Units already running at full capacity obviously cannot offer regulation-up service; similarly, units running at minimum capacity cannot provide regulation-down service. As a result, careful planning is needed to ensure adequate regulation-up and regulation-down services in real time, which is exactly why an optimization-based approach is needed; it is discussed in the later part of this section. Load-following services are similar to regulation services, though their response time is slower, usually around 10 to 15 min. Units providing such services may be controlled by AGC or manually. While load-following units' response time is slower than that of units providing regulation services, their available capacities are usually much bigger.
Fig. 47.7 Frequency control is key to ensure power system stability and security [24]
Such services have become more important with the increasing penetration of variable-output renewable resources, as the load-following capability is key to accommodating the fluctuation of renewable energy outputs. Under contingency situations, such as a sudden and dramatic change of supply or demand, or a sudden loss of major transmission lines, a power system's frequency would likely fall out of the required range, as shown in Fig. 47.7. In this case, spinning and non-spinning reserves are called upon to help the system quickly restore its normal frequency range. Both spinning and non-spinning reserves are usually equipped with AGC and communication capabilities so that they can automatically respond to the SO's signal within 10 min, and they are able to continue running for several hours until the SO has enough resources to operate the system within the normal frequency range. The main difference between spinning and non-spinning reserve is that the former requires the units to be running and synchronized to the system so that they are ready to serve load whenever called. Non-spinning reserve, on the other hand, is not operating when called upon but can start and generate within 10 min. Certain internal combustion generators, aero-derivative combustion turbines, and hydro plants can all serve as non-spinning reserve. All the abovementioned ancillary services provide real power to the system when needed. There are two additional types of ancillary services, providing other functions, that are also important to ensure power systems' reliability and security. The first type is voltage support service. Just like frequency, voltages in a power system must be maintained within a tight range as well. Voltages outside the range, either too high or too low, can destroy equipment connected to the system. Voltage in a power system is sensitive to injection and withdrawal of reactive power, and hence voltage control is realized through controlling reactive power in the system. While various pieces of transmission system equipment, including capacitors, inductors, and transformer tap changers, can be used to control reactive power, they are not sufficient, especially during emergency situations. Most synchronous generators can provide reactive power. However, their ability to provide voltage support (either to produce or to absorb reactive power) is related to their ability to produce real power, and the relationship is nonlinear, as shown in Fig. 47.8, which is provided in [24]. The prime mover in Fig. 47.8 refers to the mechanical machine that converts the energy of fuel into mechanical energy (rotating the machine to cut through a magnetic field and generate electricity); prime movers are what we usually refer to as "turbines" or "engines." Most fossil-fuel generators are designed so that, even when their prime movers reach their production limit, they can still produce or absorb reactive power. However, if they are required to provide more voltage support, their real power production has to be reduced, as shown in Fig. 47.8. In addition, reactive power does not travel well, because the inductive impedance of a transmission system is much greater than its resistance.
Fig. 47.8 Relationship between real and reactive power production in synchronous generators [24]
Consequently, voltage support resources, in both their quantities and their locations, also need careful planning to ensure real-time system security. The other type of non-real-power ancillary service is black start, which is provided by generation resources to restart a power system in the extreme event of a blackout. Such resources must be able to start themselves quickly without an external electricity source and be capable of providing both real and reactive power to energize transmission lines and restart other generators. They are also usually required to have fast ramping capability and sufficient capacity to accommodate changes in the system's real and reactive loads as the system recovers from a blackout. In addition, black start generators need to be appropriately located in a power system to be useful in restarting other generators and in resynchronizing the interconnection. As such, they are usually equipped with communication and control automation devices connected to the SOs. Some examples of black start resources include hydroelectric dams, diesel generators, open-cycle gas turbines, and various grid-connected energy storage resources, such as compressed air storage. The procurement of black start services is very different from that of other ancillary services: they are usually procured through specific contracts between SOs and black-start-capable generators. ERCOT, the ISO in Texas, is an exception in that it does implement a market approach to solicit bids from black start service providers. Taken from [24], Table 47.6 provides a good summary of the characteristics of the various ancillary services discussed above. As power-generating units' ability to provide real power, reserve service, and reactive power is intertwined, the best approach to ensure both efficient
allocation of resources and system reliability is to jointly optimize all three aspects together, termed co-optimization, which is the main subject of the following subsection.
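The trade-off in Fig. 47.8 can be illustrated with a deliberately simplified calculation that keeps only the armature (MVA) heating limit, under which apparent power must stay within the machine's rating; the field heating and prime mover limits that shape the rest of the capability curve are ignored here, and the 100-MVA rating is hypothetical.

```python
import math

# Simplified view of the real/reactive power trade-off in Fig. 47.8:
# under the armature (stator) heating limit alone, apparent power
# sqrt(P^2 + Q^2) must stay within the machine's MVA rating, so raising
# real output eventually shrinks the reactive power the unit can supply.

def max_reactive_mvar(p_mw, s_rating_mva):
    """Reactive capability left under the MVA (armature) limit at real output p_mw."""
    if p_mw > s_rating_mva:
        raise ValueError("real power exceeds the MVA rating")
    return math.sqrt(s_rating_mva**2 - p_mw**2)

for p in (0, 50, 80, 95):                      # hypothetical 100-MVA machine
    print(p, "MW ->", round(max_reactive_mvar(p, 100.0), 1), "MVAR")
# 0 -> 100.0, 50 -> 86.6, 80 -> 60.0, 95 -> 31.2 MVAR
```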
Co-optimization of Energy and Reserve Resources
As introduced above, for a power plant to provide regulation, load-following, or spinning reserves, the plant needs to be up and running when its reserve service is called. Also, when demand is high and additional supply is needed, the power plants providing regulation-up and spinning reserves cannot be running at their full capacity, as otherwise they would not be able to provide the promised services. Hence, securing ancillary services needs to be closely coordinated with the UC/ED process, and the best approach is the so-called co-optimization, which modifies the UC problem to explicitly include ancillary service variables and constraints in the optimization problem. As an illustrative example, we show how to add spinning reserves to the UC problem (47.9)–(47.21). Let SR_{jh} denote the spinning reserve requirement at bus j in hour h, measured in MW. This exactly corresponds to the first box in Fig. 47.6, which shows that such a requirement is usually posted by an SO at the beginning of a DA market. Let s_{jgh} denote the spinning reserve service that generator g, located at bus j, is to provide in hour h. Then we can use the following constraint to replace constraint (47.10):

$\alpha_{jgh} P_g^{\min} \leq p_{jgh} + s_{jgh} \leq P_g^{\max} \alpha_{jgh}.$   (47.22)
We also need to add an additional constraint as follows:
$\sum_{g \in G_j} s_{jgh} \geq SR_{jh}, \quad \forall j \in N,\ h \in H,$   (47.23)
which ensures that the system has at least SR_{jh} MW of spinning reserve at location j in hour h. As providing ancillary services also incurs fuel costs, the corresponding cost of providing s_{jgh} needs to be added to the objective function (47.9). Regulation and load-following services can be modeled in a similar fashion to spinning reserves. If an AC network flow model is used in the UC problem, then requirements on reactive power to support system voltage can also be added to the UC problem. More sophisticated energy and reserve co-optimization models can be found in [5, 30, 41, 43].
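A minimal single-bus, single-hour sketch of this co-optimization, written with the open-source PuLP modeling package and its bundled CBC solver, is shown below. The generator data, reserve requirement, and reserve cost term are hypothetical, and the two capacity constraints simply mirror the structure of (47.22) and (47.23) rather than reproduce any ISO's actual formulation.

```python
import pulp

# Single-bus, single-hour energy/spinning-reserve co-optimization sketch
# with hypothetical data: commitment-linked capacity is shared by energy
# and reserve (analogue of (47.22)), and total spinning reserve must
# cover the posted requirement (analogue of (47.23)).

gens = {               # name: (Pmin, Pmax, energy cost $/MWh, reserve cost $/MW)
    "coal":   (100, 300, 22.0, 3.0),
    "ccgt":   ( 50, 200, 35.0, 2.0),
    "peaker": (  0, 100, 80.0, 1.0),
}
demand, reserve_req = 420, 50      # MW

m = pulp.LpProblem("uc_reserve_cooptimization", pulp.LpMinimize)
u = {g: pulp.LpVariable(f"u_{g}", cat="Binary") for g in gens}      # commitment
p = {g: pulp.LpVariable(f"p_{g}", lowBound=0) for g in gens}        # energy (MW)
s = {g: pulp.LpVariable(f"s_{g}", lowBound=0) for g in gens}        # spinning reserve (MW)

m += pulp.lpSum(gens[g][2] * p[g] + gens[g][3] * s[g] for g in gens)  # total cost

for g, (pmin, pmax, _, _) in gens.items():
    m += p[g] + s[g] <= pmax * u[g]          # analogue of (47.22), upper part
    m += p[g] + s[g] >= pmin * u[g]          # analogue of (47.22), lower part

m += pulp.lpSum(p.values()) == demand        # supply-demand balance
m += pulp.lpSum(s.values()) >= reserve_req   # analogue of (47.23)

m.solve(pulp.PULP_CBC_CMD(msg=False))
for g in gens:
    print(g, int(u[g].value()), p[g].value(), s[g].value())
```

Extending the sketch to multiple hours, buses, and reserve products follows the same pattern of indexed variables and constraints, which is why the co-optimized problem grows so quickly in practice.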
47.3.3 Challenges to Short-Term Operations

While the extremely large-scale UC/ED problems, along with their security-constrained versions, are already difficult to solve within the required time limits of power systems' daily operations, multiple challenges are emerging. The first challenge is how to specifically account for real-time uncertainty, especially considering the increasing penetration of variable-output renewable energy. Stochastic UC has been an active research area for the past decade, and a comprehensive literature review is provided in [53]. The subjects of multi-stage stochastic programming [54], robust optimization [1], distributionally robust optimization [44], and their applications to improve power systems' security are all on the frontiers of current research. Another major challenge is how to integrate the potentially billions of grid-edge resources, which can include a great variety of resources and technologies on the consumer side, such as roof-top solar panels, energy storage, electric vehicles, and smart buildings/homes. (A new term, prosumers, has been coined to describe those consumers with generation resources.) The recent order from the Federal Energy Regulatory Commission (FERC), Order 2222, aims at "removing the barriers preventing distributed energy resources (DERs) from competing on a level playing field in the organized capacity, energy, and ancillary services markets run by regional grid operators." (From FERC Order 2222, https://www.ferc.gov/media/ferc-order-no-2222-factsheet) In a nutshell, FERC Order 2222 intends to encourage consumer-side resources to participate in a wholesale market run by an ISO/RTO. The problem for the SOs, however, is that such DERs are not controllable (or even visible) to them. For power plants that participate in a wholesale market, once their supply bids are cleared in the DA market, they are bound to generate the promised amount of energy in real time; otherwise, they will have to purchase in the RT market to cover their production shortage, and RT market energy prices are usually much higher than DA prices, essentially imposing a financial penalty on those nonperforming power plants. How to make DERs financially accountable for the energy or reserve they promise to provide in DA, or even how DERs should participate in a wholesale market, is not clear at this moment. One particular approach, referred to as transactive energy, has been the main focus of DER integration. Transactive energy refers to any approach that is incentive-based (such as sending price signals) to encourage the DERs to respond in a way that helps lower the overall system costs (of balancing supply and demand) while not endangering system reliability. This is referred to as a decentralized control paradigm, which is in stark contrast to the traditional centralized control paradigm. While the DERs are not under an ISO/RTO's control, it is still the SOs' responsibility to maintain system reliability. How to solve a
Table 47.6 Description of key ancillary services [24]. Columns: service; service description; response speed; duration; cycle time; market cycle; price range (average/max), $/MW-hr.

Normal conditions:
• Regulating reserve – Online resources, on automatic generation control, that can respond rapidly to system-operator requests for up and down movements; used to track the minute-to-minute fluctuations in system load and to correct for unintended fluctuations in generator output to comply with Control Performance Standards (CPSs) 1 and 2 of the North American Electric Reliability Council. Response speed: ∼1 min; duration: minutes; cycle time: minutes; market cycle: hourly; price range: 35–40/200–400.
• Load following or fast energy markets – Similar to regulation but slower; bridges between the regulation service and the hourly energy markets. Response speed: ∼10 min; duration: 10 min to hours; cycle time: 10 min to hours; market cycle: hourly; price range: –.

Contingency conditions:
• Spinning reserve – Online generation, synchronized to the grid, that can increase output immediately in response to a major generator or transmission outage and can reach full output within 10 min to comply with NERC's Disturbance Control Standard (DCS). Response speed: seconds to
Fig. 50.5 Typical triple solution with three communication stacks (vehicle OBE, roadside equipment, and service monitor system), USDOT/FHWA's architecture reference for cooperative and intelligent transportation [80]
framework architecture RAIM is based upon TOGAF’s four phases: a preparatory phase, a strategy/architecture vision phase, a business architecture phase, and an information system architecture phase.
SPACE (Shared Personalized Automated Connected vEhicles)
SPACE is a project of the International Association of Public Transport (UITP) to promote the integration of ADS-equipped vehicles as shared vehicles into public transportation and to create a high-level reference architecture for the "comprehensive and seamless integration of driverless vehicles with other IT systems in the mobility ecosystem using a fleet orchestration platform" [67]. By identifying key functions and components, and linking the platform to transit's backend systems, road infrastructure, traffic management centers, and other data, the architecture facilitates deployment for 13 mobility scenarios and allows collaborative use of mixed vehicle fleets (brands, types, and
different SAE levels) operated by multiple agencies and owners/operators. Key considerations include demand and local needs; accommodation of dual-use vehicles (e.g., passenger and freight delivery); and modular or platooning vehicles to address changing demand and capacity needs. The following 13 use cases were considered in the development of the architecture (see Table 50.3). Most of these (with the exception of #2 and #10) are or are expected to be fully integrated into public transportation ticketing/apps, dispatching, and control rooms [67].
50.4.4 NIST Cybersecurity Framework

The NIST Cybersecurity Framework addresses the physical, cyber, and people dimensions of cybersecurity and helps organizations improve security and resilience by providing risk management best practices and effective standards, guidelines, and practices. The three components of the Framework are the framework core, the implementation tiers, and the framework profiles.
Table 50.2 ITS security areas and service packages [82]
• Disaster response and evacuation – Addresses all types of disasters through interagency coordination. Service packages: Early warning system; Disaster response and recovery; Evacuation recovery and management; Disaster traveler information.
• Freight and commercial vehicle security – Focuses on awareness through surveillance. Service packages: Carrier operations and fleet management; Freight administration; Fleet and freight security; Fleet administration; CV administrative processes.
• HAZMAT security – To address hijackings of HAZMAT and its use as a weapon: tracking HAZMAT cargo and reporting deviations; detection of HAZMAT cargoes and identifying and reporting unauthorized cargoes; driver authentication and reporting. Service packages: Roadside HAZMAT security detection and mitigation; CV driver security authentication.
• ITS wide area alert – Dissemination of emergency notifications to travelers. Service package: Wide area alert.
• Rail security – Monitor and secure rail assets including personnel; surveillance of highway-rail intersections. Service packages: Advanced railroad grade crossing; Standard railroad grade crossing; Railroad operations coordination.
• Transit security – Monitor and surveil transit infrastructure and assets; security management and control; emergency information to travelers. Service packages: Traffic incident management system; Disaster response and recovery; Evacuation and reentry management; Transit security.
• Transportation infrastructure security – Monitoring of transportation infrastructure; use of barriers and safeguard systems before, during, or after an incident. Service package: Transportation infrastructure protection.
• Traveler security – Security of travelers in public areas. Service packages: Transit security; Transportation infrastructure protection; Wide area alert; Disaster traveler information.
Table 50.3 Use cases for the SPACE architecture [84]
Use case:
1. First-/last-mile feeder to public transport station
2. Special service (campus, business park, hospital, letter, and small parcel delivery)
3. Bus rapid transit (BRT) (fixed route, dedicated lane)
Pop-up shuttle transport (fixed route, temporary service)
4. Area-based service (on-demand stops) and feeder to public transport station
5. Depot
6. Premium shared point-to-point service (on demand)
7. Shared point-to-point service
8. Local bus service (on-demand replacement for traditional transit)
9. School bus
10. Premium – Robo taxis (on demand)
11. Car sharing (on demand)
12. Intercity travel (fixed highway routes)
Physical location/setting (second column of Table 50.3, in row order): Urban, suburban, mixed traffic | Suburban, mixed traffic/pedestrian areas | Urban (high density), suburban | Not applicable | Small, isolated city; rural | Urban (high density) | Urban, suburban; small, isolated city; rural, mixed traffic | Urban, suburban; small, isolated city; rural | Small, isolated city; mixed traffic | Small, isolated city; rural, mixed traffic | Urban, suburban; small, isolated city; rural | Suburban; small, isolated city; rural | Not applicable

• Framework core is "a set of cybersecurity activities, outcomes, and informative references that are common across sectors and critical infrastructure."
• Implementation tiers are "a mechanism for organizations to view and understand the characteristics of their approach to managing cybersecurity risk, which will help in prioritizing and achieving cybersecurity objectives."
• Framework profiles help organizations prioritize cybersecurity activities and align them with their business/mission requirements, risk tolerances, and resources [85].

Figure 50.6 illustrates the information and decision flows at the executive level, business process level, and implementation/operations level of an organization.
Fig. 50.6 Notional information and decision flows within an organization (senior executive level, business/process level, and implementation/operations level) [85]
At the executive level, mission priorities, available resources, and risk tolerance must be communicated to the business/process level. The business/process level coordinates with the implementation/operations level to communicate business needs and produce a profile. The implementation/operations level informs the business/process level regarding implementation progress, and the business/process level performs an impact assessment, which is then communicated to the executive level. The executive level uses the information as input to its risk management process and provides the information to the implementation/operations level to enhance its impact awareness [85].
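As a purely notional illustration of how the core functions, tiers, and a profile fit together, the sketch below represents a target profile as plain data and lists where the current tier falls short. The function names and tier labels follow the Framework's standard taxonomy, but the specific entries are invented for illustration and do not come from the chapter or from any agency's actual profile.

```python
# Notional sketch of a NIST Cybersecurity Framework profile for a
# transportation agency: each core function carries a current and a
# target implementation tier (1 = Partial ... 4 = Adaptive).

TIERS = {1: "Partial", 2: "Risk Informed", 3: "Repeatable", 4: "Adaptive"}

profile = {                      # function: (current tier, target tier) - illustrative
    "Identify": (2, 3),
    "Protect":  (2, 4),
    "Detect":   (1, 3),
    "Respond":  (2, 3),
    "Recover":  (1, 3),
}

def gap_report(profile):
    """List functions whose current tier falls short of the target, largest gap first."""
    gaps = [(f, cur, tgt) for f, (cur, tgt) in profile.items() if cur < tgt]
    return sorted(gaps, key=lambda g: g[2] - g[1], reverse=True)

for func, cur, tgt in gap_report(profile):
    print(f"{func}: {TIERS[cur]} -> {TIERS[tgt]}")
```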
50.5 Government Roles

Government research, regulation, and enforcement of CVs and ADS-equipped vehicles can help ensure their safety and security and the successful and efficient implementation of connected and automated mobility. Because new concerns are presented by new and emerging transportation technologies, laws and regulations of nations around the world are being reviewed and altered.

50.5.1 United Nations

The international UN treaty, the Convention on Road Traffic signed in Vienna in 1968, standardized traffic rules and enabled cross-border movement of vehicles; it has 36 signatories and requires a driver to be in full control of the vehicle. To support the safe deployment of automated vehicles in road traffic, the United Nations Economic Commission for Europe Global Forum on Road Traffic Safety in 2018 adopted a legal resolution to this effect, endorsed by 28 EU Ministers of Transport [86]. With respect to security, the UN recognizes the importance of securing CVs and has promulgated a requirement for automobile manufacturers in 53 nations, including the EU, Japan, and South Korea but not the USA, to secure CVs against cyberattacks by January 2021, and expects governments to take appropriate action to ensure this protection [87].
50.5.2 United States: Federal and State Roles

USDOT's three overriding goals for achieving the benefits of ADS, and its ADS vision of prioritizing safety while preparing for the future of transportation, are as follows [88]:
1. Promote Collaboration and Transparency – USDOT will promote access to clear and reliable information for its partners and stakeholders, including the public, regarding the capabilities and limitations of ADS.
2. Modernize the Regulatory Environment – USDOT will modernize regulations to remove unintended and unnecessary
barriers to innovative vehicle designs, features, and operational models, and will develop safety-focused frameworks and tools to assess the safe performance of ADS technologies.
3. Prepare the Transportation System – USDOT will conduct, in partnership with stakeholders, the foundational research and demonstration activities needed to safely evaluate and integrate ADS, while working to improve the safety, efficiency, and accessibility of the transportation system.

USDOT and its modal agencies, the National Highway Traffic Safety Administration (NHTSA), the Federal Motor Carrier Safety Administration (FMCSA), the Federal Transit Administration (FTA), and the Federal Highway Administration (FHWA), support ADS and AV development and industry through research, regulatory, and enforcement activities. For instance, NHTSA's roles are to set and enforce motor vehicle safety standards, manage recalls and remedies, and educate the public. However, NHTSA and FMCSA may grant waivers and exemptions for research, testing, and demonstration projects, and for deployment of vehicles not complying with, respectively, the Federal Motor Vehicle Safety Standards or the Federal Motor Carrier Safety Regulations, if an equivalent level of safety is shown. States are responsible for licensing drivers and registering vehicles, making and enforcing traffic laws, regulating insurance and liability, and executing safety inspections [89, 90]. NHTSA has produced guidance on basic automotive cybersecurity: National Highway Traffic Safety Administration (2016, October), Cybersecurity Best Practices for Modern Vehicles (Report No. DOT HS 812 333). NHTSA has also been developing multilayered cybersecurity approaches to address vehicle vulnerabilities to cyberattack. These approaches include a risk-based prioritization method for vehicle control systems, a detection and response strategy, designing-in resiliency, and industry-wide information sharing methods. Additional information on USDOT's ADS and AV advancement activities can be accessed at http://www.Transportation.gov/AV. The USDOT's ITS Joint Program Office (JPO) promotes ITS development and deployment to enhance the safety and mobility of people and goods. The ITS JPO Strategic Plan 2015–2019 highlighted Connected Vehicle implementation and Advancing Automation as strategic priorities and focused ITS activities on CV adoption and predeployment actions such as pilot testing and the need for more personalized, Mobility on Demand–type mobility solutions [91]. The ITS JPO's vision is "Transform the Way Society Moves," and its strategic program areas include automation, cybersecurity for ITS, data access and exchanges, emerging and enabling technologies, the complete trip – ITS4US, and accelerating ITS deployment. The four programs under Accelerating ITS Deployment are ITS Architecture and Standards, ITS Evaluation, ITS Professional Capacity Building, and ITS
Communications. With respect to its automation program, the ITS JPO's role is "to facilitate multimodal automation research and collaboration in safety, infrastructure interoperability, and policy analysis" [92]. In 2017 and 2018, ITS JPO considered the impacts of automation and produced a framework for estimating the potential safety, mobility, energy, and environmental benefits and disbenefits. In addition, target crash population work for L2 to L5 has allowed the calculation of potential safety benefits through the linking of automation applications to potential reductions by crash type. Current modeling and testing work includes the connected/automated vehicle analysis modeling and simulation project and the FHWA Cooperative Adaptive Cruise Control testing.

Ensuring American Leadership in Automated Vehicle Technologies: Automated Vehicles 4.0 presents US Government principles in three core interests – the protection of users and communities, efficient markets, and coordinated efforts; subareas for each are listed below:

I. Protect users and communities
1. Prioritize safety
2. Emphasize security and cybersecurity
"The U.S. Government will support the design and implementation of secure AV technologies, the systems on which they rely, and the functions that they support to adequately safeguard against the threats to security and public safety posed by criminal or other malicious use of AVs and related services. The U.S. Government will work with developers, manufacturers, integrators, and service providers of AVs and AV services to ensure the successful prevention, mitigation, and investigation of crimes and security threats targeting or exploiting AVs, while safeguarding privacy, civil rights, and civil liberties. These efforts include the development and promotion of physical and cybersecurity standards and best practices across all data mediums and domains of the transportation system to deter, detect, protect, respond, and safely recover from known and evolving risks."
3. Ensure privacy and data security
"The U.S. Government will use a holistic, risk-based approach to protect the security of data and the public's privacy as AV technologies are designed and integrated. This will include protecting driver and passenger data as well as the data of passive third-parties – such as pedestrians about whom AVs may collect data – from privacy risks such as unauthorized access, collection, use, or sharing."
4. Enhance mobility and accessibility
II. Promote efficient markets
5. Remain technology neutral
6. Protect American innovation and creativity
7. Modernize regulations
III. Facilitate coordinated efforts
8. Promote consistent standards and policies
9. Ensure a consistent federal approach
10. Improve transportation system-level effects [61]

USDOT's CARMA℠ products provide tools and software for research and testing of cooperative driving. Research tracks include freeway and arterial traffic research, reliability research, and commercial motor vehicle and port operations freight research. Evaluation and testing methods include scaled-down vehicles, safety testing, and analytics. Simulation tools are currently under development.
50.5.3 European Union (EU)

In 2018, the European Commission added a connected and automated driving policy to its third mobility package, "Europe on the Move – Sustainable Mobility for Europe: Safe, Connected and Clean." The policy includes the following goals:
• Passenger cars and trucks – highways: by 2020, some Level 3 and 4 capabilities on motorways, such as Highway Chauffeur and truck platooning convoys. Cities: by 2020, certain low-speed scenarios in cities (e.g., valet parking)
• Public transport – by 2020, low-speed Level 4 situations such as shuttles for dedicated trips

According to the European Commission, Europe has strengths in Advanced Driving Assistance Systems, traffic management, and cooperative ITS, which will support future ADS functionalities; it has garnered a high number of patents and has six out of the top ten companies with patents in automated driving technologies. However, the USA, China, Korea, and Japan are perceived to be particularly competitive in connected and automated driving through their emphasis on AI and digital infrastructures [86]. Legislative and policy initiatives on automated mobility adopted by the European Commission in its Third Mobility Package include the EU's strategy for automated mobility. The strategy proposes a " . . . series of measures to be implemented in the next years to: 1) develop the necessary technologies and infrastructure in Europe, 2) ensure that automated mobility is safe and 3) to cope with societal issues such as jobs, skills and ethics" [93]. The EU's approach to CAV regulation includes a new legal framework effective from July 2022, large-scale testing support, auditing of safety management systems and assessment of ADS design and validation, required physical track and on-road testing, and post-sales safety monitoring. The monitoring consists not only of safety confirmation but also of scenario generation and a feedback system based on operational experience. Changes to the EU Regulatory Framework and legal framework have been made as follows:
EU Regulatory Framework updates address:
• Harmonized data exchange format for vehicle platooning
• Automated systems controlling the vehicle
• Real-time situational awareness systems for vehicles
• Road infrastructure safety management rules
• Accident data recorder for AVs
• Driver readiness monitoring systems

Legal framework changes include:
• Mandatory black boxes for AVs
• New safety requirements for roads for AVs
• New safety measures for driver assistance systems and AVs
• New vehicle approval criteria
• Guidelines on product liability [93]

The 2019 Strategic Transport Research and Innovation Agenda (STRIA) roadmap on connected and automated mobility establishes recommended actions and the respective roles and responsibilities of the European Union, the Member States, and the industry. The STRIA built on a prior roadmap which focused on connected and automated driving by 2030 and on automated on-demand shuttle, truck platooning, valet parking, delivery robot, and traffic jam chauffeur use cases. The following are the major thematic areas and associated initiatives identified in the 2019 STRIA report:
• In-vehicle enablers
• Vehicle validation
• Large-scale demonstration pilots to enable deployment
• Shared, connected, and automated mobility services for people and goods
• Socioeconomic impacts/user/public acceptance
• Human factors
• Physical and digital infrastructure and secure connectivity
• Big data, artificial intelligence, and their applications [86]
Not surprisingly, secure connectivity is included among the major thematic areas. A central element of the European Commission's cybersecurity policy is also a trust model for the data communications of cooperative, connected, and automated mobility (CCAM) to secure message transmissions [4].
50.6 Connected Vehicles (CVs)
Connected vehicle (CV) systems use wireless communications to continuously receive and transmit vehicle position, direction, and speed via onboard units. In addition to neighboring vehicles, information may also be exchanged with roadside units, traffic management centers, mobile devices, and other connected devices [23, 93].
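The following sketch is a highly simplified stand-in for that kind of broadcast: a small data record with position, heading, and speed handed to a send callback. The real message is the SAE J2735 Basic Safety Message, binary-encoded and signed under IEEE 1609.2; the field names, JSON encoding, and nominal ten-per-second rate here are illustrative only.

```python
import json, time
from dataclasses import dataclass, asdict

# Highly simplified stand-in for the status message a connected vehicle
# broadcasts. The real format is the SAE J2735 Basic Safety Message,
# binary-encoded and signed per IEEE 1609.2; fields here are illustrative.

@dataclass
class VehicleStatusMessage:
    temp_id: str        # rotating pseudonym, not a persistent vehicle ID
    latitude: float
    longitude: float
    heading_deg: float
    speed_mps: float
    timestamp: float

def broadcast_once(send, vehicle_state):
    """Package the current state and hand it to a radio/send callback."""
    msg = VehicleStatusMessage(timestamp=time.time(), **vehicle_state)
    send(json.dumps(asdict(msg)))

state = {"temp_id": "a91f", "latitude": 40.4237, "longitude": -86.9212,
         "heading_deg": 92.0, "speed_mps": 13.4}
broadcast_once(print, state)   # in practice this runs on the order of 10 times per second
```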
CVs enable other vehicles in the vicinity to "see" them and receive messages warning of obstacles in their area no matter the weather conditions or geography of the roadway and, thus, prevent multiple car pileups and crashes involving pedestrians and cyclists. This feature is especially useful in a disaster situation in which some roadways may be impassable. Other application examples are summarized below:
• Vehicle platooning – vehicles create a convoy, traveling close together and taking coordinated driving actions. A lead vehicle determines speed and makes other driving decisions while following vehicles react to the actions of the lead vehicle (e.g., accelerating and decelerating).
• Cooperative driving – vehicles communicate with each other, alerting other vehicles about emergency actions and emergency vehicles or intention to take an action such as changing lanes.
• Collision avoidance – road infrastructure may coordinate passage of vehicles at intersections; also, vehicles communicate with other vehicles about taking emergency actions such as emergency braking and about dangerous conditions or obstacles including pedestrians and cyclists.
• Emergency vehicles – emergency vehicles receive optimal, real-time route information from road infrastructure/TMCs and safe, priority passage through signalized intersections and congested roadways to access victims and incident sites safely and quickly.
• Hazard ahead warning – road infrastructure or vehicle communicates to other vehicles about congestion, work zones, emergency vehicles, incidents, obstacles obscured by weather conditions, large trucks, or other blind spots (e.g., curves) [94].
The types of data exchanges include V2V, V2I, and V2X. While one-to-one or unicast messages are possible, the current system uses the broadcast communications method, in which messages are broadcast to all users in the area, since messages are usually not user specific. This method employs the wireless access in vehicular environments (WAVE) short message, which is governed by the IEEE 1609.x suite of standards and the dedicated short-range communications (DSRC) 802.11 suite of standards. Interoperable, timely, and secure communications are core aspects of successful CV implementation.
• Interoperability requires that the communications devices be compatible, so that messages sent from any device can be understood by any other device. Also, all users must use the same communication technology.
• Another requirement is the speed at which communications occur. While some applications such as congestion notifications can tolerate greater delays, safety applications require minimal delay or latency. Delays or latency can cause urgent warnings to be delivered too late for the driver, other road users, or other vehicles to take appropriate and immediate action. In addition, high availability ensures that the system will be able to handle messages without interference from other radio systems even during periods of high traffic congestion, thus ensuring timely communications.
• Finally, secure communications are essential in meeting high-level performance requirements – message accuracy and validity, and privacy and system recovery [95].

50.6.1 Connected Vehicle Security: Secure Communications

Driver safety and other safety issues may be caused by wrong or unreliable information sent to safety applications; these applications relay safety information to drivers or initiate emergency action to prevent or mitigate safety issues. Governments expecting widespread CV deployment in their jurisdictions are focusing on securing the communications systems. For instance, in the USA, a National SCMS is being planned; similar planning is occurring in Europe as well.

USA – Security Credential Management System (SCMS)
The Security Credential Management System (SCMS) ecosystem provides security for V2X (vehicle-to-everything) communications through a public key infrastructure (PKI) method balancing security, privacy, and efficiency (see Fig. 50.7) [96]. The key benefits of the SCMS include the following:
• Ensures integrity – so users can trust that the message was not modified between sender and receiver
• Ensures authenticity – so users can trust that the message originates from a trustworthy and legitimate source
• Ensures privacy – so users can trust that the message appropriately protects their privacy
• Helps achieve interoperability – so different vehicle makes and models will be able to talk to each other and exchange trusted data without pre-existing agreements or altering vehicle designs [97]

The SCMS certifies V2X devices as trusted players; the devices then receive digital security certificates, which are attached to outgoing messages. The certificates enable system participants receiving messages to authenticate and validate them. Privacy of vehicle owners is another key security issue, addressed by ensuring that the certificates carry no personal or equipment-identifying information; also, certificates are required to change periodically.
Fig. 50.7 The Security Credential Management System (SCMS) ecosystem: the SCMS as trust anchor; digital signatures to guarantee integrity; frequently changed certificates to prevent linking BSMs to one another for tracking purposes; and an option to verify-on-demand (only verify messages that will result in driver warnings). (Source: U.S. Department of Transportation (USDOT): Security Credential Management System (SCMS) Technical Primer, USDOT, Nov. 2019, p. 1)
Further, the SCMS also protects users' privacy from attacks by SCMS outsiders and SCMS insiders, and from fake warnings generated by authenticated messages. The SCMS identifies and removes misbehaving devices to ensure the protection of messages. To prevent cascading failures, SCMS operations are compartmentalized. In addition, SCMS ecosystem participants include CV equipment and applications makers, certifying entities for the CV equipment and applications, vendors and maintenance/servicing entities, end users including drivers, and state and local agencies. Hence, a National SCMS is required to sort out the various ownership, operational, and governance issues and to select the appropriate model for national implementation. The National SCMS Deployment Support project will "develop a National SCMS Deployment Strategy to help USDOT and industry establish a viable National SCMS ecosystem in support of V2X communications" [96]. These strategies will include:
• Establishment of an SCMS Governance Board (or similar oversight entity), including definitions of functions, roles, and responsibilities
• Establishment of an overall SCMS Manager (or similar system management entity), along with definitions of functions, roles, and responsibilities for managing ongoing operations and executing any functions deemed to be "inherently central"
• Establishment of management entities that will be part of the larger SCMS delivery system (and whose authority is directly dependent on and linked to the SCMS Manager)
• High-level policies and procedures that define and guide interactions among the various entities that make up the SCMS
• Roles and responsibilities of other entities that are not directly part of the SCMS but who may play a supportive, authorization, administrative, or other indirect role (such as the federal government, state governments, industry associations, etc.)
• Business and financial options for initial deployment and sustainable operations. [98]
The 2018 SCMS Baseline Summary Report discusses governance and ownership models and evaluation criteria in detail. The report also addresses current public interest objectives and evaluation criteria:
• Public interest objectives include: secure communications, privacy, availability (interoperability, flexibility, and redundancy), stakeholder representation, affordability, and performance.
• Evaluation criteria include: ownership, funding, policy creation and approval, oversight and auditing, trust anchor management, legislation and regulation, competition, and overall risk.
• Additional areas of interest which will affect PKI Certificate Policy include: trust anchor and certificate authority management, policy compliance and enforcement, and SCMS communications options [99].

Possible governance, ownership, and deployment models for a National SCMS range from completely public, with a government office created to fill the SCMS manager position, to completely private, with an industry-led SCMS manager. The National SCMS manager will develop policies and procedures for the SCMS (i.e., PKI and trust anchor management) and manage and administer communications for SCMS end users and supporting services. The models may be hybrids, with some functions owned and operated by the government and others by industry. The models may also evolve over time, depending on system needs.
Fig. 50.8 Security Credential Management System (SCMS) structure, showing the SCMS manager (policy and technical functions, local certificate chain file (LCCF), and local policy files (LPF)), root CA, intermediate CA, enrollment CA, pseudonym CA, registration authority, misbehavior authority (with internal blacklist manager, global detection, and CRL generator), linkage authorities 1 and 2, certificate services, CRL store and CRL broadcast, location obscurer proxy, and end entities (OBUs and RSU). (Source: U.S. Department of Transportation (USDOT): Security Credential Management System (SCMS) Technical Primer, Nov. 2019, p. 4)
All models should have a robust certificate policy and enforcement procedure and provide for system and data security. These models, however, will have differing outcomes relative to public interest objectives due to their varying levels of competition and profit motive. To ensure privacy of vehicle and operator information, the involvement of the government will likely be necessary, as the government, rather than the private sector, places more emphasis on privacy. On the other hand, a higher level of private sector involvement will likely increase the availability (e.g., redundancy and flexibility) of valid certificates to end users. It is expected that government-owned and operated models will have good stakeholder representation, oversight, and transparency, but will result in suboptimal organizational performance, reduced agility to respond to changing situations, decreased efficiency, and higher costs. Even with a completely private model, some level of government oversight, as well as new legislation and regulations, may be required. Also, policies such as the PKI policy will be influenced by PKI design and root structure [100].
SCMS Process
The SCMS process – enrollment, operations, and misbehavior detection – is described in the 2019 SCMS Technical Primer. Figure 50.8 depicts the structure of the SCMS. To connect a device to the SCMS, the device must enroll in a secure environment. The device produces a public or private key and requests a public key from the SCMS. If approved, the SCMS will provide the required file to enable the device to create a public-private key pair. The device can then request operational certificates from an SCMS registration authority through HTTPS protocols or via a roadside unit. A device is provided with many short-term pseudonym certificates to use with safety messages. These certificates, valid for a week, rotate periodically to ensure privacy. Public vehicles such as police vehicles and ambulances use identification certificates for V2I communications for specific public-use purposes such as emergency signal preemption [96]. Misbehavior detection systems being used in the CV Pilot programs employ algorithms to analyze safety messages to detect misbehavior. If misbehavior is detected, the device will send a report
to the misbehavior authority, which will add the device to the certificate revocation list. When receiving certificates, a device will download the certificate revocation lists from roadside units and use an automated misbehavior detection system to flag messages from misbehaving devices [96]. Message accuracy and message validity are two different concepts. Message accuracy impacts safety, as incorrect information about another vehicle's location, direction, and/or speed can have dire consequences if the vehicle receiving the message acts on the information. To ensure message accuracy, the sensors collecting and providing this data must supply accurate information. Message validity is predicated on the following:
• The sender has the authority and permission to send the message.
• The sender is allowed to operate in the location from which the message is being sent.
• The message has not been altered from the original message or recorded in another location/time.

To assure validity, messages are digitally signed and encrypted using a cryptographic key. Attached to each message are a signature, containing a digest (a portion of the message) plus time and location information, and a certificate, containing a public key to decrypt the message and a digital signature for the certificate. The message recipient compares the digest of the received message and the decrypted digest in the signature to verify that they are the same. The receiver also examines the signature and certificate details to further verify the message's validity. Certificates are valid for specific applications and regions [96]. Connected vehicle systems use data from individual vehicles that may be used to track the vehicles, determine the location of a particular vehicle, and flag vehicles in violation of traffic regulations or vehicles with outstanding fines, etc. The consensus is to mitigate these issues by eliminating identifying data. Ensuring anonymity of vehicles broadcasting their data, and message validity, in the context of millions of vehicles presents a challenge to connected vehicle security design. Using pseudonym certificates for short periods of time and cycling through certificates is one method to avoid tracking based on certificates [96]. Public key cryptography uses asymmetric key encryption to produce mathematically linked key pairs. Messages are encrypted with the receiver's public key and decrypted with the receiver's private key; only one key in a key pair can decrypt data that has been encrypted with the other key. The PKI authenticates the identity of other entities (i.e., message senders), determines the correctness of public keys, ensures their integrity, and verifies the ownership of key pairs prior to certificate issuance. The PKI will be complex and
massive – because each device has a store of a thousand certificates, the PKI will need to handle 5 trillion certificates annually. Various entities are involved in a chain of trust, with the last link, known as the root of trust or simply the root, being trusted by both the message sender and receiver. This chain of trust is the PKI. If any link in the chain is compromised, the entire PKI may be at risk [96].
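The sign-then-verify step behind the digest check described above can be sketched with generic ECDSA over P-256 using the third-party Python cryptography package (assumed to be installed). Real V2X messages follow IEEE 1609.2 certificate formats and encoding rules, which this sketch does not attempt to reproduce.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature

# Generic sign/verify step behind the digest check described above,
# using ECDSA over the P-256 curve. The receiver's verify() call
# recomputes the message digest and checks it against the signature.

private_key = ec.generate_private_key(ec.SECP256R1())    # sender's key pair
public_key = private_key.public_key()                     # distributed via a certificate

message = b'{"lat": 40.4237, "lon": -86.9212, "speed_mps": 13.4}'
signature = private_key.sign(message, ec.ECDSA(hashes.SHA256()))   # hash-then-sign

try:
    public_key.verify(signature, message, ec.ECDSA(hashes.SHA256()))
    print("message accepted: signature valid")
except InvalidSignature:
    print("message rejected")

# A tampered message fails the digest comparison and is rejected:
try:
    public_key.verify(signature, message + b"x", ec.ECDSA(hashes.SHA256()))
except InvalidSignature:
    print("tampered message rejected")
```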
C-ITS Credential Management System (CCMS)
A National SCMS is being planned in the USA, as in Europe and other regions around the world. However, if there are multiple systems, or in cross-border or cross-jurisdictional situations, which are more likely in Europe, interoperability is not assured. Therefore, the HTG6 team focused on inter-CCMS trust, comprising organizational trust between entities and the technical manifestation of that trust in digital interactions and shared policies. The HTG6 team, comprised of diverse stakeholders, sought to create a CV security policy framework, identify harmonization areas that would benefit the public, and identify the technical and policy elements for communications security system trust models. The HTG6 team identified the following as unique constraints for a security solution:
• The system must scale to meet the needs of millions of users;
• To support privacy at the highest levels, the system must not need to know the identity of participating parties;
• The system must be fast (and thus the communications exchange not burdened by security overhead) to support crash-avoidance applications, in particular;
• The system must provide security to protect critical assets as well as access to critical assets by trusted users;
• The system must work with a highly mobile environment; and
• The system must support trust among known cross-jurisdictional and operational partners.
PKI is the security solution for CV security. [101]
The HTG6-4: C-ITS Credential Management System Functional Analysis and Recommendations for Harmonization report presents the PKI system components that pertain to CVs and addresses the trust chain elements enabling CCMS entities to trust each other. The CCMS decomposition analysis performed by the HTG6 team assumed that the CCMS plays the roles of root trust authority and security credentials provider. HTG6 produced four trust models; identified interfaces at which trust may be established; and identified useful organizational policy areas for harmonization, the role of compliance or certification of applications and devices, and areas lacking standards or specifications. CCMS recommendations of the HTG6 team included the following:
• Inter-CCMS interface standards are a priority need
• Minimize the number of CCMS
• Clarify the relationship between certification and credentials distribution
• Agree upon policy frameworks
• Study inter-CCMS trust scenarios
• Establish requirements on end entities and proper disposal [102]

The HTG6-3: Architecture Analysis report identified the primary elements of a PKI security system and compared them against the following security architectures: the US Security Credential Management System (SCMS) architecture design for secure, private, and authenticatable communications; the EC's PRESERVE PKI architecture design for secure, authenticatable communications; the EC's Joint Research Center PKI for secure and confidential commercial vehicle digital tachograph regulation; and TCA's Gatekeeper PKI for secure and confidential commercial vehicle regulation. A Generic PKI Architecture developed as a result of the analysis is included in the report (see Fig. 50.9) [101].
The first-tier areas for harmonization were security functions or procedures that engender mutual trust and interoperability. They include the following:
• Generation, distribution, and injection of cryptographic material
• Management, processes, and procedures of the registration authority and certificate authority data centers
• Vetting, certification, and audit of organizations
• Establishment of the trust chain

The second-tier areas for harmonization were also elements related to trust and security – authenticated data, protection of user identity, security of ITS devices, and secure time-stamping. More specifically, these areas include:
• Security (electronic and physical) of ITS stations
• Concealment of header/MAC information related to user identity
Fig. 50.9 Generic PKI Architecture, showing root CAs, intermediate CAs, a bootstrap/enrolment CA, other CAs (pseudonyms), certificate distribution, authentication servers and authentication card, a production plant and production server, and binding/pairing over wireless communication with the telematic device in the vehicle and with roadside equipment. (Source: HTG6-3: Architecture Analysis, Fig. 50.2, Page 18)
• Trusted data exchange (time stamps, data providers, and incoming data for users, and incoming data from users for providers)

Additional areas useful for harmonization were protection of user information and the accessibility of services. These areas include:
• Protection of user location data and privacy, and system support for tracking prevention
• Protection of all data involving the user's activities and application data
• User support in finding new services and accessing known services, and provider support in offering services

Areas where harmonization was not recommended due to regulatory conflicts or practices include [103]:
• Disaster recovery
• Management of the PKI, the registration authority, and certificate authority offices
• Key management – backup and recovery, destruction, and history management
• Protection of transient data on user activities
• Education and training except in aforementioned areas
• Local services
• Network communications
• Logging
50.7 Conclusions, Challenges, and Further Research Needs
There are still many aspects of AVs that require further research and testing. Cybersecurity for CVs and AVs, and the associated standards, are an essential element of transportation network protection because security breaches and errors can result in disastrous physical consequences. NHTSA is currently pursuing research on cybersecurity of firmware updates, anomaly-based intrusion detection and other applied research, cybersecurity for heavy vehicles, and a reference parser for V2V communication interfaces [104]. Also, as CVs are integrated, the NIST Cybersecurity Framework needs to be utilized by all public and private stakeholders, and emerging architectures for the soon-to-be-created Internet of Vehicles, as well as the cloud architecture required for the cloud-based systems storing and analyzing the enormous amounts of data generated by vehicles, roadside sensors, mobile applications, and other data sources, need to be researched and developed [42]. The maintenance and updating of ITS architectures to include updated security concepts, especially as they relate to new and emerging cyber-physical threats, and continued
harmonization efforts to ensure safe operations and interoperability of vehicles across jurisdictions and operational environments are essential as well. Research into the development of security automation systems to address and mitigate the unique vulnerabilities of cooperative, connected, and automated mobility solutions will be needed as well. Because massive amounts of real-time data are becoming available due to increased numbers of sensors, connectivity of vehicles, and mobile devices, big data analytics and AI will play an important role in helping agency (traffic management center/control center) staff automate security systems and analyze data and patterns to identify incidents, emergencies, and potential threats to their digital and physical infrastructures. Vehicle design and human factors for ADS vehicles are important areas that require research. Level 4 and 5 driverless vehicles can provide additional seating for passengers and offer different seating configurations. NHTSA is undertaking research on vehicle design and its effects on occupant protection, including disabled occupants, and on the influence of occupant behavior and enhanced sensor systems [61]. For Level 3 systems, research areas being pursued by NHTSA include vehicle design and driver behavioral responses to ADS requests, as well as human-machine interfaces to facilitate safe transfers between drivers and ADS and to ensure driver readiness to resume control. For Levels 1–3, additional research is needed into methods to ensure driver adherence to the vehicle maker's instructions (e.g., remaining alert and keeping hands on the steering wheel while adaptive cruise control is operating). Training, certification, and selection criteria for test vehicle safety operators require similar research. In addition, human remote operators will undertake assessment, assistance, and active control roles and, when the need arises, will be required to swap roles. Much of the time, the remote operator will assess the situation by managing error messages, monitoring tasks, monitoring multiple vehicles, and communicating with customers. Infrequently, the remote operator will assist vehicles in issue resolution, predefined mission/operations activation, understanding local and site rules/practices, etc., and rarely in active control or driving during critical and noncritical events. Issues related to remote operations include [105]:
• Remote operations center design and requirements
• Remote operator selection, training, and certification
• Remote operator task requirements
• Optimal information provision (content and delivery)
• Issues regarding alertness and psychological detachment
• Cybersecurity issues
Road infrastructure readiness. Road types in confined areas or with dedicated roads or lanes are more conducive
to safe operations of AVs. Also, the support capability of road segments differs considerably and benefits from a road readiness categorization method, such as the one produced by the EU research project INFRAMIX. INFRAMIX created the following infrastructure readiness levels, listed from high support to no support:
• Cooperative driving: "Based on the real-time information on vehicle movements, the infrastructure is able to guide AVs (groups of vehicles or single vehicles) in order to optimize the overall traffic flow." – Digital map with static road signs; electronic message signs, warnings, incidents, and weather; microscopic traffic situation; and guidance: speed, gap, and lane advice
• Cooperative perception: "Infrastructure is capable of perceiving microscopic traffic situations and providing this data to AVs in real-time." – Digital map with static road signs; electronic message signs, warnings, incidents, and weather; and microscopic traffic situation
• Dynamic digital information: "All dynamic and static infrastructure information is available in digital form and can be provided to AVs." – Digital map with static road signs and electronic message signs, warnings, incidents, and weather
• Static digital information/map support: "Digital map data is available with static road signs. Map data could be complemented by physical reference points (landmarks signs). Traffic lights, short term road works and VMS need to be recognized by AVs." – Digital map with static road signs
• Conventional infrastructure/no AV support: "Conventional infrastructure without digital information. AVs need to recognize road geometry and road signs" [93].
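To make the categorization concrete for readers who build planning or simulation tools, the following minimal sketch encodes the INFRAMIX readiness levels as an ordered enumeration. The encoding is purely illustrative and is not part of INFRAMIX or any cited standard; the enum and function names are hypothetical.

```python
from enum import IntEnum

class InframixLevel(IntEnum):
    """INFRAMIX infrastructure readiness levels, ordered from no AV support (0)
    to the highest support (4); an illustrative encoding, not an official schema."""
    CONVENTIONAL = 0            # no digital information
    STATIC_DIGITAL_MAP = 1      # digital map with static road signs
    DYNAMIC_DIGITAL_INFO = 2    # adds electronic signs, warnings, incidents, weather
    COOPERATIVE_PERCEPTION = 3  # adds real-time microscopic traffic situation
    COOPERATIVE_DRIVING = 4     # adds speed, gap, and lane guidance for AVs

def supports_guidance(level: InframixLevel) -> bool:
    """True only for the top level, where the infrastructure actively guides AVs."""
    return level >= InframixLevel.COOPERATIVE_DRIVING

# Example: classify a hypothetical corridor and query its capability.
corridor = InframixLevel.COOPERATIVE_PERCEPTION
print(corridor.name, supports_guidance(corridor))  # COOPERATIVE_PERCEPTION False
```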
Other changes needed to accommodate CVs and AVs, including changes to road infrastructure, parking facilities and garages, transit facilities and operations, the urban landscape, and traffic management center functions, procedures, and design, should also be evaluated. Also, liability issues may become more pronounced as the proportion of AVs in mixed traffic increases. Finally, governments and transportation agencies need to prepare now for the eventual introduction of full Level 5-equipped ADS. Recognizing this, USDOT/NHTSA have identified the following safety areas in addition to cybersecurity: system safety, operational design domain, object and event detection and response, fallback, validation methods, HMI, crashworthiness, post-crash automated driving system behavior, data recording, consumer education and training, federal/state/local laws, and commercial vehicle inspection [15]. The results of the NCHRP 17-91 project, "Assessing the Impacts of Automated Driving Systems (ADS) on the Future of Transportation Safety," completed in November 2021, should assist transportation agencies in assessing and understanding the full implications of ADS on safety, including roadway design, operations, planning, and behavioral factors. These are by no means all the outstanding security- and safety-related topics that require further exploration and/or research on the path toward increased and full automation and connectivity. In fact, some issues may be unknown at this time. However, to ensure the fulfillment of the promise of vehicle and road automation, these challenging issues will need to be addressed. See additional details about autonomous driving in Ch. 19.
References 1. World Health Organization (WHO): Global Status Report on Road Safety 2018. December 2018. https://www.who.int/viol ence_injury_prevention/road_safety_status/2018/en/externalicon 2. U.S. Department of Transportation (USDOT): National Highway Traffic Safety Administration, February 2015. https://crashstats.nhtsa.dot.gov/Api/Public/ViewPublication/8121 15; https://www.nhtsa.gov/technology-innovation/automated-vehi cles-safety 3. U.S. Department of Transportation (USDOT): National Highway Traffic Safety Administration, February 2015. https://crashstats.nhtsa.dot.gov/Api/Public/ViewPublication/8121 15 U.S. Department of Transportation (USDOT): National Highway Traffic Safety Administration, Automated Vehicles Comprehensive Plan (2021) 4. The CCAM Partnership: https://www.ccam.eu/ 5. U.S. Department of Transportation (USDOT): National Highway Traffic Safety Administration. The Economic and Societal Impact of Motor Vehicle Crashes (2010, Revised 2015) 6. Texas A&M Transportation Institute (TTI): 2021 Urban Mobility Report (2021) 7. NAVYA: https://navya.tech/en 8. Bel Geddes, N.: Magic Motorways, p. 48. Random House, New York (1940) 9. Weber, M.: Where to? A history of autonomous vehicles, 8 May 2014. https://computerhistory.org/blog/where-to-a-history-of -autonomous-vehicles/?key=where-to-a-history-of-autonomous-v ehicles; A Brief History of Autonomous Vehicles, content sponsored by Ford. https://www.wired.com/brandlab/2016/03/abrief-history-of-autonomous-vehicle-technology 10. Bishop, R., et al.: Commemorating the 20th Anniversary of the National Automated Highway Systems Consortium Demo ‘97 in San Diego 11. National Automated Highway System Consortium: Automated Highway System (AHS) System Objectives and Characteristics, pp. 20–21 (1995) 12. National Automated Highway System Consortium: Automated Highway System – Solving Transportation Problems. San Diego (7–10 Aug 1997) 13. U.S. Department of Transportation (USDOT): National Highway Traffic Safety Administration. https://www.nhtsa.gov/ technology-innovation/automated-vehicles-safety; McGrath, M.E.: Autonomous Vehicles: Opportunities, Strategies and Disruptions: Updated and Expanded Second Edition, pp. 67–69. (Kindle Edition, 2020)
14. U.S. Department of Transportation (USDOT): Intelligent Vehicle Initiative Final Report: Saving Lives Through Advanced Vehicle Safety Technology Rep. FHWA-JPO-05-057; NTISPB2006101571 (2005) 15. U.S. Department of Transportation (USDOT): Preparing for the Future of Transportation: Automated Vehicles 3.0 (AV 3.0) (2018) 16. U.S. Department of Transportation (USDOT): National Highway Traffic Safety Administration Traffic Safety and the 5.9 GHz Spectrum, Speech by Heidi King. https://www.nhtsa.gov/ speeches-presentations/traffic-safety-and-59-ghz-spectrum 17. CA Connected Vehicle Test Bed: http://www.caconnectedvehiclet estbed.org/index.php/ 18. Ann Arbor Connected Vehicle Test Environment (AACVTE): https: / / aacvte.umtri.umich.edu/about-ann-arbor-connected-vehic le-test-environment-aacvte/ 19. McGrath, M.E.: Autonomous Vehicles: Opportunities, Strategies, and Disruptions, 2nd ed. Independently published (2021) 20. Kala, R.: On-Road Intelligent Vehicles: Motion Planning for Intelligent Transportation Systems. Elsevier, Oxford (2016) 21. U.S. Department of Transportation (USDOT)/ITS JPO: Connected Vehicle Benefits Fact Sheet. https://www.its.dot.gov/ factsheets/pdf/ConnectedVehicleBenefits.pdf 22. Eliot, L.: AI Self-Driving Cars Divulgement, Practical Advances in Artificial Intelligence and Machine Learning (Kindle Edition, 2020); Eliot, L.: AI Self-Driving Cars Equanimity: Practical Advances in Artificial Intelligence and Machine Learning (Kindle Edition, 2020) 23. McGrath, M.E.: Autonomous Vehicles: Opportunities, Strategies and Disruptions: Updated and Expanded Second Edition (Kindle Edition, 2020) 24. Tummala, R.: IEEE-CPMT Workshop – Autonomous Cars: Radar, Lidar, Stereo Cameras 25. Hausler, S., Milford, M.: P1–021: Map Creation, Monitoring and Maintenance for Automated Driving – Literature Review (2021) 26. Mobileye: https://www.mobileye.com/our-technology/rem 27. TN-ITS: https://tn-its.eu/ 28. Navigation Data Standard: https://nds-association.org/sensoris/ 29. Ekman, M.: Learning Deep Learning: Theory and Practice of Neural Networks, Computer Vision, Natural Language Processing, and Transformers Using TensorFlow (Kindle Edition, 2021) 30. McGrath, M.E.: Autonomous Vehicles: Opportunities, Strategies and Disruptions: Updated and Expanded Second Edition, pp. 304– 305. (Kindle Edition, 2020) 31. NHTSA Action Number: PE21020 (13 Aug 2021). https:// www.nhtsa.gov/recalls?nhtsaId=PE21020 32. Dimitrakopoulos, G., Uden, L., Varlamis, I.: The Future of Intelligent Transport Systems. Elsevier, Oxford (2020).; Deka, L., Chowdhury, M.: Transportation Cyber-Physical Systems. Elsevier (2018) 33. Lu, N., Cheng, N., Zhang, N., Shen, X., Mark, J.W.: Connected vehicles: solutions and challenges. IEEE Internet Things J. 1(4), 289–299 (2014) 34. Michel, P., Karbowski, D., Rousseau, A.: Impact of connectivity and automation on vehicle energy use. In: SAE Technical Papers (2016). https://doi.org/10.4271/2016-01-0152 35. Ha, Y.J., Chen, S., Du, R., Dong, J., Li, Y., Sabi, S.: Vehicle connectivity and automation: a sibling relationship. Front. Built Environ. 6, 199–204 (2020) 36. Chen, S., Leng, Y., Labi, S.: A deep learning algorithm for simulating autonomous driving considering prior knowledge and temporal information. Comput. Aided Civ. Inf. Eng. 35(4), 305– 321 (2020) 37. Anderson, J.M., Kalra, N., Stanley, K.D., Sorensen, P., Samaras, C.: Autonomous Vehicle Technology. A Guide for Policymakers, p. 83. RAND Corporation, Santa Monica (2016)
1111 38. Anderson, J.M., Kalra, N., Stanley, K.D., Sorensen, P., Samaras, C.: Autonomous Vehicle Technology. A Guide for Policymakers, p. 91. RAND Corporation, Santa Monica (2016) 39. Anderson, J.M., Kalra, N., Stanley, K.D., Sorensen, P., Samaras, C.: Autonomous Vehicle Technology. A Guide for Policymakers, p. 92. RAND Corporation, Santa Monica (2016) 40. Anderson, J.M., Kalra, N., Stanley, K.D., Sorensen, P., Samaras, C.: Autonomous Vehicle Technology. A Guide for Policymakers, p. 161. RAND Corporation, Santa Monica (2016) 41. National Academies of Sciences: Engineering, and Medicine: Protection of Transportation Infrastructure from Cyber Attacks: A Primer. The National Academies Press, Washington, DC (2016) 42. Deka, L., Chowdhury, M.: Transportation Cyber-Physical Systems. Elsevier, New York (2018) 43. Trend Micro Research, Driving Security into Connected Cars: Threat Model and Recommendations 44. The cybersecurity blind spots of connected vehicles. https:// connectedautomateddriving.eu/mediaroom/the-cybersecurity-blin d-spots-of-connected-vehicles/ (7.9.2020) 45. U.S. Department of Transportation (USDOT)/ITSJPO: Development Activities, International Standards Harmonization. https://www.standards.its.dot.gov/DevelopmentActivities/IntlHar monization 46. United Nations Economic Commission for Europe: UNECE WP.29 R.155 (cybersecurity MS); United Nations Economic Commission for Europe UNECE WP.29 R.156 (software update MS) 47. SAE Cybersecurity Guidebook for Cyber-Physical Vehicle Systems J3061_201601 (2016) 48. ISO 21434: Road Vehicles – Cybersecurity Engineering (2021) 49. American Association of State Highway and Transportation Officials (AASHTO), Institute of Transportation Engineers (ITE), and National Electrical Manufacturers Association (NEMA) NTCIP 9014 v01.20 National Transportation Communications for ITS Protocol Infrastructure Standards Security Assessment (ISSA) (August, 2021) 50. ISO 26262 Road Vehicles – Functional Safety (2011/2018) 51. UL 4600 Standard for Safety for the Evaluation of Autonomous Products (April, 2020) 52. Automotive SPICE® : www.automotivespice.com 53. ISO/PAS 21448:2019: Road Vehicles – Safety of the Intended Functionality (2019) 54. AUTOSAR (AUTomotive Open System Architecture): https:// www.autosar.org/about/ 55. SAE International J3016: Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles (J3016: June, 2018) 56. SAE International J3016: Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles. P. 6. (J3016: June, 2018) 57. U.S. Department of Transportation/Volpe National Transportation Systems Center: Benefits Estimation Model for Automated Vehicle Operations: Phase Two Final Report. FHWA-JPO-18-636 (2018) 58. ERTRAC Working Group: Connectivity and Automated Driving (08.03.2019), p. 14 59. U.S. Department of Transportation (USDOT): Intelligent Transportation Systems (ITS) Strategic Plan 2015–2019, p. 83 60. SAE International J3216: Taxonomy and Definitions for Terms Related to Cooperative Driving Automation for On-Road Motor Vehicles (J3216: May, 2020) 61. U.S. Department of Transportation (USDOT): Ensuring American Leadership in Automated Vehicle Technologies: Automated Vehicles 4.0 (AV 4.0) (2020)
1112 62. U.S. Department of Transportation/Volpe National Transportation Systems Center. May 2019. https://www.transportation.gov/ /sites/dot.gov/files/docs/research-and-technology/345996/cv-dep loyment-locationsusamapnodetails-2.pdf 63. U.S. Department of Transportation (USDOT): ITS JPO. Tampa, Florida Connected Vehicle Pilot Deployment Program. FHWAJPO-17-502 64. U.S. Department of Transportation (USDOT): FHWA. Advanced Transportation and Congestion Management Technologies Deployment. https://www.fhwa.dot.gov/fastact/factsheets/ advtranscongmgmtfs.cfm 65. AACVTE: https://aacvte.umtri.umich.edu/ 66. USDOT/NHTSA AV Test Initiative: https://www.nhtsa.gov/ automated-vehicles-safety/av-test-initiative-tracking-tool 67. UITP/Shared Personalised Automated Connected vEhicles (SPACE): https://space.uitp.org 68. U.S. States Are Allowing Automated Follower Truck Platooning While The Swedes May Lead In Europe; Richard Bishop, 2 May 2020. https://www.forbes.com/sites/richardbishop1/2020/05/02 /us-states-are-allowing-automated-follower-truck-platooning-whil e-the-swedes-may-lead-in-europe/#2a7fa23dd7e8 69. USDOT/TTI: Consumer Acceptance, Trust and Future Use of Self-Driving Vehicles. Zmud, J. Sener, I.N.: Texas A&M Transportation Institute (August 2019); McGrath, M.E.: Autonomous Vehicles: Opportunities, Strategies and Disruptions: Updated and Expanded Second Edition, pp. 304–305. (Kindle Edition, 2020) 70. Who is Using the FRAME Architecture? https://frame-online.eu/ frame-architecture/detailed-information/who-is-using-the-framearchitecture 71. FRAME Architecture: https://frame-next.eu/frame/ 72. Bossom, R.: ITS Architecture. PIARC. Systems and Standards. https://rno-its.piarc.org/en/systems-and-standards-itsarchitecture/what-its-architecture 73. Personal Communications of the Authors with Dr. Robert Jaffee 74. RAIM Architecture: https://raim-architektur.de/ 75. Directive 2010/40/EU: https://eur-lex.europa.eu/legal-content/ EN/ALL/?uri=CELEX%3A32010L0040 76. USDOT/FHWA: Architecture Reference for Cooperative and Intelligent Transportation. ARC-IT Version 8.3. www.arc-it.org 77. Concerns, USDOT/FHWA: Architecture Reference for Cooperative and Intelligent Transportation. ARC-IT Version 8.3. https:// local.iteris.com/arc-it/html/security/concerns.html 78. Physical Object Device Security Classes-USDOT/FHWA’s Architecture Reference for Cooperative and Intelligent Transportation, ARC-IT Version 8.3. https://local.iteris.com/arc-it/html/ security/deviceclasses.html 79. CCMS Communications – USDOT/FHWA’s Architecture Reference for Cooperative and Intelligent Transportation, ARC-IT Version 8.3. https://local.iteris.com/arc-it/html/comm/profile28.html 80. Communications – USDOT/FHWA’s Architecture Reference for Cooperative and Intelligent Transportation, ARC-IT Version 8.3. https://local.iteris.com/arc-it/html/viewpoints/ communications.html 81. Security – USDOT/FHWA: Architecture Reference for Cooperative and Intelligent Transportation. ARC-IT Version 8.3. https:// local.iteris.com/arc-it/html/security/security.html 82. Security – USDOT/FHWA: ARC-IT Version 8.3. https:// local.iteris.com/arc-it/html/security/securityareas.html 83. The Open Group Architecture Framework (TOGAF): http:// www.togaf.org/ 84. UITP/SPACE Use Cases: https://space.uitp.org/toolkit/how-tointegrate-avs-in-public-transport 85. National Institute of Standards and Technology, Cybersecurity Framework Version 1.1 (16 Apr 2018) 86. European Commission: Strategic Transport Research and Innovation Agenda (STRIA) Roadmap 2019
Y. J. Nakanishi and P. M. Auza 87. CARTRE and ARCADE Knowledgebase: http:// connectedautomateddriving.eu 88. U.S. Department of Transportation (USDOT): National Highway Traffic Safety Administration, Automated Vehicles Comprehensive Plan. P. ii (2021) 89. U.S. Department of Transportation (USDOT): National Highway Traffic Safety Administration, Automated Vehicles Comprehensive Plan. P. 12 (2021) 90. U.S. Department of Transportation (USDOT): Automated Driving Systems 2.0: A Vision for Safety (2017) 91. U.S. Department of Transportation (USDOT): ITS Strategic Plan 2015–2019 92. U.S. Department of Transportation (USDOT): ITS Strategic Plan 2020–2025 93. ERTRAC Working Group: Connectivity and Automated Driving (08.03.2019) 94. U.S. Department of Transportation (USDOT)/ITS JPO: Connected Vehicle Benefits Fact Sheet. https://www.its.dot.gov/ factsheets/pdf/ConnectedVehicleBenefits.pdf; Dimitrakopoulos, G., Uden, L., Varlamis, I.: The Future of Intelligent Transport Systems. Elsevier, Oxford (2020); Deka, L., Chowdhury, M.: Transportation Cyber-Physical Systems. Elsevier (2018) 95. U.S. Department of Transportation (USDOT): National Security Credential Management System (SCMS) Deployment Support: SCMS Baseline Summary Report, Final Report (12 Jan 2018) 96. U.S. Department of Transportation (USDOT): Security Credential Management System (SCMS) Technical Primer (2019) 97. U.S. Department of Transportation (USDOT): Security Credential Management System (SCMS). https://www.its.dot.gov/resources/ scms.htm 98. U.S. Department of Transportation (USDOT): National Security Credential Management System (SCMS) Deployment Support: SCMS Baseline Summary Report, Final Report. P. 5 (2018) 99. U.S. Department of Transportation (USDOT): National Security Credential Management System (SCMS) Deployment Support: SCMS Baseline Summary Report, Final Report (2018) 100. U.S. Department of Transportation (USDOT): National Security Credential Management System (SCMS) Deployment Support: SCMS Baseline Summary Report, Final Report. Table 5–6, p. 34 (2018) 101. EU-ITS Task Force Standards Harmonization Working Group, Harmonization Task Group 6 HTG6-3: Architecture Analysis 102. EU-ITS Task Force Standards Harmonization Working Group, Harmonization Task Group 6 HTG6-4: Functional Decomposition Analysis 103. EU-ITS Task Force Standards Harmonization Working Group, Harmonization Task Group 6 HTG6-3: Architecture Analysis, pp. 10–11 104. NHTSA: https://www.nhtsa.gov/technology-innovation/vehiclecybersecurity 105. Habibovic, A.: What is the role of a human remote operator? Day 3, BO 7 EUCAD 2021
Further Reading
Clark, J.R., Stanton, N.A., Revell, K.: Human-Automation Interaction Design: Developing a Vehicle Automation Assistant. CRC Press, Boca Raton (2021)
Guzzella, L., Sciaretta, A.: Vehicle Propulsion Systems: Introduction to Modeling and Optimization, 3rd edn. Springer, Berlin (2013)
Isermann, R.: Automotive Control: Modeling and Control of Vehicles. Springer, Berlin (2022)
Liu, S., Li, L., Tang, J.: Creating Autonomous Vehicle Systems, 2nd edn. Morgan & Claypool, San Rafael (2020)
Meyer, G., Beiker, S. (eds.): Road Vehicle Automation 8. Springer, Cham (2021)
Nakanishi, Y.: Vehicle and road automation, Chapter 66. In: Nof, S.Y. (ed.) Springer Handbook of Automation, pp. 1165–1180. Springer, Berlin (2009)
Sjafrie, H.: Introduction to Self-Driving Vehicle Technology. CRC, Boca Raton (2019)
Dr. Pierre M. Auza is a research assistant for Nakanishi Research and Consulting, LLC. In the past, Pierre was the secretary of the Transportation Research Board (TRB) Standing Committee on Critical Transportation Infrastructure Protection (ABR10, formerly ABE40). Pierre earned his Ph.D. in 2018 at the University of California at Irvine in civil engineering (Transportation Engineering) under Advisor Professor Jayakrishnan.
Dr. Yuko J. Nakanishi is the principal of Nakanishi Research and Consulting, LLC. She has been an active member of the Intelligent Transportation Society of NY and ITS America, and is a former chair of the Transportation Research Board (TRB) Standing Committee on Critical Transportation Infrastructure Protection (ABR10, formerly ABE40) and its Subcommittee on Training, Education, and Technology Transfer. She holds a Ph.D. in transportation planning and engineering from the Tandon School of Engineering of New York University, an MS in civil engineering from City College NY, and an MBA in management from Columbia Business School.
51 Aerospace Systems Automation
Steven J. Landry and William Bihlman
Contents
51.1 Manufacturing
51.1.1 Level of Automation
51.1.2 Current State of Automation in Aerospace
51.1.3 Part Fabrication
51.1.4 Subassemblies
51.1.5 Final Assembly
51.1.6 Non Air Transport Markets
51.1.7 Manufacturing Closing Thoughts
51.2 Automation in Aircraft Systems and Operations
51.2.1 Guidelines for Automation Development
51.3 Automation in Air Traffic Control Systems and Operations
51.3.1 Sequencing and Scheduling Automation
51.3.2 Conflict Detection and Resolution Automation
51.3.3 Future Automation Needs
51.4 Conclusion
References

Abstract
A review of aerospace systems automation is provided with an emphasis on examples and design principles. First, a discussion of aerospace systems manufacturing automation is provided, followed by a discussion of automation in the operation of aerospace systems, including aircraft and air traffic control. The chapter provides guidance to managers, engineers, and researchers tasked with studying or building aerospace systems.

Keywords
Automation · Air traffic control · Flight deck · Aircraft manufacturing · Airspace systems · Additive manufacturing

S. J. Landry, Pennsylvania State University, State College, PA, USA, e-mail: [email protected]
W. Bihlman, Aerolytics, LLC, Lafayette, IN, USA

51.1 Manufacturing

The manufacturing of an aircraft or jet engine can be distilled into three sequential stages: parts fabrication, subassembly, and final assembly. Generally speaking, most automation is found upstream in the fabrication of detailed parts/components. The notable exception is carbon fiber tape placement, part of the subassembly process for an aircraft's aerostructure. There are two other processes that lend themselves to automation – inspections and painting/coatings – the same is true for automobile production. These tasks do not require the same level of positional accuracy as drilling or rivet placement, for example.
There are at least three reasons why most automation occurs at the initial stages of production. First, the parts are relatively small and are easier to manipulate. Second, there are issues with tolerance buildup as parts become subassemblies, and eventually major assemblies. Although aviation involves high levels of precision, mating large subassemblies is notoriously challenging. And third, there is the issue of economics. For major assemblies, in many cases, the value of the labor content is diminutive compared to the total cost of the asset. This is especially true for engines. It is therefore not cost effective to replace human labor with automation under these circumstances.

51.1.1 Level of Automation

Industrial automation can be segmented into three tiers or levels. The low level includes processes such as part
pick-and-place robotics. The medium level contains automated welding, drilling, and deburring. The highest level involves assembling parts or inspection. As discussed, most automation in aerospace would be considered at the low to medium level, associated with the fabrication of individual parts/components (note these terms will be used interchangeably). Component size is an important characteristic. The other two essential considerations are part complexity and material. Some material systems lend themselves to large-scale automation – carbon fiber "prepreg" tape is the most popular instantiation in aerospace. And in the context of automotive, steel sheet is another example. These metal blanks can be easily stamp formed and rapidly joined via welding. In contrast, aluminum, historically the most predominant aeromaterial, is not amenable to such automation. Most high-strength aluminum alloys are very difficult to weld. Consequently, the aerospace industry uses fasteners to consolidate aluminum parts. Fasteners are also more forgiving of the thermal cycling that occurs during typical flight operations, where cruise-altitude temperatures often drop below −60 °F. Examples of robotic drilling and riveting are discussed later; nonetheless, this is inordinately more complex and much less common than spot welding of automobile chassis.
51.1.2 Current State of Automation in Aerospace

Each of the three manufacturing stages involves a different type of automation. The most prevalent form for each stage is summarized below in Table 51.1. One of the technologies listed, additive manufacturing, is still at a relatively low technology readiness level (TRL). In addition to these three production stages, there is the production ecosystem itself – the factory. The remainder of this manufacturing section will specifically address the technologies highlighted.
51.1.3 Part Fabrication

The majority of the automation in aerospace is dedicated to detailed parts manufacturing. There are four basic metal-forming techniques (Fig. 51.1) prevalent in aerospace –
forging, casting, extrusion, and machining. The first three are summarized in the following figure. The fourth, machining, is easily the most ubiquitous and will be discussed in more detail. Nearly all metal parts require machining. The process nowadays is highly mechanized. Modern automated machine tools have supplanted most manual turning, milling, and drilling functions in serialized production settings. These ultrahigh-precision five-axis tools can create complex metal parts with minimal human interaction.
The Advent of CNC Machines
Computer-numeric controlled (CNC) machines were first introduced by Massachusetts Institute of Technology (MIT) in 1952; they were later commercialized by Cincinnati HydroTel Corporation [1]. One of the major advancements was a programming language that essentially disintermediated the machinist. For instance, a typical day-long task could now be performed in less than an hour [2]. Another major advancement was the development of computer-aided machining (CAM). Lockheed-California is credited for developing computer-graphics augmented design and manufacturing (CADAM) in 1965, heavily influenced by the advancement of "adequate graphic systems" by IBM [3].

The Concept of Near-Net Shape
The efficiency of the machine is measured in terms of throughput. And throughput, in turn, is quantified by material "feeds" and machine "speeds." There are two basic classifications of machining. Rough machining is used to remove large volumes of material quickly, versus final machining that is used to provide smooth (usually final) surfaces. For aerospace parts, an important concern is surface defects that could lead to stress risers, and then part failure. This can become problematic for cyclically loaded parts where fatigue is a concern. Final machining helps ameliorate this issue by providing extremely smooth surfaces. Tolerances are approximately 0.005 in., or about twice the thickness of a human hair. Some parts are extensively machined such as billet, plate, and forgings; whereas others, such as castings and some extrusions, are less so. The amount of the material removed
Table 51.1 Key automation technology for aircraft and engine
Parts Fabrication: CNC machining, superplastic forming, lost-wax casting, aeroengine 3D-woven blades, additive manufacturing, coatings, inspection (GD&T, x-ray, ultrasonic)
Subassembly: Composite tow placement, inspection (x-ray, ultrasonic)
Final Assembly: Robotic drilling & riveting, painting
Plant-wide Automation: Tool cribs, automated pallet systems*
Source: Aerolytics
* Note that the Internet of Things (IoT) is an important enabler and fundamental infrastructure for the digital factory
Fig. 51.1 Outline of three of the four most common manufacturing processes. (Source: Aerolytics)
Forging – Process: hardened metal manipulated using force and heat. Advantages: controlled grain flow, fine grain structure, increased tensile strength, fatigue strength, and ductility.
Casting – Process: molten metal poured into a cavity with given geometry. Advantages: complex geometries, high production rates, superior high-temperature properties, less machining than forgings.
Extrusion – Process: hardened metal is heated and pushed through a die. Advantages: low cost for uniform cross-section shapes, excellent surface finish, good for brittle materials.
can be quantified in terms of the buy-to-fly ratio. This is the amount of initial material (or preform) required relative to the final material remaining. The average buy-to-fly ratio is around 7:1, meaning that for every 1 pound that remains, 7 pounds of material is required [4]. This is an astounding removal rate of roughly 85%. Processes that are closer to "near-net" shape, such as additive manufacturing, would be closer to 1:1.
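The arithmetic behind these figures is straightforward. The short sketch below, which is not from the handbook and uses the quoted 7:1 ratio only as an example, computes the scrap fraction implied by a given buy-to-fly ratio.

```python
def scrap_fraction(buy_to_fly: float) -> float:
    """Fraction of purchased material that is machined away.

    buy_to_fly: pounds of raw material bought per pound that ends up flying.
    """
    return 1.0 - 1.0 / buy_to_fly

# The ~7:1 industry average quoted in the text implies roughly 85-86% of the
# material is removed; a near-net process at, say, 1.2:1 removes only ~17%.
print(f"7:1   -> {scrap_fraction(7.0):.0%} removed")
print(f"1.2:1 -> {scrap_fraction(1.2):.0%} removed")
```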
The Advent of 3D Printing
The genesis of additive manufacturing (AM) can be traced back to the mid-1980s in the United States. In its most primitive form, it simply adds (or cures) material layer-by-layer to make a three-dimensional part. The term, however, was not formalized in the United States until 2009. AM remains the standard appellation for industrial three-dimensional printing in North America, as opposed to the term "three-dimensional printing," which basically connotes a desktop polymer printer. AM can be classified by material type/source and energy method, as depicted in Fig. 51.2. Clearly, as with any technology, there are various advantages and disadvantages to consider.

Additive Manufacturing Value Proposition
This manufacturing paradigm has emerged as transformational for targeted aerospace parts. In particular, it can create extremely lightweight parts that are optimized for a given performance (e.g., strength, stiffness, and durability). Material is
added only if required by the design load path, as predicted by the stress analysis software. The basic value proposition is predicated upon two scenarios. First, and most obvious, AM targets parts that cannot be created conventionally via machining. Examples include lattice structures, internal channels for fluid (for conveyance or cooling), and various "organic" shapes. Additionally, this would include former assemblies that can now be redesigned and consolidated into a single part. The GE LEAP fuel nozzle tip is a popular example [5]. The second scenario includes parts that cannot otherwise be justified economically, or take too long to produce, due to the need for tooling. AM does not require tooling. This becomes an important differentiator for situations involving part and tool obsolescence. In fact, this is a serious issue for the US Military given the maturity of their fleet [5]. Accordingly, all branches of the US Military have been actively researching AM, both metal and polymer. In essence, the AM business case can be highly nuanced. It might not support printing "simple" geometries or parts with "large" production runs writ large. Most often, the business case must be accompanied by some type of system-level analysis and part redesign. Both designers and managers need to understand the relationship between part complexity, size, production volume, and unit cost (see [6]). Meaningful adoption of AM is predicated upon a fundamental understanding of this trade-off.
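To make the tooling trade-off concrete, consider the toy comparison below. It is illustrative only; the cost figures and function names are hypothetical and are not taken from the handbook or any cited source. The point is simply that a tooled conventional process amortizes its tooling over the production run, whereas a tooling-free additive process carries a higher but flat per-part cost, so the economics flip as volume grows.

```python
def unit_cost_conventional(volume: int, tooling: float = 250_000.0,
                           per_part: float = 400.0) -> float:
    """Tooling cost amortized over the run, plus a per-part machining cost."""
    return tooling / volume + per_part

def unit_cost_am(volume: int, per_part: float = 1_500.0) -> float:
    """No tooling, but a higher per-part (machine time plus powder) cost."""
    return per_part

for v in (10, 100, 1_000):
    print(f"{v:>5} units: conventional ~${unit_cost_conventional(v):,.0f}/part, "
          f"AM ~${unit_cost_am(v):,.0f}/part")
```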
Fig. 51.2 Summary of the characteristics for metal AM. (Source: Aerolytics; photo courtesy of EOS, Inc.)
Classification: material source (powder bed, powder feed, wire feed) and energy method (laser, electron beam, plasma)
Benefits: reduce weight, reduce part count, reduce lead time, reduce material loss
Concerns: grain structure quality, machine repeatability
Aerospace applications: repairs (1980s), tooling (1990s), parts (2000s, from prototyping to production)
Prevalent AM Modalities
AM includes seven different ontologies or "modalities." For aerospace metal parts, there are two that dominate. In both cases, the ultimate goal is to print critical structural parts for the aerostructure and engine.
Powder bed fusion (PBF) is the most ubiquitous. It has several commercial names, the most common of which are selective laser melting (SLM) and selective laser sintering (SLS). It is most effective for small, complex parts made of materials that are difficult to machine, like superalloys. For this reason, engine investment-cast blades – with their internal channels for cooling – are particularly attractive candidates [5]. And more generically, small titanium and superalloy castings are strong prospects, too.
The second popular AM modality is wire-feed directed energy deposition (DED). This is similar to arc welding and targets basic net shapes, mainly relatively large titanium closed-die forgings used in the aerostructure. The greatest advantage is a much higher rate of material deposition compared to PBF. Another important advantage is the convenience and ease of using wire feedstock. The amount of machining required varies; on average, though, it is usually less than for die forgings.

Challenges for Adoption
There are a number of challenges for AM's adoption. These range from the more pedestrian – such as PBF's small build chamber – to the more technically nuanced, including a lack of fundamental understanding of the final part's microstructure due to machine process variations. Inevitably, as the technology matures and proliferates, certain aspects will become commoditized. Global standards development organizations (SDOs) such as SAE International and ASTM are supporting industrialization by
developing guidelines to better control the build process. This includes such measures as identifying the key process variables (KPVs), providing process control limits, specifying feedstock handling requirements, and creating criteria for machine and part acceptance and qualification. Standards and best practices are also critical to support governmental regulation. For instance, regulatory agencies such as the FAA are working with SDOs to help qualify machine performance. The current position is to effectively treat every machine as its own foundry, implying that each machine must be individually qualified as "adequately" calibrated. Indeed, this lack of standardization is prohibitively costly in terms of time and money. The quality of machines and the parts they produce will continue to improve. And advances in related areas like predictive modeling will facilitate adoption. Meanwhile, next-generation machines will push for greater throughput and higher quality parts by employing in-situ process monitoring and closed-loop controls, multiple lasers, and internal powder reuse systems.

Nonmetal AM Modalities
The most popular non-metal AM market for industrial fabrication is extrusion deposition of thermoplastics. Similar to PBF, there are a variety of commercial labels – the most widely accepted is fused filament fabrication (FFF). This technology was developed in the late 1980s by the founder of Stratasys, the market leader. The process involves a heated nozzle (liquefier) that moves while extruding molten plastic filament at a near-uniform rate. Unlike PBF, the build plate remains fixed and the nozzle traverses three-dimensionally to create the layers. The parts are similar in size to PBF, so are limited to a few feet. For aerospace, the main applications are non-critical
parts such as clips and brackets, and repairs for cabin interiors. Some FFF systems involve a controlled build chamber – in the case of Stratasys's patented fused deposition modeling (FDM), the build chamber is heated. There are commercially available large-scale extrusion deposition platforms. None carry the FFF label. More popular models include big area additive manufacturing (BAAM) and large scale additive manufacturing (LSAM). The largest printer has a build envelope that exceeds 40 ft in length. The aerospace end-market is prototype tooling for composites. The part/tool is printed to near-net shape, and then the inner mold line is machined smooth. Rapid prototyping for tooling is of particular interest due to the expense and time associated with machining Invar – the most common metal for dies due to its "invariable" properties. Both Invar and thermoplastics (when infused with carbon fiber) have very low coefficients of thermal expansion. The latter is considerably easier to machine, cheaper to procure, and more convenient to transport and store due to its lightweight but durable construction [7].
Castings
Casting technology is centuries old. For aircraft engines, it has become a vital part of the design due to the ability to create internal channels. The most demanding applications incorporate superalloys in the form of single-crystal (SX) investment castings for the engines' turbine blades. These are located in the "hot" section near the middle of the engine. SX's continuous-grain structure, when coupled with thermal-barrier coatings and internal cooling channels, allows the casting to operate in environments 800 °F above the alloy's melting temperature [8]. Castings themselves are relatively sophisticated. Nonetheless, the process to create most castings is not highly automated.

Lost-Wax Casting
Lost-wax casting is a multistep process. The term originates from the process of pouring molten metal into a ceramic mold that was originally formed via a wax mold of the final part. Creating the mold for the part itself is the first step. A high-pressure metal die or "master mold" is used to create the wax mold with exactly the dimensions of the desired final part. For smaller parts, dozens of these wax parts are affixed to a wax sprue or tree to form a block mold. This tree provides the flow channels needed to fill the individual parts' shells with the molten metal. Next, the wax tree is dipped or "invested" into a ceramic slurry that solidifies to create a shell. The dipping process may be repeated up to a dozen times [9]. After a sufficiently thick shell is formed, the ceramic is cured in an autoclave, and the wax melts and is "lost" from the mold. Molten metal is then poured into the empty mold through the opening of the tree. After the metal hardens, the
ceramic shell is removed. The individual metal parts are cut from the tree, ground, sometimes sandblasted, and inspected. Investment casting can be considered a near-net process if one were to ignore the additional metal required to attach the actual parts to the tree. Minimal machining is required for the part itself. This is critical since, as noted, superalloys are exceedingly difficult to machine.

Automating Casting
On average, the dipping of the tree into the ceramic slurry is the only automated process; otherwise, there is a fair amount of manual labor involved. According to Rolls-Royce, "You'll always find a wax room at an investment casting foundry." The manufacturing manager, Steve Pykett, explains that "it requires hand-eye coordination and dexterity to make the wax form, but that doesn't deliver consistency" [10]. The Manufacturing Technology Center in the United Kingdom houses an automated system to hold the ceramic core, inject the wax, secure the core, and finally assemble the wax tree. As a consequence, work that would normally take an entire shift requires only an hour. Moreover, the quality of the wax process was greatly improved, independent of the time of day. They claim there are savings in both time and money [10].
Superplastic Forming
Superplastic forming (SPF) is another relatively mature manufacturing process. It is similar to the deep drawing of sheet metal used in automotive. SPF involves gas that forces a sheet of metal into a mold cavity at a constant strain rate. The main advantages of SPF are the isotropy of the final part, the lack of bonding required for "complex" shapes, and the fact that the part has minimal residual stress. The most common metals for SPF aerospace applications are "fine-grain" aluminum and titanium alloys [11]. And since metal becomes more pliable when heated, the alloy can be stretched to 200–1000% of its original size [12]. Another benefit of SPF is the ability to form tight curvatures while using titanium sheet. This enables parts designed for high-temperature applications, eliminating the thermal insulation that would be needed if composites were used. One such application is structures near the thrust reverser, where temperatures approach 1000 °F. Cycle time, however, is considered a handicap. Another challenge is galling when using aluminum. This localized roughness is due to friction generated by the stretching process. SPF can be used to produce smaller parts, such as fan blades. Rolls-Royce established a Center of Excellence in Singapore dedicated to SPF for the Trent fan blades. Part of the process was considered extremely hazardous, and automation has proven to be an effective solution.
The first fabrication step is diffusion bonding three sheets of titanium to form a hollow structure. The blade is twisted and inflated with an inert gas in a highly controlled process, requiring an accuracy of 0.002 in. Historically, blades would be loaded into the furnace manually. Now, 10 foundry robots are used. Another set of robots sprays boron nitride to facilitate part removal. The assertion is that these robots can ultimately "mimic human dexterity" [13].
51.1.4 Subassemblies

A subassembly is a collection of parts that are often joined mechanically. Metal fasteners (including rivets) are preferred to welded joints for metals and chemical bonds for composites, although primary structure that requires fail-safe design will inevitably include a combination of joining techniques. Fasteners are used for redundancy. A typical air transport aircraft has between two and four million parts – a large portion of which are fasteners. Within the past decade, Boeing and Airbus have introduced aircraft that have significantly fewer parts than their predecessors. Both companies have "unitized" sections of the aerostructures by employing composites, meanwhile adopting greater levels of automation.
Paradigm for Design
Curiously, the most modern aircraft designs have not leveraged the full capability of composite technology. In particular, the Boeing 787 and Airbus A350 are semimonocoque designs that mimic an aluminum aircraft configuration. This is colloquially known as "black aluminum" (see Fig. 51.3). As a consequence, sections or major subassemblies of the fuselage were formed as a combination of bonded and fastened composite parts (i.e., skins, stringers, longerons, frames, and bulkheads). A similar approach was used for the wing and empennage. Indeed, in most cases these parts are composite, but titanium is also used. Although aluminum is remarkably easier to machine than both titanium and carbon fiber composites, it is not a candidate material [14]. Aluminum has a fundamentally different coefficient of thermal expansion and a propensity for galvanic corrosion when in contact with carbon fiber. The technology does exist to be able to join most of these composite parts from the outset.
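The thermal-expansion mismatch noted above follows from the linear expansion relation ΔL = αLΔT. The sketch below, which is not from the handbook and uses rough, representative coefficient values purely for illustration, estimates how much a 10-m aluminum panel would contract relative to a CFRP panel over a typical ground-to-cruise temperature swing.

```python
def thermal_growth(alpha_per_c: float, length_m: float, delta_t_c: float) -> float:
    """Linear expansion: delta_L = alpha * L * delta_T (meters)."""
    return alpha_per_c * length_m * delta_t_c

# Rough, representative coefficients of thermal expansion (1/degC), for illustration:
ALU = 23e-6    # aluminum alloy (approximate)
CFRP = 2e-6    # quasi-isotropic carbon fiber laminate (approximate)

L = 10.0       # hypothetical panel length, meters
dT = -70.0     # roughly +15 degC on the ground to -55 degC at cruise

print(f"aluminum: {thermal_growth(ALU, L, dT) * 1000:+.1f} mm")
print(f"CFRP:     {thermal_growth(CFRP, L, dT) * 1000:+.1f} mm")
# The ~15 mm difference over 10 m illustrates why bolting aluminum directly
# to CFRP primary structure is problematic.
```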
Fig. 51.3 Black aluminum design using CFRP. (Source: 2nd EM to J. Sloan at Composite World 2 Aug)
Carbon Fiber Composites
Airbus led the foray into composites in the 1970s with secondary structures, and then primary structures in the 1980s with its A310 and A320 aircraft [15]. Boeing soon followed. The Boeing 777 empennage was the first major structure that was predominantly carbon fiber-reinforced polymer (CFRP) [16]. And given the similarity between the empennage and the wing, a CFRP wing was a natural progression. The wing is an excellent prospect due to the high stiffness associated with CFRP, which enables long, narrow wings with high aspect ratios. This reduces aerodynamic drag. Moreover, aluminum wing skins are made from extensively machined aluminum plate. These expansive surfaces are stretch-formed to provide camber, and then assembled using thousands of fasteners [11]. It is arduous and costly. The fundamental importance of a CFRP wing can perhaps be best illustrated anecdotally. Boeing made a strategic investment in 2016 to build a billion-dollar composite wing facility for the 777X in Washington. This includes robotic placement of fiber to construct the 105-foot-long CFRP wing spars [17]. Nevertheless, as evidenced by the Boeing 787 and the Airbus A350, there is also value in designing a composite fuselage [18]. Regardless of the aircraft section, composites are accompanied by greater levels of automation. In general, there are three basic manufacturing ontologies.

Automated Tape Laying
Arguably, the most iconic form of automation in aerospace is associated with composites. The earliest instantiation is automated tape laying (ATL). Along with carbon fiber, ATL originated in the late 1960s in the United States for use on military aircraft. ATL uses unidirectional prepreg tape that typically measures 3–12 in. wide and is applied to a male or female mold with modest curvature. The orientation and thickness of the plies can be accurately controlled to optimize the mechanical characteristics for a given section. Often, the ATL head is heated and a compaction roller is used to ensure proper and consistent layer-to-layer adherence of the tape. If using a thermoset, it will likely be cured in an autoclave.
Filament Winding
A second technology for carbon fiber placement was industrialized in the 1980s for business and general aviation (BGA) with the Beechcraft Starship. This fiber placement system requires a cylindrical mandrel. Filament winding is a process of wrapping fiber tows under constant tension around a rotating mandrel as the tow placement head linearly traverses the mandrel. This allows changes in the tow orientation (e.g., +45° and −45°) to provide a composite laminate of varying thickness and anisotropy. Note that composite structures are strongest in tension, as the load is carried along the fibers. Structures are therefore designed accordingly, often using a sophisticated ply layup schedule. In some cases, a core is used to provide bending stiffness, as in the case of the Starship. The fuselage consisted of filament-wound carbon fiber plies that sandwiched a lightweight Nomex honeycomb core. Another common core is foam. There are various trade-offs in using core material – the decision is based upon the expected mission of the aircraft. In general, filament winding has never fully matured in commercial aerospace. The technology is most suited for simple rotationally symmetric geometries like cylinders or cones since constant fiber tension is required to ensure compaction. It is commonly used for spacecraft structures and various industrial pressure vessels.
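The ply layup schedule mentioned above is commonly written as a stacking sequence of fiber angles. The minimal sketch below, with a hypothetical eight-ply stack used only as an example, checks two standard laminate design rules: symmetry about the midplane and balance of off-axis plies.

```python
def is_symmetric(layup: list[int]) -> bool:
    """Symmetric laminate: the stacking sequence mirrors about the midplane."""
    return layup == layup[::-1]

def is_balanced(layup: list[int]) -> bool:
    """Balanced laminate: every +theta off-axis ply is matched by a -theta ply."""
    off_axis = [a for a in layup if a % 180 not in (0, 90)]
    return all(off_axis.count(a) == off_axis.count(-a) for a in set(off_axis))

layup = [45, -45, 0, 90, 90, 0, -45, 45]   # hypothetical stack, outer ply to outer ply
print(is_symmetric(layup), is_balanced(layup))  # True True
```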
Automated Fiber Placement
Automated fiber placement (AFP) is a third example of composites automation. This has become widely known since it is used by both Boeing and Airbus to fabricate fuselage sections of the 787 and A350, respectively. AFP is a form of ATL that incorporates elements of filament winding. There are two notable differences from filament winding – AFP requires prepreg tape, and the tows are often cut routinely during the application process. An AFP multiaxis head places narrow carbon fiber tows that are typically less than 0.5 in. wide. The machines can place up to 32 tows simultaneously. The advantage of AFP is that the narrower (and often discontinuous) tows can be placed on more complex, higher-contoured surfaces, which increases the versatility of the material. For the 787, AFP is used to produce eight barrel sections that are then joined with fasteners. Airbus implemented a fundamentally different approach for its aircraft. The A350 fuselage is composed of a series of panels that are fastened together. Each section has three panels – two sides and a crown. The panels are approximately 10 m long with an arc width of 6 m and are fabricated in the United States and shipped to France for assembly [19]. Skins are laid up in a female tool using ATL. The stringers and frames are mostly composite and are subsequently attached to the composite skins via thermoplastic clips and brackets.
Approximately two-thirds of the holes for these panels are drilled via automation. Drilling, as well as cutting through CFRP surfaces, is nontrivial, and the latter is usually performed via water jet. Similar to metal structures, the fasteners pose the greatest challenge – for this reason, they are installed manually.
51.1.5 Final Assembly

The final assembly of an aircraft is a colossal task. The primary explanation is the sheer size of the subassemblies. But other factors include tolerance build-up, the need for numerous jigs and fixtures, and complex sequencing of the events required to consolidate the myriad parts. The aircraft structure, furthermore, needs to accommodate the mechanical, hydraulic, fuel, and electrical systems – a single aircraft has a couple of hundred miles of wire, for example [20]. These systems connect critical components such as the landing gear and control-surface actuators and include design redundancy for safety. For the most part, aircraft final assembly is labor intensive. The engineering drawings are used to create a bill of materials (BOM). In addition to aiding the Purchasing Department, production decomposes the drawings into a series of discrete work-steps. These prescriptive shop-floor instructions provide the necessary rudimentary tasks to construct a quality final article.
The supply chain plays an increasingly important role in design and production. The original Boeing 737 entry-into-service (EIS) was in 1968, and over 70% of the aircraft was built internally by Boeing. This proportion changed markedly with the 787 – the figure now is closer to 30%. In the case of the 787, the initial goal was to complete final assembly within 3 days, with major assemblies being delivered by strategic partners from around the globe. This was in stark contrast to the typical 30-plus days normally required [21]. This level of transformation requires a new paradigm for the entire production ecosystem.
One of the greatest challenges of automating an existing production line is the legacy ecosystem – infrastructure (hardware and software), people, culture, and methods. One can imagine how entrenched such a system is for a complex artifact with millions of parts that has been produced for over half a century. Even the Airbus A320, with an EIS of 1988, faces similar challenges. During the FAA/EASA Type Certificate process, the design and engineering must be finalized and effectively frozen, as well as the production system. The OEM must prove to the regulators that the production system is fixed and stable, with an appropriate quality system in place to account for any defects and dispositions. This results in a Production Certificate.
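The decomposition from engineering drawings into a BOM and discrete work steps, described above, is essentially a tree walk over the product structure. The sketch below is purely illustrative; the part names and quantities are hypothetical and are not drawn from any aircraft program. It shows how a per-assembly BOM can be exploded to roll up total part counts, which is also where the very large fastener counts cited earlier come from.

```python
# Illustrative product structure: each assembly maps to (child, quantity) pairs.
BOM = {
    "aircraft": [("wing", 2), ("fuselage", 1)],
    "wing":     [("spar", 2), ("rib", 30), ("fastener", 40_000)],
    "fuselage": [("frame", 60), ("stringer", 90), ("fastener", 120_000)],
}

def explode(item: str, qty: int = 1, totals: dict | None = None) -> dict:
    """Recursively roll up the total quantity of every part under `item`."""
    totals = {} if totals is None else totals
    for child, n in BOM.get(item, []):
        totals[child] = totals.get(child, 0) + qty * n
        explode(child, qty * n, totals)
    return totals

print(explode("aircraft"))  # fasteners dominate the rolled-up part count
```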
It is worth mentioning that by 2020, Boeing and Airbus’ production systems were strained. Both companies aspired to significantly increase rate, from roughly 40 single-aisle aircraft per month, to a staggering 70 per month. The motivation was to more rapidly work through the order backlog that hovered around eight years. One strategy was to increase factory automation [22, 23].
Airbus and A320
European-based Airbus has contemplated investing over $1 billion in robotics, automation, and factory digitalization. The single largest production facility is in Toulouse, France. The second largest is in Hamburg, Germany, employing some 13,000 people [24] (see Fig. 51.4). This site is also the company's most advanced factory and is dedicated to structural assembly. Recently, robots have been incorporated to drill and to rivet/fasten the 6000 holes in the fuselage. It is estimated that productivity at this site has increased 20% [24]. The facility location is casually referred to as "Hangar 245." Michael Schoellhorn, Airbus' COO, had previously worked in the automotive industry. He believes that what they are trying to accomplish is "more difficult, in that the design of the single-aisle [aircraft] is not as mature, not as digital as automotive designs are today" [24]. In the case of Hangar 245, eight Flextrack robots drill and countersink up to 2400 holes per longitudinal joint, as illustrated in Fig. 51.5 [24].
Fig. 51.4 The bare fuselage for the Airbus A320 in Hamburg, Germany. (Source: A. Montaqim at Robotics and Automation News)
Next, 12 robots with seven-axis capability are used to mate the fuselage and the empennage via drilling, sealing, and riveting at 3000 positions per orbital joint to within 0.008-in. accuracy using a laser measurement system. A Hamburg University study estimates that this system reduces the recurring cost and workload by as much as 50% [24].
Boeing and 777X
Similar to automotive, the generic product development strategy for an aircraft OEM is to modify an existing product to extend its sales life. Product derivatives are often less costly and less risky than clean-sheet designs. In the case of aviation, there are three enabling technologies to consider, leading with the most important: the engine, wing, and avionics. A step-change in any one of these may warrant a new aircraft, or, as Boeing and Airbus have demonstrated repeatedly, an entire family of aircraft.
The original Boeing 737 family (EIS 1968) was the 737-100 and 737-200. The aircraft has since experienced three major upgrades. These are known as the Classic (EIS 1984), the Next Generation (NG) (EIS 1997), and most recently, the beleaguered MAX (EIS 2017). In each case, the family involved newer, more fuel-efficient engines. And in the case of the NG, the aircraft includes a larger wing to increase the aircraft's capacity. Given the gradual product evolution, it is no surprise that the production environment has not undergone a major transformation. The company elaborated on some of its challenges: A recent area of improvement is on the 737 component production system. Integrating new technology like cobots on an existing system can be a challenge because factory layouts are already configured, operators are experienced and skilled in existing processes, and there is often limited capital
Fig. 51.5 A close-up of a Flextrack robot system used for the A320. (Source: A. Montaqim at Robotics and Automation News)
Fig. 51.6 Kuka robots working on semi-assembled A320 fuselage. (Source: A. Montaqim at Robotics and Automation News)
available for a system change. Implementing a new robot cell needs to be low cost, cause minimal disruption, and deliver rapid payback of value to an already streamlined production system [25]. Understandably, Boeing has applied more advanced manufacturing for its newer products, such as the 787 and, more recently, the 777X. The 777X is a major derivative of the venerable 777, enhanced with a new composite wing and high-efficiency engines. In addition to building the composite wing factory in Washington, the company invested millions in robotic drilling and riveting for the fuselage, somewhat similar to the A320 operation (see Fig. 51.6). With the support of KUKA Systems from Germany, Boeing developed the fuselage automated upright build (FAUB) robotic system over a six-year period. Unfortunately, however, it was announced in November 2019 that the system would be abandoned due to persistent problems. The FAUB system was designed to drill holes and add tens of thousands of fasteners. Mechanics would first mount the track to the fuselage, and then four robots would traverse the surface adding fasteners. The robots worked in unison, with one inside and the other outside. They were plagued with setbacks. In the end, Boeing reverted to an older, smaller-scale, flexible automation system called "Flex Track." The mechanics first would attach the track to the fuselage, and the machine would then move circumferentially around the barrel section drilling holes. The mechanics followed, adding fasteners. This technology was developed for the 777 aircraft and is still in production [26].
An Engine OEM
A typical air transport engine has 20 to 30 thousand parts, comparable to an automobile. In general, engine assembly is less labor-intensive than aircraft assembly. Yet
this is more a function of a simpler bill of materials than of automation. Similar to aircraft OEMs, engine OEMs do not employ extensive factory automation. One source estimates the labor content, as a percentage of the total engine value, to be in the low single digits, although the high value of engine parts considerably skews this figure. A turbine blade, for instance, can cost $1000 to $4000 – and there are hundreds of blades in a given engine. One example of an advanced engine final assembly facility is the GE Aviation smart factory in Indiana. The 225,000-ft² facility was opened in 2014 to assemble the CFM LEAP engine core; during steady production, it is expected to employ roughly 300 people. This is one of three such sites in the United States. The plant itself is optimized for lean manufacturing. It is well organized and reflects the essential tenets of lean manufacturing, such as kitting and kaizen, with highly visible process maps and schedules. The company is proud to highlight its highly autonomous and collaborative teams. But there is zero automation. This is likely similar to other modern engine final assembly plants around the world.
51.1.6 Non Air Transport Markets
The focus of this chapter thus far has been the commercial air transport market. This market dominates total aerospace production volumes and is characterized by the Boeing–Airbus duopoly. To underscore their significance, one can use raw material as a proxy: in a typical year, 1.5 to 2 billion pounds of raw material is consumed in aerospace globally, two-thirds of which is associated with Boeing and Airbus [4]. The balance of the material is used to build business and general aviation and military aircraft. These markets are explored in more detail below, along with a quickly emerging market, advanced air mobility.
Business and General Aviation
The business and general aviation (BGA) market is occupied by a small number of global companies: Textron Aviation (United States), Embraer (Brazil), Gulfstream (United States), Bombardier (Canada), and Dassault (France). The turboprops and private jets produced annually range between 100 and 200 units per company. As in air transport, large-scale automation is almost nonexistent outside part fabrication. In addition to low volumes, legacy designs and infrastructure make it impractical to justify significant investments in automation. Even the new entrant HondaJet of Japan, with its production facility in North Carolina, has no appreciable automation.
Military
Military aircraft spending in the United States is dominated by maintenance, repair, and overhaul (MRO), or simply
“sustainment” in military parlance. This represents some 70% of the total annual spending for the US Air Force and US Navy (i.e., NAVAIR). Both branches have aging fleets – the average aircraft age is greater than 20 years. MRO for aerostructures is conducted primarily at military depots by military and civilian mechanics; for the engine, some work resides with the engine OEM. All things considered, there is minimal automation involved in MRO beyond component fabrication and repair. The same is basically true for aircraft production. One notable exception is the F-35 Joint Strike Fighter. The F-35 is a fifth-generation fighter with an EIS of 2015. With its three variants, it is used by the US Air Force, US Navy, and US Marines, followed by various NATO allies. Lockheed Martin is the prime contractor, BAE (United Kingdom) provides the forward section, and Northrop Grumman (United States) manufactures the center fuselage and wing skins. The aerostructure comprises aluminum, titanium, and composites – the latter representing 35–40% of the total weight. Composites are used for items such as air-inlet ducts and weapons-bay doors. Advanced manufacturing is used extensively to fabricate these disparate materials. One example is Northrop Grumman’s facility in Southern California. In its plant near Edwards Air Force Base, Northrop Grumman invested $100 million to create the integrated assembly line (IAL). The precision required is so exacting that this 200,000-ft² facility is climate controlled. The company boasts of using a “system-engineering approach to integrate tooling and structure transport, system automation, automated drilling cells, and tooling mechanization coordinated across multiple build centers” [27]. They credit US automakers for inspiration, along with the help of KUKA Systems of Germany [27]. The capital equipment is extensive. The IAL contains 600-plus tools and 79 major tool positions, along with 13 articulated robots and a radio-frequency identification (RFID)-monitored automated guided vehicle system. The system also includes automatic laser welding and automated pallet systems as a complement to the multifunctional robotic end-effectors. These robots are used for drilling, sealant and coating application, and fastener insertion. Furthermore, it is all orchestrated via a moving assembly line [27]. The facility uses multiple nine-axis robots to drill thousands of holes through variable material stacks in difficult-to-reach locations. In one operation, robots reduced a 52-h manual process to 12 h. Finally, their laser-tracking system allows robots to drill and countersink with an exceptional degree of accuracy while monitoring progress via real-time statistical process control (SPC). Another laser system is then used to evaluate the quality of the holes [27]. This is arguably the most advanced aviation production facility in the world.
Emerging Markets: Urban Air Mobility
One market that may realize significant automation is advanced air mobility (AAM). Both the aircraft's small size and the expected high volumes make it an attractive candidate. On the other hand, because this market is an entirely new paradigm, there is no true precedent for an accurate market forecast. Generally speaking, investors are bullish: one forecast predicts vehicle sales over a 20-year period to exceed $40 billion [28], and recent SPAC funding levels help to underwrite this optimism. Globally, there are over 250 companies and nearly twice as many designs. These aircraft are typically all-electric, relatively small, and, operationally, hybrids between an airplane and a helicopter (see Fig. 51.7). The business models range from autonomous cargo drones to piloted urban air taxis that hop from building heliport to heliport. In a sense, the idea is to marry the production efficiency and supply chain management of automotive with the quality and material innovation of aerospace. Materials are a key enabler. In particular, quick-cure thermoset composites and thermoplastics will be required to meet these ambitious production goals. Quick-cure thermosets were initially developed for the automotive industry; this material cures in minutes without an autoclave, a vast improvement over the six to eight hours normally required, in addition to the autoclave itself, which is often a production bottleneck. Few details of the various designs, material systems, and manufacturing schemes are publicly available. The current focus for companies like Joby Aviation, Beta Technologies, Wisk, and Lilium is Type Certification. These firms will then need to obtain their Production Certificate, likely deploying technologies such as ATL in order to meet their aggressive rates. Indeed, some companies like Joby and Archer have partnered with automotive companies to execute this daunting task. To many, this is extremely risky. There are fundamental differences between aerospace and automotive in terms of materials, manufacturing, and regulations, not to mention
Fig. 51.7 Germany's Lilium AAM flying prototype. (Source: Lilium)
cultures, in general. Many aviation enthusiasts are cautiously optimistic.
Automotive as a Benchmark
An aspect not addressed in detail hitherto is the digital factory. This is not automation per se, but it is increasingly difficult to separate automation from digitalization; the relationship is strongly symbiotic. Historically, robots have operated in isolation, in cells designed and optimized for dedicated tasks like spot welding and painting (see Fig. 51.8). A derivation of this strategy is the moving assembly line. Nevertheless, the digital factory is a step change. Why the change? It is the confluence of considerable advances in five basic enablers:
• Robotics
• Sensors
• Predictive analytics and adaptive controls (e.g., artificial intelligence/machine learning)
• Modeling and simulation (e.g., model-based systems engineering (MBSE), digital twin)
• Connectivity of disparate systems (e.g., Internet of Things (IoT), standard protocols)
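As a loose, purely illustrative sketch of the last two enablers – a model-based representation kept in sync over a network – the Python fragment below maintains a hypothetical digital-twin record for a robotic drilling cell and updates it from JSON telemetry messages. Every name (DrillCellTwin, the message fields) is invented for the example and does not correspond to any vendor API; a production system would sit on an industrial transport such as OPC UA or MQTT and on a validated data model.

# A minimal sketch (hypothetical names throughout) of a digital-twin record
# for a robotic drilling cell, updated from JSON telemetry messages that
# could arrive over any IoT transport.
import json
from dataclasses import dataclass, field

@dataclass
class DrillCellTwin:
    cell_id: str
    spindle_rpm: float = 0.0
    holes_drilled: int = 0
    alarms: list = field(default_factory=list)

    def apply(self, message: str) -> None:
        """Merge one telemetry message into the twin's state."""
        data = json.loads(message)
        self.spindle_rpm = data.get("spindle_rpm", self.spindle_rpm)
        self.holes_drilled += data.get("holes_completed", 0)
        if data.get("alarm"):
            self.alarms.append(data["alarm"])

twin = DrillCellTwin(cell_id="cell-07")
twin.apply('{"spindle_rpm": 5400, "holes_completed": 12}')
twin.apply('{"spindle_rpm": 0, "holes_completed": 3, "alarm": "tool wear limit"}')
print(twin.holes_drilled, twin.alarms)   # 15 ['tool wear limit']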
One other consideration that enhances the human-machine interface is augmented reality and virtual reality (AR/VR), which leverage several of the aforementioned technologies and render information graphically in three-dimensional space. Facilities employing these technologies have been referred to as Industry 4.0, the smart factory, the digital factory, and the factory of the future. Regardless of the label, the goal is to improve product quality, cost, and throughput while maintaining worker safety. The challenge, it seems, is to optimize a system for a given objective while maintaining a certain amount of flexibility; "agile enterprise" and "interoperability" have thus become common industry lexicon. There are also implications for logistics, inventory management, enterprise resource planning (ERP) systems, and so on. Two recent examples of smart factories in the automotive industry illustrate the trend. The first is a new factory in Germany; the second is an existing factory in the United States that is being renovated. Porsche of Germany has relatively low volumes and a high product mix. This requires a flexible tooling solution – and, unfortunately, automation systems are typically not flexible. In 2020, the company announced a joint venture called "FlexFactory." The justification, according to Porsche, was the convergence of
Fig. 51.8 An example of (Kuka) robots dedicated to a series of prescriptive tasks. (Courtesy of Kuka Robotics)
the electric vehicle (EV) and the IoT [29]. In the United States, General Motors has a similar vision. It is investing $2 billion to retool a Detroit-based facility built in the late 1980s – the single largest manufacturing plant investment in the company's history. This "Factory ZERO" will be dedicated to EVs. Central to the company's strategy is a 5G communications network, which it claims will be the first of its kind in the United States [29]. The factory will be the baseline for future plants, with plans to produce its subsidiary's autonomous taxi. The layout will be modular, the car itself will have a greatly simplified bill of materials, and the vehicle will exist as an accurate digital replica. Factory ZERO will occupy 4 million ft² and will be one of GM's largest assembly plants. The plan is for 2200 employees to oversee production of 270,000 vehicles annually [29].
51.1.7 Manufacturing Closing Thoughts
Aerospace manufacturing is heavily focused on quality due to the inherent risk of flying. Over the past two decades, the industry has moved toward reducing part count and making greater use of unitized structures, and this has been accompanied by an increase in factory automation. Most automation occurs upstream, during parts fabrication. The CNC process is highly automated, yet it is a mature technology; additive manufacturing is the new frontier. Two other component-level technologies deserve mention: high-speed electro-erosion machining and friction-stir welding (FSW). The latter has proven popular in the commercial space-flight industry for joining difficult-to-weld or dissimilar alloys, but after decades FSW has yet to take root within the broader aerospace community. The most recognizable form of automation is fiber placement for aerostructures, as illustrated by Boeing's and Airbus' recent aircraft. At the same time, both companies have had mixed results in their attempts to automate drilling and riveting for existing product lines. Clearly, new aircraft with new(er) production environments are much better candidates for automation. The F-35 ecosystem is considered exemplary, with one important caveat: jet fighters are a fraction of the size of air transport aircraft, and robotics do not scale easily. Industrial production is trending toward the digital factory. Organizations such as the International Council on Systems Engineering (INCOSE) and the American Society of Mechanical Engineers (ASME) help with definitions, standards, and protocols for MBSE and the digital thread/twin. Two key considerations are the concept of "design for automation" and the ability to build reconfigurable robotic cells. Perhaps joint research with the US military and NASA concerning automation and manufacturing readiness level
(MRL) is in order. MRL was developed by the US military to help with program (and supplier) management by modifying the technology readiness level (TRL) rubric developed by NASA in the 1970s. Perhaps tellingly, there is no guidance pertaining to level of automation: the maximum level, MRL 10, simply represents "full-scale" production. Cobots do not seem to have a significant future within aerospace, at least within the foreseeable future. It has been surmised that Airbus is more inclined to use cobots in collaboration with humans, whereas Boeing's model of future automation is more aligned with the automotive "lights-out" style of factory. Inspection will be an area of growth; the most promising technology is computed tomography. And lastly, AR/VR seems to possess great potential for aerospace, leveraging Paul Fitts' legendary MABA-MABA construct from the 1950s. Somewhere along the way, systems designers may well be confronted by Moravec's paradox. Certainly, production systems need to be designed holistically, and ideally concurrently with product development. But this would require more than just including a cadre of mechatronics engineers; a fundamental cultural change would be in store, and realistically, traditional aerospace OEMs are not ready for this type of step change. Perhaps AAM can adopt a few best practices from automotive to help change this paradigm.
51.2 Automation in Aircraft Systems and Operations
51.2.1 Guidelines for Automation Development
Flight decks are highly automated and contain many different types of automation. In this section, important principles for the development of automation are discussed: first, principles related to each of the types of automation (control, warning, and information), followed by several overarching concerns for the development of automation – human factors issues, software and system safety, system integration issues, and certification. The guidelines listed here are not necessarily followed in the development of current automation. One reason is that it is often impossible to follow all guidelines regarding automation and still meet all other constraints (such as space, cost, and certification). As such, these guidelines should be treated as goals for automation development rather than hard-and-fast rules.
Control Automation
Control automation should make apparent to the operator the axes under control and the expected behavior of the automation. Due to the desire to be able to control axes
separately under certain circumstances, there are a number of modes in which the autopilot can be operated. These modes are often not clearly identified to the pilot, or, if they are, the expected operation of the aircraft while in these modes is not well understood. A number of aircraft accidents have occurred due to this “mode confusion.” A number of researchers have investigated this problem and proposed solutions [30–35], but to date no particular method for mitigating the problem has been widely adopted. Designers of future control automation, however, must strongly consider the likelihood of mode confusion, and use good human factors design methodology to ensure the transparency of the automation. Control automation should fail gracefully. It is possible for the aircraft to be exposed to conditions that exceed the expectations of the designers. Such conditions are often cases where the pilots could use the assistance of automation, but where automation typically turns itself off. Autopilots are designed to be used under specific sets of flight conditions; if exceeded, autopilots will simply disconnect, leaving the pilot to handle the unusual circumstance by themselves. To the extent possible, control automation should be designed to assist pilots in even unusual circumstances rather than just shutting off.
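One way to picture the "fail gracefully" principle is as a fallback ladder: rather than disconnecting outright when conditions exceed its design envelope, the automation steps down to a simpler mode and announces the transition. The Python sketch below is illustrative only – the mode names and numeric thresholds are assumptions, not values from any actual autopilot.

# Illustrative only: a fallback ladder for a pitch-axis controller that
# degrades through simpler modes instead of disconnecting silently.
from enum import Enum

class PitchMode(Enum):
    FULL_AUTO = "altitude + speed capture"
    ATTITUDE_HOLD = "hold current pitch attitude"
    MANUAL = "pilot has full control"

def select_mode(airspeed_kt: float, bank_deg: float) -> PitchMode:
    """Pick the highest level of automation the current state supports.
    Thresholds are invented for the example, not certified limits."""
    if 140 <= airspeed_kt <= 330 and abs(bank_deg) <= 30:
        return PitchMode.FULL_AUTO
    if 120 <= airspeed_kt <= 360 and abs(bank_deg) <= 45:
        return PitchMode.ATTITUDE_HOLD      # degraded, but still assisting
    return PitchMode.MANUAL                 # last resort: hand over, loudly

def annunciate(old: PitchMode, new: PitchMode) -> None:
    if old is not new:
        print(f"MODE CHANGE: {old.name} -> {new.name} ({new.value})")

mode = PitchMode.FULL_AUTO
for airspeed, bank in [(250, 10), (135, 20), (110, 50)]:
    new_mode = select_mode(airspeed, bank)
    annunciate(mode, new_mode)
    mode = new_mode

The point of the sketch is the ordering of the checks: each step down keeps as much assistance as the current conditions allow, and every transition is made visible to the pilot rather than occurring silently.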
Warning and Alerting Systems
Warning and alerting systems are designed to identify hazards. If this identification (and the corrective action) were deterministic, there would be no need to alert; the system could simply perform the corrective action automatically. Typically, the system acts as a signal detector, alerting based on some threshold of evidence regarding the hazardous condition. Signal detection theory provides a convenient and effective way of analyzing alerting systems. If the system alerts and the hazard is in fact present, the alert is a correct detection; if it alerts when no hazard is present, the alert is a false alarm. If the system does not alert when it should have, that is a missed detection; the final case, a correct rejection, is when the system correctly does not alert. Given equal costs for a missed detection and a false alarm, thresholds should be set to minimize both. Such a trade-off can be viewed on a system-operating characteristic (SOC) chart, such as that shown in Fig. 51.9. The "chance" or "guess" line is the 45° diagonal on the chart. The curve is constructed by manipulating the alert threshold and determining the resulting probability of false alarm and correct detection; different system designs will result in different curves. In the SOC chart, the "perfect" system operates in the upper left corner, where the probability of false alarms is zero and the probability of correct detection is 1.
Fig. 51.9 System-operating characteristic (SOC) chart: probability of correct detection P(CD) versus probability of false alarm P(FA)
Since perfect performance is not possible, the system should be operated using the alert threshold that is as close as possible to the upper left corner of the SOC chart. Moreover, since correct detections and false alarms are typically not valued equally, the best point on the SOC curve is determined by the relative value of the correct detections and false alarms. In particular, for the given curve, one should attempt to maximize the value of Eq. 51.1:

U = P(CD) V(CD) − P(FA) V(FA)    (51.1)
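As a worked illustration of Eq. 51.1, the sketch below sweeps a handful of candidate operating points – each standing in for an alert threshold with its associated P(CD) and P(FA) – and selects the one with the highest expected utility. The probabilities and the values V(CD) and V(FA) are invented for the example; in practice they would come from the system's measured SOC curve and from an explicit valuation of outcomes.

# Illustrative threshold selection using U = P(CD)*V(CD) - P(FA)*V(FA).
# The candidate operating points below stand in for a measured SOC curve.
V_CD = 10.0   # assumed value of a correct detection
V_FA = 3.0    # assumed cost of a false alarm

# (threshold label, P(CD), P(FA)) -- invented points along a notional SOC curve
operating_points = [
    ("very sensitive", 0.99, 0.60),
    ("sensitive",      0.95, 0.35),
    ("moderate",       0.85, 0.15),
    ("conservative",   0.60, 0.05),
]

def utility(p_cd: float, p_fa: float) -> float:
    return p_cd * V_CD - p_fa * V_FA

best = max(operating_points, key=lambda pt: utility(pt[1], pt[2]))
for label, p_cd, p_fa in operating_points:
    print(f"{label:15s} U = {utility(p_cd, p_fa):5.2f}")
print("Best threshold:", best[0])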
For collision detection systems, there is another consideration: if a false alarm occurs, it is possible that the resulting actions of the pilot will induce a collision that would not have occurred had no action been taken. Therefore, such systems must also consider "induced collisions" as a metric, and other types of alerting systems should likewise consider the full effect of false alarms on the resulting system. Alert thresholds should include consideration of pilot response time, which varies considerably with the frequency of the alert. Often, alert thresholds are set based on assumptions about the resulting response. For example, ACAS systems often expect a pilot to initiate the resolution maneuver within just a few seconds. However, many alerts are uncommon, sometimes heard only a few times in the career of a pilot. Expectation appears to affect pilot response to alerts, and pilots have been known to take 30 s or longer to respond [36–38]. If the alert thresholds are set assuming a 5-s response time, but that assumption is not met, the alert may come too late to be effective.
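The timing issue can be made concrete with a toy calculation: for a fixed alert lead time, the margin left for the avoidance maneuver shrinks one-for-one with the pilot's actual response time. All numbers below are assumptions chosen only to illustrate the point.

# Toy calculation: remaining maneuver margin = alert lead time - response time.
ALERT_LEAD_TIME_S = 35.0      # assumed time from alert to predicted hazard
MANEUVER_TIME_S   = 25.0      # assumed time the avoidance maneuver needs

for response_s in (5.0, 15.0, 30.0):   # assumed pilot response times
    margin = ALERT_LEAD_TIME_S - response_s - MANEUVER_TIME_S
    status = "OK" if margin >= 0 else "TOO LATE"
    print(f"response {response_s:4.1f} s -> margin {margin:+5.1f} s  {status}")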
Expected responses (often embedded in the alert threshold) should also reflect heuristics, as pilots will often apply common shortcuts (or techniques) when executing a response, even if that response is not the one expected by the alert. For example, prototype alerting systems for collision avoidance on approach assume that the pilots will turn away from the approach path of the other aircraft, and the alert thresholds are set assuming this response [39, 40], although other thresholds may be safe [41]. However, military pilots are trained to always keep proximate aircraft in sight so that separation can be assured visually; a turn away from an aircraft violates this heuristic and may not be followed. Several other factors, such as the commanding of a reversal maneuver (climb when descending or descend when climbing), can adversely impact pilot response [42]. Wherever possible, preparatory warnings should be given. It has been shown that adherence to desired actions subsequent to an alert improves if the alert is preceded by a preparatory warning [43, 44]. For example, the Traffic Collision Avoidance System (TCAS), a type of ACAS, utilizes a two-level warning system. A traffic advisory (TA) is first given to warn of a proximate aircraft that may pose a threat. No action is required by the TA (although some action presumably should be taken, even if only to confirm the threat). If the condition persists, a resolution advisory (RA) is given to indicate a high potential for collision. Resolution advisories provide specific guidance to the pilots to avoid the potential collision, and pilots must comply with the resolution advisory instructions.
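A minimal sketch of the two-level idea follows. It is not the actual TCAS logic, which is far more elaborate and depends on closure rate, altitude, and time-to-go; it simply shows a preparatory advisory being issued at a wider threshold than the resolution-level alert, with both thresholds invented for the example.

# Not real TCAS logic: a toy two-level alert based only on predicted
# closest-approach range, to illustrate preparatory vs. resolution alerts.
TA_RANGE_NM = 6.0   # assumed "traffic advisory" threshold
RA_RANGE_NM = 3.0   # assumed "resolution advisory" threshold

def alert_level(predicted_min_range_nm: float) -> str:
    if predicted_min_range_nm <= RA_RANGE_NM:
        return "RESOLUTION ADVISORY: climb/descend as directed"
    if predicted_min_range_nm <= TA_RANGE_NM:
        return "TRAFFIC ADVISORY: traffic, look for intruder"
    return "clear"

for rng in (8.0, 5.0, 2.0):
    print(f"predicted miss distance {rng:3.1f} NM -> {alert_level(rng)}")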
Information Automation
During periods of high workload, pilots will have little time to scan a display looking for information. For that reason, information automation must avoid clutter – the presentation of useless information alongside useful information. This coincidence of information forces the pilot to search a display, consuming time and cognitive resources that are in short supply during times of high workload. However, the designer often cannot identify useful information a priori. In this case, there are a number of methods to "de-clutter" a display, although no definitive methods have been identified. De-cluttering eliminates low-priority information, or information that is unlikely to be needed, from the display. One method is to utilize multifunction displays (MFDs) instead of single-sensor, single-instrument (SSSI) displays. MFDs allow for depth, where information can be put on separate "pages" of the display that can be brought up when needed or when called for by the operator. The information is then present in the system but not visible until needed. For example, automation onboard flight decks will display information related to a malfunction when that particular malfunction is detected. Two trade-offs when using MFDs are the possibility that information is presented when not needed (or not presented when needed) by the automation, and the need to navigate through the display. It can be difficult for the designer to envision all the circumstances that lead to a pilot needing information displayed, or needing the display to remain the same. Moreover, for such automation to be used, it should be highly accurate at predicting the information needs of the pilot. When depth is introduced into displays, the pilot must navigate through that depth to arrive at a particular set of information, just as people navigate through the depth of webpages. On flight decks, information displays are typically very small (a few inches by a few inches), so for the same amount of information, more depth is needed than on a conventional laptop-sized display. Moreover, fewer controls are provided to navigate through the displays, making the task even more difficult. Another method to de-clutter displays is to de-emphasize some information by reducing its contrast, brightness, or both. Important information will then "pop out" of the display, and unimportant information will be easier to ignore. Recent work on visualization may provide some assistance [45], especially as display sizes and resolution increase [46]. Visualization approaches attempt to provide information in a format that eases interpretation, allowing more information to be extracted from a display for a given amount of information presented. However, such approaches have yet to be validated for use in safety-critical systems such as a flight deck.
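The de-cluttering trade-off can be pictured as a priority filter over display elements: items below a (possibly workload-dependent) cutoff are dimmed or moved to a secondary page rather than deleted from the system. The sketch below is purely illustrative – the item names, priority scores, and cutoff are assumptions, and certified display logic is considerably more involved.

# Illustrative de-clutter filter: keep high-priority items salient, dim or
# page the rest.  Priorities and the cutoff are invented for the example.
items = [
    ("engine fire warning", 10),
    ("airspeed",             9),
    ("next waypoint ETA",    5),
    ("cabin temperature",    2),
]

def declutter(display_items, cutoff):
    salient, paged = [], []
    for name, priority in display_items:
        (salient if priority >= cutoff else paged).append(name)
    return salient, paged

salient, paged = declutter(items, cutoff=6)   # cutoff could rise with workload
print("shown prominently:", salient)
print("dimmed / secondary page:", paged)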
Human Factors Issues
Our ability to create automation capable of replacing a function previously performed by a human is becoming largely dependent on human factors issues rather than technical ones. With computers becoming smaller and faster, it is technically possible to produce automation capable of remarkable feats. However, if that system must interact with humans, it must be compatible with them, and this problem of compatibility is often a major obstacle to the introduction of automation into complex socio-technical systems. Such automation must consider a number of aspects of human interaction with technology. The role of the human in a highly automated system is often relegated to that of a supervisor. This role has particular requirements and challenges that must be considered by the designer of automation. One of these challenges relates to the "out-of-the-loop" problem [47], which is being addressed in terms of understanding the operator's awareness of the situation. In addition, automation must accommodate human limitations with regard to perception, workload, and physical ergonomics.
Supervisory Control
As flight deck automation increases in quantity and sophistication, some have expressed concern that pilots are becoming
supervisory controllers of automation rather than users of automation. On some aircraft, it is possible to engage the autopilot once aligned on the departure runway and allow the aircraft to fly to its destination, land, and taxi off the runway without the pilot interacting with the control surfaces other than through the autopilot (apart from drag/lift devices such as flaps and spoilers). One concern related to this phenomenon is the loss of manual control skills, as discussed by Billings [48] and others [49, 50]. Should the automation fail, the pilots would have to fly the aircraft manually. Since such a failure is likely to result from a more serious system failure, manual control may have to be assumed under less than ideal circumstances. For example, the aircraft may have only partially operating control surfaces, as was the case in a DC-10 mishap in Sioux City, Iowa, in 1989 [51]. In that accident, an uncontained engine failure left the aircraft without hydraulic power to operate any of the control surfaces. The pilots controlled the aircraft using engine power only, a procedure that they had to devise on the spot. Without excellent manual flying skills, it is unlikely that the result would have been as successful (175 of the 285 passengers and all but one of the crew survived the crash landing). Designers of automation should ensure that sufficient opportunity exists for operators to exercise those aspects of control for which an intervention may be needed in the case of automation failure. The overarching considerations that are part of supervisory control [52] should be considered when designing automation for flight decks. These considerations include the ability of pilots to monitor information, an understanding of the impact of communication/control delay, and the loss of situation awareness. As supervisors of automation, pilots are responsible for monitoring the automation to ensure it is operating properly and for intervening should the automation fail. Humans are notoriously bad at monitoring highly reliable systems [53]; under such conditions, humans tend to become complacent and fail to monitor adequately. One response to this has been to caution against overautomation [48], while others have suggested that more automation (or at least more feedback) is the answer [54]. Likely what is required is smarter automation, including designs that make a system fail only in ways that can be handled by a human supervisor. Even relatively small delays in communication or control can have significant consequences for supervisory control. This is analogous to manual control, where delays between identification of a control requirement and application of that control can lead to instability. Faster control loops can tolerate less delay; slower control loops can tolerate more. For example, under conditions of several seconds of delay, intervention by a supervisor to assume manual control would likely be impossible, whereas such
delays would not impact the supervisor's ability to provide navigation commands to a vehicle. In addition, humans have better situation awareness when actively involved in an operation than when acting as a supervisor of automation [55, 56]. Situation awareness is the set of information used by the operator to make decisions and choose courses of action, and it is discussed in more detail in the next section. Good situation awareness, considered a key to good performance in complex systems, is adversely affected by the operator's cognitive distance from the task. This may reflect research findings suggesting that concrete, direct experiences involve deeper cognitive processing and are therefore better recalled than those that are merely described or otherwise undergo shallower processing [57].
Situation Awareness, Clutter, and Boundary Objects
Since situation awareness is correlated with good performance, automation designers should attempt to enhance it. However, this cannot be addressed simply by making sufficient information available and salient to the operator, since situation awareness involves the operator making use of that information. Making information available is not sufficient to ensure that it gets used; operators may simply fail to use available information to guide action, for reasons that are as yet unclear. For example, one aviation incident self-report from NASA's Aviation Safety Reporting System database describes landing at Los Angeles International Airport without first obtaining the landing clearance required by FAA regulation. The crew, who would normally switch their radio frequency to that of the tower controller (who would then grant landing clearance), was told to remain on a previous frequency by air traffic control (most likely due to some separation concern). As a result, the crew never switched frequency and never received landing clearance. They realized their mistake after turning off the runway, at which point they were about to switch to the frequency of the ground controller. Noticing the error, the crew called the tower and reported their mistake. The crew in this incident had sufficient information to know they did not have clearance; they simply did not make use of that information. While factors such as interruptions (and other disturbances to a routine), workload, idiosyncratic cognitive capabilities, and experience would seem to influence the prevalence of this type of behavior, no definitive research has yet been conducted to explain it. Automation designers should be aware that the absence, or lack of salience, of information, while not technically a part of situation awareness, will have the same effect as a loss of situation awareness. That is, not having the information in the first place will have an identical effect on performance as not
making use of available information (although the causes of the error may be different). One method to ensure that the required information is identified is to conduct a thorough task analysis [58], such as a goal-directed task analysis [59]. Such methods provide insight into what information may be required by an operator in the conduct of their task. In addition to ensuring the availability of information, designers should ensure that information is accessible to the operator. Too much irrelevant information can make accessing particular (important) pieces of information more difficult than necessary [60, 61]; for this purpose, a number of de-cluttering schemes are available [62–64]. Intelligent design of displays, so that multiple operators across a collaborative task can easily contextualize information while making use of the same body of information, is also recommended. Such displays, known as boundary objects, have been found in examples of good collaborative work [65, 66]. Unfortunately, design criteria for these types of displays are still lacking.
Situation Assessment
Automation designers should consider the situation assessment capabilities of the pilots. Situation assessment is the process of populating situation awareness. It involves most aspects of cognition – perception, attention, comprehension, and memory – and is therefore very complex. These limitations are discussed in subsequent sections, but several important overarching findings bear on this capability. Operators monitoring dynamic displays do so at a near-optimal rate under most circumstances. According to the Nyquist–Shannon sampling theorem [67], one can fully reproduce a signal if that signal is band-limited and is sampled at more than twice its highest frequency. Operators have been found to sample at approximately this rate, with some deviation at higher and lower frequencies [68]: for highly dynamic information, operators will oversample, while they tend to undersample low-frequency information. Numerous reasons have been proposed for this tendency, without a definitive answer. Nevertheless, designers can be somewhat comfortable that human operators will sample dynamic displays appropriately. However, as mentioned above, operators become easily complacent with highly reliable or very low-frequency information. In such cases, designers should, where practical, ensure that operators remain engaged in the task. One method, utilized in airport security X-ray machines as the Threat Image Projection system, is to occasionally probe the operator with false signals. Such signals can be used to engage the operator and also to test whether the operator is becoming complacent. Another method is to increase the salience of potentially offending signals, such as providing a warning
light to attract the attention of the operator to the instrument containing the relevant information. In addition, operators have significant cognitive limitations (but also several methods of coping with those limitations). A human's channel capacity is limited to somewhere between 4 and 9 individual items [69, 70]. Above this, operators must be able to "chunk" the information (such as recalling a phone number as one seven-digit number rather than as seven separate digits in a sequence). Expertise improves the ability of operators to chunk information, but designers are cautioned not to design automation that taxes a human's channel capacity.
Perception
In terms of visual perception, human operators sparsely sample scenes and then integrate their knowledge of the world with these sparse samples to obtain a representation. One factor known to affect sampling is the Gestalt of the scene [71]. Factors such as the proximity of items, their "common fate," and their similarity all affect the ability of the operator to associate information. An example of this ability concerns the monitoring of a number of check-reading gauges: gauges oriented so that their limits are aligned, as seen in Fig. 51.10, are much easier to monitor than those that are not. Designers are therefore encouraged to consider the Gestalt of instruments to assist operators, including grouping information used for the same purpose close together [72]. The basic layout of flight deck instrumentation was established many decades ago, based on research into the scan pattern of pilots [73]. This placed the most-used instruments into a T-shaped pattern (now called the "basic T"), with less-used instruments being placed outside of this T, but close to related instruments. The basic T has been replicated on the multifunction displays utilized in modern aircraft in place of single-sensor, single-instrument gauges. It seems certain that this arrangement will continue, and it should be adhered to by automation designers. Workload has been known to affect perception and attention, as has been widely reported in studies on driving
Fig. 51.10 Gestalt factors in instrument displays: check-reading gauges with aligned limits
[74, 75]. Increased workload impairs visual detection, as well as the size and shape of the visual field [76]. Increased workload also dissipates attention, which in turn influences perception. Designers should be aware that under conditions of high workload, pilots may have difficulty perceiving information that would otherwise be considered sufficiently salient. Much of the information imparted to the user by flight deck automation uses the visual channel. To avoid further saturating the visual channel, designers are reminded that other perceptual channels are available. Several automation systems use the aural channel, particularly alerting systems. The GPWS, the stall warning system, the landing gear warning system, and a number of others produce both visual and aural warnings; each aural warning is distinct to aid in identification. Very little work has been done on utilizing the haptic channel in aviation, but some success has been found in aiding operators in directing locational attention [77, 78], which could be useful for pilots.
Workload
Often the primary concern related to workload is to ensure that the work required of an operator does not exceed the operator's capability. Pilots experience swings in workload from very high (takeoff and landing) to very low (cruise). The periods of high workload are such that almost no cognitive work, required for reasoning and other higher-level functions, can be accomplished. Instead, pilots are relegated to skill-based control [79] and do not have time for activities such as mental calculation or troubleshooting. Imposing requirements for such activities during high-workload periods should be avoided. Performance in complex tasks has generally been assumed to be inverse-U shaped with respect to workload [80, 81]. At low levels of workload, human performance is low due to complacency. At high levels of workload, performance is also low, but because task demands tax or exceed human capabilities. Performance is highest at moderate levels of workload. Automation designers should therefore try to balance workload (not too high and not too low), rather than strictly trying to reduce it.
Physical Ergonomics
Automation designers should consider good physical ergonomics as well as cognitive ergonomics. For automation, several factors are considerations, including avoiding repetitive strain, placing frequently used controls within reach, ensuring proper visibility and audibility, and reducing the "heads-down" time of the operator. Automation should avoid requiring significant fine input from the operator. The flight management system (FMS) includes a control display unit (CDU), which is the pilot's primary manual interface with the unit. Pilots may need
to input information into the FMS using the CDU, which consists of a small keyboard and display. The keys on the CDU are small, mostly identically shaped, and laid out in an idiosyncratic (to the particular CDU) manner. This layout is far from ideal from a human factors standpoint and results in a slow rate of typing. Were significant typing required, repetitive stress injuries (such as carpal tunnel syndrome) would be a significant risk. However, designers of the CDU must contend with a very small device footprint, the need for very high reliability, and robustness to flight conditions such as turbulence. Fortunately, most of the information in the FMS is loaded only once (or by electronic download), so typing is infrequent and usually limited to a few key presses. There are numerous principles for controls that reduce the chance of repetitive strain or overuse disorders [81]. These types of injuries are associated with high-frequency activity and can therefore be avoided by eliminating prolonged periods of repetitive motion. High repetitiveness has been defined as a cycle time of less than 30 s, or more than 50% of the total task time spent performing the same motion [82]. Larger loads can induce the same conditions with less repetition. Maintaining body positions (other than the neutral body position) or gripping objects can also induce repetitive strain injuries. According to [82], there are seven "sins" associated with overuse disorders:
1. Highly repetitive motions
2. Prolonged or repeated application of more than one-third of an operator's static muscle strength
3. Stressful body postures, such as elevated wrists when typing or working while bent or twisted at the waist
4. Prolonged non-neutral body postures, such as standing
5. Prolonged contact with a surface, such as leaning against a surface or holding a tool
6. Prolonged exposure to vibration
7. Exposure to low temperatures (such as a draft or exhaust from a pneumatic tool)
Certain types of reaching are ergonomically undesirable. The need to extend oneself to reach a control poses risks to the back and shoulders, as does the need to reach above one's shoulder height [83]. The need to twist one's body frequently also poses significant risk of injury to operators. For these reasons, it is a common ergonomic principle that frequently used controls be placed within the normal reach of the operator. To find the normal reach of the operator, one should first identify the range of sizes of the people that constitute the population for which the device will be designed. A common choice is to use the 5th percentile female as the lower limit and the 95th percentile male as the upper limit, although certain occupations may further limit the population. If single measurements are required (such as shoulder-hand length), lookup tables, such as those that can be
obtained from NASA or commercial sources, can be used. If multiple measurements are required, however, "cases" should be considered rather than trying to combine measurements from the tables. Cases identify a typical small or large person, rather than assuming that a small person has the 5th percentile seated height and the 5th percentile shoulder-hand length (for example), which is usually not accurate. Once the lower and upper measurements have been obtained, the designer can then check whether the commonly used controls are within reach of that range of persons without extending. Such methods have been applied in a great number of different domains. Proper visibility should be ensured throughout the typical illuminance conditions. Studies that provide guidance on illuminance were conducted decades ago, including studies in which Blackwell [84] found that the threshold contrast at which objects became recognizable increased with decreasing object size, with decreasing orderliness of the objects, with movement of the objects, and when less time was given to detect the object. Increased age also increases the need for contrast [85]. In addition, the viewing of displays should be free of glare and veiling reflections. Glare is the condition where bright light sources are in the line of sight of the operator, creating a nuisance. Veiling reflections are the condition where bright light sources are reflected off the display surface, obscuring information on the display. These problems can be minimized by positioning the display where bright light sources pose neither a glare nor a reflection issue (to the maximum extent possible). Automation designers should also ensure audibility, particularly of aural feedback or alerts. An alert that is at least 10–15 dB above the ambient noise level is generally considered sufficiently salient, although alerts are often found with significantly less difference when placed in the normal operating environment (a flight deck is a noisy place in flight). Different types of tones can also be used to indicate different levels of criticality [86]. When multiple auditory warnings exist, as they do on flight decks, they should utilize distinct tones to avoid confusing one alert with another. Some alerts may even include computerized voices. On a typical flight deck, auditory warnings exist for the GPWS system, TCAS system, overspeed warning, landing gear warning, and altitude alerts. When the weather permits, pilots must direct a significant amount of attention to outside the flight deck. In addition to visually locating other aircraft that may pose a collision risk, pilots must frequently locate, identify, and orient themselves with respect to a particular airport runway. On the ground, scanning for obstacles and other aircraft is a particularly important task. For this reason, automation should not impose upon pilots too much "heads-down" time (the term is contrasted with "heads-up" displays located on the windshield of the aircraft). Flight management system reprogramming, for
example, is a difficult task, requiring many minutes of heads-down time to accomplish relatively simple reprogramming. It is therefore undesirable to require such changes close to the ground, where pilots' attention should be outside rather than inside the aircraft.
Software and System Safety
Calls have been made previously for automation that degrades gracefully [48, 52], and recently efforts have been made toward that end [87]. One significant complication for the introduction of such methods is that most advanced automation is software based rather than hardware based. Software systems lack the physical constraints available in hardware: whereas one can construct physical constraints on machines that prevent them from failing in certain ways (e.g., short-circuit protection devices on electrical outlets in bathrooms), no such assurances can (so far) be provided for software systems. Formal methods for mathematically verifying software [88], particularly in concert with model-based software development methods [89], go far toward improving software systems in this regard. Formal methods, however, are insufficient for ensuring software safety [90]. Safety is often more than ensuring that the output of the software remains within some constrained set of appropriate responses; it also involves the processes for defining those constraints and the interaction of human and other automated agents with the system. Therefore, system safety must also account for these (and other) factors. Because of this, safety has been viewed as an emergent property of a system, ensured by providing sufficient control for agents in the system to prevent it from entering unsafe states [91]. Under this concept, a system can be modeled as a set of states, where safety (as a goal) means preventing the system from entering an unsafe state.
System Integration
As flight decks become more complex, and as the interactions between systems (other vehicles, air traffic controllers, passengers, and the airline) become tighter, system integration issues will grow in importance. Aircraft have typically been designed to optimize some aspect of the vehicle (for example, speed, fuel economy, range, or capacity), but now the integration of that vehicle within a larger system must be considered. There are tight interactions between customer demand, airline route decisions, air traffic control infrastructure, and aircraft design. Such design optimizations involve multiple systems, each of which has very different dynamics. The behavior of the overall system is not defined by the sum of the behaviors of the components but typically also exhibits emergent behavior at higher levels of aggregation of the components. Methods
for accomplishing such optimizations have not yet been developed but are being studied as "systems-of-systems" problems. Nonetheless, automation designers must address systems integration issues. Flight deck automation is expensive to develop and implement and therefore must last many years (typically the lifetime of the aircraft model). For example, flight management systems have hardware and software that are highly antiquated by modern computing standards, but because they work and are certified (see the section below), they have not substantially changed over the last few decades. Moreover, the flight deck automation needed to take advantage of next-generation air traffic systems will need to be developed and installed over the next decade. Flight deck systems often interface with multiple systems of various types. State information on the aircraft comes from pitot-static, temperature, and other instruments. Position and navigation information comes from radio navigation aids, inertial navigation systems, GPS, and the flight management system. There are also mechanical, electrical, hydraulic, and pressurization systems on board. Potentially, automation must be integrated with several of these different types of systems.
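To illustrate the integration burden at the interface level, the sketch below hides several dissimilar navigation sources behind a single, hypothetical position interface so that downstream automation does not need source-specific code. The class and field names are invented; real avionics data buses and their formats are far more constrained than this suggests.

# Hypothetical example of hiding dissimilar navigation sources behind one
# interface so downstream automation integrates with a single data shape.
from typing import Protocol, List

class PositionSource(Protocol):
    name: str
    def position(self) -> tuple[float, float]:   # (latitude, longitude) in degrees
        ...

class GpsSource:
    name = "GPS"
    def position(self) -> tuple[float, float]:
        return (40.4237, -86.9212)      # placeholder fix

class InertialSource:
    name = "INS"
    def position(self) -> tuple[float, float]:
        return (40.4241, -86.9220)      # placeholder, slowly drifting

def blended_position(sources: List[PositionSource]) -> tuple[float, float]:
    """Naive average of all available sources (real systems weight by accuracy)."""
    lats, lons = zip(*(s.position() for s in sources))
    return (sum(lats) / len(lats), sum(lons) / len(lons))

print(blended_position([GpsSource(), InertialSource()]))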
Certification and Equipage
The various civil flight authorities have strict certification requirements for new automation. These requirements impose rigorous safety standards on that automation, resulting in long lead times and high costs, which in turn impose a heavy burden on any new automation system being proposed. In the USA, the Federal Aviation Administration certifies new equipment. Under the Code of Federal Regulations, each installed system must:
(a) Be of a kind and design appropriate to its intended function;
(b) Be labeled as to its identification, function, or operating limitations, or any applicable combination of these factors;
(c) Be installed according to limitations specified for that equipment; and
(d) Function properly when installed [92].
In addition, for flight and navigation instruments:
(a) The equipment, systems, and installations . . . must be designed to ensure that they perform their intended functions under any foreseeable operating condition.
(b) The airplane systems and associated components, considered separately and in relation to other systems, must be designed so that –
(1) The occurrence of any failure condition which would prevent the continued safe flight and landing of the airplane is extremely improbable, and
(2) The occurrence of any other failure conditions which would reduce the capability of the airplane or the ability of the crew to cope with adverse operating conditions is improbable.
(c) Warning information must be provided to alert the crew to unsafe system operating conditions, and to enable them to take appropriate corrective action. Systems, controls, and associated
monitoring and warning means must be designed to minimize crew errors which could create additional hazards [92].
The certification process is detailed and slow, designed to prevent systems with deleterious safety effects from being installed in aircraft (particularly commercial aircraft). This process, which is well known to the industry but is not well documented, should be considered by automation designers. The use of off-the-shelf and previously certified hardware and software is recommended to simplify and shorten the certification process.
51.3 Automation in Air Traffic Control Systems and Operations
Modern air traffic control remains highly manual; the primary task, monitoring radar-identified aircraft positions and intentions to ensure that no conflicts occur, is still performed largely by human controllers. However, a few automation tools have been developed and many more are being proposed. This section discusses two examples, with a focus on the principles involved in, and lessons learned from, that automation. The amount of automation used in air traffic control is still very small, so this section is relatively short. However, in order to accommodate air traffic regulators' goals of vastly increasing air traffic system capacity, it appears that a major move to much more automation is needed. This is discussed in the final part of this section, as it represents a major challenge to the incorporation of automation in the aerospace system.
51.3.1 Sequencing and Scheduling Automation
Air traffic control is broken down into volumes of airspace ("sectors") within which aircraft are monitored and controlled by an individual controller or a small team; these controllers and teams have minimal interaction with adjacent controllers and teams. As a result, there was historically virtually no coordination of the sequencing and spacing of aircraft approaching an airport, making the job of the controllers attempting to sequence and space those arrivals difficult. That difficulty often led to inefficient strategies, including low-altitude holding and the restriction of departures. Automation was introduced, in the USA, Europe, and elsewhere, that coordinates arrivals and provides controllers with instructions on how to space aircraft in their sectors, so that the job of the arrival controller is simplified and the flow of aircraft is made more efficient. There are several key lessons from the development and deployment of this automation in the air traffic system. First, automation for air traffic controllers must have a minimal footprint in terms of its impact on the tools and
strategies used by controllers. The volume of aircraft and the nature of the job mean that controllers will not tolerate any technology that requires additional concentration, judgments, or corrections. The automation that sequences and schedules aircraft changes only a few characters on the data tag associated with the aircraft. That data tag places a simple requirement on controllers – ensure the aircraft is delayed by the number of minutes specified. Such an instruction is easy for a controller to implement and leverages the skills controllers already have in determining how to delay aircraft to ensure proper spacing. Second, the automation must be developed in close cooperation with the air traffic controllers whose jobs are affected. The development of these automation tools, despite the minimal physical changes to the task environment, required years of simulation and field testing with professional air traffic controllers. Those interactions led to numerous changes and, more importantly, to buy-in from the operators, who hold veto power over the introduction of automation into their task environment. Lastly, for implementation, the software must be integrated into the existing commercial software baseline in use in the air traffic facilities. This is a major undertaking, even for small changes, since the code base for these systems, comprising many millions of lines of code, is tightly controlled. Therefore, the introduction of automation necessarily involves careful and deliberate consideration of that software baseline and the involvement of software engineers who are familiar with development within the baseline.
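The controller-facing instruction – delay this aircraft by a specified number of minutes – can be produced by a very simple rule once arrivals are coordinated centrally: take the nominal arrival times at a meter point in order and push each one back just enough to preserve a minimum spacing. The sketch below is a deliberately naive first-come, first-served version with invented numbers; operational schedulers also account for aircraft type, winds, runway configuration, and much more.

# Naive first-come, first-served arrival scheduler: enforce a minimum
# spacing at a meter point and report the delay each aircraft must absorb.
MIN_SPACING_MIN = 2.0   # assumed required spacing at the meter point

# (callsign, nominal/undelayed arrival time at the meter point, in minutes)
arrivals = [("AAL12", 10.0), ("UAL87", 10.5), ("DAL33", 11.2), ("SWA601", 16.0)]

scheduled = []
last_sta = float("-inf")
for callsign, eta in sorted(arrivals, key=lambda a: a[1]):
    sta = max(eta, last_sta + MIN_SPACING_MIN)   # push back only if needed
    scheduled.append((callsign, sta, sta - eta))
    last_sta = sta

for callsign, sta, delay in scheduled:
    print(f"{callsign:7s} STA {sta:5.1f} min  delay {delay:4.1f} min")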
51.3.2 Conflict Detection and Resolution Automation
The air traffic system has had several automation tools intended to assist controllers with the critical task of conflict detection and resolution. These tools have had minimal success, and efforts to improve them continue. The first such tool was introduced shortly after radar came into use in the air traffic system in the 1960s. The tool, called "conflict alert" (CA), was programmed into the initial software used by the radar display systems. (The language used at that time was JOVIAL, which still exists in some legacy mission-critical systems such as air traffic control.) CA computes a dead-reckoning extrapolation of aircraft trajectories and identifies aircraft whose extrapolated trajectories would violate minimum separation within two minutes. CA produces numerous false alarms, both because the dead-reckoning trajectories are often not the trajectories actually flown and because of inaccuracies in the trajectory extrapolations themselves.
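The conflict alert idea can be caricatured in a few lines: project each aircraft forward along its current ground velocity and flag any pair whose projected horizontal separation falls below a minimum within the look-ahead window. The sketch below uses flat-earth x/y coordinates and invented positions and separation values; it is not the operational CA algorithm, but it shows why dead reckoning produces false alarms – any intended turn, climb, or level-off is invisible to the extrapolation.

import math

LOOK_AHEAD_S = 120          # two-minute look-ahead, as in the text
MIN_SEPARATION_NM = 5.0     # assumed en-route horizontal separation minimum

# (callsign, x_nm, y_nm, vx_kt, vy_kt) -- positions/velocities are invented
aircraft = [
    ("N123",  0.0,  0.0,  480.0,   0.0),
    ("N456", 20.0,  3.0, -420.0,   0.0),
    ("N789",  0.0, 40.0,    0.0, 300.0),
]

def position_at(ac, t_s):
    _, x, y, vx, vy = ac
    return (x + vx * t_s / 3600.0, y + vy * t_s / 3600.0)

def dead_reckoning_conflicts(traffic, step_s=5):
    conflicts = set()
    for t in range(0, LOOK_AHEAD_S + 1, step_s):
        for i in range(len(traffic)):
            for j in range(i + 1, len(traffic)):
                xi, yi = position_at(traffic[i], t)
                xj, yj = position_at(traffic[j], t)
                if math.hypot(xi - xj, yi - yj) < MIN_SEPARATION_NM:
                    conflicts.add((traffic[i][0], traffic[j][0]))
    return conflicts

print(dead_reckoning_conflicts(aircraft))   # {('N123', 'N456')}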
CA is therefore not relied upon by controllers, and the alerts are often ignored. However, most controllers will admit that CA has "saved them" from missing some conflicts. More recently, there have been additional efforts to assist controllers in identifying and resolving conflicts. One such effort, called a "conflict probe," was implemented starting in 2001, and a second is still under development as part of the FAA's Center-TRACON automation system. The former system uses a large lookup table to predict conflicts as far out as 20 min, while the latter uses very sophisticated and detailed trajectory prediction routines similar to those used in an aircraft's flight management system. The conflict probe system is largely unused, as the cost of false alarms in terms of workload is high. The trajectory prediction system has not yet been accepted for use by the FAA, even though the software for its implementation is already in operation. A lesson from these systems is that utilization of such automation in air traffic control is low, even if the benefit of using the system is high. Air traffic controllers are highly skilled and resist tools that they perceive may interfere with the successful execution of their duties. As mentioned in the previous section, a deliberate and thorough process that involves professional controllers is a necessary, if not sufficient, step for implementation.
51.3.3 Future Automation Needs
Air transportation system-regulating agencies across the globe are working toward a system that can handle several times the current demand on the system. Since the current system is at, or very near, its maximum capacity even with the very best decision support automation, major changes will be necessary to achieve that goal. In particular, the primary function of monitoring aircraft positions and trajectories to ensure that safe separation is maintained will have to be automated. This is a revolutionary change and will require principles for integrating humans and automation that currently do not exist. Among the considerations for such a system are the automation of conflict detection and resolution (CD&R) and the integration of CD&R with the aircraft's flight management system. The former involves a major change to the duties and responsibilities of controllers, and the latter a similar change for pilots. Air traffic capacity is limited, in part, by the number of aircraft a controller can manage in a volume of airspace. With existing and even proposed decision support tools, this number is not likely to increase substantially. However, proposed CD&R tools could theoretically allow that number to increase to several times its current value [93].
However, a major question remains about the role of the air traffic controller in such a system. No serious thought is being given to installing an automated system and sending controllers home, as no one believes that such automation would come close to being sufficiently reliable or safe. Controllers therefore still need to do something, but human factors principles for integrating such advanced automation with human operators have not yet been identified. This is a major challenge for automation designers going forward, even beyond the air transportation system. System designers must go beyond the concepts of human-centered design and supervisory control in order to develop a safe, efficient, and effective next-generation air traffic system.
51.4 Conclusion
Modern flight decks contain a large number of automation systems, including control, warning, and information automation. Significant changes are being considered for air traffic control systems, which will influence automation systems for flight decks and provide incredible opportunities for tomorrow’s automation engineer. These changes will have to balance future needs, the increase in capabilities over the last few (and next few) decades, the expense of advanced automation, and the long lead times associated with the development and certification of flight deck automation.
References
1. Olexa, R.: The father of the second industrial revolution. Manuf. Eng. 127(2) (2001)
2. Ross, D.T.: Origins of the APT language for automatically programmed tools. In: History of Programming Languages I, pp. 279–338. ACM (1978)
3. Weisberg, D.E.: The Engineering Design Revolution: The People, Companies and Computer Systems that Changed Forever the Practice of Engineering, pp. 1–26. Cyon Research Corporation (2008)
4. Bihlman, B.: Key Material Trends in Aerospace. International Titanium Association Conference, Paris, France (2016, April 20)
5. Bihlman, W.: A Methodology to Predict the Impact of Additive Manufacturing on the Aerospace Supply Chain. Doctoral Dissertation, Purdue University Graduate School (2020)
6. Debroy, T., Mukherjee, T., Milewski, J.O., Elmer, J.W., Ribic, B., Blecher, J.J., Zhang, W.: Scientific, technological and economic issues in metal printing and their solutions. Nat. Mater. 18, 1026 (2019)
7. Bihlman, B.: Extrusion Deposition AM Machines. Large-scale Additive Manufacturing Workshop, Purdue University (2018, Sept 18)
8. Langston, L.: Single-Crystal Turbine Blades Earn ASME Milestone Status. MachineDesign.com (2018, Mar 16); https://www.machinedesign.com/mechanical-motion-systems/article/21836518/singlecrystal-turbine-blades-earn-asme-milestone-status
9. Donner, R.: Aircraft Maintenance Technology Magazine, Sept 5 (2014)
10. Nathan, S.: The Engineer, September 19 (2017). https://www.theengineer.co.uk/rolls-royce-single-crystal-turbine-blade/. Accessed Dec 2020
11. Sarh, B., Buttrick, J., Munk, C., Bossi, R.: Aircraft manufacturing and assembly. In: Springer Handbook of Automation, pp. 893–910 (2009)
12. Saint-Gobain: (2020). https://www.bn.saint-gobain.com/resourcecenter/blog/what-superplastic-forming. Accessed Dec 2020
13. JTC Corporation: (2020). https://www.jtc.gov.sg/news-andpublications/featured-stories/Pages/Rolls-Royce.aspx. Accessed Dec 2020
14. Rahman, M., San Wong, Y., Zareena, A.R.: Machinability of titanium alloys. JSME Int. J. Ser. C Mech. Syst. Mach. Elem. Manuf. 46(1), 107–115 (2003)
15. Michaels, K.: AeroDynamic: Inside the High-Stakes Global Jetliner Ecosystem. AIAA (2018)
16. Ilcewicz, L.: Past Experiences and Future Trends for Composite Aircraft Structure. Montana State University (2009)
17. Gates, D.: Boeing Shows-off New 777X Wing Center. Seattle Times (2016, May 19); https://www.seattletimes.com/business/boeing-aerospace/boeing-shows-off-new-777x-wing-center/
18. Kotha, S., Olesen, D., Nolan, R., Harding, W., Condit, P.: Boeing 787: The Dreamliner. Harvard Business School, MA (2005, Jun 21)
19. Sloan, J.: The first composite fuselage section for the first composite commercial jet. Composites World Magazine (2018, July 1). https://www.compositesworld.com/articles/the-first-composite-fuselage-section-for-the-first-composite-commercial-jet
20. Bellamy, W.: Aircraft Wire and Cable: Doing More with Less, Sept 1 (2014); https://www.aviationtoday.com/2014/09/01/aircraft-wire-and-cable-doing-more-with-less/. Accessed Dec 2020
21. Moores, V.: Pictures – Boeing begins 787 final assembly (2007, May 22); https://www.flightglobal.com/pictures-boeingbegins-787-final-assembly/73770.article. Accessed Dec 2020
22. Francis, S.: New Airbus chief wants more automation and digitalisation to increase productivity, June 4 (2018). https://roboticsandautomationnews.com/2018/06/04/new-airbus-chief-wants-more-automation-and-digitalisation-to-increase-productivity/17526/. Accessed Dec 2020
23. Weber, A.: Airbus harnesses automation to boost fuselage production. Assembly Online Magazine (2019, Dec 10); https://www.assemblymag.com/articles/95340-airbus-harnesses-automation-to-boost-fuselage-production
24. Montaqim, A.: Tour of Airbus' advanced manufacturing facility in Germany. Robotics and Automation News (2019, Oct 8)
25. Boeing: (2017). https://www.boeing.com/features/innovationquarterly/nov2017/feature-technical-cobots.page. Accessed Dec 2020
26. Gates, D.: Boeing abandons its failed fuselage robots on the 777X. Seattle Times (2019, Nov 13). https://www.seattletimes.com/business/boeing-aerospace/boeing-abandons-its-failed-fuselage-robots-on-the-777x-handing-the-job-back-to-machinists/
27. Weber, A.: Northrop Grumman Soars with Automation. Assembly Online Magazine (2013, Oct 1); https://www.assemblymag.com/articles/91578-northrop-grumman-soars-with-automation
28. Garrett-Glaser, B.: Will the Global Urban Air Mobility Market Justify the Investment? Aviation Today (2019, Aug 22); https://www.aviationtoday.com/2019/08/22/will-global-urban-airmobility-market-justify-investment/
29. Visnic, B.: New initiatives prep Porsche, GM for digital-manufacturing transformation. SAE International (2020, Dec 8); https://www.sae.org/news/2020/12/digital-factories
30. Bjorklund, C.M., Alfredson, J., Dekker, S.W.A.: Mode monitoring and call-outs: an eye-tracking study of two-crew automated flight deck operations. Int. J. Aviat. Psychol. 16(3), 263–275 (2006)
31. Bredereke, J., Lankenau, A.: Safety-relevant mode confusions – modelling and reducing them. Reliab. Eng. Syst. Saf. 88(3), 229–245 (2005)
32. Degani, A., Heymann, M.: Formal verification of human-automation interaction. Hum. Factors. 44(1), 28–43 (2002)
33. Degani, A., Shafto, M., Kirlik, A.: Modes in human-machine systems: review, classification, and application. Int. J. Aviat. Psychol. 9(2), 125–138 (1999)
34. Mumaw, R.J.: Plan B for eliminating mode confusion: an interpreter display. Int. J. Human-Comput. Interact. 37, 693–702 (2021). https://doi.org/10.1080/10447318.2021.1890486
35. Vakil, S.S., John Hansman, R.: Approaches to mitigating complexity-driven issues in commercial autoflight systems. Reliab. Eng. Syst. Saf. 75(2), 133–145 (2002)
36. Ennis, R.L., Zhao, Y.J.: A formal approach to analysis of aircraft protected zone. Air Traffic Control Q. 12(1), 75 (2004)
37. Landry, S.J., Pritchett, A.R.: Examining assumptions about pilot behavior in paired approaches. In: Paper presented at the International Conference on Human-Computer Interaction in Aeronautics, Cambridge, MA (2002)
38. Pritchett, A.R., Hansman, R.J.: Pilot non-conformance to alerting system commands during closely spaced parallel approaches. In: Digital Avionics Systems Conference, 1997. 16th DASC, AIAA/IEEE, 2 (1997)
39. Abbott, T.S., Mowen, G.C., Person Jr., L.H., Keyser Jr., G.L., Yenni, K.R., Garren Jr., J.F.: Flight Investigation of Cockpit-Displayed Traffic Information Utilizing Coded Symbology in an Advanced Operational Environment (NASA Technical Paper 1684). NASA Langley Research Center, Hampton (1980)
40. Kuchar, J.K., Yang, L.C.: A review of conflict detection and resolution modeling methods. Intell. Transp. Syst. IEEE Trans. 1(4), 179–189 (2000)
41. Jeannin, J.-B., Ghorbal, K., Kouskoulas, Y., Schmidt, A., Gardner, R., Mitsch, S., Platzer, A.: A formally verified hybrid system for safe advisories in the next-generation airborne collision avoidance system. Int. J. Softw. Tools Technol. Transf. 19, 717–741 (2017). https://doi.org/10.1007/s10009-016-0434-1
42. Stroeve, S.H., Blom, H.A.P., Hernandez Medel, C., Garcia Daroca, C., Arroyo Cebeira, A., Drozdowski, S.: Modeling and simulation of intrinsic uncertainties in validation of collision avoidance systems. J. Air Transp. 28, 173–183 (2020)
43. Winder, L.F., Kuchar, J.K.: Evaluation of collision avoidance maneuvers for parallel approach. J. Guid. Control. Dyn. 22(6), 801–807 (1999)
44. Yang, L.C., Kuchar, J.K.: Prototype conflict alerting system for free flight. J. Guid. Control. Dyn. 20(4), 768–773 (1997)
45. Johnson, C.R.: The Visualization Handbook. Elsevier, Burlington (2011)
46. Mei, H., Guan, H., Xin, C., Wen, X., Chen, W.: DataV: data visualization on large high-resolution displays. Visual Inf. 5, 12–23 (2020)
47. Berberian, B.: Out-of-the-loop (OOL) performance problem: characterization and compensation. In: Ayaz, H., Dehais, F. (eds.) Neuroergonomics: The Brain at Work and in Everyday Life. Academic, London (2019)
48. Billings, C.E.: Aviation Automation: The Search for a Human-Centered Approach. Lawrence Erlbaum, Mahwah (1997)
49. Kaber, D.B., Endsley, M.R.: The effects of level of automation and adaptive automation on human performance, situation awareness and workload in a dynamic control task. Theor. Issues Ergon. Sci. 5(2), 113–153 (2004)
50. Miller, C.A., Funk, H.B., Goldman, R., Meisner, J., Wu, P.: Implications of adaptive vs. adaptable UIs on decision making: why "automated adaptiveness" is not always the right answer. In: Proceedings of the 1st International Conference on Augmented Cognition, Las Vegas, NV (2005)
51. National Transportation Safety Board: 14 CFR Part 121 Scheduled Operation of United Airlines, Inc. (Report No. DCA89MA063). Washington, DC (1989)
52. Sheridan, T.B.: Telerobotics, Automation, and Human Supervisory Control. The MIT Press, Cambridge (1992)
53. Bainbridge, L.: Ironies of automation. Automatica. 19(6), 775–780 (1983)
54. Norman, D.A.: The "problem" with automation: inappropriate feedback and interaction, not "over-automation". Philos. Trans. R. Soc. Lond. Ser. B Biol. Sci. (1934–1990). 327(1241), 585–593 (1990)
55. Adams, M.J., Tenney, Y.J., Pew, R.W.: Situation awareness and the cognitive management of complex systems. Hum. Factors: J. Hum. Factors Ergonom. Soc. 37(1), 85–104 (1995)
56. Endsley, M.R., Kaber, D.B.: Level of automation effects on performance, situation awareness and workload in a dynamic control task. Ergonomics. 42(3), 462–492 (1999)
57. Craik, F.I.M., Tulving, E.: Depth of processing and the retention of words in episodic memory. In: Cognitive Psychology: Key Readings. Psychology Press, New York (2004)
58. Kirwan, B., Ainsworth, L.K.: A Guide to Task Analysis. Taylor & Francis (1992)
59. Endsley, M.R., Bolté, B., Jones, D.G.: Designing for Situation Awareness: An Approach to User-Centered Design. Taylor & Francis, London (2003)
60. Ververs, P.M., Wickens, C.D.: Head-up displays: effect of clutter, display intensity, and display location on pilot performance. Int. J. Aviat. Psychol. 8(4), 377–403 (1998)
61. Yeh, M., Merlo, J.L., Wickens, C.D., Brandenburg, D.L.: Head up versus head down: the costs of imprecision, unreliability, and visual clutter on cue effectiveness for display signaling. Hum. Factors. 45(3), 390–408 (2003)
62. John, M.S., Manes, D.I., Smallman, H.S., Feher, B.A., Morrison, J.G.: Heuristic automation for decluttering tactical displays. In: Proceedings of the Human Factors and Ergonomics Society 48th Annual Meeting. Human Factors and Ergonomics Society, Santa Monica (2004)
63. Kroft, P., Wickens, C.D.: Displaying multi-domain graphical database information: an evaluation of scanning, clutter, display size, and user activity: information design for air transport. Inf. Des. J. 11(1), 44–52 (2002)
64. Ruffner, J.W., Lohrenz, M.C., Trenchard, M.E.: Human factors issues in advanced moving-map systems. J. Navig. 53(01), 114–123 (2000)
65. Lutters, W., Ackerman, M.: Achieving Safety: A Field Study of Boundary Objects in Aircraft Technical Support. Paper presented at the CSCW '02, New Orleans, LA (2002)
66. Starr, S.L.J.R.G.: Institutional ecology, "translations," and boundary objects: amateurs and professionals in Berkeley's Museum of Vertebrate Zoology. Soc. Stud. Sci. 19, 387–420 (1989)
67. Shannon, C.E.: Communication in the presence of noise. Proc. IEEE. 72(9), 1192–1201 (1984)
68. Senders, J.W.: The human operator as a monitor and controller of multidegree of freedom systems. In: Moray, N. (ed.) Ergonomics: Major Writings. Taylor & Francis, New York (2005)
69. Cowan, N.: The magical number 4 in short-term memory: a reconsideration of mental storage capacity. Behav. Brain Sci. 6, 21–41 (2000)
70. Miller, G.A.: The magical number seven, plus or minus two: some limits on our capacity for processing information. Psychol. Rev. 63, 81–97 (1956)
71. Koffka, K.: Principles of Gestalt Psychology. Routledge, Abingdon (1935)
72. McCormick, E.J., Sanders, M.S.: Human Factors in Engineering and Design. McGraw-Hill, New York (1982)
73. Fitts, P.M., Jones, R.E., Milton, J.L.: Eye movements of aircraft pilots during instrument-landing approaches. In: Moray, N. (ed.) Ergonomics: Major Writings. Taylor & Francis, New York (2005)
74. Horrey, W.J., Wickens, C.D., Consalus, K.P.: Modeling drivers' visual attention allocation while interacting with in-vehicle technologies. J. Exp. Psychol. Appl. 12(2), 67–78 (2006)
75. Recarte, M.A., Nunes, L.M.: Mental workload while driving: effects on visual search, discrimination, and decision making. J. Exp. Psychol. Appl. 9(2), 119–137 (2003)
76. Rantanen, E.M., Goldberg, J.H.: The effect of mental workload on the visual field size and shape. Ergonomics. 42(6), 816–834 (1999)
77. Minaie, A., Neeley, J.D., Brewer, N.E., Sanati-Mehrizy, R.: Haptics in Aviation. Paper presented at 2021 ASEE Virtual Annual Conference Content Access, Virtual Conference (2021, July)
78. Tan, H.Z., Gray, R., Young, J.J., Traylor, R.: A haptic back display for attentional and directional cueing. Haptics-e. 3(1), 1–20 (2003)
79. Rasmussen, J.: Skills, rules, and knowledge; signals, signs, and symbols, and other distinctions in human performance models. In: System Design for Human Interaction, pp. 291–300 (1987)
80. Hancock, P.A., Williams, G., Manning, C.M.: Influence of task demand characteristics on workload and performance. Int. J. Aviat. Psychol. 5(1), 63–86 (1995)
81. Kroemer, K.H.: Avoiding cumulative trauma disorders in shops and offices. Am. Ind. Hyg. Assoc. J. 53(9), 596–604 (1992)
82. Kroemer, H.B.: Ergonomics: How to Design for Ease and Efficiency. Prentice Hall, Englewood Cliffs (1994)
83. Lehto, M.R., Landry, S.J.: Introduction to Human Factors and Ergonomics for Engineers, 2nd edn. CRC Press, Boca Raton (2013)
84. Blackwell, H.R.: Development and use of a quantitative method for specification of interior illuminating levels on the basis of performance data. Illum. Eng. 54, 317–353 (1959)
85. Hemenger, R.P.: Intraocular light scatter in normal vision loss with age. Appl. Opt. 23, 1972–1974 (1984)
86. Edworthy, J., Adams, A.: Warning Design: A Research Prospective. Taylor & Francis, London (1996)
87. Weiss, P., Weichslgartner, A., Reimann, F., Steinhorst, S.: Fail-operational automotive software design using agent-based graceful degradation. In: 2020 Design, Automation & Test in Europe Conference & Exhibition, pp. 1169–1174 (2020). https://doi.org/10.23919/DATE48585.2020.9116322
88. Wang, J., Tepfenhard, W.: Formal Methods in Computer Science. CRC Press, Boca Raton (2019)
89. Tockey, S.: How to Engineer Software: A Model Based Approach. IEEE Press, Piscataway (2019)
90. Bowen, J.P., Hinchey, M.G.: Seven more myths of formal methods. Softw. IEEE. 12(4), 34–41 (1995)
91. Leveson, N.G.: Safeware: System Safety and Computers. Addison-Wesley, Reading (1995)
92. Federal Aviation Administration: Certification of Transport Airplane Mechanical Systems (AC 25-22). Washington, DC (2000)
93. Tang, J.: Conflict detection and resolution for civil aviation: a literature survey. IEEE Aerosp. Electron. Syst. Mag. 34, 20–35 (2019). https://doi.org/10.1109/MAES.2019.2914986
Web Resources
Airbus: http://www.airbus.com/
American Institute of Aeronautics and Astronautics: http://www.aiaa.org
Boeing commercial airplanes: http://www.boeing.com/commercial/
Federal Aviation Administration: http://www.faa.gov/
Honeywell labs: https://www.honeywell.com/sites/htsl/
Single European Sky: http://www.eurocontrol.be/sesar
NASA aeronautics: http://www.aeronautics.nasa.gov/
US Joint Planning and Development office: http://www.jpdo.gov/
Steven J. Landry Ph.D., is Professor and Peter and ADP Chair and Department Head in the Harold & Inge Marcus Department of Industrial and Manufacturing Engineering at the Pennsylvania State University. He was previously a faculty member, associate head, and interim head in the School of Industrial Engineering at Purdue University, with a courtesy appointment in the School of Aeronautics and Astronautics. Dr. Landry has conducted research and published in air transportation systems engineering and human factors and has taught undergraduate and graduate courses in human factors, statistics, and industrial engineering. Prior to joining the faculty at Purdue, Dr. Landry was an aeronautics engineer for NASA at the Ames Research Center. Dr. Landry was also previously a C-141B aircraft commander, instructor, and flight examiner with the U.S. Air Force with over 2500 heavy jet flying hours.
Bill Bihlman founded Aerolytics in 2012, a strategic management consultancy dedicated to aerospace materials, manufacturing, and the global supply chain. He started his career in 1995 at Raytheon Aircraft. He actively supports SAE International’s aerospace consensus standards for advanced materials and manufacturing. Bill received a BSME, MSME, and PhD in IE from Purdue University. He also has an MBA and MPA from Cornell University, is a licensed private pilot, and a life-long aviation enthusiast.
52 Space Exploration and Astronomy Automation
Barrett S. Caldwell
Contents
52.1 Scope and Background ..... 1139
52.2 Agents, Automation, and Autonomy ..... 1142
52.3 Functions and Constraints on Automation ..... 1144
52.4 Signals and Communications for Automation in Space Observation and Exploration ..... 1146
52.4.1 Radio Astronomy and Automation for Space Observation ..... 1146
52.4.2 Automation Requirements for Satellite Communications ..... 1147
52.5 Automation System Hardening, Protection, and Reliability ..... 1149
52.6 Multisystem Operations: Past, Present, and Future ..... 1150
52.6.1 Lunar Mission Coordination ..... 1150
52.6.2 Space Shuttle Human-Automation System Interactions ..... 1150
52.6.3 Distributed Astronomy and Human-Automation Interactions ..... 1151
52.6.4 Future Automation-Automation and Human-Automation Exploration ..... 1152
52.7 Additional Challenges and Concerns for Future Space Automation ..... 1153
52.7.1 Cybersecurity and Trusted Automation ..... 1153
52.7.2 Distributed Space-Based High-Performance Computing ..... 1154
52.7.3 Enhanced Awareness and "Projective Freshness" ..... 1155
52.8 Conclusion ..... 1155
References ..... 1156

Abstract
The history of automated systems operating in space environments extends approximately 75 years; practical automated tools to support astronomical observation have existed for nearly 200. Physical servomechanisms using timers and simple predeveloped rules have evolved to hardware (and, increasingly, software) capabilities with years of functioning on the Martian surface and traveling to the heliopause at the edge of interstellar space. Modern spaceflight operations represent an increasing capability to enable distributed coordination among multiple automation systems, complex communication networks, and multidisciplinary communities of scientists and engineers. This chapter addresses issues of automation and autonomy applied to software and hardware operations, as well as functions and constraints for communication, cooperation, and coordination. Examples of distributed spaceflight operations using a supervisory control paradigm include current human exploration activity, scientific communities conducting physics and planetary science study, and plans for advanced human-agent and human-human systems integration.

Keywords
Automation levels · Autonomy · Communication · Function allocation · Human-system integration · Mission operations · Supervisory control

B. S. Caldwell
Schools of Industrial Engineering and Aeronautics & Astronautics, Purdue University, West Lafayette, IN, USA
e-mail: [email protected]

52.1 Scope and Background
Images of the earliest galaxies from the Hubble Deep Field observations, of rover tracks and "selfie" images from robots on the Martian surface, or signals from 40-year-old missions traveling beyond the edge of our solar system, speak to the advances in automation and other technological advances in space observation (astronomy) and exploration. However, the transformations of astronomy and space exploration due to advances in automation date back nearly 200 years. This chapter addresses this broader context of automation in space exploration, including considerations of autonomy, distributed operations and task coordination,
and function allocation among human and hardware- and software-based agents. To clarify the historical timeline and scope of this chapter, a definitional foundation is required. Thomas Sheridan, a leading researcher in the area of design, operation, and control of electromechanical devices (a field known as "human supervisory control"), adapted a dictionary definition to describe automation as "automatically controlled operation of an apparatus, a process, or a system by mechanical or electronic devices that take the place of human organs of observation, decision, and effort" [1, p. 3]. A robot is any device (not just the humanoid-style creations of science fiction) that is designed to perform various movements, operations, or other functions through programmed activity; those that do so under human control, management, and supervision can be described as "telerobots" [1, p. 4]. This chapter will then focus on automation in the context of programmable electronic/mechanical devices that are designed and operated to enable, expand, or extend the scientific and engineering processes of spaceflight exploration and space observation. Using this definition as a starting point, perhaps one of the most transformative innovations in the history of astronomy dates back to the invention of the first mechanical clock drive for telescope operations by Joseph von Fraunhofer in 1824 [2, 3]. This clock drive, with principles still in use today known as the German equatorial mount (GEM), allowed an optical telescope to automatically compensate for the rotation of the Earth compared to an object in the telescope's focus. The GEM compensation used a clockwork mechanism that could be wound (as would be seen in other forms of watch mechanisms) and left to run for several hours. However, instead of the clockwork gears driving the movement of watch hands, the gears and counterweights enabled the smooth movement of the telescope itself (see Fig. 52.1). This compensation would permit the study of smaller astronomical objects that would otherwise transit the telescope's optics in a matter of a few minutes or less. In his discussion of the Fraunhofer telescope (which was delivered to him in November 1824), contemporary astronomer Wilhelm Struve remarked on the mechanical ease and precision of observing phenomena such as binary stars using the GEM, even with a magnification of up to 700×, with a stable observation within the center of the telescope's field of view and measurements of even smaller deviations between stellar objects [4, 5]. In essence, then, the GEM permitted astronomers to spend much longer periods of time in the process of astronomical observation and data collection, rather than in the mechanical operations of telescope movement. This change in the allocation of the astronomer's time, from manipulation of the apparatus to the process of investigation, enabled new forms of optical physics, scientific study of stars, and determination of objects within and beyond the Milky Way [3, 4]. Astronomers such as
Fig. 52.1 Fraunhofer clock drive mount and weights. (Image copyright Deutsches Museum; used with permission)
Struve could now conduct observations with integer multipliers of precision and even measure previously unmeasurable variables, such as the distance of a star from Earth and the absorption of light in the interstellar medium [5]. While the GEM represents a transformative technology innovation in nineteenth-century space science automation, among the primary transformative innovations for twentieth-century space exploration automation would be the electronic computer. Aerial observations of both Earth and space had been conducted since the 1850s via balloons and then airplanes. The first photographs of the Earth from space were made from ballistic missiles (incidentally, also using clock-based mechanisms that opened the camera shutter at predetermined times for specific intervals). As has been made famous in the American book and movie, Hidden Figures, the trajectory calculations of early American human spaceflight missions were conducted by female "human computers," primarily Katherine Johnson (see Fig. 52.2), to determine reentry angles and locations [6]. Although electronic digital computers had developed sufficient calculation speed and storage capacity to numerically evaluate orbital trajectories by 1962, Mercury astronauts were not yet convinced of their validity. Programming of new IBM computers was also
Fig. 52.2 Circa 1960 image of Katherine Johnson, mathematician who helped develop algorithms and confirm computer calculations to support Mercury spaceflight missions. (Image courtesy of NASA)
the responsibility of the female computing team, led by Dorothy Vaughan [7, 8]. As a Black female, Johnson not only codeveloped the mathematical equations of orbital dynamics for determining, predicting, and validating the location of a satellite with respect to the Earth [9], but also helped confirm the calculations of the early digital electronic computers. Again, the role of automation (in this case, the automatic calculation of an object's position, velocity, and orientation in space) enabled and extended the capability of spaceflight exploration (representing both human missions and telerobotic satellites and landing systems) to emphasize increasingly complex and ambitious exploration missions. Increasing precision of both position determination and sensor-based signal detection has enabled engineers and scientists to continue to track the progress and identify the signals from the Voyager deep space missions (see Fig. 52.3). Launched in 1977, Voyager 1 and 2 followed complex trajectories that included Voyager 2 flyby observations of Jupiter, Saturn, Uranus, and Neptune (with many photographic and other sensor data transmitted back to Earth) from 1979 through 1986. Although the general forms of such complex gravity-assist missions can be calculated by humans, actual trajectory modification commands and detection of satellite signals (now in fractions of femtowatts) require complex computer-based controls of antennas and the spacecraft itself over the nearly 20 billion km or more distance to either Voyager spacecraft (still in two-way contact with Earth as of November 2022) [10–12]. Similar to the scientific advances enabled by the GEM, the automation systems onboard each Voyager, and controlling Earth-based antennas, allow for stable positioning for ongoing communications enabling command and scientific data exchange. The data being shared continue to refine and expand scientific understanding, such as the recent determination and isolation of the interstellar plasma "hum" beyond the heliopause [12].
Fig. 52.3 Artist’s depiction of Voyager spacecraft configuration with scientific instruments and antenna. (Image courtesy NASA)
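To give a sense of why signal detection at femtowatt scales and below demands automated antenna pointing and signal processing, the Friis transmission equation can be used for a back-of-the-envelope estimate. All parameter values in the sketch below are rough assumptions for a Voyager-class X-band downlink, not mission data, and the result depends strongly on the assumed gains, transmit power, and distance.

```python
import math

# Rough, illustrative deep-space link budget. Every value here is an assumed,
# approximate figure for the sketch; none is taken from mission documentation.
P_TX_W = 23.0      # spacecraft transmitter power, watts (assumed)
G_TX_DBI = 48.0    # spacecraft high-gain antenna gain, dBi (assumed)
G_RX_DBI = 74.0    # large ground antenna gain, dBi (assumed)
FREQ_HZ = 8.4e9    # X-band downlink frequency (assumed)
DIST_M = 2.0e13    # roughly 20 billion km

wavelength_m = 3.0e8 / FREQ_HZ
# Friis transmission equation: Pr = Pt * Gt * Gr * (lambda / (4 * pi * d))**2
p_rx_w = (P_TX_W
          * 10 ** (G_TX_DBI / 10)
          * 10 ** (G_RX_DBI / 10)
          * (wavelength_m / (4 * math.pi * DIST_M)) ** 2)
print(f"received power ~ {p_rx_w:.1e} W")  # on the order of 1e-18 W with these
                                           # assumptions, a tiny fraction of a femtowatt
```

With numbers this small, thermal noise, pointing error, and frequency drift must all be managed continuously by automated systems rather than by manual operation.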
Perhaps a more popular representative example of automation in space exploration is found in the examples of US robotic rovers on Mars. The two Mars Exploration Rovers (MERs), Spirit and Opportunity, were designed to combine imaging and scientific data collection capabilities on a mobile platform capable of limited self-navigation and teleoperation from Earth-based science and engineering teams. Originally designed for 90-day missions, both MER platforms were able to continue their mobile operations for over 2200 Martian days (sols), while traversing 7.7 km (Spirit) and 45.2 km (Opportunity) on the Martian surface. This duration of operations unexpectedly enabled additional development of autonomous operations capabilities based on revised and updated onboard software programming. Although software capabilities for MER autonomous operations were upgraded from earlier robotic missions [13, 14], MER performance could also be improved for the same platform after operational experience and additional scientific and engineering knowledge about the vehicles’ own automation performance over time [14]. The MER program demonstrated an evolving model of human-automation interaction and distributed task coordination with remotely operated automation systems, an issue that will be addressed in additional detail in Sects. 52.2 and 52.5. As of May 15, 2021, a unique milestone has arrived in space exploration: three satellites from different nations are simultaneously in orbit around Mars: the US Mars Reconnaissance Orbiter (MRO), the United Arab Emirates “Hope” orbiter, and the Chinese Tianwen (“Questions to Heaven”) orbiter. Additionally, the Perseverance rover has been successfully joined on the Martian surface by the Zhurong (“God of Fire”) lander/rover launched by China – only the third nation to achieve the automation feat of a soft landing on Mars. Each nation’s orbiter is capable of unique automated observations of Mars’ surface and atmospheric conditions, relaying communications data and telemetry, or imaging, both natural features of Mars, and automation systems
themselves [15, 16]. Additional considerations of information sharing and trust among these types of distributed automation systems are addressed in Sect. 52.7. This excessively, but necessarily, brief history provides an introduction to major considerations in the design and application of automation systems in space science and engineering contexts, which are taken up in the following sections of this chapter. Section 52.2 will address and provide distinctions between the concepts of automation and autonomy, in both engineered and human agents. Section 52.3 examines in more detail the concept of abstraction hierarchies and how these influence the design and function of particular space automation subsystems. Section 52.4 addresses communication network requirements for spacecraft information flows that enable space exploration. Section 52.5 considers the need for hardened design, system robustness, and unique risks for automation in the space environment. Section 52.6 explores previous and current examples of collaboration, information and knowledge sharing among multiple agents (both human and automation) and space science systems within a single organization, as well as needs for future coordination and human-system integration for next-generation space exploration automation. Additional questions of distributed automation, including considerations of security, trust, and appropriate information exchange between multiple organizations or nations, are addressed in Sect. 52.7.
52.2 Agents, Automation, and Autonomy
The variety of scientific explorations and observations over the past two centuries certainly precludes a comprehensive discussion (or even listing) of the advances in automation technologies that have enhanced, expanded, or extended human scientific study of space and human-built objects on other planetary bodies. A more critical consideration is that the few examples provided above help to illuminate the range of hardware functions and software operations that would have been unimagined in the 1820s. It is also essential to point out that automation, in either hardware or software, does not exclude human interactions in various aspects of planning, coordination, and scientific interpretation of observations collected by automated systems. Despite the differences in physical components and operational capabilities, there is a clear functional continuity seen in automation advances from the Fraunhofer clock drive to the Hubble Deep Field imaging and Voyager communication systems. Each system provides increased precision and stability of positioning beyond what would be feasible for unassisted human scientists, allowing them to focus increasingly on sensemaking and interpretation of astronomical observations, rather than routine mechanical tasks. This
concept of function allocation is at the heart of the design and implementation of both hardware and software automation systems [1, 17]. Additional definitions and explanations of function allocation and operations, including distinctions of automation and autonomy of hardware and software, will be addressed in this section as well. Scientific exploration and observation are essentially human endeavors; we also must acknowledge that pure scientific discovery is not the only goal of a complex astronomical or spaceflight mission. Regardless of the relative emphases of these scientific and other goals, the critical issue is that any automation design is intended to interact with and support human priorities, from initial conceptualization through mission execution and designations of successful achievement. Any human-automation function allocation is philosophically, as well as practically, grounded in the resolution of the question, “Who does what, when, in order to achieve our desired goals for this task or activity?” Although this question can be crafted in purely engineering design terms, the questions of mission goals and priorities relate to broader human objectives. Another consideration to address is one of measures of success: at what point do we consider a task complete and by what criteria? This consideration requires that designers and operators consider the boundaries of the human-automation system and how that system functions (for the purposes of this chapter) in the context of the scientific and exploration enterprise. Automation may be carried out by purely physical devices (such as a camera shutter that opens according to a gearing-based timer mechanism) or by physical devices monitored and controlled by software programs (such as a rover programmed to enter a “power down” phase when battery storage or charging rates fall below a certain level). Further, purely software operations may allow programs to conduct automatic analysis of data to perform particular functions as the relevant events occur (such as image analysis to allow a rover to identify and reference a physical feature by appearance or even detect a temporarily appearing object such as a “dust devil” and take a picture of it). All of these are elements of space observation and exploration automation systems (and, in fact, are all capabilities onboard the Mars Exploration Rovers Opportunity and Spirit by January 2007 [14]). Researchers and technicians may devote significant attention to such hardware/software distinctions, and the allocation of functions between hardware and software, which may also be considered aspects of cyber-physical systems applied to spaceflight [18, 19]. Within the spaceflight vehicle, we may refer to the software programs that conduct complex analysis, evaluation, or event-based actions as agents, to distinguish these features from purely physical or deterministic rule-based actions. However, it is important to acknowledge that these concepts of agents do not replace the elements of human-automation function allocation but do allow a greater
range of automation-based activities to be conducted without direct human control. One source of confusion among the general public, and even among researchers and practitioners, is that several competing terms exist to define, describe, and document what automation is, what it does, and how it performs in the context of the scientific mission. These terms overlap with, but should be distinguished from, concepts of autonomy, which is a functional question of independence of (human or automation) operations (and will be addressed later in this section). A variety of considerations exist for allocating mission functions to human task performers versus engineered capabilities of hardware or software automation. One consideration is an ethical as well as technical one, known as the "dirty, dull or dangerous" criterion [20]: allocating mission tasks to automation when the task would be too difficult or risky for the human operator to perform safely or too monotonous for a human to perform accurately and effectively over long periods of time. The increased tracking precision of the GEM drive, and the hours-long exposures of the Hubble Extreme Deep Field, are examples of the major contributions of hardware pointing and timing automation to astronomical observation. In order for an engineering team to identify what systems to build, a required initial design element of function allocation is the determination of mission requirements and operational criteria for automation systems [21]. Sheridan's discussion of human supervisory control [1] emphasizes that phases of automation design and execution must include the following five elements:
• Plan: developing operational goals and priorities for task completion and how those tasks should be completed.
• Teach: determining how the automation system will perform these tasks and enabling these operational performance capabilities in hardware and software.
• Monitor: observing the ongoing performance of the automation.
• Intervene: interrupting and modifying automation operations if there are problems with automation performance, or changes in task priorities, in order to reduce adverse performance outcomes.
• Learn: improving system operations based on previous mission performance; automation may be updated or revised in order to improve achievement of mission goals [1, p. 88].
A meta-analysis of engineering and human factors studies regarding the role of automation in achieving mission goals has distinguished elements of both stages and levels of automation [22, 23]. The consideration of automation stages addresses whether automation systems are involved
in (a) information acquisition and filtering; (b) information integration and analysis; (c) action selection (deciding which physical action to execute); and (d) action execution and task implementation. As described above, the inclusion of software agents in cyber-physical automation systems allows those agents to perform more actions across the stages of automation to conduct science or perform necessary engineering control functions. Various authors and research domains address levels of automation in different ways and with different amounts of gradation; most of these definitions address the allocation of tasks to the automated hardware and software functions, rather than those of the human operator. Regardless of the intervals between levels, the general pattern is to consider lower levels of automation where the human manages most taskrelated control activities and the highest levels of automation encompassing automatic control and decision-making of all task-critical activity and even emergency response or task replanning as needed. Sheridan’s [1] list included ten levels (ironically, and confusingly, labeled as levels of autonomy [sic]); other models include five to seven levels [24, 25]. The five-level model is, as of 2020, highly visible in discussions of autonomous driving systems, with the lowest automation control level (L1) involving limited and purpose-specific automatic control of one or more operational functions and the highest level (L5) indicating that the vehicle can be operated without a driver [24]. In the space exploration and operations context, Fraunhofer’s GEM clock drive should be classified as L1 automation, supporting the controlled movement of the telescope and leaving all other observational tasks and control decisions to the astronomer. Satellites and fully automated landers and rovers which do not require commands to be sent from ground control stations in order to complete their tasks qualify as L5 missions. Mission constraints, such as the hostility of the operational setting and distance from Earth, often force engineers to create L5 functionality. One of the best and most successful examples of this type of L5 automation would be the series of Soviet Venera missions, including Venera 9 and 10 (which landed on the surface of Venus and sent back black and white photographs) and Venera 13 and 14 (which sent the first color photos of the Venusian surface in human history) [26, 27]. Given the temperature (over 450 ◦ C), pressure (nearly 90 times the atmospheric pressure at earth sea level), and chemical composition of the surface of Venus, along with transmission delays of 2–15 min depending on Earth-Venus distance, human-controlled operations of photography, airborne sampling, and even core drilling become infeasible. The longest total surface operations period of Venera 13 was approximately 2 h, exceeding its planned 30-min operational life. Fully automatic L5 operations were required to enable the successful mission completion of the Venera landers
(using rudimentary hardware- and software-based detection and response capabilities). While frequently used interchangeably with the term automation, the concept of autonomy is actually a distinct challenge when designing automation functions. Complex engineering hardware and sophisticated software agents increase the capability, precision, or operational range of a space exploration or observation platform. Autonomy, more broadly, addresses deeper questions of operational independence, particularly in the determination of answers to the following questions:
• Autonomy from whom?
• Autonomy to do what?
For instance, human astronauts conducting spaceflight activities are often highly procedure-driven. When faced with an anomaly during spaceflight tasks on the International Space Station, for instance, they may pause that task and wait for additional instructions from the ground-based mission control center, rather than perform autonomously. The Mars Exploration Rovers, by contrast, did develop additional capability to autonomously take pictures and calibrate their actual distance travelled, but they did not independently select or vote on which scientific destination should be of highest priority. An automated hardware system, or software data processing algorithm, may be able to use onboard computing capabilities to resolve questions of what, or how, but not why, particularly in the human scientific or societal context. Therefore, the concept of higher levels of automation takes on a new meaning in the sense of autonomous operations: a fully autonomous vehicle, whether it is on a highway on Earth or a mare on the Moon, would conceivably have the capability and authority to determine its own actions (and reject those of the human controller) within an increasing scope of decision-making and task execution. Theoretically, a completely autonomous system – currently a science fiction construction – would not even be guaranteed to perform the science missions or engineering control tasks for which it was originally constructed and programmed. It is not logically impossible that an actively evolving deep learning software agent might determine that the experiment, observation, or traverse requested of it should not be completed. This level of autonomy becomes problematic. Full autonomy may be described as identical to full automation of a robotic vehicle to execute a specific task, for instance, driving an incapacitated person to their home/habitat without the need for a local driver or remote human controller; however, in this instance the vehicle is still performing the mission-relevant function expected of it, subject to human-determined goals and priorities. An automation system that might decide to conduct different science than the mission planners, flight controllers, and domain scientists intend might be demonstrating "higher" levels of autonomy but lower operational reliability with respect to completing actual mission goals. In summary, the process of function and task allocation between human scientists and engineers, and the spaceflight automation systems, is a multidimensional one. Some authors do address the elements of cognitive decision-making and process determination as distinct from physical execution [25], highlighting that automation may occur at different levels within the various phases of Sheridan's human supervisory control paradigm. Table 52.1 helps to illustrate these distinctions, with a few representative examples.
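The contrast drawn in this section between deterministic, rule-based automation and more capable software agents can be made concrete with a small sketch: a threshold rule of the kind that might put a vehicle into a protective power-down state (compare the safe-mode example in Table 52.1). The field names and limits below are invented purely for illustration.

```python
# Purely illustrative threshold rule, in the spirit of an L2/L3 "safe mode"
# entry; the telemetry fields and limits are invented for this sketch.
def check_power_rule(battery_soc: float, charge_rate_w: float,
                     soc_floor: float = 0.25, min_charge_w: float = 5.0) -> str:
    """Return the commanded mode given battery state of charge (0-1) and the
    current charging rate. A deterministic rule, not a learning or planning agent."""
    if battery_soc < soc_floor and charge_rate_w < min_charge_w:
        return "SAFE_MODE"   # suspend science tasks and await ground contact
    return "NOMINAL"

print(check_power_rule(battery_soc=0.18, charge_rate_w=2.0))  # -> SAFE_MODE
```

A software agent, by contrast, would evaluate context (available communication windows, pending science priorities, predicted power generation) before selecting among several possible responses, rather than applying a single fixed rule.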
52.3 Functions and Constraints on Automation
The treatment of "function allocation" given above is a highly simplified one. Advanced space observations or spaceflight operations might be straightforward, consisting of a single objective and a fixed and consistent set of automation goals (e.g., point the Hubble Space Telescope at a single location in the sky and observe that location for over 100 h, as in the original Hubble Deep Field imaging). However, many science and exploration goals are multifaceted and dynamic, requiring an understanding of tasks, operations, and interactions of multiple automation systems and human priorities over time and event cycles [28]. Appropriate execution of these mission goals and interacting requirements must be considered early in the design phase of these space science and engineering missions. One design approach to anticipating and incorporating multiple automation demands and interaction considerations is known as an abstraction hierarchy [29, 30]. The power of an abstraction hierarchy lies in its ability to define, not just conceptually but in its mapping to specific pieces of equipment, operational responses to the general questions of who does what, when, and who has autonomy from whom to do what, during each aspect of mission operations. As defined by Rasmussen [29], the abstraction hierarchy has five layers of general-to-specific definition:
• System purpose (what scientific or operational goals is this observation or exploration mission intended to achieve)
• Abstract function (conceptual definitions of information gained, commands exchanged, and resources required to achieve purpose)
• Generic function (allocation of roles to send and receive information, or complete tasks, within the specific context of the mission, and in the specific operating conditions anticipated)
Table 52.1 Matrix of supervisory control phases of automation by levels of automation, with representative space observation and exploration examples in this chapter
Supervisory control phases:
Plan: Developing task goals
Teach: Creating operational rules for task performance
Monitor: Observe automation performance
Intervene: Interrupt task performance if problems arise or goals/priorities shift
Learn: Improve system performance and update capabilities of automation
Levels of automation:
L0: Full human control
L1: Function specific automation
L2: Automation control over multiple critical functions
L3: Automation monitoring of environmental conditions
L4: Human input limited to general purposes and goals
L5: Full automation, no human operations required
Representative examples: selecting region of sky for visual observation; pause and hold rules if rover approaches an obstacle while traversing; development of improved telescope clock drive; algorithms for detecting, recording, and alerting researchers for celestial transients; reinterpret stellar death hypothesis from telescope array data; automated vehicle safe mode or shutdown procedures; routine satellite engineering status checks and acknowledgements; new machine learning rules for detecting and automatic photography of atmospheric transients; automatic data collection and continuous transmission from Venus lander
Automatic data collection and continuous transmission from Venus lander
• Physical function (distributions of functions to specific components, with associated software routines, to complete necessary communications, operations, or analysis tasks) • Physical form (specification of component design requirements, such as power, structural integrity, use of electromagnetic spectrum, and local operating conditions/functional tolerances) The physical functions and forms in Rasmussen’s abstraction hierarchy map most directly to the older forms of automation described in this chapter, with replication of known physical tasks to mechanical capabilities operating precisely, continuously, and without need for regular human intervention. Specific teleoperations functions (such as pointing of a telescope or initiation of engine burns) by groundor vehicle-based humans also represent the lower levels of automation defined earlier as L0 or L1. Increased automation autonomy is associated with increased variability of when, or under what conditions, a specific mission purpose may be achieved. Software analysis of vehicle conditions and autonomous hardware execution allows the translation of abstract and generic function into the relevant physical forms without an immediate involvement of a human operator, thus defining automation L2–L3 capabilities.
In order to achieve true L4 or L5 capability, the remote space vehicle would be required to autonomously assess and redefine the mission goals and priorities defined by the system purpose abstraction level. No operational space observation or exploration mission in place as of November 2022 can demonstrate such dynamic repurposing. This is not to take away from such impressive achievements as the simultaneous automated landings of two SpaceX launch boosters, the capability of a robot to avoid obstacles or take pictures of atmospheric anomalies (“dust devils”) while continuing a traverse on Mars, or even detecting an onboard anomaly requiring an update of engineering software. These achievements all represent increasing onboard software processing and precise hardware control to monitor and respond within the constraints of physical functions and forms and the local operational context of the flight or mission environment. One of the most important elements of this facet of the abstraction hierarchy is the difference between hardware automation systems that are fixed at mission launch (if not within the reach of human repair or replacement missions, such as those servicing the Hubble Space Telescope) and programmable software that can be revised and uploaded to the space vehicle during mission operations. These software upgrades represent an aspect of abstract function revision of the space vehicle, as existing physical forms may be
1146
B. S. Caldwell
used in novel ways based on upgraded software capabilities for executing commands of those physical components or improved or modified analysis of data received from them. The human supervisory control “loops” of scientific and engineering data analysis and interpretation represent a different domain of “function allocation” between different science and engineering specialists with distinct domains of expertise [28, 31, 32]. Operational constraints defined by hardware physical forms, as well as environmental restrictions defined by communication transmission delays or windows of availability, therefore have their own mechanisms to limit the realm of system purpose redefinitions available during the mission. For instance, debates between scientists and engineers for Mars rover destinations and analysis priorities are constrained by the available communications windows to send system updates to the rover, as well as the engineering conditions (such as power) and fixed instrumentation analysis capabilities (which may also have been degraded over the mission lifetime). Thus, functional hierarchy decisions made during mission design will continue to affect (and influence feedback control loops of) automation capability and function throughout the life of the mission. As missions increase in complexity and duration, it becomes infeasible to create new operational infrastructure for each new mission. This is especially true when network and command interoperability provides increased functionality even beyond initial mission expectations (such as the use of the Mars Reconnaissance Orbiter (MRO)) to provide common communications relay and location coordination between the MER and Curiosity rovers. Thus, “optimization” of a new mission capability may rely on its ability to take advantage of existing physical forms and functions, rather than creating new forms that might provide marginal incremental benefits but lose the capacity for benefits built upon prior successful instantiations of software, hardware, or task allocation forms. The Chinese Tianwen/Zhurong mission, with the Tianwen orbiter hosting the Zhurong lander/rover in orbit for multiple months prior to Zhurong separation and touchdown on May 15, 2021, provides an example of integrated physical forms in a single mission infrastructure [15]. Additional discussion of these interoperability considerations is provided in Sects. 52.6 and 52.7 below.
52.4
Signals and Communications for Automation in Space Observation and Exploration

Perhaps one of the most iconic twentieth- and twenty-first-century symbols of space exploration and observation is the parabolic dish, sending and receiving signals in the electromagnetic spectrum. Automation in this domain of space exploration and observation should be separated into two types: astronomical observational capabilities outside of the visual spectrum and systems for spaceflight communication. Although a detailed discussion of the physics of the electromagnetic spectrum, and its relationships to scientific studies of celestial phenomena, is out of the scope of this chapter, a brief history of automation applied to space-related signal detection and processing may be of value.

52.4.1 Radio Astronomy and Automation for Space Observation

Radio astronomy dates to the early 1930s, with a number of intentional as well as serendipitous observations and discoveries associated with scientific and engineering studies of radio and other electromagnetic systems [33]. By the 1950s and 1960s, use of electronic sensors for detecting signals in visual frequencies, as well as other areas of the electromagnetic spectrum, had become commonplace. However, with the development of radio astronomy receiver (“radio telescope”) arrays, with large numbers of radio frequency channels and coordination among multiple receivers, the combination of hardware automation and digital computer processing has become essential in the collection and processing of astronomical data [34, 35], including data storage and various levels of signal processing. Thus, radio astronomy has now adopted widespread use of both hardware automation and software agent processing of the very large volume of astronomical signals that can be detected with earthbound receivers, as well as the even larger spectrum of signals available to space-based observation platforms, where atmospheric attenuation is not a primary limitation to signal processing (see Figs. 52.4 and 52.5 for examples of radio astronomy receivers on Earth and in space).
Fig. 52.4 Image of four 7-m radio telescopes of the Atacama Compact Array in Chile. (Image credit: ALMA (ESO/NAOJ/NRAO), W. Garnier (ALMA). Used under Creative Commons Attribution 4.0 International License)
Fig. 52.5 Engineering schematic of primary instruments for Chandra X-Ray Observatory satellite, with callouts for the sunshade door, aspect camera stray light shade, spacecraft module, solar arrays, high resolution mirror assembly (HRMA), optical bench, high resolution camera (HRC), thrusters (4), low gain antennas (2), integrated science instrument module (ISIM), and advanced CCD imaging spectrometer (ACIS). (Image courtesy of NASA/CXC)
The use of computers for servomechanism positioning, data filtering, and both real-time and later data storage and processing has shifted the challenge of function allocation to one of providing the human astronomer with assistance to help identify the scientific elements of interest [34, 35]. In fact, this aspect of human-automation integration was considered critical, even 50 years ago: “It seems to me that much of the care that the great optical observers of past decades put into the making of long exposures, must now be replaced by skillful consideration of the best way in which the output of the computer can be presented to the astronomer. A vital point here is that the data processing must not be so finely tuned that it can only present the expected answer. A too scientific optimization of the signal to noise ratio on a particular type of signal will often obscure the unexpected results that can lead to new discoveries” [35, p. 332].
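Davies' warning can be read as an argument for running a template-agnostic check alongside any finely tuned detector. The sketch below is a minimal illustration, not any observatory's actual pipeline: it computes both a matched-filter score for an expected narrow pulse and a broadband energy-excess score over the same synthetic record, so that an event which does not resemble the expected template can still be surfaced to the astronomer. The template shape, window size, and synthetic data are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic received record: Gaussian noise plus a broad, *unexpected* bump
# rather than the narrow pulse the tuned detector is looking for.
n = 2000
record = rng.normal(0.0, 1.0, n)
record += 3.0 * np.exp(-0.5 * ((np.arange(n) - 1200) / 60.0) ** 2)

# Detector 1: matched filter tuned to an expected narrow pulse template.
template = np.exp(-0.5 * (np.arange(-25, 26) / 3.0) ** 2)
template /= np.linalg.norm(template)
matched_score = np.convolve(record, template[::-1], mode="same")

# Detector 2: template-agnostic sliding-window energy excess, measured
# against a robust (median-based) baseline of the whole record.
window = 200
energy = np.convolve(record ** 2, np.ones(window) / window, mode="same")
baseline = np.median(energy)
spread = np.median(np.abs(energy - baseline)) + 1e-12

print("peak matched-filter score:", round(float(matched_score.max()), 2))
print("peak broadband excess (robust sigmas):",
      round(float((energy.max() - baseline) / spread), 2))
```

Presenting both scores to the human analyst is one simple way to avoid a pipeline that "can only present the expected answer."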
52.4.2 Automation Requirements for Satellite Communications

Any effort to communicate with, and process scientific data from, an automated system for space exploration requires the use of radio frequency transmissions as an essential component of the space vehicle. As described above, one of the triumphs of the Voyager missions is that, over 40 years later, messages of extremely low power can still be sent to and from the spacecraft, even with light speed transmission
time of over 18 h one way. Automation provides advantages of precise engineering capabilities for attitude control and positioning, signal frequency resolution, and data quality management to know exactly where to find the signal, exactly what format and frequency signal to detect, and exactly how to decode the signal once received. These communication capabilities are managed by several distinct spaceflight communication architectures using two-way radio transceiver systems tuned to particular radio frequencies. Two major systems managed by NASA operate to provide communications with satellites and spacecraft (the Near Earth Space Network, or NESN) or to communicate with other satellites beyond the Earth-Moon system (the Deep Space Network, or DSN). Due to the rotation of the Earth, spacecraft communications are often managed by coordinated automation between ground-based transmission systems, relay networks, and “far point” robotic systems on satellites, landers, or rovers [32, 36]. Coordinated automation-automation interactions, with human involvement regarding mission planning, monitoring of science mission achievement progress, and intervention/replanning based on events (or, in cases such as Voyager or Mars Exploration Rovers, extended mission durations permitting the addition of new software to enable supplemental mission goals), allow a human supervisory control paradigm to be extended far beyond the limited direct human experience of spaceflight destinations. Human supervisory mission operations of multiple remote spacecraft are accomplished through a variety of Earth-based telecommunications stations, depending on the trajectories of the spacecraft being controlled. A census of all
currently operating spacecraft (as of December 31, 2020) is of less value than an understanding of the multiple operational communications networks. NASA, as a primary manager of remote satellites, operates several networks that include critical ground-based and Earth-orbiting “space network” automation systems such as automated tracking and clock synchronization to permit highly robust telecommunications with national, commercial, and even university and student project nanosat payloads [37]. This space network of Earth-orbiting satellites (such as the Tracking and Data Relay Satellite, or TDRS, network) and ground stations allow for nearly continuous (reported uptime rates exceed 99.9%) linkages via communications in the microwave region of the electromagnetic spectrum known as K-band [38]. Within this general range of communications frequencies, amateur satellite communications are licensed in the range from approximately 24–24.25 GHz; other satellite communications occur from 12 to 18 and 26.5 to 40 GHz (as well as “S band” communications ranging from 1.98 to 2.2 GHz supporting Earth-space (uplink) and space-Earth (downlink) communications and 2.4 and 3.4 GHz for amateur satellite communications). Licensed and assigned communications frequencies also assist and support accurate tracking and monitoring of which satellites are within communications ranges at specific times and along specific trajectories. The TDRS network satellites operate in geosynchronous Earth orbit, to remain fixed over specific ground-based antenna locations. Their communications capabilities are “downward-facing,” as defined by geosynchronous orbits (approximately 22,200 miles, or 35,800 km) looking toward Earth. When extending beyond the region of the Earth, the above satellite networks are of relatively less importance for automated space observation and exploration. A distinct satellite network, known as the Deep Space Network (DSN) for NASA’s US more distant spaceflight missions, permits communications between human mission controllers, via ground stations, and the planetary satellites [37]. The distances to any satellites, landers, or rovers beyond the region of the Moon result in light transmission communications delays of over 2 s, which limits any type of real-time human command or control interactions with the space vehicle and places additional demands for increasing autonomy of automation operations on the remote vehicle to accommodate those delays [28]. The NASA Jet Propulsion Laboratory (JPL) conducts consolidated operations of US deep space mission oversight and communication with the DSN, using automation systems to distinguish the location and timing of specific vehicles for communications uplinks (commands sent from JPL to control software or robotic operations) and downlinks (acknowledgement of received commands, receipt of science data and vehicle status condition, or indications of any anomaly conditions requiring
Fig. 52.6 Artistic display of deep space communications at Jet Propulsion Laboratory. Apparent motion of lights represents “uplink” and “downlink” directions; number of lights represents signal bandwidth transmitted or received. (Image by author)
human troubleshooting and new command sequences). Communications with these vehicles must be able to function at relatively low power and within short communication periods, due to limited windows of availability to communicate with distant space vehicles; unique “displays” relating these communications can be seen in Fig. 52.6. Automation at the “far point” of the space vehicle itself must enable efficient decoding of electromagnetic signals into software code for control of scientific instruments or engineering functions and encoding of instrument and engineering status data into signals for low-power transmission back to Earth. In order to conserve power, and in recognition of the long transmission cycles, uplink and downlink signals are “batched,” with multiple signals for multiple functions sent or received in a single communications event. Included in the space vehicle’s automation systems, therefore, must be an autonomous function to prioritize which signals must be sent first, in situations of reduced transmission capability or anomalous conditions. Since no other communications can be shared with a vehicle which is not pointed correctly, accurate pointing to other relay satellites or the DSN, and engineering commands to enable and monitor spacecraft attitude to permit such pointing, must be an absolute priority for the remote vehicle, rivaled only by critical monitoring of imminent danger to vehicle health and operational status. It is for these reasons that the final communications received from remote space vehicles at the end of their functional mission are often either interrupted data streams (indicating catastrophic system failures during transmission) or “shutdown” status updates as the vehicle slowly loses energy or consumables sufficient to continue ongoing operations.
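The prioritization of batched signals described above can be illustrated with a small scheduling sketch. The priority classes, frame sizes, and link budget below are invented for the example and are not drawn from any actual flight software; the point is only that attitude and health frames are drained first when a pass is degraded, with bulk science data deferred to a later window.

```python
import heapq

# Hypothetical priority classes (lower number = more urgent); the chapter
# does not specify the actual flight-software message taxonomy.
PRIORITY = {"attitude_ack": 0, "vehicle_health": 1,
            "anomaly_report": 2, "engineering_status": 3, "science_data": 4}

def build_batch(pending, link_budget_bits):
    """Fill one batched downlink pass: most urgent frames first, skipping
    anything that no longer fits a reduced link budget."""
    heap = [(PRIORITY[kind], size, kind) for kind, size in pending]
    heapq.heapify(heap)
    batch, used = [], 0
    while heap:
        prio, size, kind = heapq.heappop(heap)
        if used + size <= link_budget_bits:
            batch.append(kind)
            used += size
    return batch, used

# A degraded pass where only ~100 kbit can be closed this window.
pending = [("science_data", 8_000_000), ("vehicle_health", 20_000),
           ("attitude_ack", 2_000), ("anomaly_report", 50_000)]
print(build_batch(pending, link_budget_bits=100_000))
```

Running the example returns only the attitude acknowledgement, health, and anomaly frames for the 100 kbit pass, leaving the large science frame queued for a later communications event.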
52.5
Automation System Hardening, Protection, and Reliability
Spaceflight represents significant and unique challenges for ensuring that automation systems operate reliably and predictably in hostile environments. Vibration, radiation, temperature, and other hazards all represent risks to mechanical, electrical, and software components in spaceflight automation, requiring a much more robust (“hardened”) design, prototype testing, and construction process than for most terrestrial automation systems. Size and efficiency of the vehicle also are strongly constrained by mission design and small engineering safety factors (frequently in the range of 5–10%, rather than integer multiples of safety factor found in material or stress limits for more mundane applications). The following section is only a brief summary of the types of hazards that can affect spacecraft mission success due to impacts on automation components.

While there has been little discussion of launch systems in this chapter, due to the focus on the operational and scientific emphasis of advanced rovers and interplanetary observational missions, every successful mission must begin with a launch. The launch vehicle is required to utilize multiple automation systems for the accurate timing and coordination needed to achieve a specific velocity and trajectory at a specific launch time. In order to achieve Earth escape velocity and successfully traverse the distances required, a launch vehicle subjects its payload to many times the normal force of gravity. Although this initial launch phase usually only lasts for a few minutes, multiple periods of acceleration and course correction also require automated firings of rocket engines. Such course correction opportunities are often programmed into the mission at predetermined stages of the flight, based on coordination and timing signals and sophisticated numerical analysis of the mission trajectory conducted by ground-based computing systems, and transmitted to the spacecraft so that correction updates are received prior to the expiration of the relevant timing window. Thus, any unexpected or anomalous automation failure (such as a throttle valve closing too early or late) can have catastrophic effects on mission success if spacecraft trajectory or orientation is affected and ground-based signals cannot be received to enable corrective actions.

Further travel beyond the region of low Earth orbit also exposes any space vehicle to significant electromagnetic radiation not experienced on Earth. The Van Allen radiation belts provide a protective insulation to electromechanical systems (as well as all others) on Earth, and only rare solar flare eruptions create electromagnetic pulses sufficient to cause damage to electrical operations. In fact, the most significant solar coronal mass ejection (CME) event recorded since humans began reliable use of electromagnetic devices occurred in September 1859 [39]. The Carrington Event,
as it has been named, caused significant destruction and interference with telegraph equipment across the Northern Hemisphere, and demonstrated the power of CME events to cause collapse of electromagnetically based automation infrastructures. Although a Carrington-level CME hitting Earth is estimated as a multimillennium-frequency event, even smaller levels of solar activity (most recently in 1989) can cause major disruptions, and a Carrington-level CME did miss Earth in 2012. These events, while relatively rare on Earth, do represent the levels of hostile space weather that spacecraft must endure [39]. Interplanetary space, the Moon, and Mars all are unprotected against these levels of radiation. In addition, effective shielding against more energetic forms of radiation (such as water or lead blanketing) would add significant and infeasible weight penalties to a spacecraft where every kilogram of payload mass is debated intensely among scientists and engineers.

Almost paradoxically, spacecraft that travel to other planetary surfaces, including several 2020 missions to asteroids, must deal with the environmental challenges of very low pressure (“hard”) vacuum, as well as dust and particulates that can interfere with mission operations. Reduced mobility and engineering performance of both Spirit and Opportunity Mars Exploration Rovers have been attributed to Mars dust (regolith) interference with drive systems. It is also suggested that the Apollo-era spacesuit and lunar rover technologies would not have endured much more than a few days’ surface operations due to the abrasive and destructive nature of the lunar regolith (E. Cernan, personal communication, 2014). The lunar surface provides a second paradoxical stressor to automation components: during the lunar day, surface temperatures can exceed 385 K (water boils at sea level on Earth at 373 K), while lunar night surface temperatures drop to below 90 K (methane freezes at 91 K) [40].

Of course, there have been no maintenance repairs of any spacecraft automation mechanical components beyond low Earth orbit during the Space Shuttle era. As a result, all spaceflight automation missions must be able to function in a no-maintenance configuration. Physical components are rigorously tested in both component-level and operational configurations, including radiation, temperature, vacuum, and vibration test chambers. Prototypes are also tested with hardware and software command function interactions in operational testbeds, within and beyond expected operational conditions (see Fig. 52.7). This conservative, no-maintenance design and construction process has allowed for unexpectedly rich and long-lasting automated system functionality, particularly for several Mars missions. Both the Spirit and Opportunity MER rovers far exceeded their initial planned mission durations of 90 sols (Martian days), enabling not only a rich collection of science data but also substantial distances traversed across the Martian surface and opportunities for testing out new software capabilities [13, 14, 41].
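A trivial way to see why qualification testing across such extremes matters is to compare notional component temperature ratings against the cited lunar day/night range. The component names and qualified ranges below are invented for illustration and do not describe any actual flight hardware.

```python
# Lunar surface extremes cited in the text: above 385 K by day, below 90 K at night.
LUNAR_DAY_K, LUNAR_NIGHT_K = 385.0, 90.0

# Hypothetical qualified operating ranges (min K, max K) for illustration only.
components = {
    "flight computer": (233.0, 358.0),
    "battery pack": (253.0, 318.0),
    "structural bracket": (80.0, 400.0),
}

for name, (t_min, t_max) in components.items():
    needs_heating = t_min > LUNAR_NIGHT_K   # survives the night only with heaters
    needs_cooling = t_max < LUNAR_DAY_K     # survives the day only with radiators or shades
    print(f"{name}: heater needed={needs_heating}, thermal rejection needed={needs_cooling}")
```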
Fig. 52.7 Mars InSight lander prototype construction and testing facility at NASA Jet Propulsion Laboratory. (Image by author)

Based on this experience of long-duration operations, software upgrades also became part of the operational sequence of events for the Curiosity rover [42]. The Mars Reconnaissance Orbiter (MRO) was originally designed to provide satellite mapping and relay communications for 4 years starting with orbital insertion in 2006. As of December 29, 2020, MRO was still providing these communications functions after 14.75 years, having returned nearly 400 TB of data. Ongoing MRO operations have also enabled remote updates of software functionality and the creation of an image training set to permit machine learning identification of previously undiscovered craters on Mars [43].

52.6
Multisystem Operations: Past, Present, and Future

As discussed above, the use of automation in space observation and exploration fulfills a fundamentally human purpose: to expand realms of knowledge and continue a fulfillment of human curiosity. The development of automated systems to achieve these purposes means that the automation is not an end in itself. Space-based telescopes and planetary rovers exist to perform tasks that humans have performed, with activities to extend and enhance human capabilities. Thus, it is expected that no single mission could “complete” the exploration process or obtain “all possible and relevant” scientific results. A resolution to the question of whether automation can or should replace all human space exploration is beyond the scope of this chapter. It is clear that a number of automation tasks, particularly in space astronomy (such as the Hubble Deep Field or Chandra X-Ray observations), extend astronomy capabilities into realms previously impossible with ground-based human observation. There are no feasible life support systems that could support a human on the 40-year, multi-billion mile traverses of the Voyager spacecraft (even if rockets powerful enough to launch a human crew on such a trajectory existed). No mechanical devices, let alone humans, can survive more than a few hours on the surface of Venus. However, it is equally clear that human problem-solving and automation-based data collection mutually enable each other over multiple missions and exploration contexts.

52.6.1 Lunar Mission Coordination
Both Soviet and American governments, during the 1960s and 1970s, launched multiple robotic missions to the Moon to conduct precision mapping, landing, sample collection and return, and environmental analysis to inform and expand the capabilities of a human lunar landing. In fact, the successful “pinpoint landing” of Apollo 12 was seen as the result of a robotic landing and mapping mission, Surveyor 1. However, it was not only the success of the Surveyor 1 mission that enabled this but also a human astronomer’s determination, from the returned images, that the presumed Surveyor 1 landing location was not accurate. That human determination resulted in greater understanding and precision of siting the Surveyor 3 landing location so that, later, Apollo 12 astronauts Alan Bean and Pete Conrad could walk only 200 m to its location [44]. Human astronauts on the Moon also enabled unique scientific discoveries through serendipitous “eyes-on” experience. One of the most significant of the Apollo lunar sample returns, the “Genesis rock,” was said to be noticed accidentally by an astronaut, who needed to create a “temporary anomaly” to permit time in the highly scripted exploration timeline to go and retrieve it (E. Cernan, personal communication, 2014). It is unlikely that automated surveying by, or continuous remote teleoperations of, a lunar rover would have identified this rock on the lunar surface.
52.6.2 Space Shuttle Human-Automation System Interactions

The human exploration capabilities of the Space Shuttle (STS) provide a critical demonstration of the interactions of human and automation capabilities over an extended period. (There is also an important object lesson in the development of the STS regarding the sensitivity of timing and mission coordination in the spaceflight arena. Although the STS program was originally planned as a mechanism for extending the life of the US Skylab space station, funding and planning inconsistencies led to delays that resulted in Skylab deorbiting and reentering Earth’s atmosphere before the first Shuttle
missions could be launched to serve a reboosting function.) The case of the Hubble Space Telescope also serves as a possible warning to those suggesting an automation-only approach to systems integration. Originally, the Hubble Telescope was designed for fully automated operations, without the possibility for human intervention for repair or upgrade. However, soon after launch, astronomers and engineers discovered that a design and construction error in the telescope had left a “spherical aberration” that prevented any significant scientific uses of the telescope [45]. Eventually, a complex mission was developed to retrieve, capture, and repair the telescope while using the robotic arm and human astronauts upon the STS. The success of this mission not only proved the capability of human astronauts to do significant work in space (while supported in a low-autonomy mode from flight controllers on the ground who assisted in relaying and revising task procedures to the astronauts on orbit) but enabled the Hubble to begin a period of exceptional scientific return [45]. Subsequent repair missions using STS vehicles and human astronaut crews also enabled further extensions of the space telescope’s life through replacement of worn out gyroscopes and other pieces of automation equipment. These examples provide strong support for the need for spaceflight mission designs to permit robust integration and reconfiguration of mission architectures and schedules to allow dynamic allocations of tasks and functions between humans and automated systems for space-based observation and exploration [46]. In addition, the development of networked information technology systems and software-based agents controlling space observation automation has enabled new generations of distributed and collaborative scientific investigations, such as those described in the next section.
52.6.3 Distributed Astronomy and Human-Automation Interactions

Examples of distributed collaborations among astronomers over the past 50 years are numerous, following the noted observation that the incorporation of automation into deep space visible or radio astronomy resulted in drastic increases in the amount of data, and the processing of those data, obtained from a particular period of observation [34, 35]. The prior edition of this Handbook included the discussion of the Sunfall project, which included a complex middleware architecture of software agents capable of image detection, processing, and remote control of the 2.2-m telescope operational in Hawai’i [47, 48]. The description of Sunfall graphical user interfaces and image detection/subtraction agents lays out a software version of the abstraction hierarchy of functions and forms described by Rasmussen, although “physical form” represents modules of code in a specific software language and data format, rather than pieces of mechanical hardware.
One result of programs such as Sunfall is the development of much larger research teams, with increasing collaboration between experts distributed in both location and subdiscipline, able to apply their distributed expertise to different contexts of the same raw observation data. This concept of distributed supervisory coordination represents an extension of the Sheridan model of human supervisory control [31] and is made possible only by the increased automation capabilities of remote software-based operation of the physical antenna or telescope hardware through shared tools and knowledge coordination [47]. Another result of this type of hardware/software/scientist collaboration is that additional disciplinary specialists – in this context, computer scientists rather than clockmakers – become essential participants in the scientific knowledge generation, interpretation, and dissemination process.

Fortuitously, the distributed remote operations capabilities of automated space observation platforms can also increase the access and learning opportunities for the next generation of scientific scholars. Multiple programs such as Sunfall exist for professional astronomers; major facilities such as the WIYN network also represent major university research infrastructure support. However, a similar remote operations architecture known as the Southeastern Association for Research in Astronomy (SARA) has also been set up by a consortium of universities and small colleges for undergraduate research and education in astronomy, using telescopes at Kitt Peak in Arizona, Cerro Tololo in Chile, and Roque de los Muchachos in the Canary Islands [49]. Full-time undergraduates can use Web-based interfaces to “travel” between these three facilities and conduct observations (including communications with local telescope operations staff) without the need to physically shift between telescope sites or spend hundreds of dollars only to arrive at a facility with poor visibility due to local weather.

The cover story of the December 2020 issue of Scientific American highlights how new generations of software automation and machine learning serve to evolve the science of astronomy. In this article, the author describes research that began when a “robotic telescope on its routine patrol of the night sky detected” a significant transient of stellar light, “triggering a flag by software I had written to identify unusual celestial events” [50, p. 28]. Although the author was asleep, the software program automatically alerted globally dispersed colleagues by email who then conducted additional observation and analysis. The results of the analysis began a new study (and a successful dissertation) helping to redefine the frequency and processes of stellar death [50]. Once again, the implementation of distributed software-managed automation, including information technologies, enables larger populations of emerging scientists to spend their time on the analysis and interpretation of their observations, rather than the mechanics of manipulating the telescope itself.
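The kind of nightly flagging described in that account can be sketched in a few lines. The catalog, magnitudes, and five-sigma-style threshold below are synthetic assumptions for illustration only, not the actual survey software; the essential pattern is a comparison of each new measurement against an archival reference, with any large departure queued for human follow-up.

```python
import numpy as np

rng = np.random.default_rng(1)

def nightly_flags(new_mags, reference_mags, threshold=5.0):
    """Flag sources whose new magnitude departs from the archival reference
    by more than `threshold` robust standard deviations."""
    diff = reference_mags - new_mags            # brightening gives a positive diff
    sigma = 1.4826 * np.median(np.abs(diff - np.median(diff)))
    return np.where(diff > threshold * sigma)[0]

# Synthetic patrol: 10,000 catalog sources, one of which has brightened.
reference = rng.uniform(14.0, 20.0, 10_000)
tonight = reference + rng.normal(0.0, 0.05, reference.size)
tonight[4321] -= 3.0                            # a 3-magnitude brightening

for idx in nightly_flags(tonight, reference):
    # An operational pipeline would alert collaborators here (e.g., email or
    # an alert broker); this sketch just prints the candidate.
    print(f"candidate transient: source {idx}, delta mag "
          f"{reference[idx] - tonight[idx]:.2f}")
```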
Teams of planetary scientists have similarly developed new capabilities for distributed use of automation systems and software to improve their collaboration and coordination using spaceflight exploration vehicles. The ongoing coordination between Mars Reconnaissance Orbiter, the MER rovers, and the Curiosity Mars Science Laboratory has resulted in a rich mapping of the Martian surface capable of virtual reality “visits” to Mars. Improvements in network bandwidth and information visualization have enabled physically distributed researchers to have “virtual project meetings” in immersive goggle-based presentations of the Martian surface associated with previous waypoints of the MER or Curiosity rovers (B. Horgan, personal communication, 2019). Stored data allows researchers to identify orientation, distance, and composition of samples tested at various locations visited by the rovers.

The launch of the NASA Mars 2020 mission on July 30, 2020, represented the beginning of one of the most ambitious multi-mission programs (within a single spacefaring organization) in human history. Included in this launch were a new Martian rover (named Perseverance) and a helicopter (named Ingenuity) designed to fly in the reduced gravity and atmosphere of Mars (Fig. 52.8). After landing on February 18, 2021, the rover began engineering and science operations in advance of the first powered flight of the Ingenuity helicopter on April 19. Because of the multi-minute communication delay constraints, both rover and helicopter are required to operate in L5 autonomy during live science collection and engineering operations. Due to these constraints and lack of real-time operational recovery modes, onboard systems conduct multiple hardware and software “checkout” evaluations, with test data and telemetry measurements provided to Earth for analysis prior to confirmation of full operational status. In fact, this type of checkout was responsible for automated detection of an onboard anomaly in Ingenuity rotor operations [51]. Distributed human supervisory controller review of software code and engineering telemetry data resulted in an update of the flight control software, which then required multiple collaborative and independent validation, verification, and integration steps before uploading the revised software to Ingenuity (during a period when one-way transmission times were approximately 16 min) [52, 53]. Clearly, real-time modifications of such software would not be possible; also importantly, such anomalies in automated operations are not currently resolvable except through human-automation interactions. The fault detection, isolation, and recovery loops associated with Ingenuity anomaly resolution demonstrate important elements of operational robustness for distributed automated operations. Even with these challenges, Ingenuity managed multiple operational flights in the ensuing month (April 19 through May 13, 2021), including images of the Martian surface and the Perseverance rover itself. Further,
Fig. 52.8 Detail of image of Ingenuity helicopter in flight on Mars, taken from Perseverance rover, on April 25, 2021. (Image courtesy NASA / JPL-Caltech)
Perseverance captured audio as well as video of Ingenuity, allowing engineers and scientists additional insights into the atmosphere and acoustics of Mars. On April 20, the day after the first flight of Ingenuity, Perseverance completed its own milestone activity, converting carbon dioxide from the Martian atmosphere into oxygen – a crucial technology capability for enabling future in situ generation of rocket fuel for ascent vehicles and oxygen for astronauts [54].
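A highly simplified sketch of the checkout-and-hold pattern described above for Ingenuity is shown below. The check names, limits, and telemetry fields are all invented for illustration and are not the vehicle's actual parameters; the point is the structure: self-tests gate any flight activity, and a failed check produces a report for downlink while the vehicle remains in a safe state until revised parameters or software arrive on a later uplink.

```python
# Illustrative pre-operation checkout gate; names and limits are invented.
CHECKS = [
    ("battery_voltage", lambda t: t["battery_v"] >= 24.0),
    ("imu_bias", lambda t: abs(t["gyro_bias_dps"]) <= 0.5),
    ("rotor_spinup", lambda t: t["rotor_rpm"] >= 2400 and t["spinup_s"] <= 12.0),
]

def run_checkout(telemetry):
    """Run each check in order; stop at the first failure and return a
    report for downlink so ground teams can diagnose before any flight."""
    report = {"status": "GO", "failed_check": None, "telemetry": telemetry}
    for name, check in CHECKS:
        if not check(telemetry):
            report["status"] = "NO-GO"
            report["failed_check"] = name
            break
    return report

# A spin-up that takes too long holds the vehicle in a safe, non-flight state.
print(run_checkout({"battery_v": 25.1, "gyro_bias_dps": 0.1,
                    "rotor_rpm": 2100, "spinup_s": 14.0}))
```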
52.6.4 Future Automation-Automation and Human-Automation Exploration

The initial landing, engineering, and science operations of the NASA Mars 2020 mission, including multi-robotic operations, all provide crucial data for planning future human explorations of Mars (similar to the Surveyor missions of the lunar surface as preparations for Apollo human missions). In addition, the scientific sample collection is also the first step in the Mars Sample Return (MSR) mission. Still under planning and construction, the later phases of the MSR mission include autonomous and Earth-controlled automation capabilities for a similar landing and rover mission to retrieve Mars 2020 collected samples and return them to be stowed on the Mars Ascent Vehicle (MAV), a 3-m launch vehicle which will lift the samples from the Martian surface to be returned to Earth [55]. The successful completion by Perseverance of oxygen synthesis, from the carbon dioxide of the Martian atmosphere, is an essential technology demonstration in the evolution of MSR.
The overall MSR mission, including the coordination of multiple engineering and science teams working with the Perseverance and Ingenuity vehicles and the design of MAV, represents very high levels of design coordination and timing to permit each vehicle to be constructed, tested, and made operational within the fixed launch windows permitting efficient Earth-Mars mission trajectories. Since signal transmission delays between Earth and Mars range from 4 to 20 min each way, many aspects of landing, instrument deployment, helicopter flight sorties, and MAV launch operations must be able to operate automatically. Even with extensive testing, a number of in-flight conditions are difficult or impossible to accurately simulate; both human supervisory control and automatic onboard capabilities (and ground-to-spacecraft communications) must be able to respond to temperature variations or data signal overloads to prevent damage to critical engine, communications, payload, or other spacecraft systems [56].

Perhaps the greatest new challenge for the next 20–40 years of space exploration automation systems will be demanded, not coincidentally, by the needs of a human exploration mission to Mars. Due to the Earth-Mars distance and signal transmission delay that requires high levels of autonomous hardware and software operations for current uncrewed missions, a crewed mission will also demand significant autonomy from Earth-based ground controllers. However, the small crew size also means that many of the real-time functions managed by multiple teams of humans on the ground must be filtered, organized, and presented to the crew members by means of software agents [57]. The onboard vehicle functions themselves will be more complex as well, in order to sustain the physiological and psychological health of the crew members during the expected 3 years of a Mars mission (7–9 months in transit in each direction, as well as a year or more on the Mars surface). Automatic monitoring of life support status and consumables, including shifting baselines of crew physiological function, will require individualized, machine learning agents to detect and respond to anomalies in crew member health [58]. The dynamics of function allocation between humans and machine software and hardware on Mars will also require additional considerations of autonomy and abstraction hierarchies for planetary exploration crew members well beyond the near-real-time coordination experience of all previous human spaceflight missions [59]. Ongoing tools to maintain crew member cognitive readiness and provide countermeasures to isolation and psychological dysfunction might be able to be tied to automatic software system monitoring capabilities, to provide a beneficial as well as informative “Earth room” display to assist astronauts in connecting to Earth, conducting ongoing research, and supporting as-needed intervention and response to off-nominal conditions [57].
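The idea of individualized monitoring against a shifting baseline can be illustrated with a minimal sketch. The smoothing constant, alert threshold, and heart-rate values below are arbitrary assumptions, not parameters of any flight medical system; the point is that the reference adapts to the individual crew member over time instead of being a fixed population norm.

```python
# A minimal sketch of anomaly detection against a shifting personal baseline,
# assuming a single scalar vital sign sampled at regular intervals.
class BaselineMonitor:
    def __init__(self, alpha=0.05, threshold=4.0):
        self.alpha = alpha          # how quickly the personal baseline adapts
        self.threshold = threshold  # alert if residual exceeds this many "sigmas"
        self.mean = None
        self.var = 1.0

    def update(self, x):
        if self.mean is None:       # first sample seeds the baseline
            self.mean = x
            return False
        residual = x - self.mean
        alert = abs(residual) > self.threshold * (self.var ** 0.5)
        # Exponentially weighted updates let the baseline drift with the crew
        # member's long-duration adaptation instead of staying at a fixed norm.
        self.mean += self.alpha * residual
        self.var = (1 - self.alpha) * (self.var + self.alpha * residual ** 2)
        return alert

monitor = BaselineMonitor()
readings = [62, 63, 61, 64, 62, 63, 90, 62]   # e.g., resting heart rate, bpm
print([monitor.update(r) for r in readings])  # only the 90 bpm sample alerts
```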
52.7
Additional Challenges and Concerns for Future Space Automation
The wondrous innovations and advances seen in various spaceflight missions and tests in early 2021 must be tempered by an awareness of additional significant challenges which might endanger future astronomy and spaceflight automation infrastructure systems. As described above, many advances in distributed space-related automation involve multiple robotic systems interacting with teams of human scientists and engineers from multiple disciplines and physical locations. As a result, network communications and data processing become an essential part of space automation infrastructure. However, we cannot assume that future operations will be able to continue in a benign, Earth-centric data flow environment. Security of information flow and infrastructure management, timely and robust communications between distributed automation assets, and effective techniques to ensure space situation awareness in the face of hazards (be they adversarial, natural, or unintentional) all represent issues of considerable concern. Three illustrative (but by no means exhaustive) themes in this conceptual sphere are described below.
52.7.1 Cybersecurity and Trusted Automation

Spaceflight communications and data flows have existed under a combination of military, scientific, and commercial models throughout the 65 years since the first Sputnik signals could be heard by amateur radio operators worldwide. There are considerable controls in place to limit sharing of frequency spectra and specific orbital trajectories for various types of military or otherwise classified missions; however, a growing number of commercial and academic entities have greater opportunities to observe, communicate with, and control space-based assets. With this increased access come additional risks for intrusion, sabotage, or even destruction of automated satellite or other systems; cybersecurity protection and control is not an element of most space systems [60]. Unfortunately, most cybersecurity operations centers (CSOCs) can be operationally and organizationally removed from other “core mission” functions of a company and thus not seen as integrated with critical operations until an actual incident threatens those operations [61]. The May 2021 incident of a ransomware event shutting down a major pipeline restricting gasoline availability throughout the eastern United States was such an incident, following the large-scale breach of multiple corporate and government systems via the SolarWinds information technology infrastructure, highlighting the cybersecurity vulnerabilities of outdated and unmonitored automation operations [62].
The development of “trusted automation” capabilities for space-based assets is both a human and a software process. Companies may think to consolidate their software development operations in order to minimize the number of external partners who have access to such code [62]; this approach may not be feasible, though, when multiple organizations and international partners are constructing and operating large-scale space science or space exploration enterprises. In this environment, multinational and multi-organizational groups such as the Space Information Sharing and Analysis Center (S-ISAC: www.s-isac.org) have been created to help foster beneficial interactions in response to adversarial intrusions [60]. (Note: the author participates in S-ISAC activities, and the author’s institution was a founding academic member. However, the author has no relationships with the authors of the cited reference mentioning S-ISAC and has not shared nonpublic S-ISAC information.) As a result of new concerns regarding space automation vulnerabilities, there are additional considerations for how organizations maintain appropriate security of space assets and how CSOCs will play an increasing role in ensuring that automation operations are conducted as intended, by the intended operators, for the intended purposes [60, 62].

Trusted automation for space exploration and scientific study cannot be achieved simply via “walled gardens” throughout the solar system. We may look to existing models of tiered suppliers in engineering design and fabrication for a relevant analogy for developing such trusted space exploration information exchange. Engineering specifications and production constraints embedded in computer-aided design (CAD) files can be seen as valuable intellectual property for a particular engineering firm (such as a satellite integrator); they do not want to share all of those intellectual property elements with lower-tier suppliers, let alone competitors. Further, a single large company may have different groups using distinct CAD tools for different early stage designs. A process known as “lightweight CAD” was developed to share important supply chain CAD information (such as dimensions and tolerances) efficiently, without sharing other embedded intellectual property elements [63, 64]. Lightweight CAD exchange also provides a benefit of creating a de facto interoperability “crosswalk” between different engineering design tools, companies, and philosophies.

We cannot assume that design philosophies, security management protocols, or underlying algorithms driving autonomous operation modes would be identical, or even compatible, between the space agencies of China, the United Arab Emirates, and the United States. However, there are certainly foreseeable situations (such as an active marsquake) where environmental conditions, engineering status, and other automation information could be beneficially shared between Tianwen, Hope, and MRO with Perseverance or Zhurong regarding activity on the Martian surface. The
existing gaps between scientists and engineers, and CSOC managers, should be addressed as an additional element of multidisciplinary coordination required to sustain robust and distributed space automation operations.
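By analogy with lightweight CAD, a "lightweight" exchange of automation data could expose only a vetted subset of fields to an external partner while the rest stays inside the originating organization. The field names, sample record, and allow-list below are assumptions made for illustration; no actual inter-agency data format is implied.

```python
# Fields cleared for cross-agency exchange (hypothetical allow-list).
SHAREABLE_FIELDS = {"utc_time", "site", "seismic_event_detected",
                    "peak_ground_acceleration", "local_dust_opacity"}

def lightweight_view(full_record: dict) -> dict:
    """Return only the fields cleared for exchange with an external partner."""
    return {k: v for k, v in full_record.items() if k in SHAREABLE_FIELDS}

internal_record = {
    "utc_time": "2021-05-20T03:14:00Z",
    "site": "Elysium Planitia",
    "seismic_event_detected": True,
    "peak_ground_acceleration": 2.3e-4,
    "local_dust_opacity": 0.9,
    # Fields below stay inside the originating agency's systems.
    "attitude_control_gains": [0.8, 0.12, 0.05],
    "onboard_fault_log": ["EPS-017", "THM-112"],
}

print(lightweight_view(internal_record))
```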
52.7.2 Distributed Space-Based High-Performance Computing

Scientific discoveries based on astronomy automation, as discussed earlier in this chapter, have come to increasingly rely on powerful data science and machine learning capabilities conducted by high-performance computing resources. At some point, however, one might ask the question of whether it continues to be resource-efficient to send all of the raw data from the space asset back to Earth for algorithmic processing and synthesis. Advanced concepts such as the “Earth room” described in Sect. 52.6.4 above [57] require far more onboard computing power than is currently available on any spacecraft in operation, production, or design. The case of anomaly resolution for the Ingenuity helicopter, requiring software updates rewritten on Earth and sent back to Mars [51, 53], already demonstrates a limiting case for Earth-based high-performance computing: automation systems operating at gigabyte scale (equivalent to current terrestrial laptop computers), let alone the petabyte scale common to cloud-based data centers, simply cannot be supported within existing communication bandwidth limitations.

The alternative, of “launching the cloud,” is also far from straightforward. Hardened computer processors to survive the radiation environment of space require substantial additional development, testing, and error validation/correction support (see Sect. 52.5 above). While power demands and environmental/climate degradations may be mitigated by provisioning a dedicated solar array for a data center, other technical challenges exist. Earth-based data centers generate substantial waste heat; cooling may involve fluid transport, convection, or other gravity-based mechanisms. There have been experimental systems developed to operate a data center underwater, with significant improvements in technology reliability [65]. However, maintenance and servicing access for a space-based data center being used to manage and support critical space automation functions in cislunar or cismartian space would also be a tremendously difficult and risky undertaking.

The resolution to this challenge is unclear. Researchers and designers of advanced mission architectures describe the importance of in situ resource utilization to “live off the land” to enable more robust deep space operations [54]. The demand for “forward operations” (such as “crew forward operations” for a human Mars exploration mission [57]) and automation flexibility that is not constrained by long transmission delays and small “pipes” back to Earth
suggests an increased “automation forward” approach to high-performance computing. Evolutions of cloud computing and edge computing architectures to “orbital centers” or “Lagrange computing” remain speculative concepts, at a relatively immature level of discussion or development, as of the early 2020s.
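A back-of-envelope calculation makes the bandwidth argument concrete. The sustained 2 Mbit/s downlink assumed below is illustrative only (actual deep-space data rates vary widely with range, antenna size, and network scheduling), but even under that assumption, moving cloud-scale data volumes to Earth is clearly impractical.

```python
# Transfer-time estimates under an assumed sustained downlink of 2 Mbit/s.
RATE_BPS = 2e6

def days_to_downlink(bytes_of_data, rate_bps=RATE_BPS):
    return bytes_of_data * 8 / rate_bps / 86_400

for label, size in [("10 GB software and image bundle", 1e10),
                    ("1 TB instrument archive", 1e12),
                    ("1 PB 'cloud-scale' store", 1e15)]:
    print(f"{label}: {days_to_downlink(size):,.1f} days of continuous link time")
```

Under these assumptions, a petabyte-scale store would need roughly 46,000 days (more than a century) of uninterrupted link time, which is the sense in which cloud-scale processing cannot simply be "supported" by the pipe back to Earth.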
52.7.3 Enhanced Awareness and “Projective Freshness”

An important aspect of data collection and analysis for any automation system (or human task-focused activity) is the use of sensor data and mission-related commands in the development of situation awareness (sometimes described as “domain awareness”) and feasible implications for action [66–68]. In this sense, effective autonomous systems operating at levels L3–L5 are required to collect, analyze, and utilize data to conduct local forecasting on the current environment, as well as the likely changes in engineering status or environmental conditions during the execution of upcoming tasks. The concept of “information freshness” can be considered as the period during which previously collected and analyzed data are still valid for tactical planning and task-related action [69]. (A common example using information freshness considerations is a 24- or 48-h weather forecast that indicates when the forecast was generated and over what time frame the forecast is considered of sufficient accuracy or confidence for planning.) Freshness may be described according to an event criterion (“good until condition A occurs”), an absolute clock/calendar time (“good until 2230 GMT on September 25”), or a relative time based on analysis and processing cycles (“good for 24 h after publication” or “good for three update cycles”).

Information freshness is of particular interest when coordinating multiple actors in a context constrained by network delays. As described above, periods of operational freshness are considerably shortened (or invalidated) if all remote automation functions must be analyzed and controlled by earthbound processing. (In fact, one reason for the relatively slow movement rates of Mars rovers is to limit the change in environmental conditions during the period of operational autonomy, in between communication updates from Earth.) A potentially underappreciated determination in multi-agent coordination under delay conditions is whether analysis-based information is considered “fresh on send,” or “fresh on receipt” (referring to durations of analysis accuracy and confidence for decision-making and task execution). Business practices and legal expectations between research communities, organizations, or nations may differ significantly. For instance, determinations of validation or conformance to defined standards in the European Union are considered based on the date of conformity determination
decisions (“fresh on receipt and action”) [70]; by contrast, validation requirements in the United States are based on the date of submission (“fresh on send”) (J. Lee, personal communication, 2021). There is currently no clearly defined standard or metadata for automation-related information freshness. When multiple automated systems are involved in coordination efforts, higher-confidence task coordination would be achieved when using “projective freshness”: the sending actor determines operational conditions and needs of the receiver at time of receipt, with fewer assumptions of the receiver’s availability, precision, or speed of analysis capability. The concept of projective freshness may also be considered an element of “lightweight automation coordination,” as described in Sect. 52.7.1 above.
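A minimal sketch of freshness metadata and a projective check is shown below. The Advisory structure, its field names, and the 15-min validity window are invented for the example; the key difference illustrated is that the projective check evaluates validity at the receiver's estimated time of use, after adding the one-way transmission delay, rather than at the sender's clock.

```python
from dataclasses import dataclass

@dataclass
class Advisory:
    issued_at_s: float        # sender clock, seconds
    valid_for_s: float        # relative freshness window after issue

    def fresh_on_send(self, now_s: float) -> bool:
        return now_s - self.issued_at_s <= self.valid_for_s

    def fresh_on_projected_receipt(self, now_s: float, one_way_delay_s: float,
                                   receiver_processing_s: float = 0.0) -> bool:
        # Projective freshness: will the advisory still be valid when the
        # receiver can actually act on it?
        time_of_use = now_s + one_way_delay_s + receiver_processing_s
        return time_of_use - self.issued_at_s <= self.valid_for_s

dust_advisory = Advisory(issued_at_s=0.0, valid_for_s=900.0)   # 15 min window
now = 300.0                                                    # 5 min after issue
print(dust_advisory.fresh_on_send(now))                        # True at the sender
print(dust_advisory.fresh_on_projected_receipt(now, one_way_delay_s=1200.0))  # False: stale on arrival
```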
52.8
Conclusion
Advances in automation hardware over the past 200 years (and automation software over the past 60) have drastically expanded and furthered the scientific study of space. Beginning with mechanical clock drives to automate the process of following celestial objects through an optical telescope, new capabilities allow distributed astronomers to communicate and collaborate globally in response to robotic data collection of visual and radio spectrum electromagnetic signals detected from multiple sites on Earth and platforms in Earth orbit. These distributed teams and platforms, supported by software-based machine learning algorithms, help to identify and interpret stellar birth and death events over 10 billion light years’ distance.

High-precision instrument design and extensive testing of physical components and software routines have enabled robotic operations to continue uninterrupted on the Martian surface for over 15 years since the arrival of the MER rovers in 2004. Without direct human hardware maintenance (but with help from the occasional dust devil to clean off rover solar panels), orbiters and rovers have far exceeded their design life, enabling return of hundreds of terabytes of Mars planetary science data and supporting software upgrades and new automation functions in distributed human-automation interactions with ground-based scientists and engineers. Capabilities for extremely sensitive positioning, tracking, and signal detection of satellite communications enable these missions to continue, as well as those visiting the planets of the solar system and enabling signal exchanges to the edge of the solar system itself, over 12 h of light travel (and 40 years of satellite travel) distant.

Human exploration of space has also been strongly dependent on automation for vehicle control, communications, and monitoring of onboard systems within and across missions. The coordination of robotic missions with human creativity and innovation has
permitted the accurate and successful landing of Apollo 12, the repair of the Hubble Space Telescope, and continuous habitation of the International Space Station for over 20 years. We can assume very few things about the future of space science and exploration, but from a robotic Mars Sample Return to the actual landing of astronauts on Mars and safe return to Earth, the role of highly precise, reliable, scalable, dynamic automation hardware and software systems must be integral to mission success and the expansion of scientific knowledge.

Acknowledgments The author has been supported by multiple NASA grants, including as Director of the NASA-funded Indiana Space Grant Consortium (INSGC). INSGC support was the only NASA grant active during the period of writing this chapter. The author thanks the Editor of the Handbook for the invitation and opportunity to contribute this chapter to the Handbook.
References 1. Sheridan, T.B.: Telerobotics, Automation, and Human Supervisory Control. MIT Press, Cambridge (1992) 2. Cosmoquest: The story of Joseph von Fraunhofer and the first German Equatorial Mount. vBulletin Solutions. https:/ /forum.cosmoquest.org/showthread.php?89447-The-story-of-Jose ph-von-Fraunhofer-and-the-first-German-Equatorial-Mount(2009). Accessed 7 Dec 2020 3. Jahn, W., Kirmeier, J., Mewes, C., Preyss, C.R., Weber, L.: Fraunhofer in Benediktbeuern Glassworks and Workshop. Munchen (2008) 4. Astrophysics, C.F.: Fraunhofer’s 1824 Dorpat Refractor Mount: Reverse Salients and Big Telescopes in the 19th Century. MA: Harvard-Smithsonian Center for Astrophysics, CfA Colloquium Cambridge (2016) 5. Valder, V.: Tartu observatory: Friedrich Georg Wilhelm Struve. https://www.muuseum.ut.ee/vvebook/pages/4_3.html (n.d.). Accessed 7 Dec 2020 6. Wei-Haas, M.: The True Story of “Hidden Figures,” the Forgotten Women Who Helped Win the Space Race. Smithsonian Magazine, Washington (2016) 7. Blair, E.: ‘Hidden Figures’ No More: Meet the Black Women Who Helped Send America To Space. Washington, DC: National Public Radio (2016) 8. Howell, E.: NASA’s Real ‘Hidden Figures’. Spacecom: F (2020) 9. Skopinski, T.H., Johnson, K.G.: Determination of Azimuth Angle at Burnout for Placing a Satellite Over a Selected Earth Position. NASA, Langley Field (1960) 10. Greicius, A.: NASA contacts Voyager 2 using upgraded deep space network dish. NASA. https://www.nasa.gov/feature/jpl/ nasa-contacts-voyager-2-using-upgraded-deep-space-network-dish (2020). Accessed 7 Dec 2020 11. JPL N: Voyager mission status. NASA. https://voyager.jpl.nasa. gov/mission/status/#where_are_they_now (2020). Accessed 29 Dec 2020 12. Rao, R.: Voyager 1 Discovers Faint Plasma ‘Hum’ in Interstellar Space. Spacecom. New York, NY: Future US, Inc. (2021) 13. Biesiadecki, J.J., Maimone, M.W.: The Mars Exploration Rover surface mobility flight software: driving ambition. In: 2006 IEEE Aerospae Conference Proceedings, p 15. IEEE, Big Sky (2006) 14. Maimone, M., Leger, P., Biesiadecki, J.: Overview of the Mars Exploration Rovers Autonomous Mobility and Vision Capabilities. NASA JPL, Pasadena (2007)
B. S. Caldwell 15. Clark, S.: China Lands Its First Probe on Mars. Spaceflight Now, Palo Alto, CA: Annual Reviews (2021) 16. Agency US: Images and scientific results from hope probe. Emirates Mars Mission. https://www.emiratesmarsmission.ae/gallery/ images-of-hope-probe/1 (2021). Accessed 17 May 2021 17. Stanton, N.A., Salmon, P.M., Jenkins, D.P., Walker, G.H.: Human Factors in the Design and Evaluation of Central Control Room Operations. CRC Press, Boca Raton (2010) 18. Klesh, A.T., Cutler, J.W., Atkins, E.M.: Cyber-physical challenges for space systems. In: 2012 IEEE/ACM Third International Conference on Cyber-Physical Systems. IEEE, Beijing (2012) 19. Reza, H., Straub, J., Alexander, N., Korvald, C., Hubber, J., Chawla, A.: Toward requirements engineering of cyber-physical systems: modeling CubeSat. In: 2016 IEEE Aerospace Conference. IEEE, Big Sky, 5–12 Mar 2016 20. Repperger, D.W., Phillips, C.A.: The human role in automation. In: Nof, S.Y. (ed.) Springer Handbook of Automation Springer Handbooks, pp. 295–304. Springer, Heidelberg (2009). https:// doi.org/10.1007/978-3-540-78831-7_17 21. NASA: NASA Systems Engineering Handbook. NASA, Washington, DC (2007) 22. Li, H., Wickens, C.D., Sarter, N., Sebok, A.: Stages and levels of automation in support of space teleoperations. Hum. Factors. 56(6), 1050–1061 (2014). https://doi.org/10.1177/0018720814522830 23. Onnasch, L., Wickens, C.D., Li, H., Manzey, D.: Human performance consequences of stages and levels of automation: an integrated meta-analysis. Hum. Factors. 56(3), 476–488 (2014). https://doi.org/10.1177/0018720813501549 24. Miller, D., Sun, A., Ju, W.: Situation awareness with different levels of automation. In: 2014 IEEE International Conference on Systems, Man, and Cybernetics. IEEE, San Diego, 5–8 Oct 2014 25. Fasth-Berglund, A., Stahr, J.: Task allocation in production systems – measuring and analysing levels of automation. In: Proceedings of the 12th IFAC/IFIP/IFORS/IEA Symposium on Analysis, Design, and Evaluation of Human-Machine Systems, pp 435–441. https://doi.org/10.3182/20130811-5-US-2037.00032 (2013) 26. Williams, D.R.: Venera 9 Descent Craft. NASA. https:// nssdc.gsfc.nasa.gov/nmc/spacecraft/display.action?id=1975-050D (2020). Accessed 29 Dec 2020 27. Williams, D.R.: Venera 13 Descent Craft. NASA. https:// nssdc.gsfc.nasa.gov/nmc/spacecraft/display.action?id=1981-106D (2020). Accessed 29 Dec 2020 28. Caldwell, B.S.: Multi-team dynamics and distributed expertise in mission operations. Aviat. Space Environ. Med. 76(6):Sec II, B145–B153 (2005) 29. Rasmussen, J.: The role of hierarchical knowledge representation in decision making and system management. IEEE Trans. Syst. Man Cybern. SMC-15(2), 234–243 (1985). https://doi.org/ 10.1109/TSMC.1985.6313353 30. Sheridan, T.B.: Musings on models and the genius of Jens Rasmussen. Appl. Ergon. 59, 598–601 (2017). https://doi.org/10.1016/ j.apergo.2015.10.015 31. Caldwell, B.S.: Analysis and modeling of information flow and distributed expertise in space-related operations. Acta Astronaut. 56, 996–1004 (2005) 32. Perl, S.M., DeLaurentis, D.A., Caldwell, B.S.: A system-ofsystems approach to enhance information exchange for the mars science laboratory mission. In: SpaceOps 2010 Conference Delivering on the Dream, AIAA, Huntsville. https://doi.org/10.2514/ 6.2010-2028 (2010) 33. Wikipedia: Radio astronomy. Wikimedia Foundation. https:// en.wikipedia.org/wiki/Radio_astronomy (2020). Accessed 18 Dec 2020 34. Clark, B.G.: Information-processing systems in radio astronomy and astronomy. 
Annu. Rev. Astron. Astrophys. 8, 115–138 (1970)
Barrett S. Caldwell, PhD is Professor of Industrial Engineering (and Aeronautics & Astronautics, by courtesy) at Purdue. He holds a PhD in Social Psychology (University of California, Davis, 1990) and BS degrees in Aeronautics and Astronautics and in Humanities (MIT, 1985). His research highlights human factors and systems engineering in complex environments, including human-automation interactions in spaceflight operations. Prof. Caldwell has been Director of the NASA-funded Indiana Space Grant Consortium since 2002.
53
Cleaning Automation
Norbert Elkmann and José Saenz
Contents
53.1 Introduction ............ 1159
53.2 Background Developments, Cleaning Automation Examples ............ 1160
53.2.1 Floor Cleaning Robots ............ 1161
53.2.2 Pool, Facade, Window, Hull, Solar Panel, Ventilation Duct, and Sewer Line Cleaning Robots ............ 1163
53.3 Emerging Trends ............ 1168
Literature ............ 1168

Abstract
The field of cleaning automation has grown in recent years, for floor cleaning and other areas, in the home and beyond. Buoyed by recent innovations in sensing and navigation technology, automated cleaning systems have moved past laboratory status and are used for a diverse set of applications all over the world. By far the biggest market value is attributable to robot vacuum and floor cleaners, which sold over 11.6 million units in 2018 (World Robotics Report 2019, International Federation of Robotics (IFR) 2019). While versatile, high-performance systems exist for other applications such as professional floor cleaning, ship cleaning, and facade and solar panel cleaning, they are by no means as widespread as household systems for floor and window cleaning. Automatic cleaning systems for professional cleaning tasks are frequently complex robot systems that operate autonomously in unstructured environments and/or outdoor areas. Cleaning automation not only incorporates cleaning engineering but also a variety of other technical disciplines, e.g., kinematics, power supply, sensor systems, environment modeling, path planning in dynamic environments, and autonomy. Some examples of automatic cleaning systems for floors, facades, swimming pools, ship hulls, ventilation ducts, and sewer lines serve to highlight the current state of the art and the potential of cleaning automation and provide a glimpse of future developments.

Keywords
Cleaning · Robots · Kinematics · Navigation · Autonomy · Teleoperation · Vacuum robots · Façade robots · Pool robots · Sewer robots

N. Elkmann · J. Saenz
Fraunhofer IFF, Magdeburg, Germany
e-mail: [email protected]

© Springer Nature Switzerland AG 2023
S. Y. Nof (ed.), Springer Handbook of Automation, Springer Handbooks, https://doi.org/10.1007/978-3-030-96729-1_53
53.1 Introduction
Cleaning is typically considered tedious and monotonous work, which makes a broad use of cleaning robots an obvious goal. Moreover, many cleaning activities can be classified as hazardous for humans, either due to the cleaning process itself (e.g., exposure to chemicals, pathogens, and germs) or due to the associated environmental conditions (e.g., a sewer pipe with a potentially harmful atmosphere). The main goal of cleaning is to maintain the function and condition of houses, buildings, production facilities, and infrastructures. Regular cleaning cycles are indispensable for a wide range of purposes, from ensuring hygienic conditions in living spaces to maintaining quality levels in production, extending the service life of machines and buildings, and keeping solar panels operating at high efficiency. Cleaning therefore represents an ideal use case for robots and remote-controlled systems and remains a typical service robot application. Unsurprisingly, developments over the last 20 years have been aimed at automating cleaning systems, too. The range of systems available varies widely. In addition to floor cleaning systems, other systems of widely varying complexity clean facades, swimming pools, ventilation ducts, solar panels, and sewer lines.

Mass markets for cleaning robot applications have become established in a few, limited sectors. Vacuuming robots for household use represent the most widely sold type of robot system worldwide. Their convenience and low purchase prices (starting at around $250) account for their great commercial success in the household sector [4, 18, 25]. Residential users usually place lower demands on cleaning quality and, above all, cleaning speed than professional users. Since the 1990s, several manufacturers of cleaning machines throughout the world have developed autonomous cleaning robots for professional floor cleaning. Despite these developments, such systems for professional cleaning have not yet become established on the market for a variety of reasons. These include lower flexibility compared to humans, high acquisition costs, low availability, and the relatively high complexity of operation and maintenance.

Cleaning robots (Fig. 53.1b–d) and floor cleaning systems (Fig. 53.1a) in particular share many commonalities with other service robots, such as those designed for transport or inspection tasks. Particular points of intersection are sensor systems for obstacle detection and environmental perception
and modeling, power supply, path planning and execution, and human–machine interfaces. Thus, cleaning robots constitute a preliminary stage toward more complex applications, such as household applications or applications with direct human–robot interaction. Cleaning robots for facades, pipes, ventilation ducts, and sewer lines are, however, not mass-produced items. These systems are specially optimized for the requirements and geometry of the surface or object being cleaned and are used exclusively in professional environments rather than in the residential sector. They typically require installation, connection to media (e.g., water and power), and fall protection, and are, with a few exceptions, mostly teleoperated. Crucial to the acceptance of cleaning robots are their cleaning efficiency and cost-effectiveness. A system's flexibility and ease of operation are other important criteria for acceptance.
53.2 Background Developments, Cleaning Automation Examples
Fig. 53.1 Four different types of cleaning robots: vacuum robot (a), duct cleaning robot (b), pool cleaning robot (c), and window cleaning robot (d). (Photos: © Lightfield studios, © markobe, © Gianmichele, © Yulia – stock.adobe.com)

Cleaning robots incorporate a multitude of components and technologies originating from the fields of automation and
robotics, including kinematics, mobility and navigation, communication, sensors and sensor networks, robotics and intelligent machines, and teleoperation, many of which are covered in greater detail in other chapters of this handbook. Cleaning robots have different levels of automation and intelligence, ranging from remote-controlled systems to simple automated systems with limited functionalities, and to autonomous systems with multiple sensors, environmental modeling, and navigation. Newer systems also utilize artificial intelligence and machine learning techniques for perception and navigation tasks and offer advanced connectivity and comfort capabilities.

All types of cleaning systems share a specific set of technical subsystems:

• Motion platform for the system, kinematics
• Control and operating system
• Sensor system for environment modeling (in automatic systems) and obstacle detection
• Power supply
• Communications system
• Human–machine user interface
• Cleaning unit and, where applicable, suction and waste handling units
• Various safety devices (collision detection or avoidance systems for floor cleaning systems, securing or recovery ropes for facade, pipe, duct, sewer, and pool cleaning systems)

All automated cleaning systems draw on established cleaning methods and technologies, typically consisting of a combination of brushes and vacuum technology or cleaning tissue. An automated cleaning system, however, cannot inspect its cleaning quality as easily as a human can. At present, mobile floor cleaning systems use optical or acoustic sensors to detect the amount of particles picked up, in order to initiate extra cleaning of a specific area.

Cleaning robots from the different fields of application differ widely in terms of requirements and their technical subsystems. The International Federation of Robotics differentiates between professional and consumer robots when considering the field of service robots. While that distinction can be useful for discussing and projecting sales, this chapter presents the technical challenges from an automation perspective across all technical subsystems. We therefore differentiate between floor cleaning robots and all other types of cleaning robots. Floor cleaning robots include consumer and professional systems, operate on easily accessible horizontal surfaces, and operate almost exclusively autonomously. The other types of cleaning robots often require specialized kinematics for motion along a wide range of surfaces in various types of environments that are typically
difficult for humans to access, include a wider range of cleaning technologies, and are often teleoperated.

• Floor cleaning robots: Floor cleaning systems relieve humans of monotonous work such as mopping and vacuuming. For their use to be cost-effective, these systems must function autonomously and without an operator. Easy operability, high-quality cleaning, and flexible use are basic requirements these systems have to meet.
• Pool, facade, window, hull, solar panel, ventilation duct, and sewer line cleaning robots: Such cleaning systems are typically utilized where humans are unable to access an area in need of cleaning or are only able to access it with difficulty. These systems can be engineered to be remote controlled or fully autonomous. Usually, they are customized for a specific application scenario. Nevertheless, easy operability and high-quality cleaning are basic requirements. They must also be recoverable from cleaning areas that are inaccessible to humans without requiring a human to enter the cleaning area.

Significant features of these two groups of cleaning robots and associated examples are highlighted below. The automatic systems presented here are established systems that have achieved product maturity and, for the most part, been in operation for years. In addition to special applications for which very few systems are available worldwide, there are also applications for which a mass market has already opened. Accordingly, the systems cited here merely represent a few examples of the wide range of cleaning automation products.
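To make the shared subsystem decomposition above more concrete, the following minimal sketch shows how a few of the listed subsystems might be composed in software. It is purely illustrative: all class and method names are hypothetical and do not describe any particular product discussed in this chapter.

```python
# Illustrative composition of the shared technical subsystems of a cleaning robot.
# All class and method names are hypothetical and do not describe any real product.
from dataclasses import dataclass, field


@dataclass
class SensorSystem:
    """Environment modeling and obstacle detection (e.g., LIDAR, ultrasound, bumpers)."""

    def obstacle_ahead(self) -> bool:
        return False  # placeholder: a real system would query its sensors here


@dataclass
class MotionPlatform:
    """Kinematics, e.g., two driven wheels plus a caster wheel."""

    def drive(self, left_mps: float, right_mps: float) -> None:
        print(f"wheel speeds: left={left_mps:.2f} m/s, right={right_mps:.2f} m/s")


@dataclass
class CleaningUnit:
    """Brushes plus a vacuum (dry) or scrubber (wet) unit."""

    def activate(self) -> None:
        print("cleaning unit on")


@dataclass
class CleaningRobot:
    """Composes the subsystems listed above into one controllable system."""

    sensors: SensorSystem = field(default_factory=SensorSystem)
    platform: MotionPlatform = field(default_factory=MotionPlatform)
    cleaner: CleaningUnit = field(default_factory=CleaningUnit)

    def step(self) -> None:
        """One iteration of a deliberately simple control loop."""
        self.cleaner.activate()
        if self.sensors.obstacle_ahead():
            self.platform.drive(0.1, -0.1)  # turn in place to avoid the obstacle
        else:
            self.platform.drive(0.3, 0.3)   # drive straight while cleaning


if __name__ == "__main__":
    CleaningRobot().step()
```

Real systems add the remaining subsystems (power supply, communications, user interface, safety devices) and far more elaborate control logic; the point here is only the modular decomposition.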
53.2.1 Floor Cleaning Robots

Floor cleaning systems utilize wheel- or track-driven mobile platforms. The configuration of the wheels or tracks varies, depending on the application and the maneuverability requirements. Kinematics with two driven wheels and caster wheels are often used. The systems are not able to move up and down stairs. Sensor systems for obstacle detection and navigation vary widely. They range from contact switches to inexpensive infrared or ultrasonic sensors, and to cameras and laser scanners. Whereas the first generation of commercially available floor cleaning robots relied on simple algorithms to complete a given surface, current models create maps of rooms, move along more efficient cleaning paths [8], and feature varying degrees of connectivity, enabling external log-in so that users can start cleaning cycles for specific rooms or
areas. More advanced models even share these house maps with other devices, so that a combination of vacuum and mopping systems can work from the same map data.

Floor cleaning systems for professional use predominantly contain laser scanners to generate maps, navigate, and avoid collisions. In this case, path planning and execution are typically optimized to clean a maximum surface area within a specific time. Ultrasonic sensors and contact switches are often employed as collision sensors too. Objects such as walls, shelves, and the like, as well as passing humans, are reliably detected and avoided. To be cost-effective, a floor cleaning system must be usable over several hours on batteries and/or only require short charging times, and recent advances in battery technology have been a positive development for this class of cleaning machine. Professional floor cleaning systems are furthermore often equipped with wet cleaning systems, providing more than just vacuuming of dry surfaces. Cleaning robots for home use are available with brushes for dry surfaces or with wet cleaning systems, but not with both combined. The Avidbots Neo floor-washing robot (Fig. 53.2) [10] is an example of professional wet cleaning, whereas iRobot's Roomba [18] is an example of household floor cleaning. While the technical configuration and performance of other manufacturers' systems differ, the basic concept is comparable.

Avidbots Neo Floor Cleaning Robot
Manufacturer: Avidbots, Canada
Type: Professional wet floor-scrubbing system for hard surfaces
Operating mode: Autonomous
Cleaning technology: Scrubber cleaning (wet)
Area of application: Indoor public spaces including shopping malls and airports
Based on a standard floor cleaning system, the Avidbots Neo is intended for professional cleaning of hard floors. The system is outfitted with localization and collision-avoidance sensor systems including front and rear LIDAR detectors and 3D cameras. It can autonomously clean up to 3,900 m² per hour, with a typical time of 4–6 hours between charging cycles. This cleaning robot can run in autonomous and manual modes, whereby the manual mode is typically reserved for the initial mapping of the environment. Wi-Fi and cellular data connectivity allow for the remote management of fleets, for updating changes to the layout, and for documenting which surfaces have been cleaned. It is used in many airports worldwide, including Paris Charles de Gaulle, Singapore Changi, Tokyo Narita, Tokyo Haneda, Osaka Kansai, Montréal-Pierre Elliott Trudeau International, and Ben Gurion.

Systems from other manufacturers include:

• The LionsBot family of floor cleaning robots combines two standard platforms with a variety of cleaning technologies including scrubbers (LeoScrub), vacuuming (LeoVac), and mopping (LeoMop) [24].
• The autonomous cleaning robot for Pittsburgh International Airport (PIT), developed by a partnership between PIT and Carnegie Mellon University in response to the COVID-19 pandemic, features UV technologies in addition to traditional wet cleaning techniques to add an extra layer of cleanliness [30].

An interesting development is the combination of specialist knowledge from cleaning machine manufacturers with navigation and autonomous operation technology. BrainCorp [11] offers its Brain OS technology to makers of commercial cleaning machines to bring autonomous navigation and safety capabilities to standard manual cleaning platforms.

Roomba s-Series Vacuum Cleaning Robot
Manufacturer: iRobot Corporation, USA
Type: Household vacuum cleaning system
Operating mode: Autonomous
Cleaning technology: Vacuum cleaning (dry)
Area of application: Standard rooms in homes
Fig. 53.2 Neo professional floor cleaning robot from Avidbots, operating in a retail setting. (© Avidbots)
Vacuum cleaning is one of the few fields of application for which a mass market has already opened for service robots in general and cleaning robots in particular. The most successful system in this segment is iRobot’s Roomba robot, which consists of a cleaning robot and a base station. The robot stands atop three wheels, of which two are drive wheels and one a caster wheel, and navigates an unfamiliar environment completely autonomously [8].
In principle, it cleans like a classical vacuum cleaner, with a powerful vacuuming unit and rotating brushes configured so that even dirt at the robot's edges is picked up. A dirt-detecting sensor is located in the suction zone. The path being cleaned does not have to be programmed. Equipped with optical ranging sensors and contact sensors, the system detects when it nears an obstacle, proceeds toward it at reduced speed, and changes its path direction. Ranging sensors directed downward detect stairs and drop-offs and generate a change of path direction as well. Path direction is not changed randomly: the robot utilizes sensor data to analyze its environment and can select from a range of motion patterns depending on the situation. Dirt detection is triggered whenever larger quantities of dirt are detected in the suction flow; the robot reacts by increasing suction power and performing movements to continue over the same surfaces until the quantity of dirt drops again. "Virtual Walls" can be used to limit the robot's workspace. These are generated by stations which emit an infrared beam. If its battery charge drops below a critical value or it has finished cleaning, the system returns on its own to its base station, which emits an infrared beam that acts as a guide beam for the robot. Advanced features are a self-emptying dustbin able to collect dust from up to 30 cleaning cycles, the use of bags in combination with the self-emptying station to limit users' exposure to dust, HEPA filtering, and advanced connectivity that allows for a variety of methods to interact with the robot, including voice control, control via an app, and flexible cleaning cycles.

Systems from other manufacturers include:

• The Kärcher RC3 (Germany) is an autonomous vacuuming robot for household use that uses cameras and lasers for navigation. It was the first household appliance to dispose of collected dirt in its docking station [21].
• The iRobot Braava (USA) mops and can be coordinated with a Roomba [18].
• Electrolux [25].
• Ecovacs features a combined vacuum and wet mopping system [15].

Other systems that at least deserve brief mention are from the manufacturers Shark, Samsung, Neato, and LG.
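The reactive behavior described above for the Roomba (slowing near obstacles, avoiding drop-offs, spot-cleaning when the dirt sensor triggers, and returning to the infrared guide beam of the base station when the battery runs low) can be summarized as a simple priority-based behavior selector. The sketch below is illustrative only and is not iRobot's actual control software; all names and thresholds are hypothetical.

```python
# Illustrative priority-based behavior selection for a household vacuum robot.
# Hypothetical names and thresholds; not iRobot's actual control software.
from dataclasses import dataclass


@dataclass
class RobotState:
    battery_pct: float
    cliff_detected: bool        # downward ranging sensors: stairs/drop-offs
    obstacle_near: bool         # optical ranging or contact sensors
    dirt_level: float           # dirt sensor in the suction zone, 0.0-1.0
    cleaning_finished: bool


def select_behavior(s: RobotState) -> str:
    """Pick the highest-priority behavior for the current sensor readings."""
    if s.cliff_detected:
        return "back_up_and_turn"
    if s.battery_pct < 15.0 or s.cleaning_finished:
        return "follow_ir_beam_to_base"     # base station emits an infrared guide beam
    if s.dirt_level > 0.7:
        return "spot_clean"                 # raise suction, re-cover the dirty area
    if s.obstacle_near:
        return "approach_slowly_then_turn"
    return "continue_coverage_pattern"      # normal motion-pattern selection


if __name__ == "__main__":
    state = RobotState(battery_pct=62.0, cliff_detected=False, obstacle_near=True,
                       dirt_level=0.2, cleaning_finished=False)
    print(select_behavior(state))  # -> approach_slowly_then_turn
```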
53.2.2 Pool, Facade, Window, Hull, Solar Panel, Ventilation Duct, and Sewer Line Cleaning Robots

Unlike floor cleaning robots that are normally used on level ground and equipped with batteries, dust reservoirs, and
water tanks, other cleaning systems for facades, pools, pipes, sewers, and ducts are typically supplied power through cables, use hoses to provide cleaning medium, and, where necessary, must have a fall arrester system or a recovery rope. The engineering and overall costs required to implement these essential components, which constitute an indispensable infrastructure, are quite substantial compared with floor cleaning systems. A notable exception here is the emerging class of window cleaning robots for household and private use. These systems have more in common with floor cleaning robots for consumers than with other robots within this category, in terms of technologies used, complexity, and price. Indeed, many window cleaning robots can even be used both for cleaning of windows and for horizontal surfaces like glass or stone countertops and tables. The use of cleaning robots on facades, in pools, pipes, ducts, and sewer lines may require their adaptation to ambient conditions that are highly unusual for automated systems and to environments that are less than ideal for robots. Variable ambient conditions such as humidity, temperature, and light conditions place stringent demands on the automation components and, for example, necessitate adapting and increasing the redundancy of sensor systems for navigation. Given the high expectations on system reliability, this is particularly important.
Pool Cleaning Robots

Swimming pools accumulate large quantities of dirt on a daily basis. The relatively large surfaces gather dirt out of the air and off swimmers. Public swimming pools are subject to hygiene codes with strict water quality requirements that necessitate regularly cleaning the bottom and walls of a pool. Underwater cleaning machines have been in use since the 1970s. Such pool cleaners qualify as service robots since they move and systematically navigate pools autonomously. While the various manufacturers' systems share virtually the same robot engineering concept, their designs, target markets, level of flexibility for choosing cleaning cycles and defining specific areas to clean, and cleaning/filtering systems differ. The systems feature a combination of brushes, a vacuum filtering system, and a motion platform with wheels or tracks for navigating along the different surfaces. Rotating brushes mounted on the front and back of the unit loosen dirt, which is then suctioned into a slot on the unit's underside and pumped through a filter. The water intake on the underside additionally increases the contact pressure, thus facilitating controlled movement on the vertical sides of pools. Pool cleaning robots use a minimum number of sensors to orient themselves and move underwater completely independently. Since pool geometries are usually simple, navigation
logic can also be kept to a minimum. Simple sensor arrays on the fronts and backs of these systems are the elements of an efficient cleaning strategy. Systems differ in their weight (for insertion into and removal from the pool), their convenience (handling of filters, etc.), their cleaning performance (m²/h), and their navigation abilities (a system with advanced sensors and navigation capabilities is better suited for more complexly shaped pools). Pool cleaning robots must be lightweight to enable easy handling during insertion and removal. Hence, battery operation is often not an option since it would not achieve the necessary cleaning performance given the weight limitations. The umbilical also allows for retrieval of the system in case of a malfunction or power outage.

W2000 Pool Cleaning Robot
Manufacturer: Weda, Sweden
Type: Professional pool cleaning system
Operating mode: Autonomous, remote controlled
Cleaning technology: Brushes, water filter system

The base of Weda's W2000 professional pool cleaning robot [31] is a tracked vehicle maneuverable by separately controlling its left and right tracks. It is supplied with 42 V DC to ensure human safety when used while people are in the pool. In manual mode it can move at 0.2–0.4 m/s, and at 0.2 m/s in automatic mode. A water pump suctions in approximately 1200 l of water per minute on the underside and rinses it through a reusable particle filter. This can be compared to common systems for residential use, which manage approximately 250 l/min. The suction generated beneath the system is sufficient to enable the pool cleaner to traverse vertical walls underwater. Rotating brushes loosen dirt particles in front of and behind the unit, moving them in the direction of the suction opening. Power is supplied by a cable connected to a base station on the edge of the pool; the cable is uncoiled and floats on the surface during cleaning.

Other manufacturers of professional and household pool cleaning systems include:

• Maytronics' (Israel) pool robots are intended for residential use and for larger, public pools and waterparks. Maytronics offers the only battery-powered robot. Maytronics products are marketed globally under various names [22].
• Aquatools (USA) offers robots for residential use and smaller public pools [9].
• Mariner 3S (Switzerland) sells professional pool cleaners. Instead of collecting bags, some units use filter cartridges that are cleaned afterward with auxiliary equipment [23].
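To put the quoted flow rates into perspective, a rough back-of-the-envelope comparison can be made of how long each class of system needs to pump one full pool volume through its filter. The pool volume used below is an assumed example, not a figure from the text.

```python
# Rough comparison of how long each flow rate needs to pump one pool volume
# through the filter. The 500 m^3 pool volume is an assumed example; the flow
# rates (1200 l/min professional, ~250 l/min residential) come from the text.
POOL_VOLUME_L = 500 * 1000  # 500 m^3 expressed in liters (assumption)

for label, flow_l_per_min in [("W2000 (professional)", 1200),
                              ("typical residential unit", 250)]:
    hours = POOL_VOLUME_L / flow_l_per_min / 60
    print(f"{label}: ~{hours:.1f} h per pool volume")
# Output: W2000 (professional): ~6.9 h per pool volume
#         typical residential unit: ~33.3 h per pool volume
```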
Facade, Window, and Solar Panel Cleaning Robots

Facade cleaning robots are often remote-controlled systems, used for surfaces that are inaccessible to humans or accessible only with great effort. However, isolated fully automatic systems that operate without human supervision are also in use, particularly in Europe. Some of these systems were designed specifically for a building during its planning phase. Remote-controlled systems in particular are designed to be universal and usable on a variety of buildings, including their infrastructures, without structural modifications. Facade cleaning robots come in many different designs: wheel-driven systems for flat and slightly inclined glass roofs, climbing systems with vacuum cups for sharply inclined and vertical facades, and rail-guided systems. As a rule, facade cleaning robots must be able to navigate obstacles such as window frames and other facade elements. The use of sensors to determine position constitutes a particular challenge since they must deliver reliable data under the widest variety of outdoor weather conditions, from rain to sunshine. All the facade cleaning robots in operation today have cables that supply electrical power and, depending on the system, compressed air, water, and data communications. There are no known systems that operate autonomously on facades without an umbilical. What is more, measures often have to be taken to secure a robot against falling [1, 2, 5]. Such measures have to be integrated into the overall concept and, where necessary, automated for fully automatic systems. Typically, these are expensive due to the engineering costs associated with customized solutions.

Large solar energy farms featuring flat photovoltaic panels have become more widespread in recent years. Regular cleaning is important to ensure a steady energy output over time. In terms of the operating environment and overall system requirements, this area of application is very similar to flat and slightly inclined glass roofs. Indeed, many robots that were designed for these types of facades are also commercially available for solar panel cleaning. Although a limited number of facade cleaning robots have moved beyond the research and prototype stages, these have often been special developments that are used exclusively on the specific facades for which they were designed. Whereas previous systems have been classified according to their locomotion over the glass, their degree of autonomy, and the cleaning technology used, new developments include the use of drones for accessing the facade and/or solar panel surface.

Gekko Surface Glass Cleaning Robot
Manufacturer: Serbot, Switzerland
Type: Professional roof cleaning system
Operating mode: Remote controlled
Cleaning technology: Wet roof cleaning
Area of application: Louvre Pyramid (France)
A remote-controlled robot cleans the glass of the Louvre Pyramid in Paris, France (Fig. 53.3a). Combining vacuum cups with specially designed tracks, the robot travels up and down the triangular surfaces. The absence of any overhead securing device is a special feature of the Louvre robot. Technically, it is designed so that the frictional force of the walking mechanism suffices to prevent it from sliding down the sloping surface in the event of a malfunction. Operators at the bottom of the pyramid supply the robot from below with power and water through a cable and hose. The glass is cleaned by rotating brushes, and a wiper dries the path of travel during downward travel [27].

Compared with other facade cleaning robots, the Louvre system has a very simple design. The facade's minimum of distinctive features substantially reduces complexity. First, the triangular surfaces are completely flat, interrupted only by silicone joints. Consequently, a very simple walking mechanism can be used since there are no obstacles to be navigated. Second, the system cannot fall when it travels across the sloping surfaces, thus making an overhead securing device unnecessary. Safety engineering accounts for a large part of other facade cleaning robots' systems and weight. Third, the sloping smooth surfaces allow water from cleaning to simply run off. This simplifies the cleaning system. Fourth, the space around the pyramid is navigable by a vehicle and the system can be recovered with a small trolley at any time. Its relatively short cleaning paths enable supplying the robot from below. The requisite cables and hoses are simply pulled along the glass. Interestingly, the same system can be used to clean large solar panels (Fig. 53.3b).
Glass Roof Cleaning Robot for the Glass Hall of the Leipziger Messe
Developer: Fraunhofer IFF, Germany
Type: Professional roof cleaning system
Operating mode: Autonomous
Cleaning technology: Wet roof cleaning
Area of application: Glass hall of the Leipziger Messe (Germany)

Two fully automatic cleaning robots have been cleaning the 25,000 m² glass hall of the Leipziger Messe in Germany since 1997 (see Fig. 53.4). The roof consists of a glass facade suspended from a steel structure. Accessing it with a gantry alone or other access equipment would be extremely complicated. A gantry transports the robots along the roof ridge and uses small hoists to lower them onto the glass surface between the pane mounts. The robots then move downward under the steel trusses and between the mounts, cleaning the glass. Upon returning to the top, the robots are picked up by the hoists and shifted to the next path. A broad roller brush cleans the entire surface of the facade lane by lane, starting at the eastern end of the roof and ending at the western end. Since the robots are unable to move around the mounts, these areas are cleaned by disc brushes on retractable arms. Chemical-free deionized water is sprayed onto the glass to moisten and wash away dirt mobilized by the brushes. The gantry on the roof ridge secures each cleaning robot with two Dyneema ropes and supplies each with power and water through its own cable and hose. To prevent damag-
Fig. 53.3 Gekko cleaning robot for the Louvre in Paris (a), and for solar panel cleaning (b). (© Serbot)

Fig. 53.4 Fraunhofer IFF cleaning robot for the glass hall in Leipzig. (© Fraunhofer IFF)

Fig. 53.5 SIRIUSc facade cleaning robot for automatic cleaning of high-rise buildings. (© Fraunhofer IFF)
ing the panes of glass and silicone seals, hose, cable, and securing ropes are coiled and uncoiled inside a robot and thus laid down on the glass instead of being dragged over it. Furthermore, since the bearing wheels are not driven, the two securing ropes are used to correct the robot's direction of travel. A fifth wheel provides the necessary propulsion only in flat areas. The steel structure limits the size a robot may have: at an overhead clearance of 38 cm, the robot's height is just 30 cm. The travel path is 45 m long and runs from the ridge between the pane mounts down into the eaves. The distance between mounts limits the robot's width to 1.5 m. Odometer measurements of the distance covered by two wheels are supplemented by eddy current sensor measurements of the distance covered, using the mounts as reference marks. In addition, the distance of the mounts to the robot's left and right is used to correct the path direction, controlled by adjusting the coiling of the two ropes. Expansion of the hall due to heat, as well as the general tolerances of the mounts' uniformity, makes navigation between mounts more challenging. The gantry and robots move fully automatically and are monitored from a master control room where exact positions and actions are displayed. The robots also accept abstract commands from a manual control menu [7].

Other facade cleaning robots include:

• The Fraunhofer IFF's SIRIUSc, which cleans the vertical facades of the Fraunhofer-Gesellschaft's headquarters in Munich, Germany (Fig. 53.5) [6].
• Skyline Robotics, based in Tel Aviv, Israel, uses two standard industrial robots atop an automated gantry for cleaning [28].
• The lightweight robot CleanKong from Erylon in France hangs from the rooftop hoist and is connected to the hoist via an umbilical [16].

A notable new development in the field of facade cleaning is the use of drones as the means of positioning the cleaning unit along the facade. CleanDrone [12] is one example of this new approach. Particular challenges of this approach are the limited payload capability (especially when considering the water needed for cleaning glass), accurate positioning despite wind and other environmental conditions, and of course human safety considerations.
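The lateral path correction described for the Leipzig glass-hall robot above (measuring the distance to the pane mounts on the robot's left and right and adjusting the coiling of the two securing ropes) amounts to a simple feedback correction. The sketch below is purely illustrative; the gain, sign convention, and interface are hypothetical and not taken from the Fraunhofer IFF system.

```python
# Illustrative proportional correction of travel direction from mount distances.
# The gain, sign convention, and interface are hypothetical; this is not the
# actual Fraunhofer IFF controller.
def rope_coil_correction(dist_left_m: float, dist_right_m: float,
                         gain: float = 0.05) -> tuple[float, float]:
    """Return (left_rope_delta_m, right_rope_delta_m): rope to coil (+) or pay out (-).

    If the robot has drifted toward the left row of mounts (dist_left < dist_right),
    coiling the right rope slightly while paying out the left rope steers it back
    toward the center of its lane.
    """
    error = dist_right_m - dist_left_m  # > 0 means the robot sits left of center
    correction = gain * error
    return (-correction, +correction)


if __name__ == "__main__":
    print(rope_coil_correction(0.60, 0.90))  # drifted left: approx. (-0.015, 0.015)
```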
Hull Cleaning Robots

Cleaning systems for ship hulls have been developed and tested in research for quite some time. There is a strong environmental and economic case for hull cleaning systems: ship fuel costs and the associated CO2 emissions can be reduced by as much as 15% due to the increased efficiency from a reduction in the ship's drag. These systems are designed to clean underwater, eliminating the need for dry-docking, and are typically used when the ship is stationary, e.g., in a harbor.

Jotun Hull Skater
Developer: Jotun
Type: Professional ship hull cleaning system
Operating mode: Remote controlled
Cleaning technology: Brushes

The battery-powered hull cleaning system (Fig. 53.6) rolls on magnetic wheels and uses brushes to clean the surface of
the ship while it is stationary, in a dock or harbor [20]. The system is connected to the Internet and alerts crew members when a cleaning cycle is needed. The umbilical is for a data connection, so that an operator (typically not located on the ship) can see the camera data and control the position of the robot.

Fig. 53.6 Jotun hull cleaning robot. (© Semcon)

Other hull cleaning robots are manufactured by:

• Ecosubsea – its system also collects the removed debris, with a 97.5% collection rate, minimizing the spread of invasive species between continents; it is used in ports across Europe [14]
• SeaRobotics – semiautonomous hull cleaning with ROVs [26]

Ventilation Duct and Sewer Line Cleaning Robots

Cleaning systems for pipes, sewers, and ducts represent a sizeable market since their inaccessibility often prevents humans from being able to clean them without cleaning automation: pipe and ventilation duct diameters are too small, or areas may be hazardous to health or potentially explosive, e.g., in the petroleum industry or sewage disposal. Thousands of different systems for pipe, duct, and sewer line cleaning exist all over the world. As a rule, they are remote controlled or semiautomatic and move on wheels or tracks. They normally do not navigate autonomously and incorporate video cameras to display their environment to the operator. A cable connects these systems to a supply and control station. While this limits the systems' radius of action, it ensures that they are highly reliable, are supplied with cleaning medium, and can be recovered from pipes, sewers, and ducts with certainty. Cleaning methods vary widely depending on the application and include brushes, water, high-pressure water, and dry ice. Regular cleaning of sewer lines is a basic measure to ensure they operate reliably. Cleaning of ventilation ducts, on the other hand, is not essential to operational reliability but arguably to protect people's health. Although sewer lines and ventilation ducts have similar geometric properties, the boundary conditions for cleaning robot operations differ fundamentally. Two systems serve as examples of ventilation duct and sewer line cleaning, respectively. With both types of systems, a key issue is the removal of debris and materials from the duct or pipe. Many duct cleaning systems work in conjunction with aspirator units to remove unwanted material. Removal of sediments in sewer pipes is a larger challenge; the sediment is either moved downstream by the flow itself or removed with specialized shovel/dredging systems [2].
Multipurpose Duct Cleaning Robot
Manufacturer: Danduct Clean, Denmark
Type: Professional duct cleaning system
Operating mode: Remote controlled
Cleaning technology: Brushes, dry ice cleaning

The Danish company Danduct Clean's multipurpose robot [13] is a universal system that cleans and inspects ventilation ducts. An all-wheel-drive robot platform serves as the carrier system. The overall system does not have any collision sensors; however, two cameras directed toward the front and the rear relay a direct impression of the robot's environment. The carrier system is outfitted with different cleaning systems depending on the application. Rotating brushes are used in dry ventilation ducts to remove dust clinging to the walls. Various brush systems are available depending on the duct geometry. A dry ice cleaning system is employed in the exhaust areas of kitchens and the like, where sizeable grease deposits form in ventilation ducts. Irrespective of the cleaning system used, cables with lengths of up to 30 m supply media and transmit data to an external control box. The robot is controlled from the control box by joystick, based on camera feedback. Cruise control is additionally available for long, straight duct sections. To effectively remove loosened dirt, inflatable balloons seal off the duct section being cleaned. A powerful suction unit is hooked up to the duct opening in the direction of work and extracts loosened dirt from the duct.

Other duct cleaning robots are manufactured by:

• TEINNOVA (Spain) [29]
• Jettyrobot (Czech Republic) [19]

Sewer Cleaning Robot
Developer: Fraunhofer IFF, Germany
Type: Professional sewer cleaning system
Operating mode: Automatic
Cleaning technology: High pressure

A few sewer cleaning robots exist for inaccessible sewer lines. Some remote-controlled inspection robots for small
Fig. 53.7 Sewer cleaning robot for the Emscher Sewer System. (© Fraunhofer IFF)
sewer line diameters can be outfitted with high-pressure nozzles. As a rule, however, small sewer lines are cleaned with cleaning nozzles that utilize high-pressure water for propulsion. In cooperation with the Emschergenossenschaft in Germany, the Fraunhofer IFF developed a fully automatic cleaning robot (Fig. 53.7) for sewer lines with diameters of 1600–2800 mm. The system employs an ejector nozzle to mobilize deposits on the bottom and high-pressure water to clean the pipe wall in the areas above water, since these sewer lines are partially filled (normally between 20% and 40% full) at all times. The system can clean sewers up to 1200 m in length. The wheel-driven cleaning system is roughly 4 m long and weighs approximately 2.5 tons. An ultrasonic scanner monitors the cleaning results underwater and cameras monitor the area above water. A specially equipped vehicle at street level supplies the cleaning system with up to 250 liters of water per minute over a distance of up to 1200 m at a nozzle pressure of over 100 bar. The cleaning system dependably navigates the sewer line autonomously, and system recovery is ensured through the cable connection [3, 17].
53.3 Emerging Trends
Especially in the domain of cleaning, service robots have proven themselves as a viable option for relieving people of dangerous, stressful, and/or monotonous work, both in the household and professional sectors. The technology available to household systems is rapidly improving, and systems ranging from simple and low-cost designs to intelligent systems are already being sold in large numbers. Professional systems are technically complex and
often cost-effective. However, there are many situations where they fail to fulfill all the required criteria and have therefore not yet established themselves as mass products. This is in sharp contrast to expectations from 10 to 20 years ago, especially in the fields of professional floor and facade cleaning. Nevertheless, numerous individual solutions exist for special applications. Recent advances in sensors and AI/machine learning techniques have helped improve the capabilities of cleaning systems through better navigation and situational awareness in dynamic environments. While cleaning technologies themselves have generally remained the same in the past years, the overall ability of the systems to save operators time has improved through these advances, with features like automatic charging, faster cleaning cycles, and Internet connectivity. Recent partnerships between companies with expertise in cleaning and in robot navigation underline these trends and show a path to market.

Connectivity and sharing of data are another trend that will advance in the future. Current families of systems from the same manufacturer are able to share maps of environments with each other (e.g., iRobot vacuuming and mopping robots can share maps). Future systems will be more adept at sharing information with ambient sensors and other sources of data to increase efficiency and comfort.

The recent use of drones for cleaning tasks plays well to their strength of accessing difficult-to-reach surfaces. Nevertheless, the typical challenges of supplying energy and providing cleaning media remain the same. Thus, it remains to be seen whether drones will extend beyond niche applications in modern automated cleaning.
Literature

Journals

1. Kim, J., Mishra, A.K., Limosani, R., Scafuro, M., Cauli, N., Santos-Victor, J., Mazzolai, B., Cavallo, F.: Control strategies for cleaning robots in domestic applications: a comprehensive review. Int. J. Adv. Robotic Syst. (2019). https://doi.org/10.1177/1729881419857432
2. Seo, T., Jeon, Y., Park, C., et al.: Survey on glass and façade-cleaning robots: climbing mechanisms, cleaning methods, and applications. Int. J. Precis. Eng. Manuf.-Green Tech. 6, 367–376 (2019). https://doi.org/10.1007/s40684-019-00079-4
3. Walter, C., Saenz, J., Elkmann, N., Althoff, H., Kutzner, S., Stuerze, T.: Design considerations of robotic system for cleaning and inspection of large-diameter sewers. J. Field Robotics. 29, 186–214 (2012). https://doi.org/10.1002/rob.20428
4. World Robotics Report 2019. International Federation of Robotics (IFR) (2019)
5. Yoo, S., et al.: Unmanned high-rise façade cleaning robot implemented on a gondola: field test on 000-Building in Korea. IEEE Access. 7, 30174–30184 (2019). https://doi.org/10.1109/ACCESS.2019.2902386
Proceedings

6. Elkmann, N., Kunst, D., Krueger, T., Lucke, M., Stuerze, T.: SIRIUSc: fully automatic facade cleaning robot for a high-rise building in Munich, Germany. In: Proceedings of the Joint Conference on Robotics ISR 2006/Robotik 2006, Munich (2006)
7. Elkmann, N., Schmucker, U., Boehme, T., Sack, M.: Service robots for facade cleaning. In: Advanced Robotics: Beyond 2000, The 29th International Symposium on Robotics, Birmingham, 27 April–1 May 1998
8. Kleiner, A., Baravalle, R., Kolling, A., Pilotti, P., Munich, M.: A solution to room-by-room coverage for autonomous cleaning robots. In: 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver (2017). https://doi.org/10.1109/IROS.2017.8206429
Internet Links

9. https://aquaproducts.com/commercial/. Accessed 28 Sept 2020
10. https://www.avidbots.com/commercial-floor-cleaning-machines/airport-cleaning-robot/. Accessed 28 Sept 2020
11. https://www.braincorp.com/robotic-floor-care. Accessed 28 Sept 2020
12. http://www.cleandrone.com/. Accessed 28 Sept 2020
13. www.danduct.com. Accessed 28 Sept 2020
14. https://www.ecosubsea.com/. Accessed 28 Sept 2020
15. https://www.ecovacs.com. Accessed 28 Sept 2020
16. https://www.erylon.com/. Accessed 28 Sept 2020
17. www.iff.fraunhofer.de/en/robotersysteme.htm. Accessed 28 Sept 2020
18. www.irobot.com. Accessed 28 Sept 2020
19. https://www.jettyrobot.com/. Accessed 28 Sept 2020
20. https://jointherevhullution.com/. Accessed 28 Sept 2020
21. www.karcher.com. Accessed 28 Sept 2020
22. www.maytronics.com. Accessed 28 Sept 2020
23. https://www.mariner-3s.com/en/home. Accessed 28 Sept 2020
24. https://www.lionsbot.com/robots/floor-cleaning/. Accessed 28 Sept 2020
25. https://purei9.com/. Accessed 28 Sept 2020
26. https://www.searobotics.com/. Accessed 28 Sept 2020
27. https://www.serbot.ch/en/facades-surfaces-cleaning/gekko-surface-robot. Accessed 28 Sept 2020
28. https://www.skylinerobotics.com/. Accessed 28 Sept 2020
29. https://teinnovacleaning.com. Accessed 28 Sept 2020
30. https://www.theverge.com/2020/5/7/21249319/pittsburgh-airport-uv-robots-coronavirus-cleaning-disinfecting (2020). Accessed 28 Sept 2020
31. www.weda.se. Accessed 28 Sept 2020
Norbert Elkmann received his diploma in mechanical engineering from the University of Bochum (Germany) in 1993 and his doctorate in mechanical engineering from the Vienna University of Technology (Austria) in 1999. Since 1998, he has headed the Fraunhofer IFF’s Robotic Systems Business Unit in Magdeburg, where he leads a staff of currently 28 researchers. His research interests include cleaning and inspection robots, safe human–robot collaboration, and cognitive mobile robots. He has published more than 100 papers. In November 2015, he was awarded an honorary professorship in assistant robotics at Otto von Guericke University Magdeburg.
José Saenz earned a B.S. in mechanical engineering from Stanford University in 1999, an M.S. in mechatronics from Otto von Guericke University Magdeburg in 2004, and a doctorate in automation from the École Nationale Supérieure d’Arts et Métiers in 2019. He is leader of the group Assistance, Service and Industrial Robotics in the Business Unit Robotic Systems and has been at the Fraunhofer Institute for Factory Operation and Automation in Magdeburg since 2000. His main research interests are in the fields of safe human–robot collaboration, mobile manipulation, inspection and cleaning service robots, and safety sensor development. He has served in the euRobotics aisbl Board of Directors since 2017.
Library Automation and Knowledge Sharing
54
Paul J. Bracke, Beth McNeil, and Michael Kaplan
Contents
54.1 In the Beginning: Book Catalogs and Card Catalogs ............ 1171
54.2 Foundations for the Digital Age: Indexes and the Beginnings of Online Search ............ 1172
54.3 Development of the MARC Format and Online Bibliographic Utilities ............ 1173
54.3.1 Integrated Library Systems ............ 1173
54.3.2 Integrated Library Systems: The Second Generation ............ 1174
54.3.3 The Shift to End-User Search Indexes ............ 1177
54.4 The First Generation of Digital Library Tools ............ 1177
54.4.1 OpenURL Linking and the Rise of Link Resolvers ............ 1177
54.4.2 Metasearching ............ 1178
54.4.3 Electronic Resource Management ............ 1178
54.5 Digital Repositories ............ 1179
54.6 Library Service Platforms: The Third Generation Library Automation System ............ 1181
54.6.1 From OPAC to Discovery ............ 1181
54.6.2 The Library Service Platform ............ 1182
54.7 Evolving Data Standards and Models, Linked Data ............ 1183
54.8 Two Future Challenges ............ 1184
References ............ 1184

Abstract
Library automation has a rich history of more than 130 years of development, from the standardization of card catalogs to the creation of the machine-readable cataloging (MARC) communications format and bibliographic utilities. Beginning in the early 1980s university libraries and library automation vendors pioneered the first integrated library systems (ILS). The digital era, characterized by the proliferation of content in electronic format, brought with it the development of services for casual users as well as scholarly researchers – services such as OpenURL linking and metasearching and library staff tools such as electronic resource management systems. Libraries have now developed approaches to search that integrate data from the many disparate content sources they have historically managed, as well as the new systems being developed for digital object management. Additionally, the proliferation of services that emerged in the first decade of this century is being consolidated into emergent library service platforms that provide flexible frameworks for managing the full range of library collections. These developments have allowed libraries to evolve their mission of knowledge sharing from aggregating knowledge created globally for a local community to one in which they also aggregate and disseminate locally created knowledge to the world.

Keywords
Library catalog · Library automation · Repositories · Discovery · Knowledge sharing

P. J. Bracke
Gonzaga University, Spokane, WA, USA
e-mail: [email protected]

B. McNeil
Purdue University, West Lafayette, IN, USA
e-mail: [email protected]

M. Kaplan
Ex Libris Ltd., Newton, MA, USA
e-mail: [email protected]

© Springer Nature Switzerland AG 2023
S. Y. Nof (ed.), Springer Handbook of Automation, Springer Handbooks, https://doi.org/10.1007/978-3-030-96729-1_54
54.1 In the Beginning: Book Catalogs and Card Catalogs
Libraries have a long history of facilitating knowledge sharing at scale. Having developed to aggregate knowledge in the form of print publications, libraries faced challenges with the management of collections at scale, and became enthusiastic adopters of automated means for carrying out their work. In many ways one can date the beginning of post-industrial era library automation to the development of the library catalog
Fig. 54.1 Example of a handwritten catalog card. (Courtesy of University of Pennsylvania [3])
Fig. 54.2 Example of OCLC-produced catalog (shelflist) card. (Courtesy of OCLC [4])
card and the associated card catalog drawer [1, 2]. The nature of the original library catalog card can be gleaned from this patent description of the so-called continuous library catalog card that had evolved to take advantage of early computer-era technology by the 1970s [1]:
a continuous web for library catalogue cards having a plurality of slit lines longitudinally spaced 7.5 cm apart, each slit line extending 12.5 cm transversely between edge carrier portions of the form such that upon removal of the carrier portions outwardly of the slit lines a plurality of standard 7.5 cm × 12.5 cm catalogue cards are provided. Longitudinally extending lines of uniquely shaped feed holes or perforations are provided in the carrier portions of the form which permit printing of the cards by means of existing high speed printers of United States manufacture despite the cards being dimensioned in the metric system and the feed of the printers being dimensioned in the English system.

Card catalogs and card catalog drawers are well described in this patent that pertains to an update to the basic card catalog drawer [2]:

an apparatus for keeping a stack of catalogue cards in neat order to be used, for example, in a library. If necessary, a librarian can replace a damaged catalogue card with a new one by just pressing a button at the bottom of the drawer to enable a compression spring to eject the metal rod passing through a stack of catalogue cards. Consequently, the librarian can freely rearrange the catalogue cards.

Though these devices continued to evolve, the originals, when standardized at the insistence of Melvil Dewey in 1877, led eventually to the demise of large book catalogs, which, despite their longevity, were at best unwieldy and difficult to update [5]. As we will see later, the goal here was one of increased productivity and cost reduction, not that different from the mission espoused by late twentieth-century computer networks (Figs. 54.1 and 54.2). The development and adoption of the standard catalog card presaged the popular Library of Congress printed card set program that, together with a few commercial imitators, was the unchallenged means of universalizing a distributed cataloging model until the mid-1970s. The printed card set program, in turn, was succeeded by computerized card sets from the Ohio College Library Center (OCLC), later renamed the Online Computer Library Center, Inc., and later still, renamed OCLC; the Research Libraries Group (RLG); and other bibliographic utilities. This was largely due to the advantage the utilities had of delivering entire production runs of card sets that could be both customized in format and at the same time delivered with card packs already presorted and alphabetized.
54.2 Foundations for the Digital Age: Indexes and the Beginnings of Online Search
By the mid-1950s computer scientists and others were imagining what would become the beginnings of the digital age. What we know today as Clarivate's Web of Science originated in 1955 with an idea by Eugene Garfield about citation indexing and searching [6]. The National Library of Medicine (NLM) developed MEDLARS, the Medical Literature Analysis and Retrieval System, in the mid-1960s. By then, discussions of standards for indexing were well underway, and in 1964 the National Bureau of Standards hosted a symposium, "Statistical Association Methods for Mechanized Documentation," which included topics of "historical foundations, background and principles of statistical association techniques as applied to problems of documentation, models and methods of applying such techniques, applications to citation indexing, and tests, evaluation methodology and criticism" [7]. The first online systems were developed in the early 1970s, including NLM's Medline and Lockheed's Dialog. In 1975 Gerard Salton, a computer science professor at Cornell University and developer of the SMART information retrieval system, which became the basis for many future systems, published several influential works on indexing, automated text analysis, and automated indexing. In 1991 Salton wrote, "it is easy to store large masses of information, but storage in itself is of no value unless systems are designed that make selected items available to interested users" [8]. Early online search systems relied largely on command-language searching. A search plan was necessary for finding relevant results, and Boolean logic and descriptor terms were important aspects of early search strategy [9].
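To make the role of Boolean logic and descriptor terms concrete, here is a minimal illustrative sketch in Python; the records and descriptor vocabulary are invented, and the sketch mimics the logic of early command-language searching rather than the actual syntax of any particular system such as Dialog or Medline.

# A toy illustration of Boolean searching over descriptor terms.
# The records and descriptors are invented for demonstration only.
records = {
    1: {"libraries", "automation", "cataloging"},
    2: {"libraries", "history"},
    3: {"automation", "manufacturing"},
    4: {"libraries", "automation", "networks"},
}

def search(include_all=(), include_any=(), exclude=()):
    """Return record IDs whose descriptor sets satisfy the Boolean query."""
    hits = []
    for rec_id, descriptors in records.items():
        if all(term in descriptors for term in include_all) and \
           (not include_any or any(term in descriptors for term in include_any)) and \
           not any(term in descriptors for term in exclude):
            hits.append(rec_id)
    return hits

# (libraries AND automation) NOT manufacturing
print(search(include_all=("libraries", "automation"), exclude=("manufacturing",)))
# -> [1, 4]

A searcher of the era would have planned such a combination of descriptors in advance, since connect time and mediated searching made trial-and-error querying expensive.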
54.3 Development of the MARC Format and Online Bibliographic Utilities
The success of the bibliographic utilities coincided with several other developments that together led to the library automation industry. These developments were the MARC communication format, the development and evolution of university homegrown and then vendor-based library systems, and finally the growth of the networking capabilities that we now know as the Internet and the World Wide Web. In fact, without the development of the MARC format, library automation would not have been possible. Taken as a whole, these milestones bookend 130 years of library automation. Two extraordinary individuals, Henriette Avram and Frederick Kilgour, were almost single-handedly responsible for making possible library automation as we know it today. During her tenure at the Library of Congress, Avram oversaw the development of the MARC format; it emerged as a pilot program in 1968, became a US national standard in 1971, and an international standard in 1973 [10]. The MARC standard bore a direct connection to the development of OCLC and the other bibliographic utilities that emerged during the 1970s. RLG, the Washington Library Network (WLN), and the University of Toronto Library Automation System (UTLAS) all sprang from the same common ground. Kilgour, who had served earlier at Harvard and Yale Universities, moved to Ohio in 1967 to establish OCLC as an online shared cataloging system [11]. OCLC now has a dominant, worldwide presence, serving over 57,000 libraries in 112 countries with a database that comprises in excess of 168 million records (including articles) and 1.75 billion holdings. From the beginning the goals of library automation were twofold: to reduce the extremely labor-intensive nature of the profession while increasing the level of standardization across the bibliographic landscape. It was, of course, obvious that increasing standardization should lead to reduced labor costs. It is less clear that actions on the bibliographic “production room floor” made that possible. For many years these goals were a core part of the OCLC mission – the most successful modern (and practically sole surviving) bibliographic utility. Founded in 1967, OCLC is a [12]
nonprofit, membership, computer library service and research organization dedicated to the public purposes of furthering access to the world's information and reducing information costs.
In the early years of library automation, libraries emphasized cost reduction as a primary rationale for automation. In the early 1970s a number of so-called bibliographic utilities arose to provide comprehensive and cooperative access to a database composed of descriptive bibliographic (or cataloging) data – now commonly known as metadata. One by one these bibliographic utilities have vanished. In fact, in 2007 OCLC absorbed RLG and migrated its bibliographic database and bibliographic holdings into OCLC's own WorldCat, thus leaving OCLC as the last representative of the utility model that arose in the 1970s. A hallmark of library automation systems is their evolution from distinct, separate modules to large, integrated systems. On the level of the actual bibliographic data this evolution was characterized by the migration and merger of disparate files that served different purposes (e.g., acquisitions, cataloging, authority control, circulation) into a single bibliographic master file. The concept of such a master file is predicated on the avoidance of duplicative data entry; that is, data should enter the system once and then be repurposed however needed. A truly continuous, escalating value chain of information, from publisher data to library bibliographic data, still does not exist. Libraries, especially national libraries such as the Library of Congress, routinely begin their metadata creation routines either at the keyboard or with suboptimal data. This often includes a reliance on third-party subscription agents and the use of an electronic cataloging-in-publication program by the Library of Congress in which bibliographic records are created for books that have yet to be published.
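Since the MARC format recurs throughout this chapter, a rough sketch of the record structure it standardizes may be useful here. The example below models the familiar tag/indicator/subfield layout of a MARC 21 bibliographic record as plain Python data; the field values are invented, and this is an illustration of the format's general shape rather than of any particular vendor's implementation.

# A minimal, illustrative model of MARC 21 bibliographic data.
# Field tags (e.g., 100 = main entry/personal name, 245 = title statement,
# 260 = publication information) follow MARC 21 conventions; the record
# content and the leader string are invented placeholders.
record = {
    "leader": "00000nam a2200000 a 4500",     # fixed-length record label
    "fields": [
        {"tag": "100", "ind1": "1", "ind2": " ",
         "subfields": [("a", "Doe, Jane.")]},
        {"tag": "245", "ind1": "1", "ind2": "0",
         "subfields": [("a", "An example title :"),
                       ("b", "a subtitle /"),
                       ("c", "Jane Doe.")]},
        {"tag": "260", "ind1": " ", "ind2": " ",
         "subfields": [("a", "West Lafayette :"),
                       ("b", "Example Press,"),
                       ("c", "2023.")]},
    ],
}

def title_statement(rec):
    """Reassemble the 245 field the way a catalog display might."""
    for field in rec["fields"]:
        if field["tag"] == "245":
            return " ".join(value for _, value in field["subfields"])
    return None

print(title_statement(record))

It is this shared, machine-readable structure that allowed a record keyed once at one institution to be reused by thousands of others.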
54.3.1 Integrated Library Systems
The story of modern library automation began with a few pioneering individuals and libraries in the mid to late 1960s [13–17]. In addition to Avram and Kilgour, one of the more notable was Herman Fussler at the University of Chicago, whose efforts were supported by the National Science Foundation. Following the initial experiments, the university began to develop a new system, one of whose central concepts was in fact the bibliographic master file [13]. The University of Chicago was not alone. Stanford University (with Bibliographic Automation of Large Library Operations using Time Sharing (BALLOTS)), the Washington Library Network, and perhaps most importantly for large academic and public libraries, Northwestern University (NOTIS) were all actively investigating and developing systems. NOTIS was emblematic of systems developed through the mid-1980s. NOTIS and others (e.g., Hebrew University's
Automated Library Expandable Program (Aleph 100) that became the seed of Ex Libris and the Virginia Tech Library System (VTLS)) were originally developed in university settings. In the 1980s NOTIS was commercialized and later sold to Ameritech in the heyday of AT&T’s breakup into a series of Baby Bells. The NOTIS management team eventually moved on and started Endeavor Information Systems, which was sold to Elsevier Science and then in turn to the Ex Libris Group in 2006. Other systems, aimed at both public and academic libraries, made their appearance during the 1970s and 1980s. Computer Library Services Inc. (CLSI), Data Research Associates (DRA), Dynix, GEAC, Innovative Interfaces Inc. (III), and Sirsi were some of the better known. Today only Ex Libris, III, and SirsiDynix (merged in 2005) survive as major players in the integrated library system arena. The Harvard University Library epitomizes the various stages of library automation on a grand scale. Under the leadership of the fabled Richard DeGennaro, then Associate University Librarian for Systems Development, Harvard University’s Widener Library keyed and published its manual shelflist in 60 volumes between 1965 and 1979 [18]. At the same time the university was experimenting with both circulation and acquisitions applications, the latter with the amusing moniker, Computer-Assisted Ordering System (CAOS), later renamed the Computer-Aided Processing System (CAPS). In 1975 Harvard also started to make use of the relatively young OCLC system. As with other institutions, Harvard initially viewed OCLC as a means to more efficiently generate catalog cards [19]. In 1983 Harvard University decided to obtain the NOTIS source code from Northwestern University to unify and coordinate collection development across the 100 libraries that constituted the vast and decentralized Harvard University Library system. The Harvard system, HOLLIS, served originally as an acquisitions subsystem. Meantime, the archive tapes of OCLC transactions were being published in microfiche format as the Distributable Union Catalog (DUC), for the first time providing distributed access to a portion of the Union Catalog – a subset of the records created in OCLC. It was not until 1987 that the catalog master file was loaded into HOLLIS. In 1988 the HOLLIS OPAC (Online Public Access Catalog) debuted, eliminating the need for the DUC [19]. It was, in fact, precisely the combination of the MARC format, bibliographic utilities, and the emergence of local (integrated) library systems that together formed the basis for the library information architecture of the mid to late 1980s. Exploiting the advantages presented by these building blocks, the decade from 1985 to 1995 witnessed rapid adoption and expansion in the field of library automation, characterized by maturing systems and increasing experience in networking. Most academic and many public libraries had an Integrated
Library System (ILS) in place by 1990. While the early ILS systems developed module by module, ILS systems by this time were truly integrated. Data could finally be repurposed and reused as it made its way through the bibliographic lifecycle from the acquisitions module to the cataloging module to the circulation module. Some systems, it is true, required overnight batch jobs to transfer data from acquisitions to cataloging, but true integration was becoming more and more the norm. All this time, the underlying bibliographic databases were largely predicated on current acquisitions. The key to an all-encompassing bibliographic experience lay in retrospective conversion (recon), as demonstrated by Harvard University Library's fundamental commitment to recon. Between 1992 and 1996 Harvard University added millions of bibliographic records in a concerted effort to eliminate the need for its thousands of catalog card drawers and provide its users with complete online access to its rich collections. Oxford University and others soon followed. While it would be incorrect to say that recon is a product of a bygone era, most major libraries do indeed manage the overwhelming proportion of their collection metadata online. This has proven to be the precursor to the mass digitization projects, such as the Google Books project, which have depended to a large degree on the underlying bibliographic metadata, much of which was consolidated during the recon era. In the early years of library automation, library systems and library system vendors moved from time-sharing, as was evident in the case of BALLOTS, to large mainframe systems. (Time-sharing involves many users making use of time slots on a shared machine.) Library system vendors normally allied themselves with a given hardware provider and specialized in specific operating system environments. The advent of the Internet, and more especially the World Wide Web, fueled by the growth of systems based on Unix (and later Linux) and relational database technology (most notably, Oracle) – all combined with the seemingly ubiquitous personal computer – gave rise to the second generation in library automation, beginning around 1995.
54.3.2 Integrated Library Systems: The Second Generation
Libraries, often at the direction of their governing bodies, made a generational decision during this period to migrate from mainframe to client–server environments. Endeavor Information Systems, founded with the goal of creating technology systems for academic, special, and research libraries, was first out of the starting blocks and had its most successful period for about a decade following its incorporation in 1994 [20]. It was no accident, given its roots in NOTIS and its familiarity with NOTIS software and customers, that
Fig. 54.3 Endeavor (now Ex Libris) WebVoyáge display
Endeavor quickly captured the overwhelming share of the NOTIS customer base that was looking to move to a new system. This was particularly true in the period immediately preceding the pivotal year 2000, when Y2K loomed large. In planning for the shift to the year 2000 library computer systems faced the same challenge all other legacy computer systems faced, namely, reworking computer code to handle calendar dates once the new millennium had begun. So many libraries, both large and small had to replace computer systems in the very late 1990s that 1998 and 1999 were banner years in the library computer marketplace, followed by a short-term downturn in 2000 (Fig. 54.3). Ex Libris Ltd., a relative latecomer to the North American library market, was founded in Israel in 1983, originally as ALEPH Yissum, and was commissioned to develop a state-of-the-art library automation system for the Hebrew University in Jerusalem. Incorporated in 1986 as Ex Libris, a sales and marketing company formed to market Aleph, Ex Libris broke into the European market in 1988 with a sale to CSIC, a Spanish network of 80 public research organization libraries affiliated with the Spanish Ministry of Science and Technology. This sale demonstrated an early hallmark of Aleph, namely deep interest in the consortial environment.
At roughly the same time as Endeavor was releasing its web-based client–server system, Voyager, Ex Libris released the fourth version of the Aleph system, Aleph 500. Based on what was to become the industry standard architecture, Aleph featured a multi-tier client–server architecture, support for a relational database, and – a tribute to its origins and world outlook – support for the Unicode character set. Starting in 1996, with Aleph 500 as its new ILS stock-in-trade, Ex Libris created a US subsidiary and decided to make a major push into the North American market. At the same time – and crucially for its success in the emerging digital content market – Ex Libris made the strategic commitment to invest heavily in research and development with the stated goal of creating a unified family of library products that would meet the requirements of the evolving digital library world (Fig. 54.4). The significance of this generation of integrated library systems is best characterized by two developments. First, libraries and library automation vendors became more deeply engaged in developing and adopting standards that would allow for interoperability beyond the library world. Library standards, e.g., the use of the MARC format and the American Standard Code for Information Interchange (ASCII) character
Fig. 54.4 Catalog record from University of Iowa InfoHawk catalog, with parallel Roman alphabet and Chinese fields. (Courtesy of University of Iowa)
set, needed to be considered in light of the larger universe of information standards, such as XML (ONIX) and Unicode. Similarly, there was increased interest in leveraging communications standards such as Z39.50 to facilitate the integration of library systems with external applications. Second, the move from Telnet and Gopher-based search interfaces to web-based ones positioned library systems for visibility and usability in an emerging online world. Those libraries and library vendors that had the most robust understanding of this fundamental shift were the ones best positioned to thrive amidst the changes that would come. Over the next decade, however, libraries began to deal with new challenges related to collection management that were not always well supported by this generation of integrated library systems. Some of these challenges were related to emerging electronic formats. Historic acquisitions and cataloging practices, for example, were not well suited to the explosion in electronic content being acquired by libraries. OPACs had limitations in displaying electronic content in a manner consistent with users' experiences
elsewhere on the web [21–23]. Going beyond changes in formats, libraries were increasingly looking to automated and, in some cases, partially outsourced models for transforming acquisitions, cataloging, and physical processing functions. Approval plans, partnerships between libraries and book vendors that had emerged in the 1960s to automatically supply libraries with books that met selection criteria, evolved in several important respects. First, workflows between libraries and vendors now often involved loading of acquisitions and early cataloging data via EDI [24, 25]. Second, more libraries took advantage of shelf-ready services to also automate physical processing as an extension of approval plan programs. Books acquired in such a manner arrived labeled with a call number and potentially other physical processing options (e.g., library stamps applied). Coupled with approval plans and EDI data loads, this could dramatically reduce the time between publication and a book's being available on a library shelf [26, 27]. Third, libraries began to adopt patron-driven acquisitions models. These models, particularly when applied to e-books, involve
the automatic loading of records for titles that the library may not own, but for which an acquisitions decision could be automated based on user behaviors [28].
54.3.3 The Shift to End-User Search Indexes
At the same time library catalogs were developing user-friendly interfaces, the indexing services that provide article-level access to the scholarly literature were also making rapid developments. By the early 1990s Telnet and Gopher clients made possible an end-user accessible online presence for abstracting and indexing services (A&I services). Preprint databases arose, and libraries began to mount these as adjuncts to the catalog proper. DOS-based Telnet clients gave way to Windows-based Telnet clients. Later in the 1990s, Windows-based clients in turn gave way to web browsers. By the mid-1990s CD-ROMs had begun to replace online search services like Dialog and Medline. Libraries loaded CD-ROMs onto workstations or local area networks for access by patrons. ERIC, PsycINFO, and CINAHL were early databases available on CD-ROM, and users could search these themselves rather than rely on mediated searching. Subscriptions to these and similar indexes became more common and available from library terminals and workstations. With the advent of web-based systems, these indexes became available through web-based subscriptions. This is now the dominant form of delivery.
54.4 The First Generation of Digital Library Tools
The shift to web-based, end-user access to library catalogs and search indexes was a first step in library automation supporting libraries' transition to digital delivery and management of content. With digital versions of journals leading the way, libraries saw provision of digital access to content as a core means of fulfilling their mission, as well as a means for them to more widely disseminate collections unique to their institutions. Although libraries adapted existing standards to this new environment (e.g., using the MARC 856 field for URLs), a new set of approaches to library automation began to emerge. These systems, focused on providing and federating access to digital content, emerged to supplement the second generation of integrated library systems. They were intended to allow libraries to continue to provide access to subscription-based resources in a vendor-neutral manner, and set the stage for libraries' leadership in the Open Access movement. It should be noted, however, that the challenges and opportunities facing libraries at this point in time were not merely technical. By 2000, the role of libraries in knowledge
management was beginning to become a more prominent topic within the Library and Information Science literature. It was suggested that there were four roles for libraries in knowledge management: creating knowledge repositories, improving knowledge access, improving the knowledge environment, and managing knowledge as an asset [29]. These roles were reflected in many of the technologies developed and adopted in the early 2000s in support of library scholarly communication efforts. As libraries sought both to take advantage of opportunities to better manage and disseminate the intellectual work of their local communities and to reform the economics of publishing, they engaged in a number of initiatives that fundamentally changed the way in which they approached knowledge sharing. Rather than acting only as an aggregator of external knowledge for their local community, they were now also involved in managing and disseminating knowledge created locally to the broader world. This new approach has been referred to as the "inside-out collection" [30].
54.4.1 OpenURL Linking and the Rise of Link Resolvers
The advent of digital content expanded the view of library technologists beyond the management of library collections at the title level (e.g., the journal as a whole) to the desire to link users as seamlessly as possible to the contents of these titles. Herbert Van de Sompel developed the concept of the OpenURL link resolver in response to what he termed the appropriate copy problem [31]: There has been an explosive growth in the number of scholarly journals available in electronic form over the internet. As e-Journal systems move past the pains of initial implementation, designers have begun to explore the power of the new environment and to add functionality impossible in the world of paper-based journals. Probably the single most important such development has been reference linking, the ability to link automatically from the references in one paper to the referred-to articles.
Ex Libris partnered with Van de Sompel to develop the first commercially available OpenURL resolver, SFX. This product, which was an immediate commercial success, was the first of several platforms that gained popularity over the following decade [32]. These products were notable not only as a pioneering approach to linking at the article-level but also in their use of knowledge bases external to the integrated library system. These early OpenURL adopters were predicated on the creation of a database of journals subscribed to by a library, along with linking patterns that allowed the resolver to construct deep links to content. This knowledge base approach would prove to be important in the context of later efforts to automate electronic resources management and search interfaces (Fig. 54.5).
Fig. 54.5 SFX and OpenURL linking. (After [4])
Librarians, especially systems librarians, are well known as staunch proponents of standards. Thus, it was an extremely astute move when Ex Libris took the initiative to have the National Information Standards Organization (NISO) adopt the OpenURL as a NISO standard. The OpenURL framework for context-sensitive services was approved on a fast-track basis as ANSI/NISO Z39.88 in 2005 [33]: The OpenURL framework standard defines an architecture for creating OpenURL framework applications. An OpenURL framework application is a networked service environment, in which packages of information are transported over a network. These packages have a description of a referenced resource at their core, and they are transported with the intent of obtaining context-sensitive services pertaining to the referenced resource. To enable the recipients of these packages to deliver such context-sensitive services, each package describes the referenced resource itself, the network context in which the resource is referenced, and the context in which the service request takes place.
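As a concrete illustration, the sketch below assembles an OpenURL as a key/encoded-value (KEV) query string of the kind a link resolver receives; the resolver base URL and the article metadata are invented, and real ContextObjects can carry many more keys defined by Z39.88.

# A sketch of building an OpenURL 1.0 (KEV) request for a journal article.
# The resolver base URL and the article metadata are invented examples.
from urllib.parse import urlencode

resolver_base = "https://resolver.example.edu/openurl"   # hypothetical link resolver
context_object = {
    "url_ver": "Z39.88-2004",                 # OpenURL framework version
    "rft_val_fmt": "info:ofi/fmt:kev:mtx:journal",
    "rft.genre": "article",
    "rft.atitle": "An example article title",
    "rft.jtitle": "Journal of Examples",
    "rft.volume": "12",
    "rft.issue": "3",
    "rft.spage": "45",
    "rft.issn": "1234-5678",
}
openurl = resolver_base + "?" + urlencode(context_object)
print(openurl)

A resolver receiving such a request consults its knowledge base of the library's subscriptions to decide which copy of the referenced article, if any, it can offer to the user.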
54.4.2 Metasearching
As OpenURL linking was gaining adoption within the library community, a second new area of library automation, metasearching, emerged. As a strategy for providing access to library resources in the context of the emerging commercial web, Campbell proposed the development of a "Scholar's Portal" to federate access to library resources through metasearching, also known as federated search [34]. This approach acknowledged that library users need to search abstracting and indexing databases in addition to the library catalog, and that this paradigm was confusing to users accustomed to internet search engines. Metasearching was thus intended to allow users to retrieve the most
comprehensive set of results in a single search, including traditional bibliographic citations, abstracting and indexing service information, and the actual full text whenever it is available. Most products developed to address this issue utilized Z39.50 to take a user inquiry and retrieve results from a range of bibliographic databases. Given the desire of libraries to provide a single search comparable in ease of use to web search engines, a number of commercial products emerged, such as Ex Libris MetaLib, WebFeat, and Fretwell-Downing’s ZPortal. By 2005 many institutions had made their primary bibliographic search gateway not the library catalog, but their metasearch application. Boston College is a good example. Its library catalog, Quest, is but one of the numerous targets searchable through its metasearch application, MetaQuest (Fig. 54.6). By now libraries and their users had bought wholeheartedly into the combination of metasearching and linking – it is the combination of these two approaches for dealing with the electronic content universe that makes them together vastly more powerful than either alone. Troubles loomed, though, in the absence of standards for metasearching. NISO once again stepped forward and created the NISO Metasearch Initiative. NISO described the challenge as [35]: Metasearch, parallel search, federated search, broadcast search, cross-database search, search portal are a familiar part of the information community’s vocabulary. They speak to the need for search and retrieval to span multiple databases, sources, platforms, protocols, and vendors at one time. Metasearch services rely on a variety of approaches to search and retrieval including open standards (such as NISO’s Z39.50), proprietary API’s, and screen scraping. However, the absence of widely supported standards, best practices, and tools makes the metasearch environment less efficient for the system provider, the content provider, and ultimately the end-user.
The lack of standards was not the only issue for metasearch, however. The technology was inherently slow, with significant latency introduced by the need to conduct parallel searches in real time in response to a user's query and then to rank and sort the results into an integrated set once bibliographic records had been retrieved. In effect, the service provided by any library through its metasearch software was only as responsive as the slowest of the targets queried. While metasearch could be viewed as simpler than selecting among the many fragmented search tools provided by libraries, it was also often viewed as unacceptably slow and, given the sorting of records on the fly, lacking the effective relevance ranking users were accustomed to from internet search engines.
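The latency problem is easy to see in a toy model: even when the target searches run in parallel, the merged result set cannot be returned until the slowest target answers. The target names and response times below are invented, and a real implementation would speak Z39.50 or HTTP to remote databases rather than sleeping.

# A toy model of federated (parallel) metasearch latency.
# Target names and response times are invented for illustration.
import asyncio
import time

TARGETS = {"catalog": 0.4, "a_and_i_service": 1.2, "eprint_archive": 2.5}  # seconds

async def search_target(name, delay):
    await asyncio.sleep(delay)                  # stands in for a real remote search
    return [f"{name}-record-{i}" for i in range(3)]

async def metasearch(query):
    tasks = [search_target(name, delay) for name, delay in TARGETS.items()]
    result_sets = await asyncio.gather(*tasks)  # completes only when the slowest target answers
    merged = [rec for results in result_sets for rec in results]
    return merged                               # on-the-fly ranking and de-duplication would happen here

start = time.perf_counter()
records = asyncio.run(metasearch("library automation"))
print(f"{len(records)} records in {time.perf_counter() - start:.1f}s")  # roughly 2.5s, the slowest target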
54.4.3 Electronic Resource Management
On the serial side of the bibliographic house, the electronic journal evolution of the 1990s turned rapidly into the
Fig. 54.6 Boston College MetaQuest, with quick sets
electronic journal avalanche of the twenty-first century. Libraries and information centers that subscribed to thousands or tens of thousands of print journals found themselves facing a very different journal publication model when presented with electronic journals. Not only did libraries have to contend with entirely new models – print only, print with free electronic access, electronic bundles or aggregations, often in multiple flavors from competing publishers and aggregators – but these electronic journals also came with profoundly different legal restrictions on their use. Libraries found themselves swimming upstream against a rushing tide of electronic content. A number of institutions (among them Johns Hopkins University, the Massachusetts Institute of Technology, and the University Libraries of Notre Dame) developed homegrown systems to contend with the morass; most others relied on spreadsheets and quantities of paper. Into this fray stepped the Digital Library Federation (DLF), which convened a small group of electronic content management specialists to identify the data elements required to build an Electronic Resources Management (ERM) system. This study, developed in conjunction with a group of vendors and issued in 2004 as Electronic Resource Management: Report of the DLF ERM Initiative (DLF ERMI), was one of the few times since the development of the MARC standard that a standard predated full-blown system development [36].
Systems developers adopted the DLF ERMI initiative to greater or lesser degrees as dictated by their business interests and commitment to standards. Unlike the OpenURL link resolvers and metasearch tools, electronic resource management tools addressed an audience that was primarily staff, harking back to the original goal of library automation: to make librarians more productive. Like OpenURL link resolvers, ERM systems were often developed utilizing a knowledge base external to the integrated library system. While many library automation vendors offered ERM products, this functionality has now largely been incorporated into library service platforms, to be discussed later in the chapter.
54.5 Digital Repositories
As the tools for managing purchased and licensed library collections, primarily in response to the rise in digital content, were evolving, so too were technologies for managing the digital objects that are part of the unique collections of individual libraries. Libraries and archives, as recognized custodians of our intellectual and cultural heritage, have long understood the value of preservation and ensuring longitudinal access to the materials for which they have accepted custodianship. As more and more materials are digitized, particularly locally
important cultural heritage materials, or are born digital, these institutions have accepted responsibility for preserving these digital materials. The concurrent increase in the creation of the intellectual outputs of their broader institutions in digital formats has also led libraries to envisage a transformed system of scholarly communication that leverages digital repositories in the open sharing of article pre- and post-prints, data sets, and more. Like library automation platforms such as OpenURL linking and metasearch, digital repositories were developed to help libraries engage in new service development despite the limitations of the then-current generation of automation platforms. Libraries were also rethinking their role in the broader system of scholarly communication and the role they might play in promoting open access to the scholarly record. Along with pioneering efforts in high-energy physics (arXiv) and economics (RePEc) in creating repositories serving disciplinary communities, libraries became engaged in imagining the ways in which leveraging web-based repositories might simultaneously facilitate an open system of scholarly communication and transform libraries' abilities to manage digitized and born-digital content [37]. In short, libraries were becoming engaged in facilitating and managing knowledge sharing in partnership with scholarly communities. As with early developments in library automation, one of the first efforts to manage digital objects, especially textual objects, began in a university environment. In 2000 the Massachusetts Institute of Technology Libraries and Hewlett-Packard embarked on a joint research project that we now know as DSpace – an early Digital Asset Management (DAM) system. DSpace introduced the concept of an Institutional Repository (IR) [38], a robust software platform to digitally store . . . collections and valuable research data, which had previously existed only in hard copies . . . an archiving system that stores digital representations of analog artifacts, text, photos, audio and films . . . capable of permanently storing data in a non-proprietary format, so researchers can access its contents for decades to come.
While DSpace was in general release by 2002, a similar project was being developed at Cornell University. The Flexible Extensible Digital Object Repository Architecture (Fedora) project had its initial software release in 2003 [39]. These two software toolkits remain the leading open source platforms nearly 20 years later. In 2009, the DSpace and Fedora foundations merged into Duraspace, which provided continued development support for both platforms as well as cloud-based storage services and VIVO, a software application for managing researcher profiles. Duraspace has subsequently merged with Lyrasis [40]. Library automation vendors also realized this was a fertile field for institutions not inclined or not able to implement open source
software. Notable among them are CONTENTdm, developed at the University of Washington and now owned by OCLC, and bepress, founded by professors to support open access publishing and repository services but now owned by the RELX Group. Additionally, some vendors such as Ex Libris have developed repository software as a component of their library service platforms, to be discussed later. All these products are designed for deposit, search, and retrieval. While they have maintenance aspects about them, they are not true preservation systems. A notable development in repository technologies has been the emergence of Samvera, a Ruby on Rails-based digital repository software framework originally known as Hydra. Samvera allows a range of applications to be developed on top of a common data storage and middleware infrastructure. This allows, for example, custom applications to be built for different user communities or content types without replicating the underlying infrastructure. In this framework, content and metadata are stored in a Fedora repository as linked data, with a Solr index providing the foundation for search. A middleware layer consisting of Ruby gems provides the building blocks for developing web-facing applications [41]. The development of this new class of library automation systems both leveraged and precipitated new data standards and protocols. The Dublin Core Metadata Element Set, often referred to simply as "Dublin Core," has become almost ubiquitous as a descriptive metadata standard in institutional and subject repositories. With its origins as an RDF-based generic data model for metadata, Dublin Core's use quickly became widespread in repository implementations, and it remains a still-evolving standard maintained by the Dublin Core Metadata Initiative [42]. An early challenge for repositories was providing a mechanism for interoperability that would allow third-party applications or services to be developed. This led to the development of the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH), developed initially by Herbert Van de Sompel and Carl Lagoze [43]. This client–server protocol, implemented by the archive, allows for the harvesting of metadata describing the archive's holdings using XML over HTTP. The protocol requires metadata to be represented in Dublin Core format, although additional metadata representations may be provided by the archive. Although OAI-PMH is still widely supported by repositories and digital archives, the last decade has also seen the development of RESTful APIs by repository providers to provide alternative means of interoperability. RESTful APIs are Application Programming Interfaces (APIs) based on the Representational State Transfer (REST) system architecture; they implement stateless calls using methods defined within the HTTP protocol (e.g., GET, PUT) [44].
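A minimal sketch of the OAI-PMH harvesting flow just described: the client issues an HTTP GET with a verb and the mandatory Dublin Core metadata prefix, then walks the XML response. The repository base URL below is a placeholder, and a production harvester would also handle resumption tokens and error conditions.

# A minimal OAI-PMH harvesting sketch (Dublin Core metadata over HTTP GET).
# The repository base URL is a hypothetical placeholder, not a real endpoint.
from urllib.parse import urlencode
from urllib.request import urlopen
import xml.etree.ElementTree as ET

BASE_URL = "https://repository.example.edu/oai"          # hypothetical OAI-PMH endpoint
params = {"verb": "ListRecords", "metadataPrefix": "oai_dc"}

with urlopen(BASE_URL + "?" + urlencode(params)) as response:
    tree = ET.parse(response)

# Titles live in the Dublin Core elements of each harvested record.
DC = "{http://purl.org/dc/elements/1.1/}"
for title in tree.iter(DC + "title"):
    print(title.text)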
Library activities in repository development also led to involvement in related areas that enhanced libraries' role in knowledge sharing and often brought them into greater partnership with faculty and administrative offices (e.g., Offices of Research) than had previously been the case. At some institutions, for example, librarians became deeply involved in designing and managing faculty profile systems. One example of this was the VIVO Initiative, a software platform designed to facilitate the development of communities of practice across institutions through the creation and linking of faculty expertise profiles [45]. Whether using open source options such as VIVO or one of the commercial options now on the market, this has become an increasing area of activity for libraries [46]. Similarly, libraries have become more involved in research data curation and data science initiatives over the past decade. This has put librarians in conversation and partnership with providers of cyberinfrastructure and the computationally focused researchers who require access to data resources. Libraries have assumed roles in managing data and knowledge "upstream" in the research process, both to better facilitate computational research and to better facilitate sharing of the knowledge that results from it [47].
54.6 Library Service Platforms: The Third Generation Library Automation System
By 2006, it had become apparent that the second generation of integrated library systems was inadequate for supporting the needs of libraries in a rapidly changing operating context. Library management tasks, which now comprised managing print collections, purchased or licensed electronic content, and digitized or born-digital resources, were not well supported by systems that had evolved out of an exclusively print paradigm and that had been developed prior to the large-scale adoption of digital content by most libraries. The resulting fragmentation in library systems created functional burdens for libraries. Furthermore, due to the functional limitations of metasearch described previously in this chapter, it was also becoming apparent that new approaches to library search were necessary to provide convenient access to the breadth of content held by libraries. At the same time, multi-tenant, cloud-based systems were becoming viable options. As a result, integrated library system vendors began positioning themselves to develop both the next generation of technologies, which would be positioned as library service platforms rather than integrated library systems, and the accompanying discovery systems intended to provide a more modern search experience. These systems were intended not only to provide efficiencies in library operations, but to enable the development of innovative and transformative services by the libraries deploying them.
54.6.1 From OPAC to Discovery
A deeply held tenet of librarians is that they have a responsibility to provide suitably vetted, authoritative information to their users. In the years since the rise of Google as an Internet phenomenon, librarians have shown enormous angst over the reliance, particularly among the generation that has come of age since the dawn of the Internet, on whatever turns up in the first page of hits from any of the most popular Internet search engines. Critical analysis and the intellectual value of the resulting hits are less important to the average user than the immediacy of an online hit, especially if it occurs on the first results page. Libraries continue to spend in the aggregate hundreds of millions of dollars on content, both traditional and electronic, yet have struggled to retain even their own tuition-paying clientele. OCLC, to its credit, confronted the issue head on with a report it issued in 2005 [48]. In the report OCLC demonstrates quite conclusively that ease of searching and speedy results, plus immediate access to content, are the attributes users most prize in search engines. Unfortunately, the report shows that libraries and library systems rank very low on this scale. On the other hand, when it comes to a question of authoritative, reliable resources, the report shows that libraries are clear winners. The question has serious implications for the future of library services. How should libraries and library system vendors confront this conundrum? The debate was framed in part by a series of lectures that Dale Flecker, associate director of the Harvard University Library for Planning and Systems, gave in 2005 titled "OPACS and our changing environment: observations, hopes, and fears." The challenge he framed was the evolution of the local research environment to comprise multiple local collections and catalogs. Harvard University, for example, had separate catalogs for visual materials, geographic information systems, archival collections, social science datasets, a library OPAC, and numerous small databases. Licensed external services had proliferated: in 2005 Harvard had more than 175 search platforms on the Harvard University Library portal, all in addition to Internet engines, online bookstores, and so forth. Flecker expressed hope that OPACs would evolve to enable greater integration with the larger information environment. To achieve this, OPACs (or their eventual replacements) would have to cope with both the Internet and the explosion of digital information that was already generating tremendous research and innovation in search technology. Speed, relevance (ranking), and the ability to deal with very large result sets would be the key to success – or failure [49]. Once again, the usual series of library automation vendors stepped up. In truth, they were already working on their products, but this time they were joined by yet another set of
Fig. 54.7 Primo at Boston College: an example of the discovery layer
players in the library world. North Carolina State University adopted Endeca [50], best known as a data industry platform, to replace its library OPAC: “Endeca’s unique information access platform helps people find, analyze, and understand information in ways never before possible” [51]. As the NCSU authors Antelman et al. noted in the abstract to their 2006 paper [51], Library catalogs have represented stagnant technology for close to twenty years. Moving toward a next-generation catalog, North Carolina State University (NCSU) Libraries purchased Endeca’s Information Access Platform to give its users relevance-ranked keyword search results and to leverage the rich metadata trapped in the MARC record to enhance collection browsing. This paper discusses the new functionality that has been enabled, the implementation process and system architecture, assessment of the new catalog’s performance, and future directions.
Other new players eventually entered the market for library delivery systems. Commercial OPAC replacements that did not provide a full range of ILS functionality emerged from vendors such as AquaBrowser, as did open source alternatives from the library community such as VuFind. Ultimately, integrated library system and abstracting and indexing vendors began to develop their own services, generally known as discovery layers. Ex Libris's Primo and the EBSCO Discovery Service are two prominent examples. These systems harvest and index content from integrated
library systems, repositories, and the abstracting and indexing services that had once required separate searches. While users may still prefer to utilize the native, separate search interfaces of abstracting and indexing services, access to article-level metadata could now be integrated into library search in a much more elegant way than previously possible with metasearch. In short, discovery layers offered the promise of a single, Google-like search for all resources provided by the library (Fig. 54.7).
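In much simplified form, the central index behind a discovery layer can be pictured as a single inverted index built from records harvested out of the catalog, repositories, and article metadata, with a relevance score computed at query time. The sketch below uses invented records and a crude term-frequency score in place of a real ranking algorithm.

# A toy central index with term-frequency ranking, in the spirit of a discovery layer.
# The harvested records are invented.
from collections import Counter, defaultdict

records = {
    "catalog:1": "introduction to library automation systems",
    "repository:7": "automation of metadata workflows in academic libraries",
    "articles:42": "a history of the card catalog",
}

index = defaultdict(Counter)          # term -> {record id: term frequency}
for rec_id, text in records.items():
    for term in text.split():
        index[term][rec_id] += 1

def search(query):
    """Rank records by how many query terms they contain (a crude relevance score)."""
    scores = Counter()
    for term in query.lower().split():
        for rec_id, freq in index.get(term, {}).items():
            scores[rec_id] += freq
    return scores.most_common()

print(search("library automation"))

Because the index is built ahead of time rather than queried live across remote targets, results return quickly and can be ranked consistently, which is precisely what metasearch could not offer.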
54.6.2 The Library Service Platform
Coupled with the development of discovery layer technologies, library automation vendors began to develop the library service platform. As Carl Grant wrote [52]: This new generation of products . . . addresses the fundamental changes that libraries have experienced over the course of the last decade or so toward more engagement with electronic and digital content. In their own distinctive ways, these recently announced or delivered systems aim to break free of the models of automation centered mostly on print materials deeply embodied by the incumbent line of integrated library systems. To make up for functionality absent in their core integrated library systems, many libraries implemented a cluster of ancillary products, such as link resolvers, electronic resource management systems, digital asset management systems, and other repository platforms to manage all their different types of materials. The new products
aim to simplify library operations through a more inclusive platform designed to handle all the different forms of content.
These platforms have generally moved library automation systems from client–server models to multi-tenant, cloud-based approaches, employing service-oriented architectures, leveraging linked data in their implementation, and incorporating more technical standards and proprietary APIs for system integration than previous generations [53]. From a functional perspective these systems had several key advantages for libraries. First, they were designed to manage content in multiple formats, rather than layering functionality for electronic subscriptions and digital content onto infrastructure designed for print. This also allowed libraries to simplify their infrastructures by eliminating technology components that had been made necessary by the limitations of earlier integrated library systems. Second, these service platforms natively supported many of the innovations in automation that emerged in the past decade to make acquisitions and early cataloging processes more efficient (e.g., EDI data loads from vendors in approval plan and shelf-ready services). Third, they have been designed to enable the management of multiple metadata formats rather than just MARC. This includes the ability in some systems to expose records through linked data technologies such as BIBFRAME or Sinopia [54]. Fourth, the trend toward the use of knowledge bases that began with OpenURL resolvers and electronic resources management systems continued. Now conceived of as a bibliographic service, externally managed knowledge bases detailing the holdings of the various subscription packages available from publishers allow libraries to keep their online holdings current within their discovery systems. There have been several efforts to develop library service platforms. Ex Libris has been a leader among commercial providers. Its Alma platform has been in production since 2012, with a current installed base of 1944 libraries [55]. Library cooperative OCLC's WorldShare system has an installed base of 557 libraries [56]. A more recent entrant has been the Folio Library Service Platform, an open source approach that grew out of the earlier KUALI-OLE initiative. The Folio project started in 2016 as a collaboration between IndexData, EBSCO, and OLE and is now housed within the Open Library Foundation [57]. IndexData is a library software company that specializes in open source development. EBSCO is a major provider of library services including subscription management and abstracting and indexing services. OLE, the Open Library Environment, was a project led by Duke University [58]. Hosting and support services are available from many vendors, although EBSCO's hosting services are particularly prominent [59]. It should be noted that these vendors have taken steps to integrate their library service platforms with other products, both technology and content, owned by the vendor or their parent companies.
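The knowledge-base idea at the heart of these platforms can be sketched simply: externally maintained holdings data record which packages contain which titles and for what coverage dates, and the platform consults those data when deciding whether it can link to a given article. All package names, titles, and dates below are invented.

# A toy electronic-holdings knowledge base lookup.
# Package names, titles, ISSNs, and coverage dates are invented.
from dataclasses import dataclass

@dataclass
class Holding:
    package: str
    title: str
    issn: str
    coverage_start: int   # first year available in the package
    coverage_end: int     # last year available (a large sentinel means "to present")

KNOWLEDGE_BASE = [
    Holding("Example Journals Complete", "Journal of Examples", "1234-5678", 1995, 9999),
    Holding("Example Archive", "Journal of Examples", "1234-5678", 1950, 1994),
]

SUBSCRIBED_PACKAGES = {"Example Journals Complete"}       # what this library licenses

def can_link(issn: str, year: int) -> bool:
    """True if a subscribed package covers the requested article year."""
    return any(
        h.issn == issn
        and h.coverage_start <= year <= h.coverage_end
        and h.package in SUBSCRIBED_PACKAGES
        for h in KNOWLEDGE_BASE
    )

print(can_link("1234-5678", 2003))   # True  - covered by the subscribed package
print(can_link("1234-5678", 1980))   # False - the archive package is not licensed

Because the package descriptions are maintained centrally by the platform provider rather than hand-edited in each library, a publisher's change to a package can propagate to every subscribing library's holdings automatically.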
54.7 Evolving Data Standards and Models, Linked Data
Data standards have emerged as a prime concern for libraries as they have moved beyond early-generation ILSs and sought to develop systems that position library collections and services for a digital environment. As libraries sought to describe and manage access to electronic resources and digital objects, the limitations of the MARC format and the AACR2 cataloging rules became increasingly apparent. Work began in earnest to develop XML representations of MARC data through initiatives such as MARCXML, MODS, and MADS [60], and cataloging description standards were updated from the Anglo-American-focused AACR2 to the more international Resource Description and Access (RDA) [61]. At the same time, work sponsored by IFLA was underway to develop a new conceptual model for bibliographic data, published in 1998, to improve user-centered search and retrieval from library catalogs. This entity-relationship model, the Functional Requirements for Bibliographic Records (FRBR), extended existing cataloging models to allow for more user-centered catalog displays. The hierarchies expressed within FRBR could, for example, allow one to co-locate all editions of a particular work without having to navigate each edition's record individually [62]. FRBR was subsequently replaced by the IFLA Library Reference Model (LRM) in 2017. LRM represented the consolidation and harmonization of FRBR and two other conceptual models developed by IFLA in a manner specifically designed for a linked data environment [63, 64]. In addition to this work by IFLA, the Program for Cooperative Cataloging (PCC) has also launched collaborative projects to develop new approaches to library cataloging and metadata in a linked data environment. The PCC is a membership organization dedicated to advancing cataloging through cooperation. In recent years, several working groups of the PCC have developed recommendations regarding linked data for libraries, including reports on topics such as the use of URIs in MARC and metadata encoding standards, and have sponsored pilot projects with Wikidata and ISNI [65]. Despite these innovations, work was still needed to provide a stronger conceptual foundation for future bibliographic data that was not simply oriented toward putting library data on the web, but toward making it of the web. To that end, the Library of Congress initiated a process for developing BIBFRAME, which "provides a foundation for the future of bibliographic description, both on the web, and in the broader networked world that is grounded in Linked Data techniques. A major focus of the initiative is to determine a transition path for the MARC formats while preserving a robust data exchange that has supported resource sharing and cataloging cost savings in recent decades" [66]. Other national libraries and large-scale digital portals have developed alternative approaches to linked data for
libraries, such as the Bibliothèque Nationale de France, the British Library, and Europeana [67]. There are early software implementations built upon the BIBFRAME model. The Share-Virtual Discovery Environment (Share-VDE) project, for example, is a collaboration between a group of North American research libraries, bibliographic data provider Casalini Libri, and software vendor @Cult to create a discovery service based upon BIBFRAME data from contributing libraries [68]. Stanford has developed software for editing BIBFRAME data in its Sinopia project [68, 69]. A series of projects under the moniker of Linked Data for Production (LD4P) have aimed to create a complete cycle for metadata creation, sharing, and reuse. This work represents a broad coalition, including research libraries, library and information science researchers, the Share-VDE project, and the PCC [70–72]. As a whole, however, adoption of BIBFRAME and other production approaches to linked data is nascent and can be expected to develop in the near future along with the continued development and adoption of description standards (Beta RDA) and entity models.
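As a small, simplified illustration of what bibliographic description "of the web" can look like, the sketch below uses the rdflib library to describe a work and one of its instances with terms from the BIBFRAME vocabulary published by the Library of Congress; the resource URIs and the title are invented, and full BIBFRAME descriptions are considerably richer (titles, for instance, are modeled as structured resources rather than plain literals).

# A simplified BIBFRAME-style description expressed as RDF with rdflib.
# URIs and the title are invented; bf:title is given as a plain literal here,
# whereas full BIBFRAME models titles as structured Title resources.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

BF = Namespace("http://id.loc.gov/ontologies/bibframe/")
work = URIRef("http://example.org/works/1")
instance = URIRef("http://example.org/instances/1")

g = Graph()
g.bind("bf", BF)
g.add((work, RDF.type, BF.Work))
g.add((work, BF.title, Literal("An Example Work")))
g.add((instance, RDF.type, BF.Instance))
g.add((instance, BF.instanceOf, work))      # links the published instance to its work

print(g.serialize(format="turtle"))

Because the work, the instance, and the relationship between them are all identified by URIs, other systems on the web can link to and enrich the same entities, which is the point of moving library data from record exchange to linked data.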
54.8 Two Future Challenges
Libraries and librarians have frequently stood at the forefront of the information revolution. The MARC format was a model of interinstitutional and international collaboration in its time. As libraries and their vendors evolved, though, large installed bases of customers and data made further change that much more challenging. At the same time, libraries became but one outpost on the information frontier. The transformation of information-seeking into an online-first activity for most users has challenged libraries to position their services and collections in a web context. This challenge is not simply a matter of delivery format, however. The same trends that have challenged libraries have also transformed the communities within which libraries are embedded, and thus created a need for libraries to leverage their content, expertise, and technologies in service to a changing environment. Libraries are therefore now employing significantly different strategies than they were even 10 years ago for delivering services that impact their communities. It can be expected that the momentum toward linked data approaches for managing library metadata will continue, and will be better supported in commercial and open source applications. It can also be expected that artificial intelligence and machine learning will become increasingly important in library automation. Projects such as Annif, which recommends subject headings to catalogers, represent the beginning of such approaches [73]. Fragmentation of library applications was widespread a decade ago in response to the limitations of integrated library systems in managing emerging formats, the development of new services (e.g., repositories) not supported by then-current systems, and the desire to innovate in moving libraries into a web context. While some of these systems have persisted and many well-resourced libraries continue to develop new software in service to innovation, the past decade has also seen the consolidation of technological advances into a new generation of systems. These new library service platforms have been developed to enable libraries to manage collections in a format-agnostic manner, and in doing so have adopted more flexible data models that will enable libraries to manage emerging services through these platforms. We have already begun to see vendors leveraging these new platforms to support emerging services. For example, Ex Libris' Esploro, layered on top of Alma, provides librarians with tools for managing the artifacts of research (e.g., pre- and post-prints, research data) in a manner integrated into the non-library workflows of researchers [74]. It is this direction, the leveraging of common automation platforms in service of community needs that traditionally reside outside the library, that presents the greatest opportunities and challenges for libraries moving forward.
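To give a flavor of what automated subject suggestion involves (in a form far simpler than a production tool such as Annif), the sketch below trains a small scikit-learn model on a handful of invented titles with known subject headings and suggests a heading for a new title.

# A toy automated subject-suggestion sketch (far simpler than tools such as Annif).
# Training titles and subject headings are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

titles = [
    "integrated library systems and cataloging workflows",
    "metadata standards for digital repositories",
    "robotics and factory automation",
    "machine learning for industrial automation",
]
headings = ["Libraries", "Libraries", "Automation", "Automation"]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(titles)          # term weights for each training title
model = LogisticRegression().fit(X, headings)

new_title = ["open source library service platforms"]
print(model.predict(vectorizer.transform(new_title))[0])   # likely "Libraries"

In practice such suggestions would be reviewed by catalogers, with the model serving to speed up rather than replace intellectual subject analysis.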
References
1. Porter, V.: Continuous library catalog card, Patent 4005810. This represents a computer-era patent on the original library catalog card. https://pdfpiw.uspto.gov/.piw?Docid=04005810&idkey=NONE (1976). Last accessed 2021
2. Wong, T.-A.: Card catalogs and card catalog drawers, Patent 5257859. This represents a tweak on the original design of the card catalog drawer. Patent Images (uspto.gov) (1993). Last accessed 2021
3. University of Pennsylvania: Cards from rapidly-disappearing card catalog (Philadelphia 2008). http://www.library.upenn.edu/exhibits/pennhistory/library/cards/cards.samples.html. Last accessed 2009
4. Online Computer Library Center (OCLC): OCLC catalog cards, Dublin. http://www.oclc.org/support/documentation/worldcat/cataloging/cards/default.htm (2008). Last accessed 2009
5. Coyle, K.: Catalogs, card – and other anachronisms. J. Acad. Librariansh. 31(1), 60–62 (2005)
6. Early years of Web of Science, which became Web of Knowledge. http://wokinfo.com/about/whoweare/. Last accessed 2021
7. National Bureau of Standards: Miscellaneous Publication 269, Issued December (1965)
8. Salton, G.: Developments in automatic text retrieval. Science. 253(5023), 974–980 (1991)
9. Ojala, M.: Everything old is new again. Medford. 25(4), 5 (2001)
10. Avram, H.: Obituary, with reference to the MARC project. https://www.nytimes.com/2006/04/30/classified/paid-notice-deaths-avram-henriette.html (2006). Last accessed 2021
11. Kilgour, F.: Obituary, with reference to the founding of OCLC. https://scanblog.blogspot.com/2006/07/frederick-g-kilgour-19142006.html (2006). Last accessed 2021
12. Online Computer Library Center (OCLC): Official homepage. http://www.oclc.org. Last accessed 2021
13. Goldstein, C.M.: Integrated library systems. Bull. Med. Libr. Assoc. 71(3), 308–311 (1983)
14. Fayen, E.G.: Integrated library systems. Enc. Libr. Inf. Sci. 1(1), 1–12 (2004)
15. Lynch, C.: From automation to transformation: forty years of libraries and information technology in higher education. Educ. Rev. 35(1), 60–68 (2000)
16. Primich, T., Richardson, C.: The integrated library system: from innovation to relegation to innovation again. Acquis. Libr. 18(35–36), 119–133 (2006)
17. Kochtanek, T.R., Matthews, J.R.: Library Information Systems: From Library Automation to Distributed Information Access Solutions. Libraries Unlimited, Westport (2002)
18. De Gennaro, R.: A computer produced shelf list. Coll. Res. Libr. 31(5), 318–331 (1970)
19. Robinson, T.: Personal Communication. Harvard University Office for Information Systems, Harvard (2007)
20. Elsevier: Endeavor merges with Elsevier Science, American Libraries (2000)
21. Fast, K.V., Campbell, D.G.: I still like Google: university student perceptions of searching OPACs and the web. Proc. Am. Soc. Inf. Sci. Technol. 138–146 (2005). https://doi.org/10.1002/meet.1450410116
22. Connaway, L.S., Dickey, T.J., Radford, M.L.: If it is too inconvenient I'm not going after it: convenience as a critical factor in information-seeking behaviors. Libr. Inf. Sci. Res. 33(3), 179–190 (2011)
23. Antelman, K., Lynema, E., Pace, A.K.: Toward a twenty-first century catalog. Inf. Technol. Libr. 25(3), 128–139 (2006). https://doi.org/10.6017/ital.v25i3.3342. Last accessed 2021
24. Simser, C.N., Vukas, R.R., Stephens, J.M.: The impact of EDI on serials management. Ser. Libr. 40(3–4), 331–336 (2001). https://doi.org/10.1300/J123v40n03_19. Last accessed 2021
25. Kelsey, P.: Implementing EDI X12 book acquisitions at a medium-sized university library. New Libr. World. 116(7–8), 383–396. https://doi.org/10.1108/NLW-11-2014-0130. Last accessed 2021
26. Somers, M.A.: Causes and effects: shelf-ready processing, PromptCat, and Louisiana State University. https://www.sciencedirect.com/science/article/abs/pii/S0364640897000185. Last accessed 2021
27. Schroeder, R., Howald, J.L.: Shelf-ready: a cost-benefit analysis. Libr. Collect. Acquis. Tech. Serv. 35(4), 129–134. https://doi.org/10.1016/j.lcats.2011.04.002. Last accessed 2021
28. Price, J., Savova, M.: DDA in context: defining a comprehensive eBook acquisition strategy in an access-driven world. Against Grain. 27(5) (2011). https://doi.org/10.7771/2380-176X.7177
29. Townley, C.T.: Knowledge management and academic libraries. Coll. Res. Libr. 62(1) (2001). https://crl.acrl.org/index.php/crl/article/view/15420/16866. Last accessed 2021
30. Dempsey, L.: Library collections in the life of the user: two directions. LIBER Q. 26(4), 338–359. https://doi.org/10.18352/lq.10170. Last accessed 2021
31. Caplan, P.: Information on the appropriate copy problem. http://www.dlib.org/dlib/september01/caplan/09caplan.html (2001). Last accessed 2009
32. Breeding, M.: Trends in library automation: meeting the challenges of a new generation of library users. http://www.oclc.org/programsandresearch/dss/ppt/breeding.ppt (2006). Last accessed 2009
33. NISO: Information on NISO's OpenURL standard. http://www.niso.org/standards/index.html (updated 2008). Last accessed 2009
34. Campbell, J.: The case for creating a scholars portal to the web: a white paper. Portal Libr. Acad. 1(1), 15–21 (2001)
35. NISO: NISO metasearch initiative. http://www.niso.org/workrooms/mi (2006). Last accessed 2009
36. Digital Library Federation: Electronic resource management. http://www.diglib.org/pubs/dlf102/ (2004). Last accessed 2009
1185 37. Crow, R.: The case for institutional repositories: a SPARC position paper. https://ils.unc.edu/courses/2014_fall/inls690_109/ Readings/Crow2002-CaseforInstitutionalRepositoriesSPARCPape r.pdf (2002). Last accessed 2021 38. DSpace: Official homepage. http://www.dspace.org. Last accessed 2009 39. Fedora: Official homepage. http://www.fedora.info. Last accessed 2009 40. Fedora DSpace merger. https://duraspace.org/lyrasis-and-duras pace-complete-merger-members-and-community-benefit/ Last accessed 2021 41. Samvera: https://samvera.org/samvera-open-source-repositoryframework/technology-stack/. Last accessed 2021 42. Dublin Core: Official homepage. https://dublincore.org/. Last accessed 2021 43. OAI-PMH: https://www.openarchives.org/pmh/. Last accessed 2021 44. Kindling, M.: The landscape of research data repositories in 2015: a re3data analysis. D-Lib Mag. 23(4). http://mirror.dlib.org/dlib/ march17/kindling/03kindling.html (2017) 45. Krafft, D., Cappadona, N., Caruso, B., Corson-Rikert, J., Devare, M., Lowe, B., V. Collaboration: VIVO: enabling national networking of scientists. In: Proceedings of the Web Science Conference, Web Science Trust. https://www.bibsonomy.org/bibtex/ 2ece777913fc3e2f2bda4a37afbd0bdc8/jaeschke (2010). Last accessed 2021 46. Givens, M., Macklin, L., Mangiofico, P.: Faculty profile systems: new services and roles for libraries. Portal Libr. Acad. 17(2), 235– 255. https://muse.jhu.edu/article/653202/summary. Last accessed 2021 47. Brandt, D.S.: Librarians as partners in e-research: Purdue University Libraries promote collaboration. Coll. Res. Libr. News. 68(6), 365–367, 396 (2007) 48. Online Computer Library Center (OCLC): Perceptions of libraries and information resources. http://www.oclc.org/reports/ 2005perceptions.htm (2005). Last accessed 2009 49. Flecker, D.: OPACS and our changing environment: observations, hopes, and fears. http://www.loc.gov/catdir/pcc/archive/ opacfuture-flecker.ppt (2005). Last accessed 2009 50. Endeca: Official hompepage. http://www.endeca.com. Last accessed 2009 51. Antelman, K., Lynema, E., Pace, A.K.: Toward a twenty-first century library catalog. Libr. Inf. Technol. Assoc. 25(3), 128–139 (2006). http://www.lib.ncsu.edu/resolver/1840.2/84. Last accessed 2009 52. Grant, C.: The future of library systems: library service platforms. Inf. Stand. Q. 24(4), 4–15 (2012) 53. Breeding, M.: The future of library systems. J. Elect. Res. Libr. 24(4), 338–339 (2012) 54. Ex Libris: BIBFRAME. https://developers.exlibrisgroup.com/ alma/integrations/linked_data/bibframe/ Last accessed 2021 55. Library Technology Guides. Alma. https://librarytechnology.org/ product/alma/. Last accessed 2021 56. Library Technology Guides. WorldShare. https://librarytech nology.org/product/wms/ Last accessed 2021 57. Folio. About. https://www.folio.org/about/. Last accessed 2021 58. OLE Press Release. https://librarytechnology.org/document/ 13445. Last accessed 2021 59. Folio. Supporting partners and contributors. https://www.folio.org/ community/support/. Last accessed 2021 60. Library of Congress. Standards. https://www.loc.gov/librarians/ standards. Last accessed 2021 61. Joint Steering Committee for the Development of RDA: RDA: Research description and access. http://www.rda-jsc.org/archivedsite/ rda.html. Last accessed 2021
54
1186 62. Carlyle, A.: Understanding FRBR as a conceptual model: FRBR and the Bibliographic Universe. Libr. Resour. Tech. Serv. 50(4), 264–273 (2006) 63. Overview of differences between IFLA LRM and the FRBRFRAD-FRSAD models. https://www.ifla.org/files/assets/catalogu ing/frbr-lrm/transitionmapping_overview_20161207.pdf 64. IFLA Library Reference Model: A Conceptual Model for Bibliographic Information. Pat Riva, Patrick Le Bœuf, and Maja Žumer. https://www.ifla.org/files/assets/cataloguing/frbr-lrm/iflalrm-august-2017.pdf 65. Program for Cooperative Cataloging. https://www.loc.gov/aba/ pcc/. Last accessed 2021 66. Hallo, M., Lujan-Mora, S., Mate, A., Trujillo, J.: Current state of linked data in digital libraries. J. Inf. Sci. 42(2), 117–127 (2015) 67. Share-VDE: https://share-vde.org/sharevde/clusters?l=en. Last accessed 2021 68. Sinopia. https://sinopia.io/. Last accessed 2021 69. Nelson, J.: Developing Sinopia’s linked-data editor with react and redux. Code4Lib J. (45). https://journal.code4lib.org/articles/ 14598 (2019). Last accessed 2021 70. Linked Data for Productio: LD4P. https://wiki.lyrasis.org/display/ LD4P. Last accessed 2021 71. Linked Data for Production: Pathway to Implementation. https:// wiki.lyrasis.org/display/LD4P2 (last accessed 2021) 72. Linked Data for Production: Closing the Loop. (LD4CP). https:/ /wiki.lyrasis.org/pages/viewpage.action?pageId=187176106. Last accessed 2021 73. Suominen, A.O.: DIY automated subjectindexing using multiple algorithms. LIBER Q. 29(1), 1–25 (2019). Last accessed 2021 74. Esploro Web site: https://exlibrisgroup.com/products/esplororesearch-services-platform/. Last accessed 2021
Beth McNeil received her PhD from the University of Nebraska (2015). After serving in positions at Bradley University, University of Nebraska-Lincoln, Purdue University, and Iowa State University, she returned to Purdue as Dean of Libraries and School of Information Studies and Esther Ellis Norton Professor of Library Science in 2019.
Michael Kaplan received his PhD from Harvard University (1977). After 20 years at Harvard University and 2 years as Associate Dean at Indiana University Libraries, he was with Ex Libris, Ltd. from 2000 to 2011. He is the editor of two books on library automation. In 1998 he received the LITA/Library Hi-Tech Award for Outstanding Communication in Library and Information Science.
Paul J. Bracke received his PhD from the University of Arizona (2012). After serving in positions at the University of Illinois at Urbana-Champaign, The University of Texas Medical Branch at Galveston, the University of Arizona, and Purdue University, he has been Dean of the Foley Library at Gonzaga University since 2016. He has published in library automation, digital repositories, and library service development.
Part VIII Automation in Medical and Healthcare Systems
55 Automatic Control in Systems Biology
Narasimhan Balakrishnan and Neda Bagheri
Abstract
Reductionist approaches toward molecular and cellular biology have greatly advanced our understanding of biological function and information processing. To better map molecular components to systems-level understanding and emergent function, the relatively new field of systems biology was established. Systems biology enables the analysis of complex functions in networked biological systems using integrative (rather than reductionist) approaches, leveraging many principles, tools, and best practices common to control theory. Systems biology requires effective collaboration between experimental, theoretical, and computational scientists/engineers to effectively execute the tightly iterative design-build-test cycle that is critical to the understanding, development, and control of biological models. This chapter summarizes new developments of automatic control in systems biology, providing illustrative examples as well as theoretical background for select case studies.

Keywords
Systems biology · Control/systems theory · Biological networks · Dynamical systems · Modeling

Contents
55.1 Background, Basics, and Context
55.1.1 Systems Biology
55.1.2 Control of and in Biological Systems
55.2 Biophysical Networks
55.2.1 Circadian Processes: Timing and Rhythm
55.2.2 Signaling in the Insulin Pathway
55.3 Network Models for Structural Classification
55.3.1 Hierarchical Networks
55.3.2 Boolean Networks and Associated Structures
55.4 Dynamical Models
55.4.1 Stochastic Systems
55.4.2 Modeling Metabolism: Constraints and Optimality
55.5 Network Identification
55.5.1 Data-Driven Methods
55.5.2 Linear Approximations
55.5.3 Mechanistic Models, Identifiability, and Experimental Design
55.5.4 Sensitivity Analysis and Sloppiness
55.6 Control of and in Biological Processes
55.6.1 The Artificial Pancreas
55.6.2 The Antithetic Integral Feedback Network
55.6.3 Optogenetic Control of Gene Expression
55.7 Emerging Opportunities
References
N. Balakrishnan
Department of Chemical and Biological Engineering, Northwestern University, Evanston, IL, USA
e-mail: [email protected]

N. Bagheri
Department of Chemical and Biological Engineering, Northwestern University, Evanston, IL, USA
Departments of Biology and Chemical Engineering, University of Washington, Seattle, WA, USA
e-mail: [email protected]
55.1 Background, Basics, and Context
Toward the end of the twentieth century, advances in molecular biology made it possible to experimentally probe causal relationships within and among cells. Scientists began to uncover how processes initiated by individual molecules within a cell culminated in macroscopic phenotypic behavior at the cellular and organismal levels. A systematic approach for analyzing complexity in biophysical networks was previously untenable owing to the lack of suitable measurements as well as to limitations associated with simulating large, complex mathematical models. Recent studies provide increasingly
detailed insights into underlying networks, circuits, and pathways responsible for the basic functionality and robustness of biological systems. These studies present new and exciting opportunities for the development of quantitative, predictive modeling and simulation tools [1]. Thus, the discipline of systems biology emerged in response to grand challenges in the characterization, foundational understanding, and control of complex biological networks [2].
55.1.1 Systems Biology Systems biology combines methods from systems engineering, computational biology, statistics, genomics, molecular biology, and biophysics (among other fields) [3] to build a systems-level understanding of complex biological networks. Systems engineering approaches are particularly well suited for characterizing the rich dynamic behavior exhibited by biological systems. Similarly, new developments in systems engineering theory are often motivated by biology. Two characteristics of systems biology are [4]: • An integrative, unified perspective toward unraveling complex dynamical systems • Tight iterations between experiments, modeling, and hypothesis generation (Fig. 55.1) Early successes in systems biology include detailed analyses of a gene switch in the bacterial virus, λ−phage [5], which showed convincingly that formal stochastic processes were required to understand the cell fate switch between
lysis (cell rupture) and lysogeny (integration of bacteriophage into a cell for reproduction). The integration of statistics was instrumental to this finding. The analysis of perfect adaptation—where a system’s post-stimulus response remains exactly the same as its prestimulus response—in chemotaxis highlights another example where scientists employed a systems perspective to generate key insights [6, 7]. Notably, the mechanism for perfect adaptation had been elucidated and interpreted in classical control theory as integral feedback [7], and it was through this lens that scientists were able to unravel the complexity of an organism’s movement in response to a chemical stimulus. Ma et al. (among others) combined principles in control theory, graph theory, and systems analysis to evaluate all possible minimal biochemical circuits capable of perfect adaptation. This analysis offered invaluable insights applicable to natural and engineered biological circuits [8]. Further application of systems strategy—specifically, model reduction and systematic analysis—highlighted how disparate organisms have both overlapping and distinct architectures for chemotaxis [7]. Another example of systems biology that received considerable attention is the discovery and analysis of the gene network underlying circadian rhythms. With the control theoretic understanding of feedback loops, several models have been proposed to characterize these robust timekeepers [9–11]. Formal robustness analysis of these predictive models generated key insights and hypotheses that led to the discovery that biological oscillator models’ performance is more sensitive to global—as opposed to local—parameter perturbations [12, 13]. These insights led to the analysis and
Fig. 55.1 Iterations and interactions between experimental analysis and theoretical approaches in systems biology. Experimental data can be used to build/train models and identify networks of interactions, from which one can build hypotheses or predict the consequences of new perturbations. These perturbations can take the form of genetic mutations or knock-outs, which can be tested experimentally, generating new data to further refine existing computational models as well as our biological understanding/intuition, thereby continuing the iterative loop [4].
realization of multiple robust yet fragile systems, motivating key design principles fundamental to synthetic biology as well as synthetic systems [14, 15]. A more detailed case study and success story emerged from the work of Muller et al. on the JAK-STAT pathway [16]. The authors showed that iterating between modeling and experiments can yield new hypotheses, particularly regarding unobservable components that can be simulated (but not measured). An important implication and ongoing area of study for the JAK-STAT pathway—as well as other signaling pathways involved in cell division/apoptosis and tumor formation—involves pharmacological intervention. Many studies focus on the phosphorylation element of the pathway; however, through model development, a more effective strategy (viz., blocking nuclear export) was elucidated. The seamless integration of, and iteration between, experimental design/observation and systems modeling/analysis has proven fundamental to the field of systems biology and invaluable to advancing both basic and translational science.
55.1.2 Control of and in Biological Systems Natural control systems are paragons of optimality. Over millennia, these architectures have evolved to achieve automatic, robust regulation of a myriad of processes at the levels of genes, proteins, cells, and entire organisms. One of the more challenging opportunities for systems research is unraveling (and subsequently replicating) the multi-scale, hierarchical control that ensures robust and unwavering performance in the face of stochastic perturbations. These perturbations arise from both intrinsic sources (e.g., inherent variability in the transcription machinery) and extrinsic sources (e.g., environmental fluctuations) [17]. Robust performance reflects a relative insensitivity to these perturbations; it is the persistence of a system’s characteristic behavior under perturbations or conditions of uncertainty. The robustness of a system can be measured by quantifying how its output (or performance) changes as a function of different or perturbed system input, state, and/or parameter values. Coexisting robustness and fragility, termed robust yet fragile, are a salient feature of highly evolved and/or complex systems [18]. Optimally robust systems balance their robustness to frequent environmental variations with their coexisting sensitivity to rare events. As a result, robustness and sensitivity analysis is a key tool used to understand and subsequently control system performance. Similar to robustness analysis, sensitivity analysis quantifies the change in a desired output as a function of changes to reference signal(s), system state(s), and/or parameter value(s).
Control theory has had a pervasive influence on the discipline of systems biology. For instance, the chemotaxis work [7] was a paradigm of collaboration between control engineers and biologists. Recent advances in systems biology enabled by control theory include further refinements to architectures for robust perfect adaptation [19], unraveling the design principles of circadian rhythms [12], and designing circuits capable of dynamically filtering noise [20].
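To make the robustness and sensitivity measures described above concrete, the following minimal Python sketch estimates normalized parameter sensitivities of a model output by finite differences. The two-state model, parameter values, and output definition are illustrative assumptions for this sketch and are not taken from the cited studies.

import numpy as np
from scipy.integrate import odeint

def model(x, t, p):
    # Toy two-state cascade: constitutive production of s, conversion to r, degradation of r.
    k_prod, k_conv, k_deg = p
    s, r = x
    return [k_prod - k_conv * s, k_conv * s - k_deg * r]

def output(p):
    # Performance measure of interest: steady-state level of the response species r.
    t = np.linspace(0.0, 200.0, 2000)
    return odeint(model, [0.0, 0.0], t, args=(p,))[-1, 1]

p0 = np.array([1.0, 0.5, 0.2])                 # nominal parameter values (illustrative)
y0 = output(p0)
for i, name in enumerate(["k_prod", "k_conv", "k_deg"]):
    dp = np.zeros_like(p0)
    dp[i] = 0.01 * p0[i]                       # 1% perturbation of one parameter
    sens = (output(p0 + dp) - y0) / y0 / 0.01  # normalized sensitivity, d(ln y)/d(ln p)
    print(f"{name}: {sens:+.2f}")

For this toy model the steady state of r is k_prod/k_deg, so the printed sensitivities are approximately +1, 0, and -1; larger or structured perturbations of the same kind underlie the robustness analyses cited above.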
55.2 Biophysical Networks
Biophysical networks are remarkably diverse, cover a wide spectrum of scales, and are inevitably characterized by a range of rich behaviors. The term complexity is often invoked in the description of biophysical networks that underlie gene regulation, protein interactions, and/or metabolic networks in biological organisms. It is important to note that there are two categorical definitions of complexity: • The descriptive or topological notion of a large number of constitutive elements with nontrivial connectivity (Sect. 55.3) • The classical notion of behavior associated with the mathematical properties of chaos and bifurcations (Sect. 55.4) In most contexts—biological and otherwise—complexity implies that the underlying system is difficult to understand and/or verify [21]. Simple low-order mathematical models can be constructed that yield chaotic behavior; yet, rich complex biophysical networks may be designed to reinforce reliable execution of simple tasks or behaviors [22]. Biophysical networks have attracted a great deal of attention at the level of gene regulation, where dozens of input connections can regulate a single gene or protein. Thousands of interactions have been mapped in so-called protein interactome diagrams that illustrate the potential coupling of pairs of proteins. Similar networks also exist at higher levels, including the coupling of individual cells via signaling molecules [23], the coupling of organs via endocrine signaling, and ultimately the coupling of organisms in ecosystems. To elucidate the mechanisms employed by these networks, biological experimentation and intuition are by themselves insufficient. As noted earlier, the field of systems biology has laid claim to this class of problems, and engineers, biologists, physicists, chemists, mathematicians, and many others have united to embrace these problems with interdisciplinary approaches [2]. In this field, investigators characterize dynamics via mathematical models and apply systems theory with the goal of guiding further experimentation to better understand the biological network that gives rise to robust performance [2]. Chapter 11.5 [24] of this handbook provides an overview of
Fig. 55.2 Models of circadian regulation, ranging from simple to more complex. (a) A generic negative feedback loop and time delay structure characterizing circadian rhythms comprises five states and enables tunable oscillations. (b) A comparable circadian model of the Drosophila melanogaster clock comprises ten states [29]. (c) A mammalian version, specifically for Mus musculus, extends the mathematical model to 16 states [10]. In each model, the time delay is reinforced through transport of species between the nucleus and cytoplasm, and feedback is realized through regulation of defined transcription factors.
the theoretical foundations of network science, including a discussion of applications related to biology.
back circuits (Fig. 55.2). The activity states of the proteins in this network are modulated (activated/inactivated) through a series of chemical reactions including phosphorylation and dimerization. These networks exist at the sub-cellular level. Above this layer, their collective signaling leads to a synchronized response from the population of thousands of clock neurons in the SCN. Ultimately, this coherent oscillator coordinates the timing of daily behaviors, such as the sleep/wake cycle. Left in constant (dark) conditions, the clock will run freely with a period of approximately (not exactly) 24 h such that its internal time, or phase, drifts away from that of its environment. Thus, the ability to entrain its phase through environmental factors is vital to the circadian clock [28]. Entrainment occurs when two interacting or coupled oscillating systems assume the same period. In the case of circadian rhythms, cells in the SCN will advance or delay their phase in response to light to assume the period of the sun cycle. In the 2000s, various mathematical models of circadian rhythms were developed [10, 11, 30] in concert with tools to analyze their dynamics, many of which offer broader use [31, 32]. Some of these tools are discussed in Sect. 55.5.4. The use of systems and control theory in the study of circadian biology has significantly advanced our understanding of chrono-biology over the past decade. Control and systems theoretic analyses of circadian rhythms are ongoing. Recent successes include using human sleep experiment data for the development of macroscopic human circadian rhythm models [33]; inference of the mouse SCN network topology [23]; identification of optimal light exposure schedules to correct circadian misalignment in humans [34]; devel-
55.2.1 Circadian Processes: Timing and Rhythm Oscillatory processes are ubiquitous in living organisms and occur across multiple length and time scales. They include the cell cycle, neuron firing, ecological cycles, and others, governing many organisms’ behaviors. A well-studied example of a biological oscillator is the circadian rhythm clock. The term circa- (about) diem (day) describes a biological event that repeats approximately every 24 h. Circadian rhythms are observed at all cellular levels as oscillations in enzymes and hormones affect cell function, cell division, and cell growth [25], imposing internal alignments between different biochemical and physiological oscillations. Their ability to anticipate environmental changes enables organisms to coordinate their physiology and behavior such that they occur at biologically advantageous times during the day [25]: visual and mental acuity fluctuate, for instance, affecting complex behaviors. The mammalian circadian master clock resides in the suprachiasmatic nucleus (SCN), located in the hypothalamus [26]. It is a network of multiple autonomous noisy (sloppy) oscillators, which communicate via neuropeptides to synchronize and form a coherent oscillator [27]. The core of the clock consists of a gene regulatory network in which approximately six key genes are regulated through an elegant array of time-delayed and coupled negative and positive feed-
In (a) a simple five-state feedforward model is illustrated. The model can be expanded to incorporate mechanistic detail and nonlinear regulation through intracellular feedback (b) and feedforward pathways (c), adding complexity to the dynamics. (Adapted from [39])
opment of computational tools to better quantify oscillations in circadian gene expression [35]; and multi input and model-predictive control approaches to reset circadian phase [36].
discussed in [38], and there have been several analyses of minimal and detailed models since [39]. Recent developments also include a rule-based model for the signaling pathway [40]. In Sect. 55.7 we discuss the development of the artificial pancreas system, which makes use of different models for the insulin signaling pathway (Fig. 55.3).
55.2.2 Signaling in the Insulin Pathway In healthy cells, the uptake of glucose is regulated by insulin, which is secreted by β-cells in the pancreas. Simply stated, in patients with type 1 diabetes, the pancreas does not produce insulin; in type 2 diabetes, among other consequences, cells—particularly those in the liver, muscles, and fat tissue [37]—are resistant to the insulin produced by the pancreas. The insulin signaling cascade begins with the binding of insulin to its receptor on the cell surface, which causes receptor auto-phosphorylation and activation. Subsequently, the activated insulin receptor then triggers a cascade of reactions that involve phosphorylation, complexation, and other allosteric interactions with several molecules, finally leading to the translocation of the glucose transporter (GLUT4) from an internal compartment to the cell membrane. This process would ultimately lead to the uptake and regulation of glucose in healthy cells. In type 2 diabetes, the cascade is desensitized to insulin, thereby diminishing the effectiveness of the signal. A detailed model of the insulin signaling pathway was first
55.3
Network Models for Structural Classification
An integrative perspective—one that eschews a more reductionist analysis of individual components—is essential to systems biology. This perspective seeks to analyze a system as a whole across its various levels (or scales) of organization. While it is useful to study the elements of a system within a hierarchical regulatory scheme, it is equally (if not more) insightful to study higher-level behaviors that emerge from combinations of lower-level regulatory modules. Network models often consist several lower-level regulatory modules—also known as motifs—within the greater complex architecture of genes, proteins, metabolites, or other regulatory elements. Motifs are sub-modules or patterns of interconnections occurring in networks at numbers that are significantly higher than those in randomized networks [41]. Notable examples of simple motifs—or canonical regulatory
1194
constructs—that yield specific classes of behavior in gene regulatory networks are outlined below [42]. • Negative feedback is fundamental to steady-state homeostasis and adaptation. • Positive feedback amplifies signals and can give rise to multi-stability, oscillations, and state-dependent responses [43, 44]. • Time delays confer complexity. In combination with negative feedback, it promotes oscillatory behavior. For example, the multiple phosphorylations of key clock proteins in circadian regulatory networks provide the time delays necessary to bring about oscillations [9]. • Integral feedback systems—those where the differential between two signals are determined by the regulatory architecture—have proven necessary for robust adaptation and resilience to perturbations across all levels of organization, including sub-cellular to population levels [7, 45]. • Protein oligomerization, wherein a protein is comprised of multiple associating polypeptide chains, offers multistability, oscillations, and resonant stimulus frequency responses [46, 47]. These motifs and their properties are characteristic of general networks, including social networks, communication networks, and biological networks [48]. Given the wide variety of modeling objectives, as well as the heterogeneous sources of data, several different approaches exist for capturing network interactions in the form of mathematical structures, each with its advantages and limitations.
55.3.1 Hierarchical Networks Biophysical networks can be decomposed into modular components that recur across and within given organisms. One form of hierarchical classification defines the top level of organization the network, which comprises interacting regulatory motifs. These motifs typically consist of two to four gene or protein components [49, 50]. Motifs are small sub-networks that are over-represented when compared to random networks with the same large-scale properties [41]. Thus, modules that describe transcriptional regulation, for instance [51], are found at the lowest level of this hierarchy. At the sub-network level, one can use pattern searching to determine the frequency of occurrence of specific simple motifs [49], characterizing them as the basic building blocks of larger biological networks. It is important to note that many of these motifs have direct analogs in systems engineering architectures; the marriage of these fields further advances our understanding of biology and design of engineered systems. Consider the three dominant network motifs found in Escherichia coli [49]:
N. Balakrishnan and N. Bagheri
• Coherent feedforward loops describe the case where one transcription factor regulates a second factor, and— in turn—the pair jointly regulates a third transcription factor. This motif often serves as a sign-sensitive delay element, describing a circuit that responds rapidly to steplike stimuli in one direction (e.g., on to off) and slowly to steps in the opposite direction [52]. • Single-input modules (SIM), wherein a single transcription factor regulates a set of operons, are similar to a single-input multiple-output (SIMO) system encountered in systems engineering or signal processing literature. The SIM motif is found in systems of genes that function stoichiometrically to form a protein assembly or a metabolic pathway [53]. • Densely overlapping regulon architectures define a set of operons that are each regulated by a set of multiple input transcription factors. This architecture is equivalent to a multiple-input multiple-output (MIMO) system. In addition to the feedforward and single-input modules, similar studies in Saccharomyces cerevisiae highlighted additional network motifs that are common to gene regulation as well as to engineered systems [50]: • Autoregulatory motifs describe a specific condition of negative feedback in which a regulator binds to the promoter region of its own gene. Apart from studies in Saccharomyces cerevisiae, these motifs are prevalent in many prokaryotic and eukaryotic systems [54]. • Regulator chains are a cascade of serial transcription factor interactions. These cascades comprise of a regulator that binds the promoter of a second regulator, which binds the promoter of a third, and so forth. The regulatory circuit for the cell cycle is a prime example of such serial regulation [55]. • Multi-input modules are a natural extension of the SIM motif, where a set of regulators bind together to enable promotion of a gene. • Multi-component loops describe closed-loop modules with two or more transcription factors. The closed-loop structure offers the potential for feedback control as well as the possibility of bistability wherein the system can switch between two stable steady states. These studies provide evidence that cell function (in both eukaryotic and prokaryotic systems) is regulated by sophisticated networks of (transcriptional) control loops that regulate and/or interconnect with other control loops. The noteworthy insight is that these complex networks underlying biological regulation appear to be made of elementary system components much like in a digital circuit. This composition lends credibility to the notion that analysis tools from systems
55
Automatic Control in Systems Biology
1195
engineering should find relevance in elucidating, characterizing, and modulating biological networks.
55.3.2 Boolean Networks and Associated Structures Boolean networks are abstract mathematical models that represent circuit diagrams of variables that can take on two possible values: on/true or off/false. These network representations are employed for coarse-grained analysis of biophysical networks. A Boolean network is represented as a graph of nodes, each of which renders a specific operation. The edges between nodes represent Boolean variables whose state value is determined by operations conducted on other variables in the network [56]. Boolean networks can be used to model network dynamics, including the regulation of transcripts and proteins. In this representation, transcripts and proteins are either on (1) or off (0), and their expression value at time step t is given by a logical function of the
a)
b)
Transcriptional activation
c)
Transcriptional inhibition
t
DNA (1)
expression of its effectors at time t − 1. An illustration of the Boolean state values for transcript and protein abundances during transcriptional regulation and translation is depicted in Fig. 55.4. Boolean networks are often useful as a first step in checking the validity of model structures in biophysical networks. An example of the Boolean truth table for the repressilator [57], one of the simplest synthetic circuits capable of showing sustained oscillatory behavior, is shown in Fig. 55.5. Boolean network-based analyses of repressilator-like circuits have shown that systems with m species coupled in a cyclic inhibitory loop will exhibit limit cycle oscillations when the number of states is odd and m ≥ 3 [58]. More recently, a Boolean network framework was used to model vesicle trafficking needed to form organelles, capturing essential features of eukaryotic secretory systems, such as the cisternal maturation model for the Golgi apparatus [59]. Several additional forms of networks exist, each offering its own strength and weakness. Bayesian networks combine acyclic graphs with signed directed graphs to characterize
Translation
t mRNA (1)
mRNA (0)
DNA (1)
t+1
mRNA (1)
t+1
Protein (1)
t
t+1
Fig. 55.4 Boolean networks. Boolean network interactions depict binary states for relevant species. In (a) the transcription of mRNA from DNA is activated; in (b) it is inhibited, and in (c) protein is translated. The state value at t + 1 is indicated in parenthesis: 0 is “off” and 1 is “on”
b)
a)
X
Y
Z
Fig. 55.5 Generic Boolean structure and corresponding truth table. (a) A Boolean structure of the repressilator circuit is represented. This network comprises cyclic inhibitory interactions among three species. Each species has basal levels of activation, meaning it can turn “on” from the “off” state. (b) The corresponding truth table describes the
Time
X
Y
Z
t
0
0
0
t+1
1
1
1
t+2
0
0
0
t+3
1
1
1
t+4
0
0
0
t+5
1
1
1
dynamic species activity over time. Each species alternates between the 0 and 1 state over successive time steps, bringing about sustained oscillations. The presented initial condition yields oscillations that are in phase. Varying the initial conditions can result in different phase oscillation relationships among the species
55
1196
N. Balakrishnan and N. Bagheri
causal, conditional probabilities among random variables [60]. Acyclic graphs do not have feedback loops, while signed directed graphs assign either activation or inhibition to each edge [61]. Bayesian networks are powerful and highly interpretable representations of biological systems, but prove to be more challenging to train. Additionally, S-systems (biochemical systems theory, BST) provide an approach wherein polynomial nonlinear dynamic nodes are used to capture network behavior (e.g., [62]). These polynomial nonlinear models differ from standard ODE-based kinetic descriptions of biochemical reactions in that their kinetic orders need not be integer values as is common with mass action power-law models.
55.4
Dynamical Models
In addition to considering motifs and topologies of biophysical networks, it is necessary to understand the role of dynamic behavior in ascribing meaning to the rich hierarchies of regulation. Given the interplay between topology and dynamics, it is often insufficient to specify merely the nodes (components) and edges (interactions). One also needs to consider the dynamic aspects of the interaction, as well as the characteristic quantity changed by the interaction. There is a large body of work detailing the importance of factoring in dynamics when working with models of biophysical networks. For example, a close relationship between dynamic measures of robustness and the abundance of particular network motifs for a wide range of organisms is shown in [63]. More recently, Finkle et al. discuss the importance of considering the time evolution of biological species to develop better quantitative models of gene regulation that can be tested in new experimental conditions [64]. Attempts to detail dynamic behavior in biophysical networks have fallen into three broad classes of modeling techniques: • First-principles approaches, • Empirical model identification • Hybrid approaches combining minimum biophysical network knowledge with an objective function to yield a predictive model In this section, we outline key results in the development of mechanistic models, and in the following section, we address the problem of network identification and recent developments in that field. Given detailed knowledge of a biological architecture, mathematical models can be constructed to describe the behavior of interconnected motifs or transcriptional units (TUs). Several detailed reviews have been published in this regard [65, 66]. Often in these studies, gene expression is
described as a continuous-time biochemical process, using combinations of algebraic and ordinary differential equations (ODEs) [9, 42]. In a similar manner, models at the signal transduction pathway level have been developed in a continuous-time framework, yielding ODEs [67]. At the TU level, a detailed mathematical treatment of transcriptional regulation is described in [51]. Mechanistic models for a number of specific biological systems have been reported, including basic operons and regulons in E. coli (trp, lac, and pho) and bacteriophage systems (T7 and λ) [68]. Systems theory has found an enabling role in the analysis of the complex mathematical structures that derives from previously described modeling approaches. The language of systems theory now dominates the quantitative characterization of biological regulation, as robustness, complexity, modularity, feedback, and fragility are invoked to describe these systems. Even classical control theoretic results, such as the Bode sensitivity integral, are being applied to describe the inherent trade-offs in sensitivity across frequencies [69]. Robustness has been introduced as both a biological system-specific attribute and a measure of model validity [70, 71]. In the following subsections, brief accounts of systems-theoretic analysis of biological regulatory structures are given, emphasizing where new insights into biological regulation have been uncovered.
55.4.1 Stochastic Systems Discrete stochastic modeling is extremely useful owing to its relevance in biological processes [72] that achieve their functions with low copy numbers of key chemical species. The states and outputs of discrete stochastic systems evolve according to discrete jump Markov processes, which naturally lead to a probabilistic description of system dynamics. A first-order Markov process is a random process in which the future probabilities are dependent only on the present value, and not on past values. Such descriptions can find relevance in systems biology when the magnitude of the fluctuations in a stochastic system approaches levels of the actual variables (e.g., protein concentrations). The analysis of the phage λ lysis—lysogeny decision circuit in [5]—was a seminal illustration that stochastic phenomena are essential to understand complex transcriptional processes. The probabilistic division of the initially homogeneous cell population into sub-populations corresponding to the two possible fate outcomes was shown to require a stochastic description (and could not be described with a continuous deterministic model). In another nice illustration of this point, El Samad et al. show that stochastic fluctuations due to noise were essential in sustaining oscillations in the VBKL circadian oscillator model; the deterministic model proved incapable of producing the necessary output [73].
In the discrete stochastic setting, the states and outputs are random variables governed by a probability density function, which follows a chemical master equation (CME) [74]. The rate of reaction no longer describes the amount of chemical species being produced or consumed per unit time in a reaction, but rather the likelihood of a certain reaction to occur in a time window. Though analytical solutions of the CME are rarely available, the density function can be constructed using the stochastic simulation algorithm (SSA) [74]. The CME for a discrete stochastic system can be written as shown in Eq. (55.1): df (x, t|x0 , t0 ) ak (x − vk , p) f (x − vk , t|x0 , t0 ) = dt k=1 m
− ak (x, p) f (x, t|x0 , t0 ) ,
(55.1)
where f (x, t|x0 , t0 ) is the conditional probability of the system to be at state x and time t, given the initial condition x0 at time t0 . The state vector x provides the molecular counts of the species in the system. Here, ak denotes the propensity functions, vk denotes the stoichiometric change in x when the k-th reaction occurs, and m is the total number of reactions. The propensity function ak (x, p)dt gives the probability of the k-th reaction to occur between time t and t + dt, given the parameters p. As the state values are typically unbounded, the CME essentially consists of an infinite number of ODEs, whose analytical solution is rarely available except for a few simple problems. The SSA provides an efficient numerical algorithm for constructing the density function [74]. The algorithm follows a Monte Carlo approach based on the joint probability for the time to and the index of the next reaction, which is a function of the propensities. The SSA indirectly simulates the CME by generating many realizations of the states (typically of the order of 104 ) at specified time t, given the initial condition and model parameters, from which the distribution f (x, t|x0 , t0 ) can be constructed. Despite the CME rarely having analytically tractable solutions, several theoretical advancements have been made with regard to its analysis, such as the development of the finite state projection (FSP) algorithm [75], which directly solves or approximates the solution of the CME under certain assumptions and has theoretical guarantees for convergence and exactness of solutions. Another popular method of looking at the CME while eschewing direct simulations is to use the method of moments, wherein one looks to identify the moments of the distribution f (x, t) as opposed to identifying the distribution itself. Often, this approach leads to an infinite system of equations with the dynamics of the ith moment depending on the (i + 1)th or further moments, resulting in a lack of closure. Fortunately, several moment closure approximations have been formulated to overcome this issue [76]. Concurrent advances in experimental methods for quantifying the abundance of biological species have led to im-
proved understanding of biological regulation. Recently, a large-scale experimental and stochastic analysis of the osmotic stress response pathway in Saccharomyces cerevisiae yielded key insights into several dynamical features, including multi-step regulation and switch-like activation for several osmo-sensitive genes in yeast [77].
55.4.2 Modeling Metabolism: Constraints and Optimality To understand complex biological systems, one can reduce the problem by first separating the possible from the impossible, such as configurations and behaviors that would violate constraints. Systems approaches try to exploit three broad classes of constraints: • Empirical: large-scale experimental analysis can provide constraints on possible network structures, such as the average or maximal number of interactions per component. • Physico-chemical: laws of physics such as conservation of mass and thermodynamics impose constraints on cellular and network behaviors. These are used, in particular, for structural network analysis (SNA) with roots in the analysis of chemical reaction networks [78]. • Functional: biological systems perform certain functions, and their building blocks are confined to a large, yet finite set. Network structures and behaviors have to conform with both aspects. Functional constraints are essential to modeling complex biological systems. Biological systems evolve to fulfill specific functions, and their performance (fitness) is constantly tested and refined. Insufficient or poor performance can result in extinction, while stronger, more fit solutions are likely to persist and survive. Hence, it is reasonable to assume some type of optimality in biological systems, which imposes considerable design constraints since effective and reliable networks are rare and presumably highly structured. Thus, understanding biological complexity could employ a calculus of purpose by asking teleological questions such as why cellular networks are organized as observed, given their known or assumed function [79].
Physico-Chemical Constraints in Metabolism Essential constraints for the operation of metabolic networks are imposed by the following: • Reaction stoichiometries • Thermodynamics that restrict flow directions through enzymatic reactions • Maximal fluxes for individual reactions
Metabolism usually involves fast reactions and high turnover of substances when compared with regulatory events. Therefore, on longer time scales, it can be regarded as being in quasi-steady state. The metabolite balancing equation (Eq. 55.2) for a system of m internal metabolites and q reactions with the m × q stoichiometric matrix N and the q × 1 vector of reaction rates (fluxes) r formalizes this constraint in SNA. Since q ≫ m for most real networks, the system of linear equations is under-determined. However, all possible solutions are contained in a convex vector space or flux cone (Fig. 55.6). Methods from convex analysis enable investigation of this space [80].

\frac{dx}{dt} = N \cdot r = 0 \qquad (55.2)
Two broad classes of methods for SNA have been developed: metabolic pathway analysis (MPA) and flux balance analysis (FBA) [82]. Many of the modern tools for SNA build upon these frameworks. MPA computes and uses the set of independent pathwaygenerating rays in Fig. 55.6 that uniquely describe the entire flux space; owing to algorithmic complexity, it only handles networks of moderate size. In contrast, FBA determines a single flux solution through linear optimization [83], often assuming that cells try to achieve optimal growth rates. The computational costs are modest, even for genome-scale models.
Fig. 55.6 Solution spaces in the analysis of a representative metabolic network. Steady-state representations of metabolic networks are often under-determined, presenting challenges to their analysis. Fortunately, solutions to the optimal flux problem are often linearly constrained and exist in a convex cone. (After [4], from [81])
FBA, however, has to reverse-engineer and operate with an essentially unknown objective function. Typical choices for the objective include maximizing growth, minimizing overall fluxes in the network, minimizing or maximizing the uptake of certain substrates and the production of ATP, or optimizing energy expenditure/production. Since the solution is heavily dependent on the choice of objective, it is critical to assess context, the availability of data, among other factors when using FBA-based approaches [84]. An extension of the FBA approach is metabolic control analysis (MCA) [85]. In MCA, one determines quantitatively the degree of control that a given enzyme exerts on flux and on the concentration of metabolites. This approach alleviates the need to assume a rate-determining step in the reaction network. MCA parallels classical sensitivity analysis (Sect. 55.5.4). It provides an improved understanding of pathway properties when working under a variety of conditions, thus allowing for a more successful manipulation of flux and metabolite concentrations. Metabolic network analysis is essential to the field of microbial synthetic biology where one seeks to optimize the design of microbes for the production of industrially or medically relevant biochemicals. Recent successes and advances based on FBA and its extensions include development of genome-scale metabolic models [86] for various microbes; new methods to identify incorrect regulatory rules and geneprotein-reaction associations in integrated metabolic and regulatory models [87]; identification of synthetic lethal combinations (sets of reactions/genes where only the simultaneous removal of all reactions/genes in the set abolishes growth) in several organisms [88]; and the identification of new drug targets in Mycobacterium tuberculosis [89].
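The following Python sketch illustrates the core FBA calculation on a deliberately tiny, hypothetical network using a generic linear-programming routine; the stoichiometry, flux bounds, and objective are illustrative assumptions and are unrelated to any genome-scale model discussed above.

import numpy as np
from scipy.optimize import linprog

# Toy network with two internal metabolites (A, B) and four reactions:
#   r1: uptake -> A     r2: A -> B     r3: B -> biomass     r4: A -> secreted by-product
# Rows of N are metabolites, columns are reactions.
N = np.array([[1, -1,  0, -1],
              [0,  1, -1,  0]])
c = np.array([0.0, 0.0, -1.0, 0.0])                  # maximize r3 (biomass) = minimize -r3
bounds = [(0, 10), (0, None), (0, None), (0, None)]  # uptake r1 capped at 10 units
res = linprog(c, A_eq=N, b_eq=np.zeros(2), bounds=bounds)
print("optimal fluxes r1..r4:", res.x)               # expected: [10, 10, 10, 0]
print("maximal biomass flux:", -res.fun)             # expected: 10

Changing the objective vector c, for example to minimize total flux at a fixed growth rate, immediately illustrates the point made above that FBA solutions depend strongly on the assumed objective.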
Functional Constraints, Optimality, and Design The analysis of living systems often begins with the identification of baseline assumptions that must be fulfilled in order for the system to produce defined functions. Underlying this concept is the notion that cells have been organized over evolutionary time scales to optimize their operations in a manner consistent with mathematical principles of optimality. FBA demonstrates the utility of assumptions even when its implicit functional constraint (i.e., steady-state operation of metabolic networks) is not self-explanatory. The cybernetic approach—developed by Kompala et al. [90] and Varner and Ramkrishna [91]—is based on a simple principle: evolution has programmed or conditioned biological systems to optimally achieve physiological objectives. This straightforward concept can be translated into a set of optimal resource allocation problems that are solved at every time-step in parallel with mass balances (basic metabolic network model). More recently, Reimers et al. used constraint-based models to investigate optimal resource allocation during cyanobacterial diurnal growth [92]. Their
findings suggest that phototrophic metabolism is optimized to fast growth rates, but also constrained by the timing of enzyme synthesis. Instead of focusing on a single objective function, mathematical models and experimental data can be used to test hypotheses on optimality principles in context of specific cellular functions. For instance, extensions of FBA suggested that E. coli optimizes the trade-off between achieving high growth rates and maintaining wild-type metabolic fluxes after gene deletions [93]. MPA showed that the interplay between the metabolic network (the controlled plant) and gene regulation (the controller) in E. coli might be designed to achieve optimal trade-offs between long-term objectives, such as metabolic flexibility, and short-term adjustment for metabolic efficiency [94]. In cases where optimality is not necessarily assumed, one can interrogate how functions in biological systems are established. Drawing from analogies in engineered systems can provide generalizable insights and design principles fundamental to biology. For instance, use of nonlinear dynamics demonstrates that many common biological functions— such as oscillators and switches—require nonlinearity. Thus, recreating such functions with biological building blocks constrains circuit designs [95]. Similar ideas can prove powerful at different levels of abstraction. Highly structured bowties with multiple inputs, channeled through a core with standardized components and protocols to multiple outputs, could represent organizational principles that establish complex production systems in engineering and biology [69]. Contrastingly, intertwined feedback and feedforward loops can be assigned individual functions in parallel to their native function in designed control circuits to yield fast responses in highly fluctuating environments, as demonstrated by ElSamad et al. in the E. Coli heat shock response [96]. Notably, most of the examples described here involve new developments in theory to address challenges posed by biology.
55.5
Network Identification
Model development couples biological processes to dynamical equations that are amenable to numerical simulation and analysis. These equations describe the interactions between various constituents and the environment and involve multiple feedback loops responsible for system regulation and noise attenuation and amplification. Currently, our knowledge of biological systems remains incomplete. Despite genome projects that allow enumeration and, to a certain extent, characterization of all genes in a system, we do not have complete knowledge about all network components (e.g., all protein variants that can be derived from a single gene), interactions, and properties therein [2]. Even in a minimal bacterial genome, the function of 149 out of the 473
genes remains unknown [97]. An important task in systems biology consists of specifying network interactions, which can concern qualitative or quantitative properties (existence and strength of couplings), or detailed reaction mechanisms, for genome-based inventories of components. At the core, this challenge is a systems identification problem: given a set of experimental data and prior knowledge, the goal is to identify the underlying biological network. Alternatively termed reverse engineering, network reconstruction, or network inference, the general network identification problem provides a key interface between science and engineering. Several qualitatively different approaches for biological systems have been proposed, which can be roughly classified into three categories: data-driven (black box), hybrid (grey box), and mechanistic (white box).
55.5.1 Data-Driven Methods Empirical or data-driven methods rely on large-scale datasets that can be generated, for instance, through microarrays, RNA-seq, ChIP-seq, or other sequencing analyses for gene regulatory networks. In the previous decade, there have been numerous advancements in both the experimental and computational sides of gene regulatory network (GRN) inference. Several high-throughput sequencing techniques have enabled the collection of large-scale “-omic” data, including transcriptomic, genomic, proteomic, metabolomic, and even epigenomic (ATAC-seq [98]). Large-scale -omic data can be collected at single-cell resolution in contrast to bulk data obtained through standard microarrays or RNA-seq pipelines [99]. Methods from machine learning that employ various matrix factorization techniques—such as principal component analysis/singular value decomposition [100], clustering [101], and regression or tree-based methods [102, 103]— are often used in network inference. Information theorybased approaches using mutual information calculations or approximations [104, 105] have also found success in GRN reconstruction. It is important to note that these empirical methods are more grey- or black-box in nature compared to the mechanistic, white-box models discussed later. Advances in experiments have made it possible to collect time-series data at reasonable resolution and sampling rates, furthering the possibility of constructing causal network models of these biological systems. Probabilistic graphical models can potentially elucidate causal coupling among network components [106]. Recent developments toward inference of causal network structures include the use of Granger causality [107] and assign pseudo-times allowing cells to be ordered by their state in a dynamic biological process [108]. With large-scale single-cell sequencing data
and improved inference methods, further fundamental questions in biology can be probed, such as the reconstruction of the developmental trajectories by which cells may adopt different fates. In recent work, an optimal transport theory-based analysis of single-cell RNA-seq data from over 300,000 cells shed new light on induced pluripotent stem cell reprogramming [109]. Further analyses of the associated regulatory networks identified a series of transcription factors predictive of specific cell fates. However, it is important to note that these data-driven methods have several caveats and challenges. For instance, Bayesian network models cannot cope with the ubiquitous feedback in cellular networks, since causal relationships are often represented by directed acyclic graphs [106]. Another challenge concerns limitations of the data: sampling rates might not be uniform, and data might be combined from multiple labs following different protocols. Since network inference is a rapidly evolving field, it is also important to assess the advantages and limitations of different inference methods. Certain methods might be better suited to specific forms of data and perform poorly on others. Topological features of the underlying networks, noise in the data, and the dynamics of regulation greatly influence the performance of network inference [110]. More generally, accurate identification of network topologies (corresponding to the model structure) does not suffice for establishing predictive mathematical models. Engineered genetic circuits illustrate this point: with identical topology, qualitatively different behavior can result, and vice versa [111]. Hence, quantitative characteristics, which are usually incorporated through parameters in deterministic models, are required to build a comprehensive and predictive characterization of the biological system. Corresponding identification methods are rooted in systems and information theory and, thereby, also provide a large intersection among biology, other sciences, and engineering.
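As an illustrative sketch of the regression/tree-based family of inference methods mentioned above, written only loosely in their spirit rather than as a reimplementation of any cited algorithm, the example below regresses each gene on all other genes and reads the resulting feature importances as putative regulatory weights. The expression data are random placeholders with one planted dependency.

# Toy tree-based gene regulatory network inference: regress each target gene
# on all other genes and rank putative regulators by feature importance.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n_samples, n_genes = 200, 5
X = rng.normal(size=(n_samples, n_genes))        # placeholder expression data
X[:, 4] = 0.8 * X[:, 0] - 0.5 * X[:, 2] + 0.1 * rng.normal(size=n_samples)

weights = np.zeros((n_genes, n_genes))           # weights[i, j]: gene i -> gene j
for target in range(n_genes):
    predictors = [g for g in range(n_genes) if g != target]
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X[:, predictors], X[:, target])
    weights[predictors, target] = model.feature_importances_

print(np.round(weights, 2))   # large entries in column 4 flag genes 0 and 2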
55.5.2 Linear Approximations
The identification of dynamically changing interactions requires corresponding dynamic models. For a first approximation, we consider linear systems, i.e., systems with additive responses to perturbations. In systems engineering, a standard form for linear time-invariant (LTI) systems with n states and m inputs is

dx(t)/dt = f(x, p, t) ≈ Ax(t) + Bu(t)   (55.3)

with an n × 1 state vector x(t), n × n system matrix A, n × m input matrix B, and m × 1 input vector u(t). Linearization of the general dynamic system dx(t)/dt = f(x, p, t) with parameter vector p provides first-order approximations to the network dynamics, even for highly nonlinear systems. Linear models provide an effective means to capture local dynamics in the region of a steady state, rather than aiming to characterize global behaviors.
Mathematically, most methods reconstruct the system matrix A, which corresponds to the Jacobian matrix J = ∂f(x, p)/∂x, from the measured effects of (sufficiently small) perturbations. However, direct recovery of the system matrix A is unreliable with noisy data and inputs. In a study using linear models and perturbation experiments to identify the structure of genetic networks, Tegner et al. [112] proposed an iterative algorithm that uses rational choices of perturbations to improve the identification quality. For a developmental circuit, despite strong nonlinearities in the system, the reverse engineering algorithm, which involves building and refining an average connectivity matrix in successive steps, recovered all genetic interactions [112]. A related approach that uses linear models and multiple linear regressions showed similar performance. The algorithm attempts to exploit the sparsity of system matrices for biological networks owing to, for example, (estimated) upper bounds on the number of connections per node [113, 114]. As experimental tools increase in resolution and throughput, and as corresponding models grow in granularity and size, scalability is an increasingly important criterion for algorithms. Both of the algorithms described here are scalable and are now gaining popularity in biology. Another approach to systems identification aims at exploiting modularity, which is inherent to biological networks. For modular systems with one output per module, network connectivities and local responses can be identified using perturbation experiments [115]. This approach requires a relatively small number of measurements, as only changes in the communicating intermediates have to be recorded. An important extension and strength of modular identification using time-series data is that it is not necessary to perturb nodes directly; inference can rely on detecting the network responses to remote perturbations [116].
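A minimal sketch of the perturbation-based idea (not the specific algorithms of [112-114]): at a steady state of dx/dt = Ax + Bu, we have A x_ss = -Bu, so each row of A can be recovered from a set of perturbation experiments by sparse linear regression. All values below, including the "true" network and the applied perturbations, are synthetic assumptions.

# Recover a sparse system matrix A from steady-state responses to known
# perturbations: at steady state, A @ x_ss = -p, with p = B @ u per experiment.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
n, K = 4, 12
A_true = np.array([[-1.0, 0.0, 0.5, 0.0],
                   [ 0.8, -1.2, 0.0, 0.0],
                   [ 0.0, 0.6, -0.9, 0.0],
                   [ 0.0, 0.0, 0.7, -1.1]])
P = rng.normal(size=(n, K))                    # applied perturbations B @ u
X_ss = np.linalg.solve(A_true, -P)             # simulated steady-state shifts
X_ss += 0.01 * rng.normal(size=X_ss.shape)     # measurement noise

A_hat = np.zeros((n, n))
for i in range(n):                             # one sparse regression per row
    reg = Lasso(alpha=0.01, fit_intercept=False, max_iter=10000)
    reg.fit(X_ss.T, -P[i, :])                  # solve row_i(A) @ x_ss = -p_i
    A_hat[i, :] = reg.coef_

print(np.round(A_hat, 2))

The sparsity penalty plays the same role as the connectivity bounds discussed above: it restricts the number of nonzero interactions per node so that fewer experiments are needed than unknowns.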
55.5.3 Mechanistic Models, Identifiability, and Experimental Design
Mechanistic models pose particular challenges because they involve the identification of nonlinear systems. Often, the identification problem ends up being posed as a mixed-integer nonlinear program or a nonlinear program with a nonconvex objective. Neither a unique global optimum nor convergence of the algorithms can be guaranteed, which presents a clear limitation of this approach. Further, model identification incurs high computational costs owing to the numerous model simulations required [117]. Key tools from systems engineering are highly relevant to parameter estimation in nonlinear systems. Variations of the extended Kalman filter, commonly used in state estimation
problems in systems engineering, have been used for the estimation of parameters in biological systems [118]. To find global optima in such nonlinear systems, it is often useful to take a stochastic approach, since deterministic methods may converge to local optima or saddle points. For example, a Markov chain Monte Carlo (MCMC) scheme was used to identify the parameters of one nonlinear phase oscillator model [33], while a genetic algorithm was used to do so in another [30]. In addition to being better able to find global optima, stochastic methods provide a natural means of accounting for the uncertainty associated with parameters in these biological systems, but they come at a greater computational cost. Hybrid methods, as discussed in [119], combine the strengths of stochastic and deterministic optimization schemes to increase the efficiency of the model identification pipeline. To facilitate more accurate and computationally feasible parameter estimation in biological models, identifiability and experimental design need to be addressed. Unstructured approaches to model identification can be ill-posed, especially in the context of models that admit many possible expression states. A yeast cell with ≈6200 genes and four possible states per gene has over 10^15 possible expression states [120], a classic identifiability challenge. In these cases, constraints and correlations should be imposed. For discrete models, experimentally observed upper bounds on the number of interactions per species constrain the amount of data needed for parameter identification [121]. However, mere extrapolation of current high-throughput technology will not solve these high-dimensional data issues. Numerous studies have highlighted the importance of deliberate experimental design to reveal the logical connectivity of gene networks [122, 123]. Systems engineering can guide the design of experiments to ensure information-rich datasets that can be used to develop predictive mechanistic models.
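The following toy sketch illustrates stochastic global optimization for nonlinear model identification, using differential evolution as a stand-in for the MCMC and genetic-algorithm schemes cited above; the two-state mRNA/protein model, its parameter values, and the synthetic data are all assumptions made for illustration.

# Fit parameters of a toy mRNA/protein model dm/dt = a - b*m, dp/dt = c*m - d*p
# to noisy observations of p(t) using a stochastic global optimizer.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import differential_evolution

def model(t, y, a, b, c, d):
    m, p = y
    return [a - b * m, c * m - d * p]

t_obs = np.linspace(0, 10, 25)
p_true = (2.0, 0.8, 1.5, 0.5)
sol = solve_ivp(model, (0, 10), [0.0, 0.0], t_eval=t_obs, args=p_true)
rng = np.random.default_rng(2)
p_obs = sol.y[1] + 0.05 * rng.normal(size=t_obs.size)   # synthetic data

def cost(theta):
    s = solve_ivp(model, (0, 10), [0.0, 0.0], t_eval=t_obs, args=tuple(theta))
    return np.sum((s.y[1] - p_obs) ** 2)

result = differential_evolution(cost, bounds=[(0.01, 5.0)] * 4, seed=2)
print("estimated parameters:", np.round(result.x, 2))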
55.5.4 Sensitivity Analysis and Sloppiness
The accuracy of estimated parameters is central to identifying mechanistic models. Poor accuracy means that parameters can be modified significantly and still describe the observed data, rendering the estimated values meaningless. Parameter sensitivity and sloppiness are complementary concepts that enable the assessment of parameter accuracy. Sensitivity analysis provides insights into the functioning of complex biophysical networks. Significant progress was made in the early and mid-2000s to develop novel tools and metrics for sensitivity analysis, particularly in the case of circuits exhibiting oscillations [31, 32, 124], and many of these methods are generalizable to other classes of models for biological systems.
For a dynamic model of the form given in Eq. (55.4), with an n × 1 state vector x and m × 1 parameter vector p,

dx(t)/dt = f(x(t), p),   (55.4)
the local parametric state sensitivity matrix S is the n × m matrix containing the partial derivatives of the states with respect to the parameters, Sij(t) = ∂xi(t)/∂pj. Computation of these matrix coefficients requires the continuity of f with respect to p. These values quantify how individual states respond to isolated parametric perturbations. Sometimes a scaled (logarithmic) sensitivity index Sij = ∂ log(xi)/∂ log(pj) is employed to mitigate concerns involving different scales or magnitudes of parameter values. In the case of oscillatory systems, these raw sensitivity indices conflate coupled outputs such as period and phase and can grow unbounded in time [125]. For such systems, alternate sensitivities, pertaining to the period, amplitude, or phase of oscillations, can be more useful than the raw state sensitivities. For example, Zak et al. devised a method to compute the period sensitivity using spectral decomposition of the state sensitivity matrix S [125]. In addition to the abovementioned approaches to sensitivity analysis, methods such as variance-based sensitivity analysis (also referred to as the Sobol method) find use in analyzing systems biology models. This method aims to decompose the variance of model outputs into fractions that can be attributed to a particular input or group of inputs [126]. A variation on the Sobol method was used as part of a global sensitivity analysis technique for systems biology models described in [127]. More recently, Babtie and colleagues [128] developed a topological sensitivity analysis method for network models in systems biology that considers uncertainty in both model structure and parameters. The concept of sloppiness is often encountered in models where several parameters need to be estimated simultaneously. Direct measurements of biochemical parameters in vivo are difficult, if not impossible. As a consequence, collectively fitting them to other experimental data often yields large parameter uncertainties [129]. However, collective fitting may still yield reasonable predictions, even if a subset of the parameters is poorly constrained. In such models, sloppiness refers to an eigenvalue spectrum of the parameter sensitivity matrix S that is spread roughly evenly over multiple orders of magnitude. Gutenkunst et al. [129] examined several different models in systems biology and found that nearly all of them suffered from sloppiness. The existence of sloppiness in systems biology models thereby places limits on the reverse engineering problems discussed in this section [130]. It is therefore important to pair model identification with a rigorous assessment of its performance and capabilities using tools such as sensitivity analysis.
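A minimal sketch (a toy model chosen for illustration, not taken from the cited studies) of computing local state sensitivities by finite differences for a model of the form of Eq. (55.4), and of inspecting the eigenvalue spectrum of S^T S, which in sloppy models spans many orders of magnitude.

# Finite-difference state sensitivities S_ij(t) = dx_i(t)/dp_j for a toy model,
# and the eigenvalue spread of S^T S (sloppy models span many decades).
import numpy as np
from scipy.integrate import solve_ivp

def f(t, x, p):
    a, b, c, d = p
    return [a - b * x[0], c * x[0] - d * x[1]]

def states(p, t_eval):
    sol = solve_ivp(f, (t_eval[0], t_eval[-1]), [0.0, 0.0],
                    t_eval=t_eval, args=(p,))
    return sol.y                                  # shape (n_states, n_times)

p0 = np.array([2.0, 0.8, 1.5, 0.5])
t_eval = np.linspace(0, 10, 50)
x0 = states(p0, t_eval)

S = np.zeros((x0.size, p0.size))                  # rows: states stacked over time
for j in range(p0.size):
    dp = np.zeros_like(p0)
    dp[j] = 1e-6 * p0[j]
    S[:, j] = ((states(p0 + dp, t_eval) - x0) / dp[j]).ravel()

eigvals = np.linalg.eigvalsh(S.T @ S)
print("orders of magnitude spanned by the spectrum of S^T S:",
      np.log10(eigvals.max() / eigvals.min()))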
55.6
Control of and in Biological Processes
Advances in modeling biological systems paired with improved experimental tools have enabled the development of several new technologies wherein feedback control has been engineered into living systems to better understand function, correct specific dysfunction, or exploit capabilities for clinically and industrially relevant purposes. In the following section, we describe examples of systems that have been designed and built to control biological processes at multiple scales, from the sub-cellular to the organismal level.
55.6.1 The Artificial Pancreas
A notable recent success in systems biology and control theory is the development of the artificial pancreas for the management of type 1 diabetes [131], brought to fruition through several years of collaboration between modeling experts, experimentalists, and medical practitioners. The artificial pancreas system can integrate both the detailed signaling models discussed in Sect. 55.2 and empirical input-output or transfer function models relating insulin and blood glucose levels. Recent additions to the artificial pancreas system include a health monitoring system to improve its safety and user-friendliness [132], as well as several advancements in the control algorithms for the glucose controller module [133, 134]. Figure 55.7 presents a block diagram representation of the artificial pancreas system. The process model of the patient (highlighted in grey) can comprise detailed signaling models for insulin dynamics, simpler pharmacokinetic models, or transfer function models. The artificial pancreas system and similar closed-loop insulin delivery systems represent a significant breakthrough in type 1 diabetes management and have been shown to result in improved glycemic control in patients [135].

Fig. 55.7 A block diagram representation of the artificial pancreas system. The components of the artificial pancreas constitute a classic feedback control loop, in which the patient's insulin signaling and digestive systems are defined as the "process" (represented in the dashed, shaded box) to be controlled. A sensor measures the patient's blood glucose concentration continuously and feeds the information to the controller. Based on this input, the controller calculates the optimal insulin dose to be administered via an insulin pump, closing the control loop. (Adapted from [136])
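To make the feedback structure of Fig. 55.7 concrete, the sketch below closes the loop around a deliberately crude, first-order stand-in for the glucose-insulin process with a proportional-integral controller. It is purely illustrative; the plant, gains, and units are assumptions and bear no relation to the clinical models or control algorithms cited above.

# Toy closed-loop glucose regulation: a crude first-order stand-in for the
# glucose-insulin "process" of Fig. 55.7, driven by a proportional-integral
# controller (illustrative only; not a clinical model or algorithm).
import numpy as np

dt, t_end = 1.0, 400.0                 # minutes
g, g_target = 180.0, 110.0             # blood glucose and setpoint (mg/dL)
kp, ki, integral = 0.2, 0.01, 0.0      # controller gains (arbitrary units)
trace = []

for _ in np.arange(0.0, t_end, dt):
    error = g - g_target
    integral += error * dt
    u = max(0.0, kp * error + ki * integral)   # insulin infusion, nonnegative
    # plant: constant meal/endogenous glucose input, lowered by insulin action
    g += dt * (1.0 - 0.5 * u)
    trace.append(g)

print("glucose after %.0f min: %.1f mg/dL" % (t_end, trace[-1]))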
55.6.2 The Antithetic Integral Feedback Network
Robust perfect adaptation is one of the most studied problems at the intersection of control theory and systems biology [8]. An exciting new development is the discovery of the antithetic integral feedback (AIF) architecture [137]. Previous analyses of cellular homeostasis and adaptation were often confined to a deterministic context, wherein the mechanisms by which adaptation is achieved under noisy conditions remained poorly understood. The AIF architecture has been shown to achieve robust set-point tracking and robust perfect adaptation in the stochastic setting [137]. Figure 55.8 illustrates the antithetic integral feedback architecture applied to the control of a reaction network. The network within the cloud is open-loop, and its dynamics need to be controlled. The structure of this open-loop network does not need to be fully known. The network is augmented with another reaction network (the controller) depicted outside the cloud. The controller acts on the open-loop network by influencing the rate of production of the actuated species X1 by means of the control input species Z1. The regulated species Xl is influenced by the change in abundance of X1, which in turn influences the rate of production of the sensing species Z2. Finally, Z2 annihilates with the control input species Z1, thereby implementing a negative feedback control loop.
Fig. 55.8 Antithetic integral control of an undefined reaction network. The input species Z1 regulates the rate of production of X1. X1 – through an undefined regulatory network represented by the shaded cloud – influences the activation of the regulated species Xl. The annihilation reaction between Z1 and Z2 completes the antithetic integral control feedback loop. (Adapted from [137])
The integral action is encoded in all the reactions of the controller network [137]. This control architecture has been shown to function robustly in the presence of biochemical noise and environmental perturbations. It has also been posited to explain the functioning of the σ70 circuit responsible for the expression of several housekeeping genes in E. coli [137]. Recently, Briat et al. developed an improved antithetic proportional-integral variant of this architecture, which achieves a further reduction in the variance of the controlled species and overcomes certain problems arising from cell-to-cell variability in the original AIF architecture [138]. In other recent work, it was shown that any control architecture capable of achieving robust perfect adaptation must necessarily contain the AIF motif [19]. In the same work, the authors engineered a synthetic circuit using the AIF architecture to successfully control growth rate in E. coli.
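A deterministic simulation sketch of the AIF motif of Fig. 55.8 wrapped around a simple two-species production-degradation plant is given below. The rate constants are illustrative assumptions, but whenever the closed loop is stable the controlled species settles at the set point mu/theta, which is the hallmark of the antithetic integral action.

# Deterministic simulation of the antithetic integral feedback motif of
# Fig. 55.8 around a simple two-species plant (parameter values are
# illustrative, not taken from the cited study). The regulated species Xl
# settles at the set point mu/theta when the closed loop is stable.
import numpy as np
from scipy.integrate import solve_ivp

mu, theta, eta = 10.0, 1.0, 100.0      # controller: production, sensing, annihilation
k, k1, g1, g2 = 1.0, 1.0, 1.0, 1.0     # actuation and plant rates

def aif(t, y):
    z1, z2, x1, xl = y
    return [mu - eta * z1 * z2,          # dZ1/dt
            theta * xl - eta * z1 * z2,  # dZ2/dt
            k * z1 - g1 * x1,            # dX1/dt
            k1 * x1 - g2 * xl]           # dXl/dt

sol = solve_ivp(aif, (0.0, 100.0), [0.0, 0.0, 0.0, 0.0], method="LSODA",
                t_eval=np.linspace(0.0, 100.0, 500))
print("set point mu/theta =", mu / theta,
      " simulated Xl(100) =", round(sol.y[3, -1], 2))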
55.6.3 Optogenetic Control of Gene Expression
Advances in real-time measurements of gene expression, coupled with the discovery of light-inducible gene regulation [139, 140], have enabled several new control systems through which temporal control of gene expression, and consequently cell function, is possible. Optogenetics broadly refers to the modulation of gene expression through light-sensitive proteins and domains [141]. Studies using this technique have brought about novel insights pertaining to gene function [142] and paved the way for genetic engineering applications that were previously not possible. In [141], Milias-Argeitis et al. used model-predictive control algorithms combined with a light-responsive two-component system to show robust and precise long-term control of protein production in liquid E. coli cultures. A closed-loop optogenetic system was developed by Harrigan et al. [142], wherein the controller delivered an optogenetically enabled transcriptional input designed to compensate for the deletion of an essential gene in a mutant strain of yeast. This research provided notable insights into the yeast mating pathway, one of the most well-studied systems in genetics, and laid the foundation for the control of transcriptional regulation [143].
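As a toy sketch of optogenetic feedback (a simple on-off light controller acting on a one-state production-degradation model; the studies above use model predictive control and far richer models), the simulated light input is toggled from sampled measurements to hold the protein level near a set point. All rates and units are assumptions.

# Toy optogenetic feedback loop: protein x is produced at a rate set by the
# light input u (0 = off, 1 = on) and degraded/diluted; a bang-bang controller
# toggles the light to hold x near a set point (illustrative only).
import numpy as np

dt, t_end = 0.1, 60.0          # minutes
k_prod, k_deg = 5.0, 0.2       # production under full light, degradation rate
setpoint, x = 15.0, 0.0
trace = []

for _ in np.arange(0.0, t_end, dt):
    u = 1.0 if x < setpoint else 0.0        # sampled measurement -> light on/off
    x += dt * (k_prod * u - k_deg * x)
    trace.append(x)

print("protein level after %.0f min: %.1f (set point %.1f)"
      % (t_end, trace[-1], setpoint))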
55.7
Emerging Opportunities
Great progress has been made toward the accurate and efficient characterization of complex biological systems. Even so, many grand challenges remain outstanding. With the advent of increased computational power, novel statistical algorithms, and higher-resolution, higher-throughput experimental techniques, disruptive opportunities and profound insights lie on the horizon. Greater understanding of biological systems naturally catalyzes additional questions and hypotheses about structure and function, for which concepts from control theory and systems engineering remain essential to further investigation. The symbiosis between control theory and biology continues to fuel advances in both fields. It is therefore fundamental that these fields continue to be inspired by, celebrate, and complement one another. This marriage is realized in the burgeoning and related field of synthetic biology, which provides fertile ground for the confluence of control in, and of, biological systems. Related advances in agent-based modeling [144], model inference [145], and deep learning [146] have markedly improved the accuracy of model predictions and are being integrated into control systems. As we adopt new technologies, it is important to maintain focus on intent: models of biological systems are invaluable tools that enable prediction and hypothesis generation. At their core, models enable greater understanding and offer critical biological insights. Maintaining focus on knowledge acquisition is therefore vital, especially as data, algorithms, and models become ubiquitous. It is equally critical to create standards and quality control metrics to ensure that models and control systems are reproducible and can be easily integrated with other models and systems [147]. Models should be developed deliberately and with purpose toward uncovering novel insights or addressing specific questions [148]. In this way, we stand to realize a fundamental principle of biology, where the whole is greater than the sum of its parts.
Acknowledgments The authors would like to acknowledge Dr. Jessica S. Yu for support in developing Figs. 55.1, 55.2, and 55.3. The authors would also like to acknowledge the McCormick School of Engineering at Northwestern University and the Washington Research Foundation at the University of Washington for their financial support. The authors also acknowledge contributions from Mirsky et al. [81], which provide historical context and a rigorous review of the core subject matter that forms the foundation of this chapter.
References 1. Bartocci, E., Lió, P.: Computational modeling, formal analysis, and tools for systems biology. PLOS Comput. Biol. 12(1), 1–22 (2016) 2. Kitano, H.: Computational systems biology. Nature 420(6912), 206–210 (2002) 3. Szallasi, Z., Stelling, J., Periwal, V.: System Modeling in Cellular Biology: From Concepts to Nuts and Bolts. MIT Press, Cambridge (2010). OCLC: 868211437 4. Doyle III, F.J., Stelling, J.: Systems interface biology. J. R. Soc. Interface 3(10), 603–616 (2006) 5. Arkin, A., Ross, J., McAdams, H.H.: Stochastic kinetic analysis of developmental pathway bifurcation in phage lambda-infected escherichia coli cells. Genetics 149(4), 1633–1648 (1998) 6. Barkai, N., Leibler, S.: Robustness in simple biochemical networks. Nature 387(6636), 913–917 (1997) 7. Yi, T.M., Huang, Y., Simon, M.I., Doyle, J.: Robust perfect adaptation in bacterial chemotaxis through integral feedback control. Proc. Natl. Acad. Sci. U. S. A. 97(9), 4649–4653 (2000) 8. Ma, W., Trusina, A., El-Samad, H., Lim, W.A., Tang, C.: Defining network topologies that can achieve biochemical adaptation. Cell 138(4), 760–773 (2009) 9. Goldbeter, A.: Biochemical Oscillations and Cellular Rhythms: The Molecular Bases of Periodic and Chaotic Behaviour. Cambridge University Press, Cambridge (1997) 10. Mirsky, H.P., Liu, A.C., Welsh, D.K., Kay, S.A., Doyle, F.J.: A model of the cell-autonomous mammalian circadian clock. Proc. Natl. Acad. Sci. 106(27), 11107–11112 (2009) 11. Forger, D.B., Peskin, C.S.: A detailed predictive model of the mammalian circadian clock. Proc. Natl. Acad. Sci. U. S. A. 100(25), 14806–14811 (2003) 12. Stelling, J., Gilles, E.D., Doyle, F.J.: Robustness properties of circadian clock architectures. Proc. Natl. Acad. Sci. U. S. A. 101(36), 13210–13215 (2004) 13. Bagheri, N., Stelling, J., Doyle III, F.J.: Quantitative performance metrics for robustness in circadian rhythms. Bioinformatics 23(3), 358–364 (2007) 14. Potvin-Trottier, L., Lord, N.D., Vinnicombe, G., Paulsson, J.: Synchronous long-term oscillations in a synthetic gene circuit. Nature 538(7626), 514–517 (2016) 15. Baetica, A.-A., Westbrook, A., El-Samad, H.: Control theoretical concepts for synthetic and systems biology. Curr. Opin. Syst. Biol. 14, 50–57 (2019). Synthetic biology 16. Muller, T.G., Faller, D., Timmer, J., Swameye, I., Sandra, O., Klingmuller, U.: Tests for cycling in a signalling pathway. J. R. Stat. Soc.: Ser. C: Appl. Stat. 53, 557–568 (2004) 17. Raj, A., van Oudenaarden, A.: Nature, nurture, or chance: stochastic gene expression and its consequences. Cell 135(2), 216–226 (2008) 18. Stelling, J., Sauer, U., Szallasi, Z., Doyle, F.J., Doyle, J.: Robustness of cellular functions. Cell 118(6), 675–685 (2004) 19. Aoki, S.K., Lillacci, G., Gupta, A., Baumschlager, A., Schweingruber, D., Khammash, M.: A universal biomolecular inte-
gral feedback controller for robust perfect adaptation. Nature 570(7762), 533–537 (2019) 20. Zechner, C., Seelig, G., Rullan, M., Khammash, M.: Molecular circuits for dynamic noise filtering. Proc. Natl. Acad. Sci. 113(17), 4729–4734 (2016) 21. Wen, X.L., Fuhrman, S., Michaels, G.S., Carr, D.B., Smith, S., Barker, J.L., Somogyi, R.: Large-scale temporal gene expression mapping of central nervous system development. Proc. Natl. Acad. Sci. U. S. A. 95(1), 334–339 (1998) 22. Lauffenburger, D.A.: Cell signaling pathways as control modules: complexity for simplicity? Proc. Natl. Acad. Sci. U. S. A. 97(10), 5031–5033 (2000) 23. Abel, J.H., Meeker, K., Granados-Fuentes, D., St. John, P.C., Wang, T.J., Bales, B.B., Doyle, F.J., Herzog, E.D., Petzold, L.R.: Functional network inference of the suprachiasmatic nucleus. Proc. Natl. Acad. Sci. 113(16), 4512–4517 (2016) 24. Zino, L., Barzel, B., Rizzo, A.: Network Science and Automation, chapter 11.5, pp. 1–39. Springer, Berlin (2021) 25. Edery, I.: Circadian rhythms in a nutshell. Physiol. Genomics 3(2), 59–74 (2000) 26. Reppert, S.M., Weaver, D.R.: Coordination of circadian timing in mammals. Nature 418(6901), 935–941 (2002) 27. Herzog, E.D., Aton, S.J., Numano, R., Sakaki, Y., Tei, H.: Temporal precision in the mammalian circadian system: a reliable clock from less reliable neurons. J. Biol. Rhythm. 19(1), 35–46 (2004) 28. Dunlap, J.C., Loros, J.J., DeCoursey, P.J.: Chronobiology: Biological Timekeeping. Sinauer Associates, Sunderland (2004). OCLC: 51764526 29. Leloup, J.C., Goldbeter, A.: A model for circadian rhythms in Drosophila incorporating the formation of a complex between the PER and TIM proteins. J. Biol. Rhythm. 13(1), 70–87 (1998) 30. Bagheri, N., Lawson, M.J., Stelling, J., Doyle, F.J.: Modeling the drosophila melanogaster circadian oscillator via phase optimization. J. Biol. Rhythm. 23(6), 525–537 (2008). PMID: 19060261 31. Gunawan, R., Doyle III, F.J.: Phase sensitivity analysis of circadian rhythm entrainment. J. Biol. Rhythm. 22(2), 180–194 (2007) 32. Taylor, S.R., Gunawan, R., Petzold, L.R., Doyle III, F.J.: Sensitivity measures for oscillating systems: application to mammalian circadian gene network. IEEE Trans. Autom. Control 53, 177–188 (2008) 33. Hannay, K.M., Booth, V., Forger, D.B.: Macroscopic models for human circadian rhythms. J. Biol. Rhythm. 34(6), 658–671 (2019). PMID: 31617438 34. Serkh, K., Forger, D.B.: Optimal schedules of light exposure for rapidly correcting circadian misalignment. PLOS Comput. Biol. 10(4), 1–14 (2014) 35. St. John, P.C., Taylor, S.R., Abel, J.H., Doyle, F.J.: Amplitude metrics for cellular circadian bioluminescence reporters. Biophys. J. 107(11), 2712–2722 (2014) 36. Brown, L.S., Doyle III, F.J.: A dual-feedback loop model of the mammalian circadian clock for multi-input control of circadian phase. PLOS Comput. Biol. 16(11), 1–25 (2020) 37. Lippincott Williams & Wilkins (ed.): Diabetes Mellitus: A Guide to Patient Care. Lippincott Williams & Wilkins, Philadelphia (2007). OCLC: ocm68705615 38. Sedaghat, A.R., Sherman, A., Quon, M.J.: A mathematical model of metabolic insulin signaling pathways. Am. J. Physiol. Endocrinol. Metab. 283(5), E1084–E1101 (2002) 39. Nyman, E., Cedersund, G., Strålfors, P.: Insulin signaling – mathematical modeling comes of age. Trends Endocrinol. Metab. 23(3), 107–115 (2012) 40. Di Camillo, B., Carlon, A., Eduati, F., Toffolo, G.M.: A rule-based model of insulin signalling pathway. BMC Syst. Biol. 10(1), 38 (2016) 41. Newman, M.E.J.: Networks: An Introduction. Oxford University Press, Oxford (2010). OCLC: ocn456837194
42. Smolen, P., Baxter, D.A., Byrne, J.H.: Mathematical modeling of gene networks. Neuron 26(3), 567–580 (2000) 43. Avendaño, M.S., Leidy, C., Pedraza, J.M.: Tuning the range and stability of multiple phenotypic states with coupled positive– negative feedback loops. Nat. Commun. 4(1), 2605 (2013) 44. Ananthasubramaniam, B., Herzel, H.: Positive feedback promotes oscillations in negative feedback loops. PLoS One 9(8), e104761 (2014) 45. Schmickl, T., Karsai, I.: Integral feedback control is at the core of task allocation and resilience of insect societies. Proc. Natl. Acad. Sci. 115(52), 13180–13185 (2018) 46. Ali, M.H., Imperiali, B.: Protein oligomerization: how and why. Bioorg. Med. Chem. 13(17), 5013–5020 (2005) 47. Hashimoto, K., Panchenko, A.R.: Mechanisms of protein oligomerization, the critical role of insertions and deletions in maintaining different oligomeric states. Proc. Natl. Acad. Sci. 107(47), 20352–20357 (2010) 48. National Research Council, Division on Engineering and Physical Sciences, Board on Army Science Technology, and Committee on Network Science for Future Army Applications: Network Science. National Academies Press, Washington (2006) 49. Shen-Orr, S.S., Milo, R., Mangan, S., Alon, U.: Network motifs in the transcriptional regulation network of escherichia coli. Nat. Genet. 31(1), 64–68 (2002) 50. Lee, T.I., Rinaldi, N.J., Robert, F., Odom, D.T., Bar-Joseph, Z., Gerber, G.K., Hannett, N.M., Harbison, C.T., Thompson, C.M., Simon, I., Zeitlinger, J., Jennings, E.G., Murray, H.L., Gordon, D.B., Ren, B., Wyrick, J.J., Tagne, J.B., Volkert, T.L., Fraenkel, E., Gifford, D.K., Young, R.A.: Transcriptional regulatory networks in saccharomyces cerevisiae. Science 298(5594), 799–804 (2002) 51. Barkai, N., Leibler, S.: Biological rhythms – circadian clocks limited by noise. Nature 403(6767), 267–268 (2000) 52. Mangan, S., Zaslaver, A., Alon, U.: The coherent feedforward loop serves as a sign-sensitive delay element in transcription networks. J. Mol. Biol. 334(2), 197–204 (2003) 53. Dubitzky, W., Wolkenhauer, O., Cho, K.-H., Yokota, H.: (eds.) Encyclopedia of Systems Biology. Springer Reference, New York (2013). OCLC: ocn821700038 54. Ali, M.Z., Parisutham, V., Choubey, S., Brewster, R.C.: Inherent regulatory asymmetry emanating from network architecture in a prevalent autoregulatory motif. eLife 9, e56517 (2020) 55. Simon, I., Barnett, J., Hannett, N., Harbison, C.T., Rinaldi, N.J., Volkert, T.L., Wyrick, J.J., Zeitlinger, J., Gifford, D.K., Jaakkola, T.S., Young, R.A.: Serial regulation of transcriptional regulators in the yeast cell cycle. Cell 106(6), 697–708 (2001) 56. Ideker, T.E., Thorssont, V., Karp, R.M.: Discovery of regulatory interactions through perturbation: inference and experimental design. In: Biocomputing 2000, pp. 305–316. World Scientific, Singapore (1999) 57. Elowitz, M.B., Leibler, S.: A synthetic oscillatory network of transcriptional regulators. Nature 403(6767), 335–338 (2000) 58. Dilão, R.: The regulation of gene expression in eukaryotes: Bistability and oscillations in repressilator models. J. Theor. Biol. 340, 199–208 (2014) 59. Mani, S., Thattai, M.: Stacking the odds for Golgi cisternal maturation. eLife 5, e16231 (2016) 60. Pe’er, D., Regev, A., Elidan, G., Friedman, N.: Inferring subnetworks from perturbed expression profiles. Bioinformatics 17 Suppl 1, S215–24 (2001) 61. 
Kyoda, K., Baba, K., Onami, S., Kitano, H.: DBRF-MEGN method: an algorithm for deducing minimum equivalent gene networks from large-scale gene expression profiles of gene deletion mutants. Bioinformatics 20(16), 2662–2675 (2004) 62. Kimura, S., Ide, K., Kashihara, A., Kano, M., Hatakeyama, M., Masui, R., Nakagawa, N., Yokoyama, S., Kuramitsu, S., Konagaya, A.: Inference of s-system models of genetic networks using
a cooperative coevolutionary algorithm. Bioinformatics 21(7), 1154–1163 (2005) 63. Prill, R.J., Iglesias, P.A., Levchenko, A.: Dynamic properties of network motifs contribute to biological network organization. Plos Biol. 3(11), 1881–1892 (2005) 64. Finkle, J.D., Bagheri, N.: Hybrid analysis of gene dynamics predicts context-specific expression and offers regulatory insights. Bioinformatics 35(22), 4671–4678 (2019) 65. Karlebach, G., Shamir, R.: Modelling and analysis of gene regulatory networks. Nat. Rev. Mol. Cell Biol. 9(10), 770–780 (2008) 66. Aldridge, B.B., Burke, J.M., Lauffenburger, D.A., Sorger, P.K.: Physicochemical modelling of cell signalling pathways. Nat. Cell Biol. 8(11), 1195–1203 (2006) 67. Kholodenko, B.N., Demin, O.V., Moehren, G., Hoek, J.B.: Quantification of short term signaling by the epidermal growth factor receptor. J. Biol. Chem. 274(42), 30169–30181 (1999) 68. Gilman, A., Arkin, A.P.: Genetic "code": representations and dynamical models of genetic components and networks. Annu. Rev. Genomics Hum. Genet. 3, 341–369 (2002) 69. Csete, M., Doyle, J.: Bow ties, metabolism and disease. Trends Biotechnol. 22(9), 446–450 (2004) 70. Ma, L., Iglesias, P.A.: Quantifying robustness of biochemical network models. BMC Bioinform. 3 (2002) 71. Ueda, H.R., Hagiwara, M., Kitano, H.: Robust oscillations within the interlocked feedback model of drosophila circadian rhythm. J. Theor. Biol. 210(4), 401–406 (2001) 72. Wilkinson, D.J.: Stochastic modelling for quantitative description of heterogeneous biological systems. Nat. Rev. Genet. 10(2), 122–133 (2009) 73. El Samad, H., Khammash, M., Petzold, L., Gillespie, D.: Stochastic modelling of gene regulatory networks. Int. J. Robust Nonlinear Control 15(15), 691–711 (2005) 74. Gillespie, D.T.: General method for numerically simulating stochastic time evolution of coupled chemical-reactions. J. Comput. Phys. 22(4), 403–434 (1976) 75. Munsky, B., Khammash, M.: The finite state projection algorithm for the solution of the chemical master equation. J. Chem. Phys. 124(4), 044104 (2006) 76. Gillespie, C.S.: Moment-closure approximations for mass-action models. IET Syst. Biol. 3(1), 52–58 (2009) 77. Neuert, G., Munsky, B., Tan, R.Z., Teytelman, L., Khammash, M., van Oudenaarden, A.: Systematic identification of signal-activated stochastic gene regulation. Science 339(6119), 584–587 (2013) 78. Clarke, B.L.: Stoichiometric network analysis. Cell Biophys. 12, 237–253 (1988) 79. Lander, A.D.: A calculus of purpose. PLoS Biol. 2(6), 712–714 (2004) 80. Julius, A.A., Imielinski, M., Pappas, G.J.: Metabolic networks analysis using convex optimization. In: 2008 47th IEEE Conference on Decision and Control, pp. 762–767 (2008). ISSN: 0191-2216 81. Mirsky, H., Stelling, J., Gunawan, R., Bagheri, N., Taylor, S.R., Kwei, E., Shoemaker, J.E., Doyle III, F.J.: Automatic Control in Systems Biology, pp. 1335–1360. Springer, Berlin (2009) 82. Yasemi, M., Jolicoeur, M.: Modelling cell metabolism: a review on constraint-based steady-state and kinetic approaches. Processes 9(2) (2021) 83. Varma, A., Palsson, B.O.: Metabolic flux balancing – basic concepts, scientific and practical use. Bio-Technology 12(10), 994–998 (1994) 84. Sánchez, C.E.G., Sáez, R.G.T.: Comparison and analysis of objective functions in flux balance analysis. Biotechnol. Prog. 30(5), 985–991 (2014) 85. Moreno-Sánchez, R., Saavedra, E., Rodríguez-Enríquez, S., Olín-Sandoval, V.: Metabolic control analysis: a tool for designing
strategies to manipulate metabolic pathways. J. Biomed. Biotechnol. 2008 (2008) 86. Zhang, C., Hua, Q.: Applications of genome-scale metabolic models in biotechnology and systems medicine. Front. Physiol. 6, 413 (2016) 87. Barua, D., Kim, J., Reed, J.L.: An automated phenotype-driven approach (geneforce) for refining metabolic and regulatory models. PLOS Comput. Biol. 6(10), 1–15 (2010) 88. Pratapa, A., Balachandran, S., Raman, K.: Fast-SL: an efficient algorithm to identify synthetic lethal sets in metabolic networks. Bioinformatics 31(20), 3299–3305 (2015) 89. Raman, K., Yeturu, K., Chandra, N.: targetTB: a target identification pipeline for Mycobacterium tuberculosis through an interactome, reactome and genome-scale structural analysis. BMC Syst. Biol. 2(1), 109 (2008) 90. Kompala, D.S., Ramkrishna, D., Jansen, N.B., Tsao, G.T.: Investigation of bacterial-growth on mixed substrates – experimental evaluation of cybernetic models. Biotechnol. Bioeng. 28(7), 1044–1055 (1986) 91. Varner, J., Ramkrishna, D.: Application of cybernetic models to metabolic engineering: investigation of storage pathways. Biotechnol. Bioeng. 58(2–3), 282–291 (1998) 92. Reimers, A.-M., Knoop, H., Bockmayr, A., Steuer, R.: Cellular trade-offs and optimal resource allocation during cyanobacterial diurnal growth. Proc. Natl. Acad. Sci. 114(31), E6457–E6465 (2017) 93. Segre, D., Vitkup, D., Church, G.M.: Analysis of optimality in natural and perturbed metabolic networks. Proc. Natl. Acad. Sci. U. S. A. 99(23), 15112–15117 (2002) 94. Stelling, J., Klamt, S., Bettenbrock, K., Schuster, S., Gilles, E.D.: Metabolic network structure determines key aspects of functionality and regulation. Nature 420(6912), 190–193 (2002) 95. Tyson, J.J., Chen, K.C., Novak, B.: Sniffers, buzzers, toggles and blinkers: dynamics of regulatory and signaling pathways in the cell. Curr. Opin. Cell Biol. 15(2), 221–231 (2003) 96. El-Samad, H., Kurata, H., Doyle, J.C., Gross, C.A., Khammash, M.: Surviving heat shock: control strategies for robustness and performance. Proc. Natl. Acad. Sci. U. S. A. 102(8), 2736–2741 (2005) 97. Hutchison, C.A., Chuang, R.-Y., Noskov, V.N., Assad-Garcia, N., Deerinck, T.J., Ellisman, M.H., Gill, J., Kannan, K., Karas, B.J., Ma, L., Pelletier, J.F., Qi, Z.-Q., Richter, R.A., Strychalski, E.A., Sun, L., Suzuki, Y., Tsvetanova, B., Wise, K.S., Smith, H.O., Glass, J.I., Merryman, C., Gibson, D.G., Venter, J.C.: Design and synthesis of a minimal bacterial genome. Science 351(6280) (2016) 98. Buenrostro, J., Wu, B., Chang, H., Greenleaf, W.: ATAC-seq: a method for assaying chromatin accessibility genome-wide. Curr Protoc Mol Biol. 109, 21.29.1–21.29.9 (2015) 99. Eberwine, J., Sul, J.-Y., Bartfai, T., Kim, J.: The promise of single-cell sequencing. Nat. Methods 11(1), 25–27 (2014) 100. Fan, A., Wang, H., Xiang, H., Zou, X.: Inferring large-scale gene regulatory networks using a randomized algorithm based on singular value decomposition. IEEE/ACM Trans. Comput. Biol. Bioinform. 16(6), 1997–2008 (2019) 101. D'Haeseleer, P., Liang, S.D., Somogyi, R.: Genetic network inference: from co-expression clustering to reverse engineering. Bioinformatics 16(8), 707–726 (2000) 102. Haury, A.-C., Mordelet, F., Vera-Licona, P., Vert, J.-P.: TIGRESS: trustful inference of gene REgulation using stability selection. BMC Syst. Biol. 6(1), 145 (2012) 103. Ciaccio, M.F., Chen, V.C., Jones, R.B., Bagheri, N.: The DIONESUS algorithm provides scalable and accurate reconstruction of dynamic phosphoproteomic networks to reveal new drug targets. Integr. Biol. Quant. Biosci.
Nano Macro 7(7), 776–791 (2015)
104. Villaverde, A.F., Ross, J., Morán, F., Banga, J.R.: Mider: network inference with mutual information distance and entropy reduction. PLoS One 9(5), 1–15 (2014) 105. Margolin, A.A., Nemenman, I., Basso, K., Wiggins, C., Stolovitzky, G., Favera, R.D., Califano, A.: ARACNE: an algorithm for the reconstruction of gene regulatory networks in a mammalian cellular context. BMC Bioinf. 7(1), S7 (2006) 106. Friedman, N.: Inferring cellular networks using probabilistic graphical models. Science 303(5659), 799–805 (2004) 107. Finkle, J.D., Wu, J.J., Bagheri, N.: Windowed granger causal inference strategy improves discovery of gene regulatory networks. Proc. Natl. Acad. Sci. 115(9), 2252–2257 (2018) 108. Qiu, X., Rahimzamani, A., Wang, L., Ren, B., Mao, Q., Durham, T., McFaline-Figueroa, J.L., Saunders, L., Trapnell, C., Kannan, S.: Inferring causal gene regulatory networks from coupled singlecell expression dynamics using scribe. Cell Syst. 10(3), 265 – 274.e11 (2020) 109. Schiebinger, G., Shu, J., Tabaka, M., Cleary, B., Subramanian, V., Solomon, A., Gould, J., Liu, S., Lin, S., Berube, P., Lee, L., Chen, J., Brumbaugh, J., Rigollet, P., Hochedlinger, K., Jaenisch, R., Regev, A., Lander, E.S.: Optimal-transport analysis of singlecell gene expression identifies developmental trajectories in reprogramming. Cell 176(4), 928 – 943.e22 (2019) 110. Muldoon, J.J., Yu, J.S., Fassia, M.-K., Bagheri, N.: Network inference performance complexity: a consequence of topological, experimental and algorithmic determinants. Bioinformatics 35(18), 3421–3432 (2019) 111. Guet, C.C., Elowitz, M.B., Hsing, W.H., Leibler, S.: Combinatorial synthesis of genetic networks. Science 296(5572), 1466–1470 (2002) 112. Tegner, J., Yeung, M.K.S., Hasty, J., Collins, J.J.: Reverse engineering gene networks: integrating genetic perturbations with dynamical modeling. Proc. Natl. Acad. Sci. U. S. A. 100(10), 5944–5949 (2003) 113. Gardner, T.S., di Bernardo, D., Lorenz, D., Collins, J.J.: Inferring genetic networks and identifying compound mode of action via expression profiling. Science 301(5629), 102–105 (2003) 114. Bansal, M., Gatta, G.D., di Bernardo, D.: Inference of gene regulatory networks and compound mode of action from time course gene expression profiles. Bioinformatics 22(7), 815–822 (2006) 115. Kholodenko, B.N., Kiyatkin, A., Bruggeman, F.J., Sontag, E., Westerhoff, H.V., Hoek, J.B.: Untangling the wires: a strategy to trace functional interactions in signaling and gene networks. Proc. Natl. Acad. Sci. U. S. A. 99(20), 12841–12846 (2002) 116. Sontag, E., Kiyatkin, A., Kholodenko, B.N.: Inferring dynamic architecture of cellular networks using time series of gene expression, protein and metabolite data. Bioinformatics 20(12), 1877– 1886 (2004) 117. Maria, G.: A review of algorithms and trends in kinetic model identification for chemical and biochemical systems. Chem. Biochem. Eng. Q. 18(3), 195–222 (2004) 118. Lillacci, G., Khammash, M.: Parameter estimation and model selection in computational biology. PLOS Comput. Biol. 6(3), 1– 17 (2010) 119. Rodriguez-Fernandez, M., Mendes, P., Banga, J.R.: A hybrid approach for efficient and robust parameter estimation in biochemical pathways. Biosystems 83(2–3), 248–265 (2006) 120. Lockhart, D.J., Winzeler, E.A.: Genomics, gene expression and DNA arrays. Nature 405(6788), 827–836 (2000) 121. Selinger, D.W., Wright, M.A., Church, G.M.: On the complete determination of biological systems. Trends Biotechnol. 21(6), 251–254 (2003) 122. 
MacCarthy, T., Pomiankowski, A., Seymour, R.: Using largescale perturbations in gene network reconstruction. BMC Bioinf. 6 (2005)
123. Wagner, A.: Reconstructing pathways in large genetic networks from genetic perturbations. J. Comput. Biol. 11(1), 53–60 (2004) 124. Gunawan, R., Doyle III, F.J.: Isochron-based phase response analysis of circadian rhythms. Biophys. J. 91(6), 2131–2141 (2006) 125. Zak, D.E., Stelling, J., Doyle, F.J.: Sensitivity analysis of oscillatory (bio)chemical systems. Comput. Chem. Eng. 29(3), 663–673 (2005) 126. Sobol, I.M.: Global sensitivity indices for nonlinear mathematical models and their monte carlo estimates. Math. Comput. Simul. 55(1), 271–280 (2001). The Second IMACS Seminar on Monte Carlo Methods 127. Sumner, T., Shephard, E., Bogle, I.D.L.: A methodology for global-sensitivity analysis of time-dependent outputs in systems biology modelling. J. R. Soc. Interface 9(74), 2156–2166 (2012) 128. Babtie, A.C., Kirk, P., Stumpf, M.P.H.: Topological sensitivity analysis for systems biology. Proc. Natl. Acad. Sci. 111(52), 18507–18512 (2014) 129. Gutenkunst, R.N., Waterfall, J.J., Casey, F.P., Brown, K.S., Myers, C.R., Sethna, J.P.: Universally sloppy parameter sensitivities in systems biology models. PLOS Comput. Biol. 3(10), e189 (2007) 130. Erguler, K., Stumpf, M.P.H.: Practical limits for reverse engineering of dynamical systems: a statistical analysis of sensitivity and parameter inferability in systems biology models. Mol. BioSyst. 7(5), 1593–1602 (2011) 131. Doyle, F.J., Huyett, L.M., Lee, J.B., Zisser, H.C., Dassau, E.: Closed-loop artificial pancreas systems: engineering the algorithms. Diabetes Care 37(5), 1191–1197 (2014) 132. Harvey, R.A., Dassau, E., Zisser, H., Seborg, D.E., Jovanoviˇc, L., Doyle III, F.J.: Design of the health monitoring system for the artificial pancreas: low glucose prediction module. J. Diabetes Sci. Technol. 6(6), 1345–1354 (2012). PMID: 23294779 133. Shi, D., Dassau, E., Doyle, F.J.: A multivariate bayesian optimization framework for long-term controller adaptation in artificial pancreas. In: 2018 IEEE Conference on Decision and Control (CDC), pp. 276–283 (2018) 134. Ortmann, L., Shi, D., Dassau, E., Doyle, F.J., Misgeld, B.J.E., Leonhardt, S.: Automated insulin delivery for type 1 diabetes mellitus patients using gaussian process-based model predictive control. In: 2019 American Control Conference (ACC), pp. 4118– 4123 (2019) 135. Brown, S.A., Kovatchev, B.P., Raghinaru, D., Lum, J.W., Buckingham, B.A., Kudva, Y.C., Laffel, L.M., Levy, C.J., Pinsker, J.E., Wadwa, R.P., Dassau, E., Doyle, F.J., Anderson, S.M., Church, M.M., Dadlani, V., Ekhlaspour, L., Forlenza, G.P., Isganaitis, E., Lam, D.W., Kollman, C., Beck, R.W.: Six-month randomized, multicenter trial of closed-loop control in type 1 diabetes. N. Engl. J. Med. 381(18), 1707–1717 (2019) 136. Huyett, L.M., Dassau, E., Zisser, H.C., Doyle, F.J.: Glucose sensor dynamics and the artificial pancreas: the impact of lag on sensor measurement and controller performance. IEEE Control. Syst. Mag. 38(1), 30–46 (2018) 137. Briat, C., Gupta, A., Khammash, M.: Antithetic integral feedback ensures robust perfect adaptation in noisy biomolecular networks. Cell Syst. 2(1), 15–26 (2016) 138. Briat, C., Gupta, A., Khammash, M.: Antithetic proportionalintegral feedback for reduced variance and improved control performance of stochastic reaction networks. J. R. Soc. Interface 15(143), 20180079 (2018)
139. Polstein, L.R., Gersbach, C.A.: Light-inducible gene regulation with engineered zinc finger proteins. Methods Mol. Biol. 1148, 89–107 (2014) 140. Shimizu-Sato, S., Huq, E., Tepperman, J.M., Quail, P.H.: A light-switchable gene promoter system. Nat. Biotechnol. 20(10), 1041–1044 (2002) 141. Milias-Argeitis, A., Rullan, M., Aoki, S.K., Buchmann, P., Khammash, M.: Automated optogenetic feedback control for precise and robust regulation of gene expression and cell growth. Nat. Commun. 7(1), 12546 (2016) 142. Harrigan, P., Madhani, H.D., El-Samad, H.: Real-time genetic compensation defines the dynamic demands of feedback control. Cell 175(3), 877–886.e10 (2018) 143. Baumschlager, A., Rullan, M., Khammash, M.: Exploiting natural chemical photosensitivity of anhydrotetracycline and tetracycline for dynamic and setpoint chemo-optogenetic control. Nat. Commun. 11(1), 3834 (2020) 144. Yu, J.S., Bagheri, N.: Agent-based models predict emergent behavior of heterogeneous cell populations in dynamic microenvironments. Front. Bioeng. Biotechnol. 8 (2020) 145. Mikelson, J., Khammash, M.: Likelihood-free nested sampling for parameter inference of biochemical reaction networks. PLOS Comput. Biol. 16(10), 1–24 (2020) 146. Yazdani, A., Lu, L., Raissi, M., Karniadakis, G.E.: Systems biology informed deep learning for inferring parameters and hidden dynamics. PLOS Comput. Biol. 16(11), e1007575 (2020) 147. Porubsky, V.L., Goldberg, A.P., Rampadarath, A.K., Nickerson, D.P., Karr, J.R., Sauro, H.M.: Best practices for making reproducible biochemical models. Cell Syst. 11(2), 109–120 (2020) 148. An, G., Mi, Q., Dutta-Moscato, J., Vodovotz, Y.: Agent-based models in translational systems biology. Wiley Interdiscip. Rev. Syst. Biol. Med. 1(2), 159–171 (2009)
Narasimhan Balakrishnan earned his Ph.D. in chemical & biological engineering at Northwestern University and his Bachelor’s degree with honors from the Indian Institute of Technology, Madras. Narasimhan’s academic studies spanned chemical engineering, systems engineering, and control theory. His doctoral research focused on the study of nonlinear, coupled biological oscillators and the role of cellular heterogeneity on both the system level function and control of these cell populations. His research integrates tools from dynamical systems, optimal control theory, graph theory, and hybrid techniques for the reduction and inference of phase oscillator models from high-dimensional, mechanistic biochemical network models.
Neda Bagheri received her B.S. and Ph.D. degrees in electrical and computer engineering from the University of California, Santa Barbara. She is currently an Associate Professor and Distinguished Washington Research Foundation Investigator in Biology and Chemical Engineering at the University of Washington, Seattle. Neda Bagheri also holds an Adjunct Associate Professor appointment in Chemical & Biological Engineering at Northwestern University. Neda Bagheri's research program aims to uncover the decision processes and underlying circuitry that govern intracellular dynamics and intercellular regulation across broad systems, from cancer biology to circadian rhythms. Her lab operates at the evolving interface between engineering and biology, working to better understand, predict, and control complex biological functions.
56 Automation in Hospitals and Health Care
Atsushi Ugajin
Hitachi, Ltd., Healthcare Innovation Division, Toranomon Hills Business Tower, Tokyo, Japan
e-mail: [email protected]
Contents
56.1 Need for Digital Transformation in Hospitals and Health Care . . . . 1210
56.1.1 Background . . . . 1210
56.1.2 Realization of Human-Centric Health Care . . . . 1210
56.2 The Key Technologies . . . . 1213
56.2.1 History of Major Technologies . . . . 1213
56.2.2 Standardization of Health Care . . . . 1215
56.2.3 Changes in Medical Institutions Toward Cutting-Edge Technologies . . . . 1216
56.3 Use Cases of Application . . . . 1217
56.3.1 Imaging AI . . . . 1218
56.3.2 Smart Surgery . . . . 1219
56.3.3 Robotics Solutions . . . . 1219
56.3.4 Remote Patient Monitoring . . . . 1222
56.4 Digital Platform . . . . 1223
56.4.1 Architecture . . . . 1223
56.4.2 Healthcare AI Platform . . . . 1224
56.4.3 Platform in Hospitals . . . . 1227
56.4.4 PHR and Beyond . . . . 1228
56.5 Emerging Trends and Challenges . . . . 1229
References . . . . 1229
Abstract
We are convinced that patient-centric health care is required for improving quality of life (QoL). To realize patient-centric health care, the burden and workload of doctors, nurses, and healthcare professionals must be reduced by using cutting-edge technologies, so that they can concentrate on patient care and improve the quality of medicine. In this sense, human-centric health care improves QoL not only
for patients but also for doctors, nurses, and healthcare professionals. There have been several major technological advancements over the past decades. Smart devices, cloud computing, artificial intelligence (AI), robotics, and the Internet of Things (IoT) can make major contributions to accelerating digital transformation in hospitals and health care. On the network side, the use of 5G is expected to enable new remote healthcare applications. Use cases are expanding rapidly with the spread of these advancements. The applications can be categorized along the patient care cycle: prevention, testing, diagnosis, treatment, and prognosis. They are mainly used within medical institutions but are also expanding beyond them, for example to patients at home and to remote patient monitoring, a trend accelerated by the COVID-19 pandemic. However, without standards and a shared digital platform, each new application tends to deepen application and data silos. Contributions are therefore needed to standards such as application programming interfaces, terminology, and data formats. In addition, digital platforms must be designed from the perspective of doctors, nurses, healthcare professionals, and patients, with usability and genuine benefit in mind: useful applications should be easy to use as needed, while data connectivity and an integrated data environment make the real situation of human resources, asset utilization, clinical outcomes, and more visible across hospitals and health care. Digital transformation in hospitals and health care is very important not only for patients but also for
doctors, nurses, and healthcare professionals. This means not just patient-centric health care but human-centric health care.

Keywords
Artificial intelligence · COVID-19 · Cloud computing · Digital transformation · Internet of Things · Personal health record · Platform · Remote patient monitoring · Robotics · 5G
56.1
Need for Digital Transformation in Hospitals and Health Care
We are convinced that patient-centric health care is required for improving quality of life (QoL). To realize patient-centric health care, the burden and workload of doctors, nurses, and healthcare professionals must be reduced by using cutting-edge technologies. As a result, doctors, nurses, and healthcare professionals can concentrate on patients and improve the quality of medicine. That is why digital transformation (DX) is required in hospitals and health care.
56.1.1 Background
Digital transformation is moving forward in various business sectors through the use of cutting-edge technologies such as artificial intelligence (AI), the Internet of Things (IoT), smart devices, and cloud computing. The medical field, by contrast, carries the critical mission of saving human lives, so the adoption of cutting-edge technology has lagged behind other industries. In addition, the medical field is always very busy. As a result, physicians, nurses, and healthcare professionals are often exhausted, both mentally and physically. That is why a deliberate shift toward patient-centric medical care is needed. By collaborating with physicians, nurses, and healthcare professionals in the field to perform proofs of concept (PoC) with cutting-edge technologies, it is possible to refine solutions until they can be used in real clinical settings. As a result, personalized medical care and standard medical care can be combined into patient- and citizen-centric care, maximizing the time spent facing patients while reducing the working hours of healthcare professionals. Patients and citizens, in turn, receive easy-to-understand medical care and the best treatment options available to everyone, which improves QoL.
56.1.2 Realization of Human-Centric Health Care
Standard medical care is the typical approach in current practice and is often called traditional medical care. Future medical care, however, will be based on personalized medical care [1], and a balance must be struck between standard and personalized care. There are six main gaps between traditional medical care and future medical care, and filling them promotes human-centric health care, not only for patients but also for doctors, nurses, and medical professionals. The six gaps concern time, spatial distance, treatment, doctor-patient communication, clinicians' knowledge, and workload (Table 56.1). To fill these gaps, it is necessary to adopt cutting-edge technologies both inside and outside hospitals. It is also necessary to balance the technologies themselves, the improvement of clinical and operational processes, and changes in the way doctors, nurses, and healthcare professionals are managed.
1. Gap of time. Until now, medical care in hospitals has faced challenges such as long wait times for consultations [4, 5], short consultation times [4], and extensive paperwork before a first visit. In a typical patient flow, the patient fills out a medical questionnaire after arriving at the medical institution. A medical assistant then calculates the consultation fee after the visit, and the patient waits again to pay, because hospitals see many outpatients in a single day (in Japan, acute hospitals accept walk-in patients as well as those with appointments) [5]. The patient must then bring the prescription given by the doctor to a dispensing pharmacy, where he or she must wait once more. The time spent for a single hospital visit is therefore far from insignificant [6]. To address this, a patient can make an appointment online and fill out the medical questionnaire at home, or on a tablet upon arrival at the hospital. The questionnaire responses are then automatically linked to the doctor's electronic medical record (EMR). The medical interview is a very important part of differential diagnosis; if its content can be confirmed in advance, the doctor can concentrate on digging deeper into only the important points. If a system automatically transcribes the conversation between patient and doctor into text and classifies it into the subjective, objective, assessment, plan (SOAP) note format [7, 8] in the EMR, the doctor can concentrate on the patient instead of typing on a computer to record the SOAP note, and can look the patient in the eye and examine them properly.
Table 56.1 The major gaps between traditional medical care and future medical care

Item | Traditional medical care | Future medical care | The merits of future medical care
Definition | Provider-centric model; disease-oriented care and standard care; improve outcomes for the average patient; compliance with the physician's decisions | Patient-centric model; patient-oriented and personalized care; improve outcomes for the individual patient; sharing the decision making with patients | Improvement of QoL
Examples of gaps | Long waiting time for payment, prescription | Cashless payment, home delivery of drugs | Remove time constraints
 | Short consultation time | Early diagnosis by AI | Remove time constraints
 | Consultation at physical facility only | Hybrid consultation of remote diagnosis and physical facility | Remove spatial distance constraints
 | Less connectivity among healthcare data | Interoperable Personal Health Record and Electronic Health Record | Remove spatial distance constraints
 | Standard diagnosis and treatment | Genomic medicine and personalized, patient profile-based treatment | Remove treatment option constraints
 | Limited treatment options | Comprehensive treatment options | Remove treatment option constraints
 | Lack of patient-friendly explanation | Combination of digital content and clinicians' explanation | Improvement of patient literacy
 | Lack of clinical information and knowledge for patients | Self-management and behavior change improvement | Improvement of patient literacy
 | Knowledge gaps between senior doctors and junior doctors | Fill the gaps using a clinical decision support system | Improvement of clinicians' knowledge
 | Difficulty catching up with state-of-the-art medical care amid the information flood | AI support for crawling appropriate information automatically | Improvement of clinicians' knowledge
 | Knowledge transfer from specialized doctors to general practitioners | Remote healthcare solution between specialized doctors and patients with general practitioners | Improvement of prognosis stage in patient care cycle
 | Clinicians' overload of administrative duties [2, 3] | Paperwork reduction using AI, IoT, and natural language processing (NLP) to focus on medical care for patients | Remove burnout; improvement of quality of medicine
If cashless payment can be made using biometric authentication or another method, the patient can leave the medical institution without having to pay on site [9]. If prescriptions are sent from the hospital as electronic prescriptions to a pre-registered dispensing pharmacy and, at the same time, to the patient's smart device [10], the patient can also specify the dispensing pharmacy where he or she would like to pick up the medication. If the patient has no time to pick it up at the pharmacy, the medication can be delivered to their home. The consultation records and dispensing history are recorded in a Personal Health Record (PHR) application on the cloud and can be viewed by the patient anytime and anywhere.

2. Gap of spatial distance
Although face-to-face consultations are the basis of medical care, the COVID-19 pandemic has driven a shift toward remote medical care to prevent nosocomial infections and to protect patients with chronic diseases. Remote medical care can fill the gap of spatial distance [2, 11].
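As a rough illustration of the prescription and PHR flow described above, the sketch below models an electronic prescription message being routed to a patient-selected pharmacy while the same event is mirrored into a cloud PHR. The data classes, field names, and the route_prescription() function are assumptions made for this example; real systems follow national e-prescription standards and secure messaging protocols rather than this simplified structure.

# Illustrative sketch only: routing an e-prescription to the patient's
# pre-registered dispensing pharmacy and recording it in a cloud PHR.

from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Prescription:
    patient_id: str
    drug: str
    dosage: str
    issued_by: str
    issued_at: datetime = field(default_factory=datetime.now)

@dataclass
class PersonalHealthRecord:
    patient_id: str
    entries: list = field(default_factory=list)

    def record(self, entry: dict) -> None:
        """Append a consultation or dispensing event to the patient's PHR."""
        self.entries.append(entry)

def route_prescription(rx: Prescription, pharmacy_id: str,
                       phr: PersonalHealthRecord) -> dict:
    """Send the e-prescription to the chosen pharmacy and log it in the PHR."""
    message = {
        "type": "e-prescription",
        "pharmacy": pharmacy_id,          # patient-selected pickup or delivery point
        "patient": rx.patient_id,
        "drug": rx.drug,
        "dosage": rx.dosage,
        "prescriber": rx.issued_by,
        "issued_at": rx.issued_at.isoformat(),
    }
    phr.record(message)                   # visible to the patient anytime, anywhere
    return message                        # in practice, sent over a secure channel

if __name__ == "__main__":
    phr = PersonalHealthRecord(patient_id="P-001")
    rx = Prescription("P-001", "Drug X 60 mg", "1 tablet x 3/day", "Dr. A")
    print(route_prescription(rx, pharmacy_id="PHARM-42", phr=phr))

The key design point is that the PHR entry is written at the moment the prescription is issued, so the patient's cloud record stays in step with the hospital and pharmacy without any additional manual data entry.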
At present, PHRs are mostly used for personal health management and less for medical treatment. In the future, PHRs are expected to become the hub of medical data in terms of data portability, self-management, and the ability to present one's own medical profile in the event of a disaster. It is therefore convenient to have such data tied to the PHR, and the data themselves need to reside in a secure cloud environment [12, 13]. If the PHR contains the patient's own medical history, medication history, vital data, lifestyle, and so on, the physician can be given one-time access to the PHR and can easily check the patient's status. The medical