Notice

The information presented in this publication is for the general education of the reader. Because neither the author nor the publisher has any control over the use of the information by the reader, both the author and the publisher disclaim any and all liability of any kind arising out of such use. The reader is expected to exercise sound professional judgment in using any of the information presented in a particular application. Additionally, neither the author nor the publisher has investigated or considered the effect of any patents on the ability of the reader to use any of the information in a particular application. The reader is responsible for reviewing any possible patents that may affect any particular use of the information presented.

Any references to commercial products in the work are cited as examples only. Neither the author nor the publisher endorses any referenced commercial product. Any trademarks or tradenames referenced in this publication, even without specific indication thereof, belong to the respective owner of the mark or name and are protected by law. Neither the author nor the publisher makes any representation regarding the availability of any referenced commercial product at any time. The manufacturer's instructions on the use of any commercial product must be followed at all times, even if in conflict with the information in this publication. The opinions expressed in this book are the author's own and do not reflect the view of the International Society of Automation.

Copyright © 2018 International Society of Automation (ISA)
All rights reserved.
Printed in the United States of America.
10 9 8 7 6 5 4 3 2
ISBN 978-1-941546-91-8

No part of this work may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without the prior written permission of the publisher.
ISA
67 T. W. Alexander Drive
P.O. Box 12277
Research Triangle Park, NC 27709

Library of Congress Cataloging-in-Publication Data in process
About the Editors
Nicholas P. Sands, PE, CAP, ISA Fellow

Nick Sands is currently a senior manufacturing technology fellow with more than 27 years at DuPont, working in a variety of automation roles at several different businesses and plants. He has helped develop several company standards and best practices in the areas of automation competency, safety instrumented systems, alarm management, and process safety.
Sands has been involved with the International Society of Automation (ISA) for more than 25 years, working on standards committees, including ISA18, ISA101, ISA84, and ISA105, as well as training courses, the ISA Certified Automation Professional (CAP) certification, and section and division events. His path to automation started when he earned a BS in Chemical Engineering from Virginia Tech.
Ian Verhappen, P Eng, CAP, ISA Fellow

After receiving a BS in Chemical Engineering with a focus on process control, Ian Verhappen's career has included stints in all three aspects of the automation industry: end user, supplier, and engineering consultant. Verhappen has been an active ISA volunteer for more than 25 years, learning from and sharing his knowledge with other automation professionals as an author, presenter, international speaker, and volunteer leader. Through a combination of engineering work, standards committee involvement, and a desire for continuous learning, Verhappen has been involved in all facets of the process automation industry, from field devices, including process analyzers, to controllers to the communication networks connecting these elements together. A Guide to the Automation Body of Knowledge is a small way to share this continuous learning and pass along the expertise gained from all those who have helped develop the body of knowledge used to edit this edition.
Preface to the Third Edition
It has been some years since the second edition was published in 2006. Times have changed. We have changed. Technology has changed. Standards have changed. Some of the areas where standards have changed include alarm management, human-machine interface design, procedural automation, and intelligent device management.
Another change: in 2009, we lost the pioneer of A Guide to the Automation Body of Knowledge and the Certified Automation Professional (CAP) program, my friend, Vernon Trevathan. He had a vision of defining automation engineering and developing automation engineers. With the changes in technology, it is clear that the trend of increasing automation will continue into the future. What is not clear is how to support that trend with capable engineers and technicians. This guide is a step toward a solution.

The purpose of this edition is the same as that of the first edition: to provide a broad overview of automation, broader than just instrumentation or process control, including topics like HAZOP studies, operator training, and operator effectiveness. The chapters are written by experts who share their insights in a few pages.

The third edition was quite a project for many reasons. It was eventually successful because of the hard work and dedication of Susan Colwell and Liegh Elrod of the ISA staff, and the unstoppable force of automation that is my co-editor, Ian Verhappen. Every chapter has been updated, and some new chapters have been added.

It is my hope that you find this guide to be a useful quick reference for the topics you know and an overview for the topics you seek to learn. May you enjoy reading this third edition, and I hope Vernon enjoys it as well.

Nicholas P. Sands
May 2018
Contents
About the Editors
Preface
I – Control Basics
1 Control System Documentation
By Frederick A. Meier and Clifford A. Meier
    Reasons for Documentation
    Types of Documentation
    Process Flow Diagram (PFD)
    Piping and Instrument Diagrams (P&IDs)
    Instrument Lists
    Specification Forms
    Logic Diagrams
    Location Plans (Instrument Location Drawings)
    Installation Details
    Loop Diagrams
    Standards and Regulations
    Other Resources
    About the Authors

2 Continuous Control
By Harold Wade
    Introduction
    Process Characteristics
    Feedback Control
    Controller Tuning
    Advanced Regulatory Control
    Further Information
    About the Author

3 Control of Batch Processes
By P. Hunter Vegas, PE
    What Is a Batch Process?
    Controlling a Batch Process
    What Is ANSI/ISA-88.00.01?
    Applying ANSI/ISA-88.00.01
    Summary
    Further Information
    About the Author

4 Discrete Control
By Kelvin T. Erickson, PhD
    Introduction
    Ladder Logic
    Function Block Diagram
    Structured Text
    Instruction List
    Sequential Problems
    Further Information
    About the Author
II – Field Devices

5 Measurement Uncertainty
By Ronald H. Dieck
    Introduction
    Error
    Measurement Uncertainty (Accuracy)
    Calculation Example
    Summary
    Definitions
    References
    Further Information
    About the Author

6 Process Transmitters
By Donald R. Gillum
    Introduction
    Pressure and Differential Pressure Transmitters
    Level Measurement
    Hydraulic Head Level Measurement
    Fluid Flow Measurement Technology
    Temperature
    Conclusion
    Further Information
    About the Author

7 Analytical Instrumentation
By James F. Tatera
    Introduction
    Sample Point Selection
    Instrument Selection
    Sample Conditioning Systems
    Process Analytical System Installation
    Maintenance
    Utilization of Results
    Further Information
    About the Author
8 Control Valves
By Hans D. Baumann
    Valve Types
    Actuators
    Accessories
    Further Information
    About the Author

9 Motor and Drive Control
By Dave Polka and Donald G. Dunn
    Introduction
    DC Motors and Their Principles of Operation
    DC Motor Types
    AC Motors and Their Principles of Operation
    AC Motor Types
    Choosing the Right Motor
    Adjustable Speed Drives (Electronic DC)
    Adjustable Speed Drives (Electronic AC)
    Automation and the Use of VFDs
    Further Information
    About the Authors

III – Electrical Considerations

10 Electrical Installations
By Greg Lehmann, CAP
    Introduction
    Scope
    Definitions
    Basic Wiring Practices
    Wire and Cable Selection
    Ground, Grounding, and Bonding
    Surge Protection
    Electrical Noise Reduction
    Enclosures
    Raceways
    Distribution Equipment
    Check-Out, Testing, and Start-Up
    Further Information
    About the Author

11 Safe Use and Application of Electrical Apparatus
By Ernie Magison, Updated by Ian Verhappen
    Introduction
    Philosophy of General-Purpose Requirements
    Equipment for Use Where Explosive Concentrations of Gas, Vapor, or Dust Might Be Present
    Equipment for Use in Locations Where Combustible Dust May Be Present
    For More Information
    About the Author

12 Checkout, System Testing, and Start-Up
By Mike Cable
    Introduction
    Instrumentation Commissioning
    Software Testing
    Factory Acceptance Testing
    Site Acceptance Testing
    System Level Testing
    Safety Considerations
    Further Information
    About the Author
IV – Control Systems

13 Programmable Logic Controllers: The Hardware
By Kelvin T. Erickson, PhD
    Introduction
    Basic PLC Hardware Architecture
    Basic Software and Memory Architecture (IEC 61131-3)
    I/O and Program Scan
    Forcing Discrete Inputs and Outputs
    Further Information
    About the Author
14 Distributed Control Systems
By Douglas C. White
    Introduction and Overview
    Input/Output Processing
    Control Network
    Control Modules
    Human-Machine Interface—Operator Workstations
    Human-Machine Interface—Engineering Workstation
    Application Servers
    Future DCS Evolution
    Further Information
    About the Author

15 SCADA Systems: Hardware, Architecture, and Communications
By William T. (Tim) Shaw, PhD, CISSP, CPT, C|EH
    Key Concepts of SCADA
    Further Information
    About the Author
V – Process Control
16 Control System Programming Languages
By Jeremy Pollard
    Introduction
    Scope
    What Is a Control System?
    What Does a Control System Control?
    Why Do We Need a Control Program?
    Instruction Sets
    The Languages
    Conclusions
    About the Author

17 Process Modeling
By Gregory K. McMillan
    Fundamentals
    Linear Dynamic Estimators
    Multivariate Statistical Process Control
    Artificial Neural Networks
    First Principle Models
    Capabilities and Limitations
    Process Control Improvement
    Costs and Benefits
    Further Information
    About the Author

18 Advanced Process Control
By Gregory K. McMillan
    Fundamentals
    Advanced PID Control
    Valve Position Controllers
    Model Predictive Control
    Real-Time Optimization
    Capabilities and Limitations
    Costs and Benefits
    MPC Best Practices
    Further Information
    About the Author

VI – Operator Interaction

19 Operator Training
By Bridget A. Fitzpatrick
    Introduction
    Evolution of Training
    The Training Process
    Training Topics
    Nature of Adult Learning
    Training Delivery Methods
    Summary
    Further Information
    About the Author
20 Effective Operator Interfaces
By Bill Hollifield
    Introduction and History
    Basic Principles for an Effective HMI
    Display of Information Rather Than Raw Data
    Embedded Trends
    Graphic Hierarchy
    Other Graphic Principles
    Expected Performance Improvements
    The HMI Development Work Process
    The ISA-101 HMI Standard
    Conclusion
    Further Information
    About the Author

21 Alarm Management
By Nicholas P. Sands
    Introduction
    Alarm Management Life Cycle
    Getting Started
    Alarms for Safety
    References
    About the Author
VII – Safety
22 HAZOP Studies
By Robert W. Johnson
    Application
    Planning and Preparation
    Nodes and Design Intents
    Scenario Development: Continuous Operations
    Scenario Development: Procedure-Based Operations
    Determining the Adequacy of Safeguards
    Recording and Reporting
    Further Information
    About the Author
23 Safety Instrumented Systems in the Process Industries
By Paul Gruhn, PE, CFSE
    Introduction
    Hazard and Risk Analysis
    Allocation of Safety Functions to Protective Layers
    Determine Safety Integrity Levels
    Develop the Safety Requirements Specification
    SIS Design and Engineering
    Installation, Commissioning, and Validation
    Operations and Maintenance
    Modifications
    System Technologies
    Key Points
    Rules of Thumb
    Further Information
    About the Author

24 Reliability
By William Goble
    Introduction
    Measurements of Successful Operation: No Repair
    Useful Approximations
    Measurements of Successful Operation: Repairable Systems
    Average Unavailability with Periodic Inspection and Test
    Periodic Restoration and Imperfect Testing
    Equipment Failure Modes
    Safety Instrumented Function Modeling of Failure Modes
    Redundancy
    Conclusions
    Further Information
    About the Author
VIII – Network Communications

25 Analog Communications
By Richard H. Caro
    Further Information
    About the Author
26 Wireless Transmitters
By Richard H. Caro
    Summary
    Introduction to Wireless
    Powering Wireless Field Instruments
    Interference and Other Problems
    ISA-100 Wireless
    WirelessHART
    WIA-PA
    WIA-FA
    ZigBee
    Other Wireless Technologies
    Further Information
    About the Author

27 Cybersecurity
By Eric C. Cosman
    Introduction
    General Security Concepts
    Industrial Systems Security
    Standards and Practices
    Further Information
    About the Author
IX – Maintenance

28 Maintenance, Long-Term Support, and System Management
By Joseph D. Patton, Jr.
    Maintenance Is Big Business
    Service Technicians
    Big Picture View
    Production Losses from Equipment Malfunction
    Performance Metrics and Benchmarks
    Further Information
    About the Author

29 Troubleshooting Techniques
By William L. Mostia, Jr.
    Introduction
    Logical/Analytical Troubleshooting Framework
    The Seven-Step Troubleshooting Procedure
    Vendor Assistance: Advantages and Pitfalls
    Other Troubleshooting Methods
    Summary
    Further Information
    About the Author
30 Asset Management
By Herman Storey and Ian Verhappen, PE, CAP
    Asset Management and Intelligent Devices
    Further Information
    About the Authors

X – Factory Automation

31 Mechatronics
By Robert H. Bishop
    Basic Definitions
    Key Elements of Mechatronics
    Physical System Modeling
    Sensors and Actuators
    Signals and Systems
    Computers and Logic Systems
    Data Acquisition and Software
    The Modern Automobile as a Mechatronic Product
    Classification of Mechatronic Products
    The Future of Mechatronics
    References
    Further Information
    About the Author

32 Motion Control
By Lee A. Lane and Steve Meyer
    What Is Motion Control?
    Advantages of Motion Control
    Feedback
    Actuators
    Electric Motors
    Controllers
    Servos
    Feedback Placement
    Multiple Axes
    Leader/Follower
    Interpolation
    Performance
    Conclusion
    Further Information
    About the Authors
33 Vision Systems
By David Michael
    Using a Vision System
    Vision System Components
    Vision Systems Tasks in Industrial/Manufacturing/Logistics Environments
    Implementing a Vision System
    What Can the Camera See?
    Conclusion
    Further Information
    About the Author

34 Building Automation
By John Lake, CAP
    Introduction
    Open Systems
    Information Management
    Summary
    Further Information
    About the Author
XI – Integration

35 Data Management
By Diana C. Bouchard
    Introduction
    Database Structure
    Data Relationships
    Database Types
    Basics of Database Design
    Queries and Reports
    Data Storage and Retrieval
    Database Operations
    Special Requirements of Real-Time Process Databases
    The Next Step: NoSQL and Cloud Computing
    Data Quality Issues
    Database Software
    Data Documentation
    Database Maintenance
    Data Security
    Further Information
    About the Author

36 Mastering the Activities of Manufacturing Operations Management
By Charlie Gifford
    Introduction
    Level 3 Role-Based Equipment Hierarchy
    MOM Integration with Business Planning and Logistics
    MOM and Production Operations Management
    Other Supporting Operations Activities
    The Operations Event Message Enables Integrated Operations Management
    The Level 3-4 Boundary
    References
    Further Information
    About the Author

37 Operational Performance Analysis
By Peter G. Martin, PhD
    Operational Performance Analysis Loops
    Process Control Loop Operational Performance Analysis
    Advanced Control Operational Performance Analysis
    Plant Business Control Operational Performance Analysis
    Real-Time Accounting
    Enterprise Business Control Operational Performance Analysis
    Summary
    Further Information
    About the Author
XII – Project Management
38 Automation Benefits and Project Justifications
By Peter G. Martin, PhD
    Introduction
    Identifying Business Value in Production Processes
    Capital Projects
    Life-Cycle Cost Analysis
    Life-Cycle Economic Analysis
    Return on Investment
    Net Present Value
    Internal Rate of Return
    Project Justification Hurdle
    Getting Started
    Further Information
    About the Author

39 Project Management and Execution
By Michael D. Whitt
    Introduction
    Contracts
    Project Life Cycle
    Project Management Tools and Techniques
    References
    About the Author

40 Interpersonal Skills
By David Adler
    Introduction
    Communicating One-on-One
    Communicating in Group Meetings
    Writing
    Building Trust
    Mentoring Automation Professionals
    Negotiating
    Resolving Conflict
    Justifying Automation
    Selecting the Right Automation Professionals
    Building an Automation Team
    Motivating Automation Professionals
    Conclusion
    References
    About the Author
Index
I Control Basics
Documentation

One of the basic tenets of any project or activity is to be sure it is properly documented. Automation and control activities are no different, though they have their own unique requirements for capturing the requirements, outcomes, and deliverables of the work being performed. The International Society of Automation (ISA) has developed standards that are broadly accepted across the industry as the preferred method for documenting a basic control system; however, documentation encompasses more than just these standards throughout the control system life cycle.
Continuous and Process Control

Continuous processes require controls to keep them within safe operating boundaries while maximizing the utilization of the associated equipment. These basic regulatory controls are the foundation on which the automation industry relies and builds more advanced techniques. It is important to understand the different forms of basic continuous control and how to configure or tune the resulting loops—from sensor to controller to actuator—because they form the building blocks of the automation industry.
Batch Control

Not all processes are continuous. Some treat a discrete amount of material within a shorter period of time and therefore have a different set of requirements than a continuous process. The ISA standards on batch control are the accepted industry best practices for implementing control in a batch processing environment; these practices are summarized.
Discrete Control
This chapter provides examples of how to implement discrete control, which is typically used in a manufacturing facility. These systems mainly have discrete sensors and actuators, that is, sensors and actuators that have one of two values (e.g., on/off or open/closed).
1
Control System Documentation
By Frederick A. Meier and Clifford A. Meier
Reasons for Documentation
Documentation used to define control systems has evolved over the past half century as the technology used to generate it has evolved. Information formerly stored on smudged, handwritten index cards in the maintenance shop is now more likely stored in computer databases. The purpose of that documentation, however, remains largely unchanged: to impart information efficiently and clearly to a knowledgeable viewer. The information that is recorded evolves in the conceptualization, design, construction, operation, and maintenance of a facility that produces a desired product.

The documents described in this chapter form a typical set used to accomplish the goal of defining the work to be done, be it design, construction, or maintenance. The documents were developed and are commonly used for a continuous process, but they also work for other applications, such as batch processes. The authors know of no universal "standard" for documentation, but these can be considered typical. Some facilities or designs won't include all the described documents, and some designs may include documents not described, but the information provided on these documents will likely be found somewhere in any successful document suite.

All the illustrations and much of the description used in this section were published in the 2011 International Society of Automation (ISA) book Instrumentation and Control System Documentation by Frederick A. Meier and Clifford A. Meier. That book includes many more illustrations and a lot more explanation.

This section uses the term automation and control (A&C) to identify the group or discipline responsible for the design and maintenance of a process control system; the group that prepares and, hopefully, maintains these documents. Many terms are used to identify the people responsible for a process control system; the group titles differ by industry, company, and even region. In their book, the Meiers use the term instrument and control (I&C) to describe the engineers and designers who develop control system documentation; for our purposes, the terms are interchangeable.
Types of Documentation

This chapter provides descriptions and typical, albeit simple, sketches for the following documents:

• Process flow diagrams (PFDs)
• Piping and instrument diagrams (P&IDs)
• Loop and tag numbering
• Instrument lists
• Specification forms
• Logic diagrams
• Location plans (instrument location drawings)
• Installation details
• Loop diagrams
• Standards and regulations
• Operating instructions

Figure 1-1 is a timeline that illustrates a sequence for document development. There are interrelationships: information developed in one document is required before a succeeding document can be developed. Data in the process flow diagram drives the design of the P&ID. P&IDs must be essentially complete before instrument specification forms can be efficiently developed. Loop diagrams are built from most of the preceding documents in the list.
The time intervals and percentage of total effort for each task will vary by industry and by designer. The intervals can be days, weeks, or months, but the sequence will likely be similar to that shown above. The documents listed are not all developed or used solely by a typical A&C group. However, the A&C group contributes to, and uses, the information contained in them.
Process Flow Diagram (PFD)

A process flow diagram is a "big picture" schematic representation of the major features of a process. These diagrams summarize the overall intent of the process using a graphical representation of the material flow and the conversion of resources into a product. Points where resources and energy combine to produce material are identified graphically. These points are then defined in more detail in associated mass balance calculations.

The PFD shows how much of each resource or product a plant might make or treat; it includes descriptions and quantities of needed raw materials, as well as byproducts produced. PFDs show critical process conditions—pressures, temperatures, and flows; necessary equipment; and major process piping. They differ from P&IDs, which will be discussed later, in that they have far less detail and less ancillary information. They are, however, the source from which P&IDs grow. Figure 1-2 shows a simple PFD of a knockout drum used to separate liquid from a wet gas stream.
Process designers produce PFDs to sketch out the important aspects of a process. In a way, a PFD serves the same purpose that an abstract does for a technical paper. Only the information or major components needed to define the process are included, using the minimal amount of detail that is sufficient to define the quantities and energy needed. Also shown on the drawing are the main components for storage, conversion of materials, and transfer of materials, as well as the main interconnections between components.

Because a PFD is a very broad view and A&C design is ultimately about details, little A&C information is included. Identification marks are used on the lines of interconnection, commonly called streams. The marks link to tables containing the content and conditions for that stream. This information comes from a mass and energy balance calculated by the process designers. The mass balance is the calculation that defines what the process will accomplish.

PFDs may include important—or high-cost—A&C components because one purpose of a PFD is to support the preparation of cost estimates made to determine if a project will be done. There is no ISA standard for PFDs, but ANSI/ISA-5.1-2009, Instrumentation Symbols and Identification, and ISA-5.3-1983, Graphic Symbols for Distributed Control/Shared Display Instrumentation, Logic, and Computer Systems, contain symbols that can be used to indicate A&C components.

Batch process plants configure their equipment in various ways as raw materials and process parameters change. Many different products are often produced in the same plant. A control recipe, or formula, is developed for each product. A PFD might be used to document the different recipes.
Piping and Instrument Diagrams (P&IDs)

The acronym P&ID is widely understood within the process industries to identify the principal document used to define the equipment, piping, and all A&C components needed to implement a process. ISA's Automation, Systems, and Instrumentation Dictionary definition for P&ID tells us what they do: P&IDs "show the interconnection of process equipment and instrumentation used to control the process."1 The PFD says what the process will do; the P&ID defines how it happens.

P&IDs are developed in steps by members of the various design disciplines as a project evolves. Information placed on a P&ID by one discipline is then used by other disciplines as the basis for their design.
The P&ID shown in Figure 1-3 has been developed from the PFD in Figure 1-2. The P&ID includes the control system definition using symbols from ISA-5.1 and ISA-5.3. In this example, there are two electronic loops that are part of the shared display/distributed control system (DCS): FRC-100, a flow loop with control and recording capability, and LIC-100, a level loop with control and indicating capability. There is one field-mounted pneumatic loop, PIC-100, with control and indication capability. There are several switches and indicating lights on a local (field-mounted) panel, including hand switch and light pairs HS/ZL-400, HS/HL-401, and HS/HL-402. Other control system components are also included in the drawing.
The P&ID also includes piping and mechanical equipment details; for instance, the data string "10″ 150 CS 001" associated with an interconnecting line defines it as a pipe with the following characteristics:

• 10″ = 10-inch nominal pipe size
• 150 = ANSI Class 150 rated system
• CS = carbon steel pipe
• 001 = associated pipe is line number 1

The project standards, as defined on a legend sheet, will establish the format and terms used in the data string.
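To make the decoding concrete, the short Python sketch below parses a line data string of the format just described. The four-field layout is an assumption taken from the example above; an actual project's legend sheet governs the real format.

    # Minimal sketch (assumed four-field format from the example above):
    # '10" 150 CS 001' -> nominal size, ANSI class, material, line number.
    def parse_line_data(data_string):
        size, ansi_class, material, line_number = data_string.split()
        return {
            "nominal_size": size,            # e.g., 10" nominal pipe size
            "ansi_class": int(ansi_class),   # e.g., ANSI Class 150 rating
            "material": material,            # e.g., CS = carbon steel
            "line_number": int(line_number), # e.g., 001 = line number 1
        }

    print(parse_line_data('10" 150 CS 001'))
    # {'nominal_size': '10"', 'ansi_class': 150, 'material': 'CS', 'line_number': 1}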
Loop and Tag Numbering

A unique loop number is used to group a set of functionally connected process control elements. Grouped elements generally include the measurement of a process variable, an indication or control element, and often a manipulated variable. Letters combined with the loop number comprise a tag number. Tag numbers provide unique identification for each A&C component. All the devices that make up a single process control loop have the same number but use different letter designations to define their process function. The letter string is formatted and explained on a legend sheet based on ISA-5.1, Table 1.
Figure 1-4 consists of LT-100, a field-mounted electronic level transmitter; LI-100, a field-mounted electronic indicator; LIC-100, a level controller that is part of the distributed control system; LY-100, a current-to-pneumatic (I/P) converter; and LV-100, a pneumatic butterfly control valve. In this case, loop L-100 would have some variation of the title "KO Drum 100 Level."

ISA-5.1 states that loop numbers may be parallel, using a single numeric sequence for all process variables, or serial, requiring a new number for each process variable. Figure 1-3 illustrates a P&ID with a parallel numbering system using a single loop number (100) with multiple process variables: flow, level, and temperature. The flow loop is FRC-100, the level loop is LIC-100, and the temperature loop is TI-100.
Figure 1-5 shows how tag marks may also identify the loop location or service. Number prefixes, suffixes, and other systems can be used that further tie instruments to a P&ID, a piece of equipment, or a location.
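As an illustration of this tagging scheme, the hedged Python sketch below splits a tag into an optional numeric prefix, its function letters, the loop number, and an optional suffix. The regular expression is an assumption for illustration only; each project's legend sheet defines the authoritative format.

    import re

    # Illustrative pattern: optional area prefix, ISA-5.1 function letters,
    # loop number, optional suffix (e.g., "10-LT-100A").
    TAG_PATTERN = re.compile(
        r"^(?:(?P<prefix>\d+)-)?(?P<letters>[A-Z]+)-?(?P<loop>\d+)(?P<suffix>[A-Z]*)$"
    )

    def parse_tag(tag):
        match = TAG_PATTERN.match(tag)
        if match is None:
            raise ValueError("unrecognized tag format: " + tag)
        return match.groupdict()

    print(parse_tag("LIC-100"))    # letters LIC (level indicating controller), loop 100
    print(parse_tag("10-LT-100A")) # prefix 10, letters LT, loop 100, suffix A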
Instrument Lists

The instrument list, also called an instrument index, is an alphanumeric listing of all tag-marked components. The list or index provides a place for each tag identifier to reference the relevant drawings and documents for that device. Figure 1-6 is a partial listing that includes the level devices on the D-001 K.O. drum—LG-1, level gauge; LT-100, level transmitter; and LI-100, level indicator—all of which are shown in Figure 1-3. The list includes instruments on P&IDs that were not included in the figures for this chapter. Figure 1-6 has nine columns: "Tag," "Desc(ription)," "P&ID," "Spec Form," "REQ(uisition)," "Location Plan," "Install(ation) Detail," "Piping Drawing," and "Loop Diagram." The instrument list is developed by the A&C group.
There is no ISA standard defining an instrument list; thus, the list may contain as many columns as the project design team or the owner needs to support the work, including design, procurement, maintenance, and operation. The data contained within the document will have various uses during the life of the facility. In the example index, level gauges show "n/a," not applicable, in the loop drawing column because, for this facility, gauges are mechanical devices that are not wired to the control system, so no loop drawings are made for them. This is a common approach, but your facility may choose to prepare and use loop diagrams for all components even when they do not wire to anything.
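Since no standard fixes the column set, the index is often kept in a spreadsheet or small database. The Python sketch below models a few rows patterned on Figure 1-6; the column set is illustrative only, and the loop diagram references are hypothetical placeholders.

    # Hypothetical rows modeled on Figure 1-6; references are placeholders.
    instrument_index = [
        {"tag": "LG-1",   "desc": "Level gauge",       "loop_diagram": "n/a"},
        {"tag": "LT-100", "desc": "Level transmitter", "loop_diagram": "L-100"},
        {"tag": "LI-100", "desc": "Level indicator",   "loop_diagram": "L-100"},
    ]

    # Example query: all tags that appear on loop diagram L-100.
    print([row["tag"] for row in instrument_index if row["loop_diagram"] == "L-100"])
    # ['LT-100', 'LI-100']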
Specification Forms

The A&C group defines tag-marked physical or real devices on specification forms, also known as data sheets. The forms include information useful to designers and device suppliers during the design and procurement phase of a project, enabling vendors and contractors to quote and supply the correct device. The forms record for maintenance and operations the features and capabilities of the devices installed. The forms list all the component information, including materials, ratings, area classification, range, signal, power, and service.

A specification form is completed for each component. In some cases, similar devices can all be listed on one specification form as long as the complete model number of the device is the same for all tag numbers listed. Let's look at LT-100 from Figure 1-3. The P&ID symbol defines it as an electronic displacement-type level transmitter. Figure 1-7 is the completed specification form for LT-100. This form is from ISA-20-1981, Specification Forms for Process Measurement and Control Instruments, Primary Elements, and Control Valves.

There are many variations of specification forms. Most engineering contractors have developed their own set of forms; some control component suppliers have done this as well. ISA has a revised set of forms in a "database/dropdown selection" format in technical report ISA-TR20.00.01-2001, Specification Forms for Process Measurement and Control Instruments – Part 1: General Considerations. The purpose of all the forms is to aid the A&C group in organizing the information needed to fully and accurately define control components.
Logic Diagrams

Continuous process control is shown clearly on P&IDs; however, different presentations are needed for on/off control. Logic diagrams are one form of these presentations. ISA's set of symbols is defined in ISA-5.2-1976 (R1992), Binary Logic Diagrams for Process Operations.
ISA symbols AND, OR, NOT, and MEMORY (FLIP-FLOP), with an explanation of their meaning, are shown in Figures 1-8 and 1-9. Other sets of symbols and other methods may be used to document on/off control; for example, text descriptions (a written explanation of the on/off system), ladder diagrams, or electrical elementary diagrams, known as elementaries.

Some designers develop a functional specification or operation description to describe the intended operation of the system. These documents usually include a description of the on/off control of the process. Figure 1-10 is an illustration of a motor start circuit as an elementary diagram and in ISA logic form.
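To show how such binary logic reads in executable form, here is a minimal Python sketch of the motor start seal-in function of Figure 1-10: (START OR already running) AND NOT STOP. Signal names are illustrative, and the normally closed wiring of a real stop push button is simplified to a plain Boolean.

    def motor_seal_in(start, stop, running):
        # One evaluation of the seal-in (MEMORY) circuit:
        # (START OR already running) AND NOT STOP.
        return (start or running) and not stop

    running = False
    running = motor_seal_in(start=True, stop=False, running=running)   # START pressed
    print(running)  # True: motor starts and seals itself in
    running = motor_seal_in(start=False, stop=False, running=running)  # START released
    print(running)  # True: seal-in holds the motor on
    running = motor_seal_in(start=False, stop=True, running=running)   # STOP pressed
    print(running)  # False: motor stops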
Location Plans (Instrument Location Drawings)
There is no ISA standard that defines or describes a location plan or an instrument location drawing. Location plans show the coordinate location and sometimes, if the user chooses, the elevation of control components on plan drawings of a plant. These predominantly orthographic drawings are useful to the people building the facility, and they can be used by maintenance and operations as a road map of the system.

Figure 1-11 shows one approach for a location plan. It shows the approximate location and elevation of the tag-marked devices included on the P&ID (see Figure 1-3), air supplies for the devices, and the interconnection tubing needed to complete the pneumatic loop. Other approaches to location plans might include conduit and cabling information, and fitting and junction box information. Location plans are developed by the A&C or electrical groups. They are used during construction and by maintenance personnel after the plant is built to locate the various devices.
Installation Details

Installation details define the requirements for properly installing the tag-marked devices. The installation details show process connections, pneumatic tubing, or conduit connections; insulation and even winterizing requirements; and support methods. There is no ISA standard that defines installation details. However, libraries of installation details have been developed and maintained by engineering contractors, A&C device suppliers, some plant owners, installation contractors, and some individual designers. They all have the same aim: successful installation of the component so that it operates properly and can be operated and maintained. However, they may differ in the details of how to achieve reliable operation.

Figure 1-12 shows one approach. This drawing includes a material list to aid in procuring installation materials and to assist the installation personnel.
Installation details may be developed by the A&C group during the design phase. However, they are sometimes developed by the installer during construction or by an equipment supplier for the project.
Loop Diagrams ISA’s Automation, Systems, and Instrumentation Dictionary defines a loop diagram as “a schematic representation of a complete hydraulic, electric, magnetic, or pneumatic circuit.”2 The circuit is called a loop. For a typical loop see Figure 1-4. ISA-5.4-1991,
Instrument Loop Diagrams, presents six typical loop diagrams, two each for pneumatic, electronic, and shared display and control. One of each type shows the minimum items required, and the other shows additional optional items.
Figure 1-13 is a loop diagram for electronic flow loop FIC-301. Loop diagrams are helpful documents for maintenance and troubleshooting because they show how the components are connected from the process to the control device, all on one sheet. Loop diagrams are sometimes produced by the principal project A&C supplier, the installation contractor, or the plant owner's operations, maintenance, or engineering personnel.

They are arguably less helpful for construction, where other, more efficient presentations are customized to present only the information needed to support initial construction, such as cable and conduit schedules and termination lists. These are not discussed here because they are more appropriate to the electrical discipline than to A&C. Sometimes loop diagrams are produced on an as-needed basis after the plant is running.
Standards and Regulations

Mandatory Standards

Federal, state, and local laws establish mandatory requirements: codes, laws, and regulations. The Food and Drug Administration issues Good Manufacturing Practices. The National Fire Protection Association (NFPA) issues Standard 70, the National Electric Code (NEC). The United States government manages about 50,000 mandatory standards. The Occupational Safety and Health Administration (OSHA) issues many regulations, including government document 29 CFR 1910.119, Process Safety Management of Highly Hazardous Chemicals (PSM). There are three paragraphs in the PSM that list documents that are required if certain hazardous materials are handled. Some of these documents require input from the plant A&C group.
Consensus Standards

Consensus standards include recommended practices, standards, and other documents developed by professional societies and industry organizations. The standards developed by ISA are the ones used most often by A&C personnel. Relevant ISA standards include:

• ISA-5.1-2009, Instrumentation Symbols and Identification – Defines symbols for A&C devices.
• ISA-TR5.1.01/ISA-TR77.40.01-2012, Functional Diagram Usage – Illustrates usage of function block symbols and functions.
• ISA-5.2-1976 (R1992), Binary Logic Diagrams for Process Operations – Provides additional symbols used on logic diagrams.
• ISA-5.3-1983, Graphic Symbols for Distributed Control/Shared Display Instrumentation, Logic, and Computer Systems – Contains symbols useful for DCS definition. The key elements of ISA-5.3 are now included in ISA-5.1, and ISA-5.3 will be withdrawn in the future.
• ISA-5.4, Instrument Loop Diagrams – Includes additional symbols and six typical instrument loop diagrams.
• ISA-5.5, Graphic Symbols for Process Displays – Establishes a set of symbols used in process displays.

Other ISA standards of interest include:

• ISA-20-1981, Specification Forms for Process Measurement and Control Instruments, Primary Elements, and Control Valves – Provides standard instrument specification forms, including space for principal descriptive options to facilitate quoting, purchasing, receiving, accounting, and ordering.
• ISA-TR20.00.01-2001, Specification Forms for Process Measurement and Control Instruments – Part 1: General Considerations – Updates ISA-20.
• ANSI/ISA-84.00.01-2004, Functional Safety: Safety Instrumented Systems for the Process Industry Sector – Defines the requirements for safety instrumented systems.
• ANSI/ISA-88.01-1995 (R2006), Batch Control Part 1: Models and Terminology – Shows the relationships between the models and the terminology.

In addition to ISA, other organizations develop documents to guide professionals. These organizations include the American Petroleum Institute (API), American Society of Mechanical Engineers (ASME), National Electrical Manufacturers Association (NEMA), Process Industry Practice (PIP), International Electrotechnical Commission (IEC), and the Technical Association of the Pulp and Paper Industry (TAPPI).
Operating Instructions
Operating instructions, also known as control narratives, are necessary to operate a complex plant. They range from a few pages describing how to operate one part of a plant to a complete set of books covering the operation of all parts of a facility. They might be included in a functional specification or an operating description.

There is no ISA standard to aid in developing operating instructions. They might be prepared by a group of project, process, electrical, and A&C personnel during plant design; however, some owners prefer that plant operations personnel prepare these documents. The operating instructions guide plant operators and other personnel during normal and abnormal plant operation, including start-up, shutdown, and emergency operation of the plant.

OSHA requires operating procedures for all installations handling hazardous chemicals. Their requirements are defined in government document 29 CFR 1910.119(d) Process Safety Information, (f) Operating Procedures, and (l) Management of Change. For many types of food processing and drug manufacturing, the Food and Drug Administration issues Good Manufacturing Practices.
Other Resources

Standards

ANSI/ISA-5.1-2009. Instrumentation Symbols and Identification. Research Triangle Park, NC: ISA (International Society of Automation).

ANSI/ISA-88.01-1995 (R2006). Batch Control Part 1: Models and Terminology. Research Triangle Park, NC: ISA (International Society of Automation).

ISA. Automation, Systems, and Instrumentation Dictionary. 4th ed. Research Triangle Park, NC: ISA (International Society of Automation), 2003.

ISA-TR5.1.01/ISA-TR77.40.01-2012. Functional Diagram Usage. Research Triangle Park, NC: ISA (International Society of Automation).

ISA-5.2-1976 (R1992). Binary Logic Diagrams for Process Operations. Research Triangle Park, NC: ISA (International Society of Automation).

ISA-5.3-1983. Graphic Symbols for Distributed Control/Shared Display Instrumentation, Logic, and Computer Systems. Research Triangle Park, NC: ISA (International Society of Automation).

ISA-5.4-1991. Instrument Loop Diagrams. Research Triangle Park, NC: ISA (International Society of Automation).

ISA-5.5-1985. Graphic Symbols for Process Displays. Research Triangle Park, NC: ISA (International Society of Automation).

ISA-20-1981. Specification Forms for Process Measurement and Control Instruments, Primary Elements, and Control Valves. Research Triangle Park, NC: ISA (International Society of Automation).

ISA-TR20.00.01-2007. Specification Forms for Process Measurement and Control Instruments – Part 1: General Considerations. Research Triangle Park, NC: ISA (International Society of Automation).

ISA-84.00.01-2004 (IEC 61511 Mod). Functional Safety: Safety Instrumented Systems for the Process Industry Sector. Research Triangle Park, NC: ISA (International Society of Automation).
NFPA 70. National Electric Code (NEC). Quincy, MA: NFPA (National Fire Protection Association).

OSHA 29 CFR 1910.119(d) Process Safety Information, (f) Operating Procedures, and (l) Management of Change. Washington, DC: OSHA (Occupational Safety and Health Administration).
Books

Meier, Frederick A., and Clifford A. Meier. Instrumentation and Control System Documentation. Research Triangle Park, NC: ISA (International Society of Automation), 2011.
Training

ISA FG15E. Developing and Applying Standard Instrumentation and Control Documentation. Online training course. Research Triangle Park, NC: ISA (International Society of Automation).
About the Authors

Frederick A. Meier's career in engineering and engineering management spanned 50 years, and he was an active member of ISA for more than 40 years. He earned an ME from Stevens Institute of Technology and an MBA from Rutgers University, and he has held Professional Engineer licenses in the United States and in Canada. Meier and his son, Clifford Meier, are the authors of Instrumentation and Control System Documentation, published by ISA in 2004. He lives in Chapel Hill, North Carolina.

Clifford A. Meier began his career in 1978 as a mechanical engineer for fossil and nuclear power generation projects. He quickly transitioned to instrumentation and control system design for oil and gas production facilities, cogeneration plants, chemical plants, paper and pulp mills, and microelectronic chip factories. Meier left the consulting engineering world in 2004 to work for a municipal utility in Portland, Oregon. He retired in 2017. Meier holds a Professional Engineer license in control systems and lives in Beaverton, Oregon.
1. The Automation, Systems, and Instrumentation Dictionary, 4th ed. (Research Triangle Park, NC: ISA [International Society of Automation], 2003): 273.
2. Ibid., p. 299.
2
Continuous Control
By Harold Wade
Introduction

Continuous control refers to a form of automatic process control in which the information—from sensing elements and actuating devices—can have any value between minimum and maximum limits. This is in contrast to discrete control, where the information normally is in one of two states, such as on/off, open/closed, and run/stop.
Continuous control is organized into feedback control loops, as shown in Figure 2-1. In addition to a controlled process, each control loop consists of a sensing device that measures the value of a controlled variable, a controller that contains the control logic plus provisions for human interface, and an actuating device that manipulates the rate of addition or removal of mass, energy, or some other property that can affect the controlled variable. The sensor, control and human-machine interface (HMI) station, and actuator are usually connected by some form of signal communication system, as described elsewhere in this book.
Continuous process control is used extensively in industries where the product is in a continuous, usually fluid, stream. Representative industries are petroleum refining, chemical and petrochemical, power generation, and municipal utilities. Continuous control can also be found in processes in which the final product is produced in batches, strips, slabs, or as a web in, for example, the pharmaceutical, pulp and paper, steel, and textile industries. There are also applications for continuous control in the discrete industries—for instance, a temperature controller on an annealing furnace or motion control in robotics.

The central device in a control loop, the controller, may be built as a stand-alone device or may exist as shared components in a digital system, such as a distributed control system (DCS) or programmable logic controller (PLC). In emerging technology, the control logic may be located at either the sensing or the actuating device.
Process Characteristics

In order to understand feedback control loops, one must understand the characteristics of the controlled process. Listed below are characteristics of almost all processes, regardless of the application or industry.

• Industrial processes are nonlinear; that is, they will exhibit different responses at different operating points.
• Industrial processes are subject to random disturbances, due to fluctuations in feedstock, environmental effects, and changes or malfunctions of equipment.
• Most processes contain some amount of dead time; a control action will not produce an immediate feedback of its effect.
• Many processes are interacting; a change in one controller's output may affect other process variables besides the intended one.
• Most process measurements contain some amount of random variation, called noise.
• Most processes are unique; processes using apparently identical equipment may have individual idiosyncrasies.

A typical response to a step change in signal to the actuating device is shown in Figure 2-2.
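Step responses like the one in Figure 2-2 are commonly approximated by a first-order-plus-dead-time model with a process gain, a time constant, and a dead time. The Python sketch below simulates such a response; the model choice and parameter values are illustrative assumptions, not values taken from the figure.

    # Illustrative first-order-plus-dead-time (FOPDT) step response.
    K, tau, theta, dt = 2.0, 5.0, 2.0, 0.1  # gain, time constant, dead time, time step (min)

    steps = int(20.0 / dt)
    u = [1.0] * steps          # 1% step in controller output at t = 0
    delay = int(theta / dt)    # dead time expressed in time steps
    pv = 0.0
    for i in range(steps):
        u_delayed = u[i - delay] if i >= delay else 0.0
        pv += (dt / tau) * (K * u_delayed - pv)  # Euler step of a first-order lag
    print(round(pv, 2))        # about 1.95, approaching K * step = 2.0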
In addition, there are physical and environmental characteristics that must be considered when selecting equipment and installing control systems.

• The process may be toxic, requiring exceptional provisions to prevent release to the environment.
• The process may be highly corrosive, limiting the selection of materials for components that come in contact with the process.
• The process may be highly explosive, requiring special equipment housing or installation technology for electrical apparatus.
Feedback Control

The principle of feedback control is this: if a controlled variable deviates from its desired value (set point), corrective action moves a manipulated variable (the controller output) in a direction that causes the controlled variable to return toward the set point.

Most feedback control loops in industrial processes utilize a proportional-integral-derivative (PID) control algorithm. There are several forms of the PID, and there is no standardization of their names. The names ideal, interactive, and parallel are used here, although some vendors may use other names.
Ideal PID Algorithm

The most common form of PID algorithm is the ideal form (also called the noninteractive form or the ISA form). This is represented in mathematical terms by Equation 2-1 and in block diagram form by Figure 2-3.
m = KC [ e + (1/TI) ∫ e dt + TD (de/dt) ]   (2-1)

Here, m represents the controller output; e represents the error (the difference between the set point and the controlled variable). Both m and e are in percent of span. The symbols KC (controller gain), TI (integral time), and TD (derivative time) represent tuning parameters that must be adjusted for each application.
The terms in the algorithm represent the proportional, integral, and derivative contributions to the output. The proportional mode is responsible for most of the correction. The integral mode assures that, in the long term, there will be no deviation between the set point and the controlled variable. The derivative mode may be used for improved response of the control loop. In practice, the proportional and integral modes are almost always used; the derivative mode is often omitted, simply by setting TD = 0.

There are other forms for the tuning parameters. For instance, controller gain may be expressed as proportional band (PB), which is defined as the amount of measurement change (in percent of measurement span) required to cause a 100% change in the controller output. The conversion between controller gain and proportional band is shown by Equation 2-2:

PB = 100 / KC   (2-2)

The integral mode tuning parameter may be expressed in reciprocal form, called reset rate. Whereas TI is normally expressed in "minutes per repeat," the reset rate is expressed in "repeats per minute." The derivative mode tuning parameter, TD, is always in time units, usually minutes. (Traditionally, the time units for tuning parameters have been minutes. Today, however, some vendors express the time units in seconds.)
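For illustration, here is a minimal discrete-time Python sketch of the ideal algorithm of Equation 2-1. The class name and tuning values are hypothetical, and practical details found in commercial controllers (output limits, anti-reset windup, filtering) are deliberately omitted.

    class IdealPID:
        # Ideal (ISA) form of Equation 2-1:
        # m = KC * (e + (1/TI) * integral(e dt) + TD * de/dt),
        # with signals in percent of span and times in minutes.
        def __init__(self, kc, ti, td, dt):
            self.kc, self.ti, self.td, self.dt = kc, ti, td, dt
            self.integral = 0.0
            self.prev_error = 0.0

        def update(self, setpoint, pv):
            error = setpoint - pv                  # reverse-acting convention
            self.integral += error * self.dt
            derivative = (error - self.prev_error) / self.dt
            self.prev_error = error
            return self.kc * (error + self.integral / self.ti + self.td * derivative)

    pid = IdealPID(kc=2.0, ti=5.0, td=0.0, dt=0.1)  # PI control: TD = 0 drops derivative
    print(pid.update(setpoint=50.0, pv=48.0))       # 4.08 (% of output span)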
Interactive PID Algorithm

The interactive form, depicted by Figure 2-4, was the predominant form for analog controllers and is used by some vendors today. Other vendors provide a choice of the ideal or interactive form. There is essentially no technological advantage to either form; however, the required tuning parameters differ if the derivative mode is used.
Parallel PID Algorithm

The parallel form, shown in Figure 2-5, uses independent gains on each mode. This form has traditionally been used in the power generation industry and in such applications as robotics, flight control, and motion control. Other than power generation, it is rarely found in the continuous process industries. With compatible tuning, the ideal, interactive, and parallel forms of PID produce identical performance; hence, no technological advantage can be claimed for any form. The tuning procedure for the parallel form, however, differs decidedly from that of the other two forms.
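The equivalence between forms can be made explicit. Assuming a parallel controller of the form m = Kp·e + Ki ∫e dt + Kd·(de/dt), ideal-form tuning converts to independent gains as in the small sketch below (a standard correspondence, though any given vendor's implementation may differ):

    # Convert ideal-form tuning (KC, TI, TD) to equivalent parallel-form
    # independent gains, assuming m = Kp*e + Ki*integral(e) + Kd*de/dt.
    def ideal_to_parallel(kc, ti, td):
        return {"Kp": kc, "Ki": kc / ti, "Kd": kc * td}

    print(ideal_to_parallel(kc=2.0, ti=5.0, td=0.5))
    # {'Kp': 2.0, 'Ki': 0.4, 'Kd': 1.0}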
Time Proportioning Control
Time proportioning refers to a form of control in which the PID controller output consists of a series of periodic pulses whose duration is varied in proportion to the normal continuous output. For example, if the fixed cycle base is 10 seconds, a controller output of 30% will produce an on pulse of 3 seconds and an off pulse of 7 seconds. An output of 75% will produce an on pulse of 7.5 seconds and an off pulse of 2.5 seconds. This type of control is usually applied where the cost of an on/off final actuating device is considerably less than the cost of a modulating device. In a typical application, the on pulses apply heating or cooling by turning on a resistance-heating element, a silicon-controlled rectifier (SCR), or a solenoid valve. The mass of the process unit (such as a plastics extruder barrel) acts as a filter to remove the low-frequency harmonics and apply an even amount of heating or cooling to the process.
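The translation from a continuous output to pulse durations is simple arithmetic, as this short sketch shows for the 10-second cycle base used in the example above (the function name is illustrative):

    # Convert a continuous controller output (percent) into on/off pulse
    # durations over a fixed cycle base, as in time proportioning control.
    def time_proportion(output_percent, cycle_base_s=10.0):
        on_time = cycle_base_s * output_percent / 100.0
        return on_time, cycle_base_s - on_time

    print(time_proportion(30.0))  # (3.0, 7.0): 3 s on, 7 s off
    print(time_proportion(75.0))  # (7.5, 2.5): 7.5 s on, 2.5 s off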
Manual-Automatic Switching
It is desirable to provide a means for process operator intervention in a control loop in the event of abnormal circumstances, such as a sensor failure or a major process upset. Figures 2-3, 2-4, and 2-5 show a manual/automatic switch that permits switching between manual and automatic modes. In the manual mode, the operator can set the signal to the controller output. However, when the switch is returned to the automatic position, the automatic controller output must match the operator’s manual setting or else there will be a “bump” in the controller output. (The term bumpless transfer is frequently used.) With older technology, it was the operator’s responsibility to prevent bumping the process. With current technology, bumpless transfer is built into most control systems; some vendors refer to this as initializing the control algorithm.
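One simple way a system can initialize the algorithm for bumpless transfer is to back-calculate the integral term so that the first automatic output reproduces the operator's last manual output. The sketch below does this for the hypothetical IdealPID class introduced earlier; it is one of several possible schemes, not a description of any particular vendor's method.

    def switch_to_auto(pid, setpoint, pv, manual_output):
        # Initialize the PID state so the next automatic output matches
        # the manual output (to within one integration step).
        error = setpoint - pv
        pid.prev_error = error  # avoid a derivative kick on the first scan
        # Choose the integral so that kc * (error + integral/ti) == manual_output:
        pid.integral = (manual_output / pid.kc - error) * pid.ti

    pid = IdealPID(kc=2.0, ti=5.0, td=0.0, dt=0.1)
    switch_to_auto(pid, setpoint=50.0, pv=48.0, manual_output=35.0)
    print(round(pid.update(setpoint=50.0, pv=48.0), 2))  # 35.08: essentially no bump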
Direct- and Reverse-Acting

For safety and environmental reasons, most final actuators, such as valves, will close in the event of a loss of signal or power to the actuator. There are instances, however, when the valve should open in the event of signal or power failure. Once the failure mode of the valve is determined, the action of the controller must be selected. Controllers may be either direct-acting (DA) or reverse-acting (RA). If a controller is direct-acting, an increase in the controlled variable will cause the controller output to increase. If the controller is reverse-acting, an increase in the controlled variable will cause the output to decrease. Because most control valves are fail-closed, the majority of controllers are set to be reverse-acting. The setting—DA or RA—is normally made at the time the control loop is commissioned.

With some DCSs, the DA/RA selection can be made without considering the failure mode of the valve; then a separate selection is made as to whether to reverse the analog output signal. This permits the HMI to display all valve positions in a consistent manner: 0% for closed and 100% for open.
Activation for Proportional and Derivative Modes

Regardless of algorithm form, there are certain configuration options that every vendor offers. One configuration option is the DA/RA setting. Other configuration options pertain to the actuating signal for the proportional and derivative modes. Note that, in any of the forms of the algorithm, if the derivative mode is being used (TD ≠ 0), a set-point change will induce an undesirable spike on the controller output. A configuration option permits the user to make the derivative mode sensitive only to changes in the controlled variable, not to the set point. This choice is called derivative-on-error or derivative-on-measurement.
Even with derivative-on-measurement, on a set-point change the proportional mode will cause a step change in the controller output. This, too, may be undesirable. Therefore, a similar configuration option permits the user to select proportional-on-measurement or proportional-on-error. Figure 2-6 shows both proportional and derivative modes sensitive to measurement changes alone. This leaves only the integral mode acting on error, where it must remain, because it is responsible for assuring the long-term equality of the set point and the controlled variable. In the event of a disturbance, there is no difference between the response of a configuration using either or both derivative-on-measurement and proportional-on-measurement and that of a configuration with all modes on error.
Two-Degree-of-Freedom PID

A combination of Figure 2-3 (ideal PID algorithm) and Figure 2-6 (ideal PID algorithm with proportional and derivative modes on measurement) is shown in Figure 2-7 and described in mathematical terms by Equation 2-3. If the parameters b and c each have
values of 0 or 1, then this would be the equivalent of providing configuration options of proportional-on-measurement or proportional-on-error, and derivative-on-measurement or derivative-on-error. Although the implementation details may differ, some manufacturers have taken this conceptual idea even further by permitting parameters b and c to take on any value equal to or between 0 and 1. For instance, rather than having the derivative mode wholly on error or wholly on measurement, a fraction of the signal can come from error and the complementary fraction can come from measurement. Such a controller is called a two-degree-of-freedom control algorithm. Some manufacturers only provide the parameter modification b on the proportional mode. This configuration is normally called a set-point weighting controller.
$$m = K_C\left[(b\,SP - CV) + \frac{1}{T_I}\int (SP - CV)\,dt + T_D\,\frac{d}{dt}\left(c\,SP - CV\right)\right] \tag{2-3}$$

A problem often encountered with an ideal PID, or one of its variations, is that if it is tuned to give an acceptable response to a set-point change, it may not be sufficiently aggressive in eliminating the effect of a disturbance. On the other hand, if it is tuned to aggressively eliminate a disturbance effect, it may respond too aggressively to a set-point change. By utilizing the freedom provided by the two additional tuning parameters, the two-degree-of-freedom control algorithm can be tuned to produce an acceptable response to both a set-point change and a disturbance. In essence, the controller acting on a disturbance is different from the controller acting on a set-point change. Compare the loop diagrams shown in Figures 2-8a and 2-8b.
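A minimal discrete sketch of Equation 2-3 follows, using rectangular integration and a backward difference; all names are illustrative. Setting b = c = 1 recovers the ideal algorithm, while b = 1, c = 0 gives proportional-on-error with derivative-on-measurement:

```python
def pid_2dof_step(state, sp, pv, kc, ti, td, dt, b=1.0, c=0.0):
    """One execution of a two-degree-of-freedom PID (a discrete sketch of
    Equation 2-3).  b weights the set point in the proportional term and c in
    the derivative term; the integral mode always acts on the true error."""
    state["sum"] += (sp - pv) * dt        # integral of error
    ep = b * sp - pv                      # weighted proportional "error"
    ed = c * sp - pv                      # weighted derivative "error"
    deriv = (ed - state["ed_prev"]) / dt  # backward difference
    state["ed_prev"] = ed
    return kc * (ep + state["sum"] / ti + td * deriv)

state = {"sum": 0.0, "ed_prev": 0.0}
out = pid_2dof_step(state, sp=50.0, pv=48.0, kc=2.0, ti=60.0, td=5.0, dt=1.0)
```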
Discrete Forms of PID

The algorithm forms presented above, using calculus symbols, are applicable to analog controllers that operate continuously. However, control algorithms implemented in a digital system are processed at discrete sample instants (for instance, 1-second intervals), rather than continuously. Therefore, a modification must be made to show how a digital system approximates the continuous forms of the algorithm presented above. Digital processing of the PID algorithm also presents an alternative that was not present in analog systems. At each sample instant, the PID algorithm can calculate either a new position for the controller output or the increment by which the output should change. These forms are called the position and the velocity forms, respectively. Assuming that the controller is in the automatic mode, the following steps (Equation 2-4) are executed at each sample instant for the position algorithm. (The subscript "n" refers to the nth processing instant, "n–1" to the previous processing instant, and so on.)
$$S_n = S_{n-1} + e_n\,\Delta t$$
$$m_n = K_C\left[e_n + \frac{S_n}{T_I} + T_D\,\frac{e_n - e_{n-1}}{\Delta t}\right] \tag{2-4}$$

where $\Delta t$ is the sample interval and $S_n$ is the running sum that approximates the integral.
Save the $S_n$ and $e_n$ values for the subsequent processing instant. The velocity (or incremental) algorithm is similar. It computes the amount by which the controller output should be changed at the nth sample instant.
Use Equation 2-5 to compute the change in controller output:

$$\Delta m_n = K_C\left[(e_n - e_{n-1}) + \frac{\Delta t}{T_I}\,e_n + T_D\,\frac{e_n - 2e_{n-1} + e_{n-2}}{\Delta t}\right] \tag{2-5}$$

Add the incremental output to the previous value of the controller output to create the new value of output (Equation 2-6):

$$m_n = m_{n-1} + \Delta m_n \tag{2-6}$$
Save the $m_n$, $e_{n-1}$, and $e_{n-2}$ values for the subsequent processing instant. From a user point of view, there is no advantage of one form over the other. Vendors, however, may prefer a particular form due to the ease of incorporating features of their system, such as tuning and bumpless transfer. The configuration options (DA/RA, and derivative and proportional on measurement or on error) are also applicable to the discrete forms of PID. In fact, there are more user configuration options offered with digital systems than were available with analog controllers.
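As an illustration of the two discrete forms, here is a minimal Python sketch of the position and velocity algorithms of Equations 2-4 through 2-6; the names and the dictionary-based state are assumptions made for compactness:

```python
def pid_position(state, e_n, kc, ti, td, dt):
    """Position form (sketch of Equation 2-4): returns the full output m_n."""
    state["S"] += e_n * dt                      # running integral sum S_n
    m_n = kc * (e_n + state["S"] / ti + td * (e_n - state["e1"]) / dt)
    state["e1"] = e_n                           # save e_n for the next instant
    return m_n

def pid_velocity(state, e_n, kc, ti, td, dt):
    """Velocity form (sketch of Equations 2-5 and 2-6): computes the increment
    and adds it to the previous output."""
    dm = kc * ((e_n - state["e1"]) + (dt / ti) * e_n
               + (td / dt) * (e_n - 2.0 * state["e1"] + state["e2"]))
    state["e2"], state["e1"] = state["e1"], e_n  # save e_(n-1) and e_(n-2)
    state["m"] += dm                             # m_n = m_(n-1) + delta_m_n
    return state["m"]

pos = {"S": 0.0, "e1": 0.0}
vel = {"m": 0.0, "e1": 0.0, "e2": 0.0}
```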
Controller Tuning

In the previous section, it was mentioned that the parameters KC, TI (or their equivalents, proportional band and reset rate), and TD must be adjusted so that the response of the controller matches the requirements of a particular process. This is called tuning the controller. There are no hard and fast rules for the performance requirements of tuning. These are largely established by the particular process application and by the desires of the operator or controller tuner.
Acceptable Criteria for Loop Performance

One widely used response criterion is that the loop should exhibit a quarter-amplitude decay following a set-point change. See Figure 2-9. For many applications, however, this is too oscillatory. A smooth response to a set-point change with minimum overshoot is more desirable. A response to a set-point change that provides minimum overshoot is considered less aggressive than quarter-amplitude decay. If an ideal PID controller (see Figure 2-3) is being used, the penalty for less aggressive tuning is that a disturbance will cause a greater deviation from the set point or a longer time to return to the set point. In this case, the controller tuner must decide the acceptable criterion for loop performance before actual tuning.
Recent developments, applicable to digital controllers, have provided the two-degree-of-freedom PID controller, conceptually shown in Figure 2-7. With this configuration (which is not yet offered by all manufacturers), the signal path from the set point to the controller output is different from the signal path from the measurement to the controller output. This permits the controller to be tuned for acceptable response to both disturbances and set-point changes. Additional information regarding the two-degree-of-freedom controller can be found in Aström and Hägglund (2006). Controller tuning techniques may be divided into two broad categories: those that require testing of the process, with the controller in either automatic or manual, and those that are less formal, often called trial-and-error tuning.
Tuning from Open-Loop Tests

The open-loop process testing method uses only the manually set output of the controller. A typical response to a step change in output was shown in Figure 2-2. It is often possible to approximate the response with a simplified process model containing only three parameters: the process gain (Kp), the process dead time (Td), and the process time constant (Tp). Figure 2-10 shows the response of a first-order-plus-dead-time (FOPDT) model that approximates the true process response, along with the parameter values Kp, Td, and Tp. There are several published correlations for obtaining controller tuning parameters from these process parameters.
The best known is based on the Ziegler-Nichols Reaction Curve Method. Correlations for P-only, PI, and PID controllers are given here in Table 2-1.
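As a sketch, the classic published Ziegler-Nichols reaction-curve correlations (the kind of values tabulated in Table 2-1; the exact entries of the book's table are not reproduced here, so treat these as the commonly cited values) can be coded directly:

```python
def zn_open_loop(kp, td, tp, mode="PID"):
    """Classic published Ziegler-Nichols reaction-curve correlations, computed
    from an FOPDT fit: process gain kp, dead time td, and time constant tp."""
    if mode == "P":
        return {"Kc": tp / (kp * td)}
    if mode == "PI":
        return {"Kc": 0.9 * tp / (kp * td), "Ti": 3.33 * td}
    if mode == "PID":
        return {"Kc": 1.2 * tp / (kp * td), "Ti": 2.0 * td, "Td": 0.5 * td}
    raise ValueError("mode must be 'P', 'PI', or 'PID'")

print(zn_open_loop(kp=2.0, td=10.0, tp=60.0))  # {'Kc': 3.6, 'Ti': 20.0, 'Td': 5.0}
```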
Another tuning technique that uses the same open-loop process test data is called lambda tuning. The objective of this technique is for the set-point response to be an exponential rise with a specified time constant, λ. This technique is applicable whenever a very smooth set-point response is desired, at the expense of a degraded response to disturbances. There are other elaborations of the open-loop test method, including multiple valve movements in both directions and numerical regression methods for obtaining the process parameters. Despite its simplicity, the open-loop method suffers from the following problems:

• It may not be possible to interrupt normal process operations to make the test.
• If there is noise on the measurement, it may not be possible to get good data unless the controlled-variable change is at least five times the amplitude of the noise. For many processes, that may be too much disturbance.
• The technique is very sensitive to parameter estimation error, particularly if the ratio of Td/Tp is small.
• The method does not take into consideration the effects of valve stiction.
• The actual process response may be difficult to approximate with an FOPDT model.
• A disturbance to the process during the test will severely deteriorate the quality of the data.
• For very slow processes, the complete results of the test may require one or more working shifts.
• The data is valid only at one operating point. If the process is nonlinear, additional tests at other operating points may be required.

Despite these problems, under relatively ideal conditions (minimal process noise, minimal disturbances during the test, minimal valve stiction, and so on) the method provides acceptable results.
Tuning from Closed-Loop Tests

Another technique is based on testing the process in the closed loop. (Ziegler and Nichols referred to this as the ultimate sensitivity method.) To perform this test, the controller is placed in the automatic mode, integral and derivative actions are removed (or a proportional-only controller is used), a low controller gain is set, and then the process is disturbed, either by a set-point change or a forced disturbance, and the oscillating characteristics are observed. The objective is to repeat this procedure with increased gain until a sustained oscillation (neither increasing nor decreasing) is achieved. At that point, two pieces of data are obtained: the value of controller gain that produced the sustained oscillation (called the ultimate gain, KCU) and the period of the oscillation, PU. With this data, one can use Table 2-2 to calculate tuning parameters for a P-only, PI, or PID controller.
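A companion sketch for the ultimate sensitivity correlations, again using the classic published Ziegler-Nichols values of the kind tabulated in Table 2-2, might look like this:

```python
def zn_closed_loop(kcu, pu, mode="PID"):
    """Classic published Ziegler-Nichols ultimate-sensitivity correlations,
    computed from the ultimate gain KCU and the ultimate period PU."""
    if mode == "P":
        return {"Kc": 0.5 * kcu}
    if mode == "PI":
        return {"Kc": 0.45 * kcu, "Ti": pu / 1.2}
    if mode == "PID":
        return {"Kc": 0.6 * kcu, "Ti": pu / 2.0, "Td": pu / 8.0}
    raise ValueError("mode must be 'P', 'PI', or 'PID'")
```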
There are also problems with the closed-loop method:

• It may not be possible to subject the process to a sustained oscillation.
• Even if that were possible, it is difficult to predict or control the magnitude of the oscillation.
• Multiple tests may be required, resulting in long periods of interruption to normal operation.

Despite these problems, there are certain advantages to the closed-loop method:

• Minimal uncertainty in the data. (Frequency, or its inverse, period, can be measured quite accurately.)
• The method inherently includes the effect of a sticking valve.
• Moderate disturbances during the testing can be tolerated.
• No a priori assumption as to the form of the process model is required.
A modification of the closed-loop method, called the relay method, attempts to exploit the advantages while circumventing most of the problems. The relay method establishes maximum and minimum limits for the controller output. For instance, if the controller output is normally 55%, the maximum output can be set at 60% and the minimum at 50%. While this does not establish hard limits for the excursion of the controlled variable, persons familiar with the process will either feel comfortable with these settings or reduce the difference between the limits. The process is then tested by a set-point change or a forced disturbance, using an on/off controller. If an on/off controller is not available, a P-only controller with a maximum value of controller gain can be substituted. For a reverse-acting controller, the controlled variable will oscillate above and below the set point, with the controller output at either the maximum or minimum value, as shown in Figure 2-11.
If the period of time when the controller output is at the maximum setting exceeds the time at the minimum, then both the maximum and minimum limits should be shifted upward by a small but identical amount. After one or more adjustments, the output square wave should be approximately symmetrical. At that condition, the period of oscillation, PU, is the same as would have been obtained by the previously described closed-loop test. Furthermore, the ultimate gain can be determined from the ratio of the controller output and controlled-variable amplitudes, as shown in Equation 2-7:

$$K_{CU} = \frac{4d}{\pi a} \tag{2-7}$$

where d is the amplitude of the controller-output square wave and a is the amplitude of the controlled-variable oscillation. Thus, the data required to enter Table 2-2 and calculate tuning parameters have been obtained in a much more controlled manner than with the unbounded closed-loop test. While the relay method is a viable technique for manual testing, it can also be easily automated. For this reason, it is the basis for some vendors' self-tuning techniques.
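A minimal sketch of Equation 2-7, assuming d is half the controller-output swing and a is the amplitude of the resulting CV oscillation:

```python
import math

def relay_ultimate_gain(d, a):
    """Sketch of Equation 2-7: ultimate gain from a relay test, where d is the
    amplitude of the controller-output square wave (e.g., 5% for 50%-60%
    limits) and a is the amplitude of the CV oscillation."""
    return 4.0 * d / (math.pi * a)

kcu = relay_ultimate_gain(d=5.0, a=0.8)  # then use kcu and the measured PU in Table 2-2
```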
Trial-and-Error Tuning
Despite these tools for formal process testing to determine tuning parameters, many loops are tuned by trial and error. That is, for an unsatisfactory loop, closed-loop behavior is observed, and an estimate (often merely a guess) is made as to which parameter(s) should be changed and by how much. Good results often depend on the person's experience. Various methods of visual pattern recognition have been described but, in general, such tuning techniques remain more of an art than a science. In the book Basic and Advanced Regulatory Control: System Design and Application (Wade 2017), a technique called improving as-found tuning, or intelligent trial-and-error tuning, attempts to place controller tuning on a more methodical basis. The premise of this technique, which is applicable only to PI controllers, is that a well-tuned controller exhibiting a slight, rapidly decaying oscillation will have a predictable relation between the integral time and the period of oscillation. The relation in Equation 2-8 has been found to provide acceptable results:

$$1.5 \le \frac{P}{T_I} \le 2.0 \tag{2-8}$$

Further insight into this technique can be gained by noting that the phase shift through a PI controller, from error to controller output, depends strongly on the ratio P/TI and only slightly on the decay ratio. For a control loop with a quarter-amplitude decay, the limits above are equivalent to specifying a phase shift of approximately 15°. If a control system engineer or instrumentation technician is called on to correct the errant behavior of a control loop, then (assuming that it is a tuning problem and not
some external problem) the as-found behavior is caused by the as-found tuning parameter settings. The behavior can be characterized by the decay ratio (DR) and the period (P) of oscillation. The as-found data set (KC, TI, DR, and P) represents a quantum of knowledge about the process. If either an open-loop or closed-loop test were made in an attempt to determine tuning parameters, that existing knowledge about the process would be sacrificed. From Equation 2-8, the upper and lower limits for an acceptable period can be established using Equation 2-9:

$$1.5\,T_I \le P \le 2.0\,T_I \tag{2-9}$$
If the as-found period P meets these criteria, the implication is that the integral time is acceptable. Hence, adjustments should be made to the controller gain (KC) until the desired decay ratio is obtained. If the period is outside these limits, then the present period can be used in the inverted relation to determine a range of acceptable new values for TI (Equation 2-10):

$$0.5\,P \le T_I \le 0.67\,P \tag{2-10}$$
Basic and Advanced Regulatory Control: System Design and Application (Wade 2017) and “Trial and Error: An Organized Procedure” (Wade 2005) contain more information, including a flow chart, describing this technique.
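A small sketch of the decision logic implied by Equations 2-8 through 2-10 follows; the function name and message strings are invented for illustration:

```python
def as_found_advice(ti, period):
    """Sketch of the intelligent trial-and-error test of Equations 2-8 to 2-10:
    if the as-found period P lies within 1.5*TI to 2.0*TI, keep TI and adjust
    KC; otherwise report the TI range implied by the present period."""
    if 1.5 * ti <= period <= 2.0 * ti:
        return "TI acceptable; adjust KC until the desired decay ratio is obtained."
    lo, hi = 0.5 * period, 0.67 * period
    return "Retune integral time: choose TI between %.3g and %.3g." % (lo, hi)

print(as_found_advice(ti=2.0, period=5.0))  # period too long: suggests TI of 2.5-3.35
```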
Self-Tuning

Although self-tuning, auto-tuning, and adaptive tuning have slightly different connotations, they will be discussed collectively here. There are two different circumstances where some form of self-tuning would be desirable:

1. If a process is highly nonlinear and also experiences a wide range of operating points, then a technique that automatically adjusts the tuning parameters for different operating conditions would be highly beneficial.
2. If a new process unit with many control loops were to be commissioned, it would be beneficial if the controllers could determine their own best tuning parameters.

There are different technologies that address these situations.
For initial tuning, there are commercial systems that, in essence, automate the open-loop test procedure. On command, the controller will revert to the manual mode, test the process, characterize the response by a simple process model, and then determine appropriate tuning parameters. Most commercial systems that follow this procedure display the computed parameters and await confirmation before entering the parameters into the controller. An automation of the relay tuning method described previously falls into this category. The simplest technique addressing the nonlinearity problem is called scheduled tuning. If the nonlinearity of a process can be related to a key parameter such as process throughput, then a measure of that parameter can be used as an index to a lookup table (schedule) for appropriate tuning parameters. The key parameter may be divided into regions, with suitable tuning parameters listed for each region. Note that this technique depends on the correct tabulation of tuning parameters for each region. There is nothing in the technique that evaluates the loop performance and automatically adjusts the parameters based on the evaluation.
There are also systems that attempt to recognize features of the response to normal disturbances to the loop. From these features, heuristic rules are used to calculate new tuning parameters. These may be displayed for confirmation, or they may be entered into the algorithm “on the fly.” Used in this manner, the system tries to adapt the controller to the random environment of disturbances and set-point changes as they occur. There are also third-party packages, typically running in a notebook computer, that access data from the process, such as by transferring data from the DCS data highway. The data is then analyzed and advisory messages are presented that suggest tuning parameters and provide an indication of the “health” of control loop components, especially the valve.
Advanced Regulatory Control

If the process disturbances are few and not severe, feedback controllers will maintain the average value of the controlled variable at the set point. But in the presence of frequent or severe disturbances, feedback controllers permit significant variability in the control loop. This is because a feedback controller must experience a deviation from the set point in order to change its output. This variability may result in an economic loss. For instance, a process may operate at a safe margin away from a target value to prevent encroaching on the limit and producing off-spec product. Reducing the margin of safety will produce some economic benefit, such as reduced energy consumption, reduced raw
material usage, or increased production. Reducing the variability cannot be done by feedback controller tuning alone. It may be accomplished using more advanced control loops such as ratio, cascade, feedforward, decoupling, and selector control.
Ratio Control
Often, when two or more ingredients are blended or mixed, the flow rate of one of the ingredients paces the production rate. The flow rates of the other ingredients are controlled to maintain a specified ratio to the pacing ingredient. Figure 2-12 shows a ratio control loop. Ratio control systems are found in batch processing, fuel oil blending, combustion processes where the air flow may be ratioed to the fuel flow, and many other applications. The pacing stream is often called the wild flow; it may or may not be provided with an independent flow rate controller, because only a measurement of the wild flow stream is utilized in ratio control.
The specified ratio may be manually set, automatically set from a batch recipe, or adjusted by the output of a feedback controller. An example of the latter is a process heater that uses a stack oxygen controller to adjust the air-to-fuel ratio. When the required ratio is automatically set by a higher-level feedback controller, the ratio control strategy is merely one form of feedforward control.
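The computation at the heart of ratio control is a single multiplication, sketched below; the flow values and units are placeholders:

```python
def ratio_setpoint(wild_flow, ratio):
    """Ratio control sketch: the controlled stream's flow set point is the
    measured wild flow times the required ratio.  The ratio may be fixed, come
    from a batch recipe, or be trimmed by a feedback controller such as a
    stack-oxygen controller adjusting an air-to-fuel ratio."""
    return ratio * wild_flow

fuel_flow = 120.0                         # measured wild (pacing) flow, assumed units
air_sp = ratio_setpoint(fuel_flow, 10.5)  # set point sent to the air flow controller
```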
Cascade Control

Cascade control refers to control schemes that have an inner control loop nested within an outer loop. The feedback controller in the outer loop is called the primary controller. Its output sets the set point for the inner loop controller, called the secondary. The
secondary control loop must be significantly faster than the primary loop. Figure 2-13 depicts an example of cascade control applied to a heat exchanger. In this example, a process fluid is heated with a hot oil stream. A temperature controller on the heat exchanger output sets the set point of the hot oil flow controller.
If the temperature controller directly manipulated the valve, there would still be a valid feedback control loop. Any disturbance to the loop, such as a change in the process stream flow rate or a change in the hot oil supply pressure, would require a new position of the control valve. Therefore, a deviation of temperature from the set point would be required to move the valve. With the secondary loop installed as shown in Figure 2-13, a change in the hot oil supply pressure will result in a change in the hot oil flow. This will be rapidly detected by the flow controller, which will then make a compensating adjustment to the valve. The momentary variation in the hot oil flow will cause minimal, if any, disturbance to the temperature control loop. In the general situation, all disturbances within the secondary loop—a sticking valve, adverse valve characteristics, or (in the example) variations in supply pressure—are confined to the secondary loop and have minimal effect on the primary controlled variable. A disturbance that directly affects the primary loop, such as a change in the process flow rate in the example, will require a deviation at the primary controller for its correction regardless of the presence or absence of a secondary controller. When examining a process control system for possible improvements, consider whether intermediate control loops can be closed to encompass certain of the disturbances. If so, the effect of these disturbances will be removed from the primary controller.
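A compact sketch of the cascade structure described above follows. The PI helper, gains, and execution intervals are all assumptions, and a real system would execute the inner loop several times faster than the outer one:

```python
def make_pi(kc, ti, dt):
    """Returns a minimal PI update function (illustrative only)."""
    state = {"integral": 0.0}
    def step(sp, pv):
        e = sp - pv
        state["integral"] += kc * e * dt / ti
        return kc * e + state["integral"]
    return step

# The primary (temperature) output becomes the secondary (hot oil flow) set
# point.  All numeric tuning values here are placeholders.
primary = make_pi(kc=2.0, ti=120.0, dt=5.0)    # slow outer loop
secondary = make_pi(kc=0.5, ti=10.0, dt=1.0)   # fast inner loop

def cascade_step(temp_sp, temp_pv, oil_flow_pv):
    flow_sp = primary(temp_sp, temp_pv)        # primary output = secondary SP
    return secondary(flow_sp, oil_flow_pv)     # secondary drives the valve
```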
Feedforward Control

Feedforward control is defined as the manipulation of the final control element (the valve position or the set point of a lower-level flow controller) using a measure of a disturbance rather than the output of a feedback controller. In essence, feedforward control is open-loop control. Feedforward control requires a process model in order to know how much correction should be made, and when, for a given disturbance. If the process model were perfect, feedforward control alone could be used. In actuality, the process model is never perfect; therefore, feedforward and feedback control are usually combined.
The example in the previous section employed cascade control to overcome the effect of disturbances caused by variations in the hot oil supply pressure. It was noted, however, that variations in the process flow rate would still cause a disturbance to the primary controller. If the process and hot oil flow rates varied in a proportionate amount, there would be only minimal effect on the process outlet temperature. Thus, a ratio between the hot oil and process flow rates should be maintained. While this would eliminate most of the variability at the temperature controller, there may be other circumstances, such as heat exchanger tube scaling, that would necessitate a long-term shift in the required ratio. This can be implemented by letting the feedback temperature controller set the required ratio as shown in Figure 2-14.
Ratio control, noted earlier as an example of feedforward-feedback control, corrects for the steady-state effects on the controlled variable. Suppose that there is also a difference in dynamic effects of the hot oil and process streams on the outlet temperature. In order to synchronize the effects at the outlet temperature, dynamic compensation may be required in the feedforward controller.
To take a more general view of feedforward, consider the generic process shown within the dotted lines in Figure 2-15. This process is subject to two influences (inputs)—a disturbance and a control effort. The control effort may be the signal to a valve or to a lower level flow controller. In this latter case, the flow controller can be considered as a part of the process. Transfer functions A(s) and B(s) are mathematical abstractions of the dynamic effect of each of the inputs on the controlled variable. A feedforward controller C(s), a feedback controller, and the junction combining feedback and feedforward are also shown in Figure 2-15.
There are two paths of influence from the disturbance to the controlled variable. If the disturbance is to have no effect on the controlled variable (that is the objective of feedforward control), these two paths must be mirror images that cancel each other out. Thus, the feedforward controller must be the ratio of the two process dynamic effects, with an appropriate sign adjustment. The correct sign will be obvious in any practical situation. That is:

$$C(s) = -\frac{A(s)}{B(s)} \tag{2-11}$$

If both A(s) and B(s) have been approximated as FOPDT models (see "Tuning from Open-Loop Tests" in this chapter), then C(s) is comprised of, at most, a steady-state gain, a lead-lag, and a dead-time function. These functions are contained in every vendor's function block library. The dynamic compensation can often be simpler than this. For instance, if the dead times through A(s) and B(s) are identical, then no dead-time term is required in the dynamic compensation.

Now consider combining feedback and feedforward control. Figure 2-15 shows a junction for combining these two forms of control but does not indicate how they are
combined. In general, feedback and feedforward can be combined by adding or by multiplying the signals. A multiplicative combination is essentially the same as ratio control. In situations where a ratio must be maintained between the disturbance and the control effort, a multiplicative combination of feedback and feedforward will provide a relatively constant process gain for the feedback controller. If the feedback and feedforward were combined additively in such situations, variations in the process gain seen by the feedback controller would require frequent retuning. In other situations, it is better to combine feedback and feedforward additively, a control application often called feedback trim. Regardless of the method of combining feedback and feedforward, the dynamic compensation terms should be only in the feedforward path, not the feedback path. It would be erroneous for the dynamic compensation terms to be placed after the combining junction in Figure 2-15. Feedforward control is one of the most powerful control techniques for minimizing variability in a control loop. It is often overlooked due to lack of familiarity with the technique.
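As an illustration of the building blocks named above, here is a hedged Python sketch of a feedforward compensator made of a static gain, a backward-Euler lead-lag, and a transport-delay buffer; all class and parameter names are invented for the example:

```python
from collections import deque

class LeadLag:
    """Discrete lead-lag, y/u = (T_lead*s + 1)/(T_lag*s + 1), discretized with
    backward Euler; a common feedforward building block (illustrative)."""
    def __init__(self, t_lead, t_lag, dt):
        self.a = dt / (t_lag + dt)
        self.t_lead, self.dt = t_lead, dt
        self.y = 0.0
        self.u_prev = 0.0

    def step(self, u):
        du = (u - self.u_prev) / self.dt
        self.u_prev = u
        self.y += self.a * ((u + self.t_lead * du) - self.y)
        return self.y

class FeedforwardComp:
    """Sketch of C(s) in Figure 2-15: static gain, lead-lag, and dead time."""
    def __init__(self, gain, t_lead, t_lag, dead_time, dt):
        self.gain = gain
        self.ll = LeadLag(t_lead, t_lag, dt)
        n = max(1, round(dead_time / dt))
        self.buf = deque([0.0] * n, maxlen=n)  # transport-delay buffer

    def step(self, disturbance):
        out = self.buf[0]                      # value from dead_time ago
        self.buf.append(self.gain * self.ll.step(disturbance))
        return out
```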
Decoupling Control
Frequently in industrial processes, a manipulated variable—a signal to a valve or to a lower-level flow controller—will affect more than one controlled variable. If each controlled variable is paired with a particular manipulated variable through a feedback controller, interaction between the control loops will lead to undesirable variability. One way of coping with the problem is to pair the controlled and manipulated variables to reduce the interaction between the control loops. A technique for pairing the variables, called relative gain analysis, is described in most texts on process control, as well as in the book Process Control Systems: Application Design and Tuning by F. G. Shinskey and the ISA Transactions article “Inverted Decoupling, A Neglected Technique” by Harold Wade. If, after applying this technique, the residual interaction is too great, the control loops should be modified for the purpose of decoupling. With decoupled control loops, each feedback controller output affects only one controlled variable. Figure 2-16 shows a generic process with two controlled inputs—a signal to valves or set points to lower-level flow controllers—and two controlled variables. The functions P11, P12, P21, and P22 represent dynamic influences of inputs on the controlled variables. With no decoupling, there will be interaction between the control loops. However, decoupling elements can be installed so that the output of PID#1 has no effect on CV#2, and the output of PID#2 has no effect on CV#1.
Using an approach similar to feedforward control, note that there are two paths of influence from the output of PID#1 to CV#2. One path is through the process element P21(s). The other is through the decoupling element D21(s) and the process element P22(s). For the output of PID#1 to have no effect on CV#2, these paths must be mirror images that cancel each other out. Therefore, the decoupling element must be as shown in Equation 2-12:

$$D_{21}(s) = -\frac{P_{21}(s)}{P_{22}(s)} \tag{2-12}$$
In a practical application, the appropriate sign will be obvious. In a similar fashion, the other decoupling element is given in Equation 2-13:

$$D_{12}(s) = -\frac{P_{12}(s)}{P_{11}(s)} \tag{2-13}$$

If the process elements are approximated with FOPDT models as described in the "Tuning from Open-Loop Tests" section of this chapter, the decoupling elements are comprised of, at most, gain, lead-lag, and dead-time functions, all of which are available in most vendors' function block libraries. The decoupling technique described here can be called forward decoupling. A disadvantage of this technique is that if one of the manual-auto switches is in manual, the apparent process seen by the other control algorithm is not the same as when both controllers are in auto. Hence, to get acceptable response from the controller still in auto, its tuning would have to be changed.
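For the two-loop case, the combining arithmetic can be sketched as below. For brevity the decoupling elements are shown as static gains only; per Equations 2-12 and 2-13 they would in practice also carry lead-lag and dead-time dynamics, and the numeric gains here are placeholders, not values from the text:

```python
D12_GAIN = -0.3   # assumed steady-state value of -P12/P11
D21_GAIN = -0.6   # assumed steady-state value of -P21/P22

def decoupled_outputs(pid1_out, pid2_out):
    """Each process input is its own PID output plus a correction cancelling
    the other loop's influence, so PID#1 no longer moves CV#2 and vice versa."""
    u1 = pid1_out + D12_GAIN * pid2_out
    u2 = pid2_out + D21_GAIN * pid1_out
    return u1, u2
```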
An alternative technique to forward decoupling is inverted decoupling, depicted in Figure 2-17. With this technique, the output of each controller’s manual-automatic switch is fed backward through the decoupling element and combined with the output of the alternate control algorithm PID. The forms of the decoupling elements are identical to that used in forward decoupling.
The very significant advantage of inverted decoupling is that the apparent process seen by each controller algorithm does not change with the position of the other control algorithm's manual/automatic switch. A caution about using inverted decoupling, however, is that an inner loop is formed by the presence of the decoupling elements; this inner loop may or may not be stable. Because this loop is comprised of elements with known parameters, it can be precisely analyzed and its stability verified before installation. Chapter 13 of Basic and Advanced Regulatory Control: System Design and Application (Wade 2017) provides a rigorous procedure for verifying stability, as well as for investigating the realizability and robustness of the decoupling technique. An alternative to complete decoupling, either forward or inverted, is partial decoupling. If one variable is of greater priority than the other, partial decoupling should be considered. Suppose that CV#1 in Figure 2-16 is a high-valued product and CV#2 is a low-valued product. Variability in CV#1 should be minimized, whereas variability in CV#2 can be tolerated. In this case, the upper decoupling element in Figure 2-16 can be implemented and the lower decoupling element eliminated. Some form of the decoupling described above can be utilized if there are two, or possibly three, interacting loops. If there are more loops, using a form of advanced process control is recommended.
Selector (Override) Control

Selector control, also called override control, differs from the other techniques because it does not have as its objective the reduction of variability in a control loop. It does have an economic consequence, however, because the most economical operating point for many processes is near the point of encroachment on a process, equipment, or safety limit. Unless a control system is present that prevents such encroachment, the tendency will be to operate well away from the limit, at a less-than-optimum operating point. Selector control permits operating closer to the limit.
As an example, Figure 2-18 illustrates a process heater. In normal operation, an outlet temperature controller controls the firing rate of the heater. During this time, a critical tube temperature is below its limit. Should, however, the tube temperature encroach on the limit, the tube temperature controller will override the normal outlet temperature controller and reduce the firing rate of the heater. The low-signal selector in the controller outputs provides for the selection of the controller that is demanding the lower firing rate.
If ordinary PI or PID controllers are used for this application, one or the other of the controlled variables will be at its set point, with the other variable less than its set point. The integral action of the nonselected controller will cause it to wind up; that is, its output will climb to 100%. In normal operation, this will be the tube temperature controller. Should the tube temperature rise above its set point, its output must unwind from 100% to a value that is less than the other controller's output before there is any effect on heater firing. Depending on the controller tuning, the tube temperature may remain above its limit for a considerable amount of time.
When the tube temperature controller overrides the normal outlet temperature controller and reduces heater firing, there will be a drop in heater outlet temperature. This will cause the outlet temperature controller to wind up. Once the tube temperature is reduced, returning to normal outlet temperature control is as awkward as was the switch to tube temperature control. These problems arise because ordinary PID controllers were used in the application. Most vendors have PID algorithms with alternative functions to circumvent these problems. Two techniques will be briefly described.
Some vendors formulate their PID algorithm with external reset. The integral action is achieved by feeding the output of the controller back through a positive feedback loop that utilizes a unity-gain first-order lag. With the controller output connected to the external feedback port, the response of a controller with this formulation is identical to that of an ordinary PID controller. Different behavior occurs when the external reset feedback is taken from the output of the selector, as shown in Figure 2-18. The nonselected controller will not wind up. Instead, its output will be equal to the selected controller's output plus a value representing its own gain times error. As the nonselected controlled variable (for instance, tube temperature) approaches its limit, the controller outputs become more nearly equal, with the nonselected controller's output remaining higher. When the nonselected controller's process variable reaches the limit, the controller outputs will be equal. Should the nonselected controller's process variable continue to rise, its output will become the lower of the two; hence it will be selected for control. Because there is no requirement for the controller to unwind, the switchover will be immediate.

Other systems do not use external feedback. The nonselected controller is identified from the selector switch. As long as it remains the nonselected controller, it is continually initialized so that its output equals the other controller's output plus the value of its own gain times error. This behavior is essentially the same as with external feedback.

There are many other examples of selector control in industrial processes. On a pipeline, for instance, a variable-speed compressor may be operated at the lower of the speeds demanded by the suction and discharge pressure controllers. In distillation control, reboiler heat may be set by the lower of the demands of a composition controller and a controller of differential pressure across one section of the tower, which is indicative of tower flooding.
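A hedged sketch of the external reset mechanism follows; the class name and numbers are invented, but the structure (integral action via positive feedback of the selected output through a unity-gain first-order lag) mirrors the description above:

```python
class ExternalResetPI:
    """Sketch of a PI algorithm with external reset: integral action comes from
    positive feedback of a reset signal through a unity-gain first-order lag.
    Feeding back the *selected* output keeps the nonselected controller from
    winding up (its output floats at selected output + gain * error)."""
    def __init__(self, kc, ti, dt):
        self.kc, self.ti, self.dt = kc, ti, dt
        self.lag = 0.0                            # first-order lag state

    def step(self, sp, pv, reset_feedback):
        # Unity-gain first-order lag (time constant TI) on the reset signal.
        self.lag += (self.dt / (self.ti + self.dt)) * (reset_feedback - self.lag)
        return self.lag + self.kc * (sp - pv)

outlet = ExternalResetPI(kc=1.5, ti=60.0, dt=1.0)   # normal outlet temperature
tube = ExternalResetPI(kc=2.0, ti=30.0, dt=1.0)     # limiting tube temperature
selected = 50.0                                     # last selected firing demand

out1 = outlet.step(sp=350.0, pv=348.0, reset_feedback=selected)
out2 = tube.step(sp=600.0, pv=590.0, reset_feedback=selected)
selected = min(out1, out2)    # low-signal selector picks the lower firing rate
```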
Further Information

Aström, K. J., and T. Hägglund. Advanced PID Control. Research Triangle Park, NC: ISA (International Society of Automation), 2006.

Shinskey, F. G. Process Control Systems: Application Design and Tuning. 4th ed. New York: McGraw-Hill, 1996.

The Automation, Systems, and Instrumentation Dictionary. 4th ed. Research Triangle Park, NC: ISA (International Society of Automation), 2003.

Wade, H. L. Basic and Advanced Regulatory Control: System Design and Application. Research Triangle Park, NC: ISA (International Society of Automation), 2017.

———. "Trial and Error Tuning: An Organized Procedure." InTech (May 2005).

———. "Inverted Decoupling, A Neglected Technique." ISA Transactions 36, no. 1 (1997).
About the Author
Harold Wade, PhD, is president of Wade Associates, Inc., a Houston-based consulting firm specializing in control systems engineering, instrumentation, and process control training. He has more than 40 years of instrumentation and control industry experience with Honeywell, Foxboro, and Biles and Associates. A senior member of ISA and a licensed Professional Engineer, he is the author of Basic and Advanced Regulatory Control: System Design and Application, published by ISA. He started teaching for ISA in 1987 and was the 2008 recipient of the Donald P. Eckman Education Award. He was inducted into Control magazine’s Automation Hall of Fame in 2002.
3
Control of Batch Processes

By P. Hunter Vegas, PE
What Is a Batch Process?
A batch process is generally considered one that acts on a discrete amount of material in a sequential fashion. Probably the easiest way to describe a batch process is to compare and contrast it with a continuous process, which is more common in industry today. The examples discussed below are specifically geared to continuous and batch versions of chemical processes, but the same concepts apply to a diverse range of industries. Batch manufacturing techniques can be found in wine/beer making, food and beverage production, mining, oil and gas processing, and so on. A continuous chemical process usually introduces a constant stream of raw materials into the process, moving the material through a series of vessels to perform the necessary chemical steps to make the product. The material might pass through a fluidized bed reactor to begin the chemical reaction, pass through a water quench vessel to cool the material and remove some of the unwanted byproducts, and finally be pushed through a series of distillation columns to refine the final product before pumping it to storage. In contrast, a batch chemical process usually charges the raw materials to a batch reactor and then performs a series of chemical steps in that same vessel until the desired product is achieved. These steps might include mixing, heating, cooling, batch distilling, and so on. When the steps are complete, the material might be pumped to storage, or it may be an intermediate material that is transferred to another batch vessel where more processing steps are performed. Another key difference between continuous and batch processes is the typical running state of the process. A continuous process usually has a short start-up sequence; it then achieves steady state and remains in that state for days, weeks, months, or even years. The start-up and shutdown sequences are often a tiny fraction of the production run. In comparison, a batch process rarely achieves steady state. The process is constantly transitioning from state to state as the control system performs the processing sequence on the batch.
A third significant difference between batch and continuous processes is one of flexibility. A continuous process is specifically designed to make a large quantity of a single product (or a narrow family of products). Modification of the plant to make other products is often quite expensive and difficult to implement. In contrast, a batch process is intentionally designed to make a large number of products easily. Processing vessels are designed to handle a range of raw materials; vessels can be added to (or removed from) the processing sequence as necessary; and the reactor jackets and overhead piping are designed to handle a wide range of conditions. The flexibility of the batch process is both an advantage and a disadvantage. The inherent flexibility allows a batch process to turn out a large number of very different products using the same equipment. The batch process vessels and programming can also be easily reconfigured to make completely new products with a short turnaround. However, the relatively small size of the batch vessels generally limits the throughput of the product, so batch processes can rarely match the volume and efficiency of a large continuous process. This is why both continuous and batch processes are extensively used in manufacturing today. Each has advantages that serve specific market needs.
Controlling a Batch Process

From a control system perspective, the design of a continuous plant is usually quite straightforward. The instruments are sized and selected for the steady-state conditions, and the configuration normally consists of a large number of continuous proportional-integral-derivative (PID) controllers that keep the process at the desired steady state. The design of a batch control system is another matter entirely. The field instrumentation will often face a larger dynamic range of process conditions, and the control system must be configured to handle a large number of normal, transitional, and abnormal conditions. In addition, the control system must be easily reconfigured to address changing process vessel sequences, recipe changes, varying product requirements, and so on. The more flexible a batch process is, the more demanding the requirements on the control system. In some cases, a single set of batch vessels might make 50 to 100 different products. Clearly such programming complexity poses quite a challenge for the automation professional. Due to the difficulties that batch sequences posed, most batch processes were run manually for many years. As sequential controllers became available, simple batch systems were occasionally "automated" with limited sequences programmed into programmable logic controllers (PLCs) and drum programmers. Fully computerized systems that could deal with variable sequences for flexible processes were not broadly
available until the mid-1980s, and around that time several proprietary batch control products were introduced. Unfortunately, each had its own method of handling the complexities of batch programming, and each company used different terminology, adding to the confusion. The need for a common batch standard was obvious. The first part of the ISA-88 batch control standard was published in 1995 and has had a remarkable effect on the industry. It was later adopted by the American National Standards Institute (ANSI); currently named ANSI/ISA-88.00.01-2010, it is still broadly known as S88. It provides a standard terminology, an internally consistent set of principles, and a set of models that can be applied to virtually any batch process. ANSI/ISA-88.00.01 can be (and has been) applied to many other manufacturing processes that require procedural sequence control.
What Is ANSI/ISA-88.00.01?
ANSI/ISA-88.00.01 is such a thorough standard that an exhaustive description of its contents is impossible in this chapter. However, it is important for the automation professional to understand several basic concepts presented in the standard and learn how to apply these concepts when automating a batch process. The following description is at best a cursory overview of an expansive topic. It will not make the reader a batch expert, but it will provide a basic knowledge of the subject and serve as a starting point for further study and understanding. Before discussing the various parts of the standard, it is important to understand what the ISA88 standards committee was trying to accomplish. ANSI/ISA-88.00.01 was written to define batch control systems and give automation professionals a common set of terms and definitions that could be understood by everyone. Prior to the release of the standard, different control systems had different definitions for words such as "phases," "operations," "procedures," and "recipes," and each engineer and control system used different methods to implement the complicated sequential control that batch processing requires. ANSI/ISA-88.00.01 was written to standardize the underlying batch recipe logic and ideally allow pieces of logic code to be reused within the control system and even with other control systems. This idea is a critical point to remember. When batch software is done correctly, the recipes and procedures consist of relatively simple, generic pieces of logic that can be reused again and again without developing new logic. Such compartmentalization of the code also allows recipes to be easily adapted and modified as requirements change without rewriting the whole program. ANSI/ISA-88.00.01 spends a great deal of time explaining the various models associated with batch control. Among the models addressed in the standard are the
Process Model, the Physical Model, the Equipment Entity Model, the Procedural Control Model, the Recipe Type Model, the General Recipe Procedure Model, and the Master Recipe Model. While each of these models was created to define and describe different aspects of batch control, there are two basic model concepts that are critical to understanding batch software. These are the Physical Model and the Procedural Control Model.
ANSI/ISA-88.00.01 Physical Model
The Physical Model describes the plant equipment itself (see Figure 3-1). As the diagram shows, the model starts at the highest level (enterprise), which includes all the equipment in the entire company. The enterprise is composed of sites (or plants), which may be one facility or dozens of facilities spread across the globe. Each site is composed of areas, which may include single or multiple processing areas in a plant. Within a specific plant area there are one or more process cells, which are usually dedicated to a particular product or family of products. Below the level of the process cell, the ANSI/ISA-88.00.01 model becomes more complicated and begins to define the plant equipment in much more detail. It is crucial that the automation professional understand these lower levels of the Physical Model because the batch control code itself is structured around these same levels.
Units are officially defined in the standard as “a collection of associated equipment modules and/or control modules that can carry out one or more major processing activities.” In the plant this usually translates into a vessel (or a group of related vessels) that are dedicated to processing one batch (and only one batch) at a time. In most cases, this will be a single batch reactor with its associated jacket and overhead piping. The various processing steps performed in the unit are defined by the recipe’s “unit
procedure," which will be discussed in the next section. Units are composed of control modules and possibly equipment modules; both are discussed in the next paragraphs.
Defining the units in a plant can be a difficult proposition for an automation professional new to batch processing. Generally, one should look for vessels (and their associated equipment) that perform processing steps on a batch (and only one batch) at a time. Storage tanks are generally not considered a unit as they do not usually perform any batch operations on the product and often contain several batches at any given time. Consider the example vessels shown in Figure 3-2. There are two identical batch reactors (Rx101 and Rx102) that feed a third vessel, Mix Tank 103. Batches are processed in the two reactors; when the process is finished, they are alternately transferred to Mix Tank 103 for further raw material addition before the final product is shipped to storage. (The mixing process is much faster than the reaction steps so a single mix tank can easily process the batch and have it shipped before the next reactor is ready.) There are four raw material headers (A, B, C, and D) and each reactor has jacket valves and controls (not shown), as well as an overhead condenser and reflux drum that is used to remove solvent during batch processing. In such an arrangement, each reactor (along with its associated jacket and overhead condenser equipment) would be considered a unit, as would the mix tank.
Control modules are officially defined in ANSI/ISA-88.00.01 as “the lowest level grouping of equipment in the Physical Model that can carry out batch control.” In practice, the control modules tend to follow the typical definition of “instrument loops” in a plant. For example, one control module might include a PID loop (transmitter, PID controller, and control valve), and another control module might incorporate the controls around an automated on/off valve (i.e., open/close limit switches, an on/off DCS control block, a solenoid, and a pneumatic air-operated valve). Only a control module can directly manipulate a final control element. All other modules can affect equipment only by commanding one or more control modules. Every physical piece of equipment is controlled by one (and only one) control module. Sensors are treated differently. Regardless of which control modules contain measurement instrumentation, all modules can share that information. Control modules are given commands by the phases in the unit (to be discussed later) or by equipment modules (discussed below).
There are many control modules in Figure 3-2. Every on/off valve, each pump, each agitator, and each PID controller would be considered a control module. Equipment modules may exist in a batch system or they may not. Sometimes it is advantageous to group control modules into a single entity that can be controlled by a common piece of logic. For instance, the sequencing of all the valves associated with a reactor’s heating/cooling jacket may be better controlled by one piece of logic that can correctly open/close the proper valves to achieve heating/cooling/draining modes and handle the sequenced transitions from one mode to another. Another common example would be a raw material charge header. Occasionally it is easier to combine the charge valves, pumps, and flow meters associated with a particular charge into one equipment module that can open/close the proper valves and start/stop the pump as necessary to execute the material charge to a specific vessel. Equipment modules can be dedicated to a unit (such as the reactor jacket) or they may be used as a shared resource that can be allocated by several units (such as a raw material header). They are also optional—some batch systems will employ them while others may simply control the control modules as individual entities. In the example in Figure 3-2, the reactor jacket controls might be considered an equipment module, as could the individual raw material headers. The decision to create an equipment module is usually based on the complexity of the logic associated with that group of equipment. If the reactor jacket has several modes of operation (such as steam heating, tempered water heating, tempered water cooling, cooling water cooling, brine chilling, etc.), then it is probably worth creating an equipment module to handle the complex transitions from mode to mode independent of the batch recipe software. (The batch recipe would just send mode commands to the equipment module and let it
handle the transition logic.) If the reactor jacket were only capable of steam heating, then it probably is not worth the effort to create an equipment module for the jacket; the batch recipe can easily issue commands to the steam valve control module directly.
Procedural Control Model

While the Physical Model describes the plant equipment, the Procedural Control Model describes the batch control software. The batch recipe contains the following parts:

• Header – This contains administrative information, including product information, version number, revision number, approval information, etc.
• Formulas – Many recipes can make an entire family of related products by utilizing the same processing steps but changing the amount of ingredients, cook times, cook temperatures, etc. The formula provides the specific details that the recipe needs to make a particular subproduct.
• Equipment requirements – This is a list of equipment that the recipe must acquire in order to make a batch. In many cases, this equipment must be acquired before the batch can start, but if the recipe must process the batch in other equipment later in the process, then the recipe can "defer" the allocation of the other equipment until it needs it. (In the example in Figure 3-2, the recipe would need to acquire a reactor before starting and then acquire the mix tank once the reactor processing was completed.)
• Procedure – The procedure makes up the bulk of the recipe, and it contains the detailed control sequences and logic required to run the batch. The structure of the procedure is defined and described by the Procedural Control Model, which is described below.

The Procedural Control Model defines the structure of the batch control software. It is made up of four layers (see Figure 3-3), which will be described in the next sections. Rather than starting at the top and working down, it is easier to start with the lowest layers and work up through the structure, as each layer is built on the one preceding it.
Phase
The phase is defined in ANSI/ISA-88.00.01 as “the lowest level of procedural element in the Procedural Control Model that is intended to accomplish all or part of a process action.” This is a rather vague description, but typically a phase performs some discrete action, such as starting an agitator, charging a raw material, or placing a piece of equipment in a particular mode (such as jacket heating/cooling, etc.). It is called by the batch software as needed (much like a programming subroutine), and it usually has one or more parameters that allow the batch software to direct the phase in certain ways. (For instance, a raw material phase might have a CHARGE_AMT parameter that tells the phase how much material to charge.) The level of complexity of the phases has been an ongoing debate. Some have argued that the phase should be excessively simple (such as opening/closing a single valve), while others tend to make phases that perform much more complicated and involved actions. Phases that are too simple result in large, unwieldy recipes that can strain network communications. Phases that are too complex work fine, but troubleshooting them can be quite difficult, and making even the slightest change to the logic can become very involved. Ideally the phase should be a relatively simple piece of code that is as generic and flexible as possible and applicable to as large a number of recipes as possible. The subject of what to include in a phase will be handled later in the “Applying ANSI/ISA-88.00.01” section of this chapter. Refer to Figure 3-2. Here is a list of phases that would likely apply to the system shown. The items in parentheses are the phase parameters:

Reactor Phases
• Setup
• Agitate (On/Off)
• Jacket (Mode, Set Point)
• Charge_Mat_A (Amount)
• Charge_Mat_B (Amount)
• Drain
• Overhead (Mode)

Mix Tank Phases

• Setup
• Agitate (On/Off)
• Charge_Mat_C (Amount)
• Charge_Mat_D (Amount)
• Ship (Destination)

Operations

Operations are a collection of phases that perform some part of the recipe while working in a single unit. If the recipe is only run in one unit, then it may only have one operation that contains the entire recipe. If the recipe must run in multiple units, then there will be at least one operation for every unit required. Sometimes it is advantageous to break up the processing in a single unit into multiple operations, especially if the process requires the same set of tasks repeated multiple times. For instance, a reactor recipe might charge raw materials to a reactor, and then run it through multiple distillation sequences that involve the same steps but use different temperature set points or cook times. In this case, the batch programmer would be wise to create a distillation operation that could be run multiple times in the recipe using different operation parameters to dictate the changing temperature set points or cook times. This would avoid having multiple sets of essentially identical code that must be maintained and modified every time a change was made to the distillation process.
Some reactor operations for the example in Figure 3-2 might look like the illustration in Figure 3-4. (The phases with their parameters are in bold.)
Unit Procedures

Unit procedures combine the operations of a single unit into a single entity that is called by the highest level of the recipe (the procedure). Every recipe will have at least one unit procedure for every unit that is required by the recipe, and every unit procedure will contain at least one operation. When a unit procedure is encountered in a recipe, the batch system immediately allocates the unit in question before beginning to process the operations and phases it contains. When the unit procedure is completed, the programmer has the option of releasing the unit (for use by other recipes) or retaining the unit for some later step and thus keeping other recipes from getting control. In the example in Figure 3-2 the unit procedures might look like this:

Reactor Unit Procedure “UP_RX_PROCESS”
Contains the Reactor Operation “OP_PROCESS” (see above).

Mix Tank Unit Procedure “UP_MIX_TANK_AQUIRE”
Contains the Mix Tank Operation “OP_SETUP,” which acquires the mix tank when it is available and sets it up for a transfer from the reactor.

Reactor Unit Procedure “UP_RX_TRANSFER”
Contains the Reactor Operation “OP_TRANSFER” (see above).

Procedure

This is the highest level of the batch software code contained in the recipe. Many consider the procedure to be the recipe, as it contains all the logic required to make the recipe work; however, ANSI/ISA-88.00.01 defines the recipe to include the procedure as well as the header, equipment requirements, and formula information mentioned previously. The procedure is composed of at least one unit procedure (which then contains at least one operation and its collection of phases). In the example in Figure 3-2 the final procedure might look like this:

Reactor Unit Procedure “UP_RX_PROCESS”
Acquire the reactor before starting.
Mix Tank Unit Procedure “UP_MIX_TANK_AQUIRE”
Acquire the mix tank before starting.
Reactor Unit Procedure “UP_RX_TRANSFER”
Release the reactor when finished.
Mix Tank Unit Procedure “UP_MIX_TANK_PROCESS”
Release the mix tank when finished.

As one considers the Procedural Control Model, one might wonder why ANSI/ISA-88.00.01 has defined such an elaborate structure for the recipe logic. The reason is “reusability.” Similar to the phases, it is possible to write generic operations and even unit procedures that can apply to several recipes. This is the strength of ANSI/ISA-88.00.01—when implemented wisely, a relatively small number of flexible phases, operations, and unit procedures can be employed to manufacture many, very diverse products. Also, simply arranging the phase blocks in a different order can often create completely new product recipes.
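As a rough sketch of how a flexible phase can stay small, the following Structured Text fragment shows one way a generic material-charge phase might be organized. The tag names and the totalizer-based endpoint are assumptions for illustration, not taken from the standard or from Figure 3-2.

VAR
  Charge_Amt : REAL;     (* phase parameter supplied by the recipe *)
  FQ_Total   : REAL;     (* totalized flow since the phase started *)
  XV_Chg, P_Chg : BOOL;  (* charge valve and charge pump *)
  Phase_Done : BOOL;     (* completion flag reported to the batch engine *)
END_VAR

IF NOT Phase_Done THEN
  XV_Chg := TRUE;                 (* line up the charge path *)
  P_Chg  := TRUE;                 (* run the pump *)
  IF FQ_Total >= Charge_Amt THEN  (* endpoint reached *)
    P_Chg  := FALSE;
    XV_Chg := FALSE;
    Phase_Done := TRUE;
  END_IF;
END_IF;

Because the amount is a parameter, the same phase serves any recipe that charges through this header; only Charge_Amt changes.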
Applying ANSI/ISA-88.00.01
Now that the basic terminology has been discussed, it is time to put ANSI/ISA-88.00.01 into practice. When tackling a large batch project, it is usually best to execute the project in the following order.

1. Define the equipment (piping and instrumentation drawings [P&IDs], control loops, instrument tag list, etc.).

2. Understand the process. What processing steps must occur in each vessel? Can equipment be shared or is it dedicated? How does the batch move through the vessels? Can there be different batches in the system at the same time? Do the automated steps usually function as designed or must the operator take control of the batch and manually adjust the process often?

3. Define the units.
4. Carefully review the batch sheets and make a list of the required phases for all the recipes that will run on the equipment. Be sure this list is comprehensive! A partial list will result in a great deal of rework and additional programming later.

5. Review the phase list and determine which phases can be combined. For instance, there might be multiple Agitate phases in the list to handle the different types of agitators (on/off, variable-frequency drive [VFD], agitators with speed switches, etc.). Rather than creating a special type of agitator phase for each agitator type, it is usually not difficult to create a single phase that can handle all the types. This results in a single phase to maintain and adjust rather than a handful. Of course, such a concept can be carried to the extreme, and phases can get extremely complicated if they are designed to handle every contingency. Combine phases where it makes sense and where the logic is easily implemented. If it is not easy to handle the different scenarios, create two versions of the phase. Document the phases (usually using simple flow charts). This documentation will eliminate a great deal of miscommunication between the programming staff and the system designer during logic development.

6. With the completed phase list in hand, start building operations. Watch for recipes that use the same sequence of steps repeatedly. If this situation exists, create an operation for each repeated sequence so that the operation can be called multiple times in the recipe. (Similar to the phases, this reusability saves programming time and makes system maintenance easier.)

7. Build the unit procedures and final recipe. Document these entities with flow charts.

At this point the engineering design team should review the phases and recipe procedures with operations and resolve any issues. (Do NOT start programming phases and/or batch logic until this step is done.) Once the reviews have been completed, the team can start configuring the system. To avoid problems and rework, it is best to perform the configuration in the following order:

1. Build the I/O and low-level control modules, such as indicators, PID controllers, on/off valves, motors, etc. Be sure to implement the interlocks at this level.

2. Fully test the low-level configuration. Do NOT start the phase or low-level equipment module programming until the low level has been tested.

3. Program any equipment modules (if they exist) and any non-batch sequences that might exist on the system. Fully test these components.

4. Begin the phase programming. It is best to fully test the phases as they are completed rather than testing at the end. (Systemic issues in alarming and messaging can be caught early and corrected before they are replicated.)

5. Once the phases are fully built and tested, build the operations and test them.

6. Finally, build and test the unit procedures and recipe procedures.

Following this order of system configuration will radically reduce the amount of rework required to create the system. It also results in a fully documented system that is easy to troubleshoot and/or modify.
Summary

As stated at the beginning of this chapter, programming a batch control system is usually orders of magnitude more difficult than programming a continuous system. The nearly infinite combination of possible processing steps can be overwhelming, and the opportunity for massive rework and error correction can keep project managers awake at night. The ANSI/ISA-88.00.01 batch control standard was created to establish a common set of terminology and programming techniques that are specifically designed to handle the daunting challenges of creating a batch processing system in a consistent and efficient manner. When implemented wisely, ANSI/ISA-88.00.01 can result in flexible, reusable code that is easy to troubleshoot and can be used across many recipes and control systems. Study the standard and seek to take advantage of it.
Further Information

ANSI/ISA-88.00.01-2010. Batch Control Part 1: Models and Terminology. Research Triangle Park, NC: ISA (International Society of Automation).

Parshall, J. H., and L. B. Lamb. Applying S88: Batch Control from a User’s Perspective. Research Triangle Park, NC: ISA (International Society of Automation), 2000.

Craig, Lynn W. A. “Control of Batch Processes.” Chap. 14 in A Guide to the Automation Body of Knowledge, 2nd ed., edited by Vernon L. Trevathan. Research Triangle Park, NC: ISA (International Society of Automation), 2006.
About the Author
P. Hunter Vegas, PE, was born in Bay St. Louis, Miss., and he received his BS degree in electrical engineering from Tulane University in 1986. Upon graduating, Vegas joined Babcock and Wilcox, Naval Nuclear Fuel Division in Lynchburg, Va., where his primary job responsibilities included robotic design and construction, and advanced computer control. In 1987, he began working for American Cyanamid (now Cytec Industries) as an instrument engineer. In the ensuing 12 years, his job titles included instrument engineer, production engineer, instrumentation group leader, principal automation engineer, and unit production manager. In 1999, he joined a specially formed group to develop next-generation manufacturing equipment for a division of Bristol-Myers Squibb. In 2001, he entered the systems integration industry, and he is currently working for Wunderlich Malec as an engineering project manager in Kernersville, N.C. Vegas holds Louisiana and North Carolina Professional Engineering licenses in electrical and controls system engineering, a North Carolina Unlimited Electrical contractor license, a TUV Functional Safety Engineering certificate, and an MBA from Wake Forest University. He has executed thousands of instrumentation and control projects over his career, with budgets ranging from a few thousand to millions of dollars. He is proficient in field instrumentation sizing and selection, safety interlock design, electrical design, advanced control strategy, and numerous control system hardware and software platforms. He co-authored a book, 101 Tips to a Successful Automation Career, with Greg McMillan and co-sponsors the ISA Mentoring program with McMillan as well. Vegas is also a frequent contributor to InTech, Control, and numerous other publications.
4 Discrete Control By Kelvin T. Erickson, PhD
Introduction

A discrete control system mainly has discrete sensors and actuators, that is, sensors and actuators that have one of two values (e.g., on/off or open/closed). Though Ladder Diagram (LD) is the primary language of discrete control, the industry trend is toward using the IEC 61131-3 (formerly 1131-3) standard. Besides Ladder Diagram, IEC 61131-3 defines four additional languages: Function Block Diagram (FBD), Structured Text (ST), Instruction List (IL), and Sequential Function Chart (SFC). Even though Ladder Diagram was originally developed for the programmable logic controller (PLC) and Function Block Diagram was originally developed for the distributed control system (DCS), a PLC is not limited to ladder logic and a DCS is not limited to function block. The five IEC languages apply to all platforms for implementation of discrete control.
Ladder Logic

Early technology for discrete control used the electromechanical relays originally developed for the telegraph industry of the 1800s. Interconnections of relays implemented logic and sequential functions. The PLC was originally developed to replace relay logic control systems. By using a programming language that closely resembles the wiring diagram documentation for relay logic, the new technology was readily adopted. To introduce LD programming, simple logic circuits are converted to relay logic and then to LD (also called ladder logic). Consider the simple problem of opening a valve, XV103, when pressure switches PS101 and PS102 are both closed, as in Figure 4-1a. To implement this function using relays, the switches are not connected to the valve directly but are connected to relay coils labeled PS101R and PS102R whose normally open (NO) contacts control a relay coil, XV103R, whose contacts control the valve (see Figure 4-1b). When PS101 and PS102 are both closed, the corresponding relay coils PS101R and PS102R are energized,
closing two contacts and energizing the XV103R relay coil. The contact controlled by XV103R is closed, supplying power to the XV103 valve.
The output (a valve in this case) is driven by the XV103R relay to provide voltage isolation from the relays implementing the logic. The need for this isolation is more obvious when the output device is a three-phase motor operating at 440 volts. The input switches, PS101 and PS102, control relay coils so that the one switch connection to an input relay can be used multiple times in the logic. A typical industrial control relay can have up to 12 poles, or sets of contacts, per coil. For example, if the PS101R relay has six poles (only one is shown), then the other five poles (contacts) are available for use in the relay logic without requiring five other connections to PS101. The ladder logic notation (Figure 4-1c) is shortened from the relay wiring diagram to show only the third line, the relay contacts, and the coil of the output relay. Ladder logic notation assumes that the inputs (switches in this example) are connected to discrete input channels (equivalent to the relay coils PS101R and PS102R in Figure 4-1b). Also, the actual output (valve) is connected to a discrete output channel (equivalent to the NO contacts of XV103R in Figure 4-1b) controlled by the coil. The label shown above the contact symbol is not the contact label; it is the label of the control for the coil that drives the contact. Also, the output for the rung occurs on the extreme right side of the rung, and power is assumed to flow from left to right. The ladder logic rung is interpreted as follows: “When input (switch) PS101 is closed and input (switch) PS102
is closed, then XV103 is on.” Suppose the control function is changed so that valve XV103 is open when switch PS101 is closed and switch PS102 is open. The only change needed in the relay implementation in Figure 4-1b is to use the normally closed (NC) contact of the PS102R relay. The ladder logic for this control is shown in Figure 4-2a and is different from Figure 4-1c only in the second contact symbol. The ladder logic is interpreted as follows: “When PS101 is closed (on) and PS102 is open (off), then XV103 is on.”
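For reference, the logic of these first two rungs can also be written as single Boolean assignments in IEC 61131-3 Structured Text (a sketch; ST itself is covered later in this chapter):

XV103 := PS101 AND PS102;      (* Figure 4-1c: two NO contacts in series *)
XV103 := PS101 AND NOT PS102;  (* Figure 4-2a: NO contact in series with an NC contact *)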
Further suppose the control function is changed so that valve XV103 is open when either switch PS101 or PS102 is closed. The only change needed in the relay implementation in Figure 4-1b is to wire the two NO contacts in parallel rather than in series. The ladder logic for this control is shown in Figure 4-2b. The ladder logic is interpreted as follows: “When PS101 is closed (on) or PS102 is closed (on), then XV103 is on.” Summarizing these three examples, one should notice that key words in the description of the operation translate into certain aspects of the solution:

and → series connection of contacts
or → parallel connection of contacts
on → NO contact
off → NC contact

These are key ladder logic concepts. An example of a PLC ladder logic diagram appears in Figure 4-3. The vertical lines on the left and right are called the power rails. The contacts are arranged horizontally between the power rails, hence the term rung. The ladder diagram in Figure 4-3 has three rungs. The arrangement is similar to a ladder one uses to climb onto a roof. In addition, Figure 4-3 shows an example of a diagram one would see when monitoring the running program. The thick lines indicate continuity, and the state (on/off) of the inputs and outputs is shown next to the contact/coil. Regardless of the contact symbol, if the contact is closed (when it has continuity through it), it is shown as thick lines. If the
contact is open, it is shown as thin lines. In a relay ladder diagram, power flows from left to right. In ladder logic, there is no real power flow, but there still must be a continuous path through closed contacts in order to energize an output. In Figure 4-3, the XV101 output on the first rung is off because the contact for PS132 is open (meaning PS132 is off), blocking continuity through the PS124 and LS103 contacts. Also note that the LS103 input is off, which means the NC contact in the first rung is closed and the NO contact in the second rung is open. According to IEC 61131-3, the right power rail may be explicit or implied.
Figure 4-3 also introduces the concept of function blocks. Any object that is not a contact or a coil is called a function block because of its appearance in the ladder diagram. The most common function blocks are timer, counter, comparison, and computation operations. More advanced function blocks include message, sequencer, and shift register operations. Some manufacturers group the ladder logic objects into two classes: inputs and outputs. This distinction was made because in relay ladder logic, outputs were never connected in series and always occurred on the extreme right-hand side of the rung. Contacts always appeared on the left side of coils and never on the right side. To turn on multiple outputs simultaneously, coils are connected in parallel. This restriction was relaxed in IEC 61131-3 and now outputs may be connected in series. Also, contacts can occur on
the right side of a coil as long as a coil is the last element in the rung. Many novice ladder logic programmers tend to use the same type of contact (NO or NC) in the ladder that corresponds to the type of field switch (or sensor) wired to the discrete input channel. While this is true in many cases, this is not the best way to think of the concept. The type of contact (NO, NC) in the field is determined by safety or fail-safe factors, but these factors are not relevant to the PLC ladder logic. The PLC is only concerned with the current state of the discrete input (open/off or closed/on).
As an example, consider the problem of starting and stopping a motor with momentary switches. The motor is representative of any device that must run continuously but is started and stopped with momentary switches. The start and stop momentary switches are shown with the general ladder logic in Figure 4-4. Concentrating on the two momentary switches, the stop pushbutton is an NC switch, but an NO contact is used in the ladder logic. In order for the motor EX101 to start, the STOP_PB must be closed (not pushed) and the START_PB must also be closed (pushed). When the EX101 coil is energized, it also provides continuity through the parallel path and the motor remains running when START_PB is released. When STOP_PB is pushed, the discrete input is now off, opening the contact in the ladder logic. The EX101 output is then de-energized.
The start and stop switches are chosen and wired this way for safety. If any part of the system fails (switch or wiring), the motor will go to the safe state, which is stopped. If the start-switch wiring is faulty (open wire), then the motor cannot be started because the controller will not sense a closed start switch. If the stop-switch wiring is faulty (open wire), then the motor will immediately stop if it is running. Also, the motor cannot be started with an open wire to the stop switch. The ladder logic in Figure 4-4 is also called a seal or latching circuit, and appears in other contexts. In many systems, the start and stop of a device, such as a motor, have more conditions that must be satisfied for the motor to run. These conditions are referred to as permissives, permits, lockouts, inhibits, or restrictions. These conditions are placed on the start/stop rung as shown in Figure 4-4. Permissives allow the motor to start and lockouts will stop the motor, as well as prevent it from being started.
After coils and contacts, timers are the next most common ladder logic object. IEC 61131-3 defines three types of timers: on-delay, off-delay, and pulse. The on-delay timer is by far the most prevalent and so it is the only one described here. For a description of the other timers, the interested reader is referred to the references.
The TON on-delay timer is shown in Figure 4-5a. The EN input is the block execution control and must be on for the block to execute. The ENO output echoes the EN input and is on if EN is on and the block executes without error. The TON timer basically delays the turn-on of a signal and does not delay the turn-off. When the IN input turns on, the internal time (ET) increases. When ET equals the preset time (PT), the timer is timed out and the Q output turns on. If IN turns off during the timing interval, Q remains off, ET is set to zero, and timing recommences when the IN turns on. The ET output can be connected to a variable (of type TIME) to monitor the internal time. The instance name of the timer appears above the block. The preset time can be a variable or a literal of type TIME. The prefix must be TIME#, T#, time#, or t#. The time is specified in days (d), hours (h), minutes (m), seconds (s), and milliseconds (ms), for example, t#2d4h45m12s450ms. The accuracy is 1 millisecond.
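As a sketch, an on-delay of 15 seconds on a limit switch might be written in Structured Text as follows (the tags mirror the Figure 4-5 description in the next paragraph; the instance name is illustrative):

VAR LS1_Tmr : TON; END_VAR

LS1_Tmr(IN := LS_1, PT := T#15s);  (* Q turns on after IN has been on for 15 s *)
LS1_Hold := LS1_Tmr.Q;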
The timing diagram associated with the ladder in Figure 4-5a is shown in Figure 4-5b. The LS_1 discrete input must remain on for at least 15 seconds before the LS1_Hold coil is turned on. When LS_1 is turned off after 5 seconds, ET is set to zero time and the LS1_Hold coil remains off. As a final ladder logic example, consider the control of a fixed-speed pump motor. The specifications are as follows. The logic controls a motor starter through a discrete output. There is a Hand-Off-Auto (HOA) switch between the discrete output and the motor starter that allows the operator to override the PLC control. The control operates in two modes: Manual and Automatic. When in the Manual mode, the motor is started and stopped with Manual Start and Manual Stop commands. When in the Automatic mode, the motor is controlled by
Sequence Start and Sequence Stop commands. In the Automatic mode, the motor is controlled by sequences. When switching between the two modes, the motor control discrete output should not change. The logic must monitor and report the following faults:

• Motor fails to start within 10 seconds
• Motor overload
• HOA switch is not in the Auto position
• Any fault (on when any of the above three are on)

The Fail to Start fault must be latched when it occurs. An Alarm Reset input must be provided that, when on, resets the Fail to Start fault indication. The other fault indications track the appropriate status. For example, the Overload Fault is on when the overload is detected and off when the overload has been cleared. When any fault indication is on, the discrete output to the motor starter should be turned off and remain off until all faults are cleared. In addition, a Manual Start or Sequence Start must be used to start the motor after a fault has cleared. To detect these faults, the following discrete inputs are available:

• Motor auxiliary switch
• Overload indication from the motor starter
• HOA switch position indication
The connections to the pump motor, tagged as EX100, are explained as follows:

• EX100_Aux – Auxiliary contact; closes when the motor is running at the proper speed
• EX100_Hoa – HOA switch; closes when the HOA switch is in the Auto position
• EX100_Ovld – Overload indication from the motor starter; on when overloaded
• Alarm_Reset – On to clear an auxiliary contact-fail-to-close failure indication
• EX100_ManMode – On for Manual mode; off for Automatic mode
• EX100_ManStart – On to start the motor when in Manual mode; ignored in Automatic mode
• EX100_ManStop – On to stop the motor when in Manual mode; ignored in Automatic mode
• EX100_SeqStart – On to start the motor when in Automatic mode; ignored in Manual mode
• EX100_SeqStop – On to stop the motor when in Automatic mode; ignored in Manual mode
• EX100_MtrStrtr – Motor starter contactor; on to start and run the motor; off to stop the motor
• EX100_AnyFail – On when any failure indication is on
• EX100_AuxFail – On when the auxiliary contact failed to close 10 seconds after motor started
• EX100_OvldFail – On when the motor is overloaded
• EX100_HoaFail – On when the HOA switch is not in the Auto position, indicating that the PLC does not control the motor

The ladder logic that implements the control of pump EX100 is shown in Figure 4-6. The start and stop internal coils needed to drive the motor contactor are determined by the first and second rungs. In these two rungs, the operator generates the start and stop commands when the control is in the Manual mode. When the control is in the Automatic mode (not the Manual mode), the motor is started and stopped by steps in the various sequences (function charts). The appropriate step sets the EX100_SeqStart or EX100_SeqStop internal coil to control the motor. These two internal coils are always reset by this ladder logic. This method of sequence-based control allows one to start/stop the motor in multiple sequence steps without having to change the motor control logic. The third rung delays checking for the auxiliary fail alarm until 10 seconds after the motor is started. This alarm must be latched since this failure will cause the output to the starter to be turned off, thus disabling the conditions for this alarm. The fourth and fifth rungs generate the overload fail alarm and the indication that the HOA switch is not in the Auto position. The sixth rung resets the auxiliary failure alarm so that another start attempt is allowed. The reset is often for a group of equipment, rather than being specific to each device. The seventh rung generates one summary failure indication that would appear on an alarm summary screen. The eighth rung controls the physical output that drives the motor contactor (or motor starter). Note that this rung occurs after the failure logic is scanned and the motor is turned off immediately when any failure is detected. The last rung resets the sequential control commands.
Function Block Diagram

The Function Block Diagram (FBD) language is another graphical programming language. The typical DCS of the 1970s used this type of language to program the PID loops and associated functions and logic. An FBD is a set of interconnected blocks displaying the flow of signals between blocks. It is similar to a ladder logic diagram, except that function blocks replace the contact interconnections and the coils are simply Boolean outputs of function blocks. Figure 4-7 contains two FBD equivalents to the start/stop logic in Figure 4-4. The “&”
block is a logical AND block, and the >=1 block is a logical OR block. The number of inputs for each block can be increased. The circle at the lower input of the rightmost & block is a logical inversion, equivalent to the NC relay contact. The FBD in Figure 4-7a has an implicit feedback path, because the EX101 output of the rightmost & block is also an input to the leftmost & block. The EX101 variable is called the feedback variable. An alternative FBD is shown in Figure 4-7b. This FBD has an explicit feedback path, where there is an explicit path from the EX101 output to an input of the first & block. On this path, the value of EX101 passes from right to left, which is opposite to the normal left-to-right flow.
The execution of the function blocks can be controlled with the optional EN input. When the EN input is on, the block executes. When the EN input is off, the block does not execute. For example, in Figure 4-8, the set point from a recipe (Recipe2.FIC102_SP) is moved (“:=” block) into FIC102_SP when both Step_32 and Temp_In_Band are true. The EN/ENO connections are completely optional in the FBD language.
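In ST terms, the gated move of Figure 4-8 behaves roughly like the following sketch:

IF Step_32 AND Temp_In_Band THEN
  FIC102_SP := Recipe2.FIC102_SP;  (* move the recipe set point *)
END_IF;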
IEC 61131-3 does not specify a strict order of network evaluation. However, most FBDs are portrayed so that execution generally proceeds from left to right and top to bottom. The only real exception to this generality is an explicit feedback path. IEC 61131-3 specifies that network evaluation obey the following rules:

1. If the input to a function block is the output from another function block, then it should be executed after the other function block. In Figure 4-7a, the execution order is the leftmost &, the >=1, and then the rightmost &.

2. The outputs of a function block should not be available to other blocks until all outputs are calculated.

3. The execution of an FBD network is not complete until all outputs of all function blocks are determined.

4. When data is transferred from one FBD to another, the second FBD should not be evaluated until the values from the first FBD are available.

According to IEC 61131-3, when the FBD contains an implicit or explicit feedback path, it is handled in the following manner:
1. “Feedback variables shall be initialized by one of the mechanisms defined in clause 2. The initial value shall be used during the first evaluation of the network.” Clause 2 of IEC 61131-3 defines possible variable initialization mechanisms.

2. “Once the element with a feedback variable as output has been evaluated, the new value of the feedback variable shall be used until the next evaluation of the element.”

One of the more powerful aspects of the FBD language is the use of function blocks to encapsulate standard operations. A function block can be invoked multiple times in a program without actually duplicating the code. As an example, the ladder logic of Figure 4-6 can be encapsulated as the function block of Figure 4-9. The “EX100_” part of all symbols in Figure 4-6 is stripped and most of them become inputs or outputs to the function block. The block inputs are shown on the left side and the outputs are shown on the right side.
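Although Figure 4-9 is not reproduced here, the encapsulated block might be declared along the following lines; the block name MotorControl is hypothetical, and the input/output names simply mirror the Figure 4-6 tags with the “EX100_” prefix stripped.

FUNCTION_BLOCK MotorControl
VAR_INPUT
  ManMode, ManStart, ManStop : BOOL;
  SeqStart, SeqStop          : BOOL;
  Aux, Ovld, Hoa             : BOOL;
  Alarm_Reset                : BOOL;
END_VAR
VAR_OUTPUT
  MtrStrtr : BOOL;
  AnyFail, AuxFail, OvldFail, HoaFail : BOOL;
END_VAR
VAR
  AuxF_Tmr : TON;  (* 10-second fail-to-start timer *)
END_VAR
(* body: the Figure 4-6 logic, written once and instantiated per motor *)
END_FUNCTION_BLOCK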
Structured Text

The Structured Text (ST) language defined by IEC 61131-3 is a high-level language whose syntax is similar to Pascal. In general, ST is useful for implementing calculation-intensive functions and other functions that are difficult to implement in the other languages. The ST language has a complete set of constructs to handle variable assignment, conditional statements, iteration, and function block calling. For a detailed language description, the interested reader is referred to the references.
As a simple example, the ST equivalent to the ladder logic of Figure 4-4 and the FBD of Figure 4-7 is shown in Figure 4-10. One could also write one ST statement that incorporated the logic of Figure 4-4 without using the IF-THEN-ELSE construct.
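Such a single statement might look like this sketch, using the Figure 4-4 tags:

EX101 := (START_PB OR EX101) AND STOP_PB;  (* seal-in: start OR running, ANDed with the stop input *)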
As a more complicated example, the ST equivalent to the ladder logic implementation of the motor control of Figure 4-6 is shown in Figure 4-11. The use of the timer function block in lines 8–10 needs some explanation. A function block has an associated algorithm, embedded data, and named outputs. Multiple calls to the same function block may yield different results. To use a function block in ST, an instance of the function block must be declared in the ST program. For the ST shown in Figure 4-11, the instance of the TON timer block is called AuxF_Tmr and must be declared in another part of the program as:
VAR AuxF_Tmr: TON; END_VAR

Line 8 of Figure 4-11 invokes the AuxF_Tmr instance of the TON function block. Invoking a function block does not return a value. The outputs must be referenced in subsequent statements. For example, line 9 of Figure 4-11 uses the Q output of AuxF_Tmr along with other logic to set the auxiliary failure indication.
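Since Figure 4-11 itself is not reproduced here, lines 8–10 presumably resemble the following sketch (inferred from the rung descriptions above, not the author’s verbatim code):

AuxF_Tmr(IN := EX100_MtrStrtr AND NOT EX100_Aux, PT := T#10s);       (* start the 10 s window *)
EX100_AuxFail := (AuxF_Tmr.Q OR EX100_AuxFail) AND NOT Alarm_Reset;  (* latch until reset *)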
Instruction List

The Instruction List (IL) language defined by IEC 61131-3 is a low-level language comparable to the assembly language programming of microprocessors. The IL language has a set of instructions to handle variable assignment, conditional statements, simple arithmetic, and function block calling. The IL and ST languages share many of the same elements. Namely, the definition of variables and direct physical PLC addresses are identical for both languages. An instruction list consists of a series of instructions. Each instruction begins on a new line and contains an operator with optional modifiers, and if necessary for the particular operation, one or more operands separated by commas. Figure 4-12 shows an example list of IL instructions illustrating the various parts. For a detailed language description, the interested reader is referred to the references.
As an example, the IL equivalent to the ladder logic of Figure 4-4 and the FBD of Figure 4-7 is shown in Figure 4-13.
As a more complicated example, the IL equivalent to the ladder logic implementation of the motor control of Figure 4-6 is shown in Figure 4-14. As for the ST implementation in Figure 4-11, the instance of the TON timer block is called AuxF_Tmr and is declared in the same manner. The timer function block is invoked in line 16 and the Q output is loaded in line 17 to be combined with other logic.
Sequential Problems

The Sequential Function Chart (SFC) is the basic design tool for sequential control applications. The IEC 61131-3 SFC language is derived from the IEC 848 function chart standard. IEC 61131-3 defines the graphical, semigraphical, and textual formats for an SFC. Only the graphical format is explained here because it is the most common form. The general form of the function chart is shown in Figure 4-15. The function chart has the following major parts:

• Steps of the sequential operation
• Transition conditions to move to the next step
• Actions of each step
The initial step is indicated by the double-line rectangle. The initial step is the initial state of the program when the controller is first powered up or when the operator resets the operation. The steps of the operation are shown as an ordered set of labeled steps (rectangles) on the left side of the diagram. Unless shown by an arrow, the progression of the steps proceeds from top to bottom. The transition condition is shown as a horizontal bar between steps. If a step is active and the transition condition below that step becomes true, the step becomes inactive, and the next step becomes active. The stepwise flow (called step evolution) continues to the bottom of the diagram. Branching is permitted to cause the step evolution to lead back to an earlier step or to proceed along multiple paths. A step without a vertical line below it is the last step of the sequence. The actions associated with a step are shown to the right of the step. Each step action is shown separately with a qualifier (“N” in Figure 4-15). Only the major aspects of SFCs are described in this section. For a detailed language description, the interested reader is referred to the references. Each step within an SFC has a unique name and should appear only once in an SFC. Every step has two variables that can be used to monitor and synchronize step activation. The step flag is a Boolean of the form ****.X, where **** is the step name, which is true while the step is active and false otherwise. The step elapsed time (****.T) variable of type TIME indicates how long the step has been active. When a step is first activated, the value of the step elapsed time is set to T#0s. While the step is active, the step elapsed time is updated to indicate how long the step has been active.
When the step is deactivated, the step elapsed time remains at the value it had when the step was deactivated; that is, it indicates how long the step was active. The Cool_Wait step in Figure 4-24 (later in this chapter) is an example using the step elapsed time for a transition. The flow of active steps in an SFC is called step evolution and generally starts with the initial step and proceeds downward. The steps and transitions alternate, that is:

• Two steps are never directly linked; they are always separated by a transition.
• Two transitions are never directly linked; they are always separated by a step.
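Written as an ST transition condition, the elapsed-time test might look like this (the 10-minute value is illustrative, not taken from Figure 4-24):

Cool_Wait.T >= T#10m  (* true once the Cool_Wait step has been active for 10 minutes *)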
Sequence selection is also possible, causing the step evolution to choose between two or more different paths. An example sequence selection divergence, and its corresponding convergence, is shown in Figure 4-16. The transition conditions must be mutually exclusive; that is, no more than one can be true. Alternate forms of sequence selection specify the order of transition evaluation, relaxing this rule. In the alternate forms, only one path is selected, even if more than one transition condition is true. There are two special cases of sequence selection. In a sequence skip, one or more branches do not contain steps. A sequence loop is a sequence selection in which one or more branches return to a previous step. These special cases are shown in the references.
The evolution out of a step can cause multiple sequences to be executed, called simultaneous sequences. An example of simultaneous sequence divergence, and its corresponding convergence, is shown in Figure 4-17. In Figure 4-17, if the Prestart_Check step is active and Pre_OK becomes true, all three branches are executed. One branch adds ingredient A followed by ingredient X. A second branch adds ingredient B. The third branch starts the agitator after the tank level reaches a certain
value. When all these actions are completed, the step evolution causes the Heat_Reac step to be active. The three wait steps are generally needed to provide a holding state for each branch when waiting for one or more of the other two branches to complete.
The most common format of a transition condition is a Boolean expression in the Structured Text (ST) language to the right of the horizontal bar below the step box (Figure 4-18a). Two other popular formats are a ladder diagram network intersecting the vertical link instead of a right rail (Figure 4-18b), and an FBD network whose output intersects the vertical link (Figure 4-18c).
Action blocks are associated with a step. Each step can have zero or more action blocks. Figure 4-15 shows multiple action blocks associated with the Step_1_Name step. An action can be a Boolean variable, a ladder logic diagram, an FBD, a collection of ST statements, a collection of IL statements, or an SFC. The action box is used to perform a process action, such as opening a valve, starting a motor, or calculating an endpoint for the transition condition. Generally, each step has an action block, although in cases where a step is only waiting for a transition (e.g., waiting for a limit switch to close) or
executing a time delay, no action is attached. Each step action block may have up to four parts, as is seen in Figure 4-19:

1. a – action qualifier
2. b – action name
3. c – Boolean indicator variable
4. d – action description using the IL, ST, LD, FBD, or SFC language
The “b” field is the only required part of the action block. The “c” field is an optional Boolean indicator variable, set by the action to signify step completion, time-out, error condition, and so on. When the “b” field is a Boolean variable, the “c” and “d” fields are absent. When the “d” field is present, the “b” field is the name of the action whose description is shown in the “d” field. IEC 61131-3 defines the “d” field as a box below the action name. Figure 4-20 shows an action defined as ladder logic, an FBD, ST, and an SFC. An action may also be defined as an IL, which is not shown.
The action qualifier is a letter or a combination of letters describing how the step action is processed. If the action qualifier is absent, it is assumed to be N. Possible action qualifiers are defined in Table 4-1. Only the N, S, R, and L qualifiers are described in
the following paragraphs. The interested reader is referred to the references for a description of the other qualifiers.
N – Non-Stored Action Qualifier

A non-stored action is active only when the step is active. In Figure 4-21, the action P342_Start executes continuously while the Start_P342 step is active, that is, while the Start_P342.X flag is on. In this example, P342_Start is an action described in the ST language. When the transition P342_Aux turns on, the Start_P342 step becomes inactive, and P342_SeqStart is turned off. Deactivation of the step causes the action to execute one last time (often called postscan) in order to deactivate the outputs (the left side of the expression).
S and R – Stored (Set) and Reset Action Qualifiers

A stored action becomes active when the step becomes active. The action continues to be executed even after the step is inactive. To stop the action, another step must have an R qualifier that references the same action. Figure 4-22 is an example use of the S and R qualifiers. The S qualifier on the action for the Open_Rinse step causes XV345_SeqOpen to be turned on immediately after the Open_Rinse step becomes active. XV345_SeqOpen remains on until the Close_Rinse step, which has an R qualifier on the XV345_Open action. As soon as the Close_Rinse step becomes active, the action is executed one last time to deactivate XV345_SeqOpen.
L – Time-Limited Action Qualifier

A time-limited action becomes active when the step becomes active. The action becomes inactive when a set length of time elapses or the step becomes inactive, whichever happens first. In Figure 4-23a, the L qualifier on the action for the Agit_Tank step causes A361_Run to be turned on immediately after the Agit_Tank step becomes active. If the step elapsed time is longer than 6 minutes, A361_Run remains on for 6 minutes (Figure 4-23b). If the step elapsed time is less than 6 minutes, A361_Run turns off when the step becomes inactive (Figure 4-23c).
A more complicated SFC example is shown in Figure 4-24. This SFC controls a batch process. The reaction vessel is heated to a desired initial temperature and then the appropriate main ingredient is added, depending on the desired product. The reactor temperature is raised to the soak temperature and then two more ingredients are added while agitating. The vessel is cooled and then dumped.
Further Information

Erickson, Kelvin T. Programmable Logic Controllers: An Emphasis on Design and Applications. 3rd ed. Rolla, MO: Dogwood Valley Press, 2016.

IEC 61131-3:2003. Programmable Controllers – Part 3: Programming Languages. Geneva 20 – Switzerland: IEC (International Electrotechnical Commission).

IEC 848:1988. Preparation of Function Charts for Control Systems. Geneva 20 – Switzerland: IEC (International Electrotechnical Commission).

Lewis, R. W. Programming Industrial Control Systems Using IEC 1131-3. Revised ed. The Institution of Electrical Engineers (IEE) Control Engineering Series. London: IEE, 1998.
About the Author
Kelvin T. Erickson, PhD, is a professor of electrical and computer engineering at the Missouri University of Science and Technology (formerly the University of Missouri-Rolla, UMR) in Rolla, Missouri. His primary areas of interest are in manufacturing automation and process control. Before coming to UMR in 1986, he was a senior design engineer at Fisher Controls International, Inc. (now part of Emerson Process Management). During 1997, he was on a sabbatical leave from UMR, working for Magnum Technologies (now Maverick Technologies) in Fairview Heights, Illinois. Erickson received BS and MS degrees in electrical engineering from the University of Missouri-Rolla and a PhD in electrical engineering from Iowa State University. He is a registered professional engineer (control systems) in Missouri. He is a member of the International Society of Automation (ISA) and senior member of the Institute of Electrical and Electronics Engineers (IEEE).
II Field Devices
Measurement Accuracy and Uncertainty

It is true that you can control well only those things that you can measure—and accuracy and reliability requirements are continually improving. Continuous instrumentation is required in many applications throughout automation, although we call it process instrumentation because the type of transmitter packaging discussed in this chapter is more widely used in process applications. There are so many measurement principles and variations on those principles that we can only scratch the surface of all the available ones; however, this section strives to cover the more popular/common types.
Process Transmitters

The field devices, sensors, and final control elements are the most important links in process control and automation. The reason is that if you cannot measure or control your process, everything built upon those devices cannot compensate for a poor input or for an inability to control the output without excessive variance.
Analytical Instrumentation

Analytical instrumentation is commonly used for process control, environmental monitoring, and related applications in a variety of industries.
Control Valves

Final control elements, such as control valves and, now increasingly, variable-speed or variable/adjustable-frequency motors, are critical components of a control loop in the process and utility industries. It has been demonstrated in nearly all types of process plants that control valve problems are a major cause of poor loop performance. A general knowledge of the impact of the control valve on loop performance is critical to process control. Today it has become commonplace for automation professionals to delegate the selection and specification of instrumentation and control valves, as well as the tuning of controllers, to technicians. However, performance in all these areas may depend on advanced technical details that require the attention of an automation professional; there are difficult issues including instrument selection, proper instrument installation, loop performance, advanced transmitter features, and valve dynamic performance. A knowledgeable automation professional could likely go into any process plant in the world and drastically improve the performance of the plant by tuning loops, redesigning the installation of an instrument for improved accuracy, or determining a needed dynamic performance improvement on a control valve—at minimal cost. More automation professionals need that knowledge.
Motor Controls
Not all final control elements are valves. Motors with adjustable speed drives are used for pumps, fans, and other powered equipment. This chapter provides a basic review of motor types and adjustable speed drive functionality.
5 Measurement Uncertainty By Ronald H. Dieck
Introduction

All automation measurements are taken so that useful data for the decision process may be acquired. For results to be useful, it is necessary that their measurement errors be small in comparison to the changes, effects, or control process under evaluation. Measurement error is unknown but its limits may be estimated with statistical confidence. This estimate of error is called measurement uncertainty.
Error

Error is defined as the difference between the measured value and the true value of the measurand [1], as shown in Equation 5-1:

$$ E = (\text{measured}) - (\text{true}) \tag{5-1} $$

where

$E$ = the measurement error
(measured) = the value obtained by a measurement
(true) = the true value of the measurand
It is only possible to estimate, with some confidence, the expected limits of error. The first major type of error with limits needing estimation is random error. The extent or limits of a random error source are usually estimated with the standard deviation of the average, which is written as:
$$ S_{\bar{X}} = \frac{S_X}{\sqrt{M}} = \frac{1}{\sqrt{M}}\left[\sum_{i=1}^{N}\frac{(X_i - \bar{X})^2}{N-1}\right]^{1/2} \tag{5-2} $$

where

$S_{\bar{X}}$ = the standard deviation of the average; the sample standard deviation of the data divided by the square root of M
$M$ = the number of values averaged for a measurement
$S_X$ = the sample standard deviation
$\bar{X}$ = the sample average
Note in Equation 5-2 that N does not necessarily equal M. It is possible to obtain SX from historical data with many degrees of freedom ([N – 1] greater than 29) and to run the test only M times. The test result, or average, would therefore be based on M measurements, and the standard deviation of the average would still be calculated with Equation 5-2.
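For example (illustrative numbers): if historical data give $S_X = 0.8\,^{\circ}\mathrm{C}$ from $N = 31$ readings (30 degrees of freedom), and a test result is the average of $M = 4$ measurements, then

$$ S_{\bar{X}} = \frac{S_X}{\sqrt{M}} = \frac{0.8}{\sqrt{4}} = 0.4\,^{\circ}\mathrm{C} $$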
Measurement Uncertainty (Accuracy)

One needs an estimate of the uncertainty of test results to make informed decisions. Ideally, the uncertainty of a well-run experiment will be much less than the change or test result expected. In this way, it will be known, with high confidence, that the change or result observed is real or acceptable and not a result of errors from the test or measurement process. The limits of those errors are estimated with uncertainty, and those error sources and their limit estimators, the uncertainties, may be grouped into classifications to make them easier to understand.
Classifying Error and Uncertainty Sources

There are two classification systems in use; the final uncertainty calculated at a chosen confidence is identical no matter which classification system is used. The two classifications utilized are the International Organization for Standardization (ISO) classifications and the American Society of Mechanical Engineers (ASME)/engineering classifications. The former groups errors and their uncertainties by type, depending on whether or not there are data available to calculate the sample standard deviation for a particular error and its uncertainty. The latter classification groups errors and their uncertainties by their effect on the experiment or test. That is, the engineering classification groups errors and uncertainties by random and systematic effects, with subscripts used to denote whether there are data to calculate a standard deviation or not for a particular error or uncertainty source.
ISO Classifications
In the ISO system, errors and uncertainties are classified as Type A if there are data available to calculate a sample standard deviation and Type B if there are not [2]. In the latter case, the sample standard deviation might be obtained, for example, from engineering estimates, experience, or manufacturer’s specifications. The impact of multiple sources of error is estimated by root-sum-squaring their corresponding elemental uncertainties. The operating equations are as follows.

ISO Type A Errors and Uncertainties

For Type A, data are available for the calculation of the standard deviation:

$$ u_A = \left[\sum_{i=1}^{N_A}(\theta_i u_{A_i})^2\right]^{1/2} \tag{5-3} $$

where

$u_{A_i}$ = the standard deviation (based on data) of the average for uncertainty source i of Type A, each with its own degrees of freedom; $u_A$ is in units of the measurement. It is considered an S and an elemental uncertainty.
$N_A$ = the number of parameters with a Type A uncertainty
$\theta_i$ = the sensitivity of the test or measurement result, R, to the ith Type A uncertainty; $\theta_i$ is the partial derivative of the result with respect to each ith independent measurement
ISO Type B Errors and Uncertainties

For Type B (no data for standard deviation), uncertainties are calculated as follows:

$$ u_B = \left[\sum_{i=1}^{N_B}(\theta_i u_{B_i})^2\right]^{1/2} \tag{5-4} $$

where

$u_{B_i}$ = the standard deviation of the average (based on an estimate, not data) for uncertainty source i of Type B; $u_B$ is in units of the measurement. It is considered an S and an elemental uncertainty.
$N_B$ = the number of parameters with a Type B uncertainty
$\theta_i$ = the sensitivity of the measurement result to the ith Type B uncertainty
For these uncertainties, it is assumed that $u_{B_i}$ represents one standard deviation of the average for one uncertainty source.

ISO Combined Uncertainty

In computing a combined uncertainty, the uncertainties noted by Equations 5-3 and 5-4 are combined by root-sum-square. For the ISO model [2], this is calculated as:

$$ u_R = \left[(u_A)^2 + (u_B)^2\right]^{1/2} \tag{5-5} $$
The degrees of freedom of the u_{A_i} and the u_{B_i} are needed to compute the degrees of freedom of the combined uncertainty. It is calculated with the Welch–Satterthwaite approximation. The general formula for degrees of freedom [2] is:

\nu_R = \frac{\left[ \sum_{i=1}^{N_A} (\theta_i u_{A_i})^2 + \sum_{j=1}^{N_B} (\theta_j u_{B_j})^2 \right]^2}{\sum_{i=1}^{N_A} \frac{(\theta_i u_{A_i})^4}{\nu_{A_i}} + \sum_{j=1}^{N_B} \frac{(\theta_j u_{B_j})^4}{\nu_{B_j}}}    (5-6)
The degrees of freedom (df) calculated with Equation 5-6 are often a fraction. This may be truncated to the next lower whole number to be conservative.
ISO Expanded Uncertainty
The expanded, 95% confidence uncertainty is then obtained with Equation 5-7:

U_{R,ISO} = K u_{R,ISO}    (5-7)

where usually K = t_{95}, Student's t for ν_R degrees of freedom, as shown in Table 5-1.
Note that alternative confidences are permissible; 99%, 99.7%, or any other confidence is obtained by choosing the appropriate Student's t. However, the ASME recommends 95% confidence for uncertainty analysis [1].
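Equations 5-3 through 5-7 chain together naturally in a short script. The following Python sketch is illustrative only (the elemental values and function names are hypothetical, not from this chapter): it root-sum-squares sensitivity-weighted Type A and Type B elemental uncertainties, estimates the Welch–Satterthwaite degrees of freedom, and expands to 95% confidence with Student's t.

```python
import math
from scipy import stats

def combined_uncertainty(elems):
    """Root-sum-square of sensitivity-weighted elemental uncertainties.
    elems: list of (theta, u, dof) tuples; dof may be math.inf for Type B."""
    return math.sqrt(sum((theta * u) ** 2 for theta, u, _ in elems))

def welch_satterthwaite(elems):
    """Effective degrees of freedom in the Equation 5-6 form."""
    num = sum((theta * u) ** 2 for theta, u, _ in elems) ** 2
    den = sum((theta * u) ** 4 / dof for theta, u, dof in elems if math.isfinite(dof))
    return num / den if den else math.inf

# Hypothetical elemental uncertainties: (sensitivity, 1-sigma value, dof)
type_a = [(1.0, 0.15, 9), (1.0, 0.10, 14)]   # data-based (Type A)
type_b = [(1.0, 0.05, math.inf)]             # estimate-based (Type B)

u_r = combined_uncertainty(type_a + type_b)  # Equation 5-5
dof = welch_satterthwaite(type_a + type_b)   # Equation 5-6
# Truncate fractional df to the next lower whole number, as the text suggests
t95 = 2.0 if math.isinf(dof) else stats.t.ppf(0.975, math.floor(dof))
print(f"u_R = {u_r:.3f}, dof = {dof:.1f}, U_95 = {t95 * u_r:.3f}")  # Equation 5-7
```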
In all the above, the errors were assumed to be independent. Independent sources of error are those that have no relationship to each other. That is, an error in a measurement from one source cannot be used to predict the magnitude or direction of an error from the other independent error source. Nonindependent error sources are related. That is, if it were possible to know the error in a measurement from one source, one could calculate or predict an error magnitude and direction from the other nonindependent error source. These are sometimes called dependent error sources. Their degree of dependence may be estimated with the linear correlation coefficient. If they are nonindependent, whether Type A or Type B, Equation 5-7 becomes [3]:
U_{R,ISO} = t_{95} \left[ \sum_{T=A}^{B} \sum_{i=1}^{N_{i,T}} (\theta_i u_{i,T})^2 + \sum_{T=A}^{B} \sum_{i=1}^{N_{i,T}} \sum_{j=1}^{N_{j,T}} \theta_i \theta_j u_{(i,T),(j,T)} (1 - \delta_{i,j}) \right]^{1/2}    (5-8)

where
u_{i,T} = the ith elemental uncertainty of Type T (can be Type A or B)
U_{R,ISO} = the expanded uncertainty of the measurement or test result
θ_i = the sensitivity of the test or measurement result to the ith Type T uncertainty
θ_j = the sensitivity of the test or measurement result to the jth Type T uncertainty
u_{(i,T),(j,T)} = the covariance of u_{i,T} on u_{j,T}, so that:

u_{(i,T),(j,T)} = \sum_{l=1}^{K} u_{l,(i,T)} u_{l,(j,T)}    (5-9)

where
l = an index or counter for common uncertainty sources
K = the number of common source pairs of uncertainties
δ_{i,j} = the Kronecker delta; δ_{i,j} = 1 if i = j, and δ_{i,j} = 0 if not
T = an index or counter for the ISO uncertainty Type, A or B
N_{i,T} = the number of error sources for Types A and B
Equation 5-9 equals the sum of the products of the elemental systematic standard uncertainties that arise from a common source (l). This ISO classification equation will yield the same expanded uncertainty as the engineering classification, but the ISO classification does not provide insight into how to improve an experiment’s or test’s uncertainty. That is, it does not indicate whether to take more data because the random standard uncertainties are too high or calibrate better because the systematic standard uncertainties are too large. The engineering classification now presented is therefore the preferred approach.
ASME/Engineering Classifications
The ASME/engineering classification recognizes that experiments and tests have two major types of errors that affect results and whose limits are estimated with uncertainties at some chosen confidence. These error types may be grouped as random and systematic. Their corresponding limit estimators are the random standard uncertainties and systematic standard uncertainties, respectively.
ASME/Engineering Random Standard Uncertainty
The general expression for random standard uncertainty is the 1S standard deviation of the average [4]:

s_{\bar{X},R} = \left[ \sum_{T=A}^{B} \sum_{i=1}^{N_{i,T}} \left( \theta_i s_{\bar{X}_{i,T}} \right)^2 \right]^{1/2}, \qquad s_{\bar{X}_{i,T}} = s_{X_{i,T}} / \sqrt{M_{i,T}}    (5-10)

where
s_{X_{i,T}} = the sample standard deviation of the ith random error source of Type T
s_{\bar{X}_{i,T}} = the random standard uncertainty (standard deviation of the average) of the ith parameter random error source of Type T
s_{\bar{X},R} = the random standard uncertainty of the measurement or test result
N_{i,T} = the total number of random standard uncertainties, Types A and B combined
M_{i,T} = the number of data points averaged for the ith error source, Type A or B
θ_i = the sensitivity of the test or measurement result to the ith random standard uncertainty
T = an index or counter for the ISO uncertainty Type, A or B
Note that s_{\bar{X},R} is in units of the test or measurement result because of the use of the sensitivities, θ_i. Here, the elemental random standard uncertainties have been root-sum-squared with due consideration for their sensitivities, or influence coefficients. Since these are all random standard uncertainties, there is, by definition, no correlation in their corresponding error data, so these can always be treated as independent uncertainty sources.
(Note: The term standard is inserted to provide harmony with ISO terminology and to indicate that the uncertainties are standard deviations of the average.)
ASME/Engineering Systematic Standard Uncertainty
The systematic standard uncertainty of the result, b_R, is the root-sum-square of the elemental systematic standard uncertainties with due consideration for those that are correlated [4]. The general equation is:

b_R = \left[ \sum_{T=A}^{B} \sum_{i=1}^{N_T} (\theta_i b_{i,T})^2 + \sum_{T=A}^{B} \sum_{i=1}^{N_T} \sum_{j=1}^{N_T} \theta_i \theta_j b_{(i,T),(j,T)} (1 - \delta_{i,j}) \right]^{1/2}    (5-11)

where
b_{i,T} = the ith parameter elemental systematic standard uncertainty of Type T
b_R = the systematic standard uncertainty of the measurement or test result
N_T = the total number of systematic standard uncertainties
θ_i = the sensitivity of the test or measurement result to the ith systematic standard uncertainty
θ_j = the sensitivity of the test or measurement result to the jth systematic standard uncertainty
b_{(i,T),(j,T)} = the covariance of b_i on b_j:

b_{(i,T),(j,T)} = \sum_{l=1}^{K} b_{l,(i,T)} b_{l,(j,T)}    (5-12)

where
l = an index or counter for common uncertainty sources
δ_{i,j} = the Kronecker delta; δ_{i,j} = 1 if i = j, and δ_{i,j} = 0 if not
T = an index or counter for the ISO uncertainty Type, A or B
Equation 5-12 equals the sum of the products of the elemental systematic standard uncertainties that arise from a common source (l). Here, each b_{i,T} and b_{j,T} is estimated as 1S for an assumed normal distribution of errors at 95% confidence with infinite degrees of freedom.
ASME/Engineering Combined Uncertainty
The random standard uncertainty, Equation 5-10, and the systematic standard uncertainty, Equation 5-11, must be combined to obtain a combined uncertainty, Equation 5-13:

u_R = \left[ (b_R)^2 + \left( s_{\bar{X},R} \right)^2 \right]^{1/2}    (5-13)
ASME/Engineering Expanded Uncertainty
The expanded 95% confidence uncertainty may then be calculated with Equation 5-14:

U_{R,ENG} = t_{95} u_R    (5-14)

Note that b_R is in units of the test or measurement result, as is s_{\bar{X},R}.
The degrees of freedom are needed for the engineering-system combined and expanded uncertainties in order to determine Student's t. This is accomplished with the Welch–Satterthwaite approximation, the general form of which is Equation 5-15:

\nu_R = \frac{\left[ \left( s_{\bar{X},R} \right)^2 + (b_R)^2 \right]^2}{\sum_{T=A}^{B} \sum_{i=1}^{N} \frac{\left( \theta_i s_{\bar{X}_{i,T}} \right)^4}{\nu_{i,T}} + \sum_{T=A}^{B} \sum_{i=1}^{M} \frac{(\theta_i b_{i,T})^4}{\nu_{i,T}}}    (5-15)

where
N = the number of random standard uncertainties of Type T
M = the number of systematic standard uncertainties of Type T
ν_{i,T} = the degrees of freedom for the ith uncertainty of Type T; ν_{i,T} is taken as infinity for all systematic standard uncertainties
t = Student's t associated with the degrees of freedom (df)
High Degrees of Freedom Approximation
It is often assumed that the degrees of freedom are 30 or higher. In these cases, the equations for uncertainty simplify further by setting t_95 equal to 2.000 [1]. This approach is recommended for a first-time user of uncertainty analysis procedures, as it is a fast way to reach an approximation of the measurement uncertainty.
Calculation Example
In the following calculation example, all the uncertainties are independent and are in the
units of the test result: temperature. It is a simple example that illustrates the combination of measurement uncertainties in their most basic case. More detailed examples are given in many of the references cited. Their review may be needed to assure a more comprehensive understanding of uncertainty analysis. It has been shown [5] that there is no difference in the uncertainties calculated with the different models. The data from Table 5-2 will be used to calculate measurement uncertainty with these two models. These data are all in temperature units and thus the influence coefficients, or sensitivities, are all unity.
Note the use of subscripts "A" and "B" to denote where data do or do not exist to calculate a standard deviation. Note also that in this example, all errors (and therefore uncertainties) are independent, and all degrees of freedom for the systematic standard uncertainties are infinity except for the reference junction, whose degrees of freedom are 12.
Each uncertainty model will now be used to derive a measurement uncertainty. For the U_{ISO} model, Equations 5-3 and 5-4 yield:

u_A = \left[ \sum (u_{A_i})^2 \right]^{1/2} = 0.21    (5-16)

u_B = \left[ \sum (u_{B_i})^2 \right]^{1/2} = 0.058    (5-17)

Thus:

u_{R,ISO} = \left[ (0.21)^2 + (0.058)^2 \right]^{1/2} = 0.218    (5-18)

Here, remember that the 0.21 is the root-sum-square of the 1S Type A uncertainties in Table 5-2, and 0.058 is the root-sum-square of the 1S Type B uncertainties. Also note that in most cases, the Type B uncertainties have infinite degrees of freedom and represent an equivalent 1S_X̄. If K is taken as Student's t_95, the degrees of freedom must first be calculated. All the systematic components of Type B have infinite degrees of freedom except for the 0.032, which has 12 degrees of freedom. Also, all the systematic standard uncertainties, (b), in Table 5-2 represent an equivalent 1S_X̄. All Type A uncertainties in Table 5-2, whether systematic or random, have degrees of freedom as noted in the table. Applying Equation 5-6 to the data in Table 5-2, the degrees of freedom for U_{ISO} are then:

\nu_R \approx 22    (5-19)

t_95 is therefore 2.074. U_{R,ISO}, the expanded uncertainty, is determined with Equation 5-7:

U_{R,ISO} = (2.074)(0.218) \approx 0.45    (5-20)

For the engineering-system model, U_{R,ENG}, one obtains the random standard uncertainty with Equation 5-10:
s_{\bar{X},R} = \left[ \sum \left( s_{\bar{X}_i} \right)^2 \right]^{1/2}    (5-21)

The systematic standard uncertainty is obtained with Equation 5-11, greatly simplified because there are no correlated errors or uncertainties in this example. Equation 5-11 then becomes:

b_R = \left[ \sum (b_i)^2 \right]^{1/2}    (5-22)

The combined uncertainty is then computed with Equation 5-13:

u_{R,ENG} = \left[ (b_R)^2 + \left( s_{\bar{X},R} \right)^2 \right]^{1/2} = 0.218    (5-23)

U_{R,ENG}, the expanded uncertainty, is then obtained with Equation 5-14:

U_{R,ENG} = t_{95} u_{R,ENG}    (5-24)

The degrees of freedom must be calculated just as in Equation 5-19. The degrees of freedom are therefore 22, and t_95 equals 2.07. U_{R,ENG} is then:

U_{R,ENG} = (2.07)(0.218) \approx 0.45    (5-25)

This is identical to U_{R,ISO}, Equation 5-20, as predicted.
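Using only the values quoted above (0.21 for the Type A root-sum-square, 0.058 for the Type B root-sum-square, and 22 effective degrees of freedom), a few lines of Python reproduce the expanded uncertainty. This is a check of the arithmetic, not a general-purpose tool:

```python
from scipy import stats

u_type_a = 0.21   # RSS of the 1S Type A uncertainties quoted in the text
u_type_b = 0.058  # RSS of the 1S Type B uncertainties quoted in the text
dof = 22          # Welch-Satterthwaite result quoted in the text

u_combined = (u_type_a**2 + u_type_b**2) ** 0.5  # Equation 5-5 / 5-13
t95 = stats.t.ppf(0.975, dof)                    # two-sided 95% Student's t
print(f"u_R  = {u_combined:.3f}")                # ~0.218
print(f"U_95 = {t95 * u_combined:.3f}")          # ~0.452, either model
```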
Summary
Although these formulae for uncertainty calculations will not handle every conceivable situation, they will provide, for most experimenters, a useful estimate of test or measurement uncertainty. For more detailed treatment or specific applications of these principles, consult the references and the recommended "Further Information" section at the end of this chapter.
Definitions
Accuracy – The antithesis of uncertainty. An expression of the maximum possible limit of error at a defined confidence.
Combined uncertainty – The root-sum-square combination of either the Type A and Type B uncertainties for the ISO error classifications or the random and systematic standard uncertainties for the engineering error classifications.
Confidence – A statistical expression of percent likelihood.
Correlation – The relationship between two data sets. It is not necessarily evidence of cause and effect.
Degrees of freedom – The amount of room left for error. It may also be expressed as the number of independent opportunities for error contributions to the composite error.
Error – [Error] = [Measured] – [True]. It is the difference between the measured value and the true value.
Expanded uncertainty – The 95% confidence interval uncertainty. It is the product of the combined uncertainty and the appropriate Student's t.
Influence coefficient – See sensitivity.
Measurement uncertainty – The maximum possible error, at a specified confidence, that may reasonably occur. Errors larger than the measurement uncertainty should rarely occur.
Propagation of uncertainty – An analytical technique for evaluating the impact of an error source (and its uncertainty) on the test result. It employs the use of influence coefficients.
Random error – An error that causes scatter in the test result.
Random standard uncertainty – An estimate of the limits of random error, usually one standard deviation of the average.
Sensitivity – An expression of the influence an error source has on a test or measured result. It is the ratio of the change in the result to an incremental change in an input variable or parameter measured.
Standard deviation of the average or mean – The standard deviation of the data divided by the square root of the number of measurements in the average.
Systematic error – An error that is constant for the duration of a test or measurement.
Systematic standard uncertainty – An estimate of the limits of systematic error, usually taken as an equivalent single standard deviation of an average.
True value – The desired result of an experimental measurement.
Welch–Satterthwaite – The approximation method for determining the number of degrees of freedom for a combined uncertainty.
References
1. ANSI/ASME PTC 19.1-2005. Instruments and Apparatus, Part 1, Test Uncertainty. American National Standards Institute/American Society of Mechanical Engineers, 1985: 5.
2. ISO. Guide to the Expression of Uncertainty in Measurement. Geneva, Switzerland: ISO (International Organization for Standardization), 1993: 10–11.
3. Brown, K. K., H. W. Coleman, W. G. Steele, and R. P. Taylor. "Evaluation of Correlated Bias Approximations in Experimental Uncertainty Analysis." In Proceedings of the 32nd Aerospace Sciences Meeting & Exhibit. AIAA paper no. 94-0772. Reno, NV, January 10–13, 1994.
4. Dieck, R. H. Measurement Uncertainty, Methods and Applications. 4th ed. Research Triangle Park, NC: ISA (International Society of Automation), 2006: 45.
5. Strike, W. T., III, and R. H. Dieck. "Rocket Impulse Uncertainty; An Uncertainty Model Comparison." In Proceedings of the 41st International Instrumentation Symposium, Denver, CO, May 1995. Research Triangle Park, NC: ISA (International Society of Automation).
Further Information
Abernethy, R. B., et al. Handbook-Gas Turbine Measurement Uncertainty. United States Air Force Arnold Engineering Development Center (AEDC), 1973.
Abernethy, R. B., and B. Ringhiser. "The History and Statistical Development of the New ASME-SAE-AIAA-ISO Measurement Uncertainty Methodology." In Proceedings of the AIAA/SAE/ASME 21st Joint Propulsion Conference. Monterey, CA, July 8–10, 1985.
ICRPG Handbook for Estimating the Uncertainty in Measurements Made with Liquid Propellant Rocket Engine Systems. Chemical Propulsion Information Agency, no. 180, 30 April 1969.
Steele, W. G., R. A. Ferguson, and R. P. Taylor. "Comparison of ANSI/ASME and ISO Models for Calculation of Uncertainty." In Proceedings of the 40th International Instrumentation Symposium. Paper no. 94-1014. Research Triangle Park, NC: ISA (International Society of Automation), 1994: 410–438.
About the Author
Ronald H. Dieck is an ISA Fellow and president of Ron Dieck Associates, Inc., an engineering consulting firm in Palm Beach Gardens, Florida. He has more than 35 years of experience in measurement uncertainty methods and applications for flow, temperature, pressure, gas analysis, and metrology, and in the testing of instrumentation, thermocouples, air pollution, and gas analysis. Dieck is a former president of ISA (1999) and of the Asian Pacific Federation of Instrumentation and Control Societies (2002). He has served as chair of ASME PTC 19.1 on Test Uncertainty for more than 20 years. From 1965 to 2000, Dieck worked at Pratt & Whitney, a world leader in the design, manufacture, and service of aircraft engines and auxiliary power units. He earned a BS in physics and chemistry from Houghton College in Houghton, New York, and a master's in physics from Trinity College, Hartford, Connecticut. Dieck can be contacted at [email protected].
6
Process Transmitters
By Donald R. Gillum
Introduction
With the emphasis on improved control quality and advanced control systems, the significance and importance of measurement is often overlooked. In early industrial facilities, it was soon realized that many variables needed to be measured. The first measuring devices were simple pointer displays located in the processing area, a pressure gauge for example. When the observation of a variable needed to be remote from the actual point of measurement, hydraulic impulse lines were filled with a fluid and connected to a readout device mounted to a panel for local indication of the measured value. The need to transmit measurement signals over greater distances became apparent as the size and complexity of process units increased and control moved from the process area to a centrally located control room. Transmitters were developed to isolate the control room from the process area and process material. In general terms, a transmitter is a device that is connected to the process and generates a transmitted signal proportional to the measured value. The output signal is generally 3–15 psi for pneumatic transmitters and 4–20 mA for electronic transmitters. As it took many years for these standard values to be adopted, other scaled output values may be encountered and converted to these standard units. The input to the transmitter represents the value of the process variable to be measured and can be nearly any range of values; examples are 0–100 psi, 0–100 in of water, 50–500°F, and 10–100 in of level, or 0–100 kPa, 0–10 mm Hg, 40–120°C, and 5–50 cm of level. The actual input range, determined by the process requirements, is established during initial setup and calibration of the device. Although transmitters have been referred to as transducers, this term does not define the entire function of a transmitter, which usually contains both an input and an output transducer. A transducer is a device that converts one form of energy into another form that is generally more useful for a particular application.
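The proportionality between the calibrated input range and the transmitted signal is simple linear scaling. A minimal sketch in Python, using the 50–500°F example range from the paragraph above (the function name is illustrative):

```python
def to_milliamps(value, lo, hi):
    """Linear 4-20 mA scaling: lo maps to 4 mA, hi maps to 20 mA."""
    fraction = (value - lo) / (hi - lo)
    return 4.0 + 16.0 * fraction

# Calibrated range of 50-500 degF, as in the examples above
print(to_milliamps(275.0, 50.0, 500.0))  # mid-scale -> 12.0 mA
print(to_milliamps(50.0, 50.0, 500.0))   # zero of range -> 4.0 mA
```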
Pressure and Differential Pressure Transmitters
The most common types of transmitters used in the processing industries measure pressure and differential pressure (d/p). These types will be discussed in greater detail in the presentation of related measurement applications. The input transducer for most process pressure and d/p transmitters is a pressure element, which responds to an applied pressure and generates a proportional motion, movement, or force. Pressure is defined as force per unit area, which can be expressed as P = F/A, where P is the pressure to be measured, F is the force, and A is the area over which the force is applied. By rearranging the expression, F = PA. So, the force produced by a pressure element is a function of the applied pressure acting over the area of the pressure element to which the force is applied. Pressure elements represent a broad classification of transducers in pressure instruments. The deflection of the free end of a pressure element, the input transducer, is applied to the secondary or output transducer to generate a pneumatic or electronic signal, which is the corresponding output of the transmitter. A classification of pressure elements is called Bourdon elements and includes:
• "C" tube
• Spiral
• Helical
• Bellows
• Diaphragm
• Capsule
While most of these transducers can be used in pressure gauges or transmitters, the "C" tube is predominant in pressure gauges. The diaphragm or capsule is used in d/p transmitters because of its ability to respond to the low pressures in such applications. Other types of Bourdon elements are used in most pressure transmitters. A flapper-nozzle arrangement or a pilot-valve assembly is used as the output transducer in most pneumatic transmitters. A variety of output transducers have been used for electronic transmitters. The list includes:
• Potentiometers or other resistive devices – This measurement system results in a change of resistance in the output transducer, which produces a change in current or voltage in a bridge circuit or other type of signal-conditioning system.
• Linear variable differential transformer (LVDT) – This system changes the electrical energy generated in the secondary winding of a transformer as a pressure measurement changes the position of a movable core between the primary and secondary windings. This device can detect a very small deflection from the input transducer.
• Variable capacitance device – This device generates a change in capacitance as the measurement changes the relative position of capacitance plates. A capacitance-detecting circuit converts the resulting differential capacitance to a change in output current.
• Electrical strain gauge – This device operates much like the potentiometric or variable-resistance circuit mentioned above. A deformation of a pressure element resulting from a change in measurement causes a change in tension or compression of an electrical strain gauge. This results in a change in electrical resistance, which, through a signal-conditioning circuit, produces a corresponding change in current.
Other types of secondary transducers for electrical pressure and d/p transmitters include:
• Resonant frequency
• Quartz resonant frequency
• Silicon resonant sensors
• Variable conductors
• Variable reluctance
• Piezoresistive sensors
These transducers are not very prominent. The reader is directed to the listed reference material for further clarification.
Level Measurement
Level measurement is defined as the determination of the position of an existing interface between two media, which are usually fluids but can be solids or a combination of a fluid and a solid. Many technologies are available to measure this interface. They include:
• Visual
• Hydraulic head
• Displacer
• Capacitance
• Conductance
• Sonic and ultrasonic
• Weight
• Load cells
• Radar
• Fiber optics
• Magnetostrictive
• Nuclear
• Thermal
• Laser
• Vibrating paddle
• Hydrostatic tank gauging
This section will discuss the most common methods in present use.
Automatic Tank Gauges
An automatic tank gauge (ATG) is defined in the American Petroleum Institute (API) Manual of Petroleum Measurement Standards as "an instrument which automatically measures and displays liquid level or ullages in one or more tanks, either continuously, periodically or on demand." From this description, an ATG is a level-measuring system that produces a measurement from which the volume and/or weight of liquid in a vessel can be calculated. API Standard 2545 (1965), Methods of Gauging Petroleum and Petroleum Products, described float-actuated or float-tape ATGs, power-operated or servo-operated ATGs, and electronic surface-detecting level instruments. These definitions and standards imply that ATGs encompass nearly all level-measurement technologies, including hydrostatic tank gauging. Some ATG devices are float-tape types, which rival visual level-measurement techniques in their simplicity and dependability. These devices operate by float movement with a change in level. The movement is then used to convey a level measurement.
Many methods have been used to indicate level from a float position, the most common being a float and cable arrangement. The float is connected to a pulley by a chain or a flexible cable, and the rotating member of the pulley is in turn connected to an indicating device with measurement graduations. When the float moves upward, the counterweight keeps the cable tight and the indicator moves along a circular scale. When chains are used to connect the float to the pulley, a sprocket on the pulley mates with the chain links. When a flat metallic tape is used, holes in the tape mate with metal studs on a rotating drum. Perhaps the simplest type of readout device commonly used with float systems is a weight connected to the float with a cable. As the float moves, the weight also moves by means of a pulley arrangement. The weight, which moves along a board with calibrated graduations, will be at the extreme bottom position when the tank is full and at the top when the tank is empty. This type is generally used for closed tanks at atmospheric pressure.
Variable-Displacement Measuring Devices
When an object is heavier than an equal volume of the fluid into which it is submerged, full immersion results and the object never floats. Although the object (displacer) never floats on the liquid surface, it does assume a relative position in the liquid, and as the level moves up and down along the length of the displacer, the displacer undergoes a change in apparent weight caused by the buoyancy of the liquid. Buoyancy is explained by Archimedes' principle, which states that the resultant pressure of a fluid on a body immersed in it acts vertically upward through the center of gravity of the displaced fluid and is equal to the weight of the fluid displaced. The upward pressure acting on the area of the displacer creates the force called buoyancy. In float-actuated devices, the buoyancy is of sufficient magnitude to cause the float to be supported on the surface of the liquid. In displacement level systems, however, the immersed body or displacer is supported by arms or springs that allow a small amount of vertical movement or displacement with changes in the buoyancy forces caused by level changes. This buoyancy force can be measured to reflect the level variations.
Displacers Used for Interface Measurement
Recall that level measurement is the determination of the position of an interface between two fluids or between a fluid and a solid. It can be clearly seen that displacers for level measurement operate in accordance with this principle. The previous paragraph concerning displacer operation considered a displacer suspended in two fluids, with the
displacer weight being a function of the interface position. The magnitude of displacer travel is described as being dependent on the interface change and on the difference in specific gravity between the upper and lower fluids. When both fluids are liquids, the displacer is always immersed in liquid. In displacement level transmission, a signal is generated proportional to the displacer position.
Hydraulic Head Level Measurement
Many level-measurement techniques are based on the principle of hydraulic head measurement. From this measurement, a level value can be inferred. Such level-measuring devices are used primarily in the water and wastewater, oil, chemical, and petrochemical industries, and to a lesser extent in the refining, pulp and paper, and power industries.
Principle of Operation
The weight of a 1 ft³ container of water is 62.427 lb, and this force is exerted over the surface of the bottom of the container. The area of this surface is 144 in²; the pressure exerted is

P = F/A    (6-1)
where P is pressure in pounds per square inch, F is a force of 62.427 lb, and A is an area of 1 ft² = 144 in². Thus P = 62.427 lb/144 in² = 0.433 psi for each foot of water column. This pressure is caused by the weight of a 12-in column of liquid pushing downward on a 1-in² surface of the container. The weight of 1 in³ of water, which is the pressure on 1 in² of area caused by a 1-in column of water, is 0.036 lb. In general:

P = (0.433 psi/ft)(H, in feet) = (0.036 psi/in)(H, in inches)    (6-2)
By the reasoning expressed above, the relationship between the vertical height of a column of water (expressed as H in feet) and the pressure exerted on the supporting surface is established. This relationship is important not only in the measurement of pressure but also in the measurement of liquid level. Extending the discussion one step further, the relationship between level and pressure can be expressed in feet of length and pounds per square inch of pressure:
1 psi = 2.31 ft wc = 27.7 in wc
(6-3)
(The term wc, which stands for water column, is usually omitted as it is understood in the discussion of hydraulic pressure measurement.) It is apparent that the height of a column of liquid can be determined by measuring the pressure exerted by that liquid.
Open-Tank Head Level Measurement
Figure 6-1 illustrates an application where the level value is inferred from a pressure measurement. When the level is at the same elevation point as the measuring instrument, atmospheric pressure is applied to both sides of the transducer in the pressure transmitter, and the measurement is at the zero reference level. When the level is elevated in the tank, the force created by the hydrostatic head of the liquid is applied to the measurement side of the transducer, resulting in an increase in the instrument output. The instrument response caused by the head pressure is used to infer a level value. Assuming the fluid is water, the relationship between pressure and level is expressed by Equation 6-3. If the measured pressure is 1 psi, the level would be 2.31 ft, or 27.7 in. Changes in atmospheric pressure will not affect the measurement because these changes are applied to both sides of the pressure transducer.
When the specific gravity of a fluid is other than 1, Equation 6-2 must be corrected. This equation is based on the weight of 1 ft³ of water. If the fluid is lighter, the pressure exerted by a given column of liquid is less; the pressure will be greater for heavier liquids. Correction for specific gravity is expressed by Equation 6-4:

P = 0.433(G)(H)    (6-4)
where G is specific gravity and H is the vertical displacement of a column in feet. The relationship expressed in Equation 6-4 is used to construct the scale graduations on the gauge. For example, instead of reading in pounds per square inch, the movement on the gauge corresponding to 1 psi pressure would express graduations of feet and tenths of feet. Many head-level transmitters are calibrated in inches of water. The receiver instrument also is calibrated in inches of water or linear divisions of percent.
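Equation 6-4 rearranges directly for level. A minimal Python sketch, with illustrative inputs:

```python
def level_feet(pressure_psi, specific_gravity=1.0):
    """Infer level from hydrostatic head: P = 0.433 * G * H (Equation 6-4)."""
    return pressure_psi / (0.433 * specific_gravity)

print(level_feet(1.0))        # water: ~2.31 ft (27.7 in)
print(level_feet(1.0, 0.85))  # a lighter liquid (SG 0.85): ~2.72 ft
```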
Air Purge or Bubble System
The system known by various names, such as air purge, air bubble, or dip tube, is an adaptation of head-level measurement. With the supply air blocked, the liquid level in the tube will be equal to that in the tank. When the air pressure from the regulator is increased until the liquid in the tube is displaced by air, the air pressure in the tube is equal to that required to displace the liquid and equal to the hydraulic head of the liquid in the tube. The pressure set on the regulator must be great enough to displace the liquid from the tube at maximum pressure, which will coincide with maximum level. This will be indicated by a continuous flow, which is evidenced by the formation of bubbles rising to the level of the liquid in the tank. As it may not be convenient to visually inspect the tank for the presence of bubbles, an airflow indicator is usually installed in the air line running into the tank; a rotameter is generally used for this purpose. The importance of maintaining a flow through the tube lies in the fact that the liquid in the tube must be displaced by air, and the back-pressure on the air line provides the measurement, which is related to level. The amount of airflow through the dip tube is not critical but should be fairly constant and not too great. Where the airflow is great enough to create a back-pressure in the tube caused by the flow restriction, this back-pressure would add to the indicated head, resulting in a measurement error. For this reason, 3/8-in tubing or 1/4-in pipe should be used. An important advantage of the bubble system is that the measuring instrument can be mounted at any location or elevation with respect to the tank. This is advantageous for level-measuring applications where it would be inconvenient to mount the measuring instrument at the zero reference level, for example, level measurement in underground tanks and water wells. The zero reference level is established by the relative position of the open end of the tube with respect to the tank. This is conveniently fixed by the length of the tube, which can be adjusted for the
desired application. It must be emphasized that variations in back-pressure on the tube or static pressure in the tank cannot be tolerated. This method of level measurement is generally limited to open-tank applications but can be used in closed tank applications with special precautions listed below.
Measurement in Pressurized Vessels: Closed-Tank Applications
The open-tank measurement applications that have been discussed are referenced to atmospheric pressure. That is, the pressures (usually atmospheric) on the surface of the liquid and on the reference side of the pressure element in the measuring instrument are equal. When atmospheric pressure changes, the change is by equal amounts on both the measuring and the reference sides of the measuring element. The resulting forces cancel, one opposing the other, and no change occurs in the measurement value.
Suppose, however, that the static pressure in the level vessel is different from atmospheric pressure. Such would be the case if the level were measured in a closed tank or vessel. Pressure variations within the vessel would be applied to the level surface and have an accumulated effect on the pressure instrument, thus affecting level measurement. For this reason, pressure variations must be compensated for in closed-tank applications. Instead of a pressure-sensing instrument, a differential-pressure instrument is used for head-type level measurements in closed tanks. Since a differential-pressure instrument responds only to the difference in pressure applied to the measuring ports, the static tank pressure on the liquid surface in a closed tank has no effect on the measuring signal. Variations in static tank pressure, therefore, do not cause an error in level measurement as would be the case when a pressure instrument is used.
Mounting Considerations: Zero Elevation and Suppression Unless dip tubes are used, the measuring instrument is generally mounted at the zero reference point on the tank. When another location point for the instrument is desired or necessary, the head pressure caused by liquid above or below the zero reference point must be discounted. When the high-pressure connection of the differential-pressure instrument is below the zero reference point on the tank, the head pressure caused by the elevation of the fluid from the zero point to the pressure tap will cause a measured response or instrument signal. This signal must be suppressed to make the output represent a zero-level value. The term zero suppression defines this operation. This refers to the correction taken or
the instrument adjustment required to compensate for an error caused by the mounting of the instrument to the process. With the level at the desired zero reference level, the instrument output or response is made to represent the zero-level value. In the case of transmitters, this would be 3 psi, 4 mA, 10 mA, or the appropriate signal level to represent the minimum process value. More commonly, however, the zero-suppression adjustment is a calibration procedure carried out in a calibration laboratory or shop, sometimes requiring a kit for the transmitter that consists of an additional zero bias spring. When using differential-pressure instruments in closed-tank level measurement for vapor service, quite often the vapor in the line connecting the low side of the instrument to the tank will condense to a liquid. This condensed liquid, sometimes called a wet leg, produces a hydrostatic head pressure on the low side of the instrument, which causes the differential-pressure instrument reading to be below zero. Compensation is required to eliminate the resulting error. This compensation or adjustment is called zero elevation.
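The suppression (or elevation) value itself is simply the hydrostatic head of the liquid column between the instrument tap and the zero reference level. A minimal sketch with hypothetical mounting numbers:

```python
def zero_suppression_inwc(elevation_in, specific_gravity=1.0):
    """Head (inches of water column) contributed by the liquid between the
    instrument tap and the zero reference; this bias is suppressed so that
    the output reads zero at the zero reference level."""
    return elevation_in * specific_gravity

# Hypothetical: transmitter mounted 24 in below the zero reference, SG = 0.9
print(zero_suppression_inwc(24.0, 0.9))  # 21.6 in wc to suppress
```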
To prevent condensation, fouling, or other external factors from affecting the wet leg, in most applications a sealed wet leg, as described below, is used to reduce the impact of such variables as condensation as a function of ambient temperature.
Repeaters Used in Closed-Tank Level Measurement
Some level applications require special instrument-mounting considerations. For example, sometimes the process liquid must be prevented from forming a wet leg. Some liquids are nonviscous at process temperature but become very thick or may even solidify at the ambient temperature of the wet leg. For such applications, a pressure repeater can be mounted in the vapor space above the liquid in the process vessel, and a liquid-level transmitter (e.g., a differential-pressure instrument) can be mounted below in the liquid section at the zero reference level. The pressure in the vapor section is duplicated by the repeater and transmitted to the low side of the level instrument. The complications of the outside wet leg are avoided, and static pressure in the tank will not affect the operation of the level transmitter. A sealed pressure system can also be used for this application. This system is similar to a liquid-filled thermal system. A flexible diaphragm is flanged to the process and is connected to the body of a differential-pressure cell by flexible capillary tubing. The system should be filled with a fluid that is noncompressible and that has a high boiling point, a low coefficient of thermal expansion, and a low viscosity. A silicone-based liquid is commonly used in sealed systems. For reliable operation of sealed pressure devices, the entire system must be evacuated and filled completely with the fill fluid. Any air pockets that allow contraction and
expansion of the filled material during operation can result in erroneous readings. Most on-site facilities are not equipped to carry out the filling process; a ruptured system usually requires component replacement.
Radar Measurement
The function of a microwave gauge can be described by dividing the gauge and its environment into five parts: the microwave electronic module, the antenna, the tank atmosphere, additional sensors (mainly temperature sensors), and a remote (or local) display unit. The display may include some further data processing, such as calculation of the mass. Normally, the transmitter is located at the top of the vessel, and the solid-state oscillator transmits an electromagnetic wave, at a selected carrier frequency and waveform, aimed downward at the surface of the process fluid in the vessel. The standard frequency is 10 GHz. The signal is radiated by a dish or horn-type antenna that can take various forms depending on the needs of a specific application. A portion of the wave is reflected back to the antenna, where it is collected and sent to the receiver, and a microprocessor determines the time of flight of the transmitted and reflected waveform. Knowing the speed of the waveform and the travel time, the distance from the transmitter to the process fluid surface can be calculated; the detector output is based on this distance. Non-contact radar detectors operate using pulsed radar waves or frequency-modulated continuous waves (FMCW). In pulsed-wave operation, short-duration radar pulses are transmitted, and the target distance is calculated from the transit time. The FMCW sensor sends out continuous frequency-modulated signals, usually in successive (linear) ramps. The frequency difference caused by the time delay between transmission and reception indicates the distance, from which the level is directly inferred. The low power of the beam permits safe installation in both metallic and nonmetallic vessels. Radar sensors can be used when the process material is flammable and when the composition or temperature of the material in the vapor space varies. Contact radar measuring devices send a pulse down a wire to the vapor-liquid interface, where a sudden change in the dielectric constant of the materials causes the signal to be partially reflected. The time of flight is measured, and the distance traversed by the signal is calculated. The non-reflected portion of the signal travels to the end of the probe and gives a signal for a zero reference point. Contact radar can be used for liquids and small granular bulk solids. In radar applications, the reflective properties of the process material affect the transmitted signal strength. Liquids have good reflective qualities, but solids usually do not. When heavy concentrations of dust particles or other such foreign materials are present, these materials will be measured instead of the
liquid.
Tank Atmosphere
The radar signal is reflected directly off the liquid surface to obtain an accurate level measurement. Any dust or mist particles present have no significant influence, as the diameters of such particles are much smaller than the 3-cm radar wavelength. For optical systems with shorter wavelengths, this is not the case. For comparison, when navigating with radar aboard ships, a substantial reduction of the possible measuring range is experienced, but even with a heavy tropical rain the range will be around 1 km, which is large compared to a tank.
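The pulsed time-of-flight arithmetic described above is a one-line calculation once the round-trip time is known. A minimal sketch, assuming a hypothetical mounting height and free-space propagation speed:

```python
C = 3.0e8  # propagation speed of the radar pulse in air, m/s (approximate)

def level_from_transit(round_trip_s, mount_height_m):
    """Distance to the surface is c*t/2; level is the gauge mounting height
    above the tank bottom minus that distance."""
    distance = C * round_trip_s / 2.0
    return mount_height_m - distance

# Hypothetical: 60 ns round trip, gauge mounted 12 m above the tank bottom
print(level_from_transit(60e-9, 12.0))  # 12 - 9 = 3 m of liquid
```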
There can be slight measurement errors for a few specific products in the vapor space of the tank, especially when the composition may vary between no vapor and fully saturated conditions. For these specific products, pressure and temperature measurement may be required for compensation. Such compensation is made by software incorporated in the tank intelligence system provided by the equipment manufacturer.
End-of-the-Probe Algorithm
An end-of-the-probe algorithm can be used in guided-wave radar when there is no reflection coming back from the product. This technique provides a downward-looking time-of-flight measurement that allows the guided-wave radar system to measure the distance from the probe mounting to the material level. An electromagnetic pulse is transmitted and guided down a metal cable or rod, which acts as a surface-wave transmission line. When the surface wave meets a discontinuity in the surrounding medium, such as a sudden change in dielectric constant, some of the signal is reflected back to the source, where it is detected and timed. The portion of the signal that is not reflected travels on and is reflected at the end of the probe. The radar level-gauging technique used on tanker ships for many years has been used in refineries and tank farms in recent years. Its high degree of immunity to virtually all environmental influences has resulted in high level-measurement accuracy; one example is the approval of radar gauges for 1/16-in accuracy. Nearly all level gauging in a petroleum storage environment can be done with radar level gauges adapted for that purpose. Although radar level technology is a relatively recent introduction in the process and manufacturing industries, it is gaining respect for its reliability and accuracy.
Ultrasonic/Time-of-Flight Measurement
While the time-of-flight principle of sonic and ultrasonic level-measurement systems is similar to radar, there are distinct differences. The primary difference is that the sound waves produced by ultrasonic units are mechanical and propagate by compression and expansion of a material medium. Since the transmission of sonic waves requires a medium, changes in the medium can affect the propagation; the resulting change in velocity will affect the level measurement. Other factors can also affect the transmitted or reflected signal, including dust, vapors, foam, mist, and turbulence. Radar waves do not require a medium for propagation and are inherently immune to the factors that confuse sonic-type devices.
Fluid Flow Measurement Technology
Flow transmitters are used to determine the amount of fluid flowing in a pipe, tube, or open stream. These flow values are normally expressed in volumetric or mass flow units. Flow rates are inferred from other measured values. When the velocity of a fluid through a pipe can be determined, the actual flow value can be derived, usually by calculation. For example, when the velocity can be expressed in ft/s through a cross-sectional area of the pipe measured in ft², the volumetric flow rate can be given in cubic feet per second (ft³/s). By knowing the relationships between volume and weight, the flow rate can be expressed in gallons per minute (gal/min), pounds per hour (lb/hr), or any desired units.
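The velocity-to-flow conversion above is direct; this short sketch also converts ft³/s to gal/min (1 ft³ ≈ 7.4805 gal). The function name and example values are illustrative:

```python
import math

def flow_gpm(velocity_fps, pipe_id_in):
    """Volumetric flow Q = A*V, converted from ft^3/s to gal/min."""
    area_ft2 = math.pi * (pipe_id_in / 12.0) ** 2 / 4.0
    q_cfs = area_ft2 * velocity_fps
    return q_cfs * 7.4805 * 60.0

print(flow_gpm(5.0, 4.0))  # 5 ft/s in a 4-in line -> ~196 gal/min
```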
Head Flow Measurement
Most individual flow meters either measure the fluid velocity directly or infer a velocity from other measurements. Head meters are described by Bernoulli's expression, which states that for incompressible fluids under steady flow conditions the product of the area (A) and velocity (V) is equal to the flow rate. In a pipe, the flow rate (Q) is equal at all points, so Q₁ = Q₂ = Q₃ = ... = Qₙ, where Q = (A)(V) and A₁V₁ = A₂V₂ = A₃V₃ = ... = AₙVₙ. This continuity relationship requires that the velocity of a fluid increase as the cross-sectional area of the pipe decreases. Furthermore, from these scientific relationships, a working equation is developed that shows:

Q = KA(ΔP/ρ)^{1/2}

where
ΔP = the pressure drop across the restriction
ρ = the density of the fluid
The constant (K) adjusts for dimensional units, nonideal fluid losses and behavior, discharge coefficients, pressure tap locations, various operating conditions, gas expansion factor, Reynolds number, and viscosity corrections, which are accounted for by empirical flow testing. Many types of restrictions or differential producers and tap locations (points on the pipe where the pressure is measured) are in use. Overall differential flow meter performance will vary with several conditions but is generally limited to about 2% under ideal conditions, with a turndown ratio of about 3–5:1. Head-type flow meter rangeability can be as great as 10:1, but care should be exercised in claiming a rangeability greater than about 6–8:1, as the accuracy may be affected. These performance characteristics preclude the use of head flow meters in several applications where flow measurement is more of a consideration than flow control. Their high precision, or repeatability, makes them suitable candidates for flow control.
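The square-root relationship between differential pressure and flow is what limits turndown: at 1% of full-scale differential, the flow is already 10% of full scale. A minimal sketch, with a hypothetical full-scale calibration point:

```python
def head_meter_flow(dp, dp_max, q_max):
    """Flow from differential pressure for a square-root (head) meter,
    referenced to the full-scale calibration point."""
    return q_max * (dp / dp_max) ** 0.5

# Hypothetical calibration: 100 in wc differential at 500 gpm
print(head_meter_flow(100.0, 100.0, 500.0))  # full differential -> 500 gpm
print(head_meter_flow(1.0, 100.0, 500.0))    # 1% dP -> 10% flow = 50 gpm
```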
Particular attention must be given to instrument mounting and connection to the process line. For liquids, the instrument is located below the process lines; for gas or vapor service, the instrument is located above the lines. This is to ensure that the instrument and connecting lines (lead-lines, as they are called) are liquid-full for liquid and vapor service and liquid-free for gas service, preventing liquid legs of unequal value from forming in the instrument and lead-lines. Because of the nonlinear relationship between transmitter output and flow rate, head-type flow meters are not used in ratio control, totalizing, or other applications where the transmitter output must represent the same flow value at every point in the measuring range. An important factor to consider in flow meter engineering is the velocity profile and Reynolds number (Re) of the flowing fluid. Re is the ratio of inertial forces to viscous forces of the fluid in the pipe, which is equal to ρ(V)(D)/μ, where
ρ = the density of the flowing fluid in pounds per cubic foot (lb/ft³)
V = the velocity of the fluid in feet per second
D = the pipe internal diameter in inches
μ = the fluid viscosity in centipoise
This equation reduces to the following working equations:
Re = (3160)(Q_gpm)(G_F)/[μ(D)] for liquids
Re = (379)(Q_acfm)(ρ)/[μ(D)] for gases
where the units are as given before, Q_acfm = the gas flow in actual cubic feet per minute, and G_F = the specific gravity of the liquid. The velocity profile should be such that the fluid is flowing at near-uniform velocity, as noted by a turbulent flow profile. This will be the case when Re is above about 6,000; in actual practice, Re should be significantly higher. For applications where Re is lower than about 3,000, the flow profile is laminar, and head-type flow meters should not be considered. Flow meter engineering for head flow meters consists of determining the relationship between maximum flow (Q_M), maximum differential (h_M) at maximum flow, and β, the ratio of the size of the differential producer to the pipe size (d/D). Once β is determined and the pipe size is known, d can be calculated. In general, Q_M and D are known, β is calculated, and then d is determined from the equation d = βD.
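The working equations above translate directly into code. The sketch below evaluates the liquid Reynolds number against the turbulence guideline in the text and sizes the bore from a chosen beta ratio; all input values are hypothetical:

```python
def reynolds_liquid(q_gpm, sg, visc_cp, pipe_id_in):
    """Liquid working equation: Re = 3160 * Q(gpm) * G / (mu * D)."""
    return 3160.0 * q_gpm * sg / (visc_cp * pipe_id_in)

def bore_diameter(beta, pipe_id_in):
    """Differential-producer bore from the beta ratio: d = beta * D."""
    return beta * pipe_id_in

re = reynolds_liquid(q_gpm=200.0, sg=0.9, visc_cp=1.2, pipe_id_in=4.0)
print(re, re > 6000)             # ~118,500 -> comfortably turbulent
print(bore_diameter(0.6, 4.0))   # 2.4-in bore for beta = 0.6
```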
Although head flow technology is well established for flow meter applications, the limited accuracy, low-Re limitations, limited turndown, limited rangeability, and relatively high permanent head loss (which may be excessive) can preclude the adoption of this technology. However, it is a well-established technology to use when the requirement is to control flow within about 5% of a given maximum value. In the oil, chemical, petrochemical, and water and wastewater industries, head meters are still predominant, but other flow-measuring technologies are expanding and replacing head meters in some new installations. Three such technologies are velocity-measuring flow instruments, mass-measuring flow instruments, and volumetric measurement by positive displacement meters. Velocity-measuring flow meters comprise a large group that includes:
• Magnetic
• Turbine
• Vortex shedding
• Ultrasonic
• Doppler
• Time-of-flight
Magnetic Flow Meters
Magnetic flow meters operate on the principle of Faraday's law, which states that the magnitude of the voltage induced in a conductive medium moving through a magnetic field at right angles to the field is directly proportional to the product of the strength of the magnetic flux density (B), the velocity of the medium (V), and the path length between the probes (L). These terms are expressed in the following formula: E = KBLV, where K is a constant based on the design of the meter and the other terms are as previously defined. To continue the discussion of meter operation, a magnetic coil around the flow tube establishes a magnetic field of constant strength through the tube. L is the distance between the pick-up electrodes on each side of the tube, and V, the only variable, is the velocity of the material to be measured through the tube. It can be seen from the above expression that a voltage is generated directly proportional to the velocity of the flowing fluid. Recall that for flow through a pipe, Q = A(V), where A is the cross-sectional area of the pipe at the point of velocity measurement. The application of magnetic flow meters is limited to liquids with a conductivity of at least about 1–5 micromhos (microsiemens). This limits the use of magnetic meters to conductive liquids or solutions in which the mixture is at least about 10% conductive liquid.
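Rearranging E = KBLV for velocity and combining it with Q = A(V) gives the basic magmeter computation. A minimal sketch in SI units; every constant here is hypothetical:

```python
import math

def magmeter_flow(e_volts, k, b_tesla, electrode_gap_m, pipe_id_m):
    """Velocity from Faraday's law, V = E / (K*B*L), then Q = A*V (SI units)."""
    velocity = e_volts / (k * b_tesla * electrode_gap_m)
    area = math.pi * pipe_id_m ** 2 / 4.0
    return area * velocity  # m^3/s

# Hypothetical meter: K = 1, B = 0.01 T, electrode gap = pipe ID = 0.1 m
print(magmeter_flow(1.0e-3, 1.0, 0.01, 0.1))  # doctest-style check below
```

(In the example, 1 mV across the electrodes corresponds to a velocity of 1 m/s and a flow of about 0.00785 m³/s; note the last line should pass all five arguments, `magmeter_flow(1.0e-3, 1.0, 0.01, 0.1, 0.1)`.)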
The velocity range of most magnetic flow meters is about 3–15 ft per second for liquids. The accuracy of magnetic flow meters can be as good as 0.5%–2% of rate with about a 10:1 turndown. When magnetic meters were first developed, they were four-wire devices: the field excitation voltage was supplied by a twisted pair of wires separate from the signal cable and, because of phase-shift considerations, the field voltage and signal-conditioning voltage were supplied from the same source. Now, however, magnetic meters are two-wire systems in which the supply and signal voltages are on the same cable. Magnetic meters present no obstruction to the fluid flow, have no Reynolds number constraints, and some can measure flow in either direction. Calibration and initial start-up can be performed with a compatible external handheld communicator. Earlier models were calibrated with a secondary calibrator, which replaced the magnetic pick-up coil signal with an input equal to that corresponding to a given flow rate.
Turbine Meters
Turbine meters consist of a rotating device, called a rotor, that is positioned in a flowing stream so that the rotational velocity of the rotor is proportional to the velocity of the flowing fluid. The rotor generates a voltage whose amplitude or frequency is proportional to the angular rotation and fluid velocity. A pulse signal proportional to the angular velocity of the rotor can also be generated. A signal-conditioning circuit converts the output of the turbine meter to a scaled signal proportional to flow rate. The accuracy of turbine meters can be as high as ±0.25% of rate for liquids and about ±0.5% of rate for gas service. They have good repeatability, as good as ±0.05% of rate. Some may not be suitable for continuous service and are more prominent in flow measurement than in flow control. In turbine meter operation, a low Re will result in a high pressure drop, and a high Re can result in excessive slippage. The flowing fluid should be free of suspended solids and of solids along the bottom of the pipe.
Ultrasonic Flow Meters
Ultrasonic flow meters are velocity-measuring devices that operate on the Doppler effect or on time-of-flight. Both use acoustic waves or vibration to detect flow through a pipe. Ultrasonic energy is coupled to the fluid in the pipe using transducers that can be either wetted or non-wetted, depending on the design. Doppler meters operate on the principle of the Doppler shift in the frequency of a sound wave caused by the velocity of the sound source. The Doppler meter has a transmitter that injects a sound wave of specific frequency into the flowing fluid. The sound wave is reflected to a receiver across the pipe, and the frequency shift between the injected waveform and the reflected waveform is a function of the velocity of the particle that reflected the injected waveform. The injected and reflected waveforms are "beat together," and a pulse is generated that is equal to the beat frequency and represents the velocity of the flowing fluid. Because Doppler meters require an object or substance in the flowing fluid to reflect the injected signal and form an echo, in cases where the fluid is extremely clean, particles such as bubbles can be introduced into the fluid to cause the echo to form. Care should be exercised to prevent contamination of the process fluid by the injected material. Time-of-flight ultrasonic flow transmitters measure the difference in travel time, over a given length of pipe, between pulses transmitted downstream in the fluid and upstream in the fluid. The transmitters and receivers are transponders that alternate between transmitter and receiver functions each cycle of operation. The difference in downstream and upstream transit time is a function of fluid velocity. Differential-frequency ultrasonic flow meters incorporate a transducer positioned so that
the ultrasonic waveform is beamed at an angle in the pipe. One transducer is located upstream of the other. The frequencies of the ultrasonic beam in the upstream and downstream directions are detected and used to calculate the fluid flow through the pipe. Ultrasonic flow meters can be used in nearly any pipe size above about 1/8 in and at flows as low as 0.1 gpm (0.38 L/min). Pipe thickness and material must be considered with clamp-on transducers to prevent attenuation of the signal, making certain that the signal strength does not fall to a point that would render the device inoperable. Ultrasonic flow meter accuracy can vary from about 0.5% to 10% of full scale, depending on the specific application, meter type, and characteristics. These flow meters are usually limited to liquid flow applications.
Vortex Meters As the fluid in a vortex flow meter passes an object of specific design in the pipe, called a bluff body, vortices form and shed alternately from each side of the body, a phenomenon called the von Kármán effect. The frequency of these vortices is measured by various means. The velocity of the fluid in the pipe can be determined from the following expression: f = (St)(V/shedder width) where
f = the frequency of the vortices
V = the velocity of the fluid in the pipe
shedder width = the physical dimensions and design of the bluff body
St = the Strouhal number, a dimensionless number determined by the manufacturer
While most vortex flow meters use shedders of nearly similar width, the specific design varies significantly from one manufacturer to another. Specific shapes include trapezoidal, rectangular, triangular, various T-shapes, and others. The frequency of the vortices varies directly with the fluid velocity and inversely with the pressure at the bluff body. Various frequency-sensing systems are designed to respond to the physical properties of the vortices. Meters using metal shedders vary in size from 1/2–12 in for liquid flow rates, with flow values ranging from 3–5,000 gpm. PVC vortex flow meter sizes vary from 1/4–2 in, with flow values ranging from 0.6–200 gpm. The accuracy of vortex meters can be as good, under ideal conditions, as ±0.5–1.0% for liquids and 1.5–2% for gases. They have a limited turndown of 7–8:1, and they do
have Re constraints when the combination of Re and St varies to the extent that operation becomes nonlinear at extreme values. This situation can exist when Re values drop as low as 3,000. Manufacturers can be consulted for low-Re and velocity constraints. Vortex meters are very popular; the technology is one of those replacing head meters. When high accuracy is required for flow measurement applications such as custody transfer, batching, and totalizing, the following technologies may be considered.
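Before turning to those technologies, here is a minimal sketch of the shedding-frequency expression above in code. The Strouhal number, shedder width, pipe diameter, and frequency are all illustrative assumptions, not manufacturer data.

    import math

    # Hedged sketch: velocity and volumetric flow from the vortex
    # shedding frequency, f = St * V / shedder width.
    St = 0.3                # Strouhal number (assumed)
    shedder_width_m = 0.02  # bluff body width, m (assumed)
    pipe_dia_m = 0.08       # pipe inside diameter, m (assumed)
    f_hz = 45.0             # measured shedding frequency, Hz (assumed)

    V = f_hz * shedder_width_m / St          # fluid velocity, m/s
    area = math.pi * pipe_dia_m ** 2 / 4.0   # pipe cross-section, m^2
    Q = V * area                             # volumetric flow, m^3/s
    print(f"V = {V:.2f} m/s, Q = {Q * 1000:.2f} L/s")  # -> 3.00 m/s, 15.08 L/s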
Coriolis Meters Coriolis meters are true mass-measuring devices that operate on the Coriolis acceleration of the flowing fluid, which is related to the fluid mass. The meter consists of a tube whose open ends are fixed where the fluid enters and leaves. The closed end is forced to vibrate, and a Coriolis force causes the tube to twist; it is the amount of this “twist” that is detected. The amount of twist is a function of the mass flow of the fluid through the tube. These meters have no significant pressure drop and no Re constraint. Temperature and density variations do not affect the accuracy: the mass-volume relationship changes, but the true mass is measured. The accuracy can be as good as 0.1–0.2% of rate.
Coriolis meters are usually applied to liquids, as the density of most gases is generally too low to operate the meter accurately. In cases where gas applications are possible, temperature and pressure compensation is very advantageous.
Positive Displacement Flow Meters Positive displacement (PD) flow meters operate on the principle of repeatedly filling and emptying a chamber of known volume and counting the number of fill-and-empty cycles. Accuracy on the order of 0.2–0.4% of rate can be realized. Some PD meters have no Re constraint but can have significant pressure drop, especially with highly viscous fluids. Low-viscosity fluids can cause significant error from slippage. PD meters are commonly used in custody transfer applications, such as fuel measurement at the pump, household water metering, and natural gas metering, but not as transmitters for process control.
Applications for Improved Accuracy In cases where the accuracy of a device is given as a percentage of the measured value, the accuracy is the same at every point on the measurement scale. However, when the
accuracy statement of a device is given as a percentage of full-scale value, that accuracy is achieved only when the measurement is at full scale. Therefore, measurement range may be sacrificed for improved accuracy. If the accuracy is given as ±1% of full scale, for example, that accuracy is achieved only when the measurement is at 100%. At a 50% measurement, the error can be ±2%, and so forth. It is therefore desirable to maintain the measurement at or near the full-scale value. This can be done by re-ranging the transmitter to keep the measured value near the 100% point, which historically was a laborious process. For this purpose, a wide-range measurement procedure was developed whereby two (seldom more) pipelines with transmitters of different measurement ranges can be switched to keep the measurement at the upper end of the scale. With digital transmitters, this re-ranging can easily be performed. Another term used to define the quality of a measurement system is turndown, the ratio of the maximum measurement to the minimum measurement that can be made with an acceptable degree of error. For head-type flow measurement systems, this can be as high as 10:1. Turndown is limited because the flow rate is proportional to the square root of the pressure drop across a restrictor or differential producer inserted in the flowing fluid. This means that if the error is ±1% at full scale (a good quality statement), the error would be ±5% at 25% of flow rate, resulting in a turndown of 4:1. Some flow transmitters are said to have a turndown of 100:1.
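As a minimal sketch of the percent-of-full-scale arithmetic above for a linear transmitter (head-type systems degrade faster because of the square-root relationship):

    # Hedged sketch: a +/-1% of full-scale spec re-expressed as percent
    # of the actual reading at several points on the scale.
    fs_error_pct = 1.0  # accuracy spec, percent of full scale

    for reading_pct in (100, 50, 25, 10):
        # The error fraction of the reading grows as the reading drops.
        err = fs_error_pct * 100.0 / reading_pct
        print(f"at {reading_pct:3d}% of scale: +/-{err:.1f}% of reading")
    # -> +/-1.0% at 100%, +/-2.0% at 50%, +/-4.0% at 25%, +/-10.0% at 10%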
Temperature Temperature is a measurement of the relative heat in a substance, or the energy associated with the activity of its molecules. Two arbitrarily established scales, Celsius (Centigrade) and Fahrenheit, and their associated absolute scales, Kelvin and Rankine, are used in temperature measurement. Many methods have been developed to measure temperature. Some of the most common include: • Filled thermal systems • Liquid-in-glass thermometers • Thermocouples • Resistance temperature detectors (RTDs) • Bimetallic strips • Thermistors
• Optical and other non-contact pyrometers This section deals with those predominantly used with transmitters in the processing industries. These devices can be classified as mechanical or electrical, as discussed below.
Filled Thermal Systems A large classification of mechanical temperature-measuring devices is filled thermal systems, which consist of sensors or bulbs filled with a selected fluid that is either liquid, gas, or vapor. The bulb is connected by capillary tubing to a pressure readout device for indication or for signal generation and transmission. Flapper-nozzle systems are most commonly used in pneumatic temperature transmitters based on filled thermal systems. Filled temperature systems are grouped into four classes denoted by the fill fluid:
Class I: Liquid
Class II: Vapor
Class III: Gas
Class V: Mercury*
(*The use of mercury is not very common in the processing industries, and this class is not prominent.) Because the readout device can be separated from the measuring sensor or bulb by several feet of capillary passing through changing ambient temperatures, compensation must be provided so that the measurement is immune to ambient temperature fluctuations. Full compensation corrects for temperature changes along the capillary tube and at the readout section, called the case. Case-only compensation corrects for temperature variations in the case only. Full compensation is designated by “A” in the related nomenclature, and case compensation is designated by “B.” Class II systems do not require compensation, but the measured temperature cannot cross the ambient temperature: Class IIA specifies that the measured temperature must be above ambient, and IIB indicates measured temperatures below ambient. The class of system is selected in accordance with the desired measurement range, speed of response, compensation required or desired, sensor size, capillary length, and cost or design complexity. The accuracy of all classes is ±0.5% to ±1% of the measured span. Filled systems are used with pneumatic transmitters and local indicators.
Thermocouples The most commonly used electrical temperature systems are thermocouples (T/C) and RTDs. A thermocouple consists of two dissimilar metals joined at the ends to form a measuring junction and a reference junction. An electromotive force (EMF) is produced when the junctions are at different temperatures. The measuring or hot junction is placed in the medium whose temperature is to be measured, and the reference or cold junction is normally at the connection to the readout device or measuring instrument. The EMF generated is directly proportional to the temperature difference between the two junctions.
As the thermocouple EMF output is a function of the temperature difference between the two junctions, compensation must be made for temperature fluctuations at the reference junction. Thermocouple types depend on the metals used and are specified by the International Society of Automation (ISA) with tables that relate generated EMF to the temperature at the measured junction for each thermocouple type, with the reference junction at a standard temperature, normally 32°F or 0°C. In calibration and laboratory applications, the reference junction is normally maintained at the standard temperature. This method of reference junction compensation cannot be used in most industrial applications, where the reference junction is established at the connection between the thermocouple and the readout device. Because of the expense of thermocouple material, it is inconvenient to run the thermocouple wire from the processing area to the readout instrument. Accordingly, thermocouple lead wire has been developed whereby the connection of the lead wire to the thermocouple does not form a junction, and the reference junction is established at the location of the readout instrument. The current trend is toward mounting the transmitter in the thermowell head, thus minimizing the thermocouple wire length and eliminating the need for extension wires. In such cases, a method of reference junction compensation is to design and construct an electrical reference junction compensator. This device generates an electrical bucking voltage that is in opposition to the voltage at the measuring junction and is subtracted from the measuring junction voltage. Care must be taken to prevent the temperature of the measuring junction from crossing the temperature of the reference junction, as this will cause a change in polarity of the thermocouple output voltage. To reiterate, thermocouples are listed as types in accordance with their materials of construction, which establish the temperature/EMF relationship. Thermocouple instruments used for calibration and process measurement are designed for a particular thermocouple type. A thermocouple arrangement is shown in Figure 6-2. Figure 6-3 shows EMF versus
temperature for several types of thermocouples with a 32°F reference junction.
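Reference-junction compensation can be sketched numerically. The following minimal illustration assumes a hypothetical linear 41 µV/°C sensitivity (roughly that of a type K thermocouple near room temperature); real instruments use the published nonlinear tables or polynomials.

    # Hedged sketch of reference-junction (cold junction) compensation.
    # The linear sensitivity is an illustrative approximation only.
    SEEBECK_UV_PER_C = 41.0  # assumed uV per degC

    def measured_temp_c(emf_uv, ref_junction_c):
        # Add the EMF the couple would produce between 0 degC and the
        # reference junction temperature, then convert back to degC.
        total_uv = emf_uv + SEEBECK_UV_PER_C * ref_junction_c
        return total_uv / SEEBECK_UV_PER_C

    # Example: 8,200 uV measured with the reference junction at 25 degC.
    print(f"{measured_temp_c(8200.0, 25.0):.1f} degC")  # -> 225.0 degC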
Thermocouple configurations can be designed for different applications. When thermocouples are connected in parallel, the average of the temperatures at the individual thermocouples can be determined. An example of this configuration is measuring the temperature at individual points in a reactor bed with individual thermocouples while obtaining the average bed temperature from the voltage at the parallel connection. Another common thermocouple application is to measure the temperature difference between two points of measurement, such as the temperature drop across a heat exchanger. Thermocouple connection in series opposition is used for this purpose. When the temperatures at both points are equal, regardless of the magnitude of the voltages generated, the net EMF will be zero. Thermocouple calibration curves show that the voltage/temperature relationship is small and is expressed in millivolts. Therefore, to measure small increments of temperature, a very sensitive voltage measuring device is needed. Potentiometric recorders and millivolt potentiometers are used to provide the sensitivity needed.
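A minimal sketch of these two configurations, reusing the hypothetical linear sensitivity from the sketch above (all temperatures are assumed values):

    # Hedged sketch of parallel (averaging) and series-opposed
    # (differential) thermocouple configurations.
    S = 41.0  # assumed uV per degC, as above

    # Parallel connection: the net EMF corresponds to the average.
    bed_temps_c = [310.0, 305.0, 320.0, 325.0]  # reactor bed points (assumed)
    avg_emf_uv = sum(S * t for t in bed_temps_c) / len(bed_temps_c)
    print(f"average bed temperature ~ {avg_emf_uv / S:.1f} degC")  # 315.0

    # Series opposition: the net EMF corresponds to the difference and
    # is zero when both junction temperatures are equal.
    t_in_c, t_out_c = 180.0, 140.0  # across a heat exchanger (assumed)
    diff_emf_uv = S * t_in_c - S * t_out_c
    print(f"temperature drop ~ {diff_emf_uv / S:.1f} degC")  # 40.0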
Resistance Temperature Devices
The final type of temperature measuring device discussed here is the resistance temperature device (RTD). An RTD is a conductor, usually a coil of platinum wire, with a known resistance/temperature relationship. By measuring the resistance of the RTD and using the appropriate calibration tables, the temperature of the sensor can be determined. Measuring RTD resistance accurately with a conventional ohmmeter is complicated by the self-heating produced in the sensor by the current the ohmmeter supplies. Also, such measurement techniques require human intervention, which cannot provide the continuous measurement needed for process control. Most RTD applications use a DC Wheatstone bridge circuit to measure temperature. A temperature change produces a corresponding change in resistance, which in turn produces a change in the voltage or current of the bridge circuit. Equivalent circuit theorems can be used to establish the relationship between temperature and bridge voltage for a particular application. Empirical methods are normally used to calibrate an RTD temperature measuring device. The resistance of the wires used to connect the RTD to the bridge circuit will also change with temperature. To negate the effect of the wire resistance, a three-wire bridge circuit is used in RTD applications.
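The resistance/temperature relationship of platinum RTDs is commonly modeled by the Callendar-Van Dusen equation, which is not given in this chapter. A minimal sketch for a Pt100 above 0°C, using the standard IEC 60751 coefficients:

    import math

    # Callendar-Van Dusen above 0 degC (IEC 60751 coefficients):
    # R(T) = R0 * (1 + A*T + B*T^2)
    R0 = 100.0  # Pt100 resistance at 0 degC, ohms
    A = 3.9083e-3
    B = -5.775e-7

    def rtd_temperature_c(r_ohms):
        # Invert the quadratic for the physically meaningful root.
        c = 1.0 - r_ohms / R0
        return (-A + math.sqrt(A * A - 4.0 * B * c)) / (2.0 * B)

    print(f"{rtd_temperature_c(138.51):.2f} degC")  # -> 100.00 degC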
Thermistors Thermistors, like platinum elements and other metallic conductors, can also be used for temperature measurement. Being much more sensitive to temperature, they can be used to measure much smaller temperature changes. They are semiconductor (P-N junction) devices and have a nonlinear resistance/temperature relationship, which requires linearization techniques for wide-temperature-range applications.
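One common linearization technique, not detailed in this chapter, is the Steinhart-Hart equation, 1/T = A + B ln(R) + C (ln R)^3. A minimal sketch with illustrative coefficients for a generic 10 kΩ NTC thermistor (real coefficients come from the device datasheet or from fitting three calibration points):

    import math

    # Hedged sketch: Steinhart-Hart linearization, T in kelvin.
    # Coefficients are illustrative values for a generic 10 kOhm NTC.
    A = 1.009249522e-3
    B = 2.378405444e-4
    C = 2.019202697e-7

    def thermistor_temp_c(r_ohms):
        ln_r = math.log(r_ohms)
        inv_t = A + B * ln_r + C * ln_r ** 3
        return 1.0 / inv_t - 273.15

    print(f"{thermistor_temp_c(10000.0):.1f} degC")  # -> about 24.7 degC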
Conclusion This chapter presents the most common types of measuring devices used in the process industries. No attempt has been made to discuss every principle of operation or every application; the intent is merely to introduce the industrial measurement methods. Many process transmitters are used in analytical applications, including chromatographs, pH, conductivity, turbidity, O2 content, dissolved oxygen, and others discussed elsewhere. For further clarification, the reference material should be consulted.
Further Information Anderson, Norman A. Instrumentation for Process Measurement and Control. 3rd ed. Boca Raton, FL: CRC Press, Taylor & Francis Group, 1998.
Berge, Jonas. Fieldbuses for Process Control: Engineering, Operation and Maintenance. Research Triangle Park, NC: ISA, 2004. Gillum, Donald R. Industrial Pressure, Level, and Density Measurement. 2nd ed. Research Triangle Park, NC: ISA, 2009. Lipták, Béla G. Instrument Engineer’s Handbook: Process Measurement and Analysis. Vol. 1, 4th ed. Boca Raton, FL: CRC Press, 1995.
About the Author
Donald R. Gillum’s industrial background includes 10 years as an instrument/analyzer engineering technician and application engineer at Lyondell Petrochemical. Before that he was a process operator at that plant and a chemical operator at a gaseous diffusion nuclear facility. He was employed by Texas State Technical College for 40 years, where he was actively involved in all facets of student recruitment, curriculum development, laboratory design, instruction, industrial training, and continuing education. Gillum developed and taught courses for ISA from 1982 to 2008 and was instrumental in the purchase and development of the original training center in downtown Raleigh. He has taught courses at this facility and on location since that time, and he has written articles for various technical publications and training manuals. He is the author of the textbook Industrial Pressure, Level, and Density Measurement. On the volunteer side, Gillum served two terms on ISA’s Executive Board as vice president of District VII and vice president of the Education Department, now PDD. As PDD director, he served in the area of certification and credentialing. He was a program evaluator for ABET, Commissioner of the Technical Accreditation Commission, and a member of ABET’s Board of Directors. Gillum is an ISA Life Member and a recipient of ISA’s Golden Achievement Award. He holds a BS from the University of Houston and is a PE in Control System Engineering.
7 Analytical Instrumentation By James F. Tatera
Introduction
Process analytical instruments are a unique category of process control instruments. They are a special class of sensors that enable the control engineer to control and/or monitor process and product characteristics in significantly more complex and varied ways than traditional, more physical sensors—such as pressure, temperature, and flow—allow. Today’s safety and environmental requirements, time-sensitive production processes, inventory reduction efforts, cost reduction efforts, and process automation schemes have made process analysis a requirement for many process control strategies. Most process analyzers provide real-time information to the control scheme that, many years ago, would have been the type of feedback the production process received from a plant’s quality assurance laboratory. Today most processes require faster feedback to control the process, rather than simply being advised that the process was or wasn’t in control at the time the lab sample was taken, and/or that the sample was or wasn’t in specification. Various individuals have attempted to categorize the large variety of monitors typically called process analyzers. None of these classification schemes has been widely accepted; the result is that many categorization schemes are in use simultaneously. Most of these schemes are based on the analytical/measurement technology utilized by the monitor, the application to which the monitor is applied, or the type of sample being analyzed. There are no hard and fast definitions for analyzer types. Consequently, most analytical instruments are classed under multiple and different groupings. Table 7-1 depicts a few of the analyzer type labels commonly used.
An example of how a single analyzer can be classified under many types is a pH analyzer. This analyzer is designed to measure pH (an electrochemical property of a solution—usually water-based). As such, it can be used to report the pH of the solution and may be labeled a pH analyzer or electrochemical analyzer (its analytical technology label). It may be used to monitor the plant’s water outfall, in which case it may be called an environmental- or water-quality analyzer (based on its application and sample type). It may be used to monitor the acid or base concentration of a process stream, in which case it may be labeled a single-component concentration analyzer (based on its application and the desired result being reported). This example is only intended to show that many process analyzers will be labeled under multiple classifications. Don’t allow this to confuse or bother you. There are too many process analyzer technologies to mention in this chapter, so only a few will be used as examples. A few of the many books published on process analysis are listed in the reference summary at the end of this chapter. I recommend consulting them for further information on individual/specific technologies. The balance of this chapter will be used to introduce concepts and technical details that are important in the application of process analyzers.
Sample Point Selection Determining which sample to analyze is usually an iterative process based on several factors and inputs. Some of these factors include regulatory requirements, product quality, process conditions, control strategies, economic justifications, and more. Usually, the final selection is a compromise that may not be optimum for any one factor, but is the overall best of the options under consideration. Too often, mistakes are made on existing processes when selections are based on a simple single guideline, like the final product or the intermediate sample that has been routinely taken to the lab. True, you usually have relatively good data regarding the composition of that sample. But, is
it the best sample to use to control the process continuously to make the best product and/or manage the process safely and efficiently? Or, is it the one that will just tell you that you have or have not made good product? Both are useful information, but usually the latter is more effectively accomplished in a laboratory environment.
When you consider all the costs of procurement, engineering, installation, and maintenance, you rarely save enough money to justify installing process analyzers just to shut down or replace some of your lab analyzers. Large savings are usually achieved through improved process control, based on analyzer input. If you can see a process moving out of control and correct the issue before it loses control, you can avoid making bad products, rework, waste, and so on. If you must rely on detecting a final product that is already moving toward or out of specification, you are more likely to make additional bad product. Consequently, lab approval samples usually focus on the final product for approval, while process analyzers used for process control more often measure upstream and intermediate samples. Think of a distillation process. Typically, lab samples are taken from the top or bottom of columns and the results usually contain difficult-to-measure, very low concentrations of lights or heavies because they are being distilled out. Often you can better achieve your goal by taking a sample from within the column at or near a major concentration break point. This sample can indicate that lights or heavies are moving in an undesired direction before a full change has reached the column take-offs, and you can adjust the column operating parameters (temperature, pressure, and/or flow) in a way that returns the column operation to the desired state. An intermediate sample from within the distillation column usually contains analytes in concentrations that are less difficult and more robust to analyze. To select the best sample point and analyzer type, a multidisciplinary team is usually the best approach. In the distillation example mentioned above, you would probably want a process engineer, controls engineer, analyzer specialist, quality specialist, and possibly others on the team to help identify the best sampling point and analyzer type. If you have the luxury of an appropriate pilot plant, that is often the best place to start because you do not have to risk interfering with production while possibly experimenting with different sample points and control strategies. In addition, pilot plants are often allowed to intentionally make bad product and demonstrate undesirable operating conditions. Other tools that are often used to help identify desirable sample points and control strategies are multiple, temporary, relocatable process analyzers (TURPAs). With an appropriate selection of these instruments, you can do a great job modeling the process and evaluating different control strategies. Once you have identified the sample point
and desired measurement, you are ready to begin the instrument selection phase of the process.
Instrument Selection Process analyzer selection is also typically best accomplished by a multidisciplinary team. This team’s members often include the process analytical specialist, quality assurance and/or research lab analysts, process analyzer maintenance personnel, process engineers, instrument engineers, and possibly others. The list should include all the appropriate individuals who have a stake in the project. Of these job categories, the individuals most often not contacted until the selection has been made—and the ones who are probably most important to its long-term success—are the maintenance personnel. Everyone relies on them having confidence in the instrument and keeping it working over the long term.
At this point, you have identified the sample and component(s) to be measured, the concentration range to be measured (under anticipated normal and upset operating conditions), the required measurement precision and accuracy, the speed of analysis required to support the identified control strategy, and so on. You also should have identified all other components/materials that could be present under normal and abnormal conditions. The method selection process must include determining if these other components/materials could interfere with the method/technologies being considered. You are now identifying technology candidates and trying to select the best for this measurement. If you have a current lab analytical method for this or a similar sample, you should not ignore it; however, more often than not, it is not the best technology for the process analysis. It is often too slow, complex, fragile, or expensive (or it has other issues) to successfully pass the final cut. Lastly, if you have more than one good candidate, consider the site’s experience maintaining these technologies. Does maintenance have experience and training on the technologies? Does the site have existing spare parts and compatible data communications systems? If not, what are the spare parts supply and maintenance support issues? These items and additional maintenance concerns should be identified by the maintenance representative on the selection team and need to be considered in the selection process. The greatest measurement technology in the world will not meet your long-term needs if it cannot be maintained and kept performing in a manner consistent with the processes operations. The minimum acceptable availability of the technology is an important factor and is typically at least 95%.
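Availability here is the fraction of time the analyzer delivers usable results to the process. As a minimal sketch of how such a figure might be computed from maintenance records (the numbers are illustrative):

    # Hedged sketch: analyzer availability from service records.
    hours_in_period = 8760.0  # one year
    downtime_hours = 350.0    # repairs and outages (assumed)

    availability = (hours_in_period - downtime_hours) / hours_in_period
    print(f"availability = {availability:.1%}")  # -> 96.0%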
At this point, the selection team needs to review its options and select the analytical technology that will best serve the process needs over the long term. The analytical technology selected does not need to be the one that yields the most accurate and precise measurement; it should be the one that provides the required measurement in a timely, reliable, and cost-effective manner over the long term.
Sample Conditioning Systems
Sample conditioning sounds simple. You take the sample that the process provides and condition/modify it in ways that allow the selected analyzer to accept it. Despite this relatively simple-sounding mission, most process analyzer specialists attribute 50% to 80% of process analyzer failures to sample conditioning issues. Recall that the system must deliver an acceptable sample to the analyzer under a variety of normal, upset, startup, shutdown, and other process conditions. The sampling system not only has to address the interface requirements of getting an acceptable sample from the process to the analyzer, it usually must also dispose of that sample in a reliable and cost-effective manner. Disposing of the sample often involves returning it to the process, and sometimes conditioning it to make it appropriate for the return journey. What is considered an acceptable sample for the analyzer? Often this is defined as one that is “representative” of the process stream and compatible with the analyzer’s sample introduction requirements. In this case, “representative” can be a confusing term. Usually it does not have to represent the process stream in a physical sense. Typical sample conditioning systems change the process sample’s temperature, pressure, and some other parameters to make the process sample compatible with the selected process analyzer. In many cases, conditioning goes beyond simple temperature and pressure issues and includes things that change the composition of the sample—things like filters, demisters, bubblers, scrubbers, membrane separators, and more. However, if the resulting sample is compatible with the analyzer and the resulting analysis is correlatable with/representative of the process, the sample conditioning is considered to be done well. Some sampling systems also provide stream switching and/or auto-calibration capabilities. Because the process and calibration samples often begin at different conditions, and to reduce the possibility of cross contamination, most process analyzers include sampling systems with stream selection capabilities (often double-block-and-bleed valving) just before the analyzer. Figure 7-1 depicts a typical boxed sample conditioning system. A total sampling system would normally also include process sample extraction (possibly a tee, probe, vaporizer,
etc.), transport lines, return lines, fast loop, slow loop, sample return, and so on. The figure contains an assortment of components including filters, flow controllers, and valves. Figure 7-2 depicts an in-situ analyzer that requires little sample conditioning and is essentially installed in the process loop.
Another type of sample conditioning system is defined by the ISA-76.00.02 and IEC 62339-1:2006 standards and is commonly referred to as NeSSI (New Sampling/Sensor Initiative). This sample conditioning system minimizes dead volume and space by using specially designed sample conditioning components on a predefined high-density substrate system. Most analyzers are designed to work on clean, dry, noncorrosive samples in a specific temperature and pressure range. The sample system should convert the process sample conditions to the conditions required by the analyzer in a timely, “representative,” accurate, and usable form. A well-designed, well-operated, and well-maintained sampling system is necessary for the success of a process analyzer project. Sampling is a crucial art and science for successful process analysis projects. It is a topic too large to more than touch on in this chapter. For more information on sampling, refer to some of the references listed.
Process Analytical System Installation
The installation requirements of process analyzers vary dramatically. Figure 7-2 depicts an environmentally hardened process viscometer installed directly in the process environment with very little sample or environmental conditioning.
Figure 7-3 depicts process gas chromatographs (GCs) installed in an environmentally conditioned shelter. These GC analyzers require a much more conditioned/controlled sample and installation environment.
Figure 7-4 shows the exterior of one of these environmentally controlled shelters. Note the heating and ventilation unit near the door on the left and the sample conditioning cabinets mounted on the shelter wall under the canopy.
When installing a process analyzer, the next most important thing to measurement and sampling technologies—as in real estate—is location. If the recommended environment for a given analyzer is not appropriate, the project is likely doomed to failure. Also, environmental conditioning can be expensive and must be included in the project cost. In many cases, the cost of a shelter and/or other environmental conditioning can easily exceed the costs of the instrument itself. Several highly hardened analyzers are suitable for direct installation in various hazardous process environments, while others may not be. Some analyzers can operate with only a moderate amount of process-induced vibration, sample condition variation, and ambient environmental fluctuation. Others can require highly stable environments (almost like a lab environment). This all needs to be taken into consideration during the technology selection and installation design processes. To obtain the best technology and design choice for the situation, you need to consider all these issues and process/project specific sampling issues, such as how far and how fast the sample will have to be transported and where it must be returned. You should also determine the best/nearest suitable location for installing the analyzer. Typical installation issues to consider include the hazardous rating of the process sample
area, as compared to the hazardous area certification of the process analyzers. Are there large pumps and/or compressors nearby that may cause significant vibration and require special or distant analyzer installation techniques? Each analyzer comes with detailed installation requirements. These should be reviewed prior to making a purchase. Generalized guidelines for various analyzer types are mentioned in some of the references cited at the end of this chapter.
Maintenance Maintenance is the backbone of any analyzer project. If the analyzer cannot be maintained and kept running when the process requires the results, it should not have been installed in the first place. No company that uses analyzers has an objective of buying analyzers. They buy them to save money and/or to keep their plants running.
A cadre of properly trained and dedicated craftsmen with access to appropriate maintenance resources is essential to keep process analyzers working properly. It is not uncommon for a complex process analyzer system to require 5% to 10% of its purchase price in annual maintenance. One Raman spectroscopy application required a $25,000 laser twice a year; the system itself cost only $150,000, so the annual maintenance cost was 33% of the procurement cost. This is high, but not necessarily unacceptable, depending on the benefit the analyzer system is providing. Many maintenance approaches have been used successfully to support analyzers for process control. Most of these approaches include a combination of predictive, preventive, and breakdown maintenance. Issues like filter cleaning, utility gas cylinder replacement, mechanical valve and other moving-part overhauls, and many others tend to lend themselves to predictive and/or preventive maintenance. Most process analyzers are complex enough to require microprocessor controllers, and many of these have sufficient excess microprocessor capacity that vendors have tended to utilize it for added features, such as diagnostics that advise the maintenance team of failures and/or approaching failures, and resident operation and maintenance manuals. Analyzer shelters have helped encourage frequent and thorough maintenance checks, as well as appropriate repairs. Analyzer hardware is likely to receive better attention from the maintenance department if it is easily accessible and housed in a desirable work environment (like a heated and air-conditioned shelter). Figure 7-5 depicts a moderately complex GC analyzer oven. It has multiple detectors, multiple separation columns, and multiple valves to monitor, control, and synchronize the GC separation and measurement.
Obviously, a system this complex will be more easily maintained in a well-lit and environmentally conducive work environment. Complex analyzers, like many GC systems and spectroscopy systems, have typically demonstrated better performance when installed in environmentally stable areas. Figure 7-3 depicts one of these areas with three process GCs. The top section of the unit includes a microprocessor and the complex electronics required to control the analyzer functions and operation, communicate with the outside world, and (sometimes) to control a variety of sample system features. The middle section is the utility gas control section that controls the required flows of several application essential gases. The lower section is the guts, so to speak. It is the thermostatically controlled separation oven with an application-specific assortment of separation columns, valves, and detectors (see Figure 7-5).
Utilizing an acceptable (possibly not the best) analytical technology that the maintenance department is already familiar with can have many positive benefits. Existing spare parts may be readily available. Maintenance technicians may already have general training in the technology and, if the demonstrated applications have been highly successful, they may go into the start-up phase with a positive attitude. In addition, the control system may already have connections to an appropriate GC and/or other data highways. Calibration is generally treated as a maintenance function, although it is treated differently in different applications and facilities. Most regulatory and environmental applications require frequent calibrations (automatic or manual). Many plants that are
strongly into statistical process control (SPC) and Six Sigma have come to realize they can induce some minor instability into a process by overcalibrating their monitoring and control instruments. These organizations have begun using techniques like averaging multiple calibrations and using the average numbers for their calibration. They also can conduct benchmark calibrations, or calibration checks, and if the check is within the statistical guidelines of the original calibration series, they make no adjustments to the instrument. If the results are outside of the acceptable statistical range, not only do they recalibrate, but they also go over the instrument/application to try to determine what may have caused the change.
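The benchmark-check logic described above amounts to a simple rule: adjust only when a check falls outside the statistical limits of the original calibration series. A minimal sketch, with illustrative numbers and a conventional 3-sigma limit (the chapter does not specify the limit):

    import statistics

    # Hedged sketch of a calibration-check decision, per the SPC
    # practice described above. All numbers are illustrative.
    calibration_series = [50.1, 49.8, 50.0, 50.2, 49.9]  # past cal results
    check_reading = 50.6                                  # benchmark check

    mean = statistics.mean(calibration_series)
    sigma = statistics.stdev(calibration_series)

    if abs(check_reading - mean) <= 3.0 * sigma:
        print("Within statistical limits: make no adjustment.")
    else:
        print("Outside limits: recalibrate and investigate the cause.")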
With the declining price of microprocessors, equipment manufacturers are using them more readily, even in simpler analytical instruments that do not really require them. With the excess computing capability that comes with many of these systems, an increasing number of vendors have been developing diagnostic and maintenance packages to aid in maintaining these analytical systems. The packages are typically called performance monitoring and/or diagnostics systems; they often monitor the status of the instrument and its availability to the process, keep a failure and maintenance history, contain software maintenance documentation/manuals, and provide appropriate alarms to the process and maintenance departments. Lastly, maintenance work assignments and priorities are especially tricky for process analytical instruments. Most are somewhat unique in complexity and other respects. Two process GC systems that look similar may require significantly different maintenance support because of process, sample, and/or application differences. Consequently, it is usually best to develop maintenance workload assignments from actual maintenance histories when they are available, averaged to eliminate individual maintenance-worker variations and time-weighted to favor recent history and, consequently, the current state of the installation (possibly improved, or aging and soon needing replacement). Maintenance priorities are quite complex and require a multidisciplinary team effort to determine. What an analyzer is doing at any given time can affect its priority. Some analyzers are primarily used during start-up and shutdown and have a higher priority as these operations approach. Others are required to run the plant (environmental and safety analyzers are part of this category) and, consequently, have an extremely high priority. Others can be prioritized based on the financial savings they provide for the company. The multidisciplinary team must decide which analyzers justify immediate maintenance, including call-ins and/or vendor support. Some may only justify normal workday maintenance activities. After you have gone through one of these priority
setting exercises, you will have a much better understanding of the value of your analytical installations to the plant’s operation. If you don’t have adequate maintenance monitoring programs/activities in place, it can be very difficult to assess workloads and/or priorities. The first step in implementing these activities must be to collect the data that is necessary to make these types of decisions.
Utilization of Results Process analytical results are used for many purposes. The following are some of the most prominent uses: • Closed-loop control • Open-loop control • Process monitoring • Product quality monitoring • Environmental monitoring
• Safety monitoring With the large installed data communications base that exists in most modern plants, the process analysis results/outputs are used by most major systems including process control systems, laboratory information management systems, plant information systems, maintenance systems, safety systems, enterprise systems, and regulatory reporting systems (like environmental reports to the EPA). As mentioned previously, various groups use these results in different ways. Refer to the cited references for more information.
Further Information Carr-Brion, K. G., and J. R. P. Clarke. Sampling Systems for Process Analysers. 2nd ed. Oxford, England: Butterworth-Heinemann, 1996. Houser, E. A. Principles of Sample Handling and Sampling Systems Design for Process Analysis. Research Triangle Park, NC: ISA (Instrument Society of America, currently the International Society of Automation), 1972. IEC 62339-1:2006 Ed. 1.0. Modular Component Interfaces for Surface-Mount Fluid Distribution Components – Part 1: Elastomeric Seals. Geneva, Switzerland: IEC (International Electrotechnical Commission).
ISA-76.00.02-2002. Modular Component Interfaces for Surface-Mount Fluid Distribution Components – Part 1: Elastomeric Seals. Research Triangle Park, NC: ISA (International Society of Automation). Lipták, Béla G., ed. Instrument Engineers’ Handbook. Vol. 1, Process Measurement and Analysis. 4th ed. Boca Raton, FL: CRC Press/ISA (International Society of Automation), 2003. Sherman, R. E., ed., and L. J. Rhodes, assoc. ed. Analytical Instrumentation: Practical Guides for Measurement and Control Series. Research Triangle Park, NC: ISA (International Society of Automation), 1996. Sherman, R. E. Process Analyzer Sample-Conditioning System Technology. Wiley Series in Chemical Engineering. New York: John Wiley & Sons, Inc., 2002. Waters, Tony. Industrial Sampling Systems. Solon, OH: Swagelok, 2013.
About the Author James F. (Jim) Tatera is a senior process analysis consultant with Tatera Associates. For many years, he has provided process analytical consulting/contracting services to user, vendor, and academic organizations, authored many book chapters, and coauthored many process-analytical-related marketing reports with PAI Partners. His more than 45-year career included 27 years of working with process analyzers for Dow Corning, including both U.S. and international assignments in analytical research, process engineering, project engineering, production management, maintenance management, and a Global Process Analytical Expertise Center. Tatera is an ISA Fellow, is one of the original Certified Specialists in Analytical Technology (CSAT), is active in U.S. and IEC process analysis standards activities, and has received several awards for his work in the process analysis field. He is the ANSI U.S. National Committee (USNC) co-technical advisor to IEC SC 65B (Industrial Process Measurement, Control, and Automation—Measurement and Control Devices), the convener of IEC SC 65B WG14 (Analyzing Equipment), and has served as the chair and as a member of the ISA-SP76 (Composition Analyzers) committee. He has also served in several section and international leadership roles in both the International Society of Automation (ISA) and American Chemical Society (ACS).
8 Control Valves By Hans D. Baumann Since the onset of the electronic age, preoccupied with the ever-increasing challenges of more sophisticated control instrumentation and control algorithms, instrument engineers have paid less and less attention to final control elements, even though no process control loop could function without them.
Final control elements may be the most important part of a control loop because they regulate process variables, such as pressure, temperature, tank level, and so on. All these control functions involve the regulation of fluid flow in a system, and the control valve is the most versatile device able to do this. Thermodynamically speaking, the moving element of a valve—be it a plug, ball, or vane—together with one or more orifices, restricts the flow of fluid. This restriction causes the passing fluid to accelerate (converting potential energy into kinetic energy). The fluid exits the orifice into an open space in the valve housing, which causes the fluid to decelerate and create turbulence. This turbulence in turn creates heat and at the same time reduces the flow rate or pressure. Unfortunately, this wastes potential energy because part of the process is irreversible. In addition, high-pressure reduction in a valve can cause cavitation in liquids or substantial aerodynamic noise with gases; one must choose special valves designed for those services. There are other types of final control elements, such as speed-controlled pumps and variable speed drives. Speed-controlled pumps, while more efficient when flow rates are fairly constant, lack the size ranges, material choices, high pressure and temperature ratings, and wide flow ranges that control valves offer. Advertising claims touting better efficiency than valves cite as proof only the low power consumption of the variable speed motor and omit the high power consumption of the voltage or frequency converter that is needed. Similarly, variable speed drives are mechanical devices that vary the speed between a motor and a pump or blower. These don’t need an electric current converter because
their speed is mechanically adjusted. Control valves have a number of advantages over speed-controlled pumps: they are available in a variety of materials and sizes, they have a wider rangeability (range between maximum and minimum controllable flow), and they have a better speed of response. To make the reader familiar with at least some of the major types of control valves (the most important final control element), here is a brief description.
Valve Types There are two basic styles of control valves: rotary motion and linear motion. The valve shaft of rotary motion valves rotates a vane or plug following the commands of a rotary actuator. The valve stem of linear motion valves moves toward or away from the orifice driven by reciprocating actuators. Ball valves and butterfly valves are both rotary motion valves; a globe valve is a typical linear motion valve. Rotary motion valves are generally used in moderate-to-light-duty service in sizes above 2 in (50 mm), whereas linear motion valves are commonly used for more severe duty service. For the same pipe size, rotary valves are smaller and lighter than linear motion valves and are more economical in cost, particularly in sizes above 3 in (80 mm).
Globe valves are typical linear motion valves. They have less pressure recovery (a higher pressure recovery factor, FL) than rotary valves and, therefore, have less noise and fewer cavitation problems. The penalty is that they have less Cv (Kv) per diameter compared to rotary types.
Ball Valves When a ball valve is used as a control valve, it will usually have design modifications to improve performance. Instead of a full spherical ball, it will typically have a ball segment. This reduces the amount of seal contact, thus reducing friction and allowing for more precise positioning. The leading edge of the ball segment may have a V-shaped groove to improve the control characteristic. Ball valve trim material is generally 300 series stainless steel (see Figure 8-1).
Segmented ball valves are popular in paper mills due to their capability to shear fibers. Their flow capacity is similar to butterfly valves; hence they have high pressure recovery in mid- and high-flow ranges.
Eccentric Rotary Plug Valves
Another form of rotary control valve is the eccentric, rotary-plug type (see Figure 8-2) with the closure member shaped like a mushroom and attached slightly offset to the shaft. This style provides good control along with a tight shutoff, as the offset supplies leverage to cam the disc face into the seat. The advantage of this valve is tight shutoff without the elastomeric seat seals used in ball and butterfly valves. The trim material for eccentric disc valves is generally 300 series stainless steel, which may be clad with Stellite® hard facing.
The flow capacity is about equal to that of globe valves. These valves are less susceptible to problems with slurries or gummy fluids due to their rotary shaft bearings.
Butterfly Valves Butterfly valves are a low-cost solution for control loops using low-pressure and low-temperature fluids. They save space due to their small profile, they are available in sizes from 2 in (50 mm) to 50 in (1,250 mm), and they can be rubber lined for tight shutoff (see Figure 8-3). A more advanced variety uses a double-eccentric vane that, in combination with a metal seal ring, can provide low leakage rates even at elevated temperatures.
A more advanced design of butterfly valve is shown in Figure 8-4. Here the vane is in the shape of the letter Z. The resulting gradual opening produces a preferred equal-percentage characteristic, in contrast to the typically linear characteristic of conventional butterfly valves.
A butterfly valve used as a control valve may have a somewhat S-shaped disc (see Figure 8-3) to reduce the flow-induced torque on the disc; this allows for more precise positioning and prevents torque reversal. Butterfly valve trim material may be bronze, ductile iron, or 300 series stainless steel. A feature separating on/off rotary valves from those adapted for control is the tight connection between the plug or vane and the seat to ensure a tight seal when closed. Also needed is a tight coupling between the valve and actuator stem, which avoids loose play that leads to deadband, hysteresis, and, in turn, control-loop instability.
Globe Valves These valves can be subdivided into two common styles: post-guided and cage-guided. In post-guided valves, the moving closure member or stem is guided by a bushing in the valve bonnet. The closure member is usually unbalanced, and the fluid pressure drop acting on the closure member can create significant forces. Post-guided trims are well suited for slurries and fluids with entrained solids because the post-guiding area is not in the active flow stream, reducing the chance of solids entering the guiding area. Post-guided
valve-trim materials are usually either 400 series, 300 series, or type 17-4 PH stainless steel. Post-guided valves are preferred in valve sizes of 1/4 in (6 mm) to 2 in (50 mm) due to the lower cost (see Figure 8-5).
A cage-guided valve has a cylindrical cage between the body and the bonnet. Below the cage is a seat ring. The cage/seat ring stack is sealed with resilient gaskets on both ends. The cage guides the closure member, also known as the plug. The plug is often pressure-balanced with a dynamic seal between the plug and cage. A balanced plug has passageways through the plug to eliminate the pressure differential across the plug and the resulting pressure-induced force. The trim materials for cage-guided valves are often either 400 series, 300 series, or 17-4 PH stainless steel, sometimes nitrided or Stellite hard-faced (see Figure 8-6). Care should be taken to allow for thermal expansion of the cage at high temperatures, especially when the cage and housing are of dissimilar metals. One advantage of cage-guided valves is that they can easily be fitted with low-noise or anti-cavitation trim. The penalty is a reduction in flow capacity.
Note: Do not use so-called cage-guided valves for sticky fluids or fluids that have solid contaminants, such as crude oil. Typical configurations are used for valve sizes from 1/2 in (12.5 mm) to 3 in (80 mm). The plug and seat ring may be hard-faced for high pressure or slurries.
Three-Way Valves Another common valve type is the three-way valve. As the name implies, these valves have three ports and are used for either bypass or mixing services where a special plug (if it is linear motion) or a rotating vane or plug (if it is rotary motion) controls fluid flow. Most designs are either for bypass or for mixing, although some types, like the one shown in Figure 8-7, can be used for either service. Three-way valves are quite common in heating and air conditioning applications.
Figure 8-7 shows a novel three-way valve design, featuring a scooped-out vane that can
be rotated 90 degrees to open either port A or port B for improved flow capacity. There is practically no flow interruption when throttling between ports: because the flow areas of ports A and B together always equal the flow area of port C regardless of vane travel, a constant flow is maintained through port C. The valve is equally suitable for mixing or bypass operation and has a very good flow characteristic. For mixing service, flow enters from ports A and B and discharges through port C. For bypass service, fluid enters port C and is discharged alternately through ports A and B.
Actuators By far the most popular actuators for globe valves are the diaphragm actuators discussed in this section. Here an air-loaded diaphragm creates a force that is opposed by a coiled spring. They offer low cost and fail-safe action and have low friction, in contrast to piston actuators.
The air signal to a pneumatic actuator typically has a range from 3 to 15 psi or 6 to 30 psi (0.2 to 1 bar or 0.4 to 2 bar) and can originate from an I/P transducer or, more commonly, from an electronic valve positioner typically receiving a 4–20 mA signal from a process controller. These are command signals and can be different from internal actuator signals needed to open or close a valve.
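The scaling from a 4–20 mA command signal to a 3–15 psi pneumatic signal is linear. A minimal sketch of the conversion an I/P transducer or positioner performs:

    # Linear scaling of a 4-20 mA command signal to a 3-15 psi
    # pneumatic signal (the standard ranges named in the text).
    def ma_to_psi(ma, psi_lo=3.0, psi_hi=15.0):
        frac = (ma - 4.0) / (20.0 - 4.0)  # 0.0 at 4 mA, 1.0 at 20 mA
        return psi_lo + frac * (psi_hi - psi_lo)

    for ma in (4.0, 12.0, 20.0):
        print(f"{ma:4.1f} mA -> {ma_to_psi(ma):4.1f} psi")
    # -> 4.0 mA -> 3.0 psi, 12.0 mA -> 9.0 psi, 20.0 mA -> 15.0 psi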
Diaphragm Actuators Diaphragm actuators consist of a flexible rubber diaphragm clamped between stamped housings; when subjected to an actuator signal, the diaphragm exerts a force that is opposed by one or more coiled springs. Multiple parallel springs are popular because they allow a lower actuator profile. Typical internal actuator signals are 5–15 psi (0.35–1 bar) or 3–12 psi (0.2–0.8 bar). The difference between the command signal span from a controller or transducer and the internal signal span is used to provide sufficient force to close the valve plug. In a typical example, a diaphragm actuator having a 50 in2 (320 cm2) effective diaphragm area receives a controller signal of 3 psi (0.2 bar) to close a valve. Because the internal actuator signal starts at 5 psi (0.35 bar), this provides an air-pressure excess of 2 psi (0.14 bar). At a 50 in2 (320 cm2) area, there is a 100 lb (45 kg) force available to close the valve plug against the fluid pressure. The spring rate can be calculated by multiplying the difference between the maximum and minimum actuator signals by the diaphragm area, and then dividing this by
the amount of travel. The spring rate of the coiled spring is important to know because it defines the stiffness factor (Kn) of the actuator-valve combination. Such stiffness is necessary to overcome fluid-imposed dynamic instability of the valve trim. It is also necessary to prevent a single-seated plug from slamming onto the valve orifice on a valve with a flow direction that tends to close. Equation 8-1 can be used to estimate the required stiffness factor for actuator springs.

Required Kn = orifice area • maximum inlet pressure/(valve travel/4)    (8-1)

Example: orifice diameter = 4 in (0.1 m); P1 = 100 psi (7 bar); valve travel = 1.5 in (38 mm)

Result: Kn = 12.6 in² • 100/(1.5/4) = 3,360 lb/in (600 kg/cm)

In this case, a spring having a rating of at least 3,360 lb/in would meet the stiffness requirement.
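Because these sizing relationships are simple arithmetic, they are easy to script for quick checks. The following minimal Python sketch (not from the original text; the function names and the spring-span numbers are illustrative) reproduces the seating-force and stiffness examples above:

    import math

    def seating_force_lb(diaphragm_area_in2, preload_psi):
        # Seating force = spring preload margin x effective diaphragm area
        return diaphragm_area_in2 * preload_psi

    def spring_rate_lb_per_in(signal_span_psi, diaphragm_area_in2, travel_in):
        # Spring rate = (max - min actuator signal) x diaphragm area / travel
        return signal_span_psi * diaphragm_area_in2 / travel_in

    def required_kn(orifice_diameter_in, max_inlet_psi, travel_in):
        # Equation 8-1: Kn = orifice area x max inlet pressure / (travel / 4)
        orifice_area_in2 = math.pi * orifice_diameter_in ** 2 / 4.0
        return orifice_area_in2 * max_inlet_psi / (travel_in / 4.0)

    print(seating_force_lb(50.0, 2.0))             # 100.0 lb closing force
    print(spring_rate_lb_per_in(10.0, 50.0, 1.5))  # ~333 lb/in for a 5-15 psi span, 1.5 in travel
    print(required_kn(4.0, 100.0, 1.5))            # ~3,350 lb/in (the text rounds the area to 12.6 in2)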
Spring-diaphragm actuators are commercially available in sizes from 25 in² to 500 in² (160 cm² to 3,226 cm²). Rotary valves (see the example in Figure 8-8) more commonly employ piston-type pneumatic actuators using a linkage to convert linear piston travel to rotary action. The stiffness of piston-type actuators depends primarily on the volume-change-induced pressure change in the actuator. Equation 8-2 is a generally accepted method for calculating the stiffness factor, Knp.

Knp = Sb • Ab²/(Vb + h • Ab) + St • At²/(Vt + h • At)    (8-2)

where
h = stem motion
Ab = area below the piston (typically At less the area of the stem)
At = area above the piston
Vb = unused volume of the cylinder below the piston
Vt = unused volume of the cylinder above the piston
Sb = signal pressure below the piston
St = signal pressure above the piston

Example: h = 1 in (2.5 cm); At = 12 in² (77 cm²); Ab = 11 in² (71 cm²); Vb = 2.5 in³ (41 cm³); Vt = 5 in³ (82 cm³); Sb = 30 psia (2 bar abs); St = 60 psia (4 bar abs)
Result: Knp = 610 lb/in (109 kg/cm)

Any reduction in unused (dead) volume and any increase in signal pressure adds stiffness.
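Equation 8-2 treats each air chamber as a gas spring whose stiffness is the signal pressure times the piston area squared, divided by the chamber volume at the operating position. A hedged Python sketch of that calculation follows; the grouping of the volume terms is an assumption here, so vendor sizing methods (and the printed worked result above) may differ somewhat:

    def piston_stiffness(sb_psia, st_psia, ab_in2, at_in2, vb_in3, vt_in3, h_in):
        # Each chamber modeled as a gas spring: S * A^2 / (dead volume + h * A)
        k_bottom = sb_psia * ab_in2 ** 2 / (vb_in3 + h_in * ab_in2)
        k_top = st_psia * at_in2 ** 2 / (vt_in3 + h_in * at_in2)
        return k_bottom + k_top  # lb/in

    # Example values from the text
    print(round(piston_stiffness(30, 60, 11, 12, 2.5, 5, 1)))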
The actuator shown in Figure 8-9 is in the fail-closed position. By inverting the arrangement, placing the spring below the diaphragm, the actuator makes the valve fail-open. The normal flow direction for this globe valve is flow-to-open for better dynamic stability (here the entry port is on the right).
However, an opposite flow direction may be desired to help the actuator close the valve in case of an emergency. Here the fluid pressure tends to push the plug down against the seat, which can lead to instability; a larger actuator with greater stiffness (factor Kn) may be required to compensate for this downward force.
Pneumatic, Diaphragm-Less Piston Actuators Another way of using compressed air to operate valves is by utilizing piston-cylinder actuators (see Figure 8-10). Here a piston is driven by high-pressure air of up to 100 psi (7 bar), which can be opposed by a coiled spring (for fail-safe action) or by a somewhat lower air pressure on the opposite side of the piston. As a result, piston-cylinder actuators are more powerful than diaphragm actuators for a given size.
A disadvantage of piston-cylinder actuators is that they always need a valve positioner, and they require a higher air supply pressure, up to 100 psi (7 bar), than diaphragm actuators do.
Electric Actuators Electric actuators are found more often in power plants, where there are valves with higher pressure requirements or where no compressed air is available. They employ either a mechanical system (i.e., with gears and screws) or a magnetic piston-type system to drive the valve stem. Screw or gear types offer more protection against sudden erratic stem forces caused by fluid-induced instability due to their inherent stiffness. A potential problem with electric actuators is the need for standby batteries to guard against power failure. Travel time typically is 30 to 60 seconds per inch (25.4 mm), compared to 7 to 10 seconds per inch for pneumatic actuators. Magnetic actuators are limited by available output forces. Electric actuators utilize alternating current (AC) motors to drive speed-reducing gear trains. The last gear rotates an Acme-threaded spindle, which in turn is attached to the valve stem. Acme-threaded spindles have less than 50% efficiency and, therefore, provide "self-locking" against high or sudden valve stem forces. Smaller electric actuators sometimes employ ball-bearing-supported threads and nuts for greater force output; however, these do not prevent back-sliding under excess loads. The positioning sensitivity of electric actuators is relatively low due to the high inertia of the mechanical components. There is always a trade-off between force output and travel speed; high force output is typically chosen over speed due to the high cost of electric actuators.
Hydraulic Actuators
These are typically electrohydraulic devices (see example in Figure 8-11) because they derive their operating energy from electric motor-driven oil pumps. They too offer stiffness against dynamic valve instability. The cost of these actuators is high, and they require more maintenance than pneumatic actuators. Hydraulic actuators are found primarily indoors because the viscosity of operating oil varies significantly at lower outdoor temperatures. Another problem to consider is oil leakage that would cause a drift in the stem position.
Fail-safe action can be provided by placing a coiled spring opposite the oil-pressure-driven piston in a cylinder. The oil flow is controlled by a three-way spool valve positioned by a voice coil; other systems use stepping motors to pulse the oil flow at the pump. A stem-connected potentiometer provides position feedback, and the typical operating signal is 4–20 mA, which requires a special positioner. Quick-acting solenoids can evacuate a cylinder and close or open a valve in an emergency. Such features make electrohydraulic actuators popular on fuel-control valves for commercial gas turbines. Electrohydraulic units are typically self-contained; hence, there is no need for external pipes or tanks. These devices are used on valves or dampers that require a high operating torque and must overcome high friction forces, because the stiffness provided by the incompressible oil overcomes the deadband normally caused by such friction. One must realize that the oil-pumping motor has to run continuously, which means higher power consumption and a need for oil-blocking valves in case of power failure.
Accessories Valve Positioners
These devices are attached to an actuator to compare the position of the valve stem or rotary valve shaft with the position intended by the signal generated by a digital controller or computer. There are two basic types: pneumatic and electronic (digital). Pneumatic positioners are gradually being phased out in favor of electronic ones. However, they remain useful in plant areas that must be explosion-proof and in gas fields where compressed gas is used instead of compressed air (which is by far the most common actuating medium). Commonly, 30 psi (2 bar) pressure-controlled air is used as the supply, and 3–15 psi (0.2–1 bar) is used as the output signal to the actuator. Positioners should be considered "position controllers." The input is the signal from the process controller. The feedback is a signal from a potentiometer or Hall-effect sensor measuring the position of the valve stem. Any offset between the input and the feedback activates a command signal to the actuator. As with any process controller, positioners have stability problems. Adjustable gain and travel-speed adjustments are used to fight instability, which is typically caused by valve friction. Typical modes of adjustment include the following: if high gain (increased position accuracy) is required, reduce the travel speed of the actuator; if higher speed is required (such as in pressure control applications), reduce the gain to avoid "hunting." Varying the speed of the valve (its time constant) can also be used to alter the ratio between the time constant of the process loop and the time constant of the valve. A minimum ratio is 3:1 (preferably above 5:1) to avoid process-loop instability (see Table 8-1).
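To make the gain-versus-travel-speed trade-off concrete, here is a small illustrative simulation (an assumption-laden Python sketch, not from the original text) of a positioner modeled as a proportional position controller driving a slew-rate-limited stem; it shows how the actuator's travel-speed limit caps the benefit of raising positioner gain:

    def settle_time_s(gain, max_speed_in_per_s, setpoint_in=1.0, dt=0.01, tol=0.01):
        # Proportional positioner: stem velocity = gain * position error,
        # clipped to the actuator's maximum travel speed
        pos, t = 0.0, 0.0
        while abs(setpoint_in - pos) > tol and t < 60.0:
            error = setpoint_in - pos
            vel = max(-max_speed_in_per_s, min(max_speed_in_per_s, gain * error))
            pos += vel * dt
            t += dt
        return t

    # Raising gain speeds up small corrections, but the slewing limit dominates
    # large moves, so returns diminish (and real valves begin to hunt).
    for g in (0.5, 2.0, 10.0):
        print(g, round(settle_time_s(g, max_speed_in_per_s=0.5), 2))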
Electronic or digital positioners typically employ a 4–20 mA or a digital signal from an electronic control device. Most digital positioners employ microprocessors that, besides controlling the stem position, can monitor the valve’s overall function and can feed back any abnormality. Some devices may be able to function as their own process controllers responding directly to input from a process data transmitter. The word “digital” is somewhat of a misnomer because, besides the microprocessor, the positioners employ analog signal-activated voice coils to control the air output to the valve.
Most modern digital valve positioners (see Figure 8-12) can monitor the performance of the valve by triggering an artificial controller signal upset and then measuring the corresponding valve stem reaction (see Figure 8-13). Such tests normally are done when the process to be controlled is shut down because process engineers are averse to having an artificial upset while a process is running.
Self-testing electronic positioners need sufficient software programming to fulfill the following functions: • Reading the valve’s diagnostic data
• Supporting the diagnostic windows display • Checking the valve configuration and associated database • Monitoring the device capabilities • Configuring the valve’s characteristic • Listing general processes and the valve database • Showing the diagnostic database • Indicating security levels as required by the user
The overall aim is to create a real-time management control system integrating all aspects of plant management, production control, process control, plant efficiency (see Figure 8-14), and maintenance.
Position transmitters determine the size of the valve opening by reading the stem position and then transmit the information to the remote operator, who uses the data to verify that the valve is functioning. Limit switches restrict the amount of travel and can also be used to monitor the valve position. Activation can trigger an alarm system, or it can trigger a solenoid valve to quickly open or close the valve or confirm the valve is fully open or closed. Hand wheels are used to move the valve stem manually, which enables rudimentary manual control of the process in case of signal or power failure. Hand wheels are
typically on top of a diaphragm actuator, for valves up to 2 in (50 mm) in size, or on the side of the actuator yoke. I/P transducers are sometimes used in place of electronic valve positioners; they are popular with small valves and in noncritical applications where no valve positioners are specified. They receive a 4–20 mA command signal from a process controller and normally create a 3–15 psi (0.2–1.0 bar) air signal to a valve. Solenoid valves can be used as safety devices to shut off air supply to the valve in an emergency. Alternatively, they can lock air in a piston operator. Air sets are small pressure regulators used to maintain a constant supply pressure, typically 30 psi (2 bar), to valve positioners or other pneumatic instruments.
Further Information ANSI/FCI-70-2-2003. Control Valve Seat Leakage. Cleveland, OH: FCI (Fluid Controls Institute, Inc.). ANSI/ISA-75.01.01-2002 (IEC 60534-2-1 Mod). Flow Equations for Sizing Control Valves. Research Triangle Park, NC: ISA (International Society of Automation). ASME/ANSI-B16.34-1996. Valves—Flanged, Threaded, and Welding End. New York: ASME (American Society of Mechanical Engineers).
Borden, G., ed., and P. G. Friedmann, style ed. Control Valves: Practical Guides for Measurement and Control. Research Triangle Park, NC: ISA (International Society of Automation), 1998. Baumann, Hans D. Control Valve Primer: A User’s Guide. 4th ed. Research Triangle Park, NC: ISA (International Society of Automation), 2009. IEC 60534-8-3:2010. Noise Considerations – Control Valve Aerodynamic Noise Prediction Method. Geneva, Switzerland: IEC (International Electrotechnical Commission). IEC 60534-8-4:2015. Noise Considerations – Prediction of Noise Generated by Hydrodynamic Flow. Geneva, Switzerland: IEC (International Electrotechnical Commission). ISA-75.17-1989. Control Valve Aerodynamic Noise Prediction. Research Triangle Park, NC: ISA (International Society of Automation). ISA-dTR75.24.01-2017. Linear Pneumatic Control Valve Actuator Sizing and Selection. Research Triangle Park, NC: ISA (International Society of Automation).
About the Author
Hans D. Baumann, PhD, PE, and honorary member of the International Society of Automation (ISA), is a primary consultant for HB Services Partners, LLC, of West Palm Beach, Florida. He was formerly a corporate vice president of Fisher Controls and the owner of Baumann Associates, a manufacturer of control valves. He is the author of the acclaimed Control Valve Primer and owns titles to 103 U.S. patents. He served for 36 years as a U.S. technical expert on the International Electrotechnical Commission (IEC) Standards Committee on control valves, where he made significant contributions to valve sizing and noise prediction standards.
9 Motor and Drive Control By Dave Polka and Donald G. Dunn
Introduction
Automation is a technique, method, or system of operating or controlling a process by highly automatic means, utilizing electronic devices to reduce human intervention to a minimum. Processes utilize mechanical devices to produce a force, which performs work within the process. A motor is a device that converts electrical energy into mechanical energy. There are both alternating current (AC) and direct current (DC) motors, with the AC induction motor being the most common type utilized in most industries. It is vital that automation engineers have a basic understanding of motor and electronic drive principles. The drive is the device that controls the motor. The two interact to provide the torque, speed, and horsepower (hp) necessary to operate the application or process. The simplest concept of any motor, either direct or alternating current, is that it consists of a magnetic circuit interlinked with an electrical circuit in such a manner as to produce a mechanical turning force. It was recognized long ago that a magnet could be produced by passing an electric current through a coil wound around magnetic material. Later it was established that when a current is passed through a conductor or coil situated in a magnetic field, a force is set up that tends to produce motion of the coil relative to the field. Thus, a current flowing through a wire will create a magnetic field around the wire; the more current (or turns) in the wire, the stronger the magnetic field. By changing the magnetic field, one can induce a voltage in the conductor. Finally, a force is exerted on a current-carrying conductor when it is in a magnetic field. A magnetic flux is produced when an electric current flows through a coil of wire (referred to as a stator), and current is induced in a conductor (referred to as a rotor) adjacent to the magnetic field. A force is applied at right angles to the magnetic field on any conductor when current flows through that conductor.
DC Motors and Their Principles of Operation There are two basic circuits in any DC motor: the armature (the device that rotates) and the field (the stationary part with windings). The two components magnetically interact with one another to produce rotation of the armature. The armature and the field are separate circuits, placed physically next to each other in order to promote magnetic interaction.
The armature (IA) has an integral part, called a commutator (see Figure 9-1). The commutator acts as an electrical switch, always changing polarity of the magnetic flux to ensure there is a “repelling” force taking place. The armature rotates as a result of the “repelling” motion created by the magnetic flux of the armature, in opposition to the magnetic flux created by the field winding (IF).
The physical connection of voltage to the armature is done through “brushes.” Brushes are made of a carbon material that is in constant contact with the armature’s commutator plates. The brushes are typically spring loaded to provide constant pressure of the brush to the commutator plates.
Control of Speed The speed of a DC motor is a direct result of armature voltage applied. The field receives voltage from a separate power supply, sometimes referred to as a field exciter. This exciter provides power to the field, which in turn generates current and magnetic flux. In a normal condition, the field is kept at maximum strength, allowing the field winding to develop maximum current and flux (known as the armature range). The only way to control the speed is through change in armature voltage.
Control of Torque Under certain conditions, motor torque remains constant when operating below base speed. However, when operating in the field-weakening range, torque drops off as 1/speed². If the field flux is held constant, as is the design constant of the motor, then torque is proportional to the armature current: the more load the motor sees, the more current is consumed by the armature.
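As a rough numerical illustration of these two operating regions (an assumed model with illustrative numbers, not from the original text), the available torque might be sketched as:

    def available_torque(speed_rpm, base_speed_rpm, rated_torque_lbft):
        # Constant torque below base speed; in the field-weakening range the
        # text describes torque falling off as 1/speed^2
        if speed_rpm <= base_speed_rpm:
            return rated_torque_lbft
        return rated_torque_lbft * (base_speed_rpm / speed_rpm) ** 2

    print(available_torque(875, 1750, 60.0))   # 60.0 lb-ft below base speed
    print(available_torque(3500, 1750, 60.0))  # 15.0 lb-ft at twice base speed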
Enclosure Types and Cooling Methods
In most cases, to allow the motor to develop full torque at less than 50% speed, an additional blower is required for motor cooling. The enclosures most commonly found in standard industrial applications are drip-proof, fully guarded (DPFG; see Figure 9-2); totally enclosed, non-ventilated (TENV); and totally enclosed, fan-cooled (TEFC).
DC Motor Types Series-Wound A series-wound DC motor has the armature and field windings connected in a series circuit. The starting torque developed can be as high as 500% of the full-load rating. The high starting torque is a result of the field winding being operated below the saturation point. An increase in load causes a corresponding increase in both armature and field winding current, which means the armature and field winding flux increase together. Torque in a series-wound DC motor therefore increases as the square of the current value. Compared to a shunt-wound motor, a series-wound motor will generate a larger torque increase for a given increase in current.
Shunt (Parallel) Wound Shunt wound DC motors have the armature and field windings connected in parallel. This type of motor requires two power supplies—one for the armature and one for the field winding. The starting torque developed can be 250% to 300% of the full load torque rating, for a short period of time. Speed regulation (speed fluctuation due to load) is acceptable in many cases, between 5% and 10% of maximum speed, when operated from a DC drive.
Compound-Wound Compound-wound DC motors are basically a combination of shunt and series-wound configurations. This type of motor offers the high starting torque of a series-wound motor and constant speed regulation (speed stability) under a given load. The torque and speed characteristics are the result of placing a portion of the field winding circuit, in series, with the armature circuit. When a load is applied, there is a corresponding increase in current through the series winding, which also increases the field flux, increasing torque output.
Permanent Magnet Permanent magnet motors are built with a standard armature and brushes but have permanent magnets in place of the shunt field winding. The speed characteristic is close to that of a shunt wound DC motor. This type of motor is simple to install, with only the two armature connections needed, and simple to reverse, by simply reversing the connections to the armature. Though this type of motor has very good starting torque capability, the speed regulation is slightly less than that of a compound-wound motor. Peak torque is limited to about 150%.
AC Motors and Their Principles of Operation All AC motors can be classified into single-phase and polyphase motors (poly meaning many phases, typically three). A polyphase squirrel-cage induction motor does not require the commutator, brushes, or slip rings that are common in DC motors. It has the fewest windings, the least insulation, and the lowest cost per horsepower compared to other motor designs. The two main electrical components of an AC induction motor are the stator (i.e., the stationary element that generates the magnetic flux) and the rotor (i.e., the rotating element). The stator is the stationary or primary side, and the rotor is the rotating or secondary part of the motor. The power is transmitted to the rotor inductively from the
stator through a transformer action. The rotor consists of copper or aluminum bars, connected at the ends by end rings. The rotor core is built up from many individual discs of steel, called laminations. The stator consists of cores that are also constructed from laminations. These laminations are coated with insulating varnish and then welded together to form the core (see Figure 9-3).
The revolving field set up by the stator currents cuts the conducting aluminum bars of the squirrel-cage rotor. This induces voltage in these bars, with a corresponding current flow, which sets up north and south poles in the rotor. Torque (turning of the rotor) is produced by the attraction and repulsion between these poles and the poles of the revolving stator field. Each magnetic pole pair in Figure 9-4 is wound in such a way that the stator magnetic field "rotates." The simple two-pole stator shown in the figure has three coils in each pole group (a two-pole, three-phase motor thus has six physical pole groups). Each coil in a pole group is connected to one phase of the three-phase power source. With three-phase power, each phase current reaches its maximum value at a different time interval. This is shown by the maximum and minimum values in the lower part of Figure 9-4.
Control of Speed The speed of a squirrel-cage motor depends on the frequency and the number of poles for which the motor is wound (see Equation 9-1).

N = 120 × F/P    (9-1)
where
N = shaft speed (RPM)
F = frequency of the power supply (hertz)
P = number of stator poles

Squirrel-cage motors are built with slip ranging from about 3% to 20%. The actual speed, allowing for slip, is referred to as base speed, which is the speed of the motor at rated voltage, rated frequency, and rated load. Motor direction is reversed by interchanging any two motor input leads.
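Equation 9-1 is easy to verify numerically. A short Python sketch follows (not from the original text; the example values are illustrative):

    def synchronous_speed_rpm(freq_hz, poles):
        # Equation 9-1: N = 120 * F / P
        return 120.0 * freq_hz / poles

    def shaft_speed_rpm(freq_hz, poles, slip_fraction):
        # Actual speed under load is the synchronous speed reduced by slip
        return synchronous_speed_rpm(freq_hz, poles) * (1.0 - slip_fraction)

    print(synchronous_speed_rpm(60, 4))    # 1800 RPM for a 4-pole motor at 60 Hz
    print(shaft_speed_rpm(60, 4, 0.03))    # 1746 RPM at 3% slip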
Control of Torque and Horsepower Horsepower (hp) takes into account the speed at which the shaft rotates (see Equation 9-2). By rearranging the equation, a corresponding value for torque can also be determined.

hp = T × N/5,252    (9-2)
where
T = torque (lb-ft)
N = speed (RPM)

A higher number of poles in a motor means a larger amount of torque is developed, with a correspondingly lower base speed. With a lower number of poles, the opposite is true.
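With torque in lb-ft and speed in RPM, the constant in Equation 9-2 is the standard 5,252, which gives the following Python sketch (example numbers are illustrative):

    def horsepower(torque_lbft, speed_rpm):
        # Equation 9-2: hp = T * N / 5,252
        return torque_lbft * speed_rpm / 5252.0

    def torque_from_hp(hp, speed_rpm):
        # Rearranged for torque: T = 5,252 * hp / N
        return 5252.0 * hp / speed_rpm

    print(round(horsepower(30.0, 1750), 1))     # ~10 hp
    print(round(torque_from_hp(10.0, 875), 1))  # ~60 lb-ft: half the speed needs twice the torque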
Enclosure Types and Cooling
The more common types of AC motor enclosures are the open drip-proof motor (ODP); the TENV motor; and the TEFC motor (see Figure 9-5).
AC Motor Types Standard AC Motors AC motors can be divided into two major categories: asynchronous and synchronous. The induction motor is the most common type of asynchronous motor (meaning speed is dependent on slip) (see Figure 9-6).
There are major differences between a synchronous motor and an induction motor in both construction and method of operation. A synchronous motor has a stator with axial slots that carry stator windings wound for a specific number of poles. Typically, a salient-pole rotor is used, on which the rotor winding is mounted. The rotor winding is fed by a DC supply through slip rings, or the rotor uses permanent magnets.
The induction motor has a stator winding that is similar to that of a synchronous motor, wound for a specific number of poles. Either a squirrel-cage rotor or a wound rotor can be used. In a squirrel-cage rotor, the rotor bars are permanently short-circuited with end rings. In a wound rotor, the rotor windings are brought out through slip rings so that external resistance can be connected into the rotor circuit. Synchronous motor stator poles rotate at the synchronous speed (Ns) when fed with a three-phase supply, and the rotor is fed with a DC supply. The rotor needs to be rotated at a speed close to the synchronous speed during starting. This causes the rotor poles to become magnetically coupled with the rotating stator poles, and thus the rotor starts rotating at the synchronous speed. A synchronous motor always runs at a speed equal to its synchronous speed (i.e., actual speed = synchronous speed, or N = Ns = 120 × F/P). When an induction motor stator is fed with a two- or three-phase AC supply, a rotating magnetic field (RMF) is produced. The relative speed between the stator's rotating magnetic field and the rotor induces a current in the rotor conductors. The rotor current gives rise to the rotor flux. The direction of this induced current is such that it tends to oppose the cause of its production (i.e., the relative speed between the stator's RMF and the rotor). Thus, the rotor tries to catch up with the RMF and reduce the relative speed. An induction motor always runs at a speed that is less than the synchronous speed (i.e., N < Ns).
The following is a subset of the uses and advantages of synchronous and induction motors. A synchronous motor is used in various industrial applications where constant speed is necessary, such as compressor applications. The advantages of these motors are that the speed is independent of the load and that the power factor can be adjusted. Facilities with numerous synchronous machines can operate them with a leading power factor, similar to that of a capacitor (the current phase leads the voltage phase), which helps achieve power factor correction for the facility.
The induction motor is the most commonly used motor in manufacturing facilities. The advantages of induction motors are that they can operate in a wide range of industrial conditions and that they are robust and sturdy. Induction motors are cheaper due to their simple construction. They do not have accessories such as brushes, slip rings, or commutators, which makes their maintenance costs low in comparison to synchronous machines. Simply put, induction motors require very little maintenance if applied and installed correctly. In addition, they do not require any complex circuit for starting: the three-phase motor is self-starting, while the single-phase motor can be made self-starting simply by connecting a capacitor in the auxiliary winding. Induction motors can be operated in hazardous environments, and even under water, because they do not produce sparks as DC motors do; however, the proper motor enclosure classification is required for operation in these types of applications. The disadvantage of an induction motor is the difficulty of controlling its speed. At low loads, the power factor drops to very low values, as does the efficiency. The low power factor causes a higher current to be drawn and results in higher copper losses. Induction motors have low starting torque; thus, they cannot be used for applications such as traction and lifting loads.
Wound Rotor The wound-rotor motor has controllable speed and torque characteristics. Different values of resistance are inserted into the rotor circuit to obtain various performance results. Changes in resistance values normally begin with a secondary resistance connected to the rotor circuit. The resistance is reduced to allow the motor to increase in speed. This type of motor can develop substantial torque and, at the same time, limit the amount of locked rotor current.
Synchronous The two types of synchronous motors are non-excited and DC-excited. Without complex electronic control, this motor type is inherently a fixed-speed motor. The synchronous
motor could be considered a three-phase alternator, only operated backwards. DC is applied directly to the rotor to produce a rotating electromagnetic field, which interacts with the separately powered stator windings to produce rotation. In reality, synchronous motors have little to no starting torque. An external device must be used for the initial start of the motor.
Multiple Pole Multiple pole motors could be considered “multiple speed” motors. Most of the multiple pole motors are “dual speed.” Essentially, the conduit box would contain two sets of wiring configurations—one for low-speed and one for high-speed windings. The windings would be engaged by electrical contacts or a two-position switch.
Choosing the Right Motor Application Related
The application must be considered to apply the correct motor. Factors that influence the selection of the correct motor are application, construction, industry or location, power requirements and/or restrictions, as well as installed versus operating costs (see Figure 9-7). Typical applications for an induction motor are pumps, fans, compressors, conveyors, crushers, mixers, shredders, and extruders. Typical applications for a synchronous motor are high- and low-speed compressors and large pumps, extruders, chippers, special applications with large drives and mining mills.
In addition, the manufacturer constructs the motors to comply with the standards provided or specified by the end user. In the United States, the standards utilized vary depending on the size and application of the motor. The primary standard for the construction of motors in the United States is the National Electrical Manufacturers
Association (NEMA) MG 1. In addition, there are several other standards typically utilized that vary depending on the horsepower (hp) or type of machine. The following are some of those standards:
• American Petroleum Institute – API Standard 541, Form-wound Squirrel Cage Induction Motors—375 kW (500 Horsepower) and Larger
• American Petroleum Institute – API Standard 546, Brushless Synchronous Machines—500 kVA and Larger
• Institute of Electrical and Electronics Engineers – IEEE Std. 841-2009, IEEE Standard for Petroleum and Chemical Industry—Premium-Efficiency, Severe-Duty, Totally Enclosed Fan-Cooled (TEFC) Squirrel Cage Induction Motors—Up to and Including 370 kW (500 hp)
If the motor is constructed outside of the United States, it typically complies with International Electrotechnical Commission (IEC) standards. NEMA motors are in widespread use throughout the United States and are used by some end users globally. There are some differences between NEMA and IEC standards with regard to terms, ratings, and so on. Typically, NEMA standards are considered more conservative, which allows for slight variations in design and applications. IEC standards are specific and require significant care in applying them for the specific application.
AC versus DC There are no fundamental performance limitations that would prevent a flux vector adjustable speed drive (ASD) from being used in any application where DC drives are used. In areas such as high-speed operation, the inherent capability of AC motors exceeds the capability of DC motors. Inverter-duty motors have speed range capabilities that are equal to or above the capabilities of DC motors. DC motors usually require cooling air forced through the interior of the motor in order to operate over wide speed ranges. Totally enclosed AC motors are also available with wide speed range capabilities. Although DC motors are usually significantly more expensive than AC motors, the motor-drive package price for an ASD is often comparable to the price of a DC drive package. If spare motors are required, the package price tends to favor the ASD. Because AC motors are more reliable in a variety of situations and have a longer average life, the DC drive alternative may require a spare motor while the AC drive may not. AC motors are available with a wide range of optional electrical and mechanical configurations and accessories. DC motors are generally less flexible, and the optional features are generally more expensive.
DC motors are typically operated from a DC drive, which has reduced efficiency at lower speeds. Because DC motors tend to be less efficient than AC motors, they generally require more elaborate cooling arrangements. Most AC motors are supplied in totally enclosed housings that are cooled by blowing air over the exterior surface. The motor is the controlling element of a DC drive system, while the electronic controller is the controlling element of an AC drive system. The purchase cost of a DC drive, in low horsepower sizes, may be less than that of the corresponding AC drive of the same horsepower. However, the cost of the DC motor may be twice that of the comparable AC motor. Technology advancements in ASD design have narrowed the purchase price gap with DC. DC motor brushes and commutators must be maintained and replaced after periods of operation. AC motors are typically less maintenance intensive and are more "off-the-shelf" than comparable-horsepower DC motors.
Adjustable Speed Drives (Electronic DC) Principles of Operation
IEEE 100 defines an adjustable speed drive (ASD) as “an electric drive designed to provide an easily operable means of speed adjustment of the motor, within a specified speed range.” Note the use of the word adjustable rather than variable. Adjustable implies that the speed can be controlled, while variable implies that it may change on its own. The definition also refers to the motor as being part of the adjustable speed system, implying that it is a system rather than a single device. This type of drive converts fixed voltage and frequency AC to an adjustable voltage DC. A DC drive can operate a shunt wound DC motor or a permanent magnet motor. Most DC drives use silicon-controlled rectifiers (SCRs) to convert AC to DC (see Figure 9-8).
SCRs provide output voltage when a small voltage is applied to the gate circuit. Output voltage depends on when the SCR is "gated on," which causes output for the remainder of the cycle. When the current through the SCR passes through zero, it automatically shuts off until it is gated on again. Three-phase DC drives use six SCRs for full-wave bridge rectification. Insulated-gate bipolar transistors (IGBTs) are now replacing SCRs in power conversion; IGBTs also use an extremely low voltage to gate the device on. When the speed controller circuit calls for voltage to be produced, the "M" (main) contactor is closed and the SCRs conduct. In one instant of time, current from the line enters the drive through one phase, is conducted through an SCR, and into the armature. Current flows through the armature, back into the SCR bridge, and returns to the power line through another phase. As this cycle is about complete, another phase conducts through another SCR, through the armature, and back into yet another phase. The cycle repeats 60 times per second with 60-hertz line input. Shunt field winding power is supplied by a DC field exciter, which supplies a constant voltage to the field winding, thereby creating a constant field flux. Many field exciters have the ability to reduce the supply voltage for operation above base speed.
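The text does not give the firing-angle relationship, but for a six-SCR, full-wave, three-phase bridge the textbook average-output expression can be sketched as follows in Python (continuous conduction assumed; numbers are illustrative):

    import math

    def six_pulse_bridge_vdc(v_line_line_rms, firing_angle_deg):
        # Average DC output of a three-phase, six-SCR bridge:
        # Vdc = (3 * sqrt(2) / pi) * V_LL * cos(alpha), about 1.35 * V_LL * cos(alpha)
        return (3 * math.sqrt(2) / math.pi) * v_line_line_rms * math.cos(
            math.radians(firing_angle_deg))

    print(round(six_pulse_bridge_vdc(460, 0)))   # ~621 V fully phased on
    print(round(six_pulse_bridge_vdc(460, 60)))  # ~311 V at a 60-degree firing angle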
Control of Speed and Torque
A speed reference is given to the drive’s input, which is then fed to the speed controller (see Figure 9-9).
The speed controller determines the output voltage for the desired motor speed. The current controller signals the SCRs in the firing unit to "gate on." The SCRs in the converter section convert the fixed, three-phase voltage to a DC voltage and current output in relation to the desired speed. The current measuring/scaling section monitors the output current and makes current reference corrections based on the torque requirements of the motor. If precise speed is not an issue, the DC drive and motor can operate "open loop." When more precise speed regulation is required, the speed measuring/scaling circuit is engaged by making the appropriate feedback selection. If the feedback is fed by the EMF measurement circuit, the speed measuring/scaling circuit monitors the armature voltage output. The summing circuit processes the speed reference and the feedback signal and creates an error signal, which the speed controller uses as a new, corrected speed command. If tighter speed regulation is required, direct speed feedback from a tachometer can be used instead of the EMF measurement.

For loops with high PID gain settings (> 10) or rate settings (> 60 seconds), it may be inadvisable to move to MPC. Such settings are frequently seen in loops for tight column and reactor temperature, pressure, and level control. PID controllers thrive on the smooth and gradual response of a large time constant (low integrator gain) and can achieve unmeasured load disturbance rejection that is hard to duplicate. The analog-to-digital converter (A/D) chatter and resolution limit of the large scale ranges of temperature inputs brought in through distributed control system (DCS) cards, rather than via dedicated smart transmitters, severely reduce the amount of rate action that a PID can use without creating valve dither. The low-frequency noise from the scatter of an analyzer reading also prohibits the full use of rate action. Process variable filters can help if judiciously set, based on the DCS module execution time and the analysis update time. An MPC is less sensitive to measurement noise and sensor resolution because it looks at the error over a time horizon and does not compute a derivative.
Valve deadband (backlash) and resolution (stick-slip) are problems for both the PID and the MPC. In the MPC, increasing the minimum move size to just less than the resolution limit will help reduce the dead time from valve deadband and resolution but will not eliminate the limit cycles. In general, there is a trade-off between performance (the minimum peak and integrated error in the controlled variable) and robustness (the maximum allowable unknown change in the process gain, dead time, or time constant): higher performance corresponds to lower robustness. An increase in the process dead time of 50% can cause damped oscillations in an aggressively tuned PID, but a decrease in process dead time of 50% can cause growing oscillations in an MPC or a PID with dead-time compensation (PIDx) with default tuning that initially has better performance than the PID. An MPC or PIDx is more sensitive to a decrease than to an increase in dead time. A decrease in process dead time rapidly leads to growing oscillations that are much faster than the ultimate period, whereas an increase in dead time shows up as much slower oscillations with a superimposed high-frequency limit cycle. A PID goes unstable for an increase in process dead time. A decrease in process dead time for a PID merely translates to lost opportunity, associated with a greater-than-optimal controller reset time and a smaller-than-optimal controller gain. An enhanced PID can handle increases in dead time from analyzers.
For a single controlled and manipulated variable, MPC shows the greatest improvement over PID for a process where the dynamics are fixed and move suppression is greatly reduced. However, MPC is more sensitive to an unknown change in dead time. For measured disturbances, the MPC generally has a better dynamic disturbance model than a PID controller with feedforward control, primarily because of the difficulty in properly identifying the feedforward lead-lag times. Often the feedforward dynamic compensation for PID controllers is omitted or tuned by trial and error. For constraints, the MPC anticipates a future violation by looking at the final value of a trajectory versus the limit. MPC can simultaneously handle multiple constraints; PID override controllers, by contrast, handle constraints one at a time through low or high signal selection of PID controller outputs. For interactions, the MPC is much better than the PID controller, because the addition of decoupling to a PID is generally based on steady-state gains alone. However, the benefits of the MPC over detuned or decoupled PID controllers deteriorate as the condition number of the matrix increases. The steady-state gains in the 2 x 2 matrix in Equation 18-1 show that each manipulated variable has about the same effect on the controlled variables. The inputs to the process are linearly related. The determinant is nearly zero and provides a warning that MPC is not a viable solution.
(18-1) The steady-state gains of a controlled variable for each manipulated variable in Equation 18-2 are not equal but exhibit a ratio. The outputs of the process are linearly related. Such systems are called stiff because the controlled variables move together. The system lacks the flexibility to move them independently to achieve their respective set points. Again, the determinant is nearly zero and provides a warning that MPC is not a viable solution. (18-2) The steady-state gains for the first manipulated variable (MV1) are several orders of magnitude larger than for the second manipulated variable (MV2) in Equation 18-3. Essentially, there is just one manipulated variable MV1 because the effect of MV2 is negligible in comparison. Unfortunately, the determinant is 0.9, which is far enough
above zero to provide a false sense of security. The condition number of the matrix provides a more universal indication of a potential problem than either the determinant or relative gain matrix. A higher condition number indicates a greater problem. For Equation 18-3, the condition number exceeds 10,000. (18-3) The condition number should be calculated by the software and reviewed before an MPC is commissioned. The matrix can be visually inspected for indications of possible MPC performance problems by looking for gains in a column with the same sign and size, gains that differ by an order of magnitude or more, and gains in a row that are a ratio of gains in another row. Very high process gains may cause the change in the MV to be too close to the deadband and resolution limits of a control valve and very low process gains may cause an MV to hit its output limit.
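This screening is straightforward with standard linear-algebra tools. Because the gain matrices of Equations 18-1 through 18-3 are not reproduced here, the Python/NumPy sketch below uses an illustrative near-singular 2 x 2 matrix of the "similar gains" type to show how the determinant and condition number are obtained and compared:

    import numpy as np

    # Illustrative steady-state gain matrix: both manipulated variables have
    # nearly the same effect on the controlled variables (inputs nearly
    # linearly related), so the matrix is close to singular.
    K = np.array([[1.0, 0.95],
                  [1.0, 1.00]])

    print(round(np.linalg.det(K), 3))   # 0.05 -- near zero, a warning sign
    print(round(np.linalg.cond(K)))     # ~78 -- large condition number, MPC suspect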
Costs and Benefits
The cost of MPC software varies from $10K to $100K, depending on the number of manipulated variables. The cost of high-fidelity process modeling software for real-time optimization varies from $20K to $200K. The installed cost of MPC and RTO varies from about 2 to 20 times the cost of the software, depending on the condition of the plant and the knowledge and complexity of the process and its disturbances. Process tests and model identification reveal measurements that are missing or nonrepeatable and control valves that are sloppy or improperly sized. Simple preliminary bump tests should be conducted to provide project estimates of the cost of upgrades and testing time. Often a plant is running beyond nameplate capacity or at conditions and products never intended. An MPC or RTO applied to a plant that is continually rocked by unmeasured disturbances, or where abnormal situations are the norm, requires a huge amount of time for testing and commissioning. The proper use of advanced control can reduce the variability in a key concentration or quality measurement. A reduction in variability is essential to the minimization of product that is downgraded, recycled, returned, or scrapped. Less obvious is the product given away in terms of extra purity or quantity in anticipation of variability. Other benefits from a reduction in variability often manifest themselves as a minimization of fuel, reactant, reagent, reflux, steam, coolant, recycle, or purge flow and a more
optimum choice of set points. Significant benefits are derived from the improvements made to the basic regulatory control system identified during testing. New benefits in the area of abnormal situation management are being explored by monitoring the adaptive control models as indicators of changes in the instrumentation, valves, and equipment. The benefits of MPC generally range from 1% to 4% of the cost of goods for continuous processes, with an average of around 2%. The benefits of MPC for fed-batch processes are potentially 10 times larger because the manipulated variables are otherwise constant or sequenced despite varying conditions as the batch progresses. Other advanced control technologies average significantly smaller benefits. RTO has had the most spectacular failures but also the greatest future potential.
MPC Best Practices The following list of best practices is offered as guidance and is not intended to cover all aspects.
1. Establish a user company infrastructure to make the benefits consistent.
2. Develop corporate standards to historize and report key performance indicators (KPIs).
3. Screen and eliminate outliers and bad inputs, and review results before reporting KPIs.
4. Train operators in the use and value of the MPC for normal and abnormal operation.
5. Ensure the installation is maintainable without the developer.
6. Improve field instrumentation and valves, and tune regulatory controllers, before MPC pre-tests.
7. Eliminate oscillations from overly aggressive PID controller tuning that excite nonlinearities.
8. Realize that changes even in PID loops not manipulated by the MPC can affect the MPC models.
9. Use secondary flow loops so that the MPC manipulates a flow set point rather than a valve position, to isolate valve nonlinearities from the MPC.
10. Use secondary jacket/coil temperature loops so the MPC manipulates a temperature set point rather than a coolant or steam flow, to isolate jacket/coil nonlinearities from the MPC.
11. Use flow ratio control in the regulatory system so that the MPC corrects a flow ratio instead of using flow as a disturbance variable in the MPC.
12. Generally, avoid replacing regulatory loops with MPC if the PID execution time must be less than 1 second or the PID gain is greater than 10 to deal with unmeasured disturbances.
13. Use inferential measurements (e.g., the linear dynamic estimators from Chapter 17) to provide a faster, smoother, and more reliable composition measurement.
14. Bias the inferential measurement prediction by a fraction of the error between the inferential measurement and an analyzer, after synchronization and after eliminating noise and outliers.
15. Eliminate data historian compression and filters to get raw data.
16. Conduct tests near constraint limits besides at normal operating conditions.
17. Use pre-tests (bump tests) to get step sizes and time horizons.
18. Step size should be at least five times the deadband, stick-slip, resolution limit, and noise.
19. Get at least 20 data points in the shortest time horizon.
20. Use a near-integrator approximation to shorten the time horizon if optimization is not affected.
21. Get meaningful, significant movement in the manipulated variables at varying step durations.
22. Make sure steady-state process gains are accurate for analysis, prediction, and optimization.
23. Use engineering knowledge and available models or simulators to confirm or modify gains.
24. Combine reaction and separation into the same MPC when the separation section limits reaction system performance.
25. Use singular value decomposition (SVD) and linear program (LP) cost calculation tools to build and implement large MPC applications.
26. Reformulate the MPC to eliminate interrelationships between process input variables, as seen by similar process gains in a column of the matrix.
27. Reformulate the MPC to eliminate interrelationships between process output variables, as seen by process gains in one column of the matrix having the same ratio to gains in another column.
28. Make sure the MPC is in sync and consistent with targets from planning and scheduling personnel.
29. For small changes in dynamics, modify gains and dead times online.
30. For major changes in dynamics, retest using automated testing software.
Further Information Kane, Les A., ed. Advanced Process Control and Information Systems for the Process Industries. Houston, TX: Gulf Publishing, 1999. McMillan, Gregory K. Advances in Reactor Measurement and Control. Research Triangle Park, NC: ISA (International Society of Automation), 2015. ———. Good Tuning: A Pocket Guide. 4th ed. Research Triangle Park, NC: ISA (International Society of Automation), 2015. McMillan, Gregory K., and Robert A. Cameron. Models Unleashed: Applications of the Virtual Plant and Model Predictive Control – A Pocket Guide. Research Triangle Park, NC: ISA (International Society of Automation), 2004.
About the Author Gregory K. McMillan is a retired Senior Fellow from Solutia Inc. and an ISA Fellow. He received the ISA Kermit Fischer Environmental Award for pH control in 1991 and Control magazine’s Engineer of the Year award for the process industry in 1994. He was inducted into Control magazine’s Process Automation Hall of Fame in 2001; honored as one of InTech magazine’s most influential innovators in 2003; and presented with the ISA Life Achievement Award in 2010. McMillan earned a BS in engineering physics from Kansas University in 1969 and an MS in electrical engineering (control theory) from Missouri University of Science and Technology.
VI Operator Interaction
Operator Training Operator training continues to increase in importance as systems become more complex, and the operator is expected to do more and more. It sometimes seems that, the more we automate, the more important it is for the operator to understand what to do when that automation system does not function as designed. This topic ties closely with the modeling topic because simulated plants allow more rigorous operator training.
Operator Interface: Human-Machine Interface (HMI) Software Operator interfaces, data management, and other types of software are now basic topics for automation professionals, and they fit in this category better than anywhere else. Packaged automation software that is open with respect to Open Platform Communications (OPC) covers a significant portion of the needs of automation professionals; however, custom software is still needed in some cases. That custom software must be carefully designed and programmed to perform well and be easily maintained.
Alarm Management Alarm management has become a very important topic in the safety area. The press continues to report plant incidents caused by poorly designed alarms, alarm flooding, and alarms being bypassed. Every automation professional should understand the basic concepts of this topic.
19 Operator Training By Bridget A. Fitzpatrick
Introduction Advances in process control and safety system technology enable dramatic improvements in process stability and overall performance. With fewer upsets, operators tend to make fewer adjustments to the process. Additionally, as the overall level of automation increases, less human intervention is required, and with less human intervention there is less "hands-on" learning. However, even the best technology fails to capture the operators’ knowledge of the real-time constraints and complex interactions between systems. The console operator remains integral to safe, efficient, and cost-effective operation. Operator training manages operator skills, knowledge, and behaviors.
Optimizing overall operations performance is a complex undertaking that includes a review of process design, safety system design, the level of automation, staffing design, shift schedules, and individual job design across the entire operations team. This chapter focuses on control room operator training.
Evolution of Training In early control rooms, panel-board operator training was traditionally accomplished through a progression from field operator to control room operator. This progression was commonly accomplished through on-the-job training (OJT) where an experienced operator actively mentored the student worker. As process and automation technology has advanced, these early methods have been augmented with a mix of training methods. These methods and the advantages and disadvantages of each will be discussed in the following sections.
The Training Process A successful training program is based on understanding that training is not a single pass
program, but an ongoing process. This is complicated by continual changes in both human resources and the process itself. A successful program requires support for initial training and qualification, training on changes to the process and related systems, and periodic refresher training. Developing and maintaining operator skills, knowledge, and behavior is central to operational excellence.
Training Process Steps The key steps of the training process include setting learning objectives (functional requirements), training design, materials and methods testing, metrics selection, training delivery, assessment, and continual improvement. Learning objectives define the expected learning outcomes or identified needs for changes to student skills, knowledge, or behaviors. This applies for each segment of training, as well as for the overall training program. Training design includes design work to define the best training delivery methods to be used to meet the functional requirements, schedule, and budget. Materials and methods testing refines the design and includes a “dry run” testing phase to ensure that materials are effective at meeting functional requirements on a small scale. For a new program, testing all major methods on a small scale is recommended to ensure that the technologies in use and the training staff are executing successfully.
Metrics selection is important to ensure that a baseline of student performance is available prior to training and to ensure that all personnel have a clear understanding of expectations. Training delivery is the execution phase of the training, which should include continual feedback, such as instructor-to-student, student-to-instructor, and peer feedback in the student and instructor populations. The assessment phase includes evaluating the success of both the student learning and the training program. Student learning assessment can be formal or informal. It can be performed internally or by a third party. The proper method depends on the nature of the subject. Assessment of the training program requires feedback from the participants on training relevance, style, presentation, and perceived learning. This feedback can be gathered by anonymous post-training questionnaires or follow-up discussions. For the training program’s continuous improvement, it is recommended to include an improvement phase to refine and adjust content and work processes. If training materials are largely electronic or prepared in small batches, continual improvement is not hampered by cost concerns.
Role of the Trainer Depending on the scope of the objectives, staffing may range from limited part-time support to multiple dedicated trainers. As with any role, there are skills, knowledge, and behaviors required for the trainer role. Experienced operators commonly progress into the training role. This progression leverages their operations skills and knowledge and may include familiarity with the student population. For training success, it is also important to consider other skills, including presentation and public speaking, listening skills, meeting facilitation, records management, computer skills, and coaching/mentoring skills. If the training role does not include the requirement to remain a certified operator, then active efforts to maintain the trainer’s knowledge are important. It is common to employ a “train the trainer” approach, where new process operations or changes to existing process operations are explained to the trainer, and then the trainer delivers this training to the operators. In this model, the learning skills of the trainer are critical.
Training Topics
Training topics for operators span a broad range, including: • Safety training – Perhaps the most ubiquitous training content in the industry is safety training. Much of this is required through regulation, and the content is continually assessed and updated. Training requirements are also set by a variety of automation standards, including ISA-84, ISA-18.2, and ISA-101. The format is generally a mix of classroom, computer-based training (CBT), and hands-on (e.g., live fire training) methods. • Generic process equipment training – Traditionally, operators began their career as field helpers and progressed into field operators and from there into a control room operator role. As such, they were trained hands-on to understand the operations and, to some extent, the maintenance of different process equipment. This commonly would include pumps, compressors, heat exchangers, and so on. As the efficiency with which these types of equipment were operated and the underlying complexity of these devices increased, it became more critical that staff members have a deeper understanding of how to operate and troubleshoot these basic unit operations. Training on this topic helps develop and maintain troubleshooting skills. Formal training may be more important in cases where no or limited field experience is developed.
• Instrumentation training – The evolution of instrumentation technology has also increased the need for staff to understand all common types of instrumentation for improved operations performance. • Control training – Training on common control theory is important for operators. This includes the basic concepts of proportional-integral-derivative (PID) control, ratio control, and cascade control. Overview training on advanced process control (APC) concepts and interaction is also important when APC is used. • System training – Training on the control system is critical because this is the main means of interacting with the process. This includes general system training, training specifically on the alarm system (see requirements in ISA-18.2), and training on the standard and custom displays in the human-machine interface (HMI). Where APC is used, specific interactions on the HMI and any other software packages that manage the HMI are also important considerations. • Unit-specific training – Unit-specific training will include details on the general process design (i.e., the process flow diagram [PFD] and the mass and energy balance) and the related integrity operating windows for key variables. Training will include a review of related safety studies. Specific training will be delivered for operator response to alarms, including specific expectations for all critical alarms.
Nature of Adult Learning There is an old adage, generally attributed to either Confucius or Xunzi, that states, “What I hear, I forget. What I see, I remember. What I do, I understand.” Some versions of the adage include percentage effectiveness of reading, seeing, doing, and teaching. While the adage seems true, it is not based on research and oversimplifies the complexity of learning. Knowles suggested four principles of adult learning: 1. Adults need to be involved in the planning and evaluation of their instruction. 2. Experience (including mistakes) provides the basis for the learning activities. 3. Adults are most interested in learning subjects that have immediate relevance and impact to their job or personal life. 4. Adult learning is problem-centered rather than content-oriented (Kearsley 2010).
Key learning skills to be supported include creativity, curiosity, collaboration, communication, and problem solving (Jenkins 2009). These skills also apply to effective operations. Given the impact of student-learning skills, multiple approaches may be required in order to engage and support the student base.
Training Delivery Methods To be most effective, training delivery methods must be tailored to align with the instructor, student, content, and the related learning targets. In selecting methods, it is important to understand that while the subject matter may lend itself conceptually to one or two methods, the student population will respond to these methods across a spectrum of performance. As such, a key consideration for training effectiveness is individual student learning assessment. Often the most overlooked aspect of training success is the aptitude, continued assessment, and development of the instructor. For operator training, it is important to note that the common and best practice training methods have evolved over time as both available technology and the nature of operations have changed. There are also pragmatic initial and life-cycle costs for the training materials and labor. The choice of method should be driven by an assessment of which method best meets the training functional requirements. The training delivery methods considered here include:
• “Book” learning—self study • On-the-job training (OJT) • Formal apprenticeship • External certifications/degrees • Classroom training (instructor-led) • Computer-based training (CBT) • Scenarios (paper-based) • Simulation • Expert systems
“Book” Learning—Self-Study Books or reference materials, either hard copy or electronic, can be provided on a variety
of topics. This may include: • Process technology manuals with engineering details, including design ranges • Operating manuals with detailed discussions of operating windows with both target and normal operating limits • Equipment manuals and related engineering drawings • Standard operating procedures • Emergency operating procedures In self-study, students review the material in a self-paced progression. This method is likely included in most training programs and, at a minimum, provides familiarity with reference material for later use. Learning assessment is commonly through practice tests during training, followed by a test or instructor interviews after completing the self-study. Self-study is more effective for general awareness training or incremental learning on a topic where the student has already demonstrated proficiency.
Advantages:
1. Training is self-paced, allowing the student to focus on areas of weakness and proceed through areas of established knowledge.
2. Familiarity with reference material that may later be used for troubleshooting can be useful.

Disadvantages:
1. Retention may be limited if the only interaction and reinforcement is reading the information.
2. Materials can be expensive to generate and maintain over the life cycle of the facility.
On-the-Job Training On-the-job training (OJT) allows for two general types of training: 1. Real-time mentoring of a student by a more experienced operator. This allows for a student-paced initial training method. OJT provides specific task instruction by verbal review of requirements, demonstration, and discussion, followed by supervised execution by the student. 2. Training on changes in operations, where the change is explained and potentially demonstrated to all control room operators before commissioning. Learning assessment should be included.
For initial operator training, it is unlikely that the student will observe a broad range of abnormal or emergency conditions. It is also unlikely that the mentor/instructor will have time to dissect and explain activities during upsets/abnormal conditions.

Advantages:
1. Training customized for the unit and for specific student needs.
2. High attention to student-learning assessment.
3. Improved instructor performance; teaching others solidifies their understanding of the content.

Disadvantages:
1. Higher labor cost for one-on-one or one-to-few training.
2. Limited number of instructors ideally suited to teaching. Quality of the training is dependent upon the instructor.
3. Training across all modes of normal operation may not be feasible.
4. Training for abnormal and emergency conditions may be impossible.
5. If OJT is on shift, normal work load will interrupt the training and extend the schedule.
6. If OJT is on shift, the instructor may be distracted, resulting in upset to operations.
Formal Apprenticeship A formal apprenticeship is similar to OJT, but it has a more defined curriculum and certification. An apprentice must be able to demonstrate mastery of all required skills and knowledge before being allowed to graduate to journeyman status. This is documented through testing and certification processes. Journeymen provide the on-the-job training, while adult education centers and community colleges typically provide the classroom training. In many locales, formal apprenticeship programs are regulated by governmental agencies that also set standards and provide services.

Advantages:
1. Certified skill levels for the student.
2. Motivated students that complete the program.
3. Effective method to establish basic knowledge in process equipment and instrumentation and control skills.

Disadvantages:
1. Where apprentices are trained outside of the organization, unit-specific details are limited.
2. Training is generic to the industry unless managed or influenced by the operating company directly.
3. Entry-level resources may be more expensive.
External Programs with Certification External training programs are common. These include 1- to 2-year degree programs. In cases where the industry has participated in curriculum development, these programs have been quite effective. The curriculum is commonly related to either instrumentation and control or generic process knowledge for a given industry.

Advantages:
1. Certified skill level for the student.
2. Motivated students that complete the program.
3. External programs can be an effective way to establish basic knowledge in process equipment and instrumentation and control skills.
4. Cost structure may be attractive where expertise in training these topics is not maintained at the site.

Disadvantages:
1. Training is generic to the industry unless managed by the operating company directly.
2. Entry-level resources may be more expensive.
Classroom Training (Instructor-Led) Classroom training is commonly used in conjunction with OJT. The classroom setting has limited interruptions and fewer distractions. Methods in the classroom environment include lecture and varied types of discussions. Lectures are effective for overview training or improving behavior. Lectures may include varied multimedia materials to enhance student attention. This format allows for training in limited detail to a large group in a short period of time. Students can be overwhelmed with excessive information in lecture format without reinforcement through more interactive methods. Q&A and group discussion sessions engage the students more directly, allowing for clarifying questions, which enhance learning and keep the students focused. Student interaction validates learning and provides insight to the trainer on areas that require additional lecture or alternate training methods. Discussions are more effective at training for procedural or complex topics because they allow the students to process the training material incrementally with clarifications and insights from other students. A discussion of scenarios is included below.

Advantages:
1. Training is customized for each job position.
2. Peer-student interaction improves understanding.
3. Learning is actively observed.
4. Non-instructor perspectives are shared, which can enhance learning.
5. Preassigned roles for follow-up discussion segments can improve student attention.

Disadvantages:
1. Higher student-to-instructor ratio can lower the ability of the instructor to assess understanding.
2. Students are largely passive for lectures. Attention may be hard to sustain.
3. A variety of delivery methods will be required to ensure student attentiveness.
4. Can be costly to generate and maintain large training programs.
5. Quality of the training is dependent on the instructor.
Computer-Based Training Computer-based training (CBT) includes the use of technology to provide training materials electronically. This may include a lecture delivered from a location remote to the students. Alternately, electronic materials may be provided for self-paced study. Advanced CBT methods enable interaction with the instructor and other students (this is often done asynchronously).

Advantages:
1. Training is delivered consistently.
2. Learning assessment is consistent.
3. Software methods can detect areas of student weakness and provide additional content in these areas.

Disadvantages:
1. Asynchronous interaction can lower the ability of the instructor to change methods or correct student misunderstanding.
2. Effectiveness is limited by the performance of the underlying software and related hardware.
Scenarios (Paper-Based) Facilitated discussions of possible “What-If” scenarios or review of actual facility upsets are an effective method to refine the skills, knowledge, and behaviors required. This is an extension of classroom training. Review of actual process upsets can be an effective method of development or training on lessons learned. Stepping students through the troubleshooting and decision points helps refine their understanding of both the process and general troubleshooting skills. Reviewing the upsets offline may identify creative causes and innovative workarounds. Scenarios can be discussed in a variety of formats to optimize engagement and specific student training targets. Options include: • Discuss the event to generate a consensus best response or series of responses. Review related operating procedures for accuracy and completeness as a team. • Review actual incidents with a replay of actual facility process information, alarms, and operator actions. Review the key actions and critique response.
Advantages:
1. Customized to match facility areas of concern (e.g., loss of power, instrument air, and steam).
2. Institutional knowledge of the best response to the scenario is captured and refined.
3. Well-facilitated sessions result in broad student engagement.
4. Abnormal conditions are reviewed in detail in a safe setting.
5. Storytelling on the active topic and related events strengthens operator troubleshooting skills.

Disadvantages:
1. A small number of students may dominate the discussion.
2. The student group may not be effective at understanding or resolving the scenario.
3. Students may not reach consensus.
4. The instructor must be able to respond to ideas outside the planned curriculum.
5. The team may have negative learning with incorrect conclusions.
Simulation Simulation uses software and hardware that emulate the process and allow the students to monitor the process, follow procedures, and engage in near real-time decision-making in a training environment. To be effective, the simulator must match, as closely as practical, the physical and psychological demands of the control room. Operator training simulators (OTSs) are effective for new operator training, refresher training, and specific training for abnormal and emergency operations. The inclusion of the simulator allows for validation of scenarios and evaluation of ad-hoc student responses during the training process. It is important to understand the levels of fidelity that are available and their uses. A simple “tie back” model loops outputs back to inputs with some time delay, or filtering, to achieve the simplest directional response form of simulation. This can be useful for system checkout and gives the operator hands-on experience with the HMI, especially if the process has very simple dynamics. Often operators soon recognize the limits of this type of simulation and lose confidence in its ability to respond as the real process would.
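To make the tie-back idea concrete, here is a minimal sketch in Python (purely illustrative; the function name, gain, and time constant are assumptions, not part of any particular OTS product) in which each control output is looped back to its simulated process value through a first-order lag:

    # Tie-back simulation sketch: the control output drives the simulated
    # PV toward a scaled steady-state value through a first-order lag,
    # giving a plausible directional response with no process physics.
    def first_order_lag(pv, output, gain, time_constant, dt):
        alpha = dt / (time_constant + dt)
        return pv + alpha * (gain * output - pv)

    pv = 0.0
    for _ in range(600):            # 10 minutes at a 1-second time step
        valve_output = 42.0         # would come from the control system
        pv = first_order_lag(pv, valve_output, gain=1.5,
                             time_constant=30.0, dt=1.0)

Even this crude loop is enough for HMI checkout, which is precisely why operators eventually notice that it cannot reproduce real process behavior.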
Higher fidelity models that include both mass and energy balances and process reaction dynamics more closely represent the real system. When used for training in upset conditions and emergency response, they provide operator confidence in both their abilities and the response of the actual system under those conditions. Where models already exist for other uses, the investment to make them useful for training may be small compared to the benefits.

Advantages:
1. Unit-specific training.
2. Training on the process response offline with no risk of process upset.
3. Flexibility to test all student responses without detailed instructor preparation.

Disadvantages:
1. Higher cost for higher fidelity.
2. Negative training with lower-fidelity models, resulting in operators learning an incorrect response profile.
3. Loss of operator confidence in the model with unexpected responses.
Scenarios with Expert Guidance Scenario discussions with expert guidance can be applied to both paper- and simulator-based systems. This method requires that engineering experts provide the “correct engineered” response to the scenario to be used as guidance for the training session. The method requires breaking the scenario into decision points where the students make a
consensus decision and then compare their answer to the expert advice. Each decision is discussed in detail to ensure that the instructor understands the group logic and the students understand the advantages of the expert recommendations.

Advantages:
1. The ability for unit-specific training.
2. Training on the process response offline with no risk of process upset.
3. Expert guidance cross-checks the accuracy of the instructor and/or simulator results.
4. Decision points in the scenarios focus the team on key learning targets.
5. Learning can be observed in the team setting.
6. Collaboration in troubleshooting improves skills and behaviors.

Disadvantages:
1. Scenario design impacts options in training.
2. Expert advice must be generated.
3. Students may not agree with expert recommendations.
4. Can be time intensive to maintain with changes in the facility.
Summary The console operator remains integral to safe, efficient, and cost-effective operation. Operator training manages operator skills, knowledge, and behaviors. For operator training, common and best practice training methods have evolved over time as both available technology and the nature of operations have changed. To be most effective, training delivery methods must be tailored to meet training functional requirements.
Further Information ANSI/IACET 1-2018 (R9.13.17) Standard for Continuing Education and Training. American National Standards Institute (ANSI) and the International Association for Continuing Education and Training (IACET). Blanchard, P. N. Training Delivery Methods. 2017. Accessed 22 March 2018. http://www.referenceforbusiness.com/management/Tr-Z/Training-DeliveryMethods.html#ixzz4wrUDqDw3.
Carey, Benedict. How We Learn: The Surprising Truth About When, Where, and Why It Happens. New York: Random House Publishing Group, 2014. Driscoll, M. P. Psychology of Learning for Instruction. 3rd ed. Boston: Pearson Education, Inc., 2005. Gonzalez, D. C. The Art of Mental Training: A Guide to Performance Excellence. GonzoLane Media, 2013. Grazer, B. A Curious Mind: The Secret to a Bigger Life. New York: Simon & Schuster, 2015. Illeris, K. Contemporary Theories of Learning: Learning Theorists ... In Their Own Words. New York: Taylor and Francis. Kindle Edition, 2009. Jenkins, H. Confronting the Challenges of Participatory Culture: Media Education for the 21st Century. The John D. and Catherine T. MacArthur Foundation Reports on Digital Media and Learning. Cambridge, MA: The MIT Press, 2009. Kearsley, G. Andragogy (M. Knowles). The Theory into Practice Database. 2010. Accessed 22 March 2018. http://www.instructionaldesign.org/about.html. Klein, G. Streetlights and Shadows: Searching for the Keys to Adaptive Decision Making. Cambridge, MA: The MIT Press, 2009. Knowles, M. The Adult Learner: A Neglected Species. 3rd ed. Houston, TX: Gulf Publishing, 1984.
Leonard, D. C. Learning Theories: A to Z. Santa Barbara: Greenwood Publishing, 2002. Pink, D. H. Drive: The Surprising Truth About What Motivates Us. New York: Penguin Publishing Group, 2011. Silberman, M. L., and E. Biech. Active Training: A Handbook of Techniques, Designs, Case Examples, and Tips. New Jersey: John Wiley & Sons, Inc., 2015. Strobhar, D. A. Human Factors in Process Plant Operation. New York: Momentum Press, 2013.
About the Author Bridget Fitzpatrick is the process automation authority for the Automation and Control organization within Wood. She holds an MBA in technology management from the University of Phoenix and an SB in chemical engineering from the Massachusetts Institute of Technology.
20 Effective Operator Interfaces By Bill Hollifield
Introduction and History
The human-machine interface (HMI) is the collection of monitors, graphic displays, keyboards, switches, and other technologies used by the operator to monitor and interact with a modern control system (typically a distributed control system [DCS] or supervisory control and data acquisition [SCADA] system). The design of the HMI plays a vital role in determining the operator’s ability to effectively manage the process, particularly in detecting and responding to abnormal situations. The primary issues with modern process HMIs are the design and content of the process graphics displayed to the operator. As part of the changeover to digital control systems in the 1980s and 1990s, control engineers were given a new task for which they were ill prepared. The new control systems included the capability to display real-time process control graphics for the operator on cathode ray tube (CRT) screens. However, the screens were blank, and it was the purchaser’s responsibility to come up with graphic depictions for the operator to use to control the process. Mostly for convenience, and in the absence of a better idea, the process was depicted as a piping and instrumentation drawing or diagram (P&ID) view covered in live numbers (Figure 20-1). Later versions added distracting 3D depictions, additional colors, and animation. The P&ID is a process design tool that was never intended to be used as an HMI, and such depictions are now known to be a suboptimal design for the purposes of overall monitoring and control of a process. However, significant inertia associated with HMI change has resulted in such depictions remaining commonplace. Poor graphics designed over 20 years ago have often been migrated, rather than improved, even as the underlying control systems were upgraded or replaced multiple times.
For many years, there were no available guidelines as to what constituted a “good” HMI for control purposes. During this time, poorly designed HMIs have been cited as significant contributing factors to major accidents. The principles for designing effective process graphics are now available, and many industrial companies have graphic improvement efforts underway.
An effective HMI has many advantages, including significantly improved operator situation awareness; increased process surveillance; better abnormal situation detection and response; and reduced training time for new operators.
Basic Principles for an Effective HMI This chapter provides an overview of effective practices for the creation of an improved process control HMI. The principles apply to modern, screen-based control systems and to any type of process (e.g., petrochemical, refining, power generation, pharmaceutical, mining). Application of these principles will significantly improve the operator’s ability to detect and successfully resolve abnormal situations. This chapter’s topics include: • Appropriate and consistent use of color • The display of information rather than raw data • Depiction of alarms • Use of embedded trends • Implementation of a graphic hierarchy
• Embedded information in context • Performance improvements from better HMIs • The HMI development work process
Issues with Color Coding Many existing graphics are highly colorful, and usually even a cursory review of a system’s graphics will likely uncover many inconsistencies in the use of color. It is well known that many people have difficulty differentiating a variety of colors and color combinations, with red-green, yellow-green, and white-cyan as the most common. People also do not detect color change in peripheral vision very well, and control room consoles are often laid out with several screens spread horizontally. To accommodate these facts, the most important and primary principle for color is: Color is not used as the sole differentiator of an important condition or status.
Most graphics throughout the world violate this principle. Important information on the screen should be coded redundantly via methods besides color. As an example, consider Figure 20-2, which shows the usual red-green coding of pump status on the left, with redundant grayscale coding incorporating brightness on the right. An object brighter than the graphic background is ON, darker is OFF (think of a light bulb inside them). A status word is placed next to the object. There is no mistaking these differences, even by persons having color detection problems.
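As a rough illustration of such redundant coding, the sketch below (hypothetical names and values, not from any product library) returns both a brightness relative to the gray background and a status word, so the state is never conveyed by color alone:

    GRAY_BACKGROUND = 0.50  # mid-gray background (0 = black, 1 = white)

    def pump_depiction(running):
        # Brighter than the background means ON, darker means OFF, and
        # the status word repeats the state in text next to the object.
        return {
            "fill_brightness": 0.90 if running else 0.25,
            "status_word": "ON" if running else "OFF",
        }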
Note that the printing of this book in grayscale rather than color places additional burdens on the reader in understanding some of these principles and actually illustrates
the need for redundant coding. Descriptions of the figures are intended to compensate for this.
Color Palette and Principles In order to use color effectively, a color palette should be prepared, tested in the control room environment, then documented and used. The palette should contain a limited number of easily distinguishable colors, and there should be consistent and specific uses for each color. The palette is usually part of an HMI Philosophy and Style Guide document discussed later.
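One way such a palette might be documented as data is sketched below; the colors and their assigned uses are illustrative assumptions, not values from any published style guide:

    # A small set of distinguishable colors, each reserved for one
    # documented use; anything not listed here is not used on graphics.
    PALETTE = {
        "background":      {"rgb": (128, 128, 128), "use": "graphic background"},
        "lines_and_text":  {"rgb": (48, 48, 48),    "use": "equipment outlines and labels"},
        "normal_band":     {"rgb": (176, 196, 222), "use": "normal range inside analog indicators"},
        "alarm_priority1": {"rgb": (255, 0, 0),     "use": "priority 1 alarm indicator only"},
        "alarm_priority2": {"rgb": (255, 165, 0),   "use": "priority 2 alarm indicator only"},
    }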
Bright colors are used to draw attention to abnormal situations rather than normal ones. Graphics depicting the operation running normally should not be covered in brightly saturated colors, such as bright red or green pumps, equipment, valves, and so forth. It is common to find existing graphics covered in bright red objects even when the process is running normally, and even when red is also used as an alarm color on those same graphics. Gray backgrounds for graphics are preferred. This poses the fewest potential problems with color combinations. Designs that are basically color-neutral are also preferred. When combined with modern LCD (rather than CRT) displays, this enables the reintroduction of bright lighting to the control room. A darkened control room promotes drowsiness, particularly for shift workers. Control rooms were darkened many years ago, often because of severe glare and reflection issues with CRT displays using bright colors on dark backgrounds. Those were the only possible graphic combinations when digital control systems were originally introduced, and inertia has played a significant role since then. Attempts to color code a process line with its contents are usually unworkable for most processes. A preferred method is to use consistent labeling along with alternative line thicknesses based on a line’s importance on a particular graphic. Use labeling judiciously; a process graphic is not an instruction manual, nor is it a puzzle to be deciphered.
Depiction of Alarms on Graphics Alarmed conditions should stand out clearly on a process graphic. When colors are chosen to be associated with alarms, those colors should not be used for non-alarm-related depiction purposes. Many of the traditional methods for depicting an alarm are ineffective at being visually prominent and also violate the primary principle of color use.
Figure 20-3 shows three usual, but poor, methods in which color alone is the distinguishing factor of the alarm’s importance. The fourth method shown complies with the primary principle and uses a redundantly coded alarm indicator element that appears next to a value or condition in alarm. The indicator flashes while the alarm is unacknowledged (one of the very few proper uses of animation) and ceases flashing after acknowledgement, but remains visible as long as the alarm condition is in effect. The indicator’s colors, shapes, and markings are associated with the priority of the alarm. A unique sound for each priority should also be annunciated when a new alarm occurs.
Unlike color, object movement (such as flashing) is readily detected in peripheral vision, and a new alarm needs to be noticed. When properly implemented, such indicators visually stand out and are easily detected, even on a complex graphic. A very effective practice is to provide direct access from an alarm depicted on a graphic to the information about that alarm (e.g., causes, consequences, corrective actions) typically stored in a master alarm database. Ideally a right-click or similar action on the alarm calls up a faceplate with the relevant information. There are a variety of ways to accomplish such a linkage.
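The indicator behavior just described amounts to a small state function, sketched here with hypothetical state names:

    def indicator_state(in_alarm, acknowledged):
        # No alarm condition: the indicator is not shown at all.
        if not in_alarm:
            return "hidden"
        # New, unacknowledged alarm: movement draws the eye, even in
        # peripheral vision.
        if not acknowledged:
            return "flashing"
        # Acknowledged but still active: remains visible, not flashing.
        return "steady"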
Display of Information Rather Than Raw Data It is typical for an operator to have dozens of displayable graphics, each covered with
dozens to hundreds of numeric process values. Most graphics provide little to no context coding of these raw numbers. The cognitive process for monitoring such a graphic is a difficult one. The operator must observe each number and compare it to a memorized mental map of what values constitute normal or abnormal conditions. Additionally, there are usually combinations of different process values that represent burgeoning abnormal situations. The building of a mental model containing this information is a complex and lengthy process taking months to years. The proficiency of an operator may depend on the number and type of costly process upsets that they have personally experienced, and then added to their mental maps. The situation awareness of an operator can be significantly improved when graphics are designed to display not just the numbers, but to also provide contextual information as to whether the process is running normally or not. One method of supplying desirable context is in the use of properly coded analog indication.
In Figure 20-4, much money has been spent on the process instrumentation. But this commonplace depiction of the readings provides no clue as to whether the process is running well or poorly. Interpretation of the column temperature profile requires a very experienced operator.
By contrast, Figure 20-5 shows coded analog representations of values. The “normal” range of each is shown inside the analog indicator. In the rightmost element, this is highlighted with a dotted line. On an actual graphic, this normal region would have a subtle color-coding (e.g., pale blue or green) relative to the background gray rather than the dotted line shown here for grayscale printing purposes. Any configured alarm ranges are also shown, and the alarm range changes color when the alarm is active, along with the appearance of the redundantly coded alarm indicator element.
Some measurements are inputs to automated interlock actions, and these are also shown clearly instead of expecting them to be memorized by the operator. The proximity of the process value to abnormal, alarm, and interlocked ranges is clearly depicted.
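The underlying logic can be pictured as a simple classification of the process value against its configured ranges, as in this illustrative sketch (the limit names are assumptions, and a real indicator would also handle interlock ranges and bad measurement values):

    def classify_pv(pv, normal_low, normal_high, alarm_low, alarm_high):
        # Assumes alarm_low < normal_low < normal_high < alarm_high.
        if pv <= alarm_low or pv >= alarm_high:
            return "alarm"       # within a configured alarm range
        if normal_low <= pv <= normal_high:
            return "normal"      # inside the depicted normal band
        return "abnormal"        # outside normal but not yet in alarm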
The indicators provide much-needed context. Much of the desirable mental model becomes embedded into the graphic. An operator can scan dozens of such indicators in a few seconds and immediately spot any that are abnormal and deserve further investigation. This makes training easier and facilitates abnormal situation detection even before an alarm occurs, which is highly desirable. The newest, least experienced operator can easily spot an abnormal temperature profile. The display of analog values that include context promotes process surveillance and improved overall situation awareness. Such analog indicators are best used when placed in easily scanned groups, rather than scattered around a P&ID type of depiction. Figure 20-6 shows analog indicators with additional elements (e.g., set point, mode, output) to depict a proportional-integral-derivative (PID) controller. Analog position feedback of the final control element in a loop is becoming increasingly common, and the depiction shown makes it easy for the operator to spot a mismatch between the commanded and actual position. Such mismatches can be alarmed.
It is common to show vessel levels in ways that use large splotches of bright colors. A combination trend with analog range depiction shows much more useful information in the same amount of space. Space precludes the inclusion of dozens of additional comparisons of conventional depictions versus designs that are more informative and effective. See the references section.
Embedded Trends It is common but surprising to enter a control room, examine dozens of screens, and not find a single trend depiction. Every graphic generally contains one or two values that would be much better understood if presented as trends. However, the graphics rarely incorporate them. One reason for this lack is that it is assumed that the operators can and will create any trends as needed, using trending tools supplied with the control system. In actual practice, it can take 10 to 20 clicks and data inputs to create a properly scaled, timed, and usable trend, and such trends often do not persist if the displayed graphic is changed. Trending-on-demand is often a frustrating process that takes too much operator time when handling an abnormal situation. The benefit of trends should not be dependent on an individual operator’s skill level. Trends should be embedded in the graphics and appear whenever the graphic is called up, immediately showing proper range and history. This is usually possible, but it is a graphic capability that is often not utilized. Trends should incorporate elements that depict both the normal and abnormal ranges for the trended value. There are a variety of ways to accomplish this as shown in Figure 20-7.
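One plausible way to make such trends persistent is to store each embedded trend’s configuration with the graphic itself, so it is drawn fully scaled and populated on call-up. The sketch below is illustrative; the tag, ranges, and duration are invented for the example:

    EMBEDDED_TREND = {
        "tag": "TI-101",                  # hypothetical trended point
        "duration_minutes": 60,           # history shown immediately on call-up
        "y_min": 150.0,                   # fixed, properly scaled range
        "y_max": 250.0,
        "normal_band": (180.0, 220.0),    # depicted normal range
        "alarm_limits": (160.0, 240.0),   # depicted abnormal ranges
    }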
Graphic Hierarchy Graphics should be designed in a hierarchy that enables the operator to access progressive exposure of detail as needed. Graphics designed from a stack of P&IDs will not have this; they will be “flat,” similar to a computer hard disk with only one folder for all the files. Such a structure does not provide for optimum situation awareness and control. A four-level hierarchy is recommended.
Level 1 – Process Overview Graphic An overview graphic shows the key performance indicators of the entire system under the operator’s span of control—the big picture. It provides clear indication of the current performance of the operation. The most important parameters utilize trends. It is designed to be easy to scan and detect any abnormal conditions. Status of major equipment is shown. Alarms are easily seen. Figure 20-8 is an example overview graphic from a large coal-fired power plant. This graphic was used in a proof test of advanced HMI concepts conducted by the Electric Power Research Institute (EPRI, see references) and was highly rated for providing overall situation awareness. It is a common practice that overview graphics do not incorporate actual control functionality (such as faceplate call-up), thus providing more screen space for monitoring elements. Overview graphics are often depicted on larger off-console wall monitors.
In the test, the operators found this overview graphic to be far more useful than the existing “typical” graphics in providing overall situation awareness and useful in detecting burgeoning abnormal situations.
Level 2 – Process Unit Control Graphic A single operator’s span of control is usually made up of several smaller, significantly instrumented unit operations. Examples include a single reactor, a pipeline segment, a distillation train, a furnace, and a compressor. A Level 2 graphic should exist for each of these separate unit operations. Typically, the Level 2 breakdown consists of 10 to 20 different graphics. Figure 20-9 is an example of a Level 2 graphic for a reactor. It is specifically designed such that 90+ percent of the control interactions needed for effective monitoring and control of the reactor can be accomplished from this single graphic. The primary control loops are accessible. Important parameters are trended. Interlock status is clearly shown. Production plan versus actual is depicted. Analog indicators provide context. Important abnormal situation command buttons are available. Navigation to the most likely graphics is provided. Details of reactor subsystems are provided at Level 3.
A special note about faceplates: The paradigm of most control system graphic implementations is that the selection of a measurement or object on the screen calls up a standardized faceplate for that point type. Manipulation is actually made through the faceplate element. This is a very workable paradigm. It is desirable for faceplates to appear in an area of the graphic reserved for them, rather than on top of, and obscuring, the main depiction. The Level 2 example shows this reserved area in the upper right portion of the graphic.
Level 3 – Process Unit Detail Graphic Level 3 graphics provide the detail about a single piece of equipment. These are used for a detailed diagnosis of problems. They show all the instruments and include the highly detailed interlock status. A P&ID schematic type of depiction is often the basis for a Level 3 graphic. Figure 20-10 is an example of a Level 3 graphic of a compressor. It is still possible and desirable to include analog indication and trends even at Level 3.
Most of the existing graphics in the world can be considered as “improvable” Level 3 graphics. Significant HMI improvement can be accomplished inexpensively by the introduction of new Level 1 and 2 graphics, and the more gradual alteration and improvement of the existing Level 3 screens. This will introduce inconsistencies between the new and the old, but most existing old-style graphic implementations already have significant inconsistencies that the operators have learned to deal with.
Level 4 – Support and Diagnostic Graphics Level 4 graphics provide the most detail of subsystems, individual sensors, or components. They show the most detailed possible diagnostic or miscellaneous information. A point detail or logic detail graphic is a typical example. The dividing line between Level 3 and Level 4 graphics can be somewhat indefinite. It is good to incorporate access to useful external knowledge based on process context, such as operating procedures, into the operator’s HMI.
Abnormal Situation Response Graphics Many process operations have a variety of known abnormal situations that can occur. These are events like loss of feed, heat, cooling, and compression. The proper operator response to such situations is often difficult and stressful, and, if not done correctly, can result in a more significant and avoidable upset. In many cases operators are expected to use the same existing P&ID-type of graphics to handle such situations, and often the information needed is spread out across many of those.
Operator response to such abnormal situations can often be greatly improved by the creation of special purpose Level 2 graphics, specifically designed to contain every item needed by the operator to handle certain abnormal situations.
Other Graphic Principles Besides those discussed in detail, some additional recommendations are contained in Table 20-1.
Expected Performance Improvements The principles discussed in this chapter have been tested against traditional P&ID-style, number-covered graphics in at least two published studies involving actual operators and full-capability process simulators. The first was a study published in March 2006 concerning a Nova Chemicals ethylene plant. The second was a test conducted in 2009 by EPRI. See the reference section for the reports. In these tests, operators performed significant abnormal situation detection and response tasks, using both existing “traditional” graphics and graphics created in accordance with the new principles. This chapter’s author participated in the EPRI test. In the EPRI test, new Level 1 and 2 graphics were created. Operators had many years of experience with the existing graphics, which had been unchanged for more than a decade. But with only one hour of practice use, the new graphics were proven to be significantly better in assisting the operator in: • Maintaining situational awareness
• Recognizing abnormal situations • Recognizing equipment malfunctions • Dealing with abnormal situations • Embedding knowledge into the control system Operators rated the Level 1 overview screen (Figure 20-8) highly, agreeing that it provided continuously useful “big picture” situation awareness. Positive comments were also received on the use of analog depictions, alarm depictions, and embedded trends. There were consistent positive comments on how “obvious” the new HMI made the various process situations. Values moving towards a unit trip were clearly shown and noticed. The operators commented that an HMI like this would enable faster and more effective training of new operations personnel. The best summary quote was this: “Once you got used to these new graphics, going back to the old ones would be hell.”
The HMI Development Work Process
There is a proven, straightforward methodology for the development of an effective HMI, one that follows proper principles and is based on process objectives rather than P&IDs. Step 1: Adopt a comprehensive HMI Philosophy and Style Guide. This is a detailed document containing the proper principles for creating and implementing an effective HMI. The style guide portion provides details and functional descriptions for objects and layouts that implement the intent of the philosophical principles within the capabilities of a specific control system. Most HMIs were created and altered for many years without the benefit of such a document. Step 2: For existing systems, assess and benchmark existing graphics against the HMI Philosophy and Style Guide. Create a gap analysis. Step 3: Create the Level 2 breakdown structure. For each portion of the process to be controlled by a Level 2 graphic, determine the specific performance and goal objectives for that area. These are such factors as: • Safety parameters and limits • Production rate • Energy usage and efficiency
• Run length • Equipment health • Environmental (e.g., emission controls and limits) • Production cost • Product quality • Reliability It is important to document these, along with their goals, normal ranges, and target values. This is rarely done and is one reason for the current poor state of most HMIs. Step 4: Perform task analysis to determine the specific measurements and control manipulations needed to effectively monitor the process and achieve the performance and goal objectives from Step 3. The answer determines the content of each Level 2 graphic. The Level 1 graphic is an overall distillation of the Level 2 graphics. Level 3 graphics have the full details of the subparts of the Level 2 structure. Step 5: Design the graphics using the design principles in the HMI philosophy and elements from the style guide to address the identified tasks. Appropriate reviews and refinements are included in this step. Each graphic is designed to give clear guidance as to whether the process is running well or poorly. Step 6: Install, commission, and provide training on the new HMI additions or replacements.
Step 7: Control, maintain, and periodically reassess the HMI performance.
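As a hedged illustration of Step 3, the performance and goal objectives for one Level 2 area might be documented as structured data like the sketch below; the objectives, units, and numbers are hypothetical:

    # Documented objectives for a hypothetical Level 2 reactor graphic,
    # each with a target value and a normal range, giving the task
    # analysis in Step 4 something concrete to work from.
    REACTOR_LEVEL2_OBJECTIVES = [
        {"objective": "production rate", "unit": "t/h",  "target": 12.0, "normal": (11.0, 13.0)},
        {"objective": "energy usage",    "unit": "GJ/t", "target": 1.8,  "normal": (1.6, 2.0)},
        {"objective": "product quality", "unit": "%",    "target": 99.5, "normal": (99.0, 100.0)},
    ]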
The ISA-101 HMI Standard In July 2015, the International Society of Automation (ISA) published the ANSI/ISA-101.01-2015 Human Machine Interfaces for Process Automation Systems standard. ISA-101 is a relatively short document containing consistent definitions of various aspects of an HMI. It contains generic principles of good HMI design, such as “the HMI should be consistent and intuitive” and “colors chosen should be distinguishable by the operators.” ISA-101 uses the life-cycle approach to HMI development and operations. It is mandatory to have an HMI philosophy, style guide, and object library (called System Standards). Methods to create these documents are discussed, but there are no examples. It is also mandatory to place changes in the HMI under Management of Change (MOC) procedures, similar to those that govern other changes in the plant and the control system. User training for the HMI is the only other mandatory requirement.
ISA-101 provides brief descriptions of several methods for interacting with an HMI, such as data entry in fields, entering and showing numbers, using faceplates, and use of navigation menus and buttons. It contains a brief section on the determination of the tasks that a user will accomplish when using the HMI and how those tasks feed the HMI design process. There are no examples of proper and improper human factors design and no details, such as appropriate color palettes or element depictions. ISA-101 mentions the concept of display hierarchy, but contains no detailed design guidance or detailed examples, such as those shown in the “Graphic Hierarchy” section in this chapter.
Conclusion Sophisticated, capable, computer-based control systems are currently operated via ineffective and problematic HMIs, which were created without adequate knowledge. In many cases, guidelines did not exist at the time the graphics were created and inertia has kept those graphics in commission for two or more decades. The functionality and effectiveness of these systems can be greatly enhanced if graphics are redesigned in accordance with effective principles. Significantly better operator situation awareness and abnormal situation detection and response can be achieved.
Further Information EPRI (Electric Power Research Institute). Operator HMI Case Study: The Evaluation of Existing “Traditional” Operator Graphics vs. High Performance Graphics in a Coal Fired Power Plant Simulator. EPRI Product ID 1017637. Charlotte, NC: Electric Power Research Institute, 2009. (Note that the full and lengthy EPRI report is restricted to EPRI member companies. A condensed version with many figures and detailed results is in the white paper listed below.) Errington, J., D. Reising, and K. Harris. “ASM Outperforms Traditional Interface.” Chemical Processing (March 2006). https://www.chemicalprocessing.com. Hollifield, B., D. Oliver, I. Nimmo, and E. Habibi. The High Performance HMI Handbook. Houston, TX: PAS, 2008. (All figures in this chapter are courtesy of this source and subsequent papers by PAS.) Hollifield, B., and H. Perez. High Performance HMI™ Graphics to Maximize Operator Effectiveness. Version 3.0. White paper, Houston, TX: PAS, 2015. (Available free from PAS.com.)
About the Author Bill R. Hollifield is the principal consultant responsible for the PAS work processes and intellectual property in the areas of both alarm management and high-performance HMI. He is a member of the International Society of Automation ISA18 Instrument Signals and Alarms committee, the ISA101 Human-Machine Interface committee, the American Petroleum Institute’s API RP-1167 Alarm Management Recommended Practice committee, and the Engineering Equipment and Materials Users Association (EEMUA) Industry Review Group. Hollifield was made an ISA Fellow in 2014. Hollifield has multi-company, international experience in all aspects of alarm management and HMI development. He has 26 years of experience in the petrochemical industry in engineering and operations, and an additional 14 years in alarm management and HMI software and service provision for the petrochemical, power generation, pipeline, and mining industries.
Hollifield is co-author of The Alarm Management Handbook, The High Performance HMI Handbook, and The Electric Power Research Institute (EPRI) Guideline on Alarm Management. He has authored several papers on alarm management and HMI, and he is a regular presenter on these topics in such venues as ISA, API, and EPRI symposiums. He has a BS in mechanical engineering from Louisiana Tech University and an MBA from the University of Houston.
21 Alarm Management By Nicholas P. Sands
Introduction
The term alarm management refers to the processes and practices for determining, documenting, designing, monitoring, and maintaining alarms from process automation and safety systems. The objective of alarm management is to provide the operators with a system that gives them an indication at the right time, to take the right action, to prevent an undesired consequence. The ideal alarm system is a blank banner or screen that only has an alarm for the operator when an abnormal condition occurs and clears to a blank banner or screen when the right action is taken to return to normal conditions. Most alarm systems do not work that way in practice. The common problems of alarm management are well documented. The common solutions to those common problems are also well documented in ANSI/ISA-18.2, Management of Alarm Systems for the Process Industries, and the related technical reports. This chapter will describe the activities of alarm management following the alarm management life cycle, how to get started on the journey of alarm management, and which activities solve which of the common problems. The last section discusses safety alarms.
Alarm Management Life Cycle The alarm management life cycle was developed as a framework to guide alarm management activities and map them to other frameworks like the phases of a project. A goal of the life-cycle approach to alarm management is continuous improvement, as the life-cycle activities continue for the life of the facility. The alarm management life cycle is shown in Figure 21-1 [1].
Philosophy
A key activity in the alarm management life cycle is the development of an alarm management philosophy: a document that establishes the principles and procedures to consistently manage an alarm system over time. The philosophy does not specify the details of any one alarm, but defines each of the key processes used to manage alarm systems: rationalization, design, training, monitoring, management of change, and audit. Alarm system improvement projects can be implemented without a philosophy, but the systems tend to drift back toward the previous performance. Maintaining an effective alarm system requires the operational discipline to follow the practices in the alarm philosophy. The philosophy includes definitions. Of those definitions, the most important is the one for alarm: ...audible and/or visible means of indicating to the operator an equipment malfunction, process deviation, or abnormal condition requiring a timely response [1]. This definition clarifies that alarms are indications • that may be audible, visible, or both, • of abnormal conditions and not normal conditions, • to the operator, • that require a response, and • that are timely.
Much of alarm management is the effort to apply this definition. The alarm philosophy includes many other sections to provide guidance, including: • Roles and responsibilities • Alarm class definitions • Alarm prioritization methods • Basic alarm design guidance • Advanced alarm design methods • Implementation of alarms • Alarm performance metrics and targets • Management of change • Audit A list of the required and recommended contents of an alarm philosophy can be found in ISA-18.2 [1] and ISA technical report ISA-TR18.2.1-2018 [2].
Identification Identification is the activity that generates a list of potential alarms using some of the various methods that address undesired consequences. For safety consequences, a process hazard analysis (PHA) or hazard and operability study (HAZOP) might be used. For quality or reliability, a failure modes and effects analysis (FMEA) might be used. For environmental consequences, a compliance or permit review might be used. For existing sites, the list of existing alarms is usually incorporated. For best results, the participants in the identification methods should be trained on alarm rationalization and should document basic information for each potential alarm: the consequence, corrective action, and probable cause.
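One lightweight way to capture this identification output is a simple record per potential alarm, as in the hedged sketch below; the field names and example values are hypothetical, not a prescribed master alarm database schema:

    from dataclasses import dataclass

    @dataclass
    class PotentialAlarm:
        tag: str                 # measurement or equipment tag
        consequence: str         # what happens if the operator does not act
        corrective_action: str   # what the operator should do
        probable_cause: str      # likely reason the condition occurred

    example = PotentialAlarm(
        tag="TI-101-HI",
        consequence="Product off-spec due to column overheating",
        corrective_action="Reduce reboiler steam flow",
        probable_cause="Steam valve failed open",
    )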
Rationalization
Rationalization is the process of examining one potential alarm at a time against the criteria in the definition of alarm. The product of rationalization is a set of consistent, well-documented alarms in the master alarm database. The documentation supports both the design process and operator training. Rationalization begins with alarm objective analysis, determining the rationale for the alarm. This information may have been captured in identification:
• Consequence of inaction or operator-preventable consequence
• Corrective action
• Probable cause
The key to this step is correctly capturing the consequence. This can be viewed two different ways but should have the same result. The consequence of inaction is what results if the operator does not respond to the alarm. The operator-preventable consequence is what the operator can prevent by taking the corrective action. Either way, the consequence prevented by automatic functions like interlocks is not included. This relates to the definition of alarm, as the alarms are for the operator. If there is no operator, there are no alarms. If the condition is normal, the consequence is minimal, or there is no corrective action the operator can take, the alarm should be rejected. Another important criterion is the action required. Acknowledging the alarm does not count as an action, as it would justify every alarm. The operator action should prevent, or mitigate, the consequence and should usually return the alarm condition to normal. The corrective action often relates directly to the probable cause. The next step is set-point determination. For this step, it helps to document the:
• Basis for the alarm set point
• Normal operating range
• Allowable response time or time to respond
The basis is the reason to set the alarm set point at one value or another, like the freezing temperature of a liquid. The normal operating range is the range in which the condition is not in alarm. The allowable response time is the time the operator has from the alarm to prevent the consequence. It is usually determined by the process dynamics, not chosen based on how fast the operator should respond, and it is usually estimated in time ranges, like 3–10 minutes. With this information, the key is to determine whether the operator has enough time to take the corrective action. If not, the alarm set point can be moved to allow more time to respond. Alarm classification is the next step in rationalization. Classification precedes alarm prioritization because there may be rules in the alarm philosophy that set priority by class, for example, using the highest priority for safety alarms. Alarm classes are defined in the alarm philosophy. Class is merely a grouping based on common requirements, which is more efficient than listing each requirement for each alarm. These requirements are often verified during audits and typically include:
• Training requirements
• Testing requirements
• Monitoring and reporting requirements
• Investigation requirements
• Record retention requirements
The next step is prioritization. The alarm priority is an indication to the operator of the urgency of response. Priority is of value when there is more than one active alarm. For priority to be meaningful, it must be assigned consistently and must be based on the operator-preventable consequence. In the past, priority was often based on an engineering view that escalated priority as conditions progressed away from the normal operating range, regardless of automatic actions. Prioritization usually uses:
• Allowable response time or time to respond
• Consequence severity
The consequence severity is a rating based on a table from the alarm philosophy, like the example in Table 21-1 [3].
Alarm priority is assigned using a matrix from the alarm philosophy, which usually includes the consequence severity and the time to respond. Table 21-2 is an example from ISA-TR18.2.2-2016 [3].
All of the information from rationalization is captured in the master alarm database. This information is used in detailed design.
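The rationalization documentation and consistent priority assignment lend themselves to a structured record. The following Python sketch is illustrative only: the field names, severity labels, time bands, and matrix values are assumptions, not values from ISA-18.2 or the tables referenced above.

```python
from dataclasses import dataclass

# Hypothetical priority matrix: (consequence severity, time to respond) -> priority.
# The severity labels, time bands, and priorities are illustrative assumptions;
# a real matrix comes from the site's alarm philosophy (cf. Table 21-2).
PRIORITY_MATRIX = {
    ("minor",  "ample"): "low",
    ("minor",  "short"): "medium",
    ("major",  "ample"): "medium",
    ("major",  "short"): "high",
    ("severe", "ample"): "high",
    ("severe", "short"): "highest",
}

@dataclass
class AlarmRecord:
    """One rationalized alarm in the master alarm database (illustrative)."""
    tag: str                 # e.g., "TI-101.HI"
    consequence: str         # operator-preventable consequence of inaction
    corrective_action: str   # the response the operator must take
    probable_cause: str
    set_point: float
    set_point_basis: str     # reason for the value, e.g., freezing temperature
    normal_range: tuple      # (low, high) where the condition is not in alarm
    response_time: str       # e.g., "ample" (>10 min) or "short" (3-10 min)
    severity: str            # rating from the alarm philosophy table
    alarm_class: str         # grouping of common requirements, e.g., "safety"

    def priority(self) -> str:
        # Priority is assigned consistently from the matrix, never ad hoc.
        return PRIORITY_MATRIX[(self.severity, self.response_time)]
```

Capturing the record this way makes the consistency rule concrete: priority is derived from severity and response time, never assigned alarm by alarm.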
Detailed Design
The design phase utilizes the rationalized alarms and the design guidance in the philosophy to complete basic alarm design, advanced alarm design, and the human-machine interface (HMI) design for alarms. Design guidance is often documented in an annex to the alarm philosophy, along with typical configurations. As systems change, the guidance should be updated to reflect the features and limitations of the control system.
The guidance on basic configuration may include default settings for alarm deadbands, delays, alarm practices for redundant transmitters, timing periods for discrete valves, alarm practices for motor control logic, and the methods for handling alarms on bad signal values. Many alarm system problems can be eliminated with good basic configuration practices. ISA-TR18.2.3-2015 provides additional guidance [4].

The guidance on the HMI may include alarm priority definitions, alarm color codes, alarm tones, alarm groups, alarm summary configuration, and graphic symbols for alarm states. Alarm functions are only one part of the HMI, so it is important that these requirements fit into the overall HMI philosophy as described in ANSI/ISA-101.01-2015 [5]. A common component of the HMI design guide is a table of alarm priorities, alarm colors, and alarm tones. Some systems have the capability to show shapes or letters next to alarms, a useful technique for helping color-blind operators recognize alarm priorities.

Beyond the basic configuration and HMI design, there are many techniques to reduce the alarm load on the operator and improve the clarity of the alarm messages. These techniques range from first-out alarming to state-based alarming to expert systems for fault diagnosis. The techniques used should be defined in the alarm philosophy, along with the implementation practices in the design guide. Some advanced alarming (e.g.,
suppressing low-temperature alarms when the process is shut down) is usually needed to achieve the alarm performance targets. There are many methods for advanced alarming, which often vary with the control system and change over time as new functions are introduced. The common challenge is maintaining the advanced alarming design over time. For this reason, the advanced alarming should be well documented [6].
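As a concrete illustration of two of the basic configuration settings mentioned above, the sketch below applies a deadband and an on-delay to an analog high alarm. It is a minimal sketch under assumed names and scan-based timing; real systems implement these functions within the control system configuration.

```python
class HighAlarm:
    """Analog high alarm with deadband and on-delay (illustrative sketch).

    The alarm activates only after the value has exceeded the set point for
    `on_delay_scans` consecutive scans, and it clears only after the value
    drops below (set_point - deadband); both are classic defenses against
    chattering alarms.
    """

    def __init__(self, set_point, deadband, on_delay_scans):
        self.set_point = set_point
        self.deadband = deadband
        self.on_delay_scans = on_delay_scans
        self._scans_above = 0
        self.active = False

    def update(self, value):
        if value > self.set_point:
            self._scans_above += 1
            if self._scans_above >= self.on_delay_scans:
                self.active = True
        else:
            self._scans_above = 0
            # Clear below the deadband, not at the set point itself.
            if self.active and value < self.set_point - self.deadband:
                self.active = False
        return self.active
```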
Implementation
The implementation stage of the life cycle is the transition from design to operation. The main tasks are procedure development, training, and testing. Training is one of the most essential steps in developing an alarm system. Since an alarm exists only to notify the operator to take an action, the operator must know the corresponding action for each alarm, as defined in the alarm rationalization. A program should be in place to train operators on these actions. Documentation on all alarms, sometimes called alarm response procedures, should be easily accessible to the operator. Beyond the alarm-specific training, the operator should be trained on the alarm philosophy and the HMI design. A complete training program includes initial training and periodic refresher training. Additional procedures on alarm shelving, if used, and taking an alarm out of service should be developed and personnel should be trained on the procedures.
Testing for alarms is usually class-dependent. Some alarms require end-to-end testing and others just verification of alarm set points and priorities.
Operation
In the operation stage of the alarm life cycle, the alarm performs its function as designed, indicating to the operator that it is the right time to take the right action to avoid an undesired consequence. The main activities of this stage are refresher training for the operator and use of the shelving and out-of-service procedures. The ANSI/ISA-18.2 standard does not use the words disable, inhibit, hide, or similar terms, which might be used in different ways in different control systems. It does describe different types of alarm suppression (i.e., hiding the alarm annunciation from the operator), distinguished by the engineering or administrative controls for the suppression. These states and transitions are shown in the alarm state transition diagram in Figure 21-2 [1].
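The distinction between shelving, designed suppression, and out-of-service status can be read as a set of states. The enumeration below is a simplified, illustrative reading of the state model; the state names approximate those in the diagram, and the groupings shown are assumptions.

```python
from enum import Enum, auto

class AlarmState(Enum):
    # Normal annunciation sequence
    NORMAL = auto()
    UNACK_ALARM = auto()           # in alarm, not yet acknowledged
    ACK_ALARM = auto()             # in alarm, acknowledged
    RTN_UNACK = auto()             # returned to normal, not yet acknowledged
    # Suppression states, distinguished by what controls the suppression
    SHELVED = auto()               # operator-initiated, temporary, administrative
    SUPPRESSED_BY_DESIGN = auto()  # engineering logic, e.g., state-based alarming
    OUT_OF_SERVICE = auto()        # maintenance, via the explicit procedure
```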
Maintenance
In the maintenance stage of the life cycle, the alarm does not perform its designed function, but is out of service for testing, repair, replacement, or another reason. The out-of-service procedure is the explicit transition of an alarm from operation to maintenance and return-to-service is the transition back to operation.
Monitoring and Assessment
Monitoring is the tracking of all the transitions in Figure 21-2. This data can be consolidated into performance and diagnostic metrics. The performance metrics can be assessed against the targets in the alarm philosophy. If the performance does not meet the targets, the related diagnostic metrics can usually point to specific alarms to be reviewed for changes. Monitoring the alarm system performance is a critical step in alarm management. Since each alarm requires operator action for success, overloading the operator reduces the effectiveness of the alarm system. Instrument problems, controller performance issues, and changing operating conditions will cause the performance of the alarm system to degrade over time. Monitoring and taking action to address bad actors can maintain a
system at the desired level of performance. Table 21-3 shows the recommended performance metrics and targets from ANSI/ISA-18.2-2016 [1]. A more detailed discussion of metrics can be found in ISA-TR18.2.5-2012 [7].
The alarm philosophy should define report frequencies, metrics, and thresholds for action. The performance metrics are usually calculated per operator position or operator console. Common measurements include:
• Average alarm rate, such as total number of alarms per operator per hour
• Time > 10 alarms per 10 minutes, or time in flood
• Stale alarms
• Annunciated alarm priority distribution
Measurement tools allow reporting of the metrics at different frequencies. Typically, there are daily reports to the personnel responsible for taking action, and weekly or monthly reports to management. The type of data reported varies, depending on the control system or safety system and the measurement tool. One of the most useful reports to create is one that lists the top 10 most frequent alarms, as it clearly highlights the alarms with issues. It is recommended to set a target and an action limit for each performance metric, where better than the target is good, worse than the action limit is bad, and between the limits
indicates improvement is possible. This allows a green-yellow-red stoplight indication for management.
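A minimal sketch of how these measurements can be computed from an alarm event log follows. The log format and function name are assumptions for illustration; commercial measurement tools perform these calculations directly against the control system's event journal.

```python
from collections import Counter

def alarm_metrics(events, period_hours):
    """Compute basic alarm performance metrics from an event log.

    `events` is a list of (timestamp_minutes, tag) tuples for the alarms
    annunciated at one operator position over the reporting period.
    """
    total = len(events)
    avg_rate = total / period_hours  # alarms per operator position per hour

    # Percent time in flood: any 10-minute window with more than 10 alarms.
    times = [t for t, _ in events]
    windows = int(period_hours * 6)  # six 10-minute windows per hour
    flood = sum(
        1 for w in range(windows)
        if sum(1 for t in times if 10 * w <= t < 10 * (w + 1)) > 10
    )
    pct_time_in_flood = 100.0 * flood / windows

    # Top 10 most frequent alarms -- often the single most useful report.
    top_10 = Counter(tag for _, tag in events).most_common(10)

    return avg_rate, pct_time_in_flood, top_10
```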
Management of Change
Alarm monitoring and other activities will drive changes to alarms and the alarm system. These changes should be approved through a management of change process that includes the activities of updating rationalization, design, and implementation. Usually there are one or more management of change processes already established for process safety management (PSM) or current good manufacturing practices (cGMP), which would encompass changes for alarms. The alarm philosophy will define the change processes and the steps necessary to change alarms.
Audit
The audit stage of the life cycle represents a benchmark and audit activity to drive execution improvements and updates to the alarm philosophy. The benchmark is a comparison of execution and performance against criteria like those in ANSI/ISA-18.2-2016. The audit is a periodic check of execution and performance against the alarm philosophy and site procedures. It is recommended to include an interview with the operators in any benchmark or audit [7].
Getting Started
The best way to start the alarm management journey depends on whether it is a new or existing facility. The alarm management life cycle has three official starting points:
1. Philosophy
2. Monitoring and Assessment
3. Audit
New Facilities
The alarm philosophy is the recommended starting point for new facilities. A philosophy should be developed or adopted early in the project, and the life cycle used to identify project activities and deliverables. A new facility will only start up with a good alarm system if the activities above are included in the project schedule and budget. There are technical reports on applying alarm management to batch and discrete processes, ISA-TR18.2.6-2012 [8], and to packaged systems, ISA-TR18.2.7-2017 [9].
Existing Facilities
Existing facilities may start with an alarm philosophy, but it is common to start with monitoring and assessment or a benchmark. The advantage of starting with monitoring is that the alarm load can be quantified and problem alarms, sometimes called bad actors, can be identified and addressed. This usually allows the plant team to see the problem and to see that progress can be made, which can make it easier to secure funding for alarm management work. A benchmark can serve the same purpose, highlighting gaps in training and procedures. The benchmark leads to the development of the alarm management philosophy.
Alarms for Safety
Safety alarms require more attention than general alarms, though the meaning of the term safety alarm differs between industries. Within the ANSI/ISA-18.2 standard, a set of requirements is called out for Highly Managed Alarms, such as safety alarms. These requirements include training with documentation, testing with documentation, and control of suppression. These requirements are repeated in ANSI/ISA-84.91.01-2012 [10].
When alarms are used as protection layers, the performance of the individual alarm and the alarm system should be considered. Safety alarms should not be allowed to become nuisance alarms. If the alarm system does not perform well, no alarm in the system should be considered reliable enough to use as a layer of protection.
References
1. ANSI/ISA-18.2-2016. Management of Alarm Systems for the Process Industries. Research Triangle Park, NC: ISA (International Society of Automation).
2. ISA-TR18.2.1-2018. Alarm Philosophy. Research Triangle Park, NC: ISA (International Society of Automation).
3. ISA-TR18.2.2-2016. Alarm Identification and Rationalization. Research Triangle Park, NC: ISA (International Society of Automation).
4. ISA-TR18.2.3-2015. Basic Alarm Design. Research Triangle Park, NC: ISA (International Society of Automation).
5. ANSI/ISA-101.01-2015. Human Machine Interfaces for Process Automation Systems. Research Triangle Park, NC: ISA (International Society of Automation).
6. ISA-TR18.2.4-2012. Enhanced and Advanced Alarm Methods. Research Triangle Park, NC: ISA (International Society of Automation).
7. ISA-TR18.2.5-2012. Alarm System Monitoring, Assessment, and Auditing. Research Triangle Park, NC: ISA (International Society of Automation).
8. ISA-TR18.2.6-2012. Alarm Systems for Batch and Discrete Processes. Research Triangle Park, NC: ISA (International Society of Automation).
9. ISA-TR18.2.7-2017. Alarm Management When Utilizing Packaged Systems. Research Triangle Park, NC: ISA (International Society of Automation).
10. ANSI/ISA-84.91.01-2012. Identification and Mechanical Integrity of Safety Controls, Alarms, and Interlocks in the Process Industry. Research Triangle Park, NC: ISA (International Society of Automation).
About the Author
Nicholas P. Sands, PE, CAP, ISA Fellow, is currently a senior manufacturing technology fellow with more than 28 years at DuPont, working in a variety of automation roles at several different businesses and plants. He has helped develop company standards and best practices in the areas of automation competency, safety instrumented systems, alarm management, and process safety. Sands has been involved with ISA for more than 25 years, working on standards committees—including ISA18, ISA101, ISA84, and ISA105—as well as training courses, the ISA Certified Automation Professional (CAP) certification, and section and division events. His path to automation started when he earned a BS in chemical engineering from Virginia Tech.
VII Safety
HAZOP Studies
Hazard and operability studies (HAZOP), also termed HAZOP analysis or just HAZOP, are systematic team reviews of process operations to determine what can go wrong and to identify where existing safeguards are inadequate and risk-reduction actions are needed. HAZOP studies are often required to comply with regulatory requirements, such as the U.S. Occupational Safety and Health Administration’s (OSHA’s) Process Safety Management Standard (29 CFR 1910.119), and are also used as the first step in determining the required safety integrity level (SIL) for safety instrumented functions (SIFs) to meet a company’s predetermined risk tolerance criteria.
Safety Life Cycle
A basic knowledge of reliability is fundamental to the concepts of safety and safety instrumented systems. Process safety and safety instrumented systems (SISs) are increasingly important topics. Safety is important in all industries, especially in large industrial processes, such as petroleum refineries, chemicals and petrochemicals, pulp and paper mills, and food and pharmaceutical manufacturing. Even in areas where the materials being handled are not inherently hazardous, personnel safety and property loss are important concerns. An SIS is simple in concept but requires a lot of engineering to apply well.
Reliability
In the field of reliability engineering, the primary metrics employed include reliability, unreliability, availability, unavailability, and mean time to failure (MTTF). Failure modes also need to be considered, for example, in safety instrumented function (SIF) verification.
22 HAZOP Studies
By Robert W. Johnson
Application
Hazard and operability studies (HAZOPs), also termed HAZOP analyses or just HAZOPs, are systematic team reviews of process operations to determine what can go wrong and to identify where existing safeguards are inadequate and risk-reduction actions are needed. HAZOP Studies are typically performed on process operations involving hazardous materials and energies. They are conducted as one element of managing process risks, and are often performed to comply with regulatory requirements such as the U.S. Occupational Safety and Health Administration’s (OSHA’s) Process Safety Management Standard (29 CFR 1910.119). HAZOP Studies are also used as the first step in determining the required safety integrity level (SIL) for safety instrumented functions (SIFs) to meet a company’s predetermined risk tolerance criteria in compliance with IEC 61511, Functional Safety: Safety Instrumented Systems for the Process Industry Sector. An international standard, IEC 61882, is also available that addresses various applications of HAZOP Studies.
Planning and Preparation
HAZOP Studies require significant planning and preparation, starting with a determination of which company standards and regulatory requirements need to be met by the study. The study scope must also be precisely determined, including not only the physical boundaries but also the operational modes to be studied (continuous operation, start-up/shutdown, etc.) and the consequences of interest (e.g., safety, health, and environmental impacts only, or operability issues as well). HAZOP Studies may be performed in less detail at the early design stages of a new facility, but are generally reserved for the final design stage or for operating facilities. HAZOP Studies are usually conducted as team reviews, with persons having operating experience and engineering expertise being essential to the team. Depending on the process to be studied, other backgrounds may also need to be represented on the team
for a thorough review, such as instrumentation and controls, maintenance, and process safety. Study teams have one person designated as the facilitator, or team leader, who is knowledgeable in the HAZOP Study methodology and who directs the team discussions. Another person is designated as the scribe and is responsible for study documentation. Companies often require a minimum level of training and experience for study facilitators. To be successful, management must commit to providing trained resources to facilitate the HAZOP Study and to making resources available to address the findings and recommendations in a timely manner. A facilitator or coordinator needs to ensure all necessary meeting arrangements are made, including reserving a suitable meeting room with minimum distractions and arranging any necessary equipment. For a thorough and accurate study to be conducted, the review team will need to have ready access to up-to-date operating procedures and process safety information, including such items as safety data sheets, piping and instrumentation diagrams, equipment data, materials of construction, established operating limits, emergency relief system design and design basis, and information on safety systems and their functions.
Nodes and Design Intents
The first step in the HAZOP Study is to divide the review scope into nodes or process segments. Adjacent study nodes will generally have different relevant process parameters, with typical study nodes being vessels (with parameters of importance such as level, composition, pressure, temperature, mixing, and residence time) and transfer lines (with parameters such as source and destination locations, flow rate, composition, pressure, and temperature). Nodes are generally studied in the same direction as the normal process flow. The HAZOP Study team begins the analysis of each node by determining and documenting its design intent, which defines the boundaries of “normal operation” for the node. This is a key step in the HAZOP Study methodology because the premise of the HAZOP approach is that loss events occur only when the facility deviates from normal operation (i.e., during abnormal situations). The design intent should identify the equipment associated with the node including source and destination locations, the intended function(s) of the equipment, relevant parameters and their limits of safe operation, and the process materials involved including their composition limits. An example of a design intent for a chemical reactor might be to:
Contain and control the complete reaction of 1,000 kg of 30% A and 750 kg of 98% B in EP-7 by providing mixing and external cooling to maintain 470–500°C for 2 hours, while venting off-gases to maintain < 100 kPa gauge pressure.
For procedure-based operations such as unloading or process start-up, the established operating procedure or batch procedure is an integral part of what defines “normal operation.”
Scenario Development: Continuous Operations
Figure 22-1 illustrates how the HAZOP Study methodology interfaces with a typical incident sequence to develop possible incident scenarios associated with a study node. Terminology in this figure is consistent with the definitions in the Guidelines for Hazard Evaluation Procedures, Third Edition, published by the American Institute of Chemical Engineers – Center for Chemical Process Safety (AIChE-CCPS).
The typical incident sequence starts with the initiating cause, which is the event that marks a transition from normal operation to an abnormal situation or deviation. If a preventive safeguard such as an operator response to an alarm or a safety instrumented function is successful, the process will be brought back to normal operation or to a safe state such as unit shutdown. However, if the preventive safeguards are inadequate or do not function as intended, a loss event such as a toxic release or a bursting vessel explosion may result. Mitigative safeguards such as emergency response actions can reduce the impacts of the loss event. The HAZOP Study starts by applying a set of guide words to the design intent (described in the preceding section) to develop meaningful deviations from the design intent. This can be done either one guide word at a time, or one parameter (such as flow
or temperature) at a time. Once a meaningful deviation is identified, the team brainstorms what could cause the deviation, then what possible consequences could develop as a result of the deviation, then what safeguards could intervene to keep the consequences from being realized. Each unique cause-consequence pair, with its associated safeguards, is a different scenario. Facilitators use various approaches to ensure a thorough yet efficient identification of all significant scenarios, such as identifying only local causes (i.e., only those initiating causes associated with the node being studied), investigating global consequences (anywhere, anytime), and ensuring that incident sequences are taken all the way through to the loss event consequences.
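The guide-word mechanics can be illustrated in a few lines of code. The guide words below are the classic HAZOP set; pairing them with node parameters generates candidate deviations, though deciding which combinations are meaningful remains a team judgment. The node parameters shown are assumed for illustration.

```python
# The classic HAZOP guide-word set.
GUIDE_WORDS = ["NO/NONE", "MORE", "LESS", "AS WELL AS",
               "PART OF", "REVERSE", "OTHER THAN"]

# Parameters relevant to a transfer-line node (assumed for illustration).
PARAMETERS = ["flow", "composition", "pressure", "temperature"]

# Generate candidate deviations; the team keeps only the meaningful ones,
# e.g., "NO/NONE flow", "MORE pressure", "REVERSE flow".
candidates = [f"{gw} {p}" for gw in GUIDE_WORDS for p in PARAMETERS]
```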
Scenario Development: Procedure-Based Operations
Procedure-based operations are those process operations that manually or automatically follow a time-dependent sequence of steps to accomplish a specific task or make a particular product. Examples of procedure-based operations are tank truck unloading; product loadout into railcars; start-up and shutdown of continuous operations; transitioning from one production mode to another; sampling procedures; and batchwise physical processing, chemical production, and waste treatment. Many companies have discovered the importance of studying the procedure-based aspects of their operations in detail, with the realization that most serious process incidents have occurred during stepwise, batch, nonroutine, or other transient operational modes. Scenario development for procedure-based operations starts with the steps of an established operating procedure or batch sequence, and then uses the HAZOP guide words to identify meaningful deviations from each step or group of steps. For example, combining the guide word “NONE” with a procedural step to close a drain valve would lead to identifying the deviation of the drain valve not being closed, such as due to an operator skipping the step.

Note: Section 9.1 of the AIChE-CCPS guidelines describes an alternative two-guide-word approach that can be used to identify potential incident scenarios associated with procedure-based operations.
Determining the Adequacy of Safeguards
After possible incident scenarios are identified, the HAZOP Study team evaluates each scenario having consequences of concern to determine whether the safeguards that are
currently built into the process (or process design, for a new facility) are adequate to reduce risks to a tolerable level. Some companies perform this evaluation as each scenario is identified and documented; other companies wait until all scenarios are identified. A range of approaches is used by HAZOP Study teams to determine the adequacy of safeguards, from a purely qualitative judgment, to the use of risk matrices (as described below), to order-of-magnitude quantitative assessments (AIChE-CCPS guidelines, Chapter 7). However, all approaches fundamentally are evaluating the level of risk posed by each scenario and deciding whether the risk is adequately controlled or whether it is above or below a predetermined action threshold. Scenario risk is a function of the likelihood of occurrence and the severity of consequences of the scenario loss event. According to the glossary in the AIChE-CCPS guidelines, a loss event is defined as:
The point of time in an abnormal situation when an irreversible physical event occurs that has the potential for loss and harm impacts. Examples include the release of a hazardous material, ignition of flammable vapors or an ignitable dust cloud, and the overpressurization rupture of a tank or vessel. An incident might involve more than one loss event, such as a flammable liquid spill (first loss event) followed by the ignition of a flash fire and pool fire (second loss event) that heats up an adjacent vessel and its contents to the point of rupture (third loss event). In the multiple loss event example in the definition above, each of the three loss events would pose a different level of risk and each can thus be evaluated as a separate HAZOP Study scenario. Figure 22-2 illustrates that the likelihood of occurrence of the loss event (i.e., the loss event frequency) is a function of the initiating cause frequency reduced by the effectiveness of all preventive safeguards taken together that would keep the loss event from being realized, given that the initiating cause has occurred. The severity of consequences is the loss event impact, which is generally assessed in terms of human health effects and environmental damage. The assessed severity of consequences may include property damage and other business impacts as well. The scenario risk is then the product of the loss event frequency and severity.
Companies typically define a threshold level of scenario risk. Below this threshold, safeguards are considered to be adequate and no further risk-reduction actions are considered to be warranted. This threshold can either be numerical, or, more commonly, can be shown in the form of a risk matrix that has frequency and severity on the x and y axes. The company’s risk criteria or risk matrix may also define a high-risk region where greater urgency or priority is placed on reducing the risk. The risk criteria or risk matrix provides the HAZOP Study team with an aid for determining where risk reduction is required.
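In code form, the arithmetic just described reduces to a product of the initiating cause frequency and the safeguard failure probabilities, compared against a severity-dependent threshold. All frequencies, PFD values, and threshold values in this sketch are assumptions for illustration.

```python
def loss_event_frequency(initiating_cause_freq, safeguard_pfds):
    """Initiating cause frequency reduced by every preventive safeguard,
    each modeled by its probability of failure on demand (PFD)."""
    f = initiating_cause_freq
    for pfd in safeguard_pfds:
        f *= pfd
    return f

# Example scenario (all numbers assumed): operator error at 0.1/yr, reduced
# by a BPCS loop (PFD 0.1) and an independent alarm response (PFD 0.1).
f_loss = loss_event_frequency(0.1, [0.1, 0.1])   # 1e-3 per year

# A company risk matrix reduces, per severity category, to a threshold test.
RISK_THRESHOLD = {"minor": 1e-2, "serious": 1e-4, "fatality": 1e-5}  # per year
needs_risk_reduction = f_loss > RISK_THRESHOLD["serious"]            # True
```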
Recording and Reporting
HAZOP Study results in the form of possible incident scenarios are generally recorded in a tabular format, with different columns for the various elements of the HAZOP scenarios. In the book HAZOP: Guide to Best Practice, Third Edition, Crawley et al. give the minimum set of columns as Deviation, Cause, Consequence, and Action. Current usage generally documents the Safeguards in an additional separate column, as well as the Guide Word and/or Parameter. The table might also include documentation of the likelihood, severity, risk, and action priority. Table 22-1, adapted from the AIChE-CCPS guidelines, shows a typical HAZOP scenario documentation with the numbers in the Actions column referring to a separate tabulation of the action items (not shown).
Johnson (2008, 2010) shows how the basic HAZOP Study can be extended by obtaining a risk magnitude for each scenario by adding cause frequency and consequence severity magnitudes, then subtracting safeguards effectiveness magnitudes. By considering the safeguards specifically as independent protection layers (IPLs) and conditional modifiers, as shown in Table 22-2, the resulting HAZOP Study has the features of a Layer of Protection Analysis (LOPA), which is useful for determining the required safety integrity level (SIL) to meet a company’s risk tolerance criteria for complying with IEC 61511.
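The order-of-magnitude arithmetic of this extension reads naturally as base-10 exponents, as the sketch below shows. The specific magnitudes are assumptions for illustration, not values from Johnson or the AIChE-CCPS guidelines.

```python
# Working in order-of-magnitude (base-10 exponent) terms:
# risk magnitude = cause frequency magnitude + severity magnitude
#                  - sum of safeguard (IPL and conditional-modifier) magnitudes
cause_freq_magnitude = -1   # initiating cause ~1e-1 per year (assumed)
severity_magnitude = 4      # consequence severity on an assumed exponent scale
ipl_magnitudes = [1, 2]     # e.g., alarm response (x10), relief valve (x100)

risk_magnitude = cause_freq_magnitude + severity_magnitude - sum(ipl_magnitudes)
# Comparing risk_magnitude against the company's tolerable-risk magnitude
# gives the remaining gap, which maps directly to the required SIL of a SIF.
```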
Computerized aids are commercially available for documenting HAZOP Studies (AIChE-CCPS guidelines, Appendix D). The final HAZOP Study report consists of documentation that gives, as a minimum, the study scope, team members and attendance, study nodes, HAZOP scenarios, and the review team’s findings and recommendations (action items). The documentation may also include a process description, an inventory of the process hazards associated with the study scope, a listing of the process safety information on which the study was based, any auxiliary studies or reference to such studies (e.g., for human factors, facility siting, or inherent safety), and a copy of marked-up piping and instrumentation diagrams (P&IDs) that show what equipment was included in each node. The timely addressing of the HAZOP Study action items, and documentation of their resolution, is the responsibility of the owner/operator of the facility. HAZOP Studies are not generally kept as evergreen documents (always immediately updated whenever a change is made to the equipment or operation of the facility). They are typically updated or revalidated on a regular basis, at a frequency determined by company practice and regulatory requirements.
Further Information
AIChE-CCPS (American Institute of Chemical Engineers – Center for Chemical Process Safety). Guidelines for Hazard Evaluation Procedures. 3rd ed. New York: AIChE-CCPS, 2008.
AIChE (American Institute of Chemical Engineers). Professional and technical training courses. “HAZOP Studies and Other PHA Techniques for Process Safety and Risk Management.” New York: AIChE. www.aiche.org/academy.
Crawley, F., M. Preston, and B. Tyler. HAZOP: Guide to Best Practice. 3rd ed. Rugby, UK: Institution of Chemical Engineers, 2015.
IEC 61511 Series. Functional Safety – Safety Instrumented Systems for the Process Industry Sector. Geneva, Switzerland: IEC (International Electrotechnical Commission).
IEC 61882. Hazard and Operability Studies (HAZOP Studies) – Application Guide. Geneva, Switzerland: IEC (International Electrotechnical Commission).
Johnson, R. W. “Beyond-Compliance Uses of HAZOP/LOPA Studies.” Journal of Loss Prevention in the Process Industries 23, no. 6 (November 2010): 727–733.
——— “Interfacing HAZOP Studies with SIL Determinations using Exponential Frequency and Severity Categories.” ISA Safety Symposium, Calgary, Alberta, April 2008.
About the Author
Robert W. Johnson is president of the Unwin Company process risk management consultancy. Johnson, a process safety specialist since 1978, has authored six books and numerous technical articles and publications on process safety topics. He is a fellow of the American Institute of Chemical Engineers (AIChE) and past chair of the AIChE Safety and Health Division. Johnson lectures on HAZOP Studies and other process safety topics for the AIChE continuing education program and has taught process safety at the university level. He has a BS and MS in chemical engineering from Purdue University. Johnson can be contacted at Unwin Company, 1920 Northwest Boulevard, Suite 201, Columbus, Ohio 43212 USA; +1 614 486-2245; [email protected].
23 Safety Instrumented Systems in the Process Industries
By Paul Gruhn, PE, CFSE
Introduction
Safety instrumented systems (SISs) are one means of maintaining the safety of process plants. These systems monitor a plant for potentially unsafe conditions and bring the equipment, or the process, to a safe state if certain conditions are violated. Today’s SIS standards are performance-based, not prescriptive. In other words, they do not mandate technologies, levels of redundancy, test intervals, or system logic. Essentially, they state, “the greater the level of risk, the better the safety systems needed to control it.”

Hindsight is easy. Everyone always has 20/20 hindsight. Foresight, however, is a bit more difficult. Foresight is required with today’s large, high-risk systems. We simply cannot afford to design large petrochemical plants by trial and error. The risks are too great to learn that way. We have to try to prevent certain accidents, no matter how remote the possibility, even if they have not yet happened. This is the subject of system safety.

There are a number of methods for evaluating risk. There are also a variety of methods for equating risk to the performance required of a safety system. The overall design of a safety instrumented system (SIS) is not a simple, straightforward matter. The total engineering knowledge and skills required are often beyond that of any single person. An understanding is required of the process, operations, instrumentation, control systems, and hazard analysis. This typically calls for the interaction of a multidisciplined team. Experience has shown that a detailed, systematic, methodical, well-documented design process or methodology is necessary in the design of SISs. This is the intent of the safety life cycle, as shown in Figure 23-1.
The intent of the life cycle is to leave a documented, auditable trail, and to make sure that nothing is neglected or falls between the inevitable cracks within every organization. Each phase or step of the life cycle can be defined in terms of its objectives, inputs (requirements to complete that phase), and outputs (the documentation produced). These steps and their objectives, along with the input and output documentation required to perform them, are briefly summarized below. The steps are described in more detail later in this chapter.
Hazard and Risk Assessment
Objectives: To determine the hazards and hazardous events of the process and associated equipment, the sequence of events leading to various hazardous events, the process risks associated with each hazardous event, the requirements for risk reduction, and the safety functions required to achieve the necessary risk reduction.
Inputs: Process design, layout, staffing arrangements, and safety targets.
Outputs: A description of the hazards, the required safety function(s), and the associated risk reduction of each safety function.
Allocation of Safety Functions to Protection Layers
Objectives: To allocate safety functions to protection layers, and to determine the required safety integrity level (SIL) for each safety instrumented function (SIF).
Inputs: A description of the required safety instrumented function(s) and associated safety integrity requirements.
Outputs: A description of the allocation of safety requirements.
Safety Requirements Specification
Objectives: To specify the requirements for each SIS, in terms of the required safety instrumented functions and their associated safety integrity levels, in order to achieve the required functional safety.
Inputs: A description of the allocation of safety requirements.
Outputs: SIS safety requirements; software safety requirements.
Design and Engineering
Objectives: To design the SIS to meet the requirements for safety instrumented functions and safety integrity.
Inputs: SIS safety requirements and software safety requirements.
Outputs: Design of the SIS in conformance with the SIS safety requirements; plans for the SIS integration test.
Installation, Commissioning, and Validation
Objectives: To install and test the SIS, and to validate that it meets the specifications (functions and performance).
Inputs: SIS design, integration test plan, safety requirements, and validation plan.
Outputs: A fully functional SIS in conformance with design and integration tests, as well as the results of the installation, commissioning, and validation activities.
Operations and Maintenance
Objectives: To ensure that the functional safety of the SIS is maintained during operation and maintenance.
Inputs: SIS requirements, SIS design, operation, and maintenance plan.
Outputs: Results of the operation and maintenance activities.
Modification
Objectives: To make corrections, enhancements, or changes to the SIS to ensure that the required safety integrity level is maintained.
Inputs: Revised SIS safety requirements.
Outputs: Results of the SIS modification.
Decommissioning
Objectives: To ensure the proper review and required authorization, and to ensure that safety functions remain appropriate.
Inputs: As-built safety requirements and process information.
Outputs: Safety functions placed out of service.
Verification (of All Steps)
Objectives: To test and evaluate the outputs of a given life-cycle phase to ensure the correctness and consistency with respect to the products and standards provided as inputs to that phase.
Inputs: Verification plan for each phase.
Outputs: Verification results for each phase.
Assessments (of All Steps)
Objectives: To investigate and arrive at a judgment as to the functional safety achieved by the SIS.
Inputs: Safety assessment plan and SIS safety requirements.
Outputs: Results of the SIS functional safety assessments.
Hazard and Risk Analysis
One of the goals of process plant design is to have a facility that is inherently safe.
Trevor Kletz, one of the pillars of the process safety community, has said many times, “What you don’t have, can’t leak.” Hopefully, the design of the process can eliminate many of the hazards, such as unnecessary storage of intermediate products and the use of safer catalysts. One of the first steps in designing a safety system is developing an understanding of the hazards and risks associated with the process.
Hazard analysis consists of identifying the hazards and hazardous events. There are numerous techniques that can be used (e.g., a hazard and operability study [HAZOP], a what-if, a fault tree, and checklists). Techniques such as checklists are useful for well-known processes where there is a large amount of accumulated knowledge. The accumulated knowledge can be condensed into a checklist of items that needs to be considered during the design phase. Other techniques, such as HAZOP or what-if, are more useful for processes that have less accumulated knowledge. These techniques are more systematic in their approach and typically require a multidisciplined team. They typically require the detailed review of design drawings, and they ask a series of questions intended to stimulate the team into thinking about potential problems and what might cause them; for example: What if the flow is too high, too low, reversed, etc.? What might cause such a condition?

Risk assessment consists of ranking the risk of the hazardous events that have been identified in the hazard analysis. Risk is a function of the frequency or probability of an event and the severity or consequences of the event. Risks may affect personnel, production, capital equipment, the environment, company image, etc. Risk assessment may be either qualitative or quantitative. Qualitative assessments subjectively rank the risks from low to high. Quantitative assessments attempt to assign numerical factors to the risk, such as death or accident rates and the actual size of a release. These studies are not the sole responsibility of the instrument or control system engineer. Obviously, several other disciplines are required to perform these assessments, such as safety, operations, maintenance, process, mechanical design, and electrical.
Allocation of Safety Functions to Protective Layers
Figure 23-2 shows an example of multiple independent protection layers that may be used in a plant. Various industry standards either mandate or strongly suggest that safety systems be completely separate and independent from control systems. Each layer helps reduce the overall level of risk. The inner layers help prevent a hazardous event (e.g., an explosion due to an overpressure condition) from occurring; they are referred to as protection layers. The outer layers are used to lessen the consequences of a hazardous event once it has occurred; they are referred to as mitigation layers.
Risk is a measure of the frequency and severity of an event. Figure 23-3 is a graphical way of representing the risk reduction that each layer provides. Let us consider an example of an explosion causing multiple fatalities. The vertical line on the right side of the figure represents the frequency of an initiating event, such as an operator closing or opening the wrong valve, which could cause the hazardous event if left unchecked (i.e., no other safety layer reacted). Let us also assume that our corporate safety target (i.e., the tolerable level of risk, shown as the vertical line on the left side of the figure) for such an event is 1/100,000 per year. (Determining such targets is a significant subject all unto itself and is beyond the scope of this chapter.) The basic process control system (BPCS) maintains process variables within safe boundaries and, therefore, provides a level of protection (i.e., the control system would detect a change in flow or pressure and could respond). Standards state that one should not claim more than a risk reduction factor of 10 for the BPCS. If there are alarms separate from the control system—and assuming the operators have enough time to respond and have procedures to follow— one might assume a risk reduction factor of 10 for the operators (i.e., they might be able to detect if someone else in the field closed the wrong valve). If a relief valve could also prevent the overpressure condition, failure rates and test intervals could be used to calculate their risk reduction factor (also a significant subject all unto itself and beyond the scope of this chapter). Let us assume a risk reduction factor of 100 for the relief valves.
Without an SIS, the level of overall risk is shown in Table 23-1:
Without a safety system, the example above does not meet the corporate risk target of 1/100,000. However, adding a safety system that provides a level of risk reduction of at least 10 will result in meeting the corporate risk target. As shown in Table 23-2 in the next section, this falls into the safety integrity level (SIL) 1 range. This is an example of Layer of Protection Analysis (LOPA), which is one of several techniques for determining the performance required of a safety system.
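The worked example can be checked with a few lines of arithmetic. Because Table 23-1 is not reproduced here, the initiating event frequency of 1 per year is an assumption chosen for illustration; the risk reduction factors are those given in the text.

```python
initiating_freq = 1.0   # per year (assumed for illustration)
rrf_bpcs = 10           # maximum credit allowed for the BPCS
rrf_operator = 10       # independent alarm plus operator response
rrf_relief = 100        # relief valve, from failure rate and test interval

mitigated = initiating_freq / (rrf_bpcs * rrf_operator * rrf_relief)
print(mitigated)        # 1e-4/yr: misses the 1e-5/yr corporate target

required_sis_rrf = mitigated / 1e-5   # = 10, i.e., the SIL 1 range
```

Under these assumptions, a SIF providing a risk reduction factor of at least 10 closes the remaining gap, matching the SIL 1 conclusion in the text.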
If the risks associated with a hazardous event can be prevented or mitigated with something other than instrumentation—which is complex, is expensive, requires maintenance, and is prone to failure—so much the better. For example, a dike is a simple and reliable device that can easily contain a liquid spill. KISS (keep it simple, stupid) should be an overriding theme.
Determine Safety Integrity Levels
For all safety functions assigned to instrumentation (i.e., safety instrumented functions [SIFs]), the level of performance required needs to be determined. The standards refer to this as the safety integrity level (SIL). This continues to be a difficult step for many organizations. Note that the SIL is not directly a measure of process risk, but rather a measure of the safety system performance of a single safety instrumented function (SIF) required to control the individual hazardous event down to an acceptable level. The standards describe a variety of techniques on how safety integrity levels can be determined. This chapter will not attempt to summarize that material beyond the brief LOPA example given above. Tables in the standards then show the performance requirements for each integrity level. Table 23-2 lists the performance requirements for low-demand-mode systems, which are the most common in the process industries. This shows how the standards are performance-oriented and not prescriptive (i.e., they do not mandate technologies, levels of redundancy, or test intervals).
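For reference, the low-demand performance bands from the standard can be expressed as a simple lookup. The PFDavg band boundaries below are the published IEC 61511/ISA-84 ranges; the function name is illustrative.

```python
def sil_from_pfd_avg(pfd_avg):
    """Map average probability of failure on demand to a SIL band
    (low-demand mode, per IEC 61511 / ISA-84)."""
    if 1e-5 <= pfd_avg < 1e-4:
        return 4   # risk reduction factor 10,000 to 100,000
    if 1e-4 <= pfd_avg < 1e-3:
        return 3   # RRF 1,000 to 10,000
    if 1e-3 <= pfd_avg < 1e-2:
        return 2   # RRF 100 to 1,000
    if 1e-2 <= pfd_avg < 1e-1:
        return 1   # RRF 10 to 100
    return None    # outside the defined SIL bands
```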
Develop the Safety Requirements Specification
The next step consists of developing the safety requirements specification (SRS). This consists of documenting the I/O (input/output) requirements, the functional logic, the SIL of each safety function, and a variety of other design issues (bypasses, resets, speed of response, etc.). This will naturally vary for each system. There is no general, across-the-board recommendation that can be made. One simple example might be: “If temperature sensor TT2301 exceeds 410 degrees, then close valves XV5301 and XV5302. This function must respond within 3 seconds and needs to meet SIL 2.” It may also be beneficial to list reliability requirements if nuisance trips are a concern. For example, many different systems may be designed to meet SIL 2 requirements, but each will have a different nuisance trip performance. Considering the costs associated with lost production downtime, as well as safety concerns, this may be an important issue. In addition, one should include all operating conditions of the process, from start-up through shutdown, as well as maintenance. One may find that certain logic conditions conflict during different operating modes of the process. The system will be programmed and tested according to the logic determined during this step. If an error is made here, it will carry through for the rest of the design. It will not matter how redundant the system is or how often the system is manually tested; it simply will not work properly when required. These are referred to as systematic or functional failures.
SIS Design and Engineering
Any proposed conceptual design (i.e., a proposed implementation) must be analyzed to determine whether it meets the functional and performance requirements. Initially, one needs to select a technology, configuration, test interval, and so on. This pertains to the field devices, as well as the logic solver. Factors to consider are overall size, budget, complexity, speed of response, communication requirements, interface requirements, method of implementing bypasses, testing, and so on. One can then perform a simple quantitative analysis (i.e., calculate the average probability of failure on demand [PFDavg] of each safety instrumented function) to determine if the proposed function meets the performance requirements. The intent is to evaluate the system before one specifies the solution. Just as it is better to perform a HAZOP before you build the plant rather than afterwards, it is better to analyze the proposed safety system before you specify, build, and install it. The reason for both is simple. It is cheaper, faster, and easier to redesign on paper. The topic of system modeling/analysis is described in greater detail in the references listed at the end of this chapter.
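A first-pass check of a conceptual design can be sketched as follows: the PFDavg of a simplex (1oo1) device is commonly approximated as the dangerous undetected failure rate times the test interval divided by two, and the PFDavg of the whole SIF is approximately the sum over sensor, logic solver, and final element. The failure rates and test interval below are assumptions for illustration only.

```python
def pfd_avg_1oo1(lambda_du, test_interval_yr):
    """Simplified average PFD of a single (1oo1) device: dangerous
    undetected failure rate times test interval, divided by two."""
    return lambda_du * test_interval_yr / 2

# Illustrative per-year dangerous undetected failure rates (assumed):
sensor = pfd_avg_1oo1(0.02, 1.0)    # transmitter, proof-tested yearly
logic = pfd_avg_1oo1(0.001, 1.0)    # logic solver
valve = pfd_avg_1oo1(0.05, 1.0)     # shutoff valve, often the weak link

pfd_sif = sensor + logic + valve    # ~0.036 -> only the SIL 1 range
```

Even in this rough sketch, the valve dominates the total, which is exactly the kind of weak link such an analysis is meant to expose before the system is specified and built.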
Detail design involves the actual documentation and fabrication of the system. Once a design has been chosen, the system must be engineered and built following strict and conservative procedures. This is the only realistic method we know of for preventing design and implementation errors. The process requires thorough documentation that serves as an auditable trail that someone else may follow for independent verification purposes. It is difficult to catch one’s own mistakes. After the system is constructed, the hardware and software should be fully tested at the integrator’s facility. Any changes that may be required will be easier to implement at the factory rather than the installation site.
Installation, Commissioning, and Validation
It is important to ensure that the system is installed and started up according to the design requirements, and that it performs according to the safety requirements specification. The entire system must be checked, this time including the field devices. There should be detailed installation, commissioning, and testing documents outlining each procedure to be carried out. Completed tests should be signed off in writing to document that every function has been checked and has passed all tests satisfactorily.
Operations and Maintenance
Not all faults are self-revealing. Therefore, every SIS must be periodically tested and maintained. This is necessary to make certain that the system will respond properly to an actual demand. The frequency of inspection and testing will have been determined earlier in the life cycle (i.e., system modeling/analysis). All testing must be documented. This will enable an audit to determine if the initial assumptions made during the design (failure rates, failure modes, test intervals, diagnostic coverage, etc.) are valid based on actual experience.
Modifications
As process conditions change, it may be necessary to make modifications to the safety system. All proposed changes require returning to the appropriate phase of the life cycle in order to review the impact of the change. A change that may be considered minor by one individual may actually have a major impact on the overall process. This can only be determined if the change is documented and thoroughly reviewed by a qualified team. Hindsight has shown that many accidents have been caused by this lack of review. Changes that are made must be thoroughly tested.
System Technologies
Logic Systems
There are various technologies available for use in safety systems—pneumatic, electromechanical relays, solid state, and software-based. There is no overall “best” system; rather, each has advantages and disadvantages. The decision over which system may be best suited for an application will depend on many factors, such as budget, size, level of risk, flexibility (i.e., ease of making changes), maintenance, interface and communication requirements, and security. Pneumatic systems are most suitable for small applications where there are concerns over simplicity, intrinsic safety, and lack of available electrical power. Relay systems are fairly simple, relatively inexpensive to purchase, and immune to most forms of electromagnetic/radio frequency (EMI/RFI) interference; and they can be built for many different voltage ranges. They generally do not incorporate any form of interface or communications. Changes to logic require manually changing both physical wiring and documentation. In general, relay systems are used for relatively small applications. Solid-state systems (hardwired systems that are designed to replace relays, yet do not
incorporate software) are relatively dated, but also available. Several of these systems were built specifically for safety applications and include features for testing, bypasses, and communications. Logic changes still require manually changing both wiring and documentation. These systems have fallen out of favor with many due to their high cost, along with the acceptance of software-based systems. Software-based systems, generally industrial programmable logic controllers (PLCs), offer software flexibility, self-documentation, communications, and higher-level interfaces. Unfortunately, many general-purpose systems were not designed specifically for safety and do not offer features required for more critical applications (such as effective self-diagnostics). However, certain specialized single, dual, and triplicated systems were developed for applications that are more critical and have become firmly established in the process industries. These systems offer extensive diagnostics and better fault tolerance schemes, and are often referred to as safety PLCs.
Field Devices
In the process industries, more hardware faults occur in the peripheral equipment—that is, the measuring instruments/transmitters and the control valves—than in the logic system itself.
Table 23-3 is reproduced from the 2016 version of IEC 61511. It shows the minimum hardware fault-tolerance requirement that field devices must meet to achieve each safety integrity level.
Low demand is considered to be less than once a year. High demand is greater than once a year. Continuous mode is frequent/continual demands (i.e., critical control functions with no backup safety function). A minimum hardware fault tolerance of X means that X + 1 dangerous failures would result in a loss of the safety function. In other words, a fault tolerance of 0 refers to a simplex (nonredundant) configuration (i.e., a single failure would cause a loss of the safety function). A fault tolerance of 1 refers to a 1oo2 (one out of two) or 2oo3 (two out of three) configuration. The table is essentially the same as the one in the 2003 version of the standard, with the assumption that devices are
selected based on prior use. The point of the table is to remind people that simply using a logic solver certified for use in SIL 3 will not provide a SIL 3 function or system all on its own. Field devices have a major impact on overall system performance. The table can be verified with simple calculations.
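Such a check can be illustrated with the widely used simplified low-demand equations (common-cause failures and automatic diagnostics ignored). This is only a sketch: the failure rate and test interval below are assumed values, not data from the standard.

```python
# Illustrative sketch (not from IEC 61511): rough PFDavg for a simplex
# sensor versus a 1oo2 pair, showing why added hardware fault tolerance
# is required at higher SILs. All numbers are assumptions.

LAMBDA_DU = 0.02   # assumed dangerous undetected failure rate, per year (~50-yr MTBF)
TI = 1.0           # assumed manual proof-test interval, years

pfd_1oo1 = LAMBDA_DU * TI / 2          # fault tolerance 0: one failure defeats the function
pfd_1oo2 = (LAMBDA_DU * TI) ** 2 / 3   # fault tolerance 1 (common cause ignored)

for name, pfd in [("1oo1", pfd_1oo1), ("1oo2", pfd_1oo2)]:
    print(f"{name}: PFDavg = {pfd:.2e}, RRF = {1 / pfd:,.0f}")
# 1oo1: PFDavg = 1.00e-02, RRF = 100    -> only the upper edge of SIL 1
# 1oo2: PFDavg = 1.33e-04, RRF = 7,500  -> the benefit of fault tolerance
```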
Sensors
Sensors are used to measure process variables, such as temperature, pressure, flow, and level. They may consist of simple pneumatic or electric switches that change state when a set point is reached, or they may contain pneumatic or electric analog transmitters that give a variable output in relation to the strength or level of the process variable. Sensors, like any other devices, may fail in a number of different ways. They may cause nuisance trips (i.e., they respond without any change of input signal). They may also fail to respond to an actual change of input condition. While these are the two failure modes of most concern for safety systems, there are additional failure modes as well, such as leaking, erratic output, and responding at an incorrect level.
Most safety systems are designed to be fail-safe. This usually means that the safety system makes the process or the equipment revert to a safe state when power is lost, which usually means stopping production. (Nuisance trips should be avoided for safety reasons as well, since start-up and shutdown operations are usually associated with the highest levels of risk.) Thought must be given to how the sensors should respond in order to be fail-safe.
Final Elements
Final elements generally have the highest failure rates of any components in the system. They are mechanical devices and subject to harsh process conditions. Safety shutoff valves also suffer from the fact that they usually remain in a single position and are not activated for long periods of time, except for testing. One of the most common failure modes of a valve is being stuck or frozen in place. Valves should be fail-safe upon loss of power, which usually entails the use of a spring-loaded actuator. Solenoids are one of the most critical components of final elements. It is important to use a good industrial-grade solenoid valve. The valve must be able to withstand high temperatures, including the heat generated by the coil itself when energized continuously.
System Analysis
What is suitable for use in SIL 1, SIL 2, and SIL 3 applications? (SIL 4 is defined in ISA-84/IEC 61511, but users are referred to IEC 61508 because such systems should be extremely rare in the process industry.) Which technology, what level of redundancy, and what manual test interval (including field devices) to use are questions that need to be answered. Things are not as intuitively obvious as they may seem. Dual is not always better than simplex, and triple is not always better than dual. We do not design nuclear power plants or aircraft by gut feel or intuition. As engineers, we must rely on quantitative evaluations as the basis for our judgments. Quantitative analysis may be imprecise and imperfect, but it is a valuable exercise for the following reasons:
• It provides an early indication of a system’s potential to meet the design requirements.
• It enables one to determine the weak link in the system (and fix it, if necessary).
In order to predict the performance of a system, one needs performance data for all the components. Information is available from user records, vendor records, military-style predictions, and commercially available databases in different industries. When modeling the performance of a safety system, one needs to consider two failure modes:
• Safe failures – These result in nuisance trips and lost production. Common terms used to describe this mode of performance are mean time between failures, spurious (MTBFspurious) and nuisance trip rate.
• Dangerous failures – These result in hidden failures where the system will not respond when required. Common terms used to quantify performance in this mode are probability of failure on demand (PFD) and risk reduction factor (RRF), which is 1/PFD.
Note that safety integrity levels only refer to dangerous system performance. There is no relationship between safe and dangerous system performance. A SIL 4 system may produce a nuisance trip every month, just as a SIL 1 system may produce a nuisance trip just once in 100 years. Knowing the performance in one mode tells you nothing about the performance in the other.
There are multiple modeling techniques used to analyze and predict safety system performance, the most common methods being reliability block diagrams, algebraic equations, fault trees, Markov models, and Monte Carlo simulation. Each method has its pros and cons. No method is more “right” or “wrong” than any other. They are all simplifications and can account for different factors. Using such techniques, one can
model different technologies, levels of redundancy, test intervals, and field-device configurations. One can model systems using a hand calculator, or develop spreadsheets or stand-alone programs to automate and simplify the task. Table 23-4 is an example of a “cookbook” that one could develop using any of the modeling techniques.
Table 23-4 Notes:
Such tables are by their very nature oversimplifications. It is not possible to show the impact of all design features (failure rates, failure-mode splits, diagnostic levels, quantities, manual test intervals, common-cause factors, proof-test coverage, impact of bypassing, etc.) in a single table. Users are urged to perform their own analysis in order to justify their design decisions; a simple example calculation follows these notes. The above table should be considered an example only, based on the following assumptions:
1. Separate logic systems are assumed for safety applications. Safety functions should not be performed solely within the BPCS.
2. One sensor and one final element are assumed. Field devices are assumed to have a mean time between failure (MTBF) in both failure modes (safe and dangerous) of 50 years.
3. Simplex (nonredundant) transmitters are assumed to have 30% diagnostics; fault-tolerant transmitters with comparison have greater than 95% diagnostics.
4. Transmitters with comparison means comparing the control transmitter with the safety transmitter and assuming 90% diagnostics.
5. Dumb valves offer no self-diagnostics; smart valves (e.g., automated partial-stroking valves) are assumed to offer 80% diagnostics.
6. When considering solid-state logic systems, only solid-state systems specifically built for safety applications should be considered. These systems are either inherently fail-safe (like relays) or they offer extensive self-diagnostics.
7. General-purpose PLCs are not appropriate beyond SIL 1 applications. They do not offer diagnostic levels effective enough to meet the higher performance requirements. Check with your vendors for further details.
8. One-year manual testing is assumed for all devices. (More frequent testing would offer higher levels of safety performance.)
9. Fault-tolerant configurations are assumed to be either 1oo2 or 2oo3. (1oo2, or “one out of two,” means there are two devices and either one can trip/shut down the system.) The electrical equivalent of 1oo2 is two closed and energized switches wired in series and connected to a load. 1oo2 configurations increase safety at the expense of more nuisance trips. 2oo2 configurations are less safe than simplex and should only be used if it can be documented that they meet the overall safety requirements.
10. The above table does not categorize the nuisance trip performance of any of the systems.
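As an illustration of such an analysis, the following sketch combines per-element PFDavg values for a simple loop using the assumptions in notes 2 and 8; the logic solver figure is hypothetical, not vendor data.

```python
# Hedged example in the spirit of the notes above: combining per-element
# PFDavg values for a simple SIF (one sensor, one logic solver, one valve).

YEAR_MTBF = 50.0       # field-device MTBF in each failure mode (note 2)
TI = 1.0               # one-year manual proof-test interval (note 8)

lam = 1.0 / YEAR_MTBF  # assumed dangerous failure rate, per year
pfd_sensor = lam * TI / 2
pfd_valve = lam * TI / 2
pfd_logic = 1.0e-3     # hypothetical certified logic solver PFDavg

pfd_sif = pfd_sensor + pfd_logic + pfd_valve   # series combination
print(f"SIF PFDavg = {pfd_sif:.2e}  (RRF = {1 / pfd_sif:,.0f})")
# -> ~2.1e-02, RRF ~48: the field devices dominate, so a "SIL 3" logic
#    solver alone cannot make this a SIL 3 (or even SIL 2) function.
```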
Key Points
• Follow the steps defined in the safety-design life cycle.
• If you cannot define it, you cannot control it.
• Justify and document all your decisions (i.e., leave an auditable trail).
• The goal is to have an inherently safe process (i.e., one in which you do not even need an SIS).
• Do not put all of your eggs in one basket (i.e., have multiple, independent safety layers).
• The SIS should be fail-safe and/or fault-tolerant.
• Analyze the problem before you specify the solution.
• All systems must be tested periodically.
• Never leave points in bypass during normal operation!
Rules of Thumb
• Maximize diagnostics. (This is the most critical factor in safety performance.)
• Any indication is better than no indication (transmitters have advantages over switches, systems should provide indications even when signals are in bypass, etc.).
• Minimize potential common-cause problems.
• General-purpose PLCs are not suitable for use beyond SIL 1.
• When possible, use independently approved and/or certified components/systems (exida, TÜV, etc.).
Further Information
ANSI/ISA-84-2004 (IEC 61511 Mod). Functional Safety: Safety Instrumented Systems for the Process Industry Sector. Research Triangle Park, NC: ISA (International Society of Automation).
Chiles, James R. Inviting Disaster. New York: Harper Business, 2001. ISBN 0-06-662081-3.
“Energized by Safety: At Conoco, Putting Safety First Puts Profits First Too.” Continental magazine (February 2002): 49–51.
Goble, William M. Control System Safety Evaluation and Reliability. Research Triangle Park, NC: ISA (International Society of Automation), 1998. ISBN 1-55617-636-8.
Guidelines for Chemical Process Quantitative Risk Analysis. New York: Center for Chemical Process Safety (CCPS) of the AIChE (American Institute of Chemical Engineers), 1989. ISBN 0-8169-0402-2.
Guidelines for Hazard Evaluation Procedures. New York: Center for Chemical Process Safety (CCPS) of the AIChE (American Institute of Chemical Engineers), 1992. ISBN 0-8169-0491-X.
Guidelines for Safe Automation of Chemical Processes. New York: Center for Chemical Process Safety (CCPS) of the AIChE (American Institute of Chemical Engineers), 1993. ISBN 0-8169-0554-1.
Gruhn, P., and H. Cheddie. Safety Instrumented Systems: Design, Analysis, and Justification. Research Triangle Park, NC: ISA (International Society of Automation), 2006. ISBN 1-55617-956-1.
IEC 61508:2010. Functional Safety – Safety Related Systems. Geneva 20 – Switzerland: IEC (International Electrotechnical Commission).
ISA-TR84.00.02-2002, Parts 1–5. Safety Instrumented Functions (SIF) Safety Integrity Level (SIL) Evaluation Techniques Package. Research Triangle Park, NC: ISA (International Society of Automation).
Kletz, Trevor A. What Went Wrong? Case Histories of Process Plant Disasters. 3rd ed. Houston, TX: Gulf Publishing Co., 1994. ISBN 0-88415-0-5.
Layer of Protection Analysis. New York: Center for Chemical Process Safety (CCPS) of the AIChE (American Institute of Chemical Engineers), 2001. ISBN 0-8169-0811-7.
Leveson, Nancy G. Safeware—System Safety and Computers. Reading, MA: Addison-Wesley, 1995. ISBN 0-201-11972-2.
Marszal, Edward M., and Eric W. Scharpf. Safety Integrity Level Selection: Systematic Methods Including Layer of Protection Analysis. Research Triangle Park, NC: ISA (International Society of Automation), 2002.
Perrow, Charles. Normal Accidents. Princeton, NJ: Princeton University Press, 1999. ISBN 0-691-00412-9.
About the Author Paul Gruhn is a global functional safety consultant with aeSolutions in Houston, Texas. Gruhn—an ISA member for more than 25 years—is an ISA Life Fellow, co-chair of the ISA84 standard committee (on safety instrumented systems), the developer and instructor of ISA courses on safety systems, the author of two ISA textbooks, and the developer of the first commercial, safety-system software modeling program. Gruhn has a BS in mechanical engineering from Illinois Institute of Technology, is a licensed Professional Engineer (PE) in Texas, and is both a Certified Functional Safety Expert (CFSE) and an ISA84 Safety Instrumented Systems Expert.
24 Reliability By William Goble
Introduction
There are several common metrics used within the field of reliability engineering. Primary ones include reliability, unreliability, availability, unavailability, and mean time to failure (MTTF). However, when different failure modes are considered, as they are when doing safety instrumented function (SIF) verification, then new metrics are needed. These include probability of failing safely (PFS), probability of failure on demand (PFD), probability of failure on demand average (PFDavg), mean time to failure spurious (MTTFspurious), and mean time to dangerous failure (MTTFD).
Measurements of Successful Operation: No Repair
Probability of success – This is often defined as the probability that a system will perform its intended function when needed and when operated within its specified limits. The phrase at the end of the last sentence tells the user of the equipment that the published failure rates apply only when the system is not abused or otherwise operated outside of its specified limits. Using the rules of reliability engineering, one can calculate the probability of successful operation for a particular set of circumstances. Depending on the circumstances, that probability is called reliability or availability (or, on occasion, some other name).
Reliability – A measure of successful operation for a specified interval of time. Reliability, R(t), is defined as the probability that a system will perform its intended function when required to do so if operated within its specified limits for a specified operating time interval (Billinton 1983). The definition includes five important aspects:
1. The system’s intended function must be known.
2. When the system is required to function must be judged.
3. Satisfactory performance must be determined.
4. The specified design limits must be known.
5. An operating time interval is specified.
Consider a newly manufactured and successfully tested component. It operates properly when put into service (T = 0). As the operating time interval (T) increases, it becomes less likely that the component will remain successful. Since the component will eventually fail, the probability of success for an infinite time interval is zero. Thus, all reliability functions start at a probability of one and decrease to a probability of zero (Figure 24-1).
Reliability is a function of the operating time interval. A statement such as “system reliability is 0.95” is meaningless because the time interval is not known. The statement “the reliability equals 0.98 for a mission time of 100 hours” makes perfect sense. A reliability function can be derived directly from probability theory. Assume the probability of successful operation for a 1-hour time interval is 0.999. What is the probability of successful operation for a 2-hour time interval? The system will be successful only if it is successful for both the first hour and the second hour. Therefore, the 2-hour probability of success equals:

0.999 • 0.999 = 0.998    (24-1)

The analysis can be continued for longer time intervals. For each time interval, the probability can be calculated by the equation:

P(t) = 0.999^t    (24-2)
Figure 24-2 shows a plot of probability versus operating time using this equation. The plot is a reliability function.
Reliability is a metric originally developed to determine the probability of successful operation for a specific “mission time.” For example, if a flight time is 10 hours, a logical question is, “What is the probability of successful operation for the entire flight?” The answer would be the reliability for the 10-hour duration. It is generally a measurement applicable to situations where online repair is not possible, like an unmanned space flight or an airborne aircraft. Unreliability is the complement of reliability. It is defined as the probability of failure during a specific mission time.
Mean time to failure (MTTF) – One of the most widely used reliability parameters is the MTTF. It has been formally defined as the “expected value” of the random variable time to fail, T. Unfortunately, the metric has evolved into a confusing number. MTTF has been misused and misunderstood. It has been misinterpreted as “guaranteed minimum life.” Formulas for MTTF are derived and often used for products during the useful life period. This method excludes wear-out failures. Ask an experienced plant engineer, “What is the MTTF of a pressure transmitter?” He would possibly answer “35 years,” factoring in wear out. Then the engineer would look at the specified MTTF of 300 years and think that the person who calculated that number should come out and stay with him for a few years and see the real world. Generally, the term MTTF is defined during the useful life of a device. “End of life” failures are generally not included in the number.
Constant failure rate – When a constant failure rate is assumed (which is valid during the useful life of a device), the relationship between reliability, unreliability, and MTTF is straightforward. If the failure rate is constant, then:

λ(t) = λ    (24-3)

For that assumption, it can be shown that:

R(t) = e^(–λt)    (24-4)

F(t) = 1 – e^(–λt)    (24-5)

MTTF = 1/λ    (24-6)
Figure 24-3 shows the reliability and unreliability functions for a constant failure rate of 0.001 failures per hour. Note the plot for reliability looks the same as Figure 24-2, which shows the probability of successful operation given a probability of success for 1 hour of 0.999. It can be shown that a constant probability of success is equivalent to an exponential probability of success distribution as a function of operating time interval.
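A minimal sketch of these functions, using the same failure rate as Figure 24-3, also illustrates the equivalence noted above:

```python
# Illustrative sketch of Equations 24-4 and 24-5 for a constant failure
# rate of 0.001 failures per hour (the value used in Figure 24-3).

import math

LAMBDA = 0.001  # failures per hour

def reliability(t_hours: float) -> float:
    """R(t) = exp(-lambda*t): probability of no failure through time t."""
    return math.exp(-LAMBDA * t_hours)

def unreliability(t_hours: float) -> float:
    """F(t) = 1 - R(t): probability of failure by time t."""
    return 1.0 - reliability(t_hours)

# The equivalence noted in the text: a constant per-hour success
# probability of 0.999 tracks exp(-0.001*t) very closely, since
# 0.999**t == exp(t * ln(0.999)) and ln(0.999) is about -0.0010005.
for t in (1, 100, 1000):
    print(t, round(reliability(t), 4), round(0.999 ** t, 4))
```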
Useful Approximations
Mathematically, it can be shown that certain functions can be approximated by a series of other functions. For all values of x:

e^x = 1 + x + x²/2! + x³/3! + x⁴/4! + …    (24-7)

For a sufficiently small value of x, the exponential can be approximated with e^x ≈ 1 + x. Substituting –λt for x gives e^(–λt) ≈ 1 – λt, so F(t) = 1 – e^(–λt) ≈ λt. Thus, there is an approximation for unreliability when λt is sufficiently small:

F(t) = λt    (24-8)
Remember, this is only an approximation and not a fundamental equation. Often the notation for unreliability is PF (probability of failure), and the equation is shown as:

PF(t) = λt    (24-9)
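A quick numerical check (illustrative only) shows how the approximation degrades as λt grows:

```python
# How good is F(t) ~ lambda*t while lambda*t stays small?

import math

LAMBDA = 0.001  # failures per hour

for t in (10, 100, 1000):
    exact = 1 - math.exp(-LAMBDA * t)
    approx = LAMBDA * t
    print(f"lambda*t={approx:.3f}: exact F(t)={exact:.4f}, "
          f"relative error={(approx - exact) / exact:.1%}")
# The approximation overstates F(t) slightly, by roughly lambda*t/2,
# which is why it is reserved for small lambda*t.
```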
Measurements of Successful Operation: Repairable Systems
The reliability metric requires that a system be successful for an interval of time. While this probability is a valuable metric for situations where a system cannot be repaired during a mission, something different is needed for an industrial process control system where repairs can be made—often with the process operating.
Mean time to restore (MTTR) – MTTR is the “expected value” of the random variable referred to as restore time (or time to repair). The definition includes the time required to detect that a failure has occurred, as well as the time required to make a repair once the failure has been detected and identified. Like MTTF, MTTR is an average value. MTTR is the average time required to move from unsuccessful operation to successful operation. In the past, the acronym MTTR stood for mean time to repair. The term was changed in IEC 61508 because of confusion as to what was included. Some thought that mean time to repair included only the actual repair time. Others interpreted the term to include both time to detect a failure (diagnostic time) and actual repair time. The term mean dead time (MDT), commonly used in some parts of the world, means the same as MTTR. MTTR is a term created to include both diagnostic detection time and actual repair time. Of course, when actually estimating MTTR, one must include time to detect, recognize, and identify the failure; time to obtain spare parts; time for repair team personnel to respond; actual time to do the repair; time to document all activities; and time to get the equipment back in operation. Reliability engineers often assume that the probability of repair is an exponentially distributed function, in which case the “restore rate” is a constant. The lowercase Greek letter mu is used to represent restore rate by convention. The equation for restore rate is:
µ = 1/MTTR    (24-10)
Restore times can be difficult to estimate. This is especially true when periodic activities are involved. Imagine the situation where a failure in the safety instrumented system (SIS) is not noticed until a periodic inspection and test is done. The failure may occur right before the inspection and test, in which case the detection time might be near zero. On the other hand, it may occur right after the inspection and test, in which case the detection time may get as large as the inspection period. In such cases, it is probably best to model repair probability as a periodic function, not as a constant (Bukowski 2001). This is discussed later in the section “Average Unavailability with Periodic Inspection and Test.”
Mean time between failures (MTBF) – MTBF is defined as the “average time period of a failure/repair cycle” (Goble 2010). It includes time to failure, any time required to detect the failure, and actual repair time. This implies that a component has failed and has been successfully repaired. For a simple repairable component:

MTBF = MTTF + MTTR    (24-11)
The MTBF term can also be confusing. Since MTTR is usually much smaller than MTTF, MTBF is approximately equal to MTTF. The term MTBF is often substituted for MTTF; it applies to both repairable systems and non-repairable systems. Because of the confusion, the term MTBF is rarely used in recent times. Availability – The reliability measurement was not sufficiently useful for engineers who needed to know the average chance of success of a system when repairs are possible. Another measure of system success for repairable systems was needed; that metric is availability. Availability is defined as the probability that a device is successful at time t when needed and operated within specified limits. No operating time interval is directly involved. If a system is operating successfully, it is available. It does not matter whether it has failed in the past and has been repaired or has been operating continuously from startup without any failures. Availability is a measure of “uptime” in a system, unit, or module. Availability and reliability are different metrics. Reliability is always a function of failure rates and operating time intervals. Availability is a function of failure rates and repair rates. While instantaneous availability will vary during the operating time interval, this is due to changes in failure probabilities and repair situations. Often availability is calculated as an average over a long operating time interval. This is referred to as steady-state availability.
In some systems, especially SISs, the repair situation is not constant. In SISs, the situation occurs when failures are discovered and repaired during a periodic inspection and test. For these systems, steady-state availability is NOT a good measure of system success. Instead, average availability is calculated for the operating time interval between inspections. (Note: This is not the same measurement as steady-state availability.)
Unavailability – This is a measure of failure used primarily for repairable systems. It is defined as the probability that a device is not successful (is failed) at time t. Different metrics can be calculated, including steady-state unavailability and average unavailability, over an operating time interval. Unavailability is the complement of availability; therefore:

U(t) = 1 – A(t)    (24-12)
Steady-State Availability – Traditionally, reliability engineers have assumed a constant repair rate. When this is done, probability models can be solved for steady state or average probability of successful operation. The metric can be useful, but it has relevance only for certain classes of problems with random restoration characteristics. (Note: Steady-state availability solutions are not suitable for systems where failures are detected with periodic proof test inspections.)
Figure 24-4 shows a Markov probability model of a single component with a single failure mode. This model can be solved for steady-state availability and steady-state unavailability.
A = MTTF / (MTTF + MTTR)    (24-13)

U = MTTR / (MTTF + MTTR)    (24-14)
When the Markov model in Figure 24-4 is solved for availability as a function of the operating time interval, the result is shown in Figure 24-5, labeled A(t). It can be seen that the availability reaches a steady state after some period of time.
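For readers who want to reproduce the shape of Figure 24-5, the two-state model with constant rates has a well-known closed-form solution; the MTTF and MTTR values below are assumptions chosen only for illustration.

```python
# Hedged sketch of the behavior in Figure 24-5: closed-form A(t) for the
# two-state Markov model (single component, single failure mode, constant
# failure and restore rates), starting from the working state at t = 0.

import math

MTTF = 1000.0   # assumed mean time to failure, hours
MTTR = 10.0     # assumed mean time to restore, hours

lam = 1.0 / MTTF
mu = 1.0 / MTTR

def availability(t: float) -> float:
    """A(t) decays exponentially toward the steady-state value."""
    steady = mu / (lam + mu)               # equals MTTF / (MTTF + MTTR)
    return steady + (lam / (lam + mu)) * math.exp(-(lam + mu) * t)

print("steady state:", MTTF / (MTTF + MTTR))   # Equation 24-13
for t in (0, 10, 50, 200):
    print(f"A({t}) = {availability(t):.5f}")
```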
Figure 24-6 shows a plot of unavailability versus unreliability. These plots are complementary to those shown in Figure 24-5.
Average Unavailability with Periodic Inspection and Test
In low-demand SIS applications with periodic inspection and test, the restore rate is NOT constant, nor is it random. For failures not detected until a periodic inspection and test, the restore rate is zero until the time of the test. If it is discovered the system is operating successfully, then the probability of failure is set to zero. If it is discovered the system has a failure, it is repaired. In both cases, the restore rate is high for a brief period of time. Dr. Julia V. Bukowski has described this situation and proposed modeling perfect test and repair as a periodic impulse function (Bukowski 2001). Figure 24-7 shows a plot of probability of failure in this situation. This can be compared with unavailability calculated with the constant restore rate model as a function of operating time. With the constant restore model, the unavailability reaches a steady-state value. This value is clearly different from the result that would be obtained by averaging the unavailability calculated using a periodic restore period.
It is often assumed that periodic inspection and test will detect all failed components and the system will be renewed to perfect condition. Therefore, the unreliability function is suitable for the problem. A mission time equal to the time between periodic inspection and test is used. In SIS applications, the objective is to identify a model for the probability that a system will fail when a dangerous condition occurs. This dangerous condition is called a demand. Our objective, then, is to calculate the probability of failure on demand. If the system is operating in an environment where demands are infrequent (e.g., once per 10 years) and independent from system proof tests, then an average of the unreliability function will provide the average probability of failure. This, by definition, is an “unavailability function” because repair is allowed. (Note: This averaging technique is not valid when demands are more frequent. Special modeling techniques are needed in that case.)
As an example, consider the single-component unreliability function given in Equation 24-5, F(t) = 1 – e^(–λt). This can be approximated as explained previously with Equation 24-8, F(t) = λt. The average can be obtained by using the expected value equation:

PFavg = (1/TI) ∫₀^TI F(t) dt    (24-15)

with the result being an approximation equation:

PFavg = λ·TI/2    (24-16)
For a single component (nonredundant) or a single channel system with perfect test and repair, the approximation is shown in Figure 24-8.
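The quality of this approximation is easy to check numerically; the failure rate and test interval below are assumed values, not data from any standard.

```python
# Illustrative comparison of the PFavg approximation (Eq. 24-16) against
# the exact average of F(t) = 1 - exp(-lambda*t) over the test interval,
# assuming perfect periodic test and repair.

import math

LAMBDA = 0.02   # assumed dangerous failure rate, per year
TI = 1.0        # assumed proof-test interval, years

approx = LAMBDA * TI / 2
# Exact average of 1 - exp(-lambda*t) over [0, TI]:
exact = 1 - (1 - math.exp(-LAMBDA * TI)) / (LAMBDA * TI)

print(f"approx = {approx:.6f}, exact = {exact:.6f}")
# For small lambda*TI the two agree closely, which is why the
# lambda*TI/2 rule of thumb is so widely used.
```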
Periodic Restoration and Imperfect Testing
It is quite unrealistic to assume that inspection and testing processes will detect all failures. In the worst case, assume that testing is not done. In that situation, what is the mission time? If the equipment is used for the life of an industrial facility, plant life is the mission time. Probability of failure would be modeled with the unreliability function using the plant life as the time interval. If the equipment is required to operate only on demand, and the demand is independent of system failure, then the unreliability function can be averaged as explained in the preceding section. When only some failures are detected during the periodic inspection and test, the average probability of failure can be calculated using an equation that combines the two types of failures—those detected by the test and those undetected by the test. One must estimate the percentage of failures detected by the test to make this split (Van Beurden 2018, Chapter 12). The equation would be:

PFavg = CPT · λ · TI/2 + (1 – CPT) · λ · LT/2    (24-17)

where
λ = the failure rate
CPT = the percentage of failures detected by the proof test
TI = the periodic test interval
LT = the lifetime of the process unit
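A minimal sketch of Equation 24-17 with assumed numbers shows why proof-test coverage matters so much:

```python
# Equation 24-17 with assumed figures: splitting failures into those found
# by the periodic proof test and those that remain until end of life.

LAMBDA = 0.02   # assumed dangerous failure rate, per year
CPT = 0.9       # assumed proof-test coverage (90% of failures detected)
TI = 1.0        # proof-test interval, years
LT = 20.0       # assumed lifetime of the process unit, years

pf_avg = CPT * LAMBDA * TI / 2 + (1 - CPT) * LAMBDA * LT / 2
print(f"PFavg = {pf_avg:.4f}")
# -> 0.029: the 10% of failures the test misses contribute more (0.020)
#    than the 90% it catches (0.009), because they accumulate over 20 years.
```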
Equipment Failure Modes
Instrumentation equipment can fail in different ways. We call these failure modes. Consider a two-wire pressure transmitter. This instrument is designed to provide a 4–20 mA electrical current signal in proportion to the pressure input. Detailed failure modes, effects, and diagnostic analysis (FMEDA) of several of these devices (Goble 1999) reveals several failure modes: frozen output, current to upper limit, current to lower limit, diagnostic failure, communications failure, and drifting/erratic output, among perhaps others. These instrument failures can be classified into failure mode categories when the application is known.
If a single transmitter (no redundancy) were connected to a safety programmable logic controller (PLC) programmed to trip when the current goes up (high trip), then the instrument failure modes could be classified as shown in Table 24-1.
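Table 24-1 itself is not reproduced here. As a purely illustrative stand-in (the author's actual table may differ), a classification of the failure modes listed above for this high-trip case might look like the following; it anticipates the fail-safe, fail-danger, annunciation, and no-effect categories defined later in this chapter.

```python
# Plausible (assumed, not from Table 24-1) classification of transmitter
# failure modes for a single transmitter on a high-trip safety PLC input.

FAILURE_MODE_CLASS = {
    "current to upper limit": "safe (drives the high trip)",
    "current to lower limit": "dangerous (can never reach the trip point)",
    "frozen output": "dangerous (will not respond to a real demand)",
    "drifting/erratic output": "safe or dangerous, depending on direction",
    "diagnostic failure": "annunciation",
    "communications failure": "no effect (the trip uses the analog signal)",
}

for mode, category in FAILURE_MODE_CLASS.items():
    print(f"{mode:25s} -> {category}")
```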
Consider the possible failure modes of a PLC with a digital input and a digital output, both in a de-energize-to-trip (logic 0) design. The PLC failure modes can be categorized relative to the safety function as shown in Table 24-2.
Final element components will also fail and, again, the specific failure modes of the components can be classified into relevant failure modes depending on the application. It is important to know if a valve will open or close when it trips. Table 24-3 shows an example failure mode classification based on a close-to-trip configuration.
It should be noted that the above failure mode categories apply to an individual instrument and may not apply to the set of equipment that performs an SIF, as the equipment set may contain redundancy. It should also be made clear that the above listings are not intended to be comprehensive or representative of all component types.
Fail-Safe
Most practitioners define fail-safe for an instrument as “a failure that causes a ‘false or spurious’ trip of a safety instrumented function unless that trip is prevented by the architecture of the safety instrumented function.” Many formal definitions, including IEC 61508:2010, define it as “a failure that causes the system to go to a safe state or increases the probability of going to a safe state.” This definition is useful at the system level and includes many cases where redundant architectures are used. However, it also includes failures of automatic diagnostic components, which have a very different impact on the probability of a false trip. IEC 61508:2000 uses the definition of “failure [that] does not have the potential to put the safety-related system in a hazardous or fail-to-function state.” This definition includes many failures that do not cause a false trip under any circumstances and is quite different from the definition practitioners need to calculate the false-trip probability.
Fail-Danger
Many practitioners define fail-danger as “a failure that prevents a safety instrumented function from performing its automatic protection function.” Variations of this definition exist in standards. IEC 61508 provides a similar definition that reads “failure that has the potential to put the safety-related system in a hazardous or fail-to-function state.” The definition from IEC 61508 goes on to add a note: “Whether or not the potential is realized may depend on the channel architecture of the system; in systems with multiple channels to improve safety, a dangerous hardware failure is less likely to lead to the overall dangerous or fail-to-function state.” The note from IEC 61508 recognizes that a definition for a piece of equipment may not have the same meaning at the SIF level or the system level.
Annunciation
Some practitioners recognize that certain failures within equipment used in an SIF prevent the automatic diagnostics from operating correctly. When reliability models are built, many account for the automatic diagnostics’ ability to reduce the probability of failure. When these diagnostics stop working, the probability of dangerous failure or false trip is increased. While these effects may not be significant, unless they are modeled, the effect is not known. An annunciation failure is therefore defined as “a failure that prevents automatic diagnostics from detecting or annunciating that a failure has occurred inside the equipment” (Goble 2010). Note that the failure may be within the monitored equipment itself or inside an external piece of equipment designed to perform automatic diagnostics. These failures would be classified as fail-safe under the definition provided in IEC 61508:2000.
No Effect
Some failures within a piece of equipment have no effect on the safety instrumented function, nor do they cause a false trip or prevent automatic diagnostics from working. Some functionality performed by the equipment is impaired, but that functionality is not needed. These may simply be called no-effect failures. They are typically not used in any reliability model intended to obtain the probability of a false trip or the probability of a fail-danger.
Detected/Undetected
Failure modes can be further classified as detected or undetected by automatic diagnostics performed somewhere in the SIS.
Safety Instrumented Function Modeling of Failure Modes
When evaluating SIF safety integrity, an engineer must examine more than the probability of successful operation. The failure modes of the system must be individually calculated. The normal metrics of reliability, availability, and MTTF only suggest a measure of success. Additional metrics to measure safety integrity include PFD, PFDavg, MTTFD, and risk reduction factor (RRF). Other related terms are MTTFspurious and PFS.
PFS/PFD
There is a probability that a safety instrumented function will fail and cause a spurious/false trip of the process. This is called the probability of failing safely (PFS). There is also a probability that a safety instrumented function will fail such that it cannot respond to a potentially dangerous condition. This is called the probability of failure on demand (PFD).
PFDavg
PFD average (PFDavg) is a term used to describe the average probability of failure on demand. PFD will vary as a function of the operating time interval of the equipment. It will not reach a steady-state value if any periodic inspection, test, and repair are done.
Therefore, the average value of PFD over a period of time can be a useful metric if it is assumed that the potentially dangerous condition (also called a hazard) is independent of equipment failures in the SIF. The assumption of independence between hazards and SIF failures seems very realistic. (Note: If control functions and safety functions are performed by the same equipment, the assumption may not be valid! Detailed analysis must be done to ensure safety in such situations, and it is best to avoid such designs completely.) When hazards and equipment are independent, it is realized that a hazard may come at any time. Therefore, international standards have specified that PFDavg is an appropriate metric for measuring the effectiveness of an SIF. PFDavg is defined as the arithmetic mean over a defined time interval. For situations where a safety instrumented function is periodically inspected and tested, the test interval is the correct time period. Therefore:

PFDavg = (1/TI) ∫₀^TI PFD(t) dt    (24-18)
This definition is used to obtain numerical results in several of the system-modeling techniques. In a discrete-time Markov model using numerical solution techniques, a direct average of the time-dependent numerical values will provide the most accurate answer. When analytical equations for PFD are obtained using a fault tree, the above equation can be used to obtain equations for PFDavg. It has become recognized that at least nine variables may impact a PFDavg calculation depending on the application (Van Beurden 2016). It is important that realistic analysis be used for safety design processes.
Redundancy
There are applications where the reliability or safety integrity of a single instrument is not sufficient. In these cases, more than one instrument is used in a design. Some arrangements of the instruments are designed to provide higher reliability (typically to protect against a single “safe” failure). Other arrangements of instruments are designed to provide higher safety integrity (typically to protect against a single “dangerous” failure). There are also arrangements that are designed to provide both high reliability and high safety integrity. When multiple instruments are wired (or configured) to provide redundancy to protect against one or more failure modes, these arrangements
are known as architectures. A listing of some common architectures is shown in Table 24-4. These architectures are described in detail in Chapter 14 of Control System Safety Evaluation and Reliability, Third Edition (Goble 2010).
The naming convention XooY stands for X out of Y, where Y is the number of equipment sets in the design and X is the number of equipment sets needed to perform the function. In some advanced architecture names, the letter D is added to designate a switch that is controlled by diagnostics to reconfigure the equipment if a failure is detected in one equipment set.
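As a rough illustration of how architecture choice affects the dangerous failure mode, the widely used simplified low-demand equations can be compared side by side (common cause and diagnostics ignored; the numbers are assumptions, not vendor data; see Goble 2010 for the full treatment).

```python
# Hedged sketch comparing common architectures with simplified
# low-demand PFDavg equations.

LAMBDA_DU = 0.02   # assumed dangerous undetected failure rate, per year
TI = 1.0           # assumed proof-test interval, years

x = LAMBDA_DU * TI
pfd = {
    "1oo1": x / 2,       # simplex
    "1oo2": x**2 / 3,    # either unit can trip: better safety, more trips
    "2oo2": x,           # both must agree to trip: worse safety
    "2oo3": x**2,        # majority voting: safety plus trip immunity
}

for arch, value in pfd.items():
    print(f"{arch}: PFDavg ~ {value:.2e}")
```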
Conclusions
General system availability, as well as the dangerous failure mode metric PFDavg, depends on the variables described above (Van Beurden 2016). These include failure rates, proof-test intervals, proof-test coverage, proof-test duration, automatic diagnostics, redundancy, and operational/maintenance capability. It is important that realistic parameters be used and that all relevant parameters be included in any calculation.
Further Information
Billinton, R., and R. N. Allan. Reliability Evaluation of Engineering Systems: Concepts and Techniques. New York: Plenum Press, 1983.
Bukowski, J. V. “Modeling and Analyzing the Effects of Periodic Inspection on the Performance of Safety-Critical Systems.” IEEE Transactions on Reliability 50, no. 3 (2001).
Goble, W. M. Control System Safety Evaluation and Reliability. 3rd ed. Research Triangle Park, NC: ISA (International Society of Automation), 2010.
Goble, W. M., and A. C. Brombacher. “Using a Failure Modes, Effects and Diagnostic Analysis (FMEDA) to Measure Diagnostic Coverage in Programmable Electronic Systems.” Reliability Engineering and System Safety 66, no. 2 (November 1999).
IEC 61508:2010 Ed. 2.0. Functional Safety of Electrical/Electronic/Programmable Electronic Safety-Related Systems. Geneva 20 – Switzerland: IEC (International Electrotechnical Commission).
IEC 61511:2016 Ed. 2.0. Application of Safety Instrumented Systems for the Process Industries. Geneva 20 – Switzerland: IEC (International Electrotechnical Commission).
Van Beurden, I., and W. M. Goble. Safety Instrumented System Design: Techniques and Design Verification. Research Triangle Park, NC: ISA (International Society of Automation), 2018.
———. The Key Variables Needed for PFDavg Calculation. White paper. Sellersville, PA: exida, 2016. www.exida.com.
About the Author
William M. Goble, PhD, is currently managing director of exida.com, a knowledge company that provides ANSI-accredited functional safety and cybersecurity certification, failure data research, system consulting, training, and support for safety-critical and high-availability process automation. He has more than 40 years of experience in control systems product development, engineering management, marketing, training, and consulting. Goble has a BSEE from the Pennsylvania State University, an MSEE from Villanova, and a PhD from Eindhoven University of Technology in reliability engineering. He is a registered professional engineer in the state of Pennsylvania and a Certified Functional Safety Expert (CFSE). He is an ISA Fellow and an author of several ISA books.
VIII Network Communications
Analog Communications
This chapter provides an overview of the history of analog communications from direct mechanical devices to the present digital networks, and it also puts the reasons for many of the resulting analog communications standards into context through examples of the typical installations.
Wireless
Wireless solutions can dramatically reduce the cost of adding measurement points, making it feasible to include measurements that were not practical with traditional wired solutions. This chapter provides an overview of the principal field-level sensor networks and items that must be considered for their design and implementation.
Cybersecurity
Integrating systems and communications is now fundamental to automation. While some who work in a specific area of automation may have been able to avoid a good understanding of these topics, that isolation is rapidly coming to an end. With the rapid convergence of information technology (IT) and operations technology (OT), network security is a critical element in an automation professional’s repertoire. Many IT-based tools may solve the integration issue; however, they usually do not deal with the unique real-time and security issues in automation, and they often ignore the plant-floor issues. As a result, no topic is hotter today than network security—including the Internet. Automation professionals who are working in any type of integration must pay attention to the security of the systems.
25 Analog Communications By Richard H. Caro
The earliest process control instruments were mechanical devices in which the sensor was directly coupled to the control mechanism, which in turn was directly coupled to the control valve. Usually, a dial indicator was provided to enable the process variable value to be read. These devices are still being used today and are called self-actuating controllers or often just regulators. These mechanical controllers often take advantage of a physical property of some fluid to operate the final control element. For example, a fluid-filled system can take advantage of the thermal expansion of the fluid to both sense temperature and operate a control valve. Likewise, process pressure changes can be channeled mechanically or through filled systems to operate a control valve. Such controllers are proportional controllers with some gain adjustment available through mechanical linkages or some other mechanical advantage. We now know that they can exhibit some offset error. While self-actuating controllers (see Figure 25-1) are usually low-cost devices, it was quickly recognized that it would be easier and safer for the process operator to monitor and control processes if there was an indication of the process variable in a more convenient and protected place. Therefore, a need was established to communicate the process variable from the sensor that remained in the field to a remote operator panel. The mechanism created for this communication was air pressure over the range 3–15 psi. This is called pneumatic transmission. Applications in countries using the metric system required the pressure in standard international units to be 20–100 kPa, which is very close to the same pressures as 3–15 psi. The value of using 3 psi (or 20 kPa) rather than zero is to detect failure of the instrument air supply. The value selected for 100% is 15 psi (or 100 kPa) because it is well below nominal pressures of the air supply for diagnostic purposes.
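A small sketch of the span arithmetic implied by these ranges follows; the supply-failure threshold used below is an assumption for illustration, not part of the convention.

```python
# Illustrative helper for the 3-15 psi (20-100 kPa) convention: converting
# a transmitted pressure to percent of span, with the live zero used to
# flag a failed air supply.

def pneumatic_to_percent(pressure: float, lo: float = 3.0, hi: float = 15.0) -> float:
    """Map a 3-15 psi signal to 0-100% (pass lo=20, hi=100 for kPa)."""
    if pressure < lo * 0.9:  # well under the live zero: assumed fault threshold
        raise ValueError(f"signal {pressure} below live zero; check air supply")
    return (pressure - lo) / (hi - lo) * 100.0

print(pneumatic_to_percent(9.0))            # mid-scale in psi -> 50.0
print(pneumatic_to_percent(60.0, 20, 100))  # mid-scale in kPa -> 50.0
```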
However, the operator still had to go to the field to change the set point of the controller. The solution was to build the controller into the display unit mounted at the operator panel using pneumatic computing relays. The panel-mounted controller could be more easily serviced than if it was in the field. The controller output was in the 3–15 psi air pressure range and piped to a control valve that was, by necessity, mounted on the process piping in the field. The control valve was operated by a pneumatic actuator or force motor using higher-pressure air for operation. Once the pneumatic controller was created, innovative suppliers soon were able to add integral and derivative control to the original proportional control in order to make the control more responsive and to correct for offset error. Additionally, pneumatic valve positioners were created to provide simple feedback control for control valve position. A pneumatic control loop is illustrated in Figure 25-2.
Thousands of pneumatic instruments, controllers, and control valves have remained in use more than 50 years after the commercialization of electronic signal transmission and well into the digital signal transmission age. However, except for a few processes in the manufacture of extremely hazardous gases and liquids, such as ether, there has been no growth in pneumatic instrumentation and signal transmission. Many pneumatic process control systems are being modernized to electronic signal transmission or directly to digital data transmission and control. While pneumatic data transmission and control proved to be highly reliable, it is relatively expensive to interconnect sensors, controllers, and final control elements with leak-free tubing. Frequent maintenance is required to repair tubing, to clean instruments containing entrained oil from air compressors, and to remove silica gel from air driers. In the 1960s, it was decided that the replacement for pneumatic signal transmission was to be a small analog direct current (DC) signal, which could be used over considerable distances on a single pair of small gauge wiring without amplification. While most supplier companies agreed that the range of 4–20 mA was probably the best, one supplier persisted in its demand for 10–50 mA because its equipment could not be powered from the base 4 mA signal. The first ANSI/ISA S50.1-1972 standard was for 4–20 mA DC with an alternative at 10–50 mA. Eventually, that one supplier changed technologies and accepted 4–20 mA DC analog signal communications. The alternative for 10–50 mA was removed for the 1982 edition of this standard. The reason that 4 mA was selected for the low end of the transmission range was to provide the minimal electrical power necessary to energize the field instrument. Also, providing a “live zero” that is different from zero mA proves that the field instrument is
operating and provides a small range in which to indicate a malfunction of the field instrument. The upper range value of 20 mA was selected because it perpetuated the tradition of five times the base value from 3–15 psi pneumatic transmission. There is no standard meaning for signals outside the 4–20 mA range, although some manufacturers have used such signals for diagnostic purposes.
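The scaling implied by this convention is straightforward. In the sketch below, the out-of-range thresholds are assumptions, since, as noted above, there is no standard meaning for signals outside 4–20 mA.

```python
# Illustrative scaling of a 4-20 mA signal to engineering units.

def ma_to_pv(current_ma: float, lrv: float, urv: float) -> float:
    """Convert a 4-20 mA current to a process value between lrv and urv."""
    if not 3.8 <= current_ma <= 20.5:  # assumed fault band around the live zero
        raise ValueError(f"{current_ma} mA out of range; possible instrument fault")
    return lrv + (current_ma - 4.0) / 16.0 * (urv - lrv)

# Example: a pressure transmitter ranged 0-300 psig
print(ma_to_pv(12.0, 0.0, 300.0))   # 12 mA is 50% of span -> 150.0
```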
One reason for selecting a current-based signal is that sufficient electrical power (4 mA • 24 V = 96 mW) to energize the sensor can be delivered over the same pair of wires as the signal. Use of two wires for both the signal and power reduces the cost of installation. Some field instruments require too much electrical energy to be powered from the signal transmission line and are said to be “self-powered,” meaning that they are powered from a source other than the 4–20 mA transmission line. Another reason for using a current-based signal is that current is unaffected by the resistance (length or diameter) of the connecting wire. A voltage-based signal would vary with the length and gauge of the connecting wire. A typical electronic control loop is illustrated in Figure 25-3.
Although the transmitted signal is a 4–20 mA analog current, the control valve is most often operated by high-pressure pneumatic air because it is the most economic and responsive technology to move the position of the control valve. This requires that the 4–20 mA output from the controller be used to modulate the high-pressure air driving the control valve actuator. A device called an I/P converter may be required to convert from 4–20 mA to 3–15 psi (or 20–100 kPa). The output of the I/P converter is connected to a pneumatic valve positioner. However, more often the conversion takes place in an
electronic control valve positioner that uses feedback from the control valve itself and modulates the high-pressure pneumatic air to achieve the position required by the controller based on its 4–20 mA output.
The 4–20 mA signal is achieved by the field transmitter or the controller acting as a current regulator or variable resistor in the circuit. The two-wire loop passing from the DC power source through the field transmitter can therefore have a maximum total resistance such that the total voltage drop cannot exceed that of the DC power source—nominally 24 V. One of the voltage drops occurs across the 250 ohm resistor connected across the input terminals of a controller, or the field wiring terminals of a digital control system analog input point. Other instruments may also be wired in series in the same two-wire current loop as long as the total loop resistance does not exceed approximately 800 ohms. The wiring of a loop-powered field transmitter is illustrated in Figure 25-4.
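The loop budget described above can be checked with simple arithmetic; the transmitter's minimum operating voltage below is an assumed figure, so consult the device specification in practice.

```python
# Hedged sketch of the two-wire loop resistance budget.

SUPPLY_V = 24.0           # nominal DC power source
TRANSMITTER_MIN_V = 8.0   # assumed minimum ("lift-off") voltage at the transmitter
I_MAX = 0.020             # 20 mA full-scale current, amperes

max_loop_ohms = (SUPPLY_V - TRANSMITTER_MIN_V) / I_MAX
print(f"maximum total loop resistance ~ {max_loop_ohms:.0f} ohms")
# -> 800 ohms, matching the figure quoted above; the 250-ohm input
#    resistor plus wiring and any series instruments must fit inside it.
```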
More than 25 years after the work began to develop a digital data transmission standard, 4–20 mA DC still dominates the process control market for both new and revamped installations because it now serves as the primary signal transmission method for the Highway Addressable Remote Transducer (HART) protocol. HART uses its one 4–20 mA analog transmission channel for the primary variable, usually the process variable measurement value, and transmits all other data on its digital signal channels carried on the same wires as the 4–20 mA analog signal. Analog electronic signal transmission remains the fastest way to transmit a measured
variable to a controller because it is a continuous signal. This is especially true when the measurement mechanism itself continuously modulates the output current, as in force-motor-driven devices. However, even in more modern field transmitters that use inherent digital transducers and digital-to-analog converters, delays to produce the analog signal are very small compared to process dynamics; consequently, the resulting signals are virtually continuous and certainly at a higher update rate than the associated control system. Continuous measurement, transmission, and analog electronic controllers are not affected by the signal aliasing errors that can occur in sampled-data digital transmission and control. The design of process manufacturing plants is usually documented on process piping and instrumentation diagrams (P&IDs), which attempt to show the points at which the process variable (PV) is measured, the points at which control valves are located, and the interconnection of instruments and the control system. The documentation symbols for the instruments and control valves, and the P&ID graphic representations for the instrumentation connections, are covered elsewhere in this book. All the P&ID symbols are standardized in ANSI/ISA-5.01.
Further Information
ANSI/ISA-5.01-2009. Instrumentation Symbols and Identification. Research Triangle Park, NC: ISA (International Society of Automation).
ANSI/ISA-50.00.01-1975 (R2017). Compatibility of Analog Signals for Electronic Industrial Process Instruments. Research Triangle Park, NC: ISA (International Society of Automation).
About the Author
Richard H. (Dick) Caro is CEO of CMC Associates, a business strategy and professional services firm in Arlington, Mass. Prior to CMC, he was vice president of the ARC Advisory Group in Dedham, Mass. He is the chairman of ISA50 and formerly the convener of the IEC (International Electrotechnical Commission) fieldbus standards committees. Before joining ARC, Caro held the position of senior manager with Arthur D. Little, Inc. in Cambridge, Mass., was a founder of Autech Data Systems, and was director of marketing at ModComp. In the 1970s, The Foxboro Company employed Caro in both development and marketing positions. He holds a BS and MS in chemical engineering and an MBA. He holds the rank of ISA Fellow and is a Certified Automation Professional. Caro was named to the Process Automation Hall of Fame in 2005. He has published three books on automation networks, including Wireless Networks for Industrial Automation.
26 Wireless Transmitters By Richard H. Caro
Summary
Process control instrumentation has already begun the transition from bus wiring, as in FOUNDATION Fieldbus and PROFIBUS, to wireless. Many wireless applications are now appearing using both ISA100 Wireless and WirelessHART, although not yet in critical control loops. As experience is gained, user confidence improves; and as microprocessors become faster and consume less energy, it appears that wireless process control instrumentation will eventually become mainstream.
Introduction to Wireless Most instrument engineers would like to incorporate measurement transmitters into processes without the associated complexity and cost of installing and maintaining interconnecting wiring to a host system or a distributed control system (DCS). Wireless solutions can dramatically reduce the cost of adding measurement points, making it feasible to include measurements that were not practical with traditional wired solutions. With a wired plant, every individual wire run must be engineered, designed, and documented. Every termination must be specified and drawn so that installation technicians can perform the proper connections. Even FOUNDATION Fieldbus, in which individual point terminations are not important, must be drawn in detail since installation technicians are not permitted to make random connection decisions. Wireless has no terminations for data transmission, although sometimes it is necessary to wire-connect to a power source. Often the physical location of a wireless instrument and perhaps a detachable antenna may be very important and the antenna may need to be designed and separately installed. Maintenance of instrumentation wiring within a plant involves ongoing costs often related to corrosion of terminations and damage from weather, construction, and other accidental sources. Wireless has a clear advantage since there are no wiring terminations and there is little likelihood of damage to the communications path from construction
and accidental sources. However, there are temporary sources of interference, such as large mobile equipment blocking line-of-sight communications and random electrical noise from equipment such as an arc welder.
Wireless Network Infrastructure
Traditional distributed control systems (DCSs) use direct point-to-point wiring between field instruments and their input/output (I/O) points present on an analog input or output multiplexer card. There is no network for the I/O. If HART digital signals are present, they are either ignored, read occasionally with a handheld terminal, or routed to or from the DCS through the multiplexer card. If the field instrumentation is based on FOUNDATION Fieldbus, PROFIBUS-PA, or EtherNet/IP, then a network is required to channel the data between the field instruments and to and from the DCS. Likewise, if the field instruments are based on digital wireless technology such as ISA100 Wireless or WirelessHART, then a network is required to channel the data between the field instruments and to and from the DCS.
The nature of wired field networks is discussed in Chapter 8. The elements of the wireless portion of field networks are often referred to as the wireless network infrastructure. Unlike wired networks, in which every signal is conducted by wire to its intended destination, wireless messages can fail to be delivered when the signals encounter obstacles or interference, or when they are not powerful enough to survive the distances involved. The wireless network infrastructure includes the following:
• Formation of mesh networks in which intermediate devices store and forward signals to overcome obstacles and lengthen reception distances
• Redundant or resilient signal paths so that messages are delivered along alternative routes for reliability
• Use of frequency shifting so that error recovery does not use the same frequency as failed messages
• Directional antennas to avoid interference and to lengthen reception distances
Wireless field networks always terminate in a gateway that may connect to a data acquisition or control system with direct wired connections, with a wired network, or with a plant-level wireless network. The cost of a wireless field network must always include the gateway, which is usually combined with the network manager that controls wireless network performance. The gateway almost always includes the wireless network security manager as well. Note that the incremental cost of adding wireless field instruments does not include any additional network infrastructure devices.
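The store-and-forward meshing and redundant routing described above can be sketched in a few lines. The topology, device tags, and breadth-first route search below are invented for illustration; in a real network, the network manager assigns routes:

```python
from collections import deque

# Invented mesh: two field transmitters and a pressure transmitter
# relay for each other on the way to the gateway.
mesh = {
    "TT-101": {"TT-102", "PT-201"},
    "TT-102": {"TT-101", "gateway"},
    "PT-201": {"TT-101", "gateway"},
    "gateway": {"TT-102", "PT-201"},
}

def route(src, dst, links, failed=frozenset()):
    """Breadth-first search for a delivery path, skipping failed links."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in links[path[-1]]:
            if nxt not in seen and frozenset((path[-1], nxt)) not in failed:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(route("TT-101", "gateway", mesh))
# An obstacle blocks one link; an alternate route still delivers:
print(route("TT-101", "gateway", mesh,
            failed={frozenset(("TT-101", "TT-102"))}))
```

When the direct link fails, the message still reaches the gateway through the alternate neighbor, which is the essence of the resilient-path requirement.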
ISM Band
Wireless interconnection relies on the radio frequency spectrum, a limited and crowded resource in which frequency bands are allocated by national governments. Governmental organizations in most nations have established license-free radio bands, the most significant of which is the industrial, scientific, and medical (ISM) band centered at 2.4 GHz. This band is widely used for cordless telephones, home and office wireless networks, and wireless process control instrumentation. It is also used by microwave ovens, which are often located in field control rooms (microwave leakage may show up as interference on wireless networks). Although the 2.4 GHz ISM band is crowded, protocols designed for it provide many ways to avoid interference and to recover from blocked messages. The blessing of the ISM band is that the broad availability of ISM components and systems leads to lower-cost products. The curse of the ISM band is the resulting complexity of the protocol needed to assure reliable end-to-end communications. There are additional ISM bands at 433 MHz, 868–915 MHz, 5.8 GHz, and 57–66 GHz.
Need for Standards
While field transmitters of process information are free-standing devices, the information they provide must be sent to other devices to participate in the overall control scheme. Likewise, the field instrument must be configured or programmed to do its assigned task. Wired and wireless field instruments and the devices to which they connect must “speak” the same language. Both the physical connection and the semantic content of the communications must be the same for this to occur. In communications language, we refer to this as the protocol. To ensure that the same protocol is followed by both the field transmitter and the connected receiver, standards must be established. The standards for field instrumentation have usually originated with the International Society of Automation (ISA). However, to assure worldwide commonality, instrumentation standards must be established and maintained by a worldwide standards body. The standards body charged with maintaining both wired and wireless communications standards for industrial process control is the International Electrotechnical Commission (IEC), headquartered in Geneva, Switzerland. With such standards, field transmitters designed and manufactured by any vendor for use in any country will be able to communicate with devices from any other vendor who designs according to the
requirements of the standard. Wired communications standards must ensure that the electrical characteristics are firmly established and that the format of the messages is organized in a way that they can be understood and used by the receiver. Wireless communications standards must also specify the format and organization of the messages, but they must additionally respect both the laws of radio physics and statutory law. Not only must the radio (electrical interface) be powerful enough to cover the required distance, but the same radio channel, or carrier frequency, must be used at both the transmitting and receiving ends. Additionally, energy conservation for wireless instrumentation is achieved by turning the transmitter and the receiver off most of the time; they awaken only to transmit or receive data. Careful coordination of the wake cycle is necessary for the two ends to communicate. Requirements for radio channel compatibility and energy conservation are key elements in the standards to which wireless field transmitters are designed.
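A minimal sketch of the wake-cycle coordination just described, with invented slot and superframe numbers; real protocols distribute the slot schedule through the network manager:

```python
# Both ends of a link sleep most of the time and wake only in a
# pre-agreed slot of each superframe, so their radios meet on the air.
SLOT_MS = 10             # basic slot time used by several protocols
SUPERFRAME_SLOTS = 100   # hypothetical 1-second superframe
assigned_slot = 42       # slot assigned to this link (invented)

def radio_awake(time_ms: int) -> bool:
    """True when the radio should be powered for its assigned slot."""
    return (time_ms // SLOT_MS) % SUPERFRAME_SLOTS == assigned_slot

# Duty cycle: on for 1 of 100 slots -> 1% of the time, which is the
# source of the long battery life discussed later in this chapter.
on_fraction = sum(radio_awake(t) for t in range(0, 1000, SLOT_MS)) / 100
print(f"Radio on {on_fraction:.0%} of the time")  # -> 1%
```

If the two ends disagree on the slot schedule or carrier frequency, they never hear each other, which is precisely why these items are fixed by standard rather than left to each vendor.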
Limitations of Wireless
Process plants contain many pieces of process equipment fabricated from steel. These pieces of equipment are mounted in industrial buildings also made of steel. In fact, the phrase “canyons of steel” is often used to describe the radio environment of the process plant. Buildings and equipment made from steel, the size of the plant, and external noise all affect the ability of wireless devices to communicate in the following ways:
• Steel reflects radio signals in many directions, which may cause them to arrive at the destination at different times; this is called multipath interference.
• The wireless transmitter may not be in a direct line-of-sight with the device with which it must communicate.
• Process plants are often very large, requiring signal paths that may be longer than those physically achievable by the type of wireless communications being used.
• There may be sources of interference or noise produced by outside forces, both incidental and covert.
The wireless protocol must be designed to overcome these challenges of distance, multipath interference, and other radio frequency (RF) sources.
Powering Wireless Field Instruments
Wireless field instruments may be battery powered, powered from a source of local electricity, or “self-powered” from a power generator source. The convenience of being able to install devices where there are no sources of electrical power is one of the principal reasons to invest in the additional cost of wireless field instrumentation. Yet many times electrical power is available nearby, from which the wireless instrument may be powered while the communications connection remains wireless. This is the case when the process instrument itself is already installed but is connected to a control system by means of an analog signal that also supplies instrument power (e.g., 4–20 mA). Adapters are available to convert process measurement data from such instruments to data on wireless networks. Often, such adapters may themselves be powered from the same source as the wired field instrument.
Several manufacturers produce “energy harvesting” devices that attach to wireless instruments to transform them into self-powered devices. The electrical power is produced by using a local source of light (solar), vibration, thermal energy, or air pressure to generate enough electrical energy to power the instrument. Often, a primary battery is used to back up the harvester during times when the scavenged source of power is not available. Solar cells obviously depend upon daylight but may also be energized from local sources of artificial lighting. Most process plants have vibration from fluid pumping and have high-temperature equipment from which thermal energy can be harvested. Finally, most process operations have ordinary compressed air, used for maintenance purposes, conveniently piped to areas where instrumentation is installed. Note that energy harvesting relies on non-rechargeable (primary) batteries only as backup when the harvested source of energy is not available.
IEC 62830 is a standard that defines the attachment fitting common to all energy-harvesting and primary battery-powered devices used for wireless communications in industrial measurement and control. IEC 60086 sets the international standard for primary batteries. Most wireless instruments are designed to use primary (non-rechargeable) batteries. This means that such instruments must be very frugal in their use of electricity. In most cases, the instruments are designed to operate on replaceable cell batteries with a replacement cycle no shorter than 5 years.
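The 5-year replacement cycle follows from simple duty-cycle arithmetic. A rough illustration, with every number hypothetical and self-discharge ignored:

```python
# Hypothetical duty-cycled transmitter on a primary lithium cell.
capacity_mah = 17000   # e.g., a D-size lithium thionyl chloride cell
sleep_ma = 0.010       # sleep (quiescent) current
active_ma = 20.0       # radio transmit/receive current
duty_cycle = 0.01      # radio active 1% of the time

avg_ma = active_ma * duty_cycle + sleep_ma * (1 - duty_cycle)
years = capacity_mah / avg_ma / 8760
print(f"Average draw {avg_ma:.3f} mA -> about {years:.1f} years")  # ~9.2
```

Note how the average draw is dominated by the active term: shortening the radio's on-time is far more effective than shaving the sleep current.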
Interference and Other Problems
Since most of the currently available wireless transmitters are designed to operate in the 2.4 GHz ISM band, they are subject to interference from Wi-Fi (IEEE 802.11) operating in that same band. The protocols using IEEE 802.15.4 are all designed for this band and
use a variety of methods to avoid interference and to recover messages that are not delivered because of it. The three process control protocols discussed below (ISA100 Wireless, WirelessHART, and WIA-PA) all use channel hopping within the 2.4 GHz band to avoid interference and to overcome multipath effects. CENELEC (the European Committee for Electrotechnical Standardization, the electrotechnical standards authority for the European Union) has issued a specific requirement for radio communications in Europe: all telecommunications standards shall provide a “Listen Before Talk” protocol. While this has no direct effect on international standards, all newly approved IEC standards have specifically implemented this requirement.
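A minimal sketch of channel hopping with blacklisting in the spirit of these protocols; the seed, hop table, and blacklisted channels are invented, and the real standards distribute the hop sequence through the network manager rather than choosing it ad hoc:

```python
import random

# The 16 IEEE 802.15.4 channels in the 2.4 GHz band are numbered 11-26.
CHANNELS = list(range(11, 27))

# A shared pseudo-random permutation stands in for the distributed
# hop table (seed is invented).
hop_table = random.Random(7).sample(CHANNELS, len(CHANNELS))

# Channels with excessive retries (e.g., Wi-Fi overlap) get blacklisted.
blacklist = {18, 19}

def channel_for_slot(slot: int) -> int:
    """Return the hop channel for a time slot, skipping bad channels."""
    usable = [ch for ch in hop_table if ch not in blacklist]
    return usable[slot % len(usable)]

print([channel_for_slot(s) for s in range(5)])  # hops across the band
```

Because successive slots land on widely separated channels, a retry after a failed message almost never reuses the frequency that just failed.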
A few users have objected to the use of wireless instrumentation for any critical application because of the possibility of intercepting wireless signals and of jamming the entire wireless network with a powerful radio outside the plant fence. These are threats to wireless networks that do not apply to wired networks. However, capitulating to the fear of jamming is hardly a reason to deny a technology that has so many benefits, and it can be shown that even in the presence of a broadband jammer, the frequency-hopping and direct-sequence spread spectrum used by all three process control wireless networks will prevent total signal loss.
It is possible to assemble a very powerful broadband jamming transmitter operating in the 2.4 GHz band in a covert attack on plant wireless networks. Such a jammer has been shown to disrupt Wi-Fi communications, but only when it is located inside the range of a Wi-Fi network using omnidirectional antennas; recovery is possible when the network is switched to channels away from the center of the band. While the jammer may send its signals over the full 2.4 GHz band, more than 60% of its power is confined to the center frequencies, with much less power at the highest and lowest frequencies. Process control networks use at least 15 channels scattered across the 2.4 GHz band; they are designed to extract correlated data from the ever-present white noise and to reject channels that generate excessive retries due to noise interference. This is not to say that a covert and illegal jammer has no effect on industrial wireless communications, just that the protocol, using both direct-sequence and frequency-hopping spread spectrum technologies, is specifically designed to mitigate the threat of covert jamming.
All wireless networks based on IEEE 802.15.4 use full-time AES 128-bit encryption for all messages to provide secure messaging. Additionally, the three process control protocols assure privacy because only devices authorized in advance are permitted to send messages. WirelessHART authenticates new devices only by direct attachment to a HART handheld device, such that a random signal from outside that network is rejected. WIA-PA also authenticates new network devices when they are
attached to a network configuration device. ISA100 Wireless authenticates over the air, but only devices that are pre-registered for network membership and preconfigured with an out-of-band (infrared) configuration device. Furthermore, ISA100 Wireless may use 256-bit encryption to validate the new network member. These security measures are far more than industry standard and are widely accepted as “wired-equivalent” security.
ISA100 Wireless
ANSI/ISA-100.11a-2011 was developed to be the preferred network protocol for industrial wireless communications. Specifically, ISA100 Wireless was designed to fulfill all the communications requirements of FOUNDATION Fieldbus, should it be implemented on a wireless network.1 This requires direct peer-to-peer messaging and time synchronization to ±1.0 ms. ANSI/ISA-100.11a-2011 is also identified as IEC 62734. The ISA100 Wireless Compliance Institute (WCI) is the industry organization responsible for testing new equipment and validating it for standards conformance. ISA100 Wireless registered products are listed on the WCI website: http://www.isa100wci.org/End-UserResources/Product-Portfolio.aspx.2 The current standard is based on the use of IEEE 802.15.4-2006 radios using 128-bit encryption, direct-sequence spread spectrum, and a basic slot time of 10 ms. Additions to the base protocol are as follows:
• Configurable slot times for network efficiency
• Less than 100 ms message latency
• Peer-to-peer messaging to support “smart” field devices
• Over-the-air provisioning (initialization) not requiring a terminal device
• Hopping among the 16 channels according to a configurable hopping table
• Mesh communications using two or more routes
• Backbone network to reduce the depth of the mesh when necessary
• Local nodes use IEEE EUI-64-bit addressing; externally, nodes are addressed using IPv6 networking based on the Internet standards RFC 6282 and RFC 6775 (6LoWPAN, IPv6 over low-power wireless personal area networks)
• End-to-end message acknowledgement using UDP/IP (User Datagram Protocol/Internet Protocol)
• Application layer compatible with IEC 61804 (EDDL or Electronic Device
Description Language), including HART
• Capable of tunneling other wireless protocols
• Duocast messaging, where every message is simultaneously sent to two neighbors in the mesh to improve reliability
The protocol for ANSI/ISA-100.11a-2011 served as a model for the development of IEEE 802.15.4e-2011, the most recent version of that standard. As the IEEE 802.15.4 radios move to that latest standard, ISA100 Wireless instruments will already be able to use them.
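A back-of-the-envelope look at why duocast improves reliability, assuming an invented per-link loss rate and independent failures of the two copies:

```python
# Why duocast helps: a message sent simultaneously to two mesh
# neighbors is lost only if BOTH copies fail. Loss rate is invented,
# and the two links are assumed to fail independently.
p_loss_single = 0.05                 # hypothetical per-link loss
p_loss_duocast = p_loss_single ** 2  # both copies must be lost

print(f"Single-path delivery probability: {1 - p_loss_single:.4f}")   # 0.9500
print(f"Duocast delivery probability:     {1 - p_loss_duocast:.4f}")  # 0.9975
```

Squaring a small loss probability is what turns an ordinary link into one suitable for consideration in control applications.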
ISA100 Wireless has been implemented by several of the major suppliers of process control instrumentation and is the core of their wireless strategy. Most varieties of process instruments are available with ISA100 Wireless, including HART adapters. The ISA100.15 subcommittee has developed the technology to be used for a backhaul network, the network used to connect the ISA100 gateway with other network gateways and applications. For process control applications, the current conclusion is that the FOUNDATION Fieldbus High Speed Ethernet (HSE) protocol can be used over any IP network appropriate for the required speed and distance, such as Ethernet or Wi-Fi. While early users of ISA100 Wireless have concentrated on monitoring applications, it is anticipated that their experience will lead to applications in feedback loop control. Since the architecture of ISA100 Wireless supports FOUNDATION Fieldbus, some members of the ISA100 standards committee have predicted that ISA100 Wireless will serve as the core technology for a wireless version of FOUNDATION Fieldbus when microprocessors with suitably low energy requirements become available.
WirelessHART
WirelessHART was designed by the HART Communication Foundation (HCF) to specifically address the needs of process measurement and control applications. It is a wireless extension to the HART protocol that is designed to be backward compatible with previous versions of HART. The design minimizes the impact of installing a wireless network for those companies currently using HART instrumentation. WirelessHART provides a wireless network connection to existing HART transmitters installed without a digital connection to a control system. WirelessHART is defined by the IEC 62591 standard. HCF conducts conformance testing and interchangeability validation for WirelessHART instruments. Registered devices are listed on their website: http://www.hartcommproduct.com/inventory2/index.php?
action=listcat&search=search...&tec=2&cat=&mem=&x=24&y=15.3 Unfortunately, WirelessHART is not compatible with ISA100 Wireless. The two networks may coexist in the same plant area with recoverable interactions; however, interoperation, the ability for devices on one network to communicate directly with those on the other network, is not possible, although such messages can be passed through a gateway that has access to both networks (a dual-function gateway).
WirelessHART uses the IEEE 802.15.4-2006 radio with AES 128-bit encryption, direct-sequence spread spectrum, channel hopping, and a fixed slot time of 10 ms. WirelessHART is well supported by current chip suppliers. Features of the WirelessHART protocol include:
• Hopping among 15 channels according to a pseudo-random hopping table
• Low-latency, high-reliability mesh communications using two or more routes
• A proprietary network layer with IEEE EUI-64-bit addresses
• A proprietary transport layer with end-to-end message acknowledgement
• An application layer consisting of all HART 7 commands plus unique wireless commands
• HART 7 compatible handheld devices used to provision/initialize field instruments
• Field-proven performance
WirelessHART has been very popular among early wireless users, who can purchase instruments and HART adapters from several instrumentation suppliers. Since WirelessHART instruments were the first to market, they have built an installed base greater than that of ISA100 Wireless.
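Because each store-and-forward hop consumes at least one fixed 10 ms slot, mesh depth translates directly into latency; this is one reason backbone links are used to keep meshes shallow. A trivial illustration with invented hop counts, ignoring retries:

```python
# Back-of-the-envelope mesh latency for a fixed-slot TDMA protocol:
# each store-and-forward hop occupies at least one 10 ms slot.
SLOT_MS = 10  # fixed WirelessHART slot time

for hops in (1, 3, 6):  # hypothetical mesh depths
    print(f"{hops} hop(s): >= {hops * SLOT_MS} ms one-way, ignoring retries")
```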
WIA-PA
WIA-PA was developed by a Chinese consortium initially formed by Chongqing University. It is very similar to both ISA100 and WirelessHART, but it has small yet significant differences. Like ISA100 and WirelessHART, WIA-PA is based on the use of the IEEE 802.15.4-2006 radio, including 128-bit AES encryption. The slot time is adjustable, but no default appears in the standard. ISA100 modifies the medium access control (MAC) sublayer of the data link layer specified by IEEE 802.15.4-2006, while WirelessHART and WIA-PA do not. WIA-PA provides channel hopping among the 16
channels approved for the 2.4 GHz spectrum in China using its own hopping table that is not specified in the standard. The local address conforms to IEEE EUI-64, but there is no IP addressing and no services. The network layer supports the formation of a mesh network similar to WirelessHART, but it is unlike ISA100 since there is no duocast. There is no transport layer. The application layer supports an object model, but with no specific object form. Tunneling is not supported. At this writing, there are no known commercial suppliers of WIA-PA outside of China.
WIA-FA
WIA-FA is a protocol designed for factory automation that is under development in an IEC SC65 standards committee. While this protocol is similar to WIA-PA, it is based on the Wi-Fi physical layer (IEEE 802.11) using only the 2.4 GHz band. It is likely to change from the initial committee draft now that it has been subjected to international ballot and comment (2016).
ZigBee
ZigBee is a specification defined by the ZigBee Alliance consortium. There are several versions of the specification, but all of them use the IEEE 802.15.4-2006 standard radios with 128-bit encryption. Since ZigBee is based on the IEEE 802.15.4-2006 standard, the application must select one of the 16 available channels. ZigBee is widely used in commercial applications but not in industrial applications, except for a few specialized process applications. The following are the specialized forms of ZigBee:
• ZigBee PRO – This is a mesh network optimized for low power consumption and large networks.
• ZigBee mesh networking – This is a simpler protocol for small networks.
• ZigBee RF4CE – This is an interoperable specification intended for simple consumer products needing two-way communications at low cost; there is no meshing.
Note that the meshing specification used for ZigBee and ZigBee PRO is not the same as that used by IEEE 802.15.4e-2011, WirelessHART, or ISA100 Wireless. ZigBee has been very successful in applications such as automatic meter reading for gas, electricity, and water utilities; in early heating, ventilating, and air-conditioning (HVAC) applications; and as a leading candidate for Smart Grid applications. In these applications, the emphasis is on long battery life and low cost.
ZigBee has found several applications in process control, including a system to read valve positions. Many users have trial ZigBee applications in factory automation.
Other Wireless Technologies
Wi-Fi and WiGig
Wi-Fi is the ubiquitous wireless network technology found in homes, offices, and industrial plants. It is informally known as wireless Ethernet since it is generally completely integrated with installed Ethernet networks. Over the years, Wi-Fi has improved from the slow IEEE 802.11b standard, operating at about 11 Mbps data rate, to today's IEEE 802.11ac standard, operating at about 500 Mbps with up to eight bonded channels at 5 GHz. Not only does IEEE 802.11ac have a high throughput, but it uses multiple-input multiple-output (MIMO) to enhance performance by using multipath signals.
Wi-Fi is not well suited to communications with low-powered field instrumentation, at least not until a low-power version is created. However, wireless field networks using IEEE 802.15.4 will often require a high-speed wireless link in the field in order to keep the depth of their mesh shallow and thereby reduce latency. Wi-Fi is configured into at least one supplier's architecture for just this purpose when the field network is used to gather closed-loop process control data. In such an architecture, there are field access points for ISA100 Wireless or WirelessHART, and the Wi-Fi network links those access points to a gateway. IEEE 802.11ac with MIMO has been very successful, since the multipath reflections from process equipment and building steel are used to enhance the transmitted signal.
Many factory automation projects are now using Wi-Fi networks to unite remote I/O units that support EtherNet/IP, Modbus/TCP, PROFINET, PowerLink, EtherCAT, SERCOS III, or CC-Link IE, all of which are specified to use 100/1000 Ethernet. The Wi-Fi becomes simply a part of the physical layer to join remote I/O with the appropriate PLC units. Reliability of Wi-Fi has not been an issue.
The Wi-Fi Alliance is the industry consortium responsible for developing new versions of Wi-Fi and preparing certification tests to validate interoperability. This organization recently combined with the WiGig Alliance to administer a developing technology operating in the 60 GHz ISM band. This technology is expected to find use in the broadband commercial and residential markets. While the high frequency can limit application to short distances, the short wavelength allows the use of small, highly directional, planar and phased-array antennas for point-to-point data links over longer distances.
DASH7
Alternative technologies for low-power radio continue to be explored for use in industrial automation. DASH7 is based on the ISO/IEC 18000-7 standard for data transmission in the 433 MHz ISM band. This band is also used by some longer-range RF tags. The maximum data rate for DASH7 is 200 kbps, only slightly slower than the 250 kbps of IEEE 802.15.4-based networks, although only a 28 kbps net data rate is actually claimed. However, DASH7 has a nominal range of about 1 km, compared with about 100 m for IEEE 802.15.4 radios. DASH7 defines tag-to-tag communications that can be used for field networking or meshing. The other appealing feature of DASH7 is the very low energy consumption necessary for long battery life. Currently, there are no commercial or industrial applications for DASH7.
Global System for Mobile Communications
Global System for Mobile Communications (GSM) is the most popular network for cellular telephony in the world. The GSM technology uses time division multiple access (TDMA) across two different frequency channels, one for each direction of data flow. The channels available for telephony in North America are different from those used elsewhere in the world. GSM is a second-generation (2G) cellular technology, and data transmission over GSM is very slow. GSM telephones use power very efficiently and have remarkably long battery life. One of the applications for GSM modems is data collection from supervisory control and data acquisition (SCADA) system remote terminal units (RTUs).
Code Division Multiple Access
Code division multiple access (CDMA) is a cellular telephony technology used in North America, parts of Japan, and many countries in Asia, South America, the Caribbean, and Central America. CDMA efficiently uses the limited telephony channels by packetizing voice and transmitting only when new data is being sent. CDMA is very conservative in the use of battery energy.
Long Term Evolution
In the effort to use telephone-dedicated channels for high-speed data communications, the leading technology (as of 2016) is Long Term Evolution (LTE). LTE appears to have displaced Worldwide Interoperability for Microwave Access (WiMAX) in the race to deliver multigigabit download speeds for 4G networks. LTE chips are designed to conserve energy in order to achieve long battery life. Currently, no applications for LTE exist in the industrial market other than voice and conventional data use.
Z-Wave
Z-Wave is a low-cost, low-power, low-data-rate wireless network operating in the 900 MHz ISM band. The primary application for which Z-Wave was intended is home automation. This was also one of the intended markets for ZigBee, but it appears that Z-Wave, with its slow data rate, short message length, and frequency-shift keying (FSK) modulation, has the better application future in home automation. While no industrial applications are yet planned for Z-Wave, this low-cost, long-battery-life technology may be applicable to remote discrete I/O connections.
Further Information
Caro, Dick. Wireless Networks for Industrial Automation. 4th ed. Research Triangle Park, NC: ISA (International Society of Automation).
ANSI/ISA-100.11a-2011. Wireless Systems for Industrial Automation: Process Control and Related Applications. Research Triangle Park, NC: ISA (International Society of Automation).
About the Author
Richard H. (Dick) Caro is CEO of CMC Associates, a business strategy and professional services firm in Arlington, Mass. Prior to CMC, he was vice president of the ARC Advisory Group in Dedham, Mass. He is the chairman of ISA SP50 and formerly the convener of the IEC (International Electrotechnical Commission) Fieldbus Standards Committees. Before joining ARC, Caro held the position of senior manager with Arthur D. Little, Inc. in Cambridge, Mass., was a founder of Autech Data Systems, and was director of marketing at ModComp. In the 1970s, The Foxboro Company employed Caro in both development and marketing positions. He holds a BS and MS in chemical engineering and an MBA. He holds the rank of ISA Fellow and is a Certified Automation Professional. In 2005, Caro was named to the Process Automation Hall of Fame. He has published three books on automation networks, including Wireless Networks for Industrial Automation.
1. Currently, FOUNDATION Fieldbus computations require too much energy to be implemented in a battery-operated wireless node.
2. Accessed 27 April 2016.
3. Accessed 27 April 2016.
27 Cybersecurity By Eric C. Cosman
Introduction
What is the current situation with respect to cybersecurity, and what are the trends? Cybersecurity is a popular term for the protection of computer and communications systems from electronic attack. Also referred to as information security, this mature discipline is evolving rapidly to address changing threats.
Although cybersecurity has long been applied to the computers and networks used for basic information processing and business needs, attention has more recently also turned to the protection of industrial systems.1 These systems are a combination of personnel, hardware, and software that can affect or influence the safe, secure, and reliable operation of an industrial process. This shift has resulted in the creation of something of a hybrid discipline, bringing together elements of cybersecurity, process automation, and process safety. This combination is referred to as industrial systems cybersecurity. This is a rapidly evolving field, as evidenced by the increasing focus from a variety of communities, ranging from security researchers to control engineers and policy makers.
This chapter gives a general introduction to the subject, along with references to other sources of more detailed information. To appreciate the nature of the challenge fully, it is first necessary to understand the current situation and trends. This leads to an overview of some of the basic concepts that are the foundation of any cybersecurity program, followed by a discussion of the similarities and differences between securing industrial systems and typical information systems. There are several fundamental concepts that are specific to industrial systems cybersecurity, and some basic steps are necessary for addressing industrial systems cybersecurity in a particular situation.
Current Situation
Industrial systems are typically employed to monitor, report on, and control the operation of a variety of different industrial processes. Quite often, these processes involve a combination of equipment and materials where the consequences of failure range from serious to severe. As a result, the routine operation of these processes consists of managing risk. Risk is generally defined as the combination or product of threat, vulnerability, and consequence.
Increased integration of industrial systems with communication networks and general business systems has made these systems a more attractive target for attack, thus increasing the threat component. Organizations are increasingly sharing information between business and industrial systems, and partners in one business venture may be competitors in another.
External threats are not the only concern. Knowledgeable insiders with malicious intent, or even an innocent unintended act, can pose a serious security risk. Additionally, industrial systems are often integrated with other business systems, and modifying or testing operational systems has led to unintended effects on system operations. Personnel from outside the control systems area increasingly perform security testing on these systems, exacerbating the number and consequence of such effects. Combining all these factors, it is easy to see that the potential for someone gaining unauthorized or damaging access to an industrial process is not trivial.
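The threat-vulnerability-consequence product can be made concrete with a toy scoring exercise. The assets, 1-5 scales, and values below are invented purely for illustration; a real assessment follows the organization's formally adopted methodology:

```python
# Toy risk scoring: risk = threat x vulnerability x consequence,
# each scored 1-5. All assets and values are invented.
assets = {
    "historian server":   {"threat": 3, "vulnerability": 2, "consequence": 2},
    "safety controller":  {"threat": 2, "vulnerability": 2, "consequence": 5},
    "engineering laptop": {"threat": 4, "vulnerability": 4, "consequence": 3},
}

def risk(factors: dict) -> int:
    return factors["threat"] * factors["vulnerability"] * factors["consequence"]

# Rank assets so that mitigation effort goes to the highest risk first.
for name in sorted(assets, key=lambda n: risk(assets[n]), reverse=True):
    print(f"{name:20s} risk = {risk(assets[name])}")
```

The multiplicative form captures the intuition that a high-consequence asset with no plausible threat, or a heavily threatened asset with no exploitable vulnerability, both score lower than an asset weak on all three fronts.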
Even without considering the possibility of deliberate attack, these systems are increasingly vulnerable to becoming collateral damage in the face of a nonspecific attack, such as the release of malicious software (viruses, worms, Trojan horses, etc.). The vulnerability of industrial systems has changed as a result of the increased use of commodity technology, such as operating systems and network components. However, a full understanding of the level of risk is only possible after considering the consequence element. The consequence of failure or compromise of industrial systems has long been well understood by those who operate these processes. Loss of trade secrets and interruption in the flow of information are not the only consequences of a security breach. Industrial systems commonly connect directly to physical equipment so the potential loss of production capacity or product, environmental damage, regulatory violation, compromise to operational safety, or even personal injury are far more serious consequences. These may have ramifications beyond the targeted organization; they may damage the infrastructure of the host location, region, or nation. The identification and analysis of the cyber elements of risk, as well as the determination of the best response, is the focus of a comprehensive cybersecurity
management system (CSMS). A thorough understanding of all three risk components is typically only possible by taking a multidisciplinary approach, drawing on skills and experience in areas ranging from information security to process and control engineering. While integrated with and complementary to programs used to maintain the security of business information systems and the physical assets, the industrial system’s response acknowledges and addresses characteristics and constraints unique to the industrial environment.
Trends
The situation with respect to cybersecurity continues to evolve. Several trends contribute to the increased emphasis on the security of industrial systems, including:
• Increased attention is being paid to the protection of industrial processes, particularly those that are considered to be part of the critical infrastructure.
• Each year, businesses report more unauthorized attempts (intentional or unintentional) to access electronic information than in the previous year.
• New and improved tools have been developed to automate attacks, and these are commonly available on the Internet. The sources of external threat using these tools now include cyber criminals and cyberterrorists, who may have more resources and knowledge with which to attack an industrial system.
• Changing business models in the industrial sector have led to a more complex situation with respect to the number of organizations and groups contributing to the security of industrial systems. These practices must be taken into account when developing security for these systems.
• The focus on unauthorized access has broadened from amateur attackers or disgruntled employees to deliberate criminal or terrorist activities aimed at impacting large groups and facilities.
These and other trends have contributed to an increased level of risk associated with the design and operation of industrial systems. At the same time, the electronic security of industrial systems has become a more significant and widely acknowledged concern. This shift requires more structured guidelines and procedures to define the electronic security applicable to industrial systems, as well as their connectivity to other systems.
General Security Concepts
Do common security concepts and principles also apply to industrial systems security? There has been considerable discussion and debate on the question of whether industrial system cybersecurity is somehow “different” from that of general business systems. A more constructive approach is to start with an overview of some of the general concepts that form the basis of virtually any cybersecurity program, and then build on these concepts by looking at the aspects that differentiate industrial systems from general business systems.
Management System
Regardless of the scope of application, any robust and sustainable cybersecurity program must balance needs and constraints in three broad areas: people, processes, and technology. Each of these areas contributes to the security of systems, and each must be addressed as part of a complete management system, regardless of whether the focus is on information or industrial systems security. People-related weaknesses can diminish the effectiveness of technology. These include a lack of necessary training or relevant experience, as well as insufficient attention paid to inappropriate behavior.
Strong processes can often help to overcome potential vulnerabilities in a security product, while poor implementation can render good technologies ineffective. Finally, technology is necessary to accomplish the desired goals, whether they are related to system functionality or operational performance. Figure 27-1 shows how the three aspects described above come together in the form of a management system that allows a structured and measured approach to establishing, implementing, operating, monitoring, reviewing, maintaining, and improving cybersecurity.
An organization must identify and manage many activities in order to function effectively. Any activity that uses resources and is managed in order to enable the transformation of inputs into outputs can be considered a process. Often the output from one process directly becomes the input to the next process. The application of a system of processes within an organization, together with the identification and interactions of these processes, and their management, can be referred to as a process approach. A process approach encourages its users to emphasize the importance of:
• Understanding an organization's cybersecurity requirements and the need to establish policy and objectives for cybersecurity
• Implementing and operating controls to manage an organization's cybersecurity risks relative to the context of overall business risks
• Monitoring and reviewing the performance and effectiveness of the industrial system's security management system (SMS)
• Making regular improvements based on objective measurements
Program Maturity
Establishing a basis for continual improvement requires that there first be some assessment of program effectiveness. One commonly used method is to apply a maturity model,2 which allows various aspects of the program to be assessed in a qualitative fashion. A mature security program integrates all aspects of cybersecurity, incorporating desktop and business computing systems with industrial automation and control systems. The development of a program must recognize that there are steps and milestones in achieving this maturity. A model such as this may be applied to a wide variety of requirements for the system in
question. It is intended that capabilities will evolve to higher levels over time as proficiency is gained in meeting the requirements.
Table 27-1 illustrates the application of maturity levels to industrial control systems (ICSs), and a comparison to Capability Maturity Model Integration for Services (CMMI-SVC).
Improvement Model
The need for continual improvement can be described in the context of a simple plan-do-check-act (PDCA) model, which is applied to structure all processes. Figure 27-2 illustrates how an industrial automation and control systems security management
system (IACS-SMS)3 takes the security requirements and expectations of the interested parties as input and produces outcomes that meet those requirements and expectations.
Each phase in the above model is described briefly in Table 27-2.
Common Principles
There are several common principles that may be employed as part of virtually any security program, regardless of the nature of the application. Several of these are of particular relevance in the industrial systems context.
• Least privilege – Each user or system module must be able to access only the information and resources that are necessary for its legitimate purpose.
• Defense in depth – Employ multiple techniques to help mitigate the risk of one component of the defense being compromised or circumvented.
• Threat-risk assessment – Assets are subject to risks. These risks are in turn minimized through the use of countermeasures, which are applied to address the vulnerabilities that various threats use or exploit.
Each of these principles will be touched on in more detail in subsequent sections; a small illustration of the first appears below.
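Least privilege can be expressed as a deny-by-default authorization check. The role names and privilege strings in this sketch are invented for illustration:

```python
# Deny-by-default authorization: a role carries only the privileges
# needed for its task (role and privilege names are invented).
ROLE_PRIVILEGES = {
    "operator":   {"read_pv", "ack_alarm", "change_setpoint"},
    "engineer":   {"read_pv", "ack_alarm", "change_setpoint",
                   "download_logic", "change_config"},
    "contractor": {"read_pv"},
}

def authorize(role: str, action: str) -> bool:
    """Permit only explicitly granted privileges; deny everything else."""
    return action in ROLE_PRIVILEGES.get(role, set())

assert authorize("operator", "ack_alarm")
assert not authorize("contractor", "download_logic")  # least privilege
```

The important design choice is that an unknown role or unlisted action falls through to denial; granting by exception, rather than revoking by exception, is what keeps the privilege set minimal.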
Industrial Systems Security
What makes ICS security different from “normal” security? With a solid foundation composed of the general concepts of information security, it is possible to move on to additional concepts that are in fact somewhat different in the context of industrial systems. A solid understanding of these concepts and their implications is critical to the development of a comprehensive, effective, and sustainable cybersecurity response in this environment.
Safety and Security
A good place to start is with the important link between cybersecurity and process safety. The fact that many industrial systems are connected to physical equipment makes the treatment of these systems different. To varying degrees, and depending on the nature of the physical environment, failure or compromise of this equipment can have serious consequences, ranging from adverse environmental impact to injury or death. It is for this reason that the overriding objective for these industrial systems is to ensure that the underlying physical process operates safely. Ineffective or nonexistent cybersecurity presents a potential means by which this objective can be compromised.
Security Life Cycle
In order to be effective over the long term, the security program applied to an industrial system must consider all phases of that system's life cycle. This perspective is particularly important in this context because of the often long operational life of industrial systems and processes. All significant decisions must be made with a long-term perspective, given that the underlying system may be in place for decades. The security-level life cycle is focused on the security level of a portion of the industrial system over time. It should not be confused with the life-cycle phases of the actual physical assets comprising the industrial system. Although there are many overlapping and complementary activities associated with the asset life cycle and the security-level life cycle, they each have different trigger points to move from one phase to another. A change to a physical asset may trigger a set of security-level activities or a change in
security vulnerabilities, but it is also possible that changes to the threat environment could result in changes to the configuration of one or more asset components.
There are several views of the security life cycle. One of these is illustrated in Figure 27-3.
Reference Model
With an understanding of the importance of security to safety, the next step is to consider the nature of the system to be secured. In most cases, this begins with the selection or development of a reference model that can be used to represent the basic system functionality in generic terms. This approach is well established in the
development of technology standards and practices. One model that addresses the industrial systems domain has been derived from earlier models appearing in related industry standards, which are in turn based on the Purdue Reference Model.4 This model is shown in Figure 27-4 below.
The primary focus of industrial systems security is on the lower three levels of this model. While the security of systems at Level 4 is typically well addressed by a general business security program, the nature of systems at Levels 1 through 3 means that they require specific attention.
System Definition
The reference model shown in Figure 27-4 provides the context or backdrop for defining the specific boundaries of the security system. This can be a complex activity because these boundaries can be described in a variety of terms, and the results are often not consistent. Perspectives that must be considered include:
• Functionality included – The scope of the security system can be described in terms of the range of functionality within an organization's information and automation systems. This functionality is typically described in terms of one or
more models. Industrial automation and control includes the supervisory control components typically found in the process industries, as well as the supervisory control and data acquisition (SCADA) systems that are commonly found in other critical and noncritical infrastructure industries.
• Systems and interfaces – It is also possible to describe the scope in terms of connectivity to associated systems. The range of industrial systems includes those that can affect or influence the safe, secure, and reliable operation of industrial processes. They include, but are not limited to:
○ Industrial systems and their associated communications networks, including distributed control systems (DCSs); programmable logic controllers (PLCs); remote terminal units (RTUs); intelligent electronic devices; SCADA systems; networked electronic sensing and control, metering, and custody transfer systems; and monitoring and diagnostic systems. In this context, industrial systems include basic process control system and safety instrumented system (SIS) functions, whether they are physically separate or integrated.
○ Associated systems at Level 3 or below of the reference model. Examples include advanced or multivariable control, online optimizers, dedicated equipment monitors, graphical interfaces, process historians, manufacturing execution systems, pipeline leak detection systems, work management, outage management, and energy management systems.
○ Associated internal, human, network, software, machine, or device interfaces used to provide control, safety, manufacturing, or remote operations functionality to continuous, batch, discrete, and other processes.
• Activity-based criteria – The ANSI/ISA-95.00.035 standard defines a set of criteria for defining activities associated with manufacturing operations. A similar list has been developed for determining the scope of industrial systems security. A system should be considered to be within this scope if the activity it performs is necessary for any of the following:
○ Predictable operation of the process
○ Process or personnel safety
○ Process reliability or availability
○ Process efficiency
○ Process operability
○ Product quality
○ Environmental protection
○ Compliance with relevant regulations
○ Product sales or custody transfer affecting or influencing industrial processes
• Asset-based criteria – Industrial systems security may include those systems in assets that meet any of several criteria, or whose security is essential to the protection of other assets that meet these criteria. Such assets may:
○ Be necessary to maintain the economic value of a manufacturing or operating process
○ Perform a function necessary to operation of a manufacturing or operating process
○ Represent intellectual property of a manufacturing or operating process
○ Be necessary to operate and maintain security for a manufacturing or operating process
○ Be necessary to protect personnel, contractors, and visitors involved in a manufacturing or operating process
○ Be necessary to protect the environment
○ Be necessary to protect the public from events caused by a manufacturing or operating process
○ Fulfill a legal requirement, especially for security purposes of a manufacturing or operating process
○ Be needed for disaster recovery
○ Be needed for logging security events
This range of coverage includes systems whose compromise could result in the endangerment of public or employee health or safety; loss of public confidence; violation of regulatory requirements; loss or invalidation of proprietary or confidential information; environmental contamination; economic loss; or impact on an entity or on local or national security.
• Consequence-based criteria – During all phases of the system's life cycle, cybersecurity risk assessments must include a determination of what could go wrong to disrupt operations, where this could occur, the likelihood that a cyber attack could initiate such a disruption, and the consequences that could result. The output from this determination will include sufficient information to help in
the identification and selection of relevant security properties.
Security Zones
For all but the simplest of situations, it is impractical or even impossible to consider an entire industrial system as having a single common set of security requirements and performance levels. Differences can be addressed by using the concept of a security “zone,” or an area under protection. A security zone is a logical or physical grouping of physical, informational, and application assets sharing common security requirements. Some systems are included in the security zone and all others are outside the zone. There can also be zones within zones, or subzones, that provide layered security, giving defense in depth and addressing multiple levels of security requirements. Defense in depth can also be accomplished by assigning different properties to security zones. A security zone has a border, which is the boundary between included and excluded components. The concept of a zone implies the need to access the assets in a zone from both within and without. This defines the communication and access required to allow information and people to move within and between the security zones. Zones may be considered as trusted or untrusted.
Security zones can be defined in either a physical sense (i.e., a physical zone) or in a logical manner (i.e., a virtual zone). Physical zones are defined by grouping assets by physical location. In this type of zone, it is easy to determine which assets are within each zone. Virtual zones are defined by grouping assets, or parts of physical assets, into security zones based on functionality or other characteristics, rather than on the actual location of the assets.
When defining a security zone, the first step is to assess the security requirements or goals in order to determine whether a particular asset should be considered within the zone or outside the zone. The security requirements can be broken down into the following types:
• Communications access – For a group of assets within a security border, there is also typically access to assets outside the security zone. This access can be in many forms, including physical movement of assets (products) and people (employees and vendors) or electronic communication with entities outside the security zone. Remote communication is the transfer of information to and from entities that are not in proximity to each other. Remote access is defined as communication with assets that are outside the perimeter of the security zone being addressed. Local access is usually considered communication between assets within a single security zone.
• Physical access and proximity – Physical security zones are used to limit access to a particular area because all the systems in that area require the same level of trust of their human operators, maintainers, and developers. This does not preclude having a higher-level physical security zone embedded within a lower-level physical security zone, or a higher-level communication access zone within a lower-level physical security zone. For physical zones, locks on doors or other physical means protect against unauthorized access. The boundary is the wall or cabinet that restricts access. Physical zones should have physical boundaries commensurate with the level of security desired and aligned with other asset security plans. One example of a physical security zone is a typical manufacturing plant. Authorized people are allowed into the plant by an authorizing agent (e.g., a security guard or an ID), and unauthorized people are restricted from entering by the same authorizing agent or by physical barriers.
Assets that are within the security border are those that must be protected to a given security level, or to adhere to a specific policy. All devices that are within the border must share the same minimum security requirements. In other words, they must be protected to meet the same security policy. Protection mechanisms can differ depending on the asset being protected. Assets that are outside the security zone are, by definition, at a different security level. They are not protected to the same security level and cannot be trusted to the same security level or policy.
Security Conduits
Information must flow into, out of, and within a security zone. Even in a non-networked system, some communication exists (e.g., the intermittent connection of programming devices to create and maintain the systems). This is accomplished using a special type of security zone: a communications conduit. A conduit is a type of security zone that groups communications that can be logically organized into a grouping of information flows within and external to a zone. It can be a single service (e.g., a single Ethernet network) or it can be made up of multiple data carriers (e.g., multiple network cables and direct physical accesses). As with zones, it can be made of both physical and logical constructs. Conduits may connect entities within a zone or may connect different zones.
As with zones, conduits may be either trusted or untrusted. Conduits that do not cross zone boundaries are typically trusted by the communicating processes within the zone. Trusted conduits crossing zone boundaries must use an end-to-end secure process. Untrusted conduits are those that are not at the same level of security as the zone endpoint. In this case, the security of the communication becomes the responsibility of the individual channel.
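These ideas can be captured in a minimal data model, with invented zone names, security levels, and assets. The rule in needs_endpoint_security paraphrases the paragraph above: an untrusted conduit, or one joining zones at different security levels, leaves protection to the individual channels:

```python
from dataclasses import dataclass, field

@dataclass
class Zone:
    name: str
    security_level: int            # hypothetical target level, 1-4
    assets: set = field(default_factory=set)

@dataclass
class Conduit:
    name: str
    endpoints: tuple               # (zone_a, zone_b)
    trusted: bool

control = Zone("control network", security_level=3, assets={"DCS", "PLC-7"})
business = Zone("business network", security_level=1, assets={"ERP"})
link = Conduit("historian link", endpoints=(control, business), trusted=False)

def needs_endpoint_security(c: Conduit) -> bool:
    """Untrusted conduits, or conduits spanning zones with different
    security levels, leave protection to the individual channels."""
    a, b = c.endpoints
    return (not c.trusted) or a.security_level != b.security_level

print(needs_endpoint_security(link))  # -> True
```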
Figures 27-5 and 27-6 depict examples of zone and conduit definitions for the process and manufacturing environments, respectively.
Foundational Requirements
The ISA-62443 and IEC 62443 series of standards describe a small set of foundational requirements (FRs) that encompass and, at times, help organize more specific detailed requirements, as well as the actions required for a security program. These requirements are: • FR 1. Identification and Authentication Control (IAC) – Based on the target security level, the industrial control system (ICS) shall provide the necessary capabilities to reliably identify and authenticate all users (i.e., humans, software processes, and devices) attempting to access it. Asset owners will have to develop a list of all valid users (i.e., humans, software processes, and devices) and to determine the required level of IAC protection for each zone. The goal of IAC is to protect the ICS from unauthenticated access by verifying the identity of any user requesting access to the ICS before activating the communication. Recommendations and guidelines should include mechanisms that will operate in mixed modes. For example, some zones and individual ICS components require strong IAC, such as authentication mechanisms, and others do not. • FR 2. Use Control (UC) – Based on the target security level, the ICS shall provide the necessary capabilities to enforce the assigned privileges of an
authenticated user (i.e., human, software process, or device) to perform the requested action on the system or assets and to monitor the use of these privileges. Once the user is identified and authenticated, the control system has to restrict the allowed actions to the authorized use of the control system. Asset owners and system integrators will have to assign each user (i.e., human, software process, or device) to a group or role with privileges defining the authorized use of the industrial control system. The goal of UC is to protect against unauthorized actions on ICS resources by verifying that the necessary privileges have been granted before allowing a user to perform the actions. Examples of actions are reading or writing data, downloading programs, and setting configurations. Recommendations and guidelines should include mechanisms that will operate in mixed modes; for example, some ICS resources require strong use control protection, such as restrictive privileges, and others do not. Use control requirements also extend to data at rest. User privileges may vary based on time of day/date, location, and the means by which access is made.
• FR 3. System Integrity (SI) – Based on the target security level, the ICS shall provide the necessary capabilities to ensure integrity and prevent unauthorized manipulation. Industrial control systems will often go through multiple testing cycles (unit testing, factory acceptance testing [FAT], site acceptance testing [SAT], certification, commissioning, etc.) to establish that the systems will perform as intended even before they begin production. Once operational, asset owners are responsible for maintaining the integrity of the industrial control systems. Using their risk assessment methodology, asset owners may assign different levels of integrity protection to different systems, communication channels, and information in their industrial control systems. The integrity of physical assets should be maintained in both operational and non-operational states, such as during production, when in storage, or during a maintenance shutdown. The integrity of logical assets should be maintained while in transit and at rest, such as being transmitted over a network or when residing in a data repository. • FR 4. Data Confidentiality (DC) – Based on the target security level, the ICS shall provide the necessary capabilities to ensure the confidentiality of information on communication channels and in data repositories to prevent unauthorized disclosure. Some control system-generated information, whether at rest or in transit, is of a confidential or sensitive nature. This implies that some communication channels
and data-stores require protection against eavesdropping and unauthorized access. • FR 5. Restricted Data Flow (RDF) – Based on the target security level, the ICS shall provide the necessary capabilities to segment the control system via zones and conduits to limit the unnecessary flow of data. Using their risk assessment methodology, asset owners need to identify the necessary information flow restrictions and thus, by extension, determine the configuration of the conduits used to deliver this information. Derived prescriptive recommendations and guidelines should include mechanisms that range from disconnecting control system networks from business or public networks to using unidirectional gateways, stateful firewalls, and demilitarized zones (DMZs) to manage the flow of information. • FR 6. Timely Response to Events (TRE) – Based on the target security level, the ICS shall provide the necessary capabilities to respond to security violations by notifying the proper authority, reporting needed evidence of the violation, and taking timely corrective action when incidents are discovered.
Using their risk assessment methodology, asset owners should establish the security policies and procedures, and the proper lines of communication and control, needed to respond to security violations. Derived prescriptive recommendations and guidelines should include mechanisms that collect, report, preserve, and automatically correlate forensic evidence to ensure timely corrective action. The use of monitoring tools and techniques should not adversely affect the operational performance of the control system. • FR 7. Resource Availability (RA) – Based on the target security level, the ICS shall provide the necessary capabilities to ensure the availability of the control system against the degradation or denial of essential services. The objective is to ensure that the control system is resilient against various types of denial-of-service events. This includes the partial or total unavailability of system functionality at various levels. In particular, security incidents in the control system should not affect safety instrumented systems (SISs) or other safety-related functions.
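The ISA-62443 series expresses the security posture of a zone as a vector of security levels, one value per foundational requirement. As a rough sketch (the values are hypothetical, not from any real assessment; the FR abbreviations are those listed above):

```python
# Hypothetical target security levels (SL-T) for one zone, one value per
# foundational requirement; SL values run from 0 (no requirement) through 4.
FRS = ("IAC", "UC", "SI", "DC", "RDF", "TRE", "RA")

sl_target = dict(zip(FRS, (2, 2, 3, 1, 3, 2, 3)))
print(sl_target["RDF"])  # 3 -> strong segmentation required for this zone
```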
Security Levels
Safety systems have used the concept of safety integrity levels (SILs) for almost two decades. This allows the safety integrity capability of a component, or the SIL of a deployed system, to be represented by a single number that defines the protection factor required to ensure the health and safety of people or the environment, based on the probability of failure for that component or system. The process to determine the required protection factor for a safety system, while complex, is manageable because the probability of a component or system failure due to random hardware failures can be measured in quantitative terms. The overall risk can be calculated based on the consequences that those failures could potentially have on health, safety, and the environment (HSE). Security systems have much broader applications, a much broader set of consequences, and a much broader set of possible circumstances leading up to a possible event. Security systems protect HSE, but they are also meant to protect the process itself, company-proprietary information, public confidence, and national security, among other things, in situations where random hardware failures may not be, or simply are not, the cause. In some cases, it may be a well-meaning employee who makes a mistake; in other cases, it may be a devious attacker bent on causing an event and hiding the evidence. The increased complexity of security systems makes compressing the protection factor down to a single number much more difficult. Security levels provide a qualitative approach to addressing security for a zone. As a qualitative method, security level definition has applicability for comparing and managing the security of zones within an organization. It is applicable to both end-user companies and vendors of industrial systems and security products. It may be used to select industrial system devices and countermeasures for use within a zone, and to identify and compare the security of zones in different organizations across industry segments.
Security levels have been broken down into three different types:
1. A target security level is the desired level of security for a particular system. This is usually determined by performing a risk assessment on a system and determining that it needs a particular level of security to ensure its correct operation.
2. An achieved security level is the actual level of security for a particular system. Achieved security levels are measured after a system design is available or when a system is in place. They are used to establish that a security system meets the goals that were originally set out in the target security levels.
3. Capability security levels are the security levels that components or systems can provide when properly configured. These levels state that a particular system or component is capable of meeting the target security levels without additional compensating controls, when properly configured and integrated.
While related, these three types correspond to different aspects and phases of the security life cycle. The design team would first develop the target security level necessary for a particular system. They would then design the system to meet that target, usually in an iterative process in which the achieved security levels of the proposed design are measured and compared to the target security levels after each iteration. As part of that design process, the designers would select systems and components with the necessary capability security levels to meet the target security-level requirements or, where such systems and components are not available, complement the available ones with compensating security controls. After the system goes into operation, the actual security levels are measured as the achieved security levels and compared to the target levels. Four different security levels have been defined (1, 2, 3, and 4), each with an increasing level of security. Every security level defines security requirements or achievements for systems and products, but there is no requirement that they all be applied. The language used for each of the security levels includes terms like casual, coincidental, simple, sophisticated, and extended. The following sections provide some guidance on how to differentiate between the security levels.
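As a minimal sketch of how these three types interact during design (hypothetical names and values, not a prescribed method from the standard), the gap analysis might compare what the selected components can provide against the zone targets:

```python
# Hypothetical values: compare component capability (SL-C) against the
# zone targets (SL-T) to find where compensating controls are needed.
# FR abbreviations follow the foundational requirements listed earlier.
FRS = ("IAC", "UC", "SI", "DC", "RDF", "TRE", "RA")

sl_target     = (2, 2, 3, 1, 3, 2, 3)
sl_capability = (2, 3, 3, 2, 2, 2, 3)

def gaps(target, capability):
    """Return the FRs where capability falls short of the target."""
    return [fr for fr, t, c in zip(FRS, target, capability) if c < t]

print(gaps(sl_target, sl_capability))  # ['RDF'] -> e.g., add a firewall or DMZ
```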
Security Level 1: Protection Against Casual or Coincidental Violation
Casual or coincidental violations of security usually occur through the lax application of security policies. These can be caused by well-meaning employees just as easily as by an outside threat. Many of these violations will be security program-related and will be handled by enforcing policies and procedures. A simple example would be an operator who is able to change a set point on the engineering station in the control system zone to a value outside the limits determined by the engineering staff. The system did not enforce the proper authentication and use control restrictions to disallow the change by the operator. Another example would be a password being sent in clear text over the conduit between the control system zone and the plant network, allowing a network engineer to view the password while troubleshooting the system. The system did not enforce proper data confidentiality to protect the password. A third example would be an engineer who means to access the PLC in Industrial Network #1 but actually accesses the PLC in Industrial Network #2. The system did not enforce the proper restriction of data flow to prevent the engineer from accessing the wrong system.
Security Level 2: Protection Against Intentional Violation Using Simple Means with Low Resources, Generic Skills, and Low Motivation
Simple means do not require much knowledge on the part of the attacker. The attacker does not need detailed knowledge of security, the domain, or the particular system under attack. The attack vectors are well known and there may be automated tools for aiding the attacker. Such tools are also designed to attack a wide range of systems instead of targeting a specific system, so an attacker does not need a significant level of motivation or resources at hand. An example would be a virus that infects the email server and spreads to the engineering workstations in the plant network because the server and workstations both use the same general-purpose operating system. Another example would be an attacker who downloads an exploit for a publicly known vulnerability from the Internet and then uses it to compromise a web server in the enterprise network. The attacker then uses the web server as a pivot point in an attack against other systems in the enterprise network, as well as the industrial network. A third example would be an operator who views a website on the human-machine interface (HMI) located in Industrial Network #1 which downloads a Trojan that opens a hole in the routers and firewalls to the Internet.
Security Level 3: Protection Against Intentional Violation Using Sophisticated Means with Moderate Resources, System-Specific Skills, and Moderate Motivation
Sophisticated means require advanced security knowledge, advanced domain knowledge, advanced knowledge of the target system, or any combination of these. An attacker going after a Security Level 3 system will likely be using attack vectors that have been customized for the specific target system. The attacker may use exploits in operating systems that are not well known, weaknesses in industrial protocols, specific information about a particular target to violate the security of the system, or other means that require greater motivation, skill, and knowledge than are required for Security Level 1 or 2. An example of sophisticated means could be password- or key-cracking tools based on hash tables. These tools are available for download, but applying them takes knowledge of the system (such as the hash of a password to crack). Another example would be an attacker who gains access to the safety PLC through the Modbus conduit after gaining access to the control PLC through a vulnerability in the Ethernet controller. A third example would be an attacker who gains access to the data historian by using a brute-force attack through the industrial or enterprise DMZ firewall, initiated from the enterprise wireless network.
Security Level 4: Protection Against Intentional Violation Using
Sophisticated Means with Extended Resources, System-Specific Skills, and High Motivation
Security Level 3 and Security Level 4 are very similar in that they both involve using sophisticated means to violate the security requirements of the system. The difference comes from the attacker being even more motivated and having extended resources at their disposal. These may involve high-performance computing resources, large numbers of computers, or extended periods of time. An example of sophisticated means with extended resources would be using supercomputers or computer clusters to conduct brute-force password cracking using large hash tables. Another example would be a botnet used to attack a system employing multiple attack vectors at once. A third example would be an organized crime group that has the motivation and resources to spend weeks attempting to analyze a system and develop custom "zero-day" exploits.
Standards and Practices
"Help is on the way."
As the hybrid discipline of industrial automation and control systems security evolves, so do the standards and practices related to its application. International standards are the joint responsibility of the International Society of Automation (ISA) and the International Electrotechnical Commission (IEC). Several standards are available as part of the ISA-62443 or IEC 62443 series, with more under development. In addition, the ISA Security Compliance Institute6 manages the ISASecure™ program, which recognizes and promotes cyber-secure products and practices for industrial automation suppliers and operational sites. As the standards and certifications evolve and gain acceptance, practical guidance and assistance will become increasingly available. Such assistance is typically available through trade associations, industry groups, and private consultants.
Further Information
Byres, Eric, and John Cusimano. Seven Steps to ICS and SCADA Security. Tofino Security and exida Consulting LLC, 2012.
Krutz, Ronald L. Securing SCADA Systems. Indianapolis, IN: Wiley Publishing, Inc., 2006.
Langner, Ralph. Robust Control System Networks. New York: Momentum Press, 2012.
Macaulay, Tyson, and Bryan Singer. Cybersecurity for Industrial Control Systems. Boca Raton, FL: CRC Press, 2012.
U.S. Department of Energy. Twenty-One Steps to Improve Cybersecurity of SCADA Networks. http://energy.gov/oe/downloads/21-steps-improve-cyber-security-scada-networks.
Weiss, Joseph. Protecting Industrial Control Systems from Electronic Threats. New York: Momentum Press, 2010.
About the Author
Eric C. Cosman is a former operations IT consultant with The Dow Chemical Company. In that role, his responsibilities included system architecture definition and design, technology management, and integration planning for manufacturing systems. He has presented and published papers on various topics related to the management and development of information systems for process manufacturing. Cosman contributes to various standards committees, industry focus groups, and advisory panels. He has been a contributor to the work of the ISA95 committee, served as the co-chairman of the ISA99 committee on industrial automation systems security, and served as the vice president for standards and practices at ISA. Cosman sponsored a chemical sector cybersecurity program team that focused on industrial control systems cybersecurity, and he was one of the authors of the Chemical Sector Cybersecurity Strategy for the United States.
1. Many terms are used to describe these systems. The ISA-62443 series of standards uses the more formal and expansive term industrial automation and control systems (IACS).
2. One such model is summarized in Table 27-1. The maturity levels are based on the CMMI-SVC model, defined in CMMI® for Services, Version 1.3, November 2010 (CMU/SEI-2010-TR-034, ESC-TR-2010-034).
3. IACS-SMS is a term used in the ISA-62443 series of standards.
4. http://www.pera.net/
5. http://en.wikipedia.org/wiki/ANSI/ISA-95
6. http://www.isasecure.org/
IX
Maintenance
Maintenance Principles
Maintenance, long-term support, and system management take a lot of work to do well. The difference in cost and effectiveness between a good maintenance operation and a poor one is easily a factor of two and may be much more. Automation professionals must understand this area so that their designs can effectively deal with life-cycle cost.
Troubleshooting Techniques
Automation professionals who only work on engineering projects in the office and leave the field work to others may not realize the tremendous amount of work required to get a system operating. Construction staff and plant technicians are doing more and more of the checkout, system testing, and start-up work today, which makes it more important that automation professionals understand these topics.
Asset Management
Asset management systems are processing and enabling information systems that support managing an organization's assets, both physical (tangible) assets and nonphysical (intangible) assets. Asset management is a systematic process of cost-effectively developing, operating, maintaining, upgrading, and disposing of assets. Due to the number of elements involved, asset management is data and information intensive. Using all the information available from various assets will improve asset utilization at a lower total cost, which is the goal of asset management programs.
28
Maintenance, Long-Term Support, and System Management
By Joseph D. Patton, Jr.
Maintenance Is Big Business
Maintenance is a challenging mix of art and science, where both economics and emotions have roles. Please note that serviceability and supportability parallel maintainability, and maintenance and service are similar for our purposes. Maintainability (i.e., serviceability or supportability) is the discipline of designing and producing equipment so it can be maintained. Maintenance and service include performing all actions necessary to restore durable equipment to, or keep it in, specified operating condition. There are several forces changing the maintenance business. One is the technological change of electronics and optics doing what once required physical mechanics. Computers are guiding activities and interrelationships between processes instead of humans turning dials and pulling levers. Remote diagnostics using the Internet reduce the number of site visits and help improve the probability of the right technician coming with the right part. Many repairs can be handled by the equipment operator or local personnel. Robots are performing many tasks that once required humans. Many parts, such as electronic circuit boards, cannot be easily repaired and must be replaced in the field and sent out for possible repair and recycling. Fast delivery of needed parts and reverse logistics are being emphasized to reduce inventories, reuse items, reduce environmental impact, and save costs. Life-cycle costs and profits are being analyzed to consider production effects, improve system availability, reduce maintenance, repair, and operating (MRO) costs, and improve overall costs and profits. Change is continuous! Organizations that design, produce, and support their own equipment, often on lease, have a vested interest in good maintainability. On the other hand, many companies, especially those with sophisticated high-technology products, have either gone bankrupt or sold out to a larger corporation when they became unable to maintain their creations.
Then, of course, there are many organizations such as automobile service centers, computer repair shops, and many factory maintenance departments that have little, if any, say in the design of equipment they will later be called on to support. While the power of these affected organizations is somewhat limited by their inability to do more than refuse to carry or use the product line, their complaints generally result in at least modifications and improvements to the next generation of products. Maintenance is big business. Gartner estimates hardware maintenance and support is $120 billion per year and growing 5.36% annually. The Northwestern University Chemical Process Design Open Textbook places maintenance costs at 6% of fixed capital investment. U.S. Bancorp estimates that spending on spare parts costs $700 billion in the United States alone, which is 8% of the gross domestic product.
Service Technicians
Typically, maintenance people once had extensive experience with fixing things and were oriented toward repair instead of preventive maintenance. In the past, many technicians were not accustomed to using external information to guide their work. Maintenance mechanics or technicians often focused on specific equipment, usually at a single facility, which limited the broader perspective developed from working with similar situations at many other installations. Today, service technicians are also called field engineers (FEs), customer engineers (CEs), customer service engineers (CSEs), customer service representatives (CSRs), and similar titles. This document will use the terms "technicians" or "techs." In a sense, service technicians must "fix" both equipment and customer employees. There are many situations today where technicians can solve problems over the telephone by having a cooperative customer download a software patch or perform an adjustment. The major shift today is toward remote diagnostics and self-repairs via Internet software fixes, YouTube guidance of procedures, suppliers' websites, and call centers to guide the end user or technician. Service can be used both to protect and to promote. Protective service ensures that equipment and all company assets are well maintained and give the best performance of which they are capable. Protective maintenance goals for a technician may include the following:
• Install equipment properly
• Teach the customer how to use the equipment capability effectively
• Provide functions that customers are unable to supply themselves
• Maintain quality on installed equipment
• Gain experience on servicing needs
• Investigate customer problems and rapidly solve them to the customer's satisfaction
• Preserve the end value of the product and extend its useful life
• Observe competitive activity
• Gain technical feedback to correct problems
Service techs are becoming company representatives who emphasize customer contact skills instead of being solely technical experts. In addition, the business of maintenance service is becoming much more dependent on having the correct part. A concurrent trend is customer demand and service level agreements (SLAs) that require fast restoration of equipment to good operating condition. This is especially true with computer servers, communications equipment, medical scanners, sophisticated manufacturing devices, and similar equipment that affects many people or even threatens lives when it fails. Accurate and timely data reporting and analysis is important. When focusing on completing a given job, most technicians prefer to take a part right away, get equipment up and running, and enter the related data later. Fortunately, cell phones, hand-held devices, bar code readers and portable computers with friendly application software facilitate reporting. Returning defective or excess parts may be a lower priority, and techs may cache personal supplies of parts if company supply is unreliable. However, there are many situations where on-site, real-time data entry and validation can be shown to gain accurate information for future improvement. As a result, a challenge of maintenance management is to develop technology that stimulates and supports maintenance realities.
Big Picture View
Enterprise asset management (EAM) is a buzzword for the big picture. There are good software applications available to help manage MRO activities. However, most data are concentrated on a single facility and even to single points in time, rather than covering the life cycle of equipment and facilities. As Figure 28-1 illustrates, the initial cost of equipment is probably far exceeded by the cost to keep it operating and productive over its life cycle. Many maintenance events occur so infrequently in a facility that years must pass before enough data is available to determine trends and, by then, the
equipment is probably obsolete or at least changed. Looking at a larger group of facilities and equipment leads to more data points and more rapid detection of trends and formation of solutions.
Interfacing computerized information on failure rates and repair histories with human resources (HR) information on technician skill levels and availability, pending engineering changes, procurement parts availability, production schedules, and financial impacts can greatly improve guidance to maintenance operations. Then, if we can involve all plants of a corporation, or even all similar products used by other companies, the populations become large enough to provide effective, timely information. Optimizing the three major maintenance components (people, parts, and information, shown in Figure 28-2) is essential to achieving effective, timely information.
Historically, the two main maintenance costs have been labor and materials (people and parts). Labor costs are increasing. This means organizations must give priority to reducing the frequency, time, and skill level of maintenance, and thereby the cost of labor. The costs of parts are also increasing. A specific capability probably costs less, but integrating multiple capabilities into a single part brings higher costs and a more critical need for each replaceable part. A third leg is becoming important to product development and support: information, as generally provided by software on computer and communications systems. Digital electronic and optical technologies are measurably increasing equipment capabilities while reducing both costs and failure rates. Achieving that reduction is vital. Results are seen in the fact that a service technician, who a few years ago could support about 100 personal computers, can now support several thousand. Major gains can be made in relating economic improvements to maintainability efforts. Data gathered by Patton Consultants shows a payoff of 50:1; that is, a benefit of $50 in prevention value for each $1 invested in maintainability.
No Need Is Best
Everything will fail sometime—electrical, electronic, hydraulic, mechanical, nuclear, optical, and especially biological systems. People spend considerable effort, money, and time trying to fix things faster. However, the best answer is to avoid having to make a repair at all. To quote Ben
Franklin, “An ounce of prevention is worth a pound of cure.” The failure-free item that never wears out has yet to be produced. Perhaps someday it will be, but meanwhile, we must replace burned-out light bulbs, repair punctured car tires, overhaul jet engines, and correct elusive electronic discrepancies in computers. A desirable long-range life-cycle objective is to achieve very low equipment failure rates and require replacement of only consumables and the parts that wear during extended use, which can be replaced on a condition-monitored predictive basis. Reliability (R) and maintainability (M) interact to form availability (A), which may be defined as the probability that equipment will be in operating condition at any point in time. Three main types of availability are: inherent availability, achieved availability, and operational availability. Service management is not particularly interested in inherent availability, which assumes an ideal support environment without any preventive maintenance, logistics, or administrative downtime. In other words, inherent availability is the pure laboratory availability as viewed by design engineering. Achieved availability also assumes an ideal support environment with everything available. Operational availability is what counts in the maintenance tech’s mind, since it considers a “real-world” operating environment. The most important parameter in determining availability is failure rate, as a product needs corrective action only if it fails. The main service objective for reliability is mean time between failure (MTBF), with “time” stated in the units most meaningful for the product. Those units could include:
• Time – Hours, days, weeks, etc.
• Distance – Miles, kilometers, knots, etc.
• Events – Cycles, gallons, impressions, landings, etc.
It is important to realize equipment failures caused by customer use should be anticipated in the design. Coffee spilling in a keyboard, a necklace dropping into a printer, and panicked pushing of buttons by frustrated users add more calls for help. Operating concerns by inexperienced users often result in more than half the calls to a service organization. What the customer perceives as a failure may vary from technical definitions, but customer concerns must still be addressed by the business. For operations where downtime is not critical, the need for a highly responsive maintenance organization is not critical. However, for manufacturing operations where the process performance is directly related to the performance of the automation systems, or any other part of the process, downtime can be directly related to the revenue-generation potential of the plant. Under these conditions, response time
represents revenue to the plant itself. Thus, plants that would have revenue generation capacity of $10,000 worth of product per hour, operating in a 24-hour day, 7-day week environment, would be losing approximately $240,000 of revenue for every day that the plant is shut down. A 24-hour response time for plants of this type would be completely unsatisfactory. On the other hand, if a manufacturing plant that operates on a batch basis has no immediate need to complete the batch because of the scheduling of other products, then a 24-hour response time may be acceptable. A typical automobile, for example, gives more utility at lower relative cost than did cars of even a few years ago; however, it must still be maintained. Cars once required frequent spark plug changes and carburetor adjustments, but fuel injection has replaced carburetion. A simple injector cleaning eliminates the several floats, valves, and gaskets of older carburetors— with fewer failures and superior performance. Computer-related failures that used to occur weekly are now reduced to units of years.
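For reference, the availability measures defined above are conventionally quantified as follows (these are the standard reliability-engineering forms, stated here for convenience rather than taken from this chapter):

$$A_{\text{inherent}} = \frac{\text{MTBF}}{\text{MTBF} + \text{MTTR}} \qquad A_{\text{operational}} = \frac{\text{uptime}}{\text{uptime} + \text{total downtime}}$$

where MTTR is the mean time to repair, and operational availability charges all downtime, including preventive maintenance, logistics delay, and administrative delay, against the equipment.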
Service level agreements (SLAs) increasingly require that equipment be restored to good operation the same day service is requested, and often specify 4 hours, 2 hours, or even faster repair. Essential equipment may cause great hardship, physically and financially, if it is down for long periods of time. For example, a production line of an integrated circuit fabrication facility can lose $100,000 per hour of shutdown. A Tenet Healthcare hospital reports that a magnetic resonance imaging (MRI) scanner that cannot operate costs $4,000 per hour in lost revenue, and even more if human life is at risk. Failure of the central computer of a metropolitan telephone system can cause an entire city to grind to a stop until it is fixed. Fortunately, reliability and the availability (uptime) of equipment are improving, which means there are fewer failures. However, when failures do occur, the support solutions are often complex.
Evolution of Maintenance
Maintenance technology has also been rapidly changing during recent years. The idea that fixed-interval preventive maintenance is right for all equipment has given way to the reliability-based methods of on-condition and condition monitoring. The process of maintenance is illustrated in Figure 28-3.
Many defective parts are now discarded rather than being maintained at organizational or even intermediate levels. The multilevel system of maintenance is evolving into a simplified system of more user participation, local first- and second-level maintenance, and backup direct from a third-party or original equipment manufacturer (OEM) service organization. Expert systems and artificial intelligence are being developed to help diagnostics and to predict the need for preventive maintenance. Parts are often supplied directly from vendors at the time of need, so maintenance organizations need not invest hard money in large stocks of parts.
Automatic Analysis of Device Performance
There is increased focus and resource deployment to design durable products for serviceability. Durable equipment is designed and built once, but it must be maintained for years. With design cycles ranging from 6 months to 3 years, and with product lives ranging from about 3 years for computers to 40+ years for hospital sterilizers, alarm systems, and even some airplanes, the initial investment in maintainability will either bless or haunt an organization for many years. If a company profits by servicing equipment it produced, good design will produce a high return on investment in user satisfaction, repeat sales, less burden for the service force, and increased long-term profits. In many corporations, service generates as much revenue as product sales do, and the profit from service is usually greater. Products must be designed right the first time. That is where maintainability that enables condition monitoring and on-condition
maintenance becomes effective. Instruments that measure equipment characteristics are being directly connected to the maintenance computer. Microprocessors and sensors allow vibration readings, pressure differentials, temperatures, and other nondestructive test (NDT) data to be recorded and analyzed. Historically, these readings primarily activated alarm annunciators or recorders that were individually analyzed. Today, there are automated control systems in use that can signal the need for more careful inspection and preventive maintenance. These devices are currently cost-effective for high-value equipment such as turbines and compressors. Progress is being made in this area of intelligent device management so that all kinds of electrical, electronic, hydraulic, mechanical, and optical equipment can "call home" if they begin to experience deficiencies. Trend analysis for condition monitoring is assisted by accurate, timely computer records.
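A hedged sketch of such trend analysis follows; the readings and the alarm threshold are hypothetical, and the linear-trend assumption is a deliberate simplification of what condition-monitoring packages actually do:

```python
# Toy trend check for condition monitoring; assumes a rising trend (rate > 0).
vibration_mm_s = [2.1, 2.2, 2.6, 3.1, 3.8]   # weekly RMS vibration readings
alert_level = 4.5                            # inspection threshold

rate = (vibration_mm_s[-1] - vibration_mm_s[0]) / (len(vibration_mm_s) - 1)
weeks_to_alert = (alert_level - vibration_mm_s[-1]) / rate
print(f"Schedule inspection in about {weeks_to_alert:.1f} weeks")  # ~1.6
```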
Capabilities should also be designed into computer programs to indicate any other active work orders that should be done on equipment at the same time. Modifications, for example, can be held until other work is going to be done and can be accomplished most efficiently at the same time as the equipment is down for other maintenance activities. A variation on the same theme is to ensure emergency work orders will check to see if any preventive maintenance work orders might be done at the same time. Accomplishing all work at one period of downtime is usually more effective than doing smaller tasks on several occasions. Products that can “call home” and identify the need to replace a degrading part before failure bring major advantages to both the customer and support organization. There are, however, economic trade-offs regarding the effort involved versus the benefit to be derived. For example, the economics may not justify extensive communication connections for such devices as “smart” refrigerators. However, business devices that affect multiple people need intelligent device management (IDM) with remote monitoring to alert the service function to a pending need, hopefully before equipment becomes inoperable. The ability to “know before you go” is a major help for field technicians, so they have the right part and are prepared with knowledge of what to expect. It is important to recognize the difference between response time and restore time. Response time is the time in minutes from notification that service is required until a technician arrives on the scene. Restore time adds the minutes necessary to fix the equipment. Service contracts historically specified only response time, but now usually specify restore time. Response is action. Restore is results (see Figure 28-4).
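To see how quickly a restore-time budget is consumed, consider a toy calculation in which every duration is hypothetical:

```python
# Toy restore-time budget; all durations are hypothetical (hours).
sla_restore = 4.0                            # contractual restore time
travel, diagnose, repair = 1.0, 1.0, 0.5     # typical call components

slack_for_parts = sla_restore - (travel + diagnose + repair)
print(f"Any needed part must be in hand within {slack_for_parts:.1f} h")  # 1.5
```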
The challenge is that, to restore operation, the technician often needs a specific part. Many field replaceable units (FRUs) are expensive and not often required. Therefore, unless good diagnostics identifies the need for a specific part, techs may arrive at the customer location and then determine they need a part they do not have. Diagnostics is the most time-consuming portion of a service call. Technicians going to a call with a 4-hour restore requirement will often consume an hour or more to complete the present assignment and travel to the new customer. Diagnostics adds even more time, so the techs could easily consume 2 hours of the 4 available before even knowing what part is needed. Acquiring parts quickly then becomes very important. The value of information is increasing. Information is replacing inventory. Knowing in an accurate, timely way that a part was used allows a company to automatically initiate resupply to the authorized stocking site, even to the extent of informing the producer who will supply the warehouse with the next required part. An organization can fix considerable equipment the next day without undue difficulty. A required part can be delivered overnight from a central warehouse that stocks at least one of every part that may be required. Overnight transportation can be provided at relatively low cost with excellent handling, so orders shipped as late as midnight in Louisville, Ky., or Memphis, Tenn., can be delivered as early as 6:00 a.m. in major metropolitan areas. Obviously, those are "best case" conditions. There are many locations around the world where a technician must travel hours in desolate country to
get to the broken equipment. That technician must have all necessary parts and, therefore, will take all possible parts or acquire them en route. Service parts is a confidence business. If technicians have confidence the system will supply the parts they need when and where they need them, then techs will minimize their cache of service parts. If confidence is low, techs will develop their own stock of parts, will order two parts when only one is needed, and will retain intermittent problem parts. Parts required 24/365 can be shared through either third-party logistics companies (TPLs) or intelligent lockers instead of being carried by the several individual technicians who might provide the same coverage. Handoffs from the stock-keeping facility to the courier to the technician can be facilitated by intelligent lockers. Today, most orders are transmitted to the company warehouse or TPL location that picks and packs the ordered part for shipment and notifies the courier. Then the courier must locate the technician, who often has to drop what he or she is doing, go to meet the courier, and sign for the part. Avoid arrangements that allow the courier to leave parts at a receiving dock or reception desk, because they often disappear before the technician arrives.
Intelligent lockers can facilitate the handoff procedures at both ends of the delivery process. The receiving personnel can put parts in the intelligent locker and immediately notify the courier or technician by page, cell phone, fax, e-mail, or another method that the part is ready for pickup. The receiver can then retrieve parts at his or her convenience, and the access code provides assurance that the correct person gets the part. A single vendor can manage one or many intelligent lockers to provide parts to many users. For example, Grainger or The Home Depot could intelligently control sales of expensive, prone-to-shrink tools and accessories by placing these items in intelligent lockers outside their stores, where the ordered items can be picked up at any hour. Public mode allows many users to place their items in intelligent lockers for access by designated purchasers. Vendors could arrange space as required so a single courier "milk run" could deliver parts for technicians from several companies to pick up when convenient. This "controlled automat" use is sure to excite couriers themselves, as well as entrepreneurs who could use the capability around the clock for delivery of airline tickets, laptop computer drop-off and return, equipment rental and return, and many similar activities. Installation parts for communications networks, smart buildings, security centers, and plant instrumentation are high-potential items for intelligent lockers. These cabinets can be mounted on a truck, train, or plane and located at the point of temporary need.
Communications can be by wired telephone or data lines, or by wireless cellular, dedicated, or pager frequencies, so there are few limits on locations. Installations tend to be chaotic, without configuration management, and with parts taken but not recorded. Intelligent lockers can improve these and many other control and information shortages. Physical control is one thing, but information control is just as important. Many technicians do not like to be slowed down with administration. Part numbers, usage, transfers, and similar matters may be forgotten in the rush of helping customers. Information provided automatically by the activities involving intelligent lockers should greatly improve parts tracking, reordering, return validation, configuration management, repair planning, pickup efficiency, and invoicing.
Production Losses from Equipment Malfunction
In-plant service performance is primarily directed at supporting the plant operations. As most equipment failures in a plant represent production loss, measuring the amount of loss that results from inaccurate or improper service is a key element in measuring the service operation. Because other parameters can affect production loss, a true performance of the service function can be assessed only by noting the relationship of production losses caused by equipment malfunction to production losses caused by other variables, such as operator error, poor engineering, or random failures. By maintaining long-term records of such data, companies can visualize the success of the service department by noting the percent of the total production loss that results from inadequate or improper service. The production loss attributable to maintenance also represents a specific performance measure of the generic element of accuracy in problem definition. Effective preventive maintenance (PM) is a fundamental support for high operational availability. PM encompasses all actions intended to keep durable equipment in good operating condition and to avoid failures. New technology has improved equipment quality, reliability, and dependability through fault tolerance, redundant components, self-adjustments, and the replacement of hydraulic and mechanical components with more reliable electronic and optical operations. However, many components can still wear out, corrode, become punctured, vibrate excessively, become overheated by friction or dirt, or even be damaged by humans. For these problems, a good PM program will preclude failures, enable improved uptime, and reduce expenses. Success is often a matter of degree. Costs in terms of money and effort to be invested now must be evaluated against future gains. This means the time-value of money must be considered along with business priorities for short-term versus long-term success.
Over time, the computerized maintenance management system must gather data, which must then be analyzed to assist with accurate decisions. The proper balance between preventive and corrective maintenance that will achieve minimal downtime and costs can be tenuous. PM can prevent failures from happening at inconvenient times, can sense when a failure is about to occur and fix it before it causes damage, and can often preserve capital investments by keeping equipment operating for years as well as the day it was installed. Predictive maintenance is considered here to be a branch of preventive maintenance.
Inept PM, however, can cause problems. Humans are not perfect. Whenever any equipment is touched, it is exposed to potential damage. Parts costs increase if components are replaced prematurely. Unless the PM function is presented positively, customers may perceive PM activity as, "that machine is broken again." A PM program requires an initial investment of time, materials, people, and money. The payoff comes later. While there is little question that a good PM program will have a high return on investment, many people are reluctant to pay now if the return is not immediate. That challenge is particularly prevalent in a poor economy, where companies want a fast return on their expenditures. PM is the epitome of "pay me now, or pay me later." The PM advantage is that you pay less now to do planned work when production is not pushing, versus paying for very expensive emergency repairs that may be required under disruptive conditions, halting production and losing revenue. Good PM saves money over a product's life cycle. In addition to economics, emotions play a prominent role in preventive maintenance. It is a human reality that perceptions often receive more attention than facts do. A good computerized information system is necessary to provide the facts and interpretation that guide PM tasks and intervals. PM is a dynamic process. It must support variations in equipment, environment, materials, personnel, production schedules, use, wear, available time, and financial budgets. All these variables impact the how, when, where, and who of PM. Technology provides the tools for us to use, and management provides the direction for their use. Both are necessary for success. These ideas are equally applicable to equipment and facility maintenance and to field service in commerce, government, military, and industry. The foundation for preventive maintenance information is equipment records. All equipment and maintenance records should be in electronic databases. The benefits obtained from computerizing maintenance records are much greater than the relatively
small cost. There should be a current data file for every significant piece of equipment, both fixed and movable.
The equipment database provides information for many purposes beyond PM and includes considerations for configuration management, documentation, employee skill requirements, energy consumption, financials, new equipment design, parts requirements, procurement, safety, and warranty recovery. Essential data items are shown in Table 28-1.
The data for new equipment should be entered into the computer database when the equipment is procured. The original purchase order and shipping documents can be the source, with other data elements added as they are fixed. It is important to remember there are three stages of configuration:
1. As-designed
2. As-built
3. As-maintained
The as-maintained database is the major challenge to keep continually current. The master equipment data should be updated as an intuitive and real-time element of the
maintenance system. If pieces of paper are used, they are often forgotten or damaged, and the data may not get into the single master location on the computer. Part number revisions are especially necessary so the correct part can be rapidly ordered if needed. A characteristic of good information systems is that data should only need to be entered once, and all related data fields will be automatically updated. Many maintenance applications today are web-based so they can be accessed from anywhere a computer (or personal digital assistant [PDA], tablet, or enabled cell phone) can connect to the Internet. Computers are only one component of the information system capability. Electronic tablets, mobile phones, two-way pagers, voice recognition, bar codes, and other technologies are coming to the maintenance teams, often with wireless communications. A relatively small investment in data-entry technology can gain immediate reporting, faster response to discovered problems, accurate numbers gathered on the site, less travel, knowledge of what parts are in stock to repair deficiencies, and many other benefits.
It is important that the inspection or PM data be easily changeable. The computer program should accomplish as much as possible automatically. Many systems record the actual odometer reading at every fuel stop, end of shift, and other maintenance, so meter reading can be continually brought up to date. Other equipment viewed less often can have PM scheduled more on predicted dates. Meter information can be divided by the number of days to continually update the use per day, which then updates the next due date. When an inspection or PM is done and the work order closed, these data automatically revise the date last done, which in turn revises the date next due. Most maintenance procedures are now online, so they are available anytime via an electronic screen. Paperwork is kept to a minimum due to the ability for the inspector or customer to sign to acknowledge completion and/or input comments directly using a touch-sensitive screen. Single-point control over procedures via a central database is a big help, especially on critical equipment. When the work order is closed out, the related information should be entered automatically onto history records for later analysis. Sharing the data electronically with other factories, manufacturers, and consultants can allow “big data” to be analyzed effectively and often results in improvements that can be shared with all users. Safety inspections and legally required checks can be maintained in computer records for most organizations without any need to retain paper copies. If an organization must maintain those paper records for some legal reason, then they should be microfilmed or kept as electronic images rather than in bulky paper form.
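A hedged sketch of the meter-based scheduling described above follows; the function and variable names, and all the numbers, are hypothetical, and the projection assumes nonzero average daily use:

```python
# Toy projection of the next PM due date from meter readings.
from datetime import date, timedelta

def next_pm_due(last_pm_meter, current_meter, meter_at_period_start,
                days_in_period, pm_interval):
    """Project the date at which the PM interval (in meter units) is reached."""
    use_per_day = (current_meter - meter_at_period_start) / max(days_in_period, 1)
    remaining = (last_pm_meter + pm_interval) - current_meter
    return date.today() + timedelta(days=max(remaining, 0) / use_per_day)

# A 1,000-hour PM interval with roughly 12 meter-hours accumulated per day:
print(next_pm_due(last_pm_meter=9_000, current_meter=9_640,
                  meter_at_period_start=9_280, days_in_period=30,
                  pm_interval=1_000))         # about 30 days from today
```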
Humans are still more effective than computers at tasks that are complex and are not repeated. Computers are a major aid to humans when tasks require accurate historical information and are frequently repeated. Computer power and intelligent software greatly enhance the ability to accurately plan, schedule, and control maintenance.
Performance Metrics and Benchmarks
The heart of any management system is establishing the objectives that must be met. Once managers determine the objectives, then plans, budgets, and other parts of the management process can be brought into play. Too often, service management fails to take the time to establish clear objectives and operates without a plan. Because service may contribute a majority of a company's revenues and profits, that can be a very expensive mistake. Objectives should be:
• Written
• Understandable
• Challenging
• Achievable
• Measurable
Each company must develop its own performance measures. Useful performance measures, often referred to as benchmarks or key performance indicators (KPIs), include the following:
Asset Measures—Equipment, Parts, and Tools
(Note that this may be divided into the technician’s days to return and the repair time once the decision is made to repair the defective part.)
Cost Measures
Equipment Measures
Preventive Measures
Human Measures
Example Calculation: The most important measure for production equipment support is operational availability, which we also term uptime. This is item E1 and definition AO above. This is the "real-world" measure of what percent of time equipment is available for production. In the following example, we evaluate an item of automation equipment for 1 year, which is 365 days × 24 hours per day = 8,760 total possible "up" hours. Our equipment gets preventive maintenance for 1 hour every month (12 hours per year) plus an additional quarterly PM of another hour each quarter (4 more hours per year). There was one failure that resulted in 6 hours of downtime. Thus, total downtime for all maintenance was 12 + 4 + 6 = 22 hours.
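Completing the arithmetic with the operational availability definition (uptime divided by total possible hours):

$$A_O = \frac{8{,}760 - 22}{8{,}760} = \frac{8{,}738}{8{,}760} \approx 0.9975 = 99.75\%$$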
That would be considered acceptable performance in most operations, especially if the PM work can be done at times that will not interfere with production. The maintenance challenge is to avoid failures that adversely affect production operations.
Automation professionals should consider life-cycle cost when designing or acquiring an automation system. Design guidelines for supportability include:
1. Minimize the need for maintenance by:
• Lifetime components
• High reliability
• Fault-tolerant design
• Broad wear tolerances
• Stable designs with clear yes/no indications
2. Access:
• Openings of adequate size
• Fasteners few and easy to operate
• Adequate illumination
• Workspace for large hands
• Entry without moving heavy equipment
• Frequent maintenance areas have best access
• Ability to work on any FRU (field replaceable unit) without disturbing others
3. Adjustments:
• Positive success indication
• No interaction effects
• Factory/warranty adjustments sealed
• Center zero and increase clockwise
• Fine adjustments with large movements
• Protection against accidental movement
• Automatic compensation for drift and wear
• Control limits
• Issued drawings show field adjustments and tolerances
• Routine adjustment controls and measurement points in one service area
4. Cables:
• Fabricated in removable sections
• Each wire identified
• Avoids pinches, sharp bends, and abrasions
• Adequate clamping
• Long enough to remove connected components for test
• Spare wires at least 10% of total used
• Wiring provisions for all accessories and proposed changes
5. Connectors:
• Quick disconnect
• Keyed alignment
• Spare pins
• Plugs cold, receptacles hot
• No possible misconnection
• Moisture prevention, if needed
• Spacing provided for work area and to avoid shorts
• Labeled; same color marks at related ends
6. Covers and panels:
• Sealed against foreign objects
• Independently removable with staggered hinges
• Practical material finishes and colors
• Moves considered—castors, handles, and rigidity
• Related controls together
• Withstand pushing, sitting, strapping, and move stress
• Easily removed and replaced
• No protruding handles or knobs except on control panel
7. Consumables:
• Need detected before completely expended
• Automatic shutoff to avoid overflow
• Toxic exposure under thresholds
8. Diagnostics:
• Every fault detected and isolated
• Troubleshooting cannot damage
• Self-tests preferred
• Go/no go indications
• Isolation to field replaceable unit
• Never more than two signals observed simultaneously
• Condition monitoring on all major inputs and outputs
• Ability for partial operation of critical assemblies
9. Environment—equipment protected from:
• Hot and cold temperatures
• High and low humidity
• Airborne contaminants
• Liquids
• Corrosives
• Pressure
• Electrical static, surges, and transients
10. Fasteners and hardware:
• Few in number
• Single turn
• Captive
• Fit multiple common tools
• Non-clog
• Common metric/imperial thread
11. Lubrication:
• Disassembly not required
• Need detectable before damage
• Sealed bearings and motors
12. Operations tasks:
• Positive feedback
• Related controls together
• Decisions logical
• Self-guiding
• Checklists built-in
• Fail-safe
13. Packaging:
• Stacking components avoided
• Ease of access guides replacement
• Functional groups
• Hot items high and outside near vents
• Improper installation impossible
• Plug-in replaceable components
14. Parts and components:
• Labeled with part number and revision level
• Able to replace knobs and buttons without having to replace the entire unit
• Delicate parts protected
• Stored on equipment if user replaceable
• Standard, common, proven
• Not vulnerable to excessive heat
• Mean time between maintenance known
• Wear-in/wear-out considered
15. Personnel involvement:
• Weight for portable items 35 lb (16 kg) maximum
• Lowest ability expected to do all tasks
• Male or female
• Clothing considered
• Single-person tasks
16. Refurbish, rejuvenate, and rebuild:
• Materials and labels resist anticipated solvents and water
• Drain holes
• Configuration record easy to see and understand
• Aluminum avoided in cosmetic areas
17. Safety:
• Interlocks
• Electrical shutoff near equipment
• Circuit breakers and fuses adequately and properly sized
• Protection from high voltages
• Corners and edges round
• Protrusions eliminated
• Electrical grounding or double insulation
• Warning labels
• Hot areas shielded and labeled
• Controls not near hazards
• Bleeder and current-limiting resistors on power supplies
• Guards on moving parts
• Hot leads not exposed
• Hazardous substances not emitted
• Radiation given special considerations
18. Test points:
• Functionally grouped
• Clearly labeled
• Accessible with common test equipment
• Illuminated
• Protected from physical damage
• Close to applicable adjustment or control
• Extender boards or cables
19. Tools and test equipment:
• Standardized
• Minimum number
• Special tools built into equipment
• Metric compatible
20. Transport and storage:
• Integrated moving devices, if service needs to move
• Captive fluids and powders
• Delivery and removal methods practical
• Components with short life easily removed
• Ship ready to use
The preferred rules for modern maintenance are to regard safety as paramount, emphasize predictive prevention, repair any defect or malfunction, and, if the system works well, strive to make it work better.
Further Information
Patton, Joseph D., Jr. Maintainability & Maintenance Management. 4th ed. Research Triangle Park, NC: ISA (International Society of Automation), 2005.
——— Preventive Maintenance. 3rd ed. Research Triangle Park, NC: ISA (International Society of Automation), 2004.
Patton, Joseph D., Jr., and Roy J. Steele. Service Parts Handbook. 2nd ed. Research Triangle Park, NC: ISA (International Society of Automation), 2003.
Patton, Joseph D., Jr., and William H. Bleuel. After the Sale: How to Manage Product Service for Customer Satisfaction and Profit. New York: The Solomon Press, 2000.
Author’s note: With the Internet available to easily search for publications and professional societies, using a search engine with keywords will be more effective than a printed list of references. An Internet search on specific topics will be continually up to date, whereas materials in a book can only be current as of the time of printing. Searching with words like maintenance, preventive maintenance, reliability, uptime (which finds better references than does the word availability), maintainability, supportability, service management, and maintenance automation will bring forth considerable information, from which you can select what you want.
About the Author
Joseph D. Patton, Jr. is retired. He was the founder and chairman (for more than 35 years) of Patton Consultants, Inc., which advised management on product service, logistics, and support systems. Before founding Patton Consultants in 1976, Patton was an officer in the Regular Army and spent 11 years with Xerox Corp. He has authored more than 200 published articles and 8 books. He earned a BS degree from the Pennsylvania State University and an MBA in marketing from the University of Rochester. He is a registered Professional Engineer (PE) in Quality Engineering and a Fellow of both ASQ, the American Society for Quality, and SOLE, The International Society of Logistics. He is a Certified Professional Logistician (CPL), Certified Quality Engineer (CQE), Certified Reliability Engineer (CRE), and a senior member of ISA.
29
Troubleshooting Techniques
By William L. Mostia, Jr.
Introduction
Troubleshooting can be defined as the method used to determine why something is not working properly or is not providing an expected result. Troubleshooting methods can be applied to physical as well as nonphysical problems. As with many practical skills, it is an art, but it also has an analytical or scientific basis. As such, basic troubleshooting is a trainable skill, while advanced troubleshooting is based on experience, developed skills, information, and a bit of art. While the discussion here centers on troubleshooting instrumentation and control systems, the basic principles apply to broader classes of problems. Troubleshooting normally begins with identifying that a problem exists and needs to be solved. The first steps typically involve applying a logical/analytical framework.
Logical/Analytical Troubleshooting Framework
A framework underlies a structure. Logical frameworks provide the basis for structured methods to troubleshoot problems. However, following a step-by-step method without first thinking through the problem is often ineffective; we also need to couple logical procedures with analytical thinking. To analyze information and determine how to proceed, we must combine logical deduction and induction with a knowledge of the system, then sort through the information we have gathered regarding the problem. Often, a logical/analytical framework does not produce the solution to a troubleshooting problem in just one pass. We usually have several iterations, which cause us to return to a previous step in the framework and go forward again. Thus, we can systematically eliminate possible solutions to our problem until we find the true solution.
Specific Troubleshooting Frameworks
Specific troubleshooting frameworks have been developed that apply to a particular instrument, class of instruments, system, or problem domain. For example, frameworks
might be developed for a particular brand of analyzer, for all types of transmitters, for pressure control systems, or for grounding problems. When these match up with your system, you have a distinct starting point for troubleshooting.
Figure 29-1 is an example of a specific troubleshooting framework (also called a flowchart or tree) for transmitters.
Generic Logical/Analytical Frameworks
Since we do not always have a specific structured framework available, we need a more general or generic framework that will apply to a broad class of problems. Figure 29-2 depicts this type of framework as a flowchart.
The framework shown in Figure 29-2, while efficient, leaves out some important safety-related tasks and company procedural requirements associated with the troubleshooting process. Because troubleshooting actions may involve online systems, energized systems, and moving parts, troubleshooting can increase the safety risk to the troubleshooter, so a few important points should be made here:
• Always make sure that what you are doing is safe for you, your fellow workers, and your facility. • Follow company procedures. • Get the proper permits and follow their requirements. • Always communicate your actions to the operator in charge and other involved people. • Never insert any part of your body into a location where there is the potential for hazards. • Never take unnecessary risks. The life you save may be your own!
The Seven-Step Troubleshooting Procedure
The following is a description of the seven-step procedure illustrated in Figure 29-2, which provides a generic, structured approach to troubleshooting.
Step 1: Define the Problem
You cannot solve a problem if you do not know what the problem is. The problem definition is the starting point; get it wrong, and you will stray off the path to the solution. The key to defining the problem is communication. When defining the problem, listen carefully and allow the person(s) reporting the problem to provide a complete report of the problem as they see it. The art of listening is a key element of troubleshooting. After listening carefully, ask clear and concise questions. All information has a subjective aspect to it. When trying to identify a problem, you must strip away the subjective elements and get to the meat of the situation. Avoid high-level technical terms or “technobabble.” A troubleshooter must be able to speak the “language” of the person reporting the problem. This means understanding the process, the plant physical layout, instrument locations, process functions as they are known in the plant, and the “dialect” of abbreviations, slang, and technical words commonly used in the plant. Some of this is generic to process plants in general, while some is specific to the plant in question.
Step 2: Collect Additional Information Regarding the Problem
Once a problem has been defined, collect additional information. This step may overlap Step 1 and, for simple problems, these two steps may even be the same. For complex or sophisticated problems, however, collecting information is typically a more distinct stage. Develop a strategy or plan of action for collecting information. This plan should include determining where in the system you will begin to collect information, what sources will be used, and how the information will be organized. Information gathering typically moves from general to specific, though there may be several iterations of this. In other words, continue working to narrow down the problem domain.
Symptoms
The information gathered typically consists of symptoms: what is wrong with the system, as well as what is working properly. Primary symptoms are directly related to the cause of the problem at hand. Secondary symptoms are downstream effects—that is, they do not result directly from what is causing the problem. Differentiation between primary and secondary symptoms can be the key to localizing the cause of the problem.
Interviews and Collecting Information
A large part of information gathering will typically be in the form of interviews with the person(s) who reported the problem and with any other people who may have relevant information. People skills are important here: good communication skills, the use of tact, and nonjudgmental and objective approaches can be key to getting useful information. Then, review the instrument or system’s performance from the control system’s faceplates, trend recorders, summaries, operating logs, alarm logs, recorders, and system self-diagnostics. System drawings and documentation can also provide additional information.
Inspection
Next, inspect the instrument(s) that are suspected of being faulty or other local instruments (such as pressure gauges, temperature gauges, sight glasses, and local indicators) to see if there are any other indications that might shed light on the matter.
History
Historical records can provide useful information. The facility’s maintenance management system (MMS) may contain information regarding the failed system or ones like it. Also, check with others who have worked on the instrument or system.
Beyond the Obvious
If there are no obvious answers, testing may be in order. Plan the testing to ensure it is done safely and obtains the information needed with a minimum of intrusion. When testing by manipulating the system, plan to test or manipulate only one variable at a time. Altering more than one variable might solve the problem, but you will be unable to identify what fixed it. Always make the minimum manipulation necessary to obtain the desired information. This minimizes the potential upset to the process.
Step 3: Analyze the Information
Once the information is collected, you must analyze it to see if there is enough data to propose a solution. Begin by organizing the collected information. Then, analyze the problem by reviewing what you already know and the new information you have gathered, connecting causes and effects, exploring causal chains, applying IF-THEN and IF-THEN NOT logic, applying the process of elimination, and applying other relevant analytical or logical methods.
Case-Based Reasoning
Probably the first analytical technique that you will use is past experience. If you have seen this situation or case before, then you know a possible solution. Note that we say, “a possible solution” because similar symptoms sometimes have different causes and, hence, different solutions.
“Similar To” Analysis
Compare the system you are working on to similar systems you have worked on in the past. For example, a pressure transmitter, a differential pressure transmitter, and a differential pressure level transmitter are similar instruments. Different programmable logic controller (PLC) brands often have considerable similarities. RS-485 communication links are similar, even on very different source and destination instruments. Similar instruments and systems operate on the same basic principles and have potentially similar problems and solutions.
“What, Where, When” Analysis
This type of analysis resembles the Twenty Questions game. Ask questions about what the gathered information may tell you. These are questions such as:
• What is working?
• What is not working? Has it ever worked?
• What is a cause of an effect (symptom) and what is not?
• What has changed?
• What has not changed?
• Where does the problem occur?
• Where does it not occur?
• When did the problem occur?
• When did it not occur?
• What has been done to troubleshoot so far?
• What do I not know?
• What physical properties or principles must be true?
Patterns
Symptoms can sometimes be complex, and they can be distributed over time. Looking for patterns in symptom actions, in lack of actions, or in time of occurrence can sometimes help in symptom analysis.
Basic Principles
Apply basic scientific principles when analyzing the data: electrical current can only flow in certain ways, Ohm’s and Kirchhoff’s laws always hold, and mass and energy always balance. Also apply the physical and material properties relevant to the process, such as state of matter, boiling point, and corrosive properties.
The Manual
When in doubt, read the manual! It may have information on circuits, system analysis, or troubleshooting that can lead to a solution. It may also provide voltage, current, or indicator readings; test points; and analytical procedures. Manuals often have troubleshooting tables or charts to assist you.
Logical Methods
You will need a logical approach to make this iterative procedure successful. Several approaches, such as the linear approach and the divide-and-conquer method, may apply.
The linear or walk-through approach is a step-by-step process (illustrated in Figure 29-3) that you follow to test points throughout a system. The first step is to decide on an entry point. If the entry point tests good, then test the next point downstream in a linear signal path. If this point tests good, then choose the next point downstream of the previous test point, and so on. Conversely, if the entry point is found to be bad, choose the next entry point upstream and begin the process again. As you move downstream or upstream, each step narrows down the possibilities. Any branches must be tested at the first likely point downstream on the branch.
The divide-and-conquer method is a general approach (illustrated in Figure 29-4). You choose a likely point or, commonly, the midpoint of the system, and test it. If it tests bad, then the upstream section of the system contains the faulty part; if it tests good, the downstream section contains the faulty part. Divide the section of the system (upstream or downstream) that contains the faulty part into two parts and test the section at the dividing point. Determine whether the faulty part is upstream or downstream of the test point and continue dividing the sections and testing until the cause of the problem has been found.
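Viewed abstractly, divide-and-conquer is a binary search over the ordered test points of a signal path. The sketch below is a conceptual aid only; the five-point loop and the tests_good probe are hypothetical stand-ins for real field checks.

```python
from typing import Callable, Sequence

def locate_fault(points: Sequence[str],
                 tests_good: Callable[[str], bool]) -> str:
    """Binary-search an ordered signal path for the first bad test point.

    `points` runs from upstream to downstream; the source is assumed
    good and the final output is known bad. Each probe halves the
    remaining search region.
    """
    lo, hi = 0, len(points) - 1      # first bad point lies in points[lo..hi]
    while lo < hi:
        mid = (lo + hi) // 2
        if tests_good(points[mid]):
            lo = mid + 1             # fault is downstream of the midpoint
        else:
            hi = mid                 # fault is at or upstream of the midpoint
    return points[lo]

# Illustrative loop; only the transmitter output actually tests good here.
path = ["transmitter", "junction box", "marshalling cabinet",
        "I/O card", "controller input"]
print(locate_fault(path, tests_good=lambda p: p == "transmitter"))
# -> "junction box": the fault lies between the transmitter and the junction box
```

Because each probe halves the remaining region, a 16-point path needs at most 4 checks instead of 16.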
Step 4: Determine Sufficiency of Information
When gathering information, how do you know that you have enough? Can you determine a cause and propose a solution to solve the problem? If the answer is yes, this is a decision point for moving on to the next step of proposing a solution. If the answer is no, return to Step 2, “Collect Additional Information Regarding the Problem.”
Step 5: Propose a Solution
When you believe you have determined the cause of the problem, propose a solution. In fact, you may propose several solutions based on your analysis. The proposed solution will usually be to remove and replace (or repair) a defective part. In some cases, however, the proposal may not offer complete certainty of solving the problem and will have to be tested. If there are several possible solutions, propose them in the order of their probability of success. If the probabilities are roughly equal, or other operational limitations come into play, you can use other criteria to determine the order of the solutions. You might propose solutions in the order of the easiest to the most difficult. In cases where there may be cost penalties (labor costs, cost of consumable parts, lost production, etc.) associated with trying various solutions, you may propose trying the least costly viable option.
Do not try several solutions at once. This is called the shotgun approach, and it will typically only confuse the issue. Management will sometimes push for a shotgun approach due to time or operational constraints, but you should resist it. With a little analytical work, you may be able to solve the problem and meet management constraints at a lower cost. With the shotgun approach, you may find that you do not know what fixed the problem, and it will be more costly both immediately and in the long term; not knowing what fixed the problem, you may be doomed to repeat it.
Also, do not rush to a compromise solution proposed by a “committee.” Consider the well-known “Trip to Abilene” story, illustrating the “group think” concept that is the opposite of synergy. In the story, some people are considering going to Abilene, though none of them really wants to go. They end up in Abilene, however, because everyone in the group thinks that everyone else wants to go to Abilene. This sometimes occurs in troubleshooting when a committee gets together to “assist” the troubleshooter and the committee gets sidetracked by a trip to Abilene.
Step 6: Test the Proposed Solution
Once a solution, or a combination of solutions, has been proposed, it must be tested to determine if the problem analysis is correct.
Specific versus General Solutions
Be careful of specific solutions to more general problems. At this step, you must determine if the solution needed is more general than the specific one proposed. In most cases, a specific solution will be repairing or replacing the defective instrument, but that may not solve the problem. What if replacing an instrument only results in the new instrument becoming defective? Suppose an instrument with very long signal lines sustains damage from lightning transients. The specific solution would be replacing the instrument; the general solution might be to install transient protection on the instrument as well.
The Iterative Process
If the proposed and tested solution is not the correct one, then return to Step 3, “Analyze the Information.” Where might you have gone astray? If you find the mistake, then propose another solution. If you cannot identify the mistake, return to Step 2, “Collect Additional Information Regarding the Problem.” It is time to gather more information that will lead you to the real solution.
Step 7: The Repair
In the repair step, implement the proposed solution. In some cases, testing a solution results in a repair, as in replacing a transmitter, which both tests the solution and repairs the problem. Even in this case, there will generally be additional work to be done to complete the repair, such as tagging the repaired/new equipment, updating the database, and updating maintenance records. Document the repair so future troubleshooting is made easier. If the system is a safety instrumented system (SIS) or an independent protection layer (IPL), additional information will have to be documented, such as the as-found and as-left condition of the repair, the mode of failure (e.g., safe or dangerous), and other information pertinent to a safety/critical system failure.
Vendor Assistance: Advantages and Pitfalls
Sometimes it is necessary to involve vendors or manufacturers in troubleshooting, either directly, over the Internet, or by phone. A field service person, or the in-house service personnel, can be helpful (and can provide a learning experience), but some are quick to blame other parts of the system (not their own) when they cannot find anything wrong, in some cases before they have even checked out their own system. When trying to solve a problem, do not let vendors or manufacturers off the hook just because they say it is not their equipment. Ask questions and make them justify their position.
Be very careful to follow your company’s cybersecurity procedures before you allow a vendor or other people remote access to your systems via the Internet or by phone modem. While allowing remote access to your system can sometimes be beneficial in troubleshooting complex problems, the cybersecurity risk generally outweighs the benefits of giving vendors or manufacturers remote access. You should also be careful that vendor field maintenance personnel do not gain computer access to your systems. Note that any system that is password protected should not use the default password. Better to be safe than sorry regarding cybersecurity threats.
Other Troubleshooting Methods
There are other troubleshooting methods that complement the logical/analytical framework. Some of these are substitution, fault insertion, remove-and-conquer, circle-the-wagons, complex-to-simple, trapping, consultation, intuition, and out-of-the-box thinking.
The Substitution Method
This method substitutes a known good component for a suspected bad component. For modularized systems or those with easily replaceable components, substitution may reveal the component that is the cause of the problem. First, define the problem, gather information, and analyze the information just as you do in the generic troubleshooting framework. Then, select a likely replacement candidate and substitute a known good component for it. By substituting components until the problem is found, the substitution method may find problems even where there is no likely candidate or only a vague area of suspicion. One potential problem with substitution is that a higher-level cause can damage the replacement component as soon as you install it, or a transient (such as lightning) may have caused the failure and your fix may only be temporary (e.g., until the next lightning strike). Using this method can raise the overall maintenance cost due to the cost of extra modules.
The Fault Insertion Method
Sometimes you can insert a fault instead of a known good signal or value to determine how the system responds. For example, when a software communication interface is periodically locking up, you may suspect that the interface is not responding to an input/output (I/O) timeout properly. You can test this by inserting a fault—an I/O timeout.
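As a sketch of the idea, the wrapper below inserts an I/O timeout on a chosen poll so you can watch how the host interface responds. The driver object and its read() method are hypothetical stand-ins for the real I/O layer.

```python
import time

class TimeoutFaultInserter:
    """Wraps an I/O driver and deliberately inserts a timeout fault.

    Lets you observe whether the host interface recovers cleanly from
    an I/O timeout or locks up, as in the example above.
    """
    def __init__(self, driver, fail_on_poll: int):
        self.driver = driver            # hypothetical real driver object
        self.fail_on_poll = fail_on_poll
        self.polls = 0

    def read(self):
        self.polls += 1
        if self.polls == self.fail_on_poll:
            time.sleep(5.0)             # hold the reply past the host's timeout
            raise TimeoutError("inserted fault: I/O timeout")
        return self.driver.read()       # normal pass-through on all other polls
```

Run the interface’s normal polling loop against the wrapper; if the lockup reproduces on the inserted timeout, the interface’s timeout handling is implicated.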
The Remove-and-Conquer Method
For loosely coupled systems that have multiple independent devices, removing the devices one at a time may help you find certain types of problems. For example, if a communication link with 10 independent devices talking to a computer is not communicating properly, you might remove the devices one at a time until the defective device is found. Once the defective device has been detected and repaired, the removed devices should be reinstalled one at a time to see if any other problems occur. The remove-and-conquer technique is particularly useful when a communication system has been put together incorrectly or it exceeds system design specifications. For example, there might be too many devices on a communication link, cables that are too long, cable mismatches, wrong cable installation, impedance mismatches, or too many repeaters. In these situations, sections of the communication system can be disconnected to determine what is causing the problem.
A similar technique called add-back-and-conquer works in the reverse. You remove all the devices and add them back one by one until you find the cause of the problem.
The Circle-the-Wagons Method
When you believe the cause of a problem is external to the device or system, try the circle-the-wagons technique. Draw an imaginary circle or boundary around the device or system. Determine what interfaces (such as signals, power, grounding, environmental, and electromagnetic interference [EMI]) cross the circle, and then isolate and test each boundary crossing. Often this is just a mental exercise that helps you think about external influences, which then leads to a solution. Figures 29-5 and 29-6 illustrate this concept.
A Trapping We Shall Go
Sometimes an event that triggers or causes the problem is not alarmed, it is a transient, or it happens so fast the system cannot catch it. This is somewhat like having a mouse in your house: you generally cannot see it, but you can see what it has done.
How do you catch the mouse? You set a trap. In sophisticated systems, you may have the ability to set additional alarms or identify trends to help track down the cause of the problem. For less sophisticated systems, you may have to use external test equipment or build a trap. If software is involved, you may have to build software traps that involve additional logic or code to detect the transient or bug.
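In software, a trap can be a small watcher that polls faster than the normal scan and logs any excursion with a timestamp. A minimal sketch, assuming a hypothetical read_signal callable and illustrative limits:

```python
import logging
import time

logging.basicConfig(filename="trap.log", level=logging.INFO,
                    format="%(asctime)s %(message)s")

def run_trap(read_signal, low, high, period_s=0.01, duration_s=3600):
    """Poll a signal faster than the normal scan and log any excursion.

    Catches transients that the regular alarm scan is too slow to see.
    """
    deadline = time.monotonic() + duration_s
    while time.monotonic() < deadline:
        value = read_signal()
        if not (low <= value <= high):
            logging.info("TRAP: value %.3f outside [%.3f, %.3f]",
                         value, low, high)
        time.sleep(period_s)
```

The log then shows exactly when the mouse ran by, even if the operator console never saw it.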
The Complex-to-Simple Method
Many control loops and systems may have different levels of operation or complexity with varying levels of sophistication. One troubleshooting method is to break systems down from complex to simple. This involves identifying the simple parts that function to make the whole. Once you identify the simplest nonfunctioning “part,” you can evaluate that part or, if necessary, you can start at a simple known good part and “rebuild” the system until you find the problem.
Consultation
Consultation, also known as the third head technique, means you use a third person, perhaps someone from another department or an outside consultant, with advanced knowledge about the system or the principles for troubleshooting the problem. This person may not solve the problem, but they may ask questions that make the cause
apparent or that spark fresh ideas. This process allows you to stand back during the discussions, which sometimes can help you distinguish the trees from the forest. The key is to know when you have reached the limitations of your investigation and need some additional help or insight.
Intuition
Intuition can be a powerful tool. What many people would call “troubleshooting intuition” certainly improves with experience. During troubleshooting or problem solving, threads of thought in your consciousness or subconsciousness may develop, one of which may lead you to the solution. The more experience you have, the more threads will develop during the troubleshooting process. Can you cultivate intuition? Experience suggests that you can, but success varies from person to person and from technique to technique. Find what works for you.
Out-of-the-Box Thinking
Difficult problems may require approaches beyond normal or traditional troubleshooting methods. The term out-of-the-box thinking was a buzzword for organizational consultants during the 1990s. Out-of-the-box thinking means approaching a problem from a new perspective, not being limited to the usual ways of thinking about it. The difficulty in using this approach is that our troubleshooting “perspective” is generally developed along pragmatic lines; that is, it is based on what has worked before, and changing it can sometimes be difficult.
How can you practice out-of-the-box thinking? How can you shift your perspective to find another way to solve the problem? Here are some questions that may help: • Is there some other way to look at the problem? • Can the problem be divided up in a different way? • Can different principles be used to analyze the problem? • Can analyzing what works rather than what does not work help to solve the problem? • Can a different starting point be used to analyze the problem? • Are you looking at too small a piece of the puzzle? Too big? • Could any of the information on which the analysis is based be in error, misinterpreted, or looked at in a different way? • Can the problem be conceptualized differently?
• Is there another “box” that has similarities that might provide a different perspective?
Summary
While troubleshooting is an art, it is also based on scientific principles, and it is a skill that can be taught and developed with quality experience. An organized, logical approach to troubleshooting is necessary to be successful and can be provided by following a logical framework, supplemented by generalized techniques such as substitution, remove-and-conquer, circle-the-wagons, and out-of-the-box thinking.
Further Information
Goettsche, L. D., ed. Maintenance of Instruments & Systems. 2nd ed. Practical Guides for Measurement and Control series. Research Triangle Park, NC: ISA (International Society of Automation), 2005.
Mostia, William L., Jr., PE. “The Art of Troubleshooting.” Control IX, no. 2 (February 1996): 65–69.
——— Troubleshooting: A Technician’s Guide. Research Triangle Park, NC: ISA (International Society of Automation), 2006.
About the Author
William (Bill) L. Mostia, PE, has more than 35 years of experience in instrumentation, controls, safety, and electrical areas. He is the principal engineer at WLM Engineering Co., a consulting firm in the instrument, electrical, and safety areas. Mostia has worked for Amoco, Texas Eastman, Dow Chemical Co., SIS-TECH Solutions, and exida, and he has been an independent consultant in instrument, electrical, and safety areas. He graduated from Texas A&M University with a BS in electrical engineering. He is a professional engineer registered in the state of Texas, a certified Functional Safety (FS) Engineer by TUV Rheinland, and an ISA Fellow. Mostia is an active member of ISA, and he serves on the ISA84 and ISA91 standards committees, as well as various ISA12 standards committees. He has published more than 75 articles and papers, and a book on troubleshooting; he has also been a contributor to several books on instrumentation.
30
Asset Management
By Herman Storey and Ian Verhappen, PE, CAP
Asset management, broadly defined, refers to any system that monitors and maintains things of value to an entity or group. It may apply both to tangible assets (something you can touch) and to intangible assets, such as human capital, intellectual property, goodwill, and/or financial assets. Asset management is a systematic process of cost-effectively developing, operating, maintaining, upgrading, and disposing of assets. Asset management systems are processing and enabling information systems that support the management of an organization’s assets, both physical (tangible) assets and nonphysical (intangible) assets. Due to the number of elements involved, asset management is data- and information-intensive.
What you expect the asset to do is known as the function of the asset. An important part of asset management is preserving the asset’s ability to perform its function as long as required. Maintenance is how an asset’s function is preserved. Maintenance usually costs money, as it consumes time and effort. Not doing maintenance has consequences: failure of some assets to function can be expensive, harm both people and the environment, and stop the business from running, while failure of other assets may be less serious. As a result, one of the important first steps in any asset management program is understanding the importance of your assets to your operations, as well as the likelihood and consequence of their failure, so you can take more effective actions to preserve their functions. This exercise is commonly referred to as criticality ranking. Knowing the criticality of your assets makes it easier to determine the appropriate techniques or strategies to manage those assets. Managing all this data requires using computer-based systems and increasingly integrating information technology (IT) and operational technology (OT) systems. Industry analyst Gartner confirms this increased integration, predicting that OT will be used to intelligently feed predictive data into enterprise asset management (EAM) IT systems in the near future. This alerts the asset manager to potential failures, allowing effective intervention before the asset fails.
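Returning to criticality ranking: in practice it often reduces to scoring each asset’s likelihood and consequence of failure and sorting by their product. The sketch below uses illustrative 1-to-5 scales and invented tags; real programs calibrate these scales against the organization’s risk matrix.

```python
from dataclasses import dataclass

@dataclass
class Asset:
    tag: str
    likelihood: int   # 1 (rare) .. 5 (frequent)
    consequence: int  # 1 (negligible) .. 5 (severe: safety, environment, shutdown)

    @property
    def criticality(self) -> int:
        return self.likelihood * self.consequence

assets = [
    Asset("PT-101 reactor pressure", likelihood=2, consequence=5),
    Asset("TT-205 utility header temperature", likelihood=3, consequence=1),
    Asset("FV-310 feed control valve", likelihood=4, consequence=4),
]

# Highest criticality first: these assets warrant the most rigorous strategies.
for a in sorted(assets, key=lambda a: a.criticality, reverse=True):
    print(f"{a.tag}: criticality {a.criticality}")
```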
This IT/OT integration promises significant improvement in asset performance and availability in the near future. Several organizations already offer asset performance management systems that straddle IT and OT, thus providing more sophistication in how these systems can be used to manage assets. Additional guidance on how to implement asset management systems is available through recently released, and currently under development, international standards. The International Organization for Standardization (ISO) has developed a series of documents similar in nature to the ISO 9000 series on quality and the ISO 14000 series on environmental stewardship. The ISO 55000 series of three separate voluntary asset management standards was officially released 15 January 2014:
1. ISO 55000, Asset Management – Overview, Principles, and Terminology
2. ISO 55001, Asset Management – Management Systems – Requirements
3. ISO 55002, Asset Management – Management Systems – Guidelines for the Application of ISO 55001
Like ISO 9000 and ISO 14000, ISO 55000 provides a generic conceptual framework—it can be applied in any industry or context.
The requirements of the ISO 55000 standards are straightforward. An organization (such as a company, plant, mine, or school board) has a portfolio of assets. Those assets are intended (somehow) to deliver on part of the organization’s objectives. The asset management system creates the link from corporate policies and objectives, through a number of interacting elements to establish policy (i.e., rules), asset management objectives, and processes through which to achieve them. Asset management itself is the activity of executing on that set of processes to realize value (as the organization defines it) from those assets. Policies lead to objectives, which require a series of activities in order to achieve them. Like the objectives, the resulting plans must be aligned and consistent with the rest of the asset management system, including the various activities, resources, and other financing. Similarly, risk management for assets must be considered in the organization’s overall risk management approach and contingency planning. Asset management does not exist in a vacuum. Cooperation and collaboration with other functional areas will be required to effectively manage and execute the asset management system. Resources are needed to establish, implement, maintain, and
continually improve the asset management system itself, and collaboration outside of the asset management organization or functional area will be required to answer questions such as:
• What is the best maintenance program?
• What is the ideal age at which to replace the assets?
• What should we replace them with?
• How can we best utilize new technologies?
• What risks are created when the asset fails?
• What can we do to identify, manage, and mitigate those risks?
These questions and challenges must be answered; ignoring them carries a high price. Good asset management, as described in industry standards, gives us a framework to answer the above questions and deal with the associated challenges.
NAMUR NE 129 provides a view of maintenance processes that is complementary to ISO 55000 and the concept of intelligent device management (IDM). NAMUR NE 129 (plant asset management) states that asset management tasks encompass all phases of the plant life cycle, ranging from planning, engineering, procurement, and construction to dismantling the plant. The International Standard ISO 14224 provides a comprehensive basis for the collection of reliability and maintenance (RM) data in a standard format for equipment in all facilities and operations within the petroleum, natural gas, and petrochemical industries during the operational life cycle of equipment.
Asset Management and Intelligent Devices
The International Society of Automation (ISA) and the International Electrotechnical Commission (IEC) are collaborating on a proposed standard, currently in draft-review stages, to be named IEC 63082/ISA-108, Intelligent Device Management, to help facilitate integrating the OT and IT realms by effectively using the information available from the sensors and controllers in facilities. As instruments evolve to become truly intelligent, they transmit more data digitally, delivering more benefits to users along with the potential for simpler deployment and operation. The proliferation of intelligent devices has resulted in an increase in the volume and complexity of data, requiring standards for identifying errors, diagnostic codes, and critical configuration parameters.
A successful measurement is built on the proper type of instrument technology correctly installed in the right application. Aside from input from its direct sensors, a nonintelligent device cannot perceive any other process information. Intelligent devices have diagnostics that can detect faults in the installation or problems with the application, each of which could compromise measurement quality and/or reliability. Intelligent devices also have the ability to communicate this information over a network to other intelligent devices. Smart instruments can also respond to inquiries or push condition information to the balance of the automation system. Many of the same technologies used in enterprise business systems, such as Ethernet and open OT systems, have been adapted to automation platforms. As a result, many of the same security and safety concerns found in these systems must be addressed before connecting critical process devices to non-process control networks. Combining all these advances will result in better integration of intelligent devices into the automation and information systems of the future. This will make it practical for users to realize all the advantages that intelligent devices offer: better process control, higher efficiency, lower energy use, reduced downtime, and higher-quality products.
Established maintenance practices, such as run to failure, time-based inspection and testing, and unplanned demand maintenance associated with traditional devices, were sufficiently effective, given the limitations of the device’s technology. In general, traditional maintenance practices were applied in the following contexts: • Run to failure – Used to manage failure modes, which were sudden, hidden, or deemed to be low impact such that their failure would not impact reliable process operations • Time-based inspection and testing – Used to manage failure modes, which were both gradual and predictable, such as mechanical integrity of devices • Demand maintenance – Used to manage failure modes that were either sudden or gradual but unpredictable Most intelligent devices contain configuration data and diagnostics that can be used to optimize maintenance practices. In many cases, the promise of intelligent devices in the facility remains unrealized. This is not so much a technology issue as a lack of management understanding of IDM value, as well as insufficient skilled personnel and work processes. This lack of understanding results in less than optimum risk management and unrealized benefits of device intelligence. With the implementation of intelligent devices, IDM, and diagnostic maintenance, significant benefits can be realized. The benefits come from:
• The ability of intelligent devices to detect and compensate for environmental conditions, thus improving accuracy • The ability to detect and compensate for faults that might not be detected by traditional inspection and testing Figure 30-1 provides a conceptual illustration of the enhanced diagnostic capability.
Associated new IDM-based work processes also provide opportunities to improve:
• Data management, through access to all the information related not only to the process but also to the integrity of the resulting signals, the device configuration and settings, and each stage of the signal-processing steps
• Worker knowledge and knowledge management, by capturing all changes made to the individual devices and the resulting analysis, which helps develop best practices and optimize work processes
• Maintenance work processes, through a better understanding of the root cause of a device’s deterioration, allowing activities to focus on the source of the deterioration in signal integrity
• The impact of faults on the process under control
With the implementation of diagnostic messaging, routine scheduled testing is unnecessary for intelligent devices and inspection procedures can be simplified. Tools and processes that use the device’s built-in diagnostics can improve testing and inspection, thus improving overall facility risk management.
As indicated above, enterprise asset management covers all the assets of the enterprise. Figure 30-2 represents the relationship between IDM and intelligent devices as a unified modeling language (UML) class diagram: in the context of asset management, an intelligent device is one form of asset that IDM manages following the principles of asset management.
Figure 30-3 shows how an IDM program translates objectives provided by higher levels of the organization, such as business management processes, into requirements of technology and resources and then passes them to IDM work processes. Similarly, the IDM program translates performance history recorded by work processes into key performance indicators (KPIs) and passes them to higher levels of the organization.
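As a sketch of that translation, the snippet below rolls a hypothetical work-order history up into one simple KPI, mean time between failures (MTBF) per device type. The record format and numbers are invented for illustration.

```python
from collections import defaultdict

# Hypothetical work-order history: (device_type, operating_hours_at_failure)
failures = [
    ("vendor-X pressure transmitter", 17_500),
    ("vendor-X pressure transmitter", 22_300),
    ("vendor-Y control valve", 6_100),
]
# Cumulative fleet operating hours per device type (also illustrative)
hours_in_service = {
    "vendor-X pressure transmitter": 80_000,
    "vendor-Y control valve": 30_000,
}

failure_counts = defaultdict(int)
for device_type, _ in failures:
    failure_counts[device_type] += 1

# KPI: MTBF per device type = cumulative fleet hours / number of failures
for device_type, hours in hours_in_service.items():
    n = failure_counts[device_type]
    mtbf = hours / n if n else float("inf")  # no recorded failures yet
    print(f"{device_type}: MTBF approx. {mtbf:,.0f} h")
```

Managing such a KPI once per device type for the enterprise, rather than once per application at each facility, is the kind of program-level efficiency described in the next paragraphs.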
Implementation methods for enterprise business management are nontechnical and best left to operating companies. Many enterprise business management stakeholders can have an impact on the success of IDM. Most enterprise business management processes are created for larger purposes and are not structured to directly manage IDM. A technical and business focus for IDM, as well as a mid-level management structure, can provide valuable coordination between the IDM work processes and higher-level management. IDM programs are implemented and supported via IDM work processes. The IDM program requires a well-defined functional structure, but the structure can be tailored to the enterprise’s needs.
One of the main driving forces for program-level activity is the fast rate of change in software used in intelligent devices. Software drives much faster change in devices than the natural change in technology or application requirements for those devices. Program-level activities can improve the efficiency of managing those changes by managing the change once per device type (i.e., a specific model from a particular manufacturer) for an enterprise rather than managing the change in each application at each facility. The supporting functions for IDM can be expressed by the various applications, subsystems, and intelligent devices that can comprise the IDM. Intelligent devices are the fundamental building block of IDM. The diagnostics and other information available from intelligent devices can be integrated with these associated functions in the facility to implement IDM work processes and advanced maintenance processes. A true IDM does not need to incorporate all the elements included here, but the intelligent devices are a necessary element of the system. Figure 30-4 shows an overview of a representative sample of the supporting functions for IDM.
IDM supporting functions range from real time or semi-real time to planned activities. Information from intelligent devices is provided in real time or close to real time, depending on the types of devices used. This real-time information can then be used to generate an alarm for the operator in real time or for operator human-machine interface (HMI) applications. In the transactional world, intelligent device information is provided to the Computerized Maintenance Management System (CMMS). Primary functional elements of the IDM physical architecture include:
• Intelligent devices • Field maintenance tools • Control hardware and software • Simulation and optimization tools • Historian • Asset management tools • Reporting and analysis tools • Alarm management tools • Operator HMI • Engineering and configuration tools • CMMS • Design tools
• Planning and scheduling tools Field maintenance tools include portable tools and field workstations, which fall into three general types: • Laptop workstations • Hand-held tools • Remote wireless clients Asset management tools have functions, such as the collection and analysis of information from intelligent devices, with the purpose of enacting various levels of preventive, predictive, and proactive maintenance processes. Asset management tools give maintenance people a real-time view into what is happening with their field device assets, but they can also connect to other intelligent assets in the facility, including control hardware and software, intelligent drives and motor controls, rotating machinery, and even electrical assets. Asset management tools can be used to manage overall facility maintenance processes and can use information collected from other functional domains, such as device management tools. Traditional asset management practices apply to devices and systems used for automation. However, the widespread use of intelligent devices brings requirements for new techniques and standards for managing this new class of devices.
Further Information
Many standards efforts are being undertaken in the process industries that revolve around plant assets and devices, but none of these efforts addresses IDM for maintenance and operations. The ISO PC 251 effort is part of the ISO 55000 asset management standard. ISO 55000 primarily focuses on the inspection and test side of asset management; it does not address the requirements of IDM.
The Institute of Asset Management (IAM) is a UK-based (physical) asset management association. In 2004, IAM, through the British Standards Institution (BSI), published Publicly Available Specifications (PAS) for asset management. These include PAS 55-1:2008, Part 1: Specification for the Optimized Management of Physical Assets and PAS 55-2:2008, Part 2: Guidelines for the Application of PAS 55-1. However, the PAS standards’ primary focus is not on instrumentation, and they again lean heavily toward the inspect-and-test side of the business. These standards tend to be heavily focused on physical assets, not the devices that control them.
IEC TR 61804-6:2012 Function Blocks (FB) for Process Control – Electronic Device Description Language (EDDL) – Part 6: Meeting the Requirements for Integrating Fieldbus Devices in Engineering Tools for Field Devices defines the plant asset management system as a system to achieve processing equipment optimization, machinery health monitoring, and device management. NAMUR NE 129 recommendation, Requirements of Online Plant Asset Management Systems, does a very good job at outlining the purpose of plant asset management systems and their place in the world of plant asset health.
About the Authors
Herman Storey is an independent automation consultant and the chief technology officer for Herman Storey Consulting, LLC. He has been active for many years in standards development with ISA and other organizations, including the FieldComm Group. Storey is co-chair of the ISA100 Wireless Systems for Automation and ISA108 Intelligent Device Management committees.
After a 42-year career, Storey retired from Shell Global Solutions in 2009. At that time, he was a senior consultant in the Process Automation, Control, and Optimization group. He was the principal technology expert for Instrumented Systems Architecture and the subject-matter expert for distributed control systems technology. Storey graduated from Louisiana Tech with a BSEE.
Ian Verhappen, PE, CAP, and ISA Fellow, has worked in all three aspects of the automation industry: end user, supplier, and engineering consultant. After approximately 25 years as an end user in the hydrocarbon industry (where he was responsible for analyzer support, as well as integration of intelligent devices in the facilities), Verhappen moved to a supplier company as director of digital networks. For the past 5+ years, he has been working for engineering firms as a consultant. In addition to being a regular trade journal columnist, Verhappen has been active in ISA and IEC standards for many years, including serving a term as vice president of the ISA Standards and Practices (S&P) Board. He is presently the convener of IEC SC65E WG10 Intelligent Device Management, a member of SC65E WG2 (List of Properties), and managing director of several ISA standards, including ISA-108.
X
Factory Automation
Mechatronics
Many engineering products of the last 30 years possess integrated mechanical, electrical, and computer systems. Mechatronics has evolved significantly by taking advantage of embedded computers and supporting information technologies and software advances. The result has been the introduction of many new intelligent products into the marketplace, along with associated practices, described in this chapter, to ensure successful implementation.
Motion Control
Motion control of machines and processes compares the desired position with the actual position and takes whatever corrective action is necessary to bring them into agreement. Initially, machine tools were the major beneficiary of this automation. Today, packaging, material handling, food and beverage processing, and other industries that use machines with movable members are enjoying the benefits of motion control.
Vision Systems
A vision system is a perception system used for monitoring, detecting, identifying, recognizing, and gauging that provides local information useful for measurement and control. The systems consist of several separate or integrated components, including cameras and lenses, illumination sources, mounting and mechanical fixturing hardware, computational or electronic processing hardware, input/output (I/O) connectivity and cabling electrical hardware, and, most importantly, the software that performs the visual sensing and provides useful information to the measurement or control system.
Building Automation
This chapter provides insight into the industry that automates large buildings. Each large building has a custom-designed heating, ventilating, and air-conditioning (HVAC) system to which automated controls are applied. Access control, security, fire, life safety, lighting control, and other building systems are also automated as part of the building automation system.
31 Mechatronics By Robert H. Bishop
Basic Definitions
Modern engineering design has naturally evolved into a process that we can describe in a mechatronics framework. Since the term was first coined in the late 1960s, mechatronics has evolved significantly by taking advantage of embedded computers and supporting information technologies and software advances. The result has been the introduction of many new intelligent products into the marketplace. But what exactly is mechatronics? Mechatronics was originally defined by the Yasakawa Electric Company in trademark application documents [1]:
The word, Mechatronics, is composed of “mecha” from mechanism and “tronics” from electronics. In other words, technologies and developed products will be incorporating electronics more and more into mechanisms, intimately and organically, and making it impossible to tell where one ends and the other begins.

The definition of mechatronics evolved after Yasakawa suggested the original definition. One of the most often quoted definitions comes from Harashima, Tomizuka, and Fukuda [2]. In their words, mechatronics is defined as:

The synergistic integration of mechanical engineering, with electronics and intelligent computer control in the design and manufacturing of industrial products and processes.

Other definitions include:
• Auslander and Kempf [3] – Mechatronics is the application of complex decision-making to the operation of physical systems.
• Shetty and Kolk [4] – Mechatronics is a methodology used for the optimal design of electromechanical products.
• Bolton [5] A mechatronic system is not just a marriage of electrical and mechanical systems and is more than just a control system; it is a complete integration of all of them.
These definitions of mechatronics express various aspects of mechatronics, yet each definition alone fails to capture the entirety of the subject. Despite continuing efforts to define mechatronics, to classify mechatronic products, and to develop a standard mechatronics curriculum, agreement on “what mechatronics is” eludes us. Even without a definitive description of mechatronics, engineers understand the essence of the philosophy of mechatronics from the definitions given above and from their own personal experiences.

Mechatronics is not a new concept for design engineers. Countless engineering products possess integrated mechanical, electrical, and computer systems, yet many design engineers were never formally educated or trained in mechatronics. Indeed, many so-called mechatronics programs in the United States are actually programs embedded within the mechanical engineering curriculum as minors or concentrations [6]. However, outside of the United States, for example in Korea and Japan, mechatronics was introduced into 4-year curricula about 25 years ago. Modern concurrent engineering design practices, now formally viewed as an element of mechatronics, are natural design processes.

From an educational perspective, the study of mechatronics provides a mechanism for scholars interested in understanding and explaining the engineering design process to define, classify, organize, and integrate the many aspects of product design into a coherent package. As the historical divisions between mechanical, electrical, biomedical, aerospace, chemical, civil, and computer engineering give way to more multidisciplinary structures, mechatronics can provide a roadmap for engineering students studying within the traditional structure of most engineering colleges. In fact, undergraduate and graduate courses in mechatronic engineering are now offered in many universities. Peer-reviewed journals are being published, and conferences dedicated to mechatronics are very popular. However, mechatronics is not just a topic for investigative studies by academicians; it is a way of life in modern engineering practice.

The widespread adoption of the microprocessor in the early 1980s, coupled with ever-improving performance-to-cost ratios, changed the nature of engineering design. The number of new products being developed at the intersection of traditional disciplines of engineering, computer science, and the natural sciences is expanding. New developments in these traditional disciplines are being absorbed into mechatronics design. The ongoing information technology revolution, advances in wireless communication, smart sensor design, the Internet of Things, and embedded systems engineering ensure that mechatronics will continue to evolve.
Key Elements of Mechatronics
The study of mechatronic systems can be divided into the following areas of specialty:
• Physical system modeling
• Sensors and actuators
• Signals and systems
• Computers and logic systems
• Software and data acquisition
The key elements of mechatronics are illustrated in Figure 31-1. As the field of mechatronics continues to mature, the list of relevant topics associated with the area will most certainly expand and evolve. The extent to which mechatronics reaches into various engineering disciplines is revealed by the constituent key elements comprising mechatronics.
Physical System Modeling Central to mechatronics is the integration of physical systems (e.g., mechanical and electrical systems) utilizing various sensors and actuators connected to computers and software. The connections are illustrated in Figure 31-2. In the design process, it is necessary to represent the physical world utilizing mathematical models; hence, physical system modeling is essential to the mechatronic design process. Fundamental principles of science and engineering (such as the dynamical principles of Newton, Maxwell, and Kirchhoff) are employed in the development of physical system models of mechanical and dynamical systems, electrical systems, electromechanical systems, fluid
systems, thermodynamic systems, and micro-electromechanical (MEMS) systems. The mathematical models might include the external physical environment for simulation and verification of the control system design, and it is likely that the mathematical models also include representations of the various sensors and actuators. An autonomous rover, for example, might utilize an inertial measurement unit for navigation purposes, and the software hosted on the computer aboard the vehicle would utilize a model of the inertial measurement unit to compute the expected acceleration measurements as part of the predictive function in the rover trajectory control.
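To make the modeling step concrete, here is a minimal sketch of such a model: a permanent magnet DC motor described by Kirchhoff's voltage law for the armature circuit and Newton's second law for the rotor, integrated with a simple forward-Euler loop. The parameter values are hypothetical and chosen only for illustration.

```python
# Hypothetical parameters for a small permanent magnet DC motor.
R, L = 2.0, 0.5e-3      # armature resistance (ohm) and inductance (H)
KT, KE = 0.05, 0.05     # torque constant (N*m/A) and back-EMF constant (V*s/rad)
J, B = 1e-5, 1e-6       # rotor inertia (kg*m^2) and viscous friction (N*m*s/rad)

def simulate(v_in=12.0, dt=1e-5, t_end=0.05):
    """Integrate L*di/dt = V - R*i - KE*w and J*dw/dt = KT*i - B*w."""
    i = w = 0.0
    for _ in range(int(t_end / dt)):
        di = (v_in - R * i - KE * w) / L    # armature circuit (Kirchhoff)
        dw = (KT * i - B * w) / J           # rotor dynamics (Newton)
        i += di * dt
        w += dw * dt
    return w                                # speed in rad/s

print(simulate())   # approaches v_in/KE (about 240 rad/s) since friction is small
```

At steady state the speed approaches V/Ke less a small friction term, which provides a quick sanity check on any such model before it is used for control design.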
Sensors and Actuators Mechatronic systems utilize sensors to measure the external environment and actuators to manipulate the physical systems to achieve the desired goals. We classify sensors by their measurement goals. Sensors can measure linear motion, rotational motion, acceleration, force, torque, pressure, flow rates, temperature, range and proximity, light intensity, and images. Furthermore, sensors can be classified as either analog or digital, and as passive or active. A cadmium sulfide cell that measures light intensity is an example of an analog sensor. A digital camera is a common digital sensor. When remotely sensing the Earth, we can use passive sensors that measure energy that is naturally available when the sun is illuminating the Earth. For example, a digital camera might serve as a passive sensor for remote sensing. On the other hand, we can also use an active sensor that provides its own energy source for illumination. An example of an active sensor for remote sensing is the synthetic aperture radar. Sensors are becoming increasingly smaller, lighter, and, in many instances, less expensive. The trend to microscales and nanoscales supports the continued evolution of mechatronic system design to smaller and smaller scales. Actuators are following the same trends as sensors in reduced size and cost. Actuators can be classified according to the nature of their energy: electromechanical, electrical, electromagnetic, piezoelectric, hydraulic, and pneumatic. Furthermore, we can classify actuators as binary or continuous. For example, a relay is a binary actuator and a stepper
motor is a continuous actuator. Examples of electrical actuators include diodes, thyristors, and solid-state relays. Examples of electromechanical actuators include motors, such as direct current (DC) motors, alternating current (AC) motors, and stepper motors. Examples of electromagnetic actuators include solenoids and electromagnetic relays. As new smart material actuators continue to evolve, we can expect advanced shape memory alloy actuators and magnetostrictive actuators. As the trend to smaller actuators continues, we can also look forward to a larger selection of microactuators and nanoactuators. When working with sensors and actuators, one must necessarily be concerned with the fundamentals of time and frequency. Three types of time and frequency information can be conveyed: clock time or time of day (the time an event occurred), time interval (duration between events), and frequency (rate of a repetitive event). In mechatronic systems, we often need to time tag and synchronize events, such as time measurements that are obtained from sensors or the time at which an actuator must be activated. The accuracy of time and frequency standards has improved by many orders of magnitude over the past decades, allowing further advances in mechatronics by reducing uncertainties in sensing and actuation timing.
In addition to timing and frequency, the issues of sensor and actuator characteristics must be considered. The characteristics of interest include range, resolution, sensitivity, errors (calibration, loading, and sensitivity), repeatability, linearity and accuracy, impedance, friction, eccentricity, backlash, saturation, deadband, input, and frequency response.
Signals and Systems
An important step in the design process is to accurately represent the system with mathematical models. What are the mathematical models used for? They are employed in designing and analyzing appropriate control systems. The application of system theory and control is central to mechatronic system design. The relevance of control system design to mechatronic systems extends from classical frequency domain design using linear, time-invariant, single-input single-output (SISO) system models, to modern multiple-input multiple-output (MIMO) state space methods (again assuming linear, time-invariant models), to nonlinear, time-varying methods. Classical design methods use transfer function models in conjunction with root locus methods and frequency-response methods, such as Bode, Nyquist, and Nichols. Although the transfer function approach generally employs SISO models, it is possible to perform MIMO analysis in the frequency domain. Modern state-space analysis and design techniques can be readily applied to SISO and MIMO systems. Pole placement techniques are often used to design the state-space controllers. Optimal control methods, such as linear quadratic regulators (LQRs), have been discussed in the literature since the 1960s and are in common use today. Robust optimal control design strategies, such as H2 and H∞, are applicable to mechatronic systems, especially in situations where there is considerable uncertainty in the plant and disturbance models. Other common design methodologies include fuzzy control, adaptive control, and nonlinear control (using Lyapunov methods and feedback linearization).
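Whatever design method is used, the result for many industrial axes is a controller in the PID family. The sketch below is a generic discrete-time PID update, not any vendor's implementation; the gains and sample period are assumptions the designer must supply from one of the design methods above.

```python
class PID:
    """Minimal discrete PID controller (textbook form, for illustration only)."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        """Return the actuator command for one sample period."""
        error = setpoint - measurement
        self.integral += error * self.dt                   # integral term state
        derivative = (error - self.prev_error) / self.dt   # backward difference
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

A production controller would add output limiting and integral anti-windup; those details are omitted here to keep the loop structure visible.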
Computers and Logic Systems
Once the control system is designed, it must be implemented on the mechatronic system. This is the stage in the design process where issues of software and computer hardware, logic systems, and data acquisition take center stage. The development of the microcomputer, and associated information technologies and software, has impacted the field of mechatronics and has led to a whole new generation of consumer products. The computer is used to monitor and/or control processes and operations in a mechatronic system. In this mode, the computer is generally an embedded computer hosting an operating system (often real time) running code and algorithms that process input measurement data (from a data acquisition system) to prescribe outputs that drive actuators to achieve the desired closed-loop behavior. Embedded computers allow us to introduce intelligence into mechatronic systems. Unfortunately, nearly half of all embedded system designs are late or never make it to product, and about one-third fail once they are deployed. One of the challenges facing designers in this arena is the complexity of the algorithms and code being implemented on the embedded computers. One of the newer (and effective) approaches to addressing this coding complexity is graphical system design, which blends intuitive graphical programming and commercial off-the-shelf hardware to design, prototype, and deploy embedded systems. The computer also plays a central role in the design phase, where engineers use design and analysis software (off-the-shelf or special purpose) to design, validate, and verify the expected performance of the mechatronic system. Sometimes this is accomplished completely in simulation, but more often the computer is but one component of a test procedure that encompasses hardware-in-the-loop simulations and other laboratory investigations.
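The monitor-and-control role described above reduces, at its core, to a fixed-rate loop. The skeleton below illustrates the idea; read_sensor and write_actuator are hypothetical stand-ins for real DAQ and driver calls, and a production embedded target would use an RTOS timer rather than sleeping in user code.

```python
import time

def control_loop(read_sensor, write_actuator, controller, period=0.001):
    """Run a fixed-period (1 ms) measure-compute-actuate cycle."""
    next_tick = time.monotonic()
    while True:
        measurement = read_sensor()         # acquire input data
        command = controller(measurement)   # run the control algorithm
        write_actuator(command)             # drive the actuator
        next_tick += period                 # schedule the next cycle
        time.sleep(max(0.0, next_tick - time.monotonic()))
```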
Data Acquisition and Software
The collection of signals from the sensors is known as data acquisition (DAQ). The DAQ system collects and digitizes the sensor signals for use in the computer-controlled environment of the mechatronic system. The DAQ system can also be used to generate signals for control purposes. A DAQ system includes various computer plug-in DAQ devices, signal conditioning (e.g., linearization and scaling), and a suite of software. The software suite controls the DAQ system by acquiring the raw data, analyzing the data, and presenting the results. Sensors are typically not connected directly to a plug-in DAQ device in the computer because the measured physical signals are often low voltage and susceptible to noise, thus they require some type of signal conditioning. In other words, the signals are appropriately modified (e.g., amplified and filtered) before the plug-in DAQ device converts them to digital information. One example of signal conditioning is linearization where the voltage levels from the sensors or transducers are linearized so that the voltages can be scaled to measure physical phenomena.
Two main hardware elements of the DAQ system are the analog-to-digital converter (ADC) and the digital-to-analog converter (DAC). The ADC is an electronic device (often an integrated circuit) that converts an analog voltage to a digital number. Similarly, the DAC is an electronic device that converts a digital number to an analog voltage or current. The transfer of data to or from a computer system involving communication channels and DAQ interfaces is referred to as input/output, or I/O. The I/O can be either digital I/O or analog I/O. There are several key questions that arise when considering analog signal inputs and DAQ. For example, you need to know the signal magnitude limits. It is also important to know how fast the signal varies with time. The four parameters of concern are (1) resolution, (2) device range, (3) signal input range, and (4) sampling rate. The resolution of the ADC is measured in bits. For example, an ADC with 16 bits has a higher resolution (and thus a higher degree of accuracy) than a 12-bit ADC. The device range is the minimum and maximum analog signal levels that the ADC can digitize. The signal input range is the range that is specified as the maximum and minimum voltages of the analog input signals. A signal range that includes both positive and negative values (e.g., −5 V to 5 V) is known as bipolar. A signal range that is always positive (e.g., 0 V to 10 V) is unipolar. The sampling rate is the rate at which the DAQ device samples an incoming signal.
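The interplay of resolution, device range, and signal range determines the smallest voltage change a DAQ device can resolve. A short calculation, using the bipolar −5 V to 5 V range from the text as an example:

```python
def adc_code_width(bits, v_min, v_max):
    """Smallest distinguishable voltage step (one LSB) for an ideal ADC."""
    return (v_max - v_min) / (1 << bits)

# Bipolar -5 V to 5 V device range from the text:
print(adc_code_width(12, -5.0, 5.0))   # ~2.44 mV per count at 12 bits
print(adc_code_width(16, -5.0, 5.0))   # ~0.153 mV per count at 16 bits
```

The 16-bit converter resolves a step 16 times smaller than the 12-bit one over the same range, which is what is meant above by a higher degree of accuracy.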
The Modern Automobile as a Mechatronic Product The evolution of modern mechatronics is reflected in the development of the modern
automobile. Until the 1960s, the radio was the only significant electronics in an automobile. Today, the automobile is a comprehensive mechatronic system. For example, before the introduction of sensors and microcontrollers, a mechanical distributor was used to select the specific spark plug to fire when the fuel-air mixture was compressed. The timing of the ignition was the control variable. Modeling of the combustion process showed that for increased fuel efficiency there existed an optimal time when the fuel should be ignited, depending on the load, speed, and other measurable quantities. As a result of efforts to increase fuel efficiency, the electronic ignition system was one of the first mechatronic systems to be introduced in the automobile. The electronic ignition system consists of crankshaft position, camshaft position, airflow rate, throttle position, and rate-of-throttle-position-change sensors, along with a dedicated microcontroller to determine the timing of the spark plug firings. The mechanical distributor is now a thing of the past.

Other mechatronic additions to the modern automobile include the antilock brake system (ABS), the traction control system (TCS), and the vehicle dynamics control (VDC) system. Modern automobiles typically use combinations of 8-, 16-, and 32-bit processors to implement the control systems. The microcontroller has onboard memory, digital and analog inputs, analog-to-digital and digital-to-analog converters, pulse width modulation, timer functions (such as event counting and pulse width measurement), prioritized inputs, and, in some cases, digital signal processing. Typically, the 32-bit processor is used for engine management, transmission control, and airbags; the 16-bit processor is used for the ABS, TCS, VDC, instrument cluster, and air-conditioning systems; and the 8-bit processor is used for seat control, mirror control, and window lift systems. By 2017, some luxury automobiles were employing over 150 onboard microprocessors, and the trend of increasing the use of microprocessors continues [7].

And what about software? Modern automobiles have millions of lines of code, with conventional autos hosting up to 10 million lines of code and high-end luxury sedans hosting nearly 100 million lines of code [8]. With this many lines of code and the growing connectivity of the automobile to the Internet, the issue of cybersecurity is rapidly becoming a subject of great interest, as it has been demonstrated that hacking automobile subsystems is possible [9].

Automobile makers are searching for high-tech features that will differentiate their vehicles from others. It is estimated that 60% of the cost of a car is associated with automotive electronic systems and that the global automotive electronics market size will reach $300 billion by 2020 [10]. New applications of mechatronic systems in the automotive world include driverless and connected automobiles, safety enhancements, emission reduction, and other features, including intelligent cruise control and brake-by-wire systems that eliminate the hydraulics. An upcoming trend is to bring the computer into the automobile passenger compartment with, for example, split screen monitors that facilitate front-seat passenger entertainment (such as watching a movie) while the driver navigates using GPS and mapping technology [11]. As the number of automobiles in the world increases, stricter emission standards are inevitable. Mechatronic products will likely contribute to meeting the challenges in emission control by providing substantial reduction in CO, NO, and HC emissions and by increasing vehicle efficiency. Clearly, an automobile with 150 microprocessors (or microcontrollers), up to 100 electric motors, about 200 pounds of wiring, a multitude of sensors, and millions of lines of software code can hardly be classified as a strictly mechanical system.
Classification of Mechatronic Products In the late 1970s, the Japan Society for the Promotion of Machine Industry (JSPMI) classified mechatronics products into four categories: 1. Class I – Primarily mechanical products with electronics incorporated to enhance functionality. Examples include numerically controlled machine tools and variable speed drives in manufacturing machines.
2. Class II – Traditional mechanical systems with significantly updated internal devices incorporating electronics. The external user interfaces are unaltered. Examples include the modern sewing machine and automated manufacturing systems. 3. Class III – Systems that retain the functionality of the traditional mechanical system, but the internal mechanisms are replaced by electronics. An example is the digital watch. 4. Class IV – Products designed with mechanical and electronic technologies through synergistic integration. Examples include photocopiers, intelligent washers and dryers, rice cookers, and automatic ovens. The enabling technologies for each mechatronic product class illustrate the progression of electromechanical products in stride with developments in control theory, computation technologies, and microprocessors. Class I products were enabled by servo technology, power electronics, and control theory. Class II products were enabled by the availability of early computational and memory devices and custom circuit design capabilities. Class III products relied heavily on the microprocessor and integrated circuits to replace mechanical systems. Finally, Class IV products marked the beginning of true mechatronic systems, through integration of mechanical systems and electronics.
It was not until the 1970s, with the development of the microprocessor by the Intel Corporation, that the integration of computational systems with mechanical systems became practical. There are literally hundreds of thousands of digital process-control computers installed worldwide. Whatever definition of mechatronics one chooses to adopt, it is evident that modern mechatronics involves computation as the central element. In fact, the incorporation of the microprocessor to precisely modulate mechanical power and to adapt to changes in the environment is the essence of modern mechatronics and smart products.
The Future of Mechatronics
Future growth in mechatronics will be fueled by growth in its constituent areas, which provide the enabling technologies. For example, the invention of the microprocessor had a profound effect on the redesign of mechanical systems and the design of new mechatronic systems. We should expect continued advancements in cost-effective microprocessors and microcontrollers; sensor and actuator development enabled by advancements in applications of MEMS; adaptive control methodologies and real-time programming methods; networking and wireless technologies; and mature computer-aided engineering (CAE) technologies for advanced system modeling, virtual prototyping, and testing. The continued rapid development in these areas will only accelerate the pace of smart product development. The Internet, utilized in combination with wireless technology, will also lead to new mechatronic products, especially with the growth of the Internet of Things [12]. While developments in the automotive sector provide vivid examples of mechatronics development, there are numerous examples of intelligent systems in many walks of life, including smart home appliances, such as dishwashers, vacuum cleaners, microwaves, and wireless network–enabled devices. In the area of “human-friendly machines,” we can expect advances in robot-assisted surgery and implantable sensors and actuators. Other areas that will benefit from mechatronic advances include robotics, manufacturing, space technology, and transportation.
References
1. Mori, T. “Mecha-tronics.” Yasakawa Internal Trademark Application Memo 21.131.01, July 12, 1969.
2. Harashima, F., M. Tomizuka, and T. Fukuda. “Mechatronics—What Is It, Why, and How? An Editorial.” IEEE/ASME Transactions on Mechatronics 1, no. 1 (1996): 1–4.
3. Auslander, D. M., and C. J. Kempf. Mechatronics: Mechanical System Interfacing. Upper Saddle River, NJ: Prentice Hall, 1996.
4. Shetty, D., and R. A. Kolk. Mechatronic System Design. Boston, MA: PWS Publishing Company, 1997.
5. Bolton, W. Mechatronics: Electrical Control Systems in Mechanical and Electrical Engineering. 2nd ed. Harlow, England: Addison Wesley Longman, 1999.
6. Lee, J. “College Education Curriculum of Automation Mechatronics in the Republic of Korea.” 16th International Conference on Research and Education in Mechatronics (REM), Bochum, Germany, 2015.
7. O’Donnell, B. “The Digital Car.” Techpinions, May 23, 2017. https://techpinions.com/the-digital-car/50164.
8. O’Donnell, B. “Your average car is a lot more code-driven than you think.” USA Today, June 28, 2016. Accessed September 26, 2017. https://www.usatoday.com/story/tech/columnist/2016/06/28/your-average-car-lot-more-code-driven-than-you-think/86437052/.
9. Pagliery, J. “Your car is a giant computer and it can be hacked.” CNN Tech (June 2, 2014). Accessed September 26, 2017. http://money.cnn.com/2014/06/01/technology/security/car-hack/index.html.
10. “Global and Chinese Automotive Electronics Industry Chain Report, 2015–2018.” PR Newswire (March 14, 2016). Accessed September 26, 2017. http://www.prnewswire.com/news-releases/global-and-china-automotive-electronics-industry-chain-report-2015-2018-300235868.html.
11. Sackman, J. “Cool Tech Options for Your Car.” Goliath (February 10, 2016). Accessed September 26, 2017. http://www.goliath.com/auto/12-cool-tech-options-for-your-car/.
12. Bradley, D., D. Russell, I. Ferguson, J. Isaacs, A. MacLeod, and R. White. “The Internet of Things – The Future or the End of Mechatronics.” Mechatronics: The Science of Intelligent Machines (International Federation of Automatic Control) 27 (2015): 57–74.
Further Information
These reference texts are recommended to gain a deeper understanding of mechatronics.
Bishop, R. H., ed. The Mechatronics Handbook. 2nd ed. Boca Raton, FL: CRC Press, Taylor & Francis Group, 2008.
Bolton, W. Mechatronics: A Multidisciplinary Approach. 4th ed. Upper Saddle River, NJ: Prentice Hall, 2009.
De Silva, C. W. Mechatronics: An Integrated Approach. Boca Raton, FL: CRC Press, Taylor & Francis Group, 2004.
De Silva, C. W., F. Khoshnoud, M. Li, and S. K. Halgamuge. Mechatronics: Fundamentals and Applications. Boca Raton, FL: CRC Press, Taylor & Francis Group, 2015.
Hehenberger, P., and D. Bradley, eds. Mechatronic Futures: Challenges and Solutions for Mechatronic Systems and their Designers. Springer International Publishing, 2016.
Merzouki, R., A. K. Samantaray, P. M. Pathak, and B. O. Bouamama. Intelligent Mechatronic Systems. Springer-Verlag London, 2013.
Onwubolu, G. Mechatronics: Principles and Applications. Oxford: Butterworth-Heinemann, 2005.
Reif, K., ed. Automotive Mechatronics. Wiesbaden: Springer Vieweg, 2015.
Shetty, D., and R. A. Kolk. Mechatronics System Design. Stamford, CT: Cengage Learning, 2010.
Peer-Reviewed Journals IEEE/ASME Transactions on Mechatronics. http://www.ieee-asme-mechatronics.org/. Mechatronics: The Science of Intelligent Machines. Elsevier. https://www.journals.elsevier.com/mechatronics/.
About the Author Robert H. Bishop is the dean of Engineering at the University of South Florida and is a full professor in the Department of Electrical Engineering. He also held the endowed position of Opus Dean of Engineering and was on the faculty of the Department of Electrical and Computer Engineering at Marquette University. Previously he was on the faculty at The University of Texas at Austin for 20 years where he served as the chairman of the Department of Aerospace Engineering and Engineering Mechanics,
held the Joe J. King Professorship, and was a University Distinguished Teaching Professor. Prior to academia, he was a member of the technical staff at the Charles Stark Draper Laboratory. Bishop is the co-author of one of the world’s leading textbooks on control theory; he has authored 2 other textbooks, edited 2 handbooks, and authored/co-authored over 135 journal and conference papers. He developed the Mechatronics Handbook and the spin-off book Mechatronics: An Introduction. He is a Fellow of the American Institute of Aeronautics and Astronautics (AIAA) and a Fellow of the American Astronautical Society (AAS). He received his PhD from Rice University in electrical and computer engineering and his MS and BS from Texas A&M University in aerospace engineering.
32 Motion Control By Lee A. Lane and Steve Meyer
What Is Motion Control?
Within the general field of automation, motion control is a special field that deals with the automatic actuation and control of mechanical systems. In the early days of the industrial revolution, many mechanical systems required a person to power and control the machinery by turning cranks or moving levers, actuating the motion as they watched a measuring scale. The brain was the control, comparing the desired position to the actual position and making corrections to get the machine to the correct position. With the introduction of automation, the scale was replaced with a feedback sensor, the human muscle was replaced with a powered actuator, and the brain was replaced with a controller. An operator would enter the desired position; the controller would compare the feedback position to the desired position and decide the direction and speed needed to achieve the desired position. The controller would send instructions to an electric motor (or other power source) until the desired position was achieved. Initially, machine tools were the major beneficiary of this automation. Today, almost every manufacturing facility, from food and beverage processing and consumer product packaging to semiconductors and electronics, uses machinery with some type of motion control.
Advantages of Motion Control
Saving time is a major benefit of motion control. It might take a person a minute or two to hand crank a machine a short distance and align it with the scale. A typical servo performs the same task in a fraction of a second, and a servo system lets the machine repeat the task accurately day after day without the need for constant manual measurement. • Servo – Any actuator mechanism with a feedback sensor that, when combined with a controller, can regulate position. Coordinating two or more precision axes of motion, as is required in a metal cutting machine tool, is impossible for a human operator with manual hand cranks, but easily
done with electronically controlled servos. Position regulation is another benefit. As will be seen shortly, a servo will return an axis to position when an outside force has moved the axis. • Axis of motion – A principle direction along which motion occurs. A single motor coupled to a mechanical actuator and feedback device may be referred to as an axis of motion.
Feedback The feedback device can be considered the eyes of the system; it provides information to the control system about the velocity and the position of the load in an axis of motion. Motion control systems use many different types of feedback devices, which can be analog or digital and come in both incremental and absolute configurations. Both incremental and absolute types of devices can track position changes: the difference is in how they respond to a loss of power. Absolute devices can determine their position on power up, providing the axis was calibrated during start up. Incremental devices will lose their position and need to go through a homing sequence on power up. In this chapter, we will briefly discuss several of the more popular types of feedback devices.
Resolvers
Resolvers are analog devices relying on magnetic coupling to determine position. They do this by comparing the magnetic coupling of a rotating winding, the rotor, to that of two stationary windings, the stators (see Figure 32-1). This coupling varies with the angle of the shaft relative to the stators. As such, resolvers are rotary transformers and typically are interfaced with an analog-to-digital (A/D) converter to be used with a digital controller. These circuits typically provide 12 or 13 bits of resolution, although there are models with up to 16 bits of resolution. Resolvers are commonly used as velocity feedback devices in brushless DC (direct current) or ACPM (alternating current permanent magnet) machines. They are extremely rugged and provide absolute feedback over one revolution of their shafts. This makes them ideal for AC servomotors, as the absolute position within one revolution lets the drive know where the motor shaft is, which also allows the resolver to commutate the motor. Although this works well for commutation, it is less than ideal for absolute position feedback: an axis typically travels over more than one revolution of the motor shaft, so a single resolver loses its ability to act as an absolute device. If the application allows the axis to perform a homing routine on power up, this configuration offers the advantage of using a single feedback device, serving the drive for commutation and the controller for position and velocity. To achieve absolute positioning over multiple turns, use a dual resolver set or master vernier resolver set. Two resolvers are connected to the load, but each resolver is geared at a different ratio. By looking at the phase shift between the two resolvers, it is possible to determine the absolute position of the axis over multiple turns.
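As an illustration of the vernier principle, the sketch below assumes resolver A is geared 1:1 to the axis and resolver B is geared so it completes 127 turns for every 128 turns of A; the gear numbers are hypothetical, and real master vernier sets differ. The slowly accumulating phase difference then encodes the absolute turn count.

```python
def absolute_turns(angle_a, angle_b, n=128):
    """Recover absolute position, in turns of resolver A, over n turns.

    angle_a, angle_b: shaft angles as fractions of a turn in [0, 1),
    with resolver B geared (n-1):n relative to resolver A.
    """
    delta = (angle_a - angle_b) % 1.0   # phase shift cycles once over n turns
    coarse = delta * n                  # coarse absolute position in turns
    turns = round(coarse - angle_a)     # snap to the consistent whole turn
    return turns + angle_a              # whole turns plus fine angle

# 37.25 axis turns: A reads 0.25; B reads frac(37.25 * 127 / 128).
print(absolute_turns(0.25, (37.25 * 127 / 128) % 1.0))   # 37.25
```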
Magnetostrictive Transducers Magnetostrictive transducers are unique due to the noncontact nature of this type of feedback device, which makes them ideal for linear hydraulic applications. Magnetostrictive transducers operate in a manner similar to sonar. A sensing magnet is placed on or, in the case of a hydraulic cylinder, inside the actual load. The magnetostrictive transducer sends out a pulse to the moving magnet, which causes a mechanical strain that is conducted back to the unit. The time it takes for the strain to conduct back determines the position of the axis. These devices are absolute and must be used in linear applications. The downside to these devices is their limited resolution, typically only several hundred counts per inch with moderate accuracy.
Encoders Encoders are extremely popular in motion control applications. They are digital, relatively inexpensive, and often have very high resolution and accuracy. Encoders come in both incremental and absolute configurations and are available from many vendors. Incremental encoders are extremely popular and are used in many motion control applications. They are basically discs with slots cut in them and a through-beam photo sensor (see Figure 32-2). This configuration creates a pulse train as the encoder shaft is turned. The controller counts the pulse train and thereby determines how far the axis has traveled from a known position. This known position is determined at power up by going through a homing sequence. Typically there are two photo detectors, channel A
and B, set 90 degrees apart. By looking at which channel rises first, the controller determines the direction of travel. The price of incremental encoders is primarily determined by their resolution. Incremental encoders of 1000 counts or less are very inexpensive. Incremental encoders with more than 10,000 counts are considerably more expensive. Specialty encoders with greater than 50,000 counts per revolution are available but are very expensive.
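The direction logic of the A and B channels can be captured in a small state table: each valid transition of the 2-bit (A, B) state adds or subtracts one count. A minimal sketch (sampled channels are assumed; real controllers decode this in dedicated hardware):

```python
# Valid quadrature transitions of the 2-bit (A, B) state; one entry per
# quarter cycle. Missing keys are treated as no-ops (missed transitions).
QUAD_TABLE = {
    (0b00, 0b01): +1, (0b01, 0b11): +1, (0b11, 0b10): +1, (0b10, 0b00): +1,
    (0b00, 0b10): -1, (0b10, 0b11): -1, (0b11, 0b01): -1, (0b01, 0b00): -1,
}

def count_position(samples):
    """Accumulate position counts from a sequence of (A, B) samples."""
    position = 0
    prev = (samples[0][0] << 1) | samples[0][1]
    for a, b in samples[1:]:
        state = (a << 1) | b
        position += QUAD_TABLE.get((prev, state), 0)  # ignore invalid moves
        prev = state
    return position

# One full electrical cycle in one direction yields four counts:
print(count_position([(0, 0), (0, 1), (1, 1), (1, 0), (0, 0)]))   # 4
```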
Absolute encoders, as their name suggests, are absolute position reference sensors (see Figure 32-3). Instead of creating a pulse train, an absolute encoder uses a disc that reveals a unique binary or Gray code for each position of the disc in 360 degrees of rotation. Absolute encoders can be either single turn or multiturn. A single turn absolute encoder, like a resolver, gives an absolute position over one turn of its shaft. A multiturn absolute encoder incorporates an integrated gear that is encoded so that the number of turns can be recorded. Like incremental encoders, the higher the resolution, the more expensive the encoder is. Typical systems use absolute encoders with 12 bits (4,096 counts) or 13 bits (8,192 counts) per revolution. In addition, a multiturn absolute encoder will record 12 or 13 bits of revolutions. Typically a multiturn encoder will be described by the resolution bits plus the revolution bits; therefore, a 13-bit (8,192-count) resolution disc with the ability to track 12 bits (4,096) of revolutions will be referred to as a 25-bit multiturn absolute encoder. Many of today’s absolute encoders are programmable and sit on a variety of industrial networks, such as Actuator Sensor Interface (AS-i), Controller Area Network (CAN), DeviceNet, or PROFINET.
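A 25-bit multiturn position word of this kind can be split back into whole turns and angle with simple bit operations. The low/high bit layout below is an assumption for illustration; actual framing is vendor- and network-specific.

```python
def decode_multiturn(word, resolution_bits=13, turn_bits=12):
    """Split a 25-bit multiturn encoder word into (turns, degrees)."""
    counts_per_turn = 1 << resolution_bits           # 8,192 counts per revolution
    angle_counts = word & (counts_per_turn - 1)      # low bits: angle in the turn
    turns = (word >> resolution_bits) & ((1 << turn_bits) - 1)  # high bits: turns
    return turns, 360.0 * angle_counts / counts_per_turn

print(decode_multiturn((100 << 13) | 2048))   # (100, 90.0): turn 100, 90 degrees
```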
Linear encoders are incremental encoders that have been rolled out along the length of an axis. Also known as glass scales, these encoders have been used for decades in machine tool applications. A reader placed on the moving axis picks up the pulse train generated by moving along the glass scale. With the increasing use of linear motors, this form of encoder is seeing more and more use. We are also seeing rapid growth in the use of magnetic encoders, better known as sin cos encoders. These encoders are similar to a resolver in that they use a magnetic field to determine position. Sin cos encoders have two sensors that are 90 degrees apart and permanent magnets on the rotor to develop a sine and a cosine signal. This enables the encoder to detect the direction of rotation and, through interpolation, the position of the rotor. Modern interfaces, such as Serial Synchronous Interface (SSI) and Hiperface DSL® from SICK, allow very high resolutions from these encoders. Typically, 15 or 16 bits of resolution are available, with some encoders providing up to 23 bits of resolution. This allows anywhere from tens of thousands of counts to several million counts per revolution. Sin cos encoders are also typically capable of 12 bits of resolution with absolute feedback up to 4,096 turns. The high resolution and absolute capability of these encoders are encouraging wider adoption in industry.
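The interpolation step amounts to taking the arctangent of the two track signals. A minimal sketch, assuming the sine and cosine voltages have already been digitized and normalized:

```python
import math

def sincos_angle(sin_v, cos_v):
    """Interpolated shaft angle, in degrees, from sine and cosine tracks."""
    return math.degrees(math.atan2(sin_v, cos_v)) % 360.0

print(sincos_angle(1.0, 0.0))    # 90.0
print(sincos_angle(-0.5, -0.5))  # 225.0
```

Because the tracks are continuous, the achievable resolution is set by the converter and the noise floor rather than by a line count, which is how these encoders reach millions of counts per revolution.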
Other Feedback Devices There are many other types of feedback devices that we have not discussed. These include linear variable displacement transducers (LVDTs), laser interferometers, synchros, and even Hall effect devices. These devices, while in use today, are not as prevalent as the previously described types of feedback in industrial systems. LVDTs still see a lot of use in aerospace applications, and laser interferometers are used in very high-precision applications. Today, using special interfaces and encoders with sine and cosine signals, it is possible to achieve 4 million counts per revolution! Many servo drive and motor vendors offer this technology, which is patented by a company in
Germany.
Actuators The actuator takes the command from the controller and moves the axis. Based on the signal coming from the feedback device, the controller will command the actuator to move the axis at a particular velocity until it comes to the desired position. The actuator provides the means of accelerating and decelerating the axis and maintaining its velocity and position. The actuator can be considered the muscles of the motion control system and can be pneumatic, hydraulic, or electric.
Pneumatic
Pneumatic systems employ compressed air under high pressure to move an axis. Compressed air is held in a tank and released into an expandable chamber with a rod attached to it. The air is released into the chamber through the use of an electrically operated valve. As the air expands in the chamber, it pushes the rod forward. The rod can be pulled back in either through a second chamber on the other side of the rod or through a mechanical mechanism, such as a spring. Due to the compressibility of a gas, pneumatic systems are usually not stiff enough for typical industrial motion control applications. They have limited use in specialty robotics or are used to position point-to-point systems, such as a flipper or diverter. Typically, these systems are open loop, relying on a mechanical or proximity switch to tell them that the axis is in position.
Hydraulic
Hydraulic systems are used when great force is required to move the axis and its load. Like pneumatic systems, hydraulic systems employ a pump and a valve, but in this case, a liquid is used. The liquid is incompressible; thus, the system is extremely stiff when tuned correctly. This liquid, hydraulic fluid, is made up of many different chemicals and is typically toxic to people and the environment. Today, water/glycol and other plant- and animal-based oils are used as nontoxic hydraulic fluids. These new fluids are environmentally friendly; however, they cannot simply be used as a one-for-one replacement for petroleum-based fluids. Typically, they require special gaskets and changes to the hydraulic system. They do not have the same temperature tolerance, oxidation stability, hydrolytic stability, and longevity as their petroleum-based counterparts, and they typically cost more. Given increasing government regulations around toxic substances and their disposal, these new eco-friendly fluids are worth a look. Proper equipment maintenance and proper storage and handling are therefore very important,
which increases the ongoing costs of the system. However, there are applications that simply must use this actuator due to its capability to provide tremendous force on demand. Typical applications use a magnetostrictive transducer for feedback and a hydraulic actuator to move large loads. For example, off-road construction machinery and other heavy machinery require hydraulic actuators due to high load requirements.
Electric
Electric systems consisting of a servomotor and drive are widely used in industry for motion control applications. An axis of motion might consist of a permanent magnet DC or brushless AC motor, an integrated feedback device, and a motor controller. Today, the performance of variable frequency drives and induction motors has increased tremendously, and in some simple applications they have been used to control the positioning of axes of motion. High-precision applications still use servomotors over standard drives; however, the two technologies are evolving together. Today, many servoamplifiers can run synchronous motors, permanent magnet AC motors, or standard induction motors. Linear motors are also finding increased application outside of the traditional semiconductor equipment they have historically been associated with. A linear motor is basically a synchronous motor rolled out on a flat plane. Linear motors are capable of incredible acceleration, high force, and millionth-of-an-inch positioning.
Electric Motors
In the motion control industry, the two most popular types of motors used are brushless servomotors and stepping motors. Because stepping motors can be controlled with two digital switches and rarely require feedback, they are a low-cost alternative to a brushless servomotor and amplifier (drive). Both systems can be digitally controlled, but the stepping motor is less complex. Because most stepping motors are constructed with 200 steps per rotation, motion programs are reduced to simple instructions based on 1 pulse equaling 1.8 degrees of rotation or, in combination with a 5:1 pitch lead screw, 1 step equaling 0.001 inch of linear travel. Some motion control applications do not require positioning. Variable frequency drives run electric motors at a speed proportional to the input signal. For turning a drill or milling cutter, this is sufficient. When you need better speed regulation, use velocity feedback to close a velocity loop. The velocity regulation is determined by how good the velocity feedback device is. More demanding applications use servo drives and motors, which are typically sold as a set from a particular vendor. This allows the motor to be precisely matched to the drive
to extract the maximum performance from the system. Permanent magnet DC and brushless AC are the most common types of servomotors.
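The stepper arithmetic above (200 steps per revolution and a 5:1 pitch lead screw) reduces to a pair of one-line conversions; a sketch using those figures from the text:

```python
STEPS_PER_REV = 200     # full steps per motor revolution: 1.8 degrees per step
TURNS_PER_INCH = 5      # 5:1 pitch lead screw: 5 motor turns per inch of travel

def inches_to_steps(inches):
    """Linear travel to step count: 1,000 steps per inch."""
    return round(inches * STEPS_PER_REV * TURNS_PER_INCH)

def steps_to_inches(steps):
    """Step count to linear travel: 0.001 inch per step."""
    return steps / (STEPS_PER_REV * TURNS_PER_INCH)

print(inches_to_steps(0.250))   # 250 steps for a quarter inch
print(steps_to_inches(1))       # 0.001 inch per step
```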
Permanent Magnet DC
Unlike AC motors, which have only one active magnetic source, DC motors are constructed with two active magnetic sources interacting to cause rotation (see Figure 32-4). In the traditional DC motor, the stationary magnetic part of the motor is called the field winding (comparable to the stator in AC motors), and the rotating part of the motor is the rotor. You must replace the brushes in DC servomotors after extended running time due to abrasion caused by the brushes contacting the moving rotor. The brush commutator also suffers from an effect called arc over. This occurs when the speed of the motor becomes great enough to induce a spark from one plate to another on the commutator. Due to the wound rotor, DC motors tend to have poor heat dissipation. Still, they are relatively inexpensive and require fewer electronics than AC servomotors do. Permanent magnets can be substituted for the stator in small motors, eliminating the need for field control, or magnets can be glued to a steel rotor, eliminating the use of brushes; the latter is the most common for industrial motion control applications.
Brushless DC A brushless DC motor is the same as a permanent magnet DC motor turned inside out, which produces some useful differences. By moving the windings to the outside, the overall thermal performance of the motor is improved. Moving the magnets to the rotor eliminates the brushes, so brush arcing and wear are eliminated as well. Instead, the motor is commutated electronically by providing phase-shifted currents timed to each phase of the motor. Because the most common configuration is three-phase, the winding and timing are 120 degrees apart in a three-phase Y circuit, which is identical to an AC induction motor. This creates a lot of confusion for users, but the main difference between an AC induction motor and a DC brushless motor is that the DC motor has a
second magnetic field made from permanent magnets. Because of the additional magnetic field, the DC brushless motor has increased energy density. For the same size motor, a DC brushless motor will always produce more torque than a brush DC or AC induction motor. Another important benefit of DC brushless technology is that the motors are very efficient over a wider operating range. DC brush motors and AC motors have a very narrow range of efficiency.
Controllers
The controller is the brain of the system. Its commands can be entered manually or downloaded. Manual entry occurs through the operator’s station (keyboard, switches, etc.). The earliest controllers used punched paper tape to enter prepared commands, but almost any computer-friendly medium is possible now, as is downloading from a host computer and even a web browser. The controller executes the program, produces the command signal, reads the feedback, compares it to the command, and decides the action that needs to be signaled to the amplifier/motor to bring the difference to zero. The motion controller has evolved over time. It has gone from a dedicated unit that interfaced with other CPUs on the machine, typically a PLC, to becoming integrated with the PLC. Some vendors have added limited PLC functionality to their motion controllers, while others have integrated the motion controller into the PLC. This has led to many improvements to motion systems, such as eliminating the need to program two CPUs with different languages and to create handshaking routines to deal with two asynchronous processors. A single software package and single CPU make it easier to program and troubleshoot a modern system. The integrated controllers can be either resident on a PC or in a traditional dedicated hardware package. Either way, today’s user will be able to use less space, have fewer points of failure, and use less wiring than with the traditional system of a PLC and a dedicated motion controller.
Servos A servo is the combination of the three components described above. Its basic block diagram has two elements as shown in Figure 32-5. The summing network that subtracts the feedback number from the digital command, thereby generating an error, is part of the controller.
The amplifier consists of the drive and motor. The feedback device that measures the motion is attached to the physical motor output or to the load. The error signal from the summing network to the drive can be either digital or analog, depending on the drive used. The drive is designed to run the motor at a velocity proportional to this error signal.
The beauty of a servo is that the error must be zero for the motor to be at rest. This means the motor will continue to drive the load in the proper direction until the command and feedback are equal. Only then will it remain at rest. For positioning servos, this has the added benefit that if some external force disturbs the load, the resulting position loop error will force the motor back into position. Servo motion control systems typically consist of three cascaded loops. The innermost loop is the current, or torque, loop. This loop is typically set by the vendor of the drive/motor package and controls the amount of torque produced by the motor. Torque is produced by current: the more current in the motor, the more torque is produced. The amount of torque produced is based on the Kt constant of the motor, which is measured in units of torque per ampere. The torque produced by the motor can be calculated as T = Kt * A, where T is the torque and A is the current (in amperes) going to the motor. This loop always resides in the drive. The velocity loop is the second loop. This loop takes a velocity command and feeds it into the torque loop to control acceleration or deceleration. This loop can reside in either the drive or the motion controller, depending on what mode the drive is operating in. If the drive is set to torque mode, then the velocity loop resides in the controller. This means the controller will give the drive a torque command. If the drive is in velocity mode, then the drive handles the velocity loop. In this configuration, the controller produces a velocity command for the drive. The velocity loop must be tuned by the user, and it determines the ability of the system to accurately follow the velocity command and to overcome disturbances to the velocity of the axis. The velocity loop must always be properly tuned before the user attempts to tune the position loop. The position loop is the outermost loop, and it always resides in the motion controller.
This loop is the final loop the application engineer must tune. The methods employed to tune a position loop are similar to the methods used to tune any other loop. In the end, the user typically verifies performance by looking at the following error. The following error is the difference between the command and the actual position. Typically, the following error will jump when an axis accelerates or decelerates, and it tends to become smaller when the axis is at steady speed. By using velocity feedforward, you can minimize the jump in position error when changing speeds. The smaller the following error is at speed, the hotter the axis is tuned. Depending on the application, this can be good or bad. In a point-to-point move, an extremely hot system will tend to overshoot, which would not be desirable. For an application requiring two axes to be synchronized, an electronic gearing or camming situation, it is desirable to minimize following error. It is up to the user to determine the best method of tuning the axis and what level of performance is expected.
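One update of the cascaded structure can be sketched as follows. Proportional-only loops and the names used are simplifications for illustration; real drives use PI (or richer) loops, limits, and filters, and the gains here are assumptions the user would establish while tuning.

```python
def cascaded_update(pos_cmd, vel_ff, pos_fb, vel_fb, kp_pos, kp_vel, kt):
    """Position loop -> velocity loop -> torque (current) loop, one sample.

    vel_ff is the velocity feedforward term derived from the command
    profile; it reduces the following error during speed changes.
    """
    following_error = pos_cmd - pos_fb            # what the user watches while tuning
    vel_cmd = kp_pos * following_error + vel_ff   # position loop output
    current_cmd = kp_vel * (vel_cmd - vel_fb)     # velocity loop output, in amperes
    torque = kt * current_cmd                     # innermost loop: T = Kt * A
    return following_error, torque
```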
Feedback Placement When the feedback is directly on the motor shaft (the usual case), the shaft is exactly where the servo is being commanded to go when the system comes to rest. Because there may be a coupling, gearbox, rotary-to-linear converter, or other element between the motor and the actual load, the load may not be perfectly in position. Backlash and wind-up are the two main culprits explaining the discrepancy, so the mechanical design has to keep these below the maximum error allowed. An alternative is to place the feedback on the load itself, but now the backlash and wind-up are within the servo loop and may present stability problems. There are notch filters and other compensation networks that can alleviate some of these stability problems, but they are not for the novice. A solid mechanical design is paramount.
Multiple Axes Most applications require multiple axes, so the coordination of those axes becomes a major issue. Most “setup” type applications allow the axes to run independently and reach their positions at different times. For instance, corrugated box making requires that dozens of axes be moved during setup to accommodate the different box sizes. These axes will determine where the cuts, creases, and printing occur. They can all be moved independently. However, once the machine starts running the boxes, any movements must be coordinated. Historically, gears and cams coordinated these motions mechanically, but electronics is taking over many functions.
Leader/Follower Many machines (automotive transfer lines or cereal packaging) were originally designed with a line shaft that ran the length of the machine. Each station where particular operations occurred had one or more cams attached to the shaft to synchronize the operations within the station and with other stations. The shape of the cam dictated the movement that any particular axis had to make. The entire line was controlled by changing the speed of the line shaft. The electronic leader/follower is replacing these mechanisms. Each axis now has a servo, and its command is generated from a data table. The data table for each axis contains the position needed for that axis at each count position of the leader. The leader could be a counter that allows you to vary the count rate to simulate the line speed changing. This “electronic cam” arrangement allows many axes to be synchronized to a variable leader.
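An electronic cam is, at bottom, a table lookup with interpolation: for each leader count, the follower's commanded position is read (or interpolated) from the data table. A minimal sketch with a hypothetical five-point table:

```python
import bisect

# Hypothetical cam table: (leader count, follower position) breakpoints.
CAM_TABLE = [(0, 0.0), (250, 10.0), (500, 10.0), (750, 4.0), (1000, 0.0)]

def follower_position(leader_count):
    """Linearly interpolate the follower command from the cam table."""
    counts = [c for c, _ in CAM_TABLE]
    i = bisect.bisect_right(counts, leader_count) - 1
    i = max(0, min(i, len(CAM_TABLE) - 2))          # clamp to a valid segment
    (c0, p0), (c1, p1) = CAM_TABLE[i], CAM_TABLE[i + 1]
    return p0 + (p1 - p0) * (leader_count - c0) / (c1 - c0)

print(follower_position(375))   # 10.0: dwell segment of the cam
print(follower_position(875))   # 2.0: halfway down the return stroke
```

Slowing or speeding the leader count rate scales the whole line together, just as changing line shaft speed did mechanically.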
Interpolation Linear/circular interpolation is a common way to coordinate two axes. It uses an algorithm that meters out movement increment commands to each axis servo on a fixed-time basis (1 millisecond is a typical time basis). Interpolation was incorporated in the earliest numerical controls for machine tools in the 1950s and is still in common use today. Parts that needed to be milled could get quite complex, but the theory was that any complex shape could be described with a series of straight lines and circles. The Electronic Industries Alliance, formerly the Electronic Industries Association, developed a programming standard (RS-274) for instructing the machines in the early 1960s. The linear/circular algorithm executed those instructions. For linear interpolation, it is only necessary to program the end point and velocity. The controller figures out how far each axis must move in every time increment to get it from its present location to the end point at the required speed. For circles, you must program the center of the circle from the present machine position, the end point on the circle, the velocity, and whether it should execute the circle in a clockwise or counterclockwise direction. The velocity can be changed by operator intervention without affecting the coordination, so the process can be slowed down or sped up as needed.
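The following sketch conveys the spirit of linear interpolation on a fixed-time basis for two axes; it is a simplified illustration (no acceleration ramps or lookahead), not the RS-274 executor itself.

```python
import math

def linear_move_increments(start, end, feed_rate, tick_s=0.001):
    """Yield (dx, dy) command increments, one per 1 ms control tick, that
    move the tool point from start to end in a straight line at feed_rate."""
    dx, dy = end[0] - start[0], end[1] - start[1]
    length = math.hypot(dx, dy)
    n_ticks = max(1, round(length / (feed_rate * tick_s)))
    for _ in range(n_ticks):
        yield dx / n_ticks, dy / n_ticks  # both axes share the same clock

# Move from (0, 0) to (30, 40) mm at 100 mm/s: the 50 mm path takes 0.5 s,
# so 500 ticks, each commanding (0.06, 0.08) mm to the two axis servos.
steps = list(linear_move_increments((0.0, 0.0), (30.0, 40.0), 100.0))
print(len(steps), steps[0])
```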
Performance The performance of a motion control axis is typically specified in bandwidth or in
response to a step input. Bandwidth refers to the frequency at which the servo loop output begins to fall off from its command. If you command a position loop servo with a sinusoidal (AC) command, the output will follow exactly at low frequencies. As you increase the frequency, a point is reached where the output begins to fall off, and it will do so rapidly with further frequency increases. The bandwidth is defined as the point where the output is 0.7071 of the input in amplitude (the −3 dB point). With typical industrial machinery, this is about 3 hertz (Hz). Amplifier/drive builders also use bandwidth to rate the performance of their products. This is the bandwidth of the velocity loop that they provide, not the position loop. When tied to a real machine, you can expect this velocity loop bandwidth to be in the 30 Hz ballpark. Many vendors might claim 100 Hz or more, but that is not possible on industrial machinery. Response to a step input is a measure of how fast the servo will get to its final state when a small command is initiated. You might recall that a spring/mass system gets to 63.2% of its final value in one time constant and follows a natural exponential curve. This time constant and the amount of overshoot (if any) are the values to consider for performance. A typical position loop servo on industrial machinery will have a time constant of 50 milliseconds. The amplifier/drive (velocity loop) will have about a 5-millisecond time constant.
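For a loop that behaves approximately like a first-order (single time constant) system, bandwidth and time constant are linked by f = 1/(2πτ), which reproduces the ballpark figures above:

```python
import math

def bandwidth_hz(tau_s):
    """Approximate -3 dB bandwidth of a first-order loop with time constant tau."""
    return 1.0 / (2.0 * math.pi * tau_s)

print(round(bandwidth_hz(0.050), 1))  # position loop, 50 ms -> ~3.2 Hz
print(round(bandwidth_hz(0.005), 1))  # velocity loop,  5 ms -> ~31.8 Hz
```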
Conclusion Motion control has offered many benefits in automating factories. It is a key enabling technology in the quest for high-speed high-precision manufacturing. Its use continues to expand faster than the economy grows as companies convert many applications to this technology. The vendors are also making controllers easier to apply and tune, thereby taking some of the previous mystique away.
Further Information Bullock, T. B. Servo Basics for the Layman. Fond du Lac, WI: Bull’s Eye Research, Inc., 1991. Younkin, G. W. Industrial Servo Control Systems: Fundamentals and Applications. 2nd ed., revised and expanded. New York: Marcel Dekker, Inc., 2002.
About the Authors Lee Lane is the vice president and general manager of the Safety, Sensing, and
Connectivity business for Rockwell Automation and the chief product security officer. He is responsible for the daily operations of the business, the creation and execution of the strategic plan, and meeting the key performance goals of the business. As chief product security officer, he is also responsible for the cybersecurity strategy for all of Rockwell Automation’s products and services. Lee has 26 years of experience with Rockwell Automation, including leadership roles in services, marketing, product management, business management, and product security. He began his career in 1991 as an application engineer in Portland, Maine, and has subsequently served in positions in Milwaukee, Wisconsin, and Cleveland, Ohio. He is now located in Chelmsford, Massachusetts. Lee is a graduate of the University of Maine, Orono, where he earned a bachelor’s degree in electrical engineering. He currently also serves as chairman for FDT Group AISBL, an international non-profit organization consisting of leading global member companies active in industrial automation that provides an open and nonproprietary standardization interface for the integration of field devices.
Steve Meyer has a bachelor’s degree in business administration from the University of Houston and 40 years of experience in the field of industrial control, motor control, application of servo systems, and electric motor R&D. Meyer continues to apply motor technology and explore how motors and controls will interact in machines and consumer devices in the Internet age.
33 Vision Systems By David Michael
Using a Vision System A vision system is a perception system that provides local information useful for measurement and control. It is an implementation of visual sensing used to monitor, detect, identify, recognize, and gauge objects in the vicinity of the system. Visual perception is the ability to see and interpret visual information. Fundamentally, a vision system involves generating images of the environment and then using a computer to discover from the images what objects are there, where they are, what their dimensions are, and how well the objects visually meet our expectations.
A vision system should be used when visual sensing is required or beneficial. Sensing provides data about the environment that is needed for action. Visual sensing is a powerful sensing method because it is fast, nondestructive, requires no contact, and can generate a great deal of information. A vision system is but one way to provide perception to machines. Other methods of perception that are also nondestructive include interpreting sound, temperature, radio frequency, and electric fields, as well as detecting chemical vapor presence. Destructive sensing methods, such as touch, deep chemical analysis, or mechanical/crash testing, can perturb an object or its environment. Visual sensing isn’t just powerful; it is also popular. In 2016, the global market for machine vision systems was estimated at USD 9.1 billion, and the market is projected to grow to over USD 19 billion by 2025.1 Its popularity is primarily due to the far-reaching and useful applications of this technology. Another reason for its popularity is that people have good intuition about what a vision system can do from what they see in the world (see Figure 33-1).
Vision System Components A vision system consists of several separate or integrated components working together. Its parts include cameras and lenses, illumination sources, mounting and mechanical fixturing hardware, computational or electronic processing hardware, input and output (I/O) connectivity, and electrical cabling hardware. Most important, however, is its visual sensing software, which provides useful data to the measurement and control system (see Figure 33-2).
Cameras and lenses are used to form and collect images. Critical imaging parameters will be discussed later in this chapter. The illumination sources shine light to make the object under observation visible. Mechanical mounting and fixturing hardware secure the vision system in the right place and ensure that the image collection apparatus remains free from vibration, motion, or drift. Well-designed mechanical hardware allows worn parts or broken cameras to be replaced without requiring a complete system rebuild. The I/O connectivity hardware and cables provide power to components that need it, connect the cameras to the processing elements, and electrically connect the vision system’s communication ports to transceivers that need to receive the visual information. The transceivers receiving the visual information are typically process control systems, factory information systems, and/or human-machine interfaces (HMIs). The vision system’s most complex element is its visual sensing software, which applies artificial intelligence and computer vision analysis techniques to guidance, inspection, recognition, and measurement tasks. The vision system senses its environment and reports the visual data that it senses over the I/O channel to an HMI or to a process control system. The visual data, for example, can be discrete, such as pass/fail, a number representing quality for sorting, a label describing the object recognized, or a measurement. A vision system must communicate the result of its inspection to an operator or electronic controller that knows what to do with the data. The mechanism can be toggling a single bit, accessing a polled register in a Modbus programmable logic controller (PLC) chain, or sending an integer or floating-
point value, or a text string via a real-time industrial Ethernet protocol, to name a few methods. For an operator, it can be a simple light, a number, a label, or an audio cue. Sometimes, the output needs to be more sophisticated than simple I/O; it may need to be tied into a relational database management system (RDBMS) where analytics are performed on that inspection and all the others, for an overall process view. Open Platform Communications (OPC) is a common method for large data back ends.
Vision Systems Tasks in Industrial/Manufacturing/Logistics Environments
In an industrial environment, a vision system is typically used for automation in the following four applications (see Figure 33-3 for examples):
1. Guide industrial robots so that they can see what they are picking up and see where to place what they pick up (e.g., in an automated assembly)
2. Inspect items being processed or manufactured to ensure correct manufacturing or to confirm the absence of defects in a quality inspection
3. Identify and recognize parts for inventory tracking, synchronization, and sorting
4. Measure and gauge physical features and dimensions (e.g., providing feedback control for an automated process)
Guidance Vision-guided robotics (VGR) usually involves a robot picking up something from one location and placing it at a second location. The vision system first needs to identify the
object to pick up. Next, it determines where that object is located in space relative to the vision system. Last, it estimates the position of the object relative to the position of the robot so that the robot can pick the object up. Additional information and steps may be required for planning the robot’s path. Similar steps are needed for placing the object where it goes after picking it up. Typically, robot guidance requires the vision system camera to be mounted near the robot, on the robot, or even inside the robot. In order to guide a robot accurately, the vision system performs its calculations in the coordinate space of the robot, which requires a robot hand-eye calibration step during vision system implementation.
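The hand-eye calibration step ultimately yields a transform from camera coordinates into robot coordinates. A minimal 2-D rigid-body version is sketched below; in practice the rotation and translation come out of the calibration procedure, and the values shown are placeholders.

```python
import math

def camera_to_robot(x_cam, y_cam, theta_rad, tx, ty):
    """Map a 2-D point from camera space into robot space using the
    rotation (theta_rad) and translation (tx, ty) found by calibration."""
    x_rob = math.cos(theta_rad) * x_cam - math.sin(theta_rad) * y_cam + tx
    y_rob = math.sin(theta_rad) * x_cam + math.cos(theta_rad) * y_cam + ty
    return x_rob, y_rob

# A part seen at (12.5, 3.0) mm in camera space, with the camera rotated
# 90 degrees and offset (200, 50) mm from the robot origin:
print(camera_to_robot(12.5, 3.0, math.pi / 2, 200.0, 50.0))
```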
Inspection Inspection is about visually determining correctness. Simple inspection tasks may be determining presence or absence. More typically, it involves comparing the object being inspected with one that has no defects and judging whether the visual differences in size or appearance between the two are meaningful. Often, that comparison is predicated on accurately recognizing, aligning, and re-rendering the object being inspected to match as closely as possible the one without defects. Judging whether differences are meaningful and classifying those differences as defects may require statistical descriptions of objects. These statistical descriptions can be generated from multiple examples of the objects being inspected. Machine-learning methods are also useful for this task.
Identification Identification can involve reading a code, reading text, or identifying something from its color, texture, or shape. Reading a one-dimensional (1-D) barcode or two-dimensional (2-D) barcode is typically accomplished by either a fixed-mount or hand-held vision system. The illumination and imaging system will be quite different if the barcode is well-printed black ink on a white label or if the code has been dot-peened into a piece of metal or marked microscopically onto a fastener. Reading a code usually involves careful imaging, finding the code, decoding the code, and sometimes verifying readability of the code.
Measuring and Gauging Vision systems are used to take physical measurements, such as position, length, width, depth, area, or volume. They are used to take specific measurements, such as the thickness of an applied coating, the diameter of a drilled hole, or the roughness of a surface. Typically, the first step in taking the measurement is visually fixturing what needs to be measured. The accuracy and repeatability of the measurement will often
depend on how well the fixturing step is accomplished. Next, the measurement is taken, typically by locating 2-D or 3-D features in the fixtured part using geometric pattern matching, edge finding, shape fitting, or other techniques from image understanding and computer vision.
Implementing a Vision System
Implementing a vision system entails eight stages of assessment, installation, testing, and diagnostics. These stages are illustrated in Figure 33-4 and discussed in the following sections. The process typically takes months and requires using process engineers and system engineers familiar with vision systems for scoping and successful deployment. Technicians and manufacturing engineers often perform the required post-deployment and continued maintenance.
Requirements/Feasibility/Selection The vision system selection process begins with determining feasibility. This includes collecting or defining requirements and identifying applications and business issues. Application issues include understanding details about the object being inspected and
the system being controlled. Business issues include responsibilities, warranties, and contractual issues. Determining feasibility means figuring out whether the system is technically possible and what the risks and costs are. Generating a formal feasibility study is advisable if the vision task has never been done before. Determining feasibility can be done before, or simultaneously with, vendor selection. It may even be a key part of vendor selection. Once the feasibility is determined, the vendor is selected and design can begin. Depending on internal expertise and the complexity of requirements, the vision system can be purchased through a systems integrator, a distributor, or directly from the vision system manufacturer.
Physical Setup The physical setup of the vision system and hardware installation requires designing, procuring, and installing the mechanical fixturing hardware, cameras, lenses, illumination, processor, and electrical cabling for I/O and power. The components need to fit within environmental constraints, including space, vibration, motion, humidity, sterility, temperature, ambient light, radio frequency (RF)/electrical noise, and available power.
Software Installation After the physical setup stage, the software needs to be installed and configured. Vision software performs a number of tasks, such as guidance, inspection, identification, and measurement. In addition to the vision tasks, the communications, reporting, and HMI need to be configured. Some vision systems will require programming or software system development in addition to configuration, depending on the complexity of the task and the capability of the purchased system.
Calibration Oftentimes, cameras, lenses, lights, and even the overall system need to be calibrated. Calibration converts the pixel-based raw measurements coming from the camera(s) into real-world units such as mm or mm². Approximate values for calibration can begin with camera and lens nominal specifications. The camera and lens calibration procedure(s) estimate and compensate for lens distortion, lens mounting, focus settings, camera manufacturing variation, and camera mounting. Photometric calibration can estimate and compensate for illumination and color variation. Robot hand-eye calibration is necessary for estimating the coordinate space of the robot or motion stage from the coordinate space of the vision system.
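As a simplified illustration of the pixel-to-real-world conversion (ignoring the lens distortion that a full calibration also models), a scale factor can be derived from a calibration target of known physical size:

```python
def mm_per_pixel(target_size_mm, target_size_px):
    """Scale factor from a calibration target of known physical size."""
    return target_size_mm / target_size_px

scale = mm_per_pixel(10.0, 400.0)   # a 10 mm target spans 400 pixels
hole_diameter_mm = 92.0 * scale     # so a 92-pixel hole measures 2.3 mm
print(hole_diameter_mm)
```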
Testing/Validation A test plan should be developed, approved, executed, and then signed off during this phase. The amount of time required for the testing/validation phase depends on the risks associated with the manufacturing processes involved and the risks of deploying the vision system. The risk associated with the manufacturing process may relate to how difficult it is to control the automation process. The risks associated with the vision system may relate to the difficulty of detecting the smallest or lowest-contrast feature or determining the maximum vision system operating speed. This phase typically takes anywhere from a few hours to a few weeks.
Deployment At this stage, the system should be started up and put in production doing its work and communicating the visual information. Technicians/users need to be trained immediately prior to this stage. Sometimes a final acceptance test will be required as well.
Diagnostic Monitoring
Once the system is deployed in production, a secondary output is diagnostic monitoring. The idea is to monitor the health of the vision system and collect both normal performance statistics and anomalous data. The vision system should transmit and save problem images, accumulate and update throughput data, and report overall yield. This data can be used to spot trends and to predict problems either in the vision system or in the process being controlled.
Maintenance/Life-Cycle Management The last phase is maintenance and life-cycle management. Maintenance is standard in any vision system in production. The details depend on system specifics. Many vision systems have no moving parts, so they don’t need lubrication; however, cameras and illumination sources sometimes go bad and need to be replaced, or get dirty and bumped and need to be cleaned and regularly recalibrated. Mechanical parts of the system wear, and as they wear, they change characteristics of the manufacturing line that may need increased tolerances and adjustments or even programming changes to handle new variations. Decommissioning and system replacement depend on the specifics, but a 5-year lifetime is probably typical.
What Can the Camera See?
A vision system can only measure or sense what it can see. The object of interest must be visible to the camera in size, focus, and fineness of detail. For objects in motion, a strobe flash or an electronic camera shutter captures them as if momentarily at rest. Measuring successful heat treatment or detecting mechanical wear or friction may require seeing with a thermal camera. Conventional visible-light cameras use a lens to project light rays from the scene onto a photosensitive imager, recording the position of objects in the x and y dimensions. In such a conventional camera, the information about the distance of an object to the camera isn’t present. Increasingly, 3-D cameras are used to record the distance of an object to the camera. Objects are imaged with 3-D cameras in x, y, and z dimensions, and visual processing can be accomplished in 3-D.
Image Size and Camera Resolution
A vision system image must be large enough to span the object and the object needs to be in focus. Image size, when magnified or reduced by the chosen camera lens from a given distance, must be large enough so that the object is visible. The selected camera lens needs to support that magnification and focus, pass enough light to the camera, have sufficient resolving power, and have low enough distortion at the desired wavelengths of light. The camera’s imager must have enough samples in each direction to give fine enough detail to accomplish the visual task. In a 2-D image, each sample in the image is called a picture element or a pixel. The number of pixels in the image is called the resolution and the maximum resolution in a system is a fixed characteristic of the camera (see the example image camera resolutions in Table 33-1). Another characteristic of a pixel is its dynamic range. For example, a pixel can take on values from 0–255, which can be represented by 8 bits, or 0–4095, which can be represented by 12 bits. In general, a wider dynamic range is necessary when measurements or inspection need to be made at more than one intensity level. Cameras for vision systems typically perform best when pixel values respond linearly to increases in light or depth (e.g., when something twice as bright has a pixel value twice as high).
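A quick way to sanity-check a camera choice against a task is the arithmetic below; the three-pixels-per-feature rule of thumb is an assumption for illustration, not a universal requirement.

```python
def required_pixels(fov_mm, smallest_feature_mm, pixels_per_feature=3):
    """Minimum pixels along one axis so the smallest feature of interest
    spans at least pixels_per_feature pixels in the image."""
    return int(fov_mm / smallest_feature_mm * pixels_per_feature)

# 100 mm field of view and 0.2 mm defects need 1,500 pixels, so a camera
# with 1,600 pixels along that axis would suffice.
print(required_pixels(100.0, 0.2))
```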
Illumination, Strobing, and Triggering
Illumination is critical for vision system tasks. Illumination varies based on the direction of the light from the object’s point of view. For example, light may come from the top of the object or from the sides. Illumination is also affected by the collimation of the light, either directional or diffuse, as well as the intensity, color, and polarization of the light. Light can be steady and continuous, or intermittent, as in strobing to freeze motion or to reduce power consumption and heat. A trigger or simple optical detector is often employed to detect the object and send a signal to the vision system when it would be an optimum moment to take the picture. Light interacts with surfaces by reflection, refraction, absorption, and scattering. Ideal lighting is dependent on the details of the vision task. Commonly used lights in vision systems are diffuse on-axis lights, dome lights, backlights, line lights, multiaxis lights, and structured lighting. Vision systems use many types of light sources, such as incandescent, fluorescent, halogen, and LEDs. LED illumination sources have gained popularity because they are cooler, have longer lives, require less maintenance, and have built-in controllers. Refer to Figure 33-5 for lighting examples.
Coaxial Lighting—Noncollimated and Collimated Coaxial lighting that is in line with the axis of the imaging camera has many uses. It makes flat, shiny things bright in the image. It is able to discriminate between small angle differences and texture on the surface of the object.
Backlight
A backlight silhouettes objects so that only the edge of the object is visible. The surface of an object disappears.
Domes and Continuous Diffused Illumination Domes and continuous diffused illumination sources are used to minimize texture or elevation changes on the surface of an object. They create a “flat field” and allow measurement of surface absorption.
Structured Lighting Structured lighting that projects specific patterns of light onto objects is often used to detect specific deviations and to allow the inference of object depth or Z height.
Multiaxis A combination of different lights is also common. This allows imaging problems to be solved that no single light type can handle alone.
Visible Light versus Thermography/Near Infrared
Vision systems work with different electromagnetic wavelengths and imaging modalities depending on needs. Visible light has wavelengths between 380 nm and 760 nm, including all the colors of the rainbow. Cameras sensitive to those wavelengths are the ones most commonly used for imaging, because what is visible in the images approximates human experience. Other frequencies from the electromagnetic spectrum and even other imaging modalities are used in vision systems as well. Thermal/near infrared (NIR)/short-wave infrared (SWIR) imaging uses wavelengths from 750–2,500 nm. Cameras that are sensitive to those frequencies are used to measure heat and can see better through haze, fog, and smoke. Thermal imaging is used for sensing metal smelting, monitoring furnaces, or imaging the processing of hot glass. Sometimes it is used for agricultural, pharmaceutical, or semiconductor inspection. It can also be used to make optically transparent coatings visible or to make optically opaque coatings transparent (see Figure 33-6).
2-D Image Data versus 3-D Image Data Conventional imaging involves cameras that take 2-D pictures. A conventional 2-D camera creates a picture or image where each picture element represents a color or intensity. The image is formed with plastic or glass lenses that use principles of ray optics. A 3-D camera uses different principles to create a 3-D picture. Each 3-D picture element typically encodes distance, depth, 3-D position, or shape, instead of or in addition to, color or intensity. Many visual tasks are better suited or more easily accomplished with 3-D images, such as finding 3-D locations, inspecting for 3-D defects, and making 3-D measurements. Several physical principles are used to create 3-D images in 3-D cameras. Examples of 3-D imaging techniques are as follows:
• Use principles of stereo triangulation with two or more conventional cameras
• Rely on structured-light illumination with one or more conventional cameras
• Use special electronic time-of-flight detection at each location in an image
• Measure focus or defocus at each location in an image
• Infer surface orientation by estimating object shading
• Estimate 3-D with light interferometry
• Reconstruct 3-D from light fields
The various imaging techniques create 3-D images with different specifications, especially with respect to image capture speed, accuracy, dynamic range, and noise when imaging different types of surfaces (see Figure 33-7).
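As one concrete example from the list above, stereo triangulation with a rectified camera pair reduces to Z = fB/d, where f is the focal length (in pixels), B the baseline between the cameras, and d the disparity of a feature between the two images. The numbers below are illustrative.

```python
def stereo_depth(focal_px, baseline_mm, disparity_px):
    """Depth of a feature from a rectified stereo pair: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_mm / disparity_px

# 1000-pixel focal length, 60 mm baseline, 12-pixel disparity -> 5000 mm.
print(stereo_depth(1000.0, 60.0, 12.0))
```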
Conclusion Vision systems are useful perception systems for measurement and control. They are powerful and popular, especially for automation tasks such as guidance, inspection, identification, and measurement. A vision system consists of many components, including cameras, illumination, lenses, computation, software, and mechanical hardware. Understanding these components is helpful when implementing a vision system.
Further Information Grand View Research, San Francisco, California, published a market analysis in May 2017. They estimated the global machine vision market size and its growth rate. More details are provided at: https://www.grandviewresearch.com/industry-analysis/machine-vision-market.
To learn more about computer vision, consult the following textbook:
• Szeliski, Richard. Computer Vision: Algorithms and Applications. London: Springer-Verlag, 2011.
The following are annual or semiannual academic technical conferences in computer vision sponsored by the Institute of Electrical and Electronics Engineers (IEEE) Computer Society and the Computer Vision Foundation. More information is provided at: http://www.cv-foundation.org/?page_id=100.
• Computer Vision and Pattern Recognition (CVPR)
• International Conference on Computer Vision (ICCV)
• European Conference on Computer Vision (ECCV)
For academic technical papers current in the field, see the following journals:
• IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI) (1979–present), IEEE Computer Society
• International Journal of Computer Vision (IJCV) (Volume 1/1987–Volume 124/2017), Springer U.S.
The following are biannual trade shows. Links for the trade shows are available here: www.visiononline.org.
• VISION, a biannual trade show in Stuttgart, Germany. https://www.messe-stuttgart.de/vision/en/.
• The Vision Show, a biannual trade show in Boston, Mass. https://www.visionshow.org.
• Automate, a biannual trade show in Chicago, Ill. http://www.automateshow.com.
About the Author David Michael, PhD, is a technologist known for his expertise in industrial computer vision and pattern recognition algorithms and applications. As director of Core Vision Tool Development, Michael leads the research and development of AI software for machine vision products at Cognex Corporation. Michael has written or co-authored over 50 issued U.S. patents in various aspects of machine vision, including 3-D vision, pose estimation, camera and robot calibration, image registration, color, image processing, and inspection.
He received a BSEE degree from Cornell University and MS and PhD degrees from the Massachusetts Institute of Technology. Michael resides in Wayland, Massachusetts, with his wife Monica and three children.
1. Grand View Research market analysis, http://www.grandviewresearch.com/industry-analysis/machine-vision-market.
34 Building Automation By John Lake, CAP
Introduction Today’s building automation systems (BASs) function to control the environment in order to maintain occupant comfort and indoor air quality, manage energy consumption, provide life-safety capability, and provide access to historical data. BASs are often combined with other smart systems to facilitate an interoperable control and monitoring system for the entire facility, campus, or multilocation buildings. From their humble beginnings around the turn of the last century, BASs have progressed from single-device control to fully computerized building control. This chapter is an overview of the basic structure of BASs.
Cloud computing has become more prevalent with the acceptance and implementation of open system protocols by many manufacturers. In addition, software vendors are now providing powerful display engines that can be combined with dashboards to become a powerful tool to communicate information. With the already massive amount of cloud storage increasing every day, wider bandwidth enables us to collect a significantly larger amount of information regarding the operation and maintenance of BASs and other connected open systems. There are many names for building automation systems. They vary by geographical area, building type, and industry or building purpose. Here are a few examples:
• Building automation system (BAS)
• Building management system (BMS)
• Facility management system (FMS)
• Facility management and control system (FMCS)
• Energy management system (EMS)
• Energy management and control system (EMCS)
Direct Digital Controls In the early 1980s, direct digital controls (DDCs) began using powerful microprocessors to accomplish programmed tasks; this enabled them to enter the commercial building arena. DDC controllers generally contain a library of specific algorithms to perform their intended functions, which enables them to provide an accurate, cost-effective approach to building control and management. The early DDC controllers were stand-alone with limited communication ability. When network capability was added to the DDC controllers at the end of the 1980s, a new path and a coordinated approach to building control and management was born. The DDC controller is the basic building block of modern BASs. Building block components of a typical BAS of today are:
• Building-level controllers
• Equipment-level controllers
• Plant-level controllers
• Unit-level controllers
Building Level (Network) Controller
The building network controller is more of a system-level router than a controller. It handles the network traffic and allows information sharing between other controllers. It provides global information, network real time of day, building schedules, and so on. It handles access to the BAS via high-speed Internet or dial-up. By contrast, in legacy systems the building-level controller drove the high-speed (48 kb–72 kb, depending on the vendor) proprietary network traffic, and it handled the slower-speed controller traffic (9,600–19,200 baud, also vendor specific). Dial-up systems are quite common.
Equipment-Level Controllers These controllers are designed to handle built-up systems and are available with varying point counts in order to manage the various control points that may be needed on a built-up system. Equipment-level controllers are designed for larger applications and point counts than typical unit-level controllers. They most often have their own enclosure and stand-alone capability. They can be placed in original equipment manufacturer (OEM) equipment at a vendor’s factory or in the field. Their point count can be expanded by adding additional input/output (I/O) modules. The controllers are backed up by a battery
so they will keep time and remember their last state in the event of a loss of power to the controller. Custom applications can be built by applying combinations of built-in libraries and custom programs. These controllers can have enough capacity added to be able to do all the work of the lower-level controllers.
Plant-Level Controllers Plant-level controllers are similar to equipment-level controllers, but they generally have a higher point capacity and routinely encompass multiple pieces of equipment to form a single system. These controllers are most commonly used in boiler and chiller plants; however, they are appropriate for other types of systems. Plant-level applications are frequently configured with a larger point count than equipment-level controllers, and the controller point capacity is adjusted by adding or removing expanded I/O modules. Like equipment-level controllers, plant-level controllers contain battery-backed real-time clocks to keep time and last state information in case of power failure. The programming capabilities are the same as those of the equipment-level controllers.
Unit-Level Controllers
A unit-level controller is synonymous with and more accurately described as an application-specific controller. An application-specific controller has a built-in “canned” program that is designed to control a specific piece of equipment. Here are just a few examples of the applications:
• Unitary zone control
• Single-zone rooftop units
• Terminal units (variable air volume (VAV) boxes, more than two dozen types)
• Fan coil units
• Single-zone condensing units
• Hotel room controllers
I/O I/O types and quantities vary from manufacturer to manufacturer; however, there are common types of I/O that all manufacturers provide with their controllers, such as:
• Analog input
• Analog output
• Digital input
• Digital output
• Pulse input
• Universal I/O
• Pulse width modulation (PWM) output
The inputs generally accept standard types of instruments, including resistance temperature detectors (RTDs), thermistors, 4–20 mA, 0–20 mA, 0–10 V, 0–5 V, pulse, and so on.
• The analog outputs are 0–10 V, 0–5 V, 4–20 mA, 0–20 mA, and pulse width modulation (PWM).
• The digital inputs can be dry contact and/or triac.
• The digital outputs can be dry contact and/or triac.
A routine programming task with analog inputs is scaling the raw signal into engineering units, as sketched below.
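Here is a minimal sketch of that scaling for a linear 4–20 mA input; the NAMUR-style out-of-range limits and the example span are illustrative assumptions.

```python
def scale_4_20ma(current_ma, eu_min, eu_max):
    """Convert a linear 4-20 mA analog input to engineering units."""
    if not 3.8 <= current_ma <= 20.5:   # outside NAMUR-style signal limits
        raise ValueError("signal out of range; check wiring or transmitter")
    return eu_min + (current_ma - 4.0) / 16.0 * (eu_max - eu_min)

# A duct temperature transmitter spanned 0-50 degC and reading 12 mA
# (mid-scale) works out to 25.0 degC.
print(scale_4_20ma(12.0, 0.0, 50.0))
```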
Many controllers have provisions for RS-485, RS-232, or USB serial inputs. These specialty inputs allow the data contained within a particular piece of equipment or hardware to be read by and written to the controller, providing greater visibility and controllability. Variable frequency drive (VFD) operational and configuration parameters and settings are some examples of the many types of specialty data collected for use by the BAS. Most of the time, gateways are required to extract the information contained within the equipment or hardware. Not all controllers have all the types of I/O listed, so the engineer must thoroughly evaluate the application and instrument selection during the project design. It is highly recommended that the product data sheets from the instrument manufacturers be consulted for compatibility with the controller prior to specification of the basis of design.
Legacy Systems Figure 34-1 illustrates a top-down architecture of a typical legacy system and contains a brief explanation of the major system components. It generally consists of an engineering workstation, sometimes referred to in older systems as the host computer, with peripherals, such as a high-resolution monitor, high-capacity hard disk, proprietary network card, printer, alarm printer, and phone modem. The engineering workstation or host computer serves as an offline programming device for the controllers, a graphic
display, and storage for programs, graphic files, user information, trend logs, and so on. Programs generated offline must be downloaded via the network to the controller before they will function. The systems can be accessed via a dial-up modem.
The proprietary network card connects to all building and equipment-level controllers. The building and equipment-level controllers then connect to plant-level and unit-level controllers and remote distributed I/O via RS-485 on a proprietary network protocol.
The remote I/O points connect to physical devices, such as valves, motor controls, temperature sensors, humidity sensors, flow sensors, pressure sensors, and so on. The programs for legacy systems reside in the equipment-level controllers or the stand-alone unit-level controllers where they are maintained in electrically erasable programmable read-only memory (EEPROM). A copy of the programs must be uploaded to the host computer so that current backup and archival copies of the program are maintained. Legacy BASs (systems provided prior to the Internet and open connectivity) all have their own individual proprietary protocols used for host-to-controller, controller-to-controller, and controller-to-end-device communication. The size of legacy systems varies from a single host and controller to a fully expanded BAS containing several thousand points.
Open Systems To have an open (or interoperable) system that enables multiple manufacturers to communicate with each other and give and receive commands, a common protocol needed to be developed and adhered to. There are three major open protocols in use with
BASs: LonWorks, the Building Automation and Control Network protocol (BACnet), and Modbus. LonWorks was developed by Echelon and consists of both software and hardware. The Neuron chip, primarily manufactured by Motorola and Toshiba, is the hardware that was designed and built for this purpose. LonWorks is a peer-to-peer network that is architecture agnostic. LonTalk (ANSI/CEA 709.1 and IEEE 1473-L) is the recognized open protocol. LonTalk is the LonWorks communication language and is capable of interfacing and communicating with both BACnet and Modbus.
Figure 34-2 is a high-level illustration of one of the many ways LonWorks can be designed for interoperability. The backbone of the system in this example is a high-speed Ethernet local area network (LAN) that communicates with as many automation servers as necessary to distribute the control system points to their lowest level. Each automation server can handle a LonWorks subnet, a BACnet subnet, and a Modbus subnet. The systems generally support web-based applications. The diagram does not attempt to cover all connection possibilities.
BACnet is an American National Standards Institute (ANSI) and American Society of Heating, Refrigerating, and Air-Conditioning Engineers (ASHRAE) standard that specifies a common communication protocol enabling building automation systems to communicate with each other using a common language, BACnet (ASHRAE/ANSI 135-2001). In 2003, BACnet became an international standard, ISO 16484-5. LonTalk is also part of the BACnet specification. Figure 34-3 is a high-level illustration of how BACnet systems may be designed. The backbone of the system is generally a high-speed Ethernet LAN that communicates with as many BACnet routers or a combination router/controller as necessary to distribute the
system to its lowest level of control. BACnet architecture supports Transmission Control Protocol/Internet Protocol (TCP/IP), BACnet/IP, Master-Slave/Token-Passing (MS/TP), LonTalk, Modbus, and custom gateways to other open systems. The systems generally support web-based applications.
Modbus was developed in 1979 by Modicon, Inc.; its primary use is for industrial automation and power monitoring and control applications. Modbus is an open, free protocol available with no licensing fees. It is one of the most widely used industrial automation protocols.
Modbus can establish master-slave, client-server, and a wide range of other device configurations. It can be configured for Ethernet or serial protocols. BACnet is the most widely used protocol, but it should be noted that all three of these protocols are widely used and intermixed in today’s interoperable BASs. No single protocol is best for every application. All three protocols have their advantages and can be implemented to provide the best interoperable solution. These protocols function well and can even reside on the same network. These protocols are vital to provide future backward compatibility and ensure interoperability. That being said, a designer or engineer must be aware that systems utilizing these protocols can be customized by OEM manufacturers to suit their products. The designer or engineer should exercise the same caution in selecting compatible products.
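To give a flavor of how simple Modbus is at the wire level, the sketch below hand-builds a Modbus TCP “write single register” request (function code 0x06). A real project would normally use an established Modbus library; the host, unit ID, and register shown are hypothetical.

```python
import socket
import struct

def write_single_register(host, unit_id, address, value, port=502):
    """Send a Modbus TCP 'write single register' request (function 0x06).

    MBAP header: transaction id, protocol id (0), remaining byte count,
    unit id. PDU: function code, 0-based register address, register value.
    """
    pdu = struct.pack(">BHH", 0x06, address, value)
    mbap = struct.pack(">HHHB", 1, 0, len(pdu) + 1, unit_id)
    with socket.create_connection((host, port), timeout=2.0) as sock:
        sock.sendall(mbap + pdu)
        return sock.recv(256)  # a successful write echoes the request

# Hypothetical use: set a damper-position setpoint held in register 0
# (register 40001 in traditional 4xxxx numbering) to 75 percent:
# write_single_register("192.168.1.50", 1, 0, 75)
```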
Information Management Dashboards are a convenient and effective way to display BAS and other open system
information. Dashboards provide a means to convey key operational points and links to users at a glance. Various vendors provide templates to easily convey information to the intended audience. As an example, to set up a dashboard such as Lucid, you select the type of template to display, choose the protocol, choose the data display format, and then fill in the parameters. These same dashboards are used in many types of industries and businesses (see Figure 34-4).
BAS dashboards, such as those furnished by Niagara (see Figure 34-5), are a powerful display tool for BASs and other systems.
Summary The building automation field is changing at a rapid pace with new innovations and demands placed on the systems. Just a few years ago, facility managers were satisfied to have a fancy graphic screen that displayed the status of the building systems on demand. Now, these same managers are demanding energy, maintenance, and operational information. With the proliferation of building and cloud server bandwidth, individual equipment predictive management and maintenance are within our grasp. Subjects not covered in this chapter, which are now coming to the forefront, include tools such as Project Haystack-based analytic monitoring systems that ride on top of the BAS and other smart systems. These tools can effectively collect system operation data to optimize energy usage daily, weekly, and yearly based on statistical predictions, and they can be set up to provide preventive maintenance suggestions. Specifying open protocol communications is a critical component of ongoing building controls system maintenance as well as of future analytic system integration. The use of industry standards and open protocols (such as ASHRAE’s BACnet protocol) ensures that multiple systems can be integrated with the building as the industry continues to mature and evolve. Other emerging protocols, such as Project Haystack, utilize a data tagging protocol structure to enable the use of richer data streams through semantic tagging. One note on protocol specification: it is important to specify protocols down to the field device layer. Often, otherwise open systems can be layered with proprietary extensions on the field networks that can impede deep analytics system integration, as well as future maintainability.
Further Information Calabrese, Steven R. Typical Controllers in DDC. Accessed 5/11/2018. http://www.automatedbuildings.com/news/apr08/columns/080325113602calabrese.h
Echelon Corporation. Designing Open Building Control Systems Based on LonWorks® Technology. Version 2. Copyright Echelon Corporation. Accessed 5/11/18. http://www.enerlon.com/JobAids/designguide.pdf.
Echelon Corporation. Introduction to the LonWorks® Platform. Revision 2. Echelon Corporation, 2009.
Haakenstad, Larry K. “How to Specify BACnet-Based Systems.” Alerton Technologies. Accessed 3 April 2018. http://www.bacnet.org/Bibliography/ES-6-97.htm.
Haynes, Roger W. Control Systems for Heating, Ventilating, and Air Conditioning. 3rd ed. New York: Van Nostrand Reinhold Environmental Engineering Series, 1983. ISBN: 0-442-23649-2.
Hess, Mark. “BacNet and LonTalk: Why We Need Them Both.” Accessed 5/11/18. www.automatedbuildings.com/news/jan00/articles/trane/trane.htm.
Modbus.org. MODBUS Over Serial Line Specification & Implementation Guide. V1.0. Accessed 3 April 2018. http://www.modbus.org/docs/Modbus_over_serial_line_V1.pdf.
North Beach Consulting. “LonWorks Fundamentals: A Guide to a Basic Understanding of the LonTalk Protocol.” Richmond, VA: North Beach Consulting, LLC. Accessed 3 April 2018. http://www.circon.com/wp-content/uploads/2011/03/LonWorksFundamentals.pdf.
Piper, James. “Bacnet, Lonmark, and Modbus: How and Why They Work.” Accessed 3 April 2018. https://www.facilitiesnet.com/buildingautomation/article.aspx?id=7712.
Stehmeyer, Rick. BacNet vs. LON – A Network Comparison. Accessed 3 April 2018. https://buildingenergy.cx-associates.com/bacnet-vs-lon-a-network-datacomparison.
Sullivan, Edward, ed. “BACnet, LonWorks and Modbus: Getting What You Want.” Accessed 3 April 2018. http://www.facilitiesnet.com/buildingautomation/article/BACnet-Lon-Works-and-Modbus-Getting-What-You-Want-Facilities-Management-Building-AutomationFeature--14059. (Page 1 only.)
About the Author John Lake is the director of Automation and Controls for DPR Construction in Redwood City, California. He is an ISA Lifetime Member, a Certified Automation Professional (CAP), and a USGBC LEED Accredited Professional (LEED AP). Lake has 35 years of experience in the process and building automation fields, and extensive experience in the design and implementation of cross-platform automation systems. His experience in designing process and building automation systems includes integration with mechanical, electrical, and chemical systems for semiconductor fabrication, biosafety level 3 (BSL-3) labs, microgrid systems, mission critical facilities, hospitals, biopharmaceutical manufacturing, and chemical facilities. Lake has worked as a product director for automation and has delivered presentations to societies and design institutes in China and Singapore on building automation and process automation.
Lake participates on the ISA105 (Commissioning, Loop Checks, and Factory and Site Acceptance/Integration Tests for Industrial Automation Systems) and ISA111 (Unified Automation for Buildings) committees; and he is a subject matter expert for Audiovisual and Integrated Experience Association (AVIXA).
XI Integration
Data Management Data are the lifeblood of industrial process operations. The levels of efficiency, quality, flexibility, and cost reduction needed in today’s competitive environment cannot be achieved without a continuous flow of accurate, reliable information. Good data management ensures the right information is available at the right time to answer the needs of the organization. Databases store this information in a structured repository and provide for easy retrieval and presentation in various formats. Managing the data and their associated relationships makes it possible to provide meaningful interpretations and hence proper use of the data.
Manufacturing Operations Management The automation industry needs a good coupling of information technology (IT) know-how with a broad knowledge of plant floor automation—either by having IT systems specialists learn plant floor controls or by having automation professionals learn more about integration, or both. Functionality and integration of the shorter time frame operating systems with both plant floor controls and with company business systems is called by several names; manufacturing execution systems (MESs) is the most common. The concepts of where functions should be performed and when data flows should occur were wrestled with for decades after the computer-integrated manufacturing (CIM) work of the 1980s. The ISA-95 series on enterprise-control system integration provided real standardization in this area. The ISA-95 standards have been adopted by some of the biggest names in manufacturing and business systems. While a large percentage of automation professionals do not know what MES is, this topic, like integration in general, cannot be ignored any longer.
Operational Performance Analysis
The phrase operational performance analysis has referred to a variety of different technologies and approaches in the field of industrial automation. There are four components to operational performance analysis that must be considered: the industrial operation under consideration, the measurement of the performance of the operation, the analysis of the performance based on the measurements and the performance expectation, and the implementation of appropriate performance improvements. All four components must continuously work in concert to have an effective operational performance analysis system.
35 Data Management By Diana C. Bouchard
Introduction Data are the lifeblood of industrial process operations. The levels of efficiency, quality, flexibility, and cost reductions needed in today’s competitive environment cannot be achieved without a continuous flow of accurate, reliable information. Good data management ensures the right information is available at the right time to answer the needs of the organization. Databases store this information in a structured repository and provide for easy retrieval and presentation in various formats.
Database Structure The basic structure of a typical database consists of records and fields. A field contains a specific type of information—for example, the readings from a particular instrument or the values of a particular laboratory test. A record contains a set of related field values, typically taken at one time or associated with one location in the plant. In a spreadsheet, the fields would usually be the columns (variables) and the records would be the rows (sets of readings). In order to keep track of the information in the database as it is manipulated in various ways, it is desirable to choose a key field to identify each record, much as it is useful for people to have names so we can address them. Figure 35-1 shows the structure of a portion of a typical process database, with the date and time stamp as the key field.
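Expressed as a table definition, a structure like the one in Figure 35-1 might look like the sketch below; the column names are invented for illustration, and the date/time stamp serves as the key field.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE process_data (
        sample_time    TEXT PRIMARY KEY,  -- key field: date/time stamp
        reel_speed     REAL,              -- one field per instrument
        steam_pressure REAL,
        moisture_pct   REAL               -- e.g., a laboratory test value
    )
""")
# One record: a set of related field values taken at the same time.
conn.execute(
    "INSERT INTO process_data VALUES (?, ?, ?, ?)",
    ("2018-06-01 10:05:00", 612.4, 4.1, 8.7),
)
```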
Data Relationships Databases describe relationships among entities, which can be just about anything: people, products, machines, measurements, payments, shipments, and so forth. The simplest kind of data relationship is one-to-one, meaning that any one of entity a is associated with one and only one of entity b. An example would be customer name and business address. In some cases, however, entities have a one-to-many relationship. A given customer has probably made multiple purchases from your company, so customer name and purchase order number would have a one-to-many relationship. In other cases, many-to-many relationships exist. A supplier may provide you with multiple products, and a given product may be obtained from multiple suppliers.
Database designers frequently use entity-relationship diagrams (Figure 35-2) to illustrate linkages among data entities.
Database Types The simplest database type is called a flat file, which is an electronic analog of a file drawer, with one record per folder, and no internal structure beyond the twodimensional (row and column) tabular structure of a spreadsheet. Flat-file databases are adequate for many small applications of low complexity. However, if the data contain one-to-many or many-to-many relationships, the flat file structure cannot adequately represent these linkages. The temptation is to reproduce information in multiple locations, wherever it is needed. However, if you do this, and you need to update the information afterwards, it is easy to do so in some places and forget to do it in others. Then, your databases contain inconsistent and inaccurate
information, leading to problems such as out-of-stock situations, wrong customer contact information, and obsolete product descriptions.
A better solution is to use a relational database. The essential concept of a relational database is that ALL information is stored as tables, both the data themselves and the relations between them. Each table contains a key field that is used to link it with other tables. Figure 35-3 illustrates a relational database containing data on customers, products, and orders for a particular industrial plant.
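A minimal sketch of such a structure follows; the shared customer_id key field is what expresses the one-to-many relationship between customers and orders (table and column names are assumptions, not the figure's exact contents).

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (
        customer_id INTEGER PRIMARY KEY,   -- key field
        name        TEXT,
        address     TEXT
    );
    CREATE TABLE orders (
        order_id    INTEGER PRIMARY KEY,   -- key field
        customer_id INTEGER REFERENCES customers(customer_id),
        product     TEXT,
        quantity    INTEGER
    );
""")
# Many order rows may reference one customer row, so the relationship
# itself is stored as a table column rather than duplicated data.
```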
Additional specifications describe how the tables in a relational database should be structured so the database will be reliable in use and robust against data corruption. The degree of conformity of a database to these specifications is described in terms of degrees of normal form.
Basics of Database Design The fundamental principle of good database design is to create a database that will support the desired uses of the information it contains. Factors such as database size, volatility (frequency of changes), type of interaction desired with the database, and the knowledge and experience of database users will influence the final design. Key fields must be unique to each record. If two records end up with the same key
value, the likely result is misdirected searches and loss of access to valuable information. Definition of the other fields is also important. Anything you might want to search or sort on should be kept in its own field. For example, if you put first name and last name together in a personnel database, you can never sort by last name.
Queries and Reports
A query is a request to a database to return information matching specified criteria. The criteria are usually stated as a logical expression using operators such as equal, greater than, less than, AND, and OR. Only the records for which the criterion evaluates as TRUE are returned. Queries may be performed via interactive screens, or using query languages, such as SQL (Structured Query Language), which have been developed to aid in the formulation of complex queries and their storage for re-use (as well as more broadly for creating and maintaining databases). Figure 35-4 shows a typical SQL query.
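Reusing the process_data table from the earlier sketch, a query of this kind might look as follows; the criteria are illustrative.

```python
# Only the records for which the criterion evaluates as TRUE are returned.
rows = conn.execute("""
    SELECT sample_time, reel_speed, moisture_pct
    FROM process_data
    WHERE moisture_pct > 9.0 AND reel_speed > 600.0
    ORDER BY sample_time
""").fetchall()
for sample_time, reel_speed, moisture_pct in rows:
    print(sample_time, reel_speed, moisture_pct)
```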
Reports pull selected information out of a database and present it in a predefined format as desired by a particular group of end users. The formatting and data requirements of a particular report can be stored and used to regenerate the report as many times as desired using up-to-date data. Interactive screens or a report definition language can be used to generate reports. Figure 35-5 illustrates a report generation screen.
Data Storage and Retrieval

How much disk storage a database requires depends on several factors: the number of records in the database, the number of fields in each record, the amount and type of information in each field, and how long information is retained in the database. Although computer mass storage has rapidly expanded in size and decreased in cost over the last few decades, human ingenuity in finding new uses for large quantities of data has steadily kept pace. Very large databases, such as those used by retail giant Walmart to track customer buying trends, now occupy many terabytes (trillions of bytes) of storage space.

Managing such large databases poses a number of challenges. The simple act of querying a multi-terabyte database can become annoyingly slow. Important data relationships can be concealed by the sheer volume of data. As a response to these problems, data mining techniques have been developed to explore these large masses of information and retrieve information of interest. Assuring consistent and error-free data in a database that may experience millions of modifications per day is another problem.

Another set of challenges arises when two or more databases that were developed separately are interconnected or merged. For example, the merger of two companies often results in the need to combine their databases. Even within a single company, as awareness grows of the opportunities that can be seized by leveraging its data assets,
management may undertake to integrate all the company's data into a vast and powerful data warehouse. Such integration projects are almost always long and costly, and the failure rate is high. But, when successful, they provide the company with a powerful data resource.

To reduce data storage needs, especially with process or other numerical data, data sampling, filtering, and compression techniques are often used. If a reading is taken on a process variable every 10 minutes as opposed to every minute, simple arithmetic will tell you that only 10% of the original data volume will need to be stored. However, a price is paid for this reduction: loss of any record of process variability on a timescale shorter than 10 minutes, and possible generation of spurious frequencies (aliasing) by certain data analytic methods. Data filtering is often used to eliminate certain values, or certain kinds of variability, that are judged to be noise. For example, values outside a predefined range, or changes occurring faster than a certain rate, may be removed.
Data compression algorithms define a band of variation around the most recent values of a variable and record a change in that variable only when its value moves outside the band (see Figure 35-6). Essentially the algorithm defines a "deadband" around the last few values and considers any change within that band to be insignificant. Once a new value is recorded, it is used to redefine the compression deadband, so it will follow longer-term trends in the variable. Variations within this family of techniques ensure that a value is recorded from time to time even if no significant change is taking place, or adjust the width and sensitivity of the deadband during times of rapid change in variable values.
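A minimal Python sketch of the deadband idea follows. Commercial historians use more elaborate variants (swinging-door compression, for example); the band width and readings here are invented for illustration.

    def deadband_compress(samples, band):
        """Keep a (time, value) sample only when the value moves more than
        `band` away from the last recorded value; each recorded value then
        re-centers the deadband so longer-term trends are followed."""
        recorded = []
        last = None
        for t, v in samples:
            if last is None or abs(v - last) > band:
                recorded.append((t, v))
                last = v
        return recorded

    readings = [(0, 50.0), (1, 50.1), (2, 49.9), (3, 50.2), (4, 51.5), (5, 51.6)]
    print(deadband_compress(readings, band=1.0))
    # [(0, 50.0), (4, 51.5)] -- small wiggles inside the band are discarded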
Database Operations
The classic way of operating on a database, such as a customer database in a purchasing department, is to maintain a master file containing all the information entered so far, and then periodically update the database using a transaction file containing new information. The key field in each transaction record is tested against the key field of each record in the master file to identify the record that needs to be modified. Then the new information from the transaction record is written into the master file, overwriting the old information (see Figure 35-7). This approach is well suited to situations where the information changes relatively slowly and the penalties for not having up-to-the-minute information are not severe. Transactions are typically run in batches one to several times a day.
As available computer power increased and user interfaces improved, interactively updated databases became more common. In this case, a data entry worker types transactions into an on-screen form, directly modifying the underlying master file. Built-in range and consistency checks on each field minimize the chances of entering incorrect data. With the advent of fast, reliable computer networks and intelligent remote devices, transaction entries may come from other software packages, other computers, or portable electronic devices, often without human intervention. Databases can now be kept literally up-to-the-minute, as in airline reservation systems.
Since an update request can now arrive for any record at any moment (as opposed to the old batch environment where a computer administrator controlled when updates happened), the risk of two people or devices trying to update the same information at the same time has to be guarded against. File and record locking schemes were developed to block access to a file or record under modification, preventing other users from operating on it until the first user’s changes were complete.
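The idea behind record locking can be sketched in a few lines of Python; this is a conceptual illustration with invented record keys, not the locking interface of any particular database product.

    import threading

    # One lock per record key; an updater holds the record's lock for the
    # whole read-modify-write, so concurrent updates are serialized.
    master = {"C-1001": {"name": "Acme", "balance": 0.0}}
    record_locks = {key: threading.Lock() for key in master}

    def apply_transaction(key, delta):
        with record_locks[key]:   # blocks while another user holds the record
            master[key]["balance"] += delta

    threads = [threading.Thread(target=apply_transaction, args=("C-1001", 10.0))
               for _ in range(100)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(master["C-1001"]["balance"])   # reliably 1000.0; no lost updates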
Other database operations include searching for records meeting certain criteria (e.g., with values for a certain variable greater than a threshold) or sorting the database (putting the records in a different order). Searching is done via queries, as already discussed. A sort can be in ascending order (e.g., A to Z) or descending order (Z to A). You can also do a sort within a sort (e.g., charge number within department) (see Figure 35-8).
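A sort within a sort amounts to sorting on a compound key, as in this small Python sketch (field names invented):

    records = [
        {"department": "Maintenance", "charge_no": 204},
        {"department": "Controls",    "charge_no": 117},
        {"department": "Controls",    "charge_no": 103},
    ]

    # Sort by department, then by charge number within each department;
    # reverse=True would give descending (Z to A) order instead.
    for r in sorted(records, key=lambda r: (r["department"], r["charge_no"])):
        print(r["department"], r["charge_no"])
    # Controls 103 / Controls 117 / Maintenance 204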
Special Requirements of Real-Time Process Databases

When the data source is a real-time industrial process, a number of new concerns arise. Every piece of data in a real-time process database is now associated with a time stamp and a location in the plant, and that information must be retained with the data. A real-time process reading also has an "expiry date," and applications that use that reading must verify that it is still good before using it. Data also come in many cases from measuring instruments, which introduce concerns about accuracy and reliability. In the case of a continuous process, the values in the database represent samples of a
constantly changing process variable. Any changes that occur in the variable between sample times will be lost. The decision on sampling frequency is a trade-off between more information (higher sampling rate) and compact data storage (lower sampling rate). Many process databases allow you to compress the data, as discussed earlier, to store more in a given amount of disk space.

Another critically important feature of a real-time process database is the ability to recover from computer and process upsets and continue to provide at least a basic set of process information to support a safe fallback operating condition, or else an orderly shutdown. A process plant does not have the luxury of taking hours or days to rebuild a corrupted database.

Most plants with real-time process databases archive the data as a history of past process operation. Recent data may be retained in disk storage in the plant's operating and control computers; older data may be written onto an offline disk drive or archival storage media such as CDs. With today's low costs for mass storage, there is little excuse not to retain process data for many years.
The Next Step: NoSQL and Cloud Computing

For 40 years, relational databases have worked very well. They solved many of the problems that had bedeviled earlier database systems, providing a firm foundation for the expansion of information technology and automated data handling into our workplaces and our lives. However, the computing world of today is very different from when relational database management system (RDBMS) technology was invented in the 1970s. Figure 35-9 illustrates some of the changes that have occurred. Just to mention some of the most prominent: the Internet did not exist 40 years ago, the size of today's massive databases (think Walmart, Amazon, and Google) was unimaginable then, and the people wanting to interact with your database can be anywhere on the planet instead of in a secure "computer center" in your building.
One approach has been to try to extend the life of the proven, reliable RDBMS technology in this new computing environment. This usually means bending, if not compromising outright, some of the strict requirements that have ensured the robustness of relational database technology, and giving up some of its benefits.

One tactic, called sharding, partitions a data set across two or more servers in an effort to maintain input/output (I/O) throughput for multiple users. However, a query cannot address multiple shards, so getting an answer to a simple question may involve running a query on each of several servers and piecing together the results. Any modifications to database structures must also be performed on each server individually, increasing maintenance effort.

Another tactic, denormalizing, bends the rules of the relational database "normal form," which requires that all tables where parts of a record are stored be locked down while the record is being updated. The impact of these lockdown periods on data throughput in a highly interactive system can be significant. However, denormalizing generally leads to duplication of data items in different locations, which is strictly forbidden in a conventional RDBMS.

Distributed caching is also used to speed up access to data from relational databases. The cache stores recently used data in memory on the assumption that they are likely to be required again soon. When an application requests data, the cache (which is fast) is searched first, before the main database (which is slow). However, this works only for read operations; for data integrity reasons, data must be written to the permanent database storage rather than to the cache, which is vulnerable to power outages, memory overflows, and subsequent loss of data. The cache also adds yet another layer of complexity to manage.
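The read path and write path just described can be sketched with plain Python dictionaries standing in for the cache and the (slow) permanent database; this is only a schematic of the pattern, not a real caching product.

    db = {"FIC-101": 42.0}   # stand-in for the slow permanent database
    cache = {}               # fast in-memory cache

    def read(key):
        if key in cache:          # cache hit: no database access needed
            return cache[key]
        value = db[key]           # cache miss: fall through to the database
        cache[key] = value        # remember it for subsequent reads
        return value

    def write(key, value):
        db[key] = value           # writes must go to permanent storage;
        cache[key] = value        # writing only to the (volatile) cache
                                  # would risk data loss on a power failure

    print(read("FIC-101"))   # miss on the first call...
    print(read("FIC-101"))   # ...hit on the second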
Breaking out of the strict relational model also involves making queries (information requests) easier. Structured Query Language (SQL) has been one of the mainstays of RDBMS technology, but it takes a specialist to write an SQL query. Newer database models are far more flexible about the forms in which they accept queries. This trend has recently been named "NoSQL," which is misleading because it involves not getting rid of SQL but moving outside it while keeping traditional RDBMS technology as a component. (Those of a conciliatory bent interpret "NoSQL" as "Not Only SQL.") Because there is as yet no broad consensus about what should replace relational databases or SQL, NoSQL database systems tend to be custom-built, either by user companies themselves or by system vendors.

Another trend is for today's huge, complex, distributed, and volatile databases to outgrow even a large company's computing facilities. With networks and the Internet now ubiquitous and reliable, the response is often to move storage outside the company's premises, into an entity that has come to be known as "the cloud." A cloud storage provider can relieve a company of the cost and bother of maintaining redundant equipment, upgrading servers and software, backing up data, and keeping an army of computer specialists on-site. In the minds of many, this frees the company to concentrate on its core business.
How much of a company's computing activity should be located "in the cloud" and how much should remain on-site must be evaluated in light of each company's priorities and resources. Options include software as a service (SaaS), in which software resides on the cloud server and is accessed over the Internet or a corporate network; platform as a service (PaaS), which provides components needed to develop and operate Internet applications; and infrastructure as a service (IaaS), in which storage or computing units dedicated to you reside at the service provider's site and are accessed remotely.

Cloud computing is not, however, without its challenges. Security is an obvious one: the minute you move data outside a facility that you physically control, you are running a risk that someone unauthorized will gain access to it. Fortunately, cloud service providers are aware of this concern and make extensive efforts to keep their facilities and their network connections secure. Another challenge is data synchronization: making sure that data residing in different locations remain consistent through multiple revisions and transfers. Response time and reliability must also be addressed because there is now an extra link in the chain connecting you with your computing resources, which may introduce slowdowns and connection failures.

Because of these challenges, real-time process computing and control applications have so far not embraced cloud computing, preferring to keep their information local and secure. Business computing, however, is rapidly moving toward the cloud computing model under pressure of cost and relentless technological advances.
Data Quality Issues
Data quality is a matter of fitness for intended use. The data you need to prepare a water quality report for a governmental body will be different from the data required for fast-response control of a paper machine wet end. In the broadest sense, data quality includes not only attributes of the numbers themselves, but how accessible, understandable, and usable they are in their database environment. Figure 35-10 shows some of the dimensions, or aspects, of data quality.
Data from industrial plants are often of poor quality. Malfunctioning instruments or communication links may leave stretches of missing values for a particular variable. Outliers (values that are grossly out of range) may result from transcription errors, communication glitches, or sensor malfunctions. An intermittently unreliable sensor or link may generate a data series with excessive noise variability. Data from within a closed control loop may reflect the impact of control actions rather than intrinsic process variability. Figure 35-11 illustrates some of the problems that may exist in process data. All these factors mean that data must often be extensively preprocessed before statistical or other analysis. In some cases, the worst data problems must be corrected and a second series of readings taken before analysis can begin.
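As a simple example of such preprocessing, the following Python sketch flags out-of-range values as missing rather than leaving them to distort later statistics; the range limits and readings are invented.

    def preprocess(values, low, high):
        """Replace out-of-range values (outliers, sensor glitches) with None
        so downstream analysis treats them as missing, not as real data."""
        cleaned = [v if low <= v <= high else None for v in values]
        return cleaned, sum(v is None for v in cleaned)

    raw = [72.1, 71.8, -999.0, 72.4, 350.2, 72.0]   # two obvious outliers
    cleaned, n_bad = preprocess(raw, low=0.0, high=150.0)
    print(cleaned)   # [72.1, 71.8, None, 72.4, None, 72.0]
    print(n_bad)     # 2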
Database Software
Many useful databases are built using off-the-shelf software, such as Microsoft Excel and MS Access. As long as query and report requirements are modest and real-time interaction with other computers or devices is not needed, this can be a viable and low-cost approach. The next step up in sophistication is general-purpose business databases, such as Oracle. If you choose a database that is a corporate standard, your database can work seamlessly with the rest of the enterprise data environment and use the full power of its query and reporting features.

However, business databases still do not provide many of the features required in a real-time process environment. A number of real-time process information system software packages exist, either general in scope or designed for particular industries. They may operate offline or else be fully integrated with the mill-wide process control and reporting system. Of course, each level of sophistication tends to entail a corresponding increase in cost and complexity.
Data Documentation

Adequate data documentation is a frequently neglected part of database design. A database is a meaningless mass of numbers if its contents cannot be linked to the
processes and products in your plant or office. Good documentation is especially important for numerical fields, such as process variable values. At a minimum, the following information should be available: location and frequency of the measurement; tag number if available; how the value is obtained (sensor, lab test, panel readout, etc.); typical operating value and normal range; accuracy and reliability of the measurement; and any controllers whose operation may affect the measurement. Process time delays are useful information, as they allow you to time-lag values and detect correlations that include a time offset. A process diagram with measurement locations marked is also a helpful adjunct to the database.
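As an illustration of using a documented process time delay, the sketch below computes the correlation between an upstream variable and a downstream variable shifted by that delay; the data and the two-sample delay are invented.

    def lagged_correlation(x, y, lag):
        """Pearson correlation between x[t] and y[t + lag], for a known
        process time delay of `lag` samples."""
        n = len(x) - lag
        xs, ys = x[:n], y[lag:lag + n]
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
        sx = sum((a - mx) ** 2 for a in xs) ** 0.5
        sy = sum((b - my) ** 2 for b in ys) ** 0.5
        return cov / (sx * sy)

    upstream   = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
    downstream = [0.0, 0.0, 1.1, 2.0, 2.9, 4.2]   # tracks upstream 2 samples late
    print(lagged_correlation(upstream, downstream, lag=2))   # close to 1.0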
Database Maintenance
Basic ongoing maintenance involves regular checks for out-of-range values and other anomalies that may have crept into the data. Often the first warning of a sensor malfunction or dropout is a change in the characteristics of the data it generates. In addition, changing user needs are certain to result in a stream of requests for modifications to the database itself or to the reports and views it generates. A good understanding of database structure and functioning is needed to implement these changes while maintaining database integrity and fast, smooth data access.

Version upgrades in the database software pose an ongoing maintenance challenge. All queries and reports must be tested with the new version to make sure they still work, and any problems with users' hardware or software configurations or the interactions with other plant hardware and software must be detected and corrected. Additional training may be needed to enable users to benefit from new software features or understand a change in approach to some of their accustomed tasks.
Data Security

Data have become an indispensable resource for today's businesses and production plants. Like any other corporate asset, they are vulnerable to theft, corruption, or destruction. The first line of defense is to educate users to view data as worthy of the same care and respect as other, more visible corporate assets. Protective measures, such as passwords, firewalls, and physical isolation of the database servers and storage units, are simply good practice. Software routines that could change access privileges, make major modifications to the database, or extract database contents to another medium must be accessible only to authorized individuals. Regular database backups, with at least one copy kept offsite, will minimize the loss of information and operating
capability in case of an incident.
Further Information

Date, C. J. An Introduction to Database Systems. 8th ed. Boston, MA: Addison Wesley Longman, 2003.

Gray, J. "Evolution of Data Management." IEEE Computer (October 1999): 38–46.

Harrington, J. L. Relational Database Design Clearly Explained. 3rd ed. Burlington, MA: Morgan Kaufmann, 2009.

Huth, A., and J. Cebula. "The Basics of Cloud Computing." US-CERT (U.S. Computer Emergency Readiness Team). Carnegie Mellon University, 2011. Accessed November 17, 2017. http://www.us-cert.gov/sites/default/files/publications/CloudComputingHuthCebula.pdf.

Litwin, Paul. Fundamentals of Relational Database Design. 2003. Accessed November 17, 2017. http://r937.com/relational.html.

Stankovic, J. A., S. H. Son, and J. Hansson. "Misconceptions About Real-Time Data Bases." IEEE Computer (June 1999): 29–36.

Strong, D. M., Y. W. Lee, and R. Y. Wang. "Data Quality in Context." Communications of the ACM 40, no. 5 (May 1997): 103–110.

Walker, G. "Cloud Computing Fundamentals." Accessed February 15, 2013. http://www.ibm.com/developerworks/cloud/library/cl-cloudintro/index.html.

Wang, R. Y., V. C. Storey, and C. P. Firth. "A Framework for Analysis of Data Quality Research." IEEE Transactions on Knowledge and Data Engineering 7, no. 4 (1995): 623–640.
About the Author

Diana C. Bouchard offers scientific and technical writing, editing, and translation services on a consulting basis. She holds an MSc (computer science) degree from McGill University in Montreal and worked for 26 years as a scientist in the Process Control Group at the Pulp and Paper Research Institute of Canada (Paprican). Her activities at Paprican included modeling and simulating kraft and newsprint mills, expert system development, and multivariate statistical data analysis. In the context of the process integration chair at Ecole Polytechnique, she has lectured on steady-state and dynamic simulation and multivariate data analysis.
36
Mastering the Activities of Manufacturing Operations Management

Understanding the Functional Level above Automation and Control and below Enterprise

By Charlie Gifford
Introduction

Automation only begins with equipment control on the plant floor; it also includes higher levels of control that manage production workflows, production job orders, and resources such as personnel, equipment, and materials across production areas. Effective manufacturing in the plant and across its supply chain is only partially based on production's equipment control capability. In an environment executing as little as 20% make-to-order (MTO) orders (80% make-to-stock [MTS]), resource optimization is critical to effective low-cost order fulfillment. In the 21st century global market, manufacturing companies must be effective at coordinating and controlling resources (personnel, materials, and equipment) across production and its supporting operations activities and their control systems to reach their maximum potential. This is usually accomplished using industrialized manufacturing applications to manage and optimize operations execution and governance procedures.

These operations activities are collectively called the manufacturing operations management (MOM) functional domain level. MOM defines a diverse set of functions and tasks to execute operations job orders while effectively applying resources above automation control systems; these operations management functions reside below the functional level of enterprise business systems, and they are typically local to a site or area. This chapter explains the activities and functions of the MOM level and how these functions exchange information with each other for production optimization and within the context of other corporate business systems.

The term manufacturing execution system (MES), described in earlier editions of this book, was defined by Advanced Manufacturing Research (AMR, acquired by the Gartner Group in 2009) in the early 1990s and was a high-level explanation that did not describe
the actual functionality set in general or in a vertical industry way. MES did not explain the inner MOM data exchanges (Level 3 in Figure 36-1) or the business Level 4 exchanges. For the most part, MES has been a highly misunderstood term in manufacturing methods and systems. This term was primarily based on defining production management for a 20th century make-to-stock manufacturing environment. MES was focused on describing the execution and tracking of a production order route/sequence and associated material transitions—not on the execution of critical supporting operations such as quality, maintenance, and intra-plant inventory movement to effectively utilize available resource capabilities and capacity. This is key to cost-effective manufacturing operations for make-to-order or lean pull supply chains.

In the ANSI/ISA-95.00.03-2013 standard, Enterprise-Control System Integration – Part 3: Activity Models of Manufacturing Operations Management [1], the basic MES definition was incorporated into the functions of the production operations management (POM) activity model. The ISA-95 Part 3 activity models include a definition that describes the actual functions, the tasks within functions, and the data exchanges between functions. No other MES definition by industry analysts or international standards provides a more comprehensive level of definition. The Gartner Group has updated their MES definition to use the term manufacturing operations system (MOS), which is a system abstraction from ISA-95 Part 3 activities instead of their 1990s MES term.

The ISA-95 Part 3 POM activity model is supported by activity models for quality operations management (QOM), inventory operations management (IOM), and maintenance operations management (MaintOM); these four activity models define all the MOM activities (functions, tasks, and data exchanges) for the Purdue Enterprise Reference Architecture and Level 3 of the ISA-95 Functional Hierarchy Model. Since 2006, ISA-95 Part 3 has been the primary requirements definition template used by 80% of manufacturers worldwide to define their Level 3 MOM systems in their requests for proposals (RFPs).
The ISA-95 standard's Functional Hierarchy Model defines the five levels of functions and activities of a manufacturing organization as originally described in the Purdue Enterprise Reference Architecture. Automation and control supports Levels 1 and 2, while Level 3 MOM connects the lower levels to the enterprise level to fulfill orders, as shown in Figure 36-1.

• Level 0 defines the actual physical processes.
• Level 1 defines the activities involved in sensing and manipulating the physical processes. Level 1 elements are the physical sensors and actuators attached to the Level 2 control functions in automation systems.

• Level 2 defines the activities of monitoring and controlling the physical processes; in automated systems, this includes equipment control and equipment monitoring. Level 2 automation and control systems have real-time responses measured in subseconds and are typically implemented on programmable logic controllers (PLCs), distributed control systems (DCSs), and open control systems (OCSs).

• Level 3 defines the activities that coordinate production and support resources to produce the desired end products. It includes workflow "control" and procedural "control" through recipe execution. Level 3 typically operates on time frames of days, shifts, hours, minutes, and seconds. Level 3 functions also include production, maintenance, quality, and inventory operations; these activities are collectively called MOM. Level 3 activities and functions are directly integrated and automated to support production operations activities within MOM.
• Level 4 defines business-related activities that manage a manufacturing organization. Manufacturing-related activities include establishing the basic plant schedule (such as material use, delivery, and shipping), determining inventory levels, logistics "control," and material inventory "control" (making sure materials are delivered on time to the right place for production). Level 4 is called Business Planning and Logistics. Level 4 typically operates on time frames of months, weeks, and days. Enterprise resource planning (ERP) logistics systems are used to automate Level 4 functions.

It is important to remember that each functional level has its own distinct form of control with its own definition and view of real-time execution. Level 3 systems consider real time to mean information available a few seconds after shop floor events occur in Level 0. Level 4 systems consider real time in the view of logistics, planning, and material information as available daily or within a few hours after the end of a shift.
Level 3 Role-Based Equipment Hierarchy
Figure 36-2 shows the role-based equipment hierarchy defined in ANSI/ISA-95.00.03-2013, which is used to provide the context for the data exchanged between Level 3 functions, as well as between Level 3 and Level 4 functions. Level 4 ERP and logistics systems typically coordinate and manage the entire enterprise and sites within the enterprise, but they may also schedule to the area or work center level in less complex make-to-stock configurations. The Level 3 MOM systems typically coordinate and schedule areas, work centers, and work units.
The role-based equipment hierarchy is an expansion of the equipment hierarchy defined
in ANSI/ISA-88.01-2010, Batch Control Part 1: Models and Terminology; it extends the batch control standard to address the equipment types used in continuous production, discrete production, and inventory storage and movement. The role-based equipment hierarchy provides a standard naming convention and context for the organization of resources (i.e., equipment, materials, physical assets, and personnel), operations workflows, automation control, and manual control.
MOM Integration with Business Planning and Logistics

The ANSI/ISA-95.00.01-2010, Enterprise-Control System Integration – Part 1: Models and Terminology [2], and ANSI/ISA-95.00.02-2018, Enterprise-Control System Integration – Part 2: Objects and Attributes for Enterprise-Control System Integration [3], standards define the terminology to be used for interfaces between Level 3 and Level 4 systems. This information is used to direct production activities and to report on production. Formal data models for exchanged information include four MOM resources object models and four MOM information categories object models.
Four MOM Resources Object Models
1. Personnel class, person, and operations test information – This is the object definition of the persons and personnel classes (roles) involved in production. This information may be used to associate an operations activity with specific persons as part of a work or operations event record, or with personnel classes to allocate the costs of an operations activity.

2. Equipment class, equipment, and operations test information – This is the object definition of the equipment and equipment classes involved in an operations activity. This information may be used to associate an operations activity with specific equipment as part of a work or operations event record, or with equipment classes to schedule an operations activity and allocate resources and costs.

3. Material class, material definition, material lot, material sublot, and operations test information – This is the object definition of the material lots, material sublots, material definitions, and material classes involved in an operations activity. This information allows Level 3 and Level 4 systems to unambiguously identify materials specified in operations schedules and consumed or produced in actual production, maintenance, quality, or inventory activities.
4. Process segment information – This is the object definition of the business views of an operations activity, based on Level 4 business processes that must send information to an operations management activity or receive information from one. Examples include process segments for setup, inspection, and cleanup.
Four MOM Information Categories Object Models

1. Operations definition information – This is the definition of the materials, equipment, personnel, and instructions required to execute an operations activity, such as make a product or repair equipment. This includes the manufacturing bill (a subset of the bill of material [BOM] that contains the quantity and type of material required for producing a product) or the resource specifications. It also includes operations segments defining the workflow and the specific resource specification required at each segment of an operations activity, such as production.
2. Operations capability information – This is the definition of the capability and capacities available from an operations activity for current and future periods of time. Capability and capacity information is required for both Level 4 scheduling and Level 3 detailed operations scheduling.

3. Operations schedule information – This specifies what products are to be made and the support activities required for production. It may include the definition of the specific personnel or roles to be used, equipment or equipment classes to be used, material lots or material classes to be produced, and material lots or material classes to be consumed for each segment of an operations activity.

4. Operations performance information – This specifies what was actually produced or completed in the operations activity. It may include the definition of the actual personnel or personnel classes used, the actual equipment or equipment classes used, the actual material lots and quantities consumed, and the actual material lots and quantities produced for each segment of an operations activity.
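To give these exchange objects a concrete flavor, the sketch below models a drastically simplified operations schedule as Python data classes. This is illustrative only: the normative object definitions are in ISA-95 Part 2 (and the B2MML XML schemas derived from them) and carry far more attributes, and every name here is invented.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class SegmentRequirement:
        # Resource requirements for one segment of an operations activity.
        segment_id: str
        personnel_classes: List[str] = field(default_factory=list)
        equipment_classes: List[str] = field(default_factory=list)
        material_classes: List[str] = field(default_factory=list)

    @dataclass
    class OperationsSchedule:
        schedule_id: str
        operations_type: str   # production, maintenance, quality, or inventory
        segments: List[SegmentRequirement] = field(default_factory=list)

    sched = OperationsSchedule(
        schedule_id="PO-2018-0042",
        operations_type="production",
        segments=[SegmentRequirement("mix", ["operator"], ["mixer"], ["resin"])],
    )
    print(sched.schedule_id, len(sched.segments))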
MOM Execution of the Level 4 Production Order through Coordinated Operations

The above ISA-95 Part 2 resource object models of the four manufacturing resources (personnel, materials, equipment, and physical assets) are used to construct the four
MOM information category object models (operations definition including process segments, operations capability, operations schedule, and operations performance).
The resource object models construct a unit of work known as a segment, which is the ISA-95 generic term for an operation, step, or phase in a work sequence or workflow. The process segment is the foundation construct of the ISA-95 operations management information models; it allows Level 2 real-time data to be contextualized and aggregated as operations information for business process activities through Level 3–Level 4 exchanges. This architectural construct allows data collection, analytics, reporting, and interfaces to be configurable and much less costly. Level 2 data are aggregated by resource objects to and from the information category objects shown in Figure 36-3.
The derived operations definition and operations schedule information models use this object structure for information exchanges between detailed scheduling, dispatching, and execution applications to contextualize Level 4 business process information into the form required for Level 2 and Level 3 MOM (Part 3) applications. The resulting operations performance and capability object models use this information structure to contextualize real-time data in data collection applications, so that applications for operations analytics, tracking, and reporting, as well as the supporting Level 3–4 interfaces, can readily aggregate Level 2 and Level 3 application data into the business information form required for Level 4 applications.

A segment in a recipe (batch) or production route (discrete) is constructed by using the combined resource models (personnel, equipment, physical assets, and materials) in unison to describe the unit of work in terms of resources and resources' test
specification requirements. For example, each person in a plant has an identification (ID) with their skills defined as a set of personnel classes by personnel class properties that have test specifications. The person’s qualifications are tested prior to permitting the person to be scheduled, dispatched, or executed as a resource in an operations segment or process segment. Personnel activities are recorded by Level 2 functions and tracked, analyzed, and reported by Level 3 functions based on this ID for each person, personnel class, and required class property for the operations segment scheduled.
In a business planning and logistics form, process segments are used to construct a library of general plant capabilities that are used to generate "operations segments" for actual products' production operations and supporting operations tasks. For example, an operations segment may take the form of production routes in discrete hybrid environments or recipes in batch-process hybrid environments. For the actual MOM form of defining the executable work unit, ISA-95 operations segments are recursive, with increasing levels of nested granularity in the "operations segment" definition for each Level 3 activity function (schedule, dispatch, and execution). The benefit is that the actual application outputs are contextualized in the form needed for each function's specific interface.

Each Level 3 application's performance results and associated Level 2 process control work data are fed back to Level 3 data collection applications in the ISA-95 information model contextualized form. The operations performance applications (analysis, tracking, reporting, and interfacing) readily aggregate the operations information for the Level 4 production orders and supporting operations orders in the operations context as ISA-95 information data models, the equipment role-based hierarchy, and operations segments. This operations segment construct is the basis for the four information categories of operations performance.

Part 3 defines the detailed activities of several MOM application categories and their dependent workflows and information exchanges within Level 3, as well as dependent interactions and information exchanges with Level 4 applications. As shown in Figure 36-4, a generic operations detailed activity model is defined and used to elaborate four key MOM activities: production, maintenance, quality test, and (intra-plant) inventory operations. Functions and high-level data exchanges are defined for each MOM activity.
In Figure 36-5, the shaded elements define the information flows for supporting operations within Level 3 areas that support production operations in the execution of a Level 4 production order. The production operations cycle time is directly dependent on the responses of other operations activities. As a make-to-stock (MTS) manufacturing environment transforms to a higher percentage of make-to-order (MTO) or engineer-to-order (ETO), the constraints or barriers to production workflow move from equipment constraints to the supporting operations constraints. In MTO and ETO production and their associated supporting operations workflows, the work masters (methods and production rules), parameters, workflow dependencies, work instructions, bills of material, and material and resource test specifications are still being defined and stabilized using continuous improvement corrective actions.
Note: Some supporting operations information may flow to other Level 4 systems, such as enterprise asset management or quality management systems.
MOM and Production Operations Management

Figure 36-4 illustrates the different Level 3 operations-oriented functions at all manufacturing sites and areas. Each bubble in the figure represents a collection of operations activities that occur in a manufacturing facility or asset as the Level 4 operations schedule is converted into actual production. It illustrates how production and support operations requirements from the business are used to coordinate and control plant activities. The four arrows in an activity model between Level 3 and Level 4 identify the previously defined ISA-95 information categories that are exchanged between MOM and business logistics systems.

The MOM model is driven by operations schedules developed by the business and sent to the manufacturing facility for production or support activities. The operations schedules are used for detailed operations scheduling activities that define detailed operations schedules containing operations work requests and associated job orders. The operations job orders are dispatched to work centers and work units based on time and events; the operations job orders are executed, and data are collected in an operations data collection activity for the operations type (production, quality, maintenance, or inventory).
Note: In batch systems, a control recipe is the equivalent of a production job order. The collected data are used in production operations and support operations tracking activities that relate the time-series and transactional information to the job order information to generate a report on operations performance and tracking information. The collected data are used in production and operations analysis functions to generate reports and key performance indicators (KPIs). Operations capability information about the current and future availability of resources and operations segments is provided to business scheduling systems by the resource management activities of production and supporting operations. Operations definition information about the operations segment, work master, master recipe, procedures, BOMs, resource requirements, and workflow specifications needed for production and supporting operations is managed by product definition management activities.
Detailed Scheduling (Production and Operations)

The detailed scheduling function in a manufacturing facility takes a business Level 4 production master schedule and applies operations capability information about local resources to generate a detailed operations schedule as work requests containing job orders or job order lists. This can be an automated process, but in many plants,
scheduling is done manually by expert production planners or production-planning staff. Automated systems are sometimes referred to as plant-level advanced scheduling and planning systems. The key element of this function is the detailed scheduling of work assignments and material flows, to a finer level of granularity than the business operations schedule, into a set of work requests containing job orders for specific work tasks. While the Level 4 master schedule may assign work to sites, areas, and work centers, detailed operations scheduling assigns work as job orders to work centers and work units. Additionally, many business Level 4 master schedules are based on unlimited or partially constrained capacity, while detailed (or finite capacity) scheduling takes into account the constraints around personnel, material, equipment, physical assets, and processes as work in progress.
Dispatching (Production and Operations)

Once a detailed operations schedule is created and available as a work request containing job order lists and/or job orders, the job orders or job order lists are dispatched to production lines, process cells, production units, and storage zones. This function can take the form of supervisors receiving daily schedules and dispatching work as job orders to technicians, or of automated systems performing campaign management of batches and production runs to fulfill the requirements of the Level 4 operations (master) schedule. The operations dispatching function includes handling conditions not anticipated in the detailed operations schedule. This may involve managing material workflows and buffers. Unanticipated conditions may have to be communicated to maintenance operations management, quality operations management, and/or inventory operations management in support of production operations. The dispatching function is a core function of a MOM activity.
Execution Management (Production and Operations)

Production and support operations execution management functions receive the dispatched job orders and, using paper-based systems, MOM systems, or recipe execution systems, coordinate and control the actual work execution according to the work master identified in the job order. The work master may include the execution of procedural logic in recipes and the display of workflow instructions to operators as a workflow specification object. The activities include selecting, starting, and moving units of work (such as a batch or production run) through the appropriate sequence of operations to physically produce
the product or execute a support task. The actual equipment control is part of the Level 2 functions. Production and operations execution management is one of the core functions of a MOM activity, but it may also be performed by recipe or manual workflow instruction systems in DCSs or batch execution systems. The standards for information flows from Level 3 to Level 2 are defined in ANSI/ISA-88.01-2010, Batch Control – Part 1: Models and Terminology; Open Platform Communications (OPC); and fieldbus standards.
Data Collection (Production and Operations)
Production and support operations data collection functions gather, compile, and manage production data for specific units of work (batches or production runs). Level 2 manufacturing control systems generally deal with process information such as quantities (weight, units, etc.), properties (rates, temperatures, etc.), and equipment information such as controller, sensor, and actuator statuses. Collected production and operations data include sensor readings, equipment states, event data, operator-entered data, transaction data, operator actions, messages, calculation results from models, and other data of importance in the making of a product. The collected data are inherently time-based and event-based, that is, structured to the ISA-95 context of the operations schedule, operations segment, equipment hierarchy, or operations capability to give context to the collected information for operations analysis and tracking functions. This information is usually made available to various operations analysis activities including product analysis, production and operations analysis, and process analysis. Real-time data historians and automated batch record logging systems support this activity.
Tracking (Production and Operations)

The production and operations tracking functions convert contextualized operations information into information related to assigned work (batches and production runs) and into tracking information about resources (equipment, material, physical assets, and personnel) used in production. Production and operations tracking also merges and summarizes information that is reported back to the Level 4 business activities in the contextualized form required. This function is a core function of a MOM activity. When automated systems are used, they usually link to data historians and batch record logging systems.
Resource Management (Production and Operations)

The resource management function monitors and categorizes the availability of
personnel, material, physical assets, equipment, and operations segments. This information is used by detailed operations scheduling and business logistics planning. These activities take into account the current and future predicted availability, using information such as planned maintenance and vacation schedules in addition to material order status and delivery dates. This function may also include material reordering functions, such as Kanban [4]. Kanban is a shop floor material management system used as part of just-in-time production and inventory operations where components and subassemblies are produced, based on notification by a demand signal from a subsequent operation or work step. Kanban, the Japanese word for “sign,” is a signaling system to trigger action, an alarm, or an alert to signal the need for an item or to trigger the movement, production, or supply of a unit in a factory. Resource management is usually a mixed operation with manual work, automation, and database management. Management of the resources may include local resource reservation systems; there may be separate reservation systems for each type of managed resource (personnel, equipment, physical assets, and material). This function is a core function of MOM activities.
Definition Management (Production and Operations)
The operations definition management function includes the management of product and operations definitions, such as work masters, resource specifications, and workflow specifications. These may be recipes, work instructions, assembly instructions, standard operating procedures, and other information used by production to make or assemble products. This function is a core function of a MOM activity.
Performance Analysis (Production and Operations)

The activities associated with the analysis of production, operations, process, and product are defined as the operations performance analysis function. These are usually offline activities that look for ways to improve processes through chemical or physical simulation, analysis of good and bad production runs, and analysis of delays and bottlenecks in production. Operations performance analysis also includes calculating performance indicators, including leading and trailing predictors of behavior. These activities are generally major users of information collected in plant data historians. There are often separate tools for production, operations, process, and product analysis, and the tool sets vary based on the type of production (continuous, discrete, or batch).
Other Supporting Operations Activities
The list above defines all the MOM activities of a manufacturing facility. Production operations management activities are supported by maintenance operations management activities, quality operations management activities, and inventory operations management activities.

• Maintenance operations management – The functions that coordinate, direct, and track the functions that maintain the equipment, tools, and related assets to ensure their availability for manufacturing.

• Quality operations management – The functions that coordinate, direct, and track the functions that measure and report on quality. The broad scope of quality operations management includes both quality operations and the management of those operations to ensure the quality of intermediate and final products.

• Inventory operations management – The functions that coordinate, direct, and track the functions that transfer materials between and within work centers and manage information about material locations and statuses.

• Manufacturing operations infrastructure activities – Manufacturing operations also require infrastructure activities that may be specific to manufacturing, but are often elements that are also required by other parts of a manufacturing company. The infrastructure activities include:

o Managing security within manufacturing operations
o Managing information within manufacturing operations
o Managing configurations within manufacturing operations
o Managing documents within manufacturing operations
o Managing regulatory compliance within manufacturing operations
o Managing incidents and deviations
The Operations Event Message Enables Integrated Operations Management

Operations events are created by each of the Level 3 functions shown in Figure 36-5 and can be represented in an operations event message using the ISA-95 objects from Part 2 [3] and Part 4 [4]. The operations event is a notification of a manufacturing operations action (production, quality, maintenance, or inventory movement) issued after the operations event has occurred within a MOM activity, function, task, operation, or phase. Not all real-world events warrant creating an operations event message by a manufacturing
system. Functions in Levels 2 to 4 require inputs of the operations events' data to complete their tasks. The operations event data exchange is typically completed as a publish-subscribe message, where all the functions subscribe to the specific data required for task inputs. Valid examples of operations events are:

• Completion of a MOM function or task (operations event: work schedule created, released, or dispatched)

• A Level 4 or Level 3 business process step or task completed (operations event: job order started or completed, or an operations schedule created, released, or dispatched)

• Resource status changes (operations event: resource acquired, released, available, committed, etc.)

• Physical process step actions (operations event: material lot created, updated, or deleted)
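A minimal publish-subscribe sketch of this exchange pattern is shown below; the topic names and message fields are invented, and a real implementation would exchange ISA-95 Part 4 operations event objects over a message broker rather than in-process callbacks.

    from collections import defaultdict

    subscribers = defaultdict(list)   # topic -> list of handler functions

    def subscribe(topic, handler):
        subscribers[topic].append(handler)

    def publish(topic, message):
        # Deliver the operations event to every function that subscribed
        # to this specific kind of event data.
        for handler in subscribers[topic]:
            handler(message)

    # A tracking function subscribes only to the event data it needs.
    subscribe("job_order.completed",
              lambda msg: print("tracking update:", msg["job_order_id"]))

    # Execution management publishes the event after the action has occurred.
    publish("job_order.completed",
            {"job_order_id": "JO-1017", "work_center": "Line 3", "qty": 250})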
The Level 3-4 Boundary

Four rules are applied to determine whether an activity should be managed as part of Level 4 or as part of Levels 3, 2, or 1. An activity should be managed at Level 3 or below if the activity is directly involved in manufacturing; includes information about personnel, equipment, physical assets, or material; and meets any of the following conditions:

• The activity is critical to plant safety.
• The activity is critical to plant reliability. • The activity is critical to plant efficiency. • The activity is critical to product quality. • The activity is critical to maintaining product or environmental regulatory compliance. Note: This includes such factors as safety, environmental, and current good manufacturing practice (cGMP) compliance. This means, in some cases, the Level 3 activities defined above may be performed as part of logistics instead of operations. Typically, this involves operations detailed scheduling and operations dispatching. The scope of a MOM system is determined by applying the above rules to each site or area within a site.
References

1. ANSI/ISA-95.00.03-2013 (IEC 62264-3 Modified). Enterprise-Control System Integration – Part 3: Activity Models of Manufacturing Operations Management. Research Triangle Park, NC: ISA (International Society of Automation).

2. ANSI/ISA-95.00.01-2010 (IEC 62264-1 Mod). Enterprise-Control System Integration – Part 1: Models and Terminology. Research Triangle Park, NC: ISA (International Society of Automation).

3. ANSI/ISA-95.00.02-2018. Enterprise-Control System Integration – Part 2: Objects and Attributes for Enterprise-Control System Integration. Research Triangle Park, NC: ISA (International Society of Automation).

4. Wikipedia. "Kanban." Accessed May 21, 2018. http://en.wikipedia.org/wiki/Kanban.
Further Information

ANSI/ISA-95.00.04-2012. ECSI – Part 4: Objects & Attributes for MOM Integration (IEC 62264-3 Mod Draft). Research Triangle Park, NC: ISA (International Society of Automation).
Brandl, Dennis. Design Patterns for Flexible Manufacturing. Research Triangle Park, NC: ISA (International Society of Automation), 2006.

Brown, Mark Graham. Baldrige Award Winning Quality: How to Interpret the Malcolm Baldrige Award Criteria for Performance Excellence. 2nd ed. ASQC Quality Press, 1992.

Cox III, James F., and John H. Blackstone, Jr. APICS Dictionary. 9th ed. APICS, 1998.

Gifford, Charlie. The Hitchhiker's Guide to Manufacturing Operations Management: ISA-95 Best Practices Book 1.0. Research Triangle Park, NC: ISA (International Society of Automation), 2007.

Goldratt, Eliyahu M., and Jeff Cox. The Goal: A Process of Ongoing Improvement. 2nd ed. Great Barrington, MA: North River Press, 1992.

"MES Functionalities and MRP to MES Data Flow Possibilities." White Paper Number 2, MESA International, 1994.

Scholten, Bianca. The Road to Integration: A Guide to Applying the ISA-95 Standard in Manufacturing. Research Triangle Park, NC: ISA (International Society of Automation), 2007.

Williams, Theodore J. The Purdue Enterprise Reference Architecture: A Technical Guide for CIM Planning and Implementation. Research Triangle Park, NC: ISA (International Society of Automation), 1992.
About the Author

Charlie Gifford is the owner and senior advanced manufacturing consultant for 21st Century Manufacturing Solutions LLC and a senior decision automation researcher in the Strategy and Innovation Group for a Fortune 25 global mining company. He holds BS degrees in chemical and material engineering and an MS in solid-state physics from the University of Maryland. An advocate of intelligent manufacturing, over the past 31 years he has developed advanced manufacturing systems in 12 industries: mining and minerals, biotech, life sciences, aerospace, electronics, semiconductor, automotive, food and beverage, telecom, energy, oil and gas, and paper products. An internationally recognized expert in combining Lean Six Sigma practices with supply chain and operations management system architectures, he has a background that includes hands-on design, design supervision, and team leadership in innovation transformation. As an industry leader in professional organizations such as ISA and the OPC Foundation, Gifford is a lead contributor to, editor of, and teacher of many intelligent manufacturing standards, such as ISA-88, ISA-95, the OPC Foundation specifications, the OPEN-Serialization Communication Standard (OPEN-SCS), the Open Application Group Integration Specification (OAGIS), and the Supply Chain Operations Reference (SCOR®) model. On intelligent manufacturing best practices, he has published more than 50 papers, authored or edited 5 books, authored and taught over 20 courses, chaired 6 working groups, and earned the MESA International Outstanding Contributor Award (2007). Gifford has authored three ISA books:

1. The Hitchhiker’s Guide to Manufacturing Operations Management: ISA-95 Best Practices Book 1.0
2. When Worlds Collide in Manufacturing Operations: ISA-95 Best Practices Book 2.0, awarded the Thomas G. Fisher Award of Excellence for Best Standards Book in 2011
3. The MOM Chronicles: ISA-95 Best Practices Book 3.0, awarded the Thomas G. Fisher Award of Excellence for Best Standards Book in 2013
37 Operational Performance Analysis
By Peter G. Martin, PhD

The phrase operational performance analysis has referred to many different technologies and approaches in the industrial automation field. An overall discussion of this topic requires a functional definition of operational performance analysis that will provide a framework. Figure 37-1 illustrates such a framework as a block diagram. There are four components to operational performance analysis that must be considered. First is the industrial operation under consideration; for the purposes of this discussion, an industrial operation can range from a single process control loop to an entire plant. Second is measuring the operation’s performance, a critical component of the model because performance cannot be controlled if it is not measured. Third is the analysis of the performance based on the measurements and the performance expectations. This function has traditionally been thought of as the domain of engineering, but performance analysis should be conducted by every person directly involved with the operation, from the operator through executive management. The final component is implementing appropriate performance improvements and determining whether those improvements affect the industrial operation in the expected manner, which is revealed through changes in the measurements. All four components of this loop must continuously work in concert to have an effective operational performance analysis system.
Automation technologies are the most promising for the effective development and calculation of operational performance analytics and for their presentation to operations and management. Also, automation technologies offer one of the best ways to improve operational performance.
Operational Performance Analysis Loops

As the model in Figure 37-1 shows, operational performance analysis can be viewed almost as a control loop. In complex manufacturing operations, multiple nested loops are required for effective operations performance analysis (Figure 37-2). Effective operational performance will only be realized if these loops are both cascaded top to bottom and operate in a bottom-up manner. The lowest-level loop is the classic process control loop. The next level represents an operational control loop, which drives the operational performance of a larger industrial operation than a control loop and may actually provide operational outputs that are set points to multiple process control loops. Advanced control loops utilize advanced control technologies, such as multivariable predictive control, linear optimization, or nonlinear optimization. The next level involves plant business control loops. These loops evaluate plant business measures, such as key performance indicators (KPIs) and accounting measures, and determine the most effective operational control outputs that become set-point inputs to multiple operational controllers. The fourth level involves enterprise control loops, which take strategic set points from the business strategy functions and provide business outputs that become set points to the plant business controllers. Although this multilevel
cascade control perspective may be a simplification, fully integrated cascade control from the plant floor through the enterprise business systems has been the objective of both automation and information systems since the advent of the computer in manufacturing.
The higher levels of operational performance loops may involve much more complicated control algorithms or processes than the lower-level loops. The higher-level control mechanisms may involve recipe management, product management, production planning, resource allocation, profit analysis, and other functions not traditionally viewed as control functions. The general multiple cascade control model applies in both continuous and batch manufacturing operations with the control approach at each level adapted to the process characteristics. This cascade operational performance analysis model provides both a bottom-up and a top-down perspective of effective operational performance analysis. The bottom-up perspective is the classic operational perspective of “doing things right” (i.e., making sure that the resources of the operation are most effectively deployed and managed to do the job at hand). The top-down perspective involves doing the “right things” (i.e., making the right products at the right time and cost to maximize market value for the business). Both doing the right things and doing things right are necessary for effective operational performance management.
As with any cascade control system, the lower-level loops must be operating correctly and effectively to enable the upper-level loops to work. Trying to produce the correct product slate with all the basic control loops working poorly will produce a poor result. Also, as with any cascade control system, the time constants of the loops must increase by at least a factor of four from the lower-level to the upper-level loop. This will prevent an upper-level loop from requesting responses faster than the lower-level loop can respond. Maximizing the control effectiveness of any of these loops involves both controlling the operation of the process and the operational condition of the process. Controlling the operation of the process involves driving the process at a level that meets the requirements as input from the next higher-level loop in the cascade. Controlling the operational condition of the process involves ensuring the process is in a condition at which it can respond as required. These two aspects must be managed in a collaborative manner to optimize the operational performance of any industrial operation.
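As a rough illustration of the cascade and the factor-of-four separation of time constants, the following minimal sketch runs a two-level cascade in which the outer (operational) loop writes the inner (process) loop’s set point and executes four times slower. The PI gains, periods, and first-order process responses are illustrative assumptions:

```python
# Minimal sketch of a two-level cascade: an outer loop updates the set point
# of an inner loop and runs at least four times slower, as described above.
# All gains, periods, and process dynamics are illustrative.
class PILoop:
    def __init__(self, kp, ki, period_s):
        self.kp, self.ki, self.period_s = kp, ki, period_s
        self.integral = 0.0

    def update(self, set_point, measurement):
        error = set_point - measurement
        self.integral += error * self.period_s
        return self.kp * error + self.ki * self.integral

inner = PILoop(kp=2.0, ki=0.5, period_s=1.0)   # process control loop
outer = PILoop(kp=0.8, ki=0.1, period_s=4.0)   # operational loop, 4x slower

process_value, business_value = 0.0, 0.0
for t in range(60):
    if t % int(outer.period_s) == 0:
        # The outer loop's output becomes the inner loop's set point.
        inner_sp = outer.update(set_point=10.0, measurement=business_value)
    control_output = inner.update(inner_sp, process_value)
    # Crude first-order responses, purely for illustration.
    process_value += 0.1 * (control_output - process_value)
    business_value += 0.05 * (process_value - business_value)
```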
Process Control Loop Operational Performance Analysis

The base level of operational performance analysis is determining a process control loop’s performance. The basic process control loop comprises the process under control, a measurement device, a process controller, and a valve (Figure 37-3). Each of these four components must be operating effectively for optimal control loop operation. The process must be relieved of conditions that physically constrain the operation of the loop. Limitations may include a constrained energy supply, constrained material supply, constrained output capacity, and reduced process equipment effectiveness. For example, material supply may be constrained by a poorly sized valve or an underpowered pump. The process equipment may be limited by scaling in pipes or residue on the walls of a vessel. The measurement instrument may need calibration. The controller may need tuning. The valve may be worn or may stick. Any of these conditions will result in an underperforming loop.
Traditionally, plant engineers or maintenance staff would need to conduct a process control loop operational performance analysis manually. With thousands of control loops in a typical industrial operation, process control loop analysis involved significant human resources. With the downsizing of engineering and maintenance staffs in industrial operations over the past 15 years, the level of control loop performance analysis in many plants was significantly reduced, resulting in many poorly performing loops. It has been fascinating to notice investments in higher-level optimization in plants with poorly operating control loops. In these instances, the impact of the optimization was nullified by poorly performing controls. The result is that control loop performance has become a critical weakness in many process manufacturing operations. Recently, a new class of software has been marketed that provides continual automatic analysis of all process control loop components. Figure 37-4 shows a display generated by loop analysis software. This software analyzes the controller tuning, instrument calibration, process constraints, valve operation, and other aspects of a process control loop to identify problems and impending problems, and in many cases, offer solutions. This has been a major step forward in providing effective regulatory control on top of which higher levels of operational and business control and management functions can more effectively perform.
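Commercial loop-analysis packages are far more sophisticated, but a minimal sketch conveys the idea of continuously screening loop data for the symptoms listed above. The thresholds and heuristics here are illustrative assumptions, not any vendor’s algorithm:

```python
# Minimal sketch of an automatic loop-health check in the spirit of the
# loop-analysis software described above: it flags output saturation
# (possible valve/pump sizing or sticking), sustained error (possible
# calibration or tuning issues), and oscillation. Thresholds are illustrative.
import statistics

def loop_health(set_points, measurements, outputs, out_lo=0.0, out_hi=100.0):
    errors = [sp - pv for sp, pv in zip(set_points, measurements)]
    findings = []
    saturated = sum(1 for u in outputs if u <= out_lo or u >= out_hi)
    if saturated / len(outputs) > 0.05:
        findings.append("output saturated >5% of time: check valve/pump sizing")
    if abs(statistics.mean(errors)) > 2.0:
        findings.append("sustained offset: check calibration or tuning")
    # Oscillation heuristic: frequent sign changes of the control error.
    sign_changes = sum(1 for a, b in zip(errors, errors[1:]) if a * b < 0)
    if sign_changes / len(errors) > 0.4:
        findings.append("oscillatory error: check tuning or valve stiction")
    return findings or ["loop appears healthy"]

print(loop_health([50.0] * 6, [50.2, 49.7, 50.3, 49.8, 50.1, 49.9],
                  [40, 42, 41, 43, 42, 41]))
```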
Although the cascade of operational performance loops model (Figure 37-2) shows four levels of control loops, this is a simplification. Each level may actually be more complex than what is implied. For example, the basic control loop level may include single-loop feedback controllers or simple cascade controllers. The loop performance analysis software commercially available typically handles both single and cascade control loops. Improving the performance of the control loops in a production operation can result in significant financial gains from the operation. It has been estimated that up to 50% of all the improvement potential within a process plant may be realized through effective loop management. The control at this level is classic control of process variables, such as flow, level, pressure, temperature, and composition, which is difficult to translate into specific financial improvement value; however, without effective controls at this level, higher-level manufacturing performance improvement activities will not be effective.
Advanced Control Operational Performance Analysis

The next level in the control hierarchy is advanced control. For the purposes of
operational performance analysis, this level includes advanced process control, multivariable predictive control, linear optimization, and nonlinear optimization of a major functional process segment within a process manufacturing plant. The control strategies at this level involve multiple measurement inputs from process instrumentation connected to the process segment being controlled. The multiple measurements are evaluated through some advanced mathematical model, and outputs are transmitted to the set points of regulatory controllers. As with the process control loop, advanced process controllers may be cascaded. For example, the output of a linear optimizer may be used to set constraint conditions in a multivariable controller.
Because advanced process controllers and optimizers are provided as software applications, the applications may have built-in performance analysis to warn engineers when adjustments are required. These diagnostics are sometimes developed by comparing the process response expected from a particular control output against the actual response. When the difference exceeds a threshold limit, a warning is generated. The necessary response when this happens is specific to the process segment and control strategy. At this level, the engineering staff accomplishes a considerable percentage of the operational performance analysis manually. As process dynamics or loads change over time, advanced controllers may lose their effectiveness. When this occurs, process operators will frequently put the advanced process controllers into manual, effectively bypassing the control algorithm or process. Unfortunately, this happens very frequently, resulting in less-than-optimal performance of the process segment. Estimates of the share of installed advanced controllers that have been switched to manual often exceed 50%. Advanced process control can provide significant financial benefit to a plant operation, but significant resources must be invested to ensure that it is not turned off. One of the most effective tools in monitoring the performance of advanced control implementations may be business performance dashboards that provide real-time feedback on the operating performance of the process segment under control. The actual development and construction of the dashboards is considered a business control function at the next level up in the hierarchy. Using these business intelligence dashboards at the advanced control level is a good example of how the top-down flow of intelligence can be valuable. These dashboards should be available to operations, maintenance, and engineering to have maximum effect and should provide information on both the operating performance and the financial performance of the process. If performance starts to decline, root cause analysis can be initiated. Engineering can check the tuning of the advanced control strategy, maintenance can check for increased process constraints due to changes in the operating condition of the process, and
operations can investigate operating changes that may have had a negative impact on performance. Operators tend to keep advanced control in automatic mode when they understand the performance impact these controls are having. Significant business benefits can be gained when the manufacturing organization collaborates to resolve performance issues and well-orchestrated, real-time business intelligence and guidance systems encourage the collaboration.
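A minimal sketch of the expected-versus-actual diagnostic described above follows; the residual threshold and the sample data are illustrative assumptions:

```python
# Minimal sketch of model-mismatch monitoring for an advanced controller:
# compare the response predicted by the controller's internal model against
# the measured response, and warn when the residual exceeds a threshold.
def monitor_model_mismatch(predicted, actual, threshold):
    """Yield a warning for each sample where |predicted - actual| > threshold."""
    for k, (p, a) in enumerate(zip(predicted, actual)):
        residual = abs(p - a)
        if residual > threshold:
            yield f"sample {k}: residual {residual:.2f} exceeds {threshold}"

predicted = [10.0, 10.5, 11.0, 11.4, 11.8]    # model-expected response
actual    = [10.0, 10.4, 10.7, 10.2,  9.8]    # measured response drifting away
for warning in monitor_model_mismatch(predicted, actual, threshold=0.5):
    print(warning)
```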
Plant Business Control Operational Performance Analysis

The next category of control loop in the hierarchy is the plant business control loop. As the name implies, this level of control is targeted at controlling the business measures of the plant, as compared to the control of physical variables at the first two levels. One of the highest barriers to implementing plant business control strategies is developing correct business performance measures. Most manufacturing operations have not put in the effort to develop a comprehensive business performance measurement system, which makes control at this level impossible.
There are two types of business performance measures that must be developed and made available in real time for plant business control to become a possibility: operational performance measures and financial performance measures. Operational performance measures are often referred to as key performance indicators (KPIs), while financial performance measures are referred to as real-time accounting (RTA) measures. The highest-impact business performance measurement systems include both KPIs and RTA. In the 1990s, as competition and globalization increased in the industrial sector, the manufacturing business environment became much more competitive, and the requirement for more detailed operational performance measurement systems increased. Many industrial companies responded by identifying operational performance measures they believed would be useful in evaluating the performance of their plants. Production and engineering personnel frequently developed these operational measures, calling them KPIs. KPIs were typically calculated monthly, weekly, or even daily, and were tracked over time to analyze the ongoing performance of the plants and processes. Typically, numerous KPIs were identified and calculated to monitor different aspects of the operational performance of plants and organizations. The ISA95 standards committee identified some of the more common KPIs utilized in industrial operations today. Figure 37-5 presents a subset of those identified in the ISA-95 series, Enterprise-Control System Integration. What is most interesting is how many different KPIs are
listed. Many plants monitored 20 or more KPIs for their operations on an ongoing basis. Although having a large number of KPIs may be acceptable for after-the-fact analysis of the various aspects of an operation’s performance, they become useless when trying to manage the plant’s performance in real time. This may be why most manufacturing operations only measured KPIs on a daily basis. A lot can go wrong in a manufacturing operation in a day, and although having daily measures of performance is better than having no performance measures at all, they are essentially useless to the parts of the manufacturing operation responsible for the performance of the operation on a minute-by-minute basis—operations and maintenance.
Analyzing and controlling the operational performance of any process manufacturing operation requires performance measures that are available in the time frame of the process itself—real time. The first step in developing an effective plant business performance measurement system is making the performance measures available in real time. The recent availability of real-time KPIs has been a major step toward business performance control. An important second step in the process is determining which KPIs actually have the greatest business impact on the operation. This must be done in conjunction with the accounting group, which is responsible for the business measurement of the operation. The association between the performance of the KPIs being measured and the financial
accounting measures of the operation has been very difficult to discern. Part of the reason is the difference between operations and accounting in their perspectives on what performance is. The second reason is the traditional difference in the time frame of the measures: KPIs are daily measures, while monthly accounting is the norm. Understanding how 20 or more daily KPIs impact monthly accounting measures presents an insurmountable problem.

Composite KPIs, which convey a more holistic view of operational performance by mathematically combining multiple aspects of operational performance into a single measurement, have recently started to gain popularity. One good example is the overall equipment effectiveness (OEE) KPI. OEE is formed by the product of three traditionally independent KPIs: product quality, equipment availability, and production performance. A composite metric of this type provides a combined performance measure of a plant or plant area. The measure is impacted by the performance of the production, maintenance, and quality teams in the plant; all three teams need to be working efficiently and effectively for the measure to improve. The intent is to encourage a sense of collaboration between these organizational functions. The potential downside of a composite measure of this type is that it can obscure the root cause of a problem by compressing too much information into a single number. As a result, it can actually be less effective in helping front-line operations and maintenance personnel modify their behaviors to drive performance improvements. This can be easily remedied by supplementing the composite measure with its component measures, such as separate quality, availability, and production performance measures, which provides the information operations and maintenance personnel need to make minute-by-minute and second-by-second decisions, as well as the information required for root cause analysis.

The measures executives and managers depend on to analyze the business and report results come from accounting. KPIs may be of interest to management, but that interest is minor—partly because they cannot perceive the relationship between the KPIs and the accounting measures. Unfortunately, accounting measures, as currently compiled, are also inadequate in helping them manage the performance of manufacturing. Accounting measures are compiled over entire plants, for the entire enterprise, and on a monthly basis. The information in the financial reports is not at the required level of granularity, from either a time or a space perspective, to support good operational performance analysis and decision-making. From a time perspective, executives really need to know what is going on in their operations in real time (i.e., as it is happening). From a space perspective, they need information right down to the operating unit level so they can get to the root causes of problems impacting performance.
Knowing that a problem exists is important, but remediation requires understanding what the problem is. Monthly financial information may be adequate to track the financial performance of an operation, but it is inadequate for managing operational performance.
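Returning to the OEE composite introduced above, the metric and its components are straightforward to compute, and reporting them together preserves root-cause visibility. A minimal sketch, with illustrative values:

```python
# Minimal sketch of the OEE composite KPI described above, reported alongside
# its component measures so the root cause stays visible. Values are
# illustrative fractions (1.0 = 100%).
def oee(availability, performance, quality):
    """Overall equipment effectiveness: product of the three components."""
    return availability * performance * quality

components = {"availability": 0.90, "performance": 0.95, "quality": 0.98}
print("OEE = {:.1%}".format(oee(**components)))   # OEE = 83.8%
for name, value in components.items():
    print(f"  {name}: {value:.1%}")               # keep components visible
```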
The separation of financial and operational measurement systems in manufacturing operations has resulted in a major disconnect between management and plant operations teams. Plant teams might execute improvement projects targeted at improving one or more KPIs. If the project was successful and the plant teams claimed some incremental financial benefit based on the change in KPIs, managers would check the claimed results with accounting to determine if the benefit really happened. If accounting could not detect the improvement, which was typically the case due to the monthly and plant-wide nature of the financial information, the claimed value of the improvements would be rejected. In these instances, it did not matter what the truth was—the fact was that the improvements did not happen. This situation is a disaster for automation teams because it means that they never appeared to realize the returns on the capital investments that were projected to gain the project funding. Animosity frequently developed between the plant operations teams, who were trying their best to improve performance, and accounting, who honestly could not detect the improvements. More than one CFO has commented, “If one more engineer tells me how much money he has made my company based on a KPI, I will fire his....” Clearly, this has created a huge organizational disconnect based on a serious lack of alignment between the different measurement systems deployed in industrial plants. The most effective operational performance measurement and analysis system must combine both operational and accounting measures. This is often very difficult for engineers and operations to accept due to the animosity that has grown between them and the accounting team.
Real-Time Accounting

The good news is that a fundamental change in the way cost accounting systems work is taking place. This change is starting to bring visibility to the benefit of automation systems, as well as other improvement initiatives on the plant floor. Automation systems are critical to the implementation of this change. Traditional cost accounting systems provide plant-wide monthly financials. Variance reports created in the accounting systems provide the primary output intended for manufacturing performance analysis. Variance reports present monthly cost-per-unit product made for each product line. This information is insufficient for performance visibility into plant operations. Executives
have been requesting extensions of cost accounting systems that will provide real-time accounting data right down to the process unit level. The availability of such information would make visible the economic impact of any plant capital investment or performance initiative in plants. Accountants have been stymied as to how to generate this financial information on a real-time basis. They have not been able to determine a database source that is available for real-time assessment of the appropriate cost and profit information. Fortunately, plant engineers are well aware of the existence of such a real-time plant database. The database sourced from the plant instrumentation that is already used for process monitoring and control provides an ideal set of data from which the appropriate financials can be modeled. It has been demonstrated that this plant instrument database can be effectively used as source data for plant accounting systems enabling the necessary real-time accounting calculations.
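A minimal sketch of such a real-time accounting model follows: an instantaneous profit contribution computed directly from instrument values and current prices. The tag names, prices, and simple linear cost model are illustrative assumptions:

```python
# Minimal sketch of real-time accounting modeled from plant instrumentation,
# as described above: instantaneous profit contribution computed from flow
# and power measurements plus current prices. All figures are illustrative.
def realtime_profit_rate(product_flow_kg_s, feed_flow_kg_s, power_kw,
                         product_price_per_kg, feed_cost_per_kg,
                         energy_cost_per_kwh):
    """Return profit contribution in currency units per hour."""
    revenue = product_flow_kg_s * 3600 * product_price_per_kg
    material = feed_flow_kg_s * 3600 * feed_cost_per_kg
    energy = power_kw * energy_cost_per_kwh
    return revenue - material - energy

# One scan's worth of instrument values and current contract prices.
print(realtime_profit_rate(product_flow_kg_s=2.1, feed_flow_kg_s=2.4,
                           power_kw=850.0, product_price_per_kg=0.95,
                           feed_cost_per_kg=0.40, energy_cost_per_kwh=0.11))
```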
The most appropriate systems for real-time accounting calculations are automation systems (Figure 37-6). Executing real-time accounting in information technology (IT) systems would be very difficult due to their lack of real-time computing capability and the lack of process connection. Automation systems are designed to work in real time and are process connected, thereby making them ideal for real-time accounting. This requires dissolution of the traditional separation between IT for business functions and automation systems for engineering functions. Both exist to support the business of manufacturing. Real-time accounting models typically execute in process controllers, and although the approach is still in its infancy, the initial results have been very promising.
KPIs must also be determined in real time to empower operational performance management. The same process used for developing real-time accounting applies to real-time KPIs. Because manufacturing and production operations have large numbers of instruments and measured variables (flow, level, temperature, pressure, speed, composition, etc.), real-time KPIs can be directly modeled from instrument-based measures on a second-by-second basis (Figure 37-7). Each KPI can be evaluated with respect to its financial impact on the operation by comparing changes in the KPI with the corresponding changes in the RTA measures. Some KPIs that may have been thought to provide significant financial impact may not, while changes in others may have a profound financial impact on the operation’s performance. The overall number of KPIs required to measure an operation can often be reduced to those with significant financial impact. Fewer high-impact measures result in simpler high-impact performance analysis.
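One simple way to perform this KPI-versus-RTA comparison is to correlate each KPI series with the RTA profit series and rank KPIs by correlation strength. A minimal sketch, assuming Python 3.10+ for statistics.correlation and using illustrative data and KPI names:

```python
# Minimal sketch of ranking KPIs by financial impact, as described above:
# correlate each real-time KPI series with the real-time accounting (RTA)
# profit series and keep the strongest. Data and KPI names are illustrative.
from statistics import correlation  # Python 3.10+

rta_profit = [100, 102, 98, 105, 110, 103, 99, 107]
kpis = {
    "energy_per_tonne": [9.8, 9.6, 10.1, 9.4, 9.0, 9.7, 10.0, 9.3],
    "changeover_count": [3, 2, 3, 2, 3, 2, 3, 2],
}
impact = {name: abs(correlation(series, rta_profit))
          for name, series in kpis.items()}
for name, r in sorted(impact.items(), key=lambda kv: -kv[1]):
    print(f"{name}: |r| = {r:.2f}")
```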
Both the RTA measures and KPI measures can be aggregated to hourly, shift, daily, weekly, or monthly values by using standard process historian software. This provides a consolidated, time-sequenced performance measurement database for the analysis of both accounting and operational (KPI) performance measures. This historian data is ideal for ongoing performance analysis, including performance event recognition and prediction. This is exactly the type of measurement information required for plant business control. There are two essential design criteria for developing a combined KPI and accounting performance measurement system. First is getting both sets of metrics to the same time
basis—real time. Second is the prioritization of the combined accounting measures and KPIs. Prioritizing the measures is very important because typically the combined collection of performance measures can be quite large. Managing a large number of performance measures in real time is very difficult. It is essential that the most important measures of operational performance are considered before those that are of less importance. The current manufacturing strategy being executed within the plant serves as the basis for prioritizing the performance measures. The manufacturing strategy may be production-, cost-, or agility-based depending on the current market conditions. Other manufacturing strategies may also be appropriate. The priorities of the combined accounting and KPI measures will be different under each of these strategies. Manufacturing strategy serves as a prioritizing filter or strategic lens for the combined accounting and KPI measures (Figure 37-8).
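A minimal sketch of such a strategic lens follows: the active strategy supplies weights, and the highest-weighted measures surface first. The strategy names, weights, and measure names are illustrative assumptions:

```python
# Minimal sketch of the "strategic lens" idea in Figure 37-8: the current
# manufacturing strategy supplies weights that prioritize the combined
# KPI/accounting measures. All names and weights are illustrative.
strategy_weights = {
    "production": {"throughput": 1.0, "oee": 0.8, "unit_cost": 0.4,
                   "energy_cost_rate": 0.3, "changeover_time": 0.2},
    "cost":       {"unit_cost": 1.0, "energy_cost_rate": 0.9, "oee": 0.5,
                   "throughput": 0.4, "changeover_time": 0.3},
    "agility":    {"changeover_time": 1.0, "throughput": 0.5, "oee": 0.4,
                   "unit_cost": 0.4, "energy_cost_rate": 0.2},
}

def prioritized_measures(strategy, top_n=4):
    """Return the top-N measures for the active strategy, highest weight first."""
    weights = strategy_weights[strategy]
    return sorted(weights, key=weights.get, reverse=True)[:top_n]

# Under a cost-based strategy, these measures would headline the dashboards.
print(prioritized_measures("cost"))
```

Selecting only the top few measures keeps the real-time view manageable for front-line personnel, a point taken up below.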
As plant and enterprise business control are considered, one of the dynamic changes in the marketplace impacting this level of control is the increase in the speed of business. Almost every aspect of industrial business is starting to change in real time. A decade ago, energy and material contracts were developed for 6 months or even a year; these long-range contracts rendered the associated costs essentially constant over extended time frames. Most product prices were also constant over extended periods of time. This is just not the case anymore. Energy and material costs may change multiple
times each day, with almost as frequent product price fluctuations. Manufacturing strategies need to change at a similar frequency. For example, in the power industry, it is not unusual to find the production strategy of a single plant changing multiple times in a single day. The business of industrial production is becoming much more real time, and operational performance analysis must match this dynamic. Real-time operational performance analysis systems are no longer just nice to have—they are absolutely necessary for effective and efficient operations.

As with any control system, once the performance measurement system is in place, operational performance can be analyzed and improvements made. It is difficult to separate these two because the improvements should be the direct actions that result from the analysis. Prioritizing the performance measures according to the manufacturing strategy is not just an organizing function; it is also a first step in analyzing operational performance. The prioritized performance information provides the basis for communicating the current operational performance to every person in the operation, enabling them to make performance-improving decisions in real time. The information presented to each front-line person should typically be a prioritized view of the top performance measures. A dashboard display for each operator and maintenance person may be the most effective way to convey this information (Figure 37-9). Because these front-line workers are managing their operations in real time, only the four most important performance measures are presented; it is difficult for humans to deal effectively with more than four competing real-time inputs at the same time. Experience has shown that real-time performance management by front-line employees provides greater potential for performance improvements than much more mathematically sophisticated, closed-loop approaches. Humans adjust faster than mathematical models and tend to be far superior at handling unexpected situations.
Real-time strategic business guidance can be provided to all organizational levels in a very similar manner. This requires a multilevel business guidance approach, with the strategic business information contextualized to the job of every person in the operation, from the front line to the executive suite. This approach tends to break down traditional organizational silos by pulling the different functions in a manufacturing organization together around the mission of improving operational performance through collaboration driven by changes in business performance measures. One of the most underutilized resources in most manufacturing operations is people; strategic business guidance systems tap into this underutilized resource, providing exceptional results. Incremental business improvement may be available through more sophisticated engineering analysis of the performance measurement database. Engineering may perform model-based performance analysis, historical event analysis, asset performance analysis, and other sophisticated performance identification and improvement approaches within the operation. It may be tempting for engineers with a technical interest to jump right to these sophisticated techniques, but it is usually more effective to empower the operations, maintenance, and management personnel with strategic business information first. These sophisticated approaches should typically be used to top off the performance of these operations. The availability of prioritized financial and operational business performance measures
opens the door for closed-loop business control (Figure 37-10). Closed-loop business control is a technology that is currently under development. New approaches combining algorithmic and heuristic techniques are offering the hope that the extension of control theory to the business may be a reality in the very near future. This is a truly exciting area for control engineers. We appear to be at the doorstep of the next major advancement in the science of control.
Enterprise Business Control Operational Performance Analysis

The concepts of enterprise business control are very similar to those of plant business control. Real-time operational and financial performance measures must be developed in each plant in the enterprise rather than across the enterprise as a whole. These are developed as an extension of the process discussed previously. Once the performance measurement system is in place, the control strategy needs to be implemented. Empowering key operational personnel across the enterprise with prioritized performance measurement information that communicates what is taking place in each plant, along with the business information at the corporate level that indicates how the enterprise can maximize profits, provides the decision-making information required to drive higher levels of operational and business performance.
It is at the enterprise level that the drivers for “doing the right things” for the business of the enterprise originate (Figure 37-11). It is on the plant floor that the drivers for “doing things right” originate. The four-level cascade control hierarchy brings these two together so that the business is continually doing the right things right. This is the ultimate objective of any manufacturing business, and it can only be realized by coordinating bottom-up process control with top-down business control. This is the new control domain, and every automation and control engineer must strive to acquire the skills to succeed across the entire control landscape. The operational and business performance of your company depends on it.
Summary

The phrase operational performance analysis seems quite simple, but effective operational performance analysis requires the combination of effective operational performance measurement from both an accounting and an operational perspective, effective analysis of those measures with respect to the ideal state, and effective actions to improve the performance. All these components must work together for optimal operational performance, which is the ultimate objective of industrial automation technologies.
Further Information

Berliner, Callie, and James Brimson. Cost Management for Today’s Advanced Manufacturing. Boston: Harvard Business School Press, 1988. (This book presents the logic behind the movement to more timely cost management and accounting approaches.)

Blevins, T., et al. Advanced Control Unleashed: Plant Performance Management for Optimum Benefit. Research Triangle Park, NC: ISA (International Society of Automation), 2003. (This provides a nice overview of the value advanced control can have on an operation.)

Cooper, Robin, and Robert S. Kaplan. Cost & Effect: Using Integrated Cost Systems to Drive Profitability and Performance. Boston: Harvard Business School Press, 1998. (This presents a history and projection of cost accounting and its impact on manufacturing operations.)

Harvard Business Review on Measuring Corporate Performance. Boston: Harvard Business School Press, 1998. (This book presents a series of essays on topics related to overall business performance measurement by leading academics and thinkers in the field.)
ISA-95 Series. Enterprise-Control System Integration. Research Triangle Park, NC: ISA (International Society of Automation).

Martin, Peter G. Bottom-Line Automation. Research Triangle Park, NC: ISA (International Society of Automation), 2002. (This book presents an overview of real-time performance measurement and management systems and how they can impact business and operational performance.)

MESA Metrics That Matter Guidebook. A MESA International Guidebook. Chandler, AZ: MESA, 2006. (This study presents an overview of both operational and business performance measurement systems.)

Trevathan, Vernon L., ed. A Guide to the Automation Body of Knowledge. Research Triangle Park, NC: ISA (International Society of Automation), 2006. (This is the first edition of this guide, which covers a wide variety of related topics.)
About the Author

Peter G. Martin, PhD, has over 35 years of experience in industrial automation and control and holds multiple U.S. and international patents. Martin has authored numerous
articles and technical papers. In addition, he has authored three books, coauthored two books, and was a contributing author on three more. Martin was named a Hero of U.S. Manufacturing by Fortune magazine; he was named one of the 50 Most Influential Innovators of All Time by InTech magazine; and he received the Life Achievement Award from the International Society of Automation (ISA). In 2013, Martin was elected to the Process Automation Hall of Fame and was selected as an ISA Fellow. He has a bachelor’s and a master’s degree in mathematics, a master’s degree in administration and management, a master’s degree in biblical studies, a doctorate in industrial engineering, and a doctorate in biblical studies.
XII Project Management
Automation Benefits and Project Justifications

Many manufacturers felt it was important just to get new technology installed in order to have the potential to run their operations more efficiently. In reality, there are many aspects that must be understood and effectively managed when dealing with automation benefits and project justification. These include understanding business value in production processes, capital projects, life-cycle cost analysis, life-cycle economic analysis, return on investment, net present value, internal rate of return, and project-justification hurdle levels. Each of these topics is addressed in detail, along with a discussion of an effective way to start developing automation benefits and project justifications.
Project Management and Execution

Automation professionals who work for engineering contractors and system integrators, and who see the project only after it has been given some level of approval by end-user management, may not realize the months or even years of effort that go into identifying the scope of a project and justifying its cost. In fact, even plant engineers who are responsible for performing the justifications often do not realize that good processes exist for identifying the benefits from automation. In the process of developing the Certified Automation Professional® (CAP®) certification program, the first step was to analyze the work of an automation professional. By focusing on describing the job, it became clear how important project leadership and interpersonal skills are in the work of an automation professional. That analysis helped the CAP development team realize that these topics had to be included in any complete scope of automation. If the job analysis team had jumped immediately into defining skills, they might have failed to recognize the importance of project leadership and interpersonal skills for automation professionals—whether those professionals are functioning in lead roles or
not.
Interpersonal Skills
Interpersonal skills cover the wide range of soft skills from communicating and negotiating to motivating. Process automation professionals need interpersonal skills to communicate, work with people, and provide appropriate leadership. A balance of technical, business, and interpersonal skills is needed on every project or team.
38 Automation Benefits and Project Justifications

By Peter G. Martin, PhD
Introduction

For many industrial operations, installing one of the early computer-based automation systems was taken for granted. That is, many manufacturers felt it was important just to get the new technology installed in order to have the potential to run their operations more efficiently. Little consideration seems to have been given to the actual economic impact the system would provide. A survey of manufacturing managers indicated that the primary motivators driving manufacturers to purchase automation systems included their desire to:
• Improve plant quality
• Improve safety
• Increase manufacturing flexibility
• Improve operations reliability
• Improve decision-making
• Improve regulatory compliance
• Increase product yields
• Increase productivity
• Increase production
• Reduce manufacturing costs

Although few would disagree with this list, it appears that these criteria were seldom taken into consideration either during the purchase of an automation system or over the system’s life cycle. However, most of the criteria listed have a direct impact on the ongoing economic performance of the manufacturing operation.
Identifying Business Value in Production Processes
Certainly, there are many different types of production and manufacturing processes for which the effective application of automation systems may provide improved business value. Each would need to be evaluated on its own merit, but all production processes have similar basic characteristics that can help to identify the potential areas of value generation.

Figure 38-1 provides a very general and high-level functional diagram of a production process. From this diagram, it is fairly straightforward to develop a discussion of at least some general areas of potential value generation from automation technology. Essentially, every production process consumes energy and raw materials, transforming them into products. Simplistically, the business value of the production process can be viewed as the product value minus the energy cost, the material cost, and the transformation cost over a given time period or production run. Therefore, the potential business value improvements are energy cost reductions, material cost reductions, transformation cost reductions, and production increases. Of course, if there is not incremental market demand for the product or products being produced, an increased production rate may lead to a negative business value proposition due to the incremental storage costs required to handle the surplus inventory.

It is critical to the success of any production business that the production team does not create false economics by, for example, reducing cost in one area and taking value credit for that reduction while inadvertently creating cost in other areas or negatively impacting production. Production operations have lost credibility with the financial teams in their organizations by taking too narrow a view of the business value generated. I recommend involving plant accounting personnel on the project team as early as possible to develop an appropriate analysis from a financial perspective.
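The value relationship described above reduces to simple arithmetic; a minimal sketch with illustrative figures:

```python
# Minimal sketch of the business-value relationship described above for a
# single production run; all figures are illustrative.
def production_business_value(product_value, energy_cost, material_cost,
                              transformation_cost):
    """Business value = product value - energy - material - transformation."""
    return product_value - energy_cost - material_cost - transformation_cost

print(production_business_value(product_value=120_000, energy_cost=18_000,
                                material_cost=55_000,
                                transformation_cost=22_000))  # 25000
```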
Controlling costs and production can provide value to many production operations, but there are other higher-level decisions that can also drive business value. Going back to Figure 38-1, the two arrows coming into the top of the production process block
represent decisions on “what to make” and “how to make it.” These decisions can have a huge economic impact in a multiproduct and/or multigrade operation. Certainly, the cost and production value of each potential product and each potential grade will be different and variable. This functionality is typically referred to as production planning and scheduling. For some production operations, the most reasonable time frame for making planning and scheduling changes is daily or even weekly, but more and more operations are moving toward an agile manufacturing approach in which changes can be made much more frequently. In some manufacturing operations, this agility can directly translate into business value. The involvement of plant accounting personnel will certainly help in estimating the appropriate value of these improvements. This is a general overview of potential areas of business value generation in production processes. The general model can be applied to specific manufacturing operations to identify specific business benefit areas and values. There is a very real danger in being too simplistic in identifying business value generation potential in production processes. Simplistic analysis may lead to poor investments, which can in turn lead to an organizational reticence toward automation spending.
Capital Projects

Automation systems are purchased using manufacturers’ capital budgets. Any discussion of the economic benefits of automation systems and technologies must be from the perspective of the capital budgeting and project process. Capital budgeting is typically a long process, often spanning multiple years for each capital project. The process is initiated when a manufacturing operation identifies a need for a capital project and then develops a nomination package for the proposed project, which is forwarded to corporate planners. Corporate planners evaluate all the nominated projects against qualifying criteria, as well as available capital, and then select a set of nominated projects for implementation, typically for the following fiscal year. At this point, the project moves from planning to execution. A project team is convened and provides a bid package to a set of vendors who can provide the products and services necessary to satisfy the defined project requirements. The vendors are evaluated, one is selected, and the order is negotiated and purchased. The project is executed to install the system, start it up, and get it operational. It is interesting to note that in the typical capital project process, automation system vendors have little to no say in what the actual solution is. By the time the request for proposal (RFP) is put out for bid, the solution has already been defined. Vendors must only respond with the lowest possible priced system that meets the solution definition.
Over the past decade, as manufacturing companies have significantly reduced headcount in their engineering departments, this issue has become increasingly important: the vendors now may have a stronger engineering talent base than the manufacturers and may be in a better position to define performance-generating automation solutions.
Since automation systems are purchased from capital budgets, it is very important to understand capital budget economics in order to effectively analyze the economic benefits of automation systems. Figure 38-2 displays a classic life-cycle capital economic profile. The lower bar chart represents the cost of the capital project, including hardware, software, engineering, installation, start-up, and commissioning, as well as ongoing annual system operations and maintenance costs. The costs in an automation project tend to be quite high at the beginning of the life cycle due to the purchase of the system, engineering, installation, and start-up. The costs tend to level out after start-up and are largely comprised of ongoing engineering, operations, and maintenance of the system. Toward the end of the life cycle, annual automation costs tend to increase due to aged equipment, spare parts, and repairs, as well as increased training levels. A review of just over 80 automation projects revealed that most manufacturers have a fairly good understanding of their automation costs even if they do not have specific programs to capture them over the life cycle of the equipment. As Figure 38-2 shows, the cost of the automation system is often a small percentage of the potential benefit derived from effectively utilizing the system. This differential indicates that any effective analysis of an automation system should focus on the business benefits as compared to the system costs.
The upper dashed line represents the annual economic benefit derived by the deployment of the automation system. Notice that this value begins at start-up and is expected to continuously grow over the useful life of the automation system due to the expectation that plant personnel will continue to use the technology to implement improvements over its life cycle. The same review of the automation projects, which
revealed that most manufacturers have a good handle on the cost of their automation systems, also revealed that few of them had any real understanding as to what the true benefits were due to the implementation of automation systems. This is because the benefit line in Figure 38-2 is seldom, if ever, measured. The finance systems in most manufacturing operations cannot capture benefit information at the required level of specificity. This is a huge problem when trying to assess the true economic benefit provided by automation investments.
Life-Cycle Cost Analysis
The lack of any effective way to measure the benefit from automation investments has reduced the justification of automation expenditures to showing that the proposed system offers a life-cycle cost savings over the existing system (Figure 38-3). This relegates the justification of the new automation system to an automation cost savings with no associated measurable benefit, which in turn relegates the automation technology to a cost without benefit. Any offering evaluated only from the perspective of cost becomes a “necessary evil,” moving automation technology toward commoditization. This has been the case with automation systems over the past decade.
The following model was developed to reflect this life-cycle cost view of automation systems. The basic equation used for the analysis of the life-cycle cost is:

Life-Cycle Cost = Price + Project Engineering + Installation + Ongoing Costs

It is interesting to note that the system price tends to be a reasonably small component of the overall automation system cost. In fact, studies have placed the average price at less than 35% of the total project cost, without even taking into consideration the ongoing costs. This tends to demonstrate one of the deficiencies in the price-only
approach.
Approximately 80 automation projects were analyzed and from the data collected, an actual life-cycle cost profile was developed (Figure 38-4). An interesting aspect of the data collected was how the life-cycle cost data was distributed. Price, which had traditionally been the primary economic variable in the automation system decision, represents only 24.2% of the life-cycle cost on average. This clearly demonstrates the deficiency of a price-only automation system evaluation. Between the initial engineering costs and the 5-year life-cycle engineering costs, the engineering of the automation system accounted for 37.8% of the costs. This is considerably greater than the price of the initial system.
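A minimal cost roll-up illustrates how such a profile is assembled. The absolute figures below are invented, but they are chosen so the computed shares reproduce the quoted distribution (price about 24% of the total, and initial plus 5-year engineering about 38%):

```python
# Minimal sketch of a life-cycle cost roll-up consistent with the breakdown
# discussed above. The absolute figures are illustrative; the percentage
# shares fall out of the sums.
costs = {
    "price": 500_000,
    "initial_engineering": 450_000,
    "installation": 230_000,
    "ongoing_engineering_5yr": 330_000,
    "operations_5yr": 280_000,
    "maintenance_5yr": 275_000,
}
total = sum(costs.values())
for item, value in costs.items():
    print(f"{item:>24}: {value / total:6.1%}")
print(f"{'life-cycle total':>24}: {total:,}")
```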
The expansion of economic perspective from price to life-cycle cost was a major step forward for the industry, but was still very limiting. This is the perspective of most of the industrial automation users today since most of them still use a cost-only economic evaluation approach. If there is no economic benefit to the manufacturing operation for putting in an automation system, then automation systems are certainly not meeting management’s objectives, and probably nothing but the most rudimentary system should ever be deployed.
Life-Cycle Economic Analysis

Providing a more substantial justification for automation expenditures requires accurate measurement of both the life-cycle costs and the life-cycle benefits the manufacturing operation will derive from the effective utilization of the automation system. In interviews conducted with hundreds of manufacturing executives, it was learned that not even one person felt their company was effectively measuring the benefits derived from automation or any other manufacturing improvement activity. Most admitted that they had no idea how to get at these metrics in any reasonable way and that their current cost accounting systems did not provide the detailed economic data needed to infer the benefit value.
A manufacturing operation realizes the economic benefits from automation systems in two areas. The first is manufacturing cost savings through such things as reduced power consumption, raw material costs, and manpower requirements. The second is increased production gained through better asset utilization. Of these, the only one that was regularly monitored was reduced manpower due to automation, because it is relatively easy to measure. The other elements of the benefit calculation are variables that constantly change as products are produced and are, therefore, very difficult to capture.

Justifying automation systems based on complete life-cycle economic profiles gained momentum during the late 1990s. A workshop on this topic at the International Society of Automation (ISA) Technical Conference of 1996, involving professionals from companies such as E. I. DuPont, General Foods, Eli Lilly, and Dow Chemical, contributed a considerable amount of data to help build a life-cycle economic profile. The high-level model in Figure 38-5 was developed as part of this effort. This profile defines the benefit as the ongoing manufacturing cost savings and the production increases resulting from the automation system. Product quality benefits in the process manufacturing industries were determined to be incorporated in the combination of cost savings due to reduced rework and lost products, and corresponding production increases; therefore, it was determined that a separate quality term would not be necessary. The value of improved safety due to the automation system can be handled much in the same manner as quality, since safety incidents typically impact production value. Project costs can be captured in the three general categories of price, initial engineering costs, and installation costs. The ongoing life-cycle costs include engineering costs, operations costs, and maintenance costs for the automation system.
One caution in valuing production increases is that the project team must make sure that increases in production will have a positive financial impact on the business. If, for example, a product has a limited current available market, producing more may result in increased cost to the business due to the cost of storing the surplus production until market demand increases. Involving a production planner on the project team can provide good insight into these areas.
An analysis of system benefits conducted on a small number of automation projects provides some interesting information. The number of projects was small because it was difficult to find projects for which the financial benefit data were available. The data revealed that the benefit due to the automation systems tended to degrade continually over time. Perhaps this is due to a lack of performance monitoring, or perhaps to one of Murphy’s Laws, but it points out the need for better ongoing measures of the operational performance of a production facility in order to sustain the potential benefits of automation in industrial operations.

The life-cycle economic profile provides a much more complete perspective of the overall potential business impact of an automation project investment than previous perspectives have provided. Although few production companies have gone to a complete automation profile for evaluating an automation investment, more and more companies are becoming increasingly sophisticated in their approach to justifying all capital investments. Collecting the data for a full life-cycle economic profile is a necessary first step in utilizing a more sophisticated financial approach to analyzing and justifying a capital investment.
Return on Investment
The most common way to discuss the financial justification for any capital investment is in terms of return on investment (ROI). ROI is defined as the cash inflows resulting from a capital investment, such as an automation system, divided by the initial investment, measured over a given period of time. ROI has traditionally been determined in a variety of ways. The simplest and most common approach is to evaluate the purchase price of the automation system as the initial investment against the cash inflows that result from the deployment of the system. Although the price approach is often utilized, a more complete view of ROI would be to evaluate the purchase price and all other initial (project) costs associated with the project against the accumulated cash inflows. An even more complete investment evaluation would be to evaluate the purchase price, project costs, and ongoing operational and maintenance costs against the accumulated cash inflows; however, this approach is very difficult to execute due to the lack of financial information at the appropriate level of resolution. In any case, the basic ROI evaluation is similar. When the accumulated cash inflows equal the purchase price, the purchase price plus initial project costs, or the purchase price plus the initial project costs plus the ongoing automation costs, depending on which method is utilized, 100% return on investment is achieved. ROI is stated either in terms of the time it takes to reach 100% return or in terms
of the percentage of total cost recovered per year. The time to reach 100% ROI is referred to as the payback period. The projected ROI for an automation system is typically estimated by the project team before applying for capital funding. The project team typically estimates the production improvements that will be realized through the new system. Sometimes an additional ROI analysis is requested from automation system vendors vying for an automation system project. This analysis is constructed based on what the vendor believes the unique features of its offering might be able to provide if effectively used in the manufacturing operations. The good news for both the project teams and the automation vendors has been that after the automation systems are installed and operating, manufacturers almost never go back to determine whether the ROI projections were actually achieved. One of the major contributing factors for not doing this analysis is the aforementioned lack of any effective way to capture the benefit side of the ROI model in an operating plant. The accounting systems in place do not have the level of resolution necessary to systematically calculate and verify an ROI analysis of this type.
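Although the benefit side is hard to capture, the cost-basis arithmetic itself is simple. The following Python sketch illustrates the three ROI bases described above; all figures are hypothetical and chosen purely for illustration.

    # Simple (undiscounted) payback calculation on three cost bases.
    # All figures are hypothetical illustrations, not typical values.

    def payback_period(investment, annual_inflows):
        """Return the first year in which accumulated inflows cover the
        investment (i.e., 100% ROI), or None if it is never reached."""
        accumulated = 0.0
        for year, inflow in enumerate(annual_inflows, start=1):
            accumulated += inflow
            if accumulated >= investment:
                return year
        return None

    price = 400_000              # purchase price
    project_costs = 250_000      # initial engineering and installation
    ongoing_per_year = 50_000    # ongoing engineering/operations/maintenance
    inflows = [300_000] * 5      # estimated annual cash inflows

    print(payback_period(price, inflows))                  # price only -> 2
    print(payback_period(price + project_costs, inflows))  # + project costs -> 3
    net = [f - ongoing_per_year for f in inflows]          # + ongoing costs
    print(payback_period(price + project_costs, net))      # -> 3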
Net Present Value
An ROI analysis is quite simple, but it may also be deceptive if it takes an extended period of time to realize the return. The amount of the investment is based on the value of that money at the time the investment is made, while the inflows may occur at a later time. An expected cash inflow at some future time is not worth the same as that amount today. The more time that passes before a particular cash inflow is received, the less its value is today. Accurate financial analysis requires that future cash inflows be converted to their current value. Making an appropriate determination on how capital is invested requires a proper evaluation of the value of money over time. For example, if a company has a capital budget of $1 million and there are two potential projects that would cost $1 million each, one of which has a uniform return of $2 million over 20 years and the other $1.5 million over 2 years, which would be the better capital investment? The answer to this question may not be immediately obvious. The net present value (NPV) function is designed to address this issue. The NPV of a capital expenditure with an expected set of cash inflows over a period of time is a function that takes the time value of money into consideration. The inputs to this function are the expenditure amount, the expected cash inflows for each year of the investment’s payback, and a discount rate. The discount rate represents the typical interest rate that could be earned on an investment today. The result of this function is a financial amount that represents the current value of an expected
time series of cash inflows minus the initial investment amount. The NPV of any potential investment can be compared to that of other potential investments to determine which is in the best interest of the company. The equation for NPV is:

$$\mathrm{NPV} = \sum_{t=1}^{n} \frac{CF_t}{(1+i)^t} - I_0 \tag{38-1}$$

where $CF_t$ is the expected cash inflow in year $t$, $i$ is the discount rate, $n$ is the number of years over which inflows are received, and $I_0$ is the initial investment.

In the example posed above, assuming a 4% discount rate, the net present value of the $1 million investment with a uniform return of $2 million over 20 years would be $359,033, while the net present value of the $1 million investment with a uniform return of $1.5 million over 2 years would be $414,571. This means that even though the gross return on the first investment is considerably greater than that of the second, the second investment is better for the company because the returns occur over a much shorter period of time.
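Equation 38-1 is easy to reproduce. The short sketch below (the npv() helper is our own, not a built-in) recomputes the two example investments at the 4% discount rate.

    # Reproduces the NPV example above using Equation 38-1.

    def npv(rate, investment, inflows):
        """Present value of year-end cash inflows minus the initial investment."""
        return sum(cf / (1 + rate) ** t
                   for t, cf in enumerate(inflows, start=1)) - investment

    # $1M returning $2M uniformly over 20 years ($100,000/yr)
    print(round(npv(0.04, 1_000_000, [100_000] * 20)))  # 359033
    # $1M returning $1.5M uniformly over 2 years ($750,000/yr)
    print(round(npv(0.04, 1_000_000, [750_000] * 2)))   # 414571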
The NPV function can be directly applied to the life-cycle cost, as well as the life-cycle economic and ROI data to develop a more appropriate financial analysis of an automation investment. It is important that automation engineers working to justify a potential automation investment understand and utilize an NPV approach in order to gain credibility with the business and financial professionals in the company and to enable comparisons between the proposed investment and other investments competing for capital.
Internal Rate of Return
An alternative approach to determining the value of a capital investment over an extended period is the internal rate of return (IRR). The IRR function is based on the same equation as the NPV function, but the variable of interest in the IRR calculation is the discount rate that causes the NPV calculation to be zero (see Equation 38-2). In comparing potential capital investments, a higher value of IRR means that the investment is financially better for the company. Solving the following equation for IRR yields the desired result.
$$0 = \sum_{t=1}^{n} \frac{CF_t}{(1+\mathrm{IRR})^t} - I_0 \tag{38-2}$$
Unfortunately, this equation cannot be solved directly for a given set of data; a method of successive approximation, or trial and error, must be used. As an example, consider the two investments discussed previously. Using a spreadsheet function, the IRR for the $1 million investment with a uniform $2 million return over 20 years is approximately 8%, while the IRR for the $1 million investment with a uniform $1.5 million return over 2 years is approximately 32%. This means that the second investment is the better choice, which is the same conclusion derived from the NPV analysis. IRR provides an effective tool for evaluating multiple potential investments; therefore, engineers need to become familiar with this approach as they work to justify automation investments. Both NPV and IRR are common financial functions in standard spreadsheet packages, such as Microsoft Excel®, as well as standard functions of financial calculators. Using a spreadsheet or financial calculator is an easy way to perform either calculation. It should be noted that although both NPV and IRR provide a more appropriate projection of a capital project’s expected value, neither addresses the fundamental issue involved with automation projects. That is, the benefit value or cash inflows associated with an automation system are seldom, if ever, captured in a cost accounting system. This means that ROI, NPV, and IRR are not verifiable for any automation project over time. No matter which of these methods is employed to justify the automation investment during the evaluation process, it is still difficult to verify the actual value of a project after it is implemented. This issue must be addressed for automation investments to gain financial credibility.
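The successive approximation can be automated with a simple bisection search; a spreadsheet IRR function does essentially the same thing. The sketch below reproduces the roughly 8% and 32% figures for the two example investments.

    # Finds the discount rate that drives NPV to zero (Equation 38-2)
    # by bisection; assumes NPV decreases as the rate increases.

    def npv(rate, investment, inflows):
        return sum(cf / (1 + rate) ** t
                   for t, cf in enumerate(inflows, start=1)) - investment

    def irr(investment, inflows, lo=0.0, hi=10.0, tol=1e-7):
        while hi - lo > tol:
            mid = (lo + hi) / 2
            if npv(mid, investment, inflows) > 0:
                lo = mid        # NPV still positive: the rate must be higher
            else:
                hi = mid
        return (lo + hi) / 2

    print(f"{irr(1_000_000, [100_000] * 20):.1%}")  # ~7.8%, roughly 8%
    print(f"{irr(1_000_000, [750_000] * 2):.1%}")   # ~31.9%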
Project Justification Hurdle
Many business organizations develop capital investment hurdle values that must be exceeded in order to get approval for proposed capital investments. The hurdle value is often known throughout the organization, and any project competing for capital funding must overcome it to even be considered for approval. For example, an organization may have a blanket hurdle value of a 33% return on any capital investment within the first year. Any proposed capital project that shows a greater than 33% return will be considered. For small capital investments, this may be all that is required. For larger capital projects, such as an upgrade of an automation system, overcoming the hurdle value may not be enough. With a limited amount of available capital for projects, the proposed investments providing the highest NPV or IRR will be funded, in order, until the capital is exhausted. Automation professionals must therefore become as proficient as possible in credibly demonstrating value.

It is very difficult to demonstrate that a newer automation system intended to replace an existing system is a reasonable capital investment. The older system may still be able to
do much of what the proposed system can do. For example, automation systems installed 25 years ago may have been a little more challenging to configure than current systems, but it would be difficult to make a reasonable case that the basic control capabilities in the new system offer much—if any—benefit over those of the installed system. Even though there are measurable benefits that can be derived from improved process control, existing systems offer excellent control. Replacement system benefits typically need to be justified based on more advanced applications, such as optimization, simulation and modeling, multivariable predictive control, asset management, safety management, business intelligence, cybersecurity, operating system support, and business system integration. Each of these advanced capabilities must be analyzed on its own merit, and then the complete set of advanced capabilities must be combined to eliminate double and triple counting of benefits that may result from independent analysis.

From an automation cost perspective, developing a business case for replacement automation is very challenging. The cost of operating and maintaining an existing system must be evaluated against the cost of buying, engineering, installing, operating, and maintaining the new system. Since the operating costs of older and newer systems are difficult to differentiate, the primary automation cost issue often comes down to increasing maintenance costs, support for operating systems (security patches, etc.), and failures in the installed system. The increasing maintenance costs typically only become significant when the increasing failure rate of system components combines with the drastically increasing cost of spare parts to make ongoing maintenance of the system extremely expensive. Justifying a new automation investment on this basis alone is almost impossible.

If the installed automation system is failing on a regular basis and the failures cause interruptions or slowdowns in production, the lost value associated with these incidents can provide a compelling economic case for a new system investment. To make a case on this basis, the project team must evaluate the history of the operation of the plant over a number of years and demonstrate an increasing frequency of failures resulting in lost production. The value of the incremental lost production can be determined by multiplying the incremental lost volume by the unit value of the production. In process plants, the expenditure for a new automation system can often be completely justified from a small amount of lost production. This type of justification is often referred to as a lost opportunity approach. Management often views lost opportunity justification with a high degree of suspicion, so the project team must do a thorough analysis to justify a new system investment solely on cost avoidance. The hurdle that must be overcome, or at least considered, is the cost differential between keeping the
older technology and installing newer technology. Even if the newer system is less expensive than the older system was when it was acquired, which is typically the case with automation systems, the initial cost of a new system still presents a huge hurdle. Automation professionals must build a strong case on cost avoidance and the incremental benefits the newer technology will provide in the form of reduced energy costs, reduced material costs, reduced transformation costs, increased production, or improved production planning and scheduling.
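As a purely hypothetical illustration of the lost-opportunity arithmetic described above, the sketch below turns an increasing failure history into annual lost value and a first-year return that can be weighed against a hurdle value. Every figure is invented.

    # Hypothetical lost-opportunity estimate; every figure is invented.

    unit_value = 45.0                  # production value, $ per unit
    lost_units_per_incident = 2_000    # volume lost per failure-induced outage
    incidents_per_year = [2, 3, 5, 8]  # rising failure frequency, last 4 years

    lost_value = [n * lost_units_per_incident * unit_value
                  for n in incidents_per_year]
    print(lost_value)  # [180000.0, 270000.0, 450000.0, 720000.0]

    # Compare the most recent year's avoidable loss to a replacement cost:
    new_system_cost = 1_500_000
    print(f"{lost_value[-1] / new_system_cost:.0%}")  # 48% first-year return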
Getting Started
Justifying automation system investments is, to say the least, an extremely challenging task. But it is also an important task. With such an onerous task, getting started in a reasonable manner is a challenge. Many project teams start with an assessment of the price of a new system and the ongoing operating costs of the existing system because these two areas represent information that may be readily available and accessible. Although there is nothing inherently wrong with this approach, in a typical project justification this information is not very important. Referring to the life-cycle economic profile, it is clear that the benefits due to the effective application of automation technology are much more significant to the justification of an automation investment than all the automation costs combined. It makes good sense, therefore, to start by trying to develop an accurate assessment of the potential benefit of the new technology. These benefits are normally realized in energy savings, material savings, headcount reductions, and production value improvements. Even the value of lost opportunities due to reduced system failures will fit into one of these four categories. Each of these categories should be evaluated for the operation in which the automation system will be installed. The evaluation should take into consideration applications and capabilities provided in the new technology that are not available in the installed system. Twenty years ago this might have included better regulatory controls, but most systems that are due to be replaced today have sufficient control capabilities. Today, some of the applications that can drive value include advanced process control, multivariable predictive control, linear and nonlinear optimization, simulation, business intelligence and guidance, predictive maintenance, asset management, operator training, production planning and scheduling, recipe management, cybersecurity, operating system support, and business system interoperation. Each of these areas would need to be evaluated with respect to the potential benefit it could provide over the effective system life. This is usually
accomplished by evaluating the first-year potential value and extending it for some predetermined number of years that represents the useful system life. Once the value for each application is determined, the benefit overlap must be assessed. Benefit overlap represents an area of benefit that is derived separately from more than one of the applications. The benefit can only be credited one time, so any overlap benefits must be subtracted. The life-cycle cost table can be used to extrapolate typical project and life-cycle costs from the estimated system price. Once this is accomplished, the ROI, NPV, and IRR should be developed. History has demonstrated that automation systems effectively utilized to drive business value represent one of the best financial investments a process manufacturing company can make. Providing a complete and reasonable financial justification for automation system investments is absolutely essential in today’s tight capital environments. Automation professionals must learn to be proficient at justifying these expenditures.
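The roll-up just described can be sketched in a few lines. All application values, the overlap figure, and the system life below are invented for illustration; the 24.2% price fraction comes from the life-cycle cost profile discussed earlier in this chapter.

    # Hypothetical benefit roll-up with overlap removed.

    first_year_benefit = {                  # per-application first-year value
        "advanced process control": 400_000,
        "predictive maintenance":   250_000,
        "planning and scheduling":  150_000,
    }
    overlap = 120_000        # value that would otherwise be counted twice
    system_life_years = 10

    life_benefit = (sum(first_year_benefit.values()) - overlap) * system_life_years
    print(life_benefit)      # 6800000

    # Extrapolate life-cycle cost from the estimated system price, using the
    # observation that price averaged about 24.2% of life-cycle cost:
    price = 900_000
    print(round(price / 0.242))  # ~3719008, i.e., roughly $3.7M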
Further Information
Berliner, Callie, and James Brimson. Cost Management for Today’s Advanced Manufacturing. Boston: Harvard Business School Press, 1988. (This book presents a perspective on the requirement for new cost accounting approaches for manufacturing operations.)

Friedman, Paul G. Economics of Control Improvement. Research Triangle Park, NC: ISA (International Society of Automation), 1995. (This may be the best reference for this topic from a control technology perspective. Even though the financial benefits due to improved regulatory controls are available in older systems, the thought process presented is quite good.)

Gitman, Lawrence J. Principles of Managerial Finance. 10th ed. Boston: Addison-Wesley, 2003. (This book presents a detailed management-level view of financial analysis, which is important to understand since the management team approves capital expenditures.)

Kaplan, Robert S., and Robin Cooper. Cost & Effect: Using Integrated Cost Systems to Drive Profitability and Performance. Boston: Harvard Business School Press, 1999. (This book presents the transition of cost accounting in manufacturing and provides good background information.)

Martin, Peter G. Dynamic Performance Management: The Path to World Class Manufacturing. New York: Van Nostrand Reinhold, 1993. (This book provides background information on the evolution of automation technologies and some of the new applications to which automation systems are being applied.)

Sweeny, Allen. Accounting Fundamentals for Nonfinancial Executives and Managers. New York: McGraw-Hill Book Company, 1972. (This presents a very high-level management view of financial analysis, which serves as a good reference when conducting automation investment justifications.)

Trevathan, Vernon L., ed. A Guide to the Automation Body of Knowledge. 2nd ed. Research Triangle Park, NC: ISA (International Society of Automation), 2006.
About the Author
Peter G. Martin, PhD, vice president, innovation and marketing, for Schneider Electric, has more than 37 years of experience in industrial control and automation. He has authored numerous articles, technical papers, and books, and he holds multiple patents. Fortune magazine named Martin a Hero of U.S. Manufacturing, and the International Society of Automation (ISA) InTech magazine named him as one of the 50 Most Influential Innovators of All Time. In 2013, Martin was elected to the Process Automation Hall of Fame and was selected as a Fellow of the International Society of Automation, from which he has also received the Life Achievement Award. He has a bachelor’s and a master’s degree in mathematics, a master’s degree in administration and management, a master of biblical studies degree, a doctorate in industrial engineering, and a doctorate in biblical studies.
39 Project Management and Execution
By Michael D. Whitt
Introduction
A project is a temporary activity with the purpose of creating a product or service. Projects have a defined beginning and end, and usually involve a sequence of tasks with definite starting and ending points. These points are bounded by time, resources, and end results.1 An engineering project is a means to an end, with a specific objective. For such a project to exist, there must be a perceived need and an expectation that the need can be met with a reasonable investment. The owner must weigh the risks against the rewards and conclude the project is worthwhile. Making that risk/reward assessment is sometimes more of an art than a science. Every project—particularly an automation project—involves some level of risk.
All pilots take chances from time to time, but knowing—not guessing—about what you can risk is often the critical difference between getting away with it and drilling a fifty-foot hole in Mother Earth. ~ Chuck Yeager, 19852

More than in most endeavors, the effects of mishandling risk in an automation project can be catastrophic. Beyond the economic ramifications of a poor estimate, which are bad enough, the potential risk to the operators and the public at large can be extensive. Therefore, a well-conceived process of preliminary evaluation, short- and long-range planning, and project control is necessary. This evaluation begins with a thorough understanding of the issues. As General Yeager observed, the key is to know, not to guess.

Proper project management starts with the project manager (PM). According to the Project Management Institute (PMI), a PM performs variations of five basic activities. The PM:

1. Initiates – When a project begins, the PM should identify all the project’s stakeholders and determine their relative impact(s) on the project. The PM will then develop strategies to satisfy stakeholders’ needs for information. The PM will also develop a Project Charter in which he documents his rights and
responsibilities, as well as his boundary limits. The Charter will also describe the project’s objectives, initial risks, and acceptance criteria, among other things.

2. Plans – The PM creates a Project Plan by defining the scope, developing a schedule, determining the budget, and developing several subordinate plans, such as a Quality Plan, a Staffing Plan, a Communications Plan, a Risk Response Plan, and a Procurement Plan.

3. Executes – The PM directs and manages project execution, and ensures quality assurance (QA) is being performed. The PM also acquires, develops, and manages the project team; distributes information to the project team and to the stakeholders; and manages stakeholder expectations. The PM is also responsible for ensuring the Procurement Plan is being executed.

4. Monitors and controls – The PM monitors and controls project work, and performs integrated change control. The PM verifies and controls scope, schedule, and budget. The PM ensures quality control (QC) activities are being performed per the QA Plan. The PM monitors and controls risks, according to the Risk Response Plan, and procurements according to the Procurement Plan. And the PM reports status according to the Communications Plan.
5. Closes – When the project’s objectives and acceptance criteria have been met, as described in the Project Charter, the PM closes the project and any lingering procurement-related aspects.

The amount of responsibility and authority given to the PM varies depending on the type of organization to which he/she is attached, so the PM may or may not be responsible in all the areas described above. However, those responsibilities do reside somewhere, and it is a good idea to determine where they lie in your organization. The PMI has grouped organizations into five basic categories:

1. The Projectized organization is one in which the PM has almost unlimited authority. Each project has a project team that reports to the PM, who has ultimate authority as to project execution. The PM has budget authority and works full-time on the project.

2. The Functional organization is one in which the authority of the PM is the least. Staff report to a functional manager (FM), who is primarily responsible for the staff and for the projects. In this organization, the PM is in a support role, does not have budget authority, and usually works part-time on a project.

3. The Matrix organization is one in which a blend of responsibilities exists, shared between the FM(s) and the PM. A Guide to the Project Management Body of
Knowledge (PMBOK), fourth edition recognizes three variations of matrix organizations:

a. The Weak Matrix is a variation of a functional organization. The PM role in this organization is one of coordination and facilitation. Authority is very limited, and the success of the project ultimately rests with the FM.

b. The Strong Matrix is a variation of a projectized organization in which the PM may retain almost all the responsibility and authority. The functional organization may, for example, manage the staff, placing them on loan to the project for the project duration, then taking them back into a pool after the project. In this scenario, the PM retains responsibility for the success of the project.

c. The Balanced Matrix, as one might expect, is a blend of attributes in which the PM and the FM are each responsible for the success of the project.
Most of the larger engineering organizations are typically matrixed, with the PM having a lot of project authority but less say in staffing and methodology, which is more in the domain of the FM. The smaller the organization, the more likely the structure is to be purely projectized or purely functional. In virtually all of these engineering environments, however, the PM is the face of the project as far as the customer is concerned. The PM, in addition to varying levels of responsibility in the five areas of project management, is in most cases also responsible for future sales and repeat business. In order to succeed in the dual role of manager and salesperson, the PM must:

• Understand who the stakeholders are and gain a thorough understanding of the forces driving the customer/owner (risk-taker) to make the investment in the first place.
• Be the customer’s advocate when working within his own organization and be an advocate for his own organization when working with the customer. The PM must, more than any other individual on the project team, live with one foot in each camp.
• Have knowledge relating to the techniques of project management and the technological, logistical, and interpersonal challenges being faced.
• Ensure that each member of the project team has a thorough knowledge of the issues that relate to project success.
• Continually monitor the project’s progress, measuring against the agreed-upon parameters.
Whether you are a project manager or a project team member, a thorough understanding of the principles and concepts discussed here will broaden your avenue to success. The following major topics are discussed in this chapter:

• Contracts – What are some of the most common project types?
• Project life cycle – What is the normal order of things?
• Project management tools – What is in the project manager’s toolbox (i.e., project controls)?
• Project execution – What are some of the key techniques for managing an ongoing project?
Contracts
Each member of the project team defines success from his or her own unique perspective, as viewed through the prism of the project parameters. From the customer’s point of view, success is achieved when the desired end is reached within the time allotted and/or the funds allocated. This is likely to be a broader interpretation than that of the service provider, who is also interested in making a profit.
A truly successful project is one in which both the customer and the service provider are satisfied with the outcome. For this to occur, a zone of success must be created that is as large as possible (see Figure 39-1).
A success triangle is formed at the point where the goods delivered meet the customer’s cost and quality expectations, while still allowing the service provider to make a fair profit and maintain his reputation for high-quality work. Staying within this comfort zone is sometimes a bit tricky, and it really helps to understand the “physics” involved. The physics of a project are defined by its constraints.
Constraints
Time and resources are two parameters that impose limits on the design process (see Figure 39-2). Time, as it relates to a project, can mean either duration (calendar time) or intensity (labor hours). The term time driven in this context implies the project is constrained by the calendar; the term cost driven implies the project is constrained by cost. Cost is calculated by finding the material cost and then measuring the labor intensity (in man-hours) required to design and construct.
The relationship between the customer and the engineer or constructor requires clear definition. If a vendor provides a quote, then that vendor is bound by it and must provide the materials or services for the price offered in the quote. The same concept applies to the automation service provider (seller). As the seller, the act of submitting a proposal constitutes initiating the contracting process. If the customer (buyer) accepts the proposal, the seller is legally bound to execute according to the contract, and the buyer is legally bound to honor it. Following are some of the most common types of contracts:

• Cost-plus
• Time and materials (T&M) (also, time and materials not-to-exceed [T&M NTE])
• Lump sum or fixed fee
• Turnkey
• Contract modifiers
• Hybrid

Each contract type strikes a different balance of risk/reward for each participant, as noted in the following commentary.
Table 39-1 (below) depicts the relative risk/reward factors of several of the most common types of contracts from the perspective of both the seller and the buyer. Each contract type is analyzed with respect to the project constraint and rated on a scale of 0–3 as follows:

0: None
1: Minimal
2: Moderate
3: Maximum

A risk/reward ratio of 1/3 for the seller, therefore, would indicate the condition has minimal risk with maximum likelihood of a good profit margin, while a 1/3 for the buyer would indicate low risk and a high probability of meeting the project’s financial goals. The following sections discuss each of these major contract types in detail.
Cost-Plus (CP)
A cost-plus contract guarantees a (usually small) fixed, previously negotiated profit margin for the seller. In return, the buyer may retain control over project content and the seller’s method of execution. The parameters defined by the contract are unit rates—not project price. The unit rates can be applied either to the various employee hourly rates or to negotiated rates based on employee classification (engineer, programmer, designer, clerk, etc.). These unit rates are valid over the life of the contract, which can extend into the future until the scope of work is satisfied (see Table 39-1). For the seller, this guarantees a minimal, negotiated profit regardless of project constraint (Profit 0/1: no risk, minimal reward). The buyer has minimal leverage in controlling his cost (Cost 3/1: high risk, with only a minimal chance of meeting budget). If the seller completes the
task below budget, the buyer realizes a windfall. But if the seller exceeds budget, the buyer must still pay, which drives up the buyer’s risk and minimizes the chance he will meet the budget in the end. Note the effect if the schedule is added as a constraint. The seller still makes his profit margin with little risk, but the buyer’s odds of hitting his budget have increased, since, at some point, the seller must stop work. Quality is the winner in both of these CP scenarios. If the only constraint is a desire for a high-quality product, then the seller has, in effect, unlimited time to achieve it and unlimited resources to fund the effort.
Time and Material (T&M) and Time and Material/Not-to-Exceed (T&M/NTE)
The T&M contract and the CP contract are very similar. For CP, the service provider is reimbursed for his cost, plus profit and expenses. The T&M contract reimburses for cost, plus profit and expenses, plus material and markup. Again, the parameters defined by the contract are unit rates—not project price. Therefore, the only major difference between the two is a higher potential profit for the seller due to his markup on materials (see Table 39-1). Note that the T&M contract’s risk/reward scenarios for quality and cost are the same as for the CP format; refer to the narrative description for CP for more insight into this contract type. The CP and T&M scenarios place most of the budgetary risk squarely on the shoulders of the buyer. The “not-to-exceed” stipulation mitigates this by adding the constraint of total cost. The seller’s potential reward remains moderate, as in the straight T&M format, but his profit risk climbs to very high levels quite quickly as cost constraints are added. As in CP and T&M, if the seller finishes below budget, the buyer receives the windfall. However, if the seller finishes above budget, the buyer is under no obligation to pay the overage. From a quality standpoint, this high-risk/minimal-reward condition for the seller causes an increase in his intensity and focus, and decreases his desire to work with the buyer to “tweak” the product. Thus, the buyer loses some control over the way the project is executed, and the likelihood of quality problems and disagreements between the two organizations increases somewhat.
Lump Sum or Fixed Price
The terms lump sum and fixed price are interchangeable. In the lump-sum contract, the parameters defined by the contract are a fixed project price for a fixed set of tasks or deliverables. Since the price is negotiated before the services are rendered, the fixed-price contract minimizes cost risk for the buyer. Further, the buyer usually signs a contract only after a bidding process in which several potential sellers compete for the
contract by providing their lowest bids. The buyer can either accept the lowest bid, or he can analyze the bids to determine the lowest “reasonable” bid that will save him the most money while letting him retain a sense of comfort that the seller can execute to the terms of the contract. The profit risk to the seller depends on whether the service or product is deterministic or probabilistic. If selling “widgets,” in which little or no research and development is needed, the product is deterministic, and presumably the seller knows exactly what resources and funds are required for their production. If making retrofit modifications to an existing facility in which drawings and/or software listings are out of date, for example, then the project is deemed probabilistic, and the seller’s risk is maximized. One way for the seller to mitigate risk, and also maximize reward, is to add contingency to the bid. Contingency is a factor used to cover normal design development issues that are hard to quantify, and also to make allowances for those things that are unknown. If the level of uncertainty is low, then the level of contingency can be low. If not, then contingency should be high. The proper amount of contingency reflects the balance between the level of uncertainty that exists and the level of risk deemed acceptable. Sometimes the seller is willing to forgo profit to retain staff, reducing contingency; sometimes the staff is busy, leading the seller to raise it.
Profit, for the seller, is at risk, depending on the scenarios described above. But the possibility of reward is also high. If the seller manages to execute his or her task below budget, the buyer must pay the full amount of the contract. In other words, this is a high-risk/high-reward scenario. A fixed-price or lump-sum project offers the buyer several benefits, foremost of which is an enhanced ability to allocate resources. The buyer can set aside project funds (plus a safety buffer) with a high level of confidence that additional funds will not be required. This works to the seller’s advantage in planning other work. The buyer gains these benefits but, in comparison to the cost-plus format, loses much control during the execution of the project. To submit fixed-price bids, bidders work at their own expense to clearly define not only the set of deliverables, but also the methods they expect to employ to meet the buyer’s defined scope of work. A fixed-price project, if properly managed, can be the most efficient and effective format for both organizations. However, the seller must be ever vigilant in managing scope creep.3 Once the buyer accepts a bid, the seller has no obligation to adjust the deliverables or methods if it can be demonstrated that doing so will negatively affect his ability to turn a profit. If the buyer makes a request that is out of bounds with respect to
the scope of work, the seller has the right—and obligation—to refuse the request until the buyer approves an engineering change order (see “Develop a Change Control Plan”). This defensive posture on the seller’s part can lead to a fractious relationship with the customer if a previously agreed-upon method of change control is not employed.
Contract Modifiers
A contract can be modified in several ways to make it more powerful, more focused, or more relaxed. Incentives can be added to the CP contract, for example, that would add profit for the seller based on meeting an early schedule milestone. Or, an incentive fee could be added to a fixed-price project that would come into play if a certain quality milestone were met. Milestones can be schedule-, cost-, or quality-based, and contract modifiers can be incentives or penalties. Some common contract modifiers are:

• Cost sharing – Cost overruns are funded by a prearranged ratio between the buyer and seller.
• Incentive fee – The seller can increase his fee based on meeting a milestone.
• Bonus penalty – Meeting a milestone is good for the seller, triggering a bonus; but missing it triggers a penalty in which the seller’s fee is reduced for each day the deliverable is late.
• Economic price adjustment – A time-based price adjustment is triggered if the project duration exceeds an agreed-upon timeframe.
• Liquidated damages (LDs) – The seller is penalized for not meeting one or more term(s) of the contract. LDs are typically tied to a schedule milestone in which the seller must meet a requirement by a specific date. The damage assessment will be predefined in the agreement, and often takes the form of a specific penalty that accrues for each day of delay.
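A bonus-penalty modifier is straightforward to express. The sketch below is hypothetical throughout (the base fee, bonus, and daily penalty are invented) and shows how the milestone date moves the seller’s fee in either direction.

    # Hypothetical bonus/penalty fee adjustment tied to a schedule milestone.
    from datetime import date

    def adjusted_fee(base_fee, milestone, actual,
                     bonus=25_000, penalty_per_day=5_000):
        """Add the bonus if the milestone is met; otherwise deduct a
        predefined daily penalty (liquidated-damages style)."""
        if actual <= milestone:
            return base_fee + bonus
        days_late = (actual - milestone).days
        return base_fee - days_late * penalty_per_day

    print(adjusted_fee(500_000, date(2018, 6, 1), date(2018, 5, 28)))  # 525000
    print(adjusted_fee(500_000, date(2018, 6, 1), date(2018, 6, 11)))  # 450000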
Hybrid
A particular project may exhibit several of the characteristics of the scenarios described above. For example, the preliminary engineering (discovery) phase is frequently done on a cost-plus basis because there is simply not enough information available to produce a responsible fixed quote. Subsequent phases, such as detail engineering and construction, may then be done on a lump-sum basis.
Project Life Cycle
No matter what style of project, whether lump sum or T&M, most automation projects are similar in the way they are developed and executed. Figure 39-3 depicts the International Society of Automation (ISA) Certified Automation Professional® (CAP) Program’s model for this cycle.
The Project Charter
Automation projects often begin on the production floor. A problem needs to be fixed or a process needs to be streamlined. Any change has an associated cost. For the issue to be addressed, someone must isolate the problem, develop a potential remedy, and determine if the remedy is technically feasible. If it is found to be technically feasible, then an economic evaluation will be performed to build the business case and prove its economic feasibility. The Project Charter will contain these and other feasibility studies, and will define the anticipated roles and responsibilities of the stakeholders, managers, subcontractors, and staff. It will also develop a high-level risk matrix and milestone-based schedule for the project, and prepare a written description detailing its properties
and effects. Some of the areas to consider when describing the effects are as follows:

Perform a Feasibility Study
Feasibility studies should be performed to determine the effects of both performing the project and NOT performing the project from the standpoint of technical effects, economic effects, and physical effects. For example, a great technical solution that is cost-prohibitive or impossible to implement due to lack of available space is not a solution at all.

Develop a Project Specification
Write a project specification that clearly describes the project goals. It should provide enough information to let someone prepare a reasonably accurate estimate with minimal expense. The following information should be discussed in the document:

• Document lists
• Equipment lists
• Scope of work
• Software/hardware performance specifications
• Service provider performance criteria, such as:
o How should documents be transmitted?
o What media should be used to prepare the documents? Computer-aided design & drafting (CADD)? Manual?
o What design standards should be adhered to? National Fire Protection Association (NFPA)? National Electrical Code (NEC)? Internal?
o What is the desired timetable?
o What specific deliverables will be expected?
• Approved vendors lists
• Safety concerns
• Schedule
• Security

Identify Stakeholders
A stakeholder is anyone who has a vested interest in, and who can exert influence on, the project. This could be a member of the public, a public official, a manager, or a co-worker. A stakeholder’s interests could align with the interests of the project, or they could be opposed, and be prepared to fight progress every step of the way. It is recommended that each stakeholder be identified, and a strategy developed to optimize the positives associated with the individual. Sometimes, the biggest positive is to find a way to neutralize detractors.

Perform a Cost/Benefit Analysis
The cost/benefit analysis compares the most likely investment cost to the most likely return. The result of this analysis can be expressed in units of calendar time. For example, if a project costs $10M and the net profit on the product is expected to be $2M/yr, then the project has a 5-year payback.

Develop an Automation Strategy
The automation strategy must take into account operability issues such as plant shutdowns, equipment availability, and the state of the plant’s infrastructure. It must accommodate the operations and maintenance departments’ concerns, as well as satisfy technology issues. Automation strategies often take the form of a written document called a Control Narrative.

Perform Technical Studies
Technical studies to prove basic concepts are essential to a successful project. The aim of these studies is to eliminate as many of the variables and unknowns as possible.
Perform a Justification Analysis
The justification analysis looks at marketplace effects and risk, in addition to cost. This analysis reviews the relative position of the company with respect to its competitors in the marketplace; it forecasts the effects of a successful project, and of an unsuccessful project, in the marketplace.

Generate a Summary Document
The feasibility study should be published as a document that summarizes the findings of each of the aforementioned activities to support a value assessment of the merits of the project.
Project Definition
The definition phase of the project identifies customer requirements and completes a high-level analysis of the best way to meet those requirements.
Determine Operational Strategies
Key stakeholders should be identified and interviewed to establish the true impetus of the project. What is the need this project is envisioned to fulfill? The service provider should take the time to satisfy himself or herself that the customer has considered all the issues and will be happy with the results of the project if it is executed well. The law of unintended consequences should be considered in this step.

Analyze Technical Solutions
Once the true reason for the project is identified, and the operational strategy developed, technical solutions to the problem should be sought. This can entail visits to sites with similar situations, vendor and/or manufacturer interviews, and sometimes test-bed evaluations or pilot projects.

Establish Conceptual Details
Several sets of documents are produced during the conceptual stages of the project. These documents are referred to as “upper-tier” documents. They describe the conceptual details to develop a design basis.
Develop a Detailed Scope of Work (SOW)
An SOW is developed as the result of a cycle of investigation intended to eliminate or reduce any assumptions that may exist. It should be based on a clearly defined and documented set of requirements that are traceable to their source. It should be detailed to a task level, which becomes the basis of a Work Breakdown Structure (WBS). It should include a complete list of deliverables, tasks, and services that are modified as part of a management-of-change process to be defined elsewhere.

Develop a Change Control Plan
A plan for managing change should be developed and communicated to all team members and stakeholders early in the project. During the subsequent execution phases of the project, potential changes in scope should be identified, elevated, and vetted prior to implementation. In order to know that a deviation from scope is about to occur, it is important that each individual fully understand the scope baseline. After the need for a deviation has been identified, the same process that was followed during the feasibility studies and hazard and operability study (HAZOP) should be employed as part of the approval process. After a scope change is approved by the stakeholders and the change management team, all the associated project plans should be updated with the new information. Only then should the change be implemented.
Generate a High-Resolution Cost Estimate
Scope and schedule define the “mechanics” of the project. The estimate defines the “physics” of the situation, setting limits that affect the way in which the project will be executed. These physical measures are the primary tools of the PM when tracking the project on the strategic level, and of the supervisor managing the work on the tactical level. The scope of work defines the work to be done. The material cost estimate and labor cost estimate define the parameters within which the work will be done. The three main classes of estimates, in progressive order of accuracy, are budget, bid, and definitive. Following is a brief description of each:

• The budgetary cost and labor estimate is produced by a group intimate with the site and process. This group may or may not execute the remainder of the project. The purpose of this type of estimate is primarily to obtain initial funding. It is typically “quick and dirty” and is expected to be rather inaccurate. In fact, an error margin of ±30% is acceptable for this type of estimate. Prior to this estimate, a formal scope of work has probably not been done and, quite possibly, the project specification has not been finalized. This type of estimate is generally unfunded by the customer, or at least underfunded.
• The bid material cost and labor estimate is produced by the various engineering bidders vying for the work. Again, this estimate is typically unfunded by the customer.

• The definitive material cost and labor estimate is prepared by the engineering firm that was awarded the contract based on its bid. The customer typically includes this as a part of the project, so it probably is fully funded. Once the contract has been awarded and any secrecy agreement issues have been settled, the engineering contractor is given full access to the information developed by the customer during the internal evaluation process. The engineering firm then does some research to validate the basic assumptions and develops a finalized scope of work. This information sometimes alters the picture significantly, and the engineering contractor is given an opportunity to adapt the estimate and schedule to the new information. Of course, this re-estimating process is bypassed if full disclosure was made during the bid process. In that case, the bid estimate becomes the definitive estimate.

Some of the techniques listed by the Project Management Institute that may be used to generate an estimate are:
• Expert judgement – A good estimator, having accurate historical information and related experience, can apply expert judgement to all or part of a project estimate.
• Analogous estimating – If the project is similar in many respects to one previously executed, then hard data may be available for use in developing an estimate.
• Parametric estimating – If parameters, such as size, weight, relative height, or other measures, are available, and if they can be compared to past project parameters, then it may be possible to derive an estimate from past data.
• Bottom-up estimating – Estimating each task, then rolling the numbers up into a master estimate, is called a bottom-up estimate.
• Three-point estimating – Doing three estimates from the points of view of Most Likely, Optimistic, and Pessimistic, then using an algorithm to compute a weighted result, is called three-point estimating.
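The weighted result in three-point estimating is commonly computed with the PERT (beta-distribution) formula, E = (O + 4M + P)/6. A minimal sketch, with hypothetical task hours:

    # Three-point (PERT-style) estimate; task hours are hypothetical.

    def three_point(optimistic, most_likely, pessimistic):
        """Weighted expected value: E = (O + 4M + P) / 6."""
        return (optimistic + 4 * most_likely + pessimistic) / 6

    # e.g., configuring an HMI screen package
    print(three_point(optimistic=80, most_likely=120, pessimistic=220))  # 130.0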
Develop the Project Schedule
The PMI-based approach for developing a Project Schedule is as follows:

• Define activities – This should be WBS-based (see “Develop a Detailed Scope of Work (SOW)” above). Outputs of this step include an activity list with attributes, and a milestone list.
• Sequence activities – This step arranges the activities in order of execution by defining predecessors and successors for each line item. It also takes the relationships into account. For example, if two items need to complete at the same time, the relationship between them is Finish-Finish (FF). If one item needs to finish before another starts, then the relationship is Finish-Start (FS). If an item can start 3 days after another item finishes, then the relationship is Finish-Start, with a lag of 3 days (FS + 3).
• Estimate activity resources – This step staffs the activities with the resources required for execution.
• Estimate activity duration – Based on the resources allocated, how long will it take to complete the activity?
• Develop a schedule – Refine and optimize the schedule based on resource loading, material delivery schedules, and other real-world effects in order to derive a project schedule that is realistic and useful.
• Control schedule – Obtain regular updates from the project team, and update the schedule accordingly.
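The sequencing rules above lend themselves to a simple forward-pass calculation. The sketch below handles only FS relationships with lags, on an invented four-activity network; real scheduling tools also handle FF, SS, and SF relationships, calendars, and resource loading.

    # Forward pass over FS (+lag) relationships; the network is hypothetical
    # and must be listed in an order where predecessors come first.

    activities = {   # name: (duration_days, [(predecessor, lag_days)])
        "design":   (10, []),
        "procure":  (15, [("design", 0)]),    # FS: starts when design finishes
        "install":  (8,  [("procure", 3)]),   # FS + 3: starts 3 days later
        "checkout": (5,  [("install", 0)]),
    }

    finish = {}
    for name, (duration, preds) in activities.items():
        start = max((finish[p] + lag for p, lag in preds), default=0)
        finish[name] = start + duration

    print(finish)  # {'design': 10, 'procure': 25, 'install': 36, 'checkout': 41}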
Plan Procurements
Identify potential long-lead (LL) time procurement items. Generate bid packages and obtain quotes from vendors on large-dollar items and engineered items. Generate a procurement plan that defines who is responsible for obtaining the materials and when they should be ordered. Consider cash flow aspects, both for the project and for suppliers. Ensure adequate funding availability at the appropriate times.
System Design
The steps in the “System Design” phase of the project are:

Perform Hazard Analysis (HAZOP)4
After the design basis is defined to a fairly high degree, but before the detail design has begun, the envisioned system should be analyzed for operability and safety.

Establish Guidelines
Establish standards, templates, and guidelines as applied to the automation system using the information gathered in the definition stage and considering human-factor effects to satisfy customer design criteria and preferences.
Develop Equipment Specifications and Instrument Data Sheets
Create detailed equipment specifications and instrument data sheets based on vendor selection criteria, characteristics and conditions of the physical environment, regulations, and performance requirements to purchase long-lead-time equipment and support system design and development.

Define the Data Structure Layout and Data Flow Model
Define the data structure layout and data flow model, considering the volume and type of data involved, to provide specifications for hardware selection and software development.

Select the Network Physical Communication Media, Network Architecture, and Protocols
Select the physical communication media, network architecture, and protocols based on data requirements to complete system design and support system development.
Develop a Functional Description of the Automation Solution
Develop a functional description of the automation solution (e.g., control scheme, alarms, human-machine interface [HMI], and reports) using rules established in the definition stage to guide development and programming.

Develop a Test Plan
Design the test plan using chosen methodologies to execute appropriate testing relative to functional requirements.

Perform Detail Design
Perform the detailed design for the project by converting the engineering and system design into purchase requisitions, drawings, panel designs, and installation details consistent with the specification and functional descriptions to provide detailed information for development and deployment.

Prepare Construction Work Packages
Prepare comprehensive construction work packages by organizing the detailed design information and documents to release the project for construction.

Perform Long-Lead Procurement
Several procurement activities could occur during the latter part of the design phase. Delivery lead times must be considered with respect to the timing of orders.
Software Development
If the system design phase is properly executed, the software development phase of the project should be simply a matter of executing the plan. In most cases, the content developed in the areas described below is defined in the plans produced previously. Following are some of the steps involved in developing software:

Develop the Human-Machine Interface (HMI) System using:
• The Alarm Grouping and Annunciation Plan
• The Alarm and Alarm Set Point List
• The Human-Machine Interface (HMI) Screen Hierarchy and Navigation Plan
• The HMI Color Scheme and Animation Plan
• The Cybersecurity Plan
Develop Database and Reporting Functions using:
• The Data Historian List and Archival Plan
• The Data Backup and Restore Plan
• The Report and Report Scheduling Plan

Develop the Control System Configuration and/or Program using:5
• Logic Diagrams
• Device Control Detail Sheets (DCDS)
• Process Control Detail Sheets (PCDS)
• Sequence Control Detail Sheets (SCDS)
• Recipes
• Interlock Lists and Alarm/Trip Set Point Lists (a minimal data sketch of such a list appears below)

Implement the Data Transfer Plan
Implement a data transfer methodology that maximizes throughput and ensures data integrity using communication protocols and specifications to ensure efficiency and reliability.
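Many of these configuration deliverables are, at bottom, structured lists. Purely as an illustration, an alarm/trip set point list might be captured as structured data along the following lines; the tags, units, and limits shown are hypothetical examples, not drawn from any detail sheet in this book:

```python
# Sketch of an alarm/trip set point list as structured data.
# All tags, descriptions, units, and limits are hypothetical examples.
alarm_setpoints = [
    {"tag": "TI-101", "description": "Reactor temperature", "units": "degC",
     "high_alarm": 185.0, "high_trip": 195.0, "priority": "High"},
    {"tag": "PI-204", "description": "Column pressure", "units": "barg",
     "high_alarm": 8.5, "high_trip": 9.2, "priority": "Critical"},
]

# Print a quick cross-check listing for review during testing.
for point in alarm_setpoints:
    print(f'{point["tag"]}: alarm at {point["high_alarm"]} {point["units"]}, '
          f'trip at {point["high_trip"]} {point["units"]}')
```

Keeping such lists in a machine-readable form makes it easier to load set points into the control system and to cross-check them during acceptance testing.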
Implement the Cybersecurity and Data Integrity Plan
Implement security methodology in accordance with stakeholder requirements to mitigate loss and risk.

Perform a Scope Compliance Review
Review configuration and programming using defined practices to establish compliance with functional requirements.

Develop and Implement the Functional (or Factory) Acceptance Test (FAT) Plan
Test the automation system using the test plan to determine compliance with functional requirements.

Assemble Documentation and Prepare for Turnover
Assemble all required documentation and user manuals created during the development process to transfer essential knowledge to customers and end users.
Deployment
The deployment phase, or construction phase, is the phase in which the design is implemented. The deployment phase frequently overlaps the design phase by some margin, with construction beginning as WBS area designs are completed. During this time of overlap, the needs of the constructor take precedence over the needs of design. Work stoppages should be avoided at all costs, even to the point of interrupting ongoing design activities to concentrate on a construction problem. Upon arrival at the site, the automation professional should begin the processes described below.

Verify Field Device Status
Perform a receipt verification of all field devices by comparing vendor records against design specifications to ensure devices are as specified.

Inspect Installed Equipment
Perform physical inspection of installed equipment against construction drawings to ensure installation in accordance with design drawings and specifications. Construction activity status should be captured in a report similar to the Start-Up Readiness Report, shown in Figure 39-4.
Using the instrument database and input/output (I/O) list as a basis, a Start-Up Readiness Report can be produced that makes this an organized, verifiable process (see the sketch below).

Install Software
Install configuration and programs by loading them into the target devices to prepare for testing.
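As a purely illustrative sketch of how the I/O list can drive such a report, consider the following; the tags, tracked steps, and status values are hypothetical and are not taken from Figure 39-4:

```python
# Sketch: deriving a start-up readiness report from an I/O list.
# Tags, tracked steps, and statuses are hypothetical examples.
io_list = [
    {"tag": "FT-100", "received": True, "installed": True, "loop_checked": False},
    {"tag": "PV-205", "received": True, "installed": False, "loop_checked": False},
]

def readiness_report(devices):
    """Print each device's construction status, flagging open items."""
    for dev in devices:
        open_steps = [step for step in ("received", "installed", "loop_checked")
                      if not dev[step]]
        status = "READY" if not open_steps else "OPEN: " + ", ".join(open_steps)
        print(f'{dev["tag"]}: {status}')

readiness_report(io_list)
```

Because every row traces back to a tagged device on the I/O list, nothing can be silently skipped during checkout.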
Perform Preliminary System Checks; Troubleshoot and Correct Faults
Solve unforeseen problems identified during installation using troubleshooting skills to correct deficiencies.

Perform the Site Acceptance Test for the Control Software
Test configuration and programming in accordance with the design documents by executing the test plan to verify the system operates as specified.

Perform the Site Acceptance Test for Communications and Field Devices
Test communication systems and field devices in accordance with design specifications to ensure proper operation. This is sometimes referred to as the checkout, or "bump and stroke," phase, during which each field device is exercised to demonstrate proper operation.

Perform the Site Safety Test
Test all safety elements and systems by executing test plans to ensure safety functions operate as designed.

Perform the Site Cybersecurity Test
Test all security features by executing test plans to ensure security functions operate as designed.
Execute the Training Plan
Provide initial training for facility personnel in system operation and maintenance through classroom and hands-on training to ensure proper use of the system.

Perform the Site System Integrity Test (i.e., Start-Up and Commissioning)
Execute system-level tests in accordance with the test plan to ensure the entire system functions as designed. A successful test at this stage constitutes the system start-up. The system is generally considered to be "commissioned" at the conclusion of this step. Start-up should be well organized and sequential; a formal start-up procedure is recommended.

Document Lessons Learned
Before disbanding the team, hold a meeting in which problems and successes may be discussed and lessons learned may be captured for future reference.

Closeout and Assess
The final step in the deployment phase is the closeout step. This is the step in which:
• Project documentation is finalized to reflect as-built conditions (thus documenting changes that occurred during the deployment process). Redline markups made due to construction adjustments should be transcribed into the original design package.
• Project documents are turned over to the end user.
• Final billing issues are resolved.
• A post-mortem is held to evaluate and discuss lessons learned and to recognize high achievement.
• A project celebration is held to commemorate conclusion of the project. Recognizing the staff for their work is of great importance.
Support
The support phase of the project should be very important to the automation professional; in many cases, this is where the next big project originates. This phase consists of the following elements.
Develop a Maintenance Plan
This is usually the domain of Plant Operations, but input into this plan may be one of the project's outputs. Product support manuals and data sheets for all equipment procured by the project are usually stipulated as deliverables by the project specification. This information is used by Plant Operations to develop a maintenance plan.

Provide Warranty Support
The project specification will usually stipulate the warranty that is desired on all installed hardware and software. It is incumbent on the seller to ensure that adequate reserves exist for warranty work. The warranty period often begins after formal acceptance of the project deliverables and could extend for up to several years from that point. For a fee that is typically a few percent of the purchase price, most manufacturers will provide extended warranties on equipment. However, the project team will need to have labor cost reserves to support technical phone support and plant visits.

Develop and Implement a System Performance Monitoring Plan
Verify system performance and records periodically using established procedures to
ensure compliance with standards, regulations, and best practices.

Perform Periodic Inspections and Tests to Recertify the System
Perform periodic inspections and tests in accordance with written standards and procedures to verify system or component performance against requirements.

Develop and Implement a Continuous Improvement Plan
Perform continuous improvement by working with facility personnel to increase capacity, reliability, and/or efficiency.

Document Lessons Learned
Document lessons learned by reviewing the project with all stakeholders to improve future projects.

Develop and Implement a License and Service Contract Maintenance Plan
Maintain licenses, updates, and service contracts for software and equipment by reviewing both internal and external options to meet expectations for capability and availability.

Provide a Recommended Spare Parts List
Determine the need for spare parts, based on an assessment of the installed base and probability of failure, to maximize system availability and minimize cost.

Develop a System Management Plan
Provide a system management plan by performing preventive maintenance, implementing backups, and designing recovery plans to avoid and recover from system failures.
Project Management Tools and Techniques
A well-conceived, well-managed project is one that has been given a chance to succeed. A set of tools has been developed to help the project team stay on track. If properly developed and used, these tools will help define the tasks to be performed, facilitate accurate progress reporting, and provide early warning of potential cost overruns and/or scheduling conflicts.
The Status Report
Periodically, the design team will be asked to provide feedback on its execution status. This feedback answers questions such as: Did you start the project on time? How far along are you? Which tasks have you started, and when did you start them? Will you finish the project on time and within budget?

Using the WBS milestone schedule is a way to subdivide the available man-hours and/or costs. This effectively breaks the budget into smaller, easier-to-manage sections that can be reported against individually. Data should be collected for each task (WBS item) and then plowed back into the overall project schedule by the PM. Status reporting typically includes updating the following parameters:
• Start date
• Finish date
• Percent complete
• Man-hours estimate to complete (manhrs ETC)
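These four update fields, together with the baseline values, map naturally onto a simple record per WBS task. A minimal sketch follows; the identifiers, dates, and hours are illustrative only, since Figure 39-5 itself is not reproduced here:

```python
# One status record per WBS task: baseline fields loaded at project start
# plus the four fields updated each reporting period.
# Identifiers, dates, and hours are illustrative values only.
status_update = {
    "wbs": "WBS-T01",
    "allocated_manhrs": 250,     # baseline, fixed at project start
    "start_date": "2018-03-05",  # updated: actual start date
    "finish_date": None,         # updated: not yet finished
    "percent_complete": 0.40,    # updated: design team's own estimate
    "manhrs_etc": 188,           # updated: man-hours estimate to complete
}
```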
Figure 39-5 shows a status data collection form that has 10 WBS groupings. Each grouping is subdivided into six subtasks, which, in this case, equate to the six phases of a project (CAP Model) discussed previously. Of the 10, only WBS-T01 and WBS-T02 have been started. WBS-T01 covers the Railcar Unloading area of the plant and has 250 man-hours allocated. Ideally, the estimate would have been done in this format, with each subtask estimated on its own merit and the hours rolling up into a main-task total of 250. The subtask weights must total 100%.
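To make the rollup concrete, here is a minimal sketch; only the 250-hour total comes from the text, while the phase labels and weight split are hypothetical placeholders for the six CAP-model phases:

```python
# Sketch: six subtask estimates rolling up into one 250-man-hour WBS task.
# Phase labels and weights are hypothetical; only the total is from the text.
task_manhrs = 250
subtask_weights = {"Phase-1": 0.05, "Phase-2": 0.30, "Phase-3": 0.25,
                   "Phase-4": 0.25, "Phase-5": 0.10, "Phase-6": 0.05}

# The subtask weights must total 100%.
assert abs(sum(subtask_weights.values()) - 1.0) < 1e-9

for phase, weight in subtask_weights.items():
    print(f"{phase}: {weight * task_manhrs:.0f} man-hours")
```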
Total man-hours, subtask weight, and subtask man-hours values were loaded at the beginning of the project, forming the baseline data against which all updates are compared. The data to be updated are the project timeline dates and the manhrs ETC. The scheduling team plows the date information back into the schedule, and the project manager plows the man-hour data back into the budget for analysis (to be discussed later). Notice that the ETC for WBS-T01 shows 188 man-hours to complete, with 250 available. From this, it might be inferred that the design staff is 188/250 = 75% complete. This is incorrect, as we shall see.
Assessing Project Status
The project status update data collected in the previous section can be used by the project manager to develop additional data that will help forecast the likelihood of success for the project (see Figure 39-6). Some of the additional information needed will be:
• Earned hours = estimated percent complete × budgeted man-hours
• Actual hours – the number of hours expended to date, as reported by the timesheet system
• Estimate to complete (ETC) – entered by the design team to reflect the amount of work remaining
• Estimate at completion (EAC) = manhrs actual + manhrs ETC – indicates the expected final manhrs
• Apparent percent complete = manhrs actual / allocated manhrs – measures actual versus initial expectations
• Actual percent complete = manhrs actual / manhrs EAC – measures actual versus final expectations
• Efficiency rating = actual percent complete / apparent percent complete

Note the differences between the estimated, apparent, and actual completion percentages. In the case of WBS-T01-2, the design staff thought they were 95% complete (estimated). But, they had already exceeded the original budget by 60% (apparent) and should have been done a long time ago. From the standpoint of the EAC, the one that counts, they are really at 83% (actual).
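These relationships are easy to mechanize. Below is a minimal sketch of the calculations, using hypothetical inputs chosen to echo the WBS-T01-2 example; Figure 39-6 is not reproduced here, so the numbers are illustrative only:

```python
# Status metrics for one WBS task, per the definitions above.
def progress_metrics(allocated, actual, etc, estimated_pct):
    earned = estimated_pct * allocated      # earned hours
    eac = actual + etc                      # estimate at completion
    apparent_pct = actual / allocated       # actual vs. initial expectations
    actual_pct = actual / eac               # actual vs. final expectations
    efficiency = actual_pct / apparent_pct  # below 1.0 flags an overrun
    return earned, eac, apparent_pct, actual_pct, efficiency

# Hypothetical task: 100 h allocated, 160 h spent, 33 h still to go,
# and a design-team estimate of 95% complete.
earned, eac, apparent, actual, eff = progress_metrics(100, 160, 33, 0.95)
print(f"EAC = {eac} h, apparent = {apparent:.0%}, actual = {actual:.0%}")
# EAC = 193 h, apparent = 160%, actual = 83%: the team believes it is 95%
# done, has already spent 160% of the budget, and is really at about 83%.
```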
References
1. ANSI/PMI 99-001-2008. A Guide to the Project Management Body of Knowledge. 4th ed. Newtown Square, PA: Project Management Institute, Inc.
2. Cockrell, Gerald W. Practical Project Management: Learning to Manage the Professional. Research Triangle Park, NC: ISA (International Society of Automation), 2001.
3. Courter, Gini, and Annette Marquis. Mastering Microsoft® Project 2000. Alameda, CA: Sybex Books, 2001.
4. Kerzner, Harold. Project Management: A Systems Approach to Planning, Scheduling, and Controlling. 10th ed. Hoboken, NJ: John Wiley & Sons, Inc., 2009.
5. Scholtes, Peter R. The Team Handbook: How to Use Teams to Improve Quality. Madison, WI: Joiner Associates, 1988.
6. Whitt, Michael D. Successful Instrumentation and Control Systems Design. 2nd ed. Research Triangle Park, NC: ISA (International Society of Automation), 2012.
About the Author
Michael D. Whitt is a certified Project Management Professional (PMP) and project manager/program manager for automation at Mesa Associates, Inc. His experience includes 15 years with Raytheon Engineers & Constructors as an instrumentation and controls (I&C) design supervisor and lead systems integrator, and 16 years with Mesa Associates, Inc. in various roles, including functional manager for the I&C engineering department and portfolio manager for several of Mesa's larger clients. Whitt is the author of Successful Instrumentation and Control Systems Design. He is a native of Asheville, N.C.

1. Cockrell, Gerald W., Practical Project Management: Learning to Manage the Professional (Research Triangle Park, NC: ISA, 2001), 2.
2. Yeager, General Chuck, and Leo Janos, Yeager: An Autobiography (New York: Bantam Books, 1985), 84.
3. "Scope-creep" describes the case in which the seller agrees to perform additional out-of-scope work without amending the contract.
4. HAZOP: hazard and operability study. See the "Safety Instrumented Systems in the Process Industries" chapter in this book.
5. Many of the deliverables listed in this section are described fully in Successful Instrumentation and Control Systems Design by Michael Whitt.
40
Interpersonal Skills
By David Adler
Introduction
Interpersonal skills are important to automation professionals. When someone has that great job in automation, what determines if they will be a productive employee or an effective leader? Research has found that the key to individual success for professionals is how they manage themselves and their relationships. This is also true for automation professionals. In fact, an emotional intelligence skill set matters more than IQ. In a high-IQ job pool, soft skills like discipline, drive, and empathy mark those who emerge as outstanding [1]. The implication of this research is that automation professionals are expected to have great technical skills, but those who also have good interpersonal skills experience more career success.

Being part of an automation team is a collaborative effort that requires interpersonal skills. An automation professional does not sit alone at a desk developing an automation design and implementing an automation system. It is a team-based activity that requires many different and complementary skill sets. Successful automation teams have team members who:
1. Know the process, technology, and equipment.
2. Use appropriate business processes and project management skills.
3. Have interpersonal skills.

Team members must plan an automation project, design instruments, write application code, tune process control loops, and start up the automated equipment. They must gather requirements from engineering peers, cooperate on design and development, and train operators.

Interpersonal skills are especially important for automation leaders. Automation leaders need to influence business leaders to invest in new automation technology, upgrade obsolete automation, and support the never-ending optimization of operations that automation enables. Automation leaders select, develop, and motivate team members.
Automation professionals need leadership to coordinate the team’s many activities when executing an automation project or supporting manufacturing.
Communicating One-on-One
Modern technology has increased the ways automation professionals interact. Email, text messages, instant messaging, and video conferencing are all part of how automation professionals communicate with others today. But, in-person conversations enable automation professionals to socialize and interact in ways that the virtual tools cannot. When give and take is required, there is no form of communication that works better than speaking to another person face-to-face.
Unfortunately, face-to-face contact is not always possible. Increasingly, automation project work requires an international workforce with engineering professionals from different cultures and in different time zones. Automation software development can be done around the clock with separate teams dispersed in Asia, Europe, and North America. Cultural differences or situational circumstances can create barriers to overcome. Communication is via teleconferencing, emails, instant messaging, and desktop sharing rather than face-to-face. This requires flexibility from everyone.

Speaking and writing require clear and crisp communication. Care must be given to what technical content is being discussed, as well as how it is communicated. Treat even the most casual work conversation as important and take the time to prepare. While automation professionals may prepare extensively for group presentations, when it comes to one-on-one communication, they often wing it.

Good communicators do not just talk; they also listen. In fact, good communicators spend more time listening than speaking. An automation professional who wants to communicate effectively [2]:
• Eliminates distractions
• Pays attention
• Puts the other individual(s) at ease
• Keeps the message clear and concise
• Asks questions
• Responds appropriately
Eliminate Distractions
Temporarily put current automation problems out of mind. This will enable you to concentrate on the topic to be discussed. If you cannot free yourself from the pressure of other issues, postpone the conversation until you can give this interaction your full attention.
Pay Attention
Focus on the speaker. When automation professionals are conversing, they often aren't listening to each other because they are thinking about what to say next. Avoid the urge to multitask, especially when the conversation is not face-to-face. Avoid reading emails or checking text messages during a virtual conversation.
Put the Other Person at Ease
Engage in small talk before addressing the main topic; for example, exchange pleasantries, and be sincere. Keep the tone of voice and inflection at a pleasant and professional level. Watch for nonverbal cues such as eye contact, posture, facial expression, and hand gestures. While it is a challenge virtually, watch for the subtle cues of attitude, humor, and aggression and adapt interactions appropriately.
Keep the Message Clear and Concise
Keep it simple. Avoid technical jargon, especially if the conversation is with a nonautomation person or business leader. Keep the message to one or two main ideas. Do not try to solve all the world's problems in one conversation.
Ask Questions
Take the time to honestly solicit the other person's thoughts and opinions. Keep asking questions until the speaker's main point is understood.
Respond Appropriately
Wait until the speaker is done talking and paraphrase the message in your response. This lets the speaker know that they were heard and understood. Be candid, open, and honest when responding. Assert differing opinions respectfully.
Communicating in Group Meetings
What is the point of a group meeting? Group consensus and cooperation can often be generated by the automation professional's presentation and discussions during a meeting. But make sure there is business value in having the participants come together, as many automation professionals dislike group meetings. They see them as time away from getting their "real" engineering work done.

When preparing for a meeting, outline the purpose of the meeting and the role of each of the participants. Have a clear and crisp purpose for the meeting. Develop an agenda that includes a list of topics to discuss and the expected results; a meeting that is worth having will generate follow-up actions.

A formal presentation is often the best way to sell ideas, but keep it interesting. Remember that even the most adoring audience has a short attention span. In this day of instant gratification, video games, and music videos, a presenter needs to do something to entertain the audience and keep them awake. Ask questions, show provocative slides, or change the pace. Be prepared to repeat key points more than once. Three keys to effective presentations [3] are:
1. Tell stories.
2. Show passion.
3. Create an interactive atmosphere.
Tell Stories
Personal stories about the automation project or the use of automation technology can get complex ideas across and create meaningful connections between the automation professional and business leaders. It is much more effective to tell stories than to just rely on statistics and PowerPoint slides to keep the audience engaged.
Show Passion
Speak clearly. Vary the tone of your voice. Be enthusiastic. Remember, it is not just what is said, but how it is said. The speaker's passion is contagious; speak with the confidence of an automation expert.
Create an Interactive Atmosphere
Ask the audience questions. A show of hands in response to a question about one of the presentation slides is effective in larger groups. Be sure to manage the pace and do not let the audience hijack the presentation.
Writing
The Internet and social media are becoming the place to quickly share technical information about the automation discipline, especially with younger automation professionals. A wide variety of tools facilitate this professional exchange: email, blogs, instant messaging, Twitter, Facebook, and LinkedIn, to name just a few. These tools make it easy to share and find technical information about the automation profession with just a couple of clicks on a laptop, tablet, or smartphone; however, it is important to be careful when using these tools to communicate. It is easy to type out a passionate response without thinking about the consequences. That note produced in 30 seconds can quickly ripple throughout the whole corporation. Take the time to proofread it. And if the content is controversial, get a trusted peer to read the note before sending it out.

A formal written report is still a great way to capture and communicate detailed technical information. With the pressure of day-to-day assignments increasing all the time, it is easy to forget to write the final project report in the rush to continue to the next project. Do not consider the project complete until the final report is finished. Two years from now, no one will remember all the details about the project. Further, not creating a report denies others the benefit of this learning. One process for creating a successful report [4] is to:
1. Clarify the topic.
2. Identify the reader.
3. Write an outline.
4. Write the first page.
5. Write the core.
6. Edit the report.
Clarify the Topic
Focus the lens. Determine the objectives. Review the original purpose of the project from the project plan and address the report to the original objectives. If the report is on a new technology, make sure to read and understand the literature. The report must be narrow and focused enough to be interesting, yet broad enough to include some technical detail.
Identify the Reader
Consider the audience. Hopefully, business leaders will want to read the report. Avoid the overuse of process control system jargon so readers will be able to understand it.
Write an Outline
It will organize your thoughts into a coherent structure. Focus on the needs of the readers. Potential elements of the report are: the problem, the automation solution, the benefits of the automation solution, technical/project details, and an example of the technology or the project learning points.
Write the First Page
At the beginning of the report, give the reader a clear idea of the purpose of the report. A one-page summary is a great way to start.
Write the Core
Document the details in the body of the report. Do not hide the important work under a mountain of minor details. Make sure to repeat the most important points. To create a balanced report, include learning points from project challenges as well as project successes. Be positive and celebrate the team's and peers' successes. Illustrate data in charts and graphs; it is an effective way to reinforce and emphasize key points.
Edit the Report
Ask a peer or mentor to review the report before it is distributed.
Building Trust
Trust is the top need in a work relationship [5, 6]. Automation professionals are trusted because of their way of being, not because of what they say. The best way to maintain a trusting work environment is to keep from breaking trust in the first place. Once an automation professional has lost trust, it is difficult to get it back. Automation professionals build reputations of trust by demonstrating four core values:
1. Integrity
2. Intent
3. Capabilities
4. Results
Fortunately, these attributes are commonly found in the professionals who make a career in automation!
Integrity
Integrity is telling the truth and being honest in a principled way. But it is not just telling the truth; it is also doing the right thing. When an automation professional makes a promise to complete a task by a certain date, they will work hard when it is needed without waiting to be asked. It is engrained in the automation professional's normal behaviors. Ethical behavior is always striving to do what is morally right, particularly when times are tough and automation professionals are challenged to cut corners to get things done quickly and cheaply.
Intent
Avoid having hidden agendas and strive to have only the best of intentions to help others. Automation professionals respectfully share what is on their minds and their passion for automation. It is important to genuinely care about and have empathy for all people. Empathy is the ability to relate to others and even feel what they feel. It is the human touch. It is using our intuitive understanding of people to translate the cold reality of automation systems into something that will serve the operator.
Capabilities
Automation professionals are inspired by and trust those with strong automation capabilities. What does "strong capabilities" mean? It means keeping automation skills current. It means staying late because you love automation technology and you must have time to play with and learn the newest technology. It means constantly growing and developing your skills and abilities.
Results
Track records and actual performance do matter. Automation professionals must walk the talk. They must deliver quality automation systems on time and under budget to be taken seriously. Automation professionals can have the most honorable integrity, transparent intent, and noteworthy capabilities, but if they never deliver results, people will not trust them.
Mentoring Automation Professionals
Mentoring occurs when an experienced automation professional shares expertise, wisdom, and past automation experiences with a less-experienced automation professional. Mentors offer hints on automation technology, problem solving, and clarity on the big picture. They provide advice to protégés for navigating organizational politics, managing career development, and getting along with supervisors. Mentors can point protégés in the right direction, act as a sounding board, offer a different perspective, and provide positive encouragement. Mentors use their own experiences, both positive and negative, to advise protégés on their current problems and concerns.

It can require time and effort, but mentors get a lot in return. Protégés often bring boundless enthusiasm about the automation discipline to balance the cynicism of the more experienced automation professional. The exchange of information may prompt the mentor to examine their own automation practices when asked why they do something. It is rewarding to share knowledge and expertise and then see a young adult grow into a peer in automation.

How does an automation professional get started? A junior automation professional picks a mentor, someone they would like to emulate, who is nurturing and a person of integrity. The mentor must have a passion for automation that is infectious, overpowering, and irresistible. The protégé and mentor need to enjoy hanging out with each other. The mentor's only interest in the protégé is their unqualified success, and, to achieve that, the mentor is willing to challenge the protégé to be the best they can be. The mentor must genuinely be interested in helping the protégé achieve their hopes and dreams. It is always best if the junior automation professional picks their own mentor, but it needs to be mutual. The senior automation professional will be pleased to be asked and thrilled that someone else wants to make a career in automation. Effective mentoring requires [7]:
• Trusting each other
• Spending time together
• Talking in a dialogue
• Sharing honestly
Trust Each Other
Trust between the mentor and protégé is earned over time through positive experiences together. This requires integrity and trustworthiness. What is discussed between the mentor and protégé is confidential and stays between the two of them.
Spend Time Together
The mentor and protégé should set regular mentoring meetings and conduct them away from the protégé's normal working environment. This should be a fixed activity, not one that accidentally happens when the mentor and protégé see each other in the hallway.
Talk in a Dialogue
The mentor should spend more time actively listening than talking. The mentor should ask the protégé open-ended questions to catalyze discussions. The mentor should critique and challenge the protégé in a way that is nonthreatening and that helps them look at a situation from a new perspective.
Share Honestly
Sharing involves honest insights, observations, and suggestions. No topic is off the table, and an uninhibited give-and-take takes place. One caution for mentors is to offer advice to the protégé only when they ask for it. Senior automation professionals are all first-class problem solvers who will be tempted to jump in and solve the problem. The role of the mentor is to help the protégé become the problem solver. The protégé needs to come to the solution themselves; it does not help them grow if the mentor tells them the answer.
Negotiating
Automation professionals need to know what they are up against when negotiating with vendors. The vendor's primary goal is to sell automation products and services to the user, not to help them be successful. Automation professionals negotiate with vendors for high-quality, low-cost systems and instrumentation; competitive rates for competent automation application designers and developers; and cost-effective, long-term service contracts. The keys to successful negotiations [8] are:
• Being well prepared
• Showing patience
• Understanding the vendor's gambits
• Focusing on what is important

Always negotiate with honesty and integrity. Treat the other person who is negotiating with respect. The skill of tough negotiating without bullying can be critical; after the bid is awarded, the automation professional and vendor will then need to work together. Keep in mind that the automation professional could be working with this vendor for a long time. Even if the negotiations are not successful, the individuals at the vendor will resurface somewhere else in the future: the automation community is a very small world.
Plan the Negotiating Session
When you go into a negotiating session with a vendor, be aware that the vendor has spent days preparing. Know that you have leverage; you are the one with the purchase order. Also, know what you want to get out of the negotiation: costs, services, products, and so on. Make sure to aim high and be prepared not to accept the first offer. If you want to get leverage with a vendor, surprise them and buy lunch instead of letting them pick up the tab.
Show Patience
Be patient and confident, and do not let emotions interfere. Haggle and then haggle some more. Take control of the meeting at the beginning of negotiations. Do not be afraid to change the agenda to obtain the advantage. Stay assertive and nonmanipulative.
Understand the Vendor’s Gambits
Know the tactics that the salesperson will use to gain the advantage, such as: asking for more than they expect to get, flinching, the vise, never offering to split the difference, asking for a trade-off, good guy/bad guy, nibbling, withdrawing an offer, positioning for easy acceptance, the decoy, red herring, cherry picking, the deliberate mistake, and getting the other side to commit first [8]. You can probably guess what each of the gambits is, and you should know that salespeople are trained in them. What can an automation professional do when one of these tactics is used? The best course of action when confronted with any of these gambits is silence. Salespeople cannot stand long pauses in a negotiation session. It is foolish to say a word until the vendor is really willing to negotiate honestly.
Focus on What Is Important
Trade rather than make concessions. Give each trade its highest psychological value. Look for trades of high value to the vendor but of low value to the automation professional. If a vendor will not reduce the automation service contract price, ask for more services for the same price. If a vendor does not deliver what is needed and each side has been reasonable, walk away and find somebody else.
Resolving Conflict
Conflict is a normal part of automation projects in which opinions vary, the stakes are high, and emotions run strong. The need to balance schedule, cost, and scope while maintaining high-quality instrumentation design, application development, and process control capability creates tension. This tension can help engineers think harder and create even better solutions. But if two strongly opinionated automation professionals choose different paths for the technical solution and then make it personal, there can be conflict. When this happens, automation professionals need to be able to resolve this conflict. Automation professionals might choose silence and ignore the issue, or they might express strong emotion with each other, which escalates the situation. What should automation professionals do if they find themselves in a serious conflict? They can have a crucial conversation. The seven-step process for a successful conversation [9] is:
1. Start with the heart.
2. Learn to look.
3. Make it safe.
4. Master the story.
5. State the path.
6. Explore the other's paths.
7. Move to action.
Start with the Heart
Prepare before you have the conversation. Make sure your motives are true. Evaluate the situation from both sides by putting yourself in the other person's situation and point of view. Decide to attack the problem and not the other person. An experienced automation professional focuses on what they really want and has respect for the other person.
Learn to Look
Establish an atmosphere in which the other person in the conversation feels comfortable. Pay attention and learn to look for emotional, physical, and behavioral cues. When a discussion starts to become stressful, the interaction with the other person can often end up doing the exact opposite of improving the situation. Watch for signs that the conversation is making the other person feel unsafe or uncomfortable.
Make It Safe
When there is a decreasing comfort level, stop the crucial conversation. If passions are involved, the listener will not be able to process what the speaker is saying. If the other person does not feel safe, they will not share their honest thoughts. Apologize if you hurt the other person's feelings. Restoring safety frequently requires understanding the interpretations and judgments that automation professionals add to the behaviors they observe in others. When safety is restored, continue the dialogue. If the situation gets out of hand, stop the conversation and try to resolve the conflict another day.
Master the Story
Respect the others involved in a crucial conversation. This can be difficult, as either of you may have done or said something that has caused a loss of respect. Master your emotions. Don't get mad; make a new story. Learn to create different emotions and stories that influence a return to a healthy dialogue.
State the Path
State the story as a story and do not disguise it as a fact. Make it safe for the other person to express differing or even opposing views. Establish an atmosphere in which the other person in the conversation feels comfortable.
Explore the Other's Paths
Ask the other person to share both their facts and their stories (their paths). The automation professional needs to stay connected to the other person, listening with empathy and avoiding escalation. Express interest in the other person's views. Understand the underlying problem. When a topic has been discussed, summarize what was heard and ask the other person if all the issues are on the table. Find areas of mutual agreement. Clearly state the areas of agreement and then try to resolve any remaining issues.
Move to Action
Determine who does what by when. Make the deliverables crystal clear. Hold the other person accountable for their promises. Set up a plan to resolve any open items and to follow up.
Justifying Automation
Many automation leaders find it challenging to justify the investment needed for that next automation technology enhancement or upgrade to their operations business leaders. Why is justifying automation such a challenge? Automation and process control enhancements are among the best ways to improve plant efficiency and reduce costs. Most automation leaders can build a strong business case for automation. In a recent industry benchmark survey of operations business leaders, about half thought their current level of automation was too low! While the value of these investments is obvious to the automation professional and operations leaders who want more automation, it is often difficult for top business leaders to understand the need for significant investments in automation technology and support. Automation opportunities need to be explained in language that operations business leaders understand.

How do automation leaders get better at justification? Automation leaders must have a clear purpose and be able to articulate that purpose. Operations business leaders are confronted with many production issues and opportunities. Automation leaders must develop automation proposals that address operations issues and opportunities. They must build consensus in their organizations and inspire team members and business leaders with their enthusiasm. They must explain the rationale to everyone who will listen, not just the business leaders. And finally, they must be prepared to give this message many times. To achieve change and have everyone buy in, the automation leader needs to be prepared to give the pep talk hundreds of times.

The real key to achieving a business leader's support is not through better sales and marketing, but by developing a long-term relationship. An automation leader cannot wait until they need the business leader's support to develop a relationship. The automation leader must earn it. To develop a long-term, positive relationship with a business leader, the automation leader must [10]:
1. Establish a collaborative relationship.
2. Solve problems so they stay solved.
3. Pay attention to both the technical problems and the relationships.

Trust and respect are gained by resolving operations issues and completing each automation project in a quality manner, on time, and within cost expectations.
Establish a Collaborative Relationship
Take the time to meet directly with each business leader. Understand their problems. Find out their expectations and state clearly your desire for a balanced relationship. Be willing to say "no" to unreasonable requests. Develop a service-level agreement with specific deadlines and obtain sign-off from the business leaders.
Solve Problems so They Stay Solved
Deliver good, workable solutions. Implementation is more difficult than planning. An automation professional's hands should get dirty in the details of the problem. Solicit feedback throughout the project. Understand that criticism and resistance are not personal. Persevere in solving any issues. Train operators and teach them to solve problems. At the end of a project, take the time to celebrate the team's success. And for those few things that did not go well, learn from them and forgive.
Ensure Attention Is Given to Both the Technical Problems and the Relationships
Do not assume that the business leader's understanding is the same as yours. Understand their resistance to automation investments. Engage the business leaders in long-range automation planning. What does the automation team want to accomplish, and how will it be achieved? From this joint effort, develop a high-level strategy for the future of automation in operations.
Selecting the Right Automation Professionals
Successful automation leaders hire the best professionals. Automation professionals must develop unique solutions to resolve complex manufacturing problems that require a wide range of automation technology skills. It is important to build a team of automation professionals that has all the needed technical skills. One automation professional is not interchangeable with another. This makes hiring automation professionals a difficult process. No other decision will have a longer-lasting impact and be more difficult to undo. Take the extra time to get the right professionals who have technical skills, integrity, and interpersonal skills. A rule of thumb is that the automation leader should spend as much time selecting individual team members for the project as the team will spend selecting the vendor system and defining the architecture. The following are best practices for hiring automation professionals [11]:
1. Understand the position.
2. Screen candidates efficiently and effectively.
3. Make a well-considered final decision.
4. Close the deal.

Assembling (and then motivating) the right people who have all the needed automation and interpersonal skills is the single most important task an automation leader must do on a large project or as a manager of the site automation support team.
Understand the Position
Before you start interviewing, define the needed roles and responsibilities for team members. Remember, the key areas of focus are:
• Technical ability
• Problem-solving skills
• Interpersonal skills, especially integrity and motivation

Top candidates will want to understand the interesting technical challenges of the automation assignment and will want the opportunity to learn new technologies.
Screen Candidates Efficiently and Effectively
Focus the process on identifying the candidate's skills and behaviors. Dig into the candidate's past accomplishments to observe their behaviors. Do not let the candidate talk about what the group did, but probe to find out what the candidate did. Keep the candidate focused or draw them out. Understand the candidate's ability to learn and adapt to new technologies. Ask tough questions to understand their motivation and integrity. The key to understanding future potential is to understand past performance and behaviors.
Make a Well-Considered Final Decision
Select the candidate who has the needed skills. Make the selection objectively using the evidence and stories from the interview process. Consider all of the critical success factors for the defined role and responsibility. Choose the candidate who has the appropriate technical skills, obvious integrity, and interpersonal skills.
Close the Deal
The best people accept jobs based primarily on what they are going to be doing, learning, and becoming. Research [5] shows that they want:
• Trust
• Challenge and growth
• High self-esteem
• Opportunity for competence and skill
• Appreciation
• Excitement about the work
• Meaningful work
Articulate these possibilities honestly. Tell the candidate how their skills will help the team solve its technical challenges. Describe their role on the automation team in a way that makes it sound interesting and exciting.
Building an Automation Team
The characteristics of high-performing automation teams are hard work, personal integrity, passion for automation, technical excellence, well-defined requirements, a shared measurable goal, genuine respect and caring among co-workers, an atmosphere of fun, individual accountability, and recognition of accomplishments. While many of these attributes are fuzzy concepts, the most successful automation teams have this culture. Selecting the appropriate automation technology or changing the organization structure is much easier. It takes effective leadership to achieve this desired culture.
To cultivate a high-performing team, the leader must empower the automation team members and not attempt to tightly control them. This can only be accomplished by the automation leader's selfless service to the team members [12]. Automation professionals do not appreciate the automation leader who misuses authority in order to look good or to get promoted. The automation leader needs to give up power while still holding the responsibility. The leader needs to create a democracy rather than an autocracy.

In today's complex automation environment, delivering projects and resolving problems would be impossible without teamwork. No individual automation professional can know all there is to know about instrumentation, wiring panels, software applications, process control, networks, and data management. Successful automation projects require a team with individual members who, among them, have all the required skills. A lot of work today is performed in teams with team members coming from many different commercial organizations. This is a challenge because some of these organizations compete with each other, yet the team has to function as one.

Developing a functioning team requires effort and attention to the team dynamics. It requires that the talented individual team members function together toward a common goal. Teams go through a normal growth process that can be messy, emotional, and unnerving to some automation professionals. The normal evolution of teams is categorized as [13]:
1. Forming
2. Storming
3. Norming
4. Performing
The Team Forms
When a team forms, it can generate a lot of excitement and positive energy. But some team members can also be concerned and uncertain about new activities. Within the team, relationships are built one at a time between automation professionals. Team members must be sensitive to the unique personalities, cultures, and backgrounds of each individual. The team must clearly and concisely articulate its purpose and priorities so all members can align to a common goal.
The Team Storms
Before a team can get to the highest level of performance, the members must learn to work together. This requires testing roles, authority, and boundaries. Team members must learn to work through differences of opinion. When the team encourages equal participation, it will grow stronger from the tension as ideas are discussed.
The Team Norms
When team members have learned to work through differences of opinion and to solve common problems, they accept the team's norms and each other's roles. They are able to give each other feedback. Each team member feels empowered, works hard, and is able to solve problems. The project is now making progress. Each team member shares responsibility for the team's success.
The Team Performs
The individual team members now cooperate and perform as a cohesive team. The team delivers at a higher level than the sum of each individual's efforts. Each team member supports and complements the other team members to achieve the highest level of performance. Individuals not only take pride in their work, they also take satisfaction in the overall contribution of the whole team. Each team member takes the time to celebrate the success of all the team members.
Motivating Automation Professionals
Some automation leaders might think the only motivation is tangible rewards such as a good salary, generous benefits, and a bonus for achieving results. This is not the case with automation professionals over the long term. Automation professionals expect to be fairly compensated but are driven by intrinsic rewards. Their motivation comes from the pleasure they get from the task itself and the sense of satisfaction in completing a task. Individuals with careers in automation like to solve difficult technical challenges. They find this type of work challenging and interesting. Many automation professionals think automation is fun!

How does an automation leader achieve the desired culture to motivate automation professionals? How does a leader create an environment that encourages automation professionals to be motivated, accountable, and responsible? While individuals must find within themselves their own motivation to perform, the leader can create a culture of motivation by giving automation professionals [14]:
1. Autonomy
2. Purpose
3. Mastery
Autonomy
Autonomy is about creating a culture where automation professionals can direct themselves and get pleasure from their work. Leaders provide ample choice over what to do and how to do it. If an automation professional has a strong desire to achieve certain goals, they will do whatever it takes to get there, not as a result of pressure from an outside force, but because of their own inner drive to succeed. The leader can remove barriers, eliminating unnecessary permissions and approvals, to give individuals autonomy to achieve the goals. In an autonomous environment, each team member is empowered and given the support needed. Leaders give meaningful feedback and information. However, the leader and other team members must still hold the individual accountable to agreed-upon commitments and expected results. Autonomy is not go-it-alone, rely-on-nobody individualism, but rather an environment in which individuals are responsible for their own tasks while being interdependent with other team members.
Purpose
A sense of purpose can inspire automation professionals to get up in the morning and be excited about their cool automation assignment. It is the yearning to do what they do in the service of something larger than themselves. Everyone is driven by some internal motivation and the desire to be connected to some bigger purpose. Automation professionals want to do tasks that matter. Purpose is the immediate motivator behind day-to-day tasks. The automation leader needs to give team members a compelling purpose and then help them connect with it emotionally. That bigger purpose could be making life-saving medicine in a pharmaceutical facility, or saving the environment in a wastewater plant, or being the lowest-cost gasoline producer in a refinery. The automation leader helps the automation professional understand the organization's larger purpose and then connects it to their automation assignment. The automation leader connects a short-term goal, such as completing a process control software application by a certain time, to the larger purpose, such as producing higher-quality products for customers.
Mastery
Mastery is the urge to get better and better at the discipline of automation. Each individual is unique in what they find enjoyable in automation. The automation specialty that they choose to master could be instrumentation, process control, application software, project management, or data management. Automation professionals want an opportunity to grow, and the leader should make sure that the opportunity exists for them to learn on the job. In each new assignment, the leader needs to help the automation professional understand that they are not only getting the task done, but they are also growing professionally, learning something new, and working with other interesting professionals. The automation leader should also make sure that training is available to keep current with technology. Automation professionals have a passion for emerging technology and can easily get excited about new technology like wireless, virtualization, portable handheld devices, or smart instruments. This passion keeps automation professionals engaged in tasks that others might find difficult. Automation professionals will work hard and long, and in ways that are not always understood by family and friends, for the sake of the automation technical challenge.
Conclusion
This is a plug for mastery of interpersonal skills for automation professionals. Interpersonal skills are just as important as technical skills, and they are critical for career success. Make time to read further on these topics and attend training classes. (Refer to the bibliography for a list of magazine articles and books on interpersonal skills.) These skills, just like any automation technical skills, must be practiced to be mastered; and, just like mastering any technical skill, mastering interpersonal skills takes a long time. Develop interpersonal skills by volunteering for those tasks that everyone tries to avoid: lead the task team that is implementing the next dreaded new policy coming from Human Resources, volunteer to present at the local International Society of Automation (ISA) section meeting on that new technology, or find a positive way to confront that difficult operations manager who does not support the needed automation upgrade. An automation professional can develop interpersonal skills when they make time to practice.
References
1. Goleman, Daniel. "They've Taken Emotional Intelligence Too Far." Psychology Magazine (January 2011).
2. Lemay, Eunice, and Jane Schwamberger. Listen Up! How to Communicate Effectively at Work. Soquel, CA: Papilio Publishers, 2007.
3. Rein, Shaun. "Three Keys to Giving Great Presentations." Forbes Magazine (November 2010).
4. Stelzner, Michael. Writing White Papers: How to Capture Readers and Keep Them Engaged. Poway, CA: WhitePaperSource Publishers, 2007.
5. Bacon, Terry. What People Want: A Manager's Guide to Building Relationships that Work. Mountain View, CA: Davies-Black Publishing, 2006.
6. Covey, Stephen, and Rebecca Merrill. The Speed of Trust: The One Thing That Changes Everything. New York: Free Press, 2006.
7. Nigro, Nicholas. The Everything Coaching and Mentoring Book. Avon, MA: Adams Media, 2008.
8. Dawson, Roger. Secrets of Power Negotiating: Inside Secrets from a Master Negotiator. Pompton Plains, NJ: Career Press, 2011.
9. Patterson, Kerry, Joseph Grenny, Ron McMillan, and Al Switzler. Crucial Conversations: Tools for Talking When Stakes Are High. New York: McGraw-Hill Education, 2002.
10. Block, Peter. Flawless Consulting: A Guide to Getting Your Expertise Used. 2nd ed. San Francisco, CA: Jossey-Bass/Pfeiffer, 2000.
11. Yate, Martin. Hiring the Best: A Manager's Guide to Effective Interviewing and Recruiting. 5th ed. Avon, MA: Adams Media, 2006.
12. Block, Peter. Stewardship: Choosing Service Over Self-Interest. San Francisco, CA: Berrett-Koehler Publishers, 1993.
13. Levi, Daniel. Group Dynamics for Teams. Los Angeles, CA: Sage Publications, 2001.
14. Pink, Daniel H. Drive: The Surprising Truth About What Motivates Us. New York: Riverhead Books, 2009.
About the Author
David Adler has a master's degree in chemical engineering from Purdue University and more than 40 years of experience improving manufacturing with automation technologies. Adler has managed complex automation projects, led global automation programs, and developed automation strategies. While at Eli Lilly, he collected the life-cycle costs and benefits of 44 projects to confirm the justifications for automation. Adler retired from Eli Lilly in 2008 and has continued to pursue his passion for manufacturing efficiency through automation as an independent consultant. He has expanded his industry benchmarking dataset to include hundreds of projects across 25 companies. He is an industry authority on automation project management, automation best practices, and workforce development.
Index
12-pulse ready 161 2-D image data 526 3-D image data 526 abnormal situation 294, 296, 316, 333–334, 336–337, 339–341, 343, 345, 347, 364–366 AC induction motor 511 AC motors 148, 511 control of speed 150 control of torque and horsepower 150 types 150 AC versus DC 154 access control 418, 491 accessories 139, 152, 154, 453, 462 accounting real-time 581–582 accuracy 62, 79, 82, 90, 102, 104, 106–108, 117, 139, 189, 206, 297, 328, 330, 454, 484, 496–498, 507, 521, 527, 550, 554, 616 action qualifiers 74 non-stored 74 reset 75 stored (set) 75 time-limited 75 actions 70 actuation 505 actuator sensor interface (AS-i) 508 actuators 1, 26, 57, 128, 133, 233, 236, 263, 311, 494–497, 501, 509–510, 558 diaphragm 133–135, 140
electric 136–137, 510 hydraulic 137, 139, 509 magnetic 136 piston 133 piston-cylinder 135–136 pneumatic 134–137, 509 adaptive control 310, 316, 497, 500 adaptive-tuning 34, 289 adjustable speed drives 155, 157 advanced process control 41, 257, 296, 299, 323, 578, 603 advanced regulatory control 35 air bubble 99 air purge 99 air sets 142 alarm management 349 audit 357 detailed design 354 identification 351 implementation 354 maintenance 356 management of change 357 monitoring and assessment 356 operation 355 philosophy 350 rationalization 351 alarms 124, 196, 212–213, 218, 240, 252, 319, 324, 328, 334, 341, 349–354, 356–358, 375, 477, 618 and interlock testing 218 classification 352 depiction on graphics 336 design 351, 354 flooding 319 highly managed 358 management 240, 296, 319, 349–350, 356–358, 488 nuisance 358 philosophy 350–354, 356–358 priority 353 rationalization 351, 354
reset 62 response procedures 354 safety 349, 352, 358 stale 356–357 suppression 355 testing 355 algebraic equations 382 alternating current (AC) 136, 145, 172, 248, 496, 506 ambient temperature 100, 109, 173, 368 ampacity 168 amplifier 512 analog 140, 213, 225, 235–236, 269, 273, 337–339, 341–342, 344–345, 407, 409, 414, 496, 506, 512 communications 403, 405 controllers 24, 28–29 input 185, 189, 212–214, 408, 412, 499, 533 instrumentation 179 output 26, 213–214, 249, 302, 412, 533 position feedback 339 transmission 406, 408 transmitters 381 analog-to-digital (A/D) converter 313, 498–499, 506 analytical 90, 116–118, 123, 467, 471, 474 applications 112 equations 399 framework 467–468, 476 instrumentation 79, 115, 124, 248 procedures 472 system installation 120, 125 analyzer 115–117, 120–121, 123, 125, 236, 285–286, 289, 294, 297, 299, 302, 313–314, 317 hardware 123 installation 122 shelter 121–122 shelters 123 types 116 annunciation 252, 355, 396, 398, 619 failure 398 ANSI/ISA 5.01 409
ANSI/ISA 50.1 407 ANSI/ISA-100.11a 416–417 ANSI/ISA-101.01 239, 346, 354 ANSI/ISA-18.2 240, 349, 355–358 ANSI/ISA-5.1 6 ANSI/ISA-84.00.01 18 ANSI/ISA-84.91.01 358 ANSI/ISA-88.00.01 46–48, 50–51, 53–54 ANSI/ISA-88.01 18, 560, 566 ANSI/ISA-95.00.01 560 ANSI/ISA-95.00.02 560 ANSI/ISA-95.00.03 432, 558–559 apparatus selection 195 application 262 designer/engineer 265 programmer/designer 265 requirements 486 servers 233, 243 apprenticeship 326 approximations 390 arc over 511 area classification 10, 192–193, 203 areas 47 armature 146–148, 155–157 range 146 artificial neural network (ANN) 291 as-found 34 AS-i 508 as-maintained 455 ASME/engineering classifications 85 combined uncertainty 87 expanded uncertainty 87 random standard uncertainty 86 systematic standard uncertainty 86 assembler 275 assessing project status 625 asset
management 443, 481–483, 488 measures 457 performance 482 asynchronous 150, 328, 512 audit 350–352, 357, 379, 429 trail 242, 371, 378 authentication 437, 440 authorization 211 automatic process control 21 automatic tank gauge 96 automation 35, 79, 115, 145, 160, 165, 207, 236, 243, 245, 247, 253, 260, 269, 285–286, 294–297, 319, 321, 349, 403, 418–421, 427, 430, 441, 449, 483–484, 488, 505, 520, 523, 527, 535–536, 541, 557–558, 560, 567, 573–574, 581–582, 593–596, 599, 605, 612, 614– 615, 618, 620, 630, 636 and control (A&C) 3 benefits 591, 593 building 531, 535, 537 costs 595–599, 602–603 engineer 145, 288, 587, 601 equipment 201, 240, 460 factory 491 investments 601–603 leaders 627 measurements 81 professional 46–48, 167–168, 173–174, 176, 181, 186, 188–189, 207, 219, 443, 460, 541, 591, 602, 622, 627–628, 631, 638 solution 618 service provider 608 standards 323 strategy 614 vendors 600 auto-tuning 34 availability 118, 124, 251, 361, 387, 391–393, 398, 400, 413, 433, 445, 448–450, 454, 459– 460, 482, 500, 565, 567–568, 579–580, 582, 586, 614, 618, 623 steady-state 392 axis of motion 506 backlash 294, 296, 302, 311–313, 497, 513
backlight 526 backup 238, 240, 250–251, 302, 380, 414, 450, 535, 555, 619, 623 BACnet 536–537 ball valves 128–129 bandwidth 515 base 25, 116, 125, 248, 400, 407, 416, 418, 575, 623 base speed 147, 150, 155 basic process control system (BPCS) 375, 432 batch process 3, 35, 45, 47–48, 76, 233, 285, 291 control 45–46 plants 6 versus continuous process control 45–46 bellows 94 bench calibration 209, 211, 213–215 bid material cost and labor estimate 616 binary 496 bipolar 498 block valve 214 blocking 59, 139, 412 bonding 168–169, 174–176, 178–179, 187, 195 electrical equipment 168 electrically conductive materials and other equipment 168 Boolean logic 268 variable 73 Bourdon elements 94 braking methods 157, 159–160 bridge 94, 112, 155, 157 brushes 146, 148, 152, 154, 511–512 brushless DC motors 511 bubbler 118 budgetary cost and labor estimate 616 buffer 229, 565, 611 building automation 162, 491, 531, 537 open protocols 535, 537 specifying systems 537 building automation and control network protocol (BACnet) 535 building network controller 532
bump and stroke 621 bump test 307–308 bumpless transfer 25, 29 buoyancy 96–97 business planning 559–560, 563 business value 594 improvements 594 butterfly valves 128–131 bypass 131–132, 161–162, 231, 236, 319, 378, 380, 383–384, 578 C tube 94 cable 168 selection 170, 173 tray system 168 trays 187 cabling 173 cage-guided valves 130–131, 133 calibration 88, 93, 100, 105, 110–112, 119, 124, 209, 212–214, 279, 456, 497, 520, 523, 576 capacitance 95, 196, 198 capital budgeting 595 expenditure 600 projects 591, 595, 602 capsule 94 carrier frequency 158 cascade control 36–37, 311, 323, 574–575, 577, 587 cathode ray tube (CRT) screens 333 cavitation 127–128, 131 change control 239, 242, 606, 612, 616 change management 217 checklists 375 check-out 188, 621 chopper 160 classroom training 327 climate-control 237 close to trip 396 closed-loop business control 586 closed-loop tests 32–34
closeout 622 cloud computing 551–552 coaxial lighting 526 code division multiple access (CDMA) 420 collector 175 color coding 335 color palette 336 combined uncertainty 90 commissioning 207–211, 214, 217, 219, 296, 316, 326, 437, 595, 622 common-mode noise 184 communication 21, 51, 170, 173–174, 187, 189, 218, 223, 225, 227–229, 233–237, 243–245, 247–251, 253, 261–262, 273, 296, 324, 378–380, 403, 405, 407, 411–416, 418–420, 423– 424, 432, 434–435, 437–438, 447–448, 452, 454, 456, 470–471, 476, 494, 498, 518, 523, 532, 535, 553, 606, 618, 620–621, 628 access 434 failure 395–396 media 618 commutator 146, 148, 152, 154, 511 compliance 216–217, 351, 363, 433, 568, 593, 623 compound-wound 148 computer-based training 328 computer-integrated manufacturing (CIM) 541 computerized maintenance management system (CMMS) 487 conceptual design 378 conceptual details 615 conductor 170 grounded 171 identification 171 selection 172 ungrounded 172 un-isolatable 172 conduit 435 confidence 90 configuration 26–27, 29, 46, 55, 111, 131, 141, 148, 153–154, 200–201, 215–217, 219, 226– 228, 237–238, 240–242, 249, 262, 271, 288, 295–296, 302–303, 354, 378, 380, 382–383, 416, 430, 437–438, 454–455, 464, 483–484, 488, 506–507, 511, 513, 523, 534, 536, 555, 559, 568, 620–621 drive 161
connectivity 233, 236, 243–244, 425, 431, 491, 499, 518, 535 consensus standards 16 constant failure rate 389–390 constraint variable 305, 307 constraints 105, 107, 265, 309–310, 314, 321, 424–425, 474, 563, 565, 576, 578, 608–610 construction work packages 619 contactor 63–64, 155, 157 contingency 54, 482, 611 continuous 496 control 1, 21–22, 301 diffused illumination 526 improvement plan 623 mode 380 operations 364 process 550 process control 12, 22 contracts 452, 584, 607–609, 623, 634 constraints 608 contract modifiers 609, 612 cost-plus 610 fixed fee 609 fixed price 611 hybrid 609, 612 lump sum 609, 611 time and material (T&M) 610 time and material/not-to-exceed (T&M/NTE) 609–610 turnkey 609 control algorithm 25–28, 40–41 cascade 36 decoupling 39 discrete 57 feedback 23, 39 feedforward 36–37, 39–40 feedforward-feedback 38 ladder 59 logic 21, 68 modes 11
narratives 18, 615 network 237 override 42 program 264, 283 ratio 35–36, 38–39 recipe 6, 565 relay 58 scheme 618 selector 42 system documentation 3–4 time proportioning 25 training 323 valves 79, 127 control modules 48, 50, 55, 237 database 238 redundancy 238 control system 260, 263 configuration 619 discrete 263 process 264 controlled variable 305, 307 controller 1, 7, 21–22, 24, 26, 28, 32–39, 42–43, 48–49, 55, 61, 69, 80, 119, 123, 133–134, 139–140, 142, 185, 188, 210, 233– 238, 243, 247, 262, 268, 286, 288–289, 296, 300, 303– 307, 310, 313–315, 356, 406–409, 441, 483, 497, 505–507, 509–510, 512–515, 525, 532, 534, 554, 566, 574–578 building network 532 equipment-level 532 gain 23–24, 32, 34, 314 input 213 output 22–25, 28–29, 32–33, 40, 202, 293, 311 parameters 218 plant-level 532 primary 36 secondary 36 set-point weighting 27 tuning 29–30, 32–33 two-degree-of-freedom 26 unit-level 533
controller area network (CAN) 508 controlling costs 594 conversations 628 converter 7, 127, 156–157, 160, 408, 498–499, 506, 513 cooling 11, 25, 45, 49–51, 147, 150, 154, 304–305, 343, 364 methods 147 coriolis 107–108, 297 correlation 90 cost driven 608 cost measures 458 cost/benefit analysis 614 cost-only 597 cost-plus 608, 610–612 criticality ranking 481 crosstalk noise 185 current carriers 170 current controller 156 current measuring/scaling 156–157 custom graphic displays 239 cybersecurity 244, 252, 403, 423, 429, 620 dangerous failures 380, 382 DASH7 420 dashboards 537, 585 data acquisition 498 analytics 290 collection 566 compression algorithms 548 confidentiality 438 documentation 554 flow model 618 integrity plan 620 management 319, 484, 541, 543, 639, 642 reporting and analysis 447 quality 553 security 555 sheets 10
storage and retrieval 546 structure layout 618 transfer plan 620 data relationships entities 543 many-to-many 544 one-to-many 544 one-to-one 543 database 543–544, 619 design 545 maintenance 554 operations 548–549 query 546 reports 546 software 553 types 544 database structure 543 fields 543 key field 543 records 543 DC bus section 157 DC injection or flux braking 159 DC motor 146, 154, 511 control of speed 146 control of torque 147 DC motor types 147 compound-wound 148 permanent magnet 148 series-wound 147 shunt wound 148 DC power distribution systems 179 DCS evolution 245 dead time 30, 285, 288, 301 deadband 285, 497 decommissioning 374, 524 decoupling 40 control 39 defense in depth 429
definitive material cost and labor estimate 616 degree of hazard 194 degrees of freedom 83–84, 88, 90 demand 394 maintenance 484 dependent error sources 84 deployment 523, 620 derivative mode 23–24, 26–27, 301, 313 derivative-on-error 26 derivative-on-measurement 26 detailed design 619 detailed scheduling 565 detected 398 deterministic 611 device performance analysis 450 DeviceNet 236, 508 diagnostic monitoring 523 diagnostics 484–485 systems 124 diaphragm 94 actuators 134 differential pressure (d/p) transmitters 94–95 digital 140 buses 236 inputs 213, 533 outputs 213, 533 positioners 139 digital-to-analog converters 408, 498–499 dip tube 99 direct current (DC) 145 motors 496 direct digital controls (DDCs) 532 direct torque control 158 direct-acting 26 directly represented variables 228 discrete control 21, 57 forms of PID 28
input 213, 230–231 output 213, 230–231 displacers 97 distillation 43, 45, 52, 117, 300–301, 305, 308, 341, 346 distributed control system (DCS) 7, 22, 57, 223, 233, 261, 313, 333, 411, 412, 432 distribution equipment 188 disturbance variable 305, 307 divide-and-conquer method 472 documentation 3, 216, 620, 622–623 installation details 4, 14 instrument lists 4, 10 instrument location drawings 4, 12 location plans 4, 12 logic diagrams 4, 12 loop diagrams 4, 14 loop numbering 4, 7 operating instructions 18 piping and instrument diagrams 4, 6 process flow diagrams 4 specification forms 4, 10 standards and regulations 4, 16 tag numbering 4, 7 types 4 Doppler 105–106 drip-proof 147 drive configurations 161 driver 267, 499, 587 dumb devices 248 dynamic braking 159–160 compensation 38–39 models 285, 292–293, 295 reset limit 301 eccentric rotary plug valves 129 eccentricity 497 efficiency 303 electric actuators 136
electric motors 510 electrical 496 equipment 165, 168–169, 174–175, 177, 191, 193, 196 installations 165, 167–168, 174–175 noise 181, 225, 412, 522 safety 165, 188, 201, 219 strain gauge 95 system grounding subsystem 178 electrochemical analyzer 116 electrode-to-ground 177 electromagnetic 496 compatibility (EMC) 186 interference noise (EMI) 181 electromagnetic/radio frequency (EMI/RFI) interference 379 electromechanical 496 electromotive force (EMF) 110 electronic 157, 524 AC 157 analog 544 analog transmission 408 bypass 161 cam 514 circuitry 177 communication 434 control 17, 152 control loop 407 control valve positioner 407 controller 154, 519 DC 155 devices 145, 251, 432, 498, 549 displacement-type level transmitter 12 equipment 175, 179, 181 ignition system 499 information 425 positioners 139, 141 processing hardware 491, 518 signal transmission 407–408 signals 175
transmitters 93–94 valve positioner 133, 139, 142 electrostatic noise 182 embedded 340 controllers 262 emission controls 346, 499 reduction 499 emitter 181–183 emulation 242 enabling technologies 500 encapsulation 195, 197–198, 226 enclosures 147, 150, 168, 185 encoders 507 encryption 415–418 end-of-the-probe algorithm 102 energy harvesting 414 engineering classification 85 enterprise 47 asset management (EAM) 447, 481 entities 543 entity-relationship diagrams 544 environmental 26, 102, 115–116, 125, 195, 346, 351, 353, 367 applications 124 conditioning 120–121 conditions 173, 186, 484 constraints 522 contamination 433 damage 424 effects 22 impact 363, 429, 445 interfaces 477 limits 200, 303 monitoring 79, 125 protection 433 regulatory compliance 569 risks 244 environmentally controlled 243
shelters 121 environmental-quality analyzer 116 EPRI test 345 equal percentage characteristic 130 equipment 192, 201 grounding conductor (EGC) 169, 177 measures 459 modules 49 requirements 50 specifications 618 equipment-level controllers 532 ergonomic 217 error 81–82, 90, 497 systematic 85 errors that affect results 85 estimates 616 Ethernet 236 execution management 566 explosion-proof 139, 195–196 external reset 43 extreme temperature 220, 225 faceplates 239 factory acceptance testing (FAT) 216, 620 factory automation 491 fail-danger 397 fail-safe 397 failure modes 361, 395–396 failure modes and effects analysis (FMEA) 351 failure rate 376, 379, 381, 383, 387, 389–390, 392, 395, 400, 448–449, 547, 602 fall off potential 177 fault 382, 621 protection subsystem 177 tolerance 380 trees 382 feasibility studies 613 summary document 615 feedback 506, 513
control 23, 39 devices 509 trim 39 variable 64 feedforward 301 control 36–37, 39–40 fiber-optic communications 161 field 146, 543 calibration 215 device status 620 devices 79, 380 exciter 146 maintenance tools 488 replaceable units (FRUs) 452 fieldbus 236 communications 161 module 161 filled thermal systems 109 filter 25, 118–119, 123, 158, 185, 207, 219, 231, 244, 286, 288, 294, 301, 313, 317, 498, 514, 584 filtering 285, 329, 547 final control elements 79, 127 final elements 381 firing unit 156 firmware 160 first principle models 291, 294 first-order-plus-dead-time (FOPDT) 30 first-out 354 fixed-price 611 flameproof 195–197 flash 336, 366, 524 flash memory 161 flashing 336, 344 flat field 526 flat file 544 flexible metal conduit (FMC) 187 flip time 308 float 11, 96–97, 450
floating 179, 266, 519 flow meters magnetic 105 positive displacement 108 turbine 106, 235 ultrasonic 106 vortex 107 flowchart 468 fluid flow measurement 103 flux vector drive 154 forcing 230 forward decoupling 40 FOUNDATION Fieldbus 236, 411 four-quadrant system 160 framework 467–469 frequency modulated continuous waves (FMCW) 101 frequency response 497 function block 277 diagram (FBD) 57, 64 fuzzy logic controller (FLC) 299 galvanized rigid conduit (GRC) 187 gap analysis 345 gas chromatographs (GC) 121, 236 gate drivers 158 gated on 155–156 GC analyzer oven 123 general recipe 47 general-purpose requirements 192 generic process equipment training 323 global system mobile (GSM) 420 global variable 228 globe valves 128–130, 132–133, 135, 137 Grafcet 279 graphics 341–343 ground 168, 174, 178 grounded (grounding) 168 conductors 171
ground-fault current path 168 guidelines 618 Hall effect devices 509 hand wheels 142 hand-off-auto (HOA) 62 hardware 217 hazard analysis 375, 618 hazard and operability studies (HAZOP) 351, 361, 363, 365–366 recording and reporting 367 HAZOP analysis 361, 363 head flow measurement 103 head-level measurement 99 health, safety, and the environment (HSE) 439 heating, ventilating, and cooling (HVAC) 491 high demand 380 high-frequency noise 185 high-resolution cost estimate 616 highway addressable remote transducer (HART) 235, 408 historian 288, 317, 432, 441, 487, 566–567, 583, 619 historian/trend 240 historical data collection 240 horsepower 150 human measures 459 human-machine interface (HMI) 209, 237, 333–334, 354, 487, 618–619 alarms 240 configuration 240 development work process 345 engineering workstation 240 keyboards 238 operator workstations 238 sequence of events 240 standard displays 239 hurdle value 601 hybrid 423, 441, 563, 609, 612 hydraulic 496 actuators 137 head measurement 97
hysteresis 130 I/O 533 checklists 189 I/P converter 407 transducers 142 ideal 23 PID algorithm 23 identification 437 IDM architecture 487 IEC 205 61131-3 283 63082/ISA-108 483 impedance 497 implicit feedback path 64 induction 152 industrial automation and control systems security management system (IACS-SMS) 428 industrial networks 508 industrial, scientific, and medical (ISM) band 412 inferential measurements 296, 299, 317 influence coefficient 90 information technology (IT) 541 infrastructure as a service (IaaS) 552 inhibits 61 initial step 69 initiating event 375 input 497 input image table 229 input/output (I/O) 498, 533 disabling 230 discrete 235 forcing 230 processing 234 specialized 235 inspection 620, 623 installation 165 details 4, 14
verification 211 installed equipment 620 installed flow characteristic 285 instruction list (IL) 57, 68 instructions 68, 73, 283 instrument 212 and control (I&C) 3 commissioning 210 data sheets 618 drawings 5 index 10 lists 4, 10 location drawings 4, 12 selection 117 tag numbers 9 insulated gate bipolar transistors (IGBTs) 158 intangible assets 481 integral mode 312 integrated development environment (IDE) 262 integrator block 288 intelligent devices 483–485 electronic devices 432 intended function 387 interactive 23 PID algorithm 24 interfaces 319 interference 415 interlock testing 218 interlocks 55, 192, 196, 218, 352, 464 internal rate of return (IRR) 601 internet protocol (IP) 161 interoperable systems 535 interpersonal skills 592, 627 building trust 631 communicating in group meetings 629 communicating one-on-one 628 justifying automation 636
mentoring 632 motivating automation professionals 640 negotiating 634 resolving conflict 635 writing 630 interpolation 514 interpretive 266 interviews 325, 470, 598, 615 intrinsically safe (IS) 200 system 199 inventory 115, 368, 453, 520, 558–561, 563, 565–567, 594 inverted decoupling 41 inverter 154, 158 ISA-100 Wireless 415–416 ISA-101 HMI standard 346 ISA-95 558, 561 ISO 14000 series 482 14224 483 55000 482 9000 series 482 classifications 82 isolators 201 justification analysis 615 key field 543 key performance indicators (KPIs) 565, 579 keyboard 233, 238, 262, 275, 333, 449, 512 labor costs 448 ladder 65 control 59 diagram 57, 271, 272 logic 57, 60, 62–63, 65, 73 logic notation 58 lag dominant 299 lag time 301 lambda tuning 30
laminations 148 laser interferometers 509 latching circuit 61 latent variables 290 layer of protection analysis (LOPA) 368, 377 lead time 301 leader/follower 514 leadership 254, 591–592, 627, 639 leak detection 252, 432 leakage 129, 137, 139, 413 least privilege 429 legacy systems 534 level 3-4 boundary 569 level measurement 95 level of protection ia apparatus 200 ib apparatus 200 life cycle 371, 449 capital economic profile 603 cost analysis 596 economic analysis 598 management 523 lighting 182, 185, 336, 358, 414, 491, 525–526 lightning protection subsystem 177 limit switches 142 linear approach 472 characteristic 130 dynamic estimators 289, 294 encoders 508 interpolation 514 variable differential transformer (LVDT) 94, 509 linearity 497 live zero 407 loading 497 local area network (LAN) 247 local variable 228 location plans 4, 12
lockouts 61 logic diagrams 4, 12 logic systems 379 logical frameworks 467 logistics 208, 445, 449, 453, 520, 559–560, 563, 564, 567, 569 long-lead procurement 619 LonWorks 535 loop 15 checks 212–213 diagrams 4, 14 numbering 4, 7 performance criteria 29 tuning 218 loop-back 179 loss event 366 low demand 380 low-voltage DC power supplies 179 lump-sum 611 machine vision 517, 525, 528 macros 161 magnetic 105 flow meters 105 noise 183 magnetostrictive transducers 506 magnitude of displacer travel 97 maintainability 445 maintenance 123, 125, 411, 443, 445–446, 450, 481, 523 maintenance plan 622 maintenance, repair, and operating (MRO) costs 445 make-to-order (MTO) 557 make-to-stock (MTS) 557 management of change (MOC) 18, 346, 350–351, 357 mandatory standards 16 manipulated variable 305–307 manual-automatic switching 25 manufacturing execution system (MES) 218, 541, 557–558 manufacturing operations management (MOM) 541, 557, 564–565, 567
execution 561 functional domain level 557 integration 560–561 many-to-many 544 mapping 241, 272, 499 Markov model 382, 392–393 mass flow 103, 293 master terminal unit (MTU) 223, 247 maximum disturbance rejection 301 mean time between failure (MTBF) 225, 391, 449 mean time between failure spurious 382 mean time to dangerous failure 387 mean time to failure (MTTF) 361, 387, 389 mean time to failure spurious 387 mean time to restore (MTTR) 391 measurement uncertainty 81, 90 accuracy 82 measurements 390 closed-tank applications 99 fluid flow 103 head flow 103 hydraulic head 97 level 95 open-tank head level 98 radar 101 mechatronics 491, 493–497 classification 500 migration 295 minutes per repeat 24 mission time 388–389, 393–394 mitigation layers 375 mobile devices 243 Modbus 535–536 mode 230 model predictive control (MPC) 286, 305 best practices 316 modifications 128, 160, 177, 191, 266, 379, 446, 451, 547, 551, 554–555, 611 modifiers 68
modules 249 monitoring 59, 79, 124–125, 141, 223, 233, 250–251, 281, 304, 316, 333, 337, 341, 349–350, 352, 356–358, 417, 426, 432, 438, 450–452, 463, 489, 491, 523, 526, 531, 536–537, 558, 578, 582, 599, 623 Monte Carlo simulation 382 motion control 491, 505, 510 motion controllers 261 motivation 440–441, 638, 640–641 motor controls 80 motoring 157 multiaxis 526 multiple axes 514 multiple pole 153 multiple-input multiple-output (MIMO) 497 multi-ported 248 multivariate statistical process control (MSPC) 290 NAMUR NE 129 483 National Electrical Code (NEC) 167, 174, 193, 614 near infrared 526 near-integrating 299 negotiations 634 net present value 600 network architecture 618 function block diagram (FBD) 73 security 403, 412 neural networks 291 NFPA 205 no effect failures 398 nodes 364 noise common-mode 184 crosstalk 185 electrostatic 182 high-frequency 185 magnetic 183 reduction 181
non-excited 152 non-language 279 nonredundant 380 non-stored action qualifier 74 normal form 545 NoSQL 551–552 nuisance alarms 358 object linking and embedding (OLE) 243 OLE for Process Control (OPC) 243, 319, 519, 566 on-the-job training 326 open drip-proof motor 150 open modular automation controller (OMAC) 267 open systems 535 open-loop control 252 test tuning 30 open-tank head level measurement 98 operands 68 operating instructions 4, 18 operating systems 251, 424, 441, 541, 602 operational displays 252 operational performance analysis 573–574, 588 advanced control 578 enterprise business control 587 plant business control 579 process control loop 575 operational strategies 615 operations 52 and maintenance (O&M) manuals 188 capability 565 definition 565 performance 565 schedules 564 segments 563 work requests 565 operator 68 interface 217, 319, 333
station 233, 237–238, 240 training 319, 321, 324 training systems (OTS) 285 organization balanced matrix 607 functional 606 matrix 606 projectized 606 strong matrix 607 weak matrix 606 original equipment manufacturers (OEM) 532 oscillation 33–34 overcurrent 169 overload 169 fault 62 override 230 control 42 controllers 304 paddle, vibrating 96 parallel 23, 59 parallel PID algorithm 25 partial least squares (PLS) 291 pattern recognition controllers 312 PC-based control 262 performance 515 metrics and benchmarks 457 monitoring 124 period 1, 32–34, 74, 148, 154, 160, 201, 250, 313, 327, 354, 356, 381, 389, 391, 393, 441, 450–451, 551, 561, 584, 594, 599–601, 622 permanent magnet 148, 511 permissives 61 permits 61 PFD average 382, 387, 398 pH analyzer 116 phase 50 mix tank 51 reactor 51
physical model 50 physical system modeling 495 picture element 524 PID controller 312 discrete forms 28 PID algorithm 23, 43, 313 ideal 23 interactive 24 parallel 25 piezoelectric 496 Piezoresistivity transmitters 95 piping and instrumentation diagrams (P&IDs) 4, 6, 188, 333, 409 pixel 524 plan-do-check-act (PDCA) 428 plant operations 622 plant-level advanced scheduling and planning systems 565 plant-level controllers 532 platform as a service (PaaS) 552 PLCopen 271 pneumatic 496 control loop 406 positioners 139 systems 379 transmission 405 pneumatic, diaphragm-less piston actuators 135 position controllers 139 position transmitters 141 positive displacement flow meters 108 post-guided valves 130 potentiometers 94 pressure transmitters 94 pressurization 195–196, 366 pressurized systems 220 pressurized vessels measurement 99 preventive maintenance (PM) 454–455 preventive measures 459
primary controller 36 principal component analysis (PCA) 290 probabilistic 611 probability of failing safely (PFS) 387, 398 of failure on demand (PFD) 382, 387, 398 of failure on demand average 387 of success 387 procedural control model 50 equipment requirements 50 formulas 50 header 50 procedure 50 procedural language 274 procedure 50, 53 procedure-based operations 365 process analyzers 115, 119–120 process automation 236, 423 process cell 47 dead time 312 flow diagram (PFD) 4 gain 30, 286 hazard analysis (PHA) 351 segments 364 time constant 311 variable (PV) 286 process control characteristics 22 improvement 285, 296 process models 257, 285 costs and benefits 297 procurements 618 product definition management 565 production 35, 45, 117, 152, 304, 353, 365, 375, 381–382, 437–438, 450, 455, 474, 523, 525, 560–561, 564–567, 598 capacity 424 conditions 216
control 141 cost 346, 353 downtime 378 effects 445 equipment 460 job orders 557 losses 454, 458 management 243, 558 operations 559, 563 operations management 564 optimization 557 plan 341 plants 555 processes 115 rate 35, 295, 301–302, 305–307, 346 schedules 448, 455 workflows 557 production and operations data collection 566 definition management 567 detailed scheduling 565 dispatching 565 execution management 566 performance analysis 567 resource management 567 tracking 566 PROFIBUS 236, 411 PROFINET 508 profit 611 program 271 code testing 215 mode 230 organization unit (POU) 227 scan 229 programmable automation controllers (PACs) 261 programmable logic controller (PLC) 22, 57, 161, 170, 189, 209, 223, 225, 236, 248, 261, 267, 380, 396, 432, 471, 519, 558 architecture 225
programming 161 languages 257, 259, 269, 283 project 605 charter 612–613 definition 615 execution 591, 607 justification 591 life cycle 607, 612 management 591 management tools 607, 623 manager 605 process 595 schedule 617 specification 614, 622 propagation of uncertainty 90 proportional band 24 proportional mode 23, 26–27 proportional-integral-derivative (PID) 161, 252, 286, 299 control 257, 323 control algorithm 23 controller 46, 339 controller algorithm 236 stand-alone 261 proportional-on-error 26 proportional-on-measurement 26 protection types 195–198 protocol 161, 618 prototyping 296 proximity switches 265 pseudo random binary sequence (PRBS) 308 pulse counts 235 pulse width modulation (PWM) 157–158, 533 pulsed radar waves 101 purging 195–196 pushbuttons 188, 213 quartz resonant frequency 95 query 546, 551–553
raceway 169, 186 rack room 243 radar measurement 101 random error 85, 91 random standard uncertainties 85, 91 range 497 ratings 10, 127, 154, 170, 186, 196–197, 201, 204 ratio control 35–36, 38–39 rationalization 350–353, 357 real time execution 559 accounting 579, 581–582 optimization 285, 309 process databases 550 receipt verification 211, 620 receptors 182 recertification 623 recipe 47, 51–54, 64, 264–265, 281, 558, 562–563, 566–567, 620 batch 36, 50 changes 46 control 6, 565 management 574, 603 procedures 54–55 software 50 unit procedure 48 records 543 redundancy 399 regenerative braking 157, 159–160 regulations 4, 16 relational database 545 management system (RDBMS) 551 relative gain analysis 39 relay ladder logic 58, 60, 249 method 32 systems 379 reliability 361, 387–388, 390–391, 398 block diagrams 382
remote accessibility 243 remote I/O 225, 227, 229–230, 419, 535 remote terminal units (RTUs) 223, 247, 420, 432 repairable systems 390 repeatability 497 repeaters closed-tank measurement 100 repeats per minute 24 repelling 146 replacement system 602 reporting functions 619 request for proposal (RFP) 595 reset 312 rate 24 resistance 173 resistance temperature detector (RTD) 109–112, 210, 212–214, 235, 533 input 214 resistive devices 94 resolution 285, 497–498 resolvers 506 resonant frequency 95 resource 271 availability 438 management 567 response time 139, 267, 292, 300, 308, 352–353, 449–450, 452, 459, 552 restoration 392, 394, 447 restore time 391 restricted data flow 438 restrictions 61 return on investment 599 reverse-acting 26 Reynolds number (Re) 104, 105 risk assessment 375 management 482 reduction factor (RRF) 382, 398 /reward analysis 609 robot controllers 261
role-based equipment hierarchy 559 rotary motion valves 128 rotate 149 run mode 230 run to failure 484 runaway response 299 rung 59, 73 safe failures 382 safeguards adequacy 366 safety 165, 219, 361, 429 alarm 358 electrical 219 instrumented function (SIF) 377, 387 integrity levels (SILs) 439 PLCs 262 pressurized systems 220 requirements specification (SRS) 378 standards 191 training 323 safety instrumented systems (SISs) 296, 361, 371, 432 assessments 374 commissioning 373, 379 decommissioning 374 design and engineering 373 hazard and risk analysis 374 hazard and risk assessment 373 installation 373, 379 modifications 374, 379 operations and maintenance 374, 379 protection layers 373 requirements 296 risk 377 safety functions 373, 375 safety integrity levels 377 safety requirements specification 373, 378 validation 373, 379
verification 374 safety-instrumented function (SIF) verification 361 sample conditioning 118–119 sample point selection 116 sampling systems 119 saturation 497 SCADA systems 432 key concepts 253 scan sequence 267 time 228 scenario 365 schedule 16, 35, 188, 216–217, 321–322, 326, 358, 448, 455, 457, 532, 559–561, 564–566, 606, 610, 612, 614, 616–617, 623–624, 635 scheduled tuning 35 scope compliance 620 scope of work (SOW) 615 scores plot 290 seal 61 secondary controller 36 secondary transducers 95 second-order plus dead-time step response 288 security activity-based criteria 432 asset-based criteria 433 communications access 434 concepts 425 conduits 435 consequence-based criteria 434 foundational requirements 437 functionality included 430 improvement model 428 industrial systems 429 life cycle 430 management system 425 physical access and proximity 434 program maturity 427 reference model 430
standards and practices 441 system definition 430 systems and interfaces 431 testing 215 zones 434 security levels 439–441 achieved 439 capability 439 target 439 selector control 42 self-actuating controllers 405–406 self-powered 407, 414 self-regulating 312 self-testing electronic positioners 141 self-tuning 34 sensitivity 90–91, 285, 497 sensors 79, 381, 496, 498 separately derived system (SDS) 169, 178 sequence of events (SOE) 216, 240, 373 sequence of execution 267 sequence selection 71 start 62 stop 62 sequential control 46–47, 64, 69, 252 sequential function chart (SFC) 57, 69 steps 69 serial communications 161, 236 series 59 series-wound 147 service 445–446 level agreements (SLAs) 450 technicians 446 serviceability 445 servo 505, 512–513 set point 23, 26, 29, 33, 35–37, 40, 42, 51–52, 64, 218–219, 257, 288, 299–303, 305–307, 309–310, 312–313, 315–317, 339, 352, 355, 381, 405, 440, 574, 578, 619–620 set-point weighting controller 27
setup 51, 93, 145, 215, 286, 300–301, 307, 514, 522–523, 537, 561 shotgun approach 474 shunt wound 148 silicon resonant sensors 95 simulation 242, 257, 329 simultaneous sequences 71 single-board computers 261 single-component concentration analyzer 116 single-input single-output (SISO) 497 site acceptance test (SAT) 217, 621 site cybersecurity test 621 site safety test 621 site system integrity test 622 slip rings 152 smart instrumentation 235 software 216 as a service (SaaS) 552 development 215, 619 installation 523, 621 testing 215 software-based systems 380 solenoid valves 142 solid-state systems 380 source 5, 81–86, 90, 102, 105–106, 149, 158, 161, 168–169, 176, 178–179, 182, 185, 191, 194–195, 198, 219, 237, 301, 303, 364, 407–408, 411, 414, 455, 471, 484, 496, 505, 511, 550, 582, 615 sourcing 192 spaghetti code 272 span 23–24, 110, 134, 289, 308, 341, 524 spare parts list 623 specialized I/O 235 specification forms 4, 10 speed controller 155–157 measuring/scaling 156 reference 156 speed-controlled pumps 127 spiral 94
squirrel-cage motor 148, 150–151 stakeholder 614–615 stale alarm 356–357 standard AC motors 150 deviation 83 deviation of the average 81, 91 electronic analog I/O 235 standards 4, 18, 283, 413 consensus 16 mandatory 16 starting torque 147–148, 152–153 start-up 188, 622 state-based alarming 354 statistical process control (SPC) 124 status report 623 steady state 392 first principle models 295 gain 315 models 257, 285, 288, 292 step elapsed time 70 evolution 70 flag 70 response models 286, 289 stepper 496 stepwise flow 70 stick-slip 296, 311, 313, 317 stiction 31, 294, 311–312 stoichiometry 301 storage and retrieval 546 structured lighting 526 structured query language (SQL) 552 structured text (ST) 57, 67, 72 subroutines 272 success triangle 608 supervisory control and data acquisition (SCADA) 223, 247, 333, 420 support 622
supportability 445 suppression 100 surge protection 181 switch frequency 158 switching manual-automatic 25 synchronous 150, 152 synchros 509 syntax 67, 268, 274–275 system acceptance testing (SAT) 285 analysis 381 checks 621 design 618 designer/engineer 264 integrity 437 level testing 218 management plan 623 performance monitoring plan 623 training 323 systematic error 85, 91 standard uncertainties 85, 91 system-oriented 198 system-to-ground 177 tachometer generator 156 tag marks 9 tag numbering 4 tangible assets 481 task 228, 271 TCP/IP 236–237, 248, 250–251, 535 teamwork 639 technical solutions 615 technical studies 615 temperature 109 class 194 filled thermal systems 109
input 214 resistance temperature devices (see also resistance temperature detector) 111 thermistors 112 thermocouples 110, 235 test mode 230 test plan 618 testing 188, 296, 523 factory acceptance 216 site acceptance 217 software 215 system level 218 thermocouple input 214 thermography 526 third head technique 478 third party 191, 201, 322 threat-risk assessment 429 three phase 149, 252, 511 three-way valves 131 tieback models 286 time and material 609–610 time base 268, 278 time constant 30, 301 time driven 608 time proportioning control 25 time to repair 391 time-based inspection 484 timed out 61 time-limited action qualifier 75 timely response to events 438 time-of-flight 102, 105–106 timing diagram 62 torque 130, 139, 145, 148, 152–153, 156, 158, 160, 162, 185, 496, 511, 513 control 147, 150 controlling 157 tracking 566 trainer role 322 training 19, 296, 319, 321, 621 apprenticeship 326
classroom 327 computer-based 328 control 323 delivery methods 324 external programs with certification 327 generic process equipment 323 instrumentation 323 on-the-job 326 process 322 safety 323 scenarios 328 scenarios with expert guidance 329 self-study 325 simulation 329 system 323 topics 323 unit-specific 324 transformer 94, 148, 161, 177–179, 181, 183, 197, 199, 252, 506 transistors 155, 158 transition condition 70 transmission control protocol (TCP) 161 transmitters differential pressure (d/p) 94 pressure 94 trend 57, 110, 235, 245, 288, 334, 339, 341–342, 423, 425, 447, 477, 496, 499, 523, 546, 548, 552 analysis 451 data 217 embedded 334, 340, 344–345 logs 534 package 240 plots 303 recorders 471 trial-and-error tuning 30, 33 troubleshooting 296, 443, 467, 470, 475–476, 621 circle-the-wagons 477 complex-to-simple 477 consultation 478
fault insertion 476 framework 468 intuition 478 out-of-the-box thinking 478 remove-and-conquer 476 set a trap 477 substitution 476 true value 91 tuning controller 29–31, 35, 42, 316, 576 initial 35 parameters 23–24, 27, 30–35, 241 scheduled 35 self 33–34 trial-and-error 30, 33 turbine 105 meters 106 turndown ratio 103 turnover 620 two-degree-of-freedom controller 26 ultimate gain 32 ultimate sensitivity 32 ultrasonic 95, 102, 105 flow meters 106–107 unavailability 361, 387, 391–394, 438 undetected 398 ungrounded conductors 172 unified modeling language (UML) 485 unipolar 498 un-isolatable conductors 172 unit procedures 53 unit-level controller 533 units 24, 48, 50, 52, 54, 83, 86, 88, 93, 102–104, 139, 168, 207, 228, 244, 248, 286, 289, 400, 405, 419, 449–450, 513, 523, 533, 552, 559, 565–566, 614 unit-specific training 324 unreliability 361, 387, 389–390 uptime 460
validation 296, 329, 417, 447, 454, 523 valves ball 128–129 butterfly 128–131 cage-guided 130–131, 133 deadband 311, 313 eccentric rotary plug 129 globe 128–130, 132–133, 135, 137 operation 576 position controller (VPC) 302–303, 305 positioners 139 post-guided 130 rotary motion 128 three-way 131 types 128 variable capacitance device 94 variable conductors 95 variable frequency drives (VFDs) 185, 534 variable reluctance 95 variable speed drive 127, 236, 285–286, 500 variable-displacement measuring devices 96 variables 228 variance reports 582 vector 306, 440–441 velocity 28, 102–107, 285, 506, 509–510, 512–515 vendor assistance 475 vendor calibration 214 vibrating paddle 96 virus 424, 440 vision system 491, 517 camera 524–526 components 518–519 guidance 520 identification 521 implementation 521–523 inspection 521 measuring and gauging 521 tasks 520
vision-guided robotics (VGR) 520 voice coil 137, 139–140 voltage drop 173 vortex meters 107 vortex shedding 105 vulnerability 252 walk-through 472 warranty support 622 watchdog timer 230 water-quality analyzer 116 Welch-Satterthwaite 91 wet testing 218–219 WIA-FA 418 WIA-PA 415, 418 wide area network (WAN) 248 Wi-Fi 243, 415, 417–419 WiGig 419 wild flow 301 wire 169 insulation selection 171 selection 170 wired field networks 412 wireless 161, 237, 243, 245, 250, 286, 294, 302, 403, 411–419, 421, 441, 454, 456, 488, 494, 501, 642 field instrumentation network 237 plant network 237 transmitters 411 WirelessHART 411–412, 415, 417–419 wiring 170 practices 169 work breakdown structure (WBS) 615 work master 566 workflows 565 Worldwide Interoperability for Microwave Access (WiMAX) 250 worm 290, 424 wound rotor 151–152, 511
zero elevation 100 zero suppression 100 zero-volt reference 179 Ziegler-Nichols 30, 32, 218 ZigBee 418–419, 421 zone 434 zone designation 194 Z-Wave 421