The ASQ Certified Quality Process Analyst Handbook
Third Edition
Sandra L. Furterer, editor
ASQ Quality Press Milwaukee, Wisconsin
Published by ASQExcellence, Milwaukee, WI
Produced and distributed by Quality Press, ASQ, Milwaukee, WI

© 2013, 2021 by ASQExcellence

Library of Congress Cataloging-in-Publication Data

Names: Furterer, Sandra L., editor.
Title: The certified quality process analyst handbook / Sandra L. Furterer, editor.
Description: Third edition. | Milwaukee, Wisconsin : ASQ Quality Press, [2021] | Previous edition entered under: Eldon H. Christensen. | Includes bibliographical references and index. | Summary: "This handbook is designed as a reference for ASQ's Certified Quality Process Analyst Body of Knowledge (BoK), providing the basic information needed to prepare for the CQPA examination. It has been revised and expanded to match the 2020 BoK. The book and certification are aimed at the paraprofessional who, in support of and under the direction of quality engineers or supervisors, analyzes and solves quality problems and is involved in quality improvement projects. This book is perfect for both recent graduates and those with work experience who want to expand their knowledge of quality tools and processes. The main sections in the CQPA Body of Knowledge are subdivided into related subsections. The relevant portion of the CQPA Body of Knowledge is excerpted at the beginning of each Body of Knowledge section in the text as a guide for the reader"—Provided by publisher.
Identifiers: LCCN 2021036478 | ISBN 9781951058388 (hardback) | ISBN 9781952236150 (hardback) | ISBN 9781951058395 (epub) | ISBN 9781951058401 (pdf) | ISBN 9781952236167 (epub)
Subjects: LCSH: Quality control—Handbooks, manuals, etc.
Classification: LCC TS156 .C533 2021 | DDC 658.5/62—dc23
LC record available at https://lccn.loc.gov/2021036478.

No part of this book may be reproduced in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without the prior written permission of the publisher.

ASQ and ASQExcellence advance individual, organizational, and community excellence worldwide through learning, quality improvement, and knowledge exchange.

Attention bookstores, wholesalers, schools, and corporations: Quality Press and ASQExcellence books are available at quantity discounts with bulk purchases for business, trade, or educational uses. For information, please contact Quality Press at 800-248-1946 or [email protected].

To place orders or browse the selection of ASQExcellence and Quality Press titles, visit our website at: http://www.asq.org/quality-press.

Printed in the United States.
25 24 23 22 21 GP 5 4 3 2 1
To my family—husband Dan, and children Kelly, Erik, and Zach—for the love, support, and care that you provide each and every day; and to Demi, Lily, and Louis, my rescue dogs and cat, who bring joy and hang with me when I’m writing. Sandra L. Furterer
Table of Contents
List of Figures and Tables
Acknowledgments
Introduction

Part I Quality Concepts and Team Dynamics
Chapter 1 Professional Conduct and Ethics
  Code of Ethics
Chapter 2 Quality Concepts
  Quality
  Quality Planning
  Quality Standards, Requirements, and Specifications
  Quality Documentation
  Cost of Quality (COQ)
Chapter 3 Quality Audits
  Audit Types
  Audit Components
  Audit Roles and Responsibilities
Chapter 4 Team Dynamics
  Types of Teams
  Team Development
  Team Stages
  Team Roles and Responsibilities
  Team Conflict
Chapter 5 Training and Evaluation
  Link Training to Organizational Goals
  Needs Assessment
  Adapt Information to Meet Adult Learning Styles
  Select the Training Delivery Method
  Evaluate Training Effectiveness
  Link to Organizational and Individual Performance Measures
  Tools for Assessing Training Effectiveness

Part II Quality Tools and Process Improvement Techniques
Chapter 6 Process Improvement Concepts and Approaches
  Plan–Do–Check–Act (PDCA)
  Kaizen
  Incremental and Breakthrough Improvement
Chapter 7 Basic Quality Tools
  Cause-and-Effect Diagram
  Flowcharts (Process Maps)
  Check Sheets
  Pareto Charts
  Scatter Diagram
  Histograms
Chapter 8 Process Improvement Techniques
  Lean
  Six Sigma
  Benchmarking
  Risk Management
  Business Process Management (BPM)
Chapter 9 Management and Planning Tools
  Quality Management Tools
  Project Management Tools

Part III Data Analysis
Chapter 10 Basic Concepts
  Basic Statistics
  Basic Distributions
  Probability Concepts
  Reliability Concepts
Chapter 11 Data Types, Collection, and Integrity
  Measurement Scales
  Data Types
  Data Collection and Analysis
  Data Integrity
  Data Plotting
Chapter 12 Sampling
  Sampling Methods
  Acceptance Sampling
  ANSI/ASQ Z1.4
  ANSI/ASQ Z1.9
Chapter 13 Measurement System Analysis (MSA)
  Measurement
  Concepts in Measurements
Chapter 14 Statistical Process Control (SPC)
  Fundamental Concepts
  Rational Subgroups
  Control Charts for Attributes Data
  Control Charts for Variables Data
  Common and Special Cause Variation
  Process Capability Measures
Chapter 15 Advanced Statistical Analysis
  Regression and Correlation Models
  Hypothesis Testing
  Null Hypothesis
  Design of Experiments (DOE)
  Taguchi Concepts and Methods
  Analysis of Variance (ANOVA)

Part IV Customer–Supplier Relations
Chapter 16 Internal and External Customers and Suppliers
  Internal Customers
  Improving Internal Processes and Services
  Effect of Treatment of Internal Customers on That of External Customers
  Methods to Energize Internal Customers
  External Customers
  External Customers' Influence on Products and Services
Chapter 17 Customer Satisfaction Methods
  Surveys
  Focus Groups
  Complaint Forms
  Warranty and Guarantee Analysis
  Quality Function Deployment
Chapter 18 Product and Process Approval Systems
  Verification
  Validation and Qualification Methods
Chapter 19 Supplier Management
  Supplier Selection
  Supplier Performance
  Key Measures of Supplier Performance
  Supplier Assessment Metrics
Chapter 20 Material Identification, Status, and Traceability
  Importance of Materials Identification
  Key Requirements for Tracking Materials
  Methods for Segregating Nonconforming Materials
  A Sample Procedure for Tracking Materials

Part V Corrective and Preventive Action (CAPA)
Chapter 21 Corrective Action
  Problem Identification and Assessment
  Contain the Negative Effects Resulting from the Defect
  Identify Root Causes
  Corrective Action
  Verify and Confirm Effectiveness
Chapter 22 Preventive Action
  Identify Potential Problems
  Identify Potential Causes

Part VI Appendices
Appendix A Body of Knowledge for the ASQ Certified Quality Process Analyst (CQPA)—2020
  Levels of Cognition Based on Bloom's Taxonomy—Revised (2001)
Appendix B Areas under Standard Normal Curve
Appendix C Control Limit Formulas
  Variables Charts
  Attributes Charts
Appendix D Factors for Control Charts

Endnotes
Glossary
Bibliography
Index
About the Editor
List of Figures and Tables
Part I
Figure 2.1 Emergency services value chain
Figure 2.2 Emergency services SIPOC
Figure 2.3 Emergency services stakeholder analysis
Figure 2.4 Quality impact
Figure 2.5 Deming's chain reaction of quality
Figure 2.6 Example document traceability matrix
Table 2.1 Quality costs—general description
Figure 4.1 Typical U-shaped cell layout
Figure 4.2 Guidelines for successful self-managed teams
Figure 4.3 Objectives made the SMART WAY
Table 4.1 Meeting ground rules
Figure 4.4 Sample rules of engagement
Table 4.2 Team roles, responsibilities, and performance attributes
Table 5.1 Job training selection criteria
Figure 5.1 Considerations for selecting a training system
Table 5.2 Levels of training evaluation
Figure 5.2 Characteristics of evaluation levels

Part II
Figure 6.1 The PDCA/PDSA cycle
Figure 6.2 Kaizen and the PDCA cycle
Figure 6.3 DMAIC phases and activities
Table 6.1 Define phase activities and common tools
Table 6.2 Measure phase activities and common tools
Table 6.3 Analyze phase activities and common tools
Table 6.4 Improve phase activities and common tools
Table 6.5 Control phase activities and common tools
Figure 6.4 DMAIC cycle
Figure 7.1 The seven basic quality tools
Figure 7.2 Cause-and-effect diagram for a manufacturing problem
Figure 7.3 Completed cause-and-effect diagram for a manufacturing problem
Figure 7.4 Cause-and-effect diagram for a financial services process
Figure 7.5 Process architecture map icons or symbols
Figure 7.6 Sample process architecture map of emergency services—arrival and triage process
Figure 7.7 Tally and frequency distribution
Figure 7.8 Time-related check sheet example
Figure 7.9 Defects table example
Figure 7.10 Pareto diagram example
Figure 7.11 Scatter diagrams depicting high degree and low degree of positive correlation
Figure 7.12 Scatter diagrams depicting high degree and low degree of negative correlation
Figure 7.13 Example of a run chart for triage time in an emergency department
Figure 7.14 Scatter diagram depicting no correlation
Figure 7.15 Histogram worksheet
Figure 7.16 Histogram of student SAT scores
Figure 8.1 Pull system
Table 8.1 5S terms and descriptions
Figure 8.2 Continuous flow
Figure 8.3 Symbols on process architecture map (PAM) for value analysis
Figure 8.4 Sample lean analysis with value analysis and waste analysis for the emergency department arrival and triage process
Figure 8.5 Value stream map example—macro level (partial)
Figure 8.6 Value stream map example—plant level (partial)
Table 8.2 Defects per million opportunities (DPMO) at various process sigma levels
Figure 8.7 y is a function of x
Table 8.3 Traditional and Lean-Six Sigma (LSS) thinking in an organization
Figure 8.8 Six Sigma tools by DMAIC phase
Table 8.4 Six Sigma roles and responsibilities
Table 8.5 Risk likelihood
Table 8.6 Magnitude of impact or risk consequences
Table 8.7 Risk assessment
Table 8.8 Risk handling approach
Table 8.9 Risk prioritization
Figure 8.9 Risk cube for emergency department
Figure 8.10 The evolution to business process management
Figure 8.11 BPM life cycle phases
Figure 8.12 Value chain and process meta model
Figure 9.1 Affinity diagram of "methods to improve team performance"
Figure 9.2 Tree diagram example
Table 9.1 When to use differently shaped matrices
Figure 9.3 Process decision program chart example
Figure 9.4 L-shaped matrix example
Figure 9.5 Roof-shaped matrix example
Figure 9.6 Prioritization matrix example
Figure 9.7 Interrelationship digraph (relations diagram) example
Figure 9.8 Activity network diagram example (activity on node)
Table 9.2 Project management phases, activities, and tools
Figure 9.9 An example work breakdown structure
Figure 9.10 Network diagram example
Figure 9.11 PERT chart example
Figure 9.12 CPM chart example
Figure 9.13 Network diagram example created by MS Project
Figure 9.14 A sample Gantt chart
Figure 9.15 Responsibility assignment matrix (RAM) example
Figure 9.16 Line graph budget chart example
Figure 9.17 Risk probability impact matrix example

Part III
Figure 10.1 Standard deviation calculation
Figure 10.2 Binomial distribution with n = 10 and p = .20
Figure 10.3 Histograms of error values
Figure 10.4 Standard normal curve
Figure 10.5 Area under the standard normal curve between z = 0 and z = 1
Figure 10.6 Area under the standard normal curve between z = −2 and z = 1
Figure 10.7 Area under a normal curve between 0.750" and 0.754"
Figure 10.8 Bathtub curve
Table 10.1 Relationship between each bathtub phase, its probability distribution, and failure causes
Figure 10.9 Reliability can be modeled by the exponential distribution
Figure 11.1 Data collection plan for a women's healthcare center
Figure 11.2 Operational definitions for a women's healthcare center
Figure 11.3 Data quality management framework processes
Table 11.1 Strategies for ensuring data integrity, validity, and reliability
Figure 11.4 Histogram of triage time data
Figure 11.5 Stem-and-leaf plot for the triage time data
Figure 11.6 Box plot of triage time data
Figure 11.7 Dot plot of triage time data
Figure 11.8 Scatter plot or diagram of triage time versus age
Figure 11.9 Pareto chart of how much the triage time fluctuates with each patient across time
Figure 11.10 Time series plot of a subset of triage time
Figure 12.1 An operating characteristic (OC) curve
Figure 12.2 Average outgoing quality curve for N = ∞, n = 50, c = 3
Figure 12.3 Effect on an OC curve of changing sample size n when accept number c is held constant
Figure 12.4 Effect of changing accept number c when sample size n is held constant
Figure 12.5 Effect of changing lot size N when accept number c and sample size n are held constant
Figure 12.6 Operating characteristic curves for sampling plans having the sample size equal to 10% of the lot size
Figure 12.7 Switching rules for normal, tightened, and reduced inspection
Table 12.1 Sampling plan with sample size, acceptance number, and rejection number
Figure 12.8 General structure and organization of ANSI/ASQ Z1.9-2003
Table 12.2 Example for ANSI/ASQ Z1.9-2003 sampling plan
Table 12.3 Example for ANSI/ASQ Z1.9-2003 variables sampling plan
Figure 12.9 ANSI/ASQ Z1.4-2003 Table VIII: Limit numbers for reduced inspection
Figure 12.10 ANSI/ASQ Z1.4-2003 Table I: Sample size code letters per inspection levels
Figure 12.11 ANSI/ASQ Z1.4-2003 Table II-A: Single sampling plans for normal inspection
Figure 12.12 ANSI/ASQ Z1.4-2003 Table III-A: Double sampling plans for normal inspection
Figure 12.13 ANSI/ASQ Z1.4-2003 Table IV-A: Multiple sampling plans for normal inspection
Figure 12.14 ANSI/ASQ Z1.9-2003 Table A-2: Sample size code letters
Figure 12.15 ANSI/ASQ Z1.9-2003 Table C-1: Master table for normal and tightened inspection for plans based on variability unknown (single specification limit—form 1)
Figure 12.16 ANSI/ASQ Z1.9-2003 Table B-5: Table for estimating the lot percent nonconforming using the standard deviation method
Figure 12.17 ANSI/ASQ Z1.9-2003 Table B-3: Master table for normal and tightened inspection for plans based on variability unknown (double specification limit and form 2—single specification limit)
Figure 13.1 Factors affecting the measuring process
Figure 13.2 Gage accuracy is the difference between the measured average of the gage and the true value, which is defined with the most accurate measurement equipment available
Figure 13.3 Measurement data can be represented by one of four possible scenarios
Figure 13.4 Gage R&R study for measurements summary report
Figure 13.5 Gage R&R study for measurements variation report
Figure 13.6 Gage R&R study for measurements report card
Figure 13.7 Attribute agreement analysis results for mortgage appraisal process
Figure 13.8 Misclassification report for mortgage appraisal process
Figure 13.9 Attribute agreement analysis accuracy report
Figure 14.1 Attribute agreement analysis accuracy report
Figure 14.2 Graphs of individual values versus averages of the subgroup samples
Figure 14.3 Conveyor belt in chocolate-making process; possible sampling plans
Figure 14.4 Example of a p chart
Figure 14.5 Example of an np chart
Figure 14.6 Example of a u chart
Table 14.1 Attributes data with totals
Figure 14.7 P chart out-of-control tests from Minitab
Table 14.2 Data for X̄ and R chart control limit calculations
Figure 14.8 X̄ and R chart example
Figure 14.9 Control chart indicators of process change
Figure 14.10 Point outside control limit
Figure 14.11 Seven successive points trending upward
Figure 14.12 Seven successive points on one side of the average
Figure 14.13 Nonrandom pattern going up and down across the average line
Figure 14.14 Nonrandom patterns with apparent cycles
Table 14.3 Target process capability indices
Figure 15.1 Example of linear correlation as demonstrated by a series of scatter diagrams
Figure 15.2 Some frequently occurring patterns of data that lead to seriously misleading values of r and are not recognized as a consequence
Figure 15.3 Fifty realizations of a 95% confidence interval for μ
Figure 15.4 Density of the t-distribution for 1, 2, 3, 5, 10, and 30 degrees of freedom compared to the normal distribution
Table 15.1 t-table
Table 15.2 Experimental plan
Table 15.3 Experimental results
Table 15.4 The 2² configuration
Table 15.5 Signs of interaction
Table 15.6 Treatment combination
Table 15.7 Analysis of variance
Figure 15.5 Main effects and interaction plots for the 2² design in Table 15.4
Figure 15.6 The four classes of variables in the Taguchi method
Figure 15.7 Comparison between the traditional and the Taguchi concept for assessing the impact of product quality
Table 15.8 Emergency department ANOVA factors and levels
Figure 15.8 ANOVA results for the emergency department study
Figure 15.9 Tukey comparisons for disposition factor and levels
Figure 15.10 Tukey comparisons for number of orders, factors, and levels
Figure 15.11 Emergency department factorial plots

Part IV
Figure 16.1 The customer–supplier value chain
Table 17.1 Survey types
Table 17.2 Survey mediums
Figure 17.1 Customer complaint form
Table 17.3 Test to identify an organization's attitude toward complaints
Figure 17.2 Quality function deployment matrix—"house of quality"
Figure 17.3 Voice of the customer deployed through an iterative QFD process
Figure 17.4 QFD house of quality matrix example
Figure 18.1 Stages of the product development cycle
Table 19.1 SCOR model performance attributes, definitions, and metrics
Table 19.2 Supplier performance assessment metrics

Part V
Table 22.1 FMEA table example
Acknowledgments
I would like to acknowledge the editors of the second edition of the Certified Quality Process Analyst Handbook, who built a great foundation for this handbook: Chris Christensen, Kathleen M. Betz, and Marilyn S. Stein. I would like to thank my editors and reviewers for making this handbook happen. To all of the quality process analysts who have the desire to learn and apply the methods, tools, and concepts in this handbook, thank you for providing me with the impetus for editing this handbook.

Sandra L. Furterer
Introduction
This handbook is designed as a reference for the Certified Quality Process Analyst (CQPA) Body of Knowledge (BoK), providing the basic information needed to prepare for the CQPA examination. However, it can also be leveraged to learn and deepen the understanding of a quality process analyst, which is the first step to becoming a quality engineer. There are five main sections in the CQPA Body of Knowledge, further subdivided into related subsections:

• Quality Concepts and Team Dynamics
• Quality Tools and Process Improvement Techniques
• Data Analysis
• Customer–Supplier Relations
• Corrective and Preventive Action

Part I of the handbook describes the fundamental elements of a quality system. The section begins with the quality principles embodied by the ASQ Code of Ethics. It then covers the importance of quality to improve processes, products, and services. Lastly, it describes quality planning, standards, requirements, and specifications; documentation and the Cost of Quality (COQ); quality audits; team dynamics; and training and evaluation.

Part II focuses on quality tools and process improvement techniques, including Lean-Six Sigma, benchmarking, risk management and business process management, management and planning tools, and project management tools.

Part III provides the analytical methods to interpret and compare data sets and model processes. Basic statistics, probability, reliability concepts, data types, data collection and data integrity, sampling methods, measurement systems analysis, statistical process control (SPC), advanced statistical analysis tools, regression and correlation, hypothesis testing, design of experiments (DOE), Taguchi concepts, and analysis of variance are explained in this section.

Customer–supplier relations are covered in Part IV, including internal and external customers and suppliers, customer satisfaction methods, product and process approval systems, supplier management, and material identification, status, and traceability.

Finally, as discussed in Part V, an overall effective quality management system must employ a corrective and preventive action (CAPA) system.

The relevant portion of the CQPA Body of Knowledge is excerpted at the beginning of each BoK section in the text as a guide for the reader.
Part I
Quality Concepts and Team Dynamics

Chapter 1 Professional Conduct and Ethics
Chapter 2 Quality Concepts
Chapter 3 Quality Audits
Chapter 4 Team Dynamics
Chapter 5 Training and Evaluation
Chapter 1 Professional Conduct and Ethics
Identify and apply behaviors that are aligned with the ASQ Code of Ethics. (Apply)

Body of Knowledge I.A
CODE OF ETHICS

Introduction

The purpose of the American Society for Quality (ASQ) Code of Ethics is to establish global standards of conduct and behavior for its members, certification holders, and anyone else who may represent or be perceived to represent ASQ. In addition to the code, all applicable ASQ policies and procedures should be followed. Violations of the Code of Ethics should be reported. Differences in work style or personalities should first be addressed directly with others before escalating to an ethics issue. The ASQ Professional Ethics and Qualifications Committee, appointed annually by the ASQ Board of Directors, is responsible for interpreting this code and applying it to specific situations, which may or may not be specifically called out in the text. Disciplinary actions will be commensurate with the seriousness of the offense and may include permanent revocation of certifications and/or expulsion from the society.
Fundamental Principles

ASQ requires its representatives to be honest and transparent. Avoid conflicts of interest and plagiarism. Do not harm others. Treat them with respect, dignity, and fairness. Be professional and socially responsible. Advance the role and perception of the Quality professional.
Expectations of a Quality Professional

A. Act with Integrity and Honesty

1. Strive to uphold and advance the integrity, honor, and dignity of the Quality profession.
2. Be truthful and transparent in all professional interactions and activities.
3. Execute professional responsibilities and make decisions in an objective, factual, and fully informed manner.
4. Accurately represent and do not mislead others regarding professional qualifications, including education, titles, affiliations, and certifications.
5. Offer services, provide advice, and undertake assignments only in your areas of competence, expertise, and training.

B. Demonstrate Responsibility, Respect, and Fairness

1. Hold paramount the safety, health, and welfare of individuals, the public, and the environment.
2. Avoid conduct that unjustly harms or threatens the reputation of the Society, its members, or the Quality profession.
3. Do not intentionally cause harm to others through words or deeds. Treat others fairly, courteously, with dignity, and without prejudice or discrimination.
4. Act and conduct business in a professional and socially responsible manner.
5. Allow diversity in the opinions and personal lives of others.

C. Safeguard Proprietary Information and Avoid Conflicts of Interest

1. Ensure the protection and integrity of confidential information.
2. Do not use confidential information for personal gain.
3. Fully disclose and avoid any real or perceived conflicts of interest that could reasonably impair objectivity or independence in the service of clients, customers, employers, or the Society.
4. Give credit where it is due.
5. Do not plagiarize. Do not use the intellectual property of others without permission. Document the permission as it is obtained.

As a Certified Quality Process Analyst (CQPA), you will be expected to perform your process analysis duties as a professional, which includes acting in an ethical manner. The ASQ Code of Ethics has been carefully drafted to serve as a guide for ethical behavior of its members and those individuals who hold certifications in its various disciplines. Specifically, the Code of Ethics requires that CQPAs are honest and impartial and serve the interests of their employers, clients, and the public with dedication. CQPAs should join with other quality process analysts to increase the competence and prestige of the profession. CQPAs should use their knowledge and skill for the advancement of human welfare and to promote the safety and reliability of products for public use. They should earnestly strive to aid the work of the
American Society for Quality in its efforts to advance the interests of the quality process analyst profession.

Performing your duties in an ethical manner is not always easy. Ethical choices are often not clear-cut, and the right course of action is not always obvious. The ASQ Code of Ethics is not meant to be the one final answer to all ethical issues you may encounter. In fact, the code has been revised and will continue to be revised from time to time to keep it current as new perspectives and challenging dilemmas arise.

To improve relations with the public, CQPAs should do whatever they can to promote the reliability and safety of all the products and services that fall within their jurisdiction. They should endeavor to extend public knowledge of the work of ASQ and its members as it relates to the public welfare. They should be dignified and modest in explaining quality process analysis work and its merit. CQPAs should preface any public statements they make by clearly indicating on whose behalf they are made.

An example of the CQPA's obligation to serve the public at large arose when a process analyst found that her manufacturing facility was disposing of hazardous waste materials in an illegal fashion. Naturally, she wrote up the violation and sent her report to the director of manufacturing. Her manager told her a few days later to "just forget about" the report she had written. This CQPA considered the incident a significant violation of her personal ethics, and she told her boss that she felt the larger public interest outweighed the company's practice in this case. Her arguments were persuasive, and eventually the company reversed its former practice of disposing of the hazardous materials in an illegal manner.

To advance relations with employers and clients, CQPAs should act in professional matters as faithful agents or trustees of each employer or client. They should inform each client or employer of any business connections, interests, and affiliations that might influence their judgment or impair the equitable character of their services. CQPAs should indicate to their employer or client the adverse consequences to be expected if their judgment is overruled. CQPAs should not disclose information concerning the business affairs or technical processes of any present or former employer or client without formal consent. CQPAs should not accept compensation from more than one party for the same service without the consent of all parties. For example, a CQPA who is employed should not engage in supplementary employment in consulting practice without first gaining the consent of the employer.

Another example of an ethical issue concerning loyalty to employers and clients occurred when a quality consultant was asked to give another client in a competing firm a short description of the first client's product. The information that the second company was requesting was readily available on the internet, but this CQPA felt that he should not be the source of information about a competitor's project since that information was obtained while engaged by the competitor.

To improve relations with peers, CQPAs should give credit for the work of others to those to whom it is due. CQPAs should endeavor to aid the professional development and advancement of those in their employ or under their supervision. Finally, CQPAs must not compete unfairly with others.
They should extend friendship and confidence to all associates and to those with whom they have business relations.
An example of a potential violation of the ASQ Code of Ethics as it relates to dealing with peers occurred when a consulting quality process analyst was asked to do a job that required more work than she could do on her own. She properly enlisted the support of a peer, a highly qualified and very senior Certified Quality Engineer (CQE). The CQE so impressed the client that they asked him to take the lead on the engagement. The CQE, however, chose not to violate the Code of Ethics. Instead, he discussed the situation openly with the CQPA and the client, and it was agreed to keep the leadership for this engagement in the CQPA’s domain.
Chapter 2 Quality Concepts
QUALITY

Describe how using quality techniques to improve processes, products, and services can benefit all parts of an organization. Describe what quality means to various stakeholders (e.g., employees, organization, customers, suppliers, community) and how each can benefit from quality. (Understand)

Body of Knowledge I.B.1
Definitions of Quality

In this section, we will first discuss the concept of a system, and a value chain, which provides a high-level perspective of the value provided to customers from a manufacturing and a service organization. This will provide the foundation and context for how quality is defined from the perspective of the various customers and stakeholders in the organization.
System Definition

Dr. W. E. Deming, the quality guru, defined a system as:

an interconnected complex of functionally related components, divisions, teams, platforms, whatever you call them, that work together to try to accomplish the aim of the system. A system must have an aim. Without an aim there is no system. The aim of the system must be clear to everyone in the system. The aim includes plans for the future. (Deming and Orsini 2013, Chapter 5)¹

In order to achieve quality of a product or service, the system, with its processes, people, and technology, must be well understood. A way to understand a system is to model it from the perspective of a value chain.
Value System and Value Chains

The value chain is a chain of activities that provide value to your customer and enable competitive advantage. The concept of a value chain was first described and popularized by Michael Porter (Porter 1985).² The industrywide synchronized interactions of those local value chains create an extended value chain, sometimes global in extent. Porter terms this larger interconnected system of value chains the "value system." A value system includes the value chains of a firm's supplier (and their suppliers, etc.), the firm itself, the firm's distribution channels, and the firm's buyers (and presumably extends to the buyers of their products, and so on).

Value systems can consist of value activities, also called primary activities, and support activities. Value activities are the key critical activities that provide value to the customer and that can differentiate an organization from its competitors. The support activities are necessary but do not provide core value to customers.

Porter's value chains include the following primary activities for a manufacturing organization:

• Inbound logistics: includes receiving the materials, storing and managing inventory, and scheduling transportation
• Operations: all of the operational processes required to make the product
• Outbound logistics: the distribution activities to ship the finished product, including warehousing, order management, transportation, and distribution
• Marketing and sales: marketing and sales of the product
• Service: customer service, repair, installation, and training

The support activities in Porter's model include these activities:

• Procurement: purchasing raw materials and components
• Technology development: research and development, and product design
• Human resource management: recruiting, development, retention, and employee compensation activities
• Firm infrastructure: includes all of the managerial support functions, including legal, accounting, finance, quality management, strategic planning, and so on

For a service value system, we define the concept somewhat differently than Porter's model. We define a value system as an aggregation of the value chains that provide value to the customers. The value chains are the sets of activities that provide a specific service delivery or product line to the customer. In a manufacturing organization, the majority of the product lines would use the same activities and value chains to create the product, so multiple sets of value chains would not be necessary. For a service organization, the activities that are part of the value chain can vary across service lines more than a manufacturing organization's value chains do. By defining our value system differently, we enable our model to be used for both a manufacturing and a service organization.³
To further illustrate the distinction, a healthcare organization's value system could consist of the following value chains for its service delivery value activities:

• Provide emergency services
• Provide inpatient services
• Provide outpatient services
• Provide surgical services
• Provide women's services

Some examples of support value chains that support the clinical service delivery systems include:

• Provide supply chain services: activities that provide supplies, equipment, and instruments to the organization for clinical and nonclinical services
• Provide human resources services: activities that support the employees, including recruiting, training and development, career and succession planning, performance evaluation, reward, and recognition
• Provide infrastructure services: includes all managerial support functions such as legal, accounting, finance, quality management, strategic planning, and more

As an example, the highest-level business functions of the emergency services value chain are shown in Figure 2.1.
[Figure 2.1 Emergency services value chain. Source: Created by Sandra L. Furterer.⁴ The figure shows the emergency services value activities (ED Patient Arrival, ED Patient Triage, ED Patient Treatment, Patient Testing and Diagnosis, Patient Transportation, ED Patient Disposition, ED Patient Discharge) and the emergency services support activities (Patient Authorization, Patient Billing, ED Patient Admission, Patient Placement, ED Patient Documentation, Patient Flow Management, ED Patient Registration, Patient Transfer).]
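To make the value chain structure concrete in software terms, the following is a minimal Python sketch of our own; it is not from the handbook, and the ValueChain class and its field names are hypothetical. It records the emergency services value chain of Figure 2.1 as value (primary) activities plus support activities:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ValueChain:
    """One value chain in a value system: value (primary) activities
    deliver the service; support activities enable them."""
    name: str
    value_activities: List[str] = field(default_factory=list)
    support_activities: List[str] = field(default_factory=list)

# Emergency services value chain, populated from Figure 2.1
emergency = ValueChain(
    name="Provide emergency services",
    value_activities=[
        "ED Patient Arrival", "ED Patient Triage", "ED Patient Treatment",
        "Patient Testing and Diagnosis", "Patient Transportation",
        "ED Patient Disposition", "ED Patient Discharge",
    ],
    support_activities=[
        "Patient Authorization", "Patient Billing", "ED Patient Admission",
        "Patient Placement", "ED Patient Documentation",
        "Patient Flow Management", "ED Patient Registration",
        "Patient Transfer",
    ],
)

# A service value system is an aggregation of such value chains;
# a hospital would add inpatient, outpatient, surgical, and women's
# services alongside the emergency chain.
value_system: List[ValueChain] = [emergency]
```

The design choice mirrors the text: the value/support split is explicit in the type, so an analyst can ask which activities differentiate the organization and which merely enable it.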
SIPOC

There are many different definitions of quality, especially as defined by different stakeholders within the system. The SIPOC (suppliers–inputs–process–outputs–customers) tool helps us to define the customers and stakeholders involved in performing a process. The five to seven high-level process steps correspond roughly to the high-level activities in the value chain model. The suppliers in the SIPOC model are stakeholders (they can also be customers) that provide an input into the process, and the customers are also stakeholders that receive an output from the process. These suppliers and customers are the stakeholders that have a stake in the process and can be responsible for some level of quality within their process or receive a quality product or service as an external customer. An example of the SIPOC for the emergency services process is shown in Figure 2.2.
Figure 2.2 Emergency services SIPOC. Source: Created by Sandra L. Furterer.
The SIPOC explains the customer/supplier relationship in the process and identifies where the process begins and ends, as well as the activities included within the scope of the process to be improved. The high-level process steps are: triage patient, register patient, treat patient, test/diagnose patient, disposition patient, transport patient, and discharge or admit patient. For each step, the figure lists suppliers (e.g., patient, EMS, physicians, other facilities), inputs (e.g., request for ED care, referrals, patient information, acuity level, triage decision, physician orders), outputs (e.g., acuity level, triage decision, registered patient, test results, diagnosis, patient medical record, bill), and customers (e.g., patient, medical staff, ED physicians, admitting and consulting physicians, ancillary staff, insurance companies, Medicare/Medicaid).
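Because a SIPOC is a five-column table keyed by process step, it can be captured in a simple structure. The following is a minimal sketch in Python using the first two rows of Figure 2.2 (the row groupings are one reading of the figure, not part of the handbook):

```python
from dataclasses import dataclass

@dataclass
class SIPOCRow:
    """One process step with its suppliers, inputs, outputs, and customers."""
    suppliers: list
    inputs: list
    process_step: str
    outputs: list
    customers: list

sipoc = [
    SIPOCRow(["Patient", "EMS", "Physicians", "Other facilities"],
             ["Request for ED care", "Referrals", "Patient information"],
             "Triage patient",
             ["Acuity level", "Triage decision"],
             ["Medical staff", "Patient"]),
    SIPOCRow(["Patient", "Medical staff"],
             ["Patient information", "Acuity level", "Triage decision"],
             "Register patient",
             ["Registered patient"],
             ["Registration", "Medical staff", "ED physicians"]),
]

# Stakeholders are anyone who appears as a supplier or a customer of any step.
stakeholders = {s for row in sipoc for s in row.suppliers + row.customers}
print(sorted(stakeholders))
```

Deriving the stakeholder set from the table this way reinforces the point above: suppliers and customers are exactly the parties with a stake in the process.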
Stakeholder Analysis
The stakeholder analysis is a tool that can be used to define the people and roles that are impacted by the process. The stakeholder analysis includes a listing of the stakeholders, the type of stakeholder, the primary roles that have a stake in the process, potential impacts/concerns, and receptivity (see Figure 2.3).5
Figure 2.3 Emergency services stakeholder analysis. Source: Created by Sandra L. Furterer.
The figure identifies the internal and external customer roles and concerns to ensure their expectations are addressed, defining for each stakeholder the type, primary role, potential impacts/concerns, and initial and future receptivity:
• ED patients (primary): Patients who go through the Emergency Department, including patients who are seen and discharged, admitted, or who leave without being seen. Concerns: quality of care, low waiting time, patient satisfaction. Receptivity: neutral (initial), neutral (future).
• ED physicians (primary): Physicians who provide care for ED patients. Concerns: efficient processes, patient satisfaction, patient throughput, patient capacity. Receptivity: strongly support, strongly support.
• Medical staff (primary): Nurses, technicians, transport, lab, radiology, pharmacy, and other inpatient areas who provide care for ED patients. Concerns: efficient processes, patient satisfaction. Receptivity: moderately support, strongly support.
• Administration (primary): Administration of the hospital. Concerns: efficient use of resources, patient satisfaction, patient throughput. Receptivity: strongly support, strongly support.
• EMS (secondary): Emergency Medical Services who transport patients to the ED from outside the hospital. Concerns: quality of care, low waiting time, patient satisfaction. Receptivity: neutral, neutral.
• Registration (primary): Register the patient. Concerns: correct registration, accurate billing. Receptivity: moderately support, strongly support.
• Admitting physicians and consultants (secondary): Physicians and specialty consultants who admit patients to the hospital or provide care for the patients who go through the ED. Concerns: patient satisfaction, quality of care. Receptivity: neutral, strongly support.
• Regulatory agencies (secondary): Regulatory agencies who define regulatory criteria. Concerns: quality of care, revenue integrity. Receptivity: neutral, neutral.
• Ancillary support (secondary): Support staff, environmental services, central supply, dietary, insurance, payors. Concerns: efficient processes, patient satisfaction, patient throughput. Receptivity: moderately support, strongly support.
Stakeholder (Group Name): The stakeholder group name describes the function that the stakeholder group holds related to the process. The stakeholder name should never be a specific person but rather the function that they perform. People change jobs or functions, but the function usually persists.
Type: If there are many stakeholders, they can be categorized as primary or secondary types. Primary stakeholders are those who are directly impacted by the system being developed and may be operating the system or benefit directly from the system. Secondary stakeholders are those who are impacted by the system but not as directly as the primary stakeholders.
Primary Role: The role of each stakeholder group with respect to the process is described.
Potential Impacts/Concerns: The project sponsor, team lead, or systems engineer should interview stakeholder representatives to understand their primary concerns regarding the process and/or how they are impacted by the process or system.
Initial and Future Receptivity: The project sponsor and team lead should assess how receptive the stakeholders are to changes in the process. They should also assess how receptive the stakeholders need to be when the process or system is deployed or revised for use. Receptivity is typically assessed on a rating scale: strongly support, moderately support, neutral, moderately against, and strongly against.
How do the project sponsor and the team leader assess the stakeholders' receptivity to the project? Assessing receptivity can be done in several ways: (1) talk to the stakeholders to assess their receptivity, (2) monitor body language and conversations of the stakeholders within and outside of meetings, and (3) talk with other people who know the stakeholders.
Supportive and engaged stakeholders, team members, and leaders:
• attend meetings and are prompt and engaged
• have a positive outlook and willingness to participate
• exude genuine interest in the project and system
• share ideas in meetings and brainstorming sessions
• perform tasks outside of meetings without being prompted
• offer management perspective to help the team with improvement ideas
Unsupportive and unengaged stakeholders, team members, and leaders:
• complain within and outside of meetings
• have a confrontational tone in meetings
• discourage most ideas
• may not provide ideas in meetings and brainstorming sessions
• do not encourage (or even frown upon) out-of-the-box thinking
Some examples of unsupportive body language include crossed arms, eye rolling, constant phone use, holding the head in hands, or putting the hands out in front of the body as if to stop someone from advancing.
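Since receptivity is an ordinal scale, the gap between initial and needed future receptivity can be computed once the ratings are mapped to numbers. Here is a minimal sketch (the numeric scoring is an assumption for illustration, not part of the handbook), using ratings from Figure 2.3:

```python
# Map the rating scale to integers so receptivity gaps can be compared.
SCALE = {
    "strongly against": -2,
    "moderately against": -1,
    "neutral": 0,
    "moderately support": 1,
    "strongly support": 2,
}

# (initial receptivity, needed future receptivity) per stakeholder group
stakeholders = {
    "ED patients": ("neutral", "neutral"),
    "Medical staff": ("moderately support", "strongly support"),
    "Registration": ("moderately support", "strongly support"),
}

for name, (initial, future) in stakeholders.items():
    gap = SCALE[future] - SCALE[initial]
    if gap > 0:
        print(f"{name}: engagement plan needed (gap of {gap})")
```

A positive gap simply flags where the sponsor and team lead should invest in the engagement activities described above; the numbers carry no meaning beyond ordering.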
Quality Definitions
Quality can be defined based upon the impacts or concerns identified from the voice of the customer data collection as the stakeholder analysis is performed. James Evans and William Lindsay define quality based upon the view of the stakeholders within the system. They have six quality definitions based upon these different perspectives:6
• Transcendent: Quality transcends or rises above normal or average quality. This definition represents excellence in quality. It is not a very useful definition because it is difficult to measure, owing to the subjective and judgmental nature of the perspective. Examples of products defined as excellent in a transcendent manner are a Lexus automobile or an Apple iPhone.
• Product: This definition relates quality to a quantity of product or service attributes. An example could be a coffee maker that has a timer to automatically brew your coffee, with the ability to make espresso and a latte all with the same machine. The more attributes or features, the supposedly higher the quality. This may not always be true, though. The author likes brewed tea and once bought a very fancy tea maker with a temperature gauge, timed heating, and a glass carafe. The controls and buttons to start, set, and turn the unit on and off were so complicated that it hardly constituted a high-quality product, and the glass carafe was easily chipped. She quickly returned to her tried-and-true (and much less expensive) stainless steel percolator, minus the fancy features.
• User or Fitness for Intended Use: This definition, advocated by the quality guru Joseph Juran, defines quality based upon how well the product or service achieves the user's intended use or functionality. One user may want an athletic watch with many features to track multiple types of activities, such as walking, running, swimming, biking, and more, while another user may just want an old-fashioned watch that tells them the date and time. Quality is defined by the functions that the user wants in the product.
• Value: The value definition defines quality based upon the trade-off between price and the benefits that the product or service provides. An example is the difference in expectations between the quality of the food at a fast-food restaurant—getting lower quality, although it may be consistently lower quality, for a lower price—compared to a high-end restaurant, which is more expensive, includes lovely ambience, and offers much higher-quality food. The user determines the trade-off between price and quality, or value, for the product or service that they choose.
• Manufacturing: The manufacturing definition, or conformance to specifications, defines quality by how consistently the product is made to meet the specifications derived from the customers' needs or requirements. The quality of a car door can be measured by achieving no defects on the door when produced.
The quality of a service can also be defined based on meeting certain specifications, such as being in an emergency room for less than three hours while receiving high-quality medical care.
• Customer: This definition defines quality based upon the ability to meet or exceed customers' expectations. The customers in this definition can consist of both internal and external customers or consumers of the final product or service. The value chain model and the SIPOC tools help us to define the chain of customers throughout the system of activities, departments, suppliers, and customers involved in making a product or delivering a service. The voice of the customer data collection helps us to understand the customers' needs in each step of the process, which can help the organization ensure that these needs are met. An example is a mortgage department providing a loan to a consumer purchasing a home. Ultimately, the customer wants to close a loan within 30 days or less, with minimal documentation and a smooth process. Each department can have requirements for the prior department to perform their work in a timely and quality manner (without errors).
The definition used depends upon the stakeholders and the activity being performed in the value chain. The customers may typically measure quality from a transcendent, product, or customer perspective. The customer needs are commonly translated by the marketing department, which defines quality based upon the user perspective. The design engineers apply the value perspective to make trade-offs between the costs of materials and meeting desired customer needs. They translate the customers' requirements into technical or design requirements. The manufacturing and industrial engineers set up manufacturing processes to meet the manufacturing specifications and measure quality based on the manufacturing perspective.
In our healthcare emergency services example, the patient may use the transcendent, product, or customer perspective to define quality, completing a patient satisfaction survey based upon their perception of excellence or how their experience in the emergency department transcended their expectations. They also may have selected the hospital's emergency room based upon the product attributes or specialty services that the emergency room is known to provide, such as having a stroke center, a heart center, a trauma center, and the like. They may also assess quality from a customer perspective: that the emergency room met their needs by providing emergency services in a timely manner. The user perspective can be used by marketing to market the hospital, providing information on how the hospital has state-of-the-art technology, the latest MRI or CT machines, or laser technology used to provide surgical services. There probably is not a great deal of the value definition applied in a hospital setting, since price transparency comparing like procedures is still not advanced enough to give the patient the ability to compare from hospital to hospital. However, at some point in the future, as data analytics progresses, it could be possible for the patient to apply a value definition, making a trade-off decision between price and the quality of the patient experience. The manufacturing definition, or conformance to specifications, can be used within the processes for each major
activity in the value chain, assessed against targets such as: the patient will see a healthcare professional within five minutes of arrival; they will be triaged within seven minutes of arrival; or their total length of stay will be under three hours for discharged patients and five hours for patients who are admitted to the hospital. The different quality definitions can effectively be used based upon the different stakeholders and the value chain activity being performed.
Benefits of Quality
Quality has many benefits for any organization. A number of research studies have shown that quality can provide a competitive advantage and enhanced business results. A study by PIMS Inc. found that there are nine major strategic drivers of profitability and cash flow. One of these is customer preference for the product or service offered, which aligns to the customer-focused definition of quality:
Customer preference for the products and/or services offered: The specifying customers' preference for the non-price attributes of the business's product/service package, compared to those of competitors, has a generally favourable impact on all measures of performance.7
In another study, the researchers compared the Malcolm Baldrige National Quality Award winners to the performance of the S&P 500 as a whole. The Baldrige winners outperformed the S&P 500 overall. Beyond the financial results, the drive for achieving high quality can transform the culture on its journey to performance excellence.8
Quality can have a positive impact on product or service design as well as improved quality of conformance. Improved quality of design can lead to higher perceived value in the marketplace, leading to increased market share and higher prices. The higher prices can lead to increased revenues and higher profitability. Improved quality of conformance can lead to lower manufacturing and service costs and subsequently higher profitability.9 These relationships are illustrated in Figure 2.4.
The pursuit of high quality can also enhance the employees' experience with the organization. It can have a positive impact on the culture and on the ability of the employees to do their jobs, through documented processes and procedures, leading to less frustration and fewer errors. Higher employee satisfaction can lead to lower turnover rates for these employees.10
Quality of products and services can enhance the organization.11 Deming's chain reaction supports this point. Improved quality can lead to decreased costs due to less rework and scrap, fewer mistakes and delays, and better use of resources. Decreased costs can improve productivity, which can help to capture higher market share due to better quality and lower prices. Higher market share enhances the ability of the organization to stay in business, enabling the organization to provide more jobs.12 Figure 2.5 illustrates these relationships.
Customers are critical to any organization. An important part of developing a quality system is to define methods for understanding the customers' needs, or listening to the voice of the customer (VOC): eliciting requirements, listening to the customer by monitoring and resolving customer complaints, and improving customer satisfaction and—perhaps most importantly—customer loyalty.
Figure 2.4 Quality impact. Source: Adapted from Evans and Lindsay.12 The figure shows improved design quality leading to higher perceived customer value, higher prices, and increased market share, which lead to increased revenues; improved conformance quality reduces manufacturing and service costs; both paths lead to higher profitability.
Figure 2.5 Deming's chain reaction of quality. Source: Adapted from Evans and Lindsay.12 The chain runs: improve quality → decrease costs → improve productivity → improve market share → stay in business → create jobs and more jobs.
Having a customer for a lifetime provides a huge benefit; it is much easier to keep a customer than to gain new ones.
Suppliers are important stakeholders to the organization. Selecting and certifying suppliers based on quality, not just price, is an important part of ensuring the quality of the product or service. Partnering with suppliers to seamlessly connect the needs of the customer with the supplied parts or services is also critical. Where would a hospital be if it didn't seamlessly design and provide services with its partner physicians, vendors, and other suppliers?
Finally, there are the community benefits from the quality orientation of an organization. Employees, organizations, customers, and suppliers are taxpayers who contribute to the communities in which they live and work, enhancing the viability of these communities.13
QUALITY PLANNING Define a quality plan, describe its purpose for the organization as a whole, and know who has responsibility for contributing to its development. (Understand) Body of Knowledge I.B.2
To get things done, an organization needs a plan. Plans may be very simple, with merely a goal to be reached and a list of the steps necessary to achieve the goal. But effective organizational quality plans have many of the following elements:
1. Defined responsibilities of all the people associated with the quality plan
2. Alignment to the quality policy
3. Clear goals or objectives
4. Logical steps to reach the goals or objectives
5. Identification of necessary resources
6. An implementation approach:
   a. Deployment of the goals or objectives
   b. Conveying priorities and rationale to stakeholders and performers
   c. Measures of progress
   d. Milestones (checkpoints in the schedule to check progress)
7. Measures of success
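To make the element list concrete, the sketch below (field names are illustrative, not from the handbook) captures a quality plan as a single structure so that a missing element is easy to spot:

```python
from dataclasses import dataclass, field

@dataclass
class QualityPlan:
    """Sketch of the quality plan elements listed above."""
    responsibilities: dict   # role -> duties for everyone on the plan
    policy_alignment: str    # how the plan aligns to the quality policy
    goals: list              # clear goals or objectives
    steps: list              # logical steps to reach the goals
    resources: list          # people, equipment, materials, facilities
    deployment: str          # how goals, priorities, and rationale are conveyed
    progress_measures: list  # measures of progress
    milestones: list         # schedule checkpoints
    success_measures: list = field(default_factory=list)

plan = QualityPlan(
    responsibilities={"quality council": "set objectives and report quality cost"},
    policy_alignment="supports the commitment to continual improvement",
    goals=["90% of customers rate us 5 or 6 by January 2024"],
    steps=["baseline survey", "address top complaint drivers", "re-survey"],
    resources=["survey tool", "improvement team"],
    deployment="goals assigned to division leads with quarterly reviews",
    progress_measures=["monthly satisfaction score"],
    milestones=["Q1 baseline complete", "Q3 midpoint review"],
    success_measures=["on-time delivery", "customers' requirements met"],
)
print(f"Plan has {len(plan.goals)} goal(s) and {len(plan.milestones)} milestone(s)")
```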
Responsibilities
Top-level management in an organization is responsible for establishing the firm's commitment to quality. It is this commitment from top management that constitutes total quality management. Armand Feigenbaum14 defined total quality control as follows:
Total quality control's organization-wide impact involves the managerial and technical implementation of customer-oriented quality activities as a prime responsibility of general management and of the mainline operations of marketing, engineering, production, industrial relations, finance, and service as well as of the quality control function itself.
Kaoru Ishikawa defined companywide quality control (CWQC) this way:15
To practice quality control is to develop, design, produce, and service a quality product which is most economical, most useful, and always satisfactory to the customer.
Juran called the function or department responsible for quality the Quality and Excellence Office, which could also be known as the Quality or Quality Assurance Department. In highly regulated industries, such as medical device or pharmaceutical companies, it can be referred to as Quality Assurance/Regulatory Assurance.16
In addition to top management endorsing its commitment to quality and having a separate quality department, most organizations appoint a group to develop guidelines, measure progress, and assist in implementing the quality objectives. A cross-functional team of members representing the key functions in an organization to help assure quality is often referred to as a quality council. Many organizations call it by different names or merely include the quality council function in the roles of the executive committee. The council acts as a steering committee for the corporate quality initiative. In addition to ensuring that the quality initiative is implemented, the quality council must incorporate total quality strategies into the strategic business plan. Some of the specific responsibilities of the quality council may include the following:17
• Developing an educational module: The modules may be different in content and duration for different organizational levels.
• Defining quality objectives for each division of the organization: This encompasses what improvement methods to employ, who is responsible, and what measures will be used.
• Developing and helping implement the company improvement strategy: Roadblocks and system problems must be resolved, and the quality council is in a unique position to influence these decisions.
• Determining and reporting quality cost: There is no better way to encourage quality improvement than to show that poor quality costs a great deal. Evaluating the dollar value associated with poor quality in terms of prevention, appraisal, and failure costs helps management to measure quality using an easily understood denominator.
• Developing and maintaining an awareness program: The awareness program should be built slowly. It should not be a surge of slogans, posters, and banners that will subsequently fade away. Continuous emphasis on excellence, sharing of success stories, recognition, and setting standards are all part of this process.
Problems identified by the quality council provide opportunities for improvement. For a condition to qualify as a problem, it must meet the following three criteria:
1. Performance varies from an established standard.
2. There is a deviation between the perception and the facts.
3. The cause is unknown; if we know the cause, there is no problem.
Identifying problems for improvement is not difficult, as there often are many more than can be analyzed. The quality council or work group must prioritize them using the following selection criteria:
1. Is the problem important and not superficial? Why?
2. Will the problem's solution contribute to the attainment of goals?
3. Can the problem be defined clearly using objective measures?
Whenever possible, the planning process should include the people who will ultimately implement the plan. There are two reasons for this:
1. The people who will ultimately implement the plan probably have more realistic expectations of what is feasible.
2. People who feel that they participated in planning the work to be done are often more committed to performing it later.
The focus of this perspective is for organizations to continually improve their products, services, processes, and practices, with an emphasis on reducing variation and reducing cycle time. This approach implies extensive use of the quality management tools, including cost of quality, process management approaches, and measurement techniques.
Aligned to Quality Policy
A quality policy provides a guide to managerial action related to quality. It is important to define a clear and concise quality policy that serves as a guide to the quality system. The quality policy statement should be short and concise, and it should align to the organization's strategic direction and core values. According to the ISO 9001:2015 standard, top management shall establish, implement, and maintain a quality policy that:
(a) Is appropriate to the purpose and context of the organization and supports its strategic direction.
(b) Provides a framework for setting quality objectives.
(c) Includes a commitment to satisfy applicable requirements.
(d) Includes a commitment to continual improvement of the quality management system.18
Set Goals/Objectives
Quality planning always begins with establishing what needs to be accomplished or what products and services need to be provided. This is often not an easy task, since the various stakeholders in any process often have significantly different perspectives on what needs to be accomplished. Therefore, it is almost always necessary to negotiate the goals that will be the objectives of the quality project or effort.
Quality goals should be set for the organization and for each quality project to be performed. The quality council may establish these for the organization, but corporate quality goals can often be derived from the corporation's strategic mission, which is influenced by customers and top management. The quality goals must be consistent with the organization's mission and should be included in the corporate strategic plan. Quality goals, like all strategic goals, should be SMART:
S = Specific
M = Measurable
A = Achievable
R = Realistic
T = Time-bound or timely
To be specific, the goal must identify who is responsible, what must be accomplished, where the goal is to be achieved, and when the goal is to be accomplished. To be measurable, the goal must be quantified. Goals that are not quantified are not measurable and are not SMART. The goal must be achievable: it must be possible to reach the goal, at least in theory. If the goal is not feasible, it is not reasonable to expect people to try to achieve it. The goal should stretch the organization slightly so that everyone reaches for higher levels of performance, but it must not be beyond the organization's ability to reach. Breaking difficult long-term goals into simpler, attainable steps so the managers and workforce believe that what they are doing is possible is a good management practice. To be realistic, a goal must represent an objective that is both important and feasible, given the capabilities of the organization. In establishing goals, it is necessary to consider the abilities of the individuals who will achieve them and the capability of the organization's processes. If the organization does not have the skills, materials, tools, training, processes, and knowledge to achieve the goal, it is unrealistic to expect that the organization can achieve it. For a goal to be timely, it should be accomplished in a timely manner, with a due date or time frame assigned for each goal/objective. This drives a sense of urgency and ensures achievement of goals and objectives.
Here is an example of a customer satisfaction goal that is specific, measurable, achievable, realistic, and timely: To ensure that we are satisfying our customers, 90% of our customers will give us 5s or 6s on our customer satisfaction survey by January 2024.
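Because the example goal is quantified, checking it against survey data is simple arithmetic. A minimal sketch (the survey responses are made up for illustration):

```python
# Hypothetical survey responses on a 1-6 scale.
ratings = [6, 5, 4, 6, 5, 5, 6, 3, 5, 6]

# Count the "top box" responses (5s and 6s) and compare to the 90% target.
top_box = sum(1 for r in ratings if r >= 5)
pct = 100 * top_box / len(ratings)
print(f"{pct:.0f}% gave a 5 or 6; goal met: {pct >= 90}")
```

With these made-up numbers, 8 of 10 customers (80%) gave a 5 or 6, so the goal is not yet met; the same quantification is what makes the goal measurable in the first place.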
According to Juran,19 total quality control should be a structured process and should include the following:
• A quality council
• Quality policies
• Strategic quality goals
• Deployment of quality goals
• Resources for control
• Measurement of performance
• Quality audits
Quality goals must be linked to product or service satisfaction, customer satisfaction, or cost of quality.
Logical Steps to Reaching the Goals
Stated goals alone are not sufficient for a plan. The plan must provide a road map for reaching the goals. The job must be broken into specific tasks that, when performed in sequence, will result in the goals being met. The project management tools and practices described in Chapter 9 of this handbook will aid in breaking the quality initiative into specific, interrelated tasks.
Resources
All the personnel, equipment, materials, facilities, and processes necessary to perform the tasks must be identified in the plan. The company's total quality processes must include the means for identifying and procuring the resources necessary. Anyone who is involved with the quality programs should be identified in the corporate quality plan. Individuals must be assigned specific tasks to plan, implement, measure, and improve the quality program in the organization. It is imperative that those assigned specific responsibilities be held accountable for performing the assigned tasks. Holding individuals responsible for their actions is the role of the individual's supervisor or manager, though this process may be monitored by the quality council and senior management.
An Implementation Approach
For a plan to be of any real value to an organization, it is essential that the plan be implemented. "Deployment of the plan" means informing the individuals who will execute the plan of their responsibilities and holding them accountable for implementing the plan. Plan deployment is necessary, or the plan is merely a document that has no impact on the organization's activities.
Measures of Success
Finally, for a plan to be effective, it must include measures of success. The success of corporate quality planning can include these measures:
• Are products or services delivered on time?
• Are the costs of developing/delivering the products or services less than the price charged?
• Are any of the products produced unsold (overproduction)?
• Are the customers' requirements and expectations met?
• Are the customers satisfied with the service provided?
• Was documentation provided (if expected)?
QUALITY STANDARDS, REQUIREMENTS, AND SPECIFICATIONS Define and distinguish between national or international standards, customer requirements, and product or process specifications. (Understand) Body of Knowledge I.B.3
The quality assurance (QA) function may include design, development, production, installation, servicing, and documentation. The phrases "fit for use" and "do it right the first time" arose from the quality movement in Asia and the United States. The quality goal is to assure quality at every step of a process, from raw materials, assemblies, products and components, services related to production, and management, production, and inspection processes through delivery to the customer, including a feedback loop at each step for continuous improvement. For a service delivery process, the steps include service process design, deployment and delivery of the services, and providing feedback for continuous improvement. One of the most widely used paradigms for QA management is the plan–do–check–act (PDCA) cycle, also known as the Shewhart cycle. The main goal of QA is to ensure that the product fulfills or exceeds customer expectations, also known as customer requirements. A companywide quality approach places an emphasis on four aspects:
• Infrastructure (as it enhances or limits functionality)
• Elements (e.g., controls, job management, adequate processes, performance and integrity criteria, and identification of records)
• Competence (e.g., knowledge, skills, experience, and qualifications)
• Soft elements (e.g., personnel integrity, confidence, organizational culture, motivation, team spirit, and quality relationships)
The quality of the outputs is at risk if any of these four aspects are deficient in any way. In manufacturing and construction activities, these business practices can be equated to the models for quality assurance defined by the international standards contained in the ISO 9000 series for quality systems.
From the onset of the Industrial Revolution through World War II, companies' quality systems relied primarily on shop floor inspection or final lot inspection. This quality philosophy lacked a proactive approach and held the potential that if a product made it to final lot inspection and defects were found, the entire lot was rejected, sometimes at considerable cost and waste of personnel and material resources. The quality problem was not addressed and corrected during manufacture. With the pressures of raw material costs and limited resources, most businesses adopted the more recent quality philosophy of stopping the addition of value to a product or activity as soon as poor quality is observed, and of implementing check steps along the process to observe and correct problems as early as possible. This led to the quality assurance, or total quality control, philosophy.
The key to any process, product, or service is to understand and provide what the customer wants. There are three major process, product, or service characteristics:
• Reliability
• Maintainability
• Safety
Various industries have lost market share due to a failure to adequately understand customer requirements and a failure to translate those requirements into the technical requirements or specifications that a company would need in order to satisfy the customer requirements. Specifications are typically an engineering function and are critical in the product design, planning, and realization phases. In addition to physical, measurable specifications, there may be other customer expectations, such as appearance. This may be included as a quality requirement that the quality function would establish, document, and implement. For instance, parameters for a pressure vessel must include not only the material and dimensions but also operating, environmental, safety, reliability, and maintainability requirements. Moreover, external standards outside of the company and the customer may dictate further requirements that the product must meet. These may be published federal environmental standards or statistical sampling standards, such as ASQ Z1.4-2003, or performance and testing method standards published by a professional association such as the American Society for Testing and Materials (ASTM).20
QUALITY DOCUMENTATION Identify and describe common elements of various document control systems, including configuration management, and describe the relationship between quality manuals, procedures, and work instructions. (Understand) Body of Knowledge I.B.4
To the quality professional—and other business support personnel—documentation systems are essential although often underappreciated tools. There are three purposes for a documentation system in an organization:
1. Guide individuals in the performance of their duties
2. Standardize the work processes throughout the organization
3. Provide a source of evidence regarding practices
New employees, people recently reassigned to fulfill new responsibilities, and candidates for appointment to new jobs need a reference for the policies, materials, tools, resources, and practices necessary to do their new job. Documentation serves as the basis for training new employees in both formal training settings and on-the-job training. Documentation also provides more-seasoned individuals a reference for the work they are doing.
For an organization to function effectively, the practices of all of the individuals must be standardized so that others will know what to expect from each person. Failure to standardize is one of the most prevalent reasons for confusion and friction in processes involving multiple departments, functions, or teams. As will be stressed in the discussion of configuration management, the documentation system must permit changes so that improvements can be made and the unique needs of specific customers can be served, but it must standardize the common elements of the work being done by everyone.
Finally, the documentation system serves as a source of evidence for practices in which the organization is engaged. Documents must be available to support appropriate responses under the following conditions:
1. When legal or regulatory challenges are made
2. When marketing and sales require proof of the organization's capabilities
3. When senior managers and project managers need to plan and estimate future performance
4. When quality personnel need to spot opportunities for improvement, sources of waste, and problems
Nearly all quality certification programs require the organization being certified to demonstrate that it employs a documentation system to control its documents.
For example, ISO 9001:201521 criteria include a requirement that registered organizations possess documented information. The extent of the documented information depends upon the organization and the following factors:21
• Size of the organization, type of activities, processes, products, and services
• Complexity and interactions of the processes
• Competence of personnel
The ISO 9001:2015 requirements address the control of documented information as follows:22
7.5.3 Control of documented information
7.5.3.1 Documented information required by the quality management system and by this International Standard shall be controlled to ensure:
a) it is available and suitable for use, where and when it is needed; and
b) it is adequately protected (e.g., from loss of confidentiality, improper use, or loss of integrity).
The ISO 9001:2015 standard changed the terminology related to documentation. ISO 9001:2008 used the term "records," whereas the ISO 9001:2015 standard uses the term "retained documented information." The 2008 standard refers to documentation, a quality manual, documented procedures, and records, whereas the 2015 standard instead uses the term "documented information." There is no blanket requirement that information be documented; it is up to the organization to decide which information needs to be documented.23
Need for a Flexible and Current Documentation System
No documentation system is perfect. While every effort must be made to ensure that the documents in the system are complete, the manner in which jobs are performed changes over time, and the documents describing the work are seldom 100% up to date. Accordingly, the documentation system must be flexible. When individuals retrieve a document and discover that it is out of date, they will be reluctant to retrieve documents from the documentation system in the future. It is essential that the documents in the system are kept current even though it is often an expensive and difficult activity.
Library
Every organization needs to have a library of documents necessary for the organization to perform its functions. ISO 9001:2015 has several documentation requirements, including documentation of the quality management system. Items typically contained in the quality manual or in the organization's library may include the following, though not all are necessarily required:24
• Policies and procedures
• Work instructions
• Contracts
• Design inputs
• Process details
• Engineering changes
• Final inspection data
• Warranty changes
• Legal agreements
• History of defects
• Disposition of customer complaints
• Warranties
• Document descriptions and blueprints
• Histories of supplier performance
There is an expense associated with archiving and maintaining the library, and that cost must be borne by the organization. Companies that have tried to eliminate the costs associated with the documentation system have often discovered that this was a serious mistake when they were later unable to respond to an environmental agency challenge or were sued and unable to produce evidence of their practices in court.
Configuration Management
Organizations that produce products (hardware or software) control their documents with a configuration management (CM) program. Examples of organizations that produce products include
• a manufacturing firm that produces electronic components,
• a software firm that makes and sells computer games, and
• a consulting firm that produces training materials for courses to prepare candidates for ASQ certifications.
The organization's documentation system must be flexible to permit changes to be made to documents when changes are made to products or their design. To allow documents to be modified in a systematic and intelligent fashion, most product-producing organizations adopt configuration management. The CM system ensures that all the documentation describing the product reflects all of the changes that have been made to it during design, development, or installation.
For example, if an electronic component manufacturing firm designs a capacitor to satisfy a customer requirement for handling 12.5 amps, but later is
informed that this requirement must be increased to 13.0 amps, the customer and the company must be certain that this change in the requirement has been reflected everywhere that it appears. Of course, the requirement will be changed in the operational requirement document (ORD) and the system specification (SS). It must also be changed in the component specification, in the installation instructions, and in any training documents that have been generated. The purpose of a configuration management system is to make sure the requirements are changed in all the documents, not just some of them. Failure to make the changes universally will result in confusion and perhaps serious failure.
Configuration management is an established discipline. Most professional organizations have adopted configuration management to control their policies, standards, practices, and manuals. For example, ASQ practices CM to control its handbooks, certification requirements, bodies of knowledge, and other documents. The IEEE also has a standard, 729-1983, for software CM.8, 25 CM is often administered by a configuration control board in the organization. This is a team of personnel responsible for ensuring that document revisions are universally incorporated throughout all the organization's documents. At a minimum, there must be a document traceability matrix, a spreadsheet that traces each requirement. An example of a simplified requirements document traceability matrix is shown in Table 2.1.

Table 2.1 Example document traceability matrix. CPI Program 7267.39 Requirements

Control number | Requirement                      | ORD    | SS    | CPCI specification | VCRM    | Test document
CN 301         | Sense change signal              | 3.1.7  | 2.1.2 | 3.4.1.2            | 4.1.2   | 4.1.2
CN 302         | Assemble control data            | 3.1.8  | 2.1.5 | 3.1.3              | 4.1.2.1 | 4.1.2.1
CN 303         | Transfer data to external module | 3.1.9  | 2.7.6 | 3.4.1.3            | 4.1.3   | 4.1.3
CN 304         | Calculate new delay time         | 3.1.10 | 2.1.4 | 3.7.9              | 4.1.4   | 4.1.4
CN 305         | Inform control module            | 3.1.11 | 2.1.7 | 3.4.1.4            | 4.1.5   | 4.1.5
CN 306         | Return to search mode            | 3.1.12 | 2.1.8 | 3.4.1.5            | 4.1.5.1 | 4.1.5.1
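A traceability matrix like Table 2.1 lends itself to an automated completeness check. The following is a minimal sketch (using the first two rows of the table; the check logic is illustrative, not a prescribed CM tool):

```python
# The controlled documents every requirement must be traced into.
DOCS = ["ORD", "SS", "CPCI specification", "VCRM", "Test document"]

# Requirement -> {document: section reference}, from Table 2.1.
matrix = {
    "CN 301": {"ORD": "3.1.7", "SS": "2.1.2", "CPCI specification": "3.4.1.2",
               "VCRM": "4.1.2", "Test document": "4.1.2"},
    "CN 302": {"ORD": "3.1.8", "SS": "2.1.5", "CPCI specification": "3.1.3",
               "VCRM": "4.1.2.1", "Test document": "4.1.2.1"},
}
matrix["CN 307"] = {"ORD": "3.1.13"}  # hypothetical, untraced requirement

for cn, refs in matrix.items():
    missing = [d for d in DOCS if d not in refs]
    if missing:
        print(f"{cn} is NOT traced in: {', '.join(missing)}")
    else:
        print(f"{cn}: traced in all {len(DOCS)} documents")
```

The hypothetical CN 307 shows the kind of gap a configuration control board would be looking for: a requirement changed in one document but not carried through to the rest.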
Documentation Control
All organizations should employ some version of a documentation control (DC) system. The DC system is very much like the CM system that manufacturers and product providers require. But even if the organization does not provide customers with products, it is highly desirable that the organization's documents all be organized and kept current and available. Revising the documents should follow a simple procedure that ensures that all of the agencies and individuals who are affected by any proposed changes have an opportunity to review the revisions before they are made, and that the changes, once they are made, are reflected universally in all relevant documents. ISO 9001:201525 requires that
revisions to documents be controlled with version control, stored, and preserved. Change markings can occur throughout the document. For example, additions may be shown underlined and deleted words can be shown as lined through— features all word processors provide. A word of caution: Individuals should be discouraged from making hard copies or soft/electronic copies of documents to keep handy since these documents will not reflect the latest changes that have been made. Once a documentation control system is in place, it should be adhered to by everyone in the organization; if it has been automated, the continued use of hard copies and local copies on someone’s own computer hard drives or shared drives should be discouraged.
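For plain-text documents, change markings like those described above can be generated mechanically. Here is a minimal sketch using Python's standard difflib module (the revision text reuses the capacitor example from the CM discussion; real DC systems typically rely on the word processor's or repository's built-in version control):

```python
import difflib

# Two revisions of a short controlled document.
rev_a = ["The capacitor shall handle 12.5 amps.", "Install per section 4."]
rev_b = ["The capacitor shall handle 13.0 amps.", "Install per section 4."]

# Emit a unified diff: "-" lines were deleted, "+" lines were added.
for line in difflib.unified_diff(rev_a, rev_b, "rev A", "rev B", lineterm=""):
    print(line)
```

The diff output makes the 12.5-to-13.0 amp change explicit, which is exactly the visibility that version control of documented information is meant to provide.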
COST OF QUALITY (COQ) Define and describe the four basic cost of quality categories: prevention, appraisal, internal failure, external failure. (Understand) Body of Knowledge I.B.5
Cost of quality (COQ) is a term that's widely used—and widely misunderstood. The "cost of quality"26 isn't the price of creating a quality product or service. It's the cost of not creating a quality product or service. It is also referred to as the cost of poor quality (COPQ). Every time work is redone, the cost of quality increases. Here are some obvious examples:
• The reworking of a manufactured item
• The retesting of an assembly
• The rebuilding of a tool
• The correction of a bank statement
• The reworking of a service, such as the reprocessing of a loan operation or the replacement of a food order in a restaurant
In short, any cost that would not have been expended if quality were perfect contributes to the cost of quality.
Total Quality Costs
As Figure 2.6 shows, quality costs are the total of the costs incurred by
• investing in the prevention of nonconformance to requirements,
• appraising a product or service for conformance to requirements, and
• failing to meet requirements.
Prevention Costs
The costs of all activities specifically designed to prevent poor quality in products or services. Examples are the costs of:
• New product review
• Quality planning
• Supplier capability surveys
• Process capability evaluations
• Quality improvement team meetings
• Quality improvement projects
• Quality education and training
Appraisal Costs
The costs associated with measuring, evaluating, or auditing products or services to assure conformance to quality standards and performance requirements. These include the costs of:
• Incoming and source inspection/test of purchased material
• In-process and final inspection/test
• Product, process, or service audits
• Calibration of measuring and test equipment
• Associated supplies and materials
Failure Costs
The costs resulting from products or services not conforming to requirements or customer/user needs. Failure costs are divided into internal and external failure categories.
Internal Failure Costs
Failure costs occurring prior to delivery or shipment of the product, or the furnishing of a service, to the customer. Examples are the costs of:
• Scrap
• Rework
• Reinspection
• Retesting
• Material review
• Downgrading
External Failure Costs
Failure costs occurring after delivery or shipment of the product—and during or after furnishing of a service—to the customer. Examples are the costs of:
• Processing customer complaints
• Customer returns
• Warranty claims
• Product recalls
Total Quality Costs
The sum of the above costs. This represents the difference between the actual cost of a product or service and what the reduced cost would be if there were no possibility of substandard service, failure of products, or defects in their manufacture.
Figure 2.6 Quality costs—general description.
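The four categories in Figure 2.6 roll up arithmetically into total quality costs. A minimal sketch with made-up dollar amounts:

```python
# Hypothetical annual cost of quality by category (dollars).
coq = {
    "prevention": 12_000,        # training, quality planning
    "appraisal": 18_000,         # inspection, test, calibration
    "internal failure": 30_000,  # scrap, rework, retesting
    "external failure": 45_000,  # complaints, returns, warranty claims
}

total = sum(coq.values())
for category, cost in coq.items():
    print(f"{category:>16}: ${cost:>7,} ({100 * cost / total:.0f}%)")
print(f"{'total':>16}: ${total:>7,}")
```

In this made-up breakdown, failure costs dominate the total, which is the usual argument for shifting spending toward prevention and appraisal.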
Chapter 3 Quality Audits
AUDIT TYPES Define and distinguish between basic audit types, including internal and external audits; product, process, and systems audits; and first-, second-, and third-party audits. (Understand) Body of Knowledge I.C.1
An audit is an evaluation of an organization, its processes, or its products. Audits are performed by an independent and objective person or team, called auditors. The purpose of an audit is to provide decision makers such as senior managers with unbiased information on which decisions can be made. Audits can be concerned with a single aspect of the business, such as its financial status, or they can assess performance in many or all of the functions engaged in a business process, such as an audit of a supplier's operation to determine if a contract award should be made. Auditors collect data from their observations and interviews with personnel who function within the audited area. The data are compared to standards, regulations, or practices that have been established for the organization. The results of the audit are communicated to management for the purpose of assessing the organization's performance and the effectiveness of its quality management system. There are two major classifications of audits:
• Internal audits are engaged by the company being audited. The auditors may be specially trained members of the audited firm's staff, or they may be consultants or contracted personnel who are capable of collecting and analyzing the necessary audit data. Internal auditors should be independent of the activities that are being audited to ensure the audit is unbiased. The results of an internal audit are delivered to the audited company's management so that these managers can make decisions or direct changes based on the audit results. Nearly always, if a firm is anticipating an external audit, it will arrange for an internal
audit to prepare for the external one. One of the procedures required for an organization to be certified to ISO 9001 is to have an internal audit procedure. Also known as a first-party audit, an internal audit is basically an organization auditing itself.
• External audits are engaged by an agency outside the audited company, such as a government agency or an independent auditing firm. There are regional, national, and international accounting firms and auditing corporations that are capable of performing extensive and thorough audits of finance, quality, safety, and compliance with regulations such as environmental and hazardous waste disposal restrictions. The board of directors of a company may engage an external audit, or an external agency may require that an organization have an external audit to grant a certification or for some other reason. An audit performed by any person or organization outside the organization that is being audited is considered an external audit and can be further defined as either a second-party audit or a third-party audit. Third-party audits are performed by auditors who are independent of both the auditee and the requesting agency. When an organization is seeking ISO 9001:2008 certification, an external certification body or registrar performs the audit and certification. ISO (International Organization for Standardization) does not perform certification (or registration), and so a company cannot be certified by ISO. A second-party audit is performed by a customer to verify that the supplier conforms to applicable requirements.
Auditing is an integral part of any organization's quality control program. ISO 9001:2015 certification requires external audits in order to register a company as compliant with its standards. A fundamental requirement of ISO 9001:2015 is that a company demonstrates that it conducts routine internal audits; internal auditing is a key continuous improvement tool to monitor and improve the quality management system.
Ordinarily, the auditee is informed that the audit will occur and when. This is necessary since the auditors will need to interview workers, staff, and managers in the audited organization, and they will need to look at data that may not be immediately available. In order to minimize the time spent auditing (which impacts routine performance of the audited organization), it is nearly always necessary to notify the auditee in advance. However, sometimes an unannounced audit is appropriate. A third-party or external audit team can arrive and announce that they are present to conduct an unannounced audit. Firms that are subject to such unannounced audits must be prepared at all times for such an audit to interrupt their normal duties. They should perform frequent, routine internal audits and have sufficient margin in their schedules and budgets to accommodate these otherwise disruptive audits.
There are three types of quality audits:
1. Product audit
2. Process audit
3. System audit
The product audit assesses the final product or service to determine its quality before it is used by the customer. It is an assessment of conformance to required product or service specifications. A product audit may be an external audit performed by the customer or an internal audit such as a final production inspection. Product audits usually take less time than a process or system audit since they assess a single product at a time.
The process audit is an assessment in which steps are traced for process flow to assure the process is being implemented as planned and the company has the ability to produce the desired results. It is, accordingly, more complex than a product audit but ordinarily less difficult and time-consuming than a system audit. The process audit may be performed as an internal or an external audit. The focus of a process audit is a single function, such as all of the steps in manufacturing a product or all of the steps in servicing a type of customer. The audit appraises completeness, correctness of conditions, and probable effectiveness.
The system audit is an assessment of the entire system or all of the functions of an organization. These audits typically cannot be conducted rapidly. It is not uncommon for a full system audit to take from two to five days and include several auditors who engage dozens of individuals from the audited organization's staff in interviews. Examples of system audits include an ISO 9001:2015 audit by a team of registrar auditors and a pre-award audit to determine whether a new supplier can be awarded a large contract to provide materials. System audits are often performed by third parties. However, as has already been suggested, organizations often perform internal audits even of the entire system in order to better prepare for an external full system audit.
For more information about quality audits, consult The ASQ Certified Quality Auditor Handbook.1
AUDIT COMPONENTS Identify various elements of the audit process, including audit purpose and scope, the standard to audit against, audit planning (preparation) and performance, opening and closing meetings, final audit report, and verification of corrective actions. (Understand) Body of Knowledge I.C.2
The audit process is described in four phases:
1. Preparation
2. Performance
3. Reporting
4. Closure
Preparation
When preparing for any quality audit, the following actions should be taken:
Receive Formal Authorization. The first step in the audit process is obtaining formal authorization for the audit from someone in a leadership position, usually someone in upper management. The authorization provides the audit team leader the authority to execute the audit, gives the audited organization permission to cooperate with the auditors, and commits the organizational resources necessary to plan and execute the audit.
Establish the Purpose of the Audit. Audits are conducted to improve processes, implement a continuous improvement program, take corrective action, and for many other reasons. Ultimately, the audit purpose must be made clear to auditors and auditees alike.
Determine the Audit Type. There are three audit types: product audits, process audits, and system audits. In addition, audits may be customized to specific auditing needs.
Establish the Scope of the Audit. To avoid the audit's scope growing over time (scope creep), the scope should be clearly defined during the preparation phase. Organizations have processes that sustain production, support, and external interfaces. Processes in these three areas should be examined:
• Production processes make goods and services, such as teaching, operating, and assembling.
• Support processes can be administrative or help production, such as purchasing, quality, IT (information technology), and accounting.
• External interface processes are customer or supplier based and can include various departments such as customer service, purchasing, and regulatory affairs.
Once the audit is underway, the scope may change, but a clear understanding of the scope that was agreed to at the beginning of the project will provide a basis for orderly and systematic change rather than chaos. As for timing, a new audit can be estimated from past experience: consult the records of past audits to determine how long it took to perform them, and then adjust the estimates derived from the actual time and effort required in the past with what is known about the new audit. For example, if a similar audit was performed in the past by three people in three days, but the three people felt that was not sufficient time, the new audit may be scheduled to take three people four days (a short person-days sketch follows these preparation steps).
Identify the Necessary Resources. The personnel, equipment, materials, documentation (that is, standards and quality manual), training, tools, and procedures that are required to perform the audit must be stated.
Form a Team. The auditors should be identified and then provided an opportunity to develop into a team before the audit commences. It may be useful to employ a facilitator to expedite the team-building process.
Assign Team Roles and Responsibilities. Once the auditors have been identified, it is necessary to assign them to specific roles and to identify each of their responsibilities. The team leader is identified and given the responsibility of leading the team, facilitating the audit process, and constituting the remainder of the audit team. Other audit team members are normally assigned responsibilities within their areas of expertise in auditing methods and practices or in specific technical areas. The audit team leader should attempt to assign individuals to duties they are capable of performing and arrange for instruction or training in areas that the team members will need during the audit.

Identify the Audit Requirements. In Deming's plan–do–check–act cycle, auditing is the check step. An audit measures performance against standards or requirements (the plan part of the cycle). It is important to be prepared by reading the procedures and reviewing process information and previous audit information. Specific audit activities and deliverables, including audit forms to be completed, working papers (that is, checklists), and detailed questions to be asked during interviews, debriefing activities, and reports, all constitute the audit requirements.

Schedule the Audit Activities. A final component of the audit plan is a detailed audit time schedule. This schedule must include not only dates and times for the delivery of required documents and reports but must also identify milestones. Milestones are checkpoints in the schedule at which the status of the audit activities can be assessed to determine progress.

Communicate. The audit plan is communicated to the audit team members and also presented to the organization being audited ahead of time for agreement. If there are objections by the auditee, they need to be brought to the auditor's attention for clarification and/or revision of the plan. It is the auditee's responsibility to communicate to their organization any relevant information to help prevent surprises and be prepared. This can facilitate cooperation and prevent delays.
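The audit-duration estimate described under "Establish the Scope of the Audit" is simple person-day arithmetic. The following Python sketch is illustrative only; the function name and the adjustment factor are assumptions rather than anything prescribed by the BoK. It shows how records of past audits can be scaled to a new audit:

```python
# Illustrative sketch: scale the effort recorded for a past audit to a new
# audit team. The "adjustment" factor is the planner's judgment call (e.g.,
# 4/3 if three days previously proved about a day too short); it is an
# assumption, not a prescribed value.

def estimate_audit_days(past_auditors: int, past_days: float,
                        new_auditors: int, adjustment: float = 1.0) -> float:
    """Convert past effort to person-days, adjust it, and divide by team size."""
    person_days = past_auditors * past_days * adjustment
    return person_days / new_auditors

# Past audit: three people for three days, judged insufficient by about 1/3.
print(estimate_audit_days(3, 3, new_auditors=3, adjustment=4 / 3))  # -> 4.0
```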
Performance

The execution of the audit constitutes the performance phase. The following activities are required during this phase:

• The opening meeting brings together the auditing staff and the senior management of the organization being audited. The auditor presents the scope and time frame along with notification of potential interviews.
• Interviewing by asking open-ended questions is more useful in obtaining relevant information than yes/no-type questions.
• Examining records and observing activities helps gather objective evidence—both conforming (good) and nonconforming (bad).
• Discussion—it is important to have daily briefings and team meetings to communicate progress and issues. Analyze data to find patterns and trends and to create findings.
Reporting

Audit results will need to be reported to management and should include reference to procedures and standards that have not been met or that need follow-up for improvement. The reporting should be balanced, giving both negative and positive feedback.

• Findings sheets have a statement of the problem with the supporting facts.
• Nonconformities are documented when it has been demonstrated that a requirement is not being met.
• Positive practices help reinforce the good things that are being done (conformances).
• The audit report should be objective, providing conformances, nonconformances, and observations for improvement.
Closure

The audit team works with the audit coordinator to finalize the audit report. The team leader assures there is information for the background, executive summary, and request for corrective actions. The following reports are presented at the closing meeting:

• Final report
• Final report with exceptions
• Terms and conditions for closure (agreements between the agent authorizing the audit and the auditee regarding what constitutes closure)

It is important to thank the organization being audited for their cooperation. This can be done at the closing meeting as well as in the final report. Thanking the audit team for their participation is also valuable—along with feedback and lessons learned—and can be done in the presence of the audit team members. An audit is considered closed when both parties involved with the audit are satisfied that all corrective actions have been verified and successfully completed. For more on corrective actions, see Chapter 21.
AUDIT ROLES AND RESPONSIBILITIES

Identify and describe the roles and responsibilities of key audit participants: lead auditor, audit team member, client, and auditee. (Understand)

Body of Knowledge I.C.3
The roles and responsibilities for each participant in an audit are listed in this section.
Agent Authorizing the Audit (Client)

This may be an external agency, registrar, certifying body, board of directors, customer purchasing department, process manager, or product manager.

• Authorize the audit
• State the audit purpose
• Determine the audit type
• Establish the scope of the audit
• Approve the schedule
• Approve the closure terms and conditions
Senior Management

• Authorize resources
• Direct the organization to cooperate with and assist the auditors
Audit Team Leader (Lead Auditor)

• Create the audit plan for approval by senior management and the agent authorizing the audit
• Manage and administer the audit process
• Form the team
• Assign the team members' roles and responsibilities
• State the audit requirements
• Schedule the audit activities, deliverables, and milestones
• Administer documents during the execution phase
• Call and lead the kickoff, progress, and exit meetings
Auditor (Audit Team)

• Participate in planning the audit
• Participate in team-building activities
• Collect data
• Analyze data
• Prepare working papers
• Attend meetings
• Give reports
• Document findings
Auditee

• Be available when the audit is scheduled
• Provide all evidence requested by the auditor, such as documents, work in progress, and computer data normally employed in the performance of the auditee's duties
• Answer all questions posed by the auditor (normally, auditees should not volunteer information not requested by the auditor)
Chapter 4 Team Dynamics
TYPES OF TEAMS

Distinguish between various types of teams: process improvement teams, work groups/work cells, self-managed teams, temporary/ad hoc project teams, and cross-functional teams. (Analyze)

Body of Knowledge I.D.1
A team is a group of people who perform interdependent tasks to work toward a common mission. Some teams have a limited life, for example, a design team developing a new product or a process improvement team organized to solve a particular problem. Others are ongoing, such as a department team that meets regularly to review goals, activities, and performance. Understanding the many interrelationships that exist between organizational units and processes, and the impact of these relationships on quality, productivity, and cost, makes the value of teams apparent.

Many of today's team concepts originated in the United States during the 1970s through the use of quality circles or employee involvement initiatives. But the initiatives were often seen as separate from normal work activities, in addition to daily tasks. Team design has since evolved into a broader concept that includes many types of teams formed for different purposes.1 Although they may take different names in different industries or organizations, this text presents eight types of teams:

• Process improvement teams
• Work groups
• Work cells (cellular teams)
• Self-managed teams
• Temporary/ad hoc teams
• Cross-functional teams
• Special project teams
• Virtual teams
Process Improvement Teams

Process improvement teams focus on improving or developing specific business processes and are more likely to be trying to accomplish breakthrough-level improvement (as explained in Chapter 8). Process improvement teams:

• are typically cross-functional—different functions/skills related to the process;
• have a management sponsor who charters the team, ensures that the team has appropriate resources, and ensures that the team has organizational support;
• achieve a specific goal;
• are guided by a well-defined project plan; and
• have a negotiated beginning and end.

The leader of a process improvement team is usually selected by the project's sponsor, and the team meets on a regular basis to accomplish the following:

• Plan activities that will occur outside the meeting
• Review information gathered since the previous meeting
• Make decisions to implement process changes

In cases where a team member does not have the expertise, an independent facilitator (not associated with the specific process) may be asked to work with the team. Many organizations have implemented the kaizen approach for productivity improvement, with its main goal of eliminating waste (kaizen is discussed further in Chapter 6). An organization taking the kaizen approach may also use kaizen events or kaizen blitzes to concentrate on process improvement through an accelerated team process that focuses on a narrow project, implementing the recommended changes within a three- to five-day period. In these cases, the team facilitator typically has more authority than with most teams.2

A quality process analyst (QPA) will often be a member of a process improvement team. Because it is challenging enough to support the development and deployment of organizational change, it is always good to first ensure that the mission has been accepted and approved by upper management and that its purpose is well-defined and measurable. Two of the challenges a QPA might experience include dedicating enough time to work on a specific project and employees being reluctant to provide data or information. Once a solution is provided, employees may be unwilling to embrace the change that needs to happen in order for the solution to be effective for quality improvement. To promote the change process, the QPA
can communicate the positive benefits of the change to their peers. Including and encouraging fellow employees to participate when possible will make them more inclined to support the mission.
Work Groups

Also called natural teams, work groups bring together employees in a participative environment. Work groups have an ongoing charter to continually improve the work processes over which they have direct responsibility and ownership. The team leader is usually the individual responsible for the work area, such as a department supervisor. Work groups function similarly to quality circles, in which department personnel meet weekly or monthly to review the performance of their process. They may

• monitor feedback from customers,
• track internal performance measures of processes, and
• track internal performance measures of suppliers.

Work groups are similar to process improvement teams, with the key differences being that work groups are neither cross-functional nor temporary. Should they need additional support, a facilitator is usually available, and outside resources may be brought in on a temporary basis if necessary. Since work groups are an ongoing organizational structure, it is critical that the organizational systems and values support the effort. Certain basic elements should be considered when an organization is attempting to initiate work groups:

• Top management support
• Clear communication
• Improvement objectives and expectations
• Team training
• Appropriate competencies
• Supportive compensation and performance appraisal systems

Other issues to consider include the following:

• Team's scope of responsibility and authority
• Degree of autonomy
• Information needed by team and where obtained
• Decision-making process
• Performance measures and success factors
• Recognition and rewards for performance
• Competencies that must be developed
• Selection process for leaders and facilitators
• Risk management process

An implementation plan including the necessary support systems should be defined before initiating such groups. If work groups are being implemented in an existing organization where a participative management style has not been used before, a pilot program in a department where they are likely to succeed is highly recommended.3
Work Cells (Cellular Teams)

Processes can be organized into work cells, or cellular teams, where the layout of workstations and/or machines used in a designated part of a process is typically arranged in a U-shaped configuration (see Figure 4.1). This allows operators to perform progressive activities on the same part, moving from machine to machine to produce a whole product or a major subassembly. The team that operates the cell is usually completely cross-trained on every step in the series. Team effectiveness depends on coordination, timing, and cooperation. Team members' competencies must be as closely matched as possible to maintain a reasonable and consistent work pace.

Cellular team members usually assume ownership and responsibility for establishing and improving the work processes. Leadership of the cellular team may be by a person designated as lead operator or a similar title. In some cases, the role of lead operator may be rotated among the members. The lead operator is usually responsible for training new team members, reporting team output, and balancing the flow of work through the process. A cellular team is a specialized form of a self-directed work group.4
Figure 4.1 Typical U-shaped cell layout: work flows from an inbound stock point, along the work path through the cell's stations, to an outbound stock point. Source: R. T. Westcott, The Certified Manager of Quality/Organizational Excellence Handbook, 3rd ed. (Milwaukee, WI: ASQ Quality Press, 2006), 403.
Self-Managed Teams

Also called self-directed teams or high-performance teams because of their broader scope of responsibilities, self-managed teams are groups of employees involved in directly managing the day-to-day operation of their particular process or department. Some of their responsibilities are those traditionally held by managers. Self-managed teams

• are authorized to make decisions on a wide range of issues (for example, safety, quality, maintenance, scheduling, and personnel),
• set goals,
• allocate assignments, and
• resolve conflict.5

Because the team members have broader responsibilities, they assume ownership of a work process. The leader of a self-managed team is usually selected by the team members; in many cases, the role is rotated among the members over time. Other positions (entailing scheduling or safety, for example) are also often rotated among the team members.

Self-managed teams require up-front planning, support structures, and nurturing. Figure 4.2 highlights a number of guidelines that can increase the success of self-managed teams:

• Have a well-thought-out vision of how the team will fit into the entire organization
• Ensure that the organizational culture can and will support the team
• Ensure that the organization has the resources to commit to this type of change
• Train team members in skills that will allow them to function together
• Set performance expectations for the team
• Develop a method for providing feedback to the team
• Set boundaries in which the team will be allowed to operate
• Recognize that the self-managed team is not leaderless and that they may need management intervention

Figure 4.2 Guidelines for successful self-managed teams. Source: ASQ's Foundations in Quality Self-Directed Learning Series, Certified Quality Manager, Module 1 (Milwaukee, WI: ASQ Quality Press, 2001), 1–58.

With the various roles that self-managed teams take on, training is necessary to move from a traditional work environment to a self-managed work environment. Often, even the managers need training to transition from their current role to the new support role. For self-managed teams to be successful, they will need training. For example, one Fortune 500 company provides training in the following subjects:

• How to maintain focus on the customer
• How to develop a vision and mission that are integrated with the larger organizational mission
• Understanding roles and operating guidelines
• Skills required for working together to make decisions, plan work, resolve differences, and conduct meetings
• The concepts and strategies for empowerment
• Setting goals (objectives) and solving problems for continuous improvement6

Attempts to transition from a traditional work environment to a self-managed work environment require a culture change. This culture change can be very dramatic and is best approached by first implementing cross-functional process improvement teams and/or work teams as learning mechanisms. Self-managed teams are more likely to be successful as part of a startup of a new facility.
Temporary/Ad Hoc Teams

These types of teams are often initiated to address a specific problem or situation, and they are formed based on the work to be performed. Such a team might not use the usual formal structure (e.g., agenda, regular meeting frequency), but the same general principles and processes still apply. Organizations open to using ad hoc teams must be flexible. An empowered organization will use temporary teams when deemed useful to carry out a short-term mission, such as conducting internal audits. Some companies, for example engineering design companies, may form ad hoc teams with other engineering, design, and/or manufacturing experts to meet their customers' needs. Various situations can arise that require immediate and dedicated attention, in which case ad hoc teams would have to be formed, such as in the following situations:

• Preparing for a major customer's audit team coming to do an assessment of the adequacy of your quality management system
• Disaster relief—a fire occurring and the resulting need to relocate
• Recovery from your organization's information management computer system being compromised by an outside virus

Usually, management will designate a person to

• form a small, cross-functional team;
• present the situation to the team; and
• call on other technical expertise as needed.
Cross-Functional Teams

Cross-functional teams have members from multiple departments or functions within the organization. The purpose of the team could be to bring together subject matter experts from across different disciplines or functions, such as a new product design team, a process improvement team, or a problem-solving team. These teams are designed to connect knowledge across the teams, to eliminate
silos that departmental boundaries create, and to engage members from different skill sets.

Here is an example of a cross-functional team: The emergency department of a hospital is not meeting its patient length-of-stay (LOS) key performance indicator (KPI) goal. It forms a cross-functional continuous improvement team to improve the process. The team consists of members from the emergency department (nursing, physicians), radiology, laboratory, registration services, compliance, information technology, pharmacy, environmental services, and patient transport. The team members apply the Lean-Six Sigma methodology and tools to successfully improve the process and implement the improvements within the emergency department and in their own departments and processes as applicable. Some of the improvements also have a positive impact on other patients throughout the hospital, where the improved processes cross the multiple services and patient types, including inpatients, surgical patients, and outpatients.7
Special Project Teams

This type of team is used when a need develops to form a long-term, totally dedicated project team to implement a major new product, system, or process. These are some examples:

• Reengineering an entire process
• Relocating a major segment of an operation
• Mergers, acquisitions, or divestitures
• Replacing a major information system
• Entering a new market with a new product
• Preparing to apply for the Malcolm Baldrige National Quality Award

Such special project teams may operate as a parallel organization during their existence. They may even be located away from the main organization or exempt from some of the constraints of the main organization. The core team members are usually drafted for the duration of the project. Persons with additional expertise may be called into the team on a temporary, as-needed basis. Usually, the project is headed by an experienced project manager. Depending on the nature of the team's objectives, external specialists or consultants may be retained to augment the core competencies of the team members.8
Virtual Teams

In today's global and electronic business environment, it may be expedient to have team members who do not work in the same geographic location. These virtual teams also require many of the same roles and processes, but the substitution of electronic communications (e.g., e-mail, videoconferencing) for face-to-face meetings provides an additional challenge to team leadership. A key competency of team members is the ability and motivation to self-manage their time and priorities.
Virtual teams have special needs, some of which include the following:

• Telephones, pagers, local area networks, satellites, videoconferencing
• Computers, high-speed and wireless connections, internet technology, e-mail, web-based chat rooms/forums
• Software for communications, meeting facilitation, time management, project management, groupware, and access to online databases

There are many benefits of virtual teams:

• Team members can work from anywhere, at any time of day
• Team members are selected for their competence, not for their geographic location
• Expenses may be reduced or eliminated, such as travel, lodging, parking, and renting/leasing or owning a building9
TEAM DEVELOPMENT

Identify various elements in team-building, such as inviting team members to share information about themselves during the initial meeting, using icebreaker activities to enhance team membership, and developing a common vision and agreement on team objectives. (Apply)

Body of Knowledge I.D.2
The Certified Manager of Quality/Organizational Excellence Handbook (2021) explains how team processes have two major groups of components to consider: task type and maintenance type. Team-building techniques enhance both groups. Task-type processes keep teams focused and on the path toward meeting their goals. Maintenance-type processes help maintain or protect the well-being of relationships among team members. Key task-type components include

• documenting and reviewing the team's objective;
• having an agenda and staying on task during every team meeting;
• defining or selecting and following a technical process that fits the particular project mission;
• using decision-making techniques appropriate to the situation; and
• defining action items, responsibilities, and timing, and holding team members accountable.10
Maintenance-type components also have a dramatic impact on team performance. The dynamics of interactions between team members are just as critical as the tasks that keep the team on the path toward meeting its objectives. Team maintenance helps alleviate the problems created by the fact that each team member has his or her own perspectives and priorities. These are shaped by individual personalities, current issues in each member's personal life, and the attitudes of both the formal and informal groups within the organization to which the team member may belong. Although the team may have specific objectives and agendas, each individual may interpret them differently.

The first team meeting can set the tone for the entire team effort. With good planning, the first meeting will include these elements:

• Team member introductions, where members share information about themselves
• Team members getting acquainted
• The sponsor attending the meeting and emphasizing the importance of the meeting
• Defining the team's mission and scope
• Defining the structure for future meetings
• Defining or selecting the technical process improvement methodology/model to be used
• Drafting the team's objectives
• Defining/reviewing the project plan and schedule
• Setting the meeting ground rules (norms)
• Working out decision-making issues
• Clarifying team members' roles11
Handling Team Introductions There are various methods to ease the anxiety team members might be having about working with one another as they approach their initial meeting. The manager can conduct team introductions by asking team members to share basic information with the group, such as name and job title, the thing they like best about their job, or an interest outside of work. Conducting special exercises, or “icebreakers,” during the first meeting makes it easier for the team members to feel more comfortable in their new environment. The introductions promote spontaneous interaction and conversation. A very simple icebreaker is one where the team members get into small groups of two or three people and share a recent photo that they’ve taken that is special to them, and explain to the team why this photo is special. This creates wonderful discussion and quickly shares something of special meaning to each team member, whether it is a family member, a pet, or a recent memorable vacation photo. Another quick, yet powerful icebreaker is again done in small groups.
Everyone gets some M&M candies. They may eat all but one M&M, which they save to identify the theme of a story to share with their group, aligned to the M&M color, as follows:12

• Blue = proud
• Orange = failure
• Yellow = scared
• Green = embarrassed
• Brown = success/accomplishment
• Red = aspiration

Each person has two minutes to prepare the story and two minutes to share it with the group. Depending upon the time available, stories can also be shared with the larger team.

Another idea is to initiate building camaraderie among the team members by coming up with a team name. Also, allowing the team members to share their individual good or bad experiences of working on other teams is a great way to expose their personality and values. This kind of sharing can bring team members closer together.
Setting Team Goals and Objectives

Team goals must flow from organizational goals and departmental goals to ensure that team members and the entire organization are moving in the same direction. Objectives are defined as quantitative statements of future expectations that include a deadline for completion. Team objectives should be approached the "SMART WAY," as described in Figure 4.3.13
Specific: Focus on specific needs and opportunities
Measurable: Make each objective measurable
Achievable: Ensure that objectives are challenging yet achievable
Realistic: Stretching is fine, but the objective should be realistic
Time frame: Every objective should indicate a time frame for achievement
Worthwhile: Every objective should be worth doing
Assigned: Assign responsibility for each objective
Yield results: Ensure that all objectives yield desired results

Figure 4.3 Objectives made the SMART WAY. Source: ASQ's Foundations in Quality Self-Directed Learning Series, Certified Quality Manager, Module 1 (Milwaukee, WI: ASQ Quality Press, 2001), 1–66. Reproduced with permission from R. T. Westcott & Associates.
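For teams that track draft objectives electronically, the Figure 4.3 checklist maps naturally onto a small data structure. The sketch below is a hypothetical illustration (the field names and the gaps() helper are not from the handbook) of screening a draft objective against each SMART WAY criterion before approval:

```python
# Hypothetical SMART WAY screen for a draft team objective. The field names
# mirror Figure 4.3; the class and helper are illustrative, not prescribed.
from dataclasses import dataclass, fields

@dataclass
class SmartWayCheck:
    specific: bool        # focuses on a specific need or opportunity
    measurable: bool      # has a measurable target
    achievable: bool      # challenging yet achievable
    realistic: bool       # stretching is fine, but realistic
    time_frame: bool      # indicates a time frame for achievement
    worthwhile: bool      # worth doing
    assigned: bool        # responsibility is assigned
    yields_results: bool  # expected to yield the desired results

    def gaps(self):
        """Return the criteria the draft objective still fails."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]

draft = SmartWayCheck(True, True, True, True, False, True, False, True)
print(draft.gaps())  # ['time_frame', 'assigned'] -> revise before approval
```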
Preventing Problems

Team members need basic guidelines to add predictability to their work environment and to create a sense of safety around team interactions. Having ground rules (norms), working out decision-making issues, and assigning roles are common ways of preventing problems associated with team dynamics.
Ground Rules or Rules of Engagement

Ground rules should be considered group norms regarding how meetings will be run, how team members will interact, and what kind of behavior is acceptable. Each member is expected to respect these rules during the project's duration. Some areas to consider when establishing ground rules are provided in Table 4.1, and a sample set of rules of engagement is shown in Figure 4.4. "Parking lot" issues in the rules of engagement are items brought up in meetings that must be resolved at a later time; they can be logged for further tracking and resolution.
Table 4.1 Meeting ground rules.

Logistics
• Regular meeting time and place
• Procedure for notifying members of meetings
• Responsibilities for taking notes, setting up the room, and so on

Attendance
• Legitimate reasons for missing a meeting
• Procedure for informing the team leader when a meeting will be missed
• Procedure for bringing absent members up to speed

Promptness
• Agreed-upon definition of "on time"
• Value of promptness
• Ways to encourage promptness

Participation
• Basic conversational courtesies (e.g., listening attentively, holding one conversation at a time)
• Tolerable interruptions (e.g., phone calls, operational emergencies)
• Confidentiality guidelines
• Value of timely task completion

Source: ASQ's Foundations in Quality Self-Directed Learning Series, Certified Quality Manager, Module 1 (Milwaukee, WI: ASQ Quality Press, 2001), 1–65.

• Be present and be committed at the meetings
• Look at issues, not people; do not be defensive
• Be open to new ideas
• Think outside of the box
• Help each other work together as a team
• Be respectful and allow people to finish their thoughts
• Focus on structure: agendas, outcomes, parking lot issues
• Limit our time; keep our meetings to the time; finish meetings
• Fix the process; don't blame
• Maintain confidentiality

Figure 4.4 Sample rules of engagement. Source: Sandra L. Furterer

Decision-Making Method

"Consensus decision making is the process recommended for major project issues. The approach is more time-consuming and demanding, but provides the most thorough and cohesive results. It allows the team to work out different points of view on an issue and come to a conclusion that is acceptable. Consensus means that the decision is acceptable enough that all team members will support it. To achieve consensus, all team members must be allowed to express their views and must also objectively listen to the views of others. The decision and alternatives should be discussed until consensus is reached."14
TEAM STAGES

Describe the classic stages of team evolution: forming, storming, norming, performing, and adjourning. (Understand)

Body of Knowledge I.D.3
According to the Tuckman model, there are typical stages that a team goes through as the members work together over time and the team grows and matures. Understanding these development stages is valuable for effective management of the team process:15

Stage 1: Forming. The team usually clarifies its purpose, identifies roles for each member, and defines rules of acceptable behavior (often called norms). Members can be proud that they were chosen for the team, but they also have some fear of the unknown. It is common for members to question why they are on the team. They also try to determine acceptable team behavior and establish team rules; they may even complain about the organization and the tasks. Leadership can help teams through this stage by helping members get to know each other through icebreakers, as previously discussed. They should provide clear direction and purpose, and involve the team members in developing plans and rules of engagement.16
Stage 2: Storming. Team members are still thinking primarily as individuals and are therefore basing decisions on their individual experiences rather than pooling information with other members. Collaboration is not yet the norm, which can lead to behavior that may involve confrontations. This stage can be the most difficult for the team. Members feel like they are thrashing about, without much direction, and the team could fall apart if not coached adequately through this stage. Team members may feel frustrated or anxious, and they may even withdraw from engaging with the team. To guide the team through this stage, leadership should help the team resolve issues of power and authority, teaching them how to make team decisions. If team ground rules weren't adequately developed, they should be now, or reviewed and refreshed as needed.17

Stage 3: Norming. The team's focus at this stage has shifted to meeting the team-related challenges. Team members are more agreeable, working out their differences for the sake of the team, resulting in more cooperation and dialogue. In this stage, team members have begun to accept the norms of behavior established through the ground rules, and emotional turmoil is reduced. Team members will begin to joke, laugh, and confide in each other. To help the team get through this stage, leadership should encourage and acknowledge members' respect for each other, encourage members to work collaboratively, and modify ground rules if needed.18

Stage 4: Performing. The team at this point has a better appreciation of its processes. The team's members are supportive of each other, allowing the team to make significant progress toward achieving its goals. In this stage the team has settled into its relationships, norms, and expectations. The members perform consistently, start making change happen, and get work done. To keep the team performing, leadership can help members understand how to manage change, advocate for the team with other groups and individuals, and celebrate team achievements.19

Stage 5: Adjourning. This final stage is when the team disbands. Leadership should ensure that the tasks are complete and that the team's goals have been met before breaking up the team.20

If needed, team members should have training on how to interact with one another in a positive manner, deal with difficult people or situations, contribute to accomplishing the team's goals, and give and receive constructive feedback. With a basic understanding of these techniques, team members will be more productive, enhancing the development and performance of their team.
TEAM ROLES AND RESPONSIBILITIES

Describe the roles and responsibilities of various team stakeholders: sponsor, champion, facilitator, team leader, and team member. (Understand)

Body of Knowledge I.D.4
Process analysts will inevitably be asked to participate on teams since most work in organizations—including quality processes—is performed by working teams. Seven team roles are listed in Table 4.2. Of the seven roles described, those of timekeeper and scribe are the only ones that are optional, depending upon the mission of the team. While the remaining five roles are essential, they may be combined in a variety of ways. However, the most crucial roles for the success of the team, once it is formed, are those of the team leader and the facilitator. The team leader is responsible for the content—the work done by the team. The facilitator is responsible for ensuring that the process affecting the work of the team is the best for the stage and situation the team is in.21

A trained facilitator should be considered in situations such as these:

• The team has been meeting for some time and is incapable of resolving conflicting issues
• A new member has been added, thus upsetting established relationships
• A key contributor has been lost to the group
• There are other factors, such as lack of sufficient resources, the threat of project cancellation, or a major change in requirements, with the potential of disrupting the smooth functioning of the team

Supplementing the team with on-call experts can often compensate for a shortfall in either the number of members or members' competencies. Selected members must willingly share their expertise, listen attentively, abide by the team's ground rules, and support all team decisions.

It is helpful to select a team member to serve as a timekeeper until the team has become more adept at self-monitoring its use of time. The role of timekeeper is often rotated, giving consideration to whether the selected member has a full role to play in the deliberations at a particular meeting.

Some team missions require very formal documentation, and a scribe or note-taker may be needed. This role can be distracting for a member whose full attention is needed on the topics under discussion. For this reason, an assistant who is not a regular member of the team is sometimes assigned to take the minutes and publish them. Care should be taken not to select a team member for this role solely on the basis of that team member's gender or position in the organization. Another function that should be assigned early in the development of a team is the responsibility of booking a meeting room, sending the invites, and ensuring supplies are at hand in the meeting room.

All team members must adhere to expected standards of quality, responsibility, ethics, and confidentiality (see "ASQ Code of Ethics," Chapter 1). It is imperative that the most competent individuals available are selected for each role. See Table 4.2 for the attributes of good role performance.

Very frequently, a team must function in parallel with day-to-day assigned work, with the members not relieved of responsibility for their regularly assigned work. This, of course, places a burden and stress on the team members. The day-to-day work and the work of the team must both be conducted effectively. The inability to be in two places at once calls for innovative time management, conflict resolution, and negotiation skills.22
Table 4.2 Team roles, responsibilities, and performance attributes.

Champion (responsibility: advocate)
Definition: The person initiating a concept or idea for change/improvement.
Attributes of good role performance:
• Is dedicated to seeing it implemented
• Holds absolute belief it is the right thing to do
• Has perseverance and stamina

Sponsor (responsibility: backer, risk taker)
Definition: The person who supports a team's plans, activities, and outcomes.
Attributes of good role performance:
• Believes in the concept/idea
• Has sound business acumen
• Is willing to take risk and responsibility for outcomes
• Has authority to approve needed resources
• Will be listened to by upper management

Team leader (responsibility: change agent, chair, head)
Definition: A person who:
• Staffs the team or provides input for staffing requirements
• Strives to bring about change/improvement through the team's outcomes
• Is entrusted by followers to lead them
• Has the authority for and directs the efforts of the team
• Participates as a team member
• Coaches team members in developing or enhancing necessary competencies
• Communicates with management about the team's progress and needs
• Handles the logistics of team meetings
• Takes responsibility for team records
Attributes of good role performance:
• Is committed to the team's mission and objectives
• Has experience in planning, organizing, staffing, controlling, and directing
• Is capable of creating and maintaining channels that enable members to do their work
• Is capable of gaining the respect of team members; serves as a role model
• Is firm, fair, and factual in dealing with a team of diverse individuals
• Facilitates discussion without dominating
• Actively listens
• Empowers team members to the extent possible within the organization's culture
• Supports all team members equally
• Respects each team member's individuality

Facilitator (responsibility: helper, trainer, adviser, coach)
Definition: A person who:
• Observes the team's processes and team members' interactions and suggests process changes to facilitate positive movement toward the team's goals and objectives
• Intervenes if discussion develops into multiple conversations
• Intervenes to skillfully prevent an individual from dominating the discussion or to engage an overlooked individual in the discussion
• Assists the team leader in bringing discussions to a close
• May provide training in team building, conflict management, and so forth
Attributes of good role performance:
• Is trained in facilitating skills
• Is respected by team members
• Is tactful
• Knows when and when not to intervene
• Deals with the team's process, not content
• Respects the team leader and does not override his or her responsibility
• Respects confidential information shared by individuals or the team as a whole
• Will not accept facilitator role if expected to report to management information that is proprietary to the team
• Will abide by the ASQ Code of Ethics

Timekeeper (responsibility: gatekeeper, monitor)
Definition: A person designated by the team to watch the use of allocated time and remind the team members when their time objective may be in jeopardy.
Attributes of good role performance:
• Is capable of assisting the team leader in keeping the team meeting within the predetermined time limitations
• Is sufficiently assertive to intervene in discussions when the time allocation is in jeopardy
• Is capable of participating as a member while still serving as a timekeeper

Scribe (responsibility: recorder, note taker)
Definition: A person designated by the team to record critical data from team meetings. Formal "minutes" of the meetings may be published and distributed to interested parties.
Attributes of good role performance:
• Is capable of capturing on paper or electronically the main points and decisions made in a team meeting and providing a complete, accurate, and legible document (or formal minutes) for the team's records
• Is sufficiently assertive to intervene in discussions to clarify a point or decision in order to record it accurately
• Is capable of participating as a member while still serving as a scribe

Team members (responsibility: participants, subject matter experts)
Definition: The persons selected to work together to bring about a change/improvement, achieving this in a created environment of mutual respect, sharing of expertise, cooperation, and support.
Attributes of good role performance:
• Are willing to commit to the purpose of the team
• Are able to express ideas, opinions, and suggestions in a nonthreatening manner
• Are capable of listening attentively to other team members
• Are receptive to new ideas and suggestions
• Are even-tempered and able to handle stress and cope with problems openly
• Are competent in one or more fields of expertise needed by the team
• Have favorable performance records
• Are willing to function as team members and forfeit "star" status

Source: R. T. Westcott, The Certified Manager of Quality/Organizational Excellence Handbook, 3rd ed. (Milwaukee, WI: ASQ Quality Press, 2006), 75–76.
TEAM CONFLICT

Identify common group challenges, including groupthink, members with hidden and/or competing agendas, intentional distractions, and other disruptive behaviors. Describe ways of resolving these issues and keeping team members on task. (Understand)

Body of Knowledge I.D.5
Team members are most productive in an environment in which others are responsive and friendly, encourage contributions, and promote a sense of worth. Patrick Lencioni identified some problems that teams experience in his book The Five Dysfunctions of a Team: A Leadership Fable.23 The five "dysfunctions" are as follows:

1. Absence of trust
2. Fear of conflict
3. Lack of commitment
4. Avoidance of accountability
5. Inattention to results

Current expert opinion on team building holds that team members should each assume the responsibility for identifying anything wrong with the team's collaboration and take steps to improve the team's performance. When some team members fail to trust others on the team, it is the responsibility of the team members themselves to increase their trust. Rather than expect the team leader or sponsor to observe conflict and resolve it, the team members should openly confront conflict and resolve it themselves as quickly as possible. When one or more members of the team seem to lack commitment, fail to accept accountability for their responsibility, or are not fully concerned with the team's results, the team members themselves are responsible for identifying and correcting these behaviors. The team members may need to enlist the support of some outside resource, such as the company's human resources department or a consultant, but the responsibility for improving the team's performance rests on the team members.

Peter Scholtes, one of the authors of The Team Handbook, spelled out 10 problems that frequently occur within teams and are typical of the types of situations for which team members, leaders, and facilitators must be prepared. Following is Scholtes's list, along with recommended actions:24
Problem 1. Floundering, or difficulty in starting or ending an activity. Solution: Redirect the team to the project plan and written statement of purpose.

Problem 2. Team members attempt to influence the team process based on their position of authority in the organization. Solution: Talk to the members off-line; clarify the impact of their organizational role and the need for consensus. Ask for cooperation and patience.

Problem 3. Participants who talk too much. Solution: Structure the meeting so that everyone is encouraged to participate (e.g., have members write down their opinions, then discuss them in the meeting one person at a time).

Problem 4. Participants who rarely speak. Solution: Practice gatekeeping by using phrases such as, "John, what's your view on this?" or divide tasks into individual assignments and have all members report.

Problem 5. Unquestioned acceptance of opinions as facts, or participants making opinions sound like facts. Solution: Do not be afraid to ask whether something is an opinion or a fact. Ask for supporting data.

Problem 6. Rushing to get to a solution before the problem-solving process is worked through. Solution: Remind the group of the cost of jumping to conclusions.

Problem 7. Attempting to explain other members' motives. Solution: Ask the other person to clarify.

Problem 8. Ignoring or ridiculing another's values or statements made. Solution: Emphasize the need for listening and understanding. Support the discounted person.

Problem 9. Digression/tangents creating unfocused conversations. Solution: Remind members of the written agenda and time estimates. Continually direct the conversation back on track. Remind the team of its mission and the norms established at the first meeting.

Problem 10. Conflict involving personal matters. Solution: Request that these types of conflict be taken off-line. Reinforce ground rules.

Groupthink is another issue that is quite common in teams, especially if team members want to avoid conflict and just get along. Groupthink occurs when most of the team members support an idea or decision that hasn't been fully explored, or when some team members won't voice their disagreement. Several actions may help reduce groupthink:25

• Brainstorming alternatives before selecting an approach
• Encouraging members to express their concerns
• Ensuring that ample time is given to surface, examine, and comprehend all ideas and suggestions
• Developing rules for examining each alternative
• Appointing an "objector" to challenge proposed actions
Hidden and/or competing agendas are another team problem, especially with cross-functional teams, where team members may feel an allegiance that elevates their own department's goals above the team's goals. The team leader needs to be cognizant of competing agendas, goals, and priorities—and bring the team back to the team's project goals and ground rules—to ensure these conflicts are resolved. Solutions to conflicts should be in the best interest of the team. Team members should be nonjudgmental, listening to team discussions and new ideas. Group feelings should be verbalized by periodically surfacing undercurrents or by giving feedback.

One important skill needed in working with teams is the ability to provide constructive feedback during and/or at the end of a meeting. Feedback is an important vehicle to help the team mature. This feedback can be provided by the facilitator or by team members. There are two types of feedback: motivational and coaching. Motivational feedback must be constructive, that is, specific, sincere, clear, timely, and descriptive of what actually occurred in the meeting. Coaching or corrective feedback specifically states the improvements that need to be made. Scholtes provides the following guidelines for providing constructive feedback:26

• Be specific
• Make observations, not conclusions
• Share ideas or information, not advice
• Speak for yourself
• Restrict feedback to known things
• Avoid using labels
• Do not exaggerate
• Phrase the issue as a statement, not a question

Having a team complete a self-evaluation at the end of each meeting (or occasionally) can be useful in helping the team members further develop their team skills and take more responsibility for team progress. Team members can be asked to write down how well the team is doing on each of the norms (for example, on a scale of 1–5) and to list any additional norms they believe need to be added. A group discussion of the information can then result in revised norms and specific actions the team will take in the future to improve. A simple tally, as sketched below, is enough to turn these ratings into discussion points.
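The following sketch assumes a hypothetical set of norms and member ratings on the 1–5 scale mentioned above; the 4.0 discussion threshold is an arbitrary illustrative choice, not a handbook rule:

```python
# Tabulate a team self-evaluation: each member rates adherence to each norm
# on a 1-5 scale. Norm names, ratings, and the 4.0 threshold are illustrative.
from statistics import mean

ratings = {
    "Start and end meetings on time":  [5, 4, 4, 5],
    "Hold one conversation at a time": [3, 2, 4, 3],
    "Fix the process; don't blame":    [4, 5, 4, 4],
}

for norm, scores in ratings.items():
    avg = mean(scores)
    note = "  <- raise at next meeting" if avg < 4.0 else ""
    print(f"{norm}: {avg:.1f}{note}")
```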
Chapter 5 Training and Evaluation
Describe the various elements of training, including linking the training to organizational goals, identifying training needs, adapting information to meet adult learning styles, and using coaching and peer training methods. Use various tools to measure the effectiveness of the training, including post-training feedback, end-of-course tests, and individual and department performance improvement measures. (Understand)

Body of Knowledge I.E
LINK TRAINING TO ORGANIZATIONAL GOALS

Like all other programs undertaken by an organization, mandatory training should be conducted only to satisfy a business need. The starting point for all training should be linking the training to organizational goals. Training may provide employees with the skills and knowledge necessary to perform their jobs better, and it may prepare the employees to be capable of taking on more responsibility and performing more-important tasks. The primary aim of all training is to satisfy the business objectives of the organization, so it is imperative that those goals be clearly identified and that the training be linked to achieving them.

Not all training is mandatory. Often, organizations provide self-selected optional training to allow employees to improve their general or discipline competencies. Even this optional training should satisfy a clearly stated objective, such as raising workforce morale or enhancing employees' professional development. Joseph Juran and Frank Gryna suggested the following elements to consider when ensuring alignment of training to the organization's objectives:1

• The quality problems and challenges faced by the organization
• The knowledge and skills needed to solve these problems and meet these challenges
• The knowledge and skills actually possessed by the jobholders
• The training facilities and processes already in existence
• The prevailing climate for training in the organization, based on the record of past programs
• What needs to be done differently to achieve the desired quality
NEEDS ASSESSMENT

Once the business purpose for performing training has been identified, the next step is to determine which skills and business knowledge the persons receiving the training should have as the result of the training. This constitutes a training needs assessment. The needs assessment produces a list of the knowledge that the trainees need to know and the skills they need to be able to perform, in sufficient detail that the people developing the training can provide the appropriate amount of detail and content in the training materials. A training needs assessment may be conducted employing any or several of the following sources:

• Accidents and scrap
• Assessment centers
• Assimilation of new employees
• Business changes
• Changes in procedures
• Customer returns/calls
• Employee grievances/complaints
• Employee interviews and needs assessment questionnaires
• Employee opinion/climate surveys
• Employment/skills tests
• Exit interviews
• Focus groups
• Job redesign
• Needs analysis
• New equipment/software
• Observations
• Performance appraisal results
• Promotions and terminations
• Reorganization
• Technology changes
• Quality and process improvement
• Unique business circumstances, such as a merger, operations ramp-up, and so on
ADAPT INFORMATION TO MEET ADULT LEARNING STYLES

The training provided must be adapted to the individual trainees. People learn in different ways, and the training provided to team members must accommodate the unique learning style of each trainee. While nearly all adults can learn using a variety of methods, most people learn best in one style or another. Anthony Gregorc developed a model that classifies adult learning styles as concrete sequential, concrete random, abstract sequential, or abstract random. Neal Fleming developed a model that divides adult trainees into visual, auditory, reading/writing, or kinesthetic learners. What is important to understand about adult learning styles is that adult learners are not the same. Whenever possible, training should be customized to the intended audience; when it is, it will be more effective.2 Consider an adult learner whose preferences are indicated in the following table and the sort of training that would be most effective in each case:

Preferred style: Suggested methods

Visual: Visual cues, facial expressions, and utilization of maps, graphics, and pictures
Auditory: Verbal instructions, talking, and discussions
Kinesthetic (tactile): Physical activities and hands-on experiences

Cognitive
Systems perspective: Show the "big picture," how the pieces fit into a larger system
Logical steps: Provide a plan with practical steps that clearly lead to the desired outcome
Experimental: Give learners an opportunity to try the concepts out to determine if they work
Impact: Demonstrate how the concept will result in significant and desirable results
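Treated as data, the table above is simply a lookup from preferred style to suggested methods. The toy sketch below, covering only the three perceptual rows with illustrative key names, shows how a trainer might assemble methods for the styles present in a class:

```python
# Toy lookup from the learning-styles table; keys and coverage are
# illustrative (only the perceptual rows are included here).
SUGGESTED_METHODS = {
    "visual": "visual cues, facial expressions, maps, graphics, and pictures",
    "auditory": "verbal instructions, talking, and discussions",
    "kinesthetic": "physical activities and hands-on experiences",
}

def methods_for(styles):
    """Return the suggested methods for each known style in the class."""
    return [SUGGESTED_METHODS[s] for s in styles if s in SUGGESTED_METHODS]

print(methods_for(["visual", "kinesthetic"]))
```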
SELECT THE TRAINING DELIVERY METHOD

Training delivery methods are divided into two major categories: on-the-job training (OJT) and off-the-job training. Table 5.1 compares when each category should be selected. On-the-job training involves having the trainer coach or guide the trainee while the trainee is actually performing a work-related job. OJT involves an expert coaching the trainee to do a specific job when the trainee can apply what is learned to his or her job immediately.

Table 5.1 Job training selection criteria.

Task frequency
• On-the-job: Key tasks occur on a regular basis
• Off-the-job: Key tasks do not occur on a regular basis

Skill mastery
• On-the-job: Skills can be mastered only over time with practice
• Off-the-job: Skills can be acquired rapidly with little practice

Subject matter/content
• On-the-job: Content does not change frequently
• Off-the-job: Content requires frequent revisions

Numbers of prospective trainees
• On-the-job: Few trainees are to be trained at the same time
• Off-the-job: Many employees have the same training needs, and there are cost savings in training a group

Training facility
• On-the-job: Required equipment and/or other resources are not available on site
• Off-the-job: Required equipment and/or other resources can be brought into a classroom

Work environment
• On-the-job: The work environment is conducive to learning
• Off-the-job: The work environment is not appropriate or comfortable for learning

Instructor availability
• On-the-job: Qualified instructors are not available on site but master performers are
• Off-the-job: Qualified instructors are available

Accuracy requirements
• On-the-job: Actual job requirements are best experienced firsthand
• Off-the-job: Actual job requirements can be accurately simulated in a classroom environment

Target audience level
• On-the-job: Potential trainees have diverse backgrounds and experience levels
• Off-the-job: Potential trainees have similar backgrounds and experience levels

Target audience motivation
• On-the-job: Learners are well motivated and can work on their own with limited supervision
• Off-the-job: Learners are not sufficiently motivated to work on their own

Time requirements
• On-the-job: Time to complete training is not a critical factor
• Off-the-job: Time to complete training is critical

Source: The ASTD Training and Development Handbook (New York: McGraw-Hill, 1996).

Off-the-job training involves
Off-the-job training involves taking the trainee away from his or her normal job for a short period in order to provide training that the trainee can apply upon returning to his or her normal duties. Organizations often prefer on-the-job training to off-the-job training. On-the-job training may be conducted by an expert who is engaged specifically to provide the OJT or by a fellow employee (or peer) who provides it when the need arises. In either case, the instructor should be an expert who thoroughly understands the job to be performed, possesses the skills necessary to do the job, and has access to all of the information necessary to successfully perform the job.
On-the-job training is usually conducted at the workstation, with the trainer physically collocated with the person being trained, or at the normal place of work. On-the-job training is typically done one-on-one,3 and it is best for developing motor tasks, such as
• procedures for installing software,
• conducting specific part or product inspections, and
• procedures for performing routine troubleshooting.
Off-the-job training, which takes place away from the actual work site, may be preferable for knowledge-based tasks that require some degree of problem solving and decision making, such as
• continuous improvement tools,
• communication skills, and
• data analysis.
There are certainly many cases where training plans utilize both off-the-job and on-the-job training. Training can start off the job in a classroom setting, where the concepts are learned, and then move to practicing what was learned in the on-the-job setting. Another example would be learning general concepts as a group in a classroom setting and then receiving on-the-job training in job-specific skills from a more experienced person with the same job.
When the objective of the training is understood and the individuals who are to be trained have been identified, the best method of delivering the training can be selected. Five training delivery methods are described below. All five are now available both as face-to-face instruction and virtually over the internet. One important consideration in selecting a method is how long it will take the trainee to learn what is needed. Everyone is busy today, so training must not take people away from their normal duties and their assigned work more often or for longer periods than necessary.
Workbooks and self-study materials are available on the internet, from professional trainers, or from media outlets on most topics the process analyst and other employees will need. Since hard-copy workbooks must be prepared in advance of the training, they are usually not interactive. Online training, by contrast, is often interactive, employing simulations, quizzes, exercises, and games.
Lectures from highly qualified instructors who possess both knowledge of the subject and the skill to present it in an interesting manner are still a common means of delivering training. Lectures can be interactive if the instructor is capable of engaging the trainees in dialogue, but, like workbooks, they are usually prepared before the training session and so are normally not interactive. Many courses can be provided by trainers who deliver their training online over the internet or an organization's intranet, and with contemporary software tools they can still be interactive. Most corporate trainers are required to be certified by the organization or by professional organizations.
Job aids are a very popular means of delivering training. They can include avatars and "smart applications" that are available on the trainees' computers or handheld devices, or they can be printed (sometimes laminated) cards that are
easy for the employee to carry and access. When job aids are prepared before the training session, they are not interactive, but today many job aids are interactive.
Video training is a powerful means of delivering training. Live lectures and training sessions can be recorded and played back when the trainee needs them, and lectures can be interspersed with videos to stimulate discussions and facilitate learning. Normally videos are not interactive, but they can be used to stimulate interactive dialogue between trainees and instructors.
Computer-based training uses preprogrammed lectures, videos, and animated presentations to deliver the training. With modern technology, computer-based training can also be interactive. Many excellent training programs have been developed and are available on a variety of topics that process analysts and employees will need to learn. Trainees located anywhere in the world can "attend" a virtual learning session that includes workbooks, lectures, job aids, and video training, as well as interactive discussions.
There are important considerations in selecting a training method. Once training needs have been identified and the objectives are in place, it is time to translate those training concepts into practice, and decisions on where and how the training is to be delivered need to be made. When selecting a delivery method, the following must be considered:
• Audience makeup
• Organizational culture
• Financial issues
• Administrative issues
• Instructor competencies
• Safety issues
Figure 5.1 outlines a checklist of important questions to consider when determining an appropriate delivery system for training.
EVALUATE TRAINING EFFECTIVENESS
Departmental and Training Performance Improvement Measures
Evaluation is essential in determining whether the training program meets the objectives of the training plan. Evaluation includes these elements:4
1. Verification of the design (for example, subject content and delivery methods)
2. Applicability of facilities, equipment and tools, and media to be used
3. Qualifications of the trainer
4. Selection of the participants
5. Measures to be used to assess training effectiveness (validation)
6. Measures to be used to assess outcomes resulting from training (validation)
• Training content
  – Abstract or concrete?
  – Technical or nontechnical?
  – Didactic or experiential?
  – Mandated or required (for example, by a regulatory body, or by management in support of a strategic objective)?
• Learning constraints
  – Short or long time for learning mastery?
  – Short or long time to apply content?
• Training audience
  – Size: individual learners or groups?
  – Geographic dispersion: one location or multiple locations?
  – Education level: basic or advanced?
  – Training experience on a topic: new or refresher?
  – Internal employees or external personnel (for example, customers and suppliers)?
  – Low or high experience?
  – Low or high motivation?
• Instructor competence/train-the-trainer requirements (if applicable)
  – Professional or nonprofessional training capability?
  – Internal employee or external consultant?
• Training facilities
  – Ad hoc or dedicated training facilities?
  – On site or external?
  – Media/equipment available or required?
• Organizational preferences and capabilities
  – Self-directed or instructor led?
  – Traditional modes or technology driven?
• Budget practices for training expenditures
  – Departmental funding or companywide coverage?
  – Individually funded or budgeted line items?
• Evaluation provisions
  – Outputs (for example, new/enhanced skills, improved productivity)?
  – Outcomes (for example, improved customer satisfaction, increased profits, return on training investment)?
Figure 5.1 Considerations for selecting a training system. Source: ASQ’s Foundations in Quality Self-Directed Learning Series, Certified Quality Manager, Module 7 (Milwaukee, WI: ASQ Quality Press, 2001), 7–26.
Technical training effectiveness can be greatly enhanced by four key considerations:
1. Appropriate preparation of the training facilities
2. Appropriate timing of the training relative to when performance is required
3. Appropriate sequence of training relative to skills mastery
4. Appropriate feedback indicating performance difficulties and progress
Table 5.2 describes five levels for evaluating training. Donald Kirkpatrick is credited with defining the first four levels (reaction, learning, behavior, and results) for evaluating training.5 Dana Robinson and James Robinson expanded level three to distinguish between type A learning (are participants applying the skills and behavior as taught?) and type B learning (are participants applying nonobservable skills, for example, mental skills or learning, to the job?).6 Level 5 in Table 5.2 is a perspective addressed by Russ Westcott.7 Figure 5.2 shows a matrix comparing each level of evaluation to value of information, power to show results, frequency of use, and difficulty of assessment. The five levels represent a hierarchy, with evaluation progressing from lowest to highest. The first four levels provide individual performance improvement measures, while the fifth level provides a departmental or organizational performance improvement measure to evaluate training effectiveness.

Table 5.2  Levels of training evaluation.
1. Reaction. Were the participants pleased with the training? The typical "smile sheets" collected at the end of a program, session, or module.
2. Learning. Did the participants learn what was intended for them to learn? Were the training objectives met? Typically determined from some kind of test.
3. Behavior. Did participants change their behavior on the basis of what was taught? Typically determined from an assessment of how learned behaviors are applied on the job.
4. Results. Was there a positive effect on the organization resulting from participants' changed behavior? Typically measured from pretraining versus post-training analysis of the organization's outcomes. Usually a "quantity" type measure (increased production, lower rejects, number of time periods without a customer complaint, and so on).
5. ROTI. Was there a measurable net dollar payback for the organization resulting from participants' application of learned behavior? Return on training investment (ROTI) is based on the dollar value added by the investment in the training.
Source: R. T. Westcott, The Certified Manager of Quality/Organizational Excellence Handbook, 3rd ed. (Milwaukee, WI: ASQ Quality Press, 2006), 564.
Figure 5.2  Characteristics of evaluation levels. The chain of impact runs from reaction through learning, job applications, and results to ROI. Moving up the chain, the value of information and the power to show results increase from lowest to highest, the frequency of use decreases from frequent to infrequent, and the difficulty of assessment increases from easy to difficult. Source: R. T. Westcott, The Certified Manager of Quality/Organizational Excellence Handbook, 3rd ed. (Milwaukee, WI: ASQ Quality Press, 2006), 564.
Descriptions of the levels are listed below.
1. The reaction level provides the lowest-value information and results and is frequently and easily used.
2. The learning level is used to obtain a more in-depth assessment of whether specific training objectives were met.
3. The behavior (job applications) level assesses whether the training is applied back on the job, usually after some time has passed.
4. The results level measures the quantitative difference between the outcomes of the work units affected prior to their people receiving training (the baseline) and the outcomes some time after the training ended. The training objectives measured could be a decrease in scrap rate, an increase in the number of savings accounts opened, increased classroom attendance, elimination of medication errors in a hospital, and so on.
5. The ROI level offers the highest-value information and results, is not used as frequently, and is more difficult to assess. Level 5 is based on placing dollar values on the baseline, the results, and the improvement, then determining the value received for the dollars spent. When expressed as a ratio, a $3 return for every $1 spent is a useful minimum ROI.
Quality training programs that score high at all levels are likely to be very successful. Two formulas can be used to evaluate at level 5: the benefits-to-cost ratio (BCR) and the return on training investment (ROTI):

BCR = program benefits / program costs

ROTI = net program benefits (total benefits − costs) / program costs
A rule of thumb is that if the ROTI is less than 3:1, that is, if fewer than $3 of benefits are obtained for every $1 spent, then the training should not be undertaken (unless it is mandated).
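For readers who track these figures in a script or spreadsheet, the following is a minimal sketch of the two formulas in Python. It is not from the handbook, and the dollar figures are illustrative assumptions.

```python
# Minimal sketch (not from the handbook): computing BCR and ROTI for a
# hypothetical training program. Dollar figures are assumed values.

def bcr(benefits: float, costs: float) -> float:
    """Benefits-to-cost ratio: program benefits / program costs."""
    return benefits / costs

def roti(benefits: float, costs: float) -> float:
    """Return on training investment: (benefits - costs) / costs."""
    return (benefits - costs) / costs

program_benefits = 120_000.0  # assumed annualized benefit of the training
program_costs = 30_000.0      # assumed total program cost

print(f"BCR  = {bcr(program_benefits, program_costs):.2f}")   # 4.00
print(f"ROTI = {roti(program_benefits, program_costs):.2f}")  # 3.00
# Rule of thumb from the text: proceed only if roughly $3 of benefit is
# returned per $1 spent (a 3:1 ratio), unless the training is mandated.
```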
LINK TO ORGANIZATIONAL AND INDIVIDUAL PERFORMANCE MEASURES
Whenever training is provided, it should be followed by an evaluation that compares the results of the training to the objectives of the training, where this is permitted; in some countries individual training evaluations are restricted, for example, by European Works Councils. As stated earlier, the training objectives against which the effectiveness of the training is measured should be linked to business objectives. The training should also be linked to the measures used to assess organizational and individual performance. For example, if the objective of the training is to increase safety on the shop floor, those employees and teams who are responsible for shop floor safety should be given the training they need; after the training, their implementation of safety tools and activities (which should always be assessed) can be used as a measure of how successful the training was. In this world of global competition between world-class quality organizations, a competent and empowered workforce can be the differentiating factor for success.
It is important to use the results of training effectiveness as feedback. The feedback may show failures or ineffectiveness. As in the PDCA cycle (explained in Chapter 6), use the feedback for continuous improvement of the quality training program. The ineffectiveness or failure is likely to be linked to one or more of the following causes:
• Cultural resistance by line managers
• Doubt as to the usefulness of the training, which occurs when
  – no linkage exists with the strategic plans of the organization,
  – no apparent connection is evident between the behaviors to be learned and their application on the job,
  – no projected goals are established for post-training improvement, and
  – the question of "What's in it for me?" is inadequately addressed for participants and their supervisors.
Lack of participation (involvement) by line managers could cause the following issues:
• Technique versus problem orientation
• Inadequacies of leaders
• Mixing of levels of participants
• Lack of application during the course
• Language that is too complex
• Lack of participation by the training function
• Inadequate training delivery or lack of trainer competency
• Operational and logistical deficiencies8
A successful quality training program will provide the workforce with the skills necessary to carry out additional responsibilities, grant access to information for better decision making, and, ultimately, lead to empowerment. An empowered workforce results in better responsiveness to customers, increased business performance, and a better quality of work life.
TOOLS FOR ASSESSING TRAINING EFFECTIVENESS
Many post-training feedback surveys are used to assess training effectiveness. In a research study, Baba Deros and colleagues evaluated participants' perceptions with respect to the following:9
• the overall training program;
• teaching materials, delivery, and tools; and
• facilitators.
These elements should be assessed for participants' perceptions of the overall training program:
• Achievement of workshop/training course objectives
• Usefulness of workshop/training course
• Positive effects from workshop/training course
• Relevance of workshop/training course to work
• Appropriateness of workshop/training course duration
• Importance of workshop/training course to work
• Overall training program
Assess these elements for participants' perceptions of training materials, delivery, and tools:
• Notes for workshop/training course are clear
• Notes for workshop/training course are easily understood
• Notes for workshop/training course contain adequate information
• Adequate exercises are provided in workshop/training course
• Effective exercises are provided in workshop/training course
• Audiovisual aid provided is satisfactory
• Understand overall participants' perceptions on training materials, delivery, and tools
These elements should be assessed for participants' perceptions of facilitators:
• Facilitator is well-prepared
• Facilitator knows the material he or she is presenting
• Facilitator provides real examples
• Facilitator presentation is clear
• Facilitator presentation is easy to understand
• Facilitator involves participants during discussions
• Facilitator involves participants in training activity
• Facilitator presentation is effective
• Facilitator presentation helps me to learn
• Facilitator receives high overall perceptions from participants
In addition to perception surveys from the participants, end-of-course tests can also be administered to assess the participants' knowledge of the skills attained during training. The researchers in the previously mentioned study also assessed the following before and after training:
• Knowledge of the practices of the skills
• Level of understanding of the skills
• Level of practice of the skills
Part II
Quality Tools and Process Improvement Techniques

Chapter 6  Process Improvement Concepts and Approaches
Chapter 7  Basic Quality Tools
Chapter 8  Process Improvement Techniques
Chapter 9  Management and Planning Tools
Chapter 6 Process Improvement Concepts and Approaches
Define and explain elements of Plan–Do–Check–Act (PDCA), activities, incremental and breakthrough improvement, and DMAIC phases (define, measure, analyze, improve, control). (Apply) Body of Knowledge II.A
PLAN–DO–CHECK–ACT (PDCA)
The most well-known methodology for continuous improvement, plan–do–check–act (PDCA), was developed by Walter Shewhart. W. Edwards Deming adapted the cycle to plan–do–study–act (PDSA), also known as the Deming wheel, emphasizing the role of learning in improvement. PDCA is a model for continual process improvement: by developing a measurable action plan, the team can make decisions based on facts as it puts the plan into action. As Figure 6.1 shows, the cycle is endless, and it must be continued to see resulting improvements. As depicted in Figure 6.1, there are four steps in the PDCA/PDSA cycle:
Plan. Recognize the opportunity for improvement and gather the data or do the research to answer, "What are the key targets the team expects this project to achieve?" Once a problem is identified, the situation needs to be observed and data gathered to give a better definition of the problem. When the problem is defined, root cause analysis (discussed further in Chapter 21) takes place, which also occurs in this step of the PDCA cycle. A popular and very effective tool for root cause analysis is the five whys tool: by asking the question "Why?" at least five times, a series of causes and effects results, and this domino effect should end with the root cause. After the root cause has been identified, a countermeasure needs to be created and planned out. Know what the criteria are so progress can be measured: how will the team know whether the effort was worth it, and how will the team know whether it is working? Data need to be gathered to determine whether the plan will work.
Do. Communicate the plan and implement according to the plan. As Paul Palmes says in his audiocast on the PDCA cycle, "If you change horses in midstream . . . people drown."1 Communication is important. Everyone affected by the implementation of the countermeasure needs to be involved.
Figure 6.1  The PDCA/PDSA cycle.
1. Plan: (a) study the situation; (b) determine what needs to be done; (c) develop a plan and measurement process for what needs to be done.
2. Do: implement the plan.
3. Check: (a) determine whether the plan worked; (b) study the results.
4. Act: (a) if it worked, institutionalize/standardize the change; (b) if it didn't, try something else; (c) continue the cycle.
If possible, the proposed solution should be tried on a small scale first, with as much data being collected as possible and the conditions noted as the change is being implemented.
Check. Check according to the plan, just as the team implements according to the plan (in the do phase). The team analyzes the information collected during the do phase. Did the action produce the desired results? Were new problems created?
Act. A decision is made on whether it is feasible to adopt the change. If the change is adopted, the team uses the PDCA cycle as an automatic improvement cycle and looks for more opportunities for improvement. Sometimes, as a result of the check phase, it is decided that the plan should be altered, and the PDCA cycle starts over again. Other possible actions include abandoning the idea or amplifying or reducing the scope, and then, always, beginning the PDCA cycle over again.
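The loop structure of the cycle can also be expressed in a few lines of code. The sketch below is illustrative only and is not from the handbook; the function names, the countermeasure, and the defect-rate figures are assumptions.

```python
# Minimal PDCA loop skeleton (illustrative; all names and figures assumed).

def plan(baseline: float) -> float:
    """Study the situation and set a measurable target."""
    return baseline * 0.9  # e.g., aim to cut the defect rate by 10%

def do(countermeasure: str) -> None:
    """Implement the plan, ideally on a small scale first."""
    print(f"Piloting countermeasure: {countermeasure}")

def check(measured: float, target: float) -> bool:
    """Compare the measured result against the plan's target."""
    return measured <= target

def act(worked: bool) -> None:
    """Standardize the change if it worked; otherwise try something else."""
    print("Standardize the change." if worked else "Revise the plan and repeat.")

baseline_defect_rate = 0.05            # assumed current defect rate
target = plan(baseline_defect_rate)    # Plan
do("add a go/no-go gage at final inspection")  # Do
measured_rate = 0.042                  # assumed post-pilot measurement
act(check(measured_rate, target))      # Check, then Act; the cycle repeats
```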
KAIZEN
Kaizen is a combination of two Japanese words (kai, meaning change, and zen, meaning good) and is generally translated as continuous improvement. Kaizen is a process-oriented system that encompasses the whole organization, and the key to success is management's demonstration of complete support. Some of the goals of kaizen include the following:
• Continuous improvement
• Elimination of waste
• Just-in-time (JIT) manufacturing
• Standardized work
• Total quality control
The kaizen philosophy works with the principle of having a strong respect for the people. Improving the quality of the workforce impacts the quality of the product and/or service.
When there is a focus on improvement in all aspects of the workplace, problems are not looked at as mistakes but as opportunities. When a workforce embraces kaizen, they participate in
• training on problem-solving and quality tools,
• doing the problem solving,
• implementing improvements,
• receiving recognition for their successes, and
• a suggestion program (which costs little to nothing to implement).
In his book Kaizen: The Key to Japan's Competitive Success,1 Masaaki Imai relates the kaizen approach to the PDCA cycle, as shown in Figure 6.2. The Toyota production system is well known for its kaizen approach to productivity improvement, with the main goal of eliminating waste.
Kaizen events, also called kaizen blitzes, are typically five-day, highly intensive events with a structured approach. These events focus on process improvement that may also result in the development of a new or revised work standard. Kaizen blitzes involve a team of workers who may be targeting waste elimination, work environment improvement, cycle time reduction, cost reduction, or setup time reduction (single-minute exchange of dies [SMED], explained more in Chapter 8). The goal of kaizen events is to create a sense of urgency and generate enthusiasm for rapid results. Because of the immediacy, resources need to be in place and readily available for use. The kaizen event follows the plan–do–check–act cycle and includes the following steps:2
PLAN:
1. Identify need: determine the purpose of the kaizen.
2. Form kaizen team: typically six to eight team members.
3. Develop kaizen objectives: to document the scope of the project. The objectives should be SMART (Specific, Measurable, Achievable, Realistic, and Time-based).
Figure 6.2  Kaizen and the PDCA cycle.
Plan: definition of the problem and analysis of the problem (what?); identification of causes (why?); planning countermeasures (how?).
Do: implementation.
Check: confirmation of the result.
Act: standardization.
4. Collect current state baseline data: either from the measure phase or from additional data as needed.
5. Develop schedule and kaizen event agenda: typically, one week or less.
DO:
6. Hold kaizen event using DMAIC. Sample kaizen event agenda:
• Review kaizen event agenda
• Review kaizen objectives and approach
• Develop kaizen event ground rules with team
• Present baseline measure and background info
• Hold event:
  – Define: problem (derived from objectives); agree on scope for the event
  – Measure: review measure baseline collected
  – Analyze: identify root causes, wastes, and inefficiencies
  – Improve: create action item list and improvement recommendations
  – Control: create standard operating procedures to document and sustain improvements; prepare summary report and present to sponsor
• Identify and assign action items
• Document findings and results
• Discuss next steps and close meeting
7. Implement: implement recommendations, fine-tune, and train.
CHECK/ACT:
8. Summarize: summarize results. Kaizen summary report items:
• Team members
• Project scope
• Project goals
• Before kaizen description
• Pictures (with captions)
• Key kaizen breakthroughs
• After kaizen description
• Results
• Summary
• Lessons learned
• Kaizen report card with follow-up date
9. Control: if targets are met, standardize the process. If targets are not met, or the process is not stabilized, restart the kaizen event PDCA cycle.
INCREMENTAL AND BREAKTHROUGH IMPROVEMENT
Quality programs that focus on continuous improvement are vital in providing incremental improvement of processes, products, and services within an existing technological paradigm. These programs view quality as being measurable in some quantitative way. Breakthrough improvement that leads to the development of new paradigms requires thinking of quality in a different way.3
The plan–do–check/study–act cycle and the kaizen strategy typically focus on incremental improvement on a continuous basis. By taking one step at a time, process improvement is achieved gradually, as opposed to 100% improvement upon completion of the first process improvement event. Even when this strategy is implemented on processes and subprocesses throughout the organization, with the output of one improvement event serving as the input to the next, businesses often lose patience because of the slow pace of progress. Incremental improvement initiatives alone do not allow a business to keep up with the rapid pace of change in the technological arena, or with customer demands and competition. The Six Sigma philosophy and reengineering are two approaches to achieving breakthrough improvement.
Breakthrough improvement is a method of solving chronic problems that results from the effective execution of a strategy designed to reach the next level of quality. Such change often requires a paradigm shift within the organization. When initiating a method of improvement, the primary goal is to control variation. After reviewing the data, special cause problems and common cause problems are distinguished by using control charts (see Chapter 14). Once a state of controlled variation is reached, higher levels of quality are achieved by eliminating chronic problems. At this point, to achieve breakthrough improvement, organizations should focus on reducing variation in those areas most critical to meeting customers' requirements. The Six Sigma philosophy translates to the organizational belief that it is possible to produce totally defect-free products or services.
Reengineering is an extreme action for breakthrough improvement, unlike the gradual improvement of processes resulting from incremental changes. When an organization undergoes reengineering, it is seeking drastic improvement results, which often means a paradigm shift has to happen within the organization. In a total reengineering of an entire organization, its present structure and the way its current work processes produce and deliver its products or services may not be a consideration. With a completely new approach, the organization may ignore the current status and start with a fresh, new tactic.
Unfortunately, companies have used this tactic in desperation to cut costs by drastically reducing the number of employees. This, obviously, is a very unfavorable approach, and rather than reengineering the entire company all at once, process reengineering has become a more constructive method. In process reengineering, a team maps out the structure and functions of a process to identify and purge the non-value-added activities, reducing waste and costs accordingly. This results in the development of a new and less costly process that performs the same functions.
Another aspect of reengineering is using creativity and innovation to help a business keep up with the rapid pace of change in the technological arena, as well as with customer demands and competition. Breakthrough improvement needs insight. To develop radical or paradigm-shifting products and services, incremental improvement must be combined with creativity and innovation. Some useful tools can help promote creativity:
• Lateral thinking: a process that includes recognizing patterns, becoming imaginative with old ideas, and creating new ones
• Imaginary brainstorming: breaks traditional thinking patterns
• Knowledge mapping: a process of visual thinking
• Picture association: using pictures to generate new perspectives
• Morphological box: used to develop the anatomy of a solution
Define, Measure, Analyze, Improve, Control Phases
Six Sigma is a quality management philosophy and methodology that focuses on reducing variation, measuring defects per million opportunities, and improving the quality of products, processes, and services. The concept of Six Sigma was developed in the early 1980s at Motorola Corporation and was popularized in the late 1990s by General Electric Corporation and its former CEO, Jack Welch. DMAIC is the Six Sigma improvement methodology, named for its five phases: Define, Measure, Analyze, Improve, and Control. Figure 6.3 shows the activities within each DMAIC phase. These activities may vary by reference but are generally performed for each project. Each phase and its activities are described next.
Define Phase
The purpose of the Define phase is to delineate the business problem and scope the project and the process to be improved. The activities and most commonly used tools in the Define phase are shown in Table 6.1. The following steps can be applied to meet the objectives of the Define phase:
1. Create project charter, plan, and stakeholder analysis. In this activity, the project team works with the project sponsor and champion to define the problem to be improved and the goals and scope of the project. The scope of the process, with the high-level steps that are part of the process and project, is also identified.
Figure 6.3  DMAIC phases and activities.
Define: 1. Create project charter, plan, and stakeholder analysis. 2. Perform initial VOC (voice of the customer) and identify critical to satisfaction (CTS) criteria. 3. Select team and launch project.
Measure: 4. Define the current process and VOP (voice of the process) performance. 5. Define detailed VOC. 6. Validate measurement system.
Analyze: 7. Develop cause-and-effect relationships. 8. Determine and validate root causes. 9. Develop process capability.
Improve: 10. Design future state with costs and benefits. 11. Establish performance targets and project scorecard. 12. Gain approval, train, and pilot.
Control: 13. Implement improvement recommendations and manage change. 14. Incorporate process control plans and scorecard. 15. Implement continuous improvement cycle, plan–do–check–act (PDCA).
Source: Reprinted from S. Furterer and D. Wood, ASQ Certified Manager of Quality/Organizational Excellence Handbook, 2021.4
Table 6.1  Define phase activities and common tools.
1. Create project charter, plan, and stakeholder analysis. Tools: project charter; SIPOC (suppliers, inputs, process, outputs, customers); stakeholder analysis; communication plan; change plan.
2. Perform initial voice of the customer (VOC) and identify critical to satisfaction (CTS). Tools: CTS summary.
3. Select team and launch the project. Tools: RACI (responsibilities) matrix; rules of engagement; IFR (items for resolution); project plan.
Source: S. Furterer, "Lean-Six Sigma Course Materials," University of Dayton, 2018.
2. Perform initial voice of the customer (VOC) and identify CTS/CTQ. The initial voice of the customer (VOC) is derived from interviews and qualitative data collection involving the stakeholders of the process. A stakeholder analysis is performed to identify the roles, concerns, and engagement of the stakeholders impacted by the process to be improved.
This helps to develop the initial critical to satisfaction (CTS)/critical to quality (CTQ) criteria.
3. Select team and launch project. The stakeholder analysis helps to identify the team members, who will represent each stakeholder group, to work through the DMAIC process.
Measure Phase
The purpose of the Measure phase is to understand and document the current state of the processes to be improved, collect the detailed voice of the customer information, baseline the current state, and validate the measurement system. The activities and most commonly used tools in the Measure phase are shown in Table 6.2. The activities performed during the Measure phase include the following:
4. Define the current process and VOP (voice of the process) performance. In this step, the current process is mapped using a process map and/or value stream map. Additionally, a data collection plan is defined that identifies the data to be collected to understand the CTS. The data are then collected to understand the voice of the process, which in turn reveals the current performance of the process.
5. Define detailed VOC (voice of the customer). Additional and more detailed VOC information is collected in the Measure phase, using qualitative methods, such as interviews or focus groups, and quantitative surveys to assess stakeholder satisfaction.
Table 6.2  Measure phase activities and common tools.
4. Define the current process and VOP performance. Tools: process map, value stream map; data collection plan with operational definitions; metrics with baseline; Pareto chart; VOP matrix; benchmarking; check sheets; histograms.
5. Define the detailed VOC. Tools: surveys, interviews, focus groups, affinity diagrams.
6. Validate the measurement system. Tools: measurement system validation.
Source: S. Furterer, "Lean-Six Sigma Course Materials," University of Dayton, 2018.
6. Validate measurement system. The measurement system is assessed to ensure that the system used to collect and measure data is accurate and represents the process. A Gage R&R (repeatability and reproducibility) study and an attribute agreement analysis are used to assess the repeatability and reproducibility of the measurement system, the first for variable measures and the second for attribute measures. This helps ensure that operators who are measuring the metrics are doing so consistently and accurately.
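As a rough illustration of the idea (not an example from the handbook), the sketch below estimates repeatability, reproducibility, and %GRR for a small crossed study using simplified variance components. It deliberately ignores the operator-by-part interaction term that a full ANOVA or average-and-range study would include, and all data are synthetic assumptions.

```python
import numpy as np

# Synthetic crossed study: 5 parts x 3 operators x 2 trials (assumed data).
rng = np.random.default_rng(1)
parts = rng.normal(10.0, 0.50, size=(5, 1, 1))     # true part-to-part variation
oper_bias = rng.normal(0.0, 0.05, size=(1, 3, 1))  # operator (reproducibility) effect
noise = rng.normal(0.0, 0.10, size=(5, 3, 2))      # equipment (repeatability) noise
data = parts + oper_bias + noise                   # shape (parts, operators, trials)

p, o, t = data.shape
ev2 = data.var(axis=2, ddof=1).mean()              # repeatability: pooled within-cell variance

# Reproducibility: between-operator variance, corrected for repeatability noise.
op_means = data.mean(axis=(0, 2))
av2 = max(op_means.var(ddof=1) - ev2 / (p * t), 0.0)

# Part variation: between-part variance, corrected the same way.
part_means = data.mean(axis=(1, 2))
pv2 = max(part_means.var(ddof=1) - ev2 / (o * t), 0.0)

grr = np.sqrt(ev2 + av2)
total = np.sqrt(ev2 + av2 + pv2)
print(f"%GRR = {100 * grr / total:.1f}%")
# Common guideline: under 10% is acceptable, 10-30% is marginal,
# and over 30% means the measurement system needs improvement.
```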
Analyze Phase
The purpose of the Analyze phase is to analyze the data collected related to the voice of the customer and the voice of the process, to identify the root causes of the process problems, and to develop the capability of the process. The activities and most commonly used tools in the Analyze phase are shown in Table 6.3. The activities performed and tools applied during the Analyze phase include the following:
7. Develop cause-and-effect relationships. Root cause analysis is performed to identify the input and process factors that impact the process. Cause-and-effect diagrams, why–why diagrams, and failure mode and effects analysis (FMEA) are used to develop the cause-and-effect relationships and to identify the critical few causes that contribute to the process problems.
8. Determine and validate root causes. If necessary, additional data are collected to validate the root causes. To determine and validate root causes, the Six Sigma team can perform a process analysis coupled with waste elimination. A process analysis consists of the following steps:
1. Document the process (using process maps from the Measure phase)
2. Identify non-value-added activities and waste
Table 6.3  Analyze phase activities and common tools.
7. Develop cause-and-effect relationships. Tools: cause-and-effect diagrams; cause-and-effect matrix; why–why diagram.
8. Determine and validate root causes. Tools: process analysis; histograms and graphical analysis (covered in Measure); waste analysis; value analysis; 5S tool; FMEA; basic statistics (covered in Measure).
9. Develop process capability. Tools: DPPM/DPMO; process capability.
Source: S. Furterer, "Lean-Six Sigma Course Materials," University of Dayton, 2018.
3. Consider eliminating non-value-added activities and waste
4. Identify and validate (collect more data if necessary) root causes of non-value-added activities and waste
5. Begin generating improvement opportunities
9. Develop process capability. Process capability analysis is performed to assess whether the current processes are capable of meeting the required specifications of the process.
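To make this concrete, here is a minimal sketch (not from the handbook) of computing the common capability indices Cp and Cpk and an approximate DPMO under a normality assumption; the specification limits and measurements are assumed values.

```python
import numpy as np
from scipy.stats import norm

# Assumed specification limits and sample measurements (illustrative only).
lsl, usl = 9.5, 10.5
x = np.array([10.02, 9.98, 10.10, 9.91, 10.05,
              9.97, 10.08, 10.01, 9.94, 10.03])

mean = x.mean()
sigma = x.std(ddof=1)  # in practice, sigma often comes from a control chart

cp = (usl - lsl) / (6 * sigma)                    # potential capability
cpk = min(usl - mean, mean - lsl) / (3 * sigma)   # accounts for centering

# Approximate DPMO, assuming the characteristic is normally distributed.
p_out = norm.sf(usl, mean, sigma) + norm.cdf(lsl, mean, sigma)
print(f"Cp = {cp:.2f}  Cpk = {cpk:.2f}  DPMO = {p_out * 1e6:.0f}")
```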
Improve Phase
The purpose of the Improve phase is to identify improvement recommendations, design the future state, implement pilot projects, train, and document the new processes. The activities and most commonly used tools in the Improve phase are shown in Table 6.4. The activities performed and tools applied during the Improve phase are the following:
10. Design future state with costs and benefits. During this activity, ideas for improving the process are generated, and action plans specifying how to implement the improvements are developed. The improvement ideas are prioritized and organized into ideas that can be implemented in the short term, within three to six months, and long-term items, which will take more time to implement.
Table 6.4  Improve phase activities and common tools.
10. Design future state process and improvements. Tools: quality function deployment; recommendations for improvement; action plan; cost/benefit analysis; future state process map; future state value stream map; appreciative inquiry; affinity diagram; relationship diagram; opportunity map; strategies and strengths matrix.
11. Establish performance targets and project scorecard. Tools: dashboards/scorecards; revised VOP matrix.
12. Gain approval, train, and pilot. Tools: training plan and materials; procedures; policies; RACI (Responsible, Approve, Consult, Inform).
Source: S. Furterer, "Lean-Six Sigma Course Materials," University of Dayton, 2018.
11. Establish performance targets and project scorecard. The performance targets and project scorecard are developed to assess whether the improvements implemented had a positive impact on the processes.
12. Gain approval, train, and pilot. The improvement ideas are piloted on a smaller basis, perhaps in one department or one area of the company, to ensure that they work as planned. Approval to implement the ideas is obtained from the project sponsor and/or steering committee. The improvements are then rolled out to the organization more broadly, and any training necessary to ensure that operators and staff can properly perform the process is provided.
Control Phase
The purpose of the Control phase is to measure the results of the pilot projects and manage the change on a broader scale, report scorecard data and the control plan, identify replication opportunities, and develop future plans for improvement. The activities and most commonly used tools in the Control phase are shown in Table 6.5. The activities performed and tools applied during the Control phase are the following:
13. Implement improvement recommendations and manage change. The improvements are rolled out to the organization more broadly than the initial pilot activities. Change management techniques should be applied to manage the process change within the organization.
14. Incorporate process control plans and scorecard. Process control plans are developed in the Control phase to ensure that the new process changes are maintained.
Table 6.5  Control phase activities and common tools.
13. Implement improvement recommendations and manage change. Tools: hypothesis testing.
14. Incorporate process control plans and dashboards. Tools: dashboards; basic statistics, graphical analysis, sampling, mistake proofing, FMEA (failure mode and effects analysis), control plan, process capability, DPPM (defective parts per million)/DPMO (defects per million opportunities); statistical process control; standard work.
15. Implement continuous improvement cycle; plan–do–check–act (PDCA). Tools: prioritization matrices; process map repository.
Source: S. Furterer, "Lean-Six Sigma Course Materials," University of Dayton, 2018.
The control plans and dashboards or scorecards are used to measure the change and maintain the gains achieved.
15. Implement continuous improvement cycle, plan–do–check–act (PDCA). Additional improvements are implemented using the PDCA cycle for continuous improvement within the process.
Thomas Pyzdek's graphic, shown in Figure 6.4, presents the key questions that should be asked and answered in each phase of the DMAIC methodology.
Figure 6.4  DMAIC cycle.
Define: Why must this project be done now? What is the business case for the project? Who is the customer? What is the current state? What will be the future state? What is the scope of this project? What are the tangible deliverables? What is the due date?
Measure: What are the key metrics for this business process? Are the metrics valid and reliable? Do we have adequate data on this process? How will we measure progress? How will we measure ultimate success?
Analyze: What is the current state? Is the current state as good as the process can do? Who will help make the changes? What resources will we need? What could cause this change effort to fail? What major obstacles do we face in completing this project?
Improve: What is the work breakdown structure for this project? What specific activities are necessary to meet the project's goals? How will we reintegrate the various subprojects? Do the changes produce the desired effects? Are there any unanticipated consequences?
Control: During the project, how will we control risk, quality, cost, schedule, scope, and changes to the plan? What type of progress reports should we send to sponsors? How will we ensure that the business goals of the project were accomplished? How will we maintain the gains made? The cycle then feeds the next project.
Source: Copyright © 2003 by Thomas Pyzdek, all rights reserved. Reproduction permitted providing copyright notice remains intact. For more information, visit http://www.pyzdek.com. Also see DMAIC versus DMADV; search the website for more information on DMAIC and DMADV.
Chapter 7 Basic Quality Tools
Select, construct, apply, and interpret the seven basic quality tools: (1) cause-and-effect diagrams, (2) flowcharts (process maps), (3) check sheets, (4) Pareto charts, (5) scatter diagrams, (6) run charts and control charts, and (7) histograms. (Evaluate)
Body of Knowledge II.B
The seven basic quality tools are scientific tools used in analyzing and improving process performance. These classic tools are primarily graphical means for analyzing process problems through the examination of data. Figure 7.1 shows that the basic quality tools are either
• quantitative: used for organizing and communicating numerical data, or
• nonquantitative: used to process useful information for decision making.
With a team in place ready to collect useful data, the next step is for the team to know how to select and apply the tools designed to manage the data. In some instances, just one tool can be used for problem identification, problem solving, process improvement, or process management. In many cases it takes various combinations of the seven basic quality tools, depending on the nature of the problem or objective.
CAUSE-AND-EFFECT DIAGRAM The cause-and-effect diagram is used to generate a list of possible root causes of a problem. It is also known as the Ishikawa diagram, after its developer, Kaoru Ishikawa, or the fishbone diagram due to its distinctive shape. Once a team has identified a problem, the diagram is used to divide root causes into broad categories. Results of further brainstorming of those root causes can be sorted into the identified categories.
Figure 7.1  The seven basic quality tools.
Quantitative: check sheets; Pareto charts; scatter diagrams; run and control charts; histograms.
Nonquantitative: cause-and-effect diagrams; flowcharts or process maps.
Categories typical for the manufacturing environment are known as the four Ms:
• Manpower
• Machinery
• Methods
• Materials
Sometimes measurement or environment will be added (Figure 7.2). For service processes, the categories often used are:
• Measurement
• Materials
• People
• Environment
• Method (could include policies and procedures)
• Information systems
The cause-and-effect diagram provides a way of organizing the large number of subcategories generated from the brainstorming session. Figure 7.3 illustrates a completed cause-and-effect diagram.
Figure 7.2  Cause-and-effect diagram for a manufacturing problem, with branches for methods, machinery, measurement, material, manpower, and environment leading to the problem.
Figure 7.3  Completed cause-and-effect diagram for a manufacturing problem. The effect is valve leaks; brainstormed causes include wrong lathe speed, wrong chuck pressure, and hone not used (methods); bad bearing on #23 (machinery); gage calibration and bore gage not available (measurement); nonmetallic inclusions and porous castings (material); improper training and poor motivation (manpower); and excess atmospheric pressure affecting testing equipment and excess psychological pressure caused by emphasis on quotas (environment).
Figure 7.4 shows a cause-and-effect diagram for a financial services mortgage loan origination process. The effect is delays in the loan origination process. The causes were brainstormed with the process improvement team members, who were subject matter experts in the consumer loan origination process at the bank. The next step in the root cause analysis is to collect data to determine which of the causes are most likely to be acting on the problem being studied.
FLOWCHARTS (PROCESS MAPS)
For a team to work on process improvement, they first need to make sure they have a good understanding of how the process works. A flowchart (also known as a process map) is a basic quality tool that provides a graphical representation of the sequential steps in a process. A process map is a graphical display of the steps, events, and operations that constitute a process; it helps us to document, understand, and gain agreement on a process. The process map can be used to document the current, or "as is," process, or a future state, or "to be," process, which designates the process after it is implemented or improved. A process is a sequence of activities with the purpose of getting work done.
A typical process map uses rectangles to represent an activity or task, diamonds to represent decisions, and ovals or circles to represent connectors to activities either on the same page or on another page. Flow usually runs from left to right and top to bottom. Process maps help process owners understand how work is accomplished, how all processes transform inputs into outputs, how small processes combine to form bigger processes, and how processes are both connected and interdependent. Process maps also help process owners see how their work affects others and how other roles affect their work, how even small improvements make a difference, and that process improvements require communication and teamwork.
This section on the process architecture map is derived from the systems engineering book by S. L. Furterer.1 A special tool, developed by the author, combines both a process map and a process architecture and is called a process architecture map (PAM).2
Figure 7.4  Cause-and-effect diagram for a financial services process: delays in mortgage loan origination. Causes are grouped under measurement (for example, not measuring quality, missing critical docs), materials (hard-copy files, printing for compliance), people (no central training, inadequate staffing, lack of product knowledge), environment (seasonal variation, unpredictable market, differences by market), method (complex products, many hand-offs, no standard process, no sense of urgency), and information systems (duplication of data entry, lack of automated underwriting, lack of automated checklist, systems not set up efficiently). Source: Sandra L. Furterer.
The process architecture provides the key elements or information used within a process or system. The traditional process map shows the sequence of activities performed by "actors" or roles within a system. To accurately define a PAM, we must first define a process. A process is a sequence of activities that are performed in coordination to achieve some intended result. A process is also described as a set of activities involved in transforming inputs into outputs. Processes typically cross traditional organizational boundaries or departments.3 The activities jointly realize a course of action. Each process is enacted by a single organization, but it may interact with processes performed by other organizations, systems, or subsystems.
The PAM has three primary purposes: communication, knowledge capture, and analysis. In using a PAM for communication, we want to convey a rigorous understanding of the process and the ability to manage change. In knowledge capture, we are interested in understanding the associations of processes and activities. Finally, in analysis we want to be able to study the process for optimization and transformation purposes.
The steps to develop the process scenarios are as follows:
1. Identify the processes that will be mapped, creating a process map list. Define the boundaries of the processes to be mapped based on the value chain or SIPOC.
2. From the processes to be mapped and a stakeholder analysis, identify the roles that perform the processes. Identify representatives who perform these roles to take part in the process map development.
3. Identify the major activities within the process and the sequence of activities performed or to be performed within the process. Identify the process steps and uncover complexities using brainstorming and storyboarding techniques. Brainstorming is a useful tool for having the participants generate ideas in a free-form manner, without evaluating the ideas prematurely. Storyboarding is a way to develop different stories of how the system will be used, while generating what the process looks like from a user perspective.
4. Arrange the steps in time sequence and differentiate operations by the particular symbols chosen to represent activities, decisions, connectors, and start and end points.
5. Identify who (the actor) performs each activity. Do not use names of people; use roles.
6. Identify the documents or information used for each activity.
7. Identify the system, technology, or method used in each activity.
8. Create a process architecture map of each process.
9. For a more detailed process architecture, you may identify potential risks to the process and the control mechanisms. You may then add additional knowledge that is helpful to perform the process.
10. Always observe the actual process as it is performed, if possible, and revise your process map.
For the PAM specifically, we use the following conventions:
1. Swim lanes, located down the left side of the diagram, are used in a PAM to represent the different elements related to the process, including
• the activities or tasks of the process in sequential order;
• the functional roles or actors of the activities taking place in that part of the flow (decisions or delays do not need a role designated);
• information or documents, forms, or printed reports used in the activity;
• technology, equipment, information systems, or methods related to the activities. The methods could include indicating a completely automated activity, a completely manual activity, or a combination of a human activity augmented by automation; and
• knowledge that is important for training purposes, such as a link to the detailed procedure or work instructions, notes on the process, metrics to be measured, and so on. Knowledge can relate to any of the rows above the knowledge lane.
Note that there are three rows of activities and three corresponding rows of roles, information, and technology.
2. Start and end points in the process flow are designated by an oval.
3. Activities: a series of activities makes up a process.
4. Decisions must have at least two arrows exiting, and lines should be labeled with the appropriate condition.
5. Connectors are circle symbols that connect the sequence or flow between shapes; they can denote flow on the existing page of the process map or connect to another page. For an on-page connector, the number in the circle starts with the page number, and the numbers after the period indicate the sequential order of the ovals used on that page. For an off-page connector, the first number denotes the page number where the flow continues, and the number after the dot is the sequential order of the off-page connectors. Connectors are used to minimize the number of arrows crossing the page and causing confusion for the reader.
6. Delay (D symbol) is used to represent any delay in the process where work is stalled or dependent on a given time frame. Indicate the amount of time and unit of time when using this symbol, if known (e.g., 10 minutes or 4 hours).
Some tips for creating process maps:
• Use roles on the process map; avoid names
• Flow is from top to bottom, left to right
• Connect all symbols, either to the first start oval, the last end oval, or a sequential symbol
• Never (or rarely) cross arrows
• Share process maps with other process analysts developing connected subsystems and processes
• Control documents: include the process name, date of origination or revision, and version number
• Use the wall and flip chart paper or whiteboards to map out the process with the people involved in the process
• Validate steps/activities/data outside of the meeting
• Do not be afraid to start over
• Do not get stuck on the detail; get stuck on the flow and value
• Walk through the actual existing process, if there is one
• Numbering conventions are recommended
• Activities are referred to by row number (1, 2, and 3) and then column number (1 through 9)
The PAM is used as the foundational tool to document the current process, perform a lean analysis, identify improvement opportunities, and document the future process after improvements are implemented. Process maps are also great tools for training process owners on the process. The process map can be used to generate customer requirements and information systems requirements, and to develop an information model such as a class diagram.
The PAM template allows the creation of a cover page that summarizes the high-level process steps from the SIPOC or value stream map to provide the connection to the system view of where this process fits within the entire system. A system here refers to the set of processes that interconnect to achieve a work system. The cover page also provides an inventory of the process architecture roles or process stakeholders, information, and technology.
A brief description of the symbols shown in Figure 7.5 and used in the PAM follows.
• The rectangle is used to describe an activity or task of the process.
• A diamond represents a decision made in the process. The questions in the decision should be designed to ask a yes or no question. If more paths are desired, additional yes/no questions can be asked.
• The start/end oval is used to identify where the process starts and ends.
• The circle connector is used to connect symbols where the arrows may cross the page and make the process map difficult to read. It can be used to connect to other symbols on the page and to other pages.
Figure 7.5 Process architecture map icons or symbols. Source: Sandra L. Furterer.
• The D is used to denote a wait or delay in the process.
• The external process symbol can be used to connect the process to a process map that is external and separate from, but related to, this process map.
• The kaizen burst identifies an idea generated while process mapping.
• The SIPOC activity is used on the cover page for a SIPOC process step.
• The DRAFT symbol can be used to identify that the process map is a draft and must be reviewed further before it is finalized for use.
• The Knowledge symbol is used to add additional knowledge text to the knowledge swim lane.
• The Info icon is placed in the square on the knowledge swim lane to identify that text providing additional knowledge will be placed on the swim lane.
• Additional symbols are used within the knowledge lane to designate other important actions or information:
  – auto: an automated process
  – review: a review step
  – quality control: an inspection or quality control step
  – control: a control mechanism embedded in a process activity
  – risk: identifying an activity that has inherent risk, and where a control may be incorporated
• The value-added, limited-value-added, and non-value-added activity icons (green check mark, yellow triangle, and red X within a circle) identify the value of the activity to the customer of the process.
• As the process analyst is working with the customers to develop the process maps, if ideas for improvement arise, they can be noted in the knowledge swim lane and identified with an idea lightbulb symbol.
• If a task is fully manual, identify it as such; where there is further opportunity to automate the task, it can be marked in the knowledge swim lane with a manual activity symbol.
• If there are metrics that are collected at certain points in the process, the metrics icon can be shown in the knowledge swim lane, and the metric titles can be entered beneath the symbol.

An example of a completed process architecture map is shown in Figure 7.6 for the arrival and triage process in an emergency department.
Figure 7.6 shows the completed process architecture map for the Emergency Services: Arrival and Triage Process, with swim lanes for the process map, activities (numbered 1 through 9, from entering the ER and entering patient information into the kiosk through triage and moving the patient to the ER room or minor care), roles (patient, nurse), information (patient arrival information, medical record), technology (kiosk), and knowledge notes; decisions on acuity and room readiness route the flow, and connectors link to the ER room and minor care processes.

Figure 7.6 Sample process architecture map of emergency services—arrival and triage process. Source: Sandra L. Furterer.
CHECK SHEETS

The check sheet is a quality tool used in processes where there is a significant human element. A check sheet (sometimes called a tally sheet) is a structured form, custom designed by the user, that enables data to be quickly and easily recorded for analysis. Generally, check sheets are used to gather data on the frequency or patterns of particular events, problems, defects, and so on. Check sheets must be designed to gather the specific information needed; for instance, depending on the problem-solving effort, organizing the data by machine versus shift versus supplier can reveal different trends.

Figure 7.7 shows defects observed by a quality inspector or technician who inspected a batch of finished products. The check sheet in this case shows the following:

• The fact (not opinion) that incomplete lenses are not a big problem
• Groupings to be investigated:
  a. Why are most cracked covers black?
  b. Why are all the fogged lenses green?
• Rows and columns can be easily totaled to obtain useful subtotals

Figure 7.8, a time-related check sheet, shows the variety of trends that can be investigated. Two observations worth noting in this hotel housekeeping check sheet are the following:

• The "TV Guide missing" category shows a trend worth investigating.
• Something amiss appears to have happened on July 8 and should be investigated.

In some cases, recording the location of defects provides a more visually oriented data collection device. This type of check sheet is called a measles chart or a defect concentration diagram. For example, when inspecting a vehicle's paint finish, a diagram of the vehicle's body is useful for marking where any defects are located. Clusters will then indicate the biggest problem areas.
Figure 7.7 is a check sheet for Line 12 (Dec. 4, 2013) tallying four defect types (cracked cover, fogged, pitted, incomplete) against lens color (red, green, black); the black row dominates the cracked-cover tallies, and the green row accounts for all of the fogged tallies.
Figure 7.7 Defects table example. Source: D. W. Benbow, A. K. Elshennawy, and H. F. Walker, The Certified Quality Technician Handbook (Milwaukee, WI: ASQ Quality Press, 2003), 12.
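Since a check sheet is just a structured tally, its row and column subtotals are easy to mimic in software. The following is a minimal Python sketch using made-up inspection records, not the actual Figure 7.7 data:

    from collections import Counter

    # Hypothetical inspection records: (defect_type, lens_color) pairs.
    # On the shop floor, the paper check sheet would be filled in by hand.
    observations = [
        ("cracked cover", "black"), ("cracked cover", "black"),
        ("fogged", "green"), ("fogged", "green"),
        ("pitted", "red"), ("incomplete", "black"),
    ]

    tally = Counter(observations)

    # Row totals (by defect type) and column totals (by lens color):
    # the "useful subtotals" mentioned in the text.
    by_defect = Counter(d for d, _ in observations)
    by_color = Counter(c for _, c in observations)

    for (defect, color), n in sorted(tally.items()):
        print(f"{defect:15s} {color:6s} {'|' * n}")
    print("Totals by defect:", dict(by_defect))
    print("Totals by color:", dict(by_color))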
Figure 7.8 is a time-related check sheet recording hotel housekeeping deficiencies noted July 5–11, 2013. Daily tallies are kept for five categories: towels incorrectly stacked, soap or shampoo missing, TV Guide missing, mint missing from pillow, and toilet paper not folded into a V.
Figure 7.8 Time-related check sheet example. Source: D. W. Benbow, A. K. Elshennawy, and H. F. Walker, The Certified Quality Technician Handbook (Milwaukee, WI: ASQ Quality Press, 2003), 12.
PARETO CHARTS

Pareto charts are based on the Pareto principle, which suggests that most effects are the result of relatively few causes. Vilfredo Pareto, a nineteenth-century Italian economist, noted that 80% of the wealth in Italy was held by 20% of the population. Joseph Juran named this observation the Pareto principle in 1950.

Pareto principle: The suggestion that most effects are the result of relatively few causes; that is, 80% of effects come from 20% of the possible causes (e.g., machines, raw materials, operators).

Pareto chart: A basic tool used to graphically rank causes from most significant to least significant. It utilizes a vertical bar graph in which bar height reflects the relative importance of causes.

Because the Pareto chart displays the category bars in descending order of length, from longest to shortest, it gives a clear visual representation of their relative contribution to the total effect. Using the data from Figure 7.8, the Pareto diagram in Figure 7.9 shows that the problem-solving team working on reducing deficiencies should put their main effort into preventing missing TV Guides. If the team were to focus instead on the towels being incorrectly stacked, it would only address about 7.6% of occurrences. The right side of Figure 7.9 includes another feature, the cumulative percentage.

Although "TV Guide missing" appears to be the biggest contributor to the overall problem in Figure 7.9, the most frequent cause is not always the most important. It is also important to do a similar analysis based on cost (or severity of impact on business goals or customer satisfaction), as not all problems have equal financial impact. In this case, one would assume "TV Guide missing" does have a high financial impact compared to the other causes. Sometimes when collecting data, a team may find a large number of causes with only a few occurrences of each. In those cases, some creative grouping may be needed to combine related causes into a single "big hitter."
Figure 7.9 plots the deficiency counts for the week of July 5 in descending order: TV Guide missing (27, or 40.9%), soap or shampoo missing (20, 30.3%), mint missing from pillow (8, 12.1%), toilet paper not folded into V (6, 9.1%), and towels incorrectly stacked (5, 7.6%). A cumulative-percentage line rises from 40.9% through 71.2%, 83.3%, and 92.4% to 100%.
Figure 7.9 Pareto diagram example.
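The descending sort and the cumulative-percentage line are simple to reproduce. A minimal Python sketch using the deficiency counts behind Figure 7.9:

    # Deficiency counts from the July 5-11 check sheet (Figures 7.8 and 7.9).
    counts = {
        "TV Guide missing": 27,
        "Soap or shampoo missing": 20,
        "Mint missing from pillow": 8,
        "Toilet paper not folded into V": 6,
        "Towels incorrectly stacked": 5,
    }

    total = sum(counts.values())
    cumulative = 0.0
    # Sort causes from most to least frequent, as a Pareto chart does.
    for cause, n in sorted(counts.items(), key=lambda kv: kv[1], reverse=True):
        pct = 100.0 * n / total
        cumulative += pct
        print(f"{cause:32s} {n:3d}  {pct:5.1f}%  cum {cumulative:5.1f}%")

Running this prints the same individual and cumulative percentages shown on the chart (40.9%, 71.2%, 83.3%, 92.4%, 100%).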
SCATTER DIAGRAM

A scatter diagram is a basic tool that graphically portrays the possible relationship between two variables or process characteristics: two sets of data are plotted on a graph. The scatter diagram can help analyze data when a problem-solving team is trying to determine the root cause of a problem and has several proposed causes. If the diagram shows a correlation between two variables, it does not necessarily mean a direct cause-and-effect relationship exists.

If it appears that the y-axis variable can be predicted from the value of the x-axis variable, then there is correlation. A regression line can be drawn through the points to better gauge the strength of the correlation: the closer the point pattern is to the regression line, the greater the correlation. When determining whether two variables are related, measurements for the independent variable (horizontal, or x-axis) are plotted against the dependent variable (vertical, or y-axis), and the resulting pattern of points is checked to see if a relationship is obvious.

If the data points fall along a single straight line (left side of Figure 7.10 and Figure 7.11), there is a high degree of correlation. In Figure 7.10, the slope of the plot is upward, so it has a positive correlation. The right side of Figure 7.10 shows a lower degree of positive correlation, indicating the existence of other sources of variation in the process.

A numeric value called the correlation coefficient (r) can be calculated to further define the degree of correlation. If the data points fall exactly on a straight line with the right end tipped upward (left side of Figure 7.10), r = +1, a perfect positive correlation. If the data points fall exactly on a straight line that tips down on the right end (left side of Figure 7.11), r = −1, a perfect negative correlation. The value of r is always between −1 and +1 inclusive: −1 ≤ r ≤ +1.
Figure 7.10 contains two scatter plots of y against x: positive correlation (an increase in y may depend on an increase in x) and possible positive correlation (if x is increased, y may increase somewhat).
Figure 7.10 Scatter diagrams depicting high degree and low degree of positive correlation. Source: Adapted from D. W. Benbow, R. W. Berger, A. K. Elshennawy, and H. F. Walker, The Certified Quality Engineer Handbook (Milwaukee, WI: ASQ Quality Press, 2002), 281.
Figure 7.11 contains two scatter plots of y against x: negative correlation (a decrease in y may depend on an increase in x) and possible negative correlation (as x is increased, y may decrease somewhat).
Figure 7.11 Scatter diagrams depicting high degree and low degree of negative correlation. Source: Adapted from D. W. Benbow, R. W. Berger, A. K. Elshennawy, and H. F. Walker, The Certified Quality Engineer Handbook (Milwaukee, WI: ASQ Quality Press, 2002), 281.
If the correlation coefficient (r) has a value near zero, it indicates no correlation (see Figure 7.12). There are three points to consider:

• Some relationships are nonlinear.
• The scales of graphs may be arbitrarily expanded or compressed on either axis; therefore, a visual reading of the slope will not indicate the strength of the correlation.
• A direct and strong correlation does not establish a cause-and-effect relationship.
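The correlation coefficient can be computed directly from paired data. A minimal, self-contained Python sketch; the oven-temperature and seal-strength values are hypothetical:

    import math

    def pearson_r(x, y):
        """Correlation coefficient r for paired samples; always -1 <= r <= +1."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sxx = sum((a - mx) ** 2 for a in x)
        syy = sum((b - my) ** 2 for b in y)
        return sxy / math.sqrt(sxx * syy)

    # Hypothetical paired measurements: oven temperature (x) vs. seal strength (y).
    x = [150, 160, 170, 180, 190, 200]
    y = [4.1, 4.4, 4.8, 5.1, 5.0, 5.6]
    print(f"r = {pearson_r(x, y):.3f}")  # near +1: strong positive correlation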
Run Charts

A run chart tracks a quality characteristic over time, graphing the data in the time order in which they were collected. Other terms for a run chart are time series chart and trend chart. A run chart helps to identify trends, fluctuations over time, and unusual events.
Figure 7.12 shows a scatter plot of y against x with no correlation: there is no demonstrated connection between x and y.
Figure 7.12 Scatter diagram depicting no correlation. Source: Adapted from D. W. Benbow, R. W. Berger, A. K. Elshennawy, and H. F. Walker, The Certified Quality Engineer Handbook (Milwaukee, WI: ASQ Quality Press, 2002), 281.
The basic steps involved in constructing a run chart are as follows:4

1. Construct a horizontal (x) axis line and a vertical (y) axis line. The horizontal axis represents time, and the vertical axis represents the values of measurement or the frequency at which an event occurs.
2. Collect data for an appropriate number of time periods, in accordance with your data collection strategy.
3. Plot a point for each time a measurement is taken.
4. Connect the points with a line.

Keeping the process in mind, interpret the chart. Possible signals that the process has significantly changed are the following:

• Six points in a row that steadily increase or decrease
• Nine points in a row on the same side of the average
• Other patterns, such as significant shifts in level, cyclical patterns, and bunching of data points

An example of a run chart that graphs the triage time of an emergency department during the day is shown in Figure 7.13.
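The two run-rule signals above can be checked mechanically. A minimal Python sketch over a hypothetical series of triage times:

    # Hypothetical triage times (minutes) in time order.
    times = [8, 9, 7, 10, 11, 12, 13, 14, 15, 9, 8, 10]
    avg = sum(times) / len(times)

    def runs_of_increase_or_decrease(data, run_len=6):
        """Flag any run_len points in a row steadily increasing or decreasing."""
        for i in range(len(data) - run_len + 1):
            window = data[i:i + run_len]
            diffs = [b - a for a, b in zip(window, window[1:])]
            if all(d > 0 for d in diffs) or all(d < 0 for d in diffs):
                yield i, window

    def runs_on_one_side(data, center, run_len=9):
        """Flag any run_len consecutive points on the same side of the average."""
        for i in range(len(data) - run_len + 1):
            window = data[i:i + run_len]
            if all(v > center for v in window) or all(v < center for v in window):
                yield i, window

    for i, w in runs_of_increase_or_decrease(times):
        print(f"Trend starting at point {i}: {w}")
    for i, w in runs_on_one_side(times, avg):
        print(f"Shift starting at point {i}: {w}")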
Control Charts

Control charts are run charts with control limits that help to identify whether a process is stable and in control. The most widely used control charts were introduced by Walter Shewhart in the 1920s and are called Shewhart charts. They include variable control charts, where the variable is a quality characteristic of a continuous variable: x̄ and R charts, x̄ and s charts, and individual and moving range charts. The x̄ chart graphs the average of a subgroup sample from the process over time, that is, the central tendency of the sample, while the R chart graphs the range (the variability) of the subgroup sample over time. The control limits typically are set at plus or minus three standard deviations from the mean
Figure 7.13 plots triage time in minutes (0 to 20) against time of day for a series of emergency department patients.
Figure 7.13 Example of a run chart for triage time in an emergency department. Source: Sandra L. Furterer.
for the x̄ chart, and at three standard deviations from the average range for the R chart or from the average standard deviation for the s chart. When only random variation exists, with no identifiable nonrandom trends and no points falling outside the control limits, the process is considered to be in control. There are also attribute control charts: p, np, c, and u charts graph the proportion of defective units, the number of defective units, the number of defects, and the number of defects per unit, respectively. Control charts are discussed further and in more detail in Chapter 14.
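As a concrete illustration of the limit calculations, the sketch below computes x̄ and R chart limits from hypothetical subgroups of size five, using the standard tabulated control chart constants for that subgroup size (A2 = 0.577, D3 = 0, D4 = 2.114); the charts and constant tables are covered in Chapter 14:

    # Shewhart x-bar and R chart limits from subgroup data (subgroup size n = 5).
    A2, D3, D4 = 0.577, 0.0, 2.114  # standard constants for n = 5

    # Hypothetical subgroups of five measurements each.
    subgroups = [
        [9.8, 10.1, 10.0, 9.9, 10.2],
        [10.0, 10.3, 9.7, 10.1, 9.9],
        [9.9, 10.0, 10.2, 10.1, 9.8],
    ]

    xbars = [sum(s) / len(s) for s in subgroups]
    ranges = [max(s) - min(s) for s in subgroups]
    xbarbar = sum(xbars) / len(xbars)   # grand average (center line of x-bar chart)
    rbar = sum(ranges) / len(ranges)    # average range (center line of R chart)

    print(f"x-bar chart: UCL={xbarbar + A2 * rbar:.3f}, "
          f"CL={xbarbar:.3f}, LCL={xbarbar - A2 * rbar:.3f}")
    print(f"R chart:     UCL={D4 * rbar:.3f}, CL={rbar:.3f}, LCL={D3 * rbar:.3f}")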
HISTOGRAMS

The histogram is the tool most commonly used to provide a graphical picture of the frequency distribution of large amounts of data; it shows how often each different value occurs in a set of data. The data are shown as a series of rectangles of equal width and varying height. With a minimum of 50 data points, histograms can be used in these situations:

• When the data are numerical
• When you want to see the shape of the data's distribution; this is important when determining whether the output of a process has a normal distribution
• When analyzing whether a process can meet a customer's requirements
• When analyzing what the output from a supplier's process looks like
• When determining whether a process change has occurred from one time period to another
• When determining whether the outputs of two or more processes are different
• When you need to communicate the distribution of data quickly and easily to others

Utilizing a minimum of 50 consecutive data points, the worksheet in Figure 7.14 can help you construct a histogram. The worksheet (with header fields for Process, Calculated by, Data dates, and Date) walks through three steps:

Step 1. Number of bars. Find how many bars (B) there should be for the amount of data you have; the worksheet maps the number of data points (50 to 200) to a suggested number of bars (7 to 14). This is a ballpark estimate; at the end, you may have one more or less.

Step 2. Width of bars. The total range of the data is R = largest value − smallest value, and the width of each bar is W = R / B. Adjust W for convenience; W must not have more decimal places than the data.

Step 3. Find the edges of the bars. Choose a convenient number, L1, to be the lower edge of the first bar; this number can be lower than any of the data values. The lower edge of the second bar, L2, is W more than L1. Keep adding W to find the lower edge of each bar (L1, L2, . . . , L14 on the worksheet).
Figure 7.14 Histogram worksheet. Source: Adapted from http://www.asq.org/learn-about-quality/data-collection-analysis-tools/overview /histogram-worksheet.html.
The histogram worksheet helps determine the number of bars, the range of numbers that go into each bar, and the labels for the bar edges. After calculating W in step 2 of the worksheet, judgment is used to adjust it to a convenient number; for example, 0.9 can be rounded to an even 1.0. The value for W must not have more decimal places than the numbers being graphed. As shown in Figure 7.15, for 100 data points and a range of scores from 300 to 800, the histogram will have 10 bars, each with a width of 50, the first bar starting at 300 and the last bar starting at 750. The resulting histogram is shown in Figure 7.16.

The worksheet example uses 100 students' math SAT scores (the data are fabricated) ranging from 300 to 800:

    SAT score    Frequency
    300–349      7
    350–399      13
    400–449      16
    450–499      19
    500–549      20
    550–599      13
    600–649      8
    650–699      2
    700–749      1
    750–800      1

Step 1: For 100 data points, the number of bars B = 10.
Step 2: The range R = 800 − 300 = 500, so the width of each bar W = R/B = 500/10 = 50.
Step 3: L1 = 300, L2 = 350, . . . , L10 = 750.
Figure 7.15 Tally and frequency distribution.

Figure 7.16 Histogram of student SAT scores, plotting frequency (0 to 25) against the ten score intervals from 300–349 through 750–800.
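The worksheet arithmetic translates directly into code. A minimal Python sketch; the generated scores below are stand-ins, not the SAT data above:

    def histogram_bins(data, num_bars):
        """Follow the worksheet: width W = R / B, then lower edges L1, L2, ..."""
        r = max(data) - min(data)          # Step 2: total range
        w = r / num_bars                   # width of each bar (adjust for convenience)
        l1 = min(data)                     # Step 3: lower edge of the first bar
        edges = [l1 + i * w for i in range(num_bars + 1)]
        counts = [0] * num_bars
        for v in data:
            i = min(int((v - l1) / w), num_bars - 1)  # last bar includes the maximum
            counts[i] += 1
        return edges, counts

    # Hypothetical scores; for 100 data points the worksheet suggests B = 10 bars.
    scores = [300 + (i * 5) % 500 for i in range(100)]
    edges, counts = histogram_bins(scores, 10)
    for lo, hi, n in zip(edges, edges[1:], counts):
        print(f"{lo:5.0f}-{hi:5.0f} {'*' * n}")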
Chapter 8 Process Improvement Techniques
LEAN

Identify and apply lean concepts and tools, including set-up reduction (SUR), pull (including just-in-time [JIT] and kanban), 5S, continuous flow manufacturing (CFM), value-added analysis, value stream mapping, theory of constraints (TOC), poka-yoke, and total productive/predictive maintenance (TPM) to reduce waste in areas of cost, inventory, labor, and distance. (Apply)

Body of Knowledge II.C.1
The lean approach, or lean thinking, is “A focus on reducing cycle time and waste using a number of different techniques and tools, for example, value stream mapping, and identifying and eliminating monuments and non-value-added steps. Lean and agile are often used interchangeably.”1 Lean practices use both incremental and breakthrough improvement approaches to eliminate waste and variation, making organizations more competitive, agile, and responsive to markets. As noted in George Alukal’s article “Create a Lean, Mean Machine,” “Waste of resources has direct impact on cost, quality, and delivery. . . . The elimination of waste results in higher customer satisfaction, profitability, throughput, and efficiency.”2
Setup Reduction (SUR)

The goal of setup reduction is to provide a rapid and efficient way of converting a manufacturing process from running the current product to running the next product. This method is also referred to as quick changeover, single-minute exchange of die (SMED), or rapid exchange of tooling and dies (RETAD). The time lost in production from machine setups can be greatly reduced by applying the SUR method. The concept is that all changeovers (and startups) can and should take less than 10 minutes.
There are seven basic steps to reducing changeover time using the SMED system:

1. Observe the current methodology.
2. Separate the internal and external activities. Internal activities are those that can only be performed while the process is stopped, whereas external activities can be done while the last batch is being produced or once the next batch has started, for example, gathering the required tools for the job before the machine stops.
3. Convert (where possible) internal activities into external ones (preheating of tools is a good example of this).
4. Streamline the remaining internal activities by simplifying them. Focus on fixings: Shigeo Shingo rightly observed that it is only the last turn of a bolt that tightens it; the rest is just movement.
5. Streamline the external activities so that they are of a similar scale to the internal ones.
6. Document the new procedure and actions that are yet to be completed.
7. Do it all again: for each iteration of the process, a 45% improvement in setup times should be expected, so it may take several iterations to cross the 10-minute line, as the sketch below illustrates.3

The SMED concept is credited to Shigeo Shingo, one of the main contributors to the consolidation of the Toyota production system, along with Taiichi Ohno.4
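A rough feel for step 7: assuming a hypothetical 90-minute starting changeover and the expected 45% reduction per pass, a few lines of Python show how many iterations it might take to cross the 10-minute line:

    # Step 7 of SMED expects roughly a 45% setup-time improvement per iteration.
    setup_minutes = 90.0   # hypothetical starting changeover time
    iteration = 0
    while setup_minutes >= 10.0:
        setup_minutes *= (1 - 0.45)   # 45% improvement each pass
        iteration += 1
        print(f"After iteration {iteration}: {setup_minutes:.1f} minutes")

Under these assumptions the changeover drops to about 8 minutes after four iterations.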
Pull (Including Just-in-Time and Kanban)

The pull system (Figure 8.1) ensures that the product is provided to each customer on time in the amount needed. It pulls the product demanded or ordered by the customer starting at the end of the production process and working backward. Just-in-time and kanban techniques are used to achieve the pull system.
Just-in-Time (JIT)

This lean methodology is an inventory strategy that provides for the delivery of material or product at the exact time and place where the material or product will be used. When this material requirements planning system is implemented, there is a reduction of in-process inventory and its related costs (such as warehouse space), which in turn can dramatically increase the return on investment, quality, and efficiency of an organization. By implementing JIT, buffer stock is eliminated or reduced, and new stock is ordered when stock reaches the reorder level (facilitated by the use of kanban cards/signals).
Kanban

This tool takes its name from the Japanese word meaning sign or card. Although kanban is related to just-in-time (JIT), it is in fact the means through which JIT is managed to form a "pull" system. This is done with kanban cards. Kanban cards are visible signs that tell what is needed, when it is needed, and how much is needed. A kanban card signals when to refill goods removed from a stocking location or when a subassembly has been used. After a certain quantity is used, the kanban card is returned upstream to the supplying process, thereby signaling replenishment. You need kanban cards to have a pull system. Visible cards let you control inventory so you have the correct amount of product (i.e., no more, no less) ready to ship when your customer needs product (not before or after). There are many types of kanban systems and sundry complicated pull systems, but the concept is straightforward. Think of kanban as a tool for managers to control the flow of product in a manufacturing operation. It is only as good as those who design it, so you need to be sure the people who design your kanban system use and understand it.5
5S

This tool is used with other lean concepts as the first step in clearing out things that are not necessary, to create a clean and orderly workspace. Maintaining cleanliness and efficiency allows waste and errors to be exposed. Table 8.1 gives a brief description of each 5S term along with the Japanese word from which it originated.
Figure 8.1 depicts a pull system: a customer order pulls product from the store at the end of the line, and kanban signals flow back upstream from the assembly line through staging to stamping, so each step produces only what the next step has consumed.
Figure 8.1 Pull system. Source: Adapted from T. J. Shuker, “The Leap to Lean,” in 54th Annual Quality Congress Proceedings (Indianapolis, IN: Lean Concepts, 2000), 109.
Table 8.1 5S terms and descriptions.

Sort (Japanese: Seiri). Separate needed tools, parts, and instructions from unneeded materials, removing the latter.

Set in order (Japanese: Seiton). Arrange and identify parts and tools for ease of use. Where will they best meet their functional purpose?

Shine (Japanese: Seiso). Clean up the workplace, eliminating the root causes of waste, dirt, and damage.

Standardize (Japanese: Seiketsu). Develop and maintain agreed-upon conditions to keep everything clean and organized.

Sustain (Japanese: Shitsuke). Apply discipline in following procedures.
Note: The English terms and descriptions are adapted from, not literal translations of, the original Japanese 5S terms.
By implementing 5S, the workforce is empowered to control their work environment and consequently promote awareness of the 5S concept and principles of improvement.
Continuous Flow Manufacturing

This lean methodology is also known as one-piece flow, meaning product moves through the manufacturing process one unit at a time. This is in contrast to batch processing, which produces multiples of the same item and sends the product through the process batch by batch. As each processing step is completed, the unit is pulled downstream by the next step in the process (see Figure 8.2). The worker does not have to deal with waiting (as with batch processing), transporting products, or storing inventory.
Figure 8.2 shows continuous-flow processing: materials move one piece at a time through processing steps A, B, and C to become finished products.
Figure 8.2 Continuous flow. Source: Adapted from T. J. Shuker, “The Leap to Lean,” in 54th Annual Quality Congress Proceedings (Indianapolis, IN: Lean Concepts, 2000): 108.
For continuous flow to make sense, you need to understand takt time, standardized work, and the pull system. Henry Ford used continuous flow when he "pulled" cars along an assembly line as workers performed one operation or did one job at a time. Since Ford's Model T assembly line, the idea of continuous flow has embraced increasingly sophisticated and novel manufacturing concepts. Today, products with dozens of colors and hundreds of options must flow continuously, just-in-time, without waste or stagnation (muda).

Think of takt time as a metronome that fixes the tempo at which work is done. Takt time measures the pace of production, like an orchestra conductor's baton, making sure everyone is on the beat and the process is smooth and continuous. If the pace is disrupted, continuous flow is lost, and corrections must be made.6 Takt time is the available production time divided by the rate of customer demand. Operating to takt time sets the ideal production pace to customer demand:7

Takt time = Daily available work time / Parts required per day = Time / Volume

Standardized work is documented and agreed-upon work instructions that express the best-known methods and work sequence for each manufacturing or assembly process.8
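A quick numeric sketch of the takt time formula, with hypothetical shift and demand figures:

    # Takt time = available work time / units demanded in that time.
    shift_minutes = 8 * 60      # hypothetical one-shift day
    breaks_minutes = 50         # hypothetical planned downtime (breaks, meetings)
    available = shift_minutes - breaks_minutes
    daily_demand = 215          # hypothetical customer demand, parts per day

    takt = available / daily_demand
    print(f"Takt time = {available} min / {daily_demand} parts "
          f"= {takt:.1f} minutes per part")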
Value-Added Analysis9

The value analysis (VA) is a technique to distinguish the activities in the process as being value added, of limited value, or non-value added. The value analysis can be performed for manufacturing or service processes that are part of the system. The definitions follow:

• Value-added activities: Activities that the customer would pay for, that add value for the customer
• Limited-value-added activities: Activities that the customer would not want to pay for but that are required because of regulatory, financial reporting, or documentation purposes, or because of the way the process is currently designed
• Non-value-added activities: Activities that the customer would not want to pay for, or that do not add value for the customer

The percentage of value-added activities is calculated as:

100% × (Number of value-added activities / Total number of activities [value added + limited value added + non-value added])

The percentage of value-added time is calculated as:

100% × (Total time spent in value-added activities / Total time for the process [value-added time + limited-value-added time + non-value-added time])

Typically, the percentage of value-added time is only about 1% to 5% (non-value-added time plus limited-value-added time is 95% to 99%).

Steps for the value analysis:

1. Identify the activities (from the process maps).
2. Classify each activity as value added, limited value added, or non-value added.
3. Calculate either the percentage of value-added activities or the percentage of value-added time as a baseline.
4. Attempt to eliminate, combine, or change each activity that is non-value added or limited value added.

For the process architecture map (PAM) used to develop process maps, discussed in Chapter 7, the symbols shown in Figure 8.3 are used while entering the activity information to identify the type of activity as value added, limited value added, non-value added, or not yet defined.

Consider the following focus areas for optimizing the process:10

• Labor-intensive processes: can they be reduced, eliminated, combined?
• Delays: can they be eliminated?
• Review or approval cycles: are all reviews and approvals necessary and value added?
• Decisions: are they necessary?
Figure 8.3 shows the four activity symbols used on the PAM for value analysis: a task marked as a value-added activity, a limited-value-added activity, a non-value-added activity, or an undefined activity.
Figure 8.3 Symbols on process architecture map (PAM) for value analysis. Source: Sandra L. Furterer.
• Rework: why is it necessary?
• Paperwork: is all of the documentation necessary? Try to limit the number of ways you handle and store it.
• Duplications: across the same or multiple organizations
• Omissions: what is slipping through the cracks, causing customer dissatisfaction?
• Activities that require accessing multiple information systems
• Travel: look at the layout requiring the travel
• Storing and retrieving information: is it necessary, and do we need that many copies?
• Inspections: are they necessary?
• Is the sequence of activities, or flow, logical?
• Are standardization, training, and documentation lacking?
• Look at inputs to processes: are they all necessary?
• Look at the data and information used: are they all necessary?
• What are the outputs of processes? Are they all necessary?
• Can you combine tasks?
• Is the responsible person at too high or too low a level?
• Optimize the processes by incorporating the improvements identified in the value analysis

Some other optimization tools are beyond the scope of this text:

• Simulation: simulate changes in the process flow
• Design of experiments: to identify the factors that contribute the most variability to the processes
• Linear regression modeling
• Lean-Six Sigma
• Statistical analysis and process capability analysis
• Statistical process control
• Work sampling
• Job design
• Operations research methods

An example of a lean analysis combining the value analysis with a waste analysis for the emergency department arrival and triage process (from Chapter 7) is shown in Figure 8.4.
Figure 8.4 is a lean analysis worksheet for the Emergency Services: Arrival and Triage Process. Each activity from the process map (enter ER, help patient with kiosk, enter information into kiosk, transport patient via ambulance into ER, enter patient information into kiosk, wait for triage room, call patient to triage room, triage patient, make ER room available, move patient to ER room, move patient to minor care, wait for ER room) has columns for the value analysis, for each waste (transportation, overproduction, motion, defects, delay, inventory, processing, people), and for root causes.
Figure 8.4 Sample lean analysis with value analysis and waste analysis for the emergency department arrival and triage process. Source: Sandra L. Furterer.
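Once each activity is classified, the two percentages defined above are simple ratios. A minimal Python sketch with hypothetical classifications and times, not the actual Figure 8.4 results:

    # Classify each mapped activity and apply the two formulas from the text.
    # VA = value added, LVA = limited value added, NVA = non-value added.
    activities = [
        ("Triage patient",               "VA",  12.0),   # minutes (hypothetical)
        ("Enter information into kiosk", "LVA",  3.0),
        ("Wait for triage room",         "NVA", 25.0),
        ("Move patient to ER room",      "LVA",  4.0),
        ("Wait for ER room",             "NVA", 30.0),
    ]

    total_n = len(activities)
    va_n = sum(1 for _, kind, _ in activities if kind == "VA")
    total_t = sum(t for _, _, t in activities)
    va_t = sum(t for _, kind, t in activities if kind == "VA")

    print(f"% value-added activities = {100.0 * va_n / total_n:.1f}%")
    print(f"% value-added time       = {100.0 * va_t / total_t:.1f}%")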
Value Stream Mapping

The value stream comprises the primary actions required to bring a product from concept into the hands of the end user.11 Value stream mapping is charting the sequence of movements of information, materials, and production activities in the value stream (all activities involved in designing, ordering, producing, and delivering products and services to the organization's customers). One advantage is that a "before action is taken" value stream map depicts the current state of the organization and enables identification of how value is created and where waste occurs. In addition, employees see the whole value stream rather than just the one part in which they are involved, which improves understanding and communication and facilitates waste elimination.

A value stream map (VSM) is used to identify areas for improvement. At the macro level, a VSM identifies waste along the value stream and helps strengthen supplier and customer partnerships and alliances. At the micro level, a VSM identifies waste (non-value-added activities) and identifies opportunities that can be addressed with a kaizen blitz. Figures 8.5 and 8.6 are value stream map examples (macro level and micro level, respectively).12
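The timeline at the bottom of a plant-level VSM sums the inventory queues into production lead time and the processing times into value-added time. A minimal Python sketch using the numbers from Figure 8.6:

    # Plant-level VSM arithmetic (numbers from Figure 8.6):
    # inventory queues between steps, in days, and processing times, in minutes.
    queue_days = [4, 3, 5, 4, 1]
    process_minutes = [30, 57, 38, 40]

    lead_time_days = sum(queue_days)
    value_added_hours = sum(process_minutes) / 60

    print(f"Production lead time = {lead_time_days} days")        # 17 days
    print(f"Value-added time     = {value_added_hours:.2f} hours")  # 2.75 hours

The striking gap between the two totals, 17 days of lead time against 2.75 hours of value-added work, is exactly the kind of waste a VSM is meant to expose.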
Theory of Constraints (TOC) The theory of constraints was developed by Eliyahu Goldratt, initially in his book, The Goal13 and then in his book, What Is This Thing Called Theory of Constraints and How Should It Be Implemented?14 The Goal is written as a novel and describes the
Figure 8.5 shows a macro-level value stream map (partial): sheep are raised and sheared in Australia; wool is shipped by cargo ship and train to Alabama and converted to yarn; the yarn is trucked to a warehouse in Texas and then to New York City, where sweaters are woven and shipped to the customer. Information flows (identify prospects, make sale, order for factory, confirmation, schedule production) connect the distributor, retail store, and consumer.
Figure 8.5 Value stream map example—macro level (partial). Source: R. T. Westcott, The Certified Manager of Quality/Organizational Excellence Handbook, 3rd ed. (Milwaukee, WI: ASQ Quality Press, 2006), 391.
Figure 8.6 shows a plant-level value stream map (partial): yarn flows from the supplier's warehouse through yarn wash, dyeing, weaving, and shipping to distributors, with production control issuing yarn release and shipping release notices against customer orders and confirmations. Inventory queues of 4, 3, 5, 4, and 1 days sit between the steps, and the processing times are 30, 57, 38, and 40 minutes, giving a production lead time of 17 days against a value-added time of 2.75 hours.

Figure 8.6 Value stream map example—plant level (partial).
journey of Alex, the plant manager, in applying the theory of constraints. The second book describes the concepts and how to apply them in more detail.

The five focusing steps of the TOC method are the following:

1. Identify the system's constraints: The system's constraints should be identified and prioritized according to their impact on the system's goal.
2. Decide how to exploit the system's constraints: Identify how to manage the system's resources to allow the constraints to be improved.
3. Subordinate everything else to the previous decision: Identify ways to reduce the limiting impact of the constraints, which many times are policies, such as marketing, purchasing, and production policies.
4. Elevate the system's constraints: Elevate the constraint to continually improve it.
5. If in the previous steps a constraint has been broken, go back to step one, but do not allow inertia to cause a system constraint.

TOC advocates using cause and effect to understand the causes of constraints and reduce or eliminate them. To implement TOC, Goldratt recommends the following method:

1. How to become a Jonah: In The Goal, Jonah was a catalyst for change, using the Socratic method to lead Alex, the plant manager, to find his own problems and be guided to improvement.
2. The devastating impact of the organization's psychology: Everyone in the organization must define a common way to make change happen.
3. Reaching the initial consensus and the initial step: All functions should buy in before any significant change occurs.
4. How to reach the top: Work sideways through the organization to convince others of the value of TOC, then work up the organization through each successive layer of management.
5. What about existing and new projects? Integrate the other philosophies, such as JIT, TQM, and SPC (statistical process control), by understanding and aligning the goals of each management philosophy with TOC.

The TOC states that there are three ways to increase money making: (1) increase throughput, (2) decrease inventory, and (3) decrease operating expense. These three avenues must be understood together within the context of the different management philosophies. For additional information on TOC, see the references in the endnotes.
Poka-Yoke

This tool also takes its name from a Japanese term, one that means to mistake-proof a process by building safeguards into the system that avoid or immediately find errors. It comes from poka, which means "error," and yokeru, meaning "to avoid."15 Poka-yoke devices can provide defect prevention for an entire process. These mechanisms can give warning, provide control, or even shut down an operation:

• When a defect is about to occur, a warning buzzer or light can be activated to alert workers. This is similar to the "lights on" warning buzzer that sounds in most automobiles when the driver leaves the vehicle with the engine off and the lights on. Hopefully, the driver will notice the warning and correct the mistake (turn off the lights) before a defect (dead battery) occurs. The driver may elect to ignore the warning, which points out that warnings, while good, are not always the most effective poka-yoke devices that can be deployed.

• Probably the most effective mistake-proofing is achieved through control of operations. By deploying poka-yoke devices throughout a process to prevent errors, defects will not occur. For example, lights that are automatically turned off by a timer when the engine is off are controlled in order to prevent a defect (dead battery). Good poka-yoke devices can even prevent intentional errors, a factor especially important in the area of computer and automatic teller machine security.

• Another function that can be performed by poka-yoke devices is shutdown. When an error is detected, an operation can be shut down, preventing defects from occurring. Missing screws in chassis assemblies are a common manufacturing headache. Workers can be told to make sure that all screws are installed, and banners and quality signs can be hung; however, missing screws will still be a problem. Poka-yoke can come to the rescue. One way to prevent the missing-screw problem is to install a counter on the power screwdriver used to put in the chassis screws. The counter registers the number of installations performed on each unit. When the required number of screws has been installed, a switch in the conveyor assembly closes, allowing the chassis to move to the next work position. The conveyor is "shut down" until all the screws have been put in.16

In all cases, whether the goal is detecting or preventing defects, defects are caused by errors. Mistakes can be classified into four categories:

• Information errors: misread or misinterpreted information
• Misalignment: parts misaligned or machine mistimed
• Omission or commission: material or parts added, or parts omitted
• Selection errors: wrong part, wrong orientation, or wrong operation

To implement poka-yoke, the workforce needs to be involved to determine where human errors can occur. After determining the source of each potential error, a poka-yoke method or mechanism can be devised for prevention. If there is no method to prevent the error, then a method to lessen the potential for its occurrence should be devised.
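The screw-counter shutdown described above reduces to a simple interlock. A hypothetical Python sketch; the six-screw requirement and all names are invented for illustration:

    # A control/shutdown poka-yoke sketch: the conveyor release switch closes
    # only when the power screwdriver's counter reaches the required screw count.
    REQUIRED_SCREWS = 6   # hypothetical chassis design

    class ScrewdriverCounter:
        def __init__(self):
            self.count = 0

        def screw_installed(self):
            self.count += 1

        def reset_for_next_unit(self):
            self.count = 0

    def conveyor_may_advance(counter: ScrewdriverCounter) -> bool:
        # Shutdown function: the chassis cannot move on with screws missing.
        return counter.count >= REQUIRED_SCREWS

    counter = ScrewdriverCounter()
    for _ in range(5):                        # operator installs only 5 screws
        counter.screw_installed()
    print("Conveyor released:", conveyor_may_advance(counter))  # False
    counter.screw_installed()                 # sixth screw installed
    print("Conveyor released:", conveyor_may_advance(counter))  # True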
Total Productive/Predictive Maintenance (TPM)

Total productive (predictive) maintenance (TPM) aims to reduce loss due to equipment-related wastes, including the following:

• Downtime (due to inefficient setup or work stoppages)
• Speed losses
• Defective product (needing rework or repair)
• Constant adjustments
• Machine breakdowns

TPM allows the worker not only to run the machine or equipment but to proactively maintain it. The machine operator can maintain the machinery with proper routine care, minor repairs, and routine maintenance. With the operators being responsible for and more involved in the machinery's operation, they understand the equipment better and are motivated to take better care of it, leading to less abuse and accidental damage. Cooperation between the operators and maintenance workers in performing maintenance inspections and preventive maintenance activities is critical to fully achieving implementation of TPM. "The aim of TPM is to attain the 'four zeros,' that is, zero defects, zero downtime, zero accidents, and zero waste."17

Alukal says that cutting out the following eight types of waste (called muda in Japanese) is the major objective of lean implementation:

1. Overproduction. Making more, earlier, or faster than required by the next process.
2. Inventory waste. Any supply in excess of a one-piece flow (make one batch and move one batch) through the manufacturing process, whether raw materials, work in process, or finished goods.
3. Defective product. Product requiring inspection, sorting, scrapping, downgrading, replacement, or repair.
4. Overprocessing. Extra effort that adds no value to the product (or service) from the customer's point of view.
5. Waiting. Idle time waiting for such things as manpower, materials, machinery, measurement, or information.
6. Underutilized people. Not fully using people's mental and creative skills and experience.*
7. Motion. Any movement of people, tooling, and equipment that does not add value to the product or service.
8. Transportation waste. Transporting parts or materials around the plant.18

*The traditional definition of the major objectives of lean implementation describes seven types of waste. Alukal adds the additional waste of underutilized people.
There are a number of lean tools and techniques, and the selection of which to use greatly depends on the root causes of variation and waste. Refer to Figure 8.4 for an example of the identification of the wastes from the process architecture map (PAM).
SIX SIGMA Identify key Six Sigma concepts including variation reduction, voice of the customer (VOC), belt levels (yellow, green, black, master black), and their roles and responsibilities. (Understand) Body of Knowledge II.C.2
Six Sigma is a methodology that provides businesses with the tools to improve the capability of their business processes. From Benbow and Kubiak's The Certified Six Sigma Black Belt Handbook, "Six Sigma is a fact-based, data-driven philosophy of quality improvement that values defect prevention over defect detection. It drives customer satisfaction and bottom-line results by reducing variation and waste, thereby promoting a competitive advantage. It applies anywhere variation and waste exist, and every employee should be involved."19

Motorola first developed Six Sigma in the mid-1980s as a set of tools and strategies for process improvement; when Jack Welch adopted it as General Electric's business strategy in 1995, the Six Sigma philosophy became well known. The most basic and simplest idea behind Six Sigma is that if you can measure how many "defects" you have in a process, you can systematically figure out how to eliminate them and get as close to "zero defects" as possible. The Six Sigma approach is a quality philosophy that embodies a collection of techniques and tools for reducing variation, and it advocates a program of improvement that focuses on strong leadership and an emphasis on bottom-line financial results. Although Six Sigma was originally designed for the manufacturing sector, it became a recognized quality strategy throughout many sectors of industry. The International Organization for Standardization (ISO) issued a standard in 2011 defining the Six Sigma methodology: ISO 13053-1:2011, Quantitative Methods in Process Improvement—Six Sigma, a two-part standard (Part 1: DMAIC Methodology, and Part 2: Tools and Techniques). Both parts pertain to all sectors and organizations.

The word sigma (σ) is actually a statistical term that measures how far a process deviates from perfection; it is the standard deviation of a given process. Dividing the number of defects actually occurring by the number of units processed and multiplying by one million gives defects per million:

Defects per million = (# defects / # units produced) × 10^6
The term defects per million opportunities (DPMO) is used to determine a sigma level, where:

D = number of defects
U = number of units sampled or produced during the time period
O = number of opportunities for a defect to occur (ways to have defects)

DPMO = (D × 10^6) / (U × O)

With the added statistical tolerance of a 1.5 sigma shift in the mean, a Six Sigma process is one that produces no more than 3.4 defective parts per million opportunities. See Table 8.2.

As an example, consider a payroll paycheck process with three opportunities for a defect to occur (O = 3):20 the timesheet process, data entry of payroll data, and the computer application. With 300 paychecks per pay period (U = 300) and an average of 3 defects per payroll batch (D = 3):

DPMO = (3 × 10^6) / (300 × 3) = 3,333.33
Using a conversion table, as shown in Table 8.2, the sigma level is about 4.2. When an organization decreases the amount of variation occurring, the process sigma level will increase. The Certified Six Sigma Black Belt Handbook offers the definitions of Six Sigma as a philosophy, a set of tools, and a methodology.21

Table 8.2 Defects per million opportunities (DPMO) at various process sigma levels.

    Process sigma level    Defects per million opportunities
    1 sigma                690,000 DPMO
    2 sigma                308,000 DPMO
    3 sigma                66,800 DPMO
    4 sigma                6,210 DPMO
    5 sigma                230 DPMO
    6 sigma                3.4 DPMO
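Both the DPMO calculation and the sigma-level lookup can be scripted. The sketch below uses the conventional normal-quantile conversion with the 1.5 sigma shift, which reproduces the "about 4.2" level for the payroll example:

    from statistics import NormalDist

    def dpmo(defects, units, opportunities):
        """DPMO = (D x 10^6) / (U x O)."""
        return defects * 1e6 / (units * opportunities)

    def sigma_level(dpmo_value):
        # Conventional conversion, including the 1.5 sigma shift of the mean.
        return NormalDist().inv_cdf(1 - dpmo_value / 1e6) + 1.5

    # Payroll example from the text: 3 defects, 300 paychecks, 3 opportunities each.
    d = dpmo(defects=3, units=300, opportunities=3)
    print(f"DPMO = {d:,.2f}")                     # 3,333.33
    print(f"Sigma level = {sigma_level(d):.2f}")  # about 4.2, consistent with Table 8.2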
Dr. W. Edwards Deming, the quality guru, incorporated the basic understanding of statistical theory and variation into his system of profound knowledge. Variation occurs in all processes, and excessive variation can cause defects and inconsistency. Statistical tools help us quantify and analyze variation and the factors that contribute to it; tools such as design of experiments, analysis of variance (ANOVA), regression, and correlation analysis all help to identify factors that contribute to the variability of processes.

Peter Scholtes provided some interesting observations about people who do not understand variation:22

• They do not see trends that are occurring in a process.
• They see trends where there are none.
• They do not know when expectations are realistic and when they are not.
• They do not understand past performance, so they cannot predict future performance.
• They do not know the difference between prediction, forecasting, and guesswork.
• They give others credit or blame when those people are simply either lucky or unlucky, which usually occurs because people tend to attribute everything to human effort, heroics, frailty, error, or deliberate sabotage, no matter what the systemic cause.
• They are less likely to distinguish between fact and opinion.

Understanding the mean and standard deviation, and the factors that contribute to these statistics of a process, helps us to improve the process through the application of Six Sigma methods and tools. Statistical knowledge of the metrics used to measure a process helps us to predict its performance. For example, if a hospital's emergency department is a stable process, we are able to predict how long, on average, or within plus or minus one, two, or three standard deviations, a patient is likely to spend in the emergency department.
Six Sigma Philosophy

Six Sigma as a philosophy views all work as processes. Every process can be defined, measured, analyzed, improved, and controlled (DMAIC). Using the formula y = f(x), key process drivers can be identified: with a process input (x) being a causal factor, the process output (y) can be measured (y is a function of x). The x's are the variables that have the greatest impact on y. If y is customer satisfaction, then the x's are the key influences on customer satisfaction, whether inputs (e.g., from customers or suppliers) or process variables (e.g., materials, methods, environment, machine issues, and so on). If an organization controls the inputs to the process (x), the outputs, or process performance (y), can be controlled. It is important that all x's be identified in order to determine which of those inputs are controllable. Figure 8.7 shows a system in which, with proper data collection and analysis, feedback to the inputs is part of the process, driving improvements that produce high-quality outputs.
Figure 8.7 depicts inputs (x) entering a process that produces the output y, with a feedback loop from the output back to the inputs.
Figure 8.7 y is a function of x.

Table 8.3 Traditional and Lean-Six Sigma (LSS) thinking in an organization.

From traditional thinking → To LSS principles and thinking
• Problem driven → Customer driven
• Reacting to dissatisfaction → Preventing dissatisfaction
• Results at any cost-oriented thinking → Cross-functional, process-oriented thinking, and discipline
• Used to waste and rework → Eliminate waste to improve processes and throughput
• Fixing blame → Fixing the problems
• People management → System management, reducing variation, process measurement, standardization
• Reward firefighting and crisis management → Reward team effort and improvement
• Measure cost and productivity → Measure throughput, customer satisfaction, processes, quality
• Authoritative → Empowerment, accountability

Source: Sandra L. Furterer.
Table 8.3 contrasts traditional thinking about the quality of products and services with the thinking of an organization whose culture has changed to embrace quality practices.23 An explanation of each pairing follows.
PROBLEM-DRIVEN VERSUS CUSTOMER-DRIVEN

• Problem-driven: Focused on each problem as it arises, dropping projects and process improvement efforts to fix unrelated problems
• Customer-driven: Focused on customers' needs and how we can improve our processes to provide seamless service that meets those needs, for both external and internal customers
REACTING TO DISSATISFACTION VERSUS PREVENTING DISSATISFACTION

• Reacting to dissatisfaction: Reacting to the loudest complaints when they occur
• Preventing dissatisfaction: Mistake-proofing and improving processes to prevent problems and customer dissatisfaction
RESULTS AT ANY COST-ORIENTED THINKING VERSUS CROSS-FUNCTIONAL, PROCESS-ORIENTED THINKING AND DISCIPLINE

• Results at any cost-oriented thinking: Thinking that quality requires higher costs and more QC reviews, and that compliance testing and audits can prevent problems
• Cross-functional, process-oriented thinking and discipline: More efficient and reliable processes can be delivered defect-free, focusing on process improvement, customer satisfaction, and the entire process instead of departmental silos
USED TO WASTE AND REWORK VERSUS ELIMINATING WASTE TO IMPROVE PROCESSES AND THROUGHPUT

• Used to waste and rework: Allowing inefficient activities, redundant data entry, mistakes, and poor quality to exist and languish in our processes (accepting that rework is OK as long as the defects are fixed); designing processes based only on compliance and regulatory requirements rather than customers' needs and requirements
• Eliminating waste to improve processes and throughput: Focus on how long it takes to get through the end-to-end process defect-free the first time (throughput), with a focus on eliminating waste activities that do not add customer value
FIXING BLAME VERSUS FIXING THE PROBLEMS

• Fixing blame: Blaming people in the process for mistakes, instead of the multiple factors that affect process quality, timeliness, and service, such as the way work is organized, the order in which activities are performed, information systems and lack of automation, forms and information flow, lack of a clear process design, lack of metrics, and lack of accountability
• Fixing the problems: Focus on process improvement and transformation to identify the problems and their root causes, and on designing processes with the needs of the customers as a major focus
PEOPLE MANAGEMENT VERSUS SYSTEM MANAGEMENT

• People management: Thinking that better people management or organizational structure is all it takes, rather than process design and a focus on the root causes and factors that affect quality
• System management, reducing variation, process measurement and management, standardization: Understanding the entire process, the hand-offs, inefficiencies, and root causes that impede customer satisfaction; measuring the process so that we know when we improve or when we are "out of control"; distinguishing "random" causes (many factors that cause small variation; normal in a stable process) from "special" causes (sporadic, extreme factors that lead to unstable processes)
REWARD FIREFIGHTING AND CRISIS MANAGEMENT VERSUS REWARD TEAM EFFORT AND IMPROVEMENT

• Reward firefighting and crisis management: When a problem that could have been prevented with good process and practice occurs, rewarding the effort it takes to fix the problem (double-posting errors) instead of preventing the problem and designing a quality process in the first place
• Reward team effort and improvement: Focus on process transformation teams working together with process owners and subject matter experts to improve processes and eliminate root causes forever
MEASURE COST AND PRODUCTIVITY VERSUS MEASURE THROUGHPUT, CUSTOMER SATISFACTION, PROCESSES, AND QUALITY

• Measure cost and productivity: Measuring the cost of a process and how many full-time equivalents (FTEs) it takes to perform the process, without measuring customer satisfaction and quality
• Measure throughput, customer satisfaction, processes, and quality: Understanding how our processes work and documenting them for common understanding, improvement, and training; incorporating metrics focused on throughput, customer satisfaction (service), and quality
AUTHORITATIVE VERSUS EMPOWERMENT

• Authoritative: Telling people what to do and how to do it
• Empowerment: Leveraging process owners and subject matter experts to improve their processes, measuring the processes, and holding people accountable for performing the process in the best-practice manner
Define
• Project charter • Stakeholder analysis • SIPOC • Process map (high level) • Communication plan • Change plan • Project plan • Responsibilities matrix • Items for resolution (IFR) • Ground rules
Measure
• Process map • Critical to quality characteristics • Data collection plan with operational definitions • Pareto chart • Measurement system analysis • Metrics with baseline • Benchmarking • Check sheets • Histograms • Surveys, interviews, focus groups, affinity diagrams • Descriptive statistics
Analyze
• Cause-and-effect diagrams • Cause-and-effect matrix • Why–why diagram • Process analysis, histograms and graphical analysis, waste analysis, value analysis, FMEA • Statistical analysis • ANOVA • DPPM/DPMO, process capability
Improve
• Quality function deployment • Recommendations for improvement • Action plan • Cost/benefit analysis • Future state process map • Affinity diagram • Dashboards/ scorecards • Training plan and materials • Procedures • Policies
Control

• Hypothesis testing • Dashboards • Statistical analysis • Graphical analysis • Sampling • Mistake proofing • FMEA • Control plan • Process capability • DPPM/DPMO • Statistical process control • Standard work • Prioritization matrices
Figure 8.8 Six Sigma tools by DMAIC phase. Source: Sandra L. Furterer.
Engagement with quality tools and Lean-Six Sigma can help to change the culture in an organization from the traditional thinking shown on the left side of Table 8.3 to the Lean-Six Sigma thinking shown on the right side.
Six Sigma Tools There are many tools that are part of the Lean-Six Sigma (LSS) tool kit, which are selected based on the problem to be solved. Figure 8.8 shows the most common tools applied within each LSS DMAIC phase.24
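Several of the metrics listed in Figure 8.8, such as DPPM/DPMO and process capability, reduce to simple calculations. As an illustrative sketch only, the following Python snippet computes defects per million opportunities and the corresponding approximate sigma level, assuming the conventional 1.5-sigma shift; the invoice counts in the example are hypothetical, not data from the text:

```python
from statistics import NormalDist

def dpmo(defects: int, units: int, opportunities_per_unit: int) -> float:
    """Defects per million opportunities: defects / total opportunities x 1,000,000."""
    return defects / (units * opportunities_per_unit) * 1_000_000

def sigma_level(dpmo_value: float, shift: float = 1.5) -> float:
    """Approximate short-term sigma level from DPMO, assuming the
    conventional 1.5-sigma long-term shift."""
    return NormalDist().inv_cdf(1 - dpmo_value / 1_000_000) + shift

# Hypothetical example: 27 defects found across 500 invoices,
# each invoice having 12 opportunities for error
d = dpmo(defects=27, units=500, opportunities_per_unit=12)
print(f"DPMO = {d:.0f}")                      # 4500
print(f"Sigma level = {sigma_level(d):.2f}")  # about 4.11
```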
Six Sigma Methodology Six Sigma organizations actually utilize two project methodologies, each with five phases: DMAIC (define, measure, analyze, improve, control) and DMADV (define, measure, analyze, design, verify). Both are measurement-based strategies that focus on process improvement and reducing variation.25 Used to improve existing business processes, DMAIC is a rigorous and robust method that is the most widely adopted and recognized. A Six Sigma team member working on an improvement project will follow the five steps of DMAIC, utilizing the tools necessary. See Chapter 6 for more information on the DMAIC methodology. Where DMAIC is used on existing processes needing incremental improvement, the DMADV methodology is mostly used on DFSS (Design for Six Sigma) projects for a new design or—in some cases when the process, product, or service has proven incapable of meeting customer requirements—a redesign.
Businesses with successful implementation of Six Sigma programs delineate roles and responsibilities for the various people involved in project activities. Six Sigma roles and responsibilities are defined within a hierarchy or ranking order in organizations that have implemented a Six Sigma program (see Table 8.4). Depending on an organization’s vision and deployment of Six Sigma, implementations and roles may vary, but every project must have organizational support. Six Sigma executives and champions set the direction for selecting and deploying projects. They ensure, at a high level, that projects succeed. The executives and champions, sometimes with the assistance of a Master Black Belt (smaller companies may use consultants for the Master Black Belt role), use a selection process that ensures project goals are tied to organizational goals, add value, and are consistent with the organizational plan. At the project level, there are Black Belts and Green Belts (larger organizations will sometimes have Yellow Belts and White Belts as well). These are the people conducting projects and implementing improvements. Many corporations have instituted a certification program for these roles that verifies skills for the appropriate belt level. The Green Belt and Black Belt roles tend to be the most common and are used consistently to lead Six Sigma projects. While the Black Belt role usually functions in a full-time quality position, Green Belts have other responsibilities and roles within the company. Traditionally, the Black Belts run the improvement projects and are supported by the Green Belts, who may have roles as engineers,
Table 8.4 Six Sigma roles and responsibilities.

Roles | Responsibilities
Executive leadership | Provide overall alignment by establishing the strategic focus of the Six Sigma program within the context of the organization’s culture and vision.
Champions | Translate the company’s vision, mission, goals, and metrics into an organizational deployment plan and identify individual projects. Identify resources and remove roadblocks.
Master Black Belt | Trains and coaches Black Belts and Green Belts. Functions more at the Six Sigma program level by developing key metrics and the strategic direction. Acts as an organization’s Six Sigma technologist and internal consultant.
Black Belt | Leads problem-solving projects. Trains and coaches project teams.
Green Belt | Assists with data collection and analysis for Black Belt projects. Leads Green Belt projects or teams.
Yellow Belt | Participates as a project team member. Reviews process improvements that support the project.
White Belt | Can work on local problem-solving teams that support overall projects but may not be part of a Six Sigma project team. Understands basic Six Sigma concepts from an awareness perspective.
data analysts, technicians, or assemblers. Typically, Six Sigma organizations will require that the leaders who are responsible for a business area, such as a department or area supervisor, be trained and often certified as Green Belts. Often, analysts such as the Certified Quality Process Analyst will assist the Green Belt while training and working toward the Green Belt certification. At the Green Belt level, the individual is expected to have a basic understanding of statistics. By collecting data and performing the analysis, he or she can conduct problem solving and root cause analysis to solve quality problems. The training that Green Belts receive allows them to use the Six Sigma tools and methodology and, with the supervision of a Black Belt, to lead small projects, usually within their own department or specific business area. Six Sigma can be successful when these things happen: • Management leads the improvement efforts and fosters an environment that supports Six Sigma initiatives as a business strategy • Every employee focuses on customer satisfaction • Teams are assigned well-defined projects • Champions are provided as leaders of the Six Sigma improvement process to remove roadblocks for the Six Sigma improvement teams • Experts designated as Black Belts are provided for guidance and coaching • The collected data are valued and used • Defects are viewed as opportunities • Statistical training is provided at all levels, ensuring each employee a minimum of Six Sigma Green Belt status Six Sigma can achieve improvements in quality and cost reduction quickly and on an incremental, continuous basis, leading to breakthrough levels of process quality.
Voice of the Customer (VOC) The voice of the customer (VOC) is a term that describes collecting and listening to the needs, or voice, of the customer. There are quantitative and qualitative ways to collect the VOC from the customers of the process, as follows:26 – Surveys—a properly designed questionnaire, which can provide Likert-scale quantification of survey results or open-ended qualitative responses – Focus groups—3 to 12 individuals assembled to explore specific topics and questions – Face-to-face interviews—individual interviews of 30 to 60 minutes to gather information on processes and ideas for improvement – Satisfaction/complaint cards—used to collect overall satisfaction and/or complaints
– Dissatisfaction sources—complaints, rework, customer re-visits – Competitive shopper—shopping a competitor by purchasing its products Here are some tips on listening to the customer: • Do not use your own instincts as research data (you are not the customer) • See the world from the customer’s side • The higher you are in the organization, the more out of touch you are • Get customers to talk; 90–96% of unhappy customers will not complain • Do research to retain customers • Determine how satisfied customers are • Conduct research on customer expectations • Develop a customer profile • Share the results of customer research studies Surveys have many advantages: • Conducting a survey is an efficient way of collecting information from a large number of respondents • Statistical techniques can be used to determine validity, reliability, and statistical significance • A wide range of information can be collected • They are relatively easy to administer • There is an economy in data collection due to the focus provided by standardized questions Surveys can also have disadvantages: • Surveys depend on subjects’ motivation, honesty, memory, and ability to respond • Subjects may have forgotten their reasons • They may not be motivated to give accurate answers • Structured surveys, particularly those with closed questions, may have low validity • Errors due to nonresponse may exist • Answer choices could lead to vague data sets, because each respondent applies a personal, abstract notion of the “strength” of a choice
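The handbook notes that surveys can yield Likert-scale quantification and that nonresponse is a common error source. As a minimal illustration, the following Python sketch computes basic descriptive statistics and the response rate; the response data, the number of surveys sent, and the “top-two-box” summary are all hypothetical assumptions, not figures from the text:

```python
from statistics import mean, stdev

# Hypothetical 1-5 Likert responses to one survey question
responses = [5, 4, 4, 3, 5, 2, 4, 5, 3, 4]
surveys_sent = 25  # used to estimate nonresponse

response_rate = len(responses) / surveys_sent
top_two_box = sum(r >= 4 for r in responses) / len(responses)

print(f"n = {len(responses)}, response rate = {response_rate:.0%}")
print(f"mean = {mean(responses):.2f}, std dev = {stdev(responses):.2f}")
print(f"top-two-box (4s and 5s) = {top_two_box:.0%}")
```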
BENCHMARKING Define and describe this technique and how it can be used to support best practices. (Understand) Body of Knowledge II.B.3
A carpenter might make a mark on his workbench so he can easily and efficiently cut every table to the exact length. In business, benchmarks are standards against which performance can be measured. Benchmarks are commonly used by companies to achieve quality performance. In theory, benchmarks are those standards that the organization strives to achieve. They are identified from many possible sources: • Customer specifications • Customer expectations • The competition’s product performance • The best in the company’s own or a related industry • The best in the world Benchmarking can be informal (when an individual or a team observes and copies the work that an outstanding individual or team is doing) or formal. Formal benchmarking is a term that normally refers to emulating the best practices in the industry or even the best practices in the world. Formal benchmarking consists of following these steps: 1. Obtain a commitment to change the organization’s business practices 2. Determine which processes the company needs to improve 3. Identify the best practices 4. Form a benchmarking partnership with a firm that displays the best practices 5. Study the benchmarking partner’s operations and products for best measures and practices 6. Create benchmarks that are measures of outstanding performance 7. Compare current practices with the benchmark partner’s practices 8. Develop a plan to change the firm’s operational practices to achieve the benchmark standards
Obtain a Commitment to Change the Organization’s Business Practices Upper management must endorse the policy to employ benchmarking to drive change in the organization. This commitment must not be sought or rendered lightly. Significant changes will often be necessary, and the resistance to change old practices is very strong in most firms. Obtaining a commitment from upper management is a good start toward making benchmarking an effective driver of change in the organization, but it is not sufficient. Employees and leaders at all levels who participate in the processes that need to be improved must be invited to contribute to planning and implementing the necessary changes, or the benchmarking initiative will almost certainly fail.
Determine Which Processes the Company Needs to Improve Processes that need improvement can be identified by finding waste (muda) and determining which business processes created it. Techniques for identifying waste are described in the lean section of this chapter. Internal weaknesses and threats identified in a strengths–weaknesses–opportunities–threats (SWOT) analysis are good candidates for benchmarking. W. Edwards Deming warned that unless the company possesses a deep understanding of its own processes, benchmarking (copying) processes that have been successful in other companies will not produce significant improvement. It is unlikely that a company will select an appropriate benchmarking partner unless its own processes to be benchmarked are fully understood.
Identify the Best Practices Determining which companies and agencies possess the processes with the best practices requires some research. Professional societies and trade organizations frequently publish a list of those companies that practice best-in-class processes in a particular industry. The Malcolm Baldrige National Quality Award is granted each year to organizations in various categories that have demonstrated that they possess best quality practices, and the Baldrige Award winners have agreed in advance to serve as benchmark partners for organizations seeking support for their own quality improvement programs. Winners of other national and international award programs are also nearly always proud of their accomplishments and accordingly are available to provide information about their processes. The marketing department of most large firms includes some individuals or teams who are tasked with continuous scanning of the competitive marketplace to identify companies and agencies that are likely to pose a future threat to the parent organization. These internal competitive intelligence experts can be consulted or even tasked to locate competitors who possess processes that exhibit the best practices in a given industry. It is not always essential to find benchmarking practices from within a company’s own industry. One large aerospace company was having trouble implementing statistical process control in its space-qualified component
manufacturing process. A team of manufacturing and industrial engineers from the aerospace firm partnered with a large cookie manufacturer in the same city and was able to model its own operation on the cookie manufacturer’s implementation of statistical process control. Benchmarking against best in class may not be possible in the following scenarios:27 • The best in the world is not known • There is no related process available • The best in class is inaccessible due to geography or expense • The best in class is not willing to partner
Form a Benchmarking Partnership There are two considerations in selecting a benchmarking partner. First, the benchmarking partner should be best in class with respect to the processes that need to be improved. There is little point in performing a benchmarking activity if the benchmarking partner is not significantly better at something than the company seeking to improve. Second, the benchmarking partner must want to provide information that can be used to develop measures of outstanding performance. Again, there is little value in performing a benchmarking activity if the necessary information cannot be obtained. The best benchmarking partners are companies that believe it is in their own best interest to share information about their practices and business processes. Most companies are reluctant to share information about their operations with other firms, particularly competitors. Accordingly, in order to form a benchmarking partnership, it may be necessary to seek companies and organizations in different industries. Regardless of industry, the benchmarking partner should feel that there is some value to itself in sharing information. Some companies believe that sharing information about themselves improves their public image and increases sales. There is great public image and marketing value to a company in winning the Baldrige Award, and a condition of applying for that award is to agree to make information about your processes available to other companies. Quid pro quo is a term that describes giving something of value to get something of value—offering a benchmarking partner something that they value is an excellent way to encourage them to share information about their practices. Some companies will accept monetary payments in exchange for information about their processes, but most will not. If one of a company’s business processes is already recognized as best in class, this company would have no difficulty in forming a partnership with another company whose processes are also best in class in a quid pro quo exchange for information about each other’s best-in-class processes.
Study the Benchmarking Partner’s Operations and Products This step consists of collecting and analyzing information and data on the performance of the benchmark leader. Visiting the benchmark leader is always
desirable. Source information may be obtained from internet searches; books and trade journals; professional associations; corporate annual reports and other financial documents; and consulting firms that specialize in collecting, collating, and presenting benchmark information for a fee. Caution should be exercised whenever benchmark information is collected and analyzed since the data any company presents about itself are bound to be biased and will always be presented in as favorable a light as possible.
Create Benchmark Measures There are many sorts of comparisons that benchmarking can highlight. These include organizational, process, performance, and project benchmarks. Organizational benchmarking compares the overall strategic activities of the benchmark leader with those of another company. How the benchmark leader sets and accomplishes goals, integrates all the aspects of its business, satisfies or delights its customers, and drives its financial successes are examples of questions one would ask to understand how this best-in-class firm operates. Process benchmarking compares a specific business process performed by the leading firm to processes that other firms desire to improve. Enterprise resource planning (ERP), strategic project management, customer relationship management (CRM), and employee retention are examples of specific best-in-class processes that a company striving to improve its business might seek to understand. Performance benchmarking enables managers to compare their own company’s products and services to those recognized as the best in class. Price, technical quality, customer service, packaging, bundling, speed, reliability, and attractiveness are examples of factors performance benchmarking compares. Project benchmarking consists of comparing a best-in-class project with other projects. Of all the types of benchmarking, this is perhaps the easiest to execute, since it allows comparison to be made with organizations that are not direct competitors. While the objectives and even the practices of two projects may differ, the projects will share many of the same elements; all projects share the common constraints of time, cost, resources, and performance. Before a new project is begun, benchmarking can stimulate the selection and application of better techniques and tools for planning, scheduling, monitoring, and controlling the project. Many companies insist on a postmortem, a formal meeting following the completion of each major project, to surface lessons learned during the execution of the project. These lessons learned can lead to benchmarks that can be used to stimulate improvements in subsequent projects.
Compare Current Practices with the Partner’s One of the main reasons benchmarking projects fail is that the company neglects to establish specific plans for closing the gap between current practices and the benchmark leader’s practices. To avoid this pitfall, it is mandatory that specific gaps be identified.
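One way to make the gaps concrete is a simple gap register that tabulates each benchmarked measure against the partner’s value. The Python sketch below is illustrative only; the metric names and values are assumptions, not data from the text:

```python
# Hypothetical benchmark gap register: all names and values are assumptions
benchmarks = {
    "on-time delivery (%)":    {"current": 87.0, "benchmark": 98.5, "higher_is_better": True},
    "first-pass yield (%)":    {"current": 91.2, "benchmark": 99.1, "higher_is_better": True},
    "order cycle time (days)": {"current": 12.0, "benchmark": 4.0,  "higher_is_better": False},
}

for metric, m in benchmarks.items():
    # Positive gap = distance still to close to reach the benchmark partner
    if m["higher_is_better"]:
        gap = m["benchmark"] - m["current"]
    else:
        gap = m["current"] - m["benchmark"]
    print(f"{metric}: current={m['current']}, benchmark={m['benchmark']}, gap to close={gap:+.1f}")
```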
Develop a Plan to Change Operational Practices When gaps exist between the firm’s current practices and the benchmark leader’s practices, a plan to close those gaps must be developed and implemented. This plan must address overcoming the inevitable resistance to change that will surface. Employees and even managers who have performed a job in the same way for a long time—sometimes decades—always find it difficult to change what they do. The benchmarking activity will not have impact if it does not result in an action plan to improve current processes.
RISK MANAGEMENT Recognize the types of risk that can occur throughout the organization, such as scheduling, shipping/receiving, financials, operations and supply chain, employee and user safety, and regulatory compliance changes. Describe risk control and mitigation methods: avoidance, reduction, prevention, segregation, and transfer. (Understand) Body of Knowledge II.B.4
The purpose of risk management is to reduce potential risks to an acceptable level before they occur, throughout the life of a product or service. We will use the term system to describe a product or service and the related processes for design, development, testing, deployment, and retirement. Risk management is defined as an [o]rganized, analytic process to identify what might cause harm or loss (identify risks); to assess and quantify the identified risks; and to develop and, if needed, implement an appropriate approach to prevent or handle causes of risk that could result in significant harm or loss.28 Risk is defined as [a] measure of the potential inability to achieve overall program objectives within defined cost, schedule, and technical constraints and has two components: 1. The probability (or likelihood) of failing to achieve a particular outcome and 2. The consequences (or impact) of failing to achieve that outcome.29 The risk management process includes the following activities: • Risk planning: The risk planning step develops a strategy for identifying, analyzing, handling, and monitoring risks. The strategy should include both a process and the way that it will be implemented,
and how the risks will be documented and traced throughout the phases of the product or service life cycle. • Risk identification: The risk identification step entails brainstorming and identifying potential risks that could impact the successful completion of the mission. If a similar product or service already exists, collecting data on warranty claims and customer complaints can be an excellent way to identify potential risks for the new or revised system. • Risk analysis: The risk analysis step assesses the contributing causes, outcomes, and impacts of the risk events. • Risk handling: During the risk handling activity, the team develops a handling approach for each risk to investigate whether the risk’s likelihood or consequence can be reduced. • Risk monitoring: Risk monitoring is the ongoing surveillance of risks throughout the life cycle, reassessing them as more knowledge is acquired. Components of the risk management tools include risk assessment, the risk handling approach, risk prioritization, and the risk cube. Each tool is discussed next.
Risk Assessment The risk assessment tool includes the generation of the potential risk events that could occur related to the system at this point in time. • The risk event is the potential risk that can occur related to the system • The contributing causes are those causes that contribute to the risk event possibly occurring • The outcomes are the potential results if the risk event were to occur • The impact is the effect on the system if the risk event should occur An easy way to generate the risk assessment is to write a sentence including the terms: “As a result of <contributing cause>, <risk event> may occur, causing <outcome>, which can be expressed as <impact>.”30 Here’s an example for an emergency department process: As a result of only one nurse being scheduled for triage during patient volume surges, a backup in triage may occur, causing patients to leave without being seen by a healthcare provider, which can be expressed as an impact to patient health and lost hospital revenue.
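Because the risk statement follows a fixed template, it can be assembled mechanically. The following Python sketch is a minimal illustration, using the triage-backup risk from Table 8.7 to fill the four elements:

```python
def risk_statement(cause: str, event: str, outcome: str, impact: str) -> str:
    """Assemble the four risk assessment elements into the standard sentence."""
    return (f"As a result of {cause}, {event} may occur, "
            f"causing {outcome}, which can be expressed as {impact}.")

print(risk_statement(
    cause="only one nurse being scheduled for triage during patient volume surges",
    event="a backup in triage",
    outcome="patients leaving without being seen by a healthcare provider",
    impact="impacts to patient health and lost hospital revenue",
))
```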
Risk Types The types of risk are specific to the organization, its environment, and the product, service, or system that it provides. Some possible types of risk include the following:
• Scheduling risks: Risks of scheduling the production of a product, managing all of the factors inherent in a production process; scheduling resources for an information technology project • Shipping/receiving: Risks of shipping a product or receiving raw materials from suppliers, such as product defects, obsolescence, damage to the product, or using an ineffective acceptance sampling plan • Financials: Risks related to tracking financials, use of appropriate accounting techniques, getting appropriate data for financial measures, and risk of fraud • Operations: Risks related to operations, such as the risk of cybersecurity breaches in a bank • Supply chain: Disruptions in the supply chain due to supplier issues, shipping delays, resource shortages, or pandemics • Employee: Risk that people lack the training and skills to perform their work; user errors • User safety: Risk of the spread of disease during a global pandemic; safety of employees working with machinery • Regulatory compliance and changes: Risk of ineffective monitoring of regulatory changes; not appropriately documenting regulatory compliance changes An effective way to identify risk types is to follow your value chain activities and identify potential risks related to the activities within the processes.
Risk Handling Approach Risk handling is the process that identifies and selects options to manage the identified risks to an acceptable level during the systems project. Risk handling options include acceptance (assumption), avoidance, control (mitigation), transfer, research, and monitor.31,32 In the CQPA body of knowledge, segregation, or removing the risk from normal operations (e.g., removing defective products from the production process) is listed as a risk control and mitigation method. The other risk handling options listed are explored and applied before a segregation approach. The segregation approach would be one that is performed if the risks are not appropriately prevented or mitigated.
Risk handling options33,34 This section describes the most common risk handling options, with an example of each. 1. Acceptance (assumption): The acceptance approach accepts the risk and takes no action at this time. It is typically used when the risk is rated low or when it would take an inordinate amount of resources to reduce the risk to an acceptable level.
Here is an example of an acceptance or assumption risk handling approach for the emergency department (ED): The risk is that a bus drops patients from the senior living center for those who need emergency care, causing a brief surge in patient arrivals. It is assumed that this will occur, and since it is of minor impact it is simply accepted. 2. Avoidance: In the avoidance handling approach, action is taken to eliminate the risk, reducing its likelihood and/or eliminating its negative consequence to the system. This is an example of an avoidance risk handling approach for the ED: The risk is delays in giving reports from the ED to inpatient nurses, which causes a delay in getting patients admitted to the inpatient floors. To avoid the risk, define accountability, priority, and measures, and provide technology to ensure the report is given. 3. Control (mitigate) or reduction: In the control or mitigation handling approach, activities are performed to reduce the likelihood and/or the consequence to an acceptable level. It does not completely eliminate the risk, as avoidance does. Here is an example of a control (mitigate) risk handling approach for the ED: The risk is that patient volume surges cause a backup in triage. The team would create a surge staffing plan to control or mitigate this risk. 4. Transfer: In the transfer handling approach, the risk event or impact is shifted away from the program. The risk could be transferred between stakeholders or within a system itself. This approach does not change the likelihood or the impact of the risk. This is an example of the transfer risk handling approach for the ED: The risk event is that patients are not registered before they are ready to be discharged. To transfer the risk, align registration resources to ED volumes, transferring responsibility to the registration department. 5. Research: The research handling approach includes collecting additional information before a decision is made on how to reduce the risk. Here is an example of a research risk handling approach for the ED: The risk is that a new regulation requires reporting of ED surges. Research the requirements of the new regulation and assess the impact on resources. 6. Monitor: The monitor handling approach is to take no immediate action but to monitor for changes related to the risk. Following is an example of the monitor risk handling approach for our ED: The risk is of a hurricane occurring during hurricane season. The weather will be monitored, as there may not be much else that can be done to mitigate weather conditions.
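As a compact summary of the six options, the following Python sketch (illustrative only) maps each of the ED examples above to its handling approach; a real risk register would also track owners, dates, and status:

```python
from enum import Enum

class Handling(Enum):
    ACCEPT = "acceptance (assumption)"
    AVOID = "avoidance"
    MITIGATE = "control (mitigation)"
    TRANSFER = "transfer"
    RESEARCH = "research"
    MONITOR = "monitor"

# Risk register entries drawn from the ED examples in the text
register = [
    ("Senior-center bus causes brief arrival surge", Handling.ACCEPT),
    ("Delays giving reports from ED to inpatient nurses", Handling.AVOID),
    ("Patient volume surges cause backup in triage", Handling.MITIGATE),
    ("Patients not registered before ready for discharge", Handling.TRANSFER),
    ("New regulation requires reporting of ED surges", Handling.RESEARCH),
    ("Hurricane during hurricane season", Handling.MONITOR),
]

for event, handling in register:
    print(f"{event}: {handling.value}")
```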
Risk Prioritization Within the risk prioritization tool, the team assesses each risk event’s likelihood and consequences. The risk likelihood is the probability that the risk will occur, typically assessed on a scale of 1 to 5, as shown in Table 8.5. Level 1 means the risk is not likely to occur; level 2 represents a low likelihood; level 3 means the risk is likely to occur; level 4 means the risk is highly likely to occur; and level 5 means it is a near certainty that the risk will occur.
Risk Cube The risk cube is a matrix that maps each risk according to its likelihood and its consequences. This provides a simple graphical view of where to focus attention, based on the color coding of the cube: red is for high risk—these risks should be closely monitored and continually re-assessed; yellow is for medium risk—these risks should be monitored and re-assessed periodically; and green is for low risk—occasional monitoring and re-assessment should be applied to these risk events. The consequences represent the magnitude of the impact of the risk and are also typically assessed on a scale of 1 to 5. The impact or consequences are typically assessed relative to the technical aspects, the schedule, and/or the cost of the program. The impact matrix can be found in Table 8.6. Once the risk likelihood and consequences are assessed, the risk cube is used to display the risks (by risk event number) at the intersection of the risk likelihood level and the impact number. The risk cube visually shows which risks are the most important candidates for a risk handling approach to reduce their likelihood or impact. The risk assessment for the emergency department is shown in Table 8.7, the risk handling approach in Table 8.8, and the risk prioritization in Table 8.9; the risk cube for the emergency department is shown in Figure 8.9. These represent example risks and are not a comprehensive list of all of the risks for the emergency department.

Table 8.5 Risk likelihood. What is the likelihood the risk will happen?

No. | Level | Your approach and processes . . .
1 | Not likely | . . . will effectively avoid or mitigate this risk based on standard practice
2 | Low likelihood | . . . have usually mitigated this type of risk with minimal oversight in similar cases
3 | Likely | . . . may mitigate this risk, but workarounds will be required
4 | Highly likely | . . . cannot mitigate this risk, but a different approach might
5 | Near certainty | . . . cannot mitigate this type of risk; no known processes or workarounds are available

Source: Richard Sugarman, “University of Dayton ENM 505 Course Materials,” University of Dayton, 2015.
Table 8.6 Magnitude of impact or risk consequences. Given the risk is realized, what would be the magnitude of the impact?

Level | Technical | Schedule | Cost
1 | Minimal or no impact | Minimal or no impact | Minimal or no impact
2 | Minor performance shortfall; same approach retained | Additional activities required, able to meet key dates | Budget increase or unit production cost increase < 1%
3 | Moderate performance shortfall, but workarounds available | Minor schedule slip, will miss needed dates | Budget increase or unit production cost increase < 5%
4 | Unacceptable, but workarounds available | Project critical path affected | Budget increase or unit production cost increase < 10%
5 | Unacceptable; no alternatives exist | Cannot achieve key project milestones | Budget increase or unit production cost increase > 10%

Source: Richard Sugarman, “University of Dayton ENM 505 Course Materials,” University of Dayton, 2015.

Table 8.7 Risk assessment.

Risk no. | Risk event | Contributing causes | Outcomes | Impact
1 | Patient volume surges cause backup in triage | Only one nurse being scheduled for triage, even in patient volume surges; resource scheduling not based on patient volumes | Patients leave without being seen by a healthcare provider | Impacts patient’s health, and hospital loses revenue
2 | Patient waiting room becomes full | No clinical visibility to waiting room; no one “owns” patients in waiting room | Patients leave without being seen by a healthcare provider | Impacts patients’ health
3 | Patients not registered before ready to be discharged | Not enough registration staff is scheduled; patient volume surges | Patient is delayed for discharge | Impacts length of stay, patient satisfaction, and patient care
4 | Delays in transporting patients to exams or inpatient floors | Admitting patients in batches; transport priorities not aligned to ED; transporters not trained in telemetry transports | Delay in patient care | Impacts length of stay, patient satisfaction, and patient care
5 | Delays in admitting patients to inpatient floors or units | Staffing schedules not aligned between ED and inpatient floors/units; lengthy admission process | Delay in patient care | Impacts length of stay, patient satisfaction, and patient care
6 | Delays in giving reports from ED to inpatient nurses | Not a priority with inpatient nurses; no technology to contact nurses | Delay in patient care | Impacts length of stay, patient satisfaction, and patient care
7 | A brief patient surge of arrivals is experienced | A bus drops patients from the senior living center for emergency care | Patients must wait longer than desired, and other patients may leave without treatment | Impacts patients’ health
8 | A new regulation requires reporting of ED surges | A new regulation is passed in the county | Reporting is provided by the ED | Additional resources may be needed for the reporting
9 | Risk of a hurricane during hurricane season | Climate change causing more hurricanes | Emergency hurricane procedures will need to be enacted | Emergency department operations may be disrupted

Source: Sandra L. Furterer.

Table 8.8 Risk handling approach.

Risk no. | Risk event | Risk handling type | Risk handling description
1 | Patient volume surges cause backup in triage | Mitigation | Create surge staffing plan
2 | Patient waiting room becomes full | Mitigation | Provide alerts for volume in waiting room
3 | Patients not registered before ready to be discharged | Transfer | Align registration resources to ED volumes
4 | Delays in transporting patients to exams or inpatient floors | Mitigation | Change process to reduce batch processing of admissions
5 | Delays in admitting patients to inpatient floors or units | Mitigation | Align staffing between ED and inpatient floors/units to historical ED demand
6 | Delays in giving reports from ED to inpatient nurses | Avoidance | Define accountability and priority, and provide appropriate technology
7 | A brief patient surge of arrivals is experienced | Assumption | Accept the risk since it is of minimal impact
8 | A new regulation requires reporting of ED surges | Research | Research the requirements of the new regulation and assess the impact on resources
9 | Risk of a hurricane during hurricane season | Monitor | Monitor weather conditions during hurricane season

Source: Sandra L. Furterer.

Table 8.9 Risk prioritization.

Risk no. | Risk event | Risk likelihood | Risk impact
1 | Patient volume surges cause backup in triage | 4 | 5
2 | Patient waiting room becomes full | 3 | 4
3 | Patients not registered before ready to be discharged | 1 | 1
4 | Delays in transporting patients to exams or inpatient floors | 2 | 2
5 | Delays in admitting patients to inpatient floors or units | 4 | 3
6 | Delays in giving reports from ED to inpatient nurses | 4 | 2
7 | A brief patient surge of arrivals is experienced | 1 | 2
8 | A new regulation requires reporting of ED surges | 1 | 1
9 | Risk of a hurricane during hurricane season | 1 | 5

Source: Sandra L. Furterer.
Figure 8.9 Risk cube for emergency department, plotting each risk number at the intersection of its likelihood (vertical axis) and risk impact (horizontal axis). Source: Sandra L. Furterer.
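Once likelihood and impact are assessed, assigning each risk to a red, yellow, or green zone can be automated. The following Python sketch scores the nine risks from Table 8.9; the likelihood-times-impact score and the zone thresholds are illustrative assumptions, since each organization calibrates its own cube boundaries:

```python
def cube_zone(likelihood: int, impact: int) -> str:
    """Classify a risk into a red/yellow/green zone of the risk cube.
    Thresholds are illustrative; organizations calibrate their own."""
    score = likelihood * impact  # assumption: simple likelihood x impact score
    if score >= 12 or (impact == 5 and likelihood >= 3):
        return "red (high risk)"
    if score >= 5:
        return "yellow (medium risk)"
    return "green (low risk)"

# Risks 1-9 from Table 8.9: (likelihood, impact)
risks = {1: (4, 5), 2: (3, 4), 3: (1, 1), 4: (2, 2), 5: (4, 3),
         6: (4, 2), 7: (1, 2), 8: (1, 1), 9: (1, 5)}

for no, (l, i) in sorted(risks.items()):
    print(f"Risk {no}: likelihood={l}, impact={i} -> {cube_zone(l, i)}")
```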
Risk-Based Thinking and Prevention In the ISO 9001:2015 standard, section 0.3.3 and clause A.4, Risk-Based Thinking, encourage risk-based thinking applied to planning and implementing quality management system (QMS) processes, and risk determines the level of documented information. A key purpose of a QMS is to act as a preventive tool. According to the standard, “Organizations can decide whether or not to develop a more extensive risk management methodology than is required by this International Standard, e.g. through the application of other guidance or standards.”35 The organization is responsible for determining the level of application of risk-based thinking and the actions taken to address risk. Controls are incorporated in a proactive way, before problems occur.
BUSINESS PROCESS MANAGEMENT (BPM) Define and describe this continuous process improvement practice, including the business process lifecycle phases (Design, Modeling, Execution, Monitoring, and Optimization). (Understand) Body of Knowledge II.B.5
In Figure 8.10, we see three bodies of knowledge—quality, productivity, and information technology—that have evolved into what is today known as business process management (BPM). The quality body of knowledge started with statistical
process control in the 1920s. The statistical concepts, along with the principles of quality gurus Dr. W. Edwards Deming, Joseph Juran, Philip Crosby, Armand Feigenbaum, and others, evolved into total quality management, which became popular in the 1980s. At about the same time, business process reengineering grew in popularity. These principles and tools evolved into Six Sigma. Six Sigma began at Motorola with Mikel Harry and Richard Schroeder and was popularized at General Electric when Jack Welch, a great advocate of Six Sigma, was CEO. Six Sigma uses the DMAIC method and a large variety of quality problem-solving tools, including statistics. The value of Six Sigma is in its very structured approach, its focus on defects and variation, and the mentoring and certification of Black Belts and Green Belts.36 The productivity body of knowledge started in the early 1900s with the Ford production system and evolved into just-in-time and the Toyota production system, which then evolved into lean. Lean and Six Sigma integrated and synthesized their principles, tools, and methodologies, combining to form LSS. The information technology body of knowledge began in the 1970s with MRP (materials requirements planning) and MRP II (manufacturing resource planning), which evolved into ERP (enterprise resource planning) and CRM (customer relationship management). BPMS (business process management systems) technology supported business process management, where the principles of Lean-Six Sigma formed the basis for the principles and improvement tools of BPM.

Figure 8.10 The evolution to business process management. Source: Sandra L. Furterer.

Business process management (BPM) is a disciplined approach to analyze, identify, design, execute, document, measure, monitor, improve, and control both automated and nonautomated business processes to achieve consistent, targeted results aligned with an organization’s strategic goals. BPM is thereby the method by which an enterprise’s “quality” program (e.g., TQM) is carried out. There are several essential factors that will help to ensure that an organization can embrace BPM and leverage its value:
• BPM enables the alignment of the business strategy, the core value chains, and the business processes. • It is important to align process goals and business processes with the organizational strategy. • Executive sponsorship/governance is critical to ensuring change happens. Like any organization-wide change effort, top management support is critical. Setting up a governance structure guides and manages the organization with respect to BPM goals and initiatives. • Process ownership is critical when improving cross-functional processes that span many departments. • Institutionalized practices define how BPM is practiced in the organization. • It is important to incorporate technology into the business processes to enable and optimize them. • Complex problems arise from the cross-organizational nature of business processes; their impact must be assessed, and optimization must be performed from a cross-functional perspective to avoid sub-optimizing individual processes. • The BPM concepts, tools, and training should be continuously adapted and enhanced to improve business processes. • Finally, in a process-oriented organization, the alignment of employee bonuses and rewards to business process performance looks very different from that in a functionally/departmentally managed organization. There are three main goals of BPM: 1. To improve the products and services of the organization through a structured approach to performance improvement that centers on the systematic design and management of a company’s business processes 2. To transform the culture to support BPM implementation
3. To gain business capabilities by transforming into an organization that has IT-enabled business process changes
Business Process Management Principles There are several important principles of BPM: • Processes are assets. Processes should create value for customers. Processes are responsible for the end-to-end work of delivering value to the customer. Customers can be internal and external. Different organizations have different objectives and different core processes. The organization acts very differently when processes are treated as an asset. BPM practices ensure that the processes are documented, kept up to date, and are accessible to the organization to leverage and improve. • Processes should be managed and continuously improved. A managed process produces consistent value to customers and has the foundation for the process to be improved. Processes should be measured, documented, monitored, controlled, and analyzed. • Information technology (IT) is an essential enabler. Business processes are the focus for improvement, and IT is the key enabling tool. IT provides information (real-time process information) to monitor and control processes.
BPM Life Cycle The BPM life cycle for designing, assessing, documenting, measuring, executing, and monitoring processes is shown in Figure 8.11.
Figure 8.11 BPM life cycle phases: design, modeling, execution, monitoring, and optimization. Source: Sandra L. Furterer.

Phase 1: Design The purpose of process design is the formal definition of the goals, deliverables, and organization of the activities and rules needed to produce a product, service, or outcome.
Process design can be performed at two levels: • Cross-functional process • Business function workflow The cross-functional process is a set of activities that cross departments in an organization. It is usually broader in scope, more complex, and potentially more strategic to the organization. The cross-functional process is a combination of all the activities and support needed to produce and deliver an objective, outcome, product, or service—regardless of where the activity is performed. Activities are shown in the context of their relationship with one another, to provide a picture of sequence and flow.37 A workflow, as defined by the Association of Business Process Management Professionals in the body of knowledge, is a set of activities performed within a department. A workflow is the aggregation of activity within a single Business Unit. Activity will be a combination of work from one or more processes. Organization of this work will be around efficiency. Modeling will show this work as a flow that describes each activity’s relationship with all the others performed in the Business Unit.38 This definition varies somewhat from the one used in BPM systems, where workflow is defined as the flow of work through the system to get the work accomplished, which almost by the nature of the automation can easily connect disparate processes across departments. For this material, however, the distinction between workflow and process differentiates activities within a single department from those that cross department boundaries. The important concept to glean from this discussion is the system of processes, from Deming’s theory of systems. Small processes that exist within departments or business units connect and form larger, cross-functional processes that cross departments or business units. It is important to focus not only on the goals and efficiencies of the individual departments but on the entire stream or system of the processes when optimizing processes. In this phase, processes are assessed if they exist and are designed if they do not. It is important to define the process design requirements based on the voice of the customer. Quality function deployment, discussed in Chapter 17, can be used to ensure alignment and prioritization between the customer requirements and the process requirements. It is important to purposefully design the organization’s processes. Many processes were never truly “designed” for efficiency and to meet the strategic goals of the organization; they just grew with the company and met the need at the time, but were never designed for optimization and to ensure that they meet the customers’ goals. Most processes are less efficient than they could be if they were properly designed. When processes are designed, the focus should be on the sets of activities, their sequences, the interaction of the activities with the people who perform them, the supporting technology and information systems that they use, and any data, information, tools, and equipment that are used as well. Metrics and control mechanisms should be incorporated into the processes to ensure that the goals of the processes are met, along with the customers’ needs, and to ensure that the processes are continually improved.
It is important to understand how the process is performed today. Starting from a blank sheet of paper or whiteboard and designing an ideal process without regard to the challenges in the current process tends to create designs that are either infeasible or not cost-effective. So, before the organization starts to design or redesign the process, process modeling should be used to understand the current state of the processes. Following are some best practices in process design: • Design around value-added activities: It is important to understand what provides value to the customer within the process. Understanding which activities provide value is important to enhance the value of these activities, and to reduce or eliminate activities that do not add value to the process. • Perform work where it makes the most sense: This design principle is important, because too often processes move back and forth between departments, due to either politics or where a process “grew up” in an organization. The process should be performed by the people who have the tools, knowledge, and skills, regardless of organizational structure. • Create a single point of contact for the customer: This design principle puts the customer first when designing the process. Too often, especially in large companies, it is assumed that a customer can navigate the maze of organizations when trying to solve a problem. Customers shouldn’t need to know the distinction between the channels and the organizations that support them. If a customer seeks help via the web rather than the phone, they should not be sent to a completely different organization to solve their problem, yet too often companies do exactly that. The customer ends up confused and perhaps with a different solution, depending on the channel through which they entered the company. When the author was working in healthcare, one of the IT staff believed that needing a patient advocate to navigate the maze of hospital departments was just part of doing business in a hospital. This type of thinking should be abolished as organizations design their processes for the care and ease of the customer. • Reduce hand-offs: Many of these design principles are interconnected. If work is performed where it makes the most sense, the number of hand-offs between the steps of the process may be reduced. Hand-offs are often where the process gets stuck: the next step downstream may not realize that the work is ready to move on, so the work sits and waits. Focusing on the hand-offs can vastly streamline the process and reduce total cycle time. • Reduce batch sizes: Reducing the size of batches can help work flow through the process more quickly and reduce the inventory and wait time that build up between process steps. Continuous flow and reduced batch sizes are critical tenets of lean thinking.
• Put access to information where it is needed most: With integrated systems and BPM systems, it should be easier to provide information where it is needed most to ensure the quality and timeliness of the process.

• Capture information once and share it with everyone: Along with giving access to information where it is most needed, quality information should be shared with all of the process stakeholders, including, as appropriate, customers and suppliers. Web-based patient charts and electronic medical information systems are attempts to share patient information with the patient and all other appropriate medical providers, and they are quickly changing healthcare processes for the better. Online financial and banking information is another technology that provides information when and where it is needed.

• Redesign the process before considering automation: Automating an inefficient process increases costs by requiring the technology to accommodate the inefficiencies. This design principle encourages eliminating unnecessary activities before automating the process. Again, BPM systems enable quickly changing processes and the business rules that manage them.

• Design for desired performance metrics: As the saying goes, you get what you measure. It is important to select the critical process metrics carefully to encourage the desired process performance.

• Standardize processes: This design principle encourages standardizing the process so that everyone performing the optimized process performs it in the same manner.

• Consider co-located networked teams and outsourcing: This design principle incorporates co-locating teams that perform a cross-functional process; also consider whether the process can be outsourced to a vendor.
Phase 2: Modeling

In the business process modeling phase, the process models are developed for the organization. Furterer provides four main meta, or conceptual, models in her strategic business process architecture (SBPA). The meta models describe the elements of an organization that can be used to model the organization and the relationships between the elements of the models. The four main meta models are39

1. strategic models, which include the elements of the strategic plans, including an organization's strategies, goals, objectives, and initiatives, as well as other elements that help to define the strategies in an organization;

2. value chain and process models, which help to describe the operational processes of the organization;
3. leadership and workforce models, which describe how leaders govern, lead, and manage the workforce; and

4. information and application models, which describe the conceptual information and information system applications that support the business's capabilities.

The value chain and process meta model (Figure 8.12) will guide the modeling of the processes in the organization. Starting from the value chain activities, describing the high-level value-added and support activities, these provide a system view of the organization's operational processes. The business function decomposition provides an inventory of all of the functions that can be used to guide the design and documentation of the business processes via process maps, or the process architecture map (PAM), discussed earlier.

Figure 8.12 Value chain and process meta model (a UML class diagram relating the value system, value and support activities, business functions, business capabilities, business processes, business components, and application services). Source: Sandra L. Furterer.
Phase 3: Execution

In the execution phase, the processes that were designed and modeled are implemented. Listed below are key aspects of good solutions:40

• Involves "change," not "train": The revisions to the new process should involve significant change and improvement, not just training people how to do the process better; training should be a given.

• Takes the root cause out of the process: It is important to have identified the root causes within the process analysis phase and to ensure that the process design fixes them.

• Cost-effective: The new solution should be cost-effective.

• Innovative "upstream" fix: For cross-functional processes, the entire process should be optimized.

• Employs poka-yoke (mistake-proofing): Poka-yoke, a Japanese term for mistake-proofing, should be designed into the process so that problems are prevented, not fixed after they occur.

• Involves the customer/customer requirements: Ensure that the process design meets the customers' requirements.

• Allows you to meet your performance target: Ensure that the process design allows the performance targets to be met.

These six elements help to ensure we are designing a lasting solution:

1. Review the data gathered (process map, data and graphical analysis, etc.) in both the process modeling and process analysis phases.

2. Study the verified root causes identified in the process analysis phase.

3. Look for changes in the process that will eliminate the root causes.

4. Use process knowledge, but also look for other similar processes, both inside and outside your organization.

5. Talk with many people—including customers and subject matter experts—to get creative ideas.

6. Benchmark similar processes in other industries to consider innovative ideas that others have made work in their processes.

There are three main goals of organizational or process change:

• Change the way people think and behave in an organization—a new mindset; this begins with a person at an individual level and is why training and project work are vital.

• Change the norms—standards, models, or patterns—that guide behavior.
• Change the organization's systems or processes—all work is a process, and improvement requires change at the process and system level; this cannot happen until people's behavior and norms change.

What is process transformation?

• Process transformation is the fundamental rethinking of a process—the goal is innovation and the creative application of new business approaches, techniques, and technology.

• Process transformation is cross-functional.

• Change agents must understand the culture, resistance to change, and limitations to change to make change happen.

• Process transformation can be applied at any level in a business as long as it is related to the radical rethinking of how the business area should work—including its markets and products.

By definition, improvement makes whatever you have better; it does not rethink it. So, if you are looking for ways to do the same things faster, at lower cost, or with improved quality, you are still doing the same things. At some point, however, the industry will have evolved. Technology will have moved beyond your ability to simply improve what you have. Your competition will have better ways, and the market will require new approaches. For these reasons, transformation should be regarded as a strategic move. It is a long-term commitment to the business and to its ability to compete in the global market. It is also a commitment to modernize, upgrade, and rethink how the business should work in the future.
Phase 4: Monitoring

In the monitoring phase, process performance management can be used to ensure that the processes maintain the gains and are performed as designed, and to identify further improvements. Process performance management is the measurement of specific operational characteristics as defined by key performance indicators (KPIs), standards, labor contracts, the finance department, industry best practices, ISO, and others.41 These factors affect the ability to measure processes:

• Level of flexibility in accessing data from multiple applications
• Process understanding
• Agreement on what to measure and how to measure it
• Ability of IT to build flexible performance measurement applications
• Reporting presentation and data drill-down
• Acceptance of performance measurement by those who will be measured
Process performance management involves an understanding of both what to measure and how to measure it. Process managers must measure processes and define metrics to understand how the process is performing and to help focus process optimization. Some examples of measures of operational performance are the following:

• Quality: defects, exceptions, waste, problems, opportunities, compliance to standards and regulatory/legal requirements, first-run yield
• Timeliness: process time, delay time, takt time, throughput time
• Cost: cost of processes, scrap, savings
• Customer satisfaction/experience: satisfaction, interactions, errors, requirements, complaints, experience
• Capacity: transaction volume
• Flow: backlog, constraints, bottlenecks

Characteristics of effective measurement:

• Alignment: alignment with corporate strategies and objectives
• Accountability: a metrics owner defines, monitors, and controls
• Predictive: predicts process performance
• Actionable: timely, actionable data, so that one can intervene in the process
• Few in number: select high-value information
• Easy to understand: straightforward, connected to action
• Balanced and linked: metrics are balanced and aligned
• Transformative: metrics support transformational change
• Standardized: standard metrics
• Context-driven: within the process, gauge performance over time
• Reinforced: attach compensation or incentives
• Relevant: reviewed and refreshed metrics

There are several measurement success factors, including:

• Measurement skill sets exist and are used
• Measurement roles and responsibilities are clear and communicated
• The organizational structure supports the measurement system
• There is empowerment with accountability to transform processes based on results
• Process performance results are tied to compensation and incentives
Phase 5: Optimization42

The purpose of the optimization phase is to optimize the process by streamlining it, removing inefficiencies, and ensuring that the process goals of the stakeholders are met. There are many tools for optimizing a process.

Process simulation: Simulation tools can be applied to enable process modeling. This provides the ability to test potential process solutions and propose them prior to actual implementation of the process changes.

Optimization modeling: A number of operations research optimization tools can be applied:

• Linear programs
• Network models
• Queuing models
• Markov chains
• Heuristics
• Inventory and production planning applications
• Transportation, assignment models, and logistics applications
• Integer programming

It is important to pilot improvements and test that they work. There should be an assessment of the process performance against the process metrics using the appropriate statistical inferential tests. Continuous improvement through a plan–do–check–act cycle is an important part of the optimize phase. Process improvements should continue throughout the life cycle of the process, including continual improvement of the technology that supports the processes.
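As one small illustration of the queuing models listed above, the classic M/M/1 formulas estimate congestion in a single-server process from an arrival rate and a service rate. The sketch below is a minimal Python example; the rates are assumed values for illustration, not data from the text:

# M/M/1 queue: Poisson arrivals (rate lam) and exponential service (rate mu)
lam = 8.0   # arrivals per hour (assumed)
mu = 10.0   # services per hour (assumed); requires lam < mu for stability

rho = lam / mu          # server utilization -> 0.8
L = rho / (1 - rho)     # average number of items in the system -> 4.0
W = 1 / (mu - lam)      # average time an item spends in the system -> 0.5 hour
Wq = rho / (mu - lam)   # average wait before service begins -> 0.4 hour

print(rho, L, W, Wq)

Even a back-of-the-envelope model like this shows why reducing utilization or service-time variation can dramatically shorten waits before a redesign is piloted.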
Chapter 9 Management and Planning Tools
QUALITY MANAGEMENT TOOLS Select and apply affinity diagrams, tree diagrams, process decision program charts, matrix diagrams, interrelationship digraphs, prioritization matrices, and activity network diagrams. (Apply) Body of Knowledge II.D.1
The quality management tools were developed to promote innovation, communicate information, and successfully plan major projects, and they are used mostly by managers and knowledge workers. These tools analyze problems of a complex nature and are especially valuable in situations where little or no data are available for decision making—that is, for exploring problems, organizing ideas, and converting concepts into action plans. There are seven quality management tools:

1. Affinity diagram. Organizes a large number of ideas into their natural relationships.

2. Tree diagram. Breaks down broad categories into finer and finer levels of detail, helping you move your thinking step by step from generalities to specifics.

3. Process decision program chart (PDPC). Systematically identifies what might go wrong in a plan under development.1

4. Matrix diagram. Shows the relationship between two, three, or four groups of information and can give information about the relationship, such as its strength, the roles played by various individuals, or measurements.

5. Interrelationship digraph (relations diagram). Shows cause-and-effect relationships and helps you analyze the natural links between different aspects of a complex situation.
6. Prioritization matrices and matrix data analysis. Matrix data analysis is a complex mathematical technique for analyzing matrices; it is often replaced in this list by the similar prioritization matrix. One of the most rigorous, careful, and time-consuming decision-making tools, a prioritization matrix is an L-shaped matrix that uses pairwise comparisons of a list of options against a set of criteria in order to choose the best option(s).

7. Activity network diagram (arrow diagram). Shows the required order of tasks in a project or process, the best schedule for the entire project, and potential scheduling and resource problems and their solutions.
Affinity Diagram

This tool is used to organize a large group of facts or thoughts into smaller themes that are easier to understand and work with. After conducting a brainstorming session, an affinity diagram helps to organize the resulting ideas. Affinity diagrams help break down large, complicated issues into easy-to-understand categories, themes, or affinities, allowing teams to take ideas that seem unconnected and identify an inherent organization and pattern. This facilitates group consensus when necessary.

There are two ways to create an affinity diagram: (1) each group member writes their ideas on sticky notes (one idea per note); then, without talking, the group members sort the ideas simultaneously into related groupings or categories; upon completion of the grouping, a heading is selected for each group; the resulting pattern is the affinity diagram, which at this point should have promoted discussion of the situation being analyzed; (2) generate the themes first, then brainstorm ideas for each theme.

An example of an affinity diagram for generating ideas for improving team performance is shown in Figure 9.1.

Figure 9.1 Affinity diagram of "methods to improve team performance," grouping team success factors under themes such as demonstrated commitment (assign management sponsors, assign a facilitator to each team, have all team members volunteer), technical process knowledge (better data analysis skills, training for team leaders, simpler process improvement model), meeting process (improved use of meeting time, reduced conflict between team members, more listening and less talking in meetings), and support tools (access to a computer, standardized forms for agendas and minutes, better meeting facilities).
Tree Diagram

This is a method used to separate a broad goal into increasing levels of detailed actions identified to achieve the stated goal. This analysis helps identify actions or sub-tasks required to solve a problem, implement a solution, or achieve an objective. The tree diagram can supplement the affinity diagram and interrelationship digraph by identifying items not previously detected. This process allows teams to go from thinking of broad goals to more-specific details. The tree diagram is also useful as a communication tool when many details need to be explained to others. See an example of a tree diagram in Figure 9.2.
Process Decision Program Chart

No matter how well planned an activity is, things do not always turn out as expected. The process decision program chart, similar to fault tree analysis, can be used to identify contingency plans for several situations that might contribute to failure of a particular project. The tasks to be completed for the project are listed, followed by a list of problems that could occur. For each problem, a decision can be made in advance as to what will be done if each problem does occur. The contingency plan should also include ensuring availability of the resources needed to take the action. Figure 9.3 shows an example process decision program chart, a tool that is especially useful when a process is new, meaning there are no previous data on types and frequencies of failure.
Figure 9.2 Tree diagram example. The goal "How can we make operating instructions more clear?" branches into actions such as developing check sheets, certifying instruction writers (selecting people with the proper skills, sending them to a training course), involving operators in the review of instructions (interviewing operators for ideas, following up to make sure operators' ideas are used), standardizing procedures, and documenting key tasks. Source: ASQ's Foundations in Quality Self-Directed Learning Series, Certified Quality Manager, Module 3 (Milwaukee, WI: ASQ Quality Press, 2001), 3–26.

Matrix Diagrams

This tool presents information in a table format of rows and columns to graphically reveal the strength of relationships between two or more sets of information. Patterns of project responsibilities become apparent so that tasks may be assigned appropriately.
Figure 9.3 Process decision program chart example. The objective (write a draft procedure) branches into tasks (research, type, print); each task lists what could go wrong (no materials, library closed, computer malfunction, printer malfunction, out of ink/toner, run out of paper) and the corresponding contingency plans (verify ahead of time, check times, arrange a loaner, keep spares and supplies). Source: R. T. Westcott, The Certified Manager of Quality/Organizational Excellence Handbook, 3rd ed. (Milwaukee, WI: ASQ Quality Press, 2006), 343.
Table 9.1 When to use differently shaped matrices.

Matrix type    No. of groups   Relationships
L-shaped       2 groups        A ↔ B (or A ↔ A)
T-shaped       3 groups        B ↔ A ↔ C but not B ↔ C
Y-shaped       3 groups        A ↔ B ↔ C ↔ A
C-shaped       3 groups        All three simultaneously (3-D)
X-shaped       4 groups        A ↔ B ↔ C ↔ D ↔ A but not A ↔ C or B ↔ D
Roof-shaped    1 group         A ↔ A when also A ↔ B in L or T
Depending on how many groups are being compared, there are six differently shaped matrices: L, T, Y, C, X, and roof-shaped. Table 9.1 shows how to decide when to use each type of matrix. An L-shaped matrix relates two groups of items to each other (or one group to itself). A T-shaped matrix relates three groups of items: groups B and C are each related to A, but B and C are not related to each other. A Y-shaped matrix relates three groups of items; each group is related to the other two in a circular fashion. A C-shaped matrix relates three groups of items together simultaneously, in 3-D.
Customer requirements matrix example: purity % requirements (> 99.0 to > 99.4) and trace metals (ppm) for Customers D, M, R, and T.
P(X ≥ 4) = P(X = 4) + P(X = 5) ≈ .00000724 + .00000005 ≈ .00000729
An alternate calculation: P(X ≥ 4) = 1 − P(X ≤ 3) = 1 − .99999271 ≈ .00000729.
The mean and standard deviation of a binomial distribution are given by the formulas:

μ = np and σ = √(np(1 − p))

In this example (n = 5, p = .035):

μ = 5 × .035 = .175
σ = √(5 × .035 × .965) ≈ .411
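For readers who prefer to verify such calculations in software, a minimal Python sketch of the binomial probabilities above (illustrative only; the handbook itself assumes a calculator or table) is:

import math

n, p = 5, 0.035  # sample size and fraction defective from the example

def binom_pmf(k, n, p):
    # P(X = k) for a binomial distribution: C(n, k) * p^k * (1 - p)^(n - k)
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

p4 = binom_pmf(4, n, p)            # ~7.24e-06
p5 = binom_pmf(5, n, p)            # ~5.25e-08
print(p4 + p5)                     # P(X >= 4) ~ .00000729
print(n * p)                       # mean = .175
print(math.sqrt(n * p * (1 - p)))  # standard deviation ~ .411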
Poisson Distribution When counting defects rather than defective units, the Poisson distribution is used rather than the binomial distribution. For example, a process for making sheet goods has an upper specification of eight bubbles per square foot. Every
30 minutes, the number of bubbles in a sample square foot is counted and recorded. If the average of these values is c, the Poisson distribution is:

P(X = x) = e^(−c) c^x / x!

where
x = number of defects
c = mean number of defects
e = natural log base (use the e^x key on the calculator)

In the example, if the average number of defects (bubbles) per square foot is 2.53, the probability of finding exactly four bubbles is:

P(X = 4) = e^(−2.53) (2.53)^4 / 4! ≈ .136
Consult the calculator instruction manual to evaluate e^(−2.53). The symbol in the position of c in the above formula varies from book to book; many statistics texts use the Greek letter λ (lambda). The use of c in this formula is more consistent with the control limit formulas for the c-chart to be discussed later. The formulas for the mean and standard deviation of the Poisson distribution are:
μ = c and σ = √c
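A quick Python check of the bubble example, as a sketch (c = 2.53 comes from the text above):

import math

c = 2.53  # mean number of bubbles per square foot

def poisson_pmf(x, c):
    # P(X = x) = e^(-c) * c^x / x!
    return math.exp(-c) * c**x / math.factorial(x)

print(poisson_pmf(4, c))  # ~0.136, probability of exactly four bubbles
print(c, math.sqrt(c))    # mean and standard deviation (sigma = sqrt(c))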
Weibull Distribution

The Weibull distribution is often used in the field of life data analysis due to its flexibility—it can mimic the behavior of other statistical distributions, such as the normal and the exponential. It requires the use of the natural logarithmic base for probability calculations.

If the failure rate decreases over time, then k < 1. If the failure rate is constant over time, then k = 1. If the failure rate increases over time, then k > 1. An understanding of the failure rate may provide insight as to what is causing the failures: a decreasing failure rate suggests "infant mortality," that is, defective items fail early and the failure rate decreases over time as they fall out of the population; a constant failure rate suggests that items are failing from random events; an increasing failure rate suggests wear-out, meaning parts are more likely to fail as time goes on.

When k = 3.4, the Weibull distribution appears similar to the normal distribution. When k = 1, the Weibull distribution reduces to the exponential distribution.1 Taken together, this sequence of decreasing, constant, and then increasing failure rates is called the bathtub curve, because of its resemblance to a bathtub shape.
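To see how the shape parameter k drives the failure-rate behavior just described, the sketch below evaluates the standard two-parameter Weibull hazard (instantaneous failure-rate) function h(t) = (k/λ)(t/λ)^(k−1); the scale value λ = 1.0 is an arbitrary assumption for illustration:

import math

lam = 1.0  # Weibull scale parameter (arbitrary, for illustration)

def weibull_hazard(t, k, lam):
    # Instantaneous failure rate of a two-parameter Weibull distribution
    return (k / lam) * (t / lam) ** (k - 1)

for k in (0.5, 1.0, 3.0):
    rates = [round(weibull_hazard(t, k, lam), 3) for t in (0.5, 1.0, 2.0)]
    print(f"k={k}: hazard at t=0.5, 1, 2 -> {rates}")
# k < 1: decreasing failure rate (infant mortality)
# k = 1: constant failure rate (random failures; the exponential case)
# k > 1: increasing failure rate (wear-out)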
Normal Distribution

The normal distribution results from "continuous" data—that is, data that come from measurement on a continuous scale such as length, weight, or temperature. Between any two values on a continuous scale there are infinitely many other values. Consider, on an inch scale, that between 1.256 and 1.257 there are values such as 1.2562, 1.2568, 1.256999, and so on.
Area under the Normal Curve

There are several applications in the quality field for the area under a normal curve. This section discusses the basic concepts so that the individual applications will be simpler to understand when introduced later in the book.

Example: A 1.00000″ gage block is measured 10 times with a micrometer, and the readings are 1.0008, 1.0000, 1.0000, .9996, 1.0005, 1.0001, .9990, 1.0003, .9999, and 1.0000. The slightly different values were obtained due to variation in the micrometer, the technique used, and other factors contributing to measurement variation. The errors for these measurements can be found by subtracting 1.00000 from each observation. The errors are, respectively, .0008, 0, 0, −.0004, .0005, .0001, −.0010, .0003, −.0001, and 0. These error values have been placed on a histogram in Figure 10.3a. If 500 measurements had been made and their errors plotted on a histogram, it would look something like Figure 10.3b. The histogram in Figure 10.3b approximates the normal distribution. This distribution occurs frequently in various applications in the quality sciences. The normal curve is illustrated in Figure 10.4.
Figure 10.3 Histograms of error values: (a) histogram of 10 error values; (b) histogram of 500 error values (both plotted over the range −0.0010 to 0.0010). Source: D. W. Benbow, A. K. Elshennawy, and H. F. Walker, The Certified Quality Technician Handbook (Milwaukee, WI: ASQ Quality Press, 2003), 37.
Figure 10.4 Standard normal curve (horizontal axis marked in standard deviations from −3 to 3). Source: D. W. Benbow, A. K. Elshennawy, and H. F. Walker, The Certified Quality Technician Handbook (Milwaukee, WI: ASQ Quality Press, 2003), 37.
It has a fairly complex formula, so locations on the normal curve are seldom calculated directly. Instead, a "standard normal table," such as the one in Appendix B, is used. Some properties of the standard normal curve are:

• The mean is zero, and the curve is symmetric about zero.
• It has a standard deviation of 1. The units on the horizontal axis are standard deviations.
• The total area under the curve is one square unit.
• The curve never touches the horizontal axis; it extends infinitely far in each direction to the right and the left.

Example: Use the standard normal table (Appendix B) to find the area under the standard normal curve to the right of 1. The values on the horizontal axis are often referred to as z-values, so this problem is sometimes stated as "Find the area under the standard normal curve to the right of z = 1."
In Appendix B, find 1 in the z-column. The area associated with z = 1 is 0.1587, which is the correct answer to this problem.
Example: Find the area under the standard normal curve to the right of z = 0.
Intuitively, since the curve is symmetric about 0, we would suspect that the answer is 50%, or .50. Verify this by finding 0 in the z-column in Appendix B.
Example: Find the area under the standard normal curve to the right of z = −1. The area to the right of z = 1 is 0.1587, and because of the symmetry of the curve, the area to the left of z = −1 is also 0.1587. Since the total area under the curve is 1, the area to the right of z = −1 is 1 – 0.1587 = 0.8413. Example: Find the area under the standard normal curve between z = 0 and z = 1. This is the shaded area in Figure 10.5. The entire area to the right of z = 0 is 0.5, and the area to the right of z = 1 is 0.1587. The shaded area is 0.5 – 0.1587 = 0.3413.
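These table lookups can also be reproduced in software using the standard error function. The sketch below is a minimal Python illustration; it also previews the z-value conversion for a nonstandard normal distribution, using the 0.750″/0.004″ bar machine example that appears later in this section:

import math

def phi(z):
    # Cumulative area to the LEFT of z under the standard normal curve
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

print(1 - phi(1.0))          # area to the right of z = 1    -> ~0.1587
print(phi(1.0) - phi(0.0))   # area between z = 0 and z = 1  -> ~0.3413
print(phi(1.0) - phi(-2.0))  # area between z = -2 and z = 1 -> ~0.8185

# Nonstandard normal: mean 0.750 in., standard deviation 0.004 in.
mu, sigma = 0.750, 0.004
z = (0.754 - mu) / sigma   # convert 0.754 in. to a z-value -> 1.0
print(phi(z) - phi(0.0))   # fraction between 0.750 and 0.754 -> ~0.3413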
Figure 10.5 Area under the standard normal curve between z = 0 and z = 1. Source: D. W. Benbow, A. K. Elshennawy, and H. F. Walker, The Certified Quality Technician Handbook (Milwaukee, WI: ASQ Quality Press, 2003), 38.
Figure 10.6 Area under the standard normal curve between z = −2 and z = 1. Source: D. W. Benbow, A. K. Elshennawy, and H. F. Walker, The Certified Quality Technician Handbook (Milwaukee, WI: ASQ Quality Press, 2003), 38.
Example: Find the area under the standard normal curve between z = −2 and z = 1. This is the shaded area in Figure 10.6. From the table in Appendix B, the area to the right of z = 2 is 0.0228, so the area to the left of z = −2 is also 0.0228. Consequently, the area to the right of z = −2 is 1 − 0.0228 = 0.9772, and the area to the right of z = 1 is .1587. The shaded area is 0.9772 − 0.1587 = 0.8185.

Some normal distributions are not the standard normal distribution. The next example shows how the standard normal table in Appendix B can be used to find areas for these cases.

Example: An automatic bar machine produces parts whose diameters are normally distributed with a mean of 0.750″ and standard deviation of 0.004″. What percentage of the parts have diameters between 0.750″ and 0.754″?
Figure 10.7 illustrates the problem. This is not the standard normal distribution, because the mean is not 0 and the standard deviation is not 1. Figure 10.7 shows vertical dashed lines at standard deviations from −3 to 3. The horizontal axis has the mean scaled at the given value 0.750″, and the distance between
Figure 10.7 Area under a normal curve between 0.750″ and 0.754″ (horizontal axis scaled from .738 to .762 in steps of one standard deviation, 0.004″). Source: D. W. Benbow, A. K. Elshennawy, and H. F. Walker, The Certified Quality Technician Handbook (Milwaukee, WI: ASQ Quality Press, 2003), 39.
standard deviation markers is the given value of 0.004″. The problem asks what percentage of the total area is shaded. From the diagram, this is the area between z = 0 and z = 1 on the standard normal curve, so the area is 0.5 − 0.1587 = 0.3413. Since the total area under the standard normal curve is 1, this represents 34.13% of the area and is the answer to the problem. Further examples of this type are given in the Process Capability discussion.

The standard normal curve can be related to probability in the previous example by asking the question, "If a part is selected at random from the output of the automatic bar machine, what is the probability that it will have a diameter between 0.750″ and 0.754″?" Since 34.13% of the parts have diameters in this range, the probability is .3413.

How accurate are the answers to these examples? Suppose that 10,000 parts are produced. If the distribution is exactly normal, and the population mean and standard deviation are exactly 0.750″ and 0.004″, respectively, then the number with diameters between 0.750″ and 0.754″ would be 3,413 parts. However, since exactly normal distributions exist only in theory, and sample data are typically used to estimate the population mean and standard deviation, the actual number will vary somewhat.

Following are typical histogram shapes and what they mean (normal, skewed, and double-peaked or bimodal).2
Normal

A common pattern is the bell-shaped curve known as the normal distribution. In a normal distribution, points are as likely to occur on one side of the average as on the other. Be aware, however, that other distributions look similar to the normal distribution. Statistical calculations must be used to prove a normal distribution. Don't let the name "normal" confuse you. The outputs of many processes—perhaps even a majority of them—do not form normal distributions, but that does not
mean anything is wrong with those processes. For example, many processes have a natural limit on one side and will produce skewed distributions. This is normal—meaning typical—for those processes, even if the distribution is not called “normal”!
Normal distribution
Skewed

The skewed distribution is asymmetrical because a natural limit prevents outcomes on one side. The distribution's peak is off-center toward the limit, and a tail stretches away from it. For example, a distribution of analyses of a very pure product would be skewed because the product cannot be more than 100% pure. Other examples of natural limits are holes that cannot be smaller than the diameter of the drill bit or call-handling times that cannot be less than zero. These distributions are called right- or left-skewed according to the direction of the tail.
Right-skewed distribution
Double-Peaked or Bimodal

The bimodal distribution looks like the back of a two-humped camel. The outcomes of two processes with different distributions are combined in one set of data. For example, a distribution of production data from a two-shift operation might be bimodal if each shift produces a different distribution of results. Stratification often reveals this problem.
Bimodal (double-peaked) distribution
PROBABILITY CONCEPTS
Describe and use probability concepts: independent and mutually exclusive events, combinations, permutations, additive and multiplicative rules, and conditional probability. Perform basic probability calculations. (Apply) Body of Knowledge III.A.3
The probability that a particular event occurs is a number between 0 and 1 inclusive. For example, if a lot consisting of 100 parts has four defectives, we would say that the probability of randomly drawing a defective is .04 or 4%. Symbolically, this is written P(defective) = .04. The word “random” implies that each part has an equal chance of being drawn. If the lot had no defectives, the probability of drawing a defective would be 0, or 0%. If the lot had 100 defectives, the probability of drawing a defective would be 1, or 100%.
Basic Probability Rules

Complement Rule: The probability that an event A will not occur is 1 − (the probability that event A does occur). Stated symbolically, P(not A) = 1 − P(A). Some texts use symbols for "not A" such as −A, ~A, and sometimes A with a bar over it.

Special Addition Rule: Suppose a card is randomly selected from a standard 52-card deck. What is the probability that the card is a club? Since there are 13 clubs, P(♣) = 13/52 = .25. What is the probability that the card is either a club or a spade? Since there are 26 cards that are either clubs or spades, P(♣ or ♠) = P(♣) + P(♠) = 13/52 + 13/52 = .25 + .25 = .5. Therefore, it appears that P(♣ or ♠) = P(♣) + P(♠), which, generalized, becomes the special addition rule:

P(A or B) = P(A) + P(B)

with a word of warning—use only if A and B cannot occur simultaneously. Note that the probability that event A or event B occurs can also be denoted by P(A ∪ B).

General Addition Rule: What is the probability of selecting either a king or a club? Using the special addition rule, if it were applicable (it will be demonstrated that it is not), P(K or ♣) = P(K) + P(♣) = 4/52 + 13/52 = 17/52. This is incorrect, because there are only 16 cards that are either kings or clubs or both (13 clubs, one of which is the king of clubs, plus K♦, K♥, and K♠). The reason that the special addition rule does not work here is that the two events (drawing a king and drawing a club) can occur simultaneously. The probability that events A and B both occur is denoted by P(A and B) or P(A ∩ B). This leads to the general addition rule:

P(A or B) = P(A ∪ B) = P(A) + P(B) − P(A ∩ B)
The special addition rule has the advantage of being somewhat simpler, but its disadvantage is that it is not valid when A and B can occur simultaneously. The general addition rule, although seemingly more complicated, is always valid! For the previous example, P(K and ♣) = 1/52, since only one card is both a K and a ♣. To complete the example:

P(K or ♣) = P(K) + P(♣) − P(K and ♣) = 4/52 + 13/52 − 1/52 = 16/52

Two events that cannot occur simultaneously are called mutually exclusive. So, the warning for the special addition rule is sometimes stated as follows: "Use only if events A and B are mutually exclusive."
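The addition rules are easy to verify by brute-force enumeration over a 52-card deck; a minimal Python sketch:

from fractions import Fraction

# Build a 52-card deck as (rank, suit) pairs
ranks = ['A', '2', '3', '4', '5', '6', '7', '8', '9', '10', 'J', 'Q', 'K']
suits = ['clubs', 'spades', 'hearts', 'diamonds']
deck = [(r, s) for r in ranks for s in suits]

def prob(event):
    # Probability of an event as (favorable cards) / (total cards)
    return Fraction(sum(1 for card in deck if event(card)), len(deck))

king = lambda c: c[0] == 'K'
club = lambda c: c[1] == 'clubs'

# General addition rule: P(K or club) = P(K) + P(club) - P(K and club)
print(prob(lambda c: king(c) or club(c)))  # 16/52, printed as 4/13
print(prob(king) + prob(club) - prob(lambda c: king(c) and club(c)))  # same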
Contingency Tables

Suppose various parts in a lot or batch have two categories of attributes. The categories are color and size. Then further suppose that each part is one of four colors (red, yellow, green, or blue) and one of three sizes (small, medium, or large). A useful tool that displays these attributes is the contingency table:
         Red   Yellow   Green   Blue
Small     16     21       14     19
Medium    12     11       19     15
Large     18     12       21     14
Each part belongs in exactly one column and exactly one row. So, each part belongs in exactly one of the 12 cells in our example. When columns and rows are totaled, the table becomes:
         Red   Yellow   Green   Blue   Totals
Small     16     21       14     19      70
Medium    12     11       19     15      57
Large     18     12       21     14      65
Totals    46     44       54     48     192
Note that the total of all of the parts with the various attributes can be computed in two ways: by adding the subtotals for size vertically or by adding the subtotals for color horizontally.

If one of the 192 parts is randomly selected, find the probability that the part is red. Solution:

P(red) = P(red ∩ small) + P(red ∩ medium) + P(red ∩ large) = 16/192 + 12/192 + 18/192 = 46/192 ≈ .240

Find the probability that the part is small. Solution:

P(small) = P(small ∩ red) + P(small ∩ yellow) + P(small ∩ green) + P(small ∩ blue) = 16/192 + 21/192 + 14/192 + 19/192 = 70/192 ≈ .365
Find the probability that the part is red and small. Solution: Since there are 16 parts that are both red and small, P(red ∩ small) = 16/192 ≈ .083. Find the probability that the part is red or small. Solution: Since it is possible for a part to be both red and small simultaneously, the general addition rule must be used.
P(red or small) = P(red) + P(small) − P(red ∩ small) = 46/192 + 70/192 − 16/192 ≈ .521
Find the probability that the part is red or yellow. Solution: Since no part can be both red and yellow simultaneously, the special addition rule may be used.
P(red or yellow) = P(red) + P(yellow) = 46/192 + 44/192 ≈ .469
Notice that the general addition rule could also have been used:
P(red or yellow) = P(red) + P(yellow) − P(red ∩ yellow) = 46/192 + 44/192 − 0 ≈ .469
Conditional Probability

Continuing with the previous example, suppose that the selected part is known to be green. With this knowledge, what is the probability that the part is large? Solution: Since the part is located in the green column of the table, it is one of the 54 green parts. So, the lower number in the probability fraction is 54. Since 21 of the 54 green parts are large, P(large, given that it is green) = 21/54 ≈ .389.

This is referred to as conditional probability. It is denoted P(large | green) and read as "the probability that the part is large given that it is green." It is useful to remember that the category to the right of the | in the conditional probability symbol is the denominator of the probability fraction.

Find the following probabilities:

P(small | red) Solution: P(small | red) = 16/46 ≈ .348
P(red | small) Solution: P(red | small) = 16/70 ≈ .229
P(red | green) Solution: P(red | green) = 0/54 = 0

A formal definition for conditional probability is

P(B | A) = P(A ∩ B) / P(A)

Verifying that this formula is valid in each of the above examples will aid in understanding this concept.
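The conditional-probability definition can be checked directly against the color/size table; a small Python sketch (the counts are copied from the table above):

from fractions import Fraction

# Counts from the color/size contingency table
counts = {
    ('small', 'red'): 16, ('small', 'yellow'): 21, ('small', 'green'): 14, ('small', 'blue'): 19,
    ('medium', 'red'): 12, ('medium', 'yellow'): 11, ('medium', 'green'): 19, ('medium', 'blue'): 15,
    ('large', 'red'): 18, ('large', 'yellow'): 12, ('large', 'green'): 21, ('large', 'blue'): 14,
}
total = sum(counts.values())  # 192

def p(size=None, color=None):
    # Marginal or joint probability read off the table
    n = sum(v for (s, c), v in counts.items()
            if (size is None or s == size) and (color is None or c == color))
    return Fraction(n, total)

# P(large | green) = P(large and green) / P(green) = 21/54
print(p('large', 'green') / p(color='green'))  # 7/18 ~ .389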
General Multiplication Rule

Multiplying both sides of the conditional probability formula by P(A):

P(A ∩ B) = P(A) × P(B | A)
This is called the general multiplication rule. It is useful to verify that this formula is valid using examples from the contingency table.
Independence and the Special Multiplication Rule

Consider the contingency table:
         X    Y    Z    Totals
F        17   18   14     49
G        18   11   16     45
H        25   13   18     56
Totals   60   42   48    150
P(G | X) = 18/60 = .300 and P(G) = 45/150 = .300, so P(G | X) = P(G)

The events G and X are called statistically independent, or just independent, events. Knowing that a part is of type X does not affect the probability that it is of type G. Intuitively, two events are called independent if the occurrence of one does not affect the probability that the other occurs. The formal definition of independence is P(B | A) = P(B). Making this substitution in the general multiplication rule produces the special multiplication rule:

P(A ∩ B) = P(A) × P(B)

So, the warning for the special multiplication rule is sometimes stated as follows: "Use only if events A and B are independent."

Example: A box holds 129 parts, of which six are defective. A part is randomly drawn from the box and placed in a fixture. A second part is then drawn from the box. What is the probability that the second part is defective?
The probability cannot be determined directly unless the outcome of the first draw is known. In other words, the probabilities associated with successive draws depend on the outcome of the previous draws. Using the symbol D1 to denote the event that the first part is defective and G1 to denote the event that the first part is good, and so on, following is one way to solve the problem.
There are two mutually exclusive events that result in a defective part for the second draw: good on the first draw and defective on the second draw, or else defective on the first draw and defective on the second draw. Symbolically, these two events are (G1 and D2) or else (D1 and D2)
The first step is to find the probability for each of these events.
By the general multiplication rule: P(G1 ∩ D2) = P(G1) × P(D2 | G1) = 123/129 × 6/128 = .045
Also, by the general multiplication rule: P(D1 ∩ D2) = P(D1) × P(D2 | D1) = 6/129 × 5/128 = .002
Using the special addition rule: P(D2) = .045 + .002 = .047
When drawing two parts, what is the probability that one will be good and one defective? Drawing one good and one defective can occur in two mutually exclusive ways:

P(one good and one defective) = P((G1 and D2) or (D1 and G2)) = P(G1 ∩ D2) + P(D1 ∩ G2) (mutually exclusive events)

P(G1 ∩ D2) = P(G1) × P(D2 | G1) = 123/129 × 6/128 = .045
P(D1 ∩ G2) = P(D1) × P(G2 | D1) = 6/129 × 123/128 = .045

So, P(one good and one defective) = P(G1 ∩ D2) + P(D1 ∩ G2) = .045 + .045 = .090
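The two-draw probabilities can be confirmed by enumerating all ordered pairs of distinct parts; a Python sketch:

from itertools import permutations

# 129 parts, 6 defective (True = defective), drawn without replacement
parts = [True] * 6 + [False] * 123

pairs = list(permutations(range(len(parts)), 2))  # all ordered draws of 2 parts
total = len(pairs)                                # 129 * 128 = 16,512

p_second_def = sum(parts[j] for _, j in pairs) / total
p_one_each = sum(parts[i] != parts[j] for i, j in pairs) / total

print(round(p_second_def, 3))  # ~0.047, P(second part defective)
print(round(p_one_each, 3))    # ~0.089 (the text's .045 + .045 = .090 reflects rounding)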
Combinations

Example: A box of 20 parts has two defectives. The quality process analyst inspects the box by randomly selecting two parts. What is the probability that both parts selected are defective? The general formula for this type of problem is:

P = (number of ways an event can occur) / (number of possible outcomes)
The “event” in this case is selecting two defectives, so “number of ways an event can occur” refers to the number of ways two defective parts could be selected. There is only one way to do this since there are only two defective parts. Therefore, the top number in the fraction is 1. The lower number in the fraction is the “number of possible outcomes.” This refers to the number of different ways of selecting two parts from the box. This is also called the “number of combinations of two objects from a collection of 20 objects.” The formula is:
Number of combinations of n objects taken r at a time = nCr = n! / [r!(n − r)!]

Note: Another way of writing nCr is the binomial coefficient, (n over r).

In this formula, the exclamation mark is pronounced "factorial," so n! is pronounced "n factorial." The value of 6! is 6 × 5 × 4 × 3
× 2 × 1 = 720. The value of n! is the result of multiplying the first n positive whole numbers. Most scientific calculators have a factorial key, typically labeled x!. To calculate 6! using this key, press 6 followed by the x! key.

Returning to the previous example, the lower number in the fraction is the number of possible combinations of 20 objects taken two at a time. Substituting into this formula:

20C2 = 20! / [2!(20 − 2)!] = 20! / (2!18!)
This value would be calculated by using the following sequence of calculator keystrokes: 20 x! ÷ (2 x! x 18 x!) = 190
An easier manual calculation for this factorial problem is:

20C2 = (20 × 19 × 18!) / [(2 × 1) × 18!] = (20 × 19) / 2 = 10 × 19 = 190
The answer to the example is 1/190 ≈ .005.
How might this be useful in the inspection process? Suppose that a supplier has shipped this box with the specification that it contain no more than two defective parts. If the box just meets that specification, the probability of drawing two defectives is only ≈ .005, so finding both sampled parts defective does not bode well for the supplier's quality system.

Example: A box of 20 parts has three defectives. The quality process analyst inspects the box by randomly selecting two parts. What is the probability that both parts selected are defective?
The bottom term of the fraction remains the same as in the previous example. The top term is the number of combinations of three defective objects taken two at a time:

3C2 = 3! / [2!(3 − 2)!] = 3! / (2!1!) = (3 × 2 × 1) / (2 × 1 × 1) = 3
To see that this makes sense, name the three defectives A, B, and C. The number of different two-letter combinations of these three letters is AB, AC, BC. Note that AB is not a different combination from BA because it is the same two letters. If two defectives are selected, the order in which they are selected is not important. The answer to the problem is 3/190 ≈ .016.
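Python's math.comb makes these counts direct; a brief sketch of both box examples:

import math

outcomes = math.comb(20, 2)        # ways to choose 2 parts from 20 -> 190

# Box with exactly 2 defectives: only 1 way to pick both defectives
print(math.comb(2, 2) / outcomes)  # 1/190 ~ .005

# Box with 3 defectives: C(3, 2) = 3 ways to pick 2 of them
print(math.comb(3, 2) / outcomes)  # 3/190 ~ .016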
An important thing to remember: combinations are used when order is not important! Note: Calculators have an upper limit on the value the x! key can handle. If a problem requires a higher factorial, use the statistical function on a spreadsheet
program. It is interesting to observe that a human can calculate the value of some factorial problems that a calculator cannot.

Example: Find 1000!/997!. Most calculators cannot handle 1000!, but humans know that the terms of this fraction can be written as

(1000 × 999 × 998 × 997 × 996 × 995 × 994 × . . .) / (997 × 996 × 995 × 994 × . . .)

The factors in the bottom term cancel out all but the first three factors in the top term, so that the answer is 1000 × 999 × 998, which most of us prefer to calculate using a calculator:

1000!/997! = (1000 × 999 × 998 × 997!) / 997! = 1000 × 999 × 998 = 997,002,000
Permutations

With combinations, the order of the objects does not matter. Permutations are very similar, except that the order does matter. Example: A box has 20 parts labeled A through T. Two parts are randomly selected. What is the probability that the two parts are A and T, in that order? The general formula applies:

P = (number of ways an event can occur) / (number of possible outcomes)

The bottom term of the fraction is the number of orderings, or permutations, of two objects from a collection of 20 objects. The general formula:

Number of permutations of n objects taken r at a time = nPr = n! / (n − r)!

In this case, n = 20 and r = 2:

20P2 = 20! / (20 − 2)! = 20!/18! = (20 × 19 × 18 × . . . × 1) / (18 × 17 × . . . × 1) = 20 × 19 = 380
This fraction can be calculated on a calculator using the following keystrokes: 20 x! ÷ 18 x! =. The answer is 380. Of these 380 possible permutations, only one is AT, so the top term in the fraction is 1. The answer to the probability problem is 1/380 ≈ .003.

Example: A team with seven members wants to select a task force of three people to collect data for the next team meeting. How many different three-person task forces could be formed? This is not a permutations problem, because the order in which people are selected does not matter. In other words, the task force consisting of Barb, Bill, and Bob is the same task force as the one consisting
of Bill, Barb, and Bob. Therefore, the combinations formula will be used to calculate the number of combinations of three objects from a collection of seven objects, or seven objects taken three at a time:

7C3 = 7! / [3!(7 − 3)!] = 7! / (3!4!) = 35
Thirty-five different three-person task forces could be formed from a group of seven people.
Example: A team with seven members wants to select a cabinet consisting of a chairperson, facilitator, and scribe. How many different three-person cabinets could be formed? Here, the order is important because the cabinet consisting of Barb, Bill, and Bob will have Barb as chairperson, Bill as facilitator, and Bob as scribe, while the cabinet consisting of Bill, Barb, and Bob has Bill as chairperson, Barb as facilitator, and Bob as scribe. This is a permutations problem because the order in which people are selected matters. Therefore, the permutations formula will be used to calculate the number of permutations of three objects from a collection of seven objects, or seven objects taken three at a time:

7P3 = 7! / (7 − 3)! = 7!/4! = 210
The answer is 210 different three-person cabinets could be formed from a group of seven people.
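As with combinations, these counts can be computed directly in Python (math.perm requires Python 3.8 or later); a brief sketch:

import math

print(math.perm(20, 2))      # ordered selections of 2 parts from 20 -> 380
print(1 / math.perm(20, 2))  # P(drawing A then T in that order) ~ .003
print(math.comb(7, 3))       # unordered three-person task forces from 7 -> 35
print(math.perm(7, 3))       # ordered cabinets (chair, facilitator, scribe) -> 210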
RELIABILITY CONCEPTS Define basic reliability concepts: mean time to failure (MTTF), mean time between failures (MTBF), mean time between maintenance (MTBM), mean time to repair (MTTR). Identify elements of the bathtub curve model and how they are used to predict failure patterns. (Remember) Body of Knowledge III.A.4
Reliability is a major area in total quality control. When a customer purchases a product, that customer expects dependability and consistency. Consumers are looking for more than flashy, innovative products; they also expect the product to work correctly every time for a long period of time.
Reliability is a considerable part of the product development cycle. The earlier reliability is planned in the design stages of the product life cycle, the greater the product's ability to satisfy the customer and reduce field warranty issues. While deploying QFD, the team may be using VOC tools such as warranty data along with SPC data on similar parts and processes, but it should also be using reliability tools such as process failure mode and effects analysis (PFMEA), failure rates, and maintainability, as discussed further in this chapter.
Basic Concepts

Reliability is the probability that an item can perform its intended function for a specified interval under stated conditions.3 The definition consists of the following concepts:

• Probability. Since reliability is a probability, it is a variable that can be measured. Probability theory provides a foundation to describe the properties of a physical process mathematically.

• Performance. To calculate probability, a product (or "unit") must exist in one of two states: performs intended function or fails to perform intended function. A unit's failure condition must be clearly defined, as it could be a degree of diminishing performance.

• Conditions. The stated conditions specify limits within which a unit should operate. Customers have some responsibility to use a unit within these limits; however, product designers must anticipate and design for stress conditions beyond those that can occur during proper use.

• Time. A specified interval is a period of use, or "mission time," that must be stated or implied to calculate reliability. The interval can be measured in hours, miles, cycles, or other units of product use. There are some devices, sometimes referred to as one-shot devices, that do not have a traditional mission time. Examples are the crash sensor for an automobile airbag system or the igniter for a rocket engine. These devices operate only when called on, resulting in instant success or failure.
Probability Density Functions

Reliability engineers must know how a product fails in order to calculate, predict, and improve reliability. To understand how a product fails, a reliability engineer will reference an existing probability density function that describes the failure. Probability density functions show the probability of product failure with time or some other measure of product use such as miles or cycles. There are several distributions that reliability engineers can choose from when analyzing a product's reliability. Three of the more commonly used distributions are the normal, exponential, and Weibull.
Bathtub Curve

The bathtub curve, named for its shape, is a model used to describe failure patterns of a population over the entire life of a product. See Figure 10.8.
Figure 10.8 Bathtub curve. The vertical axis is failure rate and the horizontal axis is time; the curve falls through the infant mortality region (decreasing failure rate), flattens through the random failure region (constant failure rate), and rises in the wear-out region (increasing failure rate), with TW marking the beginning of wear-out.
The curve shows how fast product failures are taking place and whether the failure rate is increasing, decreasing, or staying the same. The curve can be divided into three regions:

• The infant mortality period (also called the early life period or burn-in period) has a rapidly decreasing failure rate. These early life failures can be extracted through a process called burn-in. Burn-in can be performed at normal operating conditions or at higher-than-normal stress levels. Burn-in can serve two purposes: to weed out the units with nonconformities, hence improving the reliability of units delivered to the customer, and to investigate each failure to eliminate root causes and make design improvements. Burn-in is very costly. Methods to reduce early life failures include the following:

– Proper vendor certification
– Manufacturing and quality engineers participating in design reviews
– Acceptance sampling
– Use of process failure mode and effects analysis (PFMEA)
– Use of continuous process capability studies and statistical process control procedures, including control charts
– Design for manufacture and assembly (DFMA)

• The random failure period (also called the useful life period) has a constant or flat failure rate. With a constant failure rate, the product's reliability is at its highest during this period, and product use and customer use are also at their highest. The exponential distribution (see Figure 10.9), having a constant failure rate, corresponds with that of the useful life period, and so can be used to model a product's time to failure in this period of the life cycle.
Figure 10.9 Reliability can be modeled by the exponential distribution (probability density f(t) and reliability R(t) plotted against time). Source: R. A. Dovich, Reliability Statistics (Milwaukee, WI: ASQ Quality Press, 1990), 13.
• The wear-out period has an increasing failure rate, signifying that there is an increasing probability of failure and that age is a factor in the probability of failure. Most items become more likely to fail with age due to accumulated wear. A good maintenance plan will require replacing a unit or appropriate parts of the unit just before it is predicted to enter the wear-out period. If the shortest-lived part cannot be replaced, it becomes the determinant of the start time of the product's wear-out phase. Engineers must take that into account when designing the product and determining a useful service life to the customer. Because the normal distribution has an increasing failure rate, it can be used to model a product's time to failure in the wear-out period.

Each region of the bathtub curve has an associated probability distribution and typical failure causes, as listed in Table 10.1.

Table 10.1 Relationship between each bathtub phase, its probability distribution, and failure causes.

Infant mortality failures (decreasing failure rate): Weibull and decreasing exponential. Typical causes: product does not meet specifications due to any number of causes (design, material, tooling, assembly, and so on).

Random failures (constant failure rate; also known as the useful life period): exponential and Weibull. Typical causes: design factors (for example, operational stresses, situations where stress exceeds the strength of the unit).

Wear-out failures (increasing failure rate): normal or Weibull. Typical causes: wear and tear on a product over time.
The Weibull distribution, named for Waloddi Weibull, is not associated with any single region of the bathtub curve, as it can be used for any of the regions, depending on the circumstances. The Weibull distribution depends on three parameters:

1. Shape parameter
2. Scale parameter
3. Location parameter

It is the shape parameter that provides the information to determine which phase of the life cycle the product or system is in. It can assume the shape of distributions that have increasing, decreasing, or constant failure rates.4
Exponential Distribution

This is the most commonly used reliability distribution for predicting the probability of survival to time (t). See Figure 10.9. The probability density function (pdf) of the exponential is

f(t) = λe^(−λt) or f(t) = (1/θ)e^(−t/θ)

where
t ≥ 0
λ = failure rate
θ = mean time between failures (MTBF)
λ = 1/θ

A reliability engineer can use the exponential distribution to calculate a product's reliability. The reliability for a given time t during the constant failure rate period can be calculated with the formula

R(t) = e^(−λt)
where
e = base of the natural logarithms, which is 2.718281828 . . .

The failure rate (λ) is the rate at which failures occur for a population of units over a given time interval. It is calculated as the number of failures during the interval divided by the number of operating units at the beginning of the interval and by the length of the time interval (Δt). Several variables are involved. For example,
λ = r / [Σtᵢ + (n − r)(T)]

where
n = total number of units being tested
r = total number of unit failures
T = scheduled test time
tᵢ = operating time until the ith failure (i = 1 to r)
If 30 parts were being tested for 60 hours each, and four parts failed at 20, 40, 45, and 50 hours,
λ = 4 / [20 + 40 + 45 + 50 + (26)(60)] = 4/1715 ≈ .002 failures per hour

Mean Time to Failure (MTTF). This is a basic measure of system reliability, symbolized by θ, referring to non-repairable items: the total number of life units of an item divided by the total number of failures within that population during a particular measurement interval under stated conditions.

Mean Time between Failures (MTBF). This is also a basic measure of system reliability, symbolized by θ, referring to repairable items. MTBF is the mean number of life units during which all parts of the item perform within their specified limits during a particular measurement interval under stated conditions. To simplify:

θ = T/r = 1/λ

where
T = total test time of all items, for both failed and non-failed items
r = the total number of failures occurring during the test
λ = the failure rate

Maintainability. This is the ability of a product to be retained in, or restored to, a state in which it can perform service under the conditions for which it was designed; it is focused on reducing the duration of downtime. Maintainability is incorporated into system design during the early stages of product development. Maintainability measures the amount of time a system is not in a functional state and is calculated based on the type of maintenance performed: corrective or preventive.

• Corrective maintenance consists of actions required to restore a system to its functional state after a failure occurs. It is quantified by the average time required to complete the maintenance—mean time to repair.

• Preventive maintenance consists of all actions required to retain a unit in its functional state and prevent a failure from occurring.

Mean Time between Maintenance Actions (MTBMA). This is a measure of the system reliability parameter related to demand for maintenance manpower: the total number of system life units divided by the total number of maintenance actions (preventive and corrective) during a stated period of time.

Mean Time to Repair (MTTR). This is a basic measure of maintainability: the sum of corrective maintenance times at any specific level of repair divided by the total number of failures within an item repaired at that level during a particular interval under stated conditions.
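A small Python sketch of the failure-rate example above and the resulting exponential reliability estimate (the 100-hour mission time is an arbitrary value chosen for illustration):

import math

# Test: 30 units scheduled for T = 60 hours each; 4 failed at the times below
failure_times = [20, 40, 45, 50]
n, T = 30, 60
r = len(failure_times)

lam = r / (sum(failure_times) + (n - r) * T)  # ~0.0023 failures per hour
mtbf = 1 / lam                                # theta = 1/lambda ~ 429 hours

print(lam, mtbf)
print(math.exp(-lam * 100))  # R(100): ~0.79 probability of surviving 100 hours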
Availability. This is the probability that a unit will be ready for use at a stated time, or over a stated period of time, based on the combined aspects of reliability and maintainability. There are three types of availability:

1. Inherent availability is a function of reliability (MTBF) and maintainability (MTTR). To calculate inherent availability:

AI = MTBF/(MTBF + MTTR)

2. Achieved availability includes the measure of preventive and corrective maintenance. To calculate achieved availability:

AA = MTBMA/(MTBMA + MMT)

where MMT = Mean maintenance time, the sum of preventive and corrective maintenance times divided by the sum of scheduled and unscheduled maintenance events during a stated period of time.

3. Operational availability includes both inherent and achieved availability, and it also includes logistics and administrative downtime. To calculate operational availability:

AO = MTBMA/(MTBMA + MDT)

where MDT = Mean maintenance downtime, which includes supply time, logistics time, administrative delays, active maintenance time, and so on.
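A minimal Python sketch of the three availability calculations, using hypothetical values (none of these numbers come from the text):

MTBF = 400.0    # mean time between failures, hours
MTTR = 8.0      # mean time to repair, hours
MTBMA = 300.0   # mean time between maintenance actions, hours
MMT = 6.0       # mean maintenance time, hours
MDT = 12.0      # mean maintenance downtime (incl. logistics/admin delays), hours

A_inherent = MTBF / (MTBF + MTTR)        # A_I
A_achieved = MTBMA / (MTBMA + MMT)       # A_A
A_operational = MTBMA / (MTBMA + MDT)    # A_O

print(f"A_I = {A_inherent:.3f}, A_A = {A_achieved:.3f}, A_O = {A_operational:.3f}")

Because MDT includes delays beyond active maintenance time, operational availability is typically the lowest of the three values for a given system.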
Chapter 11 Data Types, Collection, and Integrity
MEASUREMENT SCALES Define and use nominal, ordinal, interval, and ratio measurement scales. (Apply) Body of Knowledge III.B.1
Nominal Measurement

In this type of measurement, names are assigned to objects as labels. This assignment is performed by evaluating, by some procedure, the similarity of the to-be-measured instance to each of a set of named category definitions. The name of the category in the set is the "value" assigned by nominal measurement to the given instance. If two instances have the same name associated with them, they belong to the same category, and that is the only significance that nominal measurements have. For practical data processing, the names may be numerals, but in that case the numerical value of these numerals is irrelevant, and hence their order also has no meaning. The only comparisons that can be made between variable values are equality and inequality. There are no "less than" or "greater than" relations among the classifying names, nor operations such as addition or subtraction.

"Nominal measurement" was first identified by psychologist Stanley Smith Stevens in the context of a child learning to categorize colors (red, blue, and so on) by comparing the similarity of a perceived color to a set of named colors previously learned by ostensive definition. Other examples include geographical location in a country represented by that country's international telephone access code, the marital status of a person, or the make or model of a car.

The only kind of measure of central tendency is the mode. Statistical dispersion may be measured with a variation ratio, index of qualitative variation, or via information entropy, but no notion of standard deviation exists. Variables that are measured only nominally are also called categorical variables. In social research, variables measured at a nominal level include gender, race, religious affiliation, political party affiliation, college major, and birthplace.
Ordinal Measurement

In this classification, the numbers assigned to objects represent the rank order (first, second, third, and so on) of the entities measured. The numbers are called ordinals. The variables are called ordinal variables or rank variables. Comparisons of greater and less than can be made, in addition to equality and inequality. However, operations such as conventional addition and subtraction are still meaningless. Examples include the Mohs scale of mineral hardness; the results of a horse race, which say only which horses arrived first, second, third, and so on, but with no time intervals; and most measurements in psychology and the other social sciences, for example, attitudes like preference, conservatism, or prejudice and social class. The central tendency of an ordinally measured variable can be represented by its mode or its median; the latter gives more information.
Interval Measurement

The numbers assigned to objects have all the features of ordinal measurements, and in addition, equal differences between measurements represent equivalent intervals—that is, differences between arbitrary pairs of measurements can be meaningfully compared. Operations such as addition and subtraction are therefore meaningful. The zero point on the scale is arbitrary; negative values can be used. Ratios between numbers on the scale are not meaningful, so operations such as multiplication and division cannot be carried out directly. For instance, it cannot be said that 30° Celsius is twice as hot as 15° Celsius. But ratios of differences can be expressed; for example, one difference can be twice another. The central tendency of a variable measured at the interval level can be represented by its mode, its median, or its arithmetic mean; the mean gives the most information. Variables measured at the interval level are called interval variables, or sometimes scaled variables, though the latter usage is not obvious and is not recommended. Examples of interval measures are the year date on many calendars, and temperature on the Celsius or Fahrenheit scale. About the only interval measures commonly used in social scientific research are constructed measures such as standardized intelligence (IQ) tests.
Ratio Measurement

The numbers assigned to objects have all the features of interval measurement, and they also have meaningful ratios between arbitrary pairs of numbers. Operations such as multiplication and division are therefore meaningful. The zero value on a ratio scale is nonarbitrary. Variables measured at the ratio level are called ratio variables. Most physical quantities, such as mass, length, or energy, are measured on ratio scales; so is temperature measured in Kelvin, that is, relative to absolute zero. The central tendency of a variable measured at the ratio level can be represented by its mode, its median, its arithmetic mean, or its geometric mean; however, as with an interval scale, the arithmetic mean gives the most useful information. Social variables of ratio measure include age, length of residence in a given place, number of organizations belonged to, or number of church attendances in a particular time period.
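As a brief illustration of the statistics each scale supports, the following Python sketch (with hypothetical values) computes the permissible measures of central tendency using the standard library:

import statistics

colors = ["red", "blue", "red", "green"]       # nominal: mode only
ranks = [1, 2, 2, 3, 5]                        # ordinal: mode or median
temps_c = [15, 20, 25, 30]                     # interval: mode, median, or mean
masses_kg = [1.2, 2.4, 4.8, 9.6]               # ratio: all of the above

print(statistics.mode(colors))                 # 'red'
print(statistics.median(ranks))                # 2
print(statistics.mean(temps_c))                # 22.5
print(statistics.geometric_mean(masses_kg))    # about 3.39; meaningful only for ratio data

Note that 30/15 = 2 is not a meaningful ratio on the Celsius (interval) scale, whereas 2.4 kg genuinely is twice 1.2 kg on the ratio scale.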
Interval and/or ratio measurements are sometimes called "true measurement," though it is often argued that this usage reflects a lack of understanding of the uses of ordinal measurement. Only ratio or interval scales can correctly be said to have units of measurement.
Debate on Classification Scheme

There has been, and continues to be, debate about the merit of the classifications, particularly in the cases of the nominal and ordinal classifications.1 Thus, while Stevens's classification is widely adopted, it is not universal.2 Among those who accept the classification scheme, there is also some controversy in the behavioral sciences over whether the mean is meaningful for ordinal measurement. Mathematically, it is not, but many behavioral scientists use it anyway. This is often justified on the basis that ordinal scales in behavioral science are really somewhere between true ordinal and interval scales—although the interval difference between two ordinal ranks is not constant, it is often of the same order of magnitude. Thus, some argue that so long as the unknown interval difference between ordinal scale ranks is not too variable, interval scale statistics such as means can meaningfully be used on ordinal scale variables. L. L. Thurstone made progress toward developing a justification for obtaining interval-level measurements based on the law of comparative judgment. Further progress was made by Georg Rasch, who developed the probabilistic Rasch model, which provides a theoretical basis and justification for obtaining interval-level measurements from counts of observations such as total scores on assessments.
DATA TYPES Identify, define, and classify data in terms of continuous (variables) and discrete (attributes or counts). Determine when it is appropriate to convert attributes data to variables measures. (Apply) Body of Knowledge III.B.2
Categorical, Qualitative, or Attributes Data

Answers to questions like, "Which type of book do you prefer to read: mysteries, biographies, fiction, nonfiction?" are examples of qualitative data, since the responses vary by type but a numerical response would not make sense. Other types of qualitative variables would be gender, socioeconomic status, or highest level of education. Qualitative data are also called categorical data or attributes data, that is, data that describe a characteristic, attribute, or quality. How many are good or bad? How many conform to specification? How many are wearing red, blue, green, or white? How many have a pleasant taste?
In every industry, there are important quality characteristics that either cannot be measured or are very difficult or costly to measure.3 Such characteristics can be classified as defective or not defective, conforming or nonconforming. Even though a process may be able to provide data on variables, the available measurement method may be very slow and inefficient compared to an attribute or go/no-go gage, where the part either passes ("goes") or does not pass ("no-go").

A collection of attributes data may be converted into numeric form in a couple of different ways for different purposes:

1. When inspecting parts and classifying them as either good or bad, computing the proportion good or bad provides a numeric value. For instance, when inspecting 200 light bulbs for correct assembly, if two fail to light, then the percentage bad would be 2/200, or 1%.

2. If a ranking of attributes makes sense, then this ranking can be given a numeric value, also called a Likert scale, typically with five values. For example, consider the statement: "The color of the part matches customer requirements."

1. Strongly disagree
2. Disagree
3. Neither agree nor disagree
4. Agree
5. Strongly agree

It can be decided that any score less than 3 is nonconforming, at which point the part is labeled as defective.

Although having only two choices, either good or bad, when inspecting parts may sound more efficient than using a precision variables gage, lumping parts into good or bad piles does not give any indication of how good or bad. Attribute sampling does not lend itself to understanding process capability, whereas variables measurements would.

Risk and its cost as it relates to inspection also play a role in considering what type of inspection or sampling to perform. There are two types of errors that can be made during the inspection process:

• The inspection doesn't detect a defective item. The consumer risks receiving a defective part.

• The inspection reports a defect when there really isn't one. The producer risks rejecting a good part.

These concerns or risks directly relate to sample size and how much of either risk can be tolerated by the consumer and the producer. For attribute data, large sample sizes (typically hundreds of parts) are needed in order to bring risk to a reasonable level. By contrast, even with small samples of variables data, much value in terms of process information can be gleaned, such as process spread and the ability to detect process change.
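A minimal Python sketch of the Likert-to-attribute conversion just described, using hypothetical scores:

likert_scores = [5, 4, 2, 3, 1, 4, 5, 2, 4, 3]   # 1 = strongly disagree ... 5 = strongly agree

# Per the rule above, any score below 3 marks the part as nonconforming.
defective = sum(1 for s in likert_scores if s < 3)
proportion_defective = defective / len(likert_scores)

print(f"{defective} of {len(likert_scores)} parts nonconforming "
      f"({proportion_defective:.0%})")            # 3 of 10 (30%)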
Quantitative Data

Quantitative data can be broken into two main subtypes: discrete and continuous. Discrete data take the form of countable measures:

1. Number of students enrolled in a statistics course
2. Number of pizzas ordered on a particular night
3. Number of games that a baseball team won last season
4. Number of defective parts

Continuous or variables data can assume any one of the uncountable number of values on the number line. For example, the distance from home to work may be 3 miles, but in more exact terms it may be 2.9849 miles. The time that it takes a plane to travel from Chicago to Los Angeles may be 4.5 hours, but it would be more accurate to say 4 hours, 33 minutes, 2.43 seconds for a particular flight, while another trip may be 4 hours, 33 minutes, 1.892 seconds long, and so on.

Variables measures can be converted to attributes data to simplify the data collection and to leverage p charts, np charts, and c or u charts versus x̄ and R charts or x̄ and s charts. For example, a measurement device can be used to capture variables measures, or a go/no-go gage can be used to track an attribute instead of a variable measure, identifying whether or not the unit met the specification (two categories).
DATA COLLECTION AND ANALYSIS Identify and describe the advantages of collecting and analyzing real-time data. (Understand) Body of Knowledge III.B.3
The questions that follow are just a few that should be carefully addressed when planning a study to troubleshoot, control, or improve a process. Clearly, the mere act of collecting data does not directly imply that useful information can be gleaned from it. Without a clear understanding of the question under investigation, the collection of useless data is nearly inevitable, and failing to answer one or more of the questions makes obtaining meaningful insight to troubleshoot or improve a process highly unlikely. Meaningful output of well-planned data collection and analysis provides the basis for data-driven decision making. Data can reveal causality, that is, what input factors drive changes in the response of interest. Data can also help to identify relationships, sources of variation, and process stability and capability.

• What data should be collected and why?

• Who will collect the data? Who will analyze the data?
• How frequently will the data be collected and reported? Does it need to be in real time? How much data need to be collected?

• How will the data be reported? What kind of analysis is to be used?

• How much will the data collection cost? What equipment will be needed? Will inspector training be needed?

• How will the quality of the data be verified?

• Will control charts or other types of graphical tools be used, and exactly which types would be most applicable to the question or characteristic under study?

• For how long will the data be collected? That is, will the data be collected and the signals they provide be acted on as part of process control, or will the data be collected as part of a design of experiments for process improvement?
Data Collection Plan Example

A data collection plan, such as one used in a Lean-Six Sigma project for designing the processes for a new women's healthcare center, is discussed next.4 A data collection plan should be developed to identify the data to be collected that relate to the important quality characteristics. In a Lean-Six Sigma project, the quality characteristics to be measured are called Critical to Satisfaction (CTS) criteria. Critical to Satisfaction is a characteristic of a product or service that fulfills a critical customer requirement or a customer process requirement. CTSs are the basic elements to be used in driving process measurement, improvement, and control. It is critical to ensure that the selected CTSs accurately represent what is important to the customer. The data collection plan ensures the following:

• Measurement of CTS metrics
• Identification of the right mechanisms to perform the data collection
• Collection and analysis of data
• Definition of how, and by whom, the data will be collected

Figure 11.1 shows a data collection plan, and Figure 11.2 shows the related operational definitions for the women's healthcare center. The steps for creating a data collection plan are as follows:

1. Define the CTS criteria
2. Develop metrics
3. Identify data collection mechanism(s)
4. Identify analysis mechanism(s)
5. Develop sampling plans
6. Develop sampling instructions
Figure 11.1 Data collection plan for a women's healthcare center. Source: Sandra L. Furterer. The plan identifies metrics that measure and assess improvement related to the CTS "Timeliness" from the Define phase, with statistical analysis as the analysis mechanism, reports provided by the IT director as the process to collect and report, and a sampling plan covering July through September. The metrics, operational definitions, and data collection sources are:

• Procedure time: time from when the patient goes into the procedure room to when they leave (medical information system)
• Wait time for procedure: time from when the patient is registered to when they get into the procedure room (medical information system)
• Time to register patient: time from when the patient arrives in the facility to when registration is complete (registration system)
• Time to receive results: time to receive the results of the exam, from when the procedure finished (imaging system)
• Wait to get an appointment: time to wait until getting an appointment (scheduling system, implementing new fields to capture the desired appointment time compared to the actual appointment time)

Figure 11.2 Operational definitions for a women's healthcare center. Source: Sandra L. Furterer. An operational definition is a clear and concise description of the measurement and the process by which it is collected; for each of the five metrics above, the figure lists the defining measure, the purpose (for example, identifying issues with resources and process issues), and a clear way to measure (the system report from which the value is taken).
A description of each step in the data collection plan development follows.

1. Define the CTS criteria: This should be based on what is most important from a quality and customer perspective.

2. Develop metrics: In this step, metrics are identified that help to measure and assess improvement related to the identified CTSs. Here are some rules of thumb for selecting metrics:5

• Consider the vital few versus the trivial many
• Metrics should focus on the past, the present, and the future
• Metrics should be linked to meet the needs of stakeholders, customers, and employees

It is vital to develop an operational definition for each metric so that anyone who collects the data clearly understands how the data will be collected. The operational definition should include a clear description of the measurement, including the process of collection, along with the purpose and the metric measurement. It should identify what to measure, how to measure it, and how the consistency of the measure will be ensured. A summary of an operational definition follows.
Operational Definition

The operational definition consists of (1) the defining measure, or definition; (2) the purpose of the metric; and (3) a clear way to measure the process.

• Defining measure (definition): A clear, concise description of a measurement and the process by which it is to be collected.

• Purpose: Provides the meaning of the operational definition, giving a common understanding of how the metric will be measured.

• A clear way to measure the process: Identifies what to measure, identifies how to measure it, and makes sure the measuring is consistent.

3. Identify data collection mechanism(s): Next, you can identify how you will collect the data for the metrics. Some data collection mechanisms include customer surveys, observation, work sampling, time studies, customer complaint data, emails, websites, and focus groups.

4. Identify analysis mechanism(s): Before collecting data, consider how you will analyze the data to ensure that you collect the data in a manner that enables the analysis. Analysis mechanisms can include the type of statistical tests or graphical analysis that will be performed. The analysis mechanisms can dictate the factors and levels for which you may collect the data.
5. Develop sampling plans: In this step you should determine how you will sample the data and the sample size for your samples. There are several types of sampling (a sketch of these methods follows step 6):

• Simple random sample: Each unit has an equal chance of being sampled.

• Stratified sample: The N (population size) items are divided into subpopulations, or strata, and then a simple random sample is taken from each stratum. This is used to decrease the sample size and the cost of sampling.

• Systematic sample: The N (population size) items are placed into k groups. The first item is chosen at random, and the rest of the sample is selected by taking every kth item.

• Cluster sample: The N items are divided into clusters, and a random sample of clusters is selected. This is used for wide geographic regions.

6. Develop sampling instructions: Clearly identify who will be sampled, where you will sample, and when and how you will take your sample data.
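A minimal Python sketch of the four sampling methods, using only the standard library and a hypothetical population of 100 numbered items:

import random

population = list(range(1, 101))   # N = 100 items, labeled 1 to 100
n = 10                             # desired sample size

# Simple random sample: every item has an equal chance of selection
simple = random.sample(population, n)

# Stratified sample: divide into strata, then randomly sample each stratum
strata = [population[:50], population[50:]]
stratified = [item for stratum in strata
              for item in random.sample(stratum, n // len(strata))]

# Systematic sample: random starting point, then every kth item
k = len(population) // n
start = random.randrange(k)
systematic = population[start::k]

# Cluster sample: divide into clusters, then sample whole clusters
clusters = [population[i:i + 10] for i in range(0, len(population), 10)]
cluster_sample = [item for cluster in random.sample(clusters, 2)
                  for item in cluster]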
DATA INTEGRITY Recognize methods that verify data validity and reliability from source through data analysis using various techniques such as auditing trails, vendor qualification, error detection software, training for record management etc., to prevent and detect data integrity issues. (Apply) Body of Knowledge III.B.4
Data integrity is becoming a more critical issue for quality professionals and organizations as big data becomes more prevalent, particularly for data sets that are part of the IoT (Internet of Things) or are downloaded from the internet without knowledge of the data pedigree. Data integrity, as defined by the FDA, refers to the completeness, consistency, and accuracy of data. Complete, consistent, and accurate data should be attributable, legible, contemporaneously recorded, original or a true copy, and accurate (ALCOA). Data integrity is critical throughout the CGMP data life cycle, including in the creation, modification, processing, maintenance, archival, retrieval, transmission, and disposition of data after the record's retention period ends. System design and controls should enable easy detection of errors, omissions, and aberrant results throughout the data's life cycle.6
Data pedigree is defined as documentation of the origins and history of a data set, including its technical meaning, background on the process that produced it, the original collection of samples, the measurement processes used, and the subsequent handling of the data, including any modifications or deletions made, through the present.7 Core elements of a data pedigree include the following:8

• What the data represent. This should include a basic explanation of the domain being measured, along with the units of measurement.

• A description of the process that produced the data, such as a manufacturing or healthcare process

• How the samples were obtained from the process

• The specific measurement process used for the variables or attributes measured

• The existence (or lack) of recent analyses of the measurement system, such as gage repeatability and reproducibility (GR&R), attribute agreement analysis, or calibration studies

• The history of the data documenting the chain of custody, such as who had access to the original data, what changes or deletions were made, and access to a copy of the original data that can be verified

The rigor of the data pedigree depends on the decisions made from the data. The pedigree should be rigorous when approving a new medical device or medicine, or when high-risk and safety challenges exist. Some other questions should be asked regarding data and how they were collected:9

• Is the measurement system capturing the correct data?
• Do the data reflect what is happening?
• Can we detect process changes when they occur?
• What are the sources of measurement error?
• Is the measurement system stable over time?
• Is the measurement system capable of generating the data needed for making decisions for this study?
• Can the measurement system be improved in the future?
• Do the data reflect the truth? Or are they too good to be true (showing no variation when there should be some)?10

When validating electronic data, some additional best practices should be considered (a short sketch of such checks follows this list):

• Check that all records are complete: look for missing values, gaps, and zero records
• Create a time series plot, graph, or histogram: look for data that do not fit the norm
• Review subsets: do like processes give similar results?
• Check data types: is the data type consistent? For example, watch for text entries where numeric values are expected
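A minimal Python sketch of these checks, assuming the pandas library and a hypothetical file readings.csv with hypothetical columns measurement and process_line:

import pandas as pd

df = pd.read_csv("readings.csv")               # hypothetical data set

# Completeness: missing values and all-zero records
print(df.isnull().sum())                       # missing values per column
print((df == 0).all(axis=1).sum())             # count of rows that are entirely zero

# Type consistency: flag text entries in a column expected to be numeric
as_numeric = pd.to_numeric(df["measurement"], errors="coerce")
print(df[as_numeric.isna() & df["measurement"].notna()])

# Subset review: do like processes give similar results?
print(df.groupby("process_line")["measurement"].describe())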
Data quality issues can happen for many reasons, including the following:11

• Data sources: unreliability, trust, data copying, inconsistency, multiple sources

• Generation of data: human data entry, sensor device readings, social media, unstructured data, and missing values

• The process and/or application level: acquisition, collection, and transmission of data

A framework by Taleb, Serhani, and Dssouli (2018)12 defined several critical data quality management processes, shown in Figure 11.3. These processes can help ensure data integrity, validity, and reliability. Data reliability refers to the consistency of the data over time, across data types, and across different operators measuring the data; validity is the extent to which the scores from a measure represent the variable they are intended to represent.13 The processes are:

• Data inception: How data are created
• Data sources: Identifying the sources of data
• Data collection: How data are collected
• Data transport: How data are moved, electronically or otherwise
• Data storage: How and where data are stored, and who has access to them
• Data pre-processing: Cleansing, integrating, normalizing, and transforming the data prior to analysis
• Data processing analytics: Processing and analyzing the data
• Data visualization: Visualizing the data with graphs, figures, and charts

Table 11.1 illustrates some strategies for ensuring data integrity, validity, and reliability for each process stage of the data quality management framework.14,15
Table 11.1 Strategies for ensuring data integrity, validity, and reliability. Source: Sandra L. Furterer.

The table maps each strategy to the stages of the data quality management framework where it applies (data inception, select data sources, data collection, data transport, data storage, data pre-processing, data processing analytics, and data visualization). The strategies are:

• Deploy best practices in data design: defining data fields, operational definitions, metadata, information class diagrams, data modeling, data dictionary, and permissions (create, view, update, delete)

• Measurement system analysis (MSA)

• Data wrangling software tools that provide an audit trail of removed data

• Statistical software applications such as Minitab, SAS, JMP, SPSS, R, and others

• Data visualization tools, such as Tableau, SPSS, Python

• Statistical tests: hypothesis tests to compare observed and expected values; historical data testing to compare to known standards; reproducibility testing with repeated-measures statistical tests; and assessing process changes via statistical process control and testing15

• Supplier qualification of data provided

• Data error detection software

• Developing processes and training for record management and document control

• Data audits

• Requirements traceability
Figure 11.3 Data quality management framework processes (data inception, select data sources, data collection, data transport, data storage, data pre-processing, data processing analytics, and data visualization). Source: Adapted from Taleb, M. Adel Serhani, and R. Dssouli, "Big Data Quality: A Survey," 2018 IEEE International Congress on Big Data, IEEE Computer Society, 2018.
DATA PLOTTING Identify the advantages and limitations of using this method to analyze data visually. (Understand) Body of Knowledge III.B.5
One of the hazards of using software for statistical analysis is the temptation to perform analysis on data without first “looking” at the data. One example would be the calculation of linear correlation coefficients. This number helps determine whether two variables have a relationship that permits the prediction of one variable using the other variable in a linear relationship. The correct procedure is to construct a scatter diagram of the data, as shown in Chapter 7. If the points seem to be grouped around a straight line in the scatter diagram, it would be appropriate to calculate the coefficient. Otherwise, the calculation may be misleading. In some
cases (especially with small data sets), a fairly high correlation coefficient may result from data that, when graphed, are clearly not related.

Following are some of the advantages of presenting data using control charts or time series analysis:

1. Computations are simple and easy, as in computing averages and ranges for x̄ and R charts.

2. Errors in calculation or data entry may become obvious when presented graphically and may be identified even by the untrained.

3. The unexpected presence of certain types of nonrandomness, for example, cyclical patterns (assignable causes), may be suggested by analyzing the data graphically.

4. Graphical presentation of data does not solve problems but may point to where and when to look.

The use of control charts is not a cure-all. It takes time and effort to establish and use control charts properly. Moreover, they are not appropriate in every situation. For example, when only a few pieces of a product are manufactured at any given time, acceptance sampling rather than process control would be more appropriate.

The importance of displaying data graphically cannot be overemphasized. "Always graph your data in some simple way—always."16 A few graphical tools are highlighted below:

• Histogram or frequency chart (Figure 11.4): Shows the distribution of the data, including the central tendency and the variation in the data, as well as any outliers

• Stem-and-leaf plot (Figure 11.5): Shows the distribution of the data while preserving the original data values

• Box plot (Figure 11.6): Shows the median and quartiles (the percentage of data points within a certain percent of the data), as well as outliers (shown as asterisks)

• Dot plot (Figure 11.7): Shows each individual data point and the distribution of the data

• Scatter diagram (Figure 11.8): Shows a relationship between two variables in a scatter plot

• Pareto chart (Figure 11.9): Shows the frequency of data within categories from largest to smallest, demonstrating the 80:20 rule (20% of the causes contribute to 80% of the problems)

• Time series analysis, as in process control charts (Figure 11.10): Shows data plotted across time, to show trends and the stability of the process
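A minimal Python sketch of a few of these tools, assuming the matplotlib library and synthetic (not actual triage) data:

import random
import matplotlib.pyplot as plt

random.seed(1)
triage_times = [abs(random.gauss(8.4, 7.0)) for _ in range(200)]  # minutes, synthetic
ages = [random.randint(1, 95) for _ in range(200)]

fig, axes = plt.subplots(1, 3, figsize=(12, 3))

axes[0].hist(triage_times, bins=20)        # distribution, central tendency, outliers
axes[0].set_title("Histogram")

axes[1].boxplot(triage_times)              # median, quartiles, outliers
axes[1].set_title("Box plot")

axes[2].scatter(ages, triage_times, s=8)   # relationship between two variables
axes[2].set_title("Scatter diagram")

plt.tight_layout()
plt.show()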
The following figures show examples of the tools. The histogram, stem-and-leaf plot, box plot, dot plot, and time series analysis are plotted using triage time data from an emergency department.

Figure 11.4 Histogram of triage time data (normal fit: mean = 8.446 minutes, StDev = 7.024 minutes, N = 664; x-axis: triage time in minutes; y-axis: frequency). Source: Sandra L. Furterer.

Figure 11.5 Stem-and-leaf plot for the triage time data (N = 664, leaf unit = 1). Source: Sandra L. Furterer.
The triage time was collected in an emergency department bed board system that tracked each major activity in the emergency services process. The triage time was the time from when the patient arrived at the emergency department to when the patient was triaged. The triage process collects initial patient information related to their condition, along with vital statistics such as temperature, blood pressure, and heart rate.
Figure 11.6 Box plot of triage time data (y-axis: triage time in minutes). Source: Sandra L. Furterer.
Figure 11.7 Dot plot of triage time data (x-axis: triage time in minutes; each symbol represents up to 2 observations). Source: Sandra L. Furterer.
Figure 11.8 Scatter plot or diagram of triage time versus age (x-axis: age; y-axis: triage time in minutes). Source: Sandra L. Furterer.
Figure 11.9 Pareto chart of disposition types (Discharge: count 403, 59.9%, cumulative 59.9%; Admission: count 247, 36.7%, cumulative 96.6%; Other: count 23, 3.4%, cumulative 100.0%). Source: Sandra L. Furterer.
Figure 11.10 Time series plot of a subset of triage time (x-axis: arrival date and time; y-axis: triage time in minutes). Source: Sandra L. Furterer.
For the histogram in Figure 11.4, the average is 8.446 minutes, with a standard deviation of about 7 minutes. There are 664 data points, or triage times for the patients. The stem-and-leaf plot for the triage time shows the original data points and the location of the highest-frequency points. For example, at the bottom of the plot, there is a 46-minute triage time and then a 49-minute triage time. The box plot shows the median at 7 minutes, with 25% of the data from 0 to the bottom of the box at 3 minutes, and 75% of the data from 0 to 11 minutes at the top of the box. The outliers range from above 22 to almost 50 minutes. The dot plot shows the data points, although Minitab notes that each symbol represents up to 2 observations, to reduce the number of dots when there is a lot of data. The scatter plot or scatter diagram shows that there is not a linear relationship between triage time and the age of the patient. The Pareto chart shows the frequency by patient disposition type, including the number of discharges, admitted patients, and the "other" category, which is a grouping of the remaining disposition types.
Chapter 12 Sampling
SAMPLING METHODS Define and distinguish between various sampling methods, such as random, sequential, stratified, systemic/fixed sampling, rational subgroup sampling, and attributes and variables sampling. (Understand) Body of Knowledge III.C.1
Sampling is that part of statistical practice concerned with the selection of individual observations intended to yield some knowledge about a population of concern, especially for the purposes of statistical inference. In particular, results from probability theory and statistical theory are employed to guide practice. The sampling process consists of five stages:

1. Definition of the population of concern
2. Specification of a set of items or events that it is possible to measure
3. Specification of a sampling method for selecting items or events from the set
4. Sampling and data collecting
5. Review of the sampling process

Successful statistical practice is based on a focused problem definition. Typically, we seek to take action on some population, for example, when a batch of material from production must be released to the customer or dispositioned for scrap or rework. Alternatively, we seek knowledge about the cause system of which the population is an outcome, for example, when a researcher performs an experiment on rats with the intention of gaining insights into biochemistry that can be applied for the benefit of humans. In the latter case, the population of concern can be difficult to specify, as it is in the case of measuring some physical characteristic, such as the electrical conductivity of copper.
However, in all cases, time spent precisely defining the population of concern is usually well spent, often because it raises many issues, ambiguities, and questions that would otherwise have been overlooked at this stage.
Random Sampling

In random sampling, also known as probability sampling, every combination of items from the sample set, or stratum, has a known probability of occurring, but these probabilities are not necessarily equal. With any form of sampling there is a risk that the sample may not adequately represent the population, but with random sampling there is a large body of statistical theory that quantifies the risk and thus enables an appropriate sample size to be chosen. Furthermore, once the sample has been taken, the sampling error associated with the measured results can be computed. There are several forms of random sampling. For example, in simple random sampling, each element has an equal probability of occurring. Other examples of probability sampling include stratified sampling and multistage sampling.
Sequential Sampling

It is often beneficial and possible to obtain and inspect samples over a period of time. Such sampling, if it is recorded on a control chart, is typically referred to as time series analysis. This type of sampling is of value when time may be a factor contributing to variation and may signal an assignable cause, such as tool wear or chemical depletion.
Stratified Sampling

Where the population embraces a number of distinct categories, the sample set can be organized by these categories into separate strata or demographics. One of the sampling methods discussed is then applied to each stratum separately. Major gains in efficiency (either lower sample sizes or higher precision) can be achieved by varying the sampling fraction from stratum to stratum. The sample size is usually proportional to the relative size of the strata. However, if variances differ significantly across strata, sample sizes should be made proportional to the stratum standard deviation. Disproportionate stratification can provide better precision than proportionate stratification. Typically, strata should be chosen to have

• means that differ substantially from one another, and

• variances that are different from one another and lower than the overall variance.
Systemic or Fixed Sampling

Selecting, for example, every 10th name from the telephone directory is called an "every 10th sampling scheme," which is an example of systematic sampling. It is a type of non-probability sampling unless the directory itself is randomized. It is easy to implement, and the stratification induced can make it efficient, but it is
especially vulnerable to periodicities in the list. If periodicity is present and the period is a multiple of 10, then bias will result. Another type of systematic or fixed sampling scheme is to always sample, say, 5% of the batch or lot to make a decision to accept or reject the lot. There are several disadvantages to such a sampling scheme that should be resolved before undertaking such sampling:

1. Rules for accepting or rejecting the lot based on how many units out of the 5% sample are found to be nonconforming

2. Estimation of consumer/producer risk and protection

3. Concurrence between consumer and producer to employ such a non–statistically based sampling scheme
Convenience Sampling

Sometimes called grab or opportunity sampling, this is the method of choosing items arbitrarily and in an unstructured manner from the sample set. Though almost impossible to treat rigorously, it is the method most commonly employed in many practical situations. In social science research, snowball sampling is a similar technique, where existing study subjects are used to recruit more subjects into the sample.1
Rational Subgroup Sampling

Rational subgroups or rational samples should be chosen so that the variation within the subgroup sample is due only to common causes; special causes should occur between subgroups. The differences within the samples of the subgroup should be minimized. Typically, the sampling is done in time order, say, sampling five parts by the instant-of-time method, in which the values are sampled at approximately the same time. Understanding the degree of shift that we want to detect with our control charts helps us determine the subgroup sample size: if it is important to detect small shifts, then having a larger sample size is important. Choosing large samples very frequently provides the most information on the process, but it may not be logistically or economically feasible, so choosing small samples at frequent intervals may be used as a trade-off.2 Rational subgrouping as it relates to control chart sampling is discussed in Chapter 14.
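A minimal Python sketch of rational subgrouping with synthetic data: five consecutive parts per subgroup, so that within-subgroup variation reflects common causes only:

import random

random.seed(2)
measurements = [10 + random.gauss(0, 0.2) for _ in range(100)]  # time-ordered data

# Instant-of-time subgroups of five consecutive parts
subgroups = [measurements[i:i + 5] for i in range(0, len(measurements), 5)]

xbars = [sum(g) / len(g) for g in subgroups]     # subgroup averages
ranges = [max(g) - min(g) for g in subgroups]    # within-subgroup ranges

print(f"grand mean = {sum(xbars) / len(xbars):.3f}")
print(f"average range R-bar = {sum(ranges) / len(ranges):.3f}")

Shifts between subgroup averages that are large relative to the average range would suggest special causes acting between subgroups.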
ACCEPTANCE SAMPLING Identify and define sampling characteristics, such as lot size, sample size, acceptance number, and operating characteristic (OC) curve. Identify when to use the probability approach to acceptance sampling. (Understand) Body of Knowledge III.C.2
Inspection can be done by sampling (i.e., selecting a portion of a larger group of items), or inspection can be done by screening (also called sorting or 100% inspection), in which all units are inspected. Acceptance sampling is the process of inspecting a portion of the product in a lot for the purpose of making a decision regarding classification of the entire lot as either conforming or nonconforming to quality specifications. Sampling provides the economic advantage of lower inspection costs due to fewer units being inspected. In addition, the time required to inspect a sample is substantially less than that required for the entire lot, and there is less risk of damage to the product due to reduced handling. Most inspectors find that selection and inspection of a random sample is less tedious and monotonous than inspection of a complete lot. Another advantage of sampling inspection is related to the supplier/customer relationship. By inspecting a small fraction of the lot and forcing the supplier to screen 100% in case of lot rejection (which is the case in rectifying inspection), the customer emphasizes that the supplier must be more concerned about quality. On the other hand, variability is inherent in sampling, resulting in sampling errors: rejection of lots of conforming quality and acceptance of lots of nonconforming quality.

Acceptance sampling is most appropriate when inspection costs are high and when 100% inspection is monotonous and can cause inspector fatigue and boredom, resulting in degraded performance and increased error rates. Obviously, sampling is the only choice available for destructive inspection.

Rectifying sampling is a form of acceptance sampling. Sample units detected as nonconforming are discarded from the lot, replaced by conforming units, or repaired. Rejected lots are subject to 100% screening, which can involve discarding, replacing, or repairing units detected as nonconforming.

In certain situations, it is preferable to inspect 100% of the product. This would be the case for critical or complex products, where the cost of making the wrong decision would be too high. Screening is appropriate when the fraction nonconforming is extremely high; in this case, most of the lots would be rejected under acceptance sampling, and those accepted would be so as a result of statistical variations rather than better quality. Screening is also appropriate when the fraction nonconforming is not known and an estimate based on a large sample is needed.

It should be noted that the philosophy now being espoused in supplier relations is that the supplier is responsible for ensuring that the product shipped meets the user's requirements. Many larger customers are requiring evidence of product quality through the submission of process control charts showing that the product was produced by a process that was in control and capable of meeting the specifications.
Sampling Plans

Sampling may be performed according to the type of quality characteristics to be inspected. There are three major categories of sampling plans:

1. Sampling by attributes
2. Sampling by variables
3. Special sampling plans
It should be noted that acceptance sampling is not advised for processes in continuous production and in a state of statistical control. For these processes, Dr. W. Edwards Deming provides decision rules for selecting either 100% inspection or no inspection.3
Lot-by-Lot versus Average Quality Protection

Sampling plans based on average quality protection from continuing processes have their characteristics based on the binomial and/or Poisson distributions. Plans used for lot-by-lot protection (where the product is not considered to have been manufactured by a continuing process) have their characteristics based on the hypergeometric distribution, which takes the lot size into consideration for calculation purposes. Sampling plans based on the Poisson and binomial distributions are more common than those based on the hypergeometric distribution. This is due to the complexity of calculating plans based on the hypergeometric distribution. New software on personal computers, however, may eliminate this drawback.
The Operating Characteristic (OC) Curve

No matter which type of attribute sampling plan is being considered, the most important evaluation tool is the operating characteristic (OC) curve. The OC curve allows a sampling plan to be almost completely evaluated at a glance, giving a pictorial view of the probabilities of accepting lots submitted at varying levels of percentage defective. The OC curve illustrates the risks involved in acceptance sampling. Figure 12.1 shows an OC curve for a sample size n of 50 drawn from an infinite lot size, with an acceptance number c of 3.
Figure 12.1 An operating characteristic (OC) curve (n = 50, c = 3; x-axis: p, percent nonconforming; y-axis: Pa, probability of acceptance).
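A minimal Python sketch of how points on an OC curve can be computed from the binomial distribution for the plan n = 50, c = 3 (the specific percentages printed are illustrative):

from math import comb

def prob_accept(p, n=50, c=3):
    # Pa = P(number of nonconforming units in the sample <= c)
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

for pct in (1, 5, 10, 13, 20):
    print(f"{pct:>4}% nonconforming -> Pa = {prob_accept(pct / 100):.3f}")
# Pa falls from about 0.998 at 1% nonconforming to roughly 0.10 near 13%.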
As can be seen from the OC curve, if the lot were 100% to specifications, the probability of acceptance Pa would also be 100%. But if the lot were 13.4% defective, there would be only a 10% probability of acceptance.

There are two types of OC curves to consider: type A OC curves and type B OC curves. Type A OC curves are used to calculate the probability of acceptance on a lot-by-lot basis when the lot is not a product of a continuous process. These OC curves are calculated using the hypergeometric distribution. Type B OC curves are used to evaluate sampling plans for a continuous process. These curves are based on the binomial and/or Poisson distributions when the requirements for inspection and test usage are met. In general, the ANSI/ASQ Z1.4 standard OC curves as of 2021 are based on the binomial distribution for sample sizes through 80, and the Poisson approximation to the binomial is used for sample sizes greater than 80.
Acceptance Sampling by Attributes

Acceptance sampling by attributes is generally used for two purposes: (1) protection against accepting lots from a continuing process whose average quality deteriorates beyond an acceptable quality level, and (2) protection against isolated lots that may have levels of nonconformances greater than can be considered acceptable. The most commonly used forms of acceptance sampling are sampling plans by attributes. The most widely used standard of all attributes plans, although not necessarily the best, is ANSI/ASQ Z1.4. The following sections provide more details on the characteristics of acceptance sampling and discussion of military standards in acceptance sampling.

Acceptable Quality Level. Acceptable quality level (AQL) is defined as the maximum percentage or fraction of nonconforming units in a lot or batch that, for the purposes of acceptance sampling, can be considered satisfactory as a process average. This means that a lot that has a fraction defective equal to the AQL has a high probability (generally in the area of 0.95, although it may vary) of being accepted. As a result, plans that are based on AQL, such as ANSI/ASQ Z1.4, favor the producer in getting lots accepted that are in the general neighborhood of the AQL for fraction defective in a lot.

Lot Tolerance Percent Defective. The lot tolerance percent defective (LTPD), expressed in percentage defective, is the poorest quality in an individual lot that should be accepted. The LTPD has a low probability of acceptance. In many sampling plans, the LTPD is the percentage defective having a 10% probability of acceptance.

Producer's and Consumer's Risks. There are risks involved in using acceptance sampling plans. The risks involved in acceptance sampling are producer's risk and consumer's risk. These risks correspond with type I and type II errors in hypothesis testing. The definitions of producer's and consumer's risks follow:

• Producer's risk (α)—The producer's risk for any given sampling plan is the probability of rejecting a lot that is within the acceptable quality level.4 This means that the producer faces the possibility (at level of significance α) of having a lot rejected even though the lot has met the requirements stipulated by the AQL.
Chapter 12: Sampling 219
• Consumer’s risk (b )—The consumer’s risk for any given sampling plan is the probability of acceptance (usually 10%) for a designated numerical value of relatively poor submitted quality.5 The consumer’s risk, therefore, is the probability of accepting a lot that has a quality level equal to the LTPD. Average Outgoing Quality. The average outgoing quality (AOQ) is the expected average quality of outgoing products, including all accepted lots plus all rejected lots that have been sorted 100% and have had all of the nonconforming units replaced by conforming units. There is a given AOQ for specific nonconforming fractions of submitted lots sampled under a given sampling plan. When the nonconforming fraction is very low, a large majority of the lots will be accepted as submitted. The few lots that are rejected will be sorted 100% and have all nonconforming units replaced with conforming units. Thus, the AOQ will always be less than the submitted quality. As the quality of submitted lots becomes poor in relation to the AQL, the percentage of lots rejected becomes larger in proportion to accepted lots. As these rejected lots are sorted and combined with accepted lots, an AOQ lower than the average fraction of nonconformances of submitted lots emerges. Therefore, when the level of quality of incoming lots is good, the AOQ is good; when the incoming quality is bad and most lots are rejected and sorted, the result is also good. To calculate the AOQ for a specific nonconforming fraction and a sampling plan, the first step is to calculate the probability of accepting the lot at that level of the nonconforming fraction. Then, multiply the probability of acceptance by the nonconforming fraction for the AOQ. Thus, AOQ = Pap[1− sample size/lot size] If the desired result is a percentage, multiply by 100. The average outgoing quality limit is the maximum AOQ for all possible levels of incoming quality. Average Outgoing Quality Limit. The average outgoing quality limit (AOQL) is a variable dependent on the quality level of incoming lots. When the AOQ is plotted for all possible levels of incoming quality, a curve as shown in Figure 12.2 results. The AOQL is the highest value on the AOQ curve. AOQ (%) 0.045
N=∞
0.04
0.035
n = 50
0.03
0.025
c=3
0.02
0.015
0.01
0.005
0
0
2
4
6
8
10
p (Percent nonconforming)
12
Figure 12.2 Average outgoing quality curve for N = ∞, n = 50, c = 3.
14
16
18
Assuming an infinite lot size, the AOQ can be calculated as AOQ = Pa p. The probability of acceptance (Pa) can be obtained from tables, as explained earlier, and then multiplied by p (the associated value of fraction nonconforming) to produce a value for AOQ, as shown in the next example, using the previous equation.

Example: Given an OC curve with points (Pa and p) as shown, construct the AOQ curve. Note that Pa and p are calculated as explained in the previous example.

Probability of acceptance (Pa)   Fraction defective (p)   AOQ
0.998                            0.01                     0.00998
0.982                            0.02                     0.01964
0.937                            0.03                     0.02811
0.861                            0.04                     0.03444
0.760                            0.05                     0.03800
0.647                            0.06                     0.03882
0.533                            0.07                     0.03731
0.425                            0.08                     0.03400
0.330                            0.09                     0.02970
0.250                            0.10                     0.02500

As can be seen, the AOQ rises until the incoming quality level of 0.06 nonconforming is reached. The maximum AOQ point is 0.03882, which is called the AOQL. This is the AOQL for an infinite lot size and a sample size of 50, accepting the lot on three or fewer nonconformances.
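A minimal Python sketch reproducing this AOQ calculation and locating the AOQL for the plan n = 50, c = 3 with an infinite lot size:

from math import comb

def prob_accept(p, n=50, c=3):
    # Binomial probability of c or fewer nonconforming units in the sample
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

fractions = [i / 100 for i in range(1, 11)]       # incoming fraction defective
aoq = {p: prob_accept(p) * p for p in fractions}  # AOQ = Pa * p (infinite lot)

aoql_p = max(aoq, key=aoq.get)
print(f"AOQL = {aoq[aoql_p]:.4f} at incoming fraction defective {aoql_p}")
# -> AOQL of about 0.0388 at p = 0.06, consistent with the table above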
Lot Size, Sample Size, and Acceptance Number. For any single sampling plan, the plan is completely described by the lot size, sample size, and acceptance number. In this section, the effect of changing the sample size, acceptance number, and lot size on the behavior of the sampling plan will be explored, along with the risks of constant percentage plans.

The effect on the OC curve caused by changing the sample size while holding all other parameters constant is shown in Figure 12.3. The probability of acceptance changes considerably as sample size changes. The Pa values for the given sample sizes for a 10% nonconforming lot and an acceptance number of zero are shown.

Sample size   Probability of acceptance (Pa%)
1             90
2             82
4             68
10            35
Figure 12.3 Effect on an OC curve of changing sample size n when accept number c is held constant (curves shown for n = 1, 2, 4, and 10 with c = 0). Source: D. W. Benbow, A. K. Elshennawy, and H. F. Walker, The Certified Quality Technician Handbook (Milwaukee, WI: ASQ Quality Press, 2003), 139.
Figure 12.4 Effect of changing accept number c when sample size n is held constant (curves shown for c = 0, 1, and 2 with n = 10; the indifference quality level is marked at Pa = 0.50). Source: D. W. Benbow, A. K. Elshennawy, and H. F. Walker, The Certified Quality Technician Handbook (Milwaukee, WI: ASQ Quality Press, 2003), 139.
The effect of changing the acceptance number on a sampling plan while holding all other parameters constant is shown in Figure 12.4. Another point of interest is that for c = 0, the OC curve is concave in shape, while plans with larger accept numbers have a "reverse s" shape. Figure 12.4 and the following table show the effect of changing the acceptance number of a sampling plan on the indifference quality level (IQL), the quality level at which there is a 50–50 chance of accepting a given percent defective.

Sample size   Acceptance number   Percent defective at IQL (%)
10            0                   7
10            1                   17
10            2                   27
The parameter having the least effect on the OC curve is the lot size, N. For this reason, using the binomial and Poisson approximations, even when lot sizes are known (and are large compared to the sample size), results in little error in accuracy. Figure 12.5 shows the changes in the OC curve for a sample size of 10, acceptance number of zero, and lot sizes of 100, 200, and 1000. As can be seen, the differences due to lot size are minimal. Some key probabilities of acceptance for the three lot sizes follow.

Lot size   Fraction defective   Probability of acceptance (Pa)
100        0.10                 0.330
100        0.30                 0.023
100        0.50                 0.001
200        0.10                 0.340
200        0.30                 0.026
200        0.50                 0.001
1000       0.10                 0.347
1000       0.30                 0.028
1000       0.50                 0.001
Types of Attributes Sampling Plans There are several types of attributes sampling plans in use, with the most common being single, double, multiple, and sequential sampling plans. The type of sampling plan used is determined by ease of use and administration, general quality level of incoming lots, average sample number, and so on.
Figure 12.5 Effect of changing lot size N when accept number c and sample size n are held constant. Source: D. W. Benbow, A. K. Elshennawy, and H. F. Walker, The Certified Quality Technician Handbook (Milwaukee, WI: ASQ Quality Press, 2003), 140.
Figure 12.6 Operating characteristic curves for sampling plans having the sample size equal to 10% of the lot size (N = 250, n = 25, c = 0; N = 100, n = 10, c = 0; N = 50, n = 5, c = 0). Source: D. W. Benbow, A. K. Elshennawy, and H. F. Walker, The Certified Quality Technician Handbook (Milwaukee, WI: ASQ Quality Press, 2003), 141.
Single Sampling Plans. When single sampling plans are used, the decision to either accept or reject the lot is based on the results of the inspection of a single sample of n items from a submitted lot. In the example shown earlier, the OC curve and AOQ curve were calculated for a single sampling plan where n = 50 and c = 3. Single sampling plans have the advantage of ease of administration, but because the sample size never changes, they do not take advantage of the potential cost savings of reduced inspection when incoming quality is either excellent or poor.

Double Sampling Plans. When using double sampling plans, a smaller first sample is taken from the submitted lot, and one of three decisions is made: (1) accept the lot, (2) reject the lot, or (3) draw another sample. If a second sample is drawn, the lot is either accepted or rejected after the second sample. Double sampling plans have the advantage of a lower total sample size when the incoming quality is either excellent or poor, because the lot is then usually accepted or rejected on the first sample.

Example: A double sampling plan is to be executed as follows. Take a first sample (n1) of 75 units and set c1 (the acceptance number for the first sample) to zero. The lot will be accepted on the first-sample results if no nonconformances are found. If three or more nonconformances are found in the first sample, the lot will be rejected on the first-sample results. If one or two nonconformances are found in the first sample, take a second sample (n2 = 75). The acceptance number for the second sample (c2) is set to 3. If the combined number of nonconformances in the first and second samples is three or fewer, the lot will be accepted; if the combined number is four or more, the lot will be rejected. The plan is represented as follows:

Sample     Acceptance number c    Rejection number r
n1 = 75    c1 = 0                 r1 = 3
n2 = 75    c2 = 3                 r2 = 4
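The decision logic of this double sampling plan can be expressed compactly in code. The following Python sketch is an illustration only, not part of any standard; it encodes the accept/reject/resample rules above.

    def double_sample_decision(d1, d2=None, c1=0, r1=3, c2=3):
        """Decision rule for the double sampling plan above.

        d1: nonconformances found in the first sample of 75
        d2: nonconformances found in the second sample of 75
            (None if a second sample has not been drawn)
        """
        if d1 <= c1:
            return "accept on first sample"
        if d1 >= r1:
            return "reject on first sample"
        if d2 is None:
            return "draw second sample"
        # combined count compared with c2 (rejection at r2 = c2 + 1)
        return "accept" if d1 + d2 <= c2 else "reject"

    print(double_sample_decision(0))      # accept on first sample
    print(double_sample_decision(2))      # draw second sample
    print(double_sample_decision(2, 1))   # accept (2 + 1 <= 3)
    print(double_sample_decision(2, 2))   # reject (2 + 2 >= 4)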
Multiple Sampling Plans. Multiple sampling plans work in the same way as double sampling plans but extend the number of samples that may be taken up to seven, according to ANSI/ASQ Z1.4-2003. Just as in double sampling, acceptance or rejection of a submitted lot may be reached before the seventh sample, depending on the acceptance/rejection criteria established for the plan.
ANSI/ASQ Z1.4

ANSI/ASQ Z1.4 is probably the most commonly used standard for attributes sampling plans. Its wide recognition and acceptance may owe more to government contracts stipulating the standard than to its statistical properties. Producers submitting products at a nonconformance level within the AQL have a high probability of having the lot accepted by the customer.
When using ANSI/ASQ Z1.4, the characteristics under consideration should be classified. The general classifications are critical, major, and minor defects:

• Critical defect. A critical defect is a defect that judgment and experience indicate is likely to result in hazardous or unsafe conditions for the individuals using, maintaining, or depending on the product, or a defect that judgment and experience indicate is likely to prevent performance of the unit. In practice, critical characteristics are commonly inspected to an AQL level of 0.40% to 0.65%, if not 100% inspected. One hundred percent inspection is recommended for critical characteristics if possible. Acceptance numbers are always zero for critical defects.

• Major defect. A major defect is a defect, other than critical, that is likely to result in failure or to materially reduce the usability of the unit of product for its intended purpose. In practice, AQL levels for major defects are generally about 1%.

• Minor defect. A minor defect is a defect that is not likely to materially reduce the usability of the unit of product for its intended purpose. In practice, AQL levels for minor defects generally range from 1.5% to 2.5%.

Levels of Inspection. There are seven levels of inspection used in ANSI/ASQ Z1.4: three general inspection levels (I, II, and III) and four special inspection levels (S-1 through S-4). The special inspection levels should be used only when small sample sizes are necessary and large risks can be tolerated. When using ANSI/ASQ Z1.4, a set of switching rules must be followed governing the use of reduced, normal, and tightened inspection. The following guidelines are taken from ANSI/ASQ Z1.4-2003:

• Initiation of inspection. Normal inspection at level II will be used at the start of inspection unless otherwise directed by the responsible authority.

• Continuation of inspection. Normal, tightened, or reduced inspection shall continue unchanged for each class of defect or defectives on successive lots or batches except where the following switching procedures require change. The switching procedures shall be applied to each class of defects or defectives independently.

Switching Procedures. Switching rules are shown graphically in Figure 12.7.

• Normal to tightened. When normal inspection is in effect, tightened inspection shall be instituted when two out of five consecutive lots or batches have been rejected on original inspection (i.e., ignoring resubmitted lots or batches for this procedure).

• Tightened to normal. When tightened inspection is in effect, normal inspection shall be instituted when five consecutive lots or batches have been considered acceptable on original inspection.
Figure 12.7 Switching rules for normal, tightened, and reduced inspection. Source: D. W. Benbow, A. K. Elshennawy, and H. F. Walker, The Certified Quality Technician Handbook (Milwaukee, WI: ASQ Quality Press, 2003), 143.
• Normal to reduced. When normal inspection is in effect, reduced inspection shall be instituted, providing that all of the following conditions are satisfied: a. The preceding 10 lots or batches (or more), as indicated by the note on ANSI/ASQ Z1.4-2003 Table I, shown as Figure 12.10, have been on normal inspection and none has been rejected on original inspection. b. The total number of defectives (or defects) in the sample from the preceding 10 lots or batches (or other such numbers as was used for condition [a]) is equal to or less than the applicable number given in Table VIII of ANSI/ASQ Z1.4-2003 (shown as Figure 12.9). If double or multiple sampling is in use, all samples inspected should be included, not “first” samples only. c. Production is at a steady rate. d. Reduced inspection is considered desirable by the responsible authority. • Reduced to normal. When reduced inspection is in effect, normal inspection shall be instituted if any of the following occur on original inspection:
a. A lot or batch is rejected.

b. A lot or batch is considered acceptable under reduced inspection, but the sampling procedures terminated without either acceptance or rejection criteria having been met. In these circumstances, the lot or batch will be considered acceptable, but normal inspection will be reinstated starting with the next lot or batch.

c. Production becomes irregular or delayed.

d. Other conditions warrant that normal inspection be instituted.

• Discontinuation of inspection. In the event that 10 consecutive lots or batches remain on tightened inspection (or such other number as may be designated by the responsible authority), inspection under the provisions of this document should be discontinued pending action to improve the quality of the submitted material.

Types of Sampling. ANSI/ASQ Z1.4-2003 allows for three types of sampling:

1. Single sampling
2. Double sampling
3. Multiple sampling

The choice of the type of plan depends on many variables. Single sampling is the easiest to administer and perform but usually results in the largest average total inspection. Double sampling in ANSI/ASQ Z1.4-2003 results in a lower average total inspection than single sampling but requires more decisions to be made, such as

• accept the lot after the first sample,
• reject the lot after the first sample,
• take a second sample,
• accept the lot after the second sample, and
• reject the lot after the second sample.

Multiple sampling plans further reduce the average total inspection but also increase the number of decisions to be made. As many as seven samples may be required before a decision to accept or reject the lot can be made. This type of plan requires the most administration.

A general procedure for selecting plans from ANSI/ASQ Z1.4-2003 is as follows:

1. Decide on an AQL.
2. Decide on the inspection level.
3. Determine the lot size.
4. Find the appropriate sample size code letter. See Table I from ANSI/ASQ Z1.4-2003, also shown as Figure 12.10.
5. Determine the type of sampling plan to be used: single, double, or multiple.
6. Using the selected AQL and sample size code letter, enter the appropriate table to find the desired plan to be used.
7. Determine the normal, tightened, and reduced plans as required from the corresponding tables.

Example: A lot of 1750 parts has been received and is to be checked to an AQL level of 1.5%. Determine the appropriate single, double, and multiple sampling plans for general inspection level II. Steps in defining the plans are as follows:

1. Table I of ANSI/ASQ Z1.4-2003, also shown as Figure 12.10, stipulates code letter K.
2. Normal inspection is applied. For code letter K, using Table II-A of ANSI/ASQ Z1.4-2003, also shown as Figure 12.11, a sample of 125 is specified.
3. For double sampling, two samples of 80 may be required. Refer to Table III-A of the standard, shown as Figure 12.12.
4. For multiple sampling, at least two samples of 32 are required, and it may take up to seven samples of 32 before an acceptance or rejection decision is made. Refer to Table IV-A of the standard, shown as Figure 12.13.

A breakdown of all three plans is shown in Table 12.1.
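The normal/tightened switching rules quoted earlier behave like a small state machine. The following Python sketch is an illustration only; reduced inspection and discontinuation are omitted because their conditions involve judgments (steady production, authority approval) that cannot be derived from accept/reject data alone.

    from collections import deque

    def track_inspection(lot_results):
        """Trace normal <-> tightened switching for a sequence of lots.
        lot_results: iterable of booleans (True = lot accepted on
        original inspection). Yields the state after each lot."""
        state = "normal"
        recent = deque(maxlen=5)   # last 5 original-inspection results
        accepted_run = 0           # consecutive acceptances on tightened
        for accepted in lot_results:
            if state == "normal":
                recent.append(accepted)
                # two of the last five lots not accepted -> tightened
                if list(recent).count(False) >= 2:
                    state, accepted_run = "tightened", 0
                    recent.clear()
            else:  # tightened
                accepted_run = accepted_run + 1 if accepted else 0
                # five consecutive lots accepted -> back to normal
                if accepted_run >= 5:
                    state = "normal"
            yield state

    results = [True, False, True, False, True, True, True, True, True, True]
    print(list(track_inspection(results)))
    # switches to "tightened" at the second rejection, back to "normal"
    # after five consecutive accepted lots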
Dodge-Romig Tables

Dodge-Romig tables were designed as sampling plans to minimize average total inspection (ATI). These plans require an accurate estimate of the process nonconforming average when selecting the sampling plan to be used. The Dodge-Romig tables use AOQL and LTPD values for plan selection rather than AQL, as in ANSI/ASQ Z1.4-2003. When the process nonconforming average is controlled to requirements, Dodge-Romig tables result in lower average total inspection, but rejection of lots and sorting tend to erase the gains if process quality deteriorates. Note that if the process nonconforming average shows statistical control, acceptance sampling should not be used; the most economical course of action in this situation is either no inspection or 100% inspection.6
Table 12.1 Sampling plans with sample size, acceptance number, and rejection number.

Sampling plan        Sample     Sample size    Ac    Re
Single sampling      —          125            5     6
Double sampling      First      80             2     5
                     Second     80             6     7
Multiple sampling    First      32             *     4
                     Second     32             1     5
                     Third      32             2     6
                     Fourth     32             3     7
                     Fifth      32             5     8
                     Sixth      32             7     9
                     Seventh    32             9     10

Ac = Acceptance number. Re = Rejection number. * Acceptance not permitted at this sample size.

Variables Sampling Plans

Variables sampling plans use the actual measurements of sample products for decision making rather than classifying products as conforming or nonconforming, as in attributes sampling plans. Variables sampling plans are more complex to administer than attributes plans and thus require more skill. They provide some benefits, however, over attributes plans. Two of these benefits are as follows:

1. Variables sampling provides the same protection as an attributes sampling plan with a much smaller sample size. There are several types of variables sampling plans in use, depending on what is known about the population standard deviation: (1) known, (2) unknown but estimated using the sample standard deviation s, and (3) unknown, with the range R used as an estimator. For a given attributes plan, the sample sizes of these variables plans can be expressed as a percentage of the attributes plan's sample size.

2. Variables sampling plans allow a determination of how close to nominal or to a specification limit the process is performing. Attributes plans either accept or reject a lot; variables plans give information on how well or how poorly the process is performing.
Variables sampling plans, such as ANSI/ASQ Z1.9, have some disadvantages and limitations: 1. The assumption of normality of the population from which the samples are being drawn is required for variables sampling plans. 2. Unlike attributes sampling plans, separate characteristics on the same parts will have different averages and dispersions, resulting in a separate sampling plan for each characteristic. 3. Variables plans are more complex in administration. 4. Variables gauging is generally more expensive than attribute gauging.
ANSI/ASQ Z1.9

The most common standard for variables sampling plans is ANSI/ASQ Z1.9, which has plans for (1) variability known, (2) variability unknown—standard deviation method, and (3) variability unknown—range method. Using these methods, the standard can be used to test against a single specification limit or a double (bilateral) specification limit, to estimate the process average, and to estimate the dispersion of the parent population.

As in ANSI/ASQ Z1.4, several AQL levels are used, and specific switching procedures for normal, reduced, and tightened inspection are followed. ANSI/ASQ Z1.9 allows either the same AQL value for each specification limit of a double specification limit plan or different AQL values for each specification limit. The AQL values are designated ML for the lower specification limit and MU for the upper specification limit.

There are two forms used for single specification limit ANSI/ASQ Z1.9 plans: form 1 and form 2. Form 1 provides only acceptance or rejection criteria, whereas form 2 estimates the percentage below the lower specification limit and the percentage above the upper specification limit. These percentages are compared to the AQL for the acceptance/rejection decision. Figure 12.8 summarizes the structure and organization of ANSI/ASQ Z1.9-2003.

There are 14 AQL levels used in ANSI/ASQ Z1.9-2003, consistent with the AQL levels used in ANSI/ASQ Z1.4-2003. Section A of ANSI/ASQ Z1.9-2003 contains both an AQL conversion table and a table for selecting the desired inspection level. Level IV is generally considered normal inspection, with Level V being tightened inspection and Levels I, II, and III being reduced inspection. Table A-3 of ANSI/ASQ Z1.9-2003 contains the OC curves for the sampling plans in Sections B, C, and D.

Section B contains sampling plans used when the variability is unknown and the standard deviation method is used. Part I is used for a single specification limit, Part II is used for a double specification limit, and Part III is used for estimation of the process average and criteria for reduced and tightened inspection. Section C contains sampling plans used when the variability is unknown and the range method is used. Parts I, II, and III are the same as Parts I, II, and III in Section B.
Figure 12.8 General structure and organization of ANSI/ASQ Z1.9-2003 (Section A: AQL conversion and inspection levels; variability known and variability unknown branches, the latter by standard deviation method or range method; single and double specification limits; k-method procedure 1 and M-method procedure 2; process average estimation and criteria for reduced and tightened inspection). Source: D. W. Benbow, A. K. Elshennawy, and H. F. Walker, The Certified Quality Technician Handbook (Milwaukee, WI: ASQ Quality Press, 2003), 148.
Section D contains sampling plans used when the variability is known. Parts I, II, and III are the same as Parts I, II, and III in Section B.

Variability Unknown—Range Method. An example from Section C will be used here to illustrate the use of the variability unknown—range method for a single specification limit. The quality indices for a single specification limit are

(U − X̄)/R̄ or (X̄ − L)/R̄

where

U = upper specification limit
L = lower specification limit
X̄ = sample average
R̄ = average range of the sample

The acceptance criterion is a comparison of the quantity (U − X̄)/R̄ or (X̄ − L)/R̄ with the acceptability constant k. If the calculated quantity is equal to or greater than k, the lot is accepted; if the calculated quantity is negative or less than k, the lot is rejected.
The following example illustrates the use of the variability unknown—range method, form 1 variables sampling plan and is similar to examples from Section C of ANSI/ASQ Z1.9-2003.

Example: The lower specification limit for electrical resistance of a certain electrical component is 620 ohms. A lot of 100 items is submitted for inspection. Inspection Level IV, normal inspection, with AQL = 0.4%, is to be used. From ANSI/ASQ Z1.9-2003 Table A-2 and Table C-1, shown as Figure 12.14 and Figure 12.15, respectively, it is seen that a sample of size 10 is required. Suppose that the values of the sample resistances (reading from left to right) are:

645, 651, 621, 625, 658 (R = 658 − 621 = 37)
670, 673, 641, 638, 650 (R = 673 − 638 = 35)

Determine compliance with the acceptability criterion (see Table 12.2). The lot does not meet the acceptability criterion, since (X̄ − L)/R̄ is less than k.

Note: If a single upper specification limit U is given, then compute the quantity (U − X̄)/R̄ in line 6 and compare it with k. The lot meets the acceptability criterion if (U − X̄)/R̄ is equal to or greater than k.

Variability Unknown—Standard Deviation Method. In this section, a sampling plan is shown for the situation where the variability is not known and the standard deviation is estimated from the sample data. The sampling plan is for a double specification limit and is found in Section B of the standard, with one AQL value for both upper and lower specification limits combined.
Table 12.2 Example for ANSI/ASQ Z1.9-2003 sampling plan.

Line    Information needed                          Value         Explanation
1       Sample size: n                              10
2       Sum of measurements: ΣX                     6472
3       Sample mean X̄: ΣX/n                        647.2         6472/10
4       Average range R̄: ΣR/no. of subgroups       36            (37 + 35)/2
5       Specification limit (lower): L              620
6       The quantity: (X̄ − L)/R̄                    .756          (647.2 − 620)/36
7       Acceptability constant: k                   .811          See Table C-1 (Figure 12.15)
8       Acceptability criterion:                    .756 ≤ .811
        compare (X̄ − L)/R̄ with k
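The computations in Table 12.2 are easy to script. Here is a minimal Python sketch (not part of the standard; the k value is the Table C-1 constant quoted above).

    measurements = [
        [645, 651, 621, 625, 658],  # subgroup 1
        [670, 673, 641, 638, 650],  # subgroup 2
    ]
    L = 620     # lower specification limit (ohms)
    k = 0.811   # acceptability constant from Table C-1 (n = 10, AQL = 0.4%)

    all_values = [x for subgroup in measurements for x in subgroup]
    x_bar = sum(all_values) / len(all_values)                 # 647.2
    r_bar = (sum(max(s) - min(s) for s in measurements)
             / len(measurements))                             # (37 + 35)/2 = 36.0

    quantity = (x_bar - L) / r_bar                            # 0.756
    print(f"(X-bar - L)/R-bar = {quantity:.3f}")
    print("accept" if quantity >= k else "reject")            # reject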
The acceptability criterion is based on comparing an estimated percent nonconforming to a maximum allowable percent nonconforming for the given AQL level. The estimated percent nonconforming is found in ANSI/ASQ Z1.9-2003 Table B-5, shown as Figure 12.16. The quality indices for this sampling plan are

QU = (U − X̄)/s or QL = (X̄ − L)/s

where

U = upper specification limit
L = lower specification limit
X̄ = sample average
s = estimate of the lot standard deviation

The quality level of the lot is expressed in terms of the lot percent defective. Three values are calculated: PU, PL, and p. PU is an estimate of the lot percent defective above the upper specification limit, PL is an estimate of the lot percent defective below the lower specification limit, and p is the sum of PU and PL. The value of p is then compared with the maximum allowable percent defective M (ANSI/ASQ Z1.9-2003 Table B-3, shown as Figure 12.17). If p exceeds M, or if either QU or QL is negative, the lot is rejected; otherwise, the lot is accepted. The following example illustrates this procedure.

Example: The minimum temperature of operation for a certain device is specified as 180°F. The maximum temperature is 209°F. A lot of 40 items is submitted for inspection. Inspection Level IV, normal inspection, with AQL = 1%, is to be used. ANSI/ASQ Z1.9-2003 Table A-2, shown as Figure 12.14, gives code letter D, which results in a sample size of five from ANSI/ASQ Z1.9-2003 Table B-3, shown as Figure 12.17. The results of the five measurements in degrees Fahrenheit are as follows: 197, 188, 184, 205, 201. Determine if the lot meets the acceptance criteria (see Table 12.3).
Table 12.3 Example for ANSI/ASQ Z1.9-2003 variables sampling plan.

Line    Information needed                                        Value obtained    Explanation
1       Sample size: n                                            5
2       Sum of measurements: ΣX                                   975
3       Sum of squared measurements: ΣX²                          190,435
4       Correction factor CF: (ΣX)²/n                             190,125           975²/5
5       Corrected sum of squares SS: ΣX² − CF                     310               190,435 − 190,125
6       Variance V: SS/(n − 1)                                    77.5              310/4
7       Standard deviation s: √V                                  8.81
8       Sample mean X̄: ΣX/n                                      195               975/5
9       Upper specification limit: U                              209
10      Lower specification limit: L                              180
11      Quality index: QU = (U − X̄)/s                            1.59              (209 − 195)/8.81
12      Quality index: QL = (X̄ − L)/s                            1.7               (195 − 180)/8.81
13      Estimate of lot percent defective above U: PU             2.19%             QU = 1.59, sample size = 5. See Table B-5 (Figure 12.16)
14      Estimate of lot percent defective below L: PL             0.66%             QL = 1.7, sample size = 5. See Table B-5 (Figure 12.16)
15      Total estimate of percent defective in lot: p = PU + PL   2.85%             2.19 + 0.66
16      Maximum allowable percent defective: M                    3.33%             See Table B-3 (Figure 12.17)
17      Acceptability criterion: compare p = PU + PL with M       2.85% < 3.33%
Note: The lot meets the acceptability criterion, since p = PU + PL is less than M. ANSI/ASQ Z1.9-2003 provides a variety of other examples of variables sampling plans.
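The arithmetic of Table 12.3 can likewise be scripted; the table lookups (PU and PL from Table B-5, M from Table B-3) must still come from the standard, so in this Python sketch those values are hard-coded from the example above.

    from math import sqrt

    temps = [197, 188, 184, 205, 201]   # measurements, degrees F
    U, L = 209, 180                     # specification limits
    M = 3.33                            # max allowable percent defective (Table B-3)

    n = len(temps)
    x_bar = sum(temps) / n                               # 195.0
    ss = sum(x**2 for x in temps) - sum(temps)**2 / n    # corrected SS: 310
    s = sqrt(ss / (n - 1))                               # 8.80

    q_upper = (U - x_bar) / s    # QU = 1.59
    q_lower = (x_bar - L) / s    # QL = 1.70

    # P_U and P_L are read from Table B-5 for these Q values and n = 5
    p_upper, p_lower = 2.19, 0.66
    p_total = p_upper + p_lower  # 2.85

    accept = p_total <= M and q_upper >= 0 and q_lower >= 0
    print("accept" if accept else "reject")  # accept: 2.85% < 3.33%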
Figure 12.9 ANSI/ASQ Z1.4-2003 Table VIII: Limit numbers for reduced inspection. [Tabular detail omitted; the table lists, for each AQL and for the number of sample units from the last 10 lots or batches, the limit number of nonconformances that must not be exceeded to qualify for reduced inspection. * denotes that the number of sample units from the last 10 lots or batches is not sufficient for reduced inspection at that AQL; in that case, more than 10 lots or batches can be used for the calculation, provided that the lots or batches used are the most recent ones in sequence, that they have all been on normal inspection, and that none has been rejected on original inspection.]
Figure 12.10 ANSI/ASQ Z1.4-2003 Table I: Sample size code letters per inspection levels. [Tabular detail omitted; the table maps lot or batch size ranges to sample size code letters for special inspection levels S-1 through S-4 and general inspection levels I, II, and III.]
Figure 12.11 ANSI/ASQ Z1.4-2003 Table II-A: Single sampling plans for normal inspection. [Tabular detail omitted; the table gives, for each sample size code letter and AQL, the sample size and the acceptance (Ac) and rejection (Re) numbers. Arrows direct the user to the first sampling plan above or below; if the sample size equals or exceeds the lot size, 100% inspection is carried out.]
Figure 12.12 ANSI/ASQ Z1.4-2003 Table III-A: Double sampling plans for normal inspection. [Tabular detail omitted; the table gives, for each sample size code letter and AQL, the first and second sample sizes, cumulative sample sizes, and Ac/Re numbers, with arrows and symbols directing the user to alternative plans as in Table II-A.]
Figure 12.13 ANSI/ASQ Z1.4-2003 Table IV-A: Multiple sampling plans for normal inspection (shown with its continuation). [Tabular detail omitted; the table gives, for each sample size code letter and AQL, up to seven samples with their cumulative sample sizes and Ac/Re numbers. # denotes that acceptance is not permitted at that sample size.]
Figure 12.14 ANSI/ASQ Z1.9-2003 Table A-2: Sample size code letters. [Tabular detail omitted; the table maps lot size ranges to sample size code letters for special inspection levels S3 and S4 and general inspection levels I, II, and III.]
Figure 12.15 ANSI/ASQ Z1.9-2003 Table C-1: Master table for normal and tightened inspection for plans based on variability unknown (single specification limit—form 1). [Tabular detail omitted; the table gives, for each sample size code letter and AQL, the sample size and acceptability constant k. All AQL values are in percent nonconforming.]
Figure 12.16 ANSI/ASQ Z1.9-2003 Table B-5: Table for estimating the lot percent nonconforming using the standard deviation method. [Tabular detail omitted; the table converts a quality index QU or QL and a sample size into an estimated lot percent nonconforming.]
Figure 12.17 ANSI/ASQ Z1.9-2003 Table B-3: Master table for normal and tightened inspection for plans based on variability unknown (double specification limit and form 2—single specification limit). [Tabular detail omitted; the table gives, for each sample size code letter and AQL, the sample size and the maximum allowable percent defective M. All AQL values are in percent nonconforming.]
Chapter 13 Measurement System Analysis (MSA)
MEASUREMENT Define and distinguish between accuracy, precision, repeatability and reproducibility (gage R&R) studies, bias, and linearity. (Apply) Body of Knowledge III.D
A measurement is a series of manipulations of physical objects or systems according to a defined protocol that results in a number. The number is purported to uniquely represent the magnitude (or intensity) of a certain characteristic, which depends on the properties of the test object. This number is acquired to form the basis of a decision affecting some human goal or satisfying some human need, the satisfaction of which depends on the properties of the test object. These needs or goals can be usefully viewed as requiring three general classes of measurements:

1. Technical. This class includes those measurements made to assure dimensional compatibility or conformance to design specifications necessary for proper function, or, in general, all measurements made to ensure fitness for the intended use of some object.

2. Legal. This class includes those measurements made to ensure compliance with a law or regulation. This class is the concern of weights and measures bodies, regulators, and those who must comply with those regulations. The measurements are identical in kind with those of technical metrology but are usually embedded in a much more formal structure. Legal metrology is more prevalent in Europe than in the United States, although this is changing.

3. Scientific. This class includes those measurements made to validate theories of the nature of the universe or to suggest new theories. These measurements, which can be called scientific metrology (properly the domain of experimental physics), present special problems.1
CONCEPTS IN MEASUREMENTS Measurement Error Error in measurement is the difference between the indicated value and the true value of a measured quantity. The true value of a quantity to be measured is seldom known. Errors are classified as random and systematic. Random errors are accidental in nature. They fluctuate in a way that cannot be predicted from the detailed employment of the measuring system or from knowledge of its functioning. Sources of error such as hysteresis, ambient influences, or variations in the work piece are typical but not completely all-inclusive in the category of random error. Systematic errors are those not usually detected by repetition of the measurement operations. An error resulting from either faulty calibration of a local standard or a defect in contact configuration of an internal measuring system is typical but not completely inclusive in the systematic class of errors.2 It is important to know all the sources of error in a measuring system, rather than merely to be aware of the details of their classification. Analysis of the causes of errors is helpful in attaining the necessary knowledge of achieved accuracy.3 There are many different sources of error that influence the precision of a measuring process in a variety of ways depending on the individual situation in which such errors arise. The permutation of error sources and their effects, therefore, is quite considerable. In general, these errors can be classified under three main headings:4 1. Process environment 2. Measuring equipment 3. Operator fallibility These factors constitute an interrelated three-element system for the measuring process as shown in Figure 13.1. The areas in which operator fallibility arises can be grouped as follows:5 1. Identification of the measuring situation 2. Analysis of alternative methods 3. Selection of equipment 4. Application (or measurement) The identification of measuring situations becomes increasingly complex in modern metrology. As parts become smaller and more precise, greater attention must be paid to geometric qualities such as roundness, concentricity, straightness, parallelism, and squareness. Deficiencies in these qualities may consume all the permitted design tolerance so that a simple dimensional check becomes grossly insufficient. Operators must be knowledgeable about what they have to measure and how satisfactorily the requirements of the situation will be met by the measuring
instrument.

Figure 13.1 Factors affecting the measuring process: process environment (temperature, vibration, structural instability, humidity, factors of atmospheric pollution, atmospheric pressure, gravity), measuring equipment (sensitivity, accuracy and precision, consistency, repeatability), and operator fallibility (identification of the situation, analysis of alternative methods, selection of equipment, application/measurement). Source: D. W. Benbow, A. K. Elshennawy, and H. F. Walker, The Certified Quality Technician Handbook (Milwaukee, WI: ASQ Quality Press, 2003), 122.
Correct identification of the measuring situation will eliminate those methods found unsuitable for the situation. A proper selection of measuring equipment can therefore be made from a smaller range of measuring process alternatives. Method analysis can then be applied to such alternatives to determine which best satisfies the situation. This usually involves examining each method for different characteristics and evaluating the relative accuracies between the different methods.
Accuracy Accuracy is the degree of agreement of individual or average measurements with an accepted reference value, level, or standard.6 See Figure 13.2.
Precision Precision is the degree of mutual agreement among individual measurements made under prescribed-like conditions, or simply how well identically performed measurements agree with each other. This concept applies to a process or a set of measurements, not to a single measurement, because in any set of measurements, the individual results will scatter about the mean.7 See Figure 13.3.
Figure 13.2 Gage accuracy is the difference between the measured average of the gage and the true value, which is defined with the most accurate measurement equipment available. Source: E. R. Ott, et al., Process Quality Control, 4th ed. (Milwaukee, WI: ASQ Quality Press, 2005), 527.
Figure 13.3 Measurement data can be represented by one of four possible scenarios: accurate and precise; precise but not accurate; accurate but not precise; neither accurate nor precise. Source: E. R. Ott, et al., Process Quality Control, 4th ed. (Milwaukee, WI: ASQ Quality Press, 2005), 527.
Repeatability and Reproducibility

Repeatability refers to how close the measurements of an instrument are to each other if such measurements are repeated on a part under the same measuring conditions by the same operator. Reproducibility is a measure of the degree of agreement between two single test results made on the same object in two different, randomly selected measuring locations or laboratories, with different operators. While repeatability is normally used to designate precision for measurements made within a restricted set of conditions (for example, individual operators), reproducibility is normally used to designate precision for measurements involving variation between certain sets (for example, laboratories) as well as within them. Gage repeatability and reproducibility (gage R&R) studies are undertaken to quantify and identify the factors that affect measurement system variation,
with the goal being to minimize measurement system “noise” so as to obtain measurement values as close as possible to the true measurement value. It is never possible to know the true measurement value of a part, but by identifying and remedying sources of gage “noise” or error, the relative usefulness of the measurement system for the product being measured can be quantified. For a comprehensive treatment of this topic, refer to Ott et al. (2005).
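To make the repeatability/reproducibility distinction concrete, here is a minimal Python sketch of a rough variance decomposition on a small, hypothetical data set (three operators measuring three parts twice each; the operator and part names and values are made up). A real gage R&R study would use the AIAG average-and-range method or ANOVA, as the Minitab example that follows does; this sketch only illustrates which spread each term refers to.

    from statistics import mean, pstdev

    # Hypothetical layout: measurements[operator][part] -> list of replicates
    measurements = {
        "op1": {"p1": [10.1, 10.2], "p2": [12.0, 11.9], "p3": [9.8, 9.9]},
        "op2": {"p1": [10.3, 10.4], "p2": [12.2, 12.1], "p3": [10.0, 10.2]},
        "op3": {"p1": [10.0, 10.1], "p2": [11.8, 11.9], "p3": [9.7, 9.8]},
    }

    # Repeatability: spread of repeated readings by the same operator
    # on the same part (deviations from each operator/part cell mean)
    within_cells = [x - mean(reps)
                    for parts in measurements.values()
                    for reps in parts.values()
                    for x in reps]
    repeatability = pstdev(within_cells)

    # Reproducibility: spread of operator averages over the same parts
    operator_means = [mean(x for reps in parts.values() for x in reps)
                      for parts in measurements.values()]
    reproducibility = pstdev(operator_means)

    print(f"repeatability (EV) ~ {repeatability:.3f}")
    print(f"reproducibility (AV) ~ {reproducibility:.3f}")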
Linearity Linearity is the degree of accuracy of the measurements of an instrument throughout its operating range.8 Linearity is related to hysteresis in that as an instrument measures an object, arriving at the measurement from below, it may provide a particular answer. As the object’s dimensions are “overshot” by the instrument, arriving at the object’s measurement from above, the object’s measurement from above may differ from the measurement arrived at from below. This difference is called hysteresis and is a form of nonlinearity that may be built into the measuring instrument. It is an undesirable measuring instrument quality and needs to be understood and corrected for as a source of error.9
Figure 13.4 Gage R&R study for measurements summary report. [Minitab summary report for a study of 10 parts, 3 operators, and 2 replicates. The measurement system variation equals 35.3% of the process variation and 95.3% of the tolerance. The test-retest (repeatability) component equals 99.0% of the measurement variation and 34.9% of the total process variation; the operator (reproducibility) component equals 14.4% of the measurement variation and 5.1% of the total.] Source: Sandra L. Furterer.
Bias

Bias is the amount by which a precision-calibrated instrument's measurements differ from a true value. Accuracy is often called the bias in the measurement.10

Gage R&R Example:11 An example of a gage R&R study is illustrated next. Three operators each measured the quality characteristic of 10 different parts, two times for each part. The analysis was performed using Minitab statistical software. An example of the gage R&R study results is shown in Figure 13.4. The measurement system variation is 35.3%, and typically it should not exceed 30%. There is high variability in operator repeatability (the variability of each person measuring the parts multiple times), equating to 99% of the measurement variation and 34.9% of the total variation in the process. The gage R&R study for measurements variation report is shown in Figure 13.5. It shows the parts for which there is more variability across the operators. Figure 13.6 shows a gage R&R report card, which gives suggestions for a better gage R&R study. The measurement error in this study example is too high. The operators should be retrained to measure the parts, and, if needed, the operational definitions for the quality characteristics should be redefined. The measurement system should be improved before using it to measure the parts' quality.
Figure 13.5 Gage R&R study for measurements variation report. [Minitab variation report with test-retest ranges (repeatability), operator-by-part interaction, and operator main effects. The variation table reads:

Source             StDev    %Study Variation    %Tolerance
Total Gage         0.143    35.32               95.34
  Repeatability    0.142    34.95               94.35
  Reproducibility  0.021     5.08               13.72
    Operator       0.021     5.08               13.72
Part-to-Part       0.379    93.56               252.56
Study Variation    0.405    100.00              269.95

Tolerance (upper spec − lower spec): 0.9. The operator-by-part interaction was not statistically significant and was removed from the table.] Source: Sandra L. Furterer.
Figure 13.6 Gage R&R study for measurements report card. [Minitab report card noting that the 10 parts and 3 operators meet the typical study requirements; that the process variation estimate may be imprecise if the selected parts do not represent typical process variability; and that the reproducibility estimate is less precise than the repeatability estimate, so large operator differences should be examined.] Source: Sandra L. Furterer.
[Figure 13.7 summary report: the overall % accuracy was 48.3%, below the 50% guideline, meaning the appraisals of the test items correctly matched the standard only 48.3% of the time. Misclassification rates: overall error rate 51.7%; Pass rated Fail 38.9%; Fail rated Pass 70.8%; mixed ratings (same item rated both ways) 43.3%. The % accuracy by appraiser panel showed rates of 40.0%, 50.0%, and 55.0% for the three appraisers. The report's comments read: Consider the following when assessing how the measurement system can be improved:]
• Low accuracy rates: Low rates for some appraisers may indicate a need for additional training for those appraisers. Low rates for all appraisers may indicate more systematic problems, such as poor operating definitions, poor training, or incorrect standards.
• High misclassification rates: May indicate that either too many Pass items are being rejected, or too many Fail items are being passed on to the consumer (or both).
• High percentage of mixed ratings: May indicate items in the study were borderline cases between Pass and Fail, thus very difficult to assess.

Figure 13.7 Attribute agreement analysis results for mortgage appraisal process. Source: Sandra L. Furterer.
Attribute Agreement Analysis Example12

We performed an attribute measurement analysis study for the mortgage appraisal process for a mortgage company. The mortgage company's appraisal department works with external appraisal companies to assess the quality of the appraisals they provide to the mortgage company, prior to a consumer home loan being given. There were three appraisers who reviewed the quality of the appraisals. We wanted to assess whether the appraisal assessors were assessing the quality consistently with themselves and across the other appraisal assessors. We took a sample of 10 different appraisals provided by multiple appraisal companies. Each of the three appraisal assessors reviewed each appraisal twice. The results were:
• The overall error rate was 51.7%.
• The appraisals that should have passed were rated as needing rework 38.9% of the time, while the appraisals that should have been rated as rework were rated as pass 70.8% of the time.
• The mixed ratings percentage was 43.3%, the percentage of appraisals that a reviewer did not rate the same way across the two trials.
Minitab statistical software was used to set up the study, enter the assessors' data, and analyze the results. The summary results are shown in Figure 13.7. Figure 13.8 shows the misclassification report, identifying which appraisals were most difficult to classify.
[Figure 13.8 misclassification report: overall error rate = 51.7%. Panels show the most frequently misclassified items (% Pass rated Fail and % Fail rated Pass, by item) and the appraiser misclassification rates (% Pass rated Fail, % Fail rated Pass, and % rated both ways, for Appraisers 1-3).]

Figure 13.8 Misclassification report for mortgage appraisal process. Source: Sandra L. Furterer.
[Figure 13.9 accuracy report: all graphs show 95% confidence intervals for accuracy rates; intervals that do not overlap are likely to be different. Panels show % accuracy by appraiser, by appraiser and standard (Pass/Fail), by standard, and by trial.]

Figure 13.9 Attribute agreement analysis accuracy report. Source: Sandra L. Furterer.
Figure 13.9 shows the attribute agreement analysis accuracy report, which identifies how each appraiser rated against other appraisers and the standard. There was a great deal of variability in the ratings by each assessor and across the assessors. Some appraisals were rejected when they should have been accepted, causing additional rework and expense for the appraisal companies. Some appraisals should have been rejected for poor quality, but they were not. The attribute agreement analysis was a powerful tool to demonstrate that the appraisal department needed to set a more explicit and well-defined operational definition for the quality of the appraisals and communicate these standards to the appraisal companies, as well as to better train the appraisers on the standard. The attribute agreement analysis study could be performed again after the measurement system was improved.
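For readers who want to see the bookkeeping behind such a study, here is a minimal sketch of how overall % accuracy and % mixed ratings are tallied. The ratings below are hypothetical illustrations, not the study data from Figures 13.7 through 13.9.

```python
# Each appraiser rates every item twice; the standard is the known true rating.
standard = {"item1": "Pass", "item2": "Fail", "item3": "Pass"}
# ratings[appraiser][item] -> (trial 1, trial 2); hypothetical values
ratings = {
    "Appraiser 1": {"item1": ("Pass", "Pass"), "item2": ("Pass", "Fail"), "item3": ("Fail", "Fail")},
    "Appraiser 2": {"item1": ("Pass", "Fail"), "item2": ("Fail", "Fail"), "item3": ("Pass", "Pass")},
}

total = correct = mixed = 0
for appraiser, items in ratings.items():
    for item, trials in items.items():
        total += len(trials)
        correct += sum(r == standard[item] for r in trials)   # matches to the standard
        mixed += len(set(trials)) > 1                         # same item rated both ways

pairs = sum(len(items) for items in ratings.values())
print(f"% accuracy: {100 * correct / total:.1f}")
print(f"% mixed ratings: {100 * mixed / pairs:.1f}")
```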
Chapter 14 Statistical Process Control (SPC)
FUNDAMENTAL CONCEPTS Distinguish between control limits and specification limits and between process stability and process capability. (Apply) Body of Knowledge III.E.1
Control limits are used on control charts to detect when a process changes. These limits are calculated using statistical formulas that ensure that, for a stable process, a high percentage of the points will fall between the upper and lower control limits. In other words, the probability that a point from a stable process falls outside the control limits is very small, and the user of the chart is almost certainly correct to assume that such a point indicates that the process has changed. When the chart is correctly used, the user will take appropriate action when a point occurs outside the control limits. The action that is appropriate varies with the situation. In some cases, the wisest course may be to increase vigilance through more frequent sampling. Sometimes an immediate process adjustment is called for. In other cases, the process must be stopped immediately.

One of the most common mistakes in using control charts is to use the specification limits as control limits. Since the specification limits have no relationship to control limits or the stability of the process, the user of the chart has no statistical basis for assuming that the process has changed when a point occurs outside the specification limits. The statement, "I just plotted a point outside the control limits, but it is well within specification limits, so I don't have to worry about it," represents a misunderstanding of the chart. The main purpose of the control chart is to signal the user that the process has changed and an assignable cause should be identified. If the signal is ignored, the control chart loses much of its value as an early warning tool.

This misunderstanding of the significance of the control limits is especially dangerous in the case of the averages portion of the X and R or X and s chart. Recall that the points on this portion of the chart are calculated by averaging several measurements. It is possible for the average (or mean) to fall within the specification limits even though none of the actual measurements are within these limits.
For example, suppose the specification limits for a dimension are 7.350 to 7.360, and a sample of five parts yields the following measurements: 7.346, 7.344, 7.362, 7.365, 7.366. The mean of these values is 7.357, well within the specification limits, even though none of the individual measurements is. In this case, the range chart would likely have shown the point above its upper control limit. Should the user take comfort in the fact that the plotted point, located at the mean, is well inside the specification limits?

Control limits are calculated based on data from the process. Formulas for control limits and examples of each are given in the following sections. The formulas are repeated in Appendix C. Several constants are needed in the formulas; these appear as subscripted capital letters such as A2, and the values of these constants are given in Appendix D. When calculating control limits, it is prudent to collect as much data as is practical. Many authorities specify at least 25 sample subgroups.

As explained in the following section, the control limits for each control chart are calculated based on data from the process. The control chart compares each new point with the distribution that was used as the basis for the control limits. The control limits enclose the vast majority of the points from the distribution: 99.73% if it is a normal distribution and the control limits are calculated to be at plus and minus three standard deviations from the mean. Therefore, for the normal distribution, if the point is outside the control limits, it is common to state that there is a 99.73% probability that the point did not come from the distribution that was used to calculate the control limits. The percentages vary somewhat from chart to chart, but the control limits are constructed so that this probability will be quite high. It should be noted that these probabilities are somewhat theoretical, because no process runs as if its output were randomly selected numbers from some historical distribution. It is enough to say that when a point falls outside the control limits, the probability is quite high that the process has changed.

One should never show specification limits on a control chart. The main reason is that specification limits apply to each part, or individual unit, while the control limits on an X and R chart apply to the average and range of a subgroup sample, which tends to dampen the impact of individual outliers. Figures 14.1 and 14.2 show the difference between the distribution of the individual values and the distribution of the subgroup averages; the reader can see the higher variability in the individuals versus the averages. Notice that the individual values range from 1.0931 to 1.9404, with a standard deviation of .1435, while the averages of the subgroup samples of five each range from 1.3947 to 1.7700, with a standard deviation of .0773. The averaging reduces the variation, so you are not reacting to out-of-control signals that are not assignable causes.

When the probability is very high that a point did not come from the distribution used to calculate the control limits, the process is said to be "out of statistical control." The out-of-statistical-control condition is often very subtle and would perhaps not be detected without the aid of a control chart. This, in fact, is one of the main values of the control chart: it detects changes in a process that would not otherwise be noticed.
This may permit adjustment or other action on the process before serious damage is done, such as making large quantities of nonconforming product or making large quantities of product that differ significantly one part from another in some measurable quality characteristic.
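A short simulation makes the averaging effect behind Figures 14.1 and 14.2 visible: subgroup averages vary far less than the individual values (the standard deviation of averages of n values is roughly the individuals' standard deviation divided by the square root of n). The mean and spread below are illustrative values chosen to echo the text's example.

```python
# Compare variability of individuals vs. subgroup averages (n = 5).
import random
import statistics

random.seed(1)
individuals = [random.gauss(1.55, 0.14) for _ in range(500)]
subgroups = [individuals[i:i + 5] for i in range(0, 500, 5)]
averages = [statistics.mean(s) for s in subgroups]

print(f"std dev of individuals:       {statistics.stdev(individuals):.4f}")
print(f"std dev of subgroup averages: {statistics.stdev(averages):.4f}")
# The second number is roughly the first divided by sqrt(5), about 2.24.
```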
Figure 14.1 Dot plot of the individual values. Source: Sandra L. Furterer.
Figure 14.2 Dot plot of the averages of the subgroup samples. Source: Sandra L. Furterer.
However, one of the hazards of using a control chart without proper training is the tendency to react to a point that is not right on target by adjusting the process even though the chart does not indicate that the process has statistically changed. If an adjustment is made whenever a point is not exactly on target, it may tend to destabilize a stable process. Moreover, the operator adjusting the process becomes part of the variation. Additionally, such adjustments may instead serve to "oversteer" the process.

For instance, suppose a new student nurse wakes a patient for morning vitals at 4:00 a.m. and finds the body temperature to be 98.64°. The student nurse, having just covered normal body temperature in class the previous day, sees that the patient has a 0.04° fever and administers a fever-reducing medicine, like aspirin. At the 5:00 a.m. vitals, the temperature is down to 98.4°, the aspirin having done its job. The nurse realizes that the temperature is now below target and administers a medicine to warm the patient. By 6:00 a.m. the patient is warm enough to justify three aspirin tablets, and by 7:00 a.m. the patient needs more medicine to raise their body temperature, and so on. In the ideal situation, a process should not need adjustment except when the chart indicates that it is out of statistical control. Dr. W. E. Deming, one of the key figures in the quality field, stated, "The function of a control chart is to minimize the net economic loss from . . . overadjustment and underadjustment."2
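The oversteering effect is easy to demonstrate numerically. In the sketch below (illustrative numbers, with the 98.6° target borrowed from the nurse example), "correcting" a stable process after every reading roughly doubles its variance compared with leaving it alone, which is the standard result for this kind of full-deviation adjustment rule.

```python
# Simulate a stable process with and without point-by-point overadjustment.
import random
import statistics

random.seed(2)
target, sigma, n = 98.6, 0.2, 10_000

# Hands-off: the process stays centered with standard deviation sigma.
untouched = [target + random.gauss(0, sigma) for _ in range(n)]

# Overadjusted: after each reading, "correct" the full observed deviation.
adjusted, offset = [], 0.0
for _ in range(n):
    reading = target + offset + random.gauss(0, sigma)
    adjusted.append(reading)
    offset -= reading - target   # operator becomes part of the variation

print(f"std dev, hands off:    {statistics.stdev(untouched):.3f}")
print(f"std dev, overadjusted: {statistics.stdev(adjusted):.3f}")  # ~ sigma * sqrt(2)
```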
RATIONAL SUBGROUPS Explain and apply the principles of rational subgroups. (Apply) Body of Knowledge III.E.2
The method used to select samples for a control chart must be logical or “rational.” In the case of the X and R chart, in order to assign a “rational” subgroup or sample size, there should be a high probability of variation between successive samples, while the variation within the sample is kept small. Therefore, samples frequently consist of parts that are produced successively by the same process to minimize the within-sample variation. The next sample is chosen somewhat later so that any process shifts that have occurred will be displayed on the chart. Choosing the rational subgroup requires care to make sure that the same process is producing each item. For example, suppose a candy-making process uses 40 pistons to deposit 40 gobs of chocolate on a moving sheet of waxed paper in a 5 × 8 array as shown in Figure 14.3. How should rational subgroups of size five be selected? The choice consisting of the first five chocolates in each row, as shown in Figure 14.3a, would have the five units of the sample produced by five different processes (the five different pistons). A better choice would be to select the upper left-hand chocolate in five consecutive arrays as shown in Figure 14.3b, because they are all formed by the same piston. If the original sampling plan had been used, and one piston were functioning improperly, the data might look like this:
Figure 14.3 Conveyor belt in chocolate-making process; possible sampling plans: (a) subgroup sample taken across pistons; (b) rational subgroup sample taken from a single piston. Source: D. W. Benbow, A. K. Elshennawy, and H. F. Walker, The Certified Quality Technician Handbook (Milwaukee, WI: ASQ Quality Press, 2003), 62.
Piston 1:  .25  .24  .26  .24  .24  .25  .26  .26  .23  .24
Piston 3:  .24  .26  .25  .25  .24  .23  .24  .25  .26  .24
Piston 2 (the errant piston) produced readings of .33 to .38 in every subgroup, while pistons 4 and 5 produced readings of .21 to .29. The resulting subgroup averages ran from .248 to .276 (overall average .264), and the subgroup ranges from .08 to .17 (average range .127).
If the upper and lower control limits for the averages chart were based on these data, they would be UCL ≈ 0.34 and LCL ≈ 0.19 and the averages would all fall ridiculously near the centerline of the chart, 0.264. The chart would be worthless as a tool to signal process change. This happened because the errant piston #2 caused the ranges to be very high, which in turn caused the control limit values to be spread too far apart. Another mistake sometimes made in selecting a rational subgroup is to measure what is essentially the same thing multiple times and use those values to make up a sample. Examples of this might be to check the hardness of a heat-treated shaft at three different points or to measure the temperature of a well-stirred vat at three different points. Data such as these might result in the following:
Time       7 am   8 am   9 am   10 am  11 am  Noon   1 pm   2 pm   3 pm   4 pm   5 pm
Reading 1  21     33     19     30     22     25     31     31     22     29     28
Reading 2  22     30     19     29     20     26     33     32     22     28     29
Reading 3  23     31     17     29     22     28     29     33     19     30     28
Average    22.0   31.3   18.3   29.3   21.3   26.3   31.0   32.0   21.0   29.0   28.3
Range      2      3      2      1      2      3      4      2      3      2      1
For the averages chart, the control limits work out to approximately 28.7 and 24.0 (using A2 = 1.023 for subgroups of three). In this situation, the readings in each sample are so close to each other that the range is very small, which places the UCL and LCL so close together that only two points on the chart (26.3 and 28.3) fall within the control limits. Again, the chart is worthless as a tool for signaling process change.
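The limits quoted above can be reproduced directly from the table. A2 = 1.023 is the standard control chart constant for subgroup size n = 3 (Appendix D).

```python
# Averages-chart limits for the temperature data above (n = 3 per sample).
averages = [22.0, 31.3, 18.3, 29.3, 21.3, 26.3, 31.0, 32.0, 21.0, 29.0, 28.3]
ranges = [2, 3, 2, 1, 2, 3, 4, 2, 3, 2, 1]
A2 = 1.023  # control chart constant for subgroup size n = 3

x_double_bar = sum(averages) / len(averages)   # grand average
r_bar = sum(ranges) / len(ranges)              # average range, very small here

ucl = x_double_bar + A2 * r_bar
lcl = x_double_bar - A2 * r_bar
print(f"UCL = {ucl:.1f}, LCL = {lcl:.1f}")                 # ~28.7 and ~24.0
print([a for a in averages if lcl < a < ucl])              # only 26.3 and 28.3
```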
CONTROL CHARTS FOR ATTRIBUTES DATA Identify, select, and interpret control charts (p, np, c, and u) for data that is measured in terms of discrete attributes or counts. (Analyze) Body of Knowledge III.E.3
Attributes charts are used for count data. If every item is in one of two categories such as good or bad, “defective units” are counted. If each part has several flaws, “defects” are counted.
Charting Defective Units

If defective units are being counted, the p chart can be used. For example, a test for the presence of the Rh negative factor in 13 samples of donated blood gives the following results:

Test number           1    2    3    4    5    6    7    8    9    10   11   12   13
No. of units of blood 125  111  113  120  118  137  108  110  124  128  144  138  132
No. of Rh– units      14   18   13   17   15   15   16   11   14   13   14   17   16
These data are plotted on a p chart in Figure 14.4.
[Figure 14.4: a completed p chart form (machine/process: Blood anal #A87; product: donated blood; defective = Rh negative; date: 2013 Sept 8; operators: Sarah, Emily, Dana), plotting the fraction of Rh negative units for each of the 13 samples.]

Figure 14.4 Example of a p chart.
Note that the p chart in Figure 14.4 has two points that are outside the control limits. These points indicate that the process was out of statistical control, which is sometimes referred to as out of control. It means that there is a very low probability that these points came from the same distribution as the one used to calculate the control limits. It is therefore very probable that the distribution has changed. These out-of-control points are a statistical signal that the process needs attention of some type. People familiar with the process need to decide how to react to various points that are outside the control limits. In the situation in this example, an unusually high number of units of blood tested Rh negative. This could indicate a different population of donors or possibly a malfunction of the testing equipment or procedure.

If defectives are being counted and the sample size remains constant, the np chart can be used. Example: Packages containing 1000 light bulbs are randomly selected, and all 1000 bulbs are light-tested. The np chart is shown in Figure 14.5. Note that on March 25 the point is outside the control limits. This means that there is a high probability that the process was different on that day than on the days that were used to construct the control limits. In this case, the process was different in a good way.
[Figure 14.5: a completed np chart form (machine/process: 1Wty-89; product: 100-watt Glo-Soft; defective = bulb does not light; operators: Josiah, Regan, Alec), plotting the number of defective bulbs in daily samples of 1000 from 3/16 to 4/2, with the March 25 point below the lower control limit. Notes on the form read: March 25, filaments inserted at 31.]
Figure 14.5 Example of an np chart. Source: D. W. Benbow, A. K. Elshennawy, and H. F. Walker, The Certified Quality Technician Handbook (Milwaukee, WI: ASQ Quality Press, 2003), 48.
[Figure 14.6: a completed u chart form (machine/process: finish grind; product: d2192; defect = scratches, nicks < 0.005; date: 6/25/13; operators: Turner, Hoelsher, Hawks, Brandt), plotting defects per unit, roughly .10 to 1.00, for 14 samples of 8 to 12 units.]
Figure 14.6 Example of a u chart. Source: D. W. Benbow, A. K. Elshennawy, and H. F. Walker, The Certified Quality Technician Handbook (Milwaukee, WI: ASQ Quality Press, 2003), 49.
It would be advisable to pay attention to the process to see what went right and to see if the conditions could be incorporated into the standard way of running the process. Notice the operator note at the bottom of the chart.

The u chart and c chart are used when defects rather than defective units are being counted. If the sample size varies, the u chart is used. If the sample size is constant, the c chart may be used. An example of a u chart is shown in Figure 14.6. A c chart would look much like the np chart illustrated in Figure 14.5 and is not shown here. Here is a guide to deciding which attributes chart to use (a small code sketch after the list encodes the same decisions):
• For defective units, use p or np.
– Use p for varying sample size.
– Use np for constant sample size.
• For defects, use u or c.
– Use u for varying sample size.
– Use c for constant sample size.
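The helper below is a minimal restatement of that guide; the function name and parameters are illustrative, not from the text.

```python
# Encode the attributes-chart selection guide: defectives vs. defects,
# constant vs. varying sample size.
def attributes_chart(counting: str, constant_sample_size: bool) -> str:
    if counting == "defectives":        # whole units classified good/bad
        return "np chart" if constant_sample_size else "p chart"
    if counting == "defects":           # flaws counted per unit
        return "c chart" if constant_sample_size else "u chart"
    raise ValueError("counting must be 'defectives' or 'defects'")

print(attributes_chart("defectives", constant_sample_size=False))  # p chart
print(attributes_chart("defects", constant_sample_size=True))      # c chart
```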
Control Limits for p Charts

As indicated earlier, p charts are used when the data indicate the number of defective units in each sample. An example of this type of data is shown in Table 14.1, where the column labeled "No. discrepancies d" shows the number of defective units. The formulas for the control limits for the p chart are:

$$UCL_p = \bar{p} + 3\sqrt{\frac{\bar{p}(1-\bar{p})}{\bar{n}}}, \qquad LCL_p = \bar{p} - 3\sqrt{\frac{\bar{p}(1-\bar{p})}{\bar{n}}}$$

where

$$\bar{n} = \frac{\text{Sum of the sample sizes}}{\text{Number of samples}}, \qquad \bar{p} = \frac{\text{Sum of the discrepancies}}{\text{Sum of the sample sizes}} = \frac{\Sigma d}{\Sigma n}$$
The first step in calculating these control limits is to find $\bar{n}$ and $\bar{p}$. The totals needed for these values are calculated in Table 14.1.

$$\bar{n} = \frac{\Sigma n}{\text{Number of samples}} = \frac{1401}{29} = 48.3, \qquad \bar{p} = \frac{\Sigma d}{\Sigma n} = \frac{85}{1401} = 0.061$$

Using these values in the formula for the upper control limit:

$$UCL_p = 0.061 + 3\sqrt{\frac{0.061(1-0.061)}{48.3}} = 0.061 + 3\sqrt{0.001186} = 0.061 + 0.103 = 0.164$$

Using the formula for the lower control limit (LCL) and taking advantage of some of the values just calculated:

$$LCL_p = 0.061 - 0.103 = -0.042$$

Since this value is negative, there is no lower control limit. It is important to note that zero is commonly used as the lower control limit. It is equally important to note that using zero as the lower control limit ignores the statistical significance of a grouping of points near zero; such a grouping does not mean the same thing statistically as a grouping of points near a true lower control limit. The tests for identifying nonrandom conditions or out-of-control points for p, np, c, and u charts from the Minitab software are shown in Figure 14.7. Typically, only one or a few of these tests are selected, so as not to increase the type I error.
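The same calculation is easy to check in a few lines, using only the totals from Table 14.1:

```python
# p chart control limits from the Table 14.1 totals.
import math

total_n, total_d, num_samples = 1401, 85, 29
n_bar = total_n / num_samples        # average sample size, ~48.3
p_bar = total_d / total_n            # average fraction defective, ~0.061

half_width = 3 * math.sqrt(p_bar * (1 - p_bar) / n_bar)
print(f"p-bar = {p_bar:.3f}, UCL = {p_bar + half_width:.3f}")   # ~0.164
if p_bar - half_width < 0:
    print("LCL is negative, so there is no lower control limit")
```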
Table 14.1 Attributes data with totals.

Time         Sample size n   No. discrepancies d
8:00 a.m.    48              3
8:30         45              6
9:00         47              2
9:30         51              3
10:00        48              5
10:30        47              4
11:00        48              1
11:30        50              0
12:00 p.m.   46              1
12:30        45              0
1:00         47              2
1:30         48              5
2:00         50              3
2:30         50              6
3:00         49              2
3:30         46              4
4:00         50              1
4:30         52              1
5:00         48              6
5:30         47              5
6:00         49              6
6:30         49              4
7:00         51              3
7:30         50              4
8:00         48              1
8:30         47              2
9:00         47              5
9:30         49              0
10:00        49              0
Totals       1401            85
Source: D. W. Benbow, A. K. Elshennawy, and H. F. Walker, The Certified Quality Technician Handbook (Milwaukee, WI: ASQ Quality Press, 2003), 57.
Figure 14.7 P chart out-of-control tests from Minitab. Source: "Minitab Help," Minitab, State College, PA, accessed 7/7/2021.
Control Limits for u Charts

If the values in the "d" column of Table 14.1 represent the number of defects rather than the number of defectives, then the u chart is appropriate. The formulas for control limits for the u chart are:

$$UCL_u = \bar{u} + 3\sqrt{\frac{\bar{u}}{\bar{n}}}, \qquad LCL_u = \bar{u} - 3\sqrt{\frac{\bar{u}}{\bar{n}}}$$

where

$$\bar{u} = \frac{\Sigma d}{\Sigma n} \quad \text{and} \quad \bar{n} = \frac{\Sigma n}{\text{Number of samples}}$$

Using the data from Table 14.1:

$$\bar{u} = \frac{\Sigma d}{\Sigma n} = \frac{85}{1401} = 0.061$$

$$UCL_u = 0.061 + 3\sqrt{\frac{0.061}{48.3}} = 0.061 + 0.107 \approx 0.168$$

$$LCL_u = 0.061 - 3\sqrt{\frac{0.061}{48.3}} \approx 0.061 - 0.107 \approx -0.046$$
Again, since the lower control limit formula returns a negative value, there is no lower control limit. (Some authors use zero as the lower control limit.) Figures 14.5 and 14.6 illustrate np and u charts. The c chart is similar to a u chart and is not shown. Analysis of attributes charts follows the same procedures as those for the x and R charts outlined in the following section on variables charts. The same out-of-control tests can be used for the u chart as shown for the p charts.
CONTROL CHARTS FOR VARIABLES DATA Identify, select, and interpret control charts (X and R, X and s, and X and mR) for data that is measured on a continuous scale. (Analyze) Body of Knowledge III.E.4
The X and R chart is called a variables chart because the data to be plotted result from measurement on a variable or continuous scale. The X and s chart is another variables control chart. With this chart, the sample standard deviation s is used instead of the range to indicate dispersion. The standard deviation is a better measure of spread than the range, so this chart provides a somewhat more precise statistical signal than the X and R chart. Users who are comfortable with the standard deviation function on a handheld calculator will be able to calculate s almost as easily as R. The X and s chart is often the preferred chart when the chart is constructed using SPC software.
Control Limits for the X and R Chart

Upper control limit for the averages chart: $UCL_{\bar{X}} = \bar{\bar{X}} + A_2\bar{R}$
Lower control limit for the averages chart: $LCL_{\bar{X}} = \bar{\bar{X}} - A_2\bar{R}$
Upper control limit for the range chart: $UCL_R = D_4\bar{R}$
Lower control limit for the range chart: $LCL_R = D_3\bar{R}$

Example: Data are collected in a face-and-plunge operation done on a lathe. The dimension being measured is the groove inside diameter (ID), which has a tolerance of 7.125 ± 0.010. Four parts are measured every hour. The data are shown in Table 14.2. Since the formulas use the values of $\bar{\bar{X}}$ and $\bar{R}$, the next step is to calculate the average ($\bar{X}$) and range (R) for each time and then calculate the average of the averages ($\bar{\bar{X}}$) and the average range ($\bar{R}$). These calculations have been completed in Table 14.2, and the values of $\bar{\bar{X}}$ and $\bar{R}$ are listed in the last row. Notice that the value of $\bar{\bar{X}}$ is 7.125, which happens to be at the center of the tolerance in this example. This means that the process is centered at the same point as the center of the specifications.
$$UCL_{\bar{X}} = \bar{\bar{X}} + A_2\bar{R} = 7.125 + 0.729 \times 0.004 = 7.128$$
$$LCL_{\bar{X}} = \bar{\bar{X}} - A_2\bar{R} = 7.125 - 0.729 \times 0.004 = 7.122$$
$$UCL_R = D_4\bar{R} = 2.282 \times 0.004 \approx 0.009$$
$$LCL_R = D_3\bar{R} = 0 \times 0.004 = 0$$

Table 14.2 Data for X and R chart control limit calculations.

Time     1st Meas.  2nd Meas.  3rd Meas.  4th Meas.  Average  Range   Std. Dev.
4 PM     7.124      7.122      7.125      7.125      7.124    0.003   0.0014
5        7.123      7.125      7.125      7.128      7.125    0.005   0.0021
6        7.126      7.128      7.128      7.125      7.127    0.003   0.0015
7        7.127      7.123      7.123      7.126      7.125    0.004   0.0021
8        7.125      7.126      7.121      7.122      7.124    0.005   0.0024
9        7.123      7.129      7.129      7.124      7.126    0.006   0.0032
10       7.122      7.122      7.124      7.125      7.123    0.003   0.0015
11       7.128      7.125      7.126      7.123      7.126    0.005   0.0021
12 AM    7.125      7.125      7.121      7.122      7.123    0.004   0.0021
1        7.126      7.123      7.123      7.125      7.124    0.003   0.0015
2        7.126      7.126      7.127      7.128      7.127    0.002   0.0010
3        7.127      7.129      7.128      7.129      7.128    0.002   0.0010
4        7.128      7.123      7.122      7.124      7.124    0.006   0.0026
5        7.124      7.125      7.129      7.127      7.126    0.005   0.0022
6        7.127      7.127      7.123      7.125      7.126    0.004   0.0019
7        7.128      7.122      7.124      7.126      7.125    0.006   0.0026
8        7.123      7.124      7.125      7.122      7.124    0.003   0.0013
9        7.122      7.121      7.126      7.123      7.123    0.005   0.0022
10       7.128      7.129      7.122      7.129      7.127    0.007   0.0034
11       7.125      7.125      7.124      7.122      7.124    0.003   0.0014
12 PM    7.125      7.121      7.125      7.128      7.125    0.007   0.0029
1        7.121      7.126      7.120      7.123      7.123    0.006   0.0026
2        7.123      7.123      7.123      7.123      7.123    0.000   0.0000
3        7.128      7.121      7.126      7.127      7.126    0.007   0.0031
4        7.129      7.127      7.127      7.124      7.127    0.005   0.0021
Average  7.125      7.125      7.125      7.125      7.125    0.004   0.0020
Source: Adapted from D. W. Benbow, A. K. Elshennawy, and H. F. Walker, The Certified Quality Technician Handbook (Milwaukee, WI: ASQ Quality Press, 2003), 53.
In these calculations, the values A2, D3, and D4 are found in Appendix D. The row for subgroup size n = 4 is used because each hourly sample has four readings. The goal now is to construct the chart and use it to monitor future process activity. The first step is to choose scales for the X and R sections of the chart in such a way that the area between the control limits covers most of the chart area. One such choice is shown in Figure 14.8, which also shows the chart with several points plotted. As each point is plotted, the user should check to determine whether any of the indicators of process change listed in Figure 14.9 has occurred. If none has occurred, continue with the process. If one of the indicators has occurred, appropriate action should be taken, as spelled out in the process operating instructions.
[Figure 14.8: a completed X and R chart form (machine/process: face and plunge #318; product: CV265; operator: S. Smith; dimension: 7.125 ± 0.010; gage: dial calipers #12; date: July 21, 2013), with seven half-hourly subgroups of four readings plotted, averages of 7.124 to 7.129 and ranges of .008 to .018.]
Figure 14.8 X and R chart example. Source: D. W. Benbow, A. K. Elshennawy, and H. F. Walker, The Certified Quality Technician Handbook (Milwaukee, WI: ASQ Quality Press, 2003), 64.
Control Limits for the X and s Chart

Table 14.2 also shows the calculated values of s for the data. The formulas for the control limits for the X and s chart are:

Upper control limit for the averages chart: $UCL_{\bar{X}} = \bar{\bar{X}} + A_3\bar{s}$
Lower control limit for the averages chart: $LCL_{\bar{X}} = \bar{\bar{X}} - A_3\bar{s}$
Upper control limit for the standard deviation chart: $UCL_s = B_4\bar{s}$
Lower control limit for the standard deviation chart: $LCL_s = B_3\bar{s}$

$$UCL_{\bar{X}} = 7.125 + 1.628 \times 0.0020 = 7.128$$
$$LCL_{\bar{X}} = 7.125 - 1.628 \times 0.0020 = 7.122$$
$$UCL_s = B_4\bar{s} = 2.266 \times 0.0020 \approx 0.0045$$
$$LCL_s = B_3\bar{s} = 0 \times 0.0020 = 0$$

Construction and use of the X and s chart with these control limits is quite similar to that for the X and R chart, and it is not shown here.
COMMON AND SPECIAL CAUSE VARIATION Interpret various control chart patterns (runs, hugging, trends) to determine process control, and use SPC rules to distinguish between common cause and special cause variation. (Analyze) Body of Knowledge III.E.5
The variation of a process that is in statistical control is called common cause variation. This variation is inherent in the process and can only be reduced through changes in the process itself. Therefore, it is important that process operators not respond to changes attributed to common cause variation. When additional variation occurs, it is referred to as special cause variation. This variation can be assigned to some outside change that affects the process output. Hence, it is also called assignable cause variation. It is important that the process operator responds to this variation. The purpose of a control chart is to distinguish between these two types of variation. It does this by providing a statistical signal that a special or assignable cause is impacting the process. This permits the operator to have immediate process feedback and to take timely and appropriate action. Although the term “assignable” is used synonymously with “special” causes of variation, the root cause of the variation and its remedy may not be known but may be “assigned” to some cause, perhaps yet to be determined, other than the random noise inherent in the process. The statistical signal consists of the occurrence of an event that is on the list of indicators of process change (see Figure 14.9).
1. A point above the upper control limit or below the lower control limit
2. Seven successive points above (or below) the average line
3. Seven successive points trending up (or down)
4. Middle one-third of the chart includes more than 90% or fewer than 40% of the points after at least 25 points have been plotted
5. Nonrandom patterns
Figure 14.9 Control chart indicators of process change. Source: D. W. Benbow, A. K. Elshennawy, and H. F. Walker, The Certified Quality Technician Handbook (Milwaukee, WI: ASQ Quality Press, 2003), 20.
The action that is appropriate for the operator to take depends on the event and process. The action that is needed should be clearly spelled out in the operating instructions for the process. These instructions should provide for logging of the event and actions taken. Figures 14.10 through 14.14 illustrate the use of the indicators of process change to distinguish between special cause and common cause variation. The caption for each figure explains the indicator involved.
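The first three indicators lend themselves to a direct mechanical check. The sketch below is illustrative (the limits and data are made up, not from the text); it flags points beyond the limits, seven successive points on one side of the centerline, and seven successive points trending in one direction.

```python
# Check a sequence of plotted points against the first three indicators
# of process change from Figure 14.9.
def out_of_control_signals(points, ucl, lcl, center):
    signals = []
    for i, x in enumerate(points):
        if x > ucl or x < lcl:
            signals.append((i, "beyond control limits"))
    for i in range(len(points) - 6):
        window = points[i:i + 7]
        if all(x > center for x in window) or all(x < center for x in window):
            signals.append((i, "seven successive points on one side"))
        diffs = [b - a for a, b in zip(window, window[1:])]
        if all(d > 0 for d in diffs) or all(d < 0 for d in diffs):
            signals.append((i, "seven successive points trending"))
    return signals

data = [7.124, 7.125, 7.126, 7.127, 7.128, 7.129, 7.130, 7.131, 7.125]
print(out_of_control_signals(data, ucl=7.128, lcl=7.122, center=7.125))
```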
Figure 14.10 Point outside control limit. Source: D. W. Benbow, A. K. Elshennawy, and H. F. Walker, The Certified Quality Technician Handbook (Milwaukee, WI: ASQ Quality Press, 2003), 67.
Figure 14.11 Seven successive points trending upward. Source: D. W. Benbow, A. K. Elshennawy, and H. F. Walker, The Certified Quality Technician Handbook (Milwaukee, WI: ASQ Quality Press, 2003), 67.
Figure 14.12 Seven successive points on one side of the average. Source: D. W. Benbow, A. K. Elshennawy, and H. F. Walker, The Certified Quality Technician Handbook (Milwaukee, WI: ASQ Quality Press, 2003), 67.
Figure 14.13 Nonrandom pattern going up and down across the average line. Source: D. W. Benbow, A. K. Elshennawy, and H. F. Walker, The Certified Quality Technician Handbook (Milwaukee, WI: ASQ Quality Press, 2003), 67.
Figure 14.14 Nonrandom patterns with apparent cycles. Source: D. W. Benbow, A. K. Elshennawy, and H. F. Walker, The Certified Quality Technician Handbook (Milwaukee, WI: ASQ Quality Press, 2003), 68.
PROCESS CAPABILITY MEASURES Describe the conditions that must be met in order to measure capability. Calculate Cp, Cpk, Pp, and Ppk measures and interpret their results. (Analyze) Body of Knowledge III.E.6
The data in Table 14.2 were used to calculate the control limits for the X and R control charts. If the control chart does not exhibit any of the indicators of instability discussed in the section on control limits and specification limits, it is assumed that the process is stable. If the process is stable, it is appropriate to conduct a process capability analysis. The purpose of a capability analysis is to estimate the percentage of the population that meets specifications. If the process continues to run exactly the same way, this estimate can be used as a prediction for future production. Note that such a study is not valid if the process is not stable (that is, out of control), because the analysis produces a prediction of the ability of the process to meet specifications; if the process is unstable, then it is unpredictable.

The process capability study answers the question, "Is my process good enough?" This is quite different from the question answered by a control chart, which is "Has my process changed?" Properly, use of a control chart to establish that a process is stable and predictable precedes the use of a capability study to see if items produced by the process are good enough.

Five indices are produced by a capability study: the Cp index, Pp index, Cpk index, Ppk index, and CR index. The most optimistic of these is Cp, which disregards centering and is insensitive to "shifts and drifts" (special causes) in the data. It indicates what the capability of the process would be if no such problems existed. It is also sometimes called the process potential because it represents the ideal number of times that the process distribution fits within the specification limits, assuming that the process is perfectly centered. The most realistic estimate is Ppk, which indicates how the process really is. Because of their different properties, experienced users can compare these indices and know what type of remedial action a process needs: removal of a special cause, centering, or reduction of natural variation.

In the case where $\bar{R}/d_2 \approx s$, the sample standard deviation, the same formula that produces Cp also produces Pp; the difference is in how standard deviation is estimated. The same is true for Ppk and the Cpk formula. Generally, Ppk numbers below 1.0 indicate a process in need of work, while numbers as high as 1.5 indicate an excellent process. If a process is stable and predictable and has consistently provided high Ppk numbers for a long period, the process is a candidate for removing final inspection, which is likely to produce more defects than it finds. The process should then have the input variables controlled and undergo periodic audits.

Perhaps the most common capability study error is requesting suppliers to provide just Cpk along with shipped goods. Cpk can easily be manipulated by selectively changing the order of the data; to get spectacularly inflated values, simply sort the data. If you are to rely on a single index, then Ppk is the appropriate choice. Authoritative texts allow the substitution of Cpk for Ppk if the process is stable and predictable; this is allowed because in that case they are equal.

The following discussion shows how to calculate five different capability indices: Cpk, Cp, Ppk, Pp, and CR. These indices were developed to provide a single number to quantify process capability.
Table 14.3 shows target process capability indices values depending upon whether you have an existing process, a new process, or one that requires safety, strength, or other critical parameters.
Table 14.3 Target process capability indices.

Process                                                     Two-sided specs   One-sided specs
Existing process                                            1.33              1.25
New process                                                 1.50              1.45
Safety, strength, or critical parameter (existing process)  1.50              1.45
Safety, strength, or critical parameter (new process)       1.67              1.60
Source: Adapted from D.C. Montgomery, Introduction to Statistical Quality Control, 3rd ed. (Wiley, 1996).
Calculating Cpk

Probably the most useful capability index is Cpk. The first step in calculating Cpk is to find the values of $Z_U$ and $Z_L$ using the following formulas:

$$Z_U = \frac{USL - \bar{\bar{X}}}{\sigma}, \qquad Z_L = \frac{\bar{\bar{X}} - LSL}{\sigma}$$

where USL and LSL are the upper and lower specification limits and σ is the process standard deviation, which may be approximated by

$$\hat{\sigma} = \frac{\bar{R}}{d_2}$$

(values of $d_2$ are given in Appendix D). For example, suppose the tolerance on a dimension is 1.000 ± 0.005, and a control chart shows that the process is stable with $\bar{\bar{X}} = 1.001$ and $\bar{R} = 0.003$ with sample size n = 3. For this example, USL = 1.005, LSL = 0.995, and the estimated value of σ is

$$\hat{\sigma} = \frac{0.003}{1.693} \approx 0.0018$$

Substituting these values into the Z formulas:

$$Z_U = \frac{1.005 - 1.001}{0.0018} = 2.22, \qquad Z_L = \frac{1.001 - 0.995}{0.0018} \approx 3.33$$
The formula for Cpk:

$$C_{pk} = \frac{\min(Z_U, Z_L)}{3}$$

where min means select the smaller of the values in the parentheses. In this example,

$$C_{pk} = \frac{\min(2.22, 3.33)}{3} = \frac{2.22}{3} = 0.74$$
The higher the value of Cpk, the more capable the process. A few years ago, a Cpk value of one or more was considered satisfactory. This would imply that there are at least three standard deviations between the process mean and the nearest specification limit. More recently, many industries require four, five, or even six standard deviations between the process mean and the nearest specification limit, corresponding to a Cpk of 1.33, 1.67, and 2, respectively. The push for six standard deviations was the origin of the Six Sigma methodology.

If the process data are normally distributed, the Z-values may be used in a standard normal table (Appendix B) to find the approximate values for the percentage of production that lies outside the specification. In this example, the upper Z of 2.22 corresponds to 0.0132 in Appendix B, meaning that approximately 1.32% of the production of this process violates the upper specification limit. Similarly, the lower Z of 3.33 corresponds to 0.0004 in Appendix B, meaning that approximately 0.04% of the production of this process violates the lower specification limit. Note that the use of the standard normal table in Appendix B is only appropriate if the process data are normally distributed. Since no real-world process data are exactly normally distributed, it is best to state these percentages as estimates. The percentages can be useful in estimating return on investment for quality improvement projects.

When the two Z-values are not equal, the process average is not centered within the specification limits, and some improvement to the percentages can be made by centering. Suppose, in the above example, that a process adjustment can be made that would move $\bar{\bar{X}}$ to a value of 1.000. Then $Z_U$ and $Z_L$ would each be about 2.77, and from the table in Appendix B, the two percentages would each be about 0.28%.

Cpk is a type of process capability index. It is sensitive to whether the process is centered but insensitive to special causes. In simplest terms, Cpk indicates how many times you can fit three standard deviations of the process between the mean of the process and the nearest specification limit. Assuming that the process is stable and predictable, if you can do this once, Cpk is 1.0, and your process probably needs attention. If you can do it 1.5 times, your process is excellent, and you are on the path to being able to discontinue final inspection. If you can do it two times, you have an outstanding process. If Cpk is negative, the process mean is outside the specification limits.
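A minimal sketch of the example's arithmetic follows; the normal CDF is built from math.erf so no external packages are needed. The code carries full precision, so the results differ slightly from the text, which rounds the estimated sigma to 0.0018.

```python
# Cpk for the 1.000 +/- 0.005 example, with estimated out-of-spec fractions.
import math

def phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

usl, lsl = 1.005, 0.995
x_bar, r_bar, d2 = 1.001, 0.003, 1.693   # d2 for subgroup size n = 3

sigma_hat = r_bar / d2                   # ~0.00177 (the text rounds to 0.0018)
z_u = (usl - x_bar) / sigma_hat
z_l = (x_bar - lsl) / sigma_hat
cpk = min(z_u, z_l) / 3                  # ~0.75 (0.74 in the text, from rounding)

print(f"Cpk = {cpk:.2f}")
print(f"est. fraction above USL: {1 - phi(z_u):.4f}")   # ~0.012
print(f"est. fraction below LSL: {phi(-z_l):.4f}")      # ~0.0004
```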
There is a more sophisticated way to calculate Cpk, using a sum-of-squares estimate of within-subgroup standard deviation. In the more common case shown here, the same formula is used to calculate both Cpk and Ppk; the difference is in how standard deviation is estimated. For Cpk, the moving range method is used, and for Ppk, the sum of squares for all the data is used. Since the moving range method of estimating standard deviation is insensitive to shifts and drifts (special causes), Cpk tends to be an estimate of the capability of the process, assuming special causes are not present. Anyone who wants you to accept Cpk as an indicator of process capability automatically owes you a control chart (process behavior chart) demonstrating that the process is stable and predictable (in which case Ppk will equal Cpk). Absent that, Ppk is the correct indicator of capability.

The confidence interval around an estimate of standard deviation tends to be larger than many users expect, and this uncertainty becomes part of the calculation. Even with a few dozen samples, the estimates of any of the quality indices may not be very precise.

While normality of the data is not a concern in control charts,5 it is of concern in interpreting the results of a process capability study. Data should be tested for nonnormality. If the data are nonnormal, estimates of defective parts per million may be improved by applying a transformation to the data.
Calculating Cp

The formula for the capability index Cp is

$$C_p = \frac{USL - LSL}{6\sigma}$$

Using the values from the previous example:

$$C_p = \frac{1.005 - 0.995}{6(0.0018)} \approx 0.93$$
Note that the formula for this index does not make use of the process average X. Therefore, this index does not consider whether the process average is centered within the specification limits or, indeed, whether it is even between the two limits. In reality, Cp tells what the process could potentially do if it were centered. For centered processes, Cp and Cpk have the same value. For processes that are not centered, Cp is greater than Cpk. Cp is a statistical measure of process capability. This means that it is a measure of how capable a process is of producing output within specification limits. This measurement only has meaning when the process being examined is in a state of statistical control. Cp disregards centering and is insensitive to shifts and drifts (special causes) in the data. If the process mean is not centered within the specification limits, this value may therefore be misleading, and the Cpk index should be used for analyzing process capability instead.6
Calculating CR

This index is also referred to as the capability ratio. It is the reciprocal of Cp:

$$CR = \frac{6\sigma}{USL - LSL} = \frac{1}{C_p}$$

Using the data from the previous example,

$$CR = \frac{6(0.0018)}{1.005 - 0.995} \approx 1.08$$

which, not surprisingly, is approximately 1/0.93.
Of course, lower values of CR imply more capable processes.
Calculating Ppk

Ppk is a type of process performance index, specifically a measure of historical variation in a process. As opposed to the related Cpk index, Ppk cannot be used to predict the future and is used when a process is not in statistical control. It is often used for preproduction runs where the population standard deviation is unknown. First, compute the Z-scores, the number of standard deviations the process average $\bar{X}$ is from each specification, substituting the sample standard deviation s for an estimate of the process standard deviation σ:

$$Z_U = \frac{USL - \bar{X}}{s}, \qquad Z_L = \frac{\bar{X} - LSL}{s}$$

with s computed as

$$s = \sqrt{\frac{\sum_i (X_i - \bar{X})^2}{n - 1}}$$

As a formula,

$$P_{pk} = \frac{\min(Z_U, Z_L)}{3}$$

where USL is the upper specification limit and LSL is the lower specification limit.7,8
Calculating Pp

Like the formula for Cp, Pp does not make use of the process average $\bar{X}$. However, like the formula for Ppk, it uses the sample standard deviation s instead of $\bar{R}/d_2$:

$$P_p = \frac{USL - LSL}{6s}$$
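Both performance indices fall out of a few lines once raw measurements are available. The data below are illustrative, not from the text; note the n−1 divisor in the sample standard deviation, matching the formula above.

```python
# Pp and Ppk from raw measurements, using the overall sample standard
# deviation s rather than R-bar/d2.
import statistics

data = [1.000, 1.002, 0.999, 1.003, 1.001, 1.000, 1.002, 1.001, 0.998, 1.004]
usl, lsl = 1.005, 0.995

x_bar = statistics.mean(data)
s = statistics.stdev(data)   # sum-of-squares estimate with n - 1 divisor

pp = (usl - lsl) / (6 * s)
ppk = min((usl - x_bar) / s, (x_bar - lsl) / s) / 3
print(f"Pp = {pp:.2f}, Ppk = {ppk:.2f}")
```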
Machine Capability Process capability analysis was discussed earlier in this section. Machine capability analysis is calculated mathematically in the same way (i.e., the same formulas are used). The difference is that machine capability analysis attempts to isolate and analyze the variation due to an individual machine rather than the entire process. If a process has several machines linked together either in series or parallel, a machine capability analysis should be conducted on each machine. It is usually best to provide a very consistent input to the machine and very precise and accurate measurement of the output. This helps assure that the analysis does not consider variation from other sources or from the measurement system. Machine capability analyses can be very useful in efforts to reduce process variation by identifying points in the process where excess variation is caused.
Chapter 15 Advanced Statistical Analysis
REGRESSION AND CORRELATION MODELS Describe how these models are used for estimation and prediction. (Apply) Body of Knowledge III.F.1
Regression

Regression analysis is used to model relationships between variables, determine the magnitude or strength of the relationships between variables, and make predictions based on the models. Regression analysis models the relationship between one or more response variables (also called dependent variables, explained variables, predicted variables, or regressands, usually named Y) and the predictor variables (also called independent variables, explanatory variables, control variables, or regressors, usually named X1, . . ., Xp). If there is more than one response variable, we speak of multivariate regression.

Simple and Multiple Linear Regression. Simple linear regression and multiple linear regression are related statistical methods for modeling the relationship between two or more random variables using a linear equation. Simple linear regression refers to a regression on two variables, while multiple linear regression refers to a regression on more than two variables. Linear regression assumes that the best estimate of the response is a linear function of some parameters (though not necessarily linear in the predictors).

Nonlinear Regression Models. If the relationship between the variables being analyzed is not linear in parameters, a number of nonlinear regression techniques can be used to obtain a more accurate regression.

Linear Models. Predictor variables may be defined quantitatively, that is, as continuous variables, or qualitatively, that is, as categorical variables. Categorical predictors are sometimes called factors. Although the method of estimating the
model is the same for each case, different situations are sometimes known by different names for historical reasons:
• If the predictors are all quantitative, we speak of multiple regression.
• If the predictors are all qualitative, one performs analysis of variance.
• If some predictors are quantitative and some qualitative, one performs an analysis of covariance.

The linear model usually assumes that the dependent variable is continuous. If least squares estimation is used, and it is assumed that the error component is normally distributed, the model is fully parametric. If it is not assumed that the data are normally distributed, the model is semi-parametric. If the data are not normally distributed, there are often better approaches to fitting than least squares. In particular, if the data contain outliers, robust regression might be preferred.

If two or more independent variables are correlated, we say that the variables are multicollinear. Multicollinearity results in parameter estimates that are unbiased and consistent but which may have relatively large variances.

If the regression error is not normally distributed but is assumed to come from an exponential family, generalized linear models should be used. For example, if the response variable can take only binary values (for example, a Boolean or yes/no variable), logistic regression is preferred. The outcome of this type of regression is a function that describes how the probability of a given event, for example, the probability of getting "yes," varies with the predictors.1
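For the simplest case, the least squares estimates have a closed form. The sketch below fits a simple linear regression (one predictor) and prints the prediction equation; the data are illustrative.

```python
# Closed-form least squares fit for simple linear regression.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 4.0, 6.2, 7.9, 10.1]

n = len(xs)
x_bar = sum(xs) / n
y_bar = sum(ys) / n
sxy = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))  # covariance sum
sxx = sum((x - x_bar) ** 2 for x in xs)                       # x variance sum

slope = sxy / sxx
intercept = y_bar - slope * x_bar
print(f"y-hat = {intercept:.2f} + {slope:.2f} x")   # the prediction equation
```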
Correlation

In probability theory and statistics, correlation, using a statistic called a correlation coefficient, indicates the strength and direction of a linear relationship between two random variables. In general statistical usage, correlation refers to the departure of two variables from independence, although correlation does not imply causality. A number of different coefficients are used for different situations. The best known is the Pearson product-moment correlation coefficient, which is obtained by dividing the covariance of the two variables by the product of their standard deviations. Despite its name, it was first introduced by Francis Galton. The correlation $\rho_{X,Y}$ (the Greek letter ρ, pronounced "rho"), also symbolized as r, between two random variables X and Y with expected values $\mu_X$ and $\mu_Y$ and standard deviations $\sigma_X$ and $\sigma_Y$ is defined as
$$\rho_{X,Y} = \frac{\operatorname{cov}(X,Y)}{\sigma_X\sigma_Y} = \frac{E\big((X-\mu_X)(Y-\mu_Y)\big)}{\sigma_X\sigma_Y}$$

where E is the expected value of the variable and cov means covariance. Since

$$\mu_X = E(X), \qquad \sigma_X^2 = E(X^2) - E^2(X)$$

and likewise for Y.
We may also write

$$\rho_{X,Y} = \frac{E(XY) - E(X)E(Y)}{\sqrt{E(X^2) - E^2(X)}\,\sqrt{E(Y^2) - E^2(Y)}}$$
The correlation is 1 in the case of an increasing linear relationship, −1 in the case of a decreasing linear relationship, and some value in between in all other cases, indicating the degree of linear dependence between the variables. The closer the coefficient is to either −1 or 1, the stronger the correlation between the variables. If the variables are independent, then the correlation is zero, but the converse is not true, because the correlation coefficient detects only linear dependencies between two variables. Here is an example: Suppose the random variable X is uniformly distributed on the interval from −1 to 1, and Y = X². Then Y is completely determined by X, so that X and Y are dependent, but their correlation is zero; they are uncorrelated. (A short simulation of this example follows the table below.) However, in the special case when X and Y are jointly normal, independence is equivalent to uncorrelatedness. A correlation between two variables is diluted in the presence of measurement error around the estimates of one or both variables, in which case disattenuation provides a more accurate coefficient.

Interpretation of the Size of a Correlation. Several authors have offered guidelines for the interpretation of a correlation coefficient. Cohen, for example, has suggested the following interpretations for correlations in psychological research:2

Correlation   Negative          Positive
Small         −0.29 to −0.10    0.10 to 0.29
Medium        −0.49 to −0.30    0.30 to 0.49
Large         −1.00 to −0.50    0.50 to 1.00
As Cohen himself has observed, however, all such criteria are in some ways arbitrary and should not be observed too strictly. This is because the interpretation of a correlation coefficient depends on the context and purposes. A correlation of 0.9 may be very low if one is verifying a physical law using high-quality instruments but may be regarded as very high in the social sciences, where there may be a greater contribution from complicating factors.3

A scatter diagram is helpful in detecting evident nonlinearity, bimodal patterns, mavericks, and other nonrandom patterns. The pattern of experimental data should be that of an "error ellipse": one extreme, representing no correlation, would be circular; the other extreme, representing perfect correlation, would be a perfectly straight line. Figure 15.1 illustrates various correlations and their respective coefficients. Whether the data are normally distributed is not entirely answered by a histogram alone; some further evidence on the question can be had by plotting accumulated frequencies Σfi on normal-probability paper and checking for mavericks or outliers.

Some Interpretations of r, under the Assumptions. It can be proved that the maximum attainable absolute value of the correlation coefficient r is +1; this corresponds to a straight line with positive slope. (Its minimum attainable value
Figure 15.1 Example of linear correlation as demonstrated by a series of scatter diagrams.
of −1 corresponds to a straight line with negative slope.) Its minimum attainable absolute value is zero; this corresponds to a circular pattern of random points. Very good predictability of Y from X is indicated by a value of r near +1 or −1. Very poor predictability of Y from X is indicated by values of the correlation coefficient r close to zero. Thus, r = 0 and r = ±1 represent the extremes of no predictability and perfect predictability. Values of r greater than 0.8 or 0.9 (r² values of 0.64 and 0.81, respectively) computed from production data are uncommon.

Misleading Values of r. The advantage of having a scatter diagram is important in the technical sense of finding causes of trouble. There are different patterns of data that will immediately indicate the inadvisability of computing r until the data provide reasonable agreement with the required assumptions. Some patterns producing seriously misleading values of r when a scatter diagram is not considered are as follows:
• Figure 15.2(a); a curvilinear relationship will give a low value of r.
• Figure 15.2(b); two populations, each with a relatively high value of r. When a single r is computed, a low value of r results. It is important to recognize the existence of two sources in such a set of data.
• Figure 15.2(c); a few mavericks give a spuriously high value of r.
Figure 15.2 Some frequently occurring patterns of data that lead to seriously misleading values of r and are not recognized as a consequence. Source: E. R. Ott, E. G. Schilling, and D. V. Neubauer, Process Quality Control: Troubleshooting and Interpretation of Data, 4th ed. (Milwaukee, WI: ASQ Quality Press, 2005), 401.
• Figure 15.2(d); a few mavericks give a spuriously low value of r and a potentially opposite sign of r. • Figure 15.2(e); two populations distinctly separated can give a spuriously high value of r and a potentially opposite sign of r. All the patterns shown in Figure 15.2 are typical of sets of production and experimental data. Hence, the value of plotting the data using a scatter diagram cannot be overemphasized.
HYPOTHESIS TESTING Calculate confidence intervals using t tests and the Z statistic and determine whether the result is significant. (Analyze) Body of Knowledge III.F.2
Population Parameter versus Sample Statistic

From sample data, we can calculate values called statistics, such as the average $\bar{X}$, the standard deviation s, or the range R. These are called descriptive statistics.
Note that both "sample" and "statistics" start with "s." These statistics are denoted with English letters. We use statistics to infer population parameters, hence the branch of statistics called inferential statistics. Note that both "population" and "parameter" start with the letter "p." These are related to "sample" and "statistics," respectively. Parameters are signified by lowercase Greek letters. The population mean μ, pronounced "mew," is related to the sample mean $\bar{X}$. The population standard deviation σ (sigma) is related to the sample standard deviation s. One can never truly know a parameter unless it is possible to measure the entire population under consideration. Hence, the purpose of inferential statistics is to make inferences about a population parameter based on a sample statistic. Inferential statistics fall into two categories:
1. Estimating (predicting) the value of a population parameter from a sample statistic
2. Testing a hypothesis about the value of a population parameter, answering questions such as, "Is the population parameter equal to this specific value?"
Point Estimate
An example of a point estimate would be the sample mean X̄ for μ, or the sample standard deviation s for σ. The sample mean can also be used to form an interval estimate, known as a confidence interval, described next.
Confidence Interval

In statistics, a confidence interval (CI) for a population parameter is an interval between two numbers, with an associated probability p, that is generated from a random sample of an underlying population, such that if the sampling were repeated numerous times and the confidence interval recalculated from each sample according to the same method, a proportion p of the confidence intervals would contain the population parameter in question. Confidence intervals are the most prevalent form of interval estimation.

If U and V are statistics (that is, observable random variables, like sample averages) whose probability distribution depends on some unobservable parameter μ, and P(U ≤ μ ≤ V) = x (where x is a number between 0 and 1), then the random interval (U, V) is a "(100x)% confidence interval for μ." The number x (or 100x%) is called the confidence level or confidence coefficient. In modern applied practice, almost all confidence intervals are stated at the 95% level.4

Example
A machine fills cups with margarine and is supposed to be adjusted so that the mean content of the cups is close to 250 grams of margarine. Of course, it is not possible to fill every cup with exactly 250 grams of margarine. Hence, the weight of the filling can be considered to be a random variable X. The distribution of X is assumed here to be a normal distribution with unknown expectation or average μ and (for the sake of simplicity) known
standard deviation σ = 2.5 grams. To check if the machine is adequately adjusted, a sample of n = 25 cups of margarine is chosen at random and the cups weighed. The weights of margarine are X1, X2, . . ., X25, a random sample from X. To get an impression of the population average μ, it is sufficient to give an estimate. The appropriate estimator is the sample mean:

$$\hat{\mu} = \bar{X} = \frac{1}{n}\sum_{i=1}^{n} X_i$$

The sample shows actual weights X1, X2, . . ., X25, with mean

$$\bar{X} = \frac{1}{25}\sum_{i=1}^{25} X_i = 250.2 \text{ grams}$$

If we take another sample of 25 cups, we could easily expect to find values like 250.4 or 251.1 grams. A sample mean value of 280 grams, however, would be extremely rare if the mean content of the cups is in fact close to 250 grams. There is a whole interval around the observed value 250.2 of the sample mean within which, if the whole population mean actually takes a value in this range, the observed data would not be considered particularly unusual. Such an interval is called a confidence interval for the parameter μ. How do we calculate such an interval? The endpoints of the interval have to be calculated from the sample, so they are statistics, functions of the sample X1, X2, . . ., X25, and hence random variables themselves.

In our case, we can determine the endpoints by considering that the sample mean X̄ from a normally distributed sample is also normally distributed, with the same expectation μ, but with standard deviation σ/√n = 2.5/√25 = 0.5 grams. The value σ/√n is called the standard error for X̄, denoted by sX̄. By standardizing, we get a random variable

$$Z = \frac{\bar{X} - \mu}{\sigma/\sqrt{n}} = \frac{\bar{X} - \mu}{0.5}$$

dependent on μ, but with a standard normal distribution independent of the parameter μ to be estimated. Hence, it is possible to find numbers −z and z, independent of μ, between which Z lies with probability 1 − α, a measure of how confident we want to be. We take 1 − α = 0.95. So, we have

P(−z ≤ Z ≤ z) = 1 − α = 0.95

The number z follows from

$$\Phi(z) = P(Z \le z) = 1 - \frac{\alpha}{2} = 0.975$$
Hence, the value of z, known as the z-score, can be found by using a z-table (refer to Areas under Standard Normal Curve, Appendix B): z = Φ⁻¹(0.975) = 1.96
Thus,

$$0.95 = 1 - \alpha = P(-z \le Z \le z) = P\left(-1.96 \le \frac{\bar{X} - \mu}{\sigma/\sqrt{n}} \le 1.96\right)$$
$$= P\left(\bar{X} - 1.96\frac{\sigma}{\sqrt{n}} \le \mu \le \bar{X} + 1.96\frac{\sigma}{\sqrt{n}}\right)$$
$$= P(\bar{X} - 1.96 \times 0.5 \le \mu \le \bar{X} + 1.96 \times 0.5)$$
$$= P(\bar{X} - 0.98 \le \mu \le \bar{X} + 0.98)$$

This might be interpreted as follows: if the sampling were repeated 100 times, about 95% of the intervals (X̄ − 0.98, X̄ + 0.98) generated would contain the population mean. Every time the measurements are repeated, there will be another value for the mean X̄ of the sample. In 95% of the cases, μ will be between the endpoints calculated from this mean, but in 5% of the cases it will not be. The actual confidence interval is calculated by entering the measured weights into the formula. Our 0.95 confidence interval becomes

(X̄ − 0.98, X̄ + 0.98) = (250.2 − 0.98, 250.2 + 0.98) = (249.22, 251.18)

This interval has fixed endpoints, where μ might be in between (or not). There is no probability of such an event. We cannot say "with probability 1 − α the parameter μ lies in the confidence interval." We only know that by repetition, in 100(1 − α)% of the cases μ will be in the calculated interval. In 100α% of the cases, however, it does not. And unfortunately, we do not know in which of the cases this happens. That is why we say, "with confidence level 100(1 − α)%, μ lies in the confidence interval." Figure 15.3 shows 50 realizations of a confidence interval for μ.
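To make this arithmetic concrete, here is a minimal sketch in Python (assuming scipy is available; the sample mean, σ, and n are taken from the margarine example above) that reproduces the interval:

```python
from scipy.stats import norm

x_bar = 250.2   # observed sample mean (grams)
sigma = 2.5     # known population standard deviation (grams)
n = 25          # sample size

# z for a two-sided 95% confidence level: Phi^-1(0.975) = 1.96
z = norm.ppf(0.975)
half_width = z * sigma / n ** 0.5   # 1.96 * 0.5 = 0.98

print(f"95% CI for mu: ({x_bar - half_width:.2f}, {x_bar + half_width:.2f})")
# -> 95% CI for mu: (249.22, 251.18)
```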
Figure 15.3 Fifty realizations of a 95% confidence interval for μ.
As seen in the figure, there was a fair chance that we chose an interval containing μ; however, we may have been unlucky and picked the wrong one. We will never know. We are stuck with our interval.

Theoretical Example

Here is a familiar example. Suppose X1, . . ., Xn are an independent sample from a normally distributed population with mean μ and variance σ². Let

$$\bar{X} = \frac{X_1 + X_2 + \cdots + X_n}{n}, \qquad S^2 = \frac{1}{n-1}\sum_{i=1}^{n}(X_i - \bar{X})^2$$

then

$$T = \frac{\bar{X} - \mu}{S/\sqrt{n}}$$
has a Student's t-distribution with n − 1 degrees of freedom. Note that the distribution of T does not depend on the values of the unobservable parameters μ and σ². If c is the 95th percentile of this distribution, then P(−c < T < c) = 0.9.
(Note: “95th” and “0.9” are correct in the preceding expressions. There is a 5% chance that T will be less than −c and a 5% chance that it will be larger than +c. Thus, the probability that T will be between −c and +c is 90%.)
Consequently,

$$P(\bar{X} - cS/\sqrt{n} < \mu < \bar{X} + cS/\sqrt{n}) = 0.9$$

and we have a theoretical 90% confidence interval for μ. After actually observing the sample, we find the value x̄ to replace the theoretical X̄ and s to replace the theoretical S, from which we compute the confidence interval

$$\left[\bar{x} - cs/\sqrt{n},\; \bar{x} + cs/\sqrt{n}\right]$$

an interval with fixed numbers as endpoints, of which we can no longer say with a certain probability that it contains the parameter μ. Either μ is in this interval or it is not.
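As a hedged illustration of the same idea, the sketch below (Python with scipy; the eight weights are invented for demonstration, not taken from the text) computes the t-based interval x̄ ± cs/√n at the two-sided 90% level used above:

```python
import numpy as np
from scipy.stats import t

sample = np.array([249.8, 250.6, 251.1, 249.5, 250.9, 250.2, 249.9, 250.4])
n = len(sample)
x_bar = sample.mean()
s = sample.std(ddof=1)       # sample standard deviation (n - 1 divisor)

# c is the 95th percentile of the t-distribution with n - 1 df,
# giving a two-sided 90% interval, as in the text
c = t.ppf(0.95, df=n - 1)

lower = x_bar - c * s / np.sqrt(n)
upper = x_bar + c * s / np.sqrt(n)
print(f"90% CI for mu: ({lower:.2f}, {upper:.2f})")
```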
Confidence Intervals in Measurement

More concretely, the results of measurements are often accompanied by confidence intervals. For instance, suppose a scale is known to yield the actual mass of an
object plus a normally distributed random error with mean zero and known standard deviation σ. If we weigh 100 objects of known mass on this scale and report the values ±σ, then we can expect to find that around 68 of the reported ranges include the actual mass. This is an application of the empirical rule: If a set of measurements can be illustrated by a roughly mound-shaped histogram, then

• the interval X̄ ± s contains approximately 68% of the measurements,

• the interval X̄ ± 2s contains approximately 95% of the measurements, and

• the interval X̄ ± 3s contains approximately 99.73% of the measurements.5
If we wish to report values with a smaller standard error, then we repeat the measurement n times and average the results; the 68.2% confidence interval is then ±σ/√n. For example, repeating the measurement 100 times reduces the confidence interval to 1/10 of the original width.

Note that when we report a 68.2% confidence interval (usually termed standard error) as the measured value ±σ, this does not mean that the true mass has a 68.2% chance of being in the reported range. In fact, the true mass is either in the range or not. How can a value outside the range be said to have any chance of being in the range? Rather, our statement means that 68.2% of the ranges we report are likely to include the true mass. This is not just a quibble. Under the incorrect interpretation, each of the 100 measurements described would specify a different range, and the true mass supposedly has a 68% chance of being in each and every range. Also, it supposedly has a 32% chance of being outside each and every range. If two of the ranges happen to be disjoint, the statements are obviously inconsistent. Say one range is 1 to 2, and the other is 2 to 3. Supposedly, the true mass has a 68% chance of being between 1 and 2, but only a 32% chance of being less than 2 or more than 3. The incorrect interpretation reads more into the statement than is meant. On the other hand, under the correct interpretation, each and every statement we make is really true, because the statements are not about any specific range. We could report that one mass is 10.2 ± 0.1 grams, while really it is 10.6 grams, and not be lying. But if we report fewer than 1000 values, and more than two of them are that far off, we will have some explaining to do.
How Large a Sample Is Needed to Estimate a Process Average?

There are many things to consider when answering this question. In fact, the question itself requires modification before answering. Is the test destructive, nondestructive, or semi-destructive? How expensive is it to obtain and test a sample of n units? How close an answer is needed? How much variation among measurements is expected? What level of confidence is adequate? All these questions must be considered. We begin by turning to the discussion of variation expected in averages of random samples of n items around the true but unknown process average μ. The expected variation of sample averages about μ is

$$\pm 2\frac{\sigma}{\sqrt{n}} \quad \text{(confidence about 95%)}$$

and

$$\pm 3\frac{\sigma}{\sqrt{n}} \quad \text{(confidence about 99.7%)}$$

Now let the allowable deviation (error) in estimating μ be ±Δ (read "delta"); also let an estimate or guess of σ be σ̂; then

$$\Delta \cong \pm 2\frac{\hat{\sigma}}{\sqrt{n}} \quad \text{and} \quad n \cong \left(\frac{2\hat{\sigma}}{\Delta}\right)^2 \quad \text{(about 95% confidence)}$$

Also,

$$\Delta \cong \pm 3\frac{\hat{\sigma}}{\sqrt{n}} \quad \text{and} \quad n \cong \left(\frac{3\hat{\sigma}}{\Delta}\right)^2 \quad \text{(about 99.7% confidence)}$$
Confidence levels other than the two shown can be used by referring to a table for areas under the normal curve (z-table) (Appendix B). When our estimate or guess of a required sample size n is even as small as n = 4, then the distribution of averages of random samples drawn from almost any shaped parent population (with a finite variance) will essentially be normal.
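A small sketch of this sample-size calculation (Python; the values of σ̂ and Δ are illustrative assumptions, not from the text):

```python
import math

sigma_hat = 4.0   # estimated (or guessed) process standard deviation
delta = 1.5       # allowable error in estimating the process average

n_95 = math.ceil((2 * sigma_hat / delta) ** 2)    # about 95% confidence
n_997 = math.ceil((3 * sigma_hat / delta) ** 2)   # about 99.7% confidence
print(n_95, n_997)   # -> 29 64
```

Rounding up is the conservative choice, since fractional samples cannot be taken.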
Student's t-Distribution

In probability and statistics, the t-distribution, or Student's t-distribution, is a probability distribution that arises in the problem of estimating the mean of a normally distributed population when the sample size is small. It is the basis of the popular Student's t-tests for the statistical significance of the difference between two sample means and for confidence intervals for the difference between two population means. Student's t-distribution is a special case of the generalized hyperbolic distribution.

The derivation of the t-distribution was first published in 1908 by William Sealy Gosset while he worked at a Guinness brewery in Dublin. He was not allowed to publish under his own name, so the paper was written under the pseudonym Student. The t-test and the associated theory became well known through the work of R. A. Fisher, who called the distribution "Student's distribution."

Student's distribution arises when (as in nearly all practical statistical work) the population standard deviation is unknown and has to be estimated from the data. Textbook problems treating the standard deviation as if it were known are of two kinds: (1) those in which the sample size is so large that one may treat a data-based estimate of the variance as if it were certain, and (2) those that illustrate mathematical reasoning, in which the problem of estimating the standard deviation is temporarily ignored because that is not the point that the author or instructor is then explaining. The t-distribution is related to the F-distribution as follows: the square of a value of t with n degrees of freedom is distributed as F with 1 and n degrees of freedom. The overall shape of the probability density function of the t-distribution resembles the bell shape of a normally distributed variable with mean zero and variance 1, except that it is a bit lower and wider. As the number of degrees of
Figure 15.4 Density of the t-distribution for 1, 2, 3, 5, 10, and 30 degrees of freedom compared to the normal distribution.
freedom grows, the t-distribution approaches the normal distribution with mean zero and variance 1. Figure 15.4 shows the density of the t-distribution for increasing values of n (degrees of freedom). The normal distribution is shown as a dotted line for comparison. Note that the t-distribution (darker line) becomes closer to the normal distribution as n increases. For n = 30 the t-distribution is almost the same as the normal distribution.

Table of Selected Values. Table 15.1 lists selected values of the t-distribution with n degrees of freedom for a range of one-sided confidence levels. These are "one-sided"—that is, where we see "90%," "4 degrees of freedom," and "1.533," it means P(T < 1.533) = 0.9; it does not mean P(−1.533 < T < 1.533) = 0.9. Consequently, by the symmetry of this distribution, we have P(T < −1.533) = P(T > 1.533) = 1 − 0.9 = 0.1, and consequently P(−1.533 < T < 1.533) = 1 − 2(0.1) = 0.8. Note that the last row of the table also gives critical points: a t-distribution with infinitely many degrees of freedom is normally distributed.
Table 15.1  t-table (one-sided confidence levels).

 df    75%    80%    85%    90%    95%   97.5%    99%   99.5%  99.75%  99.9%  99.95%
  1   1.000  1.376  1.963  3.078  6.314  12.71  31.82   63.66   127.3  318.3   636.6
  2   0.816  1.061  1.386  1.886  2.920  4.303  6.965   9.925   14.09  22.33   31.60
  3   0.765  0.978  1.250  1.638  2.353  3.182  4.541   5.841   7.453  10.21   12.92
  4   0.741  0.941  1.190  1.533  2.132  2.776  3.747   4.604   5.598  7.173   8.610
  5   0.727  0.920  1.156  1.476  2.015  2.571  3.365   4.032   4.773  5.893   6.869
  6   0.718  0.906  1.134  1.440  1.943  2.447  3.143   3.707   4.317  5.208   5.959
  7   0.711  0.896  1.119  1.415  1.895  2.365  2.998   3.499   4.029  4.785   5.408
  8   0.706  0.889  1.108  1.397  1.860  2.306  2.896   3.355   3.833  4.501   5.041
  9   0.703  0.883  1.100  1.383  1.833  2.262  2.821   3.250   3.690  4.297   4.781
 10   0.700  0.879  1.093  1.372  1.812  2.228  2.764   3.169   3.581  4.144   4.587
 11   0.697  0.876  1.088  1.363  1.796  2.201  2.718   3.106   3.497  4.025   4.437
 12   0.695  0.873  1.083  1.356  1.782  2.179  2.681   3.055   3.428  3.930   4.318
 13   0.694  0.870  1.079  1.350  1.771  2.160  2.650   3.012   3.372  3.852   4.221
 14   0.692  0.868  1.076  1.345  1.761  2.145  2.624   2.977   3.326  3.787   4.140
 15   0.691  0.866  1.074  1.341  1.753  2.131  2.602   2.947   3.286  3.733   4.073
 16   0.690  0.865  1.071  1.337  1.746  2.120  2.583   2.921   3.252  3.686   4.015
 17   0.689  0.863  1.069  1.333  1.740  2.110  2.567   2.898   3.222  3.646   3.965
 18   0.688  0.862  1.067  1.330  1.734  2.101  2.552   2.878   3.197  3.610   3.922
 19   0.688  0.861  1.066  1.328  1.729  2.093  2.539   2.861   3.174  3.579   3.883
 20   0.687  0.860  1.064  1.325  1.725  2.086  2.528   2.845   3.153  3.552   3.850
 21   0.686  0.859  1.063  1.323  1.721  2.080  2.518   2.831   3.135  3.527   3.819
 22   0.686  0.858  1.061  1.321  1.717  2.074  2.508   2.819   3.119  3.505   3.792
 23   0.685  0.858  1.060  1.319  1.714  2.069  2.500   2.807   3.104  3.485   3.767
 24   0.685  0.857  1.059  1.318  1.711  2.064  2.492   2.797   3.091  3.467   3.745
 25   0.684  0.856  1.058  1.316  1.708  2.060  2.485   2.787   3.078  3.450   3.725
 26   0.684  0.856  1.058  1.315  1.706  2.056  2.479   2.779   3.067  3.435   3.707
 27   0.684  0.855  1.057  1.314  1.703  2.052  2.473   2.771   3.057  3.421   3.690
 28   0.683  0.855  1.056  1.313  1.701  2.048  2.467   2.763   3.047  3.408   3.674
 29   0.683  0.854  1.055  1.311  1.699  2.045  2.462   2.756   3.038  3.396   3.659
 30   0.683  0.854  1.055  1.310  1.697  2.042  2.457   2.750   3.030  3.385   3.646
 40   0.681  0.851  1.050  1.303  1.684  2.021  2.423   2.704   2.971  3.307   3.551
 50   0.679  0.849  1.047  1.299  1.676  2.009  2.403   2.678   2.937  3.261   3.496
 60   0.679  0.848  1.045  1.296  1.671  2.000  2.390   2.660   2.915  3.232   3.460
 80   0.678  0.846  1.043  1.292  1.664  1.990  2.374   2.639   2.887  3.195   3.416
100   0.677  0.845  1.042  1.290  1.660  1.984  2.364   2.626   2.871  3.174   3.390
120   0.677  0.845  1.041  1.289  1.658  1.980  2.358   2.617   2.860  3.160   3.373
 ∞    0.674  0.842  1.036  1.282  1.645  1.960  2.326   2.576   2.807  3.090   3.291
For example, given a sample with a sample variance of 2 and a sample mean of 10, taken from a sample set of 11 (10 degrees of freedom), using the formula⁶

$$\bar{X}_n \pm A\frac{S_n}{\sqrt{n}}$$

we can determine that at 90% confidence, we have a true mean lying below

$$10 + 1.372\sqrt{\frac{2}{11}} = 10.585$$

And, still at 90% confidence, we have a true mean lying over

$$10 - 1.372\sqrt{\frac{2}{11}} = 9.415$$

so that at 80% confidence, we have a true mean lying between

$$10 \pm 1.372\sqrt{\frac{2}{11}} = [9.415,\ 10.585]$$

Testing a Hypothesis

One may be faced with the problem of making a definite decision with respect to an uncertain hypothesis that is known only through its observable consequences. A statistical hypothesis test, or more briefly a hypothesis test, is an algorithm to state the alternative (for or against the hypothesis) that minimizes certain risks.

There are several preparations we make before we observe the data:

1. The hypothesis must be stated in mathematical/statistical terms that make it possible to calculate the probability of possible samples, assuming the hypothesis is correct. For example, the mean response to the treatment being tested is equal to the mean response to the placebo in the control group. Both responses have the normal distribution with this unknown mean and the same known standard deviation.

2. A test statistic must be chosen that will summarize the information in the sample that is relevant to the hypothesis. In the example above,
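As a quick check, a minimal sketch (Python with scipy) reproduces both the tabulated value 1.372 and the interval just computed:

```python
import math
from scipy.stats import t

var, mean, n = 2.0, 10.0, 11   # sample variance, sample mean, sample size
A = t.ppf(0.90, df=n - 1)      # one-sided 90% point with 10 df, ~1.372

half_width = A * math.sqrt(var / n)
print(f"A = {A:.3f}")   # -> A = 1.372
print(f"80% CI: [{mean - half_width:.3f}, {mean + half_width:.3f}]")
# -> 80% CI: [9.415, 10.585]
```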
it might be the numerical difference between the two sample means, x̄1 − x̄2.

3. The distribution of the test statistic is used to calculate the probability of sets of possible values (usually an interval or union of intervals). In this example, the difference between sample means would have a normal distribution with a standard deviation equal to the common standard deviation times the factor

$$\sqrt{\frac{1}{n_1} + \frac{1}{n_2}}$$

where n1 and n2 are the sample sizes.

4. Among all the sets of possible values, we must choose one that we think represents the most extreme evidence against the hypothesis. This is called the critical region of the test statistic. The probability of the test statistic falling in the critical region when the hypothesis is correct is called the alpha value (or size) of the test.

5. The probability that a sample falls in the critical region when the parameter is θ, where θ is a value covered by the alternative hypothesis, is called the power of the test at θ. The power function of a critical region is the function that maps θ to the power at θ.

After the data are available, the test statistic is calculated, and we determine whether it is inside the critical region. If the test statistic is inside the critical region, then our conclusion is one of the following:

1. The hypothesis is incorrect—therefore, reject the null hypothesis. (Thus, the critical region is sometimes called the rejection region, while its complement is the acceptance region.)

2. An event of probability less than or equal to alpha has occurred.

The researcher has to choose between these logical alternatives. In the example, we would say, "The observed response to treatment is statistically significant." If the test statistic is outside the critical region, the only conclusion is that "there is not enough evidence to reject the hypothesis." This is not the same as evidence in favor of the hypothesis, since lack of evidence against a hypothesis is not evidence for it.7
NULL HYPOTHESIS

In statistics, a null hypothesis is a hypothesis set up to be nullified or refuted in order to support an alternative hypothesis. When used, the null hypothesis is presumed true until statistical evidence in the form of a hypothesis test indicates otherwise. Although it was originally proposed to be any hypothesis, in practice it has come to be identified with the "nil hypothesis," which states that "there is no phenomenon" and that the results in question could have arisen through chance.
For example, if we want to compare the test scores of two random samples of men and women, a null hypothesis would be that the mean score of the male population was the same as the mean score of the female population, and therefore there is no significant statistical difference between them:

H0: μ1 = μ2
HA: μ1 ≠ μ2

where
H0 = The null hypothesis
HA = The alternative hypothesis
μ1 = The mean of population 1
μ2 = The mean of population 2

Alternatively, the null hypothesis can postulate that the two samples are drawn from the same population:

H0: μ1 − μ2 = 0
HA: μ1 − μ2 ≠ 0

Formulation of the null hypothesis is a vital step in statistical significance testing. Having formulated such a hypothesis, one can establish the probability of observing the obtained data or data different from the prediction of the null hypothesis if the null hypothesis is true. That probability is what is commonly called the "significance level" of the results.

When a null hypothesis is formed, it is always in contrast to an implicit alternative hypothesis, which is accepted if the observed data values are sufficiently improbable under the null hypothesis. The precise formulation of the null hypothesis has implications for the alternative hypothesis. For example, if the null hypothesis is that sample A is drawn from a population with the same mean as sample B, the alternative hypothesis is that they come from populations with different means, which can be tested with a two-tailed test of significance. But if the null hypothesis is that sample A is drawn from a population whose mean is lower than the mean of the population from which sample B is drawn, the alternative hypothesis is that sample A comes from a population with a higher mean than the population from which sample B is drawn, which can be tested with a one-tailed test.
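As an illustration only (Python with scipy; the score data are invented), a two-sample t-test of H0: μ1 = μ2 against the two-tailed alternative might look like this:

```python
import numpy as np
from scipy.stats import ttest_ind

men = np.array([72, 68, 75, 70, 66, 74, 71, 69])
women = np.array([70, 73, 77, 72, 75, 71, 74, 76])

# Two-sided test of H0: mu1 = mu2 versus HA: mu1 != mu2
t_stat, p_value = ttest_ind(men, women)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")

alpha = 0.05
if p_value <= alpha:
    print("Reject H0: the population means differ significantly.")
else:
    print("Fail to reject H0: not enough evidence of a difference.")
```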
Limitations of Hypothesis Testing

A null hypothesis is only useful if it is possible to calculate from it the probability of observing a data set with particular parameters. In general, it is much harder to be precise about how probable the data would be if the alternative hypothesis is true. If experimental observations contradict the prediction of the null hypothesis, it means that either the null hypothesis is false or we have observed an event with
very low probability. This gives us high confidence in the falsehood of the null hypothesis, which can be improved by increasing the number of trials. However, accepting the alternative hypothesis only commits us to a difference in observed parameters; it does not prove that the theory or principles that predicted such a difference are true, since it is always possible that the difference could be due to additional factors not recognized by the theory. For example, rejection of a null hypothesis (that, say, rates of symptom relief in a sample of patients who received a placebo and a sample who received a medicinal drug will be equal) allows us to make a non-null statement (that the rates differed); it does not prove that the drug relieved the symptoms, though it gives us more confidence in that hypothesis. The formulation, testing, and rejection of null hypotheses is methodologically consistent with the falsificationist model of scientific discovery formulated by Karl Popper and widely believed to apply to most kinds of empirical research. However, concerns regarding the power of statistical tests to detect differences in large samples have led to suggestions for redefining the null hypothesis, for example as a hypothesis that an effect falls within a range considered negligible. This is an attempt to address the confusion among non-statisticians between significant and substantial, since large enough samples are likely to be able to indicate differences, however minor.8
Statistical Significance

In statistics, a result is significant if it is unlikely to have occurred by chance. Technically, the significance level of a test is the maximum probability, assuming the null hypothesis, that the observed statistic would be observed and still be considered consistent with chance variation (consistent with the null hypothesis). Hence, if the null hypothesis is true, the significance level is the probability that it will be rejected in error (a decision known as a type I error). The significance of a result is also called its p-value; the smaller the p-value, the more significant the result is said to be. Significance is represented by the Greek symbol α (alpha). Popular levels of significance are 10%, 5%, and 1%. For example, if you perform a test of significance, assuming the significance level is 5% and the p-value is lower than 5%, then the null hypothesis would be rejected. Informally, the test statistic is said to be "statistically significant."

If the significance level is smaller, an observed value is less likely to be more extreme than the critical value. So, a result that is "significant at the 1% level" is more significant than a result that is "significant at the 5% level." However, a test at the 1% level is more likely to fail to reject a false null hypothesis (a type II error) than a test at the 5% level, and so will have less statistical power. In devising a hypothesis test, the tester aims to maximize power for a given significance, but ultimately has to recognize that the best that can be achieved is likely to be a balance between significance and power, in other words, between the risks of type I and type II errors. It is important to note that a type I error is not necessarily any better or worse than a type II error; the severity of the type of error depends on each individual case. However, one type of error may be better or worse on average for the population under study.
Pitfalls. It is important to differentiate between the vernacular use of the term "significant"—"a major effect; important; fairly large in amount or quantity"—and the meaning in statistics, which carries no such connotation of meaningfulness. With a large enough number being sampled, a tiny and unimportant difference can still be found to be "statistically significant."9

In statistical hypothesis testing, the p-value is the probability of obtaining a result at least as "impressive" as that obtained by chance alone, assuming the null hypothesis to be true. The fact that p-values are based on this assumption is crucial to their correct interpretation. More technically, the p-value of an observed value t_observed of some random variable T used as a test statistic is the probability that, given that the null hypothesis is true, T will assume a value as or more unfavorable to the null hypothesis than the observed value t_observed. "More unfavorable to the null hypothesis" can in some cases mean greater than, in some cases less than, and in some cases farther away from a specified center.

For example, say an experiment is performed to determine if a coin flip is fair (50% chance of landing heads or tails) or unfairly biased toward heads (>50% chance of landing heads). If we choose a two-tailed test, then the null hypothesis is that the coin is fair and that any deviations from the 50% rate can be ascribed to chance alone. This can be written as:

H0: A repeated coin toss will show heads the same number of times as tails
HA: A repeated coin toss will show heads turning up more than tails, or vice versa

Suppose that the experimental results show the coin turning up heads 14 times out of 20 total flips. The p-value of this result would be the chance of a fair coin landing on heads at least 14 times out of 20 flips (as larger values in this case are also less favorable to the null hypothesis of a fair coin) or at most six times out of 20 flips. In this case the random variable T has a binomial distribution. The probability that 20 flips of a fair coin would result in 14 or more heads is 0.0577. Since this is a two-tailed test, the probability that 20 flips of the coin would result in 14 or more heads or six or fewer heads is 0.115, or 2 × 0.0577. Generally, the smaller the p-value, the more people there are who would be willing to say that the results came from a biased coin.

Generally, one rejects the null hypothesis if the p-value is smaller than or equal to the significance level, often represented by the Greek letter α (alpha). If the level is 0.05, then the results are only 5% likely to be as extraordinary as just seen, given that the null hypothesis is true. In the above example, the calculated p-value exceeds 0.05, and thus the null hypothesis—that the observed result of 14 heads out of 20 flips can be ascribed to chance alone—is not rejected. Such a finding is often stated as being "not statistically significant at the 5% level." However, had a single extra head been obtained, that is, 15 instead of 14 heads, the resulting two-tailed p-value would be 2 × 0.0207 = 0.041. This time the null hypothesis—that
the observed result of 15 heads out of 20 flips can be ascribed to chance alone—is rejected. Such a finding would be described as being “statistically significant at the 5% level.” Critics of p-values point out that the criterion used to decide “statistical significance” is based on the somewhat arbitrary choice of level (often set at 0.05). Here is the proper way to interpret the p-value (courtesy of faculty at University of Texas Health Science Center School of Public Health): “Assuming the null [hypothesis] is true, the probability of obtaining a test statistic as extreme or more extreme than the one observed (enter the value here) is (enter p-value).”
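The coin-flip p-values above are easy to verify; here is a minimal sketch (Python with scipy) computing the one- and two-tailed probabilities for 14 and 15 heads out of 20 flips of a fair coin:

```python
from scipy.stats import binom

n, p = 20, 0.5
for heads in (14, 15):
    # P(X >= heads) for a fair coin; sf(k) = P(X > k), so pass heads - 1
    upper_tail = binom.sf(heads - 1, n, p)
    two_tailed = 2 * upper_tail   # symmetric two-tailed p-value
    print(f"{heads} heads: one-tailed p = {upper_tail:.4f}, "
          f"two-tailed p = {two_tailed:.4f}")
# 14 heads: one-tailed p = 0.0577, two-tailed p = 0.1153
# 15 heads: one-tailed p = 0.0207, two-tailed p = 0.0414
```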
Frequent Misunderstandings about the Use of the p-Value

There are several common misunderstandings about p-values. All of the following statements are false:

• The p-value is the probability that the null hypothesis is true, justifying the "rule" of considering as significant p-values closer to zero.

• The p-value is the probability that a finding is "merely a fluke" (again, justifying the "rule" of considering small p-values as "significant").

• The p-value is the probability of falsely rejecting the null hypothesis. This error is a version of the so-called prosecutor's fallacy: guilty until proven innocent.

• The p-value is the probability that a replicating experiment would not yield the same conclusion.

• 1 − (p-value) is the probability of the alternative hypothesis being true (see the first statement in this list).

• The significance level of the test is determined by the p-value.10
DESIGN OF EXPERIMENTS (DOE) Define and explain basic DOE terms: response, factors, levels, treatment, interaction effects, randomization, error, and blocking. (Understand) Body of Knowledge III.F.3
Experimental design is research design in which the researcher has control over the selection of participants in the study, and these participants are randomly
assigned to treatment and control groups. The first statistician to consider a methodology for the design of experiments was Sir Ronald A. Fisher. He described how to test the hypothesis that a certain lady could distinguish, by flavor alone, whether the milk or the tea was first placed in the cup. While this sounds like a frivolous application, it allowed him to illustrate the most important means of experimental design:

• Randomization. The process of making something random, for example, where a random sample of observations is taken from several populations under study.

• Replication. Repeating the creation of a phenomenon so that the variability associated with the phenomenon can be estimated. N = 1 is called "one replication."

• Blocking. The arranging of experimental units in groups (blocks) that are similar to one another. It is a technique for dealing with nuisance factors.11

• Orthogonality. Perpendicular, at right angles, or statistically independent.

• Factorial experiments. Designing an experiment so that all factors are combined with all levels of the other factors. This allows for understanding the main effects of the factors and levels, and the interactions of factors.

• One factor at a time method. Running an experiment where only one factor and its levels are changed at a time, while the other factors of interest are held constant. This is not generally considered an efficient or desirable experiment, since interactions of the factors cannot be assessed.

• Response. The dependent variable, or quality characteristic, that is measured.

• Factors. The variables that you change to assess their impact on the response, or dependent, variable.

• Treatment. A generic term borrowed from agriculture, such as fertilizers, machines, methods, and so on. It represents the factor settings whose impact on the response variable you are assessing.12

• Levels. The settings or types of the factors. If the factor is the emergency department patient's disposition type, two levels could be "discharge" or "admitted."

• Error. Experimental error refers to variation produced by disturbing factors, both known and unknown. Good experimental design can reduce experimental error.13

• Nuisance factor. A nuisance factor is a factor that probably has some effect on the response, but it is of no interest to the experimenter; however, the variability it transmits to the response needs to be minimized.14
Table 15.2  Experimental plan.

                        Operator
Machine          O1 = Dianne      O2 = Tom
M1 = Old         (1) X11          (2) X12
M2 = New         (3) X21          (4) X22

Source: E. R. Ott, E. G. Schilling, and D. V. Neubauer, Process Quality Control: Troubleshooting and Interpretation of Data, 4th ed. (Milwaukee, WI: ASQ Quality Press, 2005), 287.
Analysis of the design of experiments was built on the foundation of the analysis of variance, which is a collection of models in which the observed variance is partitioned into components due to different factors that are estimated and/or tested.15

Success in troubleshooting and process improvement often rests on the appropriateness and efficiency of the experimental setup and its match to the environmental situation. Design suggests structure, and it is the structure of the statistically designed experiment that gives it its meaning. Consider the simple 2² factorial experiment laid out in Table 15.2, with measurements X11, X12, X21, X22. The subscripts i and j on Xij simply show the machine (i) and operator (j) associated with a given measurement, where i can assume the values 1 or 2, so that X1j represents the measurements using the old machine, X2j the new machine, Xi1 represents Dianne's measurements, and Xi2 represents Tom's measurements. Here there are two factors or characteristics to be tested: operator and machine. There are two levels of each, so that operator takes on the levels Dianne and Tom, and the machine used is either old or new. The designation 2ᵖ means two levels of each of p factors. If there were three factors in the experiment, say by the addition of material (from two vendors), we would have a 2³ experiment.

In a properly conducted experiment, the treatment combinations corresponding to the cells of the table must be run at random to avoid biasing the results. Tables of random numbers or slips of paper drawn from a hat can be used to set up the order of experimentation. Thus, if we numbered the cells as shown in the diagram and drew the numbers 3, 2, 1, 4 from a hat, we would run Dianne–new first, followed by Tom–old, Dianne–old, and Tom–new, in that order. This is done to ensure that any external effect that might creep into the experiment while it is being run would affect the treatments in random fashion. Its effect would then appear as experimental error rather than biasing the experiment.
Effects

Of course, we must measure the results of the experiment. That measurement is called the response. Suppose the response is "units produced in a given time," and that the results are as shown in Table 15.3. The effect of a factor is the average change in response (units produced) brought about by moving from one level of a factor to the other. To obtain the machine effect, we would simply subtract the average result for the old machine from that of the new. We obtain

$$\text{Machine effect} = \bar{X}_{2\bullet} - \bar{X}_{1\bullet} = \frac{(5 + 15)}{2} - \frac{(20 + 10)}{2} = -5$$

which says the old machine is better than the new. Notice that when we made this calculation, each machine was operated equally by both Dianne and Tom for each average. Now calculate the operator effect. We obtain

$$\text{Operator effect} = \bar{X}_{\bullet 2} - \bar{X}_{\bullet 1} = \frac{(15 + 10)}{2} - \frac{(5 + 20)}{2} = 0$$
The dots (•) in the subscripts simply indicate which factor was averaged out in computing X̄. It appears that operators have no effect on the operation. Notice that each average represents an equal time on each machine for each operator, and so is a fair comparison. However, suppose there is a unique operator–machine combination that produces a result beyond the effects we have already calculated. This is called
Table 15.3  Experimental results.

                        Operator
Machine          O1 = Dianne      O2 = Tom
M1 = Old         (1) 20           a 10
M2 = New         b 5              ab 15

Source: E. R. Ott, E. G. Schilling, and D. V. Neubauer, Process Quality Control: Troubleshooting and Interpretation of Data, 4th ed. (Milwaukee, WI: ASQ Quality Press, 2005), 288.
an interaction. Remember that we averaged machines out of the operator effect and operators out of the machine effect. To see if there is an interaction between operators and machines, we calculate the machine effect individually for each operator. If there is a peculiar relationship between operators and machines, it will show up as the average difference between these calculations. We obtain

Machine effect for Dianne = X21 − X11 = 5 − 20 = −15
Machine effect for Tom = X22 − X12 = 15 − 10 = 5

The average difference between these calculations is

$$\text{Interaction} = \frac{(5) - (-15)}{2} = \frac{20}{2} = 10$$
The same result would be obtained if we averaged the operator effect for each machine. It indicates that there is, on the average, a 10-unit reversal in effect due to the specific operator–machine combination involved. Specifically, in going from Dianne to Tom on the new machine we get, on the average, a 10-unit increase, whereas in going from Dianne to Tom on the old machine we get, on the average, a 10-unit decrease. For computational purposes in a 2² design, the interaction effect is measured as the difference of the averages down the diagonals of the table. Algebraically, this gives the same result as the previous calculation, since

$$\text{Interaction} = \frac{(X_{22} - X_{12}) - (X_{21} - X_{11})}{2} = \frac{(X_{22} + X_{11}) - (X_{21} + X_{12})}{2}$$
$$= \frac{\text{(Southeast diagonal)} - \text{(Southwest diagonal)}}{2} = \frac{15 + 20 - 5 - 10}{2} = 10$$
There is a straightforward method for calculating the effects in more-complicated experiments. It requires that the treatment combinations be properly identified. If we designate operator as factor A and machine as factor B, each with two levels ( − and +), the treatment combinations (cells) can be identified by simply showing the letter of a factor if it is at the + level and not showing the letter if the factor is at the − level. We show (1) if all factors are at the − level. This is illustrated in Table 15.4. The signs themselves indicate how to calculate an effect. Thus, to obtain the A (operator) effect, we subtract all those observations under the − level for A from those under the + level and divide by the number of observations that go into either the + or − total to obtain an average.
Table 15.4  The 2² configuration.

                 A
B         –              +
–         (1) = X11      a = X12
+         b = X21        ab = X22

Source: E. R. Ott, E. G. Schilling, and D. V. Neubauer, Process Quality Control: Troubleshooting and Interpretation of Data, 4th ed. (Milwaukee, WI: ASQ Quality Press, 2005), 290.
The signs that identify what to add and subtract in calculating an interaction can be found by multiplying the signs of its component factors together, as in Table 15.5. And we have

$$A \text{ (operators) effect} = \frac{(a + ab) - (b + (1))}{2} = \frac{-(1) + a - b + ab}{2} = 0$$

Table 15.5  Signs of interaction.

                 A
B         –               +
–         (–)(–) = +      (+)(–) = –
+         (–)(+) = –      (+)(+) = +

Source: E. R. Ott, E. G. Schilling, and D. V. Neubauer, Process Quality Control: Troubleshooting and Interpretation of Data, 4th ed. (Milwaukee, WI: ASQ Quality Press, 2005), 291.
$$B \text{ (machines) effect} = \frac{(b + ab) - (a + (1))}{2} = \frac{-(1) - a + b + ab}{2} = -5$$

$$AB \text{ (interaction)} = \frac{((1) + ab) - (a + b)}{2} = \frac{+(1) - a - b + ab}{2} = 10$$
Note that the sequence of plus (+) and minus (−) signs in the numerators on the right match those in Table 15.6 for the corresponding effects.

Table 15.6  Treatment combination.

Treatment combination    A    B    AB    Response
(1)                      –    –    +     X11 = 20
a                        +    –    –     X12 = 10
b                        –    +    –     X21 = 5
ab                       +    +    +     X22 = 15

Source: E. R. Ott, E. G. Schilling, and D. V. Neubauer, Process Quality Control: Troubleshooting and Interpretation of Data, 4th ed. (Milwaukee, WI: ASQ Quality Press, 2005), 291.
Tables of this form are frequently used in the collection and analysis of data from designed experiments.
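A minimal sketch (Python, plain arithmetic; the cell values and sign columns come from Table 15.6) that computes all three effects this way:

```python
# Responses for the 2x2 design, keyed by treatment combination
y = {"(1)": 20, "a": 10, "b": 5, "ab": 15}

# Sign columns from Table 15.6 as (A, B); the AB sign is their product
signs = {"(1)": (-1, -1), "a": (+1, -1), "b": (-1, +1), "ab": (+1, +1)}

def effect(column):
    """Average response at the + level minus average at the - level."""
    plus = [y[k] for k, s in signs.items() if column(s) > 0]
    minus = [y[k] for k, s in signs.items() if column(s) < 0]
    return sum(plus) / len(plus) - sum(minus) / len(minus)

A = effect(lambda s: s[0])           # operator effect -> 0.0
B = effect(lambda s: s[1])           # machine effect -> -5.0
AB = effect(lambda s: s[0] * s[1])   # interaction -> 10.0
print(A, B, AB)
```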
Sums of Squares

Process control and troubleshooting attempt to reduce variability. It is possible to calculate how much each factor contributes to the total variation in the data by determining the sums of squares (SS) for that factor. The sum of squares is simply the numerator in the calculation of the variance, the denominator being the degrees of freedom, df. Thus, s² = SS/df, and s² is called a mean square (MS) in analysis of variance. For an effect Eff associated with a factor, the calculation is

$$SS(\text{Eff}) = r\,2^{p-2}\,\text{Eff}^2$$

where r is the number of observations per cell and p is the number of factors. Here, r = 1 and p = 2, so

SS(A) = (0)² = 0
SS(B) = (−5)² = 25
SS(A × B) = (10)² = 100

To measure the variance σ̂² associated with an effect, we must divide the sums of squares by the appropriate degrees of freedom to obtain mean squares (MS). Each effect (Eff), or contrast, has one degree of freedom, so that

$$\hat{\sigma}^2_{\text{Eff}} = \text{Mean square (Eff)} = SS(\text{Eff})/1 = SS(\text{Eff})$$
Table 15.7  Analysis of variance.

Effect                 SS             df    MS
Operator (A)           0              1     0
Machine (B)            25             1     25
Interaction (A × B)    100            1     100
Error                  No estimate
Total                  125            3
The total variation in the data is measured by the sample variance from all the data taken together, regardless of where they came from. This is

$$\hat{\sigma}_T^2 = s_T^2 = \frac{SS(T)}{df(T)} = \frac{\sum_{i=1}^{n}(X_i - \bar{X})^2}{r2^p - 1}$$
$$= \frac{(15 - 12.5)^2 + (5 - 12.5)^2 + (10 - 12.5)^2 + (20 - 12.5)^2}{1(4) - 1} = \frac{125}{3} = 41.67$$
We can then make an analysis of variance table (Table 15.7) showing how the variation in the data is split up. We have no estimate of error, since our estimation of the sums of squares for the three effects uses up all the information (degrees of freedom) in the experiment. If two observations per cell were taken, we would have been able to estimate the error variance in the experiment as well. The F-test can be used to assess statistical significance of the mean squares when a measure of error is available. Since the sums of squares and degrees of freedom add to the total, the error sum of squares and degrees of freedom for error can be determined by difference. Alternatively, sometimes an error estimate is available from previous experimentation. Suppose, for example, an outside measure of error for this experiment was obtained and turned out to be σ̂² = 10 with 20 degrees of freedom. Then, for machines, F = 25/10 = 2.5; the F table for α = 0.05 and 1 and 20 degrees of freedom shows that F* (F-critical) = 4.35 would be exceeded 5% of the time. Therefore, we are unable to declare that the machines show a significant difference from chance variation. On the other hand, interaction produces F = 100/10 = 10, which clearly exceeds the critical value of 4.35, so we declare interaction significant at the α = 0.05 level of risk. Note that this is a one-tailed test.
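Continuing the sketch above, the sums of squares and the F-comparisons can be computed directly (Python with scipy; the outside error estimate σ̂² = 10 with 20 degrees of freedom is the one assumed in the text):

```python
from scipy.stats import f

r, p = 1, 2   # observations per cell, number of factors
effects = {"A": 0.0, "B": -5.0, "AB": 10.0}

# SS(Eff) = r * 2**(p - 2) * Eff**2; with r = 1, p = 2 this is just Eff**2
ss = {name: r * 2 ** (p - 2) * e ** 2 for name, e in effects.items()}
print(ss)   # {'A': 0.0, 'B': 25.0, 'AB': 100.0}

error_ms, error_df = 10.0, 20          # outside estimate of error variance
f_crit = f.ppf(0.95, 1, error_df)      # ~4.35
for name, s in ss.items():
    F = (s / 1) / error_ms             # each effect has 1 df
    print(f"{name}: F = {F:.2f}, significant = {F > f_crit}")
```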
Blocking

Occasionally, it is impossible to run all of the experiment under the same conditions. For example, the experiment must be run with two batches of raw material, or on four different days, or by two different operators. Under such circumstances it is possible to "block" out such changes in conditions by confounding, or irrevocably
combining them with selected effects. It is possible to run the experiment in such a way that an unwanted change in conditions, such as operators, will be confounded with a preselected higher-order interaction in which there is no interest or that is not believed to exist. That is why experiment structure and randomization are so important. It has been pointed out that, particularly in the screening stage of experimentation, the Pareto principle often applies to factors incorporated in a designed experiment. A few of the effects will tend to account for much of the variation observed, and some of the factors may not show significance. This phenomenon has been called factor sparsity or sparsity of effects,16 and provides some of the rationale for deliberately confounding the block effect with a preselected interaction that is deemed likely to not exist.
Main Effects and Interaction Plots

The main effects plot simply plots the averages for the low (−) and high (+) levels of the factor of interest. The slope of the line reflects whether the effect is positive or negative on the response when the level changes from low to high. The value of the main effect is simply seen as the difference in the value of the response for the endpoints of the line. Figure 15.5 demonstrates what the main effects plots would look like for the data in Table 15.3. The interaction plot in Figure 15.5 shows the four cell means in the case of a 2² design. One of the factors is shown on the horizontal axis. The other factor is represented by a line for each level, or two lines in the case of the 2² design. Intersecting (or otherwise nonparallel) lines indicate the presence of a possibly statistically significant interaction. The value of the interaction can be seen as the average of the difference between the lines for each level of the factor on the horizontal axis. Note that in Figure 15.5, the main effect for the factor on the horizontal axis can be seen in the interaction plot as the dotted bisecting line.
Conclusion

This is only a cursory discussion of design of experiments. It touches only on the most rudimentary aspects. It will serve, however, as an introduction to the concepts and content. Experimental design is a very powerful tool in the understanding of any process—in a manufacturing facility, pilot line facility, or laboratory.
TAGUCHI CONCEPTS AND METHODS Identify and describe Taguchi concepts: quality loss function, robustness, controllable and uncontrollable factors, and signal to noise ratio. (Understand) Body of Knowledge III.F.4
Figure 15.5 Main effects and interaction plots for the 2² design in Table 15.4. Source: E. R. Ott, E. G. Schilling, and D. V. Neubauer, Process Quality Control: Troubleshooting and Interpretation of Data, 4th ed. (Milwaukee, WI: ASQ Quality Press, 2005), 303.
Dr. Genichi Taguchi developed a means of dramatically improving processes in the 1980s. Since then, his methods have been widely used in achieving "robustness of design." "Robust" as used here means strong, healthy, enduring, not easily broken, and not fragile. When we refer to robust design, we are describing a product design that is sturdy and intrinsically avoids defects. The concept of "designing quality into the product or process" is another expression of robust design. Taguchi's approach for making product designs robust involves the application of five practices:

1. Classify the variables associated with the product into signal (input), response (result), control, and noise factors.

2. Mathematically specify the ideal form of the signal-to-response process that will produce a product that will make the system in which it is used work perfectly.
3. Compute the loss experienced when the output response deviates from the ideal (this practice is also known as the Taguchi loss function).

4. Predict the field quality through laboratory experiments, given a measured signal-to-noise ratio.

5. Employ design of experiments (DOE) to determine, in a small number of experiments, the control factors that create the best performance for the process.

To illustrate how this Taguchi method reduces variation and produces higher-quality products, let us turn to one of the examples Taguchi provided.17
An Example of Robust Design via the Taguchi Method

In 1953 the Ina Seito tile manufacturing company experienced nonuniform heat distribution in its brick-baking kiln, causing the tiles nearest the walls of the kiln to warp and break. Replacing the kiln would be expensive, so Taguchi proposed the less costly option of reformulating the clay recipe so the tiles were less sensitive to temperature variation. Taguchi was able to show via laboratory experiments that if 5% more lime was added to the clay mix, the tiles would be more robust, that is, less likely to break and warp. The Ina Seito plant did not need to purchase a new, expensive kiln; rather, it was able to solve its problem in a much less expensive fashion. This experiment has become the showcase example of how intelligent use of DOE can produce products that are robust to process noise.
Classify the Variables

Figure 15.6 shows the relationships between the four classes of variables. Signal factors are the process inputs, and the responses are the results of the process. Control factors are those factors that can be controlled. In the tile example they include limestone content in the clay mixture, fineness of the additives, amalgamate content, type of amalgamate, raw material quantity, waste return content, and type of feldspar. Noise factors are those factors that cannot be controlled. In Dr. Taguchi's application of robustness experiments, he treated temperature as a noise factor since it is difficult to control and its variation is a necessary evil.
Figure 15.6 The four classes of variables in the Taguchi method.
Figure 15.7 Comparison between the traditional and the Taguchi concept for assessing the impact of product quality.
He investigated other varying factors that were much less expensive to control than temperature in order to make more-robust products. Robust design is also called minimum sensitivity design.
The Taguchi Quality Loss Function

The Taguchi quality loss function is a conception of quality that is more demanding than most. Genichi Taguchi contended that as product characteristics deviate from the design aim, the losses increase parabolically. The diagram on the left in Figure 15.7 illustrates the traditional concept, in which the product is considered good if its characteristics fall within the lower to upper specification limits and is considered bad if it falls outside the specification limits. The diagram on the right in Figure 15.7 shows the Taguchi concept, where the loss that society suffers increases parabolically as the product's characteristics deviate from the design target. The formula expressing the Taguchi quality loss function is

$$L = K(Y - T)^2$$

where
L = Loss in dollars
K = Cost coefficient
T = Design target or aim
Y = Actual quality value

The point of this equation is that merely attempting to produce a product within specifications does not prevent loss. Loss occurs even if the product passes final inspection by falling within the specification limits but does not exactly meet the design target. Consider an automobile driveshaft. Suppose its target diameter is 3.3 ±0.1 cm. If the housing that contains the driveshaft is built extremely close to its tolerances (that is, to accommodate a driveshaft exactly 3.3 cm in diameter), and the driveshaft is slightly larger or smaller, even though it is within the specification limits, over time it will wear out faster than it would if it were exactly 3.3 cm. If it is 3.26 cm, it will pass final inspection and will power the automobile for a long time. But after hundreds of thousands of revolutions, it will wear out faster. The
automobile owner will have to have their car repaired earlier than they would have if the driveshaft were exactly 3.3 cm. The loss function for this driveshaft would be

$$L = K(3.3 - 3.26)^2 = K(0.04)^2 = 0.0016K$$
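A minimal sketch of the quadratic loss function (Python; the text leaves the cost coefficient K symbolic, so the K = 500 dollars used here is purely an assumed value for illustration):

```python
def taguchi_loss(y, target, k):
    """Quadratic quality loss L = K * (Y - T)**2."""
    return k * (y - target) ** 2

K = 500.0   # assumed cost coefficient (dollars per cm^2), for illustration
T = 3.3     # design target diameter (cm)

for y in (3.30, 3.26, 3.20, 3.40):
    print(f"diameter {y:.2f} cm -> loss ${taguchi_loss(y, T, K):.2f}")
# 3.26 cm gives 0.0016 * K = $0.80; equal deviations on either side of
# the target incur equal loss, and loss grows quadratically away from T
```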
Signal-to-Noise Ratio

The signal-to-noise (S/N) ratio is used to evaluate system performance:

$$\eta = \frac{S}{N} = +10\log_{10}\left[\frac{\frac{1}{r}\left(S_\beta - V_e\right)}{V_N}\right]$$

where
r is a measure of the magnitude of the input signal
S_β is the sum of squares of the ideal function (useful part)
V_e is the mean square of nonlinearity
V_N is an error term of nonlinearity and linearity

In assessing the results of experiments, the S/N ratio is calculated at each design point. The combinations of the design variables that maximize the S/N ratio are selected for consideration as product or process parameter settings. There are three cases of S/N ratios:

Case 1: S/N ratio for "smaller is better":

$$\eta = \frac{S}{N} = -10\log_{10}\left(\frac{\sum y_i^2}{n}\right)$$

where S/N ratio = −10 log₁₀(mean-squared response). This value would be used for minimizing the wear, shrinkage, deterioration, and so on, of a product or process.

Case 2: S/N ratio for "bigger is better":

$$\eta = \frac{S}{N} = -10\log_{10}\left(\frac{\sum \frac{1}{y_i^2}}{n}\right)$$

where S/N ratio = −10 log₁₀(mean square of the reciprocal response). This value would be used for maximizing values like strength, life, fuel efficiency, and so on, of a product or process.

Case 3: S/N ratio for "normal is best":

$$\eta = \frac{S}{N} = +10\log_{10}\left(\frac{\text{mean}^2}{\text{variance}}\right) = +10\log_{10}\left(\frac{\bar{y}^2}{s^2}\right)$$

This S/N ratio is applicable for dimensions, clearances, weights, viscosities, and so on.
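The three cases reduce to short formulas; a hedged sketch (Python with numpy; the response measurements are invented for demonstration) follows:

```python
import numpy as np

y = np.array([2.1, 1.8, 2.4, 2.0, 1.9])   # invented response measurements

sn_smaller = -10 * np.log10(np.mean(y ** 2))               # smaller is better
sn_bigger = -10 * np.log10(np.mean(1 / y ** 2))            # bigger is better
sn_nominal = 10 * np.log10(y.mean() ** 2 / y.var(ddof=1))  # normal is best

print(f"smaller-is-better: {sn_smaller:.2f} dB")
print(f"bigger-is-better:  {sn_bigger:.2f} dB")
print(f"normal-is-best:    {sn_nominal:.2f} dB")
```

In a robustness study, these values would be computed at each design point and the settings that maximize the chosen S/N ratio selected.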
ANALYSIS OF VARIANCE (ANOVA) Define key elements of ANOVAs and how the results can be used. (Understand) Body of Knowledge III.F.5
In statistics, analysis of variance (ANOVA) is a collection of statistical models and their associated procedures that compare means by splitting the overall observed variance into different parts. The initial techniques of analysis of variance were pioneered by the statistician and geneticist Sir Ronald A. Fisher in the 1920s and 1930s, and the tool is sometimes known as Fisher's ANOVA or Fisher's analysis of variance due to the use of Fisher's F-distribution as part of the test of statistical significance. There are three conceptual classes of such models:

• The fixed effects model assumes that the data come from normal populations that may differ only in their means. (Model 1)

• Random effects models assume that the data describe a hierarchy of different populations whose differences are constrained by the hierarchy. (Model 2)

• Mixed effects models describe situations where both fixed and random effects are present. (Model 3)

In practice, there are several types of ANOVA depending on the number of treatments and the way they are applied to the subjects in the experiment:

• One-way ANOVA is used to test for differences between three or more independent groups.

• One-way ANOVA for repeated measures is used when the subjects are repeated measures; this means that the same subjects are used for each treatment. Note that this method can be subject to carryover effects.

• Factorial ANOVA is used when the experimenter wants to study the effects of two or more treatment variables. The most commonly used type of factorial ANOVA is the 2 × 2 (read "two by two") design, where there are two independent variables and each variable has two levels or distinct values. Factorial ANOVA can also be multilevel, such as 3 × 3, or higher order, such as 2 × 2 × 2, but analyses with higher numbers of factors are rarely done because the calculations are lengthy and the results are hard to interpret. Such experiments may also be costly and take a long time to complete.

• Multivariate analysis of variance (MANOVA) is used when there is more than one dependent variable.
Fixed Effects Model

The fixed effects model of analysis of variance applies to situations in which the experimenter has subjected the experimental material to several treatments,
each of which affects only the mean of the underlying normal distribution of the response variable. An example might be comparing the gage error of three particular gages used to measure a given part characteristic.
Random Effects Model

Random effects models are used to describe situations in which incomparable differences in experimental material occur. The simplest example is that of estimating the unknown mean of a population whose individuals differ from each other. In this case, the variation between individuals is confounded with that of the observing instrument. In contrast with the fixed effects model, a subtle difference in the example would be to first randomly select three gages capable of measuring a given part characteristic from all such gages available at a manufacturing facility and then compare gage error. Inferences now relate not only to the gage error from the sampled gages but also to what might occur when using other gages from the population.

To further illustrate the difference between the fixed effects and the random effects models, consider repetition of the experiment. For the fixed effects model, we would select the same three gages, noting differences in gage errors with each gage. For the random effects model, we would again randomly sample three gages from the population of gages in order to examine the variability of the population of gages.
Assumptions

• Independence of cases. This is a requirement of the design.

• Scale of measurement. The dependent variable is interval or ratio.

• Normality. The distributions in each of the groups are normal (use the Kolmogorov–Smirnov and Shapiro–Wilk tests to test normality). Note that the F-test is extremely non-robust to deviations from normality.18

• Homogeneity of variances. The variance of data in groups should be the same (use Levene's test for homogeneity of variances).

The fundamental technique behind an analysis of variance is a partitioning of the total sum of squares into components related to the effects in the model used. For example, the model for a simplified ANOVA with one type of treatment at different levels is shown below. (If the treatment levels are quantitative and the effects are linear, a linear regression analysis may be appropriate.)

$$SS_{\text{Total}} = SS_{\text{Error}} + SS_{\text{Treatments}}$$

The number of degrees of freedom (abbreviated df) can be partitioned in a similar way and specifies the chi-square distribution that describes the associated sums of squares (SS):

$$df_{\text{Total}} = df_{\text{Error}} + df_{\text{Treatments}}$$
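The partition is easy to verify numerically. Here is a minimal sketch (not from the handbook; the data for three treatment groups are invented) showing that the treatment and error sums of squares add up to the total, and likewise for the degrees of freedom:

```python
# One-way ANOVA partition: SS_Total = SS_Treatments + SS_Error.
import numpy as np

groups = [np.array([4.0, 5.0, 6.0]),
          np.array([6.0, 7.0, 8.0]),
          np.array([9.0, 9.0, 12.0])]

all_y = np.concatenate(groups)
grand_mean = all_y.mean()

ss_total = ((all_y - grand_mean) ** 2).sum()
ss_treat = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ss_error = sum(((g - g.mean()) ** 2).sum() for g in groups)

df_total, df_treat = len(all_y) - 1, len(groups) - 1
df_error = df_total - df_treat

f_stat = (ss_treat / df_treat) / (ss_error / df_error)
print(ss_total, ss_treat + ss_error)  # both print the same value: the partition
print(df_total, df_treat + df_error, f_stat)
```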
Degrees of Freedom

Degrees of freedom indicate the effective number of observations that contribute to the sum of squares in an ANOVA: the total number of observations minus the number of linear constraints in the data. Generally, the degrees of freedom are the number of participants (for each group) minus one.

Example 1. Group A is given vodka, group B is given gin, and group C is given a placebo. All groups are then tested with a memory task. A one-way ANOVA can be used to assess the effect of the various treatments (that is, the vodka, gin, and placebo).

Example 2. Group A is given vodka and tested on a memory task. The same group is allowed a rest period of five days, and then the experiment is repeated with gin. The procedure is repeated using a placebo. A one-way ANOVA with repeated measures can be used to assess the effect of the vodka versus the impact of the placebo.

Example 3. In an experiment testing the effects of expectations, subjects are randomly assigned to four groups:
• Expect vodka–receive vodka

• Expect vodka–receive placebo

• Expect placebo–receive vodka

• Expect placebo–receive placebo (the last group is used as the control group)
Each group is then tested on a memory task. The advantage of this design is that multiple variables can be tested at the same time instead of running two different experiments. Also, the experiment can determine whether one variable affects the other variable (known as interaction effects). A factorial ANOVA (2 × 2) can be used to assess the effect of expecting vodka or the placebo and the actual reception of either.

ANOVA measures within-group variation by a residual sum of squares $\hat{\sigma}_e^2$, whose square root $\hat{\sigma}_e$ will be found to approximate

$$\hat{\sigma} = \frac{\bar{R}}{d_2},$$

the measure of within-group variation, or an estimate of inherent variability, even when all factors are thought to be held constant. This within-group variation is used as a yardstick to compare between-group variation. ANOVA compares between-group variation by a series of variance ratio tests (F-tests). It is important to stress that a replicate is the result of a repeat of the experiment as a whole and is not just another unit run at a single setting, which is referred to as a determination.19

This is just a brief introduction to analysis of variance. In combination with the previous section on design of experiments, extremely powerful analytical methods can be applied to the further understanding of processes. With these tools employed, sound decisions can be made based on statistically derived, objective results.
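For instance, Example 1 above (vodka, gin, placebo) maps directly onto a one-way ANOVA. A hedged sketch using scipy, with invented memory-task scores:

```python
# One-way ANOVA for three independent groups (Example 1; data invented).
from scipy import stats

vodka   = [12, 10, 14, 11, 9]
gin     = [11, 13, 10, 12, 10]
placebo = [16, 15, 17, 14, 18]

f_stat, p_value = stats.f_oneway(vodka, gin, placebo)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# A small p-value indicates at least one group mean differs.
```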
Table 15.8  Emergency department ANOVA factors and levels.

Factors                Levels of Factors
Patient disposition    Admission, Discharge
Number of orders       0, 1, 2, 3, 4, 5, 6

Source: Sandra L. Furterer.
Emergency Department ANOVA Example19

An emergency department (ED) wants to understand the factors that affect how long a patient stays in the emergency department, or their total length of stay (Total LOS). The ED staff believe that patients who are admitted to the hospital spend more time in the ED, because they are more acutely ill than those who are discharged from the ED. Additionally, the number of test orders can contribute to a longer length of stay. An ANOVA was performed with the factors and levels shown in Table 15.8.

Minitab statistical software was used to perform the ANOVA. The results, in Figure 15.8, illustrate that there is a statistically significant difference in Total LOS for both factors, patient disposition and number of orders, with p-values of 0.000. The output also shows the regression equation for the model. Figures 15.9 and 15.10 show the Tukey comparisons for the factors and levels. The Total LOS for admitted patients was much higher, at 333 minutes, compared to 292 minutes for discharged patients. There are significant differences in Total LOS between 0 orders and 2, 3, 4, 5, and 6 orders; between 1 order and 2, 3, 4, 5, and 6 orders; between 2 and 5 orders; between 3 and 5 orders; and between 4 and 5 orders. The factorial plots in Figure 15.11 show that Total LOS is greater for admitted patients and increases with the number of test orders performed.

ANOVA can provide great insight into what is happening with a process so that improvements can be suggested, tested, and implemented. In this particular study, additional improvements focused on getting patients admitted more quickly to the inpatient floors and units by making changes to the patient transportation and nurse reporting processes. Additionally, another Lean Six Sigma project was initiated to improve the lab and imaging orders and results processes.
General Linear Model: Total LOS versus Disposition, Number of Orders

Method
Factor coding: (-1, 0, +1)

Factor Information
Factor            Type   Levels  Values
DISPOSITION       Fixed       2  Admission, Discharge
Number of Orders  Fixed       7  0, 1, 2, 3, 4, 5, 6

Analysis of Variance
Source              DF    Adj SS   Adj MS  F-Value  P-Value
DISPOSITION          1    167606   167606    15.71    0.000
Number of Orders     6   2098862   349810    32.79    0.000
Error              642   6848229    10667
  Lack-of-Fit        5     48307     9661     0.91    0.477
  Pure Error       637   6799922    10675
Total              649  10977937

Model Summary
      S    R-sq  R-sq(adj)  R-sq(pred)
103.281  37.62%     36.94%      36.17%

Coefficients
Term            Coef  SE Coef  T-Value  P-Value   VIF
Constant      312.31     9.24    33.78    0.000
DISPOSITION
  Admission    20.73     5.23     3.96    0.000  1.57
Number Orders
  0           -149.9     13.4   -11.22    0.000  1.56
  1            -64.1     13.4    -4.77    0.000  1.51
  2            -23.6     13.2    -1.79    0.073  1.35
  3              9.0     11.7     0.77    0.444  1.52
  4             15.4     12.3     1.25    0.212  1.53
  5             69.6     14.2     4.91    0.000  1.27

Regression Equation
Total LOS = 312.31 + 20.73 DISPOSITION_Admission - 20.73 DISPOSITION_Discharge
            - 149.9 Number Orders_0 - 64.1 Number Orders_1 - 23.6 Number Orders_2
            + 9.0 Number Orders_3 + 15.4 Number Orders_4 + 69.6 Number Orders_5
            + 143.6 Number Orders_6

Figure 15.8  ANOVA results for the emergency department study. Source: Sandra L. Furterer.
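The same additive two-factor model can be fit outside Minitab. Here is a hedged sketch in Python's statsmodels; the column names and the handful of rows are invented stand-ins for the 650-observation ED data set:

```python
# Additive two-factor ANOVA: Total LOS ~ disposition + number of orders.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

ed = pd.DataFrame({
    "total_los":   [420, 250, 310, 510, 180, 395, 460, 220],
    "disposition": ["Admission", "Discharge", "Discharge", "Admission",
                    "Discharge", "Admission", "Admission", "Discharge"],
    "num_orders":  [3, 1, 2, 3, 0, 2, 3, 1],
})

model = ols("total_los ~ C(disposition) + C(num_orders)", data=ed).fit()
print(sm.stats.anova_lm(model, typ=2))  # F and p for each factor
```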
Figure 15.9  Tukey simultaneous 95% CIs: differences of means for Total LOS by disposition. If an interval does not contain zero, the corresponding means are significantly different. Source: Sandra L. Furterer.
Figure 15.10  Tukey simultaneous 95% CIs: differences of means for Total LOS across all pairs of number-of-orders levels (1-0, 2-0, ..., 6-5). If an interval does not contain zero, the corresponding means are significantly different. Source: Sandra L. Furterer.
Figure 15.11  Emergency department factorial plots: main effects plot for Total LOS (fitted means) by disposition (Admission, Discharge) and number of orders (0–6).
Part IV
Customer–Supplier Relations

Chapter 16  Internal and External Customers and Suppliers
Chapter 17  Customer Satisfaction Methods
Chapter 18  Product and Process Approval Systems
Chapter 19  Supplier Management
Chapter 20  Material Identification, Status, and Traceability
Chapter 16 Internal and External Customers and Suppliers
Define and distinguish between internal and external customers and suppliers. Describe their impact on products, services, and processes, and identify strategies for working with them to make improvements. (Apply) Body of Knowledge IV.A
Customer: Anyone who is affected by the product or by the process used to produce the product. Customers may be external or internal.1
INTERNAL CUSTOMERS

An internal customer is the recipient (person or department) of another person's or department's output (product, service, or information) within an organization.2 The concept of internal customers cuts across all levels of an organization—executive, managerial, and workforce. As a case in point, consider how factory workers supply manufacturing data to a manager who, in turn, produces a production report for a senior executive.

Internal customers have also been defined as the next person in the work process. Kaoru Ishikawa introduced the concept of the internal customer as "the next operation as customer." All work-related activities of an organization can be viewed as a series of transactions between its employees, or between its internal customers and internal suppliers. For example, the marketing department in any given organization might supply creative content to the internet architecture and technology group responsible for the firm's website design. Each receiver has needs and requirements. Whether the delivered service or product meets these needs and requirements affects the customers' effectiveness and the quality of the services and/or products delivered to their own customers, and so on.

Following are some examples of internal customer situations:

• If A delivers part X to B one hour late, B may have to apply extra effort and cost to make up the time or else perpetuate the delay by delivering late to the next customer.
• Engineering designs a product based on a salesperson's understanding of the external customer's need. Production produces the product, expending resources. The external customer rejects the product because it fails to meet the customer's needs. The provider reengineers the product, and production makes a new one, which the customer accepts beyond the original required delivery date. The result is waste—and possibly the last order from this customer.

• Information technology (IT) delivers copies of a production cost report (which averages 50 pages of fine print per week) to six internal customers. IT has established elaborate quality control of the accuracy, timeliness, and physical quality of the report. However, of the six report receivers, only two still need information of this type, and neither finds the report directly usable for their current needs. Each has assigned clerical people to manually extract pertinent data for their specific use. All six admit that they diligently store the reports for the prescribed retention period.3

In every organization, each employee is a customer and a supplier. Because no work is an isolated activity, all internal customers are ultimately parts of a customer–supplier value chain (see Figure 16.1). In this chain, everyone's work is part of a process of inputs, added value, and outputs in products and services reaching the external customer. The following are some examples of how the customer–supplier value chain can be observed in every work process and management activity:

• Employees helping other employees get work done

• Supervisors leading employees to accomplish tasks

• Managers serving the needs of organizational groups in their area of responsibility

• Senior executives creating value for employees they lead
• The chief executive creating value by leading efforts to shape organizational vision, direction, philosophy, values, and priorities

Figure 16.1  The customer–supplier value chain: supplier and customer understand and agree on requirements in both directions. Source: ASQ's Foundations in Quality Self-Directed Learning Series, Certified Quality Manager, Module 4 (Milwaukee, WI: ASQ Quality Press, 2001), 4.
A solid foundation for a successful customer-focused business is built when everyone in an organization thinks in terms of serving their internal customer and contributing to the ultimate value for the external end user.4
IMPROVING INTERNAL PROCESSES AND SERVICES

In an excerpt from Armand V. Feigenbaum and Donald S. Feigenbaum's "The Power of Management Capital," the authors discuss the focus on enhancing quality value:

Companies can no longer focus their quality programs primarily on the reduction of defects or things gone wrong for their customers. Defect reduction has become an entry-level requirement for effective quality improvement initiatives. Companies must build their quality programs throughout the customer value chain by integrating and connecting all key quality work processes.5

The goal of the customer–supplier value chain is to integrate quality into every aspect of a work process and, ultimately, the product or service. The concept of "value added" refers to tasks or activities that convert resources into products or services consistent with external customer requirements.

Strategies for working with internal customers and suppliers to improve products, processes, and services are as follows:

1. Identify internal customer interfaces (providers of services/products and receivers of their services/products)
2. Establish internal customers' service/product needs and requirements
3. Ensure that the internal customer requirements are consistent with and supportive of external customer requirements
4. Document quality-level agreements between internal customers and suppliers
5. Establish improvement goals and measurements
6. Identify and remove non-value-added work to keep work processes efficient and focused on quality
7. Implement systems for tracking and reporting performance and for supporting the continuous improvement of the process6

One method to help improve internal customer–supplier relations is an adaptation of Dr. Ishikawa's idea that everyone should be able to talk with each other freely and frankly in order to create better internal relations and improved products and services. Quality level agreements (QLAs) are an adaptation of this idea. Internal QLAs can be used between internal suppliers and customers to document their agreed-to levels of service/product outputs. In "Quality Level Agreements for Clarity of Expectations," Russell Westcott describes how QLAs can be used to manage internal expectations for improved results:7

• Quality specifications for service/product outputs are established (such as quantity, accuracy, usability, service availability and response time, attitude and cooperation of suppliers, and so on).
• Customers express performance targets with appropriate units of measure for each identified service/product output.

• A periodic audit is conducted, or a system for ongoing tracking, measuring, evaluating, and reporting performance compared to the QLA terms is put in place.

• Identification of deficient performance levels triggers continuous quality improvement (illustrated in the sketch after this list).

Internal QLAs offer a sound basis for performance measurement and continual improvement. They also

• increase employee understanding of internal service/product expectations;

• increase employee insight about the potential impact of their contributions on a given business process;

• provide a clear link between internal product/service quality and the organization's quality management system, and ultimately with external customer requirements; and

• improve collaboration between internal customers and suppliers.

For a successful customer-focused business, everyone in an organization must think of serving their internal customer while staying aligned with the external customer's requirements in order to ultimately serve the external customer as the end user.
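As a purely hypothetical illustration of the mechanics (the field names and figures are invented, not taken from Westcott), a QLA metric could be recorded and checked like this:

```python
# Sketch of a QLA record with a periodic check against agreed targets.
from dataclasses import dataclass

@dataclass
class QLAMetric:
    name: str      # service/product output, e.g., "report accuracy"
    unit: str      # agreed unit of measure
    target: float  # agreed performance level
    actual: float  # measured performance this period

    def deficient(self) -> bool:
        return self.actual < self.target

qla = [
    QLAMetric("report accuracy", "% of fields correct", 99.5, 99.7),
    QLAMetric("delivery timeliness", "% on time", 95.0, 91.2),
]

# Deficient performance levels trigger continuous quality improvement.
for m in qla:
    flag = "investigate" if m.deficient() else "ok"
    print(f"{m.name}: target {m.target} {m.unit}, actual {m.actual} -> {flag}")
```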
EFFECT OF TREATMENT OF INTERNAL CUSTOMERS ON THAT OF EXTERNAL CUSTOMERS

When management (and management's systems) is inconsiderate toward internal customers (poor tools and equipment, defective or late material from a previous operation, incorrect or incomplete instructions, illegible work orders or prints, circumvention of worker safety procedures and practices, an unhealthy work environment, lack of interest in internal complaints, disregard for external customer feedback, and so forth), the result can be careless or indifferent treatment of external customers. This indifference may set off a downward spiral that adversely affects the organization's business.

If managers want to instill a desire in their employees to care for the needs of external customers, they cannot ignore the needs of internal customers. Many organizations fail to learn, or simply ignore, internal customers' needs and then wonder why management fails to stimulate internal customers to care about how and what they do for external customers. The rude and uncooperative sales representative, waitperson, housekeeping employee, healthcare provider, delivery person, or customer service representative is often a reflection of a lack of caring for internal customers. Organizations must work constantly to address the internal customers' lament: "How do you expect me to care about the next operator, or external customer, when no one cares whether I get what I need to do my job right?"8
METHODS TO ENERGIZE INTERNAL CUSTOMERS

The organization should consider all or several of the following methods to energize internal customers to improve products, processes, and services:9

• Ensure that all employees understand who the external customers are and, generally, what the customers need and want

• Ensure that employees know what the organization expects of them and what boundaries pertain to what they can do to satisfy the external customer

• Involve employees in designing and implementing strategies, procedures, and tools that will facilitate customer satisfaction

• Ensure that employees have the information, training, time, and support they need to enable them to focus on external customer needs, wants, and satisfaction

• Ensure that employees have the tangible things they need to serve customers well, for example, supplies, tools, working equipment, a clean workspace, a safe workplace, and flexible policies and procedures

• Ensure that the recognition and reward systems support the customer satisfaction focus

• Demonstrate through words and actions the organization's commitment to serving its customers to the best extent possible

• Provide positive reinforcement to employees who are doing, or are trying to do, their best to satisfy customers

• Provide employees with substantive performance feedback and assist in continual improvement
EXTERNAL CUSTOMERS

Many times, it is assumed that employees know who the external customer is. As Robin Lawton writes in his article "8 Dimensions of Excellence," "Ask any diverse group of employees who the customer is and you are likely to hear multiple and competing answers."10

Terms referring to specific types of external customers include the following.
End User or Final Customer

End users or final customers purchase products/services for their own use or receive products/services as gifts. This definition can also apply to organizations, such as regulatory bodies. Types of end users/final customers include the following:

• The retail buyer of a product. The retail buyer influences the design and usability of product features and accessories based on the volume purchased. Consumer product watch organizations or
regulatory bodies warn purchasers of potential problems. These can be small, independent regulatory organizations or large, established government organizations such as the FDA (Food and Drug Administration), which is one of the oldest consumer protection agencies in the United States. The factors important to this type of buyer, depending on the type of product, are reasonable price, ease of use, performance, safety, aesthetics, and durability. Other influences on product offerings include an easy purchase process, installation, instructions for use, post-purchase service, warranty period, packaging, friendliness of the seller's personnel, and brand name.

• Discount buyer. The discount buyer shops primarily for price, is more willing to accept less-well-known brands, and is willing to buy quantities in excess of immediate needs. These buyers have relatively little influence on products, except for, perhaps, creating a market for off-brands, production surpluses, and discontinued items.

• Employee buyer. The employee buyer purchases the employer's products, usually at a deep discount. Often being familiar with, or even a contributor to, the products bought, this buyer can provide valuable feedback to the employer (both directly, through surveys, and indirectly, through the volume and types of items purchased).

• Service buyer. The buyer of services (such as TV repair, dental work, or tax preparation) often buys by word of mouth. Word of good or poor service spreads rapidly and influences the continuance of the service provider's business.

• Service user. The captive service user (such as the user of electricity, gas, water, municipal services, and schools) generally has little choice as to which supplier they receive services from. Until competition is introduced, there is little incentive for providers to vary their services. Recent deregulation has resulted in a more competitive marketplace for some utilities, and open enrollment for schools has also allowed more choice.

• Organization buyer. Buyers for organizations that use a product or service in the course of their business or activity can have a significant influence on the types of products offered to them, as well as on the organization from which they buy. Raw materials or devices that become part of a manufactured product are especially critical in sustaining quality and competitiveness for the buyer's organization (including performance, serviceability, price, ease of use, durability, simplicity of design, safety, and ease of disposal). Other factors include flexibility in delivery, discounts, allowances for returned material, extraordinary guarantees, and so forth. Factors that particularly pertain to purchased services are the reputation and credibility of the provider, range of services offered, degree of customization offered, timeliness, and fee structure.
Intermediate Customers

Intermediate customers are divided into two groups. One group buys products and/or services and makes them available to the final customer by repackaging or reselling them or by creating finished goods from components or subassemblies. The second group includes organizations that repair or modify a product or service for the end user. Types of intermediate customers include the following:

• Wholesale buyer. Wholesalers buy what they expect they can sell. They typically buy in large quantities. They may have little direct influence on product design and manufacture, but they do influence the providers' production schedules, pricing policies, warehousing and delivery arrangements, return policies for unsold merchandise, and so forth.

• Distributor. Distributors are similar to wholesalers in some ways but differ in that they may stock a wider variety of products from a wide range of producers. What they stock is directly influenced by their customers' demands and needs. Their customers' orders are often small and may consist of a mix of products. The distributors' forte is stocking thousands of catalog items that can be "picked" and shipped on short notice at an attractive price. Customers seeking an industry level of quality, at a good price and immediately available, mainly influence distributors stocking commodity-type items, such as sheet metal, construction materials, and mineral products. Blanket orders for a yearly quantity delivered at specified intervals are prevalent for some materials.

• Retail chain buyer. Buyers for large retail chains, because of the size of their orders, place major demands on their providers, such as pricing concessions, very flexible deliveries, requirements that the providers assume warehousing costs for already purchased products, special packaging requirements, no-cost return policies, and requirements that the providers be able to accept electronically sent orders.

• Other volume buyers. Government entities, educational institutions, healthcare organizations, transportation companies, public utilities, cruise lines, hotel chains, and restaurant chains all represent large-volume buyers that provide services to customers. Such organizations have regulations governing their services. Each requires a wide range of products, materials, and external services in delivering its services, much of which is transparent to the consumer. Each requires high quality, and each has tight limitations on what it can pay (for example, based on appropriations, cost-control mandates, tariffs, or heavy competition). Each such buyer demands much for its money but may offer long-term contracts for fixed quantities. The buying organizations' internal customers frequently generate the influences on the products required.

• Service providers. These buyers include plumbers, public accountants, dentists, doctors, building contractors, cleaning services, computer programmers, website designers, consultants, manufacturer's reps,
actors, and taxi drivers, among many others. This type of buyer, often self-employed, buys very small quantities, shops for value, buys only when the product or service is needed (when the buyer has a job, patient, or client), and relies on the high quality of purchases to maintain their customers' satisfaction. Influences on products or services for this type of buyer may include having the provider be able to furnish service and/or replacement parts for old or obsolete equipment, be able to supply extremely small quantities of an extremely large number of products (such as those supplied by a hardware store, construction materials depot, or medical products supply house), or have product knowledge that extends to knowing how the product is to be used.

Intermediate customers must be able to understand the product and/or service requirements of the end users and know how to address product specifications or service elements with the supplier.

Customers are diverse; they have different needs and wants, and they certainly do not want to be treated alike. To provide a strategic focus and respond more effectively to groups of either current or prospective buyers, most businesses segment their external customers in order to better serve the needs of different types of customers. Customers sharing particular wants or needs may be segmented in the following ways:

• Purchase volume

• Profitability (to the selling organization)

• Industry classification

• Geographic factors (such as municipalities, regions, states, countries, and continents)

• Demographic factors (such as age, income, marital status, education, and gender)

• Psychographic factors (such as values, beliefs, and attitudes)

An organization must decide whether it is interested in simply pursuing more customers (or contributors, in the case of a not-for-profit fundraiser) or in targeting the right customers. It is not unusual for an organization, after segmenting its customer base, to find that it is not economically feasible to continue to serve a particular segment. Conversely, an organization may find that it is uniquely capable of further penetrating a particular market segment, or it may even discover a niche not presently served by other organizations.11 A simple segmentation sketch follows.
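A minimal sketch of volume-based segmentation with pandas (the customers, figures, and thresholds are all invented for illustration):

```python
# Segment customers by purchase volume, then inspect profitability by segment.
import pandas as pd

customers = pd.DataFrame({
    "customer":      ["A", "B", "C", "D", "E"],
    "annual_volume": [120, 15, 480, 60, 5],
    "profit":        [18000, -500, 52000, 7500, 300],
})

customers["segment"] = pd.cut(
    customers["annual_volume"],
    bins=[0, 50, 200, float("inf")],
    labels=["small", "mid", "large"],
)

# An unprofitable segment may not be worth serving; a profitable,
# underpenetrated one may be a target.
print(customers.groupby("segment", observed=True)["profit"].agg(["count", "sum"]))
```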
EXTERNAL CUSTOMERS’ INFLUENCE ON PRODUCTS AND SERVICES For an organization to assume they know a customer’s needs would be very risky. Lawton also writes in his article “8 Dimensions of Excellence,” “It is impossible to determine who the customer is without first identifying the product for which they
are customers.” With reference to his third dimension, “product characteristics customers want,” he goes on to explain that when there is confusion about the product, there is guaranteed confusion about who the customer is.12 After segmenting external customers, an organization must determine what it is about its products and services that is the most important to each group. This involves consideration of customers as people—their likes, dislikes, idiosyncrasies, biases, political orientation, and religious beliefs, as well as moral codes, peer pressures, prestige factors, fads, and so on, that an organization should be aware of—in order to manage the customer relationship effectively.
Chapter 17 Customer Satisfaction Methods
Describe the different types of tools used to gather customer feedback: surveys, focus groups, complaint forms, and warranty analysis. Explain key elements of quality function deployment (QFD) for understanding and translating the voice of the customer. (Understand) Body of Knowledge IV.B
It is the voice of the customer (VOC) that an organization must listen to in order to satisfy customer needs, improve current products and services, and generate ideas for new products and services. Customer satisfaction is the difference between perceived performance and the customer's expectations. With customer expectations continuously growing, a variety of tools can be used to capture accurate customer information. From the book How to Win Customers and Keep Them for Life: "A typical business hears from only four percent of its dissatisfied customers. The other 96 percent just quietly go away and 91 percent will never come back. That represents a serious financial loss for companies whose people don't know how to treat customers, and a tremendous gain for those that do."1

If an organization wants to be customer driven, it must integrate both qualitative and quantitative customer feedback data to get a more complete picture of the voice of the customer. Qualitative data complements quantitative data in that it helps companies gain a more comprehensive understanding of their customers' perceptions of quality. There are many tools used to gather and analyze customer feedback:

• To seek out customer feedback, organizations can use surveys, focus groups, order forms with comment sections, and interviews.

• To facilitate feedback, making it easy for the customer to respond, an organization can set up customer hot lines, such as "24/7" 800 numbers, phones for contact personnel, electronic bulletin boards, and access for feedback or complaints via e-mail, fax, phone, and social media sites.
• Information is also available from the data provided by standard company records, such as claims, refunds, returns, repeat services, litigations, replacements, repairs, guarantee usage, warranty work, and general complaints.

Once an organization has a feedback process in place, collects customer data, and problem-solves to improve its products and services, it must also act on what it learns. The quality function deployment (QFD) method helps here by translating customer requirements into appropriate technical requirements for each stage of the product development cycle.
SURVEYS

Companies that are customer driven are proactive and will go out and ask customers for feedback. Surveys are a good way to quantify customers' attitudes and perceptions of a company's products or services. By using surveys, a company goes directly to its customers to get the answers to what it wants to know. What knowledge is the organization hoping to gain? Facts it can gather and benefit from include who its customers are, why they buy its products or services, what makes them loyal and willing to buy or use its services again, or what made them stop and go to a competitor. Customer satisfaction surveys can be used to drive continuous improvement activities.
Survey Types

There are a number of different types of surveys that can be used depending on the organization's needs. Surveys can elicit a customer's evaluation of a product, a customer's purchasing experience, a company employee's feedback, and so on. See Table 17.1 for a list of various survey types, their uses, advantages, and disadvantages. Note that the types of surveys are listed in alphabetical order; the order does not imply any special significance.

Customer satisfaction and perception surveys can be conducted using one of three different formats: in person, over the telephone, or written (traditional paper and pencil, comment cards/suggestion box, or online via e-mail or a website). See Table 17.2 for the advantages and disadvantages of each format.
Survey Design

To maximize the quantity of responses and to get good constructive answers, the design of a survey is critical. When designing a survey, the following must be addressed:

• What specific research objectives will the survey accomplish?

• How will results be used?

• How will results be communicated to customers and stakeholders?

• Which customers should be surveyed?

• What information should be sought, and what kinds of questions should be posed?
Table 17.1  Survey types.

Attrition analysis and reports
  Use: To calculate attrition (lost-customer) rates; to analyze major events in the organization's environment that may have led to higher rates of attrition (for example, changes in management, changes in internal quality measures, changes in policies such as merchandise-return policies or frequent-flyer policies, or the introduction of a new product or service by a competitor).
  Advantages: The company has a better understanding of why it is losing customers and the conditions leading up to the losses; the company can predict the kinds of events likely to lead to customer attrition in the future.
  Disadvantages: May not get complete and honest answers.

General customer satisfaction surveys
  Use: To measure customer satisfaction and dissatisfaction with products and services; to track trends; to identify major improvement and marketing efforts.
  Advantages: Useful for measuring "desired" quality; general surveys, such as ratings and report cards, are easy to obtain from customers.
  Disadvantages: Because people typically do not want to write out details, the information received may be too general and therefore not very useful; companies may work to achieve high ratings on the report cards, leading to other important things being ignored that are not on the report card; a focus on report cards can derail customer satisfaction.

Lost-customer surveys
  Use: To survey (typically over the telephone) customers who have significantly reduced or stopped purchasing a company's products or services; to survey customers as close as possible to the time of defection; used when the organization can define who represents a lost customer (based on the typical buying interval) and when a customer is considered lost (expressed in terms of days, months, or perhaps years).
  Advantages: Help identify serious customer issues; are one of the more powerful tools for recovering customers; useful for identifying and measuring "expected" quality; may reveal information about competitors' advancements in products and services.
  Disadvantages: Require researchers who deal with serious customer issues; are very familiar with the company's products and services; understand how things get done in the organization; demonstrate good listening, analytical, and interpersonal skills; and retrieve customer opinions without using a sales approach.

Moment-of-truth inventories
  Use: To collect observations from frontline employees regarding customer needs or complaints.
  Advantages: Employees can make observations and assessments within the context of the whole organization (for example, policies, systems, processes, culture, management styles, employee pay, benefits, and promotions); daily aggregates of observations and interactions with customers are more revealing than a once-a-year survey.
  Disadvantages: If trust is low and fear is high, employees might not feel comfortable sharing "the truth"; not all employees view complaints as legitimate opportunities for improvement; managers may subtly punish employees for telling the truth about customer dissatisfaction.

Perceptual surveys (typically conducted as a blind survey, meaning the person surveyed does not know who is sponsoring the survey)
  Use: To get a customer's comparative perceptions of a company's products and services relative to the competition's; to get perceptions of a company's image.
  Advantages: The pool of sampled customers typically includes the company's customers, the competitor's customers, and, sometimes, potential customers; useful for identifying and measuring both "expected" quality and "desired" quality.
  Disadvantages: Interviewees may not have experience using the competitor's products, thus there is no genuine basis for comparison; the anonymity (that is, impersonal nature) of the survey often causes a reduced rate of responsiveness.

Reply cards
  Use: To obtain demographic information about buying customers; to obtain reasons why customers bought; to get customer reactions to the buying experience; to solicit information regarding other product/service needs.
  Advantages: Help update the company's customer list; help the company research buying trends.
  Disadvantages: May be difficult to get a significant return; not all customers will respond, thus the information received is potentially incomplete and biased.

Transaction surveys ("real-time" surveys; a brief survey typically asking two key questions: "What did we do right?" and "How can we improve?")
  Use: To survey customers immediately after a transaction (for example, a product delivery, an installation, or a billing); to build customer relationships; to get new information related to products and services.
  Advantages: Provide quick feedback; give employees opportunities to fix a problem on the spot; useful for measuring "desired" quality.
  Disadvantages: If frontline employees are not empowered to resolve problems on the spot, customers might perceive the company as not being concerned about service quality issues.

Win/loss analysis and reports
  Use: To discover the company's relative ranking with competitors based on the decision maker's selection process; to assess why the company won or lost a competitive bid.
  Advantages: The company has new information to improve future bids; the company gains information about its competitors.
  Disadvantages: May be difficult to get an honest assessment.

Source: ASQ's Foundations in Quality Self-Directed Learning Series, Certified Quality Manager, Module 4 (Milwaukee, WI: ASQ Quality Press, 2001), 4–31, 4–32.
Table 17.2  Survey mediums.

In-person survey
  Advantages: Enhances relationship management with key customers; allows for in-depth probing; offers a more flexible format; creates opportunities to establish and maintain rapport with customers.
  Disadvantages: Is labor intensive; can be expensive; can allow interviewer–interviewee bias to be introduced; requires highly trained interviewers; requires content analysis of open-ended questions, which is more difficult to do; requires well-written, open-ended questions; requires excellent listening skills.

Telephone survey
  Advantages: Allows for greater depth of response than written surveys; costs less than in-person surveys; respondents are more at ease in their own environments; creates a personal connection with the customer that may allow for follow-up.
  Disadvantages: Calls may be viewed as intrusive; the interviewee may be uncomfortable if others listen in on the conversation; designing is more difficult than for written surveys because telephone surveys are a form of interpersonal social interaction without visual contact (people rely on visual as well as spoken cues in a conversation); requires well-written, open-ended questions that are easy for the interviewee to comprehend; requires excellent listening skills.

Written survey
  Advantages: Typically is less expensive than an in-person survey/interview; can be self-administered; can be made anonymous; offers a variety of design options to choose from; is one of the most expedient ways to have customers quantify their reactions to a limited number of features in a company's service or product (if a low response rate is acceptable).
  Disadvantages: Requires follow-up to achieve a decent response rate; customers typically do not like answering open-ended questions (therefore, limited to asking only a few open-ended questions); the targeted addressee may not be the one who responds; feedback is of the "report card" nature, that is, little or no depth of information; there is less assurance that the questions are understood (this can be minimized by field testing or validating the survey instrument with a small group of preselected customers).

Source: ASQ's Foundations in Quality Self-Directed Learning Series, Certified Quality Manager, Module 4 (Milwaukee, WI: ASQ Quality Press, 2001), 4–33.
• What type of survey will best achieve the research objectives?

• Will the survey selected measure what it is intended to measure?

• Will the survey be capable of producing the same results over repeated administrations?

• How will the results be analyzed (scoring format and editing policy)?

• How easy will it be to administer the selected survey?

• How easy will it be for the respondents to complete?

• Does the budget cover the total costs associated with the survey design, implementation, analysis, and feedback?
Survey Question Design

Survey question design follows the old adage "garbage in, garbage out": the answers received will only be as good as the questions asked. The choice of question design depends on the type of data an organization wants to receive. The responses can be qualitative or quantitative:

• Qualitative responses. If looking for qualitative data, questions are designed for a nonnumeric response; the response will be text or open ended. Open-ended questions result in a richer understanding of the respondent's attitudes and thought processes, but there are limitations, namely analysis and control. Tools such as cluster analysis and Pareto analysis can help quantify results. A few well-written, thought-provoking questions will get an organization more informative responses than a multitude of questions (no matter how well written!), as respondents are bound to grow tired and either skip questions or give up and not complete the survey at all.

• Quantitative responses. If looking for quantitative data, questions are designed to be closed ended, yielding numeric data. Closed-ended questions present discrete choices to the respondent, which may be as simple as a yes/no response. The response format determines how the closed-ended survey data can be used. There are four popular scales used for these surveys:

  – The Likert scale format gives customers the opportunity to respond in varying degrees. The high end represents a positive response, while the low end represents a negative response. For example, if measuring level of satisfaction, a five-point scale is used for responses ranging from "Very satisfied" to "Very dissatisfied." Of all scales, Likert is the most frequently used.

  – Rank order is used when respondents are asked to rank a series of product/service attributes according to their perceived importance. Five or six items are optimal when rank ordering; up to 20 items are possible, but ranking that many is difficult for most respondents. Rank ordering is the least popular format for measuring the importance of attributes.

  – Forced allocation requires respondents to assign points across categories. It is used to identify the importance of different product or service features.

  – Yes/no response asks the customer for a simple yes or no answer.

Generally, the survey results should help the organization understand customer priorities and levels of customer satisfaction. The survey data results should be communicated to all key stakeholders to ensure that the feedback is understood and used to increase customer focus and satisfaction. A brief scoring sketch follows.
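Here is what scoring might look like for one Likert-scale question, as a hedged sketch with invented responses ("top-two-box," the share of 4s and 5s, is one common summary, not the only one):

```python
# Summarize five-point Likert responses for a single survey question.
import numpy as np

responses = np.array([5, 4, 4, 3, 5, 2, 4, 5, 1, 4, 3, 5])

mean_score = responses.mean()
top_two_box = np.mean(responses >= 4)  # share answering 4 or 5

print(f"mean = {mean_score:.2f}, top-two-box = {top_two_box:.0%}")
values, counts = np.unique(responses, return_counts=True)
print(dict(zip(values.tolist(), counts.tolist())))  # distribution by scale point
```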
FOCUS GROUPS

Focus groups are facilitated meetings with groups of people representing the voice of the customer. Focus groups can be used for multiple reasons:

• Hear the voice of the customer (VOC) for product and service requirements

• Generate information to be included in survey questionnaires

• Perform a training needs assessment

• Test new program ideas

• Determine Critical to Quality or Critical to Satisfaction criteria for process design or improvement

Some of the advantages of focus groups include the following:

• Members can stimulate one another with ideas that others can springboard from

• More comments and ideas can be gathered more quickly than in one-on-one interviews

• Facilitators can probe for additional information or clarification

• Focus groups can produce results that have high face validity, meaning that the content is in the participants' own words

• Information can be obtained at a relatively low cost

• Information can be obtained fairly quickly

Some of the disadvantages of focus groups include the following:

• The facilitator has less control in a group setting than in individual interviews

• It can be difficult to analyze dialogue when participants interact with each other

• A facilitator who is not highly skilled can produce poor results

• Trained and skilled facilitators can be hard to find

• Group-to-group variation can be great

• They can be difficult to schedule across all of the participants

• Participants who have strong personalities can steer the focus group to a tangent or to specific opinions

There is no set sample size to achieve with focus groups; they are usually run until no new themes are generated. The groups should not be too large (fewer than 10 or 12 participants). Focus groups can be an effective way to gather the customers' voice.
COMPLAINT FORMS

As stated in the book How to Win Customers and Keep Them for Life, "Seven out of 10 complaining customers will do business with you again if you resolve the complaint in their favor. If you resolve it on the spot, 95 percent will do business with you again. On average, a satisfied customer will tell five people about the problem and how it was satisfactorily resolved."2

Most customers who are dissatisfied will not complain; instead, they will go to the competition. When a customer does communicate a complaint, it should be regarded as an important opportunity not only to learn from that customer but also, if handled quickly with positive results for the customer, to enhance that customer's loyalty. An organization that has only been proactive with other tools, such as customer feedback surveys, QFD, and so on, should treat an unheard complaint as a missed opportunity. When a customer complains, he or she is providing information about a problem that other customers are most likely experiencing but that the company is not yet aware of. The problem may be with the service received or with the product itself. The customer's complaint may also be about not getting what they expected out of that product or service. This, too, needs to be captured and fed back to the service provider or manufacturer, as it is valuable information that provides the opportunity for improvements to a service or product.

A complaint can be communicated in many ways: by phone call, comments made to an employee, a comments section on an order form, a face-to-face complaint at a customer service counter, a letter, customer feedback cards, or a filled-out complaint form. All these formats can be captured and used for continuous improvement. The least productive, and unfortunately very harmful to a company, are the complaints lodged by customers complaining to friends and acquaintances. If organizations want customer loyalty, they must make it easy for a customer to complain and must provide a quick response that satisfies the customer!

There are various formats for complaint forms, whether they are interactive online forms or filled out on paper. In general, for complaint forms to be of any use for problem solving, they need to have the following information:

• To be provided by the customer:

  – Customer contact information

  – A detailed description of the problem

  – When the problem occurred

  – Who was involved

  – Where the problem took place

• To be completed by the organization/customer service department:

  – Person investigating the complaint

  – Due date for closure

  – Root cause
  – Action taken

  – Communication back to the customer

Figure 17.1 shows an example of a customer complaint form to be filled out by hand.

The complaint forms should be just a small piece of a larger picture. A complaint management system (CMS), in which one person is responsible for tracking all complaint forms, needs to be in place. A CMS allows for recording all complaints and is an added source of information from existing customers and a means of satisfying customers. As part of a continuous improvement process, such as PDCA (as discussed in Chapter 6), after gathering complaint data, problem-solving tools (refer to Chapter 8) can be used to analyze the data, identifying opportunities for improvement:

• First, and foremost, a complaint in any form should ideally be resolved on the spot.

  – A complaining customer is likely to be emotional or angry. Any employee dealing directly with customers must have training in conflict resolution and problem-solving tools. An employee must know through proper training not to take it personally. Rather than feel resentment toward an irate customer, an employee should strive to understand and solve the problem, providing a win-win situation: the customer walks away knowing the company cares, and the employee feels the reward of accomplishment.

  – An employee must always remember to thank the customer for bringing the problem to his or her attention.

• All complaints should be documented, along with the resolution, whether they were resolved on the spot or not.

• Communicate the action taken to the customers.

• Using the problem-solving tools discussed in Chapter 8, look for trends, Pareto-ize complaints, and problem-solve (see the sketch after this list).

  – Recording and tracking complaints will tell a company which particular issue is not just a nuisance but may indeed be a major problem.

  – Remember the 80/20 rule (Pareto principle)—80% of the complaints come from 20% of the customers. A large number of complaints on a particular issue certainly signifies a problem needing resolution. However, there may be only a small number of complaints for a specific problem that has a larger impact on profits. Having already divided customers into segments, an organization should know which of its customers contribute most to its profits.

• Don't stop at problem solving. After prioritizing improvement opportunities, provide resources to a cross-functional team for implementation as part of the PDCA cycle.

Customers should be encouraged to express dissatisfaction. Training employees to handle customer complaints as opportunities can be key to a company gaining competitive advantage in the marketplace.
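"Pareto-izing" complaints is mechanically simple. An illustrative sketch (complaint categories and counts invented):

```python
# Pareto view of complaint categories: rank by frequency, add cumulative share.
import pandas as pd

complaints = pd.Series({
    "late delivery": 46,
    "billing error": 21,
    "damaged item": 12,
    "rude service": 8,
    "wrong item": 5,
})

pareto = complaints.sort_values(ascending=False).to_frame("count")
pareto["cum_pct"] = pareto["count"].cumsum() / pareto["count"].sum() * 100
print(pareto)  # the top one or two categories usually carry most complaints
```

As the text cautions, frequency alone is not the whole story: a low-count complaint tied to a high-profit segment may deserve priority.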
Customer Complaint Form

Customer name:
Telephone number:
Address:

Complaint:
Date and time of occurrence:
Location:
Name of product (if applicable):
Name of person(s) involved (if applicable):
Please give a description of the problem:
Findings:
Action taken:
Date customer was contacted with the results of the investigation and action taken:
Initials of person investigating the complaint:
Initials of person taking the complaint:
Figure 17.1 Customer complaint form.
Table 17.3  Test to identify an organization's attitude toward complaints.

• A complaint is: (A) a problem; (B) an opportunity

• Receiving complaints is: (A) a painful and awkward situation; (B) a chance to retain dissatisfied customers

• Above all, a complainant: (A) wants compensation; (B) gives important information

• Employees are: (A) defensive about complaints; (B) open to complaints

• Employees tend to: (A) shift blame elsewhere; (B) recognize the needs of dissatisfied customers

• Complaints are resolved: (A) with problem-solving techniques; (B) with a systematic process linked to a continuous improvement process

• When a complaint is closed: (A) someone will likely be punished; (B) something should be improved

• Complaints: (A) must be reduced; (B) are encouraged and welcomed

Source: N. Scriabina and S. Fomichov, "Six Ways to Benefit from Customer Complaints," Quality Progress (2005).
Natalia Scriabina and Sergiy Fomichov discuss improving business performance through effective complaint handling in "Six Ways to Benefit from Customer Complaints." They provide Table 17.3 to help identify whether an organization sees complaints as problems or opportunities.3
WARRANTY AND GUARANTEE ANALYSIS

Data from standard company records, such as claims, refunds, returns, repeat services, litigations, replacements, repairs, guarantee usage, and warranty work, also tell an organization where to focus its problem-solving efforts. A guarantee is a promise or assurance regarding the quality of a company's product or service. A warranty is an assurance regarding the reliability of a company's product for a certain period of time. Guarantees and warranties are legally binding contracts. They are typically offered to build customer confidence and reduce any risks associated with a purchase decision. Having a warranty makes it more likely that the customer will seek service under the warranty rather than replace the product with one from a competing manufacturer.

Warranty cards are often distributed with products for new owners to complete and return to the producer. The purpose of the warranty card is twofold: (1) to register the owner's product and serial number with the producer and (2) to obtain demographic information, marketing information, and/or customer feedback regarding the owner's decision to buy, the buying experience, and so on. Ultimately, warranty cards help build and update the company's customer list and provide opportunities to stay in touch with customers. A pragmatic problem with warranty cards is that many customers do not fill them out and return them.4
The data from tracking guarantee and warranty issues provide organizations information such as failure rates in products and services, what is unacceptable to customers, and a rough idea of what customers expect. Each guarantee and warranty problem erodes the organization's profits through payouts and returns. Using the data to problem-solve for the root cause and taking corrective action to prevent the problem from recurring lessens the future impact on a company's profits.
QUALITY FUNCTION DEPLOYMENT
The voice of the customer is the basis for setting requirements. With all the data that are being gathered from surveys, complaints, guarantee usage, and warranty issues for information about customer requirements, a company can use a planning process such as quality function deployment (QFD) to take action. QFD is a structured method in which customer requirements are translated into appropriate technical requirements for each stage of the product development cycle. A Japanese shipbuilding company developed QFD in the early 1970s. In the 1980s, the US auto industry imported QFD, and it has since proven successful in virtually every industry (manufactured products and services) worldwide. QFD, also known as the house of quality (see Figure 17.2), graphically displays the results of the planning process. It links organizations with consumers
The sections of the matrix are:
(A) Customer wants and needs: the "voice of the customer"
(B) Importance weighting
(C) Customers' perceptions relative to your competitors
(D) Critical characteristics of our process that meet customers' needs: the "voice of the company"
(E) Relationships and associated strengths of the relationships
(F) Importance rating (summary)
(G) Correlations (the roof)
(H) Targets: ranking by technical importance
Figure 17.2 Quality function deployment matrix—“house of quality.” Source: R. T. Westcott, The Certified Manager of Quality/Organizational Excellence Handbook, 3rd ed. (Milwaukee, WI: ASQ Quality Press, 2006), 475.
by translating the customer's voice into technical design requirements and customer needs into product design parameters. The QFD matrix shows customer requirements (what the customer wants and needs) as rows, and the columns are characteristics of a product or service (how to meet the customer's needs). In a typical application, a cross-functional team creates and analyzes a matrix that the company can then measure and control. QFD is deployed through a progression of matrices (Figure 17.3). It is an iterative process of four phases, where the decisions made in one matrix drive the next. The four linked matrices convey the voice of the customer through these actions:
• Product planning. Translating customer input and competitor analysis into design requirements
• Part deployment. Translating design requirements into part specifications
• Process planning. Translating part specifications into process requirements
• Production planning. Translating process requirements into production requirements
Early in the planning phase, the house of quality clarifies the relationship between customer needs and product features. It correlates the customer requirements and competitor analysis with higher-level technical and product characteristics to establish the strong and weak relationships between the two.
Customer requirements feed design requirements; design requirements feed parts requirements; parts requirements feed process requirements; and process requirements feed production requirements, culminating in customer satisfaction.
Figure 17.3 Voice of the customer deployed through an iterative QFD process. Source: R. T. Westcott, The Certified Manager of Quality/Organizational Excellence Handbook, 3rd ed. (Milwaukee, WI: ASQ Quality Press, 2006), 478.
Referring to Figure 17.2:
• Data for section (A), the customers' wants and needs, are obtained from several sources, for example, customer listening systems, surveys, and customer focus groups.
• Each listed customer want may be given a weight as to its importance (B).
• Through research, interviews, focus groups, and so on, data are obtained to be able to rate customers' perceptions (C) of the wants listed in (A) relative to your competitors' products/services (the horizontal connection between A and C).
• The "how things are done," or critical characteristics of your process that meet the customers' needs (the voice of the company), are shown in section (D) of the matrix.
• In the relationships section of the matrix (E), where the rows and columns of the matrix intersect, the QFD team identifies the interrelationships between each customer requirement and the technical characteristics. Relationship codes are inserted. Usually, these codes are classified as 9 = strongly related, 3 = moderately related, and 1 = weakly related.
• Section (F) is a summary of the vertical columns of the matrix (E), representing the sums of the weighted totals of the coded relationships.
• Adding a roof (G) to the house of quality addresses the question of whether improving one design requirement will impact another. This matrix correlates the interrelationships among the company descriptors. Usually, a key indicates the correlation strengths, showing either a positive, supporting relationship or a trade-off (negative) relationship. The arrows below the roof's matrix indicate the direction of improvement for each design requirement (unless a key indicating otherwise is provided).
• The house of quality is completed by summarizing conclusions based on the data provided by the other sections of the matrix. In this section (H), there may be a competitive assessment, technical importance/priorities, and target values.
A detailed example of a house of quality matrix is shown in Figure 17.4. Applying the descriptors for each section of the house of quality in Figure 17.2 to the data in the example QFD matrix in Figure 17.4, the importance weighting (B) shows that one of the customer requirements (A), "Latch/lock easy—Does not freeze," was assigned the highest importance rating of 8, and the lowest importance rating was determined to be a 2 for three of the customer requirements. On the right side of the matrix, the customer ratings (C) show that car B is outperforming the others.
Figure 17.4 presents a completed house of quality for the design of a car door. Customer requirements (A), grouped under "good operation and use" (for example, "Easy close O/S," "Easy to open," "Does not freeze," "No water leak"), carry importance weights (B) ranging from 2 to 8. Design requirements (D) such as "Door close effort," "Key insert effort," and "Water leak test" are linked to the customer requirements through relationship codes (E) of 9 (strong), 3 (medium), and 1 (weak). The matrix also shows customer ratings (C) comparing Cars A, B, and C on a 1 (worse) to 5 (better) scale, a correlation roof (G), objective target values, important controls, and an engineering competitive assessment. The bottom rows give the absolute technical importance (F) of each design requirement (for example, 78 for "Door close effort," with the columns summing to 913) and the relative technical importance (H) expressed as a percentage.
Figure 17.4 QFD house of quality matrix example. Source: R. T. Westcott, The Certified Manager of Quality/Organizational Excellence Handbook, 3rd ed. (Milwaukee, WI: ASQ Quality Press, 2006), 477.
Section (F), identified as the "absolute technical importance" row in this particular sample QFD matrix, represents the sum of the products of each column symbol value (E) and the corresponding customer requirement weight (B). The calculation for the design requirement "Door close effort" is:
7 (weight of "Easy close O/S") × 9 (strong relationship) + 5 (weight of "Easy close I/S") × 3 (medium relationship) = 78
Each entry in (F) is then divided by the sum of all the entries in that row and multiplied by 100 to convert it into a percentage, giving the relative technical importance (H) rankings. The relative ranking for "Door close effort" is calculated as 78/913 × 100% = 8.54%, or approximately 9%. These values represent targets of opportunity for improvement or innovation. The QFD matrix can be changed and/or expanded according to the objectives of the analysis. If desired, customer requirements can be prioritized by importance, competitive analysis, and market potential by adding an additional matrix to the right-hand side of the QFD matrix. The company descriptors can be prioritized by importance, target values, and competitive evaluation with a matrix added to the bottom of the QFD matrix (as displayed in the sample QFD in Figure 17.4). To summarize, these are the QFD approach and deployment steps:
1. Plan—determine the objective(s), determine what data are needed
2. Collect data
3. Analyze and understand the data using the QFD matrix to generate information
4. Deploy the information throughout the organization (depending on the objectives)
5. Employ the information in making decisions
6. Evaluate the value of the information generated and evaluate the process
7. Improve the process
The completed QFD matrix indicates which activities for problem solving or continual improvement should be given the highest priority. This graphical representation helps management and personnel involved in an improvement effort understand how an improvement will affect customer satisfaction and helps prevent deviations from that objective. Although the QFD process was initially developed to prioritize design processes, it can be used very effectively for other process-related systems. QFD benefits include the following:
• Shortening product/service development cycle time
• Decreasing the number of after-launch changes
• Increasing customer satisfaction
• Reducing costs and potentially increasing profits and market share
• Breaking down cross-functional barriers
• Improving communications within the organization and with suppliers and customers
• Providing structure and a systematic planning methodology for product/service development
• Providing better data and documentation for refining product/service designs
• Leading to a more focused effort in meeting customer needs5
It is important that a company make use of the VOC as early as possible and use QFD to incorporate customers' needs into the engineering design of a product or into the plan for a service. Doing so avoids the losses mentioned earlier (warranty costs, costs incurred from complaints, lost customers who didn't bother to complain, and so on) that a company sustains when it waits until the customer receives the product.
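The weighted-sum arithmetic behind sections (F) and (H) of the house of quality is easy to automate. Below is a minimal sketch using the car-door numbers from the example above; the dictionary layout is an illustrative assumption, and only one design-requirement column is shown.

```python
# Importance weights (B) for two customer requirements (A)
weights = {"Easy close O/S": 7, "Easy close I/S": 5}

# Relationship matrix (E): 9 = strong, 3 = medium, 1 = weak; absent = none
relationships = {
    "Door close effort": {"Easy close O/S": 9, "Easy close I/S": 3},
    # ...the full matrix would contain a column per design requirement (D)
}

# Absolute technical importance (F): weighted sum down each column
absolute = {
    design_req: sum(weights[cust] * strength for cust, strength in col.items())
    for design_req, col in relationships.items()
}

# Relative technical importance (H): each column as a percentage of the total
total = 913  # sum over all columns of the full matrix in Figure 17.4
relative = {dr: round(100 * v / total, 2) for dr, v in absolute.items()}

print(absolute["Door close effort"])   # 78
print(relative["Door close effort"])   # 8.54
```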
Chapter 18 Product and Process Approval Systems
Describe how validation and qualification methods, including beta testing, first-article, in-process, and final inspection, are used to approve new or updated products, processes, and services. (Understand) Body of Knowledge IV.C
Quality function deployment (QFD) is a valuable tool for product, service, and process design. A company or organization should be using a planning process such as QFD to translate customer requirements into appropriate technical requirements. During research and development, it is important to translate the product characteristics into specifications as a basis for description and control of the product. At the conclusion of each stage of a product development cycle (see Figure 18.1), a formal, documented, systematic, and critical review of results should be planned and conducted. A decision is then made based on the review of the results whether to move on to the next stage. This is done from the system design level down to lower levels in the product hierarchy (breakdown). A typical breakdown would be systems, subsystems, components, assemblies, parts, and so on. Depending on the industry, certain regulations may apply. In the medical device industry, companies must be compliant with FDA 21 CFR 820.30 regulation for the control of design changes to ensure that good quality assurance practices are being used for the design of medical devices.
VERIFICATION
Verification is conducted by engineering to determine whether design specifications are being met. Verification ensures that the design is complete and meets design input requirements (customer needs and satisfaction, product specifications, and process specifications). Verification methods include the following:
• Formal product/service design reviews*

*Because a lack of design reviews can cause potentially serious quality problems, quality and reliability engineers should advocate and participate in design reviews.
The cycle begins with market research and the voice of the customer and feeds back into continuous improvement. Its stages and typical activities are:
• Feasibility stage: voice of the customer, technological development, market assessment
• Design and development stage:* customer requirements, QFD, project plan and team, business case; system and process requirements, product design, process design, design reviews, prototype builds, alpha tests, verification
• Pilot/test stage: alpha tests, verification, service and marketing materials, first-article test, beta tests, production validation
• Launch stage: marketing and operation plan implementation, customer support, service training
*The development stage can be lengthy and often broken up into phases, that is, definition phase, design phase, and prototype phase.
Figure 18.1 Stages of the product development cycle.
• Qualification testing
• Burn-in/run-in tests (to detect and correct inherent deficiencies)
• Performance tests (mean time between failures, mean time to repair, and so on)
• Demonstrations
• Prototypes
• Design comparisons with similar existing designs previously proven to work
Design reviews are also used to identify critical and major characteristics so that manufacturing and quality efforts will focus on high-priority items and design for appropriate inspection and tests. In addition to the design reviews, periodic evaluations of the design involving analytical methods should be conducted at significant stages. These methods include failure mode and effects analysis (FMEA), fault tree analysis, and risk assessment. In Juran on Planning for Quality, Joseph M. Juran states, "Design review originally evolved as an early warning device during product development. The method is to create a design review team, which includes specialists from those classes of customers who are heavily impacted by the design—those who are to develop the manufacturing process, produce the product, test the product, use
the product, maintain the product, and so on. . . . Their responsibility is to provide early warning to the designer: ‘If you design it this way, here are the consequences in my area.’”1 Here are some other definitions of verification: “The act of reviewing, inspecting, testing, checking, auditing, or otherwise establishing and documenting whether items, processes, services, or documents conform to specified requirements.”2 “Confirmation, through the provision of objective evidence, that specified requirements have been fulfilled.”3
VALIDATION AND QUALIFICATION METHODS
As an output of the initial planning phase, product/service quality goals are defined based on customer wants/needs. During product and process design and development, the task is to create a process that can, under operating conditions, satisfactorily produce the product or service. Qualification testing can be used for both verification and validation. All test and evaluation results should be documented regularly throughout the qualification test cycle, and any results of nonconformity and failure analysis should be noted. Changes indicated by the results of qualification testing are incorporated into the drawings and manufacturing planning. A qualification process is a set of steps that demonstrates the ability of an entity to fulfill the specified requirements.4 Other definitions from the FDA's web page on guidelines on general principles of process validation are listed below:5
• Installation qualification of tooling and other process equipment establishes confidence that they are consistently operating within established limits and tolerances.
• Process performance qualification establishes confidence that the process is effective and reproducible.
• Product performance qualification establishes confidence through appropriate testing that the finished product produced by a specified process meets all requirements for functionality and safety.
Once manufacturing begins producing a small number of parts on production tooling and enough parts are available, a pilot assembly run is started. Various production tests are conducted to ensure that the product is being built to specifications. Validation is the process of evaluating a system or component during or at the end of the development process, after verification, to determine whether it satisfies specified requirements. The focus is to ensure that the product functions as it was intended to. It is normally performed on the final product and conducted by the customer under defined operating conditions. The FDA's web page on guidelines on general principles of process validation defines process validation as "establishing documented evidence that provides a high degree of assurance that a specific process will consistently produce a product meeting its predetermined specifications and quality characteristics."6
ISO 9000:2015 defines validation as "confirmation, through the provision of objective evidence, that the requirements for a specific intended use or application have been fulfilled."7 First-article testing tests a production process to demonstrate process capability and whether tooling meets quality standards. The process is tested under operating conditions, but the resulting products are not salable units. Approval of first-article testing can validate the process. A pilot run is a trial production run. When enough units are produced during a pilot assembly run, various production tests are conducted to ensure that the product is being built to specifications. A field test validates that the product meets performance requirements. During a pilot run, the entire manufacturing process is analyzed to determine if there are any trouble spots. When a problem is identified and solved at its root cause, a countermeasure is implemented. The process is then validated to ensure that the processes are functioning as they should to produce a quality product that meets the customer's needs. Alpha testing, usually done in the development stage, consists of in-house tests conducted on prototypes to ensure that the product is meeting requirements under controlled conditions. Alpha testing may go through multiple iterations, with each Pareto analysis of failure modes yielding tighter results toward meeting intent. Beta testing is the final test before the product is released to the commercial market. Beta testers are existing customers. Two criteria should be included in the beta test program: the customers must be friendly and forthcoming with feedback. The customers in the beta program must be fully aware that they are taking part in a pre–commercial release test program. The beta program should be contained within a set time frame. Like alpha testing, beta testing may go through multiple iterations. Often, the beta product is recovered once the test is completed, which allows a further degree of testing and review. Certain industries may have additional specific methodologies for process/product approval. In the automotive industry, the production part approval process (PPAP) is used. Documents are used to demonstrate to the customer that the new product/process is ready for full-blown production and to obtain approval to release it for production. PPAP is used to show that suppliers have met all the engineering requirements, based on a sample of parts produced from a production run made with production tooling, processes, and cycle times.
Chapter 19 Supplier Management
SUPPLIER SELECTION
Describe and outline criteria for selecting, approving, and classifying suppliers, including internal rating programs and external certification standard requirements, including environmental/social responsibility. (Understand) Body of Knowledge IV.D.1
Supplier management consists of these steps:
1. Selecting a supplier
2. Monitoring the supplier
3. Correcting the performance of a deficient supplier
4. Eliminating suppliers that are incapable or unwilling to supply what the buyer needs
Suppliers should be selected on the basis of expected performance. The best indicator of future supplier performance is past performance, but auditing the supplier's manufacturing processes is an excellent way to determine whether the supplier is capable of producing what the buyer needs. Buyers can determine which supplier would best meet their needs by checking historical records of previous product and service deliveries, by auditing the potential or candidate supplier, or by a combination of both methods. Companies often establish a list of preapproved suppliers who have provided satisfactory products and services in the past and/or who have been rated in an internal supplier rating process. Buyers may determine which suppliers should be listed in the preapproved supplier listing by using either an internally developed rating system or external certification models such as the ISO 9000:2015 criteria. Suppliers' performance related to environmental and social responsibility may be key criteria for certifying and evaluating suppliers.
The focus of a supplier certification program is ensuring that supplier performance is consistent and the products achieve a certain level of quality. Material from a certified supplier may have reduced incoming inspection verification levels. The supplier certification process includes the following activities:1
• Develop the certification criteria and process
• Identify supplier categories for which certification is advantageous
• Evaluate selected suppliers against the criteria
• Report evaluation results and award the certifications for those qualified suppliers
• Perform ongoing monitoring of supplier performance, taking actions including decertification if necessary
Selection of supplier certification criteria should be performed by a cross-functional team, including purchasing, quality, engineering, production, and materials at a minimum. The elements to be used to create the selection criteria will typically include the following:2
• Previous experience and past performance with the product/service to be purchased
• Relative level of sophistication of the quality system, including meeting regulatory requirements or customer-mandated quality system registration (for example, ISO 9001, IATF 16949)
• Capability to meet current and potential future capacity requirements, and at the desired delivery frequency
• Financial stability of the supplier
• Technical support available and willingness to participate as a partner in developing and optimizing design and a long-term relationship
• Total cost of dealing with the supplier (material cost, communications methods, inventory requirements, incoming verification required)
• Supplier's past track record for business performance improvement
• Supplier's code of conduct and ability to be socially and ethically responsible
• Environmental and social responsibility of the supplier
Methods for determining how well a potential supplier fits the criteria include the following:3
• Obtaining Dun & Bradstreet or other available financial reports
• Requesting a formal quote, which includes providing the supplier with specifications and other requirements (e.g., testing)
• Visits to the supplier by management and/or the selection team
• Confirmation of quality system status either by on-site assessment, a written survey, or request for a certificate of quality system registration
• Discussions with other customers served by the supplier
• Review of databases or industry sources for the product line and supplier
• Evaluation of samples obtained from the supplier (e.g., prototyping, lab tests, validation testing)
SUPPLIER PERFORMANCE
Describe supplier performance in terms of measures such as quality (e.g., defect rates, functional performance), price, delivery speed, delivery reliability, level of service, and technical support. (Understand) Body of Knowledge IV.D.2
After a supplier has been selected and an order placed for either products or services, the buyer should continue to monitor the supplier to ensure that it is delivering what is needed. The same criteria for rating the supplier's performance that were used to select suppliers should be employed to monitor ongoing supplier performance. Some measures of supplier performance that can be used to select a supplier or to monitor a supplier's performance are the following, according to Bhote:4
1. No rating. The rationale is that the purchasing and quality departments know which companies are good or bad. No formal rating system is needed.
2. Quality rating only. A rating based on incoming inspection statistics.
3. Quality and delivery rating: graphic methods. A quality rating charted against the delivery rating.
4. Quality and delivery rating: cost index method. Use of a rating system based on a fixed dollar penalty for nonconformances.
5. Comprehensive method. The measuring and rating of variables such as quality, cost, delivery, service, and so on, with applied weightings.
In those instances when a supplier is found to be furnishing products or services that are substandard, the buyer can direct the supplier to correct its performance or the buyer can eliminate the supplier. Directing the supplier to improve performance requires cooperation between the supplier and the buyer. In many
instances the relationship between supplier and buyer is strained and adversarial; in these cases, it is difficult to get the supplier to make the necessary changes to improve product and/or service quality and delivery. This is one reason quality experts have always stressed the importance of developing collaborative relations between suppliers and buyers whenever possible.5 Many companies establish collaborative relationships with their suppliers to remove any barriers to excellent supplier performance. Buyer support to the supplier might include training supplier personnel or providing services that the supplier could not otherwise afford. Providing such support services to the supplier benefits the buyer as much or more than it does the supplier since it reduces the risk of receiving poor materials and services from the supplier. Collaborative relations include the following:
• Involving the supplier early in product design
• Supplier participation in planning product development
• Funding supplier development efforts that lead to better supplies
• Reimbursing the supplier for reasonable supply development expenses
• Giving timely quality performance feedback
KEY MEASURES OF SUPPLIER PERFORMANCE
The SCOR model defines performance attributes for measuring the supply chain that can be adapted to assess supplier performance. The buyer should develop a standard set of measures for assessing supplier performance. The following factors should be included:
• Quality: quality of the product
• Defect rates: whether the product meets specifications
• Functional performance: whether the product meets attributes and fitness for use
• Price: need to ensure that price is not the highest or only priority
• Delivery speed: could use the SCOR model's order fulfillment cycle time metric (Table 19.1)
• Delivery reliability: could use the SCOR model's perfect order fulfillment metric (Table 19.1)
• Service: can use the delivery speed and reliability measures under delivery to assess service; could also incorporate additional customer service measures
• Availability: whether the product is available and working when needed
• Technical support: assess the technical support for answering product questions and problems
Table 19.1 SCOR model performance attributes, definitions, and metrics.

Customer focused
• Reliability: the ability to perform tasks as expected; the predictability of the outcome of a process. Metric: perfect order fulfillment.
• Responsiveness: the speed at which tasks are performed; the speed at which a supply chain provides products to the customers. Metric: order fulfillment cycle time.
• Agility: the ability to respond to external influences and marketplace changes to gain or maintain a competitive advantage. Metrics: upside supply chain adaptability, downside supply chain adaptability, overall value at risk.

Internal focused
• Cost: the cost of operating the supply chain processes. Metrics: total supply chain management costs, cost of goods sold.
• Asset management efficiency (assets): the ability to efficiently utilize assets; inventory reduction, insourcing versus outsourcing. Metrics: cash-to-cash cycle time, return on supply chain fixed assets, return on working capital.

Source: Adapted from information in the SCOR model.6
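Bhote's comprehensive method (item 5 in the list above) reduces to a weighted sum of the individual measures. Here is a minimal sketch in Python; the measures, weights, scoring scale, and supplier names are all illustrative assumptions that each buyer would define for itself.

```python
# Weights must sum to 1.0; each measure is scored 0-100 (higher is better).
weights = {"quality": 0.35, "delivery": 0.25, "price": 0.20, "service": 0.20}

supplier_scores = {
    "Supplier A": {"quality": 92, "delivery": 88, "price": 75, "service": 90},
    "Supplier B": {"quality": 85, "delivery": 95, "price": 90, "service": 70},
}

def weighted_rating(scores: dict[str, float]) -> float:
    """Combine per-measure scores into one comprehensive rating."""
    return sum(w * scores[m] for m, w in weights.items())

for name, scores in supplier_scores.items():
    print(f"{name}: {weighted_rating(scores):.1f}")
# Supplier A: 87.2
# Supplier B: 85.5
```

The choice of weights encodes the buyer's priorities; in this sketch, quality dominates, so Supplier A's higher quality score outweighs Supplier B's better delivery and price.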
SUPPLIER ASSESSMENT METRICS
Some additional metrics in Table 19.2 can be useful in assessing supplier performance.7
Table 19.2 Supplier performance assessment metrics.
• Quality metrics: percentage defective lots; ppm (parts); DPMO (defects); lots accepted versus lots rejected; percentage nonconforming; special characteristic measurements (MTBF, ohms, etc.)
• Timeliness metrics: percentage of on-time deliveries; percentage of late deliveries
• Delivery metrics: percentage of over-order quantity; percentage of under-order quantity; percentage of early delivery
• Cost metrics: dollars rejected/dollars purchased (or vice versa); dollar value of nonconforming product (scrap, returns, rework); dollar value of purchases (per reporting time segment)
• Compliance metrics: percentage of reported quality information (of that required); percentage of supplied certifications (of that required)
• Subjective metrics: responsiveness of supplier service functions; timeliness and/or thoroughness of supplier technical assistance; responsiveness of supplier in providing price quotes; ability to follow instructions (in a variety of formats)
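Two of the quality metrics in Table 19.2, ppm and DPMO, follow standard formulas. A minimal sketch with illustrative numbers:

```python
def ppm_defective(defective_parts: int, total_parts: int) -> float:
    """Defective parts per million parts inspected."""
    return defective_parts / total_parts * 1_000_000

def dpmo(defects: int, units: int, opportunities_per_unit: int) -> float:
    """Defects per million opportunities (one unit can carry several defects)."""
    return defects / (units * opportunities_per_unit) * 1_000_000

print(ppm_defective(12, 48_000))   # 250.0 ppm
print(dpmo(30, 5_000, 8))          # 750.0 DPMO
```

The distinction matters when comparing suppliers: ppm counts whole defective parts, while DPMO normalizes by the number of ways each unit could be defective.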
Chapter 20 Material Identification, Status, and Traceability
Describe the importance of identifying material by lot, batch, source, and conformance status, including impact for recalls. Describe key requirements for preserving the identity of a product and its origin. Use various methods to segregate nonconforming material and process it according to procedures. (Apply) Body of Knowledge IV.E
IMPORTANCE OF MATERIALS IDENTIFICATION
Traceability is the ability to verify the history, location, or application of an item by means of documented recorded identification.1 Organizations that make products need a means to identify and trace the materials that compose those products. In the event a product is defective or fails to meet its specified requirements when it is completed, it will be necessary to determine the source of what went wrong. Without a history of which materials were used in the development of the product, it is impossible to determine whether the material used was a source of the product's deficiencies. Since we cannot know which products will fail before they are developed, it is necessary to keep a record of which materials were used in every product the company makes. Accordingly, certification programs such as the ISO standards require the manufacturer to maintain records of which materials were used in each product developed.2
KEY REQUIREMENTS FOR TRACKING MATERIALS
There are four key requirements for a process that tracks materials:
1. Establish the criteria for distinguishing good materials from bad materials
2. Inspect materials and classify them as complying or noncomplying materials
3. Trace the material that was used to construct each product
4. Respond to the classification of materials
First, the organization's management must establish the criteria by which classifying and sorting the materials will be performed. There are three possible markings the inspector may apply to inspected products:
• Control marking indicates that the inspection was made to comply with an inspection requirement necessitated by a special requirement in the contract, drawing, or product description. This marking often includes a unique serial number or symbol to tie the inspection to a specific inspector. For example, the marking stamp may say, "This product was inspected by Inspector 47."
• Status/acceptance stamp indicates the status and acceptability of the inspected product. Examples of this stamp are "incoming acceptance" stamp, "in-process" stamp, "sample piece identification" stamp, and "nonconforming" stamp.
• Lot or group marking indicates that a container of several, usually small, items has been inspected according to an established sampling method.
After the criteria have been established, the materials must be inspected when they are received and then marked according to their classification. The quality department is traditionally responsible for inspecting products to determine whether or not they conform to the requirements stated in the contract, specification, and other approved product description. In some companies, the manufacturing department or a supplier may perform this quality inspection rather than the quality department. The inspector often marks the products to indicate the result of the inspection. In most firms, the quality department maintains the inspection stamps to provide centralized control of them so they are not used inappropriately. The quality department stocks, issues, and maintains records of all inspection stamps and their identifying numbers and symbols. The materials traceability process must maintain a history of which materials were used in which products. When a product fails or is determined to be defective, it will be necessary to determine which materials were used in the failed product; since we cannot know which products will fail, a record of all the materials used in each process must be maintained constantly. The development history of a product must include the source of the materials used and the lot or batch from which the material was taken. Material quality may vary from batch to batch, so it is essential that the historical records include the specific batch and lot. Acceptance testing that was performed upon receipt of the material will show if the material conformed to acceptance standards, and that must be maintained as part of the material trace history. Today, most organizations have computer programs that keep track of materials by source, batch, and lot number for every product. There is a trade-off between the degree of detail that must be maintained in the material trace history records and the cost of collecting and maintaining that data. If the process for recording which materials are used in every product is time-consuming and
difficult, or if it requires the use of expensive tools and computer programs to perform, the organization is not likely to enforce the record-keeping process in every instance. Accordingly, organizations should continuously look for ways to simplify the materials traceability process and to ensure its use at a level that supports the organization’s business goals. Finally, the materials traceability process includes what the organization does with materials in each classification. Those materials that conform to acceptance testing standards are merely recorded and are used in the development of the products for which they were purchased. But those materials that do not conform to the acceptance testing standards must be dealt with as nonconforming materials, and they must be segregated depending on how and why they failed to conform.
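As a concrete illustration of the lookup such records must support, here is a minimal sketch of an in-memory trace history; a production system would use a database, and every name and number here is an illustrative assumption.

```python
# Each finished unit's serial number maps to the (source, batch, lot) of every
# material used to build it.
trace_history = {
    "SN-1001": [("Acme Polymers", "B-207", "LOT-4413"),
                ("Baxter Alloys", "B-031", "LOT-0975")],
    "SN-1002": [("Acme Polymers", "B-208", "LOT-4414")],
}

def units_using_lot(lot: str) -> list[str]:
    """Return every serial number built with a given material lot: the query
    needed when a nonconforming lot triggers a recall."""
    return [sn for sn, materials in trace_history.items()
            if any(l == lot for _, _, l in materials)]

print(units_using_lot("LOT-4413"))   # ['SN-1001']
```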
METHODS FOR SEGREGATING NONCONFORMING MATERIALS
One of the most important steps in any manufacturing process is to identify, segregate, and trace nonconforming materials. "Nonconforming" means that the requirements specified in the contract, drawing, or other approved product description are not met by the product. The term "nonconforming product" is synonymous with "defective," since both are defined as a departure from specified requirements. A product may still be acceptable to the customer even if the material used to construct it does not fully conform to the specifications. Clothing manufacturers often sell their "seconds" (that is, products that have a slight mistake in the sewing or that use cloth with a slight unevenness) at special outlet stores. Shoppers at outlet stores usually pay considerably less than they would if they purchased the clothing at a retail store, but they must inspect the clothing to ensure they are not dissatisfied with the defects that might be present. Often, shoppers cannot detect the defects, are perfectly satisfied with the product in the seconds outlet store, and are delighted with the lower price. But, of course, the manufacturer earns less for the sale of a second than for the sale of a conforming product in a retail store. There are three possible actions to take when it is determined that a product contains nonconforming materials:
1. Rework to replace the nonconforming material with conforming material.
2. Use of the product "as is." This action should only be taken with the complete knowledge and permission of a relevant authority, usually the customer.
3. Preclude the product's intended use, usually by scrapping the product.
There are three classifications of defects or nonconformities:
• Critical defects are any material, part, component, or product condition that could cause hazardous or unsafe conditions for individuals using or maintaining the product and would prevent the product from performing its intended function.
• Major defects are any material, part, component, or product condition that is likely to fail or materially reduce the usability of the product for its intended purpose.
• Minor defects are departures from established standards having little or no significant bearing on the effective use or operation of the product and that are not likely to reduce the usability of the product for its intended purpose.
The formal process for using a nonconforming product "as is" is known as a waiver. The supplier should request a waiver from the customer only if it is believed that the defect is minor, and the customer should approve or disapprove the proposed use of the nonconforming product. ISO 9001:2015 specifies that the manufacturer or supplier must maintain records of the nonconformity and the disposition of the material.3 If the nonconformity is corrected, the reworked product must be reinspected, marked, and recorded. If the nonconformity is discovered after the product has been delivered, the supplier must take appropriate action to mitigate real or potential damage that might result. In addition, ISO/TS 16949 requires the following:4
• Product with unidentified or suspect status be classified as nonconforming
• Instructions for rework and reinspection be accessible and utilized
• The customer be informed promptly if nonconforming product has been shipped
• A customer waiver, concession, or deviation permit be obtained whenever processing differs from that currently approved
• Any product shipped on a deviation authorization be properly identified on each container
• Records of the expiration dates and quantities for any authorized deviations be maintained
• Upon expiration of any deviation authorization, the original specification and requirement must be met
• These waiver requirements be extended equally to any product supplied to an organization before submission to the (final) customer
The quality department is not the sole and final responsible agency for identifying, segregating, and tracing nonconforming material. Many functions in the organization are responsible for performing many of the activities that are necessary to identify and segregate nonconforming products and to rework or dispose of them. These may include the manufacturing, purchasing, finance, and marketing departments in addition to the quality department. Whenever nonconforming material is detected, it must be analyzed and its subsequent disposition must be controlled. Many firms have established a material review board (MRB) to provide centralized control of materials processes.
The MRB is a multidisciplinary panel of representatives established as a material review authority. The MRB reviews, evaluates, and directs either the disposal or rework of nonconforming materials and services. The MRB treats immediate problems when a nonconformity is detected and influences corporate practices to facilitate long-term improvements to the development of future products. The MRB initiates corrective actions that prevent the recurrence of problems by identifying substandard suppliers and sources, instituting better source selection, and tracking the use of all materials, parts, and components. Immediate actions the MRB might direct when a nonconforming material is detected may include analyzing the offending material to determine why it is defective (root cause analysis), assessing the seriousness of the defect by analyzing the impact the offending material has on the product's usability, and preventing delivery of the product until it has been reworked or a waiver is negotiated. Long-term actions the MRB might take when a defect is detected include determining whether other products have employed the same nonconforming material, reassessing the rating of the supplier or source, determining if alternative sources or supplies of conforming materials exist, and instigating process improvement and product enhancement projects that will eliminate the continued use of the nonconforming material in the future.
A SAMPLE PROCEDURE FOR TRACKING MATERIALS
1. Purpose
The purpose of this procedure is to define a system for identifying materials and products to prevent mix-ups and to provide a means of tracing product back through previous operation steps.
2. Scope
This procedure covers identification and traceability for all purchased materials and manufactured products, including tooling.
3. Responsibility
Identification of purchased materials: Materials manager
Identification of in-process products: Production leadership
Traceability of materials/products: Engineering manager
Finished product lot numbering system: Engineering manager
In-process product lot numbering system: Production leadership
Finished product numbering system: Engineering manager
4. Definitions
Purchased materials—Purchased or customer-supplied raw materials and components that are incorporated into finished product
Manufactured products—Components, assemblies, tooling, and finished packaged products that are manufactured from purchased materials
5. Identify and Trace Purchased Materials
All purchased materials intended for incorporation into the company's products will be marked with a material description or part number and an identification number that traces back to the source supplier's process. Receiving personnel will mark the purchase order number on purchased material that is not properly identified and notify the materials manager for supplier corrective action.
6. Identify Manufactured Products
Manufacturing personnel will identify in-process products by attaching a lot processing card or tag that describes the product. Techniques including color-coded containers or logs that are traceable to the product may also be used. Finished product packaging will have a part number and product description printed on the outside.
7. Trace Materials Used in Manufactured Products
Traceability back to purchased material will be maintained when required by contract, when there is adverse risk of poor quality, or when the product is recalled. Process lot cards or logs are used to provide traceability by requiring lot numbers of materials and components to be filled in at appropriate operation steps. Engineering maintains the process lot cards to reflect current traceability requirements. Production leadership is responsible for maintaining process logs when required. Manufacturing personnel will record the lot number of purchased material and in-process product on the lot processing card or process log. Finished product will be packaged and marked with a unique date code lot number.
8. Identify Tooling
Production tooling will be identified by its current drawing number and revision or be stored in appropriately marked locations.
Part V
Corrective and Preventive Action (CAPA)

Chapter 21  Corrective Action
Chapter 22  Preventive Action
Chapter 21 Corrective Action
Demonstrate key elements of the corrective action process: identify the problem, contain the problem, determine the root causes, propose solutions to eliminate and prevent their recurrence, verify that the solutions are implemented, and confirm their effectiveness. (Apply) Body of Knowledge V.A
Quality problems can have a significant financial effect on businesses and organizations. It should be common business practice to have a CAPA (corrective and preventive action) program as part of the overall quality management system. It gives an organization the ability to correct existing problems or prevent them in the first place. A successfully implemented CAPA program is indispensable for customer satisfaction. Corrective actions are those actions taken to repair a problem after it has occurred. In the past, some companies made large quantities of products and then sorted the good from the bad. Today, modern companies try to prevent making bad products altogether. Six Sigma, ISO 9000, and other quality management programs all stress the importance of preventing mistakes in the first place rather than inspecting to catch and repair them. It is nearly always more costly to fix a defect and the situation that the defect caused than it is to avoid making the mistake or creating the defect. Prevention, rather than corrective action, is always the preferred approach of quality programs. In fact, when corrective action must be taken, it is a good idea to learn from the experience—to analyze the causes of the individual defect in order to develop practices and techniques for avoiding the same mistakes in the future on other products. Senior management is responsible for instituting a corrective and preventive action program. Corrective action begins when a quality-related problem is detected and continues until the detected problem is resolved and there is reasonable confidence that the problem will not recur. Upper management often delegates the corrective action for a new problem to the individuals and teams in the organization best suited to understand the problem and how to fix it.
A material review board (MRB) or corrective action board (CAB) is often designated as the responsible party to begin the corrective and preventive activities resulting from the discovery of a new quality problem. The MRB is a multidisciplinary panel of representatives established as a material review authority. The MRB reviews, evaluates, and directs either the disposal of or rework of nonconforming materials and services. The MRB treats immediate problems when a nonconformity is detected and influences corporate practices to initiate and invest in long-term improvements to the development of future products. The MRB initiates corrective actions that prevent the recurrence of problems by identifying substandard suppliers and sources, instituting better source selection, and tracking the use of all materials, parts, and components. In many companies, a CAB ensures that appropriate corrective and preventive action is being taken by the responsible managers. The CAB typically evaluates material review results, customer complaints, returned material and products, internal test rejections, internal and external audits, cost of quality results, and product reliability concerns in an attempt to detect quality problems with individual products or the processes that generate them. In some companies, the CAB functions are performed by an executive committee, and in others, by upper management. Ultimately, the manager of the function that is primarily responsible for a quality problem is responsible for implementing the necessary corrective and preventive action. For example, if products are found to be defective due to the use of poor raw materials, the procurement department manager must ultimately accept the responsibility for improving material selection to avoid future problems. If, on the other hand, product defects are found to be the result of manufacturing machine issues, the manufacturing department manager must take ownership of corrective and preventive actions. Corrective action begins once a problem has been identified. The corrective action includes the following steps: 1. Problem identification and assessment 2. Containment 3. Root cause analysis 4. Corrective action 5. Verification and confirmation of effectiveness
PROBLEM IDENTIFICATION AND ASSESSMENT
A problem or defect can be identified from a number of sources and should be documented in the first steps of problem identification. Using the "five W's" is helpful: who, what, where, when, and why.
Who—Helps identify who found the problem along with who else might be involved (e.g., supplier, operator, auditor)
What—A description of the problem providing specific part number(s), documents, or relevant data to demonstrate the existing problem
Where—Identifying where in the life cycle of the product the problem occurs can help when doing root cause analysis: incoming inspection, point in assembly process, assembler inspection, test, shipping, or customer. Also, where on the actual product the problem is occurring should be documented.
When—Date and time. Document if and when the issue occurred in the past. Also, process monitoring can help identify the work shift involved or when a process went out of control.
Why—Why is the defect or process deviation a problem?
Assessing the impact of the quality problem that has been identified is a necessary step. If the problem is minor—that is, if it has little impact on the end user's ability to use the product—then it can be solved at a lower priority. If, alternatively, the problem is serious—if it could cause serious harm to a user, or it does not meet the customer's requirements—then it must be dealt with immediately. Understanding the impact that the problem will have on corporate revenue, operating costs, customer satisfaction, and public image can help the problem solvers obtain the appropriate support they will need to solve the problem.
CONTAIN THE NEGATIVE EFFECTS RESULTING FROM THE DEFECT
Corrective action is often divided into two steps. First, temporary measures are taken to remedy the situation in the short term, such as 100% inspection, repair at the customer's site, or additional processing. This short-term remedy is maintained until a long-term corrective action can be implemented. The second step, the long-term remedy, consists of implementing preventive measures that scan for problems in processing and actions that prevent or reduce producing defective products. Containment actions are measures taken to eliminate the impact defective products have on the business and customer. Defective products usually result in the customer's inability to use the product. Sometimes, defective products result in warranty claims or lawsuits, and they often result in costs for reworking and reinspecting the reworked product. Actions taken to minimize the negative effects resulting from a defect are temporary and should not be considered a management policy. Containment activities may include the following:
• Defining the problem as clearly as possible
• Forming a team of appropriate personnel to analyze and solve the problem
• Identifying solutions to the problem and determining the feasibility of rapidly implementing the solution
• Identifying the long-term effect that rapid implementation will have on the company and customers
• Developing an immediate disposition action plan, including how to contain, repair, and inspect the corrected product, or to dispose of the suspect/defective parts
• Assigning the appropriate personnel to perform sorting and/or repairing of the defective product and implementing the containment activities quickly
• Monitoring the implementation
• Documenting the containment activities and results
• Notifying everyone affected by the problem and the containment activities
It's important to remember that the immediate action of containment is typically a temporary situation that needs to be in place until a permanent solution is implemented. In some cases, the immediate fix may actually be the corrective action, in which case the CAPA would be closed with proper documentation of the rationale and verification of the resolution.
IDENTIFY ROOT CAUSES
A cross-functional, cross-disciplinary team is usually best at determining the root causes of the defect. In many organizations this team is called the corrective action team (CAT). Often, the situations that led to the defective product are complex, and a single, simple root cause cannot be identified. To find long-term solutions to prevent the defect from happening again, it is necessary to explore the impact and relationship of many contributing factors. Using problem-solving techniques as described in the problem-solving and improvement sections of this handbook, the team should determine the leading contributors to the problem and which ones are the most likely root causes. Tools for determining the root causes of a defective product include the following (a short computational sketch of Pareto analysis follows the list):
• Five whys
• PDCA
• Operator observations
• Brainstorming
• Nominal group technique
• Six thinking hats exercises
• Ishikawa (fishbone) diagrams
• Process flow analysis
• Fault tree analysis
• Failure mode and effects analysis (FMEA)
• Check sheets
• Pareto analysis
• Subgrouping of data
• Regression analysis
• Data matrix analysis
• Partitioning of variation
• Control charting
• Design of experiments
• Analytical testing
• Trial testing
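As promised above, here is a minimal Pareto analysis sketch: tally the recorded cause of each defect, sort the causes by frequency, and report individual and cumulative shares. The defect data are illustrative.

```python
from collections import Counter

defect_causes = ["solder bridge", "missing part", "solder bridge", "misalignment",
                 "solder bridge", "missing part", "solder bridge", "scratch"]

counts = Counter(defect_causes).most_common()   # most frequent cause first
total = sum(n for _, n in counts)

cumulative = 0
for cause, n in counts:
    cumulative += n
    print(f"{cause:15s} {n:2d}  {100 * n / total:5.1f}%  cumulative {100 * cumulative / total:5.1f}%")
# The top one or two causes typically account for most defects (the 80/20 rule),
# telling the team where root cause analysis will pay off first.
```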
CORRECTIVE ACTION
Having identified the root causes of the problem, the CAT develops solutions—actions that will, when taken together, remove the root causes of the problem that led to defective products. Rather than simply dealing with the symptoms of the problem, the primary cause needs to be identified in order to determine an effective corrective action plan. Corrective actions—those steps taken to remove the causes of identified defects—are dictated by the unique circumstances and conditions. Using the analysis from determining the root cause, a solution to correct the problem and to prevent recurrence is determined. All activities and steps that must be taken to correct and prevent recurrence are identified as tasks and listed as part of an action plan. Each of the tasks must have responsibility assigned and due dates for tracking progress. Identifying everything that contributed to or resulted from the problem is important to ensure a positive outcome. Necessary changes may include changes to processes, procedures, documents, or designs. When there is change, there also needs to be training and communication in order for the corrective action to be effective. These actions also need to be identified in the action plan. In some cases, it may take persuasion to get the decision makers to make those changes necessary both to correct the current problem and to ensure that the problem does not recur in the future. To this end, the CAT should develop a convincing case to "sell" management on the necessary corrections. The following factors should be considered in creating a compelling argument for the necessary changes:
• The cost of the changes must be outweighed by the reduction in cost of correcting future defects.
• The recommended solution must be feasible in the current corporate environment.
• The solutions must be consistent with the overall corporate mission and quality policies.
• Those individuals who resist the changes should be given an opportunity to express their concerns, and the issues they raise must be dealt with honestly.
• A best estimate of the cost of implementing the solution should be given, and the rationale on which the estimate is based should be understood by the decision makers.
• Hard data to support the CAT’s recommendations should be presented but not used to overwhelm those managers who are not data-friendly.
• The impact of the solution on other departments and functions in the organization should be thoroughly understood.
VERIFY AND CONFIRM EFFECTIVENESS
A very important step in the CAPA process is assuring that the actions identified in the action plan were completed. Verification is needed that all tasks identified in the corrective action plan were successfully accomplished. Verify that the corrective action eliminated the problem and that it did not produce any additional defects. Check that the communication and training were accomplished for those affected. If the root cause originated at a supplier and the supplier responded with its own corrective action, determine whether that action solved the problem. An effective solution is evident when the root cause of the problem is corrected and being monitored, there have been no other adverse effects, and controls, if necessary, are in place to prevent future occurrence. When this follow-up is validated, the appropriate person on the CAB, or an authorized quality auditor or manager, will document the validation and close the CAPA.
Having a corrective and preventive action system in place allows organizations to identify issues, solve problems, and prevent their recurrence. A CAPA system can assure coordinated communication and the ability to quickly solve problems and minimize the chances of recurrence. In many organizations the eight disciplines (8D) format is used as the approach for a corrective action process, and many template formats can be found with an internet search. A successfully implemented CAPA program will act as a centralized system in an organization to manage the corrective and preventive action process. Its integration allows for oversight and for gathering data on root causes, so that it is useful for reporting at management reviews as well as to problem-solving teams.
Chapter 22 Preventive Action
Demonstrate key elements of a preventive action process: track data trends and patterns, use failure mode and effects analysis (FMEA), review product and process monitoring reports, and study the process to identify potential failures, defects, or deficiencies. Improve the process by developing error/mistake-proofing methods and procedural changes, verify that the changes are made, and confirm their effectiveness. (Apply) Body of Knowledge V.B
As opposed to corrective actions, which are those actions taken to repair a problem after it has occurred, preventive actions are those taken to avoid making a mistake or creating a defective product in the first place. The book’s Glossary defines preventive action as “Action taken to eliminate the potential causes of a nonconformity, defect, or other undesirable situation in order to prevent occurrence.” Preventive action includes the following steps:
1. Identify potential problems
2. Identify potential causes
3. Create a preventive action plan
4. Implement the preventive action plan
5. Verify and confirm effectiveness
Like the corrective action process, a successful CAPA (corrective and preventive action) program can also capture preventive actions. As mentioned in the previous chapter, prevention, rather than corrective action, is always the preferred method of quality programs. A corrective action should always address preventive action as part of assuring there are no recurrences. However, preventive action is also a process of being proactive to identify potential defects and improvements.
IDENTIFY POTENTIAL PROBLEMS
There are many methods to help identify potential failures. Information can be derived from sources such as customer feedback, audit results, quality processes, work instructions, and quality records. Corrective actions that resolved a problem can be recorded as lessons learned, put into practice for other processes or products with similar issues, and documented as a preventive action in the CAPA process. In process control and trend analysis, preventive action focuses on identifying negative trends and addressing them before they become significant. Events that can trigger a preventive action include monitoring and measurement, trend analysis, tracking of progress on achieving objectives and targets, use of failure mode and effects analysis (FMEA), review of product and process monitoring reports, and studying the process. Here are some examples:
• Trend analysis for process and product characteristics (a trend that is getting worse might indicate that if no action is taken, nonconformity could occur)
• Trend analysis for process capability
• Tracking of progress on achieving business goals
• Monitoring of customer satisfaction
• Failure mode and effects analysis for processes and products
IDENTIFY POTENTIAL CAUSES
When studying a process—whether because of a worsening trend identified in trend analysis, a review of process monitoring reports, or a search for improvements—the flowchart is a good tool to help lay out the process steps. While it helps document the “as is” condition of a process, it also serves as a communication tool for the preventive action team members addressing any number of issues, be it a manufacturing process, a company procedure, or a project plan. More information on flowcharts can be found in the Basic Quality Tools section of the ASQ learning website. A very common and useful tool that carries through from identifying potential problems to uncovering the potential causes and effects is the FMEA for processes and products (see Table 22.1 for an example of a bank performing an FMEA on its automated teller machine [ATM] system). The FMEA is used during design of a new ATM to identify what types of failures the customer could experience while using the ATM. The ATM could then be redesigned to reduce the occurrence and impact of the potential failures. The book’s Glossary provides the following definition for FMEA: “failure mode and effects analysis (FMEA)—A procedure in which each potential failure mode in every subitem of an item is analyzed to determine its effect on other subitems and on the required function of the item. Typically, two types of FMEAs are used: DFMEA (design) and PFMEA (process).”
Ideally, FMEA should start at the earliest part of the product life cycle and continue throughout the life of the product or service. Automotive, aerospace, and medical device manufacturing companies are among the many industries that use FMEA. An FMEA should be conducted as a team-based activity during the design phase of a product to prevent failures, after using quality function deployment (QFD; discussed in more detail in Chapter 17), and then for control when planning new or modified processes. The team evaluates the product and processes for possible problems or failure modes, and then also for the consequences, or the failure mode’s effects. When performing failure analysis, the severity of the failure’s effect (S), the likelihood of the occurrence of the failure (O), and the inspection system’s capability of detecting the failure (D) are assessed.
The product or process being evaluated needs to be well defined. A system, design, process, or service should be broken into subsystems, items, parts, assemblies, or steps of a process so that the function of each can be identified. The team identifies ways that failure can happen for each function; these are the potential failure modes. The potential effects of each failure need to be described by determining what happens when the failure occurs or what the customer (external and internal) will experience. Determine the impact on each output variable or on meeting key requirements. The severity (S) of each potential effect is then rated on a scale from 1 (insignificant) to 10 (catastrophic).
The potential causes of each failure mode are determined using root cause analysis tools such as the cause-and-effect diagram. The probability of the failure occurring (O) is rated using a scale from 1 (extremely unlikely) to 10 (inevitable). Current process controls for each potential cause are listed and evaluated to determine how detectable the failure is. If there are existing controls in place, they might prevent the cause from taking place, reduce the likelihood of it taking place, or possibly detect the failure should the cause happen. Each control method is given a detection (D) rating from 1 (detection is absolutely certain) to 10 (no detection or no controls in place).
Risk and criticality are calculated to help prioritize the order in which each potential failure should be addressed. The risk priority number (RPN) = S × O × D. Criticality is calculated by multiplying severity by occurrence, S × O. Preventive action plans address potential failures that have high RPN ratings. When the priorities are established, recommended actions are identified. These actions can eliminate the cause, reduce the occurrence of a failure mode, or improve detection. They may be design or process changes. Some preventive actions can include the following (the list continues after Table 22.1):
• Systematic process improvement techniques
• Planning
• Project management practices
• Design reviews
• Capability studies
• Technology and usage forecasting
Table 22.1 FMEA table example.

Function: Dispense amount of cash requested by customer

Potential failure mode: Does not dispense cash (S = 8)
Potential effect(s) of failure: Customer very dissatisfied; incorrect entry to demand deposit system; discrepancy in cash balancing
Potential cause(s) of failure, current process controls, and ratings:
• Out of cash (O = 5); internal low-cash alert (D = 5); RPN = 200, CRIT = 40
• Machine jams (O = 3); internal jam alert (D = 10); RPN = 240, CRIT = 24
• Power failure during transaction (O = 2); none (D = 10); RPN = 160, CRIT = 16

Potential failure mode: Dispenses too much cash (S = 6)
Potential effect(s) of failure: Bank loses money; discrepancy in cash balancing
Potential cause(s) of failure, current process controls, and ratings:
• Bills stuck together (O = 2); loading procedure (riffle ends of stack) (D = 7); RPN = 84, CRIT = 12
• Denominations in wrong trays (O = 3); two-person visual verification (D = 4); RPN = 72, CRIT = 18

Potential failure mode: Takes too long to dispense cash (S = 3)
Potential effect(s) of failure: Customer somewhat annoyed
Potential cause(s) of failure, current process controls, and ratings:
• Heavy computer network traffic (O = 7); none (D = 10); RPN = 210, CRIT = 21
• Power interruption during transaction (O = 2); none (D = 10); RPN = 60, CRIT = 6

The remaining worksheet columns (recommended action(s); responsibility and target completion date; action taken; and action results S, O, D, RPN, CRIT) are blank in this example.

Source: “Failure Mode Effects Analysis,” ASQ.org, http://asq.org/learn-about-quality/process-analysis-tools/overview/fmea.html.
• Pilot projects
• Prototype, engineering model, and first-article testing
• Preventive maintenance
• Reengineering
• Kaizen and reengineering
• Vendor evaluation and selection
• Field testing
• Procedure writing and reviews
• Safety reviews
• Personnel reviews
• Job descriptions
• Applicant screening
• Training
• Inspection or testing procedures
• Error-proofing/mistake-proofing (poka-yoke)
• Design improvement
The ASQ glossary defines error-proofing and error detection as follows:
error-proofing—Use of process or design features to prevent the acceptance or further processing of nonconforming products. Also known as “mistake-proofing.”
error detection—A hybrid form of error-proofing. It means a bad part can be made but will be caught immediately, and corrective action will be taken to prevent another bad part from being produced. A device is used to detect and stop the process when a bad part is made. This is used when error-proofing is too expensive or not easily implemented.
As discussed in Chapter 8, error-proofing is also known as poka-yoke, a method of defect prevention that builds safeguards into the system to avoid or immediately find errors. It is commonly used when human error can cause mistakes or defects to occur, especially in processes that rely on the worker’s attention, skill, or experience. If there is no method to ensure that the failure or error won’t happen, then detecting the error using inspection methods should be considered. Error-proofing should be considered for failure modes that have a high RPN or high criticality, and when the consequences of an error are expensive or dangerous.
Preventive action plans should have assigned responsibilities and target completion dates for the actions. Implementation of preventive actions should be tracked, and as actions are completed, results should be noted along with the date on the FMEA form. When a mistake-proofing method is being selected, it should first be tested and then implemented.
Inspection methods can be used during testing and when first implemented to provide feedback. When the preventive actions are completed, verify that all tasks assigned in the action plan were successfully accomplished. The team should reassess the failure modes and note the new scores for severity, probability of occurrence, and likelihood of detection, then recalculate the RPN and criticality. This reassessment helps determine the effectiveness of the actions taken. If the preventive actions are documented in the CAPA system, this follow-up is validated, and the appropriate person will document the validation of the preventive action and close the CAPA.
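The RPN and criticality arithmetic described in this chapter is simple enough to automate. The following is a minimal sketch (not from the handbook); the rows reuse the S, O, and D ratings from Table 22.1, and the helper functions are illustrative only:

```python
# Minimal FMEA scoring sketch; ratings taken from the Table 22.1 example.
fmea_rows = [
    # (potential failure cause, severity S, occurrence O, detection D)
    ("Out of cash",                      8, 5, 5),
    ("Machine jams",                     8, 3, 10),
    ("Power failure during transaction", 8, 2, 10),
    ("Bills stuck together",             6, 2, 7),
    ("Denominations in wrong trays",     6, 3, 4),
]

def rpn(s, o, d):
    """Risk priority number: RPN = S x O x D."""
    return s * o * d

def criticality(s, o):
    """Criticality: S x O."""
    return s * o

# Rank the causes by RPN (highest first) to set preventive action priority.
for cause, s, o, d in sorted(fmea_rows, key=lambda r: rpn(*r[1:]), reverse=True):
    print(f"{cause:35} RPN = {rpn(s, o, d):3}  CRIT = {criticality(s, o):2}")
```

Rerunning the same calculation with the reassessed S, O, and D values after the actions are complete quantifies the improvement, as described above.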
Part VI
Appendices
Appendix A
Body of Knowledge for the ASQ Certified Quality Process Analyst (CQPA)—2020
Appendix B
Areas under Standard Normal Curve
Appendix C
Control Limit Formulas
Appendix D Factors for Control Charts
Appendix A Body of Knowledge for the ASQ Certified Quality Process Analyst (CQPA)—2020
Included in this Body of Knowledge (BoK) are explanations (subtext) and cognitive levels for each topic or subtopic in the test. These details will be used by the Examination Development Committee as guidelines for writing test questions and are designed to help candidates prepare for the exam by identifying specific content within each topic that can be tested. Except where specified, the subtext is not intended to limit the subject or be all-inclusive of what might be covered in an exam but is intended to clarify how topics are related to the role of the Certified Quality Process Analyst (CQPA). The descriptor in parentheses at the end of each subtext entry refers to the highest cognitive level at which the topic will be tested. A complete description of cognitive levels is provided at the end of this document.
I. Quality Concepts and Team Dynamics (20 Questions)
A. Professional Conduct and Ethics
Identify and apply behaviors that are aligned with the ASQ Code of Ethics. (Apply)
B. Quality Concepts 1. Quality
Describe how using quality techniques to improve processes, products, and services can benefit all parts of an organization. Describe what quality means to various stakeholders (e.g., employees, organization, customers, suppliers, community) and how each can benefit from quality. (Understand)
2. Quality planning
Define a quality plan, describe its purpose for the organization as a whole, and know who has responsibility for contributing to its development. (Understand)
3. Quality standards, requirements, and specifications
Define and distinguish between national or international standards, customer requirements, and product or process specifications. (Understand)
4. Quality documentation
Identify and describe common elements of various document control systems, including configuration management. Describe the relationship between quality manuals, procedures, and work instructions. (Understand)
5. Cost of quality (COQ)
Define and describe the four cost of quality categories: prevention, appraisal, internal failure, and external failure. (Understand)
C. Quality Audits 1. Audit types
Define and distinguish between basic audit types, including internal and external audits; product, process, and systems audits; and first-, second-, and third-party audits. (Understand)
2. Audit components
Identify various elements of the audit process, including audit purpose and scope, the standard to audit against, audit planning (preparation) and performance, opening and closing meetings, final audit report, and verification of corrective actions. (Understand)
3. Audit roles and responsibilities
Identify and describe the roles and responsibilities of key audit participants: lead auditor, audit team member, client, and auditee. (Understand)
D. Team Dynamics 1. Types of teams
Distinguish between various types of teams: process improvement teams, workgroups/workcells, self-managed teams, temporary/ad hoc project teams, and cross-functional teams. (Analyze)
2. Team development
Identify various elements in team building, such as inviting team members to share information about themselves during the initial meeting, using ice-breaker activities to enhance team membership, and developing a common vision and agreement on team objectives. (Apply)
3. Team stages
Describe the classic stages of team evolution: forming, storming, norming, performing, and adjourning. (Understand)
4. Team roles and responsibilities
Describe the roles and responsibilities of various team stakeholders: sponsor, champion, facilitator, team leader, and team member. (Understand)
5. Team conflict
Identify common group challenges, including groupthink, members with hidden and/or competing agendas, intentional distractions, and other disruptive behaviors. Describe ways of resolving these issues and keeping team members on task. (Understand)
E. Training and Evaluation
Describe various elements of training, including linking the training to organizational goals, identifying training needs, adapting information to meet adult learning styles, and using coaching and peer training methods. Describe various tools to measure the effectiveness of the training, including post-training feedback, end-of-course tests, and individual and department performance improvement measures. (Understand)
II. Quality Tools and Process Improvement Techniques (26 Questions)
A. Process Improvement Concepts and Approaches
Define and explain elements of Plan-Do-Check-Act (PDCA), kaizen activities, incremental and breakthrough improvement, and DMAIC phases (define, measure, analyze, improve, control). (Apply)
B. Basic Quality Tools
Select, construct, apply, and interpret the seven basic quality tools: (1) cause and effect diagrams, (2) flowcharts (process maps), (3) check sheets, (4) Pareto charts, (5) scatter diagrams, (6) run charts and control charts, and (7) histograms. (Evaluate)
C. Process Improvement Techniques 1. Lean
Identify and apply lean concepts and tools, including set-up reduction (SUR), pull (including just-in-time (JIT) and kanban), 5S, continuous flow manufacturing (CFM), value-added analysis, value stream mapping, theory of constraints (TOC), poka-yoke, and total productive/predictive maintenance (TPM) to reduce waste in areas of cost, inventory, labor, and distance. (Apply)
2. Six Sigma
Identify key Six Sigma concepts, including variation reduction, voice of the customer (VOC), belt levels (yellow, green, black, master black), and their roles and responsibilities. (Understand)
3. Benchmarking
Define and describe this technique and how it can be used to support best practices. (Understand)
4. Risk management
Recognize the types of risk that can occur throughout the organization, such as scheduling, shipping/receiving, financials, operations and supply chain, employee and user safety, and regulatory compliance and changes. Describe risk control and mitigation methods: avoidance, reduction, prevention, segregation, and transfer. (Understand)
5. Business process management (BPM)
Define and describe this continuous process improvement practice, including the business process life cycle phases (Design, Modeling, Execution, Monitoring, and Optimization). (Understand)
D. Management and Planning Tools 1. Quality management tools
Select and apply affinity diagrams, tree diagrams, process decision program charts, matrix diagrams, interrelationship digraphs, prioritization matrices, and activity network diagrams. (Apply)
2. Project management tools
Select and interpret scheduling and monitoring tools, such as Gantt charts, program evaluation and review technique (PERT), and critical path method (CPM). (Apply)
III. Data Analysis (33 Questions)
A. Basic Concepts
1. Basic statistics
Define, calculate, and interpret measures of central tendency (mean, median, mode) and measures of dispersion (standard deviation, range, variance). (Analyze)
2. Basic distributions
Define and explain frequency distributions (normal, binomial, Poisson, and Weibull) and the characteristics of skewed and bimodal distributions. (Understand)
3. Probability concepts
Describe and use probability concepts: independent and mutually exclusive events, combinations, permutations, additive and multiplicative rules, and conditional probability. Perform basic probability calculations. (Apply)
4. Reliability concepts
Define basic reliability concepts: mean time to failure (MTTF), mean time between failures (MTBF), mean time between maintenance (MTBM), and mean time to repair (MTTR). Identify elements of the bathtub curve model and how they are used to predict failure patterns. (Remember)
B. Data Types, Collection, and Integrity 1. Measurement scales
Define and use nominal, ordinal, interval, and ratio measurement scales. (Apply)
2. Data types
Identify, define, and classify data in terms of continuous (variables) and discrete (attributes or counts). Determine when it is appropriate to convert attributes data to variables measures. (Apply)
3. Data collection and analysis
Identify and describe the advantages of collecting and analyzing real-time data. (Understand)
4. Data integrity
Recognize methods that verify data validity and reliability from source through data analysis using various techniques such as auditing trails, vendor qualification, error detection software, training for record management etc., to prevent and detect data integrity issues. (Apply)
5. Data plotting
Identify the advantages and limitations of using this method to analyze data visually. (Understand)
C. Sampling 1. Sampling methods
Define and distinguish between various sampling methods, such as random, sequential, stratified, systemic/fixed sampling, rational subgroup sampling, and attributes and variables sampling. (Understand)
2. Acceptance sampling
Identify and define sampling characteristics, such as lot size, sample size, acceptance number, and operating characteristic (OC) curve. Identify when to use the probability approach to acceptance sampling. (Understand)
D. Measurement System Analysis
Define and distinguish between accuracy, precision, repeatability and reproducibility (gage R&R) studies, bias, and linearity. (Apply)
E. Statistical Process Control (SPC) 1. Fundamental concepts
Distinguish between control limits and specification limits, and between process stability and process capability. (Apply)
2. Rational subgroups
Explain and apply the principles of rational subgroups. (Apply)
3. Control charts for attributes data
Identify, select, and interpret control charts (p, np, c, and u) for data that is measured in terms of discrete attributes or discrete counts. (Analyze)
4. Control charts for variables data
Identify, select, and interpret control charts (X̄-R, X̄-s, and XmR) for data that is measured on a continuous scale. (Analyze)
5. Common and special cause variation
Interpret various control chart patterns (runs, hugging, trends) to determine process control, and use SPC rules to distinguish between common cause and special cause variation. (Analyze)
6. Process capability measures
Describe the conditions that must be met in order to measure capability. Calculate Cp, Cpk, Pp, and Ppk measures and interpret their results. (Analyze)
F. Advanced Statistical Analysis 1. Regression and correlation models
Describe how these models are used for estimation and prediction. (Apply)
2. Hypothesis testing
Calculate confidence intervals using t-tests and the z-statistic and determine whether the result is significant. (Analyze)
3. Design of experiments (DOE)
Define and explain basic DOE terms: response, factors, levels, treatment, interaction effects, randomization, error, and blocking. (Understand)
4. Taguchi concepts and methods
Identify and describe Taguchi concepts: quality loss function, robustness, controllable and uncontrollable factors, and signal to noise ratio. (Understand)
5. Analysis of variance (ANOVA)
Define key elements of ANOVAs and how the results can be used. (Understand)
IV. Customer-Supplier Relations (13 Questions)
A. Internal and External Customers and Suppliers
Define and distinguish between internal and external customers and suppliers. Describe their impact on products, services, and processes, and identify strategies for working with them to make improvements. (Apply)
B. Customer Satisfaction Methods
Describe the different types of tools used to gather customer feedback: surveys, focus groups, complaint forms, and warranty analysis. Explain key elements of quality function deployment (QFD) for understanding and translating the voice of the customer. (Understand)
C. Product and Process Approval Systems
Describe how validation and qualification methods, including beta testing, first-article, in-process, and final inspection are used to approve new or updated products, processes, and services. (Understand)
D. Supplier Management
1. Supplier selection
Describe and outline criteria for selecting, approving, and classifying suppliers, including internal rating programs and external certification standard requirements, including environmental/social responsibility. (Understand)
2. Supplier performance
Describe supplier performance in terms of measures such as quality (e.g., defect rates, functional performance), price, delivery speed, delivery reliability, level of service, and technical support. (Understand)
E. Material Identification, Status, and Traceability
Describe the importance of identifying material by lot, batch, source, and conformance status, including impact for recalls. Describe key requirements for preserving the identity of a product and its origin. Use various methods to segregate nonconforming material and process it according to procedures. (Apply)
V. Corrective and Preventive Action (CAPA) (8 Questions)
A. Corrective Action
Demonstrate key elements of the corrective action process: identify the problem, contain the problem, determine the root causes, propose solutions to eliminate and prevent their recurrence, verify that the solutions are implemented, and confirm their effectiveness. (Apply)
B. Preventive Action
Demonstrate key elements of a preventive action process: track data trends and patterns, use failure mode and effects analysis (FMEA), review product and process monitoring reports, and study the process to identify potential failures, defects, or deficiencies. Improve the process by developing error/mistake-proofing methods and procedural changes, verify that the changes are made, and confirm their effectiveness. (Apply)
LEVELS OF COGNITION BASED ON BLOOM’S TAXONOMY – REVISED (2001)
In addition to content specifics, the subtext for each topic in this BoK also indicates the intended complexity level of the test questions for that topic. These levels are based on “Levels of Cognition” (from Bloom’s Taxonomy Revised, 2001) and are presented below in rank order, from least complex to most complex.
Remember Recall or recognize terms, definitions, facts, ideas, materials, patterns, sequences, methods, principles, etc.
Understand Read and understand descriptions, communications, reports, tables, diagrams, directions, regulations, etc.
Apply Know when and how to use ideas, procedures, methods, formulas, principles, theories, etc.
Analyze Break down information into its constituent parts and recognize their relationship to one another and how they are organized; identify sublevel factors or salient data from a complex scenario.
Evaluate Make judgments about the value of proposed ideas, solutions, etc., by comparing the proposal to specific criteria or standards.
Create Put parts or elements together in such a way as to reveal a pattern or structure not clearly there before; identify which data or information from a complex set is appropriate to examine further or from which supported conclusions can be drawn.
Appendix B Areas under Standard Normal Curve
Table entries give the area under the standard normal curve to the right of z.

z     Area      z     Area      z     Area      z     Area      z     Area      z     Area      z     Area
0.00  0.5000    0.50  0.3085    1.00  0.1587    1.50  0.0668    2.00  0.0228    2.50  0.0062    3.00  1.35E-03
0.01  0.4960    0.51  0.3050    1.01  0.1562    1.51  0.0655    2.01  0.0222    2.51  0.0060    3.01  1.31E-03
0.02  0.4920    0.52  0.3015    1.02  0.1539    1.52  0.0643    2.02  0.0217    2.52  0.0059    3.02  1.26E-03
0.03  0.4880    0.53  0.2981    1.03  0.1515    1.53  0.0630    2.03  0.0212    2.53  0.0057    3.03  1.22E-03
0.04  0.4840    0.54  0.2946    1.04  0.1492    1.54  0.0618    2.04  0.0207    2.54  0.0055    3.04  1.18E-03
0.05  0.4801    0.55  0.2912    1.05  0.1469    1.55  0.0606    2.05  0.0202    2.55  0.0054    3.05  1.14E-03
0.06  0.4761    0.56  0.2877    1.06  0.1446    1.56  0.0594    2.06  0.0197    2.56  0.0052    3.06  1.11E-03
0.07  0.4721    0.57  0.2843    1.07  0.1423    1.57  0.0582    2.07  0.0192    2.57  0.0051    3.07  1.07E-03
0.08  0.4681    0.58  0.2810    1.08  0.1401    1.58  0.0571    2.08  0.0188    2.58  0.0049    3.08  1.04E-03
0.09  0.4641    0.59  0.2776    1.09  0.1379    1.59  0.0559    2.09  0.0183    2.59  0.0048    3.09  1.00E-03
0.10  0.4602    0.60  0.2743    1.10  0.1357    1.60  0.0548    2.10  0.0179    2.60  0.0047    3.10  9.68E-04
0.11  0.4562    0.61  0.2709    1.11  0.1335    1.61  0.0537    2.11  0.0174    2.61  0.0045    3.11  9.36E-04
0.12  0.4522    0.62  0.2676    1.12  0.1314    1.62  0.0526    2.12  0.0170    2.62  0.0044    3.12  9.04E-04
0.13  0.4483    0.63  0.2643    1.13  0.1292    1.63  0.0516    2.13  0.0166    2.63  0.0043    3.13  8.74E-04
0.14  0.4443    0.64  0.2611    1.14  0.1271    1.64  0.0505    2.14  0.0162    2.64  0.0041    3.14  8.45E-04
0.15  0.4404    0.65  0.2578    1.15  0.1251    1.65  0.0495    2.15  0.0158    2.65  0.0040    3.15  8.16E-04
0.16  0.4364    0.66  0.2546    1.16  0.1230    1.66  0.0485    2.16  0.0154    2.66  0.0039    3.16  7.89E-04
0.17  0.4325    0.67  0.2514    1.17  0.1210    1.67  0.0475    2.17  0.0150    2.67  0.0038    3.17  7.62E-04
0.18  0.4286    0.68  0.2483    1.18  0.1190    1.68  0.0465    2.18  0.0146    2.68  0.0037    3.18  7.36E-04
0.19  0.4247    0.69  0.2451    1.19  0.1170    1.69  0.0455    2.19  0.0143    2.69  0.0036    3.19  7.11E-04
0.20  0.4207    0.70  0.2420    1.20  0.1151    1.70  0.0446    2.20  0.0139    2.70  0.0035    3.20  6.87E-04
0.21  0.4168    0.71  0.2389    1.21  0.1131    1.71  0.0436    2.21  0.0136    2.71  0.0034    3.21  6.64E-04
0.22  0.4129    0.72  0.2358    1.22  0.1112    1.72  0.0427    2.22  0.0132    2.72  0.0033    3.22  6.41E-04
0.23  0.4090    0.73  0.2327    1.23  0.1093    1.73  0.0418    2.23  0.0129    2.73  0.0032    3.23  6.19E-04
0.24  0.4052    0.74  0.2296    1.24  0.1075    1.74  0.0409    2.24  0.0125    2.74  0.0031    3.24  5.98E-04
0.25  0.4013    0.75  0.2266    1.25  0.1056    1.75  0.0401    2.25  0.0122    2.75  0.0030    3.25  5.77E-04
0.26  0.3974    0.76  0.2236    1.26  0.1038    1.76  0.0392    2.26  0.0119    2.76  0.0029    3.26  5.57E-04
0.27  0.3936    0.77  0.2206    1.27  0.1020    1.77  0.0384    2.27  0.0116    2.77  0.0028    3.27  5.38E-04
0.28  0.3897    0.78  0.2177    1.28  0.1003    1.78  0.0375    2.28  0.0113    2.78  0.0027    3.28  5.19E-04
0.29  0.3859    0.79  0.2148    1.29  0.0985    1.79  0.0367    2.29  0.0110    2.79  0.0026    3.29  5.01E-04
0.30  0.3821    0.80  0.2119    1.30  0.0968    1.80  0.0359    2.30  0.0107    2.80  0.0026    3.30  4.83E-04
0.31  0.3783    0.81  0.2090    1.31  0.0951    1.81  0.0351    2.31  0.0104    2.81  0.0025    3.31  4.67E-04
0.32  0.3745    0.82  0.2061    1.32  0.0934    1.82  0.0344    2.32  0.0102    2.82  0.0024    3.32  4.50E-04
0.33  0.3707    0.83  0.2033    1.33  0.0918    1.83  0.0336    2.33  0.0099    2.83  0.0023    3.33  4.34E-04
0.34  0.3669    0.84  0.2005    1.34  0.0901    1.84  0.0329    2.34  0.0096    2.84  0.0023    3.34  4.19E-04
0.35  0.3632    0.85  0.1977    1.35  0.0885    1.85  0.0322    2.35  0.0094    2.85  0.0022    3.35  4.04E-04
0.36  0.3594    0.86  0.1949    1.36  0.0869    1.86  0.0314    2.36  0.0091    2.86  0.0021    3.36  3.90E-04
0.37  0.3557    0.87  0.1922    1.37  0.0853    1.87  0.0307    2.37  0.0089    2.87  0.0021    3.37  3.76E-04
0.38  0.3520    0.88  0.1894    1.38  0.0838    1.88  0.0301    2.38  0.0087    2.88  0.0020    3.38  3.62E-04
0.39  0.3483    0.89  0.1867    1.39  0.0823    1.89  0.0294    2.39  0.0084    2.89  0.0019    3.39  3.50E-04
0.40  0.3446    0.90  0.1841    1.40  0.0808    1.90  0.0287    2.40  0.0082    2.90  0.0019    3.40  3.37E-04
0.41  0.3409    0.91  0.1814    1.41  0.0793    1.91  0.0281    2.41  0.0080    2.91  0.0018    3.41  3.25E-04
0.42  0.3372    0.92  0.1788    1.42  0.0778    1.92  0.0274    2.42  0.0078    2.92  0.0018    3.42  3.13E-04
0.43  0.3336    0.93  0.1762    1.43  0.0764    1.93  0.0268    2.43  0.0075    2.93  0.0017    3.43  3.02E-04
0.44  0.3300    0.94  0.1736    1.44  0.0749    1.94  0.0262    2.44  0.0073    2.94  0.0016    3.44  2.91E-04
0.45  0.3264    0.95  0.1711    1.45  0.0735    1.95  0.0256    2.45  0.0071    2.95  0.0016    3.45  2.80E-04
0.46  0.3228    0.96  0.1685    1.46  0.0721    1.96  0.0250    2.46  0.0069    2.96  0.0015    3.46  2.70E-04
0.47  0.3192    0.97  0.1660    1.47  0.0708    1.97  0.0244    2.47  0.0068    2.97  0.0015    3.47  2.60E-04
0.48  0.3156    0.98  0.1635    1.48  0.0694    1.98  0.0239    2.48  0.0066    2.98  0.0014    3.48  2.51E-04
0.49  0.3121    0.99  0.1611    1.49  0.0681    1.99  0.0233    2.49  0.0064    2.99  0.0014    3.49  2.42E-04
Appendix C Control Limit Formulas

VARIABLES CHARTS

x̄ and R chart:
Averages chart: x̿ ± A2R̄
Range chart: UCL = D4R̄, LCL = D3R̄

x̄ and s chart:
Averages chart: x̿ ± A3s̄
Std. dev. chart: UCL = B4s̄, LCL = B3s̄

Individuals and moving range chart (two-value moving window):
Individuals chart: x̄ ± 2.66R̄
Moving range chart: UCL = 3.267R̄

Moving average and moving range chart (two-value moving window):
Moving average chart: x̄ ± 1.88R̄
Moving range chart: UCL = 3.267R̄

ATTRIBUTES CHARTS

p chart: p̄ ± 3√(p̄(1 − p̄)/n)
np chart: np̄ ± 3√(np̄(1 − p̄))
c chart: c̄ ± 3√c̄
u chart: ū ± 3√(ū/n)
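As a worked illustration of the x̄ and R formulas above, the sketch below computes trial control limits from a handful of subgroups. The measurement values are invented for the example; A2, D3, and D4 are the Appendix D factors for subgroups of five.

```python
# Hypothetical subgroups of n = 5 measurements each.
subgroups = [
    [4.98, 5.02, 5.01, 4.99, 5.00],
    [5.03, 4.97, 5.00, 5.02, 4.98],
    [5.01, 5.00, 4.99, 5.04, 4.96],
    [4.99, 5.01, 5.02, 4.98, 5.00],
]
A2, D3, D4 = 0.577, 0, 2.114  # Appendix D factors for n = 5

xbars = [sum(g) / len(g) for g in subgroups]   # subgroup averages
ranges = [max(g) - min(g) for g in subgroups]  # subgroup ranges
xbarbar = sum(xbars) / len(xbars)              # grand average (x double-bar)
rbar = sum(ranges) / len(ranges)               # average range (R-bar)

print(f"Averages chart: {xbarbar - A2 * rbar:.4f} to {xbarbar + A2 * rbar:.4f}")
print(f"Range chart:    LCL = {D3 * rbar:.4f}, UCL = {D4 * rbar:.4f}")
```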
Appendix D Factors for Control Charts
Chart for averages (control limit factors A, A2, A3) and chart for standard deviations (central line factors c4, 1/c4; control limit factors B3–B6):

n     A      A2     A3     c4      1/c4    B3     B4     B5     B6
2     2.121  1.881  2.659  0.7979  1.2533  0      3.267  0      2.606
3     1.732  1.023  1.954  0.8862  1.1284  0      2.568  0      2.276
4     1.500  0.729  1.628  0.9213  1.0854  0      2.266  0      2.088
5     1.342  0.577  1.427  0.9400  1.0638  0      2.089  0      1.964
6     1.225  0.483  1.287  0.9515  1.0509  0.030  1.970  0.029  1.874
7     1.134  0.419  1.182  0.9594  1.0424  0.118  1.882  0.113  1.806
8     1.061  0.373  1.099  0.9650  1.0362  0.185  1.815  0.179  1.751
9     1.000  0.337  1.032  0.9693  1.0317  0.239  1.761  0.232  1.707
10    0.949  0.308  0.975  0.9727  1.0281  0.284  1.716  0.276  1.669
11    0.905  0.285  0.927  0.9754  1.0253  0.321  1.679  0.313  1.637
12    0.866  0.266  0.886  0.9776  1.0230  0.354  1.646  0.346  1.610
13    0.832  0.249  0.850  0.9794  1.0210  0.382  1.618  0.374  1.585
14    0.802  0.235  0.817  0.9810  1.0194  0.406  1.594  0.399  1.563
15    0.775  0.223  0.789  0.9823  1.0180  0.428  1.572  0.421  1.544
16    0.750  0.212  0.763  0.9835  1.0168  0.448  1.552  0.440  1.526
17    0.728  0.203  0.739  0.9845  1.0157  0.466  1.534  0.458  1.511
18    0.707  0.194  0.718  0.9854  1.0148  0.482  1.518  0.475  1.496
19    0.688  0.187  0.698  0.9862  1.0140  0.497  1.503  0.490  1.483
20    0.671  0.180  0.680  0.9869  1.0132  0.510  1.490  0.504  1.470
21    0.655  0.173  0.663  0.9876  1.0126  0.523  1.477  0.516  1.459
22    0.640  0.167  0.647  0.9882  1.0120  0.534  1.466  0.528  1.448
23    0.626  0.162  0.633  0.9887  1.0114  0.545  1.455  0.539  1.438
24    0.612  0.157  0.619  0.9892  1.0109  0.555  1.445  0.549  1.429
25    0.600  0.153  0.606  0.9896  1.0105  0.565  1.435  0.559  1.420

Chart for ranges (central line factors d2, 1/d2, d3; control limit factors D1–D4):

n     d2     1/d2    d3     D1     D2     D3     D4
2     1.128  0.8865  0.853  0      3.686  0      3.267
3     1.693  0.5907  0.888  0      4.358  0      2.575
4     2.059  0.4857  0.880  0      4.698  0      2.282
5     2.326  0.4299  0.864  0      4.918  0      2.114
6     2.534  0.3946  0.848  0      5.079  0      2.004
7     2.704  0.3698  0.833  0.205  5.204  0.076  1.924
8     2.847  0.3512  0.820  0.388  5.307  0.136  1.864
9     2.970  0.3367  0.808  0.547  5.394  0.184  1.816
10    3.078  0.3249  0.797  0.686  5.469  0.223  1.777
11    3.173  0.3152  0.787  0.811  5.535  0.256  1.744
12    3.258  0.3069  0.778  0.923  5.594  0.283  1.717
13    3.336  0.2998  0.770  1.025  5.647  0.307  1.693
14    3.407  0.2935  0.763  1.118  5.696  0.328  1.672
15    3.472  0.2880  0.756  1.203  5.741  0.347  1.653
16    3.532  0.2831  0.750  1.282  5.782  0.363  1.637
17    3.588  0.2787  0.744  1.356  5.820  0.378  1.622
18    3.640  0.2747  0.739  1.424  5.856  0.391  1.608
19    3.689  0.2711  0.733  1.487  5.891  0.403  1.597
20    3.735  0.2677  0.729  1.549  5.921  0.415  1.585
21    3.778  0.2647  0.724  1.605  5.951  0.425  1.575
22    3.819  0.2618  0.720  1.659  5.979  0.434  1.566
23    3.858  0.2592  0.716  1.710  6.006  0.443  1.557
24    3.895  0.2567  0.712  1.759  6.031  0.451  1.548
25    3.931  0.2544  0.708  1.806  6.056  0.459  1.541
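Beyond the Appendix C limit formulas, a common use of these constants is an unbiased estimate of the process standard deviation: σ̂ = R̄/d2 from ranges, or σ̂ = s̄/c4 from subgroup standard deviations. A small sketch with assumed (hypothetical) summary statistics:

```python
# Hypothetical summary statistics from an in-control process, n = 5.
rbar = 0.042    # average subgroup range (R-bar)
sbar = 0.0165   # average subgroup standard deviation (s-bar)

d2, c4 = 2.326, 0.9400  # factors for n = 5, from the tables above

print(f"sigma estimated from ranges:   {rbar / d2:.5f}")
print(f"sigma estimated from std devs: {sbar / c4:.5f}")
```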
Endnotes
Chapter 2 1. W. E. Deming and J. N. Orsini, The Essential Deming [electronic resource]: Leadership Principles from the Father of Total Quality Management (McGraw-Hill, 2013). 2. Michael E. Porter, Competitive Strategy: Techniques for Analyzing Industries and Competitors (New York: The Free Press, 1980). 3. S. Furterer, Introduction to Systems Engineering: Holistic Systems Engineering Life Cycle Architecture, Modeling and Design with Real-World Service Applications, Manuscript and Course Materials (Kennesaw State University, 2014). 4. Furterer, Introduction to Systems Engineering. 5. G. L. Duffy and S. L. Furterer, The ASQ Certified Quality Improvement Associate Handbook, 4th ed. (Milwaukee, WI: ASQ Quality Press, 2020). 6. J. Evans and W. Lindsay, Managing for Quality and Performance Excellence, 11th ed. (Boston, MA: Cengage, 2020). 7. K. Roberts, “PIMS: Nine Basic Findings on Business Strategy,” Malik PIMS Letter, 2011. 8. J. A. DeFeo, Juran’s Quality Handbook, 7th ed. (New York: McGraw-Hill Education, 2017). 9. J. Evans and W. Lindsay, Managing for Quality and Performance Excellence, 11th ed. (Boston, MA: Cengage, 2020). 10. Duffy and Furterer, The ASQ Certified Quality Improvement Associate. 11. Duffy and Furterer, The ASQ Certified Quality Improvement Associate. 12. Evans and Lindsay, Managing for Quality and Performance Excellence. 13. Duffy and Furterer, The ASQ Certified Quality Improvement Associate. 14. A. V. Feigenbaum, Total Quality Control, 3rd ed., 40th anniversary ed. (New York: McGraw-Hill, 1991). 15. K. Ishikawa, What Is Total Quality Control? The Japanese Way (Englewood Cliffs, NJ: Prentice Hall, 1985). 16. DeFeo, Juran’s Quality Handbook. 17. C. M. Borror, ed. The Certified Quality Engineer Handbook, 3rd ed. (Milwaukee, WI: ASQ Quality Press, 2009). 18. ASQ/ANSI/ISO 9001: 2015 Quality Management Systems—Requirements (Milwaukee, WI: ASQ Quality Press, 2015). 19. J. M. Juran, Juran on Quality by Design (New York: The Free Press, 1992). 20. Institute of Electrical and Electronics Engineers, “ANSI/IEEE Standard 828-1990,” in IEEE Standard for Software Configuration Management Plans (New York: IEEE, 1990). 21. ASQ/ANSI/ISO 9001:2015 Quality Management Systems. 22. Ibid.
23. Ibid. 24. Ibid. 25. “ANSI/IEEE Standard 828-1990.” 26. J. Campanella, ed., Principles of Quality Costs: Principles, Implementation, and Use, 3rd ed. (Milwaukee, WI: ASQ Quality Costs Committee, 1999).
Chapter 3 1. L.B. Coleman Sr., ed. The Certified Quality Auditor Handbook, 5th ed. (Milwaukee, WI: ASQ Quality Press, 2020).
Chapter 4 1. S. L. Furterer and D. C. Wood, The Certified Manager of Quality/Organizational Excellence Handbook, 5th ed. (Milwaukee, WI: ASQ Quality Press, 2021), 56–62. 2. S. L. Furterer and D. C. Wood, The Certified Manager of Quality/Organizational Excellence Handbook. 3. S. L. Furterer and D. C. Wood, The Certified Manager of Quality/Organizational Excellence Handbook. 4. S. L. Furterer and D. C. Wood, The Certified Manager of Quality/Organizational Excellence Handbook. 5. S. L. Furterer and D. C. Wood, The Certified Manager of Quality/Organizational Excellence Handbook. 6. Institute of Electrical and Electronics Engineers, “ANSI/IEEE Standard 828-1990,” in IEEE Standard for Software Configuration Management Plans (New York: IEEE, 1990). 7. G. Duffy and S. L. Furterer, The Certified Quality Improvement Analyst, 3rd ed. (Milwaukee, WI: ASQ Quality Press, 2020). 8. S. L. Furterer and D. C. Wood, The Certified Manager of Quality/Organizational Excellence Handbook. 9. S. L. Furterer and D. C. Wood, The Certified Manager of Quality/Organizational Excellence Handbook. 10. S. L. Furterer and D. C. Wood, The Certified Manager of Quality/Organizational Excellence Handbook. 11. S. L. Furterer and D. C. Wood, The Certified Manager of Quality/Organizational Excellence Handbook. 12. E. Rice and B. Smedick, “Tools for Creating and Managing Student Teams,” paper presented at the 2018 ASEE Conference (Salt Lake City, UT, 2018). 13. Rice and Smedick, “Tools for Creating and Managing Student Teams.” 14. American Society for Quality, ASQ’s Foundations in Quality Self-Directed Learning Series, Certified Quality Manager, Module 1 (Milwaukee, WI: ASQ Quality Press, 2001). 15. American Society for Quality, ASQ’s Foundations in Quality Self-Directed Learning Series. 16. B. W. Tuckman, “Developmental Sequence in Small Groups,” Psychological Bulletin 63, no. 6 (1965): 384–399. https://doi.org/10.1037/h0022100. 17. Ibid. 18. Ibid. 19. Ibid. 20. Ibid. 21. S. L. Furterer and D. C. Wood, The Certified Manager of Quality/Organizational Excellence Handbook.
22. American Society for Quality, The Quality Improvement Handbook (Milwaukee, WI: ASQ Quality Press, 2006), chap. 4. 23. P. Lencioni, The Five Dysfunctions of a Team: A Leadership Fable (New York: Jossey-Bass, 2002). 24. P. R. Scholtes, The Team Handbook (Madison, WI: Joiner Associates Consulting Group, 1988), 6–37. 25. Duffy and Furterer, The Certified Quality Improvement Analyst, 19.
Chapter 5 1. J. M. Juran and F. M. Gryna, eds., Juran’s Quality Control Handbook, 4th ed. (New York: McGraw-Hill, 1988), 11.4–11.5. 2. J. Sheve, K. Allen, and V. Nieter, Understanding Learning Styles: Making a Difference for Diverse Learners (Huntington Beach, CA: Shell Education, 2010). 3. Westcott, The Certified Manager of Quality/Organizational Excellence Handbook. 4. S. L. Furterer and D. Wood, eds., The Certified Manager of Quality/Organizational Excellence Handbook (Milwaukee, WI: ASQ Quality Press, 2021), 568. 5. D. L. Kirkpatrick, “Techniques for Evaluating Training Programs,” a four-part series beginning in the November 1959 issue of the Training Director’s Journal. 6. D. G. Robinson and J. C. Robinson, Training for Impact: How to Link Training to Business Needs and Measure the Results (San Francisco: Jossey-Bass, 1990), 168. 7. R. T. Westcott, “A Quality System Needs Assessment,” in In Action: Conducting Needs Assessment (17 Case Studies), edited by J. J. Phillips and E. F. Holton III (Alexandria, VA: American Society for Training and Development, 1995), 235. 8. J. M. Juran and F. M. Gryna, eds., Juran’s Quality Control Handbook, 11.9–11.10. 9. B. Deros et al., “Evaluation of Training Effectiveness on Advanced Quality Management Practices,” Procedia—Social and Behavioral Sciences 56, no. 8 (October 2012): 67–73, 70–72. https://doi.org/10.1016/j.sbspro.2012.09.633
Chapter 6 1. M. Imai, Kaizen: The Key to Japan’s Competitive Success (New York: McGraw-Hill Higher Education, 1986). 2. P. Palmes, “MP3 Audiocasts: An Introduction to the PDCA Cycle and Its Application,” http://asq.org/learn-about-quality/project-planning-tools/overview/audiocasts.html. 3. S. Furterer, Lean Six Sigma Case Studies in the Healthcare Enterprise (London: Springer, 2014). 4. V. E. Sower and F. K. Fair, “There Is More to Quality Than Continuous Improvement: Listening to Plato,” Quality Management Journal 12, no. 1 (January 2005): 8–20. 5. S. Furterer and D. Wood, ASQ Certified Manager of Quality/Organizational Excellence Handbook (Milwaukee, WI: ASQ Quality Press, 2021).
Chapter 7 1. S. Furterer, Introduction to Systems Engineering: Holistic Systems Engineering Life Cycle Architecture, Modeling and Design with Real-World Service Applications (Boca Raton, FL: CRC Press, 2022). 2. Furterer, Introduction to Systems Engineering. 3. J. Evans and W. Lindsay, Managing for Quality and Performance Excellence, 10th ed. (Cengage Learning, 2011).
4. G. Duffy and S. L. Furterer, The Certified Quality Improvement Analyst, 3rd ed. (Milwaukee, WI: ASQ Quality Press, 2020), 185.
Chapter 8 1. R. T. Westcott, The Certified Manager of Quality/Organizational Excellence Handbook, 3rd ed. (Milwaukee, WI: ASQ Quality Press, 2006). 2. G. Alukal, “Create a Lean, Mean Machine,” ASQ Quality Progress 36, no. 4 (April 2003): 29–35. 3. The seven basic steps are an excerpt from “Single-Minute Exchange of Die,” Wikipedia, http://en.wikipedia.org/wiki/Single-Minute_Exchange_of_Die. 4. S. Shingo, A Revolution in Manufacturing: The SMED System (Portland, OR: Productivity Press, 1985), 26–52. 5. T. J. Shuker, “The Leap to Lean,” in 54th Annual Quality Congress Proceedings, Indianapolis, 2000, 105–12. 6. Shuker, “The Leap to Lean.” 7. Westcott, The Certified Manager of Quality/Organizational Excellence Handbook, Appendix B. 8. Westcott, The Certified Manager of Quality/Organizational Excellence Handbook, Appendix B. 9. S. Furterer, Introduction to Systems Engineering: Holistic Systems Engineering Life Cycle Architecture, Modeling and Design with Real-World Service Applications (Boca Raton, FL: CRC Press, 2022). 10. P. Scholtes, B. Joiner, and B. Streibel, The Team Handbook, 3rd ed. (Madison, WI: Oriel, 2003). 11. Westcott, The Certified Manager of Quality/Organizational Excellence Handbook. 12. Westcott, The Certified Manager of Quality/Organizational Excellence Handbook, 390. 13. E. M. Goldratt, The Goal (Great Barrington, MA: North River Press, 1986). 14. E. M. Goldratt, What Is This Thing Called Theory of Constraints and How It Should Be Implemented (Great Barrington, MA: North River Press, 1990). 15. Westcott, The Certified Manager of Quality/Organizational Excellence Handbook, Appendix B. 16. P. Bayers, “Apply Poka-Yoke Devices Now to Eliminate Defects,” in 51st Annual Quality Congress Proceedings, May 1997, 451–55. 17. A. Douglas, “Improving Manufacturing Performance,” in 56th Annual Quality Congress Proceedings, Denver, CO, 2002, 725–32. 18. G. Alukal, “Create a Lean, Mean Machine,” ASQ Quality Progress 36, no. 4 (April 2003): 29–35; D. W. Benbow and T. M. Kubiak, The Certified Six Sigma Black Belt Handbook. 20. S. L. Furterer, UD Lean-Six Sigma Course Materials (Dayton, OH: 2021). 21. D. W. Benbow and T. M. Kubiak, The Certified Six Sigma Black Belt Handbook. 22. P. Scholtes, “Communities as Systems,” ASQ Quality Progress, July 1997, 49–53. 23. Furterer, UD Lean-Six Sigma Course Materials. 24. S. L. Furterer, Lean-Six Sigma Case Studies in the Healthcare Enterprise (Springer, 2013). 25. Benbow and Kubiak, The Certified Six Sigma Black Belt Handbook. 26. Furterer, UD Lean-Six Sigma Course Materials. 27. D. L. Goetsch and S. B. Davis, Quality Management: Introduction to Total Quality Management for Production, Processing, and Services, 3rd ed. (Upper Saddle River, NJ: Prentice Hall, 2000). 28. ISO/IEC/IEEE, Systems and Software Engineering - System and Software Engineering Vocabulary (SEVocab) (Geneva, Switzerland: International Organization for
Standardization (ISO)/International Electrotechnical Commission (IEC)/Institute of Electrical and Electronics Engineers (IEEE), 2010), 24765. 29. DAU, Risk Management Guide for DoD Acquisition, 5th ed. (Ft. Belvoir, VA: Defense Acquisition University (DAU), 2003). 30. Richard Sugarman, “University of Dayton ENM 505 Course Materials,” University of Dayton, 2015. 31. SEBoK Editorial Board, The Guide to the Systems Engineering Body of Knowledge (SEBoK) (Hoboken, NJ: The Trustees of the Stevens Institute of Technology, 2020). 32. Sugarman, University of Dayton ENM 505 Course Materials. 33. Sugarman, University of Dayton ENM 505 Course Materials. 34. Furterer, UD Lean-Six Sigma Course Materials. 35. ASQ/ANSI/ISO 9001:2015 Quality management systems—Requirements, 22. 36. S. L. Furterer, “Business Process Management Course Materials,” University of Georgia, 2014. 37. International Association of Business Process Management Professionals, BPM CBOK, Version 3.0, 1st ed., www.abpmp.org. 38. Ibid. 39. S. Furterer, “Building a Business Process Management Repository,” Journal of Management and Engineering Integration 7, no. 2 (February 2015): 37–46. 40. Furterer, “Business Process Management Course Materials.” 41. Ibid. 42. Ibid.
Chapter 9 1. Adapted from N. R. Tague, The Quality Toolbox, 2nd ed. (Milwaukee, WI: ASQ Quality Press, 2004). 2. Ibid., 444–46. 3. Ibid., 100–107. 4. Project Management Institute, Guide to the Project Management Body of Knowledge, 5th ed. (Newtown Square, PA: PMI Publications, 2013). 5. Ibid. 6. Ibid.
Chapter 10 1. “Weibull distribution,” Wikipedia, http://en.wikipedia.org/wiki/Weibull_distribution. 2. N. R. Tague, The Quality Toolbox, 2nd ed. (Milwaukee, WI: ASQ Quality Press, 2004). 3. MIL-STD-721, Definitions of Terms for Reliability and Maintainability, Revision C (Washington, DC: Department of Defense, June 12, 1981). 4. For additional information, see W. G. Ireson, C. F. Coombs, and R. Y. Moss, Handbook of Reliability Engineering and Management, 2nd ed. (New York: McGraw-Hill, 1996), 25.32–25.33.
Chapter 11 1. J. Mitchell, “Measurement Scales and Statistics: A Clash of Paradigms,” Psychological Bulletin 3 (1986): 398–407.
2. P. F. Velleman and L. Wilkinson, “Nominal, Ordinal, Interval, and Ratio Typologies Are Misleading,” The American Statistician 47, no. 1 (1993): 65–72. http://www.spss.com/research/wilkinson/Publications/Stevens.pdf. 3. E. R. Ott, E. G. Schilling, and D. V. Neubauer, Process Quality Control: Troubleshooting and Interpretation of Data, 4th ed. (Milwaukee, WI: ASQ Quality Press, 2005). 4. S. Furterer, Introduction to Systems Engineering: Holistic Systems Engineering Life Cycle Architecture, Modeling and Design with Real-World Service Applications (Boca Raton, FL: CRC Press, 2022). 5. J. Evans and W. Lindsay, Managing for Quality and Performance Excellence, 11th ed. (Boston, MA: Cengage, 2020). 6. Data Integrity and Compliance with Drug CGMP Questions and Answers Guidance for Industry, U.S. Department of Health and Human Services Food and Drug Administration Center for Drug Evaluation and Research (CDER) Center for Biologics Evaluation and Research (CBER) Center for Veterinary Medicine (CVM) December 2018 Pharmaceutical Quality/Manufacturing Standards (CGMP). 7. R. W. Hoerl and R. D. Snee, “Show Me the Pedigree: Part of Evaluating the Quality of Data Includes Analyzing Its Origin and History,” Quality Progress 52, no. 1 (January 2019): 16–23. 8. Hoerl and Snee, “Show Me the Pedigree.” 9. S. L. Furterer, “Lean-Six Sigma for Engineers Course Materials,” University of Dayton, 2018. 10. J. Seaman and I. E. Allen, “How Good Are My Data?” Quality Progress 51, no. 7 (July 2018): 49–52. 11. I. Taleb, M. Adel Seerhani, and R. Dssouli, “Big Data Quality: A Survey,” 2018 IEEE International Congress on Big Data, IEEE Computer Society, 2018. 12. Taleb, Adel Seerhani, and Dssouli, “Big Data Quality.” 13. P. C. Price, R. Jhangiani, and I-A C. Chiang, “Reliability and Validity of Measurement,” in Research Methods in Psychology, 2nd ed., https://opentextbc.ca/researchmethods/chapter/reliability-and-validity-of-measurement/. 14. Taleb, Seerhani, and Dssouli, “Big Data Quality.” 15. Seaman and Allen, “How Good Are My Data?” 16. Ott, Schilling, and Neubauer, Process Quality Control.
Chapter 12 1. “Sampling (statistics),” Wikipedia, http://en.wikipedia.org/wiki/Sampling_%28statistics%29. 2. A. Mitra, Fundamentals of Quality Control and Improvement (Upper Saddle River, NJ: Pearson Education, 2000). 3. W. E. Deming, Out of the Crisis (Cambridge, MA: MIT Center for Advanced Engineering, 1986). 4. ASQC Statistics Division, Glossary and Tables for Statistical Quality Control, 2nd ed. (Milwaukee, WI: ASQC Quality Press, 1983). 5. ASQC Statistics Division, Glossary and Tables for Statistical Quality Control. 6. Deming, Out of the Crisis.
Chapter 13 1. J. A. Simpson, “Foundations of Metrology,” Journal of Research of the National Bureau of Standards 86, no. 3 (May/June 1981): 36–42. 2. W. J. Darmody, “Elements of a Generalized Measuring System,” in Handbook of Industrial Metrology (Englewood Cliffs, NJ: Prentice Hall, 1967).
3. Darmody, “Elements of a Generalized Measuring System.” 4. A. F. Rashed and A. M. Hamouda, Technology for Real Quality (Alexandria, Egypt: Egyptian University House, 1974). 5. ASQC Statistics Division, Glossary and Tables for Statistical Quality Control, 2nd ed. (Milwaukee, WI: ASQC Quality Press, 1983). 6. American Society for Testing and Materials, ASTM Standards on Precision and Accuracy for Various Applications (Philadelphia, PA: ASTM, 1977). 7. A. McNish, “The Nature of Measurement,” in Handbook of Industrial Metrology (Englewood Cliffs, NJ: Prentice Hall, 1967): 2–12. 8. D. W. Benbow, A. K. Elshennawy, and H. F. Walker, The Certified Quality Technician Handbook (Milwaukee, WI: ASQ Quality Press, 2003), 70. 9. Benbow, Elshennawy, and Walker, The Certified Quality Technician Handbook, 121–23. 10. E. R. Ott, E. G. Schilling, and D. V. Neubauer, Process Quality Control: Troubleshooting and Interpretation of Data, 4th ed. (Milwaukee, WI: ASQ Quality Press, 2005), 526. 11. S. L. Furterer, “Lean-Six Sigma Course Materials,” University of Dayton, 2018. 12. Furterer, Lean-Six Sigma Course Materials.
Chapter 14 1. S. L. Furterer, “Lean-Six Sigma Course Materials,” University of Dayton, 2018. 2. W. E. Deming, Quality, Productivity, and Competitive Position (Cambridge, MA: Massachusetts Institute of Technology, 1982). 3. D. C. Montgomery, Introduction to Statistical Quality Control, 3rd ed. (Hoboken, New Jersey: Wiley, 1996). 4. D. J. Wheeler, Normality and the Process Behavior Chart (Knoxville, TN: SPC Press, 2000). 5. “Process Capability Index,” Wikipedia, http://en.wikipedia.org/wiki/Cp_index. 6. “Process Performance Index,” Wikipedia, http://en.wikipedia.org/wiki/Ppk_index. 7. “Process Capability Index,” http://en.wikipedia.org/wiki/Cpk_index.
Chapter 15 1. “Regression analysis,” Wikipedia, http://en.wikipedia.org/wiki/Regression_analysis. 2. J. Cohen, P. Cohen, S. G. West, and L. S. Aiken, Applied Multiple Regression/Correlation Analysis for the Behavioral Sciences, 2nd ed. (Hillsdale, NJ: Lawrence Erlbaum Associates, 2003). 3. “Correlation and Dependence,” Wikipedia, http://en.wikipedia.org/wiki/Correlated. 4. J. H. Zar, Biostatistical Analysis (Englewood Cliffs, NJ: Prentice Hall International, 1984), 43–45. 5. R. L. Ott, An Introduction to Statistical Methods and Data Analysis, 4th ed. (Belmont, CA: Duxbury Press, 1993), 86. 6. “Student’s t-Distribution,” Wikipedia, http://en.wikipedia.org/wiki/Student%27s_t-distribution. 7. “Statistical Hypothesis Testing,” Wikipedia, http://en.wikipedia.org/wiki/Hypothesis_testing. 8. “Null Hypothesis,” Wikipedia, http://en.wikipedia.org/wiki/Null_hypothesis. 9. “Statistical Significance,” Wikipedia, http://en.wikipedia.org/wiki/Statistical_significance. 10. “p-Value,” Wikipedia, http://en.wikipedia.org/wiki/P-value. 11. D. Montgomery, Design and Analysis of Experiments, 8th ed. (Hoboken, New Jersey: Wiley, 2008).
12. G. Box, W. G. Hunter, and J. S. Hunter, Statistics for Experimenters (Hoboken, New Jersey: Wiley, 1978). 13. D. Montgomery, Design and Analysis of Experiments. 14. Ibid. 15. “Design of Experiments,” Wikipedia, http://en.wikipedia.org/wiki/Design_of_experiments. 16. G. E. P. Box and R. D. Meyer, “An Analysis for Unreplicated Fractional Factorials,” Technometrics 28, no. 1 (February 1986): 11–18. 17. G. Taguchi, Introduction to Quality Engineering (White Plains, NY: UNIPUB, Kraus International Publications, Asian Productivity Organization, 1986). 18. H. R. Lindman, Analysis of Variance in Complex Experimental Designs (San Francisco, CA: W. H. Freeman, 1974). 19. S. L. Furterer, “Lean-Six Sigma Course Materials,” University of Dayton, 2018.
Chapter 16 1. J. M. Juran, Juran’s Quality Handbook, 5th ed. (New York: McGraw-Hill, 1999), 2.3. 2. R. T. Westcott, The Certified Manager of Quality/Organizational Excellence Handbook, 3rd ed. (Milwaukee, WI: ASQ Quality Press, 2006), Appendix B. 3. J. E. Bauer, G. L. Duffy, and R. T. Westcott, The Quality Improvement Handbook, 2nd ed. (Milwaukee, WI: ASQ Quality Press, 2007), chap. 10. 4. “Certified Quality Manager,” ASQ’s Foundations in Quality Self-Directed Learning Series, Module 4 (Milwaukee, WI: ASQ Quality Press, 2001). 5. A. V. Feigenbaum and D. S. Feigenbaum, “The Power of Management Capital,” Quality Progress 37, no. 11 (November 2004): 24–29. 6. Bauer et al., Quality Improvement Handbook. 7. R. T. Westcott, “Quality Level Agreements for Clarity of Expectations,” The Informed Outlook (December 1999): 13–15. 8. Westcott, The Certified Manager of Quality/Organizational Excellence Handbook, 456–57. 9. Westcott, The Certified Manager of Quality/Organizational Excellence Handbook, 457. 10. R. Lawton, “8 Dimensions of Excellence,” Quality Progress (April 2006). 11. Bauer et al., Quality Improvement Handbook. 12. Lawton, “8 Dimensions.”
Chapter 17
1. M. LeBoeuf, How to Win Customers and Keep Them for Life (New York: G. P. Putnam's Sons, 1987).
2. M. LeBoeuf, How to Win Customers and Keep Them for Life.
3. N. Scriabina and S. Fomichov, "Six Ways to Benefit from Customer Complaints," Quality Progress 38, no. 9 (September 2005): 49–54.
4. "Certified Quality Manager," ASQ's Foundations in Quality Self-Directed Learning Series, Module 4 (Milwaukee, WI: ASQ Quality Press, 2001), 4–47.
5. ASQ's Foundations in Quality Self-Directed Learning Series, 4–86.
Chapter 18
1. J. M. Juran, Juran on Planning for Quality (New York: The Free Press, 1988).
2. International Organization for Standardization, Quality Management Systems—Requirements (Geneva: ISO, 2000).
3. Ibid.
4. Ibid.
5. FDA Quality Systems Regulation, Medical Device Quality Systems Manual, "4. Process Validation," http://www.fda.gov/MedicalDevices/DeviceRegulationandGuidance/PostmarketRequirements/QualitySystemsRegulations/MedicalDeviceQualitySystemsManual/ucm122439.htm#terms___definitions.
6. FDA Quality Systems Regulation, Medical Device Quality Systems Manual, "4. Process Validation."
7. ISO, "Quality Management Systems—Fundamentals and Vocabulary," https://www.iso.org/obp/ui/#iso:std:iso:9000:ed-4:v1:en, accessed July 8, 2021.
Chapter 19
1. S. Furterer and D. Wood, ASQ Certified Manager of Quality/Organizational Excellence Handbook (Milwaukee, WI: ASQ Quality Press, 2021).
2. G. Duffy and S. Furterer, ASQ Certified Quality Improvement Analyst Handbook (Milwaukee, WI: ASQ Quality Press, 2020).
3. Duffy and Furterer, ASQ Certified Quality Improvement Analyst Handbook.
4. K. R. Bhote, Strategic Supply Management (New York: AMACOM, 1989).
5. J. B. Ayers, Supply Chain Project Management: A Structured Collaborative and Measurable Approach (Boca Raton, FL: St. Lucie Press, 2003).
6. "APICS Supply Chain Operations Reference Model SCOR Version 12.0," APICS, apics.org/scor, 2017, accessed July 8, 2021.
7. J. P. Russell, ed., The ASQ Supply Chain Management Primer (Milwaukee, WI: ASQ Quality Press, 2014).
Chapter 20
1. ASME Boiler and Pressure Vessel Code, Section III, Article NCA-9000.
2. American Society for Quality, Quality Management Systems—Requirements (Milwaukee, WI: ASQ Quality Press, 2015).
3. American Society for Quality, ANSI/ISO/ASQ Q9001-2015, Quality Management Systems—Requirements.
4. International Organization for Standardization, Quality Management Systems—Particular Requirements for the Application of ISO 9001:2000 for Automotive Production and Relevant Service Part Organizations, 2nd ed. (Geneva: ISO, 2002).
Glossary
A
Ac—See acceptance number.
acceptable process level (APL)—The process level that forms the outer boundary of the zone of acceptable processes. (A process located at the APL will have only a probability of rejection designated α when the plotted statistical measure is compared to the acceptance control limits.) Note: In the case of two-sided tolerances, upper and lower acceptable process levels will be designated UAPL and LAPL, respectively. (These need not be symmetrical around the standard level.)
acceptance (control chart or acceptance control chart usage)—A decision that the process is operating in a satisfactory manner with respect to the statistical measure being plotted.
acceptance control chart—A control chart intended primarily to evaluate whether or not the plotted measure can be expected to satisfy specified tolerances.
acceptance control limit (ACL)—Control limits for an acceptance control chart that permit some assignable shift in process level based on specified requirements, provided within-subgroup variability is in a state of statistical control.
acceptance number (Ac)—The largest number of nonconformities or nonconforming items found in the sample by acceptance sampling inspection by attributes that permits the acceptance of the lot as given in the acceptance sampling plan.
acceptance quality limit (AQL)—The AQL is the quality level that is the worst tolerable product average when a continuing series of lots is submitted for acceptance sampling. Note 1: This concept only applies when an acceptance sampling scheme with rules for switching and for discontinuation is used. Note 2: Although individual lots with quality as bad as the acceptance quality limit can be accepted with fairly high probability, the designation of an acceptance quality limit does not suggest that this is a desirable quality level. Note 3: Acceptance sampling schemes found in standards, with their rules for switching and for discontinuation of sampling inspection, are designed to
encourage suppliers to have process averages consistently better than the acceptance quality limit. If suppliers fail to do so, there is a high probability of being switched from normal inspection to tightened inspection, where lot acceptance becomes more difficult. Once on tightened inspection, unless corrective action is taken to improve product quality, it is very likely that the rule requiring discontinuance of sampling inspection will be invoked. Note 4: The use of the abbreviation AQL to mean acceptable quality level is no longer recommended since modern thinking is that no fraction defective is really acceptable. Using "acceptance quality limit" rather than "acceptable quality level" indicates a technical value where acceptance occurs.
acceptance sampling—A sampling after which decisions are made to accept or not to accept a lot, or other grouping of products, materials, or services, based on sample results.
acceptance sampling inspection—An acceptance inspection where the acceptability is determined by sampling inspection.
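For readers who want to see these sampling-plan terms in action, here is a minimal sketch (Python used purely for illustration; the plan of n = 50 and Ac = 2 and the incoming quality levels are hypothetical) that computes the probability of accepting a lot under a single acceptance sampling plan, assuming a binomial model:

```python
from math import comb

def prob_accept(n: int, ac: int, p: float) -> float:
    """Probability of accepting a lot under a single sampling plan:
    accept when the number of nonconforming items found in a sample
    of n is at most the acceptance number Ac (binomial model)."""
    return sum(comb(n, d) * p**d * (1 - p) ** (n - d) for d in range(ac + 1))

# Hypothetical plan: sample n = 50 items, acceptance number Ac = 2.
for p in (0.01, 0.02, 0.05, 0.10):
    print(f"fraction nonconforming p = {p:.2f} -> P(accept) = {prob_accept(50, 2, p):.3f}")
```

Evaluating P(accept) over a range of incoming quality levels traces out the plan's operating characteristic.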
acceptance sampling inspection by attributes—An acceptance sampling inspection whereby the presence or absence of specified characteristics of each item in a sample is observed to statistically establish the acceptability of a lot or process.
acceptance sampling inspection by variables—An acceptance sampling inspection in which the acceptability of a process is determined statistically from measurements on specified quality characteristics of each item in a sample from a lot. Note: Lots taken from an acceptable process are assumed to be acceptable.
acceptance sampling inspection system—A collection of acceptance sampling plans or acceptance sampling schemes together with criteria by which appropriate plans or schemes may be chosen.
acceptance sampling plan—A plan that states the sample size(s) to be used and the associated criteria for lot acceptance.
acceptance sampling procedure—The operational requirements and/or instructions related to the use of a particular acceptance sampling plan. Note: This covers the planned method of selection, withdrawal, and preparation of sample(s) from a lot to yield knowledge of the characteristic(s) of the lot.
acceptance sampling scheme—The combination of acceptance sampling plans with switching rules for changing from one plan to another.
acceptance sampling system—A collection of sampling schemes.
accessibility—A measure of the relative ease of admission to the various areas of an item for the purpose of operation or maintenance.
accreditation—Certification, by a duly recognized body, of the facilities, capability, objectivity, competence, and integrity of an agency, service, or operational group or individual to provide the specific service or operation needed. For
example, the Exemplar Global (U.S.) accredits those organizations that register companies to the ISO 9000 series standards.
accuracy—The closeness of agreement between a test result or measurement result and the true value.
ACL—See acceptance control limit.
ACSI—The American Customer Satisfaction Index is an economic indicator, a cross-industry measure of the satisfaction of U.S. customers with the quality of the goods and services available to them—both those goods and services produced within the United States and those provided as imports from foreign firms that have substantial market shares or dollar sales.
action limits—The control chart control limits (for a process in a state of statistical control) beyond which there is a very high probability that a value is not due to chance. When a measured value lies beyond an action limit, appropriate corrective action should be taken on the process. Example: Typical action limits for an $\bar{x}$ chart are $\pm 3\hat{\sigma}$ (three standard deviations).
action plan—The detailed plan to implement the actions needed to achieve strategic goals and objectives (similar to, but not as comprehensive as a project plan).
active listening—Paying attention solely to what others are saying (for example, rather than what you think of what they are saying or what you want to say back to them).
activity network diagram (AND)—See arrow diagram.
ADDIE—An instructional design model (analysis, design, development, implementation, and evaluation).
ad hoc team—See temporary team.
adult learning principles—Key principles about how adults learn that impact how education and training of adults should be designed.
affinity diagram—A management and planning tool used to organize ideas into natural groupings in a way that stimulates new, creative ideas. Also known as the KJ method.
agile approach—See lean approach/lean thinking.
AIAG—Automotive Industry Action Group.
alias—An effect that is completely confounded with another effect due to the nature of the designed experiment. Aliases are the results of confounding, which may or may not be deliberate.
α (alpha)—(1) The maximum probability, or risk, of making a type I error when dealing with the significance level of a test; (2) The probability or risk of incorrectly deciding that a shift in the process mean has occurred when the process is
unchanged when referring to α in general or as the p-value obtained in a test; (3) α is usually designated as producer's risk.
alpha testing—In-house tests conducted on prototypes to ensure that the product is meeting requirements under controlled conditions. Usually done in the development stage.
alternative hypothesis, H1—A hypothesis that is accepted if the null hypothesis (H0) is disproved. Example: Consider the null hypothesis that the statistical model for a population is a normal distribution. The alternative hypothesis to this null hypothesis is that the statistical model of the population is not a normal distribution. Note 1: The alternative hypothesis is a statement that contradicts the null hypothesis. The corresponding test statistic is used to decide between the null and alternative hypotheses. Note 2: The alternative hypothesis can be denoted H1 or HA with no clear preference as long as the symbolism parallels the null hypothesis notation.
analogies—A technique used to generate new ideas by translating concepts from one application to another.
analysis of covariance (ANCOVA)—A technique for estimating and testing the effects of treatments when one or more concomitant variables influence the response variable. Note: Analysis of covariance can be viewed as a combination of regression analysis and analysis of variance.
analysis of variance (ANOVA)—A technique for determining if there are statistically significant differences between group means by analyzing group variances. An ANOVA is an analysis technique that evaluates the importance of several factors of a set of data by subdividing the variation into component parts. An analysis of variance table generally contains columns for:
• Source
• Degrees of freedom
• Sum of squares
• Mean square
• F-ratio or F-test statistic
• p-value
• Expected mean square
The basic assumptions are that the effects from the sources of variation are additive, and that the experimental errors are independent, normally distributed, and have equal variances. ANOVA tests the hypothesis that the within-group variation is homogeneous and does not vary from group to group. The null hypothesis is that the group means are equal to each other. The
alternative hypothesis is that at least one of the group means is different from the others.
analytical thinking—Breaking down a problem or situation into discrete parts to understand how each part contributes to the whole.
ANCOVA—See analysis of covariance.
andon board—A visual device (usually lights) displaying status alerts that can easily be seen by those who should respond.
ANOVA—See analysis of variance.
AOQ—See average outgoing quality.
AOQL—See average outgoing quality limit.
APL—See acceptable process level.
AQL—See acceptance quality limit.
arithmetic average—See arithmetic mean.
arithmetic mean—A calculation or estimation of the center of a set of values. It is the sum of the values divided by the number in the sum:

$$\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i$$

where $\bar{x}$ is the arithmetic mean. The average of a set of n observed values is the sum of the observed values divided by n:

$$\bar{x} = \frac{x_1 + x_2 + \dots + x_n}{n}$$
See also sample mean and population mean.
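A minimal worked example of the two equivalent formulas above (Python used purely for illustration; the data are hypothetical):

```python
from math import isclose
from statistics import mean

x = [10.2, 9.8, 10.1, 10.4, 9.9]          # hypothetical observed values

x_bar = sum(x) / len(x)                    # sum of the values divided by n
assert isclose(x_bar, mean(x))             # the library function agrees
print(f"arithmetic mean = {x_bar:.2f}")    # 10.08
```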
ARL (average run length)—See average run length. arrow diagram—A management and planning tool used to develop the best possible schedule and appropriate controls to accomplish the schedule; the critical path method (CPM) and the program evaluation review technique (PERT) make use of arrow diagrams. AS9100—An international quality management standard for the aeronautics industry embracing the ISO 9001 standard. ASME—American Society of Mechanical Engineers. ASN—See average sample number. ASQ—American Society for Quality, a society of individual and organizational members dedicated to the ongoing development, advancement, and promotion of quality concepts, principles, and technologies. assessment—An estimate or determination of the significance, importance, or value of something.
assignable cause—A specifically identified factor that contributes to variation and is feasible to detect and identify. Eliminating assignable causes so that the points plotted on a control chart remain within the control limits helps achieve a state of statistical control. Note: Although assignable cause is sometimes considered synonymous with special cause, a special cause is assignable only when it is specifically identified. See also special cause.
ASTD—American Society for Training and Development.
ASTM—American Society for Testing and Materials.
ATF—Bureau of Alcohol, Tobacco, Firearms and Explosives.
ATI—See average total inspection.
attribute—A countable or categorized quality characteristic that is qualitative rather than quantitative in nature. Attribute data come from discrete, nominal, or ordinal scales. Examples of attribute data are irregularities or flaws in a sample and results of pass/fail tests.
attributes control chart—A Shewhart control chart, where the measure plotted represents countable or categorized data.
attributes data—"Does/does not exist" data. The control charts based on attribute data include fraction defective chart, number of affected units chart, count chart, count-per-unit chart, quality score chart, and demerit chart.
attributes, inspection by—See inspection by attributes.
audit—A planned, independent, and documented assessment to determine whether agreed-on requirements are being met.
auditee—The individual or organization being audited.
auditor—An individual or organization carrying out an audit.
audit team—The group of trained individuals conducting an audit under the direction of a team leader, relevant to a particular product, process, service, contract, or project.
autocorrelation—The internal correlation between members of a series of observations ordered in time. Note: Autocorrelation can lead to misinterpretation of runs and trends in control charts.
autonomation—Use of specially equipped automated machines capable of detecting a defect in a single part, stopping the process, and signaling for assistance. See also jidoka.
availability—(1) The ability of a product to be in a state to perform its designated function under stated conditions at a given time. Availability can be expressed by the ratio uptime / total time.
(2) A measure of the degree to which an item is in the operable and committable state at the start of the mission when the mission is called for at an unknown (random) time. average—The arithmetic mean. average outgoing quality (AOQ)—The expected average quality level of outgoing product for a given value of incoming product quality. Note: Unless otherwise specified, the AOQ is calculated over all accepted lots plus all nonaccepted lots after the latter have been 100% inspected and the nonconforming items replaced by conforming items. An approximation often used is AOQ equals incoming process quality multiplied by the probability of acceptance. This formula is exact for accept-zero plans and overestimates other plans. average outgoing quality limit (AOQL)—The maximum average outgoing quality over all possible values of incoming product quality level for a given acceptance sampling plan, and rectification of all nonaccepted lots unless specified otherwise. average run length (ARL)—The expected number of samples (or subgroups) plotted on a control chart up to and including the decision point that a special cause is present. The choice of ARL is a compromise between taking action when the process has not changed (ARL too small) or not taking action when a special cause is present (ARL too large). average sample number (ASN)—The average number of sample units per lot used for making decisions (acceptance or nonacceptance). average total inspection (ATI)—The average number of items inspected per lot, including 100% inspection of items in nonaccepted lots. Note: Applicable when the procedure calls for 100% inspection of nonaccepted lots.
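The AOQ approximation noted above (incoming quality multiplied by the probability of acceptance) can be sketched numerically. The plan parameters below are hypothetical, and the binomial model for the probability of acceptance is an assumption of this sketch:

```python
from math import comb

def prob_accept(n: int, ac: int, p: float) -> float:
    # Binomial probability of finding Ac or fewer nonconforming items in a sample of n.
    return sum(comb(n, d) * p**d * (1 - p) ** (n - d) for d in range(ac + 1))

n, ac = 50, 2                                # hypothetical sampling plan
aoq = {p / 100: (p / 100) * prob_accept(n, ac, p / 100) for p in range(1, 11)}
for p, q in aoq.items():
    print(f"incoming p = {p:.2f} -> AOQ ~ {q:.4f}")
print(f"AOQL (maximum of the AOQ curve) ~ {max(aoq.values()):.4f}")
```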
B
balanced design—A design where all treatment combinations have the same number of observations. If replication in a design exists, it would be balanced only if the replication was consistent across all the treatment combinations. In other words, the number of replicates of each treatment combination is the same.
balanced incomplete block design—An incomplete block design in which each block contains the same number (k) of different levels from the (l) levels of the principal factor, arranged so that every pair of levels occurs in the same number (λ) of blocks from the b blocks. Note: This design implies that every level of the principal factor appears the same number of times in the experiment.
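The "same number of observations" condition in the balanced design entry above can be made concrete with a short sketch (Python used purely for illustration; the factors and replicate count are hypothetical):

```python
from itertools import product

factors = {"temperature": ["low", "high"], "pressure": ["low", "high"]}
replicates = 3   # equal replication of every treatment combination => balanced

runs = [combo for combo in product(*factors.values()) for _ in range(replicates)]
for i, combo in enumerate(runs, start=1):
    print(f"run {i}: " + ", ".join(f"{f}={level}" for f, level in zip(factors, combo)))
```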
balanced scorecard—Translates an organization's mission and strategy into a comprehensive set of performance measures to provide a basis for strategic measurement and management, typically using four balanced views: financial, customers, internal business processes, and learning and growth.
batch—A definite quantity of product accumulated under conditions considered uniform, or accumulated from a common source. This term is sometimes synonymous with lot.
batch processing—Running large batches of a single product through the process at one time, resulting in queues awaiting next steps in the process.
bathtub curve—Also called life history curve or Weibull curve. A graphic demonstration of the relationship of failures over the life of a product versus the probable failure rate. Includes three phases: early or infant failure (break-in), a stable rate during normal use, and wear-out.
behavioral theories—Motivational theories, notably those of Abraham Maslow, Frederick Herzberg, Douglas McGregor, and William Ouchi.
benchmarking—An improvement process in which a company measures its performance against that of best-in-class companies (or others who are good performers), determines how those companies achieved their performance levels, and uses the information to improve its own performance. The areas that can be benchmarked include strategies, operations, processes, and procedures.
β (beta)—(1) The maximum probability, or risk, of making a type II error. See comment on α (alpha); (2) The probability or risk of incorrectly deciding that a shift in the process mean has not occurred when the process has changed; (3) β is usually designated as consumer's risk. See power curve.
beta testing—The final test before the product is released to the commercial market. Beta testers are existing customers.
bias—Inaccuracy in a measurement system that occurs when the mean of the measurement result is consistently or systematically different than its true value.
bimodal—A probability distribution having two distinct statistical modes.
binomial confidence interval (one sample)—To construct 100(1 − α)% confidence intervals on the parameter p of a binomial distribution, if n is large and p ≤ 0.1. For instance,

$$\hat{p} - z_{\alpha/2}\sqrt{\frac{\hat{p}(1-\hat{p})}{n}} \le p \le \hat{p} + z_{\alpha/2}\sqrt{\frac{\hat{p}(1-\hat{p})}{n}}$$

where the unbiased point estimator of p is $\hat{p} = x/n$.
If n is small, then tables of the binomial distribution should be used to establish the confidence interval on p. If n is large but p is small, then the Poisson approximation to the binomial is useful in constructing confidence intervals.
binomial confidence interval (two sample)—If there are two binomial parameters of interest, p1 and p2, then an approximate 100(1 − α)% confidence interval on the difference is

$$\hat{p}_1 - \hat{p}_2 - z_{\alpha/2}\sqrt{\frac{\hat{p}_1(1-\hat{p}_1)}{n_1} + \frac{\hat{p}_2(1-\hat{p}_2)}{n_2}} \le p_1 - p_2 \le \hat{p}_1 - \hat{p}_2 + z_{\alpha/2}\sqrt{\frac{\hat{p}_1(1-\hat{p}_1)}{n_1} + \frac{\hat{p}_2(1-\hat{p}_2)}{n_2}}$$

where $\hat{p}_1 = x_1/n_1$ and $\hat{p}_2 = x_2/n_2$.
binomial distribution—A two-parameter discrete distribution involving the mean (µ) and the variance (σ²) of the variable x with probability p where p is a constant, 0 ≤ p ≤ 1, and sample size n. Mean = np and variance = np(1 − p).
blemish—An imperfection that causes awareness but does not impair function or usage.
block—A collection of experimental units more homogeneous than the full set of experimental units. Blocks are usually selected to allow for special causes, in addition to those introduced as factors to be studied. These special causes may be avoidable within blocks, thus providing a more homogeneous experimental subspace.
block diagram—A diagram that shows the operation, interrelationships, and interdependencies of components in a system. Boxes, or blocks (hence the name), represent the components; connecting lines between the blocks represent interfaces. There are two types of block diagrams: a functional block diagram, which shows a system's subsystems and lower-level products, their interrelationships, and interfaces with other systems; and a reliability block diagram, which is similar to the functional block diagram except that it is modified to emphasize those aspects influencing reliability.
block effect—An effect resulting from a block in an experimental design. Existence of a block effect generally means that the method of blocking was appropriate.
blocking—The method of including blocks in an experiment in order to broaden the applicability of the conclusions or to minimize the impact of selected assignable causes. The randomization of the experiment is restricted and occurs within blocks.
body language—The expression of thoughts and emotions through movement or positioning of the body.
Box–Behnken design—A type of response surface design constructed by judicious combination of $2^k$ factorial designs with balanced incomplete block designs. It is
most useful when it is difficult to have more than three levels of a factor in an experiment or when trying to avoid extreme combinations of factor levels.
box plot—Box plots, which are also called box-and-whisker plots, are particularly useful for showing the distributional characteristics of data. A box plot consists of a box, whiskers, and outliers. A line is drawn across the box at the median. By default, the bottom of the box is at the first quartile (Q1) and the top is at the third quartile (Q3) value. The whiskers are the lines that extend from the top and bottom of the box to the adjacent values. The adjacent values are the lowest and highest observations that are still inside the region defined by the following limits:
Lower limit: Q1 − 1.5 (Q3 − Q1)
Upper limit: Q3 + 1.5 (Q3 − Q1)
Outliers are points outside of the lower and upper limits, and usually are plotted with asterisks (*).
[Figure: Sample box plot, showing the box from the first quartile (Q1) to the third quartile (Q3) with a line at the median, whiskers extending to the highest and lowest adjacent values within the limits, and outliers plotted beyond the whiskers.]
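A quick numeric sketch of the box plot limits defined above (Python; the data are hypothetical, and the quartile convention of the standard library's statistics module is one of several in common use):

```python
from statistics import median, quantiles

data = [7.2, 8.1, 8.8, 9.0, 9.3, 9.7, 10.1, 10.4, 11.0, 11.6, 15.1]

q1, _, q3 = quantiles(data, n=4)   # first and third quartiles
iqr = q3 - q1
lower_limit = q1 - 1.5 * iqr       # per the definition above
upper_limit = q3 + 1.5 * iqr

outliers = [x for x in data if x < lower_limit or x > upper_limit]
print(f"median = {median(data)}, Q1 = {q1}, Q3 = {q3}")
print(f"whisker limits: [{lower_limit:.2f}, {upper_limit:.2f}], outliers: {outliers}")
```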
BPR—Business process reengineering. See reengineering. brainstorming—A problem-solving tool that teams use to generate as many ideas as possible related to a particular subject. Team members begin by offering all their ideas; the ideas are not discussed or reviewed until after the brainstorming session. breakthrough—A method of solving chronic problems that results from the effective execution of a strategy designed to reach the next level of quality. Such change often requires a paradigm shift within the organization. burn-in—The operation of an item under stress to stabilize its characteristics. business partnering—The creation of cooperative business alliances between constituencies within an organization or between an organization and its customers or suppliers. Partnering occurs through a pooling of resources in a trusting atmosphere focused on continuous, mutual improvement. See also customer–supplier partnership.
business processes—Processes that focus on what the organization does as a business and how it goes about doing it. A business has functional processes (generating output within a single department) and cross-functional processes (generating output across several functions or departments). business process management (BPM)—A disciplined approach to analyze, identify, design, execute, document, measure, monitor, improve, and control both automated and nonautomated business processes to achieve consistent, targeted results aligned with an organization’s strategic goals.
C c (count)—The number of events (often nonconformities) of a given classification occurring in a sample of fixed size. A nonconforming unit may have more than one nonconformity. Example: Blemishes per 100 meters of rubber hose. Counts are attribute data. See c-chart. CAB—Corrective Action Board. capability—(1) The performance of a process demonstrated to be in a state of statistical control. See process capability and process performance. (2) A measure of the ability of an item to achieve mission objectives given the conditions during the mission. capability index—See process capability index. capability ratio (Cp)—Is equal to the specification tolerance width divided by the process capability. cascading training—Training implemented in an organization from the top down, where each level acts as trainers to those below. case study—A prepared scenario (story) that, when studied and discussed, serves to illuminate the learning points of a course of study. cause—A cause is an identified reason for the presence of a symptom, defect, or problem. See effect and cause-and-effect diagram. cause-and-effect diagram—A tool for analyzing process variables. It is also referred to as the Ishikawa diagram, because Kaoru Ishikawa developed it, or the fishbone diagram, because the complete diagram resembles a fish skeleton. The diagram illustrates the main causes and sub-causes leading to an effect (symptom). The cause-and-effect diagram is one of the seven basic tools of quality. CBT—Computer-based training. Training delivered via computer software. c-chart (count chart)—An attributes control chart that uses c (count) or number of events as the plotted values where the opportunity for occurrence is fixed. Events, often nonconformities, of a particular type form the count. The fixed opportunity relates to samples of constant size or a fixed amount of material.
Examples include flaws in each 100 square meters of fabric, errors in each 100 invoices, or number of absentees per month:

Central line: $\bar{c}$
Control limits: $\bar{c} \pm 3\sqrt{\bar{c}}$

where $\bar{c}$ is the arithmetic mean of the number of events. Note: If the lower control limit calculates ≤ 0, there is no lower control limit. See also u chart, a count chart where the opportunity for occurrence is variable.
cell—A layout of workstations and/or various machines for different operations (usually in a U shape) in which multitasking operators proceed with a part from machine to machine to perform a series of sequential steps to produce a whole product or major subassembly.
cellular team—The cross-trained individuals who work within a cell.
center line—See central line.
central line—A line on a control chart representing the long-term average or a standard value of the statistical measure plotted. The central line calculation for each type of control chart is given under the specific control chart term; that is, for the individuals control chart, the central line is $\bar{x}$.
certification—The receipt of a document from an authorized source stating that a device, process, or operator has been certified to a known standard.
chain acceptance sampling inspection—An acceptance sampling inspection in which the criteria for acceptance of the current lot are governed by the sampling results of that lot and those of a specified number of the preceding consecutive lots.
chain reaction—A series of interacting events described by W. Edwards Deming: improve quality > decrease costs > improve productivity > increase market share with better quality and lower price > stay in business, provide jobs, and provide more jobs.
chaku–chaku—(Japanese) Meaning load–load in a cell layout, where a part is taken from one machine and loaded into the next.
champion—An individual who has accountability and responsibility for many processes or who is involved in making strategic-level decisions for the organization. The champion ensures ongoing dedication of project resources and monitors strategic alignment (also referred to as a sponsor).
chance cause—See random cause.
chance variation—See random cause.
change agent—The person who takes the lead in transforming a company into a quality organization by providing guidance during the planning phase, facilitating implementation, and supporting those who pioneer the changes.
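Looking back at the c-chart entry above, a minimal sketch of its central line and control limit calculations (Python used purely for illustration; the counts are hypothetical):

```python
from math import sqrt

counts = [4, 7, 3, 5, 6, 2, 8, 5, 4, 6]    # nonconformities per fixed inspection unit

c_bar = sum(counts) / len(counts)           # central line
ucl = c_bar + 3 * sqrt(c_bar)               # upper control limit
lcl = max(0.0, c_bar - 3 * sqrt(c_bar))     # no lower limit if it computes <= 0

print(f"central line = {c_bar:.2f}, UCL = {ucl:.2f}, LCL = {lcl:.2f}")
print("points beyond limits:", [c for c in counts if c > ucl or (lcl > 0 and c < lcl)])
```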
change management—The strategies, processes, and practices involved in creating and managing change.
changeover—Changing a machine or process from one type of product or operation to another.
characteristic—A distinguishing feature or inherent property of a product, process, or system related to a requirement. A property that helps to differentiate between items of a given sample or population. The differentiation may be either quantitative (by variable) or qualitative (by attribute).
charter—A documented statement officially initiating the formation of a committee, team, project, or other effort in which a clearly stated purpose and approval is conferred.
checklist—A tool for organizing and ensuring that all important steps or actions in an operation have been taken. Checklists contain items that are important or relevant to an issue or situation. Checklists should not be confused with check sheets and data sheets.
checkout—Tests or observations of an item to determine its condition or status.
check sheet—A simple data-recording device. The check sheet is custom designed for the particular use, allowing ease in interpreting the results. The check sheet is one of the seven basic tools of quality. Check sheets should not be confused with data sheets and checklists.
χ² (chi-square) distribution—A positively skewed distribution that varies with the degrees of freedom with a minimum value of zero.
χ² (chi-square) statistic—A value obtained from the χ² distribution at a given percentage point and a given degree of freedom.
χ² (chi-square) test—A statistic used in testing a hypothesis concerning the discrepancy between observed and expected results.
chronic problem—A long-standing adverse situation that can be remedied by changing the status quo. For example, actions such as revising an unrealistic manufacturing process or addressing customer defections can change the status quo and remedy the situation.
clearance number—As associated with a continuous sampling plan, the number of successively inspected units of product that must be found acceptable during the 100% inspection sequence before action to change the amount of inspection can be taken. The clearance number is often designated as i.
coaching—A continual improvement technique by which people receive one-to-one learning through demonstration and practice and that is characterized by immediate feedback and correction.
code of conduct—The expected behavior that has been mutually developed and agreed on by a team.
code of ethics—A guide for ethical behavior of the members of ASQ and those individuals who hold certification in its various divisions.
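Referring back to the χ² (chi-square) test entry above, here is a minimal goodness-of-fit sketch that computes the statistic by hand and compares it to a tabled critical value (the counts are hypothetical; the 0.05 critical value for 3 degrees of freedom is 7.815):

```python
# Chi-square goodness-of-fit: observed vs. expected counts (hypothetical data).
observed = [18, 22, 29, 31]
expected = [25, 25, 25, 25]

chi_sq = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
df = len(observed) - 1

critical = 7.815   # tabled chi-square critical value, alpha = 0.05, df = 3
print(f"chi-square = {chi_sq:.2f}, df = {df}, critical value = {critical}")
print("reject H0" if chi_sq > critical else "fail to reject H0")
```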
coefficient of determination (R²)—A measure of the part of the variance for one variable that can be explained by its linear relationship with another variable (or variables). The coefficient of determination is the square of the correlation between the observed y values and the fitted y values, and is also the fraction of the variation in y that is explained by the fitted equation. It is a percentage between zero and 100, with higher values indicating a stronger degree of the combined linear relationship of several predictor variables $x_1, x_2, \dots, x_p$ to the response variable Y. See also regression analysis.
coefficient of variation (CV)—This measures relative dispersion. It is the standard deviation divided by the mean and is commonly reported as a percentage.
comment cards—Printed cards or slips of paper used to solicit and collect comments from users of a service or product.
commercial/industrial market—Refers to business market customers who are described by variables such as location, NAICS code, buyer industry, technological sophistication, purchasing process, size, ownership, and financial strength.
common cause—See random cause.
common causes of variation—Causes that are inherent in any process all the time. A process that has only common causes of variation is said to be stable or predictable or in control. Also called chance causes.
competence—Refers to a person's ability to learn and perform a particular activity. Competence consists of knowledge, experience, skills, aptitude, and attitude components (KESAA factors).
competency-based training—A training methodology that focuses on building mastery of a predetermined segment or module before moving on to the next.
complaint handling—The process and practices involved in receiving and resolving complaints from customers.
complete block—A block that accommodates a complete set of treatment combinations.
completely randomized design—A design in which the treatments are assigned at random to the full set of experimental units. No blocks are involved in a completely randomized design.
completely randomized factorial design—A factorial design in which all the treatments are assigned at random to the full set of experimental units. See completely randomized design.
compliance—An affirmative indication or judgment that the supplier of a product or service has met the requirements of the relevant specifications, contract, or regulation; also, the state of meeting the requirements.
computer-based instruction—Any instruction delivered via a computer.
concomitant variable—A variable or factor that cannot be accounted for in the data analysis or design of an experiment but whose effect on the results should be
accounted for. For example, the experimental units may differ in the amount of some chemical constituent present in each unit, which can be measured, but not adjusted. See analysis of covariance.
confidence coefficient (1 − α)—See confidence level.
confidence interval—A confidence interval is an estimate of the interval between two statistics that includes the true value of the parameter with some probability. This probability is called the confidence level of the estimate. Confidence levels typically used are 90%, 95%, and 99%. The interval either contains the parameter or it does not. See z-confidence interval, t-confidence interval.
confidence level (confidence coefficient [1 − α])—(1) The probability that the confidence interval described by a set of confidence limits actually includes the population parameter; (2) The probability that an interval about a sample statistic actually includes the population parameter.
confidence limits—The endpoints of the interval about the sample statistic that is believed, with a specified confidence level, to include the population parameter. See confidence interval.
configuration management (CM)—A discipline and a system to control changes that are made to hardware, software, firmware, and documentation throughout a system's life cycle.
conflict resolution—A process for resolving disagreements in a manner acceptable to all parties.
conformance—An affirmative indication or judgment that a product or service has met the requirements of a relevant specification, contract, or regulation.
confounding—Indistinguishably combining an effect with other effects or blocks. When done deliberately, higher-order effects are systematically aliased so as to allow estimation of lower-order effects. Sometimes confounding results from inadvertent changes to a design during the running of an experiment, or poor planning of the design. This can diminish or even invalidate the effectiveness of the experiment.
consensus—Finding a proposal acceptable enough that all team members can support the decision and no member opposes it.
consistency—See precision.
consultative—A decision-making approach in which a person talks to others and considers their input before making a decision.
consumer market customers—End users of a product or service.
consumer's risk (β)—The probability of acceptance when the quality level has a value stated by the acceptance sampling plan as unsatisfactory. Note 1: Such acceptance is a type II error. Note 2: Consumer's risk is usually designated as β (beta).
continual process improvement—Includes the actions taken throughout an organization to increase the effectiveness and efficiency of activities and processes in order to provide added benefits to the customer and
organization. It is considered a subset of total quality management and operates according to the premise that organizations can always make improvements. Continual improvement can also be equated with reducing process variation.
continuous distribution—A distribution where data are from a continuous scale. Examples of continuous scales include the normal, t, and F distributions.
continuous quality improvement (CQI)—A philosophy and actions for repeatedly improving an organization's capabilities and processes with the objective of customer satisfaction.
continuous sampling plan—An acceptance sampling inspection applicable to a continuous-flow process, which involves acceptance or nonacceptance on an item-by-item basis and uses alternative periods of 100% inspection and sampling, depending on the quality of the observed process output.
continuous scale—A scale with a continuum of possible values. Note: A continuous scale can be transformed into a discrete scale by grouping values, but this leads to some loss of information.
contrast—Linear function of the response values for which the sum of the coefficients is zero with not all coefficients equal to zero. Questions of logical interest from an experiment may be expressed as contrasts with carefully selected coefficients.
contrast analysis—A technique for estimating the parameters of a model and making hypothesis tests on preselected linear combinations of the treatments or contrasts.
control chart—A chart that plots a statistical measure of a series of samples in a particular order to steer the process regarding that measure and to control and reduce variation. Note 1: The order is usually time- or sample number order–based. Note 2: The control chart operates most effectively when the measure is a process characteristic correlated with an ultimate product or service characteristic.
control chart, acceptance—See acceptance control chart.
control chart, standard given—A control chart whose control limits are based on adopted standard values applicable to the statistical measures plotted on the chart. Note 1: Standard values are adopted at any time for computing the control limits to be used as criteria for action in the immediate future or until additional evidence indicates need for revision. Note 2: This type of control chart is used to discover whether observed values differ from standard values by an amount greater than chance. An example of this use is to establish and verify an internal reference material. Note 3: The subscript zero is used for standard chart parameters, that is, $\bar{x}_0$, $\bar{y}_0$, $R_0$.
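To make the control chart definitions above concrete, here is a sketch of $\bar{x}$ and R chart limits computed from subgroup data (Python used purely for illustration; the measurements are hypothetical, and the factors A2, D3, and D4 for subgroups of five come from the standard Shewhart factor tables mentioned in the next entry):

```python
# x-bar and R chart limits from subgroup data (hypothetical subgroups of n = 5).
subgroups = [
    [10.1, 9.9, 10.2, 10.0, 9.8],
    [10.3, 10.1, 9.7, 10.0, 10.2],
    [9.9, 10.0, 10.1, 9.8, 10.2],
    [10.2, 10.4, 10.0, 9.9, 10.1],
]

x_bars = [sum(s) / len(s) for s in subgroups]
ranges = [max(s) - min(s) for s in subgroups]
x_dbar = sum(x_bars) / len(x_bars)    # grand average: central line of the x-bar chart
r_bar = sum(ranges) / len(ranges)     # average range: central line of the R chart

A2, D3, D4 = 0.577, 0.0, 2.114        # Shewhart factors for subgroup size 5

print(f"x-bar chart: CL = {x_dbar:.3f}, UCL = {x_dbar + A2 * r_bar:.3f}, "
      f"LCL = {x_dbar - A2 * r_bar:.3f}")
print(f"R chart:     CL = {r_bar:.3f}, UCL = {D4 * r_bar:.3f}, LCL = {D3 * r_bar:.3f}")
```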
control chart factor—A factor, usually varying with sample size, that converts specified statistics or parameters into control limits or a central line value. Note: Common control chart factors for Shewhart control charts are $d_2$, $A_2$, $D_3$, and $D_4$. See $\bar{x}$ chart and range.
control factor—In robust parameter design, a control factor is a predictor variable that is controlled as part of the standard experimental conditions. In general, inference is made on control factors while noise factors are allowed to vary to broaden the conclusions.
control limit—A line on a control chart used for judging the stability of a process. Note 1: Control limits provide statistically determined boundaries for the deviations from the central line of the statistic plotted on a Shewhart control chart due to random causes alone. Note 2: Control limits (with the exception of the acceptance control chart) are based on actual process data, not on specification limits. Note 3: Other than points outside of control limits, out-of-control criteria can include runs, trends, cycles, periodicity, and unusual patterns within the control limits. Note 4: The control limit calculation for each type of control chart is given under the specific control chart term, that is, for the individuals control chart under calculation of its control limits.
control plan—(1) A document describing the system elements to be applied to control variation of processes, products, and services in order to minimize deviation from their preferred values; (2) A document, or documents, that may include the characteristics for quality of a product or service, measurements, and methods of control.
corrective action—Action taken to eliminate the root cause(s) and symptom(s) of an existing deviation or nonconformity to prevent recurrence.
correlation—Correlation measures the linear association between two variables. It is commonly measured by the correlation coefficient r. See also regression.
correlation coefficient (r)—A number between −1 and 1 that indicates the degree of linear relationship between two sets of numbers:

$$r = \frac{s_{xy}}{s_x s_y} = \frac{n\sum xy - \sum x \sum y}{\sqrt{\left[n\sum x^2 - \left(\sum x\right)^2\right]\left[n\sum y^2 - \left(\sum y\right)^2\right]}}$$

where $s_x$ is the standard deviation of x, $s_y$ is the standard deviation of y, and $s_{xy}$ is the covariance of x and y.
Correlation coefficients of −1 and +1 represent perfect linear agreement between two variables; r = 0 implies no linear relationship at all. If r is positive, y increases as x increases. In other words, if a linear regression equation were fit, the linear regression coefficient would be positive. If r is negative, y decreases as
x increases. In other words, if a linear regression equation were fit, the linear regression coefficient would be negative.
cost of quality (COQ)—The total costs incurred relating to the quality of a product or service. There are four categories of quality costs: internal failure costs (costs associated with defects found before delivery of the product or service), external failure costs (costs associated with defects found during or after product or service delivery), appraisal costs (costs incurred to determine the degree of conformance to quality requirements), and prevention costs (costs incurred to keep failure and appraisal costs to a minimum).
covariance—This measures the relationship between pairs of observations from two variables. It is the sum of the products of deviations of pairs of variables in a random sample from their sample means divided by the number in the sum minus one:

$$\text{Sample covariance} = \frac{1}{n-1}\sum_{i=1}^{n}\left(x_i - \bar{x}\right)\left(y_i - \bar{y}\right)$$

where $\bar{x}$ is the sample mean for the first sample and $\bar{y}$ is the sample mean for the second sample. Note: Using n − 1 provides an unbiased estimator of the population covariance. When the covariance is standardized to cover a range of values from −1 to 1, it is more commonly known as the correlation coefficient.
Cp (process capability index)—An index describing process capability in relation to specified tolerance of a characteristic divided by a measure of the length of the reference interval for a process in a state of statistical control:

$$C_p = \frac{U - L}{6\sigma}$$
where U is upper specification limit, L is lower specification limit, and 6σ is the process capability.
See also Pp, a similar term, except that the process may not be in a state of statistical control.
Cpk (minimum process capability index)—An index that represents the smaller of CpkU (upper process capability index) and CpkL (lower process capability index).
CpkL (lower process capability index, CPL)—An index describing process capability in relation to the lower specification limit:

$$C_{pkL} = \frac{\mu - L}{3\sigma}$$

where µ = process average, L = lower specification limit, and 3σ = half of the process capability.
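A small sketch tying together the capability indices defined here (Python used purely for illustration; the specification limits and process estimates are hypothetical, and the upper index CpkU is included for completeness even though it is defined in the next entry):

```python
# Capability indices for a two-sided tolerance (hypothetical limits and estimates).
U, L = 10.5, 9.5           # upper and lower specification limits
mu, sigma = 10.1, 0.12     # process average and standard deviation

cp = (U - L) / (6 * sigma)
cpk_lower = (mu - L) / (3 * sigma)
cpk_upper = (U - mu) / (3 * sigma)   # the upper index, defined in the next entry
cpk = min(cpk_lower, cpk_upper)      # Cpk is the smaller of the two

print(f"Cp = {cp:.2f}, CpkL = {cpk_lower:.2f}, CpkU = {cpk_upper:.2f}, Cpk = {cpk:.2f}")
```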
CpkU (upper process capability index; CPU)—An index describing process capability in relation to the upper specification limit:

$$C_{pkU} = \frac{U - \mu}{3\sigma}$$

where µ = process average, U = upper specification limit, and 3σ = half of the process capability.
CPM—See critical path method.
Cpm (process capability index of the mean)—An index that takes into account the location of the mean, defined as

$$C_{pm} = \frac{(U - L)/2}{3\sqrt{\sigma^2 + \left(\mu - T\right)^2}}$$
where U = upper specification limit, L = lower specification limit, σ = standard deviation, µ = expected value, and T = target value.
CQPA—Certified Quality Process Analyst.
critical path—Refers to the sequence of tasks that takes the longest time and determines a project's completion date.
critical path method (CPM)—An activity-oriented project management tool that uses arrow-diagramming techniques to demonstrate both the time and cost required to complete a project. It provides one time estimate—normal time—and allows for computing the critical path.
critical thinking—The careful analysis, evaluation, reasoning (both deductive and inductive), clear thinking, and systems thinking leading to effective decisions.
crossed design—A design where all the factors are crossed with each other. See crossed factor.
crossed factor—Two factors are crossed if every level of one factor occurs with every level of the other in an experiment.
cross-functional team—A group consisting of members from more than one department or work unit that is organized to accomplish a project.
CSR—Customer service representative.
cumulative frequency distribution—The sum of the frequencies accumulated up to the upper boundary of a class in a distribution.
cumulative sum chart (CUSUM chart)—The CUSUM control chart calculates the cumulative sum of deviations from target to detect shifts in the level of the measurement. The CUSUM chart can be graphical or numerical. In the graphical form, a V-mask indicates action: whenever the CUSUM trace moves outside the V-mask, take control action. The numerical (or electronic) form
computes CUSUM from a selected bias away from target value and has a fixed action limit. Additional tests for abnormal patterns are not required. curtailed inspection—An acceptance sampling procedure that contains a provision for stopping inspection when it becomes apparent that adequate data have been collected for a decision. curvature—The departure from a straight-line relationship between the response variable and predictor variable. Curvature has meaning with quantitative predictor variables, but not with categorical (nominal) or qualitative (ordinal) predictor variables. Detection of curvature requires more than two levels of the factors and is often done using center points. customer—Recipient of a product or service provided by a supplier. See external customer and internal customer. customer delight—The result achieved when customer requirements are exceeded in unexpected ways the customer finds valuable. customer loyalty/retention—The result of an organization’s plans, processes, practices, and efforts designed to deliver their services or products in ways that create retained and committed customers. customer-oriented organization—An organization whose mission, purpose, and actions are dedicated to serving and satisfying its customers. customer relationship management (CRM)—An organization’s knowledge of its customers’ unique requirements and expectations, and use of that knowledge to develop a closer and more profitable link to business processes and strategies. customer satisfaction—The result of delivering a product or service that meets customer requirements, needs, and expectations. customer segmentation—The process of differentiating customers based on one or more dimensions for the purpose of developing a marketing strategy to address specific segments. customer service—The activities of dealing with customer questions; also sometimes the department that takes customer orders or provides postdelivery services. customer–supplier partnership—A long-term relationship between a buyer and supplier characterized by teamwork and mutual confidence. The supplier is considered an extension of the buyer’s organization. The partnership is based on several commitments. The buyer provides long-term contracts and uses fewer suppliers. The supplier implements quality assurance processes so that incoming inspection can be minimized. The supplier also helps the buyer reduce costs and improve product and process designs. customer value—The market-perceived quality adjusted for the relative price of a product. CUSUM chart—See cumulative sum chart.
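As a sketch of the numerical form of the CUSUM chart described above, the tabular CUSUM below accumulates one-sided sums of deviations from target, less an allowance k, and signals when either sum exceeds a decision limit h. The data and parameters are hypothetical, and the tabular form shown is one common implementation:

```python
# Tabular (numerical) CUSUM: accumulate deviations from target less an
# allowance k; signal when either one-sided sum exceeds the decision limit h.
target, k, h = 10.0, 0.25, 2.0   # hypothetical target, allowance, and action limit
data = [10.1, 9.9, 10.2, 10.4, 10.6, 10.5, 10.7, 10.8, 10.9, 11.0]

c_plus = c_minus = 0.0
for i, x in enumerate(data, start=1):
    c_plus = max(0.0, c_plus + (x - target) - k)    # detects upward shifts
    c_minus = max(0.0, c_minus + (target - x) - k)  # detects downward shifts
    flag = "  <- take control action" if c_plus > h or c_minus > h else ""
    print(f"obs {i}: x = {x}  C+ = {c_plus:.2f}  C- = {c_minus:.2f}{flag}")
```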
CV—See coefficient of variation. CWQC—Companywide quality control. cycle time—Refers to the time that it takes to complete a process from beginning to end. cycle time reduction—To reduce the time that it takes, from start to finish, to complete a particular process.
D D—The smallest shift in process level that it is desired to detect, stated in terms of original units. data integrity—Refers to the completeness, consistency, and accuracy of data. Complete, consistent, and accurate data should be attributable, legible, contemporaneously recorded, original or a true copy, and accurate (ALCOA). Data integrity is critical throughout the CGMP data life cycle, including in the creation, modification, processing, maintenance, archival, retrieval, transmission, and disposition of data after the record’s retention period ends. System design and controls should enable easy detection of errors, omissions, and aberrant results throughout the data’s life cycle. data pedigree—Documentation of the origins and history of a data set, including its technical meaning, background on the process that produced it, the original collection of samples, measurement processes used, and the subsequent handling of the data, including any modifications or deletions made, through the present. data reliability—The consistency of the data over time, across data types and across different operators measuring the data. data validity—The extent to which the scores from a measure represent the variable they are intended to. debugging—A process to detect and remedy inadequacies. decision matrix—A matrix used by teams to evaluate problems or possible solutions. For example, after a matrix is drawn to evaluate possible solutions, the team lists them in the far-left vertical column. Next, the team selects criteria to rate the possible solutions, writing them across the top row. Then, each possible solution is rated on a scale of 1 to 5 for each criterion and the rating is recorded in the corresponding grid. Finally, the ratings of all the criteria for each possible solution are added to determine its total score. The total score is then used to help decide which solution deserves the most attention. defect—The nonfulfillment of a requirement related to an intended or specified use. Note: The distinction between the concepts defect and nonconformity is important as it has legal connotations, particularly those associated with product liability issues. Consequently, the term defect should be used with extreme caution.
defective (defective unit)—A unit with one or more defects.
defining relation—Used for fractional factorial designs. It consists of generators that can be used to determine which effects are aliased with each other.
degrees of freedom (n, df)—In general, the number of independent comparisons available to estimate a specific parameter that allows entry to certain statistical tables.
δ (delta lowercase)—The smallest shift in process level that it is desired to detect, stated in terms of the number of standard deviations from the average. It is given by δ = D/σ, where D is the size of the shift it is desired to detect and σ is the standard deviation.
Deming cycle—See plan–do–check–act cycle.
Deming Prize—Award given annually to organizations that, according to the award guidelines, have successfully applied companywide quality control based on statistical quality control and will keep up with it in the future. Although the award is named in honor of W. Edwards Deming, its criteria are not specifically related to Deming's teachings. There are three separate divisions for the award: the Deming Application Prize, the Deming Prize for Individuals, and the Deming Prize for Overseas Companies. The award process is overseen by the Deming Prize Committee of the Union of Japanese Scientists and Engineers in Tokyo.
demographics—Variables among buyers in the consumer market, which include geographic location, age, sex, marital status, family size, social class, education, nationality, occupation, and income.
demonstrated—That which has been measured by the use of objective evidence gathered under specified conditions.
dependability—A measure of the degree to which an item is operable and capable of performing its required function at any (random) time during a specified mission profile, given item availability at the start of the mission.
dependent variable—See response variable.
deployment—Term used in strategic planning to describe the process of cascading goals, objectives, and plans throughout an organization.
derating—(1) Using an item in such a way that applied stresses are below rated values, or (2) the lowering of the rating of an item in one stress field to allow an increase in rating in another stress field.
design for manufacturing (DFM)—The design of a product for ease in manufacturing. Also, design for assembly (DFA) is used.
design for Six Sigma (DFSS)—The aim is to produce a robust design that is consistent with applicable manufacturing processes and assures a fully capable process that will produce quality products.
design generator—See generator. design of experiments (DOE, DOX)—The arrangement in which an experimental program is to be conducted, including the selection of factor combinations and their levels. Note: The purpose of designing an experiment is to provide the most efficient and economical methods of reaching valid and relevant conclusions from the experiment. The selection of the design is a function of many considerations, such as the type of questions to be answered, the applicability of the conclusions, the homogeneity of experimental units, the randomization scheme, and the cost to run the experiment. A properly designed experiment will permit simple interpretation of valid results. design resolution—See resolution. design review—Documented, comprehensive, and systematic examination of a design to evaluate its capability to fulfill the requirements for quality. design space—In design of experiments, the multidimensional region of possible treatment combinations formed by the selected factors and their levels. desired quality—Refers to the additional features and benefits a customer discovers when using a product or service that lead to increased customer satisfaction. If missing, a customer may become dissatisfied. deviation—The difference between a measurement (measurement usage) and its stated value or intended level. deviation control—Deviation control involves process control, where the primary interest is detecting small shifts in the mean. Typical control charts used for this purpose are CUSUM, EWMA, and Shewhart control charts with runs rules. This type of control is appropriate in advanced stages of a quality improvement process. See also threshold control. discrete distribution—A probability distribution where data are from a discrete scale. Examples of discrete distributions are the binomial and Poisson distributions. Attribute data involve discrete distributions. discrete scale—A scale with only a set or sequence of distinct values. Examples: Defects per unit, events in a given time period, types of defects, number of orders on a truck. discrimination—See resolution. dispersion—A term synonymous with variation. dispersion effect—The influence of a single factor on the variance of the response variable. dissatisfiers—Those features or functions that the customer or employee has come to expect, and, if they were no longer present, would result in dissatisfaction. distance learning—Learning where student(s) and instructor(s) are not colocated, often carried out through electronic means.
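To illustrate the design of experiments entry above, here is a minimal sketch that generates a $2^3$ full factorial in coded units and randomizes the run order (Python used purely for illustration; the factor names are hypothetical):

```python
import random
from itertools import product

# A 2^3 full factorial in coded units (-1 = low, +1 = high) for three
# hypothetical factors, with the run order randomized.
factors = ["temperature", "pressure", "time"]
design = list(product([-1, 1], repeat=len(factors)))

random.seed(42)        # fixed seed so the sketch is reproducible
random.shuffle(design)

for run, levels in enumerate(design, start=1):
    print(f"run {run}: " + ", ".join(f"{f} = {l:+d}" for f, l in zip(factors, levels)))
```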
distribution—Depicts the amount of potential variation in outputs of a process; it is usually described in terms of its shape, average, and standard deviation. DMAIC—A methodology used in the Six Sigma approach: define, measure, analyze, improve, control. documentation control (DC)—A system used to track and store electronic documents and/or paper documents. DOE—See design of experiments. dot plot—A plot of a frequency distribution where the values are plotted on the x-axis. The y-axis is a count. Each time a value occurs, the point is plotted according to the count for the value. double sampling—A multiple acceptance sampling inspection in which at most two samples are taken. Note: The decisions are made according to defined rules. downsizing—The planned reduction in workforce due to economics, competition, merger, sale, restructuring, or reengineering. duplication—See repeated measures. durability—A measure of useful life (a special case of reliability).
E effect—(1) The result of taking an action; the expected or predicted impact when an action is to be taken or is proposed. An effect is the symptom, defect, or problem. See cause and cause-and-effect diagram. (2) (design of experiments usage) A relationship between a factor(s) and a response variable(s). Specific types include the main effect, dispersion effect, and interaction effect. eighty–twenty (80–20) rule—A term referring to the Pareto principle, which suggests that most effects come from relatively few causes; that is, 80% of the effects come from 20% of the possible causes. element—See unit. employee involvement—A practice within an organization whereby employees regularly participate in making decisions on how their work areas operate, including making suggestions for improvement, planning, objectives setting, and monitoring performance. empowerment—A condition whereby employees have the authority to make decisions and take action in their work areas, within stated bounds, without prior approval. For example, an operator can stop a production process upon detecting a problem, or a customer service representative can send out a replacement product if a customer calls with a problem. end users—External customers who purchase products/services for their own use.
environmental analysis/scanning—Identifying and monitoring factors from both inside and outside the organization that may impact the long-term viability of the organization. EPA—Environmental Protection Agency. event—An occurrence of some attribute or outcome. In the quality field, events are often nonconformities. evolutionary operation (EVOP)—A sequential form of experimentation conducted in production facilities during regular production. The range of variation of the factor levels is usually quite small in order to avoid extreme changes, so it often requires considerable replication and time. It is most useful when it would be difficult to apply standard experimentation techniques such as factorial designs. EVOP—See evolutionary operation. EWMA—See exponentially weighted moving average. EWMA chart—See exponentially weighted moving average chart. excited quality—The additional benefit a customer receives when a product or service goes beyond basic expectations. Excited quality wows the customer and distinguishes the provider from the competition. If missing, the customer will still be satisfied. executive education—Usually refers to the education (and training) provided to top management. expected quality—Also known as basic quality, the minimum benefit or value a customer expects to receive from a product or service. experiment space—See design space. experimental design—See design of experiments. experimental error—The variation in the response variable beyond that accounted for by the factors, blocks, or other assignable sources in the conduct of an experiment. experimental plan—The assignment of treatments to an experimental unit, and the time order in which the treatments are to be applied. experimental run—A single performance of an experiment for a specific set of treatment combinations. experimental unit—The smallest entity receiving a particular treatment, subsequently yielding a value of the response variable. experimentation—See design of experiments. explanatory variable—See predictor variable. exploratory data analysis (EDA)—An approach to data analysis that isolates patterns and features of the data and reveals them forcefully to the analyst.
exponential distribution—A continuous distribution where data are more likely to occur below the average than above it. Typically used to describe the constant failure rate (useful life) portion of the bathtub curve. exponentially weighted moving average (EWMA)—A moving average that is not greatly influenced when a small or large value enters the calculation:

$$w_t = \lambda x_t + (1 - \lambda)w_{t-1}$$

where $w_t$ is the exponentially weighted moving average (EWMA) at the present time t, λ (lambda) is a defined or experimentally determined weighting factor, $x_t$ is the observed value at time t, and $w_{t-1}$ is the EWMA at the immediately preceding time interval. exponentially weighted moving average chart (EWMA chart)—A control chart where each new result is averaged with the previous average value using an experimentally determined weighting factor, λ. Usually only the averages are plotted and the range is omitted. The action signal is a single point out of limits. The advantage of the EWMA chart compared to the MA chart is that the average does not jump when an extreme value leaves the moving average. The weighting factor λ can be determined by an effective time period. Additional tests are not appropriate because of the high correlation between successive averages.

Center line: $\bar{x}_0$

Control limits: $\bar{x}_0 \pm 3\sigma_{w_t}$

external audit—An audit performed by independent and impartial auditors who are not members of the organization being audited. external customer—A person or organization who receives a product, a service, or information but is not part of the organization supplying it. See also internal customer.
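As a quick numeric illustration of the EWMA recursion above, here is a minimal Python sketch (not from the handbook; the data values, λ = 0.2, and the seed value are made-up assumptions):

# Minimal EWMA sketch; data, lambda, and seed value are hypothetical.
def ewma(values, lam, w0):
    """Return the EWMA series w_t = lam * x_t + (1 - lam) * w_{t-1}."""
    w, series = w0, []
    for x in values:
        w = lam * x + (1 - lam) * w
        series.append(w)
    return series

data = [10.1, 9.7, 10.4, 10.0, 10.8]
print([round(w, 3) for w in ewma(data, lam=0.2, w0=10.0)])

Seeding w0 with the process target (or historical average) is a common choice; a small λ smooths heavily, while λ = 1 reproduces the raw data.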
F Fν1,ν2—F-test statistic. See F-test. facilitator—An individual who is responsible for creating favorable conditions that will enable a team to reach its purpose or achieve its goals by bringing together the necessary tools, information, and resources to get the job done. A facilitator addresses the processes a team uses to achieve its purpose. factor—A predictor variable that is varied with the intent of assessing its effect on a response variable. factor level—See level. factorial design—An experimental design consisting of all possible treatments formed from two or more factors, each studied at two or more levels. When
all combinations are run, the interaction effects as well as main effects can be estimated. See also $2^k$ factorial design. failure—The event, or inoperable state, in which an item or part of an item does not, or would not, perform as previously specified. failure, dependent—Failure that is caused by the failure of an associated item(s). Not independent. failure, independent—Failure that occurs without being caused by the failure of any other item. Not dependent. failure, random—Failure whose occurrence is predictable only in a probabilistic or statistical sense. This applies to all distributions. failure analysis—Subsequent to a failure, the logical systematic examination of an item, its construction, application, and documentation to identify the failure mode and determine the failure mechanism and its basic cause. failure mechanism—The physical, chemical, electrical, thermal, or other process that results in failure. failure mode—The consequence of the mechanism through which the failure occurs, such as short, open, fracture, excessive wear. failure mode analysis (FMA)—A procedure to determine which malfunction symptoms appear immediately before or after a failure of a critical parameter in a system. After all the possible causes are listed for each symptom, the product is designed to eliminate the problems. failure mode and effects analysis (FMEA)—A procedure in which each potential failure mode in every subitem of an item is analyzed to determine its effect on other subitems and on the required function of the item. Typically, two types of FMEAs are used: DFMEA (design) and PFMEA (process). failure rate—The total number of failures within an item population divided by the total number of life units expended by that population during a particular measurement interval under stated conditions. FDA—Food and Drug Administration. F distribution—A continuous distribution that is a useful reference for assessing the ratio of independent variances. See variance, tests for. feedback—The response to information received in interpersonal communication (written or oral); it may be based on fact or feeling and helps the party who is receiving the information judge how well he or she is being understood by the other party. Generally, feedback is information about a process or performance and is used to make decisions that are directed toward improving or adjusting the process or performance as necessary. feedback loops—Pertains to open-loop and closed-loop feedback. Open-loop feedback focuses on how to detect or measure problems in the inputs and how to plan for contingencies. Closed-loop feedback focuses on how to measure the outputs and how to determine the control points where adjustment can be made.
filters—Relative to human-to-human communication, those perceptions (based on culture, language, demographics, experience, and so on) that affect how a message is transmitted by the sender and how a message is interpreted by the receiver. finding—A conclusion of importance based on observation(s) and/or research. Example: An audit finding. first-article testing—Testing of a production process to demonstrate process capability and to confirm that tooling meets the quality standard. fishbone diagram—See cause-and-effect diagram. fitness for use—A term used to indicate that a product or service fits the customer’s defined purpose for that product or service. five S’s—Five practices for maintaining a clean and efficient workplace (Japanese). five why’s—A repetitive questioning technique for probing deeper to surface the root cause of a problem. fixed factor—A factor that only has a limited number of levels that are of interest. In general, inference is not made to other levels of fixed factors not included in the experiment. For example, a manufacturing environment may have exactly three milling machines, all of which are used in the experiment. See also random factor. fixed model—A model that contains only fixed factors. flowchart—A graphical representation of the steps in a process. Flowcharts are drawn to better understand processes. The flowchart is one of the seven basic tools of quality. focus group—A qualitative discussion group consisting of 8 to 10 participants, invited from a segment of the customer base to discuss an existing or planned product or service, led by a facilitator working from predetermined questions (focus groups may also be used to gather information in a context other than customers). fourteen (14) points—W. Edwards Deming’s 14 management practices to help organizations increase their quality and productivity:

1. Create constancy of purpose for improving products and services
2. Adopt a new philosophy
3. Cease dependence on inspection to achieve quality
4. End the practice of awarding business on price alone; instead, minimize total cost by working with a single supplier
5. Improve constantly and forever every process for planning, production, and service
6. Institute training on the job
7. Adopt and institute leadership
8. Drive out fear
9. Break down barriers between staff areas
10. Eliminate slogans, exhortations, and targets for the workforce
11. Eliminate numerical quotas for the workforce and numerical goals for management
12. Remove barriers that rob people of pride of workmanship and eliminate the annual rating or merit system
13. Institute a vigorous program of education and self-improvement for everyone
14. Put everybody in the company to work to accomplish the transformation

fractional factorial design—An experimental design consisting of a subset (fraction) of the factorial design. Typically, the fraction is a simple proportion of the full set of possible treatment combinations. For example, half-fractions, quarter-fractions, and so forth, are common. While fractional factorial designs require fewer runs, some degree of confounding occurs. frequency—The number of occurrences or observed values in a specified class, sample, or population. frequency distribution—A set of all the various values that individual observations may have and the frequency of their occurrence in the sample or population. See cumulative frequency distribution. F-test—A statistical test that uses the F distribution. It is most often used when dealing with a hypothesis related to the ratio of independent variances:

$$F = \frac{s_L^2}{s_S^2}$$

where $s_L^2$ is the larger variance and $s_S^2$ is the smaller variance. See analysis of variance. fully nested design—See nested design.
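To make the variance-ratio calculation concrete, here is a minimal Python sketch (not from the handbook; the two samples are hypothetical). It places the larger sample variance in the numerator, as the definition requires:

# F statistic for comparing two independent sample variances (hypothetical data).
from statistics import variance

a = [4.1, 3.9, 4.3, 4.0, 4.2, 3.8]
b = [4.4, 3.5, 4.9, 3.6, 4.7, 3.4]

s2_a, s2_b = variance(a), variance(b)   # sample variances (n - 1 divisor)
F = max(s2_a, s2_b) / min(s2_a, s2_b)   # larger variance over smaller
# Numerator degrees of freedom belong to the sample with the larger variance.
print(f"F = {F:.2f} with {len(a) - 1} and {len(b) - 1} degrees of freedom")

The computed F would then be compared against a critical value of the F distribution with the corresponding numerator and denominator degrees of freedom.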
G g chart—An attributes control chart based on a shifted geometric distribution for the number of occurrences of a given event per unit of process output. The g chart is for the total number of events:

Center line (CL): $\bar{t}$

Upper control limit (UCL): $\bar{t} + k\sqrt{n\left(\dfrac{\bar{t}}{n} - \alpha\right)\left(\dfrac{\bar{t}}{n} - \alpha + 1\right)}$

Lower control limit (LCL): $\bar{t} - k\sqrt{n\left(\dfrac{\bar{t}}{n} - \alpha\right)\left(\dfrac{\bar{t}}{n} - \alpha + 1\right)}$
where α is the known minimum possible number of events, n is the sample size, k is the σ multiplier (for example, 3σ), and $\bar{t}$ is the average total number of events per subgroup. The calculation for $\bar{t}$ is

$$\bar{t} = \frac{t_1 + t_2 + \cdots + t_r}{r}$$
where t1 + t2 + . . . + tr is the total number of events occurring in each subgroup and r is the number of subgroups. (Note that r is not the correlation coefficient r used elsewhere in this glossary.) The g chart is commonly used in healthcare and hospital settings. See h chart for the average number of events chart. gage R&R study—A type of measurement system analysis done to evaluate the performance of a test method or measurement system. Such a study quantifies the capabilities and limitations of a measurement instrument, often estimating its repeatability and reproducibility. It typically involves multiple operators measuring a series of items multiple times. Gantt chart—A type of bar chart used in process/project planning and control to display planned work and finished work in relation to time. Also called a milestone chart when interim checkpoints are added. gap analysis—A technique that compares a company’s existing state to its desired state (as expressed by its long-term plans) to help determine what needs to be done to remove or minimize the gap. gatekeeping—The role of an individual (often a facilitator) in a group meeting in helping ensure effective interpersonal interactions (for example, someone’s ideas are not ignored due to the team moving on to the next topic too quickly). Gaussian distribution—See normal distribution. generator—In design of experiments, a generator is used to determine the level of confounding and the pattern of aliases in a fractional factorial design. geometric distribution—A case of the negative binomial distribution where c = 1 (c is the integer parameter). The geometric distribution is a discrete distribution. goal—A statement of general intent, aim, or desire; it is the point toward which management directs its efforts and resources; goals are usually nonquantitative. Graeco-Latin square design—An extension of a Latin square design that involves four factors in which the combination of the level of any one of them with the levels of the other three appears once and only once. groupthink—Occurs when most or all team members coalesce in supporting an idea or decision that hasn’t been fully explored, or when some members secretly disagree but go along with the other members in apparent support.
H h chart—An attributes control chart based on a shifted geometric distribution for the number of occurrences of a given event per unit of process output. The h chart is for the average number of events:

Center line (CL): $\dfrac{\bar{t}}{n}$

Upper control limit (UCL): $\dfrac{\bar{t}}{n} + \dfrac{k}{n}\sqrt{n\left(\dfrac{\bar{t}}{n} - \alpha\right)\left(\dfrac{\bar{t}}{n} - \alpha + 1\right)}$

Lower control limit (LCL): $\dfrac{\bar{t}}{n} - \dfrac{k}{n}\sqrt{n\left(\dfrac{\bar{t}}{n} - \alpha\right)\left(\dfrac{\bar{t}}{n} - \alpha + 1\right)}$

where α is the known minimum possible number of events, n is the sample size, k is the σ multiplier (for example, 3σ), and $\bar{t}$ is the average total number of events per subgroup. The calculation for $\bar{t}$ is

$$\bar{t} = \frac{t_1 + t_2 + \cdots + t_r}{r}$$
where t1 + t2 + . . . + tr is the total number of events occurring in each subgroup and r is the number of subgroups. (Note that r is not the correlation coefficient r used elsewhere in this glossary.) The h chart is commonly used in healthcare and hospital settings. See g chart for the total number of events chart. H0 —See null hypothesis. H1—See alternative hypothesis. HA—See alternative hypothesis. hierarchical design—See nested design. histogram—A plot of a frequency distribution in the form of rectangles (cells) whose bases are equal to the class interval and whose areas are proportional to the frequencies. The pictorial nature of the histogram lets people see patterns that are difficult to see in a simple table of numbers. The histogram is one of the seven basic tools of quality. hoshin kanri, hoshin planning—Japanese-based strategic planning/policy deployment process that involves consensus at all levels as plans are cascaded throughout the organization, resulting in actionable plans and continual monitoring and measurement. Hotelling’s T2 —See T2. house of quality—A diagram (named for its house-shaped appearance) that clarifies the relationship between customer needs and product features. It helps correlate market or customer requirements and analysis of competitive products with higher-level technical and product characteristics and makes it possible to bring several factors into a single figure. Also known as quality function deployment (QFD).
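The g- and h-chart limits defined above differ only by a factor of n, so both sets can be computed from the same inputs. Here is a minimal Python sketch (not from the handbook; the subgroup counts, n, α, and k are made-up assumptions):

# g- and h-chart limits from subgroup event totals (all inputs hypothetical).
from math import sqrt

totals = [3, 7, 4, 6, 5, 8, 2, 5]   # t_1..t_r: total events in each subgroup
n = 10                              # subgroup (sample) size
alpha = 0                           # known minimum possible number of events
k = 3                               # sigma multiplier

t_bar = sum(totals) / len(totals)        # average total events per subgroup
u = t_bar / n - alpha
half_width = k * sqrt(n * u * (u + 1))   # shared half-width term

print(f"g chart: CL={t_bar:.2f}, UCL={t_bar + half_width:.2f}, "
      f"LCL={t_bar - half_width:.2f}")
print(f"h chart: CL={t_bar / n:.3f}, UCL={(t_bar + half_width) / n:.3f}, "
      f"LCL={(t_bar - half_width) / n:.3f}")

In practice a negative lower limit would be truncated at zero, since event counts cannot be negative.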
hypothesis—A statement about a population to be tested. See null hypothesis, alternative hypothesis, and hypothesis testing. hypothesis testing—A statistical hypothesis is a conjecture about a population parameter. There are two statistical hypotheses for each situation—the null hypothesis (H0) and the alternative hypothesis (H1). The null hypothesis proposes that there is no difference between the population of the sample and the specified population; the alternative hypothesis proposes that there is a difference between the sample and the specified population.
I i—See moving average. I chart (individuals control chart)—See individuals control chart. imprecision—A measure of precision computed as a standard deviation of the test or measurement results. Less precision is reflected by a larger standard deviation. incomplete block—A block that accommodates only a subset of treatment combinations. See balanced incomplete block design. incomplete block design—A design in which the design space is subdivided into blocks in which there are insufficient experimental units available to run a complete replicate of the experiment. in-control process—A condition where the existence of special causes is no longer indicated by a Shewhart control chart. It indicates (within limits) a predictable and stable process, but it does not indicate that only random causes remain; nor does it imply that the distribution of the remaining values is normal (Gaussian). independent variable—See predictor variable. indicators—Predetermined measures used to measure how well an organization is meeting its customers’ needs and its operational and financial performance objectives. Such indicators can be either leading or lagging indicators. Indicators are also devices used to measure lengths or flow. indifference quality level—The quality level that, in an acceptance sampling plan, corresponds to a probability of acceptance of 0.5 when a continuing series of lots is considered. indifference zone—The region containing quality levels between the acceptance quality limit and the limiting quality level. indirect customers—Customers who do not receive process output directly but are affected if the process output is incorrect or late. individual development—A process that may include education and training, but also includes many additional interventions and experiences to enable an individual to grow and mature both intellectually as well as emotionally. individuals control chart (x chart; I chart)—A variables control chart with individual values plotted in the form of a Shewhart control chart for individuals and subgroup size n = 1.
Note 1: The x chart is widely used in the chemical industry because of cost of testing, test turnaround time, and the time interval between independent samples. Note 2: This chart is usually accompanied by a moving range chart, commonly with n = 2. Note 3: An individuals chart sacrifices the advantages of averaging subgroups (and the assumptions of the normal distribution central limit theorem) to minimize random variation:

Center line (CL): $\bar{x}$ ($x_0$ if standard given)

Control limits: $\bar{x} \pm E_2\overline{MR}$ or $\bar{x} \pm E_3\bar{s}$ (if standard given, $x_0 \pm 3\sigma_0$)

where $\bar{x}$ is the average of the individual values

$$\bar{x} = \frac{x_1 + x_2 + \cdots + x_n}{n}$$

$\overline{MR}$ is the average moving range, $\bar{s}$ is the average sample standard deviation, $x_0$ is the standard value for the average, and $\sigma_0$ is the standard value for the population standard deviation. (Use the formula with $\overline{MR}$ when the sample size is small, and the formula with $\bar{s}$ when the sample is larger, generally > 10 to 12.) inherent process variation—The variation in a process when the process is operating in a state of statistical control. inherent R and M value—A measure of reliability or maintainability that includes only the effects of an item design and its application and assumes an ideal operation and support environment. input—Material, product, or service that is obtained from an upstream internal provider or an external supplier and is used to produce an output. input variable—A variable that can contribute to the variation in a process. inspection—Measuring, examining, testing, and gauging one or more characteristics of a product or service and comparing the results with specified requirements to determine whether conformity is achieved for each characteristic. inspection, 100 percent—See 100 percent inspection. inspection, attributes—See inspection by attributes. inspection, curtailed—See curtailed inspection. inspection, normal—See normal inspection. inspection, rectifying—See rectifying inspection. inspection, reduced—See reduced inspection.
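The individuals-chart limits above reduce to simple arithmetic once the average moving range is in hand. Here is a minimal Python sketch (not from the handbook; the data are hypothetical), using the standard constant E2 ≈ 2.66 for moving ranges of size 2:

# Individuals (x) chart limits from the average moving range (hypothetical data).
data = [9.45, 7.99, 9.29, 11.66, 12.16, 10.30, 9.80]

x_bar = sum(data) / len(data)
moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]   # spans of 2
mr_bar = sum(moving_ranges) / len(moving_ranges)

E2 = 2.66   # control chart factor for moving ranges of size 2 (3/d2, d2 = 1.128)
print(f"CL={x_bar:.3f}, UCL={x_bar + E2 * mr_bar:.3f}, "
      f"LCL={x_bar - E2 * mr_bar:.3f}")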
inspection, tightened—See tightened inspection. inspection, variables—See inspection by variables. inspection by attributes—An inspection noting the presence, or absence, of the characteristic(s) in each of the items in the group under consideration and counting how many items do, or do not, possess the characteristic(s), or how many such events occur in the item, group, or opportunity space. Note: When inspection is performed by simply noting whether the item is nonconforming or not, the inspection is termed inspection for nonconforming items. When inspection is performed by noting the number of nonconformities on each unit, the inspection is termed inspection for number of nonconformities. inspection by variables—An inspection measuring the magnitude(s) of the characteristic(s) of an item. inspection level—An index of the relative amount of inspection of an acceptance sampling scheme chosen in advance and relating the sample size to the lot size. Note 1: A lower/higher inspection level can be selected if experience shows that a less or more discriminating operating characteristic curve will be appropriate. Note 2: The term should not be confused with severity of sampling, which concerns switching rules that operate automatically. inspection lot—A collection of similar units or a specific quantity of similar material offered for inspection and subject to a decision with respect to acceptance. interaction effect—The effect for which the apparent influence of one factor on the response variable depends on one or more other factors. Existence of an interaction effect means that the factors cannot be changed independently of each other. interaction plot—The plot providing the average responses at the combinations of levels of two distinct factors. interactive multimedia—A term encompassing technology that allows the presentation of facts and images with physical interaction by the viewer, for example, taking a simulated certification exam on a computer, or training embedded in transaction processing software. intercept—See regression analysis. intermediate customers—Distributors, dealers, or brokers who make products and services available to the end user by repairing, repackaging, reselling, or creating finished goods from components or subassemblies. internal audit—An audit performed by impartial members of the organization being audited. internal customer—The recipient (person or department) of another person’s or department’s output (product, service, or information) within an organization. See also external customer.
internal reference material—A reference material that is not traceable to a national body but is stable (within stated time and storage conditions), homogeneous, and appropriately characterized. interrelationship digraph—A management and planning tool that displays the relationship between factors in a complex situation. It identifies meaningful categories from a mass of ideas and is useful when relationships are difficult to determine. intervention—An action taken by a leader or a facilitator to support the effective functioning of a team or work group. inventory, active—The group of items assigned an operational status. inventory, inactive—The group of items being held in reserve for possible future assignment to an operational status. Ishikawa diagram—See cause-and-effect diagram. isolated lot—A unique lot, or one separated from the sequence of lots in which it was produced or collected. isolated sequence of lots—A group of lots produced in succession but not forming part of a large sequence or produced by a continuing process. item—A nonspecific term used to denote any product, including systems, materials, parts, subassemblies, sets, accessories, and so on (Source: MIL-STD-280).
J jidoka—Japanese method of autonomous control involving the adding of intelligent features to machines to start or stop operations as control parameters are reached, and to signal operators when necessary. job aid—Any device, document, or other media that can be provided to a worker to aid in correctly performing tasks (for example, laminated setup instruction card hanging on machine, photos of product at different stages of assembly, or a metric conversion table). job description—A narrative explanation of the work, responsibilities, and basic requirements of a job. job enrichment—Increasing the worker’s responsibilities and authority in work to be done. job specification—A list of the important functional and quality attributes (knowledge, skills, aptitudes, and personal characteristics) needed to succeed in the job. joint planning meeting—A meeting involving representatives of a key customer and the sales and service team for that account to determine how better to meet the customer’s requirements and expectations. Juran’s trilogy—See quality trilogy.
just-in-time (JIT) manufacturing—An optimal material requirement planning system for a manufacturing process in which there is little or no manufacturing material inventory on hand at the manufacturing site and little or no incoming inspection. just-in-time training—Providing job training coincidental with, or immediately prior to, an employee’s assignment to a new or expanded job.
K kaikaku—A Japanese term that means a breakthrough improvement in eliminating waste. kaizen—A Japanese term that means gradual, unending improvement by doing little things better and setting and achieving increasingly higher standards. The term was made famous by Masaaki Imai in his book Kaizen: The Key to Japan’s Competitive Success. kaizen blitz/event—An intense, short time frame (typically three to five consecutive days), team approach to employ the concepts and techniques of continual improvement (for example, to reduce cycle time or increase throughput). kanban—A system inspired by Taiichi Ohno’s (Toyota) visit to a US supermarket. The system signals the need to replenish stock or materials or to produce more of an item (also called a pull approach). Kano model—A representation of the three levels of customer satisfaction defined as dissatisfaction, neutrality, and delight. Named after Noriaki Kano. kurtosis—A measure of peakedness or flattening of a distribution near its center in comparison to the normal distribution.
L L—See lower specification limit. λ (lambda)—The weighting factor in EWMA calculations where 0 < λ ≤ 1. lateral thinking—A process that includes recognizing patterns, becoming unencumbered with old ideas, and creating new ones. Latin square design—A design involving three factors in which the combination of the levels of any one of them with the levels of the other two appears once and only once. It is often used to reduce the impact of two blocking factors by balancing out their contributions. A basic assumption is that these block factors do not interact with the factor of interest or with each other. This design is particularly useful when the assumptions are valid for minimizing the amount of experimentation. See Graeco-Latin square design. LCALI—A process for operating a listening-post system for capturing and using formerly unavailable customer data. LCALI stands for listen, capture, analyze, learn, and improve.
LCL—See lower control limit. leader—An individual recognized by others as the person to lead an effort. One cannot be a leader without one or more followers. The term is often used interchangeably with “manager.” A leader may or may not hold an officially designated management-type position. leadership—An essential part of a quality improvement effort. Organization leaders must establish a vision, communicate that vision to those in the organization, and provide the tools, knowledge, and motivation necessary to accomplish the vision. lean approach/lean thinking—A focus on reducing cycle time and waste using a number of different techniques and tools, for example, value stream mapping, and identifying and eliminating monuments and non-value-added steps. “Lean” and “agile” are often used interchangeably. lean manufacturing—Applying the lean approach to improving manufacturing operations. learner-controlled instruction—When a learner works without an instructor, at an individual pace, building mastery of a task. Computer-based training is a form of LCI. Also called self-directed learning. learning curve—The time it takes to achieve mastery of a task or body of knowledge. learning organization—An organization whose policy is to continue to learn and improve its products, services, processes, and outcomes. least squares, method of—A technique of estimating a parameter that minimizes the sum of squared differences between the observed values and the values predicted by the model (the residuals). Note: Experimental errors associated with individual observations are assumed to be independent. The usual analysis of variance, regression analysis, and analysis of covariance are all based on the method of least squares. lesson plan—A detailed plan created to guide an instructor in delivering training and/or education. level—A potential setting, value, or assignment of a factor or the value of the predictor variable. level of significance—See significance level. life cycle—A product life cycle is the total time frame from product concept to the end of its intended use; a project life cycle is typically divided into five stages: concept, planning, design, implementation, evaluation, and closeout. life units—A measure of duration applicable to the item, for example, operating hours, cycles, distance, rounds fired, attempts to operate. limiting quality level (LQL)—The quality level that, for the purposes of acceptance sampling inspection, is the limit of an unsatisfactory process average when a continuing series of lots is considered.
Note: The LQL is sometimes referred to as the rejectable quality level (RQL), unacceptable quality level (UQL), or limiting quality (LQ), but limiting quality level is the preferred term. linear regression—The mathematical modeling, via a fitted straight line, of the relationship displayed in a scatter diagram when the correlation reflects an actual cause-and-effect relationship. linear regression coefficients—The values associated with each predictor variable in a linear regression equation. They tell how the response variable changes with each unit increase in the predictor variable. See regression analysis. linear regression equation—A function that indicates the linear relationship between a set of predictor variables and a response variable. See regression analysis. linearity (general sense)—The degree to which a pair of variables follows a straight-line relationship. Linearity can be measured by the correlation coefficient. linearity (measurement system sense)—The difference in bias through the range of measurement. A measurement system that has good linearity will have a constant bias no matter the magnitude of measurement. If one views the relation between the observed measurement result on the y-axis and the true value on the x-axis, an ideal measurement system would have a line of slope equal to 1. line balancing—A method of proportionately distributing workloads within the value stream to meet takt time. line graph budget—A graphic representation of the planned and/or actual expenditures of a project’s budget throughout the performance of the project life cycle. lognormal distribution—If log x is normally distributed, x follows a lognormal distribution. See normal distribution. lot—A definite part of a population constituted under essentially the same conditions as the population with respect to the sampling purpose. Note: The sampling purpose may, for example, be to determine lot acceptability or to estimate the mean value of a particular characteristic. lot-by-lot—The inspection of a product submitted in a series of lots. lot quality—A statistical measure of quality of the product from a given lot. These measures may relate to the occurrence of events or to physical measurements. For the purposes of this glossary, the most commonly used measure of lot quality is likely to be the percentage or proportion of nonconforming units in a lot. lot size (N)—The number of items in a lot. lot tolerance percent defective (LTPD)—When the percentage of nonconforming units is expressed as a percent defective, this may be referred to as the lot tolerance percent defective (LTPD). lower control limit (LCL)—The control chart control limit that defines the lower control boundary.
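As an illustration of the method of least squares behind the linear regression entries above, here is a minimal Python sketch (not from the handbook; the x–y data are hypothetical) that computes the slope and intercept in closed form:

# Simple linear regression by the method of least squares (hypothetical data).
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 2.9, 4.2, 4.8, 6.1]

n = len(xs)
x_bar = sum(xs) / n
y_bar = sum(ys) / n

# Slope and intercept that minimize the sum of squared residuals.
sxy = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
sxx = sum((x - x_bar) ** 2 for x in xs)
slope = sxy / sxx
intercept = y_bar - slope * x_bar
print(f"fitted line: y = {intercept:.3f} + {slope:.3f}x")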
lower specification limit (lower tolerance limit, L)—The specification or tolerance limit that defines the lower limiting value. LQL—See limiting quality level. LRPL—See rejectable process level. LTPD—See note under limiting quality level.
M MA chart—See moving average chart. main effect—The influence of a single factor on the mean of a response variable. main effects plot—A plot giving the average responses at the various levels of individual factors. maintainability—The measure of the ability of an item to be retained or restored to specified condition when maintenance is performed by personnel having specified skill levels, using prescribed procedures and resources, at each prescribed level of maintenance and repair. maintenance—All actions necessary for retaining an item in or restoring it to a specified condition. maintenance, corrective—All actions performed, as a result of failure, to restore an item to a specified condition. Corrective maintenance can include any or all of the following steps: localization, isolation, disassembly, interchange, reassembly, alignment, and checkout. maintenance, preventive—All actions performed in an attempt to retain an item in a specified condition by providing systematic inspection, detection, and prevention of incipient failures. maintenance action rate—The reciprocal of the mean time between maintenance actions = 1/MTBMA. maintenance time constraint (t)—The maintenance time that is usually prescribed as a requirement of the mission. Malcolm Baldrige National Quality Award (MBNQA)—An award established by Congress in 1987 to raise awareness of quality management and to recognize U.S. companies that have implemented successful quality management systems. Criteria for Performance Excellence are published each year. Three awards may be given annually in each of five categories: manufacturing businesses, service businesses, small businesses, education institutions, and healthcare organizations. The award is named after the late Secretary of Commerce Malcolm Baldrige, a proponent of quality management. The US Commerce Department’s National Institute of Standards and Technology manages the award, and ASQ administers it. The major emphasis in determining success is achieving results. market-perceived quality—The customer’s opinion of your products or services as compared to those of your competitors.
materials review board (MRB)—A quality control committee or team, usually employed in manufacturing or other materials-processing installations, that has the responsibility and authority to deal with items or materials that do not conform to fitness-for-use specifications. An equivalent, the error review board, is sometimes used in software development. matrix chart/diagram—A management and planning tool that shows the relationships between various groups of data; it yields information about the relationships and the importance of task/method elements of the subjects. mean—See arithmetic mean. mean maintenance downtime (MDT)—Includes supply time, logistics time, administrative delays, active maintenance time, and so on. mean maintenance time—The measure of item maintainability taking into account maintenance policy. The sum of preventive and corrective maintenance times, divided by the sum of scheduled and unscheduled maintenance events, during a stated period of time. mean time between failures (MTBF)—A basic measure of reliability for repairable items: the mean number of life units during which all parts of the item perform within their specified limits, during a particular measurement interval, under stated conditions. mean time between maintenance (MTBM)—A measure of reliability, taking into account maintenance policy: the total number of life units expended by a given time, divided by the total number of maintenance events (scheduled and unscheduled) due to that item. mean time between maintenance actions (MTBMA)—A measure of the system reliability parameter related to demand for maintenance manpower: the total number of system life units, divided by the total number of maintenance actions (preventive and corrective) during a stated period of time. mean time to failure (MTTF)—A basic measure of system reliability for nonrepairable items: the total number of life units of an item, divided by the total number of failures within that population during a particular measurement interval, under stated conditions. mean time to repair (MTTR)—A basic measure of maintainability: the sum of corrective maintenance times at any specific level of repair, divided by the total number of failures within an item repaired at that level during a particular interval, under stated conditions. means, tests for—Testing for means includes computing a confidence interval and hypothesis testing by comparing means to a population mean (known or unknown) or to other sample means. Which test to use is determined by whether σ is known, whether the test involves one or two samples, and other factors. If σ is known, the z-confidence interval and z-test apply. If σ is unknown, the t-confidence interval and t-test apply. measurement system—Everything that can introduce variability into the measurement process, such as equipment, operator, sampling methods, and accuracy.
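To illustrate the σ-unknown case described under means, tests for, here is a minimal Python sketch (not from the handbook; the sample values and hypothesized mean are made up) that computes a one-sample t statistic:

# One-sample t statistic for a test of the mean when sigma is unknown.
from math import sqrt
from statistics import mean, stdev

sample = [10.2, 9.8, 10.5, 10.1, 9.9, 10.4, 10.3]
mu0 = 10.0   # hypothesized population mean (H0: mu = mu0)

n = len(sample)
t = (mean(sample) - mu0) / (stdev(sample) / sqrt(n))
print(f"t = {t:.3f} with {n - 1} degrees of freedom")
# Compare |t| with the critical value from a t table at the chosen
# significance level to decide whether to reject H0.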
measurement systems analysis (MSA)—A statistical analysis of a measurement system to determine the variability, bias, precision, and accuracy of the measurement system(s). Such studies may also include the limit of detection, selectivity, linearity, and other characteristics of the system in order to determine the suitability of the measurement system for its intended purpose. median—The value for which half the data are larger and half are smaller. The median provides an estimator that is insensitive to very extreme values in a data set, whereas the average is affected by extreme values. Note: For an odd number of units, the median is the middle measurement; for an even number of units, the median is the average of the two middle measurements. mentor—A person who voluntarily assumes a role of trusted advisor and teacher to another person. The mentor may or may not be the mentored person’s organizational superior or even in the same organization. Usually, the only reward the mentor receives is self-gratification in having helped someone else. meta-analysis—The use of statistical methods to combine the results of multiple studies into a single conclusion. method of least squares—See least squares, method of. Mi —See moving average. midrange—(Highest value + lowest value)/2. mind mapping—A technique for creating a visual representation of a multitude of issues or concerns by forming a map of the interrelated ideas. mission profile—A time-phased description of the events and environments an item experiences from initiation to completion of a specified mission, to include the criteria of mission success or critical failures. mission statement—An explanation of purpose or reasons for existing as an organization; it provides the focus for the organization and defines its scope of business. mistake-proofing—The use of process or design features to prevent manufacture of nonconforming product. See poka-yoke. mixed model—A model where some factors are fixed, but other factors are random. See fixed factors and random factors. mixed variables and attribute sampling—The inspection of a sample by attributes, in addition to inspection by variables already made of a previous sample from the lot, before a decision as to acceptability or rejectability of the lot can be made. mixture design—A design constructed to handle the situation in which the predictor variables are constrained to sum to a fixed quantity, such as proportions of ingredients that make up a formulation or blend. mode—The most frequent value of a variable.
model—The description relating a response variable to predictor variable(s) and including attendant assumptions. model I analysis of variance (fixed model)—An analysis of variance in which the levels of all factors are fixed rather than random selections over the range of versions to be studied for those factors. model II analysis of variance (random model)—An analysis of variance in which the levels of all factors are assumed to be selected at random over the range of versions to be studied for those factors. moving average—Let $x_1, x_2, \ldots$ denote individual observations. The moving average of span w at time i is

$$M_i = \frac{x_i + x_{i-1} + \cdots + x_{i-w+1}}{w}$$
At time period i, the oldest observation in the moving average set is dropped and the newest one added to the set.
The variance, V, of the moving average $M_i$ is

$$V(M_i) = \frac{1}{w^2}\sum_{j=i-w+1}^{i} V(x_j) = \frac{1}{w^2}\sum_{j=i-w+1}^{i} \sigma^2 = \frac{\sigma^2}{w}$$
moving average chart (MA chart)—A variables control chart where the simple unweighted moving average of the last w observations is plotted and provides a convenient substitute for the $\bar{x}$ chart. A fixed number of samples, w, is required for averaging, but the choice of the interval for w may be difficult to optimize. The control signal is a single point out of limits. Additional tests are not appropriate because of the high correlation introduced between successive averages that share several common values. The MA chart is sensitive to trends and is sometimes used in conjunction with individuals charts and/or MR charts. It is, however, probably less effective than either the CUSUM or exponentially weighted moving average (EWMA) chart for this purpose. Note 1: This chart is particularly useful when only one observation per subgroup is available. Examples are process characteristics such as temperature, pressure, and time. Note 2: This chart has the disadvantage of an unweighted carryover effect lasting w points.

Each plotted point: $M_i = \dfrac{x_i + x_{i-1} + \cdots + x_{i-w+1}}{w}$

where $M_i$ is the moving average of span w at time i. At time period i, the oldest observation in the moving average set is dropped and the newest one is added to the set. For time periods where i < w, the average of the observations for periods 1, 2, . . . , i is plotted. Example (where w = 4):
Observation, i     xi        Mi
1                  9.45      9.45
2                  7.99      8.72
3                  9.29      8.91
4                  11.66     9.5975
5                  12.16     10.275
Central line: $\bar{x}$

Control limits: $\bar{x} \pm \dfrac{3\sigma}{\sqrt{w}}$

where $\bar{x}$ is the average of the individual values

$$\bar{x} = \frac{x_1 + x_2 + \cdots + x_n}{n}$$

n is the total number of samples, σ is the standard deviation, and w is the moving average span. moving range chart (MR chart)—A control chart that plots the absolute difference between the current and previous value. It often accompanies an individuals chart or moving average chart. Note: The current observation replaces the oldest of the latest n + 1 observations.

Central line: $\overline{R}$

Control limits: UCL = $D_4\overline{R}$, LCL = $D_3\overline{R}$

where $D_4$ and $D_3$ are control chart factors.
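The following minimal Python sketch (not from the handbook, though it reuses the example data tabulated above) reproduces the moving averages, including the partial-window averages for i < w:

# Moving average of span w; early periods average however many points exist.
def moving_average(xs, w):
    out = []
    for i in range(len(xs)):
        window = xs[max(0, i - w + 1): i + 1]
        out.append(sum(window) / len(window))
    return out

xs = [9.45, 7.99, 9.29, 11.66, 12.16]
print(moving_average(xs, w=4))   # ≈ [9.45, 8.72, 8.91, 9.5975, 10.275]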
MRB—See materials review board. MR chart—See moving range chart. MSA—See measurement systems analysis. µ (mu)—See population mean. muda—(Japanese) Waste; an activity that consumes resources but creates no value. The seven categories are correction, processing, inventory, waiting, overproduction, internal transport, and motion. multilevel continuous sampling—A continuous acceptance sampling inspection of consecutively produced items where two or more sampling inspection rates are either alternated with 100% inspection or with each other, depending on the quality of the observed process output. multimodal—A probability distribution having more than one mode. See bimodal and trimodal.
multiple acceptance sampling inspection—An acceptance sampling inspection in which, after each sample has been inspected, a decision is made based on defined decision rules to accept, not accept, or take another sample. Note: For most multiple sampling plans, the largest number of samples that can be taken is specified, with an accept or not accept decision being forced at that point. multiple linear regression—See regression analysis. multivariate control chart—A variables control chart that allows plotting of more than one variable. These charts make use of the T2 statistic to combine information from the dispersion and mean of several variables. multivoting—A decision-making tool that enables a group to sort through a long list of ideas to identify priorities. mystery shopper—A person who pretends to be a regular shopper in order to get an unencumbered view of how a company’s service process works.
N n—See sample size. N—See lot size. natural process limits—See reference interval. natural team—A work group having responsibility for a particular process. negative binomial distribution—A two-parameter, discrete distribution. negotiation—A process where individuals or groups work together to achieve common goals under an agreement that ensures each party’s values and priorities are addressed, reaching a win-win result. nested design—A design in which the second factor is nested within levels of the first factor, and each succeeding factor is nested within levels of the previous factor. nested factor—A factor (A) is nested within another factor (B) if the levels of A are different for every level of B. See crossed factor. next operation as customer—Concept that the organization has service/product providers and service/product receivers, or internal customers. noise factor—In robust parameter design, a noise factor is a predictor variable that is hard to control or is not desired to control as part of the standard experimental conditions. In general, it is not desired to make inference on noise factors, but they are included in an experiment to broaden the conclusions regarding control factors. nominal scale—A scale with unordered, labeled categories, or a scale ordered by convention. Examples: type of defect, breed of dog, complaint category. Note: It is possible to count by category, but not order or measure. nonconforming unit—A unit with one or more nonconformities.
nonconformity—The result of nonfulfillment of a specified requirement. See also blemish, defect, and imperfection. nondestructive testing and evaluation (NDT)—Testing and evaluation methods that do not damage or destroy the product being tested. non-value-added—Refers to tasks or activities that can be eliminated with no deterioration in product or service functionality, performance, or quality in the eyes of the customer. normal distribution (Gaussian distribution)—A continuous, symmetrical frequency distribution that produces a bell-shaped curve. The location parameter (x-axis) is the mean, µ. The scale parameter, σ, is the standard deviation of the normal distribution. When measurements have a normal distribution, 68.26% of the values lie within plus or minus one standard deviation of the mean (µ ± 1σ), 95.44% lie within plus or minus two standard deviations of the mean (µ ± 2σ), while 99.73% lie within plus or minus three standard deviations of the mean (µ ± 3σ). The probability function is
$$f(x) = \frac{1}{\sigma\sqrt{2\pi}} \exp\left[-\frac{(x-\mu)^2}{2\sigma^2}\right]$$

where $-\infty < x < \infty$, with parameters $-\infty < \mu < \infty$ and $\sigma > 0$. normal inspection—The inspection that is used when there is no reason to think that the quality level achieved by the process differs from a specified level. See tightened inspection and reduced inspection. normal probability plot—See probability plot. norms—Behavioral expectations, mutually agreed-on rules of conduct, protocols to be followed, social practice. np (number of affected or categorized units)—The total number of units in a sample in which an event of a given classification occurs. A unit (area of opportunity) is counted only once, even if several events of the same classification are encountered. Note: In the quality field, the classification generally is number of nonconforming units. np chart (number of categorized units control chart)—An attributes control chart for the number of events per unit where the opportunity is variable. (The np chart is for the number nonconforming, whereas the p chart is for the proportion nonconforming.) Note: Events of a particular type, for example, number of absentees or number of sales leads, form the count. In the quality field, events are often expressed
as nonconformities, and the variable opportunity relates to subgroups of variable size or variable amounts of material. If a standard is given:

Central line: $np$

Control limits: $np \pm 3\sqrt{np(1-p)}$

where np is the standard value of the events per unit, and p is the standard value of the fraction nonconforming. If no standard is given:

Central line: $n\bar{p}$

Control limits: $n\bar{p} \pm 3\sqrt{n\bar{p}(1-\bar{p})}$

where $n\bar{p}$ is the average value of the events per unit and $\bar{p}$ is the average value of the fraction nonconforming. null hypothesis, H0—The hypothesis in tests of significance that there is no difference (null) between the population of the sample and the specified population (or between the populations associated with each sample). The null hypothesis can never be proved true, but it can be shown (with specified risks of error) to be untrue, that is, a difference exists between the populations. If it is not disproved, one assumes there is no adequate reason to doubt it is true, and the null hypothesis is accepted. If the null hypothesis is shown to be untrue, then the alternative hypothesis is accepted. Example: In a random sample of independent random variables with the same normal distribution with unknown mean and unknown standard deviation, a typical null hypothesis for the mean, µ, is that the mean is less than or equal to a given value, µ0. The hypothesis is written as H0: µ ≤ µ0.
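A minimal Python sketch of the no-standard-given np-chart limits above (not from the handbook; the subgroup counts and sample size are hypothetical):

# np chart limits when no standard is given (hypothetical counts).
from math import sqrt

nonconforming = [4, 6, 3, 5, 7, 4, 6, 5]   # nonconforming units per subgroup
n = 100                                    # constant subgroup size

np_bar = sum(nonconforming) / len(nonconforming)   # average count per subgroup
p_bar = np_bar / n                                 # average fraction nonconforming
half_width = 3 * sqrt(np_bar * (1 - p_bar))

lcl = max(0.0, np_bar - half_width)   # a negative LCL is truncated at zero
print(f"CL={np_bar:.2f}, UCL={np_bar + half_width:.2f}, LCL={lcl:.2f}")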
O objective—A quantitative statement of future expectations and an indication of when the expectations should be achieved; it flows from goals and clarifies what people must accomplish. objective evidence—Verifiable qualitative or quantitative observations, information, records, or statements of fact pertaining to the quality of an item or service or to the existence and implementation of a quality system element. objective setting—See SMART WAY. observation—An item or incidence of objective evidence found during an audit. observational unit—The smallest entity on which a response variable is measured. It may or may not be the same as the experimental unit. observed value—The particular value of a characteristic determined as a result of a test or measurement.
OC curve—See operating characteristic curve. off-the-job training—Training that takes place away from the actual work site. ogive—A type of graph that represents the cumulative frequencies for the classes in a frequency distribution. 100% inspection—Inspection of selected characteristics of every item under consideration. 1 − α—See confidence level. 1 − β—The power of a test of a hypothesis is 1 − β. It is the probability of correctly rejecting the null hypothesis, H0. one-tailed test—A hypothesis test that involves only one of the tails of a distribution. Example: We wish to reject the null hypothesis H0 only if the true mean is larger than µ0:

H0: µ = µ0
H1: µ > µ0
A one-tailed test is either right-tailed or left-tailed, depending on the direction of the inequality of the alternative hypothesis.
one-to-one marketing—The concept of knowing customers’ unique requirements and expectations and marketing to these. See also customer relationship management. on-the-job training (OJT)—Training usually conducted at the workstation, typically done one on one. operable—The state of being able to perform the intended function. operating characteristic curve (OC curve)—A curve showing the relationship between the probability of acceptance of product and the incoming quality level for a given acceptance sampling plan. Note: Several terms in this glossary need to be referenced for better understanding of operating characteristic curves: a, b, acceptance quality level, consumer’s risk, indifference quality level, indifference zone, limiting quality level, probability of acceptance, probability of nonacceptance, and producer’s risk. ordinal scale—A scale with ordered, labeled categories. Note 1: There is sometimes a blurred line between ordinal and discrete scales. When subjective opinion ratings such as excellent, very good, neutral, poor, and very poor are coded (as numbers 1–5), the apparent effect is conversion from an ordinal to a discrete scale. Such numbers should not be treated as ordinary numbers, however, because the distance between 1 and 2 may not be the same as between 2 and 3, or 3 and 4, and so forth. On the other hand, some categories that are ordered objectively according to magnitude, such as the
Richter scale, which ranges from zero to 8 according to the amount of energy released, could be related to a discrete scale as well. Note 2: Sometimes nominal scales are ordered by convention. An example is the blood groups A, B, and O, which are always stated in this order. The same is the case if different categories are denoted by single letters; they are then ordered by convention, according to the alphabet. organization culture—Refers to the collective beliefs, values, attitudes, manners, customs, behaviors, and artifacts unique to an organization. original inspection—The inspection of a lot, or other amount, not previously inspected. Note: This is in contrast, for example, to inspection of a lot that has previously been designated as not acceptable and is submitted again for inspection after having been further sorted, reprocessed, and so on. orthogonal contrasts—A set of contrasts whose coefficients satisfy the condition that, if multiplied in corresponding pairs, the sum of the products equals zero. See contrast analysis. orthogonal design—A design in which all pairs of factors at particular levels appear together an equal number of times. Examples include a wide variety of special designs such as a Latin square, completely randomized factorial design, and fractional factorial design. Statistical analysis of the results for orthogonal designs is relatively simple because each main effect and interaction effect can be evaluated independently. OSHA—Occupational Safety and Health Administration. outlier—An extremely high or an extremely low data value compared to the rest of the data values. Great caution must be used when trying to identify an outlier. out-of-control process—A process operating with the presence of special causes. See in-control process. out of spec—A term used to indicate that a unit does not meet a given specification. output—The deliverables resulting from a process, a project, a quality initiative, an improvement, and so on. Outputs include data, information, documents, decisions, and tangible products. Outputs are generated both from the planning and management of the activity and the delivered product, service, or program. Output is also the item, document, or material delivered by an internal provider (supplier) to an internal receiver (customer). output variable—The variable representing the outcome of the process. outsourcing—A strategy and an action to relieve an organization of processes and tasks in order to reduce costs, improve quality, reduce cycle time (for example, by parallel processing), reduce the need for specialized skills, and increase efficiency.
P

p—The ratio of the number of units in which at least one event of a given classification occurs to the total number of units. A unit is counted only once even if several events of the same classification are encountered within it. p can also be expressed as a percentage.

P—See probability.

p.95, p.50, p.10, p.05—The submitted quality in terms of the proportion of variant units for which the probability of acceptance is 0.95, 0.50, 0.10, 0.05 for a given sampling plan.

p1—The percent of nonconforming individual items occurring when the process is located at the acceptable process level (APL).

p2—The percent of nonconforming individual items occurring when the process is located at the rejectable process level (RPL).

Pa—See probability of acceptance.

panels—Groups of customers recruited by an organization to provide ad hoc feedback on performance or product development ideas.

paradigm—The standards, rules, attitudes, mores, and so on, that influence the way an organization lives and behaves.

parameter—A constant or coefficient describing some characteristic of a population. Examples: standard deviation and mean.

Pareto chart—A basic tool used to graphically rank causes from most significant to least significant. It utilizes a vertical bar graph in which the bar height reflects the frequency or impact of causes.

partially nested design—A nested design in which several factors may be crossed, as in factorial experiments, and other factors are nested within the crossed combinations.

participative management—A style of managing whereby the manager tends to work from theory Y assumptions about people, involving the workers in decisions made. See theory Y.

partnership/alliance—A strategy leading to a relationship with suppliers or customers aimed at reducing costs of ownership, maintenance of minimum stocks, just-in-time deliveries, joint participation in design, exchange of information on materials and technologies, new production methods, quality improvement strategies, and the exploitation of market synergy.

parts per million (PPM or ppm)—The number of units of mass of a contaminant per million units of total mass. One ppm is equivalent to the absolute fractional amount multiplied by one million.

part-to-part variation—The variability of the data due to measurement items rather than the measurement system. This variation is typically estimated from the measurement items used in a study, but could be estimated from a representative sample of product.
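To make the Pareto chart entry above concrete, here is a minimal Python sketch that ranks defect causes by frequency and reports cumulative percentages; the cause names and counts are invented for illustration and are not from the handbook.

```python
# Hypothetical defect counts by cause (illustrative only)
counts = {"scratches": 52, "misalignment": 31, "contamination": 11,
          "wrong label": 4, "other": 2}

total = sum(counts.values())
cumulative = 0
# Rank causes from most significant to least significant
for cause, n in sorted(counts.items(), key=lambda kv: kv[1], reverse=True):
    cumulative += n
    print(f"{cause:<14} {n:>3}  cum. {100 * cumulative / total:5.1f}%")
```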
p chart—See proportion chart.

Pearson's correlation coefficient—See correlation coefficient.

percentile—The division of the data set into 100 equal groups.

performance appraisal/evaluation—A formal method of measuring employees' progress against performance standards and providing feedback to them.

performance plan—A performance management tool that describes desired performance and provides a way to assess performance objectively.

PERT—See program evaluation and review technique.

pilot run—A trial production run consisting of various production tests conducted to ensure that the product is being built to specification.

Plackett–Burman design—An experimental design used where there are many factors to study. It studies only the main effects and is primarily used as a screening design before applying other types of designs. In general, Plackett–Burman designs are two-level, but three-, five-, and seven-level designs are available. They allow for efficient estimation of the main effects but assume that interactions can initially be ignored.

plan–do–check–act (PDCA) cycle—A four-step process for quality improvement. In the first step (plan), a plan to effect improvement is developed. In the second step (do), the plan is carried out, preferably on a small scale. In the third step (check), the effects of the plan are observed. As part of the last step (act), the results are studied to determine what was learned and what can be predicted. The plan–do–check–act cycle is sometimes referred to as the Shewhart cycle, because Walter A. Shewhart discussed the concept in his book Statistical Method from the Viewpoint of Quality Control, and as the Deming cycle, because W. Edwards Deming introduced the concept in Japan. The Japanese subsequently called it the Deming cycle. Sometimes referred to as plan–do–study–act (PDSA).

plan–do–study–act (PDSA) cycle—A variation of the plan–do–check–act (PDCA) cycle.

PMI—Project Management Institute (www.pmi.org).

Poisson distribution—The Poisson distribution describes occurrences of isolated events in a continuum of time or space. It is a one-parameter, discrete distribution depending only on the mean.

poka-yoke—(Japanese) A term that means to mistake-proof a process by building safeguards into the system that avoid or immediately find errors. It comes from poka, which means error, and yokeru, which means to avoid.

pooled standard deviation—A standard deviation value resulting from some combination of individual standard deviation values. It is most often used when individual standard deviation values are similar in magnitude and can be denoted by sp. See t-test (two sample).

population—(1) The entire set (totality) of units, quantity of material, or observations under consideration. A population may be real and finite, real
and infinite, or completely hypothetical. See sample. (2) A group of people, objects, observations, or measurements about which one wishes to draw conclusions.

population covariance—See covariance.

population mean (μ)—The true mean of the population, represented by μ (mu). The sample mean, x̄, is a common estimator of the population mean.

population standard deviation—See standard deviation.

population variance—See variance.

power—The equivalent to 1 minus the probability of a type II error (1 − β). A higher power is associated with a higher probability of finding a statistically significant difference. Lack of power usually occurs with smaller sample sizes.

power curve—The curve showing the relationship between the probability (1 − β) of rejecting the hypothesis that a sample belongs to a given population with a given characteristic(s) and the actual population value of that characteristic(s). Note: If β, the probability of accepting the hypothesis, is used instead of (1 − β), the curve is called an operating characteristic (OC) curve.

Pp (process performance index)—An index describing process performance in relation to specified tolerance:

Pp = (U − L) / (6s)

s is used for standard deviation instead of σ since both random and special causes may be present. Note: A state of statistical control is not required.

Ppk (minimum process performance index)—The smaller of upper process performance index and lower process performance index.

PpkL (lower process performance index or PPL)—An index describing process performance in relation to the lower specification limit. For a symmetrical normal distribution:

PpkL = (x̄ − L) / (3s)

where s is defined under Pp.

PpkU (upper process performance index or PPU)—An index describing process performance in relation to the upper specification limit. For a symmetrical normal distribution:

PpkU = (U − x̄) / (3s)

where s is defined under Pp.
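A minimal numeric sketch of how these performance indices combine, following the Pp, PpkL, and PpkU formulas as given; the data and specification limits are hypothetical.

```python
import numpy as np

# Hypothetical process measurements and specification limits (illustrative only)
data = np.array([10.1, 9.8, 10.3, 9.9, 10.0, 10.2, 9.7, 10.1, 10.0, 9.9])
U, L = 10.5, 9.5  # upper and lower specification limits

x_bar = data.mean()
s = data.std(ddof=1)  # sample standard deviation (random + special causes)

Pp = (U - L) / (6 * s)
PpkL = (x_bar - L) / (3 * s)
PpkU = (U - x_bar) / (3 * s)
Ppk = min(PpkL, PpkU)  # minimum process performance index

print(f"Pp={Pp:.2f}, PpkL={PpkL:.2f}, PpkU={PpkU:.2f}, Ppk={Ppk:.2f}")
```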
PPL—See PpkL.

PPU—See PpkU.

Pr—See probability of rejection.

precision—The closeness of agreement between randomly selected individual measurements or test results. See repeatability and reproducibility.

precision-to-tolerance ratio (PTR)—A measure of the capability of the measurement system. It can be calculated by

PTR = 5.15σ̂ms / (USL − LSL)

where σ̂ms is the estimated standard deviation of the total measurement system variability. In general, reducing the PTR will yield an improved measurement system.

predicted—That which is expected at some future date, postulated on analysis of past experience and tests.

predicted value—The prediction of future observations based on the formulated model.

prediction interval—Similar to a confidence interval, it is an interval based on the predicted value that is likely to contain the values of future observations. It will be wider than the confidence interval because it contains bounds on individual observations rather than a bound on the mean of a group of observations.

predictor variable—A variable that can contribute to the explanation of the outcome of an experiment.

prevention versus detection—A term used to contrast two types of quality activities. Prevention refers to those activities designed to prevent nonconformances in products and services. Detection refers to those activities designed to detect nonconformances already in products and services. Another phrase used to describe this distinction is "designing-in quality versus inspecting-in quality."

preventive action—Action taken to eliminate the potential causes of a nonconformity, defect, or other undesirable situation in order to prevent occurrence.

primary customer—The individual or group who directly receives the output of a process.

priorities matrix—A tool used to choose between several options that have many useful benefits, but where not all of them are of equal value.

probability (P)—The chance of an event occurring.

probability distribution—A function that completely describes the probabilities with which specific values occur. The values may be from a discrete scale or a continuous scale.
probability of acceptance (Pa)—The probability that when using a given acceptance sampling plan, a lot will be accepted when the lot or process is of a specific quality level.

probability of nonacceptance—See probability of rejection.

probability of rejection (Pr)—The probability that when using a given acceptance sampling plan, a lot will not be accepted when the lot or process is of a specified quality level.

probability plot—The plot of ranked data versus the sample cumulative frequency on a special vertical scale. The special scale is chosen (that is, normal, lognormal, and so on) so that the cumulative distribution is a straight line.

problem solving—A rational process for identifying, describing, analyzing, and resolving situations in which something has gone wrong without explanation.

process—An activity or group of activities that takes an input, adds value to it, and provides an output to an internal or external customer; a planned and repetitive sequence of steps by which a defined product or service is delivered. A process can be graphically represented using a flowchart.

process analysis—Defining and quantifying the process capability from data derived from mapping and measuring the work performed by the process.

process audit—An audit of a process.

process average—See sample mean or population mean.

process capability—The calculated inherent variability of a characteristic of a product. It represents the best performance of the process over a period of stable operations. Process capability is expressed as 6σ̂, where σ̂ is the sample standard deviation (short-term component of variation) of the process under a state of statistical control.

process capability index—A single-number assessment of ability to meet specification limits on the quality characteristic(s) of interest. The indices compare the variability of the characteristic to the specification limits. Three basic process capability indices are Cp, Cpk, and Cpm. Note: Since there are many different types and variations of process capability indices, details are given under the symbol for the specific type of index.

process control—Process management that is focused on fulfilling process requirements. Process control is the methodology for keeping a process within boundaries and minimizing the variation of a process.

process decision program chart (PDPC)—A management and planning tool that identifies all events that can go wrong and the appropriate countermeasures for these events. It graphically represents all sequences that lead to a desirable effect.

process improvement—The act of changing a process to reduce variability and cycle time and make the process more effective, efficient, and productive.
process improvement team (PIT)—A natural work group or cross-functional team whose responsibility is to achieve needed improvements in existing processes. The life span of the team is based on the completion of the team purpose and specific goals.

process management—The collection of practices used to implement and improve process effectiveness; it focuses on holding the gains achieved through process improvement and assuring process integrity.

process mapping—The flowcharting of a work process in detail, including key measurements.

process owner—The manager or leader who is responsible for ensuring that the total process is effective and efficient.

process performance—The statistical measure of the outcome of a characteristic from a process that may not have been demonstrated to be in a state of statistical control. Note: Use this measure cautiously since it may contain a component of variability from special causes of unpredictable value. It differs from process capability because a state of statistical control is not required.

process performance index—A single-number assessment of ability to meet specification limits on the quality characteristic(s) of interest. The indices compare the variability of the characteristic to the specification limits. Three basic process performance indices are Pp, Ppk, and Ppm. Note: Since there are many different types and variations of process performance indices, details are given under the symbol for the specific type of index.

process quality—A statistical measure of the quality of product from a given process. The measure may be an attribute (qualitative) or a variable (quantitative). A common measure of process quality is the fraction or proportion of nonconforming units in the process.

process quality audit—An analysis of elements of a process and appraisal of completeness, correctness of conditions, and probable effectiveness.

process-to-tolerance ratio—See precision-to-tolerance ratio.

process variable—See variable.

producer's risk (α)—The probability of nonacceptance when the quality level has a value stated by the acceptance sampling plan as acceptable. Note 1: Such nonacceptance is a type I error. Note 2: Producer's risk is usually designated as α. Note 3: Quality level could relate to fraction nonconforming and be acceptable to AQL. Note 4: Interpretation of the producer's risk requires knowledge of the stated quality level.

product audit—Audit of a product.
production part approval process (PPAP)—In the automotive industry, PPAP is used to show that suppliers have met all the engineering requirements based on a sample of parts produced from a production run made with production tooling, processes, and cycle times.

product/service liability—The obligation of a company to make restitution for loss related to personal injury, property damage, or other harm caused by its product or service.

product quality audit—A quantitative assessment of conformance to required product characteristics.

product warranty—The organization's stated policy that it will replace, repair, or reimburse a customer for a defective product, providing the product defect occurs under certain conditions and within a stated period of time.

professional development plan—An individual development tool for an employee. Working together, the employee and his/her supervisor create a plan that matches the individual's career needs and aspirations with organizational demands.

program evaluation and review technique (PERT)—An event-oriented project management planning and measurement technique that utilizes an arrow diagram to identify all major project events and demonstrates the amount of time (critical path) needed to complete a project. It provides three time estimates: optimistic, most likely, and pessimistic.

project—A temporary endeavor undertaken to create a unique product, service, or result.

project constraints—Projects are constrained by schedule, budget, and project scope.

project life cycle—Refers to five sequential phases of project management: concept, planning, design, implementation, and evaluation.

project management (PM)—The application of knowledge, skills, tools, and techniques to project activities to meet project requirements throughout a project's life cycle.

project management body of knowledge (PMBoK)—An inclusive term that describes the sum of knowledge within the profession of project management. The complete project management body of knowledge includes proven, traditional practices that are widely applied and innovative practices that are emerging in the project management profession. A guide to the PMBoK is maintained and periodically updated by the Project Management Institute (PMI).

project plan—All the documents that comprise the details of why the project is to be initiated, what the project is to accomplish, when and where it is to be implemented, who will have responsibility, how the implementation will be carried out, how much it will cost, what resources are required, and how the project's progress and results will be measured.
project team—A designated group of people working together to produce a planned project's outputs and outcome.

proportion chart (p chart or percent categorized units control chart)—An attributes control chart for the number of units of a given classification per total number of units in the sample, expressed as either a proportion or percentage. Note 1: The classification often takes the form of nonconforming units. Note 2: The p chart applies particularly when the sample size is variable. Note 3: If the upper control limit (UCL) calculates ≥ 1, there is no UCL; if the lower control limit (LCL) calculates ≤ 0, there is no LCL.

a. When the fraction nonconforming is known, or is a specified standard value:

Central line: p
Control limits: p ± 3√(p(1 − p)/n)

where p is the known fraction nonconforming (or standard value). The plotted value, p̂, is the sample fraction nonconforming:

p̂ = D/n

where D is the number of product units that are nonconforming and n is the sample size.

b. When the fraction nonconforming is not known:

Central line: p̄
Control limits: p̄ ± 3√(p̄(1 − p̄)/n)

where p̄ is the average value of the fraction of the classification (often average percent nonconforming), and n is the total number of units. These limits are considered trial limits.

c. For variable sample size:

Central line: p̄
Control limits: p̄ ± 3√(p̄(1 − p̄)/ni)

where ni is the varying sample size. The plotted value, p̂, is the sample fraction nonconforming:

p̂ = Di/ni

where Di is the number of nonconforming units in sample i, and ni is the size of sample i.
proportions, tests for—Tests for proportions include the binomial distribution. See binomial confidence interval, and z-test (two proportions). The standard deviation for proportions is given by

s = √(p(1 − p)/n)
where p is the population proportion and n is the sample size. psychographic customer characteristics—Variables among buyers in the consumer market that address lifestyle issues and include consumer interests, activities, and opinions. PTR—See precision-to-tolerance ratio. pull system—See kanban. p-value—The probability of observing the test statistic value or any other value at least as unfavorable to the null hypothesis.
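As a numeric illustration of the proportion standard deviation and the p-value defined above, here is a one-sample z-test sketch with invented numbers; the use of scipy here is an assumption, not a tool named by the handbook.

```python
from math import sqrt
from scipy.stats import norm

# Hypothetical: 42 nonconforming units in a sample of 400, testing H0: p = 0.08
p0, n, x = 0.08, 400, 42
p_hat = x / n

s = sqrt(p0 * (1 - p0) / n)    # standard deviation for proportions under H0
z = (p_hat - p0) / s           # test statistic
p_value = 2 * norm.sf(abs(z))  # two-sided p-value

print(f"p_hat={p_hat:.3f}, z={z:.2f}, p-value={p_value:.3f}")
```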
Q

QPA—Quality process analyst.

qualitative data—See attributes data.

quality—A subjective term for which each person has his or her own definition. In technical usage, quality can have two meanings: (1) the characteristics of a product or service that bear on its ability to satisfy stated or implied needs, and (2) a product or service free of deficiencies.

quality assessment—The process of identifying business practices, attitudes, and activities that are enhancing or inhibiting the achievement of quality improvement in an organization.

quality assurance/quality control (QA/QC)—Two terms that have many interpretations because of the multiple definitions for the words "assurance" and "control." For example, assurance can mean the act of giving confidence, the state of being certain, or the act of making certain; control can mean an evaluation to indicate needed corrective responses, the act of guiding, or the state of a process in which the variability is attributable to a constant system of chance causes. (For a detailed discussion on the multiple definitions, see ANSI/ISO/ASQC A3534-2 Statistics—Vocabulary and symbols—Statistical quality control.) One definition of quality assurance is all the planned and systematic activities implemented within the quality system that can be
demonstrated to provide confidence that a product or service will fulfill requirements for quality. One definition for quality control is the operational techniques and activities used to fulfill requirements for quality. Often, however, quality assurance and quality control are used interchangeably, referring to the actions performed to ensure the quality of a product, service, or process. quality characteristics—The unique characteristics of products and of services by which customers evaluate their perception of quality. quality circles—Quality improvement or self-improvement study groups composed of a small number of employees—10 or fewer—and their supervisor, who meet regularly with an aim to improve a process. quality council—The group driving the quality improvement effort and usually having oversight responsibility for the implementation and maintenance of the quality management system, operating in parallel with the normal operation of the business. Sometimes called quality steering committee. quality engineering—The analysis of a manufacturing system at all stages to maximize the quality of the process itself and the products it produces. quality function—The entire spectrum of activities through which an organization achieves its quality goals and objectives, no matter where these activities are performed. quality function deployment (QFD)—A multifaceted matrix in which customer requirements are translated into appropriate technical requirements for each stage of product development and production. The QFD process is often referred to as listening to the voice of the customer. See also house of quality. quality improvement—Actions taken throughout the organization to increase the effectiveness and efficiency of activities and processes in order to provide added benefits to both the organization and its customers. quality level agreement (QLA)—Internal service/product providers assist their internal customers in clearly delineating the level of service/product quality required in quantitatively measurable terms. A QLA may contain specifications for accuracy, timeliness, quality/usability, product life, service availability, or responsiveness to needs. quality management—Coordinated activities to direct and control an organization with regard to quality. Such activities generally include establishment of the quality policy, quality objectives, quality planning, quality control, quality assurance, and quality improvement. quality management system—The organizational structure, processes, procedures, and resources needed to implement, maintain, and continually improve quality management. quality planning—The activity of establishing quality objectives and quality requirements.
quality trilogy—A three-pronged approach to managing for quality. The three legs are quality planning (developing the products and processes required to meet customer needs), quality control (meeting product and process goals), and quality improvement (achieving unprecedented levels of performance). Attributed to Joseph M. Juran. quantitative data—See variable (control chart usage). questionnaires—See survey. queue processing—Processing in batches (contrast with continuous flow processing). queue time—Wait time of product awaiting next step in process.
R

r—See correlation coefficient.

R—See range.

R chart—See range chart.

R²—See coefficient of determination.

R²adj—R² (coefficient of determination) adjusted for degrees of freedom:

R²adj = 1 − [Sum of squares error/(n − p)] / [Sum of squares total/(n − 1)]

where p is the number of coefficients fit in the regression equation.

R̄ (pronounced R-bar)—The average range calculated from the set of subgroup ranges under consideration. See range.

RACI—A structure that ensures that every component of the project's scope of work is assigned to a person/team and that each person/team is responsible, accountable, consulting, or informing for the assigned task.

RAM—See responsibility assignment matrix.

random cause—The source of process variation that is inherent in a process over time. Also called common cause or chance cause. Note: In a process subject only to random cause variation, the variation is predictable within statistically established limits.

random factor—A factor that uses levels that are selected at random from a large or infinite number of possibilities. In general, inference is made to other levels of the same factor. See also fixed factor.

randomization—The process used to assign treatments to experimental units so that each experimental unit has an equal chance of being assigned a particular treatment. Randomization validates the assumptions made in statistical analysis and prevents unknown biases from impacting the conclusions.
randomized block design—An experimental design consisting of b blocks with t treatments assigned via randomization to the experimental units within each block. This is a method for controlling the variability of experimental units. For the completely randomized design, no stratification of the experimental units is made. In the randomized block design, the treatments are randomly allotted within each block, that is, the randomization is restricted.

randomized block factorial design—A factorial design run in a randomized block design where each block includes a complete set of factorial combinations.

random model—A model that contains only random factors.

random sampling—Sampling where a sample of n sampling units is taken from a population in such a way that each of the possible combinations of n sampling units has a particular probability of being taken.

random variation—Variation from random causes.

range (R)—A measure of dispersion, which is the absolute difference between the highest and lowest value in a given subgroup: R = Highest observed value − Lowest observed value.

range chart (R chart)—A variables control chart that plots the range of a subgroup to detect shifts in the subgroup range. See range (R).

Central line: R̄
Upper control limit: D4R̄
Lower control limit: D3R̄

where R̄ is the average range, and D3 and D4 are factors from Appendix D. Note: A range chart is used when the sample size is small; if the sample is larger (generally > 10 to 12), the s chart should be used.

rational subgroup—A subgroup wherein the variation is presumed to be only from random causes.

Re—See rejection number.

record—Document or electronic medium that furnishes objective evidence of activities performed or results achieved. Example: A filled-in form is a record.

rectifying inspection—An inspection of all, or a specified number of, items in a lot or other amount previously rejected on acceptance sampling inspection, as a consequence of which all nonconforming items are removed or replaced.

reduced inspection—An inspection less severe than normal inspection, to which the latter is switched when inspection results of a predetermined number of lots indicate that the quality level achieved by the process is better than that specified.
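A numeric sketch of the range chart limits defined above, using the widely published control chart factors for subgroups of size 5 (D3 = 0, D4 = 2.114); the subgroup ranges themselves are invented.

```python
import numpy as np

# Hypothetical subgroup ranges for subgroups of size n = 5
ranges = np.array([0.42, 0.35, 0.50, 0.38, 0.47, 0.40])
D3, D4 = 0.0, 2.114  # standard factors for n = 5 (Appendix D style tables)

R_bar = ranges.mean()   # center line
UCL = D4 * R_bar
LCL = D3 * R_bar        # zero for small subgroup sizes

print(f"R-bar={R_bar:.3f}, LCL={LCL:.3f}, UCL={UCL:.3f}")
```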
redundancy—The existence of more than one means for accomplishing a given function. Each means of accomplishing the function need not necessarily be identical.

redundancy, active—That redundancy wherein all redundant items are operating simultaneously.

redundancy, standby—That redundancy wherein an alternative means of performing the function is not operating until it is activated upon failure of the primary means of performing the function.

reengineering—Completely redesigning or restructuring a whole organization, an organizational component, or a complete process. It is a "start all over again from the beginning" approach, sometimes called a breakthrough. In terms of improvement approaches, reengineering is contrasted with incremental improvement (kaizen).

reference interval—The interval bounded by the 99.865% distribution fractile, X99.865%, and the 0.135% distribution fractile, X0.135%, expressed by the difference X99.865% − X0.135%. Note 1: This term is used only as an arbitrary, but standardized, basis for defining the process performance index, Pp, and process capability index, Cp. Note 2: For a normal distribution, the reference interval may be expressed in terms of standard deviations as 6σ, or 6s when estimated from a sample. Note 3: For a nonnormal distribution, the reference interval may be estimated by appropriate probability scales or from the sample kurtosis and sample skewness. Note 4: The conditions prevailing must always be stated as process capability conditions (a state of statistical control required) or process performance conditions (a state of statistical control not required).

regression—See regression analysis.

regression analysis—A technique that uses predictor variable(s) to predict the variation in a response variable. Regression analysis uses the method of least squares to determine the values of the linear regression coefficients and the corresponding model. It is particularly pertinent when the predictor variables are continuous and emphasis is on creating a predictive model. When some of the predictor variables are discrete, analysis of variance or analysis of covariance is likely a more appropriate method. The resulting model can then be used to test the predictions for statistical significance against an appropriate null hypothesis model. The model also gives some sense of the degree of linearity present in the data. When only one predictor variable is used, regression analysis is often referred to as simple linear regression. A simple linear regression model commonly uses a linear regression equation expressed as Y = b0 + b1x + e, where Y is the response, x is the value of the predictor variable, b0 and b1 are the linear regression coefficients, and e is the random error term. b0 is often called the intercept, and b1 is often called the slope.
When multiple predictor variables are used, regression is referred to as multiple linear regression. For example, a multiple linear regression model with three predictor variables commonly uses a linear regression equation expressed as Y = b0 + b1x1 + b2x2 + b3x3 + e, where Y is the response; x1, x2, and x3 are the values of the predictor variables; b0, b1, b2, and b3 are the linear regression coefficients; and e is the random error term. The random error terms in regression analysis are often assumed to be normally distributed with a constant variance. These assumptions can be readily checked through residual analysis or residual plots. See regression coefficients, tests for.
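A compact sketch of simple linear regression by least squares, also computing R² and the adjusted R² defined earlier in this glossary; the paired data are invented, and numpy's polyfit is one convenient way to obtain the coefficients.

```python
import numpy as np

# Hypothetical paired observations (predictor x, response Y)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
Y = np.array([2.1, 4.3, 5.9, 8.2, 9.8, 12.1])

b1, b0 = np.polyfit(x, Y, 1)   # least-squares slope and intercept
Y_pred = b0 + b1 * x           # predicted values from the fitted model

residuals = Y - Y_pred               # observed minus predicted
sse = np.sum(residuals**2)           # sum of squares error
sst = np.sum((Y - Y.mean())**2)      # sum of squares total
r2 = 1 - sse / sst                   # coefficient of determination

n, p = len(x), 2   # p = number of coefficients fit (b0 and b1)
r2_adj = 1 - (sse / (n - p)) / (sst / (n - 1))

print(f"Y = {b0:.2f} + {b1:.2f}x, R2={r2:.3f}, R2_adj={r2_adj:.3f}")
```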
regression coefficients, tests for—A test of the individual linear regression coefficients to determine their significance in the model. These tests assume that the response variable is normally distributed for a fixed level of the predictor variable, the variability of the response variable is the same regardless of the value of the predictor variable, and that the predictor variable can be measured without error. See regression analysis.

reject—(acceptance sampling usage) To decide that a batch, lot, or quantity of product, material, or service has not been shown to satisfy the requirement criteria based on the information obtained from the sample(s). Note: In acceptance sampling, the words "to reject" generally are used to mean to not accept, without direct implication of product usability. Lots that are rejected may be scrapped, sorted (with or without nonconforming units being replaced), reworked, reevaluated against more-specific usability criteria, held for additional information, and so on. Because the common language usage "reject" often results in an inference of unsafe or unusable product, it is recommended that the words "not accept" be used instead.

rejectable process level (RPL)—The process level that forms the inner boundary of the rejectable process zone. Note: In the case of two-sided tolerances, upper and lower rejectable process levels will be designated URPL and LRPL, respectively. (These need not be symmetrical around the standard level.)

rejectable process zone—See rejectable process level.

rejectable quality level (RQL)—See limiting quality level (LQL).

rejection number (Re)—The smallest number of nonconformities or nonconforming units found in the sample by acceptance sampling by attributes that requires the lot to be not accepted, as given in the acceptance sampling plan.

relative frequency—The number of occurrences or observed values in a specified class divided by the total number of occurrences or observed values.

reliability—In measurement systems analysis, refers to the ability of an instrument to produce the same results over repeated administration, that is, to measure consistently. In reliability engineering, it is the probability of a product performing its intended function under stated conditions for a given period of time. For redundant items, this is equivalent to the definition of mission reliability. See also mean time between failures.
reliability, mission—The ability of an item to perform its required functions for the duration of a specified mission profile. remedy—Something that eliminates or counteracts a problem cause; a solution. repair—Action taken on a nonconforming product so that it will fulfill the intended usage requirements, although it may not conform to the originally specified requirements. See maintenance, corrective. repeatability—Precision under conditions where independent measurement results are obtained with the same method on identical measurement items by the same operator using the same equipment within a short period of time. repeated measures—The measurement of a response variable more than once under similar conditions. Repeated measures allow one to determine the inherent variability in the measurement system. Also known as duplication or repetition. replicate—A single repetition of the experiment. See also replication. replication—Performance of an experiment more than once for a given set of predictor variables. Each of the repetitions of the experiment is called a replicate. Replication differs from repeated measures in that it is a repeat of the entire experiment for a given set of predictor variables, not just a repeat of measurements on the same experiment. Note: Replication increases the precision of the estimates of the effects in an experiment. It is more effective when all elements contributing to the experimental error are included. In some cases, replication may be limited to repeated measures under essentially the same conditions. In other cases, replication may be deliberately different, though similar, in order to make the results more general. representative sample—A sample that by itself or as part of a sampling system or protocol exhibits characteristics and properties of the population sampled. reproducibility—Precision under conditions where independent measurement results are obtained with the same method on identical measurement items with different operators using different equipment. resample—An analysis of an additional sample from a lot. Resampling not included in the design of the control chart delays corrective action and changes the risk levels on which control charts are calculated. Resampling should not be used to replace sample results that indicate a nonconformity, but it is appropriate if the sample was known to be defective or invalid, or to obtain additional information on the material sampled. residual analysis—The method of using residuals to determine appropriateness of assumptions made by a statistical method. residual plot—A plot used in residual analysis to determine appropriateness of assumptions made by a statistical method. Common forms include a plot of the residuals versus the observed values or a plot of the residuals versus the predicted values from the fitted model. residuals—The difference between the observed result and the predicted value (estimated treatment response) based on an empirically determined model.
resistance to change—Unwillingness to change beliefs, habits, and ways of doing things.

resistant line—A line derived from using medians in fitting lines to data.

resolution—(1) The smallest measurement increment that can be detected by the measurement system; (2) In the context of experimental design, resolution refers to the level of confounding in a fractional factorial design. For example, in a resolution III design, the main effects are confounded with the two-way interaction effects.

response surface design—A design intended to investigate the functional relationship between a response variable and a set of predictor variables. It is generally most useful when the predictor variables are continuous. See Box–Behnken design.

response surface methodology (RSM)—A methodology that uses design of experiments, regression analysis, and optimization techniques to determine the best relationship between a response variable and a set of predictor variables.

response variable—A variable representing the outcome of an experiment.

responsibility assignment matrix (RAM)—A structure that relates the project organizational breakdown structure to the work breakdown structure to help ensure that each component of the project's scope of work is assigned to a responsible person/team.

resubmitted lot—A lot that previously had been designated as not acceptable and that is submitted again for acceptance inspection after having been further tested, sorted, reprocessed, and so on. Note: Any nonconforming units discovered during the interim action must be removed, replaced, or reworked.

RETAD—Rapid exchange of tooling and dies. This includes changing over the tools and dies in a manufacturing line.

rework—Action taken on a nonconforming product so that it will fulfill the specified requirements (may also pertain to a service).

right the first time—A term used to convey the concept that it is beneficial and more cost-effective to take the necessary steps up front to ensure that a product or service meets its requirements than to provide a product or service that will need rework or not meet customers' needs. In other words, an organization should engage in defect prevention rather than defect detection.

risk—A measure of the potential inability to achieve overall program objectives within defined cost, schedule, and technical constraints; risk has two components:

1. The probability (or likelihood) of failing to achieve a particular outcome

2. The consequences (or impact) of failing to achieve that outcome

risk, consumer's (β)—See consumer's risk (β).

risk, producer's (α)—See producer's risk (α).
risk assessment/management—Organized, analytic process to identify what might cause harm or loss (identify risks); to assess and quantify the identified risks; and to develop and, if needed, implement an appropriate approach to prevent or handle causes of risk that could result in significant harm or loss. robust—In statistical use, a characteristic of a statistic or statistical method. A robust statistical method still gives reasonable results even though the standard assumptions are not met. A robust statistic is unchanged by the presence of unusual data points or outliers. robustness—The condition of a product or process design that remains relatively stable with a minimum of variation even though factors that influence operations or usage, such as environment and wear, are constantly changing. robust parameter design—A design that aims to reduce the performance variation of a product or process by choosing the setting of its control factors to make it less sensitive to variability from noise factors. ROI—Return on investment. Umbrella term for a variety of ratios measuring an organization’s business performance, calculated by dividing some measure of return by a measure of investment and then multiplying by 100 to provide a percentage. In its most basic form, ROI indicates what remains from all money taken in after all expenses are paid. ROI is always expressed as a percentage. root cause analysis—A quality tool used to distinguish the source of defects or problems. It is a structured approach that focuses on the decisive or original cause of a problem or condition. ROTI—Return on training investment. A measure of the return generated by the benefits from the organization’s investment in training. RPL—See rejectable process level. RQL—See rejectable quality level. RSM—See response surface methodology. run—See experimental run. run—(control chart usage) An uninterrupted sequence of occurrences of the same attribute or event in a series of observations, or a consecutive set of successively increasing (run up) or successively decreasing (run down) values in a series of variable measurements. Note: In some control chart applications, a run might be considered a series of a specified number of points consecutively plotting above or below the center line, or five consecutive points, three of which fall outside warning limits. running weighted average—An equation used to smooth data sequences by replacing each data value with the average of the data values around it and with weights multiplied in each averaging operation. The weights sum to 1.
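A brief sketch of the running weighted average just defined, smoothing a short invented series with symmetric three-point weights that sum to 1.

```python
import numpy as np

# Hypothetical data sequence and three-point weights (weights sum to 1)
data = np.array([5.0, 6.2, 5.8, 7.1, 6.9, 7.4, 8.0])
weights = np.array([0.25, 0.50, 0.25])

# Each interior value is replaced by the weighted average of its neighborhood
smoothed = np.convolve(data, weights, mode="valid")
print(smoothed)
```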
S

s—See standard deviation.

s²—See variance.

sales leveling—A strategy of establishing a long-term relationship with customers leading to contracts for fixed amounts and scheduled deliveries in order to smooth the material flow and eliminate surges.

sample—A group of units, portions of material, or observations taken from a larger collection of units, quantity of material, or observations that serves to provide information that may be used for making a decision concerning the larger quantity (the population). Note 1: The sample may be the actual units or material, or the observations collected from them. The decision may or may not involve taking action on the units or material, or on the process. Note 2: Sampling plans are schemes set up statistically in order to provide a sampling system with minimum bias. Note 3: There are many different ways, random and nonrandom, to select a sample. In survey sampling, sampling units are often selected with a probability proportional to the size of a known variable, giving a biased sample.

sample covariance—See covariance.

sample mean—The sample mean (or average) is the sum of random variables in a random sample divided by the number in the sum. It is generally designated by the symbol x̄. Note: The sample mean considered as a statistic is an estimator for the population mean. A common synonym is the arithmetic mean.

sample size (n)—The number of sampling units in a sample. Note: In a multistage sample, the sample size is the total number of sampling units at the conclusion of the final stage of sampling.

sample standard deviation—See standard deviation.

sample variance—See variance.

sampling interval—In systematic sampling, the sampling interval is the fixed interval of time, output, running hours, and so on, between samples.

sampling plan (acceptance sampling usage)—A specific plan that states the sample size(s) to be used and the associated criteria for accepting the lot. Note: The sampling plan does not contain the rules on how to take the sample.

sampling scheme (acceptance sampling usage)—A combination of sampling plans with rules for changing from one plan to another.
saturated design—A design where the number of factors studied is nearly equal to the number of experimental runs. Saturated designs do not allow for a check of model adequacy or estimation of higher-order effects. This design should only be used if the cost of doing more experimental runs is prohibitive. scatter diagram (or scatter plot)—A graphical technique for analyzing the relationship between two variables. Two sets of data are plotted on a graph, with the y-axis being used for the variable to be predicted, and the x-axis for the variable being used to make the prediction. The graph will show possible relationships (although two variables might appear to be related, they might not be; those who know most about the variables must make that evaluation). The scatter diagram is one of the seven basic tools of quality. s chart—See standard deviation chart. screening design—An experiment intended to identify a subset of the collection of factors for subsequent study. Examples include fractional factorial designs or Plackett–Burman designs. secondary customer—Individuals or groups from outside the process boundaries who receive process output but who are not the reason for the process’s existence. segmentation—See customer segmentation. self-directed learning—See learner-controlled instruction. self-managed team—A team that requires little supervision and manages itself and the day-to-day work it does; self-directed teams are responsible for whole work processes, with each individual performing multiple tasks. sequential sampling—An acceptance sampling inspection in which, after each item has been inspected, the decision to accept the lot, not accept the lot, or inspect another item is taken based on the cumulative sampling evidence. Note 1: The decision is made according to defined rules. Note 2: The total number of items to be inspected is not fixed in advance, but a maximum number is often agreed on. servicing—The performance of any act to keep an item in operating condition (that is, lubrication, fueling, oiling, cleaning, and so on), but not including other preventive maintenance of parts or corrective maintenance. setup time—The time taken to change over a process to run a different product or service. seven basic quality tools—Tools that help organizations understand their processes in order to improve them. The tools are the cause-and-effect diagram, check sheet, control chart, flowchart, histogram, Pareto chart, and scatter diagram. See individual entries. seven basic tools of quality management—The tools used primarily for planning and managing are the activity network diagram (AND), or arrow diagram,
affinity diagram (KJ method), interrelationship digraph, matrix diagram, priorities matrix, process decision program chart (PDPC), and tree diagram.

Shewhart control chart—A control chart with Shewhart control limits intended primarily to distinguish between variation due to random causes and variation due to special causes.

Shewhart control limits—Control limits based on empirical evidence and economic considerations, placed about the center line at a distance of ±3 standard deviations of the statistic under consideration and used to evaluate whether or not it is a stable process.

Shewhart cycle—See plan–do–check–act (PDCA) cycle.

σ (sigma)—Greek letter that stands for the standard deviation of a process. See standard deviation.

σ² (sigma squared)—See variance.

σwt—The standard deviation of the exponentially weighted moving average:

σwi = σ√(λ/(2 − λ)) [asymptotic value]

σwi = σ√((λ/(2 − λ))[1 − (1 − λ)^(2i)]) [initial values]

See exponentially weighted moving average chart.

σx̄ (sigma x-bar)—The standard deviation (or standard error) of x̄.
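A short sketch showing how the two EWMA standard deviation expressions above relate: the initial-value term converges to the asymptotic value as i grows. The values of σ and λ are hypothetical.

```python
import math

sigma, lam = 1.0, 0.2  # hypothetical process sigma and EWMA weight lambda

asymptotic = sigma * math.sqrt(lam / (2 - lam))
for i in (1, 2, 5, 20):
    # Initial-value form: includes the (1 - (1 - lambda)^(2i)) factor
    initial = sigma * math.sqrt((lam / (2 - lam)) * (1 - (1 - lam) ** (2 * i)))
    print(f"i={i}: sigma_wi={initial:.4f}")
print(f"asymptotic: {asymptotic:.4f}")
```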
σ̂ (sigma-hat)—In general, any estimate of the population standard deviation. There are various ways to get this estimate depending on the particular application.

signal—An indication on a control chart that a process is not stable or that a shift has occurred. Typical indicators are points outside control limits, runs, trends, cycles, patterns, and so on. See also average run length.

signal-to-noise ratio (S/N ratio)—A mathematical equation that indicates the magnitude of an experimental effect above the effect of experimental error due to chance fluctuations.

significance level—The maximum probability of rejecting the null hypothesis when in fact it is true. Note: The significance level is usually designated by α and should be set before beginning the test.

significance tests—Significance tests are a method of deciding, with certain predetermined risks of error, (1) whether the population associated with a sample differs from the one specified, (2) whether the populations associated with each of two samples differ, or (3) whether the populations associated with each of more than two samples differ. Significance testing is equivalent to the testing of hypotheses. Therefore, a clear statement of the null hypothesis,
alternative hypotheses, and predetermined selection of a confidence level are required. Note: The level of significance is the maximum probability of committing a type I error. This probability is symbolized by α, that is, P (type I error) = α. See confidence interval; means, tests for; and proportions, tests for.

simple linear regression—See regression analysis.

single-level continuous sampling—A continuous acceptance sampling inspection of consecutively produced items where a single fixed sampling rate is alternated with 100 percent inspection depending on the quality of the observed process output.

single-minute exchange of dies (SMED)—A goal to be achieved in reducing the setup time required for a changeover to a new process; the methodologies employed in devising and implementing ways to reduce setup time.

single-piece flow—A method whereby the product proceeds through the process one piece at a time rather than in large batches, eliminating queues and costly waste.

single sampling (acceptance sampling usage)—An acceptance sampling inspection in which the decision to accept, according to a defined rule, is based on the inspection results obtained from a single sample of predetermined size, n.

SIPOC—A macro-level analysis of the suppliers, inputs, processes, outputs, and customers.

Six Sigma approach—A quality philosophy; a collection of techniques and tools for use in reducing variation; a program of improvement.

Six Sigma quality—A term used generally to indicate that a process is well controlled, that is, within process limits ±3σ from the center line in a control chart, and requirements/tolerance limits ±6σ from the center line. The term was initiated by Motorola.

skewness—A measure of symmetry about the mean. For a normal distribution, skewness is zero because the distribution is symmetric.

skip-lot sampling—An acceptance sampling inspection in which some lots in a series are accepted without inspection when the sampling results for a stated number of immediately preceding lots meet stated criteria.

slope—See regression analysis.

SMART goals—A template for setting goals that are specific, measurable, achievable, realistic, and time-bound.

SMART WAY—A template for setting objectives: specific, measurement, achievable, realistic, time, worth, assign, yield.

sp—See pooled standard deviation, t-test (two sample), and proportions, tests for.

spaghetti chart—A before-improvement chart of existing steps in a process with lines showing the many back-and-forth interrelationships (can resemble
a bowl of spaghetti). It is used to see the redundancies and other wasted movements of people and material. SPC—See statistical process control. special cause—A source of process variation other than inherent process variation. Note 1: Sometimes special cause is considered synonymous with assignable cause, but a special cause is assignable only when it is specifically identified. Note 2: A special cause arises because of specific circumstances that are not always present. Therefore, in a process subject to special causes, the magnitude of the variation over time is unpredictable. specification—The engineering requirement used for judging the acceptability of a particular product/service based on product characteristics such as appearance, performance, and size. In statistical analysis, specifications refer to the document that prescribes the requirements to which the product or service has to conform. specification limit(s)—The limiting value(s) stated for a characteristic. See tolerance. split-plot design—A design where there is a hierarchical structure in the experimental units. One or more principal factors are studied using the largest experimental unit, often called a whole plot. The whole plots are subdivided into smaller experimental units, often called split-plots. Additional factors are studied using each of the split-plots. This type of design is frequently used with a principal factor whose levels are not easily changed, and the other factors can be varied readily within the runs assigned to the principal factor. sponsor—The person who supports a team’s plans, activities, and outcomes; the team’s “backer.” The sponsor provides resources and helps define the mission and scope to set limits. The sponsor may be the same individual as the champion. spread—A term sometimes synonymous with variation or dispersion. stable process—A process that is predictable within limits; a process that is subject only to random causes. (This is also known as a state of statistical control.) Note 1: A stable process will generally behave as though the results are simple random samples from the same population. Note 2: This state does not imply that the random variation is large or small, within or outside of specification limits, but rather that the variation is predictable using statistical techniques. Note 3: The process capability of a stable process is usually improved by fundamental changes that reduce or remove some of the random causes present and/or adjusting the mean toward the target value. staggered nested design—A nested design in which the nested factors are run within only a subset of the levels of the first or succeeding factors. stakeholder—People, departments, and organizations that have an investment or interest in the success or actions taken by the organization.
stakeholder analysis—The identification of stakeholders and delineation of their needs.

standard deviation—A measure of the spread of the process output or the spread of a sampling statistic from the process. When working with the population, the standard deviation is usually denoted by σ (sigma). When working with a sample, the standard deviation is usually denoted by s. They are calculated as

σ = √((1/n) Σ(x − μ)²)

s = √((1/(n − 1)) Σ(x − x̄)²)

where n is the number of data points in the sample or population, μ is the population mean, x is the observed value of the quality characteristic, and x̄ is the sample mean. See standard error. Note: Standard deviation can also be calculated by taking the square root of the population variance or sample variance.

standard deviation chart (s chart)—A variables control chart of the standard deviation of the results within a subgroup. It replaces the range chart for large subgroup samples (rule of thumb is subgroup size of 10 to 12):

Central line: s̄
Upper control limit: B4s̄
Lower control limit: B3s̄

where s̄ is the average value of the standard deviation of the subgroups and B3 and B4 are factors from Appendix D. See standard deviation.

standard deviation of proportions—See proportions, tests for.

standard deviation of the intercept—See regression coefficients, tests for.

standard deviation of the slope—See regression coefficients, tests for.

standard deviation of u or c/n—See u chart.

standard error—The standard deviation of a sample statistic or estimator. When dealing with sample statistics, we either refer to the standard deviation of the sample statistic or to its standard error.

standard error of predicted values—A measure of the variation of individual predicted values of the dependent variable about the population value for a given value of the predictor variable. This includes the variability of individuals about the sample regression line and the sample line about the population line. It measures the variability of individual observations and can be used to calculate a prediction interval.

standardized work—Documented and agreed-on work instructions that express the best-known methods and work sequence for each manufacturing or assembly process.
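A tiny numeric check of the two standard deviation formulas above; numpy's ddof argument switches the divisor between n (population form) and n − 1 (sample form).

```python
import numpy as np

x = np.array([9.8, 10.1, 10.0, 9.9, 10.2])  # hypothetical observations

sigma = x.std(ddof=0)  # population form: divide by n
s = x.std(ddof=1)      # sample form: divide by n - 1

print(f"population sigma={sigma:.4f}, sample s={s:.4f}")
```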
state of statistical control—See stable process.

statistic—A quantity calculated from a sample of observations, most often to form an estimate of some population parameter.

statistical measure—A statistic or mathematical function of a statistic.

statistical process control (SPC)—The use of statistical techniques such as control charts to reduce variation, increase knowledge about the process, and steer the process in the desired way. Note 1: SPC operates most efficiently by controlling variation of the process or in-process characteristics that correlate with a final product characteristic, and/or by increasing the robustness of the process against this variation. Note 2: A supplier's final product characteristic can be a process characteristic of the next downstream supplier's process.

statistical thinking—A philosophy of learning and action based on the following fundamental principles:

• All work occurs in a system of interconnected processes.

• Variation exists in all processes.

• Understanding and reducing variation are keys to success.

statistical tolerance interval—An interval estimator determined from a random sample so as to provide a specified level of confidence that the interval covers at least a specified proportion of the sampled population.

storage life—The length of time an item can be stored under specified conditions and still meet specified requirements.

storyboarding—A technique that visually displays thoughts and ideas and groups them into categories, making all aspects of a process visible at once. Often used to communicate to others the activities performed by a team as they improve a process.

stratified sampling—Sampling such that portions of the sample are drawn from different strata and each stratum is sampled with at least one sampling unit. Note 1: In some cases, the portions are specified proportions determined in advance; however, in post-stratified sampling the specified proportions would not be known in advance. Note 2: Items from each stratum are often selected by random sampling.

subgroup—(control chart usage) A group of data plotted as a single point on a control chart. See rational subgroup.

subsystem—A combination of sets, groups, and so on, that performs an operational function within a system and is a major subdivision of the system, for example, data processing subsystem, guidance subsystem.

supplier—Any provider whose goods and services may be used at any stage in the production, design, delivery, and use of another company's products and services. Suppliers include businesses, such as distributors, dealers, warranty
repair services, transportation contractors, and franchises, and service suppliers, such as healthcare, training, and education providers. Internal suppliers provide materials or services to internal customers.
supply chain—The series of processes and/or organizations that are involved in producing and delivering a product to the final user.
supply chain management—The process of effectively integrating and managing components of the supply chain.
survey—An examination for some specific purpose; to inspect or consider carefully; to review in detail (survey implies the inclusion of matters not covered by agreed-on criteria). Also, a structured series of questions designed to elicit a predetermined range of responses covering a preselected area of interest. May be administered orally by a survey taker, by paper and pencil, or by computer. Responses are tabulated and analyzed to surface significant areas for change.
switching rules—Instructions within an acceptance sampling scheme for changing from one acceptance sampling plan to another of greater or lesser severity of sampling based on demonstrated quality level. Normal, tightened, or reduced inspection, or discontinuation of inspection, are examples of severity of sampling.
symptom—An indication of a problem or opportunity.
system—A network of interdependent processes that work together to accomplish a common mission.
system, general—A composite of equipment, skills, and techniques capable of performing or supporting an operational role, or both. A complete system includes all equipment, related facilities, material, software, services, and personnel required for its operation and support to the degree that it can be considered self-sufficient in its intended operational environment.
system effectiveness—A measure of the degree to which an item or system can be expected to achieve a set of specific mission requirements; it results from a combination of availability, dependability, and capability and can be expressed as a function of the three.
availability—A measure of the degree to which an item or system is in an operable and committable state at the start of the mission when the mission is called for at an unknown point in time. (The point in time is a random variable.)
dependability—A measure of the item or system operating condition at one or more points during the mission. It may be stated as the probability that an item will (a) enter or occupy any one of its required operational modes during a specified mission and (b) perform the functions associated with those operational modes.
capability—A measure of the ability of an item or system to achieve mission objectives given the conditions during the mission.
T
t distribution—A theoretical distribution widely used in practice to evaluate the sample mean when the population standard deviation is estimated from the data. Also known as Student's distribution. It is similar in shape to the normal distribution but with slightly longer tails. See t-test.
T² (Hotelling's T²)—A multivariate test that is a generalization of the t-test.
Taguchi design—See robust parameter design.
Taguchi loss function—Describes the loss incurred as product characteristics deviate from the nominal aim; losses increase according to a parabolic function, so merely attempting to produce a product within specifications does not prevent loss (loss is that inflicted on society after shipment of a product).
takt time—The available production time divided by the rate of customer demand. Operating to takt time sets the production pace to customer demand.
tally sheet—Another name for check sheet.
target value—The preferred reference value of a characteristic stated in a specification.
t-confidence interval for means (one-sample)—Used when there is an unknown mean µ and unknown variance σ². For a two-tailed test, a 100(1 − α)% two-sided confidence interval on the true mean is

x̄ − t(α/2, n−1)·s/√n ≤ µ ≤ x̄ + t(α/2, n−1)·s/√n

where t(α/2, n−1) denotes the percentage point of the t distribution with n − 1 degrees of freedom such that P(t(n−1) ≥ t(α/2, n−1)) = α/2. See confidence interval.
t-confidence interval for means (two-sample)—Used for the difference in means (µ1 and µ2) when the variances are unknown but assumed equal. The combined or pooled estimate of the common variance is

sp² = [(n1 − 1)s1² + (n2 − 1)s2²]/(n1 + n2 − 2)

The 100(1 − α)% two-sided confidence interval on µ1 − µ2 is

(x̄1 − x̄2) − t(α/2, n1+n2−2)·sp·√(1/n1 + 1/n2) ≤ µ1 − µ2 ≤ (x̄1 − x̄2) + t(α/2, n1+n2−2)·sp·√(1/n1 + 1/n2)

where x̄1 and x̄2 are the sample means, t(α/2) is the value from a t-distribution table, and α is 1 − confidence level/100. See confidence interval.
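A minimal sketch of the one-sample interval above, assuming SciPy is available to supply the t quantile; the data and confidence level are illustrative only.

    # One-sample t confidence interval for the mean (sketch).
    import statistics
    from scipy.stats import t  # t.ppf returns the t-distribution quantile

    data = [4.9, 5.1, 5.0, 5.2, 4.8, 5.0, 5.1]  # illustrative sample
    alpha = 0.05                                 # 95% confidence

    n = len(data)
    x_bar = statistics.mean(data)
    s = statistics.stdev(data)                   # sample standard deviation

    t_crit = t.ppf(1 - alpha / 2, n - 1)         # t(alpha/2, n-1)
    half_width = t_crit * s / n ** 0.5

    print(f"{x_bar - half_width:.4f} <= mu <= {x_bar + half_width:.4f}")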
team—A group of two or more people who are equally accountable for the accomplishment of a task and specific performance goals; it is also defined as a small number of people with complementary skills who are committed to a common purpose.
team-based structure—Describes an organizational structure in which employees are organized into teams performing a specific function of the business, such as handling customer complaints or assembling an engine.
team building/development—The process of transforming a group of people into a team and developing the team to achieve its purpose.
team dynamics—The interactions that occur among team members under different conditions.
team growth, stages of—Refers to the four development stages through which groups typically progress: forming, storming, norming, and performing. Knowledge of the stages helps team members accept the normal problems that occur on the path from forming a group to becoming a team.
team leader—A person designated to be responsible for the ongoing success of the team and keeping the team focused on the task assigned.
temporary/ad hoc team—A team, usually small, formed to address a short-term mission or emergency situation.
testing—A means of determining the ability of an item to meet specified requirements by subjecting the item to a set of physical, chemical, environmental, or operating actions and conditions.
tests for means—See means, tests for.
tests for proportions—See proportions, tests for.
tests for regression coefficients—See regression coefficients, tests for.
tests for significance—See significance test.
tests for variances—See variances, tests for.
threshold control—Applies to process control where the primary interest is in events near the control limits. Typical control charts used for this purpose are h, g, c, u, np, p, x (individuals), MR, x̄, R, and s charts. This type of control is most often used in the early stages of a quality improvement process. See also deviation control.
throughput time—The total time required (processing + queue) from concept to launch, from order received to delivery, or from raw materials received to delivery to the customer.
tightened inspection—An inspection more severe than normal inspection, to which the latter is switched when inspection results of a predetermined number of lots indicate that the quality level achieved by the process is poorer than that specified.
time, active—That time during which an item is in an operational inventory.
time, checkout—That element of maintenance time during which performance of an item is verified to be in a specified condition.
time, delay—That element of downtime during which no maintenance is being accomplished on the item because of either supply or administrative delay.
time, down (downtime)—That element of active time during which an item is not in condition to perform its required function. (Reduces availability and dependability.)
time, supply delay—That element of delay time during which a needed replacement item is being obtained.
time, up (uptime)—That element of active time during which an item is in condition to perform its required functions. (Increases availability and dependability.)
time series—A series of observations made at successive points or intervals of time.
tolerance—The difference between upper and lower specification limits.
tolerance limit—See specification limit.
top management commitment—Participation of the highest-level officials in their organization's quality improvement efforts. Their participation includes establishing and serving on a quality committee, establishing quality policies and goals, deploying those goals to lower levels of the organization, providing the resources and training that the lower levels need to achieve the goals, participating in quality improvement teams, reviewing progress organization-wide, recognizing those who have performed well, and revising the current reward system to reflect the importance of achieving the quality goals. Commitment is top management's visible, personal involvement as seen by others in the organization.
total productive maintenance (TPM)—Reducing and eventually eliminating equipment failure, setup and adjustment, minor stops, reduced speed, product rework, and scrap.
total quality management (TQM)—A term initially coined by the Naval Air Systems Command to describe its management approach to quality improvement. Total quality management (TQM) has taken on many meanings. Simply put, TQM is a management approach to long-term success through customer satisfaction. TQM is based on the participation of all members of an organization in improving processes, products, services, and the culture they work in. TQM benefits all organization members and society. The methods for implementing this approach are found in the teachings of such quality leaders as Philip B. Crosby, W. Edwards Deming, Armand V. Feigenbaum, Kaoru Ishikawa, Joseph M. Juran, and others.
training—The skills that employees need to learn in order to perform or improve the performance of their current job or tasks, or the process of providing those skills.
training evaluation—The techniques and tools used and the process of evaluating the effectiveness of training.
training needs assessment—The techniques and tools used and the process of determining an organization's training needs.
transformation—A reexpression of data aimed toward achieving normality.
treatment—The specific setting of factor levels for an experimental unit.
tree diagram—A management and planning tool that shows the complete range of subtasks required to achieve an objective. A problem-solving method can be identified from this analysis.
trimodal—A probability distribution having three distinct statistical modes.
TRIZ—(Russian) The theory of the inventive solution of problems. A set of analytical and knowledge-based tools that are typically hidden in the subconscious minds of creative inventors.
true value—A value for a quantitative characteristic that does not contain any sampling or measurement variability. (The true value is never exactly known; it is a hypothetical concept.)
t-test—A test for significance that uses the t distribution to compare a sample statistic to a hypothesized population mean or to compare two means. See t-test (one-sample), t-test (two-sample), t-test (paired data). Note: Testing the equality of the means of two normal populations with unknown but equal variances can be extended to the comparison of k population means. This test procedure is called analysis of variance (ANOVA).
t-test (one-sample)—

t = (x̄ − µ0)/(s/√n)

where x̄ is the mean of the data, µ0 is the hypothesized population mean, s is the sample standard deviation, and n is the sample size. The degrees of freedom are n − 1.
t-test (paired data)—Samples are paired to eliminate differences between specimens. The test could involve two machines, two test methods, two treatments, and so on. Observations are collected in pairs on each of the n specimens, and the differences are formed: dj = x1j − x2j, j = 1, 2, …, n. The test statistic is

t = d̄/(sd/√n)

where

d̄ = (1/n) Σ dj

and

sd² = [Σ dj² − (Σ dj)²/n]/(n − 1)

with both sums running over j = 1 to n. The degrees of freedom are n − 1.
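The paired t-test above reduces to a few lines of arithmetic on the differences. A minimal sketch with invented data:

    # Paired t-test statistic from the formulas above (sketch; data illustrative).
    import statistics

    x1 = [10.2, 9.8, 10.5, 10.1, 9.9]   # e.g., measurements from method 1
    x2 = [10.0, 9.5, 10.4, 9.8, 9.7]    # paired measurements from method 2

    d = [a - b for a, b in zip(x1, x2)]  # per-specimen differences
    n = len(d)

    d_bar = statistics.mean(d)
    s_d = statistics.stdev(d)            # sample std dev of the differences

    t_stat = d_bar / (s_d / n ** 0.5)    # compare to t(alpha/2, n-1)
    print(f"t = {t_stat:.3f} with {n - 1} degrees of freedom")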
t-test (two-sample, equal variances)—When there are two populations with unknown means and unknown variances that are assumed to be equal, the variances can be pooled to determine a pooled variance:

sp² = [(n1 − 1)s1² + (n2 − 1)s2²]/(n1 + n2 − 2)

where s1² and s2² are the individual sample variances, and n1 and n2 are the respective sample sizes. The t-test statistic is

t = (x̄1 − x̄2)/(sp·√(1/n1 + 1/n2))

where sp is the pooled standard deviation calculated above, and x̄1 and x̄2 are the means of the respective samples. The degrees of freedom are ν = n1 + n2 − 2.
t-test (two-sample, unequal variances)—When there are two populations with unknown means and unknown variances that are assumed to be unequal, the sample standard deviation of (x̄1 − x̄2) is

s = √(s1²/n1 + s2²/n2)

The degrees of freedom are

ν = (s1²/n1 + s2²/n2)² / [(s1²/n1)²/(n1 − 1) + (s2²/n2)²/(n2 − 1)] − 2
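A minimal sketch of the unequal-variances case above, with invented data; in practice the same comparison is usually run with scipy.stats.ttest_ind(x1, x2, equal_var=False).

    # Two-sample t-test, unequal variances, per the formulas above (sketch).
    import statistics

    x1 = [12.1, 11.8, 12.4, 12.0, 11.9, 12.2]
    x2 = [11.2, 11.6, 11.4, 11.9, 11.3]

    n1, n2 = len(x1), len(x2)
    v1, v2 = statistics.variance(x1), statistics.variance(x2)  # sample variances

    se = (v1 / n1 + v2 / n2) ** 0.5                 # std dev of (x1_bar - x2_bar)
    t_stat = (statistics.mean(x1) - statistics.mean(x2)) / se

    # Degrees of freedom per the glossary's expression (some texts omit the "- 2")
    num = (v1 / n1 + v2 / n2) ** 2
    den = (v1 / n1) ** 2 / (n1 - 1) + (v2 / n2) ** 2 / (n2 - 1)
    dof = num / den - 2

    print(f"t = {t_stat:.3f}, degrees of freedom ~ {dof:.1f}")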
2ᵏ factorial design—A factorial design in which k factors are studied, each at exactly two levels.
two-tailed test—A hypothesis test that involves both tails of a distribution. Example: We wish to reject the null hypothesis, H0, if the sample statistic falls in either tail, below the minimum limit or above the maximum limit:
H0: µ = µ0
H1: µ ≠ µ0
type I error—The probability or risk of rejecting a hypothesis that is true. This probability is represented by α (alpha). See the decision table below. See operating characteristic curve and producer's risk.

                      H0 true             H0 false
  Do not reject H0    Correct decision    Type II error
  Reject H0           Type I error        Correct decision
type II error—The probability or risk of accepting a hypothesis that is false. This probability is represented by β (beta). See power curve and consumer's risk.
U
u (count per unit)—The count of events per unit where the opportunity is variable. More than one event may occur in a unit. Note: For u, the opportunity is variable; for c, the opportunity for occurrence is fixed.
U—See upper specification limit.
u chart—An attributes control chart for the number of events per unit where the opportunity is variable. Note: Events of a particular type (e.g., number of absentees or number of sales leads) form the count. In the quality field, events are often expressed as nonconformities, and the variable opportunity relates to subgroups of variable size or variable amounts of material.

Central line: ū
Control limits: ū ± 3√(ū/n)

where ū is the average number of events per unit and n is the number of units in the subgroup; ū is calculated as ū = c̄/n. Note: If the calculated lower control limit (LCL) is ≤ 0, there is no LCL.
UCL—See upper control limit.
unacceptable quality level (UQL)—See limiting quality level.
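A minimal sketch of the u-chart limits above, with invented counts and subgroup sizes; note how the limits widen or narrow as the subgroup size n changes.

    # u-chart center line and per-subgroup control limits (sketch; data illustrative).
    counts = [7, 4, 9, 6, 5, 8]       # events (e.g., nonconformities) per subgroup
    sizes = [50, 48, 52, 50, 49, 51]  # units inspected per subgroup

    u_bar = sum(counts) / sum(sizes)  # average events per unit (central line)

    for c, n in zip(counts, sizes):
        ucl = u_bar + 3 * (u_bar / n) ** 0.5
        lcl = max(0.0, u_bar - 3 * (u_bar / n) ** 0.5)  # no LCL below zero
        print(f"u = {c / n:.3f}  UCL = {ucl:.3f}  LCL = {lcl:.3f}")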
uncertainty—A parameter that characterizes the dispersion of the values that could reasonably be attributed to the particular quantity subject to measurement or characteristic. Uncertainty indicates the variability of the measured value or characteristic that considers two major components of error: (1) bias and (2) the random error from the imprecision of the measurement process.
unconditional guarantee—An organizational policy of providing customers an unquestioned remedy for any real or perceived product or service deficiency.
unique lot—A lot formed under conditions peculiar to that lot and not part of a routine sequence.
unit—A quantity of product, material, or service forming a cohesive entity on which a measurement or observation can be made.
universe—A group of populations, often reflecting different characteristics of the items or material under consideration.
upper control limit (UCL)—The control limit on a control chart that defines the upper control boundary.
upper specification limit or upper tolerance limit (U)—The specification limit that defines the upper limiting value.
UQL—See limiting quality level.
URPL—See rejectable process level.
USDA—US Department of Agriculture.
useful life—The number of life units from manufacture to when an item has an unrepairable failure or unacceptable failure rate.
V
validation—Confirmation by examination of objective evidence that specific requirements and/or a specified intended use are met.
value-added—Tasks or activities that convert resources into products or services consistent with customer requirements. The customer can be internal or external to the organization.
value chain—See supply chain.
value stream—The primary actions required to bring a product from concept to placing the product in the hands of the end user.
value stream mapping—The technique for mapping the value stream.
variable—(control chart usage) A quality characteristic that is from a continuous scale and is quantitative in nature.
variables, inspection by—See inspection by variables.
variables control chart—A Shewhart control chart where the measure plotted represents data on a continuous scale.
variance—A measure of the variation in the data. When working with the entire population, the population variance is used; when working with a sample, the sample variance is used. The population variance is based on the mean of the squared deviations from the arithmetic mean and is given by

σ² = (1/n) Σ (x − µ)²

The sample variance is based on the squared deviations from the arithmetic mean divided by n − 1 and is given by

s² = (1/(n − 1)) Σ (x − x̄)²

where n is the number of data points in the sample or population, µ is the population mean, x is the observed value of the quality characteristic, and x̄ is the sample mean. See standard deviation.
variances, tests for—A formal statistical test based on the null hypothesis that the variances of different groups are equal. Many times in regression analysis a formal test of variances is not done; instead, residual analysis checks the assumption of equal variance across the values of the response variable in the model. For two variances, see F-test.
variation—The difference between values of a characteristic. Variation can be measured and calculated in different ways, such as range, standard deviation, or variance. Also known as dispersion or spread.
verification—The act of reviewing, inspecting, testing, checking, auditing, or otherwise establishing and documenting whether items, processes, services, or documents conform to specified requirements.
virtual team—A boundaryless team functioning without a commonly shared physical structure or physical contact, using technology to link the team members.
visual control—A technique of positioning all tools, parts, production activities, and performance indicators so that the status of a process can be understood at a glance by everyone; it provides visual clues to aid the performer in correctly processing a step or series of steps, to reduce cycle time, to cut costs, to smooth the flow of work, and to improve quality.
vital few, useful many—A term used by J. M. Juran to describe his use of the Pareto principle, which he first defined in 1950. (The principle was used much earlier in economics and inventory control methodologies.) The principle suggests that most effects come from relatively few causes; that is, 80% of the effects come from 20% of the possible causes. The 20% of the possible causes are referred to as the vital few; the remaining causes are referred to as the useful many. When Juran first defined this principle, he referred to the remaining causes as the trivial many, but, realizing that no problems are trivial in quality assurance, he changed it to useful many.
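Returning to the variance entry above, a minimal sketch contrasting the population divisor n with the sample divisor n − 1 (data invented):

    # Population vs. sample variance, matching the formulas above (sketch).
    data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]  # illustrative observations
    n = len(data)
    mean = sum(data) / n

    pop_var = sum((x - mean) ** 2 for x in data) / n          # sigma^2: divide by n
    samp_var = sum((x - mean) ** 2 for x in data) / (n - 1)   # s^2: divide by n - 1

    print(f"population variance = {pop_var:.3f}, sample variance = {samp_var:.3f}")
    # The standard library agrees: statistics.pvariance(data) and statistics.variance(data)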
voice of the customer (VOC)—An organization’s efforts to understand the customers’ needs and expectations (voice) and to provide products and services that truly meet such needs and expectations.
W
w—The span of the moving average.
warning limits—There is a high probability that the statistic under consideration is in a state of statistical control when it is within the warning limits (generally two standard deviations) of a control chart. See Shewhart control limits. Note: When the value of the statistic plotted lies outside a warning limit but within the action limit, increased supervision of the process according to prespecified rules is generally required.
waste—Activities that consume resources but add no value: visible waste (e.g., scrap, rework, downtime) and invisible waste (e.g., inefficient setups, wait times of people and machines, inventory).
WBS—See work breakdown structure.
wear-out—The process that results in an increase of the failure rate or probability of failure with an increasing number of life units.
Weibull distribution—A distribution of continuous data that can take on many different shapes and is used to describe a variety of patterns; it is used to define when the "infant mortality" period (decreasing failure rate) has ended and a steady state has been reached; relates to the bathtub curve.
work analysis—The analysis, classification, and study of the way work is done. Work may be categorized as value-added (necessary work) or nonvalue-added (rework, unnecessary work, idle). Collected data may be summarized on a Pareto chart, showing how people within the studied population work. The need for and value of all work is then questioned and opportunities for improvement identified. A time use analysis may also be included in the study.
workbook—A collection of exercises, questions, or problems to be solved during training; a participant's repository for documents used in training (e.g., handouts).
work breakdown structure (WBS)—A project management technique by which a project is divided into tasks, subtasks, and units of work to be performed.
work group—A group composed of people from one functional area who work together on a daily basis and whose goal is to improve the processes of their function.
work instruction—A document that answers the question, "How is the work to be done?"
wt—The exponentially weighted moving average (EWMA) at the present time t.
wt–1—The exponentially weighted moving average (EWMA) at the immediately preceding time interval.
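The EWMA terms wt and wt−1 defined above are linked by the standard recursion wt = λxt + (1 − λ)wt−1, where λ is the smoothing weight; this recursion is standard EWMA practice rather than something stated in these entries. A minimal sketch with an illustrative λ:

    # Exponentially weighted moving average (EWMA) recursion (sketch).
    lam = 0.2                        # smoothing weight, 0 < lam <= 1 (illustrative)
    xs = [10.1, 9.8, 10.4, 10.0, 10.6, 9.9]

    w = xs[0]                        # a common choice: start at the first observation
    for x in xs[1:]:
        w = lam * x + (1 - lam) * w  # w_t = lam * x_t + (1 - lam) * w_{t-1}
    print(f"EWMA after {len(xs)} points: {w:.3f}")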
X
x—The observed value of a quality characteristic; specific observed values are designated x1, x2, x3, and so on. x is also used as a predictor value. See input variable or predictor variable.
x̄ (pronounced x bar)—The average, or arithmetic mean. The average of a set of n observed values is the sum of the observed values divided by n:

x̄ = (x1 + x2 + … + xn)/n

x̄0—See control chart, standard given.
x̄i—The average of the ith subgroup (when n = 1, x̄i = x = xi).
x̄ chart (averages control chart)—A variables control chart for evaluating the process level in terms of subgroup averages:

Central line: x̿ (if standard given, x̄0)
Control limits: x̿ ± A3·s̄ or x̿ ± A2·R̄ (if standard given, x̄0 ± A·σ0 or x̄0 ± A2·R0)

where x̿ is the average of the subgroup averages; s̄ is the average sample standard deviation; A, A2, and A3 are control chart factors (see Appendix D); σ0 is the standard value of the population standard deviation; and R0 is the standard value of the range. (Use the formula with R̄ when the subgroup size is small, and the formula with s̄ when the subgroup is larger—generally > 10 to 12.)
x chart—See individuals chart.
x̿ (pronounced x double bar)—The average for the set under consideration of the subgroup values of x̄.
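A minimal sketch of the x̄ chart limits above using the R̄ form; the A2 value quoted is the published factor for a subgroup size of 5, and the measurements are invented (use Appendix D for other subgroup sizes).

    # X-bar chart limits via the R-bar formula above (sketch; data illustrative).
    subgroups = [
        [5.02, 4.98, 5.01, 5.00, 4.99],
        [5.03, 5.00, 4.97, 5.01, 5.02],
        [4.99, 5.01, 5.00, 4.98, 5.00],
    ]
    A2 = 0.577  # published factor for subgroup size n = 5

    x_bars = [sum(g) / len(g) for g in subgroups]   # subgroup averages
    ranges = [max(g) - min(g) for g in subgroups]   # subgroup ranges

    x_dbar = sum(x_bars) / len(x_bars)              # x double bar: central line
    r_bar = sum(ranges) / len(ranges)

    ucl = x_dbar + A2 * r_bar
    lcl = x_dbar - A2 * r_bar
    print(f"CL={x_dbar:.4f}  UCL={ucl:.4f}  LCL={lcl:.4f}")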
Y
y—y is sometimes used as an alternate to x as an observation. In such cases, y0 should be substituted where appropriate. See output variable or response variable.
ȳ (pronounced y bar)—The average, or arithmetic mean, of y. It is calculated exactly the same as x̄.
ȳ0—See y and control charts, standard given.
y̿ (pronounced y double bar)—The average for the set under consideration of the subgroup values of ȳ. See ȳ.
Z
z-confidence interval for means (one-sample)—Given an unknown mean µ and known variance σ², the 100(1 − α)% two-sided confidence interval on µ is

x̄ − Z(α/2)·σ/√n ≤ µ ≤ x̄ + Z(α/2)·σ/√n

where x̄ is the mean of the data, σ is the population standard deviation, and n is the sample size.
zero defects—A performance standard popularized by Philip B. Crosby to address a dual attitude in the workplace: people are willing to accept imperfection in some areas, while in other areas they expect the number of defects to be zero. This dual attitude has developed because of the conditioning that people are human, and humans make mistakes. The zero-defects methodology states, however, that if people commit themselves to watching details and avoiding errors, they can move closer to the goal of perfection.
zero investment improvement—Another name for a kaizen blitz.
z-test (one-sample)—Given an unknown mean µ and known variance σ²:

z = (x̄ − µ0)/(σ/√n)

where x̄ is the mean of the data, µ0 is the standard value of the population mean, σ is the population standard deviation, and n is the sample size. Note: If the variance is unknown, then the t-test applies.
z-test (two proportions)—To test whether two binomial parameters are equal, the null hypothesis is H0: p1 = p2 and the alternate hypothesis is H1: p1 ≠ p2. If the null hypothesis is true, then p1 = p2 = p, and the pooled estimate is

p̂ = (n1·p̂1 + n2·p̂2)/(n1 + n2)

The test statistic is then

z = (p̂1 − p̂2)/√(p̂(1 − p̂)(1/n1 + 1/n2))

and H0 should be rejected if |z| > z(α/2).
z-test (two-sample)—Given unknown means µ1 and µ2, and known variances σ1² and σ2²:

z = (x̄1 − x̄2)/√(σ1²/n1 + σ2²/n2)

where x̄1 is the mean of the first sample and x̄2 is the mean of the second sample, σ1 is the standard deviation of the first population and σ2 is the standard deviation of the second population, and n1 and n2 are the respective sample sizes. Note: If the variances are unknown, then the t-test applies.
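A minimal sketch of the two-proportion z-test above, with invented counts; 1.96 is z(α/2) for α = 0.05.

    # Two-proportion z-test from the formulas above (sketch; data illustrative).
    x1, n1 = 18, 200   # nonconforming units and sample size, lot 1
    x2, n2 = 31, 220   # nonconforming units and sample size, lot 2

    p1, p2 = x1 / n1, x2 / n2
    p_hat = (x1 + x2) / (n1 + n2)  # pooled estimate, = (n1*p1 + n2*p2)/(n1 + n2)

    z = (p1 - p2) / (p_hat * (1 - p_hat) * (1 / n1 + 1 / n2)) ** 0.5
    print(f"z = {z:.3f}; reject H0 at alpha = 0.05 if |z| > 1.96")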
Bibliography
American Society for Quality. ASQ's Foundations in Quality Self-Directed Learning Series, Certified Quality Engineer. Milwaukee, WI: ASQ Quality Press, 2000. ———. ASQ's Foundations in Quality Self-Directed Learning Series, Certified Quality Manager. Milwaukee, WI: ASQ Quality Press, 2001. ASQ Statistics Division. Glossary and Tables for Statistical Quality Control. 4th ed. Milwaukee, WI: ASQ Quality Press, 2005. Babbie, E. The Practice of Social Research. 10th ed. Belmont, CA: Thomson/Wadsworth, 2003. Bauer, J. E., G. L. Duffy, and R. T. Westcott. The Quality Improvement Handbook. 2nd ed. Milwaukee, WI: ASQ Quality Press, 2006. Bayers, P. "Apply Poka-Yoke Devices Now to Eliminate Defects." In 51st Annual Quality Congress Proceedings, Orlando, FL, 1997. Benbow, D. W., R. W. Berger, A. K. Elshennawy, and H. F. Walker. The Certified Quality Engineer Handbook. Milwaukee, WI: ASQ Quality Press, 2002. Benbow, D. W., A. K. Elshennawy, and H. F. Walker. The Certified Quality Technician Handbook. Milwaukee, WI: ASQ Quality Press, 2003. Benbow, D. W., and T. M. Kubiak. The Certified Six Sigma Black Belt Handbook. Milwaukee, WI: ASQ Quality Press, 2005. Brassard, M., and D. Ritter. The Memory Jogger II. Methuen, MA: Goal/QPC Press, 1994. Brassard, M., L. Finn, D. Ginn, and D. Ritter. The Six Sigma Memory Jogger II. Salem, NH: Goal/QPC, 2002. Chapman, C. D. "Clean House with Lean 5S." Quality Progress 38, no. 6 (June 2005): 27–32. Cheser, R. "Kaizen Is More Than Continuous Improvement." Quality Progress 27, no. 4 (April 1994). Cooper, R. G. Winning at New Products. 3rd ed. Cambridge, MA: Perseus, 2001. Douglas, A. "Improving Manufacturing Performance." In 56th Annual ASQ Quality Congress Proceedings, Denver, CO, 2002. Dovich, R. A. Reliability Statistics. Milwaukee, WI: ASQ Quality Press, 1990. Duffy, G., and S. Furterer. ASQ Certified Quality Improvement Analyst Handbook. Milwaukee, WI: ASQ Quality Press, 2020. Feigenbaum, A. V. Total Quality Control. 3rd ed., rev. New York: McGraw-Hill, 1991.
Feigenbaum, A. V., and D. S. Feigenbaum. "The Power of Management Capital." Quality Progress 37, no. 11 (November 2004): 24–29. Fisher, C., and J. Schutta. Developing New Services. Milwaukee, WI: ASQ Quality Press, 2003. Fisher, R. A. Statistical Methods and Scientific Inference. Edinburgh: Oliver and Boyd, 1956. Freund, J. E. Mathematical Statistics. Englewood Cliffs, NJ: Prentice Hall, 1962. Furterer, S. Introduction to Systems Engineering: Holistic Systems Engineering Life Cycle Architecture, Modeling and Design with Real-World Service Applications. Boca Raton, FL: CRC Press, 2022. Furterer, S., and D. Wood. ASQ Certified Manager of Quality/Organizational Excellence Handbook. Milwaukee, WI: ASQ Quality Press, 2021. Goldratt, E. M. The Goal. Great Barrington, MA: North River Press, 1986. ———. What Is This Thing Called Theory of Constraints and How Should It Be Implemented? Great Barrington, MA: North River Press, 1990. Griffin, J. Customer Loyalty. New York: Lexington Books, 1995. Hacking, I. Logic of Statistical Inference. Cambridge, UK: Cambridge University Press, 1965. Hradesky, J. L. Total Quality Management Handbook. New York: McGraw-Hill, 1995. Juran, J. M. Juran on Planning for Quality. New York: The Free Press, 1988. Juran, J. M., and B. Godfrey. Juran's Quality Handbook. 5th ed. New York: McGraw-Hill, 1999. Keeping, E. S. Introduction to Statistical Inference. Princeton, NJ: D. Van Nostrand, 1962. Kiefer, J. Journal of the American Statistical Association 72 (1977). King, B. M., and E. W. Minium. Statistical Reasoning in Psychology and Education. 4th ed. Hoboken, NJ: John Wiley & Sons, 2003. Lawton, R. "8 Dimensions of Excellence." Quality Progress 39, no. 4 (April 2006). Lindman, H. R. Analysis of Variance in Complex Experimental Designs. San Francisco: W. H. Freeman, 1974. Lowenstein, M. Customer Retention. Milwaukee, WI: ASQC Quality Press, 1995. MacInnes, R. The Lean Enterprise Memory Jogger. Salem, NH: Goal/QPC, 2002. Manz, C. C., and H. P. Sims Jr. Business without Bosses: How Self-Managed Teams Are Building High-Performance Companies. New York: Wiley, 1993. Michell, J. "Measurement Scales and Statistics: A Clash of Paradigms." Psychological Bulletin 3 (1986). Neyman, J. Philosophical Transactions of the Royal Society of London. London: University College, 1937. (Seminal work.) Okes, D., and R. T. Westcott. The Certified Quality Manager Handbook. 2nd ed. Milwaukee, WI: ASQ Quality Press, 2000. Ott, E. R., E. G. Schilling, and D. V. Neubauer. Process Quality Control: Troubleshooting and Interpretation of Data. 4th ed. Milwaukee, WI: ASQ Quality Press, 2005. Palmes, P. An Introduction to the PDCA Cycle and Its Application (MP3 audiocasts). http://www.asq.org/learn-about-quality/project-planningtools/links-resources/audiocasts.html.
Pande, P., R. Neuman, and R. Cavanagh. The Six Sigma Way. New York: McGraw-Hill, 2000. Pyzdek, T. The Complete Guide to the CQE. Tucson, AZ: Quality Publishing, 1996. Robinson, G. K. "Some Counterexamples to the Theory of Confidence Intervals." Biometrika 62, no. 1 (1975). Scholtes, P. R., B. L. Joiner, and B. J. Streibel. The Team Handbook. 3rd ed. Madison, WI: Joiner Associates Consulting Group, 2003. Schultz, G. The Customer Care and Contact Center Handbook. Milwaukee, WI: ASQ Quality Press, 2003. Scriabina, N., and S. Fomichov. "Six Ways to Benefit from Customer Complaints." Quality Progress 38, no. 7 (September 2006). Shingo, S. A Revolution in Manufacturing: The SMED System. Cambridge, MA: Productivity Press, 1985. Shuker, T. J. "The Leap to Lean." In 54th Annual Quality Congress Proceedings, Indianapolis, IN, 2000. Sower, V. E., and F. K. Fair. "There Is More to Quality Than Continuous Improvement: Listening to Plato." Quality Management Journal 12, no. 1 (January 2005): 8–20. Stevens, S. S. "On the Theory of Scales of Measurement." Science 103 (1946): 677–680. ———. "Mathematics, Measurement, and Psychophysics." In Handbook of Experimental Psychology, edited by S. S. Stevens, 1–49. New York: Wiley, 1951. Tague, N. R. The Quality Toolbox. 2nd ed. Milwaukee, WI: ASQ Quality Press, 2004. Van Patten, J. "A Second Look at 5S." Quality Progress 39, no. 10 (October 2006). Velleman, P. F., and L. Wilkinson. "Nominal, Ordinal, Interval, and Ratio Typologies Are Misleading." The American Statistician 47, no. 1 (1993). http://www.spss.com/research/wilkinson/Publications/Stevens.pdf. Westcott, R. T. The Certified Manager of Quality/Organizational Excellence Handbook. 3rd ed. Milwaukee, WI: ASQ Quality Press, 2006. ———. "Quality-Level Agreements for Clarity of Expectations." In Stepping Up to ISO 9004:2000, edited by R. T. Westcott. Chico, CA: Paton Press, 2003. Zar, J. H. Biostatistical Analysis. Englewood Cliffs, NJ: Prentice Hall International, 1984.
WEBSITES

American Society for Quality: http://www.asq.org
Analysis of variance: http://en.wikipedia.org/wiki/Analysis_of_variance
Confidence interval: http://en.wikipedia.org/wiki/Confidence_interval
Just in time: https://en.wikipedia.org/wiki/Lean_manufacturing
Lean Enterprise Institute: http://www.lean.org
Single-Minute Exchange of Die: http://en.wikipedia.org/wiki/Single-Minute_Exchange_of_Die
Six Sigma: http://en.wikipedia.org/wiki/Six_Sigma#The_.2B.2F-1.5_Sigma_Drift
Weibull distribution: http://en.wikipedia.org/wiki/Weibull_distribution
Six Sigma: http://www.isixsigma.com
"Quality Management," A. Blanton Godfrey: http://www.qualitydigest.com/oct96/godfrey.html
"The Bathtub Curve and Product Failure Behavior": http://www.weibull.com/hotwire/issue21/hottopics21.htm
http://www.weibull.com/hotwire/issue22/hottopics22.htm
http://reliawiki.com/index.php/Introduction_to_Repairable_Systems#Availability
Index
A
Acceptable quality level/limit (AQL), 218, 235–244 Acceptance region, 291 Acceptance sampling, 215–224 by attributes, 218–224 lot-by-lot vs. average quality protection, 217 operating characteristic curve, 217–218 plans, 216–217 Accuracy, 247, 248 Achieved availability, 194 Activity network diagram/arrow diagram, 146, 152 Ad hoc teams, 42 Affinity diagram, 145, 146 Alpha testing, 345 Alpha value, 291 Analysis of covariance, 278 Analysis of variance (ANOVA), 278, 302, 308–314 assumptions, 309 degrees of freedom, 310–311 ANSI/ASQ Z1.4, 224–230 defects, classifications of, 225 Dodge-Romig tables, 228 inspection, levels of, 225–227 sampling types, 227–228 variables sampling plans, 228–230 ANSI/ASQ Z1.9, 230–244 Appraisal costs, 28 Arrow diagram. See Activity network diagram/arrow diagram Assignable cause variation, 268 Association of Business Process Management Professionals, 137 Attributes data, 197–198 control charts for, 258–265 Attributes sampling plans, 218–224 ANSI/ASQ Z1.4, 224–230 types of, 222–224
Audit agent authorizing, 35 closure phase, 34 components, 31–34 defined, 29 performance phase, 33 preparation phase, 32–33 process, phases of, 31–34 reporting phase, 34 roles and responsibilities, 34–36 team leader, 35 types of, 29–31 Auditee, 30, 36 Auditors, 29, 35–36 Availability, 194 Average outgoing quality (AOQ), 219 Average outgoing quality limit (AOQL), 219–220 Average quality protection, 217
B Baseline plan, 165 Bathtub curve, 189–192 Benchmarking, 121–125 measures, creation of, 124 partnership, 123 Benefits-to-cost ratio (BCR), 64, 153 Beta testing, 345 Bias, 250–253 Bimodal, 169 Bimodal distribution, 180 Binomial distribution, 172–174 Blocking, 296, 302–303 Body of knowledge (BOK), 374–381 Box plot, 208, 210, 212 BPM. See Business process management (BPM) Brainstorming, 75 Breakthrough improvement, 74–81 Burn-in, 190
Business case, 155 Business process management (BPM), 133–144 design phase, 136–139 execution phase, 141–142 goals of, 135–136 modeling phase, 139–140 monitoring phase, 142–143 optimization phase, 144 principles, 136 Buyers, types of, 320–321
C Capability ratio (Cp), 274–275 Categorical data, 197–198 Categorical variables, 195, 277 Cause-and-effect diagram, 82–84, 85 c-chart (count chart), 261, 384 Cellular teams, 40 Central tendency, measures of, 168–169 Certified Manager of Quality/Organizational Excellence Handbook (Furterer and Wood), 44 Certified Quality Process Analyst (CQPA), 374–381 expectations of, 2–5 Chain reaction, 14, 15 Check sheets, 91–92 Clinical service delivery systems, 8 Coaching/corrective feedback, 55 Code of Ethics fundamental principles, 2 overview, 2 quality professional, expectations of, 2–5 violation of, 5 Common cause variation, 268–270 Companywide quality control (CWQC), 17 Complaint forms, 332–335 Complaint management system (CMS), 333 Complement rule, 181 Computer-based training, 61 Conditional probability, 183 Confidence interval (CI), 282–286 Confidence level/coefficient, 282 configuration management (CM), 25–26 Consumer’s risk, 219 Containment actions/activities, 362–363 Continuous flow manufacturing, 102–103 Control charts, 95–96, 208 for attributes data, 258–265 factors for, 385–386 for variables data, 265–268
Control factors, 305 Control limits, 254–255 formulas, 384 for p charts, 262–264 for R chart, 265–267 for s chart, 268 for u charts, 264–265 for X-bar, 265–268 Control marking, 353 Convenience sampling, 215 Corrective action, 360–361, 364–365 containment, 362–363 problem identification and assessment, 361–362 root causes, identification of, 363–364 verify and confirm effectiveness, 365 Corrective action board (CAB), 361 Corrective action team (CAT), 363 Corrective and Preventive Action (CAPA), 360, 366. See also Corrective actions; Preventive actions Corrective maintenance, 193 Correlation, 278–281 interpretation of size of, 279–281 Correlation coefficient, 93–94, 278 Cost of quality (COQ), 27–28 Cp (process capability index), 271, 274 Cpk (minimum process capability index), 271, 272–274 Crashing schedules, 160 Critical path method (CPM), 152, 158–159 Critical region of the test statistic, 291 Criticalto Satisfaction (CTS), 200, 202 Cross-functional process, 137 Cross-functional teams, 42–43 C-shaped matrix, 148 Customer loyalty, 16 Customers, 13 defined, 316 external, 320–324 feedback, 325–326 internal, 316–320 requirements, 21 satisfaction goal, 19 satisfaction methods. See Customer satisfaction SIPOC model, 9 voice of, 14, 16 Customer satisfaction, 325 complaint forms, 332–335 focus groups, 331 quality function deployment, 336–341 surveys, 326–330 warranty and guarantee analysis, 335–336 Customer–supplier value chain, 317
D Data qualitative/categorical/attributes, 197–198 quantitative, 199 Data collection, 199–203 plan, 200–203 Data integrity, 203–206 Data pedigree, 204 Data plotting, 207–212 Defect concentration diagram, 91 Defective units, 259–261 Defects per million opportunities (DPMO), 112 Degrees of freedom, 310–311 Deming, W. E., 6, 14, 70, 113, 256 Descriptive statistics, 281–282 Design of experiments (DOE), 295–303 means of, 296 Design phase, BPM, 136–139 Discount buyer, 321 Discrete data, 199 Discrete distribution, 173 Dispersion, measures of, 169–172 Distributions, 172 bimodal, 180 binomial, 172–174 discrete, 173 double-peaked, 180 exponential, 192–194 normal, 176–180 poisson, 174–175 skewed, 180 Weibull, 175, 192 Distributors, 322 DMAIC (Define, Measure, Analyze, Improve, Control) phases, 75–81, 113, 117 Documentation, quality, 23–27 configuration management, 25–26 library, 24–25 Documentation control (DC), 26–27 Document traceability matrix, 26 Dodge-Romig tables, 228 Dot plot, 208, 210, 212 Double-peaked distribution, 180 Double sampling plans, 224
Empirical rule, 286 Employee buyer, 321 Employee risk, 127 End users, 320–321 Error detection, 370 Error-proofing, 370 Evans, James, 12 Execution phase, BPM, 141–142 Experimental design. See Design of experiments (DOE) Experimental error, 296 Exponential distribution, 192–194 External audits, 30 External customers, 320–324 end user/final customer, 320–321 influence on products and services, 323–324 intermediate customers, 322–323
F Factorial ANOVA, 308 Factorial design, 296 Failure, random, 190 Failure costs, 28 Failure mode and effects analysis (FMEA), 367–370 Failure rate, 192–193 Fairness, 3 Feedback, 55 customers, 325–326 Feigenbaum, Armand, 17 Final customer, 320–321 Financial risk, 127 First-article testing, 345 First-party audit, 30 5S, 101–102 Fixed effects model, 308–309 Fixed sampling, 214–215 Flowcharts, 84–90 Focus groups, 331 Fishbone diagram. See Cause-and-effect diagram F-test, 302, 310 Furterer, Sandra, 44
E Effects, 298–301 Emergency services SIPOC, 9 stakeholder analysis, 10–11 value chain, 8
G Gage repeatability and reproducibility (GR&R), 248–249 Gantt charts, 159
488 Index
General addition rule, 181–182 General multiplication rule, 183–184 The Goal (Goldratt), 106, 108 Goals, 19–20, 46 organizational, training and, 56–57 Goldratt, Eliyahu, 106 Grab sampling, 215 Groupthink, 54 Guarantees, 335–336
H Histograms, 96–98, 208, 209, 212 Honesty, 2–3 House of quality, 336–339 Hypothesis testing, 281–291 confidence interval, 282–286 limitations of, 292–293 point estimate, 282 population parameter vs. sample statistic, 282 student’s t-distribution, 287–290
I Imaginary brainstorming, 75 Incremental improvement, 74–81 Infant mortality period, 190 Inferential statistics, 282 Information Technology body of knowledge, 134–135 Inherent availability, 194 Integrity, 2–3 Interaction plot, 303, 304 Intermediate customers, 322–323 Internal audits, 29–30 Internal customers, 316–320 effect of treatment of, 319 improving processes and services, 318–319 methods to energize, 320 Interrelationship digraph, 145, 149–151 Interval measurement, 196 Ishikawa diagram. See Cause-and-effect diagram ISO 9001:2015, 24, 26–27, 30, 31 ISO (International Organization for Standardization), 30, 111 ISO 9001:2008 certification, 30 ISO 9001:201521 criteria, 24 ISO/TS 16949, 355
J Job aids, 60–61 Juran, Joseph, 12
Juran, Joseph M., 343–344 Juran on Planning for Quality (Juran), 343 Just-in-time (JIT), 100
K Kaizen, 38, 71–74 incremental/breakthrough improvement, 74–81 Kanban, 101 Kickoff meetings, 155 Knowledge Information Technology body of, 134–135 Productivity body of, 134 Quality body of, 133–134 Knowledge mapping, 75
L Lateral thinking, 75 Lean approach/thinking, 99 continuous flow manufacturing, 102–103 5S, 101–102 kanban, 101 poka-yoke, 108–109 pull system, 100, 101 setup reduction, 99–100 theory of constraints, 106–108 total preventive (predictive) maintenance, 110–111 value-added analysis, 103–106 value stream mapping, 106 Lectures, 60 Legal measurement, 245 Likert scale, 330 Lindsay, William, 12 Linearity, 249 Linear models, 277–278 Line graph budget chart, 159–160 Lot-by-lot protection, 217 Lot/group marking, 353 Lot tolerance percent defective (LTPD), 218 Loyalty, 4 customer, 16 L-shaped matrix, 148, 149
M Machine capability analysis, 276 Main effects plot, 303, 304 Maintainability, 193 Maintenance-type processes, 44, 45 Malcolm Baldrige National Quality Award, 122 Manufacturing, of product, 12–13
Margin, 163–164 Material review board (MRB), 355–356, 361 Materials identification, importance of, 352 Matrix diagram, 145, 147–149 Mean, 168 Mean square (MS), 301 Mean time between failures (MTBF), 193 Mean time between maintenance actions (MTBMA), 193 Mean time to failure (MTTF), 193 Mean time to repair (MTTR), 193 Measles chart, 91 Measurements accuracy, 247, 248 bias, 250–253 classes of, 245 confidence intervals in, 285–286 defined, 245 errors, 246–247 linearity, 249 precision, 247, 248 process capability, 270–276 repeatability, 248–249 reproducibility, 248–249 Measurement scales interval, 196 nominal, 195 ordinal, 196 ratio, 196–197 Median, 169 Meta models, 139–140 Minimum sensitivity design, 306 Mixed effects models, 308 Mode, 169 Modeling phase, BPM, 139–140 Monitoring phase, BPM, 142–143 Morphological box, 75 Motivational feedback, 55 Multiple linear regression, 277 Multiple sampling plans, 224 Multistage sampling, 214 Multivariate analysis of variance (MANOVA), 308
N Natural teams, 39–40 Needs assessment, training, 57–58 Network diagram, 157–158 Next operation as customer, 316 Noise factors, 305–306 Nominal scales, 195 Nonconforming materials segregating, methods for, 354–356 Nonconformity, 172 Nonlinear regression models, 277
Nonquantitative quality tools, 82, 83 cause-and-effect diagram, 82–84, 85 flowcharts/process maps, 84–90 Normal curve, area under, 176–180, 382–383 Normal distributions, 176–180 Norms, 48 np chart, 260, 384 Nuisance factor, 296 Null hypothesis, 291–295
O Off-the-job training, 58–59, 60 One-piece flow, 102 One-shot devices, 189 On-the-job training (OJT), 58–60 Operating characteristic (OC) curve, 217–218 Operational availability, 194 Operational requirement document (ORD), 26 Operation risk, 127 Opportunity sampling, 215 Optimization phase, BPM, 144 Ordinal scales, 196 Organizational benchmarking, 124 Organization buyer, 321 Orthogonal design, 296 Out-of-control process, 260 Out of statistical control condition, 255
P Pareto charts, 92–93, 208, 211, 212 Pareto principle, 92 p-chart. See Proportion chart Pearson product-moment correlation coefficient, 278 Performance benchmarking, 124 PERT equation, 156–157 Picture association, 75 Pilot run, 345 Plan–do–check–act (PDCA) cycle, 21, 70–71 incremental/breakthrough improvement, 74–81 kaizen event, 72–74 Plan–do–study–act (PDSA) cycle, 70–71 Poisson distribution, 174–175 Poka-yoke, 108–109 Porter, Michael, 7 Pp (process performance index), 271, 275 Ppk (minimum process performance index), 271, 275 Precision, 247, 248 Prevention costs, 28
Preventive actions, 366–371 defined, 366 potential causes, identification of, 367–371 potential problems, identification of, 367 Preventive maintenance, 193 Prioritization matrix, 146, 151 Probability, 181–188 combinations, 185–187 conditional, 183 contingency tables, 182–183 density functions, 189 general multiplication rule, 183–184 permutations, 187–188 rules, 181–182 Probability sampling, 214 Process architecture map (PAM), 87–90 Process audit, 31 Process benchmarking, 124 Process decision program chart (PCDC), 145, 147 Processes, 136 transformation, 142 Process improvement teams, 38–39 Process maps, 84–90 Process performance management, 142–143 Producer’s risk, 218 Product, quality, 12 Product audit, 31 Product development cycle, 342, 343 Production part approval process (PPAP), 345 Productivity body of knowledge, 134 Project benchmarking, 124 Project management activities, 154 defined, 153 phases, 154 tools. See Project management tools Project management tools, 152–153 baseline plan, 165 benefit-to-cost ratio, 153 business case, 155 comparative selection, 155 contingency plans, 164–165 corrective actions, 166 critical path method, 158–159 Gantt charts, 159 kickoff meetings, 155 lessons learned meeting, 166 line graph budget chart, 159–160 margin, 163–164 network diagram, 157–158 PERT equation, 156–157 progress reviews, 165–166 responsibility assignment matrix, 161 risk list, 161–162 task duration and cost estimates, 156 task list, 156
termination order, 166 time and cost reduction, 160 triggers, 164 variance analysis, 165 work breakdown structure, 155–156 written objectives, 155 Proportion chart, 259–260, 384 control limits for, 262–264 Pull system, 100, 101 p-values, 294–295
Q Qualification process, 344 Qualification testing, 344 Qualitative data, 197–198 Qualitative responses, 330 Quality benefits of, 14–16 defined, 6, 12–14 documentation, 23–27 tools. See Quality tools Quality assurance/quality control (QA/QC), 21–22 Quality audit. See Audit Quality body of knowledge, 133–134 Quality council, 17 responsibilities of, 17–18 Quality function deployment (QFD), 326, 336–341 Quality level agreements (QLAs), 318–319 Quality management tools, 145 activity network diagram/arrow diagram, 146 affinity diagram, 145, 146 interrelationship digraph, 145, 149–151 matrix diagram, 145, 147–149 prioritization matrix, 146, 151 process decision program chart, 145, 147, 148 tree diagram, 145, 147 Quality planning, 16–21 elements of, 16 goals/objectives, 19–20 implementation, 20 measures of success, 21 resources, 20 Quality policy, 18 Quality process analyst (QPA), 38–39 Quality tools cause-and-effect diagram, 82–84, 85 check sheets, 91–92 flowcharts/process maps, 84–90 histograms, 96–98 nonquantitative, 82, 83
Pareto charts, 92–93 quantitative, 82, 83 scatter diagram, 93–96 Quantitative data, 199 Quantitative quality tools, 82, 83 check sheets, 91–92 histograms, 96–98 Pareto charts, 92–93 scatter diagram, 93–96 Quantitative responses, 330 Quick changeover. See Setup reduction (SUR) Quid quo pro, 123
R Random effects models, 308, 309 Random errors, 246 Random failure period, 190 Randomization, 296 Random sampling, 214 Range, 169–170 Range method, 231–232 Rank variables, 196 Rapid exchange of tooling and dies (RETAD). See Setup reduction (SUR) Ratio measurement, 196–197 Rational subgroups, 257–258 Rational subgroup sampling, 215 R chart, 384 control limits for, 265–267 Reengineering, 74–75 Regression multiple linear, 277 simple linear, 277 Regression analysis, 277–278 Rejection region, 291 Reliability, 188–189 bathtub curve, 189–192 probability density functions, 189 Repeatability, 248–249 Replication, 296 Reproducibility, 248–249 Resource leveling, 161 Responsibility assignment matrix (RAM), 161 Retail buyer, 320–321 Retail chain buyers, 322 Return on training investment (ROTI), 64–65 Risk assessment, 130–131 tool, 126–127 consequences, 130 cube, 129, 133 defined, 125 handling, 127–128, 131–132 likelihood, 129
list, 161–162 monitor, 166 prioritization, 129, 132 priority of, 162 proactive response, 162–163 types of, 126–127 Risk-based thinking, 133 Risk management, 125–133 process, 125–126 Risk planning, 125–126 Risk probability impact matrix chart, 162 Risk score, 163–164 Robust, 304 Roof-shaped matrix, 149, 150 Run charts, 94–95
S Sampling acceptance, 215–224 convenience, 215 defined, 213 grab/opportunity, 215 plans, 216–217 process, stages of, 213 random/ probability, 214 rational subgroup, 215 sequential, 214 stratified, 214 systemic/fixed, 214–215 types of, 227–228 Scatter diagram, 93–96, 208, 211, 212 s chart, 384 control limits for, 268 Scheduling risks, 127 Scholtes, Peter, 53–54, 113 Scientific metrology, 245 SCOR model, 349–350 Second-party audit, 30 Self-managed teams, 41–42 Self-study materials, 60 Sequential sampling, 214 Service buyer, 321 Service providers, 322–323 Service user, 321 Service value system, 7 Setup reduction (SUR), 99–100 Seven basic quality tools. See Quality tools Shewhart, Walter, 70, 95 Shewhart charts, 95 Shigeo Shingo, 100 Shipping/receiving risk, 127 Sigma (s), 111 Signal factors, 305 Signal-to-noise (S/N) ratio, 307
Significance test, 293–295 Simple linear regression, 277 Simple random sampling, 214 Single-minute exchange of die (SMED). See Setup reduction (SUR) Single sampling plans, 224 SIPOC (suppliers–inputs–process–outputs– customers) model, 9–11 Six Sigma, 75, 111–120 methodology, 117–119 philosophy, 113–117 tools, 117 Skewed distribution, 180 SMART goals, 19 SMART WAY, 46 Snowball sampling, 215 Special addition rule, 181 Special cause variation, 268–270 Special multiplication rule, 184–185 Special project teams, 43 Stakeholders, 9 analysis, 10–11 Standard deviation, 111, 170–172, 232 Standardized work, 103 Statistical hypothesis test, 290–291 Statistical process control (SPC), 254–276 fundamental concepts, 254–256 rational subgroups, 257–258 Statistics, 281–282 Status/acceptance stamp, 353 Stem-and-leaf plot, 208, 209, 212 Stratified sampling, 214 Student’s t-distribution, 287–290 Sums of squares, 301–302 Supplier, 16 assessment metrics, 350–351 performance, 348–350 selection of, 346–348 SIPOC model, 9 Supplier management performance of supplier, 348–350 selection of supplier, 346–348 supplier performance assessment metrics, 350–351 Supply chain risk, 127 Surveys, 326–330 design, 326, 329 mediums, 329 question design, 330 types, 326, 327–328 System, defined, 6 Systematic errors, 246 System audit, 31 Systemic sampling, 214–215 System specification (SS), 26
T Taguchi loss function, 303–307 Taiichi Ohno, 100 Takt time, 103 Tally sheets, 91. See also Check sheets Task-type processes, 44 t-distribution, 287–290 Team leader, 35 Teams conflict, 53–55 cross-functional, 42–43 decision-making method, 47–48 defined, 37 design, 37 development, 44–48 introductions, handling, 45–46 process improvement, 38–39 roles and responsibilities, 49–53 self-managed, 41–42 setting goals and objectives, 46 special project, 43 stages, 48–49 temporary/ad hoc, 42 types of, 37–44 virtual, 43–44 work cells/cellular, 40 work groups/natural, 39–40 Technical measurement, 245 Temporary/ad hoc teams, 42 Theory of constraints (TOC), 106–108 Third-party audit, 30 Time series analysis, 208, 212, 214 Time series chart. See Run charts Total preventive (predictive) maintenance (TPM), 110–111 Total quality control, 17, 20, 22 Total quality costs, 27–28 Traceability defined, 352 key requirements for, 352–354 sample procedure for, 356–357 Training adult learning styles, 58 computer-based, 61 delivery methods, 58–61, 62 effectiveness evaluation of, 61, 63–65 tools for assessing, 66–67 needs assessment, 57–58 off-the-job, 58–59, 60 on-the-job, 58–60 organizational goals and, 56–57 video, 61 Transcendent, quality, 12
Tree diagram, 145, 147 Trend chart. See Run charts Triggers, 164 T-shaped matrix, 148 Tuckman model, 48–49 Type I error, 293 Type II error, 293
U u charts, 261, 384 control limits for, 264–265 Useful life period, 190 User safety risk, 127
V Validation, 344–345 Value-added analysis, 103–106 Value chains, 6–8, 140 Value definition, of quality, 12 Value stream mapping (VSM), 106 Value system, 7, 8 Variables categorical, 195, 277 classification of, 305–306 interval, 196 ordinal/rank, 196 ratio, 196 Variables sampling plans, 228–230 ANSI/ASQ Z1.9, 230–244 Variance, 171 Variance analysis, 165 Verification, 342–344 Video training, 61 Virtual teams, 43–44 Voice of the customer (VOC), 14, 16, 119–120, 325, 331
W Waiver, 355 Warranty, 335–336 Wear-out, 191 Weibull distribution, 175, 192 What Is This Thing Called Theory of Constraints and How Should It Be Implemented? (Goldratt), 106 Wholesale buyer, 322 Wood, Douglas, 44 Workbooks, 60 Work breakdown structure (WBS), 155–156 Work cells, 40 Workflow, 137 Work groups, 39–40
X X-bar, 168, 384 control limits for, 265–268 X-shaped matrix, 149
Y Y-shaped matrix, 148
Z Z-score, 283
About the Editor
Sandra L. Furterer is an Associate Professor and Department Chair at the University of Dayton in the Department of Engineering Management, Systems, and Technology. Dr. Furterer has over 25 years of experience in business process and quality improvements in multiple service industries, including healthcare, retail, consulting, information systems, and financial services. She previously was a VP of Process Improvement for two financial services firms in Columbus, Ohio. She also was the Enterprise Performance Excellence leader who deployed Lean Six Sigma in a hospital system in south Florida. She holds a PhD in Industrial Engineering from the University of Central Florida, an MBA from Xavier University, and bachelor's and master's degrees in Industrial and Systems Engineering from Ohio State. She is an ASQ Certified Manager of Quality/Organizational Excellence, an ASQ Certified Six Sigma Black Belt, an ASQ Certified Quality Engineer, an ASQ Fellow, and a certified Six Sigma Master Black Belt. Dr. Furterer is an author or coauthor of several textbooks and journal articles, and has authored refereed conference papers and proceedings publications on systems engineering, Lean Six Sigma, process improvement, operational excellence, and engineering education. Her research interests include Lean Six Sigma, quality management, and systems engineering applied to service industries.