Revolutionizing IT: The Art of Using Information Technology Effectively [1 ed.] 9780471250418, 0471250414

Warning! This book contains easy-to-understand ideas and observations that will change the way you think about the manag


English, 269 pages, 2002



Revolutionizing IT THE ART OF USING INFORMATION TECHNOLOGY EFFECTIVELY

David H. Andrews Kenneth R. Johnson

John Wiley & Sons, Inc.


Copyright © 2002 by Andrews Consulting Group. All rights reserved. Published by John Wiley & Sons, Inc., Hoboken, N.J. Published simultaneously in Canada.

No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning or otherwise, except as permitted under Sections 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, 222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax (978) 750-4744. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748-6011, fax (201) 748-6008, e-mail: [email protected].

Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional where appropriate. Neither the publisher nor author shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.

This publication is designed to provide accurate and authoritative information in regard to the subject matter covered. It is sold with the understanding that the publisher is not engaged in rendering professional services. If professional advice or other expert assistance is required, the services of a competent professional person should be sought.

Trademarks: BEA is a trademark of BEA Systems, Inc. IBM® is a registered trademark of International Business Machines Corporation. Intel® is a registered trademark of Intel Corporation. Microsoft® is a registered trademark of Microsoft Corporation. Netscape® is a registered trademark of Netscape Communications Corporation. Oracle® is a registered trademark of Oracle Corporation. Seagull is a trademark of Seagull. Sun Microsystems® is a registered trademark of Sun Microsystems, Inc. The RITE Approach is a trademark of Andrews Consulting Group. All other trademarks are the property of their respective owners.

ISBN 0-471-25041-4

Printed in the United States of America.

10 9 8 7 6 5 4 3 2 1

This book is dedicated to the extraordinary group of people, past and present, who have made Andrews Consulting Group the unique institution that it has become. Thanks to you all.

Contents

About the Authors  xiii
Preface  xvii
Acknowledgments  xxi
Introduction  xxv

Chapter 1: The Nature of Projects  1
  The Race  2
  Terminology  3
  Principles and Methodologies  4
  The Improvement of Business Processes  5
  The Classic View of Project Management Theory  7
  Classic Theory Limitations  9
  The Methodology Evolution  10
  Summary  11

Chapter 2: In the Beginning  13
  How Projects Are Typically Initiated  14
  The Virus That Causes Scope Growth  17
  Resource Limitations  18
  Management Misunderstanding  19
  Did Lewis and Clark Succeed or Fail?  21
  Summary: Early Stage Problems  22

Chapter 3: The Anatomy of a Project  25
  Mega-Multi Manufacturing  25
  Drowning under the Waterfall  30
  Unrealistic Assumption #1: The Environment Will Remain Stable during the Project  31
  Unrealistic Assumption #2: End Users Can Define, in Advance, Exactly What Will Be Needed  32
  Unrealistic Assumption #3: Complex Problems Can Be Solved Completely on the First Attempt  33
  Unrealistic Assumption #4: Requirements Can Be Precisely Defined before Packaged Software Is Selected  34
  Unrealistic Assumption #5: Users Will Cheerfully Accept Changes in Their Work Environment  35
  A Second Look at Mega-Multi  35
  The Evils of Scope Creep  36
  Why Has the Waterfall Lasted This Long?  37
  Summary: Why So Many Projects Fail  38

Chapter 4: The Elements of Success  41
  Common Characteristics  41
  Strong Leadership  41
  Time Control  42
  Resource Limitation  42
  Scope Control  43
  Staged Evolution  43
  Concept Recycling  43
  Why the Largest IT Project Ever Undertaken Succeeded  44
  A Seemingly Simple Success Story: Bass Pro  45
  Emergency Behavior  47
  Case History: Fort Knox Fails, but Silverlake Shines  48
  Summary  50

Chapter 5: How to View Projects  51
  What Is Success?  51
  Why Good Plans Are So Hard to Create  53
  The Patton/Lombardi Effect  53
  Building a Mental Picture  54
  More Realistic Assumptions  55
  Following Mother Nature  56
  A Different View of Projects  57
  Better Measures of Success  58
  What Project Teams Need to Do  60
  Summary  61

Chapter 6: Deciding What to Do  63
  Defining Specific Project Goals  65
  Organizing Task Forces  66
  Practical Advice  67
  Mega-Multi Manufacturing Revisited  68
  A Quick Review  71
  Summary  72

Chapter 7: Controlling Project Scope  75
  Let Time Determine Scope  78
  Control the Use of Resources  81
  Limit the Size of the Design Team  83
  Gauge Your Ability to Absorb Change  84
  Imitate, Don’t Invent  85
  Create a Single Point of Accountability  87
  Management’s Role in Scope Control  87
  Bite-Sized Pieces  88
  Case History: The Birth of the PC  90
  Summary  91

Chapter 8: Who Is Accountable?  93
  Who Should Be Accountable?  96
  IT Accountability  99
  Rewarding Project Teams  101
  Summary  102

Chapter 9: Using Packaged Software  105
  The Role of Packaged Software  105
  Package Selection the Traditional Way  106
  Another Adventure for Mega-Multi  107
  What Is Wrong with the Traditional Approach?  109
  The RITE Approach  110
  Getting Help Selecting and Installing Packages  114
  Package Selection Using the RITE Approach  115
  Case History: Mississippi Chemical  117
  Summary  118

Chapter 10: The Balancing Act  121
  The Need for Balance  122
  Questions to Ask  122
  More Speed, Scotty  123
  Planning versus Doing  126
  Delivering Benefits  128
  Disruption: How Much Is Too Much?  129
  Controlling Risk  131
  Summary  133

Chapter 11: Using Outsiders Wisely  135
  The Business of Providing IT Assistance  137
  The 1-5-10 Business Model  137
  Other Business Models  139
  The Rate Issue  140
  Who Assumes the Risk?  142
  A Better Model for Obtaining IT Project Assistance  144
  Making the Right Choice  145
  Summary  146

Chapter 12: Managing IT Professionals  149
  What Makes IT Professionals Different  151
  Motivation and Job Satisfaction  152
  Career Management  153
  Managers and Leaders  154
  Marginal Performers  155
  High-Potential Employees  155
  Creating Ideal Assignments  157
  The Causes of Turnover  157
  Summary  158

Chapter 13: Management of Projects  161
  The Project Scheduling Dilemma  162
  Why Most Schedules Are Not Met  163
  A Different Approach to Scheduling  165
  Using Two-Tier Schedules to Defeat Parkinson’s Law  167
  Case History: Sealectro Replaces Its Applications  168
  Summary  170

Chapter 14: Building Software Yourself  173
  The Role of Management  174
  The Development Challenge  175
  Another Visit to the Waterfall  177
  The Planning Stage of Development  178
  The Need for a Balanced Development Plan  179
  Reviewing and Approving the Project Plan  180
  The Plan of Record  181
  Thoughts about Prototyping  181
  The Design Stage of Development  182
  The Coding Stage of Development  185
  The Test Stage of Development  186
  The Delivery Stage of Development  188
  The Service Stage of Development  189
  Building Quality into Software  191
  The Use of In-Process Metrics  191
  The Underlying Tool Base  192
  Some Parting Thoughts  194
  Summary  195

Chapter 15: The Changing World of Software Development  197
  The Trend toward Objects  199
  Promoting Reusability  199
  Building Applications from Objects  200
  Searching for the Perfect Object  201
  The Unfulfilled Promise  202
  Follow the Leader  202
  Java Perks Up  203
  The Unified Process  204
  Use Case Methodology  204
  The Unified Modeling Language (UML)  205
  The Rational Unified Process (RUP)  206
  The Agile Movement  206
  Summary  209

Chapter 16: A Review of the RITE Approach  211
  What Often Goes Wrong  212
  The Reasons for Failure  213
  What Sometimes Goes Right  214
  More Realistic Assumptions  214
  The RITE Way to View Projects  215
  A New Way to Define Success  216
  Scope Control  216
  Accountability  218
  Imitate, Don’t Invent  219
  Favor Packaged Software  219
  Using Outside Resources  221
  Managing IT Professionals  222
  Project Scheduling  223
  Developing Custom Software  224
  Final Thoughts  226

References  227
Bibliography  229
Index  233

About the Authors

David H. Andrews

For over 35 years David Andrews has been involved in putting information technology to practical use. He joined New Jersey Bell Telephone in 1967 as the first person hired within the computer department under a fast-track management development program. Within two years he was asked to manage a project that involved the replacement of 25,000 electromechanical Teletype machines with a packet switching network. Andrews also developed the concept for and managed the design of an automated service order processing system that was later adopted by a number of other telephone companies.

At Citibank Andrews managed a number of projects that helped pioneer the use of information technology in personal banking. He was brought into Timex as Director of Systems in order to rescue the largest project the company had ever undertaken. That project was eventually completed a year ahead of schedule and more than $1 million under budget. As IT Director at Branson Ultrasonics, Andrews oversaw the replacement and upgrading of all internal applications.

In 1984 Andrews founded Andrews Consulting Group. The first client engagement, at Sealectro Corporation, also involved the replacement of all applications. This project was completed six months ahead of schedule even though the scope of the effort had increased significantly while it was under way. Since then hundreds of other organizations have benefited from the advice and assistance of Andrews Consulting Group.

In 1987 Andrews published an industry report detailing IBM’s plans for a new computer with the code name Silverlake. The predictions made in that and subsequent reports about what became the AS/400 line of computers turned out to be more accurate than those from any other source. In the following 15 years, Andrews helped write more than 50 additional information technology industry reports on a wide variety of topics. More than a million copies of these reports are in circulation.

Andrews has made hundreds of presentations at seminars and industry events throughout the world. His opinions are frequently sought by trade and general business publications. He lives in Connecticut with Janet, his wife of 35 years, and divides his spare time between gardening and golfing.

Kenneth R. Johnson

A veteran of 32 years in the computer industry, Ken Johnson spent 28 years with IBM Corporation helping to develop industry-leading software products. Known within IBM as a premier software development expert and executive, he was often sought out to evaluate development opportunities and perform internal development audits.

In the late 1970s, Johnson was one of the lead designers for CICS/MVS in IBM’s Hursley, England, laboratory. CICS has endured as one of the most popular and successful software products in the history of the IT industry.

Johnson spent most of the 1980s in IBM’s Rochester, Minnesota, laboratory, where he was one of the original design managers of the AS/400. Over the course of a number of assignments, he managed the development of most of the different parts of its unique operating system. Johnson was a key member of the management team that earned the 1990 Malcolm Baldrige Award for the outstanding quality of the AS/400. He also served as Director of Development for Technology Projects in the AS/400 division and managed the development of the Network Station, IBM’s thin client personal computer.


Johnson joined Andrews Consulting Group in 1998 and founded the Technology Solutions practice area. He has been a frequent speaker at user groups including SHARE, GUIDE, and COMMON. He and his wife, Adria, reside in Rochester, Minnesota. When not building software, he likes to play bass guitar in a 1960s rock band, skate on an amateur hockey team, and help his sons run a professional video production company.

Andrews Consulting Group

Andrews Consulting Group (ACG) was founded in 1984 by David Andrews. It became one of the earliest IT service companies to specialize in replacing mainframes with more cost-effective midrange systems. During the 1980s and 1990s more than 300 businesses used ACG to help “rightsize” their IT environments.

Over the years Andrews Consulting Group has broadened its expertise as the firm has grown. Current areas of specialization include project management, deployment of enterprise resource planning (ERP) software, development of business intelligence solutions, networking, and design and development of custom software. The professional staff of nearly 100 people works out of offices in Connecticut, New York, and Minnesota.

Andrews Consulting Group is known for the high level of experience of its professional staff, the speed with which it delivers positive results, and the cost effectiveness of the services it provides. Much of this can be attributed to the application of the concepts described in this book.

More information about Andrews Consulting Group may be obtained at the company Web site (www.andrewscg.com). The company headquarters are at 700 West Johnson Avenue, Cheshire, Connecticut 06410. The telephone number is 800-775-4261 or 203-271-1300.

Preface

There is widespread agreement that information technology (IT) has the potential to dramatically change the way we work and live. At the same time, almost everyone believes that IT has so far failed to live up to its promise. When organizations attempt to use IT, more things seem to go wrong than right. Is the problem with the technology or the way in which organizations put IT to use?

A growing number of organizations get excellent returns on the investments they make in IT. Do they know something that the less fortunate majority do not? We think they do, even though few of them could articulate exactly what it is. Offering a clear and nontechnical explanation as to why certain IT projects succeed while many others fail is the goal of this book. We will also offer a concise set of principles to apply that will improve the odds of success.

It was not clear what to call this set of ideas and observations. Revolutionizing IT was chosen because the point of view offered differs significantly from traditional ways of thinking. A more precise title would be Evolutionizing IT, but the compelling logic behind that name does not become clear until the book is read. We chose to make a bold statement in the title to get the attention of everyone who is frustrated with how difficult it is to turn the potential of IT into concrete benefits.

If your job involves improving the way information is dealt with, then this book was written for you. If you are looking for a book to take to the beach that will transport your mind to a place of mystery, adventure, or romance, you will have to look elsewhere. We have worked hard to offer compelling new ideas in an interesting and understandable form, but the subject remains a serious one—a new approach to undertaking IT projects.

The Frustration of Managing Information Technology

The IT function can seem like an island where the natives speak a strange tongue. Suspicion abounds between them and those on the mainland. Management understands the importance of this unique function but is unsure exactly how to maximize the return on the steadily growing cost.

Most executives have a set of principles that they apply when managing each function. For example, they understand the sales dynamics in their industry—what motivates customers to buy, how to develop incentives for salespeople, and what to say when an important customer is unhappy. Good managers also understand the economics of their business—what products or services make the most profit, which costs must be carefully controlled, and where to look when results do not meet expectations.

When overseeing the IT function, however, frustration sets in. The arcane details of the technology are hard to follow, as is the jargon used by those who understand it. Many executives are reluctant to admit how confused they really feel. A common strategy is to let the people who seem to understand technology make decisions about it.

The need for those who oversee complex organizations to be good at managing the use of IT is a relatively recent development. Before the mid-1990s, IT was widely treated as an annoying necessity. The impending arrival of January 1, 2000, helped organizations realize how completely dependent they had become. The mid-1990s was also the point when the extraordinary opportunities offered by the Internet became obvious to everyone. A period of Internet euphoria came and went as a new century dawned. It left behind an understanding that IT matters a great deal but that using it effectively is far from automatic. Fortunately, it has become possible to improve the odds of success dramatically by adopting a new approach to IT projects.


Explaining the new line of thinking in plain, nontechnical language is our mission. In addition, we intend to:

■ Explain why making effective use of information technology is so difficult.
■ Explore the organizational dynamics that are the root cause of most problems.
■ Remove some of the mystery from IT.
■ Be easy and entertaining to read.

This book is not about the mechanics of project management—countless books already cover that subject well. Instead, it offers profoundly simple principles to apply when managing technology. It is based on the belief that managers do not have to become experts in technology to make effective use of it.

Like it or not, your organization spends a great deal of money managing information. The actual amount spent is probably far larger than you think. Spending this money wisely has become essential for success. Fortunately, a proven set of easy-to-understand concepts can be applied that will increase the return on that investment.

Who Should Read This Book

The primary audience for this book is managers and knowledge workers within organizations that depend heavily on IT. Those who make decisions for such organizations will benefit the most. In appreciation of the value of their time, the writing offers useful advice as clearly and concisely as possible.

Even though the writing is intentionally nontechnical, IT professionals from computer operators to chief information officers (CIOs) will find value in understanding the line of thinking presented here. The most common reaction of those already exposed to these ideas has been for them to offer a story from their own personal experience that illustrates one or more of the points made. This experience has given us great confidence that the ideas offered here will be well accepted.


At the same time, an attack on the status quo is certain to provoke some negative reaction. The suggestions offered here will not be welcomed everywhere. For example, the business models of some IT service providers will be weakened if their clients fully adopt the proposed philosophy. Likewise, others whose power and prestige might be threatened may also choose to disagree strongly.

Those who are intrigued by our intentionally bold assertions are welcome to read on and discover more. Whatever your background, this subject is for you. Those looking to escape from reality while lying on a beach will hopefully find something suitable for their needs elsewhere.

Acknowledgments

The ideas and observations that form the foundation for this book were collected during the past 35 years—many of them long enough ago that I cannot recall the exact sources. This has made it impossible to give appropriate credit for their contributions to all of the people who have influenced and helped to shape the thinking presented.

An early draft of the book was created in 1999 to help Andrews Consulting Group clients better understand the philosophy behind our approach to projects. The positive reaction to that draft and the five that followed gave us confidence that the point of view being offered was valuable. Clients and advisors who provided useful feedback included Ken Weirman of Keystone Foods, ERP expert Tom Gunn, distribution expert and former GM executive Hank Capro, Don Janezic of R.C. Bigelow, Andy Hughes of Mississippi Chemical, and Hugh Cushing of QIS.

ACG was launched in 1984 as a grand experiment. Could the unique principles that had worked in four businesses form the basis for a successful IT service practice? Our first client, Sealectro Corporation, was willing to allow us to try this new approach as its staff undertook to replace the company’s entire suite of applications. For this exceptional act of faith we will be forever indebted to Nick Mihalas, who was then president of Sealectro.

Over the years many coworkers contributed invaluable insight, including Steve Aufderheide, Justin Adinolfi, George Herchenroether, Nick Ioli, Bob Maxwell, Tom Murray, and Bob Salka. The talented management team at Andrews Consulting Group, including Jim Louys, Doug Corpuel, and Al Remediani, made it possible to take considerable time away from daily business to complete this work. The ACG internal review committee also provided highly useful insight and input.

Special thanks go to Dan Relihan, a project manager of exceptional skill, for providing a number of outstanding ideas. Lee Kroon, the leader of the Andrews Consulting Group’s Industry Analysis Group, provided highly useful input on both content and writing style.

Since 1986 Margie Mertz has been turning rough drafts and presentation materials into polished works for Andrews Consulting Group. Her patience and tireless effort were invaluable at every stage of the evolution of this book.

Finally, the understanding and good humor of my wife, Janet, were especially important during the book’s long gestation period.

DAVE ANDREWS

No one produces a book without help. I want to express my appreciation to several people, without whom I could not have contributed to this effort.

I want to thank my wife, Adria, for her encouragement and for putting up with the additional time commitment in such a good-natured way.

Bryan Wakat of Andrews Consulting Group deserves special mention for his effort in reviewing and editing the entire work, especially the software methodology sections. His patience in helping us all better understand the latest thinking in development methodology was exceptional.

Finally, I could not have contributed to this book without the 30 years of software development experience that have made up my career. During that time I have learned much from many professional people with whom I worked in IBM’s development laboratories. I am particularly indebted to my fellow members of the senior management team in the Rochester, Minnesota, laboratory. Our collaborative experiences over many years played a key role in shaping my judgment and opinions. I could never list them all, but I clearly remember and am grateful for the lessons I learned from Dave Schleicher, Dick Sulack, Dave Ness, Judy Kinsey, Rod Morlock, Keith Fisher, Bob Chappuis, Don VanRyn, Barbara McLane, Denny Moritz, and Greg Caucutt.

KEN JOHNSON

Introduction

Fifty years ago large numbers of people died needlessly from heart attacks. They did not know that eating a low-fat diet, avoiding smoking, and exercising could reduce the risk of heart disease.

Today, large numbers of organizations are suffering needlessly as well. They pour a great deal of money, time, and energy into information technology projects that fail to deliver the benefits that were used to justify them. A high percentage of these failures are preventable if the right measures are taken—the corporate equivalent of giving up cheeseburgers and Camels and joining the gym.

Until a few years ago, the fact that a high percentage of IT projects went awry was unfortunate but tolerable. Computers were often considered a necessary annoyance. If it took a few tries to get them working properly, the organization could successfully grind along a little longer without them. But the era when IT did not matter a great deal is now over. Being able to take advantage of all that technology has to offer has become an essential ingredient for success in almost every industry. Knowing how to prevent the common disease that strikes half or more of the IT projects that are undertaken has therefore become critical.

Fortunately, the nature of “IT project disease” is becoming apparent along with effective strategies for its prevention. Offering a new view of the nature of this affliction and how to deal with it is the goal of this book.

Medieval doctors unknowingly did many things that worsened the conditions they tried to treat. Today, the community of IT professionals often inadvertently does the same thing by following approaches that have been widely used for decades but that do not work well in practice. Once a disease is understood, it is possible to develop effective treatments. In this case, there is still no certain cure, but there is a proven approach that will improve the odds of success dramatically.

Hundreds of books have been written on the subject of managing information technology. They typically describe in detail the mechanics of how to organize and carry out improvement projects. Most bemoan the historically low rate of success and suggest that organization and discipline are among the cures. A number of them do an outstanding job of providing advice to those who manage complex projects, especially when sophisticated new software needs to be created. However, there is a gaping hole that this large body of work has failed to fill. Too little is available for decision makers to read that helps them understand how to avoid the common pitfalls.

Experts on IT project management have turned out a massive amount of material in the past decade, much of it containing outstanding observations and sophisticated suggestions. Very little of it has been offered in a form useful to those who are not full-time IT professionals. The focus of much of the recent advanced thinking has been on the problems faced by large-scale projects, especially those involving hundreds of people. Too little good advice is available to guide the much larger number of organizations that undertake projects of a more modest size. This is especially true when the project revolves around the deployment of packaged software.

The Purpose of This Book

This book is a collection of easy-to-understand suggestions about how organizations can manage IT more profitably. The unifying theme is that it is necessary to undertake projects in what seems like an unusual way in order to improve the odds of success. It is about management, not the esoteric details of the technology itself. Those who feel intimidated by the technology jargon that increasingly creeps into everyday conversations need not be concerned.

The observations and ideas offered here were collected over a period of 35 years. The most valuable have come from interaction with people struggling with the challenge of making effective use of IT. The postmortem analysis of successes and failures has also been a major source of input. At the same time, this book is based on subjective opinions and is not the result of rigorous research or surveys.

It has proven impossible to determine the exact originators of each of the ideas described in this book. Different people came to similar conclusions at approximately the same time, making it difficult to give the proper credit to each of them. We do not claim to be the first to discover any of the truths or ideas articulated here; rather, our role is to organize and present them in a coherent form. It is our hope that we have met our objective of describing them in a clear and useful manner that people within ordinary organizations, especially those of moderate size, can apply.

The concrete suggestions offered here are more than just theories. They have been the foundation of an IT service business, now called Andrews Consulting Group, since its inception in 1984. Numerous examples involving Andrews Consulting Group clients have been included. Clients who have been exposed to these concepts have chosen to use them in different ways. A few have adopted the entire philosophy, while many more have chosen to use selected elements.

The initial reason for writing this book was to help our clients and prospects understand the hard lessons we have learned. Drafts of the book have been successfully used for this purpose for almost two years. The reaction of those reading drafts was so positive that a decision was made to make these ideas available to everyone.

The ideas offered here represent a work in progress that is constantly being updated and improved. It is our hope that publishing them will trigger broader discussion and lead to further refinement and expansion. A Web site has been established at www.riteapproach.com to facilitate this process.
Many people will want to quickly understand the essence of our point of view. A concise summary of the ideas in this book has therefore been included in Chapter 16. Those anxious to get to the bottom line quickly are invited to read that chapter first.

As we offer our thoughts to a broader audience, it is also our goal to make the presentation interesting and sometimes amusing. We believe that a serious book about important business issues should not make the reader’s brain hurt. Having some fun does not imply that the subject is unimportant.

Is Another New Acronym Needed?

We have decided to give the set of observations, ideas, and practical advice assembled in this book a name. Doing so has made writing about the various ideas presented here simpler since they can be referred to collectively. Hopefully, others will find it useful to use the term to refer to a new approach to projects that incorporates the best of many different techniques.

The full name of this book is Revolutionizing IT: The Art of Using Information Technology Effectively. We have boiled that down into Revolutionizing IT Effectively so that we can call our ideas the RITE Approach. The RITE Approach is an integrated collection of observations and principles designed to provide guidelines to those who make decisions regarding IT projects. It addresses fundamental issues that include:

■ How organizations should think about, organize, and carry out projects.
■ Who should be accountable for success.
■ What it is reasonable to expect.
■ How success should be defined.
■ Why limits need to be set.

The RITE Approach includes many suggestions about how to undertake projects, but it is not itself a methodology. Hopefully, the RITE Approach will become embedded in numerous formal methodologies as time passes.

The RITE Approach differs from the most commonly used approaches in a number of important ways. It demands that organizations and their senior management:

■ Adopt a new way of thinking about IT projects.
■ Use new definitions and measurements for success.
■ Learn a new pattern of behavior.
■ Understand and accept practical limitations.
■ Assign accountability fully and carefully.
■ Pay more attention during the formative stages of projects.
■ Think differently about schedules.

As the RITE Approach and the philosophy behind it unfold, it will become clear that a number of simple principles form its foundation. Some of the most powerful of the RITE Approach concepts are:

■ Let time determine project scope.
■ Success is a never-ending succession of incremental improvements.
■ Projects should mimic evolution.
■ Where possible, imitate and don’t invent.
■ Demand progress, not perfection.
■ Make end users fully accountable.

The wisdom of these simple thoughts will become apparent as the reasoning behind them is developed.

Chapter One

The Nature of Projects

Today it is commonplace for people to exchange messages via e-mail, carry cell phones everywhere, trade securities online, and rapidly obtain a vast amount of information whenever any important decision needs to be made. As recently as the early 1990s, few people were doing any of these things. Advances in information technology (IT) made all of this and much more possible.

The pace of technology-driven change is not likely to slow. Improvements in microprocessor speed, storage density, software capability, and communications bandwidth are arriving at an increasingly faster rate. Each cycle of innovation opens up possibilities for either improving existing products and services or inventing new ones that were previously impractical. Putting technology to effective use has thus become an important challenge for the management of most organizations. It is a task that too many decision makers feel awkward attempting to perform.

The option of paying little attention to IT is no longer available. Expensive Web sites need to be created and continually upgraded. Massive amounts of data have to be collected and digested. Sophisticated software needs to be built into everyday products. Vendors and buyers can no longer build information systems in isolation—they must be linked together to form electronic supply chains.

Advancing technology is now creating opportunities for improvement at a faster rate than businesses can recognize and exploit them. This is because there is a large gap between recognizing an opportunity and taking advantage of it. The bridge across that gap is a project.

The Race

IT projects have earned a reputation over the past 30 years of being expensive, time consuming, and prone to fail. Countless research studies have documented that this conventional wisdom is correct—half or more of the IT projects undertaken do fail. The definition of failure varies with the study, but usually includes project abandonment as well as cost overruns, missed schedules, and the delivery of less benefit than expected.

There is evidence that, while still high, the rate of failure has slowly begun to drop because every year the techniques used to manage projects are refined and improved. At the same time, the challenges that project managers face also grow every year. A relentless increase in the complexity of the technology being employed is the most obvious reason why. A less apparent cause is also present—the accelerating rate of change that organizations everywhere undergo.

Great progress has been made in recent years in the management of software development. A large amount of high-quality research is published every year on the subject. Much of this research focuses on the problems faced by those who oversee the creation of large-scale software systems, since their needs are especially pressing. As a result, most of the debate about how to manage projects is being carried out within the software development community. There is nothing wrong with this except that an important point of view is missing—that of the people for whose benefit projects are undertaken.

In recent years, executives have been inundated with advice about the growing importance of information technology, the Internet, and how to use them for competitive advantage. Too little of the advice directed at executives has dealt with how to manage the projects that make this happen. One of the many things that the Internet has made obsolete is the way in which information technology projects are undertaken.


An ongoing race thus continues between the forces that are making IT projects inherently harder to manage and the widespread use of techniques to improve the odds of success. At the moment, the greatest opportunity for further improvement comes through the adoption of a new set of management principles. Before offering these principles, it is necessary to establish exactly what is meant by the various terms that will be used.

Terminology

Organizations face a never-ending series of problems and opportunities that involve the processing of information. An angry customer, unexpected expenses, or the chance to bid on a large contract are examples of common issues that arise. Much of the energy of management is spent handling these situations. Occasionally a larger effort is needed, such as when a new employee profit sharing plan is to be implemented. We will refer to these larger efforts as IT projects, but only when:

■ A formal process is used to launch and oversee them.
■ A number of people are involved over a period of a month or longer.
■ Software is likely to play a major role in resolving the issue.
■ The goal is to put a working improvement in place.

Minor changes or enhancements to existing software applications carried out by small numbers of people over a matter of weeks will be referred to as system maintenance. The management of system maintenance is an important subject, but one outside the scope of this book.

There will be times when systems of great complexity are created, such as an airline reservation system. Normally such systems are not built all at once but evolve over time. The individual efforts that create specific workable parts of the ultimate system will be called projects. The systems created through a series of related projects will be called products. The software created during a specific project will be called a release of that software. Releases are usually given numbers as follow-on projects create enhancements.

The efforts to evaluate a problem, create a plan, or design a solution are stages within a project and not projects themselves. For an undertaking to be considered a project, something concrete and usable needs to be put in place at the end. We will attempt to use widely accepted names for the various stages through which projects normally proceed. During the planning and design stages, a determination will be made as to exactly what the project will attempt to accomplish. This will be referred to as the scope of the project.

It is becoming increasingly common to create software whose primary purpose is to test the validity of a concept before too much effort is invested in development. Such software will be referred to here as prototype software, with the assumption that it will normally be disposed of after completion of the test.

Complex software usually needs to be developed and tested in pieces. When an incomplete part of a release is given to users for testing, it will be called an increment. If the process used to create these increments is repetitive through the stages of development, it will be called an iterative process.

The result of almost all IT projects is a change in the way some function is performed. These will be referred to as changes in the business process. We recognize that not all organizations that undertake IT projects are businesses; some are government agencies or not-for-profit entities. The term business process will be used, however, since the concept of business process reengineering has become so popular in recent years.
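To make the vocabulary concrete, the relationships among products, releases, and increments can be sketched in code. This is purely our illustration, not the book’s: the class names and the sample reservation-system data are invented for the example.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Increment:
    """An incomplete but testable part of a release given to users."""
    description: str

@dataclass
class Release:
    """The software produced by one project; releases are numbered."""
    number: str
    increments: List[Increment] = field(default_factory=list)

@dataclass
class Product:
    """A system that evolves through a series of related projects."""
    name: str
    releases: List[Release] = field(default_factory=list)

# A complex system such as an airline reservation system is not built
# all at once: each project delivers one numbered release, and each
# release is assembled from user-tested increments.
reservations = Product("Reservation System")
release_1 = Release("1.0", increments=[Increment("flight search"),
                                       Increment("seat booking")])
reservations.releases.append(release_1)
```

Under this reading, an iterative process is simply the loop that produces the increments of a release one after another, with user feedback between passes.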

Principles and Methodologies

The principles that make up the RITE Approach alone are not enough to manage IT projects. One or more software development methodologies must also be used. Management principles do not take the place of good development practices. The RITE Approach principles depend on adherence to a strong, disciplined methodology.

A good methodology creates the environment in which developers or packaged software implementers do their work. It determines the development lifecycle through which a project passes from start to finish. It determines the stages of the cycle, such as planning, design, implementation, and test. The methodology provides the tools and standards that programmers and testers use. Most importantly, it determines the lines of communication and the interactions of everyone on the project team.

Each of the classic and modern methodologies has something of value to offer. The choice should depend on the nature of the project, the philosophy of the organization’s management, and the skills of the project team. The subject of methodologies and their application will be covered in Chapter 15.

The Improvement of Business Processes

The motivation for launching an IT project can come from the need to fix a problem, from the desire to exploit an opportunity for improvement, or from the chance to put a technological advance to use. In all of these cases, the result will be the same—value will be created if the result of the project is a permanent improvement in the way a function is performed. This simple but profound point is worthy of emphasis: IT projects create value through business process improvements. This thought must be kept in mind as project proposals are evaluated. Some of the questions that need to be answered include:

■ Exactly how will this project change the way the organization functions?
■ What benefits will the change bring?
■ Are other options available to achieve the same benefit?

It is too easy for a project to become an exercise in the application of technology. When this occurs, most of the available attention is focused on the mechanics of assembling the necessary software. This can lead to a long and expensive effort to create new software when the business process could have been improved more rapidly and effectively without new technology.

6

Revolutionizing IT

By definition, all IT projects involve the use of software. Most often this means a complex combination of operating systems, middleware, development tools, packaged applications, and code created specifically for the project at hand. It is very easy to lose sight of the purpose of what is being developed when caught up in the mechanics of assembling all these pieces.

Unlike human logic, software is maddeningly consistent. Given identical circumstances, software will react exactly the same way every time. This characteristic makes it an excellent vehicle for making a business process consistent and for enforcing discipline. Software is becoming so pervasive and comprehensive that in some cases a business process is largely defined by the software that supports it. Stated briefly, the software defines the process. Checking in for an airline flight provides a good example. Nearly every step in this business process is monitored and controlled by the software that drives the local terminals, making it possible for airlines to undertake a major business process change, such as offering electronic tickets, largely by rewriting the underlying software.

The most critical part of any IT project is determining exactly how to improve the business process, not the creation of the software. This obvious truth is easy to lose sight of because the development of the software is usually the most expensive, time consuming, and complex part of the project. IT professionals have a natural inclination to focus most of their attention on the facet of the project with which they are most comfortable—the mechanics of software development. When projects are led by IT professionals, this propensity can contribute to failure without any apparent sign that anything has been done wrong. In recent years, the way in which software is developed has become extraordinarily complex.
It is therefore essential for a significant amount of time and expertise to be expended in overseeing the technical aspects of development. There is a tendency for this legitimate need to overshadow the determination of exactly what it is that the software should be doing.

Technically oriented developers find the early stage of projects frustrating. They prefer things that are concrete, stable, and logical. Deciding exactly what a system should do is usually messy and inexact. The most rewarding part for developers is turning a precise specification into a workable system. This is where their technical virtuosity can flourish. The inability of end users to quickly and precisely define what should be built is a source of endless frustration, as is the tendency for specifications to constantly change. Developers yearn to determine with certainty exactly how the new business process should work and then get to work creating a sophisticated body of software that carries out the required function. It is necessary to adopt an approach that helps them accept this as unrealistic and that limits their ability to pressure users to commit firmly to a new business process that has not yet been proven to work.

Project teams usually present proposed project plans along with a list of the assumptions on which the plans have been made. It is the norm for some of these assumptions to be unrealistic. A common example is the assumption that the requested amount of the time of key users will be available as design details are finalized. The validity of a project plan is largely determined by the assumptions on which it is based. The inclusion of a long list of specific assumptions in a project proposal should therefore represent a red flag for those reviewing it. Experienced project teams frequently use this mechanism to make a plan that contains many risks appear more certain and attractive than it is.

When projects are led by development professionals or outside advisory firms, there can also be a temptation to bully the user community into agreeing to a design that is technologically elegant. The propensity of IT people to take control when they are put in charge is not the problem. They impose a necessary degree of order. The risk is that the priority may shift from making the optimal improvement in the business process itself to the creation of elegant software.

The Classic View of Project Management Theory

By 1970 the creation of computer software had become quite complex. It was no longer practical for a few programmers to sit together and informally knock out the code needed to make computers do what was required. As a result, formal system development methodologies began to emerge.

Figure 1.1: The Waterfall Development Cycle
(Problem Definition → System Design → System Creation → Test → Deployment → Maintenance)

The first generation of formal IT methodologies was viewed as a logical series of steps carried out one after the other. When a project schedule was created, the resulting graphic looked a little like a waterfall, as illustrated in Figure 1.1. The term waterfall has thus become the generic name for the many different project management methodologies that were built around this way of viewing projects. As a replacement for ad hoc approaches, the waterfall methodologies represented a huge step forward as the first effective way to introduce discipline.

The waterfall view of projects is simple. Projects are born when an organization identifies a problem to be solved or sees an opportunity to be exploited. People are assigned to define what the project will attempt to accomplish. A project plan is developed and presented to management for approval along with an estimate of the cost, resources, and time required and the benefits that will come from its completion.

Once approved, the project plan forms the starting point for the design of the new systems. The normal way to create this design is for IT professionals to interview people familiar with the affected functions. The resulting requirements become the foundation for the detailed design of custom software or the selection of packaged software. The formal approval of the organizations that will use the new system is then obtained.

During the next stage of the project, the design is turned into a working system. A large number of subtasks are usually involved, including selecting and making use of development tools and aids; creation of new code by programmers; testing; development of documentation; acquisition of hardware, system software, and communications components; and step-by-step integration of the various building blocks.

A number of activities normally occur in parallel with the development of the technological building blocks of the new system. Plans are formulated for making the transition to the new system, end users are trained, and support processes are tested. When everything is working properly, the new system is put into production. After some documentation and tidying up, the project is considered complete.

The waterfall view of how projects should unfold is orderly and tidy. Over time, however, experience has shown that it is not a perfect model of the real world. A predictable set of organizational dynamics that come into play every time a project is initiated is not fully accounted for in the waterfall-based methodologies. Still, these methodologies remain useful because they insist that an orderly process is employed, which ensures that the necessary steps in development are carried out in a logical order. All of the techniques that refine and improve the waterfall approach borrow heavily from its underlying concepts.
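The single-pass structure of the waterfall can be contrasted with an iterative cycle in a short sketch. This is our own illustration, not the book’s; the toy flaw counter simply assumes that users report fewer problems after each increment.

```python
STAGES = ["problem definition", "system design", "system creation",
          "test", "deployment", "maintenance"]

def waterfall(stages):
    # One pass: each stage is completed exactly once, in order,
    # with no opportunity to revisit earlier decisions.
    return [f"done: {s}" for s in stages]

def iterative(build_and_test, max_iterations=5):
    # Repeat a short build-and-test cycle; stop when users
    # report no remaining flaws in the latest increment.
    history = []
    for i in range(1, max_iterations + 1):
        flaws = build_and_test(i)
        history.append((i, flaws))
        if flaws == 0:
            break
    return history

# Toy flaw counter: each increment resolves one of the three
# problems users found in the previous one.
log = iterative(lambda i: max(0, 3 - i))
```

The point of the contrast is that the iterative loop surfaces design flaws while they are still cheap to fix, whereas the waterfall discovers them only after the single test stage near the end.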

Classic Theory Limitations

Kung fu films, grand operas, and romance novels each follow a formula that those familiar with the genre expect. The martial arts master prevails against overwhelming odds, the soprano dies tragically, and soul mates unite after a number of unfortunate circumstances. In a similar way, IT projects also follow predictable patterns. Any approach to projects that does not take these patterns of organizational behavior into consideration is doomed to failure.


Examples of the organizational dynamics that need to be considered include:

■ Problems turn out to be more complex than originally thought.
■ Adequate time is rarely available from people needed to carry out project plans.
■ Designs contain subtle flaws that cannot be discovered until the system is operational.
■ Unanticipated changes undermine the assumptions on which the project plan was built.
■ People whose support is necessary resist change.

The waterfall-based approaches to IT projects do not adequately deal with these issues. The predictable outcome is the high project failure rate that has been so well documented. In many cases, the individuals involved in failed projects have taken the blame. It is assumed that the methodology in use would have worked if only they had done a better job of following it.

The Methodology Evolution

It has slowly become commonplace for those involved in projects to make their own alterations to the official methodology in use. Quite frequently this is done quietly by savvy managers who instinctively understand that something different has to be done to succeed. As time has passed, many of the successful techniques used by these pioneers have been incorporated into “modified waterfall” methodologies.

The Standish Group, a respected IT research firm, has done an extensive evaluation of IT projects called CHAOS (Standish Group International, 2001). One of the key findings of that work was that the success rate of IT projects in the United States increased slowly during the 1990s. It is likely that the evolution of improved development methodologies was responsible for much of this improvement. Unfortunately, the rate of success has progressed from being horrible to simply bad. An enormous amount of room for improvement remains.


Since the waterfall-based methodologies were introduced, the programmer’s world has changed dramatically. The invention of object-oriented programming has allowed the creation of software building blocks with much more reusable function than was previously possible. The Internet has introduced new programming standards that in turn promote interoperability (exchange of data and applications) on a worldwide basis. The steep decline in the cost of computing power has increased the relative value of the time of programmers. Countless tools to make them more productive have emerged.

Most of the new thinking about how to manage IT projects has come from elite software development professionals. The ideas they developed have proven to be quite effective when used by large organizations to create sophisticated custom software. Even though the best ideas could also be applied to projects undertaken by more typical organizations, to date this has not tended to happen.

The 1990s brought advances in both very thorough methodologies such as the Rational Unified Process (RUP) and lightweight approaches such as Extreme Programming (known as XP to its advocates). Taken together, the current methodologies offer a wide range of choices to meet the particular needs of any project and the skill level of any organization. The RITE Approach is consistent with much of the thinking behind RUP and the “Agile” methodologies, a more complete analysis of which is included in Chapter 15. The RITE Approach has been shaped to apply to projects of all sizes, including those that use packaged software.

One of the first principles of the RITE Approach is that the fate of a project can be determined to a large degree in its early stages. The best way to begin to develop the RITE Approach line of thinking is therefore to start at the beginning.

Summary

The most important points to take away from this chapter are:

■ Techniques for managing IT development efforts keep improving, but so do the challenges. As a result, the failure rate remains high.
■ The most important reason for undertaking IT projects is to improve the way a business process is performed.
■ IT projects need to follow a disciplined methodology, but at the moment there are serious flaws in the methodologies most commonly used.
■ The fate of projects can be largely determined in the very early stages.

Chapter Two

In the Beginning

The way an organization handles the earliest stages of projects has an enormous impact on their outcome. Many projects are doomed to almost certain failure even before they have officially begun because the attitudes, assumptions, and approach used are seriously flawed. The culture of each organization includes its way of approaching problems and opportunities for improvement. Too often that approach sets up projects for failure. Success can never be assured early on, but it is easy to set the stage for failure when projects are in a formative stage.

This principle applies to organizations of all sizes and not only businesses. A highly visible example would be the attempt by the Clinton administration to overhaul the U.S. health care system in 1993. Most observers blamed the failure of this initiative on obvious factors such as partisan politics, the skill of individuals involved, and the lobbying of powerful industry groups. A more complete understanding of the failure would take into consideration the way the problem was approached. When systems become massively large and complex, it is nearly impossible to totally overhaul them. The only practical option is to make incremental improvements. The Clintons failed to see this. They set out to completely redesign the way health care was delivered and paid for. Once this overly ambitious approach to the problem was adopted, the failure of the effort was virtually assured.

The early stage of projects is critical for many reasons:

■ Goals are formulated at this point.
■ The definition of success is established.
■ Fundamental mistakes become very hard to fix.

The high rate of failure among Internet start-up companies can be directly attributed to fatal flaws in their original business plans. Most were doomed to certain failure before foolish investors poured millions of dollars into them.

Classic project management theory offers limited help when it comes to the opening stages of a project. Many methodologies fail to provide practical guidance for dealing with the challenges projects face early on. Examples of these typical real-world problems include:

■ The mandate given to the project team is vague and often contradictory.
■ People with necessary skills and knowledge are unavailable.
■ Executives are unable to make an appropriate amount of their time available.
■ Vital information cannot be obtained in time or at a reasonable cost.
■ The organization cannot absorb change at the rate needed.
■ The nature of the problems being worked on changes before the solution can be put in place.
■ The plan cannot offer the level of precision that management wants before approval.

Methodologies cannot make these issues go away. They must instead be structured to take them into account. The best way to understand how is to start by taking a more in-depth look at the dynamics of projects in their early stages.

How Projects Are Typically Initiated

The original seed from which most projects germinate is a line of thinking in one person’s brain. That person can be anyone from the



CEO of a $100 billion conglomerate to the third-shift custodian in a small privately owned business. As the original idea is communicated to others, it grows and is reshaped. Most early stage ideas either die young or are combined with other thoughts to form a more viable concept.

Ideas that originate with a senior person gain momentum quickly. Others need to be brought to management to obtain agreement to spend time developing the idea further. In either case, what is said at the start by the most senior people involved has a profound impact on the resulting effort. Executives who play their role effectively can improve the odds of success enormously. Doing so does not require giving precise, detailed orders; simply asking the right set of questions will often suffice. The well-timed icy stare is also a tool many executives know how to use effectively.

The first enemy of project success is excessive scope. As you will see, a series of natural forces pulls all projects toward increasing complexity. The best defense against these forces is for management to learn a new mode of behavior that revolves around establishing a few simple ground rules, asking the right questions, and setting realistic expectations.

Projects start to take shape when people are assigned to work on them. A critical juncture occurs at this point as two very different types of people start working together. The first type is people from the parts of the organization that will make use of the resulting system. The role they are normally assigned is to provide a detailed understanding of the environment in which the proposed system will operate. These people tend to have a limited understanding of information technology. Many of them strongly distrust it.

The second type of person assigned to projects is the IT professional. Such people come from the internal IT department, are hired as contract workers, or are provided by an outside consulting firm.
Ideally they bring both knowledge of technology and an understanding of the business processes that will be impacted by the project. The best of them are highly intelligent and confident. These desirable traits can make them seem arrogant to those from the user community. A team made up of these two very different types of individuals must agree on what will be proposed to management. This is much harder and more critical than it appears to be. One reason is the


common belief that projects have an inherent size from the beginning. In reality, the range of possible ways in which an issue can be resolved is extremely broad. Expecting a team made up of people with very different views to select the best choice from within this range can be unrealistic. For example, the mandate from management might be to find ways to increase the productivity of the sales force. Viable options for achieving this goal could range from having the CEO send an inspirational e-mail message to spending $10 million providing salespeople with a custom-developed sales automation system running on top-of-the-line laptops. In between these extremes lies a broad spectrum of alternatives. Finding the choice that best balances cost, risk, time, resources, and benefits is never easy. Formal methodologies too rarely deal with the practical problem of making these decisions.

Projects sometimes originate outside of the part of the organization that will feel the greatest impact. For example, a survey might determine that buyers are annoyed at the time it takes to obtain replacement parts. The sales executive demands that the problem be fixed, but it is the purchasing department that needs to create and operate the resulting system.

Obtaining the knowledge of those who will feel the greatest impact is essential, but extracting that knowledge is never easy. Those who best understand the existing process are usually attached to it emotionally. The idea that wholesale changes in their work environment are a likely outcome of the project under consideration frightens them. The ultimate end users of proposed new systems have two conflicting forces pulling at them. They are motivated to request everything they ever might want when helping create system specifications; however, they also have a desire to limit disruptive changes in their work environment.
Consequently, it is not unusual for the hidden agendas of these people to strongly influence the way in which they articulate requirements for the new system. Many individuals will appear to be cooperating fully while hoping that the project will fail or be abandoned. This challenge has existed as long as there have been IT projects. For example, New Jersey Bell Telephone Company faced this problem back in 1969, when the company wanted to introduce a


computer-based voice response system to provide callers with the new number when one was changed. The new system would eliminate the jobs of the operators then performing this function. Obtaining their cooperation in developing plans for the new system was a delicate problem. At that time, the company was in a position to promise that everyone would be retrained and no one would become unemployed. Today, such an option is not always available.

Another challenge is that the people who best understand how things currently work are the ones who have the least time to spare. As a result, when participation in the project is sought, the people whose time is offered can be poorly qualified to help. When a project is large enough to warrant full-time participation by people from the user community, it is not uncommon for their management to use the project as a means of disposing of individuals who do not currently fit in well. Such people can cause enormous harm to the project.

The people who bring technical expertise to the project often have the opposite agenda from those from the end user community. IT professionals rarely are motivated to maintain the status quo. They like the thought of putting their stamp on the creation of something better. In their minds, the more ambitious the change the better. As a general rule, the more deeply technical their skills are, the more they will favor a plan that will allow them to fully exercise those skills.

The outcome of the early definition stage of projects thus depends greatly on personalities and the relative influence of the conflicting forces. The challenge for management is to see that an appropriate balance is struck. A plan that is too timid may remove all of the value from the effort, while an overly bold approach increases cost, risk, and the time required to achieve benefits.

The Virus That Causes Scope Growth

There is a natural propensity for the factors outlined in the previous section to combine to encourage uncontrolled growth in the scope of the project. This tendency is magnified by the normal management practice of dividing the accountability for success of the project between leaders of the two very different factions described.


Those representing the end user community are encouraged to be comprehensive when defining their needs. They fear being criticized later for leaving something important out of the specifications they create. It is rare for them to be given a strong motivation to hold back. This leads directly to the creation of highly ambitious system requirements. If their role is confined to defining their needs, then it is hard to blame them after the fact for doing so.

The IT professionals involved in projects are usually assigned the role of translating the requirements provided by the users into the design of a system that will solve the problem as defined. They have no natural motivation to challenge requests that come from users for two reasons. The first is a genuine desire to be as helpful as possible. The second and more powerful motivation is the lust for challenge. The result is an upward spiraling of scope as the user desire for functionality feeds the IT professional’s need for challenge and excitement.

The issue of accountability is important enough to warrant a complete discussion in Chapter 8. A comprehensive analysis of the problem and its solution will be presented there. At this point, it is only necessary to accept that if management does not set the stage properly at the outset of a project by assigning accountability correctly, then another powerful force will be left in place that will encourage scope growth.

Resource Limitations

During the 1990s, it became a competitive necessity to use technology to solve problems and exploit opportunities, but finding the time to do so became more difficult. Recent years have seen competition become stronger, margins tighter, buyers more demanding, staffs smaller, and market conditions more changeable. Problems now arise more frequently than they used to, and when they do they are usually more difficult to solve. Within most organizations today there is precious little capacity to undertake efforts that are not related to the daily operation of the entity. Tactical issues consume much more time and energy in spite of all of the productivity-enhancing technology that surrounds us—cell phones, laptops, personal digital assistants (PDAs), e-mail,


voice mail, and the Internet. When it is a struggle to keep the existing processes working, finding time to use technology to improve them becomes extremely difficult. Problems severe enough to demand immediate attention are arising at an ever-increasing rate. Undertaking an IT project is frequently the correct response to a problem, but finding the resources that are needed to do so is always a challenge. Matters are made worse because the way in which projects are initiated reflects a past period when time and resources were more plentiful.

Organizations today have a limited capacity to undertake improvement efforts. At any point in time, more projects will exist that could provide a justifiable return on their cost than the organization will have the practical capacity to undertake, making it imperative to undertake only the most important projects. More importantly, it is essential that approved projects be prevented from spinning out of control and burning up an excessive amount of the limited capacity for improvement.

The capacity to undertake improvement projects is a precious asset that needs to be budgeted and controlled just as carefully as cash, capital spending, the time of salespeople, or overhead expenses. Good managers are accustomed to exercising control over traditionally scarce resources. The need to budget and control the capacity for improvement has now become just as important.

Management Misunderstanding

Projects don’t go awry only because of the idiosyncrasies of users and IT professionals. The behavior of senior management also has much to do with it. The ways in which senior executives can unintentionally contribute to the failure of projects from the start include:

■ The expectation of perfection. Good managers are demanding. They insist that things be done right the first time and hate to hear excuses. These normally valuable traits can cause harm during the formative stages of projects if they inhibit communication. Any worthwhile project will present challenges to which there are no certain answers. If the attitude of management is


that a precise plan must be formulated and carried out flawlessly, then the project team will protect itself by including unrealistic assumptions in the plan.
■ Unclear instructions. It is a near-universal trait for people within an organization to overanalyze the things said by those above them in the hierarchy. When the statements made by senior people are terse and ambiguous, it is common for them to be misunderstood.
■ Too grand a mandate. Senior management gets paid to think big—to see the forest and not just the nearby trees. This can lead to setting mandates for IT projects that attempt to reach too far. A key reason for the high failure rate of the dot-com businesses in recent years was that they undertook to do too much. The unrealistic expectations of investors encouraged them to load too much on the plate.
■ Oversimplification. The way many executives cope with mind-numbing complexity is to simplify the way they think about things. This worthwhile trait can get in the way when IT projects are involved. The problem is that when software is being bought, set up, or created, close attention must be paid to even the most minute details. This means that project teams cannot just look at the problems they are attempting to solve at a high level. When executives use high-level thinking to come to conclusions, it is imperative that the conclusions be tested to ensure they remain valid at the detail level. Too often senior people do not have the time or inclination to understand the details and are unwilling to trust the judgment of those who do.
■ Refusal to make trade-offs. Only management can properly make the difficult trade-off choices that always need to be made. Those who refuse to do so ensure failure.
■ Presentation style. Management approval is usually obtained through formal reports and presentations. The culture of many organizations calls for these presentations to be simple, concise, unambiguous, and delivered with great confidence.
This often makes it impractical to communicate the level of uncertainty and risk that really exists. No one benefits if the project team pretends that the plan being presented is certain to succeed.


An example from 200 years ago might help illustrate how important it is to look at projects in a little different light than normal.

Did Lewis and Clark Succeed or Fail?

One of the most important projects in U.S. history was the effort by Meriwether Lewis and William Clark in 1803 to find a Northwest Passage. Historians all seem to agree that this expedition had an enormous influence on the formation of the United States as we know it today.

Using the standards that some executives would apply to modern-day projects, the expedition was a complete failure. It took longer than scheduled, cost more than planned, and failed to meet its primary objective—to discover a waterway that led to the Pacific Ocean.

Before Lewis and Clark started their project, they had only a vague idea of what lay ahead. They engaged in meticulous planning but could not reasonably be expected to provide a detailed plan for their effort before actually carrying it out. The whole purpose of the project was to explore the unknown. Its real goal was to gather information that could be put to valuable use later. By this definition, the project was wildly successful in that it significantly accelerated the westward expansion of the United States.

This obvious example makes a simple point—it is impossible to create and stick to a plan when entering uncharted territory. Ambitious IT projects are often adventures in exploration. When this is the case, a plan is needed at the start, but so is the freedom to change direction when necessary. The best way to undertake such higher-risk projects is to break them into pieces of a manageable size, each of which provides a reasonable return. It is also necessary to resist the desire of modern-day technical Lewis and Clark wannabes to risk everything on a single glorious attempt to build the software equivalent of the Northwest Passage.

Lewis and Clark did not “discover” anything not already known to Native American tribes. They simply uncovered and organized intelligence of enormous value to their organization.
In this spirit, certain projects need to be structured as organized voyages of discovery and not precisely planned trips to a known destination.


Summary: Early Stage Problems

The litany of problems outlined here was not assembled to discourage those who need to undertake IT projects but to provide the foundation for a more realistic approach. A brief review of some of the points developed so far might help before building the case for a new way of dealing with them. The many factors that need to be taken into consideration during the early stages of a project include the following:

■ Those who best understand the existing processes cannot envision the best way to replace them.
■ Developers lean toward creation of elegant software that will challenge their skills.
■ Outside advisors can have a bias toward long projects that use lots of their expertise.
■ Dividing project accountability by assigning separate roles to users and IT reinforces the tendency for project scope to grow uncontrollably.
■ Management people lack the time and understanding of the details to make all the decisions.
■ Organizations can only absorb change at a limited rate. Too few view the capacity to change as a precious asset that must be allocated and managed carefully.
■ The application of traditional management techniques can have unintended side effects.
■ Projects can focus on the mechanics of using technology instead of business process improvement.

As if this list were not long enough, there are other factors that have not yet been examined that will be explored in later chapters:

■ Problems refuse to remain stable while being solved.
■ Plans degrade over time as the assumptions on which they were built erode.
■ The information needed to approve projects is rarely available until after they are under way.


■ Once a project has been launched, it is difficult to change its direction.
■ It is never clear when to stop designing, since few things exist in isolation.

Fortunately, a set of techniques is included in the RITE Approach that can limit the negative impact of this long list of factors. The RITE Approach techniques revolve around:

■ Adopting a realistic view of what is possible.
■ Breaking complex problems into pieces of a manageable size.
■ Controlling project time and resources.
■ Limiting the number of new concepts being introduced at one time.
■ Recycling proven concepts whenever possible.
■ Letting the time and resources determine the project scope.
■ Establishing a single point of accountability.
■ Rethinking how project schedules are set and managed.

Don’t be discouraged if it is unclear at this point exactly how these techniques solve the problems outlined. Each will be explained in some detail in later chapters.

Chapter Three

The Anatomy of a Project

The easiest way to understand why information technology projects so often get off track is to look closely at the dynamics of a specific project. Chapter 2 looked at the critical early stages of a project. This chapter examines a complete project. A fictional example will be used, not because there is a shortage of real-life examples, but to avoid embarrassing anyone. This example represents a composite of the real-life experiences of a number of organizations.

Mega-Multi Manufacturing

Mega-Multi Manufacturing Company spends $10 million per year on IT, 2 percent of its annual revenues of approximately $500 million. The underlying systems tend to be adequate but outdated.

At one point, senior management became unhappy with the time it took to get information about the previous month’s results. The controller was therefore given the task of coming up with a better approach. Since the accounting systems had been automated for many years, the first thing the controller did was meet with the CIO. They agreed that the existing accounting systems that created the management reports were not very effective. These were classic legacy systems that had evolved over the past 20 years, and no one had


ever found the time to bring them completely up to date. Instead they had been continually patched and enhanced by a permanent staff of technical maintenance personnel.

A project was therefore launched with the goals of shortening the accounting closing and creating more detailed and meaningful information for management. The consulting firm Pricey, Dellaye, and Thensome (PDT) was engaged to help out because the existing IT staff was too busy with other projects and maintenance work.

PDT’s big selling point was its Niagara Method of project management. The Niagara Method is a highly structured process that has been refined and improved over the past 25 years. Twelve thick notebooks are needed to describe it all. It is the first thing that new recruits to PDT are taught. Appearing to deviate from it can be fatal to a young consultant’s career.

Under the Niagara Method, projects are broken down into a series of stages that include problem definition, system design, system creation, test, deployment, and maintenance. The conceptual structure behind Niagara is simple: Agree on what problems the system will attack, create a plan to solve them, buy or build what is necessary, test it, and then put it in place. If all goes well, the project team can disband shortly after implementation, and a few people can be left around to handle routine support and minor changes.

A PDT partner met with the controller to kick off the project, and both of them appointed a project manager. The PDT project manager was given overall responsibility for the effort. The accounting manager, who was appointed by the controller as company project leader, was responsible for coordination of resources within the company.

After six weeks of research and interviews, a 40-page document was completed that described, in detail, all of the problems with the existing closing process and the systems that supported it. The cost of this effort was $60,000.
Management was then asked to approve the next stage of the project, which would propose solutions to all of the problems that had been identified. It was hard to disagree with any of the carefully documented problems, so approval was given.

A team of five consultants was brought in for the system design phase, which was scheduled to take four months and cost $300,000


in consulting fees. The accounting manager also lined up a committee of people from different parts of the company, including the internal IT staff, to help define exactly what was needed and how it was to be created.

The first task was to further refine the definition of requirements. Committee members were encouraged to make sure that everything they would need in the new systems was included so that nothing would need to be added later. This turned out to be a difficult task because problems always seem to be intertwined, making it difficult to know where to stop.

For example, a critical part of the closing process involved valuation of inventory. The systems that supported this function were seriously outdated. A related problem was that the warehouse did not use bar codes. In addition, the accounting for scrap and rework was not as tight as it should be. Therefore, the exact amount of usable inventory was not easy to determine. Monthly closings were often delayed in order to make appropriate adjustments.

It was not easy to decide if the project plan should include a permanent fix to the inventory accuracy problem. Without replacing the old inventory system, it would be difficult to meet all the requirements for the improved closing. On the other hand, inventory management was one module of a massive old set of legacy applications that could not be replaced one at a time.

The debate over inventory accuracy raised a much broader question. Should the project be expanded to include the implementation of a complete new enterprise resource planning (ERP) system? This was the approach the PDT consultants recommended strongly. After considerable debate, however, their suggestion was rejected since it would represent a tremendous increase in cost and time required.

Because of the long debate over ERP, it took a month longer than planned to complete the system design phase.
The final report included a carefully researched recommendation for the implementation of a complete suite of financial systems. It also suggested a series of subprojects to reengineer many of the financial and business processes. The implementation plan called for four months to be spent on software selection and the final definition of enhancements. The proposed plan called for 14 months of effort during which the new software package and all the enhancements would be


installed in stages. The schedule took into consideration the need to implement some of the new software modules at the start of a new fiscal year. Ten PDT consultants would be needed at a cost of $3 million. The software packages plus hardware and middleware were estimated to cost a total of $1 million more. Not counting the value of the time of internal resources, the total project cost would be over $4 million, including what had been spent so far. The benefits still appeared to be significant, but they would largely come more than two years after management first began to discuss the problem.

After a few weeks of review, the project was approved. Management had reservations about the time and cost but did not want to stop the effort entirely.

For the first few months everything seemed to go fine. A request for proposal was created and sent to 10 application package vendors. Based on the responses, three finalists were selected. None of them were able to meet every one of the critical requirements. The package that the users liked the most required a database management system that the IT staff did not like and lacked the expertise to support. The consultants lobbied hard for a package that they had a great deal of experience installing. Unfortunately, it was the one that would require the greatest level of modification. After a long debate, the package the consultants preferred won out.

External events also had an impact on the project. During the package evaluation effort, the company had to fight off a class action lawsuit arising from some unusual fluctuations in the company stock price during the previous year. Most of the users on the selection committee had to cut back on work on the project to help resolve the lawsuit. Within a few months the suit was successfully settled. During that time, the project fell behind schedule and more consulting resources were needed to make up for the lost time.
Six weeks behind schedule, the selection phase was done. Those in management felt they had little choice but to approve continuation of the project.

The package implementation effort went quite well for a number of months. The software was purchased and set up in a conference room along with a number of PC workstations where users could come in and experiment with it. During this effort, it became clear that the majority of PCs in use would not have adequate power


to effectively run the new applications. Some workers did not even have PCs (they used old “dumb” terminals). Since the new software was a client/server system, every user needed a relatively powerful PC. As a result, a new subproject was started to determine what had to be done to upgrade the PC infrastructure.

Four months into the package implementation phase, management decided to undertake the acquisition of a similar business. The controller, the accounting manager, and much of the staff needed to invest a great deal of time helping in negotiations and due diligence. At first this had a limited impact on the project. It soon became clear, however, that a decision would be needed as to how to integrate the accounting systems of the two companies. The business being acquired had a quite effective set of financial systems in use. This was one of many reasons why it was attractive. The project was therefore put on hold in order to determine what to do.

The arguments were long and passionate. The company being acquired used accounting software written for Intel-based servers. Mega-Multi had standardized on Unix computers. Should people in the new division be forced to abandon a system that they knew and liked in order to remain compatible with the new owner? After much study and debate, a compromise was reached in which the new division would keep its software for now and create special links to the new corporate systems. The entire effort caused enough lost time that the year-end date began to appear in jeopardy. The cost of all this was, of course, never included in the original project budget.

This was of concern because cash had become tight as a result of the acquisition. A decision was made to bring in additional consulting help and to continue installation of critical applications (including a new chart of accounts and general ledger) in time for the start of the next fiscal year.
Unfortunately, it was necessary to postpone completion of some elements of the new system for a few months. This included the new reporting system that would provide management with the additional information it wanted badly.

After 28 months of effort, the new accounting systems were fully operational. The closing process had been reduced from 10 to 4 days as a direct result. Management had the set of new reports that had been carefully defined during the planning stages. Unfortunately, the combination of the acquisition and the usual personnel


changes that occur with the passage of time meant that management now really wanted something quite different. Making matters worse, a new set of government regulations had also been issued since the system requirements had originally been defined.

The PDT consultants left after more than two years of work, having billed over $4 million, a third more than the original plan. The entire project had taken much longer than expected and had run more than $1 million over budget, not counting the need to invest in new PCs and the value of the time of Mega-Multi people.

The PDT consultants felt great about the project since they had completed what they set out to do. It was not their fault that there had been a lawsuit and an acquisition, as well as regulation and management changes. They were hoping to use Mega-Multi as a reference and to convince management to hire them for a new project to upgrade the manufacturing and distribution systems.

The controller and accounting manager certainly could not be held accountable for events beyond their control that caused the project to take longer than planned, cost more, and fail to deliver exactly what was needed. If anything, they had handled the problems that arose very effectively.

Should everyone have been pleased with the result? Could anything have been done differently?

Drowning under the Waterfall

The Mega-Multi Manufacturing example is fictional but quite typical of the way major projects unfold. It illustrates many of the inherent pitfalls in the traditional way of managing projects. In theory, the approach that was used sounded very logical. In practice, it did not work because of unrealistic assumptions that included the following:

1. The environment will remain stable during the project.
2. End users can define, in advance, exactly what will be needed.
3. Complex problems can be solved completely on the first attempt.
4. Requirements can be precisely defined before packaged software is selected.
5. Users will cheerfully accept changes in their work environment.


These assumptions take an overly simplistic view of how humans behave and the capabilities of the ordinary people that work in real organizations. Each of them needs to be examined.

Unrealistic Assumption #1: The Environment Will Remain Stable during the Project

The waterfall-oriented methodology that PDT employed made a hidden assumption that the business environment would remain stable while the project was under way. The underlying logic behind the approach was that the consultants would define the problem, develop a solution, and then figure out how long it should take to put in place. This is a logical way to attack problems in a highly stable environment where there is time to make the planned solution work before it becomes obsolete. But in today’s world, stability is rarely a characteristic of the environment. How long has it been since your organization experienced a long period of stability?

Change has become a constant. It comes in many forms—management turnover, product introduction, new sources of competition, government regulation, economic cycles, lawsuits, natural disasters, and even terrorism. In the Mega-Multi example, one source of change was an opportunity—the acquisition. One of the most common sources of change is technology itself. The effective use of new technology by a competitor is an increasingly common source of change. Ironically, the technology projects that help create great instability are often managed under the assumption that the company’s own environment will remain stable.

The root cause of many information system failures is the hidden assumption of a stable environment. It is hidden because it is built into a process that focuses on creating an optimal solution to the problems that have been defined. The process takes time into consideration only when a known event is scheduled to occur. It does not take into consideration that as time passes, the probability increases that an unplanned disruptive event will occur.

Waterfall-based methodologies define the scope of the project and let it determine the amount of time and resources needed.
This seemingly logical assumption is the waterfall’s most flagrant flaw.
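The point that elapsed time compounds risk can be made concrete with simple arithmetic. In the sketch below, the 5 percent monthly chance of a disruptive event and the independence of months are invented assumptions for illustration, not figures from the text:

```python
def disruption_probability(months, monthly_rate=0.05):
    """Chance that at least one unplanned disruptive event occurs
    during a project, assuming each month independently carries a
    fixed monthly_rate of disruption (an illustrative assumption)."""
    return 1 - (1 - monthly_rate) ** months

# A short project faces far less environmental risk than a long one:
short_risk = disruption_probability(3)    # about 0.14
long_risk = disruption_probability(18)    # about 0.60
```

Under these toy numbers, stretching a project from one quarter to a year and a half roughly quadruples the odds that some external change derails it. That compounding exposure is exactly what the waterfall's stability assumption ignores.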


Unrealistic Assumption #2: End Users Can Define, in Advance, Exactly What Will Be Needed

The people who use existing systems are rarely able to envision the best way to improve them. If anything, an extensive understanding of an existing business process can make it difficult for people to see the benefits of an entirely new one.

The most commonly used approach to defining what is required is to ask the people who use a business process how it ought to be improved. Unfortunately, that assumes they are capable of making that determination. Most methodologies also ask users to formally approve a written design specification. There is an unspoken implication that it is the fault of the users if the specifications do not turn out to be perfect. Project managers are trained to repeatedly remind the users that this is their system and that the specification defines what they will ultimately have to live with. The resulting fear of making a mistake and leaving something important out is a frequent cause of delays in completing the design stage of the project. It also leads directly to increases in the project scope and complexity.

Experience has shown that the first attempt to design anything highly complex is rarely very good. This holds true even when experts create the design. Thomas Edison and his associates tried out countless strange ideas before coming up with a light bulb that actually worked. If experts are rarely right the first time, how can it be assumed that people who understand the way something currently works will have a gift for developing sound alternatives?

The conventional way engineers design physical objects such as cars, airplanes, or vacuum cleaners does not rely on having end users create and approve a technical specification. In most cases, a proposed design is developed and a prototype is built and then tested.
When these early prototypes are tested, everyone has a good laugh at the foolish mistakes that were made and goes back to work to fix them. After a few iterations of prototyping and testing, a product is put on the market. Customers then suggest ways to improve the product further. When Boeing or Airbus designs a new jet, typical passengers are interviewed. They are not, however, asked to oversee the creation of detailed technical specifications. The designers of airframes use sophisticated computer simulations to create virtual prototypes, which pass through numerous cycles of testing and improvement before a physical airplane is built.

It is impractical to develop business process improvements and the associated software using the approach utilized to design physical objects. Prototypes can be created to test business process improvements, but the cost and effort are often quite high. It is often difficult to be certain that the proposed design for a new system will handle all of the conditions it is likely to encounter.

The underlying structure of the waterfall approach to projects evolved in the 1970s. At the time, creating even relatively simple computer programs was highly time consuming and expensive. It made sense to put a great deal of thought into the design specification before devoting any of the precious programming resources to the project. Tools to help create prototypes did not exist then, nor did visual program development tools.

Over the past 25 years much has changed. Vast improvements have occurred in software development productivity. The ability to build software prototypes has improved dramatically, and a growing number of methodologies have appeared that incorporate iterative design ideas. It has also become increasingly common to use packaged software. These changes have occurred slowly over a long period of time. During that period, the popular development methodologies all kept undergoing refinement. A critical underlying assumption continues to be that the people who currently use the existing business process are capable of defining appropriate requirements for a better business process.
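The build-test-refine cycle described above can be sketched as a generic loop. Everything here is schematic: the build and evaluate functions, the feature names, and the iteration budget are invented stand-ins for whatever a real design team would use, not anything prescribed by the book.

```python
def refine_until_acceptable(build, evaluate, max_iterations=5):
    """Iterative design loop: build a prototype, test it, feed the
    defects found back into the next build, and stop when testing
    turns up nothing left to fix (or the iteration budget runs out)."""
    feedback = []
    for iteration in range(1, max_iterations + 1):
        prototype = build(feedback)
        defects = evaluate(prototype)
        if not defects:
            return prototype, iteration
        feedback = defects
    return prototype, max_iterations

# Toy example: a "design" is a set of features, and testing reports
# whichever required features are still missing.
required = {"integration", "checkout", "search"}

def build(missing):
    # First pass ships only one feature; later passes add back
    # whatever testing reported as missing.
    return {"integration"} | set(missing)

def evaluate(prototype):
    # "Testing" here just reports which required features are absent.
    return sorted(required - prototype)

product, rounds = refine_until_acceptable(build, evaluate)
```

The second pass incorporates the defects found in the first, so the toy design converges in two rounds. The waterfall, by contrast, bets everything on the first pass being right.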

Unrealistic Assumption #3: Complex Problems Can Be Solved Completely on the First Attempt

A project is a mechanism for solving problems and implementing improvements. Classic methodologies assume a linear process where the problem is defined, a plan is created, the solution is implemented, and victory is declared. Linear problem solving might be the norm on the planet Vulcan, the Star Trek creation where the inhabitants are exceedingly logical and orderly. Here on Earth we need an approach that takes our human qualities into consideration.


The hidden assumption behind a linear view of projects is that an appropriate solution can be thought up in one try. If the problem is simple enough, or if it is one for which a well-tested solution is already available, then this can be true. In most cases, however, taking a linear view is too simplistic.

History has shown most plans to be imperfect. The grander and more complex the plans, the less perfect they will be. Efforts that take a long time and that involve large numbers of people also are more inclined to go awry. The expectation that the project team will find an optimal solution and then smoothly put it in place actually provides encouragement for the project to take longer and become more complex than necessary.

Unrealistic Assumption #4: Requirements Can Be Precisely Defined before Packaged Software Is Selected

The basic logic behind most methodologies was created during a period when few application software packages were available. Nearly all software was custom coded by in-house programming staffs. An approach to the selection of packaged software was unnecessary.

The packaged software industry matured slowly over many years. The number, quality, and capability of packages have steadily increased. This evolution has been gradual enough that there has never been an obvious point at which it made sense to rethink the way packages are selected.

The cost of packaged applications is now often many times the cost of the hardware on which they run. In addition, it is commonplace to spend three to five times the cost of the package itself on outside services to assist in its installation. This means that the direct and indirect cost of packages is generally by far the largest item in the project budget.

The way in which packages are usually selected and implemented has remained slow and ineffective. It is commonplace for organizations to study, evaluate, and plan for a year or more before making a choice. Taking this much time is expensive. More importantly, it delays the arrival of the benefits that implementing the system will bring. Lengthy selections also waste a great deal of the time of the people involved.

The growing availability of packaged applications of high quality and capability was one of the most important developments in the IT industry in the 1990s. Those who select, install, and make effective use of these packages are able to gain a measurable competitive advantage over those who cannot do so. Using a process that is slow, politically charged, and ineffective to select packaged applications is a choice few organizations can now afford to make.
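The budget arithmetic behind that claim is easy to sketch. In this illustration the package price and hardware figure are invented numbers; the only input taken from the text is the rule of thumb that implementation services run three to five times the package price:

```python
def package_project_budget(package_price, services_multiple=4,
                           hardware_cost=100_000):
    """Rough project budget under the 3x-5x services rule of thumb.
    The default services_multiple and hardware_cost are illustrative
    assumptions, not figures from any real project."""
    services = package_price * services_multiple
    return {
        "package": package_price,
        "services": services,
        "hardware": hardware_cost,
        "total": package_price + services + hardware_cost,
    }

budget = package_project_budget(500_000)
# Package plus services account for 2,500,000 of the 2,600,000 total;
# the hardware line item is a rounding error by comparison.
```

Even with generous hardware assumptions, the package and its implementation services dominate the budget, which is why a slow, politically charged selection process is so expensive.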

Unrealistic Assumption #5: Users Will Cheerfully Accept Changes in Their Work Environment

An ancient Chinese curse is sometimes translated: “May you live in a time of great change.” It touches on a universal human trait—the tendency to fear and resist change. A modern version of the curse might be, “May you manage a project that causes great changes.” Those who have tried to do so understand how difficult such a task can be for everyone involved.

A common cause of failure is underestimating how much users of a new system will resist its deployment—not because it is bad, but because it is different. Individuals and organizations each have a unique level of tolerance for change. Successful project plans factor resistance to change into their design; projects managed in the traditional way often fail to do so.

Classic methodologies attempt to deal with this issue by creating a written specification for the new system. Those who will be using it are then asked to sign a document that confirms their approval. If there is resistance to the system after it is built, then the blame is laid at the feet of the users. In practice, this approach does little to solve the problem since users inclined to resist can seize on any minor imperfection as an excuse.

A Second Look at Mega-Multi

Armed with a new way of seeing things, let’s have another look at the project undertaken by Mega-Multi. In hindsight, a number of things could have been approached differently. Some of the problems included the following:


■ The project proceeded slowly, allowing time for changes in the environment to impact it.
■ Everything requested was included in the project plan.
■ The project evolved into a single large effort to solve many problems.
■ Packaged solutions had to be modified to meet the bloated requirements.
■ Project control was given to outsiders who did little to resist scope growth.
■ Accountability was divided between the users and the consultants.
■ The design team was too large and took too much time.

Mega-Multi’s management unwittingly allowed the scope of the project to grow large enough to become the cause of a number of subsequent problems. The managers were unable to prevent this because they did not understand the nature of the forces that create scope creep, nor did they appreciate how important it is to limit project scope.

The Evils of Scope Creep

As the size and complexity of a project increase, a number of bad things happen:

■ Greater scope means more time. This increases the chances that environmental changes will disrupt the effort, possibly even making it irrelevant.
■ Costs increase as complexity grows. A modest increase in complexity can create a large increase in cost.
■ Benefits arrive later. The value of the benefits that new systems will provide is diminished if they arrive in the distant future. It is very difficult to get people to put energy into an effort that will not provide near-term benefits.
■ More problems arise. The number of things that will go wrong increases with time and complexity.


■ Resistance increases. Organizations naturally resist change. As the complexity of a new system increases, it often means greater change for those who will use it. Their natural inclination to resist increases along with the scope of the project.
■ Resources are expended keeping the existing process working. When a new system is not planned to replace the existing one for some time, the temptation to invest time and money on interim fixes becomes impossible to resist.



It should be obvious that the first priority of any project team must be to put limits on what the team will attempt to accomplish. Often, fatal problems occur when this is not done.

Why Has the Waterfall Lasted This Long?

Traditional methodologies have a hidden bias toward excessive project scope. Why then do so many organizations continue to use them? There are three main reasons why these methodologies have survived for so many years in spite of poor results:

1. Structure and order. It is essential to use an organized and disciplined process to manage the development of new information systems. Waterfall methodologies provide this in a way that is easy to understand.
2. Some people prefer large projects. Being responsible for a large project can increase stature, improve a resume, and help justify an increase in pay. Increasing scope is good for service firms looking to put many people to work. It can also be good if you are a technical IT professional who is eager to deal with a highly challenging assignment.
3. Successful project managers instinctively reject the elements that do not work. Seasoned project managers rarely follow formal methodologies exactly. Lip service is given to the formal process, but the rules are broken to keep things moving. Success therefore occurs in spite of the process that is nominally in use, not because of it.


The RITE Approach aspires to remedy some of the weaknesses of traditional methodologies, not to completely replace them. Accepting the RITE point of view does not mean rejecting the need for a highly organized and rigorous process. What is needed is a disciplined process that is based on a realistic set of assumptions.

Summary: Why So Many Projects Fail

Projects often fail because they encourage a scope-building frenzy between the users and their advisors. Users are given the role of defining their needs. At this stage they are encouraged to dream and think big. They are also encouraged to think in terms of one giant leap forward that will solve all aspects of the problems at hand. In addition, they are told that they will be asked to sign a document describing all the problems they currently have and that they will be blamed if anything is left off. All of this motivates them to put more and more into the problem definition and specification.

At the same time, accountability for building a solution is usually given to the technical advisors (either internal IT people or outside consultants). They usually are helpful people who would like to please the users by giving them whatever they want. The best of them love technical challenges and thrive on complexity. They also are optimistic about what they will be able to accomplish.

Under this model, as scope begins to spiral upward, there is no one to blame. Users are doing what they have been told to do—identify all the problems they want solved without being concerned about the consequences. The IT advisors are accountable for designing and building solutions that meet the needs given to them by users. Is it their fault if the users keep coming up with more and more for them to do?

When packaged software is chosen, the inclination is to select the most complex, comprehensive, and expensive package because of a belief that it will solve the largest number of problems. There is also a tendency to encourage massive changes and customization.

Even when senior management senses that something is going wrong, the process makes it difficult to do anything about it. The reasons for including more and more in the project definition are meticulously documented as a part of the process, making it hard to dispute them.

Fortunately, there are proven techniques for breaking the cycle that causes projects to become too complex. Before describing them, it is appropriate to examine the dynamics of projects that do succeed.

Chapter Four

The Elements of Success

Common Characteristics

Not all information technology projects fail. If nothing ever worked, we would have given up long ago on computers. So why do certain IT projects fail miserably while others enjoy tremendous success? One way to find an answer is to look for the things that successful projects have in common.

Successful projects share certain characteristics. It is rare for all of them to be present, but the likelihood of success rises as the number increases. Six success characteristics stand out:

1. Strong leadership.
2. Time control.
3. Resource limitation.
4. Scope control.
5. Staged evolution.
6. Concept recycling.

Strong Leadership

Few would argue that the most important determinant of project success is quality of leadership. As the talent, motivation, and experience of project leaders increase, the probability of success rises as well. More than one person can participate in the leadership of a project—in many ways, the more the better. However, a single person must rise above the others and take accountability for overall success.

In many cases, a person with ideal qualifications to head up a project is not available. The most successful technique for dealing with this has been to allow the capability of the available leaders to help shape how ambitious the project becomes. Successful projects strike an appropriate balance between the talent, experience, and authority of the leadership and the scope of what is undertaken.

Time Control

Time is a project’s worst enemy. As more time elapses between approval and deployment of a working increment of the system, the number of things that can go wrong increases. Most successful projects are undertaken when there is a strong sense of urgency, often in the form of a deadline that cannot be missed.

It is customary for those working on an urgent project to complain that they are not being given enough time to do things properly. In reality, they should be thankful. Time pressure can be a great ally as long as it is possible to tailor the scope of what is undertaken to be achievable in the time allowed. Think back to situations where you were highly effective. Chances are excellent that you were working against an unreasonable time constraint. Athletes are often at their best late in a game when every second needs to be used wisely. It is an obvious characteristic of human nature that effectiveness increases when there is a sense of urgency. Successful projects find ways to control time as the most precious of resources.

Resource Limitation

Who should be involved in the initial development of a project plan? The natural tendency is to invite everyone with even a vague interest to become involved so as not to offend anyone. Avoiding this is one of the keys to increasing the odds for success.
The more people who are involved in designing something, the more complex it will become.
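This growth in complexity has a classic back-of-envelope model, often associated with Fred Brooks's The Mythical Man-Month: the number of pairwise communication channels among n contributors is n(n-1)/2, so coordination overhead grows quadratically with team size. The sketch below simply evaluates that formula; the team sizes are arbitrary examples, not figures from the book.

```python
def communication_channels(team_size):
    """Distinct pairwise communication paths in a team of the
    given size: n * (n - 1) / 2, which grows quadratically."""
    return team_size * (team_size - 1) // 2

# Doubling the design team far more than doubles the coordination load:
small_team = communication_channels(4)    # 6 paths
large_team = communication_channels(8)    # 28 paths
```

A designer working alone has zero such channels to maintain, which is one quantitative reason the smallest teams produce the least complicated initial concepts.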


The most successful projects start small. A limited number of people—often one—create the original concept in a limited period of time.

Scope Control

The many techniques that can be used to help limit the scope of projects will be explored in detail in later chapters. The exact choice of which ones are used is not as important as the result. By whatever means, as the scope of what is undertaken is limited, the probability of success rises.

Staged Evolution

The best way to combat complexity is to break what needs to be done into pieces of a manageable size. Almost all of the breakthroughs in the ways in which complex systems are created have centered around devising ways to do this. Successful projects follow an evolutionary path. Glorious attempts at a great leap forward in a single bound almost always fail. Those who invested in overly ambitious Internet start-up companies learned this lesson the hard way.

Concept Recycling

Successful projects invent as little as possible. This does not mean that innovation is unimportant; it means that true innovation should be reserved for the places where it matters most. A highly innovative system can be constructed largely from proven components where a very small number of totally new ideas have been included.

The most successful new systems innovate in ways that maximize external impact. When FedEx first offered it, Web-based package tracking by customers was a revolutionary concept. The underlying technology that accomplished this was an extension of the internal tracking system that had been evolving for years. The innovation involved marrying the proven internal systems with the Internet. Successful projects borrow ideas and building blocks from any source that is available and legal.


Why the Largest IT Project Ever Undertaken Succeeded

During the late 1990s, the IT industry faced its greatest challenge. By the end of 1999, software that had been developed over a 30-year period all needed to be checked to ensure that it would not fail when the year 2000 arrived. A fair amount of doomsday speculation filled the air as the date slowly approached. Surely, given the propensity of software development efforts to run behind schedule, any number of important applications would fail, causing untold misery.

When January 1, 2000, finally arrived, it was easy to see that the television newscasters were not happy. The number of Y2K computer failures that they were able to report was pitifully small. None of them resulted in anything more than a minor inconvenience for a handful of people.

Why was Y2K such a nonevent? Was there ever a real problem, or did the IT community suddenly figure out how to complete a project on time? The problem was indeed very real in that countless systems were certain to fail if action was not taken. The reason so few failures occurred was that unique circumstances allowed those projects to avoid the usual traps. The things that made the Y2K problem unique were:

■ The end date was fixed and nonnegotiable.
■ The definition of project success was clear and never changed.
■ The consequences of failure were dire.

The knowledge that the media and hordes of greedy lawyers were all waiting to pounce provided sufficient motivation for an appropriate number of projects to be launched. The exact scope of the project was known more than a decade in advance and never changed. The usual problems of scope creep and ever shifting requirements never occurred. In addition, the challenge was almost entirely a technical one. It was not necessary to constantly negotiate with a community of users over specifications.


An indirect form of scope creep did occur in that Y2K became the motivation to replace outdated software rather than spend money fixing it. Of the handful of failures, many occurred because projects of this kind ran late. Very few Y2K projects failed, however, because having a fixed date and dire consequences for failure significantly altered the normal project dynamics.

An important Y2K lesson is that IT professionals are very good at solving clearly defined technical problems. Failure nearly always results from factors that have little to do with making the actual technology work.

A Seemingly Simple Success Story: Bass Pro

To people who love the outdoors, visiting a Bass Pro Outdoor World store is more fun than catching their limit of fish in the first hour. Sometimes, however, it just isn’t convenient to hop in a truck and drive a few hundred miles to the nearest store. In those cases, selecting from the enormous inventory of items for hunting, fishing, camping, and the like can better be handled through the Internet. The basspro.com Web site provides that opportunity.

However, in early 2000, the site was in need of an upgrade. The primary goal of the project was to integrate and improve the Web site’s capability as much as possible before the Christmas season rush, due to start in only a matter of months. This decision to give time the highest priority had a profound impact on the project and was the most important reason for the project’s ultimate success. Giving time priority over functionality led to a powerful combination of decisions:

■ The most valuable feature—integration—was built first and implemented in less than two months.
■ A small team of highly experienced software developers handled the project.
■ Stable, proven building blocks formed the technological foundation.
■ The software was built around reusable objects using a highly iterative approach.


■ Building blocks were created one at a time, tested thoroughly, and then integrated.
■ Nonessential features were held until everything vital was built and tested.

Less than five months were needed before a complete new version of the site was ready for extensive acceptance testing by the users. By that time, the quality of the code was already very high, performance was excellent, and functionality was significantly improved from the previous site. The system was made fully operational on September 20, 2000. The total cost of the project was a tenth of a large consulting firm’s estimate. More importantly, the new software performed almost flawlessly, handling significantly more orders than forecast.

When a project goes this well, it is easy to assume that any fools could have pulled it off. To a limited degree this is true, as long as the fools were savvy enough to avoid getting in over their heads. The real lesson here is not about the details of what was done—countless projects have accomplished more technologically—but the traps that were avoided. The many ways in which this effort could have gone awry include:

■ Investing precious time and funding on a redesign of the site’s appearance.
■ Spending months in evaluation and requirements gathering.
■ Losing the opportunity to go live for the 2000 year end.
■ Attempting to make use of the latest in unproven tools and technology.
■ Allowing the project to become a training ground for a horde of bright young consultants.
■ Building one large, monolithic body of software.

The best project teams make what they do look easy. In effect, that is exactly what they do—use force of personality and good technique to ensure that what is created is easy enough for their organizations to build and put to use. This can make them appear to be less successful than their more flamboyant counterparts who expend more time, resources, and heartache attempting to build software Taj Mahals.

Emergency Behavior

Some of the most successful endeavors in history have come in response to emergencies. Normal patterns of behavior are somehow altered in these situations. Few projects were ever more urgent than the Manhattan Project during World War II, when the U.S. government was thrust into a race to develop a working atom bomb. It is hard to imagine an effort of that complexity and magnitude being successfully pulled off in less than four years in peacetime.

The same was true after the sinking of much of the Pacific Fleet at Pearl Harbor. Within nine months, almost all of the ships that had been sunk were back in service. Had the navy taken on a project of this magnitude in peacetime, it would have taken countless years to complete. The urgency created by a wartime atmosphere made it possible to raise these ships in a small fraction of the time that would otherwise be needed.

A more recent example involves the impact of a hurricane. Casa de Campo in the Dominican Republic is one of the largest resorts in the Caribbean islands. After the full force of Hurricane Georges hit it in September of 1998, not much was left. The roof of every building had been blown off, leaving the contents vulnerable to the rain and wind. La Romana, the city where most of the 4,000 employees lived, was also left without electricity or water for more than a month.

In less than four months, Casa de Campo reopened. Not all of the hotel rooms had been refurbished, but all of the important services had been restored. Dozens of buildings had been completely replaced. Had a project of this magnitude gone through the normal planning and implementation process, years would have gone by. Since so much was at stake, the employees all instantly pitched in to help. Waiters became carpenters and roofers. Caddies learned how to landscape. Everyone grabbed a paintbrush.

When projects are undertaken under emergency conditions, certain dynamics occur:


■ A strong leader takes control.
■ Plans are made quickly.
■ The objective becomes clear.
■ Everyone focuses on essentials.
■ The most important tasks are done first.
■ Perfection is not expected.
■ People blame the emergency, not each other.
■ Turf wars and other political behavior diminish.

When these dynamics come into play, extraordinary things tend to occur. The true potential of groups of people is unlocked. Energy, creativity, and goodwill abound. Projects undertaken under normal circumstances are often highly successful when they are able to reproduce the same dynamics that occur naturally during true emergencies.

It should not be surprising that the best techniques for improving project success revolve around creating some of these same dynamics. The challenge is doing so without the need to set the office complex on fire. An overused and ineffective management technique is to attempt to make every project into an emergency by edict. It does no good to live up to the Hollywood stereotype of senior business executives by ranting at the staff every time an assignment is given out. More effective alternatives are available that are easier on the lungs. False emergencies do not need to be invented, but some of the behavior that emergencies create must be fostered as IT systems are created.

Case History: Fort Knox Fails, but Silverlake Shines

The failure of an important project can create the emergency atmosphere needed to ensure the success of a follow-on effort. This was certainly the case with two projects in which coauthor Ken Johnson was heavily involved during his career at IBM. The first project had the highly ambitious goal of creating the ultimate minicomputer in response to the success of Digital Equipment Corporation (DEC) in the early 1980s. It was given the code name Fort Knox after the location of the U.S. government’s gold depository.


The goal of Fort Knox was to unify the hodgepodge of smaller IBM computers then on the market and put DEC and its imitators on the run. Following IBM’s then rigid development process, the specifications for creating this computer grew more complex as each day passed. By 1985 it became obvious that the development team was years away from a workable product, and the project was canceled.

Fort Knox was to be the replacement for three computers that were developed and manufactured in IBM’s Rochester, Minnesota, facility. Without it, the future of the more than 5,000 people who worked there looked bleak. From their standpoint, the cancellation of the Fort Knox project was an extreme emergency.

Out of the confusion, one of the computer industry’s most successful development projects emerged. The necessary leadership came from an unlikely source. A handful of highly technical system designers formed an ad hoc group on their own and rapidly pulled together a plan. The lab director gave them permission to develop their ideas further, but the ground rules were highly unusual:

■ The product they designed had to be on the market within two years.
■ As much as possible was to be salvaged from the Fort Knox wreckage.
■ A proof-of-concept prototype was needed.
■ The design was to be created by a small team in a matter of months.

In March of 1986, less than a year after the ad hoc team had formed itself, IBM’s top management approved over $1 billion in funding for a new computer with the code name Silverlake. At the time, it was the most expensive computer development project ever undertaken. The two-year time frame was also dramatically shorter than normal. Historically, the development of a new product line took five years or more.

In June of 1988, only two months later than planned, the AS/400 family of computers was introduced. The AS/400 quickly became one of the most popular computer product lines ever introduced. It remains a major source of profitability for IBM today.


The Silverlake project was highly ambitious, but strict controls were put in place to keep its scope from spinning out of control. Instead of everything being invented from the ground up, key building blocks were taken from the completed parts of the Fort Knox project as well as from two of the products being replaced—the System/36 and System/38 minicomputers. Major enhancements were pushed off until after the basic system was operational. A combination of great leadership, a tight schedule, rigid scope control, extensive reuse, and an exceptionally motivated project team led to one of the great success stories within the IT industry.

Summary

The ideas that form the RITE Approach were assembled over many years by observing those things that successful projects have in common. When the RITE Approach is introduced to people who are unfamiliar with it, the most common reaction is to recall how projects that they were involved in seem to fit its profile. The stories with a happy ending all seem to revolve around the same points even though the details differ dramatically. The six characteristics that successful projects seem to share are:

1. Strong leadership.
2. Time control.
3. Resource limitation.
4. Scope control.
5. Staged evolution.
6. Concept recycling.

As the story of the RITE Approach unfolds, specific suggestions will be made as to how to give projects these characteristics from the beginning.

Chapter Five

How to View Projects

The expectations created when projects are approved often have little chance of being met because of the way in which they are typically viewed. Changing what is expected from IT projects and how they are thought about can help improve the odds of success.

What Is Success?

If the definition of a successful marriage were that the couple never had a disagreement, then the odds of success would be close to zero. It is equally unrealistic to define the success of projects by how closely they adhere to plans created in their early stages. At the other extreme, it would be absurd to consider a marriage successful as long as neither participant murdered the other. Similarly, a project is hardly a success if its barely breathing carcass has been dragged across the finish line long after the reason it was initiated has been forgotten.

Projects normally pass through an approval process that involves the presentation of a plan and the associated time schedules, resource requirements, costs, and benefits. This plan then becomes the yardstick against which success is measured. Viewing projects this way seems logical but does not work well in practice because project teams face the following dilemma.


Management rightfully wants to see a concrete plan before agreeing to invest resources. Without a precise plan, it is difficult to choose intelligently among alternatives. The management culture of most organizations demands that such plans be presented with great confidence. This is difficult to do because at the stage when plans need to be presented, the project team rarely has the information needed to do a good job. The project time frame, cost, resources, and benefits are all tied closely to the exact way in which the final system is designed. The addition of a few seemingly minor requirements to a design can turn a highly attractive proposition into a financial black hole. This makes providing management with a precise plan a risky proposition before the point where the details have been worked out.

Four different strategies are commonly employed as plans are presented:

1. Inexperienced teams often underestimate the real risks. They present plans with confidence and take the blame when things don't work out.
2. Some teams postpone the presentation for as long as possible. They seek the holy grail of a perfect plan by attempting to study the issue endlessly. Projects that get in this mode can go on for years without ever finalizing and presenting a plan.
3. The most savvy project teams present their plans along with long lists of carefully documented assumptions. They ask not to be held accountable if the assumptions prove inaccurate, which almost always occurs.
4. Contingencies are hidden within the plan to attempt to deal with the unknown.

None of this does management any good, nor does it help to improve the odds of success. In the first case, management can blame the team but still has to deal with the failure. The second scenario is the worst since it leads either to efforts that never get off the ground or to the creation of projects that are long, expensive, and resource intensive. The third case isn't much better—things don't work out and there is no one to take the rap. Building contingencies into the plan can help, but deciding how much is needed is always a matter of guesswork.


Why Good Plans Are So Hard to Create

It is not possible to create a comprehensive plan for a sophisticated new system until all the details of the requirements are understood and have been turned into a design. The time and effort required to do this represent a significant percentage of the cost of the entire project. Management wants justification based on an analysis of costs and benefits before spending a great deal of money and time. This forces the project team to build a plan based on what the members know early on. It should come as no surprise that such plans hold up poorly because:

■ There is always more detail to discover.
■ Every requirement connects to others—it is hard to decide where to stop.
■ Assumptions fail to remain valid.
■ Changes occur that could not be predicted when the plan was prepared.

Even after approval, it is hard for projects to break out of the design stage. It doesn’t help that this phase of a project can be more rewarding for some of the participants than those that follow. Projects may bog down during the planning and design phases as a futile attempt is made to create plans that won’t later embarrass their creators. Proven techniques for dealing with these issues are outlined in the following sections. Too often management is not aware of these techniques and is unable to play an appropriate role in making them work. Part of the reason has to do with the way many executives work.

The Patton/Lombardi Effect

Legendary figures such as World War II hero General George Patton and football coach Vince Lombardi have had a profound impact on the thinking of the current generation of senior executives in many U.S. companies. Both Patton and Lombardi shared the quality of dogged determination. Each would create a plan and then exhort his followers to overcome any obstacle that stood in the way of success. Lombardi's famous mantra was that winning is not the most important thing, it is the only thing.

Executives who emulate the demanding management style epitomized by Patton and Lombardi are a tough audience for those proposing improvement projects. They want to hear a precise plan presented concisely with great confidence. The project team is expected to commit to doing whatever it takes to make the plan work. Those who fail to make the right kind of presentation are rejected or treated to a stern lecture.

A demanding style of management can be effective when the goal is very clear—take that hill or move the ball down the field. It can backfire when applied to IT projects where the target is constantly moving. There is nothing wrong with demanding excellence as long as what is demanded is achievable. Executives therefore need to understand what can and should be demanded from project teams and how to express what is expected.

Building a Mental Picture

When people don’t fully understand something, they look for an analogy. They ask themselves, “What is similar to this in my own experience?” The mental analogies that executives use to try to comprehend IT projects are a major cause of problems.

A common analogy is to think of an IT project as being similar to the construction of a building. This is reinforced by some of the terminology used in the industry, including referring to the “architecture” of systems and the “construction” of software. There are certainly ways in which the creation of a complex body of software resembles the design and construction of a building. Both start with the gathering of requirements, agreement on a general concept, the creation of a highly detailed plan, the assembly of resources, and a long and messy period when the plan is turned into something real, followed by last-minute adjustments prior to use.

However, using building construction as a mental model for IT projects creates more misunderstanding than enlightenment. The creation of a building is a much more predictable activity than solving a problem with technology. Construction would be more like IT if:


■ The cost of building materials dropped significantly every year.
■ Additional space had to be added every few months over the life of the building.
■ Standards for wiring, plumbing, heating, cooling, and safety changed every year.
■ Craftsmen treated blueprints more like suggestions than instructions.

The largest danger of using mental analogies that are based on the creation of physical objects is that they don’t take into account the highly dynamic nature of technology projects. Overly simplistic and linear thinking is encouraged—we are going to study the problem, design a solution, put it in place, and then be done.

In one way, IT projects are more like chess games than construction projects. Good chess players start with a general strategy and a detailed plan for the early stages of the game. After a few moves, the plan needs to be constantly revised in reaction to the opponent’s moves. In most cases, the strategy remains the same. In a similar way, project plans need to be constantly adapted to changing conditions while remaining faithful to the overall goals of the effort. Success at chess is defined by the end result and not by the degree to which the original plan was followed.

Chess is also an imperfect analogy because an IT project has no opponent. The point of the analogy is that it is necessary to react to constantly changing conditions. Consequently, IT projects need to be measured by what they ultimately accomplish and not by how they were defined at the outset.

More Realistic Assumptions

Organizations are in a much better position to succeed when a realistic set of assumptions forms the foundation for the mental picture of projects. The assumptions that need to be made at the outset include the following:

■ It is impractical to attempt to understand every aspect of a problem.
■ Complex problems are made up of an intricate web of smaller ones.
■ A high percentage of benefits can come from a small part of the effort.
■ The nature of the problem will change over time.
■ The more time passes, the more change will occur.
■ Any complex design will be imperfect.
■ Change will meet resistance.
■ There will not be enough human talent available to create the optimum solution.

Following Mother Nature

The foundation for a better approach to projects is iterative development, a concept that is not new or revolutionary. Mother Nature has been using this design philosophy for billions of years. Evolution has been shown to be a simple and powerful way to develop systems of incredible elegance. The basic logic of evolution is to create something new, try it in the field for a period of time, introduce variations, incorporate those that are successful into the design, and reject what doesn’t work. The essence of evolution is incremental improvement.

When the human genome was fully mapped out for the first time, one of the greatest surprises was the level of similarity between human DNA and that of much simpler species. The design of our bodies has more in common with fruit flies than we ever imagined. This seems to support the thesis that there is little need to develop a completely new design when a proven one is available. This DNA framework, which almost all living entities share, provides a foundation on which endless variations are possible. Each species has taken the best design aspects of its predecessors and improved on them. Innovation has occurred where change was appropriate. Otherwise, the standard solution that was inherited remained unchanged.

Evolution also teaches us that iteration is the best way to develop a great design. Innovations are tested incrementally under real-life conditions, and those that represent a true improvement


become part of the design. An iterative and incremental approach to projects incorporates this concept by encouraging the testing of new concepts, one at a time, under real-life conditions and keeping only those that actually work.
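The "test one change at a time, keep only what works" idea can be reduced to a toy loop. This is a purely illustrative sketch, not anything from the book: the function name, the scoring metric, and the sample changes are all invented, and real field testing would of course replace the numeric deltas.

```python
# A toy "keep what works" loop in the spirit of the evolution analogy.
# Everything here is hypothetical; the metric and variations are invented.

def improve_incrementally(baseline_score, variations):
    """Test variations one at a time; keep only those that raise the score.

    `variations` maps a change name to the score delta it produced
    when tried under real-life conditions.
    """
    design = []             # the accumulated design: changes that survived
    score = baseline_score
    for name, delta in variations.items():
        trial = score + delta          # try the variation in the field
        if trial > score:              # a true improvement?
            design.append(name)        # incorporate it into the design
            score = trial
        # otherwise reject it and keep the proven design unchanged
    return design, score

changes = {"faster close": 4, "new report layout": -2, "inventory checks": 3}
kept, final = improve_incrementally(10, changes)
print(kept, final)  # only the two genuine improvements survive
```

The design choice mirrored here is that each candidate is evaluated against the current proven baseline, never against the original starting point, so a regression can never enter the design.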

A Different View of Projects

The purpose of a project is to improve the way something functions. Evolution has shown that the best strategy for improvement is incremental. The best way to view projects is therefore to think of each one as a single step forward in a never-ending chain. Adopting such a view leads to fundamental changes in how projects are organized and managed.

The most common trap that projects fall into is to attempt to develop a precise plan by which the issue at hand will be completely resolved. Doing so seems to make a great deal of sense but in practice leads to ever-growing scope and complexity. On the other hand, the unplanned chaos of evolution, where random variations are tried in the absence of a long-term vision or plan, is also an unworkable strategy for projects. A middle ground is the course that seems to work best.

The classical approaches to projects are correct in that the first step always needs to involve an evaluation of the challenge at hand. This leads to the creation of a proposed approach to meet it. The critical issue is how to resist the natural gravitational forces that pull toward ever greater complexity. The starting point is to adopt a new set of attitudes about projects:

■ A long-term vision must be created rapidly.
■ Perfection is not attainable, but constant improvement is.
■ Complex problems must be attacked in increments.
■ The goal of each incremental step is progress, not perfection.
■ Mistakes are acceptable if they are discovered and resolved quickly.
■ New concepts should be tested under realistic conditions as early as possible.
■ Many small steps forward are better than a few giant leaps.
■ The most fundamental issues should be attacked first.

These beliefs lead to an approach to projects that has a strong flavor of iteration and incremental improvement. The first step is to rapidly develop a broad long-term vision of what is needed. A more specific plan is then formulated based on the philosophy of creating building blocks one at a time. As components are built and tested, imperfections are discovered and resolved. After each wave of improvement, the vision is updated in light of what has been learned and changes that have occurred in the environment. The cycle repeats for as long as necessary as follow-on projects are initiated.

The view advocated here uses the term project to describe each segment of the effort where something of value to the organization is created and put to use. By this definition, the development of a design is a stage and not a project because completion of the design does not bring value to the organization. Building and testing a prototype is also a stage within a project and not the project itself. Individual projects can be strung together over time so that the cumulative impact is the complete resolution of the problem being addressed. At the same time, individual projects can thus be complete and successful without having resolved every aspect of the problem being attacked.

The way the Intel family of PC microprocessors has evolved provides a good illustration of this philosophy. An improved version of the Intel Pentium family of processors is introduced periodically, often less than a year after its predecessor. Intel works against a long-term vision for the product line that is refined continuously in response to changing technology, the actions of competitors, and the reaction of buyers to the latest offerings. The creation of each new member of the product line can be considered an individual project. More than one such project can be under way at once.

Better Measures of Success

Incremental development is one of the foundations of the RITE Approach to projects. Adopting this approach requires taking a whole new view of projects. Instead of being a single carefully


planned effort to solve a problem once and for all, a project is a series of rapidly executed activities repeated as many times as necessary to resolve the existing problem and what it becomes over time.

Viewing projects the RITE Approach way does not mean abandoning discipline and accountability. It is not only acceptable for executives to be demanding, it is essential. The change comes in what is demanded and how success is measured. At the end of each individual project, an evaluation needs to be conducted. Like everything else, this evaluation needs to be done rapidly and efficiently. This is the point at which it is possible to assess the performance of the people involved. The answers to a few simple questions form the basis for that evaluation:

■ How long did this specific project take?
■ What resources were used?
■ How much was spent?
■ What worked and what didn’t?
■ What measurable benefits were provided?
■ Were there permanent improvements in business processes?
■ Is the organization well positioned to take the next logical step?

Projects that deliver valuable benefits rapidly at a reasonable cost, while consuming modest amounts of resources, are obviously successful. It does not matter how the results compared to the first view of what was intended.

When practical, it makes sense to reduce what happened to concrete numbers. This always requires making subjective judgments about the worth of the benefits, the hidden costs of resources, and the value of the time taken to complete the effort. Even though the resulting numbers are never a perfect reflection of what happened, they are still valuable to have.

It is essential for those involved to be held accountable for the results of their efforts. The issue is important enough to warrant an in-depth examination in Chapter 8, so further exploration of it will be held until then.
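Reducing an evaluation to a single rough number can be sketched in a few lines. This is a hypothetical illustration only: the question labels paraphrase the evaluation questions above, while the 0-to-5 scale, the averaging, and the scores themselves are invented for the example.

```python
# Hypothetical sketch of reducing a project evaluation to concrete numbers.
# The 0-5 scale and the sample scores are invented, not from the book.

def evaluate_project(answers):
    """Average subjective 0-5 scores into a single rough figure of merit.

    `answers` maps each evaluation question to a judgment score.
    """
    if not answers:
        raise ValueError("no answers to evaluate")
    return sum(answers.values()) / len(answers)

answers = {
    "benefits delivered": 4,
    "cost vs. value": 3,
    "time taken": 5,
    "process improved": 4,
    "positioned for next step": 4,
}
score = evaluate_project(answers)
print(round(score, 1))  # 4.0
```

A real evaluation might weight the questions differently; a plain average is used here only to make the point that imperfect numbers are still better than none.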


What Project Teams Need to Do

The way that projects proceed under the RITE Approach includes the following steps:

■ Create a long-term vision. Time spent creating this vision must be constrained. Such visions cannot be expected to be flawless.
■ Evaluate the environment. Estimate how much time will pass before the next major environmental change will occur. Use this estimate to help formulate plans.
■ Break down the problem. What are the most important goals of this effort? Are there opportunities to quickly make high-impact, near-term improvements?
■ Plan the first step. Accepting the practical limitations of plans does not suggest that they need not be carefully prepared. When scope and time frames have been limited, it is possible to create plans that stand up well during execution.
■ Iterate activities as needed. Build and test anything new in increments of a manageable size.
■ Make it happen! The first operational deployment needs to arrive quickly and create positive momentum.
■ Evaluate and adjust. Once an increment of the ultimate solution is operational, the situation needs to be reevaluated. Determine what has changed, what has been learned, and how those involved have performed.
■ Update the vision. Spend a limited amount of time reviewing the statement of the long-term vision in light of what has happened so far.
■ Plan the next phase. Knowing that everything does not have to be included in the very first release of the project is a tremendous help in getting it done. Of course, this only works if resources remain in place to continue on for as many increments as make sense.
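The cycle of small steps, evaluation, and vision updates can be sketched as a simple control loop. Everything in this sketch is hypothetical: the function name, the cap on increments, the benefit threshold used as a stand-in for "diminishing returns," and the candidate steps are all invented to illustrate the shape of the process, not prescribed by the RITE Approach.

```python
# One possible sketch of the RITE cycle as a control loop.
# Function name, data, and the stopping rule are all hypothetical.

def run_rite_cycle(candidate_steps, max_steps=5, min_benefit=1):
    """Deliver small increments until the cap or diminishing returns."""
    vision = "long-term vision, drafted rapidly"
    delivered = []
    for name, benefit in candidate_steps.items():   # plan the next small step
        if len(delivered) >= max_steps:             # keep ambition limited
            break
        if benefit < min_benefit:                   # diminishing returns reached
            break
        delivered.append(name)                      # make it happen: deploy it
        vision += f"; updated after {name}"         # evaluate, adjust, revise vision
    return delivered, vision

steps = {
    "analysis system": 5,
    "faster month-end close": 3,
    "inventory discipline": 2,
    "full application replacement": 0,
}
done, vision = run_rite_cycle(steps)
print(done)  # the three quick wins ship; the big replacement waits
```

The point of the loop structure is that the vision is revised after every delivered increment, and stopping early is a normal outcome rather than a failure.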

In theory, this cycle of improvements and reevaluation can go on forever. In practice, a point of diminishing returns is often reached after a number of individual projects are completed. The


system can then be supported by a routine maintenance function carried out by the IT staff.

Summary

The way that most organizations think about IT projects is actually one of the reasons why many projects fail. The problem is that project plans are required before it is possible to create them with great accuracy. Demanding accurate plans is a strategy that backfires since it becomes another force encouraging scope growth. The starting point for a solution is to accept the practical limitations of planning. Doing so leads to viewing improvement as a fluid, evolutionary process. IT projects are best thought of as incremental steps forward in an ongoing effort to adapt to ever-changing needs. Viewing them this way helps to limit the ambition of each individual project and thereby improves its chance of being successful.

Chapter Six

Deciding What to Do

Projects are started by evaluating the issue at hand and then developing a proposed solution. Traditional project management approaches have a hidden bias toward exhaustive analysis of the problem, which leads to solutions that resolve all of the issues uncovered but that are time and resource intensive.

Chapter 5 offered a nontraditional way of thinking about projects. The key point was that the temptation to make great leaps forward needs to be resisted in favor of a number of rapid small steps. When leaping, it is hard to change direction while in the air. On the other hand, after each step adjustments can be made. The mechanism the RITE Approach uses to encourage a rapid series of small steps forward is a slightly different approach to examining the issues and formulating a solution.

Under the RITE Approach, the first step is to assign a task force to examine the issue and formulate a high-level view of it and how it might be resolved. The term task force is used to describe this group of people because it has a connotation of urgency and limited duration. It is important not to imply at this point that a project will be necessary. Frequently task forces are able to create solutions that do not require a formal IT project to ever be initiated. They need to be strongly encouraged to propose solutions that can be put in place as rapidly as possible. If they succeed in resolving the issue rapidly, then the members are thanked and the task force is dissolved.

Task forces can be assigned to solve routine problems: for example, the need to improve control over travel costs. The vision in this


case can simply be to find a better way to assemble travel cost details, perform more analysis, and establish clearer rules for which expenses must be approved in advance. The vision does not need to involve the details of technology—for example, whether data will be entered on a PDA, phoned in, or typed into a PC application. Some understanding of what is technologically possible should also be taken into consideration. The effort to create the vision does not need to be great—the right set of people in a room for a few hours can be enough if some homework has been done in advance.

Often, the task force will conclude that one or more IT projects will be needed to carry out the vision. In this case there are two tasks to perform. The first is to develop a broad view of the ultimate solution, called the long-term vision. The second is to identify what the first steps toward that vision should be. Usually, those first steps will require formal IT projects. On the surface this may sound similar to the classic approach, and in many ways it is. The important differences lie in five areas:

1. The mix of people involved.
2. The level of detail developed.
3. The mandate given to the task force.
4. The time frame in which results are expected.
5. How the result is viewed.

The RITE Approach is based on the principle that the best plans are made rapidly by small groups of people. An effort therefore needs to be made to limit the number of people who are formally involved. Fewer than 5 is ideal; fewer than 10 is almost essential. In the common situations where the point of view of many people needs to be taken into consideration, it is best for the formal participants to obtain the input of those who are not invited. If more than 10 people absolutely have to participate actively, then consideration should be given to reducing the mandate under which the task force is operating.

The first job of the task force is to create a high-level view of the ultimate solution. The task force is asked to do this rapidly and without stopping to gather all the information that would ideally be needed to develop precise plans. The word vision is used to describe the results of this effort because it implies something less precise


than a plan and because it is usually associated with the longer term. Using the term vision also avoids the need to call the result a plan. In the culture of most organizations, plans are supposed to be concrete and complete. Once a plan is presented for approval, a fixed stake is driven into the ground. What happens subsequently is inevitably measured against the plan. Some level of embarrassment always seems to exist when things do not work out exactly as planned.

A related RITE Approach principle is that plans will be imperfect regardless of how much time and resources are invested. Striving for perfection leads to added cost and encourages complexity without significantly improving the value of the result. Golfing legend Lee Trevino took a pragmatic approach to putting with his advice that golfers should “miss ’em quick.” His point was that many putts will be missed regardless of how much preparation time is taken. In the turbulent world of information systems, it is better to create imperfect plans quickly than slowly.

When using the RITE Approach, the task force is asked to rapidly formulate the long-term vision. There is no expectation that the vision will be perfect or that every detail will be worked out. The time and effort invested should be kept in line with the importance of the issue but cannot be allowed to drag on excessively. Depending on the situation, the work of the task force can take anywhere from less than a day to a matter of weeks. It is important not to let even the most vital efforts go on for more than 8 to 10 weeks. Six weeks or less should be the norm, even for issues of reasonable complexity.

Defining Specific Project Goals

If a task force determines that the launching of one or more projects is warranted, then it is the group’s task to define what is to be accomplished. In doing so, an important objective is to limit the scope of each project undertaken. Chapter 7 will introduce a number of techniques for helping make this happen. A project is the mechanism whereby the vision becomes real. Multiple projects are often needed to complete the vision. The ideal project represents a sound first step toward achieving the vision, can


be accomplished rapidly, uses a moderate amount of readily available resources, provides clear and measurable benefits, and is affordable. Few projects will fit this ideal profile perfectly, but it is important to strive to have those that are undertaken come as close as possible.

Once a project is launched, it becomes an independent entity. Its success is measured by what is accomplished compared with the time taken and resources employed. The degree to which it has helped realize the original vision is another measure of its success. After a project has been completed, it is necessary to evaluate it. It is then appropriate to update the vision based on what was learned during the project and what has changed in the environment.

Organizing Task Forces

When possible, task forces should be led by the functional manager with the most to lose or gain. Often this is the manager of the function where most of the benefits or costs will occur. The task force leader can be heavily assisted by people from the internal IT staff or by outside advisors. In general, IT people, regardless of how talented they are, are not appropriate task force leaders.

Projects have a greatly improved likelihood of success when managers take a strong personal interest in the details of how their function operates. Managers should be the chief business process engineers for the part of the organization they oversee. They must clearly understand how their function connects to all the others. They do not need to be experts in technology but must understand exactly how the IT systems support them. The task of defining how the processes they oversee operate is too important to be handed over to others.

Developing the long-term vision for resolving issues of all sizes is a management task that should not be handed off to others regardless of their qualifications. Good managers will obtain useful input from many sources but will manage the process themselves and make all the important decisions. Creating, articulating, and obtaining consensus on long-term visions is one of the most important functions management performs. There is nothing wrong with middle managers or outside advisors doing much of the legwork in


developing and organizing ideas. The final product must be something that the appropriate senior executives have helped create, understand completely, and are committed to making happen.

If the effort to create these vision statements is kept short, then obtaining the time of busy managers becomes realistic. They avoid active participation when the issues become too deeply technical, the entire effort is too time consuming, or the language being thrown about does not make sense. Too often, executives approve longer-term plans in order to get on to the next item in a busy agenda. If senior management does not have the time, interest, or inclination to put its stamp on the long-term vision for the effort, then it is best to postpone initiating any action until this occurs.

Functional managers need to understand and oversee the design and operation of the IT systems that will determine their fate. All of the final decisions are theirs to make. It is not reasonable, however, to expect that they will be experts in process design or in IT systems development. In most cases, the help of trained professionals will be required when tasks of any complexity are undertaken.

People who have a talent for creating excellent solutions to complex problems are very rare. Only the largest organizations can afford to keep a large staff of them on hand. As a result, there is often a valid need for help from experienced outsiders. Chapter 11 provides advice on how to make effective use of outside resources. The key point is that outsiders, regardless of their level of talent or experience, cannot make the final decisions as to how important functions should be performed. That can only be effectively done by the people responsible for those functions.

Practical Advice

The long-term vision developed by each task force should:

■ Focus on business process and not on technology.
■ Be based on information gathered quickly from many sources.
■ Involve no more people than necessary.
■ Borrow ideas freely from any source including competitors.
■ Make conservative assumptions about the state of technology.
■ Be clearly and crisply articulated.

The long-term vision should be written down using clear and concise language. There is an unfortunate tendency for business documents to sound informed but to convey little real information. Avoid statements like: “The proposed solution will build a standards-based infrastructure as the foundation for e-enablement of mission-critical transaction systems across the enterprise.”

Instead use simple language and syntax, such as: “Customers need more information about the status of orders. The new system will allow them controlled access through the Internet to information in our ERP systems including shipment date, carrier used, invoice price, back-ordered items, and arrival date. Future enhancements will offer expedited shipping and incentives for electronic payment.”

Mega-Multi Manufacturing Revisited

Another look at the situation at Mega-Multi Manufacturing will illustrate how viewing projects the RITE Approach way can make a huge difference in the outcome. We will examine what might have happened if the project had been approached differently from the beginning.

The original problem at Mega-Multi Manufacturing Company was that senior management was unhappy with the time it took to get information about the previous month’s results. The controller was given the task of coming up with a better approach. When the RITE view was taken, the sequence of events proceeded very differently. The first step was not to launch a formal project. Instead, the controller, with the help of the CIO and other associates, took a critical look at the situation. The ad hoc group discovered a tangle of interrelated problems:
■ Results were not available to executives for more than 10 days after the end of the month.
■ The accuracy of critical data, especially inventory, was inadequate.
■ Existing reports did not provide the information needed to take effective action.
■ The computer applications being used were outdated and ineffective.

A task force was assembled to rapidly examine these problems and formulate a proposed solution. The controller personally assumed leadership as the manager with the most at stake. Eight other people were asked to help out on a part-time basis. A full-time IT professional was assigned to the effort as well. Four weeks were taken to complete this effort. Keeping the time frame short made it practical to involve a number of department heads and other highly knowledgeable people.

The mandate for the task force was to formulate a long-term vision and to come up with a practical and affordable first step toward that vision. There was not an expectation that the long-term vision would be perfect or comprehensive, but it did need to make sense as far as it went. The task force quickly broke down the problem and concluded that the greatest payback would come from better analysis of existing information. The report submitted made four recommendations:

1. Create a new information analysis system. The goal was to allow executives to better understand information available within existing systems.
2. Reduce the month-end closing by three days. This was to be done by introducing greater discipline and a few simple changes in process. This was not an ideal long-term solution, but the cost and effort were minimal and doing this would buy time during which more extensive changes could be put in place.
3. Improve inventory accuracy. More vigorous enforcement of existing procedures could be used as a near-term approach to inventory accuracy until completely new software applications could be installed.
4. Plan to replace the existing major software applications. The case for replacing existing systems was strong, but the upcoming year was not a good one for doing so. The other recommendations made postponement of the full upgrade for a year practical. The recommendation was thus to create detailed plans during the latter part of the coming year and implement in the year following.

The controller’s department was able to make the appropriate procedural changes necessary to reduce closing time by three days without the need for a formal project or any outside help. At the same time, the VP of manufacturing personally led the effort to make marginal improvements in inventory accuracy. These tasks took time and effort but were carried out without any unusual problems.

The only real IT project undertaken was the ad hoc reporting effort. It was led by a young woman on the controller’s staff who was responsible for financial analysis. She was delighted with the assignment since the lack of useful information had limited her effectiveness. Both the controller and CFO took a strong personal interest in the project and made time available as needed. A project leader from the IT staff was given the full-time assignment of assisting the woman doing the reporting.

After only a few days of investigation using the Internet, personal networks, and trade journals, these two discovered an interesting and cost-effective solution: an integrated and prepackaged business intelligence system that was available from a local IT service firm. This system, which had been successfully installed at a number of companies similar to Mega-Multi, could be working in a trial mode within 45 days and fully operational 60 days after that. Based on the proposal from the service firm, a budget for the project was set at $250,000, which included software evaluation, the purchase of additional hardware, and the use of outside help for planning and implementation. An additional $100,000 was set aside for ongoing improvements to the new system after full implementation.
In the end, this money was indeed needed because a number of custom reports not included in the standard package were implemented. None of these additional reports were created until after the basic system had been in use for more than a month.

At the end of less than six months, the closing had been reduced from 10 to 7 days, and a great tool was in place that could extract valuable information from the existing systems. Inventory data accuracy was better but was still far from perfect. The problems Mega-Multi had set out to deal with had not been totally solved to everyone’s satisfaction, but good progress had been made in a reasonable time at an affordable price. Less than $500,000 had been spent on hardware, software, and services. The resulting improvements had begun to create measurable returns on that investment. In contrast, under the traditional project approach, over $600,000 was spent in the first six months. The result had been a detailed plan and no immediate hope for direct benefit from the project.

A Quick Review

When Mega-Multi applied the RITE Approach, a small task force was assembled and given a short amount of time to make recommendations. This made it practical for people with the right level of knowledge and skills to work together to quickly develop a workable plan. The task force saw that the ultimate solution required the replacement of the ERP applications. Since this was not practical in the near term, a pragmatic high-payback approach that focused on near-term benefits was found.

The most urgent need was for better reporting. Fixing the reporting problem before rebuilding the ERP applications was not ideal, but the cost, time frame, and benefits made it the best available near-term option. The new business intelligence system was especially attractive because it was offered as a preintegrated package that could be installed quickly and affordably. Two of the recommendations did not involve technology at all. They were not ideal long-term solutions, but they offered an acceptable level of interim relief with little out-of-pocket cost.

The need for a long-term fix was not forgotten. Time was bought so that the long-term solutions could be undertaken when the organization could better afford it and when some of the benefits and experience from the interim solutions would be available.
In the previous visit to the fictional Mega-Multi company, the larger project undertaken had to contend with two major unplanned events: a lawsuit and a merger. This time the improvements were in place rapidly enough that they were complete before either of these events occurred. With the traditional project approach, the unplanned events slowed the project down and increased its cost.


Using a fictional example makes it possible to combine many stories into one. In this scenario, the success story is, if anything, less dramatic than the experiences of a number of real companies that have rapidly installed simple, off-the-shelf business intelligence systems. Recent examples from our own experience include Tiffany & Company, the famous jewelry retailer; R.C. Bigelow, a leading maker of specialty teas; and Sargent Manufacturing, which produces high-quality locks.

A project does not have to take a long time, cost a great deal, or require an endless amount of effort to make a rapid positive impact. Many of the best available opportunities involve putting simple and powerful tools to work with a limited effort. The key to success is that the offering is simple and comes preintegrated—the buyers have relatively little to do to get something of value working.

Summary

There is nothing wrong with thinking big and developing sophisticated long-term strategies. Problems start when too much time is spent doing so. They multiply when elaborate plans turn into overly ambitious projects. By making a few subtle changes in the way problems are attacked from the start, the natural propensity toward complexity can be resisted.

The first step is to give a small number of carefully selected people the mandate to create a long-term vision in a short period of time. Calling this group a task force helps make it clear that this is a quick strike effort and not a long death march. Keeping the time frame short makes it easier to get the best people to participate actively and actually seems to improve the quality of the result. It also helps keep the effort from turning into an orgy of overdesign.

Task forces should be led by managers who are responsible for the quality and capability of the information systems that support their function. IT professionals can perform much of the hard work of the task force but should not manage the task force or make critical decisions. Asking the professionals to evaluate the problem and then return with the answer is not the right approach, especially if the solution involves a business process change.

The long-term vision is the basis for determining what should be done next. If appropriate, specific IT projects are recommended as part of the solution. Ideal projects are:
■ A step toward the vision.
■ Short in duration.
■ Achievable using available resources.
■ Able to deliver valuable benefits.
■ Affordable.

Chapter Seven

Controlling Project Scope

The propensity for projects to grow in scope is a major cause of failures. Controlling scope growth is thus a major key to improving success rates. Achieving scope control starts with an understanding of the forces that cause it. Previous chapters have explored some of these root causes, including:
■ The desire to create specifications that are comprehensive.
■ Management’s expectation of a perfect plan.
■ Difficulty determining where to stop.
■ Reluctance to move out of the design stage.
■ Ambition to work on or oversee a more important effort.
■ The need to do challenging work.

Unless countervailing forces are present, these factors will pull projects toward increasing complexity. The resulting negative impact is obvious—higher cost, a longer time before benefits arrive, and more use of limited resources. An even greater risk associated with excessive scope is that as time passes, the validity of the underlying assumptions used to justify the project will erode. If enough time passes, it becomes commonplace for the original justification for the project to disappear before completion. Complexity and broad scope are not bad per se. It is the resulting impact on time, risk, cost, and resources that is the problem.


There are actually many advantages that systems of greater sophistication provide once they are functioning properly:
■ Usability. A well-designed and comprehensive system can be easier to use.
■ Value. Added capability can mean greater benefit.
■ Completeness. More of what is needed is included.
■ Quality. Fewer failures occur, performance is better, and availability is higher.
■ Integration. Mature applications often fit in better with everything else.

There is nothing wrong with systems becoming highly complex as long as it occurs in stages. What needs to be resisted is the desire to strive for a high level of sophistication all at once. Adopting the philosophy outlined in Chapter 6 is the best way to balance the need for sophistication and completeness with the risks of attempting too much at once.

Amazon.com is a company that appears to have struck the right balance building a software-driven business. The high failure rate of other companies that have attempted to establish Internet-based franchises makes the continuing success of Amazon.com that much more impressive. Long-time Amazon customers have had the experience of watching the Web site rapidly and continually evolve. Throughout its history something new and better seems to be added every few weeks.

When Amazon.com was launched, the site was more impressive and professionally designed than most of its contemporaries. By the standards of today, however, it was extremely primitive. Books were the only product offered at first, and there were few frills. Site visitors could find the book they wanted and then order it but not do much else. As time has passed, countless other products have been introduced including music, movies, electronics, and games. The amount of information available about each product keeps improving, as does the ease of obtaining it. Services such as gift wrapping, delivery tracking, and theater listings are regularly added. Most importantly, these improvements arrive frequently, with each one making the overall experience just a little bit better or more complete. They also are added in a way that does not give customers too much change to absorb at once. The Amazon lesson is simple—rapid evolution through a never-ending flow of incremental improvements is superior to less frequent attempts at great leaps forward.

The question that remains after adopting an incremental approach is how to make it actually happen. Having a desire to limit the scope of each project is a good start but not a solution. A delicate balance needs to be struck. Enough capability must be included to make the resulting system usable and attractive. Still, functions that can safely be postponed until later need to be kept out of the near-term plan. No magic elixir or ancient tribal ritual is available that makes it possible to strike this balance perfectly. There are, however, a number of principles that can be used to vastly improve the odds of success. Each attacks some of the root causes of scope growth. Used together, they become quite powerful.

The foundation for all of them is the evolutionary view of projects outlined in Chapter 6. Unless the organization thinks of projects as a continuous series of incremental improvements, it becomes difficult to apply techniques that limit scope. Taking the traditional view of a project as a one-time effort to solve all aspects of a problem leads to the belief that anything removed from a project plan could be lost forever. Once an evolutionary view of projects has been adopted, the most important way to limit the scope of each individual project is to let time determine the project’s scope. This is the first and most important of six specific techniques for controlling scope as listed here and described in more detail in the following sections:

1. Let time determine scope.
2. Control the use of resources.
3. Limit the size of the design team.
4. Gauge your ability to absorb change.
5. Imitate, don’t invent.
6. Create a single point of accountability.


Let Time Determine Scope

Traditional thinking involves examining the current situation, coming up with a plan for improvement, and then determining how much time and resources will be needed to carry out the plan. The best way to control scope is to turn this logic around by first determining how much time is appropriate and then letting it act as a constraint on the scope of the effort undertaken. To those who have not tried it, this line of thinking sounds like the twisted logic of Lewis Carroll’s Alice in Wonderland—determine where you are headed by first deciding how long you have to get there.

The reason it is necessary to take this approach is the unwillingness of things to stand still. The problems and opportunities that need solving refuse to remain constant while we fix them. The environment into which new business processes must be introduced continually changes as economic conditions shift, executives come and go, competitors disrupt the market, and customer preferences evolve. None of this can be controlled or even accurately predicted. The only thing that can be done is to limit the damage by shortening the time between concept and realization.

Time-driven scope also works because there is no inherent right size for a project. A spectrum of solutions exists for almost any problem or opportunity. Using elapsed time as the first constraint on a plan forces those examining the situation to focus their attention on the choices that can be completed quickly. For many projects, quick and simple solutions have more impact than long and elaborate ones. A great side benefit is that faster usually means much less expensive.

Once a decision is made to let time constrain scope, it is necessary to establish the length of time that will be allowed. There is as much art as science to doing this. One approach is a grand management edict such as, “No project can take more than six months.” The simplicity of this approach has merit, but normally allowing a little more flexibility is wise.
Projects scheduled to take more than a year should only be approved when the case for doing so is compelling. Conclusive evidence should be offered that the effort cannot be broken up into smaller projects and still have value. Some organizations set highly aggressive fixed limits on the elapsed time for projects. They attempt to limit most projects to 90 days or less. For some types of efforts, a limit of 90 days is appropriate. There are, however, many legitimate projects that cannot reasonably be finished that quickly. Regardless of how aggressively it is set, the imposition of a time limit cannot become an excuse to eliminate quality. Whatever is put in place needs to be acceptable in appearance, reliable, and free of significant defects.

The best approach to setting project time limits takes many factors into consideration:
■ The type of project. The installation of a new suite of ERP applications falls in a far different class than an effort to tidy up the format of some screens and reports. Making allowances for these differences is appropriate within reason.
■ The likely impact. Taking a little longer to greatly increase the value of a project can also make sense.
■ The measurement cycle. Almost all organizations measure results on an annual basis. The project start date and duration should take into consideration the fiscal calendar and the way individuals involved in the project are measured.
■ The likelihood of stability. A best guess needs to be made as to how long it will be before a major change in the environment is likely to occur. Sometimes future events are predictable, such as the retirement of the CEO or the switch to the euro. Past history can also provide indications as to how often disruptive events occur. Even though predictions of the future are never perfect, it is appropriate to make the best possible estimate without expending too much time doing so.
■ The availability of resources. The impact of the project on all limited resources—including indirect ones such as cash—needs to be considered. The resource that needs to be most carefully controlled is the time of key people within the organization.
■ Cyclical events. Many organizations face predictable events such as a busy season around the Christmas holidays. Project plans need to take these events into consideration. For example, major changes to accounting applications are usually best put in place at the start of a new fiscal year.




Timebox Development

The idea of using time constraints to control scope is not unique to the RITE Approach. DuPont Corporation pioneered the creation of an innovative approach to project management called Timebox Development (Martin, 1991). Under this concept, whose use was not confined to IT efforts, an absolutely firm completion date was set at the time each project was approved. Typical project time frames were in the 60- to 120-day range. A unique twist under the DuPont Timebox concept was that the project was canceled if it went even a single day over the allotted time. This draconian approach to strict schedule control has worked well in DuPont and numerous other organizations. Not everyone who uses timeboxes applies the death penalty to projects that go over schedule. Those organizations, including DuPont, that have such a rule on the books seem to rarely need to enforce it.

Like any good idea, countless variations on the timebox theme have been tried. ACG clients have been using the scheduling and scope control techniques described here since the project at Sealectro was undertaken in 1984. The approach advocated in this chapter has much in common with Timebox Development as well as some important differences. To avoid confusion, the term timebox is not used to describe the approach advocated here. However, we do wish to acknowledge the significant contribution to the art of IT management that Timebox Development represents.

Whenever constraints are introduced, the people within an organization will look for creative ways to work around them. In this case, a common reaction to the imposition of firm time limits is to attempt to redefine what constitutes the completion of a project. This frequently takes the form of allowing the completion of a stage within the project to be defined as success. Finishing a prototype, developing a specification, and selecting a package are each important achievements. However, they cannot be considered anything other than stages within the project. A project has not accomplished anything of value unless it has had a positive impact on external results. Impact can take many forms, such as greater sales, lower costs, happier customers, increased market share, or improved product quality. When a completion goal is established at the outset of a project, it must involve defining the expected impact on these types of external results that will be provided by the scheduled date.

The ideal time to start more ambitious projects is early in a fiscal year because most managers are motivated to remain tightly focused on current-year results. If they are asked to put in serious time working on an improvement project in the current year that will not provide benefits until the following year, they are likely to lose interest. Conversely, the worst time to start a major project is near the end of a fiscal year, when all attention is focused on meeting near-term goals. A useful approach often involves creating the long-term vision described earlier, developing plans for specific projects during the latter part of one measurement period, and then starting serious work on the project as early as possible in the new period. This must be timed so that the planning results can be factored into the budget cycle.

In the end, all of these considerations must be weighed and a decision made as to what time limit should drive the project plan. Those approving the project, not the participants themselves, must make this final judgment. There is nothing wrong with engaging in a lively debate before such judgment is finalized, since numerous subjective elements are always present. The key is to resist the inevitable forces that constantly pull toward moving the time goal further into the future.

So far, three foundations for scope control have been explored: the evolutionary view, rapid development of a long-term vision, and the need to give completion time priority over capability. These concepts work most effectively when used in combination with the techniques described in the following sections.
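As an illustrative aside, not drawn from the book itself, the strict DuPont-style timebox rule described above can be sketched in a few lines of Python. The class, project name, and dates here are invented for the example:

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class TimeboxProject:
    """A project approved with an absolutely firm completion date."""
    name: str
    start: date
    limit_days: int  # firm limit fixed at approval time, typically 60-120 days

    @property
    def deadline(self) -> date:
        # The deadline is derived once from the approved start and limit;
        # under the timebox rule it never moves afterward.
        return self.start + timedelta(days=self.limit_days)

    def status(self, today: date) -> str:
        # Under the strict rule, even a single day past the deadline
        # means cancellation rather than extension.
        return "canceled" if today > self.deadline else "on schedule"

project = TimeboxProject("ad hoc reporting", date(2024, 1, 8), 90)
print(project.deadline)                   # the immovable completion date
print(project.status(date(2024, 3, 1)))   # inside the 90-day box
print(project.status(date(2024, 4, 15)))  # past the deadline
```

The point of the sketch is that the time limit is data fixed at approval, not something participants can renegotiate later; scope must flex instead.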

Control the Use of Resources

If excessive scope is the most common source of the failure of projects, then the availability of people with the right mix of skills is the most important reason for their success. This leads to the obvious admonition that the best possible people should be assigned to projects. Unfortunately, such advice alone has little practical value since an ideal set of people is almost never available. The real question is what to do when the talent falls short of what appears to be needed.

A popular choice is to bring in skilled people from outside the organization. Within limits, this is an appropriate approach. It can backfire, however, if the primary motivation of those brought in is to maximize the scope of the effort. There is also a practical limit on what people brought in from outside the organization can do. Few will instantly understand all the subtleties of how current processes work. Most importantly, they will not know or be trusted by the individuals whose working lives will be most seriously disrupted by the changes that are imposed.

In the end, experienced insiders must undertake much of the heavy lifting for any project of consequence. Their understanding of the existing processes, the individuals involved, the organizational culture, and management’s desires and idiosyncrasies is vital to success. When the pool of available talent to perform these vital functions is limited, as it nearly always is, there is only one good answer: Let resource availability drive project scope.

Once again, seemingly backward logic needs to be applied. Instead of looking for the optimal solution, it is appropriate to find one that fits the resources available. Behind every successful project there are individuals who have done an excellent job. Failure is often the result of taking on more than the people actually available could handle. Balancing what is attempted with what can be accomplished is therefore a logical approach. Do not be too conservative about what people can accomplish. Most people do their best work when challenged. People function well when working close to the edge.
There is a difference between being on the edge and falling off. One of the best ways to help determine what people are capable of is to ask them. If they don’t have the training or a feel for what to do, they will usually say so. It is our experience that most people will not bite off too much when asked to define their own mandate. The major exception to this is highly technical people who have a natural optimism and love of elegance. If the other principles advocated here are applied, then the influence of highly technically oriented people will be limited at the stage when these decisions are made. Proper application of the next principle helps resolve the issue of how to balance scope and the availability of talent.

Limit the Size of the Design Team

It is a good practice to limit the number of people involved in the design stages of projects. This is true even in the rare cases where there is no practical limit on the availability of people with appropriate skills. In addition to providing a natural brake on scope growth, the resulting plan is more cohesive and practical. Smaller teams reduce the occurrence of misunderstandings, factions, and the probability that a Frankenstein monster of ill-fitting parts will be sewn together to form a design.

It is usually easy to obtain conceptual agreement that design is best carried out by a limited number of people. The personal experience of most managers reinforces this concept. In practice, this principle is hard to apply because:
■ A small number of people never have all the knowledge needed to create plans for anything complex.
■ The better the individuals are at what they do, the greater their humility will be. This increases their desire to include others with specific expertise they do not have.
■ It is hard to say no to an organization that wants its point of view represented by a person on the design team.

The resulting desire to expand the design team needs to be strongly resisted. Doing so will often require a mandate from management rather than engaging in a long debate over who should be included. Assuming that there is no single person capable of doing a design all alone, the ideal size of a design team is three to seven people. An excellent management concept called Joint Application Design (JAD) has been developed to facilitate having a small team quickly create a high-level design. Most JAD advocates recommend design teams of eight or fewer people.


Gauge Your Ability to Absorb Change

The ability of organizations and individuals to deal with change varies widely. A limited number of organizations have established a culture where frequent change is the norm and stability is considered boring. In too many organizations, process change is met with fear and loathing because altering the way a function is performed is almost always disruptive. When an important process is changed, the individuals involved often feel that something of value is being lost. They are concerned about prestige, influence, recognition, compensation, opportunity for advancement, and simply the ability to spend time doing something that they enjoy. Most people take pride in the skill they have developed in doing things a certain way. When the result of a project is to make part of that skill obsolete, both fear and a sense of loss occur.

Management cannot allow the tendency to resist change to prevent change from occurring. Mildly disruptive change is as necessary for organizational health as exercise is for the human body. It is as easy for organizations to procrastinate in making necessary changes as it is for individuals to postpone taking a nice long walk. Uncontrollable events provide an endless supply of triggers for change—swings in the economy; government regulations; and the impact of resignations, promotions, and retirements. Better-managed entities deal proactively with these changes, often by initiating a project.

The difficult task management faces is to determine what level of change in the form of new systems or procedures is appropriate for their organization. The cowardly way out is to accept the status quo when resistance to change inevitably arises. The ability to adapt and improve has become an important organizational survival skill in the twenty-first century. The highway into the twenty-first century became littered with roadkill in its first few years as countless businesses were unable to adapt rapidly to fast-changing economic conditions.
It is unrealistic to believe that the remainder of the new century will be kinder to businesses that are not nimble at adapting. Change needs to be seen as an opportunity more than a threat. For this to happen, management needs to clearly outline its vision for dealing with the changes being encountered. Statements of vision are of limited value if they are not understood throughout the organization. Management has no choice but to make constant, constructive change an accepted part of the culture. One reason this will be difficult is that resistance to change often starts at the top. This is particularly true where change in the form of technology-based projects is concerned.

Making a tough self-assessment of the current ability of the organization to manage change is an appropriate first step. The results need to be used to help decide how ambitious individual projects should reasonably be. The right question is not how much change the organization will comfortably accept but rather how much it can reasonably tolerate. Do not hesitate to push for more change than has occurred in the past or than the most conservative elements would like. Slowly increasing tolerance for absorbing positive change needs to become a goal for every organization—it will increasingly be a survival skill as the rate of externally driven change increases in the future.

Imitate, Don’t Invent

The observation was made earlier that the most successful complex systems imitate the process of evolution. One characteristic of evolution is the reuse of proven designs. If Darwin is to be believed, Mother Nature’s recipe for innovation is a simple one: Take a process that is known to work, rapidly make incremental improvements to it, and repeat endlessly. When examining the history of technology, most focus is on the giant leaps forward: the printing press, gunpowder, the light bulb, telephones, transistors, and the like. These are the rare exceptions, and time has made many of them into legends. The way most progress is made, however, is through an endless series of incremental improvements. When a dramatic breakthrough does occur, it is usually the result of a great deal of trial and error over a long period of time. The common saying, “If it ain’t broke, don’t fix it,” could be modified to say, perhaps less eloquently, “Keep using what works best, then slowly refine the rest.”

How does this apply to limiting the scope of IT projects? It encourages taking many small steps forward and constraining the amount of invention attempted during each step. The best way to limit the amount of invention without sacrificing quality or capability is to incorporate as many proven elements into the design as possible. A clarification may be appropriate here. Something can represent a radical new idea for a given entity but still not represent an invention because it has been proven to work elsewhere. The risk, time, and cost associated with introducing a business process change will be reduced if it has already been proven to work in a similar environment.

In academic circles, it is considered unethical to plagiarize. Every educated person was taught at some point that it is wrong to copy the work of others. This training makes us uncomfortable reusing the ideas of others when deciding how a process should work. Overcoming this emotional barrier is an important step toward scope control. In business, it is completely ethical to reuse proven ideas as long as patents or copyrights are not being used without payment of royalties. A polite phrase has come into vogue for doing so—best practices. This wonderful term has made it politically correct to imitate what others have done successfully. It helps remove the irrational feeling of dread that Miss Fenstermacher, your old English teacher, will keep you after class for copying someone else’s work.

From a software standpoint, imitation means making as much use of existing, proven code as possible. If a proven application package exists that solves your problem, take it, get it working quickly, and then rapidly improve the ways in which it is used within your organization. This process is so important that Chapter 9 focuses on how to select packaged software. It is becoming increasingly common for a legitimate need to arise for which packaged software has not yet been created. In these cases, there will be a need for some level of invention.
The trick will be to focus efforts on innovation as narrowly as possible. Making effective use of object-oriented (OO) technology is a good solution in this case, as described in Chapter 15. The Internet has become a great facilitator of imitation because it is now easier than ever to discover a great deal about what others are doing in even the most esoteric of realms. The Internet has also spawned the formation of ad hoc groups of people from divergent organizations who use it to work together on problems of common interest. The development of the Linux operating system is the most obvious example of this New Age phenomenon.

Create a Single Point of Accountability

Highly successful projects often share an important trait—strong leadership by a single-minded person. It also helps a great deal if that person has a passionate belief in the importance of the effort. Selecting the right person to lead a project and then making that person fully accountable for its success is the last and possibly the most important way to control project scope. Project leadership and accountability are so critical that they are the subject of Chapter 8. Before exploring accountability in more detail, it is appropriate to summarize the role management plays in controlling project scope.

Management’s Role in Scope Control

Senior management has a vital role to play in limiting project scope. Some of the things executives must do include:

■ Ask the right questions.
■ Create an air of urgency.
■ Examine exactly how project benefits will be delivered.
■ Make realistic assumptions.
■ Reward progress, not perfection.
■ Forgive mistakes when the time lost was short and the cost low.
■ Assign accountability to a single person.

The workers within every organization listen carefully to the things senior executives say, but they pay much more attention to what they actually do. It is therefore essential for senior management to understand and actively apply the RITE principles themselves rather than just give them lip service.


Executives can make an enormous difference in the outcome of projects by asking the right questions at the outset. Instead of asking, “What is the best solution to this problem?” executives need to ask, “What can be done by a specific date to help alleviate this problem?” A very different answer emerges when the better question is asked. If management were easy, anyone could do it and the pay would be lousy. One of the things that make it so hard is the need to constantly balance conflicting objectives. It is important to convey a sense of urgency whenever projects are undertaken. Management expectations cannot, however, be seen as outrageously unrealistic. The trick is to demand that measurable progress be made in a short period of time while not insisting that every aspect of a problem be solved at the same time. Some of management’s most important roles are to set goals, measure achievement, and pass out rewards and punishment. This process can only work with IT projects if there is a belief that the goals are achievable, the measurement process is understood, and there are positive incentives for success. It is also not possible to reward achievement and punish failure if it is not clear who is responsible for them.

Bite-Sized Pieces

Organizations that become effective at controlling scope rarely have individual projects that can be called out as huge successes. H. Muehlstein & Company provides a good example of how the cumulative effect of many modest successes can be quite dramatic. Muehlstein distributes large quantities of plastic resin throughout the world and is the leading dealer in “wide spec”—resin that does not meet standard specifications. Having accurate, complete, and timely information is a matter of survival in a low-margin industry. Understanding this led Muehlstein to completely upgrade all of its computer applications starting in the mid-1990s. By using the RITE Approach, Muehlstein has become a technology leader within its industry. None of the individual projects that made this possible were particularly dramatic. The cumulative impact of many moderate successes is what has made this case special.


The first step involved replacing a number of outdated applications with an integrated ERP system. A series of modest projects deployed the new applications gradually. A new sales force automation system was also built and deployed in stages, followed by a basic business intelligence system that has been enhanced numerous times since its first installation. Each of these projects faced challenges, but all were completed successfully within a matter of months.

When interest in the Internet exploded in the late 1990s, Muehlstein’s management needed to proceed carefully. A number of well-financed, high-technology start-up companies noisily announced plans to take control of the market Muehlstein had led for almost 90 years. Muehlstein was tempted to follow the herd and raise tens of millions of dollars in an attempt to build the ultimate plastic resin exchange. Instead, Muehlstein carefully monitored the actions of the potential new competitors and stood ready to react quickly.

Muehlstein started by allowing customers to have direct access to its internal system through the Web. Enhancements were gradually added, including railcar tracking, reordering, and a highly innovative approach to negotiating sales terms. The projects that made this possible cost a tiny fraction of what the high-profile start-ups were investing but ended up having a much larger impact on the market. The threat from the Internet start-ups is now gone, leaving Muehlstein in its traditional position of market leadership.

The techniques that allowed Muehlstein to succeed should sound familiar:

■ Senior executives personally led each project.
■ No project took longer than a year—most much less.
■ A long-term vision was carried out through numerous modest projects.
■ Relatively little was invented; most ideas had been proven elsewhere.
■ Outside service vendors did much of the work, but under careful control.

Customers, suppliers, and competitors now consider Muehlstein a technology leader. The combination of unmatched industry knowledge and the RITE Approach to IT projects made this possible.


Case History: The Birth of the PC

Market conditions often force organizations to control project scope, with positive results. A well-known example of this was the development of the IBM personal computer (PC), a project that had an enormous impact on the structure of the IT industry today. The team IBM assembled in 1980 to create the PC did not have time to stop and think about its place in history. A clear threat needed to be handled. IBM’s ability to dominate the commercial computing market was being challenged by a highly unlikely competitor. A young start-up, Apple Computer, had proven that computers could be small, inexpensive, and simple enough to be used by nontechnical people. Suddenly, IBM was widely viewed as clinging to an outdated view of computing.

Up to that point, it had always taken IBM at least five years to develop a new computer. With Apple already firmly in control of an important new market, there was no way IBM could spend even two years developing its own alternative. The first critical decision made was therefore to take the necessary steps to have a product on the market in one year. The implications of that seemingly obvious decision were monumental. If the PC were to reach the market in a year, the necessary building blocks could not be designed and built internally as was normally done. IBM had the expertise to build the microprocessor, memory, disks, operating systems, and other major components needed, but time did not permit doing so. The plan was therefore to rapidly create a first-generation product using components purchased from sources that could meet the ambitious schedule. The two most important vendors IBM chose to buy technology from—Intel and Microsoft—were relatively unknown at the time. IBM planned to upgrade the product over time and replace the components initially bought outside with IBM-built technology. The PC that IBM had cobbled together quickly took the market by storm.
It soon attracted a critical mass of independent developers and some great software products such as the Lotus 1-2-3 spreadsheet. IBM was on its way to becoming the leader of what would eventually become the largest segment of the computer hardware market.

The launch of the PC was managed brilliantly by departing from IBM’s traditional way of doing things. However, once the PC business became highly successful, IBM reverted to a more traditional approach to development. A “great leap forward” strategy was put in place. It involved taking back control of the PC processor through a joint venture with Motorola and Apple to create the PowerPC family of microprocessors. Another element of the strategy was to push the Microsoft genie back into the lamp through the development of OS/2. After falling back into the traditional approach to development, IBM gave back much of the ground gained by breaking the rules.

IBM did not fail to recapture market leadership because of a lack of talent, ambition, funding, or vision. The large teams of highly qualified experts that IBM put together created specifications for products that were superior to those offered by the competition. The PowerPC microprocessors and the OS/2 PC operating system each became excellent, award-winning products. They also each failed to gain sufficient market share to represent a threat to Intel or Microsoft. The primary reason for their failure was that by the time they came to market the opportunity they were seeking to exploit had been taken by a more nimble competitor. Everything was well managed except the time to market.

When IBM was forced to limit the scope of the project to invent the PC, the results were glorious. After it reverted to the time-honored approach to development, much of the ground that had been gained was lost.

Summary

The adoption of a strategy for scope control is one of the most important ways to increase the rate of project success. The RITE Approach to scope control has six elements:

1. Let time determine scope.
2. Control the use of resources.
3. Limit the size of the design team.
4. Gauge your ability to absorb change.
5. Imitate, don’t invent.
6. Create a single point of accountability.


The way in which the RITE Approach to scope control is put to use is to:

■ Adopt the evolutionary view—not everything can or should be done at once.
■ Assemble a small team of the best available people.
■ Rapidly develop a long-term vision.
■ Determine which resources are limited, especially people.
■ Estimate how long the environment is likely to remain stable.
■ Factor in the organization’s ability to absorb change.
■ Establish a target date.
■ Create plans for a project that can be completed by the target date.

The project plan should maximize near-term impact, represent a step toward the long-term vision, meet the target date, and not require excessive resources. Risk, cost, and time should be limited by using best practices proven elsewhere and by making maximum use of existing software modules—either complete application packages or reusable components.

Chapter Eight

Who Is Accountable?

In an effectively managed entity, all participants have a clear understanding of their role and what is expected of them. Managers are given specific functions to oversee. Most importantly, they are held accountable for results, good or bad. For example, the senior vice president of sales and marketing in a manufacturing business is accountable to her peers and to the CEO for total revenues. Even though she does not have complete control over some of the forces that impact sales (for instance, the state of the economy, the action of competitors, or the quality of the company’s product line), she must still take full responsibility for the level of revenues. If challenges arise, she is expected to come forward with solutions. If goals are not reached, she must explain why. When they are met or exceeded, she is allowed to take credit.

Full accountability is a tough-minded concept. Those who are accountable cannot make excuses. If the things for which they are accountable go well, they are rewarded; if not, they pay a price. It works because it motivates a single person to see that key goals are met even in the face of adversity.

Organizations that believe in the concept of full accountability frequently are at a loss as to how to use it when IT projects are undertaken. An IT project can involve many divergent groups. Multiple functional areas will often experience a major impact from and contribute toward the resulting systems. In addition, the internal IT function usually plays an important role, as do IT professionals brought in from outside. If five or six different groups are involved, to whom can senior management assign accountability? The answer to this simple question can have a profound impact on the success of the effort. Unfortunately, very few organizations come up with the most effective answer.

Too often, accountability for IT projects ends up being divided. When more than one person is accountable, in effect no one is. Divided accountability encourages political behavior and discourages creative problem solving. No single individual feels strongly motivated to do whatever is necessary to meet the goal. Instead, each works to make sure that the problem is not seen to be his or hers. Perhaps the most damaging aspect of divided accountability is its tendency to contribute to excessive scope.

Let’s look at a typical scenario. The Giant German Auto (GGA) company insists that its supplier, Quality Gizmo, use the Igistic supply chain management system to link the two companies’ ordering and payment systems together. Because of the special skills required, outside technical experts will be needed. The Quality Gizmo project must draw on people who work for the sales, manufacturing, and finance departments as well as the IT staff, the outside service firm, and even GGA. The Quality Gizmo person best qualified to manage the project day to day works for the internal IT staff. The outside service firm will do the greatest amount of work. The largest impact will be on the order entry, production scheduling, and accounts receivable functions. With all this in mind, to whom should accountability for the project be assigned?

What if the normal choice is made, and the internal IT project manager is put in charge? Quality Gizmo must take on the project or risk losing its largest customer. As a result, approval to develop a project plan is rapidly granted.
The struggle to determine exactly how to integrate the two companies’ systems then begins, with participants each having their own agenda. The project manager has little choice but to act as a traffic cop and make sure that the needs of each faction are met. The service firm encourages each of the user organizations to specify everything they might ever need. The result is an expensive and time-consuming project that management cannot reject without serious consequences.


It is not realistic to ask IT professionals from either the internal staff or an outside firm to make the necessary trade-offs. The IT people must be held accountable for the cost, quality, and timeliness of their work but not for deciding exactly what should be done. Their bias toward technologically elegant solutions also prevents them from being impartial. When accountability is divided, it is normal for representatives from the user community to be responsible for the creation of specifications and for IT professionals to be responsible for creating systems that will meet the users’ needs. When problems occur, each party has an ironclad excuse as to why it was not at fault. Users can say that their job was to describe what was needed. It was not their fault that the resulting system took a long time to create, cost a great deal, and did not produce the expected benefits. This is especially true if unplanned changes contributed to the failure. However, the IT people working on a project can claim that they simply attempted to give the users what was requested. It was not their fault that what was asked for turned out to be expensive, time consuming, and difficult to develop and implement. They also could hardly be held accountable for environmental changes that no one anticipated. Divided accountability is one of the major causes of project scope growth. Users help develop the specification and are then usually asked to sign off on the final result. They are normally admonished to be sure not to leave anything important out. IT then has the responsibility for creating a solution that meets user requirements. Most IT professionals view themselves as solution providers. They are trained to look for problems and are pleased when they find them. So much the better if what is uncovered is an intricate nest of interconnected problems. The larger and more complex the set of problems, the more challenge they will face finding and building a solution. 
The best of them find this highly invigorating. They are also conditioned to provide users with whatever they want. People whose jobs involve developing solutions for improvement usually believe that a better world is possible. They are optimists by nature. As optimistic people, they downplay the dangers and problems that complexity will bring. Many relish the challenge that a larger and more complex project provides and believe that they have the skill to handle whatever problems arise. All this helps create a strong gravitational pull toward greater complexity.


On the other hand, when users are asked to define their requirements for a new system, a mild panic often sets in. They are comfortable in their knowledge of the current process and its strengths and weaknesses, but they fear that what they request in a new system will turn out neither to work nor to represent an improvement. People who are skilled at using an existing process are rarely good at determining what should be done to improve it. Some try to overcome this by making as many suggestions as they can just to be safe. Finding ways to simplify business processes takes a kind of insight and skill that few people possess. It is especially hard to determine how to remove the complexity from a highly familiar process because those who know it best may not appreciate how complex it really is. Simplification or increased automation also can eliminate the jobs of likable associates. For all these reasons, divided accountability encourages users to put into the specification everything they can imagine ever wanting. Since building the resulting solution is not their problem, they need not be concerned with how long it will take to do so. They may not care that the resulting solution will not arrive quickly because its arrival will represent a change to which they must adjust. The combination of divided accountability and a philosophy that views projects as one giant leap forward is deadly because added pressure is created to make sure that the specification is complete in every respect. This problem is often compounded in situations where users have been forced to wait some time for IT resources to be allocated to meet their needs. They become concerned that anything that is left out of the specification will become a follow-on project that will fall to the end of the line and therefore may not be addressed for a long period of time.

Who Should Be Accountable?

The best solution revolves around making a single person totally accountable for the success of the project. This person must come from within one of the functions that will use or benefit from the resulting system. Making someone from IT accountable is far less effective. It usually leads back to a de facto dividing of accountability when it is tried.

The best model we have seen starts at the top. Executives hold each person who works for them directly responsible for the cost, quality, and capability of the systems and processes that support their function. They are expected to apply the best available technology to continuously make their organizations function more effectively. Concrete goals related to process-driven improvement are set and measured as a routine part of the planning and management function.

Stated another way: Those who use systems should be totally accountable for them. This includes their design, creation, cost, quality, functionality, and value. Being accountable for these things does not mean that users actually do all the work that makes them possible. If asked, many executives would say that they already hold users fully accountable for information technology projects. In practice, few carry the concept as far as they should. Total user accountability is best put to use as follows:

■ Executives are responsible for the cost, quality, and effectiveness of all of the information systems that support the functions they oversee.

■ Accountable leaders are chosen from within the organizations that will make use of the resulting systems. No project should be launched before such a person is assigned.

■ Accountability extends to every aspect of the project including its scope, cost, timely completion, and success at delivering expected benefits.

■ Lack of technical knowledge by the project leader is not an acceptable excuse for failure.

■ IT professionals normally carry out much of the actual project effort, but any failure on their part remains the responsibility of the accountable project leader.

■ The accountable project leader reports progress to management, accepts responsibility for any shortcomings, and also gets full credit for whatever is accomplished.


If a project takes longer than planned, costs more than is budgeted, or fails to deliver promised benefits, then the person who is held fully accountable is the project leader. Accountability extends upward to the responsible executives. This is true even if the cause of the failure appears to revolve around the technology used. The project leader is not expected to personally handle the details of project management. An experienced IT professional can be used for this function, especially if the effort is complex enough to warrant the use of a project management application package such as Microsoft Project. This person can be called the project manager as long as the division of responsibilities is clear to everyone. The accountable leader does not need to be dedicated to the project on a full-time basis. It is rarely possible to get the best people to take on this role if it takes them away from their other duties. On the other hand, enough of their time must be made available to allow them to become deeply involved in every important aspect of the effort. Assigning someone full accountability for an important project and then overloading them with other assignments is a formula for failure. It helps a great deal if the project leader not only welcomes the assignment but also has a passionate desire to see the effort succeed. Others involved in the project can always sense the degree of personal commitment the leader has. The more senior the leader is within the organization, the better, providing that person has the necessary time and interest. Assigning total accountability to project leaders will be unfair at times when things occur that are realistically beyond their control. If complete fairness is to be a criterion, no one could ever be held accountable for anything. Using lack of complete fairness as a reason to reject the concept is a serious error. Without accountability there is often chaos. With it there is always some level of unfairness. 
It is the difficult task of those to whom the leader is accountable to judge what action to take if a goal is missed. Management always has the option of considering mitigating circumstances, but this in no way relieves the leader of being accountable. Total accountability works both ways. The leader can rightfully claim credit for any success, even when dumb luck or the hard work of others largely determined the outcome.


IT Accountability

A common concern about total user accountability is that it seems to let the IT professionals off the hook. However, if the concept is used properly, the opposite can be the case. Making project leaders from the user community accountable to senior management for all aspects of the systems that support them does not relieve IT of responsibility. It redefines IT’s responsibility. In simple terms, the role of the IT function is to support project leaders in a cost-effective and professional way.

The best measure of IT’s effectiveness is the level of satisfaction of those who use its services. If those who rely on the IT function are satisfied with the assistance they receive, then a good job has been done. Conversely, if those for whom systems are being developed, maintained, and operated are not pleased, then IT has not been successful. Satisfaction can be a subjective matter. Nonetheless, it is possible to set goals and measure results. The process is surprisingly simple—periodically ask those to whom IT services have been provided to rate the quality of those services. Reduce the ratings to numbers and compile and publish them. Compare the results with preset goals.

Under this philosophy, IT professionals are resources under the control of those who will use the resulting systems. The IT function is accountable for satisfying the needs of those who use its services. The level of satisfaction is measured and compared to preset goals. IT does not have direct responsibility for the outcome of specific projects, but any project failure is likely to be reflected in the way those who are accountable rate IT’s performance.
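The measurement process just described is simple enough to sketch in a few lines of code. The following Python fragment is a hypothetical illustration only; the group names, the 1-to-5 rating scale, and the goal values are invented for the example, not taken from the book. It reduces survey ratings to numbers, compiles them by group, and compares each result with a preset goal:

```python
# Hypothetical sketch of the IT satisfaction measurement described above:
# collect ratings from those who use IT services, reduce each group's
# ratings to a single score, and compare that score with a preset goal.
# All names and figures here are invented for illustration.

def compile_satisfaction(ratings_by_group, goals):
    """Average each group's 1-5 ratings and flag groups below their goal."""
    report = {}
    for group, ratings in ratings_by_group.items():
        score = sum(ratings) / len(ratings)
        report[group] = {
            "score": round(score, 2),
            "goal": goals[group],
            "met_goal": score >= goals[group],
        }
    return report

# Quarterly survey results (1 = very dissatisfied, 5 = very satisfied).
ratings = {
    "order_entry": [4, 5, 4, 3, 5],
    "accounts_receivable": [3, 2, 4, 3],
}
goals = {"order_entry": 4.0, "accounts_receivable": 3.5}

for group, result in compile_satisfaction(ratings, goals).items():
    status = "met" if result["met_goal"] else "MISSED"
    print(f"{group}: {result['score']} vs goal {result['goal']} -> {status}")
```

Publishing a compiled report like this each quarter, as the text suggests, turns a subjective matter into numbers that can be tracked against goals over time.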

Citibank Fixes Its Back Office: A Personal Note from David Andrews

The concept of total user accountability is one for which we cannot take credit. It was a technique I observed when working at Citibank in New York City. At the time, Citibank was in the midst of a long effort to make its back office operations more efficient. Over a seven-year period the company was able to reduce head count in the back office operations from approximately 7,000 to 3,000 while handling substantially larger volumes. The key to doing this was the relentless application of information technology to every facet of the operation.

The managers of each of the operational units competed with each other to see who could be the most effective at head count reduction through automation. They did so by making proposals to management for improvement projects. As IT resources became available, they were allocated to the manager with the project that promised the greatest measurable return. The unusual catch was that the manager was then accountable for delivering the benefits on the scheduled date regardless of what happened to the project. In a way, Citibank was using a variation of the timebox concept well before DuPont came up with the better-known version of it. In this case, exceeding the time limit meant losing funding and head count, not necessarily immediate project termination.

For example, the manager of the check processing department might propose the development of a new program to sort checks in a more efficient way that would allow for a head count reduction of 10. The IT department estimate was that three programmers would be needed for six months to do the development. If the project was selected, the programmers would be made available for six months. At the end of that time, the budget of the department and its approved head count would be reduced. If the project was delayed for any reason (for instance, stupid, lazy programmers were assigned), it did not matter—the budget was still cut.

This extreme level of accountability might seem harsh, but it was exceptionally effective. The IT professionals assigned to projects made sure that they fulfilled their commitments to the users who depended on them. Everyone understood that failure was not an acceptable option.
Users and the IT people helping them were highly motivated to avoid unnecessary changes in project scope. Anything that was not essential to meeting the project goal could be made part of a follow-on project. An important element of the success at Citibank was the incentive plan that went along with it. Those managers who
failed under this system lost their opportunity to stay in the game (most were not fired but were transferred to less attractive jobs). The ones who succeeded were rewarded with large cash bonuses. Payment amounts were tied to savings and often exceeded the manager’s annual salary. Those who succeeded were also given more responsibility. This was an important element of the system because managers were being asked to reduce the size of their organizations. The ultimate coup for a manager was to so completely automate a function that there was no longer a need for the entire department to exist. Managers would brag that in a few months the job they were doing would no longer exist. Those who could pull this off knew that they would be rewarded with an even greater challenge running some other part of the business (and with a huge bonus check).

Rewarding Project Teams

Good people will not want to be involved in high-profile projects unless there is a reward for doing so. Some organizations take a very direct approach and provide cash bonuses or stock options to key project participants. Few of the recipients complain when this approach is chosen, but in practice the rewards provided do not have to involve direct compensation to be effective. Government agencies or not-for-profit organizations may not have the option of compensating project participants directly. Even within public corporations, direct payments would often disrupt the philosophy of compensation in place. As a result, the most commonly used ways to reward those who have participated in successful projects are indirect. Effective indirect rewards include opportunities for advancement, positive job ratings, recognition by management, or even simply the chance to spend time with senior executives. When given in the right spirit, physical awards that commemorate the successful completion of an effort can be surprisingly effective. Items that people can display in their offices, including group photographs, can work well. Whatever their form, rewards should be
seen by the recipients as substantial if all project goals are met. There also needs to be a meaningful penalty if the effort is late or fails to deliver its intended benefits.

Accountable project leaders take the most personal risk and therefore should have the most to gain. It can be counterproductive, however, to heap praise on them and ignore the others who have quietly done most of the heavy lifting. An effective way to resolve this issue is to reward the entire project team as a group, with the project leader being the one who visibly does so. Any decisions about the relative merit of different participants should be made entirely by the project leader. Hopefully, the case for avoiding divided accountability has been forcefully made. It is not necessary to violate that principle in order to encourage everyone working on the project to feel personally responsible for its success. One key person must have the authority and motivation to make the trade-offs necessary for success. Everyone else must be accountable both to that person and to the organization for playing their roles effectively.

When creating a reward structure, it is important to understand that software developers often have special needs. Recognition of the value of their contribution by their peers and by management is especially important to them. They often get less positive recognition than they deserve because problems over which they have no control make their work appear flawed. One of many reasons why the best developers are hard to keep is that they become frustrated when they have worked harder and done better-quality work than their peers and have nothing to show for it. The reward system should also encourage leaders to complete projects ahead of schedule. Oddly enough, some organizations actually discourage early completion by accusing project teams of padding their schedules if they beat the plan.
The next chapter will outline an approach that makes it possible to regularly beat schedules.

Summary

Assigning total accountability to a single person improves the odds but does not guarantee the success of a project. Many other things
can go wrong. There will be times when accountability is given to a person who is not up to the task. The alternative of using the divided accountability model, on the other hand, is much more likely to lead to failure. The reason accountability is such an important issue is that, for projects to succeed, a workable mechanism must be in place for making necessary trade-offs. Making a single individual accountable for the outcome of the project is the best mechanism we have seen in practice for forcing those trade-offs to be made.

Chapter Nine

Using Packaged Software

Twenty-first-century organizations float on a sea of software. The many PCs that employees work with directly are crammed with operating systems, browsers, database managers, virus protection, and much more. The networks that connect them together are driven by another elaborate mass of software. Large numbers of server computers and their control software add more complexity to the mix. The sole purpose of this elaborate infrastructure is to run applications—everything from e-mail to office suites or accounting systems. In the early days of computing, applications were mostly homegrown. Now, a high and increasing percentage are packages. Much of the effort involved in IT projects revolves around the selection, customization, deployment, and integration of packaged software. Many of the ideas presented so far have centered on the early stages of projects when plans are formulated. It is now appropriate to look at the challenges that arise as the software needed to carry out the project plan is pulled together. This chapter will focus on the issues associated with selecting and using packaged software. Chapters 14 and 15 will cover the challenges of building custom software.

The Role of Packaged Software

The use of packaged software has become universal. Packaged PC applications are so widely used to create documents, spreadsheets,
and presentations that they are taken for granted. Packages are routinely used to manage functions as diverse as accounting, human resources, distribution planning, health care record keeping, insurance claim processing, and the control of cash machines. The challenge that many project teams face today is not whether to make use of packaged software but exactly how to do so. Packaged software is often advocated because it avoids the need to write custom software. This is an obvious benefit but may not provide the largest advantage. The greatest value of packages comes from adopting the business processes around which they are built. Good packages offer flexibility in the way in which they are used. At the same time, they are built around assumptions about how the function they automate will be performed. Previous users of the packages have proved that those assumptions work. The most difficult part of creating custom applications is not writing the code. It is determining exactly what the code should do—the way in which the business process that surrounds it will work. Packages reduce the time spent trying to conceive of new ways to perform necessary functions. They also reduce the risk that what is designed will fail to work in practice. Evaluating packages is the best way to discover how others have successfully performed the function being evaluated. Even in situations where it becomes appropriate to develop a completely custom application, evaluating how the best-of-breed packaged applications work is the appropriate first step.

Package Selection the Traditional Way

Before 1980, organizations that used computers wrote most of their own applications software. Doing so was expensive and time consuming, but they had little choice. The few packages that were then available offered limited capability and flexibility. The picture has changed as a vibrant industry of application developers arose and matured. The point has finally been reached where most organizations that need software look to see if a viable package is available before considering custom development. The transition from largely custom to package-centric computing has been so gradual that there has never been an ideal point to
stop and reexamine the way in which packages are selected and put to use. The same seemingly sensible line of thinking has therefore been in use for package evaluation efforts since the 1970s. That line of thinking mirrors the logic of the waterfall view of projects and thus brings with it many of the waterfall's limitations. The way packages are usually chosen can be boiled down to the following steps:

■ Develop system requirements.
■ Document what is needed in a Request for Proposal (RFP).
■ Send the RFP to all the viable package vendors.
■ Select the package that seems to best match the RFP criteria.
■ Create enhancements that overcome the package's limitations.

As sound as the logic behind this approach seems, it brings with it some unintended side effects that are illustrated in the following revisit to our favorite fictional company.

Another Adventure for Mega-Multi

After Mega-Multi Manufacturing completed the installation of new financial systems, it was clear that the manufacturing and distribution systems were also candidates for replacement. Having not yet heard of the RITE Approach, Mega-Multi invited back Pricey, Dellaye, and Thensome (PDT) to help out. The PDT partner’s new Turbo Porsche helped him get to Mega-Multi’s office in record time when the call came. PDT’s Niagara Method has been used countless times to help select the kind of enterprise resource planning (ERP) software that Mega-Multi needed. The three consultants assigned full time to the selection process knew exactly what to do. The project began with a rigorous effort to determine exactly what was required. This included documentation of the existing process and interviews with everyone involved. This effort took place over a three-month period at a cost of $180,000. The final deliverable was a highly detailed Request for Proposal (RFP). The next stage of the project was to send the RFP to the nine
software vendors whose products might possibly be a fit for Mega-Multi. These vendors were given a month to respond, and six decided to do so. Each was then given a day to present its proposal and run a demonstration of its software. Based on the written responses and the demonstrations, each vendor was rated on a point scale versus the criteria in the RFP. The three with the highest point total became finalists. All this took two more months at a cost of an additional $120,000 in consulting fees. The finalists were then asked to provide lists of references, and arrangements were made for site visits to two accounts for each finalist. The vendors were then given another opportunity to conduct on-site demonstrations, this time over a three-day period so that everyone involved had a chance to examine each module in detail. When all this was done, points were again awarded and totaled.

When the consultants presented the point totals at the meeting, a huge argument boiled to the surface. The consultants preferred a package from the same vendor that was used for the financial system. This was the package that they had the most experience installing, would obviously integrate well with existing systems, and seemed to most closely meet the requirements based on the number of points it was awarded. The representatives from manufacturing, however, greatly preferred a different package. They questioned whether the issues that mattered most to them were given an appropriate weight when points were assigned. The users representing the distribution function felt passionately that the third finalist was the right choice. They were mesmerized by the excellent demonstration of this product and thought one of the sites visited was very impressive. In their opinion, the points awarded did not reflect the suitability of the package they favored. After a month of bitter squabbling and debate, the issue was kicked upstairs for resolution.
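The squabble over point totals is easy to reproduce on paper: with identical raw scores, a different set of criterion weights puts a different vendor on top. The following sketch of a weighted-scoring matrix is purely illustrative; the vendor names, criteria, and every number in it are invented rather than taken from the story.

```python
# Weighted vendor scoring: the same raw scores can produce different
# winners under different criterion weights. All names and numbers are
# hypothetical illustrations.

raw_scores = {
    "Vendor A": {"finance": 9, "manufacturing": 5, "distribution": 6},
    "Vendor B": {"finance": 6, "manufacturing": 9, "distribution": 5},
    "Vendor C": {"finance": 5, "manufacturing": 6, "distribution": 9},
}

def winner(weights):
    """Return the vendor with the highest weighted point total."""
    def total(vendor):
        return sum(raw_scores[vendor][c] * w for c, w in weights.items())
    return max(raw_scores, key=total)

# The consultants weight the criteria one way, manufacturing another.
consultant_weights    = {"finance": 0.5, "manufacturing": 0.25, "distribution": 0.25}
manufacturing_weights = {"finance": 0.2, "manufacturing": 0.6,  "distribution": 0.2}

print(winner(consultant_weights))     # the consultants' weighting favors one vendor
print(winner(manufacturing_weights))  # manufacturing's weighting favors another
```

Each camp at Mega-Multi was, in effect, arguing for its own weight vector. The point totals settled nothing because the weights themselves were the real disagreement.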
The PDT partner then made a special visit and spent time talking individually with the executive in charge of each functional area and with the CEO and CFO. Based on his persuasive arguments, the executives approved the selection of the package that had been given the most points. By this point, a total of nine months had elapsed and close to $600,000 had been spent on consulting fees. The package that was finally selected came the closest to the
RFP on paper but still was missing a large number of the features that the RFP had identified as important. Consequently, the implementation plan needed to include an effort to rectify these weaknesses. The resulting plan called for the implementation of the ERP package over a two-year period. This included the creation of the necessary enhancements. The cost of the purchased software was estimated to be $2 million with an additional $8 million in PDT fees for their assistance in installing the package and building custom enhancements. Once again, members of Mega-Multi's management were asked to approve a far more expensive and time-consuming effort than they would have liked. What choice did they have other than to proceed? They had used a thorough and logical process to plan for the ERP implementation and to select the appropriate software. Is there something else they could have done to arrive at a better decision sooner?

What Is Wrong with the Traditional Approach?

It is easy for projects that select packaged software in a traditional manner to fall into a number of traps, including:

■ Asking users to articulate what they want before they have seen what is available.
■ Believing that the RFP will represent what is really needed.
■ Encouraging modifications that may turn out to be unnecessary.
■ Taking a long time to reach a decision.

The traditional approach makes the hidden assumption that users and their advisors understand better than the package developers how best to solve the problems at hand. This could have been a reasonable belief 20 years ago, but it certainly is no longer valid. The foundation for a better approach to selection is to adopt new attitudes about packages. The most important is to accept the obvious: A package contains the combined experience of many other organizations. Their collective wisdom is likely to be better than whatever any single organization can develop.


Good packages have used an evolutionary process to arrive at their current state. Mistakes and imperfections in early releases have given way to better approaches to each facet of the function they automate. The user community has exercised the software vigorously and demanded that necessary improvements be included in the package itself rather than in code they must maintain. As a result, there is normally a very good reason why the package performs each function the way it does. The use of an RFP assumes that the ideas that the project team formulates will be superior to those contained in the packages being evaluated. When the first stage of a package evaluation process is to gather and document requirements and write them down in the form of an RFP, this is exactly the assumption being made without stating it formally. It is arrogant and almost always wrong. The leading development methodologies offer excellent techniques for defining requirements. A widely used technique involves documenting exactly how the function being automated works in business models. Alternative ways of handling the process are then tested with use cases—detailed, concrete examples of exactly how the proposed process will work. When used properly by experienced business analysts, this approach can come up with good results, but it still represents making a determination of what the best solution will be before evaluating what is available. Earlier chapters have examined the complex dynamics of requirements gathering. One of the central problems is that those who understand the current process best are not ideally suited to propose the best way to change it. Using business modeling and use cases or similar sophisticated techniques to define requirements will not eliminate this problem. At times it will make things worse. 
The act of assembling the experts needed to apply these methodologies can create a bias toward the creation of elaborate requirements that no existing package could possibly meet.

The RITE Approach

If a software package represents the combined experience of many others, then why not assume that there is a valid reason why it works the way it does? If it is a good package, it will be built around a way
that has been proven to work. If many organizations similar to yours use a particular package, it is likely that it will be a good fit. Looking at it from another angle, there should be a good reason to not adopt the most widely used approach among similar organizations. Starting the package evaluation process with an open and positive attitude does not preclude making a decision to not follow the pack. Under certain conditions, not following will be exactly the right choice. Such determinations cannot be wisely made ahead of time, however. It is first necessary to obtain a clear idea of what others have done successfully before deciding what to accept or reject. The positive attitude must extend to all packages evaluated even though different ones will take opposite views as to how a specific function should be performed. If you start with the belief that there is a worthwhile reason why each package works the way it does, a great deal more valuable information will be gathered. An informed choice as to which approach is best can then be made. Under the traditional package selection process, the first time a package is viewed is usually during a formal demonstration conducted by an experienced presenter. Unfortunately, demonstrations are more a test of the skill of the presenter than of the package itself. They are a necessary part of the selection process but not the most important one. It is best to hold off on formal demonstrations until after visiting one or more sites where the software is in full use. The ability to probe during the formal demonstration will be greatly enhanced at that point. The most valuable and important part of software package evaluation is on-site visits. Such visits should be conducted very early in the evaluation process rather than later as is traditional. It is imperative that those making such visits go in with a totally open mind and a positive attitude toward what they are seeing. 
The starting assumption must be that there is a good, discoverable reason why each function is performed the way it is. Field trips to view packages under consideration are wonderful opportunities to gather intelligence. When on-site visits are impractical, phone interviews can be almost as effective. Visits to reference accounts frequently uncover invaluable ideas. Sometimes what is uncovered has nothing to do with the project itself but has great value.


When evaluating packages, it is important to look hard into the things that do not appear to be right, based on past experience. The characteristics of a package that are not what is expected often turn out to be its greatest strengths. For example, an important concept such as just-in-time delivery of production parts might appear at first to be a crazy idea to someone who had never seen it. Creating an RFP too early in the evaluation cycle encourages users to dismiss packages that differ from what has been specified. If a group of people go to the trouble of deciding what they think is best and then write it down, a psychological barrier is created that makes the consideration of a completely different approach difficult. If nothing else, it can be embarrassing to admit that something that you once insisted was necessary really isn't.

Do you remember the very first experience you had interfacing with a computer, a mouse, and a graphical interface? Chances are that you were fascinated but also highly frustrated. Working the mouse, knowing when to right or left click, opening pull-down menus, and dragging and dropping are not skills with which anyone was born. We all had to suffer a little before these skills became automatic. None of us would have included a mouse and a graphical interface in a specification for a computer if we had not previously used one. In this case, would it make sense to spend a great deal of time creating a specification before first examining what was available?

Package evaluation should be based on a few simple principles:

■ Examine what is available before deciding what you want.
■ The best test of a package is how effectively it is being used.
■ The least valuable test is how good it looks in a formal demo.
■ Packages are the best way to discover and imitate proven business processes.
■ Look for the reason behind every design element.
■ Encourage the use of packaged software exactly as it was designed.
■ Adapt to the package to the extent practical.
■ Invest heavily in training.
■ A long selection process decreases the quality of the effort.


The point when a package becomes operational is the middle, not the end, of the effort to make use of it. It is rarely possible to take advantage of more than a fraction of the capability of packaged software when it is first installed. Users cannot truly determine how best to use packages until they have lived with them for a period of time. No amount of analysis using the best available methodologies will prevent this from happening. Expectations need to be realistic when packages are installed. A perfect world will not suddenly appear the day the applications go live. Resources need to be allocated to examine packages after they have been in use for a number of months and then determine how to tune them to provide the value they are capable of delivering. The normal way of viewing and budgeting for projects makes this impractical.

The impending arrival of Y2K created an artificially high demand for packages in the late 1990s as many organizations chose to replace rather than repair old applications. Virtually all of the resulting projects were completed on time, making them successful by that measure. Still, a high level of disappointment remains because the expectations created during the package sales cycle have not yet been met. Once the packages became operational and could handle dates in the new century, most organizations moved on to other priorities.

The package vendors who have thrived have done so by adding enough functionality to satisfy the needs of a wide spectrum of users. They have also learned that the lack of a few minor features, even ones rarely used in practice, can knock them out of RFP-driven evaluations. Therefore, almost all of the successful packages on the market are bloated with features. How many of us understand or use even 20 percent of the capability of the software that we deal with every day—word processors, browsers, e-mail, spreadsheets, and the like?
Package vendors also relentlessly turn out new releases every year, further compounding the feature bloat issue. At the same time, this creates opportunity to obtain greater value from installed packages. The steady stream of improvements that come with new releases provides yet another reason why the effort is not finished when the initial deployment of the package is complete.


Getting Help Selecting and Installing Packages

Getting the information needed to make an effective package selection can be a challenge. Attempting to do it all with inside resources is too large a task in most cases. However, obtaining outside help offers both advantages and pitfalls. Outside experts can be very helpful, especially those with knowledge of all of the top packages and no strong ties to any one of them. Unfortunately, finding such experts can be difficult. Those who are available charge for their time by the hour. They are therefore not automatically motivated to help you make a decision quickly. One way to change that motivation is to identify an attractive role for them to play after package selection. Firms that provide IT industry information, such as the Gartner Group, can be an excellent source of information about available packages. These companies fiercely protect their independence and have an up-to-date opinion on all of the alternatives. Yet they often temper the opinions that are published for fear of offending important vendors or even triggering lawsuits. The best input from them will often come during one-to-one discussions with analysts rather than via the officially published reports. If you already subscribe to such a service, it should represent your starting point for gathering information about packages. Every package of consequence has an active user group. These groups are an invaluable source of information, especially as an indication of what types of organizations are currently satisfied users. Most of them have set up Web sites that contain a great deal of information that is useful during evaluations. A visit to a user group meeting can be invaluable if the timing is right. Once a package has been selected, it is normal to require a significant amount of outside assistance. The cost of such assistance is often 5 to 10 times the package price. The choice of the firm that will provide this assistance can therefore be more important than the choice of the package itself. 
There is nothing wrong with first choosing a service firm and then determining if the package that it supports is appropriate. This is especially true if the firm selected uses an approach to the project that is compatible with the philosophy of your organization.


If this approach is taken, it must be possible to walk away and look for another package if a careful analysis shows that it is just not a good fit. The best evidence of this would be if there were no similar organizations using it. Many package vendors provide installation assistance themselves. In a high percentage of cases, they will have a network of partner firms that are certified to assist in the installation of their software. It is important to invest time during the package evaluation process, evaluating the credentials of those who will provide this assistance. If possible, meet the actual individuals who will do the work. The increasing complexity of application packages has forced most IT service firms to specialize. Some of the largest firms have multiple practice groups that specialize in competing packages. The more common approach is for IT service providers to form a strong partnership with just one package vendor. It should be no surprise that few firms with the resources to support package installations are completely unbiased as to which package should be chosen. It is part of human nature to have biases. People like what they already understand. It is therefore important to know what experience each person involved in the evaluation process has. It is not possible or desirable to eliminate all bias, but it is important to acknowledge what is present. If you hire a duck as the architect for your house, the plan will surely include a pond. In most package selection situations, there are usually two or more packages that are more than adequate for the job. The best packages have become so comprehensive that the differences will often be meaningless to you. Of the two finalists under consideration, one might have 20 percent more features that will never be used, the second will have 30 percent more, and both will have everything reasonably needed. It is therefore nice but not essential to pick the one that is theoretically best for you. 

Package Selection Using the RITE Approach

The RITE Approach to package selection is quite different from that which is commonly used. The important steps in the process include:

■ Assemble the project team. It should be led by an accountable leader from the user community.
■ Line up outside help. Know the background and biases of those involved.
■ Set a goal to make a decision rapidly. Aim for 10 weeks or less.
■ Identify the most likely choices. Normally there should be three or fewer.
■ Visit reference accounts. Go with a completely open mind.
■ Adjust your thinking. Analyze what was learned from observing others.
■ View structured demos. Get your hands dirty and ask lots of hard questions.
■ Decide what matters. Discuss, debate, and document what you have learned.
■ Document your decisions. This is the point where a written specification is needed.
■ Look for a leader. One package frequently rises quickly to the top of every list.
■ Strike a deal. Negotiate an agreement to test the early leader without obligation.
■ Set up a conference room pilot. Gain a thorough understanding of the package.
■ Beat the stuffing out of it. Find all the weaknesses and limitations.
■ Be prepared to try number two. If the first choice seems weak, try another.
■ Evaluate implementation providers. Make sure qualified individuals will be assigned.
■ Plan for a staged implementation. Allocate resources to create the enhancements users identify after using the package for a period of time.
■ Obtain approval and buy the product.
■ Invest heavily in training.
■ Implement, let it settle, evaluate, and then upgrade.

The foundation of this process is the RITE Approach principle: Imitate, don’t invent. If there is a proven better way to do something, then put it in place rapidly. Live with it long enough to understand it completely and give it time to succeed. Then constantly look for ways to make it even better through an endless series of simple improvements that are rapidly implemented. One reason why software selection can take longer than necessary is that it can be more fun to evaluate software than it is to implement it. During the evaluation phase, vendor salespeople shower the decision makers with attention. Nice lunches and road trips abound. Interesting and articulate people make powerful presentations about how wonderful life will be once their software is operational. It is no wonder that software evaluations can go on for a year or more.

Case History: Mississippi Chemical

When farmers across the Southeast need fertilizer, many of them turn to Mississippi Chemical Corporation, a 50-year-old company located deep in the Mississippi Delta. The company is recognized as a low-cost, efficient producer. Tight cost control is an essential part of its daily operations. In 1994 Mississippi Chemical therefore needed to look for alternatives to its outdated, ineffective, and costly information systems.

One of the largest and best-known IT consulting firms was engaged to evaluate the company’s needs. The result was the recommendation that a fresh look at all of its business processes should precede the replacement of any software. The suggested project was estimated to take between two and three years and cost many millions of dollars in service fees.

After hearing a presentation that included some of the RITE Approach ideas, Mississippi Chemical decided to attack the problem in a faster and less expensive way, working with Andrews Consulting Group. A new goal was established to replace all existing applications with an integrated suite of packaged software within a year, if possible. The packages would drive any reengineering since they would be used with minimal modifications. The resulting project took just 13 months and met all of the objectives at a fraction of the cost of the proposed reengineering effort.

The new applications represented a dramatic improvement over the ones replaced. In addition, many valuable process improvements occurred as a direct result of the new systems. Best of all, the cost of the IT function dropped while the value provided increased. During the seven years since completion of the original project, it has been possible to build on the original foundation with a steady stream of further improvements. One of the most popular was a data warehousing system that enables end users to rapidly get valuable information out of the other systems.

Mississippi Chemical won’t win any awards for the way its systems apply leading-edge technology, but that was never the goal. The systems now in use provide everything their users need and are exceptionally cost effective. The budget for the IT function is currently less than 0.5 percent of annual revenues. Few organizations anywhere can claim to get more value for what is being spent. Quiet successes like this don’t make the front page of The Wall Street Journal. At the same time, they are exactly what most organizations really want out of IT projects—valuable benefits delivered rapidly with limited disruption.

Summary

The widespread use of packaged software has made the way in which it is selected particularly important. Unfortunately, the most commonly used approach is time consuming and may not lead to the best result. The greatest value packaged software brings is the business process around which it has been created. Package evaluation therefore creates a valuable opportunity to learn a great deal about how others operate that business process. An approach to package evaluation that focuses on learning as much as possible can lead to a faster and better selection. It also can yield additional information of great value about managing the business process itself.

Packages are not available for every possible situation. One of the direct results of the rapid maturing of the Internet is an increasing demand for solutions for which packages are not yet available.


At the same time, it is always appropriate to start a project with the assumption that others have already solved this problem and that finding out all that is possible about their experience is the best starting point. Selecting, installing, and using packaged software is never easy. Under the best of circumstances it is expensive, difficult, time consuming, and frustrating. Only one thing is worse—not using packaged software.

Chapter Ten

The Balancing Act

“If it were done . . . , then ’twere well it were done quickly.”
William Shakespeare, Macbeth

Shakespeare understood the value of completing a project quickly. Macbeth’s boss—Lady Macbeth—had suggested that he could make himself king by getting rid of the incumbent. Completing the project rapidly was critical to its success.

In the 400 years since Macbeth was written, the world has become a much more complex place. In business, reacting quickly to changing conditions has gone from being useful to essential. When it comes to taking advantage of the capability of advancing information technology, speed is important, but it is not all that matters.

The most important characteristic of a race car is the speed it is able to achieve. Races are won, however, by cars that balance raw speed with handling, braking, fuel consumption, stability, and reliability. Implementing a complex information system is more like designing a race car than a rocket sled—speed is essential but must be balanced with other important characteristics. The information highway is littered with the burnt-out wrecks of start-up technology companies that thought that being first was all that counted.

In the late 1990s, there was constant talk about doing things at “Internet speed.” Use of that term has disappeared as quickly as it gained popularity. It has become clear that the increasing rate at which technological building blocks are being invented has not yet repealed the laws of human nature. Organizations absorb change at their own pace, and that pace is usually slower than the rate at which new technology is being invented.

The Need for Balance

A large challenge faced by those who must approve project plans is to strike a balance between eight critical factors:

1. Scope. The definition of what the project will accomplish.
2. Benefits. The net value the project creates.
3. Time. The period between approval and the delivery of benefits.
4. Disruption. The negative impact of the change on the organization.
5. Cost. The profit and cash impact.
6. Risk. The probability that things will not work out as expected.
7. Resources. The talent and skills that will need to be dedicated to this effort.
8. Quality. The reliability, availability, performance, and user satisfaction of the solution.

Trade-offs always need to be made, but some managers try to avoid making them. They rely on tough-sounding proclamations that exhort the project team to work hard and give them everything they want. This approach rarely works, but it remains all too popular. The RITE Approach ideas outlined in previous chapters offer a framework for striking a sensible balance. The remainder of this chapter will explore putting the RITE Approach ideas to work in practice, as difficult choices need to be made.

Questions to Ask

Before approving a project, management needs to take a hard look at the current state of the organization. The best way is to ask a series of difficult questions:


■ What can we afford to spend on this project? Will funding need to be reduced before it is complete?
■ Who can spare time to work on this effort—management, the user community, the IT staff?
■ What will the available people realistically be able to accomplish?
■ Will outside help be needed? Where will it come from? How do the people being considered approach projects?
■ Who will be accountable? What level of skill and experience does this person have?
■ What are the risks if the project is never completed, costs more than planned, takes more time than anticipated, or is completed late?
■ Have the important concepts behind the project been proven elsewhere?
■ Have those who have tried something similar tended to be successful?
■ From where is the next major business disruption likely to come, and when?
■ What other risks can be foreseen?

A long effort to obtain precise answers to these questions is not necessary. In most situations, simply having an open discussion about them when the project is in its formative stages is enough. If senior management asks the right questions at the beginning, it has a powerful impact on the shape of the project that emerges.

More Speed, Scotty

In the original Star Trek television series, the captain of the starship Enterprise would regularly ask Scotty, the chief engineer, to give him more power. In almost every case, the response (delivered in a thick Scottish accent) involved a trade-off for Captain Kirk to consider: “I can give you more speed, Captain, but if I push her too hard the whole thing is going to blow.”

Captains of industry face the same dilemma every time a project is presented for their approval. They can blow the whole thing if more speed is requested without making adjustments elsewhere. There is rarely any debate that fast is better than slow. What is hard is making the necessary trade-offs between speed and the other factors. Three commonly used approaches to increasing speed are worthy of examination.

Attempting to gain speed by adding resources, especially additional people, is a common strategy. It is also the source of many failures. This is the approach toward which most organizations gravitate without thinking too much about it. For example, a project is proposed that is scheduled to take 18 months. During the approval process it becomes clear that the benefits the project will provide will be badly needed in support of a product launch that is planned for 10 months out. When asked if the project can be accelerated, the automatic response is a request for more people to be assigned.

Those asked to lead projects are inclined to ask for as many people as possible. Their motivation is both to increase the odds of success and to gain stature by overseeing a larger effort. However, the biggest problem with adding people is that as their numbers increase the net contribution that each one makes diminishes. There are no firm rules to go by, but in some cases assigning 20 people to an effort will only result in 50 percent more useful work than 10 people could accomplish. Assigning 30 might mean only 75 percent more useful output. Putting 50 people on a particular project might lead to its complete failure. The mindless addition of resources is usually costly and sometimes can be fatal. This is not a new line of thinking. It was clearly articulated over 20 years ago in an excellent book, The Mythical Man-Month, which correctly asserted that adding resources to a software development project would, in most cases, actually slow the effort rather than speed it up (Brooks, 1995).

A second popular approach to gaining speed is the executive edict.
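The diminishing returns from adding people described above can be sketched as a toy model in which every pair of people on a team adds communication overhead. The overhead coefficient below is our illustrative assumption; the book gives example figures, not a formula, so the outputs will not match its numbers exactly.

```python
def useful_output(n, overhead=0.04):
    # Toy Brooks's-law model: each of the n*(n-1)/2 communication
    # paths between n people consumes a fixed fraction of one
    # person's work. The 0.04 coefficient is purely illustrative.
    return n - overhead * n * (n - 1) / 2

for team in (10, 20, 30, 50):
    print(team, round(useful_output(team), 1))
# 10 -> 8.2 units of useful work, 20 -> 12.4, 30 -> 12.6, 50 -> 1.0
```

With these assumed numbers, doubling the team from 10 to 20 adds only about 50 percent more useful output, and a 50-person team accomplishes almost nothing, echoing the pattern the text describes.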
The project team is simply told by senior management that the work must be done at a faster pace without any indication as to how to do so. To the extent that priority continues to be given to a project throughout its life, executive edicts can have an incremental positive impact. They are of limited value as a primary approach to project management, however.

The third, and most effective, approach to limiting the time projects take revolves around reducing scope. The benefits of doing so have been discussed earlier:

■ Lower cost.
■ Reduced risk of failure.
■ Less disruption of the user environment.
■ Smaller requirements for people.
■ Less chance that changing conditions will make the project plan obsolete.
The disadvantages are more subtle but quite powerful. Less ambitious plans do not immediately attack every element of a problem, which often makes it difficult to obtain the support of important allies. Everyone likes the theory of controlling project scope, but when a feature that some people treasure is offered up for postponement, suddenly they begin to resist. Some people who work on projects have a selfish interest in greater complexity and thus cannot be counted on to support simplification. This can be the case with some outside advisors who stand to gain economically as scope increases.

If scope control is to work in practice, the forces that encourage increasing complexity need to be effectively resisted. Those whose favorite features need to be postponed must feel that their needs will be met later, but still in a reasonable time frame. Project managers need to be motivated to make rapid incremental improvements rather than grand, sweeping changes. Outsiders need to be carefully selected and wisely managed, recognizing that they may have their own agenda.

The most important and fundamental RITE Approach principle is worth restating here in different words: The best way to control project scope is to set a firm completion date and let it determine the scope of the project. This seemingly backward approach is the most effective way to balance time and scope. In order for it to work, meeting the end date needs to take priority over the availability of optional features. The measure of success must be the impact that the effort has on the organization within the time allotted.

The most successful organizations are those that consistently produce a steady stream of incremental improvements in the important things they do. The shorter the interval between them the better. What really matters the most, however, is the cumulative impact of these improvements. The first step toward becoming such an organization is to adopt the view that steady, evolutionary progress achieved through an endless succession of small improvement projects must become a permanent part of the culture of the organization.

Planning versus Doing

A great dilemma that project teams face is the amount of effort to put into planning and architecture. A strong case can be made for investing heavily in the creation of well-researched and highly detailed plans that describe exactly how the system being implemented will perform. This always sounds like an obvious thing to do.

Up to a point, this philosophy is correct. It is also appropriate to carefully plan exactly how all of the technological elements that make up the new system will fit together. This is especially true if sophisticated new software needs to be created. Successful efforts to create anything highly complex usually start with an organized and disciplined process that creates and documents the design. A highly experienced system architect is usually behind the successful creation of complex new systems.

The trap that many projects fall into is to do too much of a good thing. They become bogged down and can’t seem to stop designing and start doing. Obviously, time and money are wasted when this occurs. The more insidious problem is that the more time a project team spends on design, the greater the project scope becomes. This delays completion while increasing cost and risk.

There are many reasons why too much time is spent during the planning stage of a project:

■ Planning can be more fun than building. Every issue seems to need debate and investigation. It is easy to follow each thread to its end.
■ Vendors are nice as you evaluate their products. Some buy lunch.
■ Difficult choices seem easier to make after doing a great deal of research.
■ Early in a project less time pressure is felt since the final deadline is far away.

Projects often need to be forced out of the planning stage by management. The project team can literally become boggled by all of the options and possibilities. Requests by a project team to extend the time for planning and evaluation are a warning signal that the process being used may be broken.

The expectations that management puts on those doing the planning will have an enormous impact on the resulting recommendations. When executives talk tough to a project team, it often has the opposite of the desired effect. Instead of motivating the team members to work harder and faster, it can encourage them to become more cautious in an attempt to avoid making mistakes.

When trying to find the optimum amount of time to spend on planning, design, and architecture, the following guidelines can help:

1. Historically, the best plans are created in a limited amount of time by a small group of people. A broad rule of thumb is to never spend more than 90 days on planning and high-level functional design. Usually 30 days is enough. Limiting the number of people involved is also critical. More than 10 is usually fatal and fewer than 5 is ideal. Of course, the theoretical optimum is one.

2. The mandate must be to create a project plan that can be fully implemented in the time allotted. Postponing nonessential and time-consuming functions should be encouraged.

3. Management’s expectation must be progress, not perfection. If people on the project team think that they must get everything right and complete when the system is first delivered, then they will keep adding functionality until the associated complexity chokes the development effort. Conversely, if the primary measurement of success is how quickly they get something implemented that makes a difference, then they are less likely to overdesign.


Delivering Benefits

The single valid reason for undertaking an IT project is to create benefits. As organizations become caught up in the day-to-day mechanics of complex projects, it is commonplace to lose sight of the reasons why the project exists at all. Benefits can be clear and obvious, such as increased revenue, lower labor cost, more satisfied customers, less inventory, or higher product quality. In other cases, they are the avoidance of something negative such as lawsuits, the ire of a government agency, or even the loss of valuable data on January 1, 2000.

When seeking approval, the authors of the project plan usually present a benefit scenario, for example, “By providing field salespeople with a laptop-based order entry system, we will increase sales, reduce delivery time, and improve customer satisfaction.” The world of the benefit scenario, as described by its creators, is a happy place. Problems melt away under the hot sun of technology-based solutions.

One of the most important roles management must play is the constructive dissection of project plans. It does not help to take the chicken’s way out and cynically reject anything that is proposed. Accepting what is offered because it is well organized and presented is equally ineffective. The tough and necessary job is to constructively examine the benefit scenario in detail.

Failure to carefully examine benefit scenarios was a major cause of the hundreds of failures of high-profile Internet start-up businesses in 2000 and 2001. Often these companies were planning to achieve profitability at a future date through some form of magic. Their business plans focused on creating Web-based services that would attract large numbers of users. The benefit scenario was usually vague as to how the formation of these communities would be turned into profits. That was considered a detail that would work itself out later.
Important projects should be approved only after a consensus is reached that the benefit scenario is well understood, in detail, and that there is general agreement that it can work. This process needs to clearly articulate the key assumptions behind it. In the example of providing salespeople with laptop computers, it is essential that those approving the project develop an exact understanding of how and why sales will increase as a result of the new system. They also need to understand each of the assumptions that underlie the project plan.

The most important reason why sales are projected to increase is that salespeople will be able to make competitive quotes while with their prospects. Further analysis might show that the root of the existing problem is a policy that limits the authority of a salesperson in the field to make price concessions. The new system simply represents a mechanism for changing that policy. The option also exists to extend greater pricing authority immediately, without the need for laptops or software. It may still make sense to go ahead with the sales automation project for other reasons, but not for the sole reason of making competitive quotes.

Too often the review process for projects concentrates on issues of technology, resources, schedules, or risks. These things all need to be considered, but the primary focus of project evaluation must always be the benefit scenario. Even after a project has been approved and is under way, it is appropriate to constantly look to see if the assumptions on which the benefit scenario is built have remained intact.

Disruption: How Much Is Too Much?

When deciding how ambitious a project should be, it is necessary to take into consideration the ability of the organization to absorb change. The tolerance for change varies enormously from one organization to another. A small percentage thrives on constant change and is disappointed when it does not happen. At the other end of the spectrum are organizations filled with people who will fight hard to prevent even the simplest changes. The question is not whether management has a desire for change but to what extent those who will work with the resulting system want it as well. Reasons for resistance are commonplace:

■ Imprinting. Once people learn something, they are reluctant to abandon what they know for something new, even if it will be better once learned.
■ Power. Whenever an organization changes the way in which something is done, it alters the relationships between the individuals and their perception of relative power and importance. Those who feel they are losing power or stature can be counted on to resist the change.
■ Resentment. It is quite common for those who must live with a new system to develop a personal dislike for the people who are assigned to create it for them. Sometimes it is a general distrust of outsiders. Often it is because it is easier to dislike a person than a thing—the new system that is disrupting their work lives. Occasionally it is because of jealousy over appearance, financial status, or access to management.
■ Fear. Some people feel uncertain about their ability to function effectively after the change.

The point is not to avoid taking on ambitious projects that change the status quo just because there will be resistance—there always is. The challenge lies in finding the right balance between the benefits that change will bring and the disruption that it will cause. In most cases, just a few elements of the new process are the most troublesome to the organization. Often the most disruptive elements of the new system are things that are not critical to the success of the project. It is best to give the people who will be impacted by the system a way to voice their concerns. Just listening will help. Then look for ways to reduce the disruption without harm to the overall project goals.

A number of years ago one of our clients was preparing to go live with a new suite of ERP applications. There was strong resistance by many people who were avoiding training sessions on the new systems and were generally uncooperative. The project team had done everything possible to make the transition easy and to gain broad support. During the planning stages, there had been little disagreement that the new systems represented a vast improvement over those being replaced.

Finally the real reason for the resistance came out: Once the new systems were operational, they would remove the last barrier before the company would move its office to a new location. The people who were resisting lived very close to the current office. The move would increase their commuting time by two hours per day.


The problem was not the new systems but the location of the new office. The key point is that it is appropriate to make an assessment of the level of resistance (and likely causes) before determining how aggressive a project should be. The project plan itself should deal with the issue of how much resistance is likely and from what sources. Other ways to reduce the level of resistance to change include:

■ Make the accountable project manager someone from the user community who its members know and respect (and who will remain in the community when the project is complete).
■ Budget generously for training and for end user support.
■ Create a culture where incremental improvement is expected and welcomed.
■ Visibly reward those who successfully implement incremental improvements, especially small victories that were achieved quickly.
■ Communicate the reasons for the change to those whose work environment will change the most in terms they can easily understand.
■ Accept that not everyone will embrace the new approach. Do not let those whose resistance cannot be overcome limit what is accomplished.

Controlling Risk

Risk is a very hard thing to measure or quantify, much less control, but it is still important to try. The best way to minimize the likelihood of failure is to control project scope and the elapsed time to implementation. There are, however, practical limits to how far this philosophy can be taken. Frequently, it is appropriate to undertake projects that involve significant risk, especially if the rewards of success are high enough. The common causes of project failure have already been established. They include:

■ Excessive scope.
■ Changing conditions.
■ Invalid assumptions about the environment or requirements.
■ Resource limitations.
■ Organizational politics.
■ Changing priorities.
■ Immature technology.

There also have been many examples of projects that fail simply because the necessary technology could not be made to work. Our experience indicates that pure technology issues are rarely the primary cause of failure. When they are, the problem almost always could have been prevented if the project scope had been kept a little simpler.

Risk can never be eliminated, but there are ways to control it. One of the most effective techniques is obvious but infrequently practiced: Do the hardest things first. The natural tendency of project teams is exactly the opposite. They start by working on the elements of the project with which they are most comfortable in the hope that inspiration will come before it is necessary to meet the big challenge. It also helps to identify the most important underlying assumptions and find a way to prove them out as rapidly as possible.

Many recent examples are available where high-technology start-up companies failed to do this. A notorious one is Pets.com. It was founded on the assumption that pet owners would be interested in purchasing items over the Internet. No serious attempt was made to prove the fundamental assumption behind Pets.com before a huge investment was made in a high-profile launch of a Web site. Long before the entrepreneurs were able to find a workable formula for building a profitable Web community of pet owners, the considerable amount of cash they had raised had been spent on Web site development and television advertising. Less ambitious competitors have proven that a successful business can be built around the creation of a pet owner Web site. A lower-profile test of the fundamental assumptions behind Pets.com could have been carried out fairly rapidly at a small fraction of what was actually spent. That test would have either allowed Pets.com to gracefully give up early or, quite likely, to alter the plan until it worked.

Testing the underlying assumptions can be called proof-of-concept prototyping. It is the best way to reduce risk when something completely new is being tried:

■ User resistance is reduced if an idea has been proven to work on a small scale.
■ Willingness to try innovative things increases when risk and investment go down.
■ Vulnerability to budget cuts is less of an issue in a prototype scenario.
■ Dependencies and conflicting requirements are identified early.
The reasons why proof-of-concept prototyping is not more popular are largely political: Management is often in a hurry to get the benefit of the fully implemented system. ■ The culture often does not support experimentation. Those proposing projects must commit to making them work. ■ Project teams prefer to obtain approval only once. The prototype model requires two or more trips to senior management. ■ No one with experience conducting prototypes may be available. ■

Prototyping is sometimes resisted as an unnecessary extra cost. In practice, including a prototype phase in a project almost always reduces the total time and cost of the project.

Summary

One of many reasons why IT projects are difficult is that a large number of factors need to be kept in balance. This leads to the need for someone to make the necessary trade-off decisions. Doing so is a management function and not something that can be handed off to the internal IT staff or outside service providers. One of the best ways management can make sure the best decisions are made is to ask the right questions at the beginning.

The speed with which projects are carried out is important, as long as the result of doing things quickly is the rapid arrival of worthwhile benefits. Rushing to finish an interim stage—or toward failure—accomplishes little. Effective IT project management is all about risk reduction. In addition to applying the scope control techniques that are an integral part of the RITE Approach, it is appropriate to take on the hardest tasks first and to test critical assumptions early in the effort.

Chapter Eleven

Using Outsiders Wisely

There was a time when car owners could tune their own engines and do much of the necessary maintenance themselves. During the last 20 years, however, the increasing sophistication of automobiles has made it largely impractical for owners to do much more than put air in the tires. There also was a time when most organizations handled important IT projects largely on their own. That era has likewise ended due to steady increases in the complexity of the software that most projects depend upon. It has therefore become the norm to make some use of outside assistance when undertaking IT projects. Dependence on outside service providers is certain to increase over time.

There are many reasons to use outside people to help with IT projects:

■ Extra pairs of hands. Few organizations can afford to keep all the people on staff needed to handle the highly variable workload that projects create. When the necessary human resources are not available internally, the only practical choice may be to bring in outsiders.
■ Special skills. Almost every project needs people with special skills who are not available internally. For example, the first attempt to develop software using object-oriented programming can be extremely difficult without the aid of experienced people.




■ Process knowledge. Most projects are about business process improvement. The internal staff may understand the existing process but have no clear idea about how to make it better. Experts who have helped many other organizations solve the same problem can be invaluable.
■ Packaged offerings. The most beneficial IT projects often involve using a prepackaged offering from an IT service provider. A good example would be a complete business intelligence system that includes a dedicated server, software to extract and load data from existing systems, and analysis and report generation tools. Such systems can often be put in place in a matter of weeks.
■ Fresh perspective. The people who work inside organizations are good at operating existing processes. They may not have the time or aptitude to understand broad industry trends, new technologies, or vendor offerings that could be of significant value. The advice of qualified outside experts can be highly valuable when used effectively.
■ Variable cost. Outside help is a variable cost that can easily be cut when business conditions make it appropriate to do so. Trying to keep the right mix of skilled people on staff to meet unpredictable demand for projects is expensive and difficult.
■ Organization and discipline. Good service providers apply disciplined methodologies to the projects in which they participate. Internal resources often have limited experience in organizing and managing projects.

Obtaining these valuable benefits is not always easy. There are disadvantages as well, such as:

■ Price. The cost per hour of high-quality outside people always seems to be higher than it should be.
■ Motivation. Outsiders protect their own profitability before maximizing client benefit.
■ Learning at your expense. Outside people obtain valuable knowledge that leaves when they do.
■ Sensitivity. Outside people do not automatically appreciate the unique culture of their clients.




■ Bias toward larger projects. Outsiders will frequently prefer projects that use a great deal of their time.

Many ways in which projects can get off track have already been discussed. Yet another is to use the wrong approach when obtaining outside assistance. Fortunately, there are a few simple techniques that can be applied that will help better align the needs of clients and those who provide IT project assistance. Before offering these suggestions, it is necessary to help potential buyers of IT services better understand how providers run their businesses.

The Business of Providing IT Assistance

Providing IT project assistance is not an easy business. From the provider’s point of view, the expectations of clients can be highly unrealistic. Clients want to know exactly what a project will cost and what benefits will result long before the information needed to make a reasonable estimate is available. Many balk at paying the hourly rates necessary to maintain a staff with the high level of expertise that is expected. Other ways in which some clients act unreasonably include: Wanting to make endless changes in plans without an increase in cost or time frame. ■ Using the outsiders as the scapegoat when anything goes wrong. ■ Expecting clairvoyance—an ability to understand things not directly communicated. ■

Service providers have adopted a number of strategies that have allowed them to prosper under these challenging circumstances. Unfortunately, some of the most successful techniques lead to projects that violate many of the RITE Approach principles.

The 1-5-10 Business Model

The most successful business strategy among providers of project assistance has been to focus on large engagements. Working on
larger projects allows them to offset the high costs of selling, developing, and maintaining a pool of people with the necessary skills and of dealing with clients who blame them when things beyond their control go wrong. For these and many more reasons, they look for projects that will generate millions of dollars in revenue and take a year or longer.

When business is good, they politely determine during the sales cycle whether the project being considered will be large enough to be profitable. If not, many will suggest that the client wait, handle the project internally, or engage another firm. When service providers have underutilized resources, it becomes tempting to be less forthright and to attempt to turn whatever opportunity is presented into a profitable engagement.

One of the largest firms strives to attain what is internally called the 1-5-10 rule on each engagement. The fee for the initial phase of a project, where the broad scope of the effort is defined, is the 1 in the equation. The second phase of the engagement is where a detailed design of the proposed system is created. The formula calls for this phase to generate five times as much revenue as the first. The third phase of the engagement is where the resulting system is built and implemented. According to the 1-5-10 rule, this should produce 10 times the fees generated in the second phase, or 50 times the first phase. The formula might thus more accurately be called 1-5-50.

To illustrate, if the initial planning phase were to cost $100,000 (and few cost less), the goal for phase two would be $500,000, and the phase three goal would be $5 million. The total that the customer would spend over the course of the project under this example would thus be $5.6 million. It should be noted that these goals are not shared with the client at the start of the engagement, nor are they always met.

There is nothing wrong with spending millions of dollars on IT improvement projects.
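The 1-5-10 arithmetic above can be reproduced in a few lines. The $100,000 phase-one fee is the book's own illustrative figure; the 5x and 10x multipliers come from the rule as described:

```python
# Illustrative sketch of the 1-5-10 fee model described above.
# The phase-one fee and the multipliers come from the text; any
# real engagement would of course differ.

phase_1 = 100_000          # initial scoping phase (the "1")
phase_2 = 5 * phase_1      # detailed design: 5x phase one (the "5")
phase_3 = 10 * phase_2     # build and implement: 10x phase two (the "10")
total = phase_1 + phase_2 + phase_3

print(phase_3 // phase_1)  # 50 -- why "1-5-50" is the more accurate name
print(total)               # 5600000 -- the $5.6 million total fee
```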
For many large entities even the most modest of improvement efforts will cost this much simply because of the scale of the enterprise. For example, upgrading an application within a bank with thousands of branches spread across the globe is certain to be expensive regardless of how effectively the effort is managed. The greatest problem with project engagements that end up
following the 1-5-10 ratio is that the hidden agenda to meet the profit model can make the effort uneconomical for the client. The problem is not just the direct cost of the time of the service provider; it is the high indirect costs that result.

IT service providers normally charge for project assistance by billing the time of the people involved by the hour. (Some projects are undertaken on a fixed-bid basis, but a whole new set of negative dynamics comes into play when that approach is used.) The result of billing time by the hour is a need for the service provider to charge enough time to make the engagement profitable. A $5 million engagement billed by the hour will normally mean 15 to 20 person-years of effort. That means an average of 10 full-time people for two years (or 20 for a single year). If a third of their time is spent interacting with people from the client organization, then over six person-years of internal resource time will need to be invested in the project as well. It is normal for the service provider to include a mix of experienced and newer people on engagements, so that much of the time being paid for will involve relative newcomers whose efficiency at interacting with internal people may be low.

Projects of this scope fall prey to all of the negative influences already documented, especially when the elapsed time from start to finish is greater than a year. As problems occur, the choices available to clients become limited—kill the project or pay more to see it completed.

Large project-oriented firms usually do a fine job for those organizations at whom their offerings are targeted. Their approach has been fine-tuned for huge projects carried out over years across the globe. However, this scenario becomes less attractive as the size of either the customer or the project gets smaller. Often both sides become frustrated under these circumstances.
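The person-year arithmetic above can be checked with a quick sketch. The blended billings per person-year (around $300,000) is an assumption introduced here purely to reproduce the book's 15-to-20 person-year range; the text gives only the $5 million total and the one-third interaction fraction:

```python
# Rough check of the indirect-cost arithmetic above. The blended
# billings-per-person-year figure is an assumption for illustration.

engagement_fees = 5_000_000
billings_per_person_year = 300_000              # assumed blended figure
outside_person_years = engagement_fees / billings_per_person_year

client_fraction = 1 / 3                         # time spent with client staff
internal_person_years = 20 * client_fraction    # using the 20 person-year case

print(round(outside_person_years, 1))   # 16.7 -- within the 15-20 range cited
print(round(internal_person_years, 1))  # 6.7 -- "over six person-years" internal
```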
If the only mode of doing business that is familiar to a service provider revolves around turning each opportunity into a large project, then that is exactly what can be expected if that provider is engaged.

Other Business Models

Projects based on the 1-5-10 style are not the only options available, even from the top-of-the-line vendors. Many service providers
recognize that those wishing to undertake projects of a more moderate size represent a growing percentage of the opportunity. The challenge has been finding ways to do so profitably.

The most successful model has revolved around what could be called prefabricated projects. Under this model the service provider takes an existing software application or tool set and puts it in place in a number of similar organizations with limited variation. Economies of scale kick in and everyone benefits. A good example of this would be the creation of retail Web sites using one of the many commerce server software systems now available.

FreeMarkets provides another good example. The company offers sourcing software and services based on its experience of creating online markets since 1995. The company’s custom-developed software helps commercial buyers conduct online reverse auctions and other online negotiations for items they need to purchase, thereby saving the clients a lot of money. Andrews Consulting Group has been highly successful in offering a prebundled suite of software products that make it possible to create business intelligence systems rapidly at a low cost. This is typical of a rapidly growing number of other prefab project offerings that service firms of all sizes are bringing to market.

In addition, it is becoming increasingly commonplace for IT service providers to specialize by industry. A growing number of larger firms organize themselves this way. Having access to experts who have dealt with problems within similar organizations can be very valuable. This kind of structure often leads to offering the type of prefab engagements just discussed.

The Rate Issue

The economics of service providers make it necessary for them to charge what many clients feel are excessive hourly rates. This perception exists because most clients do not have a full appreciation of all the hidden costs associated with providing the time of professional people. The cost of maintaining a staff of fully qualified experts can be very high. A great deal of time must be spent keeping skills current. The practical problems of scheduling make it difficult to maintain
high utilization rates as well. This is especially true if clients want the ability to have experts available only when it is convenient for them. In addition, the compensation demands of people of the quality needed are high. Turnover can be a problem as time pressure and heavy travel take their toll. A mechanism is also needed to train a steady stream of replacements for those who move on. As a result of all these factors, service providers must charge a high hourly rate—especially for their most experienced people—in order to make a profit. This can translate into hundreds of dollars per hour for their best people.

Those acquiring services too often make an unfortunate assumption. They believe that people with a particular skill set, such as Java programming, are more or less interchangeable. Service providers are compared based on the rate per hour for which they are willing to provide people with specific skills. However, buying the time of IT experts is akin to purchasing diamonds or pearls—quality matters a great deal. In the case of technical professionals, it is a complex combination of knowledge, experience, productivity, collaboration skill, communication skill, and other intangibles. The real value of top performers can be more than five times that of less capable people. Paying a little more for them is usually a great bargain.

The best way to get value from IT service providers is to maximize the talent of the people being obtained while minimizing their numbers. Obtaining the cooperation of service providers when this is the agenda is not always easy. If clients are willing, many service providers will assign a large number of average performers and as few stars as possible. Those firms that train their own people are always looking for engagements where the large number of inexperienced people on staff can be placed. Accepting numerous raw recruits in order to get the true experts needed is a poor trade.
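The rate economics described above can be illustrated with a minimal sketch. Every number here is an assumption chosen for illustration; the text gives no specific figures beyond "hundreds of dollars per hour":

```python
# Illustrative sketch of why expert hourly rates run high. All
# figures are assumptions; the point is the shape of the arithmetic:
# non-billable time and overhead inflate the required rate.

compensation = 180_000   # assumed salary and benefits for a senior expert
overhead = 90_000        # assumed training, bench time, sales, administration
work_hours = 2_000       # nominal hours in a working year
utilization = 0.60       # assumed fraction of hours actually billable

cost_per_billable_hour = (compensation + overhead) / (work_hours * utilization)
rate_with_margin = cost_per_billable_hour * 1.5   # assumed 50% markup

print(round(cost_per_billable_hour))  # 225 -- breakeven before any profit
print(round(rate_with_margin))        # 338 -- "hundreds of dollars per hour"
```

Note how sensitive the result is to utilization: every unbillable hour must be recovered from the billable ones, which is why flexible scheduling (discussed below in the text) can earn a client a lower rate.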
The methodologies that some of the largest service firms use lead to projects that require a pyramid of skills. It actually helps firms that use this approach when clients insist that the average rate per hour be minimized. The response is to offer people with less experience and capability at lower rates. Insisting on only engaging a limited number of highly talented people can mean paying significantly more per hour for them. This is usually money well spent. Unfortunately, the instinct of many
people is to always make price the highest priority, but doing so is a false economy. Would it be a good idea to hire the least expensive brain surgeon?

There are better ways to save money than to quibble over the rates for top performers. One option is to offer to be flexible on scheduling. Service providers look for projects that can absorb the otherwise nonbillable hours of their people. Agreeing to use people’s time as it becomes available can make it possible to negotiate favorable rates without sacrificing quality. Being flexible about exactly when the project starts, when that is practical, is another way to lower costs.

Who Assumes the Risk?

Earlier chapters have documented all the reasons why IT projects do not always turn out exactly as planned. Application of the RITE Approach can reduce some of the risks that cause project problems but cannot eliminate them. The highly uncertain nature of projects is unsettling to the managements of most organizations. Many respond by attempting to transfer the risk of project failure to the outside service providers. Service engagements undertaken with this as a hidden agenda rarely work out well for either party.

Those hoping to limit the risk of project failure sometimes engage a high-profile service firm that presumably has deep pockets and will do what is necessary to avoid embarrassment. This strategy fails because successful firms have become very skilled at protecting their own interests. It also encourages behavior that actually increases the probability that problems will occur.

Most service providers are prepared to assume a modest amount of the risk associated with projects. They build the cost of it into their rates. They cannot survive, however, by making open-ended commitments to do whatever is needed to satisfy the changing needs of their clients at a fixed cost. The successful ones have developed techniques to control the risks they face. These risk control mechanisms can lead directly to growth in project scope and all of the attendant problems. It is useful to examine how this happens.

Engagements are usually broken up into at least three major phases, with a number of stages within each phase. During the first phase, the broad scope of the effort is defined following a detailed assessment of the issues at hand. The need to control risk and a desire to maximize revenues encourage this assessment to be very thorough. The recommendations that come out at the end of this phase include a large number of carefully stated assumptions. An estimate of cost and time is provided at this point, but no commitments are made beyond the next phase.

The second phase develops detailed specifications, plans, and a more detailed project budget. Once again, numerous assumptions are carefully documented, with the understanding that the budget and schedule depend on them all holding up and on the specifications remaining fixed. Before this plan is presented, representatives of the end-user community are asked to make a commitment to the specifications. By this point, the project has often become quite large and resource intensive.

In the third phase the systems are created and put in place. During this phase, changes almost always occur. When they do, the service provider can point to the carefully documented assumptions and specifications and make a justifiable claim for greater compensation. Only in the fortunate cases where no changes occur is the service provider expected to stay within budget and on the original schedule, and even then the only risk really being taken is that the estimates were invalid.

The model too commonly used for obtaining project assistance is to assign responsibility for the project to a team largely made up of outside people. A plan created during the first two phases of the engagement documents all the assumptions made and the details of a specification for the planned systems. Changes to the plan are considered the responsibility of the client, who absorbs their cost.
Experience has shown that such changes occur a high percentage of the time. When they don’t, it is often because an enormous amount of time and resources were spent making the project plan bulletproof. It would seem that there has to be a better model for working together—and fortunately there is.

A Better Model for Obtaining IT Project Assistance

A better strategy involves admitting that risk exists, using the RITE Approach to limit it to the degree practical, and retaining control of the project, and with it responsibility for the cost of whatever goes wrong. Attempting to shift risks to service providers only invites them to adopt an approach that encourages scope growth and that does not succeed in eliminating risk anyway.

A more effective working relationship between those who provide project assistance and their clients needs to be put in place. In order for this to happen, each party needs to make concessions against what they would ideally prefer. The resulting framework is based on the following outline:

■ Both parties acknowledge that projects are dynamic and subject to change.
■ The client provides a leader responsible for making decisions and trade-offs.
■ The service provider helps manage the project but follows direction from the leader.
■ The schedule and resources are fixed, and the scope is adjusted to meet them.
■ The client agrees that the service provider is entitled to make a profit.
■ Clients expect the people assigned to be capable and to work in their interest.
■ Service providers are not responsible for the ultimate cost of the project.
■ The project team consists of a limited number of highly experienced and capable people.
■ An appropriate number of people are assigned with high average skill levels.
■ Service providers are entitled to charge high rates for the most capable people.
■ The engagement forms the foundation for a long-term relationship.
■ The service provider can expect to participate in follow-on efforts.

Under this arrangement, clients make key decisions and trade-offs and assume the associated risks. Service providers agree to participate in smaller projects than they would prefer, where the risk they assume is limited. Rates are fair to both parties, and there is an opportunity to build a profitable long-term relationship. Under ideal circumstances, the long-term relationship allows service providers to evolve into trusted advisors to their clients.

Making the Right Choice

Finding the right service provider is like finding the right person to marry—the best choice will vary greatly depending on individual preference and circumstances. It is necessary to first think about what characteristics are most important—the talent of the individuals offered, the approach to project management, specific industry knowledge, global coverage, reputation, knowledge of specific products, or the size of the talent pool. Each situation will be different.

The best way to see if a good fit exists is to ask direct and detailed questions:

■ What was the nature of the projects that each of the people being proposed for this engagement just completed? How many people were involved, how long did it take, what was the budget, how much new development was involved, and how did the result differ from the original plan?
■ What were the most recent projects completed within similar organizations?
■ What was the smallest successful project engagement completed recently?
■ How many of the people who will work on this engagement have less than five years of experience?
■ Have any recent engagements been completed under budget or ahead of schedule?
■ How many lawsuits and arbitration claims have clients initiated in the past year?
■ Will the people making the sales presentations be critical members of the project team?
■ How much turnover has your professional staff experienced in the past year?

Subjective issues also matter a great deal. For example, the best outside advisors have a great “bedside manner.” They instinctively understand how to provide assistance in an effective and nonthreatening way. In this and other ways, the culture and working style of the firm chosen need to feel right. Most importantly, whoever is selected should feel comfortable using the client’s preferred project management philosophy.

If the project has to be large, such as the worldwide deployment of a new collaboration system that tens of thousands of people will use, then the services of a firm with an appropriate number of people in the right locations will be needed. A number of high-quality international service providers are available to assist with projects of this scale. Outside help is also available from individuals who work on a contract basis and from staff augmentation firms. If the most important requirement is finding additional people to fill out a project team, then obtaining help this way can make sense.

Summary

Some use of outside resources is both necessary and desirable on most projects. It is possible to set up these engagements in such a way that both parties have an opportunity to benefit. When this is done, it frequently leads to productive longer-term relationships.

Clients should retain control over projects and make all the necessary trade-off decisions. Outsiders, regardless of how talented they might be, cannot be expected to make decisions that impact their profitability with the client’s interest solely in mind. Retaining project control means it is also necessary to assume the
associated risks. In practice, these risks remain with the client in any case. In exchange for a reduced level of risk and for profitable rates, service providers are asked to put a small number of highly talented people on the project for a relatively short period of time. This is not the model most service providers prefer but is an acceptable one if it provides the opportunity for follow-on efforts.

Chapter Twelve

Managing IT Professionals

The RITE Approach can improve the odds of success for projects. Yet success always remains heavily dependent on the talent of the individuals involved. In sports, coaching can have a big impact but can only overcome a modest difference in the talent of teams that are competing. The same thing is true when making use of technology—the greater the quality, experience, and motivation of the project team, the better the chance of success.

There are rarely enough people with appropriate skills available, especially where certain types of technical expertise are concerned. The shortage of qualified information technology professionals reached a peak as Y2K approached. Since then it has eased but has not gone away. It is likely to remain a problem for as long as technology continues to be ever changing and highly complex.

Chapter 11 discussed how it is increasingly common to make use of outside people to assist in IT projects. This trend does not reduce the need to build and maintain a highly skilled internal IT staff. The IT organization needs to be large and skilled enough to meet predictable ongoing requirements. This will normally include supporting the workstation and networking infrastructure, maintaining the body of software in place, and undertaking some of the development of new systems.

Over time, the combination of technology products and software applications that support each organization will likely become highly distinctive, because there are an astounding number of ways in
which popular products can be combined, configured, and used. In addition, it is a rare organization that does not either have a number of custom applications or use packaged applications in a unique way.

A group of people is needed who understand in great detail exactly how software and hardware products have been combined to create the systems in use. This is a function that cannot wisely be handed to outside organizations. Knowledge of exactly how the systems that support the organization work is too precious an asset to be turned over to others. The internal IT staff also needs to develop a deep understanding of how the organization functions and the role each application plays in the operation of the entity. The best internal IT functions also develop their own management cultures that include the use of up-to-date development and project management methodologies.

It can take years of hard work and a large investment to bring the IT organization to the point where it does all the appropriate things well. The value of that investment can disappear quickly through the loss of a few key people. Learning how to protect it is thus a critical issue for management. It is not enough for senior management to understand and adopt the RITE Approach. A strong internal IT function is needed that is effective at operating, supporting, and continually upgrading the systems that are created using the RITE Approach.

In many ways, management of IT is quite similar to any other function. Like others, IT professionals want job content that holds their interest. They want to be respected and to have the value of their contributions acknowledged. They want to be paid fairly and treated with consideration. They want to feel secure about their jobs and have the opportunity to build relationships with other people. At the same time, however, there are a number of important differences.
This chapter will be devoted to highlighting some of the more important differences and what to do about them. Some of the things that can be done to help build and maintain a strong internal IT function include:

■ Treat the IT staff as a valuable and fragile asset.
■ Become an attractive place for IT professionals to work.
■ Attract more than your fair share of highly talented individuals.
■ Retain talent by helping IT professionals manage their careers.
■ Understand their special needs.
■ Help everyone feel like a leader.
■ Manage both marginal performers and high-potential employees wisely.

What Makes IT Professionals Different

It is dangerous and politically incorrect to generalize about any large group of people. The best managers, however, develop insight into the behavior and motivations of the different types of people who are attracted to each type of work. Few people would disagree that a broad profile can be drawn that describes the types of people who gravitate toward certain professions. Those who join the clergy hopefully have a different makeup than those who become drill sergeants or used car dealers. In this spirit, a totally unscientific view of technically oriented computer professionals is offered, based solely on the experience of the authors, who freely admit to having many of these traits.

■ Good technicians think in a logical, orderly way. Many were good at math in school. A surprisingly high percentage play a musical instrument. They view the world as a logical place and are less comfortable when confronted by seemingly illogical circumstances.
■ Technical professionals are frequently idealistic. In order to design new systems, it is necessary to believe that a better future is possible. Most are, therefore, naturally optimistic people, and many will not be good at predicting the negative consequences of their designs or the problems that will arise.
■ Most display a strong sense of loyalty. They remain loyal to organizations and people who share their conviction that what they are doing means something and is valued. They also develop strong loyalty to products that they like and use.
■ Political behavior is foreign to their nature. If political skills are necessary, they need coaching to do them well, and they are reluctant to use them. Organizational politics seem illogical and unnecessary.
■ People with strong technical skills can be drawn toward complexity. The elegant solutions they prefer lead directly to complex designs. The challenge this presents excites them.
■ They are curious and want the challenge of learning new things. What they consider challenging can be different from the view of their managers. To them, challenge means applying new skills, solving difficult problems, making use of new tools, and demonstrating creativity. If the work assigned to them does not challenge them, they might attempt to undertake an assignment in a way that will add challenge, even if doing so increases time, cost, and risk.
■ Technicians may not have the communication and political skills to know how to express their frustrations. As a result, job problems build up inside. They don’t want to be perceived as prima donnas or become the center of attention, but they continue to feel that injustice is occurring. They make unrealistic assumptions about management’s ability to notice what is bothering them without them having to say anything. When they finally do express what they are feeling, a crisis can result.

There are a number of specific things that can be done to help individuals with these characteristics find job satisfaction while working toward the overall good of the organization. The first involves understanding what motivates IT professionals.

Motivation and Job Satisfaction

Recognition of the value of their contribution by peers is the leading source of job satisfaction among technical professionals. Underpinning this is a belief that only technical peers can really understand what they do and why it is so important. Providing job assignments that showcase skills is therefore a great way to increase satisfaction. The opportunity to learn and apply new skills, particularly with regard to the latest technologies, is also a major motivator. Professional development opportunities such as making a presentation at a
user group meeting, receiving training on a new technology, or working on a leading-edge project can be viewed as an award for work well done. Perhaps the ultimate sign of recognition is the opportunity to act as a representative to an industry group or standards committee.

Becoming a technical leader, such as team leader or design leader, is often viewed as a major milestone, not only because of the recognition it brings but because of the position of influence. Technical professionals like to feel that they have successfully demonstrated the “right” way of doing their job. Being asked to lead an effort is equivalent to management saying, “We want others to follow your example.”

When assignments are given out that technical people are likely to feel good about, it is appropriate for management of the IT function to make sure everyone knows about it. The value of being recognized is limited if no one knows about it. Time invested in sharing the successes of each individual with the group is well spent.

Since few technical professionals want to be involved in organizational politics, shield them from it as much as possible and, when it is not possible, have an honest discussion about the necessity for political behavior. Coaching will probably be in order until this behavior is learned. Some employees may resist it altogether.

Technical professionals like to please others—their peers, their managers and leaders, and the users of their efforts. Nurture this positive inclination while remaining aware of the dangers it brings, including scope creep, overly optimistic schedules, and unrealistic work estimates.

Career Management

IT professionals think more about the long term and the direction their career is taking than some of the other categories of employees. Managers should therefore periodically hold both formal and informal career talks with them. These discussions should not be combined with job performance evaluations. The manager needs to share knowledge of the industry and the way the organization fits into it with the IT staff. Good IT people have an insatiable appetite for information about how their organization functions. They can be quite naive about politically
motivated behavior, but this does not diminish their curiosity about why specific decisions have been made and what their likely impact will be. Time spent, especially by nontechnical managers, having an interactive discussion with them about how the entity is being managed is usually worthwhile. It helps a great deal to paint a picture of how the needs and aspirations of each person are likely to be met. IT people like to feel that they are playing a meaningful role within an organization that is doing something worthwhile. The important point is to invest time in highly individual attention focused on understanding and meeting their career aspirations. In doing so it is important to understand that the needs of individuals will pass through a number of stages.

Managers and Leaders

Unlike some other functions, there are many opportunities within the IT function for nonmanagement people to be leaders. This is especially true when they are the most senior IT people working on a project. Another RITE Approach principle is that everyone can be a leader. By this we mean that every participant in a project should be encouraged to share in the leadership of the effort. Projects of all sizes create leadership opportunities, but those that are larger need more people to assume defined leadership roles. Examples of specific functions that an individual IT professional might oversee include:

■ Leading a team to install a purchased software package at a customer site.
■ Leading a team of programmers doing custom work.
■ Creating test methodology.
■ Leading experienced programmers in design reviews.
■ Leading a team creating a prototype.
■ Being the development representative to a customer, gathering requirements.

Managing IT Professionals


Those who are appointed to actual management jobs should be selected based on their willingness to become as good at management as they were at technical work. They must become management scientists, whereas before they might have been computer scientists, or they will always be regarded as able to manage only certain types of people and work. While they will always be technically inclined, it is impossible to be a full-time manager and stay completely up to date technically.

Marginal Performers

Most organizations wait too long to deal with marginal performance. Once a fair chance has been given, the employees seem to know more quickly than the manager when particular individuals are not carrying their part of the project. The employee’s peers feel this acutely because the lack of performance impacts their own success on a day-to-day basis. Dealing with low performers is considered to be an essential responsibility of management. The other team members feel awkward in this situation, and resentment toward management builds quickly when action is not taken. Management often waits too long, fearing a bad reaction from the group and assuming they will take the side of the disciplined (or terminated) employee. Except for possible personal friendships, this is rarely the case. Employees have a right to expect management to take action firmly, fairly, and in a timely manner. Usually a boost in morale occurs when someone who has not been performing leaves.

High-Potential Employees

The same rule applies to staffing both projects and the permanent IT staff within an organization: A modest number of outstanding people are more valuable than a large number of average ones. In practice, most organizations find it difficult to attract and keep high-potential IT professionals. Part of the problem is that those
with the greatest talent tend to be attracted to work for vendors and service providers, which have become quite good at finding and retaining exceptional people. Competition for those who are available can be intense. The productivity of technical professionals varies more widely than would be expected. In many professions the difference in productivity between the top 10 percent and bottom 10 percent of workers might be around two to one. For example, the most productive bricklayers might work at twice the speed of the slowest. In programming, and particularly in activities of design and architecture, it is not uncommon to see a top performer have as much as 5 or 10 times more impact on the product than an ordinary programmer. For this reason, having a few of these top performers on staff can be very valuable. High-potential IT professionals, especially those in the earlier stages of their careers, can be a huge challenge to manage. The many venture capitalists who threw vast amounts of money at brilliant young software experts can attest to how difficult this can be. In spite of this, such people are worth the extra effort. A common trademark of high-potential people is a confidence in their own ability that often goes beyond what they are realistically able to accomplish. More experienced, and possibly less brilliant, people need to help prevent them from committing to too much. This needs to be done without questioning their capability. The best approach is to make the primary challenge they must meet an ambitious time frame rather than the creation of an elegant solution to the problem. High-potential people should not be kept satisfied at the expense of the morale of everyone else. One of the worst things to do is to go overboard offering special compensation or incentives to high-potential people. Always make the assumption that the entire organization will instantly know about anything special given to individual employees. 
If it cannot be defended as fair in light of proven contribution, and not just potential, then it is best to look for another way to keep the individual satisfied. The honest and nonpolitical nature of most technical professionals can serve them badly when they wear their ambition on their sleeve. Many will assume that their peers will be suitably impressed with what has been offered to them and their rapid advancement.


Management needs to find ways to keep the high-potential people challenged and motivated without destroying the morale of the rest of the staff. It has been our experience that established, personable employees are not resented for being rewarded or moving ahead as long as it does not come at the expense of others. The group can be pleased to see the success of one of their own if the group shares in the credit for what has been accomplished.

Creating Ideal Assignments

The RITE Approach encourages organizations to prefer numerous small projects instead of a few large ones. Doing so makes better use of the available talent because the motivation, effectiveness, and leadership capability of people increases as work groups become smaller. In practice, most project teams will be made up of people with a less than ideal skill set. This can be less of a problem than it appears to be, because people seem to do their best work when stretched. The perfect project is one that taps into hidden potential while not pushing the participants to undertake something for which they are unprepared. When given an unstructured mandate, most groups will not attempt to undertake anything too ambitious. The natural conservatism of group decisions tends to make this happen. It is therefore best to not be more specific than necessary when giving a group its initial assignment. For example, asking a task force or project team to see what can be done to reduce work in progress inventory is much better than demanding that they investigate the latest generation of bar code inventory control systems.

The Causes of Turnover

Since the talent shortage refuses to go away, keeping skilled people is a priority. Minimizing the loss of IT people requires an understanding of why they change jobs frequently. Although it is commonplace for IT people to significantly increase their income when
making a move, compensation is rarely the primary reason why people leave. The most common reasons for turnover are:

■ Disillusion. Companies often fail to live up to an exalted image that was created in the minds of the technicians when they were hired. Letting them see too much of the political side of the business often creates this problem.
■ Lack of appreciation. Technicians are highly sensitive about how the organization appears to value their contribution. If their good deeds seem to go unrecognized, they will become restless.
■ Boredom. As soon as their job begins to fall into a comfortable routine, many of the best people become restless.
■ Perceived unfair treatment. Even when IT people say they are leaving for compensation reasons, it is more likely that they felt someone else got more than they deserved. If everything else is fine, few will leave for money alone.
■ Lack of technical leadership. IT professionals need someone in a position of leadership who they feel understands their world.

Hard work, long hours, lower pay than elsewhere, low-quality office space, limited amenities, and other factors can create dissatisfaction but are rarely the real reasons behind turnover. Good technicians will work under quite marginal conditions if the more important things are present—challenging work, a strong sense of purpose, managers who are understanding, fairness, and the opportunity to learn new things.

Summary

A lean, strong internal IT function is critical if the RITE Approach is to be put to good use. Creating and maintaining such an environment is a challenging task. Organizations are becoming increasingly dependent on a tightly integrated mass of hardware and software. Maintaining a competent group of people to operate,
support, and continually improve this body of technology is essential. It makes sense to use outsiders to help with all this, but turning it completely over to them is fraught with risk. It is worth the effort to attempt to understand what makes IT professionals tick and to invest time in helping them become a motivated and integral part of the organization.

Chapter Thirteen

Management of Projects

After the scope of a project has been tightly controlled and undivided accountability has been assigned to a leader from the user community, it is not possible to relax. It is then necessary to get the darn thing done. There are still many things that will go wrong even when the RITE Approach principles have been used to define and organize the project. Although it may not seem like it, the RITE Approach philosophy encourages the use of formal development methodologies and project management tools. Once the elements that encourage scope creep have been stripped out of them, many development methodologies are quite effective. A highly organized process needs to be used to manage almost any IT project. That process should include the use of an automated project management tool. It is less important which one is selected than how it is put to use. Once a project plan has been approved and development starts, there are a number of things that management needs to remain concerned about, including:

■ The inherent inaccuracy of project schedules. As described in the next section, perfect schedules for complex projects can rarely be created in advance. It is possible to manage around some of these imperfections, but completely eliminating them is an unattainable goal.
■ Problems tend to surface late. The normal pattern is for projects to appear to be going well until just before scheduled completion, when suddenly everything falls apart. Part of the reason this occurs is that project teams often postpone solving the most difficult problems until late in the project.
■ Bad news travels slowly. It is common for management to react violently to the news that something has fallen behind schedule. It is appropriate to remain focused on speed of completion, but an atmosphere that is intolerant of any deviation from the formal plan will encourage those working on the project to hide problems and failures from view for as long as they can.

The Project Scheduling Dilemma

IT project scheduling is often viewed by management as a black art involving bubbling cauldrons and chanting. This is because it is rare for projects to be completed on time regardless of how many tough statements are made by those who manage the effort in the early stages. The underlying problem is that it is usually necessary to estimate the cost and time frame for a project long before enough information is available to do so with any precision. This is a universal problem and not one that occurs because the people creating the plan are inexperienced or incapable. Those who would like to use information technology to solve a problem usually want to know what the solution will cost and how long it will take before a great deal of time and money are invested. It is akin to asking an architect how much it will cost to build a new company headquarters. The answer clearly depends on factors such as where the building will be located, how much land will be used, how much will be invested in creating a dramatic appearance, what amenities will be included, and much more. A simple statement of the typical cost per square foot of office space will not lead to an accurate answer to the question. In a similar way, the cost and time required for any IT effort can only be estimated with accuracy after all the important decisions have been made. Gathering information and making all the necessary decisions can take a long time. Management is rarely willing to wait until that point to see a schedule and cost estimate. It is much harder for executives to develop a mental picture of a technology-driven effort than something physical such as a building. Understanding the cost and
effort required to alter software that has already been designed and partially built is harder than accepting the implications of changing the foundation of a building after the steel frame has already been erected. The solution is not for management to insist during the early stages of a project on a precise definition of the plan and a schedule that will never change. Doing so can encourage the project team to overestimate the time and resources needed to avoid being wrong. It is frequently necessary for IT project managers to break out the cauldron and develop project estimates based more on witchcraft than on hard data. There are a number of other factors that make IT project schedules inaccurate:

■ IT professionals are optimistic by nature. The schedules they create sometimes underestimate the time and effort needed.
■ Schedules don't take into consideration the fact that the environment will change as time passes.
■ In spite of the best efforts of good people, design specifications rarely turn out to be a perfect reflection of what is needed.

Why Most Schedules Are Not Met

Project leaders are routinely asked to create project schedules and budgets before even a fraction of the information needed to do so accurately is available. There are two common strategies used to address this dilemma, both of which tend to lead to missed schedules. The first strategy is to build a schedule based on the information available at the point in time when the estimate is demanded. Schedules created this way almost always underestimate what the final project scope will include and how much will change during the project. There is often a temptation for IT professionals to create an optimistic schedule in order to obtain approval for an ambitious project. A second and more popular approach to creating schedules is favored by more experienced project managers. It involves building
a plan based on known information and then adding a buffer or fudge factor based on experience and personal conservatism. With this approach, a plan is first created based on what is known. For example, if the first cut of the plan calls for nine months of total effort, then extra time is added to each stage of the project to account for unknown factors. Depending on the person creating the plan, anywhere from 25 percent to 100 percent additional time is usually added. When determining how much of a buffer to include, a judgment is usually made as to what end date management will tolerate. Building in buffer time is the better of the two approaches, but an interesting dynamic causes it to usually fail. The best explanation of why this happens is the phenomenon widely known as Parkinson’s Law: Work expands to meet the time allowed for it. Because of this endearing characteristic of human nature, even schedules with built-in buffers tend to be missed. What happens is simple. In the early stages of a project, the team members put together the best schedule they can, given the information available at the time. They estimate that it will take nine months of effort if all of their assumptions hold. The wily old veteran running the project knows that unplanned changes will occur and therefore adjusts the schedule to take the unknown into account. The end result is a 14-month plan documented in great detail. Buried within each phase of the project is a total of five months of extra time versus the original estimate. Management gives its blessing and work starts. The first phase is to build a prototype to prove the fundamental ideas behind the project plan and to help users finalize their thinking on interface specifications. Three months have been set aside for building this prototype. Two months had been scheduled before the buffer time was inserted. Work begins and the usual problems occur. 
The software tool selected for the prototype takes longer than anticipated to learn, one of the project team members is out ill for a few weeks, and the specifications grow in complexity. In spite of all this, everything looks good until the prototype is handed over to the users for their reaction after 10 weeks, 1 week past the planned date. However, the users cannot start testing because of an emergency request from their largest customer. Two more weeks are lost. When they finally
get involved, the review and changes take another week longer than expected. Even though the project is four weeks behind schedule, no one is all that concerned because additional time has been built into each phase of the project. As each phase proceeds, the same sorts of things occur. When the dust all settles, the project is completed in 18 months, 4 months more than planned and twice as long as the original estimate created without the buffer time. The project team vows to include an even larger buffer next time but also feels that the long list of unplanned things that occurred was beyond its control.

A Different Approach to Scheduling

It is necessary to accept the unpleasant reality that a perfect schedule cannot be created in the early stages of a project. On the other hand, management cannot be expected to approve the expenditure of time and money without a clear plan and the assignment of accountability. One way out of this dilemma is a more creative approach to scheduling that deals with buffer time in a direct and open manner. The two-tier project scheduling technique outlined here is one possible approach to the challenge of creating schedules that will hold up. This particular approach is not for everyone. It has been tried successfully in a number of situations including the case history outlined later in the chapter. When conditions are not exactly right, it is best not to attempt it, and for that reason it is not considered a formal part of the RITE Approach. At the same time, there is enough to be learned about the important dynamics of project scheduling from this approach to make it worth understanding. The essence of this concept is the application of the following scheduling approach:

■ The project leadership creates the best first-tier plan possible given the information and assumptions available.
■ The plan is allowed to contain the usual amount of contingency time to take into account the normal delays and problems that all work efforts face.
■ The project is only approved if the benefits are scheduled to arrive in a reasonable time and if the return on the investment in time, resources, and funds is in line with the risks.
■ The success of the project is measured against this "normal" schedule. If this schedule is met and the associated benefits are provided, then the project is considered a complete success. Appropriate rewards are set in advance for those who are accountable for the success of the project.
■ Additional incentives are offered to the team if the project can be completed ahead of time.
■ On its own, the project team creates an intentionally optimistic project schedule. This second schedule is based on the assumption that absolutely everything throughout the life of the project will go exactly as planned and that nothing will go wrong. In most cases, such a schedule will be one-third to one-half shorter than the elapsed time of the plan that was approved.
■ As the project begins, the team works hard to achieve the highly optimistic schedule. The members know, however, that their commitment to management is the more conservative plan. As they work against the more aggressive schedule, they can give ground when absolutely necessary.
■ As long as the original schedule is met, the project is considered a complete success. Completion ahead of schedule is wonderful if it happens but is not to be counted on.

When two-tier scheduling is used effectively, it is commonplace for projects to finish in more time than the optimistic schedule provided but less than the approved, more conservative one. When this happens, it is essential that the project team be heartily congratulated for doing a great job and that whatever rewards are appropriate are given to those who deserve them in a highly visible way. There have been dramatic successes using this simple technique, including the Sealectro case outlined later. Experience has shown that creating the climate needed for it to work is a delicate matter. One element of that climate is a special way to view success. Under this view success means:
■ Completing the project on or ahead of schedule without exceeding the budget for funding or resources.
■ Providing benefits to the organization that represent good value for the resources spent.
■ Taking a demonstrable step toward a long-term vision that is articulated before the start of the project.

It must be noted that under this definition the results of the project do not have to be exactly what was defined at the start to be considered successful. The best test to apply after the fact is whether the project resulted in clear improvements whose value provided a good return on the time, funding, and effort expended. There can never be negative consequences if the optimistic schedule is not met. Management cannot, in effect, turn the optimistic schedule into a firm commitment. If the project team believes that this may happen, they will go back to putting hidden buffers into what they call an aggressive schedule, and Parkinson's Law will go gleefully back to work on it. It also must be seen as a completely positive development if the project is completed before the commitment date. On the other hand, there must be a significant consequence for not meeting the longer-term commitment (the "normal" schedule). It is an odd thing, but under the traditional approach it is commonplace for managers to react negatively if a schedule is beaten. They do this by accusing the project team of padding the plan. This is, of course, exactly what has happened, but in most cases it never becomes apparent because of the natural tendency of work to fill the allotted time. Clearly, anytime a commitment is exceeded there should be celebration, not accusation.

Using Two-Tier Schedules to Defeat Parkinson's Law

Many projects take longer than planned because buffer time is not acknowledged to exist. Project managers bury extra time within the schedule but usually spread it uniformly across the full project plan. Parkinson’s Law works its evil as those working on the project use every minute that is scheduled for each project phase. Since the
biggest surprises usually come at the end, there is rarely enough buffer time available to absorb them. When a project team is given a risk-free chance to work to an aggressive schedule and is motivated to come as close as possible to meeting it, it is surprising how frequently wonderful things happen. This is especially true if the entire team is able to make adjustments to the scope by concentrating on maximum measurable benefit and not sticking exactly to the original plan for achieving those benefits. When the incentives are right, it is surprising how creative ordinary people can become. Two-tier scheduling attempts to create the kind of atmosphere present when dealing with an emergency. People become highly creative in emergency situations when all the incentives revolve around solving the problem rapidly. In emergencies there are usually no concrete expectations as to when the effort will be complete other than as soon as practical. It should be noted that the two-tier scheduling approach works best when used on projects where the amount of innovation has been limited. The more experimental the project is, the harder it will be to create and live with schedules regardless of what technique is used.
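The buffer arithmetic and the two-tier alternative described above can be sketched in a few lines of illustrative Python. This is a model only: the 55 percent buffer and the four months of slippage are hypothetical figures chosen to mirror the chapter's nine-month example, and Parkinson's Law is reduced to the assumption that a project consumes all of its allotted time plus whatever goes wrong.

```python
def buffered_plan(base_months, buffer_pct=0.55):
    """Traditional approach: pad the base estimate with a hidden buffer."""
    return base_months * (1 + buffer_pct)

def parkinson_outcome(planned_months, slippage_months):
    """Parkinson's Law: work expands to fill the plan, then slips further."""
    return planned_months + slippage_months

def two_tier(base_months, buffer_pct=0.55):
    """Two-tier scheduling: an open aggressive target plus a committed plan."""
    aggressive = base_months                    # assumes nothing goes wrong
    committed = base_months * (1 + buffer_pct)  # the schedule management approves
    return aggressive, committed

# The chapter's example: a 9-month estimate becomes a roughly 14-month
# documented plan, and under Parkinson's Law it still finishes in about 18.
plan = buffered_plan(9)                               # about 14 months
actual = parkinson_outcome(plan, slippage_months=4)   # about 18 months

# Under two-tier scheduling, success is judged only against the committed
# figure; finishing anywhere at or under it counts as a complete success.
aggressive, committed = two_tier(9)
```

Under these assumptions, finishing at 12 months would miss the aggressive 9-month target yet still beat the 14-month commitment, which the two-tier approach treats as a success rather than a slip.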

Case History: Sealectro Replaces Its Applications

As a $50 million per year manufacturer of electrical connectors, Sealectro could not afford to continue to spend over $1.5 million per year on the IT function. An IT staff of 25 was needed to support and operate obsolete homegrown applications that ran on a small mainframe. A new management team engaged Andrews Consulting Group to help decide what to do. A task force led by Sealectro’s CFO developed and approved a project plan over a six-week period. It called for the staged replacement of all the applications then in use with packaged applications over an 18-month period. The CIO of Sealectro’s parent company reviewed the work of the task force and approved the resulting plan with some reservations. Based on his experience, he thought that the project was certain to take longer than scheduled and cost much more than budgeted.


Once approved, the project was managed using the scheduling concepts just described. The commitment made to management was to complete everything within 18 months. With the understanding and support of everyone involved, the project teams worked against a much more aggressive schedule. A person from the department that would use the particular application managed each subproject. These user project managers were fully accountable for every aspect of their project including budget and schedule. It was agreed that the new software packages would initially be deployed with as few changes as possible. Funds were allocated to handle enhancements to the packages. It was understood that those that were not essential would be put in place only after the systems had been in use for a period of time. To everyone’s surprise, each of the project teams was able to meet the very ambitious schedule that they were working toward. As a result, all of the new applications became operational within nine months. Over the following three months, appropriate adjustments and enhancements were put in place. At the end of one year, the original 16 applications in use had been replaced with ones that represented major improvements. In addition, four completely new applications that were not in the original plan had also been deployed. The new IT organization consisted of just seven people, and the annual cost of the department had been cut in half. The new software introduced process improvements that led directly to head count reductions of over 50 people in the financial departments alone. Benefits in manufacturing were even greater. The Sealectro project was hardly one of the great milestones in the history of computing. A small number of people working in a business of modest size completed a straightforward project where packaged software replaced a set of outdated applications. What was so unusual was that the project came in six months ahead of schedule and below budget. 
This unusual level of success resulted from the application of a few simple principles:

■ Start with an end date and work backward.
■ Deploy packaged software with minimal initial changes and then determine what enhancements are necessary after a period of time.
■ Appoint end users as project leaders who are fully accountable for results.
■ Work toward a highly ambitious schedule while committing to a more reasonable one.
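The first of these principles, starting with an end date and working backward, can be illustrated with a short Python sketch. The phase names and durations below are hypothetical, invented only to show the mechanics of anchoring a plan to a fixed completion date.

```python
from datetime import date, timedelta

def schedule_backward(end_date, phases):
    """Given a fixed end date and (name, weeks) phases, assign each phase
    a start and finish by working backward from the end."""
    plan = []
    cursor = end_date
    for name, weeks in reversed(phases):
        start = cursor - timedelta(weeks=weeks)
        plan.append((name, start, cursor))
        cursor = start
    plan.reverse()
    return plan

# Hypothetical 34-week replacement project anchored to a year-end deadline.
phases = [
    ("select packages", 6),
    ("deploy with minimal changes", 20),
    ("stabilize and enhance", 8),
]
plan = schedule_backward(date(2002, 12, 31), phases)
# Each phase ends exactly where the next one begins, and the final phase
# finishes on the committed end date.
```

Fixing the end date first and letting the phases fall out of it reverses the usual planning logic: the deadline becomes the constraint, and scope and phase length become the variables to be negotiated.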

Summary

Building buffer time into a schedule is a common practice, but under a traditional approach this is usually done without acknowledgment to management that it is happening. Management too often establishes an atmosphere where those who make proposals are expected to be clairvoyant. In this atmosphere, it is unacceptable to state the obvious fact that the plan being presented represents an educated guess, making it necessary to pretend that buffer time does not exist by not explicitly showing it in the schedule. There is also a natural reluctance to admit that any buffer time has been included in a schedule for fear that management will try to micromanage the buffer time rather than let the project team handle it. There is no perfect solution to the problem of needing project plans before the information needed to prepare them is available. The two-tier approach can make the best of an awkward situation. It only works, however, when management is able to establish exactly the right atmosphere by accepting the need for schedules with buffers. Management must then create positive incentives for attempting to finish ahead of the committed schedule. Care must be taken not to make the aggressive schedule into a firm commitment for the project team. When this approach is used successfully, there are many intangible benefits in addition to the direct value the completed project brings. The most important of these benefits is the impact on the morale of those involved in the effort. Under the traditional approach, most projects end up taking longer than scheduled. Because of divided accountability, everyone has an excuse why it was not their fault. It is always good to have an excuse when something goes wrong, but it does not feel very good to be in a position where excuses need to be made. The negative impact on morale on traditional projects
is thus one of the reasons why good people outside the IT function are often not eager to be involved in projects. Therefore, when a project meets or exceeds its objectives, a celebration is in order. Under these conditions, there is no limit to the number of people who can share in the credit. This can be called the Oscar Effect, since it is like the Motion Picture Academy Awards, when those who win an Oscar feel obligated to thank an endless number of people who made their success possible. When a project comes in ahead of the committed schedule, the primary credit goes to the accountable project leader. This person invariably makes a great effort to indicate that the success was the direct result of the work of many others. Everyone from the custodian to the CEO then basks in a small part of the glory. The reason all this is good is that it acts as a strong incentive to encourage the best people within the organization to want to be involved with future projects and to carry off the success again.

Chapter Fourteen

Building Software Yourself

Previous chapters have made the case for limiting the amount of software developed internally to the degree practical under the principle imitate, don't invent. In a growing number of situations, this is simply not possible. A large body of unique code must be developed and supported. An obvious example would be the Internet trading company eBay, where to a large degree the software is the business. The more common situation is for organizations to use packaged software for routine functions such as accounting, personnel management, and distribution but to also need unique software to support specialized functions such as Web-based order entry. The creation of custom software is difficult. It is not a game for amateurs. A village is needed to create excellent software in the sense that a number of people with specialized skills are required. Assembling a group of highly skilled people is not enough. They must be well organized and follow a disciplined methodology. Earlier observations about the limitations of widely used methodologies should not be misinterpreted. The development of high-quality custom software demands the effective use of a formal methodology. The RITE Approach concepts can surround and shape the use of such methodologies, but they do not eliminate the need for them. The world of software development is a little like Middle Earth, the fictional land of J.R.R. Tolkien's The Lord of the Rings. It is beautiful, enchanting, and populated by eccentric individuals, with
danger lurking around every turn. Since few executives will have the luxury of avoiding a visit to this wondrous land for long, a guidebook is in order. The goal of this chapter is to provide an overview of the world of custom development without becoming buried in technical detail.

The Role of Management

Senior managers often avoid involvement in software development because they:

■ Don’t know what to ask or how to interpret the answer.
■ Don’t want to appear ignorant.
■ Are not confident about making a contribution.
■ Feel uncomfortable dealing with “computer people.”
■ Have experienced disdain or malicious compliance when attempting to help.

These concerns can be overcome by defining a clear role for management and through a better understanding of how the development process works. The role that executives who oversee IT projects need to play includes a number of elements:

■ Business process engineering. The person at the top of each part of the organization must make the final decisions as to exactly how critical functions should be performed. For example, only the top sales executive can decide exactly how the compensation plan for salespeople is going to work.
■ Making trade-off decisions. The most challenging part of most projects is making choices between conflicting goals such as delivery date versus functionality, cost versus quality, or meeting the requirements of one type of potential customer versus those of another. These choices should not be left to the technicians building the software.
■ Assigning accountability. Those approving projects must clearly identify in advance who will be accountable for ensuring their success.

■ Aligning project team priorities with those of management. Only a few project team members have been known to actually read minds. The others will only consider the information they have been given as plans are formulated. Management must make sure that plans are not made invalid due to a lack of necessary information, especially when events have caused the priorities of senior management to change.

Some of the specific things executives need to do include:

■ Approve plans. The approval of project plans and the resulting design need to be formal events presided over by an appropriate set of executives.
■ Measure the fit. As a design emerges, key executives need to evaluate it to see if it is consistent with their view of how functions should be performed.
■ Oversee project reviews. All projects face a steady stream of challenges. Periodic detailed management reviews help resolve problems as early as possible.
■ Set an example. People constantly observe where senior managers spend their time and react by supporting those activities that are currently getting attention.
■ Sell to their peers. Most development projects impact processes in other parts of the organization. For example, bar code readers in the warehouse might save inventory cost but require the people who work there to do more work. Members of the project team frequently need the help of a senior manager in selling a change that benefits the whole but not every part of the organization.
■ Ferret out errors and misunderstandings. The best-qualified people need to review plans and designs with a critical eye before misunderstandings become program code.

The Development Challenge

Software development is difficult because it stretches people to their limits. First, a large number of people need to agree on a set of complex and abstract ideas. Then the result needs to be turned into something concrete, on schedule and within budget. In a way, software engineering is a unique form of manufacturing whose output is intangible. The challenges developers face include the following:

■ There is little agreement on how to do the job properly.
■ A large number of variables must be considered.
■ A great deal of information must be communicated between many people.
■ Success generates an unquenchable thirst for more development.
■ Numerous people, each with specialized skills, are required.
■ A high degree of cooperation is needed from individuals who love independence.
■ Success is hard to define and shifts over time.
■ The problem being addressed won’t stand still.
■ The tools available to developers constantly evolve.
■ Quality is difficult to attain and hard to measure.

Good development managers need the technical mind of Bill Gates, the organizational skill of Caesar Augustus, and the patience of Gandhi. As if this were not enough, they must also have exceptional people management skills. One of many dilemmas they face is that those with the highest levels of technical skill are often the hardest to manage. Small, talent-rich teams offer the greatest chance of success, but only if a strong leader is able to get them to work effectively together.

Good developers like to work hard, but within limits. Their morale can be as delicate as the early versions of the code they create. It is difficult to distinguish between the normal level of griping and serious discontent. In the heat of the development war, there is a fine line between the camaraderie of the battle and the discouragement of the death march.

One of many reasons why the RITE Approach focuses so strongly on scope control, especially as it relates to requirements for custom code, is the following chain of logic:

■ As design complexity increases, so does the number of programmers needed.
■ The productivity of each programmer decreases as the total number involved increases.
■ The need for discipline, controls, and rigid procedures increases as the staff grows.
■ The accuracy of estimates declines while risks rise.
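The second point in this chain can be illustrated with back-of-the-envelope arithmetic: the number of pairwise communication channels in a team grows quadratically with its size, so each added programmer increases coordination overhead faster than it increases capacity. The sketch below is our own illustration of that arithmetic, not a formula from any methodology:

```python
def communication_channels(team_size: int) -> int:
    # Pairwise communication channels in a team of n people: n * (n - 1) / 2
    return team_size * (team_size - 1) // 2

# Tripling a team from 10 to 30 people grows the coordination
# burden roughly tenfold, not threefold.
for n in (3, 10, 30):
    print(n, "people ->", communication_channels(n), "channels")
```

This is why the chain of logic ends where it does: the larger the staff, the more of each person’s day is consumed by coordination rather than programming.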

A proper methodology helps. Using one requires education, enforcement, and control mechanisms, as well as the cooperation, if not the wild enthusiasm, of the development staff. Those who have not used disciplined methodologies can be slow to get on board. Those who grew up using them can feel naked working without one. It helps to build a team that includes respected technical experts who are already champions of the methodology to be used.

Improving the development process takes work and needs to be managed just like any other development project. Managers should be suspicious if there is not a great deal of discussion about the methodology being used, along with requests for funding for tools and training. The absence of such discussion could mean that an appropriate development methodology is not really being used at all.

Another Visit to the Waterfall

Earlier chapters engaged in some “waterfall bashing” by exposing some of the weaknesses in highly linear software development methodologies:

1. Performing steps in order can take a long time.
2. Requirements need to be finalized at the start.
3. Changing requirements can disrupt plans.
4. Flexibility is sacrificed in order to minimize reworking the design and code.

Altering management’s view of projects, limiting scope, and engaging in evolutionary development can reduce the impact of these limitations. Doing so will require the use of a development methodology that in most cases will still, deep in its heart, have the waterfall concept as its foundation, because the logic underlying the waterfall development cycle still has some validity. The problem at hand needs to be analyzed. A solution must be formulated, and a plan is then needed to translate the resulting concept into something real. The building blocks need to be bought or constructed and then assembled. It all needs to be tested to be sure that it works, and finally an effort is needed to put the resulting system in place. None of the advanced ideas that we and others advocate changes this fundamental logic; they simply find ways to reduce the impact of its weaknesses.

With this in mind, it is appropriate to look more closely at what happens during each stage in the development of software. The terms we use for those stages are Planning, Design, Coding, Test, Delivery, and Service. Each stage contains processes, or activities, for both accomplishing the task at hand and controlling for quality.

The Planning Stage of Development

Much of the focus of the RITE Approach has been on techniques to limit the scope of project plans. Once these techniques have been successfully employed, it is still necessary to turn the resulting specifications (the manifestation of scope) into a working system. Doing so demands the creation of an excellent development plan. Such a plan is no guarantee of success, but a poor one will lead to almost certain failure. The planning stage activities include gathering the project requirements, estimating the work, identifying the dependencies, and creating the project development plan. The control processes include validating the requirements, approving the development plan, and controlling plan changes. This is the project stage where application of the RITE Approach ideas can have the greatest impact by helping control scope and manage expectations.

The great dilemma of the planning stage is how much time to spend gathering and confirming requirements. If at the end of this effort too little detail is available, then the resulting design will be weak. On the other hand, the dangers of spending too much time developing plans have already been well documented. The answer is easy to state but hard to make work in practice: Limit the overall scope of the project, but then take the time to understand the task that is being undertaken in minute detail.

The need to be fastidious about details is precisely the reason why it is so important to limit the scope of each stage of development. In the end, someone will have made a decision or an assumption regarding every detail at some point along the development path. As many of those decisions as possible need to be made as part of the chosen development methodology. The unfortunate alternative will be for a junior programmer to make choices in the middle of the night during the test stage.

Requirements go far beyond the things that end users actually see. They include reliability, performance, choices of technology building blocks, relationships to existing applications, and even how easy it will be to support the code being written.

The Need for a Balanced Development Plan

The output of this stage of the project is a development plan that addresses four key issues:

1. The capability that is to be delivered.
2. The resources that will be needed.
3. The cost of each element.
4. The time scheduled for each activity.

These four variables must be brought into balance during the planning stage, at least for a moment in time. For example, the capability promised must be achievable with the requested resources, budget, and time frame. As the plan is tested for balance, inconsistencies need to be resolved—often by making trade-offs such as ensuring higher quality by increasing the time allowed for testing.
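To make the idea of balance concrete, the four variables can be checked against one another arithmetically. The sketch below is our own illustration, not a formula from any methodology; the function name and every figure in it are invented for the example:

```python
def plan_is_balanced(effort_needed: float, staff: int, months: float,
                     budget: float, cost_per_person_month: float) -> bool:
    """Check capability (effort), resources (staff), time, and cost
    against one another. All four must fit for the plan to balance."""
    capacity = staff * months                       # person-months available
    planned_cost = capacity * cost_per_person_month
    return effort_needed <= capacity and planned_cost <= budget

# 100 person-months of work, 5 people, 18 months: capacity is only 90.
print(plan_is_balanced(100, 5, 18, 1_000_000, 9_000))   # out of balance
# The trade-off: add a sixth person. Capacity becomes 108 person-months
# and the cost, 972,000, still fits the budget.
print(plan_is_balanced(100, 6, 18, 1_000_000, 9_000))   # balanced
```

Real rebalancing involves far more than one inequality, of course, but the discipline is the same: every change to one of the four variables must be tested against the other three.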

Project plans must continually be rebalanced in the face of external forces and assumptions that turn out to be inaccurate. Experienced development managers know that unforeseen problems are certain to arise over time. As discussed in Chapter 8, they build contingencies into the plan to allow them to deal with problems as they arise.

Reviewing and Approving the Project Plan

Once a project plan has been created and balanced, the approval of management is sought. Most of the discussion about project plans usually centers around cost, schedules, and likely benefits. This is appropriate, but it is also important for the executives who review plans to develop an understanding of exactly how the proposed design is going to lead to the expected benefits.

Frequently the style of executive review meetings is crisp and terse. Using time wisely is always in order, but too often time pressures become an excuse to limit the level of detail to which executives are exposed. The creation of custom software is expensive, resource intensive, and stressful. If busy executives cannot spare the time to ensure that it is done well, then the effort may not be worth undertaking at all. The most effective thing executives can do at this stage is to ask a number of detailed questions and take the time to be sure that the answers are clearly understood. Examples of the kinds of things that need to be discussed include:

1. How were the requirements gathered and validated?
2. What was done to put the plan in balance?
3. In what way have the RITE Approach principles been applied?
4. Will a disciplined control process be used to approve changes to the plan?
5. How will plan changes be communicated?
6. From where will the skilled people come?
7. What development tools will be used?

8. How much and what kind of customer involvement is assumed?
9. What are the most critical changes to the company’s business processes?
10. Where are the greatest points of risk?
11. What is the plan for technical support and service?

The Plan of Record

A major theme of this book is that IT projects are difficult because neither the problem nor the environment will normally stay constant long enough for a sophisticated solution to be put in place. The practical implication of this is that plans will tend to become obsolete over time regardless of how carefully they were crafted. It would be a mistake to conclude from this that the plan in place should not be taken seriously. The opposite is true.

After the project approval activity just described, it is essential at all times to have a formal development plan in place to guide everyone involved. The approved plan toward which everyone is working can be called the plan of record. It needs to be complete, detailed, understandable, and easily accessible. There can be many plans under consideration, but there can be only one plan of record at a given point in time. Since changes will inevitably occur, a rigorous and disciplined process is needed by which changes to the plan of record are made. Such changes need to go through a formal change process and be clearly communicated to those who need to know. Documentation of the plan of record must then rapidly be brought up to date.

Thoughts about Prototyping

Prototyping is the intentional creation of code that will later be discarded in order to validate the proposed technical direction. A prototype is usually discarded because it is very difficult to “harden” prototype code into a real product of acceptable quality without sending it back through the disciplined methodology. Given the speed with which prototypes can now be created, there is little reason to be concerned about disposing of them when they have served their purpose. Prototyping is an excellent technique for three purposes:

1. Reducing risk by testing design theories and providing a proof of concept.
2. Providing a visualization tool so that developers and users can start designing the details of the user interface.
3. Providing a demonstration tool to marketing.

Prototyping is too often confused with iterative development, but they are not the same thing. Iterative development is a technique that can be part of a disciplined methodology that builds the actual product code. The two concepts can be used together, with prototyping proving critical assumptions before the investment is made in building permanent code.

The Design Stage of Development

The design stage is where the plan is turned into a detailed blueprint from which working software can be built. For example, the plan for a new shop floor control application might include the requirement for reporting utilization by work center at the end of each shift. During design, the format of the report will be created along with a determination as to exactly how the necessary data will be collected, calculated, and stored.

The project’s scope is largely determined during the planning stage, but it is possible for the scope to expand unnecessarily during design if too much gold plating is added (called scope creep by developers). A delicate balance needs to be struck between creating a high-quality and complete design and the addition of unnecessary frills. A good example would be a design decision to add impressive-looking graphics and animation to a Web page whose primary purpose is to collect data from internal users.

Design usually occurs at more than one level. High-level design establishes the overall structure (sometimes called the application architecture) and the relationships of major components. Low-level design defines interfaces, data layouts, screen layouts, communication flows, backup/recovery mechanisms, security features, and so forth. Once the high-level design is put in place, work can proceed among parallel teams on the lower levels.

At the design stage, many issues must be dealt with beyond turning functional specifications into code:

■ Performance, such as response time, capacity, utilization, and bandwidth.
■ Scalability, including the ability to grow without design changes.
■ Migration from existing applications.
■ Adherence to standards and industry architectures.
■ Usability for both the end users and administrators.
■ Procedures for installation.
■ Problem isolation and repair.
■ Globalization, including foreign languages, currency, and sort sequences.
■ Security.
■ Reliability.
■ User interface standards.

If a good job has been done defining scope and requirements, design becomes the most critical stage in software development. If an outstanding design is created, a group of competent programmers will have a relatively easy job turning it into working code. In fact, ease of implementation is a goal of skilled designers. On the other hand, a weak design will show up repeatedly in the form of problems during the code and test stages, and with customers discovering the remaining defects during the service stage.

Too few organizations recognize the value of a highly experienced designer. They make the mistake of assuming that anyone who is gifted at creating good code will also be good at design. The best designers have many years of experience; their knowledge is both broad and deep. They have an excellent understanding of the technology base and a practical approach to making technical choices. The most valuable also work well with others, patiently sharing their knowledge to uplift the skills of the group, and demonstrate strong leadership characteristics. It is no wonder that excellent designers are in short supply. One reason why the failure rate was so high among dot-com companies was the relatively low level of experience among the people they used to create their software.

While people with a high aptitude for programming are not always at the point of being great designers, the best designers are usually excellent programmers themselves. They therefore need to be kept involved in a mentoring or review role until the project is complete, to be sure the design is implemented in the best way.

Just as project plans need to be balanced, designs need to be carefully verified. Even the work of outstanding designers is susceptible to imperfect understanding, wrong assumptions, and changing priorities. Rigorous design reviews are an effective way to reduce the number of defects created during design. This type of control process is costly in terms of schedule and the time of the most valuable people, but it returns the cost many times over. Studies have shown that the cost of fixing a design error during the test stage is 8 to 10 times the cost of fixing it before design is complete. Fixing it during the service stage can cost 70 to 100 times as much. The goal of design reviews is therefore to determine if the design properly implements the requirements.

We were engaged by a dot-com client with a programming staff of 50 to evaluate its development process and make recommendations. The design leader was opposed to spending time on design reviews—in fact, on design process improvement of any kind. We asked how he liked to spend his time. “Doing design, helping others with design problems, and reviewing system architecture,” he said. We asked how much time he had spent on those activities during the previous month. “Almost none,” he admitted. “I’m too busy with emergencies and fighting fires.” We showed him how a small investment in process improvement would return control of his job to him.

Larger development organizations will benefit from the use of a formal design control group. The group has its own leadership and controls the development organization’s high-level designs, internal standards, design review schedules, and design control process. When managed correctly, the group promotes design consistency, standardization, best design practices, knowledge transfer, and product quality. Modern methodologies such as Use Case Methodology, the Unified Process, and the Agile movement are now providing innovative methods and tools that greatly assist the processes of design and validation, providing more productivity along with higher quality.

The key points for nontechnical management to remember about software design are:

■ Expect to spend more time in design than in coding.
■ The high-level design must fit together well for the low-level design to work.
■ Good programmers are not automatically good designers.
■ The most valuable designers play a key leadership role.
■ Design errors are harder and more costly to fix than coding errors.
■ Design errors are much cheaper to fix if caught during the design stage.
■ A formal review process is an essential control for high-quality design.

The Coding Stage of Development

Laypeople often refer to the entire process of software development as programming. In reality, writing code takes up less than one-third of the time spent on a software development project. The rest is spent on planning, design, testing, documentation, and deployment. Obviously, the coding step is vitally important as the point where preparation ends and construction begins. The control processes at this stage can include all types of testing as well as reviews of the code.

On projects of any size, the programming task is spread over a number of people whose work must then be integrated. The more people involved, the more important it is to use a tightly controlled and disciplined methodology. A good design breaks the task down into a large number of pieces that can be developed in parallel. The exact way this is done varies considerably depending on whether methodologies such as object-oriented programming or design-to-code tools have been adopted. The pieces (or objects) are constructed one at a time and evaluated by themselves in an activity called unit testing. Sometimes special code called scaffolding must be created to prove the viability of the new module if the ones that surround it are not yet available.

Good unit testing has become something of a lost art because most programmers do not seem to like it. They do not regard it as the science that it is. Unit testing must be taken seriously. Finished code should be “signature” work. When developers complete coding and unit test, each is certifying the quality of his or her workmanship and declaring, “This is the best that I can do.”

Code reviews are not used as widely as they should be. The goal is to have programmers examine each other’s work in detail in a nonthreatening atmosphere. A key goal of code reviews is to determine: Does this code implement the approved design?
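The unit testing and scaffolding ideas can be made concrete with a small sketch. Here a module under development computes an order total, while the pricing module it depends on does not exist yet; a stub stands in for it. The function names, the SKU, and the discount rule are all invented for the illustration:

```python
import unittest

def order_total(quantity, price_lookup, sku):
    """Module under test: compute an order total with a volume discount."""
    total = quantity * price_lookup(sku)
    if quantity >= 100:
        total *= 0.90          # 10% discount on large orders
    return round(total, 2)

def stub_price_lookup(sku):
    """Scaffolding: stands in for a pricing module that is not yet written."""
    return {"WIDGET": 2.50}[sku]

class OrderTotalUnitTest(unittest.TestCase):
    def test_small_order_pays_full_price(self):
        self.assertEqual(order_total(10, stub_price_lookup, "WIDGET"), 25.00)

    def test_large_order_gets_discount(self):
        self.assertEqual(order_total(100, stub_price_lookup, "WIDGET"), 225.00)
```

Running the file through `python -m unittest` executes both tests. When the real pricing module is finished, it replaces the stub without changing the tests, which is precisely what scaffolding is for.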

The Test Stage of Development

Once individual programs have been written and unit tested, the difficult process of integrating them into a unified whole begins. As this is done, all of the remaining errors, inconsistencies, and wrong assumptions need to be wrung out. One of the most important elements of this effort is the use of test cases, which are specially written programs and databases designed to verify that the code does what it is supposed to do.

Larger development organizations have the luxury of establishing independent groups of people who only perform testing. This has the advantage of using people who not only will apply dedicated effort to the test activities, but who can keep the original requirements in mind and verify that what was built is actually what was planned. Many companies call this group quality assurance, but we feel that this can encourage the thought that quality is added at the end. Calling this function the product test department might be a better choice.

Let’s look at different approaches to testing, each of which is designed to uncover certain types of defects. Component testing is the step where related modules are integrated and then tested as a group. As much of the code (called code coverage) and as many of the path combinations through the code (called path coverage) are tested as time and budget allow.

Application tests then verify the behavior of the application with users. All the functions should be covered, as well as the functional characteristics that affect users, such as usability, globalization, and interoperability. The best results are gained at this stage by using automated test tools that help to generate far more transactions than individuals typing in transactions could ever produce. This stage of testing determines:

■ Can the user perform all specified functions?
■ Are the functions related well to each other?
■ Are there any situations in which the user can get into trouble?
■ Is there a high degree of usability?
■ Are all the functional requirements met?
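The transaction-generation idea mentioned above can be sketched in a few lines: a driver feeds thousands of randomized transactions through a function under test, something no human tester could type by hand. The order-entry function, the SKUs, and the quantities here are all hypothetical:

```python
import random

def enter_order(qty, sku):
    """Hypothetical application function under test."""
    if qty <= 0:
        raise ValueError("quantity must be positive")
    return {"sku": sku, "qty": qty}

random.seed(42)          # a fixed seed makes any failure reproducible
failures = 0
for _ in range(10_000):  # far more transactions than manual entry allows
    qty = random.randint(1, 500)
    sku = random.choice(["A100", "B200", "C300"])
    result = enter_order(qty, sku)
    if result != {"sku": sku, "qty": qty}:
        failures += 1
print("failures:", failures)   # prints: failures: 0
```

Commercial test tools add scripting, load generation, and result capture on top of this basic loop, but the principle is the same: volume and repeatability that manual testing cannot match.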

The next step, product test or system test, is broader than testing the application by itself. It includes the interaction of the application with its underlying hardware and operating system platforms and the networks over which it runs. This test also includes the system characteristics such as performance, scalability, reliability, serviceability, availability, and recoverability. A large number of automated test cases is required. It answers questions like these:

■ Does the overall system behave as expected?
■ Can the product be easily installed?
■ Can problems be isolated and repaired?
■ Are there any missing components?
■ Does it perform well enough?
■ Is it stable and reliable?
■ Can data ever be permanently lost?

As each type of testing is performed, it is good practice to build a library of these test cases to be used to retest previously proven software after further changes have been introduced. The reapplication of previously run tests, called regression testing, makes sure that everything proven to work in the past continues to work as development proceeds. The same regression tests are used after the software has become operational, when further enhancements or fixes are incorporated. In this way, the next release of a product will be sure to run all of the current release’s functions flawlessly. Larger software developers even have dedicated computers, called a test bed, that continually run regression tests against the latest versions of code under development or enhancement.

All too often, small and medium-sized development organizations perform only limited unit testing and application testing, shortcutting or ignoring component, product, and regression tests. This is acceptable within limits. An appropriate test strategy is needed in each situation to customize the test requirements based on the characteristics of the project or product. A good test process helps the tester to ask the right questions and make conscious, informed choices—even if one of the choices is to eliminate a type of test.

The test process is in itself a control process because it controls the quality of the design and code activities that preceded it. It can also have its own control processes, such as reviews of the test strategy and test cases.
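A regression library can be as simple as an accumulating list of proven input/expected pairs that is re-run in full after every change. The tax routine, the rates, and the deliberately broken second version below are invented purely to show the mechanics:

```python
regression_cases = []   # the growing library of proven (input, expected) pairs

def add_case(value, expected):
    regression_cases.append((value, expected))

def run_regression(func):
    """Re-run every banked case; return those that no longer pass."""
    return [(v, e, func(v)) for v, e in regression_cases if func(v) != e]

# Release 1: a simple tax calculation, with cases banked as they are proven.
def sales_tax(amount):
    return round(amount * 0.07, 2)

add_case(100.00, 7.00)
add_case(19.99, 1.40)
assert run_regression(sales_tax) == []   # everything proven still works

# A later "enhancement" that silently breaks old behavior is caught at once.
def sales_tax_v2(amount):
    return round(amount * 0.07)          # bug: drops the cents

print(run_regression(sales_tax_v2))      # the broken cases are listed
```

A test bed does exactly this, continuously and at scale: every banked case from every prior release runs against the latest build, so a regression surfaces hours after it is introduced rather than months later in the field.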

The Delivery Stage of Development

Delivery includes all activities between the completion of the test stage and the general unrestricted availability of the product to customers. The most common are early support programs (such as an alpha or beta test), the final build and packaging of the product, and distribution. For an in-house product, the distribution might consist simply of putting the final version into production.

The delivery plan is really part of the development plan and is started back during the planning stage, when assumptions regarding delivery, packaging, and early programs are first made during the gathering of requirements. It is the collection point for all information regarding getting the tested product to customers. A good control process for this activity is to conduct a review of the delivery plan with attendees representing development, the service/customer support department, and marketing. These questions need to be answered satisfactorily:

■ Are all activities associated with the successful movement of this product to the marketplace covered?
■ Do we know exactly how the build, packaging, and distribution will take place?
■ Do we and our early customers know what to expect of the product and of each other?
■ Are support systems in place?
■ Can we deliver national languages as planned?
■ What is the impact of adding this product to an existing production facility?
■ Are all the requirements still met?

The Service Stage of Development

The service stage includes all activities required to support the product from the time it becomes generally available until the end of its life, primarily customer support (helping users with problems) and handling defects (isolating and repairing programming errors). The service process must be planned and implemented before the product becomes available.

There are many ways to deliver service. Here are some general considerations to keep in mind when preparing the service plan. Most deal with establishing service policies, which must meet business goals while being clear enough to avoid misunderstandings with customers.

■ Access. E-mail, telephone, Web site, fax back.
■ Tracking. Handling of calls, separating defects from usage problems, callback response times, distribution of repairs.
■ Support personnel. Location, training, organization.
■ Support tools. Help desk application, trouble ticket routing.
■ Warranty policies. Warranty period, conditions, registration, service outside of warranty.
■ Reporting. Gathering information during service to improve the product.
■ Process control. Measuring and improving the service process itself.
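The tracking item above, separating defects from usage problems, can be sketched as a minimal help desk record. Every name and field here is hypothetical, standing in for whatever a real help desk application provides:

```python
from dataclasses import dataclass

@dataclass
class Ticket:
    customer: str
    description: str
    kind: str = "usage"        # a usage problem until proven to be a defect

tickets = []

def open_ticket(customer, description):
    ticket = Ticket(customer, description)
    tickets.append(ticket)
    return ticket

def reclassify_as_defect(ticket):
    ticket.kind = "defect"     # routes the ticket to development, not support

t = open_ticket("Acme Corp", "report totals look wrong")
reclassify_as_defect(t)
defect_count = sum(1 for t in tickets if t.kind == "defect")
print("open defects:", defect_count)   # prints: open defects: 1
```

The distinction matters because the two kinds of tickets follow different paths: usage problems are resolved by support, while confirmed defects feed the repair and regression-testing machinery described earlier.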

It is worth noting that there are two primary opportunities for the company to interact with and influence its customers. The first is during relationship building and sales activities, which is a relatively happy time for the customer. The second is when the customer has a problem and might be expected to have a negative attitude. Companies have the opportunity to influence their customers’ satisfaction more in the first few minutes of a service call than at any other time—one way or the other—and can greatly benefit from a well-planned service process.

An appropriate control process would be a review of the service plan to ensure that questions like these can be answered:

■ Are we ready to deliver service?
■ Will this plan satisfy our customers?
■ Are physical resources in place?
■ Are personnel in place?
■ Do we have a mechanism to collect what we learn during service?
■ Are our service policies clear?

Building Quality into Software

Quality is an important characteristic of good software, but one that is not easy to define. Some of the key elements of quality software are obvious to users: It doesn’t crash, it correctly performs all the intended functions, and it is easy to use. Other elements are less obvious: It scales to handle increasing volumes with ease and it is easy to service. None of these elements just happen. They are the result of quality practices applied throughout the entire development cycle. These practices are an integral part of a disciplined software methodology, and consist of both the control processes and testing described earlier.

There are three primary reasons a company should care about quality:

1. Maintaining customer satisfaction. Not just true defects, but perceived ones as well, such as difficulties with usability and performance, hurt a company’s reputation. Broken promises such as late deliveries also affect customer relationships.
2. Reducing warranty, service, and support cost. Maintaining a technical support operation, assisting users with problems, finding and fixing errors, testing, and distributing fixes are all expensive. We have already shown the high cost of fixing defects late in the development lifecycle.
3. Keeping the business under control. No business can withstand a continuing stream of late deliveries, over-budget projects, and unpredictable events, or the associated second-order effects such as declining staff morale.

The Use of In-Process Metrics

The primary purpose of testing and of the control processes, such as design reviews, is to discover defects and fix them as early in the development cycle as possible. In a well-managed development organization, these defects are recorded at the point of discovery during each activity and make up a database of quality metrics. This information can be combined and analyzed, resulting in a number of useful profiles.

Example 1: These metrics help development improve the current project by answering questions:

■ What kind of design and coding errors are we making?
■ Are there “soft spots” in the product that we should review further?
■ What proportion of design errors are serious?
■ Where should we perform causal analysis to get to the root of the problem?

Example 2: When the information for a number of projects or over a period of time is combined, it helps in improving the development methodology itself, so that the trend is toward increasing quality:

■ What proportion of our defect discoveries are “escapes” from earlier control processes (for example, a design error that escaped the design review)?
■ What proportion of defects were not discovered until a subsequent product release?
■ Did improvements to our design and coding processes result in injecting fewer defects?
■ Did improvements to our control and test processes result in finding more defects?
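The escape analysis in Example 2 can be illustrated with a few lines of code. This is a minimal sketch: the phase names and defect records are invented for illustration, not drawn from any particular metrics database.

```python
# Minimal sketch of an in-process quality metric: what fraction of
# the defects injected in a phase "escaped" that phase's control
# process? Phase names and records here are purely illustrative.
PHASES = ["design", "code", "test", "field"]

def escape_rate(defects, phase):
    """Fraction of defects injected in `phase` that were not caught
    until a later phase. `defects` is a list of
    (phase_injected, phase_discovered) pairs."""
    injected = [d for d in defects if d[0] == phase]
    if not injected:
        return 0.0
    escaped = [d for d in injected
               if PHASES.index(d[1]) > PHASES.index(d[0])]
    return len(escaped) / len(injected)

defects = [
    ("design", "design"),  # caught by the design review
    ("design", "test"),    # escaped the review, found in test
    ("code", "code"),      # caught by a code inspection
    ("code", "field"),     # escaped all the way to a customer
]
print(escape_rate(defects, "design"))  # 0.5
```

A real metrics database would also record severity, the activity that found each defect, and the repair, so that the statistical models of Example 3 can be fitted to actual discovery rates.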

Example 3: The defect injection, discovery, and repair rates can be statistically modeled over time to capture an organization’s actual experience. These models are useful in telling development what a project’s current status really is and how much more time should be expected until completion.

The Underlying Tool Base

We have presented the elements of a traditional development process, and Chapter 15 will describe some modern methodologies.


However, we cannot finish this chapter without a short discussion of the importance of providing development with the right tools, which underlie and support all other activities. Tools play an essential role in all modern development shops. We have seen that development processes are complex and need tangible controls in place. Tools have become an everyday necessity because they help to manage complexity and to store and retrieve the information necessary for reviews. Tools substantially enhance the effectiveness of a development process when they are combined with specified procedures. Many tools have been developed to support specific process methodologies, making those methodologies easier to execute and more reliably enforced. Tools are designed for specific functions, and not every process needs to be automated with the support of a tool. Here are some common tool categories:

1. Change management. Administration of requirements, plan content, and changes thereto. Required by all development organizations.
2. Configuration management. Control of source and object code, as well as the procedures for the product build process. Required by all shops.
3. Problem management. Automation of help desk functions and tracking of all internal and external problems to completion. Required by all shops.
4. Design. Unified Modeling Language (UML), refactoring, requirements capture, object class diagrams.
5. Coding.
   a. Integrated development environments (IDEs).
   b. Code generation.
      ■ Visual constructors.
      ■ Generation from design documents.
   c. Refactoring.
   d. Code analysis.
   e. Problem isolation and debugging.


6. Test.
   a. Automated test execution.
   b. Tracking of test case execution and successes.
   c. Performance analysis.
   d. Reliability and availability analysis.
7. In-process metrics. Statistical analysis and measurement databases for quality assurance. All shops should perform basic defect measurement at a minimum.

Some Parting Thoughts

Everything presented in this chapter about software development and the need for disciplined process hopefully sounds obvious and sensible to people who have not worked on software development efforts. Unfortunately, this level of formality is commonly not adhered to by some of the brilliant freethinkers who populate the software development community. Many of them have been very successful using a highly informal process to rapidly create software when just a few people are involved. Such approaches rapidly fall apart as soon as the number of people working on a project increases.

It is commonplace for the development of a given body of software to start out as a modest undertaking that a handful of young and energetic people create using an informal process at best. Software that is successful immediately creates a hunger for more—capability, elegant interfaces, faster performance, and the like. This leads to follow-on projects of greater scope and complexity, where the informal process that created the base system is no longer adequate. Delaying the use of a rigorous development methodology can be very costly.

The point is that there are times when someone (management or an experienced advisor) needs to provide the “adult supervision” for development efforts—forcing developers to accept more discipline than they would like. It is necessary, but not particularly fun, to be the one who must force adolescent development teams to do the equivalent of eating their vegetables, finishing their homework, and going to bed on time.


Summary

The primary goal of this chapter is to help nontechnical people involved with IT projects appreciate all that is involved in the creation of complex, high-quality software. Some of the more important points we hope readers will take away from this overview are:

■ The creation of high-quality software is expensive, time consuming, and resource intensive.
■ Senior management has a crucial and unique role to play.
■ The use of a disciplined methodology is essential.
■ There are many methodologies from which to choose, but all of them have to accomplish the stages of Planning, Design, Coding, Test, Delivery, and Service.
■ All methodologies include controls to assist in project management and ensure quality.
■ The experience and skills of the people involved, especially those who create the design, will have a great impact on the result.

Chapter Fifteen

The Changing World of Software Development

In the middle of a great battle, the individual soldiers involved rarely have a clear idea of what is happening in the war beyond the threats that they currently face on their own battlefield. In a similar way, those who fight to improve the body of software that supports their organization rarely have the luxury of taking a broad look. A military analogy seems appropriate here since we are all active participants in the information technology revolution. Live ammunition is in use, and organizations that manage IT badly are being wounded.

The IT revolution is being driven by three forces:

1. Hardware. Microprocessors and storage.
2. Networking. The way electronic devices are connected.
3. Software. The logic that provides value.

On the hardware front, it now seems that Moore’s Law was pessimistic. The cofounder of Intel who predicted that the power of microprocessors would double every 18 months did not foresee that the pace of improvement would eventually accelerate. As a result, the capacity of processors keeps growing while their size, cost, and power consumption drop. The result is an explosion in the number and sophistication of processors and in the ways in which they are being used.


At the same time, the ability to connect this vast array of increasingly intelligent devices to each other relentlessly improves at a similar rate. The vast promise of the Internet is slowly being realized as its speed, reliability, and reach continue to expand. Most new applications are now being written to Internet standards.

Software has become the shortest leg on this stool. The technology for creating software continues to improve, but at a slower pace than hardware, storage, bandwidth, and networking. In comparison with what passed for leading-edge software only a few years ago, the improvement has been dramatic. In absolute terms, we still have a long way to go. Much of the software that most organizations have in place today has severe limitations. The most obvious failings are that it is:

■ Difficult to learn and use.
■ Unreliable.
■ Awkward in handling unusual situations.
■ Slow to respond at times.
■ Not always available where it is needed.
■ Expensive and difficult to change.
■ Not always aligned with company goals.
■ Not completely integrated.

The Internet constantly exposes us to the best that software currently has to offer, including powerful graphics, sound and video, built-in training aids, and much more. This serves as a reminder of the limitations of the older software that is a part of most people’s working lives. The result is pressure to bring those older applications up to date.

Every year software development becomes both easier and more difficult at the same time—easier because development tools constantly improve but more difficult because greater skill is needed to use the latest tools and techniques. Constantly rising expectations also make development a greater challenge.

Chapter 14 provided an overview of the elemental activities involved in building high-quality custom software. This chapter will put into perspective a few of the more important changes taking place in the software development world.


The Trend toward Objects

Most people have heard of FORTRAN, COBOL, RPG, PL/1, and C. These are examples of what are called procedural programming languages. Using such languages, logic is constructed line by line based on functional flowcharts and then organized into routines and subroutines. The vast majority of business and scientific applications were written with these languages, and their use continues today.

Early in the modern computing age, techniques were developed to improve procedural programming. Development shops created standards and best practices. Programmers learned to name, organize, and document their code, as well as to reduce the amount of memory and CPU resources a program required. Given the high cost of machines relative to programmer salaries at that time, this trade-off made sense. In a harbinger of things to come, a methodology called structured programming evolved, which promoted the building of libraries of reusable functions by organizing code into subroutines that were called through standardized interfaces.

Promoting Reusability

Reusability has been greatly extended over the past 20 years through an approach to development called object-oriented (OO) programming. Some of the key principles behind OO programming are as follows:

■ Create software by assembling reusable building blocks called objects.
■ Write as little new code as possible.
■ Modify a proven object rather than create a new one.
■ Build completely new logic as objects that can later be reused.

OO programming forces developers to adopt a new way of thinking that is more abstract and complex. OO programmers break down the task at hand into objects that each perform a specific function. A number of reusable “primitive” objects can be combined to form more sophisticated ones, such as an object that can calculate compound interest or one that maintains information about individual customers. Well-designed OO software mimics the real-life structure of the function being automated. For example, a programmer developing an order entry system might incorporate a customer object, a payment method object, an inventory object, and a small amount of unique new code to tie it all together.

Although object technology has existed for many years, its popularity got a big boost with the development and standardization of the C++ language, because the large population of C language programmers could immediately start to become “object oriented.” C++ was adopted by tens of thousands of programmers and supported by all the popular operating systems, which provided ways to connect objects between applications and even between systems on a network.

OO programming can reduce development and test time, increase programmer productivity, and improve quality. Programmers who use objects do not need to know how they operate internally or the exact format of the data—only what they are capable of doing. This simplifies their job by incorporating programming that is already available and tested. The programmer who maintains one of the objects can improve it and add function without causing any change to the applications that have already been written to use the original object’s interface. This approach can even be extended so that objects use other objects, “inheriting” their capabilities, and further isolating applications from the details.

Building Applications from Objects

OO developers use a sophisticated development tool called an integrated development environment (IDE). They select existing objects from what is called a class library, make necessary modifications, and then use a rich graphical interface called a visual constructor to assemble the objects into working programs.
Some IDEs can even prevent the programmer from asking for capabilities that the object is not meant to provide. The unglamorous task of testing can be easier and more thorough with OO programming. An OO application can be instantly improved without effort if there is an improvement in any of the reusable components that make it up. For example, in the order entry application we just discussed, if additional approved credit cards were added to the payment method object, all the applications using the payment method object would be upgraded.

OO programming has changed software development forever by:

■ Dramatically increasing programmer productivity.
■ Facilitating the continuous improvement of software.
■ Propagating enhancements with less effort.
■ Creating a higher level of expectation from software.
■ Increasing the skill needed to be a developer.
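The order entry example above can be sketched in a few lines. All class and method names here are invented for illustration; the point is that the order-handling code depends only on what each object can do, so extending the payment method object’s accepted card list upgrades every application that uses it.

```python
# Illustrative sketch of building an application from reusable
# objects. All names are hypothetical.

class Customer:
    """A reusable object that maintains customer information."""
    def __init__(self, name, credit_limit):
        self.name = name
        self.credit_limit = credit_limit

class PaymentMethod:
    """A reusable object that knows which payment types we accept.
    Adding a card here upgrades every application that uses it."""
    accepted = {"visa", "mastercard"}

    def validate(self, card):
        return card in self.accepted

class Order:
    """The small amount of unique new code that ties it together."""
    def __init__(self, customer, payment):
        self.customer = customer
        self.payment = payment

    def accept(self, amount, card):
        return (amount <= self.customer.credit_limit
                and self.payment.validate(card))

order = Order(Customer("Acme Retail", 5000), PaymentMethod())
print(order.accept(1200, "visa"))    # True
print(order.accept(1200, "diners"))  # False
```

Note that `Order` never looks inside `Customer` or `PaymentMethod`; it relies only on their interfaces, which is what lets those objects be improved independently.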

Software package developers are rapidly turning toward OO programming, but few have totally completed the transition. This is because the change to OO programming goes to the heart of an application and offers an irresistible opportunity for complete reengineering.

Searching for the Perfect Object

OO programming creates a dilemma. The productivity of developers increases with the number and sophistication of prebuilt objects they use. Making greater use of such objects requires a broad knowledge of what is available. As time passes, the number of objects available to the programmer continues to increase, as does the challenge of understanding what they are and how to use them. The tools that help OO programmers find, select, and use available objects keep improving, but it is hard for programmers to keep up with the variety and capability of the objects and frameworks that are available, especially those not included within the IDE in use.

The OO vision is being further enhanced through the hot new idea of Web services. Under this concept, sophisticated software functions will reside on Internet servers. The code that makes up the Web service is used as a de facto part of the application whenever it is needed. Consider, for example, a new application that will allow retail stores to enter orders to an auto parts distributor through the Internet. The project plan calls for checking the credit of the retailer before accepting the order. In this case, the programmer might take advantage of a Web service offered by a credit bureau that would verify the credit rating of the retailer in a matter of seconds.

The Unfulfilled Promise

OO programming has not yet become the solution to all of mankind’s woes. It is a wonderful new technology that is still being invented. Its evolution is advanced enough to make it clearly the way of the future as well as the best available way to create new software of any significant complexity. At the same time, OO programming has a number of serious current limitations:

■ Proficiency requires a steep learning curve.
■ The industry boasts a limited number of highly experienced developers.
■ Standards remain incomplete.
■ Reusable primitive objects abound, but not organized collections (called frameworks).
■ Finding just the right object to reuse remains challenging.
■ Database management middleware that fully supports OO development is weak.
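The credit-bureau Web service described above might be consumed along the following lines. Everything here is hypothetical: the URL, the JSON response shape, and the rating threshold are invented, and a local stand-in replaces the real network call.

```python
# Hypothetical sketch of calling a credit-check Web service before
# accepting an order. The URL, response format, and threshold are
# all invented for illustration.
import json
from urllib import request

CREDIT_SERVICE_URL = "https://credit.example.com/check"  # hypothetical

def credit_ok(retailer_id, fetch=None, threshold=600):
    """Return True if the bureau's rating meets our threshold.
    `fetch` is injectable so the logic can run without a network."""
    if fetch is None:
        def fetch(url):
            with request.urlopen(url) as resp:
                return resp.read()
    raw = fetch(f"{CREDIT_SERVICE_URL}?retailer={retailer_id}")
    return json.loads(raw)["rating"] >= threshold

# Local stand-ins for the remote service:
good_bureau = lambda url: json.dumps({"rating": 710}).encode()
bad_bureau = lambda url: json.dumps({"rating": 480}).encode()
print(credit_ok("R-1001", fetch=good_bureau))  # True
print(credit_ok("R-1002", fetch=bad_bureau))   # False
```

The application treats the remote function as a de facto part of itself; only the few lines inside `fetch` know that the logic actually lives on someone else’s server.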

Follow the Leader

Intense competition to provide leadership for the OO movement has spawned a number of different approaches to resolving these issues. Usually, competition is a wonderful thing. It keeps prices down, encourages innovation, and rewards efficiency. However, competition can also create confusion, which is exactly what is currently happening in the rapidly changing universe of OO programming. The giants of the software industry, including Microsoft, IBM, Oracle, BEA, and Sun, are all investing heavily in products and services that attack different facets of these problem areas. Countless smaller software and service vendors are also helping the OO movement progress, as are a number of independent standard-setting bodies. Not surprisingly, each participant brings a different view to the debate.

Microsoft has the most to win or lose. Its ambition is to offer a comprehensive approach to OO programming in which it controls a number of critical standards, all delivered through Windows. Over 1 million programmers already follow some level of the rapidly evolving Microsoft approach to building software. Microsoft’s OO offerings are a part of its grander strategy called .NET.

The major alternative is a loose federation of vendors who tend to support the approach to OO programming used by Sun’s Java programming language. IBM provides much of the muscle for this community, which has taken the high ground of claiming that its approach is more open since so-called “pure Java” programs are not standardized by any one vendor and run on any operating system platform, not just Windows. This argument is somewhat weakened by the fact that Sun Microsystems has insisted on retaining control of many of the Java standards.

It appears that neither Microsoft nor the federation will be able to drive its opponent out of the market. Additional standards and products that bridge the gap between the two are therefore being rapidly developed. It is possible to create outstanding software using either alternative, so the decision is rarely a life-or-death choice, but it does create one more challenge for those who wish to create advanced software.

Java Perks Up

Java is a programming language that was created in the early 1990s by Sun Microsystems, which continues to control its evolution. It was originally based on C++, the first widely used OO programming language, and has since become the most popular language for OO development. The advantages of Java include the following:

■ Programs written to the “pure Java” standard can run on almost any computer.
■ Java has a symbiotic relationship with the Internet.
■ Programmers seem to enjoy using Java.
■ Java supports most accepted industry standards.
■ Improvements come from countless sources.
■ Outstanding development tools are available.


The Java language itself is only one facet of the OO development environment that has grown up around it. Many of the benefits of this broad environment can be obtained without ever actually using Java itself, since C++ and other OO languages can be used as well. Other elements of the Java environment include the various IDEs, object libraries, and commercial frameworks. In addition, a great deal of middleware offered by companies such as BEA, IBM, Oracle, and Seagull further supports the creation and operation of OO software.

Chapter 14 emphasized the importance of good design, and this is truer than ever in the Java OO environment. The design of the objects themselves and their relationships (continually improved through a process called refactoring) is essential to obtaining the benefits of reusability and scalability. In a Java project, the executive should expect to see lots of think time and collaborative design activity compared to the amount of coding time.

The Unified Process

With the object revolution well under way by the late 1980s, and while Java and the Internet computing model gained popularity in the mid-1990s, development methodologies were not standing still. Among a number of new approaches, one in particular has gained a considerable following and has been well documented by respected methodologists—the Unified Process. Structured enough to maintain the kind of discipline required by the waterfall methodology, but using an entirely different approach to the stages of the development cycle, the Unified Process adds the flexibility and responsiveness to change that the waterfall lacks. Its adaptability to changing requirements and environments makes it fit well with the RITE Approach principles. This is how it evolved.

Use Case Methodology

As illustrated throughout this book, the process of identifying, prioritizing, and documenting requirements has always been problematic, not only because the requirements are changing, but also because this process sits on the boundary between development and the outside world—normally business analysts who represent the company’s customers. This is where the hard work occurs of rendering business objectives into something that development can understand and translate into a usable product. Use Case Methodology emerged in the late 1980s to help solve this problem. By forcing requirements into a form that directly represented how users would see and interact with the system, a common language was found that both development and the business analysts could understand.

A company that provided mortgages to homeowners suffered from the traditional requirements process problems. The development team was ignoring requirements that its members could not understand. Neither development nor the business group could determine when the scope of a requirements document was complete. They could not even come to agreement on the nature of the requirements. Use Case Methodology was introduced, along with formal reviews as a control process. The reviews included representatives from development and the business function responsible for this application, and they were shown how to use a moderator and scribe so that no uncovered defect was lost. After several review iterations, both sides agreed on the requirements and found they had uncovered dozens of defects that would have been costly to fix later in the development cycle.

The Unified Modeling Language (UML)

Use Case Methodology focuses on the business requirements but does not formally address how to translate these requirements into design elements and then code. During the time frame when Use Case Methodology was emerging, so were other object-oriented methodologies, each having its own strengths and its own terminology.
Finally, several prominent methodologists, Ivar Jacobson, Grady Booch, and James Rumbaugh (known as the three amigos), developed a standardized modeling language to overcome this problem, integrating the best elements of existing methodologies and extending them to cover the whole product lifecycle. Use Case Methodology was included as its primary mechanism for gathering requirements.
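A use case like the mortgage example might be captured in a structured form that both business analysts and developers can read. The field names below follow common use-case templates; they are illustrative, not a formal UML artifact.

```python
# Illustrative structured rendering of a use case, so that both
# business analysts and developers review the same artifact.
use_case = {
    "name": "Apply for Mortgage",
    "actor": "Homeowner",
    "preconditions": ["Applicant identity has been verified"],
    "main_flow": [
        "Applicant submits income and property details",
        "System checks the application for completeness",
        "System returns a preliminary decision",
    ],
    "alternate_flows": {
        "incomplete application": "System lists the missing items",
    },
}

def is_reviewable(uc):
    """A simple check a review moderator might apply before a formal
    review: the mandatory sections must be present and nonempty."""
    return all(uc.get(field) for field in ("name", "actor", "main_flow"))

print(is_reviewable(use_case))          # True
print(is_reviewable({"name": "Stub"}))  # False
```

Because the flows are written in the users’ language, the business side can verify completeness while development can translate each step directly into design elements.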


The Rational Unified Process (RUP)

By the mid-1990s, the industry had mostly agreed upon coding technology (object-oriented and Java), a language to specify requirements and design (UML), and a common application architecture (the Internet model). Missing was the overall development process needed to manage all these activities. The creators of UML went beyond the language to describe a standard development methodology and how UML is used within it. The Unified Process is the grammar for applying UML throughout the entire development lifecycle.

The Unified Process is intended to address many of the practical realities of software development—the difficulty in realizing products that fulfill actual business needs; the tendency of a project to fail because of complexity; and the need to realize incremental return on investment. Simply speaking, the Unified Process calls for defining incremental deliveries of function based on a subset of use cases and then iterating the activities of development over the increment until it is complete. This approach minimizes risk by allowing continual reevaluation of the project and has the added benefit of returning some completed function as the project proceeds.

The Unified Process is built on Use Case Methodology, is designed to be both iterative and incremental, and relies on rigorous architecture and methodology for control. It therefore fits very well with the RITE Approach principles in this book. For more information on this breakthrough methodology, see the seminal textbook, The Unified Software Development Process by the three amigos (Jacobson et al., 1999). Jacobson, Booch, and Rumbaugh are now employed by Rational Software Corporation, which extended and implemented the Unified Process with a set of tools and branded it the Rational Unified Process (RUP).
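The incremental rhythm the Unified Process prescribes can be sketched as a loop over use-case subsets. The use-case names and increment size below are invented for illustration.

```python
# Sketch of the Unified Process rhythm: develop in increments, each
# built from a subset of use cases, reevaluating after each one.
def run_increments(use_cases, increment_size):
    """Yield the cumulative delivered function after each increment."""
    delivered = []
    backlog = list(use_cases)
    while backlog:
        increment = backlog[:increment_size]
        backlog = backlog[increment_size:]
        # ... design, code, and test just this increment ...
        delivered.extend(increment)
        yield list(delivered)  # working function returned early

releases = list(run_increments(
    ["enter order", "check credit", "ship goods", "send invoice"], 2))
print(releases[0])   # ['enter order', 'check credit']
print(releases[-1])  # all four use cases delivered
```

The project returns some working function after every increment, and the backlog can be reprioritized between iterations as requirements change, which is the source of the risk reduction described above.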

The Agile Movement

For many years, it seemed that the only alternative to having no methodology at all was to adopt one of the heavyweights—the Unified Process with the Unified Modeling Language or one of the many waterfall-based methods. But during the last part of the 1990s, a number of lighter approaches were created, each with its own committed advocates. It was informative and sometimes amusing to read the articles and letters as these proponents carried on a feud on behalf of their respective methodologies.

At a gathering in February 2001, 17 creators and proponents of the lightweight processes met at a ski resort to try to coalesce their views. They agreed that the term lightweight had a negative connotation, and they chose the word agile to describe their development processes. To the amazement of many onlookers, this group of independent personalities found enough agreement to produce the “Manifesto for Agile Software Development” (Beck et al., 2001), which was signed by all participants.

The “Manifesto” lays out 12 principles and four value statements. The authors made it clear that they were not against traditional techniques such as plans and documentation; however, they valued other things more. For example, the “Manifesto” says: “We value individuals and interactions over processes and tools,” and “We value responding to change over following a plan.” This isn’t intended to imply that there are no controls in Agile. Instead, controls are defined that are centered squarely on the daily programming tasks and rely upon skills, discipline, and communication among the developers. Practitioners of the Agile methodologies usually keep plans, documentation, and controls to the very minimum needed to accomplish the project.

Agile methodologies go further away from the formal structure of traditional processes in a quest to better balance the need for change and quick results against the need for control. Agile proponents believe that:

■ A company can never know the whole problem, so no amount of planning will provide full, accurate requirements and a complete design in one iteration.
■ There is an 80/20 rule of functional desirability.
■ Documentation is expensive and does not often provide real communication.

Agile methodologies strive for small iterations of the most necessary function first, so that the least necessary function falls off the end of the schedule when the customer is satisfied.

Let’s look at a simple example using Extreme Programming, probably the best known today of the Agile methodologies. A subset of the desired function has been selected and defined in use cases. A team of two experienced developers is assigned to work together to implement this group of use cases. One of the developers performs the design and coding task while the other developer serves as the control process by actively monitoring the activity for adherence to programming guidelines and design principles. Controls on the design are implemented through constant refactoring of design elements; in other words, as the opportunity for better design is recognized in the program, the code is immediately rewritten or refactored to reflect these improvements. This subset of function is continually subject to thorough testing and other quality assurance processes as new code is introduced.

So if Agile methodologies are getting so much attention, why would a company hesitate to select one or more of them as its official development methodology? There are very emotive arguments on both sides. The following concerns are voiced by the antagonists. If your company is thinking about adopting an Agile methodology, to be forewarned is to be forearmed. Agile methodologies:

■ May not scale to large teams and projects.
■ Assume a high degree of skill and motivation across much of the development team.
■ Require cohesive teams that will stay together.
■ Do not lend themselves to detailed planning.
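The constant refactoring described in the Extreme Programming example relies on tests that pin down behavior: the code can be restructured at any time because the tests immediately catch any change in what it does. A toy illustration, with an invented function and data:

```python
# Toy illustration of refactoring under test: the assertions define
# the behavior, so the body can be rewritten freely.
def order_total(lines):
    """Original, straightforward version."""
    total = 0.0
    for quantity, unit_price in lines:
        total += quantity * unit_price
    return total

def order_total_refactored(lines):
    """Refactored for clarity; behavior is unchanged."""
    return sum(quantity * unit_price for quantity, unit_price in lines)

lines = [(2, 3.50), (1, 4.00)]
assert order_total(lines) == order_total_refactored(lines) == 11.0
print(order_total(lines))  # 11.0
```

In an XP team, the second developer in the pair would insist that assertions like these exist before the rewrite begins; that is the control process in miniature.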

The Agile proponents don’t claim to be the solution for every team. Basically this is an admission that a methodology has to fit the culture, business need, and sophistication of the team. It can’t be imposed upon a company, but must be introduced and then accepted because of the benefits that are ultimately gained. In a very real sense, the Agile methodologies don’t just need to fit the culture; they are an essential part of the culture. This is because the processes are self-controlled, and every team member must be a reliable process champion.


Summary

As the science of software development slowly becomes a more exact one, it also becomes much more intricate and demanding. Applications of extraordinary capability and elegance can now be created through the use of OO programming and the best of the newer development methodologies. Doing this remains the great management challenge it has always been because the human nature issues that are the main focus of this book have not gone away.

The ideas behind object-oriented programming are over 20 years old, but making extensive use of OO programming has become practical only in recent years. Its use still remains largely confined to more advanced software developers, not because it is not a good idea but because it is difficult to learn and use, especially in the early stages. The invention of the Java programming language and development environment, and the adoption of the Internet as the most popular current programming model, have had a profound effect on development during the last eight years.

Companies looking for a highly structured methodology that is more modern than the waterfall approach might consider the Rational Unified Process, which has emerged as an industry leader and is backed up by a comprehensive set of tools. A set of less structured methodologies makes up the Agile movement. Companies that demand speed and flexibility, and have a very experienced programming staff, might consider this approach.

Chapter Sixteen

A Review of the RITE Approach

More than $1 trillion is spent each year worldwide on information technology products and services. While the return on part of that investment is very high, a great deal of it is not being spent as wisely as possible. Discovering ways to increase the return on investments in IT is thus a subject of importance to the management of most organizations.

Projects turn the potential of IT into benefits, but managing them has never been easy. Recent years have seen steady improvements in techniques for managing projects. At the same time, the complexity of what needs to be done increases relentlessly. As a result, the rate of success remains unacceptably low.

A new way to undertake IT projects has been discovered by observing the differences between those that succeed and those that fail. The name that has been given to this new line of thinking is the RITE Approach (which stands for Revolutionizing Information Technology Effectively). IT projects have been failing for similar reasons for decades. It is time to take a fresh look at the way they are organized and managed in order to improve the odds of success. The first step involves understanding what goes wrong.



Revolutionizing IT

What Often Goes Wrong

The path that failed projects follow is well worn. At the outset, the people involved in a new project are generally energetic and optimistic. They are given a broad mandate by management to see what can be done about a specific problem or opportunity. The first step taken is to evaluate the situation in some detail. As this is done, it is difficult to decide where to stop because every problem seems to connect to many others. Knowing which related problems should be attacked or ignored is never easy.

An understanding of the problem leads to the formulation of a proposed solution. This is presented in the form of a project plan. Management understandably wants to see this plan before a great deal of time and money are spent. They expect assurances that the projected benefits will materialize and that the budget will not be exceeded. At this early stage, the project team rarely has the amount of information needed to make such assurances. The team is therefore encouraged to propose a solution that is as comprehensive as possible and to carefully document all the assumptions made. Such solutions are expensive, time consuming, and resource intensive.

It is the norm for projects to be led by an IT professional either from the internal staff or an outside service firm. Representatives from the community of end users work together with the IT professionals to create the plan. The users are responsible for defining what is needed, and the IT professionals formulate a proposed solution based on their knowledge of technology. The result of this seemingly logical process is often failure as the following dynamics occur:
■ Users overload the requirement specification out of fear that they will be criticized later for leaving something important out.
■ The IT professionals develop an elegant solution out of a desire to provide what was requested and because doing so will lead to challenging work.
■ The scope of the proposed solution spirals upward.
■ Management has little choice but to approve the carefully formulated plan that is presented.


■ As time passes, the underlying assumptions fail to hold true as conditions change.
■ These unforeseen events become an excuse for schedules to lengthen and costs to increase.
■ When the proposed solution nears completion, it becomes clear that the design was imperfect.
■ The end users and IT professionals can each safely blame the other.
■ In fortunate situations, the project eventually is completed, albeit late and over budget.
■ Sometimes management decides to abandon the whole thing.

Everyone who has worked around IT projects for more than a few years can tell personal stories that follow this outline almost exactly.

The Reasons for Failure

Some of the root causes of project failure are:
■ Unrealistic expectations by management.
■ A flawed way of thinking about how projects should be organized and managed.
■ The lack of an effective mechanism to make trade-offs.
■ A hidden bias toward more sophisticated solutions by both users and IT professionals.
■ Continuous growth in project scope that increases the time and resources needed.
■ Changes in the environment before the project is complete.

These underlying causes are often the direct result of the way in which projects are undertaken by the organization and of decisions made at the outset. The fate of a project can be determined to a large degree in the early stages, often before any formal effort begins.


What Sometimes Goes Right

Not all projects fail. Those that are successful usually follow a different pattern. Characteristics that are often present when projects succeed include:
■ Strong leadership.
■ Time control.
■ Resource limitation.
■ Scope control.
■ Staged evolution.
■ Concept recycling.

The pattern of behavior that leads to success is similar to that which is naturally adopted when an emergency arises. In such situations, the first priority is to come up with a workable solution as rapidly as possible. The resources that can be applied are usually limited, and it is commonplace for a strong leader to take control. The most immediate needs are addressed first, and less important issues are resolved one at a time, in priority order. Little effort is expended trying to invent innovative solutions.

More Realistic Assumptions

Understanding the dynamics that lead to failure is the first step toward preventing them. Some of the realistic assumptions that management needs to make include the following:
■ It is impractical to solve every aspect of a problem.
■ Complex problems are made up of an intricate web of smaller ones.
■ A high percentage of benefits come from a small part of the effort.
■ The nature of the problem will change over time.
■ The more time passes, the more change will occur.


■ Any complex design will be imperfect.
■ Change will meet resistance.
■ There will not be enough human talent available to create the optimum solution.

When examined exhaustively, problems become increasingly complex. It becomes tempting to try to make a great leap forward that resolves every issue that has been uncovered. Such attempts usually fail because of changing conditions, invalid assumptions, and the difficulty of creating perfect designs for anything complex. Resisting the natural tendency of projects to become complex is thus the foundation of a better way to manage them.

The RITE Way to View Projects

The RITE Approach starts by asking management to adopt a nontraditional view of what a project is and how it should be undertaken. It is acknowledged that the problem needs to be examined and that a long-term solution needs to be formulated. At the same time, management must accept the practical limitations that all planning efforts face. Plans created rapidly by a limited number of carefully chosen people are superior to those created by larger numbers of people over a long period of time.

The RITE Approach thus calls for a small task force to rapidly formulate a long-term vision of how the problem should be resolved. This vision is not expected to be either detailed or immune to change. Within the framework of the vision, specific projects are identified that attempt to make rapid progress toward it. No single project is expected to accomplish the full vision. The long-term vision is updated after each project is completed, in light of what was learned and in reaction to the changes that have occurred.

The RITE Approach strives for continuous, incremental improvement, an approach that imitates evolution. The objective is rapid progress, not perfection achieved however long it takes.


A New Way to Define Success

The RITE Approach takes a very different view of what constitutes success. It starts with the belief that success can only be determined after the fact because plans made at the outset are never perfect. Striving to create comprehensive plans leads directly to excessive complexity. A project is successful if progress is made toward the vision, not if perfection is achieved. The most important measurement of success is the extent to which the benefits provided are worth the resources that are spent. By this definition, a project can be highly successful even if it does not exactly follow the original plan.

The questions management should ask after project completion include the following:
■ How long did this specific project take?
■ What resources were used?
■ How much was spent?
■ What worked and what didn’t?
■ What measurable benefits were provided?
■ Were there permanent improvements in business processes?
■ Is the organization well positioned to take the next logical step?
■ Was progress made toward the ultimate goal?

Scope Control

Adopting a philosophy of continuous incremental improvement or evolution is the first step toward controlling the scope of projects. The RITE Approach includes a number of other techniques for scope control:
■ Let time determine scope.
■ Control the use of resources.
■ Limit the design team size.


■ Gauge your ability to absorb change.
■ Imitate frequently, invent selectively.

The time during which conditions will remain relatively stable within a given organization is usually less than a year and sometimes less than six months. Projects need to be completed during periods of relative stability, before the assumptions on which they are built change. The best way to be sure that this happens is to let time determine the project’s scope. Specifically, the completion date for the project should be set before deciding exactly what will be done. Establishing the completion date at the outset lowers the risk that change will make the design obsolete. Shorter projects also consume fewer resources, cause less disruption, and incur less risk. Most importantly, benefits arrive sooner.

The availability of resources, especially the time of key people, should also be considered when determining project scope. The ability of the organization to absorb change is yet another factor that should be used to decide how ambitious an undertaking should be.

Senior management can play an important role in scope control. Some of the specific things management can do to help control scope are as follows:
■ Create an air of urgency.
■ Examine exactly how project benefits will be delivered.
■ Make realistic assumptions.
■ Reward progress, not perfection.
■ Don’t expect detailed long-term plans.
■ Forgive mistakes when the time lost was short and the cost low.
■ Assign accountability to a single person.

Perhaps the most important thing executives can do is to establish effective ground rules for the project by asking the right questions at the outset. The central question for most projects should be: What can be done rapidly that will have a major positive impact, entail an acceptable level of risk, and use available resources?


Accountability

Projects are pulled toward complexity because there are always good reasons to include more capability in the solution. Each attempt to control scope therefore involves making a trade-off. Removing a feature, for example, means sacrificing a valuable benefit and perhaps losing the support of an important person; retaining that feature lengthens the project, increases cost, and requires resources that are not readily available. Trade-offs are made only when a mechanism is in place to ensure that they occur.

The best way to encourage the making of trade-offs is to assign total accountability for the success of the project to a single person. That person should come from the part of the organization that has the most to gain or lose. Management must give accountable project leaders the authority to make trade-off decisions and accept that many of them will be unpopular.

As a general principle, those who use IT systems should be totally accountable for them. The person responsible for sales will therefore be responsible for the cost, quality, and capability of the information systems that support the selling function. The internal IT department might be used to help create, operate, and maintain these systems but will not be responsible for their functional characteristics or value to the organization.

Fully accountable project leaders must make trade-offs that strike a balance among the following eight critical factors:
1. Scope. The definition of what the project will accomplish.
2. Benefits. The net value the project creates.
3. Time. The period between approval and the delivery of benefits.
4. Disruption. The negative impact of the change on the organization.
5. Cost. The profit and cash impact.
6. Risk. The probability that things will not work out as expected.
7. Resources. The talent and skills that will need to be dedicated to the effort.
8. Quality. The reliability, availability, performance, and user satisfaction of the solution.


Imitate, Don’t Invent

The purpose of most projects is to improve the way a business process is performed. Success is improvement of the existing process, as opposed to the invention of a unique approach. Time, cost, and risk are all reduced when the improvement is based on a business process that has been proven to work elsewhere. An objective therefore needs to be set to imitate as much as practical and invent as little as possible. There will be times when invention is appropriate, but the starting assumption must be that a proven solution that can be discovered and imitated already exists. Only after determining conclusively that a solution that can be recycled is not available should an attempt at true invention be considered. Even then, it is appropriate to focus any new invention on high-impact aspects of the solution, not on cosmetics or features of marginal value.

In those cases where significant innovation is required, it is appropriate to rapidly create a structured test to prove that the new concept is likely to work in practice. Under these circumstances, construction of a proof-of-concept prototype can reduce risk. Investing in a prototype that proves the central assumptions behind a new concept offers a number of benefits:
■ User resistance is reduced if an idea has been proven to work on a small scale.
■ Risk to the organization decreases.
■ Willingness to try innovative things increases when risk and investment decrease.
■ Vulnerability to budget cuts is less of an issue.
■ Dependencies and conflicting requirements are identified early.

Favor Packaged Software

The best way to imitate business processes that have been proven elsewhere is through the evaluation and use of packaged software. Mature packages represent the combined experience of a large number of organizations that have faced similar issues. Taking advantage of what they have collectively learned is far superior to attempting to create a solution optimized for one organization.

Package evaluation should be based on a few simple principles:
■ Examine what is available before deciding what you want.
■ The best test of a package is how effectively it is being used.
■ The least valuable test is how good it looks in a formal demo.
■ Packages are the best way to discover and imitate proven business processes.
■ Look for the reason behind every design element.
■ Encourage the use of packaged software exactly as it was designed.
■ Adapt to the package to the extent practical.
■ Invest heavily in training.
■ A long selection process decreases the quality of the effort.

The RITE Approach to package selection is quite different from that which is commonly used. The important steps in the process include the following:
■ Assemble the project team.
■ Line up outside help.
■ Set a goal to make a decision rapidly.
■ Identify the most likely choices.
■ Visit reference accounts.
■ Adjust your thinking.
■ View structured demos.
■ Decide what matters.
■ Document your decisions.
■ Look for the leading package.
■ Strike a deal.
■ Set up a conference room pilot.
■ Beat the stuffing out of it.
■ Be prepared to try the second choice.


■ Evaluate implementation providers.
■ Plan for a staged implementation.
■ Obtain approval and buy the product.
■ Invest heavily in training.
■ Implement, let it settle, evaluate, and then upgrade.

Using Outside Resources

A large part of every project is the selection and integration of many different technology building blocks. These building blocks include things such as packaged applications, development tools, report generators, Web application servers, object frameworks, and Web services. A significant amount of expertise is often needed to use these building blocks. It has become impractical for the internal IT staff to maintain all the necessary expertise. The result is the need to use outside experts on a high percentage of projects.

The RITE Approach includes suggestions for undertaking projects when outside service providers contribute much of the necessary technical expertise. The core principle is that a modest number of outstanding people are more valuable than a large number of average ones. The practical implication is that clients need to control the numbers of outside experts involved in projects and then attempt to maximize the skill level of those used. Doing so is not easy, since this is not the way many service firms like to manage engagements. Larger service providers often strongly prefer projects where a large number of their people can work for months or even years at a time. Many also have large numbers of inexperienced people on staff who need placement.

This potential conflict can be resolved with the following framework for a working relationship:
■ Both parties acknowledge that projects are dynamic and subject to change.
■ The client provides a leader responsible for making decisions and trade-offs.


■ The service provider helps manage the project but follows direction from the leader.
■ The schedule and resources are fixed, and the scope is adjusted to meet them.
■ The client agrees that the service provider is entitled to make a profit.
■ Clients expect the people assigned to be capable and to work in their interest.
■ Service providers are not responsible for the ultimate cost of the project.
■ The project team consists of a limited number of highly experienced and capable people.
■ Service providers are entitled to charge high rates for the most capable people.
■ The engagement forms the foundation for a long-term relationship.
■ The service provider can expect to participate in follow-on efforts.

Under this arrangement, clients make key decisions and tradeoffs and assume the associated risks. Service providers agree to participate in smaller projects than they would prefer, where the risk they assume is limited. Rates are fair to both parties, and there is an opportunity to build a profitable long-term relationship. Under ideal circumstances, the long-term relationship allows service providers to evolve into trusted advisors to their clients.

Managing IT Professionals

Great project management techniques alone do not ensure success: People with appropriate skills must be involved. Obtaining help from outside the organization will be necessary at times, but there is no substitute for a strong internal IT function. The way in which each organization assembles and uses IT products is so unique that it is necessary to maintain a staff that understands in detail how all the pieces fit together.


Some of the things that can be done to help build and maintain a strong internal IT function include the following:
■ Treat the IT staff as a valuable and fragile asset.
■ Become an attractive place for IT professionals to work.
■ Attract more than your fair share of highly talented individuals.
■ Retain talent by helping them manage their careers.
■ Understand their special needs.
■ Help everyone feel like a leader.
■ Manage both marginal performers and high-potential employees wisely.

The same rule applies to staffing both projects and the permanent IT staff within an organization: A modest number of outstanding people are more valuable than a large number of average ones. It is worth the cost and effort to attract and keep a few outstanding people.

Project Scheduling

It is rare for IT projects to meet all of their objectives on schedule. One reason is that management does not always allow a disciplined and rational process to be used to create schedules. The end date is often the result of politics, negotiation, or even bullying. There is nothing wrong with setting the end date of a project by management edict. Doing so can work quite effectively, but only if the scope of the project is then scaled back to fit the target date.

Important reasons why schedules are missed include the following:
■ Schedules don’t acknowledge that the environment will change as time passes.
■ It is assumed that specifications will be an accurate reflection of what is needed.
■ IT professionals are generally optimistic by nature.


■ Schedules are needed before the information necessary to create them is available.

Management needs to know what a project will cost, how long it will take, and what benefits it will provide before investing too much time and effort in it. This creates a dilemma, because there is no way to have definitive information on cost, time, and benefits until a project is well under way. The pragmatic approach taken by most project managers is to make an informed guess based on imperfect information and then to increase the time and cost to cover unknown factors. This strategy almost always fails because of Parkinson’s Law: Work expands to meet the time allotted for it. When extra time is added to a schedule in order to deal with unknown factors, it tends to be absorbed during the early stages of the project so that little is left when needed.

Under the right conditions, the nontraditional approach to project scheduling outlined in the following list can work:
■ Accept that plans created early in a project will be imperfect.
■ Commit to a completion date that includes time and funding for contingencies.
■ Work toward an aggressive schedule that does not include any contingency time.
■ As the project proceeds, give ground only when absolutely necessary, saving as much of the contingency as possible for the end.
■ Define success as meeting the commitment date. Nothing negative should happen if the aggressive schedule is not met.
■ Create positive incentives for coming in ahead of the commitment date.
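The arithmetic behind this contingency discipline is simple enough to sketch. The Java class and the 90-day/30-day figures below are hypothetical, chosen only to illustrate attaching the contingency to the commitment date rather than to the working schedule:

```java
// Illustrative sketch: the team works toward an aggressive target,
// while the committed date holds the contingency. Contingency is
// spent only by explicit decision, countering Parkinson's Law.
public class Schedule {
    private final int aggressiveDays; // target the team works toward
    private final int committedDays;  // date promised to management
    private int contingencyUsed = 0;

    public Schedule(int aggressiveDays, int contingencyDays) {
        this.aggressiveDays = aggressiveDays;
        this.committedDays = aggressiveDays + contingencyDays;
    }

    // Give ground only when absolutely necessary.
    public void consumeContingency(int days) {
        contingencyUsed += days;
    }

    public int contingencyRemaining() {
        return (committedDays - aggressiveDays) - contingencyUsed;
    }

    // Success is measured against the commitment, not the aggressive target.
    public boolean metCommitment(int actualDays) {
        return actualDays <= committedDays;
    }

    public static void main(String[] args) {
        Schedule s = new Schedule(90, 30); // 90-day target, 120-day commitment
        s.consumeContingency(10);
        System.out.println(s.contingencyRemaining()); // prints 20
        System.out.println(s.metCommitment(110));     // prints true
    }
}
```

Finishing in 110 days misses the aggressive 90-day target yet still meets the 120-day commitment, which is exactly the outcome the list above defines as success.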

Developing Custom Software

The RITE Approach favors the use of packages over the creation of custom software when possible. In many situations, however, there is no practical alternative to developing unique software. When this is the case, application of the RITE Approach is important because the dynamics that cause failure are especially strong. Some of the reasons why custom software development is difficult are as follows:
■ There is little agreement on how to do it properly.
■ A large number of variables must be considered.
■ A great deal of information must be communicated among many people.
■ Success generates an unquenchable thirst for more development.
■ Numerous people, each with specialized skills, are required.
■ A high degree of cooperation is needed from individuals who love independence.
■ Success is hard to define and shifts over time.
■ The problem being addressed won’t stand still.
■ The tools available to developers constantly evolve.
■ Quality is difficult to attain and hard to measure.

Because software development is difficult, the use of a disciplined methodology is essential. Most importantly, the developers need to be encouraged to do the hardest things first. Much of the RITE Approach is focused on scope control. This is especially important when custom code is being developed because:
■ As design complexity increases, so does the number of programmers needed.
■ The productivity of each programmer decreases as the number of programmers increases.
■ The need for discipline, controls, and rigid procedures increases as the staff grows.
■ The accuracy of estimates declines while risks rise.

The creation of high-quality software is expensive, time consuming, and resource intensive. Senior management must resist the temptation to remain uninvolved and let the experts handle it. Making sure that what is created will be right for the organization is a task that cannot be handed off.

Final Thoughts

The RITE Approach is a collection of ideas and techniques that have been proven to work. Countless different people have contributed to the line of thinking behind it over a number of decades. It is a work in progress that will continue to be reshaped, expanded, and improved. All who are interested in this topic are invited to help make the RITE Approach better and more complete, especially those who take issue with some of its current observations and advice. You are invited to offer feedback, ideas, and suggestions at www.riteapproach.com, where additional information on this subject will be provided on an ongoing basis.

References

Beck, Kent, et al. 2001. “The Manifesto for Agile Software Development.” Agile Alliance, www.agilemanifesto.org.
Brooks, Frederick P., Jr. 1995. The Mythical Man-Month (anniversary edition). Reading, MA: Addison-Wesley.
Jacobson, Ivar, Booch, Grady, and Rumbaugh, James. 1999. The Unified Software Development Process. Reading, MA: Addison-Wesley.
Martin, James. 1991. Rapid Application Development. New York: Macmillan.
Standish Group International, Inc. 2001. “Extreme CHAOS.” The Standish Group International, Inc.


Bibliography

Books

Ambler, Scott W., and Constantine, Larry L. 2000. The Unified Process Inception Phase: Best Practices in Implementing the UP. Lawrence, KS: CMP.
August, Judy H. 1991. Joint Application Design. Englewood Cliffs, NJ: Prentice Hall/Yourdon.
Bauer, Roy A., Collar, Emilio, and Tang, Victor. 1992. The Silverlake Project: Transformation at IBM. New York: Oxford University Press.
Brooks, Frederick P., Jr. 1995. The Mythical Man-Month (anniversary edition). Reading, MA: Addison-Wesley.
Dinsmore, Paul C. 1999. Winning in Business with Enterprise Project Management. New York: AMACOM.
Goldratt, Eliyahu M. 1990. Sifting Information Out of the Data Ocean: The Haystack Syndrome. Croton-on-Hudson, NY: North River Press.
Graham, Robert J., and Englund, Randall L. 1997. Creating an Environment for Successful Projects: The Quest to Manage Project Management. San Francisco, CA: Jossey-Bass.
Hagerty, Lawrence. 2000. The Spirit of the Internet: Speculations on the Evolution of Global Consciousness. Tampa, FL: Matrix Masters.
Hammer, Michael. 2001. The Agenda: What Every Business Must Do to Dominate the Decade. New York: Crown Business.
Hammer, Michael, and Champy, James. 1993. Reengineering the Corporation: A Manifesto for Business Revolution. New York: HarperBusiness.



Jacobson, Ivar, Booch, Grady, and Rumbaugh, James. 1999. The Unified Software Development Process. Reading, MA: Addison-Wesley.
Kauffman, Stuart. 1995. At Home in the Universe: The Search for the Laws of Self-Organization and Complexity. New York: Oxford University Press.
Lientz, Bennet P., and Rea, Kathryn P. 1999. Breakthrough Technology Project Management. San Diego, CA: Academic.
Martin, James. 1991. Rapid Application Development. New York: Macmillan.
McConnell, Steve. 1996. Rapid Development: Taming Wild Software Schedules. Redmond, WA: Microsoft Press.
McDowell, Robert L., and Simon, William L. 2001. Driving Digital. New York: HarperCollins.
Parkinson, Northcote C. 1957. Parkinson’s Law and Other Studies in Administration (chap. 1). Boston, MA: Houghton Mifflin.
Patrick, John R. 2001. Net Attitude: What It Is, How to Get It, and Why Your Company Can’t Survive without It. Cambridge, MA: Perseus.
Project Management Institute Research Series. 1999. The Future of Project Management. Newtown Square, PA: Project Management Institute Headquarters.
Saunders, Rebecca. 1999. Business the Amazon.com Way. Dover, NH: Capstone US.
Stapleton, Jennifer. 1997. DSDM: Dynamic Systems Development Method. Harlow, Great Britain: Addison-Wesley.

Articles

Beck, Kent, et al. 2001. “The Manifesto for Agile Software Development.” Agile Alliance, www.agilemanifesto.org.
Boehm, Barry W. 1985. “A Spiral Model of Software Development and Enhancement.” Proceedings of an International Workshop on Software Process and Software Environments, Coto de Caza, Trabuco Canyon, CA, March 27–29.
Conway, B., and Hotle, M. 1996. “Best Practices in Project Management, Part 1.” Gartner Analytics, The Gartner Group, March 20.
Conway, B., and Hotle, M. 1996. “Best Practices in Project Management, Part 2.” Gartner Analytics, The Gartner Group, March 20.


Conway, B., Light, M., and Hunter, R. 1995. “The AD Management Continuum: Integrated Methods, Process and Management.” Gartner Analytics, The Gartner Group, September 27.
Daniel, Robert W. 1997. “Creating an Environment for Project Success.” Information Systems Management, Winter.
Kapur, Gopal K. 1999. “Project Management—10 Key Practices.” GartnerGroup: Research Report, The Gartner Group, January 30.
Kruchten, Philippe. 2001. “From Waterfall to Iterative Development—A Challenging Transition for Project Managers.” The Rational Edge, Rational Software.
Lane, Raymond M. 1999. “Grow Quickly, Grow Smart: Cultivating Your Own Success.” VAR Business Management, CMT net, September 20.
Vans, Gary K. 2001. “A Simplified Approach to RUP.” The Rational Edge, Rational Software.
Zrimsek, Brian. 2001. “Understanding Rapid Implementation Methodologies.” Gartner, The Gartner Group, November 6.

Index

1-5-10 business model, 137–139
A
Accountability, 87, 93–103, 218
  divided, 94–96
  fairness of, 98
  of IT professionals, 95, 99
  of users, 95–98, 218
Agile movement, 185, 206–208
  cautions about, 108
Amazon.com, 76–77
Andrews, David H., xiii–xiv
Andrews Consulting Group, xv
  prefabricated project offerings of, 140
Application tests, 187
AS/400, 49
B
Bass Pro, 45–46
Best practices, 86
Booch, Grady, 205
Business environment, instability of, 31, 78, 217
  predicting, 79
Business models, 110
Business processes, 4
  improvement of, 5–7
C
Casa Del Campo, 47
Change:
  attitudes toward, 84–85, 129
  pace of, 121, 197
Citibank, 99–101
Clinton administration health care plan, 13
Code coverage, 187
Component testing, 187
Concept recycling, 43
Contingencies, 52
C++, 200, 203, 204
Customer relations, 190, 191

D
Delivery plans, 189
Design, high-level versus low-level, 182–183
Design team, limiting size of, 83
Digital Equipment Corporation (DEC), 48–49
DNA, 56
DuPont Corporation, 80
E
Emergencies, behavior in, 47–48, 214
Evolution, 56–57, 85
Executives. See also Managers
  attitudes toward projects, 54
  edicts by, 124
  role in project success/failure, 15, 19–21, 88, 127
  role in scope control, 87–88, 217
  role in software development, 180–181
Extreme Programming (XP), 11, 208
F
FedEx, Web-based tracking offered by, 43
Fort Knox project, 48–49
FreeMarkets, 140
G
Gartner Group, 114
Giant German Auto (GGA), 94
H
Hurricane Hugo, recovery from, 47
I
IBM, 49–50, 90–91
  object-oriented programming approach of, 203
Imitation, versus invention, 85–86, 219
  ethicality of, 86
  and packaged software, 117



Increments, 4
Information technology (IT):
  amount spent on, 211
  changes created by, 1–2
  changes in, 11
  drivers of changes in, 197–198
  frustrations of managing, xviii
  historical attitudes toward, xviii, xxv
Integrated development environment (IDE), 200
Intel, 58
Internet:
  effects on project management, 2
  growth in, 198
  as imitation facilitator, 86–87
Internet start-ups, factors in failure of, 128, 184
Iterative development, 56–57
  advantages of, 63
  examples of, 76–77, 88–89
  implementing, 57–59, 77
Iterative processes, 4
IT professionals, 15
  accountability and, 95
  career guidance for, 153–154
  high-potential, 155–157
  leadership/management functions for, 154–155
  managing, 149–159, 222–223
  marginal performers, 155
  personal traits of, 6–7, 17, 18, 95, 151–153
  roles in projects, 99
  turnover of, 157–158
IT projects, 3
  accountability for (see Accountability)
  approving, 128–129, 166
  assessing, 122–123, 216, 219
  best/worst times to start, 81
  and business instability, 31
  and business process improvements, 5–7
  case studies, 25–30, 45–46, 48–50, 68–72
  chess analogy, 55
  classic management theory of, 7–9
  common assumptions about, 7
  complexity of (see Project complexity)
  composition of teams assigned to, 15, 42–43, 157
  construction projects analogy, 54–55
  defining goals for, 65–66, 81
  determining fate of, 11
  evaluating, 59, 166–167

evolution of methodology for, 10–11 factors in failure of, 13, 131–132, 137, 212–213 factors in success of, 41–50, 66, 214 how to view, 51–61, 215 initiation of (see Project initiation) IT professionals’ approach to, 6–7 leaders of (see Project leaders) linear view of, 33–34 management attitudes toward, 54, 67 management role in, 15, 19–21, 66–67, 88, 127, 128 need for balance in, 122 number of people to involve in, 64, 83 organizational dynamics and, 10 plans for (see Project plans) prefabricated, 140 realistic assumptions about, 55–56, 214–215 resistance to, 129–131 rewarding participants in, 101–102, 166 risk assignment in, 142–143 scheduling of (see Project schedules) service provider phases, 143 setting duration of, 78–80 success/failure rate of, 2, 10, 211 timing of problems in, 161–162 trade-offs in, 123–126, 218 unrealistic assumptions about, 30–35 user attitudes toward, 16–17, 35, 96 user role in defining, 18, 32–33 using outside resources in, 67, 82 (see also Outside resources) vision for (see Vision) IT service providers. See also Outside resources business strategies of, 137–138 choosing, 145–146 maximizing value of, 141–142 problems faced by, 137 project phasing by, 143 rates charged by, 139, 140–142 specialization by, 140 working relationships with, 221–222 J Jacobson, Ivar, 205 Java, 203–204 advantages of, 203 design strategies with, 204 Johnson, Kenneth R., xiv–xv Joint Application Design (JAD), 83


L
La Romana, 47
Leadership, 41–42
  and IT professionals, 154–155
Lewis and Clark, 21

M
Managers. See also Executives
  attitudes toward projects, 67
  IT professionals as, 155
  reactions to early project completion, 167
  reactions to project delays, 162
  role in projects, 66–67, 128
  role in scheduling, 163, 223
  role in scope control, 217
  role in software development, 174–175, 176, 194, 225–226
Manhattan Project, 47
“Manifesto for Agile Software Development,” 207
Mega-Multi Manufacturing, 25–30, 35–36, 68–72, 107–109
Microsoft, object-oriented programming approach of, 203
Mississippi Chemical Corporation, 117–118
Muehlstein & Company, 88–89
The Mythical Man-Month, 124

N
.NET, 203
New Jersey Bell Telephone Company, 16–17

O
Object-oriented programming (OO), 199–204
  advantages of, 200, 201
  approaches to, 202–203
  building applications with, 200–201
  limitations of, 202
  principles of, 199
Organizational dynamics, and IT projects, 10
Organizational politics, and IT professionals, 151–153, 156–157, 158
Outside resources, 67, 82, 114, 135–147, 221–222. See also IT service providers
  advantages of, 135–136
  cost of, 136
  disadvantages of, 136–137
  and RITE approach, 144–145


P
Parkinson’s Law, 164, 224
  and two-tier scheduling, 167–168
Path coverage, 187
Pearl Harbor, recovery from, 47
Pets.com, 132
Plan of record, 181
Presentations, corporate culture of, 20, 52, 128
Pricey, Dellaye, and Thensome, 26–30, 107–109
Procedural programming languages, 199
Products, 3
Product test, 187–188
Project complexity, 56, 218
  adverse effects of, 36–37, 75
  resisting, 57–58, 125
Project initiation, 13–23
  challenges in, 14
  importance of, 13–14
  process of, 14–17
Project leaders, 87
  functions of, 87, 97–98
  qualities needed by, 98
  rewards for, 102
Project plans, 51–53
  difficulties in creating, 53
  expectations of, 65
  optimal levels of, 126–127
Projects, 3, 58, 65–66. See also IT projects
Project schedules, 161–171, 223–224
  buffer time in, 164, 167–168
  case study, 168–170
  factors in failure of, 163–165, 223–224
  inaccuracy of, 161, 163
  management role in, 163, 223
  strategies for, 163–164, 224
  two-tier, 165–168
Proof-of-concept prototyping, 133, 219
Prototypes, 4, 32–33
  benefits of, 133, 182, 219
  in software development, 181–182

Q
Quality Gizmo, 94
Quality metrics, 191

R
Rational Unified Process (RUP), 11, 206
Refactoring, 204
Regression testing, 188
Releases, 3–4



Requests for Proposal (RFPs), problems with, 110, 112
Resources:
  controlling use of, 81–83, 124
  limited nature of, 18–19
  outside (see Outside resources)
Risk:
  assigning with outside service providers, 142–143
  controlling, 131–133
RITE Approach, The, xxviii–xxix, 38, 211–226
  changes in management entailed in, xxviii–xxix
  concepts of, xxix
  issues addressed by, xxviii
  and outside resources, 144–145
  principles of, 11, 64, 65, 125
  selecting packaged software with, 110–113, 115–117
  and software development, 178
  steps in, 60–61, 63
Rumbaugh, James, 205

S
Scaffolding, 186
Scope, 4
  letting time determine, 78–81, 125, 217
Scope control, 43, 75–92, 216–217
  case study, 90–91
  and custom code development, 225
  management role in, 87–88, 217
  pros and cons of, 125
Scope growth:
  adverse effects of, 36–37, 75
  causes of, 17–18, 75, 95, 182
  defending against, 15
Sealectro, 168–170
Silverlake project, 49–50
Software, packaged, 34–35, 105–119, 136, 219–221
  benefits of, 106, 110
  case study, 117–118
  expectations of, 113
  history of, 106
  and object-oriented programming, 201
  outside help with, 114–115
  selection case study, 107–109
  selection via the RITE Approach, 110–113, 115–117, 220–221
  selection via traditional method, 106–110, 111
  user groups for, 114
  uses of, 105–106
  vendor assistance with, 115
Software development, 173–195, 224–226
  amount of coding time in, 185, 204
  approving plans for, 180–181
  changes in, 33
  coding stage of, 185–186
  costs of fixing errors in, 184
  customer relations and, 190, 191
  delivery stage of, 188–189
  design stage of, 182–185
  difficulties in, 106, 175–177, 206, 225
  ensuring quality in, 191
  in-process metrics for, 191–192
  limitations in current software, 198
  management of, 2, 6, 174–175, 176, 180–181, 194
  methodologies of, 4–5, 110, 173, 178, 194, 204. See also specific methodologies
  need for balanced plan, 179–180
  planning stage of, 178–182
  prototyping in, 181–182
  requirements of, 179, 204–205
  reviewing code, 186
  reviewing delivery plans, 189
  reviewing designs, 184–185, 191
  reviewing service plan, 190
  service stage of, 189–190
  test stage of, 186–188, 191
  test strategies for, 188
  tools for, 192–194
  traits of designers, 183–184
  traits of developers, 176, 194
Staged evolution, 43
Stages, 4
Standish Group, The, 10
Structured programming, 199
Success, defining, 51–52
Sun Microsystems, 203
System complexity, advantages of, 76
System maintenance, 3
System test, 187–188

T
Task forces, 63–65
  duration of, 65
  optimal size of, 64, 215
  organizing, 66–67


Test beds, 188
Test cases, 186
  libraries of, 188
Timebox Development, 80
Time control, 42, 78–80, 125
Trevino, Lee, 65

U
Unified Modeling Language (UML), 205
Unified Process, 185, 204–206
The Unified Software Development Process, 206
Unit testing, 186
Use Case Methodology, 185, 204–205
Use cases, 110
Users:
  accountability of, 95–98, 218
  attitude toward projects, 16–17, 35, 96
  role in project definition, 18, 32–33

V
Vision, 64–65, 215
  criteria for, 67–68
  developing, 66–67

W
Waterfall development cycle, 8–9
  factors in survival of, 37–38
  flaws in, 31, 177–178
  limitations of, 9–10
Web services, 201–202

Y
Y2K, 44–45, 113
