
Praise for The Build Master

“Wow, what can I say? Chapter 4, ‘The Build Lab and Personnel,’ by itself is enough justification to purchase the book! Vince is obviously a ‘Dirty Finger Nails’ build meister and there is a lot we can all learn from how he got them dirty! There are so many gems of wisdom throughout this book it’s hard to know where to start describing them! It starts where SCM should start, at the end, and works its way forward. [This book is] a perfect complement to the ‘Follow the Files’ approach to SCM that I espouse. I will recommend that every software lead and software configuration management person I work with be required to read this book!”
—Bob Ventimiglia, autonomic logistics software configuration manager, Lockheed Martin Aeronautics

“The Build Master contains some truly new information; most of the chapters discuss points that many people in the industry don’t have a full understanding of and need to know. It’s written in a way that is easy to read and will help a reader fill holes in their vision regarding software build management. I especially liked Vince’s use of Microsoft stories to make his points throughout the book. I will purchase the book and make certain chapters mandatory reading for my build manager consultants.”
—Steve Konieczka, SCM consultant

“Vince does a great job of providing the details of an actual working build process. It can be very useful for those who must tackle this task within their own organization. Also the ‘Microsoft Notes’ found throughout the book provide a very keen insight into the workings of Microsoft. This alone is worth purchasing this book.”
—Mario E. Moreira, author of Software Configuration Management Implementation Roadmap and columnist at CM Crossroads

“Software configuration management professionals will find this book presents practical ideas for managing code throughout the software development and deployment lifecycles. Drawing on lessons learned, the author provides real-world examples and solutions to help you avoid the traps and pitfalls common in today’s environments that require advanced and elegant software controls.”
—Sean W. Sides, senior technical configuration manager, Great-West Healthcare Information Systems

“If you think compiling your application is a build process, then this book is for you. Vince gives us a real look at the build process. With his extensive experience in the area at Microsoft, a reader will get a look in at the Microsoft machine and also how a mature build process should work. This is a must read for anyone doing serious software development.”
—Jon Box, Microsoft regional director, ProTech Systems Group

“Did you ever wonder how Microsoft manages to ship increasingly complex software? In The Build Master, specialist Vince Maraia provides an insider’s look.”
—Bernard Vander Beken, software developer, jawn.net

“This book offers an interesting look into how Microsoft manages internal development of large projects and provides excellent insight into the kinds of build/SCM things you can do for your large-scale projects.”
—Lance Johnston, vice president of Software Development, SCM Labs, Inc.

“The Build Master provides an interesting insight into how large software systems are built at Microsoft, covering the setup of their build labs and the current and future tools used. The sections on security, globalization, and versioning were quite helpful, as these areas tend to be overlooked.”
—Chris Brown, consultant, ThoughtWorks

“The Build Master is a great read. Managing builds is crucial to the profitable delivery of high-quality software. Until now, the build process has been one of the least-understood stages of the entire development lifecycle. Having read this book from one of Microsoft’s leading software build experts, you really get a taste of the best practices you should apply for maximizing the reliability, effectiveness, timeliness, and security of every build you create. As the book states, builds powerfully impact every software professional: developers, architects, managers, project leaders, configuration specialists, testers, release managers, and many others. As an IT expert having worked in many of these areas, I have to say that this book hits the mark. This book helps you implement a smoother, faster, more effective build process and use it to deliver better software. The book is a success.”
—Robert J. Shimonski, networking and security expert, www.rsnetworks.net


THE BUILD MASTER
MICROSOFT’S SOFTWARE CONFIGURATION MANAGEMENT BEST PRACTICES

Vincent Maraia

Upper Saddle River, NJ • Boston • Indianapolis • San Francisco New York • Toronto • Montreal • London • Munich • Paris • Madrid Capetown • Sydney • Tokyo • Singapore • Mexico City

Many of the designations used by manufacturers and sellers to distinguish their products are claimed as trademarks. Where those designations appear in this book, and the publisher was aware of a trademark claim, the designations have been printed with initial capital letters or in all capitals.

The author and publisher have taken care in the preparation of this book, but make no expressed or implied warranty of any kind and assume no responsibility for errors or omissions. No liability is assumed for incidental or consequential damages in connection with or arising out of the use of the information or programs contained herein.

The publisher offers excellent discounts on this book when ordered in quantity for bulk purchases or special sales, which may include electronic versions and/or custom covers and content particular to your business, training goals, marketing focus, and branding interests. For more information, please contact:

U.S. Corporate and Government Sales
(800) 382-3419
[email protected]

For sales outside the U.S., please contact:

International Sales
[email protected]

Visit us on the Web: www.awprofessional.com

Library of Congress Cataloging-in-Publication Data: 2005926326

Copyright © 2006 Vincent Maraia

All rights reserved. Printed in the United States of America. This publication is protected by copyright, and permission must be obtained from the publisher prior to any prohibited reproduction, storage in a retrieval system, or transmission in any form or by any means, electronic, mechanical, photocopying, recording, or likewise. For information regarding permissions, write to:

Pearson Education, Inc.
Rights and Contracts Department
One Lake Street
Upper Saddle River, NJ 07458

ISBN 0-321-33205-9

Text printed in the United States on recycled paper at R.R. Donnelley, Crawfordsville, Indiana.

First printing, October 2005

I would like to dedicate this book to Jan, my beautiful bride and wonderful wife, the person who gives our family a foundation of steel that is covered with unconditional love. Leah, my pride and joy, the apple of my eye and the sparkle in my smile—the great love of my life. Marcus, the excitement of welcoming you into this world is unbearable! You are already loved more than you can ever imagine.

Love, Me


CONTENTS

Foreword
Preface

Chapter 1: Defining a Build
    The Two Types of Builds: Developers and Project
    Building from the Inside Out
    More Important Build Definitions
    How Your Product Should Flow
    Microsoft Solution Framework
    Summary
    Recommendations

Chapter 2: Source Tree Configuration for Multiple Sites and Parallel (Multi-Version) Development Work
    Definitions
    How This Process Works: An Example Using VSS
    Hatteras: A Look Into the Future of SCC at Microsoft
    Summary
    Recommendations

Chapter 3: Daily, Not Nightly, Builds
    The Importance of a Successful Daily Build
    What Are You Building Every Day?
    Summary
    Recommendations

Chapter 4: The Build Lab and Personnel
    The Need for a Build Lab
    Build Lab Rules
    Hardware Configuration
    Build Personnel
    Summary
    Recommendations

Chapter 5: Build Tools and Technologies
    First, Every Build Needs a Script
    Binary Generating Tools—Also Referred to Loosely as “Build Tools”
    “You Provide the Nose; We Provide the Grindstone”
    In Steps the 800-Pound Gorilla!
    XML Is the Here, the Now, and the Future
    What Build Tool Should You Be Using and When?
    Summary
    Recommendations

Chapter 6: SNAP Builds—aka Integration Builds
    What Is a SNAP Build?
    When to Use SNAP Builds
    How SNAP Works
    Sample Machine Configuration
    Operations Staff
    Managing Throughput
    Summary
    Recommendations

Chapter 7: The Build Environment
    Setting Up the Environment
    Setting Up a Developer or Tester Machine
    A Makefile Example That Explains How This Works
    Summary
    Recommendations

Chapter 8: Versioning
    Why Worry About Versioning?
    File Versioning
    Build Number
    Source Code Control Trees
    Should There Be Other Fields in the File Version Number?
    DLL or Executable Versions for .NET (Assembly Versions)
    How Versioning Affects Setup
    Even Installing Correctly Does Not Always Work
    Summary
    Recommendations

Chapter 9: Build Security
    Physical Security for the Build, Source, and Release Lab
    Tracking Source Changes (All Check-Ins)—The Build Process
    Binary/Release Bits Assurance
    IT Infrastructure
    Want More Security?
    Summary
    Recommendations

Chapter 10: Building Managed Code
    The Official Definition of Managed Code
    What Is the CLR, and How Does It Relate to Managed Code?
    Managed Execution Process
    The Definition of Assemblies As It Pertains to the .NET Framework
    Delay Signing and When to Use It
    One Solution or Many Solution Files?
    Summary
    Recommendations

Chapter 11: International Builds
    Important Concepts and Definitions
    Method 1: Internationally Ignorant Code
    Method 2: Locale-Dependent Source
    Method 3: Single Worldwide Source
    Method 4: Single Worldwide Binary
    USE Unicode
    Summary
    Recommendations

Chapter 12: Build Verification Tests and Smoke Tests
    Smoke Test
    Build Verification Tests
    Summary
    Recommendations

Chapter 13: Building Setup
    The Basic Definitions
    Setup Is Not a Testing Tool
    Summary
    Recommendations

Chapter 14: Ship It!
    Software Release at Microsoft
    Summary
    Recommendations

Chapter 15: Customer Service and Support
    Goals of Support
    How Support Works and Communicates with the Product Teams
    Summary
    Recommendations

Chapter 16: Managing Hotfixes and Service Packs
    Introduction to “Release Management with VSS”
    Release Management: General Scenarios
    Summary
    Recommendations

Chapter 17: 7 Suggestions to Change Your Corporate or Group Culture
    What Is Corporate Culture?
    It Starts at the Top
    When All Else Fails…
    Don’t Go Gipper…
    NASA Columbia and Challenger Disasters: When Management Pulls Rank and There Is a Big Disconnect Between the Manager’s View and the Engineer’s View
    Summary
    Recommendations

Chapter 18: Future Build Tools from Microsoft
    MSBuild
    Visual Studio Team System
    Visual Studio Team Build
    The Microsoft Shell (MSH, or Monad)
    Summary
    Recommendations

Appendix A: Embedded Builds
    Nuts and Bolts of the CE Build System

Appendix B: Extreme Programming
    Extreme Programming Fundamentals
    Test-Driven Development and Refactoring
    An Extreme Programming Scenario
    Microsoft Case Study
    References and Further Reading

Appendix C: Testing Guide
    Test Guide: A Compilation from the Developer Division at Microsoft

Appendix D: Debug Symbols
    The Windows Scenario That You May Run into with Your Applications

Final Thoughts
Index


FOREWORD

Since 1989, I’ve been consulting and doing architecture and programming work for various companies such as Microsoft, Intel, HP, and DreamWorks. Each of these companies has its own ways of managing the project, which includes how projects are planned, how code is written, how code is checked in (if it is at all), how the application is built, and how it is tested. It is clear to me that none of these companies has spent nearly enough time really thinking about and formalizing the process of building software. In fact, I’d say that this disorganization around the software process is the main reason why software never seems to ship on schedule and often ships years later than originally planned.

Although it is important that software companies adopt good software-building processes, it’s not easy to find information on the subject. As far as software processes go, there really hasn’t been much time or effort spent on this topic in articles, books, or conferences. Each company has to go about it in its own way and make its own mistakes before it can learn from them. It seems that just recently, this area is starting to get the attention it needs. We can see this in Microsoft’s soon-to-be-shipping Visual Studio Team System product line, which now offers deeply integrated tools for source code control, issue tracking, and testing.

In 2004, I was teaching a C#/CLR class at Microsoft when Vince approached me at the end of the class. He told me that he had worked at Microsoft in the Windows NT build lab for years and had this idea for a book about how software should be built and what kind of standards the programmers and testers should be held to. Having just recently finished a contract job at Microsoft where the build process was less than ideal, I immediately thought that Vince’s book was sorely needed and should definitely be published. Also, I knew that Vince had unique experience as a Windows NT build master that made him the perfect person to write a book like this.


While reading this book, I learned many things and had many stimulating conversations with Vince about software processes that pertain to building software in an organized manner. If you are involved with the planning, programming, building, testing, shipping, or supporting of software, you will find useful information in this book. I’m sure that companies will be able to produce better products in a more timely fashion if they take the information that is in this book to heart. After all, there is a little build master in all of us.

—Jeffrey Richter (http://Wintellect.com)

PREFACE

During my 15 years at Microsoft, I have spent 10 years working in various product groups shipping 11 products, including Windows NT, Visual Studio, BackOffice, Small Business Server, and the Microsoft Mouse. I have also been on a couple of canceled projects that never made it out the door. For the past 5 years, I have been consulting on Microsoft’s best source code control (SCC), build, test, and deployment processes, with an emphasis on the build process.

It seems that for all the years that I have been working at Microsoft, I have always been in some kind of Software Configuration Management (SCM) role working in the build lab, writing and running tests, creating setup programs, or coordinating project meetings. This book contains the knowledge I have gained while working on these various projects. Furthermore, I have built on the experiences of the people who were there before me and the lessons they taught me. I also gathered a lot of tips from people who are currently in the product team.

Many things can and have been said about Microsoft, but there is one thing most people will agree on: Microsoft has been one of the more successful companies when it comes to shipping software. Sure, we might be notorious for missing our ship date, but despite this fact, Microsoft cranks out hundreds of released software products per year—thousands if you include hotfixes via Windows Update. That is why we have all the processes mentioned in this book: to minimize the slippage and optimize the development process. I tried to capture the best new and classic processes that we have developed throughout the years so that I could pass them on to you in The Build Master: Microsoft’s Software Configuration Management Best Practices.


What This Book Is About

As I search Microsoft’s job database, looking through 397 job titles, none of them contains the words Software Configuration Management. When I look at our acronym or glossary lookup tool and search for SCM, the results come back with Source Code Manager or Supply Chain Management, pronounced SCuM. It is no wonder that the SCM term is used infrequently at Microsoft. I know that I would not like to be referred to as a SCuM manager or champion.

Of course, I am being facetious and picky about the semantics here because SCM is a widely used industry term. It just isn’t used a lot at Microsoft. The only explanation I can think of is that the processes of SCM at Microsoft are broken down to specific tasks, and SCM is too broad a term to be used to describe these processes on a daily basis. So, despite the lack of the use of the term SCM at Microsoft, that is what this book is focused on, since that is what we live and breathe every day.

Defining Software Configuration Management

Let’s define SCM as it is used in this book. I like this definition of configuration management that Steve McConnell gives in his 1993 book, Code Complete, from Microsoft Press:

Configuration management is the practice of handling changes systematically so that a system can maintain its integrity over time. Another name for it is change control. It includes techniques for evaluating proposed changes, tracking changes, and keeping copies of the system as it existed at various points in time.

A more detailed description comes from Stephen A. MacKay. He quotes several sources, but they all seem to be saying the same thing:

The most widely used definition of software configuration management (SCM) comes from the standards community [IEEE87, IEEE90a, IEEE90b, Buck93]. Configuration management (CM) is a discipline that oversees the entire life cycle of a software product or family of related products. Specifically, CM requires identification of the components to be controlled (configuration items) and the structure of the product, control over changes to the items (including documentation), accurate and complete record keeping, and a mechanism to audit or verify any actions. This definition is not complete. Dart [Dart92] suggests that the definition should be broadened to include manufacturing issues (optimally managing the construction of the product), process management (ensuring adherence to the defined processes), and team work (supporting and controlling the efforts of multiple developers). Tichy [Tich88] provides a definition that is popular in the academic and research communities: Software configuration management is a discipline whose goal is to control changes to large software system families, through the functions of component identification, change tracking, version selection and baselining, software manufacture, and managing simultaneous updates (teamwork).

In short, at Microsoft, SCM is broken into three groups: source control, build, and deployment or release. There could arguably be a fourth group, sustained engineering—hotfixes and service packs—but this separate distinction seems to show up only on the big teams such as Windows. The jobs at Microsoft that have the responsibilities described previously are builder, build management, or release program manager.

The really abbreviated term for SCM at Microsoft is builds. This is probably because when code is taken from developers and turned into a product that you can deliver to a customer, it is usually the build team that owns all the steps involved or helps manage the process.

Having a good build process seems to be a lost art even in some groups at Microsoft. For example, when I recently spoke with one development team, I heard them talk about trying a distributed build system over a group of build machines (also known as build farms). This idea was tried several years ago—and tried a few times since the original—but has proven to be unsuccessful for various reasons that are covered in Chapter 5, “Build Tools and Technologies.” Maybe there have been recent improvements in tools, or the product has been broken into smaller packages of code (componentized) that will make this distributed build process more likely to succeed today than it did years ago. If so, this might justify revisiting the idea even though it was abandoned the last time someone looked at it. But if no changes have been made that would support moving to a distributed build process, trying to pursue this Holy Grail would be a waste of everyone’s clock cycles.


It is impossible to talk about software builds without also addressing some of the surrounding software development areas, such as source tree configuration and the deployment of a product. Therefore, I want to give the full story of setting up your development source trees, building your code, deploying your product, and supporting your customers with fixes by using examples of how we do it at Microsoft.

Who Should Read This Book

The target audience for this book is SCM teams at any company that ships software internally or externally. This includes the people outlined in the next sections.

Information Technology (IT) Managers

If you develop or deploy software to departments within your company or manage the servers that host your developers’ source code trees, this book will help you make those processes more efficient and robust.

Software Development and Testing Managers

Because you are the one who implements and uses these processes, it would be best to read the explanations behind the processes firsthand. This will help you drive the adoption of these processes within your group or company.

Build Teams and Build Managers

Being a builder at heart and spending many years in build labs, I wrote this book as a collection of what I have learned. When software is shipped, everyone seems to have specific tasks or jobs: Developers write the code, testers test the code, program or product managers try to figure out what goes into the product, and executives sell it. So who makes sure that the flow of the product does not get interrupted? When there is a block or showstopper, who is the person who will jump through fire hoops to get things started again? It is the build or integration team, which I see as the “heart” of the product, with everything else being the “soul.” Because this large responsibility falls under the build team, and the most successful groups have a very solid build team, the topics in this book will help you keep the “flow” going.


Technical Project and Product Managers

If you want to be able to accurately predict when your product will be ready for release and learn the details of how a requested application feature goes from cradle to grave, this book will provide an overview of the whole process. You can work with the developers and testers on how to merge these recommendations into your current process and understand the language or lingo.

Anyone Interested in a Microsoft Case Study

Although this book is not intended to be a case study in the traditional sense, such as presenting the steps that Microsoft uses to build software and then analyzing them to death, you can view this book as an example of a successful software company and the lessons it has learned over the years, with a lot of insight into why Microsoft chose a particular path.

Assumptions Made on the Background of People Reading This Book

This book assumes that the reader has some experience working on or with a software development team either at a company or in an academic institution. This is not a high-level 35,000-foot view of SCM. Plenty of books out there already take that approach. I am taking a more granular approach as to how to do it rather than just telling you what you need to do.

Although some examples in this book concern Microsoft tools and technologies, I have tried to write this book in a tool-agnostic way. In other words, regardless of the tools or platforms that you develop on, you will still be able to use these processes to ship your software effectively. By “ship your software,” I mean any way that software can be delivered to your customers, whether via the Internet, disc media, internal server releases, Web services and applications, or out of the box.


How This Book Is Organized

Each chapter can stand alone, but the book flows from software development processes to product support (sustained engineering). This is my idea of componentizing the build process and this book. You will get a better understanding of how all these topics are linked if you read this book from cover to cover, but I realize that some groups or companies will not need to spend a lot of time on a certain subject that they feel they have already mastered or are not interested in. If someone just wants help on setting up a build lab, he can turn to Chapter 4, “The Build Lab and Personnel,” and get the information he needs without having to read the previous chapters.

Contents at a Glance

■ Source Code Control—The “Golden” Rule. Because it seems that the build team members tend to live in the source code trees and are usually the administrators of the trees, I spend a chapter talking about the best way to configure your sources.

■ The Build Process—The Mission-Critical Assembly Line. This is the cornerstone of this book. Nine chapters cover, in detail, how to build your product. For a more in-depth overview, please read the book’s Introduction.

■ Setup/Release—Ship It! This is another area that tends to spill over to the build team’s responsibilities. This topic is covered in three chapters.

■ Sustained Engineering—The Only Sure Things in Life Are Death, Taxes, and Bugs. This tends to be the first area where symptoms of a failing project start to show up. Most notably, everyone on the project team is in reactive mode instead of working on new features.

■ The Future—How to Get There from Here. If you are interested in the new tools that Microsoft will be releasing with the future release of Visual Studio, I touch on how to utilize those tools using the processes described in this book.


The Story

As I write this, Microsoft is productizing some of its internal development-cycle tools to make them available to all developers via its Visual Studio Team System product. The production of this book could not have come at a better time because I can now explain how these tools and processes have evolved in addition to the best practices of product teams that drive the functions of these new tools. This is another objective of this book.

I recently completed a build architecture review for a large Microsoft customer. This customer already had a good build process and didn’t think it needed much improvement. At the end of my onsite, week-long engagement at the customer’s development headquarters, I suggested that this customer adopt the principles that I explain in more detail in this book. He agreed with me on most points but surprised me when he said, “I really appreciate all the information and suggestions that you provided. In order for you to have come up with all of these recommendations, you must have suffered through a lot of pain in the past.”

This statement blew me away. I never really viewed all of this experience working on builds at Microsoft for the past 15 years as painful, but just lessons we learned by “try and try again until we get it right.” Although some initial investment in resources will be required to reboot your processes if you decide to implement all the processes and suggestions in this book, you will save 3 to 5 years of the “pain” of learning all of this from the school of hard knocks. At the very least, you will be enlightened on how software is shipped at Microsoft. And as I always ask my customers, if you know of any other better processes than what I prescribe, please let me know. I am always open to improvement and would love to hear your suggestions.

Get Stuck or Get Results. Period.

The goal of this book is to get you unstuck from spinning your development cycles or to help you avoid getting stuck in the first place by providing processes and tips to help you become more productive.


What do I mean by “getting stuck”?

■ Your developers are spending less time writing new code and are in maintenance mode most of the time. The 80/20 rule usually comes up here. For example, developers should be spending 80 percent of their time writing new code and 20 percent fixing bugs, not vice versa.

■ The morale of your team suffers. Developers and testers get increasingly frustrated with the lack of consistency and reliability of builds and being in reactive mode all the time. A lot of unnecessary finger-pointing is a bad indicator.

■ You miss ship or release dates, and your customer satisfaction suffers.

■ Your ability to reproduce builds and deliver hotfixes becomes increasingly difficult. You spend more time trying to reproduce a build or build a hotfix than you do fixing the code that caused the bug.

■ You do not have a reliable process of tracking and building all the code changes in your product, and the stability of the product is unpredictable at best.

At the end of the day, all these issues, which you could avoid, will end up costing your company lots of money because you will be patching or hacking whatever process you currently have that does not seem to be getting the job done.

Outsourcing

Nowadays, it is unlikely that all of an application’s developers will be physically situated at one location. If you plan to have developers work remotely, either offsite or offshore, it is mandatory that you integrate the processes explained in this book, especially the concept of the Virtual Build Labs (VBLs) explained in Chapter 2, “Source Tree Configuration for Multiple Sites and Parallel (Multi-Version) Development Work.” Experience shows that if your software configuration management is good, outsourcing will be easy or easier to integrate. This book covers all the necessary steps toward making sure the intellectual property of your company is not compromised. This is done through securing your overall build process, not through fingerprinting every piece of code that is in your project.


What This Book Is Not About: Software Factories

Well, maybe this book has something to do with software factories because you can package everything I talk about in this book, put it in one application, add a few enterprise e-commerce development tools, and crank out your software. Then I think you have a software factory. Or at least that is how I understand the concept of software factories. Everything I read on this topic keeps pointing me to Visual Studio Team System (VSTS). I give an overview of the VSTS tools in Chapter 18, “Future Build Tools from Microsoft,” but nothing more. Maybe in another edition of this book, I will talk about how you can use the processes in this book in conjunction with VSTS.

Updates and Contact Information

For updates, visit my blog at http://blogs.msdn.com/vincem/. That’s also the best way to contact me. Or you can contact me at [email protected] or http://www.thebuildmaster.com. For more information on how Microsoft can help provide custom solutions to any software issue, please look at the Microsoft Partner Advantage site at http://www.microsoft.com/services/microsoftservices/default.mspx. For other books from Addison-Wesley, go to http://www.awprofessional.com.


ABOUT THE AUTHOR

Vincent Maraia started his computer career programming on a Commodore PET in the 8th grade; with 8KB of RAM and a 1 MHz processor, it was an impressive machine in 1978. Looking back at high school, it seems that the typing and cooking classes he took contributed the most to his success in college, not the college prep classes. Pascal really never took off.

While pursuing a dual major at Oregon State University—Engineering Physics and a Mechanical or Electrical Engineering degree—he decided to pursue an internship at a small- to medium-size software company in Redmond, Washington. The company was offering a 386/20 MHz computer (top of the line in 1989) as a gift upon completion of the internship. Since he had enough credits for an Engineering Physics degree, he decided to stay at Microsoft once his internship was over and has been there ever since.

At Microsoft, he lived for about four years in the NT Build Lab, shipping NT 3.1, 3.5, 3.51, and part of NT 4.0. He also worked on hotfixes, service packs, and test builds. After a couple of failed multi-media projects, he then went to the Visual Studio group, shipping version 1, VS 97. He also designed the build for BackOffice/Small Business Server, shipping SBS 1.0 and BackOffice 2.0. For the past six years, he has been in Premier Services, consulting on how Microsoft does builds, source code control, and release management. He has been to over 55 Microsoft customer sites executing Build and SCM architecture reviews.

Until he met his wife eight years ago, he used to love to hang out at bookstores on Friday and Saturday nights in the computer section, occasionally drifting into the biography and science sections. He loves spending time with his family and is also a true sports fan who plays on city league softball teams when he is not writing or traveling.


IN APPRECIATION AND ACKNOWLEDGMENT

I have so many people to thank for this book. The first two people would be Karen Gettman, for seeing the need for a book on this topic and allowing me to write it, and the brilliant Jeffrey Richter, for all of your help, input, and guidance. It is safe to say that this undertaking would never have happened without you two.

Then there are the incredible people at Addison-Wesley that I have enjoyed working with: Curt Johnson, Ebony Haight, and Elizabeth Zdunich. And some people who really earned their pay for the production: Gina Kanouse, Karen Gill, David Fender, Amy Hassos, Julie Bess, Jessica McCarty, and Chuti Prasertsith.

For the artwork: Chuti Prasertsith—excellent cover work; I wish I had that kind of talent. Kyono McKean for those wonderful drawings, and Jon McKean for helping to get them done.

The content contributors and reviewers: Kent Beck, Steve Konieczka, Mario Moreira, Yves Dolce, Eric Brechner, Ajay Malhotra, William Rollison, Bob Jervis, Doug Neumann, Jeff Homme, and Hans Zeitler—I appreciate all of your time and input.

Other people that played an indirect but important role: Steve Ballmer, Bill Gates, Dr. Kenneth Krane, Greg Lee, Rich Eizenhoefer, and Blair Shaw. A general thanks goes to all of the people that I have had the honor to work with at Microsoft and the company’s many partners.

Of course, I cannot forget my immediate family, who has been very supportive of my late nights and weekends working on this project: Jan, Leah, and Marcus. Also Mom and Dad, for everything you have done for me—especially Mom, your self-sacrifices have been greatly appreciated and will always be remembered. Mike and Chizu, there could not have been any better brother and sister to grow up with; thanks for always letting me tag along. And lastly, my old reliable friend forever—Sanka (and Maple too!).

INTRODUCTION

The topics in this book apply to development shops of all sizes, from small groups of 40 to 100 developers to groups as large as the 2,000+ developers in Windows. Some of these topics might not seem that interesting or important to you depending on what stage of development you are currently in, but the sooner you start planning and designing all these processes into your product plans, the more successful you will be in the long run. Remember that NT started with six people from Digital Equipment Corporation (DEC) in 1988 and grew to 200 developers and about 4 to 5 million lines of code before shipping the first version 5 years later. Keeping this in mind, the quote from Paul Thurrott’s Windows Supersite (http://www.winsupersite.com/reviews/winserver2k3_gold2.asp) is appropriate:

One element about the NT family of operating systems—which evolved from Windows NT to Windows 2000, XP, and now Windows Server 2003—that has remained unchanged over the years, though the details have changed dramatically, is the build process. Somewhere deep in the bowels of Microsoft, virtually every day, at least one Windows product is compiled, or built, into executable code that can be tested internally by the dev, or development teams. For Windows Server 2003, this process is consummated in Building 26 on Microsoft’s sprawling Redmond campus, where banks of PCs and CD duplicating machines churn almost constantly under the watchful eyes of several engineers.
—Paul Thurrott, January 30, 2003

It is worth noting that the Windows 9.x (Windows 95, 98, and Millennium) code is based off the Windows 3.0 code (that ran on top of DOS) that was released in 1991. Little, if any, of this code was used in the NT releases. This group was considered the Windows 9.x team and had a different build model than what I talk about in this book. I have friends who worked in the Windows 9.x build lab at the same time I was on the NT build lab team. The Windows 9.x horror stories made our build issues seem like child’s play!

Microsoft wanted to end-of-life (kill) the Windows 9.x code base after Windows 95, but because of customer demand and the fact that the hardware needed to run an NT system was still a little expensive, Microsoft decided to extend the life of the Windows 9.x line until Windows 2000 (“Built on NT Technology”—a slightly redundant splash screen) was released. This will be the only reference to the Windows 9.x team and processes in this book. From this point on, whenever I mention Windows, it will be in reference to the Windows NT team.

This book is biased toward Microsoft practices and tools, which is the reason for the subtitle. Still, you can carry a lot of these principles to other platforms because, after all, it is just building software that we are talking about, right?

Each chapter starts with a Philosophy, which is a quote or statement that sets the tone. You will also see Microsoft Sidenotes sprinkled throughout, which are historical facts, recollections from notes and e-mails, or anecdotes from my and other people’s experience while shipping our products. Everything is 100 percent accurate to the best of my knowledge. Because there can be many different definitions for a word or term, even within the same company such as Microsoft, each chapter defines specific terms in regard to how they are used at Microsoft. Sometimes the definition matches an accepted industry term, and other times I introduce a new term not known outside of Microsoft, such as Virtual Build Lab (VBL). The definitions in this book will be posted on www.thebuildmaster.com so you can adopt them if you like.

I make a lot of references to the old NT ways of shipping software because the processes we used back then scaled incredibly well to NT 5.0. With NT 5.0, some new processes were introduced—most notably, the Virtual Build Labs. Smaller teams within Microsoft also use this process, so it scales down and up. Figure I.1 gives you an idea of the size of the teams and code shipped.


NT Release Chart

Ship Date   Product                         Dev Team Size   Test Team Size   Lines of Code
Jul-93      NT 1.0 (released as 3.1)        200             140              4-5 Million
Sep-94      NT 2.0 (released as 3.5)        300             230              7-8 Million
May-95      NT 3.0 (released as 3.51)       450             325              9-10 Million
Jul-96      NT 4.0 (released as 4.0)        800             700              11-12 Million
Dec-99      NT 5.0 (Windows 2000)           1,400           1,700            29+ Million
Oct-01      NT 5.1 (Windows XP)             1,800           2,200            40 Million
Apr-03      NT 5.2 (Windows Server 2003)    2,000           2,400            50 Million

Figure I.1 NT release chart.

What “NT” Really Stands For

Mark Lucovsky, a former Distinguished Engineer (the most prestigious title for an engineer at Microsoft) in the Windows NT group, explains the term NT:

“And then when we were bantering around names, we had N10 and New Technology. It worked both ways, so that’s what NT really stood for—the N10 chip—and we could use it [or] double it as New Technology. But it’s nothing more magical than that.”

N10 was the code name for the Intel chipset (i860) that NT was originally targeted for. Thus, NT was a code name titled after another code name. I am not sure if the Windows marketing folks really planned on using NT for the product name. It is pretty rare that a code name at Microsoft is used for a product’s final released name. Maybe because they tacked on Windows to NT and at the time of the first release, both terms were pretty popular in the computer world, they decided to keep the NT moniker, too.


How MSN Builds Code

It’s déjà vu all over again.
—Yogi Berra

To show you a recurring theme in this book on how software is developed and shipped at Microsoft, look at Figure I.2, where each of the teams in the Microsoft Network (MSN) group has the source, build system, drops, and build staff defined.

[Figure I.2 Previous Build Process: the Passport, Messenger, and Hotmail dev teams each maintain their own source, build team, build environment, build process, drop, and QA, all feeding a shared INT test environment and the Pre-Production Environment (PPE) and Production.]

Looking at Figure I.3, you can see how the new software development process has changed to a more central build process that builds and releases code developed by the three teams mentioned. This is an evolution that most companies ultimately face; the sooner you establish this central process, the better off you are. This book guides you on how to make this happen.

[Figure I.3 shows the Passport, Messenger, and Hotmail dev teams checking their sources, along with shared source, into a single MSN build environment; one MSN build team and one MSN build process produce the Passport, Messenger, and Hotmail drops for each team’s QA, the INT test environment, and the pre-production and production environments.]

Figure I.3 New build process.

The processes at Microsoft are the same across the different product teams whether you are building an operating system, MSN (Microsoft Network) components, SBS (Small Business Server), or Visual Studio. The specific tools and mileage might vary, however.

As mentioned at the beginning of this Introduction, the processes talked about in this book scale up to the largest software project in the world (Windows) but also scale down to small team projects of about 40 to 100 developers. If you have 20 to 30 developers on your project and you never plan to grow, some of these topics might be overkill for what you are doing. On the other hand, failure is always an option if you do not consider any of the recommendations in this book but you plan on growing your group or team beyond 30 developers.

Finally, with technologies and tools always changing, I tried to write this book in a classic sense that is independent of the tools or language you are developing with so that you can use the processes and principles in this book no matter what platform you use. Now let’s join the other build knights and figure out how to save the king and queen (upper management making crazy promises) of the castle (corporation or business)…


CHAPTER 1

DEFINING A BUILD

Philosophy: The build is a piece of software and should be treated as such.

The build is among the most heavily used and complex pieces of software in the development group and should be treated as such. —Danny Glasser, Microsoft developer in the Systems Group, March 9, 1991

The first thing we should do is define what a build is. What Danny describes in the previous quotation is important. The purpose of a build is to transform code written in any computer language into an executable binary. The end result of a software build is a collection of files that produce a product in a distributable package. In this case, package can mean a standalone application, Web service, compact disc, hotfix, or bug fix.

If you do not think it is worthwhile to spend resources on a good build process, your product will not be successful. I have been on a couple of product teams at Microsoft that have failed, and I have seen many others fail because they were not able to consistently build and test all of the product’s code. I also see this at customer sites when I am reviewing their build process. The companies that have clean, crisp, reliable build and release processes are more successful than the ones with ad hoc, insufficient processes.

The Two Types of Builds: Developers and Project

I like to say that there are really only two types of builds: ones that work and ones that don’t. Seriously, though, when you’re shipping a product, you should consider these two different types of builds:

■ Developers’ (local machine builds)—These builds often happen within an editor such as Visual Studio, Emacs, SlickEdit, or vi. Usually, this is a fast compile/link of code that the developer is currently working on.

■ Project (central build process)—This type of build typically involves several components of an application, product, or a large project, such as Windows, or in some cases several projects included in a product, such as Microsoft Office.

The developer’s build process should be optimized for speed, but the project build process should be optimized for debugging and releases. I am talking about optimizing the process, not compiler or linker optimization switches. Although speed and debugging are important to everyone who is writing code, you must design a project build process to track build breaks and the offender(s) as quickly as possible because numerous people are waiting for a build to be released. For a developer, what seems to be most important is clicking some type of Build and Run button to make sure the code compiles without errors and then checking it in. For the build team, building without errors and having the ability to track down the person who broke the build is the most important thing.

NOTE In some simple scenarios, these two build cases can use the same process. If this is the case, the team—what I refer to as the Central Build Team—should dictate the build process.

This team—not the developers—should design the project build process. All too often, the developers design the project build process, which causes problems. Because developers usually build just the code modules that they work on and not the whole project on a regular basis, they look for shortcuts that are not necessarily in the best interest of building the entire project. For example, they might use file references instead of project references. If a developer references a specific file in Visual Studio and the sources of that file change, the changes are not automatically picked up, because a fixed version of the file was referenced instead of the project that builds it. Developers use file references to save time; they are not interested in picking up the latest sources of the specified file. For a project build, however, file references are not recommended.

The Central Build Team should never be at the mercy of mandatory build environment settings for building a specific component. If such a setting is necessary to build a component, it should be proposed to the Central Build Team for inclusion. Then the CBT can determine the impact of the addition or change to the entire project and approve or disapprove the proposal.
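To see why file references go stale, here is a toy dependency walk (Python; the project graph, names, and flags are invented for illustration, and Visual Studio's actual behavior is driven by its project system, not code like this):

    # Hypothetical project graph: "app" has a project reference to "core"
    # and a file reference to a pinned copy of utils.dll.
    projects = {
        "core": {"project_refs": [], "file_refs": [], "changed": True},
        "app":  {"project_refs": ["core"], "file_refs": ["utils.dll"], "changed": False},
    }

    def build(name):
        proj = projects[name]
        for dep in proj["project_refs"]:
            build(dep)                   # project reference: dependency is rebuilt first
        if proj["changed"]:
            print(f"rebuilding {name}")
            proj["changed"] = False
        for pinned in proj["file_refs"]:
            # File reference: the pinned binary is used as-is, even if the
            # sources that produced it have changed since it was built.
            print(f"using pinned binary {pinned} (possibly stale)")

    build("app")   # rebuilds core via the project reference; utils.dll is trusted blindly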

Building from the Inside Out

One of my favorite questions to ask a customer’s development or build manager when I go onsite is how often they release a new build process. I usually get long pauses or funny looks and then finally get the answer “Every day.” Of course, as you might suspect, I am not talking about releasing a daily build, but a new build process. The fact that so many companies do not release new build processes on a regular basis does not surprise me. This is because traditionally creating a build process is an afterthought when all of the specifications of a project have been written.

Many project and program managers think that the actual building of a project is pretty trivial. Their attitude is that they can simply have the developer throw his code over the wall and hire someone to press a Build button, and everything will be fine. At Microsoft, we understand that whether you’re building the smallest application or something huge and complicated like Windows, you should plan and think through the process thoroughly in advance. Again, I recommend that you consider the build process a piece of software that you regularly revise and deploy throughout your product team. You should also add to your project schedule some “cushion time” to allow for unforeseen build breaks or delays; I would pad the milestone dates by at least one week for build issues.

The concept of “building from the inside out” tends to confuse customers who are not familiar with a centralized build process. The idea is that the Central Build Team determines what the build process is for a product and then publishes the policies to an internal build site. All development teams in the project must comply with the Central Build Team process; otherwise, their code check-in is not accepted and built. Unfortunately, this concept is usually the complete opposite of how a build system for a project actually evolves over time. The Central Build Team for a project usually goes out of its way to accommodate the way developers build their code. “Building from the inside out” means that the Central Build Team figures out the best way to get daily builds released, and everyone uses that process independently or in parallel with the way his specific development team builds. This total change in development philosophy or religion can be a culture shock to some groups. I talk more about changing a company’s culture or philosophy in Chapter 18, “Future Build Tools from Microsoft.” For now, let’s stay on the topic of builds.


What we did in the past in the Windows group—and what they still do today—is to deploy new releases of the build process at major milestones in the project life cycle. Sometimes the new releases involve tool changes such as compilers, linkers, and libraries. At other times, there are major changes such as a new source code control tool or a bug tracker. Because a build lab tends to have some downtime while the build team waits for compiles, links, and tests to finish, it should take advantage of these slow times to work on improvements to the build process. After the lab tests the improvements and confirms they are ready for primetime, it rolls out the changes. One way to deploy a new build process after a shipping cycle is to send a memo to the whole team pointing to an internal Web site that has directions on the new process that the Central Build Team will be using in future product builds.

Microsoft Sidenote: Developers in a Build Lab

Today, the Windows build lab has its own development team working on writing and maintaining new and old project tools. The development team also works on deploying new build processes. Conversely, of the more than 200 customers I’ve spoken to, only one or two of them have developers working in a build team. Remember Danny’s quote at the beginning of this chapter and notice the date—1991. In 1991, Windows NT had only a few hundred thousand lines of code, unlike the more than 40 million lines of code that Windows XP has today. Even in the early stages of developing Windows NT, Microsoft recognized the importance of a good build process.

Chapter 3, “Daily, Not Nightly, Builds,” covers in more detail the importance of the build team being the driving force to successfully ship a product.

More Important Build Definitions

I need to define some common build terms that are used throughout this book. It is also important for groups or teams to define these terms on a project-wide basis so that everyone is clear on what he is getting when a build is released.

■ Pre-build—Steps taken or tools run on code before the build is run to ensure zero build errors. Also involved are necessary steps to prepare the build and release machines for the daily build, such as checking for appropriate disk space.
■ Post-build—Includes scripts that are run to ensure that the proper build verification tests (BVTs) are run. This also includes security tests to make sure the correct code was built and nothing was fused into the build.
■ Clean build—Deleting all obj files, resource files, precompiled headers, generated import libraries, or other byproducts of the build process. I like to call this cleaning up the “build turds.” This is the first part of a clean build definition. Most of the time, build tools such as NMake.exe or DevEnv.exe handle this procedure automatically, but sometimes you have to specify the file extensions that need to be cleaned up. The second part of a clean build definition is rebuilding every component and every piece of code in a project. Basically, the perfect clean build would be building on a build machine with the operating system and all build tools freshly installed.
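As a concrete illustration of the first half of that clean build definition, here is a minimal sketch of a cleanup pass (Python; the byproduct extensions are assumptions for illustration, since real build tools such as NMake.exe or DevEnv.exe usually do this for you and your tools may emit different files):

    import os

    # Assumed byproduct extensions; match these to what your tools emit, and
    # be careful to list only generated files, never third-party libraries.
    BUILD_TURDS = (".obj", ".res", ".pch", ".ilk", ".exp")

    def clean_tree(root):
        """Part one of a clean build: delete every build byproduct under root."""
        removed = 0
        for folder, _dirs, files in os.walk(root):
            for name in files:
                if name.lower().endswith(BUILD_TURDS):
                    os.remove(os.path.join(folder, name))
                    removed += 1
        print(f"removed {removed} build byproducts under {root}")

    # clean_tree(r"C:\src\myproduct")   # hypothetical source tree root
    # Part two of a clean build would then rebuild every component from scratch.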

Microsoft Sidenote: Clean Build Every Night

While working in the Windows NT build lab on NT 3.51, I remember reading in a trade magazine that the Windows NT group ran clean builds every night. The other builders and I laughed at this and wondered where this writer got his facts. We would take a certain number of check-ins (usually between 60 and 150 per day) and build only those files and projects that depended on those changes. Then one of us would come in over the weekend and do a clean build of the whole Windows NT tree, which took about 12 hours. We did the clean builds on the weekend because it took so long, and there were usually not as many check-ins or people waiting on the daily build to be released. Today, with the virtual build lab model that I talk about in Chapter 2, “Source Tree Configuration for Multiple Sites and Parallel (Multi-Version) Development Work,” the Windows NT team can perform clean builds every night in about 5 or 6 hours.

■ Incremental build—The secret to getting out a daily build to the test team, regardless of circumstances, is to perform incremental builds instead of daily clean builds. This is also the best way to maintain quality and a known state of a build. An incremental build includes only the code of the source tree that has changed since the previous build. As you can guess, the build time needed for an incremental build is just a fraction of what a clean build takes.
■ Continuous integration build—This term is borrowed from the extreme programming (XP) practice. It means that software is built and tested several times per day as opposed to the more traditional daily builds. A typical setup is to perform a build every time a code check-in occurs.
■ Build break—In the simplest definition, a build break is when a compiler, linker, or other software development tool (such as a help file generator) outputs an error caused by the source code it was run against.
■ Build defect—This type of problem does not generate an error during the build process; however, something is checked into the source tree that breaks another component when the application is run. A build break is sometimes referred to or subclassed as a build defect.
■ Last known good (LKG) or internal developers workstation (IDW) builds—These terms are used as markers to indicate that the build has reached a certain quality assurance criterion and that it contains new high-priority fixes that are critical to the next baseline of the shipping code. The term LKG originated in the Visual Studio team, and IDW came from the Windows NT organization. LKG seems to be the more popular term at Microsoft.
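To show the idea behind an incremental build, here is a minimal sketch (Python; the stamp file and print statements are stand-ins for illustration, not the mechanism any Microsoft build lab actually uses). Only sources that changed since the previous build are rebuilt:

    import os, time

    STAMP = "last_build.stamp"   # hypothetical marker touched after every build

    def incremental_build(sources):
        """Rebuild only what changed since the previous build."""
        last = os.path.getmtime(STAMP) if os.path.exists(STAMP) else 0.0
        todo = [s for s in sources if os.path.getmtime(s) > last]
        for src in todo:
            print(f"compiling {src}")        # the real compile/link step goes here
        with open(STAMP, "w") as f:          # move the marker forward
            f.write(str(time.time()))
        print(f"rebuilt {len(todo)} of {len(sources)} files")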

Microsoft Sidenote: Test Chart Example

The best way to show how Microsoft tracks the quality of the product is through an example of the way the Windows team would release its version of a high-quality build. Again, the Windows team uses the term internal developers workstation (IDW), and other teams use last known good (LKG). In the early days of the Windows NT group, we had a chart similar to the one in Figure 1.1 on the home page of the build intranet site. Most people on the project kept our build page as their default home page so that whenever they opened Internet Explorer (IE), the first thing they would see was the status of the project; then they would check the Microsoft (MSFT) stock price.

[Figure 1.1 is a line chart titled “Daily Quality Chart”: the y-axis shows % Regression Tests Pass (0 to 100), the x-axis shows Build Number (1 through 11), with one line for the Daily Build and a horizontal line for the IDW Baseline.]

FIGURE 1.1 Sample quality chart.

The way to read Figure 1.1 is that any build we released that passed more than 90 percent of the basic product functionality tests—what we called regression tests—and did not introduce new bugs was considered an IDW build. This quality bar was set high so that when someone retrieved a build that was stamped IDW, he knew he had a good, trustworthy build of the product. As you can imagine, when the shipping date got closer, every build was of IDW quality. Furthermore, when a new IDW build was released to the Windows team, it was everyone’s responsibility to load the IDW build on the machine in his office and run automated stress tests in the evening. Managers used to walk to their employees’ offices and ask them to type winver to verify that they had the latest IDW build installed before they went home for the evening. Today, managers have automated ways to make sure that everyone is complying with the common test goal.

This is also where the term “eating our own dog food” originated. Paul Maritz, general manager of the Windows team at that time, coined that phrase. It simply means that we test our software in-house on our primary servers and development machines before we ship it to our customers. Dogfooding is a cornerstone philosophy at Microsoft that will never go away.


The build team would get the data for the quality chart from the test teams and publish it as soon as it was available. This is how we controlled the flow of the product. In a “looser” use of the word build, the quality became part of the definition of a build number. For example, someone might say, “Build 2000 was an excellent build” or “Build 2000 was a crappy build,” depending on the test results and personal experience using the build.
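A trivial version of that release gate, sketched in Python (the 90 percent bar and the no-new-bugs rule come from the chart discussion above; the function itself is hypothetical):

    def is_idw_quality(regression_pass_rate, new_bugs):
        """The IDW/LKG bar described above: >90% passes and no new bugs."""
        return regression_pass_rate > 90.0 and new_bugs == 0

    print(is_idw_quality(93.0, 0))   # True: stamp the build IDW/LKG
    print(is_idw_quality(95.0, 2))   # False: good pass rate, but it regressed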

How Your Product Should Flow

Never mistake activity for achievement. —Coach John Wooden, UCLA basketball legend

Recently, while I was at a popular application development site going through a build architect review, I noticed how extra busy everyone was. Everyone was running around like he was on the floor of the New York Stock Exchange trying to sell some worthless stock before the market closed. People barely had enough time to stop and talk to me about their top five build or SCM pain points. They didn’t have time for chitchat because they were too preoccupied with putting out fires such as build breaks, administrating tools and permissions, and reacting to new bugs coming from their customers. Their explanation was that they did not have enough resources to do what the upper managers wanted them to do. This might have been partially true, but it was not the complete truth. They were equating this busy work with their job duties and why they got paid.

This was later confirmed when I gave them my final trip report of how to improve their processes such that everything would be fixed and automated. The first question their build team asked was “If all of this is fixed and automated, then what will we do?” I was shocked. These guys were so used to being in reactive mode that they seemed to think that if they were not constantly putting out fires, their position was not needed.

The rest of this chapter outlines a smooth flow of how your product development should go. As Kent Beck, author of Test-Driven Development and several Extreme Programming books, points out, flow is what the build team should encourage and try to achieve. The build team drives the product forward. I put together Figure 1.2 to show how this works at Microsoft because I don’t think this concept is always clear; it is the underlying philosophy of this book.

[Figure 1.2 shows the flow: the program/product managers, development team, and test team each triage work into a feature/bug database that stores everything (triage happens every day or as needed). Bugs are assigned and taken to the WAR meeting, which sets priority/severity and happens every day religiously. Approved items that need implementing are implemented and checked in; items not approved are re-assigned a priority/schedule. The build team picks up the changes, propagates the build, sends email that the build is out, and places the daily build on a staging server for release.]

Figure 1.2 Software development flow.

Software Development Flow

The three boxes at the top of Figure 1.2 represent the respective teams listed. The members of each team meet to discuss the progress of its code development. After the teams discuss the issues, they mark their priority in a bug database, or work item tracker. Sometimes at Microsoft we call everything (features, requirements, bugs, tasks, risks, wish list) a bug, but work item is more accurate. Teams must enter every type of code implementation or necessary fix on the project into the work item tracker and assign it a tracking number.

Some Work Item Field Definitions

With the internal Microsoft work item tracker, more than 46 fields are available in each item, although not all are used all the time. For Microsoft confidentiality reasons, I cannot include a graphic of our tracking tool here. However, the following are some of the fields that are included in a work item.

Setting work item priority and severity:

■ Priority—This field communicates overall importance and determines the order in which bugs should be attacked. A bug’s priority takes severity and other project-related factors into account.
  ■ Pri 0—Fix before the build is released; drop everything you are doing and fix this immediately.
  ■ Pri 1—Fix by the next build.
  ■ Pri 2—Fix soon; specific timing should be based on the test/customer cost of the workaround.
  ■ Pri 3—Fix by the next project milestone.
  ■ Pri 4—Consider the fix for the upcoming release, but postponement is acceptable.
■ Severity—This communicates how damaging a bug is if or when it is encountered.
  ■ Sev 1—This involves an application crash, product instability, a major test blockage, a broken build, or a failed BVT.
  ■ Sev 2—The feature is unusable, a bug exists in a major feature and has a complex workaround, or test blockage is moderate.
  ■ Sev 3—A minor feature problem exists, or the feature problem has a simple workaround but small test impact.
  ■ Sev 4—Very minor problems exist, such as misspelled words, incorrect tab order in the UI, broken obscure features, and so on. Sev 4 has little or no test impact.

Following are other work item or bug field definitions:

■ Status—Active, Resolved, or Closed
■ Substatus—Fix Available
■ Assigned To—The most critical field, because this is the owner of the item
■ FixBy—The project due date for the bug fix

Each work item has two build fields:

■ Build (1)—The build number that the bug was found on
■ Build (2)—The build number that the bug was resolved on
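A minimal sketch of these fields as a data structure (Python; the field names and enum mirror the definitions above but are otherwise hypothetical, not the schema of Microsoft's internal tracker):

    from dataclasses import dataclass
    from enum import Enum

    class Status(Enum):
        ACTIVE = "Active"
        RESOLVED = "Resolved"
        CLOSED = "Closed"

    @dataclass
    class WorkItem:
        title: str
        priority: int            # 0 (fix immediately) through 4 (postponable)
        severity: int            # 1 (crash/broken build) through 4 (cosmetic)
        assigned_to: str         # the owner; the most critical field
        fix_by: str              # project due date for the fix
        status: Status = Status.ACTIVE
        substatus: str = ""      # e.g. "Fix Available"
        build_found: int = 0     # build number the bug was found on
        build_resolved: int = 0  # build number the bug was resolved on

    bug = WorkItem("Setup crashes on upgrade", priority=1, severity=1,
                   assigned_to="dev1", fix_by="M2")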


Microsoft Sidenote: How Visual Studio Resolves and Closes Bugs

Testers close bugs. —Deep thought of the day

I once was asked by a test manager to summarize everything I learned about builds in one sentence. I told him that “there are no free lunches, especially in the build lab, but there might be free beer.” He told me that he was disappointed that I did not have anything deeper than that. He then said his motto was “Testers close bugs.” I knew what he meant, so I said with tongue in cheek, “Wow, that’s deep.” I’m not sure if he took that as a compliment or just thought I was not very funny. Regardless, he did have a good point. Let’s break down the details of “a bug’s life…”

When a developer fixes a bug on his machine, he marks the bug’s substatus as Fix Available and keeps it assigned to himself. After he checks in the change to the team branch or tree, he resolves the bug (changing the status from Active to Resolved) and reassigns the bug to the original bug opener or a tester who owns that area of the product. The original bug opener or tester then waits until an official build comes out that contains the bug fix. He then walks through the repro steps to ensure that the bug has truly been fixed. If it has, he closes the bug by changing the status from Resolved to Closed. If the issue still exists, the bug opener or tester reactivates the bug by resetting the status to Active and reassigning it to the developer. This continues until the bug is fixed or gets postponed for the next milestone or release.
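The life cycle just described, as a small state-machine sketch (Python; a simplified model of the Active/Resolved/Closed flow, not the actual tracker's implementation):

    class Bug:
        def __init__(self, opener, developer):
            self.status = "Active"
            self.substatus = ""
            self.assigned_to = developer
            self.opener = opener

        def fix_available(self):             # developer fixed it locally
            self.substatus = "Fix Available" # still assigned to the developer

        def resolve(self):                   # change checked in to the team tree
            self.status = "Resolved"
            self.assigned_to = self.opener   # opener/tester verifies in a build

        def close(self):                     # repro steps pass on an official build
            self.status = "Closed"

        def reactivate(self, developer):     # repro steps still fail
            self.status = "Active"
            self.assigned_to = developer

    bug = Bug(opener="tester1", developer="dev1")
    bug.fix_available(); bug.resolve(); bug.close()
    print(bug.status)   # Closed; only the tester side closes bugs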

WAR or Ship Meeting

Known as WAR, Central WAR, or Ship (the softer, more friendly Visual Studio Team System term), this meeting is focused on tracking and controlling the main product build. Its goal is to ship the product at a high quality according to its schedule by dealing with day-to-day project issues, test reports, and metric tracking.


[Figure 1.3 is a cartoon of the WAR team: a war room with a “War Room Agenda” board reading “Kill Bugs!” and the caption “This guy is taking project management a little too far… and a little too literal.”]

Figure 1.3 WAR team.

The WAR team—everyone attending the WAR meeting—must approve every work item before it can get built and shipped in the product. After the WAR team approves a work item, a field in the bug tracker gets set so that everyone on the build team knows that it’s okay to accept this check-in into the main build lab. If the WAR team does not approve the work item, the work item is reassigned to the person who opened it or to Active, which means that no specific person owns the bug, just a team. At this point, if the person who opened the bug thinks it should be fixed sooner than the people in the WAR meeting determine, it is his responsibility to push back with a solid business justification. If the person pushes back with a solid business justification and the WAR team still doesn’t accept the change into the build, the work item is marked as Won’t Fix or Postponed.

Upon the item’s WAR team approval, the developer works with the build team to get his code changes into the next build. After the build team compiles and links all the source code, the code goes through the congeal process, which brings all the pieces of the project together. This includes files that don’t need to be compiled, such as some HELP, DOC, HTML, and other files. Then the post-build process starts (more on post-build in Chapter 14, “Ship It!”), which in some cases takes just as long or longer than the build process.
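As a sketch of those stages in order (Python; the stage names follow the text, and the functions are placeholders for the real work):

    def compile_and_link(): print("compiling and linking all source code")
    def congeal():          print("gathering non-compiled files: HELP, DOC, HTML")
    def post_build():       print("post-build: BVTs, security checks, packaging")

    # The stages run strictly in this order; post-build can take as long as,
    # or longer than, the build itself.
    for stage in (compile_and_link, congeal, post_build):
        stage()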


Microsoft Sidenote: How the Visual Studio Team Controls All Check-Ins and “Tell and Ask Mode”

The Visual Studio team controls check-ins in another way: the “tell and ask” process. Project managers use this process to slow the rate of code churn and force teams to deliberate about what work items or bugs are fixed or open. This is called triage. Scott Guthrie is the product unit manager in Visual Studio. He explains triage in his blog:

During tell mode, teams within our division are still given discretion to fix any bugs they want—they just need to be prepared to present and explain why they chose the ones they did to the central division ship room. This ends up ensuring a common bar across the division, slows the rate of fixes, and slowly brings up build quality. You might naturally wonder how not fixing bugs could possibly bring up build quality, since this obviously seems counterintuitive. Basically, the answer lies in the regression percentage I talked about earlier for check-ins. Even with a low regression number, you end up introducing new bugs in the product. (And when you have a division of over 1,000 developers, even a low percentage regression rate can mean lots of bugs introduced per week.) By slowing the rate of check-ins, you slow the number of regressions. And if you focus the attention on bad bugs and add [an] additional review process to make sure these fixes don’t introduce regressions, the quality will go up significantly.

During ask mode, teams within our division then need to ask permission of our central ship room committee before making a check-in—which adds additional brakes to slow the check-in rate. In addition, all bugs in ask mode must go through a full nightly automation run and buddy testing (which takes at least 12 hours) to further guard against introducing problems. Ask mode will also be the time when we’ll drive our stress-passing numbers up to super-high levels, and we’ll use the low rate of check-ins to find and fix pesky, hard-to-find stress failures.

You can read the entire entry at http://weblogs.asp.net/scottgu. I talk more about processes to control all check-ins into the source tree in Chapter 10, “Building Managed Code.”


Release to Staging Servers

After the build is complete and has no errors, it is propagated to the daily build servers, where at least 15 to 20 builds are stored with all the sources and tools necessary to build. Milestone releases also are kept on the server. This is where the test team picks up the build. This is the “secret” to fast development and keeping your developers happy. I realize that most, if not all, SCC tools can retrieve the sources of a certain build, but sometimes those tools are clumsy, or the labels on the trees are not accurate. So we came up with this staging server with massive amounts of disk space available and stored our releases on it. It is a lot easier for the development and test teams to search that server than the SCC database. From the staging servers, the build can go to production. This process is covered in Chapter 14.
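A minimal sketch of the retention side of such a staging server (Python; the share path, the build-number folder naming, and the 20-build window are assumptions based on the numbers above):

    import os, shutil

    STAGING = r"\\buildserver\builds"   # hypothetical staging share
    KEEP = 20                           # keep at least the last 20 builds

    def prune_old_builds():
        # Build folders named by build number, e.g. "2000", "2001", ...
        builds = sorted((d for d in os.listdir(STAGING) if d.isdigit()), key=int)
        for stale in builds[:-KEEP]:
            shutil.rmtree(os.path.join(STAGING, stale))
            print(f"pruned build {stale}")

    # Milestone releases would live outside this numbered scheme (or be
    # flagged) so they are never pruned.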

Important Definitions

The following sections discuss terms that are specific to Visual Studio but that are used all over the Web and at various companies I have visited.

Solution Files

If you are new to Visual Studio .NET, you probably are not familiar with the term solution. A solution essentially represents everything you are currently working on. Visual Studio .NET uses solutions as containers for individual projects, which generate your system components (.NET assemblies). Solution files maintain project dependency information and are used primarily to control the build process.

Project

In the context of this book, projects are one of three types:

■ General development projects—The term project in its loosest sense refers to your team’s current development effort.
■ Visual Studio .NET projects—Visual Studio .NET uses project files as containers for configuration settings that relate to the generation of individual assemblies.
■ Visual SourceSafe (VSS) projects—A project in a VSS database is a collection of files that are usually related logically. A VSS project is similar to an operating system folder, with added version control support.


Microsoft Solution Framework

It would not be proper to print a Microsoft book on Software Configuration Management and not mention the Microsoft Solution Framework (MSF) that has been publicly available for years. The origin of this process came from the Microsoft Consulting Services (MCS) group and is based on the terms and the way that Microsoft organizes its software development groups. The funny thing is that many people on the Microsoft product teams have never heard of MSF. They use the processes or know the terms, but they do not realize that Microsoft has been teaching this to customers for years. That is a good example of how a documented process came from an informal undocumented process. Now the documented process (MSF) is the leader, and many new terms in the product teams come out of MSF. MSF will be included in the upcoming Visual Studio Team System. It’s a great high-level view of how Microsoft runs its product teams. Because a ton of information about MSF is available on the Microsoft Developers Network (MSDN, http://msdn.microsoft.com), I will show just one chart that sums up the whole process (see Figure 1.4).

[Figure 1.4 shows “The Goals of Each MSF Role” arranged around a round table, connected by Communication: Product Management (satisfied customers), Program Management (delivering the solution within project constraints), Development (building to specification), Test (approval for release only after all quality issues are identified and addressed), User Experience (enhanced user effectiveness), and Release Management (smooth deployment and ongoing operations).]

Figure 1.4 MSF roles.


Figure 1.4 is self-explanatory. The point of the graphic is to show that there is not a hierarchical approach to shipping software at Microsoft, but a “round table” one. Ideally the Build Master would be King Arthur.

Summary

Speaking the same language is important in any project or company. Making sure everyone is clear on the terms or lingo in your group is especially important. For example, if you are talking about a build process or bug to someone on your team and do not define the context, or if the terms are not explicitly defined somewhere, you will miscommunicate your point (or the other person will miscommunicate theirs). This can lead to project setbacks. In the following chapters, I will continue to define terms that we use at Microsoft and what seem to be industry-standard terms. This is important because there can be variations of a definition, and I want to make sure we are all clear on the points being made. Also, it is the build team’s responsibility to set these definitions for a group and publish them on an internal Web site so that no one is confused about what they mean and people who are unfamiliar with the terms can reference them easily.

Recommendations

■ Define terms in your development process, and keep a glossary of them on an internal build Web page. If you like, standardize on the definitions in this chapter.
■ Clean build your complete product at least once per week, or every day if possible.
■ Use incremental builds on a daily basis if clean builds are not possible or practical.
■ Start charting the quality of your product, and post it where everyone involved in the project can see it.
■ Release LKG (or IDW) builds weekly; then switch to daily releases toward the end of the shipping cycle.
■ Follow the Software Development Flow diagram.
■ As noted earlier, I will also post the definitions in this book to www.thebuildmaster.com so you can download them and publish them to your group or company.

CHAPTER 2

SOURCE TREE CONFIGURATION FOR MULTIPLE SITES AND PARALLEL (MULTI-VERSION) DEVELOPMENT WORK

Philosophy: There should be a single source tree that is owned by the Central Build Team and, if needed, that could be synced up, built, and deployed on any day of the year. This would be the mainline or “golden master” tree. —Vincent Maraia

If there was a way for me to patent this concept, I would. This could be the most important topic in this book and the cornerstone of a successful software build process. I see customers struggle with too many cross-project dependencies, source tree integration problems, constant build breaks, and developers and testers spending too much time on hotfixes instead of writing new code. The majority of the time, these things can be traced to the way people have their source trees configured.

Some development groups incorrectly blame their build problems on their version control tool’s lack of branching functionality. By Microsoft’s own admission, Visual SourceSafe (VSS) is not a very powerful Source Code Control (SCC) tool. (As discussed in Chapter 18, “Future Build Tools from Microsoft,” Microsoft plans to change this with future releases of Visual Studio Team System [VSTS].) It’s true that some of the tools out there are weak, but it is usually the source tree structure that is broken, not the lack of features or knowledge of these features. Keep in mind that an SCC tool is simply a database with a frontend application that manages all the items in the database. In our particular case, the application manages sources. In this chapter, we discuss the concepts of organizing your code. Then it is a matter of figuring out how to use an SCC tool to make it happen.


Many books have been written on setting up source trees and different branching models of version control systems such as Rational ClearCase, Merant PVCS, and Microsoft’s own VSS. This chapter is about how to best set up your source trees and successfully track your code check-ins and your product, whether the application is a Web application or a single-platform application downloadable from the Internet or shipped out of the box. Also included in this chapter are the best practices that Microsoft has found in working with multiple development sites and using Virtual Build Labs (VBLs).

The VBL process was developed by Mark Lucovsky, a distinguished engineer at Microsoft who had a rich history at Digital Equipment Corporation (DEC) before coming to Microsoft in 1986 to work on NT (N10, or New Technology). The VBL model is an excellent one to use if you have multiple development sites or are trying to do parallel development on a product. This process works extremely well even if you have one central development location and one development team. However, if your product or company has a maximum of 10 or 12 developers and never plans to grow beyond that number, the VBL system might be overkill.

So, you ask, how does this topic on source tree configuration fit into a build book? Let’s start with some basic definitions. Then I’ll explain the connection.

Definitions

Continuing the discussion from Chapter 1, “Defining a Build,” the following are additional build definitions that are good to standardize on. In keeping with the theme of “speaking the same language,” look over the terms and how they are defined here even if you are familiar with them. This will keep us in sync.

■ Source code—Files written in high-level languages such as C# that need to be compiled (for example, foo.cs).
■ Source(s)—All the files involved in building a product (for example, C, CPP, VB, DOC, HTM, H, and CS). This term is used mostly as a catch-all phrase that is specific not only to source code files but to all the files that are stored in version tracking systems.
■ Codeline—A tree or branch of code that has a specific purpose, such as the mainline, release line, or hotfix line that grows collectively.
■ Mainline or trunk (“The Golden Tree”)—The main codeline of the product that contains the entire source code, document files, and anything else necessary to build and release the complete product.
■ Snapshot—A specific point in time in which the sources and build are captured and stored, usually on a release or build machine.
■ Milestone—A measurement of work items that includes a specified number of deliverables for a given project scheduled for a specified amount of time that are delivered, reviewed, and fixed to meet a high quality bar. The purpose of a milestone is to understand what is done, what is left to do, and how that fits with the given schedule and resources. To do this, the team must complete a portion of the project and review it to understand where the project is in the schedule and to reconcile what is not done with the rest of the schedule. A milestone is the best way to know how much time a portion of the project will take.
■ Code freeze—A period when the automatic updates and build processes are stopped to take the final check-ins at a milestone.
■ Public build—A build using the sources from the mainline or trunk.
■ Private build (also referred to as a sandbox build)—A build using a project component tree to build more specific pieces of the product. This is usually done prior to checking in the code to the mainline.
■ Branching—A superset of files off the mainline taken at a certain time (snapshot) that contains new developments for hotfixes or new versions. Each branch continues to grow independently or dependently on the mainline.
■ Forking—Cloning a source tree to allow controlled changes on one tree while allowing the other tree to grow at its own rate. The difference between forking and branching is that forking involves two trees, whereas branching involves just one. It is also important to note that forking or cloning makes a copy (snapshot) of the tree and does not share the history between the two trees, whereas branching does share the history.
■ Virtual Build Labs (VBLs)—A Virtual Build Lab is a build lab that is owned by a specific component or project team. The owner is responsible for propagating and integrating his code into the mainline or public build. Each VBL performs full builds and installable releases from the code in its source lines and the mainline. Although the term virtual is used in the name of the labs, don’t confuse it with Virtual PC or Virtual Machines, because the labs are real physical rooms and computer boxes. It is not recommended that you use virtual software for build machines except possibly for an occasional one-off or hotfix build. This concept is explained in Chapter 4, “The Build Lab and Personnel.” There is usually a hierarchy of VBLs so that code “rolls up” to the mainline or trunk. For example, let’s say that you have a mainline, Project A is a branch off of the mainline, and Developer 1 has a branch off the project branch. Developer 1 has several branches off his branch, with each branch representing a different component of the product. If he wants to integrate one of his branches into main, he should first merge his changes with all the levels above the branch to make sure he gets all the changes. Alternatively, he can just roll the changes into main, which sits higher in the hierarchy. This will become clearer in the next couple of pages.
■ Reverse integration (RI)—The process of moving sources from one branch or tree to another that is higher in the VBL hierarchy.
■ Forward integration (FI)—The process of moving sources from one branch or tree to another that is lower in the VBL hierarchy.
■ Buddy build—A build performed on a machine other than the machine that the developer originally made changes on. This is done to validate the list of changed files so that there are no unintended consequences to the change in the mainline build.
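A toy model of the RI/FI directions in a VBL hierarchy (Python; the tree layout is the Mainline/Project A/Developer 1 example from the VBL definition above, and the merge itself is reduced to a print):

    class Codeline:
        def __init__(self, name, parent=None):
            self.name = name
            self.parent = parent   # the codeline one level up in the hierarchy

        def forward_integrate(self):
            """FI: pull sources down from the parent (higher) codeline."""
            if self.parent:
                print(f"FI: {self.parent.name} -> {self.name}")

        def reverse_integrate(self):
            """RI: push sources up to the parent (higher) codeline."""
            if self.parent:
                print(f"RI: {self.name} -> {self.parent.name}")

    mainline = Codeline("Mainline")
    project_a = Codeline("Project A VBL", parent=mainline)
    dev1 = Codeline("Developer 1 branch", parent=project_a)

    # A developer rolls changes up level by level so nothing is missed:
    dev1.reverse_integrate()       # Developer 1 branch -> Project A VBL
    project_a.forward_integrate()  # pick up the latest mainline first
    project_a.reverse_integrate()  # Project A VBL -> Mainline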

To answer the question on how this topic relates to builds, I would like to borrow a quote. In a paper read at the Eighth International Workshop on Software Configuration Management in Belgium in 1998, Laura Wingerd and Christopher Seiwald reported that “90% of the SCM ‘process’ is enforcing codeline promotion to compensate for the lack of a mainline.” This quote was taken from Software Configuration Management Patterns by Stephen P. Berczuk with Brad Appleton. The book offers an outstanding explanation of how to develop your branching model for your source trees. I agree that if you do not have a mainline to build your product from, you will encounter all kinds of delays in shipping your code that do not seem directly connected to source tree configuration, such as trouble deploying hotfixes (more on this in Chapter 16, “Hotfixes or Patch Management”).

By creating a mainline or golden source tree, you will have fewer build errors, because any potential breaks are caught before the reverse integration (RI) merge into the golden tree. Developers can work on different versions of a product simultaneously without affecting other components. These are the two biggest advantages of moving to a process like this, among the other main points mentioned in the introduction.

Microsoft Sidenote: The Code Just Went Golden!

Do you know where the term “golden bits or tree” comes from? In the old days, when a product was at its final stage and ready to be released to manufacturing (RTM) to be packaged on a CD, a golden master was created. A golden master was a set of discs sent to Product Release Services for massive duplication, packaging, and deployment to resellers. The term golden master was a morph of the CD manufacturing term glass master, in which the 24K gold reflective layer of the CD was sandwiched between two glass pieces to provide optimum copying of the discs in the manufacturing process. Hence, when we shipped a product, it went “golden.” It was expensive for Microsoft to recall the bits at this point, not to mention a PR (public relations) disaster/embarrassment. Today we still use these terms, but with a little variation. We release a lot of Web services/applications online, such as Media Player, MSN, Passport, and Windows Update. With these products, we say released to Web (RTW) instead of RTM, or we say golden bits instead of golden master. The golden tree is where the golden bits sources are stored.

How This Process Works: An Example Using VSS

The best way to show how a mainline owned by the build team works is by an example using VSS as the SCC tool. You can substitute any version control tool for this example. I chose VSS because it is from Microsoft, and it is free when you purchase Visual Studio Enterprise Edition.


Golden Tree (Mainline “Pristine” Build) Setup

Looking at Figure 2.1, you can see that the mainline or golden tree on the left is the shipping tree. This is the codeline that the build team owns, maintains, and administers. The goal of every development group—and in this example Dev Team 1—is to get its code into the golden tree so that it can ship the product and get paid.

[Figure 2.1 shows two codelines over time: the mainline or golden tree codeline, with build snapshot and release points, and the Dev Team 1 codeline (VBLs, private tree, or branch), which in turn has branches for specific developers and other teams. Developers check in to the team tree or branch; when all the code from the group is ready, it is merged (reverse integrated) from the Dev Team 1 codeline into the mainline.]

Figure 2.1 Golden tree.

The codeline called Dev Team 1 is considered Virtual Build Lab 1, or a sandbox or private tree. With the limited functionality of VSS, this is a new instance of a source tree, not a branch off the mainline. With more powerful source code control tools, this can be just a branch off the mainline.

VBLs and Multisite Development

Each VBL should be able to take a snapshot of the sources in the mainline (forward integration), work in isolation, and then submit its changes in bulk back into the mainline. This allows each VBL to work independently from one another while picking up the latest, greatest stable code from the other VBLs. As stated in the definitions, all VBLs operate independently of one another and choose when to refresh the code in their tree or branch and when to reverse integrate (RI) their changes to the mainline, making their changes visible to the other VBLs. The VBL work can be happening one floor or 1,000 miles away from the Central Build Lab.


Table 2.1 Private Versus Public Builds

Private (VBL build): Performed and managed by a VBL.
Public (mainline build): Performed and managed by the Central Build Team.

Private: Testing is minimal before releasing the build.
Public: A minimum suite of tests must be run and passed before releasing.

Private: Can be done at any time.
Public: Usually done at a set time every day.

Private: Released informally.
Public: Released to proper release servers; ready for general consumption.

Private: Has its own rules and policies for check-in, but these should be dictated by the CBT.
Public: Strict, enforced procedure that must be followed for check-in; must go through the WAR meeting to check in.

When setting up a VBL structure, it is a good idea to keep the information in Table 2.1 in mind. It outlines the most important differences between VBL builds and mainline builds. If you decide to adopt this type of tree structure, I suggest that you elaborate on the entries in the table. The details will be dictated by how your development and test team is organized.


Propagating the changes into the mainline is a big deal and is treated as such. This is the point where the Central Build Team sees the VBL’s changes for the first time. Build breaks in the mainline are not acceptable and should never be tolerated. There should never be a reason for a break if the check-in policies for the mainline are followed. VBLs that are not able to produce reliable builds cannot propagate their changes into the mainline. Thus, their code does not make it into the product. This is good, tough logic, but it’s the Achilles’ heel of the VBL process. Although the threat of not shipping seems like it would be enough to keep the wheels rolling, it doesn’t always work. There are too many dependencies between groups to say “Sorry, you will not make it in the product.” That’s why there should be aggressive proactive management of the VBLs through the Central Build Team to make sure the VBLs follow a stricter process. That way, they do not delay other components of the project because their build system is not up to par with the mainline. Table 2.1 is a summary of the differences between private and public builds.
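A sketch of the kind of mainline gate those check-in policies imply (Python; the three checks mirror the rules above, namely buddy builds, required tests, and WAR approval, but the function and field names are hypothetical):

    def can_check_in_to_mainline(change):
        """Enforce the public-build rules before a VBL change reaches the mainline."""
        checks = {
            "buddy build passed":   change.get("buddy_build_passed", False),
            "required tests passed": change.get("tests_passed", False),
            "WAR meeting approved": change.get("war_approved", False),
        }
        for rule, ok in checks.items():
            if not ok:
                print(f"rejected: {rule} not satisfied")
                return False
        print("accepted into the mainline build")
        return True

    can_check_in_to_mainline(
        {"buddy_build_passed": True, "tests_passed": True, "war_approved": True})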


Performing parallel development after the VBLs are set up should be rather painless, a huge benefit of the VBL process. Each developer can branch in his own VBL to work on multiple projects while sharing the code across other VBLs through the mainline. Because the structure of parallel development and hotfix codelines is similar, look at the examples in Chapter 16 to get a better idea about setting up the trees.

What Should Be Kept Under Source Control

In some groups at Microsoft, we store only code or documents that need some kind of version control on them as they are being developed. Other groups use their SCC tool to store everything, such as development tools, marketing work, and binaries. I am against the latter because I like to keep parts of the product separate. Despite what version control tool companies say, I do not think their tools are robust enough to track all binaries and other files that people like to store. Only code, documents, and anything else that needs to be built should be kept in an SCC tool. Third-party or external binaries, development tools such as compilers, and any other files that do not need to be built should be stored on a development server and kept up to date there. Release binaries belong on a release server. I discuss more about that in Chapter 3, “Daily, Not Nightly, Builds.”

Hatteras: A Look Into the Future of SCC at Microsoft

Hatteras is an enterprise-class Software Configuration Management (SCM) product. The codename Hatteras comes from a lighthouse on the shores of North Carolina where the product is being developed. The final name of the product is Team Foundation, and it includes more than just the source control functionality. The Hatteras piece is referred to as Team Foundation Source Control (TFSC). The other pieces of the Team Foundation product are touched on in Chapter 18. I wanted to include this tool in this chapter because I only briefly talk about the upcoming VSTS tools in Chapter 18 and wanted to go into more detail on TFSC here. Another reason for me to include this section is that there are some important definitions that need to be added to our build dialect, such as all of the branching definitions.

This tool has been completely designed and developed from scratch; in other words, this is not a new generation of Microsoft’s infamous VSS. It provides standard source code version control functionality that scales


across thousands of developers, such as Microsoft’s own development teams. As part of the Visual Studio (VS) 2005 release, Hatteras provides integration with the Visual Studio IDE and with other enterprise tools such as the Visual Studio work item (bug) tracking tool. Hatteras also provides a standalone GUI, a command-line interface, and a Web-based interface.

Let’s define some new terms as they relate to TFSC:

■ Repository—The data store containing all files and folders in the TFSC database.
■ Mapping—An association of a repository path with a local working folder on the client computer.
■ Working folder—A directory on the client computer containing a local copy of some subset of the files and folders in a repository.
■ Workspace—A definition of an individual user’s copy of the files from the repository. The workspace contains a reference to the repository and a series of mappings that associate a repository path with a working folder on the user’s computer.
■ Change set—A set of modifications to one or more files/folders that is atomically applied to the repository at check-in.
■ Shelve—The operation of archiving all modifications in the current change set and replacing those files with original copies. The shelved files can be retrieved at a later time for development to be continued. This is my favorite feature.

Some of the features in TFSC are fairly standard among SCC tools:

■ Workspace creation
■ Workspace synchronization
■ File checkout
■ Overlapping checkout by multiple users of the same file
■ Atomic change-set check-in
■ File diffs
■ Automated merge
■ Code-line branching
■ File-set labeling
■ User management and security

What really sets TFSC apart from the competition is its powerful merging and branching features. I don’t try to explain the entire product here, but just touch on why I think these two features are so cool.


Merging Functionality in TFSC

The merging functionality in TFSC is centered on the following typical development scenarios:

■ Scenario 1: The catch-up merge—The user wants to merge all changes from a source branch that have not yet been migrated to the target branch. The source and target can be a subtree or an individual file/folder.
■ Scenario 2: The catch-up no-merge—The user wants to discard nonmerged changes in the source branch from the set of candidate changes for future merges between the specified source and target.
■ Scenario 3: The cherry-pick merge—The user wants to merge individual change sets from the source branch to the target branch. Changes introduced to those files prior to the specified change set should not be migrated.
  ■ The user can specify the change sets to merge with a change set number.
  ■ The user can specify individual file revisions to merge between the source and target.
■ Scenario 4: The cherry-pick no-merge—The user wants to discard a single change set from the list of all possible changes to merge between the source and target so that this change set never appears in the list of candidates for a cherry-pick merge.
■ Scenario 5: Merge history query—The user wants to know whether the specified change set has been merged into the target branch. If it has, the user wants to know what change set the merge was committed in. The user also wants to know if part of the change set has been merged, but not all.
■ Scenario 6: Merge candidate query—The user wants to obtain a list of change sets that have been committed to a source branch but have not yet been migrated to the target branch. From this list, the user selects change sets to migrate with a cherry-pick merge.


How TFSC Addresses the Scenarios

TFSC merging is designed to provide users with an extremely powerful and flexible tool for managing the contents of branches. Merges can be made into a single file or into a tree of related files. Merges can also migrate the entire change history of the specified source files or an individual change set or revision that might contain a specific fix or feature that should be migrated without moving other changes from the source in the process. Merging the entire change history prior to a given point in time is known as a catch-up merge (Scenarios 1 and 2), whereas selecting individual change sets or revisions to merge is known as a cherry-pick merge (Scenarios 3 and 4).

The merge command also allows users to query for merge history and merge candidates and perform the actual merge operation. TFSC presents merge history and candidate merges as a list of change sets that have been or can be migrated between a source and a target branch. Merges can be made to a subset of files in a change set, creating a situation in which a partial change set has been merged. In this case, TFSC represents the partial state of the merge and allows the user to finish merging the change set later.

Merges are pending changes in TFSC. The user can choose to perform several merge operations within a workspace without committing changes following each merge. All these merges can be staged in the user’s workspace and committed with a single check-in as a single change set. In addition, the pending merge operation can be combined with the checkout and rename commands to interject additional changes to the files that will be committed with the merge. Hopefully you followed this summary and are still with me. Now let’s go into how branching works in TFSC.


TFSC merging is designed to provide users with an extremely powerful and flexible tool for managing the contents of branches. Merges can be made into a single file or into a tree of related files. Merges can also migrate the entire change history of the specified source files or an individual change set or revision that might contain a specific fix or feature that should be migrated without moving other changes from the source in the process. Merging the entire change history prior to a given point in time is known as a catch-up merge (Scenarios 1 and 2), whereas selecting individual change sets or revisions to merge is known as a cherry-pick merge (Scenarios 3 and 4). The merge command also allows users to query for merge history and merge candidates and perform the actual merge operation. TFSC presents merge history and candidate merges as a list of change sets that have or can be migrated between a source and a target branch. Merges can be made to a subset of files in a change set, creating a situation in which a partial change set has been merged. In this case, TFSC represents the partial state of the merge and allows the user to finish merging the change set later. Merges are pending changes in TFSC. The user can choose to perform several merge operations within a workspace without committing changes following each merge. All these merges can be staged in the user’s workspace and committed with a single check-in as a single change set. In addition, the pending merge operation can be combined with the checkout and rename commands to interject additional changes to the files that will be committed with the merge. Hopefully you followed this summary and are still with me. Now let’s go into how branching works in TFSC.

28

Chapter 2 Source Tree Configuration for Multiple Sites and Parallel (Multi-Version) Development Work

reasonably simple, but branching can become complicated. A complex system with lots of branched files can be hard to visualize. I recommend mapping this with a visual product (such as Visio) so that the picture is clear. Following are a handful of scenarios in which branching is interesting. Any SCM team should adopt these definitions. Release Branching

We've been working on a Version 1 release for a year now, and it is time to begin work on Version 2. We need to finish coding Version 1—fixing bugs, running tests, and so on—but many of the developers are finished with their Version 1 work (other than occasional interruption for bug fixes) and want to start designing and implementing features for Version 2. To enable this, we want to create a branch off the Version 1 tree for the Version 2 work. Over time, we want to migrate all the bug fixes we make in the process of releasing Version 1 into the Version 2 code base. Furthermore, we occasionally find a Version 1 bug that happens to be fixed already in Version 2. We want to migrate the fix from the Version 2 tree into the Version 1 tree.

Promotion Modeling

Promotion modeling is equivalent to release branching, where each phase is a release. It is a development methodology in which source files go through stages. Source files might start in the development phase, be promoted to the test phase, and then go through integration testing, release candidate, and release. This phasing serves a couple of purposes: it allows parallel work in different phases, and it clearly identifies the status of all the sources. Separate branches are sometimes used for each phase of the development process.

Developer Isolation

A developer (or a group) needs to work on a new feature that will be destabilizing and take a long time to implement. In the meantime, the developer needs to be able to version his changes (check in intermediate progress, and so on). To accomplish this, he branches the code that he intends to work on and does all his work independently. Periodically, he can merge changes from the main branch to make sure that his changes don’t get too far out of sync with the work of other developers. When he is done, he can merge his changes back into the main branch.


Developer isolation also applies when semi-independent teams collaborate on a product. Each team wants to work with the latest version of its own source but wants to use an approved version of source from other teams. The teams can accomplish this in two ways. In the first way, the subscribing team "pulls" the snapshot that it wants into its configuration, and in the second way, the publishing team publishes the "approved" version for all the client teams to pick up automatically.

Label Branching

We label important points in time, such as every build that we produce. A partner team picks up and uses our published builds on a periodic basis, perhaps monthly. A couple of weeks after picking up a build, the team discovers a blocking bug. It needs a fix quickly but can't afford the time to go through the approval process of picking up an entirely new build. The team needs the build it picked up before plus one fix. To do this, we create a branch of the source tree that contains all the appropriate file versions that are labeled with the selected build number. We can fix the bug in that branch directly and migrate the changes into the "main" branch, or we can migrate the existing fix (if it had been done) from the "main" branch into the new partner build branch.

Component Branching

We have a component that performs a function (for simplicity, let's imagine it is a single-file component). We discover that we need another component that does nearly the same thing but with some level of change. We don't want to modify the code to perform both functions; rather, we want to use the code for the old component as the basis for creating the new component. We could just copy the code into another file and check it in, but among other things, the new copy loses all the history of what brought it to this point. The solution is to branch the file. That way, both files can be modified independently, both can preserve their history, and bug fixes can be migrated between them if necessary.

Partial Branching

Partial branching is equivalent to component branching, where the "component" is the versioned product. In this case, we work on a product that has a series of releases. We shipped the Everett release and are working on the Whidbey release. As a general rule, all artifacts that make up each version should be branched for the release (source, tools, specs, and so on). However, some versioned files aren't release specific. For example, we have an emergency contact list that has the home phone numbers for team members. When we update the list, we don't want to be bothered with having to merge the changes into each of the product version branches, yet the developers who are enlisted in each version branch want to be able to sync the file to their enlistment.

Identifying Branches (Configurations)

When a file is branched, it is as if a new file is created. We need a way to identify that new file. Historically, this has been done by including the version number of the file as part of the name of the file. In such a mechanism, the version number consists of a branch number and a revision number. A branch number is formed by taking the version number of the file to be branched, appending an integer, and then adding a second integer as a revision number. For example, 1.2 becomes 1.2.1.1 (where 1.2.1 is the branch number and 1 is the revision number). See Chapter 16 for more details on branch labeling.

This is all well and good, but it quickly becomes unwieldy, not only from the standpoint of dealing with individual files, but also from the standpoint of trying to pick version numbers apart to understand what they mean. To address these issues, the notion of "configurations" was developed. A configuration is a collection of files and their version numbers. Configurations generally have a human-readable name, such as Acme 1.0.

Having named configurations is great, but before long, even that will get to be a problem. You will need a way to organize them. An interesting way to address this organization problem is to make configurations part of the actual source code hierarchy. This method of organization is natural because it is how people do it without version control. It avoids the problem of having to teach most people the concept of configuration, and it provides a great deal of flexibility in how you combine configurations. For example, two versions of an Acme product (where Version 2.0 is branched from Version 1.0) might look something like this:


Acme 1.0
    Anvil
    Hammer
        Head
        Handle
Acme 2.0
    Anvil
    Forge
    Hammer
        Head
        Handle

Branching granularity has different approaches. In the traditional approach, branching is done on a file-by-file basis. Each file can be branched independently at different times, from different versions, and so on. Configurations help prevent this from becoming chaotic. They provide an umbrella to help people understand the purpose of the various branches. File-by-file branching is flexible, but you must take care to ensure that it doesn't get out of hand. In addition, file-by-file branching can be hard to visualize. Another technique is always to do branching globally. Whenever a branch is created, all files in the system are branched. (There are ways to do this efficiently, so it's not as bad as it sounds.) The upside of this global branching is that it is easy to understand and visualize. The downsides include the fact that it forces a new namespace (the branches namespace) and is less flexible. For example, I can't have a single configuration that includes two copies of the same file from different configurations, as in the previous component branching scenario.

More Scenarios

Shelving and offline work are such excellent features that they alone justify moving from whatever SCC tool you currently use to TFSC.

Shelving Current Changes

1. A contributor, working on a new feature, checks out a series of files from the repository.
2. A critical bug is found that needs immediate attention by this contributor.
3. The contributor chooses to shelve his current change set for the feature he was working on. All of his currently checked-out files are archived on the server, where they can be retrieved later. The files are replaced by the unmodified copies of the same version he originally synced from the server. The files do not appear to be checked out in the contributor's workspace.
4. The contributor makes changes to address the bug as needed. The modified files are checked in as a new change set.
5. The contributor now unshelves his previous change set from the server. The modified files that he previously archived to the server are placed in his workspace. The files once again appear to be checked out in his workspace.
6. The contributor, wanting to merge any modifications to these files that were made during the bug fix, syncs his workspace with the server. The updates are automatically merged into the checked-out files in the local workspace.
7. The contributor continues work on the new feature and checks in all modifications as a single change set when the feature is complete.
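In command-line terms, the shelving scenario boils down to something like this sketch; it is modeled on the Team Foundation tf.exe client, the shelveset name is made up, and the Hatteras-era syntax may differ:

rem Step 3 - archive the pending change set on the server and revert
rem the local files (/move removes them from the workspace):
tf shelve /move NewFeatureWork

rem Steps 4-5 - after checking in the bug fix, restore the shelveset:
tf unshelve NewFeatureWork

rem Step 6 - sync so the bug fix merges into the checked-out files:
tf get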

Offline Checkout/Check-In

1. A contributor syncs his workspace and takes his laptop home for the evening.
2. At home, he continues working and chooses to check out a file.
3. An unmodified copy of the checked-out file is placed in the contributor's cache on his local computer.
4. The contributor continues to work and check out additional files. Unmodified copies of all these files are placed in the cache.
5. When the feature is complete, the user attempts to check in the change set. Because the user is offline, the check-in option is not available.
6. Wanting to begin work on the next feature, the user shelves his modifications for retrieval and check-in when he is able to go back online.


I have designed VBLs with customers using several different SCC tools. Some worked better than others, but what I really like about TFSC is that it is designed from the ground up to work most efficiently with the way that developers and projects interact. It’s not necessary to customize the tool with hacks or tricks to get it to do what you want. All the features are there.

Summary

I hope that after reading this chapter, you have an idea of what a VBL is and can grasp the concept of having a mainline to build and store your product sources. This is such a large topic that I could easily write a book on it. I covered only the basics here. What you should take away from this chapter are some clear definitions of terms that are used on a daily basis in any development team. You should also have a better understanding of why a mainline is necessary and how to set one up using a VBL model. Finally, I offered some recommendations and a preview of Microsoft's enterprise-class TFSC tool that will be out in the fall of 2005.

Recommendations

■ Create the mainline (public) and virtual build labs (private) codelines.
■ Make sure the mainline is pristine and always buildable and consumable. Create shippable bits on a daily basis. Use consistent, reliable builds.
■ Build private branches in parallel with the main build at a frequency set by the CBT.
■ Use consistent reverse and forward integration criteria across teams.
■ Be aware that dev check-ins are normally made only into a private branch or tree, not the mainline.
■ Know that check-ins into a private branch are only reverse integrated (RI'd) into main when stringent, division-wide criteria are met.
■ Use atomic check-ins (RI) from private into main. Atomic means all or nothing. You can back out changes if needed.
■ Make project teams accountable for their check-ins, and empower them to control their build process with help from the CBT.
■ Configure the public/private source so that multisite or parallel development works.
■ Optimize the source tree or branch structure so that you have only one branch per component of your product.

C H A P T E R

3

DAILY, NOT NIGHTLY, BUILDS

Philosophy: The build team drives the product forward. —Vincent Maraia

It's amazing how many articles and books I have read that recommend nightly builds. Unfortunately, they mean this literally. Whenever I talk to different software companies or Microsoft product groups that do nightly builds, I always ask, "What is the success rate of your nightly build?" Most of the time, I hear "10 to 20 percent success with zero errors." I figure the people who tell me that they have 80 to 100 percent success rates are either lying or compiling very little code every night.

I understand that the beautiful vision of a nightly build is that the build will be ready to go in the morning, all the first and second stage tests will be run, and if you have a really efficient process, the build will be deployed to the developers' and testers' boxes. As a result, everyone can crank away on finding and fixing new bugs and getting the new code checked in as fast as they can get out of bed and connect to the network. Well, this is not reality when it comes to software builds. We tried nightly builds at Microsoft in various groups. We found that you end up having some build hero up late at night or early in the morning fixing a build break or two. This is usually the only way that everyone can have their doughnut in the morning as they download the newly released nightly build. But for some reason, people keep trying it again.

Microsoft Sidenote: When Nightly Builds Do Work

The daily build process took place prior to the NT build teams switching to the VBL process, discussed in Chapter 2, "Source Tree Configuration for Multiple Sites and Parallel (Multi-Version) Development Work." The VS and NT teams both start builds in the evening in their main lab because build breaks are so rare in the golden tree. This is a result of everything going through some kind of pre-build test during the integration process. If you have this type of setup, you can still do a daytime daily build in your VBLs and kick off a nightly build on your golden tree. But don't do nightly builds unless you can guarantee a 95 percent or higher build success rate.


Nightly builds actually promote bad behavior and carelessness among developers who check in code. What usually happens is that people get used to the fact that the build breaks almost every night. The developers count on the Central Build Team to fix the breaks in the morning, which buys them some buffer time to try to get last-minute changes in. I recommend running a build at a consistent time during the day when developers are around so that they can fix their build breaks before going home. When the product gets close to the shipping date, you should be building on the weekends, too.

As part of the daily build process, you should publish a regular build schedule. This should include the last check-in time, the time that the build will be completed, and the time that initial build verification tests (BVTs) will be completed. Here is roughly how this would look:

9 to 10 AM      Decision makers meet at WAR or Ship meeting
11 AM to 2 PM   Check-in window open
2 to 5 PM       Build product (development teams triage their bugs in private meetings)
5 to 6 PM       Release product to test shares
6 PM to 9 AM    Run automated tests at night
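If you want to pin the 2 PM build kickoff to the clock instead of relying on someone to remember it, a scheduled task is one way to do it. A minimal sketch, assuming the build wrapper lives at D:\BUILD\build.cmd (the path and task name are made up, and the /st time format varies slightly by Windows version):

schtasks /create /tn "Daily Build" /tr "D:\BUILD\build.cmd" /sc daily /st 14:00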

When any part of this process is broken, the name of the person who checked in the defective module that broke the build should be published via the build intranet page or e-mail. The build intranet page should be a collection point for all the relevant documents, policies, schedules, links, and other general information, such as new hire information. Everyone on the product team should be able to reference this site, and most will make it their default start page when they bring up IE. Years ago, we used a page similar to the one in Figure 3.1 in the Windows NT group.

Microsoft Sidenote: Famous Religious Poster

We used to have a poster in the Windows NT Build Lab that read: Remember the 11th Commandment: "THOU SHALL NOT BREAK THE BUILD"


Figure 3.1 Sample build intranet page.

Microsoft Sidenote: When Are We Going to Ship?


If you really want to know when your product is going to ship, just ask your Central Build Team or someone close to the daily build. The NT group used to have a whiteboard full of predicted shipping build numbers by different people in the project. The people who always came closest to the actual build number that shipped were either the builders or the operating system core developers. Because we were the closest people to the daily grind of accepting and building check-ins, we were more scientific about calculating the ship date. The core developers were good because they seemed to always be in the build lab and used the method described in the next paragraph. It is really an easy math problem. Say that you are taking 60 check-ins per day, but 30 new bugs are being opened. The net gain is 30 closed bugs per day. If you have 100 bugs in the database, you should ship your product in 4 to 5 days as long as no new bugs are opened on the last day. This was the simple equation we used to guess ship dates. We didn’t use the crazy estimates we got from product or program managers who used a Gantt chart loaded with guesses from development managers who were in turn getting overambitious guesses from their development teams about how fast they could write code.
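That back-of-the-envelope math is trivial to script. A sketch in batch, using the chapter's example numbers (illustrative only, not real project data):

@echo off
rem Net bugs closed per day: check-ins accepted minus new bugs opened.
set /a NET_CLOSED_PER_DAY=60-30
rem Ceiling division: days needed to drive 100 open bugs to zero.
set /a DAYS_TO_ZERO=(100 + NET_CLOSED_PER_DAY - 1) / NET_CLOSED_PER_DAY
echo Roughly %DAYS_TO_ZERO% days until the bug count hits zero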


Figure 3.2 Moses. The warning here is that if you break the build, you will have hell to pay.

The Importance of a Successful Daily Build

Former Microsoft Visual Studio Product Manager Jim McCarthy used to say, "Successful daily builds are the heartbeat of a software project. If you do not have successful daily builds, then you have no heartbeat, and your project is dead!" Daily builds also mark the progress being made and can indicate when a project is struggling. In addition, having a daily build can keep the programmers in "ship mode"—the attitude that the product could ship with the complete features on any given day.

A better metaphor is that the build is your product assembly line. If your assembly line is not moving or is broken, your business is hurting, because you cannot turn out a product. Look at what Henry Ford did with his assembly line. Ford did not invent the car; he just perfected the way it was manufactured with the assembly line. I think the same holds true for software: By having a good build process, your company will be more efficient, successful, and profitable, and your employees will be happier.


How to Successfully Release Builds and Police Build Breaks

I really like the Pottery Barn rule that was misquoted by Senator John Kerry in the second Presidential debate in September 2004. Kerry said that Colin Powell "told President Bush the Pottery Barn rule: If you break it, you fix it." The anecdote comes from Bob Woodward's book Plan of Attack, but Woodward actually reported that Powell privately talked with aides about the rule: "You break it, you own it." He did not say this to the President, and it actually turns out that Pottery Barn has no such rule. Still, I think every build lab needs a poster with this rule regardless of who said it.

This leads to one of the most important rules in the build lab: The build team never fixes build breaks, regardless of how trivial the break is. That's the developer's responsibility. We took this a step further: The developer who breaks the build has to go to his development machine, check out the file, fix it, and then go through all the check-in process steps again.

Build Breaks Always Have the Highest Priority for Everyone

This rule means that if you are a developer and you can fix the build break, and the developer who broke the build cannot be found, you should fix it immediately. Afterward, send an e-mail to the developer and the build team explaining what you did to fix the build, and remind your co-worker that he owes you a favor. Chris Peters, a former Microsoft vice president in the systems and applications group, used to say that people have to remember that the reason everyone works here is to ship software. That means everyone: development, testing, product support, vice presidents, administrators, and so on. If you are not moving toward a direction of shipping a product every day, you need to talk to your manager and figure out what you are supposed to be doing. Helping fix build breaks or not breaking the build in the first place is a good way to help, but don't forget the Pottery Barn rule!

At Microsoft, developers understood that when they joined the Windows NT group, the following chain reaction would occur if they broke the build:

1. We would try to call the developer at his office.
2. If the developer did not pick up, we would call his manager and continue up the organizational ladder until we found someone who could fix the break or at least point us to someone who might be able to resolve it.


3. We would call the developer at home if it was past the 9 AM to 6 PM working hours.

To follow this track, it is important to keep a list of developers' home telephone numbers in the build lab or make them easily accessible to everyone who is working on a project. This list is especially helpful for build breaks that occur late at night or early in the morning. With the increasing number of people working from home, this list is even more important today than it was 10 years ago.

Another way to discourage developers from breaking the build is to establish a build fine or penalty. The build fine example in the following sidenote worked well in the NT group for a couple of years. However, don't execute the penalty by having the engineer who broke the build run the next one. Several companies try that approach, but this is another one of those "good in theory, bad in practice" deals. What usually happens is that you end up pulling a good developer off a project until some unfortunate developer breaks the build, and then you pull that person off until… you get the picture. If you entered this unknown factor into your project manager's Gantt chart, you would see how this can mess up the development line and build progress. It really is easier and cheaper to hire a full-time builder.

Microsoft Sidenote: Build Fines

At one time during the development of the first version of Windows NT 3.1, we created a $5 build break fine. The rule was that whenever a developer broke the build, that person had to pay the build fund $5 before we would take another check-in from him. We didn't do this to capitalize on something that happened every day and at times could not be prevented (hidden dependencies); rather, we did this to train the developers. Most of the developers did not have a problem coming up with the $5, and our intent was not to make money. It is amazing how much harder developers will try to keep the build going when they have to physically come to the build lab and deposit $5 out of their pocket for a mistake they could have possibly avoided. After we shipped Windows NT, the build team wanted to throw a party with the more than $400 we saved in the build fund. But then we realized that this would reward the developers. Instead, we bought a big boom box to crank our crazy music discs. In this way, we had an everlasting contribution to the build team.


Enforce All Build Policies on Reviews

When I was on the Visual Studio team in 1997, some developers did not respect the build and felt that it was the build team’s responsibility to fix all the breaks. Even when their manager spoke with them and asked them to comply with the Central Build Team’s build break rules, they just ignored that request. For such an extreme case, you might have to enforce developer compliance in their performance review. That is what ultimately happened to some of the Visual Studio team developers who refused to work with the build team.

What Are You Building Every Day?

Let's end this chapter with some questions you should ask yourself. Are you cranking out a daily build because you were told somewhere that you were supposed to? Is the build useful, or do you end up fixing and patching the build to get it released? Do you know what code additions/fixes are being built? Is there real value in the released daily build that moves the product forward? If you are able to take an objective look at these questions and answer them as honestly as possible, you will be on your way to greater success with your product. If you think you would get a colored or tainted view of the answers to these questions, you should hire a consulting firm to come in and perform an architect review of your build process. Don't feel bad about doing this, because even Microsoft does this periodically—yes, even in the Windows team. Consultants are great because they are not involved politically with the company and will give an objective view that is not possible with employees.

Summary

Having daily builds is a cornerstone of the development process. That will always be the case because if the builds are done correctly, the evidence is there from successful companies that the behavior pays off in the end. Keeping to a daytime build schedule helps catch the developers in the office and gets the build breaks fixed more quickly. This chapter proposed various ways of minimizing build breaks.


Even if the end result of the build on a given day is a broken application (too many build defects), you still have accomplished one important task: You have successfully built and released all the new code merged together.

Recommendations

■ Hire a consulting firm to come in and review your processes and tools.
■ Start your build during the day, not during the evening.
■ Publish the build schedule on an internal Web site.
■ Release daily builds as sure as the sun comes up, but make sure they are quality, usable builds. Don't just go through the motions.
■ Discourage build breaks by creating and enforcing consequences.

C H A P T E R

4

THE BUILD LAB AND PERSONNEL

Philosophy: Noah didn't start building the ark when it started to rain. —Dean Kamen, inventor of the Segway and founder of FIRST

Figure 4.1 Noah: "I don't think we're going to make our ship date."

When I visit customers and review their build process, the second most common problem I see is that the development team does not have a build lab. The most common problem is that they lack a mainline, as described in Chapter 2, "Source Tree Configuration for Multiple Sites and Parallel (Multi-Version) Development Work." The success of a software company or group is directly proportional to its ratio of labs to conference rooms. More labs than conference rooms equals greater success because people are spending more time doing work than sitting in meetings. As you can probably tell, I am not a big fan of meetings. Of course, not having meetings at all is bad, too, but I have seen too much time wasted talking about work instead of actually doing the work.

Microsoft Sidenote: IM Wright Talk—"The Day We Met"

IM Wright is the alter ego of a developer in Microsoft's engineering best practices group. This person is in many ways the "dark side" of good people. He is well known at Microsoft, and there is a cynical, comical ring to his observations. Captured here are his thoughts while he was in a meeting:

Quit wasting my time. Would you, could you, please quit wasting my time? Maybe if I jump across the table and duct tape your mouth shut, I could take action instead of sit here incredulous while you incinerate 60 minutes of my life. How does calling a meeting give people license to act like you're worthless? If time is money, most meetings are a market collapse. I am so tired of people who could sooner drive a bus off a cliff than run a decent meeting. Well, I'm not going to take it any more. If you force me into a meeting room, be prepared for me to call you on any stunts you try to pull. You waste my time, and I'll ensure yours gets torched with it. Don't like it? Don't test me. What am I going to call you on? Listen up, 'cause here it comes…

IM Wright could easily be any of us who have been forced to waste our time in meetings. So, it is a good idea to pretend that you will have him attend the next meeting you decide to call. You will have a better, more useful meeting. The point here is if you were to convert one of your conference rooms into a build lab, you would not miss the conference room, because IM Wright would not be saying this in a build lab. Thus, convert one of your conference rooms to a build lab if you are "out of space" in your building. It's a better use of the space, and you will see the production of your development team increase exponentially.

Here's a metaphor for you: If conference rooms are limited in a company, like parking spots in a city, the number of meetings, noise, and unnecessary foot traffic will be limited due to availability. This is a really good thing if you follow my logic.


The Need for a Build Lab

If I am at a company doing a build architecture review, and that company does not have a build lab, my first recommendation is to build one. If your company already has a fully dedicated build lab, you are quite a bit ahead of the companies or groups that do not have one. One of the questions that I frequently get asked by companies or groups without build labs is, "Why do we need a build lab when our corporate IT (Information Technology) group can maintain the build machines?" My answer refers back to their question: "You do not want your IT department maintaining and owning the family jewels." I do not mean any disrespect to IT groups by this. These are my reasons for the build team owning the machines:

■ The main purpose of a build lab is to provide the build team with more control over the hardware and software on the build machines. In addition to the build machines, it is a good idea to store the source code and release servers in the build lab. For example, if a hard drive crashes on a build server, you need physical access to the machine, and you need to change out the drive as soon as possible because you potentially have a large number of developers and testers waiting for the build to be released. Sometimes IT departments can be backed up with other priority requests or are not available to service the build machine immediately. This delay can be prevented if the build team has easy access to the machine that has crashed.
■ Another example is that a lot of IT departments have strict guidelines on what software is allowed on a company machine. For example, you might need to download a patch to build certain .NET classes, but your IT department might block the download for policy reasons. You might have to go through a lot of approval processes to get this patch. While you are jumping through all of the proper hoops, the development clock is ticking, and the development work is being blocked until you can get the build out. Once again, this can be avoided if the build team is allowed to keep control of its build machines.



■ Another important reason for a build lab is security. This is such an important point that I dedicate Chapter 9, "Build Security," to this subject. But, for now, I am just talking about physical security. Many IT departments offer security to prevent unauthorized users, but not to the extent that a custom build lab can. For example, if you want to allow only builders access to the build machines, you can restrict this using cardkey access. If the IT department owns the machines, all the IT administrators also have access to the build machines. You might have a malicious employee in the IT department do some damage without the build or product team ever having the slightest clue of this vulnerability. It might not even be a malicious person, but a new employee's mistake or another unforeseen, accidental situation that takes the machine offline.

Build Lab Rules

After the lab is set up, you should post these rules on the build intranet page and in the lab:

■ The lab must be secured and locked 24×7, 365 days a year.
■ Developers are not allowed to fix build breaks on the build machines. For example, if a developer breaks the build, he must fix it on his development machine and check it in. Then the build team picks it up and rebuilds.
■ Members of the build team are the only ones allowed to make hardware or software changes to the machines in the build lab.
■ No hardware is allowed to enter or leave the build lab without a build team member's okay.
■ Whiners will be removed from the premises immediately.


Microsoft Sidenote: Hardware Layout

In the old NT days, we had all the build, source, and release servers in the build lab. There was an extra administrative cost for these machines to the build team, which we ultimately passed on to a dedicated lab hardware team. In Figure 4.2, you can see how we had the machines configured in the lab. In one section of the lab, separated by a wall, we kept all the hardware we used, including the backup power generator for the lab in case of a power failure. There was an extra layer of security to get into this room. Few people needed physical access to our mission-critical machines.

Figure 4.2 Lab setup. (The diagram shows a secured hardware area containing the build, source, and release machines; machine consoles that are remotely logged into the secured hardware area; and a burn lab with the service pack build machines.)

In the main area of the lab, we kept the console boxes that connected through the remote desktop to the machines in the secured hardware area. This area tended to look like NASA's control center in Houston with all the monitors and people at the consoles. The last section was also walled off and secure because it contained the CD burn machines and all the build machines used to generate hotfixes and service packs. Today, a whole department is dedicated to service packs and hotfixes. The main build lab does not get as involved as it did when NT first shipped.

Hardware Configuration

The build lab should include some high-end hardware for building the applications. Because the entire team depends on the results of a build, the high-end computers ensure that the build is completed as quickly as possible. Furthermore, you can use high-speed network equipment to push bits around from source control to build machines to release servers.


At a minimum, the build lab should have four machines:

■ Server that contains the Source Code Control program—This is your product. Do you really want this server residing someplace where you have little control over this box?
■ Debug build machine for the mainline builds—If you don't separate your debug and release machines, you will accidentally ship debug binaries, which is not a good thing.
■ Release build machine for the mainline builds—This is a "golden goose" that turns out the "gold eggs" of your company or group. Treasure this machine like a princess, and guard it like all the king's fortunes.
■ Internal release share server—This is one more piece of hardware that stores the "bread and butter" of the group or company. Don't give up control of this hardware to anyone unless your IT department reports through your development group.

Hardware Requirements

Each machine in the preceding list should meet the following requirements:

■ Number of processors—This depends on the build tool you use. One is usually sufficient, because few build tools really take advantage of multiple processors.
■ Processor speed—The lab budget dictates this, but the faster the processor, the better it is.
■ Amount of installed RAM—Max out the machine. RAM is relatively cheap these days, especially when you consider the performance increase you get. Increasing the RAM is usually the first upgrade done when trying to improve the performance of any computer.
■ Number of hard drives—A minimum of two drives (or partitions) is preferred:
  ■ Drive 1 (C:) is for the operating system and installed applications.
  ■ Drive 2 (D:) is for building binaries, release shares, or the source database; the minimum space required is roughly ten times the space needed to build your application.
  The split partitions are good because if you ever need to format or blow away a drive due to corruption, only part of the project will be affected. The recovery is much faster and easier.
■ Hard drive type—This is most likely SCSI, but it could be IDE.
■ Number of power supplies—If you purchase server-class hardware (pizza boxes) that belongs in racks, you need to consider how many power supplies to order.
■ Motherboard BIOS version—This does make a difference. Make sure you note what is being used and standardize on it.

BACKUP AND UNINTERRUPTIBLE POWER SUPPLY (UPS)
Remember to get a good tape backup and uninterruptible power supply (UPS). Also, don't forget to back up the build machines at least once a week, but preferably every night.

Set Up the Software

After you have installed the proper hardware, you must install the proper operating system and the appropriate service packs and hotfixes. Then you can start installing the necessary applications.

Set Up the Operating System

1. Determine a location to install the approved operating system. For build machines, you do not need to install Windows Server. In fact, I highly recommend that you don't do this because Windows Server has a lot of networking overhead built into it that a build machine doesn't care about. Install Windows XP Professional instead. For the release and source servers, you do need Windows Server because of the amount of network traffic they get.
2. Install the appropriate service packs and hotfixes. You need to document what is installed on each machine.
3. Move pagefile.sys off of the boot partition to another partition or drive to optimize caching and O/S performance.


Set Up the Applications

1. Install the proper Visual Studio and .NET releases and other build tools on the build machines. The release and source servers do not need the build tools to be installed. They should be optimized to be file servers.
2. Install other applications such as virus scanners or firewalls. However, turn them on only on the release and source servers. As mentioned previously, if virus scanners or firewalls are turned on on the build machines, they will hamper build time performance.

After the final setup, you can run PSINFO.EXE (available from www.sysinternals.com) on all the machines and confirm that the list it creates matches the specs of the machine.
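For example, to list a build machine's installed hotfixes (-h) and applications (-s) from a console machine, something like the following works; the machine name is made up, and the switches follow the Sysinternals documentation:

psinfo \\ntbuild01 -h -s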

Set Up the Build Environment for the Build Machines

Having a consistent, controlled build environment is critical to the success of a daily build. The following procedure is one way that some product groups at Microsoft keep the environment consistent. Because builders tend to live in the command shell, here are step-by-step instructions on how to set up a good environment:

1. Create the following directory structure on a drive that the operating system is not installed on: D:\BUILD\SCRIPTS
2. On your desktop, set up a shortcut to your source control tool. You only need to do this the first time you set up the machine.
3. On your desktop, create a shortcut pointing to the build scripts. As with the previous step, you only need to do this the first time.
4. Get the build scripts from a central location owned by the build lab:
   ■ For example, if you are using VSS, navigate to $/Build/build, and copy the latest scripts to the D:\BUILD working directory.
   ■ Navigate to $/Build/scripts, and copy the latest scripts to the D:\BUILD\SCRIPTS working directory.
5. Set up the build environment shortcut:
   ■ For example, create a shortcut using the %windir%\System32\cmd.exe /k D:\BUILD\devenv.cmd command line. Name this shortcut "BUILD." (A sample of devenv.cmd is included in Chapter 7, "The Build Environment.")
   ■ In Properties, Shortcut, set Start In to D:\BUILD.
   ■ In Properties, Options, set Buffer Size to 999, and select both QuickEdit Mode and Insert mode.
   ■ In Properties, Layout, set Screen Buffer Size Height to 9999.
   ■ In Properties, Layout, set Window Position Left to 118.
   ■ In Properties, Layout, set Window Position Top to 43.

You have now set up a project build machine. You can also use these instructions for individual workstations. Finally, when all the hardware is set up and the software is installed, put physical labels on each machine with the system name, IP address, and switch/port setting.
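To make step 5 concrete, here is a minimal sketch of what a devenv.cmd might contain; the real sample is in Chapter 7, and the project name, paths, and Visual Studio version shown here are assumptions:

@echo off
rem devenv.cmd - minimal sketch of a build environment script.
rem The project name, drive letters, and VS version are assumptions.
set _PROJECT=Acme
set _BUILDROOT=D:\BUILD
rem Put the shared build scripts on the path.
set PATH=%_BUILDROOT%\SCRIPTS;%PATH%
rem Pull in the compiler and linker environment (VS .NET 2003 shown).
call "%VS71COMNTOOLS%vsvars32.bat"
title %_PROJECT% Build Window
cd /d %_BUILDROOT%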

Build Personnel

Because nobody else wanted to play bass…. —Paul McCartney

I read an interview with Paul McCartney of The Beatles in which he was asked why he played bass guitar in the band. His answer was "Because nobody else wanted to play bass." He said his true love was playing lead or rhythm guitar. My theory is that this is how a lot of builders start out. They do the builds for the team because everyone else wants to write or break code, and nobody wants to crank out builds. I would love to change this perception with this book by showing that the build team is as critical to a project as the development and test team. After working more than seven years in build labs, I believe that being a builder is the most thankless job in the computer industry. Builders are constantly fighting forest fires with spray bottles. At the same time, people are screaming at them that the build is broken, late, or they missed an important check-in. Furthermore, they get "status" questions all day long.


Without knowing the size and complexity of your projects, it is difficult to give a hire ratio or builder-to-developer ratio. The number of builders also depends on how fast your product is growing and how good your Software Configuration Management (SCM) processes are. Obviously, if you are growing at a fast rate, and your SCM processes are not very developed, you need more builders than a mature product that is in maintenance mode. At the very least, you need one builder regardless of the size of your project or team. That's true even for a small team of ten or fewer developers.

The best way to describe the skill set that we look for in builders is to look at a Microsoft job posting for a builder:

We need a cooperative team player who can work closely with a manager and team members as well as alone. Ability to function well in a fast-paced work environment and the ability to work on multiple time-sensitive issues concurrently are necessary. High motivation and self-direction, sound problem-solving skills, technical troubleshooting, debugging code, and decision-making skills are crucial. Excellent written and verbal communication skills, as well as detail-oriented organizational skills, are also required. Qualifications should include knowledge of Windows OS; advanced knowledge of build tools and source control systems; Web authoring using HTML and ASP .NET; experience writing automated scripts (batch files, JScript, SQL, etc.); and standard coding and debugging techniques (C++, C#). Must be able to maintain a flexible working schedule (including some late nights and weekends during critical periods). Working knowledge of Visual Studio .NET a plus. A BA/BS degree in computer science or engineering preferred but not required.

I would add that a nice and calm temperament is preferred, and type A personalities are best avoided. In a way, Captain John Smith (a leader in the early colonies that England sent over to America) said it best: "I have confidence in my own abilities and a weak reluctance to make enemies." He would have been a great build manager.

If you are lucky enough to find someone with the mentioned skill set and temperament, you should compensate that person well—at least equal to the top-paid testers. Someone with these skills usually moves on to testing or development so that he does not have to deal with the day-to-day grind of being in a build lab. When your builder decides to leave the build team, you should plan on at least a one-month hit to your shipping schedule to recruit and train a new builder.

One thing that managers have to worry about is builder burnout. This can happen in any position, but it is more likely with builders because the job can become tedious and unchallenging after a while. That's why it's important to provide some kind of good career path and training for the builders, such as assigning development projects or testing to the build team. Ultimately, it is up to the person to own his or her career. That is true at any company; it is up to the managers to clear the paths and keep the employees challenged.


Microsoft Sidenote: SizeOf (NT Build Team)

When we first shipped NT, the build team was about 10 to 12 people, and we did everything. Today, the Windows build team has about 60 people in various positions, including build developers, build verification testers, managers, and so on.

Summary

I hope you have been able to get a good idea of what a build lab entails and how to set one up. I touched on the builders and the type of people that I have found to be a good fit for the job. I also included some tips and tricks that I have learned over the years at Microsoft. In time, I think you will see that your build lab will grow and become the central meeting place in your group where the rubber meets the road.

Recommendations

■ Set up a build lab if you do not already have one.
■ Purchase all necessary hardware, and do not skimp on quality or number of machines.
■ Keep the lab secure.
■ Show a lot of appreciation for your build team. They have a difficult job that requires a special skill set and personality to be successful.



C H A P T E R

5

BUILD TOOLS AND TECHNOLOGIES

Philosophy: Technology and tools are useful and powerful when they are your servant and not your master. —Stephen R. Covey, author, The Seven Habits of Highly Effective People

So many tools are available, free or at a cost, that it would be difficult to discuss all of them in this book, much less in a single chapter. Instead, I'll describe some tools that have been effective in my experience with builds and focus on two aspects of builds that are essential to automating your processes: scripting and binary generating (build) tools. When possible, I give links to the free tools or references of where you can download them. Just like the earlier Covey quote, the approach in this chapter is that tools are just tools. I'd like to borrow another quote to make this point:

Don't blame your tools. A true craftsman doesn't blame his tools… the (tools) may be slow, buggy, or missing features, but the blame is ultimately yours. Blaming your tools is whimpy: Fundamentally, you either do a job or you don't. —Mac User Magazine

Rather than say "the blame is ultimately yours," I would rather say "the accountability is yours." The way I see accountability, or at least its short definition, is "So what? Now what?" Accountability has nothing to do with blame or fault, right or wrong. Rather, it has to do with who will own this going forward. So if you are looking for that big, magic, miracle button to solve all your build problems and make your developers' lives easy, forget it. It does not exist. I have seen some really nice automation tools, but they are just tools that write or wrap your build needs with another script. I think having a bag of tricks is the best approach. The tools that I will talk about in this chapter should be the smoke and mirrors that you can pull out of that bag…the "big rocks" if you would like another Covey term. I'd also like to mention that the build process we will be talking about in this chapter is the Central Build Team's process that gets pushed out to the developers. This is the process that builds the complete product and is what the developers should use before they check their code in. (For a refresher, look at the section "Building from the Inside Out" in Chapter 1, "Defining a Build.")

Microsoft Sidenote: What Build Tools Does Microsoft Use?

The most common question I get from customers is, "What build tools does Microsoft use?" Microsoft does not have specific rules that every product development team has to follow. Instead, we have a lot of best practices or guidelines on what or how to run a product group. This is flexible by design. Some processes or tools might work for some groups but can be a real hindrance to others. The Windows team tends to be the de facto leader when it comes to the best build tools and processes. The other teams occasionally seek information about what works for them. The assumption is that when a process works for a group with more than 2,000 developers (Windows), it will scale down just as effectively. But because each product is rather unique, the questioning product team decides on what it can or will adopt to improve its processes. This is also the basis of this book: to throw everything we do at Microsoft "on the table" and let you decide what your company or group should adopt.

So, to answer the question, we currently use in-house developed tools but are progressing toward adopting the VSTS tools that will be available when Whidbey (Visual Studio 2005) ships. (Note: Some of the VSTS tools that will ship, such as prefast, are what we have been using internally for years.) I don't think we have statistics on what percentage of developers use Visual Studio for their code editor, but it has to be in the high 90s. You can't beat Visual Studio's intellisense and debugging! For the build in the various build labs, build.exe is popular, but MSBuild.exe is gaining some ground. Some groups use nmake and makefiles, whereas others have developed their own wrappers for devenv (the command-line tool for Visual Studio builds).


First, Every Build Needs a Script

See Chapter 4, "The Build Lab and Personnel," for information on configuring the build environment on your build machines and Chapter 7, "The Build Environment," for a sample of setting up the environment. No matter which binary-generating (build) tool you use, it will need to be wrapped by a script. By script, I mean a list of commands written in a language such as VBScript, JScript, Perl, or a command prompt (batch file) that automates certain tasks. The simplest example of a script would be just one line to execute your build command. Our build scripts at Microsoft can get complicated and contain thousands of lines of code. The most common languages used for the scripts are Perl and command-prompt calls (batch files). Scripts are used because the majority of builds are done at the command line and not through a visual interface like Visual Studio or whatever your favorite editor is. Although developers are encouraged to perform builds of their code through the visual interface (in Visual Studio, the shortcut key is F5), they are required to perform the command-line build that the CBT uses when building the product. In the past, Microsoft tried to use a visual shell (or GUI) to run builds. Besides being difficult to automate, the visual shell usually hid a lot of errors or problems that we would have seen or logged at the command prompt. Today, we just stick to the command-line builds, where we have a lot more control.
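As a point of reference, the simplest useful wrapper is only a few lines. A sketch in batch; the solution path, log location, and build tool invocation are assumptions, and your binary-generating tool may differ:

@echo off
rem build.cmd - wrap the binary-generating tool so that every run is
rem logged and a break is impossible to miss. Paths are assumptions.
setlocal
set LOGFILE=D:\BUILD\logs\daily.log
echo ===== Build started %DATE% %TIME% ===== > "%LOGFILE%"
rem Call the real build tool and capture all output, errors included.
devenv D:\BUILD\src\Acme.sln /build Release >> "%LOGFILE%" 2>&1
if errorlevel 1 (
    echo BUILD BREAK - see %LOGFILE%
    exit /b 1
)
echo Build succeeded.
endlocal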

Binary Generating Tools—Also Referred to Loosely as "Build Tools"

At the risk of offending some of my friends in the languages group at Microsoft, I am going to say that compilers, linkers, and other intermediate build tools such as midl.exe are intrinsically dumb. Maybe I get this approach from a former computer science professor of mine who would say, "Computers are dumb" and that they only know what we tell them. They can't think for themselves. To learn where this attitude comes from, look at Figure 5.1. It shows a brief summary of how computer languages work.


Figure 5.1 Computer language stack (top to bottom):

    High-level languages such as C/C++, C#, Java, and VB
    Assembly languages such as Microsoft Assembler (MASM); these depend on the CPU type
    Machine language; it's all ones and zeros at this point (binary), and this lowest-level language is only understood by the computer hardware
    Computer Hardware

Starting at the bottom, the computer hardware layer includes the wires, power supply, transistors, and other components that make up a computer. An oversimplified way of looking at it is that a computer is just a bunch of on/off switches. The machine language layer communicates everything to the hardware through patterns of ones and zeros (binary code). For example, 1010 = 10 in decimal form. Following the machine language layer is the assembly language layer, which has some commands that use names (called instruction sets) instead of numbers that a CPU understands. This is a fast way of programming, but the instructions are specific to the CPU (or platform). Also, it is difficult to program long complicated tasks or applications.

Because most companies want their code to be portable (work on different platforms), they choose to use a high-level language in which the syntax is much easier to understand and write code. An example would be C, VB, C#, or Java. This is the high-level language layer. When we build code, we call the compiler or assembler, which in turn knows how to convert those easy-to-read, high-level languages into machine language that the computer hardware can understand. That is why the compilers, linkers, or assemblers are dumb. You need to give them specific instructions on the parameters to be set so that your high-level code builds and works. Because providing instructions through command-line switches or input files can be rather cumbersome and error prone, especially if someone is not careful or experienced, the following tools were written to "wrap" the compilers, assemblers, and linkers to make sure the tools do what you want them to. (Maybe the person who is building or writing the code is dumb?)

"You Provide the Nose; We Provide the Grindstone"

This quotation is from a small poster we had in the NT Build Lab. The following are the tools that do the grinding to ensure that everything is as sharp as it can be. I will touch on what these tools are and then recommend the best ones to use that will be in line with Microsoft's future tool releases.

Make or NMake

NMake is a Windows version of Make. Stu Feldman originally designed Make for UNIX in 1979. The two tools work the same, but I will speak toward NMake because I am more familiar with it than with Make. (I do not know where Stu got the name Make or why the Windows version is called NMake.)

When you run NMake, it reads a "makefile" that you supply. A makefile—sometimes called a description file—is a text file containing a set of instructions that NMake uses to build your project. The instructions consist of description blocks, macros, directives, and inference rules. Each description block typically lists a target (or targets), the target's dependents, and the commands that build the target. NMake compares the time stamp on the target file with the time stamp on the dependent files. If the time stamp of any dependent is the same as or later than the time stamp of the target, NMake updates the target by executing the commands listed in the description block. In this context, the term "build," as in building a target, means evaluating the time stamps of a target and its dependents and, if the target is out of date, executing the commands associated with the target.

NMake's main purpose is to help you build programs quickly and easily. However, NMake is not limited to compiling and linking; it can run other types of programs and can execute operating system commands. You can use NMake to prepare backups, move files, and perform other project-management tasks that you ordinarily do at the operating system prompt.


The downside of NMake is the not-so-intuitive syntax that seems to stump a lot of people.

ANT or NANT

NANT is a .NET variation of ANT (Another Neat Tool), which was originally developed by James Duncan Davidson and is now owned by the Apache Software Foundation. Instead of an NMake model in which the tool is extended with command-prompt calls, ANT is extended using Java classes. Instead of writing shell commands, the configuration files are based on XML, calling out a target tree in which various tasks are executed. Each task is run by an object that implements a particular task interface.

The real difference between the Make tools and the ANT tools is the commands you can use (command shell versus Java classes) and the syntax (kludgy Make syntax versus nice, readable XML code). Other than that, with a little hand-waving, you can get either tool to do whatever is necessary to get your build to work.
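For comparison with the makefile sketch earlier, here is a skeletal NANT build file; the project name, directory layout, and use of the csc task are illustrative assumptions:

<?xml version="1.0"?>
<project name="hello" default="build">
  <!-- Each <target> is a node in the target tree; each child element is a task -->
  <target name="clean">
    <delete dir="bin" failonerror="false" />
  </target>
  <target name="build" depends="clean">
    <mkdir dir="bin" />
    <!-- The csc task wraps the C# compiler -->
    <csc target="exe" output="bin/hello.exe">
      <sources>
        <include name="src/*.cs" />
      </sources>
    </csc>
  </target>
</project>

Running nant build walks the target tree, executing clean first because build depends on it.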

In Steps the 800-Pound Gorilla!

For the longest time, up until the release of Visual Studio 2002, Microsoft supported the NMake model of building software. There used to be an option to export makefiles in the VS releases prior to VS 2002, but that option was removed in VS 2002 and will never return. Many decisions went into killing that feature (among them the lack of resources needed to maintain the archaic build process), but the main one was to move toward XML-based build files such as those used by MSBuild or VCBuild.

MSBuild.exe, the build engine in Visual Studio 2005, is designed to be a scalable .NET build tool that is XML based and able to work independently of Visual Studio. MSBuild's goal is to deliver a platform for build, not just a tool.

VCBuild.exe is a command-line utility that is capable of building Visual C++/C projects and Visual Studio solution files. (You'll learn more later about project and solution files in VS.) VCBuild does not require that Visual Studio be installed. It requires no registration to work, so setup is easy, and uninstalling is as simple as deleting the bits off the disk. VCBuild also supports such features as multiprocessor builds, output colorization, and the ability to build older versions of Visual C++ projects without having to upgrade the project files (upgrade on the fly).
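To give a feel for the MSBuild format, here is a skeletal project file; the property, item, and target names form a minimal hand-written sketch, not a complete Visual Studio-generated project:

<Project DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <OutputPath>bin\</OutputPath>
  </PropertyGroup>
  <ItemGroup>
    <Compile Include="src\*.cs" />
  </ItemGroup>
  <Target Name="Build">
    <MakeDir Directories="$(OutputPath)" />
    <!-- The Csc task wraps the C# compiler, much as NANT's csc task does -->
    <Csc Sources="@(Compile)" OutputAssembly="$(OutputPath)hello.exe" />
  </Target>
</Project>

Invoking msbuild on this file from a command prompt runs the Build target with no need for the IDE.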


Devenv.exe (prior to the release of VS 2002/2003, this was called msdev.exe) is the command-line build tool used to build Visual Studio project or solution files, as opposed to building inside the Visual Studio Integrated Development Environment (VS IDE, or just IDE).

I should stop here and explain what a project or solution file is. This can be confusing even to people who are familiar with Visual Studio. The best explanation of these files comes from Chris Flaat, a development lead on the Visual Studio core team:

Let's start with projects. Projects can have an arbitrary set of configurations. Think of a configuration as a knob that tells Visual Studio how to build a project. The classic configurations are Debug and Release. For most users, there is no need to go beyond these. However, you might want to define your own project configuration if you want additional ways to build your project (say, with additional diagnostics built in) that don't exactly map to what you want in a debug build or in your final release build.

Understand that a project configuration is a parallel bucket of settings. If you create a new project configuration that copies from the Debug project configuration, you are doing a copy, but any changes you make later to the Debug project configuration aren't reflected in your copy. (The same principle applies to solution configurations.)

Now, what's a solution configuration? Think of a solution configuration as a bucket of arbitrary project configurations. Solution configurations don't have to have the same name as project configurations. A solution configuration is essentially a list of all your projects, where you pick which projects should be included and which configurations should be built. For a given solution configuration, you can pick arbitrary configurations for your projects.

Thank you, Chris. Now I'll explain why XML is so exciting before I jump into tool recommendations.


XML Is the Here, the Now, and the Future

XML is short for Extensible Markup Language, which is a specification developed by the World Wide Web Consortium (W3C). XML is a pared-down version of SGML, designed especially for Web documents. It allows designers to create their own customized tags, enabling the definition, transmission, validation, and interpretation of data between applications and between organizations. Following are three good reasons why you should master XML:

■ XML is seen as a universal, open, readable representation for software integration and data exchange. IBM, Microsoft, Oracle, and Sun have built XML into database authoring.
■ .NET and J2EE (Java 2 Platform, Enterprise Edition) depend heavily on XML:
  ■ All ASP.NET configuration files are based on XML.
  ■ XML provides serialization and deserialization, sending objects across a network in an understandable format.
  ■ XML offers SOAP Web Services communication.
  ■ XML offers temporary data storage.
■ MSBuild and the future project files of Visual Studio will be in XML format. ANT is also XML based.

Thus, if you want to learn one language that will cover many tools and technologies no matter what platform you are working on, that language is XML. The main difference in all these build tools is not so much the feature set but the syntax. I get tired of learning all the quirks of new languages, but I’m happy to learn XML because it’s here to stay and it’s fairly easy to learn.

What Build Tool Should You Be Using and When?

For starters, there's nothing wrong with a Make or ANT process. In fact, if you don't want to have anything to do with Microsoft tools or platforms, I would recommend using either of these tools.


If you are using Microsoft tools and platforms and want to stay in line with the future tools to be released by Microsoft, here is what's recommended:

■ If you build mostly C++ projects, use VCBuild unless you have project-to-project dependencies on non-C++ projects. VCBuild can use multiple processors when building solutions.
■ If you build mostly non-C++ projects (but all project types that build using MSBuild), or if you have project-to-project references between C++ and non-C++ projects, use MSBuild. You won't get multiple processor build support, unfortunately.

When Should You Use MSBuild?

■ Whenever you are building a C#, VB, or J# project.
■ Whenever you are orchestrating a heterogeneous build (known as build lab scenarios). Note: MSBuild will cooperate/interoperate with VCBuild for the C++ parts of the build.

When Should You Use VCBuild?

■ Whenever you are building a C++ project (managed or unmanaged [a.k.a. native code]).
■ Whenever you are building a mixed-language solution that requires multi-proc. Note: VCBuild will cooperate/interoperate with MSBuild for the C#, VB, or J# parts of the build.

When Should You Use devenv /build?

■ Whenever you are building a non-Microsoft project or deployment project, or if you have non-MSBuild, non-C++ project types, you'll have to use devenv /build.
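For reference, typical invocations of the three options look roughly like this; the solution, project, and configuration names are made up for illustration:

msbuild MyApp.sln /p:Configuration=Release
vcbuild MyApp.vcproj "Release|Win32"
devenv MyApp.sln /build Release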


It is recommended that you completely decouple and isolate C++ code and build processes from C#/VB code and build processes as much as possible. The C++ model is not completely compatible with the .NET model. (Header files and object files just don't map to assemblies and vice versa, and single-step building versus compile-and-link are two different worlds.)

Summary

I hope that this chapter has covered the basic build tools that you should use or explore using in your build labs. The focus is on Microsoft technologies because that's the environment this book is based on. It should be obvious that although each of these tools is designed for specific purposes or languages, each can be modified to build any scenario that comes up in a build lab. This, like many other things in a build lab, is just a matter of picking whatever religion or philosophy you want to follow.

Recommendations

■ Use command-line builds for the central build process.
■ Use Make or ANT for non-Microsoft platforms or tools.
■ Use MSBuild for .NET builds.
■ Use VCBuild for non-.NET builds.
■ Write your scripts in an easy language such as Perl or batch files.
■ Learn XML because it is ubiquitous.

C H A P T E R

6

SNAP BUILDS—AKA INTEGRATION BUILDS

Philosophy: The first rule of any technology used in a business is that automation applied to an efficient operation will magnify the efficiency. The second is that automation applied to an inefficient operation will magnify the inefficiency. —Bill Gates, chairman of Microsoft Corporation

One technology that Microsoft has developed that I have not seen at any other customer site is the SNAP build system. After reading this chapter, you might say, "Our group does these types of builds, but we call them integration or continuous builds." That is fine; call it whatever you like. I have seen similar tools at some customer sites, but none is as fully automated or as well developed as the SNAP system. Perhaps that is because Microsoft has been using this system for years and has put so many resources into getting it right.

When customers ask me how they can ease the pain of merging code from various locations or teams, I think of SNAP. If there is such a magic tool that does merges, this would be it. I view this tool as an advanced technology or step in a build process. A group really needs to master the other points in this book before trying to adopt a SNAP build process.

In this chapter, with the help of Bob Jervis (the creator of the SNAP tool, who provided a lot of the data), I explain how to set up a tool like this and detail some of the experiences we have had using it. If you ever plan on trying a continuous integration build system, the information in this chapter will be interesting to you. Microsoft does not currently sell this tool, and I don't know of many companies that sell something similar. In the future, Microsoft plans to ship a tool that is a variation of this one. If you don't want to wait for Microsoft to ship this tool, you might consider developing your own SNAP tool. I outline the architecture and relevant information so that you can create your own tool if you have the resources to do it.


What Is a SNAP Build?

SNAP stands for Shiny New Automation Process. It was designed by the NetDocs (also known as XDocs or InfoPath) team to control the execution of large, complex jobs on a network of computers. This specific build tool has been around Microsoft for the past five years, and some of the tool's predecessors have roots that go even further back. With some minor configuration changes, the system has also been used successfully for tasks such as managing source code check-ins, performing daily product builds, running test automation suites, and validating automation tests.

Although you can use the SNAP system in a variety of ways, the emphasis in this chapter is on managing source code check-ins and using the SNAP system as an integration build tool. This is the most common way that SNAP is used at Microsoft, and it has the highest interest of the customers with whom I have spoken.

As I discussed in the previous chapter, changes to source code need to be atomic. What this means is that check-ins into the mainline source tree should be reverse integrated (RI) as a whole or not at all. This ensures that any developer who makes a change will have all or none of the changes applied together. If conflicts exist among the sources being checked in and the currently checked-in sources, you must resolve them before the changes will be accepted.

What the SNAP system does is add many additional checks to the atomic operation of checking in source code. It ensures that check-ins are held to a certain standard and that they do not break the mainline build or create a merging nightmare. Whereas most source code control tools require only that conflicting changes in the text of the source are fixed, SNAP permits an arbitrary amount of additional processing involving almost as many machines and discrete steps as you like.

You might be thinking that this would also be a great tool for Extreme Programming—see Appendix B, "Extreme Programming," for an overview—and I would agree. Note, though, that this is only a tool. There is a lot more to Extreme Programming than an automated check-in and build system.


When to Use SNAP Builds

You might want to implement a SNAP build in two places:

■ In one of your Virtual Build Labs or private branches (as talked about in Chapter 2, "Source Tree Configuration for Multiple Sites and Parallel (Multi-Version) Development Work") prior to checking into the mainline source tree. Some groups at Microsoft use the SNAP tool as a pre-check-in test to make sure that all changes in a particular project integrate with other changes in other components of the project. In some ways, this replaces a build team for a particular VBL. As recommended later, you should not view this tool as a replacement for builders; you still need someone to administer or monitor the SNAP process.
■ As a gateway to the mainline source tree. Other groups at Microsoft use this tool for their main build process. In fact, the "Lessons Learned" sidenotes are from the NetDocs team, who used SNAP as their main build process.

Alternatively, you can implement a SNAP build in both places. The implementation depends on the complexity of your build process and product.

How SNAP Works

NOTE Remember in Chapter 3, “Daily, Not Nightly, Builds,” when I discussed monitoring all check-ins? SNAP is a perfect tool to do that for you, but make sure you don’t skip one important step: The WAR team must approve check-ins before checking the code into the mainline source tree.


SNAP uses a system of networked machines, usually in a lab, to verify that a developer’s check-in will build with the current set of sources before allowing the changes to be checked into the mainline or VBL source code tree.


The core of the SNAP system is in two SQL Server databases. The first, called the SNAP database, is shared across the lab and logically comprises two parts: a set of queues and a set of machine daemons. This database is designed to be unique throughout your lab. The second, called the Tuple database, controls the actual operations of a check-in (or any other job). Figure 6.1 illustrates in schematic form the components of a SNAP system and how they communicate. In this diagram, the drum-shaped objects are databases. Square-cornered boxes are HTML pages that use dynamic HTML and a combination of remote scripting and .NET technologies to query their corresponding databases.

Figure 6.1 SNAP. (The diagram shows the Queue and Tuple databases, the machine daemons, and their corresponding UI pages: SNAP UI, Queue UI, and Daemon UI.)
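To make the queue half of that design concrete, here is a purely hypothetical SQL Server table for a check-in queue. This is an illustration of the idea only, not the actual SNAP schema:

CREATE TABLE CheckinQueue (
    CheckinId   int IDENTITY PRIMARY KEY,
    Developer   varchar(64) NOT NULL,
    SubmittedAt datetime    NOT NULL DEFAULT GETDATE(),
    -- Queued, Building, Testing, Failed, or Committed
    Status      varchar(16) NOT NULL DEFAULT 'Queued'
)

A control process in the lab would pull the oldest row with Status = 'Queued', run the build and tests, and update the row as the job progresses.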

Sample Machine Configuration

The following information will give you an idea of a typical SNAP hardware configuration that you need to budget for if you plan to create or purchase this system.


Following is the SNAP build machine configuration:

■ The fastest processor and memory available to optimize Windows 2003 performance is recommended.
■ The C drive is a minimal 20GB, the D drive is 100 to more than 200GB, and RAID Level 5 is used on the drive subsystem.
■ The lab team maintains the hardware and performs the initial installation.
■ The operating system is Windows 2003 Server Enterprise Edition with all the latest service packs/patches.
■ Anti-virus real-time monitoring is disabled. This kills your build times if you leave it on.
■ The system indexing service is turned off, which is another performance killer.

And here is the test machine configuration:

■ The fastest processor and memory available to optimize Windows XP Professional performance is recommended.
■ The drive is split so that you have a D drive with around 5GB. The remaining space—approximately 105GB—goes to the C drive.
■ The D drive is formatted and is boot enabled.
■ The SNAP daemon is located on the D drive. A shortcut for it is placed in the startup group of the Windows installation.
■ The C drive has Windows XP Professional with the latest service packs/patches.


After a SNAP system is set up, you should not need to configure one of these machines from scratch unless you have a catastrophic failure that requires rebuilding the unit. The SNAP administrators are responsible for keeping all the SNAP machines current with all hotfixes, patches, and critical updates. When such updates are available, you need to check each machine for new updates via the Windows Update site. The biggest administration required is monitoring the test machines periodically to ensure that they are all being reset and updated correctly.

SNAP is a queue-based check-in system. A developer makes a submission to the SNAP system either via the command line or through a Web UI instead of checking his files directly into the mainline source code tree. The changes that the developer submits are then copied to a SNAP server,


and a check-in is added to a queue in the lab. Inside the lab, a control process pulls check-ins off the queue and builds and tests them one at a time. The test team provides the tests. The system is designed to rerun parts of a check-in, ignore failures, abort a check-in, or allow modification of files in a check-in if needed.

WARNING If you commit to a testing process in your check-in queue, expect to spend significant time maintaining it. If your test harness does a poor job of catching and reporting errors, you will be wasting your time. Your test harness needs to be able to reproduce lab failures on developers' desktops. The test harness needs to have few bugs of its own and needs to clearly log stack traces and assert text.

If a developer's check-in breaks the build or something else goes wrong with the check-in, the system sends an e-mail to the developer and the entire team letting them know where things have failed. The developer then must figure out why his job was aborted and resubmit it.

Microsoft Sidenote: Lessons Learned Part 1 (from the NetDocs Team)

So many different bugs and problems came and went in the course of running the check-in process that it was impossible to keep our attention focused on the lab. By watching the trend lines on our lab status reports, we were able to recognize when a tolerable level of degraded service became intolerable. Because the back end of the SNAP system used a Microsoft SQL 2000 database to store detailed logs about the progress of the check-ins, we could analyze the check-in data easily, including build failure rates, number of check-ins, and number of aborted check-ins.

Occasionally, we (the NetDocs team) deliberately blocked the queue to all but high-priority jobs and only allowed through check-ins that fixed these merging or integrating problems. Inevitably, these exercises were efforts to stabilize the core functionality of the product. Time and again, we learned that by making the check-in process go smoothly, we reaped dividends in having many important bugs fixed in the product.


Operations Staff

Keeping your development personnel productive and happy with a check-in system requires constant monitoring of the process. The key to the success of the SNAP system is constant monitoring, or babysitting. Nothing upsets a development team more than a system that does not work and no one available, or properly resourced, to fix it. Don't try to run the SNAP system unattended. You need properly trained people to debug and own any problems with the system. This is usually someone from the build team, but it does not have to be. If you can make your build and testing processes highly reliable, you might find that you can go extended periods without close supervision. Either way, you need resources assigned to attending and maintaining the system.

It is worth noting here that without knowing the details of your lab process, you will not be able to properly manage a SNAP system. In other words, if your IT department owns all the hardware or you do not have a properly equipped lab, it will be difficult to manage a SNAP system because you will not have direct access to the machines or be able to modify the hardware configuration if needed.

Managing Throughput

The primary measure of success of the SNAP system is throughput—the number of check-ins processed over a given period. The demand on a check-in system fluctuates during the course of the development cycle. For example, just after you finish a major milestone, the demand on the system drops significantly. Conversely, in the closing weeks of the next milestone, demand on the system rises, peaking around the milestone date.

Microsoft Sidenote: Lessons Learned Part 2 (from the NetDocs Team)

Predicting demand is not easy. We found that a developer's behavior adapted to the conditions of our queue. Also, some developers like to check in once a week, whereas others want to check in as frequently as they can—even several times per day if the system allows them. In one product team, it became well known that getting a check-in through successfully was hard, so developers tended to save up changes and check in less frequently than they might have wanted to.

During the evolution of the product, the number of developers went up significantly, yet the number of check-ins processed per day did not keep pace. There clearly were periods when we maxed out the system. We found that at peak loads, the queue backed up. Also, the number of merge conflicts in check-ins started rising dramatically after a check-in sat in the queue for more than a day before being processed.

We discovered that an effective way to manage peak loads and extend the effective throughput of the lab was to merge unrelated check-ins. By carefully picking and choosing between check-ins that affected different parts of the product, we were able to assemble a merged check-in that processed several sets of changes at once. As a result, we saw significant gains in effective throughput.

We generally experienced few complaints from the developers and did not see many merge conflicts in our check-ins if we could keep our queue wait time within about a day and a half. For us, that represented a queue length from 20 to 25 check-ins deep. When the backup went over two days (for us, 30 or more), we saw significant complaints.

When one group tried flushing out stuck or failed check-ins after a certain timeout, they found that the problem was often caused by something not related to the check-in. When this happened, the SNAP queue would empty as every check-in timed out and then would have to be rerun the next day anyway. Nothing was saved.

By far, the largest problems came when significant new snippets of code were checked in without quickly getting good tests in place for them. Until the product and the tests stabilized, the lab throughput suffered.

To figure out an average check-in time with a SNAP system, take the length of your longest build and add to it the length of the longest test. Add the time it takes to set up the test machines. Then add a few minutes here and there to account for the time needed to perform bookkeeping and process the actual SNAP submit statement. You now have the expected length of a check-in, assuming that you buy enough hardware to run as much as you can in parallel and you have decided to integrate testing into your SNAP process. From the length of a check-in, you know the number of check-ins per day that you can process. At peak demand periods, it seemed like our developers needed to be able to process a check-in every other day or so. During off-peak intervals, we were seeing a rate closer to once a week.
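To put illustrative numbers on that formula (these durations are invented for the example, not measurements): with a 45-minute longest build, a 30-minute longest test, 10 minutes of test-machine setup, and 5 minutes of bookkeeping, each check-in occupies the system for about 90 minutes, so a queue that runs around the clock can process roughly 16 check-ins per day.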


Again, use these guidelines in conjunction with any data that you have on your current check-in process. In most source code control systems, you can easily discover the number of change lists that have been submitted to any given branch. This data will help you determine the schedule of your project and how you can improve your build process by either adding more hardware or rearchitecting some processes.

SNAP works best when you can divide processes into small, independent pieces and run them in parallel. Because a product build must often be able to be run on a single developer's machine, it is difficult to parallelize the build. But testing can generally be made to distribute reasonably well.

Summary


The goal of using the SNAP system is a stable, clean main source code tree from which you can produce a green build (one that has no flaws in tested functionality) at any time. Ensuring that stability requires carefully screening source code check-ins to prevent bad code from entering the system.

In principle, all developers should build and test their changes before checking them in, without a process in the lab. However, this is impractical. Building all the pieces of some big projects (both debug and release) and running the full check-in test suite for both builds might require several hours of development time in their office. There would also be a lot of time spent diagnosing intermittent test failures only to determine that they were known bugs. The SNAP process can minimize this problem and help developers be more productive.

This chapter assumes that the SNAP system is used to process check-ins. The system can also process many other kinds of tasks, such as daily builds, test automation, and validation, as mentioned at the beginning of this chapter. Therefore, if you can develop your own SNAP system, you will get an enormous amount of benefit from it. If you are not able to develop your own and will purchase a similar tool instead, this chapter has provided you with a little more insight into the advantages and disadvantages of such a system.


Recommendations

■ Develop your own integration build tool to be used as a pre-check-in build test to the golden source trees, or wait for Microsoft to release a tool similar to this one. Keep the points in this chapter in mind when you're building this "silver bullet."
■ If you deploy a Virtual Build Lab process, require that the VBLs use a SNAP system to stay in sync with the Central Build Lab. This is the whole concept of an integration build system such as this.
■ Do not rely on the SNAP system as your mainline build tool. Although I mention that some groups at Microsoft use this tool as their Central Build Team build tool, this can be a little problematic because the check-in queues can get really backed up.
■ Understand that no magic tool is out there to do your work for you when it comes to builds and merging code. But SNAP is a good tool, and some groups at Microsoft cannot live without it.
■ Make sure you have all the other processes down in this book before trying to roll out a SNAP system. These processes include source tree configuration, build schedules, versioning, build lab, and so on.

C H A P T E R

7

THE BUILD ENVIRONMENT

Philosophy: There are two ways of being creative. One can sing and dance. Or one can create an environment in which singers and dancers flourish. —Warren G. Bennis, author of the bestselling book On Becoming a Leader

Every build process that I have ever seen has a build environment, even if the people using it are not aware of it. This is the way things should be. GUI (graphical user interface) or visual build tools are good if you are short on resources and are willing to give up control of the build process to another company's build tool. As mentioned in Chapter 5, "Build Tools and Technologies," the majority of the build labs at Microsoft use command-line tools to build software and not some kind of visual shell—not even the Visual Studio IDE (Integrated Development Environment). The key to setting up a reproducible, consistent build is to create the correct build environment and standardize it throughout the team.

Microsoft Sidenote: What to Do with Old Build Machines

When we released the first three versions of NT, we were so worried about not being able to re-create a working build environment that we would take the machines that were used for builds and move them to the side, reserving those specific boxes for hotfix or service pack builds. For example, when NT 3.5 shipped, we moved the build machines to the back of the lab and bought new hardware to build NT 3.51. By doing this, we did not have to worry about rebuilding and reconfiguring a build machine, and we were able to rebuild the code exactly the way we did when the product shipped.

I realize that this is not practical for most products or groups, but you might be able to adopt something similar, such as setting up a "build farm" that contains several machines with the appropriate images available to re-create a build machine rather quickly. The NT build team no longer archives build machines because a separate division handles the source and fixes after the product ships.


This chapter presents an example of how to set up a build environment using batch (or command) files and nmake. The example is targeted for a Wintel application. If you are unfamiliar with nmake and want to learn more about the tool, see www.microsoft.com/technet/itsolutions/cits/interopmigration/unix/unixbld/unixbld1.mspx. However, you do not need to be familiar with the nmake syntax. What is more important is how the environment is set up and used to build. The examples in the chapter can be used with any build tool, whether it is ANT, MSBuild, or VCBuild; just switch out the reference to the tool. This is how every build environment should be set up; it should just be a matter of finding/replacing a tool name and the location in the files.

Setting Up the Environment

To start with, everyone needs to use a common environment owned by the Central Build Team (at least for the command-line builds). Outline for all developers and testers, on the build Web page, the proper directory structure that they need to create on their machines. Here's an example:

md (make directory) \\private\developer
md \\private\developer\
copy \\private\developer\

I like to use three batch files to set up an environment:

■ \\private\developer\vincem\devenv.cmd—Sets env for the whole project.
■ \\public\tools\developer.cmd—Sets env for the developers.
■ \\private\developer\vincem\setenv.cmd—Sets env for a specific subproject.

If you use Visual Studio (VS) to develop your project, you need to call another batch file that is provided with VS (vsvars32.bat). This file sets the proper environment variables needed to run Visual Studio from a command prompt. There are probably other similar batch files with other development tools that need to be called. Using the previous directory configuration, you should copy the correct version of vsvars32.bat to

\\private\developer\vincem\vsvars32.bat

Update the following directories via script after each build:

■ \\public\tools—This directory contains all build/command-line tools. Also, most importantly, this is the directory that contains the RULES.MK file, which has all the macros for the project makefiles.
■ \\public\bin—This directory is for the developer's environment and other public command files.

Everyone needs to set up a command file shortcut on his desktop. For example:

cmd /k \\private\developer\vincem\devenv.cmd

Label the shortcut DEV ENV.

Setting Up a Developer or Tester Machine

Now you have a one-click developer environment. Next, let's dig a little deeper into the three env files and examine what they look like. See the comment lines of each file for an explanation of what the file does. Here's the devenv.cmd file:

REM This command file is the first of three that gets called to set the proper
REM build environment. Only generic build defines should be in this command file.



if "%USERNAME%" == "" goto fail
REM Username must be set
SET HOST=NT
REM
REM If no drive has been specified for the development tree, assume
REM X:. To override this, place a SET _DEVDRIVE=X:
REM
if "%_DEVDRIVE%" == "" set _DEVDRIVE=e:
REM
REM If no directory has been specified for the development tree, assume
REM \project name.
REM
if "%_DEVROOT%" == "" set _DEVROOT=\project name
set _DEVBINDIR=%_DEVDRIVE%%_DEVROOT%
REM
REM This command file assumes that the developer has already defined
REM the USERNAME environment variable to match their email name (e.g.
REM vincem).
REM
REM We want to remember some environment variables so we can restore later
REM if necessary
REM
set _DEVUSER=%USERNAME%
REM
REM Assume that the developer has already included %_DEVBINDIR%\PUBLIC\TOOLS
REM in their path.
REM
path %PATH%;%_DEVBINDIR%\PUBLIC\BIN
REM
REM No hidden semantics of where to get libraries and include files. All
REM information is included in the command lines that invoke the compilers
REM and linkers.
REM

set LIB=
set INCLUDE=
REM
REM Set up default build parameters.
REM
set BUILD_DEFAULT=-e -i -nmake -i
set BUILD_DEFAULT_TARGETS=X86
set BUILD_MAKE_PROGRAM=nmake.exe
REM
REM Set up default nmake parameters ???
REM
if "%DEVMAKEENV%" == "" set DEVMAKEENV=%_DEVBINDIR%\PUBLIC\BIN
REM
REM Set up the user specific environment information
REM
call %_DEVBINDIR%\PUBLIC\TOOLS\developer.cmd
REM
REM Optional parameters to this script are command line to execute
REM
%1 %2 %3 %4 %5 %6 %7 %8 %9
goto end

:FAIL
echo Username must be set!
:end

The following is the developer.cmd file:

@echo off
REM @@ Put COPY_RIGHT_HERE
REM @@ The environment unique to this user's machine and project
REM
REM Users should make a copy of this file and modify it to match
REM their build environment.
REM
REM This is a sample file that should be modified to match a project's
REM specific build needs.


REM
REM Set type of host platform. Default is NT
REM
if "%HOST%" == "" set HOST=NT
:hostok
REM This is where to find the projects
REM
REM The next lines provide default values for the root of your
REM enlistment. To override, set these values before calling this
REM batch file.
REM
IF .%DEFDRIVE%==. SET DEFDRIVE=E:
IF .%DEFDIR%==. SET DEFDIR=\
REM
REM The next lines provide default values for the build type.
REM Currently, the default is DEBUG. To override this, set
REM BUILDTYPE before calling this batch file.
REM
IF .%BUILDTYPE%==. SET BUILDTYPE=DEBUG
REM
SET PLATFORM=X86
goto done
:fail
echo Build environment is not completely configured.
goto eof
:done
REM Set Variables
SET _BUILD=E:\\BIN
SET _LIBS=E:\\BIN\DBGLIB
SET _DOC=E:\\BIN\DOC


call %INIT%\vsvars32.bat [only if using Visual Studio]
CD /d %_DEVBINDIR%

:eof
@echo off
REM if "%default_echo_off%" == "" echo on

Here's the setenv.cmd file:

REM This command file is used to set a specific developer's settings
REM such as a dev team or test team.
REM
REM If no drive has been specified for the development tree, assume
REM e:. To override this, place a SET _DEVDRIVE=e:
REM
if "%_DEVDRIVE%" == "" set _DEVDRIVE=E:
if NOT "%USERNAME%" == "" goto skip1
echo !!! Error - USERNAME environment variable not set
goto done
:skip1
REM
REM This command file is either invoked by DEVENV.CMD during the startup of
REM a screen group, or it is invoked directly by a developer to
REM switch developer environment variables on the fly. If the file is invoked with
REM no argument, then it restores the original developer's environment (as
REM remembered by the DEVENV.CMD command file). Otherwise, the argument is
REM a developer's e-mail name, and that developer's environment is established.
REM This cmd file is also used to make sure everyone has the same alias set.


echo Current user is now %USERNAME%


REM
REM
if NOT "%1" == "" set USERNAME=%1
REM if "%_DEVUSER%" == "" goto skip2
REM FOR NOW if "%1" == "" if "%USERNAME%" == "%_DEVUSER%" alias src /d
REM FOR NOW if "%1" == "" set USERNAME=%_DEVUSER%
:skip2
REM
REM Some tools look for .INI files in the INIT environment variable, so set
REM it.
REM
set INIT=%_DEVBINDIR%\private\developer\%USERNAME%
REM
REM Load CUE with the standard public aliases and the developer's private ones
REM You will need to create a CUE file that contains the alias you want
REM
if "%_DEVUSER%" == "" goto skip3
REM
REM Initialize user settable DEV nmake environment variables
REM
set DEVPROJECTS=
set DEVx86FLAGS=
set BUILD_OPTIONS=
set _OPTIMIZATION=
set _WARNING_LEVEL=
REM alias src > nul
REM if NOT errorlevel 1 goto skip4
REM alias -p remote.exe -f %_DEVBINDIR%\private\developer\cue.pub -f %_DEVBINDIR%\private\developer\DEVcue.pub -f %INIT%\cue.pri
REM alias -f %_DEVBINDIR%\private\developer\cue.pub -f %_DEVBINDIR%\private\developer\DEVcue.pub -f %INIT%\cue.pri
goto skip4
REM
:skip3
REM alias src > nul
REM if errorlevel 1 goto skip4
REM alias -f %_DEVBINDIR%\private\developer\cue.pub -f %INIT%\cue.pri
:skip4

A Makefile Example That Explains How This Works

To demonstrate how the environment batch files work, I will use a standard makefile. The way this build system works is that all top-level directories must contain makefile, filelist.mk, and depend.mk files. Each file is explained in the example that follows. The makefile should look like this:

##########################################################
#
# Copyright (C) Corporation, 200x
# All rights reserved.
#
##########################################################
# Sample makefile

default: all

# need to include all important files
!include filelist.mk              -> directory specific - explained below
!include $(COMMON)\SRC\RULES.mk   -> system-wide makefile

RULES.mk is a global makefile that has various dependencies and nmake macros in it. It is used to keep the builds consistent.

!include depend.mk                -> directory specific
# Additional makefiles if necessary

Filelist.mk should contain the names of source files that you want to compile, names of libraries to link with, and so on. The following example compiles the foo.c and moo.c files and builds foo.exe.

# Sample filelist.mk
##########################################################
#
# Copyright (C) Corporation, 200x


# All rights reserved.
#
##########################################################
#
# Name of target. Include an extension (.dll, .lib, .exe).
# If the target is part of the release, set RELEASE to 1.
#
TARGET = foo.exe
TARGET_DESCRIPTION = "Microsoft Foo Utility"
NO_WINMAIN = TRUE
NO_UNICODE = TRUE
USE_STDCRT = TRUE

# The initial .\ will be necessary (nmake bug)
CFILES = .\foo.c \
         .\moo.c

CFLAGS = -DNDEBUG -DFLAT -DNO_OPTION_Z -DNT -DWIN32_API

CINC = -I.

#
# Libraries and other object files to link.
#
LIBS =
OBJFILES =

If you have a directory that contains several subdirectories with source files in them, and you want to launch a build from the parent directory, include just three lines in the filelist.mk:

BASEDIR = $()
SUBDIRS = foodir moodir
# End of Sample filelist.mk

This example causes nmake to build the foodir and moodir directories in the project (in the order listed), where $() is defined in the build environment. The depend.mk file contains the dependencies. When you are building for the first time, create an empty depend.mk file:

# Sample depend.mk
# Empty

Then type nmake depend. Nmake calculates all the dependencies and stores them in the depend.mk file. To build the product, just type nmake. Products are created in the following directories (note that these rules are defined in the RULES.mk file):

\objind    x86 debug
\objinr    x86 retail

If you add source files to filelist.mk, you should rebuild the depend.mk file before building the product. Following are some useful nmake rules that are easy to add:

■ nmake clean—Removes all built files.
■ nmake all—Builds the project and all subprojects.
■ nmake depend—Builds the dependency file. Check it back in when you're finished.
■ nmake tree—Releases the build to where the DIST environment variable points.
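As a sketch of how such pseudotargets might be wired up in RULES.mk (the del switches, the $(OBJDIR) macro, and the makedep dependency generator are illustrative assumptions, not the project's real tools):

clean:
    -del /q $(OBJDIR)\*.obj
    -del /q $(OBJDIR)\*.exe

depend:
    makedep $(CFILES) > depend.mk

The leading hyphen tells nmake to continue even if the del command fails (for example, when there is nothing to delete).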

Summary

This chapter showed a standard build environment and how to create a build using the nmake tool. You can substitute any build tool you prefer for this environment because only a small number of specific nmake variables are being set. The files discussed in this chapter are available for download at www.thebuildmaster.com as a good working sample that can be modified to build any language. The example in this chapter could have easily been done using MSBuild and XML files rather than nmake and makefiles. I have also uploaded some sample XML files developed by our United Kingdom consultants to www.thebuildmaster.com.


Recommendations

Here is a short list of what the build environment example in this chapter is about. If everyone is using the same batch files to launch a build environment, re-creating a build will be less painful.

■ Keep the build environment consistent, and control it through the Central Build Team's intranet Web page.
■ Use batch file commands similar to the ones used in the examples in this chapter.
■ Enforce through checks and balances in the batch files that everyone is using the published project build environment.

C H A P T E R

8

VERSIONING

Philosophy: When you get to the fork in the road, take it. —Yogi Berra, New York Yankees Hall of Fame player and coach

There really is only one version number that anyone should care about: the file version of the files you ship to your customers. Still, this topic easily gets confused and convoluted because of the numerous ways that this file version number is generated. Furthermore, with the introduction of the Microsoft .NET Framework and .NET assembly versions, this topic gets even more confusing as people move from unmanaged (native) to managed code. This chapter addresses all these topics and offers helpful ways to keep these numbers straight.

The scope of this chapter is narrowed to versioning as it applies to source code control trees, build numbers, file versions, and setup. I also touch on the difference between .NET assembly (basically a DLL or EXE) versions and file versions. For more information on how the different assembly versions are handled, refer to Jeffrey Richter's book, Applied Microsoft .NET Framework Programming, which is the single best source for .NET programming. Finally, I discuss the impact of versioning on your setup program and how to handle it.

Why Worry About Versioning?

Having a good version scheme for your software is important for several reasons. The following are the top five things a version scheme allows you to do (in random order):

■ Track your product binaries to the original source files.
■ Re-create a past build by having meaningful labels in your source tree.
■ Avoid "DLL hell"—multiple versions of the same file (library in this case) on a machine.
■ Help your setup program handle upgrades and service packs.
■ Provide your product support and Q/A teams with an easy way to identify the bits they are working with.

So, how do you keep track of files in a product and link them back to the owner? How can you tell that you are testing or using the latest version of a released file? What about the rest of the reasons in the list? This chapter describes my recommendations for the most effective and easiest way to set up and apply versioning to your software. Many different schemes are available, and you should feel free to create your own versioning method.

Ultimately, like most of the other topics in this book, the responsibility to apply versioning to the files in a build tends to fall into the hands of the build team. That might be because it is usually the build team that has to deal with the headaches that come from having a poor versioning scheme. Therefore, it is in their best interest to publish, promote, and enforce a good versioning scheme. If the build team does not own this, then someone who does not understand the full implications of versioning will make the rules. Needless to say, this would not be desirable for anybody involved with the product. What is the best way to accomplish this? Let's start with the version number and work our way down.

File Versioning

Every file in a product should have a version number; a four-part number separated by periods, such as the one that follows, seems to be the established best practice. There are many variations of what each part represents. I will explain what I think is the best way of defining these parts:

<major version>.<minor version>.<build number>.<revision>

■ Major version—The component owner usually assigns this number. It should be the internal version of the product. It rarely changes during the development cycle of a product release.
■ Minor version—The component owner usually assigns this number. It is normally used when an incremental release of the product is planned instead of a full feature upgrade. It rarely changes during the development cycle of a product release.





89

Build number—The build team usually assigns this number based on the build that the file was generated with. It changes with every build of the code. Revision—The build team usually assigns this number. It can have several meanings: bug number, build number of an older file being replaced, or service pack number. It rarely changes. This number is used mostly when servicing the file for an external release.

NOTE These numbers range between 0 and 64K and can be enforced by a prebuild test tool or a build verification test.



■ ■

5.1 is the major and minor internal version number of Windows XP that is always used in our bug tracker. If this was a Windows 2003 kernel, this number would be 5.2. If it was Longhorn, it would be 6.0. 2600 is the RTM (released to manufacturing) build number of Windows XP. 2180 is the RTM build number of Windows XP SP2.

How can this revision build number (2180) be smaller than the original XP RTM build (2600)? The reason is because the SP2 codeline is a branch off the XP mainline (the 2600 codeline). When the branch was made, the build number was reset to 0. The large number of builds is a result of SP2 containing all the SP1 changes, which was the same branch. (The RTM build number was 1086 for SP1.) If this was a one-off hotfix rather than a service pack release, the field might have the number of the bug that was being fixed, such as 1000 (if this was the bug number). Again, to better visualize this, see the sample VSS tree drawings in Chapter 16.

8. VERSIONING

As a rule of thumb, each codeline in a source tree should have its own build number, and that number should start at zero when the branch is created. If it is a build off of the main trunk, the build number should go into the field. Keep your mainline incrementing with each build as long as it is still the tip of your source tree. See Chapter 16, “Managing Hotfixes and Service Packs,” for detailed examples on this labeling. For example, 5.1.2600.2180 is a typical file version for a Windows XP SP2 binary. Let’s break this down in more detail:
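To show where these numbers actually live in an unmanaged Windows binary, here is a minimal sketch of a Win32 version resource (.rc) carrying the XP SP2 example; the string block and translation values are generic boilerplate, not taken from a real Windows source file:

VS_VERSION_INFO VERSIONINFO
 FILEVERSION    5,1,2600,2180
 PRODUCTVERSION 5,1,2600,2180
BEGIN
    BLOCK "StringFileInfo"
    BEGIN
        BLOCK "040904B0"
        BEGIN
            VALUE "FileVersion",    "5.1.2600.2180"
            VALUE "ProductVersion", "5.1.2600.2180"
        END
    END
    BLOCK "VarFileInfo"
    BEGIN
        VALUE "Translation", 0x409, 1200
    END
END

The four comma-separated numbers in FILEVERSION are the binary version that tools and installers compare; the strings are what you see on the file's Version property page.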


WARNING: PRODUCT MARKETING NUMBERS MIXED IN VERSION STRINGS ARE A "BAD THING." There seems to be a common misuse of product marketing version numbers for the major and minor numbers in the version string. Don't use the product marketing version in the file version string, because different products use some of the same components. Furthermore, marketing numbers don't always follow a sequential order, and sometimes they're random. If, on occasion, the marketing people want to use the internal version number of a product, such as NT 4.0, for their marketing version, that's okay. Just don't link the two numbers in a permanent way.

Build Number

Each build that is released out of the Central Build Lab should have a unique version stamp (also known as the build number). This number should be incremented just before a build is started. Don't use a date for a build number, and don't mix dates into a version number, simply because there are so many date formats out there that it can be difficult to standardize on one. Also, if you have more than one build on a given date, the naming can get tricky. Stick with an n=n+1 build number. For example, if you release build 100 in the morning and rebuild and release your code in the afternoon, the afternoon build should be build 101. If you were to use a date for the build number, say 010105, what would your afternoon build number be? 010105.1? This can get rather confusing.

It is also a good idea to touch all the files just prior to releasing the build so that you get a current date/time stamp on the files. By "touching" the files, I simply mean using a tool (touch.exe) to modify the date/time stamp on a file. There are several free tools available for you to download, or you can write one yourself. Touching the files helps in the tracking process and keeps all of the dates and times consistent in a build release. It also eliminates the need to include the current date in a build number; you already have the date by just looking at the file properties.

In addition, try to avoid tools that inject version numbers into binaries as a post-build step. Although the tools might seem reliable, they introduce an instability factor into your released binaries by hacking hexadecimal code. Most Q/A teams become distressed if this happens to the binaries they are testing—and justifiably so. The build number should be built into the binary or document files.
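As a sketch of the touch step in a release script (the tool name touch.exe and the release path are assumptions; several free Windows ports of the UNIX touch utility exist):

REM Stamp every binary in the release directory with the current date/time
for /r D:\release\build101 %%f in (*.exe *.dll) do touch.exe "%%f"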


This would be a good time to review Chapter 2, “Source Tree Configuration for Multiple Sites and Parallel (Multi-Version) Development Work,” or jump ahead to Chapter 16 unless you are able to follow this without the visuals.

Source Code Control Trees

All the source code control (SCC) software that I have seen has some kind of labeling function to track either checked-in binaries or source code. (Remember that I am not for checking in binaries, but I mention this for the groups or teams that do. See Chapter 2.) This type of versioning (or, more appropriately, labeling) is typically used to track a group of sources that correspond to a product release. Most of the time, teams combine labeling of the sources with the branching of the source code lines. I like to stick to the recommended practice of branching as infrequently as possible. Some companies and groups at Microsoft branch code off the main source line after every build and use the build number as the label. Their source tree usually ends up being an administration nightmare because some critical files get buried so deep that they forget the files are there, and their SCC tool does not have a good way of managing them and bringing them back to the surface or top level of the tree.

Regardless of the SCC tool you use, keep your labeling simple, and use the build number that the sources or files were built with. As mentioned in Chapter 2, if you would like a deeper look at SCC tools, I highly recommend Software Configuration Management Patterns by Berczuk and Appleton. The detail on source control that is covered in that book is unmatched.

Microsoft Sidenote: "I've Been Slimed"

In the early 1990s, in the Windows and small business server group, we used a primitive source code control tool called Source Library Manager (SLM, affectionately pronounced slime). The labeling function was so unreliable that we would not even bother to label the sources after each build. So how did the developers, testers, and build team look up previous check-ins to previous builds? Well, fortunately, the projects were not as complicated or as large back then as they are today, so this was all manageable through our release and build process.

As mentioned in Chapter 1, "Defining a Build," we kept about two weeks' worth of all the sources and binaries of each build on a release server. This allowed the developers, testers, and builders quick access to the previous builds without having to rely on the labeling function of the SCC tool. If someone needed files from a build that were not on our release server, we had custom tools that would scan the sources on the source trees by date and version stamp and copy the sources to a share. Then we would rebuild those sources on that share. Because we had weekly tape backups of the release servers, we could always restore past shares on the rare occasion that we had a reason to do this.

Bringing this forward to 2005, Microsoft uses a powerful, internally developed SCC tool that handles labeling and branching extremely well. This was probably the most compelling reason to move to the new tool years ago. If you want to adopt this tool, look at the SCC tool in the Visual Studio Team System. It has all the features of Microsoft's in-house tool and more. In the future, there are plans to replace this in-house tool with the Team Foundation Source Control (TFSC) tool mentioned in Chapter 2.

Should There Be Other Fields in the File Version Number?

I am of the opinion that no, there shouldn't be other fields in the file version number. Let's look at some other fields that might seem like they should be included but really don't need to be:

■ Virtual Build Lab (VBL) or offsite development group number—You can use this number to track a check-in back to a specific site or lab where the code was worked on. If you have a golden tree or mainline setup, and all your VBLs or offsite trees have to reverse integrate into the golden tree (as discussed in Chapter 2), this extra field would be overkill. That's because you can trace the owner through the golden tree check-in. Having a field in which you would have to look up the VBL or offsite number would take just as long. The reality is that when you check a version number, you won't care where the file came from. You'll only care whether it is a unique enough number to accurately trace the file to the source that created it. Most likely, you'll already know the version number you're looking for, and you'll just need to confirm that you're using it.


■ Component—If you break your whole project into separate components, should each component have its own identification number that would be included in the version string? No. Similarly to the reasons in the previous bullet, you can track this information by the name of the binary or other information when checking the other properties of the file. This information would probably only come into play if you were filing a bug and you had other resources available to determine which component this file belonged to.
■ Service Pack Build Number—If you're doing daily builds of a service pack release, you should use the earlier example: increment the build number and keep the revision number at the build number of the current in-place file. This seems like a good argument for a fifth field, but it isn't if the revision field is used properly.

There might be one exception to adding a fifth field, which falls under the external releases of your product: You might want to add the word BETA at the end of the version string so that people will know at a quick glance that they are running beta code upon checking the version. Some testers argue that this extra field, which would be truncated on the final release, changes the byte count of the binaries, which would then affect the testing. This is probably true, so you have to weigh the consequences.

DLL or Executable Versions for .NET (Assembly Versions)

This section applies to you only if you are programming in .NET. When Microsoft introduced .NET, one of its goals was to get rid of DLL hell and all the extra setup steps I talk about later in this chapter. In reviewing where .NET is today, it looks like Microsoft has resolved the side-by-side DLL hell problem, but now the problem is "assembly version hell."

Without going into too much detail about the .NET infrastructure, let's touch on the difference between file versioning and assembly versioning. Assembly versions are meant for binding purposes only; they are not meant for keeping track of different daily versions. Use the file version for that instead. It is recommended that you keep the assembly version the same from build to build and change it only after each external release. For more details on how .NET works with these versions, refer to Jeffrey Richter's .NET book. Don't link the two versions, but make sure each assembly has both an assembly version and a file version, which show up when you right-click the file and view its properties. You can use the same format described earlier for file versioning for the assembly version.
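A minimal sketch of how the two numbers are kept separate in a C# project's AssemblyInfo.cs; the version values are made-up examples:

using System.Reflection;

// Assembly version: used for binding; hold it steady between external releases
[assembly: AssemblyVersion("1.0.0.0")]

// File version: stamped by the build lab and incremented with every daily build
[assembly: AssemblyFileVersion("1.0.2187.0")]

With this split, rebuilding every day changes only the file version, so existing applications keep binding to the same assembly version.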


There might be one exception to adding a fifth field that falls under the external releases of your product: You might want to append ".BETA" to the end of the version string so that people will know at a quick glance, when checking the version, that they are running beta code. Some testers argue that this extra field, which would be truncated on the final release, changes the byte count of the binaries, which in turn would affect the testing. This is probably true, so you have to weigh the consequences.


Microsoft Sidenote: "Stupid Versioning Will Never Die" by Scott Parkerson

This isn't a Microsoft story, but Scott's February 2, 2003 blog entry (www.smerpology.org/sprocket/?c=Writing+Software) hits some good points and is rather comical and accurate.

I blame Microsoft for starting the whole damn trend. Back in the day, computer software was versioned by a steadily increasing number, always starting with 1.0. Computer software makers had no problem with this. Microsoft Windows was even at 1.0, but I bet few of you ever used it, let alone [saw] a copy of it out in the wild. It wasn't until Windows 3.1 that it started catching on.

Meanwhile, back in Redmond, Microsoft was developing a "new 32-bit multitasking OS" that would be the future of computing: Windows NT. The "NT" stood for "New Technology," a fact [that] seemed to elude the marketroids who designed the Windows 2000 splash screen, which proclaimed "Built on NT Technology." Ah, redundancy. I'm getting ahead of myself. Oh. Right. Anyways, NT was the first "1.0" product that didn't have a 1.0 version number. It was mostly new code, save for the Program Manager interface. Of course, to capitalize on the Windows heritage, marketing decided to name the product Microsoft Windows NT 3.1.

Flash forward to 1995. Microsoft has redesigned the interface for Windows for Workgroups 3.11, added more 32-bit code, streamlined as much as possible. They didn't dub it Windows 4.0, which is what it was. No. It was Windows 95, starting the insane "versioning products after the year in which it was released" trend. Of course, Microsoft Office had to be numbered the same way: Office 95 (not Office 7.0). Other software makers quickly followed suit, releasing things like Lotus SmartSuite 96, Quicken 98, etc., ad nauseam.

Then there was Windows XP and Office XP. Where do they go from there: XP 2.0? NeXtP? The mind boggles. But the thing that started this whole rant this morning was downloading and installing Microsoft Windows Media Player 9 Series. 9 Series?! I can understand "9," as it is the ninth release of the venerable media player. But Series? Are they trying to be BMW?


At any rate, all this versioning madness is kept alive by marketing dorks who still say things like, "We can't call this software 1.0… people will think it's not ready for prime time." Well, crap. So, we should raise people's expectations needlessly to make a few bucks more at product launch, but ultimately lose the customers who bought the junk thinking it was mature? Yeah.

So, this is another "Microsoft did it to us again" rant—with, at the very least, a valid point. Scott might find it hard to believe, but Microsoft employees hate all of this "marketing versioning confusion" too! It's those people with the MBAs that dream this stuff up.

How Versioning Affects Setup


Have you ever met someone who reformats his machine every six months because "it just crashes less if I do" or because it "performs better"? It might sound draconian, but the current state of component versioning and setup makes starting from scratch a likely solution to these performance issues, which are further complicated by spyware. Most of the problems occur when various pieces of software end up installing components (DLLs and COM components) that are not quite compatible with each other or with the full set of installed products. Just one incorrect or incorrectly installed DLL can make a program flaky or prevent it from starting up. In fact, DLL and component installation is so important that it is a major part of the Windows logo requirement.

If you are involved in your product's setup, or if you are involved in making decisions about how to update your components (produce new versions), you can do some specific things to minimize DLL hell and get the correct version of your file on the machine. Installing components correctly is a little tricky, but with these tips, you can install your components in a way that minimizes the chance of breaking other products or your own.


Install the Correct Version of a Component for the Operating System and Locale

If you have operating system (OS)-specific components, make sure your setup program(s) check which OS you are using and install only the correct components. Also, you cannot give two components the same name and install them in the same directory. If you do, you overwrite the component on the second install on a dual-boot system. Note that the logo requirements recommend that you avoid installing different OS files if possible.

Related to this problem is the problem caused when you install the wrong component or typelib for the locale in use, such as installing a U.S. English component on a German machine. This causes messages, labels, menus, and automation method names to be displayed in the wrong language.
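As a rough sketch of the OS check in a .NET-based setup helper (the component file names are hypothetical):

using System;

class ComponentSelector
{
    // Pick the component that matches the running OS family.
    static string SelectComponent()
    {
        OperatingSystem os = Environment.OSVersion;

        // PlatformID.Win32NT covers NT 4.0, 2000, XP, and Server 2003.
        if (os.Platform == PlatformID.Win32NT)
        {
            return "mycomponent_nt.dll";
        }

        // Everything else in this sketch is the Windows 9x/Me line.
        return "mycomponent_9x.dll";
    }

    static void Main()
    {
        Console.WriteLine("Installing: " + SelectComponent());
    }
}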

Write Components to the Right Places

Avoid copying components to a system directory. An exception to this is if you are updating a system component. For that, you must use an update program provided by the group within Microsoft that maintains the component. In general, you should copy components to the same directory that you copy the EXE. If you share components between applications, establish a shared components directory. However, it is not recommended that you share components between applications. The risks outweigh the benefits of reduced disk space consumption.

Do Not Install Older Components Over Newer Ones

Sometimes, the setup writer might not properly check the version of an installed component when deciding whether to overwrite the component or skip it. The result can be that an older version of the component is written over a newer version. Your product runs fine, but anything that depends on new features of the newer component fails. Furthermore, your product gets a reputation for breaking other products. We address the issue of whether it makes sense to overwrite components at all. But if you do overwrite them, you don't want to overwrite a newer version.
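A setup program can avoid this mistake by comparing the full four-part file versions before copying. Here is a hedged .NET sketch of such a check:

using System;
using System.Diagnostics;

class OverwriteCheck
{
    // Returns true only if the candidate file is strictly newer than
    // the component already installed on the machine.
    static bool ShouldOverwrite(string installedPath, string candidatePath)
    {
        FileVersionInfo installed = FileVersionInfo.GetVersionInfo(installedPath);
        FileVersionInfo candidate = FileVersionInfo.GetVersionInfo(candidatePath);

        Version installedVersion = new Version(installed.FileMajorPart,
            installed.FileMinorPart, installed.FileBuildPart, installed.FilePrivatePart);
        Version candidateVersion = new Version(candidate.FileMajorPart,
            candidate.FileMinorPart, candidate.FileBuildPart, candidate.FilePrivatePart);

        return candidateVersion.CompareTo(installedVersion) > 0;
    }

    static void Main(string[] args)
    {
        Console.WriteLine(ShouldOverwrite(args[0], args[1]));
    }
}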

How Versioning Affects Setup

97

"Copy on Reboot" If Component Is in Use

Another common mistake is to avoid dealing with the fact that you cannot overwrite a component that is in use. Instead, you have to set up the file to copy on reboot. Note that if one component is in use, you probably should set up all the components to copy on reboot. If you don't, and if the user doesn't reboot promptly, your new components could be mixed with the old ones.
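On Windows, the usual mechanism for this is the MoveFileEx API with the MOVEFILE_DELAY_UNTIL_REBOOT flag, which queues the swap for the next boot. A minimal .NET sketch, assuming the new file has already been staged on the same volume:

using System;
using System.Runtime.InteropServices;

class RebootCopy
{
    const int MOVEFILE_REPLACE_EXISTING = 0x1;
    // Tells Windows to perform the move at the next boot, before the
    // component can be locked by any running program.
    const int MOVEFILE_DELAY_UNTIL_REBOOT = 0x4;

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool MoveFileEx(string existingFile, string newFile, int flags);

    static void ReplaceOnReboot(string stagedCopy, string inUseComponent)
    {
        if (!MoveFileEx(stagedCopy, inUseComponent,
            MOVEFILE_REPLACE_EXISTING | MOVEFILE_DELAY_UNTIL_REBOOT))
        {
            throw new Exception("MoveFileEx failed: " + Marshal.GetLastWin32Error());
        }
    }

    static void Main(string[] args)
    {
        ReplaceOnReboot(args[0], args[1]);
    }
}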

Register Components Correctly; Take Security into Account

Sometimes setups don't properly register COM components, including the proxy and stub. Note that Windows CE requires that you also register DLLs. Note, too, that when installing DCOM components, you must be vigilant about permissions and security.

Copy Any Component That You Overwrite

It is smart to make a copy of any component that you overwrite before you overwrite it. You won't want to put it back when you uninstall unless you're sure that no product installed after yours will need the newer component—a difficult prediction! But by storing the component in a safe place, you make it possible for the user to fix his system if it turns out that the component you installed breaks it. You can let users know about this in the troubleshooting section of your documentation, the README file, or on your Web site. Doing this might not save a call to support, but it does at least make the problem solvable. If the component is not in use, you can move it rather than copying it. Moving is a much faster operation.

Redistribute a Self-Extracting EXE Rather Than Raw Components

If your component is redistributed by others (for instance, your component is distributed with several different products, especially third-party products), it is wise to provide a self-extracting EXE that sets up your component correctly. Make this EXE the only way that you distribute your component. (Such an EXE is also an ideal distribution package for the Web.) If you just distribute raw components, you have to rely on those who redistribute your components to get the setup just right. As we have seen, this is pretty easy to mess up.


Your EXE should support command-line switches for running silently (without a UI) and for forcing overwrites, even of newer components, so that product support can step users through overwriting if a problem arises. If you need to update core components that are provided by other groups, use only the EXE that is provided by that group.

Test Setup on Real-World Systems

If you're not careful, you can end up doing all your setup testing on systems that already happen to have the right components installed and the right registry entries made. Be sure to test on raw systems, on all operating systems, and with popular configurations and third-party software already installed. Also, test other products to make sure they still work after you install your components.

Even Installing Correctly Does Not Always Work

Even if you follow all the preceding steps and install everything 100 percent correctly, you can still have problems caused by updating components. Why? Well, even though the new component is installed correctly, its behavior might be different enough from the old component that it breaks existing programs.

Here is an example. The specification for a function says that a particular parameter must not be NULL, but the old version of the component ran fine if you passed NULL. If you enforce the spec in a new version (you might need to do this to make the component more robust), any code that passes NULL fails. Because not all programmers read the API documentation each time they write a call, this is a likely scenario. It is also possible for a new version to introduce a bug that you simply didn't catch in regression testing. It is even possible for clients to break as a result of purely internal improvements if they were relying on the old behavior.

Typically, we assume that nothing will break when we update a component. In fact, according to Craig Wittenberg, one of the developers of COM who now works in the ComApps group at Microsoft, if you don't have a plan for versioning your components in future releases, it is a major red flag for any component development project. In other words, before you ship Version 1, you need to have a plan for how you will update Version 1.1, Version 2, and beyond—besides how to bug-fix your updates.
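To make the NULL-parameter scenario concrete, here is a contrived C# sketch with hypothetical names:

using System;

class TextFormatter
{
    // Version 1 silently tolerated a null prefix, even though the
    // documentation said the parameter must not be null.
    static string FormatV1(string prefix, string text)
    {
        if (prefix == null) prefix = "";
        return prefix + text;
    }

    // Version 2 enforces the documented contract. Correct per the spec,
    // but it breaks every caller that was passing null.
    static string FormatV2(string prefix, string text)
    {
        if (prefix == null)
            throw new ArgumentNullException("prefix");
        return prefix + text;
    }

    static void Main()
    {
        Console.WriteLine(FormatV1(null, "still works"));
        try
        {
            Console.WriteLine(FormatV2(null, "never reached"));
        }
        catch (ArgumentNullException)
        {
            Console.WriteLine("old callers now break");
        }
    }
}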


In the past, it has been common to share components by default and update them whenever needed. This approach has caused many problems with system stability, although it has been efficient in terms of memory and disk usage. Clearly, a less chaotic set of guidelines is needed. With disk and memory so inexpensive today, a growing number of people argue that applications should never share non-system component files, that the stability costs of sharing far outweigh the storage and performance benefits of sharing. This does not mean that you shouldn’t use components; it just means that you should allow each application to use the version of the component it was tested with. By not sharing the component files while using components, you get almost all the benefits of component development without destabilizing your users’ systems. Sometimes it is appropriate to share component files, such as the Office DLL (MSOFFICE.DLL). If you do decide to share component files among applications, make sure all the users of the shared component files understand the versioning plan. And never write such components to system directories.

Summary

In this chapter, I covered why versioning is important and how it applies to builds, source tree labels, .NET assemblies, and setup. The main point of this chapter is to make sure you have a good versioning scheme and keep it consistent throughout your product. The four-part number scheme discussed in this chapter has been effective for the product groups at Microsoft for years.

Recommendations

What I recommend is to read and re-read this chapter to make sure you fully understand everything in it and grasp the importance of reliable versioning. Here is a short list of what you need to do:

■ Use a four-part number separated by periods for your file version string.
■ Increment your build number before starting each build.
■ Avoid the use of dates in the build number.
■ Use your build number for the label in sources you just built.
■ Don't try to include product marketing versions in your file version string.
■ For .NET programmers, don't link assembly versioning to file versioning.
■ During your setup program, do the following:
  ■ Check for the OS that you are installing on.
  ■ Avoid copying anything to the system directory.
  ■ Copy all components to the same directory as the executable.
  ■ Avoid installing older components over newer ones.
  ■ Make a copy of any component you overwrite.
  ■ Use a self-extracting executable to update.

C H A P T E R

9

BUILD SECURITY

Philosophy: Never depend on security through obscurity.
—Michael Howard, author of Writing Secure Code

If you are considering or have considered outsourcing your development, one of the first questions you will probably ask yourself is, "How will I secure our company's intellectual property (IP) from potentially malicious offsite developers in the United States or abroad?" I answer those questions in the following pages, using Microsoft's methods as an example. Even if you don't plan to outsource your development or have developers work offsite, unless you're writing freeware or running a software development charity, this subject is important to your software business. It's such a delicate subject that I was told there are some secret security projects at work at Microsoft; furthermore, if the person I spoke to talked about the projects, it would be considered a breach of security and would be grounds for termination. I guess part of being secure is being a little paranoid.

Thus far, the chapters in this book have focused on the build process without regard to where your development work is done. What we have really been talking about up to this point is having the ability to track source changes to the owner and holding the owner of the code accountable for his/her work. Now, we look at how to secure that process. Security is a broad subject, and I can't cover everything in this brief chapter. Instead, I focus on the following topics:

■ Securing your source code from software pirates
■ Supporting multiple-site developers and teams or outsourcing your development
■ Providing stability to your application
■ Improving your Software Configuration Management (SCM) process


Figure 9.1 shows the typical track that a piece of source code follows at Microsoft. If you configure your shipping process to what is outlined in this book, your source life cycle will look like this, too. The illustration looks more complicated than it really is. Reading from left to right, the source code can reside in four places: anywhere, source lab, build lab(s), and release lab. In some groups and companies, the three labs are in one location, but they don't necessarily have to be. What is important is adequate security for each group.

Figure 9.1 Source life cycle. (Overview of the source lifecycle at Microsoft: developer machines, remote or on the corporate network, check sources in and out of virtual build labs and source servers; changes are reverse integrated into the main build process; final binaries flow through release services to the binary and source distribution servers, archival, and PSS. The figure's columns are Anywhere, Source Lab, Build Labs, and Release Lab.)

The best way to provide adequate security is to take a multilevel approach:

■ Physical security—Doors, locks, cameras, and so on.
■ Tracking source changes—The build process.
■ Binary/release bits assurance—The tools process.
■ IT infrastructure—Company-wide policy and security.


Taking this four-pronged approach to security is what has evolved over the years at Microsoft. This might sound like a naïve statement, but in all the time I have spent working with developers and architects, I truly believe that their approach was (and still is, to some extent) that they were working in a trusting environment. I also believe that a lot of companies I work with start from this position. Unfortunately, enough "bad apples" out there have forced Microsoft to take all of these security measures to protect its code. It's never too early for a company to take the same approach.

Physical Security for the Build, Source, and Release Lab

Figure 9.2 NT Build Lab door. (The signs read "Windows NT Build Lab Entrance" and "I Love Security.")

Physical security should be the first line of defense in trying to protect your sources. Too many groups and companies take this for granted until they get burned by their lack of security.


Microsoft Sidenote: The Hidden Camera in the NT Lab

In the NT 3.5 timeframe, hardware mysteriously started to disappear from our build lab, and we couldn't figure out who was taking it. We reported this to Microsoft security, who then decided it was time to install a hidden camera in the lab. Security didn't bother telling anyone that it was planning to do this—not even our build manager.

One day we looked up at an air vent and saw a tiny red light and wondered what it was. Sure enough, it was a hidden camera (not very well hidden, of course). We pointed it out to our boss, the build manager, and he was livid. He opened one of the ceiling tiles and proceeded to rip out the camera. The security team claimed that it could not tell anyone what it was doing because it could have been any of us stealing the hardware. It turns out that it was a help desk repair contractor (a non-Microsoft employee) who was ripping off our lab.

Thus, the era of video cameras in our build lab began. If there were going to be cameras in the lab, we were going to be the ones who installed and monitored them. We would let corporate security monitor the outside perimeter of the building or who is accessing the lab, but we have enough trust in our colleagues to monitor our own lab.

In Chapter 4, "The Build Lab and Personnel," I outlined how to set up a build lab. Now, I'd like to expand that setup to source and release labs (if they're at different locations) and add the following elements:

■ Video cameras that are visible and hidden. Having cameras where people can see them is good because it deters possible thieves. It's also good to hide some cameras. As the previous sidenote implies, you don't know who you can really trust.
■ Limited card key access to the lab and machines, with the lab locked 24×7. Only build personnel need access to the lab all hours of the day and night. If you feel the need for other people to have access, limit it to regular hours such as 8 AM to 5 PM.
■ Central location for all machines. It's a good idea to keep all the build machines together, not spread around sites or labs.
■ Biometrics. This is a new technology that is proving to be safe and convenient, especially biometric fingerprint keyboards.
■ Not allowing laptops to check in or check out sources. At the very least, laptops should go through some kind of security scan before being allowed to log in to the network.


Physical security is more of a deterrent than a means of actually catching criminals, but it is important because it sends a message that your company values the work your development team does and is conscious about keeping it secure. It provides the illusion of being secure just like the wand searches at every U.S. airport.

Tracking Source Changes (All Check-Ins)—The Build Process

There's a misconception among software companies that every line of code they write must be protected like it's gold, but it's time for a reality check: Little source code is patentable or of real intellectual property (IP) value (see the "Richter on .NET Security" sidenote later in this chapter), so you should worry more about securing the source code control process and the build process than the individual source files.

Some companies go to extreme lengths to try to secure their source code via physical means, including locking developer machines in a secure development lab. These companies don't allow the developers to carry hardware or bags in or out of the lab. I don't know many developers who can work in that type of environment, though. Also, if you can't trust a developer not to steal your code, how can you trust him to write it?

There's a better way to secure your development:

■ Track all check-ins to a secured, locked-down, golden source tree.
■ Create triggers to ensure that check-in rules, such as buddy build and code review, have been followed. Reject check-ins that do not comply.
■ Schedule atomic check-ins from different groups.
■ Set up development sponsors at the central and offsite locations. These persons are ultimately responsible for the integrity of the code being checked in. This is probably the most critical step.
■ Automatically check the developer's user network logon against whatever headcount tracking tool you use. Verify the developer's identity with network logon and machine name.
■ Run security checks on check-ins and all sources that are released. You can do this with scripts or tools that search for any known holes in architecture or unsafe API calls.
■ Limit access to "new" source code. Some groups or companies allow offsite development teams to work only on hotfixes and service packs, not to develop new features or code.


These suggestions are representative of how we are able to track source code changes back to the original owner and how we are assured that only the changes necessary to fix the bug or add a feature are checked in. If you combine these steps with the VBL process in Chapter 2, “Source Tree Configuration for Multiple Sites and Parallel (Multi-Version) Development Work,” you will have an incredibly reliable and dependable system of maintaining your sources and their changes.

Binary/Release Bits Assurance

Unless you are working with open-source software, it is usually in everyone's best interest to be 100 percent sure that all the release bits (the binaries and files) in your product came from your development team. Therefore, you need some kind of mechanism to verify or enforce this condition. Several tools are available that can modify binaries after they have been released. A malicious person does not need access to the source code. He can just use such a tool, which is usually easily accessible via the Web, and "hack" the binary to perform some unwanted behavior.

The best way to prevent this from happening is to integrate software restriction policies. Software restriction policies are a new feature in Microsoft Windows XP and Windows Server 2003. This important feature provides administrators with a policy-driven mechanism for identifying software programs running on computers in a domain, and it controls the ability of those programs to execute. Software restriction policies can improve system integrity and manageability—which ultimately lowers the cost of owning a computer. As a result, no one will be able to copy a hacked or modified binary over the original product. You should also make sure that you have a good versioning scheme so that you can track the build number associated with a particular release. We talked about this in great detail in Chapter 8, "Versioning."
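A software restriction policy hash rule identifies a binary by a cryptographic hash of its contents. As a rough sketch of the same idea applied in-house (this is not the policy mechanism itself, and the program name is hypothetical), a release team could record a hash for every shipped binary and recompute it later to detect tampering:

using System;
using System.IO;
using System.Security.Cryptography;

class ReleaseHash
{
    // Compute a SHA-1 hash of a released binary. Comparing this value
    // against the one recorded at release time exposes any modification.
    static string HashFile(string path)
    {
        using (FileStream stream = File.OpenRead(path))
        {
            byte[] hash = SHA1.Create().ComputeHash(stream);
            return BitConverter.ToString(hash);
        }
    }

    static void Main(string[] args)
    {
        Console.WriteLine(HashFile(args[0]));
    }
}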

IT Infrastructure

Using your IT department's technologies to provide protection from unauthorized users is a great "free" line of defense for your source code or builds. This defense is free because it is maintained and provided by your IT department and should not come out of your group's budget. The following is a list of inherited security you can get from your IT department:

■ Create secured domains and user profiles—one-way trusts.
■ Use Microsoft Operations Manager (MOM) or similar technology to ensure that everyone has the latest security fixes and firewalls running on his machine.
■ Limit VPN connections to only company-approved hardware and software.
■ Limit VPN access to source trees to machines that are members of the proper domain.
■ Prohibit remote access to source servers to everyone except administrators.
■ Prohibit Web access or check-ins/check-outs to the source servers.
■ Turn on and use IPSec—Internet Protocol Security, the use of encryption at the network layer to protect and authenticate IP packets between IPSec devices.

If you work with your IT department, you should also be able to automate a lot of security measures, such as these:

■ The process of granting access to valid users via a Web tool and adding them to the appropriate security group.
■ The use of group policies to restrict users from running batch jobs using domain credentials, which can be a big security leak. Also, these policies can ensure that only valid users are allowed to access the source control servers.
■ Running anti-virus programs on your source control servers and protecting them from the outside Web via firewalls, proxy servers, and domain restrictions.
■ Randomly auditing developer desktops to make sure they are not a security hazard.

I mentioned in Chapter 4 that it is better if your IT department does not maintain your build machines. I still think this should be the case, but rely on your IT department to maintain and control your corporate network, including restricting how users log in remotely or onsite. This is the security area outlined in this section.


Want More Security?

If you feel that everything we discussed in this chapter is still not enough security for you, do a code reset of your product and rewrite it for the .NET Framework. In fact, when I'm asked to describe .NET in one word, my answer is security. .NET is secure for many reasons, including these:

■ Strong-named assemblies (DLLs or EXEs) provide certificates that guarantee their uniqueness and authenticity.
■ The garbage collector manages memory, so leaks are virtually impossible, which shuts down a common avenue for virus and worm attacks.
■ You have the ability to confine code to run in administrator-defined security contexts, which is another safeguard against worms and viruses.
■ Code access security revolves around the identity of source code, not user identity. This eliminates impersonation attacks.
■ The .NET security system functions atop traditional operating system security, adding another layer to the security that is already present.

Richter on .NET Security

Now, if you are familiar with .NET, you probably know that some people question its security because source code written for the .NET Framework can be disassembled easily. Jeffrey Richter, the .NET Framework book author and guru that I've already mentioned a couple of times, points out that most of your application probably isn't worth protecting. As he says, no one really cares how your Copy command works, and even if someone does, those details are probably not giving you a competitive advantage. For those few parts that do give you a competitive advantage, you have a few options:

■ Split those portions into an unmanaged DLL and use interop to call into it.
■ Use one of the many obfuscator tools to spindle, fold, and mutilate your code.
■ Wait for digital rights management, which Richter characterizes as "the real solution," to become real. Microsoft will be moving DRM into the .NET runtime at some point in the not-so-distant future.

I don't want this chapter to become a .NET advertisement or a .NET book. I just want to point out that this is another option that you might want to consider to provide security to your code.


Summary

The topic of build security seems to come up only when companies are starting offsite development or outsourcing their development to other countries. This is really not a good topic to treat as an "afterthought"; that's closing the gate after the horse has already run out of the barn. The sooner you can integrate security, the better off your company will be in the long run. Also, many public corporations and governments mandate a certain level of security. If this is the target audience you are after, I would start integrating these processes as soon as possible. The processes here are Microsoft's approach. Feel free to use or develop your own tactics.

Recommendations

With all the talk about security on the Internet and in the applications that are out there, we must not forget about keeping the company "jewels" safe. Here are some recommendations that were covered in this chapter:

■ At a minimum, use the four-layer approach talked about in detail in this chapter:
  ■ Physical security—Doors, locks, cameras, and so on.
  ■ Tracking source changes—The build process.
  ■ Binary/release bits assurance—The tools process.
  ■ IT infrastructure—Company-wide policy and security.
■ Consider the .NET platform as a means of security.
■ Look into the software restriction policies that are in Microsoft Windows XP and Windows Server 2003.
■ Start worrying about security before a breach occurs.


C H A P T E R

1 0

BUILDING MANAGED CODE Philosophy: One of my biggest challenges is to keep my intellectual arteries sufficiently pliable to adapt to and accept inevitable change. —Paul Harvey, radio legend, August 4, 2002 (age: 83 yrs) Once again, I start this chapter with some basic definitions. If you are familiar with the .NET Framework, you might want to skip this section or bear with me on this remedial review. If you’re familiar with other Web service technologies, you might find this enlightening, or you might opt to skip this chapter altogether. I believe that all builders should be familiar with the basic blocks of the .NET Framework that I talk about here. This understanding will help in the move from classic Win32 (native) builds to managed code.

The Official Definition of Managed Code The “official” definition of managed code from Partition 1 (Architecture) of the Tool Developers Guide in the .NET Framework SDK documentation is as follows: Managed code is simply code that provides enough information to allow the common language runtime (CLR) to provide a set of core services, including these: ■ ■ ■ ■

Given an address inside the code for a method, locate the metadata describing the method Walk the stack Handle exceptions Store and retrieve security information


Managed code requires the .NET Framework (.NET FX, Fx, or just "the Framework" is the shorthand notation) to be installed on a computer to execute or run. The .NET Framework consists of three major parts: the CLR, the Framework Class Library, and ASP.NET. You can install the .NET Framework on the platforms shown in Table 10.1.

Table 10.1 Platforms That the .NET Framework Can Be Installed On

Supports All of the .NET Framework:
■ Windows 2000 (all versions—no Service Packs required)
■ Windows XP Professional

Supports the Entire .NET Framework Except Microsoft ASP.NET:
■ Windows 98
■ Windows 98 SE
■ Windows Me
■ Windows NT 4.0 (all versions—Service Pack 6a required)
■ Windows XP Home Edition

Windows Server 2003 is the first operating system from Microsoft that shipped with the .NET Framework. All future operating systems from Microsoft will also include the .NET Framework, so you do not have to download or redistribute the parts that your code needs to run. You can install the .NET Framework on the existing platforms mentioned in Table 10.1 in various ways, but the easiest is to go to the Windows Update site (http://windowsupdate.microsoft.com) or just type windowsupdate in the address line of your browser. The Windows update site might prompt you to install required hotfixes or service packs before installing the framework. This is a good thing. Really. You can find a lot of information on .NET by searching on the Internet. I want to keep this chapter focused on the aspects of building the managed code rather than on providing details of how .NET works, but this brief overview is necessary so we can get to the points of building managed code.


What Is the CLR, and How Does It Relate to Managed Code?

As mentioned in the previous section, the .NET Framework provides a runtime environment called the CLR, usually referred to as just "the runtime." The CLR runs the code and provides services that make the development process easier. Compilers and tools expose the runtime's functionality and enable you to write code that benefits from this managed execution environment. Managed code is developed with a language compiler that targets the runtime; it benefits from features such as cross-language integration, cross-language exception handling, enhanced security, versioning and deployment support, a simplified model for component interaction, and debugging and profiling services.

To enable the runtime to provide services to managed code, language compilers must emit metadata that describes the types, members, and references in your code. Metadata is stored with the code; every loadable CLR portable executable (PE) file contains this metadata. The runtime uses the metadata to locate and load classes, lay out instances in memory, resolve method invocations, generate native code, enforce security, and set runtime context boundaries.

Managed data is a special memory heap that the CLR allocates and releases automatically through a process called garbage collection. Garbage collection is a mechanism that allows the computer to detect when an object can no longer be accessed. It then automatically releases the memory used by that object. From there, it calls a clean-up routine, called a "finalizer," which the user writes. Some garbage collectors, like the one used by .NET, compact memory and decrease your program's working set. I find the garbage collector in .NET to be the most impressive aspect of the platform.

Conversely, unmanaged code cannot use managed data; only managed code can access managed data. Unmanaged code does not enjoy the benefits afforded by the CLR: garbage collection, enhanced security, simplified deployment, rich debugging support, consistent error handling, language independence, and even the possibility of running on different platforms.


You can still create unmanaged code (which is the new name for the standard Win32 code you wrote before .NET [native]) with Visual Studio .NET by creating a Microsoft Foundation Class (MFC) or an Active Template Library (ATL) project in the latest version of Visual C++, which is included with Visual Studio .NET. In Chapter 9, “Build Security,” I discuss why you might still want to create unmanaged code. Furthermore, you might still have some legacy components that your .NET application needs to interop with. You can also create managed code with Visual C++ thanks to something called C++ with Managed Extensions. There is also talk that Microsoft will support Visual Basic 6.0 for some time to come. Because the .NET Framework represents such a fundamental shift from Win32/COM, the two platforms will likely coexist for a number of years.

Managed Execution Process

According to the ".NET Framework Developer's Guide," the managed execution process includes the following steps:

1. Choose a compiler—To obtain the benefits provided by the CLR, you must use one or more language compilers that target the runtime, such as Visual Basic, C#, Visual C++, or JScript.
2. Compile your code to Microsoft Intermediate Language (MSIL)—Compiling translates your source code into MSIL and generates the required metadata. This is the only part of the execution process that the build team really cares about.
3. Compile MSIL to native code—At execution time, a just-in-time (JIT) compiler translates the MSIL into native code. During this compilation, code must pass a verification process that examines the MSIL and metadata to find out whether the code can be determined to be type safe.
4. Execute your code—The CLR provides the infrastructure that enables execution to take place, besides a variety of services that can be used during execution.

Figure 10.1 depicts this .NET compilation process in graphical form.


Figure 10.1 .NET compilation process. (C# or VB.NET source code is compiled by the C# or VB.NET compiler into MSIL, which is then JIT compiled by the CLR, as needed, into CPU-specific native code or machine language.)

The .NET applications are developed with a high-level language, such as C# or VB.NET. The next step is to compile this code into MSIL. MSIL is a full-fledged, object-aware language, and it's possible (but unlikely—an analogy might be to write an application in an assembly language) to build applications using nothing but MSIL.

The JIT compiler (aka jitter) operates at the assembly level. JIT compilation takes into account the fact that some code might never be called during execution. Rather than using time and memory to convert all the MSIL in a portable executable (PE) file to native code, it converts the MSIL as needed during execution and stores the resulting native code so that it is accessible for subsequent calls. Sometimes people confuse JIT compiling with "building," but it is only the compile-to-MSIL step in Figure 10.1 that the build team really cares about.

This JIT compiling is what makes .NET rather unique and sometimes confusing when compared to unmanaged code builds. In the old world of building, you would just compile and link everything into an executable binary and then ship the binary or binaries. In the .NET or Web services world, you ship "assemblies" that need to be JIT compiled or "assembled" by the .NET Framework. Note that the compilers for the .NET languages are included free with the .NET Framework. In addition, the C++ compiler is now free. Also notice that there is no concept of "linking" in .NET. Instead, code gets linked dynamically in the "runtime" platform that .NET provides.
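As a quick sketch of the one step the build team owns, assuming the .NET Framework SDK tools are on the path and MyComponent.cs is a hypothetical source file, the first command compiles the source to an MSIL assembly and the second dumps that MSIL for inspection (the JIT step happens later, at execution time):

csc /target:library MyComponent.cs
ildasm MyComponent.dll /text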


The Definition of Assemblies As It Pertains to the .NET Framework

Assemblies are the building blocks of .NET Framework applications; they form the fundamental unit of deployment, version control, reuse, activation scoping, and security permissions. An assembly is a collection of types and resources that are built to work together and form a logical unit of functionality. An assembly provides the CLR with the information it needs to be aware of type implementations. To the runtime, a type does not exist outside the context of an assembly.

The simplest way to look at an assembly is that it is either a .NET (managed) DLL or an EXE. Sometimes, it can be a file that contains a group of DLLs, but that's rare. Now that we have discussed some basic building blocks of the .NET Framework, let's move on to discuss some things you need to do when building managed code.

Delay Signing and When to Use It

Shawn Farkas, a tester on the CLR team, has a lot of good points about delay signing, which I have gathered in this section. Most people who develop .NET applications know about the delay signing feature of the CLR. (If you don't, check out MSDN's "Delay Signing an Assembly" for more details.) Basically, delay signing allows a developer to add the public key token to an assembly without having access to the private key. Because the public key token is part of an assembly's strong name, assemblies under development can carry the same identity that they will have when they are signed; however, every developer doesn't have to have access to the private keys.

For instance, to get an assembly signed at Microsoft, we have to submit it to a special signing group. These are the only people who have access to the full Microsoft key pair. Obviously, we don't want to go through this process for every daily build of the framework, let alone for each developer's private builds. (Imagine the debugging process if you had to wait for a central key group to sign each build you created.) Instead of going through all this overhead, we delay sign our assemblies until we get ready to make a release to the public, at which point we go through the formal signing process. You'll learn more about this topic in Chapter 15, "Customer Service and Support."
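As a minimal sketch of what this looks like in C# (the key file path is hypothetical, and the public half of the key pair is assumed to have been extracted with sn -p keypair.snk public.snk):

using System.Reflection;

// Only the public key is available to the daily build. The PE file
// reserves space for the signature, which stays zeroed until the
// signing group runs the real signing step (sn -R MyAssembly.dll keypair.snk).
[assembly: AssemblyKeyFile(@"..\keys\public.snk")]
[assembly: AssemblyDelaySign(true)]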

A delay-signed assembly contains only the public key token of the signing key, not an actual signature. (That's because the person producing the delay-signed assembly most likely doesn't have access to the private key that's necessary to create a signature.) Inside the PE file that was produced, a delay-signed assembly has space reserved for a future signature, but that signature is just a block of zeros until the real signature is computed. Because this block is not likely to be the actual signature value of the assembly, these assemblies will fail to verify upon loading because their signatures are incorrect. Obviously, it wouldn't be useful if a delay-signed assembly were completely unable to load. To work around this problem, you need to use the Strong Name tool (sn.exe) included in the .NET Fx tools to add assemblies to the skip verification list. The specific command line is as follows:

sn -Vr assembly [userlist]

Assembly is the name of the assembly to skip. In addition to referring to a specific assembly, Assembly can be specified in the form *,publicKeyToken to skip verification for all assemblies with a given public key token. Userlist is a comma-separated list of users for which verification is skipped. If this part is left out, verification is skipped for all users.

CAUTION: Use this option only during development. Adding an assembly to the skip verification list creates a security vulnerability. A malicious assembly could use the fully specified assembly name (assembly name, version, culture, and public key token) of the assembly added to the skip verification list to fake its identity. This would allow the malicious assembly to skip verification, too.

The Problem with This Command

What this command does is tell the runtime not to verify the signature on an assembly that has the given public key token (if you use the *,publicKeyToken format), or just on a specific assembly. This is a gigantic security hole. You can easily read public key tokens from any assembly that you have access to. If you run ILDasm on System.dll, inside the manifest you find the following line:

.publickey = (00 00 00 00 00 00 00 00 04 00 00 00 00 00 00 00 )


This corresponds to the public key assigned to any assembly that is standardized by ECMA/ISO. You can easily compute the token from this value, but an easier way to get it would be to look at ILDasm on any assembly that references mscorlib. For instance, looking at the manifest of System.Xml.dll under ILDasm shows the following lines:

.assembly extern System
{
  .publickeytoken = (B7 7A 5C 56 19 34 E0 89 )  // .z\V.4..
  .ver 1:0:5000:0
}

This code shows the ECMA/ISO public key token. It’s easy for a malicious developer to write an assembly named System.dll, with an assembly version of 1.0.5000.00, and put the public key just extracted from System.dll into the assembly. He won’t be able to compute a valid signature because he doesn’t have access to the ECMA/ISO private key pair, but that hardly matters because you’ve turned off strong name verification for this particular public key token. All he has to do is install this assembly in place of System.dll in your GAC, and he now owns your machine. For this reason, don’t skip verification for assemblies unless you are developing them yourself, and be extra careful about what code is downloaded onto your machine that might claim to be from your organization.

Protecting Yourself from the Security Hole You Just Created

Even if you take these precautions inside your company, how can you be sure that someone external to your company cannot somehow disable checking the strong name on your assemblies and swap your assembly with an evil counterpart? The short answer to this is that you can't. The skip verification list is stored in the registry under HKLM\Software\Microsoft\StrongName\Verification\, which is protected by an Access Control List (ACL) that contains both users and groups and the level of access that each has. Anyone can read an ACL, but only administrators can write to it. If a malicious developer manages to write a public key token into your user's skip verification list, one of two things has happened:

■ Someone has modified the ACL, allowing more write access to this key than usual.
■ The malicious developer is already an administrator on the machine.

If the first bullet is true, revert the ACL to allow only administrators to write to the key, thus closing the hole. If the second bullet is true, the malicious developer already owns your machine. As an admin, this malicious developer could conceivably replace the CLR with a hacked version that doesn't verify assembly signatures, or perhaps doesn't implement CAS. If you've gotten into this second situation, "game over, thanks for playing." The malicious person will already have control of your box and can do as he wants.

In summary, delay-signed assemblies increase security in development shops by reducing the number of people who need access to an organization's private keys. However, the requirement that delay-signed assemblies need to be registered in the skip verification list means that developers' machines are open to various forms of attack. Make sure that your developers are aware of the situation and don't overuse your skip verification list, to help make your machines more secure in these environments. Again, this is something that gets driven out of a CBT.

One Solution or Many Solution Files?

In VS 2002/2003, C# and VB don't have any notion of being up-to-date or doing incremental builds. Because the time-stamp of C#/VB output assemblies always changes when you build, any projects that depend on them will always be out-of-date with respect to those assemblies and will need to be rebuilt. This story is somewhat better in VS 2005, but there still is no notion of an "incremental build" against an assembly dependency. So if the assembly has changed at all, no matter how minor the change, all of the dependent projects will have to be rebuilt. This can be a big performance hit on your build times.

How does Microsoft get around this? One answer is faster, bigger hardware, but a more practical one is the concept of the Central Build Team doing these large, time-consuming builds—to make sure everything works and plays well. The developers would then use file references in their own private solution files to decrease build times. If the code changes in the referenced files, the IDE will not automatically rebuild it, so they might not get all the current changes. This can be problematic at best.


Here are some different ideas to get around this problem:

■ Use a single solution file that contains all your .NET projects for a daily build, and keep that solution file checked into source control that the CBT owns. The Central Build Team members are the only ones allowed to make changes to the "golden" solution, and the developers put in a request to the CBT if they need to add or remove projects.
■ Each developer has a "private" solution file on his machine (which I'm willing to bet he has already) that he does not check in to source control. This allows him to have faster build times and use file references instead of project references, thus avoiding long rebuild times.
■ Another option is to break up the big single solution file and have each component team check in a "master" solution file that the CBT owns. This would take more time to set up but is probably the best way to build .NET projects.

Summary

This chapter started with a crash course in .NET, pointing out the parts of the Framework most relevant to the build team. It probably seems like we took an awfully long road to get to the two main points of this chapter: delayed signing tips and how many solution files you should be using to build. In fact, the only other major component of the .NET Framework that we did not talk about is the Global Assembly Cache (GAC).


Recommendations

You will find that building projects for the .NET Framework is a bit different than the classic "unmanaged code builds" that were around before Web services were ever dreamed up. I went over the parts of building .NET code that tend to trip people up in this chapter; the following is a quick list of recommendations:

■ If you build managed code, learn the basic terms of the .NET Framework and the compilation process explained in this chapter.
■ Use delayed signing when developing your project to avoid having to sign the assemblies in conjunction with your daily build.
■ Understand the risk of exposing your developers' machines to external attacks because of the skip verification list that is created when delay signing.
■ Decide what is the most practical way of setting up your solution files for your .NET projects. Then enforce whatever policy you come up with through your CBT.


C H A P T E R

1 1

INTERNATIONAL BUILDS

Philosophy: Change before you have to.
—Jack Welch, former Chairman and CEO of General Electric

Most people would agree that it is more efficient to plan ahead and build something once rather than having to rip it apart later and retrofit it. Every line of core code that needs re-engineering (or "remedial engineering") later on is a wasted expense, a potential bug, and a lost opportunity. If you ask most developers, the majority of them would probably agree that inheriting someone else's code is never fun. It is rather painful at best. It is usually easier to develop the code from scratch than to figure out how someone else solved a coding problem and which routines they chose to do it with. After all, we all know that software development is an "art form," and each developer has his own style, albeit some being more "beautiful" than others.

Along these lines, building code for different languages tends to be an afterthought and abstract artwork. It is usually done by "localization engineers" and not necessarily by the developers who wrote the code. The U.S. market tends to be the primary one for most software companies. But don't get me wrong here. I realize that more than 50 percent of Microsoft's annual revenue is from international sales and has been for almost as long as the company has been around. This is my point: If your product is successful, there will be international demand for it. So plan carefully and as early as you can for "internationalizing" your product.

When localizing, there is a lot more to consider than just translating your text strings into another language. I have gathered some ideas and concepts that you should keep in mind when building your product for international releases. This chapter is by no means meant to be a comprehensive look at how to write internationally compliant code. For that type of detail, I will refer you to International Software, by Dr. International (Microsoft Press, 2002). What I cover in this chapter are the basics of building the international releases of your product for the Windows platform—more specifically, Windows 2000 and later releases. The reason for focusing on Windows 2000 and later is that this was the first operating system to ship a Multilingual User Interface (MUI) add-on, which significantly changed the way code was localized at Microsoft. All operating systems post-Windows 2000 (XP and 2003) also have this add-on feature.

As with the previous chapters, let's start with some definitions and concepts and then end with recommendations of how we do it at Microsoft.

Important Concepts and Definitions

The following concepts are key to understanding how international support works in Windows. If your application runs on the Windows platform, you need to know this information. If you use an operating system other than Windows, you should learn how that system works with multilanguage applications:

■ Locale or User Locale—A set of user-preference information related to the user's language and sublanguage. For example, if the language were French, the sublanguage could be standard French, Belgian, Canadian, Swiss, or Luxembourgian. Locale information includes currency symbol; date, time, and number formatting information; localized days of the week and months of the year; the standard abbreviation for the name of the country; and character encoding information. Each Windows 2000 or later system has one default system locale and one user locale per user. The user locale can be different from the default system locale, and both can be changed via the Control Panel and do not require a reboot or logoff/logon. Applications can specify a locale on a per-thread basis when calling APIs.
■ Localization—The translation of strings into a format that is applicable to the locale and language of the user and to the input and output of data (such as currency, date, and time) in such a format.
■ Globalization—The ability of software components to support multilingual data simultaneously.
■ Master Language—The language of the core operating system components in the particular installation of Windows 2000/XP or later. Some parts of the operating system always remain in this language. You cannot remove the master language.
■ User Interface (UI) Language—The language in which the operating system displays its menus, help files, and dialog boxes.
■ Multilingual User Interface (MUI)—A set of language-specific resource files that can be added to the English version of Windows, first introduced in Windows 2000. Users can change the UI language according to their individual preferences. Users can then select the UI language or set it by using Group Policy for Organizational Units. This also allows users of different languages to share the same workstation. For example, one user might choose to see system menus, dialog boxes, and other text in Japanese, whereas another user logging onto the same system could see the corresponding text in French.
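To make the locale definition concrete, here is a short .NET sketch showing how the same price and date render differently per locale (the three cultures are arbitrary examples):

using System;
using System.Globalization;

class LocaleDemo
{
    static void Main()
    {
        DateTime now = DateTime.Now;
        double price = 1234.56;

        // The same data, formatted per locale: currency symbol, decimal
        // separator, and date order all follow the culture.
        foreach (string name in new string[] { "en-US", "fr-FR", "ja-JP" })
        {
            CultureInfo culture = new CultureInfo(name);
            Console.WriteLine("{0}: {1} / {2}", name,
                price.ToString("C", culture), now.ToString("d", culture));
        }
    }
}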

Microsoft Sidenote: Klingon Example Back in the days of Windows NT 3.1, one of the localizers had just got back from a Star Trek convention and purchased a book on the Klingon language. If you are familiar with Star Trek, you know that this is a “real” culture and language that is represented on the show and in the movies. So this localizer thought that if NT was localized for Klingon and worked, it could be localized for any language and work. Well, he did it. Somewhere in the deep archives at Microsoft, there is a Klingon copy of Windows NT 3.1. I saw this guy boot the system once, and the splash screen even had the Klingon empire emblem on it.


Can you provide the MUI functionality in your product? You should definitely think about providing similar functionality, but don't rely on the technology in the Windows 2000/XP MultiLanguage Version to switch the user interface language for you. Only a small percentage of all Windows 2000/XP installations will be MUI-based. If you rely on the MUI technology, you will prevent customers without MUI from switching the language of the user interface. Furthermore, Windows 2000/XP is released in 24 localized versions, including English. Other products might have a different language matrix and offer more or fewer localized versions. If you want to enable your product to switch the user interface language, you should consider using satellite resource DLLs, as sketched below. Again, I refer you to the International Software book for details.
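As a minimal sketch of the satellite-DLL approach on .NET, assuming a hypothetical resource base name of "MyApp.Strings" and a string resource named "Greeting", the runtime probes for a satellite assembly matching the user's UI culture and falls back to the main assembly's resources:

using System;
using System.Globalization;
using System.Reflection;
using System.Resources;

class Greeter
{
    static void Main()
    {
        ResourceManager resources = new ResourceManager(
            "MyApp.Strings", Assembly.GetExecutingAssembly());

        // Lookup follows the current UI culture automatically; per-language
        // satellite DLLs can be added without rebuilding the main assembly.
        string greeting = resources.GetString("Greeting",
            CultureInfo.CurrentUICulture);

        Console.WriteLine(greeting);
    }
}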


Method 1: Internationally Ignorant Code

Internationally ignorant code is not really an intentional method, but it seems to happen a lot, so it is worth mentioning. I usually see this with small companies or groups. It happens for the following reasons:

■ The product is experimental. You can minimize risk and exposure by targeting just one locale. If the product succeeds, you'll need to completely rewrite it anyway. This reason is rare.
■ There's a lack of planning, or a focus on goals taken too literally at the expense of the product's international usability. This is often caused by fear of the unknown, assuming international engineering is so complicated that it must be offloaded as someone else's problem, or confusing international enabling with UI translation.

Advantages of internationally ignorant code include the following:

■ Release to the initial market is faster.
■ Binaries are smaller and tighter (although no smaller than through #ifdef).
■ Text entry/display is possible with other languages that happen to have the same code page.

The disadvantages of internationally ignorant code are as follows:

■ A fundamental rewrite of source is required to take the product to other locales.
■ Release in the initial market serves notice to competitors about what to target elsewhere.
■ Text sorting and numeric formatting are broken for other locales. Users do buy products through mail order or while traveling.
■ Users might be sold on the product concept and look for competing products that correctly handle their language. Or users might get frustrated, give up, and be disinterested in the next version.

This is probably the worst type of “internationalizing.” Avoid it.


Method 2: Locale-Dependent Source

In this case, the product is broken into separate sources for each locale (such as French Canadian versus French) or major sets of locales (such as Euro languages versus Far East). At some point, a code fork is made, whereby a separate team owns the source dedicated to a locale or a set of locales, with different deadlines and requirements.

Advantages of locale-dependent source are these:

■ End users get the full benefit of correct locale handling for their geographic location or region, including language-content enabling, sorting, and numeric formatting.
■ A locale or set of locales might request special or additional functionality, which separate code can accomplish. (OLE can also do this.)
■ The core product can ship slightly earlier by offloading locale-dependent deadlines to a separate team and code base.

The numerous disadvantages include the following:

■ Users are limited to one locale or language series, which you can't change without restarting or reinstalling the product.
■ Separate source is owned by a different internal team down the hall or across the world, and the result is propagation of source and headcount. Or an external localizer or partner is given responsibility for engineering, which is expensive and a potential security risk.
■ Locale-dependent source defeats the purpose of no-compile localization tools.
■ An increase in ship deltas/delays for other locales from a code fork greatly exceeds the gain in ship date for the "core" release. This has been proven through many past product releases.
■ Bugs reported against one set of sources miss the other, or testing has to be completely duplicated. Ship delta differences might mean that bugs found in one code base are too late to fix in the other.
■ Synergy is lost between code bases. They might as well be different companies.
■ Data interoperability among languages probably has not been thought through. File format interchange is probably not considered between major language differences (Japanese, Greek) or maybe not even among mathematically similar languages (French, English).

This method is not really considered viable because it results in a source tree management nightmare.

Method 3: Single Worldwide Source

Single worldwide source products have no code forks. One product group develops a single source for all anticipated targets.

Advantages of single worldwide source are as follows:

■ The binary size can be optimized for major language regions without resorting to completely separate teams and separate source. Code that is not required by all users is conditionally compiled out based on geographic target.
■ You have less duplication of headcount by having one group accountable for a worldwide-enabled product.
■ The single-source model works well with some localization tools. The assumption is that code should not need to be retouched for a UI language.
■ Testing for functionality that is unaffected by #ifdef branches can be combined into one team. Those bugs can be reported and fixed once.
■ Ship deltas are reduced to the time lag for actual translation instead of engineering delay.

Single worldwide source has disadvantages, also:

■ Maintenance of #ifdefs can get messy, especially with complex or text-intensive products.
■ You need to test separately for the consequence of each #ifdef. You could manage this as one team, but often this becomes a reason to continue maintaining separate testing staff. Anytime international testing is separated, bugs run a risk of missing core code.
■ Resulting product SKUs are still geographically dependent, just as with the other approaches listed earlier. You still need headcount to manage separate product delivery for each target locale. The original engineering team might never see or care about the resulting international product.
■ File format interoperability still is probably not thought through. The international "split" has been moved out into or beyond the compile step, but it is still a split.
■ Explosive growth in the Internet accelerates users' expectation of language interoperability.

Although single worldwide source is better than the first two methods, it can be troublesome because your code is difficult to manage.

Method 4: Single Worldwide Binary

In this situation, one binary or set of binaries meets the requirements of all users, regardless of language or geographic location. One team develops, compiles, and tests a single source, with no conditional compiles. The text language either is bound into the one executable or is user-callable from separate language DLLs at runtime using LoadLibraryEx() (a short sketch of this technique follows the disadvantages list). Code that is required by all users in some regions is either built into the main EXE or is installable from DLLs at the user's option to conserve memory or disk space.

Single worldwide binary offers the following advantages:

■ The single-binary model works great with most localization tools. Although the tool was designed initially for no-compile localization, resulting in separate EXEs, you can also use it to edit language resource DLLs, which can be called from a common worldwide EXE.
■ Testing is easier to unify into one team. Bugs are tracked and fixed once, including those resulting from UI language changes.
■ Data interoperability between languages becomes obvious to the development and test team, as it already is to many users.
■ Single-tree source maintenance is easier. Also, there are no conditional compiles for international and no code propagation.
■ A single worldwide binary fits well with the movement to electronic product distribution, which moves the world to a geographically independent model. This model offers the best flexibility for users, which is a tangible benefit you can use as a competitive advantage. Consider what this means to multinational corporations and MIS, certifying and installing one product worldwide with a UI language based on a user's install choice or login.
■ Product delivery has the option of managing SKUs separately by locale, or as a single project all the way to shrink-wrap product delivery worldwide.
■ The original development group can easily check consequences of translated strings.
■ The language DLL approach is extensible. (This applies to applications and other high-level products that can rely on GDI.) You can add additional language DLLs later, provided that you develop and test a core set initially to check consistency. This reduces some pressure on ship deltas for second-tier languages.

Disadvantages of single worldwide binary are as follows:

■ The binary size is larger, depending on the method (all languages are stuffed into the EXE or callable DLLs) and the type of code, such as low-level code with rudimentary messages versus UI-intensive application code.
■ File formats are larger. The answer to the perpetual question, "Won't Unicode double everything?" is no: even rudimentary compression can pack out the null characters from Unicode data, and raw Unicode data is not much bulkier than raw DBCS data. But there is measurable growth in file formats.
■ Advanced planning and careful management are required. There's no room for afterthoughts or special requirements based on locale. Either it all fits the original specs, or you'll have to wait until the next revision.
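Here is a minimal sketch of the language DLL technique mentioned earlier. The DLL name and string ID are hypothetical; LoadLibraryEx with LOAD_LIBRARY_AS_DATAFILE and LoadStringW are the standard Win32 calls for pulling resources out of a module without executing any of its code:

#include <windows.h>

#define IDS_GREETING 100  /* hypothetical string ID shared by all language DLLs */

/* Show a localized greeting from a resource-only satellite DLL.
   "resfr.dll" is an illustrative name for the French resource DLL. */
void ShowLocalizedGreeting(void)
{
    WCHAR szGreeting[256];
    HMODULE hRes = LoadLibraryExW(L"resfr.dll", NULL, LOAD_LIBRARY_AS_DATAFILE);

    if (hRes == NULL)
        return;  /* real code would fall back to the built-in language */

    if (LoadStringW(hRes, IDS_GREETING, szGreeting, 256) > 0)
        MessageBoxW(NULL, szGreeting, L"Sample", MB_OK);

    FreeLibrary(hRes);
}

Because the worldwide EXE picks the DLL name at runtime, adding a language later is a matter of shipping one more resource DLL, not rebuilding the product.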

It's safe to say that method 1 is least efficient and method 4 is most efficient. Many products outside (and some inside) of Microsoft fall between methods 2 and 3. Method 4 is the preferred method. The more advanced teams, such as Windows and Office, use it.

Microsoft Sidenote: Localized Builds for Windows XP

Most Microsoft products are a complex mix of new and old code, shared components, and sometimes parts licensed from other companies. Some products might use a combination of all four methods. The Windows team tends to follow the last method throughout its code and, when integrating new components, requires that the guidelines discussed next are followed.

English and localized versions of Windows XP are generated from the same source code, except for a few operating system loader and setup binaries that run before any globalization features of the operating system have been loaded. The localized versions of the binaries are transformed during a process applied to the release build of the English version. The build lab compiles the source code with the English resources and creates the English Windows build. After verification, the build is released to manufacturing and to localization teams. The localization teams extract the resource portions from the binaries, translate the tokens, and test them before returning them to the build lab. There they replace the English resource files. The build lab creates localized INF files, which contain install sections and a strings section. Install sections contain the logic for installing software and use strings that are defined in the strings section. The strings section contains localizable strings that can be displayed to the user during setup. With the exception of a few setup INF files, the build lab just attaches a localized strings section to a template INF file that contains only install sections. After the build lab has these files, it compiles the specific binaries for the localized version, thus producing a localized version. This represents great progress from the early days of Windows, when many core binaries had to be recompiled to generate localized versions. The fact that only resource sections and INF files change now means that the executable portion of Windows XP is world-ready.

USE Unicode

An excellent whitepaper at www.microsoft.com/globaldev/handson/dev/muiapp.mspx called "Writing Win32 Multilingual User Interface Applications" describes the whole localization process and why using Unicode is so important:

No matter which approach you use to ship a localized product, your application must be fully Unicode enabled. Unicode is a 16-bit character encoding capable of representing most of the languages in common use throughout the world (a far cry from the old 8-bit character encodings such as ANSI, which restrict language support to 256 different characters). A Unicode-enabled application can process and display characters in any of these languages.

Implementing Unicode support frees you from having to worry about the language and codepage support of the platform that your application runs on. For more information about implementing Unicode and shipping a single binary that will run on both Windows 9x and Windows NT, see the article on developing applications using "Microsoft Layer for Unicode" from www.microsoft.com/globaldev/handson/dev/mslu_announce.mspx.

Summary

The success of your product in the international market is completely dependent on how well your localization process works. Choosing how you will approach this at the beginning of your product development is critical, since the way your developers write the code depends on the localization technique you choose. What I did not talk about in this chapter is the specific localization tool used at Microsoft (called LocStudio); it is not publicly available, which is why I did not want to spend any time on it. Several tools available on the public market can help you convert your application or translate it into a different language. What I feel is more important than the execution of the tool is deciding how your localization will be architected. That is what is presented in this chapter and is what will affect a build team the most.

Recommendations

Here is what is done at most of the bigger groups at Microsoft and is thus our preferred way of internationalizing our products:

■ Write single binary code as the first step toward truly world-ready products.
■ Implement a multilingual user interface that allows users to switch between all supported languages as the next logical step.
■ Write Unicode-aware code and create satellite DLLs for your language resources to make these goals much easier to achieve.

C H A P T E R  1 2

BUILD VERIFICATION TESTS AND SMOKE TESTS

Philosophy: Eating our own dog food.
—Paul Maritz, former vice president of systems at Microsoft

What does this dog food philosophy mean? As explained in the earlier chapters, we at Microsoft test our own software on our developer, IT, and—in some cases—home systems before we ship a product out the door. In other words, we eat our own dog food. Why is this relevant? The following sidenote illustrates why.

Microsoft Sidenote: How BVTs Evolved

In the early days of NT, whenever a build was released, testers would immediately pick up the build and start pounding on it with their standard and extensive suite of tests. Because NT is an operating system and does a lot of kernel-level calls, it always pushes the technology envelope. Because we "ate our own dog food," occasionally a build that had a violent bug in it would take down several servers in our IT department. If the servers that went down were e-mail or database servers, this bug could be very troublesome and costly to the company. To keep this from happening again, we required that a certain quality level be achieved before we released the software for "dog food testing." Build verification tests (BVTs) were one way of achieving this. BVTs were also helpful in determining the quality of the build, because if the build could not pass the first level of testing—the BVTs—you knew that the quality of the released bits was low.

It's possible to work on a Web application or a small standalone application that doesn't take your whole company down if there's a critical bug, but don't let that stop you from implementing BVTs and smoke tests. Every software application needs them, and they should always be the first round of tests that are run on your built binaries.


Some groups have teams dedicated to running and reporting the results of BVTs, whereas others assign this task to the build team. This chapter covers how to set up BVTs and smoke tests and what the difference is between the two. One of the questions I am often asked is, "What should we use for BVTs or smoke tests?" The correct answer? It depends, and I am really not the person to answer that question. The correct people to ask are your testing personnel, because they know which critical pieces of the product need the most testing. The way that we gathered BVTs to run in the build lab was to ask the testers to give us a certain number of tests that ran for a fixed amount of time, say three hours. We did this for every component of the product. Every company should run some standard tests that not only try out the application, but also try out the integrity of the build. I mention those tests later. I also mention a good "Testing Guide" (complete guide in Appendix C) compiled by Josh Ledgard, a program manager in the developer division at Microsoft. It is a great starting point for building a test plan.

I would like to define two terms that are used loosely to label a build once tests have been run on it. Feel free to customize the definitions to what you and your development/test team would like:

■ Self-Test Builds—This is a build that passes all unit tests and BVTs. The binaries are not optimized and have minimal testing done on them. These builds are usually picked up by developers or testers that are awaiting a build release so they can verify their last check-ins.
■ Self-Host Builds—This build is also known as a "dog food build." By self-host, we mean that you not only install this build on any machine you have, but you also use it on a day-to-day basis as if it were a released product. A self-host build requires a certain acceptance level of tests passed, which include unit, BVT, regression, and stress. These builds are of a higher quality than the Self-Test builds; there is even a tracking field in our bug database that states whether a bug needs to be fixed so the build can be marked Self-Host (e.g., Fixbyrel = Self Host).


Another term I would like to define is unit test. A unit test is a test that checks the functionality of a particular code module or component. Usually, unit tests make good BVTs or smoke tests. Let's look further into smoke tests and BVTs (BVTs are really a subset of smoke tests).

Smoke Test

The term smoke test originated in the computer hardware industry. The term derived from the practice of powering up the equipment after a piece of hardware or a hardware component was changed or repaired. If the component didn't smoke, it passed the test. After all, chips run on smoke, right? If you let the smoke out, the chips don't work anymore. (That was a joke. Get it?)

In software, the term smoke test describes the process of validating code changes before checking them into the source tree. Smoke tests are focused tests that ensure that changes in the code function as expected and don't destabilize an entire build. If your smoke tests are good, they keep your source trees from catching on fire. (Another joke. Sorry!)

NOTE: Sometimes, smoke tests are the same as unit tests, but that's another chapter in another book. I don't want to stray off the subject of which tests the build team is concerned about. For more info on testing in general, refer to James A. Whittaker's book How to Break Software: A Practical Guide to Testing.


It's important to state up front that the build team does not own the maintenance of smoke tests or BVTs, but it does own the execution and reporting of the tests and results. Ownership should fall under the development/test/QA group. I will provide suggestions on how these tests are written and designed because I always get asked this question by customers and have gathered some good tips over the years. Since builders are the priests responsible for administering the sacraments and policies of the build religion, it is appropriate for them to keep these guidelines in a place where everyone can easily find them—the build intranet page.


The following are some steps you should follow when designing and writing smoke tests:

1. Collaborate with the developer—Collaborate with the developer to understand the following:
   ■ What has changed in the code?
   ■ How does the change affect the functionality?
   ■ How does the change affect the interdependencies of various components?
2. Conduct a code review prior to smoke testing—Prior to conducting a smoke test, conduct a code review that focuses on any changes in the code. Code reviews are the most effective and efficient method to validate code quality and prevent code defects and faults of commission. Smoke tests ensure that the primary critical or weak area identified either by code review or risk assessment is validated first, because the testing cannot continue if it fails.
3. Install private binaries on a clean debug build—Because a smoke test must focus on validating only the functional changes in updated binaries, run the test on a clean test environment by using the debug binaries for the files being tested.
4. Don't use mismatched binaries—Testing with mismatched binaries is a common mistake in smoke testing. To avoid this mistake, when there is a dependency between two or more updated binaries, include all the updated binaries in the test build. Otherwise, the results of the test could be invalid.
5. Don't perform exhaustive tests—The purpose of smoke testing is not to ensure that the binary is 100 percent error free; that would require too much time. Perform smoke tests to validate the build at a high level. You want to ensure that changes in a binary don't destabilize the general build or cause catastrophic errors in functionality.

Running your tests under a debugger is important for debugging hard-to-reproduce issues. You should set up the debugger and make sure that you have the proper symbols before starting the smoke test (see Appendix D, "Debug Symbols," for more information on symbols). A minimal sketch of this setup follows.
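As a rough illustration, here is one way to point the debugger at the Microsoft public symbol server before launching the binary under test. The local cache directory, the private symbol share, and the application name are hypothetical; the _NT_SYMBOL_PATH syntax and the symbol server URL are the documented ones:

rem Cache symbols locally and pull misses from the public Microsoft symbol server.
set _NT_SYMBOL_PATH=srv*c:\symbols*http://msdl.microsoft.com/download/symbols

rem Launch the private binary under the debugger; -y prepends the developer's
rem private symbols (hypothetical share) to the symbol search path.
windbg -y "\\devbox\symbols;%_NT_SYMBOL_PATH%" myapp.exe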


Ideally, you would run your smoke test on debug binaries. If this is a private build from a developer's machine, make sure that the developer gives you symbols along with the binaries. This is good practice if the test is run before he checks his code in. A common pitfall of smoke testing is mismatched binaries. This occurs when a dependency exists between two or more binaries and you are not given updated versions of all the binaries. Often, you get registration errors, or the application completely fails to load. If you suspect you have mismatched binaries, ask the developer to build you a complete set of binaries. Make sure that you enter any issues you caught and didn't fix during the smoke test process into your bug tracking database so that these issues do not fall through the cracks. Create a record of all smoke tests. Testers must sign off before closing the smoke test record. A good practice is to enter a summary of the testing done, both as a record and so that the developer understands exactly what you did and did not test.

Smoke tests are sometimes incorrectly referred to as BVTs. That's not to say that you can't have the same test run as both a smoke test and a BVT, but it is rare that all of your BVTs are the same as the smoke tests in your group. Let's go on to what BVTs are and explore some tips on how to set them up.

Build Verification Tests

BVTs are automated suites of tests designed to validate the integrity of each new build and the basic functionality of the build before it is released for more in-depth testing. A BVT is not a comprehensive test; it is a type of test that is designed to do the following:

■ Validate the integrity and testability of each new build.
■ Ensure basic functionality for continued in-depth testing.
■ Test critical functionality and the highest priority use cases or user scenarios.

BVTs must be able to determine if the general quality of the build is sufficient for self-hosting, or if the build is unstable and should be used only in testing environments. Unit tests are sometimes used for BVTs, but only if they are critical to the execution of the program.

Define the Scope or Span of the BVTs

It is important to set limits on what is tested and how long BVTs should run. This limit is determined by the amount of time you have between the build completing and when the reports of the BVTs are due. As a general rule (and what we have found to be a good guideline), three hours should be sufficient for testing. In order to determine what tests need to be included in a BVT suite, you need to determine the high-risk areas of your product. Consider the following factors when assessing priority and high-risk areas:

■ Probability of failure—How likely is failure?
■ Severity of failure—What are the consequences of failure?
■ Visibility of failure—Will the user see a failure?

Now that you have identified the high-risk areas and some tests that should be included in a BVT suite, it is time to start setting up the boundaries of the BVT suite. When setting up BVTs, the following recommendations should be followed:

■ Establish a quality bar—BVTs validate the quality of the build. Their purpose is to quickly test and assess blocking issues that would make the build unusable before further testing continues. BVTs should determine the quality and stability of the build for self-hosting by everyone on the team, or whether the build should be limited only to test environments.
■ Define a process for immediate problem resolution—Consider bugs that are identified in the BVTs as top priority. In the case of a BVT failure that makes the build unusable for further testing, identify the bug that caused the failure as a hotfix bug. Mandate quick turnaround time for these.
■ Automate the BVT test suite—Because a successful result from the BVTs signals the start of more comprehensive testing, it is important to quickly get the results of the BVTs. The same set of tests also needs to ensure the baseline functionality for further testing, so consider automating the BVT suite if it is fiscally possible (a minimal driver-script sketch follows this list).
■ Limit the duration of the BVTs—BVTs are not extensive test suites. The purpose of a BVT is simply to determine the validity of the build and the stability of high-priority functional areas. Don't attempt to run entire test suites during the BVTs because this can be time-consuming. The more time a builder spends running BVTs, the less time the build is exposed to a greater number of testers.
■ Don't run an entire BVT suite on partial builds—The BVT suite is inappropriate to validate the quality of partial builds. Developers should perform unit tests to validate code fixes or changes. In addition, developers should run subsets of the BVT suite on their private builds prior to checking in code for the next build to ensure that the new code does not break the build or the BVT test pass.
■ Run the BVT in a controlled environment—Control the environment for your BVTs by taking the following precautions:
   ■ Install each new build on a "clean" test environment to keep the focus of the test on the build and not on external influences.
   ■ Ensure that no heavy prerequisites are required for the setup of the BVT environment or framework.
   ■ Make the BVT suite portable so that it can run on multiple language versions of the same product.
■ Track BVT results—The BVT suite should provide a baseline measurement of build stability for comparative analysis:
   ■ Compare results between test passes to provide information on potential problem areas.
   ■ Track the results of the BVT pass to provide quality trend information.
   ■ Use BVT metrics to establish release-to-test and release-for-self-host quality criteria.
   ■ Create a link from the build intranet page to the BVT results.
■ Review the BVT suite periodically—Update the BVT suite as necessary to validate the integrity of the build and ensure its essential functionality. Update the suite by removing tests that are no longer applicable. Only incorporate tests in the BVT suite that help to accomplish its two primary objectives:
   ■ Evaluating build integrity.
   ■ Verifying basic functionality.
■ Don't constantly change the parameters of the BVT suite—The BVT test suite should provide a baseline measurement on build integrity and basic functional quality. Although features might change, don't modify the BVT suite unless you need to modify it to more effectively validate either the integrity of the build or changes in high-priority functionality. Changing the parameters or variables in the BVT suite skews the baseline metrics and other measurements, such as the quality bar.
■ Create a service level agreement (SLA) with program management and development teams—Collaborate with program management and development teams to agree about the BVT process, defect resolution and timelines, the quality bar, results reporting, and ownership. Use SLAs to establish and record the BVT process and responsibilities.
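To make the automation recommendation concrete, here is a minimal sketch of a BVT driver script. Every name in it (the test executables, their /log switch, the log share, and the release script) is hypothetical; the pattern is simply that each test signals failure through its exit code, and the build is released only if every BVT passes:

@echo off
setlocal
set BUILDNUM=%1
set LOGDIR=\\buildsrv\bvtlogs\%BUILDNUM%
md %LOGDIR%

rem Run each BVT; a nonzero exit code from any test marks the suite failed.
set FAILED=0
for %%T in (bvt_core.exe bvt_setup.exe bvt_ui.exe) do (
    %%T /log:%LOGDIR%\%%T.log || set FAILED=1
)

rem Release the build (hypothetical script) only if everything passed.
if %FAILED%==0 (
    call releasebuild.cmd %BUILDNUM%
) else (
    echo Build %BUILDNUM% failed BVTs; see %LOGDIR%
    exit /b 1
)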





Since you now have some good recommendations on how to set up your BVT suite, you should make sure that your BVTs are automated tests so they will be less prone to human interaction errors. Also, you should not release a build until all of the BVTs have passed. A good way to do this is to hide the release shares so that overzealous testers or developers don't pick up the build too early. You really need to customize BVTs to the product. The testing team is always in the best position to determine what tests should be in the BVTs. Some basic tests apply to the actual build process itself. These tests can be broken into a subgroup of BVTs that are focused on the build (e.g., Build BVTs). The following are two tools that focus on the build itself that you should use after every build:

■ Visual File Information tool (vfi.exe)—Use this tool to validate the build and the integrity of all the files in each build. You can download this tool at http://www.microsoft.com/downloads; search for "Windows Resource Kit tools." Check each file for the following:
   ■ Correct version information
   ■ Correct time/date stamp
   ■ Appropriate file flags
   ■ Correct cyclic redundancy check (CRC) keys
   ■ Correct language information
   ■ Potential ISO 9660 filenaming issues
   ■ Viruses
■ VerCheck tool (vercheck.exe)—Use this tool to validate installed files and registry keys after setup of each new build on a clean test environment. You also can download this tool at http://www.microsoft.com/downloads; search for "Windows Resource Kit tools."



VerCheck also lists all files matching a given pattern and reads out the internal description, version number, and other properties. The entire list can be copied, stored, or printed for a later comparison. Each file can be deleted if no longer needed. Several other valuable tools are available in the resource kit (for free!) that you might want to use for testing. These are just the two basic tools that need to be included in your BVT suite.

Microsoft Sidenote: Tips from an Old Test Manager

Since we have been focused on testing in this chapter, I thought it would be appropriate to include this classic list from a 16-year Microsoft test manager. You may want to make sure your QA or test team reads this. Developers will have an appreciation for it as well. In her own words:

For most of my career, I've been a test manager. I've come to realize that the job of a test manager is really quite simple and consists of three activities:
1. Say "no" to development at least once a day.
2. On a regular basis, complain that the project is off track, the schedule is ludicrous, and the quality is terrible, only to be told to "lighten up" by the program manager.
3. Find the killer metric to measure the product.

What I've learned that I'd like to pass along (the condensed version):
1. If [people] really want you to do something, they'll ask at least twice. The first time they ask is just to see if what they're asking for sounds good to them. If you do what they ask when they've only asked once, you're not playing the game right.
2. The bug resolution "fixed" is more a wish than a statement of fact.
3. You can still be friends with your developer even after threatening him/her with a baseball bat.
4. A schedule is not a sausage casing designed to be stuffed to the breaking point.
5. 95 percent of reorgs have no impact on the people actually doing the work. The other 5 percent of reorgs mean you'll be looking for a new job. The trick is to know which one is about to happen.
6. It's fun to have one's mistakes made into a Dilbert cartoon.
7. If no one else knows how to do what you do, leaving your group is going to be tough.
8. A spec is like a fairy tale. Only the naïve and childlike believe it.
9. The first time a question is asked, it's fine to say, "I don't know." When the same question is asked again a week later, you'd better have an answer.
10. Metrics are dangerous. No matter how carefully you caveat them in the original mail, they are interpreted to mean something completely different.

BVT States

It's important to talk about the different states in which BVTs can exist. There are three basic states:

■ An Active BVT is one that stops a self-host build from being released if it fails. For the most part, when you see a reference to a BVT, it refers to an Active BVT.
■ An Inactive BVT is one that does not stop a self-host build from being released if it fails. Such tests are usually in this state because they are new and have not yet passed against a build. Sometimes they are in this state because they contain errors in the test script or because they show a bug that does not have a reproducible scenario.
■ A Disabled BVT is one that is never run. Typically, BVTs are disabled when they are still under development or the test has been made obsolete for some reason.

Keep in mind that BVTs are meant to ensure that major functionality of a shared feature or an application is not disabled to the extent that you cannot test the functionality. Remember this when you're creating a BVT: if a BVT is run and fails but does not result in a BVT-blocking or self-host designation, it probably shouldn't be a BVT.

Summary


Did I confuse you about the differences between BVTs and smoke tests? If I did, here is a basic list of the differences:

■ BVTs are a subset of smoke tests. For some components, unit tests are used for BVTs and smoke tests.
■ The build team or a BVT team runs BVTs. Testers or developers run smoke tests.
■ Smoke tests are used for new functionality, whereas BVTs are usually reserved for core functions of the code that are more critical to the stability of the product.
■ BVTs have a limited time frame in which to run. Smoke tests should be allowed to work without time limits.

Microsoft Sidenote: Testing Guide (See Appendix C)

Another very popular question that I get from customers is: Do you have any resources on testing? Well, I do. There is a test guideline that has been floating around the developer division (devdiv)—the people that bring you Visual Studio—that Josh Ledgard has pulled together. I have reprinted the guide in Appendix C. You should not consider it to be a comprehensive test matrix, but it is a good starting point for a test plan. As with anything in this book, if you see something we're missing or you see any mistakes, please let me know and I will add to this "testing snowball."

This chapter should give you an understanding of what smoke tests and BVTs are and how they affect a build, with the result being a self-test or self-host build. Also provided are some suggestions to follow when working with or developing these tests. The resources mentioned throughout this chapter are good places to go if you need further information on testing.



Recommendations

Some of the general points of the chapter are:

■ Run Visual File Information and VerCheck on each build.
■ Establish a quality bar that the build will be measured against.
■ Let the test team own the BVTs, but let the build team run them.
■ Automate BVTs and smoke tests.
■ Track BVT results on a public intranet page.
■ Know the difference between BVTs and smoke tests.
■ Always have the testers or developers run the smoke tests, not the build team.

C H A P T E R  1 3

BUILDING SETUP

Philosophy: No more rewards for predicting rain, only for building arks.
—Lou Gerstner, former IBM CEO

You might be thinking, "What does setup have to do with building software?" A lot. I never set out to be a setup expert, but somehow I found myself researching and answering questions about setup even though my work was in build labs and on build processes. What seems to happen, as with most of the topics in this book, is that the build team by default becomes the owner of setup because it is the next step after the build is complete. When a team grows, a setup or install team is formed eventually. It is the build team's responsibility to create the setup packages, and thus the build should not be released until setup has been created successfully.

In the past, I used to recommend keeping build and setup scripts separate. For example, after all build scripts have run and some basic build verification tests have passed (such as a file copy script to ensure zero errors), we enter the post-build phase or process. At this point, we build all the setup packages and deploy the product to test servers. Now my recommendation has changed to integrating setup into the build process and pushing the responsibility of setup back to the developers who own the code or modules that are included in a setup package. How can you do this? By using the WiX tool (which I talk about in this chapter).

Performing the setup creation process on a daily basis is as important as building your product every day. The same reasons apply, with the emphasis on being able to build setup successfully—you do not want to wait until your product is about to ship to test or create setup packages. This would be like not doing any homework during a school semester and then attempting to do all of it during the last week of the semester, just prior to taking the final exam. Usually with the same bad result.


This chapter covers some basic architecture of how you should design your setup programs using the Windows Installer XML (WiX, pronounced "wicks") toolset. It provides enough information to give you a basic foundation on which you can build. This chapter is not intended to provide details about any specific tool. For specifics, search the Internet; a lot of good information is available for free. See the sidenote for download locations. Included in this chapter is some input from the WiX creator, Rob Mensching, and setup development lead Christopher Hilbert, who also provided the example.

Microsoft Sidenote: Wise, InstallShield, or WiX?

Many customers ask me what setup programs Microsoft uses. Similar to build tools, no one standard is required, just some recommendations. In fact, we do not have site licenses for the two most common setup application tools: Wise Solutions (www.wise.com) and InstallShield (www.installshield.com). Microsoft leaves it up to the specific product group to purchase licensing agreements with whichever tool it chooses to use. Over the past few years, most groups have adopted a new choice: Windows Installer XML (WiX—http://sourceforge.net/projects/wix). WiX has been spreading like wildfire at Microsoft for the following reasons:

■ Rather than describe the steps of installation in GUI form, WiX uses a declarative form that specifies the state of the target machine after various phases of installation.
■ The WiX tool is designed to integrate setup development with application development, thus pushing the setup process back to the developers, who best understand the requirements for their components to install.
■ It's free. WiX is the first project from Microsoft to be released under an OSS-approved license, namely the Common Public License, which means it is open source.


The Basic Definitions

In order to understand how WiX works, we must define some basic setup and WiX terms:

■ Windows Installer—A feature of Microsoft Windows that defines and manages a standard format for application setup and installation and tracks components, such as groups of files, registry entries, and shortcuts. (Note: The extension for the installer files is "MSI" because the original name of the tool was the "Microsoft Installer," but the marketing team chose another route and called it the "Windows Installer.")
■ Windows Installer XML (WiX)—A toolset that builds Windows installation packages from XML source code. The toolset provides a command-line environment that developers can integrate into their post-build processes to build setup packages.
■ WXS files—The files that WiX uses to create setup packages. Most setup components have one or more of these files.
■ CABinet files (CAB files)—Files that store compressed files in a file library. A single compressed file can be spread over several CABinet files. During installation, the setup application decompresses the files stored in the CABinet(s) and copies them to the user's system.
■ Stock Keeping Unit (SKU)—A term borrowed from the manufacturing world to refer to a different variation of a product, such as Office Standard, Professional, and Enterprise. Each variety is considered an SKU. Each different language of a product is also an SKU. A unique number is usually assigned to each SKU, which is often represented by a bar code.
■ SKU.XML file—This file outlines what a setup command script will do. (See the sample file later in this chapter.)
■ Setup component—Set of files, registry keys, services, UIs, custom actions, and so on that you must use to get a standalone piece of setup working.
■ Setup SKU—A setup package that you can install on a given machine. Another way to define this is as a product that you can install that shows up with an entry in the Add/Remove Programs list.


■ BinPlace tool—BinPlace.exe is a tool for managing large code projects and moving executable files, symbol files, and any other type of file. It can also extract symbols from an executable file and remove private symbols from a symbol file. BinPlace is useful if you are building several modules and delivering them in a variety of packages. You can get the BinPlace tool free by downloading the Microsoft Driver Development Kit (DDK) located on MSDN (www.microsoft.com/whdc/devtools/ddk/ddkfaq.mspx).

Setup Is Not a Testing Tool

Your setup program is not a test. Don't rely on it to determine if the files you build were built without any errors. You should have a specific build verification test (explained in detail in Chapter 12, "Build Verification Tests and Smoke Tests") that verifies that the files were built properly. Too many times, at Microsoft and at other companies, I see groups that use a setup tool as a test. For example, if the setup tool fails with an error, the build teams backtrack to see what failed in the build or what was added or removed. This is the wrong approach. You should do the opposite. Each product team needs to come up with a list of files that are contained in each release. In the simplest example, you can use a spreadsheet application like Microsoft Excel to track this, but it would be better to use a database tool that allows richer queries and entries. Track the following information for each file:

■ The name of the file
■ The location of the sources for the file and a list of the source files
■ The owners (development/test/program management) of the file
■ The SKUs that the file ships with

After you create this master list, you can create the appropriate SKU.XML files for your product. Here is an example of how groups in Windows, such as the IIS and DDK teams, use WiX to build their setup programs. Figure 13.1 shows an overview of how the post-build process works.

Figure 13.1 Setup structure. (The figure shows the setup MSI and CAB build process: in the build phase, the Setup Build XML files and WXS files are checked into the source tree and binplaced; in the post-build phase, SetupBuild.cmd reads the Setup Build XML files under COMMONTEST\Setup for settings and runs WiX v2 on the specified directories to produce the MSI and CAB files in each SKU's destination directory.)

Starting from the top, the SKU.XML files (or Setup Build XML files) are checked into the source tree along with the WXS files that developers own and generate. In this example, we then binplace (copy) the files into a COMMONTEST\Setup directory. After we copy the files, we use a script called SetupBuild.cmd to produce the MSI and CAB files. In most build processes that use WiX, the .wxs files are compiled along with all of the other source code in the project. This is one of the big gains of the WiX toolset and what makes it so useful: it fits right in with all of the other tools in your build system, and it is easier to push the setup process back to development if the files get compiled at the same time.


WARNING Don’t use SKU.XML for the filename. Name it something unique to the SKU or SKUs that it represents.

Figure 13.1 shows only two SKUs being created, but this architecture allows for more than two SKUs. The following configuration settings are definable in the SKU.XML file (a sketch of the resulting tool invocations follows the list):

■ Environment variables
■ Candle.exe command line
■ Light.exe command line
■ Source directory that contains all .wxs files
■ Destination directory where the resultant MSI and CABs will be placed
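As a rough sketch of what SetupBuild.cmd does with these settings, the WiX v2 command-line tools are invoked along these lines; the three directory variables are hypothetical stand-ins for the values read from the SKU.XML file:

rem Compile every .wxs file in the SKU's source directory into .wixobj files.
candle.exe -out %OBJDIR%\ %SRCDIR%\*.wxs

rem Link the .wixobj files into the MSI (and its CABs) in the destination dir.
light.exe -out %DESTDIR%\product.msi %OBJDIR%\*.wixobj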

The SKU.XML format itself is defined by an XML schema that SetupBuild.cmd consumes along with the settings listed above. The schema defines a top-level container element for every SKU XML doc, string-typed attributes, an element that is used to set environment vars on a per-SKU build basis, and elements that specify the command lines for Candle.exe and Light.exe.
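Because the element and attribute names in a SKU.XML file are specific to each team's SetupBuild.cmd, the following instance is purely illustrative of the settings listed earlier; every name and path in it is hypothetical:

<!-- Hypothetical SKU.XML instance describing one SKU's WiX build. -->
<SetupBuild Name="ProfessionalSKU">
    <!-- Environment vars set on a per-SKU build basis. -->
    <Environment>
        <Variable Name="LANG" Value="ENU" />
    </Environment>
    <!-- Command lines handed to the WiX tools. -->
    <Candle CommandLine="-dLang=ENU" />
    <Light CommandLine="-b binaries\x86fre" />
    <!-- Directory that contains all .wxs files for this SKU. -->
    <SourceDir>commontest\setup\pro</SourceDir>
    <!-- Directory where the resultant MSI and CABs will be placed. -->
    <DestinationDir>release\setup\pro</DestinationDir>
</SetupBuild>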


This build script automates the entire build process and more. After you have an out-of-the-box build process running out of VSTB, you can edit the build script if you need to further customize or extend the process. To provide this end-to-end integration, VSTB integrates seamlessly with other VSTS tools, such as Team Foundation Source Control, Team Foundation Work Item Tracking, and Team Test. When the build process is over, a comprehensive report is generated with information about the build result and the general health of the build. Some of the things included in the report are the number of errors and warnings for each build configuration with links to the log files, the results of the test runs included in the build, and a list of change sets that went into the build and who checked them in (which could be used to detect the cause of a build failure). Other information, such as the code coverage of the tests and the work items associated with the build, is listed (which could be used to determine the quality of the build). The report has active links to change sets, work items, and test results for further details.

Let's examine a scenario to illustrate this a little better. A developer has just looked at a work item assigned to him and fixed his code. While checking into Team Foundation Source Control, he associates the work item with the check-in. VSTB picks up the source for the nightly build, and as a post-build step, it updates the Fixed In field of the work item with the build number. The generated build report lists all the work items that were associated with this build. The tester looks into it to make sure the work item opened by her was resolved in this build and installs the build for further investigation. This is a small example of an integration point among Team Foundation Source Control, the Work Item Tracking tool, and VSTB. Figure 18.3 explains the VSTB flow.

But what if your organization has some extra steps beyond what VSTB provides out of the box? Fortunately, you can customize VSTB to suit your needs. The underlying build engine is MSBuild, and most of the steps in the build process are MSBuild tasks. All that you need to do is write an MSBuild task that executes the extra steps and include it in the build script that the wizard generates. Through some simple editing, you can specify the order in which this custom step needs to run. A rough sketch of such a customization follows.
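For illustration only, here is the general shape of such a custom step in an MSBuild project file. The target name, the symbol-share path, and the $(BuildNumber) property are hypothetical; the <Target> and <Exec> elements are standard MSBuild:

<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <!-- Hypothetical custom step: copy debug symbols to a share after the build. -->
  <Target Name="CopySymbols" DependsOnTargets="Build">
    <Exec Command="xcopy /s /i $(OutDir)symbols \\buildsrv\symbols\$(BuildNumber)" />
  </Target>
</Project>

Hooking the target into the wizard-generated build script is then a matter of adding it to the script's dependency chain in the order you need.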


Figure 18.3 VSTB architecture. (The figure shows the Team Build client sending build requests to the Team Build Service on the build server; the service pulls build sources and MSBuild scripts from Team Foundation source control, runs MSBuild with the Team Build logger, drives static analysis and testing, opens and updates bugs through work item tracking, publishes binaries to the build drop site, and records build and test data in the Team Build store on the TFS data tier and warehouse.)

Did you find the information on VSTB interesting? The information is taken from the Team Foundation blog at http://blogs.msdn.com/Team_Foundation. If you go to this site, you will find more incredibly useful information. One thing to note is that VSTB is being designed to work with MSBuild.exe and no other build tool. So before you get excited about adopting it, and unless the specs have changed, you had better roll out MSBuild before spending much time on this tool.

The Microsoft Shell (MSH, or Monad)


Monad is Microsoft's next-generation command-line scripting language. With the advent of a new command-line interface (CLI), also known as MSH, Microsoft has greatly simplified server administration. Systems administrators can now learn about .NET classes, and developers can easily extend the functionality by adding Monad commandlets (Cmdlets) and providers. This provides a glide path to higher-level languages, such as C#. Three concepts matter most here (an example pipeline follows this list):

■ Cmdlets—Commands are created by Monad from classes that derive from the Cmdlet class and override a well-defined set of methods. Cmdlets define parameters by specifying public properties in their derived class with appropriate attributes. When this is done, Cmdlets can be registered with Monad, which provides both programmatic and command-line access to their functionality.
■ Pipelines—Pipelines are a sequence of Cmdlets that pass structured data (frequently .NET objects) to the next Cmdlet. This approach provides tremendous leverage and allows the creation of reflection-based Cmdlets (Cmdlets that can operate on incoming data, such as a "where" or a "sort" Cmdlet).
■ CmdletProvider—The CmdletProvider namespace defines a set of specific base classes and interfaces. When a developer implements these, the CmdletProvider engine automatically exposes a number of Cmdlets to the user in a common way. It also exposes methods for other Cmdlet developers to access that functionality programmatically.
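To show what a pipeline looks like in practice, here is a one-liner in the style of the Monad beta. The exact cmdlet names and syntax were still settling as of this writing, so treat it as a sketch:

MSH> get-process | where-object { $_.HandleCount -gt 400 } | sort-object HandleCount

Because each stage passes full .NET Process objects rather than text, where-object can filter on any property without the string parsing a traditional shell would require.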

The Monad Shell has several goals:

■ Excite and empower—The Shell excites and empowers Windows system administrators, power users, and developers by delivering a CFCC command-line environment:
   ■ A powerful interactive shell and script runtime
   ■ A rich procedural scripting language
   ■ An endorsed, consistent, and well-documented set of commands and utilities providing comprehensive and fast access to system objects
■ Innovate—Leverage unique assets of the Windows platform to deliver key innovations that empower users to accomplish new tasks or perform existing tasks more efficiently.
■ Secure—Increase the security of user data for local and remote administration scenarios.
■ Leverage—Provide the command-line environment for Longhorn Server Management, .NET Infrastructure Scale-Out (BIG), Advanced Provisioning Framework (APF), Embedded Server Appliance Kit, MOM, WinPE, and so on.
■ Clean up—Make existing/inadequate tools obsolete (for example, cmd.exe, duplicate utils).
■ Educate—Ensure that a user community is created and empowered to be productive and self-supporting with the new capabilities.
■ Discoverable—Make certain that users can, through command completion (Intellisense), determine how and what to use to accomplish tasks. Also provide direct means for assisting users to determine which commands to use and how to use them.
■ Concise—Create an environment where keystrokes are minimized to maximize efficiency.

I hope you see that the MSH.EXE shell is different from traditional command shells. First, this shell does not use text as the basis for interaction with the system, but uses an object model based on the .NET platform. This provides a unique way to interact with the system. Second, the list of built-in commands is much longer; this ensures that interaction with the object model is accomplished with the highest regard for integrity. Third, the shell provides consistency with regard to interacting with built-in commands through the use of a single parser, rather than relying on each command to create its own parser for parameters.

Summary

You will see or have seen several books on the products discussed in this chapter. I wanted only to mention what I thought was relevant to software build teams. Please take all this information with a grain of salt. As of this writing, the information presented here is accurate. Because pressures to release cause features to be dropped, some of what I mention here might not make it to release.


Recommendations

Looking into a crystal ball that predicts the future, you should see:

■ Adopt MSBuild as soon as possible.
■ Start researching VSTS and see if there are tools you can implement in your current build process and future processes.
■ Keep checking back to the links mentioned in this chapter or the www.thebuildmaster.com Web site for the most recent information on these products.

A P P E N D I X  A

EMBEDDED BUILDS

This book has been based on Windows NT, Visual Studio, Small Business Server, MSN, and a few other product team build processes at Microsoft. Embedded device builds are a category or variation of this process that deserves at least some mention in this book. In a way, the builds for embedded devices are in a category all by themselves. That is because Microsoft licenses the operating systems to different vendors, who then customize them and use them in their devices. Microsoft also sells portable devices that use these operating systems. In this appendix, I touch on how Microsoft performs Windows CE builds and point out the minor variations on the process described in the chapters of this book. This appendix is not intended to be a comprehensive explanation; numerous custom tools are needed to successfully create a CE build. Refer to the links at the end of this appendix for more details.

When someone talks about embedded systems at Microsoft, he is either talking about Windows CE or Windows XPe (XP Embedded). As of this writing, there is no Windows Server Embedded system. Mike Hall, technical product manager in the mobile and embedded devices (MED) group, explains the difference between the operating systems best:

A question that comes up at every customer meeting is how to choose between Windows CE and Windows XP Embedded. The answer can be pretty simple… Windows XP Embedded is a componentized version of Windows XP Pro, broken down to approximately 12,000 components, 9,000 device drivers, and 3,000 operating system technologies. Because Windows XP Embedded is a componentized version of Windows XP Pro [that] only runs on x86 processor and PC architecture hardware, the great thing is that desktop applications and drivers

215

216

Appendix A

Embedded Builds

will work with Windows XP Embedded without changes. There are embedded specific technologies added to XP Pro: the ability to run headless, boot from read-only media or boot from the network, resume multiple times from a hibernation file, and device update technologies. Image sizes scale from about 40MB. Windows XP Embedded is not natively real-time but can be real-time through adding third-party real-time extensions. Windows CE is a small footprint (200KB and up), hard real-time, componentized operating system that runs on x86, MIPS, ARM, and SH4 processor cores. There is no reliance on BIOS or PC architecture hardware. Windows CE exposes approximately 2,000 Win32 APIs (compared to the 20,000 APIs exposed on Windows XP Pro). The operating system is UNICODE based but does support APIs to convert to/from ASCII and ANSI. As far as application development is concerned, Windows XP Embedded runs standard desktop applications, so you could use TurboPascal, Visual Studio, or any or your favorite desktop application (or driver) development tools. Windows CE has two tool choices: eMbedded Visual C++ for “native” code development, and Visual Studio .NET 2003 for “managed” application development. Hopefully this gives you the 20,000-ft view of the differences between Windows CE and Windows XP Embedded. I have taken some general information from the Windows CE build documentation, deleted some terms and references to internal tools that would have been confusing to you, and included it next. This should give you a good idea of the steps it takes to create a CE build.

Nuts and Bolts of the CE Build System

This is a high-level view of the Windows CE command-line build system. It is intended to give a general understanding of what is happening behind the scenes when a build is run so that the operating system components selected by an OEM are included in the CE image. It doesn’t attempt to cover all the details of the build system.

One unique term used by the CE team that is not mentioned anywhere else in this book is sysgen. This term refers to the process of selecting Windows CE components and the actual building of an image from these components.

Following are the steps for a complete clean build:

1. Build project (compilation)—In the build phase, the build system compiles operating system (OS) component source files and produces libraries. The basic unit of componentization in Windows CE is the library—components are not conditionally compiled. Because of this, components can be mixed and matched without worrying about changes in their behavior.
2. Link project—During the link phase, the build system attempts to build all target modules. Modules are drivers and executables produced from Windows CE components. In CE 4.0 and later, you can select modules via sysgen environment variables. For example, the “common” project’s modules are listed in CE_MODULES, the DirectX project’s modules are listed in DIRECTX_MODULES, Internet Explorer’s modules are listed in IE_MODULES, and so on. Microsoft introduced the separation of the build phase and the link phase in Windows CE .NET. Because the operating system was getting more and more complex, linking drivers and other components during the build phase could possibly cause hard-to-diagnose crashes at runtime because coredll entry points that were present during the build phase (which occurs prior to componentization) might not be present in an OEM’s final platform.
3. Copy/filter project (headers and libraries)—The copy/filter phase of system generation is responsible for moving parts of the operating system to the target project’s cesysgen directory. Note that only the components of the OS that the OEM has selected are moved. In addition, header files and various configuration files such as common.bib and common.reg are “filtered” to remove the parts that are unrelated to the OEM’s selected components. The copy/filter is performed at the same time as linking.


NOTE: The build system supports compiling debug and retail variants for multiple CPU architectures in the same source tree. This is another unique aspect of CE builds.


4. Post-process project (miscellaneous post-sysgen cleanup)—The “postproc” sysgen target provides the build system with a mechanism to do some work after most of the system has been generated. Although the post-process phase is important for the small number of OEMs who use it, most developers don’t do much with it.
5. Platform sysgen—If an OEM wants to write his platform in such a way that it can be used with any selection of OS components, he immediately runs into a problem. Some of the drivers, Control Panel applets, or applications in the platform directory might depend on unselected components. When these source files are built, there are compilation or linker errors because header files or coredll entry points are missing. The platform sysgen step helps address this problem by checking for a platform sysgen makefile.
6. Build platform—This phase consists of running a clean build on all projects and building only the projects that sysgen settings specify.
7. Create release directory—After all the OS components and the platform have been compiled and linked, you need to assemble all the appropriate binaries and configuration files into one place so that you can combine them into a downloadable image. You can use a batch file to perform this step.
8. Create downloadable image—After you populate the release directory with the appropriate binaries, the next step is to create a binary file that is suitable for flashing or downloading to your device’s RAM. Use the makeimg command for this step. For details of this command, see the Platform Builder documentation. (The link is provided at the end of this appendix.)

Repeat steps 1 to 4 several times during a complete build, once for each “project.”

For more information, check the following sources:

■ General Embedded information from Microsoft (http://msdn.microsoft.com/embedded/)
■ Platform Builder Documentation (http://msdn.microsoft.com/library/default.asp?url=/library/en-us/wceintro5/html/wce50oriWelcomeToWindowsCE.asp)
■ Mike Hall’s Embedded Web Log (http://blogs.msdn.com/mikehall/default.aspx)

APPENDIX B

EXTREME PROGRAMMING

Extreme programming, or XP—not to be confused with the XP in Windows XP, where XP stands for eXPerience—has gained in popularity over the past few years as an accepted development methodology. An appendix probably doesn’t do XP justice because it is a rather involved software development practice. However, I will touch on what XP is and what a case study at Microsoft revealed about the practice. I include links throughout this appendix and references at the end for further reading.

Extreme Programming Fundamentals

Extreme programming is an agile software development methodology formulated by Kent Beck, Ward Cunningham, and Ron Jeffries. Kent Beck wrote the first book on the topic, Extreme Programming Explained, in 2000. As Beck says (taken from Extreme Programming in Practice, by Newkirk and Martin):

Extreme programming, familiarly known as XP, is a discipline of business and software development that focuses both parties on common, reachable goals. XP teams produce quality software at a sustainable pace. The practices that make up “book” XP are chosen for their dependence on human creativity and acceptance of human frailty. Although XP is often presented as a list of practices, XP is not a finish line. You don’t get better and better grades at doing XP until you finally receive the coveted gold star. XP is a starting line. It asks the question, “How little can we do and still build great software?” The beginning of the answer is that, if we want to leave software development uncluttered, we must be prepared to completely embrace the few practices we adopt. Half measures leave problems unsolved to be addressed by further half measures. Eventually you are surrounded by so many half measures that you can no longer see that the heart of the value programmers create comes from programming.

I gathered the following list of the fundamental characteristics or practices of the extreme programming method from the sources listed at the end of this appendix:

■ Incremental and iterative developments—In contrast to traditional development practices, detailed specifications are not written at the beginning of the project. Make small improvement after small improvement. Start with a rough plan of what your features and product should do, and then start writing code. As development progresses, modify and shape the original plan as necessary.
■ Continuous, often repeated automated unit tests and regression testing—Every feature of the product should be testable with a comprehensive set of tests. Luis Miguel Reis has written a good document, “Test Engineering: Microsoft Solutions Framework vs. Extreme Programming” (see the link at the end of this appendix), that discusses the test methodology of the two practices. In summary, in the extreme programming method, you basically run BVTs all the time, and if the tests run and pass, you’re done. What’s interesting is that it is recommended that you write the tests before you start coding the features. This seems awkward, but it becomes clearer when you understand the process. Even so, Microsoft’s approach has been to write the tests and feature code simultaneously instead of first. To dive into details, see the section on test-driven development (TDD) later in this appendix.
■ Short iterations/small, frequent releases—Usually every 1 or 2 weeks, binaries are released to the customer, not just iterations on a project chart. The idea is to put a simple system into production immediately and then release new versions on a short cycle. At the very least, you get better at the most important skill: releasing software.


■ Pair programming—Production code is written by two people sharing one keyboard and one mouse. Each member performs the action the other is not currently doing. For example, while one types in unit tests, the other thinks about the class that will satisfy the test. Either person can do the typing. The person who does the typing is known as the driver, whereas the person who guides is known as the navigator. It is often suggested that the two partners switch roles at least every half-hour. This idea seems to be the hardest one to sell to non-XP believers.
■ User interaction in the programming team (onsite customer)—A customer representative is attached to the project and should be onsite at all times to evaluate the system, give feedback on new builds, and answer questions. This practice seems to be the most expensive (for the customer at least) but is ideal, if possible.
■ Refactoring—Whenever a new feature is added, ask if there is a way to change the existing system to make the feature simpler. If there is, change the existing system. For more details, read the book Refactoring: Improving the Design of Existing Code by Martin Fowler.
■ Shared code ownership—Just as the term suggests, everyone owns the code. Although there might be experts in different areas, for the most part, anyone on the team can program in any area.
■ Simplicity—At any given time, the “best” design for the software is one that runs all the tests, has no duplicated logic, states every intention important to the programmers, and has the fewest possible classes and methods. Anything extraneous should be tossed, or better yet, not written. This is also known as YAGNI—you ain’t gonna need it. Choose the simplest possible design that satisfies the existing need. Extreme programmers write code only to meet actual needs at the present time in a project and go to some lengths to reduce complexity and duplication in their code.
■ Organizing the system with a metaphor—Use a guiding “story” to describe how the system is supposed to work. This is a replacement for “architecture.” It’s meant to be readable by both technical and nontechnical people and to give everyone a common set of words to describe parts of the system, as well as an idea of how things basically fit together.



■ Continuous integration—All code is integrated and tested on a continuous basis.
■ Sustainable pace—Working more than 40 hours a week can be counterproductive. When you’re tired, you might not be able to concentrate 100 percent of the time and might make major coding blunders. The idea is to stay rested. The rule is that you can’t work a second week of overtime.

Ian Lewis, a development lead on the Xbox team, says it best:

Serve short-term interests—Work with people’s short-term interests to serve your long-term goals. In a way, this is XP in a nutshell. We’re constantly being told to do things that serve long-term interests, like writing lengthy specs or creating architectures and base classes. XP asks how we can do things that make us productive in the short term, yet still serve our long-term goals. One thing that XP explicitly abandons is the idea that it’s prohibitively expensive to make changes to a system.

This quick overview should give you a basic idea of what XP is. I see a lot of companies adopting bits and pieces of the XP methodology and then calling themselves an “agile development shop.” These companies usually run into a lot of problems when they do this because all of the XP methods work in harmony, and if you take one out, “all bets are off.” For example, if you try to do continuous integration and do not have good unit tests or pair programming, you will probably end up with a bunch of build breaks and unstable code. Even so, Microsoft was already practicing, or seems to be practicing, similar methods to the XP model. Next, I talk about the two that I tend to see the most in the development teams.

Test-Driven Development and Refactoring

The two most popular XP practices that seem to be adopted by various teams at Microsoft are test-driven development (TDD) and refactoring. The developers I have spoken to who have used the TDD technique swear they would never go back to the traditional “write first, test later” process. They say that by writing the tests up front, they have fewer bugs in their code when they are close to shipping. The only difference I see from what Kent Beck prescribes is that the Microsoft testers write their tests at the same time they write their production code, not before they write their production code.

Refactoring is the process of rewriting written material to improve its readability or structure, with the explicit purpose of keeping its meaning or behavior. In software engineering, the term refactoring is often used to describe modifying source code without changing its external behavior. It is sometimes informally referred to as “cleaning it up.” Refactoring is often practiced as part of the software development cycle: Developers alternate between adding new tests and functionality and refactoring the code to improve its internal consistency and clarity. Testing ensures that refactoring does not change the behavior of the code.

Refactoring is the part of code maintenance that doesn’t fix bugs or add new functionality. Rather, it is designed to improve the understandability of the code or change its structure and design to make it easier for human maintenance in the future. In particular, adding new behavior to a program might be difficult with the program’s given structure, so a developer might refactor it first to make it easy and then add the new behavior.

Refactoring has been around at Microsoft for years. I know when I was in the NT group in 1991, developers were refactoring and optimizing their code as needed or as a general practice, and this was in the early days of the product when the code was just being written.

The following explanation of TDD is taken from http://encyclopedia.laborlawtalk.com/Extreme_programming:

Test-driven development (TDD) is a programming technique heavily emphasized in extreme programming. Essentially, the technique involves writing your tests first [and] then implementing the code to make them pass. The goal of TDD is to achieve rapid feedback and implement the “illustrate the main line” approach to constructing a program.

1. Write the test—It first begins with writing a test. In order to write a test, the specification and requirements must be clearly understood.
2. Write the code—The next step is to make the test pass by writing the code. This step forces the programmer to take the perspective of a client by seeing the code through its interfaces. This is the design-driven part of TDD.
3. Run the automated tests—The next step is to run the automated test cases and observe if they pass or fail. If they pass, the programmer can be guaranteed that the code meets the test cases written. If there are failures, the code did not meet the test cases.
4. Refactor—The final step is the refactoring step, and any code clean-up necessary will occur here. The test cases are then re-run and observed.
5. Repeat—The cycle will then repeat itself and start with either adding additional functionality or fixing any errors.

You can go about using TDD in various ways. The most common one is based on KISS (keep it simple, stupid) or YAGNI (you ain’t gonna need it). This style focuses on writing code any way necessary to pass the tests. Design and proper principles are cast aside in the name of simplicity and speed. Therefore, you can violate any rule as long as the tests pass. This can be unsettling for many at first, but it allows the programmer to focus only on what is important. However, the programmer pays a higher price in the refactoring step of the cycle because the code must be cleaned up to a reasonable level at this point before the cycle can restart. A minimal sketch of one such cycle follows.
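To make the cycle concrete, here is a small C++ sketch of my own; it is not from Beck or from the encyclopedia entry. The Roman-numeral converter, its name, and the test cases are illustrative assumptions only, compressed so that one listing shows all five steps.

// One TDD cycle, compressed into a single listing. The tests in main()
// were written first (step 1) and initially failed to build because
// ToRoman() did not exist yet; the function below is the simplest code
// that makes them pass (step 2).
#include <cassert>
#include <string>

std::string ToRoman(int value)
{
    // Deliberately minimal: only what the current tests require (YAGNI).
    static const struct { int arabic; const char* roman; } table[] = {
        { 10, "X" }, { 9, "IX" }, { 5, "V" }, { 4, "IV" }, { 1, "I" }
    };
    std::string result;
    for (const auto& entry : table) {
        while (value >= entry.arabic) {
            result += entry.roman;
            value -= entry.arabic;
        }
    }
    return result;
}

int main()
{
    assert(ToRoman(1) == "I");    // step 3: run the tests
    assert(ToRoman(4) == "IV");
    assert(ToRoman(9) == "IX");
    // Step 4: refactor. The table-driven loop above replaced an if/else
    // chain once the tests passed. Step 5: repeat with the next test,
    // such as ToRoman(40) == "XL".
    return 0;
}

Note that in the Microsoft variation described earlier, a tester would be writing these asserts while the developer writes ToRoman, rather than strictly before.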

An Extreme Programming Scenario

To see how a typical extreme programming scenario might look from a programmer’s view, the following is a generic procedure taken from www.linuxdevcenter.com/pub/a/linux/2001/05/04/xp_intro.html. (By the way, this is the only reference to Linux in this book, so the subtle hint here is that extreme programming seems to serve the open source community best at this time.)

1. The customer lists the features that the software must provide.
2. Programmers break the features into standalone tasks and estimate the work needed to complete each task.
3. The customer chooses the most important tasks that can be completed by the next release.
4. Programmers choose tasks, and work in pairs.
5. Programmers write unit tests.
6. Programmers add features to pass unit tests.

7. Programmers fix features/tests as necessary, until all tests pass.
8. Programmers integrate code.
9. Programmers produce a released version.
10. [The] customer runs acceptance tests.
11. [The] version goes into production.
12. Programmers update their estimates based on the amount of work they’ve done in the release cycle.

This is just an example, but it should give you an idea of the workflow if you are a developer in an extreme programming environment.

Microsoft Case Study

Before adopting something, especially a new development process, Microsoft forms a task force or focus group to do some research. Following are some results of a case study done by some project managers at Microsoft in 2003. This is just one case study of the extreme programming practices to see if it would be viable for Microsoft to adopt, so take it for what it is worth. After studying a couple of companies that have used the extreme programming methods to ship a couple of products (whose names must be withheld for confidentiality reasons), this is what the task force came up with for recommendations:

■ Overall opinion on extreme programming methods success was mixed.
■ Positives in using the extreme programming methods as described by Beck:
  ■ Able to quickly respond to business needs.
  ■ Test-driven design was mandated company-wide because of successful results.
  ■ Continuous integration testing was very valuable.
  ■ Simplicity was noticeable when a request came in the middle of development and was easily implemented in two weeks. The revenue generated from the change would pay for the entire development effort.
  ■ Trust with management increased because of openness.


■ Negatives in using the extreme programming methods as described by Beck:
  ■ Did not see a noticeable quality improvement—possibly because of the lack of unit tests.
  ■ Difficult to address “plumbing” and future releases.
■ Barriers that were there:
  ■ Overall, there was little resistance.
  ■ Alignment with business drivers was critical.
  ■ Management resistance to pair programming.
  ■ Onsite customer was not practical.
  ■ The primary barrier was the name extreme. Agile development was a better term.



■ General observations:
  ■ XP can revolutionize the ability to respond to changes in the business climate.
  ■ XP increases software development predictability.
  ■ Value can be derived from partial implementation, but…
  ■ The whole is greater than the sum of the parts.
■ Final observation:
  ■ XP depends on solid programming frameworks and mature technologies.

In general, extreme programming is believed to be useful for small- to medium-sized teams with fewer than 12 persons. It seems that if the development group is larger than 20–30 people, the quality of the code will suffer unless you have “extremely” good tools and processes that monitor all the extreme programming methods and you adopt all or none. The general consensus is that extreme programming is good for some projects and not so good for others. It depends on the project you are working on and the culture your developers live in, so choose wisely.

References and Further Reading

■ “eXtreme Programming.” Overview and case studies by Microsoft Architect Council. Sept 2003.
■ “Extreme Programming.” Microsoft document by Ian Lewis, development lead.
■ Reis, Luis Miguel. “Test Engineering: Microsoft Solutions Framework vs. Extreme Programming.” http://berlin.inesc.pt/cadeiras/pfsi/PFSI2003/SEMINARIO/pdfs/testesluis-reis.pdf.

■ Beck, Kent. Extreme Programming Explained—Embrace Change. Addison-Wesley: Boston, 2000.
■ “XProgramming Software Downloads.” http://xprogramming.com/software.htm.


APPENDIX C

TESTING GUIDE

Test Guide: A Compilation from the Developer Division at Microsoft

The reference to this appendix can be found in Chapter 12, “Build Verification Tests and Smoke Tests.” This testing guide was created by the Developer Division (devdiv) at Microsoft and is intended for people who are interested in building good test plans. Some of the test conditions mentioned might be good for Smoke or BVTs, while others are just good test ideas regardless of what your code does. I hope you can find some good tips or ideas in it. The guide is broken down into the different resource types that a program might depend on.

File Tests: Does Your Code Rely on Files?

If your code relies on files (which the majority of code written does), you should test these scenarios:

■ If a file that the application depends upon is removed or renamed, what would happen? Does the application crash or exit gracefully? (For example, in Visual Studio, the ToolBox depends on a .tbd file that is stored in the users/appdata directory. What would happen if we deleted that file?)
■ What happens if the file doesn’t have the expected structure or contents; is it corrupt? For example, the VS Start Page expects that a custom tab XML file will comply with the schema and will be correctly formatted XML. What happens if it doesn’t, or if instead of XML, we have a simple text file or binary file?


■ What if the file is in the expected format, but the values of the data that we get from the file are invalid? (For example, wrong data type, out-of-value bounds, containing invalid chars, and so on.)
■ Does your feature expect a file to be ASCII? What happens if it has a different encoding, such as Unicode or UTF8?
■ Does the application expect that the file it reads from will not exceed a certain size? What happens when we try working with big files? While you are doing this, what if the file is 0 length?
■ What happens if you try to use a file that is in use by another process or user?
■ What happens if the application depends on a file, but the permissions set on the file don’t allow it to access the file? Try security versus read/write/execute permissions. What if the file is just hidden?
■ If the disc is full, what is the result? Use a floppy disc, for example, and do reads/writes and open/close the application. You can use tools to emulate disc full conditions. Canned Heat is a good tool for that. Use FileMon to identify file system access.
■ What happens if you can access the media at first but it becomes unavailable while the application is doing work? Errors when accessing media can happen if the hard drive, floppy drive, CDROM drive, and so on are unavailable or slow. Use Canned Heat to emulate the errors or delays. Just pop out the disc you are writing to or disconnect from the network share you were using and see what happens.
■ Windows 2000 and above allow junction points. A junction point is basically a link from one point on disc to another. If your features do any type of recursive directory walking, you should try creating a loop and see how your feature handles it. For example, say that you map c:\myproject\linkedfolder to c:\myproject, and you have a file foo.txt in c:\myproject that contains the word foo in it. Now, if you do “Find in Files” from Visual Studio starting from c:\myproject and searching for “foo,” you’ll find 14 matches instead of 1. You can use linkd.exe to create junction points.
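To make the first scenario above concrete, here is a minimal C++ sketch of my own; it is not from the devdiv guide. The file name settings.tbd and its single-integer format are hypothetical. The point is that a missing or corrupt file should degrade to a default, never a crash:

#include <cstdio>
#include <string>

// Returns true and fills 'out' only if the file exists and has the
// expected structure (here: a single integer on the first line).
bool LoadSetting(const char* path, int& out)
{
    FILE* fp = std::fopen(path, "r");
    if (!fp)                      // removed, renamed, or permission denied
        return false;             // degrade gracefully; never crash
    int value = 0;
    bool ok = (std::fscanf(fp, "%d", &value) == 1);  // corrupt or 0-length
    std::fclose(fp);
    if (ok)
        out = value;
    return ok;
}

int main()
{
    int setting = 42;             // sensible default if the file is bad
    if (!LoadSetting("settings.tbd", setting))
        std::printf("settings.tbd missing or corrupt; using default %d\n",
                    setting);
    else
        std::printf("loaded setting %d\n", setting);
    return 0;
}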


File Paths: Are File Paths Ever Given As Input to Your Program?

Testing file paths is critical, so if you use them, check for these conditions:

■ Invalid filename (invalid characters, and so on). Make sure your feature uses the operating system to handle invalid filenames, rather than writing its own interpretation of that code.
■ Filename/path longer than the allowed max path. Be aware of when your feature adds something to the filename specified by the user. What if the user file path was already max length and we still added an .aspx extension to the end of the path? It’s important to test for the valid longest filename. Is this a hardcoded number? If so, why?
■ Filename with spaces, dots, or semicolons; check these invalid or valid inputs.
■ Using reserved names such as COM1 or AUX to test any functions that create or open files. The list of reserved names: CON, PRN, AUX, CLOCK$, NUL, COM1-COM9, LPT1-LPT9.
■ Varied filenames, using all operating system allowed characters (`output^%$#@#.,!@#$%^)(& is a valid filename).
■ Does your feature depend on the name of the file it uses? Try the same filename canonical representations: trailing white spaces, .\foo = foo, short file format (aka 8.3 format: ~.). (See Canonical Representation issues from Writing Secure Code for many more examples.)
■ Check for paths of type: \\?\, file://?
■ Try saving or opening by specifying a directory path rather than a file path.
■ What is the default path that your features use when the user tries to open, save, or find a file? Does that default make sense? Are we changing the default according to the path that the user navigated to the previous time?
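As one way to approach the reserved-names item above, here is a short C++ sketch of my own; the helper name and screening policy are illustrative assumptions. On Windows, opening a name such as COM1 reaches the device rather than a file, so code that builds filenames from user input should screen these up front:

#include <cassert>
#include <cctype>
#include <string>
#include <vector>

bool IsReservedName(std::string name)
{
    // Compare the base name (before any extension), case-insensitively:
    // "com1.txt" is just as reserved as "COM1".
    name = name.substr(0, name.find('.'));
    for (char& c : name)
        c = static_cast<char>(std::toupper(static_cast<unsigned char>(c)));
    static const std::vector<std::string> reserved = {
        "CON", "PRN", "AUX", "CLOCK$", "NUL",
        "COM1", "COM2", "COM3", "COM4", "COM5",
        "COM6", "COM7", "COM8", "COM9",
        "LPT1", "LPT2", "LPT3", "LPT4", "LPT5",
        "LPT6", "LPT7", "LPT8", "LPT9"
    };
    for (const auto& r : reserved)
        if (name == r)
            return true;
    return false;
}

int main()
{
    // Test cases in the spirit of the checklist above.
    assert(IsReservedName("COM1"));
    assert(IsReservedName("aux.log"));
    assert(!IsReservedName("report.txt"));
    return 0;
}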


Input from Registry: Is the Registry Used to Store or Retrieve Information?

Try these tests if your application is using the registry:

■ If the registry keys are deleted (removed from system), what happens?
■ What about registry data type changes (delete key, create same named key with different data type)?
■ Try changing access control lists on the folder that contains the key. (Read only, can’t set value, and so on.) For example: Remove the key and then remove the right to create a key; see how the application responds.
■ Delete a folder that contains the key.
■ Data content changed. (See the API tests that follow.)
■ Make sure no user-specific data is written in the HKLM tree (and vice versa).
■ You can use RegMon to find the registry keys that your features are using.
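As an illustration of surviving the first two registry cases above, here is a small Win32 C++ sketch of my own. The key path Software\MyApp and the value name TimeoutSec are hypothetical; build it on Windows and link with advapi32.lib:

#include <windows.h>
#include <cstdio>

// Reads a DWORD value; returns 'fallback' if the key or value is missing,
// or if someone re-created the value with a different data type.
DWORD ReadDwordSetting(const char* subKey, const char* valueName,
                       DWORD fallback)
{
    HKEY hKey;
    if (RegOpenKeyExA(HKEY_CURRENT_USER, subKey, 0, KEY_READ, &hKey)
        != ERROR_SUCCESS)
        return fallback;                      // key deleted or ACL denied
    DWORD type = 0, data = 0, size = sizeof(data);
    LONG rc = RegQueryValueExA(hKey, valueName, nullptr, &type,
                               reinterpret_cast<BYTE*>(&data), &size);
    RegCloseKey(hKey);
    if (rc != ERROR_SUCCESS || type != REG_DWORD)
        return fallback;                      // missing or wrong data type
    return data;
}

int main()
{
    // Per-user data belongs under HKCU, not HKLM, as the list above notes.
    DWORD timeout = ReadDwordSetting("Software\\MyApp", "TimeoutSec", 30);
    std::printf("timeout = %lu seconds\n",
                static_cast<unsigned long>(timeout));
    return 0;
}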

Strings: Do You Input Strings into Your Application?

Since most applications have at least one location where strings are used as input, you should test these various inputs:

■ Input a null string. What happens?
■ Enter a string without ‘\0’ (if using C or C++ code). Does the app crash?
■ Try different extended ASCII characters (for example, Alt-0233).
■ What about reserved strings or sequences that require special treatment? Or strings that mean something to the underlying code, such as “++”, “&”, “\n” (C++)? There are also special ASCII and UNICODE characters, such as Ctrl characters, that should be tested. Don’t forget reserved device names, such as AUX and COM1. A good table enumerating special/troublesome ASCII characters can be found on page 29 of How to Break Software by James Whittaker.


■ Using long strings that reach the maximum limit and over the limit of any functions that have string parameters.
■ Input white space, carriage return, tab characters.
■ Test international strings. There are lots of things you need to do here that I won’t get into details on. Enable INTL character sets on your machine and have at it. Most INTL strings tend to take up 30% more space than the ENU versions. Look for overlaps and poor UI in your LOC versions. You can sometimes get around various UI verifications by pasting the text rather than typing it.
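Here is a short C++ sketch of my own that exercises several of the string cases above against a hypothetical Sanitize() routine; the routine and its contract are illustrative assumptions, not a prescribed API:

#include <cassert>
#include <string>

// Hypothetical routine under test: copies at most maxLen chars and is
// required to tolerate NULL input and unterminated buffers.
std::string Sanitize(const char* input, size_t maxLen)
{
    if (!input)                       // null string: don't crash
        return std::string();
    size_t len = 0;
    while (len < maxLen && input[len] != '\0')   // never read past maxLen
        ++len;                        // handles a missing '\0' terminator
    return std::string(input, len);
}

int main()
{
    assert(Sanitize(nullptr, 10).empty());          // null input
    assert(Sanitize("", 10).empty());               // empty string
    char raw[3] = { 'a', 'b', 'c' };                // no '\0' at all
    assert(Sanitize(raw, sizeof(raw)) == "abc");    // bounded read
    assert(Sanitize("caf\xE9", 10).size() == 4);    // extended ASCII (e-acute)
    return 0;
}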

Numeric Values: Does Your Program Take Numeric Values for Input?

After checking string inputs, you should test any numeric values that can be input into your program:

■ Try entering negative values, zero, or nothing (in VB, where applicable).
■ Try the least numeric value and greatest numeric value for the type.
■ Does your numeric value input have boundaries (from 1 to 50)? If yes, test those. Test at least 1, 50, 0, 51, and, of course, valid inputs.
■ Test different data types. (For example, if the input expects int, enter a real number or a character.)
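Boundary cases like these are easy to capture as a table of asserts. The following C++ sketch is my own illustration for a hypothetical field that accepts 1 to 50:

#include <cassert>
#include <climits>

// Hypothetical validator under test.
bool IsValidCount(int n) { return n >= 1 && n <= 50; }

int main()
{
    assert(IsValidCount(1));        // lower boundary
    assert(IsValidCount(50));       // upper boundary
    assert(!IsValidCount(0));       // just below
    assert(!IsValidCount(51));      // just above
    assert(!IsValidCount(-7));      // negative
    assert(!IsValidCount(INT_MIN)); // least value for the type
    assert(!IsValidCount(INT_MAX)); // greatest value for the type
    assert(IsValidCount(25));       // a plainly valid input
    return 0;
}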

What Inputs Does Your Web Application Take?

For web applications, all previous tests apply, as well as testing these inputs:

■ Escape sequences that are not allowed or checked to ensure that they do not allow something malicious to render. (For example, there are four ways to represent ‘\’: %5C = %255C = %%35%63 = %25%35%63.)
■ Look for HTML encoding check where applicable: ‘
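One common way to exercise the escape-sequence case above is to percent-decode the input repeatedly until it stops changing and only then validate it, so that %5C, %255C, and %25%35%63 all surface the same backslash. The following C++ sketch is my own illustration, not code from the guide:

#include <cassert>
#include <cctype>
#include <string>

int HexVal(char c)
{
    if (std::isdigit(static_cast<unsigned char>(c))) return c - '0';
    c = static_cast<char>(std::tolower(static_cast<unsigned char>(c)));
    return (c >= 'a' && c <= 'f') ? c - 'a' + 10 : -1;
}

std::string PercentDecodeOnce(const std::string& s)
{
    std::string out;
    for (size_t i = 0; i < s.size(); ++i) {
        if (s[i] == '%' && i + 2 < s.size() &&
            HexVal(s[i + 1]) >= 0 && HexVal(s[i + 2]) >= 0) {
            out += static_cast<char>(HexVal(s[i + 1]) * 16 +
                                     HexVal(s[i + 2]));
            i += 2;
        } else {
            out += s[i];
        }
    }
    return out;
}

// Decode until a fixed point so multiply-encoded sequences surface the
// underlying character before any validation runs.
std::string FullyDecode(std::string s)
{
    for (;;) {
        std::string next = PercentDecodeOnce(s);
        if (next == s)
            return s;
        s = next;
    }
}

int main()
{
    assert(FullyDecode("%5C") == "\\");
    assert(FullyDecode("%255C") == "\\");
    assert(FullyDecode("%25%35%63") == "\\");
    return 0;
}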