Expanding Choice: Moving to Linux and Open Source with Novell Open Enterprise Server®
JASON WILLIAMS, PETER CLEGG, AND EMMETT DULANEY
Published by Pearson Education, Inc. 800 East 96th Street, Indianapolis, Indiana 46240 USA
Expanding Choice: Moving to Linux and Open Source with Novell Open Enterprise Server

Copyright © 2005 by Novell, Inc. All rights reserved. No part of this book shall be reproduced, stored in a retrieval system, or transmitted by any means, electronic, mechanical, photocopying, recording, or otherwise, without written permission from the publisher. No patent liability is assumed with respect to the use of the information contained herein. Although every precaution has been taken in the preparation of this book, the publisher and author assume no responsibility for errors or omissions. Nor is any liability assumed for damages resulting from the use of the information contained herein.

International Standard Book Number: 0-672-32722-8
Library of Congress Catalog Card Number: 2004109609
Printed in the United States of America
First Printing: March 2005
08 07 06 05 4 3 2 1

Trademarks
All terms mentioned in this book that are known to be trademarks or service marks have been appropriately capitalized. Novell Press cannot attest to the accuracy of this information. Use of a term in this book should not be regarded as affecting the validity of any trademark or service mark. Novell and NetWare are registered trademarks, and Novell Press and the Novell Press logo are trademarks of Novell, Inc. in the United States and other countries. SUSE is a registered trademark of SUSE Linux AG. Linux is a registered trademark of Linus Torvalds. All brand names and product names used in this book are trade names, service marks, trademarks, or registered trademarks of their respective owners.

Warning and Disclaimer
Every effort has been made to make this book as complete and as accurate as possible, but no warranty or fitness is implied. The information provided is on an “as is” basis. The authors and the publisher shall have neither liability nor responsibility to any person or entity with respect to any loss or damages arising from the information contained in this book or from the use of the CD or programs accompanying it.

About Novell Press
Novell Press is the exclusive publisher of trade computer technology books that have been authorized by Novell, Inc. Novell Press books are written and reviewed by the world’s leading authorities on Novell and related technologies, and are edited, produced, and distributed by the Que/Sams Publishing group of Pearson Education, the worldwide leader in integrated education and computer technology publishing. For more information on Novell Press and Novell Press books, please go to http://www.novellpress.com.

Special and Bulk Sales
Pearson offers excellent discounts on this book when ordered in quantity for bulk purchases or special sales. For more information, please contact U.S. Corporate and Government Sales, 1-800-382-3419, [email protected]. For sales outside of the U.S., please contact International Sales, [email protected].
Acquisitions Editor: Jenny Watson
Managing Editor: Charlotte Clapp
Senior Project Editor: Matthew Purcell
Copy Editor: Karen Annett
Indexer: Ken Johnson
Proofreader: Seth Kerney
Technical Editors: Scott Ivie, Matt Ryan
Publishing Coordinator: Vanessa Evans
Book Designer: Gary Adair
Contents at a Glance

Introduction  1
CHAPTER 1: A History of Linux and the Open Source Movement  7
CHAPTER 2: The Open Source Solution  31
CHAPTER 3: Open Source in the Real World  75
CHAPTER 4: A Brief History of NetWare  145
CHAPTER 5: The Rise and Reason for Open Enterprise Server (OES)  151
CHAPTER 6: Installing and Upgrading to Open Enterprise Server  175
CHAPTER 7: Administering Open Source  193
APPENDIX A: Open Source Case Studies  205
Index  219
Table of Contents

Introduction  1
    Looking for a Common Thread  1
        Golden Gate University  1
        CCOC  2
        Hewitt Associates  2
    Why This Book?  3
    How This Book Is Organized  5

CHAPTER 1: A History of Linux and the Open Source Movement  7
    Ancient History  7
    The Roots of Open Source  9
        GPL  9
        Internet  11
    The History of Linux  11
    Open Source Is More Than Linux  12
        Apache  13
        Mozilla  13
        JBoss  14
        OpenOffice.org  14
        MySQL  14
    How Open Source Development Occurs  15
    Open Source Versus Proprietary  18
        Open Source AND Proprietary  21
    How Open Source Makes Money  25
        Distribution  25
        Support  26
        Leverage—Software  26
        Leverage—Hardware  26
        Consulting  26
        Training and Education  27
    Summary  29

CHAPTER 2: The Open Source Solution  31
    Open Source Advantages  31
    Open Source Solutions  33
        Operating System—Linux  35
        Server Services  39
        Desktop Services  40
        Web Applications and Services  42
        Development Tools  42
        Databases  45
        Documentation  46
        Open Source Project Sites  47
    Software Costs  48
    Simplified License Management  50
        Open Source License Templates  51
        Simplified License Management  53
    Lower Hardware Costs  54
    Scalability, Reliability, and Security  55
        Scalability  55
        Reliability  58
        Security  59
    Support  60
    Deny Vendor Lock-in  64
    Quality Software and Plentiful Resources  68
        Who Are Open Source Developers?  69
        How Does the Open Source Process Work?  70
    Summary  74

CHAPTER 3: Open Source in the Real World  75
    Integration Factors to Consider  75
        Assessment  77
        Design  78
        Implementation  79
        Training  79
        Support  80
    The Linux Solution  81
        SUSE Linux Product Line  81
        How It Works  84
        Implementation Overview  86
    File Services  87
        What You Can Do  87
    Print Services  93
        How Internet Printing Works  94
        How Internet Printing Is Implemented  95
    Edge Services  97
        Build Your Own  98
        Buy Commercial  102
    DNS/DHCP Servers and Routing, Oh My!  102
        DNS  103
        DHCP  103
        Routing  104
    Web Servers  104
    Workgroup Databases  108
        Database for the Enterprise  109
        Database for Workgroup  109
        Database Implementation  110
    Light Application Servers  111
        Novell Enables Web Services Creation  114
        Open Source Web Services Tools  118
        Proprietary Application Servers  119
    Computation Clusters  120
        High-Availability Clusters  120
        High-Performance Cluster  122
    Data Center Infrastructure  124
        Symmetric Multiprocessing  124
        Non-Uniform Memory Access  125
        Hyperthreading  125
        Grid Computing  126
        Terminal Services/Thin Clients  127
        Multisite Clustering  128
        Storage  128
        Platform Support  128
    Enterprise Applications  130
        Oracle  130
        IBM  131
        SAP  132
        Siebel  132
        PeopleSoft  132
        Other  133
    Messaging and Collaboration  133
        Background  134
        Novell GroupWise  136
        Integration  136
    Internal Development  137
        Mono  138
        Novell exteNd  139
        Lower Development Cost  140
    Power Workstations  141
        Power  141
        Flexibility  142
        Programmability  142
        Cost  142
    Summary  143

CHAPTER 4: A Brief History of NetWare  145
    The History of NetWare  145
    NetWare for the Uninitiated  147
        Management Tools  147
        Client Tools  148
        IP/IPX Tools  148
    Why Novell and Open Source  148
    Summary  150

CHAPTER 5: The Rise and Reason for Open Enterprise Server (OES)  151
    What OES Offers  151
        Services and Utilities in Both Operating Systems  152
        Services and Utilities in Linux  153
        Services and Utilities in NetWare  153
    A Transition Strategy for the Data Center  153
        Initiate the Project  154
        Planning and Design  155
        Deployment  156
    A Transition Strategy for the Desktop  159
        Office Productivity Applications  159
        Thin-Client Applications  160
        Business Applications  161
        The Linux Desktop  161
        Transition Considerations  162
    Approaches for the Desktop Knowledge Worker  163
        Proprietary OS Desktops  164
        Thin-Client Desktops  166
        Novell Linux Desktop  166
    Business Appliances  168
        Single-Purpose Configurations  168
        Multiple Boot Options  169
        Client Management  169
        Business Appliance Applications  170
        Thin-Client Hardware  170
    Remote and Branch Offices  171
        How It Works  172
    Summary  174

CHAPTER 6: Installing and Upgrading to Open Enterprise Server  175
    Installation Considerations  175
        Minimum System Requirements  176
        Choosing a Path: Linux or NetWare  176
        Choosing a File System  177
        eDirectory Design Considerations  177
    Walking Through a New Installation  178
        Installing the Startup Files and Creating a SYS Volume  179
        Running the NetWare Installation Wizard  180
    Upgrading and Migrating  181
        Upgrading a NetWare Server  181
        Migrating from a NetWare Server  190
        Migrating from a Windows Server  190
    Summary  191

CHAPTER 7: Administering Open Source  193
    Working with YaST  194
    Novell eDirectory  196
        Hierarchy  196
        Inheritance  196
        Standards Support  197
        Distributed Systems  197
        Novell iManager  198
        RPM Package Manager  198
        Novell ZENworks  200
    Administering with ZENworks  202
        Drive Imaging  203
        Applying Policies  203
        Application Distribution  204
        Hardware/Software Inventory  204
        Remote Management  204

APPENDIX A: Open Source Case Studies  205
    Data Center Transition Case Study—Golden Gate University  205
        The Way Things Were  206
        How They Did It  207
        How Novell Helped  209
        Smooth Transition  210
    Novell Open Source Transition Case Study  210
        The Desktop  211
        The Data Center  214
    Other Businesses  216
        Burlington Coat Factory  216
        Overstock.com  216

Index  219
About the Authors

Jason Williams is the product manager for Novell Open Enterprise Server (OES) and the creator of the requirements document for OES. He joined Novell in 1999 and has served as product manager for GroupWise®, WebAccess, and Wireless. He created Novell’s Instant Messaging system, GroupWise Messenger. Previously, he held positions in London at numerous financial institutions, including the Bank of England, as well as at the BBC World Service.

Peter Clegg is a freelance author and former technology editor for McGraw-Hill. He has been published in major trade and business magazines and specializes in evolving IT services. He has been writing about networking technology for over 15 years. He was Director of Marketing for NetWare and Internet services at Novell.

Emmett Dulaney is the certification columnist for UnixReview and author of Novell Certified Linux Professional Study Guide from Novell Press. He has earned 18 vendor certifications, written several books on Linux, Unix, and certification study, spoken at a number of conferences, and is a former partner at Mercury Technical Solutions.
Acknowledgements

The authors would like to acknowledge and thank the many people at Novell who contributed in some way to the creation of this book—as well as all those individuals around the world who contribute so generously to establishing, building, and advancing the body of open source work.
We Want to Hear from You!

As the reader of this book, you are our most important critic and commentator. We value your opinion and want to know what we’re doing right, what we could do better, what topics you’d like to see us cover, and any other words of wisdom you’re willing to pass our way. You can email or write me directly to let me know what you did or didn’t like about this book—as well as what we can do to make our books better. Please note that I cannot help you with technical problems related to the topic of this book, and that due to the high volume of mail I receive, I may not be able to reply to every message. When you write, please be sure to include this book’s title and author as well as your name and email address or phone number. I will carefully review your comments and share them with the author and editors who worked on the book.

Email:
[email protected]
Mail:
Mark Taber
Associate Publisher
Novell Press/Pearson Education
800 East 96th Street
Indianapolis, IN 46240 USA
Reader Services

For more information about this book or others from Novell Press, visit our website at www.novellpress.com. Type the ISBN or the title of a book in the Search field to find the page you’re looking for.
Introduction

“Open source” is a catch phrase that passes from lip to lip and from magazine article to magazine cover more times a day than can be counted. So many speak of it, yet you have to wonder if they really understand the depth of the topic they’re discussing and the world of possibilities that it holds.
Looking for a Common Thread

Let’s start with a pop quiz. Read the following three short case studies and try to ascertain what each organization has in common.
Golden Gate University

In the heart of San Francisco’s financial district is the central hub of California’s fifth-largest private university. Golden Gate University offers a variety of exceptional undergraduate and graduate programs in business management, information technology, and law, utilizing the most advanced technologies and learning tools available. Golden Gate’s CyberCampus permits students to work and study on their own schedules using the Internet.

Golden Gate is 75% of the way through a major data center consolidation project, which replaces a cacophony of hardware platforms, operating systems, database applications, and web services with Linux, open source applications and networking services, Oracle databases, and Dell hardware. This five-year migration plan allows Golden Gate to simplify administration and consolidate hardware, software, and database content without sacrificing any functionality while providing students and faculty with easier access to more resources. In addition to quantified hardware, software, and management savings, Golden Gate has implemented data center technology that provides it with scalability and choice, and positions it to take advantage of new and emerging open standards solutions.
CCOC

CCOC is a mutually owned service bureau providing information technology services to community banks and credit unions in the northeastern United States. The company has more than 130 customers and nearly 300 employees. The company services 7,000 workstations and 500 ATMs, and processes more than seven million check images each month. CCOC’s solutions included a mix of Unix and Microsoft Windows applications and services.

CCOC’s dedication to customer support, security, and value led it to evaluate and select Linux as the platform of choice for its major applications, including email, Oracle databases, bank teller applications, back office processing software, reporting tools, and financial applications. With Linux, CCOC gets the same performance as Unix on less expensive hardware, without sacrificing reliability. CCOC transitioned its business to Linux and established 99.999% uptime at a fraction of the cost of its previous Unix system. As a result, the company has reduced its overall hardware costs by 40%. Moving several of its Windows applications to Linux has also reduced administration time by 40%.
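To put that uptime figure in perspective, “five nines” (99.999%) of availability allows only a few minutes of downtime in a year. The following quick calculation is an illustrative sketch added here for context, not part of the original case study:

    # Allowed downtime per year for a given availability target.
    MINUTES_PER_YEAR = 365.25 * 24 * 60  # ~525,960 minutes

    for label, availability in [("three nines", 0.999),
                                ("four nines", 0.9999),
                                ("five nines", 0.99999)]:
        downtime = MINUTES_PER_YEAR * (1 - availability)
        print(f"{label} ({availability:.5f}): about {downtime:.1f} minutes of downtime per year")

    # five nines works out to roughly 5.3 minutes of downtime per year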
Hewitt Associates

Hewitt Associates is a global human resources outsourcing and consulting firm delivering a complete range of human capital management services, such as HR, payroll, benefits, and health care. Hewitt’s client roster includes more than half of the Fortune 500, with more than 17 million participants. Hewitt deployed a grid of Intel-based Linux blade servers to off-load CalcEngine, an employee pension calculation program, from its mainframe. The result was that the number of pension calculations doubled (1.2 million to 2.4 million) with a 90% reduction in costs compared to the mainframe.

So what does each organization have in common? If you stated “open source,” you have the main point. If you also picked up on the fact that each company has a migration in process, you’re observant. If you noted that open source coexists with proprietary software or legacy applications, you did very well. And, if you deduced that open source is fundamentally altering the way these companies do business (both internally and externally), congratulations, you scored the maximum points possible.
Why This Book?

The purpose of this book is threefold. First, for those in mainstream organizations who are just coming to an awareness of open source, it provides a snapshot of what open source is, how the open source movement has evolved, where it currently is positioned, and what future possibilities might lie in store. The second purpose for this book is to provide information, tools, and a framework for analysis that will enable computer system professionals and IT decision makers to assess the benefits of implementing open source. The third purpose for this book is to introduce Open Enterprise Server (OES)—the newest operating system from Novell. Once again, Novell has raised the bar for the IT industry by creating an operating system that is truly second to none. Not only does OES offer all the services that you have come to know and depend upon in NetWare, but it also combines the best features of Linux and open source to provide great value and a plethora of reasons to incorporate it in your environment.

Novell has been providing enterprise customers with networking solutions for over 20 years. With OES, Novell continues to provide you with additional choices and flexibility that you might not be getting from other solution vendors. The following are the top 10 reasons why Novell Open Enterprise Server belongs in your environment:

1. You can achieve greater choice and flexibility by deploying Linux and other open source technologies into your environment.
2. You can minimize training costs by leveraging your current infrastructure, skill sets, and technology investments.
3. OES allows you to use Linux—the leading open source application development platform—to run your line-of-business applications.
4. With OES, you can install the foundation for additional industry-leading network resource- and identity-management services available for your enterprise.
5. You can achieve enterprise-class scalability for applications, collaboration, file, print, security, and storage.
6. You can build the foundation for a complete Linux-based server-to-desktop alternative to current proprietary offerings.
7. You can partner with the leading network vendors and gain access to the Novell global ecosystem for technical assistance, consulting, training, certification, and development services.
8. OES allows you to mitigate business risks through proven high-availability, service-failover, and disaster-recovery solutions.
9. You can manage your mixed environment of NetWare and Linux through a single web-based management interface and simplify IT staff workload.
10. You will realize a solid return on investment by lowering ongoing system management and hardware costs by moving to an Intel-based platform.

OES is the future. Novell has been recognized in the press and by analysts for its clear commitment to making Linux better for enterprise-level customers and for continuing to offer choice and flexibility. Key partners such as IBM, Dell, HP, Computer Associates, and Veritas are supporting Novell’s strategic move to support Linux and offer customers a choice.

OES is a complex product, but its complexity is all behind the scenes. From the standpoint of an implementer and administrator, there is no other product offering the same features that is easier to work with.

Open source is and will continue to be a significant factor for companies of all sizes, but particularly for enterprise sites in which greater degrees of information exchange and collaboration are required between employees, customers, partners, and suppliers. Open source cost and capability factors will increasingly influence the dynamics of IT, which, in turn, will affect the overall competitiveness and viability of an organization.

It’s worth noting that most analysts (including strategists at Novell) believe open source will never completely displace proprietary software. In general, open source is a commodity resource available to accommodate common, mass-market IT needs. The spread of open source will advance, filling mass-market needs while moving from general to specialized applications. In the foreseeable future, however, the demand for proprietary software with specialized function and custom utility will always exist. IT solution providers, including independent software vendors such as Novell, can continue to be profitable by meeting specialized application needs and by providing a support and service ecosystem around the complete stack with support and management for both open and proprietary solutions. Open source, while revolutionary in many ways, will in the long run serve as an evolutionary catalyst for greater IT capability.
How This Book Is Organized

This book removes the mystery surrounding open source and walks you through various elements of OES. When you finish reading this book, you will have a much better understanding of the open source movement and Novell Open Enterprise Server and be better prepared to incorporate it in your environment.

Chapter 1, “A History of Linux and the Open Source Movement,” introduces you to the history and basics of Linux. Knowing the background of both Linux and open source is crucial to understanding how OES has come to be. Chapter 2, “The Open Source Solution,” moves beyond the history and into the theoretical advantages of implementing open source as an answer to your implementation issues. In Chapter 3, “Open Source in the Real World,” you will learn about the major institutions that have tried open source and embraced it wholeheartedly. Taking from their experiences, you can evaluate its suitability for your environment.

In Chapter 4, “A Brief History of NetWare,” you will be introduced to the organization behind the best network operating systems on the market. In Chapter 5, “The Rise and Reason for Open Enterprise Server (OES),” you will see why this operating system has come to be and the standard it sets for other network operating systems to reach. In Chapter 6, “Installing and Upgrading to Open Enterprise Server,” you will walk through the installation and migration to OES from another operating system. Chapter 7, “Administering Open Source,” introduces you to tools that can be used to manage your new environment.
CHAPTER 1
A History of Linux and the Open Source Movement

This chapter focuses on Linux for the audience that knows very little about it. It looks at the history of the operating system, and the open source movement of which it is a part. If you have worked with Linux for some time, you might want to jump to the next chapter. On the other hand, if you are coming to Linux from a NetWare background, this chapter should help fill in some gaps in your knowledge.
Ancient History

Long before there was Linux, there was Unix. The Unix operating system came to life more or less by accident. In the late 1960s, an operating system called MULTICS was designed by the Massachusetts Institute of Technology to run on GE mainframe computers. Built on banks of processors, MULTICS enabled information-sharing among users, although it required huge amounts of memory and ran slowly.

Ken Thompson, working for Bell Labs, wrote a crude computer game to run on the mainframe. He did not like the performance the mainframe gave or the cost of running it. With the help of Dennis Ritchie, he rewrote the game to run on a DEC computer, and in the process wrote an entire operating system, as well. Several hundred variations have circulated about how the system came to be named what it is, but the most common is that it is a mnemonic derivative of MULTICS.
In 1970, Thompson and Ritchie’s operating system came to be called Unix, and Bell Laboratories kicked in financial support to refine the product in return for Thompson and Ritchie adding text-processing capabilities. A side benefit of this arrangement was that it enabled Thompson and Ritchie to find a faster DEC machine on which to run their new system. By 1972, 10 computers were running Unix, and in 1973, Thompson and Ritchie rewrote the kernel from assembly language to C language—the brainchild of Ritchie. Since then, Unix and C have been intertwined, and Unix’s growth is partially due to the ease of transporting the C language to other platforms. Although C is not as quick as assembly language, it is much more flexible and portable from one system to another.

AT&T, the parent company of Bell Laboratories, was not in the computer business (partially because it was a utility monopoly at the time and under scrutiny from the government), so it did not actively attempt to market the product. Instead, AT&T offered Unix in source-code form to government institutions and universities for a fraction of its worth. This practice led to Unix eventually working its way into more than 80% of the universities that had computer departments. In 1979, it was ported to the popular VAX minicomputers from Digital, further cementing its position in universities. The subsequent breakup of the AT&T monopoly in 1984 enabled the former giant to begin selling Unix openly.

Although AT&T continued to work on the product and update it by adding refinements, those unaffiliated individuals who received early copies of the operating system—and could interpret the source code—took it upon themselves to make their own enhancements. Much of this independent crafting took place at the University of California, Berkeley. In 1975, Ken Thompson took a leave from Bell Labs and went to teach at Berkeley in the Department of Computer Science. It was there that he recruited a graduate student named Bill Joy to help enhance the system. In 1977, Joy mailed out several free copies of his system modifications.

While Unix was in-house at Bell, enhancements to it were noted as version numbers—Versions 1 through 6. When AT&T began releasing it as a commercial product, system numbers were used instead (System III, System V, and so on). The refinements done at the university were released as the Berkeley Software Distribution, or BSD (2BSD, 3BSD, and so on). Some of the more significant enhancements to come from Berkeley include the vi editor and the C shell. Others include increased filename lengths; AT&T accepted 14 characters for filenames, and Berkeley expanded the limit to 255.
Toward the end of the 1970s, an important moment occurred when the Department of Defense (DOD) announced that its Advanced Research Projects Agency would use Unix and would base its version on the Berkeley software. This achievement gave Unix a national name and put a feather in the cap of the Berkeley version. One of the demands placed by the DOD on the operating system was for networking, and Unix thus moved farther along the line of technological advancements.

Bill Joy, in the meantime, left the campus setting and became one of the founding members of Sun Microsystems. The Sun workstations used a derivative of BSD known as the Sun Operating System, or SunOS. In 1988, Sun Microsystems and AT&T joined forces to rewrite Unix into System V, release 4.0. Other companies, including IBM and Digital Equipment, fearful of losing their positions in the Unix marketplace, countered by forming their own standards group to come up with a guideline for Unix. Both groups incorporated BSD in their guidelines, but still managed to come up with different versions of System V.

In April 1991, AT&T created a spin-off company called Unix System Laboratories (USL) to market and continue development of Unix. Immediately, Novell bought into the company. Later that year, a joint product, UnixWare, was announced that would combine Unix with some of the features of NetWare. It finally came to market in November 1992, and in December of that same year, Novell purchased all of USL from AT&T.

In the early 1990s, Berkeley announced that it was in the business of providing education and was not a commercial software house—no more editions of BSD would be forthcoming. Sun Microsystems, one of the largest providers of Unix in the world, quickly moved to the System V standards that had grown from enhancements to the original AT&T Unix.
The Roots of Open Source

Although there are multiple forces that have contributed to the evolution of open source, there are two significant factors that cannot be overlooked: the General Public License (GPL) and the Internet.
GPL

The GPL is a fundamental twist on property law that ensures that the primary right of an owner is to distribute—not restrict access to—software. The GNU GPL originated with Richard Stallman, who was a developer at the MIT
Artificial Intelligence Lab through the 70s and early 80s when software (especially in academia) was generously shared with anyone who was interested. Source code was always included, recognition was in the form of credits in the header files, and solutions were freely available. In this spirit, the University of California at Berkeley had become the focal point for Unix development and the major source of free software with the Berkeley Software Distribution (BSD). Around 1984, AT&T began enforcing its rights to Unix and set the price of a source license (which had been close to free) at $100,000; years later, it would even sue UC Berkeley over the BSD code. In the same general time frame, Stallman resigned his position at MIT to head up the GNU Project and issued the GNU Manifesto in an effort to preserve the tradition of freely distributed Unix-style software. GNU (a recursive acronym that stands for GNU is Not Unix) is a complete Unix-style operating system that is “free software.”

Stallman obtained legal counsel and formulated what has come to be the commonly used template for open source licensing, the GNU General Public License, or GPL. The GPL basically protects the rights of anyone to use, modify, and distribute software. Stallman’s objectives are freedom to run software for any purpose, freedom to study source code and modify it for any need, freedom to pass it on, and freedom to build the community by making any modifications available to all parties. If you benefit from the community, they should benefit from your work. The “freedom to build the community” objective is the restrictive portion of the GPL. In practice, any code that is modified, enhanced, or altered in any way must also be subject to the same license. In other words, you can have the software, modify it, and give it away, but you must make your modifications available to the public, where the same freedoms are attached. This has sometimes been referred to as the “viral” effect, as GPL software and anything it touches becomes subject to the same license.

Debates have swirled around the word “free,” mostly because of common perceptions and the unique nature of the GPL. The adage is “it’s free as in free speech, not free beer; free as in libre, not free as in gratis.” Stallman has been characterized as religiously zealous about perpetuating software freedom and there is excellent information online about the history and sometimes rocky evolution of free software.

Here’s just a note of clarification on the term open source. Software originally released under the GPL was termed “free software” as a heritage of Stallman’s “Free Software Foundation.” But several factors, including confusion over multiple meanings of the word free, Stallman’s fervent stance that all software should be free, and the roiling momentum of the movement, led to the establishment of the Open Source Initiative (OSI) with the Open Source Definition (OSD), and the more accurate moniker “open source” was spawned. The important point, however, is that the concept of free or open software was articulated in words, framed in legalese, and has now become established as mainstream. At least two versions of the GPL and several other license types have evolved to legally define the free realm of open source.
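As a practical illustration of how the GPL is applied (an added example, not from the original text): a project typically ships the full license text alongside its sources and places a short permission notice at the top of each file. The notice below paraphrases the standard header suggested in the license’s “How to Apply These Terms” appendix; the canonical wording is distributed with the GPL itself.

    # Copyright (C) 2005  <name of author>
    #
    # This program is free software; you can redistribute it and/or modify
    # it under the terms of the GNU General Public License as published by
    # the Free Software Foundation; either version 2 of the License, or
    # (at your option) any later version.
    #
    # This program is distributed in the hope that it will be useful,
    # but WITHOUT ANY WARRANTY; without even the implied warranty of
    # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
    # GNU General Public License for more details.

Any modified copies that are distributed must carry the same terms, which is exactly the “freedom to build the community” obligation described above.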
Internet

The second significant open source factor is the Internet—without the Internet, open source simply wouldn’t be feasible. The World Wide Web services and the globally connected Internet have affected open source in two key ways. The first is as a communications infrastructure that easily links geographically dispersed developers in a loose association for a common purpose. Much has been written about the Internet as a catalyst for evolution because of the fact that people anywhere on earth, with the common language of code, can organize themselves around a common need. The Internet provides the link, facilitates coordination, transmits communication, and often is key to demonstrating the value of a particular solution.

The second key effect is that the Internet reduces software distribution costs to zero. With one click, productive functionality can be distributed anywhere in the world in real time. Eben Moglen, general counsel for the Free Software Foundation, postulates that current copyright law is an outgrowth of industrial production and physical distribution. Given zero distribution costs, contributed code, and an established common need, is the concept of copyright for open source software even relevant? That’s one reason why the GPL and its variations have been termed the “copyleft.” There is fuel aplenty to fire any discussion around “free” law, the future of software as we know it, and the political, sociological, ethnological, and even religious ramifications of open source. But for our purposes, let’s narrow the view to what’s happened so far and the evidence that open source is indeed a viable alternative to some proprietary software.
The History of Linux

At the heart of any operating system is the kernel. It is the kernel that is responsible for all low-level tasks and that handles all system requests. You must have a kernel to have an operating system.

In 1991, Linus Torvalds, a computer science student at the University of Helsinki, made a kernel that he had written freely available. The kernel he
wrote mirrored many of the features of Unix and Minix—a Unix-like operating system that was used in many university learning environments. Linus Torvalds licensed his Linux kernel under the GPL. Anyone who obtains a copy of Linux can have access to the source code, and is free to modify and distribute it. They are bound by the GPL license, however, to make any modifications available to the public. The GPL in a real way ensures that Linux will not go unsupported or be fractured like Unix because everyone has access to the source code.

Developers and programmers around the world took to the concept of an open source operating system and quickly began writing and adding features. The kernel version numbers quickly incremented from the 0.01 that was first made available to the 2.6.x versions available today. To this day, Linus Torvalds continues to oversee the refinement of the kernel.

The GNU GPL that governs Linux continues to keep it open source and free to this day. To summarize the license, it states that everyone is given the same set of rights and anyone may make modifications as long as they make those modifications available to all.

Ray Noorda, one of the first CEOs of Novell, saw the value of the Linux operating system early on and formed another company—Caldera—to pursue opportunities there. This company eventually grew to purchase the Unix assets of the Santa Cruz Operation (SCO) and changed its name to The SCO Group. In 2003, Novell purchased Ximian, a cutting-edge Linux company formed by Miguel de Icaza and Nat Friedman—two visionaries in the Mono movement for an open source platform. In 2004, Novell purchased SUSE—the biggest vendor of Linux in Europe.
Open Source Is More Than Linux

Linux isn’t the only open source success story. Eric Raymond, a Unix “hacker” since 1982 and author of The New Hacker’s Dictionary and the essay “How to Become a Hacker,” was interested in both the ethnographical and technological ramifications of open source development. After monitoring the growing success of Linux, he decided to test whether the open source development process was replicable. Raymond was able to duplicate the open source effect with fetchmail, a remote-mail retrieval and forwarding utility that is now in use worldwide at Linux-based ISPs and included with Linux distributions from Red Hat, Debian, and SUSE. Fetchmail users are estimated to be in the hundreds of thousands, and possibly more than a million.

After observation and duplication of the open source process, Raymond authored his now famous “The Cathedral and the Bazaar” paper, which captured
the essence of the open source movement and summarized it in concise and witty aphorisms.

The Apache Web Server has been termed the “open source killer application,” as it currently hosts 68% of all websites on the Internet. Apache started with a core group of eight developers in different companies who had been using a public domain http daemon developed at the National Center for Supercomputing Applications (NCSA) at the University of Illinois. When several of the daemon’s collaborators left NCSA to start Netscape in 1994, the developers pooled resources and talents to further the code. Brian Behlendorf, who was working for Wired magazine and using the daemon to host articles, volunteered to host the code and share public email with anyone interested. The patched-up NCSA code became the basis of the Apache Web Server. Robert Thau reworked much of the code to make it modular and more conducive to distributed development, and version 1.0 was released in December of 1995.

Interest in Apache grew quickly as it was a solution that was immediately available to fill the commercial need to publish information on the Internet. The success of open source Apache has been phenomenal. In addition to providing the world’s leading web server, the Apache Foundation was established for further development of dynamic services around the web server and has also produced Jakarta Tomcat, a mainstream web application server.

The field of open source development has dramatically expanded to include a host of project communities that specialize in providing applications, utilities, and development tools. These resources are typically available under an open source–type license that provides access to source code and includes the freedom to copy and use the software for any purpose. The following sections discuss a short list of open source solutions and organizations beyond Linux.
Apache

Apache, as mentioned, provides not only a web server but also services around web applications, including dynamic application servers and scripting and programming capabilities.
Mozilla

In an effort to thwart Microsoft’s domination of the browser market, Netscape released code for the Netscape browser, making it open source. Mozilla, the browser’s original internal code name, has become a leading browser but is also a framework for creating open source desktop applications.
JBoss

JBoss is the umbrella term for a collection of Java-based open source projects, the most notable of which is the JBoss Application Server. The JBoss application server is a full-featured J2EE application server that is quickly gaining ground on offerings from established players like IBM, BEA, and Oracle.
OpenOffice.org

Open source solutions extend to the desktop as well with a full office suite available in the form of OpenOffice.org. OpenOffice.org has as its mission, “To create, as a community, the leading international office suite that will run on all major platforms and provide access to all functionality and data through open-component based APIs and an XML-based file format.” Presentation, drawing, spreadsheet, HTML editing, and word processing are all available as an integrated open source solution.
MySQL

With more than five million installations, MySQL is the world’s most popular open source database. MySQL is used by some of the most database-intensive organizations in the world, including Yahoo!, NASA, and the Associated Press. MySQL AB, the Swedish company that owns the MySQL trademark and source code, has made a profitable business with support and commercial licensing services around an open source solution.

Many more open source communities and projects are available, including programming languages such as Perl, Python, and Tcl/Tk, as well as the GCC compilers. Open source Linux desktops include KDE and GNOME. GIMP (GNU Image Manipulation Program) is open software for photo retouching and image composition. In addition, there are communities for device drivers as well as advanced technologies like clustering. Add to this the abundance of web services applications, such as email, firewall, printing, load balancing, management, and more, and the picture of available open source software begins to look more complete. Hundreds of software applications for line-of-business solutions from insurance to retail have been developed using open source methodologies and are available at sites such as sourceforge.net, freshmeat.net, and forge.novell.com.

This evolving open source picture is what has attracted the attention—and support—of leading industry independent software vendors (ISVs) and original equipment manufacturers (OEMs), such as IBM, Dell, HP, Oracle, BMC, SAP, and Novell. IBM has made a significant investment in Linux, porting it to the entire IBM product hardware line from desktop PCs to mainframes. Dell is
making an active push to attract Sun installations to cheaper Intel boxes running Linux. Oracle has ported its database offerings to run on Linux.
How Open Source Development Occurs

How does open source development occur? Eric Raymond’s first principle is “Every good work of software starts by scratching a developer’s personal itch.” Some code-savvy person runs into a problem with a solution that is not readily available, too expensive, or still unknown. The developer develops a solution that solves the immediate need—it might be lacking interface, documentation, or integration, but it works. The developer posts it where others with similar needs can access it. They use it, modify it, integrate it, and similarly post their enhancements. If the itch is a common one and those needing it are loosely connected, the development effort bubbles to life with a community ecosystem. Some person—often the originator or someone who is visible and trusted by the community—voluntarily steps up to act as a gatekeeper for new modifications and code additions.

Every open source project has its own unique evolution path, so detailing a typical process isn’t practical. However, let’s review the open source Linux ecosystem as an example and then detail each of the open source system principles that are options for the organization of an open source effort. Right now, Linus Torvalds is the master of the universe for Linux. Linus personally decides what will go into the Linux kernel—but not without a lot of input from the Linux community. Although the Linux community is not officially organized, under Linus is a collection of lieutenants, each with delegated responsibility for components, subsystems, or services. This group is responsible for what goes into these components and can further delegate or assign subcomponents to others, called maintainers. It’s interesting to note that although this ad hoc organization resembles a typical hierarchy or tree, the structure has not been formalized. Much of the individual responsibility comes as a function of the interest, availability, and trust of the community. It’s also notable that Linux is not the full-time job for either Linus or his lieutenants. And yet, the results are quality software produced in a timely manner.

The following list describes several elements of a typical open source software development process:
■ Informal responsibility system—The Linux system resembles an inverted tree or hierarchy with Linus, his lieutenants, and the maintainers. Other successful structures include the Apache developers league, which started with a core of eight developers and expanded to form
Apache Group, with membership extended to any interested individual. An email voting system was instituted and issues are voted on by Apache Group members. Those who have contributed code can vote on any issue and votes are binding; other members can vote to voice an opinion. The BSD (Berkeley Software Distribution) structure consists of an inner core of committers who control what goes into the code. Developers submit code to the committers, who then present it to the inner core for approval. The organization structure around the scripting language Perl was described as the “pumpkin holder system,” where lead responsibility for what is added to the code is passed around the inner circle of developers as a rotating responsibility. In each of these open source processes, there is a method for submitting code and an informally established system of responsibility.
■ Development infrastructure—Thanks in part to open source collaboration, several development infrastructures exist in which open source projects can be created, stored, updated, and managed online. Open source communities like those hosted at sourceforge.net have access to Concurrent Versions System (CVS) for managing versions of code, tracking systems for bugs, support requests, feature requests, and patches, plus online forums and more. Even open source development tools such as editors and compilers are available, as are scripting and programming languages.
■ Code forking—In his paper, Raymond contrasts two methods of software development, labeling them the cathedral and the bazaar. In cathedral-style development, the end product is visualized from the beginning with an architectural plan that details form and functionality—often without much input from the intended audience. In the bazaar or open style, form and features evolve amid input from both those developing and using the software in what sometimes appears to be a chaotic process. A principle that helps resolve potential conflicts in this bazaar environment is forking. If there is disagreement by one or more parties as to the direction of the code, any party is free to take the existing code at that point and create a new code path. Whether the new code lives or dies depends on the value that the new direction brings to the community. The new path might gain enough momentum so that the divergent paths of new and old are merged at some point. The new path might have no following and die. Or, it might establish a new direction that eventually leads to differentiated products filling the needs of separate markets. Thus, forking becomes a moderating as well as an innovating element in the open source development process.
■ Bug fixing—One reason that open source code has a reputation for quality is rapid and relevant bug fixes. Generally, those applying the code are both users and developers. They are able to determine deficiencies and often help pinpoint problems—even supplying fixes. Another of Raymond’s adages is “Given enough eyeballs, all bugs are shallow.” With a large pool of co-developers and testers, bugs are going to be found quickly. And, as Raymond states, with a co-developer audience, the fix will be obvious to someone.
With Linux, Torvalds built on this principle and also established a simple yet effective method for maintaining stable production code as well as adding features for the next version release. Odd-numbered versions (2.5, for example) are the equivalent of beta code with evolving features for the next production release. Even-numbered versions (for example, 2.6) are release versions that are stable enough for production use. (A short sketch of this numbering convention appears after the list below.)

Some theorize that you can’t get good quality code from developers who are not motivated by money or the threat of losing their jobs. Research doesn’t support this. Studies have found that open source code can actually be superior to proprietary code for several reasons, including the following:
Developer passion—Most developers of open source are working on something that is interesting to them. Steven Weber, a political scientist on the faculty at UC Berkeley and author of The Success of Open Source, states that, “The key element of the open source process, as an ideal type, is voluntary participation and voluntary selection of tasks.”
■
Peer review—All code is open and viewable for inspection (and testing) by anyone in the community. Proprietary code is often crunched together to meet a launch deadline and is never seen by those who will be evaluating its functionality or usefulness. Knowing that code is widely visible is often motivation for performing at one’s highest level.
■
Code release—No production code is released before its time. There are no market launches, no quarter ends, and often no products competing for market share in the same price range. Code is only released when the community feels it’s ready. However, with an open and often modular approach, code can also be released often. Organizations that need early release functionality can always take preproduction code and make it work for them. Another Raymond principle is “Release early. Release often. And listen to your customers.”
■
Feedback mechanisms—Listening to customers is definitely an integral part of open source development—customers are often co-developers. With rapid release and open code, the customer community not only
helps develop but can effectively help debug code. Raymond says, “Treating your users as co-developers is your least-hassle route to rapid code improvement and effective debugging.” A quote from the OpenSource.org website captures the essence of the open source movement. “The basic idea behind open source is very simple: When programmers can read, redistribute, and modify the source code for a piece of software, the software evolves. People improve it, people adapt it, and people fix bugs. This can happen at a speed that, if one is used to the slow pace of conventional software development, seems astonishing” (www.opensource.org).
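The even/odd kernel numbering convention described earlier in this section is simple enough to check mechanically. The following is a minimal illustrative sketch in Python; the function name and the sample version strings are ours and are not part of any kernel tool.

def kernel_branch(version):
    # Classify a 2.x kernel version using the even/odd minor-number convention:
    # even minor numbers (2.4, 2.6) mark stable production series,
    # odd minor numbers (2.3, 2.5) mark development (beta) series.
    minor = int(version.split(".")[1])
    if minor % 2 == 0:
        return "stable production series"
    return "development series"

for sample in ("2.4.21", "2.5.75", "2.6.5"):
    print(sample, "->", kernel_branch(sample))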
Open Source Versus Proprietary Between the press and blogs, you can find open source prognostications across the spectrum from “just the latest high-tech marketing fad” to “the end of proprietary software and all the companies that produce it.” The reality, of course, is somewhere in the middle and is more a matter of evolution—a function of time and maturation. To be as simple as possible, open source will generally supply software moving along two market axes—starting at mass-market and the operating system level and moving to specialized niche and the application level. Open source solutions will move up the application stack and toward specialized applications (see Figure 1.1). FIGURE 1.1
The relationship between the application stack and specialized applications. [Figure: a quadrant with one axis running from OS to application and the other from mass-market to specialized, showing open source and proprietary regions.]
As a note of clarification, understanding the principle of “applications validate systems” is paramount. To be more descriptive, applications drive the demand for infrastructure and network services. How information is consumed and for what purpose is what justifies all information technology. For example, a vice president of sales being able to show a slide to her distributed sales force, pinpointing a correlation between new orders and a new product in order to drive new revenue, justifies every IT resource used that makes this productive transfer of knowledge possible: the presentation software, the connected network, the databases from which the product sales information was gleaned, the trending software that analyzed and pinpointed the trend, the backup infrastructure that ensures it is always available, the authentication system that is in place to maintain security, and much more. Applications are solutions that enable productivity toward an organization’s objectives. When referring to the stack, we are generally referring to a monolithic model with a foundation in infrastructure (underlying system components) and an apex of applications (knowledge tools that facilitate the highest level of information exchange and productivity). Some similarities are common with the networking stack, which starts with the data layer at the bottom and proceeds to the application layer at the top (see Figure 1.2). FIGURE 1.2
The application stack. [Figure: a stack of functions with Application at the top, then Application Services, Networking Services, and Operations Services.]
Again, stack is a general term with multiple connotations, but the application stack is descriptive when looking at open source and especially when gauging where open source is relevant and where it probably isn’t appropriate. For purposes of open source, let’s look at the concept of stack with the same pattern—lower-level, universally useful, standards-based elements at the bottom that support type-specific and sometimes unique combinations of services, proceeding up the stack to expose or present data for view or consumption through an application. As an example, consider the application stack for a web-based application or service. At the base of all applications and services is an operating system (OS) or kernel. The OS is the traffic engine that interfaces with a specific processor or chipset and acts as a start mechanism and supervisor for all processes. The OS manages processor-level systems, tasks, data management, device management, and security. The OS is central to all functions, services, and security, and exists at the base of the stack. NOTE Marketecture (as these figures are often called) often has little or no relation to actual system architecture, so there are limits to how far this model can reasonably be pushed.
Above the OS is a collection of network services that facilitate secure data transfer, storage, and exchange. This list of services includes (but is not limited to) file transfer, file sharing, authentication, high speed and remote storage, backup, and more—the things that store data, move it around, and make sure that only authorized people can get to it. Data is really only valuable if it is organized—if it has context, order, or relation to other data. The realm of “information organization” is supported by elements further up the application stack in the form of databases, transaction processing, querying, searching, analysis, and reporting. Information value can only be captured if it is exposed for presentation or consumption. The application stack elements that enable presentation and consumption in a web-based paradigm are web servers, browsers, application servers, and a host of client and presentation services that are available. Spiraled through the application stack can be an assortment of utilities, applications, and services that provide unique or intra-/inter-stack functionality. In a distributed application, for example, there might be software that enhances the client/server interaction with added security, graphics, or task processing.
As a general rule, open source solutions will be available and implemented moving up the application stack, where stack elements meet the needs of a majority of organizations. If the need is mass-market anywhere along the application stack, there’s probably an open source solution available or on its way. This isn’t rocket science—just look at the history of open source to date. At the bottom of the stack in the operating system slot is Linux. Beyond the kernel, Linux services include file, security, and device management capabilities. The OS and network services are common to almost all computerized processing. It’s at the bottom of the stack and also a mass-market need with a range of uses across chipsets and hardware platforms. The next general need is data organization. MySQL and PostgreSQL fill the bill here with organization, querying, and reporting capabilities. Common to many companies is a standard selection of applications that have also emerged in the open source arena and includes collaboration, CRM, accounting, document management, and many others. At the top of the stack, and definitely mass-market, are desktop solutions. Lighter versions of Linux are available here, and a fairly comprehensive collection of desktop applications for the common tasks of word processing, spreadsheet, and presentations is obtainable through OpenOffice.org. A quick survey of the most active projects on open source sites shows file sharing, collaboration, MP3 players, and instant messaging solutions—definitely solutions with mass-market interest. Several other factors affect the spawning of an open source solution—elements that must be present before an open source solution can come into existence. These include a community of developers/users with interest in and the need for a solution and the technical talent to make it happen. The concepts of an open source community are discussed in later chapters. In addition, there will most likely be an existing proprietary or closed source product already in existence from which the open source solution will be cloned. Cost or barriers to access of proprietary solutions motivate open source development, and existing feature sets become the basis for functionality.
Open Source AND Proprietary Will open source eliminate proprietary software? The answer is no. It will generally supplant proprietary software that is mass-market or settles toward the lower end of the application stack. The unique nature of open source licensing and the historical trend so far leave open the possibility for a paradigm-shifting phenomenon with open source dominance—but when you consider the complexity and specialization of everything in the information technology universe, the number of tech-savvy developers available, and the current market
infrastructure that exists to fill IT needs, a wholesale conversion to open source isn’t likely. Novell is strongly aligned (and significantly invested) with the concept of “dual source”—a future with both closed source or proprietary as well as open source solutions. Enterprise companies, large organizations, small-to-medium businesses—all can expect to gain valuable benefit and functionality from open source solutions while at the same time requiring the specialty, unique functionality, or cutting-edge technology that will only be available from closed source vendors. Many CIOs have either been given or issued the directive to establish an open source strategy. The growth and success of the movement has made open source a topic of consideration for almost every IT organization in the world. But the question is immediately asked, “How do you implement open source in what, for most companies, is a predominantly proprietary-centric IT configuration?” Only a small minority of companies will be able (or disposed) to implement a completely open source–based IT solution. For the vast majority of organizations, large and small, there will be a mixed environment with both proprietary and open source solutions at multiple points from desktop to data center. The unique value of Novell’s services and product line is that they enable the integration of open source solutions—at any level—with security, management, and support. Novell solutions, now and in the future, provide comprehensive control and powerful management in addition to a collection of the best open source solutions and technologies available. The significance of open source is that it fundamentally provides IT managers with choice: Choice to decide which open source and proprietary technologies they will adopt. Choice to determine at what rate adoption will occur. Choice to select open source for the desktop, the data center, or both. Choice to select from the widest assortment of hardware platforms. Choice to keep what works and adopt what’s better. Choice to simplify and reduce management and costs while improving service and capability. Whether an organization decides to migrate completely to open source technologies, remain with predominantly proprietary systems, or implement any combination of both, Novell accommodates their choice with a framework for security, identity management, system management, an ecosystem of support, testing, and expertise, and an extensive product line of open source–based solutions. In short, Novell’s mission is to support and enhance the adoption of open source technologies with immediate emphasis on Linux and network services. Here’s how Novell is accomplishing this objective: ■
Management—Over the past several years, Novell has quietly been working on enterprise management solutions that are OS-agnostic and
standards-based. Novell administration solutions manage multiple operating systems with the capability to monitor, update, and control Windows, NetWare, and Linux at both the data center and the desktop. Administration extends to access control and identity management for authentication and security protection. The capability to manage multiple operating systems, at the desktop and the data center, through a single interface with holistic and comprehensive control enables IT to easily manage a mixed environment. It doesn’t matter if your organization’s OSs are all Linux, all NetWare, all Windows, or, as is more likely, a dynamic mix of all three, Novell solutions enable you to manage them effectively. Novell management services span the stack from bottom to top, allowing you to implement open source solutions where desired, as complements or substitutes, while maintaining the capability to monitor, administer, and securely control. ■
Identity services—Key Novell management products include eDirectory, the world’s leading identity management and directory service solution. Novell eDirectory acts as a virtual repository for IT organizational intelligence, network resource relationships, identification, and credentials, as well as rules and policies. eDirectory provides both a security and management framework that accommodates any type of network resource, at any level in the stack. This includes hardware, applications, peripherals, users, storage, connections, services, and a host of other network resources.
■
Hardware flexibility—As a supplier of the leading Linux distribution (SUSE), Novell supports the widest selection of hardware platforms available. Your organization can provide services on hardware ranging from thin-client early Intel to IBM mainframe with Novell supplying consistent operating system solutions across the spectrum. Novell SUSE Linux is available for Intel x86, POWER PC, Itanium, Opteron, PA-RISC, SPARC, IBM iSeries, pSeries, and zSeries.
■
Desktop to data center—Open source applies to the networked desktop as well as the data center with support for Linux desktops, Windows desktops, and the services, applications, and security associated with each. Novell ZENworks provides services such as software distribution, patch management, hardware and software inventory, remote control, and more for both desktops and servers on Linux and Windows.
■
Network services—Novell fully supports the collection of standard network services that are common to all three operating systems (Linux, Windows, and NetWare). File sharing and file access across mixed clients and mixed servers with the capability to maintain and manage access control is supported across multiple file types. Advanced file capabilities
for large files and large volumes of files with clustering and failover solutions are supported. Novell also accommodates printing, again from Linux and Windows workstations, with support for the Internet Printing Protocol (IPP) and advanced printer technology such as Novell iPrint. ■
Collaboration—Standards-based Internet technologies and the best proprietary collaboration solutions converge in Novell’s open source and closed source solutions with support for common mail standards, as well as integration with complete and feature-rich calendaring, scheduling, and group collaboration. Novell supplies scalable open source mail services as well as GroupWise, the comprehensive collaboration and personal management solution that is accessible through open technologies, such as a browser or standards-based mail clients like Evolution. Novell Virtual Office provides web-based portal desktops to collaborative team resources, such as calendars, files, discussion groups, and more.
■
Application services—And finally, Novell fully supports the implementation and use of open source applications. These include the common applications and application services that are available from Novell on Linux such as databases (MySQL, PostgreSQL), web services (Apache, Tomcat, JBoss, and so on), scripting and programming languages (Python, PHP, Perl), and development environments such as Java. With the inclusion of Linux as a key Novell offering, organizations can deploy application servers in conjunction with network and management services with the capability to consistently and holistically manage everything. With Linux, organizations have the capability to acquire or develop applications of all types, using or building on the vast collection of existing open source solutions. In addition, Novell fully supports open source applications at the desktop, such as OpenOffice.org.
Novell is the only company dedicated to helping IT implement open source at any level desired—full open source, strictly proprietary, or any mix of both; solutions for all standard hardware platforms; management from operating system to application with all services in between; the best collection of integrated file system solutions; the best source of Internet printing solutions; the most advanced and comprehensive directory and identity management services; and a wide range of collaboration options. If you are serious about implementing open source in your organization and want to know how to implement it in the best way and with the best solutions, with support from the world’s leading Linux and open source experts, Novell can help you do it.
How Open Source Makes Money Given the market acceptance of Linux, Apache, and MySQL, it is clear that open source has carved a niche, and to a degree has carved away at the profits of proprietary software vendors. In any dynamic market, participants need to adjust to market forces if they want to remain viable—that’s a given, and there usually isn’t any sympathy for vendors that fail to compete effectively. But with open source, the market dynamics are different. IT organizations concerned about ongoing innovation, support, and advancement ask an obvious question. If the product is to a certain extent free, how do open source organizations make any money? CIOs and IT managers aren’t necessarily concerned with vendor profitability, but more importantly with ensuring that open source solutions will be maintained, updated, supported, and enhanced to meet the demands of dynamic organizations. Without getting too ethereal, we can look at open source as a natural resource. Natural resources such as timber and water don’t necessarily cost us anything as they exist naturally, but there have been large and profitable industries built around the refining, repackaging, customizing, distribution, and maintenance of natural resource by-products. In a sense, open source can be viewed in the same light. The collective of open source producers provide a resource that, in its native form, provides value to a community. This resource, if augmented or refined, is of value to a larger community, a community that is willing to compensate for the added value. There are several models for open source profitability that have been proven, including distribution, support, augmentation, training, and custom development. The following sections provide a little more detail on each with some examples.
Distribution In an open source context, the term distribution refers to the organizing and packaging of a particular version of software. For example, you might hear a version of Linux referred to as the SUSE distribution, the Red Hat distribution, or the Debian distribution. In reality, each of these distributions contains the common Linux kernel released by the Linux kernel development community (kernel.org). In addition, each distribution vendor bundles enhancements that simplify the process of installing, administering, and learning Linux, such as an installation interface, install media, documentation, and even limited support—all in a convenient package for which customers are willing to compensate. Customers pay for the peace of mind that comes with knowing a distribution
has been tested for compatibility and reliability and has significant industry momentum.
Support Customers are also often willing to pay for consulting and support for open source software. The open source community usually includes a wealth of information in the form of online message threads, patches and fixes, and online support requests. But support is often not timely enough, or not as easily accessed as needed, for managers of mission-critical solutions. Novell CEO Jack Messman summarizes it this way: enterprise IT managers want “one call—one throat to choke.” Several companies have emerged in the open source arena that offer long-term support contracts or incident support for a fee.
Leverage—Software A common trend is to package open source software in conjunction with proprietary software to enhance value and provide a packaged set of services at a price point that is lower than would be possible with a proprietary-only solution. A significant number of ISVs are porting their applications to Linux, including Oracle, SAP, BMC Software, Computer Associates, and many others. By providing an operating system that is scalable, reliable, and tested in conjunction with their specialized software, vendors are able to increase value without increasing price.
Leverage—Hardware The same logic applies to hardware. Providing a complete hardware system with a tested and approved operating system helps sell hardware. IBM has invested heavily in this concept and dividends are being reaped through increased hardware sales. Dell includes Linux with hardware platforms and can offer a server with an operating system for 10% less than the cost of the system with a proprietary OS. Most of the standard hardware-based security gateway vendors include Linux and can do so at a price point lower than if they were paying license fees.
Consulting SUSE (now Novell) was founded as a consultancy for integrating Unix and migrated to Linux as it became more pervasive. Companies of all sizes often require custom development and integration to create systems that further the organizations’ specialized objectives. Consulting and development firms can create lucrative businesses around custom work that integrates open source as
part of organization-specific solutions. Consulting is an integral part of the MySQL AB profit model, where the database is freely available and the company makes money from custom development.
Training and Education Profitable institutions have emerged as a result of supplying the knowledge and information required to understand and implement open source through certification training, seminars, and publications. O’Reilly Media has become the premier source of information for open source technologies with books, conferences, and websites, and has been comfortably profitable through the rise of the movement. Novell has placed a sizable wager on Linux and other open source solutions with a multipronged strategy for accommodating, leveraging, and enhancing the market. Here are a few key points on the synergy possible with Novell and open source: ■
Reach—Novell has (and has had for a number of years) an established distribution channel with a network of the world’s best IT resellers and integrators. Novell’s partner network includes over 4,200 points of distribution through its channel partners worldwide. These partners service the organizations that are integrating or potentially interested in open source with experience and an existing knowledge of their IT operations.
■
Training—Novell was a pioneer of the technical certification movement, creating the rigorous training regimen, course content, and qualification testing for the Certified Novell Engineer (CNE) program. Since then, hundreds of thousands of individuals have been trained and certified as Certified Novell Administrators (CNAs), Master CNEs, Certified Novell Instructors (CNIs), and more recently, Certified Linux Engineers (CLEs) and Certified Linux Professionals (CLPs). Training programs exist for end users, developers, IT technicians, and administrators.
■
Consulting—Novell Consulting has an established history of successfully providing custom solutions in the areas of heterogeneous systems integration and identity management. Premium services have been a significant part of Novell’s business model even before the acquisition of Cambridge Technology Partners, a recognized IT consulting leader. Areas of specialty and expertise now include integration of Linux and open source solutions.
■
Support—Again, Novell was a pioneer in providing multiple levels of support, from an extensive online knowledgebase to premium service with priority access 24×7×365. Linux and other open source solutions are now included at every level of Novell support.
■
Leverage software—Novell’s first Linux product was Nterprise Linux Services, a collection of traditional networking services that were made available on the Linux platform. By including the Linux kernel, Novell was able to leverage premium software to new target markets. Open source software is being infused through Novell’s entire product line. Novell products include open source technology, and Novell products help manage and secure it.
As you can see, Novell’s history, expertise, and infrastructure meshes consistently with almost every major attempted strategy for sustaining a profitable business using open source. If open source software can be considered as a form of natural resource, Novell is in the business of refining, combining, distributing, and maintaining open source by-products. Open source in the case of SUSE Linux is packaged for consumption and distributed. MySQL, Apache, PHP, and Perl are included with NetWare and Open Enterprise Server to add value and leverage proprietary Novell software. Novell consulting, training, and support groups provide valuable services to organizations that prefer to pay for professional and responsive assistance. A common question arises as to what is Novell’s position regarding how much of the world of proprietary software open source will displace. There’s a saying in the open source community that determines what happens whenever two solutions for the same problem are submitted. The answer is “let the code decide.” Given the fact that it’s difficult to tell exactly which option is the best at the outset and that evolution will still occur as time passes, the best action is to go with a dual strategy. Eventually, the best solution evolves, which might be a selection of one code base over the other, a combination of both, or even something completely different. With open source, time and trial eventually yield the best outcome. The open source versus proprietary question at Novell will follow a similar pattern, but in this case, it will be “let the customer decide.” Novell has a wide selection of both open source and proprietary solutions. For some customers and their environments, open source might be the only software used. Others might only use proprietary software. And for a great many, there will be a combination of both open source and proprietary with evolving ratios as their needs and environments change. It’s all about choice. And, given that Novell’s strategy integrates multiple open source products at multiple levels of support, and that the existing company organization accommodates several of the existing profit models, there is great potential for new synergies to arise. Novell isn’t banking on open source—but they’re definitely betting on it!
Summary This chapter offers an overview of the history of Linux and the open source movement. There are a great many locations where you can learn more about Linux and its history. Among the sites to visit are http://www.linux.org and http://www.li.org. To learn more about different tools available in Linux and the administration of the operating system, visit the Novell Press site at http://www.novell.com/training/books/. Click the View All Titles link, and you’ll find a number of titles that can help you manage Linux from the desktop to the server.
CHAPTER 2
The Open Source Solution This chapter focuses on the advantages of going with an open source solution. It looks at application availability, software costs, license management, and other issues that must factor into a decision of whether to adopt open source in your environment.
Open Source Advantages Before you commit to the adoption of open source, Critical Thinking 101 mandates that you ask the question, “Why?” This section attempts to answer that question from a variety of perspectives. Open source has impact not just for developers and in-house IT managers, but also potentially for every person throughout the value chain of an organization from management to knowledge workers to suppliers, customers, and partners. By and large, the effects of open source are advantageous with benefits ranging from lower costs to simplified management to superior software. These advantages include the following: ■
Lower software costs—Open source solutions generally require no licensing fees. The logical extension is no maintenance fees. The only expenditures are for media, documentation, and support, if required.
■
Simplified license management—Obtain the software once and install it as many times and in as many locations as you need. There’s no need to count, track, or monitor for license compliance.
■
Lower hardware costs—In general, Linux and open source solutions are elegantly compact and portable, and as a result require less hardware power to accomplish the same tasks as on conventional servers (Windows, Solaris) or workstations. The result is you can get by with less expensive or older hardware.
■
Scaling/consolidation potential—Again, Linux and open source applications and services can often scale considerably. Multiple options for load balancing, clustering, and open source applications, such as database and email, give organizations the ability to scale up for new growth or consolidate to do more with less.
■
Ample support—Support is available for open source—often superior to proprietary solutions. First, open source support is freely available and accessible through the online community via the Internet. And second, many tech companies (not the least of which is Novell) are now supporting open source with free online and multiple levels of paid support. All open source solutions distributed by Novell are included in support and maintenance contracts.
■
Escape vendor lock-in—Frustration with vendor lock-in is a reality for all IT managers. In addition to ongoing license fees, there is lack of portability and the inability to customize software to meet specific needs. Open source exists as a declaration of freedom of choice.
■
Unified management—Specific open source technologies such as CIM (Common Information Model) and WBEM (Web Based Enterprise Management) provide the capability to integrate or consolidate server, service, application, and workstation management for powerful administration.
■
Quality software—Evidence and research indicate that open source software is good stuff. The peer review process and community standards, plus the fact that source code is out there for the world to see, tend to drive excellence in design and efficiency in coding.
Taking a comprehensive and critical view of open source should also raise some questions regarding drawbacks. There have been several criticisms by detractors of open source, but most of these can be mitigated or relegated to myth status. Here’s a short list of possible concerns (each of which is discussed in subsequent sections): ■
Open source isn’t really free—“Free, as in a free puppy” is the adage meaning no up-front costs, but plenty (often unseen or unanticipated)
afterward. Implementation, administration, and support costs—particularly with Novell solutions—can be minimized and the reality is that there are still no licensing fees. ■
There’s no service and support—For some companies, support is mandatory. More on this later, but open source support equal to that available for proprietary software is available for the same price or less.
■
Development resources are scarce—Linux and open source resources are actually abundant—the developers can use the same tools, languages, and code management processes. In reality, the universe of developers is the largest of any segment. And, with the evolution of Mono (the open source equivalent to .NET), all of those Windows/.NET developers become an added development resource for Linux.
■
Open source is not secure—It might seem to be a simple deduction of logic to think that the code is available, so anyone can figure out how to break it. That’s not quite true with the momentum of the community (especially Linux). Also, the modularity required for distributed development of Linux and open source also contributes to security with tight, function-specific, and isolated code segments.
■
Training is not available—This used to be true, but not anymore. Available Linux training, for example, has ballooned with certification courses coming from every major training vendor. Novell has created multiple levels of Linux certification and integrated training programs. Check your local bookstore and you’ll see a whole section on Linux and open source.
■
All open source is a work-in-progress—True for some, but not for all. The key components like Linux, Apache, MySQL, and Tomcat are dominating prime-time Internet with stable, secure, and production-quality solutions. Some open source offerings are maturing, but they are still workable, and for the companies that use them (with access to source code), the software is good enough.
Open Source Solutions Talking to people who are in IT or IT-related fields (resellers, analysts, and even press), you find that even though most have heard of open source, five out of six don’t know the breadth or depth of open source solutions that are available. It’s also generally assumed that open source mainly means Linux or maybe Linux and Apache. As the open source movement gains momentum, and particularly as established vendors market open source concepts, these perceptions will change. Offerings include the following: ■
Operating system—Linux
■
Server services—Beowulf, Samba
■
Desktop services—OpenOffice, Xfree86, GNOME, KDE, Mozilla
■
Web applications and services—Apache, JBoss
■
Development tools—GCC, Perl, PHP, Python, Mono, Eclipse
■
Databases—MySQL, PostgreSQL
■
Documentation—The Linux Documentation Project
■
Open source project sites—sourceforge.net, freshmeat.net, forge.novell.com
At this point, it’s possible (although maybe not practical for the majority of organizations) to completely run a small or medium business using open source solutions. A $20 million Utah business in Novell’s backyard is entirely open source—it has never paid a dime for software licensing fees. This company supports 70 users, provides an external website, includes desktop applications, customer relationship management applications, and internal databases, and even sells its main line-of-business products packaged with open source components. Open source produces savings in licensing fees, hardware costs (the average company computer is a Pentium II), and management resources (the CTO is also the IT manager and spends about two hours per week on administration). Estimates of open source savings are between 7% and 10% of gross annual sales, which, in this case, is a significant portion of net profit. This is an example of a viable company that, without open source, probably wouldn’t be in business. This example isn’t typical (nor will it be) of open source implementations, but it is pointed out to illustrate the breadth and depth of solutions that are available and the market expansion that’s possible through open source. Open source adoption will be a matter of substitutes and complements: Substitute where open source is as reliable, scalable, and feature-equivalent as proprietary software, and complement existing, established proprietary IT services with open source when it’s cost-effective. A more detailed discussion of what’s currently available follows.
Operating System—Linux We’ve already talked about Linux and its place in the application stack. It’s the foundation or basis for any IT service or application. In general, the Linux operating system provides the following features: ■
User interface—Methods for interacting with an operating system, including BASH
■
Job management—Coordination of all operations and computer processes
■
Data management—Tracking and management of all data on a computer and attached devices
■
Device management—Control of all peripheral devices, including disk drives, printers, network adapters, monitors, keyboards, and so on
■
Security—Mechanisms for restricting access to data and devices by authorized users
The following are just a few of the general, high-level Linux services available: ■
File system support—ext, JFS, Reiser, and more than 15 other file systems
■
Printing—Configuration, print-job interpretation, and printer sharing
■
Network services—Basic protocols and services that connect computers (a short sketch following these lists shows two of them, DNS and SSL, in use):
■
TCP/IP—Transmission Control Protocol/Internet Protocol (manages, assembles, and transmits messages between computers)
■
DNS—Domain Name System/Service (matches IP addresses to computer names)
■
DHCP—Dynamic Host Configuration Protocol (shares and assigns IP addresses)
■
FTP—File Transfer Protocol (transfers files between computers over TCP/IP network)
■
SLP—Service Location Protocol (announces services across a network)
Security—Mechanisms include encryption and digital certificate handling: ■
SSL/OpenSSL—Secure Sockets Layer (encrypts data transmission)
■
Certification authority—Trusted source for identity verification
■
Identity management—Management tool that grants/denies access based on identity or role
■
LDAP—Lightweight Directory Access Protocol (provides directory query and directory management capability)
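To make two of the services in the preceding lists concrete, the short Python sketch below resolves a hostname through DNS and then opens an SSL-encrypted connection to it, using only the standard library. It is purely illustrative; the hostname is a placeholder, and real services would of course be configured and managed through the distribution’s own tools.

import socket
import ssl

host = "www.example.com"  # placeholder hostname

# DNS: match a computer name to an IP address
address = socket.gethostbyname(host)
print(host, "resolves to", address)

# SSL: wrap an ordinary TCP connection so the data transmission is encrypted
context = ssl.create_default_context()
with socket.create_connection((host, 443)) as raw_socket:
    with context.wrap_socket(raw_socket, server_hostname=host) as secure_socket:
        print("negotiated protocol version:", secure_socket.version())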
Of course, many more features and services of the Linux operating system are available, but those just listed are the most common and are found in practically every distribution for every platform. Part of the appeal of Linux is that it is available on a broad variety of hardware platforms from custom-built microdevices to IBM mainframes. The following is a short list of available Linux platform ports:
■
x86—Intel’s x86 or IA-32 was the original development architecture used by Linus Torvalds and is still the primary core development platform.
■
Itanium—Itanium or IA-64 is Intel’s next-generation, 64-bit architecture, providing faster and more powerful processing. Linux was ported to the Itanium Processor Family (IPF) during early platform development by a group of companies, including IBM, HP, Intel, and others. Continued support for Linux on Itanium is coordinated at http://www.gelato.org/.
■
Alpha—Linux is available on the Alpha platform, a family of RISC-based, 64-bit CPUs available from HP. See http://www.alphalinux.org/ for more information.
■
PowerPC—Linux is also available on a wide range of machines from handhelds to super computers. PowerPC is a RISC-based chipset designed by Apple, IBM, and Motorola and owned by IBM. Motorola offers the chips for sale and they show up in Apple PowerMacs and IBM RS/6000 and AS/400 machines, as well as in embedded systems. See http://www.penguinppc.org/.
■
PA-RISC—This is an implementation of HP RISC architecture utilizing workstation and server hardware. See http://parisc-linux.org.
■
SPARC—The Scalable Processor Architecture (SPARC), a 32-bit RISC CPU developed by Sun, is also supported with a Linux port. Details are at http://www.ultralinux.org/.
■
M68K—Workstations running the Motorola 68000 chipset (Sun3, Apple Macintosh, Amiga, Atari, and so on) can run Linux.
■
Z/Series—IBM’s eServer zSeries computers are enterprise mainframes with advanced workload technology. Linux allows a wide range of applications plus scalability and flexibility on zSeries. See http://www-1.ibm.com/servers/eserver/zseries/os/linux/.
Linux distributions are available from multiple sources. Again, a distribution usually consists of the core Linux operating system packaged with administration utilities, documentation, installation media, custom libraries, desktop interfaces, and common drivers. Distributions can be use-specific (for example, workstations, servers, or applications) or hardware platform-specific. The following sections discuss the most common Linux distributions and a little background for each. SUSE LINUX SUSE’s (pronounced soo sah) development history includes extensive experience with enterprise organizations. SUSE has both workstation and server versions of Linux and has leveraged integrated and bundled software through Linux to provide quality solutions. These aspects all mesh well with Novell’s history and strategic direction. This is discussed in more detail later. SUSE originated in Germany as a Unix consulting company. With the advent of Linux, SUSE evolved to provide personal and professional versions of Linux plus distributions that included services geared to corporate networks and enterprise use. SUSE includes an extensive and unique (to SUSE) administration program called YaST (Yet another Setup Tool) that has become the most powerful installation and system management tool in the Linux world. Driver support is excellent, and desktops included are KDE and GNOME. All of these factors plus the company culture, market share, and market potential contributed to the decision for Novell to acquire SUSE (http://www.novell.com). RED HAT Red Hat is probably the most widely known of the Linux distributors because it was one of the first to provide Linux in an easy-to-install package—and it was the first Linux-centric company to go public. Formed in 1994, Red Hat specialized in providing Linux packaging, service, and support while keeping everything included with their offerings open source. Red Hat employs between 600 and 700 people and retains developers who have contributed to the Linux community; the best-known contribution is the Red Hat Package Manager (RPM), an open source system for updating Linux services and utilities (http://www.redhat.com). Red Hat’s product line has evolved to provide the Red Hat Enterprise Linux (RHEL) line, several versions of Linux targeted at enterprise companies, including workstation (WS), advanced server (AS), and enterprise server (ES). Each of these versions requires license fees and an ongoing subscription fee for maintenance and support—a requirement that has drawn ire from customers and has been the basis for disparaging comparisons to Microsoft and their “vendor
lock-in” strategy. Red Hat also now “sponsors” Fedora Core, a Linux distribution supported by the open source community that is freely available and intended to replace the consumer version of Red Hat Linux. DEBIAN The Debian distributions are prominent for several reasons. First, the Debian Project is solidly based on “free software philosophies” with no official ties to any profit-oriented companies or capitalistic organizations. The second head of the Debian Project, Bruce Perens, was instrumental in the development of the Debian Social Contract and the Debian Free Software Guidelines, which are the basis of the Open Source Definition (OSD), the guidelines that are used to determine whether a software license is truly “open source.” Debian’s free heritage is a draw for organizations that are leery of corporate control. Debian distributions are also available on a wide variety of platforms, including 11 different computer architectures, and there is a wide range of services, with over 8,500 packages in the current distribution. Entry to the Debian community requires proficiency levels and philosophy commitments, and the results include thorough testing across all distributions. The Debian development process is a structured system of voting and hierarchy that tends to produce stable releases. The major voiced concern with Debian is that officially stable code is too dated to be valuable (http://www.debian.org). MANDRAKELINUX Mandrakelinux is a classic example of forking. The objective of MandrakeSoft, a French software company, was to provide a version of Linux that was optimized for the Intel Pentium processor. Mandrake started with a version of Red Hat and added a control center that simplified the management of peripherals, devices, hard drives, and system services for Linux. Mandrake is a popular desktop distribution and more widely used among Europeans (http://www.mandrakesoft.com). TURBOLINUX Turbolinux is a major supplier of Linux distributions in the Asia Pacific countries, with strengths in the area of Asian language support. Turbolinux is particularly popular in China and is being used to build backbones for both government and corporate networks (http://www.turbolinux.com). OTHER DISTRIBUTIONS In addition to those mentioned previously, several other Linux distributions are platform- or geographic-specific. Gentoo Linux is a modular, portable distribution that is optimized for a single, specific machine according to one of
thousands of package-build recipes. Red Flag Linux is a distribution supported by a venture capital arm of China’s Ministry of Information Industry. Conectiva is a Brazilian company that provides Linux in Portuguese, Spanish, and English for Latin America. United Linux was an attempt by several Linux distribution companies to consolidate efforts around common installation and integration issues for enterprise companies. These companies included SUSE, Turbolinux, Conectiva, and Caldera (now SCO), and their objective was to dilute the dominance of Red Hat in the enterprise market. SUSE provided most of the engineering on the project that produced an enterprise-based distribution, but the effort was disbanded after SCO filed suit against IBM.
Server Services Linux, as an operating system, has appeal across a broad spectrum from embedded devices to mainframes. There is, however, a superset of enhanced services that are available for mission-critical applications. Two open source projects that have evolved to become significant elements of an enterprise-class IT infrastructure are Beowulf and Samba. BEOWULF Beowulf is a commodity-based cluster system designed as a low-cost alternative to large mainframes or supercomputers. A cluster is a collection of independent computers that are combined and connected to form a single unified system for high-performance computing or to provide high availability. Donald Becker at NASA started the Beowulf Project in 1994 for solving highly computational problems for Earth and Space Sciences projects. Using off-the-shelf commodity hardware connected via Ethernet, Becker was able to build a 16-node cluster that immediately was of interest to universities and research institutions. Succeeding implementations of Beowulf have provided cluster-computing solutions that rival some of the fastest supercomputers. Today, Linux is the operating system of choice for Beowulf clusters (called Beowulfs). Beowulf is managed and enhanced through an open source community (http://www.beowulf.org) and consists of a number of pieces of software that are added to Linux. The Beowulf architecture is flexible and accommodates a broad range of applications ranging from science to engineering to finance, including financial market modeling, weather analysis, simulations, biotechnology, data mining, and more. For an application to take advantage of cluster processing, it must be enabled for parallel processing, which allows computing tasks to be divided among multiple computers with messages between them.
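A production Beowulf job would normally use a message-passing library such as MPI to spread work across the nodes of a cluster. The sketch below only approximates the idea on a single machine, using Python’s standard multiprocessing module to divide a computation into chunks, process them in parallel, and combine the results; it illustrates the divide-and-combine pattern rather than actual cluster code.

from multiprocessing import Pool

def partial_sum(bounds):
    # Sum the squares of the integers in [start, stop) for one chunk of the work.
    start, stop = bounds
    return sum(n * n for n in range(start, stop))

if __name__ == "__main__":
    # Divide the task, farm the chunks out to worker processes (standing in
    # for cluster nodes), and combine the partial results.
    chunks = [(i, i + 250000) for i in range(0, 1000000, 250000)]
    with Pool(processes=4) as pool:
        total = sum(pool.map(partial_sum, chunks))
    print("Sum of squares below 1,000,000:", total)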
SAMBA Samba provides seamless file and print services to SMB/CIFS clients. In simple terms, Samba makes a Unix or Linux computer look like a Windows computer to anyone who is accessing it through a network. Samba was originally developed by Andrew Tridgell at Australian National University, who was trying to link a Windows PC to his Linux system for file access. The Microsoft networking system is based on the Server Message Block (SMB) protocol—Samba is SMB with a couple of “a”s added. From a Windows workstation, a Unix or Linux server appears just like a Windows server with the capability to map drives and access printers. Samba has evolved to include the next generation of SMB, the Common Internet File System (CIFS) and currently includes file and print services as well as the capability to provide domain services, Microsoft’s NT version of access management. Samba is also available on IBM System 390, OpenVMS, and other operating systems. Samba is available under the GNU General Public License (GPL) with an online community at http://us1.samba.org. Samba has been very popular for two reasons. First, it simplifies the process of network management and administration by providing more access without additional client requirements. Second, and probably more important, it provides a stealth entry point for Linux, eliminating barriers to interoperability and adoption.
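As an illustration of how little configuration is needed for Samba to make a Linux directory appear as a Windows share, here is a minimal smb.conf sketch. The workgroup, share name, directory path, and account names are placeholders, and a production configuration would contain considerably more.

[global]
   ; placeholder Windows workgroup or domain name
   workgroup = WORKGROUP
   ; require a valid username and password for each connection
   security = user

; "projects" is the share name that Windows clients will see
[projects]
   ; placeholder Linux directory being exported
   path = /srv/projects
   read only = no
   ; placeholder accounts allowed to connect
   valid users = alice bob

With a stanza like this in place (and the corresponding Samba user accounts created), Windows workstations can map the share as a network drive exactly as they would a share on a Windows server.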
Desktop Services The open source equivalent to Microsoft Windows isn’t a single package solution. As the movement continues, a potential Windows killer is feasible. Today, open source desktop solutions include desktop interfaces (GNOME, KDE), an office productivity suite (OpenOffice), client libraries (Xfree86), and a client application platform (Mozilla). GNOME The GNU Network Object Model Environment, or GNOME, is a graphical desktop environment for end users running Linux or Unix. The GNOME free desktop project was started in 1997 by Miguel de Icaza and, in addition to providing a desktop, provides a framework for creating desktop applications. Most commercial Linux distributions include the GNOME desktop as well as KDE. More information on GNOME is found at http://www.gnome.org. KDE K Desktop Environment (KDE) originated with Matthias Ettrich in 1996 at the University of Tuebingen as an effort to provide a desktop for Linux in which all
applications could have a common look and feel. KDE was built using the Qt toolkit and concerns about licensing restrictions from the toolkit spawned GNOME. Qt was later relicensed under the GNU GPL, eliminating any problems. Both GNOME and KDE are part of freedesktop.org and work to standardize desktop functionality. There are many KDE desktop applications and the community is headquartered at http://www.kde.org. OPENOFFICE.ORG OpenOffice.org (OOo) is a full office productivity suite designed to compete directly with Microsoft Office. It includes a word processor, spreadsheet, graphics program, presentation program, and an HTML editor, as well as database tools, macros, and more. It also includes conversion tools so that users can go between OpenOffice.org files and Microsoft Office files with little difference. OpenOffice.org (not OpenOffice due to a trademark dispute) originated as StarOffice, a commercial office suite produced by StarDivision. In 1999, Sun Microsystems acquired StarOffice and in 2000 renamed it and contributed it to the open source movement. OpenOffice.org is available for Unix, Linux, Windows, and Macintosh computers in a host of languages and can be downloaded at http://www.openoffice.org. XFREE86 The X Window System (X) was originally developed as a windowing, graphical interface for Unix. Graphically, it functions like Microsoft Windows, but architecturally, it is more sophisticated with the capability to graphically display applications from any machine on the network as if it were local according to a true client server model. Xfree86 is an open source implementation of X that consists of a set of client libraries to write X applications in which client and server communicate via the X protocol (http://www.xfree86.org). MOZILLA Mozilla, as has been mentioned, originated with the release of Netscape Communicator source code to the public. Eric Raymond’s paper and philosophies gained traction at Netscape during the time that Microsoft was foisting Internet Explorer on the market. The Mozilla platform enables more than just a browser and includes an email client, instant messaging client, and HTML editor, as well as other standalone applications. Currently, a full-featured Internet suite is available and Mozilla can be found at http://www.mozilla.org. There are similarities between Mozilla and GNOME as far as being a client application platform. Desktop applications that have been written using GNOME include contact management, accounting, spreadsheet, word processing, instant messaging, and more.
Web Applications and Services Moving from the operating system toward applications, there are several open source solutions that are available for the creation of applications, such as web application development components and tools. Two major web services are Apache and JBoss. APACHE HTTP SERVER The Apache Web Server (its roots were detailed earlier) is the leading HTTP server on the Internet today, hosting over 67% of the websites according to http://www.netcraft.com. Apache is open source with versions running on Linux, BSD, Unix, NetWare, and Windows, as well as other platforms. Apache includes a powerful feature set with scripting, authentication, proxy, and logging capabilities. Popular features include multihoming, the capability to host multiple sites on the same machine, and the capability to password protect pages. Apache is also highly configurable and extensible for third-party customization and modules. Apache has been termed the killer application for the Internet. The Apache community is located at http://httpd.apache.org/. JBOSS JBoss is a Java-based application server. In the Web sense, an application server is a collection of protocols and services that exchange data between applications. Applications are often written in different languages and run on different platforms, but as long as they are based on open standards, an application server will support them with querying, formatting, repackaging, and customizing content for consumption. JBoss was designed to be an open source application server that supports the entire Java 2 Enterprise Edition (J2EE) suite of services, including Java Database Connectivity (JDBC), Enterprise Java Beans, Java Servlets, and Java Server Pages (JSP), as well as other content exchange standards such as Extensible Markup Language (XML). With JBoss, distributed applications can be developed that are portable across platforms, scalable, and secure. JBoss can be used on any platform that supports Java. JBoss Application Server is owned and supported by JBoss, Inc. (http://www.jboss.org), an employee-owned company backed by venture capital and Intel. JBoss provides a line of middleware products and consulting services that support the development of JBoss Application Server.
Development Tools Open source projects are not limited to packaged solutions but extend to every facet of solution creation, including programming languages, compilers,
and integrated development environments (IDEs). To create good software, you need good development tools, and several options are available depending on the task at hand, the amount of sophistication required, and, of course, personal style. PYTHON Python is a portable, interpreted, interactive, object-oriented programming language. Python and Perl are commonly referred to as scripting languages because they are interpreted, but Python developers have great flexibility. Python is a multiparadigm language, making it possible to develop in any of several code styles, including structured programming, aspect-oriented programming, object-oriented programming, and more. Powerful, high-level data types and an elegant syntax are distinguishing characteristics. Python has been extensively used at Google and is considered by programmers, in addition to being powerful, to be more artistic, simple, and fun. Python was started in 1990 in Amsterdam, is owned by the Python Software Foundation, and can be found at http://www.python.org. PERL Practical Extraction and Report Language (Perl) was originally designed by Larry Wall in 1987 as a practical language to extract text from files and generate reports. Perl, like Python, is multiparadigm and is referred to as the mother of all scripting languages. Perl has been described as the “glue language” for the Web that enables developers to integrate and tie disparate systems and interfaces together. This is possible, as Perl has borrowed bits and pieces from many programming languages. Slashdot.org, “News for Nerds. Stuff that Matters.” is a popular technology weblog that was built using Perl technology. If you’re looking for evidence that open source isn’t controlled by stuffy corporations with sanitized marketing communications restrictions, you can find it in the industry humor. The name Python was taken from the TV show Monty Python’s Flying Circus. Mozilla was the internal code name for Netscape Communicator and is a contraction of “Mosaic-killer Godzilla.” Perl is known as the “Swiss Army Chainsaw of Programming Languages.” PHP PHP is a prevalent, general-purpose scripting language that is used for web development and can be embedded into HTML. PHP was originally started in 1994 by Rasmus Lerdorf as a way to post his résumé and collect viewing statistics, and was called Personal Home Page Tools. It was rewritten by two Israeli developers, Zeev Suraski and Andi Gutmans, and renamed PHP: Hypertext Preprocessor. PHP is popular as a server-side scripting language and enables
PHP is popular as a server-side scripting language and enables experienced developers to easily begin creating dynamic web content applications. PHP also enables easy interaction with the most common databases, such as MySQL, Oracle, PostgreSQL, DB2, and many others. See http://www.php.net/ for more information.

GCC

Scripting languages such as Perl, PHP, and Python are great tools for certain types of solutions, but if you are going to create fast, native applications for a platform, you need a compiler. Richard Stallman wrote the initial GNU C Compiler, the ancestor of today's GNU Compiler Collection (GCC), in 1987 as a free compiler for the GNU Project. Ten years later the project was forked (the EGCS effort), and in 1999 the forked enhancements were integrated back into the main product. GCC is the primary compiler used in developing for Linux and Unix-like operating systems, and it has been ported to more processors and operating systems than any other compiler, including Mac OS X and NeXTSTEP. Programming languages supported include C, C++, Java, Fortran, Pascal, Objective C/C++, and others. The GCC development community is located at http://gcc.gnu.org/.

ECLIPSE

Although Eclipse is commonly known as an Integrated Development Environment (IDE), it is more accurately a platform-independent application development framework. Originally developed by IBM, Eclipse is unique in that it uses a plug-in model to support multiple programming languages. The Eclipse framework was used to develop the popular Java IDE and compiler that most developers know as Eclipse. Eclipse uses a graphical user interface that includes an intuitive workbench with navigator, task, and outline views that help integrate and access web services, Java, and C++ components. You can download Eclipse at http://www.eclipse.org.

MONO

To understand Mono, you need to know a little about .NET. .NET is the Microsoft answer to the complexity of developing Internet applications that integrate existing and often diverse Microsoft development and web services components. .NET is an initiative whose integration is enabled through the Common Language Infrastructure (CLI), a specification that defines a virtual execution environment and standard class libraries, and the Common Language Runtime (CLR), Microsoft's implementation of that environment. Together, the CLI and CLR allow developers to write applications in any supported language, using a wide range of components, and have them compiled to a common byte code and executed by a common runtime. .NET replaces Microsoft's earlier (and somewhat vulnerable) Component Object Model (COM).
Mono is an open source version of .NET that was originally developed by Miguel de Icaza of Ximian (now Novell). Mono includes both the developer tools and infrastructure needed to run .NET client and server applications on platforms other than Windows. Mono overcomes the single biggest drawback for developers using .NET from Microsoft—the requirement to run on the Windows platform. By bringing an independent implementation of the .NET framework to multiple platforms and then building an open source project around extending it, Mono has made the strengths of .NET available to a much wider range of developers. The capability to develop using a variety of languages, all using a common interface and deployable on a number of platforms, is a very compelling development strategy. The Mono community is based at http://www.mono-project.com.
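Before moving on to databases, a short example can make the earlier point about Python's high-level data types and readable syntax more concrete. This is purely an illustrative sketch: the word-counting task and the file name are hypothetical, not drawn from any particular project.

# Count word frequencies in a text file and report the five most common words.
# A small illustration of Python's built-in high-level data types (dict, list, tuple).

def top_words(path, limit=5):
    counts = {}                                   # dictionary: word -> occurrences
    for line in open(path):
        for word in line.lower().split():
            counts[word] = counts.get(word, 0) + 1
    # Sort (word, count) pairs from most to least frequent.
    ranked = sorted(counts.items(), key=lambda item: item[1], reverse=True)
    return ranked[:limit]

if __name__ == "__main__":
    for word, count in top_words("CHANGELOG"):    # hypothetical input file
        print("%s %d" % (word, count))

The entire program is a dozen lines, with no type declarations or memory management, which is typical of the "simple and fun" style Python programmers describe.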
Databases

The bulk of all data has no relevance unless it is presented in context—in relation to other data. The majority of what we see on the Internet, whether it be news clips, product catalogs, directories, customer information, manufacturing statistics, or stock quotes, is content that is extracted from a database. Without underlying database services to provide relation and context as well as querying and reporting, what we see would be far less valuable. Two relational databases have evolved through open source that contain a respectable amount of the Internet's web-accessible data: PostgreSQL and MySQL.

POSTGRESQL

PostgreSQL (pronounced post-gress-Q-L) originated with the Ingres project at UC Berkeley in the 1970s. Ingres was commercialized and became the root of databases such as Sybase, Informix, and SQL Server. After commercialization, a new effort was started at Berkeley called Postgres. The Postgres project evolved through several versions with new features, but was ended at Berkeley in 1994 because of the demands of supporting it. Because Postgres was available under the BSD license, it was then adopted by the open community, given an SQL language interpreter, and soon renamed PostgreSQL. PostgreSQL is comparable to commercially available databases, but includes even more advanced technology, such as the capability to simplify object-relational mapping and to create new type definitions. A PostgreSQL strength is its capability to reduce the impedance mismatch that occurs when object-oriented programs manipulate data stored in relational tables. The PostgreSQL site is located at http://www.postgresql.org/.
MYSQL

The MySQL database, as mentioned earlier, is a relational database server owned and sponsored by the Swedish company MySQL AB, which profits by selling service, support, and commercial licenses for applications that are not open source. MySQL works on a comprehensive collection of platforms, including Linux, Windows (multiple versions), BSD, Solaris, Tru64, AIX, HP-UX, Mac OS X, and more. It is also accessible from multiple languages, including Perl, Python, PHP, C, C++, Java, Smalltalk, Tcl, Eiffel, and Ruby, and it supports the ODBC interface. MySQL is very popular for web-based applications in conjunction with Apache. More information is available at http://www.mysql.com.
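As a quick illustration of that language accessibility, the following sketch queries MySQL from Python through the MySQLdb driver (the MySQL-python module). The connection parameters, database name, and table are hypothetical placeholders, and the driver must already be installed.

import MySQLdb  # MySQL driver implementing the standard Python DB-API

# Placeholder credentials and schema; substitute your own values.
conn = MySQLdb.connect(host="localhost", user="appuser",
                       passwd="secret", db="inventory")
try:
    cursor = conn.cursor()
    # Parameterized query; the driver handles quoting and escaping.
    cursor.execute("SELECT name, quantity FROM products WHERE quantity < %s", (10,))
    for name, quantity in cursor.fetchall():
        print("%s is low on stock: %d left" % (name, quantity))
finally:
    conn.close()

Because MySQLdb follows the DB-API convention, roughly the same code works against other databases by swapping the import and connection call for the corresponding driver.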
Documentation

Early open source projects were known to be light on descriptive documentation. As the movement has matured, reference information for open source projects has become more plentiful and more informative. The open source model itself is being used to generate documentation, training resources, and teaching aids. The most notable body of content is the Linux Documentation Project located at http://www.tldp.org/. The Linux Documentation Project (LDP for short) originated in 1992 as a place on the World Wide Web where Linux developers could share documentation with each other and with those who were using their software. The LDP is maintained by a group of volunteers who provide a fairly extensive library of help, including man pages (feature or command documentation), guides, HOWTOs, and frequently asked questions (FAQs). A guide usually contains a lengthy treatment of a general topic, such as network administration, security, understanding the Linux kernel, and so on. The LDP includes guides for beginners as well as advanced developers. HOWTOs are detailed, step-by-step treatments of specific topics, from 3-D modeling to battery management to installing Oracle. There are currently more than 500 HOWTOs. Other open source projects have followed the LDP model, and fairly comprehensive documentation is available for most major projects. Python documentation, for example, ranges from tutorials for beginners who have never programmed to detailed documents on parser generators. Documentation is often translated into multiple languages. LDP guides are available in German, English, Spanish, French, Italian, Portuguese, Russian, and Slovenian.
Open Source Project Sites

Everything discussed to this point can be classified as application enablers—things that get us to the point at which we can create the types of IT solutions that enhance productivity and enable business. Open source components—including the operating system, network services, desktop, development tools, and databases—all exist to support applications. In addition to open source infrastructure and tools, thousands of open source applications are available that run the gamut from games to enterprise resource planning (ERP). Most of these projects are hosted at one of several websites that are tailored to hosting, categorizing, and servicing multiple open source projects and their associated communities. These project sites provide access to and information about open source development efforts based on the type of application field, intended use, platform or environment supported, license type, programming language, and development status. In addition to providing downloads, these sites often include forums for comments, online chat capabilities, mailing lists, news on software updates, and more. Several of the more popular sites are SourceForge, freshmeat, and forge.novell.com.

SOURCEFORGE.NET

VA Software produces a web-based software collaboration and development application called SourceForge (http://sourceforge.net/index.php). SourceForge.net is an online instance of SourceForge that is available for the management of open source development projects. Several services, including the Concurrent Versions System (CVS) for version control and distributed collaboration, are available to facilitate online development and management of code. SourceForge.net markets itself as "the world's largest Open Source software development website, with the largest repository of Open Source code and applications available on the Internet." As of September 2004, SourceForge.net claimed more than 87,000 projects and 913,000 registered users. Basic membership and project management are free, but for $39 per year, users can access a collection of premium services.

FRESHMEAT

The freshmeat collaboration site (http://www.freshmeat.net) began as a central gathering point for developers writing to the Portable Operating System Interface (POSIX), a standard common to Unix and Linux. freshmeat hosts applications released under open source licenses for "Linux users hunting for the software they need for work or play. freshmeat.net makes it possible to keep up on who's doing what, and what everyone else thinks of it."
NOVELL FORGE

In April 2003, Novell publicly presented its road map for Linux and open source. At the same time, it announced the launch of Novell Forge, an open source developer resource. Through the Novell Forge site, developers can download, modify, exchange, and update open source code released by Novell; share projects and ideas with the broader Novell development community; and participate in vertical market and technology communities. Since that time, Novell has contributed several products to open source, including the Novell Nsure UDDI Server and Novell iFolder. Novell Forge is located at http://forge.novell.com.
Software Costs

What is the number one reason IT organizations are turning to open source solutions? It shouldn't surprise you that it's to save money. It has been estimated that software costs U.S. companies more than any other capital expenditure category, including transportation and industrial equipment. Given that software doesn't naturally wear out like delivery vans or office chairs, this expense might be justified. But in reality, software costs a lot—maybe too much.

Trying to determine the actual value of software to any particular business or organization is difficult. In some cases, software costs are easy to justify: in financial trading markets, for example, a downed connection can cost millions of dollars per minute. In other cases, in which an application is made available to everyone but only a few people use it, or most use it only occasionally, the standard market price is tough to justify. In different instances, the same software might provide a dramatic range of actual business benefit. This disparity is what has driven both software vendors and their customers to devise elaborate licensing schemes based on everything from individuals, to workstations, to usage, to sites, to organizations, to connections, and beyond. Although licensing strategies such as site or corporate licensing and software metering have helped to quantify the value of use, they have also introduced management variables that have increased the cost to maintain and manage. Plus, licensing schemes have in some cases conditioned the environment for software producers to take advantage of vendor lock-in tactics.

In many respects, the open source movement is a consequence of the disparity in use value for the same software in different situations. It is also a result of the vast difference between what software costs to produce and what the market will bear—the difference between the actual resources required to develop, debug, and publish software within a user community and what a monopolistic market leader can charge to reap a maximum rate of return.

In general, you get what you pay for, but with open source, some interesting dynamics are altering the rules of the software licensing game. The primary area affected is licensing fees. With Linux, there are no licensing fees. You don't need to pay for extra seats—no cost to make copies and run multiple servers, and no fee differentials for high-end enterprise uses as opposed to simple desktops. It costs nothing to obtain and nothing to share. The GNU GPL under which Linux is protected specifically states, "You have the freedom to distribute copies of free software." This single fact could be worth significant savings to your organization. Eliminate the cost of an operating system for every desktop and every server and you could cut your budget considerably. Add to it an office productivity suite for every desktop and you gain even more.

You might be thinking, "Well, both Red Hat and Novell charge for Linux—what gives?" The Linux kernel is free. What you are paying for with most distributions in which a transaction fee is involved is the packaging, testing, additional services, and utilities that are included with that particular distribution. There is a cost, but it's significantly less than you would pay for proprietary software. For example, Novell's packaged distribution of SUSE Linux Enterprise Server 9 is retail priced at $349 USD. Compare that to Microsoft Windows Server 2003 at $1,020 USD. That's a big difference!

This might be elementary, but the savings are not just a one-time benefit with the initial software purchase. Here are several ways that you can significantly reduce software expenses:
■ Initial purchase—If you have been buying software outright as a capital expenditure, open source eliminates or sharply reduces this cost. The initial purchase price shrinks to either the cost of the distribution or the effort of finding a place to download it.
■ Software maintenance—Software often continues to evolve and improve, even after a version has been distributed to market. These improvements are packaged and sold as the next generation of the product or as a new, updated version. If these updates are released within a certain window from the time you made your initial purchase (for example, 60–90 days), you are often entitled to the latest bells and whistles by getting the update at no charge. But if the release is beyond that window, you have to purchase the update to obtain the new features. The price of a software update is often less than the initial purchase price, but updates can still be a significant expense—especially if the software is rapidly evolving with new features and enhancements. With open source software, the price of an update is, again, usually just the time it takes to download it; there are no fees, and as updates are made available to the community, you can implement them as needed.

Open source can help enterprise companies cut other costs related to software asset management. Some organizations have created entire departments around ensuring that licensing restrictions are enforced and that the company is in compliance with all signed software agreements. With open source, resources used to monitor and enforce software compliance can be repurposed, and expensive software asset management solutions can be shelved. The detailed and sometimes tedious business of contract negotiation and determining actual use can be eliminated.

A study conducted by a leading industry analyst firm asked enterprise software purchasers which software-licensing model they preferred. The most frequent response was, "Allow me to pay for what I use." The second most frequent was, "Allow me to pay for the value I derive." Open source helps bring licensing fees in line with value received.

Software licensing fee savings are dramatically evident for organizations that are choosing to eliminate replacement costs by migrating to open source. An article from IT Manager's Journal details the savings for a company of 300 users. After pricing the cost to upgrade server OS software and email server and client software with associated access licenses, the IT manager was able to implement a superior solution using open source for 25% of the cost of a Windows solution. With the savings, he was able to buy all new hardware and purchase a new Oracle application.

Savings on software costs can be significant. Novell recently migrated from Microsoft Office to OpenOffice.org. With the open source solution, it was possible to get equivalent features and functionality and immediately trim nearly a million dollars per year off of licensing costs.
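As a back-of-the-envelope illustration of the licensing arithmetic, the following sketch compares the list prices quoted above across a hypothetical 50-server deployment; the server count is an assumption for illustration only, and hardware, support, and migration costs are deliberately ignored.

# Rough license-fee comparison using the list prices cited in the text.
SLES9_PRICE = 349      # SUSE Linux Enterprise Server 9, USD per server
WIN2003_PRICE = 1020   # Microsoft Windows Server 2003, USD per server
SERVERS = 50           # hypothetical deployment size

sles_total = SLES9_PRICE * SERVERS    # 17450
win_total = WIN2003_PRICE * SERVERS   # 51000
savings = win_total - sles_total      # 33550

print("SLES 9 total:  $%d" % sles_total)
print("Windows total: $%d" % win_total)
print("Savings:       $%d (%.0f%%)" % (savings, 100.0 * savings / win_total))

Even before subscription or support terms are factored in, the up-front difference in this scenario is roughly two-thirds of the proprietary licensing bill.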
Simplified License Management

Simplified license management is almost a given with the elimination of software fees. In addition to the termination of complex licensing negotiations and software asset management efforts, there are no nagging concerns about compliance. In reality, however, not all licensing issues are eliminated, because the GNU General Public License isn't the only commonly accepted open source license. A summary of other open source licenses in use will be helpful—but first, a little about how they have evolved.
When Richard Stallman started the GNU Project, he also established the Free Software Foundation, a nonprofit organization that promotes and supports the free software movement. As has been mentioned, the word "free" was a source of confusion in that you were free to give software away, but software wasn't necessarily without cost. When the Debian Project, which produces a version of GNU/Linux, began to grow, a set of guidelines was needed to determine what contributed software could appropriately be included with the distribution and still maintain "free software" status. Bruce Perens, the second project leader of Debian, helped create the Debian Social Contract and the Debian Free Software Guidelines. The Debian Social Contract is a set of moral guidelines that engage users and software makers in an agreement regarding free software. The Debian Free Software Guidelines (DFSG) are the rules used to determine whether software actually is free. These rules include free redistribution, inclusion of source code, allowance for modification, and other specifications that protect integrity and redistribution. Bruce Perens teamed with Eric Raymond to create the Open Source Initiative (OSI) in 1998, which attempted to stem confusion around "free" by coining the term "open source" and writing the Open Source Definition (OSD), which was similar to the DFSG. (See the complete guidelines at http://www.opensource.org/docs/definition.php.)
Open Source License Templates

So what does all of this have to do with simplified licensing? Plenty. Today, many different open source licenses are used, and almost all of them certify their openness by comparing compliance to the Open Source Definition. Here are a few of the more common examples:

■ GNU General Public License (GPL)—As mentioned earlier, this is a legacy of Richard Stallman and the GNU Project. The GPL protects the rights of anyone to use, modify, and distribute software licensed under it: the freedom to run the software for any purpose, the freedom to study and modify the source code for any need, and the freedom to pass it on, with the restriction that if you distribute modified versions, the modified source code must be made available under the same license.
■ GNU Lesser General Public License (LGPL)—With the GPL, anything that is linked to the original work (statically or using a shared library) is technically considered bound by the same freedoms (or restrictions—depending on your point of view). Works that are created and then linked to GPL-licensed code would then be required to be open and have their source code provided. The GNU Lesser General Public License (LGPL) was created to allow nonfree programs to use shared libraries without the requirement to make those programs free as well. For example, a database application that uses a library licensed under the LGPL is not required to provide source code. Applications running on Linux that link against shared libraries such as glibc are not subject to the GPL.
■ BSD License—The Berkeley Software Distribution, a derivative of Unix distributed by UC Berkeley, included a license agreement that granted provisions to modify code, distribute changes, and charge for the derivative work without any responsibility to return code changes to the open community. Unlike the GPL, BSD-licensed code can be made private and then redistributed under a proprietary license. As a result, you can find BSD code in Microsoft networking products and in Mac OS X. The BSD license, or BSD template as it is often referred to, is another popular licensing option used for making software open. It is not as powerful/restrictive as the GNU GPL, but is useful in many situations in which proprietary software is combined with free software to create hybrid, proprietary solutions.
NOTE

As a side note, Linus Torvalds has indicated that the main reason Linux hasn't fractured as Unix did is the difference between the BSD and GPL licenses. With the GPL, the source code with enhancements or modifications will always be available to the general public. Everyone will be able to see any changes that are made, and thus the community stays consistent and current with any updates. With Unix, different factions were able to hide modifications behind proprietary licenses, building on private versions of the code. These mutating versions were then set on divergent proprietary paths.
■ Mozilla Public License (MPL)—The Mozilla Public License was developed by Netscape when Communicator was converted to open source. It closely adheres to the Open Source Definition guidelines and is widely used as an "open" template. The MPL was written after the GPL, BSD, and MIT licenses and has since become the template for the majority of open software licenses.
■ MIT License—The MIT license is a very basic license that carries almost no restrictions on use. The source may be applied to any purpose, with the sole requirement that the text of the license (and its copyright notice) be included with the code.
■ IBM Public License—The IBM Public License is a full open source license similar to the Mozilla Public License, but it also includes indemnification provisions. Specifically, contributors making claims about capabilities are solely responsible for those claims; no one else in the chain of original or derived works can be held responsible.

Currently, more than 50 licenses have been approved by the Open Source Initiative as adhering to the guidelines of the Open Source Definition. These are maintained and available for use on the OSI website at http://www.opensource.org/licenses. Using any of these licenses ensures that software covered by the license is "OSI Certified Open Source Software" and is classified as "free" software.
Simplified License Management

Pragmatically, how does this affect enterprise licensing concerns? It reduces workload and worry—reduces, not eliminates. In general, the majority of open source licenses are based on the Mozilla Public License, which means that you don't have to worry about license counts, reuse issues, or distribution. If you're running Linux as a server operating system, there's no need to count boxes. Just install it whenever and wherever you need, making as many copies as you like. If you have a fully redundant failover cluster in a remote location, there's no double cost for software. Want to add extra web servers to accommodate a surge in online sales? No charge.

For many organizations, personnel changes at the desktop are far more frequent than changes to data center servers. Here, licensing can be significantly simplified with open source. If you want no licensing worries at all, you can run a Linux OS on the desktop and OpenOffice.org as the office productivity suite. The typical user has everything he needs, including word processing, email, browser, file sharing, and more. This is especially relevant to organizations that can fluctuate in size over short periods of time as a result of special projects, mergers and acquisitions, divestitures, and seasonal changes.

It was mentioned that licensing issues would be reduced, not eliminated. Realistically, you would still need to deal with proprietary software that might be running on or in conjunction with open software. Web application services, databases, desktop applications, and even many of the software services that Novell sells still require license management and compliance. The net result of all this is that you will have far fewer licensing headaches. There won't be any renewals, contract negotiations, or renegotiations, and no need to spend valuable resources ensuring that you are in compliance with agreements for what could be a significant portion of your software. At minimum, software covered by any of the preceding licenses allows you the freedom to apply it to any use, copy it, modify it if needed, and redistribute it.
If your organization is developing open source software or modifying it for redistribution, you need to look more closely at the main open source licenses at http://www.opensource.org and see which template would be most appropriate to work from.
Lower Hardware Costs

Consider the following quick facts gathered from Novell customers who have implemented Linux:

■ The Asian Art Museum moved to SUSE Linux and IBM hardware and decreased costs by 80%.
■ Burlington Coat Factory implemented SUSE Linux and reduced hardware costs tenfold from its previous Unix environment.
■ Central Scotland Police were able to update 1,000 users to state-of-the-art software applications while making use of slower existing computers, and saved £250,000.
■ A CRM services provider was able to reduce its hardware costs by up to 50% by switching to Linux and eliminating the need for expensive Unix servers.
These cost benefits are fairly typical of the savings that are possible when using Linux as a foundation for applications, networking services, and web services. Linux, in general, makes more efficient use of hardware due to its lean architecture, economical use of code, and the intense focus that multiple groups have put into kernel performance enhancements. Linux not only allows IT departments to get more use out of existing hardware, it enables them to take fuller advantage of new scalability and performance features that have been built into the latest hardware. Linux allows you to get more out of Intel Itanium, AMD64, IBM POWER, and IBM zSeries processors. Some IT organizations are seeing dramatic hardware savings by switching to Linux from proprietary Unix/hardware solutions—Sun/Solaris and IBM/AIX solutions have been particularly expensive. Migrating to Linux on commodity Intel hardware can save a bundle. When Bank of America first started moving from Solaris to Linux, it found it could buy four times the hardware for the same money using regular Intel x86-based hardware (http://www.linuxvalue.com/networkint.shtml). AutoTradeCenter in Mesa, AZ, saved 60% by going with its Oracle application on HP/Intel and Linux rather than Oracle on Sun Solaris (http://www.linuxvalue.com/autotradectr_cs.shtml).
Golden Gate University found that Linux on Intel was three to five times less expensive than Solaris on Sun. Clustering as a business continuance solution has already been discussed, but many companies are also moving mainframe processing to Linux grid computers with significant success. The global oil company Amerada Hess Corp. was able to move its 3D sea-floor rendering application from an IBM supercomputer, which it was leasing for $2 million per year, to a $130,000 Beowulf Linux cluster and get results in the same amount of time. DreamWorks has saved millions of dollars over its previous SGI Unix systems by moving to Intel-based Linux clusters for business operations and rendering (http://www.linuxvalue.com/ncollinsHP.shtml).

Implementing Linux on the desktop can save hardware money as well. The same principle applies: lower-powered hardware can perform equally well when running open source software. Utility desktops with office productivity, web browser, and email functionality (which is all that 80% of office workers use) can be configured using base or entry-level hardware. If you're considering thin-client desktops with Linux (something that's a lot easier to do with Linux than with Windows), the hardware savings can be even more significant. Client hardware can be reduced to the bare minimum as applications, storage, and everything else is maintained at the data center. The client workstation only needs to have a monitor, keyboard, mouse, network adapter, and CPU. In addition, management costs are minimized as all operations can be done remotely.
Scalability, Reliability, and Security

To this point, we have discussed open source software in general terms and have included desktop and frontend software as well as server OS and back office solutions. This section homes in on several Linux advantages that have aided in its meteoric rise to a full-fledged OS player in the data center. These advantages are scalability, reliability, and security, and they generally apply to all Linux distributions.
Scalability

Scalability encompasses several technologies that enable a system to accommodate larger workloads while maintaining consistent and acceptable levels of performance. Three specific scalability areas are clustering, symmetric multiprocessing (SMP), and load balancing.
CLUSTERING

As mentioned previously, the Beowulf Project allows multiple individual Linux machines to be harnessed together in a single, high-performance cluster. Several commercial-grade cluster deployments are in production, including Shell Exploration's seismic calculations, the National Oceanographic and Atmospheric Administration's (NOAA) weather predictions, and Google. Google is reported to have 15,000 Intel processors running Linux that are used to index more than three billion documents and handle 150 million searches per day. Linux clustering capabilities are outstanding, with practical applications from finite element analysis to financial simulations.

Clustering is enabled using separate packages, Beowulf and Heartbeat. Beowulf combines message-passing interfaces (such as MPI and PVM, the parallel virtual machine) with network channel bonding software. This provides distributed interprocess communications and a distributed file system for applications that have been enabled for parallel processing. Simply said, it puts lots of processors on a single large task, sharing data and processing power.

Clustering can also be used to ensure high availability for tasks that are not necessarily computation-intensive but must be up all the time. With a high-availability cluster, multiple (at least two) identical systems are in place with a form of keep-alive monitor or "heartbeat," which monitors the health of nodes in the cluster. If the heartbeat fails on the primary system, a second system takes over, providing uninterrupted service. Cluster management is not tied to any particular machine; management services are shared among the cluster nodes so that if any single point fails, the system continues uninterrupted.

What is clustering good for? Any service that demands continuous access is a candidate. Take authentication, for example. An enterprise network might have thousands of users who authenticate each time they access network resources. If the authentication service goes down, everyone is prevented from getting to what they need. High availability ensures that authentication is always possible. E-commerce applications, email, DHCP, FTP, and high-traffic download sites are also candidates for clustering. Linux clustering capabilities provide both powerful parallel processing and enterprise-class high availability at a relatively low cost.

Scalability features that were built into the Linux 2.6 kernel provide for larger file system sizes, more logical devices, larger main memories, and more scalable SMP support, allowing it to compete on par with most Unix operating systems.
Other scalability technologies include Linux support for hyperthreading, the capability to create two virtual processors on a single physical processor, providing twice the capacity to process threads. The NFSv4 file system is a secure, scalable, distributed file system designed for the global Internet. With the 2.6 kernel, memory and file sizes can scale to the full limits of 32-bit hardware. A distinct advantage of open source is that multiple groups, such as the Linux Scalability Project at the University of Michigan's Center for Information Technology Integration (CITI), focus specifically on scalability. Several of the discoveries and advancements made by this group have been incorporated into the Linux 2.6 kernel. Another example is the Enterprise Linux Group at the IBM T.J. Watson Research Center, which has worked to increase scalability for large-scale symmetric multiprocessor applications. The breadth and depth of intellectual manpower applied to solving scalability problems is responsible for the accelerated acceptance of Linux as a truly scalable solution.

SYMMETRIC MULTIPROCESSING

Multiprocessor support (simultaneously executing multiple threads within a single process) has long been marketed as a performance-enhancing feature for operating systems on IA-32 hardware. But not until the Linux 2.6 kernel have multiple processors really been much of an advantage. Linux supports both symmetric multiprocessing (SMP) and non-uniform memory architecture (NUMA). Novell SUSE Linux has been tested with more than 128 CPUs, and with hardware based on the HP/Intel Itanium 64-bit architecture, there is no limit on the number of supported processors. Multiprocessor support with two processors can help enhance performance for uniprocessor applications such as games. The performance benefits become increasingly visible with software compiles and distributed computing programs in which applications are specifically designed to divide computations among multiple processors.

LOAD BALANCING

An early problem of large Internet sites was accommodating the sometimes wild fluctuations in traffic. An onslaught of page views or database queries could completely clog a connection or bring an application server to its knees. The open source technology called Squid is widely used for load balancing traffic between multiple web and application servers. Squid is an open source proxy web cache that speeds up website access by caching common web requests and DNS lookups. Squid runs on a number of platforms, including Unix, Mac OS X, and Windows.
Caching eliminates the distance and number of locations required to supply an HTTP or FTP request and accelerates web servers, reducing access time and bandwidth consumption. Load balancing is also accomplished with PHP scripts that allocate database requests across multiple databases. A master database is used for updates, and proxy or slave databases are used for queries.
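The text describes this read/write split in PHP terms; the same routing logic can be sketched in a few lines of Python. The host names are hypothetical, and a real deployment would add health checks, connection pooling, and an actual database driver.

import random

# One master accepts writes; read-only replicas absorb query load.
MASTER = "db-master.example.com"
REPLICAS = ["db-slave1.example.com", "db-slave2.example.com"]

def pick_server(sql):
    """Route updates to the master and spread reads across the replicas."""
    is_write = sql.lstrip().lower().startswith(("insert", "update", "delete"))
    return MASTER if is_write else random.choice(REPLICAS)

print(pick_server("SELECT * FROM orders"))           # one of the replicas
print(pick_server("UPDATE orders SET status='ok'"))  # the master

The design choice is simple: because replicas lag slightly behind the master, only statements that never modify data are sent to them, while every write goes to the single authoritative copy.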
Reliability

As web and commerce sites have become more integral to standard business processes, the requirement for high levels of uptime is much more critical. Linux's staying power, measured as the mean time between required system reboots, is outstanding.

One Linux shop has an interesting IT management problem. Using diskless Linux user workstations with shared backend services also running on Linux, the primary points of failure are CPU fans and power supplies. The IT manager has a box of fans and power supplies, and 80% of his administration time (he's only a part-time administrator) is spent replacing worn-out fans and burned-out power supplies. For this company, Linux is extremely reliable.

Many Novell customers have been able to significantly improve reliability by switching from Windows to Linux. A construction company improved uptime from 95% to 99.999% after moving from Windows to SUSE Linux. The Asian Art Museum in San Francisco enjoys the same levels of reliability with its IBM/SUSE implementation, which helps it showcase nearly 15,000 treasures, and no longer needs to reboot servers on average twice per month. The modular, process-based Linux architecture allows different services to be upgraded without ever taking the system down. Customers report that Linux servers have gone through dozens of upgrades and have never been rebooted.
Security And last, but not least, Linux security is a major advantage over other options—particularly Windows. The viruses Love Bug, Code Red, Nimda, Melissa, Blaster, SoBig, and others have collectively cost companies billions and billions of dollars ($55 billion in 2003 according to Trend Micro). But, companies running Windows servers—not those running Linux—have for the most part, incurred this cost. Windows is estimated to have between 40 and 60 million lines of code, as compared to Linux with around 5 million. Windows code has evolved over the years from a desktop operating system with new functionality and patches added, creating an unwieldy collection of services that is full of potential security vulnerabilities. A major culprit is unmanaged code—the capability to initiate processes with access across OS functions without the protection of a sandbox or protected area. Many Windows modules rely on complex interdependencies that are very difficult to compartmentalize and secure. Outlook is linked to Internet Explorer, and a security hole in one leads to a security breach in the other. Also, technologies such as ActiveX and IIS expose these weaknesses to outside access. Linux programs are designed to operate in a more secure manner as isolated processes. Email attachments can’t be executed automatically, as are ActiveX controls and other specially built virus files. Linux (and Mac OS X) prevent any real damage occurring on a system unless the user is logged in with the highest levels of permissions as root or administrator. With Windows, workstation users are almost always logged on with these high-level privileges that exploit vulnerabilities. According to a report by Dr. Nic Peeling and Dr. Julian Satchell, “There are about 60,000 viruses known for Windows, 40 or so for the Macintosh, about 5 for commercial Unix versions, and perhaps 40 for Linux. Most of the Windows viruses are not important, but many hundreds have caused widespread damage. Two or three of the Macintosh viruses were widespread enough to be of importance. None of the Unix or Linux viruses became widespread—most were confined to the laboratory.” The vulnerabilities of Windows have also been of higher severity than those of Linux. From this, you might agree with Security Focus columnist Scott Granneman who writes, “To mess up a Linux box, you need to work at it; to mess up a Windows box, you just need to work on it.” Historically, the Linux open community has also been much quicker at detecting security vulnerabilities, creating and testing patches, and providing them to the community for download. The Linux code is open to thousands of “eyeballs” and both the problem and
the fix are readily apparent to someone. Word in the open source community is that no major security defect has ever gone unfixed for more than 36 hours. Security isn’t just about worms and viruses. It also includes the administration framework that controls user access anywhere in the network. Unix-like operating systems such as Linux were designed based on multiuser, distributed architectures with the capability to work on any machine from any location as if it were local. As a result, security mechanisms for the protection of running processes, transmission of data, and authentication are very secure. Using advanced Novell directory technology in conjunction with a Linux network provides a strong layer of additional security. Governments are choosing Linux for security reasons as well. Although most Linux distributions include a rich collection of preselected packages for automatic installation, as a Linux user you can pick and choose the packages you want installed. Government organizations can create hardened security servers with a specialized set of services and minimal vulnerability by customizing the package list and compiling it according to their own security policies. For example, a server might be compiled to provide only FTP services, which makes it impervious to attacks through email, HTTP, or other common services.
Support For 46% of the enterprise companies in a Forrester Research 2003 study, “lack of support” was the biggest concern in using Linux and open source software (Forrester Research, The Linux Tipping Point, March 2003). That concern is quickly fading, in part because companies now better understand the open source development model and how to find the wealth of online support that is accessible. In addition, the amount of commercial or vendor support that is now available, even since the Forrester research was conducted, is significantly higher. The open source community provides support in several ways, including documentation, FAQs, bug reports, webrings, mailing lists, and more. For example, if you examine the Apache HTTP server website for support resources, you’ll find the following: ■
60
Documentation—Apache documentation is complete for multiple versions of Apache, including sections for release notes, a reference manual, a users’ guide, how to/tutorials, and platform-specific information. Documentation is localized in 13 different languages.
■ FAQs—The frequently asked questions section is comprehensive, with answers to over 100 questions in sections such as configuration, features, error messages, and authentication.
■ Bug reports—Apache uses Bugzilla, a comprehensive online bug reporting and tracking database. You can search the database for specific support problems, enter new bug reports, or get bug summary reports for specific packages.
■ Mailing lists—Something you're not likely to get from any proprietary software developer, unless you are an official beta customer, is constant updates on software development. You can join multiple mailing lists that keep you current on new announcements, support discussions, bugs, source change reports, testing results, and more. Depending on your desired level of interest, you can be intimately involved with the status of a particular issue.
■ Webrings—A webring is a community of related websites that are linked together with navigation aids that make it easy to find relevant information from different sites. For example, the Apache Tools webring pulls together over 800 Apache tools for installation, portal creation, log analysis, search tools, and others.
■ Product download—Of course, you always have access to product code, whether it be the latest version of development code or any version of production code. Open source software means open access to what you need, when you need it.
■ Discussion logs—A wealth of support information is contained in discussion logs. With enough eyeballs viewing the code and discussing it online, most problems have already surfaced, and either a workaround or a fix is on its way and information about it is online.
■ Training—Most open source projects include training sections that consist of everything from beginner tutorials to advanced configuration HOWTOs. The spirit of "give back" that is integral to the open source movement extends to the sharing of knowledge as well as code. In addition, quality Linux training programs are quickly emerging to satisfy the demand for Linux professionals. As mentioned previously, O'Reilly Media has a full selection of books, training, and conferences.
The support resources mentioned here for Apache are fairly typical of all major open source projects. In fact, many of them use the same open source web applications for discussion groups, bug reporting, and mailing lists.
In reality, the support process for open source software is not much different from that for company-based proprietary software, except for the fact that everything is open and available. Proponents of open source methods claim that as soon as the support methods and sources are understood, it's easier and faster to get support than from many established commercial support organizations.

That said, there is still ample need for commercial support. Many IT organizations are more than willing to pay for the security of knowing that when they pick up the phone for help, there will be a skilled, intelligible professional on the other end. The success of companies such as SUSE and Red Hat, which have built their businesses around consulting and support, is proof that the demand exists. Organizations that demand five 9s (99.999%) of reliability absolutely must have quality support, even onsite if need be, and that type of support for open source is now available from companies such as Novell.

Open source support is being supplied by a number of leading ISVs (independent software vendors) and OEMs (original equipment manufacturers), including Oracle, BMC, Red Hat, IBM, HP, and others. HP provides multiple levels of training on Linux, from simple administration to advanced driver development. Hardware and software products from these industry leaders that include open source solutions are covered by enterprise-class support agreements. Support is just one element that these companies include as part of a total package solution. In addition, hundreds of small integrators have emerged that provide support and development services for open source software. A quick search of http://www.findopensourcesupport.com yields over 150 providers that supply open source services.

Novell sees the open source support need as a primary opportunity to supply customers with the service and support that they require. The sweeping momentum of open source has created a support vacuum of sorts, and no global software services company is in a better position to fill this need than Novell. Novell was a pioneer in the creation of technical support models from the beginning of networking software in the early 1990s. As a result, the Novell support infrastructure that was created to support networking installations in the largest companies around the world is now focused on open source—and focused with intensity.

NOTE

The origins of NetWare, the network operating system that was Novell's flagship product for over a decade, are not that far from Linux. NetWare is a client/server model that provides data sharing among systems, the first of which were Unix, PCs, and Macintosh.
Over the years, NetWare evolved to support an extensive line of networking services and applications, even to the point of supporting Apache, MySQL, and PHP/Perl/Python in 2003. This is mentioned to point out that Novell's position as a support supplier for open source is a logical and synergistic extension of the service it has been supplying for years. The adoption of Linux, both by Novell's development and support organizations, has been an easy transition and a change that has infused the company with new energy around a platform of promise for advanced technologies.
Novell can contribute significantly to the support requirements of companies adopting open source in three main categories: technical support at multiple levels, training and education, and consulting:

■ Technical support—A full description of Novell's support offerings is beyond the scope of this book, but to summarize, nine categories of free support are available from the Novell website, including an extensive knowledgebase, documentation, and product updates. The Novell knowledgebase is comprehensive, with searches by product and by category of help document. Twelve categories of paid support include everything from per-incident telephone support to premium support that includes priority access to expert resources 24×7×365 onsite. Support can be included as part of a product purchase price, on a subscription basis, or as part of a corporate license agreement. Novell support is very flexible and designed to accommodate the needs of any organization. Novell maintains support centers in seven different locations around the world, and can service support needs in any time zone and in many languages. Where Novell contributes to (and accelerates) open source adoption is that open source solutions distributed by Novell are now backed by world-class support. SUSE Linux, Apache, MySQL, JBoss, rsync, Samba, and other solutions are eligible for any of the currently offered levels of technical support available from Novell. Free support with knowledgebase and discussion group content, per-incident support calls, or premium support services all cover open source solutions distributed by Novell. Check out what Novell can provide at http://support.novell.com.
■ Training and education—Novell is also a recognized innovator in the areas of training and certification, with the establishment of the Certified Novell Engineer (CNE) and Certified Novell Administrator (CNA) credentials in the 1990s. Hundreds of thousands have been trained through Novell certified training programs and courses. Novell has expanded its training certifications to include open source and now provides the Certified Linux Professional and Certified Linux Engineer programs. These certifications continue the tradition of high-quality Novell education and provide technical professionals with credentials that are widely sought after for implementing state-of-the-art open source solutions. Training courses are available online, as are self-study kits. Training is available onsite or at hundreds of locations through certified training partners and academic institutions. Find more on training and education at http://www.novell.com/training.
■ Consulting—Many organizations find outsourcing a simple and cost-effective method for solution development. The Novell Consulting organization contains a rich history of practical expertise gained through years of experience from the Novell consulting group and Cambridge Technology Partners, a strategic IT management and consulting company acquired by Novell in 2001. Novell's Linux and open source consulting experts work with companies to help them leverage existing infrastructure investments, support business goals, and provide for future expansion using open source. Novell uses a proven, comprehensive approach to identify and implement solutions for key business problems that help achieve tangible results and realize return on investment in a short time frame. Details on consulting services are located at http://www.novell.com/consulting.
Finally, Novell support is augmented, amplified, and widely distributed through an extensive world network of Novell channel partners. More than 4,200 globally distributed Novell partners provide services from product sales, to technical support, to integration and setup, to custom development, and more. You can browse to find a Novell support partner at http://www.novell.com/partnerlocator/.

Is Linux support now available? Absolutely! And not just from Novell. Most of the support services just mentioned are also available from leading OEMs and ISVs providing open source solutions. Service, support, and training for open source are of high quality and relevance, and significantly help to promote open source in the industry. Novell is in a strategic position to promote open source, and especially Linux, with its SUSE Linux expertise and the position of Linux as a basis for network and application services. The closely related NetWare experience is easily leveraged to further open source in IT organizations around the world.
Deny Vendor Lock-in
In economics, vendor lock-in, also known as proprietary lock-in or simply lock-in, is a situation in which a customer is dependent on a vendor for products and services and cannot move to another vendor without substantial costs, real and/or perceived. By creating these costs for the customer, lock-in favors the company (vendor) at the expense of the consumer. Lock-in costs create a barrier to entry in a market that, if great enough to result in an effective monopoly, might result in antitrust actions from the relevant authorities (the FTC in the United States).

Lock-in is often used in the computer industry to describe the effects of a lack of compatibility between different systems. Different companies, or a single company, might create different versions of the same system architecture that cannot interoperate. Manufacturers might design their products so that replacement parts or add-on enhancements must be purchased from the same manufacturer rather than from a third party (the "connector conspiracy"). The purpose is to make it difficult for users to switch to competing systems. (Source: Wikipedia, the free encyclopedia.)

"Enraged" is probably the most appropriate description of CIO sentiments toward a certain ISV market leader's attempts to limit choice through the practice of vendor lock-in. After a quick review of recent history, it's not hard to conclude that a good deal of the intensity and momentum of the entire open source movement is a reactive backlash to the monopolistic and coercive methods of commercial software vendors. Many ISVs, including Novell, work to ensure that customers return again and again for services and establish income flows through subscriptions and maintenance. But historically, others have consistently endeavored to limit choice to a single vendor with closed and proprietary solutions. As described in a recent eWeek article, "The economic value of 'open' lies in the ability for users to walk away from onerous vendor pricing or licensing, the negotiating leverage they have, and the ability to avoid vendor-unique extensions."

The following list includes several of John Terpstra's definitions of vendor lock-in. John has been part of the open source Samba project since its early days, and he has methodically identified several restrictions that illustrate vendor lock-in scenarios:

■ Proprietary data storage formats that lack interoperability with other vendors' software applications and/or that prevent data from being exported from the vendor's product to another vendor's product
■
A vendor that has a restrictive partnership agreement to implement (that is, support) the product only on said partners’ platforms or systems
■
A vendor that requires customers to sign an agreement that mandates that support fees will continue to be paid even if use of that vendor’s product is discontinued within the contract period
■
A vendor that places license restrictions on which OS platforms the data from the vendor’s application may be placed on
■
A vendor that demands contractual exclusivity, preventing all other competitors’ products from being onsite
■
A vendor that does not supply source code for its applications
■
A vendor that provides source code but fails to provide any essential component or software needed to rebuild a working binary application from that code
Open source provides fundamental freedom of choice. Sure, there are differences in distributions and features, but the fact that code is open and available means that in the majority of cases, good features and enhancements will eventually be merged with the mainstream code. Several elements of open source reduce the probability of vendor lock-in. These include open standards, application portability, and alternative selection: ■
Open standards—Support for open standards and being open standards-based can be two completely different things. Hundreds of examples exist in which a solution that accommodates a standard by conversion, tunneling, or mapping breaks down when a new version arrives, or the “patch” must be flexed to conform to supposed standard requirements. Although standards themselves are sometimes fluid, there is more longevity, momentum, support, and compatibility when a solution is based on open standards. If an implemented solution is only open standards-compliant, any changes such as adding new components, upgrading applications, making new connections, and so on could require reimplementing the original solution at additional cost. If the solution doesn’t flex, you’re locked in. Internet mail is a good example. Microsoft Outlook and Exchange, although compliant with mail standards such as POP and IMAP, are not based on them. Therefore, integrating other standards-based mail systems in the event of a merger or acquisition takes a lot of work. They don’t seamlessly interoperate, and the net result is usually a complete conversion of one system to another. If you convert to Exchange, you’ve solved the immediate problem, but it will resurface at the next reorganization.
Open source solutions are, by and large, interoperable with little modification. It doesn’t matter whether your email system is Sendmail or Postfix; one can be substituted for the other. Any POP client works with both. There’s no pressure to get into a proprietary groove and stay with the commercially metered flow for everything to work together. ■
Portability—Open source means you’re not tied to a specific operating system or platform. The great majority of open source applications that have been written for Linux run on all distributions of Linux. They also run on most versions of Unix, including multiple versions of BSD. This extends to the majority of web applications and services as well. PHP, Perl, Python, Java, and the solutions that are built using them run virtually unchanged anywhere. There’s no requirement to develop to a specific platform and then port it to another platform while maintaining multiple versions of code. The Unix/Linux architecture design provides for performance without intricate hooks or hardwiring between the operating system and the application. The thought of Internet Explorer or Exchange running on Linux seems odd, but it shouldn’t be.
■
Substitutes—In a free-market economy, the availability of substitutes is responsible for a number of market factors, including competitive pricing and an increase in features. Vendor lock-in is achieved when there are no practical substitutes and prices are at a premium. As a source of substitutes for a widening selection of mass-market software solutions, open source is helping release the stranglehold of monolithic, end-to-end solution providers.
At this point in time, almost every common software solution needed for a typical business is available as an open source solution. Operating system, file sharing, firewall, web server, database, office productivity applications—multiple versions are available for each of these with no interlocking dependencies. Here are the top vendor lock-in dangers: ■
Price—Call it monopolistic pricing, call it extortion. If you don’t have choice, you’re stuck paying whatever the vendor demands. With a subscription fee schedule and no competition, you don’t necessarily get enhancements, but you keep paying.
■
Adequate technology—Why be saddled with “good enough” solutions when excellent options are available? If you’re on a train, you eat what’s in the dining car, sleep where they put you, and go where the train is going. If you drive, you have your choice of restaurants, hotels, and destinations. Windows is good enough, but you get viruses, blue screens,
and you have to rebuild it periodically. A Windows file and print server will support 50–100 users, but Linux on the same hardware will support thousands. Windows supports clustering, but Linux supports it powerfully and elegantly. An “adequate technology” strategy works for some organizations, but most will suffer long-term disadvantages by being locked in to only what a vendor provides. ■
Flexibility—Most organizations are NOT single-vendor shops. They are a heterogeneous mix of dissimilar systems, and monolithic vendor solutions don’t “work and play well with others.” Common administration, holistic and comprehensive management, and seamless interoperability are not priorities for a vendor bent on eliminating all competing options.
Lest you think it hypocritical to point out the blemishes of vendor lock-in in the face of Novell’s 20-plus years as an independent software vendor, we must look at the facts. Novell’s first product was an effort to get Unix, CP/M, and DOS machines to work together seamlessly with the capability to share files back and forth. If you look at Novell’s history, practically every product produced has had the stamp of interoperability—getting heterogeneous, disparate systems to work together. This includes NetWare with its many clients, eDirectory with its user authentication synchronization, ZENworks with cross-platform management, and now products such as SUSE Linux, Open Enterprise Server, and iFolder. In a way, Novell’s success is a result of providing interoperability and connection services between disparate systems. With a focus on open source and Linux, the tradition of managing flexibility and choice continues.
Quality Software and Plentiful Resources
People like to take potshots at open source—especially threatened ISV executives. Realistically, it’s probably hard to imagine how thousands of developers, from all over the world, without bosses, compensation, deadlines, or fear of retribution can create software solutions that rival the best efforts of proprietary companies. This section takes an in-depth look at the development process, with the objective of illustrating just how open source code gets written. It also shows just how deep the resource pool for open source development and administration talent really is.
Who Are Open Source Developers?
Linux, Apache, and several other mainstream solutions have shown that collaborative open source software projects can be successful. “Why open source works” is the big question—one that reportedly has puzzled even Bill Gates. “Who can afford to do professional work for nothing? What hobbyist can put three man-years into programming, finding all the bugs, documenting his product, and distributing it for free?” Gates once wrote. Steven Weber, a political science professor at UC Berkeley, has researched the motivations of Linux developers and summarized the reasons why they do what they do in the following categories (Source: The Success of Open Source, Harvard University Press, 2004): ■
Job as a vocation—In the process of “scratching a personal itch,” developers solve a problem that empowers them. With virtually no distribution costs, they can empower others as well.
■
Ego boosting—The challenge of programming, of solving a problem with visual evidence of the accomplishment, is a source of satisfaction.
■
Reputation—Open source developers strongly identify with the community. As a result, good work leads to recognition and reputation within the community—and potentially to better work and more reward.
■
Identity and belief systems—Much of the open source community culture is rooted in the “freedom” movement espoused by Richard Stallman, which includes principles of free software, free access to data, wariness of authority, appreciation for art and fun, and judgment of value based on creation over credentials. Developers strongly identify with this.
■
The joint enemy—Uniting in benevolent purpose against a common enemy is at least an element of motivation for open source developers.
■
Art and beauty—Code either works or it doesn’t, but the elegance of a simple solution is art—the difference between “clean” and “ugly” code. With open source, a creation can be shared with others.
From this research, you can see that open source developers are not merely geeks, united in a hobby-love for computers with code a by-product of their cyber fun. The motivations for development are deep, real, and closely parallel the motivations for almost any other productive endeavor, whether motivated by profit, altruism, self-interest, or religious belief. Bottom line, the open source movement is real, with traction provided by a competent, skilled development force.
What do we know about who open source developers really are? Concrete data on demographics is scarce, but some estimates give us an idea. The credits in the 2.3 version of the Linux kernel code listed developers from over 30 countries. The community is definitely far-flung and international. More developers have .com addresses than .edu, indicating that many are working at for-profit institutions. The O’Reilly Open Source Convention attendee list included people from aerospace (Boeing, Lockheed Martin, General Dynamics, Raytheon, NASA); computers and semiconductors (Agilent, Apple, Fujitsu, HP, Intel, IBM, Philips, Intuit, Macromedia, SAIC, Sun, Texas Instruments, Veritas); telecom (AT&T Wireless, Nokia, Qualcomm, Verizon Wireless); finance, insurance, and accounting (Barclays Global Investors, Morgan Stanley, Federal Reserve Bank, PriceWaterhouseCoopers, Prudential); media (AOL Time Warner, BBC, Disney, LexisNexis, Reuters, USA Today, Yahoo!); and pharmaceuticals (GlaxoSmithKline, McKesson, Merck, Novartis, Pfizer). A 2002 study conducted by Boston Consulting Group, using data from SourceForge and Linux kernel email lists, produced quantified demographics on the Free/Open Source Software community, including the following: ■
Seventy percent are Generation X, between 22 and 37, with an average age of 28.
■
They typically volunteer between 8 and 14 hours per week to open source projects.
■
Fifty-five percent are professional programmers, IT managers, or system administrators; 20% are students.
■
The average person has 11 years of programming experience.
Interesting quotes from survey respondents included the following: “How often in ‘real life’ do you find Christians, Jews, Muslims, Atheists, Whites, Blacks, Hispanics, Republicans, Democrats, and Anarchists all working together without killing one another?” (NYC IT consultant). “People will always want to contribute what they know to a project that scratches an itch. Open software will continue to depend on projects that meet people where they need help” (San Jose IT manager).
How Does the Open Source Process Work?
Each open source project can have its own unique development process. After all, there is no forced hierarchy, preferred method, or established “right way” with open source. However, an open source project will generally include
the elements illustrated by the flowchart developed by BCG after their 2002 research (see Figure 2.1).
FIGURE 2.1
A general map of the open source development process.
[Flowchart: user developers, for-profit companies, and users interact around an initial code base and vision: scratching a coding itch, donating and integrating code into new releases, reporting bugs, supporting each other, establishing team norms and informal leadership, and prepackaging open source for commercial use.]
The process can be summed up as follows: ■
An itch develops—It might be as simple as someone wanting to organize digital photos to share with family members, or as complex as the need to standardize customer information across divisions of a multinational company. The itch is common; that is, the need is shared across a larger audience than just one.
■
Germination occurs—The inception of a project could take one of several forms. It might be Linux-like in that one person establishes an initial code base. It might be Apache-like, with a committee meeting to clarify need and establish direction. It might be Mozilla-like, with a gifting of free code. It might be the posting of a project to SourceForge. Any activity or event that produces evidence of productive activity toward scratching the itch can be considered germination.
■
The project takes root—From inception, the project begins to grow. Several activities can be part of this phase. A “home” is provided for the
project, which includes an accessible storage location for code. Intercommunication is established that can be as simple as trading emails, or as extensive as established mailing lists with subgroups. Informal leadership is established based on respect and trust. Norms informally evolve, shaping communication, interaction, and productivity. Word of the project is spread to others who might have the same itch. ■
Cultivation—The often iterative process of creating, submitting, testing, and evaluating code occurs. Here users, developers, or user/developers work to create, enhance, and refine the product. The code might be housed in a CVS-type environment in which bugs and feature requests are tracked and successive iterations of code are progressively versioned, with the objective of reaching a state of hardened, production-quality code.
■
Recurring harvest—Open source code is released when it reaches a state useful to some population. It does not go dormant, but continues the cycle of enhancement, often with many releases (Raymond states, “release early, release often”). At this point, the software is often valuable for productive use within the project community.
■
Commercial productive use—Some projects (with widespread itch appeal) are released for general use and become part of mainstream, commercial solutions. A standard “open” license is applied to the product, it is adopted by commercial ISVs or OEMs, and support is provided. Responsible commercial organizations join the community, often contributing back to the project in terms of manpower, hosting services, support, leadership, and enhancements, or by contributing related technologies to the effort.
Several key principles facilitate the development of open source software. Without these key elements, it would be much more difficult to completely evolve a project to productive reality. These elements include code modularization, peer review, developers as users, and, of course, an effective communication infrastructure: ■
Code modularization—In the Linux example, you will see that the Linux kernel project is modularized with subprojects for the kernel state, security, device I/O, networking, file system, process management, memory management, and more. In addition, many more projects for device drivers, functions, utilities, and applications are available. The elegance of Linux in part is because of the architecture, which supports intercommunication among these separate modules. Packages—distinct collections of functional code or objects that can be loaded or unloaded without
affecting the kernel or other packages—are a reflection of this modular architecture. “It is suggested that mindful implementation of the principles of modularity may improve the rate of success of many Free/Open Source software projects.” This assertion is developed in detail in the paper, “Free/Open Source Software as a Complex Modular System” by Narduzzo and Rossi. (The short listing that follows this list shows one way to observe this modularity on a running system.) ■
Peer review—The term peer review might not be completely descriptive of this element, but it encompasses the idea that a person’s contribution is subject to observation and analysis by the entire open community. One’s contribution is not shielded by proprietary binaries. The process can include kudos as well as flames, acceptance or rejection, reputation, and responsibility. As a result, the established norms of peer review tend to produce quality code. Solutions, if not at first, eventually evolve to become effective, simple, and elegant.
■
Developers as users—Much has been said about the ineffectiveness of the multistep, silo-prone process of gathering marketing requirements from users, feeding that to developers who create code based on perceived need and theory, and then throwing it over the wall to be sold to users. With open source, the majority of the initial users are also developers. They solve their own problems, and in doing so create more effective solutions. This eliminates miscommunication and also dramatically speeds the development cycle. According to Raymond, both developers and users “develop a shared representation grounded in the actual code.”
■
Communication infrastructure—Without the Internet, the open source process would be impossible. Global access, instant communication, shared storage, and open standards are all elements that are key to the development of open source.
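The listing referred to above: the modular architecture of the Linux kernel can be observed directly on a running system, because the kernel reports its loaded modules and their interdependencies through /proc/modules. The Python sketch below simply reads and prints that information; it assumes a Linux system and is offered only as an illustration, not as part of any project's tooling.

# List loaded Linux kernel modules and their dependents by reading /proc/modules.
# Illustrative sketch; assumes a Linux system where /proc/modules is available.

def loaded_modules(path="/proc/modules"):
    """Return a mapping of module name to its size and the modules that use it."""
    modules = {}
    with open(path) as proc_file:
        for line in proc_file:
            # Each line: name size refcount dependents state address
            fields = line.split()
            dependents = fields[3]
            modules[fields[0]] = {
                "size_bytes": int(fields[1]),
                "used_by": [] if dependents == "-" else dependents.rstrip(",").split(","),
            }
    return modules

if __name__ == "__main__":
    for name, info in sorted(loaded_modules().items()):
        users = ", ".join(info["used_by"]) or "(none)"
        print(f"{name:<24} {info['size_bytes']:>8} bytes  used by: {users}")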
And the result? Open source development leads to quality software. Here’s a quantifiable sample of the caliber of open source code. Reasoning is a security inspection service that uses automated technology to uncover coding flaws, such as memory leaks, out-of-bounds array access, bad deallocations, and uninitialized variables. In comparing two open source solutions (MySQL and the Linux TCP/IP stack) to the collection of commercially available versions of each, Reasoning found that defect density in the open source solutions was significantly lower than in the commercial versions. The defect density per one thousand lines of code for MySQL was .09, as compared to an average of .57 for commercial versions. The defect range for
commercial databases was between .36 and .71, still significantly higher than for MySQL. The defect rate for the Linux TCP/IP stack was .10 per thousand lines of code, as compared to a range from .15 for the best of the commercial stacks to more than 1.0 for the worst (http://www.reasoning.com). Linus Torvalds simplifies the concept. “I think, fundamentally, open source does tend to be more stable software. It’s the right way to do things. I compare it to science vs. witchcraft. In science, the whole system builds on people looking at other people’s results and building on top of them. In witchcraft, somebody had a small secret and guarded it—but never allowed others to really understand it and build on it…When problems get serious enough, you can’t have one person or one company guarding their secrets. You have to have everybody share in knowledge.”
Summary
This chapter thoroughly explains why open source is a viable solution for the computing environment. The next chapter builds upon this and examines how open source has fared in the real world.
CHAPTER 3
Open Source in the Real World
So far, you’ve discovered what open source is, where it originated, its advantages, and how it is created. That’s all good information if you need background data to feel comfortable about the decision to move ahead. This chapter gets down to the building blocks—the actual products and solutions that are currently available to help you implement open source in your organization. This chapter covers product specifications and integration, and details case studies of open source implementations in real organizations. It’s worth pointing out again that, although in some instances an organization might elect to implement only open source software, total open source IT implementations are generally not the case. By and large, open source is integrated with proprietary software to create a solution that optimally meets the requirements of the organization. These requirements might include heterogeneous system support and management, gradual migration strategies, custom application support, recouping capital investments, or any one of a number of other organization-specific factors.
Integration Factors to Consider
Figure 3.1 graphically illustrates a general model and framework for Linux integration that was developed by Novell Consulting. It is divided into two halves, one representing desktops and the platform, applications, and services that exist at the node or user end. The other half represents the infrastructure, applications, and services that are considered to be backend or data center solutions. Each of these halves is divided into concentric circles, shaded
different colors representing different levels of technical integration complexity (less complex at the outer ring moving to most complex at the center).
FIGURE 3.1
Novell’s Linux and open source integration framework.
[Diagram: concentric rings for Desktops and Servers, with labels including technical workstations, business appliances, desktop management, point of sale, general knowledge worker, portal, file, print, light application servers, edge servers, workgroup, data center, SAP, PeopleSoft, Oracle, MP/NUMA, development, branch/remote office, system administration, computation clusters, DNS, and web.]
When implementing Linux and open source solutions, it often makes sense to start with the outer rings and work to the center using a phased or incremental approach. Depending on customer needs, it can also make sense to start with the backend or data center areas first. These services can be staged, tested, and implemented seamlessly without ever affecting the majority of end users. As always, this is a “general” model, and is used as a framework for analysis. Different environments require alternative approaches. An objective of this book is to systematically work through each of the items in the figure, starting with the data center side first and working from the outside ring toward the center. Novell Consulting has developed a phased approach for implementing open source solutions. Each phase is general and might include elementary issues, but experience shows that going through a comprehensive checklist minimizes problems and helps accelerate full, successful deployment. Implementation phases include assessment, design, implementation/migration, training, and support. The following sections provide a brief overview of each.
Assessment
The assessment phase clarifies objectives and determines feasibility. Don’t blow off this step—it might sound simple, but the assessment phase in some cases should be extensive. Invariably, information turned up during the assessment phase affects objectives and feasibility. The following questions will help refine the objectives and provide critical assessment of feasibility: ■
What is the objective of this open source project? New capability, cost savings, integration with partners, better security/reliability/scalability? What are primary, secondary, and ancillary objectives and how should they be weighted? Is there a gating measurement or criterion that determines whether this project is successful?
■
How well does this project fit with existing corporate strategy? Is it an extension of an existing objective, or is it breaking new ground in a new direction? How ready and willing, in general, is the organization to accommodate this technology?
■
What are the stakeholder and business unit requirements (business, technical, and political for management, employees, customers, partners, suppliers)?
■
Where is a logical starting point? Which servers, services, and applications does it make sense to implement/migrate in the short and long term?
■
Which open source services and applications are available that meet your needs and provide comparable (or better) features? What other alternatives might need to be evaluated, such as terminal services or emulation? Are there any solutions that are only available as proprietary software?
■
What are the general steps for how servers, services, and applications, along with associated data and business processes, can best be implemented/migrated?
■
What are the interdependencies among services and applications, desktop clients, or other software that might affect this migration/implementation plan?
■
Are there any regulatory policies and requirements that need to be taken into account, such as licensing, government, security, or encryption?
■
What opportunities for hardware and service consolidation exist?
■
What level of requisite skills to support Linux/open source does your staff have? Are there differences in requirements between implementation and long-term support?
■
What will be the estimated cost of this project and does it fall within a practical range given budget and timing restrictions?
■
What is the estimated internal rate of return, return on investment, or total cost of ownership, and does it meet established requirements?
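As a rough illustration of the last question, the sketch below computes a simple return-on-investment figure from an estimated project cost and estimated annual savings. The numbers and the three-year horizon are hypothetical placeholders; a real assessment would use your own figures and a more rigorous model (IRR, TCO) rather than this simplified arithmetic.

# Rough ROI estimate for a migration project; all figures are hypothetical.

def simple_roi(project_cost, annual_savings, years=3):
    """Return (total savings - cost) / cost over the given horizon."""
    total_savings = annual_savings * years
    return (total_savings - project_cost) / project_cost

if __name__ == "__main__":
    cost = 120000      # estimated implementation/migration cost
    savings = 65000    # estimated annual savings (licenses, hardware, support)
    print(f"Estimated 3-year ROI: {simple_roi(cost, savings):.0%}")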
Design
After objectives have been defined and general feasibility has been confirmed, it’s time to begin the implementation/migration design phase. In this phase, details are refined and a framework for execution is established. Design questions and activities include the following:
■
Evaluate and select a Linux distribution. Will the distribution be multipurpose, single purpose, enterprise, or single task focused? Will the configuration be compiled with a standard list of packages, a custom list, or with minimal function?
■
Select applications or services. Is the service or application you require available as open source? If so, does it meet all requirements for functionality, security, manageability, and so on? If not open source, does it exist from a proprietary vendor? If not, can it be developed internally using open source components?
■
Design a migration/implementation plan. What will be the specific project steps for the process?
■
Consider a test bed or staging configuration to help determine actual implementation/migration steps and to expose unforeseen issues.
■
If migrating, determine each phase of migration, including data, users, applications, connections, services, and so on.
■
Determine the services architecture, which might include file, print, identity, directory, messaging, collaboration, and web services, as well as DNS/DHCP, virus protection, backup, virtual private networks (VPNs), clustering, high availability, and so forth.
■
Determine the application architecture. Will there be client applications, server applications, client/server applications, terminal services, or OS emulation required?
■
Determine business continuity requirements. Is there a need for backup, data mirroring, system redundancy, simple failover, geo-site failover, or remote storage?
Integration Factors to Consider
■
Identify migration tools and processes that meet the needs of your specific environment.
■
Design a training plan that takes into account the existing level of Linux/open source training and certification among your staff.
Implementation
The implementation phase consists of hands-on piloting, testing, configuration, rollout, and training. Here are several possible implementation phase activities: ■
Create a staging lab and establish setup procedures and acceptance criteria for actual implementation.
■
Train migration staff. This might include self-training using staging lab or professional training and coursework.
■
Validate the migration pilot test plans, scripts, and acceptance criteria with trained staff.
■
Roll out the migration design in a limited-production pilot. This might be to a limited set of advanced users or for a particular edge service that has redundancy or fail-back capabilities.
■
Install and configure any required migration tools.
■
Install or migrate server operating systems. This might include manual activities or might be automated through remote control, package management, or server management.
■
Install or migrate applications.
■
Migrate users, groups, accounts, domains, and any other management controls that are required.
■
Migrate data, always ensuring that copies exist and that there is an established method for fail-back if needed (a simple verification sketch follows this list).
■
Validate pilot migration processes and deployment, and modify as necessary.
■
Complete migration/implementation of all services and transition to production.
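The verification sketch mentioned above: one way to back the data-migration step is to record a checksum for every file on the source before migrating, then recompute the checksums on the target and flag anything missing or changed. This is an illustrative Python sketch only; the paths and the choice of SHA-256 are assumptions, not part of any Novell migration tool.

# Compare file checksums between a source tree and its migrated copy.
# Illustrative sketch; the source and target paths below are hypothetical.
import hashlib
import os

def checksum_tree(root):
    """Map each file's path (relative to root) to its SHA-256 digest."""
    digests = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            full_path = os.path.join(dirpath, name)
            relative = os.path.relpath(full_path, root)
            sha = hashlib.sha256()
            with open(full_path, "rb") as handle:
                for block in iter(lambda: handle.read(65536), b""):
                    sha.update(block)
            digests[relative] = sha.hexdigest()
    return digests

def verify_migration(source_root, target_root):
    source, target = checksum_tree(source_root), checksum_tree(target_root)
    missing = sorted(set(source) - set(target))
    changed = sorted(path for path in source if path in target and source[path] != target[path])
    return missing, changed

if __name__ == "__main__":
    missing, changed = verify_migration("/srv/data", "/mnt/new-server/data")
    print(f"missing on target: {len(missing)}, content mismatch: {len(changed)}")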
Training
Training can be interspersed throughout the implementation/migration process. In addition to staff training for transition, there will be a requirement for staff
training for management and support. Users might also need to be trained in the use of new applications and services: ■
Establish good communication across all stakeholders. This includes education about the new services/applications, what they are, why they are needed, and what the enhanced value to all stakeholders will be. It could also include timelines, scheduled outages, and endorsements by management. Use this opportunity to set expectations to ease transition and change.
■
Educate management. Management can be key in establishing a good environment for accepting change. Ensure that all affected management have a clear understanding of objectives, benefits, timelines, and possible problem areas.
■
Train support staff for management and administration. This might include specific training on administration software, management standards, and protocols, such as Simple Network Management Protocol (SNMP) or Common Information Model (CIM), health monitoring, logging and reports, and so on.
■
Train help desk support staff, providing tools and education for accommodating end users. This might include support request logging and tracking, phone support management, new application or service training, and so forth.
■
Train end users. Training might be provided through in-house training sessions, web-based or computer-based training, offsite professional training courses, distributed manuals, train-the-trainer programs, or any one of a number of training solutions. With effective training, less demand will be placed on help desk and system administration resources.
Support
After a solution has been migrated or implemented, the responsibility still exists to provide ongoing support and management. This might be a simple responsibility easily assumed by an existing administrator or might involve the efforts of multiple managers in different locations and an extensive help desk department: ■
Consolidate administration. Look for methods, tools, and applications that simplify administration by consolidating and automating management tasks. This includes tools that are interoperable, standards-based, and work with a common interface.
■
Simplify support by creating self-help solutions using frequently asked questions, an online support database, and web-based support solutions for chat, problem submittal, or remote control.
■
Analyze support problems looking for simpler methods, workarounds, and processes to avoid repetition and redundancy.
■
Consider outsourcing support and administration if third-party sources can provide superior expertise at comparable or lower costs.
■
Periodically reevaluate the solution for upgrades and enhancements, both in terms of features and functionality and also for security.
Open source migration or integration doesn’t need to be a painful process. Many administrators and IT personnel have already picked up Linux knowledge just out of curiosity, so in-house expertise is available. Starting with edge services, open source solutions can be implemented without major negative impact to other services or end users. Often, the actual migration will go quite smoothly. One Novell customer, after working through each of the preceding factors, was geared to expect a two-week implementation process. The entire process went without a hitch and was accomplished in four hours.
The Linux Solution
Every application, service, or solution requires a protocol engine or operating system. Although several open source operating systems or kernels are available, for all practical purposes, Linux is the market leader. Linux is a platform for everything from computation-intensive clusters to thin-client desktops to web servers. In reality, Linux can be lightweight and small enough to be embedded in a wristwatch or powerful and scalable enough to manage the largest IBM mainframe running hundreds of instances of applications. Distributions are available from Red Hat, Debian, Mandrake, TurboLinux, and others that are based on the same Linux kernel and include similar capabilities. However, Novell believes the SUSE distributions provide significant advantages in the areas of management, platforms supported, compatibility testing performed, and advanced technologies.
SUSE Linux Product Line
The SUSE Linux product line from Novell includes SUSE Linux Enterprise Server (SLES) 9 and SUSE Linux Desktop.
SUSE LINUX ENTERPRISE SERVER 9
The SUSE Linux Enterprise Server is targeted to enterprise computing uses in which scalability, performance, reliability, and security are required. This distribution includes many distinctive features that make it particularly suited to an enterprise environment, including those shown in Tables 3.1, 3.2, and 3.3.
TABLE 3.1 SUSE Linux Enterprise Server 9 Distinctive Features
Linux 2.6 kernel: First enterprise-class Linux server operating system built on the 2.6 Linux kernel (see Table 3.2)
YaST: Installation and configuration tool for operating system, network services, storage, clusters, and applications
CIM: Open application programming interfaces (APIs) for CIM for integration of third-party management tools
CKRM: Class-based kernel resource management for mainframe-like partitioning of large-scale servers
Clustering: Clustering support for automatic failover and high-performance computing; implements Open Clustering Framework APIs to provide low-level services for node fencing and fault isolation
Hotplug: A feature providing Hotplug services for hardware change without system disruption
Certified security: A feature currently in evaluation for compliance with EAL 4+
Enterprise Volume Manager: Network-based storage using multiple file systems with simple management
User Mode Linux: A feature used to enable virtualization of different Linux configurations on the same hardware
Distributed Replicated Block Device (DRBD): A feature that creates single partitions from multiple disks that mirror each other (similar to RAID 1 but over a network)
TABLE 3.2 Linux 2.6 Kernel Features Included with SLES
Processor support: Can have a theoretically infinite number of processors; Novell-tested 128-CPU configuration
File support: Provides dynamic tuning to accommodate the maximum number of simultaneously open files
Processes: Run up to 65,535 user-level processes, plus additional kernel-level thread processes
Users: Support up to 4 billion unique users
Device types: Support 4,095 major device types and more than a million subdevices per type
Device support: Manages more devices with better performance (for example, 32,000 SCSI disks)
High speed: Supports USB 2.0 and FireWire (IEEE 1394 and 1394b)
High throughput: Provides high-speed Serial ATA (SATA) device support, which enables throughput of 150MB/sec
Non-Uniform Memory Access: Scales more efficiently for systems with dozens or hundreds of processors
Hyperthreading: Executes parallel threads within a single processor; speeds transaction rates and performance for multithreaded applications
Flexible I/O scheduler: Allows manual tuning of the I/O scheduler according to I/O behavior policies
TABLE 3.3 SUSE Linux 9 Hardware Platform Support
x86: Intel x86 architecture including 386, 486, and Pentium processors; same as IA-32 (32-bit architecture)
AMD64: 64-bit extension of the IA-32 architecture by manufacturer AMD (includes Athlon and Opteron)
Intel EM64T: Extended Memory 64-bit Technology (EM64T), Intel's implementation of AMD64 (code-named Nocona, sold as Xeon)
Intel Itanium: 64-bit microprocessor developed by HP and Intel (IA-64)
IBM Power: RISC CPU designed by IBM (Performance Optimized With Enhanced RISC)
IBM zSeries: IBM 64-bit architecture for IBM System/390 mainframes
IBM S/390: Earlier IBM mainframe platform, predecessor to zSeries
In addition, SUSE Linux Enterprise Server 9 includes features contained in other Linux distributions, such as file, print, web application, relational
database, and networking services. Built-in services and protocols include CUPS, DNS, DHCP, IMAP, NTP, SLP, Postfix, PXE, Proxy, Samba, SNMP, SMTP, and many others. Security features include encrypted file systems, certificate authority, integrated firewall, and proxy.
SUSE LINUX DESKTOP
Novell SUSE Linux Desktop is an end-user platform designed as a complete solution for any desktop. It includes the Linux operating system as well as a host of applications and services that provide end users with office productivity applications, client services, and access to network and web services. Table 3.4 offers an overview of what’s included.
TABLE 3.4 SUSE Linux Desktop Features
Linux operating system: Linux 2.x kernel with standard components for configuration (YaST), interface (KDE, GNOME), security, network services, and more
Standard hardware support: x86, notebooks, workstations, USB, storage, CD-ROM, DVD, cameras, tablets, PDAs, mouse, keyboards, FireWire, PCMCIA, and so on
Internet clients: Web browser (Mozilla, Konqueror), FTP, mail (KMail, Evolution, Mozilla Mail), and so on
Desktop applications: Office suite (OpenOffice.org—word processing, spreadsheet, drawing, presentation, database access tools), address book, calendar, PIM, and more
Distinct advantages of Novell SUSE include the SUSE common code base. All versions of SUSE Linux on all platforms are rooted in a single code base. This ensures the consistent use of common management tools and automatic updates across all Linux deployments—all versions on all hardware platforms.
How It Works
How Linux works could be the topic of an entire series of books. For most purposes, Linux is the engine that powers every service, application, or interface being discussed. Image generation, input/output, management of files, packet generation, security checking, and thousands more processes are enabled through the operating system. Figure 3.2 illustrates the high-level OS functions.
FIGURE 3.2
The Linux operating system consists of a core kernel and multiple services.
[Diagram: the Linux operating system with a core kernel surrounded by device drivers, functions, and packages, and services for security, processor cache, device and I/O, networking, process management, memory, file system, and kernel state management.]
A key concept in Linux is the package. In Linux, a package is a single file that contains a list of other files that are to be installed or included as part of a software solution. In addition to the list of files, a software package includes rules indicating interdependencies or other software packages that need to be installed for the solution to function properly. In Linux, these packages are usually in the form of text files and are open for viewing and access. The ability to create complex solutions by assembling collections of packages is a key element for Linux flexibility. A particular Linux distribution is defined by the unique collection of packages that are configured by the distributor. Often, a single installation package predetermines what other packages are to be installed and automatically proceeds through the installation process. For example, the difference between a Red Hat distribution and a SUSE distribution is the collection of packages that have been configured to run on the Linux kernel by each distributor. Linux is very flexible in that anyone can define a custom configuration to install only the desired packages. A Linux operating system can be configured
with just the packages required for it to function as a web server, or the bare minimum set of packages to function as an identity server. Custom Linux packages can be configured to create organization-specific applications or department-specific processes. In simple terms, Linux is a high-performance kernel engine that can be configured to power any software vehicle. It might run a desktop, a firewall, a database, a file server—or any combination of these processes and more. Built-in at the engine level are excellent mechanisms for security, file management, memory management, I/O, processor management, device management, and more. Linux distribution vendors can provide preconfigured Linux solutions, or you can configure your own based on your needs and environment.
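On an RPM-based distribution such as SUSE, the dependency rules described above can be inspected directly from package metadata. The sketch below, which assumes the standard rpm command-line tool is present on the system, asks RPM for a package's declared requirements; the package name queried is only an example.

# Query an installed package's declared dependencies via the rpm tool.
# Assumes an RPM-based system (such as SUSE) with the rpm command available.
import subprocess

def package_requires(package_name):
    """Return the capabilities a package declares it needs."""
    result = subprocess.run(
        ["rpm", "-q", "--requires", package_name],
        capture_output=True, text=True, check=True,
    )
    return sorted(set(line.strip() for line in result.stdout.splitlines() if line.strip()))

if __name__ == "__main__":
    for requirement in package_requires("postfix"):  # example package
        print(requirement)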
Implementation Overview
Installing Novell SUSE Linux (Enterprise or Desktop) is intuitive and straightforward using the standard preconfigured options. Depending on your preference, you can use either the graphical user interface or the console-based text mode. Research has found that administrators new to a system prefer the GUI initially, but the flexibility and speed of console mode is often preferred after the system has become familiar. To install SUSE Linux, you simply insert the CD-ROM and restart the computer. A start screen is displayed, which gives you the option to proceed automatically using GUI YaST or manually in console text mode. SUSE YaST (Yet another Setup Tool) is one of the clear advantages of the SUSE distribution, providing a powerful collection of installation and management tools. SUSE Linux performs all standard installation procedures, including hardware detection, driver configuration, selection of network resources, and peripheral support. Linux supports hot plugging, which recognizes newly connected or installed hardware and automatically makes it available for use. SUSE Linux also provides multiple boot options, including the use of a Linux boot manager that allows you to select one of several different operating systems prior to booting. This can be valuable if multiple configurations are stored on one machine, or if the need exists to alternate between operating systems; for example, Windows and Linux. You can also boot Linux from the network if desired, making it possible to secure workstations and more tightly control the operating system. The de facto GUI standard for Linux and Unix is the X Window System (X). X is the basis for advanced and excellent graphics with the added advantage of
being network- (not machine-) based. Applications running on one machine can display results on another machine or be controlled by a user remotely as if they were local. GUI-based Linux administration and management utilities are as intuitive, easy to use, and as visually useful as any available in Windows. Linux can be flexibly configured and includes all the necessary networking tools and features for integration into all types of networks. Configuration files can be modified and applied without bringing down the server—this is a major Linux advantage, as it provides advanced configuration and control flexibility. From this general Linux overview, you can see that many of the early barriers to Linux adoption have been eliminated. Installation and device detection are as good as any available. Flexibility for customization is unsurpassed, as are management, control, and configuration.
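The network transparency of X described above can be seen with nothing more than an environment variable: an X client renders wherever DISPLAY points. The Python sketch below launches a standard X client against a hypothetical remote display; it assumes the remote X server is reachable and has already granted access (for example, via xhost or SSH X forwarding), which is a site-specific decision, and is offered only as an illustration.

# Launch an X client on another machine's display by pointing DISPLAY at it.
# Illustrative only: "lab-workstation:0" is a hypothetical display name, and
# the remote X server must already permit the connection (xhost / ssh -X).
import os
import subprocess

def run_on_display(command, display):
    env = dict(os.environ)
    env["DISPLAY"] = display          # e.g., "hostname:0" for the first display
    return subprocess.Popen(command, env=env)

if __name__ == "__main__":
    run_on_display(["xclock"], "lab-workstation:0")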
File Services
File sharing was the original inspiration for networking. Novell originated with the need to share files between Unix, CP/M, and DOS operating systems. The simplest solution was an independent file “server” that allowed file access from each of the three different “clients.” The Novell NetWare file server was born, and the wave of connectivity that ensued was phenomenal. Early file services consisted of both client/server and peer-to-peer connections in a local area network (LAN) using the IPX protocol. Today, the area of file services covers a vast collection of protocols, across multiple types of platforms, on different devices, distributed both locally and globally through VPNs and the Internet. In many enterprise environments, there is a need to share files in different formats, on different platforms, accessible from different clients, with seamless connectivity. Several open source solutions, including Samba, iFolder, and Linux, allow you to do just that.
What You Can Do
Linux and open source technologies provide you with the ability to accommodate a wide range of file services. Figure 3.3 shows what you can do.
FIGURE 3.3
Linux technology allows you to share files across multiple server and client types.
[Diagram: Linux, Windows, and NetWare servers sharing finance and web files with Linux and Windows clients.]
The file services available include the following:
■
Access Linux file server from a Windows client—Samba is open source software that makes Linux or Unix platforms look like Microsoft Windows for file access and printing. It has been called “mediation” software because it allows Linux, Unix, and Windows servers to exist in the same network and be accessed by the same Windows clients without the clients ever detecting that the server is not Windows. This transparent access is handled through a protocol suite called the Common Internet File System (CIFS). CIFS is based on Server Message Block (SMB), an earlier version of the Microsoft file-sharing protocol. To a Windows workstation, a Linux server appears just like a Windows server with the same directory/file navigation and file manipulation capabilities.
■
Access a Windows server from a Linux client—Using Samba, the reverse is also possible, allowing Windows servers to appear as Linux or Unix hosts.
■
Access a Linux file server from a Macintosh client—Using Netatalk, a kernel-level implementation of the AppleTalk Protocol Suite (ATP), a
File Services
Macintosh OS 9 or OS X client can retrieve, store, and manipulate files on a Linux server as if it were on another Macintosh machine. ■
Linux to Linux (Unix to Unix) file sharing—Network File System (NFS) is the native file-sharing protocol for both Linux and Unix. You can share files in any direction among these operating systems.
■
Store files in different formats—While Linux-based files can be accessed using different protocols, they can also be stored using different formats. Major supported formats include the following: ■
Reiser—ReiserFS is a journaling file system with optimized disk-space utilization, fast disk access, and quick access recovery. It is the default file system for SUSE Linux.
■
Ext 2/3—The ext3 file system is a journaling extension to the standard ext2 file system on Linux. Journaling reduces file-system crash-recovery time and is widely used in high-availability sites with shared disks.
■
XFS—XFS is a high-performance journaling file system originally developed by SGI for use in its IRIX systems.
■
JFS—JFS is a full 64-bit file system that can support very large files and partitions. JFS provides a log-based, byte-level file system that is ideal for high-performance systems.
Other file formats supported include Lustre, ISO9660 (CD-ROM), UDF (DVD/packet mode), EFS, CRAMFS (compressed RAM file system), ROMFS (small ROM file system), TMPFS (RAM disk file system), BFS (UnixWare boot file system), SYSV (SCP/Xenix/Coherent), UFS (BSD and derivatives), FAT/VFAT (Microsoft DOS and Windows 9x), NTFS (Microsoft Windows NT), HFS (Macintosh), HPFS (OS/2), QNX4, and Minix. ■
Upload and download files—Linux includes File Transfer Protocol (FTP) support, making it possible for authenticated users to upload, download, copy, delete, or rename files from any location on the Internet.
■
Synchronize files on different workstations—How often do you find yourself copying files to a floppy or emailing them to yourself so you can take your work home with you? Novell iFolder technology makes it possible for files to automatically follow you anywhere. Think of it as a synchronizing mechanism that automatically replicates changes made to a file on your workstation, first to a replica on the Internet, and then to any other workstation (notebook, home office, remote office, and so on) that you have specified. You make changes to files in your iFolder on your office PC and when you get home, those file changes are reflected in the updated files on your home computer.
Here’s how iFolder works. A central iFolder server is configured at the data center on either Linux, NetWare, or Windows. This server includes a file repository and synchronization software. The iFolder synchronization capabilities are built using RSync, the open source utility that provides high-speed, incremental file transfer (a simplified sketch of the idea appears after Figure 3.4). On every workstation that is to be synchronized, an iFolder agent is installed and one or more synchronization directories are specified (MyDocuments, for example). Periodically or on demand, synchronization occurs, transferring file changes at the block level (minimal bandwidth required) on the workstation to the server and then out to the other workstations. The net effect is that a user can go from workstation to workstation, modifying or changing the same files without having to worry about copying or working from an outdated version. Novell iFolder supports both Linux and Windows clients, making it possible for different workstations to be running different operating systems but still have access to the exact same files. You can also access iFolder files through the Internet using any standard web browser, giving you the ability to get to current files from the road, at a customer site, or from any device where you have access. A peer-to-peer version of Novell iFolder that allows users to share files in workgroup mode is being contributed to open source (see Figure 3.4).
FIGURE 3.4
Novell iFolder keeps files on multiple workstations current all the time.
[Diagram: John’s laptop, office PC, and home computer, along with Kathy’s and Sam’s accounts, synchronize through a Novell iFolder server and LDAP directory behind the corporate firewall, with browser access to iFolder files.]
Other actions you can do include the following: ■
Access network files from a web browser—Using a standard web browser, you can access files that are stored on network Linux, Solaris, Windows, and NetWare file servers. Novell’s NetStorage Gadget and Network File Gadgets work as a portal enhancement that can be included as part of any portal solution providing network file access. A user portal, for example, can be configured as a one-stop interface for access to mail, applications, collaboration tools, and file access. The NetStorage Gadget takes mapped drives, home directories, and iFolder directories that are accessible based on a user’s authentication and makes them available through a web portal interface. The Network File Gadget lets users access and upload files from any location (given proper authentication) using a specific network file provider gadget. File provider gadgets can be configured for NetWare UNC paths or eDirectory, CIFS file systems, or local file systems through Java. In effect, users never need a personal workstation to access personal network files.
■
Create storage area networks using the Internet—Storage area networks (SANs) are popular as storage repositories based on the Small Computer Systems Interface (SCSI) communications protocol, which uses low-level block storage access methods for high-speed access. SANs can be separate from individual workstations or servers and provide storage capabilities for entire projects, groups, or organizations. Common SAN technology uses expensive Fibre Channel hardware and the SCSI protocol. The Internet SCSI (iSCSI) protocol uses the same SCSI command set, but connects storage devices over less expensive Ethernet hardware using TCP/IP. This allows you to build high-speed, distributed SANs that are scalable, flexible, and easy to manage. Novell supports iSCSI on NetWare, and an open source version is available for Linux (see Figure 3.5).
■
Turn any Intel computer into Network-Attached Storage—Are your users adding video files or images to your network? If so, excess storage is most likely in sudden short supply. Novell provides a product called NetDevice NAS that turns any Intel-based computer into Network-Attached Storage (NAS) that is accessible by Windows, Linux, Unix, and web-based clients. NCP, CIFS, NFS, HTTP, HTTPS, AFP, and FTP protocols all have native access to the file system for transparent access to storage regardless of the client platform. NAS can be mapped by clients for access as they would any other network storage resource. NetDevice NAS is “soft appliance” software that, when installed, lays down a NAS appliance image, automatically configured to the hardware used. This creates a headless, lights-out NAS appliance that, when connected to a network, provides immediately accessible storage managed in
conjunction with other network resources. Management is through Novell eDirectory, Telnet, or a web browser, or can be console-based if a monitor and keyboard are attached.
FIGURE 3.5
iSCSI provides SAN capabilities across the Internet.
[Diagram: a server’s iSCSI driver and TCP/IP network driver send storage traffic through a NIC across an IP network to a storage router and storage devices.]
■
Supersize your file storage capabilities—Organizations with distributed and sometimes complex storage systems can benefit from the storage pooling and central management provided through Novell Storage Services (NSS). NSS is a storage and file system that provides an efficient way to use all of the space on your storage devices. NSS is best used with systems that require the capability to store and maintain large volumes and numerous files or large databases. NSS uses free storage space from multiple storage devices, combining it to create storage pools and logical volumes that are physically larger than the free space available on any single file server. NSS includes advanced storage technology that speeds the mounting of large volumes, mirrors and stripes data for redundancy, quickly recovers data after a file system crash, protects database systems with Transaction Tracking System (TTS), and more. NSS provides advanced file management capability and disk utilization for storage devices that are running on both NetWare and Linux platforms (see Figure 3.6).
FIGURE 3.6
Novell Storage Services pools available storage to create logical volumes.
[Diagram: free space from multiple storage devices, including partitioned free space, is combined into storage pools from which logical NSS volumes and traditional volumes are created; a volume can be as large as its storage pool.]
■
Map drives from your Linux workstation—Windows clients have long been able to map network drives using either Microsoft or Novell client software. This makes a drive appear as if it were local, usually with the next available drive letter (for example, F:). This same functionality is now possible with Linux clients as well. Workstations running Linux can map network drives that appear local to the user, providing access to mass storage, SANs, or any other network storage.
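A concrete, hedged example of that last point: on most Linux distributions, the same CIFS protocol that Samba speaks can be used to attach a Windows (or Samba) share to a local directory, which has much the same effect as mapping a network drive. The sketch below shells out to the standard mount command; the server, share, mount point, and user name are placeholders, root privileges are required, and the CIFS mount helper utilities must be installed.

# Attach a CIFS (Windows/Samba) share to a local directory, similar in effect
# to mapping a network drive. Requires root and the CIFS mount helper; the
# server, share, mount point, and user name below are placeholders, and
# mount will prompt for the share password interactively.
import subprocess

def mount_cifs_share(server, share, mount_point, username):
    return subprocess.run(
        ["mount", "-t", "cifs", f"//{server}/{share}", mount_point,
         "-o", f"username={username}"],
        check=True,
    )

if __name__ == "__main__":
    mount_cifs_share("fileserver", "finance", "/mnt/finance", "jdoe")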
Print Services
Printer sharing ranks right up with file sharing as a major motivation for networking. Network printing has definitely evolved over the years to where almost any printer can be shared over a network. However, printers have primarily been accessed through Windows workstations, and the common printing scenario involves using the Windows printer manager to direct print jobs.
Two things have changed that, for many organizations, make network printing slightly more complex: the Internet, with its ability to connect to any resource anywhere in the world, and the emergence of Linux and its integration within the enterprise network arena. Technology from Novell and the open source community that leverages the Internet and integrates Linux printing includes Novell iPrint and CUPS.
How Internet Printing Works
Both iPrint and CUPS are based on the Internet Printing Protocol (IPP). IPP is a printing protocol that is designed to work over Internet Protocol (IP) and support printer access control, authentication, and encryption. It is much more secure, especially for public networks, than earlier printing protocols.

Common Unix Printing System (CUPS) was developed for the Unix and Linux communities as a standard printing system that simplifies the accessing of printers and the writing of print drivers, which has historically been a problem because every variant of Unix had its own printing system. A single print driver written to CUPS can support a wide range of file formats. CUPS is an open source solution and, as of this writing, the CUPS database includes 244 drivers for 1,233 different printers. Using CUPS, a printer can be identified using a uniform resource identifier (URI) and, if configured to do so, can be accessed from any device on the Internet. Printers can be standalone units with built-in print servers (software/hardware combinations that deliver print data to a particular printer), or can be attached to workstations (Windows, Linux, and so forth) that include the print serving software. CUPS can deliver print jobs to an attached printer, to a printer on the local area network, or to a printer connected via IP across the Internet.

Novell iPrint provides similar Internet printing capability, but also adds some useful functionality. One of the primary difficulties in printing to remote locations across the Internet is in locating and correctly identifying a specific printer. URI and uniform resource locator (URL) names are hard to keep track of. iPrint solves the problem by providing Hypertext Markup Language (HTML) web-accessible map images that let users graphically search or drill down to find the desired printer in a specific location. Locating a printer for remote printing is simple and intuitive. Novell iPrint utilities allow administrators to easily create custom printer maps that give end users simple ways to find and install printers (see Figure 3.7).
FIGURE 3.7
Novell iPrint helps locate remote printers and automatically configures the workstation for printing.
[Diagram: a user's browser displays a custom map from the iPrint server (NNLS server); clicking Install downloads and installs the print driver on the workstation.]
Users printing from a workstation to a network printer can only do so if the print drivers for that particular printer model have been installed and configured on that workstation. Remote or unfamiliar printers tend to present driver location and configuration problems, which iPrint now alleviates. When a user locates any printer hosted by iPrint, the appropriate drivers for that printer are automatically downloaded, installed, and configured, and the printer appears as part of the standard Windows printer selection list. Desired driver settings can be determined by the administrator prior to the printer being installed on the workstation. Printing across the Internet becomes as easy as printing to the printer on your desk.
How Internet Printing Is Implemented
You have multiple options for installing IPP-based printing using either iPrint or CUPS. Novell iPrint includes three components: the iPrint server software, the iPrint client, and the iPrint Map Designer. The iPrint server software installs on NetWare or Linux and handles all the IPP requests. The iPrint client is a plug-in for a standard browser that acts as a gateway, translating Windows print requests to IPP and sending those requests to the Novell IPP server. The iPrint Map Designer is a browser-based design tool that administrators can use to create a graphical representation of the location of iPrint printers.

Installing an iPrint server is accomplished via the standard install methods for each operating system. iPrint installs as part of a new server installation or at a later point using the iPrint install CD. When installed, iPrint is managed and accessible to users via a web page. To install the iPrint client, users simply go to the iPrint web page, click the Install iPrint Client option, and the client plug-in is installed automatically.
To locate a printer, users again go to the iPrint web page, navigate an iPrint map to locate the desired printer, and then click on it. The drivers are downloaded and configured on the workstation automatically. With a Windows client workstation, iPrint installs the printer in the Windows print manager, and it appears as any other local or network printer. The iPrint client is also available for Linux clients.

One of the great advantages of iPrint is that driver information for all printers is centralized. As part of iPrint setup, you create an iPrint Driver Store that maintains a collection of drivers for all types of printers. There's never a need to go looking for print drivers when a new client needs to be configured.

iPrint can be managed using Novell eDirectory, with access to printer objects and drivers based on a user's identity. Individual users, groups, or complete organizations (containers) can be granted access to individual printers or groups of printers. Novell's iManager, a browser-accessible management interface, can also be used to manage iPrint and printer access.

The iPrint server works virtually the same way when installed on Linux, with access to IPP services via a web browser. The iPrint server includes the driver store and printer agents that redirect printer-ready files to a printer (see Figure 3.8), and the entire solution can be managed through eDirectory (which runs on Linux as well as NetWare and Windows).
FIGURE 3.8
Novell iPrint configuration for Linux.
[Diagram: on the Linux server, the Apache web server and Tomcat 4 host the IPP server, the print manager (ipsmd), the driver store (idsd), printer agents, and the print gateway; an eDirectory LDAP server provides management; a Windows workstation submits jobs over IPP/HTTP, and the gateway delivers them to the printer via LPR/SNMP.]
IT managers have the option of implementing traditional Linux printing services as well. As mentioned, Linux Internet printing is based on CUPS and provides access to printers through Linux-based print servers and print queues. CUPS can be installed and printers administered through YaST. Printing from Linux can be done through the Linux command line, from a graphical interface such as KDE, or from an application. Linux also supports LPRng and lpdfilter, traditional Linux printing services.

Samba, the open source Microsoft SMB/CIFS emulator, can also play a major printing role when Linux print servers and print queues are used with Windows clients. Samba allows CUPS printers to be printed to and accessed from Windows workstations, just like a common Windows or NetWare network printer. Samba handles the print job forwarding, queuing, and print management. Samba lets a Linux computer function as a Windows server by enabling the computer to provide Windows print services to end users. Samba also lets a Linux workstation function as a Windows client by enabling it to access and use Windows print services, whether they originate on a Windows server or a Linux server with Samba installed. The Samba implementation provided by Novell focuses on providing authenticated file services from the Linux server to Windows clients, with access control managed by eDirectory.

You can see that there are multiple print options whether you are running a Linux-only shop or, like most organizations, a mixture of Windows, Linux, and NetWare. With a straight Linux data center operation, you can still enjoy the benefits of iPrint with mapped access to Internet-connected printers. Client workstations (Windows or Linux—it doesn't matter) can access printers regardless of the print server platform. Linux can access Windows print services and vice versa. iPrint simplifies locating printers, driver storage, and workstation configuration for everyone.
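As a small illustration of how CUPS exposes printing to programs as well as to desktops, the following sketch assumes the python-cups (pycups) bindings are installed; the queue name and file path are hypothetical placeholders.

# A minimal sketch of driving CUPS programmatically, assuming the
# python-cups (pycups) bindings are installed. The queue name and
# file path below are hypothetical placeholders.
import cups

conn = cups.Connection()   # talk to the local CUPS scheduler

# List the print queues this CUPS server knows about.
for name, attrs in conn.getPrinters().items():
    print(name, attrs.get("printer-info", ""), attrs.get("device-uri", ""))

# Submit a file to a queue; CUPS applies the driver configured for it.
job_id = conn.printFile("office_laser", "/tmp/report.pdf",
                        "Quarterly report", {"copies": "2"})
print("Submitted job", job_id)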
Edge Services
The definition of edge services has expanded to include multiple technologies that monitor, protect, and enhance "inflow" information—data that is in transit between a client and a server and includes all layers of communication. Edge services have evolved from being predominantly firewalls to include proxy cache, VPNs, intrusion detection, antispam, antivirus, web filtering, and other quality-of-service (QoS) solutions. Moving forward, these services are increasingly being combined to create a class of solutions referred to as security gateways. Open source played a large role in the early evolution of inflow solutions with several technologies that provide specific security and edge services. Linux continues to play a key role in the growth of security gateways and is the leading operating system platform for many of the existing hardware-based security gateways on the market today. In addition, Linux is well suited as a platform for software-based security gateways, and is integral to the family of security gateway services that are emerging from Novell.

As a note of interest, the common path for adoption of Linux in many organizations typically begins with edge services. At the edge of a data center or as a peripheral service for branch or remote offices, the only noticeable effect of Linux implementation is to make things better. There is no inconvenience, slowdown, or new training required at the user end; there is no disruption of service or alteration at the data center. Linux at the edge typically speeds up data delivery or makes resources more secure. There is no need to rip and replace, and existing services can often be significantly enhanced. If your organization is in the early stages of testing and adopting Linux, edge service solutions provide an ideal place to start.

So what's available for establishing edge services with Linux and open source? You have several options. First, multiple Linux/open source technologies can be configured to provide elements of a security gateway, including proxy/cache, content filtering, firewall, and VPN. Or, you can select one of the commercially available solutions from Novell or any of the other vendors that provide security gateways. Again, it's about complements and substitutes—what makes the best sense for you to build or buy.
Build Your Own
This section examines the individual technologies from the open source community and the functionality that each provides:
■ Caching/proxy—Caching is a fundamental element of fast web access. Information pulled from a source is temporarily stored at a location closer to the end user. If you request an HTML page with a few graphics, the first time the request is made, the content is transmitted the entire route from source to end user. If a cache is involved, this information is stored closer to the client (maybe in the browser cache or maybe on a proxy cache within the local area network) so that with the next request for content, the data is much closer and the request time shortened. Caches provide better perceived performance and reduce the demand for bandwidth. A "proxy" cache performs caching services for a number of different clients (see Figure 3.9).
FIGURE 3.9
Proxy cache stores content closer to clients for faster access. [Diagram: a firewall and proxy cache sit between the internal network and the Internet router on the external network.]
The Internet Cache Protocol (ICP) is a standard that was developed to allow communication between caching servers for the purpose of creating multiple-level caches that can accommodate complex caching requirements. A caching solution can include firewalls for protection of clients and data. Caching hierarchies can be architected to pull content from multiple sources (external and internal), providing high-speed access to popular objects or pages with a minimum of latency. Cache mechanisms exist that check to make sure that content is current or fresh and that new requests are initiated if content is dated or stale. The most popular open source caching solution is Squid. Squid is an open source proxy-cache server that runs on Linux and supports proxying and caching of HTTP, FTP, and other URLs, proxying for Secure Sockets Layer (SSL) encrypted sessions, ICP and cache hierarchies, and a cache management interface with logging. You can find more information on Squid at http://www.squid-cache.org/.
■ Firewalls—The most common form of content protection from outside access is a firewall. A basic firewall can be constructed using Network Address Translation (NAT), which comes with every standard Linux distribution. A firewall is simply a server with two network adapter cards—one connected to the internal network and the other connected to the external network or the Internet; all traffic from inside to outside passes through this server. Using a technique known as masquerading, the IP headers of internal packets going out are rewritten, making them appear to all come from one address—the firewall. Reply packets from the outside are translated back and forwarded to the internal machine that sent the request. This makes it difficult for probing outside machines to ever find, let alone access, internal machines for destructive purposes. NAT also provides port forwarding, making it possible for IP packets written to a specific port (such as CGI or Java applets) to be forwarded to the internal server providing the service.
■ VPN—Internet Protocol Security (IPsec), a standard for encrypting and authenticating IP packets for portal-to-portal or end-to-end secure packet transmission, can also ensure secure communication. The Openswan (successor to FreeS/WAN) project is an open source solution that uses strong encryption to ensure that packets are secure from end to end. Openswan allows you to create VPNs using IPsec to build secure tunnels through untrusted networks such as the Internet. IPsec can work in conjunction with a router or firewall, or on a separate machine. It works for all kinds of Internet traffic, including HTTP, FTP, email, Telnet, and more. For more information on the Openswan project, see http://www.openswan.org.
■ Filtering—The area of filtering is extremely important and encompasses both address filtering and content analysis and filtering. Address filtering can be accomplished using routing, in which access to or content from a particular website can be blocked at the router, or at a point where filtering occurs, such as during NAT. Specific IP addresses can be blocked, a range of addresses can be blocked, or addresses can be filtered based on a dynamic block and allow list. For example, a blacklist service provider tracks IP addresses that are known to generate spam and provides this list to your router, where you can automatically block spam traffic from ever entering your network. (An illustrative sketch of address filtering follows this list.) Content filtering encompasses the analysis and filtering of data for virus detection and eradication for both Web and email. Because solutions in this particular area often require continuous updating to keep abreast of evolving security threats, they are usually best obtained from independent software vendors.
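The address-filtering idea in the list above can be sketched in a few lines of Python using the standard ipaddress module. This is purely illustrative and not tied to any particular firewall product; the blocked addresses and ranges are made-up examples, and a real deployment would feed the lists from a blacklist service.

# Illustrative sketch of address filtering: reject known-bad addresses and
# ranges before a connection is accepted. The addresses below are made up.
import ipaddress

BLOCKED_ADDRESSES = {ipaddress.ip_address("203.0.113.25")}
BLOCKED_NETWORKS = [ipaddress.ip_network("198.51.100.0/24")]

def is_allowed(source_ip: str) -> bool:
    addr = ipaddress.ip_address(source_ip)
    if addr in BLOCKED_ADDRESSES:
        return False
    return not any(addr in net for net in BLOCKED_NETWORKS)

print(is_allowed("198.51.100.7"))   # False: inside a blocked range
print(is_allowed("192.0.2.10"))     # True: not on any list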
In addition to the services mentioned previously, the category of edge services is sometimes defined to include load balancing—the performance smoothing of content delivery using virtual servers and mirroring. Two common load balancing problems exist that Linux and open source can easily remedy.

The first is simple web server overload, in which there are far more requests for content or objects than can be handled by a single web server. In this case, multiple web servers (for example, Linux running Apache) can be mirrored with identical information (in the same or different locations). An incoming request for information is intercepted by the load balancing or virtual server running NAT software, which rewrites the request header, directing the request to the mirrored server that is least busy, closest, or best meets predetermined algorithmic criteria. The mirrored server services the request without the user knowing the request has been redirected (see Figure 3.10).
FIGURE 3.10
A load balancing or virtual server redirects requests to the least busy web server. [Diagram: a client sends requests to the virtual server, which forwards each one to one of several real servers.]
This same method of NAT using virtual servers can also be used to solve problems due to a lack of public IP addresses. Only the virtual server has a valid external IP address—only one is needed—while the internal servers can use nonpublic addresses. NAT directs the outside request to one of several internal servers and then returns the reply as if it were from a single source.

Another method of load balancing is possible using DNS load sharing. In this case, multiple IP addresses are associated with the same web server name (www.mycompany.com). Each different IP address is associated with a different physical server that houses the same content. With each new request to the web server, the DNS server sequentially cycles to the next IP address (that is, the next server) on the list in a round-robin fashion. If there were four physical web servers with different IP addresses, each server would service every fourth request. Load balancing using these or other methods makes it possible to significantly scale web-based solutions on Linux and open source with very little extra effort. Several open source technologies address advanced load balancing issues. Check out the Eddie Project (http://eddie.sourceforge.net/), which disperses URL requests to different web servers based on a number of different load balancing algorithms.

A second type of load balancing problem is relevant when using databases in which the number of queries is large enough to overwhelm the database server. To solve this problem, multiple database servers are configured as read-only mirrors or replicas and one database is configured as a writable master. As a request for queries comes in, the query is redirected to the least busy database server for processing. If the request is for a database write (update database content), the request is directed to the single database instance that can be written to. Because most database applications have far more reads than writes, this simple method of load balancing is sufficient for the majority of cases. Database redirection can be accomplished using NAT and scripting technology, such as PHP, CGI, or Perl.
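The read/write split just described can be sketched in a few lines. The following Python sketch is illustrative only (the host names are hypothetical placeholders); in practice the redirection is done by NAT rules, a connection proxy, or the application's database layer.

# Illustrative sketch of database load balancing as described above:
# writes go to the single writable master, reads rotate round-robin
# across read-only replicas. Host names are hypothetical placeholders.
import itertools

WRITE_MASTER = "db-master.example.com"
READ_REPLICAS = itertools.cycle([
    "db-replica1.example.com",
    "db-replica2.example.com",
    "db-replica3.example.com",
])

def route(query: str) -> str:
    """Return the host that should execute this SQL statement."""
    is_write = query.lstrip().upper().startswith(("INSERT", "UPDATE", "DELETE"))
    return WRITE_MASTER if is_write else next(READ_REPLICAS)

print(route("SELECT * FROM orders"))        # handled by a replica
print(route("UPDATE orders SET paid = 1"))  # handled by the master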
Buy Commercial
If building your own security gateway isn't what you had in mind, plenty of options are available from Novell or other providers. It's worth noting that the leading commercial security gateway vendors have standardized on Linux as the operating system platform. Check Point Software Technologies, Symantec, Stonesoft, StillSecure, and CyberGuard all provide hardware, software, or combination security gateway solutions based on Linux. IBM's WebSphere Edge Server platform enables companies to develop their own solutions for the edge of the data center and the enterprise, as well as geographically distributed content points of presence—all with gateways and proxies where end-user traffic is generated.

Emerging security gateway technologies available from commercial vendors accommodate endpoint solutions for wireless, thin clients, and digital assistants, as well as inflow monitoring and management services for Extensible Markup Language (XML) security with schema validation and encryption and XML routing and processing.

The Novell Security Manager, powered by Astaro, combines open source technologies with advanced proprietary engineering to provide a comprehensive Linux-based security gateway. The Novell Security Manager edge services include firewall, VPN, forward-reverse proxy-cache, virus filtering, instant message filtering, web filtering, intrusion detection, antispam, DNS, DHCP, and more. Integrating these services with web- and directory-based management provides powerful security, as well as granular control for providing quality of service solutions and secure networks.
DNS/DHCP Servers and Routing
Other common edge services that are easily handled by Linux are Domain Name System/Service (DNS), Dynamic Host Configuration Protocol (DHCP), and routing. Again, these services, which are usually required in almost every large network, are great places to begin Linux implementation, as they do not directly affect end users or business applications and can be seamlessly integrated.
DNS
At a high level, DNS maps host or domain names to IP addresses. Because humans tend to use word-based names and computers require number-based addresses, DNS is given the job of matching them up. Every time you type in a domain name (such as www.novell.com), a DNS server is employed to get you to the right host machine. DNS servers for individual users or small businesses are usually hosted by Internet service providers. In large organizations where subdomains are required (for example, support.novell.com, forge.novell.com, and so on), a DNS server that provides navigation addresses to domain-specific machines is hosted by the company.

DNS is more than just simple mapping. It includes the methodology for navigating hierarchical trees based on domains and subdomains to determine actual IP addresses. DNS also accommodates multiple addresses being assigned to the same domain name (that is, load balancing) and many domain names mapping to a single IP address (that is, virtual hosting).

DNS caching is particularly useful for increasing performance for website access. A local DNS cache stores the hostname/address mappings for the most commonly requested websites. Rather than search the full chain of external DNS servers to determine an IP address, the previously resolved name exists in the local cache for fast access.

Several open source DNS solutions are available for Linux, with the most popular being Berkeley Internet Name Domain (BIND). BIND is included with SUSE Linux and is usually configured at startup. It includes a domain name server, a domain name system resolver library, and tools for verifying the proper operation of a DNS server. DNS resolution is also key to the routing of email, listing the mail exchange servers that accept mail for each domain.
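From an application's point of view, DNS resolution is just a library call. The short Python sketch below uses the standard socket module to resolve a name to one or several addresses; the host name is only an example, and a name that returns several addresses is how DNS round-robin load sharing appears to a client.

# What DNS does for an application: turn a name into one or more addresses.
# Python's standard socket module asks the system resolver (and thus DNS).
import socket

host = "www.novell.com"   # example name; any resolvable host works

# One answer, the way most applications ask for it...
print(socket.gethostbyname(host))

# ...or every address the name maps to.
for entry in socket.getaddrinfo(host, 80, proto=socket.IPPROTO_TCP):
    print(entry[4][0])    # the address part of each result tuple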
DHCP
DHCP is even more commonly used in organizations than DNS. In simple terms, every machine connecting to an IP network (internal or the Internet) must have an IP address. Static addresses (IP numbers or addresses permanently assigned to a single device) may be in short supply and/or are difficult to manage in dynamic organizations. DHCP simplifies the management and distribution of IP addresses.

DHCP provides several methods of IP address distribution, but the most common is dynamic allocation or leasing. A DHCP server acts as a leasing agent, providing IP addresses to any client on the network that requests one from a predetermined pool of addresses specified by the administrator. As a client (a Windows workstation, for example) boots up, the network adapter card submits an open request for an IP address to the network. The DHCP server responds, granting a specific address for a specified period of time. When the client shuts down or the time period expires, the address is again available in the pool for reuse.

DHCP allows multiple internal network clients to access the Internet without static or hard-to-come-by outside IP addresses. IP address management is greatly simplified without the need to manually configure every client. DHCP also includes information about DNS server addresses or gateways that need only be configured once for everyone on the entire network. DHCP is backward compatible with BOOTP, an earlier (now dated) version of IP address leasing.

Novell SUSE Linux includes the DHCP server dhcpd published by the Internet Software Consortium (ISC) and two DHCP client options, dhclient and dhcpcd. DHCP client support is part of the standard install for most Linux workstations—it is there unless you specifically decline the option to install it. With a DHCP server somewhere on the network, IP address assignment is automatic every time the workstation boots up.
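The leasing mechanism itself is easy to picture. The following deliberately simplified Python sketch models a lease pool with expiry and reuse; it illustrates the concept only (the address range and lease time are arbitrary), while dhcpd implements the real DHCP protocol on the wire.

# A simplified model of DHCP-style leasing: hand out addresses from a pool
# for a fixed time and reclaim them when the lease expires. Concept only.
import time

POOL = [f"192.168.1.{n}" for n in range(100, 110)]   # arbitrary address pool
LEASE_SECONDS = 3600
leases = {}   # MAC address -> (IP address, expiry timestamp)

def request_address(mac: str) -> str:
    now = time.time()
    # Reclaim expired leases so their addresses return to the pool.
    for client, (ip, expires) in list(leases.items()):
        if expires < now:
            POOL.append(ip)
            del leases[client]
    if mac in leases:                # renew an existing lease
        ip, _ = leases[mac]
    else:
        ip = POOL.pop(0)             # allocate a free address
    leases[mac] = (ip, now + LEASE_SECONDS)
    return ip

print(request_address("00:0c:29:3e:11:aa"))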
Routing
In addition, a Linux server can be used as a common router. Static routing tables can be created that establish routes to a host, routes to a host via a gateway, or routes to a network. Complete and complex routing solutions can be created using Linux and other open source software. In fact, Linux is commonly used as the platform for commercial routing solutions available from original equipment manufacturers (OEMs).
Web Servers
It has been argued that the most instrumental factor in the growth of the Internet was the capability to graphically and universally serve up content—more specifically, the web server. Web server here means the software that serves documents (standardized according to the Hypertext Markup Language, or HTML) via the Hypertext Transfer Protocol (HTTP) to clients that are standardized to receive them (web browsers). As mentioned earlier, by far the most successful and widely used web server software is Apache, an open source implementation available for multiple platforms, including Linux and NetWare. Apache is part of the standard SUSE Linux distribution and is used extensively as part of most Linux implementations.
The entire category of web applications is huge and could be the subject of several books. This section outlines a simple overview with several possible considerations when moving to Linux and open source.

Web server implementations generally fall into two categories: static and dynamic. Every web server includes a root HTML directory, often the www/, html/, or htdocs/ directory by default. Files placed in this directory are served up when an HTTP request comes in from a web browser. If no specific file is requested, the index.html file is served. For static websites, creators will design the index.html file as the home page with links to other pages and graphics. Putting together a website can be as simple as creating a few HTML pages and copying them to the htdocs directory. If the server machine is on the network, using a browser to connect via the IP address renders the pages visible.

Apache can service websites from simple to very complex. Often, a web page (such as index.html) is a collection of scripts, processes, or objects that dynamically generate content and images. These page elements can be generated or processed at the browser, the server, or may be the result of multilayer processes and links distributed across hundreds of other web or application servers. Many of today's websites are complex aggregations of content, links, applications, and data streams from far-flung, multilayer sources. Apache can accommodate all of these (see Figure 3.11).
FIGURE 3.11
A web server delivers pages that include multiple types of elements to a web browser. [Diagram: a client (browser or HTTP client application) sends a request over a TCP connection carrying IP packets to the web server (HTTP server), which returns any requested file or a web page assembled from HTML, JavaScript, PHP, Perl, Python, CGI programs, and Java applets.]
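To make the document-root idea concrete: anything placed under htdocs/ is served over HTTP, with index.html returned by default. Apache is the production server discussed here; purely as an illustration, Python's standard library can serve such a directory in a few lines (the directory name is an example).

# Illustration of the document-root concept only: files under htdocs/ are
# served over HTTP, and index.html answers requests that name no file.
# Apache does this (and far more) in production deployments.
import http.server
import os

os.chdir("htdocs")   # example document root containing index.html

server = http.server.HTTPServer(("", 8080),
                                http.server.SimpleHTTPRequestHandler)
print("Serving htdocs/ on http://localhost:8080/")
server.serve_forever()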
Linux and Apache are the core elements of a basic web service. Combining these with other services, utilities, and tools is the difference between static sites and dynamic sites. Apache functionality can be expanded significantly through the use of additional programming, database, and customized modules. Some of these include the following:
■ Secure transmission—Using SSL, any communication or transmission between a client and the web server can be securely encrypted.
■ Authentication—With directory services such as Lightweight Directory Access Protocol (LDAP) and Novell eDirectory, access to specific web-based resources can be tightly controlled so that only authorized users have access.
■ Virtual hosting—A single instance of Apache on a single machine can host multiple websites, with the single web server appearing as several independent web servers. Virtual hosts can be configured on the basis of different IP addresses or different hostnames.
■ Active contents—Active contents are HTML pages that are generated on the basis of variable input from the client. Apache offers three ways of generating active contents:
■ Server Side Includes (SSI)—These are directives that are embedded in an HTML page by means of special comments. Apache interprets the content of the comments and delivers the result as part of the HTML page.
■ Common Gateway Interface (CGI)—These are programs that are located in certain directories. Apache forwards the parameters transmitted by the client to these programs and returns the output of the programs. Programs are generally written in scripting languages, but can be in C, C++, or any other language. (A minimal CGI sketch appears after this list.)
■ Modules—Apache offers interfaces for executing any modules within the scope of request processing. Modules, including scripting language modules for Perl, Python, PHP, and Ruby, are available that work with Apache for the creation of any customized application or utility.
■ Content negotiation—Apache can deliver page content adaptable to the capabilities of a particular client. Browser-based clients can be phones, pagers, text-only devices, or full-blown PC-based browsers—Apache can determine client type and capabilities and customize content accordingly. Apache can also negotiate with the client to determine in what language the browser expects to receive content and deliver it accordingly.
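As a small illustration of the CGI mechanism described in the list above, the following hypothetical Python script could live in a CGI-enabled directory; Apache runs it, passes the client's parameters, and relays whatever the script prints back to the browser. The script and parameter names are examples only.

#!/usr/bin/env python3
# Minimal CGI program: Apache hands the script the client's parameters and
# returns its printed output to the browser. Assumes a CGI-enabled directory.
import cgi
import html

form = cgi.FieldStorage()                  # query-string or POST parameters
name = form.getfirst("name", "world")      # e.g. /cgi-bin/hello.py?name=Ana

print("Content-Type: text/html")           # header block, then a blank line
print()
print("<html><body><h1>Hello, %s!</h1></body></html>" % html.escape(name))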
In addition, Apache works in conjunction with the wide range of application servers that are enabling a new generation of web-accessible applications. These range from XML connectors that pull information from legacy applications and expose it for access, to new Java-based line-of-business applications, to web-based database front ends. Application server solutions are available from independent software vendors (ISVs), such as IBM (WebSphere), BEA (WebLogic), and others, as well as open source projects such as Jakarta Tomcat.

Web servers are "information doors" that can be elaborate, eye-catching, and complex or basic, text-only, and simple. They can also be found in many locations throughout an organization, with open access to the outside or controlled and hidden access from the inside. Web servers are used to expose or publish content and provide access to a wide array of services. Think of what you access on a regular basis that is published via a web server. Content can include email, news, documentation, product information, catalogs, and shopping sites. Web servers facilitate full user portals with instant messaging, online calendaring, and scheduling, but can also be small, supporting a simple interface to a router. Increasingly, enterprise software applications, such as customer relationship management (CRM), enterprise resource planning (ERP), human resources (HR), and financial systems, are incorporating web-based user interfaces that expose rich content to a wide audience of employees, managers, partners, customers, and suppliers. Web servers are everywhere.

Implementing web servers based on Linux and Apache can be a gradual and incremental process that starts with peripheral services and applications and moves to core data center operations. Common first uses include providing static web content internally to an intranet. This might include company resource manuals or directories, project sites, or bulletin boards. Next phases often include exposing database content, which can be anything from product listings or budget and finance information to internal groups. Organizations can easily create web-based database applications using open source databases, such as MySQL and PostgreSQL, and scripting tools, such as PHP and Perl. In addition, many existing open source business applications have been prebuilt based on Linux, Apache, MySQL, and PHP/Perl/Python. These are commonly referred to as LAMP solutions and are available as web-accessible business and productivity solutions.

As needs for content become more complex, application servers and custom-built applications are integrated. Depending on your organization's needs, time frame, and budget, you have the option of developing custom solutions in-house, working with consulting and development partners, or purchasing ready-made web-accessible enterprise solutions.

Web-based management tools are becoming a mandatory requirement for data centers and IT organizations. Novell's highly popular iManager aggregates all types of management utilities and functions into a common portal-like interface. Administrators are provided a dashboard view of entire operations with drill-down management capability in any one particular area.
In all cases, web servers are a key component of the solution, and Apache is the web server of choice. Apache enjoys the largest market share, and as a result provides the greatest degree of flexibility, compatibility, and security.
Workgroup Databases
Historically, there has been a fundamental barrier to the widespread use of databases. Data input and reporting output have been available to common users, but the activities of forming data structures, creating data relationships, and filtering output through custom queries and reporting have always required database expertise—something that very few people have access to. Input and output formats have been canned, while flexible access and manipulation—things that are extremely valuable to knowledge workers and decision makers—have only been possible if you roll up your sleeves and learn the technical details. And, if you are technical enough to write your own queries, you often end up maintaining a high-end workstation in your office to run the database, usually with the added expense of database licenses and high-powered hardware.

Again, open source helps break down this barrier by extending database creation and manipulation capabilities to anyone with browser access. It provides simple tools for data structure development, manipulation, and reporting, so more than just the quant jocks can see what's happening. In addition, it significantly lowers the expense of database implementation and use by providing distributed access to GPL-based software running on low-cost hardware.

Several open source databases are available for Linux in addition to many commercial versions that have recently been ported to Linux. Two open source databases that come with Novell SUSE Linux are MySQL and PostgreSQL. Both provide a feature-rich selection of database services that are easily exposed to web users for data input, access, and manipulation when combined with Apache Web Server and web services tools such as scripting languages and application servers.

As mentioned earlier, PostgreSQL originated with the Ingres project at UC Berkeley. PostgreSQL architecture is designed to accommodate high-volume environments while maintaining performance and responsiveness. GUI tools exist for both administrators and users for database design and data manipulation. The latest release supports over 34 different Unix and Linux platforms.

MySQL is also a fast and responsive database with good connectivity and manipulation through PHP, Perl, and Java. MySQL is multithreaded, multiuser, SQL-based, and includes ODBC support. MySQL AB, the owner of MySQL, has as its mission to "define a new database standard…based on its dedication to providing a less complicated solution suitable for widespread application deployment at a greatly reduced TCO." MySQL has gained widespread popularity and currently boasts five million active installations. Companies using MySQL for web-based and business-critical applications include Yahoo!, Sabre Holdings, Cox Communications, the Associated Press, and NASA.
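As a simple illustration of how scripts reach MySQL, the following sketch assumes the MySQLdb (MySQL-python) bindings are installed; the host, credentials, database, and table names are hypothetical placeholders.

# A minimal sketch of querying MySQL from a script, assuming the MySQLdb
# (MySQL-python) bindings. Connection details and the table are made up.
import MySQLdb

conn = MySQLdb.connect(host="localhost", user="webapp",
                       passwd="secret", db="solutions")
cursor = conn.cursor()
cursor.execute("SELECT name, category FROM oss_solutions WHERE category = %s",
               ("collaboration",))
for name, category in cursor.fetchall():
    print(name, category)
conn.close()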
Database for the Enterprise
The Associated Press's (AP) implementation and use of MySQL provides an interesting case study of both the reasons for using open source and the benefits gained. AP hosts a huge news repository, with news not only coming in from all over the world, but also being made available to over 15,000 member affiliates who incorporate AP news content into their own websites. With big breaking sports or news stories, the AP-hosted news system must be able to concurrently support up to 11,000 users. This includes news story storage and search capabilities as well as fully hosting 600 websites for affiliated newspapers. Using MySQL, Associated Press was able to re-architect a solution from the ground up, based on open source with no binding license costs, which allows it to flexibly add database capability as demand fluctuates. The system was relatively easy to set up and deploy and has required minimal management—it doesn't even require one full-time database administrator.
Database for Workgroup
One of the simple database needs at Novell was to organize and classify the open source solutions that run on Linux. With the focus on Linux, it was extremely important that many groups within the company be able not only to see how many solutions were available, but also to sort and categorize them based on a number of different factors. This was a perfect use of database technology that served different internal workgroups, such as product marketing, product management, training, and consulting. Putting this information in a web-accessible database made it available to anyone who needed it in the company from any location.

How do nondatabase types create and maintain databases? Try the open source database management tool called phpMyAdmin. phpMyAdmin is an open source tool written in PHP intended to handle the administration of MySQL over the Web. It can create and drop databases, create/drop/alter tables, delete/edit/add fields, execute any SQL statement, manage keys on fields, manage privileges, export data into various formats, and more. From a web browser, any authorized user can create a database, populate tables and fields, construct queries, and then distribute the results of their work to anyone else. It provides database capabilities to the masses—without licensing expense or the requirement for specialized technical help.

Stability, performance, ease of use, and price have led the majority of leading database companies to port high-quality commercial applications to run on Linux. Full access to source code has made it possible for database vendors to provide extensions that significantly enhance database performance and tailor solutions to specific needs and requirements. The list of commercial database solutions that are available on Linux is extensive and includes the following market leaders:
■ Oracle—Oracle first ported Oracle8 to Linux in 1998 and has since then standardized on Linux. All Oracle's application developers develop on Linux, and with the Oracle 10g release, Linux is the base development platform for Oracle. Support for Oracle on Linux is standard and equal to any other operating system platform.
■ DB2—IBM has had its DB2 database available on Linux since late 1998. DB2 is a multimedia, Web-ready relational database management system, strong enough to meet the demands of large corporations and flexible enough to serve medium-sized and small businesses.
■ Informix—Another IBM database, Informix software delivers the highest levels of application performance for transaction-intensive environments. Enterprise-class OLTP (online transaction processing) and data warehousing solutions combined with reliability, flexibility, and cost-effectiveness are reasons why IBM chose to support Informix on Linux.
■ Sybase—Sybase delivered an enterprise-class relational database management system on Linux in 1999. Today, all Sybase solutions, from their high-end Adaptive Server Enterprise to SQL Anywhere Studio for embedded applications, run on Linux.
■ Other notable commercial databases running on Linux include Empress, SOLID, Adabas D, D3, Flagship, jBASE, Interbase, Ingres, Pervasive.SQL, Polyhedra, and many more found at http://www.linuxlinks.com/Software/Databases/. About the only commercial database not ported to Linux is Microsoft SQL Server.
Database Implementation
Workgroup databases are another category of services that are prime targets for early open source and Linux implementation and migration. If a current database solution is from an existing database vendor, chances are great that there is a commercially available version of the same database that runs on Linux. In these cases, moving the database to Linux could be as simple as copying over the data. At minimum, upgrading to a newer database version might be required using vendor migration tools.

Many organizations are following the lead of Associated Press and redesigning a solution from the ground up based on open source. Advantages are flexibility, the ability to use advanced web services, and often the opportunity to consolidate and simplify data management to single servers with greater scaling capacity.

Migrating or implementing databases, like edge services, can have minimal impact on end users. Access to a database can be provided through a web-based portal—there's no need for changes to the client workstation. Access to the database can be controlled through a central directory, such as LDAP or eDirectory. Conversely, web-based access to structured data can have a huge impact on users inside and outside an organization. Think of access to inventory information, self-service tracking of accounts payable, shipping and delivery monitoring, collaborative project tracking—the list goes on and on. Database access can be a very powerful and effective tool for tracking and measuring organizational objectives. Linux and open source make it reliable and inexpensive.
Light Application Servers
One of the problems with trying to definitively predetermine areas of deployment and implementation is that classifications are general and there is always overlap. This section looks at "light" application servers, and succeeding sections look at "heavy" application servers. What's the difference? There's no clear delineation; we'll make the general assumption that data center or heavy application servers require more complex infrastructure with five 9s of reliability, so multiprocessing, clustering, geo-site failover, and other technologies are required. In reality, most of the basic components for application servers are the same for heavy and light, but heavy application servers require more horsepower.

What is an application server? It can have a broad definition, including a piece of hardware, an operating system hosting an application, a client/server software application, a software solution that web-enables an application, or any combination of these.

What are web services? Another good question without a simple answer. Wikipedia (the online open source encyclopedia) defines web services as "a collection of protocols and standards used for exchanging data between applications. Software applications written in various programming languages and running on various platforms can use web services to exchange data over computer networks like the Internet. This interoperability is due to the use of open standards."

Perhaps an illustration will make it clearer. If the core of an application can be compared to a TV studio—the place where real images are created—then web services can be likened to the network of transmission services that send image signals anywhere in the world to be viewed on individual TV sets. TV sets vary in size and type, but image reception is based on a fairly common standard or two. The image's path from studio to TV set can vary dramatically. Images can be converted, compressed, and encrypted; encoded with extra information and ratings; transmitted through wire, satellite, or microwave; held for delay or retransmission at a local broadcast facility; and then transmitted to an endpoint where they might again be converted, decrypted, or decompressed. Web services can be likened to all the processes, services, and standards that take the image from the studio to the TV set. They are a collection of protocols and standards that govern the exchange of data to, from, and across applications to provide a universe of possible services using the Internet as the infrastructure.

The basic idea is that an application component, function, or process developed to run on a client or server can communicate with (take input from, send output to) another "service" (application/function/process) across the Internet. These "services" can be aggregated into collections to create hybrid services; they can broadcast their availability for use or search for other services; they can tap into legacy applications or enterprise software. Web services by design are flexible and accommodating, yet provide a framework and components for establishing business processes based on rules and policies.

Here's an example of a simple solution built using web services. A Fortune 500 company purchases goods and services at a cost of about $1 billion per year, and in the course of paying bills, processes over 220,000 purchase orders. Historically, purchase orders were manually processed through the company's ERP system upon receiving paper invoices. With the existing ERP system still in place, a web services solution was created that provides paperless invoice submittal by vendors, self-service invoice and payment tracking, and automated process flows for problem invoices based on definable business rules and policies. XML connectors were created to query and populate the ERP system. These connectors and the processing of input/output are managed on a web application server. Invoice data in XML format is transmitted to web servers, where it is displayed in formats specific to viewing devices. The entire system is secured by directory-based authentication that is again coordinated with in-house legacy applications. This particular web services-based solution resulted in reduced headcount, a decrease in problem invoice turnaround from 30 days to 10, plus a net increase in income for discounts received on early payments.
In the simplest web services case, there is a provider, a consumer, and a registry. A web service provider is an organization that creates and hosts a web service. Typically, a provider publishes information about its organization and the services it offers in a web service registry that can be queried by consumers. A web service consumer finds a web service (typically by querying a registry), then runs the service by establishing a connection to the provider (called binding to a web service). A web service registry is a collection of business and service information that is readily accessible to providers and consumers, through programmatic publish and querying interfaces.

Requests for web services and responses from them are made using Simple Object Access Protocol (SOAP), a standardized XML-based messaging protocol. These requests and responses are embedded in HTTP so the interaction between service/application or provider/consumer can take place across the Internet. A web service performs the application logic for a particular request and then returns any application output in the form of another SOAP message embedded in an HTTP response. To specify information about a web service in a standard form, the provider creates a Web Services Description Language (WSDL) document describing its characteristics. WSDL is also an XML-based format.

The success of web services is rooted in the fact that they are based on common industry standards—standards that have evolved over time and have been established through a process of testing and implementation that is not controlled by a single group or vendor. Standards enable portability across platforms, integration with a vast selection of other applications, and consistency for development and reuse. Web services standards typically include J2EE, XML, SOAP, WSDL, and UDDI:
■ J2EE—Java 2 Platform, Enterprise Edition (J2EE) is the platform for building distributed enterprise applications using Java technology. The platform consists of an application server or servlet container that manages and coordinates the execution of Java servlets, JavaServer Pages (JSPs), Enterprise JavaBeans (EJBs), and other interfaces and services. J2EE is not operating system- or hardware-dependent and provides for distributed processing across locations and service platforms.
■ XML—Extensible Markup Language (XML) is a standard for the definition and description of data in context. XML tags describe what a particular selection of text represents and how it is related to other text in the same group or hierarchy. XML makes data and its meaning portable for consumption by other processes and displayable across a wide range of devices in a variety of formats. XML is an important web services standard, as it is used both for SOAP and WSDL.
■ SOAP—Simple Object Access Protocol (SOAP) is an XML-based message protocol for invoking web services in a decentralized, distributed environment. SOAP's text-based format and XML-like syntax provide a simple mechanism for process-to-process information exchange and invoking services across the Web. SOAP's protocol provides a framework that describes what is in a message (envelope), how to process it (rules for expressing datatypes), and a standard for expressing remote procedure calls. (A minimal example of a SOAP request appears after this list.)
■ WSDL—Web Services Description Language (WSDL) is a protocol for describing the capabilities of a web service, including the protocols and formats used by the service. WSDL is XML-like in format and describes network services with either document- or procedure-oriented information about what a web service is, how to establish communication, and where it is located.
■ UDDI—Universal Description, Discovery, and Integration (UDDI) is a standard for registering and discovering web services. It defines a directory of contact information and a catalog of available web services. UDDI defines a framework that enables businesses to discover each other, define how they interact over the Internet, and share information in a global registry. The registry includes business entity information (name, category, location, and so on), service information (category, communication specifications, and so on), and technical information (request/response, security, other metadata).
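To make the SOAP-over-HTTP relationship concrete, the sketch below posts a hand-built SOAP envelope to a web service endpoint using Python's standard library. The endpoint URL, namespace, and getQuote operation are hypothetical; a real consumer would normally generate this call from the service's WSDL.

# A minimal sketch of a SOAP request: an XML envelope posted over HTTP.
# The endpoint, namespace, and getQuote operation are hypothetical.
import urllib.request

envelope = """<?xml version="1.0" encoding="UTF-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <getQuote xmlns="http://example.com/stockservice">
      <symbol>NOVL</symbol>
    </getQuote>
  </soap:Body>
</soap:Envelope>"""

request = urllib.request.Request(
    "http://example.com/soap/stockservice",
    data=envelope.encode("utf-8"),
    headers={"Content-Type": "text/xml; charset=utf-8",
             "SOAPAction": "http://example.com/stockservice/getQuote"})

with urllib.request.urlopen(request) as response:
    print(response.read().decode("utf-8"))   # the SOAP response envelope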
Novell Enables Web Services Creation
Novell provides the tools and infrastructure components that allow you to create web services solutions. The Novell exteNd Workbench and Novell exteNd Application Server, available individually or with Novell Open Enterprise Server, provide an integrated development environment (IDE) in which to develop web services and the platform on which to deploy them. exteNd Workbench is a visual development interface with an extensive collection of wizards, editors, and deployment tools. Novell exteNd Application Server is a robust and complete J2EE application server that runs on Linux, NetWare, Windows NT/2000, Unix, HP-UX, and AIX.

The Novell exteNd Workbench includes enhanced J2EE component and web-service creation wizards, visual designers, archive-based projects, and one-button deployment to J2EE application servers. exteNd Workbench is a J2EE-oriented IDE that providers can use to create, deploy, and maintain web services based on the JAX-RPC standard (Java API for XML-Remote Procedure Call). JAX-RPC enables Java technology developers to create SOAP-based interoperable and portable web services and deploy them on any J2EE-compatible server. Workbench can also be used to develop Java-based web service consumers that comply with JAX-RPC. Workbench includes the following tools, wizards, and capabilities:
■ Component wizards—Create J2EE components, such as JSP pages, EJBs, servlets, Java classes, JavaBeans, and tag libraries
■ Web service facilities—Include a wizard for creating Java-based web service components, a SOAP runtime environment, and a registry manager for searching and publishing to web service registries
■ Graphical and text-based editors—Edit Java files, JSP files, XML files, WSDL documents, deployment descriptors, and plain-text files; editor formats include open source NetBeans or native Workbench (Deployment Description Editor, Deployment Plan Editor, WSDL); image and class viewers are also included
■ Project views—Show the structure of a project's source files and the structure of a project's generated archives
■ Project tools—Assist in building projects, creating and validating J2EE archives, and deploying archives to J2EE application servers
■ Deployment tools—Provide one-button deployment of applications to all leading J2EE application servers; hot deployment automatically and immediately deploys to IBM WebSphere, BEA WebLogic, Oracle, or Novell exteNd Application Server
■ Version control integration—Provides access from Workbench to third-party version control systems
■ Migration wizard—Automatically updates deployment descriptors and other definitions from J2EE 1.2 to J2EE 1.3
■ UDDI tools—Include a test-bed UDDI server, a UDDI browser, and a WSDL editor
■ jBroker Web—Provides core technologies for exteNd web service support, including compilers and a SOAP runtime based on JAX-RPC
■ Web service wizard—Helps invoke jBroker Web compilers to generate Java classes and WSDL files, plus convert Java to WSDL and WSDL to Java
■ Debugger—Debugs server-based applications (exteNd Debugger can be invoked from Workbench); a test-bed client allows running of sample code for accuracy
■ Registry Manager—Defines profiles, searches registries, views business and service information, and publishes new services to a registry
■ Integration with other IDEs—Uses best-of-breed J2EE development tools, such as WebGain Visual Café, Borland JBuilder, InLine Standard Edition, and Macromedia Dreamweaver or exteNd Designer in conjunction with exteNd Workbench
Novell exteNd Workbench provides an intuitive, graphical tool that eliminates the historically tedious process of deploying in J2EE. Using Novell exteNd Workbench project wizards, developers can create projects in the following J2EE archive formats:
■ Enterprise archives (EAR)
■ Web archives (WAR)
■ EJB archives (JAR)
■ Application client archives (JAR)
■ Resource adapter archives (RAR)
■ Simple Java archives (JAR)
■ Deploy-only (nonbuildable) archives
The Novell exteNd Application Server is a comprehensive, J2EE certified platform for building and deploying enterprise-class web applications. It supports the full Java 2 Enterprise Edition standard, including JavaServer Pages (JSP pages), Enterprise JavaBeans (EJBs), standards-based programming interfaces for databases (Java Database Connectivity, or JDBC), directories (Java Naming and Directory Interface, or JNDI), messaging (Java Messaging Service, or JMS), transactions (Java Transaction API, or JTA), authorization (Java Authentication and Authorization Service, or JAAS), Java Messaging Service (JMS), XML parsing (Java API for XML Parsing, or JAXP), and other services, such as CORBA and JavaMail. The Novell exteNd Application Server now runs on Novell NetWare 6.5 in addition to various Windows, Unix, and Linux platforms. Novell exteNd Application Server was recently selected as Editor’s Choice by Network Computing Magazine, which cited exteNd Application Server’s seamless IDE integration and enterprise-class feature set. exteNd Application Server is fully J2EE 1.3-compliant with advanced enterprise features, such as session-level failover, server-level failover, clustering support, floating JDBC connection pools with dynamic reconnect, remote server console, and hot deployment. Novell exteNd Application Server integrates with Apache Web Server on NetWare, as well as Internet Information Server (IIS), Apache, and iPlanet on other platforms. It supports internationalization and localization in 14 languages. exteNd provides outstanding scalability and fault tolerance, and independent benchmarks show that exteNd outperforms BEA, IBM, and Oracle. 116
Novell exteNd Application Server includes a rich management console that allows administrators to perform all system management functions, including viewing usage and performance graphically, viewing log files, changing security, and so on. Management functionality is also provided through an SNMP agent for use with Tivoli or CA (Computer Associates) management consoles. Other features of Novell exteNd Application Server include the following:
■ jBroker MQ—Includes a full implementation of Java Message Service (JMS) 1.0.2 with features for point-to-point and publish-subscribe messaging
■ jBroker ORB—Provides CORBA 2.3 services as well as the RMI-IIOP protocol; features include forward (IDL to Java) and reverse (Java RMI to IIOP) compilers, Portable Object Adapter (POA), Java Objects by Value, Server Activation, IIOP Connection Concentrator, Pluggable Authentication support, IIOP/SSL, Multicast invocations, COS Name Service, and Object Transaction Service pluggability
■ Enterprise JavaBeans (EJB 1.1)—Provides a full-featured EJB server with support for session beans and complex container-managed entity beans; EJBs enable deployment of object-oriented, distributed, enterprise-class applications
■ Servlets 2.2—Enables server-based dynamic HTML; servlets are instantiated once and reused with caching for better performance; supports WARs both as a packaging mechanism and as an application context at runtime
■ JavaServer Pages (JSP 1.1)—Includes a JSP-to-servlet compiler for faster compilation and error reporting; also provides dynamic HTML through the use of embedded Java tags
■ Other J2EE platform services (a short sketch using two of them follows this list):
    ■ JNDI 1.2 (Java Naming and Directory Interface)—Standardizes access to a variety of naming and directory services
    ■ JDBC 2.0 (Java Database Connectivity)—Provides access to relational databases and other data repositories
    ■ JavaMail 1.1—Provides the capability to send and receive email messages through the server
    ■ JTA 1.0 (Java Transaction API)—Provides a way for J2EE components and clients to manage their own transactions, and for multiple components to participate in a single transaction
    ■ XML (Extensible Markup Language)—Provides data definitions as well as messaging and communication hierarchies
■ Enterprise Data Connectors—Enable connectivity to nonrelational databases and packaged applications, such as SAP, PeopleSoft, and Lotus Notes (connectors are available with Novell exteNd Composer)
■ Data Source Object—Enables automatic data binding in which client-side, data-aware controls such as text boxes, list boxes, and drop-downs can be bound to columns of a Data Source Object without writing code; these objects also interact with transaction-processing monitors and servers, such as CICS (Customer Information Control System), Tuxedo, Microsoft Transaction Server, and so on
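As a concrete illustration of the JNDI and JDBC services just listed, the following sketch looks up a container-managed connection pool by its JNDI name and runs a simple query. The JNDI name, table, and column are assumptions made purely for illustration; an actual deployment would use its own names (and, inside a J2EE component, typically the java:comp/env prefix).

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import javax.naming.InitialContext;
import javax.sql.DataSource;

public class OrderLookup {
    public static void main(String[] args) throws Exception {
        // Look up a connection pool the application server has bound in JNDI.
        // The name "jdbc/OrdersDB" is an assumption for illustration only.
        InitialContext ctx = new InitialContext();
        DataSource ds = (DataSource) ctx.lookup("jdbc/OrdersDB");

        // Borrow a pooled connection and run a simple query.
        Connection con = ds.getConnection();
        try {
            PreparedStatement ps =
                con.prepareStatement("SELECT status FROM orders WHERE id = ?");
            ps.setInt(1, 1001);
            ResultSet rs = ps.executeQuery();
            while (rs.next()) {
                System.out.println("Order status: " + rs.getString("status"));
            }
            rs.close();
            ps.close();
        } finally {
            con.close(); // returns the connection to the pool
        }
    }
}
```

Because the DataSource is obtained from the server rather than created in code, the same class works unchanged whether the underlying repository is Oracle, DB2, or MySQL, provided the SQL itself is portable.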
NOTE
Imperial Sugar Company, one of the largest sugar refiners and processors in the United States, needed to enhance customer service and reduce order-processing costs to be competitive in an industry with razor-thin margins. Using exteNd technology, Imperial Sugar XML-enabled sales transactions from a legacy system, assembled them into appropriate business process flows, and exposed them as web services in an advanced web services-based portal that allows its customers to place orders electronically and view a real-time picture of their relationship with the company. Novell exteNd's intuitive, visual design environment requires very little training to create services from mainframe applications, and Imperial Sugar completed the design and implementation of its web services architecture in six weeks, followed by two months of testing, after which it began rollout to its 10,000 customers. Imperial Sugar uses the Novell exteNd Application Server to manage all of the runtime execution for both its web services components and the new Imperial extranet portal application. The load balancing features of the exteNd Application Server deliver the scalability and performance necessary for Imperial Sugar's phased rollout of the extranet, and its fault tolerance assures high availability to its customers.
Open Source Web Services Tools
You can also get web services components and tools from the open source community. JBoss (the company) provides JBoss Application Server, a certified J2EE-compatible platform. JBoss is built around aspect-oriented functionality: aspects let you modularize a code base by separating system-level code from application logic. JBoss offers clustering of objects, including EJB, JMS, HTTP, and Java objects.

Jakarta Tomcat, or Tomcat as it is commonly called, is another open source application server available through the Apache Group. Tomcat is more accurately a servlet container that accommodates Java Servlets and JavaServer Pages (JSP). At a simple level, a servlet container is an application that hosts servlets, providing a communication path between the web server and the servlet. The servlet container coordinates client requests, ensuring that a single instance of the servlet is in memory while spawning a new thread to execute servlet methods for each request. (A minimal servlet sketch appears at the end of this section.)

We've spent a lot of time exploring the tools and components available for the creation of web services. Note, however, that this is one area in which implementation or migration is generally not a simple process. Implementing web services-based solutions often coincides with or requires redefining business processes. This affects not only how information flows through an organization, but also how it flows between organizations with partners, suppliers, and customers. An effective web services solution takes planning, education, training, development, and implementation—often in iterative cycles. The benefits, however, can be significant: streamlined data flows, wider distribution of more focused content, and efficient self-service operations. Customers are happier, partners are more effective and better integrated, and the bottom line goes up.
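To make the servlet container description above concrete, here is a minimal servlet of the kind Tomcat hosts. The container loads one instance of the class and dispatches each incoming request to it on its own thread; the class name and output are illustrative only.

```java
import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Minimal servlet: the container instantiates it once, then calls doGet()
// on a separate thread for every request routed to it.
public class HelloServlet extends HttpServlet {
    public void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        response.setContentType("text/html");
        PrintWriter out = response.getWriter();
        out.println("<html><body><h1>Hello from the servlet container</h1></body></html>");
    }
}
```

The servlet is mapped to a URL in the web application's deployment descriptor (web.xml) and packaged into a WAR, which is the unit the container deploys.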
Proprietary Application Servers
Web services applications won't be the holy grail for every organization. There are, and will continue to be, many line-of-business applications that are not web-based, and that might or might not be available on Linux. Many existing applications are built on proprietary servers and accessed through proprietary fat clients. Applications for networked project planning, financial analysis and planning, and a whole raft of networked applications that are Windows client-dependent are not going to be quick migration projects. A backend Exchange server tied to Active Directory with Outlook clients is not going to be an easy thing to eliminate—have we mentioned vendor lock-in?

Several application server accommodation strategies are possible, including coexistence. It might be that some applications will never migrate (in the foreseeable future) and, in this case, Novell technologies can again help. Using file sharing, storage, and clustering technologies, all data from applications can be stored in Linux-based storage solutions. iSCSI SANs, Samba, and clustering can be used to ensure that proprietary application data is secured and managed in conjunction with other Linux-managed data. In addition, web services can be used to expose proprietary solution services to open standards-based clients. The majority of users will never know that content is coming from an antiquated source.

It's also worth mentioning that practically every ISV that is serious about staying in business and maintaining market share is porting its applications to Linux. Just look at some of the top business software applications (large and small) and see who's not including Linux as a supported platform. Players such as Oracle, CA, IBM, Novell, SAP, PeopleSoft—all have applications that call Linux home. Also take into account all of the Unix-based applications that can run virtually unchanged or with only minor modifications on Linux. There's only one major application player that isn't listed!

If you look at the database offerings from all the major commercial database vendors, you will note that each includes technical and product support on Linux. These vendors have adopted the same model as Novell in substituting Linux for the operating system platform while continuing to provide a collection of complementary proprietary services and offerings. The complete package—open source and proprietary—solves customer needs with valuable solutions. As a result, the entire package is available with a full complement of customer support.
Computation Clusters
Linux clustering capabilities using Beowulf have previously been mentioned and the advantages have been outlined. This section covers in more detail what clustering can be used for, how it works, and how it can be configured. Clustering for high availability as well as high-performance computing is providing companies, large and small, peace of mind and solidly reliable IT services, as well as supercomputing capabilities for a fraction of the traditional cost. Linux clustering can be useful to meet both high-availability and high-performance needs. Examples of high availability include web and e-commerce geo-site redundancies, high-demand application failover contingencies, distributed applications, and load balancing. High-performance cluster candidates include finite element analysis, bioscience modeling, animation rendering, seismic analysis, and weather prediction.

High-Availability Clusters
IDC estimated that 25% of all high-performance computing shipments in 2003 were clusters. It's an area that is growing at a considerable rate. Organizations that are using clustering to create high-performance solutions include Google (10,000-node cluster), Overstock.com (Oracle database for product tracking), Burlington Coat Factory (IBM PolyServe clustering), Epiphany, and many more.

Here's how high-availability clusters work—starting with a simple two-node cluster and building from there. Assume that the services you want to ensure are "always available" are email, NFS, and a web server. For a two-node cluster, you will have at least two servers—each with a storage drive and identical copies of each of the services installed (mail, NFS, web). These servers are connected together with two to three connections. The first connection is a dedicated serial cable that supports the Heartbeat service, which constantly monitors the status of each service (mail, NFS, web). If Heartbeat detects that one or more of the services has failed, it immediately starts the same service(s) on the second server.

The second connection between the two clustered servers is a dedicated data connection for keeping drives on both systems mirrored and in sync. 100Mbps or Gigabit connections are recommended here, especially if your services are data intensive and often changing. If a service suddenly switches to the second machine, the data that it may require will be there and ready. This high-speed data connection could also carry the Heartbeat traffic. Assuming these services (mail, NFS, web) are for web clients, there will be a third connection that links the servers to the network or the Internet. To play it safe, you should have an uninterruptible power supply (UPS) for each server, attached via UPS control cables.

Heartbeat and DRBD, the application that keeps data storage in sync, are both available with SUSE Linux and can be installed and managed through YaST. They are also available with other Linux distributions or from popular Linux sites. You have the option of configuring failover to occur immediately upon fault detection, initiating a manual failover, or specifying that failover operations occur in a specific order according to rules based on resource priority or system availability (see Figure 3.12).
FIGURE 3.12
Simple two-node cluster for high-availability failover. [Diagram: two servers ("Paul" and "Silas"), each with its own drive (/dev/hdc) and UPS, linked by a serial Heartbeat connection and a 100Mbit Ethernet DRBD/Heartbeat connection, and attached to a network switch.]
Now that you understand cluster basics, you can mix and match, adding servers and services to create any configuration that meets your needs. High-availability clusters can range from simple to very complex. You don't have to mirror the exact services on each machine. You can specify that one service on machine A fails over to machine B, and another service on machine A fails over to machine C. If you don't want the expense of duplicate hardware for every set of services, you could configure one machine to be the failover recipient for several other machines. The chances are slim that two or three machines would fail at once, overloading the target server. You could also have active services running on several servers (no idle failover machines) with failover to other servers in the cluster running other services.

Clustering provides a lot of flexibility for storage configuration. Instead of failing from drives on one machine to drives on another, you could create a storage array with combinations of RAID, mirroring, or striping. These storage subsystems could also be configured to fail over to other systems, if that level of redundancy is needed. The possibilities are endless, and with the Internet and technologies such as iSCSI, a cluster can be geographically distributed.

If you've ever wondered what an extra "9" of availability gives you, here's a summary of how often you would be down in one year given a specific level of reliability (annual downtime is simply 1 minus the availability, multiplied by one year, so each added 9 cuts downtime by a factor of 10). With high-availability clusters on Linux, it's easy to reach five 9s.

NINES    AVAILABILITY    ANNUAL DOWNTIME
1        90.0000%        37 days
2        99.0000%        3.7 days
3        99.9000%        8.8 hours
4        99.9900%        53 minutes
5        99.9990%        5.3 minutes
6        99.9999%        32 seconds
High-Performance Cluster
When it comes to high-performance computing on Linux, each high-performance cluster consists of one master and as many slave nodes as needed. Some of today's largest clusters have over 10,000 nodes. All nodes should be the same architecture (Intel, Apple, and so on) and, for optimal performance, the hardware configurations should be identical (a slow node can slow down the entire cluster). Because the master node performs management functions, more RAM and faster network, processor, and disk speeds are highly recommended for better performance.
Linux is installed on every node, and the application to be run on the cluster is installed on the master. Applications generally must be parallelized (written to take advantage of multiple processors) before computation can be spread across multiple computers for high-performance results. The exception is a serial application that is run repeatedly on different data sets. The clustering software, which could be Beowulf or any other open source or commercial version, is also installed on the master and on every node in the cluster. The clustering software for high performance includes message-passing libraries that facilitate high-speed communication between nodes. Effective high-performance clusters also require high-speed connections between nodes. This can be provided using several methods, such as Ethernet, Gigabit Ethernet, or one of the commercial high-bandwidth, low-latency interconnects, such as Myrinet, InfiniBand, or Quadrics (see Figure 3.13).

FIGURE 3.13
High-performance clusters consist of a master, slave nodes, and a high-speed switch. [Diagram: a master node running the application, connected through a high-speed switch to slave nodes 1 through N.]
With Linux, both high-availability and high-performance clusters can be created without added expense. The Heartbeat and Beowulf solutions are open source, and are included with the Novell SUSE Linux distribution. Novell gives you clustering right out of the box for either SUSE Linux or NetWare with basic versions of Open Enterprise Server. A Novell advantage is that Linux nodes can fail over to NetWare nodes and vice versa. A two-node cluster license is included that allows you to create mirrored or failover systems to ensure that data or applications are always available.
A major advantage to Linux clusters is that they are incrementally scalable. It doesn’t take a complete redesign of an application or buying a new supercomputer to get more horsepower. You just augment the cluster by one, two, five, or 50 machines or more, depending on what you need to get the job done.
Data Center Infrastructure
This section looks at technologies that can be implemented at the data center—technologies that can provide a portion or all of the services to support the most complex enterprise IT requirements. These include data center infrastructure technologies such as multiprocessing, server farms, storage, and advanced computing—things that enable industrial-strength applications and services. Common technologies include symmetric multiprocessing (SMP), Non-Uniform Memory Access (NUMA), hyperthreading, terminal services, grid computing, storage support, and high-end hardware platform support. This list of data center infrastructure technologies is complemented by a wide selection of commercial hardware/software combinations, including high-speed switches, routers, and hubs, as well as power supplies and generators, cooling systems, and emergency protection mechanisms. The data center solutions discussed here can also include technologies previously discussed for use in workgroups or small- to medium-sized businesses.
Symmetric Multiprocessing
The Linux 2.6 kernel includes enhancements that let applications that have been written for parallel processing take advantage of even more processors. In simple terms, SMP employs several processors on the same machine, using the same common memory pool, to execute application tasks. This can significantly speed up the processing for data center-type applications, such as large databases and enterprise resource planning (ERP) applications (see Figure 3.14).

FIGURE 3.14
Symmetric multiprocessing architecture supported on Linux. [Diagram: an SMP system in which four CPUs share a common L3 cache/memory and I/O.]
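As a rough illustration of what "written for parallel processing" means at the application level, the following hypothetical Java sketch splits a workload across one thread per available processor; on an SMP (or hyperthreaded) machine, the kernel can schedule those threads on different CPUs at the same time. The workload itself is invented for the example.

```java
public class ParallelSum {
    public static void main(String[] args) throws InterruptedException {
        final long[] data = new long[8000000];          // invented workload
        for (int i = 0; i < data.length; i++) data[i] = i;

        int cpus = Runtime.getRuntime().availableProcessors();
        final long[] partial = new long[cpus];
        Thread[] workers = new Thread[cpus];

        // One worker per processor; an SMP kernel can run them concurrently.
        for (int t = 0; t < cpus; t++) {
            final int id = t;
            final int chunk = data.length / cpus;
            final int start = id * chunk;
            final int end = (id == cpus - 1) ? data.length : start + chunk;
            workers[t] = new Thread(new Runnable() {
                public void run() {
                    long sum = 0;
                    for (int i = start; i < end; i++) sum += data[i];
                    partial[id] = sum;                  // each thread writes its own slot
                }
            });
            workers[t].start();
        }

        long total = 0;
        for (int t = 0; t < cpus; t++) {
            workers[t].join();                          // wait for all workers
            total += partial[t];
        }
        System.out.println("Sum = " + total);
    }
}
```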
Non-Uniform Memory Access
A drawback to SMP technology has been that, even though it can theoretically scale to a large number of processors, realistically the number of possible processors has been limited to around eight to twelve. This is because the shared memory bus becomes a bottleneck as each processor accesses the common bus directly. The next generation of SMP technology is NUMA, which provides for flexible nodes (memory blocks) that act as separate buses and can include memory available on each processor. NUMA allows you to accommodate many more processors than standard SMP technology—Novell has successfully tested 512 CPUs in a multiprocessor configuration and found the technology capable of scaling further. There is no theoretical limit on the number of processors that will work (see Figure 3.15).

FIGURE 3.15
NUMA architecture supported on Linux. [Diagram: a system with multiple CPU chips, each with its own cache, sharing cache memory and I/O.]

NUMA scalability is also enhanced for multiprocessing with hardware optimizations such as those available with AMD's 64-bit Opteron processor or IBM NUMA servers built with Intel Xeon chips. SUSE Linux includes tools that allow developers to fine-tune applications for NUMA support. Applications that are parallelized or multithreaded for SMP can generally run on NUMA servers with little modification; these include databases such as Oracle and DB2 and extended ERP systems for supply chain, customer, financial, and human resources management.
Hyperthreading
Hyperthreading is Intel technology that enables multithreaded server software applications to execute two threads in parallel within each individual server processor, thereby dramatically improving transaction rates and response times. In essence, a hyperthreading-equipped processor appears as two logical processors to an application. Depending on the application logic, execution on two logical processors can provide better overall performance. Multithreaded or SMP-enabled applications can automatically take advantage of hyperthreading, yielding better performance for e-business and enterprise software implementations. Specific fine-tuning will enhance performance even further. Novell SUSE and Intel engineering teams worked closely to ensure that SUSE takes advantage of Intel's hyperthreading technology in the Xeon processor.
Grid Computing
The goal of grid computing is to create virtual systems, independent of specific resources (processors, memory, storage, networking), that provide computing power on demand. In theory, a grid system can provide as much or as little system resource on an "as needed" basis as required. It's analogous to the electricity power grid, in which you only pay for what your meter says you use. Grid computing can service one group or many groups, one company or many companies—the grid is not necessarily tied to an organizational unit, enterprise infrastructure, or geographical location. Linux is emerging as a key element of grid computing for many enterprise companies as well as industry-specific service providers.

Here's how it works. Several Linux/open source technologies contribute to the ability to create computing grids, or solutions that can scale significantly to accommodate increased loads. These include load balancing, failover, multiprocessing, and clustering, as well as web services technologies, authentication, and management. These Linux-centric technologies provide a high degree of flexibility for creating scalable combinations of grid services and are based on open standards. This flexibility, combined with the openness, makes it possible to create comprehensive or complex solutions that can be scaled up simply by adding more open components. These Linux and open source applications and services are like building blocks that can be interconnected, combined, and replicated to create global gridlike solutions that service needs from small to gargantuan.

Grid computing allows you to add or subtract services or underlying hardware as needed. Are you running the quarterly financials report and need lots of extra computing power for a weekend? Pull it from the grid, a pool of resources normally maxed-out during the week but idle on weekends. By Monday morning, all CPU cycles are available again for regular use. The concept is extended to data as well, with applications like Oracle Real Application Clusters (RAC). Oracle's grid solutions provide databases that can float from platform to platform without changes and are not tied to specific machines. These types of solutions are increasingly being supported on blade hardware—grid-specific machines for which Linux is ideal—built by IBM, Sun, HP, Dell, and others.

IBM and Novell SUSE have teamed up to provide grid capabilities with Linux and IBM zSeries computers using the Globus Toolkit, a collection of software services and libraries that facilitate creation of grid systems. With a zSeries computer (mainframe), hundreds of instances of Linux can be running, with others added and subtracted dynamically depending on grid needs.
Terminal Services/Thin Clients
Linux desktops are discussed in more detail later, but it's worth noting here that enterprise applications can be configured to run completely at the data center, with all processing and data manipulation occurring at the back end and only the display at the client. This is possible using open standards/Linux technologies, including X Windows, web services, and a form of terminal services. With Linux, any computer can function as a client to any other computer. The graphics display and interface running on one machine can control the application or data processing on another machine using XFree86, the open source version of X Windows. This is true distributed computing, and it makes possible a wide range of centrally based and centrally managed applications.

As discussed previously, web services applications also make it possible for thin clients running only a browser to access, interact with, and control enterprise-class applications. Many of the major ERP solutions now have web-based clients allowing users full access to application data with a simple browser from any location on the Internet.

Finally, using any of several Linux boot technologies such as PXE boot, it is possible to create diskless thin-client workstations that access all services on a "fat" server. A sample scenario is using a floppy or network boot version of Linux to bring up the client and connect to the server—no local hard drive is necessary. The server then houses or provides access to all applications, including browser, office applications, ERP, mainframe access, or web services applications. Everything is centralized and manageable by corporate IT, and little if any support at the workstation is required. Organizations can provision a few to thousands of users in a matter of minutes. Thin-client implementations work particularly well with grid computing, making it possible to scale up or down dramatically with very little management or support effort.
Multisite Clustering
Clustering has been covered in previous sections, but it is mentioned again here to illustrate that it can be a valuable tool for enterprise failover, high-availability, and redundancy requirements. Geographically remote sites can be included as part of clusters using private VPNs or the Internet. For example, a primary storage area network might be located at site A, which accommodates data needs for a specific enterprise application. A mirrored SAN at site B is available if the SAN at site A should go down. The applications that the SAN services could also be separated and mirrored to other clusters at other sites so that a multisite failover network is in place should anything happen at site A or B. High-availability cluster networks can be set up using technology available with Linux. By adding technology such as iSCSI SANs from Novell, redundant arrays of storage can be ready for access in case of emergency, anywhere on the Internet.
Storage
The high-end storage capabilities included with Linux also are conducive to data center-class storage solutions. Journaling File System (JFS) is a full 64-bit file system that can support very large files and partitions and is ideal for high-performance systems. Maximum file size can be 8EB. Novell Storage Services (NSS) enables snapshot file services for backup and versioning of enterprise applications, even when files are open. Software RAID 5 support included with NSS lets organizations create hot-swappable storage solutions, and NSS does not experience performance degradation, even with millions of files in a given subdirectory.
Platform Support
Linux is the only operating system that you can use on any hardware platform from Atari to IBM S/390 mainframes. It is extremely easy to scale up by moving to the next level of hardware platform power. For many applications, even in the data center, today's best x86 hardware is more than adequate. But, if your application needs more advanced hardware, Linux and the applications it supports will run there, too. AMD Athlon 64 and Opteron run Linux, as does the Intel Itanium Processor Family (IPF). IBM POWER series hardware (64-bit) for the IBM iSeries and pSeries servers is supported by Linux. Linux also supports the IBM zSeries, IBM's 64-bit platform mostly used in the S/390x mainframe series, and the IBM 31-bit S/390 systems. On this system, Linux can be virtualized, making it possible to run hundreds of instances of Linux on one mainframe, with each instance performing as an independent operating system. This provides phenomenal scalability for a wide range of applications on a single, centrally managed machine. Table 3.5 shows Linux support just for the IBM line of servers.

TABLE 3.5
Linux on IBM Hardware Scales from Desktop Workstations to Largest Mainframes

LINUX ON INTEL-BASED SERVERS
IBM eServer xSeries — Blades, 1-2 way Servers, Workstations, 32-bit, 64-bit
IBM eServer Cluster — High-performance cluster interconnect

LINUX ON POWER
IBM eServer OpenPower — 64-bit POWER Servers
IBM eServer pSeries — 32/64-bit, dual OS, 8-way
IBM eServer iSeries — Virtualized server farm, advanced logical partitioning (LPAR), capacity on demand, virtual storage, and virtual Ethernet

LINUX ON AMD
IBM eServer 325 — 64-bit dual AMD Opteron

LINUX ON IBM MAINFRAME
IBM zSeries S/390 Mainframe — Virtualization and dynamic allocation, hundreds of individual Linux servers on a single server
Each of these technologies mentioned in Table 3.5 can be used to provide enterprise-class, full data center applications and services, and Linux accommodates or supports all of them. Combining Linux with commercial applications and services can provide any enterprise company with the capacity and processing power required for much less capital expense and management effort.
Enterprise Applications
By this point, you might be thinking that your company could feasibly run on Linux and open source. The infrastructure, networking services, and application services to support even the largest implementations are available, and certainly there are web services components to create your own applications. But what if you want to go with established application ISVs? Are the leading enterprise applications available on Linux? Absolutely! This section looks briefly at what enterprise applications are already available on Linux, and at some of the open source integration solutions that these vendors provide. Many of them deploy their historically successful applications using the enterprise-class data center solutions discussed in a previous section. In many cases, enterprise ISVs such as Oracle have been able to deliver their solutions with superior performance on Linux.

The advantages for enterprise ISVs porting or developing to Linux are manifold. First, the overall cost of a solution for customers goes down due to reduced operating license fees, lower hardware costs, and simplified management. Second, ISVs can leverage their existing product lines and expertise in new markets, which might be more price sensitive, more security conscious, or in the process of Linux migration or consolidation efforts. ISVs can simplify development efforts with the capability to create solutions using a single code base that scales all the way from a simple x86 processor to an IBM mainframe. Other advantages include the capability to capitalize on open source components such as web servers, application servers, messaging services, storage, and high-performance or high-availability architectures.

The following sections list the leading enterprise application software vendors and what they have done with Linux so far. Some of these have standardized initially on Red Hat's distribution and some on SUSE, but they all are looking at Linux as a serious piece of their product strategy going forward.
Oracle
Oracle has been serious about Linux since its arrival. As mentioned, with the release of Oracle 10g, Linux is the internal development platform for all Oracle development. Oracle claims the first commercial database available on Linux, and since that time has cultivated enough expertise and confidence in Linux to provide Oracle customers with seamless and complete technical support for the Linux operating system in addition to support for the Oracle stack. All key Oracle products, including Oracle Database 10g with Real Application Clusters, Oracle Application Server 10g, Oracle Collaboration Suite, Oracle Developer Suite 10g, and Oracle E-Business Suite, are available for Linux.
Oracle works with key Linux distributors to test and optimize the operating system to effectively handle mission-critical applications. Oracle collaborated with Novell SUSE to create a core set of enhancements in the areas of performance, reliability, clustering, and manageability to support Oracle customers' enterprise-class deployments. Oracle is also actively supporting the open source community by contributing source code for products such as Oracle Cluster File System. By mid-2004, more than 1.4 million copies of Oracle products for Linux had been downloaded from the Oracle Technology Network.

Like Novell, Oracle has deployed Linux internally in various ways to make its infrastructure more efficient and less expensive. Oracle's internal IT organization found through analysis that Linux-based systems are one of the most cost-effective ways to reduce costs for its IT infrastructure.
IBM
As previously mentioned, IBM has a total hardware commitment to Linux, with every hardware platform it sells able to run Linux. This extends to IBM's extensive software product line as well. Table 3.6 offers a short summary of IBM software solutions for enterprise businesses in multiple industries that are available on Linux as well as IBM's hardware.

TABLE 3.6
IBM Solutions

SOLUTION — DESCRIPTION
WebSphere — Commercial web application server with enterprise-class application, portal, and commerce capabilities
DB2 — Relational database and enterprise synchronization architecture
Tivoli — Comprehensive IT security manager for data, services, applications, operating systems, hardware, and more
Finance Foundation — Real-time securities and risk analysis for financial or capital markets
Multimedia Kiosk — Kiosk solutions for retail, travel, transportation, manufacturing, finance, and government
Domino — Web access for Lotus Notes users
In addition, IBM supports a host of partner ISV solutions on IBM hardware and Linux. Solutions include a wide assortment of applications, including engineering and analysis, payment card authorization, point of sale, ERP, accounting, supply chain, and distribution management.
SAP
SAP was one of the first software vendors to supply critical business applications on the Linux platform. SAP's entire product line is extensive, providing solutions based on best business practices that enable employees, customers, and business partners to work together anywhere at any time. mySAP Technology is a standards-based architecture that permits enterprises to integrate a wide variety of IT systems. MySAP.com, forerunner of mySAP Technology, was developed from the ground up using Linux.

Current Linux-based products include mySAP All-in-One, a solution for small and midsize businesses with complex business processes and IT configuration and functionality requirements. mySAP All-in-One is preconfigured and available for more than 80 industries, providing basic ERP functionality such as general ledger, sales, purchasing, inventory, costing, and CRM. All-in-One also includes microvertical components and country specifics, as well as interfaces and technical infrastructures based on customer requirements.

Standard enterprise applications available from SAP on IBM and Linux include solutions for the following disciplines and industries: accounting, banking, automotive, customer relationship management, engineering and construction, enterprise resource planning, finance, health care, education, human resources, oil and gas, pharmaceuticals, retail, supply chain management, telecommunications, utilities, and many more.
Siebel
Siebel Systems is another enterprise application vendor that specializes in CRM solutions. Siebel supplies over 20 different industry-specific versions of its software for sales, marketing, and customer service. Specific Siebel applications include sales force automation, call/contact center, marketing automation, business integration, business intelligence, employee relationship management, partner relationship management, and customer order management. In 2004, Siebel and IBM jointly announced that the Siebel CRM applications are enabled on IBM's DB2 Universal Database running on Linux. Again, the motivations for enterprise applications on Linux are lower total cost of ownership, ease of integration, scalability, and security.
PeopleSoft
Also in 2004, PeopleSoft announced support for Linux, citing security, stability, open source flexibility, and demand by customers as major reasons. Linux will support the PeopleSoft Enterprise One suite with solutions for human capital management, supply chain management, supplier relationship management, financial management, asset lifecycle management, and project management applications.
Other
The preceding enterprise applications are just a sample from the industry leaders. Thousands more applications, general business and vertical solutions alike, are available from other ISVs. For an ongoing list of Linux-supported applications and services, check out Linux Knowledgestorm at http://linux.knowledgestorm.com or the Linux Links site at http://www.linuxlinks.com/Software/. As of October 2004, more than 16,000 software applications for Linux were listed on this site.

The process of implementing enterprise applications within your organization can vary from a simple package install to months of integration. Novell provides a number of solutions that can simplify the process. Other valuable services for implementing and managing enterprise applications include software distribution, system health monitoring and control, storage management, workstation management, and much more.
Messaging and Collaboration
Technically, messaging and collaboration could be included in the previous section with enterprise applications, because messaging is almost always mission-critical and is used by every person in an organization. Mail is such a significant application that it merits separate coverage, looking at what is available, how it can be implemented, and some strategies for coexistence with other mail systems, because coexistence is often a common problem for enterprise companies.

It can be argued that email, not the web server, is the killer application for the Internet. More than half the people in the United States use email, and that's a low percentage compared to some other parts of the world. The point is, a lot of people use email and a lot of it is business related. For that reason, email becomes a critical part of not only a company's internal operations, but also a company's outward face to the world. If email goes down, internal production grinds to a halt. Externally, you lose contact, presence, and, ultimately, business. Everything about Linux discussed to this point as far as security, reliability, performance, and scalability goes, therefore, applies to email as much as to any other application.

Fortunately, email capabilities and solutions have evolved along with the Internet from the very beginning, so several viable options are available from which to choose. Novell believes that some messaging and collaboration options are better than others, but when it comes to open source, the customer should decide. Open source messaging solutions include Postfix and Sendmail. A number of commercial email packages are also available on Linux, but Novell GroupWise is highly recommended.
Background
Before describing what solutions are available, it's worth taking a little time to outline email architecture. Email solutions generally are not a single application package on a single machine, but a distributed collection of interconnected services that require multiple elements to get mail from one point to another. This distributed nature provides some flexibility when considering how to deploy or implement email. The common elements of a mail solution include the following (a short JavaMail sketch illustrating the SMTP hand-off follows this list):
■ SMTP message transfer agents—Simple Mail Transfer Protocol (SMTP) agents act as mail carriers, transferring mail from one location to another. These processes are responsible for directing outgoing mail to other SMTP agents in other locations. They are also responsible for directing incoming mail from other locations to a local POP or IMAP server.
■ POP/IMAP message transfer agents—Post Office Protocol (POP) and Internet Message Access Protocol (IMAP) agents transfer mail in user-specific message stores (sometimes called post offices) to individual user clients. POP and IMAP function slightly differently in that with POP, all messages are moved from the message store to the message client. With IMAP, messages can be retained at the message store for access and manipulation.
■ DNS—Domain name servers act as local or Internet-wide lookup tables for SMTP agents. Using DNS, sending SMTP agents can determine the correct IP address of the receiving SMTP agent based on the domain name of the email address.
■ LDAP—Lightweight Directory Access Protocol (LDAP) directories are often used as a control list, authorizing access to message transfer agents (MTAs). LDAP directories are also used as email white pages to look up email addresses for individuals.
■ Email clients—Client applications interact with POP/IMAP message transfer agents (or servers, as they are often called) to send, retrieve, and organize email and email attachments.
■ Web access—A common mail requirement is the capability to access email from any location without a mail client. This is made possible using a combination of web server and web-based mail access services.
■ Mail protection—Critical to productivity and mail security is the ability to protect against viruses and filter spam. This can include dynamic virus or spam recognition filters with scanning and deletion policies.
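To show how a client hands a message to an SMTP message transfer agent, here is a brief, hedged JavaMail sketch; the host name and addresses are invented for illustration, and a real deployment would point at its own MTA (Postfix or Sendmail, for example).

```java
import java.util.Properties;
import javax.mail.Message;
import javax.mail.Session;
import javax.mail.Transport;
import javax.mail.internet.InternetAddress;
import javax.mail.internet.MimeMessage;

public class SendMail {
    public static void main(String[] args) throws Exception {
        // Point JavaMail at an SMTP message transfer agent;
        // "mail.example.com" is an illustrative host name.
        Properties props = new Properties();
        props.put("mail.smtp.host", "mail.example.com");
        Session session = Session.getInstance(props);

        // Compose and hand off the message; the MTA takes over delivery.
        Message msg = new MimeMessage(session);
        msg.setFrom(new InternetAddress("[email protected]"));
        msg.setRecipient(Message.RecipientType.TO,
                new InternetAddress("[email protected]"));
        msg.setSubject("Order confirmation");
        msg.setText("Your order has shipped.");
        Transport.send(msg);
    }
}
```

Once the MTA accepts the message, it uses DNS to locate the mail server for the recipient's domain and relays the message onward, exactly as described in the list above.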
Several viable open source email solutions are available, as well as many commercial email applications that are standards-compliant. The leading open source solutions are viable for enterprise organizations, depending on the level of features needed. Most Internet service providers (ISPs) use open source mail solutions, providing client-based and web-based access to email for thousands and thousands of users. With proper scalability and load-balancing techniques, these solutions scale well and adequately provide basic email services. The most common open source email solutions consist of one or more of the following components:
■ Sendmail—Sendmail is a popular open source message transfer agent available for Unix and Linux systems. Sendmail originated at UC Berkeley in the 1980s as a rewrite of the ARPANET delivermail program, the first mail service for the Internet. Sendmail is estimated to be running over 40% of the mail servers on the Internet, so it has functionality that is useful to a large population. Sendmail critics claim it is insecure, difficult to manage, and slow. Sendmail includes both SMTP and POP/IMAP transfer agents.
■ Postfix—Postfix was developed as a fast, easy-to-administer, secure alternative to Sendmail. It was originally developed at the IBM T.J. Watson Research facility as VMailer and IBM Secure Mailer. It doesn't send mail as the root user, can reload settings without being taken down, and can use a standard database to store configuration information. Postfix also includes SMTP and POP/IMAP transfer agents.
■ Mail clients—Several popular open source mail clients work with Sendmail or Postfix. The Mozilla (Netscape Communicator) mail components have been carried forward and enhanced in the newer Mozilla projects, Thunderbird (mail) and Firefox (browser). The LinuxLinks.com "Mail Clients" section includes entries for 117 different open source clients. Common Windows-based email clients that work with standards-compliant POP mail servers include Microsoft Outlook, Pegasus Mail, and Eudora Email. You can use any of these standards-compliant clients with Postfix or Sendmail, as well as with POP/SMTP-compliant mail servers such as GroupWise.
■ Directory server—Like mail clients, there are open source versions of the LDAP directory server, and there are proprietary or commercial versions as well. Because LDAP is an open standard, mail clients and mail servers can use any standards-compliant LDAP server for authentication and lookup. The most common open source LDAP directory is OpenLDAP, a server implementation of LDAP that includes the LDAP daemon and the tools and utilities for implementing client applications. LDAP Java class libraries are part of this project and were contributed by Novell. Novell has included LDAP compliance and a host of LDAP-based solutions as part of eDirectory. In addition to other services, eDirectory functions as an LDAP directory for mail server and client applications. (A brief LDAP lookup sketch follows this list.)
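As an illustration of the "email white pages" role an LDAP directory plays, the following sketch uses the standard JNDI LDAP provider to look up a person's mail attribute. The server address, search base, and entry are assumptions for the example; an OpenLDAP or eDirectory deployment would substitute its own values.

```java
import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.NamingEnumeration;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;
import javax.naming.directory.SearchControls;
import javax.naming.directory.SearchResult;

public class MailLookup {
    public static void main(String[] args) throws Exception {
        // Connect anonymously to an LDAP server; the host name and
        // search base below are illustrative only.
        Hashtable env = new Hashtable();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, "ldap://ldap.example.com:389");
        DirContext ctx = new InitialDirContext(env);

        // Search the directory for a person and read back the mail attribute.
        SearchControls controls = new SearchControls();
        controls.setSearchScope(SearchControls.SUBTREE_SCOPE);
        controls.setReturningAttributes(new String[] { "cn", "mail" });

        NamingEnumeration results =
            ctx.search("ou=people,o=example", "(cn=Jane Doe)", controls);
        while (results.hasMore()) {
            SearchResult entry = (SearchResult) results.next();
            System.out.println(entry.getAttributes().get("mail"));
        }
        ctx.close();
    }
}
```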
Novell GroupWise
For many organizations, basic email using a POP client and SMTP server is entirely adequate. Mail with attachments is the only required form of collaboration, and using standard hardware and software platforms, the mail always gets through. If you want a richer feature set, however, it's available. For years, Novell has offered the premier communication and collaboration solution, GroupWise. GroupWise includes several features not available with standard open source solutions that are often highly valuable to organizations. The short list includes calendaring, scheduling, busy searches, message retraction, document management, rules-based message handling, shared folders, message routing, and much, much more. Using GroupWise Web Access or the Evolution client, these features are available through a web browser or from the Linux desktop. Collaboration is one area in which open source has not provided a substitute with equivalent features, and Novell's product offering is an excellent complement.

Novell GroupWise runs on Linux as well as NetWare and Windows. Advantages of GroupWise on Linux include flexibility, scalability, and security. The GroupWise architecture expands to include post offices, server-based mail repositories that house user mail and content. These can be strategically positioned on different servers for performance gains and in different locations for quicker access by users. The distributed, modular nature of GroupWise makes it possible to easily scale so that thousands of users in multiple time zones can have easy access to rich planning, calendaring, and collaboration tools—all while providing failover and redundant systems for uninterrupted services.
Integration
Email integration, especially with company mergers and acquisitions, has historically been a nightmare. Before email transport protocols and message formats were standardized, practically all major mail systems were proprietary and could only be merged with the use of sophisticated and customized integration solutions. Today, using Linux and open source as a framework or interim step in email migration or deployment can deliver significant advantages.

Because email is a typical client/server application, moving the server to Linux is a process that can be done with very little impact on end users. Because standards-based POP/IMAP mail is a simple text format, mailboxes can be migrated with very little effort. Most email clients have import utilities that allow you to pull mail from other client formats with a couple of simple commands. With Linux scalability and performance, consolidating email servers or services can simplify administration and reduce management costs. Using LDAP directories instead of proprietary mail user lists, in conjunction with other applications or services that are user-based, can also simplify management. A single directory, instead of multiple (often out-of-sync) user lists, provides the advantage of one-stop change management.

Novell simplifies the process of consolidating and migrating both directory and email. The GroupWise migration utilities allow you to easily convert from Microsoft formats to GroupWise, while the Import Conversion Export utility enables administrators to move data between LDAP directories or import directory information to Novell eDirectory.

Email needs vary from organization to organization and even from group to group within organizations. Between open source options and Novell's GroupWise offering, an entire range of email selections is available—from simple email to full-blown collaboration. Clients can be web-based, open source, or full-featured desktop clients for Windows, Macintosh, or Linux. Novell supports all levels of an open stack email solution.
Internal Development
Many of those contributing to open source are employees of existing companies who are looking to solve the problems that they encounter in their own daily tasks. Preliminary studies have shown that many organizations already employ staff who are well schooled in the development tools and languages that are used for open source projects. This might or might not be the case for your organization. The information in this section is a summary of the development resources that are available for deploying, creating, or integrating open source solutions—inside or outside of an organization. If you have an internal development group, chances are these resources are already in use at your site. If you don't have an internal development organization, you might find that with the availability and usability of development tools, an internal programming group could be leveraged very effectively to create powerful, customized applications and services.

Responsible development organizations should be seeking to simplify as much as possible. As mentioned earlier, increasing the number of variables through differences in hardware, operating systems, standards, applications, and so on tends to have a multiplying effect on the complexity of management. It's no different with software development, and any variables that are eliminated or consolidated can reduce the demand for management, support, and capital costs. One of the most obvious simplifications is to eliminate the need to support multiple platforms. You don't want to manage two versions of an application—one for Windows and one for Linux. And, thanks to several technologies, including Mono and Novell exteNd, you don't need to.

Using the Mono framework, it is possible to create one version of an application that works on both Linux and Windows, taking advantage of the latest technologies from each. With Novell exteNd, corporate application developers can powerfully combine identity, integration, and portal services to securely deliver relevant business information and services. In addition, with open source AMP technologies (Apache, MySQL, Perl/PHP/Python), developers can create applications that are web-accessible, platform-independent, scalable, and secure. Developers have programming flexibility without limiting choice.
Mono
.NET was designed as a rapid application development platform intended to simplify programming by allowing developers to write applications without concern for network services or operating systems. The idea was that you write modular application components in any language and they run on any operating system, accessing Internet and networking services. This was accomplished by creating a common language infrastructure, which is really a type of virtual machine. Different programming languages are compiled into a Common Intermediate Language (CIL) that is run by the virtual machine. It's a great idea with only one flaw—the only family of operating systems that it supports is Windows.

Mono was developed as a more inclusive, open source solution with the capability to accommodate Unix and Linux as well as Windows. The Mono objective is to enable Unix/Linux developers to build and deploy applications that work on all common platforms, including Windows, Unix/Linux, NetWare, and Mac OS. Probably the biggest benefit of Mono is that it allows applications written to .NET to run on Unix and Linux. Developers don't have to make exclusive decisions about which operating systems to support and can focus on multisystem solutions that meet market needs regardless of platform.

Mono includes (like .NET) a common language infrastructure (CLI) virtual machine. The CLI virtual machine includes a class loader, a just-in-time compiler, and a garbage collection runtime. It also includes class libraries (both .NET and Mono) that can work with any supported language, and a compiler for the C# language. Languages supported include Managed C++, JScript, Eiffel, Component Pascal, APL, Cobol, Perl, Python, Scheme, Smalltalk, Standard ML, Haskell, Mercury, and Oberon. Mono also includes tools that facilitate the creation of product APIs.

You can't talk about Mono without wondering where the name came from. From the Mono website at http://www.mono-project.com, "Mono is the word for 'monkey' in Spanish. We like monkeys." Mono was originally developed by Miguel de Icaza and sponsored by Ximian, which is now part of Novell. It's important to note that Novell doesn't own Mono even though it was started at Ximian. Novell will continue to contribute to Mono and support it. Novell is working on components of Mono that are on the critical path to releasing a development and execution environment. After they are released, Mono will follow the evolutionary path of all open source software. Novell, however, will provide support and consulting services around Mono as customers need and request them. Microsoft has contributed several technologies to ECMA (the international information and communications systems standardization organization) that are part of the Mono project.
Novell exteNd
Novell exteNd is a comprehensive suite for the rapid development and deployment of service-oriented web applications. With Novell exteNd, corporate application developers can powerfully combine identity, integration, and portal services to securely deliver relevant business information, at the appropriate time, to the right people. The exteNd suite includes the following:
■ Visual development tools that help you transform legacy systems into web services, orchestrate them into business processes, and create interactive portals—without writing a single line of code
■ Support for the Linux operating system, adding to the industry's widest platform support for NetWare, Windows, Solaris, HP-UX, and AIX
■ Compliance with multiple industry standards that provide integration with existing systems, including XForms, which simplifies the process of creating web pages and forms, and the Portlet 1.0 specification, which allows portlets from different vendors to interoperate
With Novell exteNd, you can integrate web services and data from throughout your organization to create innovative business solutions. Several prepackaged exteNd solutions are available, including the following:
■ Secure Enterprise Dashboard Portal—Integrates business information from various data sources and proprietary portals into a single comprehensive view of enterprise performance
■ Novell Partner Portal—Securely opens information and applications to trusted partners, suppliers, and customers, enabling companies to strengthen business relationships and improve efficiency
Lower Development Cost
It should be obvious, but it's worth mentioning again, that using Linux and open source can dramatically reduce development costs for both internal programming efforts and the cost of goods for ISVs. It is possible and feasible for companies to create a viable application or service that is developed entirely using open source software, utilities, and tools. A manufacturing line management program can use Apache, MySQL, JBoss, and a Mozilla client while including some proprietary scripting or binary logic. The entire solution carries no additive royalty requirements. If the solution is for internal use, you can install as many instances of the solution as you need without incurring incremental licensing fees. If you are an ISV, license the proprietary elements and include the open source components at no additional charge.

Creating solutions with Linux and open source can provide a competitive advantage for companies in two ways. First, by including all the components for a solution, such as the application, operating system, and web services components, the solution becomes a complete package with no need to go anywhere else to make it work—"batteries are included." Second, if you are an ISV, by supporting your customers on open source, their entire solution will be less expensive without the requirement to license an operating system, database, or client.

Finally, developers who have adopted Linux as a development platform and utilized open source services and tools have found the transition effort to be minimal. As mentioned, a large collection of existing services and tools are easily combined to create web-based applications. These include MySQL, the leading open source database; scripting tools such as Perl, PHP, and Python; Apache Web Server; application servers such as JBoss and Tomcat; and all the supporting open standards technologies such as XML, SOAP, UDDI, and more. In addition, thousands of existing open source projects are available from sites such as http://www.hotscripts.com, http://www.sourceforge.net, and http://forge.novell.com that can be combined or leveraged for development.

Programmers who have developed in C or Java can transition to Linux with minor (if any) adjustments. It has been interesting to note at Novell that the level of enthusiasm for development has increased because Linux, as a platform, lends itself to clean, elegant solutions. Developers enjoy creating solutions that can be leveraged and scaled, and that are valuable across a wide range of uses. With the Linux architecture, modular packages, and the collection of web services available, developers can create libraries of reusable components that contribute to current as well as future projects. In a few words, development on and for Linux is cool. Other ISVs and Novell customers who have adopted Linux and open source as a development platform and environment have had a similar experience.
Power Workstations
Every organization has at least a few power users, individuals who are obsessed with wringing every possible ounce of capability out of technology. It might be the finance guy who is building complex pricing models to determine the optimal rate of return. It might be the marketing person who is mixing and massaging model after model to determine who the target customer is and what his buying habits are. It might be the engineer who links together spider webs of structure for finite element analysis. Whatever their jobs, these people want flexibility, power, and control, and they usually have the technical skills to implement whatever they can find.
Your job as IT manager should be to liberate these individuals. Support them with the services that they need but don't want to worry about. You make sure that they have connectivity, storage, and backups, and that they are secure from unauthorized intruders, protected from viruses and outages, and more. At the same time, you have to ensure that all other IT assets are protected.
So what's the advantage of Linux and open source to these "power users"? These advantages can be broken down into four categories: power, flexibility, programmability, and cost.
Power
Linux provides power workstations with some hefty performance capabilities. For starters, applications written to run on Linux or using any of the open source solutions, such as databases, have some great scaling options. These advantages were mentioned previously on the server side, but power users can take advantage of all the Linux scaling options, including symmetric multiprocessing, NUMA, clustering, SANs, and more. In addition, a wide range of workstation scalability options exists, from the x86 hardware platform to 64-bit systems, and even mainframes if needed. Getting more power can be accomplished incrementally as needed without the cost of a complete changeover. Linux performance gives power users the capability to do more with less, but then they usually end up doing the most with the best available.
Flexibility
Linux provides power users with a broad range of flexibility in terms of creating solutions and sharing results. Multiple machines can be harnessed together for combined computation or for shared information. The capability to access one or many machines simultaneously, regardless of whether they are local or remote, using the same X Windows interface can be very powerful. With a wide assortment of available services and utilities, plus the selection of open source projects, a power user can often find solutions that fit her needs, or collaborate with others who are working to solve the same types of problems.
In addition, sharing or collaborating using Linux and open source is very flexible and secure with FTP, web servers, password-protected access, and more. Power users can employ the same techniques used by system administrators to grant or restrict access to collaborators, partners, or other groups within the company. The pricing wizard can securely provide access to marketing so they can game out scenarios against projected sales numbers given specific pricing conditions.
Programmability
Linux and open source provide a huge toolbox of development aids, ranging from simple scripting to as-complex-as-you-want-to-make-it object-oriented programming. Power users often need the capability to modify input or output. They also need the capability to share information with interested stakeholders. Open source gives them that capability. Results can be displayed using a web-accessible database. Analysis can be shared using a web server. Automated processes can be implemented, with notifications for updates or changes being sent to one person or thousands. Powerful scripting gives users the capability to manipulate databases, control trial scenarios, produce iterative tests, and much more.
Novell customers who have adopted Linux have commented that although Linux appears more complex than Windows at the outset, once learned it is not only simpler, but much more powerful as well. The modular, distributed, open, web-based nature of Linux and open source solutions provides a much richer development environment that can be leveraged across platforms, locations, and diverse applications.
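To make the scripting scenario concrete, here is a minimal sketch (not from the original text) of the kind of automation described above. It uses Python with the standard sqlite3 module standing in for a shared MySQL database; the table, column, and file names are invented for illustration, and the generated HTML page could then be published by a web server such as Apache.

import sqlite3

# Build a small throwaway database standing in for a shared MySQL instance.
conn = sqlite3.connect("pricing.db")
conn.execute("CREATE TABLE IF NOT EXISTS scenarios (name TEXT, rate REAL, projected_return REAL)")
conn.executemany("INSERT INTO scenarios VALUES (?, ?, ?)",
                 [("baseline", 0.05, 1.02), ("aggressive", 0.09, 1.11), ("conservative", 0.03, 0.98)])
conn.commit()

# Pull the results and publish them as a simple HTML page that any
# web server could serve to interested stakeholders.
rows = conn.execute("SELECT name, rate, projected_return FROM scenarios ORDER BY projected_return DESC")
with open("report.html", "w") as report:
    report.write("<html><body><h1>Pricing scenarios</h1><table border='1'>\n")
    report.write("<tr><th>Scenario</th><th>Rate</th><th>Projected return</th></tr>\n")
    for name, rate, projected in rows:
        report.write(f"<tr><td>{name}</td><td>{rate:.2%}</td><td>{projected:.2f}</td></tr>\n")
    report.write("</table></body></html>\n")
conn.close()

A cron job could rerun a script like this nightly so the shared report stays current without any manual effort.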
Cost
By now a consistent theme, Linux with open source is a low-cost alternative. Even if specialty applications are proprietary, chances are the overall costs on Linux are lower because of reduced hardware requirements and lower licensing fees. Developing in-house applications can be less expensive, and those applications can be leveraged without added copyright requirements. If your power users are the type who are constantly looking to get the most out of technology, there's a good possibility they have already installed a dual-boot option and have experimented with or are using Linux!
Summary
This chapter covered why an open source solution can succeed in the real world, and offered a range of examples illustrating why this is the case. The next chapter introduces NetWare and helps you understand where OES, the open source solution from Novell, comes from.
CHAPTER 4
A Brief History of NetWare
To truly understand Novell Open Enterprise Server (OES), you must know something of its heritage. By understanding what has come before it and the evolution that NetWare has undergone, you can understand what a truly innovative product OES is. This chapter focuses on NetWare for those who have not worked with it before. It looks at the history of the network operating system and the history of Novell itself. You'll learn about the different versions, the people who made it what it is, and some of the tools used to administer it.
The History of NetWare
In 1979, a company bearing the moniker of Novell Data System came into being in Utah. In 1981, this company hired three Brigham Young University graduates for a short-term project to come up with a way to network CP/M computers so the company could sell more hardware. Those three were Drew Major, Dale Neibaur, and Kyle Powell. Together, the three—known as the SuperSet—came up with a solution that served its purpose and set the company on a completely different path than it was expecting. The short-term contract employees stayed on with the company and helped it grow.
In 1981, the IBM PC began to show up in the marketplace, and the small company in Utah first focused on networking it with CP/M machines so it could continue to sell them. This led to the idea of creating a method to network all types of computers together, not just PCs and CP/M machines.
In 1983, Novell Data System changed its name to Novell and began the move away from the not-so-profitable hardware business. Ray Noorda joined the company as president and the push to become a networking company was in full force. The first version of NetWare focused on a file server arrangement, wherein one central computer is responsible for controlling user access to files and resources.
As PCs rapidly grew in acceptance in the 1980s and spread throughout corporations, the need to connect them together into local area networks (LANs) grew as well. Novell NetWare became the de facto network operating system (NOS) of choice in most implementations. Throughout the 1980s and early 1990s, the NetWare versions incremented through the 2.x and 3.x numbers as more features and modularity were added. The modularity was made possible by including a plethora of software components that could be loaded and unloaded as needed. Called NetWare loadable modules (NLMs), these provided features to the server on an as-needed basis.
All versions of NetWare up to and including the 3.x series used a flat-file database structure, known as the bindery, to hold all user and resource information. Although this worked well in corporate LANs, it limited the size to which the network could grow. Determined to break down the barriers to the enterprise, Novell released NetWare 4 in 1993. NetWare 4 did away with the bindery and offered the NetWare Directory Services (NDS) in its place. NDS (which came to stand for Novell Directory Services in subsequent versions) distributed the database of users and resources to servers across the enterprise, and essentially made the world its network. It was now possible for a user in one office to access resources on a server in another office without needing to log in to each server separately. NDS was the forefather to eDirectory, which now serves the same purpose, only on a much grander scale. eDirectory first appeared in 1999 and serves as a cross-platform directory service.
Realizing that the Internet was a tool for networks worldwide, Novell decided in the mid-1990s, under the direction of John Young, to start moving away from the proprietary IPX/SPX protocol it was known for, and embrace TCP/IP for networking. NetWare 4.11, called IntraNetWare, offered TCP/IP as an option. NetWare 5 standardized upon TCP/IP by including native support for it. NetWare 6 expanded upon this acceptance by including more utilities aimed at using the Internet as a tool for maintaining the network through a browser from any web-enabled device.
Lastly, NetWare 6.5 expanded upon version 6 and offered the capability to deploy open source technologies across the entire organization and manage projects through the browser-based interface. This version focused on adding value to the virtual office, maintaining business continuity, and offering the most powerful web services available. "One Net" was the mantra around which this version was created.
On the corporate side, Jack Messman became CEO of Novell for the second time in 2001, following Eric Schmidt, who took over in 1997. Mr. Messman continues to lead the company today, as it has now become a leader in the open source movement and prepares to release its most powerful operating system to date.
NetWare for the Uninitiated
To understand what Open Enterprise Server offers, it is helpful to have some understanding of the environment from which NetWare has evolved. Versions of NetWare prior to 4 used a great many command-line tools and simplistic menus. Version 4 introduced graphical tools for administration, and many of those tools evolved into the browser-based utilities that exist today. Many of these tools are referenced in subsequent chapters, but the following lists summarize some of the troubleshooting tools and utilities you might have to use as you prepare your environment for OES.
Management Tools
The following tools can help you troubleshoot and manage the network:
■ ConsoleOne—This Java-based graphical tool allows you to quickly and easily administer network resources.
■ iManager—This web-based application can be used for managing, maintaining, and troubleshooting eDirectory from within your browser interface.
■ iMonitor—This tool offers you monitoring and diagnostic capability from within a web browser to all servers in your eDirectory tree.
■ Remote Manager—The easiest way to think of this tool is to imagine that you are sitting at the server console. Remote Manager gives you all the functionality that would be available at the server console within a web browser.
Client Tools
The following tools—predominantly console commands—can help you troubleshoot issues with networking clients:
■ ARP—Displays the Address Resolution Protocol cache. This is useful for identifying MAC addresses.
■ IPCONFIG—Displays the client IP address, subnet mask, and default gateway for each network adapter bound to TCP/IP. In some Windows-based operating systems, this information can be obtained through WINIPCFG instead of IPCONFIG.
■ NETSTAT—Displays details for the protocol operations of the client's TCP/IP connection.
■ ROUTE—Shows the routing table used by the client. A number of parameters can be used with the utility to add, delete, and modify entries in the routing table.
IP/IPX Tools
The following tools—predominantly console commands—can help you troubleshoot issues with the IP and IPX protocols (a short example of scripting some of these checks follows the list):
■ CONFIG—Returns information about the file server, including name, loaded drivers, and so on.
■ DEBUG—Helps you identify and resolve TCP/IP communication problems through a number of DEBUG screens.
■ NSLOOKUP—Queries Domain Name System (DNS) name servers to enable you to identify your server's DNS configuration, diagnose DNS setup problems, and identify DNS problems in a server application.
■ PING—Checks to see whether a remote host is accessible.
■ TCPCON—Monitors TCP/IP operations. This server console utility can show detailed information on the status of network segments, protocols, and routing tables, among other things.
■ TRACERT—Shows the path taken to reach a remote host.
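Commands such as PING lend themselves to simple automation. The following is a minimal sketch (not from the original text) of how a workstation-side script might check reachability for a list of servers; it assumes a Linux or Unix client where ping accepts the -c flag, and the host names and addresses shown are placeholders.

import subprocess

# Hosts to verify; the addresses here are placeholders for your own servers.
hosts = ["192.168.1.1", "server1.example.com", "server2.example.com"]

for host in hosts:
    # "ping -c 1" sends a single echo request (on Windows the flag is -n).
    result = subprocess.run(["ping", "-c", "1", host],
                            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    status = "reachable" if result.returncode == 0 else "NOT reachable"
    print(f"{host}: {status}")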
Why Novell and Open Source
Novell is the open source alternative with the most complete stack for mixed, open, and proprietary IT. The combination of Linux, open source technologies, and Novell products provides the best collection of solutions for matching real-world IT needs. Novell's dual-source strategy spans the operating system through directory services to secure identity management from desktop to data center. With experience in migrating to Linux and open source, both with customers worldwide and internally, Novell has developed proven solutions, best practices, and consulting expertise to help any business identify their best opportunities and successfully migrate to Linux and open source—whether it's a single server or the entire organization.
Here are the top 10 reasons why Novell is the best choice as a partner to support your migration:
1. SUSE Linux technology is the most secure and stable, enterprise-ready operating system for mission-critical applications. Novell is the only company offering retail Linux products, including SUSE LINUX Enterprise Server, SUSE LINUX Professional for power users, and the Novell Linux Desktop.
2. Novell has "full stack" services that sit on top of the operating system. Services developed over the last 20 years to work on NetWare—including file, print, messaging, collaboration, directory, resource management, security, and application servers—now work on Linux as well.
3. Novell has a worldwide technical support organization offering 24/7/365 support—a relief to CIOs who want to know who will be there to back them up if problems occur. Novell has 650 Linux-trained support people—more than the entire employee base of other Linux distributors.
4. Novell offers true indemnification—not just the warranty protection offered by other distributors. The Novell indemnification program is backed in part by unique contractual and intellectual property rights owing to the ownership chain of Unix and UnixWare.
5. Novell has a large consulting staff with the credentials to help customers design their IT strategies to take advantage of Linux. The Novell migration plan and solution methodologies are based on real experience and solve not only Linux-related problems but business problems as well.
6. Novell service contracts allow customers to buy only as much technical support as they need and to integrate Linux support with their other support needs.
7. Novell already markets a rich set of desktop Linux products from both SUSE LINUX and Ximian.
8. Novell has more resources and talent focused on delivering enterprise-class Linux and open source technologies than any other vendor.
9. Novell has the only channel program in the Linux market. It includes more than 4,500 active channel partners, more than half of which support Linux.
10. Novell offers Managed Services for Linux, which will help customers reduce IT costs while adding the value of Novell expertise in Linux administration.
With Novell, choice is always an option. IT managers are wholly supported in implementing solutions that are open source, closed source, or a mixture of both—dual source. As open source solutions and the community that provides them progressively evolve, Novell will continue to provide the support ecosystem, the management tools, and the proprietary complements that help organizations safely and economically provide the most effective IT services.
Summary
This chapter offered an overview of the history of Novell and NetWare for the purpose of introducing administrators to the predecessors of OES. It introduced some of the tools that exist within the operating system as well as on individual clients. To learn more about different versions of NetWare and their administration, you can visit the Novell Press site at http://www.novell.com/training/books/. Click the View All Titles link and you'll find an extensive list of books that can help you in every aspect of your network—from clients to clusters.
CHAPTER 5
The Rise and Reason for Open Enterprise Server (OES)
Up to this point, this book has spent a great deal of time looking at two different worlds that exist in enterprise networking today: NetWare and the open source solution Linux represents. Open Enterprise Server (OES) synergistically merges the best of both worlds into one, and in so doing, creates a new product that surpasses anything else available for the enterprise today. This chapter looks at what OES offers and why it represents the best solution to consider.
What OES Offers
The preceding chapters examined the histories of NetWare, Linux, and open source. When OES is installed, it offers you access to a great many utilities and services. In most cases, these services and utilities are available regardless of the operating system on which you are installing OES. In some cases, however, certain features are available in only one operating system and not another. The following sections list the utilities and services available based upon the operating system on which OES is installed.
Services and Utilities in Both Operating Systems
The following services and utilities are available in OES regardless of the operating system on which it is installed:
■ Apache Web Server
■ Cluster support
■ Console Utilities
■ DNS/DHCP services
■ eGuide
■ FTP Server
■ iFolder
■ iManager
■ iMonitor
■ iPrint
■ iSCSI support
■ Java Virtual Machine (JVM)
■ Multiprocessor support
■ MySQL Database
■ NetDrive Client
■ NetIdentity
■ NetStorage
■ NICI
■ NICI Client
■ NMAS
■ NMAS Client
■ Novell Client
■ Novell eDirectory
■ Novell Identity Manager
■ Novell Remote Manager
■ Novell Storage Services (NSS)
■ OpenSSH
■ Perl and PHP
■ Quickfinder
■ Storage Management Services (SMS)
■ Tomcat Servlet Engine
■ Virtual Office
Services and Utilities in Linux
The following services and utilities are available in OES only when it is installed on Linux:
■ Linux User Management
■ NCP Server
■ Open WBEM CIMOM
■ Package Management
■ Samba
Services and Utilities in NetWare
The following services and utilities are available in OES only when it is installed on NetWare:
■ Audit
■ ConsoleOne
■ DFS
■ Native File Access
■ Nterprise Branch Office
A Transition Strategy for the Data Center
The "current state" panel, shown in Figure 5.1, represents a typical configuration for many of Novell's customers today.
FIGURE 5.1 A transition strategy. [Figure: a three-panel diagram contrasting the Current State, Intermediate State, and Future State of the data center. The current state shows NetWare 4.x/5.x with NDS, Windows NT/200x domains with Active Directory, and Unix servers (Solaris, SCO, AIX, HP-UX) with NIS, each running its own edge, web, file, print, database, email/Exchange, management, and application services. The intermediate state adds NetWare 6.5 with Nterprise services and SUSE Linux Enterprise Server running Nterprise Linux Services 1.0, tied together by eDirectory. The future state consolidates these services as Open Enterprise Services on Linux and NetWare kernels, unified under eDirectory, with remaining Windows-only applications on Active Directory.]
The data center consists of a heterogeneous mix of operating systems, network services, and applications—often with little if any integration between systems. By and large, the data center is a result of an evolution and merging of technologies over time and, as such, includes solutions that have evolved from the desktop and workgroups as well as from mainframe and centralized computing services.
In this environment, the primary objective for most CIOs is to consolidate, simplify, and reduce in such a way that service is maintained or enhanced while costs are maintained or reduced. The powerful potential of Novell solutions in the data center is that Linux and open source technologies enable simplification and consolidation without diminishing flexibility, manageability, scalability, or security within budgets that are equal to or less than current costs. IT can provide more without sacrificing functionality, serviceability, or potential.
The next step then becomes implementation—how to actually do it. Novell Consulting has done it—many times—for enterprise companies around the world. Feel free to use the steps outlined here as a framework for your own transition plan, or Novell Consulting is always available if you want to work with an experienced hand.
Initiate the Project
A good transition will start with the formation of a project steering committee. This consists of executives, IT managers, and line-of-business managers who have vested interests in the benefits of a successful transition strategy. Up front, committee members should be able to understand and articulate business objectives, project charter and administrative procedures, corporate communication plans, change control and risk management procedures, acceptance criteria, and critical success factors.
This committee should conduct or commission an assessment study on Linux and open source readiness that takes into consideration the following factors:
■ How well a Linux migration fits into your organization strategy and how ready your environment is for a Linux migration
■ What the stakeholder and business unit requirements are (business, technical, and political)
■ Where to start—which servers, services, and applications it makes sense to migrate, short- and long-term
■ Whether Linux and open source services and applications are available that meet your needs and provide comparable (or better) features—and if not, what alternatives might need to be evaluated (terminal services or OS emulation, for example)
■ How servers, services, and applications, along with associated data and business processes, can best be migrated
■ How any interdependencies among services and applications, desktop clients, or other software might affect your migration plan
■ Where hardware and service consolidation opportunities exist
■ Whether your staff has the requisite skills to support a Linux environment, and what support capabilities for maintaining and supporting Linux, over time, need to be considered
Through this assessment, an implementation road map can be created that outlines migration prerequisites, staging lab setup and testing, transition procedures, and acceptance criteria.
Planning and Design
Every migration plan must accommodate the unique nature of each organization, but the following general activities should be understood with procedures outlined:
■ Server migration path—Design a server OS migration path from Unix, Windows, or NetWare to Linux. This includes hardware selection, installation/migration tools, management tools, and so on.
■ Edge services—Design a Linux networking services architecture, including DNS/DHCP, remote services (VPN), clustering, high availability, virus protection, and backup services.
■ Networking services—Design a basic Linux services architecture, including file and print, identity, directory, messaging and collaboration, and web services.
■ Application architecture—Design an application architecture that considers whether Linux native applications or Linux alternatives, OS emulation, or terminal services best meet your application needs. Solutions might include database systems, web services solutions, Java applications, Linux-supported ERP systems, line-of-business applications, or any combination of these.
■ Directory environment—Design a Linux-compatible directory environment (Novell eDirectory or open source LDAP) taking into account geographies, administration delegation, branch/remote offices, cross-platform/cross-service authentication, and so on (a brief LDAP query sketch follows this list).
■ Management tools—Identify management tools and applications that meet the needs of your specific environment.
■ Training—Design a training plan that takes into account the existing level of Linux training and certification among your staff.
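Because the directory is the piece every other service authenticates against, it can help to see what cross-platform, LDAP-based access looks like in practice. The following is a minimal sketch (not from the original text) using Python and the python-ldap package; the server address, credentials, tree layout, and user name are placeholders to be replaced with values from your own eDirectory or OpenLDAP environment.

import ldap  # provided by the python-ldap package

# The server address, credentials, and tree layout below are placeholders;
# substitute the values for your own eDirectory or OpenLDAP environment.
conn = ldap.initialize("ldap://ldap.example.com:389")
conn.simple_bind_s("cn=admin,o=acme", "secret")   # authenticate as an administrative user

# Look up a user object so another service (Samba, a web application, and so on)
# can authenticate against, or read attributes from, the same directory.
results = conn.search_s("o=acme", ldap.SCOPE_SUBTREE,
                        "(cn=jdoe)", ["cn", "mail", "groupMembership"])
for dn, attrs in results:
    print(dn, attrs)
conn.unbind_s()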
After the migration plan is complete and validated with the steering committee, the proposed design should be tested in a staging lab and revised as necessary. Based on the lab experience, a detailed migration timeline and work breakdown structure are developed. Anticipate the next phase by developing a plan for limited-production pilot testing, including environment setup, test scripts, acceptance criteria, and a rollback plan (safety net).
Deployment
After training and pilot testing are completed, it's time for rollout and deployment. If your organization is similar to that depicted in Figure 5.1 (earlier in this chapter), with mixed Windows, Unix, and NetWare systems, the deployment sequence could start with unifying directory and authentication systems and then continue in the following order.
DIRECTORY/IDENTITY MANAGEMENT
Starting with a common directory provides a security and management framework that will be the foundation of everything going forward. Novell eDirectory simplifies management of users, groups, roles, and identities across the enterprise, and provides integrated management across hardware and software resources. Because eDirectory runs on Windows and Linux as well as NetWare, these systems can be managed through eDirectory in the current configuration, with future options open to easily consolidate them to a single platform. eDirectory trees, organizational units, or groups can all be tied together, regardless of the underlying operating system.
EDGE SERVICES
Edge services are easy Linux/open source transition targets because they affect end users only indirectly and can be easily cut over or cut back if the need arises. Target services for early deployment include proxy/cache, DNS, DHCP, static web servers, load balancing, content filtering, intrusion detection, virus protection, and VPN.
WEB SERVICES
Web services also have less impact on end users because the only client portion of a web services solution is the browser (and maybe a few plug-ins). Linux and open source provide an abundant collection of services in this area that can be used to create advanced solutions that affect employees, customers, partners, and suppliers. These backend services can be implemented and rolled out by simply supplying a URL.
DATABASES
Similar to edge and web services, databases are backend solutions that can be transferred with minimal impact. The new generation of Linux-supported, web-accessible databases simplifies migration in this area. In some cases, stored procedures or database structures must be modified, but MySQL or Oracle on Linux can accommodate practically any database requirement.
FILE/PRINT SERVICES
Transition in this area does affect users, as most users store data to the network or access file and print services. With eDirectory as a foundation, access control lists and permissions are retained when files, directories, or volumes are transitioned to a new platform. Migration utilities from Novell that move files from Windows, Unix, and NetWare to Linux can assist in this area, making the network services transition transparent.
MAIL AND COLLABORATION
Depending on your existing mail and groupware services, this area might or might not be an easy migration. For organizations that want to leave Microsoft Exchange in place, tools such as the Evolution client with the Evolution Connector allow them to run a Linux/open source client and still access Exchange. If the collaboration solution of choice is GroupWise, the GroupWise Migration Tool is available for consolidating to a single solution running on NetWare or Linux. Moving to directory-based management simplifies administration, and having incremental scalability at the post office/MTA back end is also a definite advantage. In merger and acquisition situations, a complete migration might be necessary, with the need to convert centralized message stores and individual client message stores.
APPLICATIONS
With edge, network, database, file, print, and collaboration services all transitioned, the final category for migration is applications. The applications to be migrated are a function of organizational needs and application availability on Linux. As mentioned previously, some applications might not be available on Linux in the near future, and a coexistence strategy must be followed. If this is the case and the applications are on Windows, you can still take advantage of common directory authentication using eDirectory and centralized management through ZENworks and iManager.
ONGOING SUPPORT
At any point during the transition process, Novell is available to assist organizations that need help. Technical support is available online, over the phone, in person, onsite, or through network partners. Training is available on Linux as well as the entire Novell product line. Novell Remote and Managed Services are available to proactively monitor and manage all Novell technologies. Novell Consulting can come to your organization and do any or all of the tasks outlined in this section. In addition, the open source community continues to be a vibrant collection of information and training on Linux and open source solutions.
Transitioning to Linux and open source in the data center provides all the advantages discussed so far, such as the following:
■ Administration—Management is simpler because it is directory-based, web-based, consistent across all systems, and enhanced through powerful tools, such as eDirectory, iManager, YaST, and RPM.
■ Scalability—Do more with less hardware, or do bigger things with the hardware you have. Clustering, the capability to create storage systems, upgrading to more powerful hardware while using the same platform and applications, incremental additions, load balancing—all these technologies and more allow you to easily scale to meet new or periodic demands.
■ Flexibility—You have one platform for everything—file, print, applications, edge services, web services, and so on—with the capability to mix and match, combine, fail over, and any other required tasks.
■ Security—The platform and application architecture is inherently more secure, less of a target for disruption, and more easily made safe.
■ Cost—Lower licensing fees, lower hardware costs, reduced administration and management costs, lower application costs, cheaper development—all these combine to make Linux and open source a viable and preferred alternative at the data center.
A Transition Strategy for the Desktop
Moving completely to Linux and open source at the desktop is viable, feasible, and practical. Novell Consulting has defined an established approach and refined methods for migration, and the process has been successfully tested with Novell and other customers. This section summarizes a general transition plan for moving from a proprietary desktop to an open source desktop and details some of the tools that are available for doing so.
"Proprietary desktop," by and large, refers to Windows—everything from Windows 95 (which still exists in some organizations) to Windows 98, Windows Me, Windows XP, Windows NT, Windows 2000, and even Windows 2003. The differences between these desktops can be significant, and if organizations are working with multiple Windows versions in-house, the opportunity to consolidate to a single platform that can be centrally and uniformly managed is valuable. Some organizations also might be running the Sun desktop, which has some of the same management benefits of an open source desktop, but not the flexibility.
A transition plan should, in most cases, consist of a phased approach. A complete and abrupt change of platform, applications, and services can introduce so many variables that user productivity grinds to a halt. In addition, IT management and support staff can be overwhelmed with help desk calls, requests for training, and resource access emergencies. In general, at least two phases should be considered, and these should be differentiated by applications and operating system services. The order in which these two phases should be implemented depends on the needs and requirements of your organization, but in most cases, the open source applications will be implemented first.
Office Productivity Applications
Office productivity software is the most prevalent application on the desktop for most organizations—word processing, spreadsheet, presentation, and graphics software—followed by web browser software and email clients. All of these desktop applications are available as open source solutions with versions that run on both Linux and Windows. This allows you to phase the implementation of open source by leaving the existing operating system in place—complete with network resources, security systems, and hardware—while deploying office applications together or in a phased approach.
Start with whichever application you think will serve as the best trial. Novell began with OpenOffice.org, but Mozilla—with its web browser and email client—is also an option. The user interface for OpenOffice.org is very similar to Microsoft Office with common drop-down command menus, text formatting, file control, printing, and so on. Even users who have limited understanding of productivity applications but are familiar with Windows navigation will have minimal difficulty using OpenOffice.org software. Mozilla Firefox provides a web browser with a similar look and feel to Microsoft Internet Explorer, with added features such as a pop-up blocker, mail notification, and advanced media control. On installation, Mozilla also imports all relevant information from the old Internet Explorer, such as favorites, passwords, cookies, browsing history, and more—making the migration close to seamless from an end user's perspective. The look and feel of these open source solutions running on Linux is nearly the same as the look and feel on Windows. This makes it easy and seamless to transition users without any application learning curve when the underlying operating system is eventually replaced.
It's worth pointing out the desktop management tools available from Novell that are particularly useful for migrating desktop applications. These types of activities are what Novell ZENworks Desktop Management was designed to simplify. Using ZENworks, administrators have the ability to inventory existing desktop software and hardware. In addition, they can push new software (such as OpenOffice.org or Mozilla) to the desktop, enabling it to be installed, checked for functionality, and remotely controlled if necessary. Using Novell ZENworks, it is entirely feasible to install OpenOffice.org with icons placed on the desktop and ready for use on hundreds of workstations in a matter of a few hours. An entire company could be enabled for open source application use over a weekend.
Thin-Client Applications
We've discussed at length web services applications and how open source technologies, Java, databases, scripting, and other tools enable the creation of business applications that require only a web browser at the desktop. These thin-client applications are obvious candidates for initial migration. Web browsers work on either Windows or Linux and don't vary significantly from browser to browser. Thin-client applications might be completely sufficient for organizations with single-use applications, such as call centers, shop floors, or inventory management.
The major consideration for thin-client applications at the desktop is the requirement for browser plug-ins. Applications might require a Java runtime, media players, display enhancements, or special thin-client application plug-ins that have been developed for specific applications. Many plug-ins can be automatically installed when first connecting to the backend application, and they can also be configured through workstation management tools, such as ZENworks, YaST, or RPM.
Business Applications
As rapid and as widespread as the open source movement is, there are still many areas in which valuable proprietary applications are not available on Linux—nor will they be for some time. IT managers need to accommodate the users of these applications in the best way possible, and they have several options for doing so. The policy at Novell (see Appendix A, "Open Source Case Studies") in this area involves first determining whether there is a native Linux version of the application available. If not, is there a Linux application that provides equivalent functionality? If the answer is no, can an interim solution be created? This could involve dual-boot systems with both Linux and Windows, Windows emulators, or terminal services solutions. If none of these options proves viable, a coexistence strategy is implemented, often with open source applications running on Windows in addition to the proprietary software and the entire workstation still being managed using ZENworks.
The Linux Desktop
The second phase of open source workstation transition is the replacement of the desktop operating system. Depending on how "networked" your organization is, this phase might include the critical work of ensuring that users stay connected to valuable network resources and that authentication systems remain in place to secure digital assets.
Multiple methods exist for actually implementing Linux at the desktop. Technicians who implement Linux manually will see very little difference between a SUSE Linux and a Windows install. The Linux operating system can be installed from CD media or the network; autodetection is comprehensive and discovers all attached storage, peripherals, and hardware devices; the install screens are graphical and intuitive; and the resulting desktop is icon-based with application, system, and hardware management utilities.
Like the applications mentioned previously, ZENworks can also be used to install Linux. In addition, tools such as YaST and RPM are extremely flexible in allowing you to initiate boot sequences and customize builds based on predefined rules as to how a machine will be installed and configured. Because applications are, in essence, packaged like other portions of the Linux operating system, application functionality can be included with an initial installation. Again, all of this can be handled remotely without the need to physically configure each workstation.
Much of what we've discussed outlines open source networking possibilities. In reality, a well-planned transition can be seamless to end users. Network files remain in the same locations—appearing as NetWare or Windows files. Printers are still accessible with technologies such as iPrint, CUPS, or Samba. Authentication and login remain the same using LDAP or eDirectory. If adequate planning has been done and the data center transition has been coordinated, migration at the desktop can be relatively transparent.
It might be appropriate to upgrade or integrate new hardware at the time of a workstation migration to Linux. A major Linux advantage is that it works on a broad range of hardware. An organization that has a mix of hardware from different vendors or is using different chipsets can still provide a consistent set of productivity tools to users, regardless of hardware. Novell SUSE Linux supports many desktop processor types, including x86, AMD64, Intel EM64T, Intel Itanium, and IBM POWER.
Transition Considerations
Several variables should be considered to determine the feasibility of a full or partial desktop transition to open source and Linux. These include application use, sophistication of users, and the future strategic direction for desktop applications.
APPLICATION USE
If the desktop applications in use are specialized, or different applications are in use by different groups of users, a full transition might be less desirable. For example, if marketing is heavily using Macintosh with the full Adobe suite of applications and engineering is entrenched in AutoCAD, a migration to Linux is probably out of the question. On the other hand, if knowledge workers (like the 90% in most organizations) can productively complete their jobs using standard office productivity word processing, spreadsheet, presentation, and simple graphics programs, or can effectively collaborate and access applications through web-based solutions, there is no reason not to transition to Linux. Each organization can map its position with regard to application use and determine whether a proprietary operating system with proprietary applications, a coexistent mix of open source and proprietary applications, or a complete Linux/open source solution is best.
SOPHISTICATION OF USERS
Linux and open source solutions can most easily be deployed at both ends of the sophistication spectrum. For unsophisticated users whose work consists of simple, repetitive tasks performed through a consistent interface, processes are easily transitioned to open source. Using web services and browsers, management is simplified and end user involvement is minimal. Transition to Linux is also simplified with power users—those familiar with installing and uninstalling applications, learning new software, and even data manipulation through database queries, scripting, and programming. In many cases, these users might have already experimented with Linux or open source solutions, and transition efforts will be minimal. A good transition plan accurately assesses the sophistication of users in the middle area and determines what steps should be taken to ensure a smooth transition. This might include awareness, management directives and education, group or individual training, online or printed resources, and more. Novell Consulting has developed detailed and methodical transition plans that are available to customers interested in migration.
STRATEGIC DIRECTION
As the case study on Golden Gate University (in Appendix A) illustrates, a three-, five-, or ten-year plan can be extremely important in determining the value of a contemplated migration. For GGU, strategic plans requiring that all future IT solutions be based on Java and Oracle strongly weighted factors in favor of Linux and open source. If your company is trending toward web services, grid computing, mobile workforces, integrated customer/partner/supplier networks, or any number of other common access/distributed solutions, Linux and open source at the desktop will help position you to more quickly adopt these technologies.
Approaches for the Desktop Knowledge Worker
Although many IT managers might enjoy their jobs more without them, users are the reason IT exists in the first place. IT is a support extension to the thoughts, actions, and activities of those who execute the purposes of an organization. As such, the mission of IT should be to provide the best environment and collection of tools to facilitate business processes and organizational communication.
What can Linux and open source contribute to the general knowledge worker? Plenty! Let's assume that there are two common desktop scenarios that exist within organizations, which generally standardize on one or the other. The first is a desktop based on Windows or Macintosh, and the second is a desktop configured with predominantly open source technology. Advantages available through Novell in the first scenario include web-based access to common resources through Novell Virtual Office and the availability of an open source office productivity suite. In the second scenario, using Novell Linux Desktop provides users with a complete open source workstation environment, including a comprehensive collection of desktop solutions and centrally controlled software updates—all at lower cost.
Proprietary OS Desktops
The growth of Linux on the desktop is still in the early stages in most organizations, with several common reasons for growth to proceed slowly. These include the fact that there are more established desktop applications for Windows or Macintosh than there are for Linux. The user population in general is familiar with these applications and reluctant to change unless there is a clear perceived benefit. Users are also commonly partial to one operating system or the other because of the time invested to understand and use it. For many organizations, existing desktop operating systems are considered capital investments, with the cost to replace them increasing as users become more educated and productive in a particular environment.
For this reason, Novell fully supports Windows and Macintosh and provides several management and access solutions that enable knowledge workers to be more productive. These include user-provisioned desktop and web environments based on user identity, group membership, location, and organizational policies. Novell technologies that play a major role in desktop management include ZENworks Desktop Management, Nsure Identity Manager, and Novell Virtual Office.
ZENWORKS DESKTOP MANAGEMENT
ZENworks was previously discussed in terms of software management for both servers and workstations, but this section points out its usefulness as a desktop control agent. One of the major concerns of IT management is keeping a handle on what actually gets installed on user workstations. With common high-speed access to the Internet, it only takes a matter of seconds for a user to locate, download, and install an application, utility, or update that could have devastating effects not only on that particular user's workstation, but also on the entire network. ZENworks Desktop Management acts as a monitoring and control center for Windows workstations, providing administrators the capability to limit what goes on a workstation, add to a workstation, or delete and repair workstation software. ZENworks features include configuration differencing, in which a workstation's configuration is periodically captured and compared to earlier configurations to see what has changed. This difference-checking, combined with patch management and remote control, helps keep knowledge worker data safe while providing users with the resources they require. ZENworks services are integrated with Novell eDirectory, and as such are available to manage a single workstation, groups of workstations, or the entire enterprise based on policy-driven profiles. ZENworks aggregates workstation diagnostics for a comprehensive real-time view of all the workstations on the entire network. ZENworks also provides an advanced workstation solution that has traditionally only been available through Unix/Linux via X Windows—the capability to have a particular user's Windows environment follow him from workstation to workstation. Based on user identity and workstation configuration information stored in eDirectory, ZENworks enables personality migration so personal and application settings for each desktop can follow users wherever they go.
SINGLE SIGN-ON—NOVELL NSURE IDENTITY MANAGER
Rarely do knowledge workers access only a single system. One of the plaguing problems for many users has been managing usernames and passwords for access to multiple resources, including email, network access, databases, enterprise applications, and more. Novell simplifies the business of managing user identity and authentication information across multiple systems using technologies such as Nsure Identity Manager. In effect, Nsure Identity Manager is a metadirectory solution that synchronizes identity credentials across multiple, disparate systems in different physical locations. The result is that knowledge workers get single sign-on, the capability to log in once and have access to all network resources and applications. Nsure helps establish business policies and identity attribute ownership so that authority groups such as HR always maintain rights to specific information. Information changed in one user database or directory is automatically reflected in all other identity repositories.
USER PORTAL—NOVELL VIRTUAL OFFICE
Companies faced with the task of providing integrated resource access to knowledge workers on dissimilar desktop platforms can do so with a combined open source/Novell solution. Novell Virtual Office is a browser-based desktop environment that provides users access to network files, email, calendaring, instant messaging, printers, document sharing, and a host of custom applications and services. These services are client operating system-agnostic and available through any standard web browser. Virtual Office also includes Novell Virtual Teams, which provides for team creation, document sharing, project planning, and more. The advantage is that users on any workstation platform (Windows, Macintosh, Linux, or Unix) can work using the same basic virtual desktop environment—from anywhere.
Thin-Client Desktops
For certain types of users (for example, transactional or call center workers), a web-based client is completely adequate. Users interact with a web services-based application, and the only required service at the desktop is a browser client. Open source solutions such as Mozilla, email clients, and RPM are abundant and adequate for implementing and managing this type of desktop. Novell technologies suitable for web-only access include Virtual Office, GroupWise, exteNd Portal, and others.
Novell Linux Desktop
For organizations interested in moving to an open source desktop environment, Novell not only has a solution, but also has shown how it can be successfully implemented. The details of Novell's internal migration to a Linux desktop environment are covered in Appendix A, but this section takes a look at what is available for creating a complete open source desktop solution.
As mentioned, Linux is suitable as an operating system for everything from watches to mainframes, and the desktop is no exception. Whether the workstation is a notebook, desk-side tower, garden-variety clone, or the latest hot processor with hundreds of gigs of storage, Linux works well as a desktop operating system. If your organization is interested in creating open source-only desktops, you can easily do so. Current distributions of Linux (including Novell SUSE Linux) include packages that autodetect and autoconfigure hardware. The KDE or GNOME graphical user interface (available with almost every Linux distribution) provides end users with a Windows-like navigation experience. File access, even across different OS platforms, is available. Web browsers, email clients, and the OpenOffice.org office productivity suite provide knowledge workers with all of the applications required by 80%–90% of average users. You can completely configure a new desktop by downloading everything from the Internet with zero costs for licensing.
Novell Linux Desktop (NLD) is an integrated desktop operating system, desktop environment, and set of desktop applications—all bundled together in a single package. Novell Linux Desktop is also tightly coupled with desktop management solutions from Novell such as ZENworks Linux Management, which provides many desktop management functions, not the least of which are software distribution and patch management. With Novell Linux Desktop, the operating system is a hardened version of SUSE Linux Enterprise Server with specific desktop features included and unnecessary server services excluded. Novell Linux Desktop provides a graphically rich and easy-to-navigate desktop environment, similar to Windows. Also included are several user productivity and collaboration applications, including Evolution (a full-featured email and collaboration client), a Novell-enhanced version of OpenOffice.org, Mozilla, several common browser plug-ins, and dozens of other applications and tools.
OpenOffice.org is an effective and complete office productivity suite with word processing, spreadsheet, presentation, and drawing software. If your user environment is mixed, or a gradual migration is in process with both Microsoft Office and OpenOffice.org, files can be shared back and forth. OpenOffice.org has the capability to read or save documents in Microsoft Office formats for Word, Excel, and PowerPoint. Mozilla is included for use as a web browser, and browser plug-ins for Adobe Acrobat Reader, RealPlayer, Macromedia Flash Player, and the Java 2 Runtime Environment are available. Tools and utilities for Novell Linux Desktop include instant messaging (GAIM), video conferencing (GnomeMeeting), 3270 support (x3270), graphics and image editing (GIMP), and much more.
Novell Evolution is the world's most popular and powerful collaboration solution for Linux and Unix systems. It seamlessly integrates email, calendaring, scheduling, contact management, and task lists. Sample Evolution features include contextual views of messages, multiple calendar views, Palm device support, and an integrated view of all personal data management. Novell Evolution fully supports open standards mail and calendaring systems, and is also compatible with common collaboration solutions, such as Microsoft Exchange, Novell GroupWise, and Lotus Domino.
Novell Linux Desktop also allows users to seamlessly access network files without regard to the file system type. Files on Windows, NetWare, Linux, and Unix NFS systems all appear with a consistent look and feel. A major advantage of Novell Linux Desktop is that it can virtually eliminate the need to train end users familiar with Windows on a new operating system. The interface (along with applications like OpenOffice.org) is designed to be so Windows-compatible in appearance that users can transition to Novell Linux Desktop with little if any effort. The migration from Windows to Novell Linux Desktop at Novell has proceeded at a pace set by the comfort level of end users, yet still has gone smoothly and quickly.
Novell accommodates and supports all common desktop solutions. Depending on the needs of your organization, you can retain Windows, include thin clients, migrate to complete Linux and open source, or support a mix of all three options. Knowledge workers are provided with the tools, services, and applications required to be productive and effective, while administration and management tasks are consolidated and minimized.
Business Appliances
Using many of the open source technologies discussed so far, it's possible to create "business appliances" that can be fully preconfigured with general or specialized applications and made ready for easy deployment to different locations. Examples of business appliances, or utility desktops as they are also called, include point-of-sale (POS) terminals, thin-client kiosks, manufacturing or shop floor terminals, email or web browsing stations, and other single-purpose clients. There are several advantages to deploying these types of solutions if they are suitable to your organization or business needs.
First, let's outline the technologies that can be used to deploy and manage thin-client solutions. These include single-purpose operating systems, network booting, thin-client management, the collection of web services that can be used to create business appliance applications, and, of course, thin-client hardware.
Single-Purpose Configurations
As mentioned previously, a Linux implementation can be "minimally" configured to provide only the services or packages that are absolutely necessary to perform a specific task or function. This minimal configuration can protect against security breaches as well as tune a system for optimal performance. With business appliances or utility desktops, a Linux implementation can be built that specifically addresses the appliance requirements. With a point-of-sale solution, for example, a customized client configuration can be created that includes bar-code scanning software, an inventory client application, and a cash register cache. This configuration would also exclude unnecessary services, such as office productivity applications or storage management. These specialized configurations or profiles can be centrally managed using technologies such as RPM or ZENworks Linux Management, which tracks and controls package inclusion/exclusion and allows administrators to keep all
utility desktops current and consistent. If new features or packages are required at the client, these can be manually or automatically pushed to the client nodes from a central site. A strong benefit of these appliance nodes is that they are enabled with a full graphical interface, thanks to the use of browser technology and web services. A manufacturing floor process program or a car parts inventory system doesn't have to be character-based or left without menus or window options.
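To make the profile idea concrete, here is a minimal sketch of how an administrator might audit an appliance's installed package set from the command line with RPM; the baseline file, package name, and workflow are hypothetical, and in practice ZENworks Linux Management would automate this comparison across all nodes.

    # Record the approved package set for the appliance profile (run once on a reference system)
    rpm -qa | sort > pos-baseline.txt

    # On a deployed appliance, list what is actually installed and compare it to the baseline
    rpm -qa | sort > current.txt
    diff pos-baseline.txt current.txt

    # Remove anything that should not be part of the profile (package name is a placeholder)
    rpm -e unneeded-package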
Multiple Boot Options
A Linux-based business appliance can boot from the network, eliminating the need for a local hard drive or locally stored operating system and applications. The appliance becomes a browser terminal with all client configuration being centrally managed. When people refer to a "thin client," this is about as thin as it gets. As the client workstation boots, it looks to a BOOTP/DHCP server for the kernel location and root file system mounting information. Centrally managed initialization scripts can handle the rest. A set of inventory terminals can all be configured identically, and the management for the configuration is performed centrally. If you need to change the configuration, for example to add a new reporting option, one change makes it available to all clients with no physical work required at each workstation.
Linux can boot from the hard drive, a CD, a floppy, the network, and even a USB device. Technology advances are being made such that USB devices can carry personal files, applications, and everything necessary for an individual to plug in and have access to her own personal environment, complete with access to local and remote resources.
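To illustrate the network-boot scenario described above, the following is a minimal sketch of an ISC DHCP server subnet declaration that points diskless appliances at a boot image; the addresses, the TFTP server, and the pxelinux.0 boot loader are assumptions for the example rather than anything specific to OES.

    # /etc/dhcpd.conf fragment: hand out addresses and point clients at a network boot image
    subnet 192.168.10.0 netmask 255.255.255.0 {
      range 192.168.10.100 192.168.10.200;   # addresses for the appliance terminals
      next-server 192.168.10.5;              # TFTP server holding the kernel and initrd
      filename "pxelinux.0";                 # PXE boot loader sent to each client
    }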
Client Management
Multiple technologies that allow administrators to centrally and comprehensively manage remote servers or clients have been covered previously. With utility desktops, this capability is critical as the majority of users will have little if any technical skill or troubleshooting expertise—client solutions must be as idiot-proof as possible. The requirement often exists for an entire collection of clients to be modified simultaneously. Tools such as RPM and ZENworks Linux Management can dramatically simplify this process.
As a quick example, have you ever noticed that your bank ATM seems to have more physical buttons on the sides of the screen than there are software options in the menu? Assume the ATM is a thin-client kiosk and it's enabled using Linux, which is managed using ZENworks or RPM. The sponsoring bank
decides that it wants to reward all customers who make a $100 ATM deposit with discounted tickets to the local pro basketball game. A software application or package could be written for the ATM and added to the existing distribution (using one of those extra buttons). The application checks the deposit amount; if it is $100 or more, a query asks the customer whether he wants tickets. If he does, he presses the appropriate button and the kiosk prints out a discount voucher, or even prints the discount tickets. The ticket utility would remain active while the ticket supply lasts and then be discontinued automatically. In this example, a client/server application is created, and the client (which is a specialty business application) is updated first to install the new service and later to delete it when it is no longer required. The range of applications for Linux-based utility clients is unlimited, and the tools to securely manage them are readily available.
Business Appliance Applications
Many of the components that are available to create business appliance or utility clients have already been covered, and they include the range of web services, such as web servers, application servers, Java, XML, and the IP network. It's important to note that by using these open source/open standards tools, services, and elements, the creation of custom or specialized business appliance applications is much simpler than it has been in the past. Development efforts can be as simple as establishing content flow and business rules and logic—the rest of an application is snapped together using available open source elements. Databases, web browsers, directories, and edge services are all available for development of thin-client solutions.
Thin-Client Hardware
Thin-client hardware comes in multiple form factors, including workstations (diskless workstations, network computers, and network appliances), personal digital assistants (PDAs), phones, and even watches. Diskless workstations are available from a number of suppliers, including Wyse, Fujitsu-Siemens, AMD, and HP. Novell and HP have worked together to bring SUSE Linux to the HP Compaq Thin Client series of workstations.
In the PDA market, Linux has become a major third alternative to Palm OS and PocketPC for handheld devices and personal digital assistants. These devices, especially when equipped with wireless access, become valuable hardware platforms for all types of specialty applications. POS terminals and data collection are just two of the possible uses. http://www.linuxdevices.com lists 30 different PDAs (all the major players are there) that are able to provide services running on embedded Linux.
Mobile phone capabilities have been expanded on Linux with the ability to accommodate applications such as PDA functions, handwriting recognition, touch screen functions, cameras, web surfing, voice recognition, text-to-speech, and even custom Java applications. Smart phones provide multilanguage support and wireless access (in addition to cellular), and they enable a new generation of utility clients.
It's also worth noting that Linux is finding its way into all types of devices (embedded Linux), such as watches, cameras, single-board computers, portable mapping systems, diagnostic tools, and much more. These devices, when linked to a network for feedback, data transmission, monitoring, or control, provide unlimited new client application solutions—all based on open standards. If your organization has a need to extend the reach of application and network services to thin clients, PDAs, or specialty devices, Linux and open source provide the capability to do so. And, with Novell technology, the tools and services are available to help you secure, monitor, and manage them in conjunction with all other network resources.
Remote and Branch Offices
According to Strategic Research Corp., as much as 60% of a corporation's data resides outside its managed services on remote networks, desktops, and mobile systems. As much as 75% of this "edge data" is unprotected because it is either ineffectively backed up or not backed up at all. This is risky because edge data can be as critical to a company's survival as its more manageable centralized data. Today's enterprise companies are seeking to provide the same level of service for branch offices as exists at the corporate office. Branch office users need to be able to authenticate securely to network services, access their files, print, and be protected from losing data in the event of a mishap or disaster.
A report by NetsEdge Research Group revealed that more than half of the employees at Fortune 500 companies are located away from main corporate campuses, and there are currently three million branch offices worldwide. Costs for branch offices can be significant, with expenses for local or traveling IT staff, local backup, disaster recovery, WAN links, help desk calls, and decreased productivity because of low bandwidth and slow response times.
How does Novell help companies solve this problem? By combining advanced proprietary technology with open source, Novell has created a unique solution that puts management of critical data assets at the corporate office while providing branch offices with a full complement of network services: Novell Nterprise Branch Office.
Novell Nterprise Branch Office is server software for distributed enterprises that are seeking to reduce the cost and complexity of maintaining network services in branch offices. It takes the "remote" out of remote office by delivering to employees the same level of service regardless of location. In contrast to other approaches, Nterprise Branch Office delivers a fault-tolerant, enterprise-class experience while reducing costs and increasing productivity. Nterprise Branch Office lets enterprises simplify directory management, deliver an improved user experience, automate branch office backup, eliminate onsite IT staff, and leverage standards and low-cost Internet connections.
Branch offices are typically expensive to support because they are physically dispersed and lack central control. Problems include expensive private WAN links; fractured directory services; the need for onsite support staff, local backup, and disaster recovery; and a lack of policy enforcement. Remote and branch offices also suffer because of low levels of service, lengthy response times, and intermittent or varying bandwidth availability.
How It Works
Novell Nterprise Branch Office is a multifunction "software" appliance. Installing Branch Office on standard hardware creates a lights-out appliance that provides multiple services, including authentication, file and print serving, data backup, and more. A basic Nterprise Branch Office configuration consists of two components—a central office server and a branch office appliance. Although Branch Office can be used as a standalone appliance for file, print, directory, and networking services, its primary purpose is to provide corporate-level connectivity and services to branch office users.
At the corporate office, one or more "central office" servers are configured to provide branch office services, including directory and file synchronization. These central office servers can be running Linux or NetWare, or even Windows. The software that keeps files between the branch office and the central office synchronized is the open source solution RSync. RSync provides fast, incremental synchronization: changing a file does not require the entire file to be transferred, only the data blocks that have been altered. Incremental synchronization reduces bandwidth requirements and allows a wide range of resources to be kept current at multiple locations.
Branch office user authentication is also controlled from the central office through the use of directory synchronization. The directory at the central office can be any LDAP-compliant directory, including OpenLDAP, and, again, can be running on Linux, NetWare, or Windows. Optimal management capabilities are achieved using Novell eDirectory, but Microsoft Active Directory is also supported.
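Although Nterprise Branch Office drives RSync automatically, the delta-transfer idea is easy to see from the command line. The following standalone rsync command is only a sketch; the directory paths and host name are hypothetical.

    # Mirror a branch data directory to the central office over SSH, sending only changed data
    # -a preserves permissions and timestamps, -z compresses, --delete removes files no longer on the source
    rsync -az --delete -e ssh /srv/branchdata/ central.example.com:/srv/branch01/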
At the branch office, a dedicated appliance runs corresponding client software that works in tandem with the central office server. An Nterprise Branch Office appliance also coordinates automatic user provisioning, printing, and access to files. The connection between the central office server and the branch office appliance can be via standard Internet (DSL, broadband, or dial-up) or private WAN. Nterprise Branch Office is designed to accommodate slow or inconsistent connections, and, in many cases, costly private WAN connections and VPN technologies can be eliminated in favor of more common and less expensive Internet connections with the security of Secure Sockets Layer (SSL; see Figure 5.2).
FIGURE 5.2
Central office services integrate with branch office appliances to provide connection, synchronization, file, and printing services. (The figure shows central office servers running NetWare, Linux, or Windows with an LDAP directory and RSync file transfer, linked to branch office appliances that provide iPrint, automatic user provisioning, access to files using all file protocols (HTTP, CIFS, NCP, NFS, AFP, FTP), file transfer to the central office via RSync, and optional local backups; the same features are available at any branch office.)
Nterprise Branch Office can provide high levels of service over standard Internet connections because of its advanced design and architecture. The services shared between central and branch sites (directory, file, print, authentication, web services, and so on) are "loosely coupled." In effect, Nterprise Branch Office works as a cache in which directory information and files are held, transferred, or synchronized based on the viability of the connection. File, print, and authentication data exist at both locations, and user access to either is transparent. If the connection is lost or central site data is unavailable, users continue to work from the local cache. If the local system is down, users continue using central site services. File, portal, print, and authentication services continue uninterrupted.
Nterprise Branch Office can dramatically reduce or even eliminate the need for any onsite support at the branch office. Directory, backup, provisioning, and printing support services can be managed at the central site without the need to physically visit the remote office. Deployment is simple and can be performed by nontechnical personnel.
Enterprise companies that are successfully using Nterprise Branch Office include Intermountain Health Care, American Diabetes Association (ADA), and Mariner Health Care. ADA estimated annual savings of more than a million dollars by implementing Nterprise Branch Office. Mariner Health Care can maintain centralized administration and security access while providing the same quality network services to approximately 300 facilities without the overhead of maintaining a server at each location. In addition, they have a cost-effective disaster recovery solution with the capability to back up centrally and restore a system from the corporate office. Using Nterprise Branch Office, Novell reduced its own WAN costs from $9 million (USD) per year to just under $4.5 million and cut remote access costs by $60,000 per month, new-hire costs by $3,000 per month, and internal support costs by $125,000 per month, for a total annual savings of $6.8 million.
Summary
This chapter looks at the features present in OES and walks through transition considerations for a number of different types of businesses. The next chapter walks through the process of installing, migrating, and upgrading servers to OES.
CHAPTER 6
Installing and Upgrading to Open Enterprise Server
Open Enterprise Server (OES) is the revolutionary operating system from Novell that embraces open source and includes the best of everything that came before it. This chapter walks through installing OES on a new server, as well as upgrading or migrating from existing operating systems.
Installation Considerations
Before beginning an installation, it is important to verify that the prerequisites are met. This will save you not only a great deal of time, but much frustration as well.
NOTE Before moving to any new network operating system, you should thoroughly test all configuration and implementation operations in a lab environment. Only after you are satisfied with the performance obtainable in the lab should you consider implementing in your production network.
This section examines what must be present before an installation can begin and then walks through an installation.
Minimum System Requirements
To meet the minimum hardware requirements, the server needs an Intel Pentium II or AMD K7 processor or higher. However, Novell recommends Pentium III 700MHz or higher processors for multiprocessor servers. A minimum of 512MB of RAM must be present, and the server needs an ISO9660-compatible CD-ROM drive. (A bootable CD-ROM drive compatible with the El Torito specification is recommended for booting directly from the CD.) A super video graphics adapter (SVGA) or better video adapter and monitor are needed, as well as a keyboard and mouse.
NOTE Although this book does not address it, it is possible in some instances—and always for security reasons—to run a server without direct input/output devices. This is known as a "headless" server.
If you are installing OES on NetWare, Novell recommends a minimum DOS partition of 1GB on the hard drive, and at least 4GB for the NetWare partition—specifically the SYS: volume. Be certain to factor into any calculations the space needed for additional volumes. Being a server, the system will also need a network interface card (NIC). If the server has more than one processor, HotPlug PCI, or any other similar attribute, you will need to have the current drivers for those items readily accessible during the installation. Lastly, if there are specific services you want the server to support, you will need to make certain you have the hardware needed to support those services.
Choosing a Path: Linux or NetWare
Open Enterprise Server can be installed on either SUSE Linux or NetWare. Each platform has its own advantages and disadvantages, and in many cases, you might find that the best solution is to mix and match servers in your network environment. The biggest advantage to installing on SLES 9 is that you are able to work within an open source operating system. You are no longer tied to the offerings of a single vendor and have the ability to maximize the features of a nonproprietary operating system.
Conversely, the biggest advantage to installing on NetWare is the rich set of features and utilities cultivated by Novell. The NetWare network operating system has been tweaked with each new version to be the best on the market. Because this is a book on moving to open source, it is assumed that you are coming from NetWare, and thus the discussions that follow focus on the issues you will face in so doing.
Choosing a File System
Open Enterprise Server is truly a product that offers you as many choices as possible. Not only can you choose the operating system platform on which you want to install OES, but you can also choose from a number of file systems. File systems fall into three broad categories—traditional, journaling, and virtual. Traditional file systems include ext2, vfat, and so on. Journaling file systems include ReiserFS (the SUSE default) and ext3. Virtual file systems, also known as the Virtual Filesystem Switch (VFS), are actually hybrids that sit between processes and the other file systems (traditional or journaling). Based upon the operating system that you select and other choices you make, a total of six possible file systems are available:
■ EXT—A traditional file system that does not support journaling
■ JFS—The Journaled File System, a journaling file system originally developed by IBM
■ NSS—A robust, journaling file system (NSS stands for Novell Storage Services)
■ POSIX—A file system mostly known for its compatibility across operating systems
■ REISER—The SUSE default, which is a journaling file system
■ XFS—A high-performance journaling file system
NOTE One of the best places to get additional information about each of the file system types is at WikiLearn. An excellent discussion on each of the types begins at http://twiki.org/cgi-bin/view/Wikilearn/LinuxFilesystems.
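If you want to check which of these file systems an OES Linux server is actually using, or format a new data partition with the one you have chosen, a couple of standard commands are enough; the device name below is a placeholder.

    # Show each mounted volume and its file system type
    df -T

    # Format a new data partition with the file system you selected, for example:
    mkfs.reiserfs /dev/sdb1      # ReiserFS, the SUSE default
    # or: mkfs.ext3 /dev/sdb1    # ext3, the journaling successor to ext2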
eDirectory Design Considerations
When it comes to eDirectory, the most important thing is to make certain that you have the latest version. OES includes Novell eDirectory v8.7.3, and you want to be certain that the rest of your network (assuming there is one already in existence) can support this version before deploying OES. You can download the latest eDirectory support packs for the other servers on your network at support.novell.com. When installing the eDirectory component, you will need to know the following information:
■ Tree name
■ Server location within the eDirectory tree
■ Administrator name and password
■ Server's time zone
■ Time synchronization
Walking Through a New Installation
Before beginning ANY installation, do a full system backup. Following that, verify that the data from the backup can be restored. This is your insurance policy for the unlikely event that you should need to revert to this configuration.
To begin the installation of Open Enterprise Server, simply insert the first OES CD in the system and reboot. This starts the Deployment Manager. In addition to starting on bootup from the CD, Deployment Manager can also be run from within Windows 2000 or XP by running NWDEPLOY.EXE or NWDEPLOYNOBROWSER.EXE from the root directory of the CD. The difference between the two executables is that NWDEPLOY requires Internet Explorer 5 or greater, whereas NWDEPLOYNOBROWSER runs without the ActiveX controls. Deployment Manager has three main categories:
■ Network preparation
■ Install/upgrade options
■ Postinstall tasks
The three main steps to OES NetWare installation are as follows:
■ Configure a DOS partition if desired.
■ Install the startup files and create a SYS volume.
■ Run the NetWare Installation Wizard.
Under most circumstances, the OES bootable CD-ROM starts the server installation process. If this does not happen, follow these steps to create a bootable partition:
1. Boot the server with the OES NetWare License/Cryptography disk.
2. Load FDISK /X. Delete any existing partitions and create a primary DOS partition of at least 1GB.
3. Reboot the server and use FORMAT C: /S/X to format the new partition and transfer system files.
4. Copy over any needed configuration files that were backed up before you began. These can include CONFIG.SYS, AUTOEXEC.BAT, and device drivers.
Installing the Startup Files and Creating a SYS Volume
The following steps install the OES NetWare startup files and create the SYS volume on the system:
1. Boot from the NetWare 6.5 SP3 CD 1.
2. At the Languages screen, select the language in which you want the server installed, and press Enter.
3. Make your selections at the Regional Settings screen, and click Continue.
4. Read the NetWare license agreement, and press F10 to accept it.
5. Read the JInfonet license agreement, and press F10 to accept it.
6. Select Default or Manual installation, and press Enter.
7. Make your selections at the Prepare Boot Partition screen, and click Continue.
8. Make your selections at the Server Settings screen, and click Continue. Startup files and drivers are now copied to the C:\NWSERVER folder on the DOS partition.
9. Specify the Platform Support Module, and click Continue.
10. Specify the HotPlug Support Module and the Storage Adapter(s), and click Continue.
11. Specify the Storage Devices, and click Continue.
12. Specify the Network board, and click Continue.
13. Choose the size of the SYS: volume you want created, and click Create (at least 4GB is recommended).
14. Click Continue Installation at the NSS Management Utility main menu.
The newly created SYS: volume is now mounted and all the needed system files are copied to it. The NetWare Installation Wizard begins, and walks you through the rest of the installation.
Running the NetWare Installation Wizard
After the NetWare Installation Wizard begins, you are close to completing the installation. You now walk through the following steps to complete the configuration of the OES NetWare server environment:
1. Choose the type of installation you want at the Product Installation Type screen. The choices are as follows:
■ Open Enterprise Server 1.0—This includes NetWare 6.5 Support Pack 3 as well as iManager 2.5, Virtual Office 1.5, and Quickfinder Server (to replace Web Search Server).
■ NetWare 6.5 Support Pack 3—This installs NetWare 6.5 Support Pack 3 without the updates to iManager, Virtual Office, and Web Search Server.
2. Choose Customized NetWare Server at the Choose a Pattern screen, and click Next.
3. Choose any optional NetWare components you want to install on your server at the Components screen, and click Next.
4. Review your choices at the Summary screen, and click Copy Files. You will have to insert the NetWare 6.5 SP3 CD 2 (Products) when prompted to install the OES NetWare components.
5. Enter a name for the server, and click Next.
6. Make your choices at the Protocols screen, and click Next. You can choose IP, IPX, or both. Based upon your choices, you will need to enter configuration information for those protocols.
7. Make your choices at the Domain Name Service screen, and click Next.
8. Make your choices at the Time Zone screen, and click Next.
9. Make your choices at the eDirectory Installation screen, and click Next.
10. Read the settings on the eDirectory Summary screen, and click Next.
11. Review the Licenses screen, and click Next.
12. Review the License Certificate Context screen, and click Next.
13. Make your choices at the LDAP Configuration screen, and click Next.
14. Make your choices at the Novell Modular Authentication Service (NMAS) screen, and click Next.
15. Click Yes at the Installation Complete window to restart the OES NetWare server.
Upgrading and Migrating
In the previous section, you walked through the installation of Open Enterprise Server on a new system. This section examines how to upgrade and migrate existing servers to OES. If you are only installing OES on new servers, you can skip the rest of this chapter and move on to the topic of administration in the next chapter. A number of types of upgrades and migrations are possible, depending upon the platform with which you are working. The five major possibilities are
■ Upgrade an existing NetWare server
■ Migrate from an existing NetWare server
■ Upgrade from an existing Linux server
■ Migrate from an existing Linux server
■ Migrate from an existing Windows server
Because this book is targeted at an audience moving to open source, it focuses on the methods for moving from NetWare and Windows.
Upgrading a NetWare Server
Two types of upgrade options are available for moving to NetWare 6.5:
■ In-Place
■ Server Consolidation
In-place upgrades are the most traditional of methods. With this method, the server is directly upgraded from its previous version to the new version using the server console. In-place upgrades are available from NetWare versions 6.5, 6.0, 5.1, and 4.2. With a server consolidation upgrade, volumes, directories, users, printers, and printer agents are transferred from a source server to an OES destination server by using the NetWare Server Consolidation Utility. Server consolidation is available with NetWare 6, 5, and 4.
IN-PLACE UPGRADE
The in-place upgrade updates the operating system, eDirectory, and any other NetWare components installed on the server. Before beginning the upgrade, you should have the latest support pack installed on the server and perform a full backup. All deleted files are purged as part of the upgrade, so be certain you do not need to salvage any deleted files prior to the upgrade. For NetWare 6.5, you should have Support Pack 2 installed. For NetWare 6.0, Support Pack 5 (or later) should be installed. For NetWare 5.1, Support Pack 6 (or later) should be installed. For NetWare 4.2, Support Pack 9 (or later) should be installed.
NOTE NetWare support packs can be found at http://support.novell.com/filefinder.
NOTE Although NetWare 4.2 can be upgraded, it is known as a Down Server upgrade and requires a few deviations from the steps given for the other servers. A section on Down Server upgrade follows the coverage of other in-place upgrades.
The upgrade to OES can be run from one of three locations:
■ The server console
■ Remotely from Deployment Manager
■ Remotely from iManager
UPGRADING FROM THE SERVER CONSOLE
To perform an upgrade from the server console, follow these steps:
1. Insert the NetWare 6.5 SP3 CD 1 (Operating System) into the server CD-ROM drive.
2. From the system console (the server command line), mount the NetWare 6.5 SP3 CD 1 (Operating System) as a NetWare volume by typing CDROM at the server console. Next, type VOLUMES to confirm the CD-ROM is mounted.
3. Switch to the GUI console and choose Novell, Install.
4. Click Add at the Installed Products screen.
5. At the Source Path screen, highlight PRODUCT.NI in the root of the NetWare 6.5 SP3 CD 1.
6. Click OK to return to the Source Path screen, and click OK to continue.
7. Read the NetWare license agreement, and click I Accept.
8. Read the JInfonet license agreement, and click I Accept.
9. Make your choices at the Backup Server Files screen, and click Next:
■ Backup the Server Boot Directory Files—Select the Yes option button to back up your existing server files.
■ Automatically Reboot—Select the Yes option button to have the server automatically reboot after completing the upgrade.
■ Allow Unsupported Drivers—Select the Yes option button to allow device drivers that have not been approved by Novell to be automatically loaded if no other suitable driver is found.
■ Specify the Upgrade Type—Select either a Default or Manual installation.
10. Choose the type of installation you want to perform at the Product Installation Type screen, and click Next:
■ Open Enterprise Server 1.0—This option includes NetWare 6.5 Support Pack 3 as well as iManager 2.5, Virtual Office 1.5, and Quickfinder Server (to replace Web Search Server).
■ NetWare 6.5 Support Pack 3—This option installs NetWare 6.5 Support Pack 3 without the updates to iManager, Virtual Office, and Web Search Server.
11. Make your choices at the Components screen, and click Next.
12. Read the configuration settings on the Summary screen, and click Copy Files. The new server files are now copied and you are prompted to insert the NetWare 6.5 SP3 CD 2 (Products) for any needed product files to be copied.
13. Log in as the Admin user at the prompt, and click OK to complete the eDirectory installation.
14. Review the information on the eDirectory Summary screen, and click Next.
15. Make your choices at the Licenses screen, and click Next.
16. Make your choices at the License Certificate Context screen, and click Next.
17. Make your choices at the Novell Modular Authentication Service (NMAS) screen, and click Next.
18. Click Yes at the Installation Complete window, and the server restarts.
The installation of OES NetWare is now complete.
UPGRADING WITH DEPLOYMENT MANAGER
Deployment Manager can be run from a Windows 9x or Windows NT/2000/XP workstation. To upgrade using Deployment Manager, follow these steps:
1. Insert the NetWare 6.5 SP3 CD 1 (Operating System) into the workstation. Run NWDEPLOY.EXE from the root of the CD to start Deployment Manager.
2. Choose Upgrade to NetWare 6.5 in the left pane.
3. Choose Upgrade a Server Remotely in the right pane.
4. Read the NetWare license agreement, and click I Accept.
5. Read the JInfonet license agreement, and click I Accept.
6. Make your choices at the Backup Server Files screen, and click Next:
■ Backup the Server Boot Directory Files—Select the Yes option button to back up your existing server files.
■ Automatically Reboot—Select the Yes option button to have the server automatically reboot after completing the upgrade.
■ Allow Unsupported Drivers—Select the Yes option button to allow device drivers that have not been approved by Novell to be automatically loaded if no other suitable driver is found.
■ Specify the Upgrade Type—Select either a Default or Manual installation.
7. Choose the type of installation you want to perform at the Product Installation Type screen, and click Next:
■ Open Enterprise Server 1.0—This option includes NetWare 6.5 Support Pack 3 as well as iManager 2.5, Virtual Office 1.5, and Quickfinder Server (to replace Web Search Server).
■ NetWare 6.5 Support Pack 3—This option installs NetWare 6.5 Support Pack 3 without the updates to iManager, Virtual Office, and Web Search Server.
8. Make your choices at the Components screen, and click Next.
9. Read the configuration settings on the Summary screen, and click Copy Files. The new server files are now copied and you are prompted to insert the NetWare 6.5 SP3 CD 2 (Products) for any needed product files to be copied.
10. Log in as the Admin user at the prompt, and click OK to complete the eDirectory installation.
11. Review the information on the eDirectory Summary screen, and click Next.
12. Make your choices at the Licenses screen, and click Next.
13. Make your choices at the License Certificate Context screen, and click Next.
14. Make your choices at the Novell Modular Authentication Service (NMAS) screen, and click Next.
15. Click Yes at the Installation Complete window, and the server restarts.
The installation of OES NetWare is now complete.
UPGRADING WITH iMANAGER 2.0
To upgrade a server by using iManager, follow these steps:
1. Launch iManager and choose Install and Upgrade in the left pane.
2. Choose Upgrade to NetWare 6.5.
3. Choose Upgrade a Server Remotely in the right pane.
4. Browse to the root of the NetWare 6.5 SP3 CD 1 (Operating System), and click OK.
5. Read the NetWare license agreement, and click I Accept.
6. Read the JInfonet license agreement, and click I Accept.
7. Make your choices at the Backup Server Files screen, and click Next:
■ Backup the Server Boot Directory Files—Select the Yes option button to back up your existing server files.
■ Automatically Reboot—Select the Yes option button to have the server automatically reboot after completing the upgrade.
■ Allow Unsupported Drivers—Select the Yes option button to allow device drivers that have not been approved by Novell to be automatically loaded if no other suitable driver is found.
■ Specify the Upgrade Type—Select either a Default or Manual installation.
8. Choose the type of installation you want to perform at the Product Installation Type screen, and click Next:
■ Open Enterprise Server 1.0—This option includes NetWare 6.5 Support Pack 3 as well as iManager 2.5, Virtual Office 1.5, and Quickfinder Server (to replace Web Search Server).
■ NetWare 6.5 Support Pack 3—This option installs NetWare 6.5 Support Pack 3 without the updates to iManager, Virtual Office, and Web Search Server.
9. Make your choices at the Components screen, and click Next.
10. Read the configuration settings on the Summary screen, and click Copy Files. The new server files are now copied and you are prompted to insert the NetWare 6.5 SP3 CD 2 (Products) for any needed product files to be copied.
11. Log in as the Admin user at the prompt, and click OK to complete the eDirectory installation.
12. Review the information on the eDirectory Summary screen, and click Next.
13. Make your choices at the Licenses screen, and click Next.
14. Make your choices at the License Certificate Context screen, and click Next.
15. Make your choices at the Novell Modular Authentication Service (NMAS) screen, and click Next.
16. Click Yes at the Installation Complete window, and the server restarts.
The installation of OES NetWare is now complete.
PERFORMING A DOWN SERVER UPGRADE
The Down Server upgrade is necessary only when upgrading from a NetWare 4.2 server, or when an upgrade fails after the Health Check Summary completes and the upgrade cannot be restarted. A Down Server upgrade can be completed with these steps:
1. Enter DOWN and then EXIT at the server console to bring the server down.
2. Insert NetWare 6.5 SP3 CD 1 (Operating System) in the server's CD drive.
3. Restart the server.
4. When prompted, press any key to interrupt the installation process.
5. Press P to specify installation parameters.
6. Type [inst: upgrade] and press Enter.
7. Press I to continue the installation.
The Down Server upgrade now follows the installation of a new OES NetWare server, as discussed in Chapter 4, "A Brief History of NetWare."
SERVER CONSOLIDATION UPGRADE
Open Enterprise Server isn't just another operating system that can be used to replace one you are already using. It is far more than that! OES is so robust, scalable, and powerful that one server can do the job that it took many servers to do in the past. Given that, OES offers the Server Consolidation Utility (SCU) to allow you to consolidate data from redundant servers into a smaller number of servers that are more easily administered and managed. With SCU, you can copy anything from whole volumes to specific directories from source servers (NetWare 4, 5, 6, or Windows NT) to destination servers (NetWare 5.1 or later).
SCU must be installed on a Windows-based workstation running Windows NT 4, Windows 2000 (Service Pack 2 or later), or Windows XP Professional. There needs to be at least 50MB of available disk space, a Novell client for Windows NT/2000 version 4.83 or later, and Microsoft Data Access Components (MDAC) 2.7 or later installed on the workstation.
NOTE Information on MDAC can be found at http://msdn.microsoft.com/library/ default.asp?url=/downloads/list/dataaccess.asp.
SCU files are on the NetWare 6.5 SP3 CD 1, in the \PRODUCTS\SERVCONS directory. To install the utility:
1. Launch the SCU installation by executing NWSC.EXE.
2. Specify the language for the utility installation, and click OK.
3. At the Introduction screen, click Next.
4. At the License Agreement screen, select the I Accept the Terms of the License Agreement option button, and then click Next.
5. At the Choose Destination Location screen, specify the path into which you want to install SCU.
6. At the Installation Summary, click Install.
7. At the ReadMe screen, click Next.
8. At the Install Complete screen, click Done.
You can consolidate from NDS/eDirectory or from a Windows NT domain. This book focuses on consolidating a NetWare server to OES NetWare; consult the OES online documentation for other options.
Within SCU, a project file is used to record the consolidation plan. The project file holds information on the consolidation, and you can choose to run a project immediately or save it for execution at another time. To create an SCU project, follow these steps:
1. Launch SCU (Start, Program Files, Novell Server Consolidation Utility).
2. Click OK at the opening splash screen.
3. At the Startup screen, choose Create a New Project, and click OK.
4. Make your choices at the Project Type screen, and click OK.
NOTE Choose NetWare NDS/eDir Tree for a consolidation from a NetWare server. Choose Microsoft Windows Domain for a Windows NT consolidation.
5. At the Setup Tasks screen, click Next.
6. Choose a name and location for the project file, and click Next.
7. Choose the source and destination tree for the consolidation, and click Next.
NOTE You must be logged in to a tree to see it in the drop-down list.
8. Click Create to finish creating the project file. Select the volumes, directories, and printer objects to move, and drag them to the desired location in the new tree.
9. Click Do the Consolidation.
NOTE You can save your project at any time by selecting File, Save As.
10. After the SCU project is created, SCU launches the Verification Wizard. Click Next to start the verification process.
11. Review the source and destination paths at the Dropped Folders screen, and click Next.
12. Make your choices at the Duplicate File Resolution screen, and click Next:
■ Don't Copy Over Existing Files—The source file will not be copied, thereby keeping the existing destination file.
■ Copy the Source File if It Is Newer—The source file will be copied over the destination file only if it is newer than the existing destination file.
■ Always Copy the Source File—The source file will always be copied over the destination file.
13. Make your choices at the Synchronize Files and Folders screen, and click Next.
NOTE Choose Yes to delete all files and folders on the destination server that do not exist on the source server.
14. Make your choices at the Compare Files and Folders screen, and click Next.
15. Make your choices at the File Date Filters screen, and click Next. If you want to filter the file copy based on file date, choose Yes and configure your options:
■ Attribute—You can filter files based on the Accessed, Modified, and Created file attributes.
■ Attribute Dates—For each attribute, specify the appropriate dates to define how filtering will be done.
16. Make your choices at the Wildcard Selection screen, and click Next.
17. Make your choices at the Check for Trustees and Ownerships screen, and click Next.
18. Enter the passwords for the source and destination trees at the Password Verification screen, and click Next.
19. Click Next to begin the verification process.
20. Click Next at the Error Resolution screen to continue the consolidation. If there are any errors during the verification process, they will be listed in two categories:
■ Errors that must be resolved before files can be copied
■ Errors that should be resolved but might not affect the copy process
The project is now completely defined and you are ready to perform the actual server consolidation. You can execute it immediately or schedule it to start at some other time, such as when the server is less busy. At the Start Novell Server Consolidation Utility screen, click Proceed to perform the consolidation immediately. When the copy process is complete, the Process Finished screen lets you view the error log, view the success log, and close the consolidation process.
Migrating from a NetWare Server
The NetWare Migration Wizard is used to migrate old NetWare 3.x servers to new OES NetWare servers. The Migration Wizard copies the file system and bindery objects from the NetWare 3.x bindery to a destination Novell eDirectory tree and converts them to eDirectory objects. More information on the NetWare Migration Wizard is available in the OES NetWare online documentation.
Migrating from a Windows Server
Server consolidation is possible for Windows-based servers by using the NetWare Server Consolidation Utility to transfer all configuration settings from the source server to the OES NetWare destination server. During the migration from NT 3.5 and 4.0, domain users—as well as local and global groups—are converted to eDirectory objects and placed in the eDirectory tree. Shared folders are moved to the NetWare file system, and rights/permissions are converted from Windows NT settings to NetWare trustee rights.
NOTE As of this writing, a direct upgrade or migration path for Windows 2000 or XP is not available.
Summary
This chapter walks through the process of performing a clean installation as well as the ways in which you can upgrade or migrate existing servers to Open Enterprise Server. Bear in mind that, ideally, all configurations and installations should be tested in a lab environment before moving to the production network.
CHAPTER 7
Administering Open Source
At last, you arrive at the topic that is probably most critical to the entire success of open source integration—system administration. It is critical in two respects. First, there must be management advantages to open source, both in terms of cost and simplicity, which make deploying or migrating worth the effort. Novell customer organizations have proven this is possible by implementing Linux systems that are easy to manage using browser-based management tools or Yet another Setup Tool (YaST). The scalability of open source also enables IT to consolidate servers, services, data, and users on fewer systems, making it far more cost-effective to administer. The second advantage is that, combined, open source, Linux, and Novell technologies fully enable the management of heterogeneous systems. Few, if any, organizations are completely open source-based and, for this reason, it is imperative that management solutions be able to accommodate not just Linux but Linux, Windows, NetWare, and the networking services and applications that they support.
A key strategy of the Novell mission is to provide holistic, comprehensive, cross-platform management for disparate systems that include Linux and open source applications and services. Novell enables companies to implement Linux and open source using products and solutions that make integration and management as seamless as possible. This chapter covers in detail both the Linux and open source management tools, as well as those available from Novell that help to make deployment, migration, and integration as easy and seamless
as possible. Console and browser-based utilities are abundant for Linux, Apache, MySQL, JBoss, and more. YaST is a comprehensive management tool that provides server configuration, multisystem software management, and much more. Novell iManager is an extensible management console that provides for multisystem, multiplatform management of services as well as applications. All of these combined give IT managers everything they need to manage and administer users and resources using simple, automated, policy-based methods.
NOTE Not all of the products discussed in this chapter are included with Open Enterprise Server. Many of the products discussed are supporting products available from Novell.
Working with YaST
YaST has become the most powerful installation and system management tool in the Linux world, and it is only available with Novell SUSE Linux. YaST provides an intuitive graphical installation interface that guides you through each step of installation, automatically detecting available hardware. YaST provides installation options for user administration, security settings, installation of printers, scanners, and other hardware, and it even guides you through the process of resizing any existing Windows installation (see Figure 7.1). YaST includes management modules for configuring networking services, such as Domain Name System (DNS), Dynamic Host Configuration Protocol (DHCP), web servers, and Samba. Inexperienced Linux users can easily network Linux and Windows hosts for seamless access. SUSE YaST also includes an automated update and optimization service called YOU (YaST Online Update). YOU automatically provides managers with information on security-relevant patches, enabling them to keep all systems up to date. YaST provides a graphical management interface that has not been available on Linux systems to this point (see Figure 7.2).
FIGURE 7.1
YaST guides administrators through an intuitive installation process.
FIGURE 7.2
The YaST Control Center provides a centralized, icon-based interface to system and network resources.
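YaST modules can also be launched individually from a terminal, which is convenient on servers without a graphical desktop. The module names below are common examples and may vary between SUSE releases, so treat this as a sketch rather than a definitive list.

    yast -l               # list the module names available on this system
    yast2 lan             # configure network cards and addressing
    yast2 sw_single       # install or remove individual software packages
    yast2 online_update   # fetch and apply patches through YaST Online Update (YOU)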
Novell eDirectory
Directory services are one of the most powerful yet least understood networking management technologies. The evolution of the Internet and e-business has intensified the need for a viable identity management service that allows individuals and systems to securely interact across all types of networks—the Internet, intranets, and extranets; wired to wireless; and corporate and public networks across leading operating systems. Identities of associates, customers, partners, and suppliers are validated, and access to specific applications and resources is easily managed through a secure, full-service directory service. This allows companies to build highly customized inter-/intra-business relationships. Novell's eDirectory is the industry's premier and most prevalent directory service technology.
Several characteristics differentiate a true directory service from a common database or user identification list. These characteristics are hierarchy, inheritance, standards support, and distributed systems.
Hierarchy
In 1988, the X.500 standard was established for managing online directories of users and resources. The X.500 standard is hierarchical, meaning that it appears like an inverted tree with a single trunk at the top and branches that extend below. This hierarchical structure fits the world's classification system: countries, states, cities, streets, houses, families, and so on. The objective is to have a directory organization structure that can classify any resource globally. Every resource or user can be classified by the company it belongs to (tree), the division it is in (organizational unit), the department that it is part of (group), and its unique characteristics. Using a hierarchical organization structure, it is possible to classify and map relationships between any two users or resources—no matter where they might physically reside.
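The hierarchy shows up directly in the distinguished names a directory uses. The LDIF fragment below sketches a tiny tree for a hypothetical organization; the names are invented, and an eDirectory tree would normally be built with Novell's own tools rather than raw LDIF.

    # A company container, a division beneath it, and a user beneath the division
    dn: o=acme
    objectClass: organization
    o: acme

    dn: ou=engineering,o=acme
    objectClass: organizationalUnit
    ou: engineering

    dn: cn=jsmith,ou=engineering,o=acme
    objectClass: inetOrgPerson
    cn: jsmith
    sn: Smith
    mail: jsmith@acme.example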
Inheritance
Hierarchy is crucially important to the second distinguishing characteristic of a directory service, inheritance. Objects in a tree (organizational units, groups, users, services, and so on) are defined by attributes—characteristics such as access rights or descriptive properties. Inheritance makes it possible for many objects lower in the tree to be assigned or "inherit" properties from other objects higher in the tree. For example, a group object might have a uniform resource locator (URL) attribute that provides 15 people in the group with access to a specific Internet data feed. By changing the URL attribute once at the group level, all 15 users automatically inherit access to the new address.
Inheritance is a powerful characteristic that makes it possible to manage large quantities of users and resources with a few simple steps.
Standards Support
The number of disparate and single-purpose user management solutions for verifying identity and granting access is staggering. Many applications have their own proprietary user database; each of the leading operating systems has its own domain methods; and thousands of others have been kludged together to accommodate existing IT environments. Integrating, synchronizing, and managing these disparate systems is a daunting, if not impossible, task. The evolution of an open standard that provides true directory characteristics that are extensible or modifiable is extremely important. Lightweight Directory Access Protocol (LDAP) is that standard. It provides the capability to import or export directory objects and attributes as well as extend a directory schema (add new object types or properties). The capability to pull disparate directories together for central management based on open standards support is a primary requirement for a true directory service. Open standards become increasingly important as disparate resources are connected across the Internet.
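Because an LDAP-enabled directory such as eDirectory answers standard queries, any standards-based client can read from it. The command below is a sketch using the OpenLDAP ldapsearch utility; the host, base context, and account details are hypothetical and would be replaced with values from your own tree.

    # Look up a user's mail address and phone number over LDAP (anonymous simple bind)
    ldapsearch -x -H ldap://ldap.acme.example -b "o=acme" "(cn=jsmith)" mail telephoneNumber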
Distributed Systems
Conceptually, a directory service appears as a single central repository of all resource information. In reality, an enterprise-level directory service must have the capability to be distributed both physically and virtually. To avoid a single point of failure (such as if all directory information were in one physical place at the time of a disaster), copies of a directory must be placed in different locations and linked in such a way that if one location goes down, all information is still accessible from other locations. A directory must also have the capability to be distributed or divided virtually so that pieces of it are available for different applications. For example, some user object attributes such as phone number and address might be valuable throughout a directory network; other attributes such as directory rights only need to be available from where the user regularly logs in. Novell eDirectory is distributed using replicas, partitions, and indexes.
A true directory service then becomes the nerve center for a distributed system of resources. It holds attribute or characteristic information about the resources; it monitors status; it prevents or grants access; and it includes mechanisms to protect or heal itself in the event of outages. A directory can vastly simplify administration by consolidating all management functions to a single service. All applications, users, peripherals, devices, connections, processes, groups, and so on can be managed through a single interface using a directory service such as Novell eDirectory.
Novell iManager
The power of true directory services and the Web come together in a comprehensive administration package, Novell iManager. iManager provides a global view of your network from one browser-based tool, allowing you to manage Novell eDirectory and many other services from any workstation at any location. This centralized web-based management allows you to eliminate both administrative redundancy and unnecessary travel, and it provides an intuitive, guided management process for managing all types of resources. Novell iManager is unique among management products because of its tight integration with eDirectory. iManager features delegated administration, which allows you to assign individuals specific management tasks and duties. Delegating simple administrative tasks, such as resetting passwords or adding network printers, allows you, the network administrator, to focus on strategic projects that more directly affect the company's bottom line.
As an eDirectory management solution, iManager provides a single point of administration for eDirectory objects, schema, partitions, replicas, and many other network resources. iManager also provides a number of advanced capabilities that take network administration to a new level. Most notably, Novell iManager is a server application that runs on a variety of operating systems, including Linux, NetWare, Windows, Solaris, and HP-UX. This multiplatform support gives you flexibility in where and how you deploy the management console, which helps to address the challenge of managing complex, heterogeneous networks, in which a wide range of applications are running on a variety of operating systems.
Novell iManager is designed to accommodate new services and can easily be extended to include multiple types of resources as they are added to your network. The iManager Plug-in Studio allows you to create new management operations and tasks without writing a single line of code. Leveraging standard technologies and application interfaces, Novell iManager provides an ideal architecture for the creation of a customized management interface. As new Novell products are added to a network, the iManager console will seamlessly incorporate the attributes, property pages, object types, and other functions that correspond to those products. With iManager, secure, flexible network administration is accessible from any location on the Internet from a standard web browser.
RPM Package Manager
Red Hat Package Manager (RPM) is a streamlined process for installing, updating, uninstalling, verifying, and querying software packages that are installed
on different machines. Linux is installed as a collection of separate files or packages, and RPM helps track and manage the installation of these files. Using RPM, administrators can build new installation images on the fly, rebuild corrupted or crashed systems based on their original/unique configurations, update systems with new features or functionality, and uninstall all or portions of a system.
The following is a simple example of how RPM works. RPM uses a configuration database that includes all the information about a particular system, including the packages that are used to create it and each of the files that are part of a package. Package information includes the package name, checksums, and a list of all the dependencies on any other packages. A complete Linux operating system, for example, could be a single encapsulated file that includes many other packages, each providing a function or service, and each possibly requiring other packages. The RPM database tracks all of these dependencies and associations and makes them available for manual and automated management. To install a Linux distribution that has been configured as only a Network File System (NFS) server, for example, you would use RPM, select an option for preconfigured NFS, and install that particular configuration. If the installation should become corrupted, you can use RPM to rebuild it to the original state. In the course of time, new packages might need to be added to an existing configuration. RPM tracks these new additions and their dependencies, as well as any package subtractions or deletions.
The power of RPM is in the fact that it can be configured to automatically track and manage hundreds of machines with various configurations. Because Linux allows you to add and subtract or start and stop processes without taking the system down, RPM can be a powerful tool for mass-updating either workstations or servers across a network with little, if any, impact on end users. RPM includes technology that allows you to query systems for integrity and to ensure that packages are from signed or authorized sources.
Enhancements that Novell has made with RPM include the capability to install patches in addition to packages. This is an important feature for organizations that need to quickly update a process to protect against a security threat. Novell has also ported RPM to NetWare, where the same functionality can be used to load and unload NetWare loadable modules (NLMs) and update NetWare services. RPM for both Linux and NetWare provides advanced and powerful management tools for managing servers and workstations in a mixed network.
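A few representative RPM commands illustrate the install, query, and verify cycle described above; the package and file names are placeholders.

    rpm -qa                          # list every package installed on the system
    rpm -Uvh somepackage.rpm         # install or upgrade a package from a package file
    rpm -q --requires somepackage    # show the dependencies a package declares
    rpm -V somepackage               # verify installed files against the RPM database
    rpm --checksig somepackage.rpm   # confirm the package file is from a signed, authorized source
    rpm -e somepackage               # uninstall a package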
Novell ZENworks

When it comes to comprehensive, cross-platform server and workstation management, nothing compares to Novell ZENworks. ZENworks provides unparalleled management capabilities by combining eDirectory technology with distributed network services to deliver a comprehensive, centralized management platform. ZENworks takes advantage of the object-oriented and hierarchical nature of eDirectory, allowing administrators to make minor or extensive changes across the entire enterprise with a few simple operations. Management is secure, can be delegated, and is graphical and intuitive.

The Novell ZENworks Suite is a comprehensive set of integrated tools that automate IT management processes across the life cycle of servers, desktops, laptops, and handhelds. This innovative solution automatically maintains and enforces enterprise policies to dynamically manage resources based on device and user identities. With support for Windows, Linux, NetWare, Palm, Windows CE, PocketPC, and RIM BlackBerry platforms, as well as directory integration with Novell eDirectory and Microsoft Active Directory, ZENworks significantly enhances the management of complex, diverse IT environments. Moreover, it provides the technology foundation for policy-driven automation, which can eliminate manual system management processes and substantially reduce IT costs.

ZENworks is available as a suite with comprehensive support or as individual packages managing servers (ZENworks Server Management), desktops (ZENworks Desktop Management), personal digital assistants (ZENworks Handheld Management), and Linux (ZENworks Linux Management). Several of the administrative activities that can be accomplished using Novell ZENworks, which works across Linux, NetWare, and Windows environments, are as follows:
■ Software inventory—Discover and track all software installed on a server or workstation. Information is stored in a database (Oracle or PostgreSQL) for comparison or rollback.
■ Hardware inventory—Discover and track all hardware elements, including BIOS, processor, memory, hard drives, and much more. This information is available in the database for multiple purposes, including difference reporting or determining software requirements.
■ Software distribution—Centrally control software distribution to servers and desktops (inside or outside the firewall), based on policies or user needs and identity.
■ Policy-driven automation—Automate management tasks for distribution, updates, patches, rollbacks, inventories, and more by using eDirectory and rules-based policies. For example, if a particular machine meets minimum hardware requirements and the user is a manager, install the software linking the user to the employee evaluation system.
■ Personality migration—Again using eDirectory and policies, automatically configure a workstation for a specific user based on his job requirements, location, group membership, and any other related factors. Provide the capability for desktops and a personalized environment to follow a user wherever he goes.
■ Mass upgrades—Automate the process of simultaneously providing new software (operating system, utilities, packages, or applications) to multiple machines. Upgrade entire organizations with new software with a few simple operations. Supply all servers in your network with the latest health-monitoring control through a single operation.
■ Patch management—Stay current with the latest security or performance patches for Linux, NetWare, or Windows operating systems by using ZENworks patch management. Dynamic graphical reporting provides a visual reference for status.
■ Package management—Apply the package management tools popular on Linux to NetWare as well. Package management determines conflict resolution and package dependencies while providing granular control and scheduling based on users and groups.
ZENworks Linux Management is a Novell solution that expands the capabilities of RPM to provide a greater degree of control and flexibility. You can manage dozens or thousands of machines, creating groups of machines and software channels for software distribution. You can delegate authority to multiple administrators, follow progress with the built-in reporting system, and conduct transaction dry runs to check package, package set, and channel dependencies. Other ZENworks Linux Management features include machine comparison for checking discrepancies between two or more machines, subnet server caching for distributing ZENworks servers to speed updates over slower links, and enhanced transactions for running program scripts before or after software transactions (see Figure 7.3).

These technologies, some open source and some proprietary, provide a powerful and comprehensive collection of cross-platform management tools. Novell validates its commitment to centralize and simplify the management and administration of dissimilar systems by providing the management infrastructure to accommodate Linux as well as NetWare and Windows. Linux administration tools, such as RPM and YaST, are adequate for many management tasks, but they are significantly enhanced through the use of eDirectory, iManager, and ZENworks Linux Management.
FIGURE 7.3
ZENworks Linux Management provides controlled package management for servers.
(Diagram: a central ZENworks administration server exchanges HTTPS/HTTP [XML-RPC] traffic with managed desktops and servers running Red Hat, SuSE, Mandrake, RHEL AS, and SLES.)
For most organizations, the implementation of Linux and open source will occur over a period of time and will involve the coexistence of multiple systems. Tools such as ZENworks, eDirectory, iManager, or RPM that work across platforms are needed to simplify the deployment. The ability to include Linux servers as part of an existing operation without the need to add specialized management or extraneous utilities is critical to keeping a lid on administration requirements.
Administering with ZENworks

Regardless of whether your organization decides to fully migrate to Linux on the desktop, stay with what you have, or support a mix of workstations, it's good to know that one desktop management solution will handle any option: Novell ZENworks. ZENworks is a vital element of Novell's dual-source strategy in that it provides a consistent management structure for all desktop workstations—both proprietary and open source—that streamlines, simplifies, and provides valuable utilities. In addition to ZENworks, other desktop
management solutions, including Novell SUSE's YaST, provide simplified package and patch management, and, of course, open source RPM is available.

One of the significant advantages of a common operating system across workstations and servers is that the same administration tools can generally be applied to manage both. ZENworks provides the same functionality for workstations as it does for servers, with the capability to lay down new images, apply policies, distribute applications, capture and track inventories, and enable remote management on both Linux and Windows. YaST and RPM manage both servers and workstations as well, as long as they are running Linux.

ZENworks allows a company to manage the entire life cycle of desktops. Using policy-driven automation, administrators can enforce a standard configuration and simultaneously update the configurations and software on thousands of desktops and laptops—all from a single location. ZENworks even provides personality migration so personal settings and application settings for each desktop can be fully restored to minimize disruption. With detailed inventory reports from PCs to handheld devices, across multiple operating systems, ZENworks helps companies enforce standard configurations, prepare for upgrades, determine device locations, and meet corporate asset reporting requirements. The basic ZENworks functions valuable for desktop management are detailed in the following sections.
Drive Imaging

ZENworks allows you to configure a workstation by laying down a new drive image. This can be done as new workstations come in, or on corrupted workstations that are already deployed. Drive images can be classified and associated with particular workstations or classes of workstations for easy management. Using the Preboot Execution Environment (PXE), a workstation can boot up, check for a new image, and then lay it down before the workstation operating system starts. If workstations are PXE-enabled, you could move to Linux without ever physically touching the machine.
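The PXE step itself rides on ordinary network services: the workstation's DHCP reply tells it which server and boot file to fetch before any operating system is loaded. The fragment below is a generic ISC dhcpd stanza rather than ZENworks-specific configuration, and the addresses and file name are placeholders, but it shows the kind of pointer a PXE-enabled workstation follows to reach a boot or imaging server.

    # Hypothetical ISC dhcpd excerpt; addresses and file name are placeholders.
    subnet 192.168.10.0 netmask 255.255.255.0 {
        range 192.168.10.100 192.168.10.200;
        next-server 192.168.10.5;      # TFTP server holding the preboot environment
        filename "pxelinux.0";         # boot loader the PXE client downloads first
    }

Once the preboot environment is running, the imaging software can check whether a new image is assigned to the workstation and lay it down before the local operating system ever starts, which is the behavior described above.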
Applying Policies

A policy is a set of rules that defines how workstations, users, and servers can be configured and controlled, including application availability and access, file access, and the appearance and contents of individual desktops. Policies are contained within policy packages, where they are also administered and customized for multiple workstations at once. A policy package is a Novell eDirectory object containing one or more individual policies; it groups policies according to function, making them easier to administer en masse.
Using policies, administrators can automate many of the tasks and activities of workstation management. For example, rules check for dependencies or prerequisites before performing operations such as software installation or application updates. With eDirectory, administrators can make a policy change once and it is reflected across every object (workstation) associated with it.
Application Distribution

ZENworks supports multiple methods for getting software onto a workstation. The distribution process for an application might be as simple as creating a shortcut to an already installed network application, web application, or terminal server application, or it might be as complex as installing the application files on the workstation and modifying the workstation's registry, configuration settings, and drive path mappings. ZENworks provides the equivalent of Linux package management for Windows, with the capability to copy files, update the Registry, and fully configure the workstation for use. ZENworks also includes Application Launcher, which controls application access through icons that appear on the desktop and run local code, web applications, or network applications.
Hardware/Software Inventory

ZENworks can inventory workstation hardware and software, with the information stored in a database for system differencing, rollback, license management, and any number of other uses. Inventories can occur at scheduled intervals or on demand, and can also be used to verify system compatibility.
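The value of the ZENworks inventory is that it gathers and centralizes data that would otherwise have to be collected machine by machine. For a sense of what is being captured, the same raw facts can be pulled by hand on a single Linux workstation with standard commands; the short sequence below is a manual sketch of what an inventory agent automates, not ZENworks itself.

    # Installed software, kernel, CPU, memory, and disk layout on one machine
    rpm -qa
    uname -r
    cat /proc/cpuinfo
    cat /proc/meminfo
    df -h
    fdisk -l        # typically requires root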
Remote Management

If an administrator can't be at a workstation physically, she can be there virtually with ZENworks remote control. Remote control establishes a connection between the management console and the managed workstation, letting you provide user assistance and help resolve workstation problems. With a remote control connection, the remote operator can go beyond viewing the managed workstation to taking control of it, seeing problems firsthand or making configuration changes.

ZENworks' chief advantage is that it provides these management services not just for Linux workstations and servers, but also for Windows desktops and servers as well as NetWare servers. One interface manages multiple platforms from desktop to data center.
APPENDIX A
Open Source Case Studies

This appendix examines some real-world implementations of open source solutions. It looks at the transition Golden Gate University went through, as well as Novell and others.
Data Center Transition Case Study—Golden Gate University

Golden Gate University (GGU), located in the heart of San Francisco's financial district, is the central hub of California's fifth-largest private university, with seven different locations throughout the state. The university offers a variety of exceptional undergraduate and graduate programs in law, business management, information technology, and taxation, utilizing the most advanced technologies and learning tools available. GGU caters to adult working professionals, and its CyberCampus permits students to work and study on their own schedules using the Internet.

In 2002, GGU began a formal evaluation of its existing data center technologies in an effort to simplify and consolidate what had become an unwieldy IT environment. The university evaluated several technologies and selected Linux and open source to benefit from Java and from vendor relationships with Oracle and Dell. At the time of this writing, GGU is two years into a five-year consolidation plan, and the creation of a data center based on Linux and open source is 75% complete. To date, the biggest surprise for CIO Anthony Hill has been that there have been no major surprises. Linux and open source have not only allowed the university to
consolidate and simplify web services as originally planned, but also to move enterprise applications and databases, email, networking services, and edge services to Linux. The results include expanded capability, simplified management, reduced hardware and software costs, and far more efficient use of IT staff resources.
The Way Things Were

Similar to other organizations that have evolved through the growth of the Internet, GGU was using an eclectic mix of technologies. In-house solutions consisted of servers running Unix, NetWare, Solaris, Windows, and MPE/iX (the HP 3000 operating system). The university had five different databases running on five different hardware platforms, and was hosting web servers on Macintosh, Apache, Microsoft Internet Information Server, and Sun iPlanet. Hardware consisted of Macintosh, Compaq PCs, HP, and DEC Alpha. Some of the challenges encountered with this environment included the following:

■ Specialized expertise was required for each technology, and specialized skills were often not transferable. Some staff were overworked and others underutilized, with no cross-training for backup support.
■ Multiple vendor relationships contributed to complex licensing, maintenance and support agreements, and fees.
■ Incompatibilities existed between systems, with little integration and no common standards.
■ Disagreement and confusion existed regarding the strategic direction for new technology acquisitions.
At the time of the initial evaluation, the university's major choice was between Unix- and Windows-based solutions. GGU selected Unix based on the following criteria:

■ Web services—Strategically, Java and open standards-based solutions would allow GGU to create the types of applications and services that would best serve the needs of faculty, staff, and students during the five-year planning window. A Unix-like operating system such as Solaris or Linux provided this capability with full support of Java.
■ Oracle—GGU was already using Oracle enterprise resource planning (ERP) applications and had chosen Oracle as a strategic partner for database solutions. GGU wanted an operating system platform that would continue to support Oracle with performance and scalability, as well as provide new database capability that could be leveraged through web services. The Unix/Java/Oracle combination was optimal.
■ Cultural fit—As an academic institution with Unix power users, GGU was a natural fit for a Unix-like solution; it already had a library of Unix-based applications and services.
■ Consolidation potential—With a wide assortment of web solutions, data center applications, and network services, it was imperative that the final solution support all three of these areas on a single platform that could scale sufficiently.
■ Future choice—It was also important not to limit choice. GGU was particularly concerned about freedom of choice for ERP solutions and found that more independent software vendors (ISVs) support Oracle and Unix than Microsoft SQL Server and Windows.
As GGU proceeded with plans for consolidation, Linux became more mainstream and emerged as a viable rival to the original choice of operating system, Sun Solaris. GGU selected Dell as a vendor that could provide flexibility on lower-cost Intel hardware, and Dell's support and scalability on Linux helped convince GGU to reconsider Solaris. The university's cost analysis found that Linux on Intel was three to five times less expensive than Solaris on Sun. GGU also discovered that Linux talent and administration skills were more abundant and less expensive to develop than with other options.

Another factor that was not easy to quantify, but nonetheless significant, was the Linux "cool" factor. People were motivated to learn and implement Linux and open source technologies and were excited to be working with them. The barriers to both learning and implementation were eliminated by downloadable software, low-cost hardware, and abundant online documentation. GGU found that many on its internal staff had already experimented with Linux and were familiar with open source technologies.
How They Did It

The university's migration began with a six-month assessment phase to document each of its applications and services and evaluate them for functionality and portability. During the trial phase, the GGU team built test configurations for experimentation and performance analysis. The team then created full-scale systems and implemented them in a phased approach, with the respective owners signing off before migrating to full production. The phases of implementation roughly followed the Novell Consulting pattern outlined in previous chapters, beginning with the migration and consolidation
of web servers and edge services. Here's a short summary of the implementation of services and applications:

■ Web servers—Consolidated static-content websites to new servers running Linux, Apache, and Dell hardware (all hardware for the GGU data center is Dell)
■ Databases—Migrated many internally developed database applications to Oracle 9i on Linux
■ Edge services—Used Linux for several widely available edge services, including Checkpoint for firewall, Nagios for host monitoring, MRTG for router traffic monitoring, and others
■ Dynamic websites—Used Java to re-create dynamic websites that had been written for various platforms and applications and consolidated them on Linux, using Apache and Resin application servers and Oracle where needed
■ Course services—Converted several applications specific to developing and managing courses and teaching to Linux
As a longtime Novell customer, GGU has relied heavily on NetWare to provide file, print, and Dynamic Host Configuration Protocol (DHCP) services. With Novell Nterprise Linux Services (the forerunner of OES), these same services are available on the Linux platform with equivalent features. These technologies will also be part of Novell Open Enterprise Server, including print with Novell iPrint, directory services with eDirectory, management services with iManager, file services with iFolder, Network File System (NFS), Common Internet File System (CIFS), and Samba file sharing, and Virtual Office. The move to Linux has been transparent, with the ability to retain login scripts and drive mappings.

GGU used Novell GroupWise for email, collaboration, and personal planning. GroupWise also runs on Linux, with the added bonus of being able to take advantage of Linux clustering and failover capabilities. GroupWise supports POP/IMAP clients, web clients, or full-featured (fat) clients for Windows, Linux, and Macintosh—a great mix of options for a university audience. GGU will also implement Novell NetMail on Linux, a pure standards-based POP/IMAP mail server that is tightly integrated with Novell eDirectory for simplified user management. GGU has also implemented Novell iChain for identity-based web security services, enabling authorized users—including employees, customers, and partners—to securely authenticate from anywhere at any time.
Because of the prevalence of Windows education and productivity desktop applications, and the fact that Microsoft higher education pricing is favorable, GGU is not migrating desktops to Linux at this time. Many GGU students can be classified as power users who heavily use existing Microsoft spreadsheet, word processing, and presentation programs. However, the campus does host Linux-based kiosks that are used for registration and web access, and some classrooms have workstations with dual-boot capabilities for both Windows and Linux. GGU will continue to support its vertical market Windows applications, such as advancement and alumni systems, until Linux or open source systems become available. Using Novell ZENworks, GGU can combine server and desktop management for both Linux and Windows platforms for simplified administration with a single interface and a common database for hardware and software inventories.
How Novell Helped

When asked what value Novell delivers as a strategic partner, Hill had quick and positive answers: "Novell's product strategy matches our commitment to consolidation." These are some of the key benefits he mentioned:

■ Novell helps GGU consolidate not only to a single operating system platform, but also to strategic relationships with fewer vendors. Novell provides Linux, as well as a number of required network services and applications that run on Linux. GGU now has one OS platform in the data center and can work with one vendor for platform and network services.
■ Migrating from its old networking services platform (NetWare) to Linux was easy with migration tools, utilities, and management similarities for services on both platforms.
■ Qualified, commercially available technical support for Linux by Novell was a major benefit. With Novell's support of Linux and other open source technologies, along with Dell's hardware support for Linux, GGU had high service levels that eliminated any concerns about the quality of support available for an enterprise data center.
■ The end-to-end value chain from Novell was also a significant factor. With Novell's open source solution, Linux and open source are applied from desktop to data center. Identity management, network services, web services, application and desktop platforms, edge services—the entire chain is integrated, supported, and managed through Novell products.
Smooth Transition

As mentioned previously, when Hill was asked what unanticipated surprises Golden Gate had encountered so far during the transition, he said, "From a senior IT level, I have seen no surprises—that was more or less a surprise. Normally for an undertaking this extensive, there would have been at least a few major problems, but the migration has been largely transparent with no major issues."

Golden Gate did experience some positive surprises. Originally, the plan was to use Linux and open source only for web servers and for exposing database applications. However, after researching the solutions available, GGU was able to implement a broad range of Linux-based, open source services, including firewall, intrusion detection, and more. GGU has simplified IT management, with the staff sharing the same skill set and able to perform a broad range of services. This has eased workloads and provided human backup and failover options not previously available.
Novell Open Source Transition Case Study

Novell believes in using its own products. It has internally rolled out every new version of NetWare; it moved to WordPerfect when it acquired that company; it transferred all of its websites when it adopted Apache; even before the SUSE acquisition, it was well into the process of moving network and edge services to Linux; and when embracing open source, it moved all of its desktops to OpenOffice.org. This one move has saved Novell almost a million dollars per year in licensing fees to Microsoft.

The Novell environment includes users that vary widely in technical expertise, organizational roles, and application needs. Its 5,000+ users have approximately 12,000 workstations—more than half of which are laptops. The complexity of such an environment has given Novell ideal opportunities for testing a wide range of possibilities with Linux desktop adoption.

This section takes a look at Novell's migration to OpenOffice.org, the Novell Linux Desktop, and Linux in the data center, and outlines the steps taken, showing you what worked and what didn't. The experience was similar to Golden Gate University's in the surprising smoothness with which such an extensive change was made. Novell's desktop and data center migration strategies were separate, and each involved planning and direction setting, as depicted in Figure A.1.
FIGURE A.1
Novell Desktop and data center migration strategies.
(Diagram labels: Planning and Strategy; Direction Setting; Implementation; Desktop: Microsoft Office, Microsoft Windows, Office Productivity, Desktop Functionality; Data Center: Roadmap, Business Case, Architecture, Services, Applications, NT; SUSE Linux; Red Carpet Enterprise.)
The savings and advantages have been substantial, and if they benefit a global enterprise company such as Novell, other companies can enjoy the same gains.
The Desktop

The first phase of the migration to Linux on the desktop was to move from Microsoft Office products to open source tools. Moving to OpenOffice.org allowed users to make a gradual transition to open source without major disruptions. The Windows operating system remained in place, and Microsoft Office was retained temporarily after OpenOffice.org was installed in case users needed to revert for any reason. The similarities between OpenOffice.org and Microsoft Office allowed users to start using OpenOffice.org immediately without difficulty or much of a learning curve. Microsoft Office documents could be opened in OpenOffice.org, modified, and then saved in Microsoft Office format if needed.

The OpenOffice.org migration became a testing ground for the larger Linux desktop migration. It had many of the same change management activities and requirements—enterprise-wide communications, user training and support, and license management—but with much less risk and complexity. IS&T (Novell's IT group) was able to develop its migration processes and use this "safe" project to understand how the business would react. It also eased users into the full migration. As the next phase began and users migrated to the Novell Linux Desktop, they were already comfortable with OpenOffice.org as their main productivity application suite.

Novell found that OpenOffice.org users needed to be trained in policies for sharing files. The Document Conversion team defined the
standards for sharing documents inside and outside Novell and then submitted them to the executive steering committee for approval and enforcement. These standards were as follows:

■ Use native OpenOffice.org file formats for internal collaboration.
■ Use PDF files for distributing read-only information to customers and the press.
■ Use Microsoft formats when collaborating with external vendors and partners who don't use OpenOffice.org.
The question of whether to convert existing documents and templates to OpenOffice.org formats was also a concern. The Document Conversion team decided that mass conversion would not be necessary because OpenOffice.org can read Microsoft Office formats. Instead, the team created a decision tree to help users prioritize conversions. These priorities were based primarily on whether the document was in active use and whether it was a template.

Novell began its Linux desktop migration in mid-2003. IS&T recruited 160 users from all parts of the business to participate in an early adopter program. Their goal was to work through any issues and challenges that initially occurred in the process of doing their jobs on a Linux desktop. Early challenges related to hardware drivers (for example, wireless), tools (for example, CD burners), and applications. Using combined technology from Novell, Ximian, and SUSE Linux, these challenges were quickly overcome. The drivers, tools, and applications are now widely available, and IS&T created a library of standard images for current and legacy desktop/notebook platforms.

An applications team was created and tasked with determining the best route for migrating each business function to new, Linux-based, and open source applications. Having canvassed user needs, the team evaluated and prioritized applications by asking three questions:

1. Is there a native Linux version of the existing application? The simplest solution was a native Linux version of an existing application. For example, Novell iFolder, iPrint, GroupWise, and other Novell desktop clients all run on Linux. Novell Nterprise Linux Services provides the back-end functionality, running natively on Linux servers. These native Linux applications offered planners the quickest and easiest migration path, with little or no adjustment required as users moved from Windows to Linux.

2. If a native version is not available, is there a native Linux application that offers equivalent functionality? Linux versions of some applications are still in development. Certain applications—Microsoft applications, for example—may never run on Linux. In these cases, a growing number of well-designed, third-party or open source applications often provide a solid alternative. Migration planners assessed the functionality and usability of these alternative applications, chose applications that promised the easiest transition and greatest productivity, and initiated training and support programs to help employees make the transition.

3. If neither a native version nor an equivalent open source application is available, is there a viable interim solution? In the few cases in which no reasonable alternative existed, interim solutions were chosen to fill the gap until a permanent Linux solution could be implemented—a solution that matched up with all the required business processes. Depending on the need, these interim solutions included terminal services that emulate Windows in a Linux environment or dual-boot systems that allow Linux and Windows to coexist.

Here's a snapshot of what happened:
■ Within 12 months, 90% of users were using OpenOffice.org as their main productivity suite, and all new internal corporate documents, spreadsheets, and presentations were created in OpenOffice.org formats.
■ Within 15 months, 50% of users were on the Novell Linux Desktop.
■ By the end of 18 months, the rest of the 5,000+ Novell users had moved to the Novell Linux Desktop.
At this point, Novell was able to eliminate all license fees to Microsoft for desktop Windows operating systems and applications.

As soon as Novell announced its internal initiative to migrate to Linux, top executives organized themselves into a cross-functional steering committee. This executive steering committee then defined a series of business "tracks" for each of the key functional areas deemed critical for a successful migration. Company leaders organized teams to manage each track and to ensure the attainment of critical mileposts along the migration path. Business tracks and team responsibilities were as follows:
■ External Communications—To communicate the Novell migration strategy to customers and explain how it will benefit them
■ Internal Communications—To promote an esprit de corps around the Linux migration, provide ongoing progress updates, and create feedback loops and user collaboration opportunities
■ Applications—To categorize user roles and application needs, develop a rollout strategy, and provide application support on the Novell Linux Desktop
■ Training—To create training programs for new Linux and open source applications and provide how-to guides, FAQs, and web-based reference materials
■ Solutions—To create and document Linux and open source migration methodologies for OpenOffice.org, the desktop, and the data center
■ Desktop—To install OpenOffice.org on Windows and define the Novell Linux Desktop, including standard applications and services, conversion tools and images; deliver deployment guides; and track migration progress
■ Support—To assign and train support personnel, develop support practices, create and maintain a knowledge base of problems and resolutions, and filter bugs and fixes back into the open source community
■ Document Conversion—To create a strategy and provide resources for corporate-wide document and template conversion and make migrated documents available through knowledge repositories for OpenOffice.org applications
■ Program Management—To coordinate project tracks, maintain project plans, verify progress, and report to migration and steering teams
This strategy made it possible for multiple tracks to be executed in parallel for maximum speed and efficiency, while giving each team the autonomy needed to pursue the best solutions. Lines of authority extended down from the executive steering committee, which set the overall direction for the project. The Program Management team made sure that each team stayed on target and provided the support needed by other teams. Regular status reports ensured that milestones were being met and that problems were identified and escalated to the teams best equipped to solve them. This coordinated planning across business tracks produced a road map for deploying open source software in phases, both in the data center and on the desktop, in region-by-region and group-by-group rollouts. As a result, the IS&T support team reported an average of only 10 support calls per day for Linux and 2 per day for OpenOffice.org, a number much lower than expected.
The Data Center

In the data center, the first servers migrated were FTP servers running on HP-UX. Other early candidates were Oracle databases, followed by Novell eDirectory
and web production servers. The entire Novell web presence—external and internal—is now hosted on three Intel servers running Apache/Tomcat on SUSE Linux. Novell then developed Nterprise Linux Services, which IS&T now uses for identity management, resource management, and some file and print services. All these well-planned migrations happened without any disruptions to user productivity.

Applications that had native Linux versions were migrated first. Novell iFolder, for example, is used heavily at Novell, and most installations were running on NetWare even though iFolder was available on Linux as well as Windows. The time required to move all iFolder installations from NetWare to Linux for the entire company, from start to finish, was just four hours. The process was transparent to users and was completed without a hitch.

Like Golden Gate University, Novell has migrated database applications that run on Oracle to Linux. At the time of this writing, ERP applications from Siebel and PeopleSoft are still in the Novell evaluation stage and pending development by these vendors. Novell IS&T is also developing a road map for migrating other data center applications to Linux, including enterprise applications such as ERP, payroll, resource planning, and expense reporting. These migrations are more complex and require additional planning and coordination across business units and geographies. Novell has proven the benefits of Linux in its own data center and is finding that broad Linux adoption is possible and worthwhile if it is well planned and focused on business benefits and priorities.

As a development company, Novell relies heavily on bug tracking and problem reporting, and it is in the process of adopting Bugzilla for software development, which will also allow it to eliminate an existing proprietary solution. It is worth noting that the standard version of Bugzilla lacks certain functions that Novell requires; Novell is adding this functionality internally and contributing it back to the open source community.

Novell's entire management interface for IS&T is web-based. Server monitoring and reporting applications generate real-time statistics that are aggregated to a central point so that staff can tell instantly whether problems exist anywhere in the world. Because it is a web-enabled service, IS&T staff can access all this information from any standard web browser, from any place on the Internet, with the proper authentication credentials.

The benefits of Linux and open source in the data center have been so substantial that when considering any major system or application upgrade, Novell IS&T has adopted a strategy of "Linux unless it can't be." When other departments in Novell acquire new software, IS&T requires that they investigate their Linux and open source options. If there is a viable Linux version, it is the only version IS&T will support.
As a global enterprise company, Novell has successfully implemented Linux—desktop to data center—and gained advantages in reduced licensing fees, lower hardware costs, simplified management, and better reliability and scalability—all the standard Linux and open source benefits. Novell has seen firsthand why this technology and the services available with it are so valuable to organizations of all sizes. For Novell, eating its own dog food in the move to Linux and open source has been surprisingly palatable!
Other Businesses

GGU is just one of the many Novell customers implementing Linux and open source technologies in a major way. The following are highlights of how other Novell customers are using Linux and open source.

NOTE
For more customer stories, see http://www.novell.com/success.
Burlington Coat Factory

Burlington Coat Factory has been running its enterprise on Linux for several years, replacing the "green screens" in its retail stores with web-based applications. But the company needed to replace its large-scale servers, which were obsolete, complex, and difficult to manage. Implementing clustered Intel servers throughout its enterprise provided better performance while reducing administration time. By running its enterprise on SUSE Linux Enterprise Server, Burlington reduced its hardware costs tenfold from its previous Unix environment, while creating a more reliable infrastructure. SUSE Linux provides greatly improved uptime, particularly during peak transaction times around holidays, as well as the scalability to support multiple Oracle databases and a variety of storage devices.

"Without SUSE Linux, we wouldn't be as far along in deploying a heavy-duty data center," said CIO Michael Prince. "We also would have been more constrained by costs and would have ended up spending more money on a system not nearly as robust. SUSE Linux gives us a level of reliability that retailers never could have afforded before."
Overstock.com

Overstock.com Inc. is an online "closeout" retailer offering discount, name-brand merchandise for sale over the Internet. The company offers its customers
an opportunity to shop for bargains conveniently while offering its suppliers a global distribution channel. Overstock has recently expanded its technology to accommodate online auctions, providing an online alternative to eBay.

"Strategically, we have three goals in selecting IT components: stability, performance, and, to a lesser degree, expense," said Sam Peterson, Overstock's Director of Network Engineering. "Novell SUSE Linux made all three happen."

Overstock.com has more than 70 SUSE Linux Enterprise servers in production, processing millions of web requests a day. All internal users and external customers hit a SUSE Linux server when they interact with the company. Overstock.com is also developing new in-house applications on Linux. With the amount of software that comes with SUSE Linux, IT staff don't need to seek out or compile new tools. Novell SUSE Linux has been simple to learn and administer.

"We don't have 'official' training programs for SUSE," said Jon Fullmer, Overstock's Lead Network Engineer. "We simply haven't needed one. The evolution of SUSE Linux and YaST almost eliminated the need for training of our IT staff. If we bring in Unix administrators, they can find what they need in YaST and we don't need to retrain them."
Index
A active contents, generating, 106 adequate technology strategies (vendor lock-in examples), 67-68 administration (open source)
advantages of, 193 eDirectory (Novell), 196-197 iManager (Novell), 198 RPM, 198-199 YaST, 194 ZENworks (Novell), 200-204 administration architectures (Linux security), 60 Allow Unsupported Drivers option (NetWare 6.5 SP3 installation CD)
Deployment Manager upgrades, 184 iManager 2.0 upgrades, 185 Server Console upgrades, 183 Alpha platform port (Linux), 36 Alpha website, 36 Always Copy the Source File option (SCU), 189 AMD64, SLES support, 83 Amerada Hess, Corp., reducing open source hardware costs, 55 Apache HTTP Server. See Apache web server
Apache web server, 42, 105
authentication, 106 expanded functions of, 106 open source development, 13, 16 technical support bug reports, 61 discussion logs, 61 documentation, 60 FAQ, 61 mailing lists, 61 product downloads, 61 training, 61 webrings, 61 application architectures (migration plan development), 156 application servers
coexistence strategies, 119 defining, 111 exteNd Application Server (Novell) data source objects, 118 EJB 1.1, 117 enterprise data connectors, 118 features of, 116-118 Imperial Sugar Company usage example, 118 J2EE platform services, 117 jBroker MQ, 117 jBroker ORB, 117 JSP 1.1, 117 Servlets 2.2, 117 JBoss Application Server, 118 proprietary application servers, 119 technical support, 120 Tomcat Application Server (Jakarta), 118 web server integration, 106-107 web services, 113-114
application stack example (open source code), 19
data organization, 21 desktop solutions, 21 information organization, 20 information presentation/consumption, 20 inter-stack functionality, 20 intra-stack functionality, 20 network services, 20 open source solutions, 21 OS, 20 applications
distributing, ZENworks (Novell), 204 transition strategy deployment, 158 ARP (Address Resolution Protocol) NetWare client tool, 148 assessment phase (Linux/open source integration framework model), 77-78 assessment studies (transition strategies), 155 AT&T, history of Unix, 8 ATP (AppleTalk Protocol Suite), accessing Linux file servers, 88 Attribute Dates filters, SCU project files, 189 Attribute filters, SCU project files, 189 authentication, Apache web servers, 106 authentication services, clustering, 56 Automatically Reboot option (NetWare 6.5 SP3 installation CD)
Deployment Manager upgrades, 184 iManager 2.0 upgrades, 185 Server Console upgrades, 183
case studies
B Backup the Server Boot Directory Files option (NetWare 6.5 SP3 installation CD)
Deployment Manager upgrades, 184 iManager 2.0 upgrades, 185 Server Console upgrades, 183 backups, OES installation, 178 Behlendorf, Brian, open source development, 13 Bell Laboratories, history of Unix, 8 Beowulf, 39
clustering, 56 open source hardware costs, reducing, 55 BIND (Berkeley Internet Name Domain), 103 bindery, 146 bootable partitions, creating, 179 Boston Consulting Group, open source developer demographics study, 70 branch offices, Nterprise Branch Office (Novell), 171
central office server integration, 173 cost effectiveness of, 174 directory/file synchronization, 172 Internet connections, 174 technical support, 174 user authentication, 172 BSD (Berkeley Software Distribution)
BSD Licenses versus GPL licenses, 52 open source software development, 16 Unix, history of, 8, 10 bug fixing (open source software development process), 17
bug reports (Linux technical support), 61 Bugzilla, 61, 215 Burlington Coat Factory transition case study, 216 business appliances, thin-client solutions
client management, 169 network booting options, 169 single-purpose configurations, 168-169 thin-client hardware, 170-171 BusinessWeek, open source development, 74
C C programming language, history of, 8 CA (certification authority), Linux network services, 35 caching, 98
DNS caching, 103 Squid, 99 case studies
Burlington Coat Factory transition, 216 GGU (Golden Gate University), 205 evaluating Novell assistance, 209 initial environment evaluation, 206 service/application implementation, 208 transition solution selection criteria, 206-207 vendor selection, 207 Novell transition data center migration, 215 Linux Desktop 211-213 OpenOffice.org migration, 211
Overstock.com Inc. transition, 216-217 transition case study website, 216 “Cathedral and the Bazaar, The,” 12, 16 CD-ROM drives, OES installation requirements, 176 cell phones, thin-client business applications, 171 certified security feature (SLES), 82 CGI (Common Gateway Interface), generating active contents, 106 CGL (Carrier Grade Solution, SUSE Linux, 58 CIFS (Common Internet File System), 88 CIM, SLES, 82 CITI (Center for Information Technology), University of Michigan, LSP, 57 CKRM, SLES, 82 CLI virtual machine (Mono), 138 closed source code, dual source solutions, 22 clustering
authentication services, 56 Beowulf, 56 Heartbeat, 56 SLES, 82 clustering services
data centers, 128 high availability clusters availability summary chart, 122 dedicated data connections, 121 failover configurations, 121 Heartbeat connections, 121 storage configurations, 122
high-performance clusters, 122 high-speed connections, 123 parallelized applications, 123 usage examples, 120 scalability, 124 code forking (open source software development process), 16 coexistence strategies (applications), OES desktop transition strategies, 161 collaboration services, transition strategy deployment, 157-158 collaboration solutions (Linux), 133-137 commercial support (Linux), 62-54 communication (open source development), 73 component wizards (exteNd Workbench), 115 Conectiva, 39 CONFIG NetWare IP/IPX tool, 148 ConsoleOne NetWare management tool, 147 consolidation, open source advantages, 32 consultancies (open source profitability), 26-27 consulting services (Linux technical support), 62 content filtering, 100 Copy the Source File If It Is Newer option (SCU), 189 copyright law, open source development, 11 cost
open source, 32 hardware costs, reducing, 54-55 software, 48-50 internal development resources, 140-141 power workstations, 142
CRM applications (Siebel), 132 CUPS (Common Unix Printing System)
Internet printing, 94, 97 locating printers, 96 printer installation, 95 Samba, 97 CVS (Concurrent Versions System), open source software development, 16
D data centers
clustering, 128 file storage, 128 grid computing, 126 hyperthreading, 125-126 Linux platform support, 128-129 NUMA (Non-Uniform Memory Access), 125 OES (Open Enterprise Server) transition strategy, 154 advantages of, 158-159 assessment studies, 155 deployment, 155-159 symmetric multiprocessing architectures, 124 terminal services, 127 thin-client workstations, creating, 127 data management (Linux), 35 data organization, application stack example, 21 database servers as read-only mirrors, 101 databases, 108
commercial solutions DB2 (IBM), 110 Informix (IBM), 110
Oracle, 110 Sybase, 110 website, 110 implementing, 111 MySQL, 46, 108-109 PostgreSQL, 45, 108 transition strategy deployment, 157 DB2 commercial database solution (IBM), 110, 131 DB2 Universal Database (IBM), 132 de Icaza, Miguel
Linux, history of, 12 Mono, development of, 45, 139 Debian, 38, 51 Debian Social Contract, 51 DEBUG NetWare IP/IPX tool, 148 Debugger (exteNd Workbench), 115 defect densities (open source development), 73 Department of Defense (DOD), history of Unix, 9 deploying transition strategies, 156
applications, 158 databases, 157 directory/identity management, 156 edge services, 157 file/print services, 157 groupware services, 157-158 mail services, 157-158 technical support, 158 web services, 157 Deployment Manager
NetWare server in-place upgrades, 184-185 starting, 178 design phase (Linux/open source integration framework model), 78
desktop services (Linux)
GNOME, 40 KDE, 40 Mozilla, 41 OpenOffice.org, 41 Xfree86, 41 desktop solutions, application stack example, 21 desktops
OES (Open Enterprise Server) transition strategy, 159 application coexistence strategies, 161 application usage, 162 future trends, 163 office productivity applications, 160 OS replacement, 161-162 thin-client applications, 160-161 user sophistication, 163 thin-client desktops, OES (Open Enterprise Server) desktop knowledge worker transition strategies, 166 desktop knowledge workers, OES (Open Enterprise Server) transition strategies, 164
Evolution (Novell), 167 Mozilla, 167 NLD (Novell Linux Desktop), 166-167 Nsure Identity Manager (Novell), 165 OpenOffice.org, 167 thin-client desktops, 166 Virtual Office (Novell), 165 ZenWorks, 165 developers as users (open source development), 73
development infrastructures (open source software development process), 16 development resources
internal, 137 costs, 140-141 exteNd (Novell), 139 Mono, 138 open source concerns, 33 development tools (Linux)
Eclipse, 44 GCC, 44 Mono, 44 Perl, 43 PHP, 43 Python, 43 device management (Linux), 35 DFSG (Debian Free Software Guidelines), 51 DHCP (Dynamic Host Configuration Protocol), 35, 103-104 directory environments (migration plan development), 156 directory management (transition strategies), 156 directory services
characteristics of distributed systems, 197 hierarchies, 196 inheritance, 196 standards support, 197 eDirectory (Novell), 196-197 iManager (Novell), 198 discussion logs (Linux technical support), 61 diskless workstations, thin-client business applications, 170 distributed systems (directory service characteristics), 197
distribution (open source profitability), 25 distributions (Linux)
availability of, 37 Conectiva, 39 Debian, 38 Gentoo Linux, 38 Mandrakelinux, 38 Red Flag Linux, 39 Red Hat Linux, 37 SUSE Linux, 37 Turbolinux, 38 United Linux, 39 DNS (Domain Name Servers)
BIND, 103 caching, 103 email solutions, 134 documentation (Linux), 46, 60 DOD (Department of Defense), history of Unix, 9 Domino (IBM), 131 Don’t Copy Over Existing Files option (SCU), 189 DOS partitions, OES installation requirements, 176 down server upgrades, 182, 186 downloads (Linux technical support), 61 DRBD (Distributed Replicated Block Device), SLES, 82 drive imaging, ZENworks (Novell), 203 drive mapping, 93 dual source solutions, 22
E Eclipse, 44 Eddie Project, 101 edge data, 171
edge services
caching, 98-99 content filtering, 100 defining, 97 DHCP, 103-104 DNS, 103 firewalls, 99 load balancing, 100-101 migration plan development, 155 security gateway software, 102 static routing tables, 104 transition strategy deployment, 157 VPN, 100 eDirectory (Novell), 196-197
development of, 146 iPrint, managing, 96 OES installation, 177-178 open source implementation, 23 education (open source profitability), 27-28 EJB 1.1 (Enterprise JavaBeans 1.1), exteNd Application Server, 117 EM64T (Intel), SLES support, 83 email solutions, 133
DNS, 134 email clients, 134 IMAP message transfer agents, 134 integrating, 136-137 LDAP, 134-137 mail clients, 135 mail protection, 135 POP message transfer agents, 134 Postfix, 135 Sendmail, 135 SMTP message transfer agents, 134 web access, 134 email, vendor lock-ins, 66
emulators
emulators (Windows), OES desktop transition strategies, 161 enterprise applications
IBM, 131 implementing, 133 Oracle, 130-131 PeopleSoft, 132 SAP, 132 Siebel, 132 website, 133 Enterprise Linux Group, 57 enterprise volume manager (SLES), 82 ERP applications (Siebel and PeopleSoft), 215 eServer ZSeries (IBM), 36 Ettrich, Mattias, 40 Evolution (Novell), OES (Open Enterprise Server) desktop knowledge worker transition strategies, 167 eWeek, vendor lock-ins, 65 Ext 2/3 file systems, Linux file storage, 89 EXT traditional file system, 177 exteNd Application Server (Novell), 139
data source objects, 118 EJB 1.1, 117 enterprise data connectors, 118 features of, 116-118 Imperial Sugar Company usage example, 118 J2EE platform services, 117 jBroker MQ, 117 jBroker ORB, 117 JSP 1.1, 117 Novell Partner Portal, 140 Secure Enterprise Dashboard Portal, 139 Servlets 2.2, 117
exteNd Workbench (Novell), 114
accepted J2EE archive formats, 116 component wizards, 115 Debugger, 115 deployment tools, 115 graphics/text-based editors, 115 IDE integration, 116 jBroker web, 115 Migration wizard, 115 project tools, 115 project views, 115 Registry Manager, 115 UDDI tools, 115 version control integration, 115 web service facilities, 115 web service wizard, 115
F FAQ (Frequently Asked Questions), Linux technical support, 61 feedback (open source code), 17-18 file services, 87
drive maps, 93 file storage Ext 2/3 file systems, 89 JFS file systems, 89 NAS, 91 NSS, 92 ReiserFS file systems, 89 XFS file systems, 89 file synchronization, 89 file uploads/downloads, 89 Linux file servers, 88 Linux to Linux file sharing, 89 network file access via web browser, 91
SAN creation, 91 transition strategy deployment, 157 Unix to Unix file sharing, 89 Windows file servers accessing from Linux clients, 88 file storage
data centers, 128 Ext 2/3 file systems, 89 JFS file systems, 89 NAS, 91 NSS, 92 ReiserFS file systems, 89 XFS file systems, 89 file systems, journaling, 177 Finance Foundation (IBM), 131 Firefox (Mozilla), desktop transition strategies, 160 firewalls, 99 flexibility
lack of (vendor lock-in examples), 68 power workstations, incorporating into, 142 Forge (Novell) website, 48 Forrester Research, Linux technical support, 60 free software versus open source, 10 Free Software Foundation, 11, 51 Freshmeat website, 47 Friedman, Nat, history of Linux, 12 FTP (File Transfer Protocol)
Linux file uploads/downloads, 89 Linux network services, 35
G GCC (GNU Compiler Collection), 44 Gentoo Linux, 38 GGU (Golden Gate University) transition case study, 205
initial environment evaluation, 206 Novell assistance, evaluating, 209 service/application implementation, 208 transition solution selection criteria, 206-207 vendor selection, 207 Globus Toolkit, grid computing, 127 GNOME (GHU Network Object Model Environment), 40 GNU, 51-52 GNU Manifesto, 10 GPL (General Public License), 51
Linux, history of, 12 open source, development of, 9-10 versus BSD License, 52 Granneman, Scott, Linux security, 59 graphics editors, exteNd Workbench (Novell), 115 grid computing, 126 groupware services, transition strategy deployment, 157-158 GroupWise (Novell), 136
GGU (Golden Gate University) case study, 208 integrating, 137 Gutmans, Andi, PHP, 43
full system backups, OES installation, 178 Fullmer, Jon, Overstock.com Inc. transition case study, 217
H hardware
costs open source advantages, 32 reducing, 54-55 leverage (open source profitability), 26 OES installation requirements, 176 headless servers, 176 Heartbeat, high-availability clustering service connections (Linux), 121 hierarchies (directory service characteristics), 196 high-availability clustering services (Linux)
availability summary chart, 122 dedicated data connections, 121 failover configurations, 121 Heartbeat connections, 121 storage configurations, 122 high-performance clustering services (Linux), 122
high-speed connections, 123 parallelized applications, 123 usage examples, 120 Hill, Anthony, GGU (Golden Gate University) transition case study, 205, 209 hosting (virtual), Apache web servers, 106 Hotplug, SLES, 82 hyperthreading, 125-126
I IA-64 platform port (Linux), 36 IBM enterprise applications, 131 IBM Power, SLES support, 83
IBM Public License, 52 IBM T.J. Watson Research Center, Enterprise Linux Group, 57 iChain (Novell), GGU (Golden Gate University) case study, 208 ICP (Internet Cache Protocol), proxy caches, 99 identity management, 36, 156 iFolder (Novell), file synchronization, 89 iManager (Novell), 198 iManager 2.0 (Novell), NetWare server in-place upgrades, 185-186 iManager NetWare management tool, 147 IMAP (Internet Message Access Protocol) message transfer agents (email solutions), 134 iMonitor NetWare management tool, 147 Imperial Sugar Company usage example (exteNd Application Server), 118 implementation phase (Linux/open source integration framework model), 79 in-place upgrades (NetWare servers), 181
down server upgrades, 186 from Deployment Manager, 184-185 from iManager 2.0, 185-186 from Server Console, 182-184 informal responsibility systems (open source software development process), 15 information organization, application stack example, 20 information presentation/consumption, application stack example, 20 Informix commercial database solution (IBM), 110
inheritance (directory service characteristics), 196 installing
NetWare startup files, 179 OES, 175 choosing file systems, 177 configuring server environment, 180-181 creating boot partitions, 179 creating SYS volumes, 179 eDirectory design, 177-178 full system backups, 178 installing NetWare startup files, 179 Linux versus NetWare, 176 steps of, 179 system requirements, 176 SCU, 187-188 Intel EM64T, SLES support, 83 Intel Itanium, SLES support, 83 inter-stack functionality, application stack example, 20 internal development resources, 137
costs, 140-141 exteNd (Novell), 139 Mono, 138 Internet
open source development, 11 printing, 94, 97 printer installation, 95 printers, locating, 96 intra-stack functionality, application stack example, 20 inventory hardware/software (ZENworks), 204 IP/IPX tools (NetWare), 148 IPCONFIG NetWare client tool, 148 IPP (Internet Printing Protocol), 94-95
iPrint (Novell)
configuring, 96 Internet printing, 94, 97 locating printers, 96 printer installation, 95 managing, 96 iPrint Driver Stores, 96 IPSec (Internet Protocol Security), open source security, 100 iSCSI (Internet Small Computer Systems Interface) protocol, SAN creation, 91 ISV (independent software vendors), Linux technical support, 62 IT Manager’s Journal, reducing software costs, 50 Itanium (Intel)
SLES support, 83 website, 36 Itanium platform port (Linux), 36
J-K
J2EE (Java 2 Enterprise Edition)
exteNd Application Server (Novell), 117 web services, 113 Jakarta Tomcat, 13 Jakarta Tomcat Application Server, 118 JavaMail 1.1, exteNd Application Server, 117 JAX-RPC (Java API for XML-Remote Procedure Call), web services, 114 JBoss Application Server, 14, 42, 118 jBroker MQ (exteNd Application Server), 117 jBroker ORB (exteNd Application Server), 117 jBroker web (exteNd Workbench), 115 JDBC 2.0 (Java Database Connectivity), exteNd Application Server, 117
JFS (Journaling File System), 177
data center file storage, 128 Linux file storage, 89 JNDI 1.2 (Java Naming and Directory Interface), exteNd Application Server, 117 job management (Linux), 35 journaling file systems, 177 Joy, Bill, history of Unix, 8 JSP 1.1 (JavaServer Pages), exteNd Application Server, 117 JTA 1.0 (Java Transaction API), exteNd Application Server, 117 KDE (K Desktop Environment), 40 kernels, application stack example, 20 knowledge workers, OES (Open Enterprise Server) desktop transition strategies, 164
Evolution (Novell), 167 Mozilla, 167 NLD (Novell Linux Desktop), 166-167 Nsure Identity Manager (Novell), 165 OpenOffice.org, 167 thin-client desktops, 166 Virtual Office (Novell), 165 ZENworks, 165
L
LAMP solutions, 107 LDAP (Lightweight Directory Access Protocol), 36, 134-137, 197 LDP (Linux Documentation Project), 46 Lerdorf, Rasmus, PHP, 43 LGPL (Lesser General Public License), 51-52
license management (software), 50
license templates, 51-52 open source advantages, 31 simplifying, 53 license templates
BSD License, 52 GPL, 51 IBM Public License, 52 LGPL, 51-52 MIT License, 52 MPL, 52 licensing fees (software), 50 Linux
application servers coexistence strategies, 119 defining, 111 exteNd Application Server (Novell), 116-118 proprietary application servers, 119 technical support, 120 clustering services, 120 data centers, 128 high-availability clusters, 121-122 high-performance clusters, 122 parallelized applications high-performance clusters, 120, 123 scalability, 124 collaboration solutions, 133-137 data centers clustering, 128 creating thin-client workstations, 127 file storage, 128 grid computing, 126 hyperthreading, 125-126 NUMA
parallelized applications NUMA, scalability, 125 platform support, 128-129 symmetric multiprocessing architectures, 124 terminal services, 127 data management, 35 databases MySQL, 46 PostgreSQL, 45 desktop services GNOME, 40 KDE, 40 Mozilla, 41 OpenOffice.org, 41 Xfree86, 41 developers demographics study, 70 developmental process, 74 communication, 73 defect densities, 73 developers as users, 73 overview of, 71 peer reviews, 73 motivations for, 69 O’Reilly Open Source Convention, 70 development tools Eclipse, 44 GCC, 44 Mono, 44 Perl, 43 PHP, 43 Python, 43 device management, 35 distributions availability of, 37 Conectiva, 39
Debian, 38 Gentoo Linux, 38 Mandrakelinux, 38 Red Flag Linux, 39 Red Hat Linux, 37 SUSE Linux, 37 Turbolinux, 38 United Linux, 39 documentation, LDP, 46 edge services caching, 98-99 content filtering, 100 defining, 97 DHCP, 103-104 DNS, 103 firewalls, 99 load balancing, 100-101 security gateway software, 102 static routing tables, 104 VPN, 100 enterprise applications IBM, 131 implementing, 133 Oracle, 130-131 PeopleSoft, 132 SAP, 132 Siebel, 132 website, 133 features of, 35 file services, 87 drive maps, 93 file storage, 91-92 file synchronization, 89 file uploads/downloads, 89 Linux file servers, accessing, 88 Linux to Linux file services, 89 network file access via web browser, 91
SAN creation, 91 supported file storage formats, 89 Unix to Unix file services, 89 Windows file servers, accessing from Linux clients, 88 history of, 11 identity management, 36 job management, 35 LDAP, 36 messaging solutions, 133 DNS, 134 email clients, 134 IMAP message transfer agents, 134 integrating, 136-137 LDAP, 134 LDAP directory servers, 135 LDAP directory servers, integrating, 137 mail clients, 135 mail protection, 135 POP message transfer agents, 134 SMTP message transfer agents, 134 web access, 134 network services, 35 OES installation, 176 platform ports, 36 power workstations, 141-142 print services, 35, 93-97 reliability, 58 scalability, 55 clustering, 56 load balancing, 57 SMP, 57
security, 35 administration architectures, 60 firewalls, 99 package installation, 60 viruses, 59 VPN, 100 server services Beowulf, 39 Samba, 40 SUSE Linux grid computing, 127 SLES, 82-87 SUSE Linux Desktop, 84-87 workgroup databases, 108-109 technical support availability, 64 bug reports, 61 commercial support, 62-64 discussion logs, 61 documentation, 60 FAQ, 61 ISV, 62 mailing lists, 61 OEM, 62 product downloads, 61 training, 61 webrings, 61 user interface, 35 web applications/servers, 42 web servers Apache, 105-106 application server integration, 107 application server solutions, 106 LAMP solutions, 107 root HTML directories, 105 uses of, 107-108
web services components of, 113 defining, 111 example of, 112 exteNd Workbench (Novell), 114-116 implementing, 119 J2EE, 113 JAX-RPC, 114 SOAP, 113-114 UDDI, 114 WSDL, 113-114 XML, 113 workgroup databases commercial solutions, 110 implementing, 111 MySQL, 108-109 PostgreSQL, 108 YaST, 194 Linux 2.6 kernel (SLES), 82 Linux Desktop
Novell migration to application evaluation prioritization, 212-213 defining document sharing standards, 211 defining/initiating business tracks, 213-214 OES desktop transition strategies, 161-162, 166-167 Linux Scalability Project, 57 Linux/open source integration framework model (Novell Consulting), 76
assessment phase, 77-78 design phase, 78 implementation phase, 79 support phase, 80-81 training phase, 79
load balancing, 57, 100-101 LSP (Linux Scalability Project), 57
M
M68K platform port (Linux), 36 Macintosh viruses, 59 mail services, transition strategy deployment, 157-158 mailing lists (Linux technical support), 61 maintainers, 15 Major, Drew, history of NetWare, 145 management tools
migration plan development, 156 NetWare, 147 Mandrakelinux, 38 mapping drives, 93 marketecture, 20 MDAC (Microsoft Data Access Components) website, 187 mediation software, 88 messaging solutions, 133
DNS, 134 email clients, 134 IMAP message transfer agents, 134 integrating, 136-137 LDAP, 134-137 mail clients, 135 mail protection, 135 POP message transfer agents, 134 Postfix, 135 Sendmail, 135 SMTP message transfer agents, 134 web access, 134 Messman, Jack, history of NetWare, 147 Microsoft vendor lock-ins, 66
migrating from NetWare and Windows Servers, 190 migration plans (transition strategies), 155-156 Migration wizard (exteNd Workbench), 115 MIT License, 52 mobile phones, thin-client business applications, 171 Moglen, Eben, copyright law, 11 Mono, 44
CLI virtual machine, 138 development of, 139 NET, 138 website, 139 Mozilla, 41
MPL, 52 OES (Open Enterprise Server) desktop knowledge worker transition strategies, 167 open source, development of, 13 Mozilla Firefox, desktop transition strategies, 160 MPL (Mozilla Public License), 52 MULTICS, 7 Multimedia Kiosk (IBM), 131 MySQL, 46, 108
Associated Press implementation example, 109 open source, development of, 14 phpMyAdmin database management tool, 109
N
NAS (Network-Attached Storage), creating, 91 NCSA (National Center for Supercomputing Applications), open source, development of, 13
NDS (Novell Directory Services), development of, 146 Neibaur, Dale, history of NetWare, 145 NET (Mono), 138 Netatalk, 88 NetDevice NAS, 91 NetMail (Novell), GGU (Golden Gate University) case study, 208 NetsEdge Research group, remote/branch offices, 171 NETSTAT NetWare client tool, 148 NetStorage Gadget (Novell), 91 NetWare
client tools, 148 history of, 145-147 IP/IPX tools, 148 management tools, 147 OES installation, 176 servers in-place upgrades, 181-186 migrating from, 190 server consolidation upgrades, 181, 187-190 startup files, installing, 179 NetWare 4.2, down server upgrades, 182 NetWare 6.5 SP3 installation CD
Allow Unsupported Drivers option Deployment Manager upgrades, 184 iManager 2.0 upgrades, 185 Server Console upgrades, 183 Automatically Reboot option Deployment Manager upgrades, 184 iManager 2.0 upgrades, 185 Server Console upgrades, 183
Backup the Server Boot Directory Files option Deployment Manager upgrades, 184 iManager 2.0 upgrades, 185 Server Console upgrades, 183 Specify the Upgrade Type option Deployment Manager upgrades, 184 iManager 2.0 upgrades, 185-186 Server Console upgrades, 183 NetWare 6.5 Support Pack 3
installing, 180 NetWare server upgrades from Deployment Manager, 184 from iManager 2.0, 186 from Server Console, 183 NetWare Installation Wizard
NetWare 6.5 Support Pack 3, 180 Open Enterprise Server 1.0, installing, 180 server environment, configuring, 180-181 NetWare Migration Wizard, 190 Network File Gadget (Novell), 91 network files, accessing via web browsers, 91 network services, 35
application stack example, 20 migration plan development, 156 NIC (network interface cards), OES installation requirements, 176 NLD (Novell Linux Desktop), OES (Open Enterprise Server) desktop knowledge worker transition strategies, 166-167 NLM (NetWare loadable modules), development of, 146 Noorda, Ray, history of Linux, 12
Novell
as Linux commercial support, 62-64 open source implementation, 23-24 open source migration, reasons for choosing, 149-150 vendor lock-ins, 68 Novell Consulting, Linux/open source integration framework model, 76
assessment phase, 77-78 design phase, 78 implementation phase, 79 support phase, 80-81 training phase, 79 Novell Press website, 150 Novell Security Manager, 102 Novell transition case study
data center migration, 215 Linux Desktop migration application evaluation/prioritization, 212-213 defining document sharing standards, 211 defining/initiating business tracks, 213-214 OpenOffice.org migration, 211 NSLOOKUP NetWare IP/IPX tool, 148 NSS (Novell Storage Services)
creating, 92 data center file storage, 128 NSS journaling file system, 177 Nsure Identity Manager (Novell), OES (Open Enterprise Server) desktop knowledge worker transition strategies, 165 Nterprise Branch Office (Novell), 171
central office server integration, 173 cost effectiveness of, 174 directory/file synchronization, 172
Internet connections, 174 technical support, 174 user authentication, 172 Nterprise Linux Services (Novell), GGU (Golden Gate University) case study, 208 NUMA (Non-Uniform Memory Access), 125
O
OEM (original equipment manufacturers), Linux technical support, 62 OES (Open Enterprise Server)
available services/utilities, 152-153 Linux-only availability, 153 NetWare-only availability, 153 data center transition strategy, 154 advantages of, 158-159 assessment studies, 155 deployment, 156-158 developing migration plans, 155-156 desktop knowledge worker transition strategies, 164 Evolution (Novell), 167 Mozilla, 167 NLD (Novell Linux Desktop), 166-167 Nsure Identity Manager (Novell), 165 OpenOffice.org, 167 thin-client desktops, 166 Virtual Office (Novell), 165 ZENworks, 165 desktop transition strategy, 159 application coexistence strategies, 161 application usage, 162
future trends, 163 office productivity applications, 160 OS replacement, 161-162 thin-client applications, 160-161 user sophistication, 163 installing, 175 choosing file systems, 177 configuring server environment, 180-181 creating boot partitions, 179 creating SYS volumes, 179 eDirectory design, 177-178 full system backups, 178 installing NetWare startup files, 179 Linux versus NetWare, 176 steps of, 179 system requirements, 176 office productivity applications, OES desktop transition strategies, 160 OOo (OpenOffice.org), 41
desktop transition strategies, 160 Novell migration to, 211 OES (Open Enterprise Server) desktop knowledge worker transition strategies, 167 open source, development of, 14 website, 21 Open Enterprise Server 1.0
installing, 180 NetWare server upgrades from Deployment Manager, 184 from iManager 2.0, 186 from Server Console, 183 open source
advantages of escaping vendor lock-ins, 32 hardware costs, 32
license management, 31 scaling/consolidation, 32 software costs, 31 software quality, 32 support, 32 unified management, 32 application servers coexistence strategies, 119 defining, 111 JBoss Application Server, 118 proprietary application servers, 119 technical support, 120 Tomcat Application Server (Jakarta), 118 concerns of, 32-33 developers demographics study, 70 motivations for, 69 O’Reilly Open Source Convention, 70 development of, 9-14 developmental process, 74 communication, 73 defect densities, 73 developers as users, 73 overview of, 71 peer reviews, 73 dual source solutions, 22 edge services caching, 98-99 content filtering, 100 defining, 97 DHCP, 103-104 DNS, 103 firewalls, 99 load balancing, 100-101 security gateway software, 102
static routing tables, 104 VPN, 100 hardware, costs of reducing, 54-55 implementing, 23-24 internal development resources, 137 costs, 140-141 exteNd (Novell), 139 Mono, CLI virtual machine, 138 Mono, NET, 138 messaging solutions integrating, 136-137 mail clients, 135 Postfix, 135 Sendmail, 135 power workstations, 141-142 profitability of consultancies, 26-27 distribution, 25 hardware leverage, 26 software leverage, 26-28 support, 26-27 training/education, 27-28 security costs of, 48 firewalls, 99 VPN, 100 licensing fees, 50 reducing, 49-50 software development process, 18 bug fixing, 17 code forking, 16 development infrastructures, 16 informal responsibility systems, 15 technical support availability of, 64 bug reports, 61
commercial support, 62-64 discussion logs, 61 documentation, 60 FAQ, 61 ISV, 62 mailing lists, 61 OEM, 62 product downloads, 61 training, 61 webrings, 61 vendor lock-ins, reducing, 66-67 web servers Apache, 105-106 application server integration, 107 application server solutions, 106 LAMP solutions, 107 root HTML directories, 105 uses of, 107-108 web services components of, 113 defining, 111 example of, 112 implementing, 119 J2EE, 113 SOAP, 113-114 UDDI, 114 WSDL, 114 WSDL documents, 113 XML, 113 workgroup databases, 108 commercial solutions, 110 implementing, 111 MySQL, 108-109 PostgreSQL, 108
open source administration
advantages of, 193 eDirectory (Novell), 196-197 iManager (Novell), 198 RPM, 198-199 YaST, 194 ZENworks (Novell), 200-204 open source code
application stack example, 19-21 versus proprietary code application stack example, 19-21 code releases, 17 developer passion, 17 feedback, 17-18 peer reviews, 17 open source integration framework model (Novell Consulting), 76
assessment phase, 77-78 design phase, 78 implementation phase, 79 support phase, 80-81 training phase, 79 open source project websites
Freshmeat, 47 Novell Forge, 48 SourceForge.net, 47 OpenOffice.org. See OOo OpenSource.org website, 18 OpenSSL, Linux network services, 35 Openswan, open source security, 100 Oracle commercial database solutions, 110, 130-131, 206 O’Reilly Open Source Convention, open source developers, 70 OS (operating systems)
application stack example, 20 replacing, OES desktop transition strategies, 161-162
OSD (Open Source Definition), 11, 51 OSI (Open Source Initiative), 11, 51 OSI website, 53 Overstock.com Inc. transition case study, 216-217
P
PA-RISC platform port (Linux), 36 PA-RISC website, 36 package installation (Linux security), 60 parallelized applications, 123-125 Partner Portal (Novell), 140 PDA (Personal Digital Assistants), thin-client business applications, 170 Peeling, Dr. Nic, Macintosh viruses, 59 peer reviews (open source development), 17, 73 PeopleSoft enterprise applications, 132 Perens, Bruce
Debian, 38 OSD, 51 OSI, 51 Perl (Practical Extraction and Report Language), 43 PHP (Personal Home Page), 43 phpMyAdmin database management tool, 109 PING NetWare IP/IPX tool, 148 platform ports (Linux), 36 platform support (Linux), 128-129 plug-ins, OES desktop transition strategies, 161 POP (Post Office Protocol) message transfer agents (email solutions), 134 POSIX file system, 177 Postfix email solution, 135 PostgreSQL, 45, 108 Powell, Kyle, history of NetWare, 145
power workstations
costs, 142 flexibility, incorporating, 142 power, incorporating, 141 programmability, incorporating, 142 PowerPC platform port (Linux), 36 PowerPC website, 36 price (vendor lock-in examples), 67 print services, 35, 93
Internet printing, 94, 97 locating printers, 96 printer installation, 95 transition strategy deployment, 157 processors, OES installation requirements, 176 product downloads (Linux technical support), 61 profitability of open source
consultancies, 26-27 distribution, 25 hardware leverage, 26 software leverage, 26-28 support, 26-27 training/education, 27-28 programmability, incorporating into power workstations, 142 project files (SCU)
creating, 188-190 file copy, filtering, 189 project steering committees
assessment studies, 155 developing migration plans, 155-156 property law
Linux, history of, 12 open source, development of, 9-10 proprietary application servers, 119
proprietary code
dual source solutions, 22 versus open source code application stack example, 19-21 code releases, 17 developer passion, 17 feedback, 17-18 peer reviews, 17 proprietary lock-ins, 64
defining, 65 email, 66 examples of, 67-68 Microsoft, 66 Novell, 68 open source solutions, reducing in, 66-67 proxy caches, 98-99 Python, 43
Q-R
RAM, OES installation requirements, 176 Raymond, Eric
bug fixing, 17 “Cathedral and the Bazaar, The”, 12, 16 code forking, 16 open source, development of, 12, 15 OSD, 51 OSI, 51 read-only mirrors, database servers as, 101 Red Flag Linux, 39 Red Hat Linux, 37 reducing
hardware costs, 54-55 software costs, 49-50 vendor lock-ins, 66-67
Registry Manager (exteNd Workbench), 115 REISER journaling file system, 177 ReiserFS file systems, Linux file storage, 89 reliability (Linux), 58 remote management, ZENworks (Novell), 204 Remote Manager NetWare management tool, 147 remote offices, Nterprise Branch Office (Novell), 171
central office server integration, 173 cost effectiveness of, 174 directory/file synchronization, 172 Internet connections, 174 technical support, 174 user authentication, 172 Ritchie, Dennis, history of Unix, 7 root HTML directories (web servers), 105 ROUTE NetWare client tool, 148 RPM (RPM Package Manager), 198-199
S
S/390 (IBM), SLES support, 83 Samba, 40, 88, 97 SAN (storage area networks), creating, 91 SAP enterprise applications, 132 Satchell, Dr. Julian, Macintosh viruses, 59 scalability, 55
clustering, 56, 124 load balancing, 57 NUMA, 125 open source advantages, 32 SMP, 57 Schmidt, Eric, history of NetWare, 147 SCO, history of Linux, 12
SCSI (Small Computer Systems Interface) protocol, SAN creation, 91 SCU (Server Consolidation Utility)
Always Copy the Source File option, 189 consolidation type, selecting, 188 Copy the Source File If It Is Newer option, 189 Don’t Copy Over Existing Files option, 189 installing, 187-188 launching, 188 project files Attribute Dates filters, 189 Attribute filters, 189 creating, 188-190 filtering file copy, 189 Secure Enterprise Dashboard portal (exteNd), 139 security, 35, 59
administration architectures, 60 email solutions, 135 firewalls, 99 Linux network services, 35 open source concerns, 33 package installation, 60 viruses, 59 VPN, 100 security gateway software, 102 Security Manager (Novell), 102 Siebel and PeopleSoft ERP applications, 215 Sendmail email solution, 135 Server Console, NetWare server in-place upgrades, 182-184 server services (Linux)
Beowulf, 39 Samba, 40
servers
consolidation upgrades (NetWare servers), 181, 187-190 migration paths (migration plan development), 155 NetWare environment, configuring, 180-181 in-place upgrades, 181-186 migrating from, 190 server consolidation upgrades, 181, 187-190 Windows, migrating from, 190 Servlets 2.2 (exteNd Application Server), 117 Siebel CRM applications, DB2 Universal Database (IBM), 132 Siebel enterprise applications, 132 single sign-on systems, OES (Open Enterprise Server) desktop knowledge worker transition strategies, 165 SLES (SUSE Linux Enterprise Server)
certified security feature, 82 CIM, 82 CKRM, 82 clustering, 82 DRBD, 82 enterprise volume manager, 82 features of, 82-83 functions of, 85 hardware platform support, 83 Hotplug, 82 implementation of, 86-87 Linux 2.6 kernel, 82 user mode Linux feature, 82 YaST, 82 SLP (Service Location Protocol), Linux network services, 35 smart phones, thin-client business applications, 171
SMB (Server Message Block), 88 SMP (symmetric multiprocessing), 57 SMTP message transfer agents (email solutions), 134 SNS (Domain Name System/Service), Linux network services, 35 SOAP (Simple Object Access Protocol), 113-114 software
costs of, 48 licensing fees, 50 open source advantages, 31 reducing, 49-50 inventory, 204 leverage (open source profitability), 26-28 license management, 50 license templates, 52 simplifying, 53 mediation, 88 quality, open source advantages, 32 security gateway software, 102 SourceForge.net website, 47 SPARC platform port (Scalable Processor Architecture), 36 Specify the Upgrade Type option (NetWare 6.5 SP3 installation CD)
Deployment Manager upgrades, 184 iManager 2.0 upgrades, 185-186 Server Console upgrades, 183 Squid
caching, 99 load balancing, 57 SSI (Server Side Includes), generating active contents, 106 SSL (Secure Sockets Layer)
Apache web servers, 106 Linux network services, 35
Stallman, Richard
Free Software Foundation, 51 GNU Manifesto, 10 GPL, 9 standards support (directory service characteristics), 197 startup files (NetWare), installing, 179 static routing tables, creating, 104 Strategic Research Corp., edge data, 171 Success of Open Source, The, 69 Sun Microsystems, history of Unix, 9 SunOS (Sun Operating System), history of Unix, 9 SuperSet, history of NetWare, 145 support, 60
application servers, 120 availability of, 64 bug reports, 61 commercial support, 62-64 discussion logs, 61 documentation, 60 FAQ, 61 ISV, 62 mailing lists, 61 OEM, 62 open source advantages, 32 open source profitability, 26-27 product downloads, 61 training, 61 webrings, 61 support phase (Linux/open source integration framework model), 80-81 Suraski, Zeev, PHP, 43 SUSE Linux, 37
BIND, 103 CGL, 58 DHCP, 104 grid computing, 127
history of, 12 reliability, 58 SLES certified security feature, 82 CIM, 82 CKRM, 82 clustering, 82 DRBD, 82 enterprise volume manager, 82 features of, 82-83 functions of, 85 hardware platform support, 83 Hotplug, 82 implementation of, 86-87 Linux 2.6 kernel, features of, 82 user mode Linux feature, 82 YaST, 82 SUSE Linux Desktop features of, 84 functions of, 85 implementation of, 86-87 workgroup databases MySQL, 108-109 PostgreSQL, 108 YaST, 194 SUSE Linux Desktop
features of, 84 functions of, 85 implementation of, 86-87 SUSE LINUX Enterprise Server
Burlington Coat Factory transition case study, 216 Overstock.com Inc. transition case study, 217 SVGA (super video graphics adapters), OES installation requirements, 176 Sybase commercial database solution, 110
symmetric multiprocessing architectures (data centers), 124 synchronizing files, iFolder (Novell), 89 SYS volumes, creating, 179 system administration
advantages of, 193 eDirectory (Novell), 196-197 iManager (Novell), 198 RPM, 198-199 YaST, 194 ZENworks (Novell), 200-201 advantages of, 203 application distribution, 204 drive imaging, 203 hardware/software inventory, 204 policy application, 203 remote management, 204 system requirements
OES installation, 176 SCU installation, 187
T
TCP/IP (Transmission Control Protocol/Internet Protocol), Linux network services, 35 TCPCON NetWare IP/IPX tool, 148 technical support
application servers, 120 availability of, 64 bug reports, 61 commercial support, 62-64 discussion logs, 61 documentation, 60 FAQ, 61 ISV, 62 mailing lists, 61 OEM, 62
open source concerns, 33 product downloads, 61 training, 61 transition strategy deployment, 158 webrings, 61 technical support phase (Linux/open source integration framework model), 80-81 templates (license)
BSD Licenses, 52 GPL, 51 IBM Public License, 52 LGPL, 51 MIT License, 52 MPL, 52 terminal services
data centers, 127 OES desktop transition strategies, 161 Terpstra, John, vendor lock-ins, 65 text editors, exteNd Workbench (Novell), 115 Thau, Robert, open source development, 13 thin-client applications, OES desktop transition strategies, 160-161 thin-client business appliances
client management, 169 network booting options, 169 single-purpose configurations, 168-169 thin-client hardware, 170-171 thin-client desktops, OES (Open Enterprise Server) desktop knowledge worker transition strategies, 166 thin-client hardware, 170-171 thin-client workstations, 127 Thompson, Ken, history of Unix, 7-8 Tivoli (IBM), 131 Tomcat (Jakarta) Application Server, 118
Torvalds, Linus
Linux, development of, 15 Linux, history of, 11, 17 open source development, 15-17, 74 TRACERT NetWare IP/IPX tool, 148 traditional file systems, 177 training
Linux technical support, 61 open source concerns, 33 open source profitability, 27-28 training phase (Linux/open source integration framework model), 79 training plans (migration plan development), 156 transitioning case studies
Burlington Coat Factory, 216 GGU (Golden Gate University), 205 evaluating Novell assistance, 209 initial environment evaluation, 206 service/application implementation, 208 transition solution selection criteria, 206-207 vendor selection, 207 Novell data center migration, 215 Linux Desktop migration, application evaluation/prioritization, 212-213 Linux Desktop migration, defining document sharing standards, 211 Linux Desktop migration, defining/initiating business tracks, 213-214 OpenOffice.org migration, 211 Overstock.com Inc., 216-217 website, 216 Turbolinux, 38
U
UDDI (Universal Description, Discovery, and Integration), 114 United Linux, 39 University of California, Berkeley, history of Unix, 8-10 University of Michigan, LSP, 57 Unix, history of, 7-10 upgrading
down server upgrades, 182 NetWare servers, 181 in-place upgrades, 182-186 server consolidation upgrades, 187-190 user interface (Linux), 35 user mode Linux feature (SLES), 82 USL (Unix System Laboratories), history of Unix, 9 utility desktops, thin-client solutions
client management, 169 network booting options, 169 single-purpose configurations, 168-169 thin-client hardware, 170-171
V
vendor lock-ins, 64
defining, 65 email, 66 escaping, open source advantages, 32 examples of lack of flexibility, 68 price, 67 Microsoft, 66 Novell, 68 open source solutions, reducing in, 66-67
Verification Wizard, 189 virtual file systems, 177 virtual hosting, Apache web servers, 106 Virtual Office (Novell), OES (Open Enterprise Server) desktop knowledge worker transition strategies, 165 Virtual Teams (Novell), 166 viruses, 59 VPN (Virtual Private Networks), 100
W
Wall, Larry, Perl, 43 web browsers
network files, accessing, 91 OES desktop transition strategies, 160-161 web pages, content negotiation via Apache web servers, 106 web servers
Apache, 42, 105-106 application server integration, 107 application server solutions, 106 JBoss Application Server, 42 LAMP solutions, 107 root HTML directories, 105 uses of, 107-108 web service wizard (exteNd Workbench), 115 web services
components of, 113 defining, 111 example of, 112 exteNd Workbench (Novell), 114-116 implementing, 119 J2EE, 113 JAX-RPC, 114
OES (Open Enterprise Server) desktop knowledge worker transition strategies, 166 SOAP, 113-114 transition strategy deployment, 157 UDDI, 114 WSDL, 113-114 XML, 113 Weber, Steven, open source development
developer motivations, 69 software development, 17 webrings (Linux technical support), 61 WebSphere (IBM), 131 WikiLearn website, 177 Wikipedia, vendor lock-ins, 65 Windows emulators, OES desktop transition strategies, 161 Windows servers, migrating from, 190 Windows viruses, 59 WINIPCFG NetWare client tool, 148 wizards
component wizards, 115 Migration Wizard (exteNd Workbench), 115 NetWare Installation Wizard configuring server environment, 180-181 installing NetWare 6.5 Support Pack 3, 180 installing Open Enterprise Server 1.0, 180 NetWare Migration Wizard, 190 Verification Wizard, 189 web service (exteNd Workbench), 115
workgroup databases
commercial solutions, 110 implementing, 111 MySQL, 108-109 PostgreSQL, 108 workstations, diskless, 170 WSDL (Web Services Description Language), 113-114
X-Y-Z
x86 platform port (Linux), 36, 83 Xfree86, 41 XFS file systems, 89, 177 XML (Extensible Markup Language)
exteNd Application Server, 117 web services, 113 YaST (Yet another Setup Tool), 82, 194 Young, John, history of NetWare, 146 ZENworks (Novell), 200
advantages of, 203 application distribution, 204 drive imaging, 203 functions of hardware inventory, 200, 204 mass upgrades, 201 package management, 201 patch management, 201 personality migration, 201 policy-driven automation, 200 software distribution, 200 software inventory, 200, 204 OES (Open Enterprise Server) desktop knowledge worker transition strategies, 165
policy application, 203 remote management, 204 ZENworks Linux Management (Novell), 201 zSeries (IBM)
grid computing, 127 SLES support, 83 zSeries platform port (IBM), 36