
Using Your Sybex Electronic Book

To realize the full potential of this Sybex electronic book, you must have Adobe Acrobat Reader with Search installed on your computer. To find out if you have the correct version of Acrobat Reader, click on the Edit menu; Search should be an option within this menu. If Search is not an option in the Edit menu, please exit this application and install Adobe Acrobat Reader with Search from this CD (double-click rp500enu.exe in the Adobe folder).

Navigation

Navigate through the book by clicking on the headings that appear in the left panel; the corresponding page from the book displays in the right panel.

Search

To search, click the Search Query button on the toolbar or choose Edit > Search > Query to open the Search window. In the Adobe Acrobat Search dialog's text field, type the text you want to find and click Search. Use the Search Next button (Control+U) and Search Previous button (Control+Y) to go to other matches in the book. The Search command also has powerful tools for limiting and expanding the definition of the term you are searching for. Refer to Acrobat's online Help (Help > Plug-In Help > Using Acrobat Search) for more information.

Click here to begin using your Sybex Electronic Book!


MCSE: Windows® 2000 Web Solutions Design Study Guide

Michael D. Stewart with Kevin Lundy

San Francisco • London

Associate Publisher: Neil Edde
Acquisitions and Developmental Editor: Jeff Kellum
Editor: Carol Henry
Production Editors: Liz Burke, Jennifer Campbell
Technical Editors: Alex Chong, Larry Passo
Book Designer: Bill Gibson
Graphic Illustrator: Tony Jonick
Electronic Publishing Specialist: Nila Nichols
Proofreaders: Yariv Rabinovitch, Emily Hsuan, Laurie O'Connell, Nancy Riddiough, Dave Nash
Indexer: Ann Rogers
CD Coordinator: Dan Mummert
CD Technician: Kevin Ly
Cover Designer: Archer Design
Cover Illustrator/Photographer: Natural Selection

Copyright © 2002 SYBEX Inc., 1151 Marina Village Parkway, Alameda, CA 94501. World rights reserved. No part of this publication may be stored in a retrieval system, transmitted, or reproduced in any way, including but not limited to photocopy, photograph, magnetic, or other record, without the prior agreement and written permission of the publisher.

Library of Congress Card Number: 2002100055
ISBN: 0-7821-4035-1

SYBEX and the SYBEX logo are either registered trademarks or trademarks of SYBEX Inc. in the United States and/or other countries.

Screen reproductions produced with FullShot 99. FullShot 99 © 1991-1999 Inbit Incorporated. All rights reserved. FullShot is a trademark of Inbit Incorporated.

Microsoft® Internet Explorer © 1996 Microsoft Corporation. All rights reserved. Microsoft, the Microsoft Internet Explorer logo, Windows, Windows NT, and the Windows logo are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries.

SYBEX is an independent entity from Microsoft Corporation, and not affiliated with Microsoft Corporation in any manner. This publication may be used in assisting students to prepare for a Microsoft Certified Professional Exam. Neither Microsoft Corporation, its designated review company, nor SYBEX warrants that use of this publication will ensure passing the relevant exam. Microsoft is either a registered trademark or trademark of Microsoft Corporation in the United States and/or other countries.

TRADEMARKS: SYBEX has attempted throughout this book to distinguish proprietary trademarks from descriptive terms by following the capitalization style used by the manufacturer.

The author and publisher have made their best efforts to prepare this book, and the content is based upon final release software whenever possible. Portions of the manuscript may be based upon pre-release versions supplied by software manufacturer(s). The author and the publisher make no representation or warranties of any kind with regard to the completeness or accuracy of the contents herein and accept no liability of any kind including but not limited to performance, merchantability, fitness for any particular purpose, or any losses or damages of any kind caused or alleged to be caused directly or indirectly from this book.

Manufactured in the United States of America

10 9 8 7 6 5 4 3 2 1


To Our Valued Readers:

Since its inception nearly ten years ago, Microsoft's MCSE program has established itself as the premier computer and networking industry certification, with nearly half a million IT professionals having attained this elite status. And with Microsoft's recent creation of the MCSA program, certification candidates can now choose to pursue the certification that best suits their career goals. Sybex is proud to have helped thousands of MCSE candidates prepare for their exams over these years, and we are excited about the opportunity to continue to provide computer and networking professionals with the skills they'll need to succeed in the highly competitive IT industry.

The authors and editors have worked hard to ensure that the study guide you hold in your hand is comprehensive, in-depth, and pedagogically sound. We're confident that this book will exceed the demanding standards of the certification marketplace and help you, the Microsoft certification candidate, succeed in your endeavors.

As always, your feedback is important to us. Please send comments, questions, or suggestions to [email protected]. At Sybex we're continually striving to meet the needs of individuals preparing for IT certification exams. Good luck in pursuit of your Microsoft certification!

Neil Edde
Associate Publisher—Certification
Sybex, Inc.


Software License Agreement: Terms and Conditions

The media and/or any online materials accompanying this book that are available now or in the future contain programs and/or text files (the "Software") to be used in connection with the book. SYBEX hereby grants to you a license to use the Software, subject to the terms that follow. Your purchase, acceptance, or use of the Software will constitute your acceptance of such terms.

The Software compilation is the property of SYBEX unless otherwise indicated and is protected by copyright to SYBEX or other copyright owner(s) as indicated in the media files (the "Owner(s)"). You are hereby granted a single-user license to use the Software for your personal, noncommercial use only. You may not reproduce, sell, distribute, publish, circulate, or commercially exploit the Software, or any portion thereof, without the written consent of SYBEX and the specific copyright owner(s) of any component software included on this media.

In the event that the Software or components include specific license requirements or end-user agreements, statements of condition, disclaimers, limitations or warranties ("End-User License"), those End-User Licenses supersede the terms and conditions herein as to that particular Software component. Your purchase, acceptance, or use of the Software will constitute your acceptance of such End-User Licenses.

By purchase, use or acceptance of the Software you further agree to comply with all export laws and regulations of the United States as such laws and regulations may exist from time to time.

Software Support

Components of the supplemental Software and any offers associated with them may be supported by the specific Owner(s) of that material, but they are not supported by SYBEX. Information regarding any available support may be obtained from the Owner(s) using the information provided in the appropriate read.me files or listed elsewhere on the media. Should the manufacturer(s) or other Owner(s) cease to offer support or decline to honor any offer, SYBEX bears no responsibility. This notice concerning support for the Software is provided for your information only. SYBEX is not the agent or principal of the Owner(s), and SYBEX is in no way responsible for providing any support for the Software, nor is it liable or responsible for any support provided, or not provided, by the Owner(s).

Warranty

SYBEX warrants the enclosed media to be free of physical defects for a period of ninety (90) days after purchase. The Software is not available from SYBEX in any other form or media than that enclosed herein or posted to www.sybex.com. If you discover a defect in the media during this warranty period, you may obtain a replacement of identical format at no charge by sending the defective media, postage prepaid, with proof of purchase to:

SYBEX Inc.
Product Support Department
1151 Marina Village Parkway
Alameda, CA 94501
Web: http://www.sybex.com

After the 90-day period, you can obtain replacement media of identical format by sending us the defective disk, proof of purchase, and a check or money order for $10, payable to SYBEX.

Disclaimer

SYBEX makes no warranty or representation, either expressed or implied, with respect to the Software or its contents, quality, performance, merchantability, or fitness for a particular purpose. In no event will SYBEX, its distributors, or dealers be liable to you or any other party for direct, indirect, special, incidental, consequential, or other damages arising out of the use of or inability to use the Software or its contents even if advised of the possibility of such damage. In the event that the Software includes an online update feature, SYBEX further disclaims any obligation to provide this feature for any specific duration other than the initial posting. The exclusion of implied warranties is not permitted by some states. Therefore, the above exclusion may not apply to you. This warranty provides you with specific legal rights; there may be other rights that you may have that vary from state to state. The pricing of the book with the Software by SYBEX reflects the allocation of risk and limitations on liability contained in this agreement of Terms and Conditions.

Shareware Distribution

This Software may contain various programs that are distributed as shareware. Copyright laws apply to both shareware and ordinary commercial software, and the copyright Owner(s) retains all rights. If you try a shareware program and continue using it, you are expected to register it. Individual programs differ on details of trial periods, registration, and payment. Please observe the requirements stated in appropriate files.

Copy Protection

The Software in whole or in part may or may not be copy-protected or encrypted. However, in all cases, reselling or redistributing these files without authorization is expressly forbidden except as specifically provided for by the Owner(s) therein.


This book is dedicated to my family (Mom, Dad, Lynn, Trice, Aunt Barbara, Uncle Corteland, the Longs, the Allens, Alicia and Kiara) and all of my friends who put up with me during the writing of this book. Thanks for your support and encouragement. I could not have done it without you. —Michael

Dedicated to my beautiful wife, Michelle, for her patience and understanding. —Kevin


Acknowledgments

I would like to acknowledge Neall Alcott for initially providing me with this opportunity, and Kevin Lundy for his help in getting the project completed. I would also like to thank the crew at Sybex—especially Jeff Kellum, Liz Burke, and Carol Henry. Your guidance and support were priceless. Thanks for everything. —Michael D. Stewart

Thanks to the entire team at Sybex. Jeff Kellum pulled the book together. Carol Henry and Liz Burke really worked hard to clean up our work. The technical editors, Alex Chong and Larry Passo, kept us honest. Finally, thanks to Kerri Gustin, Tadas Osmolskis, Brian Vincent, and Dan Wilson for fielding numerous questions from me. Much credit goes, also, to a superb production team: proofreaders Yariv Rabinovitch, Emily Hsuan, Laurie O'Connell, Nancy Riddiough, and Dave Nash; compositor Nila Nichols; and indexer Ann Rogers. —Kevin Lundy


Introduction

Microsoft’s Microsoft Certified Systems Engineer (MCSE) track for Windows 2000 is the premier certification for computer industry professionals. Covering the core technologies around which Microsoft’s future will be built, the MCSE Windows 2000 program is a powerful credential for career advancement. This book has been developed to give you the critical skills and knowledge you need to prepare for one of the core requirements of the new MCSE certification program: Designing Highly Available Web Solutions with Microsoft Windows 2000 Server Technologies (Exam 70-226).

The Microsoft Certified Professional Program

Since the inception of its certification program, Microsoft has certified almost 1.5 million people. As the computer network industry grows in both size and complexity, this number is sure to grow—and the need for proven ability will also increase. Companies rely on certifications to verify the skills of prospective employees and contractors.

Microsoft has developed its Microsoft Certified Professional (MCP) program to give you credentials that verify your ability to work with Microsoft products effectively and professionally. Obtaining your MCP certification requires that you pass any one Microsoft certification exam. Several levels of certification are available based on specific suites of exams. Depending on your areas of interest or experience, you can obtain any of the following MCP credentials:

Microsoft Certified System Administrator (MCSA) The MCSA certification is the latest certification track from Microsoft. This certification targets system and network administrators with roughly 6 to 12 months of desktop and network administration experience. The MCSA can be considered the entry-level certification. You must take and pass a total of four exams to obtain your MCSA.

Microsoft Certified System Engineer (MCSE) on Windows 2000 This certification track is designed for network and systems administrators, network and systems analysts, and technical consultants who work with Microsoft Windows 2000 Professional and Server software. You must take and pass seven exams to obtain your MCSE.


Since this book covers one of the MCSE Core Design exams, we will discuss the MCSE certification in detail in this Introduction.

MCSE vs. MCSA

In an effort to provide those just starting off in the IT world a chance to prove their skills, Microsoft recently announced its Microsoft Certified System Administrator (MCSA) program. Targeted at those with less than a year's experience, the MCSA program focuses primarily on the administration portion of an IT professional's duties. The requirements for the MCSA are:

• One client operating system exam, including the Windows 2000 Professional exam.
• Two networking system exams, including the Windows 2000 Server and Managing a Windows 2000 Network Environment exams.
• One elective, including many of the MCSE electives as well as a combination of either the CompTIA A+ and Network+ exams, or the A+ and Server+ exams.

Of course, it should be any MCSA's goal to eventually obtain his or her MCSE. However, don't assume that, because the MCSA has to take two exams that also satisfy an MCSE requirement, the two programs are similar. An MCSE must also know how to design a network. Beyond these two exams, the remaining MCSE exams require the candidate to have much more hands-on experience.

Microsoft Certified Solution Developer (MCSD) This track is designed for software engineers and developers and technical consultants who primarily use Microsoft development tools. Currently, you can take exams on Visual Basic, Visual C++, and Visual FoxPro. However, with Microsoft’s pending release of Visual Studio 7, you can expect the requirements for this track to change. You must take and pass four exams to obtain your MCSD.


Microsoft Certified Database Administrator (MCDBA) This track is designed for database administrators, developers, and analysts who work with Microsoft SQL Server. As of this printing, you can take exams on either SQL Server 7 or SQL Server 2000, but Microsoft is expected to announce the retirement of SQL Server 7. You must take and pass four exams to achieve MCDBA status.

Microsoft Certified Trainer (MCT) The MCT track is designed for any IT professional who develops and teaches Microsoft-approved courses. To become an MCT, you must first obtain your MCSE, MCSD, or MCDBA; then you must take a class at one of the Certified Technical Training Centers. You will also be required to prove your instructional ability. You can do this in various ways: by taking a skills-building or train-the-trainer class, by achieving certification as a trainer from any of several vendors, or by becoming a Certified Technical Trainer through CompTIA. Last of all, you will need to complete an MCT application.

NT 4 vs. Windows 2000 Certification

In response to concerns that NT 4 MCSEs would lose their certification if they did not upgrade to Windows 2000, Microsoft has announced changes to its certification program. Microsoft will now distinguish between NT 4 and Windows 2000 certifications. In part, Microsoft acknowledges that NT 4 is still being used either as the sole operating system or in combination with Windows 2000. Therefore, it is the opinion of Microsoft that those certified in NT 4 should still be identified as MCSEs. Those who have their NT 4 MCSE certification will be referred to as MCSEs on NT 4. Those who obtained their MCSE on the Windows 2000 (and the recently announced XP/.NET exams) will be referred to as MCSEs on Windows 2000. Microsoft no longer offers the Core NT 4 exams. However, several of the NT 4 elective exams also satisfy the Windows 2000 electives. For details on these changes, go to www.microsoft.com/traincert/highlights/announcement.asp.


How Do You Become an MCSE on Windows 2000?

Attaining MCSE certification has always been a challenge. In the past, students have been able to acquire detailed exam information—even most of the exam questions—from online "brain dumps" and third-party "cram" books or software products. For the new MCSE exams, this is simply not the case.

Microsoft has taken strong steps to protect the security and integrity of the new MCSE track. Now, prospective MCSEs must complete a course of study that develops detailed knowledge about a wide range of topics. It supplies them with the true skills needed, derived from working with Windows 2000 and related software products.

The Windows 2000 MCSE program is heavily weighted toward hands-on skills and experience. Microsoft has stated that "nearly half of the core required exams' content demands that the candidate have troubleshooting skills acquired through hands-on experience and working knowledge."

Fortunately, if you are willing to dedicate the time and effort to learn Windows 2000, XP, and .NET Server, you can prepare yourself well for the exams by using the proper tools. By working through this book, you can successfully meet the exam requirements to pass the Designing Highly Available Web Solutions exam.

This book is part of a complete series of MCSE Study Guides, published by Sybex Inc., that together cover the core MCSE requirements as well as the new Design exams needed to complete your MCSE track. Study Guide titles include the following:

• MCSE: Windows 2000 Professional Study Guide, Second Edition, by Lisa Donald with James Chellis (Sybex, 2001)
• MCSE: Windows 2000 Server Study Guide, Second Edition, by Lisa Donald with James Chellis (Sybex, 2001)
• MCSE: Windows 2000 Network Infrastructure Administration Study Guide, Second Edition, by Paul Robichaux with James Chellis (Sybex, 2001)
• MCSE: Windows 2000 Directory Services Administration Study Guide, Second Edition, by Anil Desai with James Chellis (Sybex, 2001)


• MCSE: Windows 2000 Network Security Design Study Guide, Second Edition, by Gary Govanus and Robert King (Sybex, 2001)
• MCSE: Windows 2000 Network Infrastructure Design Study Guide, Second Edition, by William Heldman (Sybex, 2001)
• MCSE: Windows 2000 Directory Services Design Study Guide, Second Edition, by Robert King and Gary Govanus (Sybex, 2001)
• MCSE: Windows 2000 Migration Study Guide, by Todd Phillips (Sybex, 2001)
• MCSE: SQL Server 2000 Administration Study Guide, by Lance Mortensen, Rick Sawtell, and Joseph L. Jorden (Sybex, 2001)
• MCSE: SQL Server 2000 Design Study Guide, by Marc Israel and J. Steven Jones (Sybex, 2001)
• MCSE: Exchange 2000 Server Administration Study Guide, by Walter Glenn with James Chellis (Sybex, 2001)
• MCSE: Exchange 2000 Server Design Study Guide, by William Heldman (Sybex, 2001)
• MCSE: ISA Server 2000 Administration Study Guide, by William Heldman (Sybex, 2001)

Exam Requirements

Candidates for MCSE certification on Windows 2000 must pass seven exams, including one client operating system exam, three networking system exams, one design exam, and two electives, as described in the sections that follow.

For a more detailed description of the Microsoft certification programs, including a list of current and future MCSE electives, check Microsoft’s Training and Certification Web site at www.microsoft.com/traincert.

The Designing Highly Available Web Solutions with Microsoft Windows 2000 Server Technologies Exam

The Designing Highly Available Web Solutions with Microsoft Windows 2000 Server Technologies exam covers concepts and skills related to designing an infrastructure to support a critical website that must be available around the clock. The exam emphasizes the following elements of working with a website infrastructure design:

• Analysis of the business environment
• Analysis of the technical environment
• Designing a network architecture
• Configuring highly available servers
• Application integration
• Designing security, management, and monitoring strategies

This exam differs from the Core MCSE exams in that there are no objectives that represent physical tasks. The test objectives involve your ability to analyze a given situation and suggest solutions that meet the business needs of that environment. System analysis is not a skill that can be quantified into a series of facts or procedures to be memorized. Because of the emphasis on providing business solutions, much of this book (and most of the exam objectives) revolves around your ability to create a website structure that is stable, optimized, and designed in such a way that it fulfills true business needs.

Microsoft provides exam objectives to give you a very general overview of possible areas of coverage on the Microsoft exams. For your convenience, this Study Guide includes objective listings positioned within the text at points where specific Microsoft exam objectives are discussed. Keep in mind, however, that exam objectives are subject to change at any time without prior notice and at Microsoft’s sole discretion. Please visit Microsoft’s Training and Certification website (www.microsoft.com/traincert) for the most current listing of exam objectives.

Types of Exam Questions

In an effort to both refine the testing process and protect the quality of its certifications, Microsoft has focused its Windows 2000 exams on real experience and hands-on proficiency. There is a higher emphasis on your past working environments and responsibilities, and less emphasis on how well you can memorize. In fact, Microsoft says an MCSE candidate should have at least one year of hands-on experience.

Microsoft will accomplish its goal of protecting the exams’ integrity by regularly adding and removing exam questions, limiting the number of questions that any individual sees in a beta exam, limiting the number of questions delivered to an individual by using adaptive testing, and adding new exam elements.

Exam questions may be in a variety of formats: depending on which exam you take, you'll see multiple-choice questions, as well as select-and-place and prioritize-a-list questions. Simulations and case study–based formats are included as well. You may also find yourself taking what's called an adaptive format exam. Let's take a look at the types of exam questions and examine the adaptive testing technique, so that you'll be prepared for all of the possibilities.

For the Designing Highly Available Web Solutions with Microsoft Windows 2000 Server Technologies exam, you will see Case Study questions. For more information on the various exam question types, go to http://www.microsoft.com/traincert/mcpexams/faq/innovations.asp.

MULTIPLE-CHOICE QUESTIONS

Multiple-choice questions come in two main forms. One is a straightforward question followed by several possible answers, of which one or more is correct. The other type of multiple-choice question is more complex and based on a specific scenario. The scenario may focus on a number of areas or objectives.

SELECT-AND-PLACE QUESTIONS

Select-and-place exam questions involve graphical elements that you must manipulate in order to successfully answer the question. For example, you might see a diagram of a computer network, as shown in the following graphic taken from the select-and-place demo downloaded from Microsoft's website.

A typical diagram will show computers and other components next to boxes that contain the text "Place here." The labels for the boxes represent various computer roles on a network, such as a print server and a file server. Based on information given for each computer, you are asked to select each label and place it in the correct box. You need to place all of the labels correctly. No credit is given for the question if you correctly label only some of the boxes. In another select-and-place problem, you might be asked to put a series of steps in order, by dragging items from boxes on the left to boxes on the right and placing them in the correct order. One other type requires that you drag an item from the left and place it under an item in a column on the right.

SIMULATIONS

Simulations are the kinds of questions that most closely represent actual situations and test the skills you use while working with Microsoft software interfaces. These exam questions include a mock interface on which you are asked to perform certain actions according to a given scenario. The simulated interfaces look nearly identical to what you see in the actual product, as shown in this example.


Because of the number of possible errors that can be made on simulations, be sure to consider the following recommendations from Microsoft:

• Do not change any simulation settings that don't pertain to the solution directly.
• When related information has not been provided, assume that the default settings are used.
• Make sure that your entries are spelled correctly.
• Close all the simulation application windows after completing the set of tasks in the simulation.

The best way to prepare for simulation questions is to spend time working with the graphical interface of the product on which you will be tested.

CASE STUDY–BASED QUESTIONS

Case study–based questions first appeared in the MCSD program. These questions present a scenario with a range of requirements. Based on the information provided, you answer a series of multiple-choice and select-and-place questions. The interface for case study–based questions has a number of tabs, each of which contains information about the scenario.


Expect to see any of these types of questions on the Designing Highly Available Web Solutions with Microsoft Windows 2000 Server Technologies exam. It is highly recommended that you become familiar with these types of questions prior to taking the exam. In addition, you should look at the case study questions on this book’s CD, as well as any of the various test simulation software programs available on the market. You can also download the case study demo from this exam’s page on Microsoft’s website.

ADAPTIVE EXAM FORMAT

Microsoft presents many of its exams in an adaptive format. This format is radically different from the conventional format previously used for Microsoft certification exams. Conventional tests are static, containing a fixed number of questions. Adaptive tests change depending on your answers to the questions presented. The number of questions presented in your adaptive test will depend on how long it takes the exam to ascertain your level of ability (according to the statistical measurements on which exam questions are ranked). To determine a test-taker’s level of ability, the exam presents questions in an increasing or decreasing order of difficulty.

Because the questions for the Design exams are based on case studies, you will be allowed to go back and reread the case studies.
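To make the adaptive mechanism concrete, here is a toy sketch of how such a test might behave (our own illustration in Python; the difficulty scale, stopping rule, and function names are invented for this example and do not represent Microsoft's actual scoring algorithm):

    # Toy model of an adaptive exam: difficulty rises after a correct answer,
    # falls after an incorrect one, and the test ends once the recent
    # difficulty levels settle (i.e., the ability estimate has stabilized).
    def adaptive_exam(answer_question, max_questions=30, start_difficulty=5):
        difficulty = start_difficulty
        history = []
        for _ in range(max_questions):
            correct = answer_question(difficulty)      # True or False
            history.append((difficulty, correct))
            difficulty += 1 if correct else -1
            difficulty = max(1, min(10, difficulty))   # clamp to a 1-10 scale
            recent = [d for d, _ in history[-6:]]
            if len(recent) == 6 and max(recent) - min(recent) <= 1:
                break                                  # ability level found
        return difficulty, history

    # A candidate who reliably answers items up to difficulty 7:
    level, log = adaptive_exam(lambda d: d <= 7)
    print(level, len(log))    # settles near 7 after a handful of questions

The practical consequence is that both the particular questions you see and the length of your exam depend on your answers.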

Exam Question Development Process

Microsoft follows an exam-development process consisting of eight mandatory phases. The process takes an average of seven months and involves more than 150 specific steps. The MCP exam development consists of the following phases:

Phase 1: Job Analysis Phase 1 is an analysis of all the tasks that make up a specific job function, based on tasks performed by people who are currently performing that job function. This phase also identifies the knowledge, skills, and abilities that relate specifically to the performance area being certified.


Phase 2: Objective Domain Definition The results of the job analysis phase provide the framework used to develop objectives. Development of objectives involves translating the job-function tasks into a comprehensive package of specific and measurable knowledge, skills, and abilities. The resulting list of objectives—the objective domain—is the basis for the development of both the certification exams and the training materials.

Phase 3: Blueprint Survey The final objective domain is transformed into a blueprint survey in which contributors are asked to rate each objective. These contributors may be MCP candidates, appropriately skilled exam-development volunteers, or Microsoft employees. Based on the contributors' input, the objectives are prioritized and weighted. The actual exam items are written according to the prioritized objectives. Contributors are queried about how they spend their time on the job. If a contributor doesn't spend an adequate amount of time actually performing the specified job function, his or her data are eliminated from the analysis. The blueprint survey phase helps determine which objectives to measure, as well as the appropriate number and types of items to include on the exam.

Phase 4: Item Development A pool of items is developed to measure the blueprinted objective domain. The number and types of items to be written are based on the results of the blueprint survey.

Phase 5: Alpha Review and Item Revision During this phase, a panel of technical and job-function experts review each item for technical accuracy. The panel then answers each item and reaches a consensus on all technical issues. Once the items have been verified as being technically accurate, they are edited to ensure that they are expressed in the clearest language possible.

Phase 6: Beta Exam The reviewed and edited items are collected into beta exams. Based on the responses of all beta participants, Microsoft performs a statistical analysis to verify the validity of the exam items and to determine which items will be used in the certification exam. Once the analysis has been completed, the items are distributed into multiple parallel forms, or versions, of the final certification exam.


Phase 7: Item Selection and Cut-Score Setting The results of the beta exams are analyzed to determine which items will be included in the certification exam. This determination is based on many factors, including item difficulty and relevance. During this phase, a panel of job-function experts determines the cut score (minimum passing score) for the exams. The cut score differs from exam to exam because it is based on an item-by-item determination of the percentage of candidates who answered the item correctly and who would be expected to answer the item correctly.

Phase 8: Live Exam In the final phase, the exams are given to candidates. MCP exams are administered by Prometric and Virtual University Enterprises (VUE).

Tips for Taking the Designing Highly Available Web Solutions Exam

Here are some general tips for achieving success on your certification exam:

• Arrive early at the exam center so that you can relax and review your study materials. During this final review, you can look over tables and lists of exam-related information.
• Read Case Studies carefully. Don't be tempted to jump to an early conclusion. Make sure you know exactly what the scenario is.
• Answer all questions. Remember that the adaptive format does not allow you to return to a question. Be very careful before entering your answer. Because your exam may be shortened by correct answers (and lengthened by incorrect answers), there is no advantage to rushing through questions.
• On simulations, do not change settings that are not directly related to the question. Also, assume default settings if the question does not specify or imply which settings are used.
• For questions you're not sure about, use a process of elimination to get rid of the obviously incorrect answers first. This improves your odds of selecting the correct answer when you need to make an educated guess.


Exam Registration

You may take the Microsoft exams at any of more than 1,000 Authorized Prometric Testing Centers (APTCs) and VUE Testing Centers around the world. For the location of a testing center near you, call Prometric at 800-755-EXAM (755-3926), or call VUE at 888-837-8616. Outside the United States and Canada, contact your local Prometric or VUE registration center.

Find out the number of the exam you want to take, and then register with the Prometric or VUE registration center nearest to you. At this point, you will be asked for advance payment for the exam. The exams are $100 each and you must take them within one year of payment. You can schedule exams up to six weeks in advance or as late as one working day prior to the date of the exam. You can cancel or reschedule your exam if you contact the center at least two working days prior to the exam. Same-day registration is available in some locations, subject to space availability. Where same-day registration is available, you must register a minimum of two hours before test time.

You may also register for your exams online at www.prometric.com or www.vue.com.

When you schedule the exam, you will be provided with instructions regarding appointment and cancellation procedures, ID requirements, and information about the testing center location. In addition, you will receive a registration and payment confirmation letter from Prometric or VUE. Microsoft requires certification candidates to accept the terms of a Non-Disclosure Agreement before taking certification exams.

Is This Book for You?

If you want to acquire a solid foundation in the principles of website architecture design, and your goal is to prepare for the exam by learning how best to structure a website environment, this book is for you. You'll find clear explanations of the fundamental concepts you need to grasp, and plenty of help to achieve the high level of professional competency you need to succeed in your chosen field.

If you want to become certified as an MCSE, this book is definitely for you. However, if you just want to attempt to pass the exam without really understanding Windows 2000, this Study Guide is not for you. It is written for people who want to acquire hands-on skills and in-depth knowledge of Windows 2000.

How to Use This Book

What makes a Sybex Study Guide the book of choice for over 100,000 MCSEs? We took into account not only what you need to know to pass the exam, but also what you need to know to take what you've learned and apply it in the real world. Each book contains the following:

Objective-by-objective coverage of the topics you need to know Each chapter lists the objectives covered in that chapter, followed by detailed discussion of each objective.

Assessment Test Directly following this Introduction is an Assessment Test that you should take. It is designed to help you determine how much you already know about Windows 2000. Each question is tied to a topic discussed in the book. Using the results of the Assessment Test, you can figure out the areas where you need to focus your study. Of course, we do recommend you read the entire book.

Exam Essentials To highlight what you learn, you'll find a list of Exam Essentials at the end of each chapter. The Exam Essentials section briefly highlights the topics that need your particular attention as you prepare for the exam.

Key Terms and Glossary Throughout each chapter, you will be introduced to important terms and concepts that you will need to know for the exam. These terms appear in italic within the chapters, and a list of the Key Terms appears just after the Exam Essentials. At the end of the book, a detailed Glossary gives definitions for these terms, as well as other general terms you should know.

Review Questions, complete with detailed explanations Each chapter is followed by a set of Review Questions that test what you learned in the chapter. The questions are written with the exam in mind, meaning that they are designed to have the same look and feel of what you'll see on the exam. Following the Review Questions, each chapter has a mini Case Study Question.


Real World Scenarios Because reading a book isn't enough for you to learn how to apply these topics in your everyday duties, we have provided Real World Scenarios in special sidebars. These explain when and why a particular solution would make sense in a working environment you'd actually encounter.

Interactive CD Every Sybex Study Guide comes with a CD complete with additional questions, flashcards for use with a palm device, and two complete electronic books. Details are in the following section.

What's On the CD?

With this new member of our best-selling MCSE Study Guide series, we are including quite an array of training resources. The CD offers numerous simulations, bonus exams, and flashcards to help you study for the exam. We have also included the complete contents of the Study Guide in electronic form. The CD's resources are described here:

The Sybex E-book for This Study Guide Many people like the convenience of being able to carry their whole study guide on a CD. They also like being able to search the text via computer to find specific information quickly and easily. For these reasons, the entire contents of this Study Guide are supplied on the CD, in PDF format. We've also included Adobe Acrobat Reader, which provides the interface for the PDF contents as well as the search capabilities.

The Sybex MCSE Edge Tests The Edge Tests are a collection of both multiple-choice and case study questions that will help you prepare for your exam. There are four sets of questions:

• Two bonus exams designed to simulate the actual live exam.
• All the questions from the Study Guide, presented in a test engine for your review. You can review questions by chapter, by objective, or you can take a random test.
• The Assessment Test.


Here is a sample screen from the Sybex MCSE Edge Tests:

Sybex MCSE Flashcards for PCs and Palm Devices

The "flashcard" style of question offers an effective way to quickly and efficiently test your understanding of the fundamental concepts covered in the exam. The Sybex MCSE Flashcards set consists of more than 150 questions presented in a special engine developed specifically for this study guide series. Here's what the Sybex MCSE Flashcards interface looks like:


Because of the high demand for a product that will run on Palm devices, we have also developed, in conjunction with Land-J Technologies, a version of the flashcard questions that you can take with you on your Palm OS PDA (including the PalmPilot and Handspring’s Visor).

How Do You Use This Book?

This book provides a solid foundation for the serious effort of preparing for the exam. To best benefit from this book, you may wish to use the following study method:

1. Take the Assessment Test to identify your weak areas.
2. Study each chapter carefully. Do your best to fully understand the information.
3. Read over the Real World Scenarios, to improve your understanding of how to use what you learn in the book.
4. Study the Exam Essentials and Key Terms to make sure you are familiar with the areas you need to focus on.
5. Answer the review and case study questions at the end of each chapter. If you prefer to answer the questions in a timed and graded format, install the Edge Tests from the book's CD and answer the chapter questions there instead of in the book.
6. Take note of the questions you did not understand, and study the corresponding sections of the book again.
7. Go back over the Exam Essentials and Key Terms.
8. Go through the Study Guide's other training resources, which are included on the book's CD. These include electronic flashcards, the electronic version of the chapter review questions (try taking them by objective), and the two bonus exams.

To learn all the material covered in this book, you will need to study regularly and with discipline. Try to set aside the same time every day to study, and select a comfortable and quiet place in which to do it. If you work hard, you will be surprised at how quickly you learn this material. Good luck!


Contacts and Resources

To find out more about Microsoft Education and Certification materials and programs, to register with Prometric or VUE, or to obtain other useful certification information and additional study resources, check the following resources:

Microsoft Training and Certification Home Page
www.microsoft.com/traincert
This website provides information about the MCP program and exams. You can also order the latest Microsoft Roadmap to Education and Certification.

Microsoft TechNet Technical Information Network
www.microsoft.com/technet
800-344-2121
Use this website or phone number to contact support professionals and system administrators. Outside the United States and Canada, contact your local Microsoft subsidiary for information.

Palm Pilot Training Product Development: Land-J
www.land-j.com
407-359-2217
Land-J Technologies is a consulting and programming business currently specializing in application development for the 3Com PalmPilot Personal Digital Assistant. Land-J developed the Palm version of the Edge Tests, which is included on the CD that accompanies this Study Guide.

Prometric
www.prometric.com
800-755-3936
Contact Prometric to register to take an MCP exam at any of more than 800 Prometric Testing Centers around the world.

Virtual University Enterprises (VUE)
www.vue.com
888-837-8616
Contact the VUE registration center to register to take an MCP exam at one of the VUE Testing Centers.


MCP Magazine Online
www.mcpmag.com
Microsoft Certified Professional Magazine is a well-respected publication that focuses on Windows certification. This site hosts chats and discussion forums, and tracks news related to the MCSE program. Some of the services cost a fee, but they are well worth it.

Windows 2000 Magazine
www.windows2000mag.com
You can subscribe to this magazine or read free articles at the website. This resource provides general information on Windows 2000.

Cramsession on Brainbuzz.com
cramsession.brainbuzz.com
Cramsession is an online community focusing on all IT certification programs. In addition to discussion boards and job locators, you can download one of a number of free cramsessions, which are nice supplements to any study approach you take.


Assessment Test

1. Your website contains approximately 200GB of data. Management wants a complete backup every night. There is a nightly one-hour maintenance window available when the files are static. What minimum speed network will be required for this backup operation?

A. T1
B. 10Mbps
C. 100Mbps
D. ATM

2. What are Active Server Pages (ASP)? Select all answers that apply.

A. A file that contains a combination of text, HTML, and scripting
B. A powerful tool for creating dynamic web applications
C. A new standard for presentation of data on the Internet
D. A tool that uses an OLE DB provider to provide data access from an OLE DB source

3. Which of the following technologies provides load balancing for COM+ objects?

A. NLB
B. CLB
C. RAID 5
D. ASP

4. Network Load Balancing requires applications that utilize which network protocol?

A. IPX/SPX
B. TCP/IP
C. HTTP
D. DLC


5. Which IIS authentication mode is supported by most web servers and browsers?

A. Anonymous access
B. Basic authentication
C. Integrated Windows authentication
D. Client certificate mapping

6. Where is the Network Load Balancing service configured?

A. Under Add/Remove Programs in Control Panel
B. On the General tab of the Local Area Connection Properties dialog box
C. Under Network and Dial-up Connections in the Control Panel
D. Under Computer Management, which is located under Administrative Tools

7. What IIS permission allows a user to upload a file to a Web site?

A. Write
B. Directory Browsing
C. Read
D. Script Source Access

8. Which of the following logical groups of hardware or software resources provides a service using Microsoft Cluster service?

A. RAID
B. Resource group
C. Object Manager
D. Resource Manager


9. Which of the following is the best DNS configuration to use for high availability?

A. Active Directory Integrated
B. Primary
C. Delegated Domain
D. Forwarder

10. What is the purpose of Component Load Balancing (CLB)?

A. CLB is a program mode that defines how objects interact between applications.
B. CLB is a service that provides load balancing for COM+ objects.
C. CLB is a set of standards for sharing information among client applications.
D. CLB distributes IP traffic across multiple members in a cluster.

11. Which of the following technologies are included in Windows 2000 for use with your website to provide high availability of resources? Choose all answers that apply.

A. Microsoft Cluster service
B. Network Load Balancing (NLB)
C. DHCP
D. DNS
E. Component Load Balancing (CLB)

12. What solution should you consider when your network must move large amounts of data among multiple servers at very high transfer rates?

A. RAID
B. Fibre Channel
C. SAN
D. SCSI-3


13. Exchange 2000 Server is integrated with IIS 5. Which of the following protocols does IIS not provide to Exchange 2000 Server?

A. SMTP
B. POP3
C. DNS
D. NNTP

14. What is Active Directory's mode of replication?

A. Single master
B. Multi-master
C. Primary/Backup
D. Primary

15. What is the total number of nodes or servers that can be supported in a Windows 2000 Advanced Server cluster solution?

A. 2
B. 4
C. 16
D. 32

16. If your network design calls for an Active Directory with one forest and two domains per forest, what is the minimum number of domain controllers required for fault tolerance?

A. One
B. Two
C. Four
D. Eight


17. How does Network Load Balancing determine which network traffic to load-balance and which to ignore?

A. By setting affinity to None
B. By applying port rules
C. By reducing the number of cluster heartbeats
D. By adjusting the AliveMsgPeriod value in the Registry

18. Which of the following protocol types is used for an SSL connection between a web browser and IIS?

A. HTTP
B. FTP
C. HTTPS
D. TCP/IP

19. What feature in Microsoft Cluster service allows a resource to be restored to its original node when that node has recovered and is back online?

A. Failover
B. High availability
C. Failback
D. Network Load Balancing

20. Which of the following data-access technologies do not use the Component Object Model? Select all answers that apply.

A. ASP
B. OLE DB
C. ODBC
D. XML


21. Which of the following are attacks on a network that a firewall can be used to guard against? Select all answers that apply.

A. Port scans
B. IP address spoofing
C. Denial of service attacks
D. Packet sniffing

22. Which of the following is not a consideration of capacity planning?

A. Number of users
B. Server capacity
C. Site complexity
D. Quantity of files served

23. Which of the following clustering technologies supports resource failover and failback?

A. Network Load Balancing (NLB)
B. Microsoft Cluster service (MSCS)
C. Application Center
D. Component Load Balancing (CLB)

24. What types of authentication can be used when clients are trying to access a SQL Server 2000 database located in the data tier of an n-tier solution? Select two answers.

A. Anonymous logon
B. Client certificates
C. Windows authentication mode
D. Mixed mode


25. Several RAID levels can be deployed to provide fault tolerance for your highly available Web solution. Which of the following RAID levels should not be used because it does not provide fault tolerance?

A. RAID 5
B. RAID 1
C. RAID 0
D. RAID 0+1

26. Which of the following Windows 2000 tools can be used to capture and view real-time network data?

A. Network Monitor
B. Pstat
C. Pviewer
D. Netstat

27. What is NIC teaming used for?

A. Connecting a single server to multiple subnets
B. Connecting multiple NICs to a single subnet for redundancy
C. Joining multiple servers together via the heartbeat
D. Allowing a server to "team" two subnets by routing traffic

28. Which feature in Network Load Balancing is used with applications that require session states?

A. Affinity
B. Multicasting
C. Convergence
D. Port rules


29. Which of the following layers in the Windows DNA (Distributed interNet Applications) Architecture is responsible for acting as an interface between clients and the database stores or applications?

A. Presentation layer
B. Data store layer
C. Business logic layer
D. Systems layer

30. Internet Security and Acceleration (ISA) Server is an upgrade to what service in previous versions of Windows?

A. Microsoft Proxy Service
B. Internet Information Server
C. Routing and Remote Access Server
D. WINS

31. Which of the following is responsible for routing client requests to COM+ application clusters?

A. COM+ components
B. COM+ services
C. NLB
D. COM+ routing cluster

32. Which of the following utilities consists of workload simulations that can be used to test the capacity of IIS web servers?

A. Web Application Stress Tool
B. System Monitor
C. Web Capacity Analysis Tool
D. Performance Logs and Alerts


33. The e-commerce site you manage has a Catalog database that can be partitioned. Your database server is currently at its maximum performance capacity. Which of the following are legitimate options for upgrading your site? Choose all answers that apply.

A. Install an active/active cluster.
B. Install an NLB cluster.
C. Add memory and/or CPUs to the existing server.
D. Spread the database over multiple servers.

34. Which of the following counters tells you the number of threads that are in the processor queue?

A. Processor: %Processor Time
B. System: Processor Queue Length
C. Processor: Interrupts/sec
D. Processor: Page Files Bytes:Total

35. What is the name of the process in which a RAID array uses two controllers, each with its own set of disk drives?

A. Disk mirroring
B. Striping
C. Disk duplexing
D. Striping with parity

36. Which of the following ISA clients does not have the capability to authenticate users?

A. Firewall
B. SOCKS
C. Web Proxy
D. SecureNAT


37. Which two of the following RAID levels provide the capability for fault tolerance when using Windows 2000 for a highly available web solution?

A. RAID 1
B. RAID 5
C. RAID 0
D. RAID 3


Answers to Assessment Test

1. C. Required bandwidth is calculated as follows: 200GB × 8 bits/byte ÷ 3,600 seconds/hour = 44.4Mbps minimum bandwidth, so 100Mbps is the minimum network speed that satisfies the requirement. T1 is a WAN technology. See Chapter 6 for more information.

2. A, B. Active Server Pages (ASP) are files that contain HTML, text, and scripting, and the ASP technology is a powerful tool for presenting data on the Web. XML (Extensible Markup Language) is an emergent standard for displaying data on the Internet and is replacing HTML as the language of choice for creating dynamic web applications. ADO (ActiveX Data Objects) is a tool that uses an OLE DB provider to access an OLE DB source or database. For more information, see Chapter 9.

3. B. CLB (Component Load Balancing) provides load balancing for COM+ components. This technology has two parts: CLB software and a COM+ cluster. NLB (Network Load Balancing) is a clustering technology used to distribute IP requests across multiple clustered servers. RAID 5 is a storage method used in clustering to provide high availability. ASP (Active Server Pages) is a programming tool for combining HTML, scripting, and COM components to create dynamic Web applications. For more information, see Chapter 1.

4. B. TCP/IP is the only protocol that should be used with NLB. IPX/SPX stands for Internetwork Packet Exchange/Sequenced Packet Exchange and is used primarily for Novell networks. HTTP (Hypertext Transfer Protocol) is the protocol used by browsers to communicate on the Internet. DLC (Data Link Control) is used for connectivity to IBM AS/400 mainframes and network-attached HP printers. For more information, see Chapter 2.

5. B. Basic authentication is the mode used in IIS 5 that is part of the HTTP 1.0 specification and is supported by most web servers and browsers on the Internet. For more information, see Chapter 7.


6. B. The Network Load Balancing service is configured in the Local Area Connection Properties dialog box, which is located under Network and Dial-up Connections. It is installed by default but is not active until you select the Network Load Balancing check box and configure additional properties. For more information, see Chapter 2.

7. A. The Write permission allows a user to upload a file. Write by itself will not allow you to view a file. For more information, see Chapter 7.

8. B. A resource group is used in Windows 2000 Advanced Server to provide services via Microsoft Cluster service. RAID, or Redundant Array of Independent Disks, is a technology for implementing a fault-tolerant disk subsystem. The Resource Manager oversees the starting and stopping of resources and is responsible for initiating resource group failover. For more information, see Chapter 3.

9. A. Whenever possible, always use Active Directory Integrated domains. Active Directory Integrated DNS domains supply secure updates, automatic replication, and native fault tolerance. For more information, see Chapter 5.

10. B. Component Load Balancing (CLB) is a service of Microsoft Application Center 2000. It provides dynamic load balancing for COM+ application components. For more information, see Chapter 10.

11. A, B, E. The Microsoft Cluster service (MSCS) can be used on the back end of your n-tier environment to provide failover capability for database servers. Network Load Balancing (NLB) can be used on the front end of your n-tier environment to load-balance IP requests across multiple web servers. Component Load Balancing (CLB) load-balances COM+ components. The clients on the front-end servers, when requesting services from the back-end servers, use these components. For more information, see Chapter 1.

12. C. SANs (Storage Area Networks) can be used to provide movement of large amounts of data from multiple servers at very fast data-transmission rates. For more information, see Chapter 4.


13. C. Exchange 2000 Server is tightly integrated with IIS 5. IIS provides HTTP, SMTP, POP3, IMAP4, and NNTP services for Exchange 2000 Server. DNS service is not part of IIS. For more information, see Chapter 10.

14. B. Active Directory runs in a multi-master configuration, meaning that each domain controller maintains a full read/write copy of the directory. Single Master and Primary/Backup are NT 4.0 terms. Primary is a type of DNS server. For details, see Chapter 5.

15. A. Windows 2000 Advanced Server supports two-node clusters. If your needs warrant more capacity than is provided with a two-node configuration, consider using Windows 2000 Datacenter, which has support for four-node clusters. For more information, see Chapter 3.

16. C. For fault tolerance in an Active Directory-based design, you need two domain controllers per domain, so one forest with two domains requires at least four domain controllers (2 * 2 = 4). For more information, see Chapter 5.

17. B. Port rules are designed to control the traffic flow of the network. This is done through the filtering mode, which is applied to a range of ports. Affinity is used with applications that require session states. The None setting means requests are distributed equally to all hosts in the NLB cluster. Reducing cluster heartbeats will have no effect on traffic flow; heartbeats are used to determine which cluster hosts or nodes are available. The AliveMsgPeriod parameter is used for convergence. For more information, see Chapter 2.

18. C. The HTTPS (Hypertext Transfer Protocol Secure) protocol is used by SSL to make a secure connection between a web browser and IIS. For more information, see Chapter 7.

19. C. Failback is the feature that allows for a resource's return to the original node when that node is restored and back online in the cluster. Failover is the cluster node's capability to take ownership of resources on a failed node. High availability refers to the ability for resources to be available in the event of hardware or software failures. Network Load Balancing (NLB) is a service in Windows 2000 that is used to distribute incoming IP requests across multiple clustered servers. For more information, see Chapter 3.


20. C, D. ODBC, Open Database Connectivity, and XML are the only data-access tools in this list of options that do not use a COM component for accessing a SQL 2000 Server database. For more information, see Chapter 9.

21. A, B, C. All of these events are attacks that can be prevented by firewalls. Port scanning is used by hackers to find potentially vulnerable ports on a computer system. IP address spoofing allows an intruder on the Internet to impersonate a local system's IP address. Denial of service (DoS) attacks are designed to make a network crash by flooding it with useless traffic. Packet sniffers, however, can be used to monitor and steal data as it travels through a network; firewalls are no defense against them, since the packets can be captured before they ever reach the firewall. For more information, see Chapter 8.

22. D. Capacity planning includes the following factors: the number of users (average and peak) of the site, the capacity of servers and network, and the complexity of the site. The number of files on the servers is not a factor. See Chapter 6 for more information.

23. B. Microsoft Cluster service (MSCS) supports both failover and failback of cluster resources and resource groups. Network Load Balancing (NLB) is a way to distribute client requests across multiple clustered servers. Application Center is new for Windows 2000 and provides the capability for administrators to create and manage clusters from one central resource. Component Load Balancing provides dynamic load balancing for COM+ components. For more information, see Chapter 3.

24. C, D. Windows authentication mode and mixed authentication are the two modes used when a client is trying to access databases located in the data layer of an n-tier network. Both anonymous logon and client certificates are Internet Information Server (IIS 5) authentication techniques. For more information, see Chapter 9.

25. C. RAID 0, simple striping, provides no data redundancy or fault tolerance. It does provide significantly better I/O throughput than the other RAID methods but offers no help with high availability or error correction. It is not a viable candidate for a highly available Web solution. RAID 1, RAID 5, and RAID 0+1 all provide fault tolerance and are candidates for use in a Web environment. For more information, see Chapter 1.


26. A. Network Monitor is an administrative tool included in Windows 2000 that will capture data frames and display, filter, and edit these frames for later analysis. It is basically a software-based network packet analyzer. For more information, see Chapter 11.

27. B. NIC teaming connects redundant NICs to a subnet. Connecting a single server to multiple subnets doesn't require anything special. Heartbeat connections cannot use NIC teaming. Routing traffic between subnets has nothing to do with teaming. For more information, see Chapter 5.

28. A. Affinity is used with applications that require session states. When NLB is configured to use affinity, a client's requests are associated with a specific cluster host. Multicasting is used to determine whether or not a multicast MAC address is used for cluster operations. Convergence occurs when the state of the NLB cluster changes and a new load distribution is decided for servers that share the handling of network traffic. Port rules specify both the load weight and priority handling parameters for applications that are being load-balanced. For more information, see Chapter 2.

29. C. The business logic layer, tier 2, is responsible for connecting users on the presentation layer (tier 1) with the data resources on the data store layer (tier 3). This is done via technologies that include ISAPI extensions, ASP, CGI applications, and COM components. For more information, see Chapter 10.

30. A. ISA Server is the successor to Microsoft Proxy Server. More than just an upgrade, ISA Server includes the features of both Proxy Server and firewalls in one package. For more information, see Chapter 8.

31. D. COM+ routing clusters are responsible for routing requests from the presentation tier to COM+ application clusters. A COM+ routing cluster is basically a load balancer for COM+ components. For more information, see Chapter 10.

32. C. The Web Capacity Analysis Tool (WCAT) is included in the Windows 2000 Resource Kit. It tests IIS responses to various client workload simulations. WCAT has more than 40 prepackaged content and workload simulations for testing various server and network configurations. For more information, see Chapter 11.


33. A, C, D. An active/active cluster requires that the database be partitionable, to spread the database over multiple servers. Adding memory and/or CPUs lets you scale up the existing server, and because the database is partitionable, it can also be spread across multiple servers. An NLB cluster is not a reasonable option for a database server. See Chapter 6 for more information.

34. B. The System: Processor Queue Length counter gives you the number of threads in the processor queue. A number greater than 2 indicates that the processor may be a bottleneck for the system and that an additional CPU may be needed. For more information, see Chapter 11.

35. C. Disk duplexing involves two controllers used in a disk array to provide an additional degree of fault tolerance. For more information, see Chapter 4.

36. D. The ISA client SecureNAT cannot authenticate users and security groups. However, security rules can be configured to allow for filtering of ports by IP address. For more information, see Chapter 8.

37. A, B. RAID 1, defined as disk mirroring, and RAID 5, defined as disk striping with parity, can both provide fault-tolerant data storage in Windows 2000. For more information, see Chapter 4.


Chapter 1

Introduction to High-Availability Web Solutions


Highly available web solutions are a must in today's enterprise environment. Organizations and corporations today depend on applications and data that reside on the Internet or on the organization's own intranet—or, more likely, both. This chapter gives you an overview of some of the reasons why high availability is so important. We will discuss past situations where corporations did not have a high-availability solution in place and, as a result of poor business planning or a natural disaster, suffered when their existing solution failed. The chapter concludes with an introduction to some of the high-availability technologies provided in the Windows 2000 family of operating systems.

The Importance of High Availability

In the mid- to late 1990s, the Internet changed from a relatively unknown entity into a worldwide phenomenon. Everyone was excited by its potential. Once a topic of conversations among academics and technical people, “the Web” quickly became cool subject matter for students, retirees, middle managers, parents, and golfing buddies. With this explosion in Internet-based activity, more and more corporations and organizations began leveraging web technologies to provide customer service, product delivery, training, financial services, universal e-mail access, online gambling, online shopping, and much more. The list of services and potential services continues to grow. Understandably, as people grow more and more dependent on the Internet and web-based technologies, web servers have become mission-critical elements of business. These applications and services must be available at all times.


A number of imperatives are requiring corporations to implement highly available web infrastructures:

- Achieving customer satisfaction
- Promoting investor confidence
- Increasing worker productivity
- Preventing loss of competitiveness

Although all of these factors together are related to overall high availability, each one individually can have a different impact on the corporation. Customer satisfaction is usually the first to be affected by the failure to provide high availability. When users cannot access the service or data they are paying for, customer satisfaction suffers an instant negative impact. Users expect their online services to be there at all times, just like the telephone…or the cable television service…or utilities such as gas and electricity. It's yet more proof of how pervasive the Internet is today. Without access to their e-mail, many users feel they are cut off from the world.

User Dependence on the Internet

I was once responsible for a data center that housed a large corporation's North American network. Due to a problem at their ISP (Internet Service Provider), Internet connectivity was down. As a result, all Internet access including Internet e-mail was not operating. My team received a frantic call from a user working on a contract with a lawyer from another company. The user, Jeff, had just an hour to get a marked-up version of the contract to the lawyer, but with e-mail down he knew that wouldn't be possible. He demanded we get the e-mail flowing again. I explained to Jeff that it was out of our hands and that the ISP was working on it. After thinking about it, I asked him if he could fax the information, since the phone system was not affected. He said, “Oh…yeah, I can do that. I forgot about the fax!” For me, this was a real paradigm shift. Almost everyone in the world is now so dependent on Internet access and Internet e-mail that they forget or discount options that are equally effective.


The next factor to be affected by high availability is investor confidence. Apart from plummeting customer satisfaction when a service is disrupted, a corporation may actually lose customers whenever there is an outage. At the very least, refunds and credits may have to be issued to the affected customers. As a result, investor confidence can be severely damaged. Share prices have been seen to fall within hours of major outages on some large Internet websites. For an Internet-based company, brand name and market leadership are the most important elements in maintaining a successful existence. If the company's management team allows service outages to occur, or worse, to continue, the investing community can quickly lose confidence in the management team. Customers may accept a refund or credit and cut the company some slack, but investors are usually less forgiving. Share prices can be affected for weeks, months, or possibly years. Loss of competitiveness is another calamity that is closely related to failing investor confidence. When a major business system fails—an e-commerce website, for instance—that company is at a competitive disadvantage. Customers wanting to order or shop for products on the website can easily and quickly go to another e-commerce site to find what they want. And they may not come back.

The AOL Story

In the mid-1980s, America Online (AOL) started out as just another service provider offering a very limited number of online services to a very small home-computer market. The market leader at the time was CompuServe, which was not exactly known to be user-friendly and sported a quaint, text-based interface. If you were using CompuServe, you were most likely someone who was very technical and loved computers. AOL, on the other hand, fully embraced the new and exciting graphical user interface (GUI) popularized by the newly released Microsoft Windows operating system (OS) and the Apple Macintosh. The environment was more user-friendly and allowed people without a strong knowledge of computers to take part in the growing Internet revolution.


By the early nineties, AOL had steadily eroded CompuServe’s market share. In a bold step, AOL implemented a new marketing campaign and an aggressive pricing plan. The marketing campaign was to essentially “carpet bomb” homes throughout the United States with AOL installation diskettes that included free time on the service. The pricing campaign offered unlimited access for $19.95 per month. At the time, AOL, CompuServe, and nearly all other online services were charging hourly fees. AOL’s new campaigns caused a huge demand for the provider’s services, and the infrastructure took a terrible hit from the overwhelming demand. AOL’s service, somewhat shoddy even before the campaigns, became almost nonexistent. Users were constantly getting busy signals, and when they did get online were constantly being disconnected. In August of 1996, AOL suffered a daylong outage, and customer satisfaction collapsed. The company quickly became the butt of jokes. Many users switched to other services—including CompuServe, which had implemented its own unlimited access plan. Once Wall Street learned of AOL’s problems, investor confidence fell drastically. Stockholders were concerned that the provider could never satisfy its increased customer demand and would continue relinquishing market share to CompuServe. Of course, AOL’s stock price suffered greatly. To address these concerns, AOL pumped tens of millions into advertising and customer service. It began implementation of an improved infrastructure to make the service more highly available. Access points were added, as well as more phone lines at each access point. The network’s backbone was beefed up to handle the high traffic and load. Soon, users were getting fewer busy signals. They were not getting disconnected. Investor confidence began to return, and the stock price soared. As a result of its aggressive approach to retaining market share and the improvements to a more highly available infrastructure, AOL increased customer satisfaction and vastly improved investor confidence. Today, it is the largest ISP in the world. AOL itself now services over 30 million users and owns former rival CompuServe (3 million users). Netscape (40 million users) and the online instant messaging service ICQ (80 million users) are also part of the AOL empire. Thanks to a merger with Time Warner, AOL is a major force in the entertainment and media universe. The overall company, AOL Time Warner, comprises several cable giants (HBO, CNN, Turner Broadcasting, Time Warner Cable), movie studios (New Line Cinema and Warner Brothers), a publishing enterprise (Time), and a music label (Warner Music).


In many of today's corporations, the role of the intranet continues to grow. Employees and managers use intranets to share information, to file status reports, and to record time sheets. Also, by means of web-based applications and interfaces, more functions are being transferred to the corporate intranet. As a result of its growing importance, the failure of an entire intranet or key parts of it means worker productivity suffers. As you can see, one or all of the four high-availability imperatives can be the driving force in a corporation's success or failure. Today, the growing dependence on the Internet and web-based technologies makes highly available web infrastructures critical to keeping the demands of these imperatives in check. The purpose of this book is to prepare you to pass exam 70-226, Designing Highly Available Web Solutions with Microsoft Windows 2000 Server Technologies, and to give you the background necessary for designing effective, highly available web solutions for your organization.

The Web's Role in Today's Enterprise Network

Today's enterprise network is quickly embracing web-based technologies. Most users are quite web savvy and can easily find their way around a web interface such as Microsoft's Internet Explorer. Today's users are also quick to embrace new web-based technologies, such as instant messaging and streaming media. Because of their familiarity with the interface, they don't need to be trained in multiple applications or to be retrained when new versions are released. Accordingly, corporations and organizations are turning to web-based technologies to implement applications and services. For example, many corporations are implementing web-based timesheets. Corporations have used various methods in the past to track employees' time, from time clocks and punch cards to computer spreadsheets. The user would hand in their timesheet, and the data would be manually entered into another program, such as a payroll or accounting system. These methods were inefficient and error-prone. Users today can simply access the corporate intranet and bring up a web-based timesheet. Once the employee number is entered, all information relevant to that employee is automatically accessed (employee's name, the period-ending date, and active projects). The user simply enters the hours worked on the appropriate day and for a particular project. The timesheet is submitted, and a supervisor accesses the intranet site and reviews the record. It can be approved or changed online. Once approved, the timesheet is automatically submitted into the payroll or accounting system. The use of this web-based timesheet helps eliminate most user errors, resulting in a more efficient and streamlined timesheet process.


Intranet timekeeping is just one example of how the Web is changing today’s enterprise network. There are many other applications of web technology, such as implementing online phone directories, service requests, company information, service monitoring, human resources, inventory checking, and customer service. Virtually any business process can be implemented in a web interface.

High Availability and the Internet Presence

For companies that rely heavily on an Internet presence, a period of “dead air” can have serious repercussions for customer satisfaction. Internet users have high expectations for consistency and performance. Many studies show that if the site is unavailable, customers will shop elsewhere. Enterprises that rely heavily on the Internet, unlike their real-world brick-and-mortar competitors, are effectively out of business when an outage occurs. Imagine a large department store with hundreds of stores throughout the United States having to suddenly close and lock the doors for a few hours or days. A single store might conceivably be so affected by a power outage, fire, or some natural disaster, but all stores being hit at once is virtually impossible. For Internet-reliant organizations, however, such an event is a real, distinct possibility. And finally, to add insult to injury, online outages attract media attention. For a company that relies on its brand name, its worst nightmare is to see that name being bandied about the evening news because of an outage. For Internet-reliant companies, high availability is their lifeblood. Without it, they are walking a dangerous line between life and death for their business. Such organizations must optimize the uptime for their site—although true 100%, 24/7 uptime is impossible, it can be effectively achieved through the use of high-availability technologies.

Real-World Lessons from the Web

Here are some stories about the effects of service outages at some well-known companies.

eBay.com

eBay.com is a very popular online auction service where people auction off or bid on items. Its website consistently ranks among the most visited on the Internet. The site boasts over 29 million registered users, and in 2000 more than $5 billion in goods were sold on eBay.


In June of 1999, eBay.com suffered a major service outage that lasted for 22 hours. The problem was caused by a faulty system disk that in turn corrupted a database. eBay did not have a “warm” backup replica of its site, so when the database was corrupted, the site was effectively down. Users could not bid on items nor list items for auction. While the technical staff was replacing the disk and recovering the database, the site remained down. Customer satisfaction suffered greatly—the company paid out $3.9 million in credits to people affected by the outage. The Monday following the outage, investors trimmed $4 billion from eBay's market value. As a result of this event, eBay beefed up its infrastructure and hired more and better technical staff. The company worked diligently with its hardware and software vendors to build a more highly available infrastructure. A warm backup facility was implemented, to take over all site operations within two to three hours of a site failure. Although the site has experienced occasional outages since taking these preventive measures, the interruptions have been very temporary and performance has vastly improved. Today, eBay remains one of the most successful websites, and I believe it is partly due to its insistence on using a highly available infrastructure.

E*TRADE.com

One of the first high-profile outages to afflict an Internet company happened to the online brokerage firm E*TRADE.com, where customers can buy and sell stocks over the Internet. Because it deals in equities and investments, E*TRADE must maintain a high level of trust with its customers and investors. In early 1999, software engineers at E*TRADE were implementing some software upgrades to the website. Unfortunately, one of the upgrades contained a bug, which caused sporadic site outages over the next few days. Although customers had come to expect snarls or delays on heavy trading days, the same customers were highly frustrated by the software glitch. Many claimed they lost money because they could not cancel buy and sell orders. They also claimed loss of financial opportunities. Investors hammered E*TRADE's stock, causing it to lose over five percent by the end of the first day of the outage.


Microsoft.com

Perhaps the biggest and most embarrassing outage happened on January 24, 2001. On that day, many Microsoft websites were shut down for up to 23 hours; these included MSNBC.com, Hotmail.com, Carpoint.com, MSN.com, Encarta.com, as well as the parent Microsoft.com. At first, many speculated that the outage was caused by hackers and a denial of service (DoS) attack. After spending nearly a day determining and correcting the problem, Microsoft told the world the truth: The incident had occurred because of a router configuration error. The routers affected were responsible for directing traffic to and from a network that housed Microsoft's Domain Name Service (DNS) servers. Due to the configuration error, users from the Internet were no longer able to resolve any host names serviced by the Microsoft DNS servers, including the popular Hotmail.com and MSNBC.com. Although the router configuration error caused the problem, it was exacerbated by the way Microsoft had designed its DNS infrastructure to be hosted on four DNS servers, all housed in the same data center on the same network. So when the router configuration error occurred, the entire DNS infrastructure went down. Small companies and organizations can get away with such an inadequate design because their impact on the Internet is minimal. But with a company as substantial as Microsoft, this design had dire implications. Large, international companies must have their DNS infrastructure spread out over various networks, facilities, and even ISPs, in order to limit their exposure to service outages. There was little actual revenue loss at Microsoft due to the outage, but it was a significant public relations gaffe for a high-profile name that strives to provide outstanding, reliable, and high-performing operating systems.

High Availability and Intranet Resources

High availability for intranet resources is a different animal from its Internet big brother. Most intranet resources tend not to be mission critical. For example, a company may host its employee directory on its intranet. When someone needs information about a fellow employee, such as a phone number, e-mail address, or location, they can go to the intranet site and


access the employee directory. If the employee directory was unavailable for some reason, employees would be inconvenienced, but it would not stop the company from operating. That said, however, there may be applications or data that are in fact mission critical. For example, consider a pharmaceutical company that has placed on its intranet data about a new drug the company is testing. Before a drug can be released or sold, the U.S. Food and Drug Administration (FDA) requires that trial studies be conducted. These trial studies are used to ensure that the drug is safe and to document any known side effects. The FDA also requires that this data be readily available to scientists, doctors, and pharmacists for use in identifying possible side effects or life-threatening reactions to the drug. The application making this data available would be mission critical for the obvious reasons: It’s an FDA requirement, and lives may be in danger if the physician prescribing the drug is not informed of its side effects. Other types of intranet applications that may require high availability are inventory tracking, manufacturing processes, order tracking, and customer databases.

High-Availability Technologies

With the Internet becoming increasingly pervasive, Microsoft realized that its customers would require a broad array of technologies to ensure that web infrastructures remained highly available. Many of Microsoft's competitors in the Internet arena already sported systems with excellent uptime. These competitors were known for the high availability and scalability of their operating systems. Numerous large Internet companies were relying heavily on these providers. Microsoft strongly aspired to this market segment but unfortunately was not known for highly available and highly scalable systems. They were primarily recognized for their desktop and file/print service capabilities. The company set out to address these concerns by implementing new features into their Windows NT operating system. Windows NT 4.0 provides features that improve the OS availability, including clustering, software RAID, and COM. With the release of Windows 2000, a further assortment of new technologies was added. These included Network Load Balancing (NLB), server clustering, COM+, and Component Load Balancing (CLB), as well as robust and highly fault-tolerant data-storage solutions. Today, thanks to these


enhancements, Windows 2000 can be scaled where and when growth occurs. It is also capable of providing high availability to the companies that require it.

Network Load Balancing (NLB)

One way that Microsoft increased the scalability and availability of Windows 2000 was to add a new clustering technology called Network Load Balancing (NLB). NLB, found in the Windows 2000 Advanced Server and Windows 2000 Datacenter Server operating systems, allows mission-critical services such as web servers and applications, Terminal Services, streaming media servers, and even virtual private network (VPN) servers to be load-balanced across multiple servers. Network Load Balancing does exactly as its name implies: It load-balances IP network traffic across multiple cluster nodes. As a result, NLB can scale by increasing the number of nodes participating in the cluster. NLB is also highly available and fault tolerant. It can detect node failures and automatically redistribute traffic across the remaining, healthy nodes. So how does the NLB technology work? The cluster is configured with a virtual IP address. As shown in Figure 1.1, this is the IP address to which clients make requests. A DNS resource record can also be created to give clients an easier name to remember.

FIGURE 1.1 Network Load Balancing: clients on the Internet connect through a router to the nodes of a Network Load Balancing (NLB) cluster.


NLB distributes the IP traffic among the nodes. Each node is running the desired service, such as a web server. Every node in the cluster receives the IP packets from all client requests, and then filters the traffic, dropping client requests that are handled by another node. Cluster nodes can even respond to different client requests simultaneously, resulting in faster performance for the client. For example, if a client makes a web page request from the cluster, several nodes may be involved to send the various elements, such as images and text, to the client. The NLB cluster nodes all report the amount of workload they can handle. Using this information, the Network Load Balancing service can redistribute workloads throughout the cluster accordingly—thus accomplishing load balancing. Each node in the NLB cluster emits a “heartbeat” signal on the network. Other member nodes listen for these signals. When a node fails, the other nodes observe that they are no longer receiving the heartbeat. Accordingly, the remaining nodes in the cluster redistribute the workload. Clients are not affected, since most web browsers will automatically retry a failed connection. Clients may experience a slight delay while the connection with a different node is opened.
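To make the filtering-and-heartbeat idea concrete, here is a minimal, hypothetical Python sketch. It is a toy model under stated assumptions, not Microsoft's actual distribution algorithm (real NLB uses its own statistical hashing and port rules, and all names below are invented for illustration): every node sees every request, a shared hash decides which single node answers, and a peer whose heartbeat goes silent is dropped so the survivors re-divide the client space.

import hashlib
import time

class NlbNodeSim:
    """Toy NLB node: every node sees every request, but only the
    'owner' of a given client address responds; the rest drop it."""

    def __init__(self, name, live_nodes):
        self.name = name
        self.live_nodes = live_nodes      # shared, ordered list of node names
        self.last_heartbeat = {}          # peer name -> last heartbeat time

    def owns(self, client_ip):
        # Map the client address onto exactly one live node via a shared hash.
        bucket = int(hashlib.md5(client_ip.encode()).hexdigest(), 16)
        return self.live_nodes[bucket % len(self.live_nodes)] == self.name

    def handle_packet(self, client_ip):
        if self.owns(client_ip):
            return f"{self.name} answers {client_ip}"
        return None   # another node answers; this node drops the packet

    def hear_heartbeat(self, peer):
        self.last_heartbeat[peer] = time.time()

    def converge(self, timeout=5.0):
        # Drop peers whose heartbeats are overdue so the survivors
        # automatically re-divide the client space among themselves.
        now = time.time()
        for peer in [p for p in self.live_nodes if p != self.name]:
            if now - self.last_heartbeat.get(peer, now) > timeout:
                self.live_nodes.remove(peer)

Because every node applies the same hash to the same membership list, exactly one node accepts each request, and removing a failed peer automatically re-maps its clients to the remaining nodes.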

Server Clustering

Microsoft first introduced the Cluster service in Windows NT Server 4.0 Enterprise Edition. A new and improved version of server clustering is now part of Windows 2000 Advanced Server and Windows 2000 Datacenter Server. The Microsoft Cluster service (MSCS) allows multiple nodes, or servers, to operate in unison as a server cluster, creating a highly available platform for the data as well as any applications running within the cluster. As a result, clustering provides improved availability as well as increased scalability. Whereas Network Load Balancing is concerned with high availability for front-end systems such as websites, server clustering is really meant for back-end systems, where access to the data must be maintained. Examples of these back-end systems are messaging and database systems. Server clustering operates similarly to Network Load Balancing. Clients accessing the cluster do so via a virtual IP address. The server cluster itself can maintain two connections to the network. The first connection is to the production network (where clients will access the cluster). The second


connection is to a private network used for intracluster communications, such as heartbeats and status and control messages. The cluster itself manages resource groups. A resource group contains a number of resources that may include physical devices such as hard disks and network interface cards (NICs), and logical devices such as databases, IP addresses, and applications. Using resource groups makes it easier to manage a cluster. When a node fails, the Cluster service will move resource groups from the failed node to a working node. A main component of a cluster configuration is the external storage array. This component is shared by, and connected to, both nodes. This is an important concept, because each node must have access to data on the hard drives in the event of a node failure. The connection between both nodes and the external storage array can be SCSI or Fibre Channel. Server clustering can operate in two different configurations, depending on which version of the Windows 2000 Server OS is being used. With Windows 2000 Advanced Server, server clustering can operate using two nodes (see Figure 1.2). With Windows 2000 Datacenter Server, server clustering can operate with up to four nodes (see Figure 1.3).

FIGURE 1.2 Server clustering in Windows 2000 Advanced Server: two cluster nodes joined by a private cluster network, each connected via SCSI or Fibre Channel to a shared external disk array, with network clients reaching the cluster over the public network.
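The failover and failback behavior just described can be illustrated with a small hypothetical Python model. This is only a sketch of the concept, not the MSCS programming interface; the class and node names are invented.

class ResourceGroup:
    """A set of resources (disks, IP addresses, applications) that
    moves between cluster nodes as a single unit."""
    def __init__(self, name, preferred_node):
        self.name = name
        self.preferred = preferred_node
        self.owner = preferred_node

class ClusterSim:
    def __init__(self, nodes):
        self.live = set(nodes)
        self.groups = []

    def fail_node(self, node):
        # Failover: a surviving node takes ownership of the failed
        # node's resource groups.
        self.live.discard(node)
        for g in self.groups:
            if g.owner == node:
                g.owner = sorted(self.live)[0]

    def restore_node(self, node):
        # Failback: groups return to their preferred owner once it
        # is back online in the cluster.
        self.live.add(node)
        for g in self.groups:
            if g.preferred == node:
                g.owner = node

cluster = ClusterSim(["NodeA", "NodeB"])
cluster.groups.append(ResourceGroup("SQL", preferred_node="NodeA"))
cluster.fail_node("NodeA")      # the SQL group fails over to NodeB
cluster.restore_node("NodeA")   # the SQL group fails back to NodeA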


FIGURE 1.3 Server clustering in Windows 2000 Datacenter Server: up to four cluster nodes connected through a Fibre Channel switch to external disk arrays, with a public network for clients and a private network for the cluster.

COM+ and Component Load Balancing (CLB)

COM+ objects or components are small, individually compiled software objects. They are typically housed in dynamic-link library (DLL) or executable (EXE) files and are written in a number of programming languages, such as C++, Visual Basic, Active Server Pages (ASP), or Visual Basic Scripts (VBS). Because they are small and portable, COM+ components can be reused in various applications. Applications and other components can place calls to the COM+ objects. In turn, they perform certain functions and return the data to the requesting application or component. COM+ components can be stored anywhere on the network. They can be placed directly on the clients, on web servers, or on a dedicated COM+ server. In Windows 2000, COM+ services are built into the operating system. These services can support a limited use of Component Load Balancing (CLB). Application Center 2000, on the other hand, contains a full-blown COM+ cluster.


Component Load Balancing allows the COM+ objects to be load balanced. CLB operates in a cluster configuration, much like the two technologies mentioned earlier, NLB and server clusters. COM+ components are placed on nodes within the CLB cluster, and calls to the COM+ components are load-balanced among the nodes (see Figure 1.4).

FIGURE 1.4 Component Load Balancing: client requests from the Internet pass through a router to an NLB cluster, whose nodes route COM+ calls through another router to the nodes of a CLB cluster.

A component that is running COM+ Services uses one or more interfaces to render its services to clients. The client must have software that allows it to communicate with these interfaces. Typically, clients communicate with the interface through one of several languages, including Visual Basic, ASP, and JavaScript. Behind the scenes, CLB maintains what is essentially a routing table that lists the COM+ components, their location (that is, on which node of the cluster they reside), and their respective response times. If a component or node fails to respond, COM+ services move on to the next node or component in the table.
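To see how such a routing table might drive load balancing, consider the following illustrative Python sketch. The names are hypothetical and the real CLB implementation in Application Center 2000 is not exposed this way; the sketch only shows the idea of routing each call to the node with the best observed response time and skipping unresponsive nodes.

class ClbRoutingTable:
    """Toy CLB router: track a smoothed response time per node and
    dispatch each COM+ call to the fastest responsive node."""
    def __init__(self, nodes):
        self.response_ms = {node: 0.0 for node in nodes}

    def record(self, node, elapsed_ms):
        # Exponentially smooth measurements so one slow call
        # doesn't dominate the routing decision.
        self.response_ms[node] = 0.8 * self.response_ms[node] + 0.2 * elapsed_ms

    def call(self, component, invoke):
        # Try nodes from fastest to slowest; drop nodes that fail.
        for node in sorted(self.response_ms, key=self.response_ms.get):
            try:
                return invoke(node, component)
            except ConnectionError:
                del self.response_ms[node]   # stop routing to a dead node
        raise RuntimeError("no CLB node could satisfy the request")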


Data Storage Solutions

If you saw the movie Field of Dreams, you'll remember the phrase “If you build it, they will come.” Of course, in the IT industry we're not building a baseball diamond out of a cornfield, but the phrase nevertheless is very appropriate when discussing data storage solutions. In terms of storage space, you can depend on the fact that users will always use every byte of disk space available. You've doubtless watched data storage use continue to expand in every environment you've ever been in. To manage such demands, there are a number of technologies out there for increasing the data storage size while also increasing availability and fault tolerance. Let's look at brief overviews of these technologies.

RAID

Redundant Array of Inexpensive Disks (RAID) is a method of storing data across multiple disks. These systems are often referred to as arrays. RAID operates by striping the data, whereby each drive is divided equally into multiple sections. Data is striped (that is, written in segments) across the array. By storing the data across multiple disks, I/O operations can occur more efficiently. It's more efficient because the multiple disks operate independently. Another noteworthy feature of RAID is the ability to create fault-tolerant RAID configurations. With these configurations in place, should a disk fail, the remaining disks in the array will keep the data available. To the operating system, whether it's Windows NT, Windows 2000, Novell NetWare, or something else, the RAID array appears as a single logical drive. There are many different RAID levels, but we are focusing on the levels that are relevant to Windows 2000.

RAID 0

This first level of RAID does not include any form of fault tolerance. It only stripes data across the array (see Figure 1.5). In Windows 2000, a volume that incorporates RAID 0 is called a stripe set. If a drive in a RAID 0 array fails, the entire array will fail. Implementation of RAID 0 requires at least two drives.

FIGURE 1.5 RAID 0: data segments S1 through S6 striped across two drives, with no redundancy.


RAID 1

RAID 1 is also known as disk mirroring (see Figure 1.6). A RAID 1 array consists of at least two drives that are entirely duplicated (mirrored). If a drive in a RAID 1 array fails, data is still accessible via the remaining healthy drive. Implementation of RAID 1 requires at least two drives.

FIGURE 1.6 RAID 1: segments S1 through S3 written identically to both drives of the mirrored pair.

RAID 5

RAID 5 incorporates parity information into the data being stored. This parity information is produced via an algorithm and stored in a parity stripe. This stripe is rotated among the drives in the array (see Figure 1.7). In the event of a drive failure, the RAID system uses the parity information to re-create the data from the failed drive. Once the drive is replaced, the RAID system uses the parity information to rebuild the stripes that were lost with the failed drive. Implementation of RAID 5 requires at least three drives.

FIGURE 1.7 RAID 5: data segments S1 through S12 striped across Drives A through D, with the parity stripe rotated among the drives.
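The parity algorithm is typically a simple byte-wise XOR across the data blocks in a stripe, which is what makes reconstruction possible. The short Python sketch below demonstrates the idea; it is illustrative only, since real RAID controllers work at the block-device level rather than on byte strings.

from functools import reduce

def parity(blocks):
    # The parity block is the byte-wise XOR of the data blocks.
    return bytes(reduce(lambda a, b: a ^ b, byte_group) for byte_group in zip(*blocks))

def rebuild(surviving_blocks, parity_block):
    # XOR of the survivors plus parity yields the missing block.
    return parity(surviving_blocks + [parity_block])

d1, d2, d3 = b"AAAA", b"BBBB", b"CCCC"   # one stripe across three data drives
p = parity([d1, d2, d3])
assert rebuild([d1, d3], p) == d2        # drive 2 lost; its data is recovered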

RAID 0+1

RAID 0+1, sometimes also called RAID 10, consists of stripes (that is, RAID 0) written across a RAID 1 array. In other words, each stripe is actually written to a mirrored pair of drives (see Figure 1.8). This configuration combines


the performance of RAID 0 with the fault tolerance of RAID 1. As can be imagined, however, it also increases the number of drives that must be utilized and therefore may not be cost effective for all implementations.

FIGURE 1.8 RAID 0+1: a stripe set of segments A through H, with each stripe written to a mirrored pair of drives.

Software and Hardware RAID

We've refrained so far from distinguishing between software and hardware RAID configurations. The distinction is irrelevant in terms of the type of RAID being utilized. So what is the difference? Software RAID systems are usually built into the Windows NT, Windows 2000, Novell NetWare, or other operating system. This is a less-expensive solution for organizations already utilizing one of these particular systems, but its main drawback is performance. Software RAID systems are usually slow and will tax the server. Software RAID solutions are recommended only in situations where the cost of moving to a hardware RAID system is prohibitive, or when the poor performance issue will not be detrimental to the overall mission of the system.

Hardware RAID systems usually consist of a controller card/interface and a drive mount cage. This cage can be within the server itself or implemented externally with its own circuitry and power supply. Hardware RAID systems offer superior performance as compared with software versions. Of course, the cost to implement these hardware-based systems can be prohibitive and must be seriously considered.


Network Storage Solutions

Throughout the past few years, many vendors have begun implementing stand-alone network storage solutions. These usually consist of some type of RAID array housed in its own cabinet. The cabinet includes power supplies, network interface cards, and SCSI/Fibre Channel controllers. It is a self-contained unit that can satisfy the very large data needs of almost any size organization. Usually these units include a robust support package that provides the organization with a relatively hands-off experience. When a drive in the unit fails (or even begins to experience some trouble), the vendor will automatically be notified by the unit. The vendor will then dispatch technicians to service the unit and/or replace the failed drive. The high cost of these network storage devices puts them beyond the budget of some organizations. That said, there is no more effective and robust storage solution.

The High-Availability Infrastructure

Typically, the clustering technologies discussed so far are used in an n-tier environment (see Figure 1.9). The n-tier environment usually comprises three tiers (it's called n-tier because it is not mandatory that all tiers be used). The first tier is for presentation services such as web servers. This front-end tier would utilize Network Load Balancing. The second or middle tier utilizes Component Load Balancing. The third tier represents the back end and utilizes server clustering for housing data. The n-tier method of infrastructure design allows an organization to tailor its needs to create a highly available infrastructure. For example, if clients are experiencing delays at the first tier, the organization can add more servers to that tier without having to redesign the entire infrastructure.


FIGURE 1.9 An n-tier infrastructure: the first tier is an NLB cluster, the second tier a CLB cluster, and the third tier a server cluster attached to an external disk array, with routers between the tiers.


Disaster Recovery Solutions

A few years ago, a disaster recovery solution was, for the most part, simply a tape backup. In today's high-availability world, disaster recovery solutions implement a number of different technologies to reduce an organization's risk of data loss. Many modern disaster recovery solutions include some of the technologies we have discussed in this chapter. In the best-case scenario, a solid disaster recovery solution would include a “hot site” or duplicate facility where data processing needs could be met in the event of a disaster, natural or otherwise. The solution could incorporate the various clustering technologies to guarantee that servers are always available to accept client requests. The clusters themselves would be built using either RAID technologies or some vendor-specific network storage solution. The data itself would be backed up nightly via a comprehensive backup architecture comprising backup servers, tape drives, and robotic tape libraries. All of these components would work together to form the comprehensive disaster recovery solution.

Summary

High availability is a critical factor in web solution design. Many companies use web technologies to provide services to their clients, and continuous availability of these services will have a direct impact on customer satisfaction. Customer satisfaction is key to the successful operation of any organization, whether it's a standard brick-and-mortar company or an Internet-based organization. To be competitive today, an enterprise must ensure that its services are available on the Internet 24/7—even in the event of hardware or software failures. As business moves from traditional companies to Internet-based ones, customer satisfaction and loyalty are directly proportional to the availability of website resources. Microsoft, in Windows 2000, has included a number of features designed to increase high availability for an organization's applications and data. Three of these are Network Load Balancing (NLB), Component Load Balancing (CLB), and server clustering. These three technologies can be combined into an n-tier architecture, which keeps the infrastructure highly scalable.


NLB is used to load-balance customer requests across multiple clustered servers. With NLB in place, up to 32 servers or nodes can be clustered to provide superior performance and high availability. NLB requires no proprietary hardware, and most industry-standard computers can be used to provide this service. CLB allows for the load balancing of COM+ components. COM+ components are compiled in a variety of programming languages and are used to provide functionality through one or more interfaces. The third high-availability technology discussed in this chapter is server clustering. Windows 2000 includes the Microsoft Cluster service (MSCS), which will provide failover capability to ensure that resources are continuously available. Windows 2000 Advanced Server supports two-node clusters, and Windows 2000 Datacenter supports four-node clusters.

Key Terms

Before you take the exam, be certain you are familiar with the following terms:

cluster nodes
clustering
COM+
Component Load Balancing (CLB)
configurations
denial of service (DoS)
disaster recovery
Domain Name Service (DNS)
hardware RAID
high availability
Internet
intracluster communications
intranet
IP address
ISP (Internet Service Provider)
Microsoft Cluster service (MSCS)
Network Load Balancing (NLB)
network storage solutions
n-tier
Redundant Array of Inexpensive Disks (RAID)
resource group
server clustering
software RAID


Review Questions

1. Which versions of the Windows 2000 operating system support Network Load Balancing (NLB)? Select all that apply.

A. Windows 2000 Professional
B. Windows 2000 Server
C. Windows 2000 Advanced Server
D. Windows 2000 Datacenter Server

2. In which types of files do COM+ components usually reside? Select all that apply.

A. COM
B. EXE
C. DLL
D. VBS

3. Of the following tiers found in an n-tier infrastructure, where would server clustering be located?

A. First tier
B. Second tier
C. Third tier
D. Fourth tier

4. Which RAID level offers the best utilization of disk space while still providing fault tolerance?

A. RAID 0
B. RAID 1
C. RAID 5
D. RAID 0+1


5. What is the main drawback of software RAID systems?

A. Poor performance
B. High cost
C. Slow speed
D. Availability issues

6. Why did Microsoft's websites go down in early 2001? (Select all that apply.)

A. They used Windows 2000.
B. The DNS database was corrupted.
C. They used BIND.
D. The DNS infrastructure was housed on one subnet.

7. The Cluster service is new to Windows 2000. True or false?

A. True
B. False

8. What is used to communicate with a Network Load Balancing cluster when clients are requesting website objects that are located on that cluster?

A. A node's IP address
B. The cluster's virtual IP address
C. The master node
D. The cluster controller


9. Of the following tiers found in an n-tier infrastructure, where would Component Load Balancing (CLB) be located?

A. First tier
B. Second tier
C. Third tier
D. Fourth tier

10. If you need to create a four-node server cluster, which Windows 2000 operating system is required?

A. Windows 2000 Professional
B. Windows 2000 Server
C. Windows 2000 Advanced Server
D. Windows 2000 Datacenter Server


Answers to Review Questions

1. C, D. Both Windows 2000 Advanced Server and Datacenter Server provide Network Load Balancing. Windows 2000 Professional is the client workstation counterpart to Windows 2000 Server, and neither edition has inherent Network Load Balancing or clustering capabilities.

2. B, D. COM+ components are usually found in dynamic link library (.DLL) and executable (.EXE) files. VBS (VBScript) files are used in Windows Script Hosts to support scripting. COM (command) files are smaller versions of EXE files.

3. C. Server clustering is considered a third-tier application.

4. C. RAID 5 satisfies both needs. It utilizes disk space better than RAID 1. Although RAID 0 may utilize more disk space, it is not fault tolerant. RAID 1 provides fault tolerance, but one-half the total disk space is used for mirroring.

5. A. Software RAID systems require the use of the server's CPU. As a result, server performance will suffer drastically. Software RAIDs are less costly to implement than hardware RAIDs but do not provide the performance of hardware RAID solutions.

6. D. Microsoft's websites went down due to a router configuration error on the subnet where the DNS servers were all located.

7. B. The Cluster service was first designed and implemented in Windows NT 4.0 Enterprise Edition.

8. B. All client communication with the cluster occurs via the cluster's virtual IP address.

9. B. Component Load Balancing (CLB) is considered a second-tier application.

10. D. Windows 2000 Datacenter Server provides up to four-node clustering. Windows 2000 Advanced Server is limited to two-node clustering. Neither Windows 2000 Professional nor Windows 2000 Server supports clustering.


Chapter 2

Designing NLB Solutions

MICROSOFT EXAM OBJECTIVES COVERED IN THIS CHAPTER:

Design NLB solutions to improve availability, scalability, and fault tolerance. Considerations include the number of hosts, number of clusters, placement of servers, multicast versus unicast, failover strategy, priority, affinity, filtering, load weighting, and application types.


Windows 2000 makes a very credible leap above and beyond the performance and reliability of Windows NT; nevertheless, a single Windows 2000 server can provide only an understandably limited amount of availability and scalability. In an effort to enhance the availability and scalability of Windows-based servers running Internet-related applications such as web servers, ftp servers, and other mission-critical application servers, Microsoft added the Windows 2000 Network Load Balancing service to Windows 2000 Advanced Server and Datacenter Server. The Network Load Balancing service (NLB) takes the concept of a single, reliable Windows 2000 Server and provides a method to combine multiple single servers into a single computing entity: a cluster. The servers that are members of the cluster act as a single entity working together to satisfy client requests for data and application services. In turn, the cluster improves the availability and scalability of the data or application platform. This chapter discusses the concepts of Network Load Balancing and how to design an effective NLB solution.

Understanding NLB Concepts

NLB is one of the clustering solutions provided in Windows 2000 Advanced Server and Datacenter Server, to improve the availability and scalability of Internet server programs and the data supplied by them. Many businesses and organizations rely on mission-critical Internet-based applications such as financial transactions, databases, business logic, and corporate intranets. These applications must run 24 hours a day, 7 days a week. Besides the requirement to be highly available, these applications must be capable of scaling their performance to meet the demands of large volumes of client requests. NLB allows an organization to create and manage a


group of individual servers as a single entity that meets those high availability and scalability requirements.

A cluster running NLB distributes client requests and connections across all cluster member servers (also known as nodes). It is important to note that these requests and connections are only for TCP/IP-based services and applications. NLB does not support NetBEUI or NWLink (IPX/SPX) protocols.

Network Load Balancing provides the following benefits:

High Availability  In the event of a node failure, the cluster will redistribute client requests to the remaining healthy nodes. Any open client connections to the failed node are closed or terminated. When the client retries the connection, the request is automatically routed to a healthy node in the cluster. The user may have to reinitiate the client request; however, some client applications, such as web browsers, are configured to automatically retry failed connections. Either way, client downtime is usually limited to a matter of seconds.

Scalability  The Network Load Balancing service achieves scalability by allowing up to 32 nodes in an NLB cluster. These nodes can be added without having to shut down the cluster.

Load Balancing  Load balancing is, of course, what the Network Load Balancing service is all about. As the cluster receives client requests, they are automatically distributed across the nodes.

Other NLB Technologies

Before we move into Network Load Balancing operations in more detail, you need to understand some of the alternative network load balancing technologies that are available. These technologies are Round Robin DNS (RRDNS), hardware-based NLB, and software-based NLB (dispatching).

Round Robin DNS

Round Robin DNS is the most basic form of network load balancing and is an inherent function found in most forms of DNS (Domain Name Service). Round Robin DNS provides a static form of load balancing for TCP/IP-based servers. It is achieved by creating multiple host-resource records (also called A resource records) for the same host name in DNS. For example:

    helpandlearn    IN    A    192.168.0.20
    helpandlearn    IN    A    192.168.0.21
    helpandlearn    IN    A    192.168.0.22

With this arrangement, when a client queries DNS for the computer named helpandlearn, DNS returns the IP address 192.168.0.20. When another client requests helpandlearn, DNS returns the next IP address in the list, 192.168.0.21, and so on. DNS continues to cycle through the IP addresses as clients request them. Round Robin DNS provides a simple and cheap solution for network load balancing. However, this technology has a serious limitation: If one of the servers goes down, DNS is not aware of it and continues to give out the down server's IP address. As a result, clients that receive the down server's address will fail to connect until DNS supplies the IP address of a working server, or until the down server is repaired and active again.
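You can observe the rotation with repeated name lookups. Schematically (this is an illustration of the record ordering, not verbatim nslookup output, and it assumes the three records shown above):

    C:\> nslookup helpandlearn
         Addresses: 192.168.0.20, 192.168.0.21, 192.168.0.22

    C:\> nslookup helpandlearn
         Addresses: 192.168.0.21, 192.168.0.22, 192.168.0.20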

Hardware-Based Network Load Balancing

Hardware-based NLB uses a network device, which could be a router or a computer, to redirect client requests to multiple nodes within the cluster. Typically this function is accomplished by using a form of Network Address Translation (NAT), where the hardware-based load balancing device exposes a public IP address on the Internet and translates this public address to multiple private IP addresses on the private network. All client requests to the public address are readdressed and transmitted to the private IP addresses. The main limitation to hardware-based NLB is that the device itself exposes a single point of failure. If the device fails, all load-balancing traffic halts. Every client's attempt to connect to the cluster fails.

Software-Based Network Load Balancing

Software-based NLB is also known as dispatching. Dispatching resembles hardware-based load balancing, except that it utilizes a single server as the main point of contact for clients. This server in turn retransmits requests to the other servers in the cluster. Again, like hardware-based NLB, this solution exposes a single point of failure. It can also seriously limit client throughput, since all clients have to communicate via the dispatching server.

How Does NLB Work?

By now you probably have a pretty good understanding of what NLB does, but how does it work? Most TCP/IP-based applications facilitate client/server communications via TCP/IP port numbers, IP addresses, and protocol numbers. For example, a primary role for NLB is the creation of web server farms. Web traffic uses the Hypertext Transfer Protocol (HTTP); this TCP/IP protocol utilizes port number 80. To increase availability and scalability, an enterprise may decide to introduce Windows 2000's Network Load Balancing feature. Multiple Windows 2000 Advanced Server hosts could be configured as an NLB cluster. This cluster would be configured with a single virtual IP address. Clients would query DNS for the IP address of the cluster, and DNS would respond with the cluster's virtual IP address. The client and the cluster begin communicating via the virtual IP address.

On each node, NLB is installed as a network driver. This network driver creates a virtual IP address that is then presented to the network for client requests. Clients send requests to the virtual IP address as though it is a single server. In turn, all nodes that are members of a cluster receive the client requests. The NLB driver on each node filters a portion of the traffic to be received by the host. Figure 2.1 shows a diagram of how NLB works.

FIGURE 2.1  NLB diagram (multiple NLB hosts on a LAN (Ethernet/FDDI), all receiving client requests arriving from the Internet)

NLB uses a statistical mapping algorithm to determine which node will handle the client request. Each client request or packet received is inspected by the NLB driver and subjected to the algorithm. Since all nodes run the algorithm, an NLB cluster does not require a central processing point to determine which node handles a request. This eliminates a potential bottleneck and failure point in the cluster. Let's take a look at the various components and factors that allow Windows 2000's Network Load Balancing to accomplish its tasks.

NLB Components

Network Load Balancing is made up of two primary components, which are installed on each NLB cluster node:

WLBS.SYS  WLBS.SYS is the Network Load Balancing device driver (Figure 2.2). This is the heart of NLB.

FIGURE 2.2  The NLB driver (WLBS.SYS). On a cluster host, the server application and WLBS.EXE run above the Windows 2000 kernel; within the kernel, the NLB driver sits between TCP/IP and the network adapter driver for the cluster network adapter, while a dedicated network adapter (with its own adapter driver) connects the host to the LAN for other traffic.

WLBS stands for Windows Load Balancing Service, which was its previous name in Windows NT.

WLBS.EXE  WLBS.EXE is the control and configuration program for NLB. It is a command-line utility that can be used to start and stop NLB, as well as to administer NLB for tasks such as configuring ports and observing cluster status.
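In practice, administration is done by running wlbs with a command argument. A few representative commands are shown below (an abbreviated, illustrative list; run wlbs with no arguments on a cluster node to see the full, authoritative syntax):

    wlbs query       Report this host's state and the current cluster membership
    wlbs stop        Stop cluster operations on this host
    wlbs start       Restart cluster operations on this host
    wlbs drainstop   Finish serving existing connections, then remove the host from the cluster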

Statistical Mapping Algorithm

NLB uses a statistical mapping algorithm to determine which node handles a particular request. The algorithm filters incoming requests to determine how traffic is partitioned. This algorithm is run on all nodes, eliminating the need to centrally distribute client requests among the nodes. Filtering client requests is significantly faster than using a central distribution point. The central distribution point would need to examine each packet, modify the packet to send it to the desired node, and then actually send the packet via the network to the desired node. The NLB driver simply filters client requests to determine which ones it will handle and drops any that are to be handled by the other nodes. This design also eliminates the single point of failure introduced when using a central distribution point. The algorithm takes a number of factors into account to determine how the traffic is partitioned across the nodes:

• Load weight
• Port rules
• Multicast or unicast mode
• Affinity
• Host priority
• Load percentage distribution
• Client's IP address
• Client's port number

Although the main determination is the port rule’s Load Weight parameter, there are several other factors that influence the partitioning of traffic: communication within the cluster, heartbeats, and the effects of convergence.
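Microsoft does not publish the mapping function itself, but the core idea of fully distributed filtering can be sketched in a few lines of illustrative Python (the function names and the use of an MD5 hash here are inventions for the example, not the actual NLB algorithm):

    import hashlib

    def owner(client_ip, client_port, num_nodes):
        # Every node computes the same deterministic mapping over the
        # client's address, so no central dispatcher is required.
        key = ("%s:%d" % (client_ip, client_port)).encode()
        return hashlib.md5(key).digest()[0] % num_nodes

    def should_handle(my_host_id, client_ip, client_port, num_nodes):
        # A node keeps a packet only if the mapping selects it; the NLB
        # driver on every other node silently drops that same packet.
        return owner(client_ip, client_port, num_nodes) == my_host_id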

Intracluster Communications

Intracluster communications are used to determine the overall health and configuration of the cluster. Remember that there is no central distribution or control point for an NLB cluster. If a node fails or is added to the cluster, the other cluster nodes must be notified so they can adjust their algorithms accordingly.

Heartbeats

Each node in an NLB cluster uses heartbeats to determine and maintain the cluster membership. By default, heartbeat messages are sent out every second (see the AliveMsgPeriod parameter below). If a node fails to send heartbeat messages, the remaining nodes in the cluster consider the missing node "failed" and begin a process known as convergence.

Convergence

During convergence, the working nodes first determine which nodes are currently active cluster members. From among the remaining active nodes, the node with the highest handling priority is elected as the default node. Finally, once cluster membership is determined, client requests from the failed node are redistributed to the remaining nodes. Nodes that are sending out consistent heartbeats are considered members of the cluster. If a failed node begins once again to send heartbeat messages consistently, it can rejoin the cluster via convergence.

Note that you can modify the parameters used to determine heartbeats and convergence. These parameters are located in the following registry location:

HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\WLBS\Parameters

The two parameters relating to convergence are

• AliveMsgPeriod, which specifies in milliseconds the period of time between heartbeats. The default setting is 1000 milliseconds (one second).
• AliveMsgTolerance, which specifies the number of missed heartbeats before convergence is initiated. The default setting is 5 missed heartbeats.

If you make the time period between heartbeats too short, excessive network traffic could result.
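For example, the defaults could be written out explicitly in a .reg file and imported with Regedit (a sketch that assumes both values are stored as REG_DWORD; 0x3e8 is 1000 decimal):

    Windows Registry Editor Version 5.00

    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\WLBS\Parameters]
    "AliveMsgPeriod"=dword:000003e8
    "AliveMsgTolerance"=dword:00000005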

Port Rules

Port rules are used by the Network Load Balancing driver (WLBS.SYS) to determine which TCP/IP traffic should be load-balanced and which traffic should be ignored. NLB is configured by default to load-balance all TCP/IP ports. Depending on your application needs, you can configure the port rules as required. Port rules can be added to allow the Network Load Balancing driver to load-balance traffic based on individual ports or groups of ports. Multiple port rules can be configured if needed. Unfortunately, Windows 2000's Network Load Balancing service does not provide a centralized method of defining port rules. As a result, you must define the port rules on each node in the cluster. If any node is misconfigured, your application may not be properly load-balanced.
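Because there is no central definition point, a sensible habit (our suggestion, not a product requirement) is to dump each node's configuration after any change and compare the port rules side by side:

    wlbs display     Print this host's current cluster parameters, host parameters, and port rules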

NLB Implementation and Configuration

To install the Network Load Balancing service, perform the following tasks on each node in the cluster:

1. From Control Panel, open Network and Dial-up Connections.

2. Right-click the Local Area Connection on which NLB is to be installed. The Local Area Connection Properties dialog box is displayed (see Figure 2.3).

FIGURE 2.3  The Local Area Connection properties

3. In the center window, select Network Load Balancing.

4. Click the Properties button to open the Network Load Balancing Properties dialog box.

5. Open the Cluster Parameters tab. In the Primary IP Address field, enter the virtual IP address that clients will use to access the cluster.

6. Enter the Subnet mask for the Primary IP address.

7. For the Full Internet name, enter the fully qualified name that is entered into DNS.

8. Notice that the Network Address field is ghosted. This field contains the virtual MAC address for the cluster, which is generated automatically by NLB based on the cluster's Primary IP address.

9. If multicast support is desired for this cluster configuration, check the Multicast Support check box.

10. If you plan to use the WLBS.EXE control program from remote computers, enter a password for secure remote access in the Remote Password field.

11. Confirm the remote password.

12. Finally, if you plan to use remote control, check the Remote Control check box. See Figure 2.4 for a sample view of the completed Cluster Parameters tab.

FIGURE 2.4  Sample view of the Cluster Parameters tab

13. Click on the Host Parameters tab. There are several settings that can be made here; see Figure 2.5 for a sample view of the completed settings.

A. TCP/IP-based traffic that is not otherwise specified in the Port Rules tab will be sent to the host with the highest priority. This allows a particular node to handle any traffic that is not to be load-balanced across the cluster. Select the host's priority by entering a number in the Priority (Unique Host ID) field. Remember that the host with the highest priority (that is, the one with the lowest value in this field) will handle the cluster's default network traffic.

B. The Initial Cluster State setting is used to determine whether a node will automatically join a cluster once Windows 2000 is loaded. By default, this setting is active. If you want to manually add or remove nodes from a cluster using WLBS.EXE, uncheck this box.

C. The Dedicated IP Address field is used to designate the IP address for all noncluster traffic. NLB references this Dedicated IP Address when a single network adapter is used to handle both cluster network traffic and other network traffic.

D. Enter the Subnet Mask for the Dedicated IP Address you entered.

FIGURE 2.5  Sample view of the Host Parameters tab

14. Click on the Port Rules tab in the Network Load Balancing dialog box.

15. By default, all ports are load-balanced across the cluster. Use this tab to configure a port rule to define how a particular port will be handled. To do this, enter the port number or port range; whether the protocol is TCP, UDP, or both; and the settings for filtering mode, affinity, load weight, and handling priority (see Figure 2.6). All these parameters are described in the later section "Port Rules Design."

FIGURE 2.6  Sample view of the Port Rules tab

Using NLB with Switches

In most network environments, switches are employed to provide maximum bandwidth to the computers and servers on the network. Switches operate by identifying the MAC address of the computer on each port. Once the MAC address is identified, traffic destined to that MAC address is sent via the corresponding port only. As a result, the entire bandwidth on each port is available for the computer. Switching works quite well in most situations where there is one MAC address per computer.

Cluster nodes that are connected via a switch must receive incoming cluster traffic simultaneously in order to function properly. To prevent the switch from identifying the source MAC address, NLB uses different MAC addresses when replying. Since the switch cannot match the MAC address to a particular port, it must continue to send all traffic to all switch ports (see Figure 2.7). As a result, computers that are not cluster members also receive this traffic; the task of determining which packets to discard consumes network bandwidth as well as the computer's processing resources. This is known as switch flooding.

FIGURE 2.7  Switch flooding (cluster hosts on individual Layer 2 switch ports force the switch to send all cluster traffic to every port, including the ports of non-cluster hosts)

Switch flooding can be an issue if the cluster’s applications receive a significant amount of incoming network traffic, or conversely, if the cluster’s applications generate a significant amount of outgoing network traffic. Switch flooding can also be an issue if multiple NLB clusters are placed on the same switch. Their combined incoming and outgoing network traffic may place a considerable load on the switch. These switch-flooding problems can be overcome by separating the incoming and outgoing traffic for each cluster node. This can be accomplished by installing two network adapters in each node, one adapter dedicated to outbound traffic and the other to incoming cluster traffic. The default gateway address must be set on the outbound adapter, and the default gateway address for the inbound adapter’s NLB driver must not be set. This makes incoming traffic arrive via the inbound adapter, and makes the cluster reply via the outbound adapter because it is configured with a default gateway. Another switch-flooding solution is to connect the cluster nodes to a hub, which in turn is connected to a single switch port (see Figure 2.8). Of course, you must ensure that the bandwidth available on a single switch port will support the aggregate bandwidth required for the cluster.

FIGURE 2.8  Adding a hub to eliminate switch flooding (the cluster hosts connect to a Layer 1 hub, which in turn connects to a single port on the Layer 2 switch, so flooding is limited and non-cluster hosts are unaffected)

To implement this solution, you change the Registry on each cluster node. This change forces the Network Load Balancing driver to omit masking the source MAC address. As a result, the switch will be able to learn the MAC addresses of the nodes. The switch will associate the MAC addresses for all cluster nodes with the switch port that is connected to the hub. Change the value of the following Registry key to 0 (it has a default value of 1):

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\WLBS\Parameters\MaskSourceMAC
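The same .reg-file approach shown earlier for the heartbeat parameters works here as well (again a sketch, assuming the value is stored as REG_DWORD):

    Windows Registry Editor Version 5.00

    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\WLBS\Parameters]
    "MaskSourceMAC"=dword:00000000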

NLB Design Considerations

To design and implement a Network Load Balancing solution, you need to take a number of items into consideration. Some of these are obvious, such as the hardware platform and network protocols. Others will require your extensive observation of the applications being used on the cluster, and the method of clients' access and use of those applications. The following section describes some of these considerations, including the choice of unicast versus multicast IP protocol configurations, and the type of port rules and affinity that will be used.

Node and Cluster Planning

Windows 2000's Network Load Balancing does not require any dedicated hardware support. You do, however, need to consider a few critical items when planning your NLB cluster.

Microsoft Exam Objective
• Design NLB solutions to improve availability, scalability, and fault tolerance. Considerations include number of clusters and placement of servers.

First, the Network Load Balancing service is only found in Windows 2000 Advanced Server and Windows 2000 Datacenter Server. To build an NLB cluster, you must use one of these two operating systems. (In most cases, Windows 2000 Advanced Server is used.) The hardware platform for the cluster nodes must meet the minimum hardware requirements for the operating system. Also, the hardware platform should be listed on the Hardware Compatibility List (HCL). Next, you need to consider your network infrastructure. The Network Load Balancing service only operates in Ethernet or Fiber Distributed Data Interface (FDDI) network topologies. All cluster nodes must reside within the same broadcast domain or subnet, within the same Virtual LAN (VLAN), or on a single hub. In addition, although Network Load Balancing will operate with a single network adapter, it is strongly recommended that multiple network adapters be utilized to provide optimum cluster performance. With this configuration, one adapter can be dedicated to incoming cluster traffic, while other network adapters can accommodate other traffic such as heartbeats and convergence. Finally, the network adapters must support dynamic allocation of MAC addresses.

Multicast vs. Unicast

Windows 2000's NLB feature can use either unicast or multicast IP protocol configurations. Unicast is the default configuration. A critical design consideration, then, is whether unicast mode is sufficient for your cluster solution.

Microsoft Exam Objective
• Design NLB solutions to improve availability, scalability, and fault tolerance. Considerations include multicast versus unicast.

In unicast mode, NLB provokes switch flooding to allow the delivery of network traffic to all cluster nodes simultaneously. Unicast mode accomplishes this by using a single MAC address for all cluster nodes. One side effect of unicast mode is that certain communications among the cluster nodes are not possible. When one cluster node attempts to communicate with another, its outbound packets are addressed to the same MAC address as the sender's own. When these packets are sent down the sender's TCP/IP stack, the TCP/IP driver sees that the packets are destined for the sender's own MAC address and sends them back up the stack. As a result, the packets never reach the actual network. Because of this side effect, each cluster node requires multiple network adapters: one adapter can be used for the load balancing in general, while the other adapters handle intracluster communication and other functions.

In multicast mode, NLB adds a multicast MAC address to all cluster nodes, thus allowing all nodes with the same multicast MAC address to receive cluster traffic. To utilize multicast mode, network routers must meet one of the following requirements. The routers must

• Accept multicast MAC addresses in ARP replies,
• Accept ARP replies that contain a MAC address different from the MAC address in the ARP request, or
• Allow creation of static ARP entries.

All nodes in an NLB cluster must be configured for either unicast or multicast mode. Mixing of modes is not possible.
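As an illustration of the third requirement: NLB's multicast MAC address is commonly documented as 03-BF followed by the four bytes of the cluster's virtual IP address, so for a virtual IP of 192.168.0.20 (C0-A8-00-14 in hex) a static ARP entry on a Cisco router might look like the following (illustrative syntax only; consult your router's documentation):

    arp 192.168.0.20 03bf.c0a8.0014 arpa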

Port Rules Design

Once you have your NLB cluster designed and built, you can begin to consider how the cluster will be utilized. Port rules set the policy of the cluster: they tell the cluster how to load-balance each kind of network traffic it receives.

Microsoft Exam Objective
• Design NLB solutions to improve availability, scalability, and fault tolerance. Considerations include the number of hosts, number of clusters, placement of servers, multicast versus unicast, priority, affinity, filtering, and load weighting.

Answering the following questions about your cluster environment will help you decide which settings to make on the Port Rules tab of the Network Load Balancing Properties dialog box. If a particular application needs a specific port to function, then a port rule will have to be set for that particular port, for that application.

• What applications will be running on the cluster?
• How many clients will be connecting to the cluster?
• From which network are the clients connecting?
• Does the application require affinity settings?

Filtering Modes

For each port rule, you must define a filtering mode. A filter defines how the incoming traffic is handled by the node. There are three possible settings:

• In Multiple Hosts mode, cluster traffic is handled across multiple hosts. You can specify whether the load is distributed equally among the nodes or whether each node handles a specified load weight.
• In Single Host mode, a single node handles all cluster traffic. With this filtering mode, cluster traffic is sent to the node with the highest priority; as a result, fault tolerance can be achieved. If the node with the highest priority goes down, the node with the next highest priority receives the cluster traffic.
• In Disabled mode, all network traffic for the port rule is discarded.

Remember that filter modes are defined for each port rule. For example, if the filtering mode for a web traffic port rule (i.e., port 80) is disabled, all network traffic on port 80 will be discarded.

Load Weighting

Load weighting allows you to specify, for a particular port rule, the percentage of load-balanced network traffic to be handled by the node. Load weighting is only in effect when the filter mode for a port rule is set to Multiple Hosts. You can set this parameter from 0 to 100; a value of 0 signifies that the node should not handle any of the traffic for the particular port rule. The share of traffic a particular node handles is calculated by dividing that node's load weight by the total load weight for the cluster.
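For example (the numbers here are hypothetical), suppose a port rule is set to Multiple Hosts on a three-node cluster whose nodes are given load weights of 50, 30, and 20. The total load weight is 100, so the three nodes handle 50/100 = 50 percent, 30/100 = 30 percent, and 20/100 = 20 percent of the traffic for that rule, respectively.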

Host Priority

A host priority comes into effect when the filter mode for a port rule is set to Single Host. The node with the highest priority setting for the particular port rule will handle all cluster network traffic. You can set the handling priority from 1 to 32, with 1 being the highest priority. NLB supports a maximum of 32 nodes in a cluster; hence the lowest priority is 32. The priority value must be unique for each node in the cluster; that is, there cannot be two nodes with a priority of 1.

Affinity and Session States

In special situations, it may be necessary for a specific node in the cluster to handle requests from a specific client. These situations are known as session states. If an application requires a session state (say, a website's shopping cart application, for instance), it is imperative that the client maintains a relationship with a particular cluster node. This relationship allows the contents of the shopping cart to be consistent and/or updated. If the client requests were distributed throughout all cluster nodes, there might be several shopping carts with varying contents, depending on the node handling the requests. With a direct relationship between the client and the node, however, the cart contents will remain consistent and accurate. This relationship between the client and node is known as affinity.

Understanding affinity helps you to effectively configure and implement an NLB cluster. The affinity types use the client’s IP address and port number to determine which nodes will handle the client’s requests. Let’s take a look at the three affinity types available for configuring NLB.

When the filtering mode setting is Single Host or Disabled, the option to set affinity is not available in the Port Rules tab. The Affinity setting is only available when Multiple Hosts is the selected filtering mode.

None  The None affinity setting distributes all client requests evenly across the nodes. This setting is used when maintaining session state is not an issue. It decreases response time to client requests; as a result, the client experiences faster access to cluster resources. An example of using the None setting for affinity would be a typical website that only presents information, such as a news site. When the client requests a web page, multiple requests are sent for the various objects on the page. Objects such as images and text will all generate a request. With affinity set to None, the requests are distributed to all nodes, which in turn respond with the requested objects. Since the requests are essentially simultaneous, the client experiences faster response times. The NLB statistical mapping algorithm, as discussed earlier in the chapter, uses the client's entire IP address and port number to distribute client requests when None affinity is set. Since most web-based applications do not require the maintaining of a session state, the None affinity setting is used for most applications.

Single  With the Single setting for affinity, all requests from a particular client are distributed to a single cluster node. This setting is used to maintain session state: Single affinity sends a client's traffic to the same node at all times. The Single affinity setting is used for any applications that require session state support, Secure Sockets Layer (SSL), and/or multiconnection protocols such as File Transfer Protocol (FTP) or Point-to-Point Tunneling Protocol (PPTP). SSL uses TCP port 443, FTP uses ports 20 and 21, and PPTP uses TCP port 1723 along with IP protocol number 47 (GRE).

The NLB statistical mapping algorithm, discussed earlier in this chapter, uses the client’s entire IP address to distribute client requests when Single affinity is set.

If cluster membership changes (that is, if convergence occurs), session states may be affected due to the addition or elimination of a particular node.

Class C  With the Class C affinity setting, all client requests from a particular Class C network are distributed to a single cluster node. This setting is used to maintain session state for users residing behind a proxy array. For example, many organizations use proxy arrays to provide Internet access for their network clients. All client requests to the Internet are handled by the proxy array, which can be configured with a single public IP address or several public IP addresses. As a result, a client could be utilizing several IP addresses within the same network address (hence, a Class C network) during an exchange with a TCP/IP-based application. The Class C affinity setting tells the Network Load Balancing driver to direct all client requests from this particular Class C network to the same cluster node. The NLB statistical mapping algorithm, discussed earlier in this chapter, uses the client's Class C portion of its IP address (the first three octets) to distribute client requests when Class C affinity is set.
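Tying the three settings together, the portion of the client's address that feeds the mapping can be sketched as follows (illustrative Python; the function name is invented, but the address portions match the descriptions above):

    def affinity_key(affinity, client_ip, client_port):
        # None: entire IP address plus port; Single: entire IP address;
        # Class C: only the first three octets of the IP address.
        if affinity == "None":
            return (client_ip, client_port)
        if affinity == "Single":
            return client_ip
        if affinity == "Class C":
            return client_ip.rsplit(".", 1)[0]
        raise ValueError("unknown affinity: " + affinity)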

If cluster membership changes (that is, if convergence occurs), session states may be affected due to the addition or elimination of a particular node.

Microsoft Exam Objective
• Design NLB solutions to improve availability, scalability, and fault tolerance. Considerations include application types.

Application Types

Applications that are compatible with Network Load Balancing must meet all of the following characteristics:

• The application must be TCP/IP-based and use TCP or UDP data streams. NLB only supports TCP/IP-based applications; NetBEUI and NWLink (IPX/SPX) are not supported.
• For applications that require session state, the suitable Affinity setting (None, Single, Class C) must be enabled. If the administrator does not utilize the Affinity setting, the applications must use another method of maintaining session state.

Other methods of maintaining session state include the use of client-side cookies or a separate database on the back end. Client-side cookies are small files that are written to the client's hard drive. The cookie contains relevant information that provides session state. Back-end databases can also store relevant session state data and are typically employed in larger applications that require n-tier architectures.

• In applications that contain relatively static content that is sent to the clients, another means must be employed to synchronize data across the nodes if the content changes.

The following types of applications are incompatible with Network Load Balancing:

• Applications that require some or all files to be continuously open for writing, such as some database applications.
• Applications that bind to the NetBIOS computer name of the server. In this case, the application would bind with the computer name of the node, instead of the cluster.

Another application issue is the importance of reviewing any licensing requirements for the application. In some cases, you may need to purchase multiple licenses (say, a license for each node in the cluster).

There are several primary areas of use for the Windows 2000 Network Load Balancing service:

Web Server Farms/E-Commerce Sites  NLB works best with web server farms. Farms that contain relatively static content can be network load-balanced by creating a port rule that defines TCP port 80 (Protocol HTTP) to be load-balanced. If the web farm supports websites that require security, a port rule can also be defined for TCP port 443 (Protocol HTTPS).

Terminal Services  For applications that utilize Windows 2000's Terminal Services, NLB should be configured with a port rule for TCP port 3389 (Protocol RDP).

VPN Services  To create a scalable VPN service, you can use NLB with a port rule for either PPTP or L2TP, the tunneling protocols in Windows 2000.

• For PPTP, create port rules for IP protocol number 47 and TCP port 1723.
• For L2TP/IPSec, you must create the following port rules: IP protocol 50, IP protocol 51, UDP port 500, and UDP port 1701.
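For quick reference, the port rules described above are summarized here:

    Application              Port rule(s)
    HTTP web traffic         TCP port 80
    HTTPS (secure) traffic   TCP port 443
    Terminal Services        TCP port 3389
    PPTP VPN                 TCP port 1723 and IP protocol 47 (GRE)
    L2TP/IPSec VPN           UDP ports 500 and 1701, IP protocols 50 and 51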

NLB as an Availability Solution

Network Load Balancing provides high availability by dynamically load-balancing TCP/IP-based network traffic. When a node in the cluster fails, NLB initiates convergence. During convergence, NLB reconfigures the cluster automatically. Client requests are sent to the remaining cluster nodes, and the load is redistributed among the nodes that are still operating. Convergence occurs very quickly, usually within 10 to 20 seconds.

Microsoft Exam Objective
• Design NLB solutions to improve availability, scalability, and fault tolerance. Considerations include failover strategy.

A concept known as N-1 failover is used to determine availability. Simply, N-1 means that if one out of N nodes fails, where N equals the total number of nodes in the cluster, then NLB will automatically recalculate the handling of incoming cluster traffic. During a failover, all client connections to the down node are lost. Once the clients reconnect to the cluster, their connections will be made to the remaining nodes in the cluster. Once the down node is repaired and brought back online, the cluster will once again initiate convergence and cluster traffic will be redistributed. In this case, clients will maintain their already-open connections with their respective nodes.

Network Load Balancing as an availability solution is accomplished by using one of two possible strategies:

• The first strategy involves the even distribution of incoming cluster traffic (see Figure 2.9). In this strategy, when a server fails, the cluster initiates convergence to determine which nodes are still available. Once convergence is complete, incoming cluster traffic is load-balanced among the remaining cluster nodes.
• The second strategy involves the use of host priorities to specify that a particular node handles all incoming cluster traffic (see Figure 2.10). In this strategy, if the node with the highest priority fails, the cluster initiates convergence. Convergence determines which working node will be given the highest priority. Once convergence is complete, incoming cluster traffic is sent to the new highest-priority node.

FIGURE 2.9  Using even distribution to create an availability solution (hosts A, B, and C share the virtual IP 192.168.0.20 and balance one-third of the load each; when server B fails, convergence runs and the two remaining hosts balance one-half each)

FIGURE 2.10  Using host priority to create an availability solution (hosts 1, 2, and 3 share the virtual IP 192.168.0.20; all load is handled by host 1, the highest-priority host; when server 1 fails, convergence runs and all load moves to server 2)

Network Load Balancing as a Replacement for Round Robin DNS

Sunteck is a major distributor of consumer data to various large Fortune 500 companies. Sunteck keeps a customer database that must be available for client access 24 hours a day, 7 days a week. The information in the database is static and is updated by Sunteck employees once a week during the night, when traffic is extremely low.

Sunteck recently migrated its network infrastructure to Windows 2000, from Windows NT and Novell NetWare. All of Sunteck's websites are running Windows 2000 Advanced Server. The company migrated to Windows 2000 because of its reliability and scalability. Equally important were the high-availability capabilities of Windows 2000, since server downtime at Sunteck is not an option.

Initially, the company implemented a form of IP load balancing known as Round Robin DNS. This technology provided a basic method of load balancing but did not provide the high availability needed. Sunteck could not afford for its clients to be denied access to customer data because a server was down.

After looking at alternatives to RRDNS, Sunteck's administration determined that Network Load Balancing was a viable solution for the company's needs. NLB provides the automatic detection of server failure and the recovery features that Sunteck's business demands. The network load can be redistributed and balanced with no interruption to clients in the event of a server failure. Single servers can be taken offline for repair or upgrade as needed, with no degradation in network performance and no noticeable network downtime. Future scalability will not be an issue with the NLB solution, because it can easily be scaled up to 32 computers in a single cluster.

Since Sunteck already has a Windows 2000 infrastructure in place, no proprietary hardware will be needed for NLB. This will have a significantly positive impact on the total cost of ownership (TCO) of the network over the long term. Once the NLB arrangement is configured, it will require little intervention from the Sunteck IT support team. Overall, NLB provides a sensible and cost-effective solution for Sunteck that is scalable, controllable, and fault tolerant.

NLB as a Scalability Solution

Network Load Balancing provides scalability through the capability to scale up and/or scale out. Many TCP/IP-based applications can benefit from scalability, including websites, proxy arrays, VPN servers, Terminal Services, and streaming media such as audio and video.

To "scale up" a cluster, you simply add more resources to each node. These resources can be multiple processors, added memory, multiple disks or RAID solutions, and/or high-speed network adapters. With the additional resources, each node can handle more workload for the cluster. When determining whether to scale up a cluster, you must take into consideration the limitations of the hardware platform as well as the applications and operating systems. For example, your current server hardware may not accommodate multiple processors. Also, operating systems typically can only support a given amount of memory.

Microsoft Exam Objective
• Design NLB solutions to improve availability, scalability, and fault tolerance. Considerations include the number of hosts.

To "scale out" a cluster, you add more nodes to the cluster, up to a maximum of 32. As more nodes are added, the cluster's overall work throughput increases. Scaling out a cluster is much simpler than scaling up; the only limitation to scaling out is the 32-node maximum.

NLB as a Fault Tolerance Solution

You can create a fault-tolerant Network Load Balancing solution by combining multiple clusters and Round Robin DNS (see Figure 2.11). The result is a highly available, highly scalable, and fault-tolerant server environment.

FIGURE 2.11  Creating a fault-tolerant NLB solution (the DNS server holds A records for helpandlearn pointing to each cluster's virtual IP, 192.168.0.20 and 192.168.0.21, and rotates the list for each query, statically load-balancing incoming requests; each client selects the first address in the list it receives and creates a session with that cluster; each of the two clusters can contain up to 32 hosts)

The fault-tolerant solution can be accomplished by building multiple NLB clusters on multiple subnets. In DNS, a Round Robin entry is created that lists each NLB cluster's virtual IP address. When a client queries DNS for the IP address of the cluster, DNS returns a list of addresses to the client. The client selects the first address on the list and begins communicating with that cluster. The next client queries DNS, which again returns the list of IP addresses, but in a different order. This client accepts the first address on the reordered list and begins communicating with that server. This combination of load-balancing technologies results in a network infrastructure that can tolerate server failures. Clients will still be able to communicate with a cluster even if a portion of the network fails. In addition, since multiple NLB clusters are utilized, clients will enjoy continuous access to a cluster until all nodes in both clusters fail, which is a very unlikely possibility.

Summary

This chapter taught you how to design and implement a Network Load Balancing solution for a server environment. It began with a description of what NLB is and how it is used, continued with a step-by-step guide to installing and configuring a cluster node, and finished with the various components that must be considered in an NLB design.

Network Load Balancing provides websites that are highly available: client requests are automatically redistributed to active nodes in the event of a node failure. NLB is scalable; an NLB cluster can be scaled up to 32 nodes. It also uses load balancing to distribute client requests across all member servers in the cluster.

NLB's two primary components are WLBS.SYS, the device driver for NLB, and WLBS.EXE, the configuration and control program for NLB. Both of these are installed on each NLB cluster node.

Network Load Balancing uses a statistical mapping algorithm to determine which node of the cluster receives client requests. To determine this, the algorithm considers load weight, port rules, affinity, the client's IP address, and the client's port number.

Each node in the NLB cluster uses a heartbeat to determine which cluster members are active. Heartbeats are sent out every second by default; if a node stops sending heartbeat messages, a process called convergence takes place. During convergence, a determination is made as to which nodes of the cluster are active. Client requests for a node determined to be inactive are redistributed to the remaining nodes.

When designing a Network Load Balancing solution, several factors must be considered: NLB can only be used on Windows 2000 Advanced Server and Windows 2000 Datacenter Server. It only operates in network topologies that use Ethernet or Fiber Distributed Data Interface (FDDI). All nodes in the NLB cluster must reside in the same broadcast domain or subnet. And finally, network adapters that are being used with NLB must support dynamic allocation of MAC addresses.

In the next chapter, we will examine the Microsoft Cluster service and the benefits it provides when implemented in your highly available Web solution. We'll look at the hardware issues that are part of implementing a clustering solution, as well as the software components of clustering, including the operating system, resources, and resource groups.

Exam Essentials

Identify the four configuration models that can be used with Network Load Balancing.  When planning a Network Load Balancing cluster, you can choose from one of the following four configuration models: single network adapter in unicast mode, multiple network adapters in unicast mode, single network adapter in multicast mode, and multiple network adapters in multicast mode.

Know the three affinity settings and when to use them.  A relationship that maintains session state between the client and node is known as affinity. In NLB, there are three affinity types: None, Single, and Class C. The None affinity type does not maintain any session state affinity. The Single affinity type maintains a session state affinity between a single IP address and a cluster node. The Class C affinity type maintains affinity between all clients of a particular Class C network and a cluster node.

Understand convergence.  During convergence, the remaining working nodes first determine which nodes are active cluster members. From the remaining active nodes, the node with the highest priority is elected as the default node. Finally, once cluster membership is determined, client requests from the failed node are redistributed to the remaining nodes.

Understand how host priorities are used in an NLB cluster.  Priorities are used to specify that a particular node handles all incoming cluster traffic. If the node with the highest priority setting fails, the cluster initiates convergence. Convergence determines which node then gets the highest priority. Once convergence is complete, incoming cluster traffic is sent to the new highest-priority node.

Know the difference between multicast and unicast modes.  In unicast mode, Network Load Balancing provokes switch flooding to allow delivery of network traffic to all cluster nodes simultaneously. Unicast mode accomplishes this by using a single MAC address for all cluster nodes. In multicast mode, NLB adds a multicast MAC address to all cluster nodes, allowing all nodes with the same multicast MAC address to receive cluster traffic. Unicast, along with two network adapters, is the recommended configuration for an NLB cluster.

Key Terms

Before you take the exam, be certain you are familiar with the following terms:

A resource records, affinity, availability, convergence, dispatching, DNS (Domain Name Service), fault tolerance, filtering, filtering modes, hardware-based NLB, heartbeats, host priority, load weight, MAC address, multicast, port rules, Round Robin DNS (RRDNS), scalability, session states, software-based NLB, statistical mapping algorithm, switch flooding, TCP/IP, unicast, virtual IP address

Review Questions

1. Which of the following network protocols is/are supported by Windows 2000's Network Load Balancing?

A. NetBEUI
B. DLC
C. NWLink (IPX/SPX)
D. TCP/IP

2. Which of the following benefits can be provided by Network Load Balancing? Select all that apply.

A. Load balancing
B. Lower TCO
C. Scalability
D. High availability

3. How does Round Robin DNS (RRDNS) provide load balancing?

A. Uses RRDNS resource records
B. Uses multiple A resource records per host name
C. Uses multiple CNAME resource records per host name
D. Uses one A resource record and multiple CNAME resource records

4. Of the following, which versions of the Windows 2000 operating system support Network Load Balancing? Select all that apply.

A. Windows 2000 Professional
B. Windows 2000 Server
C. Windows 2000 Advanced Server
D. Windows 2000 Datacenter Server

5. When a client communicates with an NLB cluster, with which IP address does it communicate?

A. The cluster's primary IP address
B. The desired node's IP address
C. The cluster's master IP address
D. The cluster's virtual IP address

6. How does Network Load Balancing determine which cluster node will handle a client's request?

A. NLB chooses the primary node.
B. NLB uses the statistical mapping algorithm.
C. NLB uses a First In, First Out (FIFO) algorithm.
D. NLB examines node priorities.

7. Which of the following IP protocol configurations can be employed in an NLB cluster? Select all that apply.

A. Unicast
B. Broadcast
C. Multicast

8. You want a particular cluster node to handle all network traffic for a web-based application. This application currently uses port 8087. When defining a port rule for this application, which filter mode would you select?

A. Class C
B. Multiple Hosts
C. Single Host
D. None

9. You are implementing an NLB cluster for an application that needs to maintain a session state. The application's users reside behind a proxy array. Which affinity setting should you choose?

A. None
B. Single
C. Class C
D. Proxy

10. What is the maximum number of nodes in an NLB cluster?

A. 2
B. 4
C. 16
D. 32

Answers to Review Questions

1. D. Network Load Balancing does not support the NetBEUI, DLC, or NWLink (IPX/SPX) protocols. It only supports the TCP/IP protocol. NetBEUI stands for NetBIOS Extended User Interface and was originally developed by IBM and Microsoft to support workgroup-size LANs of up to 200 workstations. DLC, Data Link Control, is primarily used for the IBM mainframe environment. NWLink (IPX/SPX) is primarily used for networks running Novell NetWare.

2. A, C, D. Network Load Balancing provides load balancing, scalability, and high-availability benefits to organizations that need to maintain web-based applications 24 hours a day, 7 days a week.

3. B. Round Robin DNS achieves load balancing by using multiple A resource records per host name. Each time the host name is queried, the DNS server answers with the next A resource record in the list.

4. C, D. Both Windows 2000 Advanced Server and Datacenter Server can be used to provide Network Load Balancing. Windows 2000 Professional and Windows 2000 Server do not have the capability to be used as NLB cluster members.

5. D. Clients send requests to the cluster's virtual IP address as though it is a single server. In turn, all nodes that are members of a cluster receive the clients' requests.

6. B. Network Load Balancing uses a statistical mapping algorithm to determine which node will handle the client request. Each client request or packet received is inspected by the NLB driver and subjected to the algorithm.

7. A, C. Windows 2000's Network Load Balancing feature can use either unicast or multicast IP protocol configurations. In a unicast configuration (the default), NLB reassigns the MAC address of the network adapter or cluster adapter and assigns all cluster hosts the same MAC address. In a multicast configuration, all incoming network traffic is distributed simultaneously to each cluster host.

8. C. The Single Host filtering mode specifies that a single node will handle all cluster traffic. This is the node that has the highest host priority. When using the Multiple Hosts filtering mode, all incoming client requests are distributed to all cluster hosts.

9. C. The Class C affinity setting distributes all client requests from a particular Class C network to a single cluster node. This setting is used to maintain session state for users that reside behind a proxy array.

10. D. Windows 2000's Network Load Balancing supports a maximum of 32 nodes.

CASE STUDY

Helpandlearn.com, LLC

Take about 10 minutes to look over the information presented for this case study and then answer the questions at the end. In the testing room, you will have a limited amount of time—it is important that you learn to pick out the important information and ignore the “fluff.”

Background

A company in Exton, Pennsylvania—Helpandlearn.com, LLC—has asked you to configure a Network Load Balancing cluster to handle the traffic for the company's website. Helpandlearn.com uses its website to facilitate discussion between teachers and students. Students are encouraged to help their fellow students as well.

CEO Interview  "The sharing of knowledge is very important in the new economy. Our mission is to get students and teachers talking, which in turn will foster a productive learning environment for all involved. As our reputation grows, we expect traffic to the website to explode. The site has to be able to handle whatever we throw at it."

CIO Interview  "I have a very limited IT staff. You need to come up with a solution that will not tie up my staff."

Current Environment

Helpandlearn.com currently deploys a group of Linux-based web servers to house the website. These servers are providing a limited, static form of fault tolerance by utilizing Round Robin DNS.

CEO Interview  "We know that we need to move into the twenty-first century, and we have set aside funding for it. What we don't have is internal expertise—we deal in knowledge, but not that kind of knowledge! That's where you come in—you tell us what would be the best solution."

CIO Interview  "After compiling a business plan, we see the advantages of moving to some form of automated network-load-balancing solution. We looked at the numbers, though, and decided that our best bet is to outsource our technical design staff (instead of hiring internal staff). Our major concerns are to keep future costs under control and to be able to use off-the-shelf applications for which training is available."

Business Requirements

Like every other business in the world, Helpandlearn.com wants to ride the wave of e-commerce. It has registered a domain (helpandlearn.com), and its Round Robin DNS entries are being hosted by a local ISP. Due to the nature of Round Robin DNS, when one of the Linux web servers fails, Internet users intermittently cannot connect to the helpandlearn.com website.

Technical Requirements

Helpandlearn.com would like to first create a more stable environment for its website, in which a single web server failure does not cause problems for Internet clients. The company also wants a solution that will be able to meet the needs of the organization as its web presence grows. You will be responsible for its sole support system. Once the website is in place, client connections are expected to triple overnight. Your challenge is to design a system that can handle the overhead of such a busy web-based business. Before any other action is taken, the CEO wants the website code and data to be safe and secure. She wants only a few people authorized to make changes, but she wants everyone else to be allowed to view the information. Because the entire business revolves around the helpandlearn.com website, the site has to achieve an almost 100 percent availability rating.

Maintenance

The company has decided to outsource the entire implementation and maintenance of its server environment to your company. No on-site IS staff will be available. Your recommendations will determine the future of helpandlearn.com's web environment.

Funding

Knowledge management has been coming in and out of vogue for the past decade. Helpandlearn.com was able to find some venture capital for use in funding this expansion into the e-commerce arena. Funding should not be an issue but, as with all other consulting jobs, dollar accountability is critical to a continued relationship with the client.

Questions

1. Which of the following would be valid justifications for your choice of Windows 2000? Select all that apply.

A. Because Microsoft leads the industry. Your best bet is usually to go with its flagship product.
B. Windows 2000 is based on Windows NT—a solution that has been in the market long enough to prove itself reliable, secure, and stable.
C. Windows 2000 integrates well with other products that helpandlearn.com may have to implement, such as Internet Information Server, Exchange Server, and SQL Server.
D. The advanced security built in to Windows 2000 will ensure that any data is protected.

2. What two requirements must be met by helpandlearn.com's Internet applications before they can be implemented into the cluster?

A. IPX/SPX-based
B. TCP/IP-based
C. Integrates with Active Directory
D. Supports affinity settings

3. The CIO is concerned that the convergence process may take too long in the event of a node failure. What is the default amount of time that must pass before convergence will be initiated?

A. 1 minute
B. 5000 milliseconds
C. 1000 milliseconds
D. 10 minutes


4. Which of the following would be your first action on behalf of this client?

A. Purchase a group of servers and a copy of Windows 2000 and get the Network Load Balancing cluster up and running as soon as possible.
B. Observe the website and its related network traffic. Note the type of traffic and the ports it operates on.
C. Configure a group of servers as a Network Load Balancing cluster.
D. Delete all Round Robin DNS entries.

5. While testing your new Network Load Balancing design, helpandlearn.com’s Network Manager approaches you about excessive network traffic on their internal network. What happened?

A. You plugged the cluster nodes into the production switch. This resulted in switch flooding.
B. Internal employees began using the network.
C. Cluster convergence is eating up too much bandwidth.
D. The network adapters were not compatible with the network topology.

6. The CIO is intrigued by Windows 2000’s Network Load Balancing service. He would like to implement a Terminal Services solution using Network Load Balancing. How could you implement this solution?

A. It cannot be implemented. Terminal Services is not supported by NLB.
B. Implement a Single Host Filtering Mode.
C. Install two network adapters in each node.
D. Create a port rule for TCP port 3389.

7. To implement a solution that is highly available, scalable, and fault tolerant, which of the following technologies or capabilities would you implement?

A. Configure multiple Network Load Balancing clusters
B. Place each cluster on a separate subnet
C. Install the website code and data on each cluster node
D. Configure Round Robin DNS with the IP addresses of all clusters


Answers

1. A, B, C, D. This is one of those marketing questions that Microsoft always throws into its tests. The point here is that you not only have to know how to implement technology, you also have to know how to justify it.

2. B, D. Applications in an NLB cluster must use TCP/IP as the network protocol and must support affinity settings to maintain a client’s session state.

3. B. There are two parameters for convergence. AliveMsgPeriod specifies in milliseconds the period of time between heartbeats; it defaults to 1000 milliseconds. AliveMsgTolerance specifies the number of missed heartbeats before convergence is initiated; it defaults to 5 missed heartbeats.

4. B. Before anything else, examine the current environment! By noting how the website is currently used and any ports being utilized, you can begin planning your cluster and designing port rules.

5. A. Cluster nodes that are connected via a switch must receive incoming cluster traffic simultaneously to function properly. Network Load Balancing prevents the switch from identifying the source MAC address by using different MAC addresses when replying. Since the switch cannot match the MAC address to a particular port, it must continue to send all traffic to all switch ports. As a result, computers that are not cluster members also receive this traffic, which consumes network bandwidth as well as each computer’s processing resources, since it must determine which packets to discard. This is known as switch flooding.

6. D. For applications that utilize Windows 2000’s Terminal Services, Network Load Balancing should be configured with a port rule for TCP port 3389 (the RDP protocol).

7. A, B, C, D. You can create a Network Load Balancing solution that can survive failures in the network infrastructure, as well as intracluster failures such as node failures, by configuring multiple NLB clusters. You should place each cluster on a separate subnet and install the website code and data on each cluster node. You should also configure Round Robin DNS with the IP addresses of all clusters.
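As a quick check of answer 3, the default detection time works out as a simple product. This is only a sketch of the arithmetic, using the two Registry parameter names cited above:

    # Defaults cited in answer 3.
    alive_msg_period_ms = 1000  # AliveMsgPeriod: time between heartbeats
    alive_msg_tolerance = 5     # AliveMsgTolerance: missed heartbeats allowed

    # 5 missed heartbeats spaced 1000 ms apart = 5000 ms before convergence.
    print(alive_msg_period_ms * alive_msg_tolerance)  # 5000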


Chapter 3: Designing Server Clustering Solutions

MICROSOFT EXAM OBJECTIVES COVERED IN THIS CHAPTER:

- Design Cluster service cluster solutions to improve fault tolerance. Considerations include the number of nodes, placement of servers, cluster resource groups, failover and failback strategy, active/active and active/passive configurations, application types, and dependencies.


Microsoft has defined several clustering solutions that can be incorporated into your network when designing a highly available web solution. In Chapter 2 you looked at one of those solutions, Network Load Balancing (NLB), which allows an administrator to distribute IP traffic across the network to multiple servers. Three additional cluster technologies are available for delivering cluster solutions. These are Component Load Balancing (CLB), Microsoft Cluster service (MSCS), and Application Center 2000. Here in Chapter 3 we will examine Microsoft Cluster service.

MSCS provides several benefits when used for clustering, including high availability, centralized and easier management of resources, and scalability. We’ll define what a cluster is and how MSCS provides clustering services, specifically focusing on Windows 2000 Advanced Server and the components of the server cluster. We’ll look at the hardware issues that are part of planning to implement a clustering solution. Also covered here are the software components of clustering, including the operating system, the Microsoft Cluster service, resources, and resource groups. Finally, we’ll examine other planning issues to be addressed when you’re designing a clustering solution; these issues include cluster configuration models and schemes.

This chapter covers all considerations of the Microsoft exam objective for designing Cluster service solutions to improve fault tolerance, except the consideration for placement of servers. This issue is covered in Chapter 10, “Designing Content and Application Topologies.”


Windows 2000 Cluster Service

Before we talk about the Microsoft Cluster service (MSCS), let’s define exactly what is meant by a server cluster. It’s a group of independent servers that are managed as a single system. A server cluster must have, at a minimum, the following components:

- Two servers connected by a network
- A method by which each server can access data on the other server’s disks
- Special cluster software to manage the data access, such as MSCS

Server clustering provides several primary benefits, including high availability, ease of management, and cost-effective scalability. Clusters are used in environments where mission-critical applications must be available at all times. These include database applications, web-based applications that depend on database applications, e-mail or messaging, and data sharing.

The primary functionality of a cluster is evident when a server in the cluster fails or goes offline. The other server in the cluster will take over the operations for the failed server. Depending on how the cluster was implemented, there is no loss of data or interruption in service (or only minimal loss or interruption) for the end user during the transfer of ownership from one server to another.

Windows 2000 Cluster service is based on the shared nothing model. In this model, each server in the cluster owns and manages its own resources. Common devices such as the cluster disks that are shared by both servers are owned and managed by only one server at a time. Cluster service supports standard Windows NT and 2000 drivers for local server storage. The external storage devices used for the cluster must support standard PCI-based SCSI connections. This includes SCSI over Fibre Channel, where SCSI devices are hosted on a Fibre Channel bus instead of a SCSI bus.

Cluster applications and servers are presented to users on the network as virtual servers. A virtual server consists of a group of resources that provide some function or service. Each of these groups has its own IP address and network name. The end users’ perception is that they are connecting to a single physical server. A virtual server can be hosted by any node on the cluster. In fact, a single node can contain multiple virtual servers, each of which can provide a different function or service.
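To make the virtual-server idea concrete, here is a minimal Python sketch (node names and the address are invented for illustration) of a resource group with its own network name and IP address being rehosted when its node fails; the real cluster API is not modeled:

    from dataclasses import dataclass, field

    @dataclass
    class VirtualServer:
        name: str   # network name clients use
        ip: str     # IP address clients use
        host: str   # node currently hosting the group

    @dataclass
    class Cluster:
        nodes: list
        vservers: list = field(default_factory=list)

        def fail_node(self, failed):
            # Rehost every virtual server owned by the failed node; the
            # name and IP move with the group, so clients keep using the
            # same address and never learn which physical node serves them.
            survivors = [n for n in self.nodes if n != failed]
            for vs in self.vservers:
                if vs.host == failed and survivors:
                    vs.host = survivors[0]

    cluster = Cluster(nodes=["NodeA", "NodeB"])
    cluster.vservers.append(VirtualServer("WEB1", "192.168.10.5", "NodeA"))
    cluster.fail_node("NodeA")
    print(cluster.vservers[0].host)  # NodeB: same name and IP, new host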


In the event of an application failure, the Cluster service can move the resource group for the failed application to another node on the cluster. The IP address that was assigned to the virtual server is reassigned to the functioning node of the cluster. End users do not know that this operation has taken place and that the application is being hosted on another cluster node.

Advantages of Clustering

As discussed in Chapter 2, Network Load Balancing can be used to distribute the load for client requests across multiple web servers. With this in mind, you may be asking, “Why would I want to use clustering? Why not just use NLB for my needs?” Microsoft Cluster service provides added benefits and can be used in conjunction with NLB to provide a higher degree of functionality in the web environment. Let’s look at some of the specific benefits.

High availability of resources High availability of resources ensures that critical resources such as disk drives are always available. MSCS incorporates a “shared nothing” clustering architecture. A software heartbeat (see Chapter 2) is used to monitor and detect failed servers or applications on the server. If a failure is detected, ownership of the failed resources is automatically transferred to a live server. This transfer is transparent to the user.

Scalability Clusters are designed to be scalable. Individual resources such as CPUs, storage devices, and applications can be added incrementally as needed to expand network capacity. Instead of having to replace your existing hardware, you can add additional services or hardware to your current configuration in order to extend the capability of the cluster.

Centralized administration/manageability MSCS provides a single tool, the Cluster Administrator, to manage the cluster environment. Administrators use it to manage all servers and file shares on the cluster, as well as to specify which applications and components will run on the servers. Cluster Administrator can also be used to manage failover and failback policies and review all activities and failures of servers in the cluster. (Failover and failback policies are discussed later in this chapter.)


Using Windows 2000 Advanced Server for the Clustering Solution

Windows 2000 Advanced Server can be used to configure clusters containing two nodes or servers, as shown in Figure 3.1. It can support up to eight processors and 8GB of memory. Both the Cluster service and NLB have been incorporated into the Advanced Server operating system. If you are planning to use clustering in your web environment, you must have Windows 2000 Advanced Server at minimum. You cannot use Windows 2000 Server; it does not support clustering, although it does have support for up to four processors and 4GB of RAM.

Microsoft Exam Objective: Design Cluster service cluster solutions to improve fault tolerance. Considerations include the number of nodes.

FIGURE 3.1 Windows 2000 Advanced Server two-node cluster (two nodes sharing a disk array and serving clients)


If your network needs a larger node configuration, consider using Windows 2000 Datacenter Server. It has built-in support for four-node clusters. It was designed for large-scale environments and supports up to 32 processors and 64GB of memory. Both Cluster service and NLB are standard features of Windows 2000 Datacenter Server, as they are for Windows 2000 Advanced Server.

For large-scale enterprise installations, you may want to research Application Center, Microsoft’s deployment and management tool for high-availability web applications built on the Windows 2000 operating system. This tool provides a unified user interface to create a web farm that leverages NLB, CLB, and MSCS. Application Center can be used to create and remove clusters, add and remove cluster members, configure load balancing, and monitor cluster performance. For additional information, visit http://www.microsoft.com/applicationcenter/default.asp.

Windows 2000 Advanced Server through Microsoft Cluster service provides several features for setup in a cluster environment. Let’s look briefly at each of these new features and how they enhance the functionality of clustering:

- MSCS provides failover support for all resources, including applications; physical resources such as hard disks and modems; services and logical resources such as file shares; IP addresses; and network names.

- Dependencies can be created between resources. If you have a web application that depends on a database such as SQL Server 2000, you could establish a dependency between the two resources. When the web application is brought online, any resources on which it is dependent (such as the SQL Server 2000 database) will be checked to make sure they are also online.

- MSCS is a fully integrated function of Windows 2000 Advanced Server. It can be installed during the initial setup of Windows 2000 or later via the Add/Remove Programs tool in Control Panel.


- For administrators who need to set up multiple instances of MSCS, unattended installations can be scripted through answer files. The Sysprep tool can be used to prepare a cluster system so that its image can be cloned onto other servers.

- Rolling upgrades were introduced in Service Pack 4 (SP4) for Microsoft Windows NT 4.0 Server, Enterprise Edition. They allow for the upgrade of software or applications on the cluster without having to take the entire cluster down. One node can be taken offline, upgraded, and then brought back online and synchronized with the other nodes on the cluster.

Rolling upgrades can be performed for upgrading system hardware, applications, or the operating system. If you plan to do a rolling upgrade of an application, keep in mind that you should first check with the application vendor to determine if rolling upgrades are supported.

- MSCS supports many services in Windows 2000, including DFS, SMTP, NNTP, and IIS.

- The Cluster Administrator is integrated with the Microsoft Management Console (MMC).

- A wizard has been included to aid in configuring the virtual server. This virtual server is the virtual network name or IP address that clients use to access resources on the cluster.

- Detection of hardware devices added to the cluster is provided through Plug and Play technology. New storage devices added to the system are automatically detected by the Cluster service without the need for a system reboot.

- To enhance the capability of writing scripts for managing the cluster, a COM Automation interface has been added to the Cluster API.

- Two backup APIs have been added to MSCS to provide for the restoration of a cluster following an event such as a cluster failure.

Now that you are familiar with some of the benefits of using Microsoft Cluster service, let’s examine the components that are required for configuring a cluster.


Cluster Components

In this section we will examine some of the components to be considered when you are designing your cluster solution. First we will look at the hardware components of the cluster. These include the network adapter cards, the shared cluster disks, the nodes or servers, and the quorum disk. Next we will discuss the cluster network, which is used for internal communication between nodes in the cluster. Finally, we will examine the various software components that are incorporated in the cluster.

Cluster Hardware Components

One of the most critical factors affecting the success or failure of a clustering solution implementation is the hardware that is used for the cluster. The majority of problems encountered by administrators when installing MSCS result from hardware that has not been tested and certified for use in a clustering environment. Microsoft will not support a cluster configuration that includes hardware not approved for clustering and not on the Hardware Compatibility List (HCL) for the Cluster category. All servers that are to become nodes must meet the minimal requirements for Windows 2000 Advanced Server.

The hardware used in a cluster must be on the Cluster Service HCL. The latest version of the HCL can be found at http://www.microsoft.com/hcl/. Doing a search on Cluster will bring up the current list of supported hardware.

Each cluster node will have two PCI network adapters. One of these adapters is connected to the public network, and the other is used for the private or node-to-node cluster network. (See Figure 3.2.)

We will discuss the differences between the private and public network for clusters later in this chapter, in the Cluster Networks section.

Each node is attached to a shared external SCSI bus. This bus must be separate from the system bus of the server. This adapter can be PCI SCSI or Fibre Channel and is attached to an approved external disk storage unit that will be used as the clustered disks. RAID is the recommended disk storage solution because of its fault tolerance capabilities. Failure of one disk does not result in a single point of failure in the cluster.

FIGURE 3.2 Cluster hardware configuration (each node has a public NIC, a private interconnect NIC, and a SCSI card attached to the shared SCSI or Fibre Channel bus and disk array)

Chapter 4 discusses the various storage technologies available when designing a cluster using Windows 2000 Advanced Server and Windows 2000 Datacenter.

If possible, all servers should use identical hardware. Though this is not a requirement, having similar hardware for all servers that act as nodes in a cluster makes configuration and problem resolution easier and more manageable. We’ve mentioned the term node several times already in this chapter. Let’s take a closer look at the role of a node in reference to Microsoft Cluster service. We will also examine further the other hardware components of the cluster, including the network adapter cards, the clustered disks, and the quorum disk.


Nodes

A node is a server that is part of a Windows 2000 cluster. This node can be either an active or inactive member of the cluster. Cluster nodes can be configured as member servers or domain controllers. Nodes have several characteristics:

- Every node in the cluster is attached to a storage device. Some nodes may be attached to multiple storage devices.
- Each node communicates with other nodes via a physical, independent network called an interconnect. (We will discuss this independent network later in the chapter.)
- Servers leaving or joining the cluster are detected by every node of the cluster.
- All resources running locally and on other cluster nodes are detected by every node of the cluster.
- One common name is used to group all nodes in the cluster. This cluster name is used when the cluster is managed or administered.

Nodes have specific states of operation. Table 3.1 describes the various states of nodes participating in a cluster.

TABLE 3.1: States of Nodes in a Cluster

Down: Node is not active and not participating in cluster operations.
Joining: Node is actively becoming a participant in the cluster operations.
Paused: Node is active but cannot take ownership of resources or bring resources online.
Up: Node is active in cluster operations.
Unknown: Status of the node cannot be determined.


Nodes must communicate with one another on the cluster. These communications are accomplished in three ways:

- Using Remote Procedure Calls (RPCs) to communicate with other nodes that are active and online. RPC is an industry standard that is used for client-server communications. It uses Interprocess Communication (IPC) mechanisms to establish communications between the client and server.
- Using heartbeats to verify which other nodes are online.
- Using the Quorum resource, since all cluster configuration changes are stored in the quorum log.

Network Interface Cards (NICs)

You’ve learned that one of the hardware requirements for a server or node in a cluster is that it must have no less than two PCI network adapters. One of the network adapters is used for connection to the public network. This is the network through which your users will access the virtual server. The second network adapter is used for node-to-node cluster communications (the private network).

Microsoft will not support a cluster configuration that uses only one NIC per node. You must have a separate network adapter card in each node for the private node-to-node network. This is a requirement for Cluster HCL certification.

Clustered Disk

All nodes in a cluster are connected via a clustered disk. This is an external disk storage unit that is separate from the system disk on each node or server. Many companies deploy external disk arrays called SANs for the storage of the clustered data. A SAN (storage area network) is a high-speed network used to establish a connection between the storage elements, such as the disk array, and the host servers or nodes.

We will study SANs and disk storage in Chapter 4.


Quorum Disk

One disk in the cluster storage system is designated as the quorum disk for the cluster. This is the most important disk in the system. All cluster configuration information is kept on the quorum disk, and all nodes in the cluster must be able to communicate with this disk. The quorum disk prevents nodes from forming their own clusters if network communication between the nodes fails. When a failure occurs, the node detecting the failure will attempt to take control of the quorum resource. If it does not get control of the quorum disk, that node cannot form a cluster.

A node can form a cluster in Windows 2000 only if it can take control of the quorum resource. The quorum resource ensures that only active, communicating nodes can operate as a cluster.

The most current version of the cluster configuration is stored on the quorum resource or disk. This data is stored in the form of recovery logs and checkpoint files. The recovery logs are used for several purposes:

- To guarantee that only one set of active nodes is allowed to operate as a cluster.
- To allow a node to form a cluster only if it can gain control of the quorum resource.
- To allow a node to remain in or join an existing cluster only if it can communicate with the quorum resource.
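The arbitration rule behind these guarantees can be sketched in a few lines of Python. This is an illustration of the concept only, assuming a quorum device whose reservation can be held by one node at a time:

    class QuorumDisk:
        """Only one node at a time can hold the reservation."""
        def __init__(self):
            self.owner = None

        def try_reserve(self, node):
            # The first node to arbitrate successfully wins; every other
            # node is refused and therefore cannot form a competing
            # cluster, even if the nodes can no longer see each other.
            if self.owner is None:
                self.owner = node
                return True
            return self.owner == node

    quorum = QuorumDisk()
    for node in ("NodeA", "NodeB"):  # both have lost sight of each other
        formed = quorum.try_reserve(node)
        print(f"{node}: {'forms the cluster' if formed else 'must stay out'}")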

Cluster Networks

In the earlier introduction to the cluster server’s hardware components, you learned that each server must have at least two network adapter cards. One card is for access to the public network, and the other is for the private network that is used for internal communication between cluster nodes.

Networks in the cluster environment are a little different from the public networks that you may be familiar with already. A network in a server cluster is referred to as an interconnect. All nodes in the cluster must be connected through one or more physical, independent networks. This interconnect is required for node-to-node communications within the cluster. An interconnect can take on any of the following roles in the cluster:

- Carrying internal cluster communications or node-to-node communications.
- Acting as a public network that provides clients with access to cluster applications or client-to-cluster communications.
- Acting as a public and private network that carries internal cluster communications and also connects clients to the cluster applications and services. This provides both node-to-node communications and client-to-cluster communications.
- Carrying traffic that is not related to cluster operation.

The interconnect that supports only node-to-node communication is referred to as a private network. If the network supports client-to-cluster communication, it is referred to as a public network.

The Role of the Private Interconnect

The private network or interconnect is used to carry internal cluster communications. It is used by the Cluster service to detect node status and to monitor status changes. A heartbeat or datagram is used on this network to detect if a cluster node is alive. The status of each node is constantly monitored so that the Cluster service knows whether groups or resources have failed. The interconnect ensures that the cluster will continue to function even if one of the servers in the cluster loses its network connections. See Figure 3.3.


FIGURE 3.3 Cluster interconnect or heartbeat (two nodes on the public network, joined NIC-to-NIC by a private interconnect and sharing a disk array)

Cluster Heartbeats in the Private Network

The cluster heartbeat is used by the nodes in the cluster to verify which nodes are online and operational. The heartbeats are actually datagrams that are transmitted between nodes. Each datagram is a single UDP packet sent at specified intervals by the Node Manager. The Node Manager is a component of the Cluster service that exists on each node in the cluster. If a node fails to respond to the heartbeat after a set number of tries, that node is considered failed and is marked as unavailable in the cluster. The default number of attempts is five and can be changed by using the AliveMsgTolerance Registry parameter.

By default, the first node to come online in the cluster is responsible for initiating the sending of the heartbeat to the other nodes in the cluster. All other nodes are responsible for responding to the original node.
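A minimal Python sketch of this detection logic follows. It models one boolean per heartbeat interval rather than real UDP datagrams, and only the AliveMsgTolerance parameter named above:

    ALIVE_MSG_TOLERANCE = 5  # default number of missed heartbeats allowed

    def check_node(replies):
        """replies: one True/False per heartbeat interval for one node."""
        missed = 0
        for replied in replies:
            missed = 0 if replied else missed + 1  # a reply resets the count
            if missed >= ALIVE_MSG_TOLERANCE:
                return "unavailable"  # node considered failed
        return "up"

    # A node that answers twice, then goes silent for five intervals:
    print(check_node([True, True, False, False, False, False, False]))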


The heartbeats are 48 bytes in size and are transmitted at specific intervals. The first node in the cluster sends a heartbeat every 0.5 seconds, and the other nodes respond within 0.2 seconds. If the first node does not get a response from another node, the first node begins to send heartbeats to the apparently failed node every 18 seconds, using the following interval schedule:

- Four heartbeats at 0.7-second intervals
- Three heartbeats within the next 0.75 seconds
- Two heartbeats at 0.3-second intervals
- Five heartbeats within the next 0.9 seconds
- Two heartbeats at 0.3-second intervals
- Two heartbeats 0.3 seconds later

If the apparently failed node does not respond to any of these heartbeats, it is considered failed and marked as unavailable. After the first node in the cluster fails, the second node begins the heartbeat-sending process within 0.7 seconds of the last received heartbeat.

In addition to its role in providing the heartbeat for the cluster, the private network has several other functions. It carries replication state information between nodes, allowing each node to know what cluster groups and resources are currently running on every other node in the cluster. The private network can be used to issue commands from the Cluster service from one node to another. Applications that are cluster aware can use the interconnect to communicate with other “copies” of the application that may be running on multiple servers. Cluster-aware applications can also use the private network to transfer data among nodes. Later in this chapter we’ll discuss cluster-aware applications again, when we look at the application types that are supported by Microsoft Cluster service. If for some reason the Cluster service cannot communicate over the private network, the service will attempt to send its cluster communications over the public network.


IP Address for the Private Interconnect

You may be concerned about what type of IP address scheme you should use when configuring the private network or interconnect. When designing your cluster environment, do not use routable Internet Protocol (IP) addresses. Best practices suggest that the IP addresses for the private network should come from the IP addresses that have been assigned by the IANA for private networks in RFC 1597 (since carried forward in RFC 1918). This RFC defines three classes of IP address that can be used for private networks, as follows:

Class A: 10.0.0.0 through 10.255.255.255
Class B: 172.16.0.0 through 172.31.255.255
Class C: 192.168.0.0 through 192.168.255.255

Under RFC 1597, any TCP/IP network that is not directly or indirectly connected to the Internet can use any range of valid IP addresses from the Class A, B, or C addresses listed above.
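If you want to verify a candidate address against these ranges, Python’s standard ipaddress module recognizes them (it implements the RFC 1918 successor to RFC 1597, along with a few other special-purpose blocks):

    import ipaddress

    # The three private blocks listed above are among those that
    # ipaddress.is_private recognizes.
    for candidate in ("10.0.0.1", "172.16.0.1", "192.168.0.1", "8.8.8.8"):
        addr = ipaddress.ip_address(candidate)
        print(candidate, "private" if addr.is_private else "routable")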

Cluster Software Components

This section describes the software component side of clustering technology.

The Operating System

To utilize Microsoft Cluster service, your computers must be running either Windows 2000 Advanced Server or Windows 2000 Datacenter. Windows 2000 Server does not support clustering. Windows 2000 Advanced Server supports two-node clusters, and Windows 2000 Datacenter supports four-node clusters.

The Cluster Service Components

The Microsoft Cluster service must be running on all nodes in the cluster. This service uses a domain user account that must be the same on all the cluster’s nodes, so that they are able to authenticate. The Cluster service in both Windows 2000 Advanced Server and Windows 2000 Datacenter provides the functionality to create a cluster that is highly available, scalable, and manageable. Following are descriptions of the several components included in the Cluster service.


Microsoft Cluster Service (MSCS)

The Cluster service (MSCS) software runs in Windows 2000 and provides cluster-specific functions through several individual components. Each node of the cluster runs an instance of MSCS. The Cluster service is responsible for the management of cluster configuration and cluster objects. It handles event notification and coordinates cluster activities with other instances of MSCS. It is also responsible for the communications occurring among all software components of the cluster and for performing failover operations when needed.

Resource Monitors

The Resource Monitors are responsible for monitoring activity between the Cluster service and the Resource DLLs (described next). By default, one Resource Monitor is enabled per cluster node, but you can enable additional Resource Monitors if they are needed. The Cluster service uses the Resource Monitor to make requests from a resource. The resource uses the Resource Monitor to send status and event information to the Cluster service. The Resource Monitor is run as a separate process from the Cluster service to prevent the Cluster service from failing if a resource fails. If the Cluster service fails, the Resource Monitor is responsible for taking all groups offline on the affected node. A cluster node can run more than one Resource Monitor, but by default, only one is started by the Cluster service. This can be changed by the administrator to allow Resource Monitors to be assigned to specific Resource DLLs that may be causing problems.

Resource DLLs

Each cluster resource has a specific Resource DLL that manages the services for that specific resource. The Resource DLL is called by the Resource Monitor when a request is made by the Cluster service. Resource DLLs are provided for the resource types that are included in Windows 2000 Advanced Server. Additional DLLs can be provided by developers to supplement the DLLs that are provided in Windows 2000.

Cluster Administrator

The Cluster Administrator is the primary tool used to manage, configure, and monitor the cluster. The Cluster Administrator can be installed and used on Windows 2000 Professional, Server, and Advanced Server computers.


Cluster Automation Server

The Cluster Automation Server (CAS) facilitates the automation of cluster management through the scripting of COM objects. These objects can be designed using a scripting language such as Visual Basic or Windows Script Host (WSH).

Cluster Database

The cluster database, also known as the cluster hive, resides in the Windows 2000 Advanced Server Registry on each cluster node. The Quorum resource, too, has a copy of the cluster database. This database contains important information about all physical and logical elements of the cluster. Each node in the cluster is responsible for maintaining an updated copy of the cluster database. This is done through one of these three methods:

- Global updates, in which the Cluster service replicates all database changes to each node on the cluster.
- Periodic checkpointing, in which the Cluster service checks each node’s copy of the database to ensure consistency.
- Via the Quorum resource, where the Cluster service ensures that the recovery logs in the Quorum resource contain the most current cluster database information.

Network and Disk Drivers

The cluster network driver, Clusnet.sys, runs on each node of the cluster. This driver is responsible for monitoring the network paths among nodes. It routes messages among the nodes and detects any communication failures. The cluster network driver is responsible for notifying the Cluster service to initiate a failover in the event of loss of the heartbeat. The cluster disk driver, Clusdisk.sys, is responsible for placing reservations on the disk for the local system.

Cluster API

The Cluster API (application programming interface) acts as the main interface to the Cluster Automation Server, the Cluster Administrator, and any cluster-aware applications. Applications that support the Cluster API are defined as cluster-aware applications and can register with the Cluster service and receive status and event notification.


Resource API

The Resource API acts as the main interface between the Resource DLLs and the Resource Monitor.

IP Address

The IP Address resource is used to manage IP network addresses.

Disk

The Disk resource is used to manage disks that form the cluster’s storage.

Cluster Resources

Understanding resources is critical to your mastery of clustering. A resource represents a logical or physical component of the cluster; the resource provides a service to the client and has the following characteristics:

- Resources can be brought online and they can be taken offline.
- Resources can be managed by the cluster.
- Resources can be hosted on only one node of the cluster at a time.

The Microsoft Cluster service organizes resources into types. Several standard resource types are integrated with Windows 2000 to facilitate management of your cluster resources. These standard resources are associated with specific Resource DLLs provided by the Cluster service.

Physical Disk resource The Physical Disk resource is used to manage the shared drives on the cluster. This resource must be created before the shared drives can be used. With the Physical Disk resource you can control which node of the cluster has control of the shared drives. Data corruption can occur on a shared drive if more than one node has control of the drive. The Physical Disk resource cannot be created for logical drives or partitions. It can only be created for the entire physical disk drive.

DHCP resource Dynamic Host Configuration Protocol (DHCP) is used to dynamically allocate TCP/IP addresses and related resources. The DHCP resource is used to deploy a DHCP server cluster in Windows 2000 Advanced Server. When using the DHCP resource, keep in mind that it has several resource dependencies: it requires a Disk resource, an IP Address resource, and a name resource to function in the cluster environment.

WINS resource The WINS resource can be used to deploy and manage WINS services in a cluster. Like the DHCP resource, the WINS resource has several resource dependencies.

IIS resource The IIS resource is used to cluster web and FTP sites. If you plan to implement websites that use transactions, you’ll have to install the Microsoft Distributed Transaction Coordinator (MS DTC) resource.

DTC resource The DTC resource is used for a clustered installation of Microsoft Distributed Transaction Coordinator. DTC, which is part of COM services, aids in the management of transactions within Internet Information Server (IIS).

IP Address resource A number of the cluster components, when configured, may require their own IP addresses. The IP Address resource is used to manage these IP addresses.

Quorum resource The Quorum resource is a physical resource that is required for node operations. Normally, this is a single disk in the cluster storage system that is defined as the quorum disk. This disk stores and updates the cluster’s configuration information, stored in the form of recovery logs.

Network Name resource The Network Name resource provides an alternate computer name for an entity that exists on the network. Normally this is used in conjunction with the IP Address resource to create a virtual server.

Print Spooler resource The Print Spooler resource can be used to support network printers on the cluster. You can only set up network printers as resources; local printers are not supported on a cluster. Users access the printers using their network name or IP address. If a document is being printed when the Print Spooler resource fails over, the printing of the document will be restarted on the node that took over the Print Spooler resource. If the Print Spooler resource is taken offline, any documents in the print queue will be finished first. Any documents that are spooling during the process will be lost from the print queue and will have to be resubmitted.


File Share resources There are three types of File Share resources: Basic, Share Subdirectories, and DFS Root. The Basic File Share resource provides availability to a single folder that is shared using a standard network share name. The Share Subdirectories resource can be used to cluster a large number of folders and their subdirectories. The DFS Root resource clusters a distributed file system (DFS) root folder.

The following generic resource types are provided for use with non-cluster-aware applications. When using one of these generic resource types, if you find that your application is not interacting with the cluster, you may have to develop a custom Resource DLL.

Generic Application resource The Generic Application resource can be used to implement applications that are not cluster-aware. This resource attempts to provide the application with basic low-level cluster capabilities. If, after being assigned to this resource, the application is terminated and restarted as the result of a failover, you can trust that the application will work with the cluster. If it does not work with the Generic Application resource, you may have to develop a custom Resource DLL so that the application can be used on the cluster.

Generic Service resource The Generic Service resource can be used to provide support for services that are cluster unaware. As with the Generic Application resource, if the cluster-unaware service does not function with the Generic Service resource, you may have to develop a custom Resource DLL.

Resource Dependencies

A resource dependency is created when one resource on the cluster depends on another resource in order to operate. A resource that is dependent on another resource must be taken offline before the resource it depends on is taken offline. Inversely, the resource on which another depends must be brought online before the dependent resource can be brought online.

Microsoft Exam Objective: Design Cluster service cluster solutions to improve fault tolerance. Considerations include resource dependencies.


One good example of a resource dependency is represented in the virtual server’s setup. A virtual server consists of two resources: a Network Name resource and an IP Address resource. You cannot have a virtual server without both of these resources. If one fails, they both fail. Figure 3.4 illustrates a sample dependency tree configuration.

FIGURE 3.4 Sample dependency tree (File Share, Print Share, and IIS Virtual Root resources depending on Network Name, IP Address, and Physical Disk resources)

The Cluster service brings resources online and takes them offline in accordance with the dependencies that you have specified when you configured the resource group. Table 3.2 lists some of the standard cluster resources and their resource dependencies.

TABLE 3.2: Resource Dependencies for Standard Cluster Resources

DHCP: Physical Disk, IP Address, Network Name
Distributed Transaction Coordinator (DTC): Physical Disk, Network Name
File Share: Physical Disk, Network Name
Generic Application: No dependency resource
Generic Service: No dependency resource
IIS: IP Address, Physical Disk, Network Name
IP Address: No dependency resource
Network Name: IP Address
Physical Disk: No dependency resource
Print Spooler: Physical Disk, Network Name
WINS: Physical Disk, IP Address, Network Name

One other thing you should consider when implementing resource groups and resource dependencies is the preferred nodes list. This list is used to determine the primary node of the cluster on which the resource would normally operate.


Cluster Resource Groups

As you learned earlier in the chapter, a resource is a logical or physical component within the cluster. A resource can be brought online and taken offline and is managed by the cluster. Now we take the concept of resources a step further by introducing resource groups. A resource group is a logical collection of resources. They are logical in the sense that they can include both software and associated hardware.

Microsoft Exam Objective: Design Cluster service cluster solutions to improve fault tolerance. Considerations include cluster resource groups.

A resource group defines which resources fail over together as a unit. When one resource group fails and is moved to another node on the cluster, all of the resources in that group must also fail over to the same node. A resource group can only be owned by one node of the cluster at a time. This ensures that all group members are located on the same server or node.

Proper planning of resources and resource groups is integral when implementing a cluster. Keep in mind that the configuration and implementation of the cluster’s resource groups will have a direct effect on cluster operations. The following steps should be taken first to ensure a successful implementation:

1. Make a list of all applications that you plan to run on the cluster. Include every application, even if you do not plan to use it with the cluster server.

2. Determine which applications in the list have the capability to fail over and which ones don’t. (There’s more on failover later in this chapter.)

3. Determine what other resources on your system could benefit from being clustered. These might include printers, the operating system, or the network components.

4. Create a dependency list for each resource. A resource’s dependencies must be in the same resource group.

5. Once you have created your dependency list, you’re ready for the next step: making a decision on how to group your resources. When grouping resources, keep in mind that a resource and all of its dependencies must be placed into a single group (a sketch of this grouping rule follows the list).

6. The final step is to name each group and create a dependency tree for each one. The dependency tree provides a visual representation of the resource and the resource groups required for its operation in the event of a failover.
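The grouping rule in step 5 can be sketched as a connected-components computation: any two resources joined by a dependency must land in the same group. This is an illustrative Python sketch with resource names borrowed from this chapter, not a tool the Cluster service provides:

    def group_resources(dependencies):
        """dependencies: {resource: [resources it depends on]}.
        Returns sets of resources that must share a resource group,
        since a resource and its dependencies cannot be split up."""
        # Build an undirected adjacency map from the dependency pairs.
        adjacency = {}
        for res, deps in dependencies.items():
            adjacency.setdefault(res, set())
            for dep in deps:
                adjacency.setdefault(dep, set())
                adjacency[res].add(dep)
                adjacency[dep].add(res)
        groups, seen = [], set()
        for start in adjacency:
            if start in seen:
                continue
            group, stack = set(), [start]
            while stack:  # walk everything reachable from this resource
                res = stack.pop()
                if res not in group:
                    group.add(res)
                    stack.extend(adjacency[res] - group)
            seen |= group
            groups.append(group)
        return groups

    deps = {"IIS Virtual Root": ["IP Address", "Physical Disk"],
            "Network Name": ["IP Address"],
            "Print Spooler": ["Physical Disk", "Network Name"]}
    print(group_resources(deps))  # one group: all five resources connect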

Creating a Resource Dependency Tree

One of the critical steps in creating resource groups is to create a dependency tree. A dependency tree shows the relationship of all resources in a specific group. Figure 3.5 shows the dependency scheme for a set of cluster resources. Here are a few rules to follow when creating your dependency tree:

- You need not include the cluster name and the cluster IP address in the dependency tree. The Cluster service creates these automatically during installation.
- Draw lines to link resources to others on which they are dependent.
- All resources in the dependency tree must be online on the same cluster node.
- A resource can be online or active on only one node at a time.
- All resources in a dependency tree can belong to only one resource group.
- Resources that are dependent on other resources can only be brought online when the supporting resources are online.
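The last rule implies that a valid bring-online order is simply a topological sort of the dependency tree. Here is a minimal sketch using Python’s standard graphlib module (Python 3.9 or later), with dependencies borrowed from Table 3.2:

    from graphlib import TopologicalSorter  # standard library, Python 3.9+

    # Map each resource to the resources it depends on (per Table 3.2).
    depends_on = {
        "IP Address": [],
        "Physical Disk": [],
        "Network Name": ["IP Address"],
        "File Share": ["Physical Disk", "Network Name"],
    }

    # TopologicalSorter yields prerequisites first, so this is a valid
    # bring-online order; reversing it gives a valid take-offline order.
    order = list(TopologicalSorter(depends_on).static_order())
    print(order)  # e.g. ['IP Address', 'Physical Disk', 'Network Name', 'File Share']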


FIGURE 3.5 Dependency scheme for cluster resources (DTC, MSMQ, Print Spooler, File Share, Generic Service, Generic Application, and IIS Virtual Root resources linked to Network Name, IP Address, and Physical Disk; some links are required, others typical but not required)

Three concepts that are important for your understanding of resource groups are virtual servers, failover, and failback. In the next sections we will look at each of these.

Virtual Servers

Clients connect to applications and services on the cluster through virtual servers. A virtual server is a group of resources that provide a service. This group has its own network name and IP address. When a virtual server fails over, all of the resources of the group fail over as a unit. When a client connects to a virtual server, the client perceives a single, physical server. Unbeknownst to the user, this virtual server can be hosted by any node of the cluster.

A virtual server is not associated with a specific physical server on the cluster. During a failover (discussed in the next section), the virtual server can be moved from one node to another just like any cluster resource. When the virtual server is moved or taken offline dynamically, it does not affect any other virtual servers or resources on the cluster.


You create a virtual server with the Cluster Administrator by binding a Network Name resource to an IP Address resource. The Network Name becomes the virtual server name that clients use to access the resources. If the node housing the virtual server fails, clients can still access the virtual server using the same name. They will be redirected to the node to which the virtual server failed over. All of this is transparent to the user. To create a virtual server on your cluster, follow these steps:

1. Start the Cluster Administrator (CluAdmin.exe), shown in Figure 3.6.

FIGURE 3.6 The MSCS Cluster Administrator

2. Right-click the cluster name and select Configure Application. This starts the Cluster Application Wizard. Click Next to continue.

3. Select Create A New Virtual Server.

4. Select Create A New Resource Group.

5. In the Name box, enter a name for the new group and click Next.

6. In the Network Name box, enter a unique NetBIOS name, and enter an IP address in the IP Address box. Click Next to continue.

7. In the Advanced Properties for the New Virtual Server dialog box, click Next.

8. Select the option “No, I’ll create a cluster resource for my application later” and click Next.

9. Click the Finish button.

10. Right-click the newly created group and select Bring Online.

11. Test the connectivity to the virtual server. Click Start > Run, and type \\%virtualServer%

Failover and Failback

The private network or interconnect is used by the Cluster service to detect resource failures. The Cluster service checks resources at specified intervals to see if they are operational. It does this through the Resource Monitor, which in turn relies on the Resource DLL. This section discusses the concepts of failover and failback in a clustered environment. We will explain each arrangement and how you should configure your cluster to prepare for them.

Microsoft Exam Objective: Design Cluster service cluster solutions to improve fault tolerance. Considerations include a failover and failback strategy.

Failover

If an individual application on the cluster fails, the Cluster service will attempt to restart the application on the same node on which it failed. If the application can’t be restarted, Cluster service will attempt to move the application’s resources to another node and restart the application there. In the world of clustering, this is called a failover.

The failover process involves taking cluster resources, either individually or in groups, offline on one node and bringing them back online on another node. The administrator can set resource dependencies for the application and can choose whether or not to restart the applications through failover policies. These settings are configured using the Cluster Administrator. The Cluster service will attempt a failover for resource groups if any of the following conditions exist:

- The node on which the group is hosted fails.
- A resource in the group fails and its failure affects the other resources in the group.
- An administrator forces the failover from one node to another.

The following steps occur during the process of failing over resources or resource groups:

- The Cluster service attempts to take each resource in the group offline in relation to its order in the group dependency hierarchy.
- The Cluster service attempts to take the resource offline by invoking the Resource DLL for the resource. This is done through the Resource Monitor.
- If the resource does not go offline after a specified amount of time, the Cluster service forcefully takes the resource offline.
- Once the resource is offline, the Cluster service moves the resource to the node that has been specified in the resource’s list of preferred host nodes.
- After the resource group is successfully moved to the other node, the Cluster service brings all the group’s resources online. The failover is considered complete when all the resources are brought back online.
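The sequence above can be sketched as a small driver function. Everything here is illustrative rather than the real API: take_offline and bring_online stand in for the Resource Monitor calls, and the timeout models the forced-offline step:

    def fail_over(group, to_node, take_offline, bring_online, timeout_s=30):
        """Illustrative failover of one resource group (not the real API)."""
        # 1. Take resources offline, dependents before their dependencies.
        for resource in reversed(group["online_order"]):
            if not take_offline(resource, timeout_s):
                print(f"forcing {resource} offline after {timeout_s}s")
        # 2. Move ownership of the group to the chosen surviving node.
        group["owner"] = to_node
        # 3. Bring resources back online, dependencies first.
        for resource in group["online_order"]:
            bring_online(resource)
        return group["owner"]

    group = {"owner": "NodeA",
             "online_order": ["IP Address", "Network Name", "File Share"]}
    owner = fail_over(group, "NodeB",
                      take_offline=lambda r, t: True,  # stub: always succeeds
                      bring_online=lambda r: print("online:", r))
    print("group now owned by", owner)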

The Cluster service continues to attempt to fail over a resource group for a specified number of failover attempts as defined in the group’s failover policy. If failover is not successful within the time specified in the failover policy, the Cluster service stops trying to bring the resource group online and stops the failover. Failover can be initiated either automatically by the Cluster service or manually through the Cluster Administrator. Keep in mind that both processes are basically the same.


An automatic failover forces the application to be shut down, whereas a manual failover gracefully shuts down the application. Thus data may be lost during an automatic shutdown if the application was not closed properly before it was terminated.

Failover Policies

Failover policies are used to determine the behavior of resource groups when an automatic or manual failover occurs. These policies are configured individually per group in the cluster through the properties of the resource group. This is done in the Failover tab of the resource group’s Properties dialog box. Three failover settings are available in the Windows 2000 Advanced Server Cluster service: failover timing, preferred node, and failback timing.

Failover Timing With the Failover Timing settings, the administrator can set the group for immediate failover in the event of a resource failure. Another option within this setting lets you configure the Cluster service to attempt to restart the group a predefined number of times before the actual failover occurs.

Preferred Node This option lets you designate a preferred node on which the resource group will operate.

Failback Timing These options allow you to configure the group to fail back to the node designated in the Preferred Node option after the failed node has been restored and detected by the Cluster service. You can also configure the Cluster service to wait for a specific time of day before it attempts to fail back the group to the preferred node.

Failback

Failover is used by the Cluster service to fail over resource groups from an inactive node to an active node. When their original node becomes active again, the Cluster service will fail the resource groups back to that node and attempt to bring the resources back online there. This is called a failback. Resources are failed back to the original node in the same order that they were failed over. When a group fails over to another node in the cluster, the group will continue to operate on that node unless a failback policy has been implemented to fail it back to its original node.


A preferred owner must be assigned to a resource group before it can fail back to a recovered node. This owner is defined in the resource group’s properties.

Failback Policy

Failback policies are not enabled by default in the Windows 2000 Advanced Server Cluster service. You must enable them if you want the group to fail back after the failed node is restored and comes back online in the cluster. The Failback tab of the resource group’s Properties dialog box is used to set the failback policy for the group. Two options are available: Prevent Failback and Allow Failback. If Allow Failback is enabled, two options become available to specify when the failback occurs. Setting the Immediately option means the Cluster service will attempt to fail back the group when the preferred node becomes active. The Failback Between option is used to configure a time interval for the failback.

Failback will not occur unless a preferred owner is specified for the group.
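Put together, the failback decision described above reduces to a few conditions. The following is a minimal Python sketch, assuming whole-hour failback windows and invented parameter names:

    def should_fail_back(allow_failback, preferred_owner, preferred_node_up,
                         immediately, window, hour_now):
        """Illustrative failback decision; times are whole hours (0-23)."""
        if not allow_failback or preferred_owner is None:
            return False  # Prevent Failback, or no preferred owner defined
        if not preferred_node_up:
            return False  # the recovered node must be back in the cluster
        if immediately:
            return True   # fail back as soon as the preferred node is up
        start, end = window  # the Failback Between interval, e.g. (1, 5)
        return start <= hour_now < end

    # Preferred node recovered at 03:00; failback allowed between 01:00-05:00:
    print(should_fail_back(True, "NodeA", True, False, (1, 5), 3))  # True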

Designing a Highly Available Solution for Web Servers

As the IT manager for your company, you understand the necessity of keeping your web farm operating at its peak. In today’s business-oriented society, customers want the best possible service from both online entities and traditional brick-and-mortar companies. A significant challenge confronting modern IT managers is to design an e-commerce system that is highly available and capable of meeting the needs of demanding customers. E-commerce is a growing and exacting marketplace. If your e-commerce site cannot provide the services required by its customers, quickly and reliably, those customers can easily find one that will.

One of the jobs of IT is to create sites that are not only robust, highly available, and fault tolerant, but also scalable: open to upgrading and expansion as needed. As your customer base expands, you want the capability to dynamically allocate additional web resources to meet the changing demand. Always keep this in mind when designing or upgrading your web solution.

Front-end systems can be designed with web servers that are configured using load balancing, to provide client access via a single IP address or DNS name. When your customer base grows, the additional load generated by increased web traffic can be distributed equally across all servers on the front end. This ensures that all of those servers are being utilized, rather than one server getting hit with excessive network traffic.

The back-end tier of your web farm can consist of database servers that provide content for the web servers on the front end. These supporting servers supply the mission-critical content that must always be available for customer requests. You never want any customer to be denied the information they need because a back-end database server is down or unavailable to service the front-line web servers. To maintain such a high level of availability, you as an IT manager must design your back-end tier to ensure continual accessibility. Implementing a clustering solution with failover, to back up those nodes in the event of hardware or software failure, is one way to ensure that accessibility.
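As a rough illustration of the front-end load-distribution idea, here is a toy Python dispatcher. It is purely illustrative: Windows 2000 Network Load Balancing distributes traffic with its own filtering algorithm, not application-level round-robin, and the server names are invented.

```python
from itertools import cycle

# Toy round-robin dispatcher: client requests addressed to one logical
# name are spread across several front-end web servers in turn.
front_end = cycle(["web1", "web2", "web3", "web4"])  # hypothetical names

for request_id in range(8):
    print(f"request {request_id} -> {next(front_end)}")
```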

Cluster Application Types

When deciding which applications to place on your cluster, consider that two types of applications are supported by Microsoft Cluster service: cluster-aware applications and cluster-unaware applications.



Microsoft Exam Objective

Design Cluster service cluster solutions to improve fault tolerance. Considerations include application types.

Cluster-Aware Applications

Cluster-aware applications can take full advantage of the features of Cluster service. One of the principal features of these applications is their ability to fail over to another node in the event of a node failure. Cluster-aware applications use their own specific Resource DLLs, rather than the default Resource DLL provided through the Cluster service. These applications also use the Cluster API to update information for the Cluster service and request information from it. There are two categories of cluster-aware applications:

Cluster Management Applications Cluster management applications do not run a resource on the cluster, but rather interact with the cluster. The Cluster Administrator tool is an example of a cluster management application. Cluster-aware management applications are developed with three areas of functionality in mind. First, they must have the capability to receive cluster events. Second, they must be able to access cluster-related information. Third, they must be able to change cluster information. An application developed to meet all three of these requirements is considered to be cluster aware. Cluster-aware applications use the Cluster API to provide these three areas of functionality.

Cluster Resource Applications The second type of cluster-aware application is developed to take full advantage of the features of Cluster service. These are the cluster resource applications. They're designed to be managed by cluster-aware management applications such as the Cluster Administrator. An example of this type of application is Exchange 2000, which would be configured and managed on the cluster as a resource via the Cluster Administrator.

Remember that there are two types of cluster-aware applications: those that interact with the cluster, such as the Cluster Administrator, and those that are managed as a cluster resource, such as Exchange 2000 or SQL 2000.

Cluster-Unaware Applications

A cluster-unaware application does not use its own Resource DLLs. It uses the default Resource DLL that is provided with the Cluster service. These applications do not use the Cluster API and are not able to benefit from the functionality provided by Cluster service. You can, however, provide some rudimentary cluster functionality with cluster-unaware applications. As an administrator, you can configure a cluster-unaware application to interact with the cluster on the most fundamental level, such as configuring the application to fail over after a node failure. This is done by configuring the cluster-unaware application to use either the Generic Application resource or the Generic Service resource. An alternative to using one of the generic resources is to have a custom resource developed for the application.
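Conceptually, the Generic Application resource wraps an ordinary process in watchdog behavior: restart it locally a configured number of times, then let the group fail over. The following Python sketch only simulates that behavior; it is not the actual resource DLL, and the command and restart limit are invented for illustration.

```python
import subprocess

MAX_RESTARTS = 3  # illustrative restart limit before escalating

def supervise(command: list[str]) -> str:
    """Restart a cluster-unaware process a few times; then escalate.

    This only simulates what the Generic Application resource does:
    the real Cluster service monitors the process and triggers a
    group failover when local restarts are exhausted.
    """
    for attempt in range(MAX_RESTARTS):
        exit_code = subprocess.call(command)
        if exit_code == 0:
            return "process exited cleanly"
        print(f"attempt {attempt + 1} failed (exit code {exit_code})")
    return "escalate: fail the group over to another node"

# Hypothetical usage (the executable name is invented):
# print(supervise(["legacy_app.exe", "/config", "app.ini"]))
```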

Planning Your Server Cluster

Windows 2000 Advanced Server provides users with the foundation for an integrated clustering solution that will meet current and future business needs. Implementing a successful, highly available web solution involves proper planning in many areas. The next several sections look at some of the steps that should be taken when planning your cluster environment.

Recognize and identify network risks. Single points of failure in your network infrastructure should be identified so that risks can be minimized and service can continue in the event of a failure.

Select the applications to run on the cluster. Not all applications can run in a clustered environment, and you should identify those applications that can be configured to fail over in the cluster. Check the following criteria to determine an application's suitability for use in the cluster environment:

- The application must use TCP/IP or DCOM. Applications that use only NetBEUI or IPX cannot take advantage of failover.

- The application must be able to store its data in a configurable location, such as a shared disk.
- The application must support NTLM authentication. Kerberos authentication is not supported when using virtual servers.

Select a domain model. Servers that serve as nodes in a cluster must run either Windows 2000 Advanced Server or Windows 2000 Datacenter. They can be either domain controllers or member servers, and all nodes in the cluster must be members of the same domain. If you decide to use member servers to implement your cluster, bear in mind that cluster availability then depends on the availability of a domain controller for the domain.

Domainlets

Before a cluster can be implemented, it must be part of a domain. The servers that make up the nodes can be member servers or domain controllers and must all be in the same domain. When a server is configured as a domain controller, additional overhead is created by the processes required for its operation. To minimize this overhead, you can install the cluster in what's called a domainlet. This mini-domain provides the same functionality as a domain but with limited group policy and authentication capabilities. Domainlets are preferable to full domains for installing clusters.

Select a cluster model. Windows Cluster service provides several cluster models that can be tailored to meet your organization's specific requirements. Three models are available for design implementation: single node, active/passive, and active/active.

Plan your resource groups. Steps are provided earlier in this chapter (in "Cluster Resource Groups") for determining the resource groups that will need to be created to run on the cluster.

Determine failover and failback policies. Once a decision has been made about the resource groups to be used, a failover/failback policy must be implemented to determine what happens at node failure.

Plan fault-tolerant storage. The majority of resource groups in the cluster depend in one way or another on the shared cluster disks. A fault-tolerant disk storage solution should be implemented to ensure high availability of the data on the disks in the cluster.

Determine capacity requirements for the system. Perform an assessment to determine the specific hardware required to implement your highly available cluster solution.

Single Points of Failure

A key factor when designing your cluster solution is to identify single points of failure. Any single point of failure in the cluster can result in loss of data or client services. It's important to address the risks of failure from hardware, software, resource dependencies, telco lines, and even AC power. When designing a cluster, try to minimize any existing single points of failure and provide recovery mechanisms in case of failure. Windows 2000 Advanced Server and Windows 2000 Datacenter both have built-in features to protect various components of the system. You should examine all areas of your system that could be potential problem areas and points of failure. Table 3.3 identifies some of the areas that should be scrutinized for failure risk.

TABLE 3.3: Potential Points of Failure for Highly Available Web Solutions

Point of Failure                  Solution
Network hub                       Redundant networks
Cluster power supply              UPS (uninterruptible power supply)
Server connection                 Failover
Disk                              Hardware RAID
Server hardware                   Failover
Server memory                     ECC (Error Correction Code) memory
Cluster interconnects             Dual NICs per cluster node
Client-to-cluster connectivity    TCP/IP network with redundant routers
WAN links                         Redundant WAN links
Dial-up connection                Multiple modems
Data                              Frequent and regular backups

Implementation Models

Two models of implementation are used for installing clusters in today's enterprise environments: shared device and shared nothing. Let's look at the differences between the two.

Shared Device

In a cluster configured according to the shared device model, any software that is running on the cluster can access any resource that is connected to the system, regardless of which node the resource is located on. If two systems need to access data on the same disk, the data must be read twice, once for each system accessing it. When using the shared device model, the software must synchronize and serialize its access to the data, because conflicts can occur if both systems try to access the same data at the same time. A service called the Distributed Lock Manager (DLM) is used to track and manage access to the resource. One drawback of the shared device model is the performance hit the system takes from the additional network traffic created by the lock manager.

Shared Nothing

The shared nothing model is an alternative to the shared device model. Windows 2000 Advanced Server Cluster service is based on the shared nothing model. In this arrangement, only one node at a time can own and access a given resource. When a failure occurs, another node can take ownership of the resource. This provides a high level of availability.

Configuration Models

Microsoft has defined five deployment models that can be used in a Windows 2000 clustering environment to support various failover scenarios. We'll discuss each of the following:

- High-availability solution with static load balancing
- Hot spare
- Partial server cluster
- Virtual server only
- Hybrid solution

High-Availability Solution with Static Load Balancing

This cluster model (see Figure 3.7) consists of two nodes, each of which has its own set of resources configured as virtual servers. This model provides the highest level of availability and performance when both nodes are online and each virtual server is being detected and accessed by clients. If one node fails or has to be taken offline, the other node can assume responsibility for the virtual server on the failed node. A cluster running in this configuration still provides a high level of availability, but there is some degradation in performance. All clients are still able to access the virtual server with only a brief interruption in service during the failover process. Once the failed node is brought back online, all services return to their normal performance, with both nodes providing client services via their virtual servers.

[Figure 3.7: High-availability model. Two nodes share a disk array; each node runs its own file services group and can fail over the other node's group.]

Hot Spare

The hot spare cluster model (see Figure 3.8) consists of two nodes: one is designated as the primary, and the other serves as a dedicated "hot spare." When the primary node fails, the hot spare takes over and immediately starts servicing clients. This model provides a high level of availability and performance, but with the added cost of hardware that sits idle until a failover is needed.

[Figure 3.8: Hot spare model. A file services node on a shared disk array, with a second node standing by as failover for it.]

Partial Server Cluster

The partial-server model (see Figure 3.9) is a good one to implement when your cluster runs applications that are not cluster aware and cannot fail over in the event of node failure. With cluster-unaware applications, data for the applications is stored on the server's local disks, not the shared disks of the cluster. When failover occurs, only the cluster-aware applications are failed over to the secondary node. Applications that are cluster unaware remain unavailable until the failed node is restored. This model provides high availability for cluster-aware applications.

[Figure 3.9: Partial-server cluster model. Two nodes on a shared disk array; each node can fail over the other's file services group, while a virtual server on one node has no failover.]

Virtual Server Only

The virtual server model (see Figure 3.10) uses a single-node server cluster. A virtual server is configured on the node to provide services to clients. This model provides no failover or failback capabilities. It does provide the capability to restart applications on the virtual server when the server is restored after a failure, which is helpful when the applications housed on the virtual server are unable to restart themselves after a server failure. This model provides only a normal level of availability.

[Figure 3.10: Virtual server model. A single node hosting a virtual server for file, print, and web services.]

Hybrid Solution

The hybrid solution model (see Figure 3.11) is exactly that: a hybrid of the other models. Administrators can use this model as a template to create a cluster that incorporates the advantages of any of the other cluster models discussed here. The hybrid solution can provide high levels of availability, depending on the types of failover policies that are configured for it.

[Figure 3.11: Hybrid solution model. Two nodes on a shared disk array; each node can fail over the other's file services group, and one node also hosts a virtual server with no failover.]

Configuration Schemes

Microsoft has developed three cluster configurations that can be used when designing your cluster environment: the single-node server cluster configuration, the active/active configuration, and the active/passive configuration.



Microsoft Exam Objective

Design Cluster service cluster solutions to improve fault tolerance. Considerations include active/active and active/passive configurations.

Single-Node Cluster Configuration

A single-node cluster is merely a way of organizing resources on a server. This configuration is also known as the virtual server model. Failover and failback are not options when using a single-node cluster configuration. One advantage of the single-node configuration is that it can easily be clustered with other nodes at a later time.

Active/Active Configuration

In an active/active cluster implementation, each node in the cluster is capable of managing the resource groups on that node. Each node works independently of the others. If one node fails, the other can dynamically assume the role of the failed node.

In this type of configuration, an instance of the application is installed on each node as a virtual server. In the event of node failure, the remaining node of the cluster is responsible for the activities of its own virtual server and the virtual server of the failed node. When both nodes of an active/active cluster implementation are online, the system provides a high level of availability and performance. If for some reason only one node of the cluster is online, a high level of availability can still be achieved, but your applications will perform at only an acceptable level.

A good example of an active/active implementation is a cluster configuration serving as a print server. Both nodes of the cluster are configured with multiple print shares, each configured with its own separate resource group on each node. If one server or node fails, the other can take over the print shares for both servers. Once the preferred server is restored and brought back online, each server resumes responsibility for its own resource groups.

Active/Passive Configuration

In an active/passive cluster implementation, one node is designated as the primary owner of the resources. When a failover occurs, the resources fail over to the secondary node. When the primary node returns online, all of the resources fail back to the primary node. When using an active/passive configuration, you will always have hardware that is not being used most of the time: the secondary node sits idle until the primary node fails and all of the resource groups are failed over to it. You get a high level of both availability and performance with this configuration. What may be an issue, however, is the associated cost of standby, idle hardware.

An example of an active/passive cluster implementation is a web farm that uses a SQL Server database on the back end to service multiple web servers on the front end. If the SQL Server fails, its customer information cannot be accessed by the web servers. A secondary server with the exact specifications of the first SQL Server can be configured to take over operations in the event of the primary's failure.

Summary

In this chapter we looked at how Microsoft Cluster service (MSCS) can be used to implement a highly available web solution. Some of the benefits obtained by using MSCS include high availability of resources, scalability, and centralized management. We examined both the hardware and software components that are required for implementing this service. To implement Microsoft Cluster service, you must use either Windows 2000 Advanced Server or Windows 2000 Datacenter. Windows 2000 Advanced Server supports two-node clusters, and Windows 2000 Datacenter supports four-node clusters. The successful implementation of your cluster solution is directly dependent on the type of hardware used. Microsoft will not support hardware that is not on the Cluster HCL.

You also learned about resource groups and how they are used to provide services to clients. Windows 2000 divides resources into types and provides several standard ones that can be used for your implementation; some of the most common of these include the Physical Disk resource, the IP Address resource, the Quorum resource, and the Network Name resource. Resources can have resource dependencies, which are created when one resource depends on another resource in order to function.

You studied the failover strategies that can be used in a cluster, including how to implement failover and failback policies. The Cluster service detects resource failures, using heartbeat messages sent over the private network, and will attempt to move all resources from a failed node to one that is functioning. Failovers can be initiated automatically or manually. A failback occurs when the resources are failed back to their original node once that node becomes active again in the cluster.

You learned that clients access services and applications on the cluster through virtual servers, groups of cluster resources that together provide cluster access to clients. When a virtual server is failed over, all of its resources fail over.

Microsoft has provided several deployment models that can be used as the basis for designing your cluster environment. We looked at each model (high availability with static load balancing, hot spare, partial server, virtual server, and hybrid solution) and discussed the benefits and limitations of each. The next chapter discusses design of high-availability data storage and disaster recovery solutions.

Exam Essentials

Identify the advantages of using clustering for your web solution. By using Microsoft Cluster service, you can design a web solution that is highly available, scalable, and easy to manage.

Know what the quorum disk is and why it is important in a cluster. The quorum disk is the most important disk in the cluster because it contains the configuration information for the entire cluster. All nodes must be able to communicate with the quorum disk to participate in the cluster.

Understand the difference between the private and public networks of a cluster. The private network is used for internal node-to-node communications within the cluster, and the public network is used for client communications to the cluster.

Identify hardware requirements for implementing a cluster. You should only use hardware that is on the Hardware Compatibility List. Microsoft only supports clusters that are configured from equipment that has been tested and validated for clustering.

Key Terms

Before you take the exam, be certain you are familiar with the following terms:

active/active
active/passive
Cluster Administrator
Cluster API
Cluster Automation Server (CAS)
cluster database
cluster disk driver (Clusdisk.sys)
cluster hive
cluster network driver (Clusnet.sys)
cluster-aware
cluster-unaware
clustered disk
clusters
DHCP resource
Distributed Lock Manager (DLM)
domainlet
DTC resource
failback
failback policy
failover
failover policies
File Share resources
Generic Application resource
Generic Service resource
hot spare
IIS resource
interconnect
IP Address resource
Microsoft Cluster service (MSCS)
Network Name resource
node
Physical Disk resource
Print Spooler resource
quorum disk
Quorum resource
Resource API
resource dependency
Resource DLL
resource groups
Resource Monitors
resources
rolling upgrades
SAN (storage area network)
server cluster
shared device
shared nothing
single point of failure
virtual servers
WINS resource

Review Questions

1. Which of the following operating systems can be used to provide clustering services? Select all that apply.
A. Windows 2000 Professional
B. Windows 2000 Datacenter
C. Windows 2000 Server
D. Windows 2000 Advanced Server

2. What service allows the cluster administrator to upgrade the applications on the cluster while it is still online servicing client requests?
A. DHCP
B. Rolling upgrade
C. Message queuing
D. Cluster Administrator

3. What is the purpose of the quorum disk in a cluster?
A. It stores the cluster configuration information.
B. It accomplishes failover for the nodes in the cluster.
C. It provides communications between all nodes in the cluster.
D. It monitors the activity between the Cluster service and the Resource DLLs.

4. How many nodes or servers can be used in a cluster when Windows 2000 Advanced Server is the operating system?
A. Two
B. Four
C. Unlimited
D. Eight

5. Which of the following resources are used when configuring a virtual server? Select all that apply.
A. IIS
B. IP Address
C. Quorum Disk
D. Network Name

6. The cluster heartbeat provides what functionality?
A. It facilitates the automation of cluster management through scripting.
B. It stores the cluster configuration information.
C. It communicates with the Resource DLLs.
D. It is used to verify that all nodes on the cluster are online and operational.

7. Which of the following resources can be used to aid in implementing cluster-unaware applications on your cluster?
A. Quorum Disk
B. Physical Disk
C. Generic Application
D. File Share

8. What is the name of the process in which a virtual server is moved from one node of the cluster to another?
A. Failover
B. Failback
C. Resource sharing
D. Hot spare

9. Which of the following would be valid IP addresses for use with the cluster interconnect? Select all that apply.
A. 10.10.10.3
B. 172.16.10.1
C. 192.168.35.3
D. 208.185.160.10

10. In which of the cluster models is one node always in an idle state?
A. Hybrid
B. Partial-server cluster
C. Virtual server
D. Hot spare

Answers to Review Questions

1. B, D. Only Windows 2000 Advanced Server and Windows 2000 Datacenter can be used to provide clustering services. Advanced Server supports two-node clusters, and Datacenter supports four-node clusters. Neither Windows 2000 Professional nor Windows 2000 Server can be used for clustering.

2. B. Rolling upgrades enable an administrator to upgrade both software and hardware on a cluster while it still services client requests. DHCP, or Dynamic Host Configuration Protocol, is used to dynamically allocate IP addresses and additional host configuration information. Message queuing provides the communications infrastructure for building distributed applications. The Cluster Administrator is used to configure and manage clusters.

3. A. The quorum disk is the hardware component that stores the configuration information for the entire cluster. It is the most important disk in the cluster storage system and is owned by only one node in the cluster. All nodes in the cluster must be able to communicate with the quorum disk. Failover is the process by which resources are failed over from one node on a cluster to another. The private network or interconnect provides communication between all nodes in the cluster. The Resource Monitor monitors activity between Resource DLLs and the Cluster service.

4. A. Windows 2000 Advanced Server supports two-node clusters. Windows 2000 Datacenter supports four-node clusters.

5. B, D. The IP Address and Network Name resources are used when configuring a virtual server. If one of these resources fails, the virtual server fails.

6. D. The cluster heartbeat communicates with all nodes of the cluster to determine which ones are online and operational. The COM Automation interface provides the capability to automate cluster management through scripting. The quorum disk stores the configuration information for the cluster. The Resource Monitor communicates with Resource DLLs in the cluster.

7. C. The Generic Application resource can be used to make cluster-unaware applications available for use on the cluster. Certain minimal requirements must be met by these applications: they must be able to store data in a configurable location, they must use TCP/IP to connect to clients, and clients must be able to reconnect in the event of intermittent network failure.

8. A. Failover is the process where all resources are moved from one node on the cluster to another. Failover is initiated automatically by the Cluster service when a node failure is detected. Failback, then, is the movement of resources back to a failed node once it comes back online. A hot spare is a drive used to replace one that has failed in a disk subsystem. It's called a hot spare because the system need not be brought down for the replacement process; the replacement is automatically recognized and incorporated into the disk subsystem.

9. A, B, C, D. All of the addresses are valid IP addresses that can be used on the private network or interconnect. The first three fall within the private address ranges defined by RFC 1597; the fourth is a public address, which can still be used on the isolated interconnect, although private addressing is the usual choice.

10. D. In the hot spare cluster model, one node is always offline and waiting in idle mode for failover. The hybrid model is any combination of the other models. In the partial-server cluster model, resources remain offline until the original node is brought back online. The virtual server model is also known as a single-node cluster and provides no failover capability.

CASE STUDY: RexWare Utilities
Take a few minutes to look over the information presented for this case study, and then answer the questions at the end. In the testing room, you will have a limited amount of time for the case studies on the exam—it is important that you learn to pick out the important information and ignore the “fluff.”

Background

RexWare Utilities Corporation provides data query services to several large corporations. The company uses a data warehousing solution to gather sales and inventory information on customers; the information is then provided to RexWare's corporate clients. The main office is located in the southeastern region of the country, and there are multiple RexWare satellite offices located throughout the U.S. Back in 1996, all of the company's data was moved from a mainframe environment to a Windows NT environment. All of the customer records were migrated from the mainframe to databases on NT servers using SQL 6.5. Within the last year, the company has migrated its network to Windows 2000 and upgraded the SQL 6.5 database to SQL 7.0. You work for a consulting firm that has been hired by the IT manager at RexWare. Your main job will be to analyze the current system configuration and make recommendations on how the company can improve its network infrastructure.

CEO Statement

"We have two challenges that need to be met. First, we need to upgrade our current system so that it provides fast and reliable service. Second, we need an infrastructure that will meet our future business needs. We have to be able to scale our services as our needs warrant. Server downtime is not an option, because our bottom line depends on clients being able to access data on a timely basis."

Current Environment RexWare’s current system consists of several Windows 2000 computers (servers) that support customer access. One of these is designated as the database server for RexWare itself and houses the SQL 7 application. The other servers act as front-end web servers that provide data from the SQL database server.

All of the servers in RexWare’s network are currently running Windows 2000. There are no domain controllers, and each server is configured as a member server.

Business Requirements

RexWare is planning to expand and increase its current customer base. Senior management wants to make sure that the RexWare network will be able to handle the additional traffic that is expected to occur within the next two years. In addition, the company wants to guarantee the availability of customer data in the event any server needs to be taken down for maintenance. They want to make administration of the servers easier for the IT support staff. Due to the nature of its business, RexWare wants to ensure that its presence on the Internet is always visible and available. The firm wants to implement fault tolerance to make certain that customers are able to access the information they need whenever they need it. Server failure cannot be a reason for RexWare's clients' inability to access valuable data.

Funding

Funding is not an issue for this endeavor. The CFO has approved fund allocation for additional database servers for the back end of the network and six new web servers to service client requests on the front end.

Technical Requirements

RexWare wants to upgrade its SQL 7.0 server to SQL Server 2000 and add more database servers to meet an expected increase in business. They also want to take advantage of the new XML functionality that has been incorporated into SQL Server 2000. All Windows 2000 servers will need upgrading to Windows 2000 Advanced Server. Management is also looking at Windows 2000 Datacenter, which in the long term may prove to be a more viable solution for their expected future network infrastructure requirements.

Maintenance

In-house technical staff will do all maintenance and implementation. Several staff members currently hold Windows 2000 MCSE certification, and one is preparing to sit the exam for Microsoft Clustering Services.

Questions
1. What solution should you consider for improving the performance and availability of the RexWare website?
A. Deploy a shared Network Load Balance (NLB) cluster at RexWare.
B. Deploy clustering using Microsoft Cluster service.
C. Deploy additional SQL 7 database servers at RexWare.
D. Deploy an NLB cluster with a Windows 2000 Application Center server on the back end.

2. What is the first thing to be done to the servers at RexWare in order to support your proposed solution?
A. Upgrade all servers at RexWare to Windows 2000 Advanced Server.
B. Upgrade the database server from SQL 7 to SQL Server 2000.
C. Increase the number of database servers on the network from one to four.
D. Install Exchange 2000 on each web server.

3. What is the best configuration model for use at RexWare?
A. High availability
B. Hot spare
C. Virtual server
D. Partial-server cluster

4. What feature in Windows 2000 Advanced Server's Cluster service will ensure that clients can always access the SQL database server?
A. Microsoft Message Queuing
B. Failover
C. Indexing service
D. SNMP

5. Which cluster model should RexWare not consider for the upgrade to their network, because that model provides no means for recovery in the event of a server failure?
A. Hybrid
B. Partial-server cluster
C. High availability
D. Virtual server

6. How many nodes will be supported by the servers currently in use at RexWare when their solution is implemented?
A. Four
B. Two
C. Zero
D. Eight

7. What can RexWare implement to control the action of resource groups in response to a server failure?
A. Failover policy
B. Group policy
C. Data management
D. Disaster recovery

8. Specify the resource dependencies for each cluster resource listed below. Move the resource from the Resource Dependency list to the appropriate cluster resource that depends on it.

Cluster Resources: IP Address, Physical Disk, Virtual Server, File Share, IIS, Network Name, Print Spooler

Resource Dependencies: IP Address, Network Name, Physical Disk, No dependency resource

Answers
1. B. RexWare should employ a cluster solution using Microsoft Cluster service. This will provide both the required level of performance and the high availability needed to ensure clients' access to resources. As they increase the number of web servers on the front end of their network, RexWare could use a Network Load Balancing cluster to distribute client requests, but for their initial installation, Microsoft Cluster service will provide the functionality needed. RexWare doesn't want to deploy additional SQL 7 servers at this time; they want to upgrade the existing SQL 7 server to SQL Server 2000 and add SQL Server 2000 machines to form a cluster solution. Deploying NLB and Application Center is above and beyond their immediate needs.

2. A. The current servers at RexWare are all Windows 2000 member servers. You'll have to upgrade them to Windows 2000 Advanced Server in order to run MSCS. Windows 2000 Server does not support clustering; RexWare must use Windows 2000 Advanced Server or Windows 2000 Datacenter.

3. B. The hot spare model is the best solution at this time. RexWare could add an additional SQL server to form a SQL cluster. By implementing the hot spare model, they can be assured that there will be no interruption in service due to server failure. They may also want to first upgrade the SQL 7 server to SQL Server 2000 before adding the second SQL server to form the cluster.

4. B. The failover feature in Microsoft Cluster service will enable RexWare to provide its clients with access to the SQL server database in the event of a server failure. SQL will be configured as a resource on the cluster and, in the event of node or server failure, this resource will be moved to an active node in the cluster. Service to clients will not be interrupted during the failover process.

5. D. The virtual server configuration model is not a good solution because it has no failover capabilities. All of the other listed cluster models could be used by RexWare, but the best one for their environment is the hot spare model. This model provides a high level of availability and will ensure that clients are consistently serviced in the event of node failure.

6. C. RexWare is currently using Windows 2000 member servers, which do not support clustering. These machines must first be upgraded to Windows 2000 Advanced Server if RexWare wants to implement a two-node cluster. For a four-node cluster, they'll have to upgrade the servers to Windows 2000 Datacenter.

7. A. RexWare can configure a failover policy to determine how resource groups react in the event of a node failure. Failover can be configured to occur automatically at node failure, or it can be done manually. The Failover tab of the cluster resource group's properties is used to configure failover settings.

8. The correct dependencies are as follows:

IP Address: No dependency resource
Physical Disk: No dependency resource
Virtual Server: IP Address, Network Name
File Share: Network Name, IP Address, Physical Disk
IIS: Network Name, IP Address, Physical Disk
Network Name: IP Address, Physical Disk
Print Spooler: Physical Disk, Network Name

The IP Address and Physical Disk resources have no dependencies. The Virtual Server resource requires both the IP Address and Network Name resources. The File Share resource requires the Network Name, IP Address, and Physical Disk resources. The IIS resource is dependent on the IP Address, Network Name, and Physical Disk resources. The Network Name resource is dependent on both the Physical Disk and the IP Address. The Print Spooler resource requires both the Physical Disk and the Network Name.
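The dependency list in answer 8 implies a bring-online order: a resource cannot start until its dependencies are online. The short Python sketch below (purely illustrative; the Cluster service computes this ordering internally) derives one valid order from the table above.

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Dependencies from answer 8: each resource maps to the resources
# it depends on, which must therefore come online first.
dependencies = {
    "IP Address": set(),
    "Physical Disk": set(),
    "Network Name": {"IP Address", "Physical Disk"},
    "Virtual Server": {"IP Address", "Network Name"},
    "File Share": {"Network Name", "IP Address", "Physical Disk"},
    "IIS": {"Network Name", "IP Address", "Physical Disk"},
    "Print Spooler": {"Physical Disk", "Network Name"},
}

# One valid order in which the resources can be brought online.
print(list(TopologicalSorter(dependencies).static_order()))
```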

Chapter 4: Designing High-Availability Data Storage Solutions

MICROSOFT EXAM OBJECTIVES COVERED IN THIS CHAPTER:

- Design data storage for high availability. Considerations include RAID and storage area networks.
In the last chapter we examined how the Microsoft Cluster service can be used to implement a highly available web solution. In studying the hardware and software components that are required for implementing a cluster, you learned that redundancy is a key to providing a highly available network. You also learned that, when designing your network, you want to avoid having any single point of failure that could bring the system down. A system's ability to keep operating despite a failure is known as fault tolerance. A highly available web solution must be supported by a fault-tolerant system that can continue to operate in the event of either a hardware or software failure. Such fault tolerance calls for redundant components such as memory, network adapter cards, and disks. In this chapter we will look at the component of prime concern when creating a fully fault-tolerant system: the disk subsystem, or your data storage. We will examine the two primary methods of achieving fault-tolerant disk storage: RAID and SANs.

RAID

RAID stands for Redundant Array of Independent Disks. Before we start our discussion of RAID, let's review a few terms that you will need to know.

Array An array is a logical disk that consists of multiple hard disks.

Disk mirroring In disk mirroring, an exact replica of a disk or volume is kept on a separate disk. If one disk fails, the replica takes its place immediately.

Fault tolerance Fault tolerance is a system's or system component's capability to continue functioning normally in the event of a failure.

Parity Parity is a method of redundancy intended to protect against data corruption and loss.

Striping Striping involves taking data, dividing it into blocks, and then spreading these data blocks across several hard disks in an array.

RAID is one of the primary ways to provide disk fault tolerance for your highly available web solutions. In a RAID arrangement, data redundancy is achieved by writing data to multiple disks. In the event of a single hard-disk failure, data is still accessible from the other drives in the RAID set. RAID can be implemented as either a hardware or a software solution. Windows 2000 has inherent capability for implementing three types of software RAID solutions: RAID Level 0, which is striping; Level 1, which is mirroring; and Level 5, which is striping with parity. All are discussed here, in addition to the other RAID levels that are available.



Microsoft Exam Objective

Design data storage for high availability. Considerations include RAID.

Over the last couple of years, RAID has become an extremely popular choice for disk storage. It can be implemented in a variety of architectures to help keep the network available as close to 99.9999 percent of the time as possible. RAID provides improved performance as well as protection of critical data. These are key considerations when designing a highly available web solution.

RAID Levels

Let's first define what is meant by a RAID level. A RAID arrangement is used to store data and to supply various types of data protection. RAID levels define the type of data protection that is provided and the amount of fault tolerance available in the event of hardware failure. There are several RAID levels available, but only a select few provide fault tolerance and are options for use in highly available web solutions.

RAID 0

RAID 0, illustrated in Figure 4.1, is defined as simple disk striping; it is also known as disk striping without parity. In this RAID level, data is broken down into blocks called stripes, which are written sequentially to all drives in the array. RAID 0 provides no data redundancy or fault tolerance and is not a candidate for implementation when designing highly available web solutions. One advantage offered by RAID 0 is that its input/output throughput is significantly higher than that of the other RAID levels. But again, this level does not provide data redundancy, which is critical when designing RAID for your web solutions. In addition, RAID 0 provides no error recovery, because parity information is not stored on the drives. If one drive in the array fails, all data is unavailable and the only way to recover it is to restore from a system backup.

[Figure 4.1: RAID 0, simple disk striping. Data blocks 1 through 12 are written sequentially across the disks in the array.]
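To make the striping idea concrete, here is a minimal Python sketch of how a byte stream is divided into stripe-sized blocks and placed across the disks of an array. The 4KB stripe size and four-disk array are arbitrary illustrative values, not Windows 2000 defaults.

```python
STRIPE_SIZE = 4096  # bytes per stripe; an arbitrary illustrative value

def stripe(data: bytes, disk_count: int):
    """Split data into stripe-sized blocks, placed one disk after another."""
    disks = [[] for _ in range(disk_count)]
    blocks = [data[i:i + STRIPE_SIZE] for i in range(0, len(data), STRIPE_SIZE)]
    for index, block in enumerate(blocks):
        disks[index % disk_count].append(block)  # round-robin placement
    return disks

layout = stripe(bytes(20_000), disk_count=4)
print([len(d) for d in layout])  # blocks per disk: [2, 1, 1, 1]
```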

RAID 1

There are two types of RAID 1: disk mirroring and disk duplexing. Mirroring is achieved using a minimum of two disks, with each disk storing identical data. All data that is transferred to the controller is written to both disks in the mirror set. If there is a disk failure, data is not lost, because the mirrored drive contains identical data to the failed drive. This RAID level uses parallel access, which provides high data-transfer rates. It supports high availability, but at a cost: the hardware requirements are doubled in order to retain the mirror image of the data on the second drive.

The second type of RAID 1 uses disk duplexing. A RAID 1 array configured for disk duplexing uses two mirrored drives, as in mirroring, but with one exception: instead of sharing one controller, the second drive is attached to the bus via a second controller. This guards against not only disk failure but controller failure as well.

Windows 2000 uses the fault-tolerant driver Ftdisk.sys to write data to both volumes in a mirrored array simultaneously. A mirrored volume can hold the system or boot partition and must consist of Windows 2000 dynamic disks. Basic disks cannot be used for mirrored volumes in a disk array. Figure 4.2 shows a RAID 1 configuration with mirroring, and Figure 4.3 shows a RAID 1 configuration with disk duplexing.

[Figure 4.2: RAID 1 mirroring. A single controller writes each block to both disks in the mirror set.]

[Figure 4.3: RAID 1 disk duplexing. Two controllers, one per disk, each write the same blocks to their own disk.]
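The essential RAID 1 behavior can be sketched in a few lines of Python: every write goes to both members of the mirror set, so a read can be satisfied by either member after a disk failure. This is an illustrative toy model, not how Ftdisk.sys is implemented.

```python
class MirrorSet:
    """Toy RAID 1 mirror: identical data kept on two 'disks' (dicts)."""

    def __init__(self):
        self.disks = [{}, {}]          # block number -> data

    def write(self, block: int, data: bytes) -> None:
        for disk in self.disks:        # every write hits both members
            disk[block] = data

    def read(self, block: int) -> bytes:
        for disk in self.disks:        # any surviving member can answer
            if block in disk:
                return disk[block]
        raise IOError("block lost on all members")

mirror = MirrorSet()
mirror.write(0, b"payroll records")
mirror.disks[0].clear()                # simulate failure of one disk
print(mirror.read(0))                  # b'payroll records'
```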

RAID 2

RAID 2 (Figure 4.4) employs a minimum of two drives, with all data spread across each drive in the array. This RAID level provides a high data-transfer rate. RAID 2 is also called Hamming Code ECC; ECC stands for error checking and correcting. Each data byte is written to an ECC disk that is used on reads to verify that the data is correct. Single disk errors are corrected on the fly. This RAID level is very inefficient and has a high associated cost due to the extensive amount of error correction. RAID 2 was designed for use with hard drives that do not have built-in error correction. All SCSI (Small Computer System Interface) drives on the market today do have error correction capability, so this RAID level has little applicability and is rarely used. (SCSI is discussed later in the chapter in the section "Selecting Physical Interfaces.")

[Figure 4.4: RAID 2. Data bytes are striped across the data disks, with Hamming-code ECC information stored on dedicated ECC disks.]

RAID 3

RAID 3 requires a minimum of three disks to implement. Data is written, or "striped," to each data disk in the array. One disk in the array is used specifically to store the parity information that is generated on writes and checked on reads. This parity or ECC information is used to detect and correct errors on the drives. RAID 3 has very high read and write transfer rates. Also, with RAID 3 there is a low ratio of parity (ECC) disks to data disks, which greatly improves data transfer efficiency. RAID 3 arrays, illustrated in Figure 4.5, are easily scalable and are best suited for situations where large file transfers are required.

[Figure 4.5: RAID 3. Data is striped across the data disks, with a dedicated parity disk.]

RAID 4

RAID 4 (see Figure 4.6) is like RAID 3 except that data need not be written to all stripes in the array. As with RAID 3, a minimum of three disks is required for RAID 4, and one disk in the array is dedicated to parity information. RAID 4 has very good read-transfer rates, but it has the worst write-transfer rates among all RAID levels. Level 4 arrays are also difficult to rebuild in the event of a disk failure.

[Figure 4.6: RAID 4. Data blocks on the data disks, with a dedicated parity disk.]

RAID 5

RAID 5 is also known as disk striping with parity. Both the data and the parity information are distributed across multiple drives in the array. RAID 5 has a high data-read rate but a medium write rate. You must have a minimum of three disks in the array to implement RAID 5.

RAID 5 provides great efficiency because of its low ratio of parity (ECC) disks to data disks. In addition, this level offers the capability to hot swap, or easily replace, defective drives without having to restart the server. When the defective drive is replaced, it is automatically added to the array. Data is rebuilt on the newly installed drive with no loss of data or system functionality during the drive-rebuilding process.

Failure of a single drive in a RAID 5 array does not result in loss of the array, but a noticeable degradation in performance does occur. For a RAID 5 array to fail, at least two drives in the array must fail. This high level of redundancy is one of the reasons why RAID 5 is one of the most popular RAID levels used in industry today. Figure 4.7 shows a RAID 5 configuration.

[Figure 4.7: RAID 5. Data and parity blocks are distributed across all disks in the array.]

Parity

Parity in its simplest form is a way to protect against data corruption and loss. RAID levels that provide parity do so by storing an additional data block for every write made to the disk storage system. This data is used to re-create and reconstruct a disk volume in the event of a disk failure. Data redundancy is analogous to parity when discussing RAID systems.

Parity is used by RAID 5 in Windows 2000 to re-create a failed disk. RAID 5 volumes are called striped sets; they write stripe data and parity data across the disks in the striped set, with one block of parity data written per stripe. A RAID 5 array must have at least three disks to allow for the parity information. This parity information is distributed across all the disks in the array to help balance the input/output load on the array. Windows 2000 uses a Boolean operation called XOR (exclusive OR), applied to the data on the good disks, to re-create the data of a failed disk onto its replacement.

All of the RAID levels can be categorized by the type of parity they provide, as follows:

Nonparity RAID This category offers no data protection but very high performance. Included in this category is RAID level 0.

Mirrored Parity RAID This category offers the highest level of data protection and high performance. Included in this category are RAID level 1 mirroring, RAID level 1 duplexing, and RAID level 10 or 1E.

Distributed Parity RAID This category offers high performance and high data protection. It includes RAID levels 2, 3, 4, and 5.
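A small Python sketch makes the XOR mechanism concrete: the parity block is the byte-wise XOR of the data blocks in a stripe, so any single lost block can be rebuilt by XOR-ing the survivors. This illustrates the principle only; it is not Windows 2000 code.

```python
# Minimal sketch of XOR parity as used conceptually by RAID 5.
# Three "data disks" hold one block each; the parity block is the
# byte-wise XOR of all data blocks.

def xor_blocks(*blocks: bytes) -> bytes:
    """Byte-wise XOR of equal-length blocks."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            result[i] ^= b
    return bytes(result)

data = [b"AAAA", b"BBBB", b"CCCC"]   # blocks on three data disks
parity = xor_blocks(*data)           # block on the parity disk

# Simulate losing disk 1 and rebuilding it from the survivors + parity.
rebuilt = xor_blocks(data[0], data[2], parity)
assert rebuilt == data[1]
print("rebuilt block:", rebuilt)
```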

RAID 6

RAID 6 is like RAID 5 with one exception: RAID 6 includes a second set of parity information that is distributed across the drives. This provides a disk subsystem that is exceptionally fault tolerant at the drive level and can survive multiple drive failures in the array. In order for a RAID 6 array to fail, at least three drives must stop working. This is the perfect RAID scheme for implementing mission-critical applications, but it is complex and costly to put into operation. Figure 4.8 shows a RAID 6 configuration.

[Figure 4.8: RAID 6. Two independent sets of parity blocks are distributed across the drives.]

RAID 7

RAID 7 (see Figure 4.9) is a proprietary system that currently is offered by only one vendor. This system includes a fully implemented, process-oriented real-time operating system (OS) residing on an embedded array-control microprocessor. All reads and writes are centrally cached via a high-speed x-bus. This solution is serviceable only by the vendor and has a very high cost per megabyte of data.

[Figure 4.9: RAID 7. Data disks coordinated by an embedded real-time operating system with centralized caching.]

RAID 10

Level 10 RAID is also called RAID 1+0 because it combines the elements of both RAID 0 striping and RAID 1 mirroring. It's also known as RAID 1E (1 Extended). RAID 10 is implemented as a striped array with RAID 1 segments. It has the same level of fault tolerance as RAID 1. A minimum of four drives is required to implement RAID 10, and users will realize a performance boost over RAID 1. RAID 10 is very expensive to implement and has very limited scalability. It does have the advantage of being able to sustain, in certain situations, multiple hard-drive failures and still continue to operate at or near normal capability. Figure 4.10 shows a RAID 10 configuration.

[Figure 4.10: RAID 10, mirroring plus striping. Mirrored pairs of drives are combined into a striped array.]

RAID 0+1

RAID 0+1, shown in Figure 4.11, is known as mirrored drives in an array. With this RAID configuration, data is striped across the drives, and each stripe set is mirrored. Loss of a drive will break a mirror only for a single drive pair. This is an expensive solution to implement, but it does provide high performance in addition to fault tolerance.

[Figure 4.11: RAID 0+1. A stripe set of drives with a mirror of the entire set.]

RAID 53

A RAID 53 implementation involves striped arrays of RAID 3 disks. It has the same fault tolerance and overhead as a RAID 3 array. It is very expensive and requires a minimum of five hard disks. RAID 53 can be a consideration for implementation when you want the benefits of RAID 3 but need better performance than RAID 3 offers. Figure 4.12 shows a RAID 53 configuration.

[Figure 4.12: RAID 53. Data is striped across RAID 3 arrays, each with its own parity disk.]

Software vs. Hardware RAID

RAID can be implemented through either a software or a hardware solution. Several factors come into play when you're deciding on the solution to use for your highly available web solution. Software RAID solutions are operating-system dependent and consume host system memory and CPU cycles. They contend with other applications that are running on the server and can cause an overall degradation in system performance; a software RAID is heavily dependent on CPU performance and load. Hardware RAID does not depend on the host CPU, because the adapter's processor executes the array functions. Hardware RAIDs are not OS dependent and are highly fault tolerant. Let's examine each approach in more detail.

Software RAID

Software RAID is basically a core component of the OS that can be used to configure and administer a RAID array attached to the system. Software RAID runs on top of the OS and simulates a RAID controller. This allows RAID arrays to be created and assigned to the system without the cost associated with additional hardware. Windows 2000 supports three software implementations of RAID: RAID 0, 1, and 5. Only levels 1 and 5 are fault tolerant. Software RAID is cheaper to implement than hardware RAID, but the trade-off is less data availability and protection. A drive failure in a software RAID will result in downtime until the failed drive is replaced. In some situations, you may have to restore the failed drives from a previous backup before the system can be used.

Microsoft Cluster service (see Chapter 3) does not support software RAID implementations. You must use a hardware RAID solution if you plan to use clustering.

Hardware RAID Hardware RAIDs provide better performance and fault tolerance than software RAIDs. The disk controller interface used in the RAID arrangement handles the establishment and regeneration of the redundant or parity information. Hardware RAIDs offer additional functionality that is not available with software RAIDs, including hot spares, hot swapping of disks, and dedicated cache. The RAID level supported with a hardware implementation is determined by the manufacturer of the hardware. One thing to keep in mind when deciding whether to use a hardware RAID is that your implementation may be limited by the equipment options provided by the vendor or manufacturer. Hardware implementations are more expensive than software implementations but do provide better performance and fault tolerance. Both reads and writes are generally faster on a hardware implementation of RAID.

SAN

SAN, or storage area network, is the second solution for disk storage that we will examine for consideration in designing the highly available web environment. A SAN is an intricate network composed of one or more storage systems. It has the capability to transmit data at extremely high rates. Fibre Channel technology is the basis for most SANs in use today; you’ll find a discussion of Fibre Channel in the next section, “Selecting Physical Interfaces.”


Microsoft Exam Objective

Design data storage for high availability. Considerations include storage area networks.


SANs are typically used to connect existing nodes within a distributed computer system. All members of this system share a common administrative domain and usually are in close proximity. SANs are switched, and each hub on the SAN can normally support four to eight additional nodes. One of the benefits of using a SAN is that very large networks can be built using cascading hubs. Figure 4.13 shows a typical SAN configuration.

FIGURE 4.13 Typical SAN configuration [diagram: workstations and servers connected by Ethernet hubs; the servers, a two-node cluster, and a router attach through Fibre Channel switches to shared disk arrays and tape devices]

SANs use Fibre Channel technology to protect and distribute data. Extremely large amounts of data can be distributed to clients who may need to have access to it regardless of their geographical location within the network. The SAN architecture permits both mirrored and RAID disk configurations to be supported for data redundancy and protection. SANs have data transfer rates in the gigabit-per-second range, with very low error rates. Compare this with the 160MB/sec and 320MB/sec transfer rates realized when using the fastest SCSI interfaces. SANs eliminate the bandwidth bottlenecks and scalability limitations that are imposed by SCSI bus–based configurations. See “Selecting Physical Interfaces” later in this chapter. When deciding on the design for your high-availability solution, keep in mind that SANs are excellent for environments moving large amounts of


data among multiple servers. Because the data in the SAN is transferred at an extremely fast rate using the SAN Fibre Channel network, other network resources are freed for different transactions. The initial cost for implementing a SAN is substantial and may only be justified if your network has storage capacity requirements of 100–200GB. The SAN is not a cost-effective alternative for small networks but rather is designed for large-scale, fault-tolerant networks. Deploying a SAN can be an expensive endeavor, but over the long term the total cost of ownership (TCO) may prove to be more cost effective than a RAID implementation.

Hardware requirements for a SAN vary depending on the hardware vendor selected and the type of SAN implemented. Servers must be configured with Fibre Channel interfaces known as host bus adapters (HBAs); these are used to interface to the SAN. The storage components are the same as for any other LAN configuration and include disk arrays, tape drives, hubs, and switches.

SANs offer several benefits that may more than justify the initial costs for implementation:

High bandwidth SANs provide extremely high bandwidth. The basis for SAN technology is the Fibre Channel Arbitrated Loop, or FC-AL. The current FC-AL standard runs at 1Gbit/sec, and you can expect to see this increase in the near future to 2–4Gbit/sec. Currently, FC-AL provides a 2.5-to-10-fold increase in data bandwidth over standard SCSI interfaces.

Scalability SANs offer scalability by enabling the existing infrastructure to grow substantially as the demand warrants. A traditional SCSI bus configuration is limited to a total of 15 devices. FC-AL, in contrast, can support up to 126 nodes per loop. Multiple loops can be added, making the scalability of the SAN virtually limitless.

Modular connectivity The modular connectivity not possible through SCSI architecture is supported in a SAN. Modular network devices such as hubs, switches, and routers can be added to enhance the availability and capability of the SAN.

High availability and fault tolerance RAID is fully supported in a SAN environment. FC-AL has its own inherent fault-tolerance features such as dual porting. Dual porting has become a standard for FC-AL disk drives and is used to facilitate dual loop configurations. These dual loops create a redundant path for each storage device in the array in the event that one of the loops goes down or is busy.


Management SANs offer management capability that is normally provided with traditional LAN-management techniques. Administrators are able to monitor and control individual nodes, Fibre Channel loops, and storage and connectivity devices. By utilizing a SAN, the administrator has one centralized management location from which to monitor status and alerts of all network operations.

SANs: The Alternative to Traditional Storage Solutions Your company, Marcus Engineering, has experienced a remarkable amount of growth in the last couple of years. The CEO has approached you after a discussion among the management team in their last meeting. They are concerned about the storage capacity of the servers for the network, as well as the amount of information distributed among the network’s various servers and LANs. There is a wealth of knowledge available throughout the company but no centralized method for individuals to access that information. Currently your network consists of many distributed systems. Most of these systems are configured using RAID storage solutions that utilize SCSI technology. To continue competing in today’s marketplace, Marcus needs to be able to adjust to changing demands for information. As storage needs have increased, the management of individual servers has become costly and complex. The SCSI storage devices are currently offering poor resource utilization, because any excess capacity on one server cannot be shared when it may be needed on another server. Your research tells you that directly attached SCSI drives cannot handle the storage demands of today’s fast-growing enterprise networks. You know that using individual storage devices is not a cost-effective way to manage information access. Traditional SCSI storage environments limit a company’s ability to effectively share data among multiple network users and could be a factor in limiting growth.


You recognize that one solution for resolving the problem is implementation of a storage area network to consolidate all the corporation’s storage environments. A SAN will also maximize the efficiency and flexibility of your IT operations. SANs are for enterprise-class networks that are scalable to accommodate many storage devices. The storage capacity on each individual device can be shared by any number of servers on the network. You tell your management team that consolidating resources by implementing a SAN will create an infrastructure that enables connectivity among multiple distributed servers and storage devices. Capacity can be shared among all servers on the network and resource utilization will be greatly improved. Existing SCSI RAID configurations, including any clustered configurations, can be incorporated into the SAN infrastructure to accommodate additional users.

Selecting Physical Interfaces

In this section, we examine two interfaces for use with highly available web solutions: SCSI and Fibre Channel. These solutions can be used for both RAID and SAN, as mentioned earlier in the chapter. The physical interfaces between the host system and the arrays will have a direct impact on the performance of the RAID. RAID arrays can be connected using a variety of interfaces. These can include IDE, SCSI, Ultra-Wide SCSI, and Fibre Channel. The type of interface selected will depend on the type of RAID being implemented and the expected performance benefits. When determining the type of interface to use for your data storage solution, keep in mind that the number of drives on the bus must be in proportion to the throughput of the physical interface. You don’t want to put more drives on a bus than the number it was designed to handle; if you do, it may cause system performance degradation and affect the overall performance of the RAID. By the same token, you don’t want to underestimate the amount of drive utilization and underuse your available bandwidth. Remember that a SAN makes it possible to configure an existing array of storage units to provide high availability as well as fault tolerance. Even though the physical disk arrays in a SAN are connected to a single computer,


they can be shared among multiple servers in the SAN. SCSI and Fibre Channel are the two principal connection technologies in use.

A new technology, Internet SCSI (iSCSI), is slowly gaining popularity in the marketplace but will not be part of our discussion. We will focus on SCSI and Fibre Channel.

SCSI Controllers To design a highly available server, you should use a SCSI controller card to access the SCSI disks that hold the operating system, and the shared-data disk used for data storage. You might also consider a Fibre Channel host bus adapter, but this is a more expensive solution. SCSI RAID controllers can read and write multiple input/output requests from the host server to every drive on the SCSI bus at the same time. SCSI will support up to 15 devices interconnected to one SCSI bus. There is no shortage of SCSI controller manufacturers these days. Each controller offers its own advantages and disadvantages when used with RAID solutions. Look for SCSI controllers that have features that will enhance the functionality of your RAID. Capabilities should include the following:

 The ability to hot-swap drives in order to expand the capability of the RAID without having to take the server down.
 Hardware RAID support, which will result in increased input/output performance over a software RAID solution.
 The option to change RAID level without taking the system out of production. This is known as Online RAID Level Migration.
 Failover redundancy and prevention of cached-data loss in the event of a failover.
 Backup and storage of the SCSI controller card’s configuration on removable media.
 Prevention of loss of data that has not yet been committed to disk.


The most common SCSI standard interfaces are listed in Table 4.1.

TABLE 4.1 Standard SCSI Interfaces

SCSI Type            Bus Width (Bits)   Speed (MB/sec)
SCSI-1               8                  5
Fast SCSI            8                  10
Fast Wide SCSI       16                 20
Ultra SCSI           8                  20
Ultra-Wide SCSI      16                 40
Ultra2 SCSI (LVD)    8                  40
Wide-Ultra2 SCSI     16                 80
Wide-Ultra3 SCSI     16                 160
Ultra-320 SCSI       16                 320

Fibre Channel If you are looking for a high-speed connectivity solution for your RAID implementation, consider using a Fibre Channel bus instead of a SCSI controller bus. Fibre Channel Arbitrated Loop (FC-AL) is designed for applications that require higher performance capacity than can be provided through SCSI controllers. Applications in this group include online transaction processing (OLTP), data warehousing, and video and broadcast implementations. Bus lengths for Fibre Channel can reach up to 30 meters on copper cable and 10 kilometers on fiber-optic cable. SCSI buses are limited to a maximum of 15 interconnected devices, while a Fibre Channel Arbitrated Loop can support up to 126 devices per loop. When researching Fibre Channel bus adapters for use with your web solution, look for adapters that support redundant cards and switches.


Fibre Channel can be used to enhance existing SCSI-based configurations. An existing SCSI array can be optimized by linking a Fibre Channel environment to it via a SCSI bridge.

SCSI vs. Fibre Channel Following is a summary of the advantages and disadvantages of using SCSI and Fibre Channel interfaces. Keep these issues in mind when choosing an interface that will best serve your high-availability web solution.

Advantages of SCSI

 The initial and overall investment costs for SCSI hardware will be considerably less than for Fibre Channel.
 SCSI components can be shared between internal and external hardware platforms.
 Ultra-SCSI disks can deliver performance equivalent to Fibre Channel disks.
 SCSI is an industry standard that offers a high level of interconnectivity among other components on the system.

Disadvantages of SCSI

 SCSI has a distance limitation of 3.5 meters.
 Only a maximum of 15 devices can be connected to one SCSI controller.
 Data throughput is much slower than for Fibre Channel.
 SCSI controllers support only the SCSI protocol.

Advantages of Fibre Channel

 Fibre Channel was designed for bi-directional data transmission of 100MB per second.
 Hot-pluggable drive swapping is supported.
 Hundreds of devices can be connected using switch technology in a SAN environment.
 Fibre Channel supports many data-communications protocols, including FDDI, HIPPI, IPI-3, SCSI-3, Ethernet, Token Ring, and ATM.


Disadvantages of Fibre Channel

 Implementation requires extreme care.
 Fibre Channel is normally used only for external storage and not for the operating system.
 Fibre Channel is expensive and is not always cost effective.

Deciding on a Solution

In this chapter you have been introduced to several solutions that can be used when designing a system that provides fault-tolerant data storage. The first decision to be made is a determination of the type of protection you want to implement. Will a RAID solution meet your current and future projected needs, or should you research the possibility of implementing a SAN and linking it to your existing infrastructure?

If you decide to go with the RAID solution, what type of RAID should you implement? You’ll need to decide on either a hardware- or software-based RAID implementation. If you choose a software RAID and you are running Windows 2000, you’ll be limited to using either RAID 1 disk mirroring or RAID 5 disk striping with parity for your highly available web solution. Following is a discussion of the pros and cons of the various types of storage solutions that have been presented to you in this chapter.

Using a SAN Solution Implementing a SAN is a good solution if you need substantial storage capacity. Remember that the initial cost of implementation will be high, but the long-term total cost of ownership may more than justify this expense. If the cost of a SAN cannot be justified, then RAID is your solution.

Using a RAID Solution RAIDs can be implemented as hardware or software solutions. Windows 2000 does not support clustering in a software implementation of RAID. If your design includes clustering, a hardware RAID solution will be necessary.


Hardware RAIDs provide additional functionality over software RAIDs, including hot swapping of drives and better overall system performance. The software RAID implementation is cheaper in the short run but may be more costly in the long run. If a drive fails in a software RAID configuration, the system is down while you’re replacing the failed drive.

RAID 1 vs. RAID 5 RAID 1 and RAID 5 are the two types of fault-tolerant RAID configurations that are supported by Windows 2000. RAID 5 has better read performance than RAID 1 and a lower cost per megabyte of data. RAID 1 has better write performance than RAID 5 and can be used to protect the system or boot partitions. Here’s a summarized comparison of RAID 1 and RAID 5:

RAID 1

 Can be used on the system or boot partition
 Requires two disks
 Costs more than RAID 5 per megabyte of data
 Disk redundancy accounts for 50 percent of disk space
 Has better write performance than RAID 5
 Has good read performance
 Uses less system memory than RAID 5

RAID 5

 Cannot be used on the system or boot partition
 Requires a minimum of three disks and can go up to a maximum of 32
 Costs less than RAID 1 per megabyte of data
 Has only average write performance
 Has better read performance than RAID 1
 Uses more system memory than RAID 1
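The cost-per-megabyte difference follows directly from how each level spends disks on redundancy. The following sketch is our own illustration, not part of the exam material; the function name and error handling are assumptions made for clarity.

```python
# A sketch (ours) of usable capacity per RAID level, reflecting the
# redundancy overhead discussed in this chapter.

def usable_capacity_gb(level: int, disks: int, disk_gb: float) -> float:
    """Usable space for identical disks at a given RAID level."""
    if level == 0:                      # striping: no redundancy
        return disks * disk_gb
    if level == 1 and disks == 2:       # mirroring: half the space is redundant
        return disk_gb
    if level == 5 and disks >= 3:       # striping with parity: one disk's worth
        return (disks - 1) * disk_gb
    if level == 10 and disks >= 4 and disks % 2 == 0:
        return disks // 2 * disk_gb     # mirrored stripes: half the disks
    raise ValueError("unsupported level/disk combination in this sketch")

print(usable_capacity_gb(1, 2, 18))   # 18.0 -> 50 percent overhead
print(usable_capacity_gb(5, 4, 2))    # 6.0  -> the four 2GB disks from the review questions
```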


Summary

In this chapter we looked at how to enhance a highly available web solution by using fault-tolerant data storage solutions to protect our data. The two solutions available for Windows 2000 are RAID (Redundant Array of Independent Disks) and SAN (Storage Area Network).

RAID can be implemented as either a software or hardware solution. Software RAIDs are more taxing on the operating system and can cause performance degradation. Hardware RAIDs offer better performance than software RAIDs, as well as fault tolerance and hot swapping of disks. Various RAID levels are available. To provide a fault-tolerant data storage system under Windows 2000, you can use RAID 1 (disk mirroring) and RAID 5 (stripe set with parity). RAID level 0 is supported by Windows 2000 but is not a fault-tolerant solution and therefore cannot be used for your high-availability web solution.

SANs are the second way to provide a fault-tolerant storage solution for Windows 2000. SANs can be expensive to implement and may not prove cost effective unless your network must support high data-transfer rates and substantial disk storage of 100–200GB.

Two primary types of physical interfaces can be used for highly available web solutions: SCSI and Fibre Channel. Both are appropriate in RAID and SAN solutions. SCSI controllers cost less to implement than Fibre Channel and are an industry standard that provides a high level of system interoperability. They run at slower speeds than Fibre Channel and can only be used to a maximum cable length of 3.5 meters. Fibre Channel was designed for environments where fast data transmissions are required. It supports more data-communications protocols than SCSI and can connect up to 126 devices. Fibre Channel, due to its stringent hardware requirements, is more costly to implement than SCSI.

Storage area networks are a good solution when you need large storage capacity with high data-transmission speeds. SANs have a substantial cost factor. If the cost cannot be justified for your network, you may want to research a RAID solution.

Both RAID 1 and 5 provide fault tolerance in the highly available web solution. RAID 1 can be used on the system and boot partitions and has better write performance than RAID 5. RAID 1 requires two disks, and 50 percent of the disk space is used for redundancy. In contrast, RAID 5 cannot be used on the system or boot partitions and requires a minimum of three disks. RAID 5 does not have the write performance of RAID 1, but it does have better read performance.


Exam Essentials

Know which RAID levels are supported in Windows 2000. RAID 0, 1, and 5 are supported in Windows 2000. RAID 0 is known as disk striping because the data is striped across all the disks in the array. RAID 1 is known as disk mirroring; data is written to two disks simultaneously. If a second controller card is used in the system, RAID 1 is known as disk duplexing. RAID 5 is disk striping with parity. Both data and parity information are written across each disk in the array.

Know which RAID levels can be used in Windows 2000 for a highly available web solution. RAID 1 and RAID 5 can both be used to provide fault tolerance when designing a highly available web solution using Windows 2000. RAID 0 is supported by Windows 2000 but does not provide fault tolerance and is not a candidate for the highly available web solution.

Understand when a SAN is the best choice for a highly available web solution. A SAN (storage area network) comprises one or more storage systems and is capable of transmitting data at very high transfer rates. SANs are appropriate in environments that require the transfer of large amounts of data at very high speeds. A SAN can be incorporated into an existing network infrastructure, but implementation is costly.

Key Terms

Before you take the exam, be certain you are familiar with the following terms:

array
disk duplexing
disk mirroring
disk striping
disk striping with parity
dual porting
fault tolerance
Fibre Channel
Fibre Channel Arbitrated Loop (FC-AL)
hardware RAID
hot swap
parity
RAID (Redundant Array of Independent Disks)
RAID levels
SAN (storage area network)
SCSI (Small Computer System Interface)
software RAID
striping


Review Questions

1. Which of the following RAID levels can be used to provide fault tolerance in Windows 2000? Select all that apply.

A. RAID 1
B. RAID 0
C. RAID 5
D. RAID 2

2. What is the total number of devices that can be interconnected with one Fibre Channel Arbitrated Loop (FC-AL)?

A. 15
B. 126
C. 7
D. 100

3. What is the name of the process in which data is stored in blocks across disks in an array?

A. Parity
B. Striping
C. Threading
D. Fault tolerance

4. Which of the following are valid reasons for using a software RAID over a hardware RAID for your highly available web solution? Select all that apply.

A. Software RAIDs provide better performance than hardware RAIDs.
B. Software RAIDs support hot swapping of disks.
C. Software RAIDs are cheaper to implement than hardware RAIDs.
D. Software RAIDs support Microsoft Cluster service, while hardware RAIDs do not.


5. William is going to implement a RAID 5 array on his existing e-commerce server. He uses four disks in the array, each 2GB in size. How much available disk space will William have for data on his e-commerce server after he implements the RAID 5 array?

A. 4GB
B. 16GB
C. 6GB
D. 12GB

6. Kathleen wants to implement fault tolerance on her company’s web server. She has decided that she should probably use RAID 5, disk striping with parity, rather than disk mirroring (RAID 1). What is the minimum number of disk drives Kathleen will need for the disk array before she can implement RAID 5 on the web server?

A. 5
B. 2
C. 3
D. 10

7. Which of the following items are types of RAID 1 implementation? Choose all that apply.

A. Mirroring
B. Stripe sets
C. Duplexing
D. Stripe sets with parity

8. Storage area networks (SANs) make it possible to transmit an extremely large amount of data at rapid rates. The basis of most SAN networks is Fibre Channel technology. What feature of Fibre Channel supplies inherent fault tolerance by providing redundant paths for each storage device in the array?

A. Dual porting
B. SCSI
C. ATM
D. Fiber switch


9. Which RAID level combines the functionality of both mirroring and disk striping?

A. RAID 10
B. RAID 1
C. RAID 5
D. RAID 53

10. Jon wants to implement some type of RAID solution on his network and is considering software RAID because he is reluctant to buy additional equipment. Nonetheless, he is also looking at several manufacturers of hardware RAID solutions. One of the primary requirements for his storage solution is preservation of the system and boot partitions in the event of hardware failure. What type of RAID solution can he use to ensure that not only his data, but his system and boot information, is recoverable after a disk failure?

A. RAID 1
B. RAID 3
C. RAID 5
D. RAID 0+1


Answers to Review Questions

1. A, C. Windows 2000 supports fault tolerance with RAID levels 1 and 5. RAID 0 is disk striping, and striping by itself does not provide fault tolerance. If you need disk striping and fault tolerance, you can use RAID 5 (disk striping with parity).

2. B. A total of 126 devices can be interconnected via one Fibre Channel Arbitrated Loop (FC-AL). FC-AL provides great scalability and, as growth occurs, the network can be expanded by increasing the number of devices attached to the Fibre Channel Arbitrated Loop. FC-AL can support data-transmission rates of up to 100MB/sec and distances of up to 6 miles (10 km).

3. B. Striping is the process in which data is stored in blocks or “stripes” across all disks in the disk array; RAID 0 is also known as disk striping. Threading is the ability of an OS to execute different parts of a program, called threads, simultaneously. Fault tolerance is the ability of a system to respond gracefully to an unexpected hardware or software failure. There are many levels of fault tolerance, some of which are covered in this chapter. Parity is redundant information used to reconstruct data in a RAID 5 array after a disk failure.

4. C. Software RAIDs cost less to implement than hardware RAIDs. They do not provide better performance than hardware RAIDs, and they don’t support hot swapping of disks. Microsoft Cluster service is supported on hardware RAIDs but not on software RAIDs.

5. C. If William is using RAID 5 and each disk in the array is 2GB, the final size of the data drive will be 6GB. The other 2GB, or 25 percent of the disk space, is for the parity data that is used to reconstruct data in the event of a disk failure.

6. C. You need a minimum of three disks in order to implement RAID 5.

7. A, C. RAID 1 can be implemented as disk mirroring or disk duplexing. Disk duplexing is the same as disk mirroring, except that two controller cards are used instead of one.


8. A. Dual porting provides redundant paths for each storage device by providing dual loops. Dual porting has become an industry standard for Fibre Channel Arbitrated Loop disk drives and provides a backup route in the event that one of the loops fails or is busy.

9. A. RAID 10 provides the functionality of RAID 1 disk mirroring and RAID 0 disk striping.

10. A. Jon can implement a RAID 1 solution, which will give him fault tolerance and can be used on the system and boot partitions of the computer. RAID 1 is the only RAID level that can be used on the system and boot partitions.


CASE STUDY: Wesley Technologies

Take a few minutes to look over the information presented and then answer the questions at the end. In the testing room, you will have a limited amount of time—it is important that you learn to pick out the important information and ignore the “fluff.”

Background

Wesley Technologies is an information technology company that provides consulting services for various clients throughout the U.S. and in several countries in Asia. Wesley is experiencing explosive growth as a result of winning several substantial contracts.

CEO Statement “We want to be prepared to handle any business that these new contracts provide. Our internal infrastructure must be reexamined to ensure that we have the capability to meet our new and changing needs. The company needs to build a technology base and an underlying structure for meeting future business needs.”

Current Environment

Wesley Technologies currently has a network of four Windows NT servers, with 50 Windows NT Workstations. All the servers are configured as stand-alone servers. One of the servers houses a SQL 7.0 database. This database stores not only employee data but also important client contact information that is used daily by members of the marketing department. Several times during the preceding year, the server with the SQL database has crashed. Luckily, the company has been adamant about backing up all its servers, and has been able to restore the database several times using a DLT tape backup.

Business Requirements

Continuous high availability of critical resources—namely, the important customer data—has become essential at Wesley. System downtime is not an option, and something must be done to ensure that the infrastructure is able to meet the new demands anticipated as the result of the new contracts.

Funding Senior management at Wesley wants to keep funding for this project to a minimum. They want to implement a fault-tolerance solution within the approved budget criteria.


Technical Requirements

Wesley has relied on Windows NT Server 4.0 and SQL Server 7.0 for their server and database backbones, respectively. The organization wants to migrate all the NT 4.0 servers to Windows 2000, and the one database server to SQL Server 2000. They want to add an additional database server for use in failover clustering. They want to ensure that customer data is always available, even in the event of failure of critical system components. Their increasing business means they cannot arrange for any planned downtime within the coming months. They want to make sure that servers are always available to meet client needs. Implementing failover clustering will allow Wesley’s IT team to perform system maintenance on one of the SQL database servers while another server or node handles data requests. Any necessary database upgrades can be performed with minimal impact on network resources. This benefit can also ensure that system downtime due to normal maintenance is minimized.

Questions

1. IT management at Wesley will research the requirements for implementing failover clustering after the system upgrades are complete. (NT 4.0 servers will go up to Windows 2000, and the SQL 7.0 database will go up to SQL Server 2000.) What levels of RAID can Wesley use to provide fault tolerance for the database servers? (Choose two.)

A. RAID 1
B. RAID 2
C. RAID 3
D. RAID 4
E. RAID 5


2. Joseph, manager of IT operations, is unsure about which storage protection level he should implement on the upgraded database servers. He knows that disk mirroring provides fault tolerance, but disk striping provides faster read and write throughput. What is the only RAID level that combines the functionality of both disk mirroring and disk striping?

A. RAID 1
B. RAID 2
C. RAID 5
D. RAID 10

3. The Wesley management team wants to examine a comparison of the RAID levels that can be used in a RAID array. Match each item from the Characteristics list to its corresponding RAID level.

RAID Level: RAID 1, RAID 5

Characteristics:
Can be used for both the system and boot partitions
Requires two disks
Requires three disks
Has the highest cost per megabyte of data
Has the lowest cost per megabyte of data
50% of disk space is used for redundancy
One drive of the array is used for redundancy
Has excellent write performance
Has excellent read performance



4. Match the RAID levels with their descriptions.

RAID Levels: RAID 0, RAID 1, RAID 2, RAID 5, RAID 10

Descriptions:
Fault tolerant
Not fault tolerant
Hamming Code ECC
Disk striping
Disk striping with parity
Mirroring
Disk duplexing

5. If Wesley Technologies decides to implement RAID 5, what will be the total amount of disk space available for data storage if the array has a total of five 10GB disks?

A. 5GB
B. 20GB
C. 40GB
D. 50GB
E. 10GB

6. What benefits will be realized by Wesley Technologies if it decides to use a hardware RAID solution over a software RAID solution? Choose two.

A. Both reads and writes will be processed faster.
B. Hardware RAID is less expensive than software RAID.
C. Hot swapping of drives is an option for failed drives in the array.
D. Wesley will not be able to use a clustering solution if they go with a hardware RAID.


Answers

1. A, E. Wesley Technologies can use either RAID level 1, disk mirroring, or RAID level 5, disk striping with parity, to provide fault tolerance for their database servers. RAID 1 should be used on the disks that house the operating system. RAID 5 should be used on the disks that will store the SQL Server 2000 databases.

2. D. RAID 10 provides the functionality of RAID 1 disk mirroring and RAID 0 disk striping.

3.

RAID 1: Can be used for both the system and boot partitions; Requires two disks; Has the highest cost per megabyte of data; 50% of disk space is used for redundancy; Has excellent write performance

RAID 5: Requires three disks; Has the lowest cost per megabyte of data; One drive of the array is used for redundancy; Has excellent read performance

RAID 1 requires a minimum of two disks for its array. It can house the system and boot partitions. It has excellent write performance and average read performance. Fifty percent of the disk space in a RAID 1 is used for the mirrored set. RAID 5 requires a minimum of three disks for its array. It has a lower cost per megabyte than RAID 1, and one drive of the array is used for parity information. RAID 5 has excellent read performance and average write performance. Unlike RAID 1, a RAID 5 implementation cannot contain the system and boot partitions.



4.

RAID 0: Not fault tolerant; Disk striping
RAID 1: Fault tolerant; Mirroring; Disk duplexing
RAID 2: Not fault tolerant; Hamming Code ECC
RAID 5: Fault tolerant; Disk striping with parity
RAID 10: Fault tolerant; Disk striping; Mirroring

RAID 0 is also known as disk striping and provides no fault tolerance. There are two types of RAID 1—disk mirroring and disk duplexing; both are fault tolerant. RAID 2 is also known as Hamming Code ECC and is not a fault-tolerance solution. RAID 5 does provide fault tolerance and is known as disk striping with parity. RAID 10 is a combination of RAID 0 disk striping and RAID 1 disk mirroring. RAID 10 does provide fault tolerance.


5. C. If Wesley Technologies decides to implement a RAID 5 solution with an array of five 10GB disks, the total amount of usable space is 40GB. Ten GB, or 20 percent, would be used for parity data.

6. A, C. If Wesley Technologies chooses a hardware RAID 5 solution, the benefits will be faster disk reads and writes than with a software RAID, and the option of hot swapping failed drives.


Chapter 5

Designing a Highly Available Network

MICROSOFT EXAM OBJECTIVES COVERED IN THIS CHAPTER:

 Design a TCP/IP network infrastructure. Considerations include subnet addressing, DNS hierarchy and naming, DHCP server environment, and routed and switched environments.

 Design a highly available network topology. Considerations include redundant paths, redundant services, and redundant components.

 Plan server configurations. Considerations include network adapters, cluster communication, connectivity, and bandwidth.

 Analyze and design end-to-end bandwidth requirements throughout an n-tier environment.

 Design directory services. Considerations include Active Directory, LDAP, availability, authentication, and sizing.


The underlying network and services of a website infrastructure are important considerations in designing your site. The supporting network is just as critical as the web application itself. In this chapter, we’ll look at considerations for designing that supporting network. We start with the TCP/IP network infrastructure. Then we review network addressing and subnetting, and TCP/IP services such as DNS and DHCP. After that we move on to discuss redundant routers, switches, paths, and other redundant components. You’ll see ways to increase the reliability of server communications, cluster communications, and bandwidth. You’ll study the n-tier environment and the necessity of planning bandwidth requirements within each tier.

This chapter covers all the objective skills included under the exam category “Designing a Highly Available Network Infrastructure.” In addition, it covers the “Design directory services” skill from the “Planning Capacity Requirements” objective. The remaining skills from “Planning Capacity Requirements” are covered in Chapter 6.

Designing the TCP/IP Infrastructure

The logical component of a website network infrastructure is based on Transmission Control Protocol/Internet Protocol (TCP/IP). Just as the design of a highly available network consists of redundant physical components, the logical component, too, may require redundancy. Microsoft realized the importance of TCP/IP and made it the default network protocol


starting with Windows NT 4.0, and TCP/IP has become pervasive in a majority of networks. You’ll encounter TCP/IP questions on most if not all current Microsoft certification exams. Unless you are preparing for your first Microsoft exam, most of the material in this section will be familiar. Make sure you review it, though, to ensure that you understand the topic well. In this section, we cover the following topics: TCP/IP addressing, subnetting, Dynamic Host Configuration Protocol (DHCP), and Domain Name System (DNS).

TCP/IP Addressing and Subnetting There’s no shortage of books completely focused on TCP/IP addressing. This section serves as a quick review to help you be ready for the exam questions. Many factors affect the addressing scheme used in a network. Some of these factors are the number of hosts, the number of required physical and logical networks, redundancy, and the availability of public addresses.


Microsoft Exam Objective

Design a TCP/IP network infrastructure. Considerations include subnet addressing.

A TCP/IP host is identified by a logical address. In TCP/IP parlance, a host is anything that receives an address—it can be a computer, printer, router, switch, etc. A TCP/IP address is comparable to a street address that identifies a specific business location. Each TCP/IP address must be unique on the network. To be Internet accessible, each address must also be unique in the world, unless you are using network address translation (NAT). NAT is discussed later in this chapter and in Chapter 8. The original architects of TCP/IP defined addresses in five classes, identified A through E. (Classes D and E are special and are not discussed here.) A TCP/IP address is composed of four octets separated by decimals. Each class uses from one to three of the octets to identify the network, with the remaining octet(s) representing the host(s). Address classes A, B, and C identify network ranges as shown in Table 5.1.


TABLE 5.1 TCP/IP Address Classes

Class   Address Range                  Network Octet
A       1.0.0.0 to 126.255.255.255     1
B       128.0.0.0 to 191.255.255.255   2
C       192.0.0.0 to 223.255.255.255   3
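The class boundaries in Table 5.1 can be checked mechanically. Below is a minimal sketch of our own (the function name is an illustrative assumption, not part of the exam material):

```python
# A small illustration (ours) of the class boundaries in Table 5.1,
# keyed off the first octet of an IPv4 address.

def address_class(ip: str) -> str:
    """Classify an IPv4 address as class A, B, or C per Table 5.1."""
    first_octet = int(ip.split(".")[0])
    if 1 <= first_octet <= 126:
        return "A"
    if 128 <= first_octet <= 191:
        return "B"
    if 192 <= first_octet <= 223:
        return "C"
    return "reserved or special (loopback, class D/E, etc.)"

print(address_class("10.1.2.3"))      # A
print(address_class("172.16.0.9"))    # B
print(address_class("192.168.1.10"))  # C
```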

A couple of network addresses, while technically being class A addresses, are reserved and can’t be used for network addressing. The 0.0.0.0 address is used to designate a default route; and 127.0.0.0 is reserved for a loopback address, which is used to represent the local system. Within each class, there are also reserved host-level addresses. A host address, when represented in binary, cannot be all 0s or all 1s. The 0 address is used to identify the network itself, and 255 is used for network broadcasts. For example, the address 192.1.1.0 refers to the network 192.1.1. The address 192.1.1.255 is the broadcast address for the same network.

What we have really just described is the addressing precedent for predefined subnets. In other words, a class A address uses the first octet to identify the network, and the remaining three octets to identify the hosts on that network. A class C address uses the first three octets to identify the network, and the last one to identify the hosts. By default, a class A network allows for over 16 million hosts, and a class C allows for 254.

Subnetting is what you do when your network architecture doesn’t fit the defaults. Subnetting works by stealing bits from the host portion of an address, allowing you to divide the network into multiple subnetworks. Each subnet is identified by its own unique network address. The 32 bits of the address are divided between network addresses and host addresses. By default, a class A address uses 8 bits for the network address and 24 bits for the host address. These bits work left-to-right in binary. So if we are working with a single class A address space and we don’t want a 16-million host network, we might use 9 bits for the network and 23 bits for the host addresses. That would give us two networks of roughly 8 million hosts.

A subnet mask is simply the binary-to-decimal translation of the bits assigned to identify the network. In the subnet mask, a binary value of 1 indicates the network address, and 0 represents the hosts. So the default class A address has a subnet mask of 255.0.0.0


which, represented in binary, is

11111111.00000000.00000000.00000000

Using our example of the class A address with 9 bits for the network, we have 11111111.10000000.00000000.00000000, which in decimal is

255.128.0.0

That’s the concept, which is pretty straightforward. The actual process of determining appropriate subnet assignments is more complicated, however. The following sections discuss the three steps of subnetting:

1. Determine the number of host bits required.
2. Enumerate the network IDs.
3. Enumerate the TCP/IP address for each network ID.

Determine the Number of Host Bits Figuring out the number of host bits can be accomplished from two directions. One way is to determine the number of subnets required, and work from your total network address forward to the number of host bits. Conversely, you can determine the number of hosts required on each subnet, and work backward from your total network address. The more bits you use for the network, the more subnets allowed, but each subnet can have fewer hosts. Using fewer bits allows for more hosts but fewer subnets. Remember that all nodes in a single logical network reside in the same broadcast domain. Thus the easiest way to calculate the host bits is often to figure out the maximum number of nodes you will ever want on a single logical network. For example, let’s say your current design will call for 60 hosts, but you want to plan for doubling the infrastructure. That would mean you need to design for 120 addresses in the subnet. Using binary to represent 120, we get 1111000, which is 7 bits. So at a minimum you must allocate 7 bits to the host address. If you’re using a class C network address, that means you must allocate 25 bits to the network. This number is calculated by subtracting the 7 bits required for hosts from the 32 total bits in an address. Converting that back to decimal, the subnet mask would be 255.255.255.128.
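To make the arithmetic concrete, here is a minimal sketch of the same calculation using the standard library; the helper name subnet_mask_for is our own, not a standard API:

```python
# A minimal sketch (helper names are ours) of the host-bit calculation
# described above, using only the standard library.
import ipaddress

def subnet_mask_for(host_count: int) -> ipaddress.IPv4Address:
    """Derive a subnet mask from the number of host addresses needed."""
    host_bits = host_count.bit_length()   # 120 -> 7 bits, as in the text
    prefix = 32 - host_bits               # the remaining bits identify the network
    return ipaddress.ip_network(f"0.0.0.0/{prefix}").netmask

print(subnet_mask_for(120))   # 255.255.255.128 (a /25 network)
```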


Enumerate the Network IDs Your public network will require a public TCP/IP address scheme. The base number will be provided by your ISP, and you can further subnet this base network if necessary. On your private networks, it is recommended that you use addresses that are nonroutable across the Internet. Nonroutable addresses are not recognized on the public network. Most ISPs should have their routers configured to block private network addresses. RFC 1918 specifies address ranges that are reserved and available for use in any private addressing scheme. It is recommended that your internal networks use these address spaces, because growth of the internal tiers is limited only by application and server scalability, and not by address availability. Following are the RFC 1918 address ranges:

10.0.0.0 through 10.255.255.255
172.16.0.0 through 172.31.255.255
192.168.0.0 through 192.168.255.255

Once you have calculated the number of host bits, you can determine the new subnetted network IDs. This can be accomplished by listing all possible combinations of the relevant network bits. In the example above, in the host bits calculation, we identified the network using the 8th bit (working right-to-left) from the host bits. With only this 1 bit, there are only two possible options, 0 and 1.

Let’s look at a more complex example using the RFC 1918 reserved network of 192.168.1.0. Assume that we need only 40 nodes per subnet. Going through step 1 again (determining the number of host bits), we see that we need 6 bits to represent 40 as 101000. That leaves us 2 bits for subnetting, so our subnet options are as follows (look only at the 2 left bits):

Binary     Decimal   Network ID
00000000   0         192.168.1.0
01000000   64        192.168.1.64
10000000   128       192.168.1.128
11000000   192       192.168.1.192

We now have our network IDs.


In the list just above, the first and last subnets are invalid in a classful network. We will define classful networks, and their classless counterpart, shortly.
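The standard ipaddress module can reproduce this enumeration; the sketch below is ours, carving the /24 into /26 subnets (the 2 subnet bits used above):

```python
# A sketch reproducing the enumeration above with the standard library:
# 192.168.1.0/24 carved into four /26 subnets (2 subnet bits, 6 host bits).
import ipaddress

base = ipaddress.ip_network("192.168.1.0/24")
for subnet in base.subnets(new_prefix=26):
    # Under classful routing, the first and last of these would be unusable.
    print(subnet.network_address)
# 192.168.1.0, 192.168.1.64, 192.168.1.128, 192.168.1.192
```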

Enumerate TCP/IP Addresses for Each Subnetwork Enumerating the host addresses within each subnetwork is simple once you have the network IDs. The host addresses are simply the range of addresses between each network ID, less the addresses allocated to the network and broadcast IDs. The network ID is calculated in step 2 of the subnetting process and is always the base number. The broadcast ID is always the highest number in the range. So for our example, the valid addresses are as follows:

Network ID      Host Range            Broadcast Address
192.168.1.0     192.168.1.1–62        192.168.1.63
192.168.1.64    192.168.1.65–126      192.168.1.127
192.168.1.128   192.168.1.129–190     192.168.1.191
192.168.1.192   192.168.1.193–254     192.168.1.255
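The same module can also produce the host ranges and broadcast addresses; a short sketch of ours:

```python
# A sketch deriving each /26 subnet's usable host range and broadcast
# address, matching the table above.
import ipaddress

for subnet in ipaddress.ip_network("192.168.1.0/24").subnets(new_prefix=26):
    hosts = list(subnet.hosts())   # hosts() omits the network and broadcast IDs
    print(f"{subnet.network_address}  {hosts[0]}-{hosts[-1]}  "
          f"{subnet.broadcast_address}")
```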

Now that we know how to calculate subnet addresses, let’s take a look at the actual network design and why we want to logically subnet.

Subnetting the Network Security and network capacity are the controlling factors in determining the quantity of required subnets. In any web infrastructure, you must design with the objective of isolating and protecting critical data. Chapters 7 and 8 discuss firewalls and security in detail. Additionally, Chapter 6 discusses network bandwidth capacity. Between the demands of security and bandwidth, it’s a fair assumption to say that most website designs will involve segmenting the network. Start by grouping systems with similar security requirements onto the same network segment. Each segment should be separated by a firewall or a router with filters. At a minimum, you will likely have at least three subnets:

 A public network
 A perimeter network, sometimes called a DMZ
 A data network

Figure 5.1 shows the basic network design of this simple three-tier network.

FIGURE 5.1 A simple three-tier network [diagram: the Internet reaching a public network, a web farm on a DMZ, and database servers on a data network, with firewalls separating each tier]

Each network tier requires a dedicated network address and enough host addresses to accommodate the required hosts. Your network design may have additional subnet requirements based on security and capacity. For example, many infrastructures have an added management network. The management network allows for management traffic, such as backups or content deployment, without affecting the overall network performance and response time.

Classful and Classless Routing In the preceding sections, we’ve discussed the classes of TCP/IP addresses. Classful routing was used in the early days of the Internet. In classful routing, routers based their routing decisions on the concept of the class A, class B, or class C network addresses. Classful routing is limiting in certain subnetting situations. These conditions are known as the All-0s and the All-1s subnets. Basically, this means the subnetted network ID, when represented in binary, is either all 0s or all 1s. Giving an example is the easiest way to describe this. Let’s go back to the subnetting example we’ve been using:

Binary     Decimal   Network ID
00000000   0         192.168.1.0
01000000   64        192.168.1.64
10000000   128       192.168.1.128
11000000   192       192.168.1.192


You can see that the first network ID, 192.168.1.0, when represented in binary, is all 0s. The problem is that there is no binary difference between the full class C network of 192.168.1.0 and the subnetted network 192.168.1.0 on which to base routing decisions. Thus, in classful routing, you can use neither the first nor the last subnet. Classless Inter-Domain Routing (CIDR) was developed to solve this problem. CIDR (classless routing) works without the class designation and without the classful routing limitations. CIDR networks are represented by appending the characters /x to the end of the network address, where the x designates the number of bits in the address used by the network. (CIDR networks are described as “slash” networks because of the annotation used.) For example, the old classful addresses would be represented in CIDR format as follows:

Class   CIDR Format
A       10.0.0.0/8
B       172.1.0.0/16
C       192.168.1.0/24

With CIDR, networks are no longer limited to the sizes of class A, B, and C; a network can be a size in between. The issues of All-0s and All-1s disappear. Most current routers and operating systems support classless routing.
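For illustration, the sketch below (ours) parses a few “slash” networks, including an in-between size that the classful scheme could not express:

```python
# A sketch of CIDR "slash" notation: sizes between the old classes are
# legal, and mask and address count follow directly from the prefix.
import ipaddress

for cidr in ("10.0.0.0/8", "172.16.0.0/12", "192.168.1.128/25"):
    net = ipaddress.ip_network(cidr)
    print(net, net.netmask, net.num_addresses)
# 10.0.0.0/8 255.0.0.0 16777216
# 172.16.0.0/12 255.240.0.0 1048576
# 192.168.1.128/25 255.255.255.128 128
```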

On the test, most of the time you will need to search for clues about classful or CIDR networks. If you don’t find a clue, most likely the possible answers will not require the all 0s or all 1s networks.

Taking Advantage of Classless Routing As IT manager of a rapidly growing corporation, you are always looking for ways to improve your website infrastructure. The current system was designed and installed several years ago; it has a simple two-tier design and uses classful routing.


In the CTO’s discussions with you, she tells you that the company will soon be spending a significant amount of money redesigning the website and she needs your feedback on a few issues. The new website will be three tiers, utilizing n-tier technology. The CTO also wants to add a network management segment that is isolated from the rest of the infrastructure. The company’s ISP originally assigned it a class C network address, and to handle the original two tiers it was subnetted with a 255.255.255.192 subnet mask. She is concerned that not enough subnetworks will be available because of classful routing. You explain that by upgrading the old routers, your network will be able to support classless (CIDR) routing. CIDR will allow you to immediately recover two subnets: the All-0s and the All-1s networks. That will give you four subnets to work with based on your current subnet design. This meets the requirements of the three-tier network plus the new management network. CIDR is much more flexible than class-based routing. It also prevents the waste of valuable TCP/IP addresses to the All-0s and All-1s networks. Classless routing also allows the ISP to assign you TCP/IP addresses in the quantities your network requires, without regard to legacy class designations.

DHCP and Address Assignment In a large installation, managing the address assignments can be an administrative nightmare. Each system needs at least an address and subnet mask, and most will also need DNS and default gateway addresses. Each address must be unique, so there must be some method of tracking assignments. If a change is made to the network, such as replacement of a DNS server, that change needs to be made on each system. The Dynamic Host Configuration Protocol (DHCP) is a service that can be utilized to automate this process. Although DHCP may not be highly critical for the operation of your website, it is an important management tool useful in large infrastructures.


Microsoft Exam Objective

Design a TCP/IP network infrastructure. Considerations include DHCP server environment.


DHCP dynamically assigns TCP/IP configuration data to systems as they boot. This allows for centralization of address assignment, as well as easy change management. The DHCP server leases an address to a DHCP client for a specified lease period.

DHCP Lease Process The DHCP lease process has four steps, as shown in Figure 5.2, involving exchange of the following packets: DHCPDISCOVER, DHCPOFFER, DHCPREQUEST, and DHCPACK.

FIGURE 5.2 The DHCP lease process [diagram: a DHCP client and a DHCP server exchanging DHCPDISCOVER, DHCPOFFER, DHCPREQUEST, and DHCPACK packets in order]

The first step is a DHCPDISCOVER packet. When a client boots, a small version of TCP/IP is loaded and broadcasts the DHCPDISCOVER. This packet is used to find the location of a DHCP server. The broadcast is an important consideration in your network design. Broadcasts by default do not cross network boundaries, which can prevent a client from receiving an address. We’ll discuss remedies to this problem later.

The second step is a DHCPOFFER. After a DHCP server receives a DHCPDISCOVER packet, the server responds with a DHCPOFFER. This packet contains the client’s MAC address, the offered TCP/IP address information, and the address of the DHCP server. This packet is also a broadcast message, since the client hasn’t yet been configured with an address.

The next step is a DHCPREQUEST. When the client receives one (or more) DHCPOFFERs, it responds with another broadcast. This broadcast packet contains the IP address of the server of the selected DHCPOFFER. This allows other servers that have issued DHCPOFFERs to return the offered addresses to their pool for future leases.

The final step is DHCPACK. The server broadcasts a final packet, acknowledging the DHCPREQUEST. This packet of information also contains the lease duration.
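The exchange order is easy to keep straight as a table. The toy sketch below is ours; it performs no real networking and only summarizes the message sequence described above:

```python
# A toy sketch (ours) of the four-packet DHCP exchange described above;
# no real networking, just the message order and payload highlights.
DHCP_EXCHANGE = [
    ("client -> broadcast", "DHCPDISCOVER", "locate any DHCP server"),
    ("server -> broadcast", "DHCPOFFER",    "client MAC, offered address, server address"),
    ("client -> broadcast", "DHCPREQUEST",  "IP of the server whose offer was selected"),
    ("server -> broadcast", "DHCPACK",      "acknowledgment plus the lease duration"),
]
for direction, packet, contents in DHCP_EXCHANGE:
    print(f"{direction:20} {packet:13} {contents}")
```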


At this point, the client has received and acknowledged the address lease and may begin to use the address. Every time the client system boots, it attempts to contact the DHCP server and renew the address. If the client can’t contact the DHCP server and the lease time has not expired, the client will continue to use the address. The system also tries to renew the lease after 50 percent of the lease time has expired. This is an important consideration in your DHCP server implementation. If a server can’t contact a DHCP server to obtain an address, the server will not be able to communicate on the network. Later we’ll discuss how to build redundancy into your system to handle this situation.
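The renewal timing is easy to model; a minimal sketch (the function name is ours) under the 50-percent rule just described:

```python
# A minimal sketch (names are ours) of the renewal timeline described
# above: the client first attempts renewal at 50 percent of the lease.
from datetime import datetime, timedelta

def first_renewal_attempt(granted_at: datetime, lease: timedelta) -> datetime:
    """Return when a DHCP client first tries to renew its lease."""
    return granted_at + lease / 2

lease_start = datetime(2002, 1, 1, 8, 0)
print(first_renewal_attempt(lease_start, timedelta(days=8)))
# 2002-01-05 08:00:00
```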

Scopes and Superscopes

Now that we’ve discussed the purpose of DHCP and the network interaction of server and clients, let’s take a look at the configuration of the server. DHCP configurations are organized into scopes. A scope is a group of TCP/IP addresses for a given logical subnet, and each subnet can have only one scope defined. The scope includes, at a minimum, a range of IP addresses for the subnet, the subnet mask, and the lease duration. Scopes typically include other TCP/IP parameters as well, such as the default gateway, DNS server, and domain suffix.

Superscopes are collections of scopes, in which each scope is then called a member scope. A superscope allows a single physical network segment to support multiple logical TCP/IP networks. Superscopes can also be put to work when an existing scope is nearly exhausted; the superscope allows addition of another scope of addresses. The biggest benefit of superscopes in the context of highly available networks is added fault tolerance, which we will discuss later in this section. When defining scopes for multiple segments to accomplish fault tolerance, you must be sure not to define overlapping addresses. Microsoft has two design guidelines for multiple servers:

Multiple DHCP servers  If your design has multiple DHCP servers on the same logical network, divide the total available dynamic address space evenly, so that each DHCP server gets 50 percent of the range. Be sure that you have allocated enough addresses that 50 percent will serve all the clients. For example, if you have 50 servers that need an address in a given tier, you’ll want to allocate 100 addresses in your subnet design; each DHCP server then gets a scope of 50 addresses. Should one server fail, clients will still have an address. Each of these DHCP servers can also have a superscope providing redundancy to DHCP servers on other segments.


One DHCP server  If your design has only a single DHCP server on a logical network, with a DHCP server on another segment providing redundancy, use the 80/20 rule: the local DHCP server gets 80 percent of the total range, and the remote server gets 20 percent. For the same 50-server example as above, the subnet design needs at least 63 addresses for that segment, determined by dividing 50 by 80 percent (50 / 0.8 = 62.5, rounded up). We want the servers to obtain their addresses locally if possible, but we want sufficient addresses in the subnet for the remote DHCP server to provide some addresses if necessary. The remote server will need a relay agent (discussed next) to receive requests from the clients.
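The sizing arithmetic behind both guidelines fits in a few lines of Python; the 50-client figure is the example from the text, and the function name is our own.

    import math

    def pool_size(clients, local_share):
        """Smallest dynamic range that leaves the local DHCP server
        enough addresses for every client at its share of the range."""
        return math.ceil(clients / local_share)

    print(pool_size(50, 0.50))   # two servers splitting 50/50 -> 100
    print(pool_size(50, 0.80))   # one local server, 80/20 rule -> 63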

DHCP Relay Agent

Recall that the four steps in the DHCP negotiation are accomplished via broadcasts, and routers in most environments are configured to block broadcasts. Without any further configuration, this would require a DHCP server on every network segment. The workaround is to use a DHCP relay agent. A relay agent listens for DHCP broadcast messages and relays them directly to a DHCP server; it also takes the response from the server and broadcasts it back to the client. Most routers today are capable of acting as a relay agent (the applicable standard is RFC 1542), and Windows 2000 Server also includes a DHCP relay agent. The relay agent additionally informs the DHCP server which logical network segment the client is on, so that the DHCP server can provide an address from the proper scope.
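Conceptually, the relay’s job is a broadcast-to-unicast translation plus a scope hint. The fragment below is a hypothetical Python sketch of that idea, not Windows or router code; the server address is a placeholder, and giaddr is the gateway-address field defined by the DHCP RFCs that tells the server which segment (and therefore which scope) the request came from.

    DHCP_SERVER = "192.0.2.10"   # placeholder: server on a remote segment

    def relay(broadcast_packet, relay_interface_ip):
        """Forward a client's DHCP broadcast to the server as unicast,
        stamping giaddr so the server can pick the correct scope."""
        packet = dict(broadcast_packet)
        packet["giaddr"] = relay_interface_ip
        return DHCP_SERVER, packet    # unicast destination, not broadcast

    print(relay({"op": "DHCPDISCOVER", "mac": "00-50-56-ab-cd-ef"},
                "192.0.2.129"))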

Lease Duration

As mentioned earlier, the final step in the DHCP process, DHCPACK, provides a lease duration. This is a finite time frame during which the client is allowed to use the given address. The DHCP client will attempt to renew the lease halfway through the lease period. When determining your lease duration, there are two primary considerations:

- The lease duration must be short enough that the rate of node replacement (due to failure or upgrade) does not deplete the address pool, and short enough that any lease option modifications are applied within an acceptable period.

- The lease duration must be long enough to prevent a temporary failure of the DHCP server (or relay agent) from harming the site.


Microsoft recommends a lease duration of 14 days. If a node fails, its address can be released and returned to the pool within a maximum of 14 days (that is, if the node fails immediately after obtaining the lease). If the DHCP server fails, you have several days to restore or replace it even if it fails right before a client attempts the halfway-point (7-day) renewal. If you need to make option changes, such as a newly acquired DNS server, the maximum time to get the option applied to all clients is only 7 days.
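The timing is easy to verify. A short Python sketch with an arbitrary grant date:

    from datetime import datetime, timedelta

    LEASE = timedelta(days=14)            # the recommended duration
    granted = datetime(2002, 1, 1, 9, 0)  # arbitrary example grant time

    print("Renewal attempt:", granted + LEASE / 2)  # day 7: worst case for an
                                                    # option change to reach
                                                    # every client
    print("Lease expires: ", granted + LEASE)       # day 14: worst case to
                                                    # reclaim a failed node's
                                                    # address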

Fault Tolerance

We’ve seen that using DHCP has several advantages relating to easier address management. However, Microsoft’s DHCP server has no built-in fault tolerance. You cannot simply set up two DHCP servers and use the same scopes, because the servers will give out the same addresses.



Microsoft Exam Objective

Design a highly available network topology. Considerations include redundant services.

There are two methods available to provide fault tolerance under DHCP. First, you can install DHCP on a Windows 2000 Advanced Server or Windows 2000 Datacenter Server cluster. The DHCP service is cluster aware and will properly fail over if a node stops working. The cluster approach is valid if your network requires only a single DHCP server. The second approach is to install multiple DHCP servers. In a network with multiple physical segments isolated by firewalls, you will need to install DHCP servers on each segment. Unless you’re installing DHCP on a cluster server, it is recommended that you install at least two DHCP servers. Here are some design guidelines for minimal fault tolerance:

- If only one server is required and a cluster server is available, install DHCP in the cluster.
- Install DHCP on at least two servers.
- Configure the scopes for each subnet so that each server provides unique, nonoverlapping addresses.


- Ensure that routers between subnets are configured to be DHCP relay agents.
- Install DHCP services on the network segments that have the greatest demand (the most clients).

In a typical website infrastructure, the load created by the DHCP service is relatively minor, so installing DHCP on your domain controllers is acceptable. If you are running Active Directory, don’t forget to authorize each DHCP server in Active Directory before it can lease addresses. This overlaps with your domain controller design as well, since it is desirable to have domain controllers on each network segment. By installing on a domain controller, you ensure that the DHCP service will be authorized even in the event of a router failure.

Design Guidelines for DHCP

Although it does diminish the management requirements of TCP/IP addressing, DHCP must be used appropriately. Your infrastructure will need both static and dynamic addresses. Static addresses are appropriate for

- Domain controllers
- DNS servers
- Public interfaces of public servers
- Routers
- VPN servers

You should be able to use DHCP to configure all other system components. This includes any data, management, and monitoring interfaces in your network design. If possible, use multi-homed DHCP servers to avoid reliance on relay agents. In some environments, such as highly secure sites, this may not be possible if firewalls are isolating each logical tier. Position the servers as close to the clients as possible.

Name Resolution and DNS

Domain Name Service (DNS) is a distributed, hierarchical database that maps host names to TCP/IP addresses. DNS was originally designed to make locating systems easier for users. It’s much easier to remember www.microsoft.com than it is to remember a dotted-decimal address such as 198.51.100.7.


In essence, DNS works just like a phone book: you give it a name, and it returns a number. DNS is what allows users to contact your site. In a Windows 2000 network, DNS is also the primary internal name-resolution service used by Active Directory. Internal Windows clients (which can be servers) use DNS to find other servers, services, and domain controllers for logging in.

Domains and Hierarchy

DNS host names are hierarchical, evaluated from right to left. A DNS name that explicitly identifies a host is called a Fully Qualified Domain Name (FQDN). For example, www.sybex.com is an FQDN that identifies the Sybex web server. The rightmost entry is also known as the top-level domain; in the Sybex example, com says that the top-level domain is a commercial organization. After evaluating the top-level domain, the DNS process continues from right to left until the full FQDN has been resolved. Table 5.2 lists the current valid top-level domains.

TABLE 5.2  Top-Level Domains

DNS Name    Type of Organization
com         Commercial
edu         Educational
net         Networks
gov         U.S. government (nonmilitary)
mil         U.S. military
yy          Two-letter country codes
biz         Business (new)
info        Information
arpa        Reverse DNS
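From application code, resolving an FQDN is a single call to the system resolver, which in turn walks this hierarchy. A minimal Python sketch using the standard socket module:

    import socket

    # Ask the locally configured resolver (and thus the DNS hierarchy)
    # for the addresses behind an FQDN.
    for *_, sockaddr in socket.getaddrinfo("www.sybex.com", 80,
                                           proto=socket.IPPROTO_TCP):
        print(sockaddr[0])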


Zones, Replication, and Server Types

DNS servers can be further partitioned into zones. A zone is a part of the DNS database with a contiguous namespace. Zones are useful for two reasons. In large networks, zones can identify related systems; for example, a geographical zone such as east.sybex.com or an organizational zone such as development.sybex.com. The other role for zones is in delegation, which is assigning responsibility for portions of the namespace to another person or group.

Zones are also used for DNS replication, which enables multiple DNS servers to host the same names. The zones in this case are called primary and secondary. The primary zone server replicates a copy of the zone file to the secondary server via a zone transfer. The primary zone file is the master copy, and the secondary zone file is a read-only copy. Zone transfer is what makes the DNS hierarchy fault tolerant.

There are five DNS server types available:

Active Directory Integrated  Zone data is stored in Active Directory, and domain controllers running the DNS service provide name resolution.

Primary  In traditional DNS (not Active Directory Integrated), the Primary is the first DNS server for a zone and maintains the master copy of the zone file.

Secondary  The Secondary DNS server provides redundant DNS service with a read-only copy of the zone file. This server improves performance at remote locations by keeping a local copy of the zone file.

Delegated Domain  Contains a subset of a domain’s namespace, improving performance by reducing the number of records in the local zone.

Forwarder  A server that doesn’t host zone files but only supplies DNS lookup services for local clients.
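A zone transfer can also be requested from a script, which is a handy way to confirm that your primary permits transfers only to the hosts you intend. This sketch assumes the third-party dnspython package; the server address and zone name are placeholders, and the primary must allow AXFR from the requesting host.

    import dns.query   # third-party package: dnspython
    import dns.zone

    # Pull a read-only copy of the zone, exactly what a secondary server
    # does during a zone transfer (AXFR).
    zone = dns.zone.from_xfr(dns.query.xfr("192.0.2.53", "example.com"))
    for name, node in zone.nodes.items():
        print(node.to_text(name))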

Designing the DNS Solution

Now that we have reviewed general DNS concepts, let’s take a look at the details of designing your DNS solution. The factors you’ll need to address are the number of DNS servers, the location of the DNS servers, the namespace for the internal and external network, and the DNS service type.


Microsoft Exam Objective

Design a TCP/IP network infrastructure. Considerations include DNS hierarchy and naming.

Design a highly available network topology. Considerations include redundant services.

If you’re designing a web infrastructure for the Internet public, you’ll obviously need a public namespace and servers. You must contact an Internet name registrar to register the domain name and DNS servers. If you’re designing an intranet website, then you’ll only need an internal namespace. The remaining discussion in this section assumes a public website; the design is more complicated, but it includes everything applicable to an intranet site. From a security perspective, it is recommended that you maintain separate namespaces and servers for the public and private parts of your network. The internal private namespace prevents external users from accessing the names and TCP/IP addresses of the internal network computers. For the external public network, you’ll need DNS to resolve FQDNs into an address so that the Internet users can reach your public servers. For the private network, you’ll employ DNS to resolve names for services and computers within the architecture. If your servers on the private network need access to the public Internet, you’ll have to configure the private DNS servers to resolve names in both the internal and external namespaces.

Using completely different namespaces for the internal and external networks reduces configuration issues and administrative overhead.

Public DNS Services

The public Internet network requires DNS servers to provide name resolution for Internet users. As discussed earlier, the namespace for the public network should be different from that of your private DNS network. To make name resolution highly available to Internet users, set up at least two DNS servers. One consideration is that many ISPs will host your DNS on their servers. This has the benefit of moving all external name-resolution resources outside your infrastructure, saving both CPU resources and bandwidth.


To maintain administrative control of the DNS records, have the ISP configure its DNS servers as Secondary servers. You then configure your own DNS server as the Primary and allow zone transfers to the ISP servers. If you choose not to use the ISP’s DNS servers, ensure that you have at least two DNS servers of your own. If you’re running Active Directory in the public user tier, configure DNS to be Active Directory Integrated; this will minimize administration tasks. From the ISP server’s point of view, an Active Directory Integrated zone appears as a traditional BIND-based DNS zone. (BIND stands for Berkeley Internet Name Domain and was one of the original DNS servers on the Internet.) If you aren’t running Active Directory, you’ll need a standard Primary zone for the external namespace. In this case, you’ll have a single-master model, with one master replicating via zone transfers to your other, secondary servers.

Internal DNS Services

DNS is a critical network service and must be highly available even in data or management networks. Windows 2000 uses FQDNs for locating computers and services, and without name resolution, applications may fail. The location of your DNS servers is a critical design issue; the primary consideration is that the service should be located as close as possible to the clients. For the internal network, it is recommended that you use Active Directory Integrated DNS. There are two benefits to this arrangement: DNS is automatically made redundant by deploying multiple domain controllers, and, conversely, Active Directory does not depend on an external DNS server for name resolution. Following are some other benefits of an Active Directory Integrated zone that apply to the internal tiers:

- Multi-master, read/write copies of the zone
- Automatic replication by Active Directory
- Secure, dynamically updated DNS zones
- Replication confined to the Active Directory domain

In this section, we discussed DNS servers and their roles. We discussed using Active Directory Integrated DNS when possible, and setting up a minimum of two DNS servers for redundancy. We also saw that DNS is a critical service in a Windows 2000 web solution infrastructure and, for performance reasons, DNS servers should be located as close as possible to the DNS clients.


Routed and Switched Environments

Designing a highly available network infrastructure often results in a complex system of redundant components, paths, and services. Redundancy allows a site to tolerate failures of individual components. Hubs, switches, and routers are the low-level components that tie computers together into networks.



Microsoft Exam Objective

Design a TCP/IP network infrastructure. Considerations include routed and switched environments.

A hub is a device that allows for connection of multiple systems. Hubs operate at the lowest layer (Physical) of the OSI model and essentially act as a big concentrator of electrical signals: when a transmission comes into one port from a computer, the hub repeats that signal to every other port.

A switch is also a device that connects multiple systems; however, switches operate at layer 2 (Data Link) of the OSI model, and many can also operate at layer 3 (Network). Rather than transmitting a stream of packets to all ports, a switch establishes a temporary dedicated path between the sender and receiver. This allows multiple communications to occur simultaneously without collisions. The cost of switches has decreased to the point that, for any new network design, you should select a switch over a hub.

Switches also provide other features that may be valuable in your network infrastructure. Two of these features are quality of service (QoS) and virtual local area networks (VLANs). QoS, discussion of which is beyond the scope of this book, supports the prioritization of network traffic based on a number of variables. This helps you prevent one type of service from consuming all your network resources. VLANs are useful when you have more logical subnets than you have physical switch devices. One switch can host multiple VLANs, where each VLAN is still isolated as if it were on a stand-alone switch. For example, on a 36-port switch, ports 1 through 12 can be configured as one VLAN servicing one subnet; ports 13 through 24 can be configured as another VLAN servicing another subnet; and so on.


Although hubs and switches are reliable, they do fail occasionally. In order to configure for high availability, you will need to use redundant switches. Figure 5.3 shows a three-tier network with a redundant switch configuration. Notice that each server has at least two NICs, connecting it on each subnet to redundant switches. This connectivity is best accomplished using the NIC vendor’s teaming driver; teaming is discussed later in this chapter.

FIGURE 5.3  Redundant-switch multi-tiered network (each server in Subnet A and Subnet B connects to two redundant switches)

A router is a device that connects networks of different types, such as Ethernet to T1. Routers are also used to connect different logical networks, such as multiple subnets. Routers operate at the Network layer of the OSI model. Like switches, routers are reliable, but they do fail; in order to configure for high availability, you will need redundant routers just as you need redundant switches. However, since routers operate at the higher Network layer (layer 3), configuring redundant routers is not a simple plug-and-play operation. For the purpose of this discussion, any firewalls in your design are also considered routers; after all, they route packets, filtering out inappropriate packets as they go. When you use multiple subnets with multiple routers, the routers must be configured with a routing protocol.


Typical routing protocols include OSPF, RIP, and BGP, and these will most likely meet the requirements of your infrastructure. Routers and routing protocols typically operate in one of two ways:

- Two routers can load-balance by sharing a common virtual IP address. This works conceptually just like Network Load Balancing.
- The routers can use a single routed connection only, with the redundant path remaining unused unless there is a failure.

Load balancing and failover between routers are usually supported by vendor-specific protocols. For example, Cisco routers use the Hot Standby Router Protocol (HSRP); other vendors may use the Virtual Router Redundancy Protocol (VRRP).

Designing the Network Topology

In this section, you’ll study the concepts of designing network topology for high availability. The primary method for network availability is redundant paths, which are a requirement in a highly available website infrastructure. Redundant paths allow for continued operation during failures and temporary bottlenecks. In Chapter 6 we discuss the methods for determining required bandwidth, so that will not be included here; we assume the proper bandwidths of all tiers have been calculated. This discussion includes redundancy for your Internet connectivity and the internal networks.



Microsoft Exam Objective

Design a highly available network topology. Considerations include redundant paths and redundant components.

Redundant Paths

Once you have determined the required bandwidth for your network connections, you need to make the connections highly available. This is done by adding redundant connections. We’ll examine the requirements for Internet and internal connections separately.


For a Public Internet Connection

Your Internet connection can be made highly available by adding a redundant connection. If you are locating your web farm in a hosting facility, the facility may already provide this service. When you consider redundant Internet connections, make sure that each connection has enough capacity to handle the peak bandwidth requirements. For example, if your website has a peak load of 1 Mb/sec, you don’t want to use an ISDN circuit for the redundant connection (unless subpar performance is acceptable!). Each router must have a secondary route for transmitting outgoing traffic if the other connection fails.
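Stated as a check: each link, on its own, must carry the peak load, or a failover merely trades an outage for congestion. A trivial Python sketch of our own (the T1 and ISDN figures are nominal line rates):

    def link_is_adequate(peak_mbps, link_mbps):
        """True if a single link can carry the site's peak load alone."""
        return link_mbps >= peak_mbps

    print(link_is_adequate(1.0, 1.544))   # T1 backing a 1 Mb/sec peak: True
    print(link_is_adequate(1.0, 0.128))   # ISDN BRI backup: False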

For the ultimate in single data-center availability, consider adding a redundant Internet link that connects to a different ISP. Should your primary ISP go offline, your website still has connectivity. This arrangement requires significant routing protocol configuration that goes beyond the scope of this book (and the exam). But from a design perspective (and for the exam), just be aware that it is possible. One consideration for a multiple-ISP setup is that you’ll need to use network address translation (NAT, see Chapter 8) to hide the addresses of one ISP from the other.

Figure 5.4 illustrates redundant Internet connections.

FIGURE 5.4  Redundant Internet connections (two ISPs, each reached through its own router and switch, feeding the same server farm)


For a Private Network

For a variety of reasons, including security and bandwidth as discussed throughout this book, you’ll usually want to isolate your Internet network from all other communications. Although data and management networks aren’t directly accessible to the Internet end-users, they are just as critical to the operation of the infrastructure. For example, suppose you have a catalog database stored on a SQL Server 2000 system on a private network. Even when the Internet connection is working fine, failure of the private data network will essentially bring your website down. It’s vital to design from end to end, from the client’s browser to the data being requested, and to build redundant connections everywhere possible. The two topologies used primarily in modern systems are Ethernet and Fiber Distributed Data Interface (FDDI).

Ethernet

For redundancy in an Ethernet network, consider the following:

- Using redundant connections for all communications
- Using teaming network adapter cards (NIC teaming)

Most vendors of popular NICs offer a feature called teaming. For NIC teaming, you install multiple network cards (all the same manufacturer and model) in your server. Within Windows 2000, you install the adapter driver and the teaming driver. The teaming driver is a small shim that sits between the adapter driver and the higher protocols. One of the teamed adapters is active, and the second is in standby mode. If the primary card fails, the second card takes over the IP address. This provides protection against failure of the NIC and the patch cable. If the second NIC is attached to a separate secondary switch, the teaming protects you against failure of the switch, as well. Figure 5.5 shows a network with redundant Ethernet paths.

FIGURE 5.5  A network with redundant Ethernet connections (a server with teamed NICs attached to two separate switches)


FDDI

Fiber Distributed Data Interface (FDDI) is another method of affording redundant private network connectivity. A proven, mature technology, FDDI is similar to old-fashioned token ring, in which a token is passed around the ring and the current owner of the token is allowed to transmit. This prevents collisions on the FDDI network. The FDDI equivalent of teaming is known as dual homing. One FDDI adapter is connected to one switch, and the secondary FDDI adapter is connected to another switch. The failover of FDDI adapters is so quick that often not a single bit is lost. Figure 5.6 shows a network with redundant FDDI paths.

FIGURE 5.6  A network with redundant FDDI paths (web servers and business logic and data servers dual-homed to primary and secondary FDDI rings)

Designing High-Availability Server Configurations

In this chapter and others, a primary topic has been using redundant servers and services to design highly available sites. In this section, we’ll look at designing some fault tolerance into each individual server.


Hardware Considerations

One of the simplest ways to increase the fault tolerance of a site is to build as much fault tolerance as possible into each component. A server comprises a motherboard, CPUs, memory, disk systems, power supplies, NICs, and other components. To be 100 percent fault tolerant would require that every component be redundant. Such specialized servers do exist, but they are very expensive.

Server Hardware

Motherboards and CPUs are difficult to configure with redundancy other than what is supplied by specialized hardware vendors. Fortunately, these components are generally fairly reliable.

Memory, in the form of RAM, is available in three types: nonparity, parity, and error-correcting code (ECC). Nonparity memory is the least desirable but also the least expensive; a single-bit failure in nonparity memory can cause Windows to crash. Parity memory moves a step up by providing a parity bit. If the parity memory detects a parity mismatch, it signals Windows 2000 about the error; depending on what the server had stored in that page of memory, Windows 2000 may be able to recover. Parity memory gives error detection only, not correction. ECC memory is the best high-availability solution and also the most expensive: it will correct a single-bit error. Bear in mind, however, that with both ECC and parity memory, the entire chip can still fail. It’s a good idea to keep some spare memory in supply at all times.

Heat is a major cause of failure in many server components; an overheated machine can cause any of its components to go down. Many modern servers support installation of multiple cooling fans, and the cost of this insurance against overheating is only a small part of total system expense.

It may seem an obvious point, but a server without power doesn’t work. Most servers offer redundant power supplies that will kick in if the primary supply fails. Ideally, each of the power supplies will go to a separate UPS on a separate power feed, which protects you against a UPS failure or a tripped circuit breaker. Redundant supplies should be considered for all critical components, not just the servers: if a shared-storage RAID array loses power, your investment in a cluster will not protect you.

Disk controllers, too, can be duplicated. With a single controller, you have a single point of failure, even with a RAID array.


Disk storage is an important consideration for high availability; since disk drives have moving parts, they are more prone to failure than most other components.

Data storage design is discussed in Chapters 4 and 6.

Cluster Communication

When you’re running a clustered or network-load-balanced (NLB) configuration, the internal cluster heartbeat is a critical component. You’ll want to arrange high availability for the cluster network as well as all other components.



Microsoft Exam Objective

Plan server configurations. Considerations include network adapters, cluster communication, connectivity, and bandwidth.

It is recommended that all cluster or NLB servers have at least two NICs, one for public communications and one for the private heartbeat network. In fact, for a Microsoft Cluster service cluster, a dedicated heartbeat NIC is a requirement, not just an option. Since the heartbeat is critical, fault tolerance is necessary. The first thought that may come to mind is to use the teaming functionality discussed earlier in this chapter. However, Microsoft recommends that teaming not be used for private communications in a clustered arrangement (see TechNet article Q254101 for details). You can use teaming for the public interfaces.

Both Microsoft cluster technologies (MSCS and NLB) provide for fault tolerance in the implementation. If you have only two adapters in your cluster server, configure one adapter for private communications and the other for mixed networking. (Refer to Chapters 2 and 3 for details on accomplishing this.) If the private network fails in this configuration, the nodes will communicate over the public network. After the private network is repaired, private communications automatically fail back to the private network. Other best practices for cluster communications are as follows:

- Use a single switch or VLAN for the cluster members’ public interfaces and the routers.






- Use a single switch or VLAN for the cluster members’ private communications.
- Use full-duplex connections to minimize collisions.

Of course, you’ll also install multiple private and public adapters to increase the fault tolerance. An extreme example of this is shown in Figure 5.7, where each adapter is connected to a dedicated switch. VLANs could be used, as well, to reduce the number of physical switches required.

NLB is covered in Chapter 2, and Cluster services in Chapter 3.

FIGURE 5.7  An arrangement with redundant cluster communications (Node A and Node B, with the heartbeat and data-tier interfaces each connected to a dedicated pair of switches)

Designing End-to-End Bandwidth Solutions

N-tier architecture is a fancy way of saying that an application is divided into many tiers of operation. Throughout this book you have studied various network segments. Very often these segments correspond to an n-tier design. A basic n-tier architecture has the following parts.




Microsoft Test Objective


Analyze and design end-to-end bandwidth requirements throughout an n-tier environment.

Presentation layer  This layer is composed of the clients. In a web solution architecture, the presentation layer is the client operating system and a browser.

Business logic layer  Business objects, such as COM+ components, live on this layer and perform data manipulation.

Data access layer  This layer has the pooled COM+ components and Microsoft Transaction Server (MTS) objects. It protects the database from direct access by the business logic layer.

Data layer  This is the actual database, such as Microsoft SQL Server 2000.

In order to provide a highly available website, it is imperative that the site’s desired overall performance be evaluated from the end user at the presentation layer, through all other layers, and back to the user. Let’s look at some of the factors affecting end-to-end performance.

Each server must communicate with clients or other servers. As the number of servers or the number of clients grows, more bandwidth is consumed. At some point, the network may become saturated and performance will decline. This can happen in any of the network segments in the n-tier architecture, which is one reason for using multiple segments and subnets as discussed earlier in the chapter. This subnetting provides for future growth in servers and bandwidth within each tier. Applications deployed in the business logic and data access layers will require various levels of bandwidth. Some applications tolerate latency better than others. Servers use remote procedure calls (RPCs) to communicate with each other, some more often than others, increasing traffic. Some applications communicate via broadcasts, again consuming bandwidth.

One factor that is not directly under the control of the site architect is the end user’s bandwidth when connecting to the Internet. Nevertheless, that bandwidth can affect visitors’ perception of your website. Unfortunately, there isn’t much you can do technically to increase the end user’s


bandwidth if you’re running a public website. You’ll need to work with your marketing department to develop an understanding of the target market and end users’ typical connectivity. What you can do is design the site to minimize transmissions on the public network. For example, if the typical visitor to your site connects to the Web with a 28.8Kbps modem, you might design your web pages with smaller graphics. You don’t want users sitting and waiting for a 2MB multimedia splash screen to appear.
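The arithmetic makes the point. A quick Python sketch of the worst case just described:

    SPLASH_BYTES = 2 * 1024 * 1024   # a 2MB multimedia splash screen
    MODEM_BPS = 28800                # a 28.8Kbps dial-up modem

    seconds = SPLASH_BYTES * 8 / MODEM_BPS
    print(f"{seconds / 60:.1f} minutes")   # roughly 9.7 minutes, before
                                           # protocol overhead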

Bandwidth capacity planning is discussed in Chapter 6. There is nothing mysterious or unique about calculating capacity or planning the configuration for an n-tier network. Chapter 6 explains how to either calculate theoretical bandwidth requirements or determine requirements empirically. Often, the empirical approach is easier.

The test objective is itemized to stress the importance of considering the total end-to-end performance in your design. When designing the infrastructure of a highly available website, each tier needs to be considered individually in regard to bandwidth and availability.

Designing Directory Services

Active Directory can play a key role in your web infrastructure. Active Directory provides authentication, security, and management tools. If you are running an Active Directory domain, it is essential that the Active Directory controllers be available at all times. It is also possible to have a separate domain in the data tier, to provide end-user authentication and control access to website information, again making the availability of the Active Directory controllers critical. In this section, we will examine the process for planning an Active Directory structure, along with guidelines for making the Active Directory highly available.

Full details of an Active Directory structure are beyond the scope of this book, of course, as well as this exam. Microsoft offers a certification exam on Designing Active Directory, 70-219.




Microsoft Test Objective


Design directory services. Considerations include Active Directory, LDAP, availability, authentication, and sizing.

At a very high level, the steps of planning an Active Directory structure can be summarized as follows. Obviously, each of these summarized steps will have several phases in practice.

- Create a forest plan.
- Create a domain plan for each forest.
- Create an organizational-unit plan for each domain.
- Create a site topology for each forest.

Directory services are critical to your infrastructure for authentication. Active Directory was designed with high availability in mind: it operates in a multi-master configuration, unlike the Primary Domain Controller/Backup Domain Controller (PDC/BDC) design in Windows NT 4.0. Multi-master mode means that each controller maintains a full read/write copy of the database. Communication among the domain controllers is called replication. Active Directory replication depends on two primary considerations: there must be reliable network connectivity among the controllers, and there must be a reliable DNS infrastructure.

Domain Controller High Availability

Other sections of this chapter discuss hardware and network issues for increasing server availability. Those same techniques, such as redundant power supplies or redundant NICs, also apply to your domain controllers. In addition, there are other things you can do to increase the availability of domain controllers.

Set up multiple domain controllers per domain.  Installing multiple domain controllers in each domain is a quick way to boost availability. In the event of a critical failure, the other domain controller will continue to service requests. This redundant controller also helps in the event of a catastrophic failure of the first controller, providing instant access to the data without requiring recovery from tape.


Use DNS servers.  Configure the domain controllers as DNS servers. This improves the availability of Active Directory in your infrastructure. The local DNS service removes the dependency on connecting to an external DNS server for name resolution. This also benefits your DNS infrastructure by distributing the load, as well as providing DNS redundancy.

Optimize placement of the forest root domain zone.  Store the forest root domain zone as a secondary zone on domain controllers for child domains. Child domains need DNS entries in the forest root DNS zone, and a local copy of this data makes Active Directory more highly available.

Directory Services High Availability

In addition to designing the domain controllers, the network, and DNS servers for high availability, there are other things you can do to increase the availability of your Active Directory. These design issues relate to security and manageability, which indirectly affect the Directory’s availability: if your Directory gets compromised or hacked, then for practical purposes it is no longer available. Following are guidelines to follow when designing your directory services.

For security, configure all the internal servers as members of a domain. This enables the domain to use Kerberos authentication. Even with the internal servers behind a firewall, each system should be as secure as possible. If possible, configure all computers to communicate using IPSec (Internet Protocol Security), which will make it almost impossible for hackers to alter network traffic.

Use a group policy with the "no override" setting to enforce the security policies in the domain. That will ease the administrative load when configuring servers, as well as protect you against an accidental change. If servers require individual security policies, set up an organizational unit for each set of policies and assign a specific group policy to each organizational unit.

Create a forest with a single domain dedicated to the front user tier. User accounts and computer accounts should be in separate organizational units in this network. Servers in the user tier are accessible through the firewall, so they are the next layer of protection against any attack. Store as little data as possible in this tier, and specifically no valuable corporate information. The only accounts in this domain will most likely be the front web-server computer accounts and user accounts for administration.


Any internal private networks, such as data or management tiers, should have their own forests. This helps minimize the amount of data available from Active Directory to the front web servers. There may be multiple domains in this forest:

- The primary domain will contain the computer and administration accounts.
- If you are providing user authentication to your web application, that will be another domain. This separate domain isolates end-user authentication from the administrative domain.
- Finally, if you are running clusters, create a dedicated cluster domain. Each node in the cluster should be configured as a domain controller and DNS server. A cluster domain is sometimes called a domainlet because of its small size. This domainlet configuration ensures that the cluster will always be available, by eliminating dependency on DNS and other domain controllers. Keeping the cluster nodes in a unique domain ensures that replication traffic is minimal and that the servers will not be taxed with authenticating other systems. If the cluster nodes were domain controllers in the primary domain, they might end up spending processing cycles authenticating rather than performing their primary function (such as running SQL Server 2000).

If necessary, create trusts as appropriate between the private network domains. For example, if you do have an application domain for user authentication, create a trust between the front-end user tier domain and the application domain.

LDAP

Active Directory communicates using an industry-standard protocol called the Lightweight Directory Access Protocol (LDAP). Active Directory uses LDAP extensively, and it is the only directory access protocol Active Directory supports. LDAP normally runs over TCP, but it can also work over connectionless UDP. Installed by default on domain controllers, LDAP defines hierarchically how clients access the directory and how they can search and share directory data.


LDAP is used by both users and applications to access information in Active Directory. Since Active Directory is really a database, standard database functions such as query, create, update, and delete are available.
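For illustration, here is how an application might issue such a query. The sketch assumes the third-party ldap3 Python package; the server name, bind account, and search base are placeholders, and in this design only the application, never an end user, would make this call.

    from ldap3 import Server, Connection   # third-party package: ldap3

    server = Server("dc1.corp.example.com", port=389)   # default LDAP port
    conn = Connection(server,
                      user="cn=webapp,cn=Users,dc=corp,dc=example,dc=com",
                      password="...", auto_bind=True)

    # A standard LDAP query against Active Directory.
    conn.search(search_base="dc=corp,dc=example,dc=com",
                search_filter="(sAMAccountName=jdoe)",
                attributes=["cn", "mail"])
    print(conn.entries)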

In a web solution infrastructure, you will not want to give users direct access to LDAP. If your web application is using Active Directory, you will want to control access, and only allow the application to access LDAP.

In terms of website infrastructure design, there are only two simple things to remember:

- LDAP communicates over port 389 by default. In your security design, if you have Active Directory controllers on the opposite side of a firewall from your website, you’ll need to open port 389.
- LDAP allows you to interoperate with other vendors’ directory services. This can be beneficial in extranets where you are linking your site to a partner, for example.

Summary

In this chapter we looked at designing the supporting network to be highly available. We reviewed TCP/IP addresses and subnetting, introduced one way to calculate the number of hosts required, and showed you how to enumerate the available network addresses. Classful and classless (CIDR) routing were discussed. You learned about DNS servers and how redundancy can be implemented via zone transfers, and we discussed the hierarchical nature of DNS design. We discussed using DHCP for network address assignment, especially in larger server farms. DHCP doesn’t provide native redundancy the way DNS does, so we covered techniques for providing DHCP reliability. We examined redundant network connections and paths. Multiple NICs in each server can be used to increase reliability as well as bandwidth, and connecting multiple NICs to multiple switches can be accomplished with vendor drivers known as teaming. You also learned about designing each server to be as fault tolerant as possible. We discussed the importance of looking at network capacity from end to end in your design. Each network segment must have sufficient capacity


to allow the applications in that tier to perform; otherwise the complete system underperforms. Finally, we covered directory services. Directory services can be used for authenticating users in the end-user tier, or simply for authenticating accounts and services in the internal tiers. The security and availability of directory services are just as critical as those of any other supporting service.

Exam Essentials

Understand how to enumerate network and host TCP/IP addresses.  First determine the number of hosts required and count the number of binary bits needed to represent that number. Use the remaining network bits and enumerate all binary combinations.

Understand the four steps of DHCP negotiation.  The four steps in obtaining a DHCP-assigned address are DHCPDISCOVER, DHCPOFFER, DHCPREQUEST, and DHCPACK. Remember that DHCP lease negotiation is via broadcast messages.

Know when and where to use a DHCP relay agent.  When the DHCP server and potential DHCP clients are not on the same segment, you need a DHCP relay agent or an RFC 1542–compliant router between every logical subnet.

Understand the two methods for making a DHCP service fault tolerant.  The DHCP service can be installed on a cluster, or you can set up redundant DHCP servers with address scopes that do not overlap.

Know when to use static addresses.  Static addresses should be used for domain controllers, DNS servers, routers, VPN servers, and the public interfaces of public servers.

Be familiar with DNS server types.  The types of DNS servers are Active Directory Integrated, Primary, Secondary, Delegated Domain, and Forwarder.

Be able to determine how many DNS namespaces are required.  You need at least one namespace per Active Directory domain. Additionally, it is recommended that your public namespace be isolated from all internal namespaces.


Know where to place DNS servers.  You need at least two DNS servers for public access. You should place at least one DNS server on every segment. If possible, use an Active Directory Integrated DNS server for all internal namespaces.

Design fault tolerance into the physical network.  Make all connections, switches, routers, and NICs redundant.

Know which server hardware can be made redundant within the server itself.  Servers support redundant power supplies, fans, drives, and error-correcting memory. The only components that can’t be made redundant in a single server are the motherboard and CPU.

Be able to design n-tier bandwidth solutions.  This just means that you need to evaluate all tiers, from end to end, for proper bandwidth.

List the high-level steps for planning Active Directory.  The four steps of planning Active Directory are to create a forest plan first, then a domain plan for each forest, then an organizational-unit plan for each domain, and finally a site topology for each forest.

Know the one other service that is critical to Active Directory operation.  Active Directory requires efficient name resolution via DNS.

Understand Active Directory fault tolerance.  For Active Directory fault tolerance, you can install multiple domain controllers. Each controller maintains a full read/write copy of the database.

Key Terms

Before you take the exam, be certain you are familiar with the following terms:

Active Directory
Active Directory replication
broadcasts
classful routing
classless routing
Classless Inter-Domain Routing (CIDR)
DHCP relay agent
DNS replication
Domain Name Service (DNS)
dual homing
Dynamic Host Configuration Protocol (DHCP)
Fully Qualified Domain Name (FQDN)
host
hubs
Lightweight Directory Access Protocol (LDAP)
network address translation (NAT)
NIC teaming
n-tier architecture
quality of service (QoS)
routers
scopes
subnet mask
subnets
subnetting
superscopes
switches
TCP/IP addressing
teaming
Transmission Control Protocol/Internet Protocol (TCP/IP)
virtual local area networks (VLANs)
zones


Review Questions

1. Which of the following is not a valid network address? Select all that apply.

A. 64.64.5.1 255.0.0.0
B. 132.132.132.1 255.0.0.0
C. 193.111.228.1 255.255.255.0
D. 128.128.128.1 255.255.0.0

2. You are working with the RFC-standard address 192.168.1.0. You need to subnet the network and are planning for 40 host servers on each subnet. All networking components support classless routing. How many subnets can you create?

A. 2
B. 3
C. 4
D. 5

3. Which of the following can be used to achieve high availability for your DHCP server?

A. NLB
B. Replication
C. Clustering
D. Zone transfer

4. In an n-tier environment in which the routers are not RFC 1542–compliant, how many DHCP servers are required for high availability?

A. Two
B. N
C. n * 2
D. (n – 1) * 2


5. Which of the following is the best DNS configuration to use whenever possible?

A. Active Directory Integrated
B. Primary
C. Delegated Domain
D. Forwarder

6. What is the recommended minimum number of namespaces?

A. One
B. Two
C. Three
D. Four

7. What technology allows a server to have multiple NICs on the same subnet?

A. NIC Clustering
B. NIC teaming
C. NIC Hot Spare Routing Protocol (HSRP)
D. VLANs

8. What is the minimum number of NICs required for a cluster?

A. One
B. Two
C. Three
D. Four


9. If your network design calls for an Active Directory with two forests, and two domains per forest, what is the minimum number of domain controllers required for fault tolerance?

A. One
B. Two
C. Four
D. Eight

10. Your network structure is three-tier: the client tier, perimeter tier, and data tier. All the tiers are separated by firewalls. The website application being deployed queries data in Active Directory in the data tier. What ports must be open on the firewalls?

A. Port 389 on the perimeter firewall
B. Port 389 on both firewalls
C. Port 389 on the data firewall
D. Port 1433 on the data firewall


Answers to Review Questions

1. B. 132.132.132.1 is a class B address, but the mask listed (255.0.0.0) is a class A mask.

2. C. The first step is to represent 40 in binary; 40 decimal is 101000 binary, which is 6 bits. Since 192.168.1.0 is a class C address, we know that we have only the last octet of the dotted quad to work with. And 6 bits are needed for the hosts, which leaves us 2 bits from the final octet. With 2 bits, the valid binary combinations are 00, 01, 10, and 11. So we can have four subnets, each with up to 62 hosts.

3. C. Clustering is the only valid option listed for achieving high availability on a DHCP server. Replication is a technique used with both DNS and Active Directory, and zone transfer is also a DNS concept. NLB (Network Load Balancing) is used to load-balance web (HTTP, FTP, SMTP, etc.) traffic.

4. D. This question is meant to reinforce the idea that n-tier refers to every tier. That includes the public tier, where we want static addresses, so there is no need for DHCP there. Without relay agents, we need two DHCP servers for every segment. So if we have n tiers, we subtract the public tier (n – 1) and then multiply by 2 for redundancy.

5. A. Whenever possible, always use Active Directory Integrated zones for a DNS-based web solution network. Integrated zones offer secure updates, automatic replication, and native fault tolerance.

6. B. It is highly recommended that the internal and external namespaces be isolated. The minimum, then, is two.

7. B. NIC teaming is provided by vendors for redundant NICs. NIC Clustering doesn’t exist. HSRP is a Cisco routing protocol. VLANs are used to segregate multiple logical segments within switches.

8. B. Clusters require one NIC for the heartbeat and one for data communications. For fault tolerance, both of these can be made redundant, but you can’t run teaming on the heartbeat network.


9. D. For fault tolerance, you need two controllers per domain. Two forests with two domains each equals four domains (4 * 2 = 8).

10. C. LDAP (Lightweight Directory Access Protocol) is used to query Active Directory, and it operates on port 389 by default. The perimeter firewall doesn’t need the port open, because the end user in the client tier shouldn’t have direct access to the data. The data firewall allows access to the Directory from only the perimeter web servers. Port 1433 is the default port for SQL Server data connections, not LDAP.

E*tradeBay

Take a few minutes to look over the information presented in this case study and then answer the questions at the end. In the testing room, you will have a limited amount of time—it is important that you learn to pick out the important information and ignore the “fluff.”

Background

E*tradeBay is an online provider of resources allowing end users to trade collectibles among themselves. E*tradeBay started up in a test mode to prove the business plan. For revenue, the company collects a percentage of all transactions among the hobbyists. The test period has gone well, and the company has recently acquired venture capital to build up a highly available website.

Current Environment

The existing website is a simple two-tier architecture with a front-end and a back-end network. The front network is protected only by a packet-filter router. The back-end data network is protected only by virtue of using private, nonroutable addresses. Even though the network architecture is two-tier, the software designers developed the software to support expansion into multiple tiers. The front network has one router and two web servers running Windows 2000 Advanced Server and NLB. The router and both servers are plugged into a 100 Mb/sec hub, under the theory that the T1 connection to the Internet is slower than even a shared hub. The back network is connected with a 100 Mb/sec switch. On that network are a SQL Server 2000 system, a domain controller that also provides backup service, and the back-side connections from the web servers. External DNS service is provided by the ISP.

Business Requirements

As part of the agreement with the venture capitalists, E*tradeBay management has agreed to spend as necessary for upgrades making the network highly available and highly secure. The company wants to build the infrastructure now and leave it as is for a year as the customer base grows.


At the present time, the company doesn’t expect to offer any services on protocols other than HTTP (and HTTPS for credit card transactions).

Technical Requirements

Based on security-related business requirements, the network engineers would like to isolate each logical tier onto a dedicated network segment. In addition to the business logic tiers, the engineers anticipate adding dedicated networks for management. Furthermore, although it’s not currently needed, a segment dedicated to backup is planned for future growth. Security is a major concern, so firewalls are planned between the user tier and the web servers, and between the web servers and the data tier. Capacity-planning numbers show that the web server farm requires 20 front-end web servers, and the planners feel it’s best to keep the load-balancing traffic isolated. The network team plans to install two switches per subnet and use NIC teaming on the servers. The database team will upgrade the server to an MSCS cluster, with an external storage array offering 500GB of disk space.

Questions

1. For the described network, how many logical segments are required?

A. Two
B. Four
C. Six
D. Seven

2. Where on the network should the public-namespace DNS servers be placed?

A. At the ISP
B. On the management tier
C. In the data tier
D. In the front-end user tier


3. In the table below, take the tiers, routers, and firewalls from the right column and arrange them to reflect the order in which an end user would proceed to retrieve information from the database.

User Access Order        Components
                         Data tier
                         Perimeter tier
                         User firewall
                         Data firewall
                         Management tier
                         Internet router
                         Backup segment

4. If you decide to implement Active Directory for user authentication, but you don't want to add another subnet, which of the following subnets would be the best candidate for Active Directory?

A. Perimeter network
B. Management network
C. Data network
D. Backup network

5. How many firewalls are required for E*tradeBay's solution?

A. One
B. Two
C. Four
D. Eight


6. How many internal domain controllers are required for E*tradeBay's solution?

A. Two
B. Six
C. Eight
D. Fourteen

7. One of the elements of E*tradeBay's plan is missing redundancy. Which element is it?

A. SQL Server 2000
B. Internet connectivity
C. Server connectivity
D. Web servers

Answers

1. D. The desired collection of segments includes the following: (1) a public user tier on the public side of the user firewall; (2) on the back side of the user firewall, a segment with the public interfaces of the web servers; (3) on the back side of the web servers, a segment connecting to the data firewall, which connects to (4) the data tier; (5) the management tier; (6) the future backup segment; and (7) the private heartbeat of the web cluster.

2. A. Whenever it's possible to arrange, the public DNS should be hosted by the ISP, reducing the bandwidth consumed on the Internet connection. Management has stated that the website, once built, will remain static for a year, indicating there won't be frequent DNS updates.

3. The correct order is:

1. Internet router
2. User firewall
3. Perimeter tier
4. Data firewall
5. Data tier

(The Management tier and Backup segment are left unplaced.)

The path from the end user to the data proceeds from the Internet router, through the user firewall, over the perimeter network (the public web servers), through the data firewall, and finally into the data tier. The management and backup subnets aren't used by the end users.


4. C. The Perimeter network is relatively insecure. The Management network is not an option because that subnet will be used for administration and will require strong security to protect administrative accounts. The Backup network eventually will have its bandwidth consumed by constant backup traffic. The Data network is already open to the user tier, so adding Active Directory access won't significantly lower security.

5. C. Management wants high availability, which demands redundancy. The technical design called for a firewall at the user and data tiers, so both must be redundant.

6. A. Since all subnets will be connected to the management network, that network can host the internal namespace and provide account and service authentication. That's all that is required for E*tradeBay because directory requirements for the end users have not been specified. Accounting for redundancy, then, you need two domain controllers on the management domain.

7. B. Of the answers given, the only element for which redundancy is not discussed is Internet connectivity. The plan does mention an upgrade to a SQL cluster, NIC teaming, and multiple web servers. Note that other components are missing redundancy as well, but they are not listed as possible answers for this question.


Chapter 6: Capacity Planning

MICROSOFT EXAM OBJECTIVES COVERED IN THIS CHAPTER:

- Calculate network, server, and cluster capacity. Considerations include memory, CPU, cost, flexibility, manageability, application scalability, and client/server and server/server communications.
- Design an upgrade strategy for networks, servers, and clusters. Considerations include scaling up and scaling out.
- Calculate storage requirements. Considerations include placement, RAID level, and redundancy.


An important consideration when designing a website infrastructure is capacity planning. Indeed, three objectives on the exam are devoted to this issue. Capacity planning is the process of measuring the resources needed to perform operations within your site, and using that information to properly scale the site infrastructure. With insufficient resources, your site's performance will slow down when it's under peak load. And that means visitors may be encouraged to try another site. In this chapter we will examine methods you can use to calculate network, server, and cluster capacity. Calculating capacity can be done theoretically or empirically, and we'll look at both techniques. Then we'll discuss the importance of doing these capacity calculations on an ongoing basis so that you can design an upgrade strategy. Finally, we'll discuss storage requirements, and how to use data placement, RAID, and redundancy to increase the availability of your website.

The Capacity Calculation Formula

Capacity planning is the process of measuring a website's ability to serve content to its visitors at an acceptable speed. The planning is done by measuring two things: the number of visitors the site currently receives, and how much demand each user places on the server. Then these measurements are used to calculate the computing resources necessary to support current and future usage levels at the site. The resources included in capacity calculations are memory, CPU, network bandwidth, and disk storage. The site's capacity itself is determined by three factors:

Number of users: As a website's popularity increases, capacity must increase, or users will experience performance degradation.


Server capacity and configuration of hardware and software: The site's capacity can be increased by upgrading the computing infrastructure, thereby allowing more users or more complex content, or a combination of the two.

Site content: On a dynamic and complex site, the servers may have to do more work per user, thus reducing the capacity of the site. Conversely, increased capacity can be obtained by simplifying content.

In this chapter we will learn how to measure the load on each resource within a site. Once that is known, capacity planning is a simple equation:

Number of supported users = Hardware capacity / Load on hardware per user
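If you want to sanity-check such numbers quickly, the equation is trivial to script. Here's a minimal Python sketch that simply encodes the formula above; the function name and sample figures are hypothetical:

```python
# A minimal sketch of the capacity equation; the function name and the
# sample figures are hypothetical.

def supported_users(hardware_capacity: float, load_per_user: float) -> int:
    """Number of supported users = hardware capacity / load per user."""
    return int(hardware_capacity // load_per_user)

# Example: 1,540,000 units of capacity (say, a T1's bits per second)
# against a per-user load of 54,320 units per second
print(supported_users(1_540_000, 54_320))   # 28
```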

Calculating Network Capacity

There are several potential locations of bottlenecks in a website network. Many website designs use at least three different network segments, each of which is a potential bottleneck. When calculating network capacity, each segment should be treated separately. These segments, a simple view of which is illustrated in Figure 6.1, are as follows:

- The forward-facing Internet segment
- A back-end segment (sometimes called the data segment)
- A management segment

FIGURE 6.1 A simple network: management servers on the management network, front web servers between the Internet network and the data network, and database servers on the data network


This section discusses the types of networks available for the Internet connection and the internal networks.

Internet Bandwidth

The speed of your Internet connection is an obvious potential bottleneck. As the popularity of a website increases, the quality of its connection to an ISP (Internet service provider) can limit the site's performance. Client/server communications for a website are usually over the Internet using WAN (wide area network) connections. Intranet web servers have client/server communications, too, but these are often delivered over LAN (local area network) technology.

Microsoft Exam Objective: Calculate network capacity. Considerations include cost, flexibility, and client/server communications.

There are three ways to address bottlenecks related to Internet bandwidth:

- Scale out by adding a redundant connection
- Scale up by increasing the speed of your connection
- Reduce the amount of data you send

When you scale out a network connection, you install an additional network connection. In addition to increasing bandwidth, scaling out has the advantage of increasing your site's reliability by providing multiple connections should one route have a problem. For example, a site might have two T1 connections, for a combined bandwidth of 3Mb/sec. If one of the T1s goes down for any reason, traffic can still reach the site over the redundant connection.

Scaling up is accomplished by increasing the bandwidth of the network connection. For example, you might scale up from a 56K Frame Relay connection to a T1 connection.

The decision between scaling up and scaling out is a balancing act between costs and technical requirements. As bandwidth increases, generally so does cost. For example, the cost difference between a T1 and a T3 connection is significant. If your bandwidth requirements are only marginally greater than what a T1 provides, then from a financial perspective you'd be better served by adding a second T1. On the other hand, if your bandwidth requirements are significantly above the T1, purchasing a T3 is less expensive than 30 T1 circuits.

Reducing the amount of data you send out can help when you're experiencing an unexpected increase in traffic. This reduction can be realized by removing graphics, sound, and video files from outgoing transmissions. You'll see immediate improvements by taking this step, rather than having to wait for installation of faster or additional connections.

Popular Internet connection speeds are listed in Table 6.1.

TABLE 6.1 Popular Internet Connection Speeds

Connection Type      Speed
56K Frame Relay      56.0 Kbps
ISDN                 56.0–64.0 Kbps
T1                   1.54 Mbps
T3                   45.0 Mbps
ATM                  155 Mbps

Exam Tip: Remember that connection speeds are given in bits per second (bps), also sometimes referenced as b/sec. Most file references, on the other hand, are given in bytes per second (Bps or B/sec). When calculating the required bandwidth, be sure to convert bytes to bits by multiplying by 8!

Calculating the Bandwidth Needed

Now that we have an understanding of bandwidth, let's take a look at how to determine a website's required bandwidth. There are two methods for calculating this requirement. The first is estimation: you start with an understanding of your site, the files, and how the clients use your site. Using these factors, you can create estimates of the average size of the files used as clients navigate through the website, and the number of concurrent users. This gives you an estimate of the number of files of a given size that will need to be transmitted on the client/server network (which is commonly the Internet connection). The second method of determining bandwidth requirements is to measure it in a lab setting.

Let's work through an example of calculating bandwidth using estimates. Assume you have a website with a T1 connection of 1.54 Mbps, serving up static HTML pages with an average size of 5K. The first thing to do is calculate the actual traffic requirements per page. Each page of data also requires a certain amount of bandwidth for overhead. The total bandwidth of the sample 5K page is composed of the following:

- The TCP connection request (approximately 180 bytes)
- The GET request (approximately 250 bytes)
- Packet overhead (approximately 1360 bytes)
- The actual data (5000 bytes)

In order to transmit the single 5K file, you’ll need 54,320 bits. We get that by adding the data and overhead (5000 + 180 + 250 + 1360) and multiplying by 8 bits per byte. We divide the T1 bandwidth of 1,540,000 bits per second by 54,320 bits per page, to arrive at 28. The T1 connection in our example, then, will support about 28 pages per second. Please remember that this is the best-case scenario.
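The same arithmetic, expressed as a short Python sketch (the overhead constants are the approximations listed above):

```python
# The 5K-page example as code; the overhead constants are the estimates
# from the list above.

TCP_CONNECT_BYTES = 180
GET_REQUEST_BYTES = 250
PACKET_OVERHEAD_BYTES = 1360
PAGE_DATA_BYTES = 5000          # the 5K page itself

bits_per_page = (TCP_CONNECT_BYTES + GET_REQUEST_BYTES
                 + PACKET_OVERHEAD_BYTES + PAGE_DATA_BYTES) * 8
T1_BITS_PER_SEC = 1_540_000

print(bits_per_page)                       # 54320 bits to move one page
print(T1_BITS_PER_SEC // bits_per_page)    # about 28 pages/sec, best case
```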

The numbers given here for overhead are estimates; each request varies in the exact number of bytes.

Larger files will have a smaller percentage of associated overhead. This is because there is only one TCP connection request and only one GET request, regardless of file size.

In addition to calculating the bandwidth, System Monitor counters can capture the bandwidth actually used during operations (see Figure 6.2). This is the second, "lab setting" way to estimate the required bandwidth, and it is often the easiest. You use the Windows System Monitor utility to capture data while loading the website as a typical user would. (In Windows NT 4.0, this utility was called Performance Monitor.)


The primary System Monitor counter for estimating bandwidth is Network Interface\Bytes Total/sec. This counter gives you the rate at which bytes are sent and received over each network adapter, including framing characters. Network Interface\Bytes Total/sec is the sum of Network Interface\Bytes Received/sec and Network Interface\Bytes Sent/sec.

FIGURE 6.2 The System Monitor utility, capturing network interface data

Capturing this network interface data will give you the bandwidth requirements per user. Obviously, the more accurately you simulate a typical user's load, the more accurate your bandwidth calculations will be. Using this calculated bandwidth per user, you can move forward to the required bandwidth by multiplying by the estimated number of concurrent users (a number that will probably come from marketing projections). Conversely, you can divide this per-user figure into various typical bandwidths to estimate how many users each connection type will support. In this section we've studied the process for estimating capacity for the client/server network (which is also sometimes called the forward-facing Internet segment).
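As a rough illustration of that last step, the following hypothetical Python sketch turns a measured per-user rate (in bytes per second, as System Monitor reports it) into an approximate user count for a few of the connection types from Table 6.1:

```python
# Hypothetical sketch: turn a measured per-user rate (bytes/sec, as
# Network Interface\Bytes Total/sec reports it for one simulated user)
# into an approximate user count per connection type from Table 6.1.

LINKS_BPS = {
    "56K Frame Relay": 56_000,     # bits per second
    "T1": 1_540_000,
    "T3": 45_000_000,
}

def users_per_link(bytes_per_sec_per_user: float) -> dict:
    bits_per_user = bytes_per_sec_per_user * 8     # bytes -> bits
    return {name: int(speed // bits_per_user)
            for name, speed in LINKS_BPS.items()}

# Example: System Monitor showed 6,790 bytes/sec for one typical user
print(users_per_link(6_790))   # {'56K Frame Relay': 1, 'T1': 28, 'T3': 828}
```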

Next, we'll see that the same process applies to the other segments of the network.

Management and Data Networks

Network capacity planning for the remaining two network segments, the management and back-end (data) segments, is performed in much the same way as for the Internet segment. The primary difference is that generally these segments operate on LAN rather than WAN technologies. The management and data networks support server/server communications.

Microsoft Exam Objective: Calculate network capacity. Considerations include cost, flexibility, manageability, and server/server communications.

Why would you want separate management and data segments? For security reasons, the data network should not be exposed directly to the Internet. By creating a distinct network segment, isolating the data servers from the Internet, you add a layer of protection to critical data.

Security is discussed in more detail in Chapters 7 and 8.

Separate management segments are not as prevalent as separate data segments, because some sites combine the data and management segments. Management segments allow for monitoring the servers, backing them up, and deploying content without affecting performance of either the Internet segment or the data segment. Although the process of determining the connection speed requirement is the same as for the Internet segment, different connection technology is available for the management and data segments. With LAN technology, you have more options and thus more flexibility. Typical LAN topologies and their speeds are listed in Table 6.2.

TABLE 6.2 Popular LAN Connection Speeds

Connection Type      Speed
10Base-T             10 Mbps
100Base-T            100 Mbps
1000Base-T           1000 Mbps
FDDI                 100 Mbps


In addition to these basic speed fundamentals, LAN technologies have some other characteristics you will need to consider when designing capacity for your server/server networks. Network design was discussed in Chapter 5, as well, so only the components that affect capacity are covered here in Chapter 6.

Recall from Chapter 5 the discussion of n-tier networking. Often each tier of business logic is configured on its own physical network. That gives you the advantage of extra bandwidth on each tier. Manageability is also increased because the logical tiers correspond to physical tiers, keeping diagnostics simple. It's much easier to diagnose problems when physical networks equate with the logical tiers. Security management is easier, too, for the same reason.

The first three topologies listed in Table 6.2 are all based on Ethernet. (Ethernet was previously known as Carrier Sense Multiple Access with Collision Detection, CSMA/CD.) The original design of Ethernet was a shared medium, meaning all systems on the Ethernet segment shared the available bandwidth. For example, while two computers are exchanging a packet, other computers on that segment cannot communicate. Hubs, whether classified as 10 or 100 Mbps, are fully shared media. All systems connected to a hub share the total bandwidth with all other systems.

Switches were developed to alleviate this concentrated demand. Switches work by "switching" paths electronically, creating virtual segments between communicating nodes. The result is that each pair of communicating systems gets a dedicated circuit, which allows other systems to communicate simultaneously. This can significantly increase the bandwidth of a network segment. The price of switches has come down in recent years, and if you're purchasing new equipment there is no reason to select a hub architecture over a switched environment.

Switches also can increase network bandwidth by allowing full-duplex communications. Hubs operate in half-duplex mode, meaning a device can either send or receive data at any moment, but not both. Full-duplex connections can send and receive at the same time. This may provide extra capacity in the data network. For example, a web server can be receiving the response of a data query on behalf of one client while concurrently sending another request on behalf of a second client. If both your network card and your switch support full-duplex, by all means use it; there is no negative aspect at all to running in full-duplex.

In this section, we have found that the process of determining the bandwidth requirements for server/server communications commonly used in data or management networks is no different from the calculation described for client/server networks.


Calculating Server Capacity

Server capacity comprises CPU, memory, and storage. If a website has inadequate server resources, it may become slow or unresponsive or even totally unusable during peak traffic periods. There is no simple way to predict CPU and memory requirements. Setting up a test environment and taking measurements is mandatory.

Microsoft Exam Objective: Calculate server capacity. Considerations include memory and CPU.

In order to properly measure the load placed on your servers, you need to simulate a large number of “users” hitting your website. There are numerous third-party tools for doing this. In addition, Microsoft offers a free tool, the Web Application Stress Tool (WAST). WAST replaces the previous Web Capacity Analysis Tool (WCAT). Both WAST and WCAT are included in the Internet Information Service (IIS) 5.0 Resource Kit.

To download the free WAST tool, go to http://webtool.rte.microsoft.com.

WAST is quite powerful. You can create scripts manually, by recording browser activity, by pointing to an IIS log file, by selecting files in your content directory, or by importing a script. The IIS log file is particularly useful if your website already exists, because it allows the script to model actual traffic to the web servers based on previous traffic. If you are testing a new site, you can use the Record feature in WAST to record the viewing pattern you expect your visitors to follow. WAST and WCAT are multithreaded tools. Each thread simulates a single site user. Multiple client machines can be coordinated. This flexibility allows you to stress-test a site’s architecture with only a few client machines.

You won’t be expected to know how to use the tool on the test. On the test, you will be given the results of WAST testing and asked questions concerning the results.


Certainly WAST is an important tool in your capacity planning arsenal. But actually using WAST is only half the process. The other half is taking the appropriate measurements and interpreting the results. System Monitor is the tool for collecting server measurements, just as it is for measuring bandwidth. To monitor your servers, you can view data in a graph or you can collect the data in log files for use in other applications. Viewing data in a graph is helpful for spot-checking performance. Collected (logged) data is more applicable to defining baselines and predicting performance capacity.

As mentioned earlier for WAST, you will not be tested on using System Monitor, but you will be tested on your ability to analyze the results.

In the next couple of sections, we will examine the specifics of planning the memory and CPU capacity of your servers.

Calculating Memory Requirements

Providing enough memory for your servers is critical. There is really no such thing as "too much" RAM. You'll have to consider two limits to memory amounts: how much memory the OS supports, and your budget. Table 6.3 lists the memory limitations for the various Windows 2000 versions.

Microsoft Exam Objective: Calculate server capacity. Considerations include memory.

TABLE 6.3 Windows 2000 Memory Limits

OS Version                       Maximum Memory
Windows 2000 Server              4GB
Windows 2000 Advanced Server     8GB
Windows 2000 Datacenter Server   64GB


Optimizing the memory in your server is the first thing to investigate when tuning a server—it's the quickest way to increase performance for most applications. The OS optimizes memory, to an extent, automatically. Windows 2000 optimizes memory by adjusting the size of caches, memory pools (paged and nonpaged), and the paging file.

In order to properly interpret the data captured in System Monitor, it is important that you understand how an application uses its memory. Servers in a web farm can run a variety of applications; for example, a web server or a database server. Since this book and the exam are about websites, we'll use the Microsoft web server, IIS, as the application for our discussion. IIS runs as a process, Inetinfo.exe, on Windows 2000. Windows allocates memory to each process in chunks known as the working set. Individual threads within each process get memory allocated from that working set. Windows adds memory to the working set as the process grows. All memory within a working set is pageable, meaning it can get swapped out to disk. Some threads within a process require nonpageable memory; one example is TCP/IP connections. As the number of TCP/IP connections increases, Windows can run out of free memory (fortunately, each connection only uses 10K).

Within System Monitor you have several counters available to monitor memory (see Figure 6.3). Following are descriptions of the most essential ones:

Memory\Available Bytes: Tracks the total amount of available memory in the system. The OS tries to keep this value above 4MB. Ideally, it should be at least five percent of the total system memory.

Process\Working Set: Inetinfo: Tracks the total amount of memory used by the Inetinfo.exe process. This counter only reports the most recent value, but by collecting data over time you can get important information about the memory used by IIS. If you are measuring utilization for another application, you'd select its process, of course, rather than Inetinfo.exe.

Memory\Pages/sec.: Tracks the rate at which pages are read from or written to disk to resolve hard page faults. This count is a primary indicator of the kinds of faults that cause systemwide delays. It is the sum of Memory\Pages Input/sec. and Memory\Pages Output/sec. The count is in numbers of pages, so it can be compared with other counts of pages, such as Memory\Page Faults/sec., without conversion. This number includes pages retrieved to satisfy faults in the file system cache (usually requested by applications) and noncached mapped memory files.

Process\Page Faults/sec.: Inetinfo: Tracks the number of times per second that Windows has to page to disk pieces of the Inetinfo.exe working set. This number should be kept as small as possible. Again, if you are measuring utilization for an application other than IIS, you'd select its process instead of Inetinfo.exe.

Memory\Cache Bytes: Tracks the size of the file system cache, which by default is set to use up to 50 percent of available physical memory. If Inetinfo.exe is running out of memory, the system will automatically reduce the cache size, so this is an important counter to watch over time for trend analysis.

FIGURE 6.3 System Monitor, showing the critical memory counters

With an understanding of the IIS memory requirements, we can estimate the amount of memory required for the server. The basic process is to use WAST or another load tool to load up your web servers to the expected peak user load. While the test is running, you collect System Monitor counters. For optimal performance, you'll need sufficient memory so that Inetinfo.exe can be kept in memory during peak loads, plus enough to run the nonpageable items. Additionally, if you have web applications that run outside the Inetinfo.exe process, you'll need to account for the maximum size of those processes' working sets as well.
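To make that estimate concrete, here is a hedged Python sketch; the 10K-per-connection figure comes from the discussion above, and every other input is a hypothetical System Monitor reading:

```python
# Hypothetical memory estimate for an IIS server under peak load. The only
# figure taken from the text is the ~10K of nonpageable memory per TCP/IP
# connection; the rest are invented System Monitor readings.

def required_ram_mb(inetinfo_working_set_mb: float,
                    other_working_sets_mb: float,
                    tcp_connections: int,
                    os_and_cache_mb: float) -> float:
    nonpageable_mb = tcp_connections * 10 / 1024   # 10KB per connection
    return (inetinfo_working_set_mb + other_working_sets_mb
            + nonpageable_mb + os_and_cache_mb)

# 50MB Inetinfo working set, 20MB of out-of-process web applications,
# 5,000 concurrent connections, 128MB reserved for the OS and file cache
print(required_ram_mb(50, 20, 5_000, 128))   # roughly 247MB
```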

Next we'll estimate the CPU capacity for an application, again using IIS.

Calculating CPU Capacity

Web applications can be processor bound, meaning the CPU is the limiting factor in performance. This is particularly true with dynamic and e-commerce sites. The process of calculating processor load is not much different from calculating memory. There are, however, a few things to watch out for.

Microsoft Exam Objective: Calculate server capacity. Considerations include CPU.

Potentially misleading data can occur if your test server doesn't have enough memory. When your test server is spending time paging threads to disk, the processor is busy performing that paging. That can lead to artificially high processor utilization.

Several counters are available within System Monitor to track CPU activities (see Figure 6.4). The most essential are as follows:

System\Processor Queue Length: Tracks the number of threads in the processor queue. There is a single queue for processor time even on computers with multiple processors. A sustained processor queue of greater than two threads generally indicates processor congestion. This counter displays the last observed value only; it is not an average.

Processor\% Processor Time: Tracks the percentage of time when the processor is executing a non-Idle thread. (Each processor has an Idle thread that consumes cycles when no other threads are ready to run.) This counter was designed as a primary indicator of processor activity. It is calculated by measuring the time spent by the processor executing the thread of the Idle process in each sample interval, and subtracting that value from 100 percent. This calculation can be viewed as the percentage of the sample interval spent doing useful work; it displays the average percentage of busy time observed during the sample interval. You should monitor each processor in a multiprocessor system.

Process\% Processor Time: Inetinfo: Tracks the percentage of elapsed time during which all threads of this instance of Inetinfo.exe used the processor to execute instructions. (An instruction is the basic unit of execution in a computer; a thread is the object that executes instructions; and a process is the object created when a program is run.) Code executed to handle some hardware interrupts and trap conditions is included in this count. This counter will help you isolate the load placed on the server by the Web service. If you are measuring utilization for an application other than IIS, select its process rather than Inetinfo.exe.

FIGURE 6.4 System Monitor, showing critical CPU counters

Once you have collected the critical counters, it’s a simple process to project the required CPU capacity for your application and anticipated client load. The two counters System\Processor Queue Length and Processor\% Processor Time described just above are helpful general indicators of CPU performance. It’s also a good idea to track a counter unique to the application—Process\% Processor Time: Inetinfo in our example—to ensure that the application is actually getting the resources. For example, let’s say you load the server simulating 25 users. If the % processor time is 25% with the application using 22%, you can see that the application is getting the resources needed. Further, in our IIS example, you see that the server would handle 100 users at maximum utilization.
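The projection in that example is simple division. A small Python sketch, using the hypothetical figures from the example:

```python
# The example's projection: 25 simulated users drive total CPU to 25%,
# so each user costs about 1% of the server's processor capacity.

def max_users(simulated_users: int, cpu_percent: float,
              target_percent: float = 100.0) -> int:
    cpu_per_user = cpu_percent / simulated_users
    return int(target_percent // cpu_per_user)

print(max_users(25, 25))        # 100 users at maximum utilization
print(max_users(25, 25, 70))    # 70 users if you plan for 30% headroom
```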


In this section, we have looked at the critical System Monitor counters useful for calculating CPU capacity. The process is the same as for measuring memory capacity. In the next section, we’ll continue this same calculation process to include cluster capacity.

Calculating Cluster Capacity

Recall from Chapters 2 and 3 that there are three clustering technologies available in Windows 2000:

- Network Load Balancing (NLB)
- Component Load Balancing (CLB)
- Microsoft Cluster services (MSCS)

The process of calculating cluster capacity is much the same as in the previous sections. Memory and CPU capacity calculations for a cluster are performed in the same manner as for stand-alone servers. You stress-test your server application and collect data with System Monitor, just as we have discussed. In this section, we'll concentrate on the unique issues of capacity planning for cluster services.

Microsoft Exam Objective: Calculate cluster capacity. Considerations include memory, CPU, and application scalability.

With clustering, we are providing high availability to applications such as Microsoft SQL Server and Exchange Server. Scaling out an application server requires that the application be designed from the start to support scaling out. For example, a database must be designed in such a way that it can be partitioned to scale out. If the data design doesn't support partitioning, that SQL server cannot be readily scaled out. Conversely, scaling up an application server is limited to upgrading RAM and CPU. In addition, Cluster services imposes a constraint on the maximum number of nodes: two for Advanced Server and four for Datacenter Server.


Also discussed in Chapter 3 are the two modes of operation for clusters: active/passive and active/active. In an active/passive cluster, one node is actively serving the application (say, SQL Server) and the second passive node is idle, waiting to go to work should the active node fail. In order to maximize the hardware investment, some sites are arranged with the cluster installed in an active/active mode. In this mode, each node is serving an independent application.

It’s important to recall from Chapters 2 and 3 that clusters running Cluster services are not load-balanced and cannot serve the same application and data concurrently. If the first node in an active/active cluster fails, the second node is forced to serve applications from both servers.

One example of the differences in cluster types is shown in Figures 6.5 and 6.6. Here, the database in question is Catalog. In the active/passive mode (Figure 6.5), the first node serves the entire Catalog database. In the active/active mode (Figure 6.6), the Catalog database has been partitioned into two smaller databases, A–L and M–Z. Not all databases are designed so that they can be partitioned.

FIGURE 6.5 Active/passive cluster: an active node serving Catalog A–Z and a passive node on standby for Catalog A–Z, both attached to a shared array

FIGURE 6.6 Active/active cluster: one node serving Catalog A–L (standby for M–Z) and the other serving Catalog M–Z (standby for A–L), both attached to a shared array


For the clustered arrangement, the process to determine and plan for capacity doesn't change. You set up a test server with the application, and then use a tool such as WAST to stress the server and take key measurements using System Monitor. These key measurements are the same as for any other server. You might also need to track counters specific to the cluster application. Let's look at SQL Server, for instance. Although a full description of performance monitoring for SQL Server is beyond the scope of this book, following are descriptions of a few important counters:

SQLServer: Cache Hit Ratio: The percentage of time when SQL Server finds data in its cache, rather than having to go to disk. A Cache Hit Ratio less than 80 percent indicates that SQL Server does not have enough RAM. This can happen even when there is plenty of RAM on the system, if not enough of that RAM has been allocated to the SQL Server process.

SQLServer\Databases: Transactions/sec.: Indicates the number of transactions per second on the selected database.

Clusters are unique in that scaling is not straightforward. For example, suppose you have a database with performance requirements that exceed the abilities of a single server. This is different from web-server expansion—you can't simply add another database server to share the load. Current Microsoft technology allows only a single server to provide access to a database. If the single server is inadequate, then the database must be partitioned, usually along some logical boundary such as alphabetic, numeric, or geographic. The front-end applications must also then understand the partitioning. For example, an online realtor's database might partition its home inventory based on geographic location, one database serving the eastern part of the country and one database serving the west.

In active/active clusters, one other factor must be taken into consideration. Each node must have the resources not only to perform its primary function, but also to perform its failover function concurrently. In failover, when one node in the cluster stops working, Cluster services moves the functionality to the other node. That second node then must perform the functions of both servers.
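A quick way to reason about that constraint is to check whether the two nodes' combined utilization would fit on a single node. The following sketch is a simplification with hypothetical utilization figures:

```python
# Hypothetical failover check for an active/active cluster: after a node
# failure, the surviving node must carry both workloads at once.

def survives_failover(node_a_util: float, node_b_util: float,
                      ceiling: float = 100.0) -> bool:
    """True if the combined load of both nodes fits on a single node."""
    return (node_a_util + node_b_util) <= ceiling

print(survives_failover(40, 45))   # True: 85% combined leaves some headroom
print(survives_failover(60, 55))   # False: 115% would overload the survivor
```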

As stated already in this chapter, the same process is used to initially calculate capacity requirements for the network, server, and cluster components in a website. In the next section, we'll see that this same process continues after a site is operational, for predicting and planning site upgrades.

Planning an Upgrade Strategy

Highly available websites need to perform efficiently at all times. From the end user's perspective, a poorly performing website is just as bad as a site that is unreachable. A well-designed architecture allows for expanding capacity as required. Part of planning for the future is to continually monitor your site's performance. At any point in time, you should be able to predict the number of users your site can support.

Microsoft Exam Objective: Design an upgrade strategy for networks, servers, and clusters. Considerations include scaling up and scaling out.

Capacity Planning Is a Continual Process

You run the website of a membership-driven association. The Marketing VP walks into your office on a Friday afternoon and informs you of an unexpected decision to launch a membership drive for the organization. The VP expects that the campaign will result in a 50 percent temporary increase in web traffic as potential new members browse the site to research the benefits of membership. Further, the VP expects a 10 percent increase in online membership registration.

The key in this scenario is that on Friday afternoon, when the VP drops by to announce her plans, it's too late to start capacity planning and performance monitoring. Continual monitoring of performance is imperative to planning an upgrade.

Looking at recent performance logs for the site, you see the front-end web servers are running at 30 percent CPU utilization and 50 percent memory utilization. So the anticipated temporary increase in web traffic is not a problem. Examination of your application cluster indicates that the database RAID volume is at 45 percent utilization. So an increase of 10 percent in membership is not a long-term issue, either.


The result is that you can comfortably tell the Marketing VP that her membership drive won’t cause a significant problem. Neither the temporary increase in traffic nor the long-term growth in membership will require an upgrade to the website itself.

There are two approaches to upgrading the capacity of a server farm: scaling out and scaling up.

Scaling Up: Scaling up, sometimes called scaling vertically, means increasing capacity by upgrading the existing hardware. This upgrade can be the addition of more CPUs to a server, increasing the speed of the CPUs, boosting the amount of RAM, plugging in another network interface, or adding disk drives. Scaling up is typically done at the database tier.

Scaling Out: Scaling out, sometimes called scaling horizontally, means increasing capacity by adding servers to the farm. Scaling out has the advantage of enabling the addition or removal of servers to meet temporary demands without interrupting site operations. Scaling out is typically used at all layers except the database layer.

Calculating Storage Capacity

Storage capacity is probably the easiest of the resources to calculate and plan for. In planning storage capacity, you must account for the Windows 2000 operating system, the applications (such as SQL and Exchange), and the data (such as HTML files, ASP files, and any databases).

Microsoft Exam Objective: Calculate storage requirements. Considerations include placement, RAID level, and redundancy.

For a highly available website, you will also want to plan for disk redundancy. In computers, there are two components that have moving parts: disks and cooling fans. Those moving parts increase the probability of component failure. The technology used for disk redundancy is known as RAID, Redundant Array of Inexpensive Disks.

RAID is discussed at length in Chapter 4, which covers the objective “Design data storage for high availability.”

Other considerations of storage capacity are data location and data redundancy. These can be particularly important for very large sites that may be geographically diverse.

RAID Technology

RAID arrays can be configured in various levels. Here we will discuss the levels predominantly in use. Windows 2000 Server versions support software implementation of RAID, but in an enterprise environment a hardware RAID adapter is a more logical choice. See Chapter 4 for a more complete discussion of RAID. One of the concerns when calculating RAID capacity is the loss of disk space used for redundancy or parity. The amount of storage loss depends on the RAID level selected.

RAID 0: RAID 0 is disk striping. Data is striped (divided) across all disks in the array. This RAID level improves both read and write performance because all operations are performed across multiple disk drives. RAID 0 provides no fault tolerance. There is no storage loss associated with RAID 0 because all the disks are used to store data. Two 36GB disks will provide 72GB of available storage.

RAID 1: At the RAID 1 level, data is mirrored (written to) a redundant disk, creating a fully functional identical copy. RAID 1 improves read performance, since data can be read from either disk, but write performance is slightly degraded. There is a 50% storage loss, as one disk is a duplicate. Two 36GB disks will provide 36GB of available storage.

RAID 2 through 4: These levels are essentially developmental stages of RAID and are not generally used for data storage.

RAID 5: RAID 5 is disk striping with parity. Data and parity information are striped across all disks. Redundancy is provided via the parity information. RAID 5 offers good read performance, but writes are somewhat slower than RAID 1 writes because parity must be calculated on each write. The equivalent of one disk in the array is lost to parity. Storage is calculated with the formula (N – 1) * capacity, where N is the number of disk drives and capacity is the storage capacity of each physical drive. RAID 5 requires a minimum of three disks. Three 36GB disks will provide 72GB of available storage.

RAID 10 (1+0): RAID 10, also called RAID 1+0, is mirroring with striping. Data is striped across a RAID 0 array, and that array is then mirrored via RAID 1 to another RAID 0 array. This provides the performance of disk striping with the high availability of RAID 1. There is a 50% storage loss, as in RAID 1.

Windows 2000 Server versions support RAID levels 0, 1, and 5 in software. Hardware RAID controllers offer performance advantages by offloading the calculations from the CPU, as well as offering other levels of RAID. One additional benefit of a hardware RAID solution is that it may support hot swapping, which lets you replace a failed drive in the array while the system continues to operate, so there's no downtime. Microsoft Cluster services requires the use of a hardware RAID controller; software RAID is not supported.
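These storage-loss rules are easy to capture in a few lines. The following Python sketch, a simplification that assumes identical disks, reproduces the capacity figures quoted above:

```python
# Usable capacity per RAID level, following the storage-loss rules above.
# A simplification: assumes all disks are identical; sizes are in GB.

def usable_gb(level: str, disks: int, size_gb: float) -> float:
    if level == "RAID 0":      # striping: every disk holds data
        return disks * size_gb
    if level == "RAID 1":      # mirroring: one disk is a duplicate
        return size_gb
    if level == "RAID 5":      # striping with parity: (N - 1) * capacity
        return (disks - 1) * size_gb
    if level == "RAID 10":     # mirrored stripes: 50% storage loss
        return disks * size_gb / 2
    raise ValueError(f"unsupported level: {level}")

print(usable_gb("RAID 0", 2, 36))    # 72GB, as in the text
print(usable_gb("RAID 1", 2, 36))    # 36GB
print(usable_gb("RAID 5", 3, 36))    # 72GB
```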

Storage Location and Redundancy

There are two perspectives when you are considering storage locations for data. The first is locating the data physically near the users. The second perspective is appropriately positioning various types of data on the various types of storage available.

Data should be stored near the systems that are accessing the data. That may seem so obvious it doesn't need stating. Although the location of the data is really an application performance design issue, you must account for location in your storage calculations. The concept of appropriate storage location is best illustrated with an example. Think about a national real estate website offering multimedia previews of the homes offered for sale. To improve performance of the streaming media to the end users, the media files could be stored in different geographic locations. Further, to ensure their availability to users, a separate copy of each region's files can be stored in adjacent regions.

On the exam, you’ll be given various data storage locations and be asked to account for storage capacity in each location as appropriate.


Let's look at this real-estate example in a little more detail. Assume that the website is distributed to four regions: East, East-Central, West-Central, and West. The total streaming media of all demos of homes is 100GB. If the site were centralized, the storage capacity requirement is simple; it's still 100GB. For simplicity's sake, let's say the homes listed for sale are evenly distributed throughout the regions. Each region therefore needs 25GB of storage for its own streaming media files.

This website is designed with storage redundancy. Each region will store a copy of an adjacent region's files, for high availability. So the East data center will also store a copy of the media files for homes in the East-Central region, and East-Central will store a copy of the files for homes in the East region. That results in an additional 25GB. So every regional data center ultimately requires 50GB of storage. In the real world and on the exam, data is often not as evenly distributed as in our example, but the concept and process of determining storage location is the same.
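Here is the same math as a small Python sketch; the region names and the adjacent-region pairing come from the example, and the rest is illustrative:

```python
# The regional storage math from the example: each data center stores its
# own 25GB of media plus a copy of one adjacent region's files.

regions_gb = {"East": 25, "East-Central": 25, "West-Central": 25, "West": 25}
stores_copy_of = {"East": "East-Central", "East-Central": "East",
                  "West-Central": "West", "West": "West-Central"}

for region, own in regions_gb.items():
    total = own + regions_gb[stores_copy_of[region]]
    print(f"{region}: {total}GB")    # 50GB at every regional data center
```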

Now for the second aspect of storage location: taking into account the application requirements of the site's infrastructure. The location of various types of data is an important design criterion for the servers; it affects performance as well as availability. Simply increasing the number of disks (physical disks, or spindles) on a server can improve performance if they're used properly. Installing the OS and page files and log files, for example, on discrete physical disks can increase the system I/O and allow multiple read/write operations to occur concurrently. Some general guidelines for storage by type are as follows. Note that the phrase physical disk set is used to represent physical RAID volumes.

- Put operating system files on one physical disk set.
- Put operating system page files on one physical disk set.
- Put applications on one physical disk set.
- Put application data on one physical disk set.
- Keep application data logs (for example, SQL transaction logs) on one physical disk set.
- Arrange disk drives on multiple controllers to increase bus I/O.
- Increase the number of disks in a RAID 5 volume.


There is one caveat to the second guideline, about putting OS page files on a separate physical disk set. If the paging file is moved off the boot volume, the system won't be able to create a memory dump for analysis should it crash (with the dreaded blue screen of death). A complete memory dump requires a page file on the boot volume equal to the amount of RAM plus one megabyte.

Summary

In this chapter we looked at the basics of measuring and planning capacity. The primary resources involved in capacity planning are CPU, memory, network bandwidth, and disk storage. Capacity planning is an ongoing exercise. To properly measure capacity, you must understand your site's content, your typical user, and the hardware involved. Each resource is easy to measure on its own.

Network capacity must be calculated independently for each network segment. This planning can be done by calculating average packet size and quantity, or can be measured empirically. Different network technologies can be used on each segment to meet the capacity requirements if necessary.

Server capacity includes CPU and memory capacity. Server capacity is best measured empirically. Options for increasing server capacity include scaling up by adding memory or processors, or scaling out by adding servers.

Clusters are composed of multiple servers. Cluster capacity is calculated by measuring the capacity of each member node. We learned about various counters provided by the Windows System Monitor utility that are useful to us when stress-testing systems. These counters help to measure system activity and predict resources required for the CPU, memory, network, and disk drives.

Scaling up and scaling out provide two means for increasing your site's capacity. Upgrading your site can be accomplished by scaling up or out at each component level. Networks can be scaled up by increasing capacity, or scaled out by adding network segments. Servers can be scaled up by adding memory or processors, or scaled out by adding servers.


Finally, we discussed various RAID technologies. The location of data can play an important role in your site architecture. We looked at using redundant sets of data to increase reliability as well as speed of access. The effects of using the different RAID levels on calculating the storage requirements were explained. When using RAID, some capacity is lost to the redundancy, which must be considered when planning storage.

Exam Essentials

Know what resources are important in capacity planning. You should track the utilization of memory, CPU, network, and disk resources.

Be familiar with the System Monitor counters that are important for tracking memory. Memory\Available Bytes tracks current available memory. Process\Working Set (process executable) helps you track the memory used by the given application. Process\Page Faults/sec. (process executable) tracks the hard faults by the given application. Memory\Cache Bytes tracks the size of the file system cache, an important trend indicator, especially for IIS.

Be familiar with the System Monitor counters that are important for tracking CPU resources. Process\% Processor Time (process executable) measures the CPU utilization of the given process. Processor\% Processor Time measures the overall CPU utilization. System\Processor Queue Length is the number of threads waiting in the queue to be executed; a sustained count greater than 2 indicates a processor bottleneck.

Know the popular Internet connection speeds. Know the speeds of typical circuits: 56K Frame Relay, ISDN, T1, T3, and ATM. Also be aware that the speeds are expressed in bits per second.

Know the RAID levels that are used in modern systems. RAID 0 is disk striping, RAID 1 is disk mirroring, and RAID 5 is disk striping with parity. Understand the performance implications of each level.

Understand upgrade techniques for web farms. Scaling up and scaling out are the two techniques to upgrade your web farm.


Key Terms

Before you take the exam, be certain you are familiar with the following terms:

10Base-T, 100Base-T, 1000Base-T, active/active, active/passive, ATM, bottlenecks, capacity, capacity planning, Carrier Sense Multiple Access with Collision Detection (CSMA/CD), cluster capacity, Ethernet, FDDI, Frame Relay, full-duplex, half-duplex, hubs, ISDN, LAN, network capacity, network segments, partitioning, resources, scale out, scale up, server capacity, switches, System Monitor, T1, T3, WAN, Web Application Stress Tool (WAST)


Review Questions

1. Which resources are important in capacity planning? Select all that apply.

A. CPU
B. Redundant power supplies
C. Network bandwidth
D. Video refresh rate

2. Your website provides multimedia content. Each file is approximately 2MB in size, and there are 1000 files stored for the site. The files get updated weekly. During the week the updated files are staged on a server in the management network. There is a weekly one-hour maintenance window available to deploy the updated files. What is the slowest possible network connection that will accommodate this situation?

A. T1
B. 10MB
C. 100MB
D. ATM

3. You just received a new web server. It has three 18GB drives in it. You want to use all three drives and provide for fault tolerance. Which of the following RAID level(s) will you be able to use?

A. RAID 0
B. RAID 1
C. RAID 5
D. RAID 0+1


4. Vic's company is going to start selling its products via the website. He has been tasked with configuring the database server. The current catalog database is 250MB. Developers are predicting that the Customer and Order databases will require 50GB. The marketing department is expecting to double orders within a year. Vic wants to configure the server with a RAID 5 volume to handle the databases for the next year. Which of the following will work?

A. Three 36GB disks
B. Four 36GB disks
C. Five 18GB disks
D. Six 18GB disks

5. What Windows management tool can you use to measure real-time resource requirements when testing a server?

A. Performance Analyzer
B. Capacity Manager MMC plug-in
C. Capacity Manager Resource Kit utility
D. System Monitor

6. The current web server in the Maris corporate computer room is reaching its full capacity. As part of growth to accommodate demand, Maris management wants to add fault tolerance. Which of the following upgrade methodologies will allow the site to grow while still using the existing hardware?

A. Scale out with Cluster services.
B. Scale up with Cluster services.
C. Scale out with Network Load Balancing.
D. Scale up with Network Load Balancing.


7. Your office has a T1 connection to the Internet. If the average file size on your web server is 5K, how many pages per second will the T1 support? Note: For the purposes of this question, ignore TCP overhead.

A. 3.8 pages/sec.
B. 38.5 pages/sec.
C. 385 pages/sec.
D. 308 pages/sec.

8. The e-commerce site Kim manages has a Catalog database that cannot be partitioned. The database server is currently at its full performance capacity. Which of the following is a legitimate upgrade option for Kim to consider?

A. Install an active/active cluster.
B. Install an NLB cluster.
C. Add memory and/or CPUs to the existing server.
D. Spread the database over multiple servers.

9. Your company's web server provides dynamic content via ASP (Active Server Pages). Recently, users have started to complain about performance on the website. The server is a dual processor 500MHz system, capable of four processors. System RAM is 256MB. In System Monitor, you observe the following counters:

- Processor\% Processor Time is averaging 70%.
- System\Processor Queue Length is averaging 2.
- Memory\Available Bytes is averaging 300,000.

Which of the following should you do first in attempting to improve performance?

A. Add two more CPUs.
B. Add two CPUs and 256MB of RAM.
C. Add 256MB of RAM.
D. Add another server in an NLB cluster.


10. Your company just signed a contract with a Web content provider. The provider suggested the installation of a dedicated 56K Frame Relay circuit for deploying the content. Over the Frame Relay, how long will it take for 20MB to be deployed?

A. 48 minutes
B. 6 minutes
C. 356 minutes
D. 3.56 minutes


Answers to Review Questions

1. A, C. Both CPU resources and network bandwidth are important in capacity planning. The presence of redundant power supplies doesn't affect performance. Video refresh rate is not an issue for the website users.

2. B. The minimum bandwidth required is determined by the calculation (1000 files * 2MB * 8 bits/byte) ÷ (60 min. * 60 sec./min.), which results in 4.44 Mb/sec. The minimum connection that satisfies the requirement is 10MB. Both 100MB and ATM meet the requirement but are not the minimum. T1 is a WAN technology (and, at 1.54 Mbps, too slow in any case).

3. C. The correct option is RAID 5, disk striping with parity, which does provide fault tolerance. It requires a minimum of three drives. RAID 0 provides no fault tolerance. RAID 1 would only use two of the drives. RAID 0+1 requires at least four drives.

4. B. The database is currently 50GB. Doubling that over the next year is 100GB. Add the Catalog database, and the total required disk space is 100.25GB. So option B, four 36GB disks, is the correct answer: in a RAID 5 it gives 108GB of usable space. Three 36GB disks in a RAID 5 give only 72GB of usable space. Five 18GB disks in a RAID 5 give only 72GB of usable space. Six 18GB disks in a RAID 5 give only 90GB of usable space.

5. D. System Monitor is the Windows 2000 tool that allows you to collect real-time metrics. This utility was called Performance Monitor in NT 4.0. The other options do not exist.

6. C. Scaling out is generally the best option for web servers. Scaling out with Cluster services is an option, but it wouldn't leverage the existing hardware. Scaling up won't add fault tolerance.

7. B. A T1 provides 1.54 megabits/second. A 5K file is 5 kilobytes, or 40 kilobits. Divide 1,540,000 bits/second by 40,000 bits/page, and the result is 38.5 pages per second.


8. C. Upgrading the existing server is the only valid option. An active/active cluster would require that the database be partitioned to spread the database over multiple servers. An NLB cluster is not a reasonable option for a database server.

9. C. The CPUs in this system are very busy, but what's keeping them busy is processing page faults due to a lack of memory. Windows 2000 tries to maintain free memory of at least 4MB. When it falls below 4MB, the system starts swapping processes to the page file. With only about 0.3MB (300,000 bytes) available, this system is in dire need of more memory.

10. A. 20MB of data is really 160 megabits of data to be transferred, so 160,000,000 bits divided by 56,000 bits/second, divided by 60 seconds/minute, is 48 minutes.
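Both of these conversions are easy to sanity-check in code. The following Python sketch is ours, not part of the exam material; the helper names and the treatment of a T1 as a flat 1.54Mb/sec. are simplifying assumptions.

    # Back-of-the-envelope checks for answers 7 and 10.

    def pages_per_second(link_bps, page_bytes):
        # A link moves link_bps bits per second; each page costs page_bytes * 8 bits.
        return link_bps / (page_bytes * 8)

    def transfer_minutes(data_bytes, link_bps):
        # Total bits divided by the line rate gives seconds; divide by 60 for minutes.
        return (data_bytes * 8) / link_bps / 60

    print(round(pages_per_second(1_540_000, 5_000), 1))  # 38.5 pages/sec. (answer 7)
    print(round(transfer_minutes(20_000_000, 56_000)))   # 48 minutes (answer 10)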


Year-End Rush at Northwind Traders


Take a few minutes to look over the information presented in this case study and then answer the questions at the end. In the testing room, you will have a limited amount of time—it is important that you learn to pick out the important information and ignore the “fluff.”

Background

Northwind Traders is an online novelty gift store. On an average day, Northwind processes 20,000 orders. Last year over the holidays the company processed 30,000 orders per day, but it had just started doing business online. This year, Northwind has been aggressively marketing the website in anticipation of holiday shopping. The marketing division expects to see 60,000 orders per day during December, with traffic returning to normal in January. The website performed adequately during last year's busy season, but the executives want to be sure the increased load will be handled as well, so as not to lose the sale of a single calendar or coffee cup! You've been contracted to examine the current system and the IT department's plan for the upcoming flood of orders.

Current Environment

The Northwind Traders website is currently located at a hosting facility. The site is designed in two tiers: The front-end web servers are connected via a firewall to the Internet, and database servers are on a separate network behind the web servers.

Web Servers

Each of the three web servers has two Pentium III 500MHz processors and 512MB of RAM, with two 9GB disks. There is a total of 3GB of data on the servers. During peak periods on a typical day, CPU utilization is 40 percent. Typical memory utilization is represented as follows:

Memory\Available Bytes: 128MB
Process\Working Set: Inetinfo: 50MB
Memory\Cache Bytes: 256MB


Database Servers

Each of the two database servers has two Pentium III 500MHz processors, with 512MB of RAM and seven 18GB disks. Windows 2000 is installed on one of the disks, and six disks are configured in a RAID 5 array.

One database server hosts the product catalog. The Catalog database is 20GB and is not expected to grow over the coming year. Average CPU utilization is 20 percent. System Monitor shows Memory\Available Bytes is 256MB. Average network utilization is indicated by Network Interface\Bytes Total/sec. of 15.

The second database server hosts the Orders database (45GB). This database grows by 1GB every month during which there's typical load. Average CPU utilization is 25 percent. System Monitor shows Memory\Available Bytes is 128MB. Average network utilization is indicated by Network Interface\Bytes Total/sec. of 15.

Networks

The Internet connection for Northwind at the hosting facility is over two T1s. Routing between the T1s is handled by the firewall. The data network is on a 100Mb hub.

Business Requirements

This e-commerce company is one year old. The first holiday season was a success and management is looking forward to a second season that is just as productive. The marketing effort has been aimed at doubling the preceding season's website traffic. In order to meet the expanded business plan, management has directed the IT department to increase the redundancy of the site as much as possible. In the event of a server failure, Northwind wants to prevent its customers from being greeted with a blank screen. However, like most companies in the currently iffy economy, cash is at a premium. You've been charged with using as much of the existing hardware as possible.

Technical Requirements

As part of the site's new architecture, IT management wants to consider a network upgrade to meet two additional objectives: They want to install a separate network segment dedicated to system management functions such as monitoring and backup processes. The second objective is to provide some redundancy at the network level, especially on the data segment.

All servers will continue to run Windows 2000 Server or Windows 2000 Advanced Server as necessary. One feature not in the current configuration is protection of the OS installation through RAID, and Northwind's management wants this implemented in the new architecture.


Questions

1. The Northwind IT plan calls for redeploying the existing two database servers into an active/active cluster. Given all the stated requirements for the new site architecture, designate the resources that will be needed for Northwind's site. From the Disks column, select disk quantities and RAID levels, as appropriate, and place them under resources listed in the Resources column. Some items from the Disks column may be used more than once; some may not be used at all.

Resources                        Disks
Server A: Operating System       1 disk
Server B: Operating System       2 disks
The cluster's Quorum resource    3 disks
Catalog database                 4 disks
Orders database                  5 disks
                                 RAID 0
                                 RAID 1
                                 RAID 5




2. Because of budget constraints, one member of the IT group has suggested delaying the upgrade of the data network. Under the current network design, if IT wants to back up the complete databases, how long will the backups take? Assume the Orders database is at its maximum size of 59GB, and calculate the time to back up during a typical month.
A. 105 minutes
B. 120 minutes
C. 150 minutes
D. 180 minutes

3. What are your recommendations for the front web servers?
A. The current servers will handle the load.
B. Add 512MB to each server.
C. Add an additional server identical to the existing servers.
D. Convert the existing servers to a Microsoft Cluster service cluster.

4. One of the options under consideration for the two database servers is to convert them to an active/active cluster, moving the data disks to a shared disk array. As the IT consultant reviewing the plan, what is your opinion of this configuration option?
A. This configuration will not work.
B. This is an ideal solution.
C. This configuration will work if the cluster is configured as an NLB cluster.
D. This arrangement will work, but performance will be negatively affected if there is a failover during the holiday season.


5. What change to each web server would you recommend in order to increase their availability?
A. Convert the disk drives to RAID 0.
B. Convert the disk drives to RAID 1.
C. Purchase an additional drive and convert to a RAID 5 array.
D. Convert the drives to NTFS.

6. During the 24-hour day, Northwind's website receives about the same number of visitors per hour, and each visit per order generates a total of 1MB of data transmitted to the end user. What is the resulting effect on the Internet bandwidth?
A. Northwind does not currently have enough bandwidth for typical traffic, and definitely not enough to handle the holiday shopping rush. Northwind should add Internet bandwidth immediately.
B. There is enough bandwidth for typical traffic, but not enough for the peak holiday traffic. Northwind should add bandwidth before the holiday season begins.
C. Northwind has enough bandwidth for both typical and holiday traffic.
D. Northwind has a substantial excess of bandwidth and should downgrade the Internet connection in order to reduce costs.

7. Suppose the average session time of a client connection is 10 minutes. If Northwind Traders does not add web servers, choosing to keep the current three, how much memory at any point in time will be used by the TCP connection overhead by the client web visitors during the holiday season?
A. 1.38MB
B. 13.8MB
C. 138MB
D. 200MB



8. From the right-hand column, drag the resource to be measured to the appropriate components on the left. Some resources may be used more than once.

Component                   Resource
Client/server network       Percentage of processor utilization
Server/server network       Available bytes
Web server                  Bytes/sec.
Cluster services cluster    Database disk size
                            Number of partitions
                            RAID 0
                            RAID 1
                            RAID 5


CASE STUDY ANSWERS

Answers

1.
Resources                        Disks
Server A: Operating System       2 disks, RAID 1
Server B: Operating System       2 disks, RAID 1
The cluster's Quorum resource    2 disks, RAID 1
Catalog database                 3 disks, RAID 5
Orders database                  5 disks, RAID 5

There is no requirement that all RAID volumes in a cluster must be equivalent. One of Northwind's technical requirements is to provide RAID protection of the Windows 2000 system files, and this is best accomplished with RAID 1 mirroring. Recall from Chapter 3 that clusters must have a shared quorum disk to store the cluster's initialization information. The Quorum resource, too, is best stored on a RAID 1 volume. The 20GB Catalog database has no growth requirements. Three drives are needed to create a RAID 5 volume larger than 20GB. The Orders database, currently 45GB, grows by 1GB for every typical month. The high-volume month of December, though, has three times as many orders, resulting in a 3GB increase. So, accounting for growth over the coming 12 months requires 45GB current, plus 11GB for the typical months, plus 3GB for December: a total of 59GB. After allowing for the OS, Quorum, and Catalog volumes, there are five drives available. A five-drive RAID 5 volume delivers 72GB, more than enough to handle the Orders database.
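The usable-space arithmetic behind this layout can be verified with a few lines of Python. This is a sketch under the usual rules of thumb (a RAID 1 mirror yields one disk's capacity; RAID 5 yields n-1 disks); the function names are ours.

    # Usable capacity for the 18GB drives chosen in answer 1.
    DISK_GB = 18

    def raid1_usable(disks, size_gb=DISK_GB):
        # A two-disk mirror stores one disk's worth of data.
        assert disks == 2
        return size_gb

    def raid5_usable(disks, size_gb=DISK_GB):
        # Striping with parity sacrifices one disk's capacity.
        assert disks >= 3
        return (disks - 1) * size_gb

    print(raid5_usable(3))  # 36GB, enough for the 20GB Catalog database
    print(raid5_usable(5))  # 72GB, enough for the Orders database at 59GB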



2. C. Northwind's data network is currently 100Mb. The two databases total 79GB. Normal operation of the website consumes 15Mb/sec. for each of the two database servers. That leaves only 70Mb/sec. available for the backup. To calculate the time needed, divide the amount of data in bits (79GB * 8 bits/byte) by the bandwidth available (70 Mb/sec.), and convert to minutes.
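As a quick sketch (our own helper, using the answer's simplification that the leftover 70Mb/sec. is fully usable for the backup):

    def backup_minutes(data_gb, free_mbps):
        # Convert the databases to bits, divide by the spare bandwidth,
        # then convert seconds to minutes.
        bits = data_gb * 1_000_000_000 * 8
        return bits / (free_mbps * 1_000_000) / 60

    print(round(backup_minutes(79, 70)))  # about 150 minutes (answer C)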

3. C. The CPU load during a typical month is 40 percent. If the traffic triples during the holiday season, the existing CPUs clearly will be overloaded. For this Case Study, we'll associate an order with a user. At 20K orders per day and with three servers, each server is supporting 6667 orders. Dividing 6667 orders by 40 percent gives you 16,667 orders per day, per server, if the CPUs operate at 100 percent. For holiday shopping, management expects 60K orders per day. Divide 60K orders/day by 16.667K orders/day/server, and you need 3.6 servers. Since fractional servers haven't been invented yet, Northwind needs one new server for a total of four.
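The same headroom math in Python (structure and names are ours):

    import math

    def servers_needed(peak_orders_per_day, orders_per_server_now, cpu_util_now):
        # Capacity per server if its CPUs ran at 100 percent.
        capacity = orders_per_server_now / cpu_util_now
        return math.ceil(peak_orders_per_day / capacity)

    # 20K orders/day split across three servers, each at 40 percent CPU.
    print(servers_needed(60_000, 20_000 / 3, 0.40))  # 4 servers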

4. D. An NLB cluster will not work for databases. In an active/active cluster, each node must be capable of handling the load of every server during a failover. Average load on the database servers is 20–25 percent. Outside the holiday shopping season, if a failover occurs, each node can handle the cumulative load of 45 percent. However, the load triples during the holidays, so the two servers are loaded to 60 percent and 75 percent, respectively. During a failover, then, the remaining node will be overloaded and performance will be severely reduced.

5. B. RAID 1 volumes will protect the server against failure of a single drive. RAID 0 doesn't increase availability. Nor does NTFS, though it's a good idea for security. With only 3GB of total data, there is no reason to increase capacity, so adding a drive for a RAID 5 array is unnecessary.

6. B. The calculation 20K visits/day * 1MB/visit equals 20GB/day. From there, 60K visits/day * 1MB/visit is 60GB/day. Spread equally over a 24-hour period, this is 231,481 bytes/sec. and 694,444 bytes/sec., respectively. Convert those numbers to bits per second, which is the standard measurement for bandwidth, and you get 1.85Mb/sec. and 5.55Mb/sec., respectively. Northwind has two T1s, at 1.54Mb/sec. each, for a total bandwidth of 3Mb/sec. So existing bandwidth is adequate for the typical traffic, but not for the holiday traffic.
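Converting a daily volume into the sustained line rate it implies is a one-liner worth keeping around (the helper name is ours):

    def sustained_mbps(gb_per_day):
        # Bits per day divided by 86,400 seconds, expressed in Mb/sec.
        return gb_per_day * 8_000_000_000 / 86_400 / 1_000_000

    print(round(sustained_mbps(20), 2))  # 1.85 Mb/sec.: fits two T1s (~3 Mb/sec.)
    print(round(sustained_mbps(60), 2))  # 5.56 Mb/sec.: exceeds them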


7. A. Marketing projects a peak load of 60K in visits/orders per day during holiday shopping days. Since the company has three web servers, each will handle 20K connections during the day. Divide these connections by the number of minutes in a day: 20,000 ÷ (24 hours * 60 minutes/hour). That gives 13.8 connections/min. Since the average session time is 10 minutes, there are 13.8 * 10 connections open on the server at any point in time. Each connection consumes 10K of memory, so multiply 138 connections times 10KB, to get 1380KB or 1.38MB in memory used by TCP connection overhead.
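The steady-state estimate (arrival rate times session length times per-connection cost) looks like this in Python; the 10KB-per-connection figure comes from the answer above, and the function name is ours:

    def tcp_overhead_mb(conns_per_day, session_minutes, kb_per_conn=10):
        per_minute = conns_per_day / (24 * 60)      # new connections each minute
        open_conns = per_minute * session_minutes   # connections open at once
        return open_conns * kb_per_conn / 1_000

    # 20K connections/day per server, 10-minute sessions.
    print(round(tcp_overhead_mb(20_000, 10), 2))    # 1.39MB (the text rounds 13.8
                                                    # connections/min. first: 1.38MB)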

8.
Component                   Resources
Client/server network       Bytes/sec.
Server/server network       Bytes/sec.
Web server                  Percentage of processor utilization; Available bytes
Cluster services cluster    Percentage of processor utilization; Available bytes; Database disk size

Not used: Number of partitions, RAID 0, RAID 1, RAID 5.

Answering this question correctly reinforces your knowledge of capacity planning, and the fact that doing it uses the same process regardless of the resource tier in question. The first two resources are networks, and the fundamental capacity issue for networks is bandwidth. Web servers and clustered servers are also measured by CPU and memory capacity. Database size is a reference to disk size for an application server, which resides on a cluster. RAID levels are not a measure of capacity, but rather an input into capacity. The number of partitions is not a capacity measurement, either.


Chapter 7
Designing Security Strategies

MICROSOFT EXAM OBJECTIVES COVERED IN THIS CHAPTER:

- Design an authentication strategy. Considerations include certificates, anonymous access, Directory services, Kerberos, and public key infrastructure (PKI).
- Design an authorization strategy. Considerations include group membership, IP blocking, access control lists, and Web content zones.
- Design an encryption strategy. Considerations include IPSec, SSL, certificates, Encrypting File System (EFS), and PPTP.
- Design a security auditing strategy. Considerations include intrusion detection, security, performance, denial of service, logging, and data risk assessments.

The first three objectives covered in this chapter fall under the exam’s objective category 4, Designing Security Strategies for Web Solutions. In addition, this chapter discusses part of the “Design a security auditing strategy” skill (the security, performance, logging, and data risk assessment considerations only). You’ll find the remaining security auditing considerations covered in Chapter 8.


The open nature of the Internet, and the widespread access to it, means websites are susceptible to threats and damage from unauthorized users. Website security is a primary concern for site administrators, who must ensure that safeguards are implemented against those potential threats. This chapter examines the various techniques included in Windows 2000 for implementing website security. Security mechanisms and services such as encryption, authentication, and authorization are available to manage and curtail security risks. We'll examine methods for designing strategies to use these mechanisms for securing your highly available web solutions.

We start with a brief overview of computer security in general, touching on some comprehensive issues that must be addressed when designing a highly available website. We then examine the authentication methods that can be used by Microsoft Internet Information Services 5.0 (IIS 5) and address the pros and cons of each. You'll also learn about the requirements for designing an authorization strategy for users that have been authenticated. Finally, the chapter's third major section covers several encryption methods for guaranteeing the secure transmission of data over a public network or the Internet.

Computer System Security

Security technologies are critical to protect a computer system against both internal and external threats and vulnerabilities. Several core components are key to making your computer systems secure:

Authentication involves identifying users and validating their credentials. This is normally done through the use of user logon accounts and passwords. Once users are validated, they have access to system resources.


Authorization is the next core component of computer security. Once users are authenticated, they are given authorization to use certain system resources (such as files and printers) based on the access-control information set up for the resource. In order for users to be authorized to access resources, they must first be authenticated.

Two other important elements in computer security are the privacy and integrity of data in the system. Privacy is usually provided through some form of encryption. Integrity ensures that data that is being used or transmitted cannot be deleted or changed without the permission of the data owner.

Auditing is a method for maintaining a list of parties who have accessed a system, and the files or resources they accessed.

You also need to be prepared for the likelihood of threats, vulnerabilities, and attacks. Threats to a computer system can take many forms, including the access of sensitive data by unauthorized individuals. A system’s vulnerabilities include its weaknesses, either hardware- or software-related, that may allow a threat to become a reality. Attacks can occur when an individual discovers a system vulnerability and takes advantage of it to compromise the system.

Windows 2000 Built-In Security Features

Windows 2000 was designed with a number of built-in security features that support data protection when you deploy your highly available web solution. These features let companies control user access to network resources. Key components of security services include Windows 2000 integration with Active Directory, support for Kerberos, user authentication by means of public key certificates, Encrypting File System (EFS), and secure communications support over public networks using IPSec.

In this section we'll examine some of the security features that are included with Windows 2000. In preparation for this exam, you should be familiar with the security concepts presented here. The exam gives you various scenarios for which you'll select from a range of security options. You'll need to have an understanding not only of the security facilities of Windows 2000, but also of industry-wide security and encryption standards such as SSL, IPSec, and PKI.


Authentication

Windows 2000 uses authentication technology to verify the identity of users trying to access system resources. The means of authentication usually takes the form of a username and password. Once a user is authenticated, they are associated with a specific access token. This token identifies the user and the Windows 2000 groups to which the user belongs. The token also contains the user's SID (security identifier), a unique number that identifies each user and group in the Windows 2000 operating system.

SIDs

Security identifiers (SIDs) are internal numbers created when an account or group is set up on the computer. They are used to identify users, groups, and computer accounts. Each SID that is created is unique, and no two users, groups, or accounts can use the same SID. The SID for a local account is created by the Local Security Authority (LSA) and stored in the Registry. The Local Security Authority is the central component of the security subsystem in Windows and has responsibility for managing interactive logons on the system. SIDs for domain accounts are generated by the Domain Security Authority and stored in Active Directory. In addition to individual users and groups, SIDs are used with the following access control components: access tokens, security descriptors, and ACEs. Table 7.1 lists some of the well-known SIDs.

TABLE 7.1: Well-Known SIDs

SID                              Description
Anonymous Logon (S-1-5-7)        Any user who has connected to the computer without supplying a username or password
Authenticated Users (S-1-5-11)   Users who have been authenticated
Everyone (S-1-1-0)               The generic group Everyone
Creator Owner (S-1-3-0)          The generic user Creator Owner
Interactive (S-1-5-4)            Anyone who logs on locally or through a Remote Desktop connection
Network (S-1-5-2)                Users who log on through a network connection
Terminal Users (S-1-5-13)        Users who log on to a Terminal Services server

The LocalSystem Account

The LocalSystem account is the account under which most services run in Windows 2000. This account has no password, and a user cannot log on to the computer system using this account. LocalSystem is used on the local computer to perform tasks that may be required of the administrator. The LocalSystem account has limited capability beyond the few privileges allowed on the local computer.

Internet Information Services 5.0 requires the startup account to be the LocalSystem account. Changing the logon to a different account may cause services to fail at startup.

Logon Types

When a user is authenticated, an access token is generated for that user. This token can be of several logon types, including interactive, network, and batch.

Interactive logons  When a user logs on to a computer by entering his or her credentials, this generates an interactive logon. The user account logging on must have the Log on locally privilege, or the logon will fail.

Network logons  When a user connects to a computer using a network account, this generates a network logon. The account attempting to log on using a network logon must have the Access this computer from the network privilege. Otherwise, the logon will fail.


Batch logons  The batch logon is generated by applications that run as batch processes. This logon is rarely used. The application must have the Log on as batch job privilege, or the logon will fail.

DACLs

To determine if a user can access a specific file or system resource, Windows 2000 compares the user's token with the discretionary access control list (DACL) for the resource. The DACL holds a list of access control entries (ACEs) that contain the identity of users and groups having permission to access the resource. If the DACL and the user information in the token are different, access to the resource is denied. To use and set DACLs, you must be using NTFS (the New Technology File System).

Impersonation

Impersonation is an important concept in a distributed system, where servers must pass client requests to other servers on the network or the operating system. Windows 2000 uses impersonation to determine if a client has sufficient permissions to access a resource. In impersonation, the security context of the server is altered so that it matches that of the client. When the client attempts to access a resource on the server, the client's operating system tells the server the level of impersonation that the server can use to service the client's request. The client can offer one of the following impersonation levels:

Anonymous  The server does not receive any information about the client.

Identify  The server can identify the client but cannot use the client's security context to perform access checks.

Impersonate  The server can authenticate the client and use its security context to perform access checks.

Delegate  The server can authenticate the client and pass its security context to a remote server on the client's behalf. This level of impersonation is supported only in Windows 2000.


Microsoft IIS uses impersonation to allow anonymous Internet users to access websites. Anonymous access is provided by using the IUSR_computername account on the IIS server. When an IIS server receives an HTTP request from a Web browser, the server impersonates the IUSR_computername account to allow the remote client access to the resource.

Authorization

Once users have been authenticated in Windows 2000, they are given access to certain objects such as files, folders, and other network resources. Users cannot be authorized until they have been authenticated. Authorization is determined through DACLs. The DACLs contain ACEs, which contain the SIDs of users or groups that have access to the resource. Windows 2000 looks at the DACL on the resource and compares the user SID and group membership to see if they match a SID in the ACE. If the SIDs match and the ACE allows access, the user is granted access to the resource. If the SIDs do not match, access is denied to the resource.
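The token-against-DACL comparison can be pictured with a small sketch. This is not the Windows security API; it is a simplified Python model with invented SIDs, and it ignores deny ACEs and ACE ordering.

    from dataclasses import dataclass, field

    @dataclass
    class Token:
        user_sid: str
        group_sids: set = field(default_factory=set)

    # Hypothetical DACL: each allow ACE maps a SID to the access it grants.
    dacl = {
        "S-1-5-21-1000": {"read", "write"},  # a specific user account
        "S-1-5-32-545": {"read"},            # the built-in Users group
    }

    def check_access(token, dacl, wanted):
        # Access is granted if any SID in the token has a matching allow ACE.
        for sid in {token.user_sid} | token.group_sids:
            if wanted in dacl.get(sid, set()):
                return True
        return False  # no matching ACE means access is denied

    alice = Token("S-1-5-21-1000", {"S-1-5-32-545"})
    print(check_access(alice, dacl, "write"))   # True, via her user SID
    print(check_access(alice, dacl, "delete"))  # False, nothing grants it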

Auditing

Auditing is critical to maintaining system security. Windows 2000 provides the capability to collect information on system resources. Three types of auditing can be configured and used: User Account, File System, and System Registry auditing. Auditing is not enabled by default and must be set up through audit policies. Both Windows Explorer and the Security Templates snap-in tool can be used to configure file-system auditing on a Windows 2000 computer. The Security Event log is used to record events or resources that are being audited.



Microsoft Exam Objective

Design a security auditing strategy. Considerations include performance, logging, and data risk assessments.

This section covers only the performance, logging, and data risk assessment considerations of the security auditing strategy skill under the exam's objective 4, Designing Security Strategies for Web Solutions. The remaining security auditing considerations are discussed in Chapter 8.

Table 7.2 lists the events that can be audited in Windows 2000.

TABLE 7.2: Auditable Events in Windows 2000

Event Audited               Audit Type
Account logon               Success/Failure
Account management          Success/Failure
Directory Services access   Success/Failure
Logon                       Success/Failure
Object access               Success/Failure
Policy change               Success/Failure
Privilege use               Success/Failure
Process tracking            Success/Failure
System                      Success/Failure

In addition to the auditing capability inherent in Windows 2000, auditing can be enabled to log events from the IIS environment. Events related to HTTP, HTTPS, FTP, and SMTP traffic can be recorded to the event logs and monitored to identify potential trouble and any suspicious activity.

Although auditing is a critical component for safeguarding your system, it can consume a large amount of processor time and disk space. If you plan to audit your system, plan accordingly for the additional workload that the system may have to handle. Also, it’s good practice to put all auditing log files on a separate disk partition.

Networks should be assessed on an ongoing basis to ensure that no security holes exist. Network administrators should monitor releases from manufacturers of hardware and software used in the network to keep abreast of updated security fixes. Administrators should also stay informed about the latest Microsoft hot fixes and security bulletins. Microsoft issues patches as soon as security holes are discovered. It also provides a free e-mail notification service that informs subscribers about the security of Microsoft products. Administrators can use this service to get the most current hot fixes and security bulletins as soon as they are released. In addition, Microsoft's website publishes security bulletins at www.microsoft.com/technet/security.

Encryption

Encryption is included among the Windows 2000 core security features. Following are descriptions of some of the key services that provide secure website communications.

PKI

Public key infrastructure (PKI) is an industry standard for regulating public and private keys and certificates. PKI gives an administrator the tools needed to support a wide range of network security safeguards. Implementation of PKI in a Windows 2000 environment gives users the ability to use smart cards for authentication, work with secure e-mail, and use digital signatures and authentication through Secure Sockets Layer (SSL) and Transport Layer Security (TLS). The PKI technology is employed by the following services in Windows 2000:

- Network logon authentication
- RRAS (Routing and Remote Access Services)
- User authentication based on SSL, TLS, and EAP
- Communication over PPTP and L2TP
- Microsoft Internet Information Services (IIS 5)
- Encrypting File System (EFS)
- IPSec

Smart Cards

Smart cards can be used in Windows 2000 systems for certificate-based authentication as well as single sign-on to an Active Directory domain. A smart card is the size of a credit card and can store user certificates, private keys, and logon information. The card is tamper-proof and is a secure means for user authentication. Smart cards use PINs (personal identification numbers) instead of passwords; if the correct PIN is not entered, the smart card locks (after a predetermined number of unsuccessful entries).

EFS

Encrypting File System (EFS) was first introduced in Windows 2000 and enables users to encrypt their files and folders. EFS is based on public key encryption and is tightly integrated with NTFS. Users who encrypt their files can work with them normally; the EFS process is transparent to the user. If a user who did not encrypt the file tries to open it, a message is generated indicating that access to the file is denied.

IPSec

Internet Protocol Security (IPSec) is a suite of security protocols and services that provide not only data encryption but also user-level authentication. IPSec works with the L2TP protocol for VPN connections. The protocols make it possible to establish secure communications between two computers over an unsecured network. They accomplish end-to-end security, which means the packets sent are unreadable en route and can only be decrypted by the destination computer. IPSec is configured using IPSec policies, which are stored in Active Directory for a domain environment or locally in the Registry for a nondomain computer.

Designing an Authentication Strategy

Authentication involves verifying the identity of a user or group of users trying to access a website. Windows 2000 supports several authentication methods for verifying the identity of users. In this section we will examine some of the authentication methods used by Internet Information Services to provide users access to your website. These include anonymous logon, certificates and keys, Kerberos, and more.



Microsoft Exam Objective

Design an authentication strategy. Considerations include certificates, anonymous access, Directory services, Kerberos, and public key infrastructure (PKI).


This section covers all the considerations for the “Design an authentication strategy” skill except for the Directory services consideration. Directory services are discussed in Chapter 5, “Designing a High Availability Network.”

IIS Authentication Methods

Before they can access Web resources in IIS, users must be authenticated. The following five Web authentication methods are supported by Microsoft Internet Information Services 5.0 (IIS 5):

- Anonymous logon
- Basic authentication
- Integrated Windows authentication
- Digest authentication
- Client certificates

The following two FTP authentication methods are also supported:

- Anonymous
- Do Not Allow anonymous

When an authentication request is made to IIS 5, the request is executed as a thread that impersonates the user’s security context. Several simultaneous threads can be running in IIS, each for a different user. Let’s take a closer look at the authentication methods that are supported by IIS 5.

Anonymous Access

Windows 2000 requires valid usernames before users are allowed to log on to systems and access websites. When connecting to most websites on the Internet, however, no user logon name or password is required. For example, if you want to connect to www.Microsoft.com or www.sybex.com, you don't need to enter a username or password to access these sites. This is known as anonymous logon.

To get around the need for a username and password for logging on to websites, IIS 5 provides a Windows 2000 user account that is used for anonymous access. This account is called IUSR_computername and is automatically installed by IIS. The IUSR_computername account gives anonymous Internet users the Log on locally right and is used by users who are being authenticated with anonymous access.


Keep in mind that there are some password issues for IUSR_computername. Even though IIS may be configured for anonymous access, a password is still required. The password created for the IUSR_computername account must be the same for both IIS and Windows 2000. By default, the system synchronizes these passwords to be the same. If users need access to other network resources, such as a database, a different password can be required for access to that resource. In this case, you may not want the password synchronized with the others, because problems may occur when the user tries to access the network resource using the same password used for authentication under IUSR_computername. To get around this problem, you can disable automatic password synchronization.

Basic Authentication

For basic authentication, users must provide credentials before they are able to log on to a website. This is unlike anonymous access, under which no user credentials are required for website access. Basic authentication is part of the HTTP 1.0 specification. The majority of web servers and browsers running on the Internet today support this specification. Users must have logon rights to the web server before they are authenticated and granted access to resources. Access can be restricted based on the rights of the user logging on. Basic authentication uses NTFS to provide this access restriction.

Although passwords are required for basic authentication, they are not transmitted in an encrypted form by default. Transmitted in cleartext, they can be easily intercepted and decoded. If encrypted passwords are required in a particular environment, basic authentication can be used along with Secure Sockets Layer (SSL) to establish secure sessions. SSL is discussed later in this chapter, in the section on designing an encryption strategy.

Integrated Windows

If your needs warrant a more secure authentication method than basic authentication, you may want to consider using Integrated Windows authentication. Integrated Windows authentication was formerly known as both NT LAN Manager (NTLM) and Challenge/Response authentication. This authentication method is more secure than basic authentication and supports both NTLM and Kerberos authentication. (Kerberos will be discussed shortly.)


When a user connects to a website running with Integrated Windows authentication, IIS attempts to use the current user’s domain logon credentials. If the domain logon is rejected, the user is prompted for a username and password. Integrated Windows authentication is good for intranet environments where all users who need access to the website have domain accounts.

One principal limitation of Integrated Windows authentication is that it cannot be used through a proxy server. If users of your website are going through a proxy server, you must set up a different authentication method.

Digest Authentication

Another authentication technology that provides more security than basic authentication is digest authentication. This kind of authentication encrypts passwords before they are transmitted, rather than using cleartext as basic authentication does. In addition, digest authentication will work for users who are connecting through proxy servers. One limitation of digest authentication is that it is only supported on Windows 2000 domains. Before digest authentication can be used correctly, several things must be set on the server:

- The server must be a Windows 2000 server that is in a domain.
- The IISUBA.DLL file must be installed on the domain controller.
- User accounts must have the "Store Passwords Using Reversible Encryption" option enabled.

Client Certificates

Windows 2000 security is based on the public key infrastructure (PKI), which uses keys, both public and private, to provide user authentication. Digital certificates are a major component of PKI and provide electronic credentials to identify individuals and organizations. Certificates are issued by Certificate Authorities (CAs), which are also a major component of the PKI framework in Windows 2000.

Client certificates are applied by IIS to authenticate users. Information in the client certificate identifies the user, as well as the organization that issued the certificate. Windows 2000 employs standard X.509 certificates that contain, at a minimum, the following:

- The version of the X.509 standard that applies to the certificate
- The serial number of the certificate, used by the entity (CA) that created it to distinguish the certificate from others it has issued
- Signature algorithm ID
- Issuer name
- Validity period
- Subject or username
- Subject's public key information

Certificates are mapped to Windows 2000 user accounts, and authentication depends on the rights and permissions of the user account. Two types of certificate mappings are supported in IIS: one-to-one mapping and many-to-one mapping.

One-to-one certificate mapping  To use one-to-one mapping, the client must have a valid Windows 2000 user account. IIS maps a certificate to a corresponding Windows 2000 user account for the owner of the certificate. The user is authenticated based on their account information and is granted access to resources depending on the account's rights and permissions. One-to-one mapping is a good choice when you are supporting a small number of clients.

Many-to-one certificate mapping  When using many-to-one mapping, multiple users can be mapped to a single user account. Rights and permissions granted to the users are based on that single user account. If you are supporting a large number of clients, many-to-one mapping is the practical choice.


Table 7.3 summarizes the authentication models that can be used with IIS 5.

TABLE 7.3: IIS 5 Authentication Methods

Model                 Used with IE 4?  Used with IE 5?  Other Browsers?  SSL Required?  Remarks
Anonymous             Yes              Yes              Yes              No             Provides no authentication
Basic                 Yes              Yes              Yes              No             You can increase security by adding SSL
Integrated Windows    Yes              Yes              No               No             Will not work with proxy servers
Digest                Yes              Yes              Varies           No             Requires Active Directory
Client Certificates   Yes              Yes              Varies           Yes            Scalable and very secure; requires configuration

Kerberos

Kerberos v5 is the default protocol used for authentication in a Windows 2000 domain. Known as a mutual authentication protocol, Kerberos confirms both the identity of the user and the network services to which the user has access. Three components are critical to Kerberos authentication: the client, the server, and the Key Distribution Center (KDC). The KDC acts as a trusted intermediary and is actually a service that runs on a domain controller. It issues a ticket-granting ticket (TGT) that contains encrypted data about the user. The TGT is exchanged for a service ticket (ST) that provides access to the network services.


Here are the steps in the Kerberos authentication process:

1. A client authenticates to the KDC using a password or a smart card.
2. A ticket-granting ticket (TGT) is issued to the client. The client uses this TGT to access the ticket-granting service (TGS).
3. A service ticket is issued to the client by the TGS.
4. The service ticket is presented to the requested network service.
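As a toy model only, the four steps can be mirrored in a few Python functions. Nothing here is real Kerberos: the tickets are plain dictionaries, the account store is invented, and all encryption is omitted.

    def kdc_authenticate(user, secret, accounts):
        # Step 1: the client proves its identity to the KDC.
        if accounts.get(user) != secret:
            raise PermissionError("authentication failed")
        # Step 2: the KDC issues a ticket-granting ticket (TGT).
        return {"type": "TGT", "user": user}

    def tgs_issue_ticket(tgt, service):
        # Step 3: the TGS exchanges a TGT for a service ticket (ST).
        assert tgt["type"] == "TGT"
        return {"type": "ST", "user": tgt["user"], "service": service}

    def service_accepts(ticket, service):
        # Step 4: the service ticket is presented to the network service.
        return ticket["type"] == "ST" and ticket["service"] == service

    accounts = {"kim": "s3cret"}  # hypothetical account store
    tgt = kdc_authenticate("kim", "s3cret", accounts)
    st = tgs_issue_ticket(tgt, "web01")
    print(service_accepts(st, "web01"))  # True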

PKI

Public key infrastructure, or PKI, is the core component of Windows 2000 security. Windows 2000 PKI is based on the open standards for X.509 certificates. A public key infrastructure allows administrators to provide a framework of security services and technologies that are based on public key technology. The following Windows 2000 security systems use public key technology:

- Kerberos authentication
- Routing and Remote Access
- Remote Authentication Dial-In User Service (RADIUS)
- Extensible Authentication Protocol and Transport Layer Security (EAP-TLS)
- PPTP
- L2TP
- Internet Information Services (including SSL)
- IPSec
- EFS

A Windows 2000 public key infrastructure will allow you to deploy a security solution that uses public key technology and digital certificates. Secure websites can be configured using certificates and certificate mapping. Secure web communications can be guaranteed by the use of certificates and SSL to provide confidential communication between servers and clients.

Public key systems depend on the mathematical relationship between the public key and private key; it's not feasible to derive one from the other. The two fundamental operations associated with public key cryptography are encryption and signing. The purpose of encryption is to obscure data in such a way that it can only be read by the intended party. The purpose of signing, which uses encryption, is to prove the origin of the data. (A short code sketch of these two operations follows the list below.)

A PKI is not a thing, per se, but rather a set of operating system and application services that together make it easy and convenient to use public key cryptography. The following are major components of Windows 2000 PKI:

- Windows 2000 Certificate Services
- Microsoft CryptoAPI
- Cryptographic service providers (CSPs)
- Certificate stores
- Public key Group Policy
- Certificate enrollment and renewal methods
- Certificate revocation lists
- Preinstalled trusted root certificates
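Here is a minimal sketch of the encryption and signing operations just described, using the third-party Python cryptography package rather than the Windows CryptoAPI. The key size and padding choices are illustrative assumptions, not anything mandated by Windows 2000 PKI.

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()
    message = b"confidential order data"

    # Encryption: anyone holding the public key can encrypt, but only the
    # private key holder can read the result.
    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)
    ciphertext = public_key.encrypt(message, oaep)
    assert private_key.decrypt(ciphertext, oaep) == message

    # Signing: only the private key holder can sign; anyone holding the
    # public key can verify the origin of the data.
    pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                      salt_length=padding.PSS.MAX_LENGTH)
    signature = private_key.sign(message, pss, hashes.SHA256())
    public_key.verify(signature, message, pss, hashes.SHA256())  # raises if forged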

Designing an Authorization Strategy

Once users are authenticated, they must be authorized before they are given access to your website’s resources. Both IIS and NTFS permissions can be used to control resource access. In this section, we’ll examine the steps that are performed when a user attempts to access resources on a website. We then look at the IIS and NTFS permissions that can be used to determine the level of access given.



Microsoft Exam Objective

Design an authorization strategy. Considerations include group membership, IP blocking, access control lists, and Web content zones.


Also in this section are discussions of the concepts of IP blocking and Web content zones. By using IP blocking, site administrators can deny access to a specific IP address, a range of IP addresses, or subnets and domains. Web content zones, included in Microsoft Internet Explorer, are security zones that permit websites to be divided into trusted and untrusted areas. Websites can also be rated on a scale as defined by the Internet Content Rating Association (ICRA). If Web content rating is not enabled on a site, it may be barred from being viewed via certain browsers. Although neither IP blocking nor Web content zones is inherently associated with authorization, they have been included in this section because they can have an indirect impact on how users are able to access websites.

User Authorization

Before a user can have access to a website or its files and directories, they must be given the appropriate permissions to access the resource. This is known as authorization. A user cannot be given authorization to resources until Windows 2000, IIS, or both have authenticated that user. IIS permissions include allowing or denying access to the website itself and the directories and files it contains. NTFS permissions include allowing or denying access to specific NTFS folders on the servers that house the websites.

The Steps of the Access Process

An access process occurs when a user attempts to access resources on a website. Let's take a look at the steps involved in this process.

Step 1: IP Address Access Control

When a user first makes the initial request to access a resource on the website, IIS looks at the user's IP address. IIS can be configured to allow or deny access to specific IP addresses. Based on this configuration, a decision is made whether or not the user should be given access to the website or the virtual directories it contains.

Step 2: IIS Authentication

After determining that the user is attempting access from a valid IP address, IIS attempts to authenticate the user. This is done using one of the IIS authentication methods discussed earlier in the chapter, such as anonymous logon and client certificates. For a logon to be successful, it must meet the following criteria:

- The user must have entered a valid username and password.
- There must be no Windows 2000 account restrictions on the user account attempting to log on.
- The user account must not be disabled or locked out.
- The password on the user account must be current (not expired).

Step 3: IIS Authorization

After the user has been successfully authenticated, IIS determines what resources the user has permission to access. NTFS permissions can be used to restrict access to specific resources. These permissions apply to users or groups that have valid Windows accounts. Web permissions determine how users interact with specific web and FTP sites. Users can be allowed to access a specific page on a website, for example. They can be allowed to upload information to the site, or they can run scripts on the site. Web permissions are applied to every user who is trying to access the website's resources.

Step 4: Custom Application Authentication

An organization may have developed custom applications for use on its website. These applications may have their own methods for authenticating users. Users would have to be authenticated first by the application before being allowed to use it. Such processes may include Internet Server Application Programming Interface (ISAPI) filters or Active Server Pages (ASP) to support authentication for these applications.

Step 5: NTFS Authorization

IIS is based on the Windows 2000 NTFS security model. When users attempt to gain entry into a resource, their authenticated user-security context is used to determine the level of access they can have. For users who are accessing the website anonymously, the IUSR_computername account is used for authorization. For authenticated users, their Windows 2000 user accounts are checked in order to authorize access to resources. If users do not have specific NTFS permissions to a resource, they will be denied access to that resource, even if they have been authenticated and authorized by IIS.


IIS Permissions

IIS permissions give users access to virtual directories. This is different from NTFS permissions, which give users access to physical directories on the websites. IIS permissions apply to all users who have been authenticated for access to the website. Table 7.4 lists the IIS permissions that are available.

TABLE 7.4: IIS Permissions

Permission             Description
Read                   Grants permission to view file contents and properties
Write                  Grants permission to change file contents and properties
Directory Browsing     Grants permission to view file lists
Script Source Access   Grants permission to access source code for files
Log Access             Creates a log entry for each visit to the website
Index This Resource    Allows Microsoft Indexing Service to include this directory in a full-text index of the website

IIS permissions can also be used to control the operation of executables in directories. Table 7.5 lists these permissions.

TABLE 7.5: IIS Permissions for Executables

Permission                Description
None                      No scripts or executables can be run in this directory.
Scripts Only              Scripts can be run, but without the Execute permission.
Scripts and Executables   Executables and scripts can run in this directory.

NTFS Permissions

NTFS permissions control access to objects (physical directories on the website). When a user is authenticated in Windows 2000, an access token is attached to all processes that the user runs. This token identifies all of the groups of which the user is a member. The group membership determines what resources a user has access to. If a user attempts to access a resource, Windows 2000 takes the user information included in the token and compares it with the resource's access control list (ACL). If the user is included on that list and thus has been given the proper access permission, then access to the resource is allowed. Let's take a closer look at the elements of the NTFS permissions structure.

Access Control Lists (ACLs)

Access control lists (ACLs) are lists of access control entries (ACEs) that define what actions are allowed, denied, or audited for users or groups. Two ACLs are contained in an object's security descriptor: a discretionary ACL (DACL) and a system ACL (SACL); these will be explained shortly. Every file and folder on an NTFS volume has an ACL, which lists every user and group with granted access to that file or folder. The ACL also contains the type of access granted (that is, whether access is allowed or denied to the object in question). When a user or group attempts to use a file or folder, access won't be allowed unless the ACL for the file or folder has an ACE for that user account. If no ACE exists in the ACL, the user account is denied access.

Security Descriptors

Every resource is controlled by the use of a security descriptor, which contains access control information designating whether or not a user can access the resource. Security descriptors contain information about an object's access control rather than a user's. When an access attempt occurs, the operating system checks the object's security descriptor to see whether the user has the appropriate permission to open it. Usually, security descriptors contain the following: information about the object owner; names of users and groups who have Allow and Deny access to the object; names of users and groups for whom access should be audited; and inheritance information about the object.

DACL

As stated earlier in the chapter, the discretionary access control list (DACL) is used to determine if a user can access a specific file or system resource. To determine access for a resource, the operating system looks at the DACL for the object and compares the user's SID and group membership with those listed in the DACL. If the SID matches the SID in the ACE, and the ACE allows access, then access to the object is granted for the user. The owner of the object is the only one who can change the granted or denied permissions in a DACL. Owners can share control of the object with another user by granting them Change permissions to the object.

ACEs

The access control entries in the DACL specify the permissions for allowed access to an object. ACEs are entries in the ACL that contain the SID for a user or group. These entries also contain an access mask that determines what actions are allowed, denied, or audited for the user or group.

SACL

The SACL, or System Access Control List, is the portion of the security descriptor that determines what events are audited for users or groups. Like the DACL, the SACL contains ACEs. But it's different from the DACL in that it's used for controlling access to audit objects rather than the objects themselves. The ACEs in the SACL contribute to the determination of whether an event is written to the system log, and whether the event is a success or failure.

NTFS File Permissions

The six types of NTFS file permissions are Full Control, Modify, Read and Execute, List Folder Contents, Read, and Write. Table 7.6 lists the NTFS file permissions and the access allowed with each one.


TABLE 7.6: NTFS Permissions

Permission             Description
Full Control           Users can modify, add, move, and delete files, and can change permissions on all files and subdirectories
Modify                 Users can view and modify files
Read & Execute         Users can run executable files
List Folder Contents   Users can view a list of a directory's contents
Read                   Users can view files and their properties
Write                  Users can write to a file

Planning Permissions for Your Website

Following are some key points to remember about permissions when planning for your website (a short sketch of the "most restrictive wins" rule follows this list):

- When both NTFS and IIS permissions are used, the most restrictive settings will take effect.
- IIS permissions apply to all users who access the website.
- NTFS permissions apply to specific users and groups.
- NTFS permissions can be allowed or denied. Denied permissions take precedence over allowed permissions.
- If your website supports anonymous users, the proper NTFS permissions must be set on the IUSR_computername account.
- Nonanonymous users who are accessing the website must have the appropriate NTFS permissions to the resources they need access to.
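Here is the promised sketch of the "most restrictive wins" rule, with invented permission sets. In reality IIS and NTFS each run their own checks; modeling the combination as a set intersection is our simplification.

    # An operation succeeds only if *both* permission layers allow it.
    iis_allowed = {"read", "write"}         # IIS permissions on the virtual directory
    ntfs_allowed = {"read"}                 # NTFS permissions for this user

    effective = iis_allowed & ntfs_allowed  # the most restrictive settings win
    print(effective)                        # {'read'}: write is blocked by NTFS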


IP Blocking

An important security measure for your website is appropriate protection of TCP/IP data sent over the network or Internet. Windows 2000 can prevent unauthorized access to data by blocking specific ports and protocols. In addition, Windows 2000 can deny access to specific network IP addresses, subnets, and domains. IP blocking, or filtering, can be implemented to prevent the delivery of IP packets that do not meet predefined criteria. You can filter well-known protocols such as TCP and UDP. Although traffic filtering can be done via a firewall solution, this may be difficult to manage if each host has a different filtering requirement. Firewalls are an excellent means of network security protection, but they must be implemented and managed by specifically trained team members. TCP/IP filtering can be implemented based on TCP port number, UDP port number, and IP protocol type. This filtering can be included in your system’s design to control traffic to and from specific servers on your network.
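As a rough sketch of the filtering decision just described, the following Python fragment permits or drops packets by source address and TCP port. All addresses, ports, and the prefix-based subnet test are invented examples; this is not the Windows 2000 configuration interface.

```python
# Illustrative packet-drop decision for simple TCP/IP filtering.
# All values below are made-up examples.

ALLOWED_TCP_PORTS = {80, 443}          # permit only web traffic
BLOCKED_PREFIXES = ("192.168.50.",)    # deny this address range (a string
                                       # prefix stands in for a subnet mask)

def permit_packet(src_ip: str, protocol: str, dst_port: int) -> bool:
    """Return True only if the packet meets the predefined criteria."""
    if any(src_ip.startswith(p) for p in BLOCKED_PREFIXES):
        return False                   # blocked network address
    if protocol == "TCP" and dst_port not in ALLOWED_TCP_PORTS:
        return False                   # TCP port not on the allow list
    return True

print(permit_packet("10.0.0.7", "TCP", 443))      # True
print(permit_packet("192.168.50.9", "TCP", 80))   # False -- blocked subnet
```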

Web Content Zones

Internet Explorer contains security zones that divide the Internet and company intranets into four trusted and untrusted areas (see Figure 7.1). These web content zones are as follows:

[FIGURE 7.1 Security tab of the Internet Properties dialog box for Internet Explorer]


Internet The Internet zone contains all websites that have not been placed into one of the other zones.

Local intranet The local intranet zone contains all websites that are on a company’s intranet.

Trusted sites The trusted sites zone contains websites that you trust and do not consider capable of damaging your computer or your data.

Restricted sites Sites in the restricted zone have been identified as potentially dangerous to your computer and data.

A level of web browser security can be designated for each zone. Four levels of security are available: high, medium, medium-low, and low. Each web content zone has a recommended level of security, but users can select a lower level if their needs warrant it.

IIS 5 can also be used to configure content rating for a particular website, directory, or file. A rating system developed by the Internet Content Rating Association (ICRA) categorizes website content according to levels of violence, nudity, sex, and offensive language. The ICRA’s goal is to protect children from potentially harmful material while also protecting free speech on the Internet. The organization maintains a content-labeling system in which web authors rate their sites by filling in a questionnaire. ICRA uses the questionnaire to generate a label that the web author then adds to the site. A site carrying an ICRA label is more likely to be perceived as trustworthy than one that is not labeled.

If you decide not to label your website, remember that it may not be available to web browsers that have content ratings enabled.

To connect to the ICRA registration page and enable content ratings on your website, follow these steps:

1. Open Control Panel, and then Administrative Tools.
2. Open the Internet Services Manager.
3. Expand the elements below your website name.
4. Right-click the website and select Properties.


5. Select the HTTP Headers tab, and click the Edit Ratings button.
6. Click the Rating Questionnaire button on the Rating Service tab. This takes you to the Internet Content Rating Association website, where you start the registration process.

Designing an Encryption Strategy

Data encryption allows data to be transmitted over the Internet without fear of interception by unauthorized individuals. Windows 2000 provides several ways to encrypt and protect website data. In this section we will look at some of these technologies, including Secure Sockets Layer (SSL), IPSec, Encrypting File System (EFS), and Point-to-Point Tunneling Protocol (PPTP).



Microsoft Exam Objective

Design an encryption strategy. Considerations include IPSec, SSL, certificates, Encrypting File System (EFS), and PPTP.

SSL

SSL, or Secure Sockets Layer, is a protocol used for the secure transmission of data over the Web. SSL was originally developed by Netscape in the 1990s to create an encrypted path between client and server computers, regardless of their respective operating systems. The protocol uses public key encryption to encrypt and decrypt data so that it cannot be intercepted and read during transmission. Public key encryption uses two keys, one private and one public, to encrypt data at the originating end and then decrypt it at the destination. Over a secure connection between a web browser and IIS, SSL uses the Hypertext Transfer Protocol Secure (HTTPS) protocol, which uses port 443 instead of the standard HTTP port 80. Before users can start a secure SSL connection with IIS, a certificate must be requested and installed on the server. Certificates can be requested from trusted Certificate Authorities (CAs) or from Windows 2000 Certificate Services. The certificate allows the server to authenticate itself and establish a secure connection.


Here are the steps for SSL encryption:

1. A secure link is established by the web browser to IIS using the HTTPS protocol (https://).
2. The browser and IIS negotiate the level of security needed for the session.
3. IIS sends its public key to the browser.
4. The browser uses the public key to encrypt the data that the server will use to generate a session key, and forwards the encrypted data to IIS.
5. IIS decrypts the session key data sent from the browser, generates a session key, encrypts it with the public key, and sends it to the browser.
6. The browser and IIS can now use the session key to encrypt and decrypt the data during the transmission session.

The number of bits in the session key determines its strength during the transmission of data. SSL can be implemented using either 40-bit or 128-bit encryption; session keys that contain a larger number of bits offer a higher degree of security. When the web browser and IIS establish a secure communication channel, they try to do so using the highest degree of security that can be negotiated. If you configure IIS to support 128-bit session keys, the browsers accessing the websites must be able to support this number of bits, or they will not be able to negotiate and establish a session.

The SSL protocol operates at the Transport and Session layers of the OSI model and supports web servers and browsers that operate at the Application layer. The protocol itself has two layers:



- The SSL Record protocol is SSL’s foundation for data transfer and is responsible for building the data transmission path between the client and server computers.
- The SSL Handshake protocol allows the server to authenticate to the client computer using public key techniques. Client and server can both cooperate in creating the symmetric keys used for encryption, transmission, and decryption of the data. The SSL Handshake protocol can also be employed by the client computer to authenticate to the server.
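For a concrete client-side view of this negotiation, the following sketch uses Python's standard ssl module (which in current versions negotiates TLS, SSL's successor). The host name is a placeholder for any HTTPS-enabled server.

```python
# Client-side view of the handshake described above, using Python's
# standard ssl module. "www.example.com" is a placeholder host.

import socket
import ssl

context = ssl.create_default_context()   # loads trusted CA certificates

with socket.create_connection(("www.example.com", 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname="www.example.com") as tls:
        # wrap_socket performs the handshake: the server presents its
        # certificate, key material is exchanged, and a symmetric
        # session key is derived for the rest of the session.
        print(tls.version())   # negotiated protocol version
        print(tls.cipher())    # (cipher name, protocol, secret bits)
```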


When SSL is in use, it requires a significant amount of processor resources and results in slower response times; clients will notice slower replies when requesting data from sites that use SSL connections. SSL can be employed only with applications that are SSL aware, such as Internet Explorer and Netscape.

IPSec

IPSec is a suite of protocols that, like SSL, is used to encrypt and decrypt data as it is transmitted over a public network or the Internet. IPSec provides end-to-end encrypted data transmission: IP packets are encrypted by the originating computer and decrypted by the destination computer. A special algorithm is used to generate a shared encryption key used by both computers. The key is not passed over the network, and the packets that are transmitted are unreadable in transit. IPSec is a Layer 3 protocol standard that supports the secured transfer of information across an IP network. IPSec has the following features:

- It supports IP traffic only.
- It functions at the bottom of the IP stack.
- It is controlled by a security policy that establishes the encryption and tunneling mechanisms. As soon as there is traffic, the two computers perform mutual authentication and negotiate the encryption methods to be used.

Standard authentication protocols are applied by IPSec, including EAP, MS-CHAP, and PAP. An IPSec Security Association (SA) determines the type of encryption that is used by IPSec. This SA consists of a destination address, a security protocol, and a Security Parameters Index (SPI), which is a unique identification value. The actual encryption algorithm can be DES (Data Encryption Standard), which uses a 56-bit key, or Triple DES (3DES), which uses two 56-bit keys. IPSec policies are used to configure IPSec. These policies can be stored in Active Directory, or locally in the Registry for a computer that is not part of a domain. The type of IPSec policy that is implemented will determine the level of security used for the communications sessions.
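To make the SA's contents concrete, here is a minimal illustrative data structure holding the pieces just listed. The field values are invented examples, not output from any real IPSec implementation.

```python
# Illustrative representation of an IPSec Security Association (SA):
# the destination address, security protocol, and Security Parameters
# Index (SPI) described above. All values are invented examples.

from dataclasses import dataclass

@dataclass(frozen=True)
class SecurityAssociation:
    destination: str   # IP address the SA applies to
    protocol: str      # security protocol in use, e.g. "ESP" or "AH"
    spi: int           # unique identification value for this SA
    algorithm: str     # e.g. "DES" (one 56-bit key) or "3DES"

sa = SecurityAssociation("10.0.0.12", "ESP", 0x2001, "3DES")
print(sa)
```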


Providing Secure E-Commerce with Windows 2000 PKI

As the IT manager in your company, Petralis Instrument Design, you have been approached by the CIO about the company’s Internet presence. The CIO is concerned that Petralis is not transacting business on the web. You are currently supporting a Windows 2000 infrastructure comprising several Windows 2000 servers, a few NT 4.0 servers, and several hundred Windows 2000 Professional workstations.

Petralis’s principal competitors have recently added online transaction capabilities to their websites. The competing firms have experienced a tremendous amount of client growth due to this implementation. This increased competition is of concern to Petralis, and the CIO needs your help in convincing upper management that providing online transactions is the answer. She believes online commerce is a way to not only increase the customer base but to expand and improve the types of services Petralis provides.

Market research indicates that businesses accepting transactions via the Web gain a competitive edge by reaching a worldwide audience at very low cost. One problem of concern to management, though, is the Web’s unique set of security issues, which businesses must address at the outset to minimize their risk. Your research and experience show that customers will only submit information via a website if they are confident that their personal data, especially credit card numbers, is secure. Establishing a secure transaction area for your website can add powerful competitive advantages for Petralis over its competitors.

Because you are running Windows 2000, you already have in place the basis for a system infrastructure that will support cash transactions on the Web, while minimizing the risks of Internet-based business. Windows 2000’s public key infrastructure (PKI) offers full-featured public-key technology that can deliver e-commerce solutions with the benefits of public-key cryptography. PKI standards such as X.509 certificates, IPSec, and SSL are supported in Windows 2000 and can be easily implemented using your existing infrastructure.


EFS

Encrypting File System (EFS) can be used to encrypt data stored on an NTFS volume. EFS applies public key encryption to protect a data file so that only the owner of the file can open it. Public keys are obtained from a certificate supplied by the user or a CA. To use EFS, all users must have EFS certificates; if users do not have access to a CA, the operating system automatically issues self-signed certificates.

EFS is transparent to the end user. The user who encrypts a file can work with it just like any other file or folder, opening and closing it as they normally would. If an individual who did not encrypt the file tries to open it, they will receive a message that access to the file is denied. When a folder is encrypted, all of its files and subfolders are also encrypted (including any files later added to the encrypted folder). EFS uses either DES or Triple-DES (3DES) as its encryption method.

Note that EFS does not protect data transmitted over a network. If you want data encryption for transmissions over a public network or the Internet, consider SSL or IPSec.

EFS includes built-in data recovery. If the user who created an encrypted file leaves the enterprise or loses their private key, the data can still be recovered from the encrypted file. Recovery policies designate users as recovery agents, who are authorized to decrypt data that was encrypted by another user. The recovery policy is configured locally for stand-alone computers and through Group Policy on a domain. Before they can decrypt encrypted data, recovery agents must be issued a recovery certificate by a CA. These certificates are managed with the Certificates snap-in tool.

For security reasons, the certificate and private key should be exported to a floppy disk or other removable media and stored in a secure location.

PPTP

PPTP (Point-to-Point Tunneling Protocol) is a Layer 2 protocol that encapsulates PPP (Point-to-Point Protocol) frames in IP datagrams for transmission over an IP network, such as the Internet. PPP enables the encapsulation of data so that it can be transmitted over an unsecured public network.


PPTP can be used to create VPNs (virtual private networks), which tunnel TCP/IP traffic through the Internet. The PPTP protocol accomplishes authentication through the same methods as PPP, including PAP, CHAP, and MS-CHAP. An IP-based network is required, and header compression is not supported. PPTP does not support IPSec; encryption is provided via standard PPP methods.

PPTP provides simple-to-use, lower-cost VPN security. Unlike IPSec technology, PPTP is compatible with Network Address Translators (NATs) and supports both multiprotocol and multicast environments. It also combines standard user password authentication with strong encryption, without the complexity and expense of PKI. PPTP was the earliest widely supported VPN protocol. Developed before the existence of IPSec and PKI standards, PPTP provides for automated configuration and supports legacy authentication methods. Because PKI is not required, PPTP can be much more cost-effective and easier to deploy in situations that do not need the most sophisticated security. In addition, PPTP may be the only viable option when VPN connections must pass through NATs (which are incompatible with any IPSec implementation). With Windows 2000, it is possible to use IPSec transport mode within a PPTP tunnel to get extremely powerful encryption services while also passing through NATs.

L2TP

L2TP (Layer 2 Tunneling Protocol) is another option for creating VPNs over public networks. Like PPTP, L2TP supports the same authentication methods as PPP. It differs from PPTP, however, in that it can be tunneled over a wide range of connection media, not just IP: L2TP encapsulates PPP frames to be sent over IP, X.25, Frame Relay, or Asynchronous Transfer Mode (ATM) networks. In addition, L2TP supports header compression, as well as tunnel authentication using IPSec. When configured to use IP as its datagram transport, L2TP can be used as a tunneling protocol over the Internet.

Because it uses PPP as the method of negotiating user authentication, L2TP can authenticate with legacy password-based systems through PAP, CHAP, or MS-CHAP. It can also support advanced authentication services through EAP, which offers a way to plug in different authentication services without having to invent additional PPP authentication protocols. Because L2TP is encrypted inside an IPSec transport-mode packet, these plugged-in authentication services are strongly protected as well.


L2TP uses PPP and so can easily be integrated with existing IP address-management systems. L2F, a technology proposed by Cisco, is a transmission protocol that allows dial-up access servers to frame dial-up traffic in PPP and transmit it over WAN links to an L2F server (a router). L2TP is a combination of PPTP and L2F.

Summary

In this chapter we looked at security strategies that should be considered when designing a highly available web solution. Windows 2000 has a number of built-in security services. Authentication is used to identify users who are trying to access resources on your network. Once a user or group has been authenticated, they must be authorized for the specific objects they are trying to access. Discretionary access control lists (DACLs) are used to determine whether a user has access to a requested resource. Windows 2000’s auditing operations monitor who is logging on and what resources they are trying to access.

Internet Information Services (IIS) must authenticate users before they can access Web resources. Supported authentication methods are anonymous, basic, Integrated Windows, digest, and client certificate. For anonymous authentication, the IUSR_computername account is used for anonymous access to websites, and no user credentials are required. Basic authentication requires a username and password; the password is transmitted in plaintext, which poses a risk because it can easily be intercepted and decoded. SSL can be used with basic authentication to make it more secure. Integrated Windows authentication provides more security than basic authentication but does not work with proxy servers. Digest authentication uses encrypted passwords but works only with Windows 2000 domains. Client certificates are issued by Certificate Authorities and are part of the Windows 2000 PKI infrastructure. Certificates are mapped to Windows 2000 user accounts, and the level of authentication depends on the user account to which the certificate is mapped. Two types of mapping are supported by IIS 5.0: one-to-one certificate mapping and many-to-one certificate mapping.

Once users are authenticated, they must be given the appropriate permissions before they can access system resources. Both IIS and NTFS permissions control access to files and folders on a website.


IIS permissions give users access to virtual directories; NTFS permissions give users access to physical directories.

Several types of encryption can be employed with IIS to protect data. SSL (Secure Sockets Layer) provides secure data transmission over the Web; it uses the HTTPS protocol, which uses port 443 instead of the standard HTTP port 80. The IPSec suite of protocols transmits encrypted data over a public network, and IPSec is used with L2TP for VPN connections. Encrypting File System (EFS) protects data by encrypting it on NTFS volumes; the data is accessible only by its creator or a user designated by the recovery policy. PPTP (Point-to-Point Tunneling Protocol) encapsulates PPP frames in IP datagrams for transmission over IP networks and can be used to create VPN tunnels for secure data transmission. L2TP (Layer 2 Tunneling Protocol) differs from PPTP in that it is not limited to transmission over IP networks.

In the next chapter we will look at additional ways to safeguard your network with the use of firewalls, proxy servers, and network address translation (NAT).

Exam Essentials

Identify the built-in security features of Windows 2000. Authentication, authorization, auditing, and encryption are all accomplished by the security features that are part of the Windows 2000 operating system.

Understand the difference between IIS permissions and NTFS permissions. IIS permissions are applicable to virtual directories. NTFS permissions are applicable to physical directories.

Identify the authentication models that are supported by Internet Information Services (IIS 5). IIS 5 supports the anonymous access, basic authentication, Integrated Windows authentication, digest authentication, and client certificate mapping models.

Know the IIS permissions applied when users are authenticated to a website. The IIS permissions applied for user authentication to a website are Script Source Access, Read, Write, Directory Browsing, Log Visits, and Index This Resource. The IIS Execute permissions are None, Scripts Only, and Scripts and Executables.


Identify and explain the encryption methods available to encrypt data transmitted over the Internet. Windows 2000 Server supports SSL, IPSec, EFS, Certificates, and PPTP. All of these can be used for encryption of data on websites.

Key Terms

Before you take the exam, be certain you are familiar with the following terms:

access control entries (ACEs)
access control list (ACL)
access token
anonymous access
anonymous logon
auditing
authentication
authorization
basic authentication
certificates
client certificates
DES (Data Encryption Standard)
digest authentication
discretionary access control list (DACL)
Encrypting File System (EFS)
encryption
HTTPS (Hypertext Transfer Protocol Secure)
IIS permissions
impersonation
Integrated Windows authentication
Internet Information Services 5.0 (IIS 5)
Internet Protocol Security (IPSec)
IP blocking
Kerberos
Key Distribution Center (KDC)
L2TP (Layer 2 Tunneling Protocol)
many-to-one mapping
NTFS permissions
one-to-one mapping
PPTP (Point-to-Point Tunneling Protocol)
private key
public key
public key infrastructure (PKI)
public key technology
Secure Sockets Layer (SSL)
security descriptor
security identifiers (SIDs)
service ticket (ST)
signing
ticket-granting ticket (TGT)
VPNs (virtual private networks)
Web content zones



Review Questions

1. IIS 5 uses which of the following accounts to provide anonymous user access to a website?

A. IUSR_computername
B. Administrator
C. Guest
D. Username

2. Which of the following authentication models cannot be used through proxy server connections?

A. Anonymous access
B. Client certificate mapping
C. Integrated Windows authentication
D. Digest authentication

3. Which of the following IIS permissions allows a user to view the contents of a file but not make any changes to it?

A. Write
B. Scripts Only
C. Read
D. Log Visits

4. What NTFS permission does a user need in order to run a script on a website?

A. Read
B. Write
C. Read & Execute
D. List Folder Contents


5. What is the name of the process that alters the security context of a server so that it matches that of a client trying to access resources?

A. Authentication
B. Impersonation
C. Authorization
D. Delegating

6. Which of the following IIS authentication methods is supported by Windows 2000 domains only and is a good candidate for use by Windows 2000-authenticated users who are accessing your website from an intranet?

A. Digest authentication
B. Integrated Windows
C. Anonymous access
D. Basic authentication

7. What element in Windows 2000 specifies the users and groups that have access to particular objects?

A. ACE
B. DACL
C. Security descriptor
D. SID

8. Jonathan wants to set up a secure website for use by workers in the field to check on their account data. On what port will they access the website if Jonathan sets it up to support SSL encryption?

A. Port 25
B. Port 23
C. Port 80
D. Port 443


9. Select the impersonation level that does not provide the server with any information about the client’s security context.

A. Impersonation
B. Delegation
C. Identification
D. Anonymous

10. Which of the following encryption methods has built-in data recovery?

A. IPSec
B. SSL
C. EFS
D. PPTP


Answers to Review Questions

1. A. Windows 2000 is configured to allow access only to valid users. The IUSR_computername account is authenticated by Windows 2000, which gives the anonymous user the right to log on locally. Various anonymous accounts can be configured for the website, virtual directories, files, and directories.

2. C. Integrated Windows authentication supports both NTLM and Kerberos authentication, but it cannot be used through proxy server connections. If your environment is strictly a Windows 2000 domain, you can use digest authentication. Your best bet is to use client certificates, which not only provide access through proxy servers and firewalls, but also ensure very secure communications.

3. C. The Read IIS permission allows a user to view a file or directory but not make any changes to it. Write allows users to upload files and to change content in a Write-enabled file. Scripts Only allows only scripts to be run. Log Visits is used to record visits to a directory in a log file.

4. C. The Read & Execute NTFS permission is required when a user wants to execute a script on a website. With Read permission, users can open a file and view its permissions and attributes. Write permission lets a user modify a file. List Folder Contents allows viewing of folder contents.

5. B. Impersonation is the process in which a server computer temporarily alters its security context to match that of a client trying to access that server and use its resources. Four levels of impersonation can be offered by clients: Anonymous, Identification, Impersonation, and Delegation.

6. A. The digest authentication model is supported only in Windows 2000 domains. Digest authentication is a good choice for a company intranet, where users have already been authenticated by Windows 2000.


7. B. Discretionary access control lists (DACLs) indicate which users and groups have access to objects in Windows 2000. A DACL contains access control entries (ACEs). ACEs apply to objects; they contain the SID of the user or group to which the ACE applies, and the level of access allowed for the object.

8. D. Users establishing a connection to websites using SSL will do so on port 443. Port 25 is used for SMTP (Simple Mail Transfer Protocol). Port 23 is used by Telnet. Port 80 is used by HTTP.

9. D. The Anonymous level of impersonation does not receive any information about the security context from the client. The Impersonation level can both authenticate the client and use its security context. The Delegation level authenticates the client and passes the client’s security context to a remote server on the client’s behalf. In the Identification level, the server can authenticate the client but cannot see the security context information provided by the client.

10. C. Encrypting File System (EFS) has built-in data recovery to allow users to access encrypted files if they have lost their encryption key. An EFS recovery agent is a user account with an assigned EFS Recovery Agent certificate. This certificate allows access to data in encrypted files. IPSec (Internet Protocol Security) applies to the transmission of data over TCP/IP networks. SSL, Secure Sockets Layer, is a protocol for secure communications over the Internet; by default, SSL uses port 443 on the web browser. PPTP, Point-to-Point Tunneling Protocol, is a data-link protocol based on PPP that allows data to be encapsulated and transmitted over an unsecured public network.

Case Study: Bailor Financial Services

Take a few minutes to look over the information presented in this case study and then answer the questions at the end of it. In the testing room, you will have a limited amount of time—it is important that you learn to pick out the important information and ignore the “fluff.”

Background

Bailor Financial Services is a leading provider of various financial services to clients all across the country. Bailor specializes in debt management for small corporations, businesses, and individuals. The company also provides retirement planning and investment advice for individuals.

CEO Statement “We want to make it easier for our clients to make financial decisions. We need to benefit from the recent increase of activity on the Internet. I would like for us to have a website that is a ‘virtual debt management’ company. All services currently offered through our brick-and-mortar offices should also be accessible to our customers through the website. A customer should be able to perform all transactions that they need through their Web browser.”

Current Environment

Bailor’s current network consists of a group of Windows 2000 servers at the company headquarters. One of these computers is a certificate authority server that is not being used, two are file servers that store company data, and 25 run Windows 2000 Terminal Services. Bailor maintains an intranet that includes web pages for technical support, human resources information, and other company information. All of Bailor’s branch offices have desktop terminals and one computer with a modem. The branches connect to a Terminal Services server at headquarters. Bailor already has one DNS namespace, www.bailor.com.


Business Requirements

Bailor Financial Services wants its users (clients) to be able to access their account information from any Web browser whenever they need services. Company management wants these clients to be exposed to the various services offered by Bailor, such as mutual funds and bonds. The key goal is to position the company so that its existing customer base can be expanded. Currently many of Bailor’s customers must phone regional offices to get account information and purchase new services. For the last year or so, several online dial-up accounts have been operated on an experimental basis for certain clients who demand immediate access to their account information. So far, this trial is going well, and Bailor wants all its clients to have the same immediate access via the Internet from anywhere in the world. At the same time, customers should be able to see information that will encourage their expanded use of Bailor’s service offerings. Visibility on the Web will be critical to all these worldwide expansion efforts.

Funding Funding is not an issue for this upgrade. Bailor intends to spend whatever is necessary to ensure that services are available to clients at any time of the day, any day of the week. The firm already has in place the infrastructure to successfully implement a system that will support online transactions, with only minor upgrades to the present system.

Fault Tolerance Since access to financial information is needed 24/7, Bailor wants to implement fault tolerance to protect against system failures. System downtime is not an option. Users must be able to access their account information at any time of the day.

Technical Requirements

Bailor already has the basic Windows 2000 infrastructure for implementing an effective website that will extend the services needed. Internet Information Services 5.0, which is part of Windows 2000, can be used to create the “virtual debt-management company” to meet Bailor’s customers’ demands. Bailor has already acquired a DNS name, www.bailor.com. The website can be configured so that users can access the specific services they need through one central location. Customers will click hyperlinks on the Bailor.com home page and transfer to sites specific to the desired service. Bailor wants to implement digital certificates as the method of secure communication for its customers. This capability is already in place: the company has one enterprise root Certificate Authority server that is not being otherwise used. Once that server is implemented, customers can perform transactions on the Bailor website with the knowledge that all their business dealings are secure.


Maintenance

Bailor’s current IT team has the capability to manage any proposed upgrades to the current system. Several of its system administrators have recently upgraded their Microsoft certifications to Windows 2000 and are excited about the opportunity to support the new network configuration.

Questions

1. Sharon is the IT manager for Bailor Financial Services and has the responsibility of moving the company from its current position to one that is accessible from anywhere in the world. She has decided on IIS as the way for Bailor to offer worldwide services via the Web. Sharon needs to make a decision on the type of authentication method to be implemented on the website to accommodate customer access. In the following lists, match each IIS authentication method with its appropriate description(s).

IIS Authentication: Anonymous; Basic; Integrated Windows; Digest; Client Certificates

Descriptions:
- Provides no authentication
- Can only be used in Windows 2000 domains
- Works with Internet Explorer 5
- Requires SSL
- Will not work with proxy servers
- Supported by most web browsers
- Passwords are not encrypted
- Supports both NTLM and Kerberos
- Requires certificates
- Passwords are encrypted before they are transmitted




2. Senior management at Bailor is trying to get a grasp on the concept of using a website to provide financial services. They have seen the dial-up test site that Sharon has been maintaining to connect several high-profile clients. Sharon needs to explain to management the processes that will take place when users connect and log on to the website to get their account information. Following are the steps that occur when a user attempts to access a website resource using IIS. Place the steps in the correct order.

- IIS accepts the IP address and, using one of its authentication methods, authenticates the user.
- IIS authorizes the user for the site’s resources.
- Client’s browser makes a request.
- The client is authorized by NTFS permissions.
- IIS determines if the client IP address is allowed.
- The client is authenticated by any custom applications the client needs to access.

3. Bailor’s managers have told Sharon that they want clients to be able to make purchases of services via the website, using credit cards. What security protocol will be required to implement this capability?

A. PPTP
B. SSL
C. EFS
D. Active Directory

4. Sharon wants users to be able to view files on the website, but she does not want anyone to be able to make changes to them. What NTFS permissions should she use for the files and directories that will be accessed by clients via the Internet?

A. Read
B. Write
C. Modify
D. Directory Browsing
E. Read & Execute


5. Bailor’s clients accessing the website will be doing so through a variety of browsers and operating systems. Sharon wants to ensure that all clients, regardless of their system, can access their account data in a secure environment. Which of the following URLs will provide a secure link to the Bailor Financial Services website?

A. http://www.bailor.com
B. http://www.bailor.com/secure
C. https://www.bailor.com
D. ftp://www.bailor.com

6. Sharon wants to create home directories on the website so that each client will have a repository to store important documents from their account. To make sure that only the user who creates a document is able to read it, what technology can Sharon implement on the Bailor website?

A. EFS
B. SSL
C. PPTP
D. IPSec




Answers

1.

Anonymous: Provides no authentication

Basic: Passwords are not encrypted; Supported by most web browsers

Integrated Windows: Supports NTLM and Kerberos; Will not work with proxy servers

Digest: Passwords are encrypted before they are transmitted

Client Certificates: Requires certificates; Requires SSL

2.

1. Client’s browser makes a request.
2. IIS determines if the client IP address is allowed.
3. IIS accepts the IP address and, using one of its authentication methods, authenticates the user.
4. IIS authorizes the user for the site’s resources.
5. The client is authenticated by any custom applications the client needs to access.
6. The client is authorized by NTFS permissions.


3. B. Secure Sockets Layer (SSL) can be used to ensure that credit card information will be encrypted when it is transmitted over the Internet to the Bailor website. Bailor will be implementing a Certificate Authority at their site, which will give them the capability to issue certificates that can be used by SSL for customer authentication. SSL uses the HTTPS protocol to provide an Internet connection that ensures privacy and message integrity.

ries that she wants users to be able to look at but not modify. This will enable viewing access to the information, but users won’t be able to make any changes to files or directories on the website. 5. C. The https indicates utilization of the HTTPS protocol, meaning

that the website is a secure site. This protocol is normally used when SSL is the chosen method of encryption for data being transmitted over the Internet. 6. A. EFS, Encrypting File System, can be used to encrypt files and fold-

ers in the home directories for the clients. The only individuals who will be able to access the files will be the clients who created them.



Chapter 8

Designing Firewall Strategies

MICROSOFT EXAM OBJECTIVES COVERED IN THIS CHAPTER:

- Design a firewall strategy. Considerations include packet filters, proxy servers, protocol settings, network address translation (NAT), and perimeter networks (also known as DMZs).
- Design a security auditing strategy. Considerations include intrusion detection, security, performance, denial of service, logging, and data risk assessment.

Both objectives covered in this chapter fall under the exam’s objective 4, Designing Security Strategies for Web Solutions. This chapter covers all aspects of the “Design a firewall strategy” skill. In terms of the “Design a security auditing strategy” skill, this chapter discusses the intrusion detection and denial of service considerations. You’ll find the remaining considerations of security auditing covered in Chapter 7.


In Chapter 7 we looked at computer security and why it is critical for protecting your highly available website. We focused on objective 4 of the exam, Designing Security Strategies for Web Solutions. To cover the first three skills of that objective, we discussed the authentication methods supported by Internet Information Services (IIS 5) and the issues involved with each method. Once users have been authenticated for website access, they must be authorized to use system resources, and you studied various authorization strategies. Another part of website security is data encryption, and you examined the types of encryption methods for ensuring that data transmitted over the Internet is not compromised. Also in Chapter 7 we covered some of the considerations of the security auditing skill under objective 4: performance, logging, and data risk assessment.

The fourth objective under the category of website security strategies covers firewall strategy. Firewalls are another solution for protecting the website against unauthorized access, and in this chapter we'll look at the types of firewalls that are available. We'll examine the differences between the kinds of filtering provided by firewalls (packet, circuit, and application filtering) and when each is appropriate for use in your web solution. You'll learn how a proxy server can be used to protect users on your internal network from hazards presented by external users. We'll examine Internet Security and Acceleration (ISA) Server, the replacement for Microsoft Proxy Server 2.0. ISA provides firewall protection through all three types of filtering. It also supplies web caching, which can reduce network traffic and increase performance by caching frequently accessed web content. We'll also examine a few of the other features provided with ISA Server, including web and server publishing rules.


Another ISA feature is intrusion detection functionality that monitors your system for known network vulnerabilities. ISA does this at the packet-filter layer and the application-filter layer. We'll see how this works, and also talk about protecting your network against denial of service (DoS) attacks. DoS attacks can consume all available network bandwidth and, at worst, bring your entire website down.

You'll see how network address translation (NAT) can be used to hide internal IP addresses from sources outside your network. You'll learn about the benefits of setting up a perimeter network, also known as a DMZ, and the proper way to configure a DMZ to protect your highly available web solution from external threats.

On the exam, you may see several scenarios for which you must decide on an appropriate firewall solution, according to the technical requirements presented within the case studies. Having a good understanding of the security issues discussed in this chapter will not only prepare you for passing the exam, but will add to your knowledge of and ability to set up adequate protection for your highly available website.

Firewalls

A firewall is a combination of hardware and software used to provide network security and prevent unauthorized access from external sources. The means for accomplishing this varies widely among manufacturers of firewall products. In principle, a firewall can be thought of as two mechanisms: one to block traffic and one to permit traffic. Depending on the requirements for your network, a firewall can be configured with emphasis on either blocking or permitting network traffic. The firewall is used to implement the access-control policy; thus the firewall's role is extremely important because it provides a single “choke point” where the policy can be imposed.

Firewalls also provide logging and auditing functions to aid in determining who is accessing your network and by what methods. Summaries can be produced for network administrators, showing the type and amount of traffic being passed and any attempts that may have been made to compromise the integrity of the network.

Firewalls cannot be used to protect against attacks that do not go through the firewall. Firewall strategies that are implemented must be realistic and reflect the overall security policy of the company.


Firewalls also offer only limited protection against viruses or data-driven attacks that are e-mailed or copied to an internal host on the network. Organizations concerned about viruses should, of course, implement a comprehensive virus-control program and ensure that software for virus protection and scanning is running on each computer in the network. Firewalls are generally intended to protect a network against the following types of attacks:

- Passive eavesdropping and packet sniffing
- IP address spoofing
- Port scans
- Denial of service (DoS) attacks
- Application layer attacks
- Trojan horse attacks

Types of Firewalls

There are two basic categories of firewalls: those that operate at the Network layer, and those that operate at the Application layer. As technology advances, the distinction between the two is blurring.

Network-layer firewalls Firewalls operating at the Network layer forward individual IP packets based on the source and destination address of each packet and the ports the packet is using. Today's Network-layer firewalls maintain information about the connections that are passing through them. Since traffic is routed directly through the firewall, a valid IP address block must be used. Network-layer firewalls are transparent to the end user and tend to be very fast.

Application-layer firewalls Firewalls operating at the Application layer are, as a rule, host computers running proxy servers. These firewalls do not permit any direct traffic to flow between networks; the firewall logs and audits the traffic that passes through it. Application-layer firewalls typically provide more detailed audit reports than Network-layer firewalls.

Proxy servers are discussed in their own section later in this chapter.


Understanding Ports

As a rule, a computer has one connection to a network, and all data destined for the computer arrives through this connection. Ports are virtual slots used to map connections between two computers, so that the computer knows which application should receive the forwarded data. An application establishes a connection with another application by binding a socket to a port number; the application “registers” with the system to receive all data destined for it on that port. No two applications can bind to the same port: if an application attempts to bind to a port that is already in use, the connection will fail (see Figure 8.1).

Ports are identified by a 16-bit number from 0 to 65535, which is divided into three ranges. The range 0–1023 is reserved for well-known ports used by applications such as FTP, HTTP, and Telnet. Ports 1024–49151 are known as registered ports and are loosely bound to multiple services. Ports 49152–65535 are known as dynamic or private ports; normally, services are not bound to these ports.

Ports have two uses: listening and opening a connection. Well-known services “listen” on their designated ports for users trying to make a connection. You can run the TCP/IP utility Netstat with the -an switch to identify ports that are in a listening mode on your computer. Since two ports are needed to make a connection, the user wanting to connect to a server process also needs a communication channel, or port, on their end. Their system will open a port (in the range above 1024) that is not currently in use by another application or service. Following are the steps that occur when a computer attempts to make a connection to port 80 (HTTP) on a server; a short socket example follows the list.

1. Computer A listens on port 80 (HTTP) for new connections.
2. Computer B wants to browse a website on Computer A and instructs its browser to initiate a connection to Computer A via port 80.
3. Computer B searches for a local port to initiate the connection.
4. The TCP stack on Computer B finds an unused dynamic port above 1024.


5. The first packet is sent from Computer B on the selected dynamic port to Computer A’s port 80.
6. Computer A responds back to the dynamic port that was initially selected on Computer B.

For a complete list of port numbers, visit www.iana.org/assignments/port-numbers.
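The example promised above reproduces this listen/connect sequence with Python's standard socket module on the loopback interface. Port 8080 stands in for port 80 so the script can run without administrator rights.

```python
# The listen/connect sequence above, demonstrated with two sockets on
# the loopback interface.

import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 8080))     # bind "registers" for this port
server.listen(1)                     # step 1: listen for new connections

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", 8080))  # steps 2-5: the TCP stack picks an
                                     # unused dynamic port for our end

conn, peer = server.accept()         # step 6: server replies to that port
print("client connected from dynamic port", peer[1])  # e.g. 52731

conn.close()
client.close()
server.close()
```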

[FIGURE 8.1 Port connections: four applications bound to ports 21, 23, 80, and 7 through TCP or UDP; a packet addressed to port 80 is delivered to the application bound to that port]

Packet Filtering

Packet filtering is one method of implementing an enterprise security policy. To filter the IP packets going through a firewall, packet filtering looks at both the source and destination IP address of each connection. It verifies the destination port of the packet and determines whether the packet should be allowed or denied entry to the network. A packet filter does not check the content of the packets; it looks only at the source and destination addresses and ports.




Microsoft Exam Objective


Design a firewall strategy. Considerations include packet filters.

There are three primary types of packet filters: static (simple), dynamic, and stateful.

Static (simple) packet filters Static packet filters simply forward data packets between the internal and external networks for predetermined IP addresses and ports, with no inspection of the contents of the packets.

Dynamic packet filters Dynamic packet filters take packet filtering one step further than static filtering by examining the packet header information. Once all packets associated with a transmission reach their destination, the firewall disables forwarding. This reduces the possibility of external attacks by enabling forwarding rules based on the data in the packet.

Stateful packet filters Stateful packet filters expand on dynamic packet filtering by tracking the context and state information in a packet. This kind of packet filter interprets higher-level protocols and adjusts its filtering rules depending on the type of protocol being used.

A packet filter is situated between the internal network and the external network or the Internet. When packets pass through the packet filter, they are compared to a set of pre-established filter rules. If the firewall is configured to permit a packet, the packet is routed to its next hop on the network. If the firewall is configured to deny the packet, it is dropped or discarded.

Packet filtering is inexpensive to implement and does not require configuration on client computers. On the downside, it does not provide the same level of security afforded by application filtering or proxy servers. It can, however, be used with network address translation (NAT) to hide internal IP addresses from external networks.


NAT is discussed later in this chapter.

How Packet Filters Work

Packet filters work at the Network and Transport layers of the TCP/IP protocol stack. Every packet that enters this stack is examined for the information shown in Table 8.1.

TABLE 8.1 Packet Fields

Protocol: Operates at the Network layer and is defined in the IP header (byte 9). Most packet filters can differentiate between TCP, UDP, and ICMP.

Source address: Operates at the Network layer and is also defined in the IP header. The source address is normally the 32-bit address of the host computer that created the packet.

Destination address: Operates at the Network layer; the 32-bit address of the host computer the packet is meant for.

Source port: Operates at the Transport layer and is in either the TCP or UDP header. When a TCP or UDP connection is made, it has to be bound to a port.

Destination port: Operates at the Transport layer and designates the port number to which the packet is sent.

Connection status: Operates at the Transport layer and indicates whether the packet is the first packet of the network session.


The device being used to filter the packets compares the values of the packet fields against predefined rules. Based on this comparison, the packets are either passed on to their next destination in the network or discarded.
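As a rough sketch of this comparison, the following Python fragment checks selected fields from Table 8.1 against an ordered rule list, where the first matching rule decides. The rules, addresses, and prefix-based matching are invented examples, not any particular firewall's syntax.

```python
# Sketch of the field-vs-rule comparison described above. "any" is a
# wildcard; other rule values match as string prefixes (a stand-in for
# real subnet masks). All rules and addresses are invented examples.

RULES = [
    # (protocol, src prefix, dst prefix, dst port, action)
    ("TCP", "any",     "10.0.0.5", 80,    "permit"),   # our web server
    ("TCP", "10.0.0.", "any",      "any", "permit"),   # outbound LAN traffic
    ("any", "any",     "any",      "any", "deny"),     # default deny
]

def field_matches(field, rule_value):
    return rule_value == "any" or str(field).startswith(str(rule_value))

def filter_packet(protocol, src, dst, dst_port):
    """Return the action of the first rule matching every field."""
    for proto, r_src, r_dst, r_port, action in RULES:
        if (field_matches(protocol, proto) and field_matches(src, r_src)
                and field_matches(dst, r_dst) and field_matches(dst_port, r_port)):
            return action
    return "deny"

print(filter_packet("TCP", "172.16.1.9", "10.0.0.5", 80))  # permit
print(filter_packet("UDP", "172.16.1.9", "10.0.0.5", 53))  # deny
```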

Anatomy of an IP Header

The Internet Protocol (IP) uses packets to transfer data between end systems. The packet's IP header consists of 20 bytes of data. The header is illustrated in Figure 8.2, and following are descriptions of its fields:

Version Set to the value of the current version of IP, which at this writing is 4.

IP Header Length Indicates the length of the header in 32-bit words. This is normally set to 5.

Type of Service Holds the flags for the type of service and allows an IP node to designate certain datagrams or packets as having higher priority than others.

Size of Datagram The total length of the packet, including header and data.

Identification An integer identifying all fragments of the datagram or packet.

Flags Used to control fragmentation.

Fragmentation Offset Indicates the position of the data in the original message.

Time To Live Set by the sender, this field is decremented by each router that the packet passes through. If the value is too small, the packet or datagram may not reach its final destination.

Protocol Indicates the Transport-layer protocol carried by the datagram or packet; 17 is used for UDP, 6 for TCP, 1 for ICMP, 8 for EGP, and 89 for OSPF.

Header Checksum Used to check for processing errors that may have been introduced into the packet by a router or bridge. Packets with an invalid checksum are discarded.


Source Address The IP address of the source, or original sender, of the packet.

Destination Address The IP address of the final destination for the packet.

Options Not normally used, but can be used for debugging.

[FIGURE 8.2 A packet's IP header: Version, IP Header Length, Type of Service, Size of Packet (Datagram), Identification, Flags, Fragmentation Offset, Time to Live, Protocol, Header Checksum, Source IP Address, Destination IP Address, Options (if any), and Data]
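To make the layout concrete, the following sketch decodes a 20-byte IP header with Python's standard struct module. The sample bytes are a hand-built example packet, not captured traffic.

```python
# Decoding the 20-byte IP header laid out above. The "!" prefix means
# network byte order; the field widths follow the header description.

import struct

def parse_ip_header(data: bytes) -> dict:
    (ver_ihl, tos, total_len, ident, flags_frag,
     ttl, proto, checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", data[:20])
    return {
        "version": ver_ihl >> 4,      # high 4 bits of the first byte
        "ihl": ver_ihl & 0x0F,        # header length in 32-bit words
        "length": total_len,          # total packet length
        "ttl": ttl,                   # Time To Live
        "protocol": proto,            # 6 = TCP, 17 = UDP, 1 = ICMP
        "src": ".".join(str(b) for b in src),
        "dst": ".".join(str(b) for b in dst),
    }

# Hand-built sample: version 4, IHL 5, 40-byte packet, TTL 64, TCP,
# from 10.0.0.7 to 10.0.0.5 (checksum left as 0 for illustration).
sample = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 40, 1, 0, 64, 6, 0,
                     bytes([10, 0, 0, 7]), bytes([10, 0, 0, 5]))
print(parse_ip_header(sample))
```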

Limitations of Packet Filters

Packet filters have several limitations that may warrant using another filtering method that provides a higher level of security:

- The contents of the packet are not inspected before it is transmitted, and the data within the packet is not read. Viruses that attach themselves to e-mails could easily pass through on an SMTP connection.
- Packet filters are not stateful; that is, they do not remember a packet once it has been passed. Some filtering methods do have stateful capability, but most packet filters do not.


- Protocols that use secondary connections are not handled well by packet filters. Consider the FTP protocol, for example: it establishes a command stream on port 21/TCP and a data stream on port 20/TCP. The client computer uses random ports. The data connection uses a source port of 20 and attempts to connect to the client on a random, high, destination port number.
- Packet filters have limited or no auditing capability.
- Packet filters offer no value-added features such as URL filtering, authentication, and HTTP object caching.

Packet filtering is a good choice of firewall strategy for networks on which only modest security is required. It's also a good choice for isolating one subnet from another on an intranet or internal private network.

Circuit Filtering

Circuit filtering differs from packet filtering in that it filters sessions rather than connections or ports. These sessions are initiated by client requests and by supporting services and applications such as FTP, HTTP, and SMTP. Circuit filtering can support multiple simultaneous connections. Unlike packet filtering, circuit filtering provides built-in support for protocols that use secondary connections, and applications used through circuit filters perform as they normally would if directly connected to the Internet.

When a firewall uses circuit filtering, it examines each connection during the session setup process to ensure that a legitimate handshake is being used for the transport protocol in effect. A handshake is the operation that establishes communications between two networking devices; data packets are not transmitted until this handshake is successfully completed. A table of valid connections is maintained on the firewall and used to determine whether data packets should be passed through the network or dropped.

How Circuit Filters Work

The following information is stored when a session is set up for a circuit filter connection:

• For tracking purposes, a unique session identifier for the connection




• The state of the connection, which can be handshake, established, or closing
• Sequencing information
• The source IP address
• The destination IP address
• The network interface through which the data packets arrive
• The network interface through which the data packets leave

The firewall checks the above-listed information against the header information in each data packet. Based on the comparison, the firewall determines whether to pass the data packet through the network or to discard it. Circuit-filter firewalls run faster than those using application filtering. They can be used to guard an entire network by barring connections from particular Internet sources and computers. Like packet filtering, circuit filtering can be used with NAT to hide internal IP addresses from external sources.
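A toy version of the session table just described might look like the sketch below; the field names mirror the list above but are otherwise invented for illustration.

    sessions = {}   # session identifier -> stored session information

    def open_session(session_id, src_ip, dst_ip, in_iface, out_iface):
        sessions[session_id] = {
            "state": "handshake",     # handshake, established, or closing
            "sequence": 0,            # sequencing information
            "src": src_ip, "dst": dst_ip,
            "in": in_iface, "out": out_iface,
        }

    def pass_packet(session_id, src_ip, dst_ip):
        # Compare the packet's header information against the stored
        # session; forward only if it belongs to an established session.
        s = sessions.get(session_id)
        return (s is not None and s["state"] == "established"
                and s["src"] == src_ip and s["dst"] == dst_ip)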

Limitations of Circuit Filters

Circuit filters have the following limitations:

• They can only be used to restrict access to the TCP protocol.
• They cannot supply as high a level of security as Application-layer firewalls.
• They have limited auditing capabilities.
• They offer limited value-added features such as URL filtering, authentication, and HTTP object caching.

Application Filtering

Application filtering firewalls (proxy services) provide their functionality at the Application layer of the TCP/IP protocol stack. They examine data packets before allowing a connection. Application-filtering firewalls maintain complete information on connection state and sequencing. They can also validate security information such as user passwords and service requests.


Application filters can supply value-added features that are lacking in packet and circuit filters, including HTTP object caching, URL filtering, and user authentication. They generate audit records for use by administrators to monitor attempts at security policy violations. In addition, application filters can be used to deny network services to some users while permitting access to others. Unlike the other firewall types, firewalls that use application filtering offer data processing and manipulation capabilities.

Most application-filtering firewalls have specialized software and proxy services. Proxy services are special programs that manage the traffic through a firewall for particular services such as FTP and HTTP. Proxy services are designed for the specific protocols being used to transmit the data packets. They provide additional functionality over other filtering methods, including increased access control, detailed data-validity checks, and audit records about all data traffic going through the firewall.

Application filtering requires two components: a proxy server and a proxy client. All communication between the proxy client and the Internet occurs through the proxy server. The proxy services are put to work on top of the firewall's host network stack, and they operate in the application space of the operating system. These services sit transparently between the client on the internal network and the real service on the external network or Internet. The client is not allowed to make direct connections; all data packets are examined and filtered. Like circuit filters, application filtering provides additional checks of data packets to ensure that they're not being spoofed. Application filters can also perform network address translation.
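For a feel of what understanding the protocol buys, here is a deliberately tiny sketch of an HTTP-aware check. The blocked paths and allowed methods are invented, and a real proxy service does far more than this.

    ALLOWED_METHODS = {"GET", "HEAD", "POST"}   # assumed policy
    BLOCKED_PREFIXES = ("/admin", "/private")   # assumed URL filter list

    def inspect_http_request(request_line: str) -> bool:
        # An application filter can parse the request line itself,
        # e.g. "GET /index.html HTTP/1.0", before deciding.
        try:
            method, url, version = request_line.split()
        except ValueError:
            return False                 # malformed request: deny
        if method not in ALLOWED_METHODS:
            return False
        if url.startswith(BLOCKED_PREFIXES):
            return False
        return True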

Limitations of Application Filters/Proxy Services

Application filters have the following limitations:

• If you are planning to run proxy services, you must replace the native network stack on the firewall server.
• Network servers cannot be run on the firewall server, because the proxy servers listen on the same ports.
• Because of the amount of processing involved when using application filtering, system performance is slower than when using the other firewall types.










• Normally a proxy must be written for each protocol that is to pass through the network. Application filtering does not provide proxies for certain protocol types, including UDP and RPC.
• Before they can be used, proxy services normally must be configured on the client end.
• Proxy services can be susceptible to OS and application bugs. This is not true of packet filtering, which does not depend on the OS.

Dynamic Packet Filters

Dynamic packet filtering, mentioned earlier in this section, is the next generation of firewall technology. Dynamic packet filters address the disadvantages inherent in simple packet filtering. Firewalls running with dynamic packet filtering forward data based on the data in the packet, and they allow modifications to a security rule on the fly. Primary implementations for this firewall technology are in situations where support for the UDP transport protocol is required.

Dynamic packet filters operate by associating all UDP packets with a virtual connection. A virtual connection is established if a response packet is generated and returned to the original requestor. The information for the virtual connection is only remembered for a brief time; if no response packet is received within a specified time period, the virtual connection is nullified.

Advantages and Disadvantages

Dynamic packet filters have the same advantages and disadvantages as packet filters, except that dynamic packet filters do not allow unsolicited UDP packets to enter the internal network. This kind of firewall implementation is effective with protocols such as Domain Name Service (DNS) that can use either a TCP or UDP connection. Dynamic packet filtering also provides limited support for the ICMP protocol.
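As a rough model of the virtual-connection bookkeeping just described, consider the sketch below; the timeout value and the table layout are assumptions made for the example.

    import time

    TIMEOUT = 30.0   # seconds to remember a virtual connection (assumed value)
    pending = {}     # (remote addr, remote port, local addr, local port) -> timestamp

    def outbound_udp(local, lport, remote, rport):
        # Record the outbound request so a reply can be matched to it.
        pending[(remote, rport, local, lport)] = time.monotonic()

    def inbound_udp(remote, rport, local, lport):
        # Permit only a timely reply to a recorded request; unsolicited
        # UDP packets find no entry and are dropped.
        sent = pending.pop((remote, rport, local, lport), None)
        if sent is None or time.monotonic() - sent > TIMEOUT:
            return False   # unknown or expired: the virtual connection is nullified
        return True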

Proxy Servers

A proxy server is software that acts as a mediator between your Internet browser and an Internet server. The proxy server connects to the Internet and gathers the pages you have requested. It stores them locally on its own hard drive and delivers them to your desktop. When your computer is configured to use a proxy server, you are not actually connected to the Internet; the proxy server is.



Microsoft Exam Objective

Design a firewall strategy. Considerations include proxy servers.

There are three primary reasons for using a proxy server:

• A proxy server can share an Internet connection across a whole LAN.
• A proxy server caches local copies of web pages, which reduces the time required for browsers to display page content.
• Users do not connect directly to the Internet; they connect to the proxy server, which connects to the Internet.

The proxy server has two network interfaces—one connected to the LAN for the client computer, and the other connected to the Internet. When users make requests, the proxy server translates the requests and passes them to the Internet. The computer on the Internet responds to the proxy server, which passes the requested data back to the client computer. Proxy servers have several security features, primary of which are the following: 

• Capability to block inbound connections.
• Capability to restrict outbound connections.
• Users can initiate connections to Internet servers, but the Internet servers cannot initiate connections to the users' computers.

When using proxy servers, users are authenticated by their Windows security credentials. Username, protocol, TCP/IP port number, time of day, and destination IP address and/or domain name can all be used to restrict proxy-server outbound connections.

Proxy services are application specific, and a proxy must be developed before it can support new protocols. One of the most popular servers today is the TIS Internet Firewall Toolkit, which includes proxies for telnet, FTP, HTTP, and NNTP. Another general-purpose proxy is SOCKS, whose library can be compiled into client-side applications to make them work through firewalls.

A proxy is a secure solution because it protects the internal network from the hazards intrinsic to direct IP connections. An external network perceives the location with a proxy system as a single computer with one IP address. In addition to security, a proxy system also offers a way to conserve IP addresses. Any addressing scheme can be used within the internal network, even false IP addresses.

Microsoft Proxy Server

Microsoft Proxy Server acts as a liaison between client computers and the Internet and routes requests between them.

Microsoft Proxy Server is primarily used for Windows NT 4.0. It has been replaced in Windows 2000 with ISA (Internet Security and Acceleration Server), which is a better firewall solution.

Proxy Server can be used to establish Internet access for all your internal network clients. It is compatible with most Internet protocols and Windows Sockets–based applications, including Transmission Control Protocol/Internet Protocol (TCP/IP), Hypertext Transfer Protocol (HTTP), HTTPS (HTTP over Secure Sockets Layer), File Transfer Protocol (FTP), Telnet, Internet Relay Chat (IRC), RealAudio, Post Office Protocol 3 (POP3), Simple Mail Transfer Protocol (SMTP), and Network News Transfer Protocol (NNTP).

Proxy Server has the capability to cache frequently accessed information. Cached copies of popular web pages can be maintained locally and updated automatically. Proxy Server facilitates secure access between your internal network and the Internet by preventing unauthorized access to the internal network and eliminating the need for clients to connect directly to the Internet.


ISA Server: Firewall Protection and Web Caching

Internet Security and Acceleration Server (ISA) is the successor to Microsoft Proxy Server—but it is much more than just an upgrade. ISA provides the features of both proxy servers and firewalls in one package: an enterprise-level firewall and a web cache server that integrate seamlessly with Windows 2000 policy-based security.

ISA comes in two versions. ISA Server Standard Edition was designed to meet the firewall needs of small businesses. ISA Server Enterprise Edition provides the scalability and management capabilities needed in larger organizations. Table 8.2 compares the two versions.

When we speak of ISA Server in this section, we are specifically referring to the ISA Server Enterprise Edition.

TABLE 8.2 Differences between ISA Standard and Enterprise Editions

Characteristic | ISA Server Standard Edition | ISA Server Enterprise Edition
Scalability | Limited | Yes
Distributed and hierarchical caching | Hierarchical only | Yes
Active Directory integration | Limited | Yes
Supports array-level and enterprise-level policy | No | Yes
Multiserver management | No | Yes

ISA Server Enterprise Edition is the version that you will need to implement for a highly available web solution. The Enterprise Edition can be scaled up and optimized for Windows 2000 symmetric multiprocessing (SMP) and utilizes this extra processing power to increase performance. Scaling out can be done using Network Load Balancing (NLB) services and the Cache Array Routing Protocol (CARP).

ISA Server Enterprise Edition contributes to a web solution that is fault tolerant, highly available, and capable of increased performance by using clustered multiple ISA servers. It has a lower associated total cost of ownership because it includes integrated services, management tools based on the Windows 2000 platform, and an open platform that third-party vendors can use to provide value-added applications.

ISA Server

ISA Server, which replaces Microsoft Proxy Server, integrates seamlessly into Windows 2000. The ISA Server Enterprise Edition discussed here is designed for large-scale deployments and supports server arrays and enterprise-level policy. It provides modular flexibility for deployment. Both firewall and caching services can be deployed separately or together in an integrated solution. This section describes the ISA Server technology.

ISA Server Modes of Operation

During the initial installation of ISA, you configure it to operate in one of three modes: firewall protection mode, web caching mode, or an integrated mode that combines the functionality of both firewall protection and web caching. ISA's available features depend on the mode selected during installation. Table 8.3 lists these features.

TABLE 8.3 ISA Server Features Availability by Mode

ISA Server Feature | In Firewall Mode | In Web Caching Mode
Enterprise policy | Yes | Yes
Access policy | Yes | Only for HTTP
Web publishing | Yes | Yes
Server publishing | Yes | No
Packet filtering | Yes | No
Cache configuration | No | Yes
Application filtering | Yes | No
Web filtering | Yes | Yes
Real-time monitoring | Yes | Yes
Alerts | Yes | Yes
Reports | Yes | Yes
VPN | Yes | No
SecureNAT client support | Yes | No
Firewall client support | Yes | No
Web Proxy client support | Yes | Yes

ISA Arrays

Multiple ISA servers grouped together into an array contribute to providing fault tolerance, load balancing, and distributed web caching for the Windows 2000 enterprise. The array is managed as a single server, so a network administrator will realize a reduction in management overhead. All of the servers in the array share a common configuration: once one server in the array is configured, its configuration is applied to all servers that are members of the array.

To be a member of an ISA array, a server must be a Windows 2000 computer. All the array's servers must reside in the same domain and the same site. (By site, we mean a Windows 2000 site—a group of computers that exists in a well-connected TCP/IP subnet.)


All servers in an array must be configured using the same ISA mode. If a new server is introduced into an array that is configured in firewall mode, the new server must also be configured in that mode to become part of the array.

Increased performance for the network is also achieved through the use of ISA server arrays: client requests are distributed among several servers, allowing faster client response times. Fault tolerance is ensured because each server has the same configuration; when one of the servers becomes unavailable, client requests are distributed among the remaining working servers in the array.

ISA Multilayer Firewall Mode

This section describes how firewall security is provided by ISA Server at four levels: IP packet filtering, circuit filters, Application-level filtering, and dynamic packet filtering. Each of these levels has its own advantages and disadvantages. (And remember, all mentions of ISA Server in this section refer specifically to the ISA Server Enterprise Edition.)

IP Packet Filtering

The packet filtering capabilities inherent in ISA Server allow for the control of data from IP packets to and from the ISA Server. When packet filtering is enabled, all packets from external interfaces are dropped unless they have been explicitly allowed. ISA Server packet filtering can be configured to block packets that originate from specific hosts on the Internet, as well as any packets that may be targeted at a specific service on the internal network (such as web, proxy, or SMTP servers). IP packet filters on an ISA Server can control access based on the source and destination IP addresses of the packets, by the communication protocol being used, and by the specific IP port or communication channel being used.

The advantages of using IP packet filters on ISA Server are as follows:

• They're fast.
• They use limited processing power when comparing a packet to the rules that have been set.
• They drop all packets except those that have been explicitly allowed.


• The client computer requires no configuration to process the packet filters.

And here are the disadvantages of using IP packet filtering on ISA Server:

• They do not inspect the contents of each packet as it is transmitted across the network.
• They cannot manipulate the data of the packet as it passes through the firewall.

A few "allow" packet filters are predefined in ISA Server. These include DNS Lookup, which allows access to DNS servers on behalf of web clients, and ICMP, which is used for such things as flow control between the ISA Server and the external network.

Circuit-Level Filtering

Circuit-level filtering monitors communications sessions between computers on different networks. In ISA Server's circuit filtering, the firewall verifies a session by determining whether a data packet is a connection request, belongs to an existing connection, or consists of a virtual circuit between two peer Transport layers. If the session is valid, communication through the firewall is allowed.

One of the advantages of ISA Server's circuit-level filtering is that it's faster than application filtering, simply because less processing is required for each packet. Another advantage is that circuit-level filters, like packet filters, explicitly drop all packets unless some type of rule has been set to permit them. On the downside, circuit-level filtering does not inspect the data in the packets before they are transmitted on the network. Caching is not available with circuit-level filters, and authentication is not possible unless it originates from the requesting firewall.

Application Filtering

Application-level filters on an ISA Server introduce the most intelligence when monitoring data packets. This filtering works at the Application layer and uses the application's communications protocol to determine whether a data packet should be allowed or denied access to the network. Application filters can provide protection at the Application layer by inspecting the communication. They can redirect communications to another target or create dynamic IP packet filters. They can also alert the system to possible intrusions. (Intrusion detection is discussed later in this ISA Server section.)

Principal advantages of application filtering include its ability to inspect data that is passing through the firewall. Also, firewall rules can be configured to allow or deny access independent of specific application resources. One disadvantage of using application filters is a slight degradation in network performance because of the extra processing required for inspecting each data packet. Another drawback is that each application being used with ISA Server must have a specific application filter configured to inspect its communication.

Several application filters are built into ISA Server. These include filters for HTTP, FTP, SOCKS, RPC, streaming media, and H.323.

Dynamic Packet Filtering

Dynamic packet filters can dynamically manipulate firewall rules in response to user requests. When dynamic packet filtering is used on ISA Server and a client requests data through the firewall, IP packet filters are opened for the duration of the communication session, to allow for a response from the server. When the communication session ends, the IP packet filters are removed.

Dynamic packet filters on ISA Server have all the advantages of the other filtering levels; plus, they keep only a minimum number of IP packet filters open at a time. One of their disadvantages is that they do not inspect the data as it passes through the firewall. They also cannot manipulate data as it is transmitted through the firewall.

Integrated Intrusion Detection

An intrusion occurs when an individual attempts to break into or misuse your system. This misuse can be the stealing of data or purposeful destruction of hardware or software. An intrusion detection system is used to spot intrusions into your system. ISA Server contains built-in intrusion detection, accomplished with tools that provide constant monitoring of the network. ISA's intrusion detection is based on the Internet Security Systems (ISS) technology, which monitors for well-known system and network vulnerabilities.



Microsoft Exam Objective

Design a security auditing strategy. Considerations include intrusion detection and denial of service.

Intrusion detection is performed at two layers by ISA. First, ISA discovers intrusions at the IP packet-filter layer, which makes it possible to see vulnerabilities that are intrinsic to the IP protocol. Second, ISA employs application filtering to see vulnerabilities at the application-filter layer. Third-party application filters can be used, or administrators can create their own using the filter APIs defined in the ISA Server SDK.

When ISA Server detects an intrusion, it generates one of its predefined built-in alerts. Alerts can be configured to send an e-mail message to a responsible individual, run an application, log the intrusion to the event log, or stop or start services on the ISA Server. Let's look at the specifics of how ISA detects intrusions at the IP packet-filter layer and the application-filter layer.

Detection at the IP Packet Layer

ISA provides intrusion detection for the most common IP vulnerabilities. The IP packet-filtering engine on the ISA Server can detect and block the following:

Windows out-of-band This attack is a kind of denial of service (DoS) attack and is used to disable a Windows network. If successful, this attack causes the affected computer to crash or lose its connection to the network.

Land attack In a land attack, an IP address is faked or spoofed so that the spoofed IP address matches the destination IP address and port. If the attack is successful, some TCP/IP implementations on the computer can go into a loop that may eventually crash the computer.


Ping of Death The Ping of Death attack uses an echo packet with a size greater than 65,536 bytes, as opposed to the default 64 bytes. This can cause a buffer overflow on the targeted computer.

Port scan In a port scan, the intruder attempts to gain access by scanning open ports on the computer and cataloging the services that are running on those ports.

IP half-scan In the IP half-scan, repeated attempts are made to complete a TCP handshake without any ACK packets being communicated. When transmission is attempted using TCP, a SYN packet is sent to the destination computer, which responds with a SYN/ACK packet. The initial sender replies with an ACK packet, and the session is established. If the destination computer never receives the ACK packet, it sends an RST packet, and the connection attempt is not logged. The attacker can then determine which ports are listening based on whether a SYN/ACK or an RST packet was received. (A toy detector for this pattern is sketched after this list.)

UDP bomb This type of intrusion uses UDP packets that contain incorrect values, which can cause susceptible operating systems to crash.
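Here is the toy half-scan detector promised above. The threshold is an arbitrary assumption, and real detection logic is considerably more nuanced than counting probes per source.

    from collections import defaultdict

    THRESHOLD = 20                    # assumed number of half-open probes tolerated
    half_open = defaultdict(set)      # source IP -> ports probed but never completed

    def on_syn(src_ip, dst_port):
        half_open[src_ip].add(dst_port)
        if len(half_open[src_ip]) > THRESHOLD:
            print(f"alert: possible IP half-scan from {src_ip}")

    def on_ack(src_ip, dst_port):
        # A completed handshake is ordinary traffic, not a probe.
        half_open[src_ip].discard(dst_port)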

Tools for IP packet-filter intrusion detection are disabled by default. They must be enabled before intrusion detection will take place at the packet layer.

Detection at the Application Layer

ISA Server has built-in intrusion detection for DNS and Post Office Protocol (POP) protocol infringements. Filters can be configured to detect the following intrusion attempts:

DNS host name overflow This occurs when a DNS response for a host name exceeds a certain fixed length. Certain applications do not check the length of host names, and this can provide the hacker with a way to execute random commands on a targeted computer.

DNS length overflow DNS responses for IP addresses should contain a field that is 4 bytes in length. A remote hacker can use this field to execute random commands on a targeted computer.


DNS zone transfer from privileged ports (1–1024) A zone transfer can occur on privileged ports when a DNS client application transfers zones from an internal DNS server, using ports 1–1024.

DNS zone transfer from high ports (greater than 1024) This is the same intrusion as the preceding one, except that ports above 1024 are used.

POP buffer overflow A POP overflow occurs when a hacker attempts to gain root access to a POP server by overflowing an internal buffer on that server.

It is recommended that you configure ISA Server to monitor all four of the DNS detection filters.

For additional information on intrusion-detection systems, see the Network Intrusion Detection Systems FAQ at http://www.robertgraham.com/pubs/network-intrusion-detection.html.

Intrusion Detection Alerts and Responses

Once an alert has been triggered because of an intrusion, the Administrator has several options for responding to each individual alert:

• An e-mail can be sent to an SMTP server, to primary recipients, or to secondary or cc'd recipients. A reply address can be included in the e-mail message.
• A program can be executed under a service account or a specific account that has been set up in the alert configuration.
• The alert can be reported to the Event log.
• One or more ISA Server services can be stopped or started.

Also, with ISA Server, the Administrator can specify what events will trigger alerts. The number of alert occurrences can be designated, as well as the number of events per second that must occur before an alert is generated. After an initial alert has been generated, the Administrator can designate when a second alert may be triggered (immediately, after a manual reset, or after a specified amount of time has elapsed).

Denial of Service (DoS)

A denial of service (DoS) attack floods the available bandwidth of a website with a high volume of traffic. All available resources are consumed by the bogus traffic, preventing legitimate client traffic from accessing the site. A worst-case DoS attack can take the entire website down.

Denial of service attacks are most commonly executed against network connectivity; the result is that hosts or other networks are unable to communicate with the disabled network. These attacks can be initiated in other ways as well. They can use a site's own resources against itself, for example, by having two services on the site consume all available bandwidth. The DoS attack itself can consume all available bandwidth by generating large amounts of echo packets directed at the attacked site. Attacks can also consume resources the system needs to operate, and they can make the system unstable by sending unexpected data over the network.

Several steps can be taken to avoid and prevent DoS attacks:

• Implement some type of router filtering on the website.
• Remove or disable all unused and unneeded services.
• Create a baseline of system performance and monitor it faithfully. Changes in the baseline should be documented and reviewed for potential problem areas.
• If possible, implement fault-tolerant network configurations.
• Establish and maintain regular backups of every system on the network.
• Establish and maintain rigorous password policies.

Secure Web and Server Publishing

ISA Server allows for the placement of servers behind firewalls, and thus internal servers can be published on the Internet without fear of the internal network's being compromised. Web publishing and server publishing rules can be configured within ISA Server to determine what requests will be eligible for forwarding to servers on the internal network.

Web Publishing and SSL Bridging

The content of published web servers can be cached to allow for quicker response to client requests. When the IP address of a web server is configured in ISA Server, all requests to that address are directed to the Web Proxy service. This service answers the requests, compares them to the web publishing rules that have been configured, and determines what action to take based on those rules. By using web publishing rules, ISA Server can define which IP addresses or domain names will be answered and redirected to internal web servers.

ISA Server can provide SSL bridging, which encrypts or decrypts data as it passes through the firewall from the client computer to the host. The HTTPS protocol comes into play when a secure connection to a resource is requested through a web browser on the Internet. When a client makes an HTTPS request, ISA uses SSL tunneling, which allows a Web Proxy client to make an HTTPS request and establishes a secure connection that remains encrypted from the client to the destination host during the entire session.

Three possible scenarios are available for SSL bridging using ISA Server:





• HTTP requests are forwarded as SSL requests. The ISA Server encrypts the client's HTTP request and forwards it to the destination web server. The web server returns an encrypted object to the ISA Server, which decrypts the object and forwards it to the client.
• SSL requests are forwarded as SSL requests. The SSL client request is decrypted by the ISA Server, encrypted again, and forwarded to the web server. The web server returns the encrypted object to the ISA Server, which decrypts it and sends it to the client.
• SSL requests are forwarded as HTTP requests. The client's SSL request is decrypted by the ISA Server and forwarded to the destination web server as HTTP. The web server returns the HTTP object to the ISA Server, which encrypts the object and forwards it to the client.

Server Publishing

ISA Server includes server publishing. E-mail servers, web servers, and other computers on an internal network are published on the Internet from behind the ISA Server to protect them against external attacks. Security is not compromised, because all incoming requests are filtered through the ISA Server. Server publishing rules define what happens when incoming requests are made to the ISA Server: if an incoming request matches a server publishing rule, the request is redirected to another server or port on the ISA Server. Server publishing uses SecureNAT, which means there is no need to install or configure additional software on the internal server to provide this capability.

A good example of server publishing is the placement of a Microsoft Exchange Server behind an ISA Server to allow e-mail to be published on the Internet. Incoming e-mail traffic can be intercepted by the ISA Server, filtered, and then passed on to the Exchange Server. The Exchange Server is never directly exposed to external users. Here's how server publishing works:





• A client requests an object from an IP address that the client knows as the publishing server. This address is actually the IP address of the ISA Server.
• The ISA Server processes the request, contingent on the server publishing rules that have been established, mapping the request to the IP address of the internal server.
• The internal server returns the object to the ISA Server, which in turn forwards the object to the client that initially requested it.

Client Support

Before users can make requests through the ISA Server, they have to use one of the client-connection options that are supported by ISA Server. When a user makes a request—say, for an HTTP object—the request is first directed to the Firewall Service on the ISA Server. ISA Server supports four options for client connections: Firewall clients, SecureNAT clients, Web Proxy clients, and SOCKS clients.

Firewall Clients

Of all the options available for client connection to an ISA Server, the Firewall Client software offers the most flexibility. It provides connectivity to most protocols, and for some protocols it is the only option allowed for connecting to an ISA Server.


A Firewall client is a computer with the Firewall Client software installed on it. It can only be installed on computers running Windows ME, 95, 98, NT 4.0, or 2000. It supports Windows Sockets version 1.1 applications, but before clients can access the Internet through the ISA Server, they must be configured for the required protocols and ports. Firewall clients are able to provide authentication information about the user who is trying to access the Internet through the firewall.

Installing Firewall Client on an ISA Server is not supported by Microsoft and can have unpredictable results.

SecureNAT Clients

Clients without the Firewall Client software installed can use SecureNAT, which requires no special software on the client computer and can be used with any operating system. Requests from SecureNAT clients are directed to the SecureNAT layer and inspected by the Firewall Service to determine whether access should be allowed. If access is permitted, the request is handled by the SecureNAT engine, which substitutes a global IP address that is valid on the Internet for the internal IP address.

One of the main disadvantages of using SecureNAT is that NAT clients are not authenticated, so rules cannot be enforced on a per-user basis. A second disadvantage is that a protocol definition must exist for each protocol that the client needs to use; SecureNAT needs explicit protocol rules defined for its use.

Web Proxy Clients

Any CERN-compatible web browser application can be a Web Proxy client. Requests from Web Proxy clients are directed to the Web Proxy Service on the ISA Server. The client's web browser is configured so that it points to port 8080 on its proxy server. Web Proxy clients support only the HTTP, HTTPS, and tunneled FTP protocols. Like Firewall clients, Web Proxy clients are able to provide authentication information about the user.


SOCKS Clients

SOCKS clients are used primarily for compatibility with Unix applications. SOCKS version 4 clients do not support authentication, and access control can only be achieved by IP address.

SOCKS version 5 does support user-level authentication.

Table 8.4 summarizes the features that are available with ISA clients.

TABLE 8.4 ISA Client Options

Option | On Firewall Clients | On SecureNAT Clients | On Web Proxy Clients | On SOCKS Clients
Installation on client required | Yes | No | No, only requires web browser | No
Operating system support | Windows platforms only | Any OS that supports TCP/IP | All platforms | All platforms, by way of SOCKS applications
Requires change to the overall network configuration | No | Yes | No | No
Protocol support | All Winsock applications | Application filters required for multiconnection protocols | HTTP, HTTPS, FTP | Any protocol supported by SOCKS applications
User-level authentication | Yes | No | Yes | No


ISA Web-Caching Mode

A web cache is used to watch for requests for content such as HTML pages, images, and files. A copy of the requested content is saved in the local cache of the ISA Server, and if there is another request for the same data, ISA Server uses the copy rather than asking for the data again from the original server. By maintaining a cache of web objects, ISA Server delivers fast RAM caching and a reduction in the amount of network bandwidth consumed for requests.

The cache is checked first for data to fill client requests; if a request cannot be fulfilled from the cache, the ISA Server will initiate a request to the remote server on the client's behalf. Once the remote server responds, the response is cached and sent to the client. The most requested content stays in the local cache of the ISA Server. Network traffic is reduced because the content is only requested once from the original server, and retrieving "hot" items from RAM instead of disk storage reduces response time. ISA Server supports forward caching for outgoing requests and reverse caching for incoming requests.

ISA Server caching is implemented through the Web Proxy service. Web clients request content through this service, which checks the local cache to see if the requested content is there. If it's not, the Web Proxy service makes a request to the server in the array that has the content, and stores the response in cache. The content is now available to be serviced from the local cache.

When using ISA Server Enterprise Edition, it's easy to add more CPUs, disks, or RAM in order to scale the web cache service. Cache Array Routing Protocol (CARP), discussed later in this section, allows for the grouping of multiple ISA Servers into one logical cache. This array provides not only fault tolerance, but also load balancing and distributed caching. By using an array configuration, you will see increased performance and bandwidth savings, and the ISA Servers in the array can be managed as one single logical unit.

ISA Server improves on basic web caching by contributing the features described in the following paragraphs.

Forward and Reverse Caching

Forward caching is a key principle behind the significant performance benefits realized through ISA Server. Recurring requests are serviced from the local cache on the ISA Server. The system achieves quicker response times as well as a reduction in network traffic.

Reverse caching improves the performance of internal websites that have been published to the Internet. ISA Servers are placed in front of the published servers, and frequently accessed content is cached on these servers.


Client response time is improved, and there is less load on the published servers. Reverse caching can use the Scheduled Cache Content Download service to pull content from a remote computer to a server in your internal network.

Active Caching

Bandwidth can be optimized by automatically updating popular objects in the cache. Active caching is a way to keep objects fresh in the cache during periods of low network activity. Objects are verified against the original web server before they expire from cache. ISA uses an algorithm to determine the objects that would benefit most from being refreshed automatically by the Web Proxy service.

Scheduled Content Download

HTTP content that is frequently requested by clients can be downloaded to the ISA Server. This content is then directly accessible from the ISA Server instead of from the Internet. Administrators can download a single URL, multiple URLs, or complete websites. They can also create scheduled jobs to download cached content at predetermined times or on a recurring schedule. Jobs can be scheduled for both outgoing and incoming web requests.

Web Proxy Routing

Before we explain web proxy routing, we need to introduce the concept of chaining as it relates to ISA Server. Chaining refers to the hierarchical connection between individual ISA Servers or ISA Servers in an array. Client requests are submitted upstream through a chain of cache servers until the requested object is found; the object is then locally cached on each server in the chain. Web proxy routing takes the concept of chaining, or hierarchical caching, one step further: requests can be routed depending on their destination.

Chained Authentication

Before requests are routed to an upstream server in a chained server configuration, ISA Server can be configured to require the client to authenticate. In this scenario, the downstream ISA Server passes the client's authentication information to the upstream server.

FTP Caching

The same routing and caching benefits that ISA Server provides for HTTP content are extended to the FTP protocol.

Streaming Media Support

A streaming media filter is provided by ISA Server to support the popular media formats, including Microsoft Windows Media (MMS), Progressive Networks Protocol (PNM), and Real Time Streaming Protocol (RTSP). Support is transparent for both Firewall and SecureNAT clients.
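Reduced to its core, the check-cache-then-fetch flow that underlies all of these caching features looks like the sketch below; the fetch callable is a stand-in for the upstream request, not an ISA interface.

    cache = {}   # URL -> cached response body

    def serve(url, fetch_from_origin):
        # Serve from the local cache when possible; otherwise request the
        # object on the client's behalf, cache the response, and return it.
        if url in cache:
            return cache[url]             # hit: no upstream traffic at all
        body = fetch_from_origin(url)     # miss: one request upstream
        cache[url] = body
        return body

Forward and reverse caching differ mainly in which side of the firewall the clients sit on; the basic flow is the same.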


Programmable Cache Control

Cache control features are included with ISA Server. The following types of HTTP objects can be cached:

• Objects that are larger than a specified size
• Objects whose modification date is not specified
• Objects that do not have a "404 - File Not Found" response
• Objects with question marks in their URL

CARP

When using the ISA Server Enterprise Edition, multiple ISA servers can be configured as a single logical cache called a server array. All of the servers in the array are managed as one: when a change is made to the configuration on one member of the array, all of the other servers in the array are modified with that change. The Cache Array Routing Protocol (CARP) provides this caching capability when using an array of ISA Enterprise Edition servers. CARP is enabled by default on all servers in an array.

CARP uses queryless distributed caching, which means it does not conduct queries between caches. Instead, it uses hash-based routing to provide a request resolution path to an array of servers. The result is single-hop resolution. CARP provides scalability and can automatically adjust to the addition or deletion of servers in the array. The protocol uses the open HTTP standard and is compatible with existing firewall and proxy server solutions.

Protocol rules, packet filters, and web and server publishing rules can all be defined at the array level. These are collectively known as array policies and apply to all ISA servers in the array.

How CARP Works Here’s what CARP does when caching information in an ISA server array: 

An array membership list is used to track all the servers in the array.



A hash function is computed for the name of each server in the array.



A hash function is computed for the name of each requested URL.



The two hash values (for server and URL) are combined. The URLplus-server array member that comes up with the highest value becomes the owner of the information cache. Each downstream server in the array has direct knowledge of where the requested URL is stored locally.
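The hash-combination step can be sketched as follows. The hash function and the XOR combination are simplifications of our own; CARP's actual algorithm is defined in its specification.

    import hashlib

    def _score(text: str) -> int:
        # Stand-in hash function for the sketch.
        return int.from_bytes(hashlib.md5(text.encode()).digest()[:8], "big")

    def carp_owner(url: str, servers: list) -> str:
        # Combine the URL hash with each server hash; the highest
        # combined value owns the cached copy of that URL.
        return max(servers, key=lambda srv: _score(srv) ^ _score(url))

    # Every array member computes the same owner, so a request can be
    # routed to the right cache in a single hop, with no queries.
    owner = carp_owner("http://example.com/page", ["isa1", "isa2", "isa3"])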


Deciding on a Firewall and Security Solution for a Growing Network

As IT manager for your company, you are responsible for not only the day-to-day operations of the network but also strategic long-term planning to meet the company's growth needs. Several smaller companies have recently been purchased, and their networking resources have been consolidated with yours. Of these three acquisitions, only one does not have a Windows 2000 infrastructure in place. In the past six months you have completed upgrading all your servers to Windows 2000, and the process of converting all workstations to Windows 2000 Professional and Windows XP Professional is moving along.

Since several of the acquired companies are located far from what is now corporate headquarters, you are heavily dependent on web-based applications for supporting both the customer base and your own internal operations. There is an intranet at the corporate office, but you're concerned about its security now that the other companies have been purchased. Employees must be able to access human resource information from their particular location without fear of vital employee information being compromised. You also need to come up with some type of security policy to ensure that both employee data and customer data are protected from internal and external intrusions.

You've already spent a good deal of time researching security solutions for your network, looking for one that can protect employee and customer information and be scalable as well, to meet the long-term growth needs that appear to be in your future. Microsoft Internet Security and Acceleration (ISA) Server looks like the tool you need. ISA has both state-of-the-art firewall ability and integrated management capability that can be seamlessly integrated into your existing Windows 2000 network. The virtual private network (VPN) functionality inherent in ISA can be used to ensure that employees are able to access their private data without fear of compromise.

As the company grows, ISA can be expanded with minimum disruption because of its ability to use server arrays. These arrays also offer the load balancing and caching that you need to control the shared-bandwidth concerns arising from expansion. Finally, policy rules can be implemented to control access to the corporate intranet—restricting the people who have access, as well as the locations that have access.


Protocol Rules

Protocol rules are used by ISA Server to determine the type of connections that are allowed for client access. Under protocol rules, a hybrid filter type is applied, which consists of both circuit-level and dynamic IP packet filtering. This allows the ISA Server to filter packets for a client connection dynamically.



Microsoft Exam Objective

Design a firewall strategy. Considerations include protocol settings.

Protocol rules are defined by protocol definitions. These definitions identify both the primary and secondary connections that may be required for the protocol. Multiple protocols can be allowed or denied by one protocol rule through the use of one or more protocol definitions.

Protocol definitions include rules for secondary connections. When a client requests communication using a primary connection, ISA Server will automatically create filters for the secondary connection (sketched below). This is required by FTP and other applications in which the client uses the primary port to make the connection to the server, and the server uses the secondary port to respond to the client's requests.

Protocol rules can also be scheduled to allow or deny access during a predetermined period of time. A good example of this arrangement is denying certain protocols (such as streaming media) during business hours. Protocol rules can also be used to restrict access by protocol to certain users and groups. This won't work with SecureNAT, because it cannot authenticate by username; SecureNAT clients can only be restricted by IP address.

Protocol rules apply to all ISA clients, including Firewall and SecureNAT clients. To use secondary connections with SecureNAT, the client computer must have the Firewall Client installed. Additionally, an application filter must define the protocol rule; if it does not, the connection will not work with SecureNAT.
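Here is the promised sketch of the secondary-connection mechanism, using the FTP ports described earlier in this chapter; the data structures are invented for the example and are not ISA's own.

    # A protocol definition names the primary connection and any
    # secondary connection the protocol requires (FTP shown here).
    FTP = {"primary": ("tcp", 21), "secondary": ("tcp", 20)}

    open_filters = set()   # filters created dynamically for active sessions

    def on_primary_connect(client_ip, definition):
        # The client opened the primary (command) connection, so a filter
        # is pre-opened to admit the server's secondary (data) connection.
        proto, port = definition["secondary"]
        open_filters.add((proto, port, client_ip))

    def allow_inbound(proto, src_port, dst_ip):
        return (proto, src_port, dst_ip) in open_filters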


NAT

The Internet is growing exponentially every year. Network Address Translation (NAT) technology was developed to provide an immediate solution to the shortage of IP addresses, as well as to address network security.



Microsoft Exam Objective

Design a firewall strategy. Considerations include network address translation (NAT).

Analysis of an IP Address

The IP address is divided into two parts: a network number and a host number. Each physical network on the Internet must have a unique network number, which makes up some of the bits of the IP address; the other bits represent the host number. Four classes of network IP addresses have been allocated: Classes A, B, C, and D. Most of the Class A and B addresses have already been allocated, and Class D addresses are reserved for special functions. This leaves only Class C addresses for distribution to networks on the Internet. As the Internet grows, the number of available Class C addresses shrinks.

The new generation of the Internet Protocol, IP version 6, will allow for a larger number of IP addresses, but it's likely that years will pass before the new protocol is migrated to existing networks. Thus the IP address shortage will probably continue. By using NAT, an organization can share a single IP address among multiple computers on the network.

One thing computer users should keep in mind when they connect to the Internet is that it's not a "one-way street": when they are connected to the Internet, it is also connected to them. Any Internet connection creates the potential for hackers and intruders to get at the resources on the user's computer. This security dilemma has become critically important in today's enterprises. A number of firewall products have been introduced to keep unwanted and unauthorized people away from the resources of internal networks, and NAT, too, can be used to provide firewall-style protection. NAT only allows connections that are generated from inside the network: a user can connect to a resource outside the network, but that resource cannot initiate a connection to the user's computer.

How NAT Works

Network address translation is basically the translation of an IP address within one network to a different IP address that is known outside the network. Figure 8.3 shows a typical NAT installation.

FIGURE 8.3 Network address translation (NAT) in a network (workstations 10.0.0.2 through 10.0.0.5 reach the Internet through a NAT server whose private-side address is 10.0.0.1 and whose public-side address is 192.168.32.2)

Traffic destined for the Internet or other external networks goes through the NAT server. This server acts as a gateway between the private clients and the Internet. Since the private clients do not have public IP addresses, NAT represents them on the public network. It keeps a mapping table of all network traffic that flows to and from each client. All traffic generated from the clients is received by NAT on the private-side interface. This data is sent to the Internet through a public interface on the NAT server. Conversely, when data packets are destined for a particular client, they arrive at the public interface on the NAT server. The NAT server checks its mapping table and forwards the packets to the appropriate client on the internal network.
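In sketch form, the mapping table of a port-translating NAT works roughly as follows. The addresses mirror Figure 8.3; everything else is invented for illustration.

    PUBLIC_IP = "192.168.32.2"   # the NAT server's public-side address in Figure 8.3
    nat_table = {}               # public port -> (private address, private port)
    _next_port = 40000           # arbitrary starting point for assigned ports

    def outbound(private_ip, private_port):
        # Rewrite the source to the public address and a fresh public port,
        # remembering the mapping so the reply can be routed back.
        global _next_port
        public_port = _next_port
        _next_port += 1
        nat_table[public_port] = (private_ip, private_port)
        return PUBLIC_IP, public_port

    def inbound(public_port):
        # Only traffic matching an existing mapping reaches a client.
        # A connection initiated from outside finds no entry and is dropped.
        return nat_table.get(public_port)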


NAT provides the following benefits:

• It minimizes the cost of Internet access by allowing multiple computers to share a single Internet connection.
• It introduces security into the network by making it possible to hide internal IP addresses from external networks and the Internet.
• It conserves registered IP addresses by allowing computers to use assigned private IP addresses.

Keep in mind that although NAT is an excellent security solution, some applications fail to operate properly when being accessed over the Internet through a NAT gateway. This misbehavior may be caused by addressing information being carried inside the application's data stream, or by the requirement for secondary incoming sessions that cannot be established through NAT. A few known protocols that have issues with NAT are FTP, IPSEC, DNS, and RSVP.

To learn more about NAT, see RFC 3022, "Traditional IP Network Address Translator (Traditional NAT)."

Perimeter Networks (DMZs)

A perimeter network is a small network located between an organization’s private network and any external networks or the Internet. The perimeter network is also sometimes called a three-homed perimeter network and is often nicknamed a DMZ (demilitarized zone). It is set up to give external users access to specific servers that are located on the network. Normally these are web or e-mail servers.



Microsoft Exam Objective

Design a firewall strategy. Considerations include perimeter networks (also known as DMZs).


A DMZ is used as frontline protection for valuable resources, keeping them from direct exposure to untrusted networks. Untrusted networks are networks that are outside your administrative control and your security perimeter. Setting up a DMZ is a way to add an additional layer of security between a protected network and an unprotected or untrusted one.

The most secure network configuration for a DMZ is to keep the DMZ network, the internal network, and the external network on separate subnets. Network traffic cannot cross among the subnets without some type of routing. By putting your DMZ in a separate subnet, you can control what traffic is allowed to go in and out of the DMZ. Without a DMZ, any hacker who compromises the front line of your network may gain access to the entire network.

Bear in mind that a DMZ can cause a drop in performance on the network.

DMZs are normally used for deploying web and e-mail servers for companies. Web content and e-mail messages can be sent to the Internet while access to the company's internal network is denied. If the DMZ security were compromised for some reason, only the servers in the DMZ would be compromised, not the entire internal network.

Configuring Your Firewall for the DMZ

At a minimum, you should use two firewalls when configuring your DMZ. The two firewalls separate the internal network from the external network. The front-end firewall is connected to the Internet or other external network and to the DMZ. The back-end firewall is connected from the DMZ to the internal network.

The front-end firewall should be configured to allow only the minimum required traffic to pass through. If your front end contains web servers, this would typically mean that only TCP ports 80 for HTTP and 443 for SSL would be allowed to flow through. Figure 8.4 shows a typical DMZ configuration.


FIGURE 8.4 Typical DMZ configuration (clients on the Internet pass through the front-end firewall to reach the e-mail and web servers in the DMZ; a back-end firewall separates the DMZ from the intranet clients)

Several things should be taken into consideration when deciding to use a DMZ:

• Additional cost factors will be introduced because of the requirement for additional software and hardware for the computers needed in the DMZ.
• There will be a slight degradation of network performance.
• Additional cost will be associated with both the time to implement the DMZ and the system downtime required to add the DMZ to your existing network.

Deciding on a Firewall Solution

Microsoft recommends that, at a minimum, you use a two-firewall solution for your highly available web solution. This will provide the necessary segregation and protection needed to shield your network. Several firewall designs are possible, but the best minimum solution is illustrated in Figure 8.5.


FIGURE 8.5 A minimum two-firewall solution (Internet clients pass through the first firewall to the web server in the DMZ; a second firewall separates the DMZ from the database server on the internal network)

The first firewall separates your web servers from the Internet. The second firewall separates the web servers from the internal network that contains your database servers (such as SQL Server). This arrangement minimizes the likelihood that your network will be disrupted by internal or external attacks on its infrastructure.

When designing a two-firewall solution, be aware of its advantages and disadvantages.

Advantages:

• Databases are isolated from both internal and external attacks.
• Fewer computers are exposed to the Internet, because only the web servers have direct connections to the Internet.
• Database servers do not have a direct connection to the Internet, so there is no possibility of an external attack on these servers.
• Data transmitted between the web servers and the database servers is not communicated over the public network; therefore, that data need not be encrypted before it is transmitted.
• Only one Internet connection is shared on the network, which reduces bandwidth sharing on the front end of the network.


	The possibility of intrusion to the website is minimized because the site is isolated from the rest of the network.

Disadvantages:

	A two-firewall solution can be difficult to maintain, and costly depending on the type of equipment used for the solution.

	When multiple paths exist on the network, routing protocols and address structures must be considered.

Summary

In this chapter we looked at firewall strategies for protecting the highly available web solution. A firewall is a combination of hardware and software used to prevent unauthorized access to resources on the network. Firewalls are designed to protect against various network attacks, including packet sniffing, IP address spoofing, port scans, and denial of service attacks. The way a firewall provides this functionality varies depending on the manufacturer of the firewall.

Firewalls operate at both the Network layer and the Application layer. Network-layer firewalls forward IP packets based on the source and destination address in each packet. Application-layer firewalls generally are proxy servers. They provide functionality over and above Network-layer firewalls, including the ability to log and audit traffic that passes through the firewall.

Packet filtering filters IP packets based on the source and destination address of the packets. Circuit filtering filters packets based on session information and has built-in support for protocols that use secondary connections. Application filtering firewalls perform their filtering at the Application layer and can authenticate usernames and other security groups. Dynamic packet filters are like regular packet filters with one exception: they have the ability to modify the rules that control the flow of data through the firewall.

Proxy servers act as mediators between client computers and the Internet. They do not allow clients to connect directly to the Internet, but instead cache the data for clients' use. Microsoft Proxy Server has been succeeded by Internet Security and Acceleration (ISA) Server, which has increased


functionality for achieving network security. ISA Server supports packet, circuit, application, and dynamic packet filtering. It supplies not only firewall capability, but also web caching. ISA comes in two versions, a Standard Edition and an Enterprise Edition; the Enterprise Edition is the version required for large enterprise networks.

ISA can be configured to operate in three modes. Firewall mode provides firewall security at four different layers. In web-caching mode, the results of web server requests are cached for faster response times. The third mode is a combination of the firewall and web-caching modes. ISA supports both secure web and server publishing: internal servers can be published to the Internet without concern that they will be compromised by external users. Before users can make requests through an ISA Server, they must use one of the client connection options that ISA Server supports: Firewall clients, SecureNAT clients, Web Proxy clients, and SOCKS clients.

ISA Server has integrated intrusion detection to protect your network against potential break-ins. Intrusion detection is provided at the IP packet-filter layer and the Application layer, and ISA Server monitors for well-known system vulnerabilities. Another threat to guard against is denial of service attacks, which can take many forms and consume vital network resources.

Network address translation (NAT) can be used to hide a private network's internal IP addresses from external sources. It acts as a gateway between client computers and the Internet. A perimeter network (DMZ) is yet another option for network security. It's a way for external users to have safe access to specific servers within your internal network. Normally, front-end web servers or e-mail servers (such as Microsoft Exchange 2000) are located within a DMZ.

Exam Essentials

Know the types of data filtering supported by firewalls. Firewalls can support several types of data filtering, including packet filtering, circuit filtering, and application filtering.

Explain how packet filters differ from circuit filters. Packet filters use the source and destination addresses of a packet, and the ports being used, to determine whether to deny or allow access to the network. Circuit filters examine packets based on sessions instead of ports.


Identify three primary reasons for using a proxy server. First, a proxy server can share an Internet connection across an entire LAN. Second, the proxy server caches local copies of web pages, which reduces the time required for a browser to display the page content. Third, users do not connect to the Internet directly but rather to the proxy server, which connects to the Internet.

Identify the modes of operation for ISA Server. ISA Server can be configured to operate in three modes: firewall mode, web-caching mode, and an integrated mode that combines both.

Know about ISA Server's intrusion detection role at the IP packet-filter layer. ISA Server can protect your network against the following IP vulnerabilities: Windows out-of-band (denial of service), land attack, Ping of Death, port scan, IP half-scan, and UDP bomb.

Know about ISA Server's intrusion detection role at the Application layer. ISA Server can protect your network against the following Application-layer vulnerabilities: DNS host name overflow, DNS length overflow, DNS zone transfer from high ports, and POP buffer overflow.

Define protocol rules and how they are used with ISA Server. ISA Server uses protocol rules to determine the types of connections that are allowed for client access. Protocol rules apply to all ISA clients.

Know what an ISA array is and how it is used with ISA Server. An ISA array is a group of ISA servers that share a common configuration. Together, they provide fault tolerance, load balancing, and distributed web caching. A single ISA server of the group manages the array.

Identify the client connection options that are supported by ISA Server. ISA Server supports Firewall clients, SecureNAT clients, Web Proxy clients, and SOCKS clients.

Understand how a perimeter network (DMZ) can protect an internal network. A perimeter network (DMZ) is a small network located between a company's internal network and the Internet or an external network. The DMZ, internal, and external networks are configured on separate subnets to provide network security for the internal network.

Describe network address translation (NAT) and how it is used to provide website security. NAT translates internal IP addresses to public external IP addresses to hide the internal addresses from external networks.


Key Terms

Before you take the exam, be certain you are familiar with the following terms:

application filtering
Application layer attacks
array policies
cache, caching
Cache Array Routing Protocol (CARP)
circuit filtering
denial of service (DoS) attacks
DMZ
DNS (Domain Name Service)
dynamic packet filters
File Transfer Protocol (FTP)
firewall client
Firewall Service
firewalls
handshake
HTTP (Hypertext Transport Protocol)
HTTPS (Hypertext Transfer Protocol over Secure Socket Layer)
Internet Security and Acceleration (ISA) Server
intrusion detection
IP address spoofing
IP addresses
ISA Server Enterprise Edition
Microsoft Proxy Server 2.0
network address translation (NAT)
packet filtering
packet sniffing
perimeter network
POP
port scans
ports
Post Office Protocol 3 (POP3)
protocol rules
protocols
proxy server
proxy services
SecureNAT
security policy
server array
server publishing rules
Simple Mail Transfer Protocol (SMTP)
SOCKS
SSL bridging
stateful packet filters
static packet filters
symmetrical multiprocessing (SMP)
TCP/IP (Transport Control Protocol/Internet Protocol)
untrusted networks
web cache
web publishing rules
Windows Sockets



Review Questions

1. Which two of the following identify basic categories for firewalls?

A. Network layer
B. Application layer
C. Session layer
D. Physical layer

2. Which of the following firewall filtering methods can be used to block network traffic originating from specific Internet hosts?

A. Application filtering
B. Circuit filtering
C. IP packet filtering
D. Session filtering

3. Which of the following firewall technologies filters sessions?

A. IP packet filtering
B. Circuit filtering
C. Dynamic packet filtering
D. Application layer

4. Which of the following statements describe the difference between packet filtering and dynamic packet filtering? Select all that apply.

A. Dynamic packet filters examine the header information that is included in the packet header.
B. Packet filters examine the header information that is included in the packet header.
C. Packet filters do not check the content of the packets being transmitted.
D. Dynamic packet filters can validate security information such as user passwords and service requests.


5. What service can be used to hide internal IP addresses from the Internet? Select all that apply.

A. Proxy services
B. SSL Bridging
C. SOCKS
D. NAT

6. Which of the following elements replaces Microsoft Proxy Server, used in previous versions of Windows?

A. NAT
B. ISA
C. DMZ
D. IIS

7. What kind of network can be positioned between a corporation's internal and external network? Select all that apply.

A. DMZ
B. WAN
C. Perimeter network
D. LAN

8. Which of the following ISA Server clients cannot authenticate users?

A. Firewall client
B. SOCKS client
C. SecureNAT client
D. Web Proxy client


9. Which of the following is the correct description of a port?

A. Ports forward individual IP packets based on the source and destination addresses of the packet.
B. Ports are virtual slots that are used to map connections between two computers.
C. A port is an integer that is used to identify all fragments of the datagram.
D. A port is software that acts as a mediator between your web browser and an Internet server.

10. SOCKS clients are primarily used to access what type of application?

A. Unix
B. Windows NT
C. Windows 2000
D. Apple


Answers to Review Questions

1. A, B. Firewalls generally fall into two categories: Network-layer firewalls, which forward individual IP packets based on the source and destination addresses of the packets; and Application-layer firewalls, which (as a rule) do not allow direct connections between a client and the Internet. Firewalls can be configured to allow or deny passage of network traffic and provide both logging and auditing functions to aid in determining who is accessing a network.

2. C. IP packet filtering is used to forward IP packets through a firewall. This is done by examining both the source and destination IP address of each packet. Packets can either be allowed or denied access to the network depending on their IP address.

3. B. Instead of filtering packets by examining IP address information, circuit filtering controls packets based on session or port information. Normally these sessions are initiated by client requests from a web browser using either HTTP or FTP. Circuit filters also have the capability to use secondary connections and support multiple connections.

4. A, C. Dynamic packet filters go one step beyond ordinary packet filtering by examining the header information in the data packet and using it to make a decision on the packet's destination. As a rule, packet filters do not check packet header information or content data. Neither packet filters nor dynamic packet filters can validate security information such as user passwords. Only application filtering can perform this function.

5. A, C, D. Proxy services, SOCKS, and NAT (network address translation) can all be used to hide internal network addresses from external sources. Proxy services are specialized software that manages network traffic through a firewall and requests content from the Internet on behalf of client computers. SOCKS is used by both firewalls and proxy servers to permit client access to the Internet while preventing untrusted hosts on the Internet from accessing the internal network. NAT will hide internal network IP addresses from external sources. In addition, it conserves registered addresses by allowing administrators to assign private IP addresses for internal use.


6. B. ISA (Internet Security and Acceleration) Server not only replaces Proxy Server but adds additional functionality. It seamlessly integrates with Windows 2000 security policy and is part of an enterprise-wide solution for securing a highly available website.

7. A, C. A DMZ or perimeter network can be set up between a corporation's internal network and an external network or the Internet. The DMZ traditionally is the spot for web servers that contain web content or e-mail services for external access. If for some reason this area is compromised, the internal network is still secure.

8. C. The SecureNAT client does not have the capability to authenticate users or security groups. Protocol rules can be employed along with SecureNAT to allow access to only certain IP addresses.

9. B. A port is a virtual slot used to map a connection between two computers. Ports are identified by a 16-bit number from one of three ranges. The range 0–1023 is reserved for well-known ports used by applications such as HTTP and FTP.

10. A. SOCKS (Windows Sockets) was initially designed for accessing Unix applications. It is now used for accessing applications in multiple-OS client/server networking environments. SOCKS v5 is used in Windows 2000 and supports a number of authentication methods. It also supports public-key encryption for secure data communications.


Case Study: WestStar, Inc.

Take a few minutes to look over the information presented in this case study and then answer the questions at the end of it. In the testing room, you will have a limited amount of time—it is important that you learn to pick out the important information.

Background

WestStar, Inc., provides cutting-edge networking services to Fortune 500 companies all over the world. The company has more than 1,300 employees, most of them located in the corporate offices in Seattle, WA. WestStar has recently sold several of its subsidiaries to a competitor. The parent company needed the resulting influx of cash to purchase selected, small dot.coms recently on the market due to a downward slide in the technology marketplace. The purchased dot.coms have technology that WestStar wants to integrate with its existing service offerings.

CEO Statement

"The purchase of these small companies will give us the additional technology we need to further our business in several new markets. I am really excited about the direction we're moving in and will need the support of all employees to ensure that we are successful during this transition. I want a network that reduces our cost of ownership, while improving productivity for all users."

Existing Environment

There are 10 Windows NT Server 4.0 computers at headquarters. One of these computers is a certificate server but is not being used. Five of the servers run Windows NT Server 4.0, Terminal Server Edition. All the WestStar headquarters employees, except those in the IT department, use Windows 98 desktop computers. The IT staff use Windows NT Workstation 4.0 desktop computers. Two of the servers hold SQL 7.0 databases. One of these is used for record keeping of employee timesheets and other human resources data. The other SQL database stores pertinent customer data.

Each of the NT servers has a 50GB hard disk configured as RAID 5. Only 10 percent of the disk space is currently being utilized on each server. Disk space will not be an issue when deciding on a solution for the anticipated growth of the company.

All headquarters employees are granted access to e-mail and the Internet. Users in the new satellite offices will also need both e-mail and Internet access. WestStar is currently connected to the Internet by means of a T1 line. A proxy server is running on one of the NT servers, and it is constantly crashing and causing unnecessary downtime for users trying to connect to the Internet.


Business Requirements

WestStar needs a cost-effective and secure solution that will effectively bridge the corporate offices and the newly acquired dot.com satellites. Many of the employees at headquarters travel frequently and need access to the corporate intranet. Secure access must be arranged for the corporate intranet, and all employee communications must be encrypted. These employees submit their timesheets through the Internet to Human Resources and frequently check on client billing issues. WestStar wants to implement a stringent security policy that includes monitoring and logging all connection activity. Certain areas of the corporate intranet will be accessible only to a few management-level employees.

Funding

Funding has been approved for two additional SQL servers, to be added to the existing ones in a clustered environment for fault tolerance and high availability. Money has also been budgeted for five additional web servers for the front end of the network.

Technical Requirements

All servers will be upgraded to Windows 2000 Advanced Server, and all clients will be upgraded to Windows 2000 Professional and Windows XP Professional. The proposed solution must integrate seamlessly with Windows 2000. The existing SQL database servers will require upgrading to SQL Server 2000. Active Directory integration is going to be a key component of the upgraded network. WestStar needs technologies that will seamlessly integrate with Active Directory services.


The company also needs the flexibility of being able to expand their current infrastructure with minimum downtime and with no interruption in existing services. Some type of firewall solution must be implemented to protect the existing resources on the network and future installations. Management has considered replacing their current proxy server with something else—it does not meet current or future security needs.

Questions

1. As WestStar expands its network, the company's exposure on the Internet will grow as well. Some type of solution is needed for the company's network that will guarantee it won't be compromised by hackers and that all the data on the network can be securely transmitted from both internal and external sources. Which of the following Windows 2000 technologies will give WestStar the leverage to protect its existing network as well as any additional resources added later?

A. Internet Information Server
B. ISA
C. SOCKS
D. Microsoft Cluster Service
E. DHCP
F. DNS

2. Management at WestStar wants to ensure that data being transmitted over the Internet to secure company websites won't be compromised by intruders. They also don't want outside interests to be able to connect to client computers within the network; they want to shield their internal network from outsiders. What service can WestStar use to provide this level of security?

A. SOCKS
B. Proxy Server
C. HTTPS
D. NAT


3. Once WestStar upgrades its internal network, the company will have to provide access to the corporate intranet for both remote users and clients from the purchased dot.coms. Some of these clients will not be upgrading their workstations to Windows 2000 for at least a year after the corporate network upgrade. What can be installed on each client computer to give it secure access to the corporate intranet?

A. The firewall client for ISA
B. The SOCKS client for ISA
C. WINS
D. RAS

4. Steve is the IT manager for WestStar. He has been told that upper management is considering several solutions to increase the security of their network. They have heard that ISA Server is an upgrade to their existing proxy server, but they're a little skeptical about using another proxy-type product because of problems they've experienced in the past. Steve wants to explain the various levels of protection that can be provided if they go with the ISA solution. In the table below, match the ISA Server features in the left column with their correct descriptions on the right.

Feature                     Description
Packet filtering            Checks source and destination IP address of packets
Circuit filtering           Can be used with NAT
Application filtering       Inspects contents of packet
Dynamic packet filtering    Supports protocols that use secondary connections
                            Filters sessions rather than ports
                            Also known as proxy services
                            Can only be used with TCP protocol
                            Has slower performance than the other filtering modes


                            Allows on-the-fly modifications to security rules
                            Can validate usernames and security groups
                            Has limited auditing capabilities

5. Many of the employees from the dot.com acquisitions will eventually need access to the internal corporate intranet. This access will be provided through a Web Proxy client, so the dot.com folks will be able to connect using any CERN-compliant web browser. WestStar has secured two DNS names to be used for their network. One, www.weststar.com, is for external customer use; the second, www.weststar-hr.com, is for use by internal employees. Which of the following URLs will give employees a secure link to the WestStar intranet website?

A. http://www.weststar-hr.com
B. http://www.weststar.com
C. https://www.weststar-hr.com
D. ftp://www.weststar-hr.com

6. Steve must decide what type of clients to install on the computers for the dot.com satellite employees who require access to the corporate intranet. In the following table, arrange the appropriate features under each type of ISA Server–supported client.

ISA Server Clients    Feature
Firewall client       Must be installed on the client
SecureNAT client      Supports only Windows platforms
Web Proxy client      Works with any operating system that supports TCP/IP
SOCKS client          Requires only a web browser
                      Requires changes to the network configuration
                      Supports all Winsock applications
                      Can only support multiconnection protocols with an application filter
                      Supports HTTP, HTTPS, and FTP
                      Can validate usernames and security groups
                      Supports all operating system platforms



Answers

1. B. ISA (Internet Security and Acceleration) Server will supply the firewall capabilities WestStar needs to protect its network. ISA Server can easily be expanded as warranted along with growth of the company's infrastructure. ISA Server also integrates seamlessly with Windows 2000, which will support the upgrade WestStar is proposing.

2. D. NAT (network address translation) can be used to shield internal network IP addresses from external sources. With NAT in use, only one IP address is provided for access to all of the internal clients for requests from outside the internal network. NAT determines which client should receive the request and forwards it accordingly.

3. A. By installing the Firewall client for ISA, you are ensuring that each client has secure connectivity to the corporate intranet. ISA's Firewall client supports Windows 95, 98, NT, and 2000 clients, so the dot.com satellites will be able to access the network even though their workstations have not been upgraded.

4.

Packet filtering
	Checks source and destination IP address of packets
	Can be used with NAT
	Has limited auditing capabilities

Circuit filtering
	Filters sessions instead of ports
	Supports protocols that use secondary connections
	Can be used with NAT
	Can only be used with TCP protocol
	Has limited auditing capabilities

Application filtering


	Also known as proxy services
	Inspects contents of packet
	Has slower performance than the other filtering modes
	Can validate usernames and security groups

Dynamic packet filtering
	Checks source and destination IP address of packets
	Allows on-the-fly modifications to security rules

Packet filtering works at the Network and Transport layers and determines whether to deny or allow packets depending on their source and destination IP addresses. This is the least secure method of IP packet filtering, but its security can be hardened with the addition of NAT. Packet filters have very limited auditing capabilities. Circuit filtering checks sessions rather than the packets' IP addresses. Circuit filters support secondary connections. Like packet filters, their security can be hardened with the addition of NAT. Application filtering is also known as proxy services and provides functionality at the Application layer. Proxy services examine the packets before allowing a connection. They can validate users and passwords, and they generate audit records of all connections. Dynamic packet filters are like regular packet filters with one exception—they allow security rules to be changed dynamically depending on the protocol being used.

5. C. The correct choice is https://www.weststar-hr.com. The HTTPS protocol is used when connecting to a secure website, so the employees would use this URL to connect securely to the corporate intranet.



6.

Firewall client
	Must be installed on the client
	Supports only Windows platforms
	Supports all Winsock applications
	Can validate usernames and security groups

SecureNAT client
	Works with any operating system that supports TCP/IP
	Can only support multiconnection protocols with an application filter
	Requires changes to the network configuration

Web Proxy client
	Supports HTTP, HTTPS, and FTP
	Supports all operating system platforms

SOCKS client
	Supports all operating system platforms

The Firewall client is the only ISA Server client that requires installation on the client computer. It supports only the Windows platform. It does provide user-level authentication. The SecureNAT client works only with operating systems that support the TCP/IP protocol. SecureNAT requires a change to the network but none to the client computer. If multiconnection protocols are needed, they must be handled with an application filter. The Web Proxy client need not be installed on the client computer. Any CERN-compliant web browser can be used for access. This ISA Server client supports all operating system platforms and the HTTP, HTTPS, and FTP protocols. Like the Firewall client, the Web Proxy client supplies user-level authentication. The SOCKS client does not require installation on a client computer and supports all operating systems via SOCKS applications. No changes are required on the network for its implementation. SOCKS version 5 on ISA Server does not support user-level authentication.


Chapter 9

Designing Database Integration Strategies

MICROSOFT EXAM OBJECTIVES COVERED IN THIS CHAPTER:

	Design a database Web integration strategy. Considerations include database access and authentication.


In this chapter we'll address a critical and complex aspect of web solution design: the integration of databases for access by the site's users. You'll learn about the two authentication methods supported by SQL Server 2000: Windows authentication mode and mixed mode. We'll also examine the various data-access methods that can be used when attaching to a SQL Server database located in the data layer of an n-tier network environment.

An important part of this chapter is its description of the technologies that can be used to integrate databases into your highly available web solution. We focus on failover clustering and NLB solutions for achieving high performance and fault tolerance when incorporating SQL Server 2000 as part of a web integration strategy. We also look at several database technologies that are specific to SQL Server 2000, including partitioning, log shipping, and transaction replication.

As you study this chapter and prepare for the exam, be sure that you understand the types of database access methods available in an n-tier architecture, and when it is appropriate to use each method. Another important concept is the type of user authentication applied for accessing resources on both IIS and SQL Server 2000. On the exam you will be asked to select appropriate database access methods and authentication options based on technical and business requirements. In some situations, you may consider more than one option to be the appropriate answer; remember to focus on the objectives that have been stated for the exam.


IIS 5 Authentication for Website Access

Before a client is allowed to access the databases on a website, that client must be given access to the site itself. Several authentication methods are provided with Internet Information Services 5.0 (IIS 5). These methods can be used to control client access to the sites in the web layer, which use data in the data layer of an n-tier environment.

Anonymous logon Anonymous authentication uses the IUSR_computername account and is for users who require general access to your site.

Basic authentication For basic authentication, users must enter a username and password before they are allowed to connect to the site. The password is transmitted in clear text.

Integrated Windows authentication Integrated Windows is a more secure authentication mode than basic authentication. It supports both NTLM and Kerberos authentication.

Client certificates In this method, digital client certificates are used to authenticate clients.

Digest authentication Digest authentication is supported only by Windows 2000 clients; the password is encrypted before it is transmitted.

For the most thorough coverage of the authentication methods, see the section “IIS Authentication Methods” in Chapter 7.

Client Authentication for Database Access

Authentication of clients before they access a resource also applies when clients are trying to access SQL Server 2000 databases. Two types of authentication are provided with SQL Server 2000: Windows authentication mode and mixed mode. One of these two methods must be applied when clients are attempting to access the database resources.


Microsoft Exam Objective

Design a database Web integration strategy. Considerations include authentication.

Windows authentication mode is the default authentication mode for SQL Server 2000. In this mode, SQL Server depends on Windows for the authentication of users or clients. The database administrator can grant users the right to log on to the computer that is running SQL Server. SIDs (security identifiers) are used to track user logon access. The right to log on can be granted to individual users or to groups of users. Once users have been granted access, their rights are checked for any objects they attempt to access.

Mixed mode authentication differs from Windows authentication mode in that users can be authenticated by either Windows or SQL Server. Mixed mode uses NTLM or Kerberos to authenticate users' credentials. If authentication is not possible with either of these two protocols, a username and password are required and are then maintained by SQL Server 2000. This username/password pair is compared against names that are stored in the SQL Server tables. Mixed mode is generally provided for backward compatibility with applications written for earlier versions of SQL Server and with non-Windows clients.

Windows authentication is the preferred mode and should be used whenever possible for database access. With Windows authentication, SQL Server can deny or allow the client's logon to the data resource without requiring a separate and different username and password.
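The practical difference between the two modes shows up in how an application connects. The following ASP/VBScript sketch is illustrative only; the server, database, and login names are placeholders rather than values from this chapter.

    <%
    ' A minimal sketch of the two SQL Server 2000 logon styles.
    Dim conn
    Set conn = Server.CreateObject("ADODB.Connection")

    ' Windows authentication mode: the client's Windows credentials are
    ' used, so no separate SQL Server username or password is sent.
    conn.Open "Provider=SQLOLEDB;Data Source=SQLVS1;" & _
              "Initial Catalog=Sales;Integrated Security=SSPI"
    conn.Close

    ' Mixed mode: SQL Server checks the supplied login name and password
    ' against the logins stored in its own tables.
    conn.Open "Provider=SQLOLEDB;Data Source=SQLVS1;" & _
              "Initial Catalog=Sales;User ID=WebUser;Password=S3cret!"
    conn.Close
    %>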

Data Access Methods

Microsoft provides several methods for clients to access data that is located in the data tier of a highly available web solution. This section explores the data-access technologies ADO, ODBC, OLE DB, and XML. We will also look at how Active Server Pages (ASP) and COM objects employ these technologies to communicate with a data source in the data tier. Figure 9.1 illustrates the relationship between the various data-access components described in this section.




Microsoft Exam Objective

Design a database Web integration strategy. Considerations include database access.

FIGURE 9.1 Components used by clients on the front end to access a database on the back end of an n-tier network (a client browser reaches the web tier over HTTP; from there, either a COM component uses ADO and OLE DB, or ODBC is used directly, to reach the SQL Server 2000 database)

ODBC

Open Database Connectivity (ODBC) is a core component of Microsoft's strategy for accessing data from relational and nonrelational databases. An open, vendor-neutral, and universal data-access interface, ODBC eliminates the need for developers to learn multiple APIs.


Several components are required for using ODBC:

	The first requirement is an ODBC client. This is normally positioned on the front end; the client software can be written in a program such as Access, Visual Basic, or any other ODBC-enabled application (Lotus, for instance).

	The second component needed is the ODBC driver for access to the back-end server. In the ODBC Driver Catalog, Microsoft has provided an extensive list of ODBC drivers that can be utilized for database access on the back end. This catalog contains descriptions of the Microsoft-provided drivers and an explanation of ODBC, in addition to examples of industry usage.

	The last component for ODBC data access is the database management system (DBMS) server or back-end server. Any ODBC client can access any DBMS server, as long as the client has the correct ODBC driver for that server.

ODBC is an excellent choice of data-access method because application developers don’t have to modify their applications on the front end when trying to access databases on the back end. As long as there is an ODBC driver for the back-end server, the front-end client can access it. ODBC is supported by SQL Server for use by applications that communicate with SQL Server databases utilizing APIs written in C, C++, and Microsoft Visual Basic. When SQL Server 2000 is installed, the setup program installs an ODBC driver. Application support for ODBC is provided in Windows 2000, NT, 98, and 95. Both Microsoft and the manufacturers of the data source itself provide ODBC drivers. For updated ODBC drivers, check Microsoft’s website or the website of the particular data-source manufacturer.
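As an illustration, a DSN-less ODBC connection opened from an ASP page might look like the sketch below. The {SQL Server} driver name is the one installed by SQL Server 2000 setup; the server and database names are placeholders.

    <%
    ' Hypothetical DSN-less ODBC connection opened through ADO.
    Dim conn
    Set conn = Server.CreateObject("ADODB.Connection")
    conn.Open "Driver={SQL Server};Server=SQLVS1;" & _
              "Database=Sales;Trusted_Connection=Yes"
    conn.Close
    %>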

OLE DB

OLE DB is an API that enables COM applications to access data from any OLE DB data source. The data source can be a SQL database or another database format. When an application needs to access an OLE DB data source, it uses an OLE DB provider, which accepts requests and provides access to the data source. A native OLE DB provider is included in SQL Server 2000 for access to data in SQL Server.


OLE DB is the recommended tool for developers who need high performance. The OLE DB specification provides database access for most applications. Applications that cannot be exposed to SQL Server 2000 through ADO can use the OLE DB provider to acquire access to the database. Like ADO, OLE DB includes support for XML in SQL Server 2000.

ADO

Microsoft ActiveX Data Objects (ADO) is a data-access tool that allows applications to use the OLE DB API to access data from OLE DB data sources. ADO is a COM wrapper for OLE DB, which means that developers can use it to access a variety of resources via the OLE DB providers. ADO can be used to connect to SQL Server databases, retrieve and manipulate their stored data, and update data in an instance of SQL Server.

ADO uses OLE DB functionality. It is a library of COM interfaces that work at the application layer of the OSI model. ADO can be utilized by applications written in languages such as Visual Basic, Visual C++, and Visual FoxPro. All data access by ADO applications is done through the OLE DB provider included with SQL Server 2000. ADO is a good general-purpose data-access tool for SQL Server for several reasons:

	ADO concepts are easy to learn and use.

	Most general-purpose applications can use ADO's included feature set.

	Programmers are able to quickly change applications that employ ADO for data access.

	XML functionality is supported by ADO. This affords developers an easy migration path when retrieving data using ADO and converting it for use with XML.

Microsoft offers OLE DB providers for ODBC that ADO can use to leverage existing ODBC databases through OLE DB.
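A typical use of ADO from an ASP page opens a connection, executes a query, and walks the resulting Recordset. This sketch assumes a hypothetical Customers table; none of the names come from this chapter.

    <%
    ' Minimal ADO sketch: query a table and emit one line per row.
    Dim conn, rs
    Set conn = Server.CreateObject("ADODB.Connection")
    conn.Open "Provider=SQLOLEDB;Data Source=SQLVS1;" & _
              "Initial Catalog=Sales;Integrated Security=SSPI"

    Set rs = conn.Execute("SELECT CompanyName FROM Customers")
    Do While Not rs.EOF
        Response.Write rs("CompanyName") & "<br>"
        rs.MoveNext
    Loop

    rs.Close
    conn.Close
    %>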

XML

Extensible Markup Language (XML) is replacing HTML (Hypertext Markup Language) as the standard for presentation of data over the Internet. XML is a more flexible way to define, author, and manage documents


on the Web. Both XML and HTML are subsets of SGML (Standard Generalized Markup Language).

SQL Server 2000 supports XML functionality. An XML-formatted document can be retrieved via SQL Server 2000 using one of several methods. Clients can access SQL Server databases using XML by sending a query string in an HTTP request. Clients can also retrieve XML documents or data by using Transact-SQL statements in a URL. Clients using XML no longer need to know the underlying structure of the data source to retrieve the data. Developers can write generic applications for retrieving information from the data source or from SQL Server. XML for database access is supported by both ADO and OLE DB.
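For example, once a virtual directory has been configured for SQL Server 2000 XML access (the directory, server, and table names here are hypothetical), a client could retrieve XML directly with a URL such as:

    http://webserver/sales?sql=SELECT+CompanyName+FROM+Customers+FOR+XML+AUTO&root=Customers

The FOR XML AUTO clause asks SQL Server 2000 to return the result set as XML elements rather than as an ordinary rowset.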

ASP

Active Server Pages (ASP) are powerful programming tools that combine HTML, scripting, and COM components to create dynamic web applications. HTML and script code are combined into a single file that can present both static and dynamic content. An ASP (.asp) file can contain a combination of text, HTML, XML, and both client- and server-side scripts. When a client's browser requests an ASP file, the server-side script in the file is run. The file is processed, and a web page is sent to the client's browser showing the result of the script that was run from the requested file.

The web server that houses the ASP file does all the work of generating the HTML pages sent to the client's browser. These files or scripts cannot be copied, because the only thing sent to the browser is the resulting output of the script. Clients are unable to view the actual script commands that produce the pages being viewed. These script commands reside on the server housing the ASP files.

ASP is a neutral programming environment that includes native support for Microsoft JScript and VBScript. Since the resulting output is standard HTML, ASP can work with any browser. It can integrate with any ODBC-compliant database, including SQL Server, Oracle, and DB2. ASP provides several benefits when used for building web solutions:

	It is browser-neutral.

	It can be written in standard languages such as VBScript and Perl.


	Although ASP is very powerful, developers need not learn complicated tools to use it.

	ASP is based on the Component Object Model, which is becoming the industry standard for interoperability of software components.
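The following is a minimal sketch of an .asp file. The page name and query string parameter are hypothetical; the point is that the VBScript between the <% %> delimiters runs on the server, and only its HTML output reaches the browser.

    <%@ Language=VBScript %>
    <html>
    <body>
    <%
    ' Server-side script: executed on the web server, never sent to the client.
    Dim visitor
    visitor = Request.QueryString("name")
    If visitor = "" Then visitor = "guest"
    %>
    <p>Hello, <%= Server.HTMLEncode(visitor) %>!</p>
    </body>
    </html>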

COM

The Component Object Model (COM) was originally developed by Microsoft and has become an industry standard for the facilitation of interoperability among various software components on the web server. COM is used by virtually every Microsoft application, as well as by third-party vendors to extend the functionality of their applications and services. When a client connects to a website to access a database using ADO, they are also using COM.

COM is the foundation for interoperability among applications on the Windows platform. When two Microsoft applications need to communicate with each other, they often do so through COM. COM allows any two components to communicate regardless of what computers they are running on, regardless of the computers' operating system (as long as it supports COM), and regardless of the language the components are written in. Other non-Windows platforms, including Sun, Macintosh, and Unix, have adopted COM because of its interoperability.

The foundation of COM is components, self-contained entities that perform particular functions. A COM component is a binary file that ends with a .dll or .exe extension. It contains the code for creating a COM object. COM components can be written in and used from most standard programming languages. COM objects encapsulate the functionality of the components. COM objects reside in memory and are able to do specific tasks. They go away when program execution is terminated.

COM objects can be accessed only through an interface, and an object can support multiple interfaces. All useful COM objects support at least two interfaces: the IUnknown interface and a custom interface that exposes the component's functionality. The interface defines an agreement between an object and its clients. This agreement specifies the methods that must be associated with each interface and the behavior of each method in terms of input and output. Certain basic interfaces are defined to provide functions common to all COM-based technologies.


Of the technologies introduced in this section, ODBC is the only one that is not a COM interface. Both ADO and OLE DB provide vendor-neutral interfaces for connecting to COM components. Windows 2000 introduces COM+, which is the next generation of component object model. It builds on COM’s integrated services and features, and makes it easier for software to be created and used in any language. COM+ is an essential part of Windows Web Services solutions.

Database Availability for Web Solutions

Clients request data that resides in the data tier of an n-tier environment. This is normally done through web servers located on the front end of the web tier in a highly available web solution. Clients may need to access a SQL Server 2000 database or some other type of database to fulfill their information needs. This access may be transparent to the end user, or it may require a separate logon and authentication operation.

Several technologies have been developed to ensure that the servers housing the databases in the data tier are available 24 hours a day, 7 days a week. In this section, we'll examine a few of these technologies, including failover clustering and Network Load Balancing (NLB). We'll also look at several SQL Server technologies for ensuring that data is available in the event of hardware or software failure. These SQL Server concepts include log shipping, partitioning, and transaction replication.

Failover Clustering

One of the major concerns for a network administrator is keeping mission-critical resources available continuously on the website 24 hours a day, 7 days a week. To meet this goal, two clustering technologies are available in Windows 2000 Advanced Server and Windows 2000 Datacenter Server: Microsoft Cluster services (MSCS) and Network Load Balancing (NLB). This section discusses Microsoft Cluster services—specifically, failover clustering.


For additional information on clusters, see Chapter 3, “Designing Server Clustering Solutions.”

Failover clusters are a way to ensure that critical resources are available even when applications or hardware fail. Microsoft SQL Server 2000 supports failover clustering in the Enterprise Edition. Hardware redundancy in failover clustering makes possible the automatic transfer of resources to a secondary server when resources on the primary server fail. Failover clustering also allows administrators to perform routine maintenance with minimal system downtime. Failover clustering arrangements require three components: a virtual server name, a virtual server IP address, and a virtual server administrator account.

Single- and Multiple-Instance Clusters

Two new SQL Server 2000 concepts have been introduced for failover clustering: the single-instance cluster and the multiple-instance cluster. In SQL Server, an instance is an installation of SQL Server that is separate from other installations of SQL Server on the same server. You can think of an instance as a virtual server. In a single-instance cluster configuration, one instance of the cluster is installed on the server. With the SQL Server 2000 Enterprise Edition, Microsoft has introduced support for running multiple instances (up to 16) of SQL Server 2000 on one server. The single-instance cluster replaces the concept of active/passive clustering. The multiple-instance cluster replaces the concept of active/active clustering.

Active/Passive and Active/Active Clustering in SQL Server 2000

In active/passive clustering, SQL Server is configured so that only one of the SQL servers is active at a time. The other server is in a passive state and only becomes active when the primary server or node fails. In a failover, the secondary node or server becomes the primary node. When active/passive clustering is configured, several assumptions are made for its implementation:


	Each node contains either Windows 2000 Advanced Server or Windows 2000 Datacenter Server, and Windows Cluster services (MSCS) is installed.

	SQL Server is installed on the shared disk.

	A network connection exists between the two physical servers (or nodes).

	A virtual server has been created that represents both physical servers.

Active/active clustering is useful in environments where multiple database applications are required. An active/active cluster can optimize database application availability and improve performance. Each node in the cluster runs one or more copies of SQL Server 2000 Enterprise Edition. Each node can act as the primary node for one or more virtual servers, as well as the secondary node for other servers in the cluster. The use of both nodes is maximized while maintaining failover capability. When active/active clustering is configured, several assumptions are made for its implementation: 

	Each node contains either Windows 2000 Advanced Server or Windows 2000 Datacenter Server, and Windows Cluster services (MSCS) is installed.

	SQL Server is installed on the shared disk.

	Two shared physical SCSI disks or hardware RAID arrays exist on a shared SCSI bus.

	Two virtual servers have been created that represent a composite of the two physical servers.

	Each virtual server contains the master, model, msdb, and tempdb system databases, as well as user databases.

If you decide to configure your solution with multiple instances of SQL Server 2000 (that is, the active/active configuration), you must ensure that all clients are running Microsoft Data Access Components (MDAC) version 2.6 or later.


Using an n-tier Architecture Solution

Your employer is a traditional brick-and-mortar retailer, VeraCruz Consulting, which has started migrating the bulk of its service offerings to customers via the Internet. The current infrastructure consists of four Windows NT 4.0 servers, three IIS servers, and two SQL 7.0 servers. VeraCruz management has decided that an n-tiered architecture is the appropriate solution, not only to meet increasing customer demands but also to support the company's requirements for scalability, availability, and high performance on the website.

The back-end distribution side will be configured using Windows 2000 Advanced Server and SQL Server 2000. All of the Windows NT 4.0 servers will be upgraded to Windows 2000 Advanced Server, and all SQL 7.0 servers will be upgraded to SQL Server 2000. Web servers positioned on the front end of the n-tier architecture must be able to handle millions of hits per day. The actual presentation tier will consist of customer-service applications made up of Active Server Pages. The company will rely exclusively on ASP for server-side scripting, and on ADO and ODBC to provide access to multiple data resources. COM+ components in the business logic tier will perform a variety of functions, including preparation of orders and authorization of credit cards.

Database servers on the back end of the VeraCruz site will be responsible for ensuring that orders are filled within the committed time frame. The data tier will consist of a two-node cluster running Microsoft Windows 2000 Advanced Server and SQL Server 2000 Enterprise Edition in an active/active configuration. One node of the cluster will hold important internal company documents, and the other node will manage customer data and orders. Both servers will operate in a failover configuration using the inherent functionality provided by SQL Server 2000.

Network Load Balancing

Network Load Balancing (NLB) is the second clustering method provided in Windows 2000 Advanced Server and Windows 2000 Datacenter Server. NLB clusters can be deployed to provide high availability in addition to


scalability. This arrangement can distribute client requests across multiple servers, and additional servers can be installed to handle client needs as required by increased demand. High availability is accomplished because server failure can be automatically detected, and traffic rerouted among the remaining servers to ensure continuous service.

NLB is installed by default with Windows 2000 Advanced Server and Windows 2000 Datacenter Server. It requires no proprietary hardware, and any industry-standard computer can be used to run it. All IP traffic is distributed transparently among the host servers, and clients accessing the cluster use one of the cluster's virtual IP addresses. Preventive maintenance can be performed on individual servers in the cluster without disruption to cluster operations. In addition, rolling upgrades can be set up so that software and hardware are maintained with minimal disruption in service.

For additional information on Network Load Balancing, see Chapter 2, “Designing Network Load Balancing Solutions.”

Stored Procedures

A stored procedure is a compiled set of queries that is executed on the local database: a group of Transact-SQL statements compiled into a single executable unit.

Transact-SQL Statements

Transact-SQL (also known as T-SQL) is a core component of SQL Server 2000. It is the language used by all applications to communicate with SQL Server 2000 databases. A transaction by definition is a single unit of work. When a transaction is successful, the transaction's data modification becomes a permanent part of the database. If errors occur during the transaction, none of the data modifications that occurred are saved to the database, and all are erased. Three types of transactions are used in SQL Server 2000: explicit, implicit, and automatic.


Transact-SQL statements fall into one of two categories: Data Definition Language (DDL) or system stored procedures. Transact-SQL is used by many applications for a variety of reasons. Web pages use T-SQL to extract data from a SQL Server database. Applications use T-SQL to find the data that a user is requesting. Distributed databases use T-SQL for database replication. T-SQL includes statements that support all administrative work done in SQL Server 2000.
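As a sketch of the "single unit of work" idea, the following explicit transaction either applies both updates or neither. The Accounts table and its columns are hypothetical.

    DECLARE @err int
    BEGIN TRANSACTION
        UPDATE Accounts SET Balance = Balance - 100 WHERE AccountID = 1
        SET @err = @@ERROR
        UPDATE Accounts SET Balance = Balance + 100 WHERE AccountID = 2
        SET @err = @err + @@ERROR   -- nonzero if either update failed

    IF @err <> 0
        ROLLBACK TRANSACTION  -- erase all modifications made by the transaction
    ELSE
        COMMIT TRANSACTION    -- make the modifications permanent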

Stored procedures are like other programming procedures, in that they can accept input parameters and return values in the form of output parameters. Stored procedures contain programming statements that perform an operation in the database, and they can return a status value indicating success or failure of the procedure. The following are some benefits achieved when you use stored procedures to integrate SQL Server databases into a highly available web solution:

	Stored procedures are modular and can be modified independently of program source code.

	They execute faster than ad hoc SQL queries.

	They reduce the amount of network traffic that is required when running code.

	They provide an additional means of implementing security for the databases in a web solution. Users can be granted execute permission on the stored procedures without having direct access to the underlying tables and views used by the stored procedures.
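A minimal sketch of such a procedure follows; the table, column, and role names are hypothetical.

    -- Accepts an input parameter and returns a status value.
    CREATE PROCEDURE GetCustomerOrders
        @CustomerID int
    AS
        SELECT OrderID, OrderDate
        FROM Orders
        WHERE CustomerID = @CustomerID

        IF @@ROWCOUNT = 0
            RETURN 1   -- status value: no orders found
        RETURN 0       -- status value: success
    GO

    -- Users can run the procedure without direct access to the Orders table:
    GRANT EXECUTE ON GetCustomerOrders TO WebUsers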

Partitioning

Partitioning is a way for data to be distributed from one table on a SQL server to multiple identical tables on various SQL servers. Distributed partitioned views can be used to access this distributed data. When using partitioning, the first step is creating the member tables in SQL Server 2000. These member tables can be distributed over a collection of servers. A partitioned view is then created, which merges all the data so that the member-table divisions are transparent to users. It doesn't matter how many tables are included in the view, since they all appear as one unit. Access to all member tables is provided seamlessly through this partitioned view.
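A distributed partitioned view is built with UNION ALL over the member tables, typically reached through linked server names. This sketch uses hypothetical server, database, and table names.

    -- Each member table holds one alphabetical range of the data.
    CREATE VIEW AllCustomers
    AS
        SELECT * FROM Server1.Sales.dbo.Customers_AtoH
        UNION ALL
        SELECT * FROM Server2.Sales.dbo.Customers_ItoP
        UNION ALL
        SELECT * FROM Server3.Sales.dbo.Customers_QtoZ

Queries against AllCustomers behave as if the three tables were one, which is why the view fails if any member server is unavailable.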


Figure 9.2 illustrates a distributed partitioned view. To be available to users, all servers that contain member tables must be available for the partitioned view.

FIGURE 9.2 Data access through a distributed partitioned view (a client's query goes to the partitioned view on SQL 2000 Server 1; the response draws on Member Tables 1, 2, and 3, which reside on SQL 2000 Servers 2, 3, and 4)

Partitioning has no inherent fault tolerance. If one of the servers that holds a member table crashes or is unavailable, the partitioned view cannot access data from any of the member tables. This occurs even if the requested data is located on one of the servers other than the failed server.

Transaction Replication

Replication guarantees that data undergoing modification is made available to users in multiple locations. SQL Server 2000 has three basic methods of replication: snapshot, transaction, and merge. For our highly available web solution, we will focus our attention on transaction replication. First, here are a few terms you should know in order to understand transaction replication.

Publisher A publisher maintains the original source SQL Server 2000 database. This database is made available for publishing. The publisher detects and sends all changes to the distributor.


Distributor The distributor is responsible for moving the data from the publisher to the subscriber. It contains the distribution database, which stores metadata, history data, and transactions. Although the publisher and distributor can be on the same server, for performance reasons it's best to put each on its own server.

Subscriber The subscriber is the server that receives the copied data from the publisher.

SQL Server 2000 uses Transact-SQL statements to process all actions, and each completed statement is called a transaction. In transaction replication, each completed transaction is replicated to the subscriber as the transaction occurs. Database administrators can configure this process to happen in real time or at scheduled intervals. Transaction replication is used in environments that have low latency and high-bandwidth connections. Latency refers to the amount of time that elapses after a change is made on one server before it appears on another server. When using transaction replication, you want to ensure that there is low latency. You don't want too much time to go by before changes on a publisher appear on a subscriber. Achieving low latency requires a continuous and reliable connection and is extremely useful in an environment where the subscribers need to receive data changes as soon as they occur.
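Transaction replication is normally configured through Enterprise Manager, but the same setup can be expressed with system stored procedures. The following is a heavily abbreviated sketch run on the publisher, using hypothetical publication, table, server, and database names and only a few of the procedures' many parameters:

    -- Publish the Orders table, replicating transactions continuously.
    EXEC sp_addpublication
        @publication = 'OrdersPub',
        @repl_freq   = 'continuous',
        @status      = 'active'

    EXEC sp_addarticle
        @publication   = 'OrdersPub',
        @article       = 'Orders',
        @source_object = 'Orders'

    -- Push the publication to a subscriber server and database.
    EXEC sp_addsubscription
        @publication    = 'OrdersPub',
        @subscriber     = 'WEBSQL2',
        @destination_db = 'SalesDB'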

Log Shipping

Log shipping is a new feature introduced in SQL Server 2000 Enterprise Edition. In log shipping, transaction logs are taken from a primary SQL Server and applied to a secondary SQL Server. Should a database on the primary SQL Server fail, clients can be redirected to the secondary server. The copying of transaction logs from primary to secondary server occurs on a scheduled basis. Log shipping is built on the SQL Server 2000 backup and restore commands and is a complement to the replication and failover technologies in SQL Server 2000. Log shipping can be used to provide high availability by automating the process of maintaining a backup standby server. This is done using backup, copy, and restore jobs that are executed by the SQL Server Agent on both the primary server and a server designated as a standby server. The log shipping function is easy to install and configure and can be monitored using SQL Server 2000 Enterprise Manager. Three components are required for the implementation of log shipping:

- A primary server, which holds the source database.
- A secondary server, which receives the transaction logs from the primary server and applies them to the database that is being synchronized.
- A monitor server, which is used to display the log-shipping process. Because it stores the status of the log-shipping process, the monitor server should not be the primary or secondary server. If either of those servers crashes, the log history would be lost.
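Because log shipping is built on the backup and restore commands, each cycle boils down to backing up the log on the primary, copying the file, and restoring it on the secondary. Here is a manual sketch of one cycle with hypothetical database and path names; in practice, the wizard described below creates SQL Server Agent jobs that do this on a schedule:

    -- On the primary server: back up the transaction log to a network share.
    BACKUP LOG SalesDB TO DISK = '\\STANDBY1\LogShip\SalesDB.trn'

    -- On the secondary server: apply the log, leaving the database in standby
    -- mode so it remains read-only but can accept the next log backup.
    RESTORE LOG SalesDB
        FROM DISK = '\\STANDBY1\LogShip\SalesDB.trn'
        WITH STANDBY = 'D:\MSSQL\Backup\undo_SalesDB.dat'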

Should You Use Log Shipping?

When deciding whether or not to use log shipping in database integration for a website, several things should be taken into consideration:

- The synchronization rate for databases. How often do you want the transaction log data synchronized between the primary and secondary servers? Transaction logs are backed up every 15 minutes by default in SQL Server 2000.
- The placement of the primary and secondary servers. Log data can be dispersed across geographical boundaries. At a minimum, you should place the primary and secondary server on separate grids of your network.
- Server capacity. The secondary server should be configured with the same capacity as the primary server to ensure that the applications will work properly when they are failed over.

Log shipping can be used with failover clustering to provide highly available database integration. Cost is an important consideration when combining the two technologies. The hardware requirements for clustering are more rigorous than for log shipping alone, and additional hardware may have to be procured to ensure a high level of performance when log shipping is incorporated.


The Database Maintenance Plan Wizard in SQL Server 2000 will aid you in configuring log shipping between a primary server and standby servers. With the Wizard, administrators can accomplish the following:

- Create the destination database on all destination servers.
- Designate which destination servers will assume the role of the source server if the primary server is offline.
- Identify the frequency of transaction log backups, and how often logs are shipped to the destination servers.
- Specify the monitor server, which provides information about the status of the entire group of log-shipping servers.

Choosing a Database Web Integration Strategy

Before deciding on a database integration solution for your highly available website, you must set down the requirements of your situation.

High Availability and Fault Tolerance

Identify the type of information that will be housed on the site. This will determine your choice of web integration solution. If the databases for your highly available website are primarily read/write databases, consider using failover clustering. This arrangement provides the highest degree of availability and fault tolerance in the event of a hardware or software failure. If your website's databases are primarily read-only, consider using a Network Load Balancing solution. Read-only databases normally hold static content only, which is not updated when clients access the site. For highly available web solutions, Microsoft recommends using SQL Server 2000 with failover clustering. As described earlier in this chapter, failover clustering is supported with the Enterprise Edition of SQL Server 2000. It provides redundant computer systems and automatic detection of server failure. This ensures that clients are able to access your site in the event of hardware or software failure. Failover clustering also provides an automatic failover solution between all nodes in the cluster. When one node fails, it's automatically detected, and all resources are transferred to a secondary node on the cluster. Time required for recovery from failure is minimal, and all failover operations are transparent to the end user or client. For SQL Server 2000 to be configured using failover clustering, Microsoft Cluster Service (MSCS) is required. This service is available with Microsoft Windows 2000 Advanced Server, which supports two-node clusters, and Microsoft Windows 2000 Datacenter Server, which supports four-node clusters.

Communications and Traffic

You'll also need to consider the communications configuration when designing a SQL Server 2000 solution that is part of an n-tier environment. Security arrangements may include firewalls, NAT, and other security methods to protect the network against external attacks. Efforts should be made to ensure that network congestion caused by firewalls and other security technologies does not degrade the operation of the network and cause unnecessary delays. Microsoft recommends that the number of hops between query sources and the SQL server be kept to a minimum. In an ideal world, all of the interfaces would be positioned on the same subnet. If either log shipping or transaction replication is part of your overall database integration solution, remember that network congestion can occur when these technologies are at work. Replication traffic can cause delays significant enough to slow the overall performance of the network. A possible solution would be to isolate these queries onto a separate network or subnet. TCP/IP should be the primary protocol used between requestors (ASP or COM) and the SQL Server 2000. To restrict inbound connections and provide encryption of data, an IPSec policy could be implemented to allow only valid requestors to access resources in the n-tier.

IPSec is not supported in a clustered environment.

Hardware and Data Storage

When determining the hardware requirements for database integration, separate the system, data, and log files and put them on multiple physical disks and controllers if possible. If the database receives a substantial amount of read and write queries, there will be a noticeable performance improvement if the various log files are located on different physical disks. RAID 1 is an appropriate solution for storing log files. Selecting the appropriate disk architecture can have a dramatic impact on the performance, reliability, and high availability of your web solution. Considerations include Storage Area Networks (SANs), Network Attached Storage (NAS), and RAID. Disks can be configured as SCSI or Fibre Channel.

If you are using Windows 2000 Datacenter Server, you’ll need to use Fibre Channel for your disk solution. SCSI is not supported in a Windows 2000 Datacenter Server cluster.

The following RAID solutions can provide the needed reliability and performance while ensuring fault tolerance:

RAID 5, Disk Striping with Parity Loss of a single drive in a RAID 5 array will cause performance degradation but will not result in total failure of the data set. Loss of two drives will result in a total failure. RAID 5 allows hot swapping of drives in the array. This reduces or eliminates downtime because the system is still operational and servicing requests while the faulty drive is being repaired or replaced.

RAID 0+1, Striped Arrays That Are Mirrored Loss of one drive will break the mirror but will not cause system performance degradation. Loss of two or more drives will result in total system failure. RAID 0+1 provides both high performance and fault tolerance. One consideration is the synchronization time required for the mirrored pairs after a failure.

RAID 1+0, Mirrored Drives in a Striped Array RAID 1+0 provides the best overall performance in addition to fault tolerance. Loss of one drive in the mirrored array will not cause performance degradation. This solution has a shorter synchronization time than RAID 0+1 following mirrored-drive replacement after a failure.

The terminology used to describe RAID configurations that employ striped mirrors and mirrored stripes (RAID 0+1 and RAID 1+0, respectively) differs among manufacturers of hardware RAID solutions.


Fibre Channel supports higher bandwidths than SCSI configurations and has larger storage capacity. If possible, cluster configurations should be deployed using a SAN with Microsoft-supported storage devices.

You’ll find additional information on data storage solutions for highly available web solutions in Chapter 4, “Designing High Availability Data Storage Solutions.”

Security

SQL Server 2000 will attempt to validate any user who is trying to access its databases. For users who are accessing databases anonymously, consider using a guest account that corresponds to the IUSR_computername account. Make sure that this account has the appropriate permissions to log on and access the resources of the SQL Server 2000. If you are using a clustering solution, keep in mind that clients cannot be authenticated using Kerberos, which is not supported in a clustered environment. User authentication can instead be performed with Windows NT LAN Manager (NTLM) authentication. In addition, NetBIOS over TCP/IP will be required for clients to communicate with the clustered virtual servers.
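As a minimal sketch of setting up such an account, the following T-SQL grants the IIS anonymous user a login and read access, using the standard SQL Server 2000 system procedures. The computer name WEBSRV1, the SalesDB database, and the Products table are hypothetical:

    -- Give the IIS anonymous account a Windows-authenticated login.
    EXEC sp_grantlogin 'WEBSRV1\IUSR_WEBSRV1'

    -- Grant the login access to the database the site reads from.
    USE SalesDB
    EXEC sp_grantdbaccess 'WEBSRV1\IUSR_WEBSRV1'

    -- Allow read access only to the data the site needs.
    GRANT SELECT ON Products TO [WEBSRV1\IUSR_WEBSRV1]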

SQL Server 2000 Applications

A few items need to be addressed by developers who are creating SQL Server 2000 applications for your integrated web solution:

- Applications should be designed to limit the amount of time spent waiting for a database connection. Higher database activity can result in longer wait times, and client requests can become backlogged.
- To aid in reducing the demand on database resources, applications should be designed to close connections when they are no longer needed. When an application has finished accessing the database, its connection should be closed so that other users can access the resources.


- If possible, applications should be designed to share active connections. Instead of single requests, multiple record requests can be made into local memory. Connection throughput can be optimized by increasing the size of the record cache. (The query following this list shows one way to keep an eye on open connections.)
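There is no single switch that enforces these practices, but you can at least verify that connections are being released. This query against the sysprocesses table in the master database (present in SQL Server 2000) counts open connections per database:

    -- Count current connections, grouped by the database they are using.
    SELECT DB_NAME(dbid) AS database_name,
           COUNT(*)      AS open_connections
    FROM master.dbo.sysprocesses
    WHERE dbid > 0
    GROUP BY dbid
    ORDER BY open_connections DESC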

Summary

In this chapter we looked at the methods that can be used to integrate databases into your highly available web solution. First we revisited the authentication methods of IIS 5. Before users are able to access resources on a database server in the data layer of your n-tier solution, they must be authenticated by IIS 5 in the web layer. The authentication methods used by IIS 5 are anonymous logon, basic authentication, Integrated Windows authentication, client certificates, and digest authentication. The method used is contingent on the type of access required by the client. Authentication is also done for SQL Server 2000, the primary database solution for integrated websites. Windows authentication mode is both the default and the preferred method for authenticating users in SQL Server 2000. Mixed mode authentication is primarily for backward compatibility with applications that use earlier versions of SQL Server such as SQL 6.5 or 7.0. Various data access technologies can be used by clients in the web layer to access SQL Server 2000 databases in the data layer of an n-tier solution. These include ODBC, ADO, COM, COM+, and OLE DB.





- ODBC is a core component of Microsoft's strategy for accessing data from relational and non-relational databases. It is a vendor-neutral data access interface that is an excellent choice because developers do not have to modify their applications to access the databases.
- OLE DB is an API that allows COM applications to access data from any OLE DB data source. The application uses an OLE DB provider when accessing the resource.
- ADO is a COM wrapper that uses OLE DB providers to access data from OLE DB data sources.
- XML is replacing HTML as the industry standard for presentation of data over the Internet. SQL Server 2000 has inherent support for XML functionality. It has the capability to use both ADO and OLE DB for data access.








- ASP is a programming tool that combines HTML, scripting, and COM components to produce dynamic web applications. ASP files end in the .asp extension and can contain both static and dynamic content.
- COM has become the industry standard for interoperability between software components on a web server. Virtually every Microsoft application uses COM. COM allows two components to communicate regardless of the operating system each is running. The foundation of COM is COM components, which are binary files that perform specific functions. COM is based on COM objects, which encapsulate the functionality of the COM components. COM objects can only be accessed through an interface, and most COM objects support multiple interfaces.
- COM+ (or Component Services) is the latest version of COM. COM+ provides a simplified tool that can be used to administer all applications visually through a single user interface. MTS functionality has been rolled into COM+.

This chapter also discusses database technologies that are appropriate for integrating a database solution into a highly available web solution. One solution is failover clustering, supported by SQL Server 2000 Enterprise Edition. It requires three components: a virtual server name, a virtual IP address, and a virtual server administrator account. Failover clustering can be configured for a single instance of SQL Server 2000 or for multiple instances. Up to 16 instances can be run on a computer running SQL Server 2000 Enterprise Edition. Another integration solution is Network Load Balancing. NLB is primarily used to distribute IP traffic on the front end of your n-tier solution to multiple host servers. Last, you learned about several database integration mechanisms specific to SQL Server 2000. Stored procedures are Transact-SQL statements that are compiled into a single executable unit. They are modular and can be modified independently of the program source code. They also can be used for implementing security for databases in the web solution. Partitioning enables database administrators to distribute data from a table in one SQL Server 2000 database to multiple identical tables on various SQL Server 2000 servers. Transaction replication is one of the three methods of replication available in SQL Server 2000. It is used to replicate transactions from a server that has the original source database, called a publisher, to another SQL server called a subscriber. Log shipping can be used to automatically copy transaction logs from a primary SQL server to a secondary SQL server. Log shipping requires a primary server, a secondary server, and a monitor server.

Several items need to be considered before you decide on your database integration solution. Identify the type of data that will be housed at your website. If these databases are primarily read/write databases, you can consider using failover clustering for the database solution. If the databases are primarily read-only, NLB is probably the appropriate solution. If log shipping or transaction replication is part of your web solution, you need to consider the network congestion that can occur as a result of their implementation. In addition, the choice of storage solution will have a dramatic impact on performance and reliability.

Exam Essentials

Identify the authentication methods used by IIS 5 for accessing websites. IIS 5 supports anonymous authentication, basic authentication, Integrated Windows authentication, client certificates, and digest authentication.

Know which data-access methods are used with SQL Server 2000 for highly available web solutions. ADO, ASP, ODBC, OLE DB, and XML are all database access methods that can be used with SQL Server 2000 in a highly available web environment. ADO, ActiveX Data Objects, allows applications to use the OLE DB API to access data from OLE DB data sources. ASP, Active Server Pages, is server-side scripting for creating dynamic web pages. ODBC, Open Database Connectivity, is an industry-standard data-access interface that allows applications to access data from a variety of resources. OLE DB is an API that provides access to SQL and non-SQL data sources. XML, Extensible Markup Language, is replacing HTML as the Internet standard for displaying information on the Web.

Know the components required for using ODBC. Three components are required for using Open Database Connectivity (ODBC): an ODBC client, an ODBC driver, and a database management system (DBMS) server. The ODBC client is located on the front end of the n-tier environment, and the DBMS is located on the back end.


Identify the technologies that are available for integrating a database into a highly available web solution. Failover clustering, Network Load Balancing, log shipping, and transaction replication are mechanisms suitable for integrating databases into a highly available web solution. Failover clustering is based on Microsoft Cluster Service. NLB distributes client requests across multiple servers on the network. Log shipping will copy transaction logs from a primary to a secondary SQL server. Transaction replication reproduces real-time data to SQL servers on the network.

Identify the client authentication methods used in accessing resources on the data layer of an n-tier solution using SQL Server 2000. Two types of authentication are provided with SQL Server 2000: Windows authentication mode and mixed mode. Windows authentication mode is the default, and SQL Server 2000 depends on Windows 2000 for the user authentication. Mixed mode authentication provides backward compatibility for applications that use earlier versions of SQL Server such as SQL 6.5 and 7.0.

Describe Microsoft's recommended solution for integrating a SQL Server 2000 database into a highly available web integration strategy. Microsoft recommends that you use failover clustering as your database integration solution when incorporating SQL Server 2000 into a highly available web solution.

Know the three components of the transaction replication process. Transaction replication requires (1) a publisher, which contains the source SQL Server 2000 database; (2) a subscriber, which receives the data from the publisher; and (3) the distributor, which is responsible for moving the data from the publisher to the subscriber.


Key Terms

Before you take the exam, be certain you are familiar with the following terms:

active/active cluster
active/passive cluster
Active Server Pages (ASP)
ActiveX Data Objects (ADO)
COM+
Component Object Model (COM)
Extensible Markup Language (XML)
failover clustering
log shipping
mixed mode authentication
multiple-instance cluster
Network Load Balancing (NLB)
OLE DB
OLE DB provider
Open Database Connectivity (ODBC)
partitioning
single-instance cluster
stored procedures
transaction replication
Transact-SQL
virtual server
Windows authentication mode


Review Questions

1. Which of the following IIS authentication methods uses the IUSR_computername account for gaining access to network resources?

A. Basic
B. Anonymous logon
C. Digest
D. Client certificates
E. Integrated Windows

2. Which of the following are authentication methods supported by SQL Server 2000? Select all that apply.

A. Anonymous logon
B. Integrated Windows
C. Windows authentication
D. Mixed mode

3. Which of the following is the preferred authentication mode for SQL Server 2000?

A. Mixed mode authentication
B. Windows authentication
C. Kerberos authentication
D. NTLM authentication

4. Which of the following database access technologies are not built on COM? Choose all that apply.

A. ADO
B. XML
C. ODBC
D. OLE DB


5. Which of the following is becoming the Internet standard for displaying information on the Web?

A. HTTP
B. HTML
C. ASP
D. XML

6. Which of the following database access technologies combines scripting, HTML, plain text, and XML for creating dynamic web content?

A. ASP
B. COM
C. HTTP
D. ADO

7. Which of the following components are required before failover clustering can be implemented? Select all that apply.

A. A virtual server name
B. A virtual server IP address
C. A virtual server administrator account
D. ODBC
E. Windows 2000 Server

8. Which of the following are clustering methods that can be used with both Windows 2000 Advanced Server and Windows 2000 Datacenter Server to provide highly available web solutions? Select all that apply.

A. SecureNAT
B. Failover clustering
C. NLB
D. SOCKS
E. Web Proxy


9. Which of the following is the method used with SQL Server 2000 to access tables that are distributed among multiple SQL servers?

A. Log shipping
B. Partitioning
C. Merge replication
D. Snapshot replication

10. Which of the following hardware configurations can be used with SQL Server 2000 for affording high availability and fault tolerance to database web integration? Select all that apply.

A. RAID 0
B. RAID 5
C. RAID 7
D. RAID 1+0
E. RAID 0+1


Answers to Review Questions

1. B. Anonymous logon authentication uses the IUSR_computername account for providing access to resources on a network that is running IIS 5. Basic authentication requires a username and password; passwords are transmitted in plain text. Digest authentication, too, requires a username and password but is more secure than basic authentication because passwords are encrypted before transmission. Digital client certificates determine client access to network resources. Integrated Windows authentication is more secure than basic authentication, but it cannot authenticate users through proxy servers.

2. C, D. Windows authentication mode and mixed mode authentication are both supported in SQL Server 2000. Windows authentication mode is the default. It relies on Windows authentication of users and groups. Mixed mode authentication offers backward compatibility for applications that used earlier versions of SQL Server. Anonymous logon and Integrated Windows are authentication modes that are used for IIS.

3. B. Windows authentication mode is the preferred authentication method for SQL Server 2000. It depends on Windows for authentication of users and clients. Mixed mode authentication is used for backward compatibility, for applications that primarily used SQL Server 6.5 or 7.0. Kerberos authentication is the default authentication mode used for Windows 2000. NTLM authentication is based on a challenge/response mechanism and was primarily used for authenticating to Windows NT 4.0 networks and earlier.

4. B, C. Neither ODBC nor XML uses COM components to afford client access to database resources.

5. D. XML, Extensible Markup Language, is becoming the industry standard for displaying and presenting data on the Web. XML is similar to HTML but provides more functionality. HTML, Hypertext Markup Language, is used to create documents for the Web. ASP, Active Server Pages, combines the ease of HTML with programming tools such as Visual Basic for production of dynamic web content. HTTP, Hypertext Transfer Protocol, is the standard Internet protocol for interaction between web browsers and servers.


6. A. ASP, Active Server Pages, is a tool that lets you combine not just scripting, but also HTML, plain text, and XML to produce dynamic web pages. The Component Object Model (COM) was developed by Microsoft to facilitate interoperability among applications. HTTP, Hypertext Transfer Protocol, is the standard protocol for browser communications on the Web. ADO is a data-access tool that uses an OLE DB provider to access database resources.

7. A, B, C. A failover cluster must contain, at a minimum, a virtual server's name, IP address, and administrator account. Windows 2000 Server does not support clustering. To create a failover cluster, you must install Microsoft Cluster Service, which is included in Windows 2000 Advanced Server and Windows 2000 Datacenter Server. ODBC is a Microsoft technology that provides a programming interface to Windows-based applications for accessing a data source.

8. B, C. Failover clustering and NLB, Network Load Balancing, are two clustering technologies that are provided with Windows 2000 Advanced Server and Windows 2000 Datacenter Server. SecureNAT is a form of network address translation that works with Microsoft Internet Security and Acceleration Server (ISA) clients. SOCKS is a circuit-layer protocol that is used to provide authentication in client/server networking environments. Web Proxy is another ISA client used for authentication through a web browser.

9. B. SQL Server 2000 uses partitioning for distributing one table's data among multiple SQL servers. Log shipping involves copying transaction logs from a primary to a secondary SQL server. Merge replication and snapshot replication can be used in SQL Server 2000 to copy data from one server to another.

10. B, D, E. RAID 5, 1+0, and 0+1 all give both high availability and fault tolerance. RAID 0 is also known as simple striping and provides no fault tolerance. RAID 7 is a proprietary system that includes a fully implemented, process-oriented, real-time operating system resident on an embedded array control microprocessor. It is not suitable for a highly available web solution.

Marconi International

Take a few minutes to look over the information presented in the case study and then answer the questions at the end of it. In the testing room you will have a limited amount of time, so it is important that you learn to pick out the relevant information.

Background

Marconi International is a worldwide retailer of books, posters, and specialty items. The firm has storefronts in 10 major U.S. cities and in six international locations. Corporate offices are located in Seattle, Washington. The headquarters office employs 300 people. There are 30 people at each branch office location, including the 10 U.S. and six international sites. Each branch has a branch manager, and one person in each branch is responsible for that location's computer setup and troubleshooting. Many of the branch offices have individuals who provide computer help but have no background in computers. A few were given the job because no one else wanted it. The branch offices often have to pay for outside help with system problems due to the lack of experienced staff within the company. This has started to become costly as many of the older systems break down and need replacing or upgrading. Though it's headquartered in Seattle, considered one of the hubs of the technology industry because of Microsoft's presence there, Marconi has been slow to embrace new web technology. The organization finds itself being out-marketed by other distributors who have embraced the Internet's advantages as a selling and marketing tool. Marconi wants to extend beyond familiar brick-and-mortar retail settings. The only website Marconi currently uses is provided by the HR department to give employees pertinent employee benefit information. It is not accessible outside the corporate intranet.

CEO Statement

"We need to move our product offerings beyond the current scope. Several of our competitors are seeing increased sales due to their expansion to the Internet. The only way that we are going to be able to compete is to ensure that our services are available worldwide via the Web."

Current Environment

The current network at corporate headquarters consists of five Windows NT 4.0 servers and 300 Windows NT Workstation clients. Two of the servers run SQL Server 7.0. One of the SQL Server databases stores the company's product catalog. This catalog is updated yearly and used primarily in house. Another SQL server houses a database of important customer information, containing key data on every client with which Marconi has done business. This database gives employees a wealth of information on customer spending habits.

Business Requirements

Marconi wants to make its presence known on the Web. The company boasts a large existing customer base that management thinks can be leveraged to increase sales. They also want to expand the Marconi product line to include items the marketing department has been testing regionally. Before the firm starts a major expansion, it wants to be sure that its infrastructure will be able to handle the expected e-commerce increases.

Funding

Funding has been approved for all work that's needed at the corporate headquarters. Management wants to see how the main implementation goes before they allocate more funds to upgrade the other locations. The headquarters site will be the test bed for the companywide implementation.

Technical Requirements

Marconi International will migrate existing Windows NT 4.0 servers to Windows 2000 servers. Existing SQL servers will be upgraded to SQL Server 2000 to take advantage of its scalability and performance improvements over SQL Server 7.0. One of the factors that influenced Marconi's decision to upgrade to SQL Server 2000 was the need to bring the firm's e-commerce offerings to market expediently and accurately. Management is especially interested in the failover capabilities of SQL Server 2000 for providing high availability and fault tolerance. Once they have migrated to the Web, system downtime is not an option.

Questions

1. The first SQL server needing upgrade to SQL Server 2000 is the one that holds the Marconi product catalog. Once it's upgraded, the catalog will have to be updated weekly. Marconi is offering a new product line and expects to continue the trend of introducing fresh products to the market on a regular basis. Once it's up on the Web, the product catalog will be available for customer access 24/7. Updates will occur at least twice a month. Which of the following designs will work for Marconi's database web integration? Choose all that apply.

A. A failover cluster
B. An NLB cluster
C. Database storage on RAID 5
D. Database storage on RAID 0+1
E. Database storage on RAID 1
F. Installation of Application Center 2000 for storing the database

2. One SQL server will hold customer information. This database will be accessed constantly as customers call in to request new items or check on existing orders. Eventually, customers will be able to place orders and check them directly on the company's website. For the initial implementation, however, management at Marconi International wants all orders to be phoned in. What is the appropriate solution for Marconi to implement for housing the customer information database? Select all that apply.

A. A failover cluster
B. An NLB (Network Load Balancing) cluster
C. Store the database on a RAID 5 array
D. Store the database on RAID 0+1


3. Users accessing the product catalog can see nearly all product offerings, including new items that have been added by marketing. Also listed will be items that Marconi wants to get rid of through clearance sales. What should be done to allow users anonymous access to the product catalog? Select all that apply.

A. Configure SQL Server 2000 to use Windows authentication mode.
B. Configure an account to use the IUSR_computername account for logging on to the SQL Server 2000.
C. Set appropriate permissions on database resources to allow individual users access to the SQL Server 2000.
D. Set appropriate permissions on database resources for the account that will be using the IUSR_computername account for logging on to the SQL Server 2000.
E. Configure SQL Server 2000 to use mixed mode authentication.

4. Marconi International management has received several proposals with viable options for configuring database integration into their proposed web solution. Match each technology in the first list with its descriptions in the second list. Some items may be used more than once.

Technologies:
- Failover clustering
- Network Load Balancing
- Stored procedures
- Partitioning
- Transaction replication
- Log shipping

Descriptions:
- Consists of Transact-SQL statements
- Distributes client requests to multiple servers
- Requires Windows 2000 Advanced Server or Windows 2000 Datacenter Server
- Used in environments with high-bandwidth connections
- Requires a primary, secondary, and monitor server
- Uses a publisher, distributor, and subscriber
- Provides no fault tolerance
- Enables transfer of resources to a secondary node
- Used in environments with low latency
- Copies transaction logs from a primary SQL server to a secondary SQL server
- Uses virtual servers
- Built on SQL Server 2000 Backup and Restore commands
- Can be modified independently of programmer's source code
- Data is distributed from one table to multiple tables on different SQL Server 2000 machines
- Installed by default in Windows 2000 Advanced Server and Windows 2000 Datacenter Server
- Can be single-instance or multiple-instance arrangement
- Can be configured to run in real time or at predefined intervals
- Requires no proprietary hardware
- Supports active/active configurations
- Provides high availability
- Provides a way to implement security on databases
- Supports active/passive configurations


5. Before management approves the website implementation beyond headquarters, they want to ensure that multiple backup SQL servers are available in case of a major hardware catastrophe. Company officers want to be able to direct the applications on the front end of their network to additional servers on the back end with minimal effort in the event of a failure. Which of the following SQL Server technologies can be implemented to ensure that backup servers are kept up-to-date with transaction logs from the primary SQL server?

A. Log shipping
B. Failover clustering
C. Transaction replication
D. Data Transformation Services (DTS)
E. Transact-SQL

6. ASP and COM objects use which of the following technologies to access data from SQL Server 2000 databases? Select all that apply.

A. ODBC
B. ADO
C. OLE DB
D. CLB
E. NLB

Answers

1. B, C. Since the product catalog is a read-only database, NLB (Network Load Balancing) is the best choice for this implementation. If scaling out becomes a requirement, it can be done much more easily with an NLB solution than with failover clustering. Since there are no special hardware requirements, RAID 5 is the most cost-effective storage solution for this database. A failover cluster solution would be more beneficial for read/write databases. RAID 1 is not a viable solution for a high-availability arrangement.

2. A, D. Since the customer information database will be regularly updated, a failover clustering solution is the best configuration for this database integration. RAID 0+1 data storage will ensure maximum read/write performance. An NLB solution is not a good choice for read/write databases. A RAID 5 array does not offer the read/write performance that is needed and that will come from a RAID 0+1 arrangement.

3. A, B, D. To allow anonymous access to the SQL database or product catalog through the site's front end, Windows authentication should be used on the SQL server. An account should also be created that uses the IUSR_computername account with the appropriate permissions for the resources it is trying to access.

4.

Failover Clustering
- Uses virtual servers
- Enables transfer of resources to a secondary node
- Can be single-instance or multiple-instance arrangement
- Supports active/active configurations
- Supports active/passive configurations
- Requires Windows 2000 Advanced Server or Windows 2000 Datacenter Server
- Provides high availability


Network Load Balancing
- Installed by default in Windows 2000 Advanced Server or Windows 2000 Datacenter Server
- Requires Windows 2000 Advanced Server or Windows 2000 Datacenter Server
- Requires no proprietary hardware
- Provides high availability

Stored Procedures
- Consists of Transact-SQL statements
- Provides a way to implement security on databases
- Can be modified independently of programmer's source code

Partitioning
- Data is distributed from one table to multiple tables on different SQL 2000 servers
- Provides no fault tolerance

Transaction Replication
- Uses a publisher, distributor, and subscriber
- Can be configured to run in real time or at predefined intervals
- Used in environments with high-bandwidth connections

Log Shipping
- Copies transaction logs from a primary SQL server to a secondary SQL server
- Built on SQL Server 2000 Backup and Restore commands
- Requires a primary, a secondary, and a monitor server

Failover clustering uses virtual servers. These are resource groups that contain all the resources necessary for running an application. At a minimum, a virtual server contains an IP address and network name. Failover clusters allow movement of resources from a failed node to a secondary node. They can be configured as single-instance (active/passive) clusters or as multiple-instance (active/active) clusters. Both Windows 2000 Advanced Server and Windows 2000 Datacenter Server support failover clustering. In addition to providing high availability, they can also provide fault tolerance.

Network Load Balancing, like failover clustering, requires either Windows 2000 Advanced Server or Windows 2000 Datacenter Server. NLB is installed by default but has to be configured before it can be used. It has no proprietary hardware requirements and, like failover clustering, provides high availability. Stored procedures consist of Transact-SQL statements and are an additional way to implement security on SQL Server 2000 databases. In addition, stored procedures can be modified as needed, independently of the programmer's source code. Partitioning is a way for data to be distributed from one table in a SQL Server 2000 database to multiple tables on different SQL Server 2000 machines. One limitation of partitioning is that it does not provide fault tolerance. Transaction replication is one of three replication methods used in SQL Server 2000. It utilizes three servers (a publisher, a distributor, and a subscriber) to ensure that transactions are replicated on a real-time basis. Transaction replication is useful in environments that utilize high-bandwidth connections. Log shipping is a way for transaction logs to be copied from a primary SQL server to a secondary SQL server. Log shipping is a new feature that was introduced in SQL Server 2000 and is built on the Backup and Restore commands. To use log shipping, a group of one primary server, one secondary server, and a monitor server is required.


5. A. Log shipping copies transaction logs from one database server to another. Failover clustering provides high availability in the event of a system failure. One node of the cluster is configured to take over the resources of a failed node. With transaction replication, you can take a snapshot of data and apply it to a subscriber database. DTS allows an administrator to extract, transform, and consolidate data from various sources into a single destination. Transact-SQL statements are used by all applications that communicate with SQL Server 2000 databases.

6. A, B, C. ODBC, ADO, and OLE DB are all technologies that can be used by ASP and COM components to access data from SQL Server 2000 databases. CLB stands for Component Load Balancing, a service of Application Center 2000 that provides dynamic load balancing of COM+ components. Network Load Balancing (NLB) is a service provided in Windows 2000 Advanced Server that balances incoming client requests across multiple clustered hosts.

Chapter 10

Designing Content and Application Topologies

MICROSOFT EXAM OBJECTIVES COVERED IN THIS CHAPTER:

- Design an n-tier, component-based topology. Considerations include component placement and CLB.
- Design CLB solutions to provide redundancy and load balancing of COM+ components. Considerations include the number of nodes, placement of servers, NLB, and CLB routing.
- Design content and application topology. Considerations include scaling out, load balancing, fault tolerance, deploying and synchronizing Web applications, state management, service placement, and log shipping.
- Design a Microsoft Exchange 2000 Server messaging Web integration strategy. Considerations include browser access and Wireless Access Protocol (WAP) gateways.


In Chapter 9 we examined ways to integrate databases into your highly available web solution—one of the skills included under the fifth and final category of exam skills: Designing Application and Service Infrastructures for Web Solutions. Here in Chapter 10 we'll continue coverage of the objectives in that category. This chapter also addresses Component Load Balancing for COM+ components—another of the design skills under objective 1, Designing Cluster and Server Architectures for Web Solutions.

We start with an introduction to Windows Distributed interNet Applications Architecture (Windows DNA). DNA is the model that was developed by Microsoft for developing n-tier web solutions—it's a blueprint for websites that are both scalable and secure. Three layers are used to divide the core components of your site's logical structure when using the DNA model. These layers are the clients (the reason for having the website in the first place), the front-end systems that are responsible for hosting the web services being requested, and the back-end systems that normally store the data being accessed by the front-end systems. When preparing for this exam, you need to have a thorough understanding of Microsoft's DNA model, its layers, and the types of services supported within each layer. You will be answering questions about the location and roles of these services in the n-tier environment.

You'll also get an introduction in this chapter to Microsoft's Application Center 2000, which is part of the Windows .NET initiative and helps you create clusters and manage content and configuration settings for websites. Two types of load balancing are available with Application Center: Network Load Balancing (NLB) and Component Load Balancing (CLB). NLB facilitates the distribution of IP requests across multiple cluster hosts and has been part of the design discussions throughout this study guide. If one server in the cluster fails, the IP requests are automatically spread among the remaining cluster members. CLB is used to load-balance COM+ components across multiple servers. Three types of CLB models are supported with Application Center: two-tier with full load-balancing, three-tier with full load-balancing, and three-tier with failover.

Another aspect of using Application Center 2000 is deploying and synchronizing content to websites, and you'll study the three steps of this process. The New Deployment Wizard will deploy not only web content but also ASP and COM+ applications and virtual sites. This chapter also examines ways to ensure that session state is maintained during client sessions. When session state is maintained, information for a client is retained for use during the entire time the client is connected to the website. We will also look at scaling up and scaling out in terms of their effect on a site's content and applications.

Microsoft Exchange 2000 Server provides a variety of electronic messaging components that can be integrated into a website. This chapter discusses Exchange 2000 topologies and methods of providing fault tolerance.

Windows DNA

The Windows DNA (Distributed interNet Applications Architecture) model was developed by Microsoft to address the need for a network infrastructure that is scalable, highly available, and manageable. Windows DNA provides the framework for client/server web services while leveraging the integrated services of the Windows platform with the Component Object Model (COM). A large percentage of websites operating today are based on the DNA model.

The DNA Blueprint

Business needs are constantly changing, placing ever greater demands on the underlying infrastructure of a network. The key to meeting these ever-changing demands is to build your website on a scalable n-tier model. DNA was developed to serve as a blueprint for designing n-tier websites and to meet several architectural goals. These goals are the same ones you've encountered throughout this book as you studied the concepts of web solution design: scalability, high availability, and security.


Scalability Websites are divided into two components: a front-end system that is accessible by the client, and a back-end system that stores business processes. Load balancing can be used to distribute the workload equally across each system in the n-tier environment. The DNA architecture makes it possible to scale the front-end system to meet the growing needs of client requests.

Availability Front-end systems can be made highly available through the use of load balancing among multiple cloned servers. Clients access the network through a single IP address. Failure detection can ensure that servers not servicing front-end clients are automatically removed from the load-balanced set. Back-end systems can be made highly available through the use of failover clusters and replication.

Security DNA proposes the use of multiple security zones to provide adequate protection against internal and external threats to the network. DNA security is based on the use of three security zones: the public network, a DMZ, and a secure network where content data is created and stored.

DNA Architectural Elements

Several key elements are required for the architecture of an n-tier business-networking model. These include the clients, the front-end systems, and the back-end systems. Figure 10.1 shows the elements of an n-tier website.

FIGURE 10.1: An n-tier website based on DNA architecture (thin and web clients in the presentation layer, tier 1; web servers in the business services layer, tier 2; a SQL Server 2000 cluster in the data access layer, tier 3)


Clients

Clients are the reason for having the website. They access core applications by submitting service requests. Normally these requests come through a single IP address or URL. The inner workings of the network infrastructure are invisible to the client.



Microsoft Exam Objective

Design content and application topology. Considerations include service placement.

Front-End Systems

Front-end systems are responsible for hosting the web services that are being requested by the clients. These systems provide core services such as HTTP/HTTPS and FTP. Front-end systems are known as stateless systems because they don't store any client information during the session; that is done on the back end. When client information does need to be retained during the session, this can be done through one of several methods. The most common method is through the use of cookies, which are small files managed by the client's web browser. A second method is to write client data into the HTTP header string of the web pages that are being accessed. A third way is for client information to be stored on the back-end database server. Typically, front-end systems consist of cloned servers that host the same applications and services. Load balancing can be implemented on the front end to ensure that client access is always available in the event of individual server failures.

Back-End Systems

The back-end system houses the data services that are utilized by the front-end systems. Back-end systems can actually store the data, or they can be an access point to data storage elsewhere on the network. This can be accomplished through databases such as SQL Server 2000 or through enterprise applications such as SAP.


Back-end systems are stateful. They are called stateful because they maintain client information or data during the client session, versus stateless systems, which do not.

DNA Tiers

Several steps are recommended for preparing to use the DNA blueprint for an n-tier system:

- Divide your website applications into three logical tiers: presentation, business logic, and data services.
- Select the Windows technologies that will be used on the presentation layer for the client interface.
- Write COM+ components to implement the business logic using Windows 2000.
- In the data services layer, use ADO to access data, and OLE DB for exposure of data.

Windows DNA is for applications that are capable of running on multiple tiers. The two-tier architecture historically has been the most common model (see Figure 10.2). Windows DNA, however, is based on a three-tier model. The three core elements are the presentation layer, or tier 1; the business logic layer, or tier 2; and the data services layer, or tier 3 (see Figure 10.3).

FIGURE 10.2: Two-tier model for a website infrastructure (web clients connect to front-end servers, which are backed by back-end servers)

FIGURE 10.3: Three-tier model for a website infrastructure (web and thin clients in the presentation layer, a business logic layer of ASP and COM+, and a data layer of SQL Server 2000)

Presentation Layer

The presentation layer, or web tier, is the user interface to the underlying technologies of the website. It is the first layer of user authentication to the applications on the system. In a web-based environment, the client can access the presentation layer through a Dynamic HTML browser or a standard HTML version 3.2 browser. In a non-web environment, the client can gain access through a front-end Distributed Component Object Model (DCOM) application. Figure 10.4 illustrates two clients in the presentation layer accessing a SQL Server 2000 database. The first client is a thin client that is accessing the database through DCOM. The second client is accessing the database through a web browser that is using HTTP.


FIGURE 10.4: Service layers of an n-tier DNA application (a thin client reaches a SQL Server 2000 database through DCOM, COM+, ADO, and OLE DB; a web client reaches it through HTTP, IIS, COM+, ADO, and OLE DB)

Business Logic Layer

The business logic layer is also known as the middle tier and can include IIS web servers, Active Server Pages, COM+ components, and message queuing services. A key component of the business logic layer is COM+, because of its capability to integrate the various business and data components. This middle tier is responsible for connecting the end user on the presentation layer to the data that is stored on, or accessible through, the data access layer. In a web-based environment, the business logic layer can include ASP and COM objects, CGI applications, and ISAPI extensions and filters.

Data Access Layer

The data access layer supports data access and storage. It contains the application data that the clients are trying to access from the presentation layer, or connectivity to another data store located elsewhere on the network. The data access tier can contain a structured relational database such as SQL Server 2000, or other data storage mechanisms such as flat files, directory services, e-mail stores, and spreadsheets. This layer can also be configured to provide temporary or persistent state data, such as the intrasession transaction data needed for online e-commerce transactions.


Designing with Application Center 2000

Microsoft Application Center 2000 is part of the Windows .NET initiative to provide enterprise-wide Web Services. This initiative also includes Commerce Server 2000, BizTalk Server 2000, and SQL Server 2000. Application Center was designed to provide content deployment and management for websites that are built on a Windows 2000 infrastructure. The Application Center tool helps you create websites that are scalable, manageable, and reliable. By using Microsoft Application Center 2000, administrators are able to create and manage Windows 2000 servers in a highly available clustered environment. Website content, configurations, and applications can be easily controlled using Application Center's centralized management console (an MMC snap-in). Application Center 2000 is organized around the concept of web applications. Websites are managed through an application image, or resource set. Application images consist of various elements and are deployed and synchronized to all cluster members. Resources included in application images are system Data Source Names (DSNs), Registry keys, files and file system directories, COM+ applications, and websites and virtual directories. In addition, both dynamic content (ASP pages and VBScripts) and static content (HTML and JPEGs) can be incorporated into application images. In the next section we will focus specifically on Application Center and its capabilities for network load balancing, component load balancing, and application content deployment and synchronization. We will also examine the three primary cluster types that can be implemented using Application Center.

Application Center 2000 Features

Application Center includes several features that allow an administrator to create and manage load-balanced clusters. The features described in this section are accessible through Application Center's MMC snap-in (see Figure 10.5) or through a web browser. They provide a simplified path for accomplishing cluster creation, application deployment, and management.


FIGURE 10.5  Application Center's MMC snap-in

Cluster services  Clusters are configured so that each cluster member serves the same content to clients. Application Center includes wizards and a GUI for use in cluster creation and management. The following cluster types are supported:

- Web-tier clusters, which typically include web and database servers, e-mail servers, staging and deployment servers, COM+ load-balanced routing servers, and stand-alone servers.

- COM+ application clusters, which handle COM+ components specifically. Web service requests for COM+ components can be load-balanced across multiple members of a COM+ cluster.

- COM+ routing clusters, which are used to route requests from web-tier clusters to COM+ application clusters using Component Load Balancing.

Load balancing  Application Center supports both Network Load Balancing and Component Load Balancing. Load balancing is discussed further in the section "Load Balancing in Application Center."

Synchronization and deployment  Application Center synchronizes applications across the cluster by means of the application images defined earlier. These images can include an entire website, individual web pages, COM+ applications, and additional components. Deployment involves copying the content of the application images to each member of the cluster. Both deployment and synchronization are discussed further in a later section.

Monitoring  Monitoring features help the administrator detect and analyze availability problems on the cluster. The Microsoft Health Monitor tool built into Application Center allows the administrator to set thresholds for systemwide application information. Event logs, HTTP calls, and Windows service failures from each cluster member can be monitored through a single view.

Programmatic support  A command-line tool and scripting support are provided with Application Center. Administrative functions can be scripted for easier management.

Local and remote administration  Clusters can be administered locally or remotely through secure connections.

High availability  Individual server failure does not affect operations, because requests and transactions are rerouted automatically when a server failure is detected.

Load Balancing in Application Center

Various load-balancing techniques can be applied with Application Center to provide multi-tiered clusters for your highly available website. Both Network Load Balancing and Component Load Balancing are supported.



Microsoft Exam Objective

Design content and application topology. Considerations include load balancing.

Network Load Balancing

NLB is native to Windows 2000 Advanced Server and is the default method for distributing IP requests in Application Center 2000. It is administered with the Application Center MMC snap-in. NLB provides not only cluster load balancing but also high availability. Should one server in the cluster fail, requests are automatically distributed among the remaining servers in the cluster.

NLB is not included with Windows 2000 Server but is installed if you install Application Center 2000.

NLB can be configured through one of Application Center’s wizards. The New Cluster Wizard and the Add Server Wizard simplify NLB configuration by reducing the number of choices that the administrator must make. The wizards perform both hardware and software diagnostics to ensure that the platform on which NLB is being installed is capable of supporting it. A load-balancing algorithm must be selected for the cluster when configuring NLB. This algorithm is known as affinity and is used to determine how IP requests are handled. Three affinity options are available: None, Single, and Class C.
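The following stand-alone VBScript sketch (runnable with cscript.exe) illustrates the idea behind affinity; it is a conceptual illustration only, not NLB's actual hashing algorithm. With Single affinity the full client address determines the host, so one client always returns to the same server; with Class C affinity only the first three octets are hashed, so all clients on one Class C network map to the same host. (None affinity distributes requests without regard to the client address.)

' Conceptual affinity sketch -- not the real NLB algorithm.
Function PickHost(clientIP, affinity, hostCount)
    Dim parts, lastOctet, key, i
    parts = Split(clientIP, ".")
    If affinity = "ClassC" Then
        lastOctet = 2        ' hash only the first three octets
    Else
        lastOctet = 3        ' Single affinity: hash the full address
    End If
    key = 0
    For i = 0 To lastOctet
        key = (key * 31 + CLng(parts(i))) Mod 997
    Next
    PickHost = key Mod hostCount
End Function

' Two clients on the same Class C network land on the same host:
WScript.Echo "ClassC: 10.1.2.3  -> host " & PickHost("10.1.2.3", "ClassC", 4)
WScript.Echo "ClassC: 10.1.2.99 -> host " & PickHost("10.1.2.99", "ClassC", 4)
WScript.Echo "Single: 10.1.2.3  -> host " & PickHost("10.1.2.3", "Single", 4)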

For additional information on Network Load Balancing, see Chapter 2, “Designing NLB Solutions.”

Component Load Balancing

Component Load Balancing (CLB) is an Application Center service that provides the capability for COM+ components to be load-balanced across multiple servers. CLB is used to move the COM+ component workload from the web servers themselves to a dedicated COM+ tier. This is extremely useful if the COM+ components are consuming substantial memory and/or processing power on the web servers.

CLB is a good choice for load balancing when security is of primary concern. COM+ objects can be situated behind a firewall, achieving an additional layer of security for the highly available web solution. Scalability is accomplished through the use of multiple COM+ clusters to service component requests. Application workload can be distributed across multiple tiers to provide increased performance.




Microsoft Exam Objective


Design an n-tier, component-based topology. Considerations include the number of nodes, component placement, and CLB.

Design CLB solutions to provide redundancy and load balancing of COM+ components. Considerations include the number of nodes and placement of servers.

A CLB setup consists of CLB software and a COM+ cluster. The CLB software determines the order in which COM+ cluster members are used for activating COM+ components. The COM+ cluster is a set of server clusters that are managed by Application Center. CLB is normally used on the back end in combination with a variety of front-end cluster arrangements. Three primary CLB models are supported by Application Center:

Two-tier with full load-balancing  In a two-tier CLB model, client requests are passed through the front-end web cluster to the back-end CLB cluster.

Three-tier with full load-balancing  In this model, client requests are passed from the front-end web cluster to a load-balanced middle tier that routes the requests to a CLB cluster on the back end.

Three-tier with failover  This model is the same as the three-tier model with full load-balancing, except that requests are routed to two servers in the middle tier instead of one. The second server is configured as a backup to the primary server in case of server failure.

Deciding to use CLB is a choice that should not be made without prior research. Performance degradation may result from implementing a CLB arrangement on your network, and this is a major concern: inherent system overhead is introduced because of the amount of client-request processing that must traverse the network when using CLB.


Managing with Microsoft Application Center 2000

As IT manager of your corporation, you are always looking for new ways to improve your web-based environment. In its current arrangement, several IIS 5 servers provide web content for the company, a major distributor of web-based training. You are finding that monitoring these servers and upgrading web content has become a hassle. Not only are you looking for an easier and quicker way to deploy web content, but you need a more efficient way to monitor the status and logs of each server. Your company has shown interest in moving the web environment into some type of configuration that will provide high availability and failover capabilities.

Application Center 2000 will help you simplify the complex tasks of managing the web applications. Using Application Center, you can quickly and effortlessly deploy new content as it becomes available, as well as COM+ applications and ISAPI filters, to existing servers. New files that are loaded on the cluster controller can be easily distributed to each cluster member server.

With Application Center's Health Monitor feature, you'll be able to see a view of an entire cluster's status and performance. Health Monitor will watch the servers and even correct certain problems, such as restarting services. Its tools generate reports, detect software or hardware malfunctions, and implement diagnostic and troubleshooting procedures.

By using Application Center, clusters can be created and managed for both the web tier, which serves HTTP and HTTPS clients, and the business logic (middle) tier, which load-balances COM+ components. For scalability, Application Center easily accommodates changing throughput needs. It simplifies the addition and removal of members from clusters. Its load-balancing capability distributes the workload throughout a cluster and is compatible with the leading hardware load-balancing devices in use today.

Application Center Clusters

Cluster types in Application Center are based on their primary role, which is generally defined by the type of applications and content that the cluster is hosting. One of the primary goals of Application Center is to make managing groups of clusters as easy as managing a single server. Application Center includes wizards and context menus that make your cluster creation and management tasks a breeze. The New Cluster Wizard simplifies the process of setting up and configuring load-balanced, synchronized clusters in a matter of minutes with minimal administrator input. Three types of clusters can be created using the New Cluster Wizard: general/web clusters, COM+ application clusters, and COM+ routing clusters.

It’s important that you understand the term cluster as it relates to Application Center 2000. Clusters, as defined in Application Center, are not the same as the Microsoft Windows clusters introduced in Chapter 3 and discussed throughout this book. Windows clusters use shared disk resources to handle stateful applications such as SQL Server 2000 and Exchange 2000. These applications are housed in the back-end of the n-tier environment. Application Center clusters, on the other hand, are designed for use with stateless applications—in this case, load-balanced clusters on the front-end that handle the client requests and COM+ applications in the middle tier.



Microsoft Exam Objective

Design content and application topology. Considerations include fault tolerance.

General/Web Clusters

A general or web cluster is used for general purposes and for hosting websites and local COM+ applications. Normally these clusters use NLB or some other type of load balancing. Non-NLB clusters usually use some type of external load-balancing device to distribute client requests.

Also included under the heading of general or web clusters is the staging cluster. Staging clusters are stand-alone or single-node clusters used as stagers, for testing content and applications before they are deployed to production clusters. In Application Center, stagers can also be used to deploy the applications and content after testing, to view the health and status of clusters, and to view performance data and event logs.


COM+ Application Clusters

COM+ application clusters host COM+ applications that are used by servers in a general/web cluster, by COM+ routing clusters, or by Windows-based applications. If this option is selected as the cluster type, one of two sources must be identified for client requests. One source is applications running on web servers, such as ASP applications or CLB. The other source is desktop COM client applications, such as Win32-based applications written in Microsoft Visual Basic.
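As a sketch of the desktop-client case, the two-argument form of VBScript's CreateObject function activates a component on a remote server over DCOM. The ProgID Sybex.Inventory and the server name COMCLUSTER below are invented placeholders.

' Desktop COM client sketch: remote activation over DCOM.
' "Sybex.Inventory" and "COMCLUSTER" are hypothetical names.
Dim obj
Set obj = CreateObject("Sybex.Inventory", "COMCLUSTER")
' ...invoke methods on obj exactly as if it were local...
Set obj = Nothing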

COM+ Routing Clusters

COM+ routing clusters can function as web server clusters, but their primary role is to route requests to COM+ application clusters. COM+ routing clusters use CLB to route the requests. In essence, a COM+ routing cluster is a load balancer for COM+ components that exist on a COM+ application cluster.

Designating Load Balancing in Application Center

If you select a general/web cluster or COM+ routing cluster solution when using Application Center 2000, you must then designate the type of load balancing that will be used. Application Center 2000 supports three types of load balancing for these configurations: Network Load Balancing, non-Network Load Balancing, and no load balancing at all.



Microsoft Exam Objective

Design CLB solutions to provide redundancy and load balancing of COM+ components. Considerations include NLB.

Network Load Balancing

Network Load Balancing (NLB) is easily set up and configured in Application Center. This load-balancing option is selected by default when Application Center is installed, and NLB can be configured before or after Application Center is installed. If multiple network adapters are installed, or if DHCP is enabled on a system with multiple network adapters, NLB will be disabled and a third-party load-balancing tool will have to be used.


Non-Network Load Balancing

Application Center includes support for third-party load balancers. Application Center was designed to handle the setup and configuration of the NLB that is provided in Windows 2000. But if NLB is unavailable, other third-party load balancers can be used, and these can be monitored and managed by Application Center just as NLB can be. Communication between the third-party device and Application Center must be configured before the administrative functions of Application Center can be used, however. Once communication is established, Application Center will monitor cluster status, synchronize website content and configurations, and bring cluster members on- and offline. The Application Center 2000 Resource Kit contains tools to accommodate third-party load balancers.

Unpredictable behavior can result if content or configuration information is different on the cluster controller and cluster members. All cluster members are synchronized from the cluster controller, and you should make sure that all content and configuration information is on the controller before you add additional members.

No Load Balancing

If you are doing application testing or setting up a staging environment, you may decide that no load balancing is required.

CLB Components

The highly available website can include many elements: dynamic and static web content, database resources on the data layer, and services that assist in accessing the data resources in the business logic layer. COM+ components, which provide some of the business logic functionality, can be installed in the web layer, but this may bring up significant scalability and performance issues. Application Center 2000 gives you a means to separate the COM+ components into their own tier of load-balanced COM+ application servers, through Component Load Balancing. By using CLB, you are able to scale the component layer independently of the presentation layer and distribute the workload across multiple servers. You are also able to manage the COM+ components independently of the front-end and back-end systems, and you gain the benefit of an additional layer of security.


Component Load Balancing consists of the following key components: COM, COM+ Services, COM+ Components, routing lists, and a response timetable.

COM

The Component Object Model (COM) was developed by Microsoft to facilitate seamless interoperability among varied software components. Under the COM standard, the components of a web server can communicate regardless of which language each component is written in. This communication takes place through the use of interfaces. COM components can be written in a variety of languages, including Visual Basic and C++. COM components can be installed locally on the client, or they can be located on a remote server.

For additional information on COM, see Chapter 9.

COM+

COM+ was introduced in Windows 2000 and is the latest version of COM. It extends COM to simplify the creation and use of software components, while improving the scalability and flexibility of applications. COM+ replaces Microsoft Transaction Server (MTS), which was used to develop and deploy server-centric applications that use COM. MTS was ideal for developing line-of-business and e-commerce applications with web-based interfaces. Every MTS-based application is built from COM components and can be remotely accessed via DCOM; COM+ provides this functionality in Windows 2000. Applications that currently work in the COM environment will work under COM+.

COM+ Services

The suite of COM+ Services is a core component of Windows 2000. It comprises COM, DCOM, Microsoft Distributed Transaction Coordinator (DTC), and MTS. Together, the COM+ Services make it possible for Windows 2000 to act as an application server for web-based applications. They give the administrator a simplified toolset for managing all applications through a single user interface.


COM+ Components

COM+ components are binary files that create COM objects. COM-based applications are packaged and distributed as COM components. Each COM component has an interface that is used to interact with other COM components. COM+ components are COM components that benefit from COM+ Services; COM+ components that are grouped together are identified as applications. COM+ components differ from COM components in that they contain configuration information that tells the underlying COM architecture which COM+ Services are supported.



Microsoft Exam Objective

Design CLB solutions to provide redundancy and load balancing of COM+ components. Considerations include CLB routing.

Routing Lists

Routing lists are located on each cluster member in the web layer and include all load-balanced COM+ cluster members. Routing lists can also be located on the COM+ routing cluster. Initially, the routing list is created by the administrator and synchronized to all members of the cluster. Having the routing list on each member in the web layer ensures that there is no single point of failure. The routing list has no web functionality, however; it is meant exclusively for routing purposes.

Response Timetable

A response timetable is used to determine which COM+ cluster members should receive incoming client requests. This in-memory timetable is located on each web-layer cluster member. Each of these members polls each COM+ cluster member in its routing list every 200 milliseconds. The in-memory table is created from the responses that the cluster member receives from the polling. The COM+ cluster member that responds most quickly is given the highest position in the table; this member receives the activation request. Activation requests are passed round-robin style to all COM+ cluster members, with the member responding most quickly receiving the first request, and the next fastest responding member receiving the next request. Once a request has been distributed to each COM+ cluster member in the table, the activations are recycled through the table. The response timetable is not synchronized among the web-layer cluster members because its information cannot be replicated quickly enough to keep pace with the ever-changing COM+ cluster load.
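The fastest-first ordering can be sketched in a few lines of stand-alone VBScript (runnable with cscript.exe). The member names and millisecond timings below are invented for illustration; the real timetable is maintained in memory by Application Center, not by script.

' Fastest-first ordering sketch for a response timetable.
Dim members, times, i, j, tmp
members = Array("COMPLUS1", "COMPLUS2", "COMPLUS3")
times   = Array(12, 5, 9)    ' illustrative poll responses, in milliseconds

' Sort so the quickest responder comes first.
For i = 0 To UBound(times) - 1
    For j = 0 To UBound(times) - i - 1
        If times(j) > times(j + 1) Then
            tmp = times(j): times(j) = times(j + 1): times(j + 1) = tmp
            tmp = members(j): members(j) = members(j + 1): members(j + 1) = tmp
        End If
    Next
Next

' Activation requests are then handed out round-robin from the top.
For i = 0 To UBound(members)
    WScript.Echo "Position " & (i + 1) & ": " & members(i) & " (" & times(i) & " ms)"
Next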

Performance Issues When Using CLB

CLB can perform well as a dynamic tool for load balancing COM+ components. But before deciding on a CLB solution, you need to be aware of the potential performance implications of its use. Keep the following throughput and response considerations in mind before implementing a CLB solution:

Throughput  When an application must make a call across a network, throughput performance may suffer. Application or service calls that are made over a network have slower throughput than calls from applications that are installed on the same server; this slowdown is the result of calls being made to other tiers in the n-tier environment. CLB may not be a viable solution if reliably fast throughput is critical for your highly available web solution. You may want to consider placing COM+ components locally on the web tier to avoid the slowdown in performance.

Response time  Users want fast response when they are accessing websites. Nothing frustrates a user more than lengthy response time or a delay in receiving requested information. COM+ components running on the web tier can result in slower response times and an overall degradation in network performance. If your users require faster response time for non-COM+ components and are not concerned with throughput, COM+ components can be moved to a COM+ cluster tier. This will alleviate the burden on the web tier for the clients using non-COM+ components.

High performance and CLB are mutually exclusive, and you should be aware of this when designing a highly available web solution. CLB’s benefits of security and ease of use may warrant its utilization, but these benefits are counterbalanced by CLB’s adverse effect on overall throughput.


Deployment and Synchronization

Deployment is the transmission of web content and configuration settings from one cluster to another. Synchronization is the replication of content and configuration settings from the cluster controller to the cluster members. New members added to the cluster are automatically synchronized.



Microsoft Exam Objective

Design content and application topology. Considerations include deploying and synchronizing web applications.

Deployment of applications involves three steps: development, testing/staging, and production. Content is created in the development environment, tested in the testing/staging environment, and finally distributed and put into production. Once applications are moved to the production phase, they must be kept current. This is done through synchronization.

Application Center 2000 includes the New Deployment Wizard to aid the developer in deploying applications. The New Deployment Wizard can also be used to ensure that applications are synchronized across the environment. In addition to applications, cluster settings and configurations must be synchronized to each cluster member and to the cluster controller. Administrators can use the wizard to perform partial or full synchronizations as needed. Single applications can be replicated to cluster members, or all applications currently hosted on the cluster controller can be replicated.

Remember that in Application Center an "application" is considered a collection of software resources that are required for a website or COM+ application. This means the application can include not only the website itself, but Registry keys, COM+ applications, and any necessary certificates.

When new content is added to a cluster, it is synchronized to all other members of the cluster by default. This setting can be changed as needed if specific members are to be excluded from receiving content. A full synchronization occurs every 60 minutes; this schedule can be disabled or changed to a different frequency. Information that can be replicated to clusters includes the following:

- Web content
- ASP applications
- COM+ applications
- Virtual sites
- Global ISAPI filters
- Files and file directories
- Metabase configuration settings (IIS 5 configuration settings)
- Registry keys
- System data-source names
- Windows Management Instrumentation (WMI) settings

Deploying Web Content

Deployed web content normally consists of only static pages and graphics. This content can first be deployed to a staging cluster and then replicated to other cluster members. If Application Center is being used, this replication will take place automatically as cluster synchronization is performed. Since no COM+ components or ISAPI filters are required when deploying static content, the web services on these cluster members do not have to be reset.

Deploying COM+ Applications

When COM+ applications are deployed, the web services on the cluster members must be reset. This may mean a disruption in active client connections, and deployment should be done when it will affect the fewest users.

If you are deploying applications that contain both web content and COM+ applications, you should first take the cluster controller and the affected cluster member out of the load-balancing loop. Then deploy the applications to both computers. You can then reset the web services on the servers and bring them back into the load-balancing loop. You'll need to repeat this process for each cluster member that needs to be upgraded with the new content and COM+ applications.

Remember to remove the cluster controller and cluster member from the routing list before you remove them from the load-balancing loop. Once the applications have been deployed, add them back into the routing list.


Note that the CLB service is automatically restarted after COM+ applications are deployed to cluster members. If you are deploying applications to the web cluster member and deploying COM+ applications to a separate COM+ application cluster, you should first deploy the COM+ applications to the COM+ application cluster. Then deploy the application or content to the web cluster member.

Application Center does not support the synchronization of COM components. If you need to synchronize a COM component, it must be installed in a COM+ application.

Keep in mind that “applications” in Application Center are collections of web resources. This definition does not include applications such as Microsoft Office. Application Center cannot be used to deploy and synchronize applications such as the ones included in the Microsoft Office suite, service packs, or operating systems. You need Systems Management Server (SMS) or a third-party utility to perform this sort of deployment.

Cluster Synchronization

Synchronization is a core feature of Application Center that ensures content and configuration settings are consistent across all clusters. It does not require any special hardware or software and is transparent to the end user. Application Center is based on shared-nothing clustering; each member in the cluster maintains its own set of content and settings. It uses single-controller synchronization, where one cluster member is designated as the controller and is the authority for all content and settings synchronization. The cluster controller retains the most current and accurate content for the cluster and is responsible for synchronizing this content to each cluster member.

Application Center enables both automatic and manual synchronization. Automatic synchronization can be change-based, where new content and settings are updated immediately on all cluster members in the load-balanced loop; or interval-based, where the administrator determines the interval of synchronization. There are three types of manual synchronization: by cluster, by member, or by application. With cluster synchronization, all cluster members are synchronized. In member synchronization, only specified cluster members are synchronized; and in application synchronization, only specified applications are synchronized.


Considering Application-Specific Technologies

An important consideration when designing the highly available web solution is determining the appropriate technologies and applications that can be incorporated. You want to ensure that the applications chosen have been specifically designed for use in an n-tier environment. Certain characteristics are associated with applications that reside in the web layer. Following are some of the application characteristics with which you should be familiar when designing n-tier solutions. Table 10.1 lists the application characteristics and whether they are recommended for use with particular web solution technologies.

Stateless  Stateless applications do not maintain client information on the server. Stateless applications are perfect for environments that require scaling and high availability. Client connections can be dynamically load-balanced across the nodes in the cluster and can easily adapt to a single-node failure.

Stateful  Stateful applications maintain client information on the server. This information is also known as session state because the client information is maintained on the IIS server during the client session.

File Upload  Applications may require the uploading of files via FTP or through web folders. These files must sometimes be available to clients immediately after the files are uploaded to the website. NLB may not be the best solution when an application does file uploads. When a file is uploaded to an NLB member, it is stored only on that node. Users who connect to that specific node will have access to the file, but users who connect to other nodes in the NLB cluster will not have access to the recently uploaded file.

HTTPS  Applications may require the use of HTTPS (Hypertext Transfer Protocol Secure)—that is, a secure connection that requires user authentication. Anonymous authentication is not an option. In a typical highly available solution, HTTPS communication accounts for a smaller percentage of traffic than HTTP. You may want to dedicate a single node to HTTPS communications and allow anonymous authentication on the other nodes.


ISAPI Filters  ISAPI filters process inbound and outbound data and can be stateful or stateless. In some circumstances, they require communication among multiple instances of the running filter. Some ISAPI filters that are designed specifically for highly available web solutions do not function correctly when run in multiple instances, and it may not be possible to run multiple instances in an NLB cluster.

COM+ Components  COM+ components, too, can be stateful or stateless. Like ISAPI filters, COM+ components may require the running of multiple instances, and it may not be possible to run multiple instances on an NLB load-balanced cluster.

TABLE 10.1  Application Characteristics for Highly Available Web Solutions

Application Characteristic | Web Solution Technology             | Use Recommended? | Comments
Stateless    | NLB in Load Balancing mode          | Yes | Provides scale-out capabilities
Stateless    | NLB in Failover mode                | No  | Does not provide good scale-out capabilities
Stateless    | Server cluster                      | No  | Increased failover time
Stateful     | NLB in Load Balancing with Affinity | Yes | Affinity assures client connections
Stateful     | NLB in Failover mode                | No  | Does not provide good scale-out capabilities
Stateful     | Server cluster                      | No  | Does not provide good scale-out capabilities
File Upload  | NLB in Load Balancing mode          | No  | Clients won't get immediate access to file
File Upload  | Server cluster                      | Yes | All uploads done on single active node
HTTPS        | NLB in Load Balancing with Affinity | Yes | Use separate cluster for HTTP and HTTPS
HTTPS        | NLB in Failover mode                | No  | Does not provide good scale-out capabilities
HTTPS        | Server cluster                      | No  | Does not provide good scale-out capabilities
ISAPI/COM+   | NLB in Failover mode                | Yes | Only one node allows access to data
ISAPI/COM+   | NLB in Load Balancing mode          | No  | Does not function properly with multiple instances
ISAPI/COM+   | Server cluster                      | No  | Not recommended because of increased failover time
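To see what the Stateful rows in Table 10.1 mean in practice, consider this minimal ASP fragment. The built-in Session object stores per-client data in the memory of the IIS server that created it; if a later request is load-balanced to a different node, that data simply is not there. The CartItemCount variable is an invented example.

<%
' Per-client state held in IIS memory via the ASP Session object.
' The data exists only on the node that created it, which is why
' load balancing without affinity can break stateful applications.
If IsEmpty(Session("CartItemCount")) Then
    Session("CartItemCount") = 0
End If
Session("CartItemCount") = Session("CartItemCount") + 1
Response.Write "Items in cart: " & Session("CartItemCount")
%>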

Maintaining Session State

One of the main concerns when designing applications for highly available web solutions is maintaining session state to ensure that client information is not lost. Session state is temporarily stored for the duration of the session. Load-balanced multiserver environments present a problem when you're trying to maintain client session state, because every client request has the possibility of being mapped to a different cluster member. This is a problem not only with Network Load Balancing but with all load-balancing solutions. Disruption in client service can occur if client requests are load-balanced to cluster members on which their session information does not exist.




Microsoft Exam Objective


Design content and application topology. Considerations include state management.

Application Center provides a solution to this disruption problem. Application Center uses a request forwarder to ensure that HTTP requests are made to a specific cluster member in a load-balanced environment. The request forwarder is an ISAPI filter and extension that sits between the HTTP client and the applications. The server that houses the application is known as the "sticky" server, and information about this server is stored in an HTTP cookie. The cookie is returned to the client and checked on each subsequent client request, to ensure that the server that first handled the client continues to do so during any given session. If NLB sends a client request to a server other than the sticky server, the request is forwarded to the appropriate server.

HTTPS client requests can also use request forwarders. Since HTTPS uses secure communications, a full Secure Sockets Layer (SSL) handshake is required before the cookie can be examined. If port 443 is being used for HTTPS requests, only one site per cluster can be used; this is a limitation in Application Center. If you need to use multiple sites, you'll need to bind SSL to a port other than 443.

By default, request forwarding is enabled only on sites that use ASP session state. To achieve the best performance when using session state, you should eliminate web cluster involvement: allow the back-end SQL Server 2000 database to maintain the client's state, and disable or even remove the request forwarder from the front-end web clusters.
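Application Center's request forwarder manages its cookie internally, but the underlying sticky-server pattern can be sketched in a few lines of ASP. The cookie name StickyServer below is an invented illustration, not the forwarder's actual cookie.

<%
' Sticky-server pattern sketch (illustrative only; not the actual
' Application Center request-forwarder implementation).
If Request.Cookies("StickyServer") = "" Then
    ' First request: remember which cluster member served it.
    Response.Cookies("StickyServer") = Request.ServerVariables("SERVER_NAME")
    Response.Cookies("StickyServer").Expires = DateAdd("h", 1, Now())
End If
' A forwarding layer would compare this cookie to the local server
' name and forward mismatched requests to the sticky server.
%>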

Scaling Up vs. Scaling Out

When you're designing a highly available web solution, two primary concerns will be that your solution remains highly available and that it is scalable. Your solution should be able to grow as needed to meet changing demands. Two approaches to scalability are scaling up and scaling out.

Scaling up is the more traditional approach to website growth. It involves upgrading the hardware on existing servers. This can mean increasing the number of disks used for storage, or adding CPUs and memory. Upgrading from Windows 2000 Advanced Server to Windows 2000 Datacenter Server can increase individual server capacity per server cluster. Windows 2000 Datacenter Server can scale up to 32 CPUs and 64GB of RAM.

An alternative to scaling up is scaling out, which entails adding servers to aid in handling the workload. Cloned web servers can be added on the front end and grouped into web clusters. These clusters can be load-balanced to handle any increase in client requests. On the back end, scaling up can involve adding more memory and processor support; Windows 2000 Advanced Server can scale up to 8 CPUs and 8GB of memory. An alternative to scaling up on the back end is to partition the data. In partitioning, software and hardware are duplicated, but the data is divided, or partitioned, among the nodes. Requests are routed to the partition on the node where the data is stored. When new data is added, partitioning automatically adapts to the data and balances the load among the existing nodes.

For additional information on partitioning, see Chapter 9.

Log Shipping

If SQL Server 2000 is the primary database on the back end, log shipping can be applied as a method to keep data consistent across multiple SQL Server 2000 systems. With log shipping, transaction logs are automatically sent from one server to another. This provides a "warm" standby by ensuring that transaction logs are continually fed from one database to another.



Microsoft Exam Objective

Design content and application topology. Considerations include log shipping.

Log shipping can increase scalability by providing a method to offload query processing from a primary server to read-only destination servers. You can incorporate log shipping with both NLB and failover clustering to offer a solution with excellent high availability.


For additional information on log shipping, see Chapter 9.

Integrating Exchange 2000 Server

Websites often provide services other than static web pages. These services take the form of applications. Electronic messaging is an example of the type of supplemental application that can enhance a website. In this section, we'll take a look at integrating Microsoft Exchange 2000 Server into a website.



Microsoft Exam Objective

Design a Microsoft Exchange 2000 Server messaging Web integration strategy. Considerations include browser access and Wireless Access Protocol (WAP) gateways.

Exchange 2000 Server is highly integrated with Microsoft’s web technologies. IIS 5 provides many of the core communication protocols to Exchange 2000 Server, including HTTP, Post Office Protocol v3 (POP3), Internet Message Access Protocol v4 (IMAP4), Network News Transfer Protocol (NNTP), and Simple Mail Transfer Protocol (SMTP). This tight integration of IIS and Exchange 2000 Server allows Exchange to support web-based access to e-mail. This web-based access is called Outlook Web Access or OWA.

Outlook Web Access

With OWA, users can access their mailboxes from any location on the Internet, through a web browser. The supported web browsers range all the way back to Netscape Navigator 4.08 and Internet Explorer 4.01 Service Pack 1. From Internet Explorer 5 on, end users have functionality comparable to clients running Microsoft Outlook. Microsoft Outlook uses the Messaging Application Programming Interface (MAPI) to communicate with the Exchange server, resulting in functionality beyond that provided with OWA.

Outlook Web Access uses IIS and an Exchange ISAPI extension to parse information from a web browser and pass it to the Exchange 2000 Server storage system. The storage system keeps and manages the e-mail messages, files, web pages, and other items in a database.

Exchange 2000 Server Topology

There are two primary options for designing an Exchange 2000 Server topology. It can be installed on a single server, with all components operating on that server, or distributed across multiple servers. A single server contains everything needed for clients to connect a web browser to their mailboxes. Single-server installations are easy to administer and require less hardware. Conversely, single-server installations provide little fault tolerance or load balancing.

Multiple-server configurations, on the other hand, allow you to design the infrastructure to support high availability and provide load balancing. Multiple-server arrangements use front-end and back-end servers. Front-end servers work similarly to a proxy server: they receive requests from clients and forward those requests to the back-end servers, using LDAP to query Active Directory for the location of the appropriate back-end server. Back-end servers host the actual data storage system.

Except in very small installations, it is highly recommended that you use a multiple-server topology for an Exchange 2000 Server integration. There are several advantages to using multiple servers:

- A single namespace for the end users. End users' mailboxes can be distributed across multiple back-end message stores, and end users need to know only the front-end server name.

- The back-end servers can be isolated from direct Internet connection. This significantly increases the security of the message stores.

- With multiple-server configurations you can segregate functionality to various servers. For example, if you want to run the OWA web server over SSL, the SSL overhead is on the front-end server only and not on the back-end message-storage servers.

- Multiple-server configurations make it easy to scale for performance and to provide fault tolerance.


Exchange 2000 Fault Tolerance

Fault tolerance for Exchange 2000 Server integration is accomplished by utilizing a combination of other Microsoft technologies: NLB and MSCS (Microsoft Cluster Service). For more details on these technologies, refer to Chapters 2 and 3.

Front-end Exchange 2000 servers are configured for fault tolerance using NLB. NLB balances the traffic to the servers' HTTP, POP3, IMAP4, NNTP, and SMTP services. If one of the servers in the NLB configuration fails, the other servers will pick up the load. NLB should be installed and configured prior to installing Exchange 2000 Server.

The back-end Exchange 2000 servers are running a database, and systems running database servers cannot utilize NLB, so fault tolerance requires the use of Cluster services. Install and configure MSCS prior to installing Exchange 2000 Server. MSCS provides failover when one of the servers in the cluster fails; in a failover, the disk group is moved from the failed server to a surviving node. When you configure the Cluster services, Exchange 2000 Server is installed on a cluster virtual server, and each virtual server is responsible for an Exchange 2000 Server storage group. An Exchange 2000 server can support up to four storage groups. In an active/active cluster, this means each server should be configured with only two storage groups, so that when one node fails, the failover server ends up with a total of no more than four storage groups (its own two plus the two it inherits).

Outlook Web Access Authentication

OWA supports authentication of end users before they access their mailboxes. OWA supports both basic and Integrated Windows authentication.

The most flexible authentication is basic authentication, which is supported by most browsers on most operating systems. Basic authentication does not encrypt passwords during transmission. The end user is prompted for username, domain name, and password in order to access their mailbox.

Integrated Windows authentication supports both Kerberos and NTLM authentication. This is the most secure method because the authentication negotiation is fully encrypted. Kerberos authentication is limited to Internet Explorer 5.0 and later, running on Windows 2000 and later. Unfortunately, you can't use Integrated Windows authentication when you use a multiserver topology.

SSL can be used to encrypt the authentication traffic in multiserver configurations. SSL doesn't provide the authentication directly, but will encrypt the entire communications session. Authentication is then provided by basic authentication. Since most browsers support SSL, this technique offers secure encrypted authentication to browser platforms other than Internet Explorer.

Anonymous access can also be utilized with OWA. Anonymous access provides access to certain resources without prompting the user for authentication. This configuration is not recommended, however.

Wireless Access Protocol Gateways

Wireless Access Protocol (WAP) gateways are applications that allow wireless devices to access Internet services. Many modern cell phones and handheld devices support WAP, so users can collect e-mail, chat, and browse the web from almost any location. WAP is a relatively new protocol and is not part of the TCP/IP suite, so WAP devices can't talk directly to Internet servers. If a WAP user wants to access standard Internet servers, the WAP device communicates with a gateway. The gateway server receives requests from the WAP device, then translates each request into a standard TCP/IP request on behalf of the WAP device. Conceptually, the gateway operates similarly to a proxy server. If your website design calls for offering services to wireless users, your topology will require a WAP gateway.

Summary

In this chapter, we introduced you to the Windows DNA framework for developing highly available web solutions. Windows DNA (Windows Distributed interNet Applications Architecture) was developed by Microsoft to leverage the integrated services of Windows 2000 and COM for the development of web-based business solutions. DNA provides scalability, availability, manageability, and security. Several DNA elements are required for an n-tier network: clients, front-end systems, and back-end systems.

Windows DNA is based on a three-tier model. The first layer is the presentation layer, the user interface to the applications resident on the back end. The second or middle layer is the business logic layer and is responsible for connecting the clients on the front end to the applications that reside on the back end. The third layer in this model is the data access layer, which actually houses the databases or storage solutions that the clients on the front end are trying to access.

Microsoft Application Center 2000 is built around the Windows 2000 infrastructure and provides administrators with a tool for creating and managing websites. Application Center can also be used to deploy website content and configuration settings, and to synchronize cluster members to ensure that content is consistent across the network.

Application Center 2000 supports both Network Load Balancing and Component Load Balancing. Wizards and menus make setting up and configuring both NLB and CLB a simple process. NLB, the default load-balancing method when Application Center is installed, distributes IP requests across servers in a cluster. CLB provides the administrator with a method to load-balance COM+ components. Three Component Load Balancing models are supported with Application Center: two-tier CLB with full load-balancing, three-tier CLB with full load-balancing, and three-tier CLB with failover. CLB introduces additional system overhead that can cause degradation in the network's overall performance, so extensive research should be done before implementing a CLB solution.

Application Center helps you deploy and synchronize content among cluster members, using application images to deploy content. These images can contain entire websites, individual web pages, COM+ applications, and other components that an administrator may want to deploy to cluster members. The Health Monitor in Application Center monitors and analyzes problems on each cluster member.

In the final section of this chapter, you learned about integrating Exchange 2000 Server's service known as Outlook Web Access. OWA allows users to access mailboxes and other Exchange 2000 Server resources via a web browser from any location. Exchange 2000 Server can be configured on a single server or in a front-end/back-end multiserver configuration. The multiserver configuration supports using NLB on the front-end servers and Cluster services on the back-end servers for fault tolerance. OWA supports using basic, Integrated Windows, anonymous, and SSL authentication. Installation of a WAP gateway allows users of your highly available web solution to retrieve e-mail from WAP-compatible wireless devices.


Exam Essentials

Understand the three tiers or layers of distributed web-based applications.  Web-based applications are partitioned into three tiers: the presentation layer, the business logic layer, and the data access layer. The presentation layer consists of the client interface to the applications. The business logic tier is responsible for connecting the clients on the presentation layer to the data that is stored in the data access layer. The data access tier is used to store data.

Identify the features of Microsoft Application Center 2000.  Application Center 2000 includes Cluster services, integrated Network Load Balancing, and Component Load Balancing. It can be used to synchronize and deploy website content and applications. It can also be used to monitor cluster members and provide local and remote administration.

Identify the cluster types supported in Microsoft Application Center 2000.  Microsoft Application Center 2000 offers three cluster options: general/web clusters host websites and COM+ applications; COM+ application clusters host only COM+ applications; COM+ routing clusters route requests to COM+ application clusters.

Identify the three Component Load Balancing models supported by Application Center 2000.  Three primary models are available in Application Center 2000 for Component Load Balancing: two-tier with full load-balancing, three-tier with full load-balancing, and three-tier with failover.

Know how web content is deployed and synchronized to websites using Application Center.  The New Deployment Wizard, included with Application Center 2000, helps you deploy new content to websites in a cluster and synchronize the content across the clustered environment.

Understand the two main components of Component Load Balancing.  The two main components of CLB are the CLB software and a COM+ application cluster. The CLB software determines the order in which COM+ cluster members are used for activating COM+ components. A COM+ application cluster is a set of clusters managed by Microsoft Application Center.


Identify the two different Exchange 2000 Server topologies and know how to use them to provide fault tolerance when integrating Exchange Server in a website.  Exchange 2000 Server supports single-server and multiserver topologies. In a multiserver arrangement, the front-end servers can use NLB for fault tolerance and the back-end servers can use Cluster services.

Know the types of authentication available for OWA.  OWA (Outlook Web Access) can use basic, Integrated Windows, anonymous, and SSL to authenticate clients' requests.

Key Terms

Before you take the exam, be certain you are familiar with the following terms:

affinity
Application Center 2000
application image
back-end server
back-end system
business logic layer
CLB software
COM+ application clusters
COM+ cluster
COM+ components
COM+ routing clusters
Component Load Balancing (CLB)
Component Object Model (COM)
cookies
data access layer
deployment
front-end server
front-end system
general/web clusters
Internet Message Access Protocol v4 (IMAP4)
load-balanced clusters
log shipping
Messaging Application Programming Interface (MAPI)
Network Load Balancing (NLB)
Network News Transfer Protocol (NNTP)
n-tier
Outlook Web Access (OWA)
Post Office Protocol v3 (POP3)
presentation layer
request forwarder
routing lists
scaling out
scaling up
security zones
session state
Simple Mail Transfer Protocol (SMTP)
staging cluster
stateful
stateless
synchronization
throughput
Web-tier clusters
Windows Distributed interNet Applications Architecture (Windows DNA)
Wireless Access Protocol (WAP)


Review Questions

1. The acronym DNA stands for which of the following?

A. Domain Name Architecture
B. Distributed interNet Applications Architecture
C. Domain Internet Applications
D. Distributed Internet Architecture
E. Distant interNet Applications Architecture

2. Which of the following are required for an n-tier network? Select all that apply.

A. Clients
B. Back-end systems
C. NLB
D. Front-end systems
E. Clusters

3. What is a stateful system?

A. A system that maintains client data during the entire session in which the client is connected to the website.
B. A system that does not maintain client information during the entire session in which the client is connected to the website.
C. A system that primarily uses Component Load Balancing.
D. A system that is using the Windows 2000 Datacenter operating system.

4. Which layer of the DNA architecture is responsible for connecting clients to data resources?

A. Business logic layer
B. System layer
C. Presentation layer
D. Data access layer

5. Which of the following Windows 2000 technologies can be used to distribute both website content and cluster configuration settings?

A. SQL Server 2000
B. BizTalk 2000
C. Commerce Server 2000
D. Application Center 2000

6. Which of the following is used to test and deploy Application Center cluster applications?

A. Staging cluster
B. COM+ application cluster
C. NLB
D. Log shipping

7. Which of the following is also known as a load balancer for COM+ components?

A. Response timetable
B. COM+ routing cluster
C. COM+ application cluster
D. Routing lists
E. Web cluster

8. What CLB component contains a list of all load-balanced cluster members?

A. COM component
B. COM+ component
C. Response timetable
D. Routing list

9. What are two factors that should be considered before deciding whether to use a CLB solution?

A. Throughput
B. Number of cluster members
C. Type of cluster being used
D. Response time
E. Cluster server sizes

10. Which of the following is not an advantage of using a multiserver Exchange 2000 topology?

A. Scalability
B. Single namespace
C. Increased security capability
D. Default high availability

11. Which of the following is the least flexible authentication scheme that can be configured with Outlook Web Access (OWA)?

A. Basic
B. Integrated Windows
C. SSL
D. Anonymous


Answers to Review Questions

1. B. DNA stands for Distributed interNet Applications Architecture. It is also known as Windows DNA Architecture and is the framework for n-tier networks.

2. A, B, D. An n-tier network consists of clients that access database resources located on back-end systems, by means of web servers located on front-end systems. NLB (Network Load Balancing) and clusters are technologies for providing web services on front- and back-end systems.

3. A. A stateful system is one that maintains client information for the entire time the client is connected to the website. The client information can be held in a cookie on either the front-end or back-end system. Better performance is obtained by placing it on the back end of the network.

4. A. The business logic layer is responsible for connecting clients from the presentation layer to data resources on the data access layer. This is normally done through the use of COM+ components.

5. D. Microsoft Application Center 2000 helps you deploy web content and create and manage website clusters. Application Center 2000 is part of Microsoft's .NET initiative and is integrated into Windows 2000 for seamless operation.

6. A. Staging clusters are stand-alone or single-node clusters that are used primarily to test Application Center applications. These stagers can also be used to deploy the applications once they have been tested, and to view cluster status and performance data.

7. B. The COM+ routing cluster is an Application Center 2000 cluster whose primary purpose is to route requests to COM+ application clusters. The routing cluster does this load balancing in addition to functioning as a web server cluster.

8. D. The routing list contains a list of the COM+ cluster members. This list is located on each cluster in the presentation or web layer of the n-tier network.

9. A, D. When deciding whether or not to use CLB, throughput and response time must be considered by the designer. If your applications will make calls across the network, this can cause a degradation in throughput performance. Slow response times are unacceptable on any website; slow response time can be the result of locating COM+ components on the web or presentation layer, and you may want to consider moving them to their own layer.

10. D. An Exchange 2000 Server multiserver topology provides for scaling the system to meet performance requirements. The multiserver topology uses a front-end/back-end server arrangement, which allows for a single-namespace design—end users need only remember the name of the front-end server and not the particular back-end server hosting their mailbox. When using multiserver configurations, security can be increased by the administrator's limiting access to the back-end servers and by isolating various server processes. High availability can be designed into a multiserver topology, but the default installation does not provide fault tolerance.

11. B. Integrated Windows authentication uses Kerberos and NTLM authentication and requires Internet Explorer 5.0 or later; other browsers are not supported. Furthermore, Integrated Windows authentication is limited to single-server topologies.


TechNew Research

Take a few minutes to look over the information presented and then answer the questions at the end. In the testing room, you will have a limited amount of time—it is important that you learn to pick out the important information.

Background

TechNew Research Corporation is a leading supplier of business and industry data to the academic, library, and government markets. The company publishes a yearly research guide containing extensive market data that is used by major U.S. and European companies. The research guide is highly regarded within the industry as one of the best resources for current business and industry trends. TechNew's main office is in Kansas City, MO, and there are several regional offices located throughout the U.S.

Currently, clients get their research data through traditional delivery methods. A request is submitted and the data is sent via overnight carrier. TechNew's management wants to automate data access so that clients can get their information from anywhere in the world without having to wait for a hand delivery. Most of the research data is stored on one of the company's SQL servers (SQL Server 7.0). Management wants this data, truly the bread and butter of TechNew's operation, well protected. They also want to make certain that clients are able to access the data in real time.

CEO Statement

"We are in a position to substantially increase our customer base by updating our networking infrastructure. I want to increase the number of client requests that we are able to handle. Hopefully this proposed upgrade would accomplish that and more. Our marketing department is planning a major publicity blitz in the first quarter of the coming year and I want us to have the infrastructure in place to handle the expected increase in traffic."

Current Environment

TechNew Research's current network is a Windows 2000 environment consisting of two domain controllers, one DHCP server, three IIS web servers, and two SQL Server 7.0 machines, one of which is used for client database storage. All the servers are currently running Windows 2000. The two SQL servers are running on older Intel platforms, each with only one processor. The IIS servers are currently being used primarily for in-house applications. Eventually, TechNew management wants to configure the IIS servers so that they become the entry point for clients accessing the market research data on the SQL servers.

Business Requirements

TechNew Research wants to implement Microsoft Application Center 2000 and use it to improve their current web environment. Management plans to deploy additional web servers on the front-end to handle an expected increase in client usage. These servers will house both static and dynamic content. The staff are currently developing the content and will need to test it before it is implemented into production. Research data will be updated daily as information is collected and analyzed. TechNew wants its clients to have access to this data from any web browser at any time. The IT manager is familiar with the Microsoft DNA model and wants to use it as the basis for upgrading the current environment. Control of all web resources must be manageable through a single entity. By using Application Center, the company will be able to leverage its existing environment while at the same time providing the scalability needed for future expansion.

The office at headquarters is the only site planned for upgrading in the immediate future. Headquarters will be the prototype before proceeding to upgrades of the regional offices. Eventually, all regional offices will also be upgraded with new servers and additional network infrastructure.

Funding

Funding has been approved only for the first phase of implementation. TechNew Research's management wants to see what benefits they will reap from this new network configuration before they allocate additional funds for future expansion.

Technical Requirements

TechNew Research will use the Windows DNA architecture to implement a multi-tier network. This network will be based on a three-tier model with Network Load Balancing on tier 1 and Component Load Balancing on tier 2. All data resources will be located on tier 3. The two existing SQL Server 7.0 machines will be upgraded to SQL Server 2000 and will be configured for failover. The hardware for the two SQL servers will have to be upgraded before SQL Server 2000 can be installed. At a minimum, they will need to have at least one additional processor installed. All servers will be upgraded from Windows 2000 to either Windows 2000 Advanced Server or Windows 2000 Datacenter.

Questions

1. Winston is the IT manager for TechNew Research. He has the ultimate responsibility for ensuring that all upgrades and changes to the network are implemented seamlessly. He wants to make sure that some type of high-availability solution is implemented when the network is upgraded. TechNew has decided to use Application Center 2000, and management is asking Winston questions about the type of services Application Center can support. What types of clusters are supported with Application Center 2000?

A. Web cluster
B. Failover cluster
C. COM+ application cluster
D. DHCP cluster
E. COM+ routing cluster
F. DNS cluster
G. WINS cluster

2. Management wants to implement additional servers on the front-end to service the expected increase in client requests. They are considering both Network Load Balancing and Component Load Balancing for assistance in distributing the workload. What cluster type could be used to host the websites and also support NLB and CLB?

A. Web cluster
B. ISA cluster
C. COM+ application cluster
D. Failover cluster

3. One of the developers has suggested to Winston that he look at a component load-balancing solution for the load balancing of COM+ components. In the following table, identify the primary Component Load Balancing models on the left and match them to their description on the right. Some items in the table may not be used.

CLB Model                               Description
Two-tier with full load-balancing       Front-end web cluster
One-tier with full load-balancing       Requests passed to back-end CLB cluster
Three-tier with full load-balancing     Requests passed to load-balanced middle tier
Three-tier with failover                Requests passed to two members in middle tier
Two-tier with failover                  Requests passed to back-end tier

4. Once the websites are configured and are ready to receive client requests, how will these requests be routed to the web clusters?

A. Through Component Load Balancing
B. Through Network Load Balancing
C. Through failover clustering
D. Through a DHCP server

5. Winston has identified several cluster types that are options with Application Center 2000. They are web clusters, COM+ application clusters, and COM+ routing clusters. What is the primary role of a COM+ routing cluster?

A. It provides load balancing of COM+ components.
B. It routes requests to the COM+ application cluster.
C. It provides load balancing of client requests on the front-end of the n-tier network.
D. It is used by Application Center to deploy new web content.

6. Before he puts it into production, Winston wants to test the new web content that the marketing department has created for its marketing blitz. Put the following steps for testing and deploying new web content into the correct order. Some of the steps will not be used.

Set up a COM+ application cluster
Set up a staging server
Configure Network Load Balancing
Deploy the new web content to a failover cluster
Replicate the new content to cluster members
Test the new content on the staging server
Deploy the new web content to the web cluster controller

7. TechNew's developers have decided to use ASP and COM+ components on the web cluster. What will happen to servers in these clusters when COM+ components are deployed?

A. Web services on the servers will have to be restarted.
B. The servers will have to be removed from load balancing and reinserted after the COM+ components are deployed.
C. All of the servers in the clusters will have to be cold-booted.
D. IIS 5 will have to be reinstalled on each of the servers.

Answers

1. A, C, E. Microsoft Application Center 2000 supports three cluster types: General/web clusters host websites and COM+ applications and can also be used for staging content. COM+ application clusters host COM+ applications only. COM+ routing clusters are used to route requests to COM+ application clusters.

2. A. A web cluster could be used to host websites. It supports Network Load Balancing for the distribution of IP requests across member clusters. It also supports Component Load Balancing if the cluster is hosting COM+ applications or components.

3.

CLB Model                               Description
Two-tier with full load-balancing       Front-end web cluster
                                        Requests passed to back-end CLB cluster
Three-tier with full load-balancing     Front-end web cluster
                                        Requests passed to load-balanced middle tier
                                        Requests passed to back-end CLB cluster
Three-tier with failover                Front-end web cluster
                                        Requests passed to two members in middle tier
                                        Requests passed to back-end CLB cluster

The two-tier model with full load-balancing has a front-end web cluster that passes requests to a Component Load Balanced cluster on the back end. The three-tier model has a front-end web cluster that passes requests to a load-balanced middle tier. Requests are then passed from the middle tier to a CLB cluster on the back end. The three-tier with failover model has a front-end web cluster that passes requests to two members in the middle tier. One of the members is active and the other is in standby mode. Then requests are passed to a CLB cluster on the back end.

4. B. Once client requests are received, they are routed to a cluster member through Network Load Balancing on the front end.

5. B. A COM+ routing cluster is responsible for directing client requests to the COM+ application cluster. This is the routing cluster's primary role, but it can also serve as a web server cluster. CLB provides load balancing for COM+ components. NLB handles load balancing of client requests on the front-end. The New Deployment Wizard is Application Center's tool for deploying web content.

6.

1. Set up a staging server
2. Test the new content on the staging server
3. Deploy the new web content to the web cluster controller
4. Replicate the new content to cluster members

The first thing Winston should do to create the testing environment is to set up a staging server (or a cluster of one). He should then load the web content onto this server and test it. Once the tests are completed to everyone's satisfaction, Winston can deploy the new web content to the cluster controller. The cluster controller will replicate the new content to all the cluster members.

7. A. When COM+ components are deployed, the web services on each server must be restarted. This should be taken into account when planning for deployment because it will probably disrupt normal client service.


Chapter 11

Monitoring, Management, and Disaster Recovery Strategies

MICROSOFT EXAM OBJECTIVES COVERED IN THIS CHAPTER:

- Design a system management and monitoring strategy. Considerations include performance monitoring, event monitoring, services, data analysis, and WMI.
- Design an application management and monitoring strategy. Considerations include detection and notification of application failure.
- Design a disaster recovery strategy.

In Chapter 10 you were introduced to Microsoft's framework for designing n-tier web solutions: DNA (Distributed interNet Application Architecture). On the DNA framework, websites can be developed to have not only scalability, but also high availability, manageability, and security. You also learned how Microsoft Application Center 2000 helps developers to create and manage websites. Both these skills are part of the exam's fifth and final objective, Designing Application and Service Infrastructures for Web Solutions. Chapter 10 covered other issues of website topology as well, including the Component Load Balancing service included in Application Center.

Here in Chapter 11, we'll change directions and focus on monitoring and management of the website and the system overall. The system monitoring skills studied in this chapter are included under the exam's objective 1 (Designing Cluster and Server Architectures for Web Solutions), and the application monitoring skills are under objective 5 (Designing Application and Service Infrastructures for Web Solutions).

We will look at the valuable monitoring tools available for following the performance of a website in a Windows 2000 system. Some of these, such as Network Monitor, are part of Windows 2000 itself; others, such as the HTTP Monitoring Tool, ship with the Windows 2000 Resource Kit. You'll also study some command-line tools for monitoring individual processes, including Process Tree, Process Viewer, and Process Explode. In addition, the Windows Management Instrumentation (WMI) framework can be used to monitor, track, and control computers in an enterprise environment. WMI is based on the WBEM initiative and is natively supported by Windows 2000.

Several tools discussed in this chapter are for testing your websites. The Web Capacity Analysis Tool (WCAT) tests capacity for both IIS servers and their clients. The Web Application Stress (WAS) tool is used to stress-test web servers; simulated workloads are provided to appraise server performance. You'll also be introduced to specific performance counters that can be used to develop an overall performance monitoring strategy for Windows 2000, IIS 5, and SQL Server 2000.

This chapter also explains specific tools that are useful when developing a disaster recovery plan for your highly available Web solution. This discussion addresses the disaster recovery strategy skill under exam objective 1, Designing Cluster and Server Architectures for Web Solutions.

Monitoring Tools

As the number of visitors to your website increases, so does the load on system resources. By monitoring the system overall, an administrator can be proactive and spot problems before they become emergencies. With the help of system monitoring, you can make decisions about replacing system resources or tuning them for better performance on the website. Windows 2000 provides several monitoring and management tools that supply a wide range of information on current system operations. You’ll also be able to gather from these tools much of the data needed for future capacity planning. In this section, you’ll study some of the facilities available for monitoring and improving the performance of your website in particular, and some aspects of the system overall.

HTTP Monitoring Tool

The HTTP Monitoring Tool was designed for immediate monitoring of web servers using existing web pages on the servers. It can be customized to perform complex website tests. Multiple servers can be tested, and reports generated to a standard comma-separated file or to event logs. The HTTP Monitoring Tool is located on the Windows 2000 Resource Kit Companion CD. It has three main components:

Real-time sampling service  The real-time sampling service makes HTTP requests to ASP files on specified servers. The results of these requests are recorded in a file on the local computer.


SQL reporting server  A SQL reporting server uses FTP to retrieve the sampling results. These results are copied into a database and massaged for monitoring and reporting.

Client monitor  The client monitor uses ASP pages, viewed in a web browser, to display the information that was recorded by the SQL reporting server.

Network Monitor

Network Monitor is a software-based network packet analyzer that can be used to capture network traffic in real time. This traffic can be displayed immediately after it is captured or saved for later analysis. The Network Monitor driver is a network protocol that is automatically installed when the Network Monitor is installed. The Network Monitor itself is illustrated in Figure 11.1.

FIGURE 11.1  Network Monitor

Network Monitor will help you troubleshoot network problems for computers running Windows 2000 Server on a LAN. This includes local computers, remote computers, and computers using a dial-up connection. The Microsoft Systems Management Server (SMS) includes a full version of Network Monitor, for use in monitoring, diagnosing, and troubleshooting network problems on all computers in a network segment. The Windows 2000 Server version of Network Monitor only captures packets on the LAN.


Network Monitor degrades system performance while it runs. For this reason, it should be run during periods of low network usage. This may not be feasible if you are trying to troubleshoot a networking issue, but keep in mind that overall network performance will suffer while you're using Network Monitor.

Netstat

Netstat (netstat.exe) is a command-line utility that displays your system's current TCP/IP connections and protocol statistics. Table 11.1 lists the parameters that can be used with the netstat.exe command, and you can see a sample run in Figure 11.2.

TABLE 11.1  Netstat's Command-Line Parameters

Parameter    Description
-a           Displays all network connections and listening ports
-e           Displays Ethernet statistics; can be used with the -s parameter
-n           Displays addresses and port numbers in numerical form
-p proto     Shows connections for the protocol specified by proto
-r           Displays the routing table
-s           Displays per-protocol statistics
interval     Redisplays statistics, pausing interval seconds between each display

FIGURE 11.2  A sample run of the netstat command
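As a quick illustration of the parameters in Table 11.1, the following commands (typed at a Windows 2000 command prompt) first display all connections and listening ports in numeric form, and then redisplay the TCP protocol statistics every five seconds until you press Ctrl+C:

    netstat -a -n
    netstat -s -p tcp 5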

Process Explode

Process Explode (pview.exe) is a command-line utility that shows you the current count of system objects (see Figure 11.3). It can be used to change the base priority of a running process, examine the security context of processes and thread and token permissions, and stop running processes. It also can be used when you need to examine detailed counts of physical and virtual memory.

FIGURE 11.3  The Process Explode command-line utility

Process Explode provides a lot of the same information that is available in Performance Monitor. Unlike Performance Monitor, however, Process Explode does not have to be configured for usage. It also does not refresh or update counters automatically; you must use the Refresh button to update data. Process Explode is available on the Windows NT Resource Kit.

Process Viewer

Process Viewer (pviewer.exe) is a command-line utility for displaying data about processes running on local and remote computers (see Figure 11.4). It is extremely useful when you're trying to diagnose process memory-usage problems. Process Viewer can be used to stop running processes and to change the base priority of a running process. The information obtained with this utility is similar to that of Process Explode, with one exception: Process Viewer allows you to look at processes running on remote computers.

FIGURE 11.4  The Process Viewer command-line utility

Process Viewer is available on the Windows NT Resource Kit.

Process Thread and Status

Process Thread and Status (pstat.exe) is a command-line tool that displays all processes and threads running on a system, along with the status of each process. The pstat.exe utility has no switches but can be used with the pipe option ( | ) to display one screenful of data at a time. Figure 11.5 shows sample output.


FIGURE 11.5  Output from the Process Thread and Status tool (pstat.exe)
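Because pstat.exe has no switches of its own, piping and redirection are the practical ways to manage its output. For example, you can page through the output one screen at a time, or capture a baseline snapshot to a file (the log filename here is only an example):

    pstat | more
    pstat > C:\Logs\pstat-baseline.txt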

Process Tree

The Process Tree command-line tool allows administrators to query the process tree and kill processes. It works on both local and remote Windows 2000 computers. Before Process Tree can be used, it must be installed from the Windows 2000 Resource Kit. Once Process Tree is installed, members of the local Administrators, Power Users, and Users groups can query the process tree. Only local Administrators and Power Users group members can kill processes. Process Tree consists of several components:

- A kernel-mode driver, ptreedrv.sys
- A Windows 2000 service, ptreesvc.exe and ptreesvcps.dll
- A COM+ server, ptreesvr.dll
- A console client, ptree.exe
- A GUI client, ptreeg.exe

WMI: Microsoft's Implementation of WBEM

Web-Based Enterprise Management (WBEM) is an initiative that allows for the management of computers through standard web browsers. The WBEM architecture uses three components for collecting and distributing system data. The first component is an object repository, which is a standard method for storing object definitions. The second component is a protocol such as COM or DCOM, used to obtain and disseminate data about the objects. The third component is one or more Win32 dynamic link libraries (DLLs) that function as WMI data providers.

WMI, Windows Management Instrumentation, is Microsoft's implementation of WBEM. WMI is used to monitor, track, and control computers and network devices using a web browser such as Internet Explorer. WMI applies the Common Information Model (CIM), which is used to describe network objects.

Microsoft Exam Objective

Design a system management and monitoring strategy. Considerations include WMI.

Windows 2000 has built-in support for WMI. Its two principal components are the object repository and the CIM Object Manager.

The object repository stores information collected through WMI-manageable hardware and software. By default, Windows 2000 collects information about system resources from the Registry, and WMI can be used instead to collect this same data. WMI will collect the data from the various system resources and pass this data on to a management information store.

The CIM Object Manager is a key component for WMI. It is the collection and manipulation point for objects that are stored in the object repository; WMI facilitates the management of these objects. WMI providers act as intermediaries between the CIM Object Manager and the objects. The following are some of the providers included in Windows 2000:

- Win32 Provider supplies information about the operating system, security, and computer system.
- WDM Provider supplies low-level Windows Driver Model driver support for input and storage devices.
- Event Log Provider enables WMI events to be added to event logs for reading and analysis.
- Registry Provider generates WMI events when Registry keys are modified.
- Performance Counter Provider exposes any performance counters that are installed on the system.


WMI can also be partnered with the Windows Performance Tools (described shortly) to gather performance data about a system. To enable performance monitoring using WMI, type perfmon /wmi at a command prompt. The Performance Tools window will open as it normally does, but the data will be collected through WMI instead of the Registry.
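If you want to confirm that WMI is available before building a monitoring strategy around it, Windows 2000 also ships with the WMI Tester (wbemtest.exe), a GUI utility for connecting to the CIM repository and running ad hoc queries. A minimal sketch from a command prompt follows; the query shown in the comment is an example you would type into the tool's Query dialog after connecting to the root\cimv2 namespace:

    rem Make sure the Windows Management Instrumentation service is running
    net start winmgmt

    rem Launch the WMI Tester; connect to root\cimv2 and run, for example:
    rem   SELECT * FROM Win32_OperatingSystem
    wbemtest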

Web Capacity Analysis Tool (WCAT)

Microsoft's Web Capacity Analysis Tool (WCAT) is a utility for testing IIS server and client capacity. It can test IIS servers' response to client requests for data, content, or HTML pages. The results of these tests are helpful in determining optimal server and network configurations. WCAT has more than 40 prepackaged content and workload simulations that can be used to test various server and network configurations. These ready-to-run simulations are suitable for servers with single or multiple processors, and will check the performance of your IIS servers in response to various workloads. The simulations to test Active Server Pages can be run even if the server does not currently house any ASPs. There are also simulations to test server response to Secure Sockets Layer (SSL) encryption, and to test the server's response to HTTP Keep-Alives. WCAT enables users to design and run their own workload simulations in addition to the ones mentioned here.

WCAT's four components are a server, a client, a controller, and the network. The server is typically configured with Windows 2000 Server, Advanced Server, or Datacenter and responds to client requests for connections. It establishes, manages, and terminates connections, and processes and responds to client requests. The client consists of one or more computers that are running the WCAT client software. The controller initiates and monitors the WCAT tests, using three input files (a configuration file, a script file, and a distribution file). An optional performance counter file can be configured to specify performance measurements for the server.

Web Application Stress Tool (WAS)

The Web Application Stress (WAS) tool simulates web activity on web servers (see Figure 11.6). It does so through a variety of workload scenarios that appraise the performance of a web server or web application. You can set stress-test parameters for the number of users; the size, type, and rate of client requests; specific pages to be requested; and the frequency of the client requests. Once the stress test is run, a detailed report is produced to show the total number of hits to the site, number of hits per page, number of sockets per second, and any performance counters that were specified prior to the start of the test.

FIGURE 11.6  Web Application Stress Tool

There are two categories of web stress-testing: performance testing and stability testing. Performance testing involves two tasks. First, stress is applied to the website in order to acquire a reading of maximum requests handled per second. The second task is to determine which resource is preventing the requests-per-second rate from going higher. This is done by identifying the bottleneck (which typically is the CPU). The stability test is run to ensure that there are no memory or threading issues.

WAS contains six sample scripts for testing the tool's functionality. The files associated with these scripts are located in the samples subdirectory in the location where you installed the WAS tool. To use any of the sample scripts, you can move this folder and its contents to your IIS default website.

Windows 2000 Performance Tools (perfmon)

In this section we will examine several of the performance tools that are provided with Windows 2000. These tools are accessed by typing perfmon at the Run command or at a command-line prompt.


Microsoft Exam Objective

Design a system management and monitoring strategy. Considerations include performance monitoring, event monitoring, services, data analysis.

System Monitor

The System Monitor gives administrators the capability to monitor and troubleshoot resources on enterprise computers (see Figure 11.7). The System Monitor collects user-defined data for display in a graph. It can display real-time performance data and current and previous log data. You can define the appearance of the interface and how the data is displayed. The collected data can be displayed in a printable graph, a histogram, or a report. Administrators can also produce HTML pages from the collected data.

FIGURE 11.7  The System Monitor

When System Monitor is initially opened, it opens to a blank graph. Microsoft has provided some predefined settings for the Counter logs that can be used to monitor data. These counters are Memory\Pages/sec, Physical Disk(Total)\Avg. Disk Queue Length, and Processor(Total)\%Processor Time. You can also define a specific type of data that you want the graph to collect, such as performance objects, performance counters, or object instances. You can define the source of the data to be collected. This can be from a local computer or another computer on the network for which you have the permissions to monitor. Data can also be sampled at predefined intervals.


Performance Logs and Alerts

The Performance Logs and Alerts monitor collects resource data just as System Monitor does, but it has functionality over and above System Monitor's. Performance Logs and Alerts can collect data in a format that is usable by SQL databases. The data can be viewed in real time or saved for later analysis. Data is collected whether or not the user is logged on to the monitored computer. Both stop and start times can be defined for logging. Filenames and file sizes can be defined for automatic log generation. Alerts, too, can be set to specify actions that should be performed when a threshold on a specific counter is reached.

Two types of logs are produced by Performance Logs and Alerts: counter logs and trace logs. Counter logs sample data that is contingent on the performance counters and objects that have been selected for monitoring. This sample data is gathered at predefined intervals. Trace logs do not sample data. They track data from start to finish.

You can configure alerts to send e-mail to an administrator when a threshold is met. Performance Logs and Alerts also can be configured to run a specific program or start a log.

Windows Task Manager

The Windows Task Manager supplies dynamic data on processes and applications that are running on the computer and is thus an important tool for getting a view of the overall system performance. You can start programs from Task Manager and terminate programs that have stopped responding. See Figure 11.8.

FIGURE 11.8  Task Manager


You can start Task Manager in one of several ways: Press Ctrl+Shift+Esc, or right-click the Taskbar and select Task Manager, or press Ctrl+Alt+Del and select Task Manager. You can also enter taskmgr at a command prompt or in the Run dialog. The Task Manager has three tabs, each with a specific function.

Applications  The Applications tab is used to monitor applications that are currently running. From this tab you can start or stop a program, or you can switch between the ones that are running. The Applications tab is usually the first place to look when you're trying to troubleshoot nonresponsive applications. You can identify the specific processes associated with an application by right-clicking the application and selecting Go To Process.

Processes  The Processes tab shows you all processes that are running on the system. Each application running on the server has at least one associated process, and many applications require multiple processes. Processes run in their own memory space, dedicated specifically for that process. One important piece of information that can be ascertained from the Processes tab is the amount of CPU time being consumed by a particular process.

Performance  In the Performance tab you can monitor the current performance of the computer. This tab gives you current and historical data on CPU and memory usage. It also presents the total number of handles, threads, and processes currently running, along with information on physical and kernel memory.

Event Viewer

The Windows Event Viewer is where you view the event logs that gather data on the computer's events (see Figure 11.9). Windows 2000 provides System, Application, and Security logs. The System log records OS-generated events, including system startup and shutdown and the loading of device drivers. The Application log records events generated by the applications on the computer. The events logged here are determined by the developers of the applications. If Dr. Watson is enabled, for instance, its errors are written to the Application log. The Security log monitors system security events, including users' logging on and off the system. The Security log is only accessible by system administrators.

FIGURE 11.9  Event Viewer

Three types of events are written to the event logs:

Information  Information events are triggered by the successful operation of a device or service.

Warning  Warning events indicate that a problem may exist and the event needs to be looked at.

Error  An Error is a serious event that should be investigated immediately. It indicates that a significant problem has occurred on the system.

Monitoring with System Monitor

In this section we will look at counters that can be configured in System Monitor so that you can follow processes that have an effect on the overall performance and operation of a website. This information aids in determining whether system hardware upgrades are needed, or whether applications need to be rewritten.

The first step of the monitoring process is to select the counters you need to monitor and create a baseline log. This log can be used for comparison when later analysis is required.


Memory Usage and Availability

Having sufficient memory is critical to any system, but especially so on servers that house e-commerce sites. IIS 5 requires a minimum of 128MB of RAM, but 256MB to 1GB (or even more) is highly recommended. Keep in mind that memory shortage can appear disguised as other problems on the system. The System Monitor provides the following counters, which are helpful in determining whether your system has sufficient memory resources:

Memory: Available Bytes  The total bytes of physical memory available to processes running on the system. You want to reserve at least 10% of this memory for peak use.

Memory: Page Faults/sec  The rate of page faults handled by the processor. These can be hard or soft page faults. A soft page fault is resolved in physical memory; a hard page fault requires disk access and can cause significant system delays. If this counter is above zero, it indicates that too much memory is being allocated to a particular application and not to the Windows 2000 operating system.

Memory: Pages Input/sec  The number of pages read from disk to resolve hard page faults.

Memory: Page Reads/sec  The number of times the disk was read to resolve hard page faults. This number should remain as low as possible. High numbers indicate the occurrence of disk reads as opposed to cache reads.

Memory: Cache Bytes  The size of the file system cache, which is set by default to use up to 50 percent of available physical memory. If IIS is running out of memory, it will attempt to trim the file system cache.

IIS Object Cache

Items that are frequently used by the system are stored in the IIS Object Cache. This cache is maintained by the IIS service and is part of the IIS 5.0 working set. The working set is the amount of physical RAM that is available to a process or activity. If a process is unable to store all of its code or data in physical memory, it will store it elsewhere, most likely on the disk in virtual memory. This causes an increase in disk activity, which will slow the server down. Monitor the following counters for the IIS Object Cache:

Internet Information Services Global: File Cache Hits %  The ratio of file cache hits to total cache requests.

Internet Information Services Global: File Cache Flushes  The number of file cache flushes that have occurred since server startup. These can be objects that were deleted from the IIS Object Cache because they timed out.

Internet Information Services Global: File Cache Hits  The total number of successful lookups in the file cache. This measurement indicates how often data being sought was found in the IIS Object Cache.

Process: Page File Bytes: Total  The current number of bytes a process has used in the paging files. Paging files are used to store pages of memory used by the process that are not contained in other files. Paging files are shared by all processes, and lack of space in paging files can prevent other processes from allocating memory.

Memory: Pool Paged Bytes  The number of bytes in the paged pool. This is an area of system memory for objects that can be written to disk when they are not being used.

Memory: Pool Nonpaged Bytes  The number of bytes in the nonpaged pool. This is an area of system memory for objects that cannot be written to disk but must remain in physical memory as long as they are allocated.

Processor Issues

Bottlenecks can occur when running processes consume most or all of the processor time for the system. Increasing the number of processors in a system sometimes alleviates bottlenecks. Windows 2000 can easily be scaled up to four processors. The following counters can be monitored for processor issues:

System: Processor Queue Length  The number of threads in the processor queue. A number greater than 2 generally indicates processor congestion. If this number stays at 2 or more, you should consider either upgrading your processor or adding additional processors.

Processor: %Processor Time  The percentage of time during which the processor is executing a non-idle thread. This is the primary indicator of processor activity; if it's high in comparison to your baseline, you may need to upgrade your processor or add additional processors.

Thread: Context Switches/sec: Inetinfo=>Thread#  The rate of switching from one thread to another.

System: Context Switches/sec  The combined rate at which all processors on the computer are switched from one thread to another.

Processor: Interrupts/sec  The average number of hardware interrupts being received and serviced by the processor in each second.

Processor: % DPC Time  The percentage of time during which the processor received and serviced deferred procedure calls (DPCs) during a sample interval. A DPC is an interrupt that runs at a lower priority than standard interrupts.

Network Measurements

The purpose of a website is to take requests from clients and service these requests as quickly as possible. In addition to this IIS work, other system resources require a certain amount of bandwidth. The amount of bandwidth available to website users will have a direct impact on the amount of bandwidth required by the web server and whether or not client requests are being serviced. Check the following counters for indications of your web server's network input/output performance.

Network Interface: Bytes Total/sec  The rate at which bytes are sent and received on the interface, including framing characters.

Web Service: Maximum Connections  The maximum number of simultaneous connections established with the web service.

Web Service: Total Connection Attempts  The number of connections that have been attempted using the web service since service startup.

Disk Counters

IIS 5 regularly writes to log files on the hard disk and produces constant disk activity. Physical disk counters provide insight into the frequency of disk access on the system and help to monitor for spikes in disk reads. The following counters will help determine the frequency of disk access.

Processor: %Processor Time  This counter measures the percentage of time during which the processor is executing non-idle threads. This is the primary indicator of processor activity.

Network Interface Connection: Bytes Total/sec  The rate at which bytes are sent and received on the interface, including framing characters.

Physical Disk: %Disk Time  This counter measures the percentage of elapsed time during which the selected disk drive is busy servicing read or write requests. A measurement significantly above 80% may indicate a possible memory leak.

Web Application Performance

Poorly written applications can have significant ramifications for your website's performance. Applications should be thoroughly tested before they are implemented. If websites on the front-end are requesting information from databases on the back-end, the databases may have problems keeping up with the requests. You can use the following counters to help determine whether web applications are causing bottlenecks on your website:

Active Server Pages: Requests/Sec  The number of requests executed per second. This number fluctuates depending on the complexity of the ASP pages.

Active Server Pages: Requests Executing  The number of requests currently executing.

Active Server Pages: Request Wait Time  The number of milliseconds for which the most recent request was waiting in the queue. This measurement should remain low; you don't want to see requests in the queue.

Active Server Pages: Request Execution Time  The number of milliseconds taken to execute the most recent request.

Active Server Pages: Requests Queued  The number of requests waiting for service from the queue. Both Requests Queued and Request Wait Time should remain close to 0. If the limit for Requests Queued is reached, users will see the message "HTTP 500/Server Too Busy."


Web Service: CGI Requests/sec  The rate of CGI requests being processed simultaneously by the Web service.

Web Service: Get Requests/sec  The rate at which HTTP requests using the GET method are made. GET requests are generally used for basic file retrievals or image maps, though they can also be used with forms.

Web Service: Post Requests/Sec  The rate at which HTTP requests using the POST method are made. POST requests are generally used for forms or gateway requests.

Monitoring SQL Server 2000

A comprehensive set of monitoring tools is provided with SQL Server 2000. The measures you use will depend on the type of monitoring to be done and the events to be monitored. This section examines some of the tools available.

Transact-SQL Statements

Transact-SQL statements can be used with system stored procedures to perform monitoring of SQL instances. SQL administrators use Transact-SQL statements to gather an assortment of information, including current user activity, data space used by a table or database, space used by a transaction log, input/output performance information, memory usage and availability, network throughput, and general statistics about SQL Server activity and usage.
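For example, current activity, space usage, and general server statistics can be gathered with the documented system stored procedures sp_who, sp_spaceused, and sp_monitor. The batch file below is a minimal sketch that runs them through osql, SQL Server 2000's command-line query utility, assuming a trusted connection to a default instance (sp_monitor requires membership in the sysadmin role):

    @echo off
    rem -E uses a trusted (Windows authentication) connection; -Q runs the query and exits
    osql -E -Q "EXEC sp_who"
    osql -E -Q "EXEC sp_spaceused"
    osql -E -Q "EXEC sp_monitor"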

SQL Server 2000 Error Log

SQL Server 2000 logs events to both the SQL Error log and the Windows Application log. The SQL Error log can be viewed to make sure that processes have completed. Every entry in the log is time-stamped. This log can be viewed through SQL Server 2000 Enterprise Manager or any text editor. It is located at Program Files\Microsoft SQL Server\Mssql\Log\Errorlog. A new SQL Error log file is created every time an instance of SQL Server 2000 is started. The SQL Error log can be used in conjunction with the Windows Application log to diagnose and troubleshoot problems related to SQL Server 2000.
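Because each entry is plain text, the current error log can also be opened directly from a command prompt with any editor; for example, using Notepad and the default installation path given above:

    notepad "C:\Program Files\Microsoft SQL Server\Mssql\Log\Errorlog"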

Current Activity Window

The Current Activity window is located in SQL Server 2000 Enterprise Manager. It provides administrators with a graphical display of the following information:

- Current user connections and locks
- Process number, status, locks, and commands being run by active users
- Objects that are locked, and the kinds of locks that are present

An administrator can use the Current Activity window to view information about a selected process, send a message to a user who is currently connected to an instance of SQL Server, or terminate a selected process.

SQL Profiler

The SQL Profiler is used to monitor instances of SQL Server 2000. It is a graphical tool that can capture events and store them for later analysis. An event is defined as any action generated within the SQL Server. Events include logon connections, failures, and disconnections; the start or end of a stored procedure; Transact-SQL SELECT, INSERT, UPDATE, and DELETE statements; and errors written to the SQL Server error log.

The user determines what events to monitor using SQL Profiler. A template is created that defines the data to be collected. The data is then gathered by running a trace on the events that were defined in the template. SQL Profiler is started from SQL Server 2000 Enterprise Manager and can be used to monitor the performance of an instance of SQL Server 2000. The Profiler also provides the capability to debug Transact-SQL statements and stored procedures, and identify slow-executing queries.

SQL Profiler can be used during the development phase of a project to test SQL statements and stored procedures. Events can be captured on a production system and replayed on a test system for debugging purposes. In this way the administrator can troubleshoot the problem without introducing interference to the production system. Administrators can also use SQL Profiler to audit events on each instance of SQL Server 2000, including logon attempts and the success and failure of permissions in accessing statements and objects.


System Monitor

Just as it can for Windows 2000 and IIS 5.0, System Monitor can monitor parameters for instances of SQL Server 2000. Counters can be configured to monitor system performance and track resource usage. The following counters will be helpful in monitoring your instances of SQL Server 2000.

Monitoring Disk Activity

SQL Server 2000 uses Windows 2000 I/O calls to perform its reads and writes. Windows 2000 actually performs the I/O calls, but SQL Server 2000 manages how and when disk I/O is performed. Here are counters of interest for monitoring disk activity for SQL Server 2000:

Physical Disk: %Disk Time  The percentage of elapsed time when the selected disk drive is busy servicing read or write requests. If this number is high (90% or more) you may want to look at the Physical Disk: Avg. Disk Queue Length counter to see if system requests are waiting for disk access.

Physical Disk: Avg. Disk Queue Length  The average number of both read and write requests that were queued for the selected disk during the sample interval. This counter should be at no more than 1.5 times the number of spindles in the physical disk. If this counter and the Physical Disk: %Disk Time counter are constantly high, you may need to install a faster disk drive, move files to another disk, or add another disk to a RAID array.

Physical Disk: Current Disk Queue Length  The number of requests outstanding on the disk at the time the performance data is collected.

Memory: Page Faults/sec  The overall rate at which faulted pages are handled by the processor, measured in numbers of pages faulted per second.

SQL Server: Buffer Manager: Page Reads/sec  The number of physical database page reads issued.

Monitoring CPU Usage

CPU rates that are outside of the normal range may indicate that additional CPUs need to be installed in the system. Such statistics can also mean that an application has been poorly designed and needs to be rewritten or tuned.

Processor: %Processor Time  The amount of time that the processor spends executing Windows 2000 or SQL Server 2000 commands (I/O requests). If this counter is high (in comparison to baseline) at the same time that the Physical Disk counter is high, a more efficient disk subsystem may be warranted. If you have multiple processors in the system, this counter can be monitored for each processor.

System: %Total Processor Time  The total processing time for all processors in the system.

Processor: %User Time  The amount of time that the processor spends executing user processes such as SQL Server.

System: Processor Queue Length  The number of threads that are waiting for processor time. If threads require more processor cycles than are available, a bottleneck occurs. If this happens, the system may need a faster processor.

Monitoring Memory Usage

As with all Windows applications, having sufficient memory is critical when using SQL Server 2000. The following counters can be used to ensure that your system has enough memory resources, and/or that it is not consuming too much system memory for a particular job.

Memory: Available Bytes  The amount of memory that is currently available for all processes running on the system. If the value of this counter is low, additional memory may need to be added to the system.

Memory: Pages/sec  Can indicate the number of pages that have been retrieved from disk due to hard page faults. It also can indicate the number of pages that have been written to disk in order to free working-set space due to page faults. A high number for this counter is indicative of excessive paging.

Process: Working Set  The number of bytes in the working set for SQL Server.

SQL Server: Buffer Manager: Buffer Cache Hit Ratio  The percentage of pages that were found in the buffer pool without having to incur a read from disk. This measure should be above 90%.

SQL Server: Buffer Manager: Total Pages  The number of pages in the buffer pool.

SQL Server: Memory Manager: Total Server Memory (KB)  The total amount of dynamic memory the server is currently consuming. If this counter is high when compared to the physical memory in the system, more memory may be required.

Monitoring with Application Center 2000

Application Center 2000 uses a four-step approach to monitoring data. The steps are data generation, data logging, data query, and data viewing. Data generation is done through Application Center events, Health Monitor data collectors, Windows events, and performance counters. The generated data is logged using the SQL desktop engine installed on each cluster member. Data that has been stored is queried using Application Center's built-in information views. Finally, the queried data is viewed through the Application Center monitoring screens.

Microsoft Exam Objective

Design an application management and monitoring strategy. Considerations include detection and notification of application failure.

Application Center continuously monitors information about the functionality of the cluster and its members. The four aspects that are monitored are events, performance counters, server health (by using Microsoft Health Monitor 2.1 data collectors), and member status. Data generated from this monitoring provides information on the availability of your cluster and its members and any problems that might be developing.

Windows 2000 events, WMI, Health Monitor, and services and applications within Application Center can generate event data. Performance counters provide administrators with an indication of how specific resources are being used. Application Center logs several default performance counters that can provide an overall view of cluster performance. These logs can be viewed using the Application Center Performance view.

Health Monitor

One of the core components of Application Center's monitoring capabilities is the Health Monitor data collectors. Data collectors are objects that collect data for a particular process or service. Following are the types of data collectors that can be used with Health Monitor to monitor cluster status and data information:

- Performance monitor
- Service monitor
- Process monitor
- Windows Event Log monitor
- COM+ application
- HTTP monitor
- TCP/IP monitor
- Ping (ICMP) monitor
- WMI Instance
- WMI Event Query
- WMI Data Query

The performance, service, process, Windows Event Log, and COM+ application monitors all use WMI.

Data collectors can be of two types in Application Center: local and global monitors. Local monitors are associated with a particular cluster member and are not synchronized to the other cluster members. Global monitors are synchronized across the entire cluster to all cluster members. Application Center creates global monitors by default.


The data that is collected by Health Monitor is compared against userdefined thresholds. When a threshold limit or condition is met, a status change is triggered in the particular data collector, and an action is performed depending on the conditions that were set. The purpose of a threshold is to evaluate the data returned by a data collector and to perform an action contingent on this data. A threshold is basically a monitoring rule that is applied to what is collected by the data collector. Several types of actions can be initiated as the result of a threshold being met: 

Notification through an e-mail alert
Restarting of the server
Running of an executable or batch file
Generation of a Windows 2000 event
Data written to a log
Running of a script written in VBScript or JScript
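To make the "executable or batch file" action concrete, here is a minimal sketch of a batch file an administrator might attach to a threshold that fires when the Web service hangs. The service name (W3SVC, the World Wide Web Publishing Service) and the log path are assumptions for the example:

    @echo off
    rem Hypothetical Health Monitor threshold action:
    rem restart the Web service and note the restart in a log file.
    net stop w3svc
    net start w3svc
    echo %DATE% %TIME% threshold action fired; W3SVC restarted >> C:\Monitoring\actions.log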

Ensuring System Health by Using Application Center 2000

Westeck recently upgraded their network infrastructure to Windows 2000. The company expects to double the number of servers in use within the next 12 months. Currently, Westeck is using an n-tier configuration, with the front-end web servers running IIS 5 and utilizing Active Server Pages. The back-end servers are running SQL Server 7.0; they are scheduled for an upgrade to SQL Server 2000 within the next 2 months.

Westeck has decided to install Microsoft Application Center 2000 for its capabilities in advanced network load balancing, component load balancing, replication, and health monitoring. Application Center will enable the company’s IT staff to independently scale each tier of Westeck’s systems as the need arises. By using Application Center 2000, they can be assured of the real-time health of system components through constant monitoring. The tools provided with Application Center generate performance reports and detect software/hardware malfunctions. Application Center 2000 can also be used to implement diagnostic and troubleshooting procedures, to alert local and remote monitoring centers, and to distribute software with little or no onsite intervention.


Disaster Recovery

No matter how much prior planning goes into implementing a network, no system will ever be completely foolproof. Preventive measures built into a network’s design can, however, make it as resistant to failure as possible. For this discussion, consider a disaster to be any incident that makes it impossible for users to access resources on the website. Your preparation for possible disaster scenarios should include plans and procedures for recovery from not only an operating system failure, but also the failure of system hardware and applications. System and configuration information should be collected for all hardware and software resources. This data should be clearly documented and safely stored in a location that is easily accessible by all who may need it. Keeping a record of past maintenance actions on hardware and software configurations is also key, not only for helping to prevent failures but also for aiding disaster recovery.



Microsoft Exam Objective

Design a disaster recovery strategy.

To develop a disaster recovery strategy, first assess your current network. Some of the questions you should answer include the following:

Where are the possible points of failure in my system?
What data is critical, and where is it located?
How and when should system backups be performed?
Should backups be stored offsite?
Who is responsible for performing backups?

All recovery procedures should be documented with clear, step-by-step instructions. These procedures should be updated regularly to reflect system changes as they occur. Windows 2000 includes several tools that can play a role in your overall disaster recovery strategy. First we’ll look at the Windows 2000 Backup and Recovery Tool. Then we’ll examine the Windows 2000 startup options that are available in case the computer system will not start. Finally, we’ll look at two sets of disks that should be an integral part of every recovery strategy: Windows 2000 Setup disks and startup disks.

Windows 2000 Backup and Recovery Tool

The Windows 2000 Backup and Recovery Tool (Figure 11.10) contains three wizards that walk you through the procedures necessary to ensure that data can be restored if needed.

The Backup Wizard is used to create a backup of data that can be used for recovery after a disk failure.
The Restore Wizard is used to restore data that has previously been backed up.
The Emergency Repair Disk Wizard helps you create ERDs for use in repairing and restarting the Windows OS if it is damaged.

This utility can also be used to back up the computer’s system state. The system state includes the Registry, the COM+ Class Registration database, and the system boot files—all of which are critical for system operations. You start the Windows 2000 Backup and Recovery Tool from a command line by typing ntbackup. Data can be backed up to a tape drive, removable disks, or a library of disks or tapes that are organized into a media pool.

FIGURE 11.10  The three wizards available in the Windows 2000 Backup and Recovery Tool
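For scheduled or scripted backups, ntbackup also accepts its job parameters on the command line. A representative sketch (the job name and target backup file here are placeholders) that backs up the system state to a file might look like this:

    ntbackup backup systemstate /j "Nightly system state" /f "D:\Backups\sysstate.bkf"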


Emergency Repair Disks (ERDs)

The Emergency Repair Disk Wizard creates an ERD that can be used to help repair system files that have become corrupted or accidentally deleted. You can also use ERDs to repair your system startup environment or the partition boot sector of the system boot volume. An ERD should be created immediately after you’ve completed the system installation, as well as whenever any changes are made to the system. ERDs are computer specific, so you have to make one for each Windows 2000 computer in your network.

The ERD in Windows 2000 does not copy the Registry files as the ERD in Windows NT 4.0 does. Registry files are located in the systemroot\repair\regback directory if the system state data has been backed up.

When you use the ERD in a repair operation, you are presented with two options. The first option, Manual Repair, allows you to select from a list of repair tasks; only advanced users should use this option. The second option, Fast Repair, attempts to automatically repair problems with system files, the boot sector, and the startup environment. Fast Repair is far easier to perform than the Manual option because it does not require user input.

Windows 2000 Startup Options for Recovery

Windows 2000 includes several startup features that can help put things right when the system does not start. These include the advanced startup options (Last Known Good Configuration and the various forms of Safe Mode) and the Recovery Console.

Last Known Good Configuration

When you select Last Known Good Configuration at system startup, Windows 2000 attempts to boot the operating system using the most recent Registry information or settings (the ones that were saved at the last system shutdown). This option will not aid in solving problems resulting from corrupted or incorrect drivers; it is only useful in correcting configuration errors.


Startup Options

Windows 2000 includes several startup options that can be used in the event that the system will not start:













Safe Mode starts Windows 2000 using only the most basic files and drivers. No networking capabilities are available.
Safe Mode with Networking is just like Safe Mode except that it includes networking support.
Safe Mode with Command Prompt loads only the most basic files and drivers and displays only a command prompt.
Enable Boot Logging creates a boot log of the devices and services that have been loaded. The log file is named Ntbtlog.txt and is stored in the system root directory.
Enable VGA Mode starts Windows 2000 with the basic VGA driver. This is a good option if you have installed a new video driver that is preventing the operating system from starting.
Directory Services Restore Mode is used to restore the Active Directory on a domain controller. This option is not available on Windows 2000 member servers.
Debugging Mode can be used to send debug data through a serial cable to another computer.

Recovery Console

The Recovery Console was introduced in Windows 2000. If you are unable to start your Windows 2000 system using the advanced startup options, the Recovery Console should be your next consideration. The Recovery Console is a command-line utility with which administrators can execute basic commands for identifying and repairing files and drivers that may be causing problems. You must be logged on as an administrator to use the Recovery Console. The Recovery Console can be used to perform the following functions:

Stop and start services
Read and write data to a local disk
Copy data from a CD-ROM or floppy disk
Format a drive
Repair the boot sector
Repair the master boot record

The Recovery Console can also be used to copy the Registry, which is located in the systemroot\repair folder, to the systemroot\system32\config folder. Table 11.2 lists the commands available in the Recovery Console. Enter a specific command with the /? switch to get a description of the command, including its syntax and parameters.

TABLE 11.2  Recovery Console Commands

ATTRIB        DISKPART    LOGON
BATCH         ENABLE      MAP
CD/CHDIR      EXIT        MD/MKDIR
CHKDSK        EXPAND      MORE
CLS           FIXBOOT     RD/RMDIR
COPY          FIXMBR      REN/RENAME
DEL/DELETE    FORMAT      SET
DIR           HELP        SYSTEMROOT
DISABLE       LISTSVC
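For example, a session to repair a damaged boot sector and Master Boot Record might look something like the following sketch (the prompt and drive letter depend on the installation you log on to):

    C:\WINNT> fixboot c:
    C:\WINNT> fixmbr
    C:\WINNT> exit

EXIT quits the Recovery Console and restarts the computer, so the repaired boot sector can be tested immediately.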

Setup and Startup Disks

You can use the Windows 2000 Setup disks to recover from a system failure. They can be used to start a Windows 2000 installation, the Recovery Console, or the Emergency Repair process. Setup disks are crucial for accessing a system that cannot be started using a Windows 2000 CD. Setup disks are created using the makebt32 utility, located in the \bootdisk subdirectory on the Windows 2000 Server installation CD. To create a complete set, you’ll need a total of four blank 1.44MB floppy disks. The disks can be made on any computer running any version of Windows or MS-DOS, but they must be made from the CD for the specific Windows version for which they will be used.

Startup disks are computer specific and can be used to access a system that will not boot because of a faulty startup sequence. You can use startup disks to access drives that are configured with either FAT or NTFS partitions. A startup disk is a good tool for help with problems involving a corrupted Master Boot Record, a corrupted boot sector, viruses, and missing or corrupted NTLDR or NTDETECT.COM files. A startup disk should be made for every computer in your network. It is created by copying the NTLDR, NTDETECT.COM, and BOOT.INI files to a floppy disk. The startup disk should also include device drivers for the system’s hard disk.

If you are using a SCSI system and the SCSI BIOS is not enabled, you’ll also need to add NTBOOTDD.SYS to the startup floppy.
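Putting the two procedures together, the following is a representative sketch (assuming the installation CD is drive D: and the boot files are in the root of drive C:; hidden and system attributes may need to be cleared with attrib before the copies succeed):

    D:\> cd \bootdisk
    D:\BOOTDISK> makebt32 a:

    C:\> attrib -r -s -h ntldr
    C:\> attrib -r -s -h ntdetect.com
    C:\> copy ntldr a:\
    C:\> copy ntdetect.com a:\
    C:\> copy boot.ini a:\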

Summary

In this chapter we looked at several monitoring features that can be used to monitor performance on your highly available websites. We started with a look at some of the tools that can be used to monitor servers running Windows 2000. The first tool we looked at was the HTTP Monitoring Tool, which provides administrators with the capability to gather information on and monitor web servers. Next we looked at Network Monitor, which is basically a software-based network packet analyzer that can be used to capture real-time network traffic. Network Monitor is an excellent tool for diagnosing and troubleshooting network problems on local area networks. The Netstat command-line tool can be used to display current TCP/IP connections. One of the benefits of using Netstat is its capability to provide the user with a list of ports that are listening on the server.

Several command-line utilities can be used with Windows NT and Windows 2000 to examine and monitor individual processes running on servers. These include Process Explode, Process Viewer, Process Thread and Status, and Process Tree. Both Process Explode and Process Viewer let you change thread priority and view details on memory usage. Process Thread and Status shows the status of each individual process that is running on a system. Process Tree allows administrators to query the process tree; it can also be used to kill running processes.

Windows Management Instrumentation (WMI) is integrated into Windows 2000 and is used to monitor, track, and control computers and network devices through a web browser. WMI is Microsoft’s implementation of the WBEM initiative for management of computers through standard web browsers. WMI can be used with the Performance Tool to gather performance data about a system.

We looked at two tools that administrators of web servers will find very beneficial, the Web Capacity Analysis Tool (WCAT) and the Web Application Stress (WAS) tool. WCAT offers a way to test the capacity of IIS servers and clients. It supplies 40 content and workload simulations that can be used in a variety of ways to gauge server and client response under certain workloads. WAS simulates web activity through a variety of workload scenarios, in order to appraise the performance of a web server or a web application. Detailed reports can be produced for these stress tests, for later analysis.

The System Monitor is part of the collection of Windows 2000 Performance Tools and is used to monitor and troubleshoot system resources with the use of performance counters. Information can be collected using System Monitor and saved for later analysis. Performance Logs and Alerts, another of the Performance Tools, lets you save logging information in a format that can be integrated into SQL Server 2000 databases. Alarms can be configured to alert users when predefined thresholds have been met. Performance counters can be used to monitor Windows 2000, IIS 5, and SQL Server 2000. The first step is to establish a baseline as the reference point against which all the collected data is compared.

Microsoft Application Center 2000 can be used to continuously monitor the status of clusters and their members. Application Center’s Health Monitor is a data collection utility that produces both cluster and data information. It includes several types of data collectors for system monitoring.

The last topic in this chapter was disaster recovery—moving forward after an event that prevents users from accessing the website’s resources. The first step to ensure against being disabled by a disaster is to develop a disaster recovery plan. The plan should accommodate both system hardware and software, and should be documented and stored in an easily accessible location. The Windows 2000 Backup and Recovery Tool will back up and restore system data and create Emergency Repair Disks (ERDs). Windows 2000 supplies several startup options for situations when your system does not boot. The Recovery Console, new to Windows 2000, is a command-line utility administrators can use to perform emergency functions such as stopping and starting services, and repairing the master boot record and the boot sector. Startup and setup disks are also key components of your plan for recovery.

Exam Essentials

Identify tools available in Windows 2000 to monitor system performance.  Windows 2000 offers several items for monitoring the performance of your website. These include the Performance Tools, Windows Task Manager, Windows Management Instrumentation (WMI), Windows 2000 Event Viewer, Network Monitor, and the HTTP Monitoring Tool.

Identify the two snap-in tools contained in the Performance Tools.  System Monitor, and Performance Logs and Alerts, are the two snap-in tools contained within the Performance Tools. System Monitor collects and displays data about the usage of hardware resources and activities on the computer. Performance Logs and Alerts collects and saves performance data from local and remote computers.

Identify the tools for simulating capacity and workload on web servers.  The Web Capacity Analysis Tool (WCAT) helps you test the capacity of web servers and clients. The Web Application Stress (WAS) tool simulates workloads or web activity on web servers.

Identify tools for monitoring SQL Server 2000.  Transact-SQL statements, the SQL Server Error Log, and SQL Profiler will all monitor instances of SQL Server 2000.

Know how to use Health Monitor in Microsoft’s Application Center 2000.  Health Monitor is one of the core components of Application Center 2000 monitoring. It includes data collectors that can be used to monitor cluster and data information.

Know which items should be part of an overall recovery strategy.  Some of the tools appropriate for overall disaster recovery include Windows 2000 Backup and Recovery, the Recovery Console, and Emergency Repair Disks (ERDs). Windows 2000 Setup disks and computer-specific startup disks should be created and kept on hand.


Key Terms

Before you take the exam, be certain you are familiar with the following terms:

Application Center 2000
CIM Object Manager
counter logs
Current Activity window
data collectors
Emergency Repair Disk (ERD)
Event Viewer
global monitors
HTTP Monitoring Tool
IIS Object Cache
Last Known Good Configuration
local monitors
Netstat (netstat.exe)
Network Monitor
object repository
perfmon
Performance Logs and Alerts
Performance Tools
Process Explode (pview.exe)
Process Thread and Status (pstat.exe)
Process Tree
Process Viewer (pviewer.exe)
Recovery Console
Safe Mode
SQL Error log
SQL Profiler
startup disks
System Monitor
system state
Task Manager
trace logs
Transact-SQL statements
Web Application Stress (WAS) tool
Web Capacity Analysis Tool (WCAT)
Web-Based Enterprise Management (WBEM)
Windows 2000 Backup and Recovery Tool
Windows 2000 Setup disks
Windows Management Instrumentation (WMI)
working set


Review Questions

1. Sally manages her company’s IT resources. Several users have come to her complaining that they are unable to print. Their documents are going to the print queue but they never come out on the printer. The same users are also unable to access websites outside of their intranet. What tool can Sally use to monitor network traffic on the LAN to try to determine the problem?

A. Netstat
B. Network Monitor
C. Process Viewer
D. Task Manager

2. Jordan has been trying unsuccessfully to connect to a remote computer using a third-party remote-access program. It seems that the program is trying to connect, but the logon screen never appears. Jordan knows that the program connects on port 1650, and he’s wondering if the computer is not listening on that port for some reason. What can he do to determine if the remote computer is set up properly and is listening on the correct port for the remote-access software?

A. Telnet into the remote server and run netstat -an from a command prompt.
B. Purchase another third-party utility to use in connecting to the remote computer.
C. Reinstall the remote-access software on Jordan’s computer.
D. Telnet into the remote server and run netstat -r from a command prompt.


3. What command-line tool provides the same functionality as Performance Monitor?

A. Task Manager
B. Process Viewer
C. Process Explode
D. Ptree

4. Process Viewer and Process Explode provide the same analysis functionality, with one exception. What is that exception?

A. Process Explode cannot be used to view processes on remote computers.
B. Both Process Explode and Process Viewer view processes on local computers.
C. You can kill processes in Process Explode.
D. Process Viewer provides details about memory usage.

5. Carlos has acquired several computers that already have Windows 2000 and IIS 5 installed and configured for website use. He wants to add them to his existing web farm but is unsure of the machines’ capability since they came already configured. He wants to test them before he integrates them into his existing network environment. What tool can Carlos use to test the capacity of the servers to ensure that they will be able to handle the expected load of client requests?

A. System Monitor
B. SQL Profiler
C. WAS
D. WCAT


6. With which of these tools can you test a website for memory and threading issues?

A. WAS
B. Network Monitor
C. HTTP Monitoring Tool
D. Process Viewer

7. You’re having problems with one of your applications. It is constantly locking up and blocking use of several other programs that are open. What can you do to terminate the program that is locking up?

A. Go to the Applications tab in Task Manager, highlight the program that is causing problems, and select New Task.
B. Go to the Performance tab in Task Manager, highlight the problem program, and select End Task.
C. Go to the Applications tab in Task Manager, highlight the program that is causing problems, and select End Task.
D. Reboot the computer to shut down the program that keeps locking up.

8. Willem is concerned that the IIS working set on his web server is using the majority of the CPU time. Response time on the server has been extremely slow, and dialog boxes are taking a long time to open. Where can Willem go to check on how much process time is being used by the IIS working set?

A. The Applications tab in Task Manager
B. The Processes tab in Task Manager
C. The Performance tab in Task Manager
D. System Monitor


9. Which of the following actions can be initiated in Health Monitor when a data collector threshold has been met or exceeded? Choose all that apply.

A. A notification can be sent through e-mail.
B. The server where the threshold was met can be restarted.
C. A batch file can be executed.
D. A Windows 2000 event can be generated.

10. System Monitor has been running on your web server; you are monitoring several parameters. You suspect that the CPU is causing performance problems and may need to be upgraded or replaced. What performance counter should you monitor to determine if you need to replace the processor in your server?

A. Memory: Available Bytes
B. Page File Bytes: Total
C. Memory: Pool Paged Bytes
D. Processor: %Processor Time
E. System: Processor Queue Length


Answers to Review Questions

1. B. Sally can use Network Monitor to capture network traffic on the LAN. This will provide her with the information she needs to diagnose and troubleshoot the problem. Netstat is a command-line utility that can be used to display TCP/IP statistics. Process Viewer (pviewer) is a Windows-based tool that can be used to view running processes on a server; it also can be used to stop individual processes and change their priority. Windows Task Manager supplies dynamic information about applications and processes running on a computer.

2. A. Jordan can telnet into the remote computer and run the netstat command-line tool with the -an switch. He’ll get a list of the ports that are listening on that computer and can then see if port 1650 is actively listening. Jordan doesn’t need to reinstall the remote-access software at this time, but he might need to later if the software on the remote computer is in fact listening on the correct port. Telnetting to the remote computer and running netstat with the -r option will present the routing table on that computer, not a list of listening ports.

3. C. Process Explode offers the same monitoring features as Performance Monitor. The only difference is that Process Explode is already configured when you start it; you don’t have to load performance counters as you do with Performance Monitor. The Windows Task Manager produces dynamic information about running processes and applications on a computer. Process Viewer supplies data about the individual processes on a computer; it also can be used to stop running processes. The Ptree tool queries the process inheritance tree.

4. A. Process Viewer and Process Explode have several similarities except for one: Process Explode cannot be used to monitor remote computers, whereas Process Viewer can. Both Process Viewer and Process Explode view processes on local computers. They both can be used to kill processes, and they both provide information on memory usage.


5. D. Microsoft’s Web Capacity Analysis Tool (WCAT) tests websites before they are integrated into a networking environment. This tool includes a series of workload simulations that can test IIS’s responses to different client requests. System Monitor is used to measure the performance of Windows 2000 computers. SQL Profiler is a graphical tool that is used to monitor instances of SQL Server 2000. WAS (the Web Application Stress tool) simulates web activity on web servers.

6. A. The Web Application Stress tool (WAS) gives administrators the capability to perform both performance and stress testing on their web servers. WAS will determine the maximum requests per second that the site will be able to handle. The stress-testing tool also has a stability test that can be run to check on memory and threading issues. Network Monitor is used to capture and analyze data packets on a LAN. The HTTP Monitoring Tool gathers data and monitors websites. Process Viewer can be used to view information on processes that are running on both local and remote computers.

7. C. To stop the program that is causing problems, go to the Applications tab in Task Manager, select the program that is causing the problem, and select End Task. This should terminate the offending program without affecting other processes that are running on the computer.

8. B. In the Processes tab in Task Manager, the CPU column lists all processes that are running and the percentage of CPU time being used by each. One of the files that Willem can check from the IIS working set is inetinfo.exe. Looking at the CPU usage for this file will give an indication of how much processor time is being used by IIS.

9. A, B, C, D. Application Center 2000’s Health Monitor can perform several actions when data collectors have met a preset threshold. It can send a notification via e-mail. It can reboot the server that the threshold was set for. It can execute a batch file or script, and an event can be generated in Windows 2000 and logged to one of the event log files.


10. E. The System: Processor Queue Length counter provides the information you need if you are anticipating replacing the processor in your server. If the number of threads in the queue is greater than 2, there may be processor congestion, and an additional processor may be needed. The Memory: Available Bytes counter reveals the total amount of physical memory available to processes running on the system. The Page File Bytes: Total counter measures the current number of bytes that processes have used in the paging file. The Memory: Pool Paged Bytes counter measures the number of bytes in the paged pool. The Processor: %Processor Time counter measures the percentage of time that the processor is executing a non-idle thread.


Biotech Solutions


Take a few minutes to look over the information presented in this Case Study and then answer the questions at the end. In the testing room, you will have a limited amount of time—it is important that you learn to pick out the important information.

Background

Biotech Solutions is a bioengineering company based in the Midwest. It provides a full range of consulting services in the field of bioengineering. Biotech’s specialties include water quality control, habitat restoration, and erosion control. They have a staff of 20 engineers, all of whom are experts in their particular area. The company is researching the possibility of expanding its services to other parts of the United States. As part of the expansion plans, Biotech’s management has initiated a new program in which they are partnered with several universities in the Midwest. Together, the team will provide internships for ecological, civil, environmental, and water resource engineers. The CEO of Biotech is excited about the possibility of Biotech’s becoming a training ground for engineers in the bioengineering field.

Current Environment

Biotech’s main office is located in Chicago. The firm recently acquired several smaller bioengineering companies and plans to combine them to form one large corporation. The main office consists of a Windows 2000 network with two domain controllers, three member servers, and 20 Windows 2000 Professional workstations. All three of the member servers are configured as web servers. One of the domain controllers is the DNS server, and the other provides DHCP services. All of the acquired companies have Windows NT 4.0 networks. Some of the smaller companies do not have domains; the majority of their networks are set up in a workgroup configuration. Two of the companies do not have a network set up at all. They have two Windows NT 4.0 servers that are configured as stand-alone servers, and several hundred Windows NT 4.0 workstations, also in a stand-alone configuration.


Business Requirements

Biotech management wants to combine all of the networking environments into one cohesive infrastructure. John is the IT manager of the principal company and has been given the responsibility of making sure that all of the networking resources from the new acquisitions are integrated seamlessly with those of the headquarters network.

Technical Requirements

Biotech Solutions will combine all of the computer systems into one cohesive Windows 2000 network. All networks at the newly acquired companies will have to be upgraded to Windows 2000, and additional hardware may be needed. The company wants to convert its existing networking environment into one modeled after Windows DNA. Management anticipates gaining scalability and high performance by making these changes.

Questions

1. John, the IT manager, is making the rounds of several of the newly acquired companies to get a feel for what resources he is going to need in order to combine them with the existing infrastructure at the corporate headquarters. When he meets with one of the acquisitions that has recently upgraded from a Windows NT 4 to a Windows 2000 network, several of the employees mention having problems with their computers. Shelly, the company receptionist, has one application that is constantly locking up her computer, although it worked smoothly before her computer was upgraded to Windows 2000. She tells John that her machine occasionally stops responding to the keyboard and causes other applications running in the background to lock up. What can be done to terminate the offending application without having to reboot Shelly’s computer?

A. Shelly can use the Task Manager to terminate the application.
B. Shelly can ask John to reinstall the application that is giving her problems.
C. Shelly can use the Process Explorer to terminate the application.


2. Winston works in the accounting department and has noticed memory problems since the upgrade. His computer runs very slowly when he tries to work on several large financial worksheets. Winston is somewhat familiar with Windows 2000’s various tools for getting information about hardware resource utilization. He is familiar with the Performance tool but is unsure about which counters he should monitor to check on memory. What counters should John tell Winston to watch in order to identify possible memory issues?

A. Memory: Page Reads/sec
B. Internet Information Services Global: File Cache Flushes
C. System: Processor Queue Length
D. Processor: %Processor Time

3. Bernie added several new programs to his computer after it was upgraded to Windows 2000. He is experiencing extraordinarily slow response time and suspects it’s coming from one of the programs he installed. He has removed them one at a time through Add/Remove Programs but is still getting slow response. Bernie has rebooted his computer several times and once did a cold boot just to make sure that the cache had cleared on the system. John tells Bernie that some application on the computer is probably taking up most of the CPU time and he needs to find out which one it is. How can Bernie determine which application may be taking up excessive CPU time?

A. Open Task Manager, go to the Applications tab, and see which program has a status of “Not Responding.”
B. Open Task Manager, go to the Processes tab, and look at the PID column.
C. Open Task Manager, go to the Processes tab, and look at the CPU column.
D. Open Task Manager, go to the Processes tab, and look at the Mem Usage column.


4. Mariko is having the same problem as Bernie: Her computer is running very slowly after the upgrade to Windows 2000. She has looked at the Processes tab in Task Manager and can find nothing out of the ordinary. Mariko wants to set up monitoring on her computer so that she can create a report that samples the system every three minutes, to aid in determining what program may be causing her system to run so slowly. What tool can she use?

A. Task Manager
B. Performance Logs and Alerts
C. System Monitor
D. SMS

5. Since Biotech’s websites were upgraded from Windows NT 4.0 and IIS 4.0 to Windows 2000 and IIS 5.0, several users have complained that they are unable to connect to the corporate website. They’re getting an “HTTP 500/Server Too Busy” error. Ronnie, who manages the websites, has asked John to investigate why the servers are refusing connections. The servers appear to be functioning correctly, and there are no major errors in the event logs. How does John tell Ronnie to research the problem?

A. Reboot each web server.
B. Look at the Active Server Pages: Requests Queued counter on each web server.
C. Look at the Active Server Pages: Requests/Sec counter on each web server.
D. Look at the Web Service: Maximum Connections counter on each web server.
E. Look at the Network Interface: Bytes Total/sec counter on each web server.


6. Kendall provides IT support in one of the small companies just purchased by Biotech. Before the acquisition, he had been researching several monitoring tools to be used on the company’s computers. Kendall wants to run the list by John and discuss whether any of them will be beneficial. In the following list, match the monitoring tool in the left column with its description on the right. Some items may be used more than once.

Tool                         Description

HTTP Monitoring Tool         Displays data about processes
Network Monitor              Displays current TCP/IP connections
Netstat                      Displays protocol statistics
Process Explode              Monitors HTTP activity
Process Viewer               Captures real-time network traffic
Process Tree                 Can be used to change the base priority of a running process
Process Thread and Status    Displays all running processes and threads
                             Reveals the current count of system objects
                             Can be used to change the base priority of a process
                             May cause system degradation when run
                             Can be used to stop running processes
                             Queries the process tree
                             Software-based network packet analyzer


Answers

1. A. Shelly can terminate the offending application by going to the Applications tab in Task Manager, highlighting the nonresponsive application, and selecting End Task. This will terminate the application but will not affect any of the other programs running on that machine. John wants to explore the possibility of upgrading the application if the problem has been occurring only since her computer was upgraded to Windows 2000. It’s possible the application may have been written specifically for the Windows NT 4.0 environment.

2. A. Winston should monitor the Memory: Page Reads/sec counter. This will give him information on the number of disk reads. A high number for this counter indicates that the system is doing a lot of disk reads versus cache reads, and he may need to install additional memory.

3. C. Bernie can check the CPU column on the Processes tab in Task Manager. This column indicates the amount of the computer’s processing time being used by each process or application. If one particular process is using an excessive amount of CPU time, it may have a memory leak and need to be reinstalled or patched.

4. B. Mariko can configure Performance Logs and Alerts to monitor the system and create a report of its findings. By creating a baseline report, she can monitor the system’s performance over a period of time. She also can set up alerts to e-mail her if counters she has configured rise above or fall below a preset limit.

5. B. By monitoring the Active Server Pages: Requests Queued counter on each web server, Ronnie can see if the limit on the number of requests in the ASP queue has been reached. That’s what will result in the “HTTP 500/Server Too Busy” error message.


6.

Tool                         Description

HTTP Monitoring Tool         Monitors HTTP activity

Network Monitor              Captures real-time network traffic
                             Software-based network packet analyzer
                             May cause system degradation when run

Netstat                      Displays current TCP/IP connections
                             Displays protocol statistics

Process Explode              Reveals the current count of system objects
                             Can be used to change the base priority of a process

Process Viewer               Displays data about processes
                             Can be used to stop running processes
                             Can be used to change the base priority of a running process

Process Tree                 Queries the process tree

Process Thread and Status    Displays all running processes and threads

Glossary


Numbers

100Base-T  A networking standard that supports data transfer rates up to 100 Mbps. Often called Fast Ethernet.

1000Base-T  A specification for Ethernet that supports 1-Gbps data transfer over distances of up to 100 meters, using four pairs of CAT-5 balanced copper cable.

10Base-T  A networking standard that supports data transfer rates up to 10 Mbps using twisted-pair cable with maximum lengths of 100 meters.

A

access control entry (ACE)  An item used by the operating system to determine resource access. Each access control list (ACL) has an associated ACE that lists the permissions that have been granted or denied to the users and groups listed in the ACL.

access control list (ACL)  An item used by the operating system to determine resource access. Each object (such as a folder, network share, or printer) in Windows 2000 has an ACL. The ACL lists the security identifiers (SIDs) contained by objects. Only those identified in the list as having the appropriate permission can activate the services of that object.

access token  An object containing the security identifier (SID) of a running process. A process started by another process inherits the starting process’s access token. The access token is checked against each object’s access control list (ACL) to determine whether or not appropriate permissions are granted to perform any requested service.

ACE  See access control entry.

ACL  See access control list.

active/active cluster  A cluster in which all cluster partners are actively involved with client traffic. Failovers in an active/active cluster are much quicker than in active/passive clusters. In active/active clusters, each active node in the cluster is busy doing something (such as file and print services)—one node isn’t simply waiting for the other to fail over.


Active Directory  A directory service available with the Windows 2000 Server platform. The Active Directory stores information in a central database and allows users to have a single user account (called a domain user account or Active Directory user account) for the network.

Active Directory replication  The process of ensuring that changes made to a domain controller are replicated to all domain controllers within a domain.

Active Directory user account  A user account that is stored in the Windows 2000 Server Active Directory’s central database. An Active Directory user account can provide a user with a single user account for a network. Also called a domain user account.

active/passive cluster  A cluster in which only one partner is actively involved with client traffic. Failovers in an active/passive cluster occur much more slowly than in active/active clusters.

Active Server Pages (ASP)  A specification for creating dynamic web pages that uses a mixture of HTML, scripts, and other programming components. Active Server Pages end with an .asp extension.

ActiveX Data Objects (ADO)  A data-access interface that is used to communicate with OLE DB data sources.

adapter  Any hardware device that allows communications to occur through physically dissimilar systems. Usually refers to peripheral cards that are permanently mounted inside computers and provide an interface from the computer’s bus to another medium such as a hard disk or a network.

Administrator account  A Windows 2000 special account that has the ultimate set of security permissions and can assign any permission to any user or group.

Administrators group  A Windows 2000 built-in group that consists of Administrator accounts.

ADO  See ActiveX Data Objects.

affinity  The relationship between a client and a server in a Network Load Balancing (NLB) cluster. It ensures that requests from specific clients are handled by the same server. The three affinity settings are None, which directs client requests to servers based on load weight; Single, which causes session state to be maintained for a particular client IP address; and Class C, which maintains session state as long as all clients are in the same class C subnet. See also session state.

aggregate bandwidth  The sum total of the bandwidth users accumulate when they dial into an RRAS system. Suppose, for example, that you have twelve 56K modems hooked up to your RRAS server. If five users dial in simultaneously and manage to connect at the full 56K, they’re taking up 280K of aggregate bandwidth on the network.

alert  A system-monitoring feature that is generated when a specific counter exceeds or falls below a specified value. Through the Performance Logs and Alerts utility, administrators can configure alerts so that a message is sent, a program is run, or a more detailed log file is generated.

anonymous access  An authentication method that supports Anonymous user access to websites.

anonymous logon  Anonymous logons occur when users gain access through special accounts, such as the IUSR_computername and TsInternetUser user accounts.

answer file  An automated installation script used to respond to configuration prompts that normally occur in a Windows 2000 installation. Administrators can create Windows 2000 answer files with the Setup Manager utility.

APIPA  See Automatic Private IP Addressing.

Application Center cluster  Three types of clusters can be created using Application Center 2000: Web clusters, which host websites and run COM+ applications locally; COM+ application clusters, which use CLB to distribute component activations to cluster members; and COM+ routing clusters, which are used when COM+ requests are not initiated through Application Center.

Application Center 2000  A deployment and management tool that is part of Microsoft’s .NET initiative. It is used for creating, deploying, and managing web-based and component-based applications.

application filtering  Application filtering examines data packets before allowing a connection. Application-filtering firewalls maintain complete information on connection state and sequencing, and they generate audit records for use by administrators to monitor attempts at security policy violations.


application image  Used in Application Center 2000 to manage and deploy clusters. An application image consists of all the required software resources for a website or COM+ application. It can include web content and ASP applications, COM+ applications, virtual sites, file-system directories and files, global ISAPI filters, and any additional content that may need to be deployed.

application layer attack  In a client-server environment, the application layer is used primarily by end users and developers to pass requests to the presentation layer. Several types of exploits target this layer specifically, including Code Red, Code Blue, NIMDA, and directory traversal. One consideration for firewall implementation on a network should be protection against attacks such as these.

Application log  A log that tracks events that are related to applications running on the computer. The Application log can be viewed in the Event Viewer utility.

A resource record  The most common type of resource record used by DNS. It maps a host name to an IP address.

array  A logical disk that consists of multiple physical disks.

array policy  An array policy can include protocol rules, packet filters, and web and server publishing rules that are defined at the array level for all ISA servers in an array.

asynchronous transfer mode (ATM)  A networking methodology that uses discrete 53-byte cells instead of variable-length packets. ATM is useful for voice and video circuits as well as data. Because of the extraordinary speeds that can be attained with ATM, it’s in extensive use by telecommunication companies, ISPs, and NSPs.

ATM  See asynchronous transfer mode.

auditing  The process of tracking actions that are performed on a server or network.

audit policy  A Windows 2000 policy that tracks the success or failure of specified security events. Audit policies are set through Local Computer Policy.


authentication  The process required to log on to a computer. Authentication requires a valid username and a password that exist either in the local accounts database or in Active Directory. An access token is created if the information presented matches the account in the database.

authorization  The access to objects that is given to users and groups once they have been authenticated by Windows 2000. Authorization is determined through Discretionary Access Control Lists (DACLs).

automatic allocation  The process of automatically providing TCP/IP configuration information: the IP address, default gateway, primary and secondary WINS servers, DNS servers, domain name, and so forth.

Automatic Private IP Addressing (APIPA)  A method for clients to automatically obtain IP configuration information without requiring manual entries or using a DHCP server. APIPA uses a reserved range of IP addresses and is used by both the Windows 2000 and Windows 98 operating systems.

B

back end  The back end of a website normally houses the business processes or data services being used by clients on the front end. Back-end systems are also known as stateful systems because they store client information during the session.

backout plan  The proposed sequence of steps to undo a change. All good change management includes a backout plan (though that backout plan may well say, “We can’t do anything to back this out once it’s implemented.”).

backup  The process of writing all the data contained in online mass-storage devices to offline mass-storage devices for the purpose of safekeeping. Backups are usually performed from hard disk drives to tape drives. Also called archiving.

Backup Operators group  A Windows 2000 built-in group that includes users who can back up and restore the file system, even if the file system is NTFS and they have not been assigned permissions to the file system. Members of the Backup Operators group can only access the file system through the Windows 2000 Backup utility. In order to access the file system directly, the user must have explicit permissions assigned.


backup type  A backup choice that determines which files are backed up during a backup process. Backup types include normal backup, copy backup, incremental backup, differential backup, and daily backup.

Backup Wizard  A wizard used to perform a backup. The Backup Wizard is accessed through the Windows 2000 Backup and Restore utility.

baseline  A snapshot record of a computer’s current performance statistics that can be used for performance analysis and planning.

basic authentication  An authentication method that transmits passwords as cleartext.

Basic Input/Output System (BIOS)  A set of routines in firmware that provides the most basic software interface drivers for hardware attached to the computer. The BIOS contains the boot routine.

Basic Rate Interface (BRI)  An ISDN (Integrated Services Digital Network) phone configuration in which two Bearer (B) channels can carry up to 64K of voice or data each, and one Data (D) channel carries synchronization and call-control information.

basic storage  A disk-storage system supported by Windows 2000 that consists of primary partitions and extended partitions. Drives are configured as basic storage after an operating system has been upgraded from Windows NT.

Batch group  A Windows 2000 special group that includes users who log on with a user account that is used only to run a batch job.

Berkeley Internet Name Domain (or Daemon) (BIND)  BIND is the original DNS implementation used to resolve host names to IP addresses, replacing the need for static hosts tables.

BGP  See Border Gateway Protocol.

BIND  See Berkeley Internet Name Domain.

binding  The process of linking together software components, such as network protocols and network adapters.

BIOS  See Basic Input/Output System.


boot  The process of loading a computer’s operating system. Booting usually occurs in multiple phases, each successively more complex until the entire operating system and all its services are running. Also called bootstrap. The computer’s BIOS must contain the first level of booting.

Border Gateway Protocol (BGP)  An Internet routing protocol that allows groups of routers in autonomous systems to share routing information.

bottleneck  A system resource that is inefficient compared with the rest of the computer system as a whole. The bottleneck can cause the rest of the system to run slowly.

BRI  See Basic Rate Interface.

broadcast  Simultaneously sending the same message to multiple recipients on a network.

business logic layer  The layer in an n-tier environment that is responsible for connecting the end user on the presentation layer end to the data that is stored on or accessible through the data access layer. Also known as the middle tier.

C

cache, caching  A speed-optimization technique that keeps a copy of the most recently used data in a fast, high-cost, low-capacity storage device rather than in the device on which the actual data resides. Caching assumes that recently used data is likely to be used again. Fetching data from the cache is faster than fetching data from the slower, larger storage device. Most caching algorithms also copy data that is most likely to be used next, and perform write-back caching to further increase speed gains.

Cache Array Routing Protocol (CARP)  A protocol developed by Microsoft to allow multiple servers to be arranged as a logical cache for distributed content caching. CARP is implemented in Microsoft Proxy Server and Internet Security and Acceleration Server (ISA Server).

canonical name  The name of a network object in the form defined by the rules of the directory. In Active Directory, the canonical name is in the form domain/container/sub-container/object common name. So, the canonical name of the user bsmith, in the OU called sales, in the domain called BigCompany.com, would be BigCompany.com/sales/bsmith.


capacity  The amount of traffic that a website will be able to adequately handle when a peak number of users are accessing it.

capacity planning  Determining a website’s capability to deliver content to clients at an acceptable speed.

Carrier Sense Multiple Access with Collision Detection (CSMA/CD)  A method for placing signals on baseband transmission networks. CSMA/CD is known as a contention method because computers contend for a chance to transmit data on the network. This is the standard access method for Ethernet networks.

central processing unit (CPU)  The main processor in a computer.

certificates  Also known as digital certificates. A certificate is an encrypted file that contains user or server information and is used to verify the authenticity of the holder.

CIDR  See Classless Inter-Domain Routing.

CIM Object Manager  The CIM Object Manager is a key component of Windows Management Instrumentation (WMI). It is the collection and manipulation point for objects that are stored in the object repository.

circuit filtering  Filters sessions rather than connections or ports (as with packet filtering). Circuit-filtering sessions are initiated by client requests and can support multiple simultaneous connections.

classful  In classful routing, routing decisions are made based on the concept of network address classes. IP host addresses are divided into three address classes: Class A, Class B, and Class C. Each class fixes the boundary between the network prefix and the host number at a different point within the 32-bit address.

classless  See Classless Inter-Domain Routing (CIDR).

Classless Inter-Domain Routing (CIDR)  Classless routing is a new method of IP addressing that replaces the old Class A, B, and C scheme, allowing a single IP address to refer to several IP addresses. You specify an IP address that has a slash followed by an ending number, as in 168.124.0.0/12. The ending slash and number are called the IP prefix; it represents the number of bits used to describe the network address.


CLB  See Component Load Balancing.

CLB software  One of the two main components of a Component Load Balancing arrangement. CLB software is used to load-balance a COM+ cluster. It has the responsibility of determining the order in which COM+ cluster members are used in the activation of COM+ components.

client  A computer on a network that subscribes to the services provided by a server.

client certificates  A certificate that is obtained for a client and used to digitally sign data before it is transmitted.

client-server  A computing and network architecture that relies on servers and clients. Servers handle applications, files, print sharing, and other large tasks. Clients use servers. In a client/server environment, the client may be a fat client, which offloads some of the work from the server, or a thin client, which does no work at all. Clients can vary from thin to fat based on how the developers created the system.

CluAdmin.exe  The executable for starting the Cluster Administrator.

Clusdisk.sys  The cluster disk driver used for Microsoft Cluster Services (MSCS).

Clusnet.sys  See cluster network driver.

cluster  Two or more computers set up to perform the same service in support of each other, for fault tolerance or load balancing.

Cluster Administrator  The primary tool for managing, maintaining, and troubleshooting clusters. See CluAdmin.exe.

Cluster API  An application interface that acts as the main interface to the Cluster Automation Server, the Cluster Administrator, and cluster-aware applications.

Cluster Automation Server (CAS)  The CAS facilitates the automation of cluster management through COM object scripting.

cluster-aware  Any client application that can be run on a cluster node and managed as a cluster resource. For example, an application that supports TCP/IP, stores data in a customizable location, and supports transactions is considered cluster-aware.


cluster capacity  Cluster capacity is calculated by measuring the capacity of each member node.

cluster database  A database containing information about all the physical and logical elements of the cluster. Each node in the cluster is responsible for maintaining a copy of the cluster database. See also cluster hive.

clustered disk  The external disk storage unit that is separate from the system disk on each node or server. Can be a RAID array or a storage area network (SAN).

cluster disk driver  Each node of the cluster runs an instance of Clusdisk.sys, the cluster disk driver. It is responsible for securing reservations on the disks for the local system.

cluster hive  Also known as the cluster database. Resides in the Windows 2000 Advanced Server Registry on each cluster node.

clustering  Technology that allows for two or more servers to appear as a single system. Clustering is normally used for highly available web solutions.

cluster network driver (Clusnet.sys)  Responsible for monitoring the network paths between cluster nodes; runs on each node in the cluster. Also routes messages between nodes and detects communications failures.

cluster nodes  Individual servers in a cluster. Each node is attached to a storage device and communicates with other nodes in the cluster through the interconnect.

Cluster services  See Microsoft Cluster Services.

cluster-unaware  Applications that cannot interact with the cluster server.

COM  See Component Object Model.

COM components  Physical files that contain classes defining COM objects. COM components can be stand-alone applications or reusable software components. See also Component Object Model, COM+.

COM port  Communications port; a serial hardware interface conforming to the RS-232C standard for low-speed, serial communications.


Component Object Model (COM)  A specification defined by Microsoft that is the foundation for various Microsoft technologies. It defines how objects interact within or between applications. See also COM components, COM+.

common name  The name by which a network object is commonly known.

COM+  Windows 2000 implementation of the Component Object Model (COM).

COM+ application cluster  Handles requests for COM+ components specifically, using CLB to distribute component requests to the servers in the cluster.

COM+ routing cluster  A cluster that uses Component Load Balancing to route COM+ components from a general cluster to a COM+ application cluster.

COM+ Services  Included in Windows 2000, COM+ Services is a suite of services based on COM and Microsoft Transaction Server (MTS).

Component Load Balancing (CLB)  An Application Center 2000 service that allows for the dynamic load-balancing of COM+ application components.

Compact Disk File System (CDFS)  A file system used by Windows 2000 to read the file system on a CD-ROM.

compression  The process of storing data in a form that takes less space than the uncompressed data.

Computer Management  A consolidated tool for performing common Windows 2000 management tasks. The interface is organized into three areas of management: System Tools, Storage, and Services and Applications.

computer name  A NetBIOS name, from 1 to 15 characters in length, used to uniquely identify a computer on the network.

convergence  In Network Load Balancing, convergence is the process by which each host in the cluster exchanges messages to determine the state of the cluster. During the process, a new load-distribution scheme is determined for the hosts that are sharing the network traffic.


convergence time  After a router failure, the time it takes for a group of routers to update their routing tables and to be in agreement with one another.

cookies  Special text files stored on a computer that are used to identify the user. Cookies are one way in which web developers can retain state information about a user.

copy backup  A type of backup that backs up selected folders and files but does not set the archive bit.

counter  A performance-measuring tool for tracking specific information regarding a system resource. All Windows 2000 system resources are tracked as performance objects, which include Cache, Memory, Paging File, Process, and Processor. Each performance object has an associated set of counters. Counters are set through the System Monitor utility.

Counter logs  Counter logs are one of the two types of logs provided with Performance Logs and Alerts. These logs sample data contingent on the performance counters and objects that have been selected for monitoring.

CPU  See central processing unit.

CSMA/CD  See Carrier Sense Multiple Access with Collision Detection.
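As a concrete illustration of the cookies entry above, the sketch below (Python's standard http.cookies module; purely illustrative, with a made-up session identifier) shows the kind of name/value state a web server hands to a browser:

    from http.cookies import SimpleCookie

    cookie = SimpleCookie()
    cookie["session_id"] = "abc123"   # hypothetical identifier for one user
    print(cookie.output())            # Set-Cookie: session_id=abc123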

Current Activity window  The Current Activity window is located in the SQL Server 2000 Enterprise Manager. It shows administrators the current user connections and locks, along with information on selected processes. The Current Activity window can be used to send a message to users who are currently connected to an instance of SQL Server 2000. It can also be used to terminate processes.

D

daily backup  A type of backup that backs up all the files that have been modified on the day that the daily backup is performed. The archive attribute is not set on the files that have been backed up.


data access layer  The layer in an n-tier environment that contains the application data to be accessed by clients from the presentation layer, or that contains connectivity to another data store located elsewhere on the network.

Datacenter Server  Windows 2000 Datacenter Server is the latest member of Microsoft's 2000 Server family. Datacenter Server provides the capability for four-node cluster solutions. It is a platform for applications that require high levels of scalability and reliability.

data collectors  Data collectors are objects that collect data for a particular process or service in the Health Monitor, which is part of Application Center 2000. There are two types of data collectors: local monitors and global monitors.

data compression  The process of storing data in a form that takes less space than the uncompressed data.

data encryption  The process of translating data into code that is not easily accessible, to increase security. Once data has been encrypted, a user must have a password or key to decrypt the data.

Data Encryption Standard (DES)  A 40-, 56-, or 128-bit encryption standard developed by the National Institute of Standards and Technology (NIST) and commonly used in the United States and Canada.

data layer  The layer in an n-tier environment that houses the data resources being accessed by clients in the presentation or web layer.

Data Link Control (DLC)  Every network card has a Data Link Control (DLC) address known as the DLC identifier (DLCI). Some topology protocols used in networks, such as Token Ring and Ethernet, use this address to identify nodes on the network. Others use the logical link layer, but ultimately all network addresses are translated to this DLCI address. DLC resides at layer 2 of the OSI model, the Data Link layer. Hewlett-Packard printers have long used DLC, and printing is where you'll predominantly find it used today. Both the SLIP and PPP protocols, as well, use the Data Link layer.

Data Link layer  In the Open Systems Interconnection (OSI) model, the layer that provides the digital interconnection of network devices such as network adapters, and the software that directly operates these devices.


delegated domain  A domain for which authority has been delegated to another DNS server.

delegated zones  A zone consists of a partial domain, a whole domain, or multiple domains. Delegated zones are those managed by a group that is subordinate to the root.

denial of service attack (DoS)  An attack often characterized by flooding a network with garbage to the point that people are unable to get to the network. The famous ICMP (ping) attacks of a few years ago are a good example. Compare with disruption of service.

deployment  In Application Center 2000, deployment refers to copying the content of the application images to each member of the cluster.

DHCP  See Dynamic Host Configuration Protocol.

DHCP relay agent  A routing protocol that forwards, in unicast, DHCP requests from a network that has no DHCP server to a network that does.

DHCP resource  A cluster resource used to deploy a DHCP server cluster.

DHCP server  A server configured to provide DHCP clients with all their IP configuration information automatically.

Dial-Up Networking (DUN)  A service through which remote users dial in to a network or the Internet (for instance, through a telephone or an ISDN connection).

differential backup  A backup type that copies only the files that have changed since the last normal (full) or incremental backup. It does not reset the archive bit on the files it backs up.

digest authentication  An authentication technology that provides more security than the basic authentication type. Passwords are encrypted before they are transmitted, rather than being sent in clear text as with basic authentication. Digest authentication is supported only on Windows 2000 domains.


disaster recovery (DR)  The process of restoring a network to the condition it was in prior to some sort of disaster, whether natural or caused by humans.

discretionary access control list (DACL)  A list that allows or denies permissions to specific users and groups.

disk duplexing  A fault tolerance method that is basically the same as disk mirroring except that a separate disk controller is used for each mirrored drive.

Disk Management  A Windows 2000 graphical tool for managing disks and volumes.

disk mirroring  Also known as RAID 1. In disk mirroring, an exact replica of a disk or volume is kept on a separate disk. Because all data written to the primary disk is also written to a secondary or mirror disk, mirroring makes only 50 percent of the total disk space usable. If one disk fails, the system uses data from the other disk.

disk partitioning  The process of creating logical partitions on the physical hard drive.

disk quotas  A Windows 2000 feature used to specify how much disk space a user is allowed to use on specific NTFS volumes. Disk quotas can be applied for all users or for specific users.

disk striping  Also known as RAID 0. Data is broken into blocks called stripes and written sequentially to all disks in an array.

disk striping with parity  Also known as RAID 5. Data and parity information are written across multiple drives in an array.

dispatching  Also known as software-based load balancing. Dispatching resembles hardware-based load balancing, except that it utilizes a single server as the main point of contact for clients. This server in turn retransmits requests to the other servers in the cluster.

disruption of service  An interruption of the service that is provided to network users. This disruption can occur within the network infrastructure, through the server services that are being provided (file, application, or print), or at some other point. A disruption of service is not the same as a denial of service (DoS) because the latter involves malice.
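The parity used in disk striping with parity is what lets a RAID 5 array survive a single drive failure. A minimal sketch (plain Python; real arrays do this in the controller hardware) shows how XOR parity rebuilds a lost stripe:

    # Two data stripes and their parity stripe (values are arbitrary).
    stripe_a = bytes([0x12, 0x34, 0x56])
    stripe_b = bytes([0xAB, 0xCD, 0xEF])
    parity = bytes(a ^ b for a, b in zip(stripe_a, stripe_b))

    # If the drive holding stripe_b fails, XORing the survivors
    # reconstructs its contents exactly.
    rebuilt = bytes(a ^ p for a, p in zip(stripe_a, parity))
    assert rebuilt == stripe_b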


Distributed Lock Manager (DLM)  DLM is used in the shared device model of clustering to track and manage access to resources.

distribution server  A network server that contains the Windows 2000 distribution files that have been copied from the distribution CD. Clients can connect to the distribution server and install Windows 2000 over the network.

DLC  See Data Link Control.

DMZ  A network that a company maintains between its private network and the Internet. Typically, DMZ networks contain web servers and computers such as proxy servers or firewalls that help support the web environment. Also called a perimeter network.

DNA  See Windows DNA.

DNS  See Domain Name System.

DNS replication  Enables multiple DNS servers to host the same names. A primary zone server replicates a copy of the zone file to the secondary zone server via a zone transfer. The primary zone file is the master copy, and the secondary zone file is a read-only copy. Zone transfer is what makes the DNS hierarchy fault tolerant.

DNS server  A server that uses DNS to resolve domain or host names to IP addresses.
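What a DNS server does for a client can be seen in one line. The sketch below uses Python's standard socket module; the host name is only an example, and the answer depends on live DNS being reachable:

    import socket

    # Ask the configured DNS server to resolve a host name to an IP address.
    print(socket.gethostbyname("www.sybex.com"))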

domain  In Microsoft networks, a domain is an arrangement of client and server computers, referenced by a specific name, that shares a single security-permissions database. On the Internet, a domain is a named collection of hosts and subdomains, registered with a unique name by the InterNIC. See also domainlet.

domain component  Used in Active Directory distinguished names to indicate an identifier for a part of the object's domain. In the example /O=Internet/DC=ORG/DC=Charity/CN=Users/CN=BillyBob, the domain components are ORG and Charity.

domainlet  Provides the same functionality as a domain but with limited group policy and authentication capabilities. Preferable for use when installing clusters.


domain name  The textual identifier of a specific Internet host. Domain names are in the form server.organization.type (www.microsoft.com) and are resolved to Internet addresses by DNS servers.

domain name server  An Internet host dedicated to the function of translating fully qualified domain names (FQDNs) into IP addresses.

Domain Name System (DNS)  The TCP/IP network service that translates textual Internet network addresses into numerical Internet network addresses.

domain root  The topmost part of an organization's hierarchy. For example, MyCompany.COM might be the domain root, while Sales.MyCompany.COM and Accounting.MyCompany.COM would be subdomains subordinate to the root domain.

domain user account  A user account that is stored in the central database of Windows 2000 Server Active Directory. A domain user account can provide a user with a single user account for a network. Also called an Active Directory user account.

DoS  See denial of service attack.

DR  See disaster recovery.

driver  A program that provides a software interface to a hardware device. Drivers are written for the specific devices they control, but they present a common software interface to the computer's operating system, allowing all devices of a similar type to be controlled as if they were the same.

Dr. Watson  A Windows 2000 utility used to identify and troubleshoot application errors.

DTC resource  A resource used for a clustered installation of Microsoft DTC (Distributed Transaction Coordinator).

dual booting  The process of allowing a computer to boot more than one operating system.

dual homing  The FDDI equivalent of teaming. Dual homing provides redundant private network connectivity. One FDDI adapter is connected to one switch, and the secondary FDDI adapter is connected to another switch. The failover of FDDI adapters is so quick that often not a single bit is lost.


dual porting  A fault-tolerance feature of Fibre Channel Arbitrated Loop (FC-AL). Dual porting is used to create a redundant path for each storage device in the array in the event that one of the loops goes down or is busy.

DUN  See Dial-Up Networking.

dynamic disk  A Windows 2000 disk-storage technique. A dynamic disk is divided into dynamic volumes. Dynamic volumes cannot contain partitions or logical drives, and they are not accessible through DOS. You can size or resize a dynamic disk without restarting Windows 2000.

Dynamic Host Configuration Protocol (DHCP)  A method of automatically assigning IP addresses to client computers on a network.

dynamic packet filtering  Dynamic packet filtering is an improvement over static packet filtering. It operates by associating all UDP packets with a virtual connection. A virtual connection is established if a response packet is generated and returned to the original requestor. The information for the virtual connection is remembered only for a brief period of time.

dynamic storage  A Windows 2000 disk-storage system that is configured as volumes. Windows 2000 Professional dynamic storage supports simple volumes, spanned volumes, and striped volumes.

E

EB  See exabyte.

EFS  See Encrypting File System.

Emergency Repair Disk (ERD)  A disk that stores portions of the Registry, the system files, a copy of the partition boot sector, and information that relates to the startup environment. The ERD can be used to repair problems that prevent a computer from starting.

Enable Boot Logging  A Windows 2000 Advanced Options menu item that is used to create a log file that tracks the loading of drivers and services.


Enable VGA Mode  A Windows 2000 Advanced Options menu item that loads a standard VGA driver without starting the computer in Safe Mode.

Encrypting File System (EFS)  The Windows 2000 technology used to store encrypted files on NTFS partitions. Encrypted files add an extra layer of security to the file system.

encryption  The process of translating data into code that is not easily accessible, to increase security. Once data has been encrypted, a user must have a password or key to decrypt the data.

ERD  See Emergency Repair Disk.

Error event  An Event Viewer event type that indicates the occurrence of an error, such as a driver failing to load.

Ethernet  The most popular Data Link layer standard for local area networking. Ethernet implements the Carrier Sense Multiple Access with Collision Detection (CSMA/CD) method of arbitrating multiple computer access to the same network. The standard supports the use of Ethernet over any type of media, including wireless broadcast. Standard Ethernet operates at 10Mbps; Fast Ethernet operates at 100Mbps.

Event Viewer  A Windows 2000 utility that tracks information about the computer's hardware and software, as well as security events. This information is stored in three log files: the Application log, the Security log, and the System log.

exabyte  A computer storage measurement equal to 1,024 petabytes.

Exchange 2000 Server  Exchange 2000 Server is an application designed specifically for Microsoft Windows 2000 to deliver a reliable, scalable, and manageable infrastructure with 24x7 messaging. It includes features such as Instant Messaging, real-time data and video conferencing.

extended partition  In basic storage, a logical drive that allows you to allocate the logical partitions however you wish. Extended partitions are created after the primary partition has been created.


Extensible Markup Language (XML)  XML is a pared-down version of SGML (Standard Generalized Markup Language). It is not a fixed format like HTML and was designed to allow users to design their own customized markup languages.

extranet  An intranet that is accessible by outsiders. It typically includes some kind of authentication to verify that the people trying to access the network are actually who they say they are.

F

failback  Once a failed server is repaired after a failover, a failback operation can occur to make that server the active server in the cluster again.

failback policy  Policy implemented on a cluster to fail a resource back to its original node once that node becomes active again.

failover  In a cluster server environment, when a computer fails and the backup computer takes its place, a failover is said to have occurred. Conversely, when the computer is fixed and back online, a failback occurs.

failover clustering  Provided through Microsoft Cluster Services, failover clustering is a way to ensure that applications are always available. It requires three components: a virtual name, a virtual server IP address, and a virtual server Administrator account.

failover policy  Policies that are used to determine how cluster resource groups behave during an automatic or manual failover.

Failure Audit event  An Event Viewer event that indicates the occurrence of an event that has been audited for failure, such as a failed logon when someone presents an invalid username and/or password.

Fast Ethernet  A networking standard that supports data transfer rates up to 100 Mbps. Also known as 100Base-T.

fat client  A client that offloads some of the work from a server. Opposite of thin client.
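The failover and failback entries above describe a simple exchange of roles; the toy model below (plain Python; the class and node names are invented for illustration and are not the MSCS API) captures that state change:

    class TwoNodeCluster:
        # Toy model of an active/standby pair of cluster nodes.
        def __init__(self):
            self.active, self.standby = "NodeA", "NodeB"

        def failover(self):
            # The standby node takes over when the active node fails.
            self.active, self.standby = self.standby, self.active

        def failback(self):
            # Once the failed node is repaired, the resource moves back.
            self.active, self.standby = self.standby, self.active

    cluster = TwoNodeCluster()
    cluster.failover()
    assert cluster.active == "NodeB"
    cluster.failback()
    assert cluster.active == "NodeA"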


fault tolerance  Any method that prevents system failure by tolerating single faults, usually through hardware redundancy.

FDDI  Fiber Distributed Data Interface. A high-speed network technology that transmits data at 100 Mbps over fiber-optic cabling.

Fibre Channel  A high-speed fiber-optic cabling technology that can transmit data at rates from 266 Mbps to 4 Gbps over distances of up to 10 kilometers.

Fibre Channel Arbitrated Loop (FC-AL)  A Fibre Channel standard that supports full-duplex data transfer rates of 100 Mbps. FC-AL is expected to eventually replace SCSI for high-performance storage systems.

File Share resources  Cluster resources that are used to provide high availability for file shares. There are three major uses for a File Share resource: a basic file share, shared subdirectories, and a Dfs root.

file system  A software component that manages the storage of files on a mass-storage device by providing services that can create, read, write, and delete files. File systems impose an ordered database of files on the mass-storage device. Storage is arranged in volumes. File systems use hierarchies of directories to organize files.

File Transfer Protocol (FTP)  A simple Internet protocol that transfers complete files from an FTP server to a client running the FTP client. FTP provides a simple method of transferring files between computers but cannot perform browsing functions. Users must know the URL of the FTP server to which they wish to attach.

filtering  The process of defining how incoming traffic is handled by the nodes in an NLB cluster.

filtering rule  A logical rule that you impose in Windows 2000 routing or NAT, whereby you restrict certain protocols from being allowed into or out of a network. The most common filter is to bar incoming ICMP (ping) packets.

firewall  A hardware or software solution that allows or denies the transmission of network traffic.


Firewall client  Firewall client software is one of the options available for client connection to an ISA Server. It can be installed only on computers running Windows ME, 95, 98, NT 4.0, or 2000.

Firewall service  The Firewall service is a Windows 2000 service that supports requests from Firewall and SecureNAT clients. It works with Telnet, mail, news, Microsoft Windows Media, RealAudio, Internet Relay Chat (IRC), and other client applications that are compatible with Windows Sockets (Winsock).

forward lookup zone  A zone that allows one to look up an IP address when a host name (FQDN) is known.

frame  A data structure that network hardware devices use to transmit data between computers. Frames consist of the addresses of the sending and receiving computers, size information, and a checksum. Frames are envelopes around packets of data that allow the packets to be addressed to specific computers on a shared media network.

Frame Relay  A packet-switching technology that encapsulates LAN traffic for transmission over digital data lines. Transmission speeds range from 56 Kbps to 1.544 Mbps or higher.

frame type  An option that specifies how data is packaged for transmission over the network. This option must be configured to run the NWLink IPX/SPX/NetBIOS Compatible Transport protocol on a Windows 2000 computer. By default, the frame type is set to Auto Detect, which will attempt to automatically choose a compatible frame type for the network.

front end  The front end of a website normally houses the web services that are being requested by clients. Front-end systems are known as stateless systems because they do not store client information.

FTP  See File Transfer Protocol.

full-duplex  A mode of communications in which data is transmitted and received simultaneously.

Fully Qualified Domain Name (FQDN)  A dotted name representation that is used in URLs for accessing web pages on the Internet. The FQDN consists of a host name with the domain name and any subdomains in which the host resides.


G

GB  See gigabyte.

general/web cluster  An Application Center cluster that is used for general purposes and for hosting websites and local COM+ applications. Normally these clusters use NLB or some other type of load balancing.

Generic Application resource  A cluster resource that can be used to implement cluster-unaware applications.

Generic Service resource  A cluster resource that can be used to implement services that are not cluster-aware.

gigabyte  A computer storage measurement equal to 1,024 megabytes.

global monitors  A type of data collector for Health Monitor. Global monitors are synchronized across the entire cluster for all cluster members, versus local monitors, which are associated with particular cluster members.

GUI (graphical user interface)  A computer shell program that represents mass-storage devices, directories, and files as graphical objects on a screen. A cursor driven by a pointing device such as a mouse manipulates the objects.

H

half-duplex  A mode of communication in which data is either transmitted or received, but not simultaneously.

handshake  The process of establishing communications between two networking devices.

hard disk drive  A mass-storage device that reads and writes digital information magnetically on discs that spin under moving heads. Hard disk drives are precisely aligned and cannot normally be removed. Hard disk drives are an inexpensive way to store gigabytes of computer data permanently. Hard disk drives also store the software installed on a computer.


hardware-based NLB  Hardware-based NLB uses a network device, which could be a router or a computer, to redirect client requests to multiple nodes within the cluster.

Hardware Compatibility List (HCL)  A Microsoft-maintained list of all of the hardware devices supported by Windows 2000. Hardware on the HCL has been tested and verified as being compatible with Windows 2000.

hardware RAID  Hardware RAID systems usually consist of a controller card/interface and a drive mount cage. Hardware RAID offers better performance than software RAID solutions.

HCL  See Hardware Compatibility List.

heartbeat  A message that is sent at regular intervals between nodes in a cluster to detect node failure.

Health Monitor  A monitoring tool included with Application Center 2000 to monitor the status of cluster members. It uses data collectors to gather statistics for items such as performance counters, services, and running processes.

high availability  When defined for a clustered environment, high availability is the ability of the remaining cluster members to take over the workload of a failed cluster node with little or no interruption to the client.

host  A computer on a network, whether server or workstation. The term could loosely extend to printers, routers, or other devices (anything with an IP address) but is typically confined to computers.

Host Integration Server 2000  The new term for Microsoft SNA Server.
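How a missed heartbeat is detected can be sketched in a few lines (plain Python; the timeout value and node names are invented for illustration and are not cluster service defaults):

    import time

    HEARTBEAT_TIMEOUT = 5.0   # seconds without a heartbeat before failover

    # Timestamp of the last heartbeat received from each node.
    last_heartbeat = {"NodeA": time.time(), "NodeB": time.time() - 12}

    for node, seen in last_heartbeat.items():
        if time.time() - seen > HEARTBEAT_TIMEOUT:
            print(node, "missed its heartbeat; initiate failover")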

host priority  Used in Network Load Balancing. The host with the highest priority for a port rule will handle all of the cluster traffic. Priorities can be set from 1 to 32, with 1 being the highest priority.

hot spare  One of the cluster configuration models, in which one node of a two-node cluster is dedicated as a "hot spare" and takes over in the event of failure of the primary node.

hot swapping  The ability of a device to be plugged into or removed from a computer while the computer's power is on.
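Host priority selection reduces to a lowest-number-wins comparison; the sketch below (plain Python, with invented host names and priorities) shows which host would handle the traffic:

    # NLB priorities run from 1 to 32; 1 is the highest priority.
    priorities = {"web1": 3, "web2": 1, "web3": 2}

    handler = min(priorities, key=priorities.get)   # lowest number wins
    print(handler)                                   # web2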


HTML  See Hypertext Markup Language.

HTTP  See Hypertext Transfer Protocol.

HTTP Monitoring Tool  A monitoring tool that can be used to gather data about websites. It provides immediate monitoring of websites without the need for specialized web page development.

hub  A network-enabled device that provides a common connection point for devices on the network.

hybrid node (H-node)  A name resolution method wherein the client first queries the listed name servers it has been given, then broadcasts, then checks an LMHOSTS file.

hyperlink  A link within text or graphics that has a web address embedded in it. By clicking the link, a user can jump to another web address.

Hypertext Markup Language (HTML)  A textual data format that identifies sections of a document such as headers, lists, hypertext links, and so on. HTML is the data format used on the World Wide Web for the publication of web pages.

Hypertext Transfer Protocol (HTTP)  An Internet protocol that transfers HTML documents over the Internet and responds to context changes that happen when a user clicks a hyperlink.

Hypertext Transfer Protocol over Secure Sockets Layer (HTTPS)  A protocol that is used for transmitting data securely over the World Wide Web.

I

IAS  See Internet Authentication Services.

ICMP  See Internet Control Message Protocol.

ICS  See Internet Connection Sharing.

IGRP  See Internet Gateway Routing Protocol.

IIS  See Internet Information Services.


IIS Object Cache  The cache stored by Internet Information Services (IIS 5) that holds file objects that are frequently requested.

IIS permissions  IIS permissions give users access to virtual directories. They are different from NTFS permissions, which provide access to physical directories.

IIS resource  A resource that is used to cluster web and FTP sites.

IMAP4  See Internet Message Access Protocol.

impersonation  A method used by a server to determine if a client has sufficient rights to access a resource.

incremental backup  A backup type that backs up only the files that have changed since the last normal or incremental backup. It resets the archive attribute on the files that are backed up.

Indexing Service  A Windows 2000 service that creates an index based on the contents and properties of files stored on the computer's local hard drive. A user can then use the Windows 2000 Search function to search or query through the index for specific keywords.

Information event  An Event Viewer event that informs you that a specific action has occurred, such as when a system shuts down or starts.

inherited permissions  Parent folder permissions that are applied to (or inherited by) files and subfolders of the parent folder. In Windows 2000, the default is for parent folder permissions to be applied to any files or subfolders in that folder.

Integrated Services Digital Network (ISDN)  A direct, digital, dial-up connection that operates at 64 Kbps per channel over regular twisted-pair cable. ISDN provides twice the data rate of the fastest modems per channel. Up to 24 channels can be multiplexed over two twisted pairs.

Integrated Windows authentication  Also known as Windows NT Challenge/Response authentication. Integrated Windows authentication is more secure than basic authentication and supports both NTLM and Kerberos authentication.


interclient state  A Network Load Balancing term, used when talking about the synchronization status of updates of transactional systems after a client connection has taken place. For example, suppose that you have an e-commerce site where a client surfs in on the Internet and buys something. The act of updating the transactions in the various databases affected by the purchase transaction constitutes an interclient state.

interconnect  The network that is used for node-to-node communications within the cluster.

Internet  The Internet is a worldwide system of computer networks. It uses TCP/IP protocols and is a public, self-sustaining facility that is accessible to hundreds of millions of people worldwide.

Internet Authentication Services (IAS)  The Microsoft implementation of a Remote Authentication Dial-In User Service (RADIUS) server.

Internet Connection Sharing (ICS)  A Windows 2000 feature that allows a small network to be connected to the Internet through a single connection. The computer that dials in to the Internet provides network address translation, addressing, and name resolution services for all of the computers on the network. Through Internet Connection Sharing, the other computers on the network can access Internet resources and use Internet applications, such as Internet Explorer and Outlook Express.

Internet Control Message Protocol (ICMP)  A member of the IP suite of protocols. It provides for error, control, and informational messages. The PING command makes use of ICMP in determining whether a certain TCP/IP connection is present.

Internet Explorer  A World Wide Web browser produced by Microsoft and included with Windows 9x, Windows NT 4, and now Windows 2000.

Internet Gateway Routing Protocol (IGRP)  A protocol that allows multiple routing gateways to coordinate with each other.

Internet Group Management Protocol (IGMP)  A TCP/IP standard (RFC 1112) that details the routing of multicast traffic over the Internet.

Internet Information Services 5.0 (IIS 5)  Software that serves Internet higher-level protocols such as HTTP and FTP to clients using web browsers. The IIS software that is installed on a Windows 2000 Server computer is a fully functional web server and is designed to support heavy Internet usage.


Internet Message Access Protocol v4 (IMAP4)  Allows a client to access and manipulate electronic mail messages on a server. IMAP4rev1 includes operations for creating, deleting, and renaming mailboxes; checking for new messages; and permanently removing messages.

Internet printer  A Windows 2000 feature that allows users to send documents to be printed through the Internet.

Internet Print Protocol (IPP)  A Windows 2000 protocol that allows users to print directly to a URL. Printer and job-related information are generated in HTML format.

Internet Protocol (IP)  The Network layer protocol upon which the Internet is based. IP provides a simple connectionless packet exchange. Other protocols such as TCP use IP to perform their connection-oriented (or guaranteed delivery) services.

Internet Protocol Security (IPSec)  IPSec is a suite of protocols that are used to encrypt and decrypt data as it is transmitted over a public network or the Internet. IPSec provides end-to-end encrypted data transmission.

Internet Security and Acceleration Server (ISA Server)  ISA is the successor to Microsoft Proxy Server. It provides the features of both proxy servers and firewalls in one package. ISA comes in two versions. ISA Server Standard Edition was designed to meet the firewall needs of small businesses. ISA Server Enterprise Edition provides the scalability and management capabilities needed in larger organizations.

Internet Service API (ISAPI)  An application programming interface (API) written by Microsoft so that programmers can write code for Internet Information Services. Some companies specializing in augmenting the use of Microsoft Proxy Server have developed ISAPI filters that help Proxy Server filter out unwanted traffic.

Internet service provider (ISP)  A company that provides connections to the Internet.

Internet Services Manager  A Windows 2000 utility used to configure the protocols that are used by Internet Information Services (IIS) and Personal Web Services (PWS).


internetwork  A network made up of multiple network segments that are connected with some device, such as a router. Each network segment is assigned a network address. Network layer protocols build routing tables that are used to route packets through the network in the most efficient manner.

Internetworking Packet Exchange (IPX)  A networking protocol once used heavily by Novell NetWare servers and clients. IPX uses datagrams for connectionless communications.

InterNIC  The agency that is responsible for assigning IP addresses.

interprocess communications (IPC)  A generic term describing any manner of client/server communication protocol, specifically those operating in the Application layer. IPC mechanisms provide a method for the client and server to trade information.

intraclient state  A Network Load Balancing term involving the state of a client through a transaction. When a client is held throughout the transaction, as in an Internet e-commerce shopping cart transaction, the client is taken from connection to connection.

intracluster communications  A private network used by nodes in a cluster for heartbeats and status and control messages.

intranet  A privately owned network based on the TCP/IP protocol suite.

intrusion detection  The ability to detect someone or something attempting to break into or misuse your system.

IP  See Internet Protocol.

IP address  A four-byte number that uniquely identifies a computer on an IP internetwork.

IP address spoofing  By spoofing IP packets, an intruder on the Internet can effectively impersonate a local system's IP address. IP spoofing attacks are currently very difficult to detect.

IP address resource  A cluster resource that is used to manage IP network addresses.


IP blocking  The ability to deny access to a specific IP address, a range of IP addresses, or subnets and domains.

IPC  See interprocess communications.

IPCONFIG  A command used to display the computer's IP configuration.

IPP  See Internet Print Protocol.

IPSec  A suite of security protocols and services that make it possible to establish secure communications between two computers over an unsecured network. IPSec is used with the L2TP protocol for VPN connections.

IPX  See Internetworking Packet Exchange.

ISAPI  See Internet Service API.

ISA Server  See Internet Security and Acceleration (ISA) Server.

ISDN  See Integrated Services Digital Network.

ISP  See Internet service provider.

IUSR_computername  Account used by IIS to provide Anonymous access to a website.
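The IPCONFIG entry above refers to a real command you can run at any Windows command prompt; the sketch below simply invokes it from Python (Windows only, and purely illustrative):

    import subprocess

    # ipconfig /all prints the full TCP/IP configuration of every adapter.
    result = subprocess.run(["ipconfig", "/all"],
                            capture_output=True, text=True)
    print(result.stdout)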

J, K

KDC  See Key Distribution Center.

Kerberos  Kerberos is the default protocol used for authentication in a Windows 2000 domain. It is known as a mutual authentication protocol because it confirms both the identity of the user and that of the network services the user accesses.

kernel  The core process of a preemptive operating system, consisting of a multitasking scheduler and the basic security services. Depending on the operating system, other services such as virtual memory drivers may be built into the kernel. The kernel is responsible for managing the scheduling of threads and processes.

Key Distribution Center (KDC)  One of the three critical components of Kerberos authentication. The KDC issues a ticket-granting ticket (TGT) that contains encrypted data about the user.


L

L2TP  See Layer 2 Tunneling Protocol.

LAN  Local area network; a computer network that spans a small area.

Last Known Good Configuration  A Windows 2000 Advanced Options menu item used to load the configuration that was used the last time the computer was successfully booted.

latency  There are two acceptable ideas behind the concept of latency. The first is the notion of how long a computer component spends waiting on another component to finish what it's doing and honor a request. The second has to do with the amount of time that a packet takes to get from one point to another across a network.

Layer 2 Tunneling Protocol (L2TP)  An extension of the PPP protocol, enabling the implementation of VPNs, either through ISPs or private networks. The protocol is a combination of the best of Microsoft's PPTP and Cisco's Layer 2 Forwarding.

LCP  See Link Control Protocol.

Lightweight Directory Access Protocol (LDAP)  The primary access protocol for Active Directory. It runs directly over TCP and can be used to access online directory services.

Link Control Protocol (LCP)  The protocol that negotiates the PPP and link parameters, configuring the Data Link layer of a PPP connection.

LLC sublayer  See Logical Link Control sublayer.

LMHOSTS  A static table of NetBIOS names to IP address mappings.

load-balanced clusters  Clusters in which requests are distributed equally among all members of the cluster.

load balancing  Method for distributing client requests across multiple servers within a cluster.

load weight  The percentage of load-balanced network traffic that a node should handle.


local logon  A logon performed at a computer using a user account that is stored in that computer's local database. Also called an interactive logon.

local monitors  A type of data collector for Health Monitor, which is included in Microsoft Application Center 2000. Local monitors are associated with a particular cluster member and are not synchronized across all cluster members, as global monitors are.

local printer  A printer that uses a physical port and that has not been shared. If a printer is defined as local, the only users who can use the printer are the local users of the computer that the printer is attached to.

local security  Security that governs a local or interactive user's ability to access locally stored files. Local security can be set through NTFS permissions.

local user account  A user account stored locally in the user accounts database of a computer that is running Windows 2000.

local user profile  A profile created the first time a user logs on, stored in the Documents and Settings folder. The default user profile folder's name matches the user's logon name. This folder contains a file called NTUSER.DAT and subfolders with directory links to the user's Desktop items.

log shipping  A feature that is built into SQL Server 2000. In log shipping, transaction logs are copied from a primary server and applied to a secondary server. The copying process takes place on a scheduled basis.

logical drive  An allocation of disk space on a hard drive, using a drive letter. For example, a 5GB hard drive could be partitioned into two logical drives: a C: drive, which might be 2GB, and a D: drive, which might be 3GB.

Logical Link Control (LLC) sublayer  A sublayer in the Data Link layer of the Open Systems Interconnection (OSI) model. The LLC sublayer defines flow control.

logical port  A port that connects a device directly to the network. Logical ports are used with printers by installing a network card in the printers.

logical printer  The software interface between the physical printer (the print device) and the operating system. Also referred to as just a printer in Windows 2000 terminology.


M

MAC (media access control) address  The physical address that identifies a computer. Ethernet and Token Ring cards have the MAC address assigned through a chip on the network card.

MAC sublayer  See Media Access Control sublayer.

manual allocation  Configuring each computer's TCP/IP settings without benefit of any automatic or labor-saving technology.

many-to-one mapping  Used with client certificate authentication. With many-to-one mapping, multiple users can be mapped to a single user account. Rights and permissions granted to the users are based on that single user account.

MAPI  See Messaging Application Programming Interface.

MB  See megabyte.

Media Access Control (MAC) sublayer  A sublayer in the Data Link layer of the Open Systems Interconnection (OSI) model. The MAC sublayer is used for physical addressing.

megabyte  A computer storage measurement equal to 1,024 kilobytes.

member server  A Windows 2000 server that has been installed as a non-domain controller. This allows the server to operate as a file, print, and application server without the overhead of account administration.

Messaging Application Programming Interface (MAPI)  An API that provides backward compatibility with Microsoft Exchange applications.

Microsoft Challenge Handshake Authentication Protocol (MS-CHAP)  The MS-CHAP version 1 and 2 protocols are a take-off of the original Challenge Handshake Authentication Protocol (CHAP) as outlined in RFC 1994. The idea is that you connect to a remote access server, and you're sent a challenge string. In answering the challenge string, you key in your username and password. The password is used to create a one-way hash using the Message Digest 5 (MD5) encryption scheme. In CHAP, the server must have access to the plain-text password; in MS-CHAP, the password is hashed using MD4. In MS-CHAP v2, the entire mechanism is much stronger and allows for two-way authentication.

Microsoft Cluster Services (MSCS)  Technology introduced by Microsoft that provides high availability for servers that host mission-critical applications. Physical servers are grouped together and act as a single network server.

Microsoft Disk Operating System  The operating system that was developed by Microsoft in 1982 for personal computers. It is a 16-bit operating system that utilizes a command-line interface.

Microsoft Management Console (MMC)  The Windows 2000 console framework for management applications. The MMC provides a common environment for snap-ins.

Microsoft Proxy Server 2.0  An extensible firewall and content cache server that provides Internet security, improves network performance, and reduces communication costs. Proxy Server has been replaced by Internet Security and Acceleration Server (ISA Server).

mixed mode authentication  Mixed mode authentication is one of the authentication modes for database access. It differs from Windows authentication mode in that users can be authenticated by either Windows or SQL Server. Mixed mode authentication is primarily used for backward compatibility with applications written for earlier versions of SQL Server and by non-Windows clients.

MMC  See Microsoft Management Console.

MS-CHAP  See Microsoft Challenge Handshake Authentication Protocol.

MS-DOS  See Microsoft Disk Operating System.
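The challenge/response idea behind CHAP and MS-CHAP can be sketched briefly. The code below is schematic only (plain Python; it is not the actual MS-CHAP algorithm): the server sends a random challenge, and the client returns a one-way hash of the challenge plus the shared secret, so the password itself never crosses the wire:

    import hashlib
    import os

    challenge = os.urandom(16)   # random challenge from the server
    password = b"secret"         # shared secret, never transmitted

    # Client answers with a one-way hash of challenge + password.
    response = hashlib.md5(challenge + password).hexdigest()

    # The server, which knows the password, computes the same hash
    # and compares it with the client's answer.
    assert response == hashlib.md5(challenge + password).hexdigest()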

multicast  Transmitting data to a select group of recipients. Used primarily in video or audio streaming, stock ticker programs, etc.

multiple-instance cluster  Multiple instances of SQL Server running on a single server. SQL Server 2000 supports up to 16 instances running on one server.

mutual authentication  The process of one host verifying the identity of another and vice versa.


N

namespace  Within Windows 2000, the term namespace has two different connotations. In the Microsoft Management Console (MMC), when viewing the console tree and looking at the items or resources found therein, you're looking at a namespace (which would include any Active Directory objects, simply because they're viewed with the MMC as well). Alternatively, any hierarchical structure viewed within DNS is also said to be a namespace.

NAT  See network address translation.

NCP  See Network Control Protocol.

Netstat (netstat.exe)  A command-line utility that displays your system's current TCP/IP connections and protocol statistics.

network address translation (NAT)  The process of hiding an entire network behind a single IP address. This process both helps reduce IP address space shortages and hides the internal network addressing scheme from external hackers.

Network Basic Input/Output System (NetBIOS)  A client/server IPC service developed by IBM in the early 1980s. NetBIOS presents a relatively primitive mechanism for communication in client/server applications, but its widespread acceptance and availability across most operating systems make it a logical choice for simple network applications. Many of the network IPC mechanisms in Windows 2000 are implemented over NetBIOS.

network capacity  A network's capability to handle traffic from all LAN and WAN segments. Networks should be scalable to accommodate predicted growth. They should have sufficient network capacity to ensure that periodic bursts of traffic don't cause degradation in network performance.

Network Control Protocol (NCP)  A series of individual control protocols used by PPP to negotiate with LAN protocols.

Network File System (NFS)  An open design for Unix systems that allows all users of a network to access files on a server, regardless of their platform.
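The hiding that network address translation performs amounts to a mapping table; the toy sketch below (plain Python; all addresses and ports are invented examples) rewrites internal endpoints onto one public address:

    PUBLIC_IP = "203.0.113.10"   # the single address the outside world sees

    nat_table = {}
    next_port = 50000

    def translate(internal_ip, internal_port):
        # Map an internal endpoint to a unique port on the public address.
        global next_port
        key = (internal_ip, internal_port)
        if key not in nat_table:
            nat_table[key] = next_port
            next_port += 1
        return PUBLIC_IP, nat_table[key]

    print(translate("192.168.0.5", 1025))   # ('203.0.113.10', 50000)
    print(translate("192.168.0.6", 1025))   # ('203.0.113.10', 50001)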


Network layer  The layer of the Open Systems Interconnection (OSI) model that creates a communication path between two computers via routed packets. Transport protocols implement both the Network layer and the Transport layer of the OSI stack. For example, IP is a Network layer service.

Network Load Balancing (NLB)  A service that allows you to combine two or more Windows 2000 Advanced Servers into a cluster for fault-tolerance or performance improvement purposes.

Network Management System (NMS)  A system that allows you to monitor the network for errors and provides alerting if an error takes place.

Network Monitor  A software-based network packet analyzer that can be used to capture real-time network traffic.

Network Name resource  A resource used in conjunction with the IP Address resource to create a virtual server.

network segment  A section of a network. Segmenting is also one of the most common ways of increasing available bandwidth on the LAN.

network storage solutions  Self-contained units that normally consist of some type of RAID array housed in its own cabinet. The cabinet includes power supplies, network interface cards, and SCSI/Fibre Channel controllers.

New Technology File System (NTFS)  A secure, transaction-oriented file system developed for Windows NT and Windows 2000. NTFS offers features such as local security on files and folders, data compression, disk quotas, and data encryption.

NFS  See Network File System.

NIC teaming  See teaming.

NLB  See Network Load Balancing.

node  In a clustered environment, a node is a server that is a member of the cluster.


node type  One of four name resolution paradigms that a client will use when attempting to resolve computer names to IP addresses on a network. The most common node type is 0x8, more commonly called hybrid node. With hybrid node, the client first checks any primary or secondary WINS servers for the name referenced, then broadcasts for it, and then checks LMHOSTS.

NTFS  See New Technology File System.

NTFS permissions  Permissions used to control access to NTFS folders and files. Access is configured by allowing or denying NTFS permissions to users and groups.

n-tier architecture  A multi-tier networking architecture for building scalable applications. It is based on either a two-tier or three-tier architecture. The three-tier architecture is the basis for Windows DNA and consists of presentation, business logic, and data layers.

n-tier client/server  A client/server environment that contains multiple server and/or client layers. An example might be an e-commerce site where the client is someone using a browser, and several servers are needed (web, database, firewall, etc.) to handle the transactions.

O

object repository  One of three components that the WBEM architecture uses for collecting and distributing system data. The object repository is a standard method for storing object definitions.

ODBC  See Open Database Connectivity.

OLE DB  An Application Programming Interface (API) that enables COM applications to access data from any OLE DB data source.

OLE DB provider  Used by an application when it needs to access an OLE DB data source.

one-to-one mapping Used with client certificate authentication. The client must have a valid Windows 2000 user account. They are authenticated based on their account information and are granted access to resources depending on the account’s rights and permissions.


Open Database Connectivity (ODBC)  A core component of Microsoft's strategy for accessing data from relational and nonrelational databases. An open, vendor-neutral, and universal data-access interface, ODBC eliminates the need for developers to learn multiple APIs.

Open Shortest Path First (OSPF)  A routing protocol developed using the link-state algorithm.

Open Systems Interconnection (OSI) model  A reference model for network component interoperability developed by the International Standards Organization (ISO) to promote cross-vendor compatibility of hardware and software network systems. The OSI model splits the process of networking into seven distinct services, or layers. Each layer uses the services of the layer below to provide its service to the layer above.

optimization  Any effort to reduce the workload on a hardware component by eliminating or reducing the amount of work required of it. For instance, file caching is an optimization that reduces the workload of a hard disk drive.

OSI model  See Open Systems Interconnection (OSI) model.

OSPF  See Open Shortest Path First.

Outlook Web Access  Outlook Web Access (OWA) was developed to provide users with access to e-mail, personal calendars, group scheduling, and collaboration applications through a web browser. It was originally designed to provide clients with access to Microsoft Exchange Server version 5 from any web browser. This functionality has been extended to all versions of Exchange Server.

OWA  See Outlook Web Access.

P

packet filtering  Packet filtering looks at the source and destination addresses of IP packets to determine whether they should be allowed or denied entry to the network. There are three primary types of packet filters: static or simple, dynamic, and stateful.
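A static packet filter of the kind described above reduces to an address comparison; the sketch below (plain Python; the rule set and addresses are invented examples) allows or denies a packet by its source address:

    # Deny list for a simple (static) packet filter.
    DENY_SOURCES = {"198.51.100.7"}

    def allow(packet):
        # Return True if the packet may enter the network.
        return packet["src"] not in DENY_SOURCES

    print(allow({"src": "198.51.100.7", "dst": "10.0.0.1"}))  # False
    print(allow({"src": "192.0.2.1", "dst": "10.0.0.1"}))     # True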


packet sniffing  The process of monitoring the traffic that is traveling over a network. Packet sniffing can be done for legitimate network management functions or for the purpose of stealing data.

pagefile  Logical memory that exists on the hard drive. If a system is experiencing excessive paging (swapping between the page file and physical RAM), it needs more memory.

PAP  See Password Authentication Protocol.

parity  A method of redundancy intended to protect against data corruption and loss.

partition, partitioning  A section of a hard disk that can contain an independent file system volume. Partitions can be used to keep multiple operating systems and file systems on the same hard disk.

Password Authentication Protocol (PAP)  A plain-text authentication scheme. An early precursor to CHAP and its Microsoft iterations.

PB  See petabyte.

PCI  See Peripheral Component Interconnect.

Peer Web Services (PWS)  Software that acts as a small-scale web server, for use with a small intranet or a small Internet site with limited traffic. Windows 2000 uses PWS to publish resources on the Internet or a private intranet. When you install Internet Information Services (IIS) on a Windows 2000 Professional computer, you are actually installing PWS.

perfmon  Perfmon.exe is the command used to access the Windows 2000 Performance Tools.

Performance Logs and Alerts  A Windows 2000 utility used to log performance-related data and generate alerts based on performance-related data.

Performance Tools  The Windows 2000 Performance Tools feature consists of two components: System Monitor, and Performance Logs and Alerts.

perimeter network  A small network located between an organization's private network and any external networks or the Internet. Perimeter networks are sometimes used to separate and protect e-mail and web servers from external sources. Also known as a DMZ.


Peripheral Component Interconnect (PCI)  A local bus standard that was developed by Intel. It is usually implemented as a 32-bit bus and runs at clock speeds of 33 or 66 MHz.

permissions  Security constructs used to regulate access to resources by username or group affiliation. Permissions can be assigned by administrators to allow any level of access, such as read-only, read/write, or delete, by controlling the ability of users to initiate object services. Security is implemented by checking the user's security identifier (SID) against each object's access control list (ACL).

Personal Web Manager  A Windows 2000 utility used to configure and manage Peer Web Services (PWS). This utility has options for configuring the location of the home page and stopping the website, and displays statistics for monitoring the website.

petabyte  A computer storage measurement that is equal to 1,024 terabytes.

Physical Disk resource  The resource that is used to manage the shared drives on the cluster.

Physical layer The first (bottom) layer of the Open Systems Interconnection (OSI) model, which represents the cables, connectors, and connection ports of a network. The Physical layer contains the passive physical components required to create a network. physical port A serial (COM) or parallel (LPT) port that connects a device, such as a printer, directly to a computer. ping A command used to send an Internet Control Message Protocol (ICMP) echo request and echo reply to verify that a remote computer is available. PIX

A Cisco hardware firewall.
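Because raw ICMP sockets require administrative rights, scripts typically invoke the operating system's own ping command rather than building echo requests by hand. A minimal Python sketch of the ping entry above, assuming the standard ping utility is on the path:

    import platform
    import subprocess

    def ping(host):
        """Send one ICMP echo request via the OS ping command."""
        count_flag = "-n" if platform.system() == "Windows" else "-c"
        result = subprocess.run(["ping", count_flag, "1", host],
                                capture_output=True)
        return result.returncode == 0  # True if the host replied

    print(ping("127.0.0.1"))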

PKI  See Public Key Infrastructure.

plug and play  A technology that uses a combination of hardware and software to allow the operating system to automatically recognize and configure new hardware without any user intervention.

Point-to-Point Protocol (PPP)  A connection protocol that connects remote computers to networks.


Point-to-Point Tunneling Protocol (PPTP)  A protocol invented by Microsoft and several other partners in a collective effort known as the PPTP Forum. PPTP is designed to facilitate the setting up of a virtual private connection with a client coming over the Internet to a private network. The data is tunneled inside TCP/IP packets.

policies  General controls that enhance the security of an operating environment. In Windows 2000, policies affect restrictions on password use and rights assignments, and determine which events will be recorded in the Security log.

POP3  Post Office Protocol, version 3, is a protocol used to retrieve e-mail from a mail server. Most e-mail applications use the POP3 protocol.

port  A logical connection endpoint between two hosts. Ports are identified by port numbers.

port rules  For Network Load Balancing, port rules are the configuration parameters that determine the filtering mode that is applied to a range of ports.

port scan  An intrusion technique in which an intruder attempts to gain access to a computer by scanning for open ports.
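A port scan, as defined above, amounts to attempting connections and noting which succeed. This minimal Python sketch probes a few well-known ports on the loopback address; the port list is arbitrary.

    import socket

    def is_open(host, port, timeout=0.5):
        """True if a TCP connection to host:port succeeds."""
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            return sock.connect_ex((host, port)) == 0  # 0 means connected

    for port in (25, 80, 443):
        print(port, "open" if is_open("127.0.0.1", port) else "closed")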

PPP  See Point-to-Point Protocol.

PPTP  See Point-to-Point Tunneling Protocol.

presentation layer  The layer in an n-tier environment that is the user interface to the underlying technologies of the websites. It is the first layer of user authentication to the applications on the system.

Print Spooler resource  A resource group that is used to cluster print services.

private key  One of the two keys used in public key encryption. The private key is known only to the recipient and remains private. The public key is used to encrypt the data, and the private key is used to decrypt it.

private reserved range  A range of IP addresses that are not routed on the Internet.


process  A running program containing one or more threads. A process encapsulates the protected memory and environment for its threads.

Process Explode  Process Explode, or pview.exe, is a tool that is used to see the current count of system objects.

Process Thread and Status  Process Thread and Status, or pstat.exe, is a command-line tool that displays all processes and threads running on a system.

Process Tree  A command-line monitoring tool that allows administrators to query the process tree and kill processes.

Process Viewer  Process Viewer, or pviewer.exe, is a utility for displaying data about processes running on local and remote computers.

processor  A circuit designed to automatically perform lists of logical and arithmetic operations. Unlike microprocessors, processors may be designed from discrete components rather than be a monolithic integrated circuit.

processor affinity  The association of a processor with specific processes that are running on the computer. Processor affinity is used to configure multiple processors.

profile  A collection of user settings that is established when a user logs onto a Windows NT or 2000 computer. There are also roaming mandatory and roaming user profiles. Roaming mandatory profiles are used to control user settings no matter where a user might log on, thus guaranteeing uniformity across the network. Roaming user profiles provide the same mobility but allow the user to change the profile settings.

protocol  An established rule of communication adhered to by the parties operating under it. Protocols provide a context in which to interpret communicated information. Computer protocols are rules used by communicating devices and software services to format data in a way that all participants understand.

protocol rules  Rules that are used by ISA Server to determine the type of connections that are allowed for client access.

proxy array  A group of Microsoft Proxy Server computers that are configured hierarchically to provide load balancing for each other.


proxy server  Software that acts as a mediator between your Internet browser and an Internet server. See also Microsoft Proxy Server.

proxy services  Special programs that manage the traffic through a firewall for particular services.

public key encryption  A cryptographic system that uses two keys, one private and one public, to encrypt data at the originating end and then decrypt it at the destination.

Public Key Infrastructure (PKI)  A system that uses certificates and certification authorities that can vouch for the authenticity of a client accessing an Internet or network resource.

publishing rules  Servers protected by firewalls can be published on the Internet without fear of the internal network's being compromised. The content of published web servers can be cached to allow for quicker response to client requests. In ISA Server, web publishing and server publishing rules determine eligibility of data requests for forwarding to servers on the internal network. See also SSL bridging.

PWS  See Peer Web Services.
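The public key encryption entry above can be illustrated with toy RSA arithmetic in Python. The primes here are far too small to be secure; the point is only that data encrypted with the public key is recovered with the private key.

    # Toy RSA: illustrative only, never use such small numbers in practice.
    p, q = 61, 53
    n = p * q                   # modulus, part of both keys
    phi = (p - 1) * (q - 1)
    e = 17                      # public exponent
    d = pow(e, -1, phi)         # private exponent (modular inverse; Python 3.8+)

    message = 42
    ciphertext = pow(message, e, n)    # anyone may encrypt with the public key
    recovered = pow(ciphertext, d, n)  # only the private key decrypts
    assert recovered == message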

Q

quality of service (QoS)  A circuit with a guaranteed throughput and availability that is higher than that of normal data traffic. Primarily used in ATM networks for voice and video circuits.

quorum disk  One disk in the cluster storage system that holds all of the configuration information for the cluster. It is the most important disk in the cluster.

Quorum resource  The cluster resource that maintains the configuration data that is critical for cluster recovery.


R

RAID levels  Used to define the type of data protection and fault tolerance provided by a Redundant Array of Inexpensive Disks.

random-access memory (RAM)  Integrated circuits that store digital bits in massive arrays of logical gates or capacitors. RAM is the primary memory store for modern computers, storing all running software processes and contextual data.

RAS  See Remote Access Service.

real-time application  A process that must respond to external events at least as fast as those events can occur. Real-time threads must run at very high priorities to ensure their ability to respond in real time.

Recovery Console  A Windows 2000 option for recovering from a failed system. The Recovery Console starts Windows 2000 without the graphical interface and allows the administrator limited capabilities, such as adding or replacing files and starting and stopping services.

Redundant Array of Inexpensive Disks (RAID)  A technology for implementing fault tolerance on a disk subsystem.

redundant power supply (RPS)  A device that fits into a rack along with your switches or hubs, providing redundancy, such that if a device's power supply gives out, the RPS takes over. Typically, you purchase RPS units that match the switch or router gear that you're purchasing.

Registry  A database of settings required and maintained by Windows 2000 and its components. The Registry contains all of the configuration information used by the computer. It is stored as a hierarchical structure and is made up of keys, hives, and value entries.

relative distinguished name (RDN)  The name of an object within its current level in the directory. For a user DC=COM/DC=MyCompany/CN=Users/CN=jim.smith, jim.smith would be the user's RDN.

Remote Access Service (RAS)  A service that allows network connections to be established over a modem connection, an Integrated Services Digital Network (ISDN) connection, or a null-modem cable. The computer initiating the connection is called the RAS client; the answering computer is called the RAS server.


Remote Monitoring (RMON)  Similar to SNMP, though much richer in what it can do. Where SNMP can accept one management information base (MIB) from a client such as a router, switch, or hub, RMON can receive 10 separate specialized MIBs, thus allowing far more granularity in the kind of monitoring that can go on.

Replicator group  A Windows 2000 built-in group that supports directory replication, which is a feature used by domain servers. Only domain user accounts that will be used to start the replication service should be assigned to this group.

request forwarder  In Application Center 2000, a request forwarder is used to ensure that HTTP requests are made to a specific cluster member in a load-balanced environment. The request forwarder is an ISAPI filter and extension that sits between the HTTP client and the applications.

Requests for Comments (RFCs)  The set of standards defining the Internet protocols as determined by the Internet Engineering Task Force and available in the public domain on the Internet. RFCs define the functions and services provided by each of the many Internet protocols. Compliance with the RFCs guarantees cross-vendor compatibility.

resource  Any useful service, such as a shared folder or a printer.

Resource API  Acts as the main interface between the Resource DLLs and the Resource Monitor.

resource dependency  When one resource on the cluster depends on another resource in order to operate.

Resource DLL  Manages the services for a specific resource. The Resource DLL is called by the Resource Monitor when a request is made by the Cluster service.

resource group  A logical collection of resources.

Resource Monitors  Responsible for monitoring activity between the Cluster service and the Resource DLLs.

resource record  A description of the type of host listed in a DNS database. There are many different resource records that can be used to describe different types of hosts.


Restore Wizard  A wizard used to restore data. The Restore Wizard is accessed through the Windows 2000 Backup and Restore utility.

return on investment (ROI)  Generally speaking, the income that an investment returns in one year. In computing terms, the "income" wouldn't be measured in dollars, but in added bandwidth, faster and smarter applications, more secure enterprises, and so forth.

reverse lookup  Retrieving a name from a DNS server when given an IP address.

RFC  See Requests for Comments.

RIP  See Routing Information Protocol.

risk  In the business sense, that portion of a project or system that may be prone to failure, to extra costs, to unpredictability, to hazard, or to other unknown complications. Risk-takers in business often reap big rewards, but they also often have projects fail because they underestimated the size of the risk.

RMON  See Remote Monitoring.

rolling upgrade  The process of upgrading software or applications on a cluster without having to take the entire cluster down. One node can be taken offline, upgraded, and then brought back online and synchronized with the other nodes on the cluster.

Round Robin DNS  The ability of DNS to have more than one IP address for a given FQDN. The entries are traversed first one, then the next, and so on down the list, then back to the first, hence the round-robin name.

router  A Network layer device that moves packets between networks. Routers provide internetwork connectivity.

router hop  Each router that is crossed as a packet moves from a source to a destination. We sometimes talk about how many "router hops away" a resource is.

Routing and Remote Access Services (RRAS)  The Windows 2000 service that facilitates various remote access services (such as demand-dial and RAS) and routing services (such as RIP, OSPF, and others).
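The Round Robin DNS entry above can be sketched in a few lines of Python: given several A records for one FQDN, answers are handed out in rotation. The addresses are hypothetical.

    from itertools import cycle

    # Hypothetical A records registered for the same FQDN.
    a_records = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]
    rotation = cycle(a_records)

    for query in range(5):  # five successive client queries
        print("query", query, "->", next(rotation))
    # 10.0.0.11, 10.0.0.12, 10.0.0.13, then back to 10.0.0.11, 10.0.0.12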


Routing Information Protocol (RIP)  A small, lightweight protocol that allows for routing between small- to medium-sized networks. Limited to routes no more than 15 routers away.

routing lists  Routing lists are located on each cluster in the web layer and include all load-balanced COM+ cluster members. Routing lists can also be located on the COM+ router cluster. Initially, the routing list is created by the administrator and synchronized to all members of the cluster. Having the routing list on each member in the web layer ensures that there is no single point of failure. The routing list has no web functionality, however; it is meant exclusively for routing purposes.

RRAS  See Routing and Remote Access Services.

S

Safe Mode  A Windows 2000 Advanced Options menu item that loads the absolute minimum of services and drivers that are needed to start Windows 2000. The drivers that are loaded with Safe Mode include basic files and drivers for the mouse (unless a serial mouse is attached to the computer), monitor, keyboard, hard drive, standard video driver, and default system services. Safe Mode is considered a diagnostic mode. It does not include networking capabilities.

SAP  See Service Advertising Protocol.

scalability  The ability to scale your network to meet changing demands.

scale out (a cluster)  To add more nodes to a cluster.

scale up (a cluster)  To add more resources to each node in the cluster. These resources can include memory, multiple disks or RAID solutions, and/or high-speed network adapters.

scope  A range of TCP/IP addresses for a given logical subnet. It includes, at a minimum, a range of IP addresses for the subnet, the subnet mask, and the lease duration. Each subnet can have only one scope defined.

screened subnet  Microsoft's term for a perimeter network.

SCSI  See Small Computer Systems Interface.

SecureNAT  One of the clients supported by ISA Server. It requires that explicit protocol rules be defined before it can be used.

Secure Sockets Layer (SSL)  A protocol that is used to supply secure data communications through encryption and decryption. It uses RSA public-key encryption. See also SSL bridging.

security  The measures taken to secure a system against accidental or intentional loss. Measures are usually in the form of accountability procedures and use restriction, for example through NTFS permissions and share permissions.

security descriptor  An object stored in Active Directory that contains security identifiers (SIDs), discretionary access control lists (DACLs), or system access control lists (SACLs).

security identifier (SID)  A unique code that identifies a specific user or group to the Windows 2000 security system. SIDs contain a complete set of permissions for that user or group.

Security log  A log that tracks events related to Windows 2000 auditing. The Security log can be viewed through the Event Viewer utility.

security policy  Policies used to configure security for the computer. Security option policies apply to computers rather than to users or groups. These policies are set through Local Computer Policy.

security zone  DNA proposes the use of multiple security zones to provide adequate protection against internal and external threats to the network. DNA security is based on the use of three security zones: the public network, a DMZ, and a secure network where content data is created and stored.

Serial Line Internet Protocol (SLIP)  An older predecessor of the PPP protocol. SLIP is a connection protocol that gets clients connected to remote networks or the Internet.

server array  In ISA Server, a server array consists of multiple ISA servers that are configured as a single local cache.


server capacity  Includes a server's CPU, memory, and storage. A website can become unresponsive and slow if these resources are not sufficient to handle traffic during peak periods.

server cluster  A group of independent servers that are managed as a single system.

server publishing rules  See publishing rules.

service  A process dedicated to implementing a specific function for another process. Most Windows 2000 components are services used by user-level applications.

Service Advertising Protocol  A protocol used by Novell NetWare to enable file and print servers to advertise their availability to clients on a network.

service pack  An update to the Windows 2000 operating system that includes bug fixes and enhancements.

Service Ticket (ST)  A ticket that is issued by the ticket-granting service that allows a user to authenticate to a specific domain.

Session layer  The layer of the Open Systems Interconnection (OSI) model dedicated to maintaining a bi-directional communication connection between two computers. The Session layer uses the services of the Transport layer to provide this service.

session state  A temporary store for user session information.

share  A resource such as a folder or printer shared over a network.

shared device  One of the two implementation models used for clustering. In this model, any software running on a cluster can access any resource that is connected to the system.

Shared Nothing  The basis for the Windows 2000 Cluster service and one of the two implementation models for clustering. In the shared nothing model, each server or node in the cluster owns and manages its own resources.

share permissions  Permissions used to control access to shared folders. Share permissions can only be applied to folders, as opposed to NTFS permissions, which are more complex and can be applied to folders and files.


Shiva Password Authentication Protocol (SPAP)  A proprietary version of Password Authentication Protocol that is used by Shiva equipment.

SID  See security identifier.

signing  Used with public key cryptography to prove the origin of the data being transmitted. Signing is done through digital signatures.

Simple Mail Transfer Protocol (SMTP)  An Internet protocol for transferring mail between Internet hosts. SMTP is often used to upload mail directly from the client to an intermediate host, but can only be used to receive mail by computers constantly connected to the Internet.

Simple Network Management Protocol (SNMP)  An early set of protocols that were designed to facilitate the management of network equipment.

single-instance cluster  An instance is an installation of SQL Server that is separate from other installations of SQL Server on the same server. Single-instance clustering replaces the concept of active/passive clustering.

single point of failure (SPOF)  The place at which a device, system, program, or other computing entity has only one point of support and thus will completely shut down on failure.

SLIP  See Serial Line Internet Protocol.

Small Computer Systems Interface (SCSI)  A high-speed, parallel-bus interface that connects hard disk drives, CD-ROM drives, tape drives, and many other peripherals to a computer. SCSI is the mass-storage connection standard among all computers except IBM compatibles, which use SCSI or IDE.

SMTP  See Simple Mail Transfer Protocol.

snap-in  An administrative tool developed by Microsoft or a third-party vendor that can be added to the Microsoft Management Console (MMC) in Windows 2000.

SNMP  See Simple Network Management Protocol.

SOCKS  A circuit-layer protocol used in client/server communications. SOCKS is widely used by proxy servers and firewall software for controlling access between clients and hosts. SOCKS version 5 is the standard SOCKS version used with Microsoft Windows 2000. See also Winsock.
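As a sketch of the SMTP entry above, Python's standard smtplib module can upload a message from a client to a mail host. The host name and addresses below are placeholders and would need to point at a real relay to run.

    import smtplib

    message = "Subject: Test\r\n\r\nHello from smtplib."

    # Hypothetical relay host and addresses.
    with smtplib.SMTP("mail.example.com", 25) as server:
        server.sendmail("sender@example.com", ["recipient@example.com"], message)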


software-based NLB  Another term for dispatching.

software RAID  Software RAID systems are usually built into the operating system. They are less expensive than hardware RAID, but the savings come at the cost of performance. Software RAID solutions are only recommended in situations where the cost of moving to a hardware RAID system is prohibitive.

SPOF  See single point of failure.

SQL Error log  Records SQL Server 2000 information, warnings, and errors.

SQL Profiler  A SQL Server 2000 utility that is used to monitor server performance.

SQL Server 2000  SQL Server 2000 is a fully web-enabled database product. It provides extensive database programming capabilities built on World Wide Web standards and can be used to achieve maximum availability through enhanced failover clustering.

SRV  A type of resource record new to Windows 2000 DNS. The SRV resource record specifies which computers provide which kind of TCP/IP services on the network. Used to find out which servers are providing LDAP, Kerberos, and global catalog services, for example.

SSL  See Secure Sockets Layer.

SSL bridging  A process by which data is encrypted or decrypted as it passes through a firewall from the client computer to a host.

staging cluster  Included in the category of general or web clusters. Staging clusters are stand-alone or single-node clusters used as stagers, for testing content and applications before they are deployed to production clusters. In Application Center, stagers can also be used to deploy the applications and content after testing, to view the health and status of clusters, and to view performance data and event logs.

startup disks  Disks that can be used to access a system that will not boot because of a faulty startup sequence.


stateful  A system that stores client information is considered to be a stateful system. Normally, systems located on the back end of the website are stateful systems.

stateful packet filtering  Stateful packet filters interpret higher-level protocols and adjust their filtering rules depending on the type of protocol being used.

stateless  A system that does not store any client information is considered to be a stateless system. Normally, systems located on the front end of a website are stateless systems.

static packet filter  Static packet filters forward data packets between the internal and external networks for predetermined IP addresses and ports.

statistical mapping algorithm  An algorithm that filters incoming requests to determine how traffic is partitioned.

storage area network (SAN)  An architecture that uses hardware and software arrays on dedicated subnets using a variety of disk technologies. SANs normally use Fibre Channel technology for interconnection between devices.

stored procedures  A precompiled collection of Transact-SQL statements that can be executed on demand.

strategic planning  The ability to think and plan long-term, looking down the road several years and asking, "Where should this network be then?"

stripe set  A single volume created across multiple hard disk drives and accessed in parallel for the purpose of optimizing disk-access time.

striped volume  A dynamic disk volume that stores data in equal stripes across 2 to 32 dynamic drives. Typically, administrators use striped volumes when they want to combine the space of several physical drives into a single logical volume and increase disk performance.

striping  Dividing data into blocks and spreading these data blocks across several hard disks in an array.
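A short Python sketch of the striping entry above: data is cut into fixed-size blocks and dealt out round-robin across the drives in the array. The block size and two-disk layout are arbitrary.

    def stripe(data, disks, block_size=4):
        """Distribute fixed-size blocks across disks, round-robin."""
        stripes = [bytearray() for _ in range(disks)]
        for i in range(0, len(data), block_size):
            stripes[(i // block_size) % disks].extend(data[i:i + block_size])
        return stripes

    print(stripe(b"ABCDEFGHIJKLMNOP", 2))
    # [bytearray(b'ABCDIJKL'), bytearray(b'EFGHMNOP')]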


subnet mask  A number mathematically applied to IP addresses to determine which IP addresses are a part of the same subnetwork as the computer applying the subnet mask.

subnetting  The act of partitioning a single TCP/IP network into separate networks called subnets.

Success Audit event  An Event Viewer event that indicates the occurrence of an event that has been audited for success, such as a successful logon.

superscope  Multiple DHCP scopes combined into one administrative unit.
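The subnet mask entry above is easy to demonstrate with Python's ipaddress module: applying the same mask to two addresses shows whether they fall on the same subnetwork. The addresses are examples only.

    import ipaddress

    host_a = ipaddress.ip_interface("192.168.1.10/255.255.255.0")
    host_b = ipaddress.ip_interface("192.168.1.200/255.255.255.0")
    host_c = ipaddress.ip_interface("192.168.2.7/255.255.255.0")

    print(host_a.network)                    # 192.168.1.0/24
    print(host_a.network == host_b.network)  # True: same subnet
    print(host_a.network == host_c.network)  # False: different subnet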

switch  A device that connects multiple systems. Switches operate at the Data Link and Network layers of the OSI model.

switch flooding  Used by Network Load Balancing in unicast mode to allow the delivery of network traffic to all cluster nodes simultaneously.

symmetric multiprocessing (SMP)  A computer architecture in which multiple CPUs can be used to complete individual processes simultaneously. With SMP, any idle processor can be assigned any task.

SYN attack  The act of a hacker sending thousands of Synchronize requests to a server, flooding the server so badly that the network cannot send or receive packets.

synchronization  The replication of content data and configuration settings from the cluster controller to member clusters. New members added to the cluster are automatically synchronized.

Sysprep  See System Preparation Tool.

System log  A log that tracks events that relate to the Windows 2000 operating system. The System log can be viewed through the Event Viewer utility.

System Monitor  A Windows 2000 utility used to monitor real-time system activity or view data from a log file.

System Preparation Tool (Sysprep)  A Windows 2000 utility used to prepare a disk image for disk duplication.


Systems Management Server (SMS)  Systems Management Server is a change and configuration management tool for Windows-based desktop computers and servers. SMS can be used to deploy applications, software updates, and operating systems. It tracks all distributed Windows-based software and hardware assets, and helps troubleshoot and solve common problems with Windows-based systems.

Systems Network Architecture (SNA)  Developed by IBM in 1974 as a network protocol that could be used with IBM mainframes. SNA was enhanced so that it could also be used to connect peer-to-peer networks. Today's client/server work with SNA typically involves retrieving database data from mainframe systems to be processed by network servers.

system state  System-specific data that includes the Registry, the COM+ Class Registration database, and the system boot files.

System Tools  A Computer Management utility grouping that provides access to utilities for managing common system functions. The System Tools grouping includes the Event Viewer, System Information, Performance Logs and Alerts, Shared Folders, Device Manager, and Local Users and Groups utilities.

T

T1  A dedicated phone connection that supports data rates of 1.544 Mbps. A T1 line actually consists of 24 individual 64 Kbps channels.

T3  T3 is the same type of dedicated phone connection as a T1, except T3 supports data rates of about 43 Mbps. A T3 line consists of 672 individual 64 Kbps channels.

Task Manager  A Windows 2000 utility that can be used to start, end, or prioritize applications. The Task Manager shows the applications and processes that are currently running on the computer, as well as CPU and memory usage information.

Task Scheduler  A Windows 2000 utility used to schedule tasks to occur at specified intervals.


TB  See terabyte.

TCO  See total cost of operations.

TCP  See Transmission Control Protocol.

TCP/IP  See Transmission Control Protocol/Internet Protocol.

TCP/IP addressing  An identifier for a computer or device on a TCP/IP network; consists of a 32-bit numeric address written as four numbers separated by periods. The four numbers identify a particular network and a host on that network. The InterNIC Registration Service assigns Internet addresses from three classes: Class A, Class B, and Class C.

TCP/IP port  A logical port, used when a printer is attached to the network by installing a network card in the printer. Configuring a TCP/IP port requires the IP address of the network printer to connect to.

TCP Selective Acknowledgement (TCP SACK)  A valuable enhancement to TCP/IP. When TCP packets are sent across the wire, if one is missing or out of order, a retransmit is requested. Without SACK, if the packet is not retransmitted within a very brief time window, the packet and all subsequent packets must be retransmitted. TCP SACK allows for the retransmission of only those packets that are actually missing.

teaming  Accomplished through the installation of two or more network cards (all the same manufacturer and model) in your server. One of the teamed adapters is active, and the second is in standby mode. Microsoft recommends that teaming not be used for private communications in a clustered arrangement.

telnet  A terminal emulation program for TCP/IP that allows you to connect to a server and execute commands on it as though you were actually sitting at its console.

terabyte (TB)  A computer storage measurement that equals 1,024 gigabytes.

thin client  A client that holds very little responsibility for the processing involved in a client/server application. Browsers make great thin clients.


thread  A list of instructions running in a computer to perform a certain task. Each thread runs in the context of a process, which embodies the protected memory space and the environment of the threads. Multithreaded processes can perform more than one task at the same time.

three-tier client/server  A system running three separate processes on up to three computers. The computers can be of different platforms. The first tier, the client process, is used to communicate with the application. The middle tier is the component that does the processing. The third process is the database tier, where the databases are housed. In a three-tier client/server system, data is retrieved from or put back into databases. A classic example is a browser that connects to a web server in order to make an online purchase. The item being purchased is debited from the inventory database, and the information about the person ordering the item is recorded in yet another database.

throughput  The amount of data that is transferred from one place to another in a specified amount of time.

ticket-granting ticket (TGT)  A key component of Kerberos authentication. The TGT is a ticket that is issued by the Key Distribution Center for the purpose of obtaining a service ticket from the ticket-granting service.

topology  The physical layout and design of the network.

total cost of operations (TCO)  The total cost of performing a certain function on the network. For example, TCO would answer the question, "What is the cost, in dollars per thousand e-mails sent, to maintain an Exchange server?"

trace logs  A type of log used with the Performance Tools. Trace logs measure data continuously, in contrast to counter logs, which measure data at periodic intervals.

transaction replication  A type of replication in SQL Server 2000 that uses a Publisher, a Subscriber, and a Distributor to replicate data from one server to another.

Transact-SQL  A command language that is used to administer instances of SQL Server.


Transmission Control Protocol (TCP)  A Transport layer protocol that implements guaranteed packet delivery using the IP protocol.

Transmission Control Protocol/Internet Protocol (TCP/IP)  A suite of Internet protocols upon which the global Internet is based. TCP/IP is a general term that can refer either to the TCP and IP protocols used together or to the complete set of Internet protocols. TCP/IP is the default protocol for Windows 2000.

Transport layer  The Open Systems Interconnection (OSI) model layer responsible for the guaranteed serial delivery of packets between two computers over an internetwork. TCP is the Transport layer protocol in TCP/IP.

transport protocol  A service that delivers discrete packets of information between any two computers in a network. Higher-level, connection-oriented services are built on transport protocols.

TTL (time to live)  A timer value that is included with packets; indicates how long the packet should be held or used.

two-tier client/server  A system with a fat client (one that runs a lot of the application code) coupled to a server; for example, the Exchange 2000 Server system talking to an Outlook client.
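To tie the TCP and Transport layer entries above to working code, this minimal Python client opens a TCP connection, sends an HTTP request, and reads part of the reply. The target host is a public example site, and the snippet requires network access to run.

    import socket

    # Minimal TCP client: connect, send an HTTP HEAD request, read the reply.
    with socket.create_connection(("example.com", 80), timeout=5) as sock:
        sock.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
        print(sock.recv(200).decode("latin-1"))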

U

UNC  See Universal Naming Convention.

unicast  Packets that are sent from a source to a single destination are said to be sent in unicast.

Unicode  A 16-bit standard that represents characters as integers, capable of representing 65,536 unique characters. Because of this huge number of possible character values (compared to only 256 in ASCII), almost all the characters from all the languages in the world can be represented with a single character set.

Uniform Resource Locator (URL)  An Internet standard naming convention for identifying resources available via various TCP/IP application protocols. For example, http://www.microsoft.com is the URL for Microsoft's World Wide Web server site, and ftp://gateway.dec.com is a popular FTP site. A URL allows easy hypertext references to a particular resource from within a document or mail message.

Universal Naming Convention (UNC)  A multivendor, multiplatform convention for identifying shared resources on a network. UNC names follow the naming convention \\computername\sharename.

untrusted network  A private network that is outside your administrative control and your security perimeter.

upgrade  A method for installing Windows 2000 that preserves existing settings and preferences when converting to the newer operating system.

upgrade pack  Software in the form of a migration DLL (dynamic link library) used with applications that need to be upgraded to work with Windows 2000.

user principal name (UPN)  A user logon name coupled with the @ sign and the domain that the user is associated with in the forest. jim.smith@mycompany.com is an example of a UPN.

V

virtual IP address  The IP address that is used to communicate with a cluster.

virtual LAN (VLAN)  A group of computers that behave as though they were connected via the same LAN, even though they may be separated into different subnets. VLANs are created using software and managed through software interfaces, though routers can work with VLANs.

virtual memory  A kernel service that stores memory pages not currently in use on a mass-storage device to free the memory occupied for other uses. Virtual memory hides the memory-swapping process from applications and higher-level services.

virtual private network (VPN)  A private network that uses links across private or public networks (such as the Internet). When data is sent over the remote link, it is encapsulated, encrypted, and requires authentication services.


virtual server  A cluster resource group that contains a Network Name resource and an IP Address resource. A virtual server is not associated with a specific server and can be failed over like any other cluster resource group.

VPN  See virtual private network.

W

WAN  A geographically distributed network comprised of local area networks that are joined by the use of common carriers.

WAP  See Wireless Application Protocol.

Warning event  An Event Viewer event that indicates that you should be concerned with the event. The event may not be critical in nature, but it is significant and may be indicative of future errors.

Web Application Stress Tool (WAS)  A tool that can be used to stress test web servers.

Web-Based Enterprise Management (WBEM)  A set of standards that allow computers to be managed using web browsers.

web browser  An application that makes HTTP requests and formats the resultant HTML documents for the user. Most web browsers understand all standard Internet protocols.

Web Cache server  An ISA Server that listens for requests for content such as HTML pages, images, and files.

Web content zones  Security zones included in Internet Explorer that permit websites to be divided into trusted and untrusted areas.

Web Proxy Service  The service on ISA Server that accepts requests from Web Proxy clients.

web publishing rules  See publishing rules.

web server  Web servers are used to publish content onto the Internet. They support the server side of HTTP for the World Wide Web.


web tier  The presentation layer, or web tier, is the user interface to the underlying technologies of the websites. It is the first layer of user authentication to the applications on the system. In a web-based environment, the client can access the presentation layer through a Dynamic HTML browser or a standard HTML version 3.2 browser.

Windows Distributed interNet Applications (Windows DNA)  An application development model developed by Microsoft for creating highly available web solutions.

Windows DNA  See Windows Distributed interNet Applications.

Windows Internet Name Service (WINS)  A network service for Microsoft networks that provides Windows computers with the IP addresses for specified NetBIOS computer names, facilitating browsing and intercommunication over TCP/IP networks.

Windows Management Instrumentation (WMI)  Microsoft's implementation of the Web-Based Enterprise Management architecture. Can be used to track, monitor, and control computers and other network devices using a standard web browser.

Windows 2000 Backup and Recovery Tool  The Windows 2000 utility used to run the Backup Wizard and the Restore Wizard, and to create an Emergency Repair Disk (ERD).

Windows 2000 boot disk  A disk that can be used to boot to the Windows 2000 operating system in the event of a Windows 2000 boot failure.

Windows 2000 Setup disks  Floppy disks that can be used to recover from a system failure. They can also be used to start a Windows 2000 installation, the Recovery Console, and the Emergency Repair process.

WINS  See Windows Internet Name Service.

WINS proxy agent  A Windows 2000 component that relays broadcast NetBIOS name resolution requests in unicast mode across a router to a WINS server for name resolution services.

WINS resource  A resource that is used to deploy and manage WINS services in a cluster.

WINS server  The server that runs WINS and is used to resolve NetBIOS names to IP addresses.


Winsock  An abbreviation for Windows Sockets; an application programming interface (API) that was written so that developers could write TCP/IP interface code for Windows programs.

Wireless Application Protocol (WAP)  A free, unlicensed protocol for wireless communication that makes it possible to create advanced telecommunications services and to access Internet pages from a mobile telephone. WAP is a de facto industry standard supported by a large number of suppliers.

WMI Control  A Windows 2000 utility that provides an interface for monitoring and controlling system resources. WMI stands for Windows Management Instrumentation.

working set  The amount of physical RAM that is available to a process or activity.

write-back caching  A caching optimization wherein data written to the slow store is cached until the cache is full or until a subsequent write operation overwrites the cached data. Write-back caching can significantly reduce the write operations to a slow store, because many write operations are subsequently obviated by new information. Data in the write-back cache is also available for subsequent reads. If something happens to prevent the cache from writing data to the slow store, the cached data will be lost.

write-through caching  A caching optimization wherein data written to a slow store is kept in a cache for subsequent rereading. Unlike write-back caching, write-through caching immediately writes the data to the slow store and is therefore less optimal but more secure.
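The difference between the two caching entries above can be sketched with a pair of toy Python classes; the dictionary-backed store stands in for the slow store, and the class and method names are invented for the example.

    class WriteThroughCache:
        """Every write goes straight to the slow store: safer, slower."""
        def __init__(self, store):
            self.store, self.cache = store, {}
        def write(self, key, value):
            self.cache[key] = value
            self.store[key] = value            # store is always current

    class WriteBackCache:
        """Writes wait in the cache until flushed: faster, but dirty
        data is lost if the flush never happens."""
        def __init__(self, store):
            self.store, self.cache, self.dirty = store, {}, set()
        def write(self, key, value):
            self.cache[key] = value
            self.dirty.add(key)                # defer the slow write
        def flush(self):
            for key in self.dirty:
                self.store[key] = self.cache[key]
            self.dirty.clear()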

X

XML  See Extensible Markup Language.

Z

zones  DNS term for a group of records that share a namespace. A zone can contain a few records, a domain, or multiple domains, as long as the namespace for each host is common. See also DNS replication.
